# DevOps Training (Create VPC and subnet, NAT Gateway, Internet Gateway, EKS Cluster A-Z using Terraform)
###### tags: `Devops` `AWS` `Terraform`
I am using VS Code to write the code; you can use any editor of your choice.
Create a folder named Terraform where you will write your code.
Open that folder in VS Code and create a file named **provider.tf**.
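The provider block we will write below references an AWS CLI profile named `Terraform`. If you have not created that profile yet, a minimal sketch (assuming you already have an IAM access key pair for it):
```bash=
# Creates/updates the "Terraform" profile in ~/.aws/credentials;
# it prompts for the access key, secret key, default region, and output format.
aws configure --profile Terraform
```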
## Amazon EKS managed policies on GitHub
https://github.com/SummitRoute/aws_managed_policies/tree/master/policies
## GitHub link I am following:
https://github.com/antonputra/tutorials/tree/main/lessons/038/terraform
### Reference to understand the virtual router inside a VPC that routes between subnets in different Availability Zones in the same region (Khassi Router).

**The Architecture we are implementing:**

### Open provider.tf and write the code.
1: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc
2: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/subnet
```hcl=
provider "aws" {
  profile = "Terraform"
  region  = "ap-northeast-2"
}

# VPC
resource "aws_vpc" "main" {
  cidr_block       = "192.168.0.0/16"
  instance_tenancy = "default"

  # Enable DNS support and hostnames (EKS relies on these;
  # they appear as true in the plan output further below).
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name     = "main"
    Location = "Seoul"
  }
}

# Subnet under the VPC created above
resource "aws_subnet" "subnet1" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "192.168.1.0/24"

  tags = {
    Name = "subnet1"
  }
}
```

### Next, go to the folder location in your terminal, check the files, and run:
```bash=
# terraform fmt
```
The command above rewrites your Terraform files into the canonical format, so formatting problems are easy to spot.

### Initialize Terraform
```javascript=
# terraform init
```

### Check the plan for your VPC
This shows the details of what Terraform will create. If the plan looks correct, we can create the VPC on AWS.
```bash=
# terraform plan
```

### Apply the Terraform Script to create resources on AWS
```bash=
# terraform apply
```
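Besides the console, you can verify the result from the command line; a quick check, assuming the same profile and region as in provider.tf:
```bash=
# List the VPC we just created by its Name tag
aws ec2 describe-vpcs \
  --filters "Name=tag:Name,Values=main" \
  --profile Terraform --region ap-northeast-2
```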

### Now check on AWS whether your VPC and subnet were created:
#### VPC

#### Subnet

## Adding the Internet Gateway resource:
Add another resource to the provider.tf file. It is named `main` here to match the references to `aws_internet_gateway.main` in the route table and in the plan output below:
```hcl=
# Internet Gateway for the public subnets
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main"
  }
}
```

### Apply Terraform again and watch the command-line output.

### Check on the AWS console that your Internet Gateway was created.
Make sure everything is under the same VPC:
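A CLI spot check for the gateway (same profile/region assumptions as above):
```bash=
# The Internet Gateway should show an attachment to the main VPC
aws ec2 describe-internet-gateways \
  --filters "Name=tag:Name,Values=main" \
  --profile Terraform --region ap-northeast-2
```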

### Creating the subnets: one public and one private subnet in each of two Availability Zones
(Remove the earlier `subnet1` resource first; its 192.168.1.0/24 CIDR overlaps with `public_1`, and it does not appear in the plan output below.)
```hcl=
# Note: the role tags must be spelled kubernetes.io/role/... exactly,
# or EKS load balancer subnet discovery will not find the subnets.

# Public subnet 1
resource "aws_subnet" "public_1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "192.168.0.0/18"
  availability_zone       = "ap-northeast-2a"
  map_public_ip_on_launch = true

  tags = {
    Name                        = "public_1"
    "kubernetes.io/cluster/eks" = "shared"
    "kubernetes.io/role/elb"    = 1
  }
}

# Private subnet 1
resource "aws_subnet" "private_1" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "192.168.128.0/18"
  availability_zone = "ap-northeast-2a"

  tags = {
    Name                              = "private_1"
    "kubernetes.io/cluster/eks"       = "shared"
    "kubernetes.io/role/internal-elb" = 1
  }
}

# Public subnet 2
resource "aws_subnet" "public_2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "192.168.64.0/18"
  availability_zone       = "ap-northeast-2b"
  map_public_ip_on_launch = true

  tags = {
    Name                        = "public_2"
    "kubernetes.io/cluster/eks" = "shared"
    "kubernetes.io/role/elb"    = 1
  }
}

# Private subnet 2
resource "aws_subnet" "private_2" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "192.168.192.0/18"
  availability_zone = "ap-northeast-2b"

  tags = {
    Name                              = "private_2"
    "kubernetes.io/cluster/eks"       = "shared"
    "kubernetes.io/role/internal-elb" = 1
  }
}
```
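To sanity-check the CIDR layout (the /16 is split into four /18 quarters), you can list the subnets by their shared cluster tag; a sketch assuming the profile and region above:
```bash=
# Show CIDR and AZ for every subnet tagged for the "eks" cluster
aws ec2 describe-subnets \
  --filters "Name=tag:kubernetes.io/cluster/eks,Values=shared" \
  --query "Subnets[].[CidrBlock,AvailabilityZone]" --output table \
  --profile Terraform --region ap-northeast-2
```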

### Terraform output
```bash=
# terraform fmt
# terraform init
# terraform plan
# terraform apply
```


### AWS Console output

### Create the Elastic IPs (EIPs)
```hcl=
# Elastic IP address for NAT Gateway 1
resource "aws_eip" "nat1" {
  # Allocate only after the Internet Gateway exists
  depends_on = [aws_internet_gateway.main]
}

# Elastic IP address for NAT Gateway 2
resource "aws_eip" "nat2" {
  depends_on = [aws_internet_gateway.main]
}
```

### Create the NAT Gateways
```hcl=
# NAT Gateway 1 (lives in public subnet 1)
resource "aws_nat_gateway" "gw1" {
  allocation_id = aws_eip.nat1.id
  subnet_id     = aws_subnet.public_1.id

  tags = {
    Name = "NAT 1"
  }
}

# NAT Gateway 2 (lives in public subnet 2)
resource "aws_nat_gateway" "gw2" {
  allocation_id = aws_eip.nat2.id
  subnet_id     = aws_subnet.public_2.id

  tags = {
    Name = "NAT 2"
  }
}
```

### Destroy and run it all again. The output is:
**Destroy:**

**Apply Again:**

**The complete output is:**
```
PS C:\Users\sohail.anjum.AEWIN\Desktop\Terraform> terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_eip.nat1 will be created
+ resource "aws_eip" "nat1" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ carrier_ip = (known after apply)
+ customer_owned_ip = (known after apply)
+ domain = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ network_border_group = (known after apply)
+ network_interface = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ public_ipv4_pool = (known after apply)
+ tags_all = (known after apply)
+ vpc = (known after apply)
}
# aws_eip.nat2 will be created
+ resource "aws_eip" "nat2" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ carrier_ip = (known after apply)
+ customer_owned_ip = (known after apply)
+ domain = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ network_border_group = (known after apply)
+ network_interface = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ public_ipv4_pool = (known after apply)
+ tags_all = (known after apply)
+ vpc = (known after apply)
}
# aws_internet_gateway.main will be created
+ resource "aws_internet_gateway" "main" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "main"
}
+ tags_all = {
+ "Name" = "main"
}
+ vpc_id = (known after apply)
}
# aws_nat_gateway.gw1 will be created
+ resource "aws_nat_gateway" "gw1" {
+ allocation_id = (known after apply)
+ connectivity_type = "public"
+ id = (known after apply)
+ network_interface_id = (known after apply)
+ private_ip = (known after apply)
+ public_ip = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "NAT 1"
}
+ tags_all = {
+ "Name" = "NAT 1"
}
}
# aws_nat_gateway.gw2 will be created
+ resource "aws_nat_gateway" "gw2" {
+ allocation_id = (known after apply)
+ connectivity_type = "public"
+ id = (known after apply)
+ network_interface_id = (known after apply)
+ private_ip = (known after apply)
+ public_ip = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "NAT 2"
}
+ tags_all = {
+ "Name" = "NAT 2"
}
}
# aws_subnet.private_1 will be created
+ resource "aws_subnet" "private_1" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "ap-northeast-2a"
+ availability_zone_id = (known after apply)
+ cidr_block = "192.168.128.0/18"
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "private_1"
+ "kubenretes.io/role/internal-elb" = "1"
+ "kubernetes.io/cluster/eks" = "shared"
}
+ tags_all = {
+ "Name" = "private_1"
+ "kubenretes.io/role/internal-elb" = "1"
+ "kubernetes.io/cluster/eks" = "shared"
}
+ vpc_id = (known after apply)
}
# aws_subnet.private_2 will be created
+ resource "aws_subnet" "private_2" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "ap-northeast-2b"
+ availability_zone_id = (known after apply)
+ cidr_block = "192.168.192.0/18"
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "private_2"
+ "kubenretes.io/role/internal-elb" = "1"
+ "kubernetes.io/cluster/eks" = "shared"
}
+ tags_all = {
+ "Name" = "private_2"
+ "kubenretes.io/role/internal-elb" = "1"
+ "kubernetes.io/cluster/eks" = "shared"
}
+ vpc_id = (known after apply)
}
# aws_subnet.public_1 will be created
+ resource "aws_subnet" "public_1" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "ap-northeast-2a"
+ availability_zone_id = (known after apply)
+ cidr_block = "192.168.0.0/18"
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "public_1"
+ "kubenretes.io/role/elb" = "1"
+ "kubernetes.io/cluster/eks" = "shared"
}
+ tags_all = {
+ "Name" = "public_1"
+ "kubenretes.io/role/elb" = "1"
+ "kubernetes.io/cluster/eks" = "shared"
}
+ vpc_id = (known after apply)
}
# aws_subnet.public_2 will be created
+ resource "aws_subnet" "public_2" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "ap-northeast-2b"
+ availability_zone_id = (known after apply)
+ cidr_block = "192.168.64.0/18"
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "public_2"
+ "kubenretes.io/role/elb" = "1"
+ "kubernetes.io/cluster/eks" = "shared"
}
+ tags_all = {
+ "Name" = "public_2"
+ "kubenretes.io/role/elb" = "1"
+ "kubernetes.io/cluster/eks" = "shared"
}
+ vpc_id = (known after apply)
}
# aws_vpc.main will be created
+ resource "aws_vpc" "main" {
+ arn = (known after apply)
+ cidr_block = "192.168.0.0/16"
+ default_network_acl_id = (known after apply)
+ default_route_table_id = (known after apply)
+ default_security_group_id = (known after apply)
+ dhcp_options_id = (known after apply)
+ enable_classiclink = (known after apply)
+ enable_classiclink_dns_support = (known after apply)
+ enable_dns_hostnames = true
+ enable_dns_support = true
+ id = (known after apply)
+ instance_tenancy = "default"
+ ipv6_association_id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ main_route_table_id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Location" = "Seoul"
+ "Name" = "main"
}
+ tags_all = {
+ "Location" = "Seoul"
+ "Name" = "main"
}
}
Plan: 10 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_vpc.main: Creating...
aws_vpc.main: Still creating... [10s elapsed]
aws_vpc.main: Creation complete after 15s [id=vpc-051baee0f6da3a812]
aws_internet_gateway.main: Creating...
aws_subnet.public_2: Creating...
aws_subnet.public_1: Creating...
aws_subnet.private_2: Creating...
aws_subnet.private_1: Creating...
aws_subnet.private_2: Creation complete after 2s [id=subnet-04441ddd00cb87d5c]
aws_subnet.private_1: Creation complete after 2s [id=subnet-09e437eac9f5f3196]
aws_internet_gateway.main: Creation complete after 2s [id=igw-05394420f5f890d61]
aws_eip.nat1: Creating...
aws_eip.nat2: Creating...
aws_eip.nat2: Creation complete after 1s [id=eipalloc-0333e309649e492e8]
aws_eip.nat1: Creation complete after 1s [id=eipalloc-0b33187c89bfcac96]
aws_subnet.public_2: Still creating... [10s elapsed]
aws_subnet.public_1: Still creating... [10s elapsed]
aws_subnet.public_2: Creation complete after 12s [id=subnet-039bd052a1fb6786a]
aws_nat_gateway.gw2: Creating...
aws_subnet.public_1: Creation complete after 12s [id=subnet-0ed6c1efa6a8d14b1]
aws_nat_gateway.gw1: Creating...
aws_nat_gateway.gw2: Still creating... [10s elapsed]
aws_nat_gateway.gw1: Still creating... [10s elapsed]
aws_nat_gateway.gw2: Still creating... [20s elapsed]
aws_nat_gateway.gw1: Still creating... [20s elapsed]
aws_nat_gateway.gw2: Still creating... [30s elapsed]
aws_nat_gateway.gw1: Still creating... [30s elapsed]
aws_nat_gateway.gw2: Still creating... [40s elapsed]
aws_nat_gateway.gw1: Still creating... [40s elapsed]
aws_nat_gateway.gw2: Still creating... [50s elapsed]
aws_nat_gateway.gw1: Still creating... [50s elapsed]
aws_nat_gateway.gw2: Still creating... [1m0s elapsed]
aws_nat_gateway.gw1: Still creating... [1m0s elapsed]
aws_nat_gateway.gw2: Still creating... [1m10s elapsed]
aws_nat_gateway.gw1: Still creating... [1m10s elapsed]
aws_nat_gateway.gw2: Still creating... [1m20s elapsed]
aws_nat_gateway.gw1: Still creating... [1m20s elapsed]
aws_nat_gateway.gw2: Still creating... [1m30s elapsed]
aws_nat_gateway.gw1: Still creating... [1m30s elapsed]
aws_nat_gateway.gw1: Creation complete after 1m40s [id=nat-076654fc2af36e8a9]
aws_nat_gateway.gw2: Still creating... [1m40s elapsed]
aws_nat_gateway.gw2: Creation complete after 1m50s [id=nat-0903b427e7c022b8c]
Apply complete! Resources: 10 added, 0 changed, 0 destroyed.
```
### Create the route table resources:
Note that a route pointing at a NAT gateway uses the `nat_gateway_id` argument; `gateway_id` is for internet gateways.
```hcl=
# Public route table: default route via the Internet Gateway
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name = "public"
  }
}

# Private route table 1: default route via NAT Gateway 1
resource "aws_route_table" "private1" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.gw1.id
  }

  tags = {
    Name = "private1"
  }
}

# Private route table 2: default route via NAT Gateway 2
resource "aws_route_table" "private2" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.gw2.id
  }

  tags = {
    Name = "private2"
  }
}
```

### Create the route table associations:
```hcl=
resource "aws_route_table_association" "public1" {
  subnet_id      = aws_subnet.public_1.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public2" {
  subnet_id      = aws_subnet.public_2.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private1" {
  subnet_id      = aws_subnet.private_1.id
  route_table_id = aws_route_table.private1.id
}

resource "aws_route_table_association" "private2" {
  subnet_id = aws_subnet.private_2.id
  # Must point at private2 (not private1), otherwise traffic from
  # private_2 would cross Availability Zones through NAT Gateway 1.
  route_table_id = aws_route_table.private2.id
}
```
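After applying, you can confirm each table's routes and associations from the CLI (same profile/region assumptions as before):
```bash=
# Inspect the three route tables by their Name tags
aws ec2 describe-route-tables \
  --filters "Name=tag:Name,Values=public,private1,private2" \
  --profile Terraform --region ap-northeast-2
```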

### Routing table creation using Terraform

### Routing Table on AWS

## EKS Cluster Creation
```hcl=
# IAM role that the EKS control plane will assume
resource "aws_iam_role" "eks_cluster" {
  name = "eks-cluster"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "amazon_eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}

# Resource: aws_eks_cluster
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_cluster
resource "aws_eks_cluster" "eks" {
  # Name of the cluster.
  name = "eks"

  # The Amazon Resource Name (ARN) of the IAM role that provides permissions for
  # the Kubernetes control plane to make calls to AWS API operations on your behalf.
  role_arn = aws_iam_role.eks_cluster.arn

  # Desired Kubernetes master version.
  version = "1.18"

  vpc_config {
    # Indicates whether or not the Amazon EKS private API server endpoint is enabled
    endpoint_private_access = false

    # Indicates whether or not the Amazon EKS public API server endpoint is enabled
    endpoint_public_access = true

    # Must be in at least two different availability zones
    subnet_ids = [
      aws_subnet.public_1.id,
      aws_subnet.public_2.id,
      aws_subnet.private_1.id,
      aws_subnet.private_2.id
    ]
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
  # Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
  depends_on = [
    aws_iam_role_policy_attachment.amazon_eks_cluster_policy
  ]
}
```
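Cluster creation typically takes 10-15 minutes; once terraform apply finishes you can check the status (assuming the same profile and region):
```bash=
# Should print "ACTIVE" once the control plane is ready
aws eks describe-cluster --name eks \
  --query cluster.status --output text \
  --profile Terraform --region ap-northeast-2
```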

## EKS Node Group Creation:
```hcl=
# Resource: aws_iam_role
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role
# Create the IAM role for the EKS Node Group
resource "aws_iam_role" "nodes_general" {
  # The name of the role
  name = "eks-node-group-general"

  # The policy that grants an entity permission to assume the role.
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

# Resource: aws_iam_role_policy_attachment
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment
resource "aws_iam_role_policy_attachment" "amazon_eks_worker_node_policy_general" {
  # The ARN of the policy you want to apply.
  # https://github.com/SummitRoute/aws_managed_policies/blob/master/policies/AmazonEKSWorkerNodePolicy
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"

  # The role the policy should be applied to
  role = aws_iam_role.nodes_general.name
}

resource "aws_iam_role_policy_attachment" "amazon_eks_cni_policy_general" {
  # The ARN of the policy you want to apply.
  # https://github.com/SummitRoute/aws_managed_policies/blob/master/policies/AmazonEKS_CNI_Policy
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"

  # The role the policy should be applied to
  role = aws_iam_role.nodes_general.name
}

resource "aws_iam_role_policy_attachment" "amazon_ec2_container_registry_read_only" {
  # The ARN of the policy you want to apply.
  # https://github.com/SummitRoute/aws_managed_policies/blob/master/policies/AmazonEC2ContainerRegistryReadOnly
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"

  # The role the policy should be applied to
  role = aws_iam_role.nodes_general.name
}

# Resource: aws_eks_node_group
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group
resource "aws_eks_node_group" "nodes_general" {
  # Name of the EKS Cluster.
  cluster_name = aws_eks_cluster.eks.name

  # Name of the EKS Node Group.
  node_group_name = "nodes-general"

  # Amazon Resource Name (ARN) of the IAM Role that provides permissions for the EKS Node Group.
  node_role_arn = aws_iam_role.nodes_general.arn

  # Identifiers of EC2 Subnets to associate with the EKS Node Group.
  # These subnets must have the following resource tag: kubernetes.io/cluster/CLUSTER_NAME
  # (where CLUSTER_NAME is replaced with the name of the EKS Cluster).
  subnet_ids = [
    aws_subnet.private_1.id,
    aws_subnet.private_2.id
  ]

  # Configuration block with scaling settings
  scaling_config {
    # Desired number of worker nodes.
    desired_size = 1
    # Maximum number of worker nodes.
    max_size = 1
    # Minimum number of worker nodes.
    min_size = 1
  }

  # Type of Amazon Machine Image (AMI) associated with the EKS Node Group.
  # Valid values: AL2_x86_64, AL2_x86_64_GPU, AL2_ARM_64
  ami_type = "AL2_x86_64"

  # Type of capacity associated with the EKS Node Group.
  # Valid values: ON_DEMAND, SPOT
  capacity_type = "ON_DEMAND"

  # Disk size in GiB for worker nodes
  disk_size = 20

  # Force version update if existing pods are unable to be drained due to a pod disruption budget issue.
  force_update_version = false

  # List of instance types associated with the EKS Node Group
  instance_types = ["t3.small"]

  labels = {
    role = "nodes-general"
  }

  # Kubernetes version
  version = "1.18"

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.amazon_eks_worker_node_policy_general,
    aws_iam_role_policy_attachment.amazon_eks_cni_policy_general,
    aws_iam_role_policy_attachment.amazon_ec2_container_registry_read_only,
  ]
}
```
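And similarly for the node group once it is applied:
```bash=
# Should print "ACTIVE" once the worker nodes have joined
aws eks describe-nodegroup --cluster-name eks \
  --nodegroup-name nodes-general \
  --query nodegroup.status --output text \
  --profile Terraform --region ap-northeast-2
```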


## Creating the EKS Cluster and Nodes using Terraform


## AWS Console output creating K8s Cluster

## Connect to the Kubernetes cluster:
### Check the users we have

### Allow the Terraform user to access the k8s cluster

### Install kubectl
#### Please follow the version guide:
https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
#### Open PowerShell
The first command downloads the kubectl binary itself; the second fetches its SHA-256 checksum so you can verify the download.
```bash=
# curl -o kubectl.exe https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/windows/amd64/kubectl.exe
# curl -o kubectl.exe.sha256 https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/windows/amd64/kubectl.exe.sha256
```
#### Create folder:

#### Copy the downloaded file into the created bin folder:

#### Add the Environment variables:

## Quit PowerShell, open it again, and check the kubectl version and get svc:
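For example:
```bash=
# Client version should match the 1.18 binary we downloaded
kubectl version --short --client
# The default "kubernetes" ClusterIP service proves the API is reachable
kubectl get svc
```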

## Create the Nginx Deployment and Services:
```yaml=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: internal-nginx-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-internal: '0.0.0.0/0'
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: external-nginx-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
```

### Apply the nginx app.yaml
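Assuming you saved the manifest above as app.yaml:
```bash=
kubectl apply -f app.yaml
# The two LoadBalancer services will show their NLB DNS names once provisioned
kubectl get deployment,svc
```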

### Console output for Nginx

### Open the Nginx on Browser:

# Next: Note (EKS doesn't support IAM role users, so we have to create a user.)
## Update the kubeconfig
```bash=
# aws eks --region ap-northeast-2 update-kubeconfig --name eks --profile Terraform
```

## Create the ClusterRole and ClusterRoleBinding:
```yaml=
# This role will allow users to get, list, and watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: reader
rules:
  - apiGroups: ["*"]
    resources: ["deployments", "configmaps", "pods", "secrets", "services"]
    verbs: ["get", "list", "watch"]
---
# Bind the above cluster role to the "reader" group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: reader
subjects:
  - kind: Group
    name: reader
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: reader
  apiGroup: rbac.authorization.k8s.io
```

## Apply the YAML file for the Role Binding.
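Assuming you saved the manifest above as rbac.yaml; the impersonation check below uses a placeholder user name (jane) just to exercise the reader group:
```bash=
kubectl apply -f rbac.yaml
# As cluster admin, impersonate a member of the "reader" group:
kubectl auth can-i list pods --as jane --as-group reader    # yes
kubectl auth can-i create pods --as jane --as-group reader  # no
```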

## Create an AWS policy to allow users to access the EKS cluster
-> IAM -> Policies -> Create Policy:

## Create the policy and select JSON

## Update the JSON code with:
```json=
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeNodegroup",
        "eks:ListNodegroups",
        "eks:DescribeCluster",
        "eks:ListClusters",
        "eks:AccessKubernetesApi",
        "ssm:GetParameter",
        "eks:ListUpdates",
        "eks:ListFargateProfiles"
      ],
      "Resource": "*"
    }
  ]
}
```
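If you prefer the CLI to the console for this step, the same policy can be created from a file; a sketch where eks-developer-policy.json is a hypothetical file holding the JSON above:
```bash=
aws iam create-policy \
  --policy-name AmazonEKSDeveloperPolicy \
  --policy-document file://eks-developer-policy.json \
  --profile Terraform
```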
### Aws Console:
If you have any error in the code, you can see it in the highlighted part: Errors: 0

### Click Next: Add Tags (skip), go to Review Policy, name it AmazonEKSDeveloperPolicy, and create the policy:

### The Policy is Created:

### Now create a group to attach the policy to:
-> User Groups -> Create New Group -> attach the Policy -> add the Group name -> click Create:
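The equivalent CLI sketch (replace `<account-id>` with your own AWS account ID; eks-developer is the group name used below):
```bash=
aws iam create-group --group-name eks-developer --profile Terraform
aws iam attach-group-policy \
  --group-name eks-developer \
  --policy-arn arn:aws:iam::<account-id>:policy/AmazonEKSDeveloperPolicy \
  --profile Terraform
```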

### Group is Created:

### Now create the user to grant those permissions to access the cluster:
-> Users -> Create User -> add username -> select programmatic and console access -> Next: Permissions
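Or from the CLI (a sketch; developer is the user name this tutorial uses for the new profile):
```bash=
aws iam create-user --user-name developer --profile Terraform
aws iam add-user-to-group --user-name developer \
  --group-name eks-developer --profile Terraform
# Programmatic access: this returns the access key pair for the new user
aws iam create-access-key --user-name developer --profile Terraform
```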

### Select eks-developer and Next

### Review and Create user

### User is created; download the credentials .csv file:

### New profile configuration using Terminal
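A sketch of the profile setup, pasting the keys from the downloaded .csv:
```bash=
aws configure --profile developer
```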

### Copy the userarn:

### Edit the ConfigMap using kubectl for the new user:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
```bash=
# curl -o aws-auth-cm.yaml https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-10-29/aws-auth-cm.yaml
```
### Open the downloaded file and edit it in VS Code
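Alongside the existing mapRoles entry, you add a mapUsers entry whose groups list matches the reader group bound earlier. A sketch of that entry (`<account-id>` is a placeholder; developer is the user created above):
```bash=
# The mapUsers entry to add under the ConfigMap's "data" section
# (shown as a heredoc for reference; make the actual edit in VS Code):
cat <<'EOF'
mapUsers: |
  - userarn: arn:aws:iam::<account-id>:user/developer
    username: developer
    groups:
      - reader
EOF
```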


### Apply it with kubectl
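For example:
```bash=
kubectl apply -f aws-auth-cm.yaml
# Confirm the mapping landed
kubectl describe configmap aws-auth -n kube-system
```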

### Update the kubeconfig with the developer profile:
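The command is the same as before, just with the new profile:
```bash=
aws eks --region ap-northeast-2 update-kubeconfig --name eks --profile developer
```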

### To verify that we can use the developer profile, run:
```bash=
# kubectl config view --minify
```

### Verify the permissions
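The reader role grants get/list/watch only, so a quick check with kubectl should allow reads and deny writes:
```bash=
kubectl get pods
kubectl auth can-i get pods     # yes
kubectl auth can-i create pods  # no
```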