# IRSA in EKS within same and across AWS Accounts

This is a gist of the examples also covered in the blog [IAM Roles for Service Accounts (IRSA) in AWS EKS within and cross AWS Accounts](https://platformwale.blog/2023/08/02/iam-roles-for-service-accounts-irsa-in-aws-eks-within-and-cross-aws-accounts/).

The prerequisite for this gist is an EKS Cluster created as explained in my earlier blog [Create Amazon EKS Cluster within its VPC using Terraform](https://platformwale.blog/2023/07/15/create-amazon-eks-cluster-within-its-vpc-using-terraform/), OR you can use this [github repository](https://github.com/piyushjajoo/my-eks-tf/blob/master/README.md).

## Running Example for IRSA within same account

This assumes you have the EKS Cluster running and your AWS CLI is configured to talk to the AWS Account where the EKS Cluster is running. If not, please follow our earlier blog on [How to create an EKS Cluster using Terraform](https://platformwale.blog/2023/07/15/create-amazon-eks-cluster-within-its-vpc-using-terraform/) to get the EKS Cluster up and running, OR use this [README](https://github.com/piyushjajoo/my-eks-tf/blob/master/README.md) directly to deploy the EKS Cluster.

- Retrieve the kubeconfig and configure your terminal to talk to the AWS EKS Cluster as follows; this updates the current kubeconfig context to point to the cluster -

```bash
export EKS_CLUSTER_NAME=<your eks cluster name>
export EKS_AWS_REGION=<aws region where you created eks cluster>

aws eks update-kubeconfig --region ${EKS_AWS_REGION} --name ${EKS_CLUSTER_NAME}

# validate kubecontext as below, should point to your cluster
kubectl config current-context
```

- Create a namespace `irsa-test` and a service account named `irsa-test` in that namespace as follows -

```bash
# create namespace
kubectl create namespace irsa-test

# create serviceaccount
kubectl create serviceaccount --namespace irsa-test irsa-test
```

- Retrieve the OIDC issuer ID from the EKS Cluster -

```bash
export EKS_CLUSTER_NAME=<your eks cluster name>
export EKS_AWS_REGION=<aws region where you created eks cluster>

export OIDC_ISSUER_ID=$(aws eks describe-cluster --name ${EKS_CLUSTER_NAME} --region ${EKS_AWS_REGION} --query "cluster.identity.oidc.issuer" | awk -F'/' '{print $NF}' | tr -d '"')
```

- Create an IAM Role with the AWS managed `AmazonS3FullAccess` policy attached and a `TrustRelationship` for the ServiceAccount named `irsa-test` in namespace `irsa-test` as follows -

```bash
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
export EKS_AWS_REGION="<replace with aws region where you created eks cluster>"
export EKS_OIDC_ID=$(echo $OIDC_ISSUER_ID)
export NAMESPACE="irsa-test"
export SERVICE_ACCOUNT_NAME="irsa-test"

cat > trust.json << EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/oidc.eks.${EKS_AWS_REGION}.amazonaws.com/id/${EKS_OIDC_ID}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.${EKS_AWS_REGION}.amazonaws.com/id/${EKS_OIDC_ID}:sub": "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
                }
            }
        }
    ]
}
EOF

# create IAM role and attach trust policy
aws iam create-role --role-name irsa-test --assume-role-policy-document file://trust.json

# remove trust.json file
rm trust.json

# attach AmazonS3FullAccess Permissions policy to the iam role
aws iam attach-role-policy --role-name irsa-test --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```
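The trust policy above assumes the cluster's OIDC issuer has already been registered as an IAM OIDC identity provider in this account (the Terraform setup linked above may already create it). A minimal sanity check, using the `OIDC_ISSUER_ID` exported earlier -

```bash
# list the IAM OIDC providers in the account and look for the cluster's issuer id;
# if nothing is printed, the cluster's OIDC provider is not registered yet and
# the trust policy above will not work until it is created
aws iam list-open-id-connect-providers | grep ${OIDC_ISSUER_ID}
```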
- Annotate the ServiceAccount `irsa-test` in namespace `irsa-test` with the IAM Role as follows -

```bash
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
export NAMESPACE="irsa-test"
export SERVICE_ACCOUNT_NAME="irsa-test"

# annotate service account
kubectl annotate serviceaccount --namespace ${NAMESPACE} ${SERVICE_ACCOUNT_NAME} eks.amazonaws.com/role-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:role/irsa-test
```

- Deploy a pod using the `irsa-test` service account in the `irsa-test` namespace as follows -

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: irsa-test
  namespace: irsa-test
  labels:
    app: aws-cli
spec:
  selector:
    matchLabels:
      app: aws-cli
  template:
    metadata:
      labels:
        app: aws-cli
    spec:
      serviceAccountName: irsa-test
      containers:
      - name: aws-cli
        image: amazon/aws-cli
        command: [ "/bin/sh", "-c", "--" ]
        args: [ "while true; do sleep 39000; done;" ]
EOF
```

- Make sure the pod is running, that it uses the `irsa-test` ServiceAccount, and that the `irsa-test` ServiceAccount is annotated with the IAM Role you created above -

```bash
# check pod is running (pod names carry the Deployment's hash suffix, so select by label)
kubectl get po -n irsa-test -l app=aws-cli

# check the deployment is using the irsa-test serviceaccount
kubectl get deploy -n irsa-test irsa-test -o jsonpath='{.spec.template.spec.serviceAccountName}'

# check service account is annotated with the IAM Role
kubectl get sa -n irsa-test irsa-test -o jsonpath='{.metadata.annotations}'
```

- Exec into the pod deployed above, create an S3 bucket, and validate that the bucket is created successfully as follows; this proves that your pod is configured for IRSA -

```bash
# exec into the pod
export POD_NAME=$(kubectl get po -n irsa-test | grep irsa-test | awk -F ' ' '{print $1}')
kubectl exec -it -n irsa-test ${POD_NAME} -- bash

# run following commands inside the pod
export BUCKET_NAME="irsa-test-sample-$(date +%s)"

# create s3 bucket
aws s3api create-bucket --bucket ${BUCKET_NAME} --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2

# validate s3 bucket is created, there shouldn't be any error message on stdout
aws s3api head-bucket --bucket ${BUCKET_NAME} --region us-west-2

# delete s3 bucket, there shouldn't be any errors on stdout
aws s3 rm s3://${BUCKET_NAME} --region us-west-2 --recursive
aws s3api delete-bucket --bucket ${BUCKET_NAME} --region us-west-2

# validate s3 bucket is deleted, you should see a 404 error message on stdout
aws s3api head-bucket --bucket ${BUCKET_NAME} --region us-west-2
```

## Running Example Cross Account IRSA

This assumes you have the setup from the **Running Example for IRSA within same account** section above. If not, please read and follow that section before proceeding with the examples below -

- Make sure your AWS CLI is now configured to talk to AWS Account2, the account where you want the pod running in AWS Account1's EKS Cluster to create resources (a quick check is shown below).
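A minimal sketch to confirm which account the CLI is currently targeting, assuming you switched your default credentials or profile to Account2 -

```bash
# should print Account2's 12-digit account number; if it still shows Account1,
# switch your AWS CLI credentials/profile before proceeding
aws sts get-caller-identity --query Account --output text
```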
- Create an IAM Role with `AmazonS3FullAccess` permissions in `Account2` and a `TrustRelationship` that allows this IAM Role to be assumed by the `irsa-test` IAM Role created earlier in `Account1` -

```bash
export AWS_ACCOUNT1_ID="<replace with AWS Account1 number>"

cat > trust.json << EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::${AWS_ACCOUNT1_ID}:role/irsa-test"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF

# create IAM role and attach trust policy
aws iam create-role --role-name irsa-test --assume-role-policy-document file://trust.json

# remove trust.json file
rm trust.json

# attach AmazonS3FullAccess Permissions policy to the iam role
aws iam attach-role-policy --role-name irsa-test --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```

- Login to AWS Account1, go to the IAM Role `irsa-test`, and create an `inline` policy that allows this role to assume the `irsa-test` role in Account2, as below (see the CLI sketch at the end of this section for an alternative to the console) -

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<ACC2_NUMBER>:role/irsa-test"
        }
    ]
}
```

Replace `ACC2_NUMBER` with AWS Account2's number.

- Make sure your kubeconfig still points to the EKS Cluster created earlier and the `irsa-test` pod is still running. Exit the interactive terminal of the `irsa-test` pod if it's still active from the earlier session.

- Exec into the `irsa-test` pod, assume the IAM Role in Account2, retrieve the credentials, and configure the AWS CLI to now talk to Account2. Then create an S3 bucket and validate that it gets created in Account2 as follows -

```bash
# exec into the irsa-test pod
export POD_NAME=$(kubectl get po -n irsa-test | grep irsa-test | awk -F ' ' '{print $1}')
kubectl exec -it -n irsa-test ${POD_NAME} -- bash

# run following commands inside the pod

# Account2's number
export AWS_ACCOUNT2_NUMBER=<replace with Account2 number>

# retrieve Account2's credentials and inspect the output
aws sts assume-role --role-arn "arn:aws:iam::${AWS_ACCOUNT2_NUMBER}:role/irsa-test" --role-session-name "create-bucket-session"

# alternatively, capture the output and export AccessKeyId, SecretAccessKey and SessionToken as environment variables
ASSUME_ROLE_OUTPUT=$(aws sts assume-role --role-arn "arn:aws:iam::${AWS_ACCOUNT2_NUMBER}:role/irsa-test" --role-session-name "create-bucket-session")
export AWS_ACCESS_KEY_ID=$(echo $ASSUME_ROLE_OUTPUT | grep -o '"AccessKeyId": "[^"]*"' | cut -d'"' -f4)
export AWS_SECRET_ACCESS_KEY=$(echo $ASSUME_ROLE_OUTPUT | grep -o '"SecretAccessKey": "[^"]*"' | cut -d'"' -f4)
export AWS_SESSION_TOKEN=$(echo $ASSUME_ROLE_OUTPUT | grep -o '"SessionToken": "[^"]*"' | cut -d'"' -f4)

# set bucket name
BUCKET_NAME="cross-irsa-test-$(date +%s)"

# make sure you are now operating as AWS Account2
aws sts get-caller-identity --query Account --output text

# create s3 bucket
aws s3api create-bucket --bucket ${BUCKET_NAME} --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2

# validate s3 bucket is created, there shouldn't be any error message on stdout
aws s3api head-bucket --bucket ${BUCKET_NAME} --region us-west-2

# delete s3 bucket, there shouldn't be any errors on stdout
aws s3 rm s3://${BUCKET_NAME} --region us-west-2 --recursive
aws s3api delete-bucket --bucket ${BUCKET_NAME} --region us-west-2

# validate s3 bucket is deleted, you should see a 404 error message on stdout
aws s3api head-bucket --bucket ${BUCKET_NAME} --region us-west-2
```

This proves that a pod running inside an EKS Cluster in Account1 can now talk to AWS services in Account2.
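If you prefer the CLI over the console for the inline policy step above, here's a minimal sketch. It must be run with credentials for Account1, and the policy name `assume-account2-irsa-test` is just an illustrative choice -

```bash
export AWS_ACCOUNT2_NUMBER="<replace with Account2 number>"

cat > assume-account2.json << EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::${AWS_ACCOUNT2_NUMBER}:role/irsa-test"
        }
    ]
}
EOF

# attach the inline policy to Account1's irsa-test role
# (the policy name "assume-account2-irsa-test" is arbitrary)
aws iam put-role-policy --role-name irsa-test --policy-name assume-account2-irsa-test --policy-document file://assume-account2.json

# remove the temporary policy file
rm assume-account2.json
```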