CAPA - AWS IAM Configuration
====
# Configuration
The AWS IAM configuration controls the creation of resources used by Kubernetes clusters and the Kubernetes Cluster API Provider AWS (CAPA).
The AWS IAM role requires a specific name suffix, per the Cluster API Provider AWS (CAPA) documentation:
> Reference: https://cluster-api-aws.sigs.k8s.io/crd/index.html

The default name suffix is `.cluster-api-provider-aws.sigs.k8s.io`.
##### Example Role Name
```
konvoy.cluster-api-provider-aws.sigs.k8s.io
```
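A mis-named role is silently ignored by CAPA, so it can be worth checking the suffix programmatically before creating the role. A minimal sketch (the function name is illustrative, not part of CAPA):

```python
REQUIRED_SUFFIX = ".cluster-api-provider-aws.sigs.k8s.io"

def is_valid_capa_role_name(name: str) -> bool:
    # CAPA only recognizes IAM roles whose names end with the expected suffix.
    return name.endswith(REQUIRED_SUFFIX)

print(is_valid_capa_role_name("konvoy.cluster-api-provider-aws.sigs.k8s.io"))  # True
print(is_valid_capa_role_name("konvoy-role"))                                  # False
```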
## Policies
### Prerequisites
The following IAM policies are required to set up restrictive IAM permissions:
```
control-plane.cluster-api-provider-aws.sigs.k8s.io
controller.cluster-api-provider-aws.sigs.k8s.io
nodes.cluster-api-provider-aws.sigs.k8s.io
```
### Least Privilege Policies
These policies are the minimum set of permissions required to run a Konvoy cluster in AWS with existing infrastructure:

Permissions used by the Kubernetes cloud controller manager:
```
control-plane.cluster-api-provider-aws.sigs.k8s.io
```
Permissions used by the Cluster API Provider AWS controller on control plane nodes:
```
controller.cluster-api-provider-aws.sigs.k8s.io
```
Permissions used by the Cluster API Provider AWS controller on worker nodes:
```
nodes.cluster-api-provider-aws.sigs.k8s.io
```
##### control-plane.cluster-api-provider-aws.sigs.k8s.io
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeImages",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Effect": "Allow"
    },
    {
      "Action": [
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": [
        "arn:aws:elasticloadbalancing:${REGION}:${ACCOUNT}:targetgroup/*/*",
        "arn:aws:elasticloadbalancing:${REGION}:${ACCOUNT}:loadbalancer/*",
        "arn:aws:elasticloadbalancing:${REGION}:${ACCOUNT}:listener/app/*/*/*",
        "arn:aws:elasticloadbalancing:${REGION}:${ACCOUNT}:listener/net/*/*/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:vpc/${VPC-ID}",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:security-group/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:volume/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:instance/*"
      ],
      "Effect": "Allow"
    }
  ]
}
```
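The `${REGION}`, `${ACCOUNT}`, `${VPC-ID}`, and `${CLUSTER_NAME}` tokens in these policy documents are placeholders you must fill in before creating the policies. A minimal substitution sketch (plain `str.replace` is used rather than `string.Template` because `${VPC-ID}` contains a hyphen, which `string.Template` does not accept as an identifier):

```python
import json

def render_policy(template: str, values: dict) -> dict:
    """Replace ${NAME} placeholders in a policy template and parse the result."""
    for key, value in values.items():
        template = template.replace("${%s}" % key, value)
    return json.loads(template)

policy = render_policy(
    '{"Resource": "arn:aws:ec2:${REGION}:${ACCOUNT}:vpc/${VPC-ID}"}',
    {"REGION": "us-west-2", "ACCOUNT": "123456789012", "VPC-ID": "vpc-0abc"},
)
print(policy["Resource"])  # arn:aws:ec2:us-west-2:123456789012:vpc/vpc-0abc
```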
##### controller.cluster-api-provider-aws.sigs.k8s.io
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeImages",
        "ec2:DescribeInstances",
        "ec2:DescribeAddresses",
        "ec2:DescribeInternetGateways",
        "ec2:DescribeNatGateways",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeTags"
      ],
      "Resource": "*",
      "Effect": "Allow"
    },
    {
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:${REGION}::image/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:subnet/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:volume/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:instance/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:security-group/*",
        "arn:aws:secretsmanager:${REGION}:${ACCOUNT}:secret:*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:network-interface/*",
        "arn:aws:elasticloadbalancing:${REGION}:${ACCOUNT}:loadbalancer/*"
      ],
      "Effect": "Allow"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:ModifyNetworkInterfaceAttribute",
      "Resource": [
        "arn:aws:ec2:${REGION}:${ACCOUNT}:network-interface/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:security-group/*"
      ]
    },
    {
      "Action": "ec2:TerminateInstances",
      "Resource": [
        "arn:aws:ec2:${REGION}::image/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:subnet/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:volume/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:instance/*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:security-group/*",
        "arn:aws:secretsmanager:${REGION}:${ACCOUNT}:secret:*",
        "arn:aws:ec2:${REGION}:${ACCOUNT}:network-interface/*",
        "arn:aws:elasticloadbalancing:${REGION}:${ACCOUNT}:loadbalancer/*"
      ],
      "Effect": "Allow",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/sigs.k8s.io/cluster-api-provider-aws/cluster/${CLUSTER_NAME}": "owned"
        }
      }
    },
    {
      "Action": "ec2:CreateTags",
      "Resource": "arn:aws:ec2:${REGION}:${ACCOUNT}:instance/*",
      "Effect": "Allow",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/sigs.k8s.io/cluster-api-provider-aws/cluster/${CLUSTER_NAME}": "owned"
        }
      }
    },
    {
      "Action": [
        "secretsmanager:CreateSecret",
        "secretsmanager:TagResource"
      ],
      "Resource": [
        "arn:aws:secretsmanager:${REGION}:${ACCOUNT}:secret:*",
        "arn:*:secretsmanager:${REGION}:${ACCOUNT}:secret:aws.cluster.x-k8s.io/*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": "secretsmanager:DeleteSecret",
      "Resource": [
        "arn:aws:secretsmanager:${REGION}:${ACCOUNT}:secret:*",
        "arn:*:secretsmanager:${REGION}:${ACCOUNT}:secret:aws.cluster.x-k8s.io/*"
      ],
      "Effect": "Allow",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/sigs.k8s.io/cluster-api-provider-aws/cluster/${CLUSTER_NAME}": "owned"
        }
      }
    },
    {
      "Action": [
        "iam:PassRole"
      ],
      "Resource": [
        "arn:*:iam::${ACCOUNT}:role/*.cluster-api-provider-aws.sigs.k8s.io"
      ],
      "Effect": "Allow"
    }
  ]
}
```
##### nodes.cluster-api-provider-aws.sigs.k8s.io
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*",
      "Effect": "Allow"
    },
    {
      "Action": [
        "secretsmanager:DeleteSecret",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": [
        "arn:aws:secretsmanager:${REGION}:${ACCOUNT}:secret:*",
        "arn:*:secretsmanager:${REGION}:${ACCOUNT}:secret:aws.cluster.x-k8s.io/*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "ssm:UpdateInstanceInformation",
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel",
        "s3:GetEncryptionConfiguration"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    }
  ]
}
```
### Roles / Instance Profiles
`control-plane.cluster-api-provider-aws.sigs.k8s.io`
- with all three policies attached
`nodes.cluster-api-provider-aws.sigs.k8s.io`
- with only the `nodes.cluster-api-provider-aws.sigs.k8s.io` policy attached
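When scripting role creation, the attachment matrix above can be captured in a small lookup table. A sketch (variable names are illustrative):

```python
SUFFIX = ".cluster-api-provider-aws.sigs.k8s.io"

# Which managed policies each IAM role (and its instance profile) should carry.
ROLE_TO_POLICIES = {
    "control-plane" + SUFFIX: [
        "control-plane" + SUFFIX,
        "controller" + SUFFIX,
        "nodes" + SUFFIX,
    ],
    "nodes" + SUFFIX: ["nodes" + SUFFIX],
}

print(len(ROLE_TO_POLICIES["control-plane" + SUFFIX]))  # 3
```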
### Multiple cluster deployment using a single AWS account
To run both the management and workload clusters in the same AWS account, either remove the `Condition` section from your policies or update it to include the name of the workload cluster.
A sample condition is shown below:
```json
"Condition": {
  "StringLikeIfExists": {
    "aws:ResourceTag/sigs.k8s.io/cluster-api-provider-aws/cluster/dgoel-test-1": "owned",
    "aws:ResourceTag/sigs.k8s.io/cluster-api-provider-aws/cluster/dgoel-test-workload": "owned"
  }
}
```
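A helper along these lines (names illustrative, not part of CAPA) can generate the condition block for any set of clusters sharing the account, using the `StringLikeIfExists` operator from the sample above:

```python
TAG_PREFIX = "aws:ResourceTag/sigs.k8s.io/cluster-api-provider-aws/cluster/"

def cluster_condition(cluster_names):
    """Build a Condition block scoping permissions to the named clusters."""
    return {
        "StringLikeIfExists": {TAG_PREFIX + name: "owned" for name in cluster_names}
    }

cond = cluster_condition(["dgoel-test-1", "dgoel-test-workload"])
print(len(cond["StringLikeIfExists"]))  # 2
```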
### Multiple cluster deployment using multiple AWS accounts
To run the management cluster in one AWS account and the workload cluster in another, follow these steps:
1. Complete all the prerequisite steps in both the management and workload accounts
2. Create all policies and roles in both the management and workload accounts
3. Establish a trust relationship in the workload account for the management account
4. Go to your workload account
5. Search for the role `control-plane.cluster-api-provider-aws.sigs.k8s.io`
6. Go to the Trust Relationship tab and click Edit Trust Relationship
7. Add the following relationship:
```json
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::${mgmt-aws-account}:role/control-plane.cluster-api-provider-aws.sigs.k8s.io"
  },
  "Action": "sts:AssumeRole"
}
```
8. Grant the role in the management account permission to call the `sts:AssumeRole` API
9. Log in to the management AWS account and attach the following inline policy to the `control-plane.cluster-api-provider-aws.sigs.k8s.io` role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::${workload-aws-account}:role/control-plane.cluster-api-provider-aws.sigs.k8s.io"
      ]
    }
  ]
}
```
10. Update the `AWSCluster` object with the following details:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSCluster
metadata:
spec:
  identityRef:
    kind: AWSClusterRoleIdentity
    name: cross-account-role
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSClusterRoleIdentity
metadata:
  name: cross-account-role
spec:
  allowedNamespaces: {}
  roleARN: "arn:aws:iam::${workload-aws-account}:role/control-plane.cluster-api-provider-aws.sigs.k8s.io"
  sourceIdentityRef:
    kind: AWSClusterControllerIdentity
    name: default
```
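The trust statement added in step 7 is appended to the role's existing trust policy rather than used as a standalone document. A hedged sketch of that merge, assuming the trust policy has the usual top-level `Statement` list (the helper name is illustrative):

```python
def add_trust_statement(trust_policy: dict, mgmt_account: str) -> dict:
    """Append an AssumeRole statement allowing the management-account role."""
    trust_policy.setdefault("Statement", []).append({
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::%s:role/control-plane.cluster-api-provider-aws.sigs.k8s.io"
                   % mgmt_account
        },
        "Action": "sts:AssumeRole",
    })
    return trust_policy

doc = add_trust_statement({"Version": "2012-10-17", "Statement": []}, "111122223333")
print(len(doc["Statement"]))  # 1
```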