EKS Blueprints for Terraform
===

This workshop helps you build a multi-team platform on top of EKS. It will enable multiple development teams at your organization to deploy workloads freely without the platform team being the bottleneck. We walk through the baseline setup of an EKS cluster and gradually add add-ons to enhance its capabilities, such as ArgoCD, Rollouts, GitOps workflows, and other common open-source add-ons. Acting as the Development Team, we then deploy a static website via GitOps using ArgoCD.

![eks_cluster_1](https://hackmd.io/_uploads/BkkX2_v1C.svg)

Introduction
---

In this section, we will go over the following topics to ensure that we have a fundamental understanding of what EKS Blueprints is, along with the benefits of building one on Amazon EKS. We will discuss the following:

1. What is EKS Blueprints?
2. What are the components and tools used to build EKS Blueprints?
3. What are the benefits of using EKS Blueprints?
4. How do different personas, i.e., platform teams and application teams, leverage EKS Blueprints?
5. What is a reference architecture diagram for EKS Blueprints?
6. How is compute infrastructure defined, and how do you provision and manage it across environments using EKS Blueprints?
7. How are teams onboarded to EKS Blueprints and then granted access to shared EKS clusters?
8. How are workloads in multiple environments onboarded to EKS Blueprints?

In the next section, we will cover what EKS Blueprints is.

What is EKS Blueprints?
---

EKS Blueprints is an open-source development framework that abstracts the complexities of cloud infrastructure from developers and allows them to deploy workloads with ease. Containerized environments on AWS are composed of multiple AWS or open-source products and services, including services for running containers, CI/CD pipelines, capturing logs and metrics, and security enforcement. The EKS Blueprints framework packages these tools into a cohesive whole and makes them available to development teams as a service. From an operational perspective, the framework allows companies to consolidate tools and best practices for securing, scaling, monitoring, and operating containerized infrastructure into a central platform that can then be used by developers across an enterprise.

How is EKS Blueprints built?
---

EKS Blueprints is built on top of Amazon EKS and all the various components needed to efficiently address Day 2 operations. A blueprint is defined using Infrastructure-as-Code best practices with AWS CDK or HashiCorp Terraform, via two open-source projects:

* [The EKS Blueprints for Terraform](https://github.com/aws-ia/terraform-aws-eks-blueprints)
* [The EKS Blueprints for CDK](https://github.com/aws-quickstart/cdk-eks-blueprints)

What can I do with a Blueprint?
---

Customers can leverage EKS Blueprints to do the following (a simplified sketch of a blueprint follows this list):

* Deploy EKS clusters across any number of accounts and regions, following best practices.
* Manage cluster configuration, including add-ons that run in each cluster, from a single Git repository.
* Define teams, namespaces, and their associated access permissions for your clusters.
* Leverage GitOps-based workflows for onboarding and managing workloads for your teams.
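To make this more concrete, here is a deliberately simplified sketch of what a blueprint can look like when expressed in Terraform. It is an illustration only: the module inputs are trimmed, the names are placeholders, and the full working configuration is built step by step later in this workshop.

```
# Simplified illustration of a Terraform-based blueprint: an EKS cluster plus a
# team definition managed from one place. Inputs are placeholders, not a
# complete configuration.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.15"

  cluster_name    = "eks-blueprint-blue"
  cluster_version = "1.25"
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets
}

module "platform_team" {
  source  = "aws-ia/eks-blueprints-teams/aws"
  version = "~> 0.2"

  name              = "team-platform"
  enable_admin      = true
  users             = [data.aws_caller_identity.current.arn]
  cluster_arn       = module.eks.cluster_arn
  oidc_provider_arn = module.eks.oidc_provider_arn
}
```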
There is also an [EKS Blueprints Patterns examples directory](https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/examples) that provides a library of different deployment options defined as constructs, including the following:

* Analytics clusters with Spark or EMR on EKS
* Fully private EKS clusters
* IPv6 EKS clusters
* EKS clusters scaling with Karpenter
* Clusters with observability tools
* and much more...

In the next section, we will talk about the benefits of following the EKS Blueprints model.

# Benefits of EKS Blueprints

Why leverage EKS Blueprints?
---

The ecosystem of tools that has developed around Kubernetes and the Cloud Native Computing Foundation (CNCF) provides cloud engineers with a wealth of choice when it comes to architecting their infrastructure. Determining the right mix of tools and services, however, in addition to how they integrate, can be a challenge. As your Kubernetes estate grows, managing configuration for your clusters can also become a challenge.

AWS customers are building internal platforms to tame this complexity, automate the management of their Kubernetes environments, and make it easy for developers to onboard their workloads. However, these platforms require an investment of time and engineering resources to build. The goal of this project is to provide customers with a toolchain that helps them deploy a platform on top of EKS with ease, following best practices. EKS Blueprints provides logical abstractions and prescriptive guidance for building a platform. **Ultimately, we want to help EKS customers accelerate time to market for their own platform initiatives.**

Separation of Concerns: Platform Teams vs. Application Teams
---

Platform teams build the tools that provision, manage, and secure the underlying infrastructure, while application teams are free to focus on building the applications that deliver business value to customers. Application teams need to focus on writing code and quickly shipping products, but there must be certain standards that are uniform across all production applications to make them secure, compliant, and highly available.

EKS Blueprints provides a better workflow between platform and application teams, as well as a self-service interface for developers that is streamlined for writing and shipping code. The platform teams have full control to define standards on security, software delivery, monitoring, and networking that must be used across all deployed applications. This allows developers to be more productive because they don't have to configure and manage the underlying cloud resources themselves. It also gives operators more control over making sure production applications are secure, compliant, and highly available.

What does good look like?
---

EKS Blueprints will look slightly different between organizations depending on their requirements, but all of them aim to solve the same set of problems listed below:

![Screenshot 2024-03-31 at 18.23.54](https://hackmd.io/_uploads/B1NhRuv1C.png)

The reason to do this on top of AWS is that the breadth of services offered by AWS, paired with the vast open-source ecosystem backed by the Kubernetes community, provides a nearly limitless number of combinations of services and solutions to meet your specific requirements and needs.
It is much easier to think about the benefits in the context of the core principles that EKS Blueprints was built upon, which include the following:

* Security and Compliance
* Cost Management
* Deployment Automation
* Provisioning of infrastructure
* Telemetry

In the next section, we will talk about the different personas that are involved in leveraging EKS Blueprints.

# How does it affect different individuals?

What can each individual on your team expect from EKS Blueprints?
---

Now that we have an understanding of why we are using EKS Blueprints, let's take some time to understand how this will benefit the various roles on each team that we will be working with.

Team topologies vary by environment; however, one topology that is prevalent across many organizations is having a Platform Team provision and manage infrastructure, along with multiple Application Teams that need to focus on delivering features in an agile manner. Many companies face a big challenge in enabling multiple developer teams to freely consume a platform with proper guardrails. The objective of our workshop is to show you how you can provision a platform based on EKS to remove these barriers.

The workshop focuses on two key enterprise teams: a **Platform Team** and an **Application Team**. The Platform Team will provision the EKS cluster and onboard the Developer Team. The Application Team will deploy a workload to the cluster.

Platform Team
---

Acting as the Platform Team, we will use the [EKS Blueprints for Terraform](https://github.com/aws-ia/terraform-aws-eks-blueprints), a solution written entirely in Terraform (HCL). It helps you build a shared platform where multiple teams can consume and deploy their workloads. The underlying technology of EKS is, of course, Kubernetes, so having some experience with Terraform and Kubernetes is helpful. You will be guided by our AWS experts (on-site) as you follow along in this workshop.

Application Team
---

Once the EKS cluster has been provisioned, an Application Team (the Riker Team) will deploy a workload: a basic static web app. The static site will be deployed using GitOps continuous delivery.

# Getting Started

Running the workshop at an AWS Event
---

To complete this workshop, you are provided with an AWS account via AWS Workshop Studio and a link that will be shared by our event staff. AWS Workshop Studio allows AWS field teams to run Workshops, GameDays, Bootcamps, Immersion Days, and other events that require hands-on access to AWS accounts.

:::warning
**Important**
If you are currently logged in to an AWS account, you can log out using this [link](https://console.aws.amazon.com/console/logout!doLogout)
:::

***Access AWS Workshop Studio***

1. [Click here](https://catalog.us-east-1.prod.workshops.aws/join/) to access AWS Workshop Studio
2. Choose your preferred sign-in method. For AWS Guided events, use the Email OTP method.
![setup_ws_signin1](https://hackmd.io/_uploads/H13fGgt10.png)
3. Enter the code provided by the event organizer in the text box. You will usually find this code on a slide that is being shown, or on a paper printout at your table.
![setup_ws_signin2](https://hackmd.io/_uploads/HJFHGgFkR.png)
4. Read and agree to the Terms and Conditions and click Join Event.
![setup_ws_signin3](https://hackmd.io/_uploads/Syk_feYyA.png)
5. Join the event. You can access the console of your personal AWS account for the event by clicking the link in the sidebar.
![Screenshot 2024-04-01 at 20.54.44](https://hackmd.io/_uploads/S1p6mlY1A.png)

Starting AWS Cloud9 IDE
---

Your account should be pre-configured with a Cloud9 environment. Follow this [link](https://console.aws.amazon.com/cloud9/home) to access your **eks-blueprints-for-terraform-workshop** Cloud9 environment, and open the IDE.

![c9-open-ide](https://hackmd.io/_uploads/SkczEeFJC.png)

Verify your AWS identity:

```=
aws sts get-caller-identity
```

You should see something similar to:

```=
{
    "UserId": "AROA6NAAL5J5H22JSBCPA:i-09e1d15b60696663c",
    "Account": "0123456789",
    "Arn": "arn:aws:sts::0123456789:assumed-role/eks-blueprints-for-terraform-workshop-admin/i-09e1d15b60696663c"
}
```

:::info
If you don't see **eks-blueprints-for-terraform-workshop-admin** in the output, type `bash` in the terminal to reload the environment
:::

Use this command to verify that you are good to go:

```= bash
aws sts get-caller-identity --query Arn | grep eks-blueprints-for-terraform-workshop-admin -q && echo "IAM role valid" || echo "IAM role NOT valid"
```

If the IAM role is not valid, **DO NOT PROCEED**. Try the following commands to verify the identity again:

```=
aws cloud9 update-environment --environment-id $C9_PID --managed-credentials-action DISABLE
aws sts get-caller-identity --query Arn | grep eks-blueprints-for-terraform-workshop-admin -q && echo "IAM role valid" || echo "IAM role NOT valid"
```

Check that you have the backup source code for the workshop:

```=
ls -la ~/environment/code-eks-blueprint
```

At an AWS event, this directory will already contain files. If it is empty, run the following commands to retrieve the workshop code:

```=
curl 'https://static.us-east-1.prod.workshops.aws/public/8e45955c-68f9-4b13-bf1d-ad47716531db/assets/code-eks-blueprint.zip' -o code-eks-blueprint.zip
unzip -o code-eks-blueprint.zip -d ~/environment/code-eks-blueprint
```

When ready, go to the next section to provision an Amazon EKS cluster.

# Provision an Amazon EKS Cluster

We are ready to get started! The next step will guide you through creating Terraform files, which we will use throughout the workshop and gradually modify. In this section, we are going to manually create our Terraform project step by step.

Create shared environment
---

We separate the environment creation from the EKS cluster creation so that we can adopt a seamless blue/green or canary migration later. So first, we will create a Terraform stack for our environment that will contain shared resources such as the VPC.

![environment](https://hackmd.io/_uploads/rkdzWU5yC.png)

Configure our environment
---

In this section, we will be setting up our Terraform project.

**1. Create our Terraform project**

Create a new folder in your file system:

```=
mkdir -p ~/environment/eks-blueprint/environment
cd ~/environment/eks-blueprint/environment
```

Create a file called ```versions.tf``` that indicates which versions of Terraform and providers our project will use:

```=
cat > ~/environment/eks-blueprint/environment/versions.tf << 'EOF'
terraform {
  required_version = ">= 1.4.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0.0"
    }
    random = {
      version = ">= 3"
    }
  }
}
EOF
```

**2. Define our project's variables**

Our environment's Terraform stack will have some variables that we can configure:

* The environment name.
* The AWS region to use.
* The VPC CIDR we want to create.
* A suffix that will be used to create a secret for ArgoCD later.
```=
cat > ~/environment/eks-blueprint/environment/variables.tf << 'EOF'
variable "environment_name" {
  description = "The name of environment Infrastructure stack, feel free to rename it. Used for cluster and VPC names."
  type        = string
  default     = "eks-blueprint"
}

variable "aws_region" {
  description = "AWS Region"
  type        = string
  default     = "us-west-2"
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "argocd_secret_manager_name_suffix" {
  type        = string
  description = "Name of secret manager secret for ArgoCD Admin UI Password"
  default     = "argocd-admin-secret"
}
EOF
```

**3. Define our project's main file**

We are going to create our ```main.tf``` file in several steps, so we can explain what each part does.

**Configure the environment**

First, we define:

* An AWS provider, configured for our region, to interact with AWS APIs.
* A data source to retrieve the active Availability Zones in our AWS region.
* Some locals that will be used to configure our environment; some of them are built from the variables we previously defined.
* The tags that will be applied to the AWS objects our Terraform code will create.

```=
cat > ~/environment/eks-blueprint/environment/main.tf <<'EOF'
provider "aws" {
  region = local.region
}

data "aws_availability_zones" "available" {}

locals {
  name   = var.environment_name
  region = var.aws_region

  vpc_cidr       = var.vpc_cidr
  num_of_subnets = min(length(data.aws_availability_zones.available.names), 3)
  azs            = slice(data.aws_availability_zones.available.names, 0, local.num_of_subnets)

  argocd_secret_manager_name = var.argocd_secret_manager_name_suffix

  tags = {
    Blueprint  = local.name
    GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
  }
}
EOF
```

**Create our VPC**

Here we use the Terraform AWS VPC module to provision an [Amazon Virtual Private Cloud](https://docs.aws.amazon.com/vpc/index.html) (VPC) and subnets. We also enable a NAT gateway, an internet gateway (IGW), and DNS hostnames so that we can connect to the cluster after provisioning. You can also see that we tag the subnets as required by EKS, so that Elastic Load Balancing (ELB) knows they are used for our cluster.

Use this command to add the declaration to our ```main.tf``` file:

```=
cat >> ~/environment/eks-blueprint/environment/main.tf <<'EOF'
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 6, k)]
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 6, k + 10)]

  enable_nat_gateway   = true
  create_igw           = true
  enable_dns_hostnames = true
  single_nat_gateway   = true

  manage_default_network_acl    = true
  default_network_acl_tags      = { Name = "${local.name}-default" }
  manage_default_route_table    = true
  default_route_table_tags      = { Name = "${local.name}-default" }
  manage_default_security_group = true
  default_security_group_tags   = { Name = "${local.name}-default" }

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = local.tags
}
EOF
```

**Create additional resources**

Finally, we will create some resources that will be shared by our clusters:

* We will generate a password that will be used by our deployment of ArgoCD.
* We will create an AWS Secrets Manager secret whose name starts with the value we configured in our variables.
This command completes the ```main.tf``` we started to create:

```=
cat >> ~/environment/eks-blueprint/environment/main.tf <<'EOF'
#---------------------------------------------------------------
# ArgoCD Admin Password credentials with Secrets Manager
# Log in to AWS Secrets Manager with the same role as Terraform to extract the ArgoCD admin password
#---------------------------------------------------------------
resource "random_password" "argocd" {
  length           = 16
  special          = true
  override_special = "!#$%&*()-_=+[]{}<>:?"
}

#tfsec:ignore:aws-ssm-secret-use-customer-key
resource "aws_secretsmanager_secret" "argocd" {
  name                    = "${local.argocd_secret_manager_name}.${local.name}"
  recovery_window_in_days = 0 # Set to zero for this example to force delete during Terraform destroy
}

resource "aws_secretsmanager_secret_version" "argocd" {
  secret_id     = aws_secretsmanager_secret.argocd.id
  secret_string = random_password.argocd.result
}
EOF
```

**4. Create an outputs file**

We will initially output the VPC ID, and later we will add a command that adds the newly created cluster to our Kubernetes ```~/.kube/config``` configuration file, which will enable access to our cluster.

Please add the following contents to ```outputs.tf```:

```=
cat > ~/environment/eks-blueprint/environment/outputs.tf <<'EOF'
output "vpc_id" {
  description = "The ID of the VPC"
  value       = module.vpc.vpc_id
}
EOF
```

**5. Provide variables**

Finally, we will use a variable file to provide specific deployment data to our Terraform modules:

```=
cat > ~/environment/eks-blueprint/terraform.tfvars <<EOF
aws_region          = "$AWS_REGION"
environment_name    = "eks-blueprint"
eks_admin_role_name = "WSParticipantRole"
EOF
```

Link this file into our environment directory:

```=
ln -s ~/environment/eks-blueprint/terraform.tfvars ~/environment/eks-blueprint/environment/terraform.tfvars
```

:::info
**Terraform State Management**
This workshop uses local Terraform state. To learn about a proper setup, take a look at https://www.terraform.io/language/state
:::

# Create the environment

Next, run the following Terraform CLI commands to provision the AWS resources:

```=
# Initialize Terraform so that we get all the required modules and providers
cd ~/environment/eks-blueprint/environment
terraform init
```

```=
# It is always a good practice to use a dry-run command
terraform plan
```

If there are no errors, you can proceed with the deployment:

```=
# The auto-approve flag avoids you having to confirm that you want to provision resources.
cd ~/environment/eks-blueprint/environment
terraform apply -auto-approve
```

At this stage, we have created our VPC; you can see it in the console using this [deep link](https://console.aws.amazon.com/vpc/home?#vpcs:tag:Name=eks-blueprint)

Next, we will create a basic Amazon EKS cluster with a managed node group.

# Creating an Amazon EKS cluster module

In this section, we are going to write a local module to deploy our EKS cluster. Later, we will instantiate one or multiple versions of this module so we can create several EKS clusters in our VPC if needed.
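As an illustration of where we are heading, instantiating this local module more than once might look like the sketch below. Only the "blue" instance is actually created in this workshop; the "green" instance is a hypothetical example of how a second cluster could share the same environment.

```
# Hypothetical sketch only: two instances of the same local module sharing one VPC.
# The workshop creates only the "blue" instance.
module "eks_cluster_blue" {
  source           = "./modules/eks_cluster"
  environment_name = "eks-blueprint"
  service_name     = "blue"
  cluster_version  = "1.25"
}

module "eks_cluster_green" {
  source           = "./modules/eks_cluster"
  environment_name = "eks-blueprint"
  service_name     = "green"
  cluster_version  = "1.26" # e.g. test a newer Kubernetes version before migrating
}
```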
![eks-blue](https://hackmd.io/_uploads/SJ0KrUqkC.png)

Run the following commands to create a folder for this module:

```=
mkdir -p ~/environment/eks-blueprint/modules/eks_cluster
cd ~/environment/eks-blueprint/modules/eks_cluster
```

**Configure our eks-blueprint local module**

:::info
**Important**
We heavily rely on Terraform modules in the workshop; you can read more about them [here](https://www.terraform.io/language/modules)
:::

Similarly to what we did in the environment setup, let's create the Terraform files we need.

**1. Create our Terraform project**

```=
cat > ~/environment/eks-blueprint/modules/eks_cluster/versions.tf << 'EOF'
terraform {
  required_version = ">= 1.4.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0.0"
    }
  }
}
EOF
```

**2. Define our module's variables**

Here we define several variables that will be used by the solution:

* ```environment_name``` refers to the environment we previously created.
* ```service_name``` refers to the instance of our module (the suffix of our EKS cluster name).
* ```eks_admin_role_name``` is an additional IAM role that will be admin in the cluster.

```=
cat > ~/environment/eks-blueprint/modules/eks_cluster/variables.tf << 'EOF'
variable "aws_region" {
  description = "AWS Region"
  type        = string
  default     = "us-west-2"
}

variable "environment_name" {
  description = "The name of Environment Infrastructure stack, feel free to rename it. Used for cluster and VPC names."
  type        = string
  default     = "eks-blueprint"
}

variable "service_name" {
  description = "The name of the Suffix for the stack name"
  type        = string
  default     = "blue"
}

variable "cluster_version" {
  description = "The Version of Kubernetes to deploy"
  type        = string
  default     = "1.25"
}

variable "eks_admin_role_name" {
  type        = string
  description = "Additional IAM role to be admin in the cluster"
  default     = ""
}

variable "argocd_secret_manager_name_suffix" {
  type        = string
  description = "Name of secret manager secret for ArgoCD Admin UI Password"
  default     = "argocd-admin-secret"
}
EOF
```

**3. Create a locals file**

We start by defining some local values:

```=
cat <<'EOF' > ~/environment/eks-blueprint/modules/eks_cluster/locals.tf
locals {
  environment = var.environment_name
  service     = var.service_name

  env  = local.environment
  name = "${local.environment}-${local.service}"

  # Mapping
  cluster_version            = var.cluster_version
  argocd_secret_manager_name = var.argocd_secret_manager_name_suffix
  eks_admin_role_name        = var.eks_admin_role_name

  tag_val_vpc            = local.environment
  tag_val_public_subnet  = "${local.environment}-public-"
  tag_val_private_subnet = "${local.environment}-private-"

  node_group_name = "managed-ondemand"

  tags = {
    Blueprint  = local.name
    GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
  }
}
EOF
```

**4. Create our main module file**

We are going to progressively create our ```main.tf``` file:

```=
cat <<'EOF' > ~/environment/eks-blueprint/modules/eks_cluster/main.tf
# Required for public ECR where Karpenter artifacts are hosted
provider "aws" {
  region = "us-east-1"
  alias  = "virginia"
}
EOF
```

Now we continue by importing some data:

* Our existing partition.
* Our AWS identity.
* The VPC we created in our environment.
* The private subnets of our VPC.
```=
cat <<'EOF' >> ~/environment/eks-blueprint/modules/eks_cluster/main.tf
data "aws_partition" "current" {}

# Find the user currently in use by AWS
data "aws_caller_identity" "current" {}

data "aws_vpc" "vpc" {
  filter {
    name   = "tag:Name"
    values = [local.tag_val_vpc]
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "tag:Name"
    values = ["${local.tag_val_private_subnet}*"]
  }
}
EOF
```

Now we tag the subnets with the name of our EKS cluster, which is the concatenation of the two locals ```local.environment``` and ```local.service```. This will be used by our load balancers and by Karpenter to discover the subnets used by our cluster.

```=
cat <<'EOF' >> ~/environment/eks-blueprint/modules/eks_cluster/main.tf
#Add Tags for the new cluster in the VPC Subnets
resource "aws_ec2_tag" "private_subnets" {
  for_each    = toset(data.aws_subnets.private.ids)
  resource_id = each.value
  key         = "kubernetes.io/cluster/${local.environment}-${local.service}"
  value       = "shared"
}

data "aws_subnets" "public" {
  filter {
    name   = "tag:Name"
    values = ["${local.tag_val_public_subnet}*"]
  }
}

#Add Tags for the new cluster in the VPC Subnets
resource "aws_ec2_tag" "public_subnets" {
  for_each    = toset(data.aws_subnets.public.ids)
  resource_id = each.value
  key         = "kubernetes.io/cluster/${local.environment}-${local.service}"
  value       = "shared"
}
EOF
```

Finally, we import our ArgoCD secret from AWS Secrets Manager:

```=
cat <<'EOF' >> ~/environment/eks-blueprint/modules/eks_cluster/main.tf
data "aws_secretsmanager_secret" "argocd" {
  name = "${local.argocd_secret_manager_name}.${local.environment}"
}

data "aws_secretsmanager_secret_version" "admin_password_version" {
  secret_id = data.aws_secretsmanager_secret.argocd.id
}
EOF
```

**5. Amazon EKS Cluster**

In this step, we are going to add the EKS core module and configure it, including the EKS managed node group. In the code below, you can see that we are pinning the main **terraform-aws-modules/eks** module to version [`~> 19.15.2`](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest), which corresponds to a GitHub repository release tag. It is a good practice to lock all your modules to a given, tried-and-tested version.
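As a quick reminder of Terraform's version constraint syntax (standard Terraform behavior, not specific to this workshop), an exact pin and a pessimistic constraint behave differently:

```
module "example" {
  source = "terraform-aws-modules/eks/aws"

  # Exact pin: only this precise version is ever used.
  # version = "19.15.2"

  # Pessimistic constraint: allows 19.15.2 and later patch releases (< 19.16.0).
  version = "~> 19.15.2"
}
```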
```=
cat <<'EOF' >> ~/environment/eks-blueprint/modules/eks_cluster/main.tf
#tfsec:ignore:aws-eks-enable-control-plane-logging
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.15.2"

  cluster_name                   = local.name
  cluster_version                = local.cluster_version
  cluster_endpoint_public_access = true

  vpc_id     = data.aws_vpc.vpc.id
  subnet_ids = data.aws_subnets.private.ids

  # we use only 1 security group to allow connection with Fargate, MNG, and Karpenter nodes
  create_node_security_group = false

  eks_managed_node_groups = {
    initial = {
      node_group_name = local.node_group_name
      instance_types  = ["m5.large"]

      min_size     = 1
      max_size     = 5
      desired_size = 3
      subnet_ids   = data.aws_subnets.private.ids
    }
  }

  manage_aws_auth_configmap = true
  aws_auth_roles = flatten([
    #module.eks_blueprints_platform_teams.aws_auth_configmap_role,
    #[for team in module.eks_blueprints_dev_teams : team.aws_auth_configmap_role],
    #{
    #  rolearn  = module.karpenter.role_arn
    #  username = "system:node:{{EC2PrivateDNSName}}"
    #  groups = [
    #    "system:bootstrappers",
    #    "system:nodes",
    #  ]
    #},
    {
      rolearn  = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${local.eks_admin_role_name}" # The ARN of the IAM role
      username = "ops-role"          # The user name within Kubernetes to map to the IAM role
      groups   = ["system:masters"]  # A list of groups within Kubernetes to which the role is mapped; Checkout K8s Role and Rolebindings
    }
  ])

  tags = merge(local.tags, {
    # NOTE - if creating multiple security groups with this module, only tag the
    # security group that Karpenter should utilize with the following tag
    # (i.e. - at most, only one security group should have this tag in your account)
    "karpenter.sh/discovery" = "${local.environment}-${local.service}"
  })
}
EOF
```

**6. Get module outputs**

We want our module to output some values that we can reuse later:

* The EKS cluster ID
* The command to configure kubectl for the creator of the EKS cluster

```=
cat <<'EOF' > ~/environment/eks-blueprint/modules/eks_cluster/outputs.tf
output "eks_cluster_id" {
  description = "The name of the EKS cluster."
  value       = module.eks.cluster_name
}

output "configure_kubectl" {
  description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = "aws eks --region ${var.aws_region} update-kubeconfig --name ${module.eks.cluster_name}"
}

output "eks_cluster_endpoint" {
  description = "The endpoint of the EKS cluster."
  value       = module.eks.cluster_endpoint
}

output "cluster_certificate_authority_data" {
  description = "cluster_certificate_authority_data"
  value       = module.eks.cluster_certificate_authority_data
}
EOF
```

Congrats! We have finished our local eks-blueprint module; now let's create an instance of it.

# Provision an Amazon EKS "Blue" Cluster

![eks-blue](https://hackmd.io/_uploads/S1umYU9kR.png)

Now we are going to create an "eks-blue" instance of our module:

```=
mkdir -p ~/environment/eks-blueprint/eks-blue
cd ~/environment/eks-blueprint/eks-blue
```

**1. Create the Terraform structure for our EKS blue cluster**

```=
cat > ~/environment/eks-blueprint/eks-blue/providers.tf << 'EOF'
terraform {
  required_version = ">= 1.4.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.20.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.9.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}
EOF
```
**2. Create the variables for our cluster**

```=
cat > ~/environment/eks-blueprint/eks-blue/variables.tf << 'EOF'
variable "aws_region" {
  description = "AWS Region"
  type        = string
  default     = "us-west-2"
}

variable "environment_name" {
  description = "The name of Environment Infrastructure stack name, feel free to rename it. Used for cluster and VPC names."
  type        = string
  default     = "eks-blueprint"
}

variable "eks_admin_role_name" {
  type        = string
  description = "Additional IAM role to be admin in the cluster"
  default     = ""
}

variable "argocd_secret_manager_name_suffix" {
  type        = string
  description = "Name of secret manager secret for ArgoCD Admin UI Password"
  default     = "argocd-admin-secret"
}
EOF
```

**3. Link to our terraform.tfvars variable file**

```=
ln -s ~/environment/eks-blueprint/terraform.tfvars ~/environment/eks-blueprint/eks-blue/terraform.tfvars
```

**4. Create the main file**

* We configure our providers for Kubernetes, Helm, and kubectl.
* We call our eks-blueprint module, providing the variables.

```=
cat > ~/environment/eks-blueprint/eks-blue/main.tf << 'EOF'
provider "aws" {
  region = var.aws_region
}

provider "kubernetes" {
  host                   = module.eks_cluster.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks_cluster.eks_cluster_id]
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks_cluster.eks_cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks_cluster.cluster_certificate_authority_data)
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks_cluster.eks_cluster_id]
    }
  }
}

provider "kubectl" {
  apply_retry_count      = 10
  host                   = module.eks_cluster.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster.cluster_certificate_authority_data)
  load_config_file       = false
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks_cluster.eks_cluster_id]
  }
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks_cluster.eks_cluster_id
}

module "eks_cluster" {
  source = "../modules/eks_cluster"

  aws_region      = var.aws_region
  service_name    = "blue"
  cluster_version = "1.25"

  environment_name    = var.environment_name
  eks_admin_role_name = var.eks_admin_role_name

  argocd_secret_manager_name_suffix = var.argocd_secret_manager_name_suffix

  #addons_repo_url = var.addons_repo_url

  #workload_repo_url      = var.workload_repo_url
  #workload_repo_revision = var.workload_repo_revision
  #workload_repo_path     = var.workload_repo_path
}
EOF
```

**5. Define our Terraform outputs**

We want our Terraform stack to output information from our eks_cluster module:

* The EKS cluster ID
* The command to configure kubectl for the creator of the EKS cluster

```=
cat > ~/environment/eks-blueprint/eks-blue/outputs.tf << 'EOF'
output "eks_cluster_id" {
  description = "The name of the EKS cluster."
  value       = module.eks_cluster.eks_cluster_id
}

output "configure_kubectl" {
  description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = module.eks_cluster.configure_kubectl
}
EOF
```

**6. Provision the Amazon EKS Cluster**

Execute the following commands to provision the cluster:

```=
# We need to do this again, since we added a new module.
cd ~/environment/eks-blueprint/eks-blue
terraform init
```
```=
# Always a good practice to use a dry-run command
terraform plan
```

```=
# Then provision our EKS cluster
# The auto-approve flag avoids you having to confirm you want to provision resources.
terraform apply -auto-approve
```

:::info
**The EKS cluster will take around 15 minutes to deploy.**
Time to grab your beverage of choice!
:::

# Accessing the Cluster

When finished, your Terraform outputs should look something like:

```
outputs:

configure_kubectl = "aws eks --region eu-west-1 update-kubeconfig --name eks-blueprint-blue"
eks_cluster_id = "eks-blueprint-blue"
```

You can now connect to your EKS cluster using the output command, which looks something like:

```=
aws eks --region $AWS_REGION update-kubeconfig --name eks-blueprint-blue
```

:::warning
**Important**
Make sure to use your own output command to configure your kubeconfig
:::

:::info
**update-kubeconfig** configures kubectl so that you can connect to an Amazon EKS cluster.
**kubectl** is a command-line tool used for communication with a Kubernetes cluster's control plane, using the Kubernetes API.
:::

You can list the pods in all namespaces with:

```=
kubectl get pods -A
```

```
NAMESPACE     NAME                       READY   STATUS    RESTARTS     AGE
kube-system   aws-node-h66pd             1/1     Running   1 (9h ago)   15h
kube-system   aws-node-qdtjx             1/1     Running   1 (9h ago)   15h
kube-system   aws-node-wdbsg             1/1     Running   1 (9h ago)   15h
kube-system   coredns-6bc4667bcc-sgbm2   1/1     Running   1 (9h ago)   16h
kube-system   coredns-6bc4667bcc-vkchc   1/1     Running   1 (9h ago)   16h
kube-system   kube-proxy-4csbd           1/1     Running   1 (9h ago)   15h
kube-system   kube-proxy-779xp           1/1     Running   1 (9h ago)   15h
kube-system   kube-proxy-dppr2           1/1     Running   1 (9h ago)   15h
```

:::info
**Congratulations!**
You just deployed your first EKS cluster with Terraform.
:::

At this stage, we have installed a basic EKS cluster with the minimal add-ons required to work:

* The VPC CNI plugin, so we get AWS VPC support for our pods.
* CoreDNS for internal domain name resolution.
* kube-proxy to allow the usage of Kubernetes Services.

We are going to see how we can improve our deployments in the next sections.

# Team Management

In the next section, you will learn how to configure Application and Platform teams using EKS Blueprints. You'll learn the differences between the two, as well as the specific configuration of an Application Team object and how it can be used by the application team.

**Terminology**

In this part of the lab, we will cover how EKS Blueprints helps you manage cluster access for multiple teams in the organization. Before diving into the technical part, we want to introduce terminology from the [EKS Blueprints solution](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/teams/), and also from the [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html):

* **Component** (as defined in the AWS Well-Architected Framework) - The code, configuration, and AWS resources that together deliver against a requirement. A component is often the unit of technical ownership and is decoupled from other components.
* **Workload** - A set of components that together deliver business value. A workload is usually the level of detail that business and technology leaders communicate about.
* **Application Team** - A representation of a group of users/roles that are responsible for managing a specific workload in a namespace. Creating an Application Team creates a dedicated namespace for all of that team's components.
* **Platform Team** - This represents the cluster platform administrators who have admin access to the cluster. This construct doesn't create a dedicated namespace, as the platform team has admin rights on the clusters. Note: a user or role that is configured in a Platform Team can also be configured to act as one or more Application Teams in the cluster.

After establishing the base terminology, in this section we will perform the following actions:

* Add a Platform Team.
* Add an Application Team that is responsible for the core-services workload.
* Deploy a component into the application team's namespace.

# Platform Team

As described earlier, EKS Blueprints supports creating multiple teams that have different permission levels on the cluster. This is supported by the dedicated [terraform-aws-eks-blueprints-teams](https://github.com/aws-ia/terraform-aws-eks-blueprints-teams) module.

Head over to the ~/environment/eks-blueprint/modules/eks_cluster/main.tf file:

```=
c9 open ~/environment/eks-blueprint/modules/eks_cluster/main.tf
```

**Add Platform Team**

The first thing we need to do is add the Platform Team definition to the ```main.tf``` of our ```eks_cluster``` module. This is the team that manages the EKS cluster provisioning.

```=
cat <<'EOF' >> ~/environment/eks-blueprint/modules/eks_cluster/main.tf

data "aws_iam_role" "eks_admin_role_name" {
  count = local.eks_admin_role_name != "" ? 1 : 0
  name  = local.eks_admin_role_name
}

module "eks_blueprints_platform_teams" {
  source  = "aws-ia/eks-blueprints-teams/aws"
  version = "~> 0.2"

  name = "team-platform"

  # Enables elevated, admin privileges for this team
  enable_admin = true

  # Define who can impersonate the team-platform Role
  users = [
    data.aws_caller_identity.current.arn,
    try(data.aws_iam_role.eks_admin_role_name[0].arn, data.aws_caller_identity.current.arn),
  ]
  cluster_arn       = module.eks.cluster_arn
  oidc_provider_arn = module.eks.oidc_provider_arn

  labels = {
    "elbv2.k8s.aws/pod-readiness-gate-inject" = "enabled",
    "appName"                                 = "platform-team-app",
    "projectName"                             = "project-platform",
  }

  annotations = {
    team = "platform"
  }

  namespaces = {
    "team-platform" = {

      resource_quota = {
        hard = {
          "requests.cpu"    = "10000m",
          "requests.memory" = "20Gi",
          "limits.cpu"      = "20000m",
          "limits.memory"   = "50Gi",
          "pods"            = "20",
          "secrets"         = "20",
          "services"        = "20"
        }
      }

      limit_range = {
        limit = [
          {
            type = "Pod"
            max = {
              cpu    = "1000m"
              memory = "1Gi"
            },
            min = {
              cpu    = "10m"
              memory = "4Mi"
            }
          },
          {
            type = "PersistentVolumeClaim"
            min = {
              storage = "24M"
            }
          }
        ]
      }
    }
  }

  tags = local.tags
}
EOF
```

:::info
**Important**
The label **elbv2.k8s.aws/pod-readiness-gate-inject** injected here is used by the AWS Load Balancer Controller to only mark pods as ready at the Kubernetes level when they are correctly registered in the associated load balancer. To learn more, see [EKS Best Practices](https://aws.github.io/aws-eks-best-practices/networking/loadbalancing/loadbalancing/)
:::

Before applying this change, let's go over the code you've just added.

First, we instantiate a new module, eks_blueprints_platform_teams, from the EKS Blueprints teams module. Our team-platform will be Admin of the EKS cluster, so we activate this option on **line 15**. The module will create a new IAM Role, and on **line 18** we define which other entities (users or roles) will be able to impersonate this role and gain Admin access on the cluster. For this, we reuse the local configuration from our module variable to specify the additional IAM Role.
We also want our platform-team to own a dedicated Kubernetes namespace, so that they can deploy cluster-level Kubernetes objects, like network policies, security control manifests, autoscaling configuration, etc. On **line 35** we create the team-platform namespace, and because this is a shared EKS cluster, we create a resource_quota (**line 38**) for this namespace, and a limit_range (**line 50**) object.

Now, using the Terraform CLI, let's proceed to update the resources:

```=
# We need to do this again since we added a new module.
cd ~/environment/eks-blueprint/eks-blue
terraform init
```

```=
# It is always a good practice to use a dry-run command
terraform plan
```

```=
# The auto-approve flag avoids you having to confirm you want to provision resources.
terraform apply -auto-approve
```

This will create a dedicated role similar to ```arn:aws:iam::0123456789:role/team-platform-XXXXXXXXXXXX``` that will allow you to manage the cluster as an administrator. It also defines which existing users/roles will be allowed to assume this role via the ```users``` parameter, where you can provide a list of IAM ARNs. The new role is also configured in the EKS ```aws-auth``` ConfigMap to allow authentication into the EKS Kubernetes cluster.

We can see, for instance, that a new namespace has been created:

```=
kubectl get ns
```

The output should look as below (ignore the AGE column). As you can see, a new namespace called ```team-platform``` was created.

```
NAME              STATUS   AGE
default           Active   19h
kube-node-lease   Active   19h
kube-public       Active   19h
kube-system       Active   19h
team-platform     Active   43m
```

Next, if we run:

```=
kubectl describe resourcequotas -n team-platform
```

We can see the resource quotas allowed for this new namespace.

Also, if we run:

```=
kubectl describe limitrange -n team-platform
```

We will see the limit-range configuration that has been applied, allowing us to add default resource requests and limits to our applications.

There are several other resources created when you onboard a team, including a **Kubernetes Service Account** created for the team. This service account can also be used by applications deployed into this namespace to inherit the permissions of the associated IAM role; you can see this via the special annotation **eks.amazonaws.com/role-arn**.

```=
kubectl describe sa -n team-platform team-platform
```

You can see in more detail in the Terraform state which AWS resources were created with our team module.
For example, you can see the platform team details:

```=
terraform state show 'module.eks_cluster.module.eks_blueprints_platform_teams.aws_iam_role.this[0]'
```

Output:

```
# module.eks_cluster.module.eks_blueprints_platform_teams.aws_iam_role.this[0]:
resource "aws_iam_role" "this" {
    arn                   = "arn:aws:iam::518175083565:role/team-platform-20230606102245638700000002"
    assume_role_policy    = jsonencode(
        {
            Statement = [
                {
                    Action    = "sts:AssumeRole"
                    Effect    = "Allow"
                    Principal = {
                        AWS = "arn:aws:sts::518175083565:assumed-role/eks-blueprints-for-terraform-workshop-admin/i-0da6fe84e15a05ae3"
                    }
                    Sid       = "AssumeRole"
                },
            ]
            Version   = "2012-10-17"
        }
    )
    create_date           = "2023-06-06T10:22:45Z"
    force_detach_policies = true
    id                    = "team-platform-20230606102245638700000002"
    managed_policy_arns   = []
    max_session_duration  = 3600
    name                  = "team-platform-20230606102245638700000002"
    name_prefix           = "team-platform-"
    path                  = "/"
    tags                  = {
        "Blueprint"  = "eks-blueprint-blue"
        "GithubRepo" = "github.com/aws-ia/terraform-aws-eks-blueprints"
    }
    tags_all              = {
        "Blueprint"  = "eks-blueprint-blue"
        "GithubRepo" = "github.com/aws-ia/terraform-aws-eks-blueprints"
    }
    unique_id             = "AROAXRJNEJQWWVIG7RPDM"
}
```

# Application Teams

Now we are going to add additional Application Teams to our cluster.

Continue with the ```~/environment/eks-blueprint/modules/eks_cluster/main.tf``` file:

```=
c9 open ~/environment/eks-blueprint/modules/eks_cluster/main.tf
```

**Add *Riker* and *Burnham* Teams as EKS Tenants**

Our next step is to define Development Teams as tenants of the EKS platform. To do that, we add the following section to the ```main.tf```. We can create every team in a separate module, as we did with the ```platform-team```, or we can declare multiple teams in one module using the ```for_each``` syntax.

Add the code below after the ```eks_blueprints_platform_teams``` module we just added:

```=
cat <<'EOF' >> ~/environment/eks-blueprint/modules/eks_cluster/main.tf

module "eks_blueprints_dev_teams" {
  source  = "aws-ia/eks-blueprints-teams/aws"
  version = "~> 0.2"
  for_each = {
    burnham = {
      labels = {
        "elbv2.k8s.aws/pod-readiness-gate-inject" = "enabled",
        "appName"                                 = "burnham-team-app",
        "projectName"                             = "project-burnham",
      }
    }
    riker = {
      labels = {
        "elbv2.k8s.aws/pod-readiness-gate-inject" = "enabled",
        "appName"                                 = "riker-team-app",
        "projectName"                             = "project-riker",
      }
    }
  }
  name = "team-${each.key}"

  users             = [data.aws_caller_identity.current.arn]
  cluster_arn       = module.eks.cluster_arn
  oidc_provider_arn = module.eks.oidc_provider_arn

  labels = merge(
    {
      team = each.key
    },
    try(each.value.labels, {})
  )

  annotations = {
    team = each.key
  }

  namespaces = {
    "team-${each.key}" = {
      labels = merge(
        {
          team = each.key
        },
        try(each.value.labels, {})
      )

      resource_quota = {
        hard = {
          "requests.cpu"    = "100",
          "requests.memory" = "20Gi",
          "limits.cpu"      = "200",
          "limits.memory"   = "50Gi",
          "pods"            = "15",
          "secrets"         = "10",
          "services"        = "20"
        }
      }

      limit_range = {
        limit = [
          {
            type = "Pod"
            max = {
              cpu    = "2"
              memory = "1Gi"
            }
            min = {
              cpu    = "10m"
              memory = "4Mi"
            }
          },
          {
            type = "PersistentVolumeClaim"
            min = {
              storage = "24M"
            }
          },
          {
            type = "Container"
            default = {
              cpu    = "50m"
              memory = "24Mi"
            }
          }
        ]
      }
    }
  }

  tags = local.tags
}
EOF
```

This block of code allows us to configure, for each team, its namespace name, labels, namespace quotas, and the users or AWS IAM roles that have access to this specific namespace, and also to apply specific Kubernetes manifests such as resource quotas and limit ranges. For simplicity, we will pass our current user to the team object (**line 24**).
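The workshop only defines the burnham and riker teams. If you later needed another team, a hypothetical extra entry in the `for_each` map above could look like the following (the team name and labels are placeholders):

```
# Hypothetical additional entry for the for_each map above; "troi" is a
# placeholder name and is not created in this workshop.
troi = {
  labels = {
    "elbv2.k8s.aws/pod-readiness-gate-inject" = "enabled",
    "appName"                                 = "troi-team-app",
    "projectName"                             = "project-troi",
  }
}
```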
As we did previously, this will create two teams and Kubernetes namespaces, ```team-burnham``` and ```team-riker```, with service accounts preconfigured with the IAM role, and with resource quotas and limit ranges pre-created. The created roles will be similar to:

```
arn:aws:iam::0123456789:role/team-riker-XXXXXXXXXX
arn:aws:iam::0123456789:role/team-burnham-XXXXXXXXXX
```

Apply the changes:

```=
cd ~/environment/eks-blueprint/eks-blue
terraform init
terraform apply -auto-approve
```

You can use **kubectl** to check the created objects:

```=
#list new namespaces
kubectl get ns

#list resource quotas in all namespaces
kubectl get resourcequota -A

#list limit-ranges in all namespaces
kubectl get limitrange -A

#check the team-riker service account
kubectl describe sa -n team-riker team-riker
```

:::info
**Important**
If you need to add other teams, you can just expand the ```for_each``` map on **line 6**, or create another module instance.
:::

**Configure authentication for the team in the cluster**

Our philosophy with multi-tenant access for teams in a cluster is to rely on [GitOps](https://www.gitops.tech/) principles to manage writes to the cluster. The idea is to have a dedicated Git repository for each team, configured for their own namespace; we will discuss this in detail later. This means that, by default, we give teams only read-only access, scoped to their namespace. This way, they can use kubectl to view what is deployed in the cluster but cannot make modifications outside of their GitOps workflow.

Let's add our team roles to the list allowed to authenticate in the EKS cluster. Open the `eks_cluster/main.tf` file again:

```=
c9 open ~/environment/eks-blueprint/modules/eks_cluster/main.tf
```

Find and uncomment the following section (**lines 2 and 3 below**):

```=
  aws_auth_roles = flatten([
    module.eks_blueprints_platform_teams.aws_auth_configmap_role, # <-- Uncomment
    [for team in module.eks_blueprints_dev_teams : team.aws_auth_configmap_role], # <-- Uncomment
    #{
    #  rolearn  = module.karpenter.role_arn
    #  username = "system:node:{{EC2PrivateDNSName}}"
    #  groups = [
    #    "system:bootstrappers",
    #    "system:nodes",
    #  ]
    #},
    {
      rolearn  = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${local.eks_admin_role_name}" # The ARN of the IAM role
      username = "ops-role"          # The user name within Kubernetes to map to the IAM role
      groups   = ["system:masters"]  # A list of groups within Kubernetes to which the role is mapped; Checkout K8s Role and Rolebindings
    }
  ])
```

Then apply the changes:

```=
cd ~/environment/eks-blueprint/eks-blue

# It is always a good practice to use a dry-run command
terraform plan

# The auto-approve flag avoids you having to confirm you want to provision resources.
terraform apply -auto-approve
```

This will allow you to use the AWS roles created for our `platform-team`, `team-riker`, and `team-burnham` in EKS.
You can see this by looking at the configuration in the aws-auth ConfigMap:

```=
kubectl get cm aws-auth -n kube-system -o yaml
```

In the output, you should see something like:

```
    - "groups":
      - "system:masters"
      "rolearn": "arn:aws:iam::012345678901:role/team-platform-20230531115750651100000001"
      "username": "team-platform"
    - "groups":
      - "team-burnham"
      "rolearn": "arn:aws:iam::012345678901:role/team-burnham-20230531130037207500000001"
      "username": "team-burnham"
    - "groups":
      - "team-riker"
      "rolearn": "arn:aws:iam::012345678901:role/team-riker-20230531130037207700000002"
      "username": "team-riker"
```

Now that our team roles are allowed to connect to the EKS cluster, let's add them to the Terraform outputs so they are easier to use. Execute the following commands to add the outputs; we need to add them both in the module and in our stack.

1. Update the module:

```=
cat <<'EOF' >> ~/environment/eks-blueprint/modules/eks_cluster/outputs.tf

output "eks_blueprints_platform_teams_configure_kubectl" {
  description = "Configure kubectl for Platform Team: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = "aws eks --region ${var.aws_region} update-kubeconfig --name ${module.eks.cluster_name} --role-arn ${module.eks_blueprints_platform_teams.iam_role_arn}"
}

output "eks_blueprints_dev_teams_configure_kubectl" {
  description = "Configure kubectl for each Dev Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = [for team in module.eks_blueprints_dev_teams : "aws eks --region ${var.aws_region} update-kubeconfig --name ${module.eks.cluster_name} --role-arn ${team.iam_role_arn}"]
}
EOF
```

2. Update the stack:

```=
cat <<'EOF' >> ~/environment/eks-blueprint/eks-blue/outputs.tf

output "eks_blueprints_platform_teams_configure_kubectl" {
  description = "Configure kubectl for Platform Team: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = module.eks_cluster.eks_blueprints_platform_teams_configure_kubectl
}

output "eks_blueprints_dev_teams_configure_kubectl" {
  description = "Configure kubectl for each Dev Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = module.eks_cluster.eks_blueprints_dev_teams_configure_kubectl
}
EOF
```

Then apply the changes:

```=
cd ~/environment/eks-blueprint/eks-blue

# It is always a good practice to use a dry-run command
terraform plan

# The auto-approve flag avoids you having to confirm you want to provision resources.
terraform apply -auto-approve
```
The new outputs should be similar to:

```
outputs:

configure_kubectl = "aws eks --region eu-west-1 update-kubeconfig --name eks-blueprint-blue"
eks_blueprints_dev_teams_configure_kubectl = [
  "aws eks --region eu-west-1 update-kubeconfig --name eks-blueprint-blue --role-arn arn:aws:iam::798082067117:role/team-burnham-20230531130037207500000001",
  "aws eks --region eu-west-1 update-kubeconfig --name eks-blueprint-blue --role-arn arn:aws:iam::798082067117:role/team-riker-20230531130037207700000002",
]
eks_blueprints_platform_teams_configure_kubectl = "aws eks --region eu-west-1 update-kubeconfig --name eks-blueprint-blue --role-arn arn:aws:iam::798082067117:role/team-platform-20230531115750651100000001"
eks_cluster_id = "eks-blueprint-blue"
```

We will have different commands to use depending on which role we want to impersonate.

# Access the cluster with Teams credentials

In the previous step, we created the EKS cluster, and the module outputs the `kubeconfig` information that we can use to connect to the cluster. Let's see an example of how we can use those instructions.

**1. Connect to the cluster as Team Riker**

When we created the EKS cluster, the current identity you are using in the workshop was automatically added to the `team-riker` Application Team thanks to the `users` parameter.

:::info
**Important**
We also added the role provided by the `eks_admin_role_name` variable
:::

The commands to configure kubectl for each team are available in the Terraform outputs we just added. Let's retrieve our connection commands:

```=
terraform output
```

You will see the kubectl configuration command to share with members of **Team Riker**. Copy the `aws eks update-kubeconfig ...` portion of the output that corresponds to **Team Riker**. This will be something similar to:

```
aws eks --region eu-west-1 update-kubeconfig --name eks-blueprint-blue --role-arn arn:aws:iam::798082067117:role/team-riker-20230531130037207700000002
```

:::warning
**Important**
Make sure to copy the command from your own Terraform output, not the example above. The region, account ID, and role name will be different.
:::

Now you can execute `kubectl` CLI commands in the team-riker namespace. Let's see if we can run the same commands as previously:

```=
# list nodes ? yes
kubectl get nodes

# list pods in team-riker namespace ? yes
kubectl get pods -n team-riker

# list all pods in all namespaces ? no
kubectl get pods -A

# can I create pods in kube-system namespace ? no
kubectl auth can-i create pods --namespace kube-system

# list service accounts in team-riker namespace ? yes
kubectl get sa -n team-riker

# list service accounts in default namespace ? no
kubectl get sa -n default

# can I create pods in team-riker namespace ? no (read-only)
kubectl auth can-i create pods --namespace team-riker

# can I list pods in team-riker namespace ? yes
kubectl auth can-i list pods --namespace team-riker
```

As expected, you can see that our `team-riker` role has read-only rights in the cluster, and only in the `team-riker` namespace.

You can see the resource quota usage in your namespace:

```=
kubectl get resourcequotas -n team-riker
```

Output:

```
NAME         AGE   REQUEST                                                                                          LIMIT
team-riker   80m   pods: 0/100, requests.cpu: 0/100, requests.memory: 0/20Gi, secrets: 0/10, services: 0/20         limits.cpu: 0/200, limits.memory: 0/50Gi
```

It is best practice to not create Kubernetes objects with kubectl directly, but to rely on continuous deployment tools.
We are going to see in our next exercise how we can leverage ArgoCD for that purpose!

**Connect with other teams**

Take some time to authenticate with the other teams from the output and see what you can do in the cluster.

:::info
This is how you will be able to provide different access to different namespaces for your teams in your shared EKS cluster.
:::

**Work with the Platform Team for the rest of the workshop**

Configure `kubectl` back to the identity of the creator of the EKS cluster, because we need admin access for the rest of the workshop:

```=
aws eks --region $AWS_REGION update-kubeconfig --name eks-blueprint-blue
```

:::warning
Verify you are using the correct `update-kubeconfig` command by using your own command from the outputs
:::

In the next section, we are going to bootstrap a [GitOps](https://www.gitops.tech/) tool named [ArgoCD](https://argoproj.github.io/cd/) that we will use to manage EKS add-ons and workload deployment inside our EKS cluster.

Check that you can list pods in all namespaces (`kubectl get pods -A`):

```
NAMESPACE     NAME                       READY   STATUS    RESTARTS      AGE
kube-system   aws-node-h66pd             1/1     Running   1 (14h ago)   21h
kube-system   aws-node-qdtjx             1/1     Running   1 (14h ago)   21h
kube-system   aws-node-wdbsg             1/1     Running   1 (14h ago)   21h
kube-system   coredns-6bc4667bcc-sgbm2   1/1     Running   1 (14h ago)   21h
kube-system   coredns-6bc4667bcc-vkchc   1/1     Running   1 (14h ago)   21h
kube-system   kube-proxy-4csbd           1/1     Running   1 (14h ago)   21h
kube-system   kube-proxy-779xp           1/1     Running   1 (14h ago)   21h
kube-system   kube-proxy-dppr2           1/1     Running   1 (14h ago)   21h
```

:::danger
**Verify you are using Platform Team access (Admin) before continuing**
Don't continue if you cannot list pods in all namespaces; we need admin rights for the next steps of the workshop.
:::

# Working with GitOps

**What is GitOps?**

![Screenshot 2024-04-03 at 06.27.25](https://hackmd.io/_uploads/BJKBjaqyR.png)

*Pioneered in 2017, GitOps is a way to do Kubernetes cluster management and application delivery. GitOps works by using Git as a single source of truth for declarative infrastructure and applications. With GitOps, software agents can alert on any divergence between Git and what's running in a cluster, and if there's a difference, Kubernetes reconcilers automatically update or roll back the cluster depending on the case. With Git at the center of your delivery pipelines, developers use familiar tools to make pull requests to accelerate and simplify both application deployments and operations tasks to Kubernetes.*

[Weaveworks, "Guide to GitOps"](https://www.weave.works/technologies/gitops/#what-is-gitops)

GitOps can be summarized as these two things:

* An operating model for Kubernetes and other cloud-native technologies, providing a set of best practices that unify Git deployment, management, and monitoring for containerized clusters and applications.
* A path towards a developer experience for managing applications, where end-to-end CI/CD pipelines and Git workflows are applied to both operations and development.

Companies want to go fast; they need to deploy more often, more reliably, and preferably with less overhead. GitOps is a fast and secure method for developers to manage and update complex applications and infrastructure running in Kubernetes.

**GitOps vs IaC**

Infrastructure as Code tools used for provisioning servers on demand have existed for quite some time. These tools originated from the concept of keeping infrastructure configurations versioned, backed up, and reproducible from source control.
With Kubernetes being almost completely declarative, combined with immutable containers, it is possible to extend these concepts to managing both applications and their resource dependencies. The ability to manage and compare the current state of both your infrastructure and your applications, so that you can test, deploy, roll back, and roll forward with a complete audit trail, all from within Git, is what encompasses the GitOps philosophy and its best practices. All of this is possible because Kubernetes is managed through declarative, immutable configuration.

**What is ArgoCD?**

[Argo CD](https://argoproj.github.io/cd/) is a declarative GitOps continuous delivery tool for Kubernetes. The Argo CD controller in the Kubernetes cluster continuously monitors the state of your cluster and compares it with the desired state defined in Git. If the cluster state does not match the desired state, Argo CD reports the deviation and provides visualizations to help developers manually or automatically sync the cluster state with the desired state.

Argo CD offers three ways to manage your application state:

* **CLI** - A powerful CLI that lets you create YAML resource definitions for your applications and sync them with your cluster.
* **User interface** - A web-based UI that lets you do the same things that you can do with the CLI. It also lets you visualize the Kubernetes resources that belong to the Argo CD applications that you create.
* **Declarative configuration** - Kubernetes manifests and Helm charts that are applied to the cluster.

![Screenshot 2024-04-03 at 06.24.37](https://hackmd.io/_uploads/ry3cca91A.png)

There are alternatives to ArgoCD, like [Flux](https://fluxcd.io/flux/concepts/). In this workshop we rely on ArgoCD, mainly because of its UI.

# Bootstrap ArgoCD

In this section, we are going to bootstrap ArgoCD as our GitOps engine. We will indicate in our configuration that we want to use our forked workloads repository. This means that any apps the Developer Teams want to deploy will need to be defined in this repository so that ArgoCD is aware of them.

:::info
You can learn more about ArgoCD and how it implements GitOps [here](https://argo-cd.readthedocs.io/en/stable/).
:::

We will also configure the `eks-blueprints-add-ons` repository to manage the EKS Kubernetes add-ons for our cluster using ArgoCD. Deploying the Kubernetes add-ons with GitOps has several advantages, such as the fact that their state will always be synchronized with the Git repository thanks to the ArgoCD controller.

**Git Repositories**

![argocd-eks-blue](https://hackmd.io/_uploads/BkREapqk0.png)

We will reuse our demo Git repositories: `eks-blueprints-workloads`, which contains sample workloads to deploy with ArgoCD, and another one for our EKS add-ons:

1. The repository https://github.com/aws-samples/eks-blueprints-workloads.git will be used for deploying **workloads** with ArgoCD. ***Please fork this repository to your own GitHub account before continuing.***
2. The repository https://github.com/aws-samples/eks-blueprints-add-ons.git will be used to install **add-ons** with ArgoCD. ***You don't need to fork this repository.***

:::warning
Make sure you have forked the workloads repository to your GitHub account before continuing.
:::

**1. Add Argo application configuration**

The first thing we need to do is augment our `locals.tf` definition in the `~/environment/eks-blueprint/modules/eks_cluster/locals.tf` file with the two new locals `addons_application` and `workload_application`, as shown below.
Replace the entire `locals` section with the command below:

```=
cat <<'EOF' > ~/environment/eks-blueprint/modules/eks_cluster/locals.tf
locals {
  environment = var.environment_name
  service     = var.service_name

  env  = local.environment
  name = "${local.environment}-${local.service}"

  # Mapping
  cluster_version            = var.cluster_version
  argocd_secret_manager_name = var.argocd_secret_manager_name_suffix
  eks_admin_role_name        = var.eks_admin_role_name

  #addons_repo_url        = var.addons_repo_url
  #workload_repo_path     = var.workload_repo_path
  #workload_repo_url      = var.workload_repo_url
  #workload_repo_revision = var.workload_repo_revision

  tag_val_vpc            = local.environment
  tag_val_public_subnet  = "${local.environment}-public-"
  tag_val_private_subnet = "${local.environment}-private-"

  node_group_name = "managed-ondemand"

  tags = {
    Blueprint  = local.name
    GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
  }

  #---------------------------------------------------------------
  # ARGOCD ADD-ON APPLICATION
  #---------------------------------------------------------------
  # At this time (with the new v5 add-on repository), the add-ons need to be managed by Terraform and not ArgoCD
  addons_application = {
    path               = "chart"
    repo_url           = local.addons_repo_url
    add_on_application = true
  }

  #---------------------------------------------------------------
  # ARGOCD WORKLOAD APPLICATION
  #---------------------------------------------------------------
  workload_application = {
    path               = local.workload_repo_path # <-- we could also do blue/green on the workload repo path, like: envs/dev-blue / envs/dev-green
    repo_url           = local.workload_repo_url
    target_revision    = local.workload_repo_revision
    add_on_application = false
    values = {
      labels = {
        env = local.env
      }
      spec = {
        source = {
          repoURL        = local.workload_repo_url
          targetRevision = local.workload_repo_revision
        }
        blueprint   = "terraform"
        clusterName = local.name
        #karpenterInstanceProfile = module.karpenter.instance_profile_name # Activate to enable Karpenter manifests (only when the Karpenter add-on will be enabled in the Karpenter workshop)
        env = local.env
      }
    }
  }
}
EOF
```
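Optionally, you can sanity-check that the file we just generated parses as valid HCL before moving on. This is a minimal sketch; it simply runs `terraform fmt` on the module directory, which fails if a file cannot be parsed:

```=
# Re-format the module directory; terraform fmt errors out on unparsable HCL
terraform fmt ~/environment/eks-blueprint/modules/eks_cluster/
```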
**2. Configure additional parameters**

Append the following variable definitions to the module's `variables.tf`:

```=
cat <<'EOF' >> ~/environment/eks-blueprint/modules/eks_cluster/variables.tf
variable "workload_repo_url" {
  type        = string
  description = "Git repo URL for the ArgoCD workload deployment"
  default     = "https://github.com/aws-samples/eks-blueprints-workloads.git"
}

variable "workload_repo_revision" {
  type        = string
  description = "Git repo revision in workload_repo_url for the ArgoCD workload deployment"
  default     = "main"
}

variable "workload_repo_path" {
  type        = string
  description = "Git repo path in workload_repo_url for the ArgoCD workload deployment"
  default     = "envs/dev"
}

variable "addons_repo_url" {
  type        = string
  description = "Git repo URL for the ArgoCD addons deployment"
  default     = "https://github.com/aws-samples/eks-blueprints-add-ons.git"
}
EOF
```

Append the same parameters to the `eks-blue` environment:

```=
cat <<'EOF' >> ~/environment/eks-blueprint/eks-blue/variables.tf
variable "workload_repo_url" {
  type        = string
  description = "Git repo URL for the ArgoCD workload deployment"
  default     = "https://github.com/aws-samples/eks-blueprints-workloads.git"
}

variable "workload_repo_secret" {
  type        = string
  description = "Secrets Manager secret name hosting the GitHub SSH key to access the private repository"
  default     = "github-blueprint-ssh-key"
}

variable "workload_repo_revision" {
  type        = string
  description = "Git repo revision in workload_repo_url for the ArgoCD workload deployment"
  default     = "main"
}

variable "workload_repo_path" {
  type        = string
  description = "Git repo path in workload_repo_url for the ArgoCD workload deployment"
  default     = "envs/dev"
}

variable "addons_repo_url" {
  type        = string
  description = "Git repo URL for the ArgoCD addons deployment"
  default     = "https://github.com/aws-samples/eks-blueprints-add-ons.git"
}
EOF
```

**3. Configure variables**

First, export an environment variable with your GitHub username, which will be used in the next command:

```=
export GITHUB_USER=<YOUR_GITHUB_USER>
```

Then execute this command to add the variables to your `terraform.tfvars` configuration file:

```=
cat >> ~/environment/eks-blueprint/terraform.tfvars <<EOF
addons_repo_url        = "https://github.com/aws-samples/eks-blueprints-add-ons.git"
workload_repo_url      = "https://github.com/${GITHUB_USER}/eks-blueprints-workloads.git"
workload_repo_revision = "main"
workload_repo_path     = "envs/dev"
EOF
```

:::warning
**Important**
Since we forked the workload repository, make sure the `workload_repo_url` value https://github.com/${GITHUB_USER}/eks-blueprints-workloads.git points to your fork.
:::
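A quick way to confirm the substitution worked (a minimal check using the file path from the previous command; if `GITHUB_USER` was empty, your username will be missing from the URL):

```=
# The workload_repo_url line should contain your GitHub username, not aws-samples
grep repo_url ~/environment/eks-blueprint/terraform.tfvars
```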
**4. Pass the variables from eks-blue to our module**

We need the variables we just defined in `eks-blue` to be passed to our `eks_cluster` Terraform module. For that, uncomment the corresponding lines in our `eks-blue/main.tf` file. You can do it manually or with the following command:

```=
# Uncomment the four *_repo_* variable assignments in eks-blue/main.tf
for x in addons_repo_url workload_repo_path workload_repo_url workload_repo_revision; do
  sed -i "s/^\(\s*\)#\($x\s*=\s*var.$x\)/\1\2/" ~/environment/eks-blueprint/eks-blue/main.tf
done
```

Check the configuration:

```=
cat ~/environment/eks-blueprint/eks-blue/main.tf
```

Output:

```
module "eks_cluster" {
  source = "../modules/eks_cluster"

  aws_region      = var.aws_region
  service_name    = "blue"
  cluster_version = "1.25"

  environment_name    = var.environment_name
  eks_admin_role_name = var.eks_admin_role_name

  argocd_secret_manager_name_suffix = var.argocd_secret_manager_name_suffix

  addons_repo_url = var.addons_repo_url

  workload_repo_url      = var.workload_repo_url
  workload_repo_revision = var.workload_repo_revision
  workload_repo_path     = var.workload_repo_path
}
```

**5. Update the mapping from variables to locals**

Execute this command:

```=
# Uncomment the matching *_repo_* lines in the module's locals.tf
for x in addons_repo_url workload_repo_path workload_repo_url workload_repo_revision; do
  sed -i "s/^\(\s*\)#\($x\s*=\s*var.$x\)/\1\2/" ~/environment/eks-blueprint/modules/eks_cluster/locals.tf
done
```

The `locals.tf` file should then be similar to:

```=
cat ~/environment/eks-blueprint/modules/eks_cluster/locals.tf
```

```
locals {
  environment = var.environment_name
  service     = var.service_name

  env  = local.environment
  name = "${local.environment}-${local.service}"

  # Mapping
  cluster_version            = var.cluster_version
  argocd_secret_manager_name = var.argocd_secret_manager_name_suffix
  eks_admin_role_name        = var.eks_admin_role_name

  addons_repo_url        = var.addons_repo_url
  workload_repo_path     = var.workload_repo_path
  workload_repo_url      = var.workload_repo_url
  workload_repo_revision = var.workload_repo_revision

  tag_val_vpc            = local.environment
  tag_val_public_subnet  = "${local.environment}-public-"
  tag_val_private_subnet = "${local.environment}-private-"

  node_group_name = "managed-ondemand"

  tags = {
    Blueprint  = local.name
    GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
  }
...
```

**6. Add the EKS Blueprints add-ons Terraform module to the main file**

Add the `kubernetes_addons` module at the end of our `main.tf`. To have ArgoCD manage cluster add-ons, we set the `argocd_manage_add_ons` property to `true`. With this setting, the Terraform framework still provisions the necessary AWS resources, such as IAM roles and policies, but it does not apply the Helm charts directly via the Terraform Helm provider; Argo handles that instead. We also pass a custom Helm `set` value so that the ArgoCD UI is exposed through an AWS load balancer. (Ideally, we would expose it through a secure ingress, but a load balancer is simpler for this workshop.)

This will configure the ArgoCD add-on and allow it to deploy additional Kubernetes add-ons using GitOps. Execute this command for your `main.tf` file:

```=
cat <<'EOF' >> ~/environment/eks-blueprint/modules/eks_cluster/main.tf
module "kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints?ref=blueprints-workshops/modules/kubernetes-addons"

  eks_cluster_id = module.eks.cluster_name

  #---------------------------------------------------------------
  # ARGO CD ADD-ON
  #---------------------------------------------------------------
  enable_argocd         = true
  argocd_manage_add_ons = true # Indicates that ArgoCD is responsible for managing/deploying add-ons.
  argocd_applications = {
    addons = local.addons_application
    #workloads = local.workload_application # uncommented in the "Deploy Workload" step
  }

  argocd_helm_config = {
    set_sensitive = [
      {
        name  = "configs.secret.argocdServerAdminPassword"
        value = bcrypt(data.aws_secretsmanager_secret_version.admin_password_version.secret_string)
      }
    ]
    set = [
      {
        name  = "server.service.type"
        value = "LoadBalancer"
      }
    ]
  }

  #---------------------------------------------------------------
  # EKS Managed Add-ons
  # https://aws-ia.github.io/terraform-aws-eks-blueprints/add-ons/
  #---------------------------------------------------------------
  enable_amazon_eks_coredns            = true
  enable_amazon_eks_kube_proxy         = true
  enable_amazon_eks_vpc_cni            = true
  enable_amazon_eks_aws_ebs_csi_driver = true

  #---------------------------------------------------------------
  # ADD-ONS - You can add additional add-ons here
  # https://aws-ia.github.io/terraform-aws-eks-blueprints/add-ons/
  #---------------------------------------------------------------
  enable_aws_load_balancer_controller = true
  enable_aws_for_fluentbit            = true
  enable_metrics_server               = true
}
EOF
```

Now that we've added the `kubernetes_addons` module and configured ArgoCD, let's apply our changes:

```=
cd ~/environment/eks-blueprint/eks-blue

# We added a new module, so we must re-initialize Terraform
terraform init

# It is always good practice to start with a dry run
terraform plan

# Apply the changes
terraform apply -auto-approve
```

# Validate ArgoCD deployment

To validate that ArgoCD is now running in our cluster, execute the following:

```=
kubectl get all -n argocd
```

Wait about 2 minutes for the AWS load balancer to be created, then retrieve its URL:

```=
export ARGOCD_SERVER=$(kubectl get svc argo-cd-argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname')
echo export ARGOCD_SERVER=\"$ARGOCD_SERVER\" >> ~/.bashrc
echo "https://$ARGOCD_SERVER"
```

Open a new browser tab and paste in the URL from the previous command. You will see the ArgoCD UI.

:::warning
**Important**
ArgoCD is exposed using a self-signed certificate, so you'll need to accept the security exception in your browser to access it.
:::

**Query for admin password**

Retrieve the generated secret for the ArgoCD UI admin password.

:::info
**Note**
We could instead create a Secrets Manager password for Argo with Terraform; see this [example](https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/examples/blue-green-upgrade/environment/main.tf#L110-L125).
:::

```=
ARGOCD_PWD=$(aws secretsmanager get-secret-value --secret-id argocd-admin-secret.eks-blueprint | jq -r '.SecretString')
echo export ARGOCD_PWD=\"$ARGOCD_PWD\" >> ~/.bashrc
echo "ArgoCD admin password: $ARGOCD_PWD"
```

**Login with the CLI**

:::info
**Note**
For the purpose of this lab, the ArgoCD CLI has been installed for you. You can learn more about installing the CLI tool by following the [instructions](https://argo-cd.readthedocs.io/en/stable/cli_installation/).
:::
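You can quickly confirm the CLI is available before logging in; `argocd version --client` prints only the client version and does not require a server connection:

```=
# Check that the ArgoCD CLI is installed (no server connection needed)
argocd version --client
```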
```=
argocd login $ARGOCD_SERVER --username admin --password $ARGOCD_PWD --insecure
```

Then we can use the CLI to interact with ArgoCD:

```=
# List ArgoCD applications
argocd app list
```

```
NAME                                  CLUSTER                         NAMESPACE    PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                                        PATH                                   TARGET
argocd/addons                         https://kubernetes.default.svc               default  Synced  Healthy  Auto-Prune  <none>      https://github.com/aws-samples/eks-blueprints-add-ons.git  chart                                  HEAD
argocd/aws-load-balancer-controller   https://kubernetes.default.svc  kube-system  default  Synced  Healthy  Auto-Prune  <none>      https://github.com/aws-samples/eks-blueprints-add-ons.git  add-ons/aws-load-balancer-controller  HEAD
argocd/metrics-server                 https://kubernetes.default.svc  kube-system  default  Synced  Healthy  Auto-Prune  <none>      https://github.com/aws-samples/eks-blueprints-add-ons.git  add-ons/metrics-server                 HEAD
```

**Login to the UI**

Retrieve the ArgoCD URL:

```=
echo "https://$ARGOCD_SERVER"
```

* The username is `admin`.
* The password is the result of the "Query for admin password" command above.

:::info
Save the password in your web browser, as the ArgoCD UI will log you out automatically.
:::

At this step, you should be able to see the Argo UI:

![argocdui](https://hackmd.io/_uploads/HJw-FyiJR.png)

:::info
For any future [available add-ons](https://aws-ia.github.io/terraform-aws-eks-blueprints/add-ons/) you wish to enable, simply follow the steps above by modifying the `kubernetes_addons` module within the `modules/eks_cluster/main.tf` file and running `terraform apply` again in the `eks-blue` directory.
:::

In the Argo UI, you can see that we have several applications deployed.

Add-ons:

* aws-load-balancer-controller
* aws-for-fluentbit
* metrics-server

The EKS Blueprints can deploy add-ons through [EKS managed add-ons](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html) when they are available, which is the case for the EBS CSI driver, CoreDNS, kube-proxy, and the VPC CNI. In that case, it is not ArgoCD that manages them. You can view them in the [EKS Console](https://console.aws.amazon.com/eks/home?#/clusters/eks-blueprint-blue?selectedTab=cluster-add-ons-tab):

![eks-managed-addons](https://hackmd.io/_uploads/H1zTKkikA.png)

For the next module, we will work as members of Team Riker.

# Deploy Workload

Now that the cluster is ready and the **Platform Team** has onboarded the **Application Team Riker**, they are ready to deploy their workloads.

In the following exercise, you are going to work from your fork of the [`eks-blueprints-workloads`](https://github.com/aws-samples/eks-blueprints-workloads.git) repository as a member of Team Riker, and you will deploy your workloads only by interacting with the Git repository. We will be deploying the Team Riker static site behind an AWS Application Load Balancer.

**Team Riker Goals**

The team has a static website that they need to publish. Changes should be tracked by source control using GitOps. This means that if a feature branch is merged into the main branch, a "sync" is triggered and the app is updated seamlessly. All of this work will be done within the Riker Team's environment in EKS/Kubernetes.

The following is a list of key features of this workload:

* A simple static website featuring great ski photography.
* In a real environment, we could add a custom FQDN and an associated TLS certificate. In this lab, we can't use a custom domain, so we will stick with HTTP on the default load balancer domain name.

As we mentioned earlier in our workshop, we use Helm to package apps and deploy workloads.
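Since the workload repository is packaged as Helm charts, you can optionally preview locally what ArgoCD will render from it. This is a sketch under assumptions: it assumes `git` and Helm v3 are available in your environment, that `GITHUB_USER` is still exported from the earlier step, and the temporary clone path is chosen here purely for illustration:

```=
# Clone your fork to a temporary location (path chosen for illustration only)
git clone "https://github.com/${GITHUB_USER}/eks-blueprints-workloads.git" /tmp/eks-blueprints-workloads

# Render the App of Apps chart with its default values and show the first lines
helm template workloads /tmp/eks-blueprints-workloads/envs/dev | head -n 40
```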
The workloads repository is the one recognized by ArgoCD (already set up by the Platform Team).

**Meet the ArgoCD Workload Application repository**

We have created a [workload repository sample](https://github.com/aws-samples/eks-blueprints-workloads) following the [ArgoCD App of Apps pattern](https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern).

:::info
Fork this repository if you have not already done so, and check that your fork's URL is the one configured in `workload_repo_url` in your `terraform.tfvars` file.
:::

**Terraform configuration**

In the `workload_application` block of the `locals.tf` file, we previously added the configuration that declares the ArgoCD workload application:

```=
grep -A 25 workload_application ~/environment/eks-blueprint/modules/eks_cluster/locals.tf
```

Output:

```
  workload_application = {
    path               = local.workload_repo_path # <-- we could also do blue/green on the workload repo path, like: envs/dev-blue / envs/dev-green
    repo_url           = local.workload_repo_url
    target_revision    = local.workload_repo_revision
    add_on_application = false
    values = {
      labels = {
        env = local.env
      }
      spec = {
        source = {
          repoURL        = local.workload_repo_url
          targetRevision = local.workload_repo_revision
        }
        blueprint   = "terraform"
        clusterName = local.name
        #karpenterInstanceProfile = module.karpenter.instance_profile_name # Activate to enable Karpenter manifests (only when the Karpenter add-on will be enabled in the Karpenter workshop)
        env = local.env
      }
    }
  }
}
```

It uses variables we have defined in our `terraform.tfvars` file:

```=
cat ~/environment/eks-blueprint/terraform.tfvars
```

You should see something similar, with your current region and your GitHub username in the URL of the repository you forked:

```
aws_region          = "eu-west-1"
environment_name    = "eks-blueprint"
eks_admin_role_name = "WSParticipantRole"

addons_repo_url        = "https://github.com/aws-samples/eks-blueprints-add-ons.git"
workload_repo_url      = "https://github.com/<YOUR_GITHUB_USER>/eks-blueprints-workloads.git"
workload_repo_revision = "main"
workload_repo_path     = "envs/dev"
```

We configure `workload_repo_path` to be `envs/dev`, which means that ArgoCD will synchronize the content of this repo/path into our EKS cluster.

**The envs/dev directory**

This is what the [target](https://github.com/aws-samples/eks-blueprints-workloads/tree/main/envs/dev) of our configuration looks like:

```
envs/dev/
├── Chart.yaml
├── templates
│   ├── team-burnham.yaml
│   ├── team-carmen.yaml
│   ├── team-geordi.yaml
│   └── team-riker.yaml
└── values.yaml
```

You can see that this structure is a [Helm chart](https://helm.sh/) in which we define several teams' workloads. If you are already familiar with Helm charts: kudos!

The file [envs/dev/values.yaml](https://github.com/aws-samples/eks-blueprints-workloads/blob/main/envs/dev/values.yaml) is configured with default values:

```
spec:
  destination:
    server: https://kubernetes.default.svc
  source:
    repoURL: https://github.com/aws-samples/eks-blueprints-workloads # This will be overridden by our Terraform workload_application.values.spec.source.repoURL value.
    targetRevision: main
...
```

but we rely on our Terraform local `workload_application.values` to override those parameters (at minimum, we changed `source.repoURL` to point to your forked repository through Terraform).

**The Team Riker Application**

Now, let's have a look at the [team-riker.yaml](https://github.com/aws-samples/eks-blueprints-workloads/blob/main/envs/dev/templates/team-riker.yaml#L18) Helm template file.
It's an [ArgoCD Application](https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications) defining the `team-riker` application, with its source in the same GitHub repository under the path `teams/team-riker/dev`.

Let's look at the [teams/team-riker/dev](https://github.com/seb-tmp/eks-blueprints-workloads/tree/main/teams/team-riker/dev) directory structure:

```
├── Chart.yaml
├── templates
│   ├── 2048.yaml
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── service.yaml
└── values.yaml
```

Again, it uses the Helm chart format. The files under the templates directory are rendered with Helm and deployed into the `team-riker` namespace of the EKS cluster. This is known as the [App of Apps pattern](https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern).

![terraform-argo-app-app](https://hackmd.io/_uploads/By-OTJiyA.png)

**Activate our workload GitOps with the Terraform code**

Remember: in the `main.tf` file, when we configured Argo, we chose to activate only the add-ons repository. Go back to Cloud9, update `~/environment/eks-blueprint/modules/eks_cluster/main.tf`, and uncomment the workloads application:

```=
sed -i "s/^\(\s*\)#\(workloads = local.workload_application\)/\1\2/" ~/environment/eks-blueprint/modules/eks_cluster/main.tf
```

Check the result, which should look like:

```=
grep -A 4 argocd_application ~/environment/eks-blueprint/modules/eks_cluster/main.tf
```

```
  argocd_applications = {
    addons = local.addons_application
    workloads = local.workload_application # uncommented in the "Deploy Workload" step
  }
```

Then apply the changes:

```=
cd ~/environment/eks-blueprint/eks-blue

# It is always good practice to start with a dry run
terraform plan

terraform apply -auto-approve
```

ArgoCD now watches the main branch of your forked workload repository and automatically syncs its content into our EKS cluster. Your ArgoCD dashboard should look like the following:

![argo_dashboard](https://hackmd.io/_uploads/H1qMCkokC.png)

:::info
If changes are not appearing, you may need to resync the workloads application in the ArgoCD UI: click on *workloads*, then click the *Sync* button.
:::

In the Argo UI, click on the team-riker box. You will see all the Kubernetes objects that are deployed in the team-riker namespace:

![team_riker_app](https://hackmd.io/_uploads/B1oD0JskC.png)

**Add our website manifests for the new SkiApp**

As members of Team Riker, we were asked to deploy a new website in our Kubernetes namespace: the **SkiApp application**. To do this, we will need to add some Kubernetes manifests to the `teams/team-riker/dev/templates` directory.

:::info
**Important**
There are several ways to do this. You can clone your repo, edit the files with your favorite IDE, and push them back to GitHub; use GitHub Codespaces for a remote VS Code experience and make changes there; or push your changes directly through the GitHub web interface.
:::

**Create a GitHub Codespace from your fork (works best with Chrome or Firefox)**

![github-codespace](https://hackmd.io/_uploads/rJ9-ygskC.png)

We are going to create a new directory and files under `teams/team-riker/dev/templates`, which contain the website manifests we want to deploy.
From the root directory of the Git repository, run the following commands:

```=
mkdir -p teams/team-riker/dev/templates/alb-skiapp

curl 'https://static.us-east-1.prod.workshops.aws/8e45955c-68f9-4b13-bf1d-ad47716531db/assets/alb-skiapp/deployment.yaml?Key-Pair-Id=K36Q2WVO3JP7QD&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9zdGF0aWMudXMtZWFzdC0xLnByb2Qud29ya3Nob3BzLmF3cy84ZTQ1OTU1Yy02OGY5LTRiMTMtYmYxZC1hZDQ3NzE2NTMxZGIvKiIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcxMjc1OTAwNH19fV19&Signature=QAHdxODcaOVkxWlU-PCtKGMDKhoeOetyLKlZSbSoWiIkuPUaLL20l-aGgbwcj-Sq9YnO2D6XaAY7rBBqJU3bNT7Oz40FY~7jGEBzMXbzYSmHBjkKTz~FuDaakhiwyb54SHvKYKxFy3jRzNiHNC1FYWADZQXg~Uwlm1WLJf49ytbOVDErnYFw8k32Xfs3PO27pMTBn2h9yWcXRZNkZR48zxzOkU0j9RgMhcfFWznw1IYkFiBiTAcpCNPtUVpHPSsUUJR6YscT2pTINS3P6O41G--9JvwWckZ5ToLy~Y8-kFWr-uK5ijJrERJ8VBhLdv9kHKtsGKDVeaeH3T-kOk5tkg__' --output teams/team-riker/dev/templates/alb-skiapp/deployment.yaml

curl 'https://static.us-east-1.prod.workshops.aws/8e45955c-68f9-4b13-bf1d-ad47716531db/assets/alb-skiapp/ingress.yaml?Key-Pair-Id=K36Q2WVO3JP7QD&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9zdGF0aWMudXMtZWFzdC0xLnByb2Qud29ya3Nob3BzLmF3cy84ZTQ1OTU1Yy02OGY5LTRiMTMtYmYxZC1hZDQ3NzE2NTMxZGIvKiIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcxMjc1OTAwNH19fV19&Signature=QAHdxODcaOVkxWlU-PCtKGMDKhoeOetyLKlZSbSoWiIkuPUaLL20l-aGgbwcj-Sq9YnO2D6XaAY7rBBqJU3bNT7Oz40FY~7jGEBzMXbzYSmHBjkKTz~FuDaakhiwyb54SHvKYKxFy3jRzNiHNC1FYWADZQXg~Uwlm1WLJf49ytbOVDErnYFw8k32Xfs3PO27pMTBn2h9yWcXRZNkZR48zxzOkU0j9RgMhcfFWznw1IYkFiBiTAcpCNPtUVpHPSsUUJR6YscT2pTINS3P6O41G--9JvwWckZ5ToLy~Y8-kFWr-uK5ijJrERJ8VBhLdv9kHKtsGKDVeaeH3T-kOk5tkg__' --output teams/team-riker/dev/templates/alb-skiapp/ingress.yaml

curl 'https://static.us-east-1.prod.workshops.aws/8e45955c-68f9-4b13-bf1d-ad47716531db/assets/alb-skiapp/service.yaml?Key-Pair-Id=K36Q2WVO3JP7QD&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9zdGF0aWMudXMtZWFzdC0xLnByb2Qud29ya3Nob3BzLmF3cy84ZTQ1OTU1Yy02OGY5LTRiMTMtYmYxZC1hZDQ3NzE2NTMxZGIvKiIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcxMjc1OTAwNH19fV19&Signature=QAHdxODcaOVkxWlU-PCtKGMDKhoeOetyLKlZSbSoWiIkuPUaLL20l-aGgbwcj-Sq9YnO2D6XaAY7rBBqJU3bNT7Oz40FY~7jGEBzMXbzYSmHBjkKTz~FuDaakhiwyb54SHvKYKxFy3jRzNiHNC1FYWADZQXg~Uwlm1WLJf49ytbOVDErnYFw8k32Xfs3PO27pMTBn2h9yWcXRZNkZR48zxzOkU0j9RgMhcfFWznw1IYkFiBiTAcpCNPtUVpHPSsUUJR6YscT2pTINS3P6O41G--9JvwWckZ5ToLy~Y8-kFWr-uK5ijJrERJ8VBhLdv9kHKtsGKDVeaeH3T-kOk5tkg__' --output teams/team-riker/dev/templates/alb-skiapp/service.yaml
```

The repository should now contain:

```=
ls -la ~/environment/code-eks-blueprint
```

```
teams/team-riker/dev/templates/alb-skiapp
├── deployment.yaml
├── ingress.yaml
└── service.yaml
```

You can explore the three files to understand what we are adding.

:::info
**Important**
In the EKS Blueprints, we have only onboarded Team Riker so far. If we deploy as is, all four teams' applications will be created. Since we only focus on team-riker in this workshop, we will remove the unnecessary teams to avoid any confusion or conflict.
:::

Remove the other teams we don't need for now, as well as another app from team-riker. Execute this command in the Codespace, or remove the files from the GitHub UI if you prefer:

```=
rm envs/dev/templates/team-burnham.yaml
rm envs/dev/templates/team-carmen.yaml
rm envs/dev/templates/team-geordie.yaml
rm teams/team-riker/dev/templates/2048.yaml
```
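Before committing, it's worth reviewing exactly what you added and removed. A quick check with standard Git commands (nothing workshop-specific assumed):

```=
# Show new, modified, and deleted files in the working tree
git status

# Summarize tracked-file changes (deletions/modifications) against the last commit
git diff --stat HEAD
```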
When you are ready, check in your code to GitHub using the following commands:

```=
git add .
git commit -m "feature: adding skiapp and keeping only team-riker"
git push
```

**See it go live in ArgoCD**

Go back to the ArgoCD UI and click on the Sync button of the team-riker application.

:::info
Argo auto-sync is [enabled](https://github.com/seb-tmp/eks-blueprints-workloads/blob/main/envs/dev/templates/team-riker.yaml?#L22-L24) by default in the team-riker application, but you can speed things up by manually clicking the Sync button.
:::

You should see your last commit at the top of the screen and the new application appearing:

![skiapp-ingress](https://hackmd.io/_uploads/SJ3YleiJ0.png)

To access our Ski App application, click on the `skiapp-ingress`, shown in red in the previous image.

:::info
**Important**
It can take a few minutes for the load balancer to be created and the domain name to be propagated.
:::

![Screenshot 2024-04-03 at 09.08.57](https://hackmd.io/_uploads/SkJYflsyC.png)

:::info
**Important**
For a production application, we would configure our ingress to use a custom domain name and use the external-dns add-on to dynamically configure our Route 53 hosted zone from the ingress configuration. You can find a more complete EKS Blueprints example [here](https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/examples/blue-green-upgrade).
:::

Our **Riker Application Team** has successfully published their website to the EKS cluster provided by the **Platform Team**. This pattern can be reused with your actual applications.

:::info
**Congratulations!**
You have successfully deployed a new application in your EKS cluster using the GitOps pattern with ArgoCD and EKS Blueprints.
:::

# Call to Action

Go to the Amazon EKS Blueprints guide and try to complete the "Multi-Cluster centralized hub-spoke topology" tutorial: https://aws-ia.github.io/terraform-aws-eks-blueprints/patterns/gitops-multi-cluster-hub-spoke-argocd/

This tutorial guides you through deploying an Amazon EKS cluster with add-ons configured via ArgoCD in a multi-cluster hub-spoke topology, using the GitOps Bridge pattern.

![gitops-bridge-multi-cluster-hup-spoke.drawio](https://hackmd.io/_uploads/Hk6w4-jJ0.png)