# Workshop
## Lab 1
### Validate EKS Deployment
```bash
kubectl get pods -A
```
### Login to the ArgoCD Dashboard
```bash
export ARGOCD_SERVER=`kubectl get svc argo-cd-argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`
echo "https://$ARGOCD_SERVER"
```
Open a new browser tab and paste in the URL from the previous command. You should now see the ArgoCD UI.
:::info
Since the ArgoCD UI exposed this way uses a self-signed certificate, you'll need to accept the security exception in your browser to access it.
:::
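You can also quickly check from the terminal that the endpoint responds; a minimal sketch (the `-k` flag skips certificate verification because of the self-signed certificate noted above):
```bash
# Expect a 200 or a 3xx redirect code once the ArgoCD server is reachable
curl -k -s -o /dev/null -w "%{http_code}\n" "https://$ARGOCD_SERVER"
```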
Get the ArgoCD admin password by executing this command in your terminal:
```bash
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```
**Login to the UI:**
1. The username is `admin`.
2. The password is the output of the command above.
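If you prefer the terminal, you can also log in with the `argocd` CLI. This is optional and assumes the CLI is installed (it is not part of this workshop's setup):
```bash
# Capture the admin password and log in; --insecure accepts the self-signed certificate
ARGOCD_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d)
argocd login "$ARGOCD_SERVER" --username admin --password "$ARGOCD_PWD" --insecure
argocd app list
```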
At this step you should be able to see the ArgoCD UI.
In the ArgoCD UI, you can see that we have several Applications deployed:
* addons
* workloads
* team-burnham
* team-carmen
* team-riker
So we have 3 team applications already deployed. This is because we reuse an existing GitHub repo for the workloads. You can find the repo used and the ArgoCD configuration in the locals.tf and main.tf Terraform files.
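You can cross-check the same list from the terminal, since ArgoCD stores each Application as a Kubernetes custom resource in the argocd namespace:
```bash
# List the ArgoCD Application resources backing what you see in the UI
kubectl -n argocd get applications.argoproj.io
```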
## Deploy ECSDEMO microservice Applications
We will be deploying multiple microservices, much like in the real-world scenarios we face. We will deploy an application by adding 3 new teams in our EKS Blueprint (one team for each microservice), and then leveraging ArgoCD to deploy our services in their appropriate team environment (namespace) in the EKS cluster.
:::info
We focus here on the Continuous Deployment (CD) part, not on building the application microservices themselves.
:::

### Adding new Teams
We want to create dedicated Kubernetes manifests in our new teams' namespaces. For each service, we create a directory inside the EKS Blueprints folder and add the Kubernetes manifests that will be installed when the namespace is created:
* An ArgoCD AppProject definition. It defines a new project in ArgoCD, with our EKS cluster as the destination, authorized to sync to our namespace, and with the GitHub repository as the source for the deployment files.
* A LimitRange object to dynamically add and control Pod resources. We use a LimitRange so that we can control each team's usage in its namespace, and it automatically sets default requests and limits if the application does not provide them.
Run the commands below in your Cloud9 terminal to create the manifest files.
**ecsdemo-crystal**
```bash=
mkdir ecsdemo-crystal
cat << EOF > ecsdemo-crystal/crystal-app-project.yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: ecsdemo-crystal
  namespace: argocd
spec:
  destinations:
    - namespace: ecsdemo-crystal
      server: https://kubernetes.default.svc
  sourceRepos:
    - https://github.com/aws-containers/ecsdemo-crystal.git
EOF
cat << EOF > ecsdemo-crystal/limit-range.yaml
apiVersion: 'v1'
kind: 'LimitRange'
metadata:
  name: 'resource-limits'
  namespace: ecsdemo-crystal
spec:
  limits:
    - type: 'Container'
      max:
        cpu: '2'
        memory: '1Gi'
      min:
        cpu: '100m'
        memory: '4Mi'
      default:
        cpu: '300m'
        memory: '200Mi'
      defaultRequest:
        cpu: '200m'
        memory: '100Mi'
      maxLimitRequestRatio:
        cpu: '10'
EOF
```
**ecsdemo-nodejs**
```bash=
mkdir ecsdemo-nodejs
cat << EOF > ecsdemo-nodejs/nodejs-app-project.yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: ecsdemo-nodejs
  namespace: argocd
spec:
  destinations:
    - namespace: ecsdemo-nodejs
      server: https://kubernetes.default.svc
  sourceRepos:
    - https://github.com/aws-containers/ecsdemo-nodejs.git
EOF
cat << EOF > ecsdemo-nodejs/limit-range.yaml
apiVersion: 'v1'
kind: 'LimitRange'
metadata:
  name: 'resource-limits'
  namespace: ecsdemo-nodejs
spec:
  limits:
    - type: 'Container'
      max:
        cpu: '2'
        memory: '1Gi'
      min:
        cpu: '100m'
        memory: '4Mi'
      default:
        cpu: '300m'
        memory: '200Mi'
      defaultRequest:
        cpu: '200m'
        memory: '100Mi'
      maxLimitRequestRatio:
        cpu: '10'
EOF
```
**ecsdemo-frontend**
```bash=
mkdir ecsdemo-frontend
cat << EOF > ecsdemo-frontend/frontend-app-project.yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: ecsdemo-frontend
  namespace: argocd
spec:
  destinations:
    - namespace: ecsdemo-frontend
      server: https://kubernetes.default.svc
  sourceRepos:
    - https://github.com/aws-containers/ecsdemo-frontend.git
EOF
cat << EOF > ecsdemo-frontend/limit-range.yaml
apiVersion: 'v1'
kind: 'LimitRange'
metadata:
  name: 'resource-limits'
  namespace: ecsdemo-frontend
spec:
  limits:
    - type: 'Container'
      max:
        cpu: '2'
        memory: '1Gi'
      min:
        cpu: '100m'
        memory: '4Mi'
      default:
        cpu: '300m'
        memory: '200Mi'
      defaultRequest:
        cpu: '200m'
        memory: '100Mi'
      maxLimitRequestRatio:
        cpu: '10'
EOF
```
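Before wiring these directories into Terraform, you can optionally check that the generated manifests are well-formed. `--dry-run=client` validates without creating anything (the AppProject CRD already exists because ArgoCD is installed):
```bash
kubectl apply --dry-run=client \
  -f ecsdemo-crystal/ -f ecsdemo-nodejs/ -f ecsdemo-frontend/
```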
### Deploy new teams
Now we need to create the associated teams within Terraform. In the main.tf file, inside the eks_blueprints module, under application_teams and below the team-riker section, add the following team definitions:
```hcl-terraform=
  ecsdemo-frontend = {
    "labels" = {
      "elbv2.k8s.aws/pod-readiness-gate-inject" = "enabled",
      "appName"                                 = "ecsdemo-frontend",
      "projectName"                             = "ecsdemo",
      "environment"                             = "dev",
    }
    "quota" = {
      "requests.cpu"    = "10000m",
      "requests.memory" = "20Gi",
      "limits.cpu"      = "20000m",
      "limits.memory"   = "50Gi",
      "pods"            = "10",
      "secrets"         = "10",
      "services"        = "10"
    }
    ## Deploy manifest from specified directory
    manifests_dir = "./ecsdemo-frontend"
    users         = [data.aws_caller_identity.current.arn]
  }
  ecsdemo-crystal = {
    "labels" = {
      "appName"     = "ecsdemo-crystal",
      "projectName" = "ecsdemo",
      "environment" = "dev",
    }
    "quota" = {
      "requests.cpu"    = "10000m",
      "requests.memory" = "20Gi",
      "limits.cpu"      = "20000m",
      "limits.memory"   = "50Gi",
      "pods"            = "10",
      "secrets"         = "10",
      "services"        = "10"
    }
    ## Deploy manifest from specified directory
    manifests_dir = "./ecsdemo-crystal"
    users         = [data.aws_caller_identity.current.arn]
  }
  ecsdemo-nodejs = {
    "labels" = {
      "appName"     = "ecsdemo-nodejs",
      "projectName" = "ecsdemo",
      "environment" = "dev"
    }
    "quota" = {
      "requests.cpu"    = "10000m",
      "requests.memory" = "20Gi",
      "limits.cpu"      = "20000m",
      "limits.memory"   = "50Gi",
      "pods"            = "10",
      "secrets"         = "10",
      "services"        = "10"
    }
    ## Deploy manifest from specified directory
    manifests_dir = "./ecsdemo-nodejs"
    users         = [data.aws_caller_identity.current.arn]
  }
```
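Before applying in the next step, you can optionally sanity-check the updated configuration:
```bash
terraform fmt        # normalize formatting of the edited files
terraform validate   # catch syntax errors early
terraform plan       # review the resources that will be created
```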
Once we have added our 3 new teams, we just need to apply this to our cluster:
```bash=
terraform apply --auto-approve
```
You should see 3 new namespaces:
```bash=
kubectl get ns | grep ecsdemo
```
```
ecsdemo-crystal    Active   1m
ecsdemo-frontend   Active   1m
ecsdemo-nodejs     Active   1m
```
We can also check that our 3 new ArgoCD AppProjects have been created:
```bash=
kubectl get appproject -A
```
```
NAMESPACE   NAME               AGE
argocd      default            14d
argocd      ecsdemo-crystal    1m
argocd      ecsdemo-frontend   1m
argocd      ecsdemo-nodejs     1m
And that our LimitRange objects have been correctly created in each of our new namespaces:
```bash=
kubectl get limitrange -A
```
```
NAMESPACE          NAME              CREATED AT
ecsdemo-crystal    resource-limits   2022-07-18T13:48:49Z
ecsdemo-frontend   resource-limits   2022-07-18T13:48:48Z
ecsdemo-nodejs     resource-limits   2022-07-18T13:48:49Z
### Configure the ECSDEMO ArgoCD deployment repo
Now we are going to let ArgoCD know about our 3 ecsdemo microservices to deploy, one in each of our team namespaces.
Open locals.tf and add our workload definition:
```hcl-terraform=
#---------------------------------------------------------------
# ARGOCD ECSDEMO APPLICATION
#---------------------------------------------------------------
  ecsdemo_application = {
    path               = "multi-repo/argo-app-of-apps/dev"
    repo_url           = "https://github.com/seb-demo/eks-blueprints-workloads.git"
    add_on_application = false
  }
```
Then go to main.tf and add our application to the argocd_applications section:
```hcl-terraform=
  argocd_applications = {
    addons    = local.addon_application
    workloads = local.workload_application
    ecsdemo   = local.ecsdemo_application
  }
```
Apply the Terraform changes:
```bash=
terraform apply --auto-approve
```
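From the terminal, you can watch the new app-of-apps and its child applications reach a Synced/Healthy state:
```bash
kubectl -n argocd get applications.argoproj.io \
  -o custom-columns=NAME:.metadata.name,SYNC:.status.sync.status,HEALTH:.status.health.status
```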
**Let's walk through the application definition in the eks-blueprints-workloads GitHub repository:**
You can see the directory we will sync here: https://github.com/seb-demo/eks-blueprints-workloads/tree/main/multi-repo/argo-app-of-apps/dev
It has the following structure:
```
multi-repo/argo-app-of-apps/dev/
├── Chart.yaml
├── templates
│   ├── ecsdemo-crystal.yaml
│   ├── ecsdemo-frontend.yaml
│   └── ecsdemo-nodejs.yaml
└── values.yaml
```
The directory structure is in Helm chart format, and the values.yaml file contains, for each application, a reference to the repository that holds the code to deploy the app: https://github.com/seb-demo/eks-blueprints-workloads/blob/main/multi-repo/argo-app-of-apps/dev/values.yaml
```yaml=
spec:
  destination:
    server: https://kubernetes.default.svc
  apps:
    ecsdemoFrontend:
      repoURL: https://github.com/aws-containers/ecsdemo-frontend.git
      targetRevision: main
      path: kubernetes/helm/ecsdemo-frontend
    ecsdemoNodejs:
      repoURL: https://github.com/aws-containers/ecsdemo-nodejs.git
      targetRevision: main
      path: kubernetes/helm/ecsdemo-nodejs
    ecsdemoCrystal:
      repoURL: https://github.com/aws-containers/ecsdemo-crystal.git
      targetRevision: main
      path: kubernetes/helm/ecsdemo-crystal
```
If we open templates/ecsdemo-frontend.yaml, we can see the specific values that we want injected into our application's Helm deployment. You can update this file to adapt the deployment of the application to your needs.
```yaml=
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ecsdemo-frontend
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: ecsdemo-frontend
  destination:
    namespace: ecsdemo-frontend
    server: {{.Values.spec.destination.server}}
  source:
    repoURL: {{.Values.spec.apps.ecsdemoFrontend.repoURL}}
    targetRevision: {{.Values.spec.apps.ecsdemoFrontend.targetRevision}}
    path: {{.Values.spec.apps.ecsdemoFrontend.path}}
    helm:
      parameters:
        - name: ingress.enabled
          value: 'true'
        - name: ingress.className
          value: 'alb'
        - name: ingress.annotations.alb\.ingress\.kubernetes\.io/group\.name
          value: 'ecsdemo'
        - name: ingress.annotations.alb\.ingress\.kubernetes\.io/scheme
          value: 'internet-facing'
        - name: ingress.annotations.alb\.ingress\.kubernetes\.io/target-type
          value: 'ip'
        - name: ingress.annotations.alb\.ingress\.kubernetes\.io/listen-ports
          value: '[{"HTTP": 80}]'
        - name: ingress.annotations.alb\.ingress\.kubernetes\.io/tags
          value: 'Environment=dev,Team=ecsdemo'
        - name: ingress.hosts[0].host
          value: ''
        - name: ingress.hosts[0].paths[0].path
          value: '/'
        - name: ingress.hosts[0].paths[0].pathType
          value: 'Prefix'
        - name: replicaCount
          value: '3'
        - name: image.repository
          value: 'public.ecr.aws/seb-demo/ecsdemo-frontend'
        - name: resources.requests.cpu
          value: '200m'
        - name: resources.limits.cpu
          value: '400m'
        - name: resources.requests.memory
          value: '256Mi'
        - name: resources.limits.memory
          value: '512Mi'
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=false # https://github.com/argoproj/argo-cd/issues/7799
```
:::info
In the above example we added the resources requests/limits configuration and configured the Ingress to use the Application Load Balancer via dedicated annotations.
:::
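If you want to preview what ArgoCD will render from these parameters, here is a minimal local sketch using `helm template` (assuming git and helm are available in Cloud9, and using the chart path from the values.yaml above):
```bash
git clone https://github.com/aws-containers/ecsdemo-frontend.git /tmp/ecsdemo-frontend
helm template ecsdemo-frontend /tmp/ecsdemo-frontend/kubernetes/helm/ecsdemo-frontend \
  --namespace ecsdemo-frontend \
  --set replicaCount=3 \
  --set ingress.enabled=true \
  --set ingress.className=alb | head -n 60
```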
#### Validate deployment
The applications should be deployed automatically, or you can go to the ArgoCD UI and check whether they need synchronization.
Verify that the Pods are running:
```bash=
kubectl get pods -A | grep ecsdemo
```
```
ecsdemo-crystal    ecsdemo-crystal-56dd4d5f4-clxvx     1/1   Running   0   42s
ecsdemo-crystal    ecsdemo-crystal-56dd4d5f4-fn95g     1/1   Running   0   42s
ecsdemo-crystal    ecsdemo-crystal-56dd4d5f4-g7c86     1/1   Running   0   42s
ecsdemo-frontend   ecsdemo-frontend-7b84c6dd54-b66mv   1/1   Running   0   42s
ecsdemo-frontend   ecsdemo-frontend-7b84c6dd54-czdn4   1/1   Running   0   42s
ecsdemo-frontend   ecsdemo-frontend-7b84c6dd54-znh74   1/1   Running   0   42s
ecsdemo-nodejs     ecsdemo-nodejs-669bc64c56-7l5kc     1/1   Running   0   43s
ecsdemo-nodejs     ecsdemo-nodejs-669bc64c56-7xjk7     1/1   Running   0   43s
ecsdemo-nodejs     ecsdemo-nodejs-669bc64c56-mfklw     1/1   Running   0   43s
And the ecsdemo-frontend app should be exposed through an ALB configured via the Kubernetes Ingress:
```bash=
kubectl get ing -A | grep ecsdemo
```
```
ecsdemo-frontend ecsdemo-frontend alb * k8s-ecsdemo-f3cf86ec6a-1010682437.eu-west-1.elb.amazonaws.com 80 81s
```
You should be able to see our application, as shown at the top of this page, by connecting to the URL associated with the ALB.
:::spoiler
It will take a couple of minutes for the ALB to become active.
:::
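Once the ALB is active, you can also retrieve its URL and test it from the terminal (the Ingress name and namespace below match the output above):
```bash
ALB_URL=$(kubectl -n ecsdemo-frontend get ingress ecsdemo-frontend \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://$ALB_URL"
curl -s -o /dev/null -w "HTTP %{http_code}\n" "http://$ALB_URL"
```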
------
## Lab 2
### Upgrade EKS
#### Steps to Upgrade EKS cluster:
1. Change the version in the Terraform locals.tf file to the desired Kubernetes cluster version. See the example below:
```hcl-terraform
cluster_version = "1.22"
```
This will update the EKS Control Plane and Data Plane to version 1.22.
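Before making the change, you can confirm the versions currently running; the cluster name is a placeholder here, so substitute the name of your Blueprint cluster:
```bash
aws eks list-clusters --output text                  # find your cluster name
aws eks describe-cluster --name <your-cluster-name> \
  --query "cluster.version" --output text            # current control plane version
kubectl get nodes                                    # VERSION column shows the node kubelet versions
```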
2. Explicitly defining the release version in EKS Blueprints for Terraform gives more control over managed node group upgrades. Since we are upgrading the cluster from 1.21 to 1.22, we need to make sure the release version is also updated.
To retrieve the latest managed node group release version for EKS, run the AWS CLI command below:
```bash
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.22/amazon-linux-2/recommended/release_version --region us-west-2 --query "Parameter.Value" --output text
```
Update the release version value in the Terraform locals.tf file:
```hcl-terraform
mng_release_version = "1.22.12-20220824"
```
Apply the Terraform changes:
```bash=
terraform plan
terraform apply --auto-approve
```
3. If you are specifying a version for EKS managed add-ons, you will need to ensure the version used is compatible with the new cluster version, or use a data source to pull the appropriate version.
First, list the latest supported add-on versions:
```bash=
aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.22 --query "addons[*].addonVersions[*].addonVersion"
aws eks describe-addon-versions --addon-name coredns --kubernetes-version 1.22 --query "addons[*].addonVersions[*].addonVersion"
aws eks describe-addon-versions --addon-name kube-proxy --kubernetes-version 1.22 --query "addons[*].addonVersions[*].addonVersion"
aws eks describe-addon-versions --addon-name aws-ebs-csi-driver --kubernetes-version 1.22 --query "addons[*].addonVersions[*].addonVersion"
```
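If you prefer a compact view, this loop prints only the first version returned for each add-on (typically the most recent); cross-check it against the full lists above:
```bash
for ADDON in vpc-cni coredns kube-proxy aws-ebs-csi-driver; do
  echo -n "$ADDON: "
  aws eks describe-addon-versions --addon-name "$ADDON" --kubernetes-version 1.22 \
    --query "addons[0].addonVersions[0].addonVersion" --output text
done
```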
Update the add-on versions in the locals.tf file:
```hcl-terraform
eks_vpc_cni_version = "v1.11.3-eksbuild.1"
coredns_version = "v1.8.7-eksbuild.1"
kube_proxy_version = "v1.22.11-eksbuild.2"
ebs_csi_driver_version = "v1.10.0-eksbuild.1"
```
```bash=
terraform plan
terraform apply --auto-approve
```
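After the apply completes, you can spot-check the add-on versions actually running in the cluster; the container image tags correspond to the add-on versions:
```bash
kubectl -n kube-system get daemonset aws-node kube-proxy \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.template.spec.containers[0].image
kubectl -n kube-system get deployment coredns \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.template.spec.containers[0].image
```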
Doing this will:
- Upgrade the Control Plane to the version specified
- Update the Data Plane to ensure the compute resources are utilizing the corresponding AMI for the given cluster version
- Update addons to reflect the respective versions
## Important Note
Please note that you may need to update other Kubernetes add-ons deployed through Helm charts to match the new Kubernetes version.
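To see which Helm-managed add-ons might need a chart or app version bump, and to confirm the nodes picked up the new AMI, a quick check (assuming the helm CLI is available):
```bash
helm list -A                # Helm releases for add-ons deployed via charts
kubectl get nodes -o wide   # nodes should report the upgraded kubelet version and new AMI
```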