# AWS EKS Study Notes

###### tags: `AWS Features`

## Installation Steps

### [Installing or updating kubectl on Linux](https://docs.aws.amazon.com/zh_tw/eks/latest/userguide/install-kubectl.html)

1. Download the Amazon EKS-vended `kubectl` binary for your cluster's Kubernetes version from Amazon S3, using the command for your hardware platform
    * amd64 : `curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/linux/amd64/kubectl`
    * arm64 : `curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/linux/arm64/kubectl`
2. (Optional) Verify the downloaded binary against its SHA-256 checksum
    * amd64 : `curl -o kubectl.sha256 https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/linux/amd64/kubectl.sha256`
    * arm64 : `curl -o kubectl.sha256 https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/linux/arm64/kubectl.sha256`
3. Apply execute permissions to the binary
    * `chmod +x ./kubectl`
4. (Optional) Add the `$HOME/bin` path to your shell initialization file
    * `echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc`
5. After installing `kubectl`, verify its version with:
    * `kubectl version --short --client`

### [Installing or updating eksctl on Linux](https://docs.aws.amazon.com/zh_tw/eks/latest/userguide/eksctl.html)

1. Download and extract the latest release of `eksctl`
    * `curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp`
2. Move the extracted binary to `/usr/local/bin`
    * `sudo mv /tmp/eksctl /usr/local/bin`
3. Test that the installation succeeded
    * `eksctl version`

### Required command-line tools

1. kubectl: command-line tool for working with Kubernetes clusters
2. eksctl: command-line tool for working with EKS clusters; automates many individual tasks
3. AWS CLI: command-line tool for working with AWS services, including Amazon EKS
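Step 2 of the kubectl installation downloads a `.sha256` file but stops short of the actual check; the AWS guide performs it with `sha256sum -c`. A minimal local demo of the same verification pattern, using a dummy file since the real check needs the binaries downloaded above:

```shell
# Demo of the SHA-256 verification pattern on a dummy file.
# For the real check, run `sha256sum -c kubectl.sha256` in the
# directory containing the kubectl binary downloaded in step 1.
echo 'fake-binary' > kubectl-demo
sha256sum kubectl-demo > kubectl-demo.sha256
sha256sum -c kubectl-demo.sha256
# prints: kubectl-demo: OK
```

If the file is tampered with after the checksum is recorded, `sha256sum -c` exits non-zero and reports `FAILED` instead of `OK`.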
## Getting Started with Amazon EKS: eksctl

### Step 1: Create an Amazon EKS cluster and nodes

* Fargate – Linux: choose this node type to run Linux applications on AWS Fargate
    * `eksctl create cluster --name my-cluster --region region-code --fargate`
* Managed nodes – Linux: choose this node type to run Amazon Linux applications on Amazon EC2 instances
    * `eksctl create cluster --name my-cluster --region region-code`

```
# eksctl create cluster --name eva-cluster --region ap-northeast-1
2022-11-28 11:03:27 [ℹ]  eksctl version 0.120.0
2022-11-28 11:03:27 [ℹ]  using region ap-northeast-1
2022-11-28 11:03:28 [ℹ]  setting availability zones to [ap-northeast-1d ap-northeast-1a ap-northeast-1c]
2022-11-28 11:03:28 [ℹ]  subnets for ap-northeast-1d - public:192.168.0.0/19 private:192.168.96.0/19
2022-11-28 11:03:28 [ℹ]  subnets for ap-northeast-1a - public:192.168.32.0/19 private:192.168.128.0/19
2022-11-28 11:03:28 [ℹ]  subnets for ap-northeast-1c - public:192.168.64.0/19 private:192.168.160.0/19
...
2022-11-28 11:25:35 [✔]  EKS cluster "eva-cluster" in "ap-northeast-1" region is ready
```

### Step 2: View Kubernetes resources

1. View the cluster nodes: `kubectl get nodes -o wide`

```
# kubectl get nodes -o wide
NAME                                                         STATUS   ROLES    AGE     VERSION                INTERNAL-IP       EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
fargate-ip-192-168-146-204.ap-northeast-1.compute.internal   Ready    <none>   5m38s   v1.23.12-eks-1558457   192.168.146.204   <none>        Amazon Linux 2   4.14.294-220.533.amzn2.x86_64   containerd://1.6.6
fargate-ip-192-168-188-161.ap-northeast-1.compute.internal   Ready    <none>   5m19s   v1.23.12-eks-1558457   192.168.188.161   <none>        Amazon Linux 2   4.14.294-220.533.amzn2.x86_64   containerd://1.6.6
```
2. View the workloads running on the cluster: `kubectl get pods -A -o wide`

```
# kubectl get pods -A -o wide
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE    IP                NODE                                                         NOMINATED NODE   READINESS GATES
kube-system   coredns-6d97794bdd-6gsx6   1/1     Running   0          3h9m   192.168.146.204   fargate-ip-192-168-146-204.ap-northeast-1.compute.internal   <none>           <none>
kube-system   coredns-6d97794bdd-7ktt2   1/1     Running   0          3h9m   192.168.188.161   fargate-ip-192-168-188-161.ap-northeast-1.compute.internal   <none>           <none>
```

### Step 3: Delete your cluster and nodes

* `eksctl delete cluster --name my-cluster --region region-code`

```
# eksctl delete cluster --name eva-cluster --region ap-northeast-1
2022-11-28 14:36:56 [ℹ]  deleting EKS cluster "eva-cluster"
2022-11-28 14:39:06 [ℹ]  deleted Fargate profile "fp-default"
2022-11-28 14:39:06 [ℹ]  deleted 1 Fargate profile(s)
...
2022-11-28 14:39:09 [ℹ]  will delete stack "eksctl-eva-cluster-cluster"
2022-11-28 14:39:09 [✔]  all cluster resources were deleted
```

### Step 4: Other commands

* Update the kubeconfig and switch the associated context

```
# kubectl config get-contexts
CURRENT   NAME                                                      CLUSTER                                                   AUTHINFO                                                  NAMESPACE
*         arn:aws:eks:ap-northeast-1:285167715064:cluster/eks-eva   arn:aws:eks:ap-northeast-1:285167715064:cluster/eks-eva   arn:aws:eks:ap-northeast-1:285167715064:cluster/eks-eva

# aws eks update-kubeconfig --name eva-eks --region ap-northeast-1
Added new context arn:aws:eks:ap-northeast-1:285167715064:cluster/eva-eks to /root/.kube/config

# kubectl config get-contexts
CURRENT   NAME                                                      CLUSTER                                                   AUTHINFO                                                  NAMESPACE
          arn:aws:eks:ap-northeast-1:285167715064:cluster/eks-eva   arn:aws:eks:ap-northeast-1:285167715064:cluster/eks-eva   arn:aws:eks:ap-northeast-1:285167715064:cluster/eks-eva
*         arn:aws:eks:ap-northeast-1:285167715064:cluster/eva-eks   arn:aws:eks:ap-northeast-1:285167715064:cluster/eva-eks   arn:aws:eks:ap-northeast-1:285167715064:cluster/eva-eks
```

* Query services & nodes

```
# kubectl get node -o wide
NAME                                                STATUS   ROLES    AGE   VERSION                INTERNAL-IP      EXTERNAL-IP      OS-IMAGE         KERNEL-VERSION                 CONTAINER-RUNTIME
ip-192-168-56-143.ap-northeast-1.compute.internal   Ready    <none>   68s   v1.23.13-eks-fb459a0   192.168.56.143   18.177.151.215   Amazon Linux 2   5.4.219-126.411.amzn2.x86_64   docker://20.10.17
ip-192-168-78-82.ap-northeast-1.compute.internal    Ready    <none>   75s   v1.23.13-eks-fb459a0   192.168.78.82    13.115.245.187   Amazon Linux 2   5.4.219-126.411.amzn2.x86_64   docker://20.10.17

# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   10m
```

* Set up autoscaling

```
# kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=5
horizontalpodautoscaler.autoscaling/php-apache autoscaled

# kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/50%   1         5         1          5m52s
```

## Rolling Updates

* A Deployment manages one or more ReplicaSets, and a ReplicaSet manages one or more Pods

![](pic/deployment-replicasets-pods.png)

```
└─ Deployment: <name>
   └─ ReplicaSet: <name>-<rs>
      └─ Pod: <name>-<rs>-<randomString>
```

### 1. Prepare the YAML file and apply it

```
# vim test-rolling.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10.2
        ports:
        - containerPort: 80

# kubectl apply -f test-rolling.yaml
deployment.apps/nginx-deployment created
```

### 2. Check the Deployment's status

```
# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           2m31s
```

### 3. View the automatically created ReplicaSet

```
# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-659f55f4bc   3         3         3       7m11s
```
### 4. Pods created by that ReplicaSet

```
# kubectl get pod -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-56bcb45f97-74prt   1/1     Running   0          51m
nginx-deployment-56bcb45f97-fzs89   1/1     Running   0          51m
nginx-deployment-56bcb45f97-pm96d   1/1     Running   0          51m
```

### 5. Add a rolling upgrade (Rolling Update) strategy

```
minReadySeconds: 5
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 1
```

* `minReadySeconds`:
    * The startup time of the application inside the container; Kubernetes waits this long before continuing the upgrade
    * Without this field, Kubernetes assumes the container can serve traffic as soon as it starts
    * If unset, in some edge cases the service may become unavailable, because newly created pods are not yet ready to serve
* `maxSurge`:
    * The maximum number of pods that may exist above the configured replica count during the upgrade
    * May be a fixed number or a percentage (%)
    * e.g. with maxSurge: 1 and replicas: 5, Kubernetes starts 1 new pod before deleting an old one, so at most 5+1 pods exist during the upgrade
* `maxUnavailable`:
    * The maximum number of pods that may be in an unserviceable state
    * Must not be zero at the same time as maxSurge (it may be zero when maxSurge is non-zero)
    * e.g. with maxUnavailable: 1, at most 1 pod is unavailable at any point during the upgrade

```
# vim test-rolling.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          hostPort: 80
```

### 6. Perform the rolling upgrade

* `set image`

```
# kubectl set image deployment <deployment> <container>=<image> --record   //format
# kubectl set image deployment nginx-deployment nginx=nginx:1.11.5 --record   //example
```

* `replace`: change the image version in the YAML file, then replace the object

```
# kubectl replace -f <yaml>   //format
# kubectl replace -f test-rolling.yaml   //example
```

* `edit`: opens an editor window where the Deployment's settings can be modified directly

```
# kubectl edit deployment <deployment> --record   //format
# kubectl edit deployment nginx-deployment --record   //example
```
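The `maxSurge` and `maxUnavailable` fields from step 5 also accept percentages, which Kubernetes resolves against `replicas` (rounding `maxSurge` up and `maxUnavailable` down). An illustrative fragment (the replica count is arbitrary):

```yaml
# Illustrative: percentage form of the rolling update strategy.
# With replicas: 10, maxSurge: 25% rounds up to 3 extra pods (at most 13 total),
# and maxUnavailable: 25% rounds down to 2 (at least 8 pods stay available).
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
```

Percentages keep the surge/unavailable window proportional when the replica count is later scaled up or down, which fixed numbers do not.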
### 7. Rollout query commands

* Check the upgrade status

```
# kubectl rollout status deployment <deployment>   //format
# kubectl rollout status deployment nginx-deployment   //example
```

* Pause a rolling upgrade

```
# kubectl rollout pause deployment <deployment>   //format
# kubectl rollout pause deployment nginx-deployment   //example
```

* Resume a rolling upgrade

```
# kubectl rollout resume deployment <deployment>   //format
# kubectl rollout resume deployment nginx-deployment   //example
```

* Undo a rolling update

```
# kubectl rollout undo deployment <deployment>   //format
# kubectl rollout undo deployment nginx-deployment   //example
```

* Query the rollout history

```
# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=test-rolling.yaml --record=true
2         kubectl apply --filename=test-rolling.yaml --record=true
```

* View service status after the update

```
# kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR
nginx              2/3     3            1           103m    nginx        nginx:1.7.9    app=nginx
nginx-deployment   2/2     2            2           5h22m   nginx        nginx:1.11.5   app=nginx
```

* Roll back to a specific revision

```
# kubectl rollout undo deployment nginx --to-revision=<revision>   //format
# kubectl rollout undo deployment nginx --to-revision=5   //example
```

## Annotations

### `service.beta.kubernetes.io/aws-load-balancer-type`

* Specifies the load balancer type
* For the `nlb-ip` type, the controller provisions an NLB with IP targets; this value is supported for backward compatibility
* For the `external` type, the NLB target type depends on the `nlb-target-type` annotation (below)

### `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type`

* Specifies the target type to configure for the NLB
* `instance` mode routes traffic to all EC2 instances in the cluster via the NodePort opened for your Service
* `ip` mode routes traffic directly to pod IPs

### `service.beta.kubernetes.io/aws-load-balancer-scheme`

* Specifies whether the NLB is internet-facing or internal
* Valid values are `internal` and `internet-facing`; defaults to `internal` if unspecified

### `service.beta.kubernetes.io/aws-load-balancer-internal`

* Also specifies whether the NLB is internet-facing or internal (older annotation)

## Session Affinity

* Session affinity can be configured so that requests from the same client are, as far as possible, answered by the same Pod, avoiding session loss
* `None`: requests are distributed across the Pods round-robin
* `ClientIP`: requests from the same client IP are pinned to the same Pod

```
sessionAffinity: ClientIP
loadBalancerIP: 123.123.123.123
```

## Workloads

### Horizontal Pod Autoscaler

Automatically adjusts the number of pods in a Deployment, ReplicationController, or ReplicaSet based on CPU utilization or other resource conditions; the condition can be CPU, memory, storage, or a custom metric.

#### HPA deployment

* The HPA YAML consists of apiVersion, kind, metadata, and spec
* spec contains the following fields:
    * minReplicas: the minimum number of Pods kept while autoscaling
    * maxReplicas: the maximum number of Pods the autoscaler may scale up to
    * scaleTargetRef: the target to scale; includes at least apiVersion, kind, and name
    * metrics <[]object>: the list of metrics used to calculate the desired Pod count
        * external: references a global metric not tied to any object
        * type: the metric type; one of Object, Pods, or Resource
        * object: references a specific metric describing a single object in the cluster
        * pods: references a specific metric of the Pods currently being autoscaled
        * resource: references a resource metric, i.e. the requests and limits defined in the containers of the autoscaled Pods
        * targetAverageUtilization: the average utilization across the Pods
        * targetAverageValue: the average raw value across the Pods
    * targetCPUUtilizationPercentage: the scaling condition for the target, here the average CPU percentage

#### HPA.yaml

```
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ironman-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ironman
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
```

### Vertical Pod Autoscaler

Automatically adjusts the CPU and memory reserved for pods to help keep applications "right-sized" and reduce operating costs.

#### VPA deployment

* spec.updatePolicy.updateMode:
    * Off: the VPA only provides recommended resource settings and never adjusts anything automatically
    * Initial: the VPA only adjusts resources when a Pod is created, with no further automatic adjustments
    * Auto: the VPA automatically applies the configuration provided by the Recommender
    * Recreate: like Auto, except every adjustment recreates the Pod (rarely used)
* spec.resourcePolicy.containerPolicies:
    * containerName: limits the VPA's scope; `*` means all Pods in the target
    * minAllowed: the lower bound on adjustable resources
    * maxAllowed: the upper bound on adjustable resources
    * controlledResources: the resource metrics to monitor; cpu and memory are available

#### VPA.yaml

```
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: hamster-vpa
  namespace: default
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hamster
  updatePolicy:
    updateMode: "Off"
  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        minAllowed:
          cpu: 100m
          memory: 50Mi
        maxAllowed:
          cpu: 1
          memory: 500Mi
        controlledResources: ["cpu", "memory"]
```

### Cluster Autoscaler

When objects cannot be scheduled and started due to insufficient resources, or when cluster nodes are underutilized, the Cluster Autoscaler automatically adjusts the number of nodes.

* --num-nodes: initial number of nodes to create; defaults to 3
* --enable-autoscaling: whether to enable the autoscaler
* --min-nodes: minimum number of nodes in the node pool
* --max-nodes: maximum number of nodes in the node pool
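The NLB annotations and session affinity settings described above come together in a single Service manifest. A sketch (the service name, port, and selector are illustrative, and it assumes the AWS Load Balancer Controller is installed in the cluster):

```yaml
# Illustrative Service combining the NLB annotations and session affinity above.
apiVersion: v1
kind: Service
metadata:
  name: web-nlb                      # hypothetical service name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: nginx                       # matches the Deployment label used earlier
  ports:
    - port: 80
      targetPort: 80
  sessionAffinity: ClientIP          # pin requests from the same client IP to one Pod
```

With `nlb-target-type: ip` the NLB sends traffic straight to pod IPs, so `sessionAffinity` at the Service level matters mainly for in-cluster clients; for external clients, stickiness is configured on the NLB target group itself.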