Production k8s cluster with AWS
===
[TOC]
## Precautions
:::danger
Using AWS EC2-related features can push the account past its limits and incur charges. Keep an eye on usage at all times.
:::
- Use a plain Ubuntu terminal, not the VS Code WSL remote terminal; pulling Docker images from the latter runs into problems
- Create the AWS user and group through the AWS website; avoid doing it from the command line
- Be careful when deleting the bucket or other resources; it is safest from the command line
    - If the cluster is not deleted and the bucket is removed through the AWS web console, the cluster may become impossible to delete from the local command line, **failing with a bucket-not-found error**
    - If that happens, you have to wipe the local AWS setup completely and clear all the related Linux environment variables
    - So it is best to delete the cluster and everything attached to it in one go with `kops delete cluster --name {CLUSTER NAME}`
- Pick somewhere in Asia as the default region; the default is us-east-1
    - If you don't set it, create cluster will fail, or at least I did not wait long enough (1 hr)
    - Set to an Asian region, the cluster should come up within 30 min
- Try to finish the whole walkthrough within 18 hours, otherwise the credentials hit their TTL and expire
- Keep (install) the kubectl and minikube packages before reading on, and clean out any existing cluster and namespaces first, or you will have a lot of annoying problems to deal with (advice from a faraway friend)
:::warning
Sections that show a warning box are places where following educative verbatim has been confirmed to cause problems
:::
## Getting Started with Production-Ready Clusters
- We are going to build a cluster on AWS
- Remember to sign up for an AWS account; in theory you get 750 free hours per month
    - [Sign up](https://aws.amazon.com/tw/free/?trk=30ef8e27-201f-4959-b6d2-de9e186bb48f&sc_channel=ps&s_kwcid=AL!4422!3!595905314153!e!!g!!aws%20%E8%A8%BB%E5%86%8A&ef_id=CjwKCAiAjs2bBhACEiwALTBWZW0_th8TVcqDLeB044PVRXNYOYn3HQs-3Eq1Qd9eQx0s3iO_awWKNBoCXW8QAvD_BwE:G:s&s_kwcid=AL!4422!3!595905314153!e!!g!!aws%20%E8%A8%BB%E5%86%8A&all-free-tier.sort-by=item.additionalFields.SortRank&all-free-tier.sort-order=asc&awsf.Free%20Tier%20Types=*all&awsf.Free%20Tier%20Categories=*all)
    - [aws console](https://ap-northeast-1.console.aws.amazon.com/console/home?region=ap-northeast-1#)
- No billing permission is granted, and as long as you skip the public DNS part, nothing in this walkthrough costs money
## Kubernetes Operations (kops) Project
- For the reasoning behind AWS and kops, follow the original course
- Install kops
```
# latest release:
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
# or pin a specific version instead, e.g. v1.20.0:
# curl -LO https://github.com/kubernetes/kops/releases/download/v1.20.0/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
```
- Afterwards run `kops version` to confirm the installation succeeded
## Preparing for the Cluster Setup: AWS CLI and Region
:::warning
The educative version is outdated
:::
- First make sure unzip is installed
```
sudo apt update
sudo apt install unzip
```
- Install the AWS CLI
    - The download is a zip archive
```
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
```
- Verify the CLI with `aws --version`
- Go to the website, log in, and create a user
    - Type `IAM` into the Services menu or the search bar at the top
    - [Open this link](https://us-east-1.console.aws.amazon.com/iamv2/home?region=us-east-1#/users) or click Users in the side menu
    - Click `Add users`
    - Enter `kops` as the User name
    - Check `Access key - Programmatic access`
    - Next
    - Select Attach existing policies directly
    - Check AdministratorAccess
    - Next
    - Ignore the tags
    - Next
    - Create user
    - Write down the Access key ID and the Secret access key, ideally somewhere on your local machine
        - Both are needed later to authorize the local aws cli
## Preparing for the Cluster Setup: IAM Group and User
- Authorize the local aws cli
    - Authorization is done through environment variables (printenv shows the current ones)
    - In the terminal enter
```
export AWS_ACCESS_KEY_ID={your Access key ID}
export AWS_SECRET_ACCESS_KEY={your Secret access key}
```
    - Check `$AWS_ACCESS_KEY_ID` and `$AWS_SECRET_ACCESS_KEY`
    - Set the region: `export AWS_DEFAULT_REGION=ap-northeast-1`
    - Or run `aws configure` and fill in the same values as above
    - The settings end up in .aws/credentials and config, roughly as sketched below
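- For reference, after `aws configure` the two files look roughly like this (the values are placeholders, shown as comments):
```
cat ~/.aws/credentials
# [default]
# aws_access_key_id = {your Access key ID}
# aws_secret_access_key = {your Secret access key}

cat ~/.aws/config
# [default]
# region = ap-northeast-1
```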
- Create a group and add the user to it
    - The commands below, in order: create a group named kops,
    - attach the group's permission policies,
    - add the user to the group, and list the users (a verification sketch follows the block)
```
aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam list-users
```
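- You can sanity-check the result with two standard AWS CLI read-only calls, which just echo back what the commands above created:
```
# the four attached policies should be listed
aws iam list-attached-group-policies --group-name kops
# the kops user should appear among the group's members
aws iam get-group --group-name kops
```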
## Preparing for the Cluster Setup: Availability Zones and SSH Keys
:::warning
The educative version is outdated
:::
- First confirm the AWS-related variables are all set
    - `aws configure`
        - enter your id, secret key, and region in order
        - note that the default region must be `ap-northeast-1`
    - set the environment variables
        - `export AWS_ACCESS_KEY_ID=...`
        - `export AWS_SECRET_ACCESS_KEY=...`
        - `export KOPS_STATE_STORE=s3://[devops23 bucket ID]`
- Pick a location; we will create a cluster folder for the ssh key and a few config files
```
mkdir -p cluster
cd cluster
```
- Check which ZONES are available in our region `ap-northeast-1`
    - `aws ec2 describe-availability-zones --region ap-northeast-1`
    - Pick three of the `ZoneName` values for the environment variable `ZONES`
        - do check whether these zones are still the ones listed; the output can differ between runs
    - `export ZONES=ap-northeast-1a,ap-northeast-1c,ap-northeast-1d`
- Create the ssh keys
    - First check whether `jq` is there; to install jq:
        - `sudo apt-get update`
        - `sudo apt-get install jq`
    - Create a key pair named devops23 and save it as devops23.pem:
        - `aws ec2 create-key-pair --key-name devops23 | jq -r '.KeyMaterial' >devops23.pem`
        - confirm the pem file contains a private key
        - if an old key has to be deleted first, see: https://docs.aws.amazon.com/zh_tw/cli/latest/userguide/cli-services-ec2-keypairs.html
    - `chmod 400 devops23.pem` locks down the permissions on devops23.pem; without it, key generation fails
    - `ssh-keygen -y -f devops23.pem >devops23.pub` derives the public ssh key and saves it as devops23.pub
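- Before moving on, a quick sanity check of the key material can't hurt (both calls only read back what was just created):
```
# the key pair should be registered with AWS
aws ec2 describe-key-pairs --key-names devops23
# the derived public key should start with "ssh-rsa"
head -n 1 devops23.pub
```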
## Creating a Cluster: Creating S3 Bucket and Installing kops
:::warning
The educative version is outdated
:::
- Installing kops was covered above
- Now we create a bucket; buckets can be viewed in the AWS web console
    - better not to delete the bucket from the web page, that usually ends badly (see the precautions)
    - https://s3.console.aws.amazon.com/s3/home?region=us-east-1
- First set a couple of environment variables
    - NAME is the cluster's name
    - BUCKET_NAME is the bucket's name, suffixed with a timestamp, because AWS bucket names are global and must be unique
```
export NAME=devops23.k8s.local
export BUCKET_NAME=devops23-$(date +%s)
```
- Create the bucket
```
aws s3api create-bucket --bucket $BUCKET_NAME --create-bucket-configuration LocationConstraint=ap-northeast-1
```
- It shows up in the [aws S3 bucket](https://s3.console.aws.amazon.com/s3/buckets?region=us-east-1) list
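- kops reads its state-store location from the `KOPS_STATE_STORE` variable, so point it at the bucket just created (this replaces the placeholder value exported earlier):
```
export KOPS_STATE_STORE=s3://$BUCKET_NAME
```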
## Creating a Cluster: Discussing the Specifications
## Creating a Cluster: Running and Verification
:::warning
The educative version is outdated
:::
- Set the environment variable
```
export NAME=devops23.k8s.local
```
- Create the cluster (build it with micro-sized instances so a forgotten cluster can't charge you into oblivion)
```
kops create cluster --name $NAME --master-count 3 --node-count 1 --node-size t2.micro --master-size t2.micro --zones $ZONES --master-zones $ZONES --ssh-public-key devops23.pub --networking kubenet --yes
```
- Validate the cluster
```
kubectl cluster-info
kops get cluster
kops validate cluster --wait 10m
```

:::info
1. For the first ~3 minutes you may see `EOF`, followed by a few minutes of unhealthy; after roughly 10 minutes it should be ready
2. If 3 masters & 3 nodes are running normally but the cluster stays unhealthy, that is in theory residue from a previous, incompletely deleted cluster, and it does not affect later operations
:::
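- Once validation passes, a quick sanity check is to list the nodes; with the sizes used above, 4 entries (3 masters + 1 node) should eventually report Ready:
```
kubectl get nodes
```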
## Exploring the Components That Constitute the Cluster
```kubectl --namespace kube-system get pods```
- To edit the cluster or instance groups, don't use `kops edit cluster --name $NAME` directly; it is better to export the spec to a YAML file with `kops get -o yaml > cluster.yml` and then apply it back with `kops replace -f cluster.yml`
- Also, the `nodes` argument of `kops edit ig --name $NAME nodes` no longer matches because the AWS naming convention changed; check the `kops validate cluster` output above, where the groups are usually named `nodes-ap-northeast-1a`, `-1c`, `-1d`
- Carry out the following steps in order (a sed shortcut for the size edit follows this list):
    - Export the cluster spec: `kops get -o yaml > cluster.yml`
    - Change every `maxSize` and `minSize` inside it to 2
    - Replace the stored spec: `kops replace -f cluster.yml`
    - Confirm the change went through: `kops get -o yaml`
    - Perform a rolling update: `kops rolling-update cluster --yes`
    - If you see `09:51:24.282564 504495 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Unauthorized`
        - the credentials have expired (the 18 h limit)
        - run `kops export kubecfg --admin=87600h`
        - then `kops rolling-update cluster --name $NAME --yes`
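- If you'd rather not hand-edit the file, here is a minimal sketch of the size edit, assuming every exported `minSize`/`maxSize` is currently 1 and GNU sed (as on Ubuntu/WSL):
```
# bump every minSize/maxSize from 1 to 2 in the exported spec
sed -i 's/\(minSize\|maxSize\): 1/\1: 2/' cluster.yml
# eyeball the result before running kops replace
grep -E 'minSize|maxSize' cluster.yml
```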
## Upgrading the Cluster Manually: Changing the Kubernetes Version
Skipped: our create cluster did not pin a version, so it runs the latest by default
## Accessing the Cluster: Understanding the Protocol
- List all current load balancers: `aws elb describe-load-balancers`
- For the protocol itself, see educative
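- If you only want the load balancers' DNS names, a jq filter over the same call keeps the output readable (`LoadBalancerDescriptions` is the field the classic-ELB `aws elb` API returns):
```
aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[].DNSName'
```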
## Accessing the Cluster: Adding the Load Balancer
:::warning
The educative version is outdated
:::
- We add an ingress, using nginx
- Kubernetes v1.22 removed the old ingress APIs, so the v1.6.0.yml that educative provides can no longer be created
- Create a file called `ingress-nginx.yaml` with the source below
- Everything created here lives under the namespace ingress-nginx
    - so if anything goes wrong, all the ingress pieces can be deleted or operated on directly through that namespace
::: spoiler ```ingress-nginx.yaml```
```yaml=
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v1.0.2@sha256:85b53b493d6d658d8c013449223b0ffd739c76d76dc9bf9000786669ec04e049
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.3
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.2
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.3
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.2
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
```
:::
- Run `kubectl apply -f ingress-nginx.yaml`
    - If no error messages show up, continue to the next step
- Inspect the ingress components: `kubectl --namespace ingress-nginx get all`
    - Make sure every pod is READY 1/1 and RUNNING (or use the wait command below)
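- Instead of eyeballing `get all`, you can also block until the controller pod reports ready; the selector below just reuses the labels from the manifest above:
```
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```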
## Deploying Applications
:::warning
The educative version is outdated
:::
- Create the resources; the original course's `k8s-specs` folder has an `aws` directory containing a `go-demo-2.yml`
```
cd ~/k8s-specs
kubectl create -f aws/go-demo-2.yml --record --save-config
```
- If you can't find it, here is `go-demo-2.yml`:
:::spoiler ```go-demo-2.yml```
```yaml=
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-demo-2
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /demo
            pathType: ImplementationSpecific
            backend:
              service:
                name: go-demo-2-api
                port:
                  number: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-demo-2-db
spec:
  selector:
    matchLabels:
      type: db
      service: go-demo-2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        type: db
        service: go-demo-2
        vendor: MongoLabs
    spec:
      containers:
        - name: db
          image: mongo:3.3
          resources:
            limits:
              memory: "100Mi"
              cpu: 0.1
            requests:
              memory: "50Mi"
              cpu: 0.01
---
apiVersion: v1
kind: Service
metadata:
  name: go-demo-2-db
spec:
  ports:
    - port: 27017
  selector:
    type: db
    service: go-demo-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-demo-2-api
spec:
  replicas: 3
  selector:
    matchLabels:
      type: api
      service: go-demo-2
  template:
    metadata:
      labels:
        type: api
        service: go-demo-2
        language: go
    spec:
      containers:
        - name: api
          image: vfarcic/go-demo-2
          env:
            - name: DB
              value: go-demo-2-db
          readinessProbe:
            httpGet:
              path: /demo/hello
              port: 8080
            periodSeconds: 1
          livenessProbe:
            httpGet:
              path: /demo/hello
              port: 8080
          resources:
            limits:
              memory: "25Mi"
              cpu: 0.1
            requests:
              memory: "5Mi"
              cpu: 0.01
---
apiVersion: v1
kind: Service
metadata:
  name: go-demo-2-api
spec:
  ports:
    - port: 8080
  selector:
    type: api
    service: go-demo-2
```
:::
- Run `kubectl rollout status deployment go-demo-2-api` and wait until everything is deployed
- `kubectl get ingress`
    - Assign the address from the output to the environment variable CLUSTER_DNS (or use the jsonpath one-liner below)
    - `export CLUSTER_DNS={ingress address}`
- `curl -i "http://$CLUSTER_DNS/demo/hello"`
    - The response should be an HTTP 200 with a hello-world body
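- To avoid copying the address by hand, the hostname can be pulled straight out of the ingress with a jsonpath query (the ingress is named go-demo-2 in the manifest above):
```
export CLUSTER_DNS=$(kubectl get ingress go-demo-2 \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -i "http://$CLUSTER_DNS/demo/hello"
```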
## Exploring the High-Availability and Fault-Tolerance
- In short: we are going to kill one of the nodes, picked out via its security group,
- and then watch whether our cluster rebuilds it by itself
- First, find the instance ids behind the nodes security group
```
aws ec2 describe-instances | jq -r ".Reservations[].Instances[] | select(.SecurityGroups[].GroupName==\"nodes.devops23.k8s.local\").InstanceId"
```

- Terminate one of the instances
    - `aws ec2 terminate-instances --instance-ids {the id you picked}`
- Run the query again: the total count is unchanged; the terminated instance is gone and a new one has replaced it
```
aws ec2 describe-instances | jq -r ".Reservations[].Instances[] | select(.SecurityGroups[].GroupName==\"nodes.devops23.k8s.local\").InstanceId"
```
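- To watch the self-healing live, keep a node watch open in a second terminal while terminating the instance; the old node drops out and a fresh one joins within a few minutes:
```
kubectl get nodes --watch
```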
- Verification complete
## Destroy cluster
- Save the environment variables into a file
```
echo "export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
export ZONES=$ZONES
export NAME=$NAME
export KOPS_STATE_STORE=$KOPS_STATE_STORE" \
>kops
```
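- In a later shell session the saved variables can then be restored with:
```
source kops
```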
- Delete the cluster (if you still plan to do the persistent-state volume exercise, don't delete the user and group yet)
```
kops delete cluster \
    --name $NAME \
    --yes
aws s3api delete-bucket \
    --bucket $BUCKET_NAME

# --- if you will keep working on the volumes part, don't delete anything below yet ---

aws iam remove-user-from-group \
    --user-name kops \
    --group-name kops
# the original course reads the key id from a kops-creds file; since our access
# key was created in the web console, the variable exported earlier is used instead
aws iam delete-access-key \
    --user-name kops \
    --access-key-id $AWS_ACCESS_KEY_ID
aws iam delete-user \
    --user-name kops
aws iam detach-group-policy \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess \
    --group-name kops
aws iam detach-group-policy \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
    --group-name kops
aws iam detach-group-policy \
    --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess \
    --group-name kops
aws iam detach-group-policy \
    --policy-arn arn:aws:iam::aws:policy/IAMFullAccess \
    --group-name kops
aws iam delete-group \
    --group-name kops
```
## Environment
- All latest versions as of `2022/11/16`
- wsl
- kops
- `v1.25.2`
- kubectl
- Client Version: `v1.25.3`
- Kustomize Version: `v4.5.7`
- docker
- `20.10.17`
- aws
- aws-cli `2.8.11`
- Python `3.9.11`
- Linux/5.10.102.1-microsoft-standard-WSL2 exe/x86_64.ubuntu.20 prompt/off