[Cloud] K8s / Examples / Hello World
===
###### tags: `Cloud / K8s`
###### tags: `Cloud`, `K8s`

<br>

![](https://i.imgur.com/vsCp0RX.png)<br><br>

[TOC]

<br>

## Creating the first web server (nginx)

- ### Resource definition

  `tj-first-pod.yaml`

```yaml=
apiVersion: v1
kind: Pod
metadata:
  name: tj-nginx-pod
spec:
  containers:
    - name: tj-nginx-container
      image: nginx
```
  - This is a minimal resource example.
  - `metadata.name` must not be omitted, otherwise you get this error:
```
error: error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "", Namespace: "default"
from server for: "tj-first-pod.yaml": resource name may not be empty
```
  - `containers[].name` must not be omitted, otherwise you get this error:
```
error: error validating "tj-first-pod.yaml": error validating data:
ValidationError(Pod.spec.containers[0]): missing required field "name" in
io.k8s.api.core.v1.Container; if you choose to ignore these errors,
turn validation off with --validate=false
```

- ### Deploying the resource (deploy)

```bash
$ kubectl apply -f tj-first-pod.yaml
```

- ### Inspecting the resource

```bash
$ kubectl get pod

# the resource abbreviation also works
$ kubectl get po
```

- ### Deleting the resource (undeploy)

```bash
$ kubectl delete -f tj-first-pod.yaml
```
  or
```bash
$ kubectl delete pod tj-nginx-pod
# or
$ kubectl delete pod/tj-nginx-pod
```

<br>

## Creating a pod containing multiple containers

- ### Scenario

  [![](https://i.imgur.com/Zssyx1R.png)](https://i.imgur.com/Zssyx1R.png)
  - nginx x 3
  - blue-whale x 1
  - purple-whale x 1
  - kuard x 1

- ### Resource definition

  `tj-webserver-pod.yaml`

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tj-webserver-pod
spec:
  containers:
    - name: tj-nginx-container1
      image: nginx
    - name: tj-nginx-container2-failed
      image: nginx
      ports:
        - containerPort: 80   # port 80 is duplicated
    - name: tj-nginx-container3-failed
      image: nginx
      ports:
        - containerPort: 8008 # the web server does not use 8008,
                              # so nothing listens on 8008
    - name: tj-blue-whale-container4
      image: hcwxd/blue-whale
      ports:
        - containerPort: 3000
    - name: tj-purple-whale-container5-failed
      image: hcwxd/purple-whale
      ports:
        - containerPort: 3000 # port 3000 is duplicated
    - name: tj-kuard-container6
      image: gcr.io/kuar-demo/kuard-amd64:1
      ports:
        - containerPort: 8080
```

- ### Deploying the resource

```bash
$ kubectl apply -f tj-webserver-pod.yaml
```

- ### Inspecting the resource

```bash
$ kubectl get pod -o wide

# the resource abbreviation also works
$ kubectl get po -o wide
```
  [![](https://i.imgur.com/XUasEzz.png)](https://i.imgur.com/XUasEzz.png)

<br>

- ### Testing connections

```bash
curl 10.244.1.187        # nginx
curl 10.244.1.187:80     # nginx
curl 10.244.1.187:8008   # nginx (no listener; container3 crashed)
curl 10.244.1.187:3000   # hcwxd/blue-whale
curl 10.244.1.187:8080   # gcr.io/kuar-demo/kuard-amd64:1
```

- ### Debugging: inspecting current container states

```
$ kubectl describe pod tj-webserver-pod
Name:         tj-webserver-pod
Namespace:    default
...
Containers:
  tj-nginx-container1:
    Image:          nginx
    Port:           <none>
    State:          Running
      Started:      Wed, 23 Dec 2020 12:52:17 +0800
    Ready:          True
    Restart Count:  0
  tj-nginx-container2-failed:
    Image:          nginx
    Port:           80/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
    Ready:          False
    Restart Count:  4
  tj-nginx-container3-failed:
    Image:          nginx
    Port:           8008/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
    Ready:          False
    Restart Count:  4
  tj-blue-whale-container4:
    Image:          hcwxd/blue-whale
    Port:           3000/TCP
    State:          Running
    Ready:          True
    Restart Count:  0
  tj-purple-whale-container5-failed:
    Image:          hcwxd/purple-whale
    Port:           3000/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
    Ready:          False
    Restart Count:  4
  tj-kuard-container6:
    Image:          gcr.io/kuar-demo/kuard-amd64:1
    Port:           8080/TCP
    State:          Running
    Ready:          True
    Restart Count:  0
```
  - Viewing container1's log
```
$ kubectl logs tj-webserver-pod -c tj-nginx-container1
```
  - Viewing container2's log
```
$ kubectl logs tj-webserver-pod -c tj-nginx-container2-failed
...
2020/12/23 06:01:36 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
...
```
  - Viewing container3's log
```
$ kubectl logs tj-webserver-pod -c tj-nginx-container3-failed
...
2020/12/23 06:01:40 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
...
```
  - Viewing container5's log
```
$ kubectl logs tj-webserver-pod -c tj-purple-whale-container5-failed
events.js:174
      throw er; // Unhandled 'error' event
      ^
Error: listen EADDRINUSE: address already in use :::3000
```
  - Root cause:
    - one app may listen on several ports
    - but several apps cannot share the same port
    - containers in one pod share a single network namespace, so their listening ports must not collide; note that nginx binds port 80 regardless of the declared `containerPort` (the field is informational), which is why container3 crashes for the same reason as container2

- ### Debugging: testing from inside the pod

```
$ kubectl exec -it tj-webserver-pod -- /bin/sh
Defaulting container name to tj-nginx-container1.
Use 'kubectl describe pod/tj-webserver-pod -n default' to see all of the containers in this pod.
# curl 127.0.0.1
# curl 127.0.0.1:80
# curl localhost:3000
# curl localhost:8080
```
  To target a specific container:
```
$ kubectl exec -it tj-webserver-pod \
    -c tj-kuard-container6 -- /bin/sh
```

- ### Deleting the resource

```bash
$ kubectl delete -f tj-webserver-pod.yaml
```
  or
```bash
$ kubectl delete pod tj-webserver-pod
# or
$ kubectl delete pod/tj-webserver-pod
```

<br>

## Creating a web server (nginx) and opening a node port

```yaml
# tj-first-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tj-pod-kuard
  labels:
    app: webserver
spec:
  containers:
    - name: tj-container-kuard
      image: gcr.io/kuar-demo/kuard-amd64:1
      ports:
        - containerPort: 8080
---
# tj-first-pod-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: first-pod-service
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      protocol: TCP
      targetPort: 8080
  selector:
    app: webserver
```

- If `nodePort` is omitted, the system allocates one from the node-port range (30000-32767 by default).
- If `targetPort` is omitted, it defaults to the same value as `port`.
- **Troubleshooting**
  - [Step-1] Check the pod
```bash=
$ kubectl get pods -o wide
```
    - check whether STATUS shows RUNNING

<br>

  - [Step-2A] If STATUS is not RUNNING, inspect the container startup process
```bash=
$ kubectl describe pod your_pod_name
```
  - [Step-2B] If STATUS is RUNNING
    - check whether the pod is reachable
```bash=
$ curl -X GET your_pod_ip:8080
```
    - if it is not reachable, connect into the container
```bash=
$ kubectl get pods -o wide
$ kubectl exec -it your_pod_name -- bash

# now inside the container
root@your_pod_name:/# curl
bash: curl: command not found
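# Side note (not part of the original session): the stock nginx image does
# not ship curl, hence the "command not found" above. Installing it in the
# running container (next command) is fine for ad-hoc debugging, but the
# package is lost whenever the container restarts.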
root@your_pod_name:/# apt update && apt install curl -y
root@your_pod_name:/# curl -X GET 127.0.0.1:80
```
  - [Step-3] Check the service
```bash=
$ kubectl get service -o wide
$ curl -X GET your_service_ip:80
$ curl 127.0.0.1:30080
```
- References
  - [kubernetes中port、target port、node port的对比分析,以及kube-proxy代理](https://blog.csdn.net/xinghun_4/article/details/50492041)
  - [Kubernetes 基礎教學(二)實作範例:Pod、Service、Deployment、Ingress](https://medium.com/@C.W.Hu/kubernetes-implement-ingress-deployment-tutorial-7431c5f96c3e)
  - [Kubernetes Service 深度剖析 - 存取路徑差異](https://tachingchen.com/tw/blog/kubernetes-service-in-detail-1/)

<br>

## [Concrete example] Running a Python notebook (JupyterLab) and opening a node port

```yaml=
apiVersion: v1
kind: Pod
metadata:
  name: tj-elyra-pod
  labels:
    app: tj-elyra-server
spec:
  containers:
    - name: tj-elyra-container
      image: elyra/elyra:latest
      command: ["jupyter", "lab", "--debug"]
      ports:
        - containerPort: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: tj-elyra-service
spec:
  type: NodePort
  ports:
    - targetPort: 8888
      port: 80
      nodePort: 30002
  selector:
    app: tj-elyra-server
```
- On minikube, you can test the connection with port-forwarding.

  for the pod:
```bash
$ kubectl port-forward pod/tj-elyra-pod --address 10.78.26.241 18888:8888
```
  for the service:
```bash
$ kubectl port-forward service/tj-elyra-service --address 10.78.26.241 18888:80
```

<br>

<hr>
<hr>

<br>

## One pod with two containers

```yaml
# for workload
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tj-deployment-webserver
  namespace: tj-app2
spec:
  selector:
    matchLabels:
      app: tj-deployment-webserver-on-app2
  template:  # Required: `selector.matchLabels` must match the template's `labels`
    metadata:
      labels:
        app: tj-deployment-webserver-on-app2
    spec:
      containers:
        - name: tj-container-nginx
          image: nginx
          ports:
            - containerPort: 80
        - name: tj-container-kuard
          image: gcr.io/kuar-demo/kuard-amd64:1
          ports:
            - containerPort: 8080
---
# for service
apiVersion: v1
kind: Service
metadata:
  namespace: tj-app2
  name: tj-service-webserver
spec:
  type: NodePort
  ports:
    - name: nginx-30090-to-80
      nodePort: 30090
      port: 30090
      targetPort: 80
    - name: kuard-30091-to-8080
      nodePort: 30091
      port: 30091
      targetPort: 8080
  selector:
    app: tj-deployment-webserver-on-app2
```

<br>

<hr>
<hr>

<br>

## One pod with three containers, together with selector/replicas

```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy1
spec:
  template:
    metadata:
      name: deploy1-pod
      labels:
        app: deploy1-app
    spec:
      containers:
        - name: deploy1-nginx-container
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
        - name: deploy1-kuard-container
          image: gcr.io/kuar-demo/kuard-amd64:1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
        - name: deploy1-blue-whale
          image: hcwxd/blue-whale
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
  selector:
    matchLabels:
      app: deploy1-app
  replicas: 4
```
Result:
```
$ kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE                      NOMINATED NODE   READINESS GATES
deploy1-759d9896cf-fd6fd   3/3     Running   0          16m   10.244.1.58   alprworker-1203417-iaas   <none>           <none>
deploy1-759d9896cf-jc4fd   3/3     Running   0          16m   10.244.0.18   alprmaster-1422009-iaas   <none>           <none>
deploy1-759d9896cf-nd8mt   3/3     Running   0          16m   10.244.1.59   alprworker-1203417-iaas   <none>           <none>
deploy1-759d9896cf-qfddm   3/3     Running   0          16m   10.244.1.57   alprworker-1203417-iaas   <none>           <none>
```
- Because `replicas: 4`, there are four pods in total.
- A Deployment's `replicas` and `selector` are used together.
- A pod can be thought of as a "vmlet" (a miniature VM).
- Each pod runs 3 containers (a healthy pod shows `3/3`).
- **Note**:
  - containers in one pod must not reuse the same port; for example, hcwxd/blue-whale and hcwxd/purple-whale both use port 3000, and the duplicated port leads to CrashLoopBackOff (Back-off restarting failed container), because the port is already taken and the later container cannot start.

<br>

<hr>
<hr>

<br>

## Quickly running a blue-whale server and wiring it up with an Ingress

> k8s version tested: v1.19

![](https://i.imgur.com/VswH1Cx.png)

- Create a pod that runs the blue-whale server:
```bash
$ kubectl run tj-blue-whale \
    --image=hcwxd/blue-whale \
    --port=3000
```
  - this is much like the docker run command:
```
$ docker run --name tj-blue-whale -p 3000:3000 hcwxd/blue-whale
```
  The command gets translated into:
```yaml=
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: tj-blue-whale
  name: tj-blue-whale
  namespace: default
spec:
  containers:
    - image: hcwxd/blue-whale
      imagePullPolicy: Always
      name: tj-blue-whale
      ports:
        - containerPort: 3000
          protocol: TCP
```
- Expose the pod inside the cluster as a Service:
```bash
$ kubectl expose pod/tj-blue-whale
```
- Connection test:
```bash
$ kubectl get svc -o wide
NAME            TYPE        CLUSTER-IP     PORT(S)    SELECTOR
tj-blue-whale   ClusterIP   10.110.8.116   3000/TCP   run=tj-blue-whale

$ curl 10.110.8.116:3000
```
- Expose the service to the outside world through an Ingress.

  `tj-blue-whale-load-balancer.yaml` comes in two variants, `v1beta` and `v1` ([reference](https://kubernetes.io/docs/concepts/services-networking/ingress/#types-of-ingress)):
```yaml=
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tj-blue-whale-load-balancer
spec:
  backend:
    serviceName: tj-blue-whale
    servicePort: 3000
```
```yaml=
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tj-blue-whale-load-balancer
spec:
  defaultBackend:
    service:
      name: tj-blue-whale
      port:
        number: 3000
```
```
$ kubectl apply -f tj-blue-whale-load-balancer.yaml
```

<br>

<hr>
<hr>

<br>

## Creating a job

`tj-oneshot2.yaml`
```yaml=
apiVersion: batch/v1
kind: Job
metadata:
  name: oneshot
  labels:
    chapter: jobs
spec:
  template:
    metadata:
      labels:
        chapter: jobs
    spec:
      containers:
        - name: kuard
          image: gcr.io/kuar-demo/kuard-amd64:1
          imagePullPolicy: IfNotPresent
          args:
            - "--keygen-enable"
            - "--keygen-exit-on-complete"
            - "--keygen-num-to-gen=10"
      restartPolicy: OnFailure
```
Viewing the log:
```
$ kubectl logs job/oneshot

# or, with the same result
$ kubectl logs pod/oneshot-tjxx2
...
2020/12/31 08:28:52 Serving on HTTP on :8080
2020/12/31 08:28:53 (ID 0 1/10) Item done: SHA256:65uCghUlMAAdM8t7jYTLKTIoaDcRHEs+4/LBVPTKT1M
2020/12/31 08:28:54 (ID 0 2/10) Item done: SHA256:MJQh1oK8+AibCD321BtQhlHQUD/ft958wxVCBp+QpcY
2020/12/31 08:28:55 (ID 0 3/10) Item done: SHA256:pxb+XVLpwW2qxSHzfxsQbwdi/0E3m0BAd9eyXKaCwC8
2020/12/31 08:29:01 (ID 0 4/10) Item done: SHA256:0SmstkcsQCQL4Nu6rL27Sgj2+CxIIsXRN3JS3wWxNrM
2020/12/31 08:29:02 (ID 0 5/10) Item done: SHA256:QQijWg81E0gpSKxlAFex71Ci6Dj1l/reRZSOoUaypeE
2020/12/31 08:29:03 (ID 0 6/10) Item done: SHA256:5MkKnmfONR5xzjSUMynPR2P4ytrU9qtfRK6JjB71IIY
2020/12/31 08:29:04 (ID 0 7/10) Item done: SHA256:ikr0zLcuk3a55QWGf+2joKTkSpKW7dVohwc2SPzv9Xo
2020/12/31 08:29:06 (ID 0 8/10) Item done: SHA256:FsIr4P4w2w4XIh5YGqbmLRtUJwM9T6GFCgimwUdxUBI
2020/12/31 08:29:06 (ID 0 9/10) Item done: SHA256:6jagKhuFSpiERPryV1pMJnwE8tHX2oSSHtTGXlHWbZU
2020/12/31 08:29:09 (ID 0 10/10) Item done: SHA256:kr5BSbNi+oZ55qVDYqoHg68EEIz28TucbkGqeiMncFM
2020/12/31 08:29:09 (ID 0) Workload exiting
```

### restartPolicy: OnFailure vs. Never

- Never
```bash
$ kubectl get pod -l chapter -o wide
NAME            READY   STATUS   RESTARTS   AGE     IP            NODE                      NOMINATED NODE   READINESS GATES
oneshot-4gf9m   0/1     Error    0          3m14s   10.244.1.22   alprworker-1203417-iaas   <none>           <none>
oneshot-jk4ld   0/1     Error    0          3m44s   10.244.1.20   alprworker-1203417-iaas   <none>           <none>
oneshot-mw65k   0/1     Error    0          109s    10.244.1.25   alprworker-1203417-iaas   <none>           <none>
oneshot-ptsd8   0/1     Error    0          3m52s   10.244.1.18   alprworker-1203417-iaas   <none>           <none>
oneshot-q5l52   0/1     Error    0          3m34s   10.244.1.21   alprworker-1203417-iaas   <none>           <none>
oneshot-wctrs   0/1     Error    0          74s     10.244.1.24   alprworker-1203417-iaas   <none>           <none>
oneshot-x72wc   0/1     Error    0          2m34s   10.244.1.23   alprworker-1203417-iaas   <none>           <none>
...
...
```
  - When a pod fails, it is not restarted; instead, the pod is marked as failed.
  - The Job object notices this and creates a new pod as a replacement.
  - As a result, you end up with a pile of Errored pods.

<br>

- OnFailure
```
$ kubectl get pod -l chapter -o wide
NAME            READY   STATUS   RESTARTS   AGE   IP            NODE                      NOMINATED NODE   READINESS GATES
oneshot-lw7zz   0/1     Error    3          77s   10.244.1.26   alprworker-1203417-iaas   <none>           <none>
```
  - When a pod fails, it responds by restarting the failed container in place rather than creating a new pod.
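
<br>

The recurring failure mode in these notes, CrashLoopBackOff caused by "multiple apps cannot share the same port", can be reproduced outside Kubernetes. A minimal Python sketch (standard library only; the variable names are illustrative, not from any example above) shows the same EADDRINUSE that the nginx and purple-whale container logs reported:

```python
import errno
import socket

# First listener takes the port, like the first nginx container in the pod.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
first.listen()
port = first.getsockname()[1]

# Second listener on the same port, like the second nginx container, which
# shares the pod's network namespace with the first one.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
except OSError as exc:
    # Same failure the container logs showed:
    # "bind() to 0.0.0.0:80 failed (98: Address already in use)"
    print("second bind failed with EADDRINUSE:",
          exc.errno == errno.EADDRINUSE)
finally:
    second.close()
    first.close()
```

Inside a pod the containers are separate processes but one network stack, so the second `bind()` fails exactly like this, the process exits, and the kubelet's restart loop surfaces as CrashLoopBackOff.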