---
title: K8S lab6 (Pod/Deployment Extensions)
tags: k8s
---
K8S lab6 (Pod/Deployment Extensions)
===
[TOC]
## Container Probes
### What are Container Probes
In K8s, a probe is used to diagnose the state of a container. K8s provides three kinds: Startup Probes, Readiness Probes, and Liveness Probes.
- Startup Probes: let K8s know when the container has started. While a startup probe is configured, the liveness and readiness probes stay disabled until the Pod has started successfully.
- Readiness Probes: let K8s know whether the container is ready to receive traffic. A Pod counts as ready only when all of its containers are ready; a Pod that is not ready is removed from the Service's load-balancing list.
    - Diagram (image omitted)
- Liveness Probes: let K8s know when to restart a container. When the container can no longer handle new requests, it is restarted.
    - Diagram (image omitted)
### How to Use Container Probes
#### Startup Probes
- startupProbe snippet:
```yaml=
startupProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
```
Parameters:
- httpGet: sends an HTTP GET to the given path and port on the Pod; any response status code of at least 200 and below 400 counts as success.
- periodSeconds: how often the probe runs (default: 10)
- failureThreshold: how many consecutive failures are allowed before the probe is considered failed (default: 3, minimum: 1)
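With the values above, the startup probe gives the application up to failureThreshold × periodSeconds = 30 × 10 = 300 seconds to finish starting; only then does the liveness probe take over. A minimal sketch of the two probes combined (reusing the `/actuator/health` endpoint from the other examples in this lab; values are illustrative):
```yaml=
containers:
  - name: my-deposit
    image: <dockerhub-id>/deposit1:<tag>
    startupProbe:              # liveness stays disabled until this succeeds
      httpGet:
        path: /actuator/health
        port: 8080
      failureThreshold: 30     # allows up to 30 * 10s = 300s of startup time
      periodSeconds: 10
    livenessProbe:             # begins only after the startup probe succeeds
      httpGet:
        path: /actuator/health
        port: 8080
      periodSeconds: 5
```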
#### Liveness Probes
- livenessProbe snippet:
```yaml=
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 30
  successThreshold: 1
  failureThreshold: 3
```
Parameters:
- httpGet: sends an HTTP GET to the given path and port on the Pod; any response status code of at least 200 and below 400 counts as success.
- initialDelaySeconds: how long to wait after the container starts before running the first probe
- periodSeconds: how often the liveness probe runs (default: 10)
- timeoutSeconds: the probe timeout in seconds (default: 1)
- successThreshold: the number of consecutive successes required after a failure to be considered successful again; must be 1 for liveness and startup probes
- failureThreshold: how many consecutive failures are allowed before the container is restarted (default: 3, minimum: 1)
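Besides `httpGet`, a probe can also run a command inside the container (`exec`) or attempt a TCP connection (`tcpSocket`). Two short sketches with illustrative values:
```yaml=
livenessProbe:
  exec:                  # success when the command exits with status 0
    command: ["cat", "/tmp/healthy"]
  periodSeconds: 5
---
livenessProbe:
  tcpSocket:             # success when a TCP connection to the port succeeds
    port: 8080
  periodSeconds: 5
```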

- Normal scenario test:
- Adjust configmap.yaml
```yaml=
apiVersion: v1
kind: ConfigMap
metadata:
  name: deposit-config
data:
  application.yml: |
    server:
      port: 8080
      shutdown: graceful
    spring:
      application:
        name: deposit
    management:
      endpoint:
        shutdown:
          enabled: true
        health:
          probes:
            enabled: true
      endpoints:
        web:
          exposure:
            include: '*'
      health:
        livenessState:
          enabled: true
        readinessState:
          enabled: true
    endpoints:
      shutdown:
        enabled: true
    version: 'v1.0.5'
    readConfigSeconds: 1
    backend-endpoint: 'http://customer-external-service:8080/api/customer'
```
- Adjust deployment.yml
```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lab-my-deposit-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deposit-app
  template:
    metadata:
      labels:
        app: deposit-app
    spec:
      containers:
        - name: my-deposit
          image: <dockerhub-id>/deposit1:<tag>
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: app-config
              mountPath: /app/config
              readOnly: true
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            periodSeconds: 5
            timeoutSeconds: 30
            successThreshold: 1
            failureThreshold: 3
      volumes:
        - name: app-config
          configMap:
            name: <config-name>
```
- Apply the ConfigMap and Deployment to K8s
```
... practice on your own
```
- View the logs of any Pod
```shell=
kubectl logs -n <namespace-name> <pod-name>
```
```
$ kubectl logs -n my-namespace lab-my-deposit-67b9d54f48-dkbcp
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.6.3)
2022-03-18 08:33:04.436 INFO 1 --- [ main] com.webcomm.DepositApplication : Starting DepositApplication v2.6.3 using Java 1.8.0_212 on lab-my-deposit-67b9d54f48-dkbcp with PID 1 (/app/deposit.jar started by root in /app)
2022-03-18 08:33:04.441 INFO 1 --- [ main] com.webcomm.DepositApplication : No active profile set, falling back to default profiles: default
2022-03-18 08:33:07.286 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2022-03-18 08:33:07.313 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2022-03-18 08:33:07.314 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.56]
2022-03-18 08:33:07.449 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2022-03-18 08:33:07.449 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 2905 ms
read_config
2022-03-18 08:33:19.572 INFO 1 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 14 endpoint(s) beneath base path '/actuator'
2022-03-18 08:33:19.667 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2022-03-18 08:33:19.695 INFO 1 --- [ main] com.webcomm.DepositApplication : Started DepositApplication in 16.307 seconds (JVM running for 17.3)
2022-03-18 08:33:29.157 INFO 1 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2022-03-18 08:33:29.157 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2022-03-18 08:33:29.163 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 5 ms
2022-03-18 08:33:29.301 INFO 1 --- [nio-8080-exec-1] com.webcomm.helper.HealthProvider : UP
2022-03-18 08:33:34.050 INFO 1 --- [nio-8080-exec-2] com.webcomm.helper.HealthProvider : UP
2022-03-18 08:33:39.055 INFO 1 --- [nio-8080-exec-3] com.webcomm.helper.HealthProvider : UP
2022-03-18 08:33:44.050 INFO 1 --- [nio-8080-exec-4] com.webcomm.helper.HealthProvider : UP
```
You can see that the liveness probe calls /actuator/health every 5 seconds to confirm the Pod is alive.
- Failure scenario test:
    - Scenario 1: problems caused by a too-short initialDelaySeconds
        - Possible causes
            - A slow-starting service
            - initialDelaySeconds set too short
        - Fix
            - Configure a startupProbe or a longer initialDelaySeconds
        - Simulation steps
            - Edit the deployment.yaml file
```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lab-my-deposit-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deposit-app
  template:
    metadata:
      labels:
        app: deposit-app
    spec:
      containers:
        - name: my-deposit
          image: asaeeen/deposit1:v1.0.31
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: app-config
              mountPath: /app/config
              readOnly: true
          livenessProbe:
            httpGet:
              path: /api/health
              port: 8080
            periodSeconds: 5
            timeoutSeconds: 3
            initialDelaySeconds: 3
      volumes:
        - name: app-config
          configMap:
            name: deposit-config
# initialDelaySeconds is set to 3 seconds here to simulate a failing startup
```
- Deploy the deposit service
```
$ kubectl apply -f my-deposit-deployment.yaml
deployment.apps/lab-my-deposit-deployment created
```
- After a while (about 3 minutes), check the Pod status
```
danny@NB0128:/mnt/c/webcommCode/POC/k8s/ymal/deployment$ kubectl get pods
NAME READY STATUS RESTARTS AGE
lab-my-deposit-deployment-5f575485f-7sb7v 1/1 Running 4 (3s ago) 2m24s
lab-my-deposit-deployment-5f575485f-nh5ws 1/1 Running 4 (3s ago) 2m24s
lab-my-deposit-deployment-5f575485f-pxl9x 0/1 CrashLoopBackOff 4 (5s ago) 2m24s
net-tool 1/1 Running 0 3h36m
```
From the output above, the three Pods have each restarted 4 times, and the middle Pod has entered CrashLoopBackOff. You can debug this with `kubectl describe pod <pod-name>`
```
... (output above omitted) ...
$ kubectl describe pod lab-my-deposit-deployment-5f575485f-pxl9x
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned default/lab-my-deposit-deployment-5f575485f-pxl9x to gke-pratice-default-pool-55e9a357-pfeo
Normal Killing 14m (x3 over 15m) kubelet Container my-deposit failed liveness probe, will be restarted
Normal Pulled 14m (x4 over 16m) kubelet Container image "asaeeen/deposit1:v1.0.31" already present on machine
Normal Created 14m (x4 over 16m) kubelet Created container my-deposit
Normal Started 14m (x4 over 16m) kubelet Started container my-deposit
Warning Unhealthy 11m (x19 over 16m) kubelet Liveness probe failed: Get "http://10.24.2.27:8080/api/health": dial tcp 10.24.2.27:8080: connect: connection refused
Warning BackOff 55s (x55 over 13m) kubelet Back-off restarting failed container
```
The events above show that the Pod keeps being killed and recreated because the health check fails.
To be safe, also check the logs to confirm whether it is an application problem:
```
kubectl logs lab-my-deposit-deployment-5f575485f-pxl9x
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.6.3)
2022-03-18 09:36:28.458 INFO 1 --- [ main] com.webcomm.DepositApplication : Starting DepositApplication v2.6.3 using Java 1.8.0_212 on lab-my-deposit-deployment-5f575485f-pxl9x with PID 1 (/app/example.jar started by root in /app)
2022-03-18 09:36:28.462 INFO 1 --- [ main] com.webcomm.DepositApplication : No active profile set, falling back to default profiles: default
2022-03-18 09:36:31.132 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2022-03-18 09:36:31.158 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2022-03-18 09:36:31.159 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.56]
2022-03-18 09:36:31.292 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2022-03-18 09:36:31.292 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 2727 ms
read_config
2022-03-18 09:36:53.425 INFO 1 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 14 endpoint(s) beneath base path '/actuator'
2022-03-18 09:36:53.517 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2022-03-18 09:36:53.551 INFO 1 --- [ main] com.webcomm.DepositApplication : Started DepositApplication in 26.257 seconds (JVM running for 27.203)
2022-03-18 09:36:53.558 INFO 1 --- [ionShutdownHook] o.s.b.w.e.tomcat.GracefulShutdown : Commencing graceful shutdown. Waiting for active requests to complete
2022-03-18 09:36:53.587 INFO 1 --- [tomcat-shutdown] o.s.b.w.e.tomcat.GracefulShutdown : Graceful shutdown complete
```
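Applying the fix suggested above: instead of stretching initialDelaySeconds, a startupProbe can absorb the slow startup. A sketch of just the probe section (the threshold values here are illustrative, not taken from the lab):
```yaml=
startupProbe:
  httpGet:
    path: /api/health
    port: 8080
  periodSeconds: 5
  failureThreshold: 12     # tolerates up to 12 * 5s = 60s of startup time
livenessProbe:
  httpGet:
    path: /api/health
    port: 8080
  periodSeconds: 5
  timeoutSeconds: 3        # no initialDelaySeconds needed: the startupProbe gates it
```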
#### Readiness Probes
- readinessProbe snippet:
```yaml=
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 20
  timeoutSeconds: 30
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 5
```
- Normal scenario test:
- Adjust deployment.yml
```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lab-my-deposit-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deposit-app
  template:
    metadata:
      labels:
        app: deposit-app
    spec:
      containers:
        - name: my-deposit
          image: <dockerhub-id>/deposit1:<tag>
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: app-config
              mountPath: /app/config
              readOnly: true
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            periodSeconds: 5
            timeoutSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            timeoutSeconds: 30
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 5
      volumes:
        - name: app-config
          configMap:
            name: <config-name>
```
- Apply the Deployment to K8s
```
practice on your own ...
```
- Call the API through the Service
```
/app # curl --location --request GET "http://10.8.11.8:8080/api/version"
{"version":"v1.0.2","datetime":"2022/03/21 14:00:51"}
```
- Failure scenario test
- Adjust deployment.yml to create a failure
```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lab-my-deposit-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deposit-app
  template:
    metadata:
      labels:
        app: deposit-app
    spec:
      containers:
        - name: my-deposit
          image: <dockerhub-id>/deposit1:<tag>
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: app-config
              mountPath: /app/config
              readOnly: true
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            periodSeconds: 5
            timeoutSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            timeoutSeconds: 30
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 5
      volumes:
        - name: app-config
          configMap:
            name: <config-name>
```
- Apply the Deployment to K8s
Delete the original Deployment first, then apply the new one (practice on your own)
- Call the API directly via the Pod IP
```
/app # curl --location --request GET "http://10.4.2.16:8080/api/version"
{"version":"v1.0.2","datetime":"2022/03/21 15:05:55"}
```
- Call the API through the Service
```
/app # curl --location --request GET "http://10.8.11.8:8080/api/version"
curl: (7) Failed to connect to 10.8.11.8 port 8080: Connection refused
```
Notice that although the Pod itself runs normally, when the readiness probe points at the wrong endpoint, the Pod's API can no longer be reached through the Service.
- Inspect the Pod
```
$ kubectl describe pods -n my-namespace lab-my-deposit-754fd65577-pqkvw
... (output above omitted)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m3s default-scheduler Successfully assigned my-namespace/lab-my-deposit-754fd65577-pqkvw to gke-cluster-1-default-pool-4c7db623-3t7t
Normal Pulled 3m3s kubelet Container image "marklab1108/deposit:v1.0.4" already present on machine
Normal Created 3m3s kubelet Created container my-deposit
Normal Started 3m3s kubelet Started container my-deposit
Warning Unhealthy 4s (x16 over 2m34s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404
```
You can see that the readiness probe is failing.
## Resource Management
### What is Resource Management
When defining a Pod, you can optionally declare how many resources each container needs. The most basic resources in K8s are CPU and memory.
#### Request and Limit
- request
Defines the **minimum** resources a container needs; kube-scheduler uses the requested amounts to decide which Node the Pod is scheduled on.
> kube-scheduler: the component responsible for assigning Pods to Nodes.
- limit
Defines the **maximum** resources a container may use.
#### Resource Types
- CPU
Measured in `m` (millicores): **1000m = 1 vCPU**, so setting 500m means using 0.5 vCPU.
> K8s does not allow CPU settings smaller than **1m**
- Memory
Measured in `bytes`; the suffixes Ei, Pi, Ti, Gi, Mi, and Ki can also be used.
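The unit rules above can be sanity-checked with plain shell arithmetic (nothing K8s-specific here):
```shell=
# CPU: 1000m = 1 vCPU, so a fractional CPU maps to millicores
cpu="0.5"
awk -v c="$cpu" 'BEGIN { printf "%s vCPU = %dm\n", c, c * 1000 }'   # 0.5 vCPU = 500m

# Memory: Mi/Gi are power-of-two units (1Mi = 1024 * 1024 bytes)
echo "128Mi = $((128 * 1024 * 1024)) bytes"                         # 134217728 bytes
echo "2Gi   = $((2 * 1024 * 1024 * 1024)) bytes"                    # 2147483648 bytes
```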
### How to Configure Resource Management
#### Limiting a Pod's available resources (cpu/memory)
- Example YAML
- Snippet:
```yaml=
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
```
- Full example
```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lab-my-deposit-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deposit-app
  template:
    metadata:
      labels:
        app: deposit-app
    spec:
      containers:
        - name: my-deposit
          image: <dockerhub-id>/deposit1:<tag>
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: app-config
              mountPath: /app/config
              readOnly: true
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            periodSeconds: 5
            timeoutSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            timeoutSeconds: 30
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 5
          resources:
            requests:
              memory: "600Mi"
              cpu: "300m"
            limits:
              memory: "600Mi"
      volumes:
        - name: app-config
          configMap:
            name: <config-name>
```
- Update the Deployment
- Inspect the Pod with describe
```
$ kubectl describe pods -n my-namespace lab-my-deposit-65d98bdd9-shk6p
Name: lab-my-deposit-65d98bdd9-shk6p
Namespace: my-namespace
Priority: 0
Node: gke-cluster-1-default-pool-4c7db623-s14x/10.128.0.11
Start Time: Tue, 22 Mar 2022 16:44:45 +0800
Labels: app=lab-my-deposit
pod-template-hash=65d98bdd9
Annotations: <none>
Status: Running
IP: 10.4.3.17
IPs:
IP: 10.4.3.17
Controlled By: ReplicaSet/lab-my-deposit-65d98bdd9
Containers:
my-deposit:
Container ID: containerd://8b6eae1153ec2c8c5df94985417ff9cd3e8ab1c4cced62bde5168e100965c958
Image: marklab1108/deposit:v1.0.4
Image ID: docker.io/marklab1108/deposit@sha256:8740eaebac2a5e681555ffdd1f783122e2efaac71fda80df88854b7f81d21707
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 22 Mar 2022 16:44:46 +0800
Ready: True
Restart Count: 0
Limits:
memory: 600Mi
Requests:
cpu: 300m
memory: 600Mi
Liveness: http-get http://:8080/actuator/health delay=20s timeout=60s period=5s #success=1 #failure=3
Readiness: http-get http://:8080/actuator/health delay=20s timeout=30s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/app/config from app-config (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5c2gk (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
app-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: deposit-config
Optional: false
kube-api-access-5c2gk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m42s default-scheduler Successfully assigned my-namespace/lab-my-deposit-65d98bdd9-shk6p to gke-cluster-1-default-pool-4c7db623-s14x
Normal Pulled 7m42s kubelet Container image "marklab1108/deposit:v1.0.4" already present on machine
Normal Created 7m42s kubelet Created container my-deposit
Normal Started 7m42s kubelet Started container my-deposit
```
- Inspect the Node details with describe
```
$ kubectl describe nodes gke-cluster-1-default-pool-4c7db623-s14x
Name: gke-cluster-1-default-pool-4c7db623-s14x
... (middle omitted)
Capacity:
attachable-volumes-gce-pd: 15
cpu: 2
ephemeral-storage: 98868448Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 4029336Ki
pods: 110
Allocatable:
attachable-volumes-gce-pd: 15
cpu: 940m
ephemeral-storage: 47093746742
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2883480Ki
pods: 110
... (middle omitted)
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system fluentbit-gke-g76f7 100m (10%) 0 (0%) 200Mi (7%) 500Mi (17%) 3d5h
kube-system gke-metrics-agent-bpqp2 3m (0%) 0 (0%) 50Mi (1%) 50Mi (1%) 3d5h
kube-system konnectivity-agent-65cdff9f68-rftwd 10m (1%) 0 (0%) 30Mi (1%) 125Mi (4%) 3d5h
kube-system kube-dns-697dc8fc8b-548j8 260m (27%) 0 (0%) 110Mi (3%) 210Mi (7%) 3d5h
kube-system kube-proxy-gke-cluster-1-default-pool-4c7db623-s14x 100m (10%) 0 (0%) 0 (0%) 0 (0%) 3d5h
kube-system pdcsi-node-pp9xr 10m (1%) 0 (0%) 20Mi (0%) 100Mi (3%) 3d5h
my-namespace lab-my-deposit-65d98bdd9-shk6p 300m (31%) 0 (0%) 600Mi (21%) 600Mi (21%) 13m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 783m (83%) 0 (0%)
memory 1010Mi (35%) 1585Mi (56%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
attachable-volumes-gce-pd 0 0
Events: <none>
```
You can see that the node has allocated 300m CPU and 600Mi memory to the Pod.
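The percentages in the node output are simply (summed requests) ÷ (allocatable). A quick check of the 83% CPU-requests figure, using the seven Pods listed above:
```shell=
allocatable_m=940                                      # node allocatable cpu (940m)
requested_m=$((100 + 3 + 10 + 260 + 100 + 10 + 300))   # the 7 Pods listed above
echo "total cpu requests: ${requested_m}m"             # 783m
awk -v r="$requested_m" -v a="$allocatable_m" \
    'BEGIN { printf "cpu requests = %.0f%%\n", r / a * 100 }'   # 83%
```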
### Limiting a namespace's total resources (cpu/memory)
To manage resources at the Namespace level, use a ResourceQuota.
#### What is ResourceQuota
A ResourceQuota places limits on the total resource consumption of a Namespace.
It can cap the total number of objects of a given type in the Namespace, and it can also cap the total compute resources that the Pods in the namespace may use.
#### How to limit a namespace's total resources
- Create a new Namespace
```yaml=
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace-name>
```
- Deploy the new Namespace
```shell=
kubectl apply -f <path-to-namespace-yaml> -n <namespace-name>
```
- Create the ResourceQuota YAML
```yaml=
apiVersion: v1
kind: ResourceQuota
metadata:
  name: <resourcequota-name>
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "3"
    limits.memory: 3Gi
```
- Deploy the ResourceQuota into the new Namespace
```shell=
kubectl apply -f <path-to-resourcequota-yaml> -n <namespace-name>
```
- Check the ResourceQuota status
```shell=
kubectl get resourcequotas -n <namespace-name> <resourcequota-name> -o yaml
```
```
$ kubectl get resourcequotas -n resourcequota-namespace lab-resourcequota -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ResourceQuota","metadata":{"annotations":{},"name":"lab-resourcequota","namespace":"resourcequota-namespace"},"spec":{"hard":{"limits.cpu":"3","limits.memory":"3Gi","requests.cpu":"2","requests.memory":"2Gi"}}}
  creationTimestamp: "2022-03-24T02:57:46Z"
  name: lab-resourcequota
  namespace: resourcequota-namespace
  resourceVersion: "9012784"
  uid: 38508087-4d1d-43f1-8c7d-c92afd6da336
spec:
  hard:
    limits.cpu: "3"
    limits.memory: 3Gi
    requests.cpu: "2"
    requests.memory: 2Gi
status:
  hard:
    limits.cpu: "3"
    limits.memory: 3Gi
    requests.cpu: "2"
    requests.memory: 2Gi
  used:
    limits.cpu: "0"
    limits.memory: "0"
    requests.cpu: "0"
    requests.memory: "0"
```
No Pods exist in this Namespace yet, so every value under `used` in `status` is still 0.
- Deploy the Deployment into the new Namespace
```yaml=
resources:
  requests:
    memory: "600Mi"
    cpu: "300m"
  limits:
    memory: "600Mi"
    cpu: "1"
```
:::info
:bulb: Reminder:
The ResourceQuota constrains cpu and memory for both requests and limits, so before deploying, check that `resources.requests` and `resources.limits` in deployment.yml both specify cpu and memory.
:::
:::info
:bulb: Reminder: the ConfigMap must be deployed too!!!
:::
- Check the ResourceQuota status
```
$ kubectl get resourcequotas -n resourcequota-namespace lab-resourcequota -o yaml
apiVersion: v1
... (middle omitted)
status:
  hard:
    limits.cpu: "3"
    limits.memory: 3Gi
    requests.cpu: "2"
    requests.memory: 2Gi
  used:
    limits.cpu: "3"
    limits.memory: 1800Mi
    requests.cpu: 900m
    requests.memory: 1800Mi
```
Once Pods are deployed into the Namespace, `used` under `status` shows the resources those Pods occupy.
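These `used` values are just the per-Pod numbers from the `resources` block above multiplied by the replica count (3). The arithmetic:
```shell=
replicas=3
req_cpu_m=300;  req_mem_mi=600    # per-Pod requests
lim_cpu_m=1000; lim_mem_mi=600    # per-Pod limits (cpu: "1" = 1000m)

echo "requests.cpu    = $((replicas * req_cpu_m))m"     # 900m
echo "requests.memory = $((replicas * req_mem_mi))Mi"   # 1800Mi
echo "limits.cpu      = $((replicas * lim_cpu_m))m"     # 3000m, shown as "3"
echo "limits.memory   = $((replicas * lim_mem_mi))Mi"   # 1800Mi
```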
#### Test: when (total Pod resources) > (namespace available resources)
- When the Pods' requested resources exceed the namespace request quota
- Set the ResourceQuota smaller:
```yaml=
spec:
  hard:
    requests.cpu: "1"
    requests.memory: "1Gi"
    limits.cpu: "2"
    limits.memory: "2Gi"
```
- Set the `resources` in the Deployment larger:
```yaml=
resources:
  requests:
    memory: "600Mi"
    cpu: "1.1"
  limits:
    memory: "600Mi"
    cpu: "2"
```
- Deploy the ResourceQuota and the Deployment
- When the Pods' limit resources exceed the namespace limit quota
K8s will refuse to run the Pods
```
$ kubectl get pods -n resourcequota-namespace -o wide
No resources found in resourcequota-namespace namespace.
```
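The empty result is expected: even a single Pod from this Deployment requests more CPU than the whole quota allows, so no Pod can be created at all (the rejection is recorded in the ReplicaSet's events rather than on a failed Pod). The arithmetic:
```shell=
# Quota: requests.cpu "1" (1000m), limits.cpu "2" (2000m)
# Each Pod: requests.cpu "1.1" (1100m), limits.cpu "2" (2000m); replicas: 3
echo "one Pod's request: 1100m > quota 1000m"
echo "all 3 requests:    $((3 * 1100))m > quota 1000m"   # 3300m
echo "all 3 limits:      $((3 * 2000))m > quota 2000m"   # 6000m
```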

```
4 * 2 core = 8 core
4 * 8G = 32G
Total Pod resources: x
Namespace max: (x * 0.2) + x
```
## References
### Container Probes
- [Container probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)
- [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
- [How to Perform Health checks in Kubernetes](https://medium.com/avmconsulting-blog/how-to-perform-health-checks-in-kubernetes-k8s-a4e5300b1f9d)
- [Liveness and Readiness Probes in Spring Boot](https://www.baeldung.com/spring-liveness-readiness-probes)
- [Kubernetes best practices: Setting up health checks with readiness and liveness probes](https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes)
### Resource Management
- [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)
- [k8s CPU and memory unit reference (Chinese)](https://godleon.github.io/blog/Kubernetes/k8s-Scheduling-Manage-Compute-Resource-for-Container/)
- [Resource Quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/)
- [Configure Memory and CPU Quotas for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)