---
title: K8S lab8 (PV & PVC)
tags: k8s
---

K8S lab8 (PV & PVC)
===

[TOC]

## What are PV and PVC

### PersistentVolume (PV)

A PV is the resource used to manage storage in a cluster. It supports many storage plugins, such as NFS, Glusterfs, and HostPath, and it has a lifecycle independent of any pod.

### PersistentVolumeClaim (PVC)

A PVC is a request against PVs for a specific size, [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/), and so on. A PVC is a namespace-scoped resource.

## How to use PV and PVC

### Create a PV

0. Preparation: use GCP Filestore as the NFS server
    1. Click "Create instance"
    ![](https://hackmd.io/_uploads/rkU7BdqX9.png)
    :::info
    :bulb: Note: the first time you use Filestore, you need to enable the Filestore API
    :::
    2. Fill in the settings
    Only the instance ID and the file share name are required; then click "Create"
    3. Wait for the instance to be created
    ![](https://hackmd.io/_uploads/H1qhUd57c.png)
    4. When it is done, open the instance details
    ![](https://hackmd.io/_uploads/ByWsFO9mc.png)
1. Create pv.yml
    ```yaml=
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: lab-deposit-log-pv
      labels:
        claim: lab-deposit-log
    spec:
      storageClassName: manual
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        path: /nfs_storage
        server: 10.144.135.26
    ```
    - Parameters
        - `metadata.labels`: labels that a PVC selector can use to bind to this PV
        - `spec.storageClassName`: the [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) name; a PV and a PVC only bind when their Storage Classes match
        - `spec.capacity`: how much storage this PV provides
        - `spec.accessModes`: how the PV may be mounted; there are three modes: ReadWriteOnce (RWO), ReadOnlyMany (ROX), and ReadWriteMany (RWX)
            - ReadWriteOnce: can be mounted read-write by a single node only
            - ReadOnlyMany: can be mounted read-only by many nodes
            - ReadWriteMany: can be mounted read-write by many nodes
        - `spec.nfs`: connection information for the NFS server. This can be swapped for another plugin, such as hostPath.
2. Deploy the PV to K8s
    ```shell=
    kubectl apply -f <path-to-pv-config>
    ```
3. Check the PV
    ```shell=
    kubectl get pv
    ```
    ```
    $ kubectl get pv
    NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    lab-deposit-log-pv   10Gi       RWX            Retain           Available           manual                  7s
    ```

### Create a PVC
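The sizes in these manifests (a `10Gi` capacity, a `3Gi` request) are Kubernetes quantities with binary suffixes, and the controller compares them numerically when deciding whether a PV can satisfy a claim. A minimal Python sketch of that conversion (a simplified stand-in, not the real Kubernetes `resource.Quantity` parser):

```python
# Minimal sketch of converting Kubernetes binary-suffix quantities
# (e.g. "10Gi") to bytes so they can be compared. This is NOT the real
# Kubernetes resource.Quantity parser -- it only covers binary suffixes.

BINARY_SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def quantity_to_bytes(quantity: str) -> int:
    """Convert a quantity like '10Gi' or '512Mi' to a number of bytes."""
    for suffix, factor in BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # a plain integer already means bytes

print(quantity_to_bytes("10Gi"))                                  # 10737418240
print(quantity_to_bytes("3Gi") <= quantity_to_bytes("10Gi"))      # True: a 3Gi request fits a 10Gi PV
```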
1. Create pvc.yml
    ```yaml=
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lab-deposit-log-pvc
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 3Gi
      selector:
        matchLabels:
          claim: lab-deposit-log
    ```
    - Parameters
        - `spec.accessModes`: same meaning as in the PV
        - `spec.resources.requests.storage`: how much storage the PVC requests
        - `spec.volumeName`: optionally binds the PVC directly to a PV by name
        - `spec.selector.matchLabels`: binds the PVC only to PVs with matching labels
2. Deploy the PVC to K8s
    ```shell=
    kubectl apply -f <path-to-pvc-config> -n <namespace-name>
    ```
3. Check the PVC
    ```shell=
    kubectl get pvc -n <namespace-name>
    ```
    ```
    $ kubectl get pvc -n my-namespace
    NAME                  STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lab-deposit-log-pvc   Bound    lab-deposit-log-pv   10Gi       RWX            manual         7s
    ```
4. Confirm that it is bound to the PV
    ```shell=
    kubectl get pv
    ```
    ```
    $ kubectl get pv
    NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE
    lab-deposit-log-pv   10Gi       RWX            Retain           Bound    my-namespace/lab-deposit-log-pvc   manual                  86s
    ```
5. Negative test: the PVC requests more than the PV offers
    When no PV in the cluster can satisfy a PVC, the PVC stays unbound until a matching PV appears.
    Adjust pv.yml, changing storageClassName to the GKE default `standard`:
    ```yaml=
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: lab-deposit-log-pv
      labels:
        claim: lab-deposit-log
    spec:
      storageClassName: standard
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        path: /nfs_storage
        server: 10.31.201.122
    ```
    Adjust pvc.yml:
    ```yaml=
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lab-deposit-log-pvc
    spec:
      storageClassName: standard
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 11Gi
      selector:
        matchLabels:
          claim: lab-deposit-log
    ```
    After deploying to K8s, describing the PVC shows it stuck in Pending: no existing PV can satisfy an 11Gi request, and the `standard` StorageClass on GCE cannot dynamically provision a PV for a claim that uses a selector, so the PVC stays Pending.
    ```
    $ kubectl describe pvc -n my-namespace lab-deposit-log-pvc
    Name:          lab-deposit-log-pvc
    Namespace:     my-namespace
    StorageClass:  standard
    Status:        Pending
    Volume:
    Labels:        <none>
    Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:
    Access Modes:
    VolumeMode:    Filesystem
    Used By:       <none>
    Events:
      Type     Reason              Age                    From                         Message
      ----     ------              ----                   ----                         -------
      Warning  ProvisioningFailed  99s (x190 over 6h52m)  persistentvolume-controller  Failed to provision volume with StorageClass "standard": claim.Spec.Selector is not supported for dynamic provisioning on GCE
    ```
6. Test a PVC with no StorageClass specified
    Adjust pvc.yml:
    ```yaml=
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lab-deposit-log-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 3Gi
      selector:
        matchLabels:
          claim: lab-deposit-log
    ```
    After deploying to K8s, inspect the PVC:
    ```
    $ kubectl describe pvc -n my-namespace
    Name:          lab-deposit-log-pvc
    Namespace:     my-namespace
    StorageClass:  standard
    Status:        Pending
    Volume:
    Labels:        <none>
    Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:
    Access Modes:
    VolumeMode:    Filesystem
    Used By:       <none>
    Events:
      Type     Reason              Age                  From                         Message
      ----     ------              ----                 ----                         -------
      Warning  ProvisioningFailed  11s (x8 over 2m39s)  persistentvolume-controller  Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
    ```
    When the PVC does not specify a StorageClass, K8s fills in the default one automatically. Run `kubectl get storageClass` to list the StorageClasses in the cluster:
    ```
    $ kubectl get storageClass
    NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    premium-rwo          pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   8h
    standard (default)   kubernetes.io/gce-pd    Delete          Immediate              true                   8h
    standard-rwo         pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   8h
    ```
    You can see that `standard` is the GKE default StorageClass.

### Connecting a Pod to a PVC
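The pod below only becomes ready once its claim is Bound. As a recap of the experiments above, here is a simplified Python sketch of the checks that decide whether a PV can bind a PVC (a hypothetical helper, not the actual persistentvolume-controller):

```python
# Simplified sketch of the PV/PVC matching rules seen in this lab:
# storageClassName must match, the PV must offer every requested access
# mode, PV capacity must cover the request, and the PV's labels must
# satisfy the PVC selector. NOT the real persistentvolume-controller.

GI = 1024**3

def pv_matches_pvc(pv: dict, pvc: dict) -> bool:
    if pv["storageClassName"] != pvc["storageClassName"]:
        return False
    if not set(pvc["accessModes"]) <= set(pv["accessModes"]):
        return False
    if pv["capacityBytes"] < pvc["requestBytes"]:
        return False  # an 11Gi request can never bind a 10Gi PV
    selector = pvc.get("matchLabels", {})
    return all(pv.get("labels", {}).get(k) == v for k, v in selector.items())

pv = {"storageClassName": "manual", "accessModes": ["ReadWriteMany"],
      "capacityBytes": 10 * GI, "labels": {"claim": "lab-deposit-log"}}
pvc = {"storageClassName": "manual", "accessModes": ["ReadWriteMany"],
       "requestBytes": 3 * GI, "matchLabels": {"claim": "lab-deposit-log"}}

print(pv_matches_pvc(pv, pvc))                               # True  -> Bound
print(pv_matches_pvc(pv, {**pvc, "requestBytes": 11 * GI}))  # False -> Pending
```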
0. Adjust the Dockerfile, rebuild the image, and push it to DockerHub
    Add a `docker-entrypoint.sh`:
    ```shell=
    #!/bin/sh
    exec java -Dlogging.config=/app/config/logback.xml -jar ${APP_NAME}.jar --spring.config.name=application
    ```
    Adjust the Dockerfile:
    ```dockerfile=
    FROM openjdk:8-jre-alpine
    COPY deposit.jar /app/deposit.jar
    COPY ./docker-entrypoint.sh /app/docker-entrypoint.sh
    ENV APP_NAME deposit
    WORKDIR /app
    VOLUME /app/config
    VOLUME /app/logs
    EXPOSE 8080
    ENTRYPOINT [ "./docker-entrypoint.sh" ]
    ```
    Build the image and push it to DockerHub:
    ```
    ...practice this yourself
    ```
1. Adjust the ConfigMap, add logback.xml to it, and apply it to K8s
    ```yaml=
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: deposit-config
    data:
      application.yml: |-
        server:
          port: 8080
          shutdown: graceful
        spring:
          application:
            name: deposit
        management:
          endpoint:
            shutdown:
              enabled: true
            health:
              probes:
                enabled: true
          endpoints:
            web:
              exposure:
                include: '*'
          health:
            livenessState:
              enabled: true
            readinessState:
              enabled: true
        endpoints:
          shutdown:
            enabled: true
        version: 'v1.0.2'
        backend-endpoint: 'http://my-external-service:8080/api/customer'
      logback.xml: |-
        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
            <springProperty scope="context" name="serverName" source="HOSTNAME" />
            <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
                <encoder>
                    <pattern>[%d][%p][%t][%C-%L] %m%n</pattern>
                    <charset>utf8</charset>
                </encoder>
            </appender>
            <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
                <file>./logs/app.log</file>
                <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                    <fileNamePattern>./logs/app.%d{yyyy-MM-dd-HH}.log</fileNamePattern>
                </rollingPolicy>
                <encoder>
                    <pattern>[%d][%p][%t][%C-%L] %m%n</pattern>
                    <charset>utf8</charset>
                </encoder>
            </appender>
            <logger name="org.hibernate.SQL" additivity="false" >
                <level value="info" />
                <appender-ref ref="console" />
            </logger>
            <root level="info">
                <appender-ref ref="file" />
            </root>
        </configuration>
    ```
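Embedding a whole file such as logback.xml into the ConfigMap means indenting its content under a YAML block scalar (`|-`). A small Python sketch of that rendering step (a hypothetical helper; in practice `kubectl create configmap deposit-config --from-file=logback.xml` achieves the same result):

```python
# Sketch of how a file's content can be embedded as one `data` entry of a
# ConfigMap manifest, indented under a YAML block scalar (|-). Hypothetical
# helper for illustration only; kubectl can generate this for you.
import textwrap

def configmap_entry(key: str, content: str, indent: int = 4) -> str:
    """Render one `data` entry with the file content indented under '|-'."""
    body = textwrap.indent(content.rstrip("\n"), " " * (indent + 2))
    return f'{" " * indent}{key}: |-\n{body}'

logback = '<configuration>\n  <root level="info"/>\n</configuration>'
print(configmap_entry("logback.xml", logback))
```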
2. Example PVC settings in a Deployment
    ```yaml=
    spec:
      volumes:
        - name: app-log-storage
          persistentVolumeClaim:
            claimName: lab-deposit-log-pvc
      containers:
        - name: <container-name>
          volumeMounts:
            - name: app-log-storage
              mountPath: /app/logs
    ```
3. Adjust the Deployment to include the PVC settings, then redeploy
    ```yaml=
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: lab-my-deposit
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: lab-my-deposit
      template:
        metadata:
          labels:
            app: lab-my-deposit
        spec:
          containers:
            - name: my-deposit
              image: <dockerhub-id>/<image-name>:<version>
              ports:
                - containerPort: 8080
              volumeMounts:
                - mountPath: /app/config/application.yml
                  subPath: application.yml
                  name: app-config
                  readOnly: true
                - mountPath: /app/config/logback.xml
                  subPath: logback.xml
                  name: app-config
                  readOnly: true
                - mountPath: /app/logs
                  name: app-log-storage
              livenessProbe:
                httpGet:
                  path: /actuator/health
                  port: 8080
                  scheme: HTTP
                initialDelaySeconds: 20
                periodSeconds: 5
                timeoutSeconds: 60
                successThreshold: 1
                failureThreshold: 3
              readinessProbe:
                httpGet:
                  path: /actuator/health
                  port: 8080
                  scheme: HTTP
                initialDelaySeconds: 20
                timeoutSeconds: 30
                periodSeconds: 10
                successThreshold: 1
                failureThreshold: 5
              resources:
                requests:
                  memory: "600Mi"
                  cpu: "300m"
                limits:
                  memory: "600Mi"
                  cpu: "1"
          volumes:
            - name: app-config
              configMap:
                name: deposit-config
            - name: app-log-storage
              persistentVolumeClaim:
                claimName: lab-deposit-log-pvc
    ```
4. Enter the deposit pod and check
    ```shell=
    kubectl exec -n <namespace-name> -it <pod-name> sh
    ```
    ```
    $ kubectl exec -n my-namespace -it lab-my-deposit-7b6f9cbc44-6b8dd sh
    kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
    /app # cd logs
    /app/logs # ls -lSt
    total 72
    -rw-r--r--    1 root     root         31954 Apr  6 09:18 app.log
    -rw-r--r--    1 root     root         16713 Apr  6 09:00 app.2022-04-06-08.log
    drwx------    2 root     root         16384 Apr  6 02:11 lost+found
    ```
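A frequent mistake in Deployments like the one above is a `volumeMounts` entry whose `name` has no matching entry under `volumes`, which the API server rejects at creation time. A small Python sketch of that consistency check (a hypothetical validator; the API server performs the authoritative validation):

```python
# Sanity-check sketch for the Deployment's pod spec: every volumeMount's
# `name` must refer to a volume declared under `volumes`. Hypothetical
# validator for illustration -- not part of kubectl.

def undefined_mounts(pod_spec: dict) -> list:
    """Return the volumeMount names that have no matching volume."""
    defined = {v["name"] for v in pod_spec.get("volumes", [])}
    missing = []
    for container in pod_spec.get("containers", []):
        for mount in container.get("volumeMounts", []):
            if mount["name"] not in defined:
                missing.append(mount["name"])
    return missing

pod_spec = {
    "containers": [{"name": "my-deposit",
                    "volumeMounts": [{"name": "app-config", "mountPath": "/app/config"},
                                     {"name": "app-log-storage", "mountPath": "/app/logs"}]}],
    "volumes": [{"name": "app-config"},
                {"name": "app-log-storage"}],
}
print(undefined_mounts(pod_spec))  # [] -> the spec is consistent
```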
5. View the log
    ```
    /app # cat app.2022-04-06-08.log
    [2022-04-06 08:51:18,169][INFO][main][org.springframework.boot.StartupInfoLogger-55] Starting DepositApplication v2.6.3 using Java 1.8.0_212 on lab-my-deposit-7b6f9cbc44-6b8dd with PID 1 (/app/deposit.jar started by root in /app)
    [2022-04-06 08:51:18,227][INFO][main][org.springframework.boot.SpringApplication-637] No active profile set, falling back to default profiles: default
    [2022-04-06 08:51:23,824][INFO][main][org.springframework.boot.web.embedded.tomcat.TomcatWebServer-108] Tomcat initialized with port(s): 8080 (http)
    [2022-04-06 08:51:23,856][INFO][main][org.apache.juli.logging.DirectJDKLog-173] Initializing ProtocolHandler ["http-nio-8080"]
    [2022-04-06 08:51:23,858][INFO][main][org.apache.juli.logging.DirectJDKLog-173] Starting service [Tomcat]
    [2022-04-06 08:51:23,859][INFO][main][org.apache.juli.logging.DirectJDKLog-173] Starting Servlet engine: [Apache Tomcat/9.0.56]
    [2022-04-06 08:51:24,071][INFO][main][org.apache.juli.logging.DirectJDKLog-173] Initializing Spring embedded WebApplicationContext
    [2022-04-06 08:51:24,072][INFO][main][org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext-290] Root WebApplicationContext: initialization completed in 5626 ms
    [2022-04-06 08:51:38,141][INFO][main][org.springframework.boot.actuate.endpoint.web.EndpointLinksResolver-58] Exposing 14 endpoint(s) beneath base path '/actuator'
    [2022-04-06 08:51:38,267][INFO][main][org.apache.juli.logging.DirectJDKLog-173] Starting ProtocolHandler ["http-nio-8080"]
    [2022-04-06 08:51:38,351][INFO][main][org.springframework.boot.web.embedded.tomcat.TomcatWebServer-220] Tomcat started on port(s): 8080 (http) with context path ''
    [2022-04-06 08:51:38,431][INFO][main][org.springframework.boot.StartupInfoLogger-61] Started DepositApplication in 22.973 seconds (JVM running for 24.578)
    [2022-04-06 08:51:39,743][INFO][http-nio-8080-exec-2][org.apache.juli.logging.DirectJDKLog-173] Initializing Spring DispatcherServlet 'dispatcherServlet'
    [2022-04-06 08:51:39,743][INFO][http-nio-8080-exec-2][org.springframework.web.servlet.FrameworkServlet-525] Initializing Servlet 'dispatcherServlet'
    [2022-04-06 08:51:39,745][INFO][http-nio-8080-exec-2][org.springframework.web.servlet.FrameworkServlet-547] Completed initialization in 2 ms
    [2022-04-06 08:51:39,944][INFO][http-nio-8080-exec-2][com.webcomm.helper.HealthProvider-16] UP
    ...(omitted)
    [com.webcomm.helper.HealthProvider-16] UP
    ```

### Verify the logs are stored on the NFS server

1. Add the PVC to net-tool and redeploy it
2. Enter net-tool and inspect /app/logs
    ```
    /app # cd logs
    /app # ls -lSt
    total 60
    -rw-r--r--    1 root     root         22824 Apr  6 09:13 app.log
    -rw-r--r--    1 root     root         16713 Apr  6 09:00 app.2022-04-06-08.log
    drwx------    2 root     root         16384 Apr  6 02:11 lost+found
    ```
3. Pick a log file and check whether it is deposit's log
    ```
    /app # cat app.2022-04-06-08.log
    [2022-04-06 08:51:18,169][INFO][main][org.springframework.boot.StartupInfoLogger-55] Starting DepositApplication v2.6.3 using Java 1.8.0_212 on lab-my-deposit-7b6f9cbc44-6b8dd with PID 1 (/app/deposit.jar started by root in /app)
    [2022-04-06 08:51:18,227][INFO][main][org.springframework.boot.SpringApplication-637] No active profile set, falling back to default profiles: default
    [2022-04-06 08:51:23,824][INFO][main][org.springframework.boot.web.embedded.tomcat.TomcatWebServer-108] Tomcat initialized with port(s): 8080 (http)
    [2022-04-06 08:51:23,856][INFO][main][org.apache.juli.logging.DirectJDKLog-173] Initializing ProtocolHandler ["http-nio-8080"]
    [2022-04-06 08:51:23,858][INFO][main][org.apache.juli.logging.DirectJDKLog-173] Starting service [Tomcat]
    [2022-04-06 08:51:23,859][INFO][main][org.apache.juli.logging.DirectJDKLog-173] Starting Servlet engine: [Apache Tomcat/9.0.56]
    [2022-04-06 08:51:24,071][INFO][main][org.apache.juli.logging.DirectJDKLog-173] Initializing Spring embedded WebApplicationContext
    [2022-04-06 08:51:24,072][INFO][main][org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext-290] Root WebApplicationContext: initialization completed in 5626 ms
    [2022-04-06 08:51:38,141][INFO][main][org.springframework.boot.actuate.endpoint.web.EndpointLinksResolver-58] Exposing 14 endpoint(s) beneath base path '/actuator'
    [2022-04-06 08:51:38,267][INFO][main][org.apache.juli.logging.DirectJDKLog-173] Starting ProtocolHandler ["http-nio-8080"]
    [2022-04-06 08:51:38,351][INFO][main][org.springframework.boot.web.embedded.tomcat.TomcatWebServer-220] Tomcat started on port(s): 8080 (http) with context path ''
    [2022-04-06 08:51:38,431][INFO][main][org.springframework.boot.StartupInfoLogger-61] Started DepositApplication in 22.973 seconds (JVM running for 24.578)
    [2022-04-06 08:51:39,743][INFO][http-nio-8080-exec-2][org.apache.juli.logging.DirectJDKLog-173] Initializing Spring DispatcherServlet 'dispatcherServlet'
    [2022-04-06 08:51:39,743][INFO][http-nio-8080-exec-2][org.springframework.web.servlet.FrameworkServlet-525] Initializing Servlet 'dispatcherServlet'
    [2022-04-06 08:51:39,745][INFO][http-nio-8080-exec-2][org.springframework.web.servlet.FrameworkServlet-547] Completed initialization in 2 ms
    [2022-04-06 08:51:39,944][INFO][http-nio-8080-exec-2][com.webcomm.helper.HealthProvider-16] UP
    ...(omitted)
    [com.webcomm.helper.HealthProvider-16] UP
    ```
    It is indeed deposit's log, which completes the verification.

### Delete the PV & PVC

- Delete the PV
    ```shell=
    kubectl delete pv <pv-name>
    ```
- Delete the PVC
    ```shell=
    kubectl delete pvc -n <namespace-name> <pvc-name>
    ```

:::danger
:bulb: Reminder: delete the NFS server after testing, and recreate it when you need it again!!!
:::

## References

- [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
- [GKE - Persistent volumes and dynamic provisioning](https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes)
- [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/)
- [Filestore - Creating instances](https://cloud.google.com/filestore/docs/creating-instances#cloud-console)
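One last point on cleanup order: the PV in this lab uses the `Retain` reclaim policy, so deleting the PVC does not make the PV `Available` again; it moves to `Released` and keeps the old data until you clean it up manually. A tiny Python model of that behaviour (a simplified illustration, not the controller's actual logic):

```python
# Simplified model of what happens to a PV when its bound PVC is deleted,
# depending on the reclaim policy. Illustration only -- not the real
# persistentvolume-controller.

def pv_status_after_pvc_delete(reclaim_policy: str) -> str:
    if reclaim_policy == "Delete":
        return "Deleted"   # PV and its backing storage are removed
    if reclaim_policy == "Retain":
        return "Released"  # data is kept; manual cleanup needed before reuse
    raise ValueError(f"unknown reclaim policy: {reclaim_policy}")

print(pv_status_after_pvc_delete("Retain"))  # Released
```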