# 實作 K8s 實體機節點硬碟備份與還原 ## 0. Preface 本篇文章將實作透過 Clonezilla 再生龍來備份硬碟到其他裝置,並還原到新的硬碟上。 (實作環境透過 Proxmox 進行模擬) 實作兩個情境: 1. 備份 Control-plane node (同時扮演 **Etcd member**) 的硬碟,並還原在新的一顆乾淨的硬碟上。 > 實作環境並沒用切割系統碟和資料碟,通通儲存在同一顆硬碟。 2. 備份 Control-plane node (同時扮演 **Etcd leader**) 的硬碟,在備份完後,破壞一些 K8s 核心服務的檔案及目錄,再透過備份好的硬碟還原到新的硬碟上,最後檢查 K8s 是否能正常運作。 ## 1. 下載 Clonezilla live CD 1. 到以下 Clonezilla 官方網站下載 Clonezilla live CD: https://clonezilla.nchc.org.tw/clonezilla-live/download/ 2. 選擇 "穩定-發行版" :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/rJENEfqHxl.png) ::: 3. CPU 架構選擇:`amd64`, 檔案類別選擇:`iso`,檔案儲存庫:`auto`,確認好後點選 `下載` 按鈕 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/SkfTSfcBgg.png) ::: 4. 可透過 Rufus 將 ISO 做成 USB 可開機裝置; 如過是虛擬環境則將 ISO 匯入虛擬化平台 ## 2. 將第一顆硬碟備份,存為印象檔(disk image),放置在第二顆硬碟上 ### Step1: 由再生龍 live CD 開機 1. 新增一顆硬碟 (稍後會作為儲放備份 disk image 的硬碟) :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Syr0mNqHee.png) ::: 2. 設定 Size 為 `50` Gi (須視環境做調整),確認好後點選 Add 按鈕 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BySP4EcBle.png) ::: 3. 將 /dev/sdb 格式化為 XFS 檔案系統 ``` sudo mkfs.xfs /dev/sdb ``` 4. 將 VM 關機 5. 點選 Hardware -> 點選 Add -> 點選 CD/DVD Driver :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/rk8PbN9Sxl.png) ::: 6. 選擇 Storage 和 ISO image: `clonezilla-live-3.2.2-15-amd64.iso`,確認好後點選 Add 按鈕 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BkQEhzcHgx.png) ::: 7. 新增一顆硬碟 (稍後會作為儲放備份 disk image 的硬碟) :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Syr0mNqHee.png) ::: 8. 設定可開機裝置的順序,點選 options -> 點選 Boot Order -> 點選 Edit :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/ryYWP49Slx.png) ::: 9. 將 再生龍 live CD 設為第一個可開機裝置,確認好後點選 OK 按鈕 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HJhUDV5Sll.png) ::: 10. 將 VM 開機 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/ByeC_49ree.png) ::: ### Step2: Clonezilla live 開機選單 1. 選擇 "Clonezilla live (Default settings, VGA 800x600)": 第1個選項是 Clonezilla Live 的預設模式:使用 framebuffer 並設定螢幕解析度為 800x600. :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BkcQKE5rxg.png) ::: ### Step3: 選擇語言 1. 選擇 `zh_TW.UFT-8 Chinese (Traditional) | 正體中文 - 臺灣` :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/ryOYoNqBxe.png) ::: ### Step4: 選擇鍵盤 1. 使用預設的鍵盤,直接按 Enter 鍵 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HJt53Vqrgx.png) ::: ### Step5: 選擇 "使用再生龍" 1. 使用預設的選項,直接按 Enter 鍵 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HJ1WaEcHle.png) ::: ### Step6: 選擇 "device-image" 1. 選擇預設的選項:"device-image" :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BkDUTNqBxg.png) ::: ### Step7: 選擇 "local_dev" 來指定 sdb 作為 Disk image 家目錄 1. 選擇 "local_dev", 就可以將印象檔存到本機其他硬碟上或 USB 裝置. 如果使用 USB 儲存裝置,請等待一下後在將 USB 儲存裝置插上 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HkSIA4cBxg.png) ::: 2. 將 USB 接上電腦,過 5 秒後按 Enter 鍵(前面步驟已先額外新增一顆硬碟,故此步驟可直接按 Enter 鍵) :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BJpaCE5Sgg.png) ::: 3. 每過幾秒就會顯示裝置掃描的結果,如下面的提示 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HkfdxHqSxe.png) ::: 4. 確認裝置都認到後,按 `Ctrl + C` ### Step8: 選擇 sdb 作為 Disk image 的目錄 1. 選擇 sdb 硬碟裡的目錄作為印象檔目錄. :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HyLSEScBel.png) ::: 2. 選擇 "fsck" 檢查來源硬碟的檔案系統 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Hk6XBBqrxg.png) ::: 3. 選擇要哪個目錄給再生龍存 disk image,確定後按 Tab 鍵,選擇 `<Done>` :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/ryRSPH5Bgg.png) ::: 4. 顯示硬碟的使用狀況 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HkCjwrcSel.png) ::: 5. 選擇 "Beginner" (初學模式) :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/rkWy_S9Sgg.png) ::: 6. 選擇 "savedisk" :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/B1TNOHcrex.png) ::: ### Step9: 輸入 Disk image 檔名和選擇來源硬碟 1. 輸入備份之後 disk image 的檔名,確認後按 Enter 鍵 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HyJnur9rgx.png) ::: 2. 選擇要備份哪一顆硬碟,本篇範例為 sda :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Bk_xtr5Bex.png) ::: 3. 選擇印象檔壓縮演算法 `-z9p` (zstd): :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HykOKBcHge.png) ::: 4. 選擇 "fsck" 檢查來源硬碟的檔案系統 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BJ-_cr9Ble.png) ::: 5. 選擇檢查備份後的 Disk image 是否完整 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/S1YAqB9rxg.png) ::: 6. 選擇不要對 Disk image 加密 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Sk0jsB9Ble.png) ::: 7. 選擇將 log 檔也儲存在 Disk image 中 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/B1BCoB9rex.png) ::: 8. 選擇 `-p choose`,等程式跑完後再選擇關機或重開或進入命令模式 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BkTKhH9Bxg.png) ::: 9. Clonezilla 出現此次工作的完整命令提示. 此命令可在製作客制化Clonezilla live時使用,按 Enter 鍵繼續 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/S1YahBcBex.png) ::: 10. 開始備份之前,再次詢問是否進行,輸入 `y`,按 Enter 鍵確定開始備份 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Sy2rpS9ree.png) ::: ### Step 10: 完成將硬碟 (sda) 備份到第二顆硬碟 (sdb) 中 1. 完成時,Clonezilla 提示若要再次使用再生龍選單的方式,按 Enter 鍵繼續 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/B1PT185rxl.png) ::: 2. 選擇關機 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/H1tMlI5Sgl.png) ::: 3. 按 Enter 繼續關機 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/r1XtgU5Hle.png) ::: ## 3. 將 Disk 還原到第一顆硬碟上 ### Step1: 由再生龍 live CD 開機 1. 新增一顆硬碟 (稍後會作為儲放備份 disk image 的硬碟) :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Syr0mNqHee.png) ::: 2. 設定 Size 為 `100` Gi (須視環境做調整),確認好後點選 Add 按鈕 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/SJNcQLqreg.png) ::: 3. 點選 Hardware -> 點選 Add -> 點選 CD/DVD Driver :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/rk8PbN9Sxl.png) ::: 4. 選擇 Storage 和 ISO image: `clonezilla-live-3.2.2-15-amd64.iso`,確認好後點選 Add 按鈕 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BkQEhzcHgx.png) ::: 5. 設定可開機裝置的順序,點選 options -> 點選 Boot Order -> 點選 Edit :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/ryYWP49Slx.png) ::: 6. 將 再生龍 live CD 設為第一個可開機裝置,確認好後點選 OK 按鈕 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HJhUDV5Sll.png) ::: 7. 將 VM 開機 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/ByeC_49ree.png) ::: ### Step2: Clonezilla live 開機選單 1. 選擇 "Clonezilla live (Default settings, VGA 800x600)": 第1個選項是 Clonezilla Live 的預設模式:使用 framebuffer 並設定螢幕解析度為 800x600. :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BkcQKE5rxg.png) ::: ### Step3: 選擇語言 1. 選擇 `zh_TW.UFT-8 Chinese (Traditional) | 正體中文 - 臺灣` :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/ryOYoNqBxe.png) ::: ### Step4: 選擇鍵盤 1. 使用預設的鍵盤,直接按 Enter 鍵 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HJt53Vqrgx.png) ::: ### Step5: 選擇 "使用再生龍" 1. 使用預設的選項,直接按 Enter 鍵 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HJ1WaEcHle.png) ::: ### Step6: 選擇 "device-image" 1. 選擇預設的選項:"device-image" :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BkDUTNqBxg.png) ::: ### Step7: 選擇 "local_dev" 來指定 sdb 作為 Disk image 家目錄 1. 選擇 "local_dev", 就可以將印象檔存到本機其他硬碟上或 USB 裝置. 如果使用 USB 儲存裝置,請等待一下後在將 USB 儲存裝置插上 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HkSIA4cBxg.png) ::: 2. 將 USB 接上電腦,過 5 秒後按 Enter 鍵(前面步驟已先額外新增一顆硬碟,故此步驟可直接按 Enter 鍵) :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BJpaCE5Sgg.png) ::: 3. 每過幾秒就會顯示裝置掃描的結果,如下面的提示 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HkfdxHqSxe.png) ::: 4. 確認裝置都認到後,按 `Ctrl + C` ### Step8: 選擇 sdb 作為 Disk image 的目錄 1. 選擇 sdb 硬碟裡的目錄作為印象檔目錄. :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HyLSEScBel.png) ::: 2. 選擇 "fsck" 檢查來源硬碟的檔案系統 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Hk6XBBqrxg.png) ::: 3. 選擇要哪個目錄給再生龍存 disk image,確定後按 Tab 鍵,選擇 `<Done>` :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BkqoBL9Hgl.png) ::: 4. 顯示硬碟的使用狀況 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BkseIL5rle.png) ::: 5. 選擇 "Beginner" (初學模式) :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/rkWy_S9Sgg.png) ::: 6. 選擇 "restoredisk" :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Hy1EI89Hxl.png) ::: ### Step9: 選擇印象檔與目標碟 1. 選擇要透過哪個 Disk image 還原,確認後按 Enter 鍵 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HkXCLIqrlg.png) ::: 2. 選擇要還原到哪一顆硬碟上,本篇範例為 sdc :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/B1LbPLqHxe.png) ::: 3. 選擇印象檔中的硬碟分割表 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/SJ1FPIqBxg.png) ::: 4. 選擇 "是",在還原前檢查來源硬硬碟的檔案完整性 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Sk8y_89Hxg.png) ::: 5. 選擇將 log 檔也儲存在 Disk image 中 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/H1v8dUcSll.png) ::: 6. 選擇 `-p choose`,等程式跑完後再選擇關機或重開或進入命令模式 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/BkTKhH9Bxg.png) ::: 7. Clonezilla 出現此次工作的完整命令提示. 此命令可在製作客制化Clonezilla live時使用,按 Enter 鍵繼續 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Bk5KO89rel.png) ::: 8. 因為剛剛有選擇要檢查印象檔所以在還原之前, 就會進行印象檔的檢查 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/SJHDtI5Sex.png) ::: 9. 開始備份之前,詢問是否進行,按 Enter 鍵確定開始備份 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/Hk8fc8cHlx.png) ::: 10. 開始備份之前,再次詢問是否進行,按`y`,再按 Enter 鍵確定開始備份 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/HJPIcLcrgl.png) ::: 10. 開始備份之前,第三次詢問是否進行,按`y`,再按 Enter 鍵確定開始備份 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/SyEs5U9rel.png) ::: ### Step 10: 完成將硬碟 (sda) 備份到第二顆硬碟 (sdb) 中 1. 完成時,Clonezilla 提示若要再次使用再生龍選單的方式,按 Enter 鍵繼續 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/H1bn1P5Hge.png) ::: 2. 選擇關機 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/H1tMlI5Sgl.png) ::: 3. 按 Enter 繼續關機 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/r13RkP5Ble.png) ::: ### Step 11: 驗證是否備份成功 1. 設定可開機裝置的順序,點選 options -> 點選 Boot Order -> 點選 Edit :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/ryYWP49Slx.png) ::: 2. 將第三顆硬碟 sdc 設為第一個可開機裝置,確認好後點選 OK 按鈕 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/SJL5eDcSll.png) ::: 3. 將 VM 開機 :::spoiler 點我展開圖片 ![image](https://hackmd.io/_uploads/ByeC_49ree.png) ::: 4. 檢視 K8s 節點是否健康 ``` $ kubectl get nodes NAME STATUS ROLES AGE VERSION m1 Ready control-plane,master 7d5h v1.21.14 m2 Ready control-plane,master 7d4h v1.21.14 m3 Ready control-plane,master 7d3h v1.21.14 w1 Ready <none> 7d3h v1.21.14 w2 Ready <none> 7d3h v1.21.14 ``` 5. 確認 K8s 核心服務是否正常 ``` $ kubectl -n kube-system get pods NAME READY STATUS RESTARTS AGE calico-kube-controllers-6477b97f9f-zk24r 1/1 Running 6 5d3h calico-node-558x6 1/1 Running 0 7d3h calico-node-cnzrx 1/1 Running 2 5d3h calico-node-gpcbl 1/1 Running 6 7d3h calico-node-mh4k6 1/1 Running 7 7d3h calico-node-rsvmr 1/1 Running 1 7d3h coredns-558bd4d5db-nvbxc 1/1 Running 0 24h coredns-558bd4d5db-sg5nb 1/1 Running 0 24h etcd-m1 1/1 Running 3 26h etcd-m2 1/1 Running 0 7d4h etcd-m3 1/1 Running 2 7d3h kube-apiserver-m1 1/1 Running 0 26h kube-apiserver-m2 1/1 Running 0 7d4h kube-apiserver-m3 1/1 Running 2 7d3h kube-controller-manager-m1 1/1 Running 1 26h kube-controller-manager-m2 1/1 Running 2 7d3h kube-controller-manager-m3 1/1 Running 2 7d3h kube-proxy-6w529 1/1 Running 0 7d3h kube-proxy-75c4k 1/1 Running 0 7d3h kube-proxy-dpr4d 1/1 Running 2 5d4h kube-proxy-lzxml 1/1 Running 0 5d4h kube-proxy-vlfwv 1/1 Running 0 7d3h kube-scheduler-m1 1/1 Running 1 26h kube-scheduler-m2 1/1 Running 2 7d3h kube-scheduler-m3 1/1 Running 2 7d3h kube-vip-m1 1/1 Running 2 7d3h kube-vip-m2 1/1 Running 2 7d4h kube-vip-m3 1/1 Running 2 7d3h ``` 6. 檢查該節點上的 Pod 是否正常 ``` $ kubectl get pods --field-selector spec.nodeName=m3 -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-node-cnzrx 1/1 Running 2 5d3h kube-system etcd-m3 1/1 Running 2 7d3h kube-system kube-apiserver-m3 1/1 Running 2 7d3h kube-system kube-controller-manager-m3 1/1 Running 2 7d3h kube-system kube-proxy-dpr4d 1/1 Running 2 5d4h kube-system kube-scheduler-m3 1/1 Running 2 7d3h kube-system kube-vip-m3 1/1 Running 2 7d3h ``` 7. 確認 etcd Member 是不是都還健在 ``` $ alias etcdctl="ETCDCTL_API=3 sudo /usr/local/bin/etcdctl \ --endpoints=127.0.0.1:2379,172.20.7.81:2379,172.20.7.82:2379 \ --cacert=/etc/kubernetes/pki/etcd/ca.crt \ --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt \ --key=/etc/kubernetes/pki/apiserver-etcd-client.key" $ etcdctl member list 768bf4f42edfb9b3, started, m1, https://172.20.7.80:2380, https://172.20.7.80:2379, false e044dfaaca4a3cf4, started, m2, https://172.20.7.81:2380, https://172.20.7.81:2379, false ff78f71ad246a1cd, started, m3, https://172.20.7.82:2380, https://172.20.7.82:2379, false ``` 8. 確認 etcd 狀態 ``` $ etcdctl endpoint status -w table +------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ | ENDPOINT | ID | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED | +------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ | 127.0.0.1:2379 | 768bf4f42edfb9b3 | 3.4.13 | | 9.0 MB | 4.3 MB | 52% | 0 B | false | false | 243 | 1963834 | 1963834 | | | false | | 172.20.7.81:2379 | e044dfaaca4a3cf4 | 3.4.13 | | 8.9 MB | 4.3 MB | 52% | 0 B | true | false | 243 | 1963834 | 1963834 | | | false | | 172.20.7.82:2379 | ff78f71ad246a1cd | 3.4.13 | | 8.9 MB | 4.3 MB | 52% | 0 B | false | false | 243 | 1963834 | 1963834 | | | false | +------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ ``` > 透過 IS LEADER 欄位確認 m2 主機是 etcd leader --- ## 4. 挑戰破壞 etcd leader vm 並透過再生龍備份的硬碟 restore 回去 ### 開始備份與還原 1. 確認當前哪一台節點是 etcd leader ``` $ etcdctl endpoint status --cluster -w table +--------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ | ENDPOINT | ID | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED | +--------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ | https://172.20.7.81:2379 | 31954c2b28bfea6a | 3.4.13 | | 9.0 MB | 3.9 MB | 57% | 0 B | false | false | 244 | 2226833 | 2226833 | | | false | | https://172.20.7.80:2379 | 768bf4f42edfb9b3 | 3.4.13 | | 9.0 MB | 3.9 MB | 57% | 0 B | true | false | 244 | 2226833 | 2226833 | | | false | | https://172.20.7.82:2379 | ff78f71ad246a1cd | 3.4.13 | | 8.9 MB | 3.9 MB | 57% | 0 B | false | false | 244 | 2226833 | 2226833 | | | false | +--------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ ``` > 由以上執行結果得知 leader 是 `m1` 節點 2. 先透過再生龍備份 `m1` 主機的硬碟 sda (備份動作請參考前面的步驟,這裡不再贅述) 3. 透過原來的硬碟 sda 將 VM 開機,並破壞 K8s ``` $ ssh m1 $ sudo systemctl disable --now containerd.service $ sudo systemctl disable --now kubelet $ sudo rm -r /etc/containerd/ $ sudo rm -r /etc/kubernetes/ $ sudo poweroff ``` 4. 透過再生龍將備份好的 disk image 還原到新的硬碟 sdc 上 (還原動作請參考前面的步驟,這裡不再贅述) 5. 透過還原好的硬碟 sdc 將 VM 開機 6. 檢查在 `m1` 節點的 pod 是否有異常 ``` $ kubectl get pods --field-selector spec.nodeName=m1 -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-node-558x6 1/1 Running 1 7d21h kube-system etcd-m1 0/1 CrashLoopBackOff 7 44h kube-system kube-apiserver-m1 0/1 Error 3 44h kube-system kube-controller-manager-m1 1/1 Running 2 44h kube-system kube-proxy-6w529 1/1 Running 1 7d21h kube-system kube-scheduler-m1 1/1 Running 2 44h kube-system kube-vip-m1 1/1 Running 3 7d21h ``` > 發現 etcd 跟 API Server 掛了 7. 查找 etcd 死亡原因 ``` $ kubectl -n kube-system logs etcd-m1 ``` 執行結果如下: ``` ... 以上省略 2025-07-09 03:45:22.108428 I | rafthttp: started streaming with peer ff78f71ad246a1cd (stream MsgApp v2 reader) raft2025/07/09 03:45:22 INFO: 768bf4f42edfb9b3 [term: 244] received a MsgHeartbeat message with higher term from 31954c2b28bfea6a [term: 245] raft2025/07/09 03:45:22 INFO: 768bf4f42edfb9b3 became follower at term 245 raft2025/07/09 03:45:22 tocommit(2238877) is out of range [lastIndex(2235883)]. Was the raft log corrupted, truncated, or lost? panic: tocommit(2238877) is out of range [lastIndex(2235883)]. Was the raft log corrupted, truncated, or lost? goroutine 75 [running]: log.(*Logger).Panicf(0xc0001e6140, 0x10db3a0, 0x5d, 0xc000fa6380, 0x2, 0x2) /usr/local/go/src/log/log.go:219 +0xc1 go.etcd.io/etcd/raft.(*DefaultLogger).Panicf(0x1ac1310, 0x10db3a0, 0x5d, 0xc000fa6380, 0x2, 0x2) /tmp/etcd-release-3.4.13/etcd/release/etcd/raft/logger.go:127 +0x60 go.etcd.io/etcd/raft.(*raftLog).commitTo(0xc00015e150, 0x22299d) /tmp/etcd-release-3.4.13/etcd/release/etcd/raft/log.go:203 +0x131 go.etcd.io/etcd/raft.(*raft).handleHeartbeat(0xc0000f9a40, 0x8, 0x768bf4f42edfb9b3, 0x31954c2b28bfea6a, 0xf5, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /tmp/etcd-release-3.4.13/etcd/release/etcd/raft/raft.go:1396 +0x54 go.etcd.io/etcd/raft.stepFollower(0xc0000f9a40, 0x8, 0x768bf4f42edfb9b3, 0x31954c2b28bfea6a, 0xf5, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /tmp/etcd-release-3.4.13/etcd/release/etcd/raft/raft.go:1341 +0x480 go.etcd.io/etcd/raft.(*raft).Step(0xc0000f9a40, 0x8, 0x768bf4f42edfb9b3, 0x31954c2b28bfea6a, 0xf5, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /tmp/etcd-release-3.4.13/etcd/release/etcd/raft/raft.go:984 +0x1113 go.etcd.io/etcd/raft.(*node).run(0xc0003bd020) /tmp/etcd-release-3.4.13/etcd/release/etcd/raft/node.go:352 +0xab6 created by go.etcd.io/etcd/raft.RestartNode /tmp/etcd-release-3.4.13/etcd/release/etcd/raft/node.go:240 +0x33c ``` ### 🔍 分析 etcd 毀損原因 1. 收到 heartbeat,變成 follower ``` INFO: 768bf4f42edfb9b3 [term: 244] received a MsgHeartbeat message with higher term from 31954c2b28bfea6a [term: 245] INFO: 768bf4f42edfb9b3 became follower at term 245 ``` 表示本節點(768bf4f42edfb9b3)接受到領導者心跳,發現自己在舊 term(244),於是以新 term(245)降階為 follower。這是正常 Raft 流程。 2. 提交索引 `tocommit` 超出日誌範圍 ``` tocommit(2238877) is out of range [lastIndex(2235883)] ``` 這裡 leader 傳來了一筆要 commit 的索引 `2238877`,但本地的最後日誌索引只有 `2235883`。缺少 2994 筆日誌紀錄,即 index gap。Raft 本意要求 follower 在 commit 前先擁有所有日誌條目;若 commit index 超出本地 lastIndex,代表日誌是不連續、不完整的。 3. 導致 panic 崩潰 Raft 為了保障一致性,若遇到不合法的 commit 請求(tocommit 超界),會直接 panic 停止,以避免資料錯誤或分裂。這是設計上的安全機制。 ### 💥 Root Cause Raft 在 etcd 中負責維持複本間的一致性(consensus),當一個成員嘗試啓動並提交過舊的資料時,領導者會嘗試將該成員更新到目前的提交進度。如果領導者無法更新該成員,就會觸發 raft 錯誤,以避免集群內資料損壞,並導致該成員因 raft panic 而啓動失敗。 這個問題可能發生於某個 etcd 成員長時間與其他成員隔離,導致 etcd 領導者無法保留所有的提交歷史來正確更新該成員。 ### ✅ 解決辦法 1. 檢查當前 etcd member ``` $ etcdctl member list 31954c2b28bfea6a, started, m2, https://172.20.7.81:2380, https://172.20.7.81:2379, false 768bf4f42edfb9b3, started, m1, https://172.20.7.80:2380, https://172.20.7.80:2379, false ff78f71ad246a1cd, started, m3, https://172.20.7.82:2380, https://172.20.7.82:2379, false ``` 2. 檢查每個 etcd 的狀態 ``` $ etcdctl endpoint status --cluster -w table {"level":"warn","ts":"2025-07-09T13:10:30.247283+0800","logger":"etcd-client","caller":"v3@v3.6.1/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000480000/172.20.7.80:2379","method":"/etcdserverpb.Maintenance/Status","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 172.20.7.80:2379: connect: connection refused\""} Failed to get the status of endpoint https://172.20.7.80:2379 (context deadline exceeded) +--------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ | ENDPOINT | ID | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED | +--------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ | https://172.20.7.81:2379 | 31954c2b28bfea6a | 3.4.13 | | 9.0 MB | 4.0 MB | 56% | 0 B | true | false | 245 | 2264113 | 2264113 | | | false | | https://172.20.7.82:2379 | ff78f71ad246a1cd | 3.4.13 | | 8.9 MB | 4.0 MB | 56% | 0 B | false | false | 245 | 2264130 | 2264130 | | | false | +--------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ ``` > 確認在 m1 node 的 etcd 已故障 3. 在 `m1` 主機停止 etcd container ``` sudo mv /etc/kubernetes/manifests/etcd.yaml . ``` 4. 將 `m1` 移出 etcd cluster,**注意此命令須在健康的 etcd 節點執行** ``` etcdctl member remove 768bf4f42edfb9b3 ``` 執行結果: ``` Member 768bf4f42edfb9b3 removed from cluster 4bfe2fbd656adb1a ``` 5. 清理在 `m1` 主機上 etcd 的資料 ``` ssh m1 sudo mv /var/lib/etcd/member /var/lib/etcd/member-old sudo mkdir -p /var/lib/etcd/member sudo chmod 700 /var/lib/etcd/member ``` 6. 修改 etcd 參數 ``` sudo nano etcd.yaml ``` 修改內容如下: ``` spec: containers: - command: - etcd - --advertise-client-urls=https://172.20.7.80:2379 - --cert-file=/etc/kubernetes/pki/etcd/server.crt - --client-cert-auth=true - --data-dir=/var/lib/etcd - --initial-advertise-peer-urls=https://172.20.7.80:2380 #- --initial-cluster=m1=https://172.20.7.80:2380 ## 將上一行註解,並添加以下這行,如果原本有以下兩個參數,就修改成跟以下一樣 - --initial-cluster=m1=https://172.20.7.80:2380,m2=https://172.20.7.81:2380,m3=https://172.20.7.82:2380 - --initial-cluster-state=existing ``` 7. 將 `m1` etcd 以 `learner` 的身份加回 etcd cluster,**注意此命令須在健康的 etcd 節點執行** ``` etcdctl member add m1 --peer-urls="https://172.20.7.80:2380" --learner ``` 執行結果: ``` Member 3bf15e520c24448c added as learner to cluster 4bfe2fbd656adb1a ETCD_NAME="m1" ETCD_INITIAL_CLUSTER="m2=https://172.20.7.81:2380,m1=https://172.20.7.80:2380,m3=https://172.20.7.82:2380" ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.20.7.80:2380" ETCD_INITIAL_CLUSTER_STATE="existing" ``` 8. 在 `m1` 主機啟動 etcd container ``` sudo mv etcd.yaml /etc/kubernetes/manifests/ ``` 9. 確認 `m1` 主機的 etcd 是否成功加入叢集,並啟動 ``` etcdctl member list -w table ``` 執行結果: ``` +------------------+---------+------+--------------------------+--------------------------+------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+------+--------------------------+--------------------------+------------+ | 31954c2b28bfea6a | started | m2 | https://172.20.7.81:2380 | https://172.20.7.81:2379 | false | | fbc0ce2210efe023 | started | m1 | https://172.20.7.80:2380 | https://172.20.7.80:2379 | true | | ff78f71ad246a1cd | started | m3 | https://172.20.7.82:2380 | https://172.20.7.82:2379 | false | +------------------+---------+------+--------------------------+--------------------------+------------+ ``` 10. 確認 `m1` 主機 etcd 同步狀態 ``` etcdctl endpoint status --cluster -w table ``` 執行結果: ``` +--------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ | ENDPOINT | ID | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED | +--------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ | https://172.20.7.81:2379 | 31954c2b28bfea6a | 3.4.13 | | 9.0 MB | 4.3 MB | 53% | 0 B | true | false | 379 | 2314290 | 2314290 | | | false | | https://172.20.7.80:2379 | fbc0ce2210efe023 | 3.4.13 | | 9.0 MB | 4.3 MB | 53% | 0 B | false | true | 379 | 2314290 | 2314290 | | | false | | https://172.20.7.82:2379 | ff78f71ad246a1cd | 3.4.13 | | 8.9 MB | 4.3 MB | 53% | 0 B | false | false | 379 | 2314290 | 2314290 | | | false | +--------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+ ``` > 觀察 `RAFT_INDEX` 與 `RAFT_APPLIED_INDEX`,若兩者相同或非常接近,代表該 learner 已完全跟上 leader 的日誌,catch‑up 已完成 11. 確認 etcd learner 已同步資料後,將它升級為 member ``` etcdctl member promote fbc0ce2210efe023 ``` 執行結果: ``` Member fbc0ce2210efe023 promoted in cluster 4bfe2fbd656adb1a ``` 12. 確認 API Server Contaienr 運作正常 ``` $ sudo crictl ps --name kube-apiserver ``` 執行結果: ``` CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE 8a13e70428b12 e58b890e4ab44 About a minute ago Running kube-apiserver 7 010418cc69c62 kube-apiserver-m1 kube-system ``` 13. 確認 `m1` 主機的 Pods 運作正常 ``` kubectl get pods --field-selector spec.nodeName=m1 -A ``` 執行結果: ``` NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-node-558x6 1/1 Running 1 8d kube-system etcd-m1 1/1 Running 0 25m kube-system kube-apiserver-m1 1/1 Running 7 36m kube-system kube-controller-manager-m1 1/1 Running 2 2d1h kube-system kube-proxy-6w529 1/1 Running 1 8d kube-system kube-scheduler-m1 1/1 Running 2 2d1h kube-system kube-vip-m1 1/1 Running 3 8d ``` ## 5. 參考資料 - [Clonezilla live 文件集](https://clonezilla.nchc.org.tw/clonezilla-live/doc/)