# Rancher Downstream Cluster Snapshots Showing 0B

![1710842548102](https://hackmd.io/_uploads/S1yMj5R06.jpg)

## Fix

* First edit the `rke2-etcd-snapshots` ConfigMap and clear all of the values under `data`:

```
$ kubectl -n kube-system edit cm rke2-etcd-snapshots

# The following commands back up the ConfigMap, then remove everything under data directly
$ kubectl -n kube-system get cm rke2-etcd-snapshots -oyaml > rke2-etcd-snapshots-cm-bak.yaml
$ kubectl -n kube-system patch cm --type=json -p='[{"op": "remove", "path": "/data"}]' rke2-etcd-snapshots
```

![image](https://hackmd.io/_uploads/HkH0jc0Cp.png)

* In the Rancher UI, go to Cluster Management -> Edit Config -> etcd.
* Change Keep the last from 5 snapshots to 4. The point of this change is simply to force the whole cluster to reload its snapshot configuration (a CLI equivalent is sketched in the appendix below).

![image](https://hackmd.io/_uploads/rylwhc0RT.png)

* After saving, wait for the cluster to recover, then confirm that the snapshots are back to normal (see the verification sketch in the appendix below).

![image](https://hackmd.io/_uploads/rJmDp9RRa.png)

* If the steps above still do not resolve the issue, you can additionally run the commands below: they back up the Leases in `kube-system`, copy the `holderIdentity` of the `rke2-etcd` Lease onto the `rke2` Lease, and then print the Lease holders so you can confirm the two now match.

```
$ kubectl get lease -n kube-system -oyaml > rke2-lease-kube-system-bak.yaml; \
kubectl get lease -n kube-system rke2-etcd -o jsonpath='{.spec.holderIdentity}' | \
xargs -I {} kubectl patch lease -n kube-system --patch='{"spec":{"holderIdentity":"{}"}}' --type=merge rke2; \
kubectl get lease -n kube-system -o custom-columns='name:.metadata.name,holder:.spec.holderIdentity' | \
grep rke2
```

## References

https://www.suse.com/support/kb/doc/?id=000021447
https://www.xtplayer.cn/rancher/etcd-snapshots-showing-0kb-size-in-the-rancher-ui/
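
## Appendix

The retention change above can also be made without the UI by patching the downstream cluster's provisioning object on the Rancher management (local) cluster. This is a minimal sketch, assuming a Rancher v2 provisioning setup where the cluster object lives in the default `fleet-default` namespace; `<cluster-name>` is a placeholder for your downstream cluster's name, and the exact field layout may differ between Rancher versions.

```
# Run against the Rancher local (management) cluster, not the downstream cluster
# Back up the provisioning object first
$ kubectl -n fleet-default get clusters.provisioning.cattle.io <cluster-name> -oyaml > cluster-provisioning-bak.yaml

# Lower snapshotRetention from 5 to 4 to trigger the same reload as the UI change
$ kubectl -n fleet-default patch clusters.provisioning.cattle.io <cluster-name> \
    --type=merge -p '{"spec":{"rkeConfig":{"etcd":{"snapshotRetention":4}}}}'
```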
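
To confirm the snapshots have genuinely recovered rather than only reappearing in the UI, you can also check them from an RKE2 server node. A minimal sketch, assuming a default RKE2 install (snapshots under the default on-disk directory and the `rke2` binary available on the node); subcommand aliases may vary slightly between RKE2 versions.

```
# On any RKE2 server node: the on-disk snapshot files should have nonzero sizes
$ ls -lh /var/lib/rancher/rke2/server/db/snapshots/

# Ask RKE2 for its own view of the snapshot list (`list` is an alias of `ls`)
$ sudo rke2 etcd-snapshot ls

# The rke2-etcd-snapshots ConfigMap should be repopulated with one entry per snapshot
$ kubectl -n kube-system describe cm rke2-etcd-snapshots | head -n 30
```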