# Installing a 3-Master/1-Worker (3m1w) Kubernetes v1.34 Cluster with kubeadm on Ubuntu Server 24.04, Using CRI-O with HAProxy + Keepalived
## Lab Environment
> VIP: 10.10.7.31
> m1: 10.10.7.32
> m2: 10.10.7.33
> m3: 10.10.7.34
> w1: 10.10.7.35
## Download Podman (install on every node)
```
$ curl -fsSL -o podman-linux-amd64.tar.gz https://github.com/mgoltzsche/podman-static/releases/latest/download/podman-linux-amd64.tar.gz
$ tar -zxvf podman-linux-amd64.tar.gz; cd podman-linux-amd64
```
* Copy the files into place with rsync; any file that would be overwritten is first moved into a timestamped backup directory
```
$ sudo rsync -aHAX --numeric-ids --info=progress2 \
--backup --backup-dir="/root/usr-backup-$(date +%F_%H%M%S)" \
./usr/ /usr/
```
```
$ sudo mkdir -p /etc/containers
$ sudo rsync -aHAX --no-o --no-g --info=progress2 \
--backup --backup-dir="/root/etc-containers-backup-$(date +%F_%H%M%S)}" \
./etc/containers/ /etc/containers/
```
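* Any files the copy displaced are kept under the timestamped backup directories and can be inspected or restored; a quick sketch (the restored path is only an illustration):
```
# list everything rsync moved aside
$ sudo ls -R /root/usr-backup-*
# put a single displaced file back if the new one misbehaves (illustrative path)
$ sudo cp /root/usr-backup-*/local/bin/podman /usr/local/bin/podman
```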
```
$ sudo podman version
Client: Podman Engine
Version: 5.6.0
API Version: 5.6.0
Go Version: go1.24.6
Built: Thu Jan 1 08:00:00 1970
OS/Arch: linux/amd64
```
* If Podman is to be used rootless, the following package is also required
```
$ sudo apt install -y uidmap
```
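* Rootless mode also needs subordinate UID/GID ranges for the user. Ubuntu usually allocates them automatically for login users, but if they are missing, a minimal sketch (the 100000-165535 range is an assumption; adjust to your environment):
```
$ sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 $USER
$ grep "$USER" /etc/subuid /etc/subgid   # confirm the mappings exist
$ podman unshare cat /proc/self/uid_map  # the mapped range should appear here
```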
## Download and Install CRI-O (install on every node)
```
$ K8SVERSION=v1.34.1 # change to pick a different k8s version
$ wget https://storage.googleapis.com/cri-o/artifacts/cri-o.amd64.${K8SVERSION}.tar.gz
$ tar -zxvf cri-o.amd64.${K8SVERSION}.tar.gz; cd cri-o/
$ sudo cp bin/* /usr/local/bin/
$ sudo cp contrib/crio.service /etc/systemd/system/crio.service
$ sudo systemctl daemon-reload && sudo systemctl enable --now crio
```
```
$ crio version
INFO[2025-10-23T15:45:48.317306655+08:00] Updating config from single file: /etc/crio/crio.conf
INFO[2025-10-23T15:45:48.317378239+08:00] Updating config from drop-in file: /etc/crio/crio.conf
INFO[2025-10-23T15:45:48.317990433+08:00] Updating config from path: /etc/crio/crio.conf.d
Version: 1.34.1
$ systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/etc/systemd/system/crio.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-09-05 14:14:19 CST; 5s ago
```
* crun is already installed (it ships in the CRI-O bundle's `bin/`)
```
$ crun -v
crun version 1.22
commit: 4de19b63a85efd9ea8503452179c371181750130
rundir: /run/user/1001/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
```
* Log in to Harbor with Podman and store the resulting credential token in the auth.json file that CRI-O reads, so this account can pull non-public images (optional, but must be generated on every node)
```
$ sudo podman login --tls-verify=false harbor.k8sexample.com --authfile /etc/crio/registries.d/auth.json
$ sudo cat /etc/crio/registries.d/auth.json
{
"auths": {
"harbor.k8sexample.com": {
"auth": "YWRtaW46SGFyYm9yMTIzNDU="
}
}
}
```
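* To confirm the stored credentials actually work, try pulling a private image with the same auth file (the image path below is a placeholder for a project in your Harbor):
```
$ sudo podman pull --tls-verify=false \
    --authfile /etc/crio/registries.d/auth.json \
    harbor.k8sexample.com/library/nginx:latest
```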
* Configure `crio.conf`
  - CRI-O 1.34 no longer accepts short image names and requires the full path; setting `short_name_mode = "disabled"` allows short names again.
```
$ sudo nano /etc/crio/crio.conf
[crio.runtime]
conmon_cgroup = "pod"
cgroup_manager = "systemd"
default_runtime = "crun"
default_capabilities = [
    "CHOWN",
    "DAC_OVERRIDE",
    "FSETID",
    "FOWNER",
    "SETGID",
    "SETUID",
    "SETPCAP",
    "NET_BIND_SERVICE",
    "AUDIT_WRITE",
    "SYS_CHROOT",
    "KILL"
]

[crio.runtime.runtimes.crun]
runtime_path = "/usr/local/bin/crun"
runtime_type = "oci"
runtime_root = ""

[crio.network]
network_dir = "/etc/cni/net.d/"
plugin_dir = "/opt/cni/bin"

[crio.image]
short_name_mode = "disabled"
global_auth_file = "/etc/crio/registries.d/auth.json"
pause_image = "registry.k8s.io/pause:3.10.1"
```
$ sudo nano /etc/crictl.yaml
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
timeout: 2
```
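* With the endpoint configured, crictl should now be able to talk to CRI-O; two quick checks:
```
$ sudo crictl version   # prints the runtime name (cri-o) and its version
$ sudo crictl info      # dumps the runtime status and config as JSON
```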
* Mark the internal image registry as insecure (skip certificate verification)
```
$ sudo mkdir -p /etc/containers/registries.conf.d/
$ sudo nano /etc/containers/registries.conf.d/insecure-registry.conf
[[registry]]
prefix = "harbor.k8sexample.com"
location = "harbor.k8sexample.com"
insecure = true
```
```
$ sudo systemctl daemon-reload && sudo systemctl restart crio.service
```
## Install kubelet, kubeadm, and kubectl on the control plane nodes
* Download the binaries on each control plane node
```
$ cd ~
$ wget https://dl.k8s.io/${K8SVERSION}/kubernetes-server-linux-amd64.tar.gz
$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
$ cd kubernetes/server/bin && sudo cp kubeadm kubelet kubectl /usr/bin/
```
### Configure the kubelet service
* The CRI is set to CRI-O
```
$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
After=crio.service
Requires=crio.service
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
$ sudo mkdir /etc/systemd/system/kubelet.service.d
$ cat << EOF | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating
# the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably,
# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.
# KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_KUBEADM_ARGS \$KUBELET_EXTRA_ARGS
EOF
```
* Set the node's internal IP
```
$ sudo mkdir -p /etc/default
# note: replace ens18 with your own interface name
$ INTERFACE=ens18; IPV4_IP=$(ip -4 a s $INTERFACE | awk '/inet / {print $2}' | cut -d'/' -f1)
$ echo "KUBELET_EXTRA_ARGS=\"--node-ip=$IPV4_IP\"" | envsubst | sudo tee /etc/default/kubelet
```
```
$ sudo systemctl enable --now kubelet.service
```
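* Until `kubeadm init` (or `kubeadm join`) generates `/var/lib/kubelet/config.yaml`, kubelet will restart in a loop; that is expected at this stage and can be watched with:
```
$ journalctl -u kubelet -n 20 --no-pager
```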
## Install kubelet and kubeadm on the worker nodes
* Download the binaries on each worker node
```
$ cd ~
$ wget https://dl.k8s.io/${K8SVERSION}/kubernetes-node-linux-amd64.tar.gz
$ tar -xzvf kubernetes-node-linux-amd64.tar.gz
$ cd kubernetes/node/bin && sudo cp kubeadm kubelet /usr/bin/
```
### Configure the kubelet service
* The CRI is set to CRI-O
```
$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
After=crio.service
Requires=crio.service
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
$ sudo mkdir /etc/systemd/system/kubelet.service.d
$ cat << EOF | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating
# the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably,
# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.
# KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_KUBEADM_ARGS \$KUBELET_EXTRA_ARGS
EOF
```
* Set the node's internal IP
```
$ sudo mkdir -p /etc/default
# note: replace ens18 with your own interface name
$ INTERFACE=ens18; IPV4_IP=$(ip -4 a s $INTERFACE | awk '/inet / {print $2}' | cut -d'/' -f1)
$ echo "KUBELET_EXTRA_ARGS=\"--node-ip=$IPV4_IP\"" | envsubst | sudo tee /etc/default/kubelet
```
```
$ sudo systemctl enable --now kubelet.service
```
## Initializing the Cluster
### What HAProxy + Keepalived provide
* Keepalived handles failover of the VIP (floating IP) via VRRP; HAProxy provides L4/L7 load balancing and TLS termination.
* When a client runs kubectl against the VIP, packets first reach the node currently holding the VIP as VRRP MASTER (managed by Keepalived). HAProxy on that node accepts the connection and, according to its configuration (L4 or L7, roundrobin/leastconn, and so on), forwards it to one of the kube-apiserver backends.
### What is VRRP (Virtual Router Redundancy Protocol)?
VRRP is designed to avoid a single point of failure (SPOF): a group of physical machines is exposed as one Virtual IP (VIP), so the service presents a single stable address. When the Master fails, a Backup automatically takes over the VIP, and clients keep reaching the service at the same IP without changing any configuration.
### How the Master is elected
Nodes compete to become Master according to VRRP rules; the usual criteria, in order, are:
1. Priority: the higher the value, the more likely the node becomes Master.
2. Whether the interface already owns the VIP: some implementations treat a node that already has the VIP bound as a better candidate.
3. Highest IP address: if priorities are equal, the IP addresses are compared and the numerically larger one usually wins.
Once elected, the Master sends a Gratuitous ARP broadcast to refresh the ARP caches of other devices on the network, so the VIP's MAC/IP now belongs to the new Master and traffic shifts to it immediately.
### Behavior when the Master fails (failover)
The Master sends VRRP advertisements (heartbeats) at a fixed interval. If the Backups stop receiving them for long enough, they declare the Master unavailable, hold a new election, and the new Master announces the VIP as its own via gratuitous ARP.
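VRRP advertisements are plain IP protocol 112 packets, and the takeover announcement is a gratuitous ARP, so both can be watched live with tcpdump; a sketch, assuming the interface is ens18:
```
$ sudo tcpdump -i ens18 -nn 'ip proto 112 or arp'
```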
### Preempt vs. nopreempt
preempt (the default, preemption allowed): when the original Master recovers and rejoins, it reclaims the Master role if its priority is higher than the current Master's.
nopreempt: a recovered Master does not forcibly take the role back; the current Master keeps it until a manual change or some other condition intervenes.
If the Master is unstable (repeatedly going up and down), allowing preemption causes frequent switchover (flapping) and therefore brief outages or packet loss; in that case enable nopreempt for better stability.
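In keepalived's own configuration these behaviors map to a handful of directives; a minimal hand-written sketch (values illustrative; note `nopreempt` is only valid together with `state BACKUP`):
```
vrrp_instance VI_1 {
    state BACKUP              # start as BACKUP and let the election decide
    interface ens18
    virtual_router_id 51
    priority 150              # higher value wins the election
    advert_int 1              # heartbeat interval in seconds
    nopreempt                 # a recovered node does not snatch the VIP back
    authentication {
        auth_type PASS
        auth_pass crio
    }
    virtual_ipaddress {
        10.10.7.31
    }
}
```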
### Configure HAProxy
* Create the `/etc/haproxy/` directory on all master nodes
```
$ sudo mkdir -p /etc/haproxy/
```
* Then create `/etc/haproxy/haproxy.cfg` on all master nodes with the content below; port `8443` is bound as the proxy in front of the API server
```
$ sudo tee /etc/haproxy/haproxy.cfg > /dev/null <<'EOF'
frontend kube-apiserver-https
    mode tcp
    bind :8443
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server apiserver1 10.10.7.32:6443 check
    server apiserver2 10.10.7.33:6443 check
    server apiserver3 10.10.7.34:6443 check
EOF
```
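* Since HAProxy will run as a static Pod rather than on the host, the file can be syntax-checked with the same container image before the Pod ever starts; a sketch:
```
$ sudo podman run --rm \
    -v /etc/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
    docker.io/haproxy:latest haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
```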
* Next, create a YAML file at `/etc/kubernetes/manifests/haproxy.yaml` so that HAProxy is deployed as a static Pod
```
$ sudo mkdir -p /etc/kubernetes/manifests/
$ sudo tee /etc/kubernetes/manifests/haproxy.yaml > /dev/null <<'EOF'
kind: Pod
apiVersion: v1
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    component: haproxy
    tier: control-plane
  name: kube-haproxy
  namespace: kube-system
spec:
  hostNetwork: true
  priorityClassName: system-cluster-critical
  containers:
  - name: kube-haproxy
    image: docker.io/haproxy:latest
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - name: haproxy-cfg
      readOnly: true
      mountPath: /usr/local/etc/haproxy/haproxy.cfg
  volumes:
  - name: haproxy-cfg
    hostPath:
      path: /etc/haproxy/haproxy.cfg
      type: FileOrCreate
EOF
```
### Configure Keepalived
* Keepalived provides the VIP for the Kubernetes API server. On all master nodes, create a YAML file at `/etc/kubernetes/manifests/keepalived.yaml` so that Keepalived is deployed as a static Pod
```
$ sudo tee /etc/kubernetes/manifests/keepalived.yaml > /dev/null <<'EOF'
kind: Pod
apiVersion: v1
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    component: keepalived
    tier: control-plane
  name: kube-keepalived
  namespace: kube-system
spec:
  hostNetwork: true
  priorityClassName: system-cluster-critical
  containers:
  - name: kube-keepalived
    image: docker.io/osixia/keepalived:latest
    env:
    - name: KEEPALIVED_VIRTUAL_IPS
      value: 10.10.7.31
    - name: KEEPALIVED_INTERFACE
      value: ens18
    - name: KEEPALIVED_UNICAST_PEERS
      value: "#PYTHON2BASH:['10.10.7.32', '10.10.7.33', '10.10.7.34']"
    - name: KEEPALIVED_PASSWORD
      value: crio
    - name: KEEPALIVED_PRIORITY
      value: "100"
    - name: KEEPALIVED_ROUTER_ID
      value: "51"
    resources:
      requests:
        cpu: 100m
    securityContext:
      privileged: true
      capabilities:
        add:
        - NET_ADMIN
EOF
```
> `KEEPALIVED_VIRTUAL_IPS`: the VIPs served by Keepalived.
> `KEEPALIVED_INTERFACE`: the NIC the VIPs bind to.
> `KEEPALIVED_UNICAST_PEERS`: the unicast IPs of the other Keepalived nodes.
> `KEEPALIVED_PASSWORD`: the password for Keepalived's auth_type.
> `KEEPALIVED_PRIORITY`: the election priority used when failover occurs; the node with the higher value takes over. Here k8s-m1 is set to 100 and the others to 150.
> `KEEPALIVED_ROUTER_ID`: a numeric identifier shared by one group of Keepalived instances.
### Initialize the m1 master node
* `advertiseAddress` must be changed to your own master's IP
* `controlPlaneEndpoint` must point at the VIP
```
$ nano init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.7.32 # change to the Master 1 node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock # the CRI-O Unix socket
  imagePullPolicy: IfNotPresent
  name: m1 # change to the Master node hostname
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: 1.34.1
controlPlaneEndpoint: 10.10.7.31:8443 # the VIP; note the port is 8443
apiServer:
  certSANs:
  - 127.0.0.1
certificatesDir: /etc/kubernetes/pki
clusterName: topgun # set your clusterName
controllerManager:
  extraArgs:
  - name: bind-address
    value: "0.0.0.0"
  - name: secure-port
    value: "10257"
  extraVolumes:
  - name: tz-config
    hostPath: /etc/localtime
    mountPath: /etc/localtime
    readOnly: true
scheduler:
  extraArgs:
  - name: bind-address
    value: "0.0.0.0"
  - name: secure-port
    value: "10259"
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
    - name: listen-metrics-urls
      value: http://0.0.0.0:2381
dns: {}
imageRepository: registry.k8s.io
networking:
  dnsDomain: cluster.local # DNS domain used by Kubernetes Services.
  podSubnet: 10.244.0.0/16 # the subnet used by Pods.
  serviceSubnet: 10.96.0.0/16 # subnet used by Kubernetes Services.
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: "0.0.0.0:10249"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
imageMinimumGCAge: "2m0s" # keep each image for at least 2 minutes
imageMaximumGCAge: "168h" # images unused for one week (168 h) become eligible for GC
containerLogMaxSize: "10Mi"
containerLogMaxFiles: 5
systemReserved:
  memory: "1Gi"
kubeReserved:
  memory: "2Gi"
```
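* Before running init, recent kubeadm releases can sanity-check the file and pre-pull the control-plane images, which keeps the actual init fast and surfaces typos early:
```
$ sudo kubeadm config validate --config init-config.yaml
$ sudo kubeadm config images pull --config init-config.yaml
```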
Start the installation
* `--upload-certs` uploads the keys and certificates the other `control-plane` nodes need into the `kubeadm-certs` Secret, so those nodes can download them when they join.
```
$ sudo kubeadm init --upload-certs --config=init-config.yaml
```
The output ends with the join commands; record them:
```
......
You can now join any number of control-plane nodes running the following command on each as root:
kubeadm join 10.10.7.31:8443 --token z1tiof.v0oamlvygwbqz06u \
--discovery-token-ca-cert-hash sha256:43d1582eeab425d17c458cccc9ba2c88b979ebf3adc445cc2aa6344fecab0642 \
--control-plane --certificate-key 24c98ce65129eb9eff7ab5ca148cbffc2d7028968d9e21efb5295b293db77b0b
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.10.7.31:8443 --token z1tiof.v0oamlvygwbqz06u \
--discovery-token-ca-cert-hash sha256:43d1582eeab425d17c458cccc9ba2c88b979ebf3adc445cc2aa6344fecab0642
```
* Set up kubeconfig
```
$ mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ echo -e "source /usr/share/bash-completion/bash_completion\nsource <(kubectl completion bash)" | sudo tee -a /etc/profile
```
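* A quick sanity check that kubectl reaches the API server through the VIP (the endpoint reported should be https://10.10.7.31:8443):
```
$ kubectl cluster-info
$ kubectl get --raw /healthz   # should print "ok"
```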
### Deploy Calico 3.29.4
```
$ curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.29.4/manifests/calico.yaml | kubectl apply -f -
```
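* Nodes only turn Ready once the CNI is up, so wait for the calico-node DaemonSet rollout before joining the remaining nodes:
```
$ kubectl -n kube-system rollout status daemonset/calico-node
$ kubectl -n kube-system get pods -l k8s-app=calico-node
```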
### Join the m2 master node
* Run on m2
```
$ sudo kubeadm join 10.10.7.31:8443 --token z1tiof.v0oamlvygwbqz06u \
--discovery-token-ca-cert-hash sha256:43d1582eeab425d17c458cccc9ba2c88b979ebf3adc445cc2aa6344fecab0642 \
--control-plane --certificate-key 24c98ce65129eb9eff7ab5ca148cbffc2d7028968d9e21efb5295b293db77b0b
```
### Join the m3 master node
* Run on m3
```
$ sudo kubeadm join 10.10.7.31:8443 --token z1tiof.v0oamlvygwbqz06u \
--discovery-token-ca-cert-hash sha256:43d1582eeab425d17c458cccc9ba2c88b979ebf3adc445cc2aa6344fecab0642 \
--control-plane --certificate-key 24c98ce65129eb9eff7ab5ca148cbffc2d7028968d9e21efb5295b293db77b0b
```
### Join the worker node
* Run on the worker node
```
$ sudo kubeadm join 10.10.7.31:8443 --token z1tiof.v0oamlvygwbqz06u \
--discovery-token-ca-cert-hash sha256:43d1582eeab425d17c458cccc9ba2c88b979ebf3adc445cc2aa6344fecab0642
```
* If you did not record the command, you can regenerate the `worker` join command on m1:
```
$ sudo kubeadm token create --print-join-command
```
* Label the worker node
```
$ kubectl label node w1 node-role.kubernetes.io/worker=
```
## Environment Checks
```
$ kubectl get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m1 Ready control-plane 5m5s v1.34.1 10.10.7.32 <none> Ubuntu 24.04.2 LTS 6.8.0-59-generic cri-o://1.34.1
m2 Ready control-plane 2m8s v1.34.1 10.10.7.33 <none> Ubuntu 24.04.2 LTS 6.8.0-59-generic cri-o://1.34.1
m3 Ready control-plane 63s v1.34.1 10.10.7.34 <none> Ubuntu 24.04.2 LTS 6.8.0-59-generic cri-o://1.34.1
w1 Ready worker 51s v1.34.1 10.10.7.35 <none> Ubuntu 24.04.2 LTS 6.8.0-59-generic cri-o://1.34.1
```
```
$ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-548476666b-fhxp8 1/1 Running 0 4m29s
kube-system calico-node-jj2f2 1/1 Running 0 2m20s
kube-system calico-node-n6lmp 1/1 Running 0 75s
kube-system calico-node-tkthq 1/1 Running 0 4m29s
kube-system calico-node-tstjj 0/1 Running 0 63s
kube-system coredns-66bc5c9577-kqpbs 1/1 Running 0 5m5s
kube-system coredns-66bc5c9577-t79pk 1/1 Running 0 5m5s
kube-system etcd-m1 1/1 Running 0 5m15s
kube-system etcd-m2 1/1 Running 0 2m17s
kube-system etcd-m3 1/1 Running 0 73s
kube-system kube-apiserver-m1 1/1 Running 0 5m16s
kube-system kube-apiserver-m2 1/1 Running 0 2m17s
kube-system kube-apiserver-m3 1/1 Running 0 73s
kube-system kube-controller-manager-m1 1/1 Running 0 5m12s
kube-system kube-controller-manager-m2 1/1 Running 0 2m17s
kube-system kube-controller-manager-m3 1/1 Running 0 73s
kube-system kube-haproxy-m1 1/1 Running 0 5m6s
kube-system kube-haproxy-m2 1/1 Running 0 2m17s
kube-system kube-haproxy-m3 1/1 Running 0 73s
kube-system kube-keepalived-m1 1/1 Running 0 5m14s
kube-system kube-keepalived-m2 1/1 Running 0 2m17s
kube-system kube-keepalived-m3 1/1 Running 0 73s
kube-system kube-proxy-65p2f 1/1 Running 0 2m20s
kube-system kube-proxy-cg8rd 1/1 Running 0 63s
kube-system kube-proxy-h5rwj 1/1 Running 0 75s
kube-system kube-proxy-jdtpn 1/1 Running 0 5m5s
kube-system kube-scheduler-m1 1/1 Running 0 5m9s
kube-system kube-scheduler-m2 1/1 Running 0 2m17s
kube-system kube-scheduler-m3 1/1 Running 0 73s
```
* By default `keepalived` already has `nopreempt` enabled
```
$ kubectl -n kube-system exec kube-keepalived-m1 -- cat /usr/local/etc/keepalived/keepalived.conf
global_defs {
  default_interface ens18
}
vrrp_instance VI_1 {
  interface ens18
  state BACKUP
  virtual_router_id 51
  priority 100
  nopreempt
......
```
### Testing the HA behavior
* Check that the VIP currently sits on m1
```
bigred@m1:~$ ip a s ens18
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:a7:7d:d7 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 10.10.7.32/24 brd 10.10.7.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet 10.10.7.31/32 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fea7:7dd7/64 scope link
       valid_lft forever preferred_lft forever
```
* Power off m1
```
$ sudo poweroff
```
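* To watch the failover from a client, poll the API server's health endpoint through the VIP while m1 goes down (`/healthz` is readable anonymously via the default `system:public-info-viewer` binding); expect a couple of timeouts and then `ok` again once the VIP has moved:
```
$ while true; do curl -sk -m 2 https://10.10.7.31:8443/healthz; echo " $(date +%T)"; sleep 1; done
```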
* kubectl still works from m2
```
bigred@m2:~$ kubectl get no
NAME STATUS ROLES AGE VERSION
m1 NotReady control-plane 32m v1.34.1
m2 Ready control-plane 29m v1.34.1
m3 Ready control-plane 28m v1.34.1
w1 Ready worker 27m v1.34.1
```
* The VIP has now moved to m3
```
bigred@m3:~$ ip a s ens18
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:a0:25:74 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 10.10.7.34/24 brd 10.10.7.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet 10.10.7.31/32 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fea0:2574/64 scope link
       valid_lft forever preferred_lft forever
```