# Installing a 3-master / 3-worker Kubernetes cluster (v1.32) on Ubuntu Server with kubeadm
## Prerequisites
Every node must first be configured with a static IP and hostname:
> vip: 10.10.7.33
> m1: 10.10.7.34
> m2: 10.10.7.35
> m3: 10.10.7.36
> w1: 10.10.7.37
> w2: 10.10.7.38
> w3: 10.10.7.39
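If you do not manage these names in DNS, it can help to add every node to `/etc/hosts` on each machine. A minimal sketch generating the entries from the table above (it writes to a temporary file here for illustration; on a real node you would pipe into `sudo tee -a /etc/hosts` instead):

```shell
# Generate /etc/hosts entries for all cluster nodes.
# Demo writes to /tmp; on a real node append to /etc/hosts with sudo tee -a.
cat <<'EOF' > /tmp/k8s-hosts
10.10.7.33 vip
10.10.7.34 m1
10.10.7.35 m2
10.10.7.36 m3
10.10.7.37 w1
10.10.7.38 w2
10.10.7.39 w3
EOF
cat /tmp/k8s-hosts
```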
## Configure the k8s nodes
### Set the time zone to Asia/Taipei
```
$ sudo timedatectl set-timezone Asia/Taipei
```
```
$ timedatectl
Local time: Tue 2025-06-10 17:53:24 CST
Universal time: Tue 2025-06-10 09:53:24 UTC
RTC time: Tue 2025-06-10 09:53:24
Time zone: Asia/Taipei (CST, +0800)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
```
### Set up time synchronization
```
$ sudo apt -y install chrony
$ sudo systemctl enable chrony
# Configure Google NTP as the time source
$ sudo nano /etc/chrony/chrony.conf
......
#pool ntp.ubuntu.com iburst maxsources 4
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2
pool time.google.com iburst maxsources 4
$ sudo systemctl restart chrony
```
```
$ chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? time2.google.com 1 6 1 51 +70us[ +70us] +/- 3425us
```
### Disable swap
```
$ sudo swapoff -a
$ sudo sed -i '/swap/s/^/#/' /etc/fstab
```
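The `sed` above comments out every line containing `swap` in `/etc/fstab`. To preview its effect safely, you can run the same expression against a copy first (the sample fstab content below is made up for illustration):

```shell
# Dry-run the swap-disabling sed on a copy of fstab (sample content is hypothetical)
cat <<'EOF' > /tmp/fstab.sample
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
# The swap line gains a leading '#'; other lines are untouched
sed '/swap/s/^/#/' /tmp/fstab.sample
```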
### Configure kernel modules and sysctl
```
$ sudo apt install -y ipvsadm ipset
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sudo sysctl --system
$ sudo reboot
```
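After the reboot you can confirm the settings took effect. The checks below only read state; note that the `net.bridge.*` keys exist only once `br_netfilter` is loaded:

```shell
# Read back the forwarding setting (prints 1 once the sysctl has been applied)
[ -r /proc/sys/net/ipv4/ip_forward ] && cat /proc/sys/net/ipv4/ip_forward
# List the modules of interest (prints nothing if they are not loaded)
lsmod 2>/dev/null | grep -E '^(overlay|br_netfilter)' || true
```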
### Install containerd
```
$ wget https://github.com/containerd/containerd/releases/download/v2.1.1/containerd-2.1.1-linux-amd64.tar.gz
$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
$ sudo tar Cxzvf /usr/local containerd-2.1.1-linux-amd64.tar.gz
$ sudo mkdir -p /usr/local/lib/systemd/system
$ sudo mv containerd.service /usr/local/lib/systemd/system/containerd.service
$ sudo systemctl daemon-reload && sudo systemctl enable --now containerd
```
```
$ sudo systemctl status containerd.service
● containerd.service - containerd container runtime
Loaded: loaded (/usr/local/lib/systemd/system/containerd.service; enabled; preset: enabled)
Active: active (running) since Tue 2025-06-10 06:17:14 UTC; 9s ago
Docs: https://containerd.io
Process: 1162 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
```
### Install runc
```
$ wget https://github.com/opencontainers/runc/releases/download/v1.3.0/runc.amd64
$ sudo install -m 755 runc.amd64 /usr/local/sbin/runc
```
```
$ runc -v
runc version 1.3.0
commit: v1.3.0-0-g4ca628d1
spec: 1.2.1
go: go1.23.8
libseccomp: 2.5.6
```
### Install the CNI plugins
```
$ sudo mkdir -p /opt/cni/bin
$ curl -sL "$(curl -sL https://api.github.com/repos/containernetworking/plugins/releases/latest | \
jq -r '.assets[].browser_download_url' | grep 'linux-amd64.*.tgz$')" -o ~/cni-plugins.tgz
$ sudo tar xf ~/cni-plugins.tgz -C /opt/cni/bin
```
```
$ sudo ls -l /opt/cni/bin
total 96188
-rwxr-xr-x 1 root root 5033580 Apr 25 12:58 bandwidth
-rwxr-xr-x 1 root root 5694447 Apr 25 12:58 bridge
-rwxr-xr-x 1 root root 13924938 Apr 25 12:58 dhcp
-rwxr-xr-x 1 root root 5247557 Apr 25 12:58 dummy
-rwxr-xr-x 1 root root 5749447 Apr 25 12:58 firewall
-rwxr-xr-x 1 root root 5163089 Apr 25 12:58 host-device
-rwxr-xr-x 1 root root 4364143 Apr 25 12:58 host-local
-rwxr-xr-x 1 root root 5269812 Apr 25 12:58 ipvlan
-rw-r--r-- 1 root root 11357 Apr 25 12:58 LICENSE
-rwxr-xr-x 1 root root 4263979 Apr 25 12:58 loopback
-rwxr-xr-x 1 root root 5305057 Apr 25 12:58 macvlan
-rwxr-xr-x 1 root root 5125860 Apr 25 12:58 portmap
-rwxr-xr-x 1 root root 5477120 Apr 25 12:58 ptp
-rw-r--r-- 1 root root 2343 Apr 25 12:58 README.md
-rwxr-xr-x 1 root root 4488703 Apr 25 12:58 sbr
-rwxr-xr-x 1 root root 3736370 Apr 25 12:58 static
-rwxr-xr-x 1 root root 5332257 Apr 25 12:58 tap
-rwxr-xr-x 1 root root 4352498 Apr 25 12:58 tuning
-rwxr-xr-x 1 root root 5267833 Apr 25 12:58 vlan
-rwxr-xr-x 1 root root 4644777 Apr 25 12:58 vrf
```
### Generate the containerd config
```
$ sudo mkdir -p /etc/containerd
$ sudo containerd config default | sudo tee /etc/containerd/config.toml
```
* Edit `config.toml`
* Add `SystemdCgroup = true`
```
$ sudo nano /etc/containerd/config.toml
......
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes]
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
......
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
SystemdCgroup = true
```

* Restart containerd
```
$ sudo systemctl restart containerd.service
```
### Install crictl
```
# Pick the crictl release matching your k8s version
$ VERSION="v1.33.0"
$ wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
$ sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
$ rm -f crictl-$VERSION-linux-amd64.tar.gz
```
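crictl tracks the Kubernetes minor version, so the tag can be derived mechanically from the cluster version. A sketch using plain string manipulation (the literal `v1.32.5` stands in for `$(kubeadm version -o short)`, since at this point in the setup kubeadm may not be installed yet):

```shell
# Derive the crictl release tag from a Kubernetes version string:
# keep the major.minor, pin the patch to .0
K8S_VERSION="v1.32.5"   # e.g. "$(kubeadm version -o short)" once kubeadm exists
CRICTL_VERSION="$(echo "$K8S_VERSION" | sed -E 's/^(v[0-9]+\.[0-9]+)\..*/\1.0/')"
echo "$CRICTL_VERSION"  # → v1.32.0
```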
### Configure crictl
```
$ sudo nano /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
```
### Install kubelet, kubeadm, and kubectl on the master nodes (worker nodes only need kubelet and kubeadm)
```
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gpg socat conntrack
$ sudo mkdir -p -m 755 /etc/apt/keyrings
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
```
```
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"32", GitVersion:"v1.32.5", GitCommit:"9894294ef13a5b32803e3ca2c0d620a088cc84d1", GitTreeState:"clean", BuildDate:"2025-05-15T09:10:46Z", GoVersion:"go1.23.8", Compiler:"gc", Platform:"linux/amd64"}
```
You can list the available package versions before installing:
```
$ sudo apt-cache madison kubelet kubeadm kubectl
```
### Bootstrap the cluster
#### Configure kube-vip
```
# kube-vip configuration details
# The VIP must be an unused IP outside the node addresses
$ export VIP=10.10.7.33
# NIC name
$ export INTERFACE=ens18
# Look up the latest kube-vip release tag
$ export KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
# Check the kube-vip version
$ echo $KVVERSION
v0.9.1
$ alias kube-vip="sudo ctr -n k8s.io image pull ghcr.io/kube-vip/kube-vip:$KVVERSION;sudo ctr -n k8s.io run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"
$ sudo mkdir -p /etc/kubernetes/manifests/
$ kube-vip manifest pod \
--address $VIP \
--interface $INTERFACE \
--controlplane \
--arp \
--leaderElection | sudo tee /etc/kubernetes/manifests/kube-vip.yaml
# From k8s 1.29 on, kube-vip needs super-admin.conf during init; only the first master needs this change
$ sudo sed -i 's|path: /etc/kubernetes/admin.conf|path: /etc/kubernetes/super-admin.conf|g' /etc/kubernetes/manifests/kube-vip.yaml
```
* Change `advertiseAddress` to this master's IP
* Point `controlPlaneEndpoint` at the VIP
```
$ nano init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.7.34 # change to this master's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock # containerd Unix socket
  imagePullPolicy: IfNotPresent
  name: m1 # change to this master's hostname
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.32.5
controlPlaneEndpoint: 10.10.7.33:6443 # the kube-vip VIP
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
  - 127.0.0.1
certificatesDir: /etc/kubernetes/pki
clusterName: topgun # set your cluster name
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
    secure-port: "10257"
  extraVolumes:
  - name: tz-config
    hostPath: /etc/localtime
    mountPath: /etc/localtime
    readOnly: true
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
    secure-port: "10259"
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      listen-metrics-urls: "http://0.0.0.0:2381"
dns: {}
imageRepository: registry.k8s.io
networking:
  dnsDomain: cluster.local # DNS domain used by Kubernetes Services
  podSubnet: 10.244.0.0/16 # subnet used by Pods
  serviceSubnet: 10.96.0.0/16 # subnet used by Services
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: "0.0.0.0:10249"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
imageMinimumGCAge: "2m0s" # keep images for at least 2 minutes
imageMaximumGCAge: "168h" # images unused for over 1 week (168 h) become eligible for GC
systemReserved:
  memory: "1Gi"
kubeReserved:
  memory: "2Gi"
Start the installation.
* `--upload-certs` uploads the keys and certificates the other `control-plane` nodes need to the `kubeadm-certs` Secret, so the remaining `control-plane` nodes can download them when joining.
```
$ sudo kubeadm init --upload-certs --config=init-config.yaml
```
The output includes the join commands; record them:
```
......
You can now join any number of control-plane nodes running the following command on each as root:
kubeadm join 10.10.7.33:6443 --token 6fwl39.aaoai7d4do0o8lr9 \
--discovery-token-ca-cert-hash sha256:2ad95aa92c57b4738fdf409b9e491351f35f809c8e33653c676a60ecd00de4bc \
--control-plane --certificate-key ac4ba0f9eb6ce8b21f45573c28ba288c78ca7082b6873f662f8e81f28ad88f0f
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.10.7.33:6443 --token 6fwl39.aaoai7d4do0o8lr9 \
--discovery-token-ca-cert-hash sha256:2ad95aa92c57b4738fdf409b9e491351f35f809c8e33653c676a60ecd00de4bc
```
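The `--discovery-token-ca-cert-hash` value is not secret state that must be saved: it is the SHA-256 of the cluster CA's public key in DER form and can be recomputed at any time. A sketch of the computation, demonstrated against a throwaway self-signed certificate (since the real `/etc/kubernetes/pki/ca.crt` only exists on a master):

```shell
# Generate a throwaway CA cert as a stand-in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null
# SHA-256 of the DER-encoded public key -- the same value kubeadm prints
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 | awk '{print "sha256:" $NF}'
```

On a real master, point the first `openssl x509` at `/etc/kubernetes/pki/ca.crt` and the printed value should match the hash in the join command.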
* Set up kubeconfig
```
$ mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ echo -e "source /usr/share/bash-completion/bash_completion\nsource <(kubectl completion bash)" | sudo tee -a /etc/profile
```
### Deploy Calico 3.29.4
```
$ curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.29.4/manifests/calico.yaml | kubectl apply -f -
```
## Verify the environment
```
$ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-cbd56df6f-j6z6w 1/1 Running 0 95s
kube-system calico-node-ps28b 1/1 Running 0 95s
kube-system coredns-668d6bf9bc-j5lt8 1/1 Running 0 3m45s
kube-system coredns-668d6bf9bc-zgklm 1/1 Running 0 3m45s
kube-system etcd-m1 1/1 Running 0 3m50s
kube-system kube-apiserver-m1 1/1 Running 0 3m50s
kube-system kube-controller-manager-m1 1/1 Running 0 3m50s
kube-system kube-proxy-2m5nj 1/1 Running 0 3m45s
kube-system kube-scheduler-m1 1/1 Running 0 3m50s
kube-system kube-vip-m1 1/1 Running 0 3m50s
```
* Once the cluster is up, switch the kube-vip kubeconfig path back:
```
$ sudo sed -i 's|path: /etc/kubernetes/super-admin.conf|path: /etc/kubernetes/admin.conf|g' /etc/kubernetes/manifests/kube-vip.yaml
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
```
## Join master node m2
### Configure kube-vip
```
# kube-vip configuration details
# The VIP must be an unused IP outside the node addresses
$ export VIP=10.10.7.33
# NIC name
$ export INTERFACE=ens18
# Look up the latest kube-vip release tag
$ export KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
# Check the kube-vip version
$ echo $KVVERSION
v0.9.1
$ alias kube-vip="sudo ctr -n k8s.io image pull ghcr.io/kube-vip/kube-vip:$KVVERSION;sudo ctr -n k8s.io run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"
$ sudo mkdir -p /etc/kubernetes/manifests/
$ kube-vip manifest pod \
--address $VIP \
--interface $INTERFACE \
--controlplane \
--arp \
--leaderElection | sudo tee /etc/kubernetes/manifests/kube-vip.yaml
```
Join the cluster from m2:
```
$ sudo kubeadm join 10.10.7.33:6443 --token 6fwl39.aaoai7d4do0o8lr9 \
--discovery-token-ca-cert-hash sha256:2ad95aa92c57b4738fdf409b9e491351f35f809c8e33653c676a60ecd00de4bc \
--control-plane --certificate-key ac4ba0f9eb6ce8b21f45573c28ba288c78ca7082b6873f662f8e81f28ad88f0f
```
## Join master node m3
### Configure kube-vip
```
# kube-vip configuration details
# The VIP must be an unused IP outside the node addresses
$ export VIP=10.10.7.33
# NIC name
$ export INTERFACE=ens18
# Look up the latest kube-vip release tag
$ export KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
# Check the kube-vip version
$ echo $KVVERSION
v0.9.1
$ alias kube-vip="sudo ctr -n k8s.io image pull ghcr.io/kube-vip/kube-vip:$KVVERSION;sudo ctr -n k8s.io run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"
$ sudo mkdir -p /etc/kubernetes/manifests/
$ kube-vip manifest pod \
--address $VIP \
--interface $INTERFACE \
--controlplane \
--arp \
--leaderElection | sudo tee /etc/kubernetes/manifests/kube-vip.yaml
```
Join the cluster from m3:
```
$ sudo kubeadm join 10.10.7.33:6443 --token 6fwl39.aaoai7d4do0o8lr9 \
--discovery-token-ca-cert-hash sha256:2ad95aa92c57b4738fdf409b9e491351f35f809c8e33653c676a60ecd00de4bc \
--control-plane --certificate-key ac4ba0f9eb6ce8b21f45573c28ba288c78ca7082b6873f662f8e81f28ad88f0f
```
## Join worker nodes w1, w2, w3
If you did not record the command earlier, you can regenerate the `worker` join command on m1:
```
$ sudo kubeadm token create --print-join-command
```
### Verify the environment
```
$ kubectl get no
NAME STATUS ROLES AGE VERSION
m1 Ready control-plane 7m53s v1.32.5
m2 Ready control-plane 4m21s v1.32.5
m3 Ready control-plane 4m18s v1.32.5
w1 Ready <none> 2m18s v1.32.5
w2 Ready <none> 2m13s v1.32.5
w3 Ready <none> 2m11s v1.32.5
```
Label w1, w2, and w3 with the same `worker` role:
```
$ for n in w1 w2 w3; do kubectl label node $n node-role.kubernetes.io/worker=; done
```
```
$ kubectl get no
NAME STATUS ROLES AGE VERSION
m1 Ready control-plane 8m47s v1.32.5
m2 Ready control-plane 5m15s v1.32.5
m3 Ready control-plane 5m12s v1.32.5
w1 Ready worker 3m12s v1.32.5
w2 Ready worker 3m7s v1.32.5
w3 Ready worker 3m5s v1.32.5
```
## Verify API server HA failover
Check the `plndr-cp-lock` lease; m1 currently holds the VIP:
```
$ kubectl -n kube-system get lease
NAME HOLDER AGE
apiserver-276wpfzo55z6xlhwmv7sebbpdm apiserver-276wpfzo55z6xlhwmv7sebbpdm_74f6bead-cf28-434f-a13d-ca8b312b51e6 3m3s
kube-controller-manager m1_7c86a948-be78-4a01-b4fa-a359367f212a 3m
kube-scheduler m1_c8936eb4-8d33-4b10-a9b6-0211573e4316 2m58s
plndr-cp-lock m1 3m3s
```
Shut down m1:
```
bigred@m1:~$ sudo poweroff
```
From an external management host the cluster is still reachable, and the VIP has failed over to m2:
```
$ kubectl get no
NAME STATUS ROLES AGE VERSION
m1 NotReady control-plane 11m v1.32.5
m2 Ready control-plane 7m38s v1.32.5
m3 Ready control-plane 7m35s v1.32.5
w1 Ready worker 5m35s v1.32.5
w2 Ready worker 5m30s v1.32.5
w3 Ready worker 5m28s v1.32.5
$ kubectl -n kube-system get lease
NAME HOLDER AGE
apiserver-276wpfzo55z6xlhwmv7sebbpdm apiserver-276wpfzo55z6xlhwmv7sebbpdm_74f6bead-cf28-434f-a13d-ca8b312b51e6 10m
apiserver-adkfkrghmfikf7izpjatkklyyi apiserver-adkfkrghmfikf7izpjatkklyyi_95e8741c-f4dd-45e3-8bce-a91f9e76255d 6m41s
apiserver-ormoeicu2rhguypboutgrrqply apiserver-ormoeicu2rhguypboutgrrqply_67f37328-5c8b-46d2-90be-8153f665f88a 6m34s
kube-controller-manager m2_c61340c5-b87b-4b96-9bfa-bf11e8314852 10m
kube-scheduler m3_af7b5edd-dc14-4dcb-a2a5-03f493fd4cdf 10m
plndr-cp-lock m2 10m
```
## Application installation