# Installing Kubernetes v1.30 (1 master, 2 workers) on Ubuntu Server with kubeadm
## Prerequisites
Every node must have a static IP and a unique hostname configured beforehand; a sketch of one way to do this follows.
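The original steps for this are not shown, so below is a minimal sketch assuming netplan on Ubuntu Server. The interface name (`ens160`), gateway, and DNS server are placeholders for your environment.
```
$ sudo hostnamectl set-hostname m1            # use w1 / w2 on the worker nodes
$ sudo nano /etc/netplan/50-cloud-init.yaml   # the file name may differ on your system
network:
  version: 2
  ethernets:
    ens160:                                   # placeholder: replace with your interface name (see `ip a`)
      dhcp4: false
      addresses: [172.20.7.70/24]             # this node's static IP
      routes:
      - to: default
        via: 172.20.7.254                     # placeholder gateway
      nameservers:
        addresses: [8.8.8.8]                  # placeholder DNS
$ sudo netplan apply
```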
## Set up the master node
### Set the time zone (Asia/Taipei)
```
$ sudo timedatectl set-timezone Asia/Taipei
```
```
$ timedatectl
Local time: Tue 2025-06-10 17:53:24 CST
Universal time: Tue 2025-06-10 09:53:24 UTC
RTC time: Tue 2025-06-10 09:53:24
Time zone: Asia/Taipei (CST, +0800)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
```
### Configure time synchronization
```
$ sudo apt -y install chrony
$ sudo systemctl enable chrony
# switch to Google's NTP servers
$ sudo nano /etc/chrony/chrony.conf
......
#pool ntp.ubuntu.com iburst maxsources 4
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2
pool time.google.com iburst maxsources 4
$ sudo systemctl daemon-reload
$ sudo systemctl restart chrony
```
```
$ chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? time2.google.com 1 6 1 51 +70us[ +70us] +/- 3425us
```
### Disable the swap partition
```
$ sudo swapoff -a
$ sudo sed -i '/swap/s/^/#/' /etc/fstab
```
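kubelet refuses to run with swap enabled (unless explicitly allowed), so it is worth confirming that swap really is off:
```
$ swapon --show   # no output means no active swap
$ free -h         # the Swap line should read 0B
```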
### Configure kernel network settings
```
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sudo sysctl --system
$ sudo reboot
```
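After the reboot, you can confirm that the kernel modules and sysctl values persisted:
```
$ lsmod | grep -E 'overlay|br_netfilter'
$ sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# all three sysctl values should be 1
```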
### Download containerd
```
$ wget https://github.com/containerd/containerd/releases/download/v2.1.1/containerd-2.1.1-linux-amd64.tar.gz
$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
$ sudo tar Cxzvf /usr/local containerd-2.1.1-linux-amd64.tar.gz
$ sudo mkdir -p /usr/local/lib/systemd/system
$ sudo mv containerd.service /usr/local/lib/systemd/system/containerd.service
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now containerd
```
```
$ sudo systemctl status containerd.service
● containerd.service - containerd container runtime
Loaded: loaded (/usr/local/lib/systemd/system/containerd.service; enabled; preset: enabled)
Active: active (running) since Tue 2025-06-10 06:17:14 UTC; 9s ago
Docs: https://containerd.io
Process: 1162 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
```
### Download runc
```
$ wget https://github.com/opencontainers/runc/releases/download/v1.3.0/runc.amd64
$ sudo install -m 755 runc.amd64 /usr/local/sbin/runc
```
```
$ runc -v
runc version 1.3.0
commit: v1.3.0-0-g4ca628d1
spec: 1.2.1
go: go1.23.8
libseccomp: 2.5.6
```
### Download the CNI plugins
```
$ sudo mkdir -p /opt/cni/bin
$ curl -sL "$(curl -sL https://api.github.com/repos/containernetworking/plugins/releases/latest | \
jq -r '.assets[].browser_download_url' | grep 'linux-amd64.*.tgz$')" -o ~/cni-plugins.tgz
$ sudo tar xf ~/cni-plugins.tgz -C /opt/cni/bin
```
```
$ sudo ls -l /opt/cni/bin
total 96188
-rwxr-xr-x 1 root root 5033580 Apr 25 12:58 bandwidth
-rwxr-xr-x 1 root root 5694447 Apr 25 12:58 bridge
-rwxr-xr-x 1 root root 13924938 Apr 25 12:58 dhcp
-rwxr-xr-x 1 root root 5247557 Apr 25 12:58 dummy
-rwxr-xr-x 1 root root 5749447 Apr 25 12:58 firewall
-rwxr-xr-x 1 root root 5163089 Apr 25 12:58 host-device
-rwxr-xr-x 1 root root 4364143 Apr 25 12:58 host-local
-rwxr-xr-x 1 root root 5269812 Apr 25 12:58 ipvlan
-rw-r--r-- 1 root root 11357 Apr 25 12:58 LICENSE
-rwxr-xr-x 1 root root 4263979 Apr 25 12:58 loopback
-rwxr-xr-x 1 root root 5305057 Apr 25 12:58 macvlan
-rwxr-xr-x 1 root root 5125860 Apr 25 12:58 portmap
-rwxr-xr-x 1 root root 5477120 Apr 25 12:58 ptp
-rw-r--r-- 1 root root 2343 Apr 25 12:58 README.md
-rwxr-xr-x 1 root root 4488703 Apr 25 12:58 sbr
-rwxr-xr-x 1 root root 3736370 Apr 25 12:58 static
-rwxr-xr-x 1 root root 5332257 Apr 25 12:58 tap
-rwxr-xr-x 1 root root 4352498 Apr 25 12:58 tuning
-rwxr-xr-x 1 root root 5267833 Apr 25 12:58 vlan
-rwxr-xr-x 1 root root 4644777 Apr 25 12:58 vrf
```
### Generate the containerd configuration
```
$ sudo mkdir /etc/containerd
$ sudo containerd config default | sudo tee /etc/containerd/config.toml
```
* Edit `config.toml`
* Set `SystemdCgroup = true`
```
$ sudo nano /etc/containerd/config.toml
......
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes]
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
......
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
SystemdCgroup = true
```

* Restart containerd
```
$ sudo systemctl restart containerd.service
```
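To confirm containerd picked up the change, dump the effective configuration and look for the cgroup setting:
```
$ sudo containerd config dump | grep SystemdCgroup
# should print: SystemdCgroup = true
```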
### Install kubelet, kubeadm, and kubectl
```
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gpg
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
```
```
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.13", GitCommit:"50af91c466658b6a33d123fae8a487db1630971c", GitTreeState:"clean", BuildDate:"2025-05-15T09:55:10Z", GoVersion:"go1.23.8", Compiler:"gc", Platform:"linux/amd64"}
```
Before installing, you can check which package versions are available:
```
$ sudo apt-cache madison kubelet kubeadm kubectl
```
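If you need a specific patch release instead of the latest 1.30.x, you can pin it explicitly; the version string below is only an example and must match one of the versions reported by `apt-cache madison`:
```
$ sudo apt-get install -y kubelet=1.30.13-1.1 kubeadm=1.30.13-1.1 kubectl=1.30.13-1.1
$ sudo apt-mark hold kubelet kubeadm kubectl
```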
### Initialize the cluster
* Replace `advertiseAddress` with your own master node IP
```
$ nano init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.20.7.70                      # change to your master node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock  # containerd CRI socket; adjust if you use a different runtime
  imagePullPolicy: IfNotPresent
  name: m1                                           # change to your master node hostname
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.30.13
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: topgun                                  # set your cluster name
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
    secure-port: "10257"
  extraVolumes:
  - name: tz-config
    hostPath: /etc/localtime
    mountPath: /etc/localtime
    readOnly: true
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
    secure-port: "10259"
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      listen-metrics-urls: "http://0.0.0.0:2381"
dns: {}
imageRepository: registry.k8s.io
networking:
  dnsDomain: cluster.local                           # DNS domain used by Kubernetes Services
  podSubnet: 10.244.0.0/16                           # subnet used by Pods
  serviceSubnet: 10.98.0.0/24                        # subnet used by Kubernetes Services
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: "0.0.0.0:10249"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
# containerLogMaxFiles: 3
# containerLogMaxSize: 1Mi
featureGates:
  GracefulNodeShutdown: true
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
imageMinimumGCAge: "2m0s"                            # keep images for at least 2 minutes
imageMaximumGCAge: "168h"                            # images unused for more than 1 week (168 h) may be garbage-collected
```
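Optionally, pre-pull the control-plane images with the same config file; this surfaces registry or configuration problems before `kubeadm init` runs:
```
$ sudo kubeadm config images pull --config init-config.yaml
```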
Start the installation:
```
$ sudo kubeadm init --config=init-config.yaml
```
Output:
```
......
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.20.7.70:6443 --token xiiijl.avef5v6vf8wk1lra \
--discovery-token-ca-cert-hash sha256:a2eb13197b6637880708d62f0f794ed2331d10bfa1b14126c6786294638424c8
```
* Set up kubeconfig
```
$ mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
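Optionally, enable kubectl shell completion for the current user (this assumes the default bash shell with the bash-completion package installed):
```
$ echo 'source <(kubectl completion bash)' >> ~/.bashrc
$ source ~/.bashrc
```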
* Check the cluster status
```
$ kubectl get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m1 NotReady control-plane 3h v1.30.13 172.20.7.70 <none> Ubuntu 24.04.2 LTS 6.8.0-60-generic containerd://2.1.1
$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5dd5756b68-d6nxr 0/1 Pending 0 179m
kube-system coredns-5dd5756b68-tzr56 0/1 Pending 0 179m
kube-system etcd-m1 1/1 Running 44 (5m42s ago) 179m
kube-system kube-apiserver-m1 1/1 Running 36 (5m42s ago) 179m
kube-system kube-controller-manager-m1 1/1 Running 37 (5m42s ago) 179m
kube-system kube-proxy-6qzqp 1/1 Running 33 (5m42s ago) 179m
kube-system kube-scheduler-m1 1/1 Running 35 (5m42s ago) 3h
```
### Deploy Canal
```
$ curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/canal.yaml | kubectl apply -f -
```
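You can wait for the Canal DaemonSet (named `canal` in the upstream manifest) to finish rolling out before checking the pods:
```
$ kubectl -n kube-system rollout status ds/canal
```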
### Master setup complete
```
$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6ffff5b846-r56s2 1/1 Running 0 68s
kube-system canal-hvlf7 2/2 Running 0 69s
kube-system coredns-5dd5756b68-d6nxr 1/1 Running 0 3h1m
kube-system coredns-5dd5756b68-tzr56 1/1 Running 0 3h1m
kube-system etcd-m1 1/1 Running 44 (7m20s ago) 3h1m
kube-system kube-apiserver-m1 1/1 Running 36 (7m20s ago) 3h1m
kube-system kube-controller-manager-m1 1/1 Running 37 (7m20s ago) 3h1m
kube-system kube-proxy-6qzqp 1/1 Running 33 (7m20s ago) 3h1m
kube-system kube-scheduler-m1 1/1 Running 35 (7m20s ago) 3h2m
```
### Configure crictl
```
$ sudo nano /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
```
```
$ sudo crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
e9debf34fd581 f6a228558381b 2 hours ago Running calico-kube-controllers 0 56d12fa9ea995 calico-kube-controllers-59dbcf78b8-6dnjw
6683f3e741a8d c9fe3bce8a6d8 2 hours ago Running kube-flannel 0 2c949f114525f canal-tn9fj
cda0123f33ea2 c69fa2e9cbf5f 2 hours ago Running coredns 0 b94f930ab06db coredns-55cb58b774-wp4b8
eae1209b8b616 c69fa2e9cbf5f 2 hours ago Running coredns 0 4d5666fac9fe9 coredns-55cb58b774-fhjp9
3554e2785a6e0 048bf7af1f8c6 2 hours ago Running calico-node 0 2c949f114525f canal-tn9fj
5268e2f9373d1 048bf7af1f8c6 2 hours ago Exited mount-bpffs 0 2c949f114525f canal-tn9fj
1fcafc4efb57d cda13293c895a 2 hours ago Exited install-cni 0 2c949f114525f canal-tn9fj
da1f4cea20be2 a6946560b0b08 2 hours ago Running kube-proxy 0 18d0c8da0def6 kube-proxy-728lr
677e9df2df351 2e96e5913fc06 2 hours ago Running etcd 0 ab920eae65ae8 etcd-m1
69676ea6f45fc e60416bd78b65 2 hours ago Running kube-apiserver 0 e2eab9a66ab91 kube-apiserver-m1
ab24f773b4f60 c32cfb72baccb 2 hours ago Running kube-controller-manager 0 1382bb797435c kube-controller-manager-m1
b2e4de9a04614 12675fc0a409d 2 hours ago Running kube-scheduler 0 f654e56e4cb8f kube-scheduler-m1
```
## Set up the worker nodes (repeat the following steps on every worker node)
### Set the time zone (Asia/Taipei)
```
$ sudo timedatectl set-timezone Asia/Taipei
```
```
$ timedatectl
Local time: Tue 2025-06-10 17:53:24 CST
Universal time: Tue 2025-06-10 09:53:24 UTC
RTC time: Tue 2025-06-10 09:53:24
Time zone: Asia/Taipei (CST, +0800)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
```
### Configure time synchronization
```
$ sudo apt -y install chrony
$ sudo systemctl enable chrony
# switch to Google's NTP servers
$ sudo nano /etc/chrony/chrony.conf
......
#pool ntp.ubuntu.com iburst maxsources 4
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2
server time.google.com prefer
$ sudo systemctl daemon-reload
$ sudo systemctl restart chrony
```
```
$ chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? time2.google.com 1 6 1 51 +70us[ +70us] +/- 3425us
```
### Disable the swap partition
```
$ sudo swapoff -a
$ sudo sed -i '/swap/s/^/#/' /etc/fstab
```
### Configure kernel network settings
```
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sudo sysctl --system
$ sudo reboot
```
### Download containerd
```
$ wget https://github.com/containerd/containerd/releases/download/v2.1.1/containerd-2.1.1-linux-amd64.tar.gz
$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
$ sudo tar Cxzvf /usr/local containerd-2.1.1-linux-amd64.tar.gz
$ sudo mkdir -p /usr/local/lib/systemd/system
$ sudo mv containerd.service /usr/local/lib/systemd/system/containerd.service
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now containerd
```
```
$ sudo systemctl status containerd.service
● containerd.service - containerd container runtime
Loaded: loaded (/usr/local/lib/systemd/system/containerd.service; enabled; preset: enabled)
Active: active (running) since Tue 2025-06-10 06:17:14 UTC; 9s ago
Docs: https://containerd.io
Process: 1162 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
```
### Download runc
```
$ wget https://github.com/opencontainers/runc/releases/download/v1.3.0/runc.amd64
$ sudo install -m 755 runc.amd64 /usr/local/sbin/runc
```
```
$ runc -v
runc version 1.3.0
commit: v1.3.0-0-g4ca628d1
spec: 1.2.1
go: go1.23.8
libseccomp: 2.5.6
```
### Download the CNI plugins
```
$ sudo mkdir -p /opt/cni/bin
$ curl -sL "$(curl -sL https://api.github.com/repos/containernetworking/plugins/releases/latest | \
jq -r '.assets[].browser_download_url' | grep 'linux-amd64.*.tgz$')" -o ~/cni-plugins.tgz
$ sudo tar xf ~/cni-plugins.tgz -C /opt/cni/bin
```
```
$ sudo ls -l /opt/cni/bin
total 96188
-rwxr-xr-x 1 root root 5033580 Apr 25 12:58 bandwidth
-rwxr-xr-x 1 root root 5694447 Apr 25 12:58 bridge
-rwxr-xr-x 1 root root 13924938 Apr 25 12:58 dhcp
-rwxr-xr-x 1 root root 5247557 Apr 25 12:58 dummy
-rwxr-xr-x 1 root root 5749447 Apr 25 12:58 firewall
-rwxr-xr-x 1 root root 5163089 Apr 25 12:58 host-device
-rwxr-xr-x 1 root root 4364143 Apr 25 12:58 host-local
-rwxr-xr-x 1 root root 5269812 Apr 25 12:58 ipvlan
-rw-r--r-- 1 root root 11357 Apr 25 12:58 LICENSE
-rwxr-xr-x 1 root root 4263979 Apr 25 12:58 loopback
-rwxr-xr-x 1 root root 5305057 Apr 25 12:58 macvlan
-rwxr-xr-x 1 root root 5125860 Apr 25 12:58 portmap
-rwxr-xr-x 1 root root 5477120 Apr 25 12:58 ptp
-rw-r--r-- 1 root root 2343 Apr 25 12:58 README.md
-rwxr-xr-x 1 root root 4488703 Apr 25 12:58 sbr
-rwxr-xr-x 1 root root 3736370 Apr 25 12:58 static
-rwxr-xr-x 1 root root 5332257 Apr 25 12:58 tap
-rwxr-xr-x 1 root root 4352498 Apr 25 12:58 tuning
-rwxr-xr-x 1 root root 5267833 Apr 25 12:58 vlan
-rwxr-xr-x 1 root root 4644777 Apr 25 12:58 vrf
```
### Generate the containerd configuration
```
$ sudo mkdir /etc/containerd
$ sudo containerd config default | sudo tee /etc/containerd/config.toml
```
* Edit `config.toml`
* Set `SystemdCgroup = true`
```
$ sudo nano /etc/containerd/config.toml
......
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes]
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
......
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
SystemdCgroup = true
```

* Restart containerd
```
$ sudo systemctl restart containerd.service
```
### Install kubelet and kubeadm
```
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gpg socat conntrack
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm
$ sudo apt-mark hold kubelet kubeadm
```
```
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.13", GitCommit:"50af91c466658b6a33d123fae8a487db1630971c", GitTreeState:"clean", BuildDate:"2025-05-15T09:55:10Z", GoVersion:"go1.23.8", Compiler:"gc", Platform:"linux/amd64"}
```
This is the join command to run on each worker node; if you did not record it, you can regenerate it on the master:
```
$ sudo kubeadm token create --print-join-command
```
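If you only need the `--discovery-token-ca-cert-hash` value, one way to derive it on the master (following the approach in the kubeadm documentation) is:
```
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'
```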
Join the worker node:
```
$ sudo kubeadm join 172.20.7.70:6443 --token 0xgctn.a0ytr90vxmi5fzz1 --discovery-token-ca-cert-hash sha256:97fed86a7fbb8e6eefefabbebd1079d8372473803b67d40b44648dc4298a0f5f
```
Output:
```
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.111626ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
### Configure crictl
```
$ sudo nano /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
```
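A quick sanity check that crictl can talk to containerd on the worker:
```
$ sudo crictl ps
# after joining, you should at least see the kube-proxy and canal containers
```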
## Back on the master: check the cluster status
Run the following commands in the m1 terminal:
```
$ kubectl get no
NAME STATUS ROLES AGE VERSION
m1 Ready control-plane 98m v1.30.13
w1 Ready <none> 6m18s v1.30.13
w2 Ready <none> 6m11s v1.30.13
```
Apply the worker role label to both w1 and w2:
```
$ kubectl label node w1 node-role.kubernetes.io/worker=; kubectl label node w2 node-role.kubernetes.io/worker=
```
```
$ kubectl get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m1 Ready control-plane 99m v1.30.13 172.20.7.70 <none> Ubuntu 24.04.2 LTS 6.8.0-60-generic containerd://2.1.1
w1 Ready worker 7m9s v1.30.13 172.20.7.71 <none> Ubuntu 24.04.2 LTS 6.8.0-60-generic containerd://2.1.1
w2 Ready worker 7m2s v1.30.13 172.20.7.72 <none> Ubuntu 24.04.2 LTS 6.8.0-60-generic containerd://2.1.1
```
## Installing an older Kubernetes version (1.21.14)
The Linux OS setup is identical to the steps above; this section starts from downloading the kubeadm, kubelet, and kubectl binaries.
### Download the binaries on the control plane
```
$ wget https://dl.k8s.io/v1.21.14/kubernetes-server-linux-amd64.tar.gz
$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
$ cd kubernetes/server/bin && sudo cp kubeadm kubelet kubectl /usr/bin/
```
### Download the binaries on the workers
```
$ wget https://dl.k8s.io/v1.21.14/kubernetes-node-linux-amd64.tar.gz
$ tar -xzvf kubernetes-node-linux-amd64.tar.gz
$ cd kubernetes/node/bin && sudo cp kubeadm kubelet /usr/bin/
```
### Configure the kubelet service
* If you use a different CRI, adjust the unit files accordingly
```
$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
$ sudo mkdir /etc/systemd/system/kubelet.service.d
$ cat << EOF | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating
# the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably,
# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.
# KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_KUBEADM_ARGS \$KUBELET_EXTRA_ARGS
EOF
```
```
$ sudo systemctl enable --now kubelet.service
```
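Until `kubeadm init` (or `kubeadm join`) generates `/var/lib/kubelet/config.yaml`, the kubelet will keep restarting in a loop; that is expected. You can watch it with:
```
$ sudo systemctl status kubelet
$ journalctl -u kubelet -f
```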
### Install Kubernetes
```
$ sudo kubeadm init --kubernetes-version v1.21.14 --pod-network-cidr=10.244.0.0/16 --v=5
```