# Kind (Kubernetes in Podman)

kind is a tool for running local Kubernetes clusters, using Docker containers (replaced with Podman in the examples below) as the cluster "nodes". kind was primarily designed for testing Kubernetes itself.
## Install kind
```
# https://kind.sigs.k8s.io/
# https://kind.sigs.k8s.io/docs/user/quick-start/#installing-from-release-binaries
$ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64
$ chmod +x ./kind
$ sudo mv ./kind /usr/local/bin/kind
```
## Install podman
* Installing podman on SUSE Linux
```
# 0. Register the system with the SUSE Customer Center (SCC) first
# 1. Install podman 4.8.3 and netavark on SLES 15 SP5
$ sudo SUSEConnect -p sle-module-containers/15.5/x86_64
$ sudo zypper -n install podman-4.8.3-150500.3.9.1 netavark-1.5.0-150500.1.4
# 2. Enable cgroup v2
## 2.1. Append "systemd.unified_cgroup_hierarchy=1" to the GRUB_CMDLINE_LINUX_DEFAULT variable
$ sudo nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent mitigations=auto quiet security=apparmor crashkernel=302M,high crashkernel=72M,low systemd.unified_cgroup_hierarchy=1"
## 2.2. Update GRUB
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# 3. Set Podman's network backend to netavark
$ sudo cp /usr/share/containers/containers.conf /etc/containers/containers.conf
## 3.1. Edit the config file and set network_backend to "netavark"
$ sudo nano /etc/containers/containers.conf
[network]
network_backend = "netavark"
## 3.2. Reset podman
$ sudo podman system reset --force
# 4. Reboot
$ sudo init 6
# 5. Verify that the podman network backend is netavark
$ sudo podman info --format '{{.Host.NetworkBackend}}'
netavark
# 6. Check the cgroup version
$ sudo podman info | grep 'cgroupVersion'
cgroupVersion: v2
```
```
$ sudo podman version
Client: Podman Engine
Version: 4.8.3
API Version: 4.8.3
Go Version: go1.21.10
Built: Sun May 12 15:25:43 2024
OS/Arch: linux/amd64
```
* Alpine environment setup
```
$ sudo nano /etc/rc.conf
......
rc_cgroup_mode="unified"
......
$ sudo rc-service cgroups start
$ sudo rc-update add cgroups
$ sudo reboot
$ sudo podman info | grep 'cgroupVersion'
cgroupVersion: v2
```
## Install kubectl
```
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
$ rm kubectl
```
## Create c24 Cluster
```
$ mkdir ~/k8s
$ echo 'kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: c24
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.98.0.0/12"
  disableDefaultCNI: true
nodes:
- role: control-plane
  image: docker.io/kindest/node:v1.24.12
- role: worker
  image: docker.io/kindest/node:v1.24.12' > ~/k8s/c24.yaml
```
```
$ sudo kind create cluster --config ~/k8s/c24.yaml
enabling experimental podman provider
Creating cluster "c24" ...
✓ Ensuring node image (docker.io/kindest/node:v1.24.12) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-c24"
You can now use your cluster with:
kubectl cluster-info --context kind-c24
Have a nice day! 👋
```
* Export the kubeconfig
```
$ mkdir ~/.kube
$ sudo kind get kubeconfig --name c24 > ~/.kube/config
```
> To reduce security risk, it is recommended to manage k8s from inside the cluster rather than exporting the kubeconfig.
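As an alternative to exporting the kubeconfig, kubectl can be run directly inside the control-plane node, which already holds the admin credentials. A minimal sketch, assuming the c24 cluster is running (kind node images ship kubectl, and the admin kubeconfig lives at /etc/kubernetes/admin.conf inside the node):

```shell
# Run kubectl inside the control-plane container instead of exporting
# the kubeconfig to the host
sudo podman exec -it c24-control-plane \
  kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
```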
* Label the worker node
```
$ kubectl label node c24-worker node-role.kubernetes.io/worker=
```
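kind does not set a role label on worker nodes, so their ROLES column shows `<none>` until the label above is applied. A quick check (assumes the c24 cluster is up and the kubeconfig is in place):

```shell
# The ROLES column is derived from node-role.kubernetes.io/* labels;
# after labeling, c24-worker should show "worker" instead of <none>
kubectl get nodes
```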
## Check c24 Cluster
```
$ sudo kind get clusters
enabling experimental podman provider
c24
$ sudo kind get nodes --name c24
enabling experimental podman provider
c24-worker
c24-control-plane
$ sudo podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7a490bb4d2c3 docker.io/kindest/node:v1.24.12 3 minutes ago Up 3 minutes c24-worker
bc06442dfbce docker.io/kindest/node:v1.24.12 3 minutes ago Up 3 minutes 127.0.0.1:46339->6443/tcp c24-control-plane
```
## Configure K8S Nodes
```
$ mkdir -p ~/kind/cni-plugins
$ curl -sL "$(curl -sL https://api.github.com/repos/containernetworking/plugins/releases/latest | jq -r '.assets[].browser_download_url' | grep 'linux-amd64.*.tgz$')" -o ~/kind/cni-plugins.tgz
$ tar xf ~/kind/cni-plugins.tgz -C ~/kind/cni-plugins; rm ~/kind/cni-plugins.tgz
```
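The `curl | jq | grep` pipeline above lists every asset URL of the latest containernetworking/plugins release and keeps only the linux-amd64 tarball. The filter step can be illustrated on its own with hypothetical sample URLs (the host and version below are made up):

```shell
# Two hypothetical asset URLs; only the linux-amd64 .tgz survives the filter
printf '%s\n' \
  'https://example.com/cni-plugins-linux-arm64-v9.9.9.tgz' \
  'https://example.com/cni-plugins-linux-amd64-v9.9.9.tgz' \
  | grep 'linux-amd64.*.tgz$'
```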
```
$ sudo podman cp ~/kind/cni-plugins/bridge c24-control-plane:/opt/cni/bin/bridge
$ sudo podman cp ~/kind/cni-plugins/bridge c24-worker:/opt/cni/bin/bridge
$ alias > ~/alias.txt
$ sudo podman cp ~/alias.txt c24-control-plane:alias.txt
```
## Log in to the Control Plane
```
$ sudo podman exec -it c24-control-plane bash
root@c24-control-plane:/# cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
## Install the Canal Network Add-on
```
root@c24-control-plane:/# curl -sLO https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/canal.yaml
root@c24-control-plane:/# kubectl apply -f canal.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/canal created
serviceaccount/calico-cni-plugin created
configmap/canal-config created
......
```
## Check the Canal Network Add-on
```
$ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5f8d7d44cd-2pxzl 1/1 Running 0 89s
canal-m85wf 2/2 Running 0 89s
canal-ss6nt 2/2 Running 0 89s
coredns-57575c5f89-k77hp 1/1 Running 0 9m26s
coredns-57575c5f89-txwk9 1/1 Running 0 9m26s
etcd-c24-control-plane 1/1 Running 0 9m40s
kube-apiserver-c24-control-plane 1/1 Running 0 9m40s
kube-controller-manager-c24-control-plane 1/1 Running 0 9m39s
kube-proxy-kcm8l 1/1 Running 0 9m27s
kube-proxy-lx2gf 1/1 Running 0 9m23s
kube-scheduler-c24-control-plane 1/1 Running 0 9m39s
```
## Test the kind K8S Cluster
```
$ kubectl run nginx --image=quay.io/cloudwalker/nginx
pod/nginx created
$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 19s 10.244.1.6 c24-worker <none> <none>
$ curl 10.244.1.6
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
......
$ kubectl delete pod nginx
pod "nginx" deleted
$ exit
```
## Stop c24 Cluster
```
$ sudo podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b3a9f75856bc docker.io/kindest/node:v1.24.12 13 minutes ago Up 13 minutes 127.0.0.1:44817->6443/tcp c24-control-plane
886aaa310b92 docker.io/kindest/node:v1.24.12 13 minutes ago Up 13 minutes c24-worker
$ sudo podman stop c24-control-plane c24-worker
```
## Delete c24 Cluster
```
$ sudo kind get clusters
c24
$ sudo kind delete clusters c24
enabling experimental podman provider
Deleted nodes: ["c24-worker" "c24-control-plane"]
Deleted clusters: ["c24"]
$ sudo podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
## Create c27 Cluster
* Enabling ipvs mode requires Canal v3.27.3 or later
```
$ curl -sLO https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/canal.yaml
$ kubectl apply -f canal.yaml
```
```
$ echo 'kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: c27
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.98.0.0/12"
  kubeProxyMode: "ipvs"
  disableDefaultCNI: true
nodes:
- role: control-plane
  image: quay.io/cloudwalker/node:v1.27.3
- role: worker
  image: quay.io/cloudwalker/node:v1.27.3
- role: worker
  image: quay.io/cloudwalker/node:v1.27.3' > ~/k8s/c27.yaml
```
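With the config written, the cluster is created the same way as c24, and the kube-proxy mode can be confirmed afterwards. A sketch, assuming the c27 cluster is up and the kubeconfig is in place (kubeadm stores the kube-proxy settings in the kube-proxy ConfigMap):

```shell
sudo kind create cluster --config ~/k8s/c27.yaml
# The output should include the configured proxy mode (ipvs)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'
```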
## Create c28 Cluster
```
$ echo 'kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: c28
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.98.0.0/12"
  kubeProxyMode: "ipvs"
  disableDefaultCNI: true
nodes:
- role: control-plane
  image: quay.io/cloudwalker/node:v1.28.7
- role: worker
  image: quay.io/cloudwalker/node:v1.28.7' > ~/k8s/c28.yaml
```
## Create c29 Cluster
```
$ echo 'kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: c29
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.98.0.0/12"
  kubeProxyMode: "ipvs"
  disableDefaultCNI: true
nodes:
- role: control-plane
  image: quay.io/cloudwalker/node:v1.29.2
- role: worker
  image: quay.io/cloudwalker/node:v1.29.2
- role: worker
  image: quay.io/cloudwalker/node:v1.29.2' > ~/k8s/c29.yaml
```
## Create topgun Cluster
```
$ echo 'kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: topgun
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.98.0.0/12"
  kubeProxyMode: "ipvs"
nodes:
- role: control-plane
- role: worker
- role: worker' > ~/k8s/topgun.yaml
```