This article walks through installing kind, setting its node provider to Podman, and creating a kind cluster.
kind is a tool for running local Kubernetes clusters using Podman/Docker container “nodes”.
kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI/CD.
# 1. Install podman 4.8.3 and netavark on SLES 15 SP5
$ sudo SUSEConnect -p sle-module-containers/15.5/x86_64
$ sudo zypper -n install podman-4.8.3-150500.3.9.1 netavark-1.5.0-150500.1.4
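To confirm that zypper actually installed the pinned versions, a quick rpm query works:
$ rpm -q podman netavark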
# 2. Enable cgroup v2
## 2.1. Append "systemd.unified_cgroup_hierarchy=1" to the GRUB_CMDLINE_LINUX_DEFAULT variable
$ sudo nano /etc/default/grub
...
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent mitigations=auto quiet security=apparmor crashkernel=302M,high crashkernel=72M,low systemd.unified_cgroup_hierarchy=1"
## 2.2. Regenerate the GRUB config so the setting survives reboots
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
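After the reboot in step 5, you can confirm that cgroup v2 is active: on a unified hierarchy, the filesystem mounted at /sys/fs/cgroup reports the cgroup2fs type.
$ stat -fc %T /sys/fs/cgroup
cgroup2fs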
# 3. Set Podman's network backend to netavark
$ sudo cp /usr/share/containers/containers.conf /etc/containers/containers.conf
## 3.1. Edit the config file and change network_backend to "netavark"
$ sudo nano /etc/containers/containers.conf
[network]
network_backend = "netavark"
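If you prefer a non-interactive edit, a sed one-liner like the following should make the same change, assuming the copied file contains a network_backend line (possibly commented out):
$ sudo sed -i 's|^#\?\s*network_backend\s*=.*|network_backend = "netavark"|' /etc/containers/containers.conf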
## 3.2. Reset Podman
$ sudo podman system reset --force
# 4. Configure kind to use Podman as the node provider
$ sudo nano /etc/profile
...
## Append the following line
export KIND_EXPERIMENTAL_PROVIDER="podman"
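The same line can be appended without opening an editor:
$ echo 'export KIND_EXPERIMENTAL_PROVIDER="podman"' | sudo tee -a /etc/profile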
# 5. Reboot
$ sudo init 6
# 6. Verify the podman network backend
$ podman info --format '{{.Host.NetworkBackend}}'
netavark
# 7. Check the cgroup version
$ podman info | grep 'cgroupVersion'
cgroupVersion: v2
# 8. Verify that kind's node provider is set to podman
$ env | grep KIND_EXPERIMENTAL_PROVIDER
KIND_EXPERIMENTAL_PROVIDER=podman
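If you rebuild hosts often, the checks in steps 6 through 8 can be bundled into one small script. A minimal sketch (the script name and messages are just examples):
#!/usr/bin/env bash
# check-kind-podman.sh -- hypothetical helper bundling the checks from steps 6-8
set -euo pipefail

# Step 6: the network backend must be netavark
[ "$(podman info --format '{{.Host.NetworkBackend}}')" = "netavark" ] \
  || { echo "FAIL: network backend is not netavark"; exit 1; }

# Step 7: cgroup v2 must be active
podman info | grep -q 'cgroupVersion: v2' \
  || { echo "FAIL: cgroup v2 is not active"; exit 1; }

# Step 8: kind must be told to use the podman provider
[ "${KIND_EXPERIMENTAL_PROVIDER:-}" = "podman" ] \
  || { echo "FAIL: KIND_EXPERIMENTAL_PROVIDER is not set to podman"; exit 1; }

echo "OK: this host is ready for kind with podman"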
The steps below perform the same setup on an apt-based distribution, this time installing a static podman build and compiling netavark and aardvark-dns from source.
# 1. Download podman v4.8.3
$ curl -fsSL -o podman-linux-amd64.tar.gz https://github.com/mgoltzsche/podman-static/releases/download/v4.8.3/podman-linux-amd64.tar.gz && \
tar -xzf podman-linux-amd64.tar.gz && \
sudo cp -r podman-linux-amd64/usr podman-linux-amd64/etc /
> To restart containers with restart-policy=always on boot, enable the podman-restart systemd service
# 2. Download netavark, aardvark-dns, and some required packages
$ sudo apt update && \
sudo apt-get install -y \
containernetworking-plugins \
protobuf-compiler \
cargo \
make \
jq \
uidmap && \
sudo apt autoremove
$ git clone https://github.com/containers/netavark.git && \
cd netavark/ && \
sudo make && \
sudo cp ./bin/netavark /usr/local/lib/podman/
$ git clone https://github.com/containers/aardvark-dns.git && \
cd aardvark-dns/ && \
sudo make && \
sudo cp ./bin/aardvark-dns /usr/local/lib/podman/
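Assuming both builds succeeded, the freshly compiled binaries should report their versions. Note that /usr/local/lib/podman should be one of the directories Podman searches for helper binaries by default (the helper_binaries_dir setting in containers.conf), which is why they are copied there.
$ /usr/local/lib/podman/netavark --version
$ /usr/local/lib/podman/aardvark-dns --version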
# 3. Edit the config file and change network_backend to "netavark"
$ sudo nano /etc/containers/containers.conf
[network]
network_backend = "netavark"
## 3.1. Reset Podman
$ sudo podman system reset --force
# 4. Configure kind to use Podman as the node provider
$ sudo nano /etc/profile
...
## Append the following line
export KIND_EXPERIMENTAL_PROVIDER="podman"
# 5. Reboot
$ sudo init 6
# 6. Verify the podman network backend
$ podman info --format '{{.Host.NetworkBackend}}'
netavark
# 7. Check the cgroup version
$ podman info | grep 'cgroupVersion'
cgroupVersion: v2
# 8. Verify that kind's node provider is set to podman
$ env | grep KIND_EXPERIMENTAL_PROVIDER
KIND_EXPERIMENTAL_PROVIDER=podman
All of the settings below are lost whenever the Google Cloud Shell instance is reset.
# 1. Remove docker
$ sudo apt-get remove -y docker-ce \
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin
$ sudo apt autoremove
# 2. Enable IPv6
$ sudo nano /etc/sysctl.conf
...
net.ipv6.conf.all.disable_ipv6=0
net.ipv6.conf.default.disable_ipv6=0
net.ipv6.conf.lo.disable_ipv6=0
$ sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
$ sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0
$ sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0
$ sudo sysctl -p
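You can verify that all three sysctls took effect (each should report 0):
$ sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.default.disable_ipv6 net.ipv6.conf.lo.disable_ipv6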
# 3. Download podman v4.8.3
$ curl -fsSL -o podman-linux-amd64.tar.gz https://github.com/mgoltzsche/podman-static/releases/download/v4.8.3/podman-linux-amd64.tar.gz && \
tar -xzf podman-linux-amd64.tar.gz && \
sudo cp -r podman-linux-amd64/usr podman-linux-amd64/etc /
# 4. Download netavark, aardvark-dns, and some required packages
$ sudo apt update && \
sudo apt-get install -y \
containernetworking-plugins \
protobuf-compiler \
cargo \
make \
jq \
uidmap && \
sudo apt autoremove
$ git clone https://github.com/containers/netavark.git && \
cd netavark/ && \
sudo make && \
sudo cp ./bin/netavark /usr/local/lib/podman/
$ git clone https://github.com/containers/aardvark-dns.git && \
cd aardvark-dns/ && \
sudo make && \
sudo cp ./bin/aardvark-dns /usr/local/lib/podman/
# 5. Edit the config file and change network_backend to "netavark"
$ sudo nano /etc/containers/containers.conf
[network]
network_backend = "netavark"
## 5.1. Reset Podman
$ sudo podman system reset --force
# 6. Configure kind to use Podman as the node provider
$ sudo nano /etc/profile
...
## Append the following line
export KIND_EXPERIMENTAL_PROVIDER="podman"
# 7. Verify the podman network backend
$ podman info --format '{{.Host.NetworkBackend}}'
netavark
# 8. Check the cgroup version
$ podman info | grep 'cgroupVersion'
cgroupVersion: v2
# 9. Verify that kind's node provider is set to podman
$ sudo su - k130
$ env | grep KIND_EXPERIMENTAL_PROVIDER
KIND_EXPERIMENTAL_PROVIDER=podman
# 1. Install kind
$ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64 && \
chmod +x ./kind && \
sudo mv ./kind /usr/local/bin/kind
# 2. Check the Podman version
$ sudo podman version
Client: Podman Engine
Version: 4.8.3
API Version: 4.8.3
Go Version: go1.21.8
Built: Tue Mar 19 20:00:00 2024
OS/Arch: linux/amd64
# 3. Check the kind version
$ sudo kind --version
kind version 0.23.0
$ mkdir ~/k8s
$ echo 'kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: c30
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.98.0.0/12"
  disableDefaultCNI: true
nodes:
- role: control-plane
  image: docker.io/kindest/node:v1.30.0
- role: worker
  image: docker.io/kindest/node:v1.30.0' > ~/k8s/c30.yaml
$ sudo kind create cluster --config ~/k8s/c30.yaml
enabling experimental podman provider
Creating cluster "c30" ...
✓ Ensuring node image (docker.io/kindest/node:v1.30.0) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-c30"
You can now use your cluster with:
kubectl cluster-info --context kind-c30
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: c30
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.98.0.0/12"
  disableDefaultCNI: true
nodes:
- role: control-plane
  image: docker.io/kindest/node:v1.30.0
- role: worker
  image: docker.io/kindest/node:v1.30.0
- kind: Cluster specifies that this file configures a kind Cluster.
- apiVersion: kind.x-k8s.io/v1alpha4 sets the version of the kind config schema; the meaning of some settings and options differs between versions.
- name: c30 sets the name of the kind cluster, here c30.
- networking configures the cluster's network; many options can be customized here.
- podSubnet sets the IP range that pods receive addresses from.
- serviceSubnet sets the IP range that Kubernetes Services receive addresses from.
- disableDefaultCNI: true disables the simple networking implementation ("kindnetd") that kind ships by default, which is built on standard CNI plugins (such as ptp and host-local) and simple netlink routes.
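Since disableDefaultCNI is set, the cluster's nodes should stay NotReady until a CNI plugin is installed (we deploy Canal below). You can observe this from the host with:
$ sudo kubectl get nodes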
$ sudo kind get clusters
enabling experimental podman provider
c30
$ sudo kind get nodes --name c30
enabling experimental podman provider
c30-control-plane
c30-worker
$ sudo podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3eb5630a3ee3 docker.io/kindest/node:v1.30.0 38 minutes ago Up 38 minutes 127.0.0.1:40297->6443/tcp c30-control-plane
f0d28b8e2a4e docker.io/kindest/node:v1.30.0 38 minutes ago Up 37 minutes c30-worker
$ mkdir ~/cni/
$ curl -sL "$(curl -sL https://api.github.com/repos/containernetworking/plugins/releases/latest | jq -r '.assets[].browser_download_url' | grep 'linux-amd64.*.tgz$')" -o ~/cni/cni-plugins.tgz
# If the command above fails, run this one instead
$ curl -sL https://github.com/containernetworking/plugins/releases/download/v1.5.0/cni-plugins-linux-amd64-v1.5.0.tgz -o ~/cni/cni-plugins.tgz
$ tar xf ~/cni/cni-plugins.tgz -C ~/cni; rm ~/cni/cni-plugins.tgz
$ sudo podman cp ~/cni/bridge c30-control-plane:/opt/cni/bin/bridge
$ sudo podman cp ~/cni/bridge c30-worker:/opt/cni/bin/bridge
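With only two nodes, the two podman cp commands above are fine; for larger clusters, a loop over kind's node list is a possible generalization (a sketch, assuming kind prints node names one per line as shown earlier):
$ for node in $(sudo kind get nodes --name c30); do
    sudo podman cp ~/cni/bridge "$node":/opt/cni/bin/bridge
  done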
$ sudo podman exec -it c30-control-plane bash
root@c30-control-plane:/# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
# Run the following commands inside c30-control-plane
$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/canal.yaml
## Check the Canal networking pods
$ kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6df7596dbd-c2smj 1/1 Running 0 7m17s
canal-kmjk9 2/2 Running 0 7m17s
canal-vtxlq 2/2 Running 0 7m18s
coredns-7db6d8ff4d-2mrc7 1/1 Running 1 63m
coredns-7db6d8ff4d-cjhnn 1/1 Running 0 63m
etcd-c30-control-plane 1/1 Running 0 64m
kube-apiserver-c30-control-plane 1/1 Running 0 64m
kube-controller-manager-c30-control-plane 1/1 Running 2 (5m57s ago) 64m
kube-proxy-8jrdn 1/1 Running 0 63m
kube-proxy-l9dpx 1/1 Running 0 63m
kube-scheduler-c30-control-plane 1/1 Running 2 (5m ago) 64m
$ kubectl run nginx --image=quay.io/cloudwalker/nginx
pod/nginx created
$ kubectl get pods nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 10m 10.244.1.5 c30-worker <none> <none>
$ curl $(kubectl get pods nginx --no-headers -o custom-columns=ip:.status.podIP)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
$ kubectl delete pod nginx
pod "nginx" deleted
$ exit
$ sudo podman ps -a
CONTAINER ID  IMAGE                           COMMAND  CREATED            STATUS            PORTS                      NAMES
3eb5630a3ee3  docker.io/kindest/node:v1.30.0           About an hour ago  Up About an hour  127.0.0.1:40297->6443/tcp  c30-control-plane
f0d28b8e2a4e  docker.io/kindest/node:v1.30.0           About an hour ago  Up About an hour                             c30-worker
$ sudo podman stop c30-control-plane c30-worker
c30-worker
c30-control-plane
$ sudo kind get clusters
enabling experimental podman provider
c30
$ sudo kind delete clusters c30
enabling experimental podman provider
Deleted nodes: ["c30-control-plane" "c30-worker"]
Deleted clusters: ["c30"]
$ sudo podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
After a kind cluster is created, if the $KUBECONFIG environment variable is not set, the cluster's access credentials are stored in ${HOME}/.kube/config by default.
However, because we created our kind clusters with sudo, the kubeconfig lives under /root instead; when there are multiple clusters, each one is appended into the same kubeconfig file.
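If you would rather run kubectl without sudo, one option is to copy root's kubeconfig into your own home directory:
$ mkdir -p ~/.kube
$ sudo cp /root/.kube/config ~/.kube/config
$ sudo chown "$USER": ~/.kube/config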
# 1. List all clusters defined in the kubeconfig
$ sudo kubectl config get-clusters
NAME
kind-c30
kind-c29
# 2. List all contexts defined in the kubeconfig
$ sudo kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
kind-c29 kind-c29 kind-c29
* kind-c30 kind-c30 kind-c30
# 3. Show the current context
$ sudo kubectl config current-context
kind-c30
# 4. Switch to the K8s cluster you want to manage
$ sudo kubectl config use-context kind-c29
Switched to context "kind-c29".
# 5. Check the node status of the c29 cluster
$ sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
c29-control-plane Ready control-plane 23m v1.29.1
c29-worker Ready <none> 22m v1.29.1
What does "context" mean?
- A context element in a kubeconfig file groups the parameters for accessing a cluster under a convenient name.
- Each context has three parameters:
  - cluster: the address or name of the cluster
  - namespace: the K8s namespace to use
  - user: the user to authenticate as
- By default, the kubectl command uses the parameters of the current context in the active kubeconfig file to reach the Kubernetes cluster being managed.
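As a concrete example, a context can be edited in place with kubectl config set-context; here we pin a default namespace onto the kind-c30 context (kube-system is just an example value):
$ sudo kubectl config set-context kind-c30 --namespace=kube-system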