# K3D in Podman

## Downloading the k3d binary

```
$ wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
$ k3d version
k3d version v5.7.1
k3s version v1.29.6-k3s1 (default)
```

## Setting up Podman on Alpine

```
$ sudo rc-update add cgroups
$ sudo rc-service cgroups start
$ grep cgroup /proc/filesystems
nodev   cgroup
nodev   cgroup2
$ sudo rc-service podman start
$ sudo rc-update add podman
```

* Create docker.sock as a symlink to the Podman socket

```
$ ls -l /run/podman/podman.sock
srw------- 1 root root 0 Jun  6 11:35 /run/podman/podman.sock
$ sudo ln -s /run/podman/podman.sock /var/run/docker.sock
$ ls -l /var/run/docker.sock
lrwxrwxrwx 1 root root 23 Jun  6 11:36 /var/run/docker.sock -> /run/podman/podman.sock
```

* Recreate the Podman socket symlink automatically at boot

```
$ sudo rc-update add local
$ sudo rc-service local start
$ sudo nano /etc/local.d/rc.local.start
[[ -h /var/run/docker.sock ]] || ln -s /run/podman/podman.sock /var/run/docker.sock
$ sudo chmod +x /etc/local.d/rc.local.start
```

```
$ rc-status
Runlevel: default
 sshd        [ started ]
 acpid       [ started ]
 cgroups     [ started ]
 chronyd     [ started ]
 crond       [ started ]
 podman      [ started 00:15:24 (0) ]
 local       [ started ]
Dynamic Runlevel: hotplugged
Dynamic Runlevel: needed/wanted
 sysfs       [ started ]
 fsck        [ started ]
 root        [ started ]
 localmount  [ started ]
```

## Creating a k3d 1m2w cluster (1 server, 2 agents)

* Install kubectl

```
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/
$ kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.6+k3s1
```

* Create the k3d cluster

```
$ sudo k3d cluster create c29 -s 1 -a 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-c29'
INFO[0000] Created image volume k3d-c29-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-c29-tools'
INFO[0001] Creating node 'k3d-c29-server-0'
INFO[0002] Creating node 'k3d-c29-agent-0'
INFO[0002] Creating node 'k3d-c29-agent-1'
INFO[0002] Creating LoadBalancer 'k3d-c29-serverlb'
…
INFO[0037] Cluster 'c29' created successfully!
INFO[0038] You can now use it like this:
kubectl cluster-info
```

* View the containers

```
$ sudo podman ps -a
CONTAINER ID  IMAGE                               COMMAND               CREATED         STATUS         PORTS                    NAMES
a779e1ac7675  docker.io/rancher/k3s:v1.29.6-k3s1  server --tls-san ...  27 seconds ago  Up 25 seconds                           k3d-c29-server-0
3dafbad75e7f  docker.io/rancher/k3s:v1.29.6-k3s1  agent                 27 seconds ago  Up 20 seconds                           k3d-c29-agent-0
b7ca2e9b7dc4  docker.io/rancher/k3s:v1.29.6-k3s1  agent                 26 seconds ago  Up 20 seconds                           k3d-c29-agent-1
83996a13fb4a  ghcr.io/k3d-io/k3d-proxy:5.7.1                            26 seconds ago  Up 16 seconds  0.0.0.0:35173->6443/tcp  k3d-c29-serverlb
```

```
$ sudo kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:34257
CoreDNS is running at https://0.0.0.0:34257/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:34257/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

* k3d-c29-serverlb receives kubectl requests from the client, forwards the traffic to the API server, and load-balances it

```
$ sudo podman exec k3d-c29-serverlb ps aux
PID   USER     TIME  COMMAND
    1 root      0:00 /run/podman-init -- /bin/sh -c nginx-proxy
    2 root      0:00 {nginx-proxy} /bin/sh /usr/bin/nginx-proxy
   27 root      0:00 confd -watch -backend file -file /etc/confd/values.yaml -log-level debug
   28 root      0:00 nginx: master process nginx -g daemon off;
   42 nginx     0:00 nginx: worker process
   43 nginx     0:00 nginx: worker process
   44 nginx     0:00 nginx: worker process
   45 nginx     0:00 nginx: worker process
   46 root      0:00 ps aux
```

```
$ sudo podman exec k3d-c29-serverlb cat /etc/confd/values.yaml
ports:
  6443.tcp:
    - k3d-c29-server-0
settings:
  workerConnections: 1024
```

* List all k3d clusters

```
$ sudo k3d cluster list
NAME   SERVERS   AGENTS   LOADBALANCER
c29    1/1       2/2      true
```

```
$ sudo k3d node list
NAME               ROLE           CLUSTER   STATUS
k3d-c29-agent-0    agent          c29       running
k3d-c29-agent-1    agent          c29       running
k3d-c29-server-0   server         c29       running
k3d-c29-serverlb   loadbalancer   c29       running
```

* Fetch the kubeconfig

```
$ mkdir .kube
$ sudo k3d kubeconfig get c29 > .kube/config
$ kubectl get no
NAME               STATUS   ROLES                  AGE   VERSION
k3d-c29-agent-0    Ready    <none>                 38m   v1.29.6+k3s1
k3d-c29-agent-1    Ready    <none>                 38m   v1.29.6+k3s1
k3d-c29-server-0   Ready    control-plane,master   38m   v1.29.6+k3s1
```

## Creating a highly available k3d 3m2w cluster (3 servers, 2 agents)

```
$ sudo k3d cluster create c29-ha -s 3 -a 2
```

* Copy the kubeconfig

```
$ sudo cp /root/.kube/config ~/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

* View the contexts

```
$ kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO           NAMESPACE
          k3d-c29      k3d-c29      admin@k3d-c29
*         k3d-c29-ha   k3d-c29-ha   admin@k3d-c29-ha
$ kubectl get no
NAME                  STATUS   ROLES                       AGE     VERSION
k3d-c29-ha-agent-0    Ready    <none>                      6m38s   v1.29.6+k3s1
k3d-c29-ha-agent-1    Ready    <none>                      6m38s   v1.29.6+k3s1
k3d-c29-ha-server-0   Ready    control-plane,etcd,master   7m13s   v1.29.6+k3s1
k3d-c29-ha-server-1   Ready    control-plane,etcd,master   6m57s   v1.29.6+k3s1
k3d-c29-ha-server-2   Ready    control-plane,etcd,master   6m43s   v1.29.6+k3s1
```

## Viewing all current clusters

```
$ sudo k3d cluster list
NAME     SERVERS   AGENTS   LOADBALANCER
c29      1/1       2/2      true
c29-ha   3/3       2/2      true
```

## Importing an image into the c29-ha cluster nodes

* Pull the image with podman

```
$ sudo podman pull nginx:latest
```

* Import the image onto every k3s node

```
$ sudo k3d image import docker.io/library/nginx:latest -c c29-ha
```

* Check the images on the workers

```
$ sudo podman exec k3d-c29-ha-agent-0 crictl images
IMAGE                              TAG      IMAGE ID        SIZE
docker.io/library/nginx            latest   fffffc90d343c   192MB
docker.io/rancher/klipper-lb       v0.4.7   edc812b8e25d0   4.78MB
docker.io/rancher/mirrored-pause   3.6      6270bb605e12e   301kB
$ sudo podman exec k3d-c29-ha-agent-1 crictl images
IMAGE                              TAG      IMAGE ID        SIZE
docker.io/library/nginx            latest   fffffc90d343c   192MB
docker.io/rancher/klipper-lb       v0.4.7   edc812b8e25d0   4.78MB
docker.io/rancher/mirrored-pause   3.6      6270bb605e12e   301kB
```

## Stopping the k3d c29 cluster

```
$ sudo k3d cluster stop c29
INFO[0000] Stopping cluster 'c29'
INFO[0013] Stopped cluster 'c29'
```

## Starting the k3d c29 cluster

```
$ sudo k3d cluster start c29
INFO[0000] Using the k3d-tools node to gather environment information
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-c29-tools'
......
```

## Deleting the k3d c29 cluster

```
$ sudo k3d cluster delete c29
INFO[0000] Deleting cluster 'c29'
INFO[0012] Deleting cluster network 'k3d-c29'
INFO[0012] Deleting 1 attached volumes...
INFO[0012] Removing cluster details from default kubeconfig...
INFO[0012] Removing standalone kubeconfig file (if there is one)...
INFO[0012] Successfully deleted cluster c29!
```

## References

https://k3d.io/v5.1.0/usage/configfile/
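The one-line `/etc/local.d/rc.local.start` script from the Podman setup section can be sketched and exercised without root. This is a minimal sketch: the paths are parameterized to a scratch directory for the demo (on the real system they are `/run/podman/podman.sock` and `/var/run/docker.sock`), and `[ -h ]` is the portable spelling of the `[[ -h ]]` guard used earlier.

```shell
#!/bin/sh
# Sketch of the boot-time symlink logic from /etc/local.d/rc.local.start.
# Run against a scratch directory so it works unprivileged; on the real
# system SOCK=/run/podman/podman.sock and LINK=/var/run/docker.sock.
DEMO=$(mktemp -d)
SOCK="$DEMO/podman.sock"
LINK="$DEMO/docker.sock"
touch "$SOCK"   # stand-in for the real Podman socket

# Create the link only when it does not already exist, mirroring the
# "[[ -h ... ]] ||" guard in portable sh syntax.
[ -h "$LINK" ] || ln -s "$SOCK" "$LINK"

ls -l "$LINK"
```

Because of the `|| ` guard, the script is idempotent: running it again on a system where the symlink already exists is a no-op.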
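The `-s`/`-a` flags used above can also live in a declarative config file passed via `k3d cluster create --config` (see the configfile reference). A hedged sketch follows; the `k3d.io/v1alpha5` schema version is an assumption based on k3d v5.x, so verify it against the configfile documentation for your release.

```shell
#!/bin/sh
# Write a k3d config roughly equivalent to
# "k3d cluster create c29-ha -s 3 -a 2".
# NOTE: apiVersion v1alpha5 is an assumption for k3d v5.x; check the
# configfile reference linked in this doc before relying on it.
cat > c29-ha.yaml <<'EOF'
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: c29-ha
servers: 3
agents: 2
EOF

# Then create the cluster from the file:
#   sudo k3d cluster create --config c29-ha.yaml
cat c29-ha.yaml
```

Keeping the topology in a file makes the 3m2w cluster reproducible instead of depending on remembered flags.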
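The per-agent `crictl images` checks in the image-import section can be looped instead of run by hand. A sketch assuming the `k3d-<cluster>-agent-<n>` node-name pattern seen in this doc; the `podman container exists` guard keeps it harmless on machines where the cluster is not running.

```shell
#!/bin/sh
# List the images on every c29-ha agent node, instead of running
# "podman exec ... crictl images" once per node by hand.
# Prefix podman with sudo when the cluster was created as root,
# as it was in this doc.
CLUSTER=c29-ha
for i in 0 1; do
  node="k3d-${CLUSTER}-agent-${i}"
  echo "== ${node} =="
  if command -v podman >/dev/null 2>&1 && podman container exists "$node" 2>/dev/null; then
    podman exec "$node" crictl images
  else
    echo "(node ${node} is not running here)"
  fi
done | tee agent-images.txt
```

After `k3d image import`, each node's section should list `docker.io/library/nginx` alongside the stock k3s images.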