:::info
**spec**:

| ip            | node name | cpu/ram/storage           | k8s version | role          |
|:-------------:|:---------:|:-------------------------:|:-----------:|:-------------:|
| 192.168.0.101 | node1     | i5-8xxx/16G/2TB HDD       | v1.31       | control plane |
| 192.168.0.100 | x470      | r5-3600/80G/512GB m2/3060 | v1.30       | worker        |
| 192.168.0.102 | node2     | i5-8xxx/16G/512GB SSD     | v1.31       | worker        |

os: ubuntu-server 22.04 LTS
:::

## pre-install

This cluster is set up with kubectl + kubeadm, which is the more convenient approach.

### install kubectl (control plane only)

* kubectl is the management tool for the whole cluster.
* Install it by following the [official k8s docs](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management).

### install kubeadm (every node)

* kubeadm creates the cluster and joins nodes to an existing cluster; it is the node bootstrapping tool.
* [kubeadm install guide](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)

:::warning
k8s cannot run with swap enabled, so check this before installing.
Temporary: `$ sudo swapoff -a`
Permanent: [How can I turn off swap permanently?](https://askubuntu.com/questions/440326/how-can-i-turn-off-swap-permanently)
:::

### configure the CRI (containerd)

The CRI (Container Runtime Interface) is the bridge between the kubelet and the container runtime.

Run `containerd config default > ./config.toml` to generate containerd's default config. Keep it in the current directory for now; it still needs edits.

If the system uses systemd (it normally does), add the option described in [containerd systemd plugin](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd):

```toml=
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```

:::warning
`config.toml` also contains several `systemd_cgroup` options; leave those as `false`.
:::

Reference: https://github.com/yangwei-ewe/k8s_yaml_repo/blob/main/containerd-config.toml

Once the edits are done, copy `./config.toml` to `/etc/containerd/config.toml` and restart containerd.

## control plane setup

Dump the defaults with `kubeadm config print init-defaults > ./init.yaml`.

:::danger
Set `advertiseAddress` to the control plane's IP:

```yaml=
localAPIEndpoint:
  advertiseAddress: <cp ip>
  bindPort: 6443
```

flannel also requires the pod subnet to be specified in `init.yaml`:

```yaml=
kubernetesVersion: 1.31.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: "10.244.0.0/16"
```
:::

Then run `kubeadm init --config=./init.yaml` to apply the config file you just wrote.

:::warning
On arm machines you need to enable ip_forward with the following commands so kubeadm can run properly:

```bash=
sudo sysctl -w net.ipv4.ip_forward=1
# or:
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
```
:::

When it finishes, it should print something like
`kubeadm join 192.168.0.XXX:6443 --token XXXXXXXXXXXXXXX --discovery-token-ca-cert-hash sha256:XXXXXXXXXXXXXXXX`
which means the init succeeded. Copy the whole command; you will need it when adding nodes later.

:::warning
If the process gets stuck on the health check, it is usually a containerd misconfiguration that has crashed containerd.
:::

### join nodes to the cluster

Copy the command above (including the token) and run it on every node.
Seeing `This node has joined the cluster: ...` means the node was added successfully.
At this point `kubectl get nodes` on the control plane should show the newly joined node.

:::success
Remember to update the CA after adding nodes.
See: https://github.com/yangwei-ewe/k8s_yaml_repo/blob/main/update_ca.sh
:::

## flannel setup

:::info
Required on every node.
:::

### 1. enable `br_netfilter`

Run `sudo modprobe br_netfilter`.
To load it automatically at boot:

```bash
sudo bash -c 'echo br_netfilter >> /etc/modules-load.d/k8s.conf'
```

### 2. place `10-flannel.conflist` in `/etc/cni/net.d`

Reference:

```json=
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```

Then, on the control plane, run `kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml`

:::info
If you need to customize it, download the manifest, edit it, and then apply it.
See: https://gist.github.com/yangwei-ewe/e29617241662c1e99981fde4bc341a1a
:::
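Before moving on to the GPU node, it is worth checking that the CNI actually came up. A minimal sanity check, assuming the upstream manifest (which deploys flannel into the `kube-flannel` namespace; older releases used `kube-system`):

```bash
# flannel pods should reach Running on every node
kubectl get pods -n kube-flannel -o wide

# once the CNI is working, all nodes should report Ready
kubectl get nodes -o wide
```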
## gpu node (nvidia k8s-device-plugin)

1. Install the [NVIDIA Container Toolkit (nvidia-ctk)](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#with-apt-ubuntu-debian).
2. Apply the [custom nvidia k8s-device-plugin](https://github.com/ICANLab-THU/k8s_yaml_repo/blob/main/nvidia-device-plugin.yaml) together with `rtc.yaml`:

    ```yaml=
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: nvidia
    handler: nvidia
    ```

    Or use [our version](https://github.com/ICANLab-THU/k8s_yaml_repo/blob/main/rtc.yaml).
3. Label every gpu node with `nvidia.com/gpu.present=true`; otherwise the `k8s-device-plugin` DaemonSet will not be scheduled onto that node.

:::danger
The `default_runtime_name` approach is deprecated, because it makes the GPU visible to every pod on the gpu node and pollutes the resource accounting.
:::spoiler
> Required on every gpu node.

Set `plugins."io.containerd.grpc.v1.cri".containerd.default_runtime_name` in `/etc/containerd/config.toml` to `nvidia`:

```toml=
...
[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "nvidia"
...
```
:::
:::

## k8s useful command

1. Recover the `join` command:

    ```shell
    kubeadm token create --print-join-command
    ```

2. Add a role to a node (the default is `<none>`):

    ```shell
    kubectl label node/<node_name> node-role.kubernetes.io/worker=worker
    ```

3. Expose the ingress on port 80:

    On the control plane, add `--service-node-port-range=1-65535` to `spec.containers.command` in `/etc/kubernetes/manifests/kube-apiserver.yaml`.

    Set the Deployment to:

    ```yaml
    securityContext:
      runAsGroup: 0
      runAsUser: 101
    ```

    and

    ```yaml
    spec:
      hostNetwork: true
    ```

    The Service can then be set to:

    ```yaml=
    ports:
    - appProtocol: http
      name: http
      port: 80
      protocol: TCP
      targetPort: http
      nodePort: 80
    - appProtocol: https
      name: https
      port: 443
      protocol: TCP
      targetPort: https
      nodePort: 443
    ```

    [See the full manifest](https://github.com/ICANLab-THU/k8s_yaml_repo/blob/main/igc.yaml)

4. Delete every image whose tag is `<none>`:

    ```bash!
    docker images -a | grep '<none>' | awk '{print $3}' | xargs docker rmi -f
    docker image prune -f    # extra: prune dangling images
    docker buildx prune -f   # extra: prune build cache
    docker volume prune -f   # extra: prune unused volumes
    docker system prune -f   # extra: prune all unused data
    crictl rmi --prune
    ```
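As a final sanity check for the gpu node section above, the commands below are a sketch: `<gpu_node_name>` is a placeholder, and the exact DaemonSet name depends on which device-plugin manifest was applied. They confirm that the plugin has registered the `nvidia.com/gpu` resource:

```bash
# the labeled gpu node should now advertise nvidia.com/gpu under Allocatable
kubectl describe node <gpu_node_name> | grep -i 'nvidia.com/gpu'

# the device-plugin DaemonSet should have one pod per labeled gpu node
kubectl get daemonset -A | grep -i nvidia
```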