# Building Vanilla Kubernetes from Scratch

## Specs

| Hostname  | IP Address     | CPU    | Memory | Disk  | OS Version      |
|-----------|----------------|--------|--------|-------|-----------------|
| k8master1 | 192.168.61.100 | 2 core | 4 GB   | 50 GB | Rocky Linux 9.0 |
| k8worker1 | 192.168.61.103 | 2 core | 4 GB   | 50 GB | Rocky Linux 9.0 |
| k8worker2 | 192.168.61.104 | 2 core | 4 GB   | 50 GB | Rocky Linux 9.0 |

K8S version: v1.31.2

## Preface

Both the master and worker nodes can follow the same preparation steps below. Workers do not need kubectl installed.

# Preparation

## Install the OS on each node

1. Skipped here; spin up the VM in VMware and the installer should get you through.
2. Remember to tick the container option in the package selection (though I couldn't find this setting again afterwards).
3. Recommended: once master1 has all packages installed and every setting done, clone two VMs from it to use as workers (just change the network settings and hostname and they're good to go).

### Master1: install packages

```bash=
sudo dnf update -y
sudo dnf install -y net-tools curl wget vim nano;
sudo dnf install -y zip unzip tar gzip bzip2 xz;
sudo dnf install -y lsof tree rsync traceroute bash-completion epel-release;

# To investigate later
#sudo dnf install epel-release -y
```

Set the hostname, IP, and DNS:

```bash=
#k8master1
hostnamectl set-hostname k8master1;
nmcli con modify ens160 ipv4.method manual ipv4.addresses 192.168.61.100/24 ipv4.gateway 192.168.61.2 ipv4.dns 8.8.8.8;
nmcli con up ens160;   # re-activate the connection so the new settings take effect

echo -e "192.168.61.100 k8master1\n192.168.61.103 k8worker1\n192.168.61.104 k8worker2" | sudo tee -a /etc/hosts;

ping -c 3 google.com   # quick connectivity check
```

### Worker1

```bash=
#k8worker1
hostnamectl set-hostname k8worker1;
nmcli con modify ens160 ipv4.method manual ipv4.addresses 192.168.61.103/24 ipv4.gateway 192.168.61.2 ipv4.dns 8.8.8.8;
nmcli con up ens160;

echo -e "192.168.61.100 k8master1\n192.168.61.103 k8worker1\n192.168.61.104 k8worker2" | sudo tee -a /etc/hosts;

ping -c 3 google.com   # quick connectivity check
```

### Worker2

```bash=
#k8worker2
hostnamectl set-hostname k8worker2;
nmcli con modify ens160 ipv4.method manual ipv4.addresses 192.168.61.104/24 ipv4.gateway 192.168.61.2 ipv4.dns 8.8.8.8;
nmcli con up ens160;

echo -e "192.168.61.100 k8master1\n192.168.61.103 k8worker1\n192.168.61.104 k8worker2" | sudo tee -a /etc/hosts;

ping -c 3 google.com   # quick connectivity check
```

## Disable SELinux (I left it enabled and everything still ran)

```bash=
sudo vim /etc/selinux/config
...
...
#SELINUX=enforcing   # comment out this line
SELINUX=disabled     # add this line

sudo reboot
sestatus
```
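Rather than disabling SELinux outright, the official kubeadm install docs set it to permissive mode, which also avoids hand-editing the file. A minimal non-interactive sketch, assuming the stock `SELINUX=enforcing` line is present in `/etc/selinux/config`:

```bash=
# Switch to permissive immediately (no reboot required)
sudo setenforce 0

# Persist the change across reboots
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Should now report "Permissive"
getenforce
```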
## Swap configuration (disable swap)

```bash=
sudo vim /etc/fstab
...
...
# Comment out the swap entry
#/dev/mapper/rl-swap     none    swap    defaults    0 0

# Turn off any active swap right away (commenting fstab alone only affects the next boot)
sudo swapoff -a

sudo vim /etc/sysctl.conf
# Add this line
vm.swappiness=0

# Apply sysctl params without reboot
sudo sysctl --system

sudo reboot

free -h   # Swap should be 0B
               total        used        free      shared  buff/cache   available
Mem:           3.6Gi       280Mi       3.1Gi       8.0Mi       210Mi       3.1Gi
Swap:             0B          0B          0B
```

## Load the br_netfilter kernel module

```bash=
# Load the br_netfilter kernel module so that bridged traffic can be filtered by iptables
sudo modprobe br_netfilter

# Load it automatically at boot
cat <<EOF | sudo tee /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF

# Confirm the module is loaded
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
```

## Enable IP forwarding

```bash=
# Enable IP forwarding so the system can forward IP packets from one interface to another
sudo sysctl -w net.ipv4.ip_forward=1

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

# Verify
sysctl net.ipv4.ip_forward
```

## Installing a container runtime (use CRI-O)

### Define the Kubernetes version and used CRI-O stream

```bash=
KUBERNETES_VERSION=v1.31;
CRIO_VERSION=v1.31;
```

### Add the Kubernetes repository

```bash=
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/repodata/repomd.xml.key
EOF

ls -al /etc/yum.repos.d/kubernetes.repo
```

### Add the CRI-O repository

```bash=
cat <<EOF | sudo tee /etc/yum.repos.d/cri-o.repo
[cri-o]
name=CRI-O
baseurl=https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/rpm/repodata/repomd.xml.key
EOF

ls -al /etc/yum.repos.d/cri-o.repo
```

### Install package dependencies from the official repositories

```bash=
sudo dnf install -y container-selinux
```

### Install the packages

```bash=
sudo dnf install -y cri-o kubelet kubeadm kubectl
```

### Start CRI-O

```bash=
sudo systemctl enable crio.service --now
systemctl status crio.service
```

## Confirm CRI-O's cgroup driver setting

```bash=
sudo vim /etc/crio/crio.conf
[crio.runtime]
conmon_cgroup = "pod"        # for cgroupfs; can be omitted when using systemd
cgroup_manager = "systemd"   # set to systemd (or cgroupfs)

sudo systemctl restart crio.service
```

## Enable kubelet at boot

```bash=
sudo systemctl enable kubelet --now
```

## Disable firewalld

```bash=
sudo systemctl disable firewalld --now
```
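If shutting the firewall off entirely feels too blunt, an alternative is to keep firewalld running and open only the ports Kubernetes uses. A sketch based on the port list in the upstream Kubernetes docs; note that Canal's flannel VXLAN backend typically also needs UDP 8472 between nodes, so verify pod-to-pod traffic before settling on this:

```bash=
# On the control-plane node
sudo firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
sudo firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
sudo firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
sudo firewall-cmd --reload

# On the worker nodes
sudo firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
sudo firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
sudo firewall-cmd --reload
```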
# Build the cluster

## Create the initial kubeadm config file

```bash=
kubeadm config print init-defaults > init-config.yaml
vim init-config.yaml
```

```yaml=
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.61.100   # your master node or haproxy IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock   # CRI-O socket path
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: k8master1   # hostname
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes   # can be changed
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.31.2   # change to the version you want
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/24
  podSubnet: 10.244.0.0/16   # add this
proxy: {}
scheduler: {}
---
# Configure the kubelet cgroup driver
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # match CRI-O's cgroup driver
```

## Install the cluster

```bash=
sudo kubeadm init --config init-config.yaml

# Grab .kube/config
mkdir -p $HOME/.kube;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config;
sudo chown $(id -u):$(id -g) $HOME/.kube/config;
```

## Remove the taint from the master node (worth considering if resources are tight)

```bash=
kubectl get node
kubectl taint node k8master1 node-role.kubernetes.io/control-plane:NoSchedule-
node/k8master1 untainted
```

## Install the Canal network add-on

https://docs.tigera.io/calico/latest/getting-started/kubernetes/flannel/install-for-flannel

```bash=
curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/canal.yaml -O
kubectl apply -f canal.yaml

ls -al /etc/cni/net.d/
total 16
drwxr-xr-x 2 root root 4096 Jun 22 20:02 .
drwxr-xr-x 4 root root 4096 Dec 12  2021 ..
-rw-r--r-- 1 root root  686 Jun 22 20:02 10-canal.conflist
-rw------- 1 root root 2680 Jun 22 20:02 calico-kubeconfig

sudo reboot
```

## Inspect the Kubernetes system

```bash=
kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
k8master1   Ready    control-plane   90m   v1.31.2

kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-d4dc4cc65-4w584   1/1     Running   0          8m54s
canal-z5dvc                               2/2     Running   0          8m54s
coredns-7c65d6cfc9-hq4zc                  1/1     Running   0          90m
coredns-7c65d6cfc9-sbvmx                  1/1     Running   0          90m
etcd-k8master1                            1/1     Running   5          90m
kube-apiserver-k8master1                  1/1     Running   0          90m
kube-controller-manager-k8master1         1/1     Running   4          90m
kube-proxy-w845l                          1/1     Running   0          90m
kube-scheduler-k8master1                  1/1     Running   5          90m

#sudo reboot
```

## Set up shell auto-completion

```bash=
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc
```

```bash=
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc
```

## Join the worker nodes

```bash=
# On master1
export JOIN=$(echo " sudo `kubeadm token create --print-join-command 2>/dev/null`")

ssh k8worker1 "$JOIN";
ssh k8worker2 "$JOIN";

ssh k8worker1 sudo reboot;
ssh k8worker2 sudo reboot;
```

## Set node roles

```bash=
kubectl label node k8worker1 node-role.kubernetes.io/worker=;
kubectl label node k8worker2 node-role.kubernetes.io/worker=;

[eaadm@k8master1 ~]$ kubectl get node
NAME        STATUS   ROLES           AGE   VERSION
k8master1   Ready    control-plane   23h   v1.31.2
k8worker1   Ready    worker          18m   v1.31.2
k8worker2   Ready    worker          20s   v1.31.2
```

# Tear down Kubernetes (a clean wipe)

```bash=
# If removing a worker, delete the node object from the master first
kubectl delete node k8workerx

sudo kubeadm reset

sudo rm -rf /etc/cni/net.d
sudo rm -rf /etc/cni/
sudo rm -rf /var/lib/cni/
sudo rm -rf /var/lib/kubelet/*
sudo rm -rf ~/.kube
```
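Before actually wiping anything, a quick smoke test is a handy way to confirm the cluster works end to end (scheduling, pod networking, NodePort exposure). A minimal sketch; the `nginx-test` name and the worker IP passed to curl are just examples:

```bash=
# Schedule two nginx pods and expose them via a NodePort service
kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl expose deployment nginx-test --port=80 --type=NodePort

# Pods should land on the workers and reach Running
kubectl get pods -o wide

# Hit the service through any node IP on the assigned NodePort
NODEPORT=$(kubectl get svc nginx-test -o jsonpath='{.spec.ports[0].nodePort}')
curl -s http://192.168.61.103:$NODEPORT | head -n 4

# Clean up
kubectl delete svc,deployment nginx-test
```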