[TOC]
# Installation
## Tools Needed
- A K8s cluster is built with three main tools:
- **kubeadm**: similar to Docker Swarm's init; bootstraps the environment by initializing the master so that the other nodes can join the cluster.
- **kubelet**: the k8s agent that runs on every node; it carries out the work the master assigns, based on PodSpecs (the .yaml/.json files that configure Pods).
- **kubectl**: the k8s command-line tool.
## Steps
> Host OS: Ubuntu 22.04
> Container runtime: containerd
### 1. Set Hostnames
```bash=
sudo vi /etc/hosts
```
```
10.22.23.188 k8s.master
10.22.23.187 k8s.node1
10.22.23.109 k8s.node2
```
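- These entries are needed on every node so that `k8s.master` resolves during `kubeadm join`; to confirm resolution (the hostnames and IPs above are examples, adjust to your environment):
```bash=
# Should print the IP mapped in /etc/hosts
getent hosts k8s.master
ping -c 1 k8s.node1
```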
### 2. Disable Swap
- [Why disable swap on kubernetes](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)
- All nodes:
```bash=
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
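- To verify that swap is off (the Swap row should read 0B and `swapon` should print nothing):
```bash=
free -h
swapon --show
```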
### 3. Add kernel Parameters
```bash=
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
```
```bash=
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
```
- Apply the config
```bash=
sudo sysctl --system
```
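- To verify the modules are loaded and the parameters took effect:
```bash=
# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'
# All three values should be 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```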
### 4. Install Containerd Runtime
:::info
[**Container runtime**](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime)
>**Docker Engine does not implement the CRI which is a requirement for a container runtime to work with Kubernetes.** For that reason, an additional service **cri-dockerd** has to be installed. (cri-dockerd is a project based on the legacy built-in Docker Engine support that was removed from the kubelet in version 1.24.)
:::
```bash=
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y containerd.io
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
```
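- To verify the runtime is up and the cgroup driver was switched:
```bash=
systemctl is-active containerd                  # should print "active"
grep SystemdCgroup /etc/containerd/config.toml  # should show SystemdCgroup = true
```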
### 5. Add Apt Repository for Kubernetes
```bash=
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/kubernetes-xenial.gpg
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
```
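- Note: the Google-hosted `apt.kubernetes.io` repository has been deprecated and frozen. If the commands above fail, the community-owned `pkgs.k8s.io` repository is its replacement; a sketch pinning the v1.28 minor release (pick the minor version you need):
```bash=
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
```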
### 6. Install Kubectl & Kubeadm & Kubelet
```bash=
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
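- To verify the binaries and confirm the packages are pinned:
```bash=
kubeadm version
kubectl version --client
apt-mark showhold   # should list kubelet, kubeadm, kubectl
```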
### 7. Initialize Cluster (on master only)
```bash=
sudo kubeadm init --control-plane-endpoint=k8s.master
```
:::spoiler Output
```shell=
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s.master:6443 --token 39uagv.x8l6fj79vhahuftt \
--discovery-token-ca-cert-hash sha256:fb400df848c45587096b63fbb538a4defcb7da8ec069301b568e50186e71a6c8 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s.master:6443 --token 39uagv.x8l6fj79vhahuftt \
--discovery-token-ca-cert-hash sha256:fb400df848c45587096b63fbb538a4defcb7da8ec069301b568e50186e71a6c8
```
:::
```bash=
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
- Now check the info of the cluster:
```bash=
kubectl cluster-info
kubectl get nodes
```
### 8. Join the Cluster (nodes only)
- Check the output of `kubeadm init` above and run the copied join command on each node:
```bash=
sudo kubeadm join k8s.master:6443 --token 39uagv.x8l6fj79vhahuftt \
--discovery-token-ca-cert-hash sha256:fb400df848c45587096b63fbb538a4defcb7da8ec069301b568e50186e71a6c8
```
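- The bootstrap token expires after 24 hours by default; if it has expired, generate a fresh join command on the master:
```bash=
kubeadm token create --print-join-command
```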
:::warning
Running `kubectl get nodes` now shows the nodes' status as "**NotReady**".
To bring them to Ready, you have to install a network plugin on the master.
:::
### 9. Install Calico Network Plugin (on master only)
> [Calico](https://github.com/projectcalico/calico)
- This network plugin is required to enable communication between pods in the cluster.
```bash=
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
```
- Verify the status of pods in kube-system namespace:
```bash=
kubectl get pods -n kube-system
```
> Once the Calico-related pods are Running, the nodes' status changes to **Ready**.
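- You can watch the transition instead of polling manually:
```bash=
# -w streams updates until you press Ctrl-C
kubectl get nodes -w
```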

### Re-initialize Cluster
- If k8s throws errors after the master reboots, reset kubeadm and then initialize it again:
```bash=
sudo kubeadm reset
```
- Some configuration files have to be removed manually after the reset; kubeadm also prints a reminder:
```
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
```
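- A minimal cleanup sketch based on the reminder above (assumes plain iptables rather than IPVS; review before running):
```bash=
sudo rm -rf /etc/cni/net.d                   # CNI configuration
sudo iptables -F && sudo iptables -t nat -F  # flush filter and NAT rules
rm -f $HOME/.kube/config                     # stale kubeconfig
```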
- Once everything that should be removed is gone, run `init` again (with the same flags as in step 7) and repeat the follow-up actions:
```bash=
sudo systemctl enable kubelet
sudo kubeadm init --control-plane-endpoint=k8s.master
```
- On the nodes, also check whether the kubelet service is active; if it is not, re-enable it as well:
```bash=
sudo systemctl status kubelet
sudo systemctl enable kubelet
```
## Hands-on
### Create Pods
- Listing pods
```bash=
kubectl get pods
```
- Create a single pod.
```bash=
kubectl run nginx --image=nginx --restart=Never
```
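- The same pod can be declared in YAML (a minimal sketch, equivalent to the `kubectl run` command above):
```bash=
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  restartPolicy: Never   # matches --restart=Never
  containers:
  - name: nginx
    image: nginx
EOF
```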
- View the configuration of the pod.
```bash=
kubectl describe pod nginx
```
- Delete the pod.
```bash=
kubectl delete pod nginx
```
### Create Deployment
- Create a Deployment and all the components it needs without having to deal with JSON or YAML:
```bash=
kubectl create deployment my-apps --image=nginx --port=8080
```
:::warning
- `--generator` was deprecated in version 1.17 and has since been removed, so the following command from older tutorials no longer works:
```bash=
kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1
```
> `--image`: the container image to run.
> `--port`: the port the container exposes.
> `--generator`: creates a **ReplicationController** instead of a **Deployment**.
:::
- Listing Deployment
```bash=
kubectl get deployment
```
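- The Deployment can then be scaled without touching any YAML, e.g. to 3 replicas:
```bash=
kubectl scale deployment my-apps --replicas=3
kubectl get deployment my-apps   # READY should eventually show 3/3
```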
### Create Services
- Expose the Deployment you created earlier by creating a **Service**:
```bash=
kubectl expose deploy my-apps --type=LoadBalancer --name my-svc
```
> `deploy`: deployment.
> `--type`: service type.
- Listing Services
```bash=
kubectl get services
```
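- On a bare-metal cluster with no cloud load balancer, the `EXTERNAL-IP` of a LoadBalancer Service stays `<pending>`; the Service is still reachable through the node port shown in the PORT(S) column (the `3xxxx` value is assigned by the cluster):
```bash=
kubectl get services my-svc
# Substitute the node port printed above, e.g. 8080:31234/TCP -> 31234
curl http://k8s.node1:<node-port>
```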
## Clarifying Concepts
### Master Node & Worker Node
- Every worker node is schedulable capacity in the K8s cluster, while the master node's role is to direct the worker nodes.
- Once the cluster is up and we hand work to Pod(s), we don't care which worker node those Pods actually land on; the only thing we need to watch is the Pods themselves.
- Suppose we deploy a Deployment with 3 replicas (Pods): those 3 Pods may not all be created on the same worker node, but that doesn't matter as long as the Pods are running correctly. And if a worker node fails, K8s automatically moves the Pods running on it to other available worker nodes.
:::spoiler A not-so-serious analogy
> Suppose 588 wants to host a cherry-blossom tea party, so a cluster is built to organize the event.
The event needs manpower, and the only resource available is the university's departments, so each department can be treated as a worker node. So far only the CS, EE, and Civil Engineering departments are willing to help 588, so this cluster has three worker nodes in total.
Where does the manpower come from? Students, professors, or administrative staff? (Only one kind at a time, or they will quarrel.) Let's go with students, so the student council is asked to recruit the people we need. Each person is then a container, and the student council is the container runtime. (We could staff the event with administrative personnel instead, but then we would need a different container runtime, because the student council has no authority over them.)
Next, a few task groups are formed: venue, catering, and performance. Treat each group as a Pod, so we need to create 3 Pods.
I (the master node) set the following requirements:
- the venue group needs 2 people
- the catering group needs 1 person
- the performance group needs 3 people
From here K8s does the scheduling on its own; there is no need to care which department (worker node) the group members come from, as long as they get the job done.
:::
### What is Pod ?
:::info
A **Pod** is similar to a set of containers with shared namespaces and shared filesystem volumes.
**Kubernetes manages Pods rather than managing the containers directly.**
:::
- A Pod is the smallest unit that K8s can manage and schedule, so when we run a cluster through K8s, what we manage is not the containers themselves but the Pods.
- Pods are generally not created directly and are created using workload resources such as **Deployment**. If your Pods need to track state, consider the **StatefulSet** resource.
:::info
You can use workload resources to create and manage multiple Pods for you. A controller for the resource handles **replication** and **rollout** and **automatic healing** in case of Pod failure.
:::
#### Single container or multiple containers per Pod?
- Single container
- **The "one-container-per-Pod" model is the most common Kubernetes use case.**
- Multiple containers
- The Pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit.
- Grouping multiple co-located and co-managed containers in a single Pod is **a relatively advanced use case**.
- See more about [How Pods manage multiple containers](https://kubernetes.io/docs/concepts/workloads/pods/#how-pods-manage-multiple-containers).
:::info
The hands-on exercises in this note all follow the **one-container-per-Pod** model.
:::
### What is Deployment ?
:::info
A **Deployment** provides declarative updates for Pods and ReplicaSets.
:::
- When you deploy a Deployment, you first state your requirements for the Pod(s) to run (through specific parameters, usually written in YAML); the Deployment then creates the Pod(s) according to that description and keeps monitoring whether they still match what you asked for.
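- A minimal sketch of such a YAML description (a hypothetical `my-apps` Deployment with 3 nginx replicas):
```bash=
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-apps
spec:
  replicas: 3                # desired state: three Pods
  selector:
    matchLabels:
      app: my-apps
  template:                  # the Pod template the Deployment stamps out
    metadata:
      labels:
        app: my-apps
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
```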
### What is Service ?
:::info
In Kubernetes, a **Service** is a method for exposing a network application that is running as one or more Pods in your cluster.
:::
- No more detail here; see the Service chapter.