# K8S Installation Steps
* Date: Oct 2023
* OS: Ubuntu 22.04
* Kubernetes: 1.26.4
* Calico: 3.26.3
* Nodes: 1 x control plane + 2 x workers
**1. Node preparation**
On the control plane node:
`sudo hostnamectl set-hostname k8s-control`
On the first worker node:
`sudo hostnamectl set-hostname k8s-worker1`
On the second worker node:
`sudo hostnamectl set-hostname k8s-worker2`
On all nodes, edit the hosts file so that every node can reach the others by hostname:
`sudo vi /etc/hosts`
On all nodes, add the following at the end of the file, substituting each node's actual private IP address:
```
<control plane node private IP> k8s-control
<worker node 1 private IP> k8s-worker1
<worker node 2 private IP> k8s-worker2
```
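As a non-interactive alternative to editing with vi, the same block can be appended with `tee -a`. The IPs below are placeholders for illustration only, and this sketch writes to a temp file so it can be tried without root; on the real nodes the target would be `/etc/hosts`:

```shell
# Placeholder private IPs -- substitute each node's real address.
# On a real node the pipeline would end in: ... | sudo tee -a /etc/hosts
cat > /tmp/hosts.demo <<'EOF'
10.0.0.10 k8s-control
10.0.0.11 k8s-worker1
10.0.0.12 k8s-worker2
EOF
# Each line should parse as "<private IP> <hostname>"
awk '{print $1, $2}' /tmp/hosts.demo
```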
Disable swap now and keep it disabled across reboots
```
sudo swapoff -a
sudo sed -i '/swap/s/^/#/' /etc/fstab
```
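The sed one-liner comments out every fstab line that mentions swap, so it stays off after a reboot. A dry run of the same command on a copy (with a hypothetical fstab fragment, no root needed) shows the effect:

```shell
# Hypothetical fstab fragment for the demo
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF
# Same command as above, pointed at the copy
sed -i '/swap/s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
# The swap line is now commented out; the root filesystem line is untouched.
# On the real host, verify with: swapon --show  (prints nothing when swap is off)
```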
Stop and disable the Ubuntu ufw firewall
`sudo systemctl disable --now ufw`
Load the kernel modules required by Kubernetes
```
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# load the modules immediately (the k8s.conf file reloads them on boot)
sudo modprobe overlay
sudo modprobe br_netfilter
```
Set the sysctl parameters that let iptables see bridged network traffic and enable IP forwarding
```
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# reload sysctl.d
sudo sysctl --system
```
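After `sudo sysctl --system`, all three values should read back as 1 (on the node, `sysctl -n <key>` confirms each one). A small sanity-check loop for the conf file, demonstrated here against a temp copy so it runs without root; on the node you would point `conf=` at `/etc/sysctl.d/k8s.conf`:

```shell
# Demo copy of the file written above; on a node use conf=/etc/sysctl.d/k8s.conf
conf=/tmp/k8s-sysctl.demo
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Report any required key that is missing or not set to 1
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward; do
  grep -q "^$key = 1$" "$conf" && echo "$key OK"
done
```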
**2. Containerd installation on all nodes**
* Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
* https://gist.github.com/kkbruce/c632e946c59f04ea8d7ce20f6f80b26d
```
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
# install containerd.io
sudo apt update
sudo apt install -y containerd.io
# generate default config
sudo containerd config default | sudo tee /etc/containerd/config.toml
# Configure the systemd cgroup driver -- important: it must match the kubelet's cgroup driver
# https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
```
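The `SystemdCgroup` flip is easy to get wrong silently, so it is worth seeing what the sed actually does. The same command, run on a minimal made-up fragment of the generated config.toml (no root needed):

```shell
# Minimal fragment of a generated config.toml (shortened for the demo)
cat > /tmp/containerd-config.demo <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
# Same sed as above, pointed at the copy
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /tmp/containerd-config.demo
grep 'SystemdCgroup' /tmp/containerd-config.demo
# On the real node, confirm afterwards with:
#   sudo grep SystemdCgroup /etc/containerd/config.toml
```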
**3. Install kubeadm, kubelet and kubectl on all nodes**
Make sure swap is disabled on all nodes before this step (see step 1).
Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
```
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
# Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:
# https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl
sudo apt-get update
sudo apt-get install -y kubelet=1.26.4-00 kubeadm=1.26.4-00 kubectl=1.26.4-00
sudo apt-mark hold kubelet kubeadm kubectl
```
**4. On the control plane node only, initialize the cluster and set up kubectl access.**
Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
```
sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.26.4
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Verify the cluster is working. It can take a while for the status to change from NotReady to Ready; the node will not become fully Ready until the Calico network add-on is installed in step 5.
```
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-control Ready control-plane 15m v1.26.4 172.31.22.65 <none> Ubuntu 22.04.3 LTS 6.2.0-1009-aws containerd://1.6.24
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
```
**5. Install Calico network add-on on control node**
Reference: https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart
Check the latest version and support matrix before installation
```
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/calico.yaml
```
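The manifest creates a `calico-node` DaemonSet (one pod per node) and a `calico-kube-controllers` Deployment in `kube-system`; the usual readiness check is `kubectl get pods -n kube-system` until every Calico pod reports Running. A sketch of filtering for not-yet-Running Calico pods, shown against a sample capture (pod names and hashes are made up) since the real command needs a live cluster:

```shell
# Sample 'kubectl get pods -n kube-system' capture (names/hashes are made up)
cat > /tmp/pods.demo <<'EOF'
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7ddc4f45bc-abcde   1/1     Running   0          2m
calico-node-x1y2z                          1/1     Running   0          2m
coredns-787d4945fb-aaaaa                   1/1     Running   0          20m
EOF
# Count Calico pods that are not yet Running (0 means the add-on is up)
awk 'NR > 1 && /^calico/ && $3 != "Running"' /tmp/pods.demo | wc -l
```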
**6. Join worker nodes to the cluster**
Get the join command; it is also printed at the end of `kubeadm init`.
`kubeadm token create --print-join-command`
Copy the join command from the control plane node. Run it on each worker node as root.
`sudo kubeadm join ...`
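The join command always has the same shape: API-server endpoint, a bootstrap token in the `[a-z0-9]{6}.[a-z0-9]{16}` format, and the CA certificate hash. The values below are placeholders (the real ones come from `kubeadm token create --print-join-command`), but the pattern check is handy when copying the command between terminals:

```shell
# Placeholder endpoint, token, and hash -- for illustration only
join_cmd='kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1111111111111111111111111111111111111111111111111111111111111111'
# Bootstrap tokens always match [a-z0-9]{6}.[a-z0-9]{16}
echo "$join_cmd" | grep -Eo -e '--token [a-z0-9]{6}\.[a-z0-9]{16}'
```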
On the control plane node, verify all nodes in your cluster are ready. Note that it may take a few moments for all of the nodes to enter the Ready state.
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-control Ready control-plane 21m v1.26.4 172.31.22.65 <none> Ubuntu 22.04.3 LTS 6.2.0-1009-aws containerd://1.6.24
k8s-worker1 Ready <none> 79s v1.26.4 172.31.21.187 <none> Ubuntu 22.04.3 LTS 6.2.0-1009-aws containerd://1.6.24
k8s-worker2 Ready <none> 47s v1.26.4 172.31.25.96 <none> Ubuntu 22.04.3 LTS 6.2.0-1009-aws containerd://1.6.2
```