## LAB01: Kubernetes Installation & Configuration
Demo: Prepare VMs
> Start 3 Ubuntu VMs, each with 2 CPUs, 4 GB RAM, and 20-30 GB of disk.
1. Set the hostname on each VM and map all three names in /etc/hosts
- demo-k8s-m1
- demo-k8s-w1
- demo-k8s-w2
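The hostname and /etc/hosts steps can be sketched as follows. The IP addresses below are placeholders; substitute your VMs' real addresses. The entries are staged in a scratch file so they can be reviewed before touching `/etc/hosts`:

```shell
# Build the /etc/hosts entries in a scratch file first (placeholder IPs;
# replace them with your VMs' real addresses).
hosts_snippet=$(mktemp)
cat > "$hosts_snippet" <<'EOF'
10.0.0.11 demo-k8s-m1
10.0.0.12 demo-k8s-w1
10.0.0.13 demo-k8s-w2
EOF
cat "$hosts_snippet"
# Then, on every node, append the entries and set that node's own name:
#   sudo tee -a /etc/hosts < "$hosts_snippet"
#   sudo hostnamectl set-hostname demo-k8s-m1   # use the matching name per node
```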
> Full Install Steps
```bash
sudo swapoff -a             # kubelet requires swap to be disabled
sudo modprobe overlay       # overlay filesystem module for containerd
sudo modprobe br_netfilter  # lets iptables see bridged traffic
```
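Note that `swapoff -a` only lasts until the next reboot; to keep swap disabled permanently, comment out any swap entries in `/etc/fstab`. A minimal sketch, demonstrated on a synthetic copy so nothing is modified until you run the real command:

```shell
# Demonstrate the edit on a synthetic fstab so the real file is untouched.
printf '%s\n' 'UUID=abcd-1234 / ext4 defaults 0 1' \
              '/swap.img none swap sw 0 0' > /tmp/fstab.demo
# Comment out every line containing a swap entry:
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo   # the swap line now starts with '#'
# Real run (after verifying the preview):
#   sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```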
## Check that port 6443 is open in the firewall
> "Connection refused" in the output means no firewall is blocking this port
> A timeout instead means a firewall is blocking it
```bash
nc 127.0.0.1 6443 -v
```
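The same reachability check can be scripted without `nc`, using bash's built-in `/dev/tcp` redirection (a bash-only feature; `timeout` comes from coreutils):

```shell
# Returns "open" if something is listening, otherwise "closed or filtered".
check_port() {  # usage: check_port HOST PORT
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed or filtered"
  fi
}
check_port 127.0.0.1 6443   # "open" once kube-apiserver is running
```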
```bash
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
```
```bash
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
```
```bash
sudo sysctl --system
```
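To confirm the settings are active, read them back (read-only, no root required; the `net.bridge.*` keys only exist once `br_netfilter` is loaded):

```shell
# Expect "1" for each once the sysctl config has been applied.
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "br_netfilter not loaded yet"
```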
> Add Docker's official GPG key and apt repository (containerd is distributed from Docker's repo):
```bash
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gpg
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```
> Install containerd
```bash
sudo apt-get install containerd.io
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
```
> Edit the config so it contains the following (this switches runc to the systemd cgroup driver, matching the kubelet):
```bash
sudo nano /etc/containerd/config.toml
```
```toml
[plugins."io.containerd.grpc.v1.cri".containerd]
snapshotter = "overlayfs"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
```
```bash
sudo systemctl restart containerd
sudo systemctl enable containerd
systemctl status containerd
```
```bash
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo sh -c "echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf"
```
```bash
sudo sysctl -p
```
> Install Kubernetes Binaries - Kubeadm, kubelet, kubectl
```bash
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update  # run after updating the sources
sudo apt-get install -y curl gnupg  # containerd was already installed above as containerd.io
sudo systemctl enable --now containerd
sudo apt-get install -y kubelet kubeadm kubectl  # installs all three required items
sudo apt-mark hold kubelet kubeadm kubectl
```
> Start the master node. The `--pod-network-cidr` value depends on the CNI you deploy; Calico is the most common choice.
> Initialize the cluster
```bash
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/containerd/containerd.sock --v=5
```
or, accepting the defaults:
```bash
sudo kubeadm init
```
This command prints a node join command like the one below; save it for joining the worker nodes later:
```bash
kubeadm join x.x.x.x:6443 --token <api server access token> \
--discovery-token-ca-cert-hash <ca cert hash>
```
> After control-plane initialization, set up kubectl access to the cluster.
```bash
# As a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# If running as root:
export KUBECONFIG=/etc/kubernetes/admin.conf
```
> Apply worker role labels [optional]
```bash
kubectl label node demo-k8s-w1 node-role.kubernetes.io/worker=worker
kubectl label node demo-k8s-w2 node-role.kubernetes.io/worker=worker
```
Until pod networking is installed, the nodes will show a `NotReady` state:
```bash
kubectl get nodes
```
To become fully ready, Kubernetes needs pod networking. We use the Calico CNI to provide it.
> Apply the Calico manifest
```bash
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart
> Copy the discovery-token join information; the nodes communicate over their local network.
> Worker nodes need the same installation steps, but they don't have to have kubectl (though it is convenient to be able to run it from any node).
> To join the worker nodes, run the join command printed during cluster initialization
e.g.
```bash
sudo kubeadm join 172.31.40.47:6443 --token o6h30c.85ndczvth5veigz4 \
--discovery-token-ca-cert-hash sha256:6cc7d517b8c9979e049e455236eae0650af54d8859318cc6de052371caf6c403
```
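If the join command was lost, it can be regenerated on the control plane with `kubeadm token create --print-join-command`. The `--discovery-token-ca-cert-hash` value is simply the SHA-256 digest of the cluster CA's public key; the pipeline below shows the derivation against a throwaway self-signed certificate (on a real control plane, run the same pipeline on `/etc/kubernetes/pki/ca.crt` instead):

```shell
# Generate a throwaway CA cert purely to demonstrate the hash derivation.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null
# Extract the public key, DER-encode it, and take its SHA-256 digest:
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
# Real cluster: point the pipeline at /etc/kubernetes/pki/ca.crt
```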
> Check the control-plane pods and look for issues; remember this may take some time (up to ~10 minutes).
> Nodes will be added as the configuration takes effect.
```bash
kubectl get pods -n kube-system
```
> Check logs for issues.
```bash
kubectl logs -n kube-system kube-controller-manager-<hostname>
```
> The master node is ready for worker nodes to join when the following command shows it as `Ready`:
```bash
kubectl get nodes
```
## Optional
1. crictl
```bash
VERSION="v1.34.0"
curl -LO https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm crictl-$VERSION-linux-amd64.tar.gz
crictl --version
```
2. etcdctl
```bash
ETCD_VER=v3.4.37
curl -LO https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzf etcd-${ETCD_VER}-linux-amd64.tar.gz
sudo mv etcd-${ETCD_VER}-linux-amd64/etcdctl /usr/local/bin/
etcdctl version
```
## Troubleshooting
```bash
containerd --version
sudo journalctl -u containerd -n 200 --no-pager
sudo apt update
sudo apt install containerd.io
```
## References
1. [Joining new node](https://hackmd.io/@PWUAMc16SL6c88wSibmARw/H1lfMRAlZl)