# In-class Lab Final - Kubernetes using Kubeadm

> [name=Aneesh Melkot (1001750503)] [color=#326ce5]

![](https://i.imgur.com/ysAUMf6.png)

## Contents

[TOC]

## Minikube

### Installation

Minikube was installed using Chocolatey (Windows package manager)

```shell!
$ choco install minikube
```

### Startup

Minikube was started using

```shell!
$ minikube start
```

![](https://i.imgur.com/8wppDcl.png)

### Minikube Usage

Let's check out my existing pods and deployments on Minikube

```shell!
$ kubectl get pods
$ kubectl get deployments
```

![](https://i.imgur.com/PpiadMw.png)

### Minikube deploy service

Let's deploy a sample app on Minikube

```shell!
$ kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
$ kubectl expose deployment hello-minikube --type=NodePort --port=8080
$ kubectl get services hello-minikube
```

![](https://i.imgur.com/pB3FSDY.png)

Now we can access this service by running

```shell!
$ minikube service hello-minikube
```

![](https://i.imgur.com/FW2oEfT.png)

Here we can see the service running in the browser.

![](https://i.imgur.com/EFH8SAK.png)

## Kubernetes on the Cloud

### Create Base VM

Logged into my GCP console and made a base instance as seen below. This instance will be used to make the master and worker nodes.

![](https://i.imgur.com/GZccEGB.png)

### Install Docker

Installed Docker on the base VM

```shell!
$ sudo apt-get remove docker docker-engine docker.io containerd runc
$ sudo apt-get update
$ sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg \
    --dearmor -o /etc/apt/keyrings/docker.gpg
```

![](https://i.imgur.com/zseqlOY.png)

Then installed the Docker Engine

```shell!
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
```

![](https://i.imgur.com/cT3JAi4.png)
![](https://i.imgur.com/arxPgUs.png)

After these steps Docker is up and running. Let's check the status

```shell!
$ sudo systemctl enable docker
$ sudo docker info
```

> As can be seen in the image below, the cgroup driver is set to `systemd`, so we are good to go and install Kubernetes. [color=#326ce5]

![](https://i.imgur.com/tpgH5Yh.png)

### Install kubeadm, kubectl & kubelet

Installed Kubernetes and dependencies

```shell!
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl
$ sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
$ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
```

![](https://i.imgur.com/nEtEnxD.png)
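This verification step is not part of the original lab, but as a quick sanity check one might confirm the tooling installed and that the packages are held back (exact version output will differ):

```shell!
# Verify the Kubernetes tooling is on the PATH (versions will vary)
$ kubeadm version
$ kubectl version --client
$ kubelet --version

# Confirm kubelet/kubeadm/kubectl are pinned against unattended upgrades
$ apt-mark showhold
```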
### Install Docker Shim

As of Kubernetes 1.20, the Docker runtime has been deprecated, so I installed a third-party shim (cri-dockerd) to enable Kubernetes to access the Docker Engine.

```shell!
$ VER=$(curl -s https://api.github.com/repos/Mirantis/cri-dockerd/releases/latest | grep tag_name | cut -d '"' -f 4 | sed 's/v//g')
$ wget https://github.com/Mirantis/cri-dockerd/releases/download/v${VER}/cri-dockerd-${VER}.amd64.tgz
$ tar xvf cri-dockerd-${VER}.amd64.tgz
$ sudo mv cri-dockerd/cri-dockerd /usr/local/bin/
$ wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
$ wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
$ sudo mv cri-docker.socket cri-docker.service /etc/systemd/system/
$ sudo sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
$ sudo systemctl daemon-reload
$ sudo systemctl enable cri-docker.service
$ sudo systemctl enable --now cri-docker.socket
```

And finally

```shell!
$ systemctl status cri-docker.socket
```

![](https://i.imgur.com/0a5QY7W.png)

### Configure machine image

Once our base VM is ready, we can configure the other VMs using the previously created VM as the base. We create a machine image first and configure the other VMs from this machine image.

![](https://i.imgur.com/lo4il23.png)

#### Master Node

Configured the master node using this machine image

![](https://i.imgur.com/8aU3M9p.png)

#### Worker Nodes

Configured 3 worker nodes from the same machine image.

![](https://i.imgur.com/LesYs3k.png)

> In the above image we have the base, master and 3 worker nodes up and running. [color=#326ce5]

### Sanity

Let's check the instances from the local CLI

```shell!
$ gcloud compute instances list
```

![](https://i.imgur.com/HChjsei.png)

Above we can see the newly created master and worker instances.

### SSH into all nodes

```shell!
$ gcloud compute ssh master
$ gcloud compute ssh worker-<number>
```

![](https://i.imgur.com/YLGWtO8.png)

## Kubeadm

Let us now SSH into each of the VMs and set up kubeadm. On the master, initialize the cluster with

```shell!
$ kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
```

![](https://i.imgur.com/F0qJ4HD.png)

kubeadm has generated a token. Let's use this token to add the other VMs as nodes to this cluster.

```shell!
$ kubeadm join 10.206.0.14:6443 --token uhmnbm.vv367nyjudyfi8yp \
    --discovery-token-ca-cert-hash sha256:<TOKEN>
```

### Workers

Workers were added by SSHing into the VMs and running the join command.

![](https://i.imgur.com/smewgyM.png)

All 3 workers were added similarly. Now we can check the nodes from the master.

```shell!
$ kubectl get nodes
```

![](https://i.imgur.com/i9isekF.png)

We can see all the nodes in our cluster above, but the cluster is in a `NotReady` state. Hence, we need to remove the taints on the nodes.

### Taints

We can remove the taint using

```shell!
$ kubectl taint node --all node.kubernetes.io/not-ready:NoSchedule-
```

![](https://i.imgur.com/H5LoMid.png)

### Cluster ready

All systems are green and good to go.

![](https://i.imgur.com/fNS3sKy.png)

## NGINX Deployment

A stateless Nginx deployment is run using

```shell!
$ kubectl apply -f https://k8s.io/examples/application/deployment.yaml
```

![](https://i.imgur.com/OwUVlZg.png)

5 replicas are created using -

```yaml!
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
```

![](https://i.imgur.com/IGoBCPE.png)
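The lab stops at creating the Deployment; as an optional illustration (not part of the original steps), one could check the replicas and expose them on a NodePort, reusing the `nginx-deployment` name and `app: nginx` label from the manifest above:

```shell!
# Confirm all 5 nginx replicas are scheduled across the workers
$ kubectl get pods -l app=nginx -o wide

# Optionally expose the deployment on a NodePort and inspect the service
$ kubectl expose deployment nginx-deployment --type=NodePort --port=80
$ kubectl get service nginx-deployment
```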
## Cluster Backup

We could use Kasten K10 to perform cluster backups and restores:

```shell!
$ helm repo add kasten https://charts.kasten.io/
$ kubectl create namespace kasten-io
$ helm install k10 kasten/k10 --namespace=kasten-io
```

But we will back up and restore manually instead. Essentially we need to back up the following -

- Backup cluster certificates
- Backup etcd by taking a snapshot

### Backup Certificates

Let's check the certificates path

```shell!
$ ls /etc/kubernetes/pki/
$ ls /etc/kubernetes/pki/etcd
```

![](https://i.imgur.com/5z8JVx0.png)

Make a backup directory

```shell!
$ mkdir backup-certs
$ sudo cp -r /etc/kubernetes/pki backup-certs
```

![](https://i.imgur.com/V2sjo6x.png)

### Backup ETCD

Before backing up etcd on Kubernetes, we need to have an etcd client installed

```shell!
$ sudo apt update
$ sudo apt install vim wget curl
$ export RELEASE=$(curl -s https://api.github.com/repos/etcd-io/etcd/releases/latest | grep tag_name | cut -d '"' -f 4)
$ wget https://github.com/etcd-io/etcd/releases/download/${RELEASE}/etcd-${RELEASE}-linux-amd64.tar.gz
$ tar xvf etcd-${RELEASE}-linux-amd64.tar.gz
$ cd etcd-${RELEASE}-linux-amd64
$ sudo mv etcd etcdctl etcdutl /usr/local/bin
$ etcdctl version
```

![](https://i.imgur.com/uJ9AsHX.png)

#### Save Kube ETCD Snapshot

```shell!
$ mkdir backup-etcd
$ sudo etcdctl snapshot save backup-etcd/snapshot.db \
>   --endpoints=https://127.0.0.1:2379 \
>   --cacert=/etc/kubernetes/pki/etcd/ca.crt \
>   --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
>   --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
```

![](https://i.imgur.com/Cn6dMst.png)

> **At this point the cluster backup is complete.** [color=#326ce5]

## Cluster Restore

Before we can restore, let's reset our cluster.

```shell!
$ sudo kubeadm reset -f
```

![](https://i.imgur.com/ImpBrmt.png)

> The cluster is reset and the connection is being refused, as is expected. [color=#326ce5]

### Restore Certificates

```shell!
$ sudo cp -r backup-certs/pki/ /etc/kubernetes
```

![](https://i.imgur.com/8d99lmX.png)

Certificates are restored.

### Restore ETCD from snapshot

```shell!
$ sudo etcdctl snapshot restore backup-etcd/snapshot.db
$ sudo mv default.etcd/member /var/lib/etcd/
```

![](https://i.imgur.com/z0CzPtp.png)

Got a deprecation warning, but the etcd snapshot was restored successfully.

### Reinitialize Kubeadm

Finally we need to reinitialize kubeadm with an extra flag `--ignore-preflight-errors`.

```shell!
$ sudo kubeadm init \
>   --ignore-preflight-errors=DirAvailable--var-lib-etcd \
>   --cri-socket=unix:///var/run/cri-dockerd.sock
```

**Head**
![](https://i.imgur.com/DuSWTxA.png)

**Tail**
![](https://i.imgur.com/XyJvLE1.png)

> **Kubernetes cluster successfully restored from a backup.** [color=#326ce5]

## How can you make sure that the pods are able to communicate with each other within the network?

Each Pod in Kubernetes has its own IP address. Although using Services is advised, a Pod can communicate with another Pod by simply addressing its IP. A Service gives a group of Pods one stable DNS name or IP address through which they can be reached together.

Kubernetes defines the networking model, but the actual implementation is provided by network plugins that conform to the Container Network Interface (CNI). The network plugin assigns IP addresses to Pods and enables Pod-to-Pod communication inside the Kubernetes cluster. There are many network plugins for Kubernetes; Flannel is used here because it is fairly straightforward and by default employs a Virtual Extensible LAN (VXLAN) overlay.

All containers in the same Pod share the same IP address, so they can talk to one another using the localhost address.
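The Flannel installation command itself is not shown in the report; assuming the upstream manifest from the flannel-io project (URL shown for illustration), installing it on this cluster would look roughly like this, matching the `10.244.0.0/16` pod CIDR passed to `kubeadm init`:

```shell!
# Install the Flannel CNI plugin (manifest URL assumed from the upstream project)
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Nodes should move to Ready once the flannel DaemonSet pods are running
$ kubectl get pods -A | grep flannel
$ kubectl get nodes
```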
## Remote Access

### Kubernetes Dashboard

Here we can see my Kubernetes Dashboard, where I monitor my deployments and services.

![](https://i.imgur.com/cxFGEXk.png)
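The report does not show how the dashboard was installed or reached remotely; a common approach (the manifest version, URL, and proxy path below are assumptions, not taken from the lab) is to apply the upstream recommended manifest and tunnel to it through `kubectl proxy`:

```shell!
# Deploy the dashboard from the upstream recommended manifest (version assumed)
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Proxy the API server locally, then open the dashboard at
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
$ kubectl proxy
```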