# Howdy Kubernetes!!!! :wave:

This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts, and includes hands-on exercises.

## Kubernetes Cluster :+1:

Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host. Kubernetes automates the distribution and scheduling of application containers across the cluster in an efficient way. Kubernetes is an open-source platform and is production-ready.

![](https://i.imgur.com/yOS9KP0.png)

A Kubernetes cluster consists of two types of resources:

* The **Master** coordinates the cluster
* **Nodes** are the workers that run applications

#### ***Cluster:***

To save time, we have already created and provisioned [Gardener](https://gardener.cloud/) clusters for you. Verify yours with the following commands:

> ```
> kubectl cluster-info
> ```
> ```
> kubectl get nodes
> ```

These commands show the cluster details and the nodes that belong to the cluster.

## Deploy Your App :+1:

Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to create and update instances of your application. Once you've created a Deployment, the Kubernetes master schedules the application instances onto individual Nodes in the cluster.

![](https://i.imgur.com/hstZg2F.png)

Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces it. **This provides a self-healing mechanism to address machine failure or maintenance.**

> ```
> kubectl create deployment pingme --image=kesavanking/k8s-dkom:1.0
> ```
> ```
> kubectl get deployments
> ```

## Explore Your App :+1:

When you created a Deployment, Kubernetes created a **Pod** to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers and some shared resources for those containers (storage volumes, a unique cluster IP address).

![](https://i.imgur.com/BSpoOrg.png)

A Pod always runs on a **Node**. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node can have multiple Pods, and the Kubernetes master automatically handles scheduling the Pods across the Nodes in the cluster. The Master's automatic scheduling takes into account the available resources on each Node.
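For example, you can ask kubectl where your Pods ended up and what capacity each Node offers. This is a minimal sketch; replace `<NODE NAME>` with a name returned by `kubectl get nodes`:

> ```
> # List Pods together with the Node each one was scheduled onto
> kubectl get pods -o wide
> ```
> ```
> # Show a Node's capacity and the resources already allocated on it
> kubectl describe node <NODE NAME>
> ```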
![](https://i.imgur.com/12GmhYi.png)

You will use the kubectl command line interface to get information about deployed applications and their environments:

* **kubectl get** - list resources
> ```
> kubectl get pods
> ```
* **kubectl describe** - show detailed information about a resource
> ```
> kubectl describe pod <POD NAME>
> ```
* **kubectl logs** - print the logs from a container in a pod
> ```
> kubectl logs <POD NAME> -c k8s-dkom -f
> ```
* **kubectl exec** - execute a command on a container in a pod
> ```
> kubectl exec -it <POD NAME> -- /bin/sh
> ```

The exec command opens a console on the container, where you can find the *hello-app* binary of the Go application. List the running processes:

> ```
> ps -a
> ```

You can see the hello application running. Leave the container shell with:

> ```
> exit
> ```

## Expose Your App :+1:

Although each Pod has a unique IP address, those IPs are not exposed outside the cluster. A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:

* ***ClusterIP*** (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
* ***NodePort*** - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using `<NodeIP>:<NodePort>`. Superset of ClusterIP.
* ***LoadBalancer*** - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
* ***ExternalName*** - Exposes the Service using an arbitrary name (specified by `externalName` in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.

![](https://i.imgur.com/7X440Tu.png)

> ```
> kubectl get services
> ```

This lists the current Services in the cluster. There is a Service called `kubernetes` that is created by default when the cluster is set up.

To create a new Service and expose it to external traffic, use the expose command with LoadBalancer as the type:

> ```
> kubectl expose deployment pingme --type=LoadBalancer --port 80 --target-port 8080
> ```
> ```
> kubectl get services
> ```

The External IP will initially show as pending. Wait a while until your application gets an External IP. Once you've determined the external IP address for your application, copy it and point your browser to that address (such as http://203.0.113.0) to check whether your application is accessible.

## Scale Your App :+1:

In the previous modules we created a Deployment and then exposed it publicly via a Service. The Deployment created only one Pod for running our application. When traffic increases, we will need to scale the application to keep up with user demand. Scaling is accomplished by changing the number of replicas in a Deployment.

![](https://i.imgur.com/ZY4IXlO.png)

> ```
> kubectl scale deployment pingme --replicas=3
> ```
> ```
> kubectl get pods
> ```

Now you have multiple instances of your application running independently of each other. The load balancer you provisioned in the previous step will start routing traffic to these new replicas automatically.
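You can verify this by inspecting the Service's endpoints: after scaling, the `pingme` Service should list one endpoint per running Pod. A minimal sketch:

> ```
> # The ENDPOINTS column should now show three Pod addresses for pingme
> kubectl get endpoints pingme
> ```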
## Update Your App

Users expect applications to be available all the time, and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates.

**Rolling updates** allow a Deployment's update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods will be scheduled on Nodes with available resources.

![](https://i.imgur.com/t9g8Hnx.png)

To update the image of the application to version 2, use the **set image** command, followed by the deployment name and the new image version:

> ```
> kubectl set image deployment/pingme k8s-dkom=kesavanking/k8s-dkom:2.0
> ```
> ```
> kubectl get pods -l app=pingme
> ```

You can see new containers being created from the new image. Access the Service as in the previous module and check the version change in the web output; it will be similar to this:

```
Hello SAP!
Version: 2.0.0
Hostname: pingme-76975fc9d6-bdm47
```

*Happy Kubernetes* :tada:

## Delete Your Resources

---

Once you have completed the exercise, please delete your resources for the convenience of the other participants.

> ```
> kubectl delete deployment pingme
> ```
> ```
> kubectl delete service pingme
> ```
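To double-check that nothing is left behind, you can list both resource kinds once more (a small sketch; typically only the default `kubernetes` Service should remain):

> ```
> # Neither the pingme Deployment nor the pingme Service should appear anymore
> kubectl get deployments,services
> ```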