# Weeks 4-6 - DevOps Course 2024 (BSCS)

Link to [Week 2](https://hackmd.io/PGGZAAHrQA2ZoHHMcAzwNQ?both) learning resources:
- Linux Essentials
- DevOps as a culture and as a process

# Sessional 1 Mock Exam

### Section 1: Multiple Choice Questions (10 Questions, 1 Mark Each)
(Choose the correct option for each question.)

**1. Which of the following is a Type-1 hypervisor?**
a) VMware Workstation
b) VirtualBox
c) VMware ESXi
d) Docker

**2. What is the primary benefit of containerization over virtualization?**
a) Better security
b) Isolation at the hardware level
c) Lightweight nature and faster startup
d) Compatibility with Windows only

**3. Which of the following commands is used to start a Docker container?**
a) `docker run`
b) `docker build`
c) `docker stop`
d) `docker commit`

**4. In Kubernetes, what is a Pod?**
a) A group of containers running together on a host
b) A network policy
c) A volume in the container
d) A secret store

**5. Which Kubernetes component ensures the desired state of a container is maintained?**
a) kubelet
b) Controller Manager
c) API Server
d) Scheduler

**6. In which file is Docker's image metadata stored?**
a) Dockerfile
b) DockerCompose.yaml
c) metadata.json
d) image.json

**7. Which tool can be used for container orchestration?**
a) Jenkins
b) Kubernetes
c) Terraform
d) Ansible

**8. What does the `docker ps` command do?**
a) Lists all images
b) Lists all running containers
c) Stops a container
d) Prunes all unused containers

**9. What is the purpose of a ReplicaSet in Kubernetes?**
a) Scaling Pods up and down based on demand
b) Scheduling the Pods on nodes
c) Creating a storage volume
d) Connecting to external APIs

**10. Which of the following services is typically responsible for container networking in Kubernetes?**
a) CoreDNS
b) etcd
c) Calico
d) kube-proxy

<hr/>

### Section 2: Case Study/Scenario-Based Questions (2 Questions, 10 Marks Each)

#### Scenario
You are deploying a multi-service application using Kubernetes.
The application consists of a web front-end, an API service, and a database.

**Question:** Design a Kubernetes solution for this setup that handles scaling, service discovery, and persistent storage. Discuss the components you would use (e.g., Deployments, Services, PersistentVolumes) and how they interact.

#### Scenario
You are tasked with creating a CI/CD pipeline for an application that needs to be deployed in containers.

**Question:** Outline the key stages of the CI/CD pipeline. Highlight how you would integrate Docker and Kubernetes into the process for building, testing, and deploying the application.

<hr/>

### Section 3: Short Answer Questions (3 Questions, 5 Marks Each)

**1. Explain the difference between virtualization and containerization, and discuss where you would prefer one over the other in a production environment.**
**Points to cover:** Definitions of virtualization and containerization, use cases for each, and how they affect performance, security, and resource efficiency.

**2. Describe the Docker container lifecycle and explain how you would handle restarting a failed container.**
**Points to cover:** The stages of a container (Created, Running, Paused, Stopped, etc.), and mechanisms like restart policies or monitoring tools to handle failed containers.

**3. How does Kubernetes handle self-healing? Provide examples of Kubernetes features that support this ability.**
**Points to cover:** Concepts like health checks (liveness and readiness probes), auto-restarting containers, rescheduling failed Pods, and ReplicaSets.

### Section 4: Short Answer Questions (1 Question, 5 Marks)

**Scenario:** You are managing a Kubernetes cluster hosting a web application with three replicas. These replicas need to communicate with each other and must be accessible from outside the cluster.

**Question:**
1. Define what a Kubernetes Service is, and explain why it is necessary in this scenario. (2 Marks)
2. What type of Kubernetes Service would you choose to expose the web application to external users, and why? (1 Mark)
3. Write a YAML configuration snippet for a Service that exposes the application using the correct service type. (2 Marks)

# Learning Resources

#### Virtualization vs Containers

*Here are some ways virtualization and containers compare:*

- **Resource usage:** Containers are more lightweight and use fewer resources than virtual machines because they share the operating system of the host.
- **Hardware abstraction:** Virtualization creates an abstraction layer over hardware, allowing a single computer to be divided into multiple virtual computers.
- **Application portability:** Containers are portable units that can run consistently across any computing platform.
- **Application development:** Containers are well suited for building cloud-native apps, packaging microservices, and incorporating applications into DevOps or CI/CD practices.
- **Hardware requirements:** Virtual machines are needed if a project has specific hardware requirements, or if development is being done on one hardware platform and needs to target another.
- **Security:** Containers share the underlying kernel of the host system, so any security vulnerability that affects the kernel also affects every container running on it.

Link to [Read](https://www.liquidweb.com/blog/virtualization-vs-containerization)

<hr/>

**Docker in a CI/CD environment**

Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and therefore use fewer resources than a virtual machine.
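As a concrete sketch of that packaging idea, a minimal Dockerfile might look like the following. Note that this is an illustrative example, not part of the course material: the `site/` directory and the choice of an `nginx` base image are hypothetical.

```dockerfile
# Package a static site into an nginx image.
# "site/" is a hypothetical local directory containing the HTML files.
FROM nginx:1.19

# Copy the application files into the image at build time,
# so the resulting image is self-contained and runs the same everywhere.
COPY site/ /usr/share/nginx/html/

# Document the port the web server listens on.
EXPOSE 80
```

Building and running it uses the standard commands: `docker build -t my-site .` to produce the image, then `docker run -p 8080:80 my-site` to start a container from it.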
Complete guide: [Read](https://www.geeksforgeeks.org/introduction-to-docker/)

Docker simplifies the CI/CD process by allowing developers to package applications with all of their dependencies into a standardized unit for software development. This uniformity means you can be assured that your application runs the same way in every environment, from development to production.

Build a CI/CD pipeline with Docker: [Link to Read](https://circleci.com/blog/build-cicd-pipelines-using-docker/)

<hr/>

**Orchestrating Containers with Kubernetes**

Kubernetes is a powerful platform for orchestrating containerized applications. It allows developers to scale, manage, and deploy applications with ease. In this blog, we'll dive deep into key Kubernetes components such as Pods, Deployments, ReplicaSets, Services, and Ingress. We'll also cover advanced deployment techniques like Canary deployments and rolling upgrades, ensuring that you can expose your applications to the outside world seamlessly.

**1. Pods: The Smallest Unit of Deployment**

In Kubernetes, Pods are the smallest deployable unit. A Pod typically contains one or more tightly coupled containers that share the same network namespace and storage volumes. Containers inside a Pod can communicate with each other via localhost.

Pods are ephemeral. When a Pod dies, Kubernetes automatically schedules a new one to replace it. However, Pods themselves aren't designed to self-heal, which brings us to Deployments.

**2. Deployments: Declarative Application Management**

A Deployment in Kubernetes provides declarative updates for Pods and ReplicaSets. It defines the desired state of your application, including the number of replicas, the container image, and resource requests. A Deployment ensures that the correct number of Pods are always running.
Here's a simple Deployment YAML configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web-container
        image: nginx:1.19
```

<hr/>

This Deployment runs 3 replicas of an `nginx` container. If any Pods fail, Kubernetes will automatically recreate them.

**3. ReplicaSets: Ensuring Desired Replicas**

A ReplicaSet ensures that a specified number of Pod replicas are running at all times. Although ReplicaSets can be used directly, they are usually managed through a Deployment. A Deployment wraps a ReplicaSet and offers additional features like rolling updates and rollbacks. For example, in the Deployment YAML above, the ReplicaSet is created implicitly to manage the three nginx Pods.

**4. Services: Exposing Pods to the World**

A Service in Kubernetes is an abstraction that defines a logical set of Pods and a policy to access them. Services allow different components of an application to communicate with each other and enable external traffic to reach your Pods.

Types of Services:

* **ClusterIP**: Exposes the Service on an internal IP within the cluster.
* **NodePort**: Exposes the Service on a static port on each node.
* **LoadBalancer**: Exposes the Service using a cloud provider's load balancer.

Here's a basic Service that exposes the `my-web-app` Deployment to external traffic using a NodePort:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30007
```

<hr/>

**5. Ingress: Managing External Access**

An Ingress is used to expose HTTP and HTTPS routes from outside the cluster to Services within the cluster. It provides a way to handle virtual hosts, SSL, and other features at the network layer.
Here's an example of a simple Ingress resource that routes traffic to the `web-service`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

<hr/>

This configuration routes all traffic from `example.com` to the `web-service` on port 80.

**6. Canary Deployments: Introducing New Changes Gradually**

A Canary deployment is a strategy that releases new software gradually to a subset of users. This minimizes the risk associated with introducing new versions of an application. In Kubernetes, Canary deployments are often implemented using multiple versions of a Deployment with a traffic-splitting mechanism like a Service or Ingress. For example:

1. Version 1 serves 90% of the traffic.
2. Version 2 (the canary) serves 10% of the traffic.

To implement this, you can update the Deployment gradually, increasing the number of Pods running the new version until it serves all traffic.

**7. Rolling Updates: Seamless Upgrades**

A rolling update ensures that new versions of your application are deployed without any downtime. Kubernetes updates a small batch of Pods at a time, ensuring that the new version is stable before proceeding to the next batch.

You can trigger a rolling update by modifying the Deployment's container image:

```shell
kubectl set image deployment/my-web-app web-container=nginx:1.20
```

<hr/>

Kubernetes will gradually replace the old Pods (running `nginx:1.19`) with new Pods running `nginx:1.20`, ensuring minimal disruption.

**8. Accessing Applications from Outside the Cluster**

To expose applications to the outside world, you can use either a NodePort or LoadBalancer Service. In cloud environments, a LoadBalancer automatically provisions a load balancer for external access. For HTTP-based applications, it's best to use an Ingress to manage external access, especially when dealing with multiple services.
Ingress Controllers (such as the NGINX Ingress Controller) handle routing, SSL termination, and load balancing for your applications.

**Conclusion: Orchestrating Modern Applications with Kubernetes**

Kubernetes simplifies the orchestration of containerized applications by providing robust mechanisms like Pods, Deployments, ReplicaSets, Services, and Ingress. By leveraging advanced techniques like Canary deployments and rolling upgrades, you can ensure that your applications are updated with minimal risk. With Kubernetes, scaling and maintaining distributed applications have never been easier, whether you are running machine learning models, microservices, or complex web applications.

<hr/>

Practice: complete the following Ingress template by filling in the blank fields.

```yaml
apiVersion: networking.k8s.io/v1
kind:
metadata:
  name:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path:
        pathType:
        backend:
          service:
            name:
            port:
              number:
      - path:
        pathType:
        backend:
          service:
            name:
            port:
              number:
```
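The Canary pattern described in section 6 can be sketched with plain Deployments and a Service. This is a minimal illustration, not a production recipe, and all names (`web-stable`, `web-canary`, the `track` label) are hypothetical: two Deployments carry a shared `app: web` label that a single Service selects on, so traffic splits roughly in proportion to the replica counts, here 9:1.

```yaml
# Stable version: 9 replicas running the current image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: web
      track: stable
  template:
    metadata:
      labels:
        app: web          # shared label: the Service selects on this
        track: stable
    spec:
      containers:
      - name: web-container
        image: nginx:1.19
---
# Canary version: 1 replica running the new image (~10% of traffic).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web
        track: canary
    spec:
      containers:
      - name: web-container
        image: nginx:1.20
---
# The Service selects only on app: web, so it load-balances
# across the stable and canary Pods together.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

To shift more traffic to the canary, scale `web-canary` up and `web-stable` down; once the new version proves stable, update the stable Deployment's image and remove the canary.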