:::success
# LS Lab 4 - Kubernetes Basics
**Name: Ahmad Mohamad Quasem Almoumani**
:::
---
# Task 1 - Preparation
:::info
**1.1. Choose an application that has such characteristics as:.**
JSON non-confidential data
application configuration values
any secrets (like authorization credentials)
availability from outside (on Internet) *
has any health checks or/and status endpoints *
data that is stored on distributed storage/volumes
you have a microservice that logic is to perform short temporal work
:::
I have chosen this project
> Project Link: https://github.com/topics/microservices-application
Kubernetes is used to manage and deploy containerized applications, with support for a variety of container runtimes such as Docker, containerd, and CRI-O. It provides a way to declaratively specify the desired state of an application and then manages the underlying infrastructure to ensure that the application is running as expected.
Some of the characteristics of the chosen application that align with the given requirements are:
* **JSON non-confidential data**: The application likely uses JSON as a data format to communicate between different microservices. This data is non-confidential, meaning it doesn't contain sensitive information like personal identifiable information (PII) or financial information.
* **Application configuration values**: The application may have configuration values like the port number it listens on or the location of external services it depends on. These values can be set in a configuration file or via environment variables.
* **Any secrets (like authorization credentials)**: The microservices in the application may need authorization credentials to access external services or other microservices. These credentials should be kept as secrets and not hardcoded in the source code. They can be stored in a secure credential store like HashiCorp Vault or AWS Secrets Manager.
* **Availability from outside (on Internet)**: The microservices in the application should be accessible from outside the application, most likely via HTTP requests. This means the application needs to be deployed to a server that is accessible over the internet.
* **Has any health checks or/and status endpoints**: Each microservice in the application should expose health checks and status endpoints. Health checks allow other microservices to determine if a particular microservice is up and running, while status endpoints provide information about the current state of a microservice.
* **Data that is stored on distributed storage/volumes**: The microservices in the application may store data on distributed storage or volumes like Amazon S3, Google Cloud Storage, or a database like MongoDB or Cassandra.
* **A microservice whose logic is to perform short temporal work**: The application likely has microservices that perform specific tasks, such as sending emails or processing payments. These microservices may perform short temporal work and can be deployed and scaled independently of other microservices.
---
:::info
**1.2. Get familiar with Kubernetes (k8s) and Helm concepts.**
:::
**Kubernetes Concepts:**
* **Pods**: The smallest unit of deployment in Kubernetes, which can contain one or more containers.
* **ReplicaSets**: A way to ensure that a specified number of replicas of a Pod are running at any given time.
* **Deployments**: A higher-level abstraction that manages ReplicaSets and provides declarative updates to Pods.
* **Services**: A Kubernetes resource that provides a way to expose an application running on a set of Pods as a network service.
* **ConfigMaps**: A way to store configuration data as key-value pairs that can be consumed by Pods.
* **Secrets**: A Kubernetes resource that provides a way to store sensitive information such as passwords, keys, and certificates.
* **Volumes**: A way to provide persistent storage for Pods.
**Helm Concepts:**
* **Charts**: A Helm package that contains all of the resources necessary to deploy an application on Kubernetes.
* **Releases**: A running instance of a chart, which can be customized using values files.
* **Values**: A YAML file that can be used to provide custom configuration values for a chart.
* **Templates**: The files in a chart that are processed by Helm to generate the Kubernetes manifests.
* **Repository**: A place to store and share Helm charts.
* **Hooks**: Actions that can be run before or after a chart is installed or upgraded.
**Overall, Kubernetes provides a powerful and flexible platform for deploying and managing containerized applications, while Helm provides a way to package and deploy those applications in a repeatable and customizable way. Understanding these concepts is essential for anyone working with Kubernetes and Helm.**
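To make the chart/values/template relationship concrete, here is a minimal sketch (not taken from this lab) of how a template consumes values; the chart name `webapp` and all values are hypothetical:
```
# values.yaml (hypothetical): user-supplied configuration for a chart named "webapp"
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"
---
# templates/deployment.yaml: Helm fills in the placeholders from values.yaml at render time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-webapp
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```
Running `helm install my-release ./webapp` would render the template with these values into a plain Deployment manifest and apply it to the cluster.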
---
:::info
**1.3. Find out which fields every Kubernetes resource requires.**
:::
Kubernetes resources have a set of required fields that must be defined in order for the resource to be created and function properly. The specific fields required may vary depending on the resource type, but the following fields are generally required for most Kubernetes resources:
**1. apiVersion**: The version of the Kubernetes API that the resource uses.
**2. kind**: The type of Kubernetes resource, such as Deployment, Service, or ConfigMap.
**3. metadata:**
* **name**: A name for the resource, which must be unique within its namespace.
* **namespace**: The namespace that the resource belongs to. If not specified, the resource is created in the default namespace.
**4. spec**: A specification of the desired state of the resource, which can include a variety of fields depending on the resource type.
**5. status**: A field that is automatically populated by Kubernetes to reflect the current state of the resource, including any errors or warnings.
For example, here is a YAML file that defines a simple Deployment resource:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```
In this example, the required fields for a Deployment resource are apiVersion, kind, metadata, and spec. The metadata field includes the name of the resource, and the spec field defines the desired state of the resource, including the number of replicas, the container image, and the container port.
---
:::info
**1.4. Install and set up the necessary tools:**
* kubectl
* minikube
* helm
:::
**1. Install kubectl:**
To install kubectl I used the following commands.
Download the latest release with the command:
`curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"`
Download the kubectl checksum file:
`curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"`
Install kubectl
`sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl`
**2. Install minikube:**
Download and install the latest stable version of minikube:
> curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
> sudo install minikube-linux-amd64 /usr/local/bin/minikube

Then start a new Kubernetes cluster using minikube:
> minikube start
To verify that the minikube cluster is running:
> kubectl cluster-info
**3. Install Helm:**
> curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
This downloads and installs the latest stable version of Helm.
Verify that Helm is installed:
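For example, printing the client version is enough to confirm the binary is on the PATH:
```
helm version
```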
---
:::info
**1.5. Get access to Kubernetes Dashboard.**
:::
To get access to the Kubernetes Dashboard, I followed these steps:
1. Start the minikube cluster
> minikube start
To deploy the Kubernetes Dashboard I used:
> kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Then I ran `kubectl proxy`, which starts a proxy server that allows access to the Kubernetes API server from the local machine.
I used this command to create a token for the "admin-user"
`kubectl -n kubernetes-dashboard create token admin-user`
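This token command assumes an `admin-user` ServiceAccount already exists in the `kubernetes-dashboard` namespace. A minimal manifest for it, following the upstream Dashboard documentation, looks roughly like this:
```
# ServiceAccount for Dashboard login, bound to the cluster-admin ClusterRole
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```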
I got access to the Kubernetes Dashboard.
---
# Task 2 - k8s Nodes
:::info
**1. After installing all required lab tools and starting minikube cluster, use kubectl commands
to get and describe your cluster nodes.**
:::
To get a list of nodes:
I used this command `kubectl get nodes`
I used the command `kubectl describe node minikube` to describe the node, since minikube is the only node available now.
---
:::info
**2. Get the more detailed information about the particular node**
:::
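More detailed information about a particular node (here the `minikube` node) can be obtained, for example, with:
```
kubectl get node minikube -o wide    # extra columns: internal IP, OS image, kernel, container runtime
kubectl get node minikube -o yaml    # the full node object, including capacity and conditions
```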
---
:::info
**3. Get the OS and CPU information.**
:::
To get information for OS I used
`kubectl describe node minikube | grep -i 'OS Image'`
To get information for CPU I used
`kubectl describe node minikube | grep -i -A 2 'Capacity'`

To get the CPU information for the host I ran the command `lscpu`

To get the OS information for the host I used `lsb_release -a`

---
:::info
**4. Put the results into report.**
:::
---
# Task 3 - k8s Pod
:::info
**2. Write a Pod spec for your chosen application, deploy the application and run a pod.**
:::
I wrote my Pod specs in one YAML file.
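The actual file was included as a screenshot; a minimal sketch of one of the Pods (the `frontend-pod` referenced later), with a placeholder image name, could look like this:
```
# Sketch of one Pod from the combined manifest; the image name is hypothetical
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: webapp-frontend:latest   # placeholder image
      ports:
        - containerPort: 8080
```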

---
:::info
**3. With kubectl , get the cluster pods, pod logs, describe pod, go into pod shell.**
:::
To get the cluster pods I run `kubectl get pods`

To get logs I run `kubectl logs <pod-name>`


To describe a pod I run `kubectl describe pod <pod-name>`



---
:::info
**4. Make sure that your app is working correctly inside Pod.**
:::
To go into the pod shell I run `kubectl exec -it <pod-name> -- bash`. Once in the pod, I test whether the app is running with a curl command.

Here the error "Resource not found" indicates that the app is actually running but was not able to connect to api-service.
---
# Task 4 - k8s Service
:::info
**1. Figure out the necessary Service spec fields**
:::
This Service spec creates a Service named "webapp-service" that selects pods with the label "app: webapp-service". It exposes port 8080 and routes traffic to the same port on the target Pods. The type field is set to LoadBalancer, which will create an external load balancer to distribute traffic to the Pods.
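The manifest itself was attached as a screenshot; a sketch matching that description is:
```
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: LoadBalancer
  selector:
    app: webapp-service        # label described above
  ports:
    - port: 8080
      targetPort: 8080
```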
---
:::info
**2. Write a Service spec for your pod(s) and deploy the Service .**
:::
I wrote the following service manifest for my 3 pods

To deploy the Service I run this command: `kubectl create -f podspec-service.yaml`

---
:::info
**3. With kubectl , get the Services and describe them.**
:::
To get and describe the services I run `kubectl get services` and `kubectl describe service <service-name>`.


---
:::info
**4. Make sure that pods can communicate between each other using DNS names, check pods addresses**
:::
To test if they are able to communicate with each other using their DNS names, I connected to one pod's shell (api-pod) and tried to ping another pod's (frontend-pod) DNS name by running the command:
```
ping 10-244-0-64.frontend-service.default.svc.cluster.local
```

---
:::info
**5. Delete any Pod, recreate it and check addresses again. Make sure that traffic is routed to the new Pod correctly.**
:::
I deleted the frontend-pod using `kubectl delete pod frontend-pod`, then recreated it by applying the Pod spec manifest.

Again I connected back to api-pod and ran the ping to the frontend pod as above
```
ping 10-244-0-66.frontend-service.default.svc.cluster.local
```

---
:::info
**6. Learn about Loadbalancer and NodePort .**
:::
**LoadBalancer:** A Service of type LoadBalancer is exposed through an external load balancer that distributes network traffic to the Service's endpoints. It provides a stable IP address (and, depending on the provider, a DNS name) that clients can use to access the Service. When a Service is exposed as a LoadBalancer, Kubernetes automatically provisions and configures a load balancer in the cloud provider's infrastructure. LoadBalancer Services are commonly used in production environments to expose externally facing Services.
**NodePort:** A Service of type NodePort is exposed on a static port on each node in the cluster. This means that the Service is accessible from outside the cluster using the IP address of any node, on the specified NodePort. When a Service is exposed as a NodePort, Kubernetes allocates a port from a predefined range (by default 30000-32767) and maps it to the Service's port. NodePort Services are commonly used for development and testing purposes.
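For illustration, the only difference in the manifest is the `type` field (and, for NodePort, an optional `nodePort` in the allowed range); the Service name here is hypothetical:
```
apiVersion: v1
kind: Service
metadata:
  name: frontend-nodeport        # hypothetical name
spec:
  type: NodePort                 # or: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080            # must fall inside the 30000-32767 range
```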
---
:::info
**7. Deploy an external Service to access your application from outside, e.g., from your local host.**
:::
I created a new Service and set its type to LoadBalancer:


Now, to be able to access the application from an external browser, I ran the command `minikube service webapp-service` to get an external URL to use.

With this my application is now available externally.


---
# Task 5 - k8s Deployment
:::info
2. Make sure that you wiped previous Pod manifests. Write a Deployment spec for your pod(s) and deploy the application.
:::
I created the Deployment spec below for deploying my application.
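The actual spec was attached as a screenshot; a minimal sketch with placeholder names for one of the Deployments might look like:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: webapp-frontend:latest   # placeholder image
          ports:
            - containerPort: 8080
```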


:::info
3. With kubectl , get the Deployments and describe them.
:::
To get the list of deployments, I used `kubectl get deployments`

To describe them same as for the pods I did `kubectl describe deployment <deployment-name>`


:::info
4. Update your Deployment manifest to scale your application to three replicas.
:::
To scale the application I just changed the replicas attribute to 3 and then applied the new manifest.


:::info
5. Access a pod shell and logs using Deployment labels.
:::
To view logs for a Deployment using the Deployment labels, I use the `kubectl logs deployment/<deployment-name>` command

To access a shell inside a pod that is managed by a Deployment using the Deployment labels, you can use the `kubectl exec -it deployment/<deployment-name> -- bash` command.

:::info
6. Make any application configuration change in your Deployment yaml and try to update the application. Monitor what are happened with pods ( --watch ).
:::
For this task I just retagged my docker image with `v2` and used it for creating a deployment as follows:
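The update can be applied and observed, for example, with the commands below (the manifest file name is a placeholder):
```
kubectl apply -f deployment.yaml   # apply the manifest that now references the v2 image
kubectl get pods --watch           # watch the old pods terminate and the new ones come up
```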



:::info
7. Rollback to previous application version using Deployment.
:::
To roll back and make the Deployment use the previous image, I used `kubectl rollout undo deployment/<deployment-name>`.



:::info
8. Set up requests and limits for CPU and Memory for your application pods. Provide a PoC that it works properly.
:::
I changed the Deployment to add CPU and memory requests and limits as follows:
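The exact values were shown in a screenshot; the fragment added to the container spec has this shape (the numbers here are placeholders):
```
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "250m"
    memory: "256Mi"
```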



Then I connected to a pod shell and ran a stress test using **siege** to overload the container; normally the pod should then enter **CrashLoopBackOff** mode.


# Task 6 - k8s configMap
:::info
2. Modify your Deployment manifest to set up some app configuration via environment variables .
:::
I had already added some environment variables to my Deployment for connecting the api pods to the quotes pod:
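The relevant fragment of the container spec looks roughly like this (the variable name is an assumption; the URL matches the value decoded in Task 7):
```
env:
  - name: API_URL                      # hypothetical variable name
    value: "http://quotes-service:5000"
```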

:::info
3. Create a new configMap manifest. In data spec, put some app configuration as key-value pair (it could be the same as in previous exercise). In the Pod spec add the connection to key-value pair from configMap yaml file.
:::
I created the configmap.yaml file below, then applied it:
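A sketch of such a configmap.yaml, using the `webapp-configmap` name that appears later and a hypothetical key:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-configmap
data:
  API_URL: "http://quotes-service:5000"   # hypothetical key-value pair
```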


Then I referenced it in the Deployment as follows:
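The container spec then reads the key from the ConfigMap instead of hard-coding the value, roughly:
```
env:
  - name: API_URL                  # hypothetical variable name
    valueFrom:
      configMapKeyRef:
        name: webapp-configmap
        key: API_URL
```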

:::info
4. Create a new config.json file and put some json data into
:::
For this task I created config.json with the environment variables as follows:
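The file contents were captured in a screenshot; an illustrative config.json with the same kind of key-value data could be:
```
{
  "API_URL": "http://quotes-service:5000"
}
```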

:::info
5. Create a new configMap manifest. Connect configMap yaml file with config.json file to read the data from it.
:::
Now I deleted the previously created ConfigMap and recreated it to read data from config.json using the command:
```
kubectl create configmap webapp-configmap --from-file=config.json
```

:::info
6. Update your Deployment to add Volumes and VolumeMounts .
:::
Under the container spec in my Deployment file I added the following to define volumeMounts and volumes:
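The fragment has roughly this shape (container name, image, and mount path are placeholders; the ConfigMap name is the one created above):
```
# Fragment of the Deployment's pod template spec
spec:
  containers:
    - name: webapp                       # hypothetical container name
      image: webapp-frontend:latest      # placeholder image
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config         # hypothetical mount path
  volumes:
    - name: config-volume
      configMap:
        name: webapp-configmap
```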

:::info
7. kubectl , check the configMap details. Make sure that you see the data as plain text.
:::
```
kubectl describe configmaps
```

:::info
8. Check the filesystem inside app container to show the loaded file data on the specified path.
:::
To view the loaded files, I connected to the pod's shell and ran the `ls` command.

The config.json file was successfully copied to the specified mountPath.
# Task 7 - k8s Secrets
:::info
2. Create and apply a new Secret manifest. For example, it could be login and password to login to your app or something else.
:::
I converted the path to the quotes API to base64 as follows:

Then I created the Secret manifest using this base64 value.
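A sketch of that Secret manifest, using the `webapp-api` name and the base64 value that appear below (the key name is an assumption):
```
apiVersion: v1
kind: Secret
metadata:
  name: webapp-api
type: Opaque
data:
  API_URL: aHR0cDovL3F1b3Rlcy1zZXJ2aWNlOjUwMDA=   # "http://quotes-service:5000" in base64
```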

:::info
3. With kubectl , get and describe your secret(s).
:::
To get the list of all secrets, I use `kubectl get secrets`
To describe my newly created secret, I used `kubectl describe secret webapp-api`

:::info
4. Decode your secret(s).
:::
To decode my base64 secret, I used the command `echo "aHR0cDovL3F1b3Rlcy1zZXJ2aWNlOjUwMDA=" | base64 --decode`

:::info
5. Update your Deployment to reference to your secret as environment variable.
:::
I changed the Deployment to use the Secret instead of the ConfigMap, as shown in the figure below:
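The change amounts to swapping `configMapKeyRef` for `secretKeyRef`, roughly:
```
env:
  - name: API_URL                  # hypothetical variable name
    valueFrom:
      secretKeyRef:
        name: webapp-api
        key: API_URL
```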

:::info
6. Make sure that you are able to see your secret inside pod.
:::
To check this, I connected to a pod and retrieved the list of loaded environment variables:

:::info
7. Create a secret from json file. Hint: use configMap logic that you learned. Repeat steps 3, 5 and 6.
:::
- Creating secret.json file

- Creating secret based on secret.json file
```
kubectl create secret generic webapp-api --from-file=secret.json
```

- Implementing the secret file in the Deployment

- Check in the pods if secret exists

# Task 10 - k8s Job
:::info
2. Prepare and apply a Job manifest that creates and run a new temporal pod that to some work and then it’s gone.
:::
I created a Job manifest to copy a file.
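The manifest itself was attached as a screenshot; a minimal sketch of a Job that copies a file and then exits (name, image, and paths are placeholders):
```
apiVersion: batch/v1
kind: Job
metadata:
  name: copy-file-job              # hypothetical name
spec:
  template:
    spec:
      containers:
        - name: copy-file
          image: busybox
          # copy an existing file to a temporary location, then the pod completes
          command: ["sh", "-c", "cp /etc/hostname /tmp/hostname-copy && echo done"]
      restartPolicy: Never
  backoffLimit: 2
```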
# Task 11 - k8s Namespace
:::info
2. Create two different Namespaces in your k8s cluster.
:::
I created 2 namespaces named **webapp1** and **webapp2**:
```
kubectl create namespace webapp1
kubectl create namespace webapp2
```
:::info
3. Using kubectl , get and describe your Namespaces.
:::
I used kubectl get to list all the namespaces:

For describing the namespaces, I used `kubectl describe namespace <namespace-name>`

:::info
4. Deploy two different applications in two different Namespaces with kubectl .
:::
To deploy my app to the different namespaces, I edited the Deployment file to add the namespace attribute.


Then applied the different deployments.
:::info
5. With kubectl , get and describe pods from different Namespaces witn -n flag.
:::
To get a list of all pods in a particular namespace, I used `kubectl get pods -n <namespace-name>`

To describe the pods in a given namespace I used the command `kubectl describe deployment/<deployment-name> -n <namespace-name>`


:::info
6. Can you see and can you connect to the resources from different Namespaces?
:::
To test this, I connected to a pod in one namespace and tried to ping a pod in the other namespace:

I had to create Services for these Deployments in each of the namespaces. Having done so, pods in these namespaces are able to communicate with each other.
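With those Services in place, a pod in one namespace can reach a Service in the other namespace by its fully qualified DNS name, `<service>.<namespace>.svc.cluster.local`. For example (the Service name here is an assumption):
```
# From a pod in webapp2: resolve a Service that lives in webapp1
nslookup frontend-service.webapp1.svc.cluster.local
```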

:::info
7. Implement a quota to limit CPU and Memory resources for all pods within namespace.
:::
To do this, I created a ResourceQuota YAML manifest as follows:
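The manifest was attached as a screenshot; a sketch of such a ResourceQuota (name and values are placeholders):
```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: webapp-quota               # hypothetical name
  namespace: webapp1
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "10"
```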

Then I applied it to both namespaces.
