# Set up
We advise you to use Linux for this TP because it is much faster.
## 1. Minikube Installation
### On Linux
Run the following commands in a terminal:
```
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```
### On Windows (Not recommended, takes a lot of time)
1. You will need to install a container or virtual machine manager (***If you already have VirtualBox, you can skip this step.***)
- Install Docker: [Docker Desktop Installer.exe](https://desktop.docker.com/win/main/amd64/Docker%20Desktop%20Installer.exe)
2. Download Minikube:
- Install Minikube: [minikube-installer.exe](https://storage.googleapis.com/minikube/releases/latest/minikube-installer.exe)
- Add the `minikube.exe` binary to your `PATH` by opening Windows PowerShell as administrator and running this command:
```
$oldPath = [Environment]::GetEnvironmentVariable('Path', [EnvironmentVariableTarget]::Machine)
if ($oldPath.Split(';') -inotcontains 'C:\minikube'){
[Environment]::SetEnvironmentVariable('Path', $('{0};C:\minikube' -f $oldPath), [EnvironmentVariableTarget]::Machine)
}
```
### Getting started
Verify installation: `minikube version`
## 2. Kubectl Installation
### On Linux
Run the following commands in a terminal:
```
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```
### On Windows
Follow the instructions at: https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#install-kubectl-binary-with-curl-on-windows
### Getting started
Verify installation: `kubectl version --client`
# Exercise
## 1. Deploy application:
In this exercise, you will learn how to write manifest files for a Deployment, Pod, Service, and Ingress. In addition, you will also get a feel for the high availability of Kubernetes.
### 1.1. Deployment
1. Complete the deployment.yaml with this information:
- You will write a deployment
- Number of pods: >= 4 (for later use)
- Image: `hashicorp/http-echo`
- Port: 5678
- Name and label: as you wish
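As a reference, here is a minimal sketch of what such a `deployment.yaml` could look like. The names and labels are placeholders to adapt, and we assume the echo text is set with `-text` (check the `hashicorp/http-echo` image documentation for its exact arguments):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment          # placeholder name
spec:
  replicas: 4                    # >= 4 pods for later use
  selector:
    matchLabels:
      app: echo                  # placeholder label; must match the pod template below
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo
          args: ["-text=HELLO_WORLD"]   # assumption: text passed via -text
          ports:
            - containerPort: 5678
```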
2. Create 3 or more nodes (for later use)
```
minikube start --nodes 3
```
***Note: You can shut down minikube with the following commands (if needed):***
```
minikube stop
minikube delete
```
- Verify there are 3 nodes: `kubectl get node`
- You can open a dashboard to get a better view of what is happening: `minikube dashboard`
3. Apply deployment.yaml
```
kubectl apply -f <your yaml file>
```
4. Observe your deployment and pod:
<u>***Dashboard:***</u> Look at the Workloads
<u>***Command Line:***</u>
```
kubectl get deployment
kubectl get pod -o wide
```
### 1.2. Service
1. Complete the service.yaml with this information:
- You will write a service
- Name: as you wish
- Port: 5678
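A minimal sketch of such a `service.yaml` (the service name is a placeholder, and the `selector` must match whatever labels you gave your deployment's pods):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-service       # placeholder name
spec:
  selector:
    app: echo              # must match your deployment's pod labels
  ports:
    - port: 5678
      targetPort: 5678     # the containerPort of http-echo
```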
2. Apply service.yaml
3. Observe your service
<u>***Dashboard:***</u> Look at the Service
<u>***Command Line:***</u>
```
kubectl get service -o wide
```
### 1.3. Ingress
1. Complete the ingress.yaml with this information:
- You will write an ingress
- Name: as you wish
- Port: 5678
- Path: as you wish
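A sketch of what `ingress.yaml` could look like, assuming the NGINX ingress class installed in the next step (the ingress name, path, and backend service name are placeholders to replace with your own):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress             # placeholder name
spec:
  ingressClassName: nginx        # matches the NGINX ingress controller installed below
  rules:
    - http:
        paths:
          - path: /echo          # placeholder path
            pathType: Prefix
            backend:
              service:
                name: echo-service   # must match the name of your service
                port:
                  number: 5678
```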
2. Install NGINX ingress controller
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
```
3. Apply ingress.yaml
4. Open another terminal and open a tunnel
```
minikube tunnel
```
5. Now you can connect to the deployed application. Use the IP address of your ingress (`kubectl get ingress`)
```
curl <IP_address>:<port>/<path>
```
#### Expected: You will see HELLO_WORLD in your terminal
### 1.4. High availability:
1. Before continuing, look again at your pods and nodes. Note which pod is on which node.
2. Now delete a node that has at least 1 pod by running:
> (**CAUTION**: **Do not delete the control plane**, as minikube currently doesn't support multi-control-plane clusters.) I'm begging you :<
```
minikube node delete <node-name>
```
3. Observe your pods. What do you see?
#### Expected: the pods on the deleted node are rescheduled onto the other nodes (check again using the dashboard or the command line)
## 2. Deploy Kafka
In this exercise, you will learn how to write manifest files for a Namespace, StatefulSet, PV/PVC, and headless Service. In addition, you will also get a feel for the scalability and disaster recovery of Kubernetes.
### 2.1. Preparation
1. Delete the minikube cluster from the last exercise:
```
minikube stop
minikube delete
```
2. Start a new kubernetes cluster
```
minikube start
```
### 2.2. StatefulSet
1. Apply namespace.yaml
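In case you are curious what it contains, a namespace manifest is very short; assuming the provided `namespace.yaml` follows the usual shape, it is essentially:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kafka    # the namespace used in the rest of this exercise
```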
2. Set `kafka` as your default namespace
```
kubectl config set-context --current --namespace=kafka
```
3. Take a look at statefulset.yaml and then apply it
4. Verify that the statefulset is well deployed
a. The command `kubectl get pods -l app="kafka"` should display similar output
```
NAME READY STATUS RESTARTS AGE
kafka-0 1/1 Running 0 4m
kafka-1 1/1 Running 0 4m
kafka-2 1/1 Running 0 4m
```
b. The command `kubectl get pvc -l app=kafka` should display similar output
```
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-kafka-0 Bound pvc-54e2de86-6c02-4605-9904-00e5cd1f44d1 100Mi RWO standard 27h
data-kafka-1 Bound pvc-e4ef092d-ca3e-4603-b2dc-9207676d4e14 100Mi RWO standard 27h
data-kafka-2 Bound pvc-ccf06bd0-f992-4554-be74-0905f701e824 100Mi RWO standard 27h
```
### 2.3. Create data in the PV
**Objective :** Enter a shell of pod and perform manual action
1. Tap into the shell of the `kafka-2` pod in one of the following ways:
- <u>***Command Line:***</u> `kubectl exec --stdin --tty <pod_name> -- /bin/bash`
OR
- <u>***Dashboard:***</u> open the pod's detail page and use the **Exec** action
2. Create a file in the persistent volume; the mount path of the **persistent volume** is defined at `spec.containers.volumeMounts` in the manifest of the StatefulSet
```
cd <path_to_PV> #TODO
touch dmm.txt
```
### 2.4. Scale down to simulate disaster
1. Scale down the StatefulSet to 2 replicas with the following command: `kubectl scale statefulset <name_of_statefulset> --replicas=2`
2. The command`kubectl get statefulset kafka` should display similar output
```
NAME READY AGE
kafka 2/2 5m
```
3. Verify that all 3 PVCs are still alive with `kubectl get pvc`
```
# EXPECTED OUTPUT
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-kafka-0 Bound pvc-d07b8ed3-1b9b-4f82-b8d3-9f58c617b9a6 100Mi RWO standard 22m
data-kafka-1 Bound pvc-82223221-d996-4d0b-8406-1cad1f8ef50b 100Mi RWO standard 22m
data-kafka-2 Bound pvc-398288fe-7bd5-416f-9392-e2bcee7a9c33 100Mi RWO standard 22m
```
### 2.5. Find data in the PV
**Objective:** find `dmm.txt` in the PV
1. Each pod in a StatefulSet has a dedicated PVC named in the following format: `data-<pod_name>`. You can inspect the PV by:
+ **Retrieve the information in PV through the <u>Dashboard</u>**
a. Get host path of the PV
b. Open a bash session in the docker container of minikube cluster
c. Find `dmm.txt`
**OR**
+ **Retrieve the information by mounting the PV to a debug pod**
a. Find PV name with `kubectl get pvc`
b. Start a shell session in the debug pod; the content of the PV is in the folder given by `spec.containers.volumeMounts.mountPath`.
c. Exit the shell with command `exit`
d. **IMPORTANT :** Delete the debug pod with `kubectl delete pod debug-pod`
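A possible sketch of such a debug pod. The pod name, image, and mount path here are illustrative choices, and the `claimName` must match one of the PVCs listed by `kubectl get pvc`:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
  namespace: kafka
spec:
  containers:
    - name: debug
      image: busybox
      command: ["sleep", "3600"]    # keep the pod alive so we can exec into it
      volumeMounts:
        - name: data
          mountPath: /mnt/data      # illustrative mount path; look for dmm.txt here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-kafka-2     # the PVC of the kafka-2 pod
```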
### 2.6. Scale up
**Objective:** Verify that StatefulSet will automatically pick up existing PV.
1. Scale up the StatefulSet to 3 replicas and verify that the StatefulSet is scaled up correctly by checking pods in the cluster.
2. Tap into the shell of **kafka-2** pod and verify that `dmm.txt` is there.
### 2.7. Write correct config file for the client
1. Complete secret.yaml and apply it
- name: as you wish
- <TODO_data_key>: as you wish
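As a sketch of the shape expected: a Secret's data key becomes a file name once the secret is mounted into a pod. The name, key, and contents below are placeholders, not the actual client configuration:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kafka-client-config      # placeholder name
  namespace: kafka
stringData:
  client.properties: |           # placeholder data key; becomes the mounted file's name
    # TODO: put the kafka client properties here
```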
2. Verify that the secret is well created
```
kubectl get secret -n kafka
```
3. Create a ***kafka client CLI*** with the following template, **please complete the TODO part in the code to mount the secret to the pod correctly.**
4. Apply the manifest of the kafka client
5. Run the next kafka commands in the `kafka-cli` pod to verify that the config file is functioning correctly.
6. Create a topic (**remember to change `<TODO_config_key>`**):
```
kafka-topics --create --topic test --partitions 3 --replication-factor 3 --bootstrap-server kafka-headless:9092 --command-config /etc/kafka/secrets/<TODO_config_key>.properties
```
+ Expected error:
```
WARN [AdminClient clientId=adminclient-1] Error connecting to node TODO:9092 (id: 1 rack: null) (org.apache.kafka.clients.NetworkClient)
```
**This is because the kafka broker is not advertised in the correct domain name of the pod**
### 2.8. Understanding StatefulSet headless service and domain name
**Objective :** Advertise the kafka broker at the correct domain name
**Context :** Each kafka broker must advertise at its correct domain name by setting the env var `KAFKA_ADVERTISED_LISTENERS` so that the client could make request to it.
The domain name of the pod can be determined by **<pod_name>.<headless_service>.<namespace>.svc.<cluster_domain>**
Below are some examples:
| Pod Hostname | Cluster Domain | Service (ns/name) | StatefulSet (ns/name) | StatefulSet Domain | Pod DNS |
|----------------|-----------------|--------------------------|-----------------------|--------------------------------------|----------------------------------------------|
| web-{0..N-1} | cluster.local | default/nginx | default/web | nginx.default.svc.cluster.local | web-{0..N-1}.nginx.default.svc.cluster.local |
| web-{0..N-1} | cluster.local | foo/nginx | foo/web | nginx.foo.svc.cluster.local | web-{0..N-1}.nginx.foo.svc.cluster.local |
| web-{0..N-1} | kube.local | foo/nginx | foo/web | nginx.foo.svc.kube.local | web-{0..N-1}.nginx.foo.svc.kube.local |
1. Advertise the kafka broker at the correct domain name of the pod by correcting the following line in the **statefulset.yaml** (Replace only the TODO)
```
# Please substitute "TODO" with the correct domain name
export KAFKA_ADVERTISED_LISTENERS=SASL://TODO:9092
```
Hint:
+ The default cluster domain is `cluster.local`
+ You can get the pod name of the StatefulSet with env var `POD_NAME`.
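For reference, such a `POD_NAME` variable is normally injected through the Kubernetes downward API; assuming the provided `statefulset.yaml` follows this pattern, the env entry looks like:
```yaml
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name   # the pod's own name, e.g. kafka-0
```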
2. Reapply the manifest of the StatefulSet
### 2.9. VERIFY THE KAFKA CLUSTER IS FUNCTIONAL !!!
1. Create a topic (**TODO**):
```
kafka-topics --create --topic test --partitions 3 --replication-factor 3 --bootstrap-server kafka-headless:9092 --command-config /etc/kafka/secrets/<TODO_config_key>.properties
```
+ Expected output:
```
Created topic test.
```
2. Produce a message to the kafka broker (**TODO**):
```
kafka-console-producer --bootstrap-server kafka-headless:9092 --topic test --producer.config /etc/kafka/secrets/<TODO_config_key>.properties
```
+ Expected output:
```
> #ENTER_SOME_RANDOM_STRING
```
3. Consume message from the kafka broker (**TODO**):
```
kafka-console-consumer --bootstrap-server kafka-headless:9092 --topic test --consumer.config /etc/kafka/secrets/<TODO_config_key>.properties --from-beginning
```
+ Expected output (Something that you entered):
```
xfh
dtr
h
f
gs
dfg
rg
dgf
qsfef
^CProcessed a total of 9 messages
```