# Cedacri CKAD Mock Exam - Group A
## Before we start
- Install autocomplete
- `sudo bash -c 'kubectl completion bash >/etc/bash_completion.d/kubectl'`
- Install aliases
- `echo 'alias k=kubectl' >>~/.bashrc`
- `echo 'alias kx=kubectx' >>~/.bashrc`
- `echo 'alias kbs=kubens' >>~/.bashrc`
- `echo 'complete -F __start_kubectl k' >>~/.bashrc`
- Install jq
- `sudo apt-get install -y jq`
- Install yq
- `sudo wget https://github.com/mikefarah/yq/releases/download/v4.13.4/yq_linux_amd64 -O /usr/bin/yq && sudo chmod +x /usr/bin/yq`
## Victims order
1. massimiliano ortenzi
1. stefano riccardi
1. gianmarco tonelli
1. angela ruscitto
1. stefano lombardi
## Question 0 - Namespaces
Create a namespace called `alpha-x123555`.
After the namespace creation, create a list of all the existing namespaces in `/home/workshop/namespaces.txt`.
The list should contain only the names of the namespaces, one per line:
```txt
## /home/workshop/namespaces.txt
alpha-x123555
default
everything-works
ingress-nginx
kube-node-lease
kube-public
kube-system
local-path-storage
```
### Solution
`vi esercizio0.yaml`
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: alpha-x123555
```
`k create -f esercizio0.yaml`
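Equivalently, the namespace can be created imperatively:
```bash
kubectl create namespace alpha-x123555
```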
`k get namespace -o custom-columns=:.metadata.name | tail -n +2 > /home/workshop/namespaces.txt`
Alternative solutions:
with `custom-columns`:
```bash
kubectl get ns --no-headers -o custom-columns=":metadata.name" > /home/workshop/namespaces.txt
```
with `json-path`:
```bash
kubectl get ns -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' > /home/workshop/namespaces.txt
```
with `awk`:
```bash
kubectl get ns --no-headers | awk '{ print $1 }' > /home/workshop/namespaces.txt
```
## Question 1 - Pod
Deploy a pod with name `nginx-xh78` with the image `registry.sighup.io/workshop/nginx:alpine` in the default namespace.
The container inside the pod should be named `nginx-xh78-container`.
After the pod creation, write a script `/home/workshop/get_status.sh` that uses `kubectl` to retrieve the status of the `nginx-xh78` pod when invoked.
### Solution
`k run nginx-xh78 --image=registry.sighup.io/workshop/nginx:alpine --dry-run=client -o yaml > nginx.yaml # generate the YAML`
`cat nginx.yaml`
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-xh78
  name: nginx-xh78
spec:
  containers:
  - image: registry.sighup.io/workshop/nginx:alpine
    name: nginx-xh78
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
Edit the container name to match the requirement (`nginx-xh78-container`):
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx-xh78
  name: nginx-xh78
spec:
  containers:
  - image: registry.sighup.io/workshop/nginx:alpine
    name: nginx-xh78-container
  dnsPolicy: ClusterFirst
  restartPolicy: Always
```
`k apply -f nginx.yaml # create the pod from the YAML`
The status script (`get_status.sh`):
```bash
#!/bin/bash
kubectl get pods nginx-xh78 --no-headers | awk '{print $3}'
```
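Alternatively, the script can read the pod phase directly with `jsonpath` (note that `phase` is coarser than the STATUS column, which can also show reasons such as `CrashLoopBackOff`):
```bash
#!/bin/bash
kubectl get pod nginx-xh78 -o jsonpath='{.status.phase}'
```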
## Question 2 - Pod Labels
Deploy a pod with name `child` with the image `registry.sighup.io/workshop/redis:alpine` in the default namespace.
The pod should have a label `fruit=pineapple`.
After the pod creation, create a replicaset called `parent` that will adopt the pod `child`.
Check that the desired and current pod counts in the ReplicaSet are correct.
Try to delete the pod `child` and see if another one is created.
### Solution
```
vi manifest.yaml
```
Contents of `manifest.yaml`:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: child
  labels:
    fruit: pineapple
spec:
  containers:
  - name: redis
    image: registry.sighup.io/workshop/redis:alpine
```
Create the pod:
```
k apply -f manifest.yaml
```
Create the ReplicaSet:
```
vi replica.yaml
```
Contents of `replica.yaml`:
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: parent
spec:
  # modify replicas according to your case
  replicas: 1
  selector:
    matchLabels:
      fruit: pineapple
  template:
    metadata:
      labels:
        fruit: pineapple
    spec:
      containers:
      - name: redis
        image: registry.sighup.io/workshop/redis:alpine
```
Apply the ReplicaSet:
```
k apply -f replica.yaml
```
List the pods:
```
k get pod
NAME    READY   STATUS    RESTARTS   AGE
child   1/1     Running   0          10m
```
List the ReplicaSet:
```
k get rs
NAME     DESIRED   CURRENT   READY   AGE
parent   1         1         1       63s
```
Check that the ReplicaSet is actually configured correctly:
```
k describe rs parent
Name:         parent
Namespace:    default
Selector:     fruit=pineapple
Labels:       <none>
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  fruit=pineapple
  Containers:
   redis:
    Image:        registry.sighup.io/workshop/redis:alpine
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:           <none>
```
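The adoption can also be confirmed by inspecting the pod's `ownerReferences`, which the ReplicaSet controller sets on adopted pods:
```bash
kubectl get pod child -o jsonpath='{.metadata.ownerReferences[0].name}'
# expected output: parent
```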
Delete the `child` pod:
```
k delete pod child
```
Verify that a replacement pod has been created:
```
k get pod
NAME           READY   STATUS    RESTARTS   AGE
parent-fr9gj   1/1     Running   0          12s
```
## Question 3 - Pod Placement
Deploy a pod with name `busybox-jke3` with the image `registry.sighup.io/workshop/busybox` in the default namespace.
The pod should be scheduled **only** on the **master nodes**.
The solution should work in case the number of master nodes increases in the future.
Do not edit the master node definition.
### Solution
Generate the pod YAML, then add the toleration and node affinity:
```bash
k run busybox-jke3 --image=registry.sighup.io/workshop/busybox --dry-run=client -o yaml > busybox.yaml
```
Edited YAML (the image is corrected to the one required by the exercise, and a `sleep` command keeps the busybox container running):
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox-jke3
  name: busybox-jke3
  namespace: default
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.kubernetes.io/master
            operator: "Exists"
  containers:
  - image: registry.sighup.io/workshop/busybox
    name: busybox-jke3
    command: ["sleep", "3600"] # keep the container running; busybox exits immediately otherwise
    resources: {}
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  restartPolicy: Always
```
Verify that the pod landed on the correct (master) node:
```
k get pod -o wide
NAME           READY   STATUS    RESTARTS        AGE   IP                NODE              NOMINATED NODE   READINESS GATES
busybox-jke3   1/1     Running   0               16s   192.168.120.140   ip-10-0-101-191   <none>           <none>
nginx          1/1     Running   1 (4h10m ago)   22h   192.168.39.80     ip-10-0-1-44      <none>           <none>
```
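To see why this works, inspect the master node: it carries both the `node-role.kubernetes.io/master` label (matched by the affinity) and the corresponding `NoSchedule` taint (matched by the toleration):
```bash
kubectl get node ip-10-0-101-191 --show-labels
kubectl describe node ip-10-0-101-191 | grep Taints
```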
## Question 4 - Static pod
Create a namespace `static`.
Create a static pod in the first worker node in the namespace `static`.
The pod should:
- be called `static-pod`
- run the image `registry.sighup.io/workshop/busybox`
- execute the command `sleep 1000`
### Solution
The manifest goes in `/etc/kubernetes/manifests` on the node. To confirm the static pod path, check the kubelet configuration:
```bash
sudo systemctl status kubelet
sudo less /var/lib/kubelet/config.yaml   # look for staticPodPath
```
Remember to create the namespace first (`k create ns static`). In this case the manifest was written elsewhere and copied into place:
```bash
sudo cp static-pod.yaml /etc/kubernetes/manifests/
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
  namespace: static
  labels:
    role: myrole
spec:
  containers:
  - name: web
    image: registry.sighup.io/workshop/busybox
    command: ["/bin/sh"]
    args: ["-c", "sleep 1000"]
```
Unlike a regular pod, no `apply` is needed: the kubelet starts it as soon as the manifest appears in the directory.
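The mirror pod is visible from the API server with the node name appended to the pod name:
```bash
kubectl get pods -n static
# e.g. static-pod-<worker-node-name>   1/1   Running
```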
## Question 5 - Job
Create a namespace `red`.
Create a job called `red` in the `red` namespace.
The job should run the image `registry.sighup.io/workshop/busybox` and execute `sleep 2 && echo done`.
The job should run 10 times and execute at most 3 runs in parallel.
Check the job's logs when it has terminated.
### Solution
Namespace manifest (`esercizio5.yaml`):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: red
```
```bash
k apply -f esercizio5.yaml
k get namespace
kubens red
```
Job manifest (`red.yaml`):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: red
spec:
  completions: 10
  parallelism: 3
  template:
    spec:
      containers:
      - name: red
        image: registry.sighup.io/workshop/busybox
        command: ["/bin/sh", "-c", "sleep 2 && echo done"]
      restartPolicy: Never
```
```bash
k apply -f red.yaml
```
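When the runs terminate, check the completions and read the logs via the `job-name` label that the Job controller adds to its pods:
```bash
k get jobs -n red   # should show 10/10 completions
k logs -n red -l job-name=red --max-log-requests=10
```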
## Question 6 - CronJob
Create a `rooster` CronJob that every day at `6:00 AM` executes `date; echo chicchirichi`.
You can use the `registry.sighup.io/workshop/busybox` image in the definition.
### Solution
#creazione dello yaml necessario:
workshop@ip-10-0-101-240:~/esercizio_6$ cat rooster.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: rooster
spec:
schedule: "0 6 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: rooster
image: registry.sighup.io/workshop/busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo chicchiricchi
restartPolicy: OnFailure
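The same CronJob can also be created imperatively:
```bash
kubectl create cronjob rooster --image=registry.sighup.io/workshop/busybox \
  --schedule="0 6 * * *" -- /bin/sh -c 'date; echo chicchirichi'
```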
## Question 7 - Expose deployment internally via ClusterIP
Create the namespace `beta`.
Create a deployment `cache` with the label `flavour=cache` that uses the image `registry.sighup.io/workshop/redis:alpine` in the namespace `beta`. The container inside the pod template definition should expose port `6379`.
Expose the deployment inside the cluster with a service `cache-service` on port `6379`.
After the deployment and service are created, scale the number of replicas of the `cache` deployment to 3.
### Solution
Create the namespace `beta`:
```bash
kubectl create ns beta
```
Create a `cache.yaml` for the deployment:
```yaml
# file: cache.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cache
  namespace: beta
  labels:
    flavour: cache
spec:
  selector:
    matchLabels:
      flavour: cache
  template:
    metadata:
      labels:
        flavour: cache
    spec:
      containers:
      - image: registry.sighup.io/workshop/redis:alpine
        name: cache
        ports:
        - containerPort: 6379
```
Apply the manifest:
```bash
kubectl apply -f cache.yaml
```
Expose the deployment via a service:
```bash
kubectl expose deployment cache --name=cache-service --port=6379 --target-port=6379 --namespace=beta
```
or creating a `cache_svc.yaml` file:
```yaml
# file: cache_svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: cache-service
  namespace: beta
  labels:
    flavour: cache
spec:
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP
  selector:
    flavour: cache
```
```bash
kubectl apply -f cache_svc.yaml
```
To scale the deployment:
```bash
kubectl scale deployment cache -n beta --replicas=3
```
or edit `cache.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cache
  namespace: beta
  labels:
    flavour: cache
spec:
  replicas: 3
  selector:
    matchLabels:
      flavour: cache
  template:
    metadata:
      labels:
        flavour: cache
    spec:
      containers:
      - image: registry.sighup.io/workshop/redis:alpine
        name: cache
        ports:
        - containerPort: 6379
```
and re-apply:
```bash
kubectl apply -f cache.yaml
```
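To verify that the service picks up all three replicas, check its endpoints:
```bash
kubectl get endpoints cache-service -n beta
# should list three <pod-ip>:6379 addresses
```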
## Question 8 - Expose deployment externally via NodePort
Create the namespace `hello`.
Create a deployment `hello-world` with the label `app=hello` that uses the image `gcr.io/google-samples/node-hello:1.0` in the namespace `hello`. The container inside the pod template definition should expose port `8080`.
Expose the deployment outside the cluster with an appropriate service `hello-service`.
The service should be mapped on the port `30003` of the nodes.
### Solution
per creare il namespace:
k crearte namespace hello
per spostarci nel namescpace:
kubens hello
manifest deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world
namespace: hello
labels:
app: hello
spec:
selector:
matchLabels:
app: hello
template:
metadata:
labels:
app: hello
spec:
containers:
- image: gcr.io/google-samples/node-hello:1.0
name: hello-world
ports:
- containerPort: 8080
manifest servizio
apiVersion: v1
kind: Service
metadata:
name: hello-service
namespace: hello
spec:
type: NodePort
selector:
app: hello
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- port: 8080
targetPort: 8080
# Optional field
# By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
nodePort: 30003
per testare
curl http://10.0.1.36:30003
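The node IPs to use in the test can be listed with:
```bash
kubectl get nodes -o wide
```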
## Question 9 - Deployment update and rollback
1. Create a deployment in the default namespace with the image `registry.sighup.io/workshop/nginx:1.7.9` with 3 replicas called `nginx`.
2. Execute a rolling update saving the change cause to the image `registry.sighup.io/workshop/nginx:1.9.9`.
3. In case of problems rollback to the previous version.
### Solution
`deployment.yaml`:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-9
  template:
    metadata:
      labels:
        app: nginx-9
    spec:
      containers:
      - image: registry.sighup.io/workshop/nginx:1.7.9 # initial version
        name: nginx
```
```bash
k apply -f deployment.yaml
```
Set the new image imperatively, saving the change cause with `--record=true`:
```
k set image deployment/nginx nginx=registry.sighup.io/workshop/nginx:1.9.9 --record=true
```
Realizing the image was wrong, we want to roll back:
```
k rollout history deployment
deployment.apps/nginx
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deployment/nginx nginx=registry.sighup.io/workshop/nginx:1.9.9 --record=true
```
```
k rollout undo deployment nginx
deployment.apps/nginx rolled back
k rollout history deployment
deployment.apps/nginx
REVISION  CHANGE-CAUSE
4         kubectl set image deployment/nginx nginx=registry.sighup.io/workshop/nginx:1.9.9 --record=true
5         <none>
```
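The rollout (and the rollback) can be monitored with:
```bash
k rollout status deployment nginx
```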
## Question 10 - Set requests and limits
Create the namespace `blue`.
Create a deployment `blue` with `3` replicas in the `blue` namespace that uses the image `registry.sighup.io/workshop/httpd:latest`. The container should be named `blue-container` and have memory request of `20Mi` and a memory limit of `50Mi`.
### Solution
Create the namespace:
```bash
kubectl create namespace blue
```
Create the deployment with a `blue.yaml` file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
  namespace: blue
  labels:
    app: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue-container
        image: registry.sighup.io/workshop/httpd:latest
        resources:
          requests:
            memory: "20Mi"
          limits:
            memory: "50Mi"
```
Verify the resources:
```bash
kubectl describe deployments.apps blue -n blue
```
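Or read the requests and limits straight from the pod template with `jsonpath`:
```bash
kubectl get deployment blue -n blue \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'
```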
## Question 11 - Troubleshooting applications
Inside the namespace `everything-works` there is a `website` deployment which is currently not working.
Identify the problem and fix it.
### Solution
non era presente l'immagine
kubectl set image deployment/website website=registry.sighup.io/workshop/nginx:1.7.9 --record=true
k edit deployment website (non era presente sul master)
sui parametri i probe abbiamo messo come path / e non /anotherpath o /apath
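A general triage flow for a broken deployment, as a sketch (the pod label selector below is a placeholder to fill in from the deployment spec):
```bash
kubectl get pods -n everything-works                     # spot ImagePullBackOff / CrashLoopBackOff
kubectl describe deployment website -n everything-works  # check image, probes, replica status
kubectl describe pod <pod-name> -n everything-works      # events explain pull/probe failures
kubectl logs <pod-name> -n everything-works              # application-level errors
```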
## Question 12 - InitContainer and Probes
1. Create a pod in the namespace `default` called `slowstart`. The pod should mount an `emptyDir` volume called `shared` at `/usr/share/nginx/html`. Moreover, the pod should have a container called `nginx` that:
- runs the image `registry.sighup.io/workshop/nginx`
- has a liveness probe that performs an `httpGet` on port `80` at `/filedinamico.html`
2. Run the pod; it should end up in `CrashLoopBackOff` state as the liveness probe keeps failing.
3. Add an initContainer called `init` that mounts the `shared` volume and creates the file `/usr/share/nginx/html/filedinamico.html`. You can use the image `registry.sighup.io/workshop/busybox`.
### Solution
The manifest:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slowstart
  namespace: default
spec:
  containers:
  - image: registry.sighup.io/workshop/nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: shared
    livenessProbe:
      httpGet:
        path: /filedinamico.html
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
  initContainers:
  - name: init
    image: registry.sighup.io/workshop/busybox
    command: ['sh', '-c', "echo ciao > /usr/share/nginx/html/filedinamico.html"]
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: shared
  volumes:
  - name: shared
    emptyDir: {}
```
Open a bash shell inside the `slowstart` pod to verify that `filedinamico.html` was created. The `-c` flag selects the container; otherwise `kubectl` picks the default one:
```
kubectl exec --stdin --tty slowstart -- /bin/bash
```
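Or check the file the initContainer created with a single `exec`:
```bash
kubectl exec slowstart -c nginx -- cat /usr/share/nginx/html/filedinamico.html
# expected output: ciao
```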
## Question 13 - Sidecar
Create a pod called `writer-reader` in the `default` namespace with following specifications:
- Use an emptyDir volume called `shared`.
- Have a container `writer` running `registry.sighup.io/workshop/busybox` that mounts the `shared` volume at `/opt/app_logs/` and writes `hello` in a file `/opt/app_logs/wave.log`
- Another container `reader` running `registry.sighup.io/workshop/ubuntu` that outputs the file created by the other container to stdout.
Extract the logs of the `reader` container at `/home/workshop/multi.logs`.
### Solution
`writer-reader.yaml`:
```yaml
# file: writer-reader.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: writer-reader
  namespace: default
spec:
  containers:
  - image: registry.sighup.io/workshop/busybox
    name: writer
    volumeMounts:
    - mountPath: /opt/app_logs/
      name: shared
    command: ['sh', '-c', 'while true; do echo hello >> /opt/app_logs/wave.log; sleep 1; done;']
  - name: reader
    volumeMounts:
    - mountPath: /opt/app_logs/
      name: shared
    image: registry.sighup.io/workshop/ubuntu
    command: ['sh', '-c', 'while true; do cat /opt/app_logs/wave.log; sleep 1; done;']
  volumes:
  - name: shared
    emptyDir: {}
```
Extract the logs:
`k logs writer-reader -c reader > /home/workshop/multi.logs`
## Question 14 - DNS
Create a deployment in the namespace `default` called `apache` that uses the image `registry.sighup.io/workshop/httpd:latest`.
The container inside the pod template definition should expose port `80`.
Expose the deployment internally with a service `apache-service` on port `8080`.
Verify the DNS resolution of the `apache-service` via `nslookup` using a temporary pod running the image `registry.sighup.io/workshop/busybox`. Save `nslookup` output at `/home/workshop/dnsresolution.txt`
### Solution
We used a single manifest for both the deployment and the service:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: registry.sighup.io/workshop/httpd:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
  - protocol: TCP
    port: 8080      # service port requested by the exercise
    targetPort: 80  # container port
```
With
`kubectl run nslookup --image=registry.sighup.io/workshop/busybox --command -- nslookup apache-service`
the pod terminates in error (the command exits and the pod keeps restarting), so we changed approach:
1. `kubectl run nslookup --image=registry.sighup.io/workshop/busybox --command -- sleep 1000`
2. `kubectl exec --stdin --tty nslookup -- nslookup apache-service > /home/workshop/dnsresolution.txt`
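A one-shot alternative, as a sketch: `--rm` with `--restart=Never` runs the command once and deletes the pod afterwards (the `tmp` pod name is arbitrary, and the redirect may capture some extra kubectl status output):
```bash
kubectl run tmp --rm -it --restart=Never \
  --image=registry.sighup.io/workshop/busybox \
  -- nslookup apache-service > /home/workshop/dnsresolution.txt
```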
## Question 15 - ConfigMap
1. Create a ConfigMap called `beta-5000` in the namespace `default` with the following values:
- `COLOR=red`
- `FLAVOUR=garlic`
2. Create a pod `configmap-reader` in the namespace `default` that uses the ConfigMap `beta-5000`, mounting `COLOR` and `FLAVOUR` as environment variables inside a container that echos these values every minute.
You can use the `registry.sighup.io/workshop/busybox` image for the container inside the `configmap-reader` pod.
> Useful link: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
### Solution
The ConfigMap YAML (the keys match the variable names requested by the exercise):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: beta-5000
  namespace: default
data:
  # property-like keys; each key maps to a simple value
  COLOR: red
  FLAVOUR: garlic
```
The pod YAML (`podconfreader.yaml`):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-reader
  namespace: default
spec:
  containers:
  - name: reader
    image: registry.sighup.io/workshop/busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $COLOR $FLAVOUR; sleep 60; done"]
    env:
    # Define the environment variables from the ConfigMap
    - name: COLOR
      valueFrom:
        configMapKeyRef:
          name: beta-5000 # The ConfigMap this value comes from.
          key: COLOR      # The key to fetch.
    - name: FLAVOUR
      valueFrom:
        configMapKeyRef:
          name: beta-5000
          key: FLAVOUR
```
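Since the ConfigMap keys already match the desired variable names, a shorter equivalent is `envFrom`, which imports every key of the ConfigMap as an environment variable:
```yaml
    envFrom:
    - configMapRef:
        name: beta-5000
```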
## Question 16 - Secret
1. Create a secret called `secret-3fg` in the namespace `default` containing the following `secret.config` file:
```text
Hello world
Doing Kubernetes stuff
```
Create a `secret-reader` pod in the namespace `default` that mounts this secret at `/opt/secret.config` and outputs its content every minute.
### Solution
The `secret3fg.yaml` file, created imperatively with:
```bash
kubectl create secret generic secret-3fg --from-file=secret.config --dry-run=client -o yaml -n default > secret3fg.yaml
```
This yields the following YAML:
```yaml
---
apiVersion: v1
data:
  secret.config: SGVsbG8gd29ybGQKRG9pbmcgS3ViZXJuZXRlcyBzdHVmZgoK
kind: Secret
metadata:
  creationTimestamp: null
  name: secret-3fg
  namespace: default
```
Create the pod that uses the secret (`secret-reader.yaml`):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-reader
  namespace: default
spec:
  containers:
  - name: secret-reader
    image: registry.sighup.io/workshop/busybox
    volumeMounts:
    - name: secret-3fg
      mountPath: "/opt/"
      readOnly: true
    command: ["/bin/sh"]
    args: ["-c", "while true; do cat /opt/secret.config; sleep 60; done"]
  volumes:
  - name: secret-3fg
    secret:
      secretName: secret-3fg
```
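Verify the pod is reading the secret:
```bash
kubectl logs secret-reader
# prints the contents of secret.config once per minute
```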
To decode the secret value:
```
$ echo -n "SGVsbG8gd29ybGQKRG9pbmcgS3ViZXJuZXRlcyBzdHVmZgoK" | base64 -d -
```
## Question 17 - Secret Token of Service Account
Create a service account called `luke` in the namespace `default`.
Retrieve the service account token and write the base64 **decoded** token to file `/home/workshop/token`
### Solution
Service account manifest:
```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: luke
```
Verify that the token secret exists and describe it:
```
k get secrets
k describe secret luke-token-mgmz5
```
The token appears in the `Data` section of the describe output:
```
Data
====
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6InRuZUtrZjROX0xtSjkzYVBZQ2hJMk9sU1JQaGJIU21SSGZ3UER0S2JiVVkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imx1a2UtdG9rZW4tbWdtejUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibHVrZSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJkYWM0ZTZkLWZjZWQtNDA3ZS1iZjUwLTIzMDgzMTdiOWEzYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omx1a2UifQ.rFUvrLYL0w4Z0cwOVpXvmjhsUPx9MG_CYqboueWLFPJUmcbh_IClZshnyRfndXkruSD2P7waVZIHLS6vehbslVGoQLStyoPkC5o7bUqq5I6_dzWPvSofTuEafBmcVF0yCnRM13JahTjZwGPa8GMYMroFhp6VNEj29GQNHVnabMGacqBDeGu_08BCoY063jKhhTKzQixSJLGOkbU2yBJ_S-7mwYrMIzQUgr1XfUihwi7NF4220WyW47Ozm6VztmJ6ie2wYYxZYa58w6mIry4Z2WoCpMCWZwLofdfZERqxAU_Auz6qml2ycaqLdTmpG0_s4nPaoTBDhaDJM9Dy734IUA
```
To write the decoded token to the file, read it from the secret (where it is stored base64-encoded) and decode it:
```
k get secret luke-token-mgmz5 -o jsonpath='{.data.token}' | base64 -d > /home/workshop/token
```
As a sanity check, the first (header) segment of the JWT decodes to
```
{"alg":"RS256","kid":"tneKkf4N_LmJ93aPYChI2OlSRPhbHSmRHfwPDtKbbUY"}
```
which matches the result of decoding the token on https://jwt.io/.
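On clusters that still auto-create token secrets for service accounts (pre-1.24 behavior, as here), the secret name can be resolved dynamically instead of copying it by hand; a minimal sketch:
```bash
SECRET=$(kubectl get sa luke -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d > /home/workshop/token
```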
## Question 18 - Volumes
1. Create a persistent volume claim `alpha-claim` in the namespace `default` with:
- storageClass `local-path`
- access mode `ReadWriteOnce`
- Capacity `1Gi`
2. Create a pod called `volume-user` that uses the image `registry.sighup.io/workshop/nginx:alpine` that mounts this volume on `/usr/share/nginx/html`.
3. Enter the pod `volume-user` and create a file `index.html` inside the mounted directory with arbitrary content.
4. Delete and recreate the pod
5. Check that the file `/usr/share/nginx/html/index.html` inside the pod is still present.
### Solution
Create a PVC that uses a storageClassName already present on the system (`pvc.yaml`):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: alpha-claim
spec:
  storageClassName: local-path
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
Now create the pod that in turn uses the `alpha-claim` PVC (`podpvc.yaml`):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-user
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: alpha-claim
  containers:
  - name: volume-user
    image: registry.sighup.io/workshop/nginx:alpine
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
```
The storage class present on the system and used here is the following:
```
kubectl get storageclasses
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  7d19h
```
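Since the storage class uses `WaitForFirstConsumer`, the claim stays `Pending` until the pod is scheduled; verify the binding afterwards:
```bash
kubectl get pvc alpha-claim
```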
Now test that the volume does not lose its files when the pod dies.
Open `/bin/sh` in the pod's only container, go to the path backed by the PVC,
and create a file `pippo`:
```
workshop@ip-10-0-101-240:~/esercizio_18$ kubectl exec --stdin --tty volume-user -- /bin/sh
/ #
/ # cd /usr/share/nginx/html
/usr/share/nginx/html #
/usr/share/nginx/html # ls -l
total 0
/usr/share/nginx/html #
/usr/share/nginx/html # touch pippo
/usr/share/nginx/html #
/usr/share/nginx/html # ls -l
total 0
-rw-r--r-- 1 root root 0 Oct 15 11:06 pippo
```
Now delete the pod, bring up a new one, go back to the same path, and the file `pippo` is still present:
```
workshop@ip-10-0-101-240:~/esercizio_18$
workshop@ip-10-0-101-240:~/esercizio_18$ k delete pod volume-user
pod "volume-user" deleted
workshop@ip-10-0-101-240:~/esercizio_18$
workshop@ip-10-0-101-240:~/esercizio_18$ k apply -f podpvc.yaml
pod/volume-user created
workshop@ip-10-0-101-240:~/esercizio_18$
workshop@ip-10-0-101-240:~/esercizio_18$ kubectl exec --stdin --tty volume-user -- /bin/sh
/ #
/ # cd /usr/share/nginx/html
/usr/share/nginx/html #
/usr/share/nginx/html #
/usr/share/nginx/html # ls -l
total 0
-rw-r--r-- 1 root root 0 Oct 15 11:06 pippo
/usr/share/nginx/html #
```
## Question CKA - ETCD Snapshot
1. Back up etcd to `/home/workshop/etcd-backup.db`.
2. Restore etcd from the backup.
### Solution
List the etcd members:
```
$ sudo ETCDCTL_API=3 etcdctl --endpoints 10.0.101.191:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt member list
```
In table format:
`$ sudo ETCDCTL_API=3 etcdctl --endpoints 10.0.101.191:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt member list --write-out=table`
Take the backup:
`$ sudo ETCDCTL_API=3 etcdctl --endpoints 10.0.101.191:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt snapshot save /home/workshop/etcd-backup.db`
Verify the file exists:
`$ ls -alhtr /home/workshop/`
Check the snapshot status:
```
$ sudo ETCDCTL_API=3 etcdctl --endpoints 10.0.101.191:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt snapshot status /home/workshop/etcd-backup.db --write-out=table
```
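The restore step, as a minimal sketch, assuming a kubeadm cluster where etcd runs as a static pod with its manifest at `/etc/kubernetes/manifests/etcd.yaml` and its data in `/var/lib/etcd` (paths may differ on your cluster):
```bash
# Restore the snapshot into a new data directory
sudo ETCDCTL_API=3 etcdctl snapshot restore /home/workshop/etcd-backup.db \
  --data-dir /var/lib/etcd-restore

# Point etcd at the restored data: edit the static pod manifest and change
# the etcd-data hostPath volume from /var/lib/etcd to /var/lib/etcd-restore
sudo vi /etc/kubernetes/manifests/etcd.yaml

# The kubelet recreates the etcd pod automatically; verify the cluster comes back
kubectl get pods -n kube-system
```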