# Lab Session Four

> These documents are my working notes for each of our weekly sessions. Think of them as a Cliff's Notes version of Kubernetes the Hard Way with some of my own commentary. If we find anything substantive in our sessions, we'll be sure to open and link Pull Requests against the KTHW source of truth on GitHub.
>
> -jduncan

## Persistent Notes and Links

* Kubernetes The Hard Way - https://github.com/kelseyhightower/kubernetes-the-hard-way
* GDoc (internal to Google) - [go/k8s-appreciation-month](go/k8s-appreciation-month)
* This project in GitHub - https://github.com/jduncan-rva/k8s-appreciation-month
* Session One - https://hackmd.io/6NFDYrWdQC677I-arkQWmg
* Session Two - https://hackmd.io/RIEmIlXpRouNlVKhBQ3Thw
* Session Three - https://hackmd.io/xs65EKFhRi-PnEDwBdzSYQ
* Session Four - https://hackmd.io/S62A412aQb2Q5pBsVsDqgg

## Welcome to Kubernetes Appreciation Month!

In our fourth and final livestream we'll cover the following labs from KTHW:

* [Provisioning Pod Network Routes](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md)
* [Deploying the DNS Cluster Add-on](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/12-dns-addon.md)
* [Smoke Test](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/13-smoke-test.md)
* [Cleaning Up](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/14-cleanup.md)

## Provisioning Pod Network Routes

Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point, pods cannot communicate with pods running on other nodes because the required network routes are missing.

In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address. There are also [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model.
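Before running any `gcloud` commands, the per-worker addressing convention this lab uses can be sketched in plain shell. This is only an illustration of the scheme (worker `i` has internal IP `10.240.0.2i` and serves Pod CIDR `10.200.i.0/24`); the values come from the lab itself, not from querying GCP:

```shell
# Sketch: build the route table this lab will create, using its addressing
# scheme. Worker i has internal IP 10.240.0.2${i} and serves 10.200.${i}.0/24.
routes=""
for i in 0 1 2; do
  routes="${routes}10.200.${i}.0/24 -> 10.240.0.2${i} (worker-${i})
"
done
printf '%s' "$routes"
```

Each line of output is exactly one route you will create below: a destination Pod CIDR and the next-hop internal IP of the worker that owns it.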
### Creating a routing table

In this section you will gather the information required to create routes in the `kubernetes-the-hard-way` VPC network.

1. Print the internal IP address and Pod CIDR range for each worker instance.

    ```
    for instance in worker-0 worker-1 worker-2; do
      gcloud compute instances describe ${instance} \
        --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
    done
    ```

    Output:

    ```
    10.240.0.20 10.200.0.0/24
    10.240.0.21 10.200.1.0/24
    10.240.0.22 10.200.2.0/24
    ```

2. Create network routes for each application node.

    ```
    for i in 0 1 2; do
      gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
        --network kubernetes-the-hard-way \
        --next-hop-address 10.240.0.2${i} \
        --destination-range 10.200.${i}.0/24
    done
    ```

3. Confirm your routes were created correctly.

    ```
    gcloud compute routes list --filter "network: kubernetes-the-hard-way"
    ```

    Output:

    ```
    NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
    default-route-6be823b741087623  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
    default-route-cebc434ce276fafa  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   0
    kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20               1000
    kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21               1000
    kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22               1000
    ```

## Deploying the DNS Cluster Add-On

In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/), which provides DNS-based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster.

### Deploying your DNS Solution

1. Deploy the DNS cluster add-on.
    ```
    kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.7.0.yaml
    ```

    Output:

    ```
    serviceaccount/coredns created
    clusterrole.rbac.authorization.k8s.io/system:coredns created
    clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
    configmap/coredns created
    deployment.apps/coredns created
    service/kube-dns created
    ```

2. List the pods created by the `coredns` deployment. They carry the `k8s-app=kube-dns` label.

    ```
    kubectl get pods -l k8s-app=kube-dns -n kube-system
    ```

    Output:

    ```
    NAME                       READY   STATUS    RESTARTS   AGE
    coredns-5677dc4cdb-d8rtv   1/1     Running   0          30s
    coredns-5677dc4cdb-m8n69   1/1     Running   0          30s
    ```

### Verifying your DNS Solution

1. Verify DNS is functional by deploying a `busybox` Pod.

    ```
    kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
    ```

2. List the Pod to confirm it's running properly. Notice the `-l` parameter limits the output to Pods with the `run=busybox` label, which was applied automatically when you created the Pod with `kubectl run`.

    ```
    kubectl get pods -l run=busybox
    ```

3. Retrieve the full name of your `busybox` Pod.

    ```
    POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
    ```

4. Use `kubectl exec` to perform a DNS lookup for the `kubernetes` Service from inside your `busybox` Pod.

    ```
    kubectl exec -ti $POD_NAME -- nslookup kubernetes
    ```

    Output:

    ```
    Server:    10.32.0.10
    Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

    Name:      kubernetes
    Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
    ```

## Smoke Testing your Cluster

In this section you'll put your cluster through its paces to make sure all of the components are functioning properly.

### Testing Data Encryption

In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted).

1. Create a generic secret.
    ```
    kubectl create secret generic kubernetes-the-hard-way \
      --from-literal="mykey=mydata"
    ```

2. Print a hexdump of the secret directly from the `etcd` database. The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.

    ```
    gcloud compute ssh controller-0 \
      --command "sudo ETCDCTL_API=3 etcdctl get \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/etcd/ca.pem \
      --cert=/etc/etcd/kubernetes.pem \
      --key=/etc/etcd/kubernetes-key.pem \
      /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
    ```

    Output:

    ```
    00000000  2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74  |/registry/secret|
    00000010  73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e  |s/default/kubern|
    00000020  65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
    00000030  79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
    00000040  3a 76 31 3a 6b 65 79 31 3a 8c 7b 16 f3 26 59 d5  |:v1:key1:.{..&Y.|
    00000050  c9 65 1c f0 3a 04 e7 66 2a f6 50 93 4e d4 d7 8c  |.e..:..f*.P.N...|
    00000060  ca 24 ab 68 54 5f 31 f6 5c e5 5c c6 29 1d cc da  |.$.hT_1.\.\.)...|
    00000070  22 fc c9 be 23 8a 26 b4 9b 38 1d 57 65 87 2a ac  |"...#.&..8.We.*.|
    00000080  70 11 ea 06 93 b7 de ba 12 83 42 94 9d 27 8f ee  |p.........B..'..|
    00000090  95 05 b0 77 31 ab 66 3d d9 e2 38 85 f9 a5 59 3a  |...w1.f=..8...Y:|
    000000a0  90 c1 46 ae b4 9d 13 05 82 58 71 4e 5b cb ac e2  |..F......XqN[...|
    000000b0  3b 6e d7 10 ab 7c fc fe dd f0 e6 0a 7b 24 2e 68  |;n...|......{$.h|
    000000c0  5e 78 98 5f 33 40 f8 d2 10 30 1f de 17 3f 06 a1  |^x._3@...0...?..|
    000000d0  81 bd 1f 2e be e9 35 26 2c be 39 16 cf ac c2 6d  |......5&,.9....m|
    000000e0  32 56 05 7d 80 39 5d c0 a4 43 46 75 96 0c 87 49  |2V.}.9]..CFu...I|
    000000f0  3c 17 1a 1c 8e 52 b1 e8 42 6b a5 e8 b2 b3 27 bc  |<....R..Bk....'.|
    00000100  80 a6 53 2a 9f 57 d2 de a3 f8 7f 84 2c 01 c9 d9  |..S*.W......,...|
    00000110  4f e0 3f e7 a7 1e 46 b7 47 dc f0 53 d2 d2 e1 99  |O.?...F.G..S....|
    00000120  0b b7 b3 49 d0 3c a5 e8 26 ce 2c 51 42 2c 0f 48  |...I.<..&.,QB,.H|
    00000130  b1 9a 1a dd 24 d1 06 d8 34 bf 09 2e 20 cc 3d 3d  |....$...4... .==|
    00000140  e2 5a e5 e4 44 b7 ae 57 49 0a                    |.Z..D..WI.|
    0000014a
    ```

### Creating Deployments

In this section you will verify the ability to create and manage [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/). Deployments are a versatile object in Kubernetes that combine Pods with additional information like scale, upgrade strategies, and other details crucial for applications to function properly.

1. Create a Deployment for the `nginx` webserver.

    ```
    kubectl create deployment nginx --image=nginx
    ```

2. Confirm the Pod for your Deployment was successfully created.

    ```
    kubectl get pods -l app=nginx
    ```

    Output:

    ```
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-f89759699-kpn5m   1/1     Running   0          10s
    ```

### Port-Forwarding

In this section you will verify the ability to access applications remotely using port forwarding.

1. Retrieve the full name of the `nginx` pod.

    ```
    POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
    ```

2. Forward port 8080 on your local machine to port 80 of the `nginx` pod.

    ```
    kubectl port-forward $POD_NAME 8080:80
    ```

    Output:

    ```
    Forwarding from 127.0.0.1:8080 -> 80
    Forwarding from [::1]:8080 -> 80
    ```

3. In a new terminal, make an HTTP request using the forwarding address.

    ```
    curl --head http://127.0.0.1:8080
    ```

    Output:

    ```
    HTTP/1.1 200 OK
    Server: nginx/1.19.1
    Date: Sat, 18 Jul 2020 07:14:00 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Tue, 07 Jul 2020 15:52:25 GMT
    Connection: keep-alive
    ETag: "5f049a39-264"
    Accept-Ranges: bytes
    ```

4. Switch back to the previous terminal and stop the port forwarding to the `nginx` pod by pressing `<ctrl-C>`.
    ```
    Forwarding from 127.0.0.1:8080 -> 80
    Forwarding from [::1]:8080 -> 80
    Handling connection for 8080
    ^C
    ```

### Retrieving Logs

In this section you will verify the ability to [retrieve container logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/).

1. Print the `nginx` pod logs.

    ```
    kubectl logs $POD_NAME
    ```

    Output:

    ```
    ...
    127.0.0.1 - - [18/Jul/2020:07:14:00 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"
    ```

### Executing commands in a Pod

In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container).

1. Print the nginx version by executing the `nginx -v` command in the `nginx` container.

    ```
    kubectl exec -ti $POD_NAME -- nginx -v
    ```

    Output:

    ```
    nginx version: nginx/1.19.1
    ```

### Exposing applications with Services

In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).

1. Expose the `nginx` deployment using a NodePort service.

    ```
    kubectl expose deployment nginx --port 80 --type NodePort
    ```

2. Retrieve the node port assigned to the `nginx` service.

    ```
    NODE_PORT=$(kubectl get svc nginx \
      --output=jsonpath='{.spec.ports[0].nodePort}')
    ```

3. Create a firewall rule that allows remote access to the `nginx` node port.

    ```
    gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
      --allow=tcp:${NODE_PORT} \
      --network kubernetes-the-hard-way
    ```

4. Retrieve the external IP address of a worker instance.

    ```
    EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
      --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
    ```

5. Make an HTTP request using the external IP address and the `nginx` node port.
    ```
    curl -I http://${EXTERNAL_IP}:${NODE_PORT}
    ```

    Output:

    ```
    HTTP/1.1 200 OK
    Server: nginx/1.19.1
    Date: Sat, 18 Jul 2020 07:16:41 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Tue, 07 Jul 2020 15:52:25 GMT
    Connection: keep-alive
    ETag: "5f049a39-264"
    Accept-Ranges: bytes
    ```

## Cleaning Up

In this lab you will delete the compute resources created during this tutorial.

### Compute Resources

1. Delete the controller and application node compute instances.

    ```
    gcloud -q compute instances delete \
      controller-0 controller-1 controller-2 \
      worker-0 worker-1 worker-2 \
      --zone $(gcloud config get-value compute/zone)
    ```

2. Delete the external load balancer network resources.

    ```
    gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
      --region $(gcloud config get-value compute/region)

    gcloud -q compute target-pools delete kubernetes-target-pool

    gcloud -q compute http-health-checks delete kubernetes

    gcloud -q compute addresses delete kubernetes-the-hard-way
    ```

3. Delete the `kubernetes-the-hard-way` firewall rules.

    ```
    gcloud -q compute firewall-rules delete \
      kubernetes-the-hard-way-allow-nginx-service \
      kubernetes-the-hard-way-allow-internal \
      kubernetes-the-hard-way-allow-external \
      kubernetes-the-hard-way-allow-health-check
    ```

4. Delete the Pod network routes, the `kubernetes` subnet, and the `kubernetes-the-hard-way` VPC network.

    ```
    gcloud -q compute routes delete \
      kubernetes-route-10-200-0-0-24 \
      kubernetes-route-10-200-1-0-24 \
      kubernetes-route-10-200-2-0-24

    gcloud -q compute networks subnets delete kubernetes

    gcloud -q compute networks delete kubernetes-the-hard-way
    ```
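Note that the cleanup steps run in dependency order: instances first, then the load balancer pieces, then firewall rules, and the routes, subnet, and VPC network last, since GCP won't delete a network that still has dependents. A minimal sketch of that ordering (the resource-type names simply mirror the commands above):

```shell
# Sketch: the teardown order used above - dependents are deleted before the
# resources they depend on, ending with the VPC network itself.
order="instances forwarding-rules target-pools http-health-checks addresses firewall-rules routes subnets networks"
first=${order%% *}   # first word: what gets deleted first
last=${order##* }    # last word: what gets deleted last
echo "first deleted: $first"
echo "last deleted:  $last"
```

If you ever script a teardown like this, preserving that order is what keeps the final `networks delete` from failing with a "resource in use" error.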