# Lab Session Three
In the previous labs you:
* Configured needed tooling (cfssl, cfssljson, etc.)
* Created a VPC and subnet
* Deployed nodes for your control and data plane
* Created and distributed needed TLS keys and certificates
* Created and distributed needed configuration files
* Bootstrapped `etcd` as an HA service on your control plane and confirmed its functionality
In this lab, you'll complete the following KTHW labs:
* [Bootstrapping the Kubernetes Control Plane](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md)
* [Bootstrapping the Kubernetes Worker Nodes](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md)
* [Configuring kubectl for Remote Access](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/10-configuring-kubectl.md)
## Persistent Notes and links
* Kubernetes The Hard Way - https://github.com/kelseyhightower/kubernetes-the-hard-way
* GDoc (internal to Google) - [go/k8s-appreciation-month](go/k8s-appreciation-month)
* This project in Github - https://github.com/jduncan-rva/k8s-appreciation-month
* Session One - https://hackmd.io/6NFDYrWdQC677I-arkQWmg
* Session Two - https://hackmd.io/RIEmIlXpRouNlVKhBQ3Thw
* Session Three - https://hackmd.io/xs65EKFhRi-PnEDwBdzSYQ
* Session Four - https://hackmd.io/S62A412aQb2Q5pBsVsDqgg
## Bootstrapping the Control Plane
In this lab you'll configure your control plane node services for high availability. Like the previous lab, the fastest way to execute commands across nodes simultaneously is to use `tmux` and its synchronization feature.
### Setting up tmux
`tmux` is an effective tool to, among other things, execute the same command on multiple servers in parallel. It'll save you a lot of time and prevent a lot of typos during the rest of this lab. Your environment may vary a little, but the general steps should be the same. If you choose to use another method to handle multiple servers, ignore this section. If you're using [GCP Cloud Shell](https://cloud.google.com/shell) to deploy this lab, `tmux` is already installed.
#### Open up three tmux panes
The hot key for `tmux` is typically `ctrl-b`.
1. To split your current window into an additional pane, type `ctrl-b` followed by the double-quote (`"`) key. Do this twice for a total of 3 panes.
2. To evenly space the panes, type `ctrl-b` again, followed by `:select-layout even-vertical` and press `Enter`.
You should now have three evenly spaced `tmux` panes.
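If you'd rather script the layout, the same result can be achieved from a shell inside your `tmux` session (a minimal sketch; the key bindings above do the same thing):
```
# From a shell inside tmux: split the window twice, then space the panes evenly
tmux split-window -v
tmux split-window -v
tmux select-layout even-vertical
```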
#### Connect to your control plane nodes
To move between panes to make the initial connections, use `ctrl-b` and the up/down arrow keys.
1. If you're using Cloud Shell, you will have to run `gcloud init` in each `tmux` pane to connect to your project. Be sure to use the same project and region that houses your KTHW instances and VPC.
2. With this done, connect to one of your control plane or application nodes in each pane. For example:
```
gcloud compute ssh <NODE_NAME>
```
Once you're connected to all 3, synchronize your panes to type in all of them simultaneously.
#### Synchronizing your tmux panes
1. Hit `ctrl-b` on your keyboard, followed by `:setw synchronize-panes on`.
This synchronizes the 3 `tmux` panes so that anything you type is entered in all of them simultaneously. With `tmux` configured, it's time to bootstrap your control plane.
### Creating the Configuration Directory
1. Create the Kubernetes configuration directory
```
sudo mkdir -p /etc/kubernetes/config
```
### Installing Service Binaries
1. Download and install Control Plane binaries
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl"
```
2. Install your Kubernetes binaries
```
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
```
### Configuring kube-apiserver
1. Create `/var/lib/kubernetes`
```
sudo mkdir -p /var/lib/kubernetes/
```
2. Add needed files to `/var/lib/kubernetes`
```
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
```
3. The instance internal IP address is used to advertise the API server to members of the cluster. Retrieve the internal IP address for each control plane node.
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
4. Create the API Server Systemd Unit File
```
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config='api/all=true' \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
This is everything you need to configure `kube-apiserver`. The `systemd` unit file references:
* the internal IP address of each node
* the IP addresses each control plane node uses to reach `etcd`
* the certificates and keys you created
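Before moving on, you can optionally confirm that the heredoc expanded your node's internal IP into the unit file (a quick sanity check, not part of the upstream lab):
```
grep advertise-address /etc/systemd/system/kube-apiserver.service
```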
Don't start `kube-apiserver` just yet. You'll configure all the services first and start them together at the end, which prevents one service from looking for another that isn't running yet and entering an error state. Next, configure `kube-controller-manager`.
### Configuring kube-controller-manager
1. Install the configuration file
```
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```
2. Create the systemd unit file
```
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--bind-address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
Next, configure the `kube-scheduler` service.
### Configuring kube-scheduler
1. Install the `kube-scheduler` config file
```
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```
2. The `kube-scheduler` service needs a YAML configuration file to point to its `kubeconfig` file. Create that file.
```
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
```
3. Create the systemd unit file
```
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
Next, enable and start the services with `systemd`.
### Starting Control Plane Services
With the control plane services configured, it's time to enable them.
1. Reload the `systemd` unit files to recognize the new units.
```
sudo systemctl daemon-reload
```
2. Enable control plane services
```
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
```
3. Start the services. It may take up to 10 seconds for `kube-apiserver` to fully initialize, so be patient.
```
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
```
4. Check to make sure the services are up and running properly. Do this for each service.
```
systemctl status kube-apiserver
```
With your services up, you've bootstrapped your Kubernetes control plane! In the next section, you'll create a health check endpoint for your control plane nodes and set up a Load Balancer to access the API server.
### Enabling HTTP Health Checks
A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
By default, the `/healthz` endpoint doesn't require authentication.
1. Install a web server to handle HTTP health check requests.
```
sudo apt-get update
sudo apt-get install -y nginx
```
2. Create a site config for the `/healthz` endpoint
```
cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen 80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
```
3. Add the site config to `sites-available`
```
sudo mv kubernetes.default.svc.cluster.local \
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
```
4. Link the site config to the `sites-enabled` directory
```
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
```
5. Ensure nginx is enabled
```
sudo systemctl enable nginx
```
6. Ensure nginx is running
```
sudo systemctl restart nginx
```
7. Verify nginx is running
```
systemctl status nginx
```
### Verifying HTTP Health Checks
Run these commands from each control plane node.
1. Check health of control plane components with kubectl
```
kubectl get componentstatuses --kubeconfig admin.kubeconfig
```
output:
```
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
```
2. Check the /healthz HTTP endpoint
```
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```
output:
```
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sat, 18 Jul 2020 06:20:48 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
ok
```
In the next section, you'll configure Role-Based Access Control (RBAC) permissions so `kube-apiserver` can communicate with the `kubelet` on each node.
### Configuring kubelet RBAC
The steps in this section only need to be executed on a single node. Once executed, the resulting objects are stored in `etcd` and apply across the whole cluster. That means you'll need to turn off the synchronization for your tmux session. RBAC in kubernetes is part of the authorization API (`rbac.authorization.k8s.io/v1beta1`). There are lots of ways to configure authorization in Kubernetes. For the kubelet in this tutorial, you'll use the `Webhook` authorization mode, which uses the [`SubjectAccessReview` API](https://kubernetes.io/docs/admin/authorization/#checking-api-access) to determine authorization.
#### Disabling tmux synchronization
1. Press `<ctrl-b>` and type the following:
```
:setw synchronize-panes off
```
This disables your synchronization. Use `<ctrl-b>` and the up/down arrow keys to select your `controller-0` host. You can use any control plane node; we just picked this one so the examples have a hostname to reference.
#### Communication from the API Server to the Kubelet
1. The `kube-apiserver` service needs a [ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) that allows it to communicate with the `kubelet` API and perform the most common lifecycle tasks. The code snippet for this step uses `kubectl apply -f -` to apply the manifest piped into the command; the trailing `-` instructs `kubectl` to read its input from `STDIN`.
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
```
2. With the `ClusterRole` created, bind this role to the `kubernetes` user using a [`ClusterRoleBinding`](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding). This is the user that `kube-apiserver` uses to authenticate with the `kubelet` API. This value is defined by the `--kubelet-client-certificate` flag in the `kube-apiserver.service` systemd unit file.
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
```
A `ClusterRole` defines what a user can do, and it's bound to a given user with a `ClusterRoleBinding`. With the above work complete, `kube-apiserver` can use the `kubernetes` user to interact with the API components that are maintained by the `kubelet` service. Next, you'll create a Load Balancer for your API server.
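Before moving on, you can optionally confirm the objects were created by listing them from the same node (the names match the manifests above):
```
kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl get clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig
```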
### Creating a Front End Load Balancer
To perform these steps, disconnect from all three control plane nodes. To terminate all `tmux` panes press `<ctrl-b>` followed by `:kill-session`. This brings you back to a single shell session on the system you used to create your TLS certificates and provision your infrastructure. This is where this section needs to be executed.
This Load Balancer will distribute load across your three control plane nodes. It leverages the `/healthz` endpoint on each control plane node that you created previously as a health check.
1. Get the public IP address you allocated.
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
2. Create a health check for your Load Balancer that checks the `/healthz` endpoint on each control plane node.
```
gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"
```
3. The health checks for Google Cloud Load Balancers can come from several IP ranges within Google's infrastructure, so you'll need a firewall rule to allow them to reach your control plane nodes.
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp
```
4. Create a Target Pool for your control plane nodes.
```
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
```
5. Add your control plane nodes to your Target Pool.
```
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
```
6. Create a rule to forward requests on port 6443 on your Public IP Address to your control plane node Target Pool.
```
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
```
At this point, your Kubernetes control plane should be up and available through your load-balanced public IP address. Next, verify that everything works.
### Verifying your Control Plane Configuration
Execute the commands in this section from the same node you used to create your compute instances.
1. If you haven't already, get your public IP address.
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
2. Use the `curl` command to check the version of your kubernetes cluster by accessing the API through your public IP address.
```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```
output:
```
{
"major": "1",
"minor": "18",
"gitVersion": "v1.18.6",
"gitCommit": "dff82dc0de47299ab66c83c626e08b245ab19037",
"gitTreeState": "clean",
"buildDate": "2020-07-15T16:51:04Z",
"goVersion": "go1.13.9",
"compiler": "gc",
"platform": "linux/amd64"
}
```
This single command went through your Google Cloud Load Balancer, into your control plane cluster, into the `kube-apiserver` service, into `etcd`, and returned the version information stored there. That is fundamentally how the kubernetes API functions.
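If you want one more optional check, the `/healthz` endpoint is reachable the same way through the load balancer and should return `ok` (this reuses the certificate and address from above):
```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/healthz
```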
In the next section you'll bootstrap your application nodes so you can deploy workloads into your kubernetes cluster.
## Bootstrapping Application Nodes
The application nodes are where your container-native applications are deployed and managed by kubernetes. In this lab, you'll deploy the following services to each application node:
* [runc](https://github.com/opencontainers/runc)
* [container networking plugins](https://github.com/containernetworking/cni)
* [containerd](https://github.com/containerd/containerd)
* [kubelet](https://kubernetes.io/docs/admin/kubelet)
* [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies)
### Dependencies and System Configuration
Like your control plane, you'll use `tmux` to bootstrap your application nodes. Refer to the steps above if needed to create a synchronized `tmux` session to `worker-0`, `worker-1`, and `worker-2`.
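For example, once your panes are open (and before synchronizing them), connect to one worker per pane:
```
gcloud compute ssh worker-0   # use worker-1 and worker-2 in the other panes
```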
1. Install needed dependencies with `apt-get`. The `socat` package enables support for the `kubectl port-forward` command, which can be extremely useful when testing or debugging a kubernetes cluster.
```
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
```
2. Test to see if swap is enabled. If there's no output for this command, swap has already been disabled.
```
sudo swapon --show
```
3. If swap is enabled, disable it. For your nodes, swap should be disabled by default. If it's not, you'll also need to perform some [additional steps](https://askubuntu.com/questions/440326/how-can-i-turn-off-swap-permanently) to keep it disabled across reboots; see the sketch after this list.
```
sudo swapoff -a
```
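The linked steps come down to removing or commenting out the swap entry in `/etc/fstab` so it doesn't return after a reboot. A minimal sketch, assuming a standard `/etc/fstab` swap line; review the file before editing it:
```
# Comment out any swap entries; the -i.bak flag keeps a backup copy of the file
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```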
### Application Node Software
1. Install containerd, the container runtime you'll be using for your application nodes.
```
wget https://github.com/containerd/containerd/releases/download/v1.3.6/containerd-1.3.6-linux-amd64.tar.gz
mkdir containerd
tar -xvf containerd-1.3.6-linux-amd64.tar.gz -C containerd
sudo mv containerd/bin/* /bin/
```
2. Install `crictl`, a CLI for [interacting with CRI-compatible container runtimes](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/) like `containerd`.
```
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.18.0/crictl-v1.18.0-linux-amd64.tar.gz
tar -xvf crictl-v1.18.0-linux-amd64.tar.gz
chmod +x crictl
sudo mv crictl /usr/local/bin
```
3. Install `runc`, the low-level OCI runtime that `containerd` uses to create and run containers.
```
wget https://github.com/opencontainers/runc/releases/download/v1.0.0-rc91/runc.amd64
mv runc.amd64 runc
chmod +x runc
sudo mv runc /usr/local/bin
```
4. Install the CNI plugins, which allow the container runtime and the networking stack on your application nodes to interact.
```
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
sudo mkdir -p /etc/cni/net.d /opt/cni/bin
sudo tar -xvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin/
```
5. Install `kubelet`. The `kubelet` on each application node interfaces with the container runtime and `kube-apiserver`.
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubelet
chmod +x kubelet
sudo mv kubelet /usr/local/bin
```
6. Install `kube-proxy`. [Kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) runs on each node and maintains the network rules (iptables, in this lab) that implement kubernetes service IPs and route traffic to the right pods.
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-proxy
chmod +x kube-proxy
sudo mv kube-proxy /usr/local/bin
```
7. Install `kubectl`. This is the same `kubectl` you used in a previous lab to create your `kubeconfig` files.
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```
### Configuring CNI
In this section you'll configure your application nodes to work properly with your VPC and container workloads. These files are used by the Container Network Interface (CNI) plugin deployed in your kubernetes cluster.
1. Retrieve the Pod CIDR for each application node. This was set when you provisioned the nodes in the first session with the `--metadata pod-cidr=10.200.${i}.0/24` flag. Each pod is allocated an IP address within the `POD_CIDR` range of its application node.
```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```
2. Create a bridge network configuration file. Each application node has a bridge virtual interface on the `POD_CIDR` network. Each pod's interface is attached to this bridge to allow for proper communication. The bridge interface will be created by the Container Network Interface.
```
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
```
3. Create a loopback network configuration file. Each pod has its own loopback (`127.0.0.1`) interface. This file instructs the CNI to create the interface for each pod when it's created.
```
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "name": "lo",
    "type": "loopback"
}
EOF
```
### Configuring containerd
The container runtime actually creates and manages the OCI containers on your application nodes. Your control plane interacts with `containerd` on each node through the `kubelet`.
1. Create the `containerd` configuration directory.
```
sudo mkdir -p /etc/containerd/
```
2. Create your `containerd` configuration file. This config file tells `containerd` to use `runc` as its runtime and [`overlayfs`](https://en.wikipedia.org/wiki/OverlayFS) as its snapshotter.
```
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF
```
3. Create your `containerd` systemd unit file. This will start and stop your container runtime.
```
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
```
### Configuring kubelet
The `kubelet` service is the interface between kubernetes and your container runtime on each application node.
1. Install your CA certificate.
```
sudo mkdir /var/lib/kubernetes
sudo mv ca.pem /var/lib/kubernetes/
```
2. Create the `kubelet` configuration directory and install your host-specific kubelet key and certificate.
```
sudo mkdir -p /var/lib/kubelet
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
```
3. Install your host-specific kubelet `kubeconfig` file.
```
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
```
4. Create your `kubelet` configuration file to reference the files you just installed. The `resolvConf` setting points the `kubelet` at the `resolv.conf` file created and managed by `systemd-resolved`, which avoids DNS resolution loops when using [CoreDNS](https://coredns.io/) for service discovery.
```
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
5. Create your `kubelet.service` systemd unit file.
```
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Configuring kube-proxy
The `kube-proxy` service provides the interface between your application nodes and kubernetes networking objects like [`Services`](https://kubernetes.io/docs/concepts/services-networking/service/).
1. Install your `kube-proxy.kubeconfig` file.
```
sudo mkdir /var/lib/kube-proxy
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
2. Create your `kube-proxy` configuration file. This file instructs `kube-proxy` to use iptables for pod networking and points to the location of the `kube-proxy.kubeconfig` file.
```
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
```
3. Create your `kube-proxy.service` systemd unit file.
```
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Starting Application Node Services
With everything configured, it's time to start the services on your application nodes.
1. Reload the `systemd` daemon list.
```
sudo systemctl daemon-reload
```
2. Enable the services to start when the system boots.
```
sudo systemctl enable containerd kubelet kube-proxy
```
3. Start the services.
```
sudo systemctl start containerd kubelet kube-proxy
```
4. Verify the services are running. For each service run `systemctl status <SERVICE>`
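For example, `systemctl status` accepts multiple unit names, so you can check all three at once (press `q` to exit the pager):
```
systemctl status containerd kubelet kube-proxy
```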
### Verifying your Application Nodes are working
This step needs to be executed from the system you used to create your compute resources. To end your `tmux` session, type `<ctrl-b>` followed by `:kill-session`. This will end your `tmux` session and bring you back to a prompt on the system you used to provision your resources.
1. Use `kubectl` to list the nodes in your cluster.
```
gcloud compute ssh controller-0 \
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
```
output:
```
NAME       STATUS   ROLES    AGE   VERSION
worker-0   Ready    <none>   24s   v1.18.6
worker-1   Ready    <none>   24s   v1.18.6
worker-2   Ready    <none>   24s   v1.18.6
```
If all three nodes are marked as `Ready` then you were able to:
* Connect through your Load Balancer to `kube-apiserver` on your control plane.
* Access data from the `nodes` API via `kube-apiserver` using the `kubernetes` user and key. The `nodes` API endpoint data is populated by the `kubelet` service on each node.
* Confirm, through the `kubelet` service, that your container runtime and hosts were configured properly and are functional.
That means you've created a kubernetes cluster from scratch. But we're not done yet. Now you need to configure it for easier use and also deploy some workloads to confirm everything works as it should.
## Configuring kubectl for Remote Access
In the previous section, to examine your nodes using `kubectl` you had to execute it from one of your control plane nodes. Right now, the only valid `kubeconfig` for authenticating with your kubernetes cluster is the `admin.kubeconfig` file you created on your control plane nodes in an earlier lab. In this section, we'll fix that. This will be similar to the process you used to create `kubeconfig` files for your control plane nodes. The file created by these commands will be stored at `$HOME/.kube/config` on your host.
1. Get your Load Balancer public IP address.
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
2. Add information about your Kubernetes API server address and certificate.
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
```
3. Add information about the admin user key and certificate.
```
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
```
4. Create a context for your cluster.
```
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
```
5. Set your newly created context as the default.
```
kubectl config use-context kubernetes-the-hard-way
```
6. Confirm the `kubeconfig` works.
```
kubectl get componentstatuses
```
output:
```
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
```
```
kubectl get nodes
```
output:
```
NAME       STATUS   ROLES    AGE     VERSION
worker-0   Ready    <none>   2m30s   v1.18.6
worker-1   Ready    <none>   2m30s   v1.18.6
worker-2   Ready    <none>   2m30s   v1.18.6
```
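As one more optional check, you can query the API server's version through your new context; this request goes out through your load balancer just like the earlier `curl`:
```
kubectl version --short
```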
## Summary
These labs are where all of the work you've done previously comes together as a single platform. It's really pretty cool! Next up, we'll go through [Session Four](https://hackmd.io/S62A412aQb2Q5pBsVsDqgg).