# Container Runtime Sandbox - gVisor Installation
- Date: Oct 2023
- OS: Ubuntu 22.04
- Kubernetes: 1.26.4
- Containerd: 1.6.24
- Calico: 3.26.3
- Nodes: 1 control plane + 2 workers
**Lab setup:**
```
cloud_user@k8s-control:~$ kubectl get nodes -o wide
NAME          STATUS   ROLES           AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
k8s-control   Ready    control-plane   3d9h   v1.26.4   172.31.22.65    <none>        Ubuntu 22.04.3 LTS   6.2.0-1013-aws   containerd://1.6.24
k8s-worker1   Ready    <none>          3d8h   v1.26.4   172.31.21.187   <none>        Ubuntu 22.04.3 LTS   6.2.0-1009-aws   containerd://1.6.24
k8s-worker2   Ready    <none>          3d8h   v1.26.4   172.31.25.96    <none>        Ubuntu 22.04.3 LTS   6.2.0-1013-aws   containerd://1.6.24
```
```
cloud_user@k8s-control:~$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.26.4
Kustomize Version: v4.5.7
Server Version: v1.26.4
```
```
cloud_user@k8s-control:~$ kubelet --version
Kubernetes v1.26.4
```
```
cloud_user@k8s-control:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.4", GitCommit:"f89670c3aa4059d6999cb42e23ccb4f0b9a03979", GitTreeState:"clean", BuildDate:"2023-04-12T12:12:17Z", GoVersion:"go1.19.8", Compiler:"gc", Platform:"linux/amd64"}
```
**Install gVisor and runsc**
Reference: https://gvisor.dev/docs/user_guide/install/
On all three nodes (control plane and workers), install gVisor.
```
curl -fsSL https://gvisor.dev/archive.key | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases release main"
sudo apt-get update && sudo apt-get install -y runsc
```
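To sanity-check the install on each node, query the runsc binary directly (the apt package should also install the containerd-shim-runsc-v1 shim that the runtime_type configured below relies on; exact version strings vary by release):
```
# Both binaries should be on the PATH after the apt install
which runsc containerd-shim-runsc-v1
# Print the installed gVisor release
runsc --version
```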
**Configure Containerd for runsc**
Reference: https://gvisor.dev/docs/user_guide/containerd/quick_start/
Edit the containerd config.toml:
`sudo vi /etc/containerd/config.toml`
Find the block `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]`. After the existing runc block, add configuration for a runsc runtime.
```
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
runtime_type = "io.containerd.runsc.v1"
```
It should look like this when done:
```
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    base_runtime_spec = ""
    cni_conf_dir = ""
    cni_max_conf_num = 0
    container_annotations = []
    pod_annotations = []
    privileged_without_host_devices = false
    runtime_engine = ""
    runtime_path = ""
    runtime_root = ""
    runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
    runtime_type = "io.containerd.runsc.v1"
```
Locate the `[plugins."io.containerd.runtime.v1.linux"]` block and set `shim_debug` to `true`.
```
[plugins."io.containerd.runtime.v1.linux"]
...
shim_debug = true
```
Restart containerd, and verify that it still runs.
```
sudo systemctl restart containerd
sudo systemctl status containerd
```
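If crictl is installed and pointed at the containerd socket, you can also ask the CRI endpoint whether it picked up the new runtime; `crictl info` dumps the runtime configuration as JSON (this assumes a default crictl setup):
```
# The runsc handler should appear among the configured CRI runtimes
sudo crictl info | grep -i runsc
```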
**Remarks:**
If you are using runc.v1 in containerd's config.toml,
`runtime_type = "io.containerd.runc.v1"`
then find the disabled_plugins section and add the restart plugin:
`disabled_plugins = ["io.containerd.internal.v1.restart"]`
I am using runc v2, so I can skip the configuration above.
**End of remarks**
**Set up the Kubernetes RuntimeClass**
Reference:
https://gvisor.dev/docs/user_guide/containerd/quick_start/
https://kubernetes.io/docs/concepts/containers/runtime-class/
On the control plane server only, create a new RuntimeClass. In this case, the name is runsc-sandbox, but you can choose any name you want.
`vi runsc-sandbox.yml`
```
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: runsc-sandbox
handler: runsc
```
`kubectl create -f runsc-sandbox.yml`
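Verify that the RuntimeClass exists; the HANDLER column should show runsc:
```
kubectl get runtimeclass runsc-sandbox
```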
**Create a pod with the runsc runtimeClassName**
Create a Pod that does not use the sandbox.
`vi non-sandbox-pod.yml`
```
apiVersion: v1
kind: Pod
metadata:
  name: non-sandbox-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'while true; do echo "Running..."; sleep 5; done']
```
`kubectl create -f non-sandbox-pod.yml`
Create a Pod that uses the sandbox. Specify a runtimeClassName equal to the runsc-sandbox RuntimeClass created earlier.
`vi sandbox-pod.yml`
```
apiVersion: v1
kind: Pod
metadata:
  name: sandbox-pod
spec:
  runtimeClassName: runsc-sandbox
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'while true; do echo "Running..."; sleep 5; done']
```
`kubectl create -f sandbox-pod.yml`
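Before comparing the two Pods, confirm that both are Running and that the sandboxed Pod actually carries the RuntimeClass (the jsonpath query is just one way to check):
```
kubectl get pods non-sandbox-pod sandbox-pod
# Should print: runsc-sandbox
kubectl get pod sandbox-pod -o jsonpath='{.spec.runtimeClassName}{"\n"}'
```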
**Check the dmesg output of the Pods.**
Use dmesg to examine messages from the kernel ring buffer. The Pod running under gVisor reads the ring buffer of gVisor's simulated (user-space) kernel, while the non-sandboxed Pod is denied access to the host kernel's ring buffer.
```
cloud_user@k8s-control:~$ kubectl exec non-sandbox-pod -- dmesg
dmesg: klogctl: Operation not permitted
command terminated with exit code 1
```
```
cloud_user@k8s-control:~$ kubectl exec sandbox-pod -- dmesg
[ 0.000000] Starting gVisor...
[ 0.545724] Generating random numbers by fair dice roll...
[ 0.721081] Accelerating teletypewriter to 9600 baud...
[ 1.019521] Creating bureaucratic processes...
[ 1.191531] Committing treasure map to memory...
[ 1.463364] Recruiting cron-ies...
[ 1.794662] Synthesizing system calls...
[ 1.818115] Creating process schedule...
[ 1.932190] Conjuring /dev/null black hole...
[ 2.400497] Verifying that no non-zero bytes made their way into /dev/zero...
[ 2.699343] Digging up root...
[ 3.061568] Setting up VFS...
[ 3.334626] Setting up FUSE...
[ 3.429368] Ready!
```
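Another quick contrast is `uname -r` inside each Pod: the non-sandboxed Pod reports the host kernel, while the gVisor Pod reports the kernel version emulated by gVisor's user-space kernel (the exact string depends on the gVisor release):
```
kubectl exec non-sandbox-pod -- uname -r   # host kernel, e.g. 6.2.0-1013-aws
kubectl exec sandbox-pod -- uname -r       # gVisor's emulated kernel version
```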