# Debugging Containerd Wasm Shims
## Table of Contents
- [Running Vanilla Kubernetes natively](#running-vanilla-kubernetes-natively)
- [Cleanup](#cleanup)
- [Debugging the shim](#debugging-the-shim)
- [Troubleshooting](#troubleshooting)
- [Attaching a debugger to containerd](#attaching-a-debugger-to-containerd)
- [Attaching a debugger to kubelet](#attaching-a-debugger-to-kubelet)
- [Running a sample spin app](#running-a-sample-spin-app)
- [Run the shim with `ctr`](#run-the-shim-with-ctr)
- [Deploy a Spin app to a cluster](#deploy-a-spin-app-to-a-cluster)
## Running Vanilla Kubernetes natively
Minified distributions of Kubernetes such as K3s, MicroK8s, Kind, and K3d are great for testing the shim; however, they are hard to configure to use custom-built binaries of containerd, the shim, and the kubelet with debug symbols attached. For that reason, the best first step is to install vanilla K8s with `kubeadm`. Here is a script that does it for you on Ubuntu:
```bash!
SHORTVERSION=1.31
LONGVERSION=1.31.2
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v${SHORTVERSION}/deb/Release.key | sudo gpg --yes --batch --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v${SHORTVERSION}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --yes --batch --dearmor -o /usr/share/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -o Dpkg::Options::="--force-overwrite" -y --allow-downgrades kubelet=$LONGVERSION-* kubeadm=$LONGVERSION-* kubectl=$LONGVERSION-* containerd.io
kubectl version --client && echo "kubectl return code: $?" || echo "kubectl return code: $?"
kubeadm version && echo "kubeadm return code: $?" || echo "kubeadm return code: $?"
kubelet --version && echo "kubelet return code: $?" || echo "kubelet return code: $?"
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i "s/SystemdCgroup = false/SystemdCgroup = true/g" /etc/containerd/config.toml
sudo systemctl restart containerd
sudo swapoff -a
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///run/containerd/containerd.sock
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# Remove the control-plane taint so workloads can schedule on this node
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
until kubectl get node ${HOSTNAME,,} -o jsonpath='{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status}' | grep 'Ready=True'; do echo "waiting for kubernetes to become ready"; sleep 10; done
```
If your node fails to start due to CNI issues, try reinstalling [flannel](https://github.com/flannel-io/flannel):
```shell!
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```
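To confirm the CNI is healthy, check that the flannel pods are running and the node goes `Ready` (the `kube-flannel` namespace is assumed from the manifest above):

```sh
# Flannel runs as a DaemonSet; its pod should reach Running
kubectl -n kube-flannel get pods
# The node should flip to Ready once the CNI is up
kubectl get nodes
```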
### Cleanup
```shell!
sudo kubeadm reset
rm $HOME/.kube/config
```
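`kubeadm reset` leaves CNI configuration and iptables rules behind; a fuller cleanup sketch (the paths and commands below are the usual ones for a flannel-based kubeadm install, adjust as needed):

```sh
# Remove leftover CNI configs so the next kubeadm init starts clean
sudo rm -rf /etc/cni/net.d
# Flush iptables rules created by kube-proxy and the CNI
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
sudo systemctl restart containerd
```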
## Debugging the shim
1. Install K3s. This uses Kwasm to configure containerd to load the shim from `/opt/kwasm/bin/containerd-shim-spin-v2`, so be sure to place your debug binary there
```sh
wget https://gist.githubusercontent.com/kate-goldenring/a90bbe696d2cd48b44c093e1154047c0/raw/93f6ee1281123858290cb2a6ac61141e4671d38c/spin-kube-k3s.sh
chmod +x ./spin-kube-k3s.sh
./spin-kube-k3s.sh
```
2. Install the Native Debug VSCode extension
3. Create [a wrapper script](https://github.com/WebFreak001/code-debug?tab=readme-ov-file#debugging-a-process-from-a-different-user-especially-rootsystem-processes) so the extension can run gdb as root:
```sh
#!/bin/bash
sudo gdb $*
```
4. Create gdb launch.json
```json
// launch.json
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"type": "gdb",
"request": "attach",
"name": "Attach to PID",
"target": "{PID}",
"cwd": "${workspaceRoot}",
"valuesFormatting": "parseText",
"gdbpath": "/home/kagold/projects/containerd-shim-spin/_scratch/resources-debug/sudo-gdb.sh"
}
]
}
```
5. Build a debug version of the shim and move it to the location your K8s distro expects (`/opt/kwasm/bin/containerd-shim-spin-v2` for K3s configured with Kwasm)
6. Apply a Spin app deployment
7. Get the Spin shim process PID and update the `target` field in `launch.json` to it
8. Run the debugger in VSCode
9. Pause the debugger and add desired breakpoints
10. Repeat as needed
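The shim PID for the `target` field in `launch.json` can be found with `pgrep` (the process name `containerd-shim-spin-v2` is assumed; adjust if your binary is named differently):

```sh
# Print the PID of the running Spin shim, or a note if it is not up
pgrep -f containerd-shim-spin-v2 || echo "shim not running"
```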
### Troubleshooting
1. To pick up changes in a dependency crate, force the shim to rebuild: `cargo clean -p containerd-shim-wasm && cargo build`
## Attaching a debugger to containerd
> Note: this section contains configuration on debugging both `ctr` and `containerd`
1. [Install native Kubernetes](#running-vanilla-kubernetes-natively)
2. Install the Go VSCode extension (the configurations below use its Delve-based `go` debug type)
3. Clone `containerd`, checking out the release for the version you want to use
```sh
git clone -b release/2.0 git@github.com:containerd/containerd.git --depth 1
```
4. Create a `containerd/.vscode/tasks.json` file with tasks for building `containerd` and `ctr` with debug symbols, a prerequisite to debugging them
```json
{
"version": "2.0.0",
"tasks": [
{
"label": "containerd",
"type": "shell",
"command": "go",
"args": [
"build",
"-gcflags=all=-N -l",
"-o",
"${workspaceFolder}/bin/containerd"
],
"options": {
"cwd": "${workspaceFolder}/cmd/containerd"
}
},
{
"label": "ctr",
"type": "shell",
"command": "go",
"args": [
"build",
"-gcflags=all=-N -l",
"-o",
"${workspaceFolder}/bin/ctr"
],
"options": {
"cwd": "${workspaceFolder}/cmd/ctr"
}
}
]
}
```
5. Create a `containerd/.vscode/launch.json` for debugging `containerd` and `ctr`
```json
{
"version": "0.2.0",
"configurations": [
{
"name": "ctr",
"type": "go",
"request": "launch",
"mode": "exec",
"program": "${workspaceFolder}/bin/ctr",
"args": [
"i",
"pull",
"docker.io/jsturtevant/spin-wasm-shim:latest-2.0"
],
"asRoot": true,
"console": "integratedTerminal",
"preLaunchTask": "ctr"
},
{
"name": "containerd",
"type": "go",
"request": "launch",
"mode": "exec",
"program": "${workspaceFolder}/bin/containerd",
"args": [
"-c",
"${workspaceFolder}/bin/config.toml"
],
"asRoot": true,
"console": "integratedTerminal",
"preLaunchTask": "containerd"
},
{
"name": "Attach to Process",
"type": "go",
"request": "attach",
"mode": "local",
"processId":"${command:pickProcess}"
}
]
}
```
6. Copy your containerd config into the repository's `bin` directory (the path `launch.json` passes via `-c`)
```sh
sudo cp /etc/containerd/config.toml /path/to/containerd/bin
```
7. Stop the systemd containerd service, since the debugger will be starting containerd
```sh
sudo systemctl stop containerd
```
8. Run debugger in VSCode, selecting `containerd`
9. Add desired breakpoints
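With `containerd` running under the debugger, a quick smoke test from another terminal confirms it is serving requests (the socket path is assumed from the default config):

```sh
# Both client and server versions should print if containerd is up
sudo ctr --address /run/containerd/containerd.sock version
```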
## Attaching a debugger to kubelet
> Note: this section contains configuration for debugging the `kubelet`
1. [Install native Kubernetes](#running-vanilla-kubernetes-natively)
2. Install the Go VSCode extension (the configuration below uses its Delve-based `go` debug type)
3. Clone `kubernetes`, checking out the release for the version you want to use. Be sure to specify a shallow clone -- Kubernetes is hefty.
```sh
git clone -b release-1.31 git@github.com:kubernetes/kubernetes.git --depth 1
```
4. Build the kubelet with debug symbols and move it to the location expected by the systemd process. Then, restart the kubelet process, and it will start back up with the new binary.
```sh
cd kubernetes
make WHAT=cmd/kubelet GOFLAGS=-v DBG=1
sudo cp _output/bin/kubelet /usr/bin/kubelet
sudo systemctl restart kubelet
# check status
sudo systemctl status kubelet
```
5. Create a `kubernetes/.vscode/launch.json` to configure debugging the kubelet process
```json
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Attach to Process",
"type": "go",
"request": "attach",
"mode": "local",
"processId": "${command:pickProcess}",
"asRoot": true,
"console": "integratedTerminal"
}
]
}
```
6. Run the debugger, picking `kubelet` as the process name. The process may take a moment to restart if you refreshed the debugging session
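To confirm the kubelet the debugger attaches to is actually your debug build, check that the installed binary still has its symbols (a `DBG=1` build should not be stripped):

```sh
# A debug build reports "... not stripped"
file /usr/bin/kubelet
```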
## Running a sample spin app
1. Create a Spin app and push it to a registry
```sh
spin new -t http-rust hello --accept-defaults
cd hello
spin build
spin registry push ttl.sh/hellospinapp:48h
```
### Run the shim with `ctr`
If you are just trying to test the shim's ability to execute an app, using `ctr` to invoke `containerd` directly is the quickest path.
Run the following once, or add it to your `~/.bashrc`:
```sh
function ctrpullrun() {
image=$1
name=$2
sudo ctr image pull $image
sudo ctr run --rm --net-host --runtime io.containerd.spin.v2 $image $name bogus-arg
}
export -f ctrpullrun
```
This expects the shim to be on the `PATH` (such as `/usr/bin/containerd-shim-spin-v2`). Update the `--runtime` flag to point at your shim binary if needed: `--runtime /opt/kwasm/containerd-shim-spin-v2`.
Execute: `ctrpullrun ttl.sh/hellospinapp:48h hello`
Curl the app: `curl localhost:80` and see "Hello, Fermyon"
### Deploy a Spin app to a cluster
If you are not using the Spin Operator and SpinApp CRD, it can be hard to know how to template a deployment with the right runtime class. The following deployment can be used as a reference:
```yaml!
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: wasmtime-spin
handler: spin
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wasm-spin
spec:
replicas: 1
selector:
matchLabels:
app: wasm-spin
template:
metadata:
labels:
app: wasm-spin
spec:
runtimeClassName: wasmtime-spin
containers:
- name: testwasm
image: ttl.sh/hellospinapp:48h
imagePullPolicy: IfNotPresent
command: ["/"]
resources: # limit the resources to 128Mi of memory and 100m of CPU
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
name: wasm-spin
spec:
ports:
- protocol: TCP
port: 80
targetPort: 80
selector:
app: wasm-spin
```
Apply it to a running cluster configured with the shim and runtime class:
`kubectl apply -f deployment.yaml`
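To verify the app is serving, you can port-forward the service and curl it (the service name `wasm-spin` and port 80 come from the manifest above):

```sh
# Confirm the pod is Running, then forward the service locally
kubectl get pods -l app=wasm-spin
kubectl port-forward svc/wasm-spin 8080:80 &
sleep 2
curl localhost:8080   # expect the app's hello response
kill %1               # stop the port-forward
```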