# containerd workshop (practice)
[Slides](https://docs.google.com/presentation/d/1dGpPvXMdwznkPgBS9tk05bwclDhcPFC_GkHHx3ehjnQ/edit?usp=sharing).
## runc
[Article on iximiuz.com](https://iximiuz.com/en/posts/implementing-container-runtime-shim/). We're going to need a Linux VM with two terminals.
### Basics: Non-interactive container
First, prepare a new container:
```shell=bash
mkdir -p box1/rootfs
cd box1
# From https://github.com/alpinelinux/docker-alpine
wget https://raw.githubusercontent.com/alpinelinux/docker-alpine/v3.16/x86_64/alpine-minirootfs-3.16.2-x86_64.tar.gz
tar -xvf alpine-minirootfs-3.16.2-x86_64.tar.gz -C rootfs
runc spec
# Edit the config.json file:
# - set `terminal` to false
# - replace the command with `yes`
```
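After the edit, the relevant part of the generated config.json might look like this (a fragment; field names per the OCI runtime spec, the rest of the file stays as `runc spec` generated it):
```json
{
  "process": {
    "terminal": false,
    "args": ["yes"]
  }
}
```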
Create the first container:
```shell=bash
sudo runc create --bundle . foo
sudo runc list
sudo lsns
ps auxf
```
Start the first container (from terminal 2):
```shell=bash
sudo runc start foo
sudo runc state foo
sudo runc kill foo SIGTERM # No action: `yes` runs as PID 1 in the container, and PID 1 ignores signals it has no handler for
sudo runc kill foo SIGKILL # Works as always
sudo runc state foo
```
### Advanced: Interactive shell
Create a new container:
```shell=bash
cd ..
cp -r box1 box2
cd box2
# Edit config.json:
# - set `terminal` to true
# - replace the command with `sh`
```
Run it (create + start):
```shell=bash
sudo runc run --bundle . bar
cat /etc/alpine-release
ip a # Oops! No interfaces except localhost
```
## ctr
[Article on iximiuz.com](https://iximiuz.com/en/posts/containerd-command-line-clients/). The lab can be done (ab)using a [Play with Docker](https://labs.play-with-docker.com/) playground.
### Playing with ctr
```shell=bash
ctr
export SOCK=/run/docker/containerd/containerd.sock
```
Working with images:
```shell=bash
ctr --address=${SOCK} image ls
ctr --address=${SOCK} image pull alpine # Fails
ctr --address=${SOCK} image pull docker.io/alpine # Fails
ctr --address=${SOCK} image pull docker.io/library/alpine # Fails
ctr --address=${SOCK} image pull docker.io/library/alpine:latest
ctr --address=${SOCK} image ls
```
Converting an image into a bundle (not something you'd typically do):
```shell=bash
mkdir -p box1/rootfs
cd box1
ctr --address=${SOCK} image mount docker.io/library/alpine:latest rootfs
ctr --address=${SOCK} oci spec > config.json
```
Running a container:
```shell=bash
ctr --address=${SOCK} run --rm -t docker.io/library/alpine:latest cont1
ctr --address=${SOCK} run --rm -d docker.io/library/nginx:latest cont2 # Fails: ctr doesn't pull images implicitly
ctr --address=${SOCK} image pull docker.io/library/nginx:latest
ctr --address=${SOCK} run --rm -d docker.io/library/nginx:latest cont2 # Now it works
ctr --address=${SOCK} container ls
ctr --address=${SOCK} task ls
# attach-ing
ctr --address=${SOCK} task attach cont2
# exec-ing
ctr --address=${SOCK} run --rm -d docker.io/library/nginx:latest cont3
ctr --address=${SOCK} task exec -t --exec-id bash_1 cont3 bash
```
Exploring namespaces:
```shell=bash
ctr --address=${SOCK} namespace ls
```
### Limitations of ctr
- Not an official client, no compatibility guarantees
- No `docker logs`-like functionality
- No `docker build`-like functionality
- No Docker-like auth helpers
- No advanced stuff out of the box
- `--publish`
- `--restart=always`
- `--net=bridge`
## Calling containerd API from Go
Gradually building a PoC program.
```shell=bash
mkdir myctr
cd myctr
go mod init myctr
```
Creating a client:
```go
package main

import (
    "fmt"

    "github.com/containerd/containerd"
)

const sock = "/run/containerd/containerd.sock"

func main() {
    client, err := containerd.New(sock)
    if err != nil {
        panic(err)
    }
    defer client.Close()

    fmt.Println("Client created!")
}
```
```shell=bash
go mod tidy # Fetch the containerd client dependency
go build -o myctr main.go
sudo ./myctr
```
Pulling an image:
```go
package main

import (
    "context"
    "fmt"

    "github.com/containerd/containerd"
)

const sock = "/run/containerd/containerd.sock"

func main() {
    client, err := containerd.New(sock)
    if err != nil {
        panic(err)
    }
    defer client.Close()

    fmt.Println("Client created!")

    // This Pull fails: every containerd API call needs
    // a namespace in its context.
    ctx := context.Background()
    image, err := client.Pull(
        ctx,
        "docker.io/library/nginx:latest",
        containerd.WithPullUnpack,
    )
    if err != nil {
        panic(err)
    }
    fmt.Println("Image pulled:", image.Name())
}
```
Fixing the namespace issue:
```go
package main

import (
    "context"
    "fmt"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

const sock = "/run/containerd/containerd.sock"

func main() {
    client, err := containerd.New(sock)
    if err != nil {
        panic(err)
    }
    defer client.Close()

    fmt.Println("Client created!")

    ctx := namespaces.WithNamespace(context.Background(), "my-ns")
    image, err := client.Pull(
        ctx,
        "docker.io/library/nginx:latest",
        containerd.WithPullUnpack,
    )
    if err != nil {
        panic(err)
    }
    fmt.Println("Image pulled:", image.Name())
}
```
Creating a container:
```go
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
    "github.com/google/uuid"
)

const sock = "/run/containerd/containerd.sock"

func main() {
    client, err := containerd.New(sock)
    if err != nil {
        panic(err)
    }
    defer client.Close()

    fmt.Println("Client created!")

    ctx := namespaces.WithNamespace(context.Background(), "my-ns")

    if os.Args[1] == "pull" {
        pullImage(ctx, client, os.Args[2])
    }
    if os.Args[1] == "run" {
        runContainer(ctx, client, os.Args[2])
    }
}

func pullImage(
    ctx context.Context,
    client *containerd.Client,
    imageName string,
) containerd.Image {
    image, err := client.Pull(
        ctx,
        imageName,
        containerd.WithPullUnpack,
    )
    if err != nil {
        panic(err)
    }
    fmt.Println("Image pulled:", image.Name())
    return image
}

func runContainer(
    ctx context.Context,
    client *containerd.Client,
    imageName string,
) {
    image := pullImage(ctx, client, imageName)

    id := uuid.New().String()
    cont, err := client.NewContainer(
        ctx,
        id,
        containerd.WithNewSnapshot(id, image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        panic(err)
    }
    // NewContainer only creates the metadata and the rootfs
    // snapshot; no process is running yet.
    fmt.Println("Container created:", cont.ID())
}
```
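At this point the container is only created, not started. A hedged sketch of actually starting it as a task (a running instance of a container), assuming the imports above plus `github.com/containerd/containerd/cio` (this function is not part of the slides):

```go
// runTask starts the created container's process, wires its I/O to the
// current terminal, and waits for it to exit. A sketch extending
// runContainer above.
func runTask(ctx context.Context, cont containerd.Container) {
    task, err := cont.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    if err != nil {
        panic(err)
    }
    defer task.Delete(ctx)

    // Wait must be set up before Start to avoid missing a fast exit.
    exitCh, err := task.Wait(ctx)
    if err != nil {
        panic(err)
    }
    if err := task.Start(ctx); err != nil {
        panic(err)
    }

    status := <-exitCh
    fmt.Println("Task exited with status:", status.ExitCode())
}
```

Calling `runTask(ctx, cont)` at the end of `runContainer` would make `./myctr run <image>` behave much like `ctr run`.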
List containers:
```go
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
    "github.com/google/uuid"
)

const sock = "/run/containerd/containerd.sock"

func main() {
    client, err := containerd.New(sock)
    if err != nil {
        panic(err)
    }
    defer client.Close()

    fmt.Println("Client created!")

    ctx := namespaces.WithNamespace(context.Background(), "my-ns")

    if os.Args[1] == "pull" {
        pullImage(ctx, client, os.Args[2])
    }
    if os.Args[1] == "run" {
        runContainer(ctx, client, os.Args[2])
    }
    if os.Args[1] == "ps" {
        listContainers(ctx, client)
    }
}

func pullImage(
    ctx context.Context,
    client *containerd.Client,
    imageName string,
) containerd.Image {
    image, err := client.Pull(
        ctx,
        imageName,
        containerd.WithPullUnpack,
    )
    if err != nil {
        panic(err)
    }
    fmt.Println("Image pulled:", image.Name())
    return image
}

func runContainer(
    ctx context.Context,
    client *containerd.Client,
    imageName string,
) {
    image := pullImage(ctx, client, imageName)

    id := uuid.New().String()
    cont, err := client.NewContainer(
        ctx,
        id,
        containerd.WithNewSnapshot(id, image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        panic(err)
    }
    fmt.Println("Container created:", cont.ID())
}

func listContainers(
    ctx context.Context,
    client *containerd.Client,
) {
    conts, err := client.Containers(ctx)
    if err != nil {
        panic(err)
    }
    fmt.Println("Found", len(conts), "container(s)")
    for _, c := range conts {
        image, err := c.Image(ctx)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\t%s\n", c.ID(), image.Name())
    }
}
```
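A natural next step (not covered by the slides) is an `rm` subcommand; a hedged sketch assuming the same program layout as above:

```go
// removeContainer deletes a container and its rootfs snapshot; a
// hypothetical `rm` subcommand for the PoC above.
func removeContainer(
    ctx context.Context,
    client *containerd.Client,
    id string,
) {
    cont, err := client.LoadContainer(ctx, id)
    if err != nil {
        panic(err)
    }
    // WithSnapshotCleanup also removes the snapshot created by
    // WithNewSnapshot; any running task must be killed and deleted first.
    if err := cont.Delete(ctx, containerd.WithSnapshotCleanup); err != nil {
        panic(err)
    }
    fmt.Println("Container removed:", id)
}
```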
More examples:
- [containerd package Go docs](https://pkg.go.dev/github.com/containerd/containerd)
- [Getting started with containerd](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#implementing-your-own-containerd-client)
Review:
- [faasd provider.go](https://github.com/openfaas/faasd/blob/a2ea804d2c7b9a70c8867a12a1834f488d490165/cmd/provider.go)
- [faasd deploy.go](https://github.com/openfaas/faasd/blob/e668beef139ee094ad7364a5534b15d84b1b1f6d/pkg/provider/handlers/deploy.go)
## nerdctl
[nerdctl](https://github.com/containerd/nerdctl) is a Docker-compatible CLI for containerd.
### Installing nerdctl
While in a [killercoda playground](https://killercoda.com/playgrounds/scenario/kubernetes) (preferably on a worker node):
```shell=bash
wget https://github.com/containerd/nerdctl/releases/download/v0.23.0/nerdctl-0.23.0-linux-amd64.tar.gz
tar -xvf nerdctl-0.23.0-linux-amd64.tar.gz
./nerdctl version
mv ./nerdctl /usr/local/bin/nerdctl
```
### Basic usage
Interactive shell container (alpine):
```shell=bash
# Should fail complaining about missing CNI bridge plugin
nerdctl run -it alpine
nerdctl run -it --network=host alpine
cat /etc/alpine-release
```
Install CNI plugins:
```shell=bash
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
mkdir -p /usr/libexec/cni
tar -xvf cni-plugins-linux-amd64-v1.1.1.tgz -C /usr/libexec/cni
file /usr/libexec/cni/bridge
```
Nginx daemon container:
```shell=bash
nerdctl run -d -p 80 nginx:alpine
nerdctl ps
curl localhost:'PORT'
nerdctl stop 'ID_PREFIX' # And it works!
```
Build (should fail due to missing BuildKit):
```shell=bash
nerdctl build -t foo .
```
### Kubernetes tricks
Watching Kubernetes workloads:
```shell=bash
# From the worker node:
nerdctl namespace ls
nerdctl --namespace k8s.io images
nerdctl --namespace k8s.io ps -a
# From the control-plane node, start a test workload, e.g.:
#   kubectl run sleeper --image=busybox -- sleep 9999
# From the worker node again:
nerdctl --namespace k8s.io ps -a # And look for sleep 9999
```
Building images right on the Kubernetes node (might be handy during development):
```shell=bash
nerdctl --namespace k8s.io build -t foo .
```
### Advanced stuff
Running a container w/o an image:
```shell=bash
# To isolate a process:
mkdir my-cont
cd my-cont
cp $(which kubectl) .
nerdctl run -it --rootfs $(pwd) /kubectl
```
## lima
[lima](https://github.com/lima-vm/lima) - Linux virtual machines, typically on macOS, for running containerd.
### Prerequisites
A macOS machine (Intel or Arm). Lima works on Linux too, but the whole point is to reproduce a Docker Desktop-like UX on macOS.
```shell=bash
uname -a
```
### Installing lima
```shell=bash
brew install lima
limactl start
lima uname -a
```
### Using lima
Like nerdctl, but prefixed with `lima` (or use the `nerdctl.lima` alias):
```shell=bash
lima nerdctl run -d --name nginx -p 127.0.0.1:8080:80 nginx:alpine
curl localhost:8080
```
You can also SSH into the VM:
```shell=bash
limactl list
limactl shell default
uname -a
ps auxf
# Notice how:
# - containerd works in the rootless mode
# - BuildKit daemon is running
# - stargz snapshotter plugin is running
```
## crictl
[Article on iximiuz.com](https://iximiuz.com/en/posts/containerd-command-line-clients/). The lab can be done (ab)using a [killercoda playground](https://killercoda.com/playgrounds/scenario/kubernetes).
### Playing with crictl
```shell=bash
crictl
# Notice how Pods become first-class citizens.
# Notice there is no build command.
```
Listing Pods and containers:
```shell=bash
crictl pods
crictl ps
```
### Dissecting Pods
[Article on iximiuz.com](https://iximiuz.com/en/posts/containers-vs-pods/).
First, create a test workload:
```shell=bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: foo
spec:
containers:
- name: app
image: docker.io/kennethreitz/httpbin
- name: sidecar
image: curlimages/curl
command: ["/bin/sleep", "3650d"]
EOF
```
Jump to the worker node:
```shell=bash
kubectl get nodes -o wide
ssh root@'node01-IP'
```
Look up `python` & `sleep` processes:
```shell=bash
ps auxf
```
Look up the corresponding Pod and containers:
```shell=bash
crictl pods
crictl ps
```
Take the first look at namespaces:
```shell=bash
sudo lsns
# Notice how sleep & python processes are in dedicated
# `pid` and `mnt` namespaces.
```
Inspect `app` and `sidecar` containers:
```shell=bash
crictl inspect 'APP_CONT_ID'
crictl inspect 'SIDECAR_CONT_ID'
# Notice how the `uts`, `net`, and `ipc` namespaces refer to
# the same /proc/<pid>/ns/... paths - they belong to the Pod's pause (sandbox) container.
```
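The shared-namespace claim can also be verified without crictl: every process exposes its namespaces as symlinks under /proc, and two processes share a namespace exactly when the link targets (namespace inodes) match. A sketch with placeholder PIDs:

```shell=bash
# Namespace membership is visible as one symlink per namespace type:
readlink /proc/self/ns/net # e.g. net:[4026531840]
# Compare the app and sidecar processes (PIDs are placeholders):
# readlink /proc/'APP_PID'/ns/net /proc/'SIDECAR_PID'/ns/net
# Identical targets => same network namespace (the pause container's).
```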