
containerd workshop (practice)

Slides.

runc

Article on iximiuz.com. We're going to need a Linux VM with two terminals.

Basics: Non-interactive container

First, prepare a new container:

mkdir -p box1/rootfs
cd box1

# From https://github.com/alpinelinux/docker-alpine
wget https://raw.githubusercontent.com/alpinelinux/docker-alpine/v3.16/x86_64/alpine-minirootfs-3.16.2-x86_64.tar.gz
tar -xvf alpine-minirootfs-3.16.2-x86_64.tar.gz -C rootfs

runc spec
# Edit the config.json file:
#  - set `terminal` to false
#  - replace the command (`args`) with `yes`
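After `runc spec`, the edited part of config.json should look roughly like this (a fragment of the generated spec — only the affected fields are shown, the rest stays as generated):

```json
{
    "process": {
        "terminal": false,
        "args": ["yes"]
    }
}
```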

Create the first container:

sudo runc create --bundle . foo
sudo runc list
sudo lsns
ps auxf

Start the first container (from terminal 2):

sudo runc start foo
sudo runc state foo

sudo runc kill foo SIGTERM  # No effect: PID 1 ignores signals it has no handler for
sudo runc kill foo SIGKILL  # Works as always
sudo runc state foo

Advanced: Interactive shell

Create a new container:

cd ..
cp -r box1 box2
cd box2

# Edit config.json:
#  - set `terminal` to true
#  - replace the command (`args`) with `sh`
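For the interactive box, the same two fields flip (again a fragment, not the full spec):

```json
{
    "process": {
        "terminal": true,
        "args": ["sh"]
    }
}
```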

Run it (create + start):

sudo runc run --bundle . bar
cat /etc/alpine-release
ip a  # Oops! No interfaces except loopback

ctr

Article on iximiuz.com. The lab can be done (ab)using a Play with Docker playground.

Playing with ctr

ctr

export SOCK=/run/docker/containerd/containerd.sock

Working with images

ctr --address=${SOCK} image ls
ctr --address=${SOCK} image pull alpine  # Fails - no short-name resolution
ctr --address=${SOCK} image pull docker.io/alpine  # Fails
ctr --address=${SOCK} image pull docker.io/library/alpine  # Fails - the tag is required too
ctr --address=${SOCK} image pull docker.io/library/alpine:latest  # Works: fully-qualified reference

ctr --address=${SOCK} image ls

Converting an image into a bundle (not something you'd typically do):

mkdir -p box1/rootfs
cd box1
ctr --address=${SOCK} image mount docker.io/library/alpine:latest rootfs

ctr --address=${SOCK} oci spec > config.json

Running a container:

ctr --address=${SOCK} run --rm -t docker.io/library/alpine:latest cont1

ctr --address=${SOCK} image pull docker.io/library/nginx:latest
ctr --address=${SOCK} run --rm -d docker.io/library/nginx:latest cont2
ctr --address=${SOCK} container ls
ctr --address=${SOCK} task ls

# attach-ing
ctr --address=${SOCK} task attach cont2

# exec-ing
ctr --address=${SOCK} run --rm -d docker.io/library/nginx:latest cont3
ctr --address=${SOCK} task exec -t --exec-id bash_1 cont3 bash

Exploring namespaces:

ctr --address=${SOCK} namespace ls

Limitations of ctr

  • Not an official client, no compatibility guarantees
  • No docker logs-like functionality
  • No docker build-like functionality
  • No Docker-like auth helpers
  • No advanced stuff out of the box
    • --publish
    • --restart=always
    • --net=bridge

Calling containerd API from Go

Gradually building a PoC program.

mkdir myctr
cd myctr

go mod init myctr

Creating a client:

package main
  
import (
    "fmt"

    "github.com/containerd/containerd"
)

const sock = "/run/containerd/containerd.sock"

func main() {
    client, err := containerd.New(sock)
    if err != nil {
        panic(err)
    }
    defer client.Close()
    
    fmt.Println("Client created!")
}

go mod tidy
go build -o myctr main.go
sudo ./myctr

Pulling an image:

package main
  
import (
    "context"
    "fmt"

    "github.com/containerd/containerd"
)

const sock = "/run/containerd/containerd.sock"

func main() {
    client, err := containerd.New(sock)
    if err != nil {
        panic(err)
    }
    defer client.Close()
    
    fmt.Println("Client created!")
    
    ctx := context.Background()
    image, err := client.Pull(
        ctx,
        "docker.io/library/nginx:latest",
        containerd.WithPullUnpack,
    )
    if err != nil {
        panic(err)
    }
    
    fmt.Println("Image pulled:", image.Name())
}

Fixing the namespace issue:

package main
  
import (
    "context"
    "fmt"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

const sock = "/run/containerd/containerd.sock"

func main() {
    client, err := containerd.New(sock)
    if err != nil {
        panic(err)
    }
    defer client.Close()
    
    fmt.Println("Client created!")
    
    ctx := namespaces.WithNamespace(context.Background(), "my-ns")
    image, err := client.Pull(
        ctx,
        "docker.io/library/nginx:latest",
        containerd.WithPullUnpack,
    )
    if err != nil {
        panic(err)
    }
    
    fmt.Println("Image pulled:", image.Name())
}

Creating a container:

package main
  
import (
    "context"
    "fmt"
    "os"
    
    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
    "github.com/google/uuid"
)

const sock = "/run/containerd/containerd.sock"

func main() {
    client, err := containerd.New(sock)
    if err != nil {
        panic(err)
    }
    defer client.Close()
    
    fmt.Println("Client created!")
    
    ctx := namespaces.WithNamespace(context.Background(), "my-ns")
    
    if os.Args[1] == "pull" {
        pullImage(ctx, client, os.Args[2])
    }
    
    if os.Args[1] == "run" {
        runContainer(ctx, client, os.Args[2])
    }
}

func pullImage(
    ctx context.Context,
    client *containerd.Client,
    imageName string,
) containerd.Image {
    image, err := client.Pull(
        ctx,
        imageName,
        containerd.WithPullUnpack,
    )
    if err != nil {
        panic(err)
    }
    fmt.Println("Image pulled:", image.Name())
    
    return image
}

func runContainer(
    ctx context.Context,
    client *containerd.Client,
    imageName string,
) {
    image := pullImage(ctx, client, imageName)

    id := uuid.New().String()
    cont, err := client.NewContainer(
        ctx,
        id,
        containerd.WithNewSnapshot(id, image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        panic(err)
    }
    
    fmt.Println("Container created:", cont.ID())
}

Listing containers:

package main
  
import (
    "context"
    "fmt"
    "os"
    
    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
    "github.com/google/uuid"
)

const sock = "/run/containerd/containerd.sock"

func main() {
    client, err := containerd.New(sock)
    if err != nil {
        panic(err)
    }
    defer client.Close()
    
    fmt.Println("Client created!")
    
    ctx := namespaces.WithNamespace(context.Background(), "my-ns")
    
    if os.Args[1] == "pull" {
        pullImage(ctx, client, os.Args[2])
    }
    
    if os.Args[1] == "run" {
        runContainer(ctx, client, os.Args[2])
    }
    
    if os.Args[1] == "ps" {
        listContainers(ctx, client)
    }
}

func pullImage(
    ctx context.Context,
    client *containerd.Client,
    imageName string,
) containerd.Image {
    image, err := client.Pull(
        ctx,
        imageName,
        containerd.WithPullUnpack,
    )
    if err != nil {
        panic(err)
    }
    fmt.Println("Image pulled:", image.Name())
    
    return image
}

func runContainer(
    ctx context.Context,
    client *containerd.Client,
    imageName string,
) {
    image := pullImage(ctx, client, imageName)

    id := uuid.New().String()
    cont, err := client.NewContainer(
        ctx,
        id,
        containerd.WithNewSnapshot(id, image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        panic(err)
    }
    
    fmt.Println("Container created:", cont.ID())
}

func listContainers(
    ctx context.Context,
    client *containerd.Client,
) {
    conts, err := client.Containers(ctx)
    if err != nil {
        panic(err)
    }
    fmt.Println("Found", len(conts), "container(s)")
    
    for _, c := range conts {
        image, err := c.Image(ctx)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\t%s\n", c.ID(), image.Name())
    }
}

More examples:

Review:

nerdctl

nerdctl is a Docker-compatible CLI for containerd.

Installing nerdctl

While in a killercoda playground (preferably on a worker node):

wget https://github.com/containerd/nerdctl/releases/download/v0.23.0/nerdctl-0.23.0-linux-amd64.tar.gz

tar -xvf nerdctl-0.23.0-linux-amd64.tar.gz

./nerdctl version
mv ./nerdctl /usr/local/bin/nerdctl

Basic usage

Interactive shell container (alpine):

# Should fail complaining about missing CNI bridge plugin
nerdctl run -it alpine

nerdctl run -it --network=host alpine
cat /etc/alpine-release

Install CNI plugins:

wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

mkdir -p /usr/libexec/cni
tar -xvf cni-plugins-linux-amd64-v1.1.1.tgz -C /usr/libexec/cni
file /usr/libexec/cni/bridge

Nginx daemon container:

nerdctl run -d -p 80 nginx:alpine
nerdctl ps

curl localhost:'PORT'

nerdctl stop 'ID_PREFIX'  # And it works!

Build (should fail due to missing BuildKit):

nerdctl build -t foo .

Kubernetes tricks

Watching Kubernetes workloads:

# From the worker node:
nerdctl namespace ls
nerdctl --namespace k8s.io images
nerdctl --namespace k8s.io ps -a

# Start a Pod that runs `sleep 9999` (e.g. with kubectl from the control plane),
# then back on the worker node:
nerdctl --namespace k8s.io ps -a  # And look for sleep 9999

Building images right on the Kubernetes node (might be handy during development):

nerdctl --namespace k8s.io build -t foo .

Advanced stuff

Running a container without an image:

# To isolate a process:
mkdir my-cont
cd my-cont
cp $(which kubectl) .
nerdctl run -it --rootfs $(pwd) /kubectl

lima

lima - Linux virtual machines, typically on macOS, for running containerd.

Prerequisites

A macOS machine (Intel or Arm). Lima works on Linux too, but the whole point is to reproduce a Docker Desktop-like UX on macOS.

uname -a

Installing lima

brew install lima
limactl start
lima uname -a

Using lima

Like nerdctl, but prefixed with lima (or use the nerdctl.lima alias):

lima nerdctl run -d --name nginx -p 127.0.0.1:8080:80 nginx:alpine
curl localhost:8080

You can also SSH into the VM:

limactl list
limactl shell default

uname -a
ps auxf

# Notice how:
#  - containerd works in the rootless mode
#  - BuildKit daemon is running
#  - stargz snapshotter plugin is running

crictl

Article on iximiuz.com. The lab can be done (ab)using a killercoda playground.

Playing with crictl

crictl

# Notice how Pods become first-class citizens.
# Notice there is no build command.

Listing Pods and containers:

crictl pods
crictl ps

Dissecting Pods

Article on iximiuz.com.

First, create a test workload:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: app
      image: docker.io/kennethreitz/httpbin
    - name: sidecar
      image: curlimages/curl
      command: ["/bin/sleep", "3650d"]
EOF

Jump to the worker node:

kubectl get nodes -o wide

ssh root@'node01-IP'

Look up python & sleep processes:

ps auxf

Look up the corresponding Pod and containers:

crictl pods
crictl ps

Take the first look at namespaces:

sudo lsns

# Notice how sleep & python processes are in dedicated
# `pid` and `mnt` namespaces.

Inspect app and sidecar containers:

crictl inspect 'APP_CONT_ID'
crictl inspect 'SIDECAR_CONT_ID'

# Notice how the `uts`, `net`, and `ipc` namespaces refer to
# the same /proc/<pid>/ns/... paths - they belong to the Pod's pause container.
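The same namespace-sharing check works outside Kubernetes: two processes live in the same namespace exactly when their `/proc/<pid>/ns/...` links point to the same inode. A quick local sketch (plain Linux shell, no cluster needed):

```shell
# List the namespace links of the current shell:
ls -l /proc/self/ns/

# A child process inherits its parent's namespaces,
# so the two links below show the same net:[...] inode:
sleep 30 &
readlink /proc/$$/ns/net /proc/$!/ns/net
kill $!
```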