slide: kind.k8s.work
curl -L kind.k8s.work/fetch | bash
# Windows: curl.exe -L kind.k8s.work/fetch | cmd
docker run --rm -i -v /tmp:/tmp --network=host quay.io/kind-workshop/kind-fetch
Note: Using Windows? Be sure to use curl.exe and pipe to cmd
A few questions:

Already pulled quay.io/kind-workshop/kind-cache? You don't need to do this! Also, THANKS!
curl -L kind.k8s.work/fetch | bash
# Windows: curl.exe -L kind.k8s.work/fetch | cmd
This will download a flattened image that has all the bits needed for the workshop.
It does this using aria2, downloading from 10 servers in us-west-1 in parallel.
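A rough sketch of the kind of multi-source fetch the script performs (the mirror URLs and filename below are made up for illustration; the real script's mechanics may differ):

aria2c --split=10 --max-connection-per-server=4 -o kind-cache.tar \
  https://mirror1.example.com/kind-cache.tar \
  https://mirror2.example.com/kind-cache.tar
docker load -i kind-cache.tar   # one way the resulting tarball could be loaded into the local Docker daemon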
What                            Size
--------------------------------------
kind-buildenv:1.13              1.37GB
kindest/base:latest             329MB
etcd:3.3.10                     258MB
kube-cross:v1.12.7-1            1.75GB
coredns:1.3.1                   40.3MB
kindnetd:0.5.0                  81.8MB
debian-iptables-amd64:v11.0.2   45.4MB
debian-base-amd64:v1.0.0        42.3MB
pause:3.1                       742kB
In real life you would have all of Kubernetes checked out and would likely be running kind
from your laptop via
brew install kind
choco install kind
Once all of the resources are downloaded we can do this entire lab offline.
go get -d k8s.io/kubernetes
This will take a minute.
Why not github.com/kubernetes/kubernetes?
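Because Kubernetes imports its own packages by the k8s.io/kubernetes import path, the checkout has to live at that path under GOPATH; a plain clone of github.com/kubernetes/kubernetes would land at the wrong import path. A quick check (paths match the buildenv container used below):

echo $GOPATH                       # /go inside the buildenv container
ls $GOPATH/src/k8s.io/kubernetes   # go get -d puts the checkout here, where the import path expects it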
For this session, we've asked everyone to download an image ahead of time. This isn't what you'd do at home, but it keeps us from waiting on wifi today :)
Let's start up the cache and get its IP:
$ curl -L kind.k8s.work/cache
docker run --rm --name kind-cache -v /var/run/docker.sock:/var/run/docker.sock quay.io/kind-workshop/kind-cache
Note: Using PowerShell? Be sure to use curl.exe
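One way to grab the cache container's IP once it's up (this isn't from the workshop script, just a standard Docker approach):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-cache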
The buildenv image has golang, git, bazel, and all the other tools you need ready to go.
$ curl -L kind.k8s.work/buildenv
docker run --rm -d --name buildenv --hostname buildenv -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp quay.io/kind-workshop/kind-buildenv:1.13
Once that's started, shell into it:
docker exec -it buildenv bash
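Once inside, a quick sanity check that the toolchain is really there (exact versions will vary with the image):

go version && git --version && bazel version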
root@buildenv:/go/src/k8s.io/kubernetes# git log
commit 97c4edaa4f28f914db45766330b6db8953720588 (HEAD -> fix)
Author: louisssgong <louisssgong@tencent.com>
Date: Sat Aug 3 17:44:12 2019 +0800
Fix a bug in the IPVS proxier where virtual servers are not cleaned up even though the corresponding Service object was deleted.
kind build node-image --image=local:master
cd /go
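The workshop's kind-config.yaml isn't reproduced in these notes. Since the demo exercises the IPVS proxier, the cluster has to run kube-proxy in IPVS mode; a hedged sketch of such a config, using field names from kind's current v1alpha4 API (the workshop's kind may have used an older config format or kubeadm patches instead):

cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  kubeProxyMode: "ipvs"   # run kube-proxy in IPVS mode so kube-ipvs0 exists
EOF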
kind create cluster --config kind-config.yaml --image=local:master
kind get kubeconfig --internal > ~/.kube/config
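# --internal writes a kubeconfig pointing at the control plane's address on the Docker network, which is what we need since kubectl here runs inside the buildenv container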
kubectl get pods -Aw
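# -A watches all namespaces, -w streams updates; wait until everything is Running, then Ctrl-C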
kubectl create deploy test --image=k8s.gcr.io/pause:3.1
kubectl expose deploy test --port 80
docker exec -ti kind-control-plane ip addr show dev kube-ipvs0
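# in IPVS mode every Service ClusterIP gets bound to the kube-ipvs0 dummy interface, so the new test service's IP should appear here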
kubectl get svc -o wide
kubectl delete svc test
docker exec -ti kind-control-plane ip addr show dev kube-ipvs0
Hmm, the service's address should have been removed from kube-ipvs0, but it wasn't.
kind delete cluster
kind build node-image --image=local:fix
cd /go
kind create cluster --config kind-config.yaml --image=local:fix
kind get kubeconfig --internal > ~/.kube/config
kubectl get pods -Aw
kubectl create deploy test --image=k8s.gcr.io/pause:3.1
kubectl expose deploy test --port 80
docker exec -ti kind-control-plane ip addr show dev kube-ipvs0
kubectl get svc -o wide
kubectl delete svc test
docker exec -ti kind-control-plane ip addr show dev kube-ipvs0
This time the service's address is gone from kube-ipvs0: the fix works.
docker stop buildenv
docker system prune --volumes
docker system prune
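# prune removes stopped containers, unused networks, and dangling images; --volumes additionally removes unused local volumes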
KinD