---
tags: Publication
---

# Kubernetes Local Development

![Intro](https://i.imgur.com/JNAMwCX.jpg)

If you run your application/service on Kubernetes, you probably already have a Kubernetes cluster for development and probably staging, and you may already have encountered challenges in managing and promoting k8s and app code. In the past few years of running [Kubernetes](https://kubernetes.io/) clusters, it became clear that we had a few challenges to tackle. One of the biggest hurdles was to provide a local Kubernetes development environment for the Dev team.

## Context

One of the most overlooked aspects of development is immediate feedback. Even with a mature CI/CD pipeline, proper TDD/BDD testing, and clean code promotion across environments, every time the dev team updates a service or a Kubernetes YAML file and deploys it to the Dev cluster, the round trip takes 10 to 30 minutes. Waiting even for a single-line code change is a waste of productive time, and on top of that we are battling context switching and the time wasted refocusing on the task.

## The Problem

How can we eliminate dead time and give the development team fast feedback on any code or deployment changes before promoting them to the Dev k8s cluster?

One of the prerequisites is to have mature mock services, as not everybody can afford to run a full-fledged Kubernetes cluster locally with all services available on the spot. Assuming we can mock external dependencies, we need a flexible local Kubernetes cluster with minimal resource requirements that allows us to run and validate changes before promoting them further down the CI/CD pipeline.

## Solution

In this article, we will cover a few available options for running local development Kubernetes clusters. These options may not work for all use cases. For example, a DevOps/SRE team may need to work on CNI K8s policies that depend on the exact CNI and version being deployed, which some local Kubernetes solutions may not support.

We will cover [k3d](https://k3d.io/v4.4.8/), [kind](https://kind.sigs.k8s.io/), and [minikube](https://minikube.sigs.k8s.io/) as potential local Kubernetes clusters for rapid development.

### Minikube

[Minikube](https://minikube.sigs.k8s.io/) is a cross-platform, community-driven [Kubernetes](https://kubernetes.io/) distribution targeted primarily at local environments. Minikube supports the latest Kubernetes release, and you can also deploy older versions of Kubernetes up to 6+ previous minor versions. Minikube can be deployed and used on Linux, macOS, and Windows.

Minikube has three deployment modes: as a VM (Hyper-V, VMware, VirtualBox, QEMU/KVM, etc.), as a container, or on bare metal. Minikube supports multiple container runtimes (CRI-O, containerd, docker) and multiple features such as LoadBalancer, filesystem mounts, and FeatureGates.
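These options are all exposed as flags and subcommands on the `minikube` CLI. As a minimal sketch (the exact Kubernetes version number below is only illustrative), pinning a release, choosing a driver and runtime, and exercising a couple of the features mentioned above might look like this:

```shell=
# Pin an older Kubernetes release and pick the driver and runtime explicitly
~> minikube start --kubernetes-version=v1.19.14 --driver=docker --container-runtime=containerd

# Expose LoadBalancer services to the host (runs in the foreground in a separate terminal)
~> minikube tunnel

# Mount a host directory into the minikube node
~> minikube mount $HOME/src:/src
```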
In the example below, I will cover minikube running Kubernetes in Docker containers on Linux; for Windows, a few years back I posted a short tutorial on running [minikube in Windows Hyper-V](https://mudrii.medium.com/kubernetes-local-development-with-minikube-on-hyper-v-windows-10-75f52ad1ed42).

#### [Minikube](https://minikube.sigs.k8s.io/) Kubernetes in docker

By default, minikube will run the latest Kubernetes in a docker container if you do not specify a VM driver.

```shell=
~> minikube start
minikube v1.22.0 on Nixos 21.05.3248.6120ac5cd20 (Okapi)
Automatically selected the docker driver. Other choices: kvm2, none, ssh
Starting control plane node minikube in cluster minikube
Pulling base image ...
Creating docker container (CPUs=2, Memory=15900MB) ...
Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
Generating certificates and keys ...
Booting up control plane ...
Configuring RBAC rules ...
Verifying Kubernetes components...
Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: storage-provisioner, default-storageclass
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
minikube>~>
```

Once you run `minikube start`, you will have a single-node cluster running in docker.

```shell=
minikube>~> docker ps -a
CONTAINER ID   IMAGE                                 COMMAND                  CREATED         STATUS         PORTS                                                                                                                                  NAMES
2c70e46a9a73   gcr.io/k8s-minikube/kicbase:v0.0.25   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes   127.0.0.1:49162->22/tcp, 127.0.0.1:49161->2376/tcp, 127.0.0.1:49160->5000/tcp, 127.0.0.1:49159->8443/tcp, 127.0.0.1:49158->32443/tcp   minikube
```

Kubernetes version and services:

```shell=
minikube>~> kubectl version --v=10
I0920 14:42:50.480948   37492 loader.go:372] Config loaded from file:  /home/mudrii/.kube/config
I0920 14:42:50.481396   37492 round_trippers.go:435] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.22.1 (linux/amd64) kubernetes/632ed30" 'https://192.168.49.2:8443/version?timeout=32s'
I0920 14:42:50.481401   37492 cert_rotation.go:137] Starting client certificate rotation controller
I0920 14:42:50.486055   37492 round_trippers.go:454] GET https://192.168.49.2:8443/version?timeout=32s 200 OK in 4 milliseconds
I0920 14:42:50.486062   37492 round_trippers.go:460] Response Headers:
I0920 14:42:50.486065   37492 round_trippers.go:463]     Cache-Control: no-cache, private
I0920 14:42:50.486068   37492 round_trippers.go:463]     Content-Type: application/json
I0920 14:42:50.486070   37492 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 3ecc2902-bedc-48a3-ab73-67960d80ad8f
I0920 14:42:50.486072   37492 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 0518e829-243c-4582-a699-250fe4b0be5e
I0920 14:42:50.486074   37492 round_trippers.go:463]     Content-Length: 263
I0920 14:42:50.486076   37492 round_trippers.go:463]     Date: Mon, 20 Sep 2021 06:42:50 GMT
I0920 14:42:50.492363   37492 request.go:1181] Response Body: {
  "major": "1",
  "minor": "21",
  "gitVersion": "v1.21.2",
  "gitCommit": "092fbfbf53427de67cac1e9fa54aaa09a28371d7",
  "gitTreeState": "clean",
  "buildDate": "2021-06-16T12:53:14Z",
  "goVersion": "go1.16.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"archive", BuildDate:"1980-01-01T00:00:00Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
```

```shell=
minikube>~> kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   13m   v1.21.2
```

```shell=
minikube>~> kubectl get po -ALL
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE   L
kube-system   coredns-558bd4d5db-jn4zr           1/1     Running   0          13m
kube-system   etcd-minikube                      1/1     Running   0          13m
kube-system   kube-apiserver-minikube            1/1     Running   0          13m
kube-system   kube-controller-manager-minikube   1/1     Running   0          13m
kube-system   kube-proxy-ct8hm                   1/1     Running   0          13m
kube-system   kube-scheduler-minikube            1/1     Running   0          13m
kube-system   storage-provisioner                1/1     Running   1          13m
```

```shell=
minikube>~> kubectl get services -ALL
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   L
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  14m
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   14m
```
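At this point you can iterate against the local cluster the same way you would against a remote one. As a quick sketch of that loop (the nginx image here is just a stand-in for your own service), deploying a throwaway workload and reaching it from the host looks roughly like this:

```shell=
# Deploy a throwaway workload and expose it as a NodePort service
minikube>~> kubectl create deployment hello --image=nginx
minikube>~> kubectl expose deployment hello --port=80 --type=NodePort

# Ask minikube for a URL that reaches the service from the host
minikube>~> minikube service hello --url
```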
Running minikube Kubernetes in docker allows you to run multiple nodes with multiple configuration options for CNI, CPU, runtime, etc., as in the example below.

_Note: running a four-node cluster with Calico as the CNI and the containerd runtime._

```shell=
~> minikube start -n=4 --cni='calico' --container-runtime='containerd'
minikube v1.22.0 on Nixos 21.05.3248.6120ac5cd20 (Okapi)
Automatically selected the docker driver. Other choices: kvm2, none, ssh
Starting control plane node minikube in cluster minikube
Pulling base image ...
Downloading Kubernetes v1.21.2 preload ...
    > preloaded-images-k8s-v11-v1...: 922.45 MiB / 922.45 MiB  100.00% 53.15 Mi
Creating docker container (CPUs=2, Memory=3975MB) ...
Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
Generating certificates and keys ...
Booting up control plane ...
Configuring RBAC rules ...
Configuring Calico (Container Networking Interface) ...
Verifying Kubernetes components...
Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: storage-provisioner, default-storageclass
Starting node minikube-m02 in cluster minikube
Pulling base image ...
Creating docker container (CPUs=2, Memory=3975MB) ...
Found network options: NO_PROXY=192.168.49.2
Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
    env NO_PROXY=192.168.49.2
Verifying Kubernetes components...
Starting node minikube-m03 in cluster minikube
Pulling base image ...
Creating docker container (CPUs=2, Memory=3975MB) ...
Found network options: NO_PROXY=192.168.49.2,192.168.49.3
Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
    env NO_PROXY=192.168.49.2
    env NO_PROXY=192.168.49.2,192.168.49.3
Verifying Kubernetes components...
Starting node minikube-m04 in cluster minikube
Pulling base image ...
Creating docker container (CPUs=2, Memory=3975MB) ...
Found network options: NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
    env NO_PROXY=192.168.49.2
    env NO_PROXY=192.168.49.2,192.168.49.3
    env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
Verifying Kubernetes components...
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
```

```shell=
minikube>~> kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
minikube       Ready    control-plane,master   4m36s   v1.21.2
minikube-m02   Ready    <none>                 3m36s   v1.21.2
minikube-m03   Ready    <none>                 2m55s   v1.21.2
minikube-m04   Ready    <none>                 2m11s   v1.21.2
```

```shell=
minikube>~> docker ps -a
CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS          PORTS                                                                                                                                  NAMES
ac473f83f900   gcr.io/k8s-minikube/kicbase:v0.0.25   "/usr/local/bin/entr…"   11 minutes ago   Up 11 minutes   127.0.0.1:49192->22/tcp, 127.0.0.1:49191->2376/tcp, 127.0.0.1:49190->5000/tcp, 127.0.0.1:49189->8443/tcp, 127.0.0.1:49188->32443/tcp   minikube-m04
d214ea1bd215   gcr.io/k8s-minikube/kicbase:v0.0.25   "/usr/local/bin/entr…"   12 minutes ago   Up 12 minutes   127.0.0.1:49187->22/tcp, 127.0.0.1:49186->2376/tcp, 127.0.0.1:49185->5000/tcp, 127.0.0.1:49184->8443/tcp, 127.0.0.1:49183->32443/tcp   minikube-m03
b134ac6d11e1   gcr.io/k8s-minikube/kicbase:v0.0.25   "/usr/local/bin/entr…"   13 minutes ago   Up 13 minutes   127.0.0.1:49182->22/tcp, 127.0.0.1:49181->2376/tcp, 127.0.0.1:49180->5000/tcp, 127.0.0.1:49179->8443/tcp, 127.0.0.1:49178->32443/tcp   minikube-m02
3479fa2fb55e   gcr.io/k8s-minikube/kicbase:v0.0.25   "/usr/local/bin/entr…"   14 minutes ago   Up 14 minutes   127.0.0.1:49177->22/tcp, 127.0.0.1:49176->2376/tcp, 127.0.0.1:49175->5000/tcp, 127.0.0.1:49174->8443/tcp, 127.0.0.1:49173->32443/tcp   minikube
```

_Note: In the above example, we have one master node and three worker nodes you can use for testing. Make sure you have enough resources to allocate for running the minikube nodes._
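The node count is not fixed at creation time either; if you later need to grow or shrink the cluster, minikube can add and remove nodes in place. A small sketch of that workflow (the node name in the delete command is hypothetical and would match whatever `minikube node list` reports):

```shell=
# Add another worker node to the running cluster
minikube>~> minikube node add

# List the nodes minikube manages, then remove one by name
minikube>~> minikube node list
minikube>~> minikube node delete minikube-m05
```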
Minikube comes with multiple addons we can use.

```shell=
minikube>~> minikube addons list
|-----------------------------|----------|--------------|-----------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |      MAINTAINER       |
|-----------------------------|----------|--------------|-----------------------|
| ambassador                  | minikube | disabled     | unknown (third-party) |
| auto-pause                  | minikube | disabled     | google                |
| csi-hostpath-driver         | minikube | disabled     | kubernetes            |
| dashboard                   | minikube | disabled     | kubernetes            |
| default-storageclass        | minikube | enabled      | kubernetes            |
| efk                         | minikube | disabled     | unknown (third-party) |
| freshpod                    | minikube | disabled     | google                |
| gcp-auth                    | minikube | disabled     | google                |
| gvisor                      | minikube | disabled     | google                |
| helm-tiller                 | minikube | disabled     | unknown (third-party) |
| ingress                     | minikube | disabled     | unknown (third-party) |
| ingress-dns                 | minikube | disabled     | unknown (third-party) |
| istio                       | minikube | disabled     | unknown (third-party) |
| istio-provisioner           | minikube | disabled     | unknown (third-party) |
| kubevirt                    | minikube | disabled     | unknown (third-party) |
| logviewer                   | minikube | disabled     | google                |
| metallb                     | minikube | disabled     | unknown (third-party) |
| metrics-server              | minikube | disabled     | kubernetes            |
| nvidia-driver-installer     | minikube | disabled     | google                |
| nvidia-gpu-device-plugin    | minikube | disabled     | unknown (third-party) |
| olm                         | minikube | disabled     | unknown (third-party) |
| pod-security-policy         | minikube | disabled     | unknown (third-party) |
| registry                    | minikube | disabled     | google                |
| registry-aliases            | minikube | disabled     | unknown (third-party) |
| registry-creds              | minikube | disabled     | unknown (third-party) |
| storage-provisioner         | minikube | enabled      | kubernetes            |
| storage-provisioner-gluster | minikube | disabled     | unknown (third-party) |
| volumesnapshots             | minikube | disabled     | kubernetes            |
|-----------------------------|----------|--------------|-----------------------|
```

```shell=
minikube>~> minikube addons enable dashboard
Using image kubernetesui/metrics-scraper:v1.0.4
Using image kubernetesui/dashboard:v2.1.0
Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube addons enable metrics-server

The 'dashboard' addon is enabled
```

```shell=
minikube>~> minikube dashboard --url
Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...
http://127.0.0.1:44069/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
```

![Dashboard](https://i.imgur.com/S2kS9ja.png)
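One practical note before moving to the VM driver in the next section: a minikube cluster keeps the driver it was created with, so to try the KVM setup you either delete the docker-based cluster first or keep both side by side under separate profiles. A rough sketch (the profile name `kvm-lab` is just an example):

```shell=
# Either remove the docker-based cluster ...
~> minikube delete

# ... or keep it and create a second cluster under its own profile
~> minikube start -p kvm-lab --driver=kvm2 -n=3
~> minikube profile list

# Switch the minikube CLI and kubectl context to the profile you want to work on
~> minikube profile kvm-lab
```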
#### [Minikube](https://minikube.sigs.k8s.io/) Kubernetes in VM (QEMU/KVM)

If you intend to run Kubernetes on your laptop or desktop using virtualisation, you can run minikube in a VM by specifying `--vm-driver`. In the example below, I am running minikube in Linux KVM. This solution is more permanent and implies you have a high-end laptop or desktop with enough resources to spare for running Kubernetes.

```shell=
~> minikube start -n=3 --cni='calico' --container-runtime='containerd' --vm-driver kvm2
minikube v1.22.0 on Nixos 21.05.3248.6120ac5cd20 (Okapi)
Using the kvm2 driver based on user configuration
Starting control plane node minikube in cluster minikube
Creating kvm2 VM (CPUs=2, Memory=5300MB, Disk=20000MB) ...
Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
Generating certificates and keys ...
Booting up control plane ...
Configuring RBAC rules ...
Configuring Calico (Container Networking Interface) ...
Verifying Kubernetes components...
Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: default-storageclass, storage-provisioner
Starting node minikube-m02 in cluster minikube
Creating kvm2 VM (CPUs=2, Memory=5300MB, Disk=20000MB) ...
Found network options: NO_PROXY=192.168.39.165
Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
    env NO_PROXY=192.168.39.165
Verifying Kubernetes components...
Starting node minikube-m03 in cluster minikube
Creating kvm2 VM (CPUs=2, Memory=5300MB, Disk=20000MB) ...
Found network options: NO_PROXY=192.168.39.165,192.168.39.171
Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
    env NO_PROXY=192.168.39.165
    env NO_PROXY=192.168.39.165,192.168.39.171
Verifying Kubernetes components...
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
```

```shell=
minikube>~> kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
minikube       Ready    control-plane,master   2m59s   v1.21.2
minikube-m02   Ready    <none>                 111s    v1.21.2
minikube-m03   Ready    <none>                 59s     v1.21.2
```

![Virtual Machine Manager](https://i.imgur.com/SI48vyk.png)

```shell=
minikube>~> virsh list --all
 Id   Name           State
-------------------------------
 2    minikube       running
 3    minikube-m02   running
 4    minikube-m03   running
 -    nixos          shut off
```

Now we can use the dashboard and metrics-server addons to monitor the worker nodes.

```shell=
minikube>~> minikube addons enable dashboard
Using image kubernetesui/dashboard:v2.1.0
Using image kubernetesui/metrics-scraper:v1.0.4
Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube addons enable metrics-server

The 'dashboard' addon is enabled
```

```shell=
minikube>~> minikube addons enable metrics-server
Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
The 'metrics-server' addon is enabled
```

```shell=
minikube>~> minikube dashboard --url
Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...
http://127.0.0.1:43507/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
```

![Kubernetes Dashboard VM machines](https://i.imgur.com/SdUWTxh.png)

Minikube is an excellent and very flexible solution for running Kubernetes on a local machine or in VMs. It comes preconfigured with many modules that can be enabled/installed with a single command. It is, however, quite heavy, and if you need speed over features, you may want to take a look at k3d or kind. Minikube works best for DevOps/SRE teams that need to test CNI policies and RBAC and need an environment as close as possible to the Kubernetes running in Production/Dev/Staging.

### [Kind](https://kind.sigs.k8s.io/)

"Kind is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI."

Kind is trying to solve the same problem as minikube by creating a local development cluster. Kind is faster than minikube as it only deals with containers as hosts and has less built-in functionality.

"kind or Kubernetes in docker is a suite of tooling for local Kubernetes “clusters” where each “node” is a Docker container. kind is targeted at testing Kubernetes."

For reference, the kind architecture:

![kind architecture](https://i.imgur.com/2cLhNee.png)

Kind is written entirely in Go; to install it, you either need the Go toolchain or you can download a binary that contains all statically linked dependencies. Kind can be installed on Linux, macOS, and Windows.

Kind installation:

```shell=
GO111MODULE="on" go get sigs.k8s.io/kind@v0.11.1 && kind create cluster
```

Another option is to download the binary directly from the [github kind repository](https://github.com/kubernetes-sigs/kind/releases):

```shell=
~> curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
~> chmod +x ./kind
~> mv ./kind /some-dir-in-your-PATH/kind
```

Once you have kind installed, you can create a new cluster with:

```shell=
~> kind create cluster
Creating cluster "kind" ...
 Ensuring node image (kindest/node:v1.21.1)
 Preparing nodes
 Writing configuration
 Starting control-plane
 Installing CNI
 Installing StorageClass
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community
```

By default, kind will create a single node that plays the role of both master and worker. Once the Kubernetes cluster is created, kind automatically configures the kubectl context.

```shell=
kind-kind>~> kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:39827
CoreDNS is running at https://127.0.0.1:39827/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

```shell=
kind-kind>~> kubectl get nodes
NAME                 STATUS   ROLES                  AGE   VERSION
kind-control-plane   Ready    control-plane,master   72s   v1.21.1
```

```shell=
kind-kind>~> docker ps -a
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                       NAMES
7892a9aa93a1   kindest/node:v1.21.1   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes   127.0.0.1:39827->6443/tcp   kind-control-plane
```
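Because the kind "node" is itself a Docker container, locally built images do not have to go through a registry; kind can side-load them into the node. A small sketch of that inner loop (the `my-app:dev` tag is just a placeholder for your own image):

```shell=
# Build the service image locally, then copy it into the kind node
kind-kind>~> docker build -t my-app:dev .
kind-kind>~> kind load docker-image my-app:dev

# Pods can now reference my-app:dev without pulling from a registry
kind-kind>~> kubectl create deployment my-app --image=my-app:dev
```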
Kind is very customizable. You can configure kind to run in a multi-node setup; you will need to create a kind configuration file in YAML format, as in the example below.

An example kind configuration file:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
```

Kind multi-node cluster creation:

```shell=
kind-kind>~> kind create cluster --config kind.yaml
Creating cluster "kind" ...
 Ensuring node image (kindest/node:v1.21.1)
 Preparing nodes
 Configuring the external load balancer
 Writing configuration
 Starting control-plane
 Installing CNI
 Installing StorageClass
 Joining more control-plane nodes
 Joining worker nodes
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day!
```

```shell=
kind-kind>~> docker ps -a
CONTAINER ID   IMAGE                                COMMAND                  CREATED         STATUS         PORTS                                                                 NAMES
ced47990f3ca   kindest/haproxy:v20200708-548e36db   "/docker-entrypoint.…"   7 minutes ago   Up 7 minutes   127.0.0.1:42437->6443/tcp                                             kind-external-load-balancer
f71f59a44ef9   kindest/node:v1.21.1                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes   127.0.0.1:32955->6443/tcp                                             kind-control-plane2
d341ee581b9c   kindest/node:v1.21.1                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes                                                                         kind-worker2
6cd893350528   kindest/node:v1.21.1                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes                                                                         kind-worker3
c75e38bfaa48   kindest/node:v1.21.1                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes                                                                         kind-worker
f3290aa18692   kindest/node:v1.21.1                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes   127.0.0.1:42019->6443/tcp                                             kind-control-plane3
f5095a2fde95   kindest/node:v1.21.1                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 127.0.0.1:42303->6443/tcp   kind-control-plane
```

```shell=
kind-kind>~> kubectl get nodes
NAME                  STATUS   ROLES                  AGE     VERSION
kind-control-plane    Ready    control-plane,master   7m16s   v1.21.1
kind-control-plane2   Ready    control-plane,master   6m48s   v1.21.1
kind-control-plane3   Ready    control-plane,master   5m57s   v1.21.1
kind-worker           Ready    <none>                 5m37s   v1.21.1
kind-worker2          Ready    <none>                 5m37s   v1.21.1
kind-worker3          Ready    <none>                 5m37s   v1.21.1
```

_Note: by default, kind installs a simple networking implementation ("kindnetd") as the CNI, but many other CNI manifests are known to work too, such as Calico, Weave, and Flannel._
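If you do want to exercise one of those CNIs, kind lets you disable the default CNI in the cluster config and install your own afterwards. A minimal sketch, assuming the commonly used upstream Calico manifest URL (the config file name is hypothetical):

```yaml
# kind-no-cni.yaml - disables kindnetd so another CNI can be installed
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
nodes:
- role: control-plane
- role: worker
```

```shell=
~> kind create cluster --config kind-no-cni.yaml
# Install Calico; the manifest URL may change over time
~> kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```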
Kind is a good alternative to minikube and works best for developers who need a quickly deployable Kubernetes cluster for testing in a local dev environment.

### [K3d](https://k3d.io/)

K3d was developed as a wrapper to run [k3s](https://github.com/rancher/k3s) in docker containers. [k3s](https://github.com/rancher/k3s) (Rancher Lab’s minimal Kubernetes distribution) was designed as a very light Kubernetes distribution for edge locations, IoT, or ARM platforms where every CPU cycle counts.

![K3S Architecture](https://i.imgur.com/avsCJp6.png)

K3d simplifies Kubernetes cluster creation significantly for single- or multi-node k3s clusters in docker.

K3s has a very small footprint, and it provides many Kubernetes components:

- [Containerd](https://containerd.io/) and [runc](https://github.com/opencontainers/runc)
- [Flannel](https://github.com/coreos/flannel) for CNI
- [CoreDNS](https://coredns.io/)
- [Metrics Server](https://github.com/kubernetes-sigs/metrics-server)
- [Traefik](https://containo.us/traefik/) for ingress
- [Klipper-lb](https://github.com/k3s-io/klipper-lb) as an embedded service load balancer provider
- [Kube-router](https://www.kube-router.io/) for network policy
- [Helm-controller](https://github.com/k3s-io/helm-controller) to allow for CRD-driven deployment of helm manifests
- [Kine](https://github.com/k3s-io/kine) as a datastore shim that allows etcd to be replaced with other databases
- [Local-path-provisioner](https://github.com/rancher/local-path-provisioner) for provisioning volumes using local storage
- [Host utilities](https://github.com/k3s-io/k3s-root) such as iptables/nftables, ebtables, ethtool, and socat

K3d is the fastest Kubernetes cluster creation tool compared with kind and minikube. K3d, similar to kind, runs only on docker and uses a YAML configuration file. K3d can be deployed on Windows, macOS, and Linux.

Example deployment on Linux:

```shell=
~> wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
# Or
~> curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v4.0.0 bash
```

Creating a new Kubernetes cluster is as easy as running:

```shell=
~> k3d cluster create mycluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster' (4bb8f280f158cc9fc0a7b22268483e10f045641230f5e8df518fce5ebb4733af)
INFO[0000] Created volume 'k3d-mycluster-images'
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0001] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0001] Starting cluster 'mycluster'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-mycluster-server-0'
INFO[0006] Starting agents...
INFO[0006] Starting helpers...
INFO[0006] Starting Node 'k3d-mycluster-serverlb'
INFO[0007] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0011] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0011] Cluster 'mycluster' created successfully!
INFO[0011] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0011] You can now use it like this:
kubectl config use-context k3d-mycluster
kubectl cluster-info
```

_Note: if you started a k3d Kubernetes cluster and it is unresponsive, you may check the logs with `docker logs k3d-mycluster-server-0`; if you see the error:_

```
I0920 14:01:59.608876       7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288
F0920 14:01:59.608924       7 server.go:495] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied
```

_you will need to start k3d with the settings below._

```shell=
~> k3d cluster create \
    --k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
    --k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0"
```

The Kubernetes cluster should be up and running in no time.

```shell=
k3d-k3s-default>~> docker ps -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED              STATUS              PORTS                             NAMES
11786bfb7cdb   rancher/k3d-proxy:v4.4.7   "/bin/sh -c nginx-pr…"   About a minute ago   Up About a minute   80/tcp, 0.0.0.0:45107->6443/tcp   k3d-k3s-default-serverlb
3431f9bd178e   rancher/k3s:v1.20.6-k3s1   "/bin/entrypoint.sh …"   About a minute ago   Up About a minute                                     k3d-k3s-default-server-0
```

```shell=
k3d-k3s-default>~> kubectl get nodes
NAME                       STATUS   ROLES                  AGE    VERSION
k3d-k3s-default-server-0   Ready    control-plane,master   113s   v1.20.6+k3s1
```

```shell=
k3d-k3s-default>~> kubectl get po -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-5ff76fc89d-pxtzj   1/1     Running     0          107s
kube-system   metrics-server-86cbb8457f-26kqq           1/1     Running     0          107s
kube-system   coredns-854c77959c-7mg8g                  1/1     Running     0          107s
kube-system   helm-install-traefik-8ktk2                0/1     Completed   0          107s
kube-system   svclb-traefik-qxndx                       2/2     Running     0          78s
kube-system   traefik-6f9cbd9bd4-mqkgc                  1/1     Running     0          78s
```
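As with kind, locally built images can be pushed straight into the running k3d cluster instead of going through a registry, which keeps the feedback loop short. A small sketch (the `my-app:dev` tag is a placeholder; `k3s-default` is the name k3d gives a cluster created without an explicit name):

```shell=
# Import a locally built image into the default k3d cluster
k3d-k3s-default>~> docker build -t my-app:dev .
k3d-k3s-default>~> k3d image import my-app:dev -c k3s-default
```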
K3d, similar to kind, allows you to configure clusters the way you want by creating a YAML configuration file, as in the example below.

```yaml=
kind: Simple
apiVersion: k3d.io/v1alpha2
name: my-cluster
image: rancher/k3s:v1.20.4-k3s1
servers: 3
agents: 3
ports:
  - port: 80:80
    nodeFilters:
      - loadbalancer
```

```shell=
k3d-k3s-default>~> k3d cluster create \
    --k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
    --k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
    --config k3d.yaml
INFO[0000] Using config file k3d.yaml
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-my-cluster' (c8c4a2763f2b4a613189f236c71af10d2a6411d68fec3f0ae3e57617d37ad5e2)
INFO[0000] Created volume 'k3d-my-cluster-images'
INFO[0000] Creating initializing server node
INFO[0000] Creating node 'k3d-my-cluster-server-0'
INFO[0003] Pulling image 'rancher/k3s:v1.20.4-k3s1'
INFO[0017] Creating node 'k3d-my-cluster-server-1'
INFO[0018] Creating node 'k3d-my-cluster-server-2'
INFO[0018] Creating node 'k3d-my-cluster-agent-0'
INFO[0018] Creating node 'k3d-my-cluster-agent-1'
INFO[0018] Creating node 'k3d-my-cluster-agent-2'
INFO[0018] Creating LoadBalancer 'k3d-my-cluster-serverlb'
INFO[0018] Starting cluster 'my-cluster'
INFO[0018] Starting the initializing server...
INFO[0019] Starting Node 'k3d-my-cluster-server-0'
INFO[0019] Starting servers...
INFO[0019] Starting Node 'k3d-my-cluster-server-1'
INFO[0039] Starting Node 'k3d-my-cluster-server-2'
INFO[0054] Starting agents...
INFO[0054] Starting Node 'k3d-my-cluster-agent-0'
INFO[0061] Starting Node 'k3d-my-cluster-agent-1'
INFO[0069] Starting Node 'k3d-my-cluster-agent-2'
INFO[0076] Starting helpers...
INFO[0076] Starting Node 'k3d-my-cluster-serverlb'
INFO[0077] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0086] Successfully added host record to /etc/hosts in 7/7 nodes and to the CoreDNS ConfigMap
INFO[0086] Cluster 'my-cluster' created successfully!
INFO[0086] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0086] You can now use it like this:
kubectl config use-context k3d-my-cluster
kubectl cluster-info
```

```shell=
k3d-k3s-default>~> kubectl get nodes
NAME                      STATUS   ROLES                       AGE   VERSION
k3d-my-cluster-agent-0    Ready    <none>                      31s   v1.20.4+k3s1
k3d-my-cluster-agent-1    Ready    <none>                      24s   v1.20.4+k3s1
k3d-my-cluster-agent-2    Ready    <none>                      16s   v1.20.4+k3s1
k3d-my-cluster-server-0   Ready    control-plane,etcd,master   64s   v1.20.4+k3s1
k3d-my-cluster-server-1   Ready    control-plane,etcd,master   50s   v1.20.4+k3s1
k3d-my-cluster-server-2   Ready    control-plane,etcd,master   36s   v1.20.4+k3s1
```

```shell=
k3d-k3s-default>~> k3d cluster list
NAME         SERVERS   AGENTS   LOADBALANCER
my-cluster   3/3       3/3      true
```

```shell=
k3d-k3s-default>~> docker ps -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS                                         NAMES
62b10f1477c3   rancher/k3d-proxy:v4.4.7   "/bin/sh -c nginx-pr…"   3 minutes ago   Up 2 minutes   0.0.0.0:80->80/tcp, 0.0.0.0:34057->6443/tcp   k3d-my-cluster-serverlb
f5edcf43f303   rancher/k3s:v1.20.4-k3s1   "/bin/entrypoint.sh …"   3 minutes ago   Up 2 minutes                                                 k3d-my-cluster-agent-2
8d12afe08cde   rancher/k3s:v1.20.4-k3s1   "/bin/entrypoint.sh …"   3 minutes ago   Up 2 minutes                                                 k3d-my-cluster-agent-1
7d4d12e7a1d7   rancher/k3s:v1.20.4-k3s1   "/bin/entrypoint.sh …"   3 minutes ago   Up 2 minutes                                                 k3d-my-cluster-agent-0
1ba6883505f2   rancher/k3s:v1.20.4-k3s1   "/bin/entrypoint.sh …"   3 minutes ago   Up 3 minutes                                                 k3d-my-cluster-server-2
ebf43e7cbf84   rancher/k3s:v1.20.4-k3s1   "/bin/entrypoint.sh …"   3 minutes ago   Up 3 minutes                                                 k3d-my-cluster-server-1
127c50851a1c   rancher/k3s:v1.20.4-k3s1   "/bin/entrypoint.sh …"   3 minutes ago   Up 3 minutes                                                 k3d-my-cluster-server-0
```

In the above example, we created 3 server (master) nodes and 3 agent (worker) nodes. Now we can start deploying and testing our application/service in the newly created cluster.

## Conclusions

Based on my experience using all three local development Kubernetes cluster options:

- minikube works best for DevOps/SRE teams who need to configure clusters in detail (CNI, policies, etc.) and test locally.
- kind and k3d work best for development teams who want quick validation and immediate feedback on microservice code changes or on Kubernetes Deployment, Service, ConfigMap, or Secret changes before promoting the code to the Dev/Staging environment.
