Airshipctl CAPI Docker Integration (CAPD) [WIP]

This document shows how to use airshipctl with the Cluster API Docker provider (CAPD) and kind to create a Cluster API management Kubernetes cluster and a target cluster with 1 control plane node and 3 worker nodes.

Getting Started

Build Airshipctl

$ git clone https://review.opendev.org/airship/airshipctl
$ cd airshipctl
$ ./tools/deployment/21_systemwide_executable.sh

Setup Airship Config

$ cat ~/.airship/config

apiVersion: airshipit.org/v1alpha1
managementConfiguration:
  dummy_management_config:
    type: redfish
    insecure: true
    useproxy: false
    systemActionRetries: 30
    systemRebootDelay: 30
contexts:
  ephemeral-cluster:
    contextKubeconf: ephemeral-cluster_ephemeral
    manifest: dummy_manifest
    managementConfiguration: dummy_management_config
  target-cluster:
    contextKubeconf: target-cluster_target
    manifest: dummy_manifest
    managementConfiguration: dummy_management_config
currentContext: ephemeral-cluster
kind: Config
manifests:
  dummy_manifest:
    phaseRepositoryName: primary
    repositories:
      primary:
        checkout:
          branch: master
          force: false
          remoteRef: ""
          tag: ""
        url: https://review.opendev.org/airship/airshipctl
    metadataPath: manifests/site/docker-test-site/metadata.yaml
    targetPath: /tmp/airship

Deploy Control Plane And Workers

$ export KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge
$ export KUBECONFIG="${HOME}/.airship/kubeconfig"


$ kind create cluster --name ephemeral-cluster --wait 120s --kubeconfig "${HOME}/.airship/kubeconfig" --config ./tools/deployment/templates/kind-cluster-with-extramounts

$ kubectl config set-context ephemeral-cluster --cluster kind-ephemeral-cluster --user kind-ephemeral-cluster --kubeconfig $KUBECONFIG

$ airshipctl document pull -n --debug

$ airshipctl phase run clusterctl-init-ephemeral --debug --kubeconfig ${KUBECONFIG}

$ airshipctl phase run controlplane-ephemeral --debug --kubeconfig ${KUBECONFIG} --wait-timeout 1000s

$ kubectl --namespace=default get secret/target-cluster-kubeconfig -o jsonpath={.data.value} | base64 --decode > /tmp/target-cluster.kubeconfig
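The base64 --decode step is needed because Kubernetes secrets store their data fields base64-encoded. A minimal local sketch of that round-trip, using sample content only (no cluster required):

```shell
# Encode a tiny sample kubeconfig the way a secret's .data field stores it,
# then decode it back, mirroring the jsonpath + base64 pipeline above.
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64 | tr -d '\n')
printf '%s' "$encoded" | base64 --decode > /tmp/demo.kubeconfig
head -n1 /tmp/demo.kubeconfig
```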

$ kubectl config set-context target-cluster --user target-cluster-admin --cluster target-cluster \
--kubeconfig "/tmp/target-cluster.kubeconfig"

$ kubectl --kubeconfig /tmp/target-cluster.kubeconfig wait --for=condition=Ready nodes --all --timeout 4000s

$ kubectl get nodes --kubeconfig /tmp/target-cluster.kubeconfig

Note: take note of the control plane node name, because it is untainted in the next step.
For example, the control plane node name could be something like target-cluster-control-plane-twwsv.
$ kubectl taint node target-cluster-control-plane-twwsv node-role.kubernetes.io/master- --kubeconfig /tmp/target-cluster.kubeconfig --request-timeout 10s
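The node-name lookup above can also be scripted instead of copied by hand. A hedged sketch, assuming CAPD's `<cluster>-control-plane-<suffix>` naming convention; the sample text stands in for the output of `kubectl get nodes -o name --kubeconfig /tmp/target-cluster.kubeconfig`:

```shell
# Sample `kubectl get nodes -o name` output, for illustration only.
nodes="node/target-cluster-control-plane-twwsv
node/target-cluster-md-0-84db44cdff-r8dkr"

# Keep only the node whose name contains "-control-plane-".
cp_node=$(printf '%s\n' "$nodes" | sed -n 's|^node/\(.*-control-plane-.*\)$|\1|p')
echo "$cp_node"

# In a live cluster you would then run:
#   kubectl taint node "$cp_node" node-role.kubernetes.io/master- \
#     --kubeconfig /tmp/target-cluster.kubeconfig --request-timeout 10s
```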

$ airshipctl phase run clusterctl-init-target --debug --kubeconfig /tmp/target-cluster.kubeconfig

$ kubectl get pods -A --kubeconfig /tmp/target-cluster.kubeconfig

$ KUBECONFIG="${HOME}/.airship/kubeconfig":/tmp/target-cluster.kubeconfig kubectl config view --merge --flatten > "/tmp/merged_target_ephemeral.kubeconfig"

$ airshipctl phase run clusterctl-move --kubeconfig "/tmp/merged_target_ephemeral.kubeconfig" --debug --progress

$ kubectl get machines --kubeconfig /tmp/target-cluster.kubeconfig

$ kind delete cluster --name "ephemeral-cluster"

$ airshipctl phase run workers-target --debug --kubeconfig /tmp/target-cluster.kubeconfig

$ kubectl get machines --kubeconfig /tmp/target-cluster.kubeconfig
NAME                                   PROVIDERID                                        PHASE
target-cluster-control-plane-m5jf7     docker:////target-cluster-control-plane-m5jf7     Running
target-cluster-md-0-84db44cdff-r8dkr   docker:////target-cluster-md-0-84db44cdff-r8dkr   Running

Scale Workers

The worker count can be adjusted in airshipctl/manifests/site/docker-test-site/target/workers/machine_count.json.
By default it is 1; in this example we change it to 3.

$ cat /tmp/airship/airshipctl/manifests/site/docker-test-site/target/workers/machine_count.json

[
  { "op": "replace","path": "/spec/replicas","value": 3 }
]
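The replica edit can also be scripted rather than hand-edited. A sketch, assuming python3 is available; /tmp/machine_count.json below is a stand-in path for the real file under the targetPath used in this walkthrough:

```shell
# Stand-in for .../docker-test-site/target/workers/machine_count.json
patch=/tmp/machine_count.json
echo '[ { "op": "replace","path": "/spec/replicas","value": 1 } ]' > "$patch"

# Rewrite the JSON patch's replica value in place.
python3 - "$patch" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    doc = json.load(f)
doc[0]["value"] = 3          # desired worker count
with open(path, "w") as f:
    json.dump(doc, f, indent=2)
EOF

cat "$patch"
```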
$ airshipctl phase run workers-target --debug --kubeconfig /tmp/target-cluster.kubeconfig
$ kubectl get machines --kubeconfig /tmp/target-cluster.kubeconfig

NAME                                   PROVIDERID                                        PHASE
target-cluster-control-plane-m5jf7     docker:////target-cluster-control-plane-m5jf7     Running
target-cluster-md-0-84db44cdff-b6zp6   docker:////target-cluster-md-0-84db44cdff-b6zp6   Running
target-cluster-md-0-84db44cdff-g4nm7   docker:////target-cluster-md-0-84db44cdff-g4nm7   Running
target-cluster-md-0-84db44cdff-r8dkr   docker:////target-cluster-md-0-84db44cdff-r8dkr   Running

Cleanup

$ kind get clusters
target-cluster
$ kind delete clusters --all
or, to delete a single cluster:
$ kind delete cluster --name target-cluster

More Information

The default configuration creates a target cluster with one control plane node and one worker node.

- worker count can be adjusted in airshipctl/manifests/site/docker-test-site/target/workers/machine_count.json
- control plane count can be adjusted in airshipctl/manifests/site/docker-test-site/ephemeral/controlplane/machine_count.json
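The control plane patch presumably has the same JSON-patch shape as the worker patch shown earlier; a hypothetical example setting two control plane replicas (the value is illustrative, not a recommendation):

```json
[
  { "op": "replace", "path": "/spec/replicas", "value": 2 }
]
```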
