# CAPD Zuul Scripts
## Overview
The Zuul scripts that test the airshipctl and Cluster API Docker (CAPD) integration are run by the job `airship-airshipctl-gate-script-runner-dockertest`, introduced in patchset `https://review.opendev.org/#/c/738682/`.
This document describes how to use the scripts from that patchset to test the airshipctl and Cluster API Docker integration locally.
The airshipctl and Cluster API Docker integration itself is available as part of patchset `https://review.opendev.org/#/c/737871/`.
For more background, see [Airshipctl And Cluster API Docker Integration](https://hackmd.io/yJDorM4gSwmyRuf7Kh7HZg).
## Scripts And Usage
| script name | purpose |
| ----------------------------------------------------------- | ---------------------- |
| tools/deployment/docker/get_kind.sh | - install kind and display kind version |
| tools/deployment/docker/00_install_go.sh                     | - install Go and add it to the system's PATH variable |
| tools/deployment/docker/01_install_kubectl.sh                | - install kubectl |
| tools/deployment/docker/02_install_apache2.sh                | - install apache2 |
| tools/deployment/docker/21_systemwide_executable.sh          | - install airshipctl |
| tools/deployment/docker/31_create_kind_cluster.sh | - create kind cluster with one control plane <br> - test if all pods are up |
| tools/deployment/docker/41_initialize_management_cluster.sh | - initialize kind cluster with cluster api and cluster api docker provider components <br> - test if control plane is up |
| tools/deployment/docker/51_deploy_workload_cluster.sh | - create controlplane and workers <br> - check if all nodes are ready on workload cluster <br> - check if all pods are running on workload cluster |
| tools/deployment/docker/61_tear_down_clusters.sh             | - delete controlplane and workers <br> - delete the kind management cluster |
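
The scripts are meant to be run in the order listed above. A minimal sketch of a full local run follows, assuming a checkout of the airshipctl repository as the working directory and the paths shown in the table; this is a sketch, not the Zuul job definition itself.

```bash
#!/usr/bin/env bash
# Run the CAPD gate scripts in order, mirroring the Zuul job locally.
set -ex

./tools/deployment/docker/get_kind.sh          # the walkthrough below calls tools/document/get_kind.sh instead
./tools/deployment/docker/00_install_go.sh
./tools/deployment/docker/01_install_kubectl.sh
./tools/deployment/docker/02_install_apache2.sh
./tools/deployment/docker/21_systemwide_executable.sh
./tools/deployment/docker/31_create_kind_cluster.sh
./tools/deployment/docker/41_initialize_management_cluster.sh
./tools/deployment/docker/51_deploy_workload_cluster.sh
./tools/deployment/docker/61_tear_down_clusters.sh
```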
## Testing Zuul Scripts Locally
`$ ./tools/document/get_kind.sh`
```
Installing Kind Version v0.8.1
```
`$ ./tools/deployment/docker/00_install_go.sh`
```
Installing GO go1.14.1.linux-amd64
https://dl.google.com
```
`$ ./tools/deployment/docker/01_install_kubectl.sh`
```
+ : v1.17.4
+ URL=https://storage.googleapis.com
+ sudo -E curl -sSLo /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.17.4/bin/linux/amd64/kubectl
+ sudo -E chmod +x /usr/local/bin/kubectl
```
`$ ./tools/deployment/docker/02_install_apache2.sh`
```
Installing apache2
Hit:1 http://azure.archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://azure.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:3 http://azure.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Hit:4 https://download.docker.com/linux/ubuntu bionic InRelease
Get:5 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Fetched 252 kB in 1s (208 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
8 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
grub-pc-bin libltdl7
Use 'sudo apt autoremove' to remove them.
Suggested packages:
www-browser apache2-doc apache2-suexec-pristine | apache2-suexec-custom
The following NEW packages will be installed:
apache2
0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.
Need to get 0 B/95.1 kB of archives.
After this operation, 536 kB of additional disk space will be used.
Selecting previously unselected package apache2.
(Reading database ... 139230 files and directories currently installed.)
Preparing to unpack .../apache2_2.4.29-1ubuntu4.14_amd64.deb ...
Unpacking apache2 (2.4.29-1ubuntu4.14) ...
Setting up apache2 (2.4.29-1ubuntu4.14) ...
Processing triggers for systemd (237-3ubuntu10.42) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
Processing triggers for ureadahead (0.100.0-21) ...
```
`$ ./tools/deployment/docker/21_systemwide_executable.sh`
```
+ export USE_PROXY=false
+ USE_PROXY=false
+ export HTTPS_PROXY=
+ HTTPS_PROXY=
+ export HTTPS_PROXY=
+ HTTPS_PROXY=
+ export NO_PROXY=
+ NO_PROXY=
+ echo 'Build airshipctl in docker image'
Build airshipctl in docker image
+ make docker-image
Sending build context to Docker daemon 14.12MB
Step 1/16 : ARG GO_IMAGE=docker.io/golang:1.13.1-stretch
Step 2/16 : ARG RELEASE_IMAGE=scratch
Step 3/16 : FROM ${GO_IMAGE} as builder
---> f8c4e1a86e6d
Step 4/16 : COPY ./certs/* /usr/local/share/ca-certificates/
---> Using cache
---> 0c5756dc5708
Step 5/16 : RUN update-ca-certificates
---> Using cache
---> 6ddb2908cb63
Step 6/16 : SHELL [ "/bin/bash", "-cex" ]
---> Using cache
---> 2ed814a6702e
Step 7/16 : WORKDIR /usr/src/airshipctl
---> Using cache
---> 5097471df92c
Step 8/16 : COPY go.mod go.sum /usr/src/airshipctl/
---> Using cache
---> f929620e228e
Step 9/16 : RUN go mod download
---> Using cache
---> 3276c405c09b
Step 10/16 : COPY . /usr/src/airshipctl/
---> 2f1214313c74
Step 11/16 : ARG MAKE_TARGET=build
---> Running in 3a9c9c809caf
Removing intermediate container 3a9c9c809caf
---> c4c418c881f2
Step 12/16 : RUN for target in $MAKE_TARGET; do make $target; done
---> Running in 7aac686bef96
+ for target in $MAKE_TARGET
+ make build
Removing intermediate container 7aac686bef96
---> 1a84adee3d96
Step 13/16 : FROM ${RELEASE_IMAGE} as release
--->
Step 14/16 : COPY --from=builder /usr/src/airshipctl/bin/airshipctl /usr/local/bin/airshipctl
---> Using cache
---> 47f3f1714dac
Step 15/16 : USER 65534
---> Using cache
---> 34701d2e31f2
Step 16/16 : ENTRYPOINT [ "/usr/local/bin/airshipctl" ]
---> Using cache
---> b50446977a39
Successfully built b50446977a39
Successfully tagged quay.io/airshipit/airshipctl:dev
+ echo 'Copy airshipctl from docker image'
Copy airshipctl from docker image
++ make print-docker-image-tag
+ DOCKER_IMAGE_TAG=quay.io/airshipit/airshipctl:dev
++ docker create quay.io/airshipit/airshipctl:dev
+ CONTAINER=5e4bccb7400e8ea6c82eed8fcdf4fe4e2459b88211a3dac8a4255476837397b8
+ sudo docker cp 5e4bccb7400e8ea6c82eed8fcdf4fe4e2459b88211a3dac8a4255476837397b8:/usr/local/bin/airshipctl /usr/local/bin/airshipctl
+ sudo docker rm 5e4bccb7400e8ea6c82eed8fcdf4fe4e2459b88211a3dac8a4255476837397b8
5e4bccb7400e8ea6c82eed8fcdf4fe4e2459b88211a3dac8a4255476837397b8
+ airshipctl version
+ grep -q airshipctl
+ echo 'Airshipctl version'
Airshipctl version
+ airshipctl version
airshipctl: v0.1.0
+ echo 'Install airshipctl as kustomize plugins'
Install airshipctl as kustomize plugins
+ AIRSHIPCTL=/usr/local/bin/airshipctl
+ ./tools/document/build_kustomize_plugin.sh
The airshipctl kustomize plugin has been installed.
Run kustomize with:
KUSTOMIZE_PLUGIN_HOME=/root/.airship/kustomize-plugins $GOPATH/bin/kustomize build --enable_alpha_plugins ...
```
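
The trace shows the pattern the script uses to get the `airshipctl` binary onto the host: build the image, create (but never start) a container from it, copy the binary out, and remove the container. A standalone sketch of that extraction step, using the image tag from the trace:

```bash
# Extract /usr/local/bin/airshipctl from the built image without running it.
DOCKER_IMAGE_TAG=quay.io/airshipit/airshipctl:dev
CONTAINER=$(docker create "${DOCKER_IMAGE_TAG}")   # stopped container, never started
sudo docker cp "${CONTAINER}:/usr/local/bin/airshipctl" /usr/local/bin/airshipctl
sudo docker rm "${CONTAINER}"                      # clean up the stopped container
airshipctl version                                 # sanity check
```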
`$ ./tools/deployment/docker/31_create_kind_cluster.sh`
```
+ export TIMEOUT=3600
+ TIMEOUT=3600
+ export KUBECONFIG=/root/.kube/config
+ KUBECONFIG=/root/.kube/config
+ export KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge
+ KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge
+ REMOTE_WORK_DIR=/tmp
+ echo 'Create Kind Cluster'
Create Kind Cluster
+ cat
+ kind delete cluster --name capi-docker
Deleting cluster "capi-docker" ...
+ kind delete cluster --name dtc
Deleting cluster "dtc" ...
+ kind create cluster --config /tmp/kind-cluster-with-extramounts.yaml --name capi-docker
Creating cluster "capi-docker" ...
WARNING: Overriding docker network due to KIND_EXPERIMENTAL_DOCKER_NETWORK
WARNING: Here be dragons! This is not supported currently.
✓ Ensuring node image (kindest/node:v1.18.2) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-capi-docker"
You can now use your cluster with:
kubectl cluster-info --context kind-capi-docker
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
++ date +%s
+ end=1597822612
+ echo 'Waiting 3600 seconds for Capi Docker Control Plane node to be ready.'
Waiting 3600 seconds for Capi Docker Control Plane node to be ready.
+ true
+ kubectl --request-timeout 20s --kubeconfig /root/.kube/config get nodes capi-docker-control-plane -o 'jsonpath={.status.conditions[?(@.type=="Ready")].status}'
+ grep -q True
++ date +%s
+ now=1597819012
+ '[' 1597819012 -gt 1597822612 ']'
+ echo -n .
.+ sleep 15
+ true
+ kubectl --request-timeout 20s --kubeconfig /root/.kube/config get nodes capi-docker-control-plane -o 'jsonpath={.status.conditions[?(@.type=="Ready")].status}'
+ grep -q True
++ date +%s
+ now=1597819028
+ '[' 1597819028 -gt 1597822612 ']'
+ echo -n .
.+ sleep 15
+ true
+ kubectl --request-timeout 20s --kubeconfig /root/.kube/config get nodes capi-docker-control-plane -o 'jsonpath={.status.conditions[?(@.type=="Ready")].status}'
+ grep -q True
++ date +%s
+ now=1597819043
+ '[' 1597819043 -gt 1597822612 ']'
+ echo -n .
.+ sleep 15
+ true
+ kubectl --request-timeout 20s --kubeconfig /root/.kube/config get nodes capi-docker-control-plane -o 'jsonpath={.status.conditions[?(@.type=="Ready")].status}'
+ grep -q True
++ date +%s
+ now=1597819059
+ '[' 1597819059 -gt 1597822612 ']'
+ echo -n .
.+ sleep 15
+ true
+ kubectl --request-timeout 20s --kubeconfig /root/.kube/config get nodes capi-docker-control-plane -o 'jsonpath={.status.conditions[?(@.type=="Ready")].status}'
+ grep -q True
+ echo -e '\nCapi Docker Control Plane Node is ready.'
Capi Docker Control Plane Node is ready.
+ kubectl --request-timeout 20s --kubeconfig /root/.kube/config get nodes
NAME STATUS ROLES AGE VERSION
capi-docker-control-plane Ready master 68s v1.18.2
+ break
```
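
The collapsed `+ cat` line in the trace writes `/tmp/kind-cluster-with-extramounts.yaml` before `kind create cluster` runs. The file itself is not shown in the output; the sketch below follows the standard Cluster API Docker provider pattern of mounting the host Docker socket into the control-plane node (an assumption based on that common pattern, not the repository's exact heredoc):

```bash
# Regenerate the kind config the script references; extraMounts lets the Docker
# infrastructure provider running on the management cluster create sibling
# containers on the host. This is a sketch, not the script's actual heredoc.
cat > /tmp/kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
EOF

kind create cluster --config /tmp/kind-cluster-with-extramounts.yaml --name capi-docker
```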
`$ ./tools/deployment/docker/41_initialize_management_cluster.sh`
```
+ export KUBECONFIG=/root/.airship/kubeconfig
+ KUBECONFIG=/root/.airship/kubeconfig
+ mkdir /root/.airship
mkdir: cannot create directory ‘/root/.airship’: File exists
+ echo 'Airship Directory present '
Airship Directory present
+ cp -rp /root/.kube/config /root/.airship/kubeconfig
+ airshipctl config init
+ airshipctl config set-context kind-capi-docker --manifest docker_manifest
Context "kind-capi-docker" modified.
+ airshipctl config get-context
Context: kind-capi-docker
contextKubeconf: kind-capi-docker_target
manifest: docker_manifest
LocationOfOrigin: /root/.airship/kubeconfig
cluster: kind-capi-docker_target
user: kind-capi-docker
+ echo 'Initialize Managment Cluster with CAPI and CAPD Components'
Initialize Managment Cluster with CAPI and CAPD Components
+ airshipctl config set-manifest docker_manifest --repo primary --url https://review.opendev.org/airship/airshipctl --branch master --primary --sub-path manifests/site/docker-test-site --target-path /home/zuul/src/opendev.org/airship/airshipctl
Manifest "docker_manifest" created.
+ airshipctl cluster init --debug
[airshipctl] 2020/08/19 06:37:56 Starting cluster-api initiation
Installing the clusterctl inventory CRD
Creating CustomResourceDefinition="providers.clusterctl.cluster.x-k8s.io"
Fetching providers
[airshipctl] 2020/08/19 06:37:56 Creating arishipctl repository implementation interface for provider cluster-api of type CoreProvider
[airshipctl] 2020/08/19 06:37:56 Setting up airshipctl provider Components client
Provider type: CoreProvider, name: cluster-api
[airshipctl] 2020/08/19 06:37:56 Getting airshipctl provider components, setting skipping variable substitution.
Provider type: CoreProvider, name: cluster-api
Fetching File="components.yaml" Provider="cluster-api" Version="v0.3.3"
[airshipctl] 2020/08/19 06:37:56 Building cluster-api provider component documents from kustomize path at /home/zuul/src/opendev.org/airship/airshipctl/manifests/function/capi/v0.3.3
[airshipctl] 2020/08/19 06:37:56 Creating arishipctl repository implementation interface for provider kubeadm of type BootstrapProvider
[airshipctl] 2020/08/19 06:37:56 Setting up airshipctl provider Components client
Provider type: BootstrapProvider, name: kubeadm
[airshipctl] 2020/08/19 06:37:56 Getting airshipctl provider components, setting skipping variable substitution.
Provider type: BootstrapProvider, name: kubeadm
Fetching File="components.yaml" Provider="bootstrap-kubeadm" Version="v0.3.3"
[airshipctl] 2020/08/19 06:37:56 Building cluster-api provider component documents from kustomize path at /home/zuul/src/opendev.org/airship/airshipctl/manifests/function/cabpk/v0.3.3
[airshipctl] 2020/08/19 06:37:56 Creating arishipctl repository implementation interface for provider kubeadm of type ControlPlaneProvider
[airshipctl] 2020/08/19 06:37:56 Setting up airshipctl provider Components client
Provider type: ControlPlaneProvider, name: kubeadm
[airshipctl] 2020/08/19 06:37:56 Getting airshipctl provider components, setting skipping variable substitution.
Provider type: ControlPlaneProvider, name: kubeadm
Fetching File="components.yaml" Provider="control-plane-kubeadm" Version="v0.3.3"
[airshipctl] 2020/08/19 06:37:56 Building cluster-api provider component documents from kustomize path at /home/zuul/src/opendev.org/airship/airshipctl/manifests/function/cacpk/v0.3.3
[airshipctl] 2020/08/19 06:37:56 Creating arishipctl repository implementation interface for provider docker of type InfrastructureProvider
[airshipctl] 2020/08/19 06:37:56 Setting up airshipctl provider Components client
Provider type: InfrastructureProvider, name: docker
[airshipctl] 2020/08/19 06:37:56 Getting airshipctl provider components, setting skipping variable substitution.
Provider type: InfrastructureProvider, name: docker
Fetching File="components.yaml" Provider="infrastructure-docker" Version="v0.3.7"
[airshipctl] 2020/08/19 06:37:56 Building cluster-api provider component documents from kustomize path at /home/zuul/src/opendev.org/airship/airshipctl/manifests/function/capd/v0.3.7
[airshipctl] 2020/08/19 06:37:56 Creating arishipctl repository implementation interface for provider cluster-api of type CoreProvider
Fetching File="metadata.yaml" Provider="cluster-api" Version="v0.3.3"
[airshipctl] 2020/08/19 06:37:56 Building cluster-api provider component documents from kustomize path at /home/zuul/src/opendev.org/airship/airshipctl/manifests/function/capi/v0.3.3
[airshipctl] 2020/08/19 06:37:56 Creating arishipctl repository implementation interface for provider kubeadm of type BootstrapProvider
Fetching File="metadata.yaml" Provider="bootstrap-kubeadm" Version="v0.3.3"
[airshipctl] 2020/08/19 06:37:56 Building cluster-api provider component documents from kustomize path at /home/zuul/src/opendev.org/airship/airshipctl/manifests/function/cabpk/v0.3.3
[airshipctl] 2020/08/19 06:37:57 Creating arishipctl repository implementation interface for provider kubeadm of type ControlPlaneProvider
Fetching File="metadata.yaml" Provider="control-plane-kubeadm" Version="v0.3.3"
[airshipctl] 2020/08/19 06:37:57 Building cluster-api provider component documents from kustomize path at /home/zuul/src/opendev.org/airship/airshipctl/manifests/function/cacpk/v0.3.3
[airshipctl] 2020/08/19 06:37:57 Creating arishipctl repository implementation interface for provider docker of type InfrastructureProvider
Fetching File="metadata.yaml" Provider="infrastructure-docker" Version="v0.3.7"
[airshipctl] 2020/08/19 06:37:57 Building cluster-api provider component documents from kustomize path at /home/zuul/src/opendev.org/airship/airshipctl/manifests/function/capd/v0.3.7
Installing cert-manager
Creating Namespace="cert-manager"
Creating CustomResourceDefinition="challenges.acme.cert-manager.io"
Creating CustomResourceDefinition="orders.acme.cert-manager.io"
Creating CustomResourceDefinition="certificaterequests.cert-manager.io"
Creating CustomResourceDefinition="certificates.cert-manager.io"
Creating CustomResourceDefinition="clusterissuers.cert-manager.io"
Creating CustomResourceDefinition="issuers.cert-manager.io"
Creating ServiceAccount="cert-manager-cainjector" Namespace="cert-manager"
Creating ServiceAccount="cert-manager" Namespace="cert-manager"
Creating ServiceAccount="cert-manager-webhook" Namespace="cert-manager"
Creating ClusterRole="cert-manager-cainjector"
Creating ClusterRoleBinding="cert-manager-cainjector"
Creating Role="cert-manager-cainjector:leaderelection" Namespace="kube-system"
Creating RoleBinding="cert-manager-cainjector:leaderelection" Namespace="kube-system"
Creating ClusterRoleBinding="cert-manager-webhook:auth-delegator"
Creating RoleBinding="cert-manager-webhook:webhook-authentication-reader" Namespace="kube-system"
Creating ClusterRole="cert-manager-webhook:webhook-requester"
Creating Role="cert-manager:leaderelection" Namespace="kube-system"
Creating RoleBinding="cert-manager:leaderelection" Namespace="kube-system"
Creating ClusterRole="cert-manager-controller-issuers"
Creating ClusterRole="cert-manager-controller-clusterissuers"
Creating ClusterRole="cert-manager-controller-certificates"
Creating ClusterRole="cert-manager-controller-orders"
Creating ClusterRole="cert-manager-controller-challenges"
Creating ClusterRole="cert-manager-controller-ingress-shim"
Creating ClusterRoleBinding="cert-manager-controller-issuers"
Creating ClusterRoleBinding="cert-manager-controller-clusterissuers"
Creating ClusterRoleBinding="cert-manager-controller-certificates"
Creating ClusterRoleBinding="cert-manager-controller-orders"
Creating ClusterRoleBinding="cert-manager-controller-challenges"
Creating ClusterRoleBinding="cert-manager-controller-ingress-shim"
Creating ClusterRole="cert-manager-view"
Creating ClusterRole="cert-manager-edit"
Creating Service="cert-manager" Namespace="cert-manager"
Creating Service="cert-manager-webhook" Namespace="cert-manager"
Creating Deployment="cert-manager-cainjector" Namespace="cert-manager"
Creating Deployment="cert-manager" Namespace="cert-manager"
Creating Deployment="cert-manager-webhook" Namespace="cert-manager"
Creating APIService="v1beta1.webhook.cert-manager.io"
Creating MutatingWebhookConfiguration="cert-manager-webhook"
Creating ValidatingWebhookConfiguration="cert-manager-webhook"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Creating shared objects Provider="cluster-api" Version="v0.3.3"
Creating Namespace="capi-webhook-system"
Creating CustomResourceDefinition="clusters.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinedeployments.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinehealthchecks.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinepools.exp.cluster.x-k8s.io"
Creating CustomResourceDefinition="machines.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinesets.cluster.x-k8s.io"
Creating MutatingWebhookConfiguration="capi-mutating-webhook-configuration"
Creating Service="capi-webhook-service" Namespace="capi-webhook-system"
Creating Deployment="capi-controller-manager" Namespace="capi-webhook-system"
Creating Certificate="capi-serving-cert" Namespace="capi-webhook-system"
Creating Issuer="capi-selfsigned-issuer" Namespace="capi-webhook-system"
Creating ValidatingWebhookConfiguration="capi-validating-webhook-configuration"
Creating instance objects Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Creating Namespace="capi-system"
Creating Role="capi-leader-election-role" Namespace="capi-system"
Creating ClusterRole="capi-system-capi-aggregated-manager-role"
Creating ClusterRole="capi-system-capi-manager-role"
Creating ClusterRole="capi-system-capi-proxy-role"
Creating RoleBinding="capi-leader-election-rolebinding" Namespace="capi-system"
Creating ClusterRoleBinding="capi-system-capi-manager-rolebinding"
Creating ClusterRoleBinding="capi-system-capi-proxy-rolebinding"
Creating Service="capi-controller-manager-metrics-service" Namespace="capi-system"
Creating Deployment="capi-controller-manager" Namespace="capi-system"
Creating inventory entry Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Creating shared objects Provider="bootstrap-kubeadm" Version="v0.3.3"
Creating CustomResourceDefinition="kubeadmconfigs.bootstrap.cluster.x-k8s.io"
Creating CustomResourceDefinition="kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io"
Creating Service="capi-kubeadm-bootstrap-webhook-service" Namespace="capi-webhook-system"
Creating Deployment="capi-kubeadm-bootstrap-controller-manager" Namespace="capi-webhook-system"
Creating Certificate="capi-kubeadm-bootstrap-serving-cert" Namespace="capi-webhook-system"
Creating Issuer="capi-kubeadm-bootstrap-selfsigned-issuer" Namespace="capi-webhook-system"
Creating instance objects Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Creating Namespace="capi-kubeadm-bootstrap-system"
Creating Role="capi-kubeadm-bootstrap-leader-election-role" Namespace="capi-kubeadm-bootstrap-system"
Creating ClusterRole="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-manager-role"
Creating ClusterRole="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-proxy-role"
Creating RoleBinding="capi-kubeadm-bootstrap-leader-election-rolebinding" Namespace="capi-kubeadm-bootstrap-system"
Creating ClusterRoleBinding="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-manager-rolebinding"
Creating ClusterRoleBinding="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-proxy-rolebinding"
Creating Service="capi-kubeadm-bootstrap-controller-manager-metrics-service" Namespace="capi-kubeadm-bootstrap-system"
Creating Deployment="capi-kubeadm-bootstrap-controller-manager" Namespace="capi-kubeadm-bootstrap-system"
Creating inventory entry Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Creating shared objects Provider="control-plane-kubeadm" Version="v0.3.3"
Creating CustomResourceDefinition="kubeadmcontrolplanes.controlplane.cluster.x-k8s.io"
Creating MutatingWebhookConfiguration="capi-kubeadm-control-plane-mutating-webhook-configuration"
Creating Service="capi-kubeadm-control-plane-webhook-service" Namespace="capi-webhook-system"
Creating Deployment="capi-kubeadm-control-plane-controller-manager" Namespace="capi-webhook-system"
Creating Certificate="capi-kubeadm-control-plane-serving-cert" Namespace="capi-webhook-system"
Creating Issuer="capi-kubeadm-control-plane-selfsigned-issuer" Namespace="capi-webhook-system"
Creating ValidatingWebhookConfiguration="capi-kubeadm-control-plane-validating-webhook-configuration"
Creating instance objects Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Creating Namespace="capi-kubeadm-control-plane-system"
Creating Role="capi-kubeadm-control-plane-leader-election-role" Namespace="capi-kubeadm-control-plane-system"
Creating Role="capi-kubeadm-control-plane-manager-role" Namespace="capi-kubeadm-control-plane-system"
Creating ClusterRole="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-manager-role"
Creating ClusterRole="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-proxy-role"
Creating RoleBinding="capi-kubeadm-control-plane-leader-election-rolebinding" Namespace="capi-kubeadm-control-plane-system"
Creating ClusterRoleBinding="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-manager-rolebinding"
Creating ClusterRoleBinding="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-proxy-rolebinding"
Creating Service="capi-kubeadm-control-plane-controller-manager-metrics-service" Namespace="capi-kubeadm-control-plane-system"
Creating Deployment="capi-kubeadm-control-plane-controller-manager" Namespace="capi-kubeadm-control-plane-system"
Creating inventory entry Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-docker" Version="v0.3.7" TargetNamespace="capd-system"
Creating shared objects Provider="infrastructure-docker" Version="v0.3.7"
Creating CustomResourceDefinition="dockerclusters.infrastructure.cluster.x-k8s.io"
Creating CustomResourceDefinition="dockermachines.infrastructure.cluster.x-k8s.io"
Creating CustomResourceDefinition="dockermachinetemplates.infrastructure.cluster.x-k8s.io"
Creating ValidatingWebhookConfiguration="capd-validating-webhook-configuration"
Creating instance objects Provider="infrastructure-docker" Version="v0.3.7" TargetNamespace="capd-system"
Creating Namespace="capd-system"
Creating Role="capd-leader-election-role" Namespace="capd-system"
Creating ClusterRole="capd-system-capd-manager-role"
Creating ClusterRole="capd-system-capd-proxy-role"
Creating RoleBinding="capd-leader-election-rolebinding" Namespace="capd-system"
Creating ClusterRoleBinding="capd-system-capd-manager-rolebinding"
Creating ClusterRoleBinding="capd-system-capd-proxy-rolebinding"
Creating Service="capd-controller-manager-metrics-service" Namespace="capd-system"
Creating Service="capd-webhook-service" Namespace="capd-system"
Creating Deployment="capd-controller-manager" Namespace="capd-system"
Creating Certificate="capd-serving-cert" Namespace="capd-system"
Creating Issuer="capd-selfsigned-issuer" Namespace="capd-system"
Creating inventory entry Provider="infrastructure-docker" Version="v0.3.7" TargetNamespace="capd-system"
+ echo 'Waiting for all pods to come up'
Waiting for all pods to come up
+ kubectl --kubeconfig /root/.airship/kubeconfig wait --for=condition=ready pods --all --timeout=1000s -A
pod/capd-controller-manager-6f67b8886f-n7cnx condition met
pod/capi-kubeadm-bootstrap-controller-manager-66c6b6857b-g4xjt condition met
pod/capi-kubeadm-control-plane-controller-manager-688f7ccc56-c9cfz condition met
pod/capi-controller-manager-549c757797-tkqbx condition met
pod/capi-controller-manager-5f8fc485bb-qspg7 condition met
pod/capi-kubeadm-bootstrap-controller-manager-6b645d9d4c-7rm2q condition met
pod/capi-kubeadm-control-plane-controller-manager-65dbd6f999-2vpl8 condition met
pod/cert-manager-77d8f4d85f-z8frb condition met
pod/cert-manager-cainjector-75f88c9f56-bnxfs condition met
pod/cert-manager-webhook-56669d7fcb-hmtn4 condition met
pod/coredns-66bff467f8-8w8tz condition met
pod/coredns-66bff467f8-lkhhb condition met
pod/etcd-capi-docker-control-plane condition met
pod/kindnet-mg9pn condition met
pod/kube-apiserver-capi-docker-control-plane condition met
pod/kube-controller-manager-capi-docker-control-plane condition met
pod/kube-proxy-s6bbq condition met
pod/kube-scheduler-capi-docker-control-plane condition met
pod/local-path-provisioner-bd4bb6b75-xlvwq condition met
+ kubectl --kubeconfig /root/.airship/kubeconfig get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
capd-system capd-controller-manager-6f67b8886f-n7cnx 2/2 Running 0 54s
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-66c6b6857b-g4xjt 2/2 Running 0 55s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-688f7ccc56-c9cfz 2/2 Running 0 55s
capi-system capi-controller-manager-549c757797-tkqbx 2/2 Running 0 56s
capi-webhook-system capi-controller-manager-5f8fc485bb-qspg7 2/2 Running 0 57s
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6b645d9d4c-7rm2q 2/2 Running 0 56s
capi-webhook-system capi-kubeadm-control-plane-controller-manager-65dbd6f999-2vpl8 2/2 Running 0 55s
cert-manager cert-manager-77d8f4d85f-z8frb 1/1 Running 0 79s
cert-manager cert-manager-cainjector-75f88c9f56-bnxfs 1/1 Running 0 79s
cert-manager cert-manager-webhook-56669d7fcb-hmtn4 1/1 Running 0 79s
kube-system coredns-66bff467f8-8w8tz 1/1 Running 0 2m12s
kube-system coredns-66bff467f8-lkhhb 1/1 Running 0 2m12s
kube-system etcd-capi-docker-control-plane 1/1 Running 0 2m27s
kube-system kindnet-mg9pn 1/1 Running 0 2m12s
kube-system kube-apiserver-capi-docker-control-plane 1/1 Running 0 2m27s
kube-system kube-controller-manager-capi-docker-control-plane 1/1 Running 0 2m27s
kube-system kube-proxy-s6bbq 1/1 Running 0 2m12s
kube-system kube-scheduler-capi-docker-control-plane 1/1 Running 0 2m27s
local-path-storage local-path-provisioner-bd4bb6b75-xlvwq 1/1 Running 0 2m12s
```
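
As an optional check after the script completes, the clusterctl provider inventory created by `airshipctl cluster init` (via the `providers.clusterctl.cluster.x-k8s.io` CRD installed above) can be listed to confirm all four providers were registered; this check is not part of the script itself.

```bash
# One inventory entry is expected per provider: cluster-api, bootstrap-kubeadm,
# control-plane-kubeadm and infrastructure-docker.
kubectl --kubeconfig /root/.airship/kubeconfig \
  get providers.clusterctl.cluster.x-k8s.io --all-namespaces
```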
`$ ./tools/deployment/docker/51_deploy_workload_cluster.sh`
```
Deploy Target Workload Cluster: ControlPlane
[airshipctl] 2020/08/19 17:42:55 building bundle from kustomize path /home/zuul/src/opendev.org/airship/airshipctl/manifests/site/docker-test-site/target/controlplane
[airshipctl] 2020/08/19 17:42:55 Applying bundle, inventory id: kind-capi-docker-target-controlplane
[airshipctl] 2020/08/19 17:42:55 Inventory Object config Map not found, auto generating Invetory object
[airshipctl] 2020/08/19 17:42:55 Injecting Invetory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"kind-capi-docker-target-controlplane"},"name":"airshipit-kind-capi-docker-target-controlplane","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/08/19 17:42:55 Making sure that inventory object namespace airshipit exists
configmap/airshipit-kind-capi-docker-target-controlplane-87efb53a created
cluster.cluster.x-k8s.io/dtc created
machinehealthcheck.cluster.x-k8s.io/dtc-mhc-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/dtc-control-plane created
dockercluster.infrastructure.cluster.x-k8s.io/dtc created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-control-plane created
6 resource(s) applied. 6 created, 0 unchanged, 0 configured
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/dtc-control-plane is NotFound: Resource not found
dockercluster.infrastructure.cluster.x-k8s.io/dtc is NotFound: Resource not found
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-control-plane is NotFound: Resource not found
configmap/airshipit-kind-capi-docker-target-controlplane-87efb53a is NotFound: Resource not found
cluster.cluster.x-k8s.io/dtc is NotFound: Resource not found
machinehealthcheck.cluster.x-k8s.io/dtc-mhc-0 is NotFound: Resource not found
configmap/airshipit-kind-capi-docker-target-controlplane-87efb53a is Current: Resource is always ready
cluster.cluster.x-k8s.io/dtc is Current: Resource is current
machinehealthcheck.cluster.x-k8s.io/dtc-mhc-0 is Current: Resource is current
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/dtc-control-plane is Current: Resource is current
dockercluster.infrastructure.cluster.x-k8s.io/dtc is Current: Resource is current
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-control-plane is Current: Resource is current
all resources has reached the Current status
Deploy Target Workload Cluster: Workers
[airshipctl] 2020/08/19 17:42:59 building bundle from kustomize path /home/zuul/src/opendev.org/airship/airshipctl/manifests/site/docker-test-site/target/workers
[airshipctl] 2020/08/19 17:42:59 Applying bundle, inventory id: kind-capi-docker-target-workers
[airshipctl] 2020/08/19 17:42:59 Inventory Object config Map not found, auto generating Invetory object
[airshipctl] 2020/08/19 17:42:59 Injecting Invetory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"kind-capi-docker-target-workers"},"name":"airshipit-kind-capi-docker-target-workers","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/08/19 17:42:59 Making sure that inventory object namespace airshipit exists
configmap/airshipit-kind-capi-docker-target-workers-b56f83 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/dtc-md-0 created
machinedeployment.cluster.x-k8s.io/dtc-md-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-md-0 created
4 resource(s) applied. 4 created, 0 unchanged, 0 configured
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/dtc-md-0 is NotFound: Resource not found
machinedeployment.cluster.x-k8s.io/dtc-md-0 is NotFound: Resource not found
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-md-0 is NotFound: Resource not found
configmap/airshipit-kind-capi-docker-target-workers-b56f83 is NotFound: Resource not found
configmap/airshipit-kind-capi-docker-target-workers-b56f83 is Current: Resource is always ready
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/dtc-md-0 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/dtc-md-0 is Current: Resource is current
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-md-0 is Current: Resource is current
Get kubeconfig from secret
Error from server (NotFound): secrets "dtc-kubeconfig" not found
1: Retry to get kubeconfig from secret.
Generate kubeconfig
Generate kubeconfig: /tmp/dtc.kubeconfig
Wait for kubernetes cluster to be up
Unable to connect to the server: EOF
1: Retry to get kubectl version.
Check nodes status
node/dtc-dtc-control-plane-8gvxj condition met
NAME STATUS ROLES AGE VERSION
dtc-dtc-control-plane-8gvxj Ready master 89s v1.18.6
dtc-dtc-md-0-94c79cf9c-dt28v NotReady <none> 34s v1.18.6
Waiting for all pods to come up
pod/calico-kube-controllers-59b699859f-5h2rr condition met
pod/calico-node-56xks condition met
pod/calico-node-nskvz condition met
pod/coredns-6955765f44-fbd64 condition met
pod/coredns-6955765f44-n24kg condition met
pod/etcd-dtc-dtc-control-plane-8gvxj condition met
pod/kube-apiserver-dtc-dtc-control-plane-8gvxj condition met
pod/kube-controller-manager-dtc-dtc-control-plane-8gvxj condition met
pod/kube-proxy-2fw2t condition met
pod/kube-proxy-4xkdq condition met
pod/kube-scheduler-dtc-dtc-control-plane-8gvxj condition met
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-59b699859f-5h2rr 1/1 Running 0 94s
kube-system calico-node-56xks 1/1 Running 0 89s
kube-system calico-node-nskvz 1/1 Running 0 94s
kube-system coredns-6955765f44-fbd64 1/1 Running 0 94s
kube-system coredns-6955765f44-n24kg 1/1 Running 0 94s
kube-system etcd-dtc-dtc-control-plane-8gvxj 1/1 Running 0 2m22s
kube-system kube-apiserver-dtc-dtc-control-plane-8gvxj 1/1 Running 0 2m22s
kube-system kube-controller-manager-dtc-dtc-control-plane-8gvxj 1/1 Running 1 2m23s
kube-system kube-proxy-2fw2t 1/1 Running 0 89s
kube-system kube-proxy-4xkdq 1/1 Running 0 94s
kube-system kube-scheduler-dtc-dtc-control-plane-8gvxj 1/1 Running 1 2m23s
Waiting for all machines to come up
NAME PROVIDERID PHASE
dtc-control-plane-8gvxj docker:////dtc-dtc-control-plane-8gvxj Running
dtc-md-0-94c79cf9c-dt28v docker:////dtc-dtc-md-0-94c79cf9c-dt28v Running
Get cluster state for target workload cluster
NAME PHASE
dtc Provisioned
```
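
The `Get kubeconfig from secret` step follows the usual Cluster API convention: the workload cluster's kubeconfig is stored in a `<cluster-name>-kubeconfig` secret on the management cluster. A sketch of fetching it by hand, using the names from the trace (`dtc`, `/tmp/dtc.kubeconfig`); the secret's `value` key is the standard Cluster API layout and is assumed here:

```bash
# Pull the workload cluster kubeconfig out of the CAPI-managed secret, then
# talk to the workload cluster directly.
kubectl --kubeconfig /root/.airship/kubeconfig \
  get secret dtc-kubeconfig -o jsonpath='{.data.value}' | base64 -d > /tmp/dtc.kubeconfig
kubectl --kubeconfig /tmp/dtc.kubeconfig get nodes
```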
`$ ./tools/deployment/docker/61_tear_down_clusters.sh`
```
cluster.cluster.x-k8s.io "dtc" deleted
Deleting cluster "capi-docker" ...
```
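
If the teardown script is unavailable or stops partway, the same cleanup can be done manually: delete the workload `Cluster` object on the management cluster (Cluster API then removes the `dtc` node containers) and delete the kind management cluster itself. A sketch using the names from this walkthrough:

```bash
# Delete the workload cluster through its Cluster API object, then the kind
# management cluster that hosted the CAPI/CAPD components.
kubectl --kubeconfig /root/.airship/kubeconfig delete cluster.cluster.x-k8s.io dtc
kind delete cluster --name capi-docker
```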
## See Also
* [Airshipctl And Cluster API Docker Integration](https://hackmd.io/yJDorM4gSwmyRuf7Kh7HZg)