# Airshipctl and Cluster API GCP Integration
## Overview
Airshipctl and Cluster API GCP integration facilitates usage of `airshipctl` to create Cluster API management and workload clusters with GCP as the infrastructure provider. The integration is available as part of the patchset `https://review.opendev.org/#/c/748063/`. This document provides information on usage of the patchset.
Zuul scripts to test the Cluster API GCP integration are available in patchset https://review.opendev.org/#/c/749165/

## Workflow
A simple workflow that can be tested using the patchset involves the following operations:
**Initialize the management cluster with cluster api and gcp provider components**
> airshipctl cluster init --debug
**Create a workload cluster, with control plane and worker nodes**
> airshipctl phase apply controlplane
> airshipctl phase apply workers
Note: `airshipctl phase apply initinfra` is not used because all the provider components
are initialized using `airshipctl cluster init`.
The phase `initinfra` is included in the patchset only to get `validate docs` to pass.
For more information, [check this document](https://hackmd.io/MFOB-oaxRHuD39gGB7GCTQ?view).
## GCP Prerequisites
### Create Service Account
To create and manage clusters, this infrastructure provider uses a service account to authenticate with GCP's APIs.
From your cloud console, follow [these instructions](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating) to create a new service account with Editor permissions. Afterwards, generate a JSON key and store it somewhere safe. Use Cloud Shell to install Ansible and Packer, and to build the CAPI-compliant VM image.
### Install Ansible
Start by launching Cloud Shell.
$ export GCP_PROJECT_ID=<project-id>
$ export GOOGLE_APPLICATION_CREDENTIALS=</path/to/serviceaccount-key.json>
$ sudo apt-get update
$ sudo apt-get install ansible -y
### Install Packer
$ mkdir packer
$ cd packer
$ wget https://releases.hashicorp.com/packer/1.6.0/packer_1.6.0_linux_amd64.zip
$ unzip packer_1.6.0_linux_amd64.zip
$ sudo mv packer /usr/local/bin/
### Build Cluster API Compliant VM Image
$ git clone https://sigs.k8s.io/image-builder.git
$ cd image-builder/images/capi/
$ make build-gce-default
$ gcloud compute images list --project ${GCP_PROJECT_ID} --no-standard-images
```
NAME PROJECT FAMILY DEPRECATED STATUS
cluster-api-ubuntu-1804-v1-16-14-1599066516 virtual-anchor-281401 capi-ubuntu-1804-k8s-v1-16 READY
```
### Create Cloud NAT Router
Kubernetes nodes need NAT access or a public IP to communicate with the control plane and to pull container images from registries (e.g. gcr.io or Docker Hub). By default, the provider creates Machines without a public IP.
To make sure your cluster can communicate with the outside world and the load balancer, you can create a Cloud NAT in the region you'd like your Kubernetes cluster to live in by following [these instructions](https://cloud.google.com/nat/docs/using-nat#specify_ip_addresses_for_nat).
For example, you can create two Cloud NAT routers, one for region us-west1 and one for us-east1.



## Other Common Pre-requisites
These prerequisites are required on the VM that will be used to create the workload cluster on GCP:
* Install [Docker](https://www.docker.com/)
* Install [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* Install [Kind](https://kind.sigs.k8s.io/)
* Install [Kustomize](https://kubernetes-sigs.github.io/kustomize/installation/binaries/)
* Install [Airshipctl](https://docs.airshipit.org/airshipctl/developers.html)
Also, check [Software Version Information](#Software-Version-Information), [Special Instructions](#Special-Instructions) and [Virtual Machine Specification](#Virtual-Machine-Specification)
## Getting Started
Kind will be used to set up a Kubernetes cluster that will later be transformed into a management cluster using airshipctl. The kind cluster will be initialized with Cluster API and Cluster API GCP provider components.
$ export KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge
$ kind create cluster --name capi-gcp
```
Creating cluster "capi-gcp" ...
WARNING: Overriding docker network due to KIND_EXPERIMENTAL_DOCKER_NETWORK
WARNING: Here be dragons! This is not supported currently.
 ✓ Ensuring node image (kindest/node:v1.18.2)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-capi-gcp"
You can now use your cluster with:
kubectl cluster-info --context kind-capi-gcp
```
$ kubectl get pods -A
```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-kmg7c 1/1 Running 0 82s
kube-system coredns-66bff467f8-lg8qc 1/1 Running 0 82s
kube-system etcd-capi-gcp-control-plane 1/1 Running 0 91s
kube-system kindnet-dzp8v 1/1 Running 0 82s
kube-system kube-apiserver-capi-gcp-control-plane 1/1 Running 0 91s
kube-system kube-controller-manager-capi-gcp-control-plane 1/1 Running 0 90s
kube-system kube-proxy-zvdh8 1/1 Running 0 82s
kube-system kube-scheduler-capi-gcp-control-plane 1/1 Running 0 83s
local-path-storage local-path-provisioner-bd4bb6b75-6drnt 1/1 Running 0 82s
```
## Create airshipctl configuration files
$ mkdir ~/.airship
$ airshipctl config init
$ airshipctl config import $HOME/.kube/config
$ airshipctl config set-context kind-capi-gcp --manifest gcp_manifest
```
Context "kind-capi-gcp" modified.
```
$ airshipctl config set-manifest gcp_manifest --repo primary --url https://opendev.org/airship/airshipctl --branch master --primary --sub-path manifests/site/gcp-test-site --target-path /tmp/airship/airshipctl
```
Manifest "gcp_manifest" created.
```
$ airshipctl config use-context kind-capi-gcp
$ airshipctl config get-context
```
Context: kind-capi-gcp
contextKubeconf: kind-capi-gcp_target
manifest: gcp_manifest
LocationOfOrigin: /home/rishabh/.airship/kubeconfig
cluster: kind-capi-gcp_target
user: kind-capi-gcp
```
## Use the latest patchset
Go to `https://review.opendev.org/#/c/748063/` and navigate to Download -> Archive -> Tar.
Right-click on `tar` and copy the link address.

Run the following commands to download and extract the latest patchset:
$ mkdir -p /tmp/airship/airshipctl
$ export PATCH_URL=<paste_link_address_here>
$ wget ${PATCH_URL} -O /tmp/airship/airshipctl/manifests.tar
$ cd /tmp/airship/airshipctl && tar xvf manifests.tar
## Configure gcp site variables
Configure the `project_id`:
$ cat /tmp/airship/airshipctl/manifests/site/gcp-test-site/target/controlplane/project_name.json
```
[
{ "op": "replace","path": "/spec/project","value": "<project_id>"}
]
```
Include the GCP variables in `clusterctl.yaml`.
The original (plain-text) values of these variables are as follows:
```
GCP_CONTROL_PLANE_MACHINE_TYPE="n1-standard-4"
GCP_NODE_MACHINE_TYPE="n1-standard-4"
GCP_REGION="us-west1"
GCP_NETWORK_NAME="default"
GCP_PROJECT="<your_project_id>"
GCP_CREDENTIALS="$( cat ~/</path/to/serviceaccount-key.json>)"
```
Edit `airshipctl/manifests/site/gcp-test-site/shared/clusterctl/clusterctl.yaml` to include the GCP variables
and their values in base64-encoded format. Use https://www.base64decode.org/ if required.
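The plain-text values can also be encoded on the command line instead of using the website; `echo -n` avoids encoding a trailing newline.

```shell
# Encode each plain-text value for clusterctl.yaml
echo -n "n1-standard-4" | base64   # bjEtc3RhbmRhcmQtNA==
echo -n "us-west1" | base64        # dXMtd2VzdDE=
echo -n "default" | base64         # ZGVmYXVsdA==
```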
To get the GCP_CREDENTIALS in base64 format, use the below command.
$ export GCP_B64ENCODED_CREDENTIALS=$( cat ~/</path/to/serviceaccount-key.json> | base64 | tr -d '\n' )
$ echo $GCP_B64ENCODED_CREDENTIALS
In the below example, I have encoded the values for all variables except GCP_PROJECT and GCP_CREDENTIALS.
$ cat /tmp/airship/airshipctl/manifests/site/gcp-test-site/shared/clusterctl/clusterctl.yaml
```
apiVersion: airshipit.org/v1alpha1
kind: Clusterctl
metadata:
labels:
airshipit.org/deploy-k8s: "false"
name: clusterctl-v1
init-options:
core-provider: "cluster-api:v0.3.3"
bootstrap-providers:
- "kubeadm:v0.3.3"
infrastructure-providers:
- "gcp:v0.3.0"
control-plane-providers:
- "kubeadm:v0.3.3"
providers:
- name: "gcp"
type: "InfrastructureProvider"
variable-substitution: true
versions:
v0.3.0: manifests/function/capg/v0.3.0
- name: "kubeadm"
type: "BootstrapProvider"
versions:
v0.3.3: manifests/function/cabpk/v0.3.3
- name: "cluster-api"
type: "CoreProvider"
versions:
v0.3.3: manifests/function/capi/v0.3.3
- name: "kubeadm"
type: "ControlPlaneProvider"
versions:
v0.3.3: manifests/function/cacpk/v0.3.3
additional-vars:
GCP_CONTROL_PLANE_MACHINE_TYPE: "bjEtc3RhbmRhcmQtNA=="
GCP_NODE_MACHINE_TYPE: "bjEtc3RhbmRhcmQtNA=="
GCP_PROJECT: "<B64ENCODED_GCP_PROJECT_ID>"
GCP_REGION: "dXMtd2VzdDE="
GCP_NETWORK_NAME: "ZGVmYXVsdA=="
GCP_B64ENCODED_CREDENTIALS: "<GCP_B64ENCODED_CREDENTIALS>"
```
## Initialize Management Cluster
$ airshipctl cluster init --debug
```
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_REGION is allowed to be appended
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_B64ENCODED_CREDENTIALS is allowed to be appended
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_CONTROL_PLANE_MACHINE_TYPE is allowed to be appended
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_NETWORK_NAME is allowed to be appended
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_NODE_MACHINE_TYPE is allowed to be appended
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_PROJECT is allowed to be appended
[airshipctl] 2020/09/02 11:14:15 Starting cluster-api initiation
Installing the clusterctl inventory CRD
Creating CustomResourceDefinition="providers.clusterctl.cluster.x-k8s.io"
Fetching providers
[airshipctl] 2020/09/02 11:14:15 Creating arishipctl repository implementation interface for provider cluster-api of type CoreProvider
[airshipctl] 2020/09/02 11:14:15 Setting up airshipctl provider Components client
Provider type: CoreProvider, name: cluster-api
[airshipctl] 2020/09/02 11:14:15 Getting airshipctl provider components, skipping variable substitution: true.
Provider type: CoreProvider, name: cluster-api
Fetching File="components.yaml" Provider="cluster-api" Version="v0.3.3"
[airshipctl] 2020/09/02 11:14:15 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/capi/v0.3.3
[airshipctl] 2020/09/02 11:14:16 Creating arishipctl repository implementation interface for provider kubeadm of type BootstrapProvider
[airshipctl] 2020/09/02 11:14:16 Setting up airshipctl provider Components client
Provider type: BootstrapProvider, name: kubeadm
[airshipctl] 2020/09/02 11:14:16 Getting airshipctl provider components, skipping variable substitution: true.
Provider type: BootstrapProvider, name: kubeadm
Fetching File="components.yaml" Provider="bootstrap-kubeadm" Version="v0.3.3"
[airshipctl] 2020/09/02 11:14:16 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/cabpk/v0.3.3
[airshipctl] 2020/09/02 11:14:17 Creating arishipctl repository implementation interface for provider kubeadm of type ControlPlaneProvider
[airshipctl] 2020/09/02 11:14:17 Setting up airshipctl provider Components client
Provider type: ControlPlaneProvider, name: kubeadm
[airshipctl] 2020/09/02 11:14:17 Getting airshipctl provider components, skipping variable substitution: true.
Provider type: ControlPlaneProvider, name: kubeadm
Fetching File="components.yaml" Provider="control-plane-kubeadm" Version="v0.3.3"
[airshipctl] 2020/09/02 11:14:17 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/cacpk/v0.3.3
[airshipctl] 2020/09/02 11:14:17 Creating arishipctl repository implementation interface for provider gcp of type InfrastructureProvider
[airshipctl] 2020/09/02 11:14:17 Setting up airshipctl provider Components client
Provider type: InfrastructureProvider, name: gcp
[airshipctl] 2020/09/02 11:14:17 Getting airshipctl provider components, skipping variable substitution: false.
Provider type: InfrastructureProvider, name: gcp
Fetching File="components.yaml" Provider="infrastructure-gcp" Version="v0.3.0"
[airshipctl] 2020/09/02 11:14:17 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/capg/v0.3.0
[airshipctl] 2020/09/02 11:14:18 Creating arishipctl repository implementation interface for provider cluster-api of type CoreProvider
Fetching File="metadata.yaml" Provider="cluster-api" Version="v0.3.3"
[airshipctl] 2020/09/02 11:14:18 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/capi/v0.3.3
[airshipctl] 2020/09/02 11:14:18 Creating arishipctl repository implementation interface for provider kubeadm of type BootstrapProvider
Fetching File="metadata.yaml" Provider="bootstrap-kubeadm" Version="v0.3.3"
[airshipctl] 2020/09/02 11:14:18 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/cabpk/v0.3.3
[airshipctl] 2020/09/02 11:14:19 Creating arishipctl repository implementation interface for provider kubeadm of type ControlPlaneProvider
Fetching File="metadata.yaml" Provider="control-plane-kubeadm" Version="v0.3.3"
[airshipctl] 2020/09/02 11:14:19 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/cacpk/v0.3.3
[airshipctl] 2020/09/02 11:14:19 Creating arishipctl repository implementation interface for provider gcp of type InfrastructureProvider
Fetching File="metadata.yaml" Provider="infrastructure-gcp" Version="v0.3.0"
[airshipctl] 2020/09/02 11:14:19 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/capg/v0.3.0
Installing cert-manager
Creating Namespace="cert-manager"
Creating CustomResourceDefinition="challenges.acme.cert-manager.io"
Creating CustomResourceDefinition="orders.acme.cert-manager.io"
Creating CustomResourceDefinition="certificaterequests.cert-manager.io"
Creating CustomResourceDefinition="certificates.cert-manager.io"
Creating CustomResourceDefinition="clusterissuers.cert-manager.io"
Creating CustomResourceDefinition="issuers.cert-manager.io"
Creating ServiceAccount="cert-manager-cainjector" Namespace="cert-manager"
Creating ServiceAccount="cert-manager" Namespace="cert-manager"
Creating ServiceAccount="cert-manager-webhook" Namespace="cert-manager"
Creating ClusterRole="cert-manager-cainjector"
Creating ClusterRoleBinding="cert-manager-cainjector"
Creating Role="cert-manager-cainjector:leaderelection" Namespace="kube-system"
Creating RoleBinding="cert-manager-cainjector:leaderelection" Namespace="kube-system"
Creating ClusterRoleBinding="cert-manager-webhook:auth-delegator"
Creating RoleBinding="cert-manager-webhook:webhook-authentication-reader" Namespace="kube-system"
Creating ClusterRole="cert-manager-webhook:webhook-requester"
Creating Role="cert-manager:leaderelection" Namespace="kube-system"
Creating RoleBinding="cert-manager:leaderelection" Namespace="kube-system"
Creating ClusterRole="cert-manager-controller-issuers"
Creating ClusterRole="cert-manager-controller-clusterissuers"
Creating ClusterRole="cert-manager-controller-certificates"
Creating ClusterRole="cert-manager-controller-orders"
Creating ClusterRole="cert-manager-controller-challenges"
Creating ClusterRole="cert-manager-controller-ingress-shim"
Creating ClusterRoleBinding="cert-manager-controller-issuers"
Creating ClusterRoleBinding="cert-manager-controller-clusterissuers"
Creating ClusterRoleBinding="cert-manager-controller-certificates"
Creating ClusterRoleBinding="cert-manager-controller-orders"
Creating ClusterRoleBinding="cert-manager-controller-challenges"
Creating ClusterRoleBinding="cert-manager-controller-ingress-shim"
Creating ClusterRole="cert-manager-view"
Creating ClusterRole="cert-manager-edit"
Creating Service="cert-manager" Namespace="cert-manager"
Creating Service="cert-manager-webhook" Namespace="cert-manager"
Creating Deployment="cert-manager-cainjector" Namespace="cert-manager"
Creating Deployment="cert-manager" Namespace="cert-manager"
Creating Deployment="cert-manager-webhook" Namespace="cert-manager"
Creating APIService="v1beta1.webhook.cert-manager.io"
Creating MutatingWebhookConfiguration="cert-manager-webhook"
Creating ValidatingWebhookConfiguration="cert-manager-webhook"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Creating shared objects Provider="cluster-api" Version="v0.3.3"
Creating Namespace="capi-webhook-system"
Creating CustomResourceDefinition="clusters.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinedeployments.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinehealthchecks.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinepools.exp.cluster.x-k8s.io"
Creating CustomResourceDefinition="machines.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinesets.cluster.x-k8s.io"
Creating MutatingWebhookConfiguration="capi-mutating-webhook-configuration"
Creating Service="capi-webhook-service" Namespace="capi-webhook-system"
Creating Deployment="capi-controller-manager" Namespace="capi-webhook-system"
Creating Certificate="capi-serving-cert" Namespace="capi-webhook-system"
Creating Issuer="capi-selfsigned-issuer" Namespace="capi-webhook-system"
Creating ValidatingWebhookConfiguration="capi-validating-webhook-configuration"
Creating instance objects Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Creating Namespace="capi-system"
Creating Role="capi-leader-election-role" Namespace="capi-system"
Creating ClusterRole="capi-system-capi-aggregated-manager-role"
Creating ClusterRole="capi-system-capi-manager-role"
Creating ClusterRole="capi-system-capi-proxy-role"
Creating RoleBinding="capi-leader-election-rolebinding" Namespace="capi-system"
Creating ClusterRoleBinding="capi-system-capi-manager-rolebinding"
Creating ClusterRoleBinding="capi-system-capi-proxy-rolebinding"
Creating Service="capi-controller-manager-metrics-service" Namespace="capi-system"
Creating Deployment="capi-controller-manager" Namespace="capi-system"
Creating inventory entry Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Creating shared objects Provider="bootstrap-kubeadm" Version="v0.3.3"
Creating CustomResourceDefinition="kubeadmconfigs.bootstrap.cluster.x-k8s.io"
Creating CustomResourceDefinition="kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io"
Creating Service="capi-kubeadm-bootstrap-webhook-service" Namespace="capi-webhook-system"
Creating Deployment="capi-kubeadm-bootstrap-controller-manager" Namespace="capi-webhook-system"
Creating Certificate="capi-kubeadm-bootstrap-serving-cert" Namespace="capi-webhook-system"
Creating Issuer="capi-kubeadm-bootstrap-selfsigned-issuer" Namespace="capi-webhook-system"
Creating instance objects Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Creating Namespace="capi-kubeadm-bootstrap-system"
Creating Role="capi-kubeadm-bootstrap-leader-election-role" Namespace="capi-kubeadm-bootstrap-system"
Creating ClusterRole="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-manager-role"
Creating ClusterRole="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-proxy-role"
Creating RoleBinding="capi-kubeadm-bootstrap-leader-election-rolebinding" Namespace="capi-kubeadm-bootstrap-system"
Creating ClusterRoleBinding="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-manager-rolebinding"
Creating ClusterRoleBinding="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-proxy-rolebinding"
Creating Service="capi-kubeadm-bootstrap-controller-manager-metrics-service" Namespace="capi-kubeadm-bootstrap-system"
Creating Deployment="capi-kubeadm-bootstrap-controller-manager" Namespace="capi-kubeadm-bootstrap-system"
Creating inventory entry Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Creating shared objects Provider="control-plane-kubeadm" Version="v0.3.3"
Creating CustomResourceDefinition="kubeadmcontrolplanes.controlplane.cluster.x-k8s.io"
Creating MutatingWebhookConfiguration="capi-kubeadm-control-plane-mutating-webhook-configuration"
Creating Service="capi-kubeadm-control-plane-webhook-service" Namespace="capi-webhook-system"
Creating Deployment="capi-kubeadm-control-plane-controller-manager" Namespace="capi-webhook-system"
Creating Certificate="capi-kubeadm-control-plane-serving-cert" Namespace="capi-webhook-system"
Creating Issuer="capi-kubeadm-control-plane-selfsigned-issuer" Namespace="capi-webhook-system"
Creating ValidatingWebhookConfiguration="capi-kubeadm-control-plane-validating-webhook-configuration"
Creating instance objects Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Creating Namespace="capi-kubeadm-control-plane-system"
Creating Role="capi-kubeadm-control-plane-leader-election-role" Namespace="capi-kubeadm-control-plane-system"
Creating Role="capi-kubeadm-control-plane-manager-role" Namespace="capi-kubeadm-control-plane-system"
Creating ClusterRole="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-manager-role"
Creating ClusterRole="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-proxy-role"
Creating RoleBinding="capi-kubeadm-control-plane-leader-election-rolebinding" Namespace="capi-kubeadm-control-plane-system"
Creating ClusterRoleBinding="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-manager-rolebinding"
Creating ClusterRoleBinding="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-proxy-rolebinding"
Creating Service="capi-kubeadm-control-plane-controller-manager-metrics-service" Namespace="capi-kubeadm-control-plane-system"
Creating Deployment="capi-kubeadm-control-plane-controller-manager" Namespace="capi-kubeadm-control-plane-system"
Creating inventory entry Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-gcp" Version="v0.3.0" TargetNamespace="capg-system"
Creating shared objects Provider="infrastructure-gcp" Version="v0.3.0"
Creating CustomResourceDefinition="gcpclusters.infrastructure.cluster.x-k8s.io"
Creating CustomResourceDefinition="gcpmachines.infrastructure.cluster.x-k8s.io"
Creating CustomResourceDefinition="gcpmachinetemplates.infrastructure.cluster.x-k8s.io"
Creating Service="capg-webhook-service" Namespace="capi-webhook-system"
Creating Deployment="capg-controller-manager" Namespace="capi-webhook-system"
Creating Certificate="capg-serving-cert" Namespace="capi-webhook-system"
Creating Issuer="capg-selfsigned-issuer" Namespace="capi-webhook-system"
Creating ValidatingWebhookConfiguration="capg-validating-webhook-configuration"
Creating instance objects Provider="infrastructure-gcp" Version="v0.3.0" TargetNamespace="capg-system"
Creating Namespace="capg-system"
Creating Role="capg-leader-election-role" Namespace="capg-system"
Creating ClusterRole="capg-system-capg-manager-role"
Creating ClusterRole="capg-system-capg-proxy-role"
Creating RoleBinding="capg-leader-election-rolebinding" Namespace="capg-system"
Creating ClusterRoleBinding="capg-system-capg-manager-rolebinding"
Creating ClusterRoleBinding="capg-system-capg-proxy-rolebinding"
Creating Secret="capg-manager-bootstrap-credentials" Namespace="capg-system"
Patching Secret="capg-manager-bootstrap-credentials" Namespace="capg-system"
Creating Service="capg-controller-manager-metrics-service" Namespace="capg-system"
Creating Deployment="capg-controller-manager" Namespace="capg-system"
Creating inventory entry Provider="infrastructure-gcp" Version="v0.3.0" TargetNamespace="capg-system"
```
$ kubectl get pods -A
```
NAMESPACE NAME READY STATUS RESTARTS AGE
capg-system capg-controller-manager-b8655ddb4-swwzk 2/2 Running 0 54s
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-66c6b6857b-22hg4 2/2 Running 0 73s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-688f7ccc56-7g676 2/2 Running 0 65s
capi-system capi-controller-manager-549c757797-6vscq 2/2 Running 0 84s
capi-webhook-system capg-controller-manager-d5f85c48d-74gj6 2/2 Running 0 61s
capi-webhook-system capi-controller-manager-5f8fc485bb-stflj 2/2 Running 0 88s
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6b645d9d4c-2crk7 2/2 Running 0 81s
capi-webhook-system capi-kubeadm-control-plane-controller-manager-65dbd6f999-cghmx 2/2 Running 0 70s
cert-manager cert-manager-77d8f4d85f-cqp7m 1/1 Running 0 115s
cert-manager cert-manager-cainjector-75f88c9f56-qh9m8 1/1 Running 0 115s
cert-manager cert-manager-webhook-56669d7fcb-6zddl 1/1 Running 0 115s
kube-system coredns-66bff467f8-kmg7c 1/1 Running 0 3m55s
kube-system coredns-66bff467f8-lg8qc 1/1 Running 0 3m55s
kube-system etcd-capi-gcp-control-plane 1/1 Running 0 4m4s
kube-system kindnet-dzp8v 1/1 Running 0 3m55s
kube-system kube-apiserver-capi-gcp-control-plane 1/1 Running 0 4m4s
kube-system kube-controller-manager-capi-gcp-control-plane 1/1 Running 0 4m3s
kube-system kube-proxy-zvdh8 1/1 Running 0 3m55s
kube-system kube-scheduler-capi-gcp-control-plane 1/1 Running 0 3m56s
local-path-storage local-path-provisioner-bd4bb6b75-6drnt 1/1 Running 0 3m55s
```
## Create control plane and worker nodes
$ airshipctl phase apply controlplane --debug
```
[airshipctl] 2020/09/02 11:21:08 building bundle from kustomize path /tmp/airship/airshipctl/manifests/site/gcp-test-site/target/controlplane
[airshipctl] 2020/09/02 11:21:08 Applying bundle, inventory id: kind-capi-gcp-target-controlplane
[airshipctl] 2020/09/02 11:21:08 Inventory Object config Map not found, auto generating Invetory object
[airshipctl] 2020/09/02 11:21:08 Injecting Invetory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"kind-capi-gcp-target-controlplane"},"name":"airshipit-kind-capi-gcp-target-controlplane","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/09/02 11:21:08 Making sure that inventory object namespace airshipit exists
configmap/airshipit-kind-capi-gcp-target-controlplane-5ab3466f created
cluster.cluster.x-k8s.io/gtc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/gtc-control-plane created
gcpcluster.infrastructure.cluster.x-k8s.io/gtc created
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-control-plane created
5 resource(s) applied. 5 created, 0 unchanged, 0 configured
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/gtc-control-plane is NotFound: Resource not found
gcpcluster.infrastructure.cluster.x-k8s.io/gtc is NotFound: Resource not found
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-control-plane is NotFound: Resource not found
configmap/airshipit-kind-capi-gcp-target-controlplane-5ab3466f is NotFound: Resource not found
cluster.cluster.x-k8s.io/gtc is NotFound: Resource not found
configmap/airshipit-kind-capi-gcp-target-controlplane-5ab3466f is Current: Resource is always ready
cluster.cluster.x-k8s.io/gtc is Current: Resource is current
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/gtc-control-plane is Current: Resource is current
gcpcluster.infrastructure.cluster.x-k8s.io/gtc is Current: Resource is current
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-control-plane is Current: Resource is current
all resources has reached the Current status
```
$ airshipctl phase apply workers --debug
```
[airshipctl] 2020/09/02 11:21:20 building bundle from kustomize path /tmp/airship/airshipctl/manifests/site/gcp-test-site/target/workers
[airshipctl] 2020/09/02 11:21:20 Applying bundle, inventory id: kind-capi-gcp-target-workers
[airshipctl] 2020/09/02 11:21:20 Inventory Object config Map not found, auto generating Invetory object
[airshipctl] 2020/09/02 11:21:20 Injecting Invetory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"kind-capi-gcp-target-workers"},"name":"airshipit-kind-capi-gcp-target-workers","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/09/02 11:21:20 Making sure that inventory object namespace airshipit exists
configmap/airshipit-kind-capi-gcp-target-workers-1a36e40a created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/gtc-md-0 created
machinedeployment.cluster.x-k8s.io/gtc-md-0 created
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-md-0 created
4 resource(s) applied. 4 created, 0 unchanged, 0 configured
configmap/airshipit-kind-capi-gcp-target-workers-1a36e40a is NotFound: Resource not found
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/gtc-md-0 is NotFound: Resource not found
machinedeployment.cluster.x-k8s.io/gtc-md-0 is NotFound: Resource not found
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-md-0 is NotFound: Resource not found
configmap/airshipit-kind-capi-gcp-target-workers-1a36e40a is Current: Resource is always ready
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/gtc-md-0 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/gtc-md-0 is Current: Resource is current
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-md-0 is Current: Resource is current
```
$ kubectl get pods -A
```
NAMESPACE NAME READY STATUS RESTARTS AGE
capg-system capg-controller-manager-b8655ddb4-swwzk 2/2 Running 0 6m9s
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-66c6b6857b-22hg4 2/2 Running 0 6m28s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-688f7ccc56-7g676 2/2 Running 0 6m20s
capi-system capi-controller-manager-549c757797-6vscq 2/2 Running 0 6m39s
capi-webhook-system capg-controller-manager-d5f85c48d-74gj6 2/2 Running 0 6m16s
capi-webhook-system capi-controller-manager-5f8fc485bb-stflj 2/2 Running 0 6m43s
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6b645d9d4c-2crk7 2/2 Running 0 6m36s
capi-webhook-system capi-kubeadm-control-plane-controller-manager-65dbd6f999-cghmx 2/2 Running 0 6m25s
cert-manager cert-manager-77d8f4d85f-cqp7m 1/1 Running 0 7m10s
cert-manager cert-manager-cainjector-75f88c9f56-qh9m8 1/1 Running 0 7m10s
cert-manager cert-manager-webhook-56669d7fcb-6zddl 1/1 Running 0 7m10s
kube-system coredns-66bff467f8-kmg7c 1/1 Running 0 9m10s
kube-system coredns-66bff467f8-lg8qc 1/1 Running 0 9m10s
kube-system etcd-capi-gcp-control-plane 1/1 Running 0 9m19s
kube-system kindnet-dzp8v 1/1 Running 0 9m10s
kube-system kube-apiserver-capi-gcp-control-plane 1/1 Running 0 9m19s
kube-system kube-controller-manager-capi-gcp-control-plane 1/1 Running 0 9m18s
kube-system kube-proxy-zvdh8 1/1 Running 0 9m10s
kube-system kube-scheduler-capi-gcp-control-plane 1/1 Running 0 9m11s
local-path-storage local-path-provisioner-bd4bb6b75-6drnt 1/1 Running 0 9m10s
```
To check the logs, run the command below:
$ kubectl logs capg-controller-manager-b8655ddb4-swwzk -n capg-system --all-containers=true -f
```
I0902 18:15:30.884391 1 main.go:213] Generating self signed cert as no cert is provided
I0902 18:15:35.135060 1 main.go:243] Starting TCP socket on 0.0.0.0:8443
I0902 18:15:35.175185 1 main.go:250] Listening securely on 0.0.0.0:8443
I0902 18:15:51.111202 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"="127.0.0.1:8080"
I0902 18:15:51.113054 1 main.go:205] setup "msg"="starting manager"
I0902 18:15:51.113917 1 leaderelection.go:242] attempting to acquire leader lease capg-system/controller-leader-election-capg...
I0902 18:15:51.114691 1 internal.go:356] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics"
I0902 18:15:51.142032 1 leaderelection.go:252] successfully acquired lease capg-system/controller-leader-election-capg
I0902 18:15:51.145165 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "c
```
$ kubectl get machines
```
NAME PROVIDERID PHASE
gtc-control-plane-cxcd4 gce://virtual-anchor-281401/us-west1-a/gtc-control-plane-vmplz Running
gtc-md-0-6cf7474cff-zpbxv gce://virtual-anchor-281401/us-west1-a/gtc-md-0-7mccx Running
```
$ kubectl --namespace=default get secret/gtc-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./gtc.kubeconfig
$ kubectl get pods -A --kubeconfig ./gtc.kubeconfig
```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6d4fbb6df9-8lf4f 1/1 Running 0 5m18s
kube-system calico-node-6lmqw 1/1 Running 0 73s
kube-system calico-node-qtgzj 1/1 Running 1 5m18s
kube-system coredns-5644d7b6d9-dqd75 1/1 Running 0 5m18s
kube-system coredns-5644d7b6d9-ls2q9 1/1 Running 0 5m18s
kube-system etcd-gtc-control-plane-vmplz 1/1 Running 0 4m53s
kube-system kube-apiserver-gtc-control-plane-vmplz 1/1 Running 0 4m42s
kube-system kube-controller-manager-gtc-control-plane-vmplz 1/1 Running 0 4m59s
kube-system kube-proxy-6hk8c 1/1 Running 0 5m18s
kube-system kube-proxy-b8mqw 1/1 Running 0 73s
kube-system kube-scheduler-gtc-control-plane-vmplz 1/1 Running 0 4m47s
```
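The kubeconfig retrieved above is stored base64-encoded in the `gtc-kubeconfig` secret; the pipeline simply decodes it to a file. Below is a minimal stand-alone sketch of that decoding step, using placeholder data rather than a real kubeconfig:

```shell
# Placeholder standing in for the value held in secret/gtc-kubeconfig.
sample='apiVersion: v1
kind: Config'

# Kubernetes secrets hold data base64-encoded; this mirrors the decode step above.
encoded=$(printf '%s' "$sample" | base64 -w0)
decoded=$(printf '%s' "$encoded" | base64 --decode)
printf '%s\n' "$decoded"
```

The same round trip happens when `kubectl get secret ... | base64 --decode` writes `gtc.kubeconfig`.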
The control plane and worker nodes are now running on Google Cloud.
## Tear Down Clusters
If you would like to delete the cluster, run the commands below.
This deletes the control plane, workers, machine health check, and all other resources associated with the cluster on GCP.
$ airshipctl phase render controlplane -k Cluster
```
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
name: gtc
namespace: default
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
name: gtc-control-plane
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: GCPCluster
name: gtc
...
```
$ airshipctl phase render controlplane -k Cluster | kubectl delete -f -
```
cluster.cluster.x-k8s.io "gtc" deleted
```
$ kind delete cluster --name capi-gcp
```
Deleting cluster "capi-gcp" ...
```
## Reference
### Provider Manifests
Provider configuration is referenced from https://github.com/kubernetes-sigs/cluster-api-provider-gcp/tree/master/config
Cluster API does not support the GCP provider out of the box, so the metadata information is added using the files in airshipctl/manifests/function/capg/data
$ tree airshipctl/manifests/function/capg
```
airshipctl/manifests/function/capg
└── v0.3.0
    ├── certmanager
    │   ├── certificate.yaml
    │   ├── kustomization.yaml
    │   └── kustomizeconfig.yaml
    ├── crd
    │   ├── bases
    │   │   ├── infrastructure.cluster.x-k8s.io_gcpclusters.yaml
    │   │   ├── infrastructure.cluster.x-k8s.io_gcpmachines.yaml
    │   │   └── infrastructure.cluster.x-k8s.io_gcpmachinetemplates.yaml
    │   ├── kustomization.yaml
    │   ├── kustomizeconfig.yaml
    │   └── patches
    │       ├── cainjection_in_gcpclusters.yaml
    │       ├── cainjection_in_gcpmachines.yaml
    │       ├── cainjection_in_gcpmachinetemplates.yaml
    │       ├── webhook_in_gcpclusters.yaml
    │       ├── webhook_in_gcpmachines.yaml
    │       └── webhook_in_gcpmachinetemplates.yaml
    ├── data
    │   ├── capg-resources.yaml
    │   ├── kustomization.yaml
    │   └── metadata.yaml
    ├── default
    │   ├── credentials.yaml
    │   ├── kustomization.yaml
    │   ├── manager_credentials_patch.yaml
    │   ├── manager_prometheus_metrics_patch.yaml
    │   ├── manager_role_aggregation_patch.yaml
    │   └── namespace.yaml
    ├── kustomization.yaml
    ├── manager
    │   ├── kustomization.yaml
    │   ├── manager_auth_proxy_patch.yaml
    │   ├── manager_image_patch.yaml
    │   ├── manager_pull_policy.yaml
    │   └── manager.yaml
    ├── patch_crd_webhook_namespace.yaml
    ├── rbac
    │   ├── auth_proxy_role_binding.yaml
    │   ├── auth_proxy_role.yaml
    │   ├── auth_proxy_service.yaml
    │   ├── kustomization.yaml
    │   ├── leader_election_role_binding.yaml
    │   ├── leader_election_role.yaml
    │   ├── role_binding.yaml
    │   └── role.yaml
    └── webhook
        ├── kustomization.yaml
        ├── kustomizeconfig.yaml
        ├── manager_webhook_patch.yaml
        ├── manifests.yaml
        ├── service.yaml
        └── webhookcainjection_patch.yaml
```
#### CAPG Specific Variables
capg-resources.yaml consists of the `gcp provider specific` variables required to initialize the management cluster.
The values for these variables can be exported before running `airshipctl cluster init`, or they can be defined explicitly in clusterctl.yaml
$ cat airshipctl/manifests/function/capg/v0.3.0/data/capg-resources.yaml
```
apiVersion: v1
kind: Secret
metadata:
name: manager-bootstrap-credentials
namespace: system
type: Opaque
data:
GCP_CONTROL_PLANE_MACHINE_TYPE: ${GCP_CONTROL_PLANE_MACHINE_TYPE}
GCP_NODE_MACHINE_TYPE: ${GCP_NODE_MACHINE_TYPE}
GCP_PROJECT: ${GCP_PROJECT}
GCP_REGION: ${GCP_REGION}
GCP_NETWORK_NAME: ${GCP_NETWORK_NAME}
GCP_B64ENCODED_CREDENTIALS: ${GCP_B64ENCODED_CREDENTIALS}
```
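These variables can be supplied as environment variables before running `airshipctl cluster init`. A sketch of the exports is shown below; all values are examples, and the key file written here is only a placeholder standing in for the real service account key generated earlier:

```shell
# Example values only; substitute your own project, region and machine types.
export GCP_PROJECT="my-gcp-project"
export GCP_REGION="us-west1"
export GCP_NETWORK_NAME="default"
export GCP_CONTROL_PLANE_MACHINE_TYPE="n1-standard-4"
export GCP_NODE_MACHINE_TYPE="n1-standard-4"

# Placeholder file standing in for the service account JSON key.
key_file=./serviceaccount-key.json
printf '{"type": "service_account"}' > "$key_file"

# The provider consumes the key base64-encoded on a single line.
export GCP_B64ENCODED_CREDENTIALS=$(base64 -w0 "$key_file")
```

With these set, `airshipctl cluster init` can substitute them into the secret above.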
### Cluster Templates
manifests/function/k8scontrol-capg contains the cluster.yaml and controlplane.yaml templates, referenced from [cluster-template](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/master/templates/cluster-template.yaml)
| Template Name | CRDs |
| ----------------- | ---- |
| cluster.yaml | Cluster, GCPCluster |
| controlplane.yaml | KubeadmControlPlane, GCPMachineTemplate |
$ tree airshipctl/manifests/function/k8scontrol-capg
```
airshipctl/manifests/function/k8scontrol-capg
├── cluster.yaml
├── controlplane.yaml
└── kustomization.yaml
```
airshipctl/manifests/function/workers-capg contains workers.yaml, referenced from [cluster-template](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/master/templates/cluster-template.yaml)
| Template Name | CRDs |
| ----------------- | ---- |
| workers.yaml | GCPMachineTemplate, MachineDeployment, KubeadmConfigTemplate |
$ tree airshipctl/manifests/function/workers-capg
```
airshipctl/manifests/function/workers-capg
├── kustomization.yaml
└── workers.yaml
```
### Test Site Manifests
#### gcp-test-site/shared
airshipctl cluster init uses airshipctl/manifests/site/gcp-test-site/shared/clusterctl to initialize the management cluster with the defined provider components and versions.
$ tree airshipctl/manifests/site/gcp-test-site/shared
```
airshipctl/manifests/site/gcp-test-site/shared
└── clusterctl
    ├── clusterctl.yaml
    └── kustomization.yaml
```
#### gcp-test-site/target
There are 3 phases currently available in gcp-test-site/target:
|Phase Name | Purpose |
|-----------|---------|
| controlplane | Patches templates in manifests/function/k8scontrol-capg |
| workers | Patches templates in manifests/function/workers-capg |
| initinfra | Simply calls `gcp-test-site/shared/clusterctl` |
Note: `airshipctl cluster init` initializes all the provider components including the gcp infrastructure provider component. As a result, `airshipctl phase apply initinfra` is not used.
At the moment, `phase initinfra` is present for two reasons only:
- `airshipctl` complains if the phase is not found
- to let `validate site docs` pass
#### Patch Merge Strategy
JSON patches from `airshipctl/manifests/site/gcp-test-site/target/controlplane` are applied to the templates in `manifests/function/k8scontrol-capg` when `airshipctl phase apply controlplane` is executed.
JSON patches from `airshipctl/manifests/site/gcp-test-site/target/workers` are applied to the templates in `manifests/function/workers-capg` when `airshipctl phase apply workers` is executed.
| Patch Name | Purpose |
| ------------------------------- | ------------------------------------------------------------------ |
| controlplane/machine_count.json | patches control plane machine count in template function/k8scontrol-capg |
| controlplane/machine_type.json | patches control plane machine type in template function/k8scontrol-capg |
| controlplane/network_name.json | patches control plane network name in template function/k8scontrol-capg |
| controlplane/project_name.json | patches project id in template function/k8scontrol-capg |
| controlplane/region_name.json | patches region name in template function/k8scontrol-capg |
| workers/machine_count.json | patches worker machine count in template function/workers-capg |
| workers/machine_type.json | patches worker machine type in template function/workers-capg |
| workers/failure_domain.json | patches failure_domain in template function/workers-capg |
$ tree airshipctl/manifests/site/gcp-test-site/target/
```
airshipctl/manifests/site/gcp-test-site/target/
├── controlplane
│   ├── kustomization.yaml
│   ├── machine_count.json
│   ├── machine_type.json
│   ├── network_name.json
│   ├── project_name.json
│   └── region_name.json
├── initinfra
│   └── kustomization.yaml
└── workers
    ├── failure_domain.json
    ├── kustomization.yaml
    ├── machine_count.json
    └── machine_type.json

3 directories, 11 files
```
### Software Version Information
All the instructions in this document have been tested with the software versions listed in this section.
#### Virtual Machine Specification
All the instructions in this document were performed on an Oracle VirtualBox (6.1) VM running Ubuntu 18.04.4 LTS (Bionic Beaver) with 16 GB of memory and 4 vCPUs.
#### Docker
$ docker version
```
Client: Docker Engine - Community
Version: 19.03.9
API version: 1.40
Go version: go1.13.10
Git commit: 9d988398e7
Built: Fri May 15 00:25:18 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.9
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 9d988398e7
Built: Fri May 15 00:23:50 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
```
#### Kind
$ kind version
```
kind v0.8.1 go1.14.2 linux/amd64
```
#### Kubectl
$ kubectl version
```
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2020-01-14T00:09:19Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
```
#### Go
$ go version
```
go version go1.14.1 linux/amd64
```
#### Kustomize
$ kustomize version
```
{Version:kustomize/v3.8.0 GitCommit:6a50372dd5686df22750b0c729adaf369fbf193c BuildDate:2020-07-05T14:08:42Z GoOs:linux GoArch:amd64}
```
#### OS
$ cat /etc/os-release
```
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```