# Analyzing Cluster API GCP Provider for Usage with Airshipctl
This document provides instructions on using `clusterctl` to deploy a self-managed workload cluster on Google Cloud. The purpose of testing the `gcp` provider with `clusterctl` is to evaluate the current state of the Cluster API GCP provider and gauge the overall effort that will be required to integrate airshipctl with Cluster API on GCP. Check [Takeaways](#Takeaways)
Also check [Airshipctl and Cluster API GCP Integration](https://hackmd.io/hTy5BVlbSxmPHVi-u4HcKQ?view#Airshipctl-and-Cluster-API-GCP-Integration)
## Workflow
A `kind` cluster on a local VM, initialized with CAPI and CAPG provider components, was used to create a workload cluster on Google Cloud with one control plane and one machine deployment.
The workload cluster was later initialized with the same provider components, making it a CAPI-capable management cluster. Then all Cluster API objects were moved from the `kind` management cluster to the target management cluster on Google Cloud using `clusterctl move`, making the workload cluster on Google Cloud self-managed.
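In `clusterctl` terms, the workflow above boils down to the following command sequence. This is a sketch, not a transcript: cluster names, versions, and the kubeconfig path are illustrative, and the detailed steps (overrides, variables) are covered later in this document.

```shell
# 1. Bootstrap: create the ephemeral kind management cluster and initialize it
kind create cluster --name capi-gcp
clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 \
  --control-plane kubeadm:v0.3.0 --infrastructure gcp:v0.3.0

# 2. Create the workload cluster on Google Cloud (1 control plane, 1 worker)
clusterctl config cluster gtc --kubernetes-version v1.16.11 \
  --control-plane-machine-count=1 --worker-machine-count=1 | kubectl apply -f -

# 3. Initialize the workload cluster with the same provider components,
#    turning it into a CAPI management cluster
clusterctl init --kubeconfig gtc.kubeconfig --core cluster-api:v0.3.0 \
  --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure gcp:v0.3.0

# 4. Pivot: move all Cluster API objects to the new management cluster,
#    making the workload cluster self-managed
clusterctl move --to-kubeconfig gtc.kubeconfig
```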

$ kubectl get machines --kubeconfig ~/projects/cluster-api/gtc.kubeconfig
```
NAME                        PROVIDERID                                                    PHASE
gtc-control-plane-2fdxz     gce://elated-bolt-284718/us-west1-a/gtc-control-plane-957q4   Provisioned
gtc-md-0-7b8cbcb584-rzjml   gce://elated-bolt-284718/us-west1-a/gtc-md-0-z8khl            Provisioned
```
$ kubectl get pods -A --kubeconfig ~/projects/cluster-api/gtc.kubeconfig
```
NAMESPACE NAME READY STATUS RESTARTS AGE
capg-system capg-controller-manager-69c6c9f5d6-2vgxn 2/2 Running 0 44s
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-7bd7989c7c-x5dxx 2/2 Running 0 50s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-df67d9b74-dqs78 2/2 Running 0 47s
capi-system capi-controller-manager-7f764bb474-fcf8g 2/2 Running 0 54s
capi-webhook-system capg-controller-manager-c768d69b4-kzkz4 2/2 Running 0 46s
capi-webhook-system capi-controller-manager-b554d9469-45gvj 2/2 Running 1 56s
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6dfdc6fd96-65f8x 2/2 Running 0 53s
capi-webhook-system capi-kubeadm-control-plane-controller-manager-7bf5997449-8js9z 2/2 Running 0 50s
cert-manager cert-manager-84bb546784-h8cms 1/1 Running 0 72s
cert-manager cert-manager-cainjector-65f677f4fc-48868 1/1 Running 0 72s
cert-manager cert-manager-webhook-bbb8f9db9-bv67d 1/1 Running 1 72s
kube-system calico-kube-controllers-59d85c5c84-4sqxd 1/1 Running 0 34h
kube-system calico-node-2fchr 1/1 Running 1 34h
kube-system calico-node-hkjp4 1/1 Running 1 34h
kube-system coredns-5644d7b6d9-brr7n 1/1 Running 0 34h
kube-system coredns-5644d7b6d9-s9gnn 1/1 Running 0 34h
kube-system etcd-gtc-control-plane-957q4 1/1 Running 0 34h
kube-system kube-apiserver-gtc-control-plane-957q4 1/1 Running 0 34h
kube-system kube-controller-manager-gtc-control-plane-957q4 1/1 Running 0 34h
kube-system kube-proxy-cb7kv 1/1 Running 0 34h
kube-system kube-proxy-g8959 1/1 Running 0 34h
kube-system kube-scheduler-gtc-control-plane-957q4 1/1 Running 0 34h
```
## Takeaways
- The Cluster API GCP provider repository is not as mature as other provider repositories such as Azure or OpenStack. In terms of releases, there are currently two pre-releases available: v0.1.0-alpha.1 and v0.2.0-alpha.2. The master branch of the project is in preparation for v0.3.0, which has not been officially released as of `July 31, 2020`.
Check [Releases](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/releases)
- The GCP provider is not supported by cluster-api out of the box. The provider does not get initialized simply by running `clusterctl init -i gcp`
- I had to create an overrides layer similar to the one used for docker. The overrides code shipped with the `cluster-api` project required adding the GCP provider component and metadata information, and functions to generate the `metadata.yaml` and `infrastructure-components.yaml` files. For airshipctl, we can use `kustomize` to include the metadata information, just the way we did for `docker`
- The `cluster-template.yaml` was also required in the overrides directory. Therefore, I had to get it from the provider repository and place it alongside `metadata.yaml` and `infrastructure-components.yaml`. [Check cluster-template.yaml](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/master/templates/cluster-template.yaml)
- The management cluster was failing to initialize the CAPG provider component: the `capg-controller-manager` pod was going into `CrashLoopBackOff`. To get it to work, I had to change the manager image from `gcr.io/k8s-staging-cluster-api-gcp/cluster-api-gcp-controller:latest` to `gcr.io/k8s-staging-cluster-api-gcp/cluster-api-gcp-controller:master`
- The `capi` provider component also failed to initialize at first; the `capi-controller-manager` pod was going into `CrashLoopBackOff`. I discussed this with people in the Cluster API community, and they suggested using the latest version of clusterctl, v0.3.7. After upgrading clusterctl, the capi provider component initialized successfully.
Check this Slack [Thread](https://kubernetes.slack.com/archives/C8TSNPY4T/p1595986139360000)
- With the above changes, I tried two version combinations and both worked:
```
First: latest versions of all components
--core cluster-api:v0.3.7 --bootstrap kubeadm:v0.3.7 --control-plane kubeadm:v0.3.7 --infrastructure gcp:v0.3.0
Second: same version for all components
--core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure gcp:v0.3.0
```
- A number of prerequisites need to be satisfied before Google Cloud can be used by Cluster API:
  - Create a service account and a key associated with the account
  - Create a Cloud NAT router for the project. [Check here](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/master/docs/prerequisites.md)
  - Build a CAPI-compliant custom VM image using image builder. The image comes with Kubernetes release 1.16.11. The `kubernetes version` configured in the target cluster configuration needs to be available as part of the VM image.
- The Google Cloud specific variables required to initialize the cluster before executing `airshipctl cluster init` can be fed to airshipctl in one of two ways; see [this note](https://hackmd.io/JjgtAo3aTcSTo-1BIC3xoA).
For example, the following variables need to be defined before cluster init:
```
GCP_B64ENCODED_CREDENTIALS=$( cat ~/elated-bolt-284718-3feb4855f30e.json | base64 | tr -d '\n' )
GCP_CONTROL_PLANE_MACHINE_TYPE="n1-standard-4"
GCP_NODE_MACHINE_TYPE="n1-standard-4"
GCP_PROJECT="elated-bolt-284718"
GCP_REGION="us-west1"
GCP_NETWORK_NAME="default"
```
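Since `clusterctl` serves the provider components from the local overrides folder, one way to apply the manager-image workaround mentioned above is to patch the generated components file before running `clusterctl init`. This is a sketch; the override path follows the layout shown later in this document.

```shell
# Replace the broken :latest manager image tag with :master in the
# generated GCP infrastructure components override
sed -i 's|cluster-api-gcp-controller:latest|cluster-api-gcp-controller:master|' \
  ~/.cluster-api/overrides/infrastructure-gcp/v0.3.0/infrastructure-components.yaml
```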
## Provider Components Version Information
The management clusters were initialized with the following provider components and versions:
| provider component name | provider component type | provider component version |
| ----------------------- | ----------------------- | -------------------------- |
| gcp | infrastructure-provider | v0.3.0 |
| cluster-api | core-provider | v0.3.0 |
| kubeadm | control-plane-provider | v0.3.0 |
| kubeadm | bootstrap-provider | v0.3.0 |
## Pre-requisites
On your Google Cloud account, set up the following for your project:
- Service account
- Cloud NAT
- CAPI-compliant image built using `image builder`
Note: The latest version of clusterctl, v0.3.7, will also be required on your laptop or local VM
### Service Account and Cloud NAT
Please check [Reference](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/master/docs/prerequisites.md)
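The referenced prerequisites document walks through these two steps; roughly, they correspond to `gcloud` commands like the following. This is a hedged sketch: the service account name, key path, router/NAT names, and role bindings are illustrative and should be taken from the prerequisites document, not from here.

```shell
# Service account and key used by the CAPG controllers (names illustrative)
gcloud iam service-accounts create capg-manager --project elated-bolt-284718
gcloud iam service-accounts keys create ~/capg-key.json \
  --iam-account capg-manager@elated-bolt-284718.iam.gserviceaccount.com

# Cloud NAT so that nodes without external IPs can reach the internet
gcloud compute routers create capg-router --project elated-bolt-284718 \
  --region us-west1 --network default
gcloud compute routers nats create capg-nat --project elated-bolt-284718 \
  --router capg-router --region us-west1 \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```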
### Build CAPI Compliant VM Image using cloud builder
Prerequisite software: Ansible, Packer, Cloud Builder
Use Google Cloud Shell to run the following commands:
`$ export GOOGLE_APPLICATION_CREDENTIALS=~/elated-bolt-284718-3feb4855f30e.json`
`$ export GCP_PROJECT_ID=elated-bolt-284718`
Install Ansible
`$ apt-get install ansible`
Install Packer
```
$ wget https://releases.hashicorp.com/packer/1.6.0/packer_1.6.0_linux_amd64.zip
$ unzip packer_1.6.0_linux_amd64.zip
$ sudo mv packer /usr/local/bin/
```
Build Image using image builder
```
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi
$ make build-gce-default
$ gcloud compute images list --project ${GCP_PROJECT_ID} --no-standard-images
NAME                                          PROJECT             FAMILY                      DEPRECATED  STATUS
cluster-api-ubuntu-1804-v1-16-11-1595971481   elated-bolt-284718  capi-ubuntu-1804-k8s-v1-16              READY
```
## Install clusterctl latest version
Check [Installation](https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl)
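Per the quick-start guide linked above, on a Linux amd64 machine the install is a single binary download (version pinned to v0.3.7 here to match this evaluation):

```shell
# Download the pinned clusterctl release binary and put it on the PATH
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.7/clusterctl-linux-amd64 -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl
```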
$ clusterctl version
```
clusterctl version: &version.Info{Major:"0", Minor:"3", GitVersion:"v0.3.7", GitCommit:"846ca08db5939e5bf88104b2f34427e7316ee7b9", GitTreeState:"clean", BuildDate:"2020-07-14T14:34:36Z", GoVersion:"go1.13.12", Compiler:"gc", Platform:"linux/amd64"}
```
## Export GCP Specific Variables
Export Variables required for the google account used by CAPI
```
export GCP_B64ENCODED_CREDENTIALS=$( cat ~/elated-bolt-284718-3feb4855f30e.json | base64 | tr -d '\n' )
export GCP_CONTROL_PLANE_MACHINE_TYPE="n1-standard-4"
export GCP_NODE_MACHINE_TYPE="n1-standard-4"
export GCP_PROJECT="elated-bolt-284718"
export GCP_REGION="us-west1"
export GCP_NETWORK_NAME="default"
```
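The `GCP_B64ENCODED_CREDENTIALS` value must be a single-line base64 string: `base64` wraps long output across lines, and `tr -d '\n'` collapses it to the single-line value the provider expects. A self-contained sanity check of that pipeline, using a throwaway file in place of a real service-account key (the path and contents are illustrative):

```shell
# Stand-in for the real service-account key file (illustrative only)
printf '{"type": "service_account"}' > /tmp/fake-key.json

# Same pipeline as the export above: encode, then strip the newlines base64 inserts
ENCODED=$(cat /tmp/fake-key.json | base64 | tr -d '\n')

# Round-trip: decoding the single-line value reproduces the original file
printf '%s' "$ENCODED" | base64 -d
# → {"type": "service_account"}
```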
## Create Overrides Code
Note: clone both repositories at the same directory level:
```
git clone https://github.com/kubernetes-sigs/cluster-api-provider-gcp.git
git clone https://github.com/kubernetes-sigs/cluster-api.git
```
Work from inside the cluster-api repository, and create a clusterctl-settings.json file that points to `../cluster-api-provider-gcp` as a provider repo.
After creating the clusterctl-settings.json file, edit the overrides script to add the GCP provider information and metadata.
$ cat clusterctl-settings.json
```
{
  "providers": ["cluster-api", "bootstrap-kubeadm", "control-plane-kubeadm", "infrastructure-gcp"],
  "provider_repos": ["../cluster-api-provider-gcp"]
}
```
$ cat ./cmd/clusterctl/hack/local-overrides.py
```
#!/usr/bin/env python

# Copyright 2020 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

###################
# local-overrides.py takes in input a list of providers and, for each of them, generates the components YAML from the
# local repositories (the GitHub repositories clone), and finally stores it in the clusterctl local override folder
#
# prerequisites:
# - the script should be executed from sigs.k8s.io/cluster-api/ by calling cmd/clusterctl/hack/local-overrides.py
# - there should be a sigs.k8s.io/cluster-api/clusterctl-settings.json file with the list of providers for which
#   the local overrides should be generated and the list of provider repositories to be included (on top of cluster-api), e.g.
#   {
#     "providers": ["cluster-api", "bootstrap-kubeadm", "infrastructure-aws"],
#     "provider_repos": ["../cluster-api-provider-aws"]
#   }
# - for each additional provider repository there should be a sigs.k8s.io/<provider_repo>/clusterctl-settings.json file, e.g.
#   {
#     "name": "infrastructure-aws",
#     "config": {
#       "componentsFile": "infrastructure-components.yaml",
#       "nextVersion": "v0.5.0"
#     }
#   }
###################

from __future__ import unicode_literals

import errno
import json
import os
import subprocess

settings = {}

providers = {
    'cluster-api': {
        'componentsFile': 'core-components.yaml',
        'nextVersion': 'v0.3.0',
        'type': 'CoreProvider',
    },
    'bootstrap-kubeadm': {
        'componentsFile': 'bootstrap-components.yaml',
        'nextVersion': 'v0.3.0',
        'type': 'BootstrapProvider',
        'configFolder': 'bootstrap/kubeadm/config',
    },
    'control-plane-kubeadm': {
        'componentsFile': 'control-plane-components.yaml',
        'nextVersion': 'v0.3.0',
        'type': 'ControlPlaneProvider',
        'configFolder': 'controlplane/kubeadm/config',
    },
    'infrastructure-docker': {
        'componentsFile': 'infrastructure-components.yaml',
        'nextVersion': 'v0.3.0',
        'type': 'InfrastructureProvider',
        'configFolder': 'test/infrastructure/docker/config',
    },
    'infrastructure-gcp': {
        'componentsFile': 'infrastructure-components.yaml',
        'nextVersion': 'v0.3.0',
        'type': 'InfrastructureProvider',
        'configFolder': '../cluster-api-provider-gcp/config',
    },
}

docker_metadata_yaml = """\
apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
kind: Metadata
releaseSeries:
  - major: 0
    minor: 2
    contract: v1alpha2
  - major: 0
    minor: 3
    contract: v1alpha3
"""

# The GCP provider follows the same release series as the docker provider
gcp_metadata_yaml = docker_metadata_yaml


def load_settings():
    global settings
    try:
        settings = json.load(open('clusterctl-settings.json'))
    except Exception as e:
        raise Exception('failed to load clusterctl-settings.json: {}'.format(e))


def load_providers():
    provider_repos = settings.get('provider_repos', [])
    for repo in provider_repos:
        file = repo + '/clusterctl-settings.json'
        try:
            provider_details = json.load(open(file))
            provider_name = provider_details['name']
            provider_config = provider_details['config']
            provider_config['repo'] = repo
            providers[provider_name] = provider_config
        except Exception as e:
            raise Exception('failed to load clusterctl-settings.json from repo {}: {}'.format(repo, e))


def execCmd(args):
    try:
        out = subprocess.Popen(args,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT)
        stdout, stderr = out.communicate()
        if stderr is not None:
            raise Exception('stderr contains: \n{}'.format(stderr))
        return stdout
    except Exception as e:
        raise Exception('failed to run {}: {}'.format(args, e))


def get_home():
    return os.path.expanduser('~')


def write_local_override(provider, version, components_file, components_yaml):
    provider_overrides_folder = os.path.join(get_home(), '.cluster-api', 'overrides', provider, version)
    try:
        try:
            os.makedirs(provider_overrides_folder)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
        f = open(os.path.join(provider_overrides_folder, components_file), 'wb')
        f.write(components_yaml)
        f.close()
    except Exception as e:
        raise Exception('failed to write {} to {}: {}'.format(components_file, provider_overrides_folder, e))


def write_docker_metadata(version):
    docker_folder = os.path.join(get_home(), '.cluster-api', 'overrides', 'infrastructure-docker', version)
    try:
        f = open(os.path.join(docker_folder, 'metadata.yaml'), 'w')
        f.write(docker_metadata_yaml)
        f.close()
    except Exception as e:
        raise Exception('failed to write metadata.yaml to {}: {}'.format(docker_folder, e))


def write_gcp_metadata(version):
    gcp_folder = os.path.join(get_home(), '.cluster-api', 'overrides', 'infrastructure-gcp', version)
    try:
        f = open(os.path.join(gcp_folder, 'metadata.yaml'), 'w')
        f.write(gcp_metadata_yaml)
        f.close()
    except Exception as e:
        raise Exception('failed to write metadata.yaml to {}: {}'.format(gcp_folder, e))


def create_local_overrides():
    providerList = settings.get('providers', [])
    assert providerList is not None, 'invalid configuration: please define the list of providers to override'
    assert len(providerList) > 0, 'invalid configuration: please define at least one provider to override'

    for provider in providerList:
        p = providers.get(provider)
        assert p is not None, 'invalid configuration: please specify the configuration for the {} provider'.format(provider)

        repo = p.get('repo', '.')
        config_folder = p.get('configFolder', 'config')

        next_version = p.get('nextVersion')
        assert next_version is not None, 'invalid configuration for provider {}: please provide nextVersion value'.format(provider)

        name, type = splitNameAndType(provider)
        assert name is not None, 'invalid configuration for provider {}: please use a valid provider label'.format(provider)

        components_file = p.get('componentsFile')
        assert components_file is not None, 'invalid configuration for provider {}: please provide componentsFile value'.format(provider)

        components_yaml = execCmd(['kustomize', 'build', os.path.join(repo, config_folder)])
        write_local_override(provider, next_version, components_file, components_yaml)

        if provider == 'infrastructure-docker':
            write_docker_metadata(next_version)

        if provider == 'infrastructure-gcp':
            write_gcp_metadata(next_version)

        yield name, type, next_version


def splitNameAndType(provider):
    if provider == 'cluster-api':
        return 'cluster-api', 'CoreProvider'
    if provider.startswith('bootstrap-'):
        return provider[len('bootstrap-'):], 'BootstrapProvider'
    if provider.startswith('control-plane-'):
        return provider[len('control-plane-'):], 'ControlPlaneProvider'
    if provider.startswith('infrastructure-'):
        return provider[len('infrastructure-'):], 'InfrastructureProvider'
    return None, None


def CoreProviderFlag():
    return '--core'


def BootstrapProviderFlag():
    return '--bootstrap'


def ControlPlaneProviderFlag():
    return '--control-plane'


def InfrastructureProviderFlag():
    return '--infrastructure'


def type_to_flag(type):
    switcher = {
        'CoreProvider': CoreProviderFlag,
        'BootstrapProvider': BootstrapProviderFlag,
        'ControlPlaneProvider': ControlPlaneProviderFlag,
        'InfrastructureProvider': InfrastructureProviderFlag,
    }
    func = switcher.get(type, lambda: 'Invalid type')
    return func()


def print_instructions(overrides):
    providerList = settings.get('providers', [])
    print('clusterctl local overrides generated from local repositories for the {} providers.'.format(', '.join(providerList)))
    print('in order to use them, please run:')
    print()
    cmd = 'clusterctl init'
    for name, type, next_version in overrides:
        cmd += ' {} {}:{}'.format(type_to_flag(type), name, next_version)
    print(cmd)
    print()
    if 'infrastructure-docker' in providerList:
        print('please check the documentation for additional steps required for using the docker provider')
        print()


load_settings()
load_providers()
overrides = create_local_overrides()
print_instructions(overrides)
```
## Run Overrides Code
$ ./cmd/clusterctl/hack/local-overrides.py
```
clusterctl local overrides generated from local repositories for the cluster-api, bootstrap-kubeadm, control-plane-kubeadm, infrastructure-gcp providers.
in order to use them, please run:
clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure gcp:v0.3.0
```
## Create clusterctl.yaml
$ cat ~/.cluster-api/clusterctl.yaml
```
providers:
  - name: gcp
    url: /home/rishabh/.cluster-api/overrides/infrastructure-gcp/v0.3.0/infrastructure-components.yaml
    type: InfrastructureProvider
```
## Download cluster-template.yaml
Download `cluster-template.yaml` into `~/.cluster-api/overrides/infrastructure-gcp/v0.3.0/` from https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/master/templates/cluster-template.yaml
$ tree ~/.cluster-api
```
~/.cluster-api
├── clusterctl.yaml
└── overrides
    ├── bootstrap-kubeadm
    │   └── v0.3.0
    │       └── bootstrap-components.yaml
    ├── cluster-api
    │   └── v0.3.0
    │       └── core-components.yaml
    ├── control-plane-kubeadm
    │   └── v0.3.0
    │       └── control-plane-components.yaml
    └── infrastructure-gcp
        └── v0.3.0
            ├── cluster-template.yaml
            ├── infrastructure-components.yaml
            └── metadata.yaml

9 directories, 7 files
```
## Create Kind Cluster
$ kind create cluster --config ~/kind-cluster-with-extramounts.yaml --name capi-gcp
```
Creating cluster "capi-gcp" ...
✓ Ensuring node image (kindest/node:v1.17.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-capi-gcp"
You can now use your cluster with:
kubectl cluster-info --context kind-capi-gcp
Thanks for using kind! 😊
```
$ kubectl get pods -A
```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-bckk4 1/1 Running 0 3m40s
kube-system coredns-6955765f44-twjl5 1/1 Running 0 3m40s
kube-system etcd-capi-gcp-control-plane 1/1 Running 0 3m51s
kube-system kindnet-x6h4w 1/1 Running 0 3m40s
kube-system kube-apiserver-capi-gcp-control-plane 1/1 Running 0 3m51s
kube-system kube-controller-manager-capi-gcp-control-plane 1/1 Running 0 3m51s
kube-system kube-proxy-sjq2w 1/1 Running 0 3m40s
kube-system kube-scheduler-capi-gcp-control-plane 1/1 Running 0 3m51s
local-path-storage local-path-provisioner-7745554f7f-ls2vk 1/1 Running 0 3m40s
```
## Initialize Management Cluster
$ clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure gcp:v0.3.0
```
Fetching providers
Using Override="core-components.yaml" Provider="cluster-api" Version="v0.3.0"
Using Override="bootstrap-components.yaml" Provider="bootstrap-kubeadm" Version="v0.3.0"
Using Override="control-plane-components.yaml" Provider="control-plane-kubeadm" Version="v0.3.0"
Using Override="infrastructure-components.yaml" Provider="infrastructure-gcp" Version="v0.3.0"
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.0" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.0" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.0" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-gcp" Version="v0.3.0" TargetNamespace="capg-system"
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl config cluster [name] --kubernetes-version [version] | kubectl apply -f -
```
$ kubectl get pods -A
```
NAMESPACE NAME READY STATUS RESTARTS AGE
capg-system capg-controller-manager-69c6c9f5d6-6vtjg 2/2 Running 0 46s
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-7bd7989c7c-97jmg 2/2 Running 0 51s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-df67d9b74-9tggj 2/2 Running 0 49s
capi-system capi-controller-manager-7f764bb474-qzvhc 2/2 Running 0 55s
capi-webhook-system capg-controller-manager-c768d69b4-wxjbh 2/2 Running 0 48s
capi-webhook-system capi-controller-manager-b554d9469-d669l 2/2 Running 0 56s
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6dfdc6fd96-85jxr 2/2 Running 1 53s
capi-webhook-system capi-kubeadm-control-plane-controller-manager-7bf5997449-snksm 2/2 Running 1 50s
cert-manager cert-manager-84bb546784-crcbd 1/1 Running 0 79s
cert-manager cert-manager-cainjector-65f677f4fc-67j2h 1/1 Running 0 79s
cert-manager cert-manager-webhook-bbb8f9db9-b45l2 1/1 Running 0 79s
kube-system coredns-6955765f44-bckk4 1/1 Running 0 5m28s
kube-system coredns-6955765f44-twjl5 1/1 Running 0 5m28s
kube-system etcd-capi-gcp-control-plane 1/1 Running 0 5m39s
kube-system kindnet-x6h4w 1/1 Running 0 5m28s
kube-system kube-apiserver-capi-gcp-control-plane 1/1 Running 0 5m39s
kube-system kube-controller-manager-capi-gcp-control-plane 1/1 Running 0 5m39s
kube-system kube-proxy-sjq2w 1/1 Running 0 5m28s
kube-system kube-scheduler-capi-gcp-control-plane 1/1 Running 0 5m39s
local-path-storage local-path-provisioner-7745554f7f-ls2vk 1/1 Running 0 5m28s
```
$ kubectl get providers -A
```
NAMESPACE                           NAME                    TYPE                     PROVIDER VERSION   WATCH NAMESPACE
capg-system                         infrastructure-gcp      InfrastructureProvider   v0.3.0
capi-kubeadm-bootstrap-system       bootstrap-kubeadm       BootstrapProvider        v0.3.0
capi-kubeadm-control-plane-system   control-plane-kubeadm   ControlPlaneProvider     v0.3.0
capi-system                         cluster-api             CoreProvider             v0.3.0
```
## Create Workload Cluster
$ clusterctl config cluster gtc --kubernetes-version v1.16.11 --control-plane-machine-count=1 --worker-machine-count=1 > gtc.yaml
$ kubectl apply -f gtc.yaml
```
cluster.cluster.x-k8s.io/gtc created
gcpcluster.infrastructure.cluster.x-k8s.io/gtc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/gtc-control-plane created
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-control-plane created
machinedeployment.cluster.x-k8s.io/gtc-md-0 created
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/gtc-md-0 created
```
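Once the objects are applied, provisioning can be followed from the management cluster, and the workload cluster's kubeconfig can be pulled from the secret Cluster API creates under the standard `<cluster-name>-kubeconfig` name (data key `value`). A sketch, assuming the cluster name `gtc` from above:

```shell
# Watch the cluster and machines come up
kubectl get cluster gtc
kubectl get machines -w

# Retrieve the workload cluster kubeconfig created by CAPI
kubectl get secret gtc-kubeconfig -o jsonpath='{.data.value}' | base64 -d > gtc.kubeconfig
kubectl get nodes --kubeconfig gtc.kubeconfig
```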
$ kubectl get pods -A
```
NAMESPACE NAME READY STATUS RESTARTS AGE
capg-system capg-controller-manager-69c6c9f5d6-6vtjg 2/2 Running 0 4m58s
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-7bd7989c7c-97jmg 2/2 Running 0 5m3s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-df67d9b74-9tggj 2/2 Running 0 5m1s
capi-system capi-controller-manager-7f764bb474-qzvhc 2/2 Running 0 5m7s
capi-webhook-system capg-controller-manager-c768d69b4-wxjbh 2/2 Running 0 5m
capi-webhook-system capi-controller-manager-b554d9469-d669l 2/2 Running 0 5m8s
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6dfdc6fd96-85jxr 2/2 Running 1 5m5s
capi-webhook-system capi-kubeadm-control-plane-controller-manager-7bf5997449-snksm 2/2 Running 1 5m2s
cert-manager cert-manager-84bb546784-crcbd 1/1 Running 0 5m31s
cert-manager cert-manager-cainjector-65f677f4fc-67j2h 1/1 Running 0 5m31s
cert-manager cert-manager-webhook-bbb8f9db9-b45l2 1/1 Running 0 5m31s
kube-system coredns-6955765f44-bckk4 1/1 Running 0 9m40s
kube-system coredns-6955765f44-twjl5 1/1 Running 0 9m40s
kube-system etcd-capi-gcp-control-plane 1/1 Running 0 9m51s
kube-system kindnet-x6h4w 1/1 Running 0 9m40s
kube-system kube-apiserver-capi-gcp-control-plane 1/1 Running 0 9m51s
kube-system kube-controller-manager-capi-gcp-control-plane 1/1 Running 0 9m51s
kube-system kube-proxy-sjq2w 1/1 Running 0 9m40s
kube-system kube-scheduler-capi-gcp-control-plane 1/1 Running 0 9m51s
local-path-storage local-path-provisioner-7745554f7f-ls2vk 1/1 Running 0 9m40s
```
$ kubectl logs capg-controller-manager-69c6c9f5d6-6vtjg -n capg-system --all-containers=true -f
```
I0729 06:31:39.992771 1 main.go:213] Generating self signed cert as no cert is provided
I0729 06:31:41.069092 1 main.go:243] Starting TCP socket on 0.0.0.0:8443
I0729 06:31:41.070538 1 main.go:250] Listening securely on 0.0.0.0:8443
I0729 06:31:40.459502 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"="127.0.0.1:8080"
I0729 06:31:40.466141 1 main.go:205] setup "msg"="starting manager"
I0729 06:31:40.469338 1 internal.go:356] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics"
I0729 06:31:40.469884 1 leaderelection.go:242] attempting to acquire leader lease capg-system/controller-leader-election-capg...
I0729 06:31:40.521295 1 leaderelection.go:252] successfully acquired lease capg-system/controller-leader-election-capg
I0729 06:31:40.522813 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpcluster" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"project":"","region":"","controlPlaneEndpoint":{"host":"","port":0},"network":{}},"status":{"network":{},"ready":false}}}
I0729 06:31:40.528024 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpmachine" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"instanceType":""},"status":{"ready":false}}}
I0729 06:31:40.824841 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpcluster" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"instanceType":""},"status":{"ready":false}}}
I0729 06:31:40.929994 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpcluster" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"controlPlaneEndpoint":{"host":"","port":0}},"status":{"infrastructureReady":false,"controlPlaneInitialized":false}}}
I0729 06:31:40.932614 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpmachine" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"clusterName":"","bootstrap":{},"infrastructureRef":{}},"status":{"bootstrapReady":false,"infrastructureReady":false}}}
I0729 06:31:41.031972 1 controller.go:171] controller-runtime/controller "msg"="Starting Controller" "controller"="gcpcluster"
I0729 06:31:41.032298 1 controller.go:190] controller-runtime/controller "msg"="Starting workers" "controller"="gcpcluster" "worker count"=10
I0729 06:31:41.035109 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpmachine" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"project":"","region":"","controlPlaneEndpoint":{"host":"","port":0},"network":{}},"status":{"network":{},"ready":false}}}
I0729 06:31:41.035206 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpmachine" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"controlPlaneEndpoint":{"host":"","port":0}},"status":{"infrastructureReady":false,"controlPlaneInitialized":false}}}
I0729 06:31:41.035250 1 controller.go:171] controller-runtime/controller "msg"="Starting Controller" "controller"="gcpmachine"
I0729 06:31:41.035281 1 controller.go:190] controller-runtime/controller "msg"="Starting workers" "controller"="gcpmachine" "worker count"=10
I0729 06:35:33.532105 1 gcpcluster_controller.go:114] controllers/GCPCluster "msg"="Cluster Controller has not yet set OwnerRef" "gcpCluster"="gtc" "namespace"="default"
I0729 06:35:34.007563 1 gcpcluster_controller.go:148] controllers/GCPCluster "msg"="Reconciling GCPCluster" "cluster"="gtc" "gcpCluster"="gtc" "namespace"="default"
I0729 06:35:34.217069 1 gcpmachine_controller.go:121] controllers/GCPMachine "msg"="Machine Controller has not yet set OwnerRef" "gcpMachine"="gtc-md-0-z8khl" "namespace"="default"
I0729 06:35:34.224939 1 gcpmachine_controller.go:121] controllers/GCPMachine "msg"="Machine Controller has not yet set OwnerRef" "gcpMachine"="gtc-md-0-z8khl" "namespace"="default"
I0729 06:31:39.992771 1 main.go:213] Generating self signed cert as no cert is provided
I0729 06:31:41.069092 1 main.go:243] Starting TCP socket on 0.0.0.0:8443
I0729 06:31:41.070538 1 main.go:250] Listening securely on 0.0.0.0:8443
I0729 06:31:40.459502 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"="127.0.0.1:8080"
I0729 06:31:40.466141 1 main.go:205] setup "msg"="starting manager"
I0729 06:31:40.469338 1 internal.go:356] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics"
I0729 06:31:40.469884 1 leaderelection.go:242] attempting to acquire leader lease capg-system/controller-leader-election-capg...
I0729 06:31:40.521295 1 leaderelection.go:252] successfully acquired lease capg-system/controller-leader-election-capg
I0729 06:31:40.522813 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpcluster" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"project":"","region":"","controlPlaneEndpoint":{"host":"","port":0},"network":{}},"status":{"network":{},"ready":false}}}
I0729 06:31:40.528024 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpmachine" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"instanceType":""},"status":{"ready":false}}}
I0729 06:31:40.824841 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpcluster" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"instanceType":""},"status":{"ready":false}}}
I0729 06:31:40.929994 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpcluster" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"controlPlaneEndpoint":{"host":"","port":0}},"status":{"infrastructureReady":false,"controlPlaneInitialized":false}}}
I0729 06:31:40.932614 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpmachine" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"clusterName":"","bootstrap":{},"infrastructureRef":{}},"status":{"bootstrapReady":false,"infrastructureReady":false}}}
I0729 06:31:41.031972 1 controller.go:171] controller-runtime/controller "msg"="Starting Controller" "controller"="gcpcluster"
I0729 06:31:41.032298 1 controller.go:190] controller-runtime/controller "msg"="Starting workers" "controller"="gcpcluster" "worker count"=10
I0729 06:31:41.035109 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpmachine" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"project":"","region":"","controlPlaneEndpoint":{"host":"","port":0},"network":{}},"status":{"network":{},"ready":false}}}
I0729 06:31:41.035206 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="gcpmachine" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"controlPlaneEndpoint":{"host":"","port":0}},"status":{"infrastructureReady":false,"controlPlaneInitialized":false}}}
I0729 06:31:41.035250 1 controller.go:171] controller-runtime/controller "msg"="Starting Controller" "controller"="gcpmachine"
I0729 06:31:41.035281 1 controller.go:190] controller-runtime/controller "msg"="Starting workers" "controller"="gcpmachine" "worker count"=10
I0729 06:35:33.532105 1 gcpcluster_controller.go:114] controllers/GCPCluster "msg"="Cluster Controller has not yet set OwnerRef" "gcpCluster"="gtc" "namespace"="default"
I0729 06:35:34.007563 1 gcpcluster_controller.go:148] controllers/GCPCluster "msg"="Reconciling GCPCluster" "cluster"="gtc" "gcpCluster"="gtc" "namespace"="default"
I0729 06:35:34.217069 1 gcpmachine_controller.go:121] controllers/GCPMachine "msg"="Machine Controller has not yet set OwnerRef" "gcpMachine"="gtc-md-0-z8khl" "namespace"="default"
I0729 06:35:34.224939 1 gcpmachine_controller.go:121] controllers/GCPMachine "msg"="Machine Controller has not yet set OwnerRef" "gcpMachine"="gtc-md-0-z8khl" "namespace"="default"
I0729 06:35:34.247249 1 gcpmachine_controller.go:121] controllers/GCPMachine "msg"="Machine Controller has not yet set OwnerRef" "gcpMachine"="gtc-md-0-z8khl" "namespace"="default"
I0729 06:35:34.259894 1 gcpmachine_controller.go:121] controllers/GCPMachine "msg"="Machine Controller has not yet set OwnerRef" "gcpMachine"="gtc-md-0-z8khl" "namespace"="default"
I0729 06:35:34.521085 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:34.538309 1 gcpmachine_controller.go:209] controllers/GCPMachine "msg"="Cluster infrastructure is not ready yet" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:34.546418 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:34.546773 1 gcpmachine_controller.go:209] controllers/GCPMachine "msg"="Cluster infrastructure is not ready yet" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:34.562628 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:34.562833 1 gcpmachine_controller.go:209] controllers/GCPMachine "msg"="Cluster infrastructure is not ready yet" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:38.263014 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:38.263172 1 gcpmachine_controller.go:209] controllers/GCPMachine "msg"="Cluster infrastructure is not ready yet" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:38.273390 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:38.273573 1 gcpmachine_controller.go:209] controllers/GCPMachine "msg"="Cluster infrastructure is not ready yet" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:38.277198 1 gcpcluster_controller.go:148] controllers/GCPCluster "msg"="Reconciling GCPCluster" "cluster"="gtc" "gcpCluster"="gtc" "namespace"="default"
I0729 06:35:38.344917 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:38.345481 1 gcpmachine_controller.go:215] controllers/GCPMachine "msg"="Bootstrap data secret reference is not yet available" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:35:39.655991 1 gcpmachine_controller.go:121] controllers/GCPMachine "msg"="Machine Controller has not yet set OwnerRef" "gcpMachine"="gtc-control-plane-957q4" "namespace"="default"
I0729 06:35:39.673297 1 gcpmachine_controller.go:121] controllers/GCPMachine "msg"="Machine Controller has not yet set OwnerRef" "gcpMachine"="gtc-control-plane-957q4" "namespace"="default"
I0729 06:35:39.685184 1 gcpmachine_controller.go:121] controllers/GCPMachine "msg"="Machine Controller has not yet set OwnerRef" "gcpMachine"="gtc-control-plane-957q4" "namespace"="default"
I0729 06:35:39.696313 1 gcpmachine_controller.go:121] controllers/GCPMachine "msg"="Machine Controller has not yet set OwnerRef" "gcpMachine"="gtc-control-plane-957q4" "namespace"="default"
I0729 06:35:39.729071 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default"
I0729 06:35:39.745598 1 gcpmachine_controller.go:215] controllers/GCPMachine "msg"="Bootstrap data secret reference is not yet available" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default"
I0729 06:35:39.756097 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default"
I0729 06:35:39.756250 1 gcpmachine_controller.go:215] controllers/GCPMachine "msg"="Bootstrap data secret reference is not yet available" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default"
I0729 06:35:40.028884 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default"
I0729 06:35:40.438055 1 instances.go:152] controllers/GCPMachine "msg"="Running instance" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "machine-role"="control-plane" "namespace"="default"
I0729 06:36:13.670746 1 gcpmachine_controller.go:244] controllers/GCPMachine "msg"="Machine instance is running" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default" "instance-id"="gtc-control-plane-957q4"
I0729 06:36:29.982131 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default"
I0729 06:36:30.297040 1 gcpmachine_controller.go:244] controllers/GCPMachine "msg"="Machine instance is running" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default" "instance-id"="gtc-control-plane-957q4"
I0729 06:36:32.636258 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default"
I0729 06:36:32.949271 1 gcpmachine_controller.go:244] controllers/GCPMachine "msg"="Machine instance is running" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default" "instance-id"="gtc-control-plane-957q4"
I0729 06:41:59.012553 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default"
I0729 06:41:59.479943 1 gcpmachine_controller.go:244] controllers/GCPMachine "msg"="Machine instance is running" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-control-plane-957q4" "machine"="gtc-control-plane-2fdxz" "namespace"="default" "instance-id"="gtc-control-plane-957q4"
I0729 06:42:08.565839 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:42:08.858160 1 instances.go:152] controllers/GCPMachine "msg"="Running instance" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "machine-role"="node" "namespace"="default"
I0729 06:42:15.476325 1 gcpmachine_controller.go:244] controllers/GCPMachine "msg"="Machine instance is running" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default" "instance-id"="gtc-md-0-z8khl"
I0729 06:42:15.491817 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:42:15.780270 1 gcpmachine_controller.go:244] controllers/GCPMachine "msg"="Machine instance is running" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default" "instance-id"="gtc-md-0-z8khl"
I0729 06:42:15.782446 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:42:16.084566 1 gcpmachine_controller.go:244] controllers/GCPMachine "msg"="Machine instance is running" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default" "instance-id"="gtc-md-0-z8khl"
I0729 06:45:15.046663 1 gcpmachine_controller.go:195] controllers/GCPMachine "msg"="Reconciling GCPMachine" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default"
I0729 06:45:15.581295 1 gcpmachine_controller.go:244] controllers/GCPMachine "msg"="Machine instance is running" "cluster"="gtc" "gcpCluster"="gtc" "gcpMachine"="gtc-md-0-z8khl" "machine"="gtc-md-0-7b8cbcb584-rzjml" "namespace"="default" "instance-id"="gtc-md-0-z8khl"
```
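The log above is repetitive, but the distinct messages trace the gates a GCPMachine passes before its GCE instance comes up. A small sketch (plain Python, not CAPG code; the sample lines are abbreviated copies of the log output) that distills the ordered, de-duplicated sequence of reconcile states:

```python
import re

# Abbreviated copies of the controller log lines above; only the "msg" field matters here.
log = [
    '"msg"="Machine Controller has not yet set OwnerRef"',
    '"msg"="Cluster infrastructure is not ready yet"',
    '"msg"="Cluster infrastructure is not ready yet"',
    '"msg"="Bootstrap data secret reference is not yet available"',
    '"msg"="Running instance"',
    '"msg"="Machine instance is running"',
]

seen = []
for line in log:
    msg = re.search(r'"msg"="([^"]+)"', line).group(1)
    if msg not in seen:  # keep the first occurrence only, preserving order
        seen.append(msg)

# The reconciler waits on CAPI core (OwnerRef), then cluster infrastructure,
# then bootstrap data, then creates the instance and settles on "running".
print(seen)
```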
$ kubectl get machines
```
NAME                        PROVIDERID                                                    PHASE
gtc-control-plane-2fdxz     gce://elated-bolt-284718/us-west1-a/gtc-control-plane-957q4   Running
gtc-md-0-7b8cbcb584-rzjml   gce://elated-bolt-284718/us-west1-a/gtc-md-0-z8khl            Running
```
$ kubectl --namespace=default get secret/gtc-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./gtc.kubeconfig
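The command above works because Cluster API stores the workload cluster's kubeconfig base64-encoded under `.data.value` of the `<cluster-name>-kubeconfig` Secret. A minimal Python sketch of the same extraction (the Secret dict here is a mock, not real cluster data):

```python
import base64

def kubeconfig_from_secret(secret: dict) -> str:
    """Decode the kubeconfig stored under .data.value of a CAPI kubeconfig Secret."""
    return base64.b64decode(secret["data"]["value"]).decode("utf-8")

# Mock Secret shaped like `kubectl get secret gtc-kubeconfig -o json` output.
secret = {
    "kind": "Secret",
    "metadata": {"name": "gtc-kubeconfig", "namespace": "default"},
    "data": {"value": base64.b64encode(b"apiVersion: v1\nkind: Config\n").decode()},
}

print(kubeconfig_from_secret(secret))  # prints the decoded kubeconfig YAML
```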
$ kubectl get pods -A --kubeconfig ./gtc.kubeconfig
```
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-brr7n                          0/1     Pending   0          12m
kube-system   coredns-5644d7b6d9-s9gnn                          0/1     Pending   0          12m
kube-system   etcd-gtc-control-plane-957q4                      1/1     Running   0          12m
kube-system   kube-apiserver-gtc-control-plane-957q4            1/1     Running   0          12m
kube-system   kube-controller-manager-gtc-control-plane-957q4   1/1     Running   0          12m
kube-system   kube-proxy-cb7kv                                  1/1     Running   0          12m
kube-system   kube-proxy-g8959                                  1/1     Running   0          9m9s
kube-system   kube-scheduler-gtc-control-plane-957q4            1/1     Running   0          12m
```
$ kubectl --kubeconfig=./gtc.kubeconfig apply -f https://docs.projectcalico.org/v3.15/manifests/calico.yaml
```
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
```
$ kubectl get pods -A --kubeconfig ./gtc.kubeconfig
```
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-59d85c5c84-4sqxd          1/1     Running   0          44s
kube-system   calico-node-2fchr                                 1/1     Running   1          44s
kube-system   calico-node-hkjp4                                 1/1     Running   1          44s
kube-system   coredns-5644d7b6d9-brr7n                          1/1     Running   0          16m
kube-system   coredns-5644d7b6d9-s9gnn                          1/1     Running   0          16m
kube-system   etcd-gtc-control-plane-957q4                      1/1     Running   0          15m
kube-system   kube-apiserver-gtc-control-plane-957q4            1/1     Running   0          16m
kube-system   kube-controller-manager-gtc-control-plane-957q4   1/1     Running   0          15m
kube-system   kube-proxy-cb7kv                                  1/1     Running   0          16m
kube-system   kube-proxy-g8959                                  1/1     Running   0          12m
kube-system   kube-scheduler-gtc-control-plane-957q4            1/1     Running   0          15m
```
$ kubectl get nodes
```
NAME                     STATUS   ROLES    AGE    VERSION
capi-gcp-control-plane   Ready    master   106m   v1.17.0
```
$ kubectl get pods -A
```
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capg-system                         capg-controller-manager-69c6c9f5d6-6vtjg                         2/2     Running   0          102m
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-7bd7989c7c-97jmg       2/2     Running   0          102m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-df67d9b74-9tggj    2/2     Running   0          102m
capi-system                         capi-controller-manager-7f764bb474-qzvhc                         2/2     Running   0          102m
capi-webhook-system                 capg-controller-manager-c768d69b4-wxjbh                          2/2     Running   0          102m
capi-webhook-system                 capi-controller-manager-b554d9469-d669l                          2/2     Running   0          102m
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-6dfdc6fd96-85jxr       2/2     Running   1          102m
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-7bf5997449-snksm   2/2     Running   1          102m
cert-manager                        cert-manager-84bb546784-crcbd                                    1/1     Running   0          103m
cert-manager                        cert-manager-cainjector-65f677f4fc-67j2h                         1/1     Running   0          103m
cert-manager                        cert-manager-webhook-bbb8f9db9-b45l2                             1/1     Running   0          103m
kube-system                         coredns-6955765f44-bckk4                                         1/1     Running   0          107m
kube-system                         coredns-6955765f44-twjl5                                         1/1     Running   0          107m
kube-system                         etcd-capi-gcp-control-plane                                      1/1     Running   0          107m
kube-system                         kindnet-x6h4w                                                    1/1     Running   0          107m
kube-system                         kube-apiserver-capi-gcp-control-plane                            1/1     Running   0          107m
kube-system                         kube-controller-manager-capi-gcp-control-plane                   1/1     Running   0          107m
kube-system                         kube-proxy-sjq2w                                                 1/1     Running   0          107m
kube-system                         kube-scheduler-capi-gcp-control-plane                            1/1     Running   0          107m
local-path-storage                  local-path-provisioner-7745554f7f-ls2vk                          1/1     Running   0          107m
```
$ kubectl get providers -A
```
NAMESPACE                           NAME                    TYPE                      PROVIDER   VERSION   WATCH NAMESPACE
capg-system                         infrastructure-gcp      InfrastructureProvider               v0.3.0
capi-kubeadm-bootstrap-system       bootstrap-kubeadm       BootstrapProvider                    v0.3.0
capi-kubeadm-control-plane-system   control-plane-kubeadm   ControlPlaneProvider                 v0.3.0
capi-system                         cluster-api             CoreProvider                         v0.3.0
```
$ kubectl get clusters
```
NAME   PHASE
gtc    Provisioned
```
$ kubectl get machines
```
NAME                        PROVIDERID                                                    PHASE
gtc-control-plane-2fdxz     gce://elated-bolt-284718/us-west1-a/gtc-control-plane-957q4   Running
gtc-md-0-7b8cbcb584-rzjml   gce://elated-bolt-284718/us-west1-a/gtc-md-0-z8khl            Running
```
$ kubectl get nodes --kubeconfig gtc.kubeconfig
```
NAME                      STATUS   ROLES    AGE   VERSION
gtc-control-plane-957q4   Ready    master   47m   v1.16.11
gtc-md-0-z8khl            Ready    <none>   43m   v1.16.11
```
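Before pivoting management into this cluster, it is worth checking that every node reports Ready. A small sketch that parses `kubectl get nodes` text output (the sample string mirrors the table above):

```python
# Sample output copied from the `kubectl get nodes` table above.
output = """\
NAME                      STATUS   ROLES    AGE   VERSION
gtc-control-plane-957q4   Ready    master   47m   v1.16.11
gtc-md-0-z8khl            Ready    <none>   43m   v1.16.11
"""

# Skip the header row; column 1 is STATUS.
rows = [line.split() for line in output.splitlines()[1:]]
all_ready = all(row[1] == "Ready" for row in rows)
print(all_ready, len(rows))  # True 2
```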
## Make Workload Cluster Self-Managed
Initialize the workload cluster with provider components, making it a CAPI target management cluster. Then move all the Cluster API objects from the `kind` management cluster to the `target management cluster` on Google Cloud using `clusterctl move`, making the workload cluster on Google Cloud self-managed.
$ clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure gcp:v0.3.0 --kubeconfig projects/cluster-api/gtc.kubeconfig
```
Fetching providers
Using Override="core-components.yaml" Provider="cluster-api" Version="v0.3.0"
Using Override="bootstrap-components.yaml" Provider="bootstrap-kubeadm" Version="v0.3.0"
Using Override="control-plane-components.yaml" Provider="control-plane-kubeadm" Version="v0.3.0"
Using Override="infrastructure-components.yaml" Provider="infrastructure-gcp" Version="v0.3.0"
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.0" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.0" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.0" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-gcp" Version="v0.3.0" TargetNamespace="capg-system"
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl config cluster [name] --kubernetes-version [version] | kubectl apply -f -
```
$ kubectl get pods -A --kubeconfig projects/cluster-api/gtc.kubeconfig
```
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capg-system                         capg-controller-manager-69c6c9f5d6-2vgxn                         2/2     Running   0          44s
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-7bd7989c7c-x5dxx       2/2     Running   0          50s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-df67d9b74-dqs78    2/2     Running   0          47s
capi-system                         capi-controller-manager-7f764bb474-fcf8g                         2/2     Running   0          54s
capi-webhook-system                 capg-controller-manager-c768d69b4-kzkz4                          2/2     Running   0          46s
capi-webhook-system                 capi-controller-manager-b554d9469-45gvj                          2/2     Running   1          56s
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-6dfdc6fd96-65f8x       2/2     Running   0          53s
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-7bf5997449-8js9z   2/2     Running   0          50s
cert-manager                        cert-manager-84bb546784-h8cms                                    1/1     Running   0          72s
cert-manager                        cert-manager-cainjector-65f677f4fc-48868                         1/1     Running   0          72s
cert-manager                        cert-manager-webhook-bbb8f9db9-bv67d                             1/1     Running   1          72s
kube-system                         calico-kube-controllers-59d85c5c84-4sqxd                         1/1     Running   0          34h
kube-system                         calico-node-2fchr                                                1/1     Running   1          34h
kube-system                         calico-node-hkjp4                                                1/1     Running   1          34h
kube-system                         coredns-5644d7b6d9-brr7n                                         1/1     Running   0          34h
kube-system                         coredns-5644d7b6d9-s9gnn                                         1/1     Running   0          34h
kube-system                         etcd-gtc-control-plane-957q4                                     1/1     Running   0          34h
kube-system                         kube-apiserver-gtc-control-plane-957q4                           1/1     Running   0          34h
kube-system                         kube-controller-manager-gtc-control-plane-957q4                  1/1     Running   0          34h
kube-system                         kube-proxy-cb7kv                                                 1/1     Running   0          34h
kube-system                         kube-proxy-g8959                                                 1/1     Running   0          34h
kube-system                         kube-scheduler-gtc-control-plane-957q4                           1/1     Running   0          34h
```
$ kubectl get machines
```
NAME                        PROVIDERID                                                    PHASE
gtc-control-plane-2fdxz     gce://elated-bolt-284718/us-west1-a/gtc-control-plane-957q4   Running
gtc-md-0-7b8cbcb584-rzjml   gce://elated-bolt-284718/us-west1-a/gtc-md-0-z8khl            Running
```
$ kubectl get clusters
```
NAME   PHASE
gtc    Provisioned
```
$ clusterctl move --to-kubeconfig ~/projects/cluster-api/gtc.kubeconfig
```
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
```
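Conceptually, `clusterctl move` is a copy-then-delete of the Cluster API objects: they are created in the target cluster first, then removed from the source, so exactly one management cluster owns them afterwards. A toy sketch of that semantics (plain dicts, not clusterctl's implementation; names mirror the resources in this walkthrough):

```python
# Object inventories before the move.
source = {
    "clusters": ["gtc"],
    "machines": ["gtc-control-plane-2fdxz", "gtc-md-0-7b8cbcb584-rzjml"],
}
target = {"clusters": [], "machines": []}

def move(src: dict, dst: dict) -> None:
    """Copy every object into the destination, then delete it from the source."""
    for kind, objs in src.items():
        dst[kind].extend(objs)  # "Creating objects in the target cluster"
        objs.clear()            # "Deleting objects from the source cluster"

move(source, target)
print(target["machines"])  # the target cluster now owns both machines
print(source["machines"])  # [] -- hence "No resources found" on the kind cluster
```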
$ kubectl get machines --kubeconfig ~/projects/cluster-api/gtc.kubeconfig
```
NAME                        PROVIDERID                                                    PHASE
gtc-control-plane-2fdxz     gce://elated-bolt-284718/us-west1-a/gtc-control-plane-957q4   Provisioned
gtc-md-0-7b8cbcb584-rzjml   gce://elated-bolt-284718/us-west1-a/gtc-md-0-z8khl            Provisioned
```
$ kubectl get machines
```
No resources found in default namespace.
```
<style>.markdown-body { max-width: 1500px; }</style>