# CAPO - airshipctl phase runs
For CAPO, the Calico CNI is not deployed via a `postKubeadmCommands` step, per the review comment received on [PS 736838](https://review.opendev.org/#/c/736838/).
To conform with the updated airshipctl workflow, a new phase, `initinfra-target`,
has been added to the `openstack-test-site` to deploy the Calico CNI.
Prior to this change, Calico was deployed as part of the workers deployment ([PS 758262](https://review.opendev.org/#/c/758262/)).
$ *airshipctl phase plan*
```
GROUP    PHASE
group1
         clusterctl-init-ephemeral
         controlplane-ephemeral
         initinfra-target
         clusterctl-init-target
         clusterctl-move
         workers-target
```
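Each phase in the plan can also be run individually with `airshipctl phase run <phase-name>`, which is exactly how the deployment proceeds step by step below.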
$ *airshipctl/manifests/site/openstack-test-site/target* $ tree
```
.
├── initinfra
│   └── kustomization.yaml
└── workers
    ├── cluster_clouds_yaml_patch.yaml
    ├── kustomization.yaml
    ├── workers_cloud_conf_patch.yaml
    ├── workers_machine_count_patch.yaml
    ├── workers_machine_flavor_patch.yaml
    └── workers_ssh_key_patch.yaml

2 directories, 7 files
```
$ *airshipctl/manifests/site/openstack-test-site/target* $ cat initinfra/kustomization.yaml
```
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../../../composite/infra
commonLabels:
  airshipit.org/stage: initinfra
```
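With the kustomization in place, the document bundle the phase will apply can be previewed before touching any cluster (assuming the airshipctl config already points at this site):
```
# Preview the documents the initinfra-target phase would apply
airshipctl phase render initinfra-target
```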
What follows is the sequence of phase commands used to deploy a target cluster on OpenStack with control-plane and worker machines.
## Steps to deploy control-plane and worker machines using phase commands
$ *kind create cluster --name ephemeral-cluster --wait 200s*
```
Creating cluster "ephemeral-cluster" ...
WARNING: Overriding docker network due to KIND_EXPERIMENTAL_DOCKER_NETWORK
WARNING: Here be dragons! This is not supported currently.
✓ Ensuring node image (kindest/node:v1.19.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Waiting ≤ 3m20s for control-plane = Ready ⏳
• Ready after 32s 💚
Set kubectl context to "kind-ephemeral-cluster"
You can now use your cluster with:
kubectl cluster-info --context kind-ephemeral-cluster
```
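Optionally, confirm the ephemeral cluster is reachable before proceeding:
```
# Both commands should list the new cluster / its API endpoint
kind get clusters
kubectl cluster-info --context kind-ephemeral-cluster
```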
$ *kubectl get po -A*
```
NAMESPACE            NAME                                                      READY   STATUS    RESTARTS   AGE
kube-system          coredns-f9fd979d6-bkdlg                                   1/1     Running   0          9m13s
kube-system          coredns-f9fd979d6-l8wcw                                   1/1     Running   0          9m13s
kube-system          etcd-ephemeral-cluster-control-plane                      1/1     Running   0          9m23s
kube-system          kindnet-n4rcv                                             1/1     Running   0          9m13s
kube-system          kube-apiserver-ephemeral-cluster-control-plane            1/1     Running   0          9m23s
kube-system          kube-controller-manager-ephemeral-cluster-control-plane   1/1     Running   0          9m23s
kube-system          kube-proxy-qljcs                                          1/1     Running   0          9m13s
kube-system          kube-scheduler-ephemeral-cluster-control-plane            1/1     Running   0          9m22s
local-path-storage   local-path-provisioner-78776bfc44-l25x5                   1/1     Running   0          9m13s
```
$ *kubectl config set-context ephemeral-cluster --cluster kind-ephemeral-cluster --user kind-ephemeral-cluster*
```
Context "ephemeral-cluster" modified.
```
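airshipctl looks up clusters in its cluster map by kubeconfig context name (note the `context 'ephemeral-cluster'` line in the controlplane phase output below), so the context created here must carry the name the site manifests expect. A quick check:
```
kubectl config get-contexts
```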
$ *airshipctl phase run clusterctl-init-ephemeral --debug --kubeconfig ~/.airship/kubeconfig*
```
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CABPK_AUTH_PROXY is allowed to be appended
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CACPK_AUTH_PROXY is allowed to be appended
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPD_AUTH_PROXY is allowed to be appended
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPM3_MANAGER is allowed to be appended
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CABPK_MANAGER is allowed to be appended
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CACPK_MANAGER is allowed to be appended
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPD_MANAGER is allowed to be appended
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPI_AUTH_PROXY is allowed to be appended
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPI_MANAGER is allowed to be appended
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPM3_AUTH_PROXY is allowed to be appended
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/clusterctl/client/client.go:81: Starting cluster-api initiation
[airshipctl] 2020/11/11 20:50:10 opendev.org/airship/airshipctl@/pkg/events/processor.go:61: Received event: {4 2020-11-11 20:50:10.234892604 +0000 UTC m=+0.443628579 {InitType {[]} {<nil>} {ApplyEventResourceUpdate ServersideApplied <nil>} {ResourceUpdateEvent <nil> <nil>} {PruneEventResourceUpdate Pruned <nil>} {DeleteEventResourceUpdate Deleted <nil>}} {<nil>} {ResourceUpdateEvent <nil> <nil>} {0 starting clusterctl init executor} {0 } {0 }}
Installing the clusterctl inventory CRD
Creating CustomResourceDefinition="providers.clusterctl.cluster.x-k8s.io"
Fetching providers
................................................
```
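Once the init phase completes, the registered providers can be listed through the inventory CRD it just created:
```
# Providers installed by clusterctl init (CAPI, CABPK, CACPK, CAPO, ...)
kubectl get providers.clusterctl.cluster.x-k8s.io -A --kubeconfig ~/.airship/kubeconfig
```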
$ *kubectl get po -A*
```
NAMESPACE                           NAME                                                              READY   STATUS    RESTARTS   AGE
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-5646d9589c-smw6b       2/2     Running   0          53s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-759bf846fc-4jx9f   2/2     Running   0          52s
capi-system                         capi-controller-manager-5d6b4d6769-v8s2w                          2/2     Running   0          54s
capi-webhook-system                 capi-controller-manager-548d4869b4-pr28s                          2/2     Running   0          54s
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-6949f44db8-fln92        2/2     Running   0          54s
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-7b6c4bf48d-wsmdm    2/2     Running   0          53s
capi-webhook-system                 capo-controller-manager-84b749bdb4-hjs7d                          2/2     Running   0          52s
capo-system                         capo-controller-manager-d69f8cbcf-g2zp9                           2/2     Running   0          51s
cert-manager                        cert-manager-cainjector-fc6c787db-4bdzf                           1/1     Running   0          68s
cert-manager                        cert-manager-d994d94d7-r2czt                                      1/1     Running   0          68s
cert-manager                        cert-manager-webhook-845d9df8bf-wqggz                             1/1     Running   0          68s
kube-system                         coredns-f9fd979d6-cm8xn                                           1/1     Running   0          3m5s
kube-system                         coredns-f9fd979d6-kvlsx                                           1/1     Running   0          3m5s
kube-system                         etcd-ephemeral-cluster-control-plane                              1/1     Running   0          3m16s
kube-system                         kindnet-r6wvv                                                     1/1     Running   0          3m5s
kube-system                         kube-apiserver-ephemeral-cluster-control-plane                    1/1     Running   0          3m16s
kube-system                         kube-controller-manager-ephemeral-cluster-control-plane           1/1     Running   0          3m16s
kube-system                         kube-proxy-sgzc4                                                  1/1     Running   0          3m5s
kube-system                         kube-scheduler-ephemeral-cluster-control-plane                    1/1     Running   0          3m15s
local-path-storage                  local-path-provisioner-78776bfc44-g8d67                           1/1     Running   0          3m5s
```
$ *airshipctl phase run controlplane-ephemeral --debug --kubeconfig ~/.airship/kubeconfig*
```
[airshipctl] 2020/11/11 21:40:46 opendev.org/airship/airshipctl@/pkg/k8s/applier/executor.go:129: Getting kubeconfig context name from cluster map
[airshipctl] 2020/11/11 21:40:46 opendev.org/airship/airshipctl@/pkg/k8s/applier/executor.go:134: Getting kubeconfig file information from kubeconfig provider
[airshipctl] 2020/11/11 21:40:46 opendev.org/airship/airshipctl@/pkg/k8s/applier/executor.go:139: Filtering out documents that shouldn't be applied to kubernetes from document bundle
[airshipctl] 2020/11/11 21:40:46 opendev.org/airship/airshipctl@/pkg/k8s/applier/executor.go:147: Using kubeconfig at '/home/stack/.airship/kubeconfig' and context 'ephemeral-cluster'
[airshipctl] 2020/11/11 21:40:46 opendev.org/airship/airshipctl@/pkg/k8s/applier/executor.go:118: WaitTimeout: 33m20s
[airshipctl] 2020/11/11 21:40:46 opendev.org/airship/airshipctl@/pkg/k8s/applier/applier.go:76: Getting infos for bundle, inventory id is controlplane-ephemeral
[airshipctl] 2020/11/11 21:40:46 opendev.org/airship/airshipctl@/pkg/k8s/applier/applier.go:106: Inventory Object config Map not found, auto generating Inventory object
[airshipctl] 2020/11/11 21:40:46 opendev.org/airship/airshipctl@/pkg/k8s/applier/applier.go:113: Injecting Inventory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"controlplane-ephemeral"},"name":"airshipit-controlplane-ephemeral","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/11/11 21:40:46 opendev.org/airship/airshipctl@/pkg/k8s/applier/applier.go:119: Making sure that inventory object namespace airshipit exists
secret/target-cluster-cloud-config created
cluster.cluster.x-k8s.io/target-cluster created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/target-cluster-control-plane created
openstackcluster.infrastructure.cluster.x-k8s.io/target-cluster created
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/target-cluster-control-plane created
5 resource(s) applied. 5 created, 0 unchanged, 0 configured
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/target-cluster-control-plane is NotFound: Resource not found
secret/target-cluster-cloud-config is NotFound: Resource not found
cluster.cluster.x-k8s.io/target-cluster is NotFound: Resource not found
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/target-cluster-control-plane is NotFound: Resource not found
openstackcluster.infrastructure.cluster.x-k8s.io/target-cluster is NotFound: Resource not found
secret/target-cluster-cloud-config is Current: Resource is always ready
cluster.cluster.x-k8s.io/target-cluster is InProgress:
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/target-cluster-control-plane is Current: Resource is current
openstackcluster.infrastructure.cluster.x-k8s.io/target-cluster is Current: Resource is current
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/target-cluster-control-plane is Current: Resource is current
cluster.cluster.x-k8s.io/target-cluster is InProgress: Cluster generation is 2, but latest observed generation is 1
openstackcluster.infrastructure.cluster.x-k8s.io/target-cluster is Current: Resource is current
cluster.cluster.x-k8s.io/target-cluster is InProgress: Scaling up to 1 replicas (actual 0)
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/target-cluster-control-plane is InProgress: Scaling up to 1 replicas (actual 0)
cluster.cluster.x-k8s.io/target-cluster is InProgress:
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/target-cluster-control-plane is InProgress:
cluster.cluster.x-k8s.io/target-cluster is Current: Resource is Ready
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/target-cluster-control-plane is Current: Resource is Ready
all resources has reached the Current status
```
$ *kubectl get machines*
```
NAME                                 PROVIDERID                                         PHASE
target-cluster-control-plane-bh74v   openstack://0e3d92a4-8543-4da4-a610-40a81d97ee51   Running
```
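The `PROVIDERID` above is the Nova instance UUID, so the machine can be cross-checked on the OpenStack side. A hypothetical check, assuming the openstack CLI is configured against the same cloud:
```
# Should report the control-plane server as ACTIVE
openstack server show 0e3d92a4-8543-4da4-a610-40a81d97ee51 -c name -c status
```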
$ *kubectl --namespace=default get secret/target-cluster-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./target-cluster.kubeconfig*
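This extracts the kubeconfig that Cluster API stores in the `target-cluster-kubeconfig` secret; the commands below reference `~/target-cluster.kubeconfig`, so the file is assumed to land in the home directory. With a recent clusterctl installed locally, `clusterctl get kubeconfig target-cluster > target-cluster.kubeconfig` would be an equivalent shortcut.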
$ *kubectl get nodes --kubeconfig ~/target-cluster.kubeconfig*
```
NAME                                 STATUS     ROLES    AGE     VERSION
target-cluster-control-plane-tgxc5   NotReady   master   3m42s   v1.17.3
```
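The node reports `NotReady` because no CNI is present yet; the `initinfra-target` phase below deploys Calico, after which the node transitions to `Ready`.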
$ *kubectl config set-context target-cluster --user target-cluster-admin --cluster target-cluster --kubeconfig target-cluster.kubeconfig*
```
Context "target-cluster" created.
```
$ *airshipctl phase run initinfra-target --kubeconfig target-cluster.kubeconfig*
```
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
configmap/calico-config created
deployment.apps/calico-kube-controllers created
...........................................................
```
$ *kubectl get nodes --kubeconfig ~/target-cluster.kubeconfig*
```
NAME                                 STATUS   ROLES    AGE     VERSION
target-cluster-control-plane-tgxc5   Ready    master   9m16s   v1.17.3
```
$ *kubectl get po -A --kubeconfig ~/target-cluster.kubeconfig*
```
NAMESPACE     NAME                                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5bbd8f7588-fgprb                     1/1     Running   0          2m14s
kube-system   calico-node-bn4qn                                            1/1     Running   0          2m14s
kube-system   coredns-6955765f44-8tqz9                                     1/1     Running   0          8m15s
kube-system   coredns-6955765f44-lfvd5                                     1/1     Running   0          8m15s
kube-system   etcd-target-cluster-control-plane-tgxc5                      1/1     Running   0          8m31s
kube-system   kube-apiserver-target-cluster-control-plane-tgxc5            1/1     Running   0          8m31s
kube-system   kube-controller-manager-target-cluster-control-plane-tgxc5   1/1     Running   0          8m29s
kube-system   kube-proxy-xd4wp                                             1/1     Running   0          8m16s
kube-system   kube-scheduler-target-cluster-control-plane-tgxc5            1/1     Running   0          8m31s
```
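With Calico up, the control-plane node still carries the `node-role.kubernetes.io/master:NoSchedule` taint. Since this single node is about to host the Cluster API and provider controllers, the taint is removed so those pods can be scheduled: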
$ *kubectl taint node target-cluster-control-plane-tgxc5 node-role.kubernetes.io/master- --kubeconfig target-cluster.kubeconfig --request-timeout 10s*
```
node/target-cluster-control-plane-tgxc5 untainted
```
$ *airshipctl phase run clusterctl-init-target --debug --kubeconfig target-cluster.kubeconfig*
```
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CABPK_AUTH_PROXY is allowed to be appended
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CACPK_AUTH_PROXY is allowed to be appended
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPI_AUTH_PROXY is allowed to be appended
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPI_MANAGER is allowed to be appended
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CABPK_MANAGER is allowed to be appended
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CACPK_MANAGER is allowed to be appended
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPD_AUTH_PROXY is allowed to be appended
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPD_MANAGER is allowed to be appended
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPM3_AUTH_PROXY is allowed to be appended
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/implementations/reader.go:104: Verifying that variable CONTAINER_CAPM3_MANAGER is allowed to be appended
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/clusterctl/client/client.go:81: Starting cluster-api initiation
[airshipctl] 2020/11/11 21:58:12 opendev.org/airship/airshipctl@/pkg/events/processor.go:61: Received event: {4 2020-11-11 21:58:12.865638356 +0000 UTC m=+0.474586732 {InitType {[]} {<nil>} {ApplyEventResourceUpdate ServersideApplied <nil>} {ResourceUpdateEvent <nil> <nil>} {PruneEventResourceUpdate Pruned <nil>} {DeleteEventResourceUpdate Deleted <nil>}} {<nil>} {ResourceUpdateEvent <nil> <nil>} {0 starting clusterctl init executor} {0 } {0 }}
Installing the clusterctl inventory CRD
Creating CustomResourceDefinition="providers.clusterctl.cluster.x-k8s.io"
Fetching providers
```
$ *kubectl get po -A --kubeconfig ~/target-cluster.kubeconfig -w*
```
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-579cc6bd44-7x24j      2/2     Running   0          112s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-69c9bf9bc6-vxr2s   2/2     Running   0          105s
capi-system                         capi-controller-manager-565cc9dd6d-wrttl                         2/2     Running   0          117s
capi-webhook-system                 capi-controller-manager-68b7cd6d79-jx2rv                         2/2     Running   0          119s
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-699b84775f-lrsvk       2/2     Running   0          115s
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-b8b48d45f-mqvrs    2/2     Running   0          110s
capi-webhook-system                 capo-controller-manager-77dbfcfd49-n745b                         2/2     Running   0          104s
capo-system                         capo-controller-manager-cd88457c4-pc5wm                          2/2     Running   0          102s
cert-manager                        cert-manager-7ddc5b4db-zrx7d                                     1/1     Running   0          2m18s
cert-manager                        cert-manager-cainjector-6644dc4975-996rz                         1/1     Running   0          2m18s
cert-manager                        cert-manager-webhook-7b887475fb-rn6jx                            1/1     Running   0          2m18s
kube-system                         calico-kube-controllers-5bbd8f7588-fgprb                         1/1     Running   0          7m56s
kube-system                         calico-node-bn4qn                                                1/1     Running   0          7m56s
kube-system                         coredns-6955765f44-8tqz9                                         1/1     Running   0          13m
kube-system                         coredns-6955765f44-lfvd5                                         1/1     Running   0          13m
kube-system                         etcd-target-cluster-control-plane-tgxc5                          1/1     Running   0          14m
kube-system                         kube-apiserver-target-cluster-control-plane-tgxc5                1/1     Running   0          14m
kube-system                         kube-controller-manager-target-cluster-control-plane-tgxc5      1/1     Running   0          14m
kube-system                         kube-proxy-xd4wp                                                 1/1     Running   0          13m
kube-system                         kube-scheduler-target-cluster-control-plane-tgxc5                1/1     Running   0          14m
```
$ *KUBECONFIG=~/.airship/kubeconfig:target-cluster.kubeconfig kubectl config view --merge --flatten > ~/ephemeral_and_target.kubeconfig*
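The merged file gives the `clusterctl-move` phase access to both clusters at once. A quick check that both contexts survived the merge:
```
kubectl config get-contexts --kubeconfig ~/ephemeral_and_target.kubeconfig
```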
$ *airshipctl phase run clusterctl-move --kubeconfig ~/ephemeral_and_target.kubeconfig*
```
[airshipctl] 2020/11/11 22:02:50 command 'clusterctl move' is going to be executed
[airshipctl] 2020/11/11 22:02:50 Received event: {4 2020-11-11 22:02:50.89628053 +0000 UTC m=+0.515995839 {InitType {[]} {<nil>} {ApplyEventResourceUpdate ServersideApplied <nil>} {ResourceUpdateEvent <nil> <nil>} {PruneEventResourceUpdate Pruned <nil>} {DeleteEventResourceUpdate Deleted <nil>}} {<nil>} {ResourceUpdateEvent <nil> <nil>} {2 starting clusterctl move executor} {0 } {0 }}
[airshipctl] 2020/11/11 22:02:54 Received event: {4 2020-11-11 22:02:54.512404304 +0000 UTC m=+4.132119713 {InitType {[]} {<nil>} {ApplyEventResourceUpdate ServersideApplied <nil>} {ResourceUpdateEvent <nil> <nil>} {PruneEventResourceUpdate Pruned <nil>} {DeleteEventResourceUpdate Deleted <nil>}} {<nil>} {ResourceUpdateEvent <nil> <nil>} {3 clusterctl move completed successfully} {0 } {0 }}
```
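The move pivots the Cluster API objects (Cluster, Machines, and related secrets) from the ephemeral kind cluster to the target cluster, making the target cluster self-managing; from here on every command runs against the target kubeconfig. The pivoted Cluster object should now be visible there:
```
kubectl get cluster --kubeconfig ~/target-cluster.kubeconfig
```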
$ *airshipctl phase run workers-target --debug --kubeconfig ~/target-cluster.kubeconfig*
```
[airshipctl] 2020/11/11 22:04:04 opendev.org/airship/airshipctl@/pkg/k8s/applier/executor.go:129: Getting kubeconfig context name from cluster map
[airshipctl] 2020/11/11 22:04:04 opendev.org/airship/airshipctl@/pkg/k8s/applier/executor.go:134: Getting kubeconfig file information from kubeconfig provider
[airshipctl] 2020/11/11 22:04:04 opendev.org/airship/airshipctl@/pkg/k8s/applier/executor.go:139: Filtering out documents that shouldn't be applied to kubernetes from document bundle
[airshipctl] 2020/11/11 22:04:04 opendev.org/airship/airshipctl@/pkg/k8s/applier/executor.go:147: Using kubeconfig at '/home/stack/target-cluster.kubeconfig' and context 'target-cluster'
[airshipctl] 2020/11/11 22:04:04 opendev.org/airship/airshipctl@/pkg/k8s/applier/executor.go:118: WaitTimeout: 33m20s
[airshipctl] 2020/11/11 22:04:04 opendev.org/airship/airshipctl@/pkg/k8s/applier/applier.go:76: Getting infos for bundle, inventory id is workers-target
[airshipctl] 2020/11/11 22:04:04 opendev.org/airship/airshipctl@/pkg/k8s/applier/applier.go:106: Inventory Object config Map not found, auto generating Inventory object
[airshipctl] 2020/11/11 22:04:04 opendev.org/airship/airshipctl@/pkg/k8s/applier/applier.go:113: Injecting Inventory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"workers-target"},"name":"airshipit-workers-target","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/11/11 22:04:04 opendev.org/airship/airshipctl@/pkg/k8s/applier/applier.go:119: Making sure that inventory object namespace airshipit exists
secret/target-cluster-cloud-config created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/target-cluster-md-0 created
machinedeployment.cluster.x-k8s.io/target-cluster-md-0 created
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/target-cluster-md-0 created
4 resource(s) applied. 4 created, 0 unchanged, 0 configured
secret/target-cluster-cloud-config is NotFound: Resource not found
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/target-cluster-md-0 is NotFound: Resource not found
machinedeployment.cluster.x-k8s.io/target-cluster-md-0 is NotFound: Resource not found
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/target-cluster-md-0 is NotFound: Resource not found
secret/target-cluster-cloud-config is Current: Resource is always ready
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/target-cluster-md-0 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/target-cluster-md-0 is Current: Resource is current
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/target-cluster-md-0 is Current: Resource is current
all resources has reached the Current status
```
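The number of worker machines created here is driven by `workers_machine_count_patch.yaml` from the site tree shown earlier. That patch isn't reproduced in this note, but a hypothetical strategic-merge sketch that would yield the three replicas seen below could look like:
```yaml
# Hypothetical sketch of workers_machine_count_patch.yaml
# (apiVersion assumed; the MachineDeployment name matches the one applied above)
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: target-cluster-md-0
spec:
  replicas: 3
```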
$ *kubectl logs capo-controller-manager-cd88457c4-pc5wm -n capo-system --all-containers=true -f --kubeconfig ~/target-cluster.kubeconfig*
```
...................................................
-69958c66ff-25blr" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-mh4rj"
I1111 22:21:37.693281 1 openstackmachine_controller.go:284] controllers/OpenStackMachine "msg"="Creating Machine" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-25blr" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-mh4rj"
I1111 22:21:50.198235 1 openstackmachine_controller.go:329] controllers/OpenStackMachine "msg"="Machine instance is ACTIVE" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-25blr" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-mh4rj" "instance-id"="43faa006-b2b7-427f-9d4d-bfc9ad5324c8"
I1111 22:21:50.198509 1 openstackmachine_controller.go:354] controllers/OpenStackMachine "msg"="Reconciled Machine create successfully" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-25blr" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-mh4rj"
I1111 22:21:50.331816 1 openstackmachine_controller.go:284] controllers/OpenStackMachine "msg"="Creating Machine" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-ss5nh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-frpn5"
I1111 22:22:02.647349 1 openstackmachine_controller.go:329] controllers/OpenStackMachine "msg"="Machine instance is ACTIVE" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-ss5nh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-frpn5" "instance-id"="2e2f5952-d984-4002-a683-09a8ea11b3a5"
I1111 22:22:02.653439 1 openstackmachine_controller.go:354] controllers/OpenStackMachine "msg"="Reconciled Machine create successfully" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-ss5nh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-frpn5"
I1111 22:22:02.691136 1 openstackmachine_controller.go:284] controllers/OpenStackMachine "msg"="Creating Machine" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-p64fh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-75pfw"
I1111 22:22:15.326261 1 openstackmachine_controller.go:329] controllers/OpenStackMachine "msg"="Machine instance is ACTIVE" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-p64fh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-75pfw" "instance-id"="e4de751f-4b9d-42b5-9187-d59befa2e216"
I1111 22:22:15.326686 1 openstackmachine_controller.go:354] controllers/OpenStackMachine "msg"="Reconciled Machine create successfully" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-p64fh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-75pfw"
I1111 22:22:15.486224 1 openstackmachine_controller.go:284] controllers/OpenStackMachine "msg"="Creating Machine" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-25blr" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-mh4rj"
I1111 22:22:16.095707 1 openstackmachine_controller.go:329] controllers/OpenStackMachine "msg"="Machine instance is ACTIVE" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-25blr" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-mh4rj" "instance-id"="43faa006-b2b7-427f-9d4d-bfc9ad5324c8"
I1111 22:22:16.105887 1 openstackmachine_controller.go:354] controllers/OpenStackMachine "msg"="Reconciled Machine create successfully" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-25blr" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-mh4rj"
I1111 22:22:16.118013 1 openstackmachine_controller.go:284] controllers/OpenStackMachine "msg"="Creating Machine" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-ss5nh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-frpn5"
I1111 22:22:16.749440 1 openstackmachine_controller.go:329] controllers/OpenStackMachine "msg"="Machine instance is ACTIVE" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-ss5nh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-frpn5" "instance-id"="2e2f5952-d984-4002-a683-09a8ea11b3a5"
I1111 22:22:16.749979 1 openstackmachine_controller.go:354] controllers/OpenStackMachine "msg"="Reconciled Machine create successfully" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-ss5nh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-frpn5"
I1111 22:22:16.751131 1 openstackmachine_controller.go:284] controllers/OpenStackMachine "msg"="Creating Machine" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-p64fh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-75pfw"
I1111 22:22:17.476455 1 openstackmachine_controller.go:329] controllers/OpenStackMachine "msg"="Machine instance is ACTIVE" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-p64fh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-75pfw" "instance-id"="e4de751f-4b9d-42b5-9187-d59befa2e216"
I1111 22:22:17.476799 1 openstackmachine_controller.go:354] controllers/OpenStackMachine "msg"="Reconciled Machine create successfully" "cluster"="target-cluster" "machine"="target-cluster-md-0-69958c66ff-p64fh" "namespace"="default" "openStackCluster"="target-cluster" "openStackMachine"="target-cluster-md-0-75pfw"
```
$ *kubectl get machines --kubeconfig target-cluster.kubeconfig*
```
NAME                                   PROVIDERID                                         PHASE
target-cluster-control-plane-bh74v     openstack://0e3d92a4-8543-4da4-a610-40a81d97ee51   Running
target-cluster-md-0-69958c66ff-25blr   openstack://43faa006-b2b7-427f-9d4d-bfc9ad5324c8   Running
target-cluster-md-0-69958c66ff-p64fh   openstack://e4de751f-4b9d-42b5-9187-d59befa2e216   Running
target-cluster-md-0-69958c66ff-ss5nh   openstack://2e2f5952-d984-4002-a683-09a8ea11b3a5   Running
```
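At this point the target cluster is fully deployed on OpenStack: one control-plane machine and three worker machines, all in the `Running` phase, with the cluster managing its own Cluster API resources.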
###### tags: `airshipctl` `capo` `phases`