# Spike: Metal3.io Upgrade for Brownfield Site #610
[TOC]
### **Description:**
To understand the implications of upgrading the Metal3 infrastructure provider on an existing site (brownfield scenario).
#### Compatibility with Cluster API and CAPM3 version
| CAPM3 version | Cluster API version | CAPM3 Release | Kubernetes version constraint |
| ------------- | ------------------- | ------------- | --- |
| v1alpha4 | v1alpha3 | v0.4.X | supports current v1.18.6 and higher |
| v1alpha5 | v1alpha4 | v0.5.X | >= v1.19.1 |
### Plan for an upgrade:
1) **Upgrade BMO and IRONIC Deployments:**
Upgrade the existing BMO and IRONIC deployments to the supported version.
We can use the existing initinfra-target phase to upgrade BMO and IRONIC, or
we can create new phases for Day-2 upgrade scenarios where individual components can be upgraded.
2) **Upgrade CAPI and CAPM3:**
Upgrade the CAPI component versions and the Metal3 infrastructure CAPI provider using the clusterctl KRM function.
Currently the clusterctl KRM function supports only two actions (init and move).
Add a new action "upgrade apply" to the clusterctl KRM function.
Create a new phase `clusterctl-upgrade-target` for upgrading CAPI components and infrastructure providers.
3) **Check the new upgrade:**
Check that the newer version is reflected in the resource YAML.
We can add phase helpers to check whether the APIVersion of CAPI objects has been updated.
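The APIVersion check above could be sketched as a small shell helper. The function name is illustrative (no such helper exists in airshipctl today), and the kubectl invocation is shown commented out since it needs a live management cluster:

```shell
#!/bin/sh
# Sketch of a post-upgrade phase helper: verify that a CRD's storedVersions
# include the expected post-upgrade API version. Parsing is split out from
# fetching so it can be exercised without a cluster.
stored_versions_contain() {
  # reads a JSON storedVersions list on stdin, e.g. ["v1alpha4","v1alpha5"]
  grep -q "\"$1\""
}

# Live usage against the target cluster (illustrative):
# kubectl get crd metal3machines.infrastructure.cluster.x-k8s.io \
#   -o jsonpath='{.status.storedVersions}' | stored_versions_contain v1alpha5 \
#   && echo "CAPM3 objects upgraded to v1alpha5"
```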
### Summary
* Patch version upgrade/downgrade of CAPM3 to v0.4.2 (CAPI to v0.3.22) worked seamlessly while keeping the BMO and IRONIC deployments at version v0.4.0.
* On a brownfield upgrade of BMO and IRONIC, we need to add an extra patch to the baremetalhosts.metal3.io CRD to support the structural schema, and we need to make sure the ironic pods are provisioned on the correct control-plane node.
* While performing the patch upgrade of CAPM3, when BMO was upgraded to v0.4.2, the control-plane BMH failed image validation and went into the deprovisioning state while registering the node.
* For a minor version upgrade, the Kubernetes version must be v1.19.1 or higher.
* During a minor upgrade, the CAPM3 infrastructure provider must be upgraded at the same time as the CAPI components.
* The minor version upgrade went smoothly and there was no downtime in the cluster during the upgrade.
* Minor version downgrade of CAPI/CAPM3 from v0.5.0 to v0.4.0 is not supported.
* There was no effect on workloads during or after the patch and minor upgrades.
### Scenario 1: Patch version upgrade of CAPM3 from v0.4.0 to v0.4.2
In this scenario we upgrade the Metal3 infrastructure provider from v0.4.0 to v0.4.2 and examine the implications for the cluster objects.
I referred to the patchset below from James Gu to upgrade the CAPM3, BMO, and IRONIC deployments: https://review.opendev.org/c/airship/airshipctl/+/793254
#### Dependencies and constraints for an upgrade.
* We can keep the existing Kubernetes version (v1.18.6).
* The CAPI version should be v1alpha3.
* The CAPM3 version should be v1alpha4.
* CAPI components can be upgraded to v0.3.22.
* baremetal-operator and ironic can stay on v0.4.0 or be upgraded to v0.4.2.
#### Upgrading BMO and Ironic Deployments to v0.4.2
```
airship@help:~$ kubectl get deployments.apps -n metal3 -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
baremetal-operator-controller-manager 1/1 1 1 23m kube-rbac-proxy,manager gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0,quay.io/metal3-io/baremetal-operator:capm3-v0.4.2 airshipit.org/stage=initinfra,control-plane=controller-manager
capm3-ironic 1/1 1 1 12m ironic-dnsmasq,mariadb,ironic-api,ironic-conductor,ironic-log-watch,ironic-inspector,ironic-inspector-log-watch quay.io/metal3-io/ironic:capm3-v0.4.2,quay.io/metal3-io/ironic:capm3-v0.4.2,quay.io/metal3-io/ironic:capm3-v0.4.2,quay.io/metal3-io/ironic:capm3-v0.4.2,quay.io/metal3-io/ironic,quay.io/metal3-io/ironic:capm3-v0.4.2,quay.io/metal3-io/ironic airshipit.org/stage=initinfra,name=capm3-ironic
airship@help:~$ kubectl get pod -n metal3
NAME READY STATUS RESTARTS AGE
baremetal-operator-controller-manager-5b7ffc947-68sf8 2/2 Running 0 6m19s
capm3-ironic-75f94586b5-5s7n4 7/7 Running 0 13m
airship@help:~$ kubectl get bmh -A
NAMESPACE NAME STATE CONSUMER ONLINE ERROR
target-infra node01 deprovisioning cluster-controlplane-v5cxr true provisioned registration error
target-infra node02 registration error false registration error
target-infra node03 provisioned worker-1-fwpq8 true
```
**Observations:**
* We need to align the BMO and IRONIC deployment names with the greenfield deployment names. Otherwise new deployments are created instead of the existing deployments being upgraded.
* During the upgrade, the baremetalhosts.metal3.io CRD moves from `v1beta1` to `v1`. We need to add an extra patch to explicitly set `spec.preserveUnknownFields` to `false`; otherwise the phase run errors out, since the value `true` carried over from the v1beta1 version is invalid under the structural schema.
https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning
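A minimal remediation sketch, assuming direct kubectl access to the target cluster (in airshipctl this would normally be delivered as an equivalent kustomize patch in the manifests instead):

```shell
# Force pruning on for the old CRD so the structural-schema v1 manifest can
# be applied; see the field-pruning docs linked above.
prune_patch='{"spec":{"preserveUnknownFields":false}}'

# Live usage (illustrative; needs access to the target cluster):
# kubectl patch crd baremetalhosts.metal3.io --type=merge -p "$prune_patch"
```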
<details>
<summary>Error logs when preserveUnknownFields=true</summary>
```
airship@help:~/upgrade/airshipctl$ airshipctl phase run initinfra-target
[airshipctl] 2021/08/31 12:16:47 Using kubeconfig at '/home/airship/.airship/kubeconfig-550392417' and context 'target-cluster'
namespace/flux-system unchanged
namespace/hardware-classification unchanged
namespace/metal3 configured
customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io configured
customresourcedefinition.apiextensions.k8s.io/firmwareschemas.metal3.io created
customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io configured
W0831 12:16:47.848444 998181 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0831 12:16:47.879578 998181 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/hardwareclassifications.metal3.io configured
customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io configured
customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io configured
customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io configured
customresourcedefinition.apiextensions.k8s.io/hostfirmwaresettings.metal3.io created
[airshipctl] 2021/08/31 12:16:48 Received error when applying resources to kubernetes: error when applying patch:
{"metadata":{"annotations":{"controller-gen.kubebuilder.io/version":"v0.6.0","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apiextensions.k8s.io/v1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{\"controller-gen.kubebuilder.io/version\":\"v0.6.0\"},\"labels\":{\"airshipit.org/stage\":\"initinfra\",\"cluster.x-k8s.io/provider\":\"metal3\",\"clusterctl.cluster.x-k8s.io\":\"\"},\"name\":\"baremetalhosts.metal3.io\"},\"spec\":{\"group\":\"metal3.io\",\"names\":{\"kind\":\"BareMetalHost\",\"listKind\":\"BareMetalHostList\",\"plural\":\"baremetalhosts\",\"shortNames\":[\"bmh\",\"bmhost\"],\"singular\":\"baremetalhost\"},\"scope\":\"Namespaced\",\"versions\":[{\"additionalPrinterColumns\":[{\"description\":\"Operational
.
.
.
.
Profile","operationalStatus","poweredOn","provisioning"],"type":"object"}},"type":"object"}},"served":true,"storage":true,"subresources":{"status":{}}}]},"status":{"acceptedNames":{"kind":"","plural":""},"conditions":[],"storedVersions":[]}}
to:
Resource: "apiextensions.k8s.io/v1, Resource=customresourcedefinitions", GroupVersionKind: "apiextensions.k8s.io/v1, Kind=CustomResourceDefinition"
Name: "baremetalhosts.metal3.io", Namespace: ""
for: "customresourcedefinition_baremetalhosts.metal3.io.yaml": CustomResourceDefinition.apiextensions.k8s.io "baremetalhosts.metal3.io" is invalid: spec.preserveUnknownFields: Invalid value: true: must be false in order to use defaults in the schema
```
</details>
* We need to ensure the ironic pod gets provisioned on the control-plane node. We need to update the current nodeSelector patch to choose the correct node.
```
airship@help:~/logs/v0.5.0$ kubectl get pods -n metal3 -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
baremetal-operator-controller-manager-68c7fd955b-g56tj 2/2 Running 0 4h5m 192.168.17.70 node03 <none> <none>
capm3-ironic-cf7f76ddd-9qjqb 7/7 Running 0 21m 10.23.25.103 node03 <none> <none>
```
Error logs on the ironic pods:
```
Waiting for 10.23.24.102 to be configured on an interface
```
NodeSelector patch:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capm3-ironic
spec:
  template:
    spec:
      nodeSelector:
-       kubernetes.io/os: linux
+       node-type: controlplane
```
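The placement fix can be verified with a quick check like the one below. The helper name is made up for this sketch, and the kubectl calls are commented out since they need the live cluster:

```shell
# Check that the capm3-ironic pod landed on the intended node. The column
# extraction is a plain awk filter so it can run against captured output
# (NODE is the 7th column of `kubectl get pods -o wide`).
ironic_pod_node() {
  awk '/^capm3-ironic/ {print $7}'
}

# Live usage (illustrative):
# kubectl get pods -n metal3 -o wide | ironic_pod_node
# kubectl get nodes -l node-type=controlplane -o name
```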
* After the upgrade, BMO re-registers and validates all the hosts.
* While validating, node01 (controlplane/air-target-1) went into the DEPROVISIONING state; BMH node01 failed in the node adoption phase.
* The node adoption phase validates the BMH data, and node01 failed the image href validation, as the log below shows.
```
{"level":"info","ts":1630075223.7960095,"logger":"controllers.BareMetalHost","msg":"registering and validating access to management controller","baremetalhost":"target-infra/node01","provisioningState":"provisioned","credentials":{"credentials":{"name":"node01-bmc-secret","namespace":"target-infra"},"credentialsVersion":"4113"}}
{"level":"info","ts":1630075234.0268457,"logger":"controllers.BareMetalHost","msg":"publishing event","baremetalhost":"target-infra/node01","reason":"ProvisionedRegistrationError","message":"Host adoption failed: Error while attempting to adopt node bc6ece24-3a2d-48c0-ab72-4520e18e4caa: Validation of image href http://10.23.24.101:80/images/control-plane.qcow2 failed, reason: HTTPConnectionPool(host='10.23.24.101', port=80): Max retries exceeded with url: /images/control-plane.qcow2 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fecdcccabe0>: Failed to establish a new connection: [Errno 113] EHOSTUNREACH',))."}
{"level":"info","ts":1630075234.051731,"logger":"controllers.BareMetalHost","msg":"done","baremetalhost":"target-infra/node01","provisioningState":"provisioned","requeue":false,"after":107.110100873}
{"level":"info","ts":1630075283.6867683,"logger":"controllers.BareMetalHost","msg":"start","baremetalhost":"target-infra/node03"}
{"level":"info","ts":1630075283.7269022,"logger":"controllers.BareMetalHost","msg":"registering and validating access to management controller","baremetalhost":"target-infra/node03","provisioningState":"provisioned","credentials":{"credentials":{"name":"node03-bmc-secret","namespace":"target-infra"},"credentialsVersion":"6562"}}
{"level":"info","ts":1630075283.7963765,"logger":"provisioner.ironic","msg":"current provision state","host":"target-infra~node03","lastError":"","current":"active","target":""}
```
* node01 was provisioned from the ephemeral cluster, so its image references point to the ephemeral host (10.23.24.101). As the ephemeral host no longer exists once the target cluster is created, that host is unreachable.
<details>
<summary>Controlplane BMH (failed)</summary>
```
Name: node01
Namespace: target-infra
Labels: airshipit.org/example-label=label-bmh-like-this
airshipit.org/k8s-role=controlplane-host
cluster.x-k8s.io/cluster-name=target-cluster
Annotations: <none>
API Version: metal3.io/v1alpha1
Kind: BareMetalHost
Metadata:
Creation Timestamp: 2021-08-26T21:02:05Z
Finalizers:
baremetalhost.metal3.io
Generation: 1
Owner References:
API Version: infrastructure.cluster.x-k8s.io/v1alpha4
Controller: true
Kind: Metal3Machine
Name: cluster-controlplane-8cgdx
UID: 492f4b7c-8d1f-4332-bfec-f94038051f83
Spec:
Automated Cleaning Mode: metadata
Bmc:
Address: redfish+http://10.23.25.1:8000/redfish/v1/Systems/air-target-1
Credentials Name: node01-bmc-secret
Boot MAC Address: 52:54:00:b6:ed:31
Boot Mode: legacy
Consumer Ref:
API Version: infrastructure.cluster.x-k8s.io/v1alpha4
Kind: Metal3Machine
Name: cluster-controlplane-8cgdx
Namespace: target-infra
Image:
Checksum: http://10.23.24.101:80/images/control-plane.qcow2.md5sum
URL: http://10.23.24.101:80/images/control-plane.qcow2
Network Data:
Name: node01-network-data
Namespace: target-infra
Online: true
User Data:
Name: cluster-controlplane-n25sz
Namespace: target-infra
Status:
Error Count: 1
Error Message: Host adoption failed: Error while attempting to adopt node bc6ece24-3a2d-48c0-ab72-4520e18e4caa: Validation of image href http://10.23.24.101:80/images/control-plane.qcow2 failed, reason: HTTPConnectionPool(host='10.23.24.101', port=80): Max retries exceeded with url: /images/control-plane.qcow2 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fecdcccabe0>: Failed to establish a new connection: [Errno 113] EHOSTUNREACH',)).
Error Type: provisioned registration error
Good Credentials:
Credentials:
Name: node01-bmc-secret
Namespace: target-infra
Credentials Version: 4113
Hardware:
Cpu:
Arch: x86_64
Clock Megahertz: 2400
Count: 2
Flags:
3dnowprefetch
abm
adx
aes
apic
arat
avx
avx2
bmi1
bmi2
clflush
cmov
constant_tsc
cpuid
cpuid_fault
cx16
cx8
de
ept
f16c
fma
fpu
fsgsbase
fxsr
hle
hypervisor
ibpb
ibrs
invpcid
invpcid_single
lahf_lm
lm
mca
mce
md_clear
mmx
movbe
msr
mtrr
nopl
nx
pae
pat
pcid
pclmulqdq
pge
pni
popcnt
pse
pse36
pti
rdrand
rdseed
rdtscp
rep_good
rtm
sep
smap
smep
ss
ssbd
sse
sse2
sse4_1
sse4_2
ssse3
syscall
tpr_shadow
tsc
tsc_adjust
tsc_deadline_timer
tsc_known_freq
vme
vmx
vnmi
vpid
x2apic
xsave
xsaveopt
xtopology
Model: Intel Core Processor (Skylake, IBRS)
Firmware:
Bios:
Hostname: ubuntu
Nics:
Ip: 10.23.24.245
Mac: 52:54:00:b6:ed:31
Model: 0x1af4 0x0001
Name: ens4
Pxe: true
Ip: fe80::5054:ff:fe9b:274c%ens3
Mac: 52:54:00:9b:27:4c
Model: 0x1af4 0x0001
Name: ens3
Ram Mebibytes: 7168
Storage:
Hctl: 2:0:0:0
Model: QEMU HARDDISK
Name: /dev/sda
Rotational: true
Serial Number: QM00005
Size Bytes: 32212254720
Vendor: ATA
System Vendor:
Manufacturer: QEMU
Product Name: Standard PC (i440FX + PIIX, 1996)
Hardware Profile: unknown
Last Updated: 2021-08-27T14:40:34Z
Operation History:
Deprovision:
End: <nil>
Start: <nil>
Inspect:
End: 2021-08-26T20:38:06Z
Start: 2021-08-26T20:34:55Z
Provision:
End: 2021-08-26T20:52:12Z
Start: 2021-08-26T20:38:28Z
Register:
End: 2021-08-26T21:02:19Z
Start: 2021-08-26T21:02:08Z
Operational Status: error
Powered On: true
Provisioning:
ID: bc6ece24-3a2d-48c0-ab72-4520e18e4caa
Image:
Checksum: http://10.23.24.101:80/images/control-plane.qcow2.md5sum
URL: http://10.23.24.101:80/images/control-plane.qcow2
Root Device Hints:
Device Name: /dev/sda
State: provisioned
Tried Credentials:
Credentials:
Name: node01-bmc-secret
Namespace: target-infra
Credentials Version: 4113
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Registered 2m9s metal3-baremetal-controller Registered new host
Normal ProvisionedRegistrationError 97s metal3-baremetal-controller Host adoption failed: Error while attempting to adopt node bc6ece24-3a2d-48c0-ab72-4520e18e4caa: Validation of image href http://10.23.24.101:80/images/control-plane.qcow2 failed, reason: HTTPConnectionPool(host='10.23.24.101', port=80): Max retries exceeded with url: /images/control-plane.qcow2 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fecdcccabe0>: Failed to establish a new connection: [Errno 113] EHOSTUNREACH',)).
```
</details>
* The worker node passed all validations and remained in the provisioned state after the upgrade, since it was provisioned from the target cluster.
* To unblock the upgrade, I hosted the images on the build machine and continued.
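The re-hosting workaround can be sketched as follows; the paths, port, and helper name are assumptions for this lab setup, not part of airshipctl:

```shell
# Given the BMH image URL, list the files that must be re-hosted: the image
# itself plus its md5sum file (both are referenced in the BMH spec).
files_to_host() {
  url="$1"
  echo "${url##*/}"
  echo "${url##*/}.md5sum"
}

# Live usage (illustrative): copy the files to a reachable web root and serve
# them at the address the BMH spec expects, e.g.:
# files_to_host http://10.23.24.101:80/images/control-plane.qcow2
# cp control-plane.qcow2* /srv/images/ && cd /srv/images && python3 -m http.server 80
```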
#### Upgrading CAPM3 infrastructure provider
Once BMO and IRONIC are upgraded, we can upgrade CAPM3 to v0.4.2:
```
airship@help:~/airshipctl$ clusterctl upgrade plan
Checking new release availability...
Management group: capi-system/cluster-api, latest release available for the v1alpha3 API Version of Cluster API (contract):
NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION
bootstrap-kubeadm capi-kubeadm-bootstrap-system BootstrapProvider v0.3.7 v0.3.22
control-plane-kubeadm capi-kubeadm-control-plane-system ControlPlaneProvider v0.3.7 v0.3.22
cluster-api capi-system CoreProvider v0.3.7 v0.3.22
infrastructure-metal3 capm3-system InfrastructureProvider v0.4.0 v0.4.3
You can now apply the upgrade by executing the following command:
upgrade apply --management-group capi-system/cluster-api --contract v1alpha3
Management group: capi-system/cluster-api, latest release available for the v1alpha4 API Version of Cluster API (contract):
NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION
bootstrap-kubeadm capi-kubeadm-bootstrap-system BootstrapProvider v0.3.7 v0.4.1
control-plane-kubeadm capi-kubeadm-control-plane-system ControlPlaneProvider v0.3.7 v0.4.1
cluster-api capi-system CoreProvider v0.3.7 v0.4.1
infrastructure-metal3 capm3-system InfrastructureProvider v0.4.0 v0.5.0
You can now apply the upgrade by executing the following command:
upgrade apply --management-group capi-system/cluster-api --contract v1alpha4
airship@help:~/airshipctl$
airship@help:~/master/airshipctl/manifests/function/clusterctl$ airshipctl phase run clusterctl-upgrade-target
{"Message":"starting clusterctl upgrade executor","Operation":"ClusterctlInitStart","Timestamp":"2021-08-18T09:03:43.29948973Z","Type":"ClusterctlEvent"}
#clusterctl upgrade apply --kubeconfig /home/airship/.airship/kubeconfig-411987894 --kubeconfig-context target-cluster --management-group capi-system/cluster-api --infrastructure=capm3-system/metal3:v0.4.2
Checking cert-manager version...
Cert-manager is already up to date
Performing upgrade...
Using Override="infrastructure-components.yaml" Provider="infrastructure-metal3" Version="v0.4.2"
Deleting Provider="infrastructure-metal3" Version="" TargetNamespace="capm3-system"
Installing Provider="infrastructure-metal3" Version="v0.4.2" TargetNamespace="capm3-system"
{"Message":"clusterctl upgrade completed successfully","Operation":"ClusterctlInitEnd","Timestamp":"2021-08-18T09:04:22.219045013Z","Type":"ClusterctlEvent"}
airship@help:~/master/airshipctl/manifests/function/workers-capm3$ clusterctl upgrade plan
Checking new release availability...
Management group: capi-system/cluster-api, latest release available for the v1alpha3 API Version of Cluster API (contract):
NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION
bootstrap-kubeadm capi-kubeadm-bootstrap-system BootstrapProvider v0.3.22 v0.3.23
control-plane-kubeadm capi-kubeadm-control-plane-system ControlPlaneProvider v0.3.22 v0.3.23
cluster-api capi-system CoreProvider v0.3.22 v0.3.23
infrastructure-metal3 capm3-system InfrastructureProvider v0.4.2 v0.4.3
You can now apply the upgrade by executing the following command:
upgrade apply --management-group capi-system/cluster-api --contract v1alpha3
Management group: capi-system/cluster-api, latest release available for the v1alpha4 API Version of Cluster API (contract):
```
After upgrading CAPM3 to v0.4.2 we can see:
```
airship@help:~/master/airshipctl$ kubectl get pods -n capm3-system
NAME READY STATUS RESTARTS AGE
capm3-controller-manager-76bc889f9c-dkd8q 2/2 Running 0 80s
capm3-ipam-controller-manager-687f6bd979-99w6c 2/2 Running 0 80s
airship@help:~/master/airshipctl$ kubectl get pods -A | grep -i capi
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-59b7bdbd94-kfkxk 2/2 Running 0 12h
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-7b6d8db5db-jqhlf 2/2 Running 0 12h
capi-system capi-controller-manager-69b878c7b6-pmn2v 2/2 Running 0 12h
capi-webhook-system capi-controller-manager-77ccdfd94c-9jkrr 2/2 Running 0 12h
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-77658d7745-lkjdg 2/2 Running 0 12h
capi-webhook-system capi-kubeadm-control-plane-controller-manager-74dcf8b9c-7twd2 2/2 Running 0 12h
capi-webhook-system capm3-controller-manager-6c8887ff77-fwgz9 2/2 Running 0 113s
capi-webhook-system capm3-ipam-controller-manager-57b65bfcb8-tcmjg 2/2 Running 0 112s
airship@help:~/master/airshipctl$ kubectl get m3m -A
NAMESPACE NAME PROVIDERID READY CLUSTER PHASE
target-infra cluster-controlplane-v5cxr metal3://c607373f-6858-49ca-ac0b-dfd687780bf2 true target-cluster
target-infra worker-1-fwpq8 metal3://088a123e-9a1c-4f66-987c-150e73ff8574 true target-cluster
airship@help:~/master/airshipctl$ kubectl get machines -A
NAMESPACE NAME PROVIDERID PHASE
target-infra cluster-controlplane-xf5gd metal3://c607373f-6858-49ca-ac0b-dfd687780bf2 Running
target-infra worker-1-5bc87ff868-n6l96 metal3://088a123e-9a1c-4f66-987c-150e73ff8574 Running
```
#### Observations:
* During the upgrade, the capm3 controller manager, the IPAM controller manager, and the capm3-specific webhook controllers get recreated with the upgraded versions.
* The CAPM3 infrastructure provider patch upgrade did not break the cluster, and all other resources in the cluster were intact.
* **The cluster broke when upgrading the BMO and IRONIC deployments to v0.4.2**, where the BMH went into a host-adoption-failed state due to failed image validation.
* No pod restarts were observed during or after the CAPM3 upgrade.
* The patch upgrade was tested by provisioning a new worker node, which was added without any issue.
* As this is a patch version upgrade, there are no changes to the resource APIVersion.
<details>
<summary>Test if the new node can be provisioned.</summary>
After upgrading CAPM3, to test that it functions properly, a new worker node was added to the cluster.
The new node was added without any issue, which shows the upgrade was successful.
```
Provisioning new node:
airship@help:~/master/airshipctl/manifests/function/workers-capm3$ airshipctl phase run workers-target
[airshipctl] 2021/08/18 09:09:03 Using kubeconfig at '/home/airship/.airship/kubeconfig-977860680' and context 'target-cluster'
namespace/target-infra unchanged
secret/node05-bmc-secret created
secret/node05-network-data created
baremetalhost.metal3.io/node05 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/worker-1 unchanged
machinedeployment.cluster.x-k8s.io/worker-1 configured
metal3machinetemplate.infrastructure.cluster.x-k8s.io/worker-1 unchanged
7 resource(s) applied. 3 created, 3 unchanged, 1 configured
secret/node03-bmc-secret is Current: Resource is always ready
secret/node03-network-data is Current: Resource is always ready
baremetalhost.metal3.io/node05 is NotFound: Resource not found
secret/node05-bmc-secret is NotFound: Resource not found
secret/node05-network-data is NotFound: Resource not found
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/worker-1 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/worker-1 is Current: Resource is current
metal3machinetemplate.infrastructure.cluster.x-k8s.io/worker-1 is Current: Resource is current
baremetalhost.metal3.io/node03 is Current: Resource is current
namespace/target-infra is Current: Resource is current
secret/node05-bmc-secret is Current: Resource is always ready
secret/node05-network-data is Current: Resource is always ready
baremetalhost.metal3.io/node05 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/worker-1 is Current: Resource is current
```
New node node05 was provisioned post upgrade:
```
airship@help:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node01 Ready master 13h v1.18.6
node03 Ready worker 3h24m v1.18.6
node05 Ready worker 37m v1.18.6
airship@help:~/master/airshipctl$ kubectl get bmh -A
NAMESPACE NAME STATE CONSUMER ONLINE ERROR
target-infra node01 provisioned cluster-controlplane-nnbg5 true
target-infra node03 provisioned worker-1-4vpcx true
target-infra node05 provisioned worker-1-qldmt true
airship@help:~/master/airshipctl$ kubectl get m3m -A
NAMESPACE NAME PROVIDERID READY CLUSTER PHASE
target-infra cluster-controlplane-nnbg5 metal3://c02ff820-28da-4978-a081-e4487f11e5a2 true target-cluster
target-infra worker-1-4vpcx metal3://3de85541-4f1b-4d4f-be9d-5cc48333f20f true target-cluster
target-infra worker-1-qldmt metal3://6cf2478d-8a29-45ff-acc8-d32e29ec7371 true target-cluster
airship@help:~/master/airshipctl$ kubectl get machine -A
NAMESPACE NAME PROVIDERID PHASE VERSION
target-infra cluster-controlplane-8bgs7 metal3://c02ff820-28da-4978-a081-e4487f11e5a2 Running v1.18.6
target-infra worker-1-77f7f7b858-gn6x6 metal3://6cf2478d-8a29-45ff-acc8-d32e29ec7371 Running v1.18.3
target-infra worker-1-77f7f7b858-qz559 metal3://3de85541-4f1b-4d4f-be9d-5cc48333f20f Running v1.18.3
```
</details>
### Scenario 2: Downgrade patch version of CAPM3 from v0.4.2 to v0.4.0
In this scenario we will downgrade the CAPM3 infrastructure provider from v0.4.2 to v0.4.0.
**Downgraded the BMO and IRONIC to v0.4.0**
```
airship@help:~/master/airshipctl$ kubectl get deployments.apps -n metal3 -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
ironic 1/1 1 1 40m dnsmasq,httpd,ironic,ironic-inspector quay.io/metal3-io/ironic:capm3-v0.4.0,quay.io/metal3-io/ironic:capm3-v0.4.0,quay.io/metal3-io/ironic:capm3-v0.4.0,quay.io/metal3-io/ironic-inspector:capm3-v0.4.0 airshipit.org/stage=initinfra,name=ironic
metal3-baremetal-operator 1/1 1 1 40m baremetal-operator,ironic-proxy,ironic-inspector-proxy quay.io/metal3-io/baremetal-operator:capm3-v0.4.0,quay.io/airshipit/socat:1.7.4.1,quay.io/airshipit/socat:1.7.4.1 airshipit.org/stage=initinfra,name=metal3-baremetal-operator
```
**Downgraded CAPI AND CAPM3 to v0.4.0**
```
airship@help:~/master/airshipctl$ airshipctl phase run clusterctl-upgrade-target
{"Message":"starting clusterctl upgrade executor","Operation":"ClusterctlInitStart","Timestamp":"2021-08-19T10:59:27.259844983Z","Type":"ClusterctlEvent"}
#clusterctl upgrade apply --kubeconfig /home/airship/.airship/kubeconfig-890143510 --kubeconfig-context target-cluster --management-group capi-system/cluster-api --infrastructure=capm3-system/metal3:v0.4.0
Checking cert-manager version...
Cert-manager is already up to date
Performing upgrade...
Using Override="infrastructure-components.yaml" Provider="infrastructure-metal3" Version="v0.4.0"
Deleting Provider="infrastructure-metal3" Version="" TargetNamespace="capm3-system"
Installing Provider="infrastructure-metal3" Version="v0.4.0" TargetNamespace="capm3-system"
{"Message":"clusterctl upgrade completed successfully","Operation":"ClusterctlInitEnd","Timestamp":"2021-08-19T11:00:01.926555087Z","Type":"ClusterctlEvent"}
airship@help:~/master/airshipctl$ clusterctl upgrade plan
Checking new release availability...
Management group: capi-system/cluster-api, latest release available for the v1alpha3 API Version of Cluster API (contract):
NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION
bootstrap-kubeadm capi-kubeadm-bootstrap-system BootstrapProvider v0.3.7 v0.3.22
control-plane-kubeadm capi-kubeadm-control-plane-system ControlPlaneProvider v0.3.7 v0.3.22
cluster-api capi-system CoreProvider v0.3.7 v0.3.22
infrastructure-metal3 capm3-system InfrastructureProvider v0.4.0 v0.4.3
You can now apply the upgrade by executing the following command:
upgrade apply --management-group capi-system/cluster-api --contract v1alpha3
Management group: capi-system/cluster-api, latest release available for the v1alpha4 API Version of Cluster API (contract):
NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION
bootstrap-kubeadm capi-kubeadm-bootstrap-system BootstrapProvider v0.3.7 v0.4.1
control-plane-kubeadm capi-kubeadm-control-plane-system ControlPlaneProvider v0.3.7 v0.4.1
cluster-api capi-system CoreProvider v0.3.7 v0.4.1
infrastructure-metal3 capm3-system InfrastructureProvider v0.4.0 v0.5.0
You can now apply the upgrade by executing the following command:
upgrade apply --management-group capi-system/cluster-api --contract v1alpha4
```
#### Observations
* Downgrade was seamless.
* CAPM3 controller managers were recreated with version v0.4.0. All other pods were unaffected.
* No cluster outage during CAPM3 downgrade.
### Scenario 3: Minor version upgrade of CAPM3 from v0.4.0 to v0.5.0
The target cluster is on Kubernetes v1.19.1, CAPI v1alpha3 (v0.3.7), CAPM3 v1alpha4 (v0.4.0).
In this scenario we upgrade the CAPI version to v1alpha4 and CAPM3 to v1alpha5.
For this upgrade we need to upgrade BMO and IRONIC to version v0.5.0.
For upgrading BMO and IRONIC I took reference from the patchset below from Sirisha Gopigiri:
https://review.opendev.org/c/airship/airshipctl/+/804834
#### Dependencies and constraints for an upgrade.
* The Kubernetes version must be v1.19.1 or higher.
* The CAPI version should be upgraded to v1alpha4.
* CAPI components need to be upgraded to v0.4.0 or higher.
* baremetal-operator and ironic need to be upgraded to v0.5.0.
#### Upgrading BMO and Ironic Deployments to v0.5.0
```
airship@help:~/upgrade/airshipctl$ kubectl get deployments.apps -n metal3 -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
baremetal-operator-controller-manager 1/1 1 1 16m kube-rbac-proxy,manager gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0,quay.io/metal3-io/baremetal-operator:capm3-v0.5.0 airshipit.org/stage=initinfra,control-plane=controller-manager
capm3-ironic 1/1 1 1 16m ironic-dnsmasq,mariadb,ironic-api,ironic-conductor,ironic-log-watch,ironic-inspector,ironic-inspector-log-watch quay.io/metal3-io/ironic:capm3-v0.5.0,quay.io/metal3-io/ironic:capm3-v0.5.0,quay.io/metal3-io/ironic:capm3-v0.5.0,quay.io/metal3-io/ironic:capm3-v0.5.0,quay.io/metal3-io/ironic,quay.io/metal3-io/ironic:capm3-v0.5.0,quay.io/metal3-io/ironic airshipit.org/stage=initinfra,name=capm3-ironic
```
```
airship@help:~/upgrade/airshipctl$ kubectl get bmh -A
NAMESPACE NAME STATE CONSUMER ONLINE ERROR
target-infra node01 provisioned cluster-controlplane-7cxt7 true
target-infra node03 provisioned worker-1-7z562 true
airship@help:~/upgrade/airshipctl$ kubectl get machine -A
NAMESPACE NAME PROVIDERID PHASE
target-infra cluster-controlplane-cjdkz metal3://c7f0f3d4-ebaa-4153-963e-94fe335181db Running
target-infra worker-1-7bff768f9-fxmzr metal3://7f29349c-4e09-475b-888a-0adfc591d408 Running
airship@help:~/upgrade/airshipctl$ kubectl get m3m -A
NAMESPACE NAME PROVIDERID READY CLUSTER PHASE
target-infra cluster-controlplane-7cxt7 metal3://c7f0f3d4-ebaa-4153-963e-94fe335181db true target-cluster
target-infra worker-1-7z562 metal3://7f29349c-4e09-475b-888a-0adfc591d408 true target-cluster
```
**Observations:**
* We need to align the BMO and IRONIC deployment names with the greenfield deployment names. Otherwise new deployments are created instead of the existing deployments being upgraded.
* While upgrading the baremetalhosts.metal3.io CRD from apiextensions.k8s.io/v1beta1 to apiextensions.k8s.io/v1, we need to set `spec.preserveUnknownFields` to `false` to support the structural schema.
* We need to ensure the ironic pod gets provisioned on the control-plane node, by updating the current nodeSelector patch to choose the correct node.
* After upgrading BMO and IRONIC to v0.5.0, the BMHs re-registered and validated themselves successfully.
* Image validation did not break during the minor version upgrade.
#### Upgrade CAPM3 to v0.5.0 (Minor version upgrade)
After upgrading BMO/IRONIC to v0.5.0, upgrade CAPM3 to v0.5.0:
```
airship@help:~/upgrade/airshipctl$ clusterctl upgrade apply --management-group capi-system/cluster-api --contract v1alpha4
Performing upgrade...
Performing upgrade...
Upgrading Provider="capi-kubeadm-bootstrap-system/bootstrap-kubeadm" CurrentVersion="v0.3.7" TargetVersion="v0.4.1"
Deleting Provider="bootstrap-kubeadm" Version="v0.3.7" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="bootstrap-kubeadm" Version="v0.4.1" TargetNamespace="capi-kubeadm-bootstrap-system"
Upgrading Provider="capi-kubeadm-control-plane-system/control-plane-kubeadm" CurrentVersion="v0.3.7" TargetVersion="v0.4.1"
Deleting Provider="control-plane-kubeadm" Version="v0.3.7" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="control-plane-kubeadm" Version="v0.4.1" TargetNamespace="capi-kubeadm-control-plane-system"
Upgrading Provider="capi-system/cluster-api" CurrentVersion="v0.3.7" TargetVersion="v0.4.1"
Deleting Provider="cluster-api" Version="v0.3.7" TargetNamespace="capi-system"
Installing Provider="cluster-api" Version="v0.4.1" TargetNamespace="capi-system"
Upgrading Provider="capm3-system/infrastructure-metal3" CurrentVersion="v0.4.0" TargetVersion="v0.5.0"
Deleting Provider="infrastructure-metal3" Version="v0.4.0" TargetNamespace="capm3-system"
Installing Provider="infrastructure-metal3" Version="v0.5.0" TargetNamespace="capm3-system"
```
```
airship@help:~/upgrade/airshipctl$ clusterctl upgrade plan
Checking new release availability...
Management group: capi-system/cluster-api, latest release available for the v1alpha4 API Version of Cluster API (contract):
NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION
bootstrap-kubeadm capi-kubeadm-bootstrap-system BootstrapProvider v0.4.1 Already up to date
control-plane-kubeadm capi-kubeadm-control-plane-system ControlPlaneProvider v0.4.1 Already up to date
cluster-api capi-system CoreProvider v0.4.1 Already up to date
infrastructure-metal3 capm3-system InfrastructureProvider v0.5.0 Already up to date
You are already up to date!
```
* The CAPM3 deployments get recreated at version v0.5.0.
* The existing cluster remained unharmed post-upgrade.
```
airship@help:~/upgrade/airshipctl$ kubectl get deployments.apps -n capm3-system -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
capm3-controller-manager 1/1 1 1 3m10s manager quay.io/metal3-io/cluster-api-provider-metal3:v0.5.0 cluster.x-k8s.io/provider=infrastructure-metal3,control-plane=controller-manager,controller-tools.k8s.io=1.0
capm3-ipam-controller-manager 1/1 1 1 3m9s manager quay.io/metal3-io/ip-address-manager:v0.1.0 cluster.x-k8s.io/provider=infrastructure-metal3,control-plane=controller-manager,controller-tools.k8s.io=1.0
airship@help:~/upgrade/airshipctl$ kubectl get pods -A| grep -i capi
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-864bc6b479-w4cz2 1/1 Running 0 4m11s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-7dd8df47f-7f4bf 1/1 Running 0 4m3s
capi-system capi-controller-manager-c54759577-p7z7k 1/1 Running 0 3m48s
capi-webhook-system capi-controller-manager-745689557d-9nkvf 2/2 Running 0 72m
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6949f44db8-drbmb 2/2 Running 0 72m
capi-webhook-system capi-kubeadm-control-plane-controller-manager-7b6c4bf48d-k2pwk 2/2 Running 0 72m
capi-webhook-system capm3-controller-manager-76b787647b-rssdv 2/2 Running 0 72m
capi-webhook-system capm3-ipam-controller-manager-6bb9f5b99c-bclrx 2/2 Running 0 72m
airship@help:~/upgrade/airshipctl$ kubectl get bmh -A
NAMESPACE NAME STATE CONSUMER ONLINE ERROR
target-infra node01 provisioned cluster-controlplane-7cxt7 true
target-infra node02 registration error false registration error
target-infra node03 provisioned worker-1-7z562 true
airship@help:~/upgrade/airshipctl$ kubectl get machines -A
NAMESPACE NAME PROVIDERID PHASE VERSION
target-infra cluster-controlplane-cjdkz metal3://c7f0f3d4-ebaa-4153-963e-94fe335181db Running v1.19.1
target-infra worker-1-7bff768f9-fxmzr metal3://7f29349c-4e09-475b-888a-0adfc591d408 Running v1.19.1
```
* No other pods in the cluster were affected; no pods were restarted due to the upgrade.
```
airship@help:~/upgrade/airshipctl$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-system calico-kube-controllers-784866bbfd-n97bh 1/1 Running 0 80m
calico-system calico-node-r6kt9 1/1 Running 0 80m
calico-system calico-node-sh9wv 1/1 Running 0 52m
calico-system calico-typha-5657654dcf-fxwr4 1/1 Running 0 80m
calico-system calico-typha-5657654dcf-lqrbj 1/1 Running 0 50m
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-864bc6b479-w4cz2 1/1 Running 0 4m39s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-7dd8df47f-7f4bf 1/1 Running 0 4m31s
capi-system capi-controller-manager-c54759577-p7z7k 1/1 Running 0 4m16s
capi-webhook-system capi-controller-manager-745689557d-9nkvf 2/2 Running 0 73m
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6949f44db8-drbmb 2/2 Running 0 73m
capi-webhook-system capi-kubeadm-control-plane-controller-manager-7b6c4bf48d-k2pwk 2/2 Running 0 72m
capi-webhook-system capm3-controller-manager-76b787647b-rssdv 2/2 Running 0 72m
capi-webhook-system capm3-ipam-controller-manager-6bb9f5b99c-bclrx 2/2 Running 0 72m
capm3-system capm3-controller-manager-776f8b9495-hk7wq 1/1 Running 0 3m58s
capm3-system capm3-ipam-controller-manager-5958778bc6-btj5z 1/1 Running 0 3m58s
cert-manager cert-manager-768bf64dd4-l7xng 1/1 Running 0 80m
cert-manager cert-manager-cainjector-646879549c-9lmfw 1/1 Running 0 80m
cert-manager cert-manager-webhook-6dc9ccc9fb-bgsx2 1/1 Running 0 80m
default nginx 1/1 Running 0 75m
flux-system helm-controller-f9945fb4b-tlccx 1/1 Running 0 79m
flux-system source-controller-9bff54fbf-zmflf 1/1 Running 0 79m
hardware-classification hardware-classification-controller-manager-57d954ff86-6twwt 2/2 Running 0 79m
ingress ingress-ingress-nginx-controller-594c74b8fd-wn2x7 1/1 Running 0 52m
ingress ingress-ingress-nginx-defaultbackend-744cb749cf-kx5cn 1/1 Running 0 52m
kube-system coredns-f9fd979d6-7dhcm 1/1 Running 0 81m
kube-system coredns-f9fd979d6-cjx5l 1/1 Running 0 81m
kube-system etcd-node01 1/1 Running 0 81m
kube-system kube-apiserver-node01 1/1 Running 0 81m
kube-system kube-controller-manager-node01 1/1 Running 0 81m
kube-system kube-proxy-9hmrn 1/1 Running 0 81m
kube-system kube-proxy-xx98z 1/1 Running 0 52m
kube-system kube-scheduler-node01 1/1 Running 0 81m
metal3 baremetal-operator-controller-manager-68c7fd955b-5dmc5 2/2 Running 0 24m
metal3 capm3-ironic-75c985d58c-rlvnc 7/7 Running 0 24m
tigera-operator tigera-operator-5b76777d49-ptmn5 1/1 Running 0 80m
airship@help:~/upgrade/airshipctl$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node01 Ready master 81m v1.19.1
node03 Ready worker 52m v1.19.1
```
* The version references of the CAPI and CAPM3 objects were updated post-upgrade.
* In a few places, mostly in the metadata managed fields, the older API version is still referenced.
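The phase-helper check proposed in the plan (verifying that CAPI object API versions were updated) could be sketched as below. The expected contract versions and the dict-shaped resources are assumptions; a real helper would fetch the objects through the Kubernetes API rather than take them as input.

```python
# Sketch of a phase helper that flags CAPI/CAPM3 objects still
# serving an older apiVersion after the upgrade. The EXPECTED map
# reflects the v1alpha4/v1alpha5 contract targeted in this spike.
EXPECTED = {
    "Cluster": "cluster.x-k8s.io/v1alpha4",
    "Machine": "cluster.x-k8s.io/v1alpha4",
    "Metal3Machine": "infrastructure.cluster.x-k8s.io/v1alpha5",
}

def stale_objects(resources):
    """Return (kind, name) pairs whose apiVersion does not match EXPECTED."""
    stale = []
    for obj in resources:
        expected = EXPECTED.get(obj.get("kind"))
        if expected and obj.get("apiVersion") != expected:
            stale.append((obj["kind"], obj["metadata"]["name"]))
    return stale
```

A helper like this could run after the `clusterctl-upgrade-target` phase and fail the phase if any stale objects remain.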
<details>
<summary>Cluster object v1alpha4</summary>
```
Name: target-cluster
Namespace: target-infra
Labels: airshipit.org/stage=initinfra
Annotations: <none>
API Version: cluster.x-k8s.io/v1alpha4
Kind: Cluster
Metadata:
Creation Timestamp: 2021-08-25T08:36:25Z
Finalizers:
cluster.cluster.x-k8s.io
Generation: 3
Managed Fields:
API Version: cluster.x-k8s.io/v1alpha3
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:finalizers:
.:
v:"cluster.cluster.x-k8s.io":
f:spec:
.:
f:clusterNetwork:
.:
f:pods:
.:
f:cidrBlocks:
f:serviceDomain:
f:services:
.:
f:cidrBlocks:
f:controlPlaneEndpoint:
.:
f:host:
f:port:
f:controlPlaneRef:
.:
f:kind:
f:name:
f:namespace:
f:infrastructureRef:
.:
f:kind:
f:name:
f:namespace:
f:paused:
Manager: clusterctl
Operation: Update
Time: 2021-08-25T08:36:26Z
API Version: cluster.x-k8s.io/v1alpha3
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
f:controlPlaneInitialized:
f:controlPlaneReady:
f:infrastructureReady:
f:phase:
Manager: manager
Operation: Update
Time: 2021-08-25T08:36:30Z
API Version: cluster.x-k8s.io/v1alpha3
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:kubectl.kubernetes.io/last-applied-configuration:
f:labels:
.:
f:airshipit.org/stage:
Manager: airshipctl
Operation: Update
Time: 2021-08-25T08:36:59Z
API Version: cluster.x-k8s.io/v1alpha4
Fields Type: FieldsV1
fieldsV1:
f:spec:
f:controlPlaneRef:
f:apiVersion:
f:infrastructureRef:
f:apiVersion:
f:status:
f:observedGeneration:
Manager: manager
Operation: Update
Time: 2021-08-25T09:44:43Z
Resource Version: 39984
Self Link: /apis/cluster.x-k8s.io/v1alpha4/namespaces/target-infra/clusters/target-cluster
UID: 6bdaafb3-deae-43d5-b3be-d50e8009c77e
Spec:
Cluster Network:
Pods:
Cidr Blocks:
192.168.0.0/18
Service Domain: cluster.local
Services:
Cidr Blocks:
10.96.0.0/12
Control Plane Endpoint:
Host: 10.23.25.102
Port: 6443
Control Plane Ref:
API Version: controlplane.cluster.x-k8s.io/v1alpha4
Kind: KubeadmControlPlane
Name: cluster-controlplane
Namespace: target-infra
Infrastructure Ref:
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Kind: Metal3Cluster
Name: target-cluster
Namespace: target-infra
Status:
Conditions:
Last Transition Time: 2021-08-25T08:36:30Z
Status: True
Type: Ready
Last Transition Time: 2021-08-25T09:44:30Z
Status: True
Type: ControlPlaneInitialized
Last Transition Time: 2021-08-25T08:36:30Z
Status: True
Type: ControlPlaneReady
Last Transition Time: 2021-08-25T08:36:28Z
Status: True
Type: InfrastructureReady
Control Plane Ready: true
Infrastructure Ready: true
Observed Generation: 3
Phase: Provisioned
Events: <none>
```
</details>
<details>
<summary>controlplane bmh object</summary>
```
Name: node01
Namespace: target-infra
Labels: airshipit.org/example-label=label-bmh-like-this
airshipit.org/k8s-role=controlplane-host
cluster.x-k8s.io/cluster-name=target-cluster
Annotations: <none>
API Version: metal3.io/v1alpha1
Kind: BareMetalHost
Metadata:
Creation Timestamp: 2021-08-25T08:36:26Z
Finalizers:
baremetalhost.metal3.io
Generation: 2
Managed Fields:
API Version: metal3.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"4d452ead-52e2-416a-96e5-6839177eb0d7"}:
.:
f:apiVersion:
f:controller:
f:kind:
f:name:
f:uid:
Manager: manager
Operation: Update
Time: 2021-08-25T08:36:25Z
API Version: metal3.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:finalizers:
.:
v:"baremetalhost.metal3.io":
f:labels:
.:
f:airshipit.org/example-label:
f:airshipit.org/k8s-role:
f:cluster.x-k8s.io/cluster-name:
f:ownerReferences:
.:
k:{"uid":"eb242c0d-f88a-4a75-93e7-6096bdac65d7"}:
.:
f:controller:
f:kind:
f:name:
f:uid:
f:spec:
.:
f:bmc:
.:
f:address:
f:credentialsName:
f:bootMACAddress:
f:bootMode:
f:consumerRef:
.:
f:kind:
f:name:
f:namespace:
f:image:
.:
f:checksum:
f:url:
f:networkData:
.:
f:name:
f:namespace:
f:online:
f:userData:
.:
f:name:
f:namespace:
Manager: clusterctl
Operation: Update
Time: 2021-08-25T08:36:26Z
API Version: metal3.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:errorMessage:
f:goodCredentials:
.:
f:credentials:
.:
f:name:
f:namespace:
f:credentialsVersion:
f:hardware:
.:
f:cpu:
.:
f:arch:
f:clockMegahertz:
f:count:
f:flags:
f:model:
f:firmware:
.:
f:bios:
f:hostname:
f:nics:
f:ramMebibytes:
f:storage:
f:systemVendor:
.:
f:manufacturer:
f:productName:
f:hardwareProfile:
f:lastUpdated:
f:operationHistory:
.:
f:deprovision:
.:
f:end:
f:start:
f:inspect:
.:
f:end:
f:start:
f:provision:
.:
f:end:
f:start:
f:register:
.:
f:end:
f:start:
f:operationalStatus:
f:poweredOn:
f:provisioning:
.:
f:ID:
f:image:
.:
f:checksum:
f:url:
f:rootDeviceHints:
.:
f:deviceName:
f:state:
f:triedCredentials:
.:
f:credentials:
.:
f:name:
f:namespace:
f:credentialsVersion:
Manager: baremetal-operator
Operation: Update
Time: 2021-08-25T09:33:27Z
API Version: metal3.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"eb242c0d-f88a-4a75-93e7-6096bdac65d7"}:
f:apiVersion:
f:spec:
f:consumerRef:
f:apiVersion:
Manager: cluster-api-provider-metal3-manager
Operation: Update
Time: 2021-08-25T09:44:41Z
Owner References:
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Controller: true
Kind: Metal3Machine
Name: cluster-controlplane-7cxt7
UID: eb242c0d-f88a-4a75-93e7-6096bdac65d7
Resource Version: 39942
Self Link: /apis/metal3.io/v1alpha1/namespaces/target-infra/baremetalhosts/node01
UID: da19f2c6-bff4-4d91-b8db-c4b4082e7239
Spec:
Automated Cleaning Mode: metadata
Bmc:
Address: redfish+http://10.23.25.1:8000/redfish/v1/Systems/air-target-1
Credentials Name: node01-bmc-secret
Boot MAC Address: 52:54:00:b6:ed:31
Boot Mode: legacy
Consumer Ref:
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Kind: Metal3Machine
Name: cluster-controlplane-7cxt7
Namespace: target-infra
Image:
Checksum: http://10.23.24.101:80/images/control-plane.qcow2.md5sum
URL: http://10.23.24.101:80/images/control-plane.qcow2
Network Data:
Name: node01-network-data
Namespace: target-infra
Online: true
User Data:
Name: cluster-controlplane-4m7nd
Namespace: target-infra
Status:
Error Count: 0
Error Message:
Good Credentials:
Credentials:
Name: node01-bmc-secret
Namespace: target-infra
Credentials Version: 28775
Hardware:
Cpu:
Arch: x86_64
Clock Megahertz: 2400
Count: 2
Flags:
3dnowprefetch
abm
adx
aes
apic
arat
avx
avx2
bmi1
bmi2
clflush
cmov
constant_tsc
cpuid
cpuid_fault
cx16
cx8
de
ept
f16c
fma
fpu
fsgsbase
fxsr
hle
hypervisor
ibpb
ibrs
invpcid
invpcid_single
lahf_lm
lm
mca
mce
md_clear
mmx
movbe
msr
mtrr
nopl
nx
pae
pat
pcid
pclmulqdq
pge
pni
popcnt
pse
pse36
pti
rdrand
rdseed
rdtscp
rep_good
rtm
sep
smap
smep
ss
ssbd
sse
sse2
sse4_1
sse4_2
ssse3
syscall
tpr_shadow
tsc
tsc_adjust
tsc_deadline_timer
tsc_known_freq
vme
vmx
vnmi
vpid
x2apic
xsave
xsaveopt
xtopology
Model: Intel Core Processor (Skylake, IBRS)
Firmware:
Bios:
Hostname: ubuntu
Nics:
Ip: fe80::5054:ff:fe9b:274c%ens3
Mac: 52:54:00:9b:27:4c
Model: 0x1af4 0x0001
Name: ens3
Ip: 10.23.24.245
Mac: 52:54:00:b6:ed:31
Model: 0x1af4 0x0001
Name: ens4
Pxe: true
Ram Mebibytes: 7168
Storage:
Hctl: 2:0:0:0
Model: QEMU HARDDISK
Name: /dev/sda
Rotational: true
Serial Number: QM00005
Size Bytes: 32212254720
Vendor: ATA
System Vendor:
Manufacturer: QEMU
Product Name: Standard PC (i440FX + PIIX, 1996)
Hardware Profile: unknown
Last Updated: 2021-08-25T09:33:28Z
Operation History:
Deprovision:
End: <nil>
Start: <nil>
Inspect:
End: 2021-08-25T08:12:55Z
Start: 2021-08-25T08:09:44Z
Provision:
End: 2021-08-25T08:25:31Z
Start: 2021-08-25T08:13:17Z
Register:
End: 2021-08-25T09:33:28Z
Start: 2021-08-25T09:32:52Z
Operational Status: OK
Powered On: true
Provisioning:
ID: 9d74db0d-a2e4-4c2d-b1a2-947ed8b7c2fd
Image:
Checksum: http://10.23.24.101:80/images/control-plane.qcow2.md5sum
URL: http://10.23.24.101:80/images/control-plane.qcow2
Root Device Hints:
Device Name: /dev/sda
State: provisioned
Tried Credentials:
Credentials:
Name: node01-bmc-secret
Namespace: target-infra
Credentials Version: 28775
Events: <none>
```
</details>
<details>
<summary> worker bmh object </summary>
```
Name: node03
Namespace: target-infra
Labels: airshipit.org/k8s-role=worker
cluster.x-k8s.io/cluster-name=target-cluster
Annotations: <none>
API Version: metal3.io/v1alpha1
Kind: BareMetalHost
Metadata:
Creation Timestamp: 2021-08-25T08:37:26Z
Finalizers:
baremetalhost.metal3.io
Generation: 4
Managed Fields:
API Version: metal3.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:labels:
.:
f:airshipit.org/k8s-role:
f:spec:
.:
f:bmc:
.:
f:address:
f:credentialsName:
f:bootMACAddress:
f:bootMode:
f:networkData:
.:
f:name:
f:namespace:
Manager: airshipctl
Operation: Update
Time: 2021-08-25T08:37:26Z
API Version: metal3.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:cluster.x-k8s.io/cluster-name:
f:ownerReferences:
.:
k:{"uid":"5f7c6e40-3862-4ce3-b4e0-7fc7ec0a6c66"}:
.:
f:controller:
f:kind:
f:name:
f:uid:
f:spec:
f:consumerRef:
.:
f:kind:
f:name:
f:namespace:
f:image:
.:
f:checksum:
f:url:
f:online:
f:userData:
.:
f:name:
f:namespace:
Manager: manager
Operation: Update
Time: 2021-08-25T08:40:58Z
API Version: metal3.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.:
v:"baremetalhost.metal3.io":
f:status:
.:
f:errorMessage:
f:goodCredentials:
.:
f:credentials:
.:
f:name:
f:namespace:
f:credentialsVersion:
f:hardware:
.:
f:cpu:
.:
f:arch:
f:clockMegahertz:
f:count:
f:flags:
f:model:
f:firmware:
.:
f:bios:
f:hostname:
f:nics:
f:ramMebibytes:
f:storage:
f:systemVendor:
.:
f:manufacturer:
f:productName:
f:hardwareProfile:
f:lastUpdated:
f:operationHistory:
.:
f:deprovision:
.:
f:end:
f:start:
f:inspect:
.:
f:end:
f:start:
f:provision:
.:
f:end:
f:start:
f:register:
.:
f:end:
f:start:
f:operationalStatus:
f:poweredOn:
f:provisioning:
.:
f:ID:
f:image:
.:
f:checksum:
f:url:
f:rootDeviceHints:
.:
f:deviceName:
f:state:
f:triedCredentials:
.:
f:credentials:
.:
f:name:
f:namespace:
f:credentialsVersion:
Manager: baremetal-operator
Operation: Update
Time: 2021-08-25T09:33:30Z
API Version: metal3.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"5f7c6e40-3862-4ce3-b4e0-7fc7ec0a6c66"}:
f:apiVersion:
f:spec:
f:consumerRef:
f:apiVersion:
Manager: cluster-api-provider-metal3-manager
Operation: Update
Time: 2021-08-25T09:44:41Z
Owner References:
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Controller: true
Kind: Metal3Machine
Name: worker-1-7z562
UID: 5f7c6e40-3862-4ce3-b4e0-7fc7ec0a6c66
Resource Version: 39946
Self Link: /apis/metal3.io/v1alpha1/namespaces/target-infra/baremetalhosts/node03
UID: 7f29349c-4e09-475b-888a-0adfc591d408
Spec:
Automated Cleaning Mode: metadata
Bmc:
Address: redfish+http://10.23.25.1:8000/redfish/v1/Systems/air-worker-1
Credentials Name: node03-bmc-secret
Boot MAC Address: 52:54:00:b6:ed:23
Boot Mode: legacy
Consumer Ref:
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Kind: Metal3Machine
Name: worker-1-7z562
Namespace: target-infra
Image:
Checksum: http://10.23.24.102:80/images/data-plane.qcow2.md5sum
URL: http://10.23.24.102:80/images/data-plane.qcow2
Network Data:
Name: node03-network-data
Namespace: target-infra
Online: true
User Data:
Name: worker-1-knvpr
Namespace: target-infra
Status:
Error Count: 0
Error Message:
Good Credentials:
Credentials:
Name: node03-bmc-secret
Namespace: target-infra
Credentials Version: 28777
Hardware:
Cpu:
Arch: x86_64
Clock Megahertz: 2400
Count: 2
Flags:
3dnowprefetch
abm
adx
aes
apic
arat
avx
avx2
bmi1
bmi2
clflush
cmov
constant_tsc
cpuid
cpuid_fault
cx16
cx8
de
ept
f16c
fma
fpu
fsgsbase
fxsr
hle
hypervisor
ibpb
ibrs
invpcid
invpcid_single
lahf_lm
lm
mca
mce
md_clear
mmx
movbe
msr
mtrr
nopl
nx
pae
pat
pcid
pclmulqdq
pge
pni
popcnt
pse
pse36
pti
rdrand
rdseed
rdtscp
rep_good
rtm
sep
smap
smep
ss
ssbd
sse
sse2
sse4_1
sse4_2
ssse3
syscall
tpr_shadow
tsc
tsc_adjust
tsc_deadline_timer
tsc_known_freq
vme
vmx
vnmi
vpid
x2apic
xsave
xsaveopt
xtopology
Model: Intel Core Processor (Skylake, IBRS)
Firmware:
Bios:
Hostname: ubuntu
Nics:
Ip: 10.23.24.231
Mac: 52:54:00:b6:ed:23
Model: 0x1af4 0x0001
Name: ens4
Pxe: true
Ip: fe80::5054:ff:fe9b:2707%ens3
Mac: 52:54:00:9b:27:07
Model: 0x1af4 0x0001
Name: ens3
Ram Mebibytes: 7168
Storage:
Hctl: 2:0:0:0
Model: QEMU HARDDISK
Name: /dev/sda
Rotational: true
Serial Number: QM00005
Size Bytes: 32212254720
Vendor: ATA
System Vendor:
Manufacturer: QEMU
Product Name: Standard PC (i440FX + PIIX, 1996)
Hardware Profile: unknown
Last Updated: 2021-08-25T09:33:30Z
Operation History:
Deprovision:
End: <nil>
Start: <nil>
Inspect:
End: 2021-08-25T08:40:38Z
Start: 2021-08-25T08:37:27Z
Provision:
End: 2021-08-25T08:55:04Z
Start: 2021-08-25T08:40:58Z
Register:
End: 2021-08-25T09:33:30Z
Start: 2021-08-25T09:32:53Z
Operational Status: OK
Powered On: true
Provisioning:
ID: d3fae99f-ce70-44c9-8842-21352e82fb10
Image:
Checksum: http://10.23.24.102:80/images/data-plane.qcow2.md5sum
URL: http://10.23.24.102:80/images/data-plane.qcow2
Root Device Hints:
Device Name: /dev/sda
State: provisioned
Tried Credentials:
Credentials:
Name: node03-bmc-secret
Namespace: target-infra
Credentials Version: 28777
Events: <none>
```
</details>
<details>
<summary>controleplane m3m object</summary>
```
Name: cluster-controlplane-7cxt7
Namespace: target-infra
Labels: cluster.x-k8s.io/cluster-name=target-cluster
cluster.x-k8s.io/control-plane=
Annotations: cluster.x-k8s.io/cloned-from-groupkind: Metal3MachineTemplate.infrastructure.cluster.x-k8s.io
cluster.x-k8s.io/cloned-from-name: cluster-controlplane
metal3.io/BareMetalHost: target-infra/node01
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Kind: Metal3Machine
Metadata:
Creation Timestamp: 2021-08-25T08:36:26Z
Finalizers:
metal3machine.infrastructure.cluster.x-k8s.io
Generation: 1
Managed Fields:
API Version: infrastructure.cluster.x-k8s.io/v1alpha4
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:cluster.x-k8s.io/cloned-from-groupkind:
f:cluster.x-k8s.io/cloned-from-name:
f:metal3.io/BareMetalHost:
f:finalizers:
.:
v:"metal3machine.infrastructure.cluster.x-k8s.io":
f:labels:
.:
f:cluster.x-k8s.io/cluster-name:
f:cluster.x-k8s.io/control-plane:
f:ownerReferences:
.:
k:{"uid":"2038a78b-e95b-4db8-8fe4-cc5a867054ae"}:
.:
f:apiVersion:
f:kind:
f:name:
f:uid:
k:{"uid":"4b85828a-d17e-4134-bd52-f746af1688b6"}:
.:
f:blockOwnerDeletion:
f:controller:
f:kind:
f:name:
f:uid:
f:spec:
.:
f:hostSelector:
.:
f:matchLabels:
.:
f:airshipit.org/k8s-role:
f:image:
.:
f:checksum:
f:url:
f:providerID:
f:status:
f:userData:
.:
f:name:
f:namespace:
Manager: clusterctl
Operation: Update
Time: 2021-08-25T08:36:26Z
API Version: infrastructure.cluster.x-k8s.io/v1alpha4
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"3828bcf4-f652-4a34-b7eb-7e5b58317b7a"}:
.:
f:apiVersion:
f:kind:
f:name:
f:uid:
k:{"uid":"cfeda3dd-f8b5-4a47-bad7-62d7952445bf"}:
.:
f:apiVersion:
f:blockOwnerDeletion:
f:controller:
f:kind:
f:name:
f:uid:
f:status:
.:
f:addresses:
f:ready:
Manager: manager
Operation: Update
Time: 2021-08-25T09:44:16Z
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"4b85828a-d17e-4134-bd52-f746af1688b6"}:
f:apiVersion:
Manager: manager
Operation: Update
Time: 2021-08-25T09:44:42Z
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Fields Type: FieldsV1
fieldsV1:
f:status:
f:lastUpdated:
Manager: cluster-api-provider-metal3-manager
Operation: Update
Time: 2021-08-25T11:21:42Z
Owner References:
API Version: controlplane.cluster.x-k8s.io/v1alpha3
Kind: KubeadmControlPlane
Name: cluster-controlplane
UID: 2038a78b-e95b-4db8-8fe4-cc5a867054ae
API Version: cluster.x-k8s.io/v1alpha4
Block Owner Deletion: true
Controller: true
Kind: Machine
Name: cluster-controlplane-cjdkz
UID: 4b85828a-d17e-4134-bd52-f746af1688b6
Resource Version: 90551
Self Link: /apis/infrastructure.cluster.x-k8s.io/v1alpha5/namespaces/target-infra/metal3machines/cluster-controlplane-7cxt7
UID: eb242c0d-f88a-4a75-93e7-6096bdac65d7
Spec:
Automated Cleaning Mode: metadata
Host Selector:
Match Labels:
airshipit.org/k8s-role: controlplane-host
Image:
Checksum: http://10.23.24.101:80/images/control-plane.qcow2.md5sum
URL: http://10.23.24.101:80/images/control-plane.qcow2
Provider ID: metal3://c7f0f3d4-ebaa-4153-963e-94fe335181db
Status:
Addresses:
Address: fe80::5054:ff:fe9b:274c%ens3
Type: InternalIP
Address: 10.23.24.245
Type: InternalIP
Address: ubuntu
Type: Hostname
Address: ubuntu
Type: InternalDNS
Last Updated: 2021-08-25T11:21:42Z
Ready: true
Events: <none>
```
</details>
<details>
<summary>controlplane machine object</summary>
```
Name: cluster-controlplane-cjdkz
Namespace: target-infra
Labels: cluster.x-k8s.io/cluster-name=target-cluster
cluster.x-k8s.io/control-plane=
Annotations: controlplane.cluster.x-k8s.io/kubeadm-cluster-configuration:
{"etcd":{},"networking":{"serviceSubnet":"10.0.0.0/20","podSubnet":"192.168.16.0/20","dnsDomain":"cluster.local"},"apiServer":{"extraArgs"...
API Version: cluster.x-k8s.io/v1alpha4
Kind: Machine
Metadata:
Creation Timestamp: 2021-08-25T08:36:25Z
Finalizers:
machine.cluster.x-k8s.io
Generation: 2
Managed Fields:
API Version: cluster.x-k8s.io/v1alpha3
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:controlplane.cluster.x-k8s.io/kubeadm-cluster-configuration:
f:finalizers:
.:
v:"machine.cluster.x-k8s.io":
f:labels:
.:
f:cluster.x-k8s.io/cluster-name:
f:cluster.x-k8s.io/control-plane:
f:ownerReferences:
.:
k:{"uid":"2038a78b-e95b-4db8-8fe4-cc5a867054ae"}:
.:
f:apiVersion:
f:blockOwnerDeletion:
f:controller:
f:kind:
f:name:
f:uid:
f:spec:
.:
f:bootstrap:
.:
f:configRef:
.:
f:apiVersion:
f:kind:
f:name:
f:namespace:
f:uid:
f:dataSecretName:
f:clusterName:
f:infrastructureRef:
.:
f:kind:
f:name:
f:namespace:
f:uid:
f:providerID:
f:version:
Manager: clusterctl
Operation: Update
Time: 2021-08-25T08:36:25Z
API Version: cluster.x-k8s.io/v1alpha3
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"3828bcf4-f652-4a34-b7eb-7e5b58317b7a"}:
.:
f:apiVersion:
f:blockOwnerDeletion:
f:controller:
f:kind:
f:name:
f:uid:
f:status:
.:
f:addresses:
f:bootstrapReady:
f:infrastructureReady:
f:lastUpdated:
f:nodeRef:
.:
f:apiVersion:
f:kind:
f:name:
f:uid:
f:phase:
Manager: manager
Operation: Update
Time: 2021-08-25T08:36:29Z
API Version: cluster.x-k8s.io/v1alpha4
Fields Type: FieldsV1
fieldsV1:
f:spec:
f:infrastructureRef:
f:apiVersion:
f:status:
f:conditions:
f:nodeInfo:
.:
f:architecture:
f:bootID:
f:containerRuntimeVersion:
f:kernelVersion:
f:kubeProxyVersion:
f:kubeletVersion:
f:machineID:
f:operatingSystem:
f:osImage:
f:systemUUID:
f:observedGeneration:
Manager: manager
Operation: Update
Time: 2021-08-25T09:45:12Z
Owner References:
API Version: controlplane.cluster.x-k8s.io/v1alpha3
Block Owner Deletion: true
Controller: true
Kind: KubeadmControlPlane
Name: cluster-controlplane
UID: 2038a78b-e95b-4db8-8fe4-cc5a867054ae
Resource Version: 40246
Self Link: /apis/cluster.x-k8s.io/v1alpha4/namespaces/target-infra/machines/cluster-controlplane-cjdkz
UID: 4b85828a-d17e-4134-bd52-f746af1688b6
Spec:
Bootstrap:
Config Ref:
API Version: bootstrap.cluster.x-k8s.io/v1alpha3
Kind: KubeadmConfig
Name: cluster-controlplane-4m7nd
Namespace: target-infra
UID: a6307f89-adb9-41e3-a720-53727b60962f
Data Secret Name: cluster-controlplane-4m7nd
Cluster Name: target-cluster
Infrastructure Ref:
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Kind: Metal3Machine
Name: cluster-controlplane-7cxt7
Namespace: target-infra
UID: 4d452ead-52e2-416a-96e5-6839177eb0d7
Provider ID: metal3://c7f0f3d4-ebaa-4153-963e-94fe335181db
Version: v1.19.1
Status:
Addresses:
Address: fe80::5054:ff:fe9b:274c%ens3
Type: InternalIP
Address: 10.23.24.245
Type: InternalIP
Address: ubuntu
Type: Hostname
Address: ubuntu
Type: InternalDNS
Bootstrap Ready: true
Conditions:
Last Transition Time: 2021-08-25T08:36:29Z
Status: True
Type: Ready
Last Transition Time: 2021-08-25T09:45:12Z
Status: True
Type: APIServerPodHealthy
Last Transition Time: 2021-08-25T08:36:27Z
Status: True
Type: BootstrapReady
Last Transition Time: 2021-08-25T09:45:12Z
Status: True
Type: ControllerManagerPodHealthy
Last Transition Time: 2021-08-25T09:45:12Z
Status: True
Type: EtcdMemberHealthy
Last Transition Time: 2021-08-25T09:45:12Z
Status: True
Type: EtcdPodHealthy
Last Transition Time: 2021-08-25T08:36:29Z
Status: True
Type: InfrastructureReady
Last Transition Time: 2021-08-25T09:44:42Z
Status: True
Type: NodeHealthy
Last Transition Time: 2021-08-25T09:45:12Z
Status: True
Type: SchedulerPodHealthy
Infrastructure Ready: true
Last Updated: 2021-08-25T08:36:29Z
Node Info:
Architecture: amd64
Boot ID: 284f70b5-9b10-4e3a-8839-1afaa2d7a43e
Container Runtime Version: containerd://1.4.9
Kernel Version: 5.4.0-81-generic
Kube Proxy Version: v1.19.1
Kubelet Version: v1.19.1
Machine ID: 521078e9ad9c4498bd9fb5fe283e04a8
Operating System: linux
Os Image: Ubuntu 20.04.2 LTS
System UUID: 521078e9-ad9c-4498-bd9f-b5fe283e04a8
Node Ref:
API Version: v1
Kind: Node
Name: node01
UID: 76eb3b77-b2c1-48c8-9c36-8a36710244f2
Observed Generation: 2
Phase: Running
Events: <none>
```
</details>
<details>
<summary>worker m3m object</summary>
```
Name: worker-1-7z562
Namespace: target-infra
Labels: cluster.x-k8s.io/cluster-name=target-cluster
machine-template-hash=369932495
Annotations: cluster.x-k8s.io/cloned-from-groupkind: Metal3MachineTemplate.infrastructure.cluster.x-k8s.io
cluster.x-k8s.io/cloned-from-name: worker-1
metal3.io/BareMetalHost: target-infra/node03
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Kind: Metal3Machine
Metadata:
Creation Timestamp: 2021-08-25T08:37:27Z
Finalizers:
metal3machine.infrastructure.cluster.x-k8s.io
Generation: 2
Managed Fields:
API Version: infrastructure.cluster.x-k8s.io/v1alpha4
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:cluster.x-k8s.io/cloned-from-groupkind:
f:cluster.x-k8s.io/cloned-from-name:
f:metal3.io/BareMetalHost:
f:finalizers:
.:
v:"metal3machine.infrastructure.cluster.x-k8s.io":
f:labels:
.:
f:cluster.x-k8s.io/cluster-name:
f:machine-template-hash:
f:ownerReferences:
.:
k:{"uid":"127cd8a3-a841-42ef-adbe-d5dc4ac6a884"}:
.:
f:blockOwnerDeletion:
f:controller:
f:kind:
f:name:
f:uid:
f:spec:
.:
f:hostSelector:
.:
f:matchLabels:
.:
f:airshipit.org/k8s-role:
f:image:
.:
f:checksum:
f:url:
f:providerID:
f:status:
.:
f:addresses:
f:ready:
f:userData:
.:
f:name:
f:namespace:
Manager: manager
Operation: Update
Time: 2021-08-25T09:44:17Z
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"127cd8a3-a841-42ef-adbe-d5dc4ac6a884"}:
f:apiVersion:
Manager: manager
Operation: Update
Time: 2021-08-25T09:44:42Z
API Version: infrastructure.cluster.x-k8s.io/v1alpha5
Fields Type: FieldsV1
fieldsV1:
f:status:
f:lastUpdated:
Manager: cluster-api-provider-metal3-manager
Operation: Update
Time: 2021-08-25T11:21:42Z
Owner References:
API Version: cluster.x-k8s.io/v1alpha4
Block Owner Deletion: true
Controller: true
Kind: Machine
Name: worker-1-7bff768f9-fxmzr
UID: 127cd8a3-a841-42ef-adbe-d5dc4ac6a884
Resource Version: 90550
Self Link: /apis/infrastructure.cluster.x-k8s.io/v1alpha5/namespaces/target-infra/metal3machines/worker-1-7z562
UID: 5f7c6e40-3862-4ce3-b4e0-7fc7ec0a6c66
Spec:
Automated Cleaning Mode: metadata
Host Selector:
Match Labels:
airshipit.org/k8s-role: worker
Image:
Checksum: http://10.23.24.102:80/images/data-plane.qcow2.md5sum
URL: http://10.23.24.102:80/images/data-plane.qcow2
Provider ID: metal3://7f29349c-4e09-475b-888a-0adfc591d408
Status:
Addresses:
Address: 10.23.24.231
Type: InternalIP
Address: fe80::5054:ff:fe9b:2707%ens3
Type: InternalIP
Address: ubuntu
Type: Hostname
Address: ubuntu
Type: InternalDNS
Last Updated: 2021-08-25T11:21:42Z
Ready: true
User Data:
Name: worker-1-knvpr
Namespace: target-infra
Events: <none>
```
</details>
**Test if a new machine can be provisioned.**
After the minor version upgrade, test CAPM3 by provisioning a new worker node and adding it to the cluster.
```
airship@help:~/upgrade/airshipctl$ airshipctl phase run workers-target
[airshipctl] 2021/08/25 10:12:25 Using kubeconfig at '/home/airship/.airship/kubeconfig-539996638' and context 'target-cluster'
namespace/target-infra unchanged
secret/node05-bmc-secret created
secret/node05-network-data created
baremetalhost.metal3.io/node05 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/worker-1 configured
machinedeployment.cluster.x-k8s.io/worker-1 configured
metal3machinetemplate.infrastructure.cluster.x-k8s.io/worker-1 configured
7 resource(s) applied. 3 created, 1 unchanged, 3 configured
namespace/target-infra is Current: Resource is current
secret/node05-bmc-secret is NotFound: Resource not found
secret/node05-network-data is NotFound: Resource not found
baremetalhost.metal3.io/node05 is NotFound: Resource not found
secret/node03-bmc-secret is Current: Resource is always ready
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/worker-1 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/worker-1 is Current: Resource is Ready
metal3machinetemplate.infrastructure.cluster.x-k8s.io/worker-1 is Current: Resource is current
secret/node03-network-data is Current: Resource is always ready
baremetalhost.metal3.io/node03 is Current: Resource is current
secret/node05-bmc-secret is Current: Resource is always ready
secret/node05-network-data is Current: Resource is always ready
baremetalhost.metal3.io/node05 is Current: Resource is current
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/worker-1 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/worker-1 is InProgress: Minimum availability requires 2 replicas, current 1 available
baremetalhost.metal3.io/node05 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/worker-1 is Current: Resource is Ready
```
```
airship@help:~/upgrade/airshipctl$ kubectl get bmh -A
NAMESPACE NAME STATE CONSUMER ONLINE ERROR
target-infra node01 provisioned cluster-controlplane-7cxt7 true
target-infra node02 registration error false registration error
target-infra node03 provisioned worker-1-7z562 true
target-infra node05 provisioned worker-1-66k9b true
airship@help:~/upgrade/airshipctl$ kubectl get m3m -A
NAMESPACE NAME PROVIDERID READY CLUSTER PHASE
target-infra cluster-controlplane-7cxt7 metal3://c7f0f3d4-ebaa-4153-963e-94fe335181db true target-cluster
target-infra worker-1-66k9b metal3://bb1ca9b2-8ad8-4b1e-b636-5abd1ec853cc true target-cluster
target-infra worker-1-7z562 metal3://7f29349c-4e09-475b-888a-0adfc591d408 true target-cluster
```
**Observations:**
* The new node node05 is provisioned successfully and added to the cluster.
* API versions are reported correctly: CAPI objects are upgraded to v1alpha4 and CAPM3 infrastructure objects to v1alpha5.
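The version check in the second observation is what a phase helper could automate. A minimal sketch of the comparison logic follows; the `kubectl` command in the comment and the hard-coded sample value are assumptions based on the output above, not part of the spike itself.

```shell
# On a live management cluster the apiVersion would come from, e.g.:
#   kubectl get machines.cluster.x-k8s.io -A -o jsonpath='{.items[0].apiVersion}'
# Here a sample value stands in for that output to show only the comparison.
expected="cluster.x-k8s.io/v1alpha4"
actual="cluster.x-k8s.io/v1alpha4"   # stand-in for the kubectl output above
if [ "$actual" = "$expected" ]; then
  echo "CAPI objects upgraded to v1alpha4"
else
  echo "unexpected apiVersion: $actual" >&2
  exit 1
fi
```

The same pattern, repeated for `m3m` objects against `infrastructure.cluster.x-k8s.io/v1alpha5`, would cover the CAPM3 side of the check.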
### Scenario 4: Downgrade CAPM3 from v0.5.0 to v0.4.0
Downgrading CAPI from v1alpha4 to v1alpha3 is not supported.
The same applies to CAPM3: it cannot be downgraded from v0.5.0 to v0.4.0, as the attempts below show.
```
airship@help:~/upgrade/airshipctl$ clusterctl upgrade apply --management-group capi-system/cluster-api --contract v1alpha3
Performing upgrade...
Performing upgrade...
airship@help:~/upgrade/airshipctl$
airship@help:~/upgrade/airshipctl$ clusterctl upgrade apply --management-group capi-system/cluster-api -b capi-kubeadm-bootstrap-system/kubeadm:v0.3.7 -c capi-kubeadm-control-plane-system/kubeadm:v0.3.7 --core capi-system/cluster-api:v0.3.7 -i capm3-system/metal3:v0.4.0
Performing upgrade...
Performing upgrade...
Upgrading Provider="capi-system/cluster-api" CurrentVersion="" TargetVersion="v0.3.7"
Deleting Provider="cluster-api" Version="" TargetNamespace="capi-system"
Installing Provider="cluster-api" Version="v0.3.7" TargetNamespace="capi-system"
Error: action failed after 10 attempts: failed to patch provider object: CustomResourceDefinition.apiextensions.k8s.io "clusterresourcesetbindings.addons.cluster.x-k8s.io" is invalid: status.storedVersions[1]: Invalid value: "v1alpha4": must appear in spec.versions
# with clusterctl version 0.3.7
airship@help:~/logs/v0.5.0$ clusterctl upgrade apply --management-group capi-system/cluster-api -i capm3-system/metal3:v0.4.0
Performing upgrade...
Error: unable to complete that upgrade: the target version for the provider capm3-system/infrastructure-metal3 supports the v1alpha3 API Version of Cluster API (contract), while the management group is using v1alpha4
# with clusterctl version 0.4.2
airship@help:~/logs/v0.5.0$ clusterctl upgrade apply -i capm3-system/metal3:v0.4.0
Performing upgrade...
Error: unable to complete that upgrade: the target version for the provider capm3-system/infrastructure-metal3 supports the v1alpha3 API Version of Cluster API (contract), while the management cluster is using v1alpha4
```
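The `status.storedVersions` error above is the root of the first failed rollback: once v1alpha4 objects have been stored, the CRD records that version, and the older v0.3.x manifests no longer define it. A minimal sketch of how that could be inspected follows; the `kubectl` command in the comment is an assumption, and the check below runs against a saved sample of the CRD status rather than a live cluster.

```shell
# On a live cluster the stored versions could be read with:
#   kubectl get crd clusterresourcesetbindings.addons.cluster.x-k8s.io \
#     -o jsonpath='{.status.storedVersions}'
# Below, a saved sample of the CRD status stands in for that output.
cat <<'EOF' > /tmp/crd-status.json
{"status": {"storedVersions": ["v1alpha3", "v1alpha4"]}}
EOF
# List every stored API version; any version not present in the target
# manifests' spec.versions will make the downgrade patch fail.
grep -o 'v1alpha[0-9]*' /tmp/crd-status.json | sort -u
```

Removing the stale entry from the CRD's status is the usual remediation for this class of error, but editing CRD status by hand is risky and is not something this spike validated.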