# Spike: CAPI Component Upgrades for Brownfield Sites
[TOC]
This document describes the scenarios and implications of upgrading CAPI components (cluster-api core, control plane and bootstrap) as a Day 2 operation on the target cluster.
## Metal3 Provider
### Summary
* Patch version upgrades and downgrades work seamlessly
* Minor version upgrades require a minimum Kubernetes version of v1.19.1
* Upgrading the capi components alone by a minor version is not supported; the provider has to be upgraded along with them
* A minor version upgrade of capi and capm3 together still has issues - needs more testing, along with the bmo and ironic upgrades
* Metal3Machine object discovery errors were seen in the capi pod after the upgrade
* capi objects reference both v1alpha3 and v1alpha4 in their metadata, and it takes some time before they refer to v1alpha4
* capm3 objects reference both v1alpha4 and v1alpha5 in their metadata, and it takes some time before they refer to v1alpha5
* Tried updating the Kubernetes version, which created a new Machine object that failed with a "no provider found" error --- this does not work even before the upgrade, so it is unclear whether it is supported
* Tried updating `node-monitor-period` for the controller-manager, but the change is not reflected on the control-plane node --- this also does not work before the upgrade, so it is unclear whether it is supported
* Minor version downgrades are not supported
* Little effect on the workloads, since the upgrade only updates the images of the capi and capm3 components
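The stored API versions mentioned above can be inspected directly on the CRDs; a quick check, assuming `kubectl` access to the management cluster:
```
$ kubectl get crd machines.cluster.x-k8s.io -o jsonpath='{.status.storedVersions}'
$ kubectl get crd metal3machines.infrastructure.cluster.x-k8s.io -o jsonpath='{.status.storedVersions}'
```
Individual objects are persisted at the CRD's current storage version only when they are next written, which would explain the delay observed before objects refer to the new version.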
### Scenario 1: Patch version upgrade CAPI - tested with kubernetes v1.18.6
In this scenario we try to upgrade the capi components by a patch version: in the current test, the components are upgraded from `v0.3.7` to `v0.3.22`. After the upgrade, the cluster-api Kubernetes objects are inspected to observe the effect.
```
$ clusterctl upgrade plan
Checking cert-manager version...
Cert-Manager is already up to date
Checking new release availability...
Management group: capi-system/cluster-api, latest release available for the v1alpha3 API Version of Cluster API (contract):
NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION
bootstrap-kubeadm capi-kubeadm-bootstrap-system BootstrapProvider v0.3.7 v0.3.22
control-plane-kubeadm capi-kubeadm-control-plane-system ControlPlaneProvider v0.3.7 v0.3.22
cluster-api capi-system CoreProvider v0.3.7 v0.3.22
You can now apply the upgrade by executing the following command:
clusterctl upgrade apply --management-group capi-system/cluster-api --contract v1alpha3
Management group: capi-system/cluster-api, latest release available for the v1alpha4 API Version of Cluster API (contract):
NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION
bootstrap-kubeadm capi-kubeadm-bootstrap-system BootstrapProvider v0.3.7 v0.4.0
control-plane-kubeadm capi-kubeadm-control-plane-system ControlPlaneProvider v0.3.7 v0.4.0
cluster-api capi-system CoreProvider v0.3.7 v0.4.0
The current version of clusterctl could not upgrade to v1alpha4 contract (only v1alpha3 supported).
```
Upgrading the capi components
```
$ clusterctl upgrade apply --management-group capi-system/cluster-api --bootstrap capi-kubeadm-bootstrap-system/kubeadm:v0.3.22 --control-plane capi-kubeadm-control-plane-system/kubeadm:v0.3.22 --core capi-system/cluster-api:v0.3.22
Checking cert-manager version...
Cert-manager is already up to date
Performing upgrade...
Deleting Provider="bootstrap-kubeadm" Version="v0.3.7" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.22" TargetNamespace="capi-kubeadm-bootstrap-system"
Deleting Provider="control-plane-kubeadm" Version="v0.3.7" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.22" TargetNamespace="capi-kubeadm-control-plane-system"
Deleting Provider="cluster-api" Version="v0.3.7" TargetNamespace="capi-system"
Installing Provider="cluster-api" Version="v0.3.22" TargetNamespace="capi-system"
```
```
$ kubectl get metal3machines -A
NAMESPACE NAME PROVIDERID READY CLUSTER PHASE
target-infra cluster-controlplane-ll9xg metal3://11593451-983c-46e2-8d65-f9a8e90ed5a4 true target-cluster
target-infra worker-1-wzlxv metal3://35196f52-0ce8-4a8a-a853-629c995b8809 true target-cluster
$ kubectl get clusters -A
NAMESPACE NAME PHASE
target-infra target-cluster Provisioned
```
```
$ kubectl get pods -A | grep capi
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-75446f98bf-l5kzx 2/2 Running 0 2m18s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-5d545bd746-z7tcm 2/2 Running 0 2m12s
capi-system capi-controller-manager-6fd746f5b6-2xhhj 2/2 Running 0 2m27s
capi-webhook-system capi-controller-manager-79cd8b7c89-fd4nl 2/2 Running 0 2m30s
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-59b4449dfb-4z45z 2/2 Running 0 2m21s
capi-webhook-system capi-kubeadm-control-plane-controller-manager-5bc8fcf8d9-hs45q 2/2 Running 0 2m14s
capi-webhook-system capm3-controller-manager-6fb5ddd44c-sbxg5 2/2 Running 0 20h
capi-webhook-system capm3-ipam-controller-manager-86bf55f6-kfgzq 2/2 Running 0 20h
```
Observations:
* The upgrade command deletes the capi, bootstrap and control-plane pods and recreates them with the new version of capi
* The capi Kubernetes resources are unaffected; since capm3 is not being upgraded here, there is little change in the resources
* Since only the capi components are upgraded, the metal3 and capm3 pods are unaffected
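To confirm which image version is actually running after the upgrade, the controller deployments can be inspected (a quick sketch, assuming the default deployment names visible in the pod listing above):
```
$ kubectl get deployment -n capi-system capi-controller-manager \
    -o jsonpath='{.spec.template.spec.containers[*].image}'
```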
### Scenario 2: Patch version downgrade CAPI - tested with kubernetes v1.18.6
In this test we downgrade the CAPI components (core, bootstrap and control-plane) from v0.3.22 back to v0.3.7.
```
$ clusterctl upgrade apply --management-group capi-system/cluster-api --bootstrap capi-kubeadm-bootstrap-system/kubeadm:v0.3.7 --control-plane capi-kubeadm-control-plane-system/kubeadm:v0.3.7 --core capi-system/cluster-api:v0.3.7
Checking cert-manager version...
Cert-manager is already up to date
Performing upgrade...
Deleting Provider="cluster-api" Version="" TargetNamespace="capi-system"
Installing Provider="cluster-api" Version="v0.3.7" TargetNamespace="capi-system"
Deleting Provider="bootstrap-kubeadm" Version="" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.7" TargetNamespace="capi-kubeadm-bootstrap-system"
Deleting Provider="control-plane-kubeadm" Version="" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.7" TargetNamespace="capi-kubeadm-control-plane-system"
```
Observations:
* After the downgrade, the capi, bootstrap and control-plane pods are recreated with version v0.3.7
* No change in the other Kubernetes cluster objects
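The installed provider versions can also be cross-checked against clusterctl's provider inventory (the `providers.clusterctl.cluster.x-k8s.io` objects that clusterctl maintains on the management cluster):
```
$ kubectl get providers -A
```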
### Scenario 3: Minor version upgrade CAPI - target cluster on kubernetes 1.18.6
In this test we try to perform a minor version upgrade of the CAPI components, going from v0.3.7 to v0.4.0.
```
# Download clusterctl 0.4.0 version
$ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.0/clusterctl-linux-amd64 -o clusterctlv4
$ chmod +x clusterctlv4
$ clusterctlv4 upgrade plan
Error: unsupported management cluster server version: v1.18.6 - minimum required version is v1.19.1
```
Observations:
* As seen above, upgrading the CAPI components to v0.4.0 on Kubernetes v1.18.6 is not supported; clusterctl requires at least v1.19.1
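The server version that clusterctl validates against can be confirmed up front, before attempting the upgrade:
```
$ kubectl version --short
```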
### Scenario 4: Minor version upgrade CAPI - target cluster on kubernetes v1.19.1
Deployed a target cluster:
* with Kubernetes v1.19.1
* with v0.3.7 of the capi components
* with v0.4.0 of the capm3 components
Upgrading the capi and capm3 components
```
clusterctl upgrade apply --contract v1alpha4
Performing upgrade...
Deleting Provider="bootstrap-kubeadm" Version="v0.3.7" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="bootstrap-kubeadm" Version="v0.4.1" TargetNamespace="capi-kubeadm-bootstrap-system"
I0823 12:23:43.553978 20930 request.go:668] Waited for 1.019969289s due to client-side throttling, not priority and fairness, request: GET:https://10.23.25.102:6443/apis/bootstrap.cluster.x-k8s.io/v1alpha4?timeout=30s
Deleting Provider="control-plane-kubeadm" Version="v0.3.7" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="control-plane-kubeadm" Version="v0.4.1" TargetNamespace="capi-kubeadm-control-plane-system"
Deleting Provider="cluster-api" Version="v0.3.7" TargetNamespace="capi-system"
Installing Provider="cluster-api" Version="v0.4.1" TargetNamespace="capi-system"
I0823 12:24:08.246118 20930 request.go:668] Waited for 1.047686386s due to client-side throttling, not priority and fairness, request: GET:https://10.23.25.102:6443/apis/controlplane.cluster.x-k8s.io/v1alpha3?timeout=30s
Deleting Provider="infrastructure-metal3" Version="v0.4.0" TargetNamespace="capm3-system"
Installing Provider="infrastructure-metal3" Version="v0.5.0" TargetNamespace="capm3-system"
I0823 12:24:23.379077 20930 request.go:668] Waited for 1.010206137s due to client-side throttling, not priority and fairness, request: GET:https://10.23.25.102:6443/apis/clusterctl.cluster.x-k8s.io/v1alpha3?timeout=30s
New clusterctl version available: v0.4.0 -> v0.4.1
https://github.com/kubernetes-sigs/cluster-api/releases/tag/v0.4.1
```
Observations:
1. Observed a few errors in the capi pod
```
E0824 09:04:19.546088 1 machinedeployment_controller.go:145] controller-runtime/manager/controller/machinedeployment "msg"="Failed to reconcile MachineDeployment" "error"="failed to retrieve Metal3MachineTemplate external object \"target-infra\"/\"worker-1\": conversion webhook for infrastructure.cluster.x-k8s.io/v1alpha4, Kind=Metal3MachineTemplate failed: Post \"https://capm3-webhook-service.capm3-system.svc:443/convert?timeout=30s\": dial tcp 10.0.10.179:443: connect: connection refused" "name"="worker-1" "namespace"="target-infra" "reconciler group"="cluster.x-k8s.io" "reconciler kind"="MachineDeployment"
E0824 09:04:19.560939 1 machineset_controller.go:159] controller-runtime/manager/controller/machineset "msg"="Failed to reconcile MachineSet" "error"="failed to retrieve Metal3MachineTemplate external object \"target-infra\"/\"worker-1\": conversion webhook for infrastructure.cluster.x-k8s.io/v1alpha4, Kind=Metal3MachineTemplate failed: Post \"https://capm3-webhook-service.capm3-system.svc:443/convert?timeout=30s\": dial tcp 10.0.10.179:443: connect: connection refused" "name"="worker-1-7bff768f9" "namespace"="target-infra" "reconciler group"="cluster.x-k8s.io" "reconciler kind"="MachineSet"
E0824 09:04:19.770063 1 controller.go:302] controller-runtime/manager/controller/machinedeployment "msg"="Reconciler error" "error"="failed to retrieve Metal3MachineTemplate external object \"target-infra\"/\"worker-1\": conversion webhook for infrastructure.cluster.x-k8s.io/v1alpha4, Kind=Metal3MachineTemplate failed: Post \"https://capm3-webhook-service.capm3-system.svc:443/convert?timeout=30s\": dial tcp 10.0.10.179:443: connect: connection refused" "name"="worker-1" "namespace"="target-infra" "reconciler group"="cluster.x-k8s.io" "reconciler kind"="MachineDeployment"
E0824 09:04:19.783067 1 controller.go:302] controller-runtime/manager/controller/machineset "msg"="Reconciler error" "error"="failed to retrieve Metal3MachineTemplate external object \"target-infra\"/\"worker-1\": conversion webhook for infrastructure.cluster.x-k8s.io/v1alpha4, Kind=Metal3MachineTemplate failed: Post \"https://capm3-webhook-service.capm3-system.svc:443/convert?timeout=30s\": dial tcp 10.0.10.179:443: connect: connection refused" "name"="worker-1-7bff768f9" "namespace"="target-infra" "reconciler group"="cluster.x-k8s.io" "reconciler kind"="MachineSet"
I0824 09:04:20.747807 1 tracker.go:55] controller-runtime/manager/controller/cluster "msg"="Adding watcher on external object" "name"="target-cluster" "namespace"="target-infra" "reconciler group"="cluster.x-k8s.io" "reconciler kind"="Cluster" "GroupVersionKind"="controlplane.cluster.x-k8s.io/v1alpha4, Kind=KubeadmControlPlane"
I0824 09:04:20.750074 1 controller.go:130] controller-runtime/manager/controller/cluster "msg"="Starting EventSource" "reconciler group"="cluster.x-k8s.io" "reconciler kind"="Cluster" "source"={"Type":{"apiVersion":"controlplane.cluster.x-k8s.io/v1alpha4","kind":"KubeadmControlPlane"}}
E0824 09:04:20.858109 1 machineset_controller.go:159] controller-runtime/manager/controller/machineset "msg"="Failed to reconcile MachineSet" "error"="failed to retrieve Metal3MachineTemplate external object \"target-infra\"/\"worker-1\": conversion webhook for infrastructure.cluster.x-k8s.io/v1alpha4, Kind=Metal3MachineTemplate failed: Post \"https://capm3-webhook-service.capm3-system.svc:443/convert?timeout=30s\": dial tcp 10.0.10.179:443: connect: connection refused" "name"="worker-1-7bff768f9" "namespace"="target-infra" "reconciler group"="cluster.x-k8s.io" "reconciler kind"="MachineSet"
E0824 09:04:20.859921 1 controller.go:302] controller-runtime/manager/controller/machineset "msg"="Reconciler error" "error"="failed to retrieve Metal3MachineTemplate external object \"target-infra\"/\"worker-1\": conversion webhook for infrastructure.cluster.x-k8s.io/v1alpha4, Kind=Metal3MachineTemplate failed: Post \"https://capm3-webhook-service.capm3-system.svc:443/convert?timeout=30s\": dial tcp 10.0.10.179:443: connect: connection refused" "name"="worker-1-7bff768f9" "namespace"="target-infra" "reconciler group"="cluster.x-k8s.io" "reconciler kind"="MachineSet"
E0824 09:04:20.866716 1 controller.go:302] controller-runtime/manager/controller/cluster "msg"="Reconciler error" "error"="failed to retrieve Metal3Cluster external object \"target-infra\"/\"target-cluster\": conversion webhook for infrastructure.cluster.x-k8s.io/v1alpha4, Kind=Metal3Cluster failed: Post \"https://capm3-webhook-service.capm3-system.svc:443/convert?timeout=30s\": dial tcp 10.0.10.179:443: connect: connection refused" "name"="target-cluster" "namespace"="target-infra" "reconciler group"="cluster.x-k8s.io" "reconciler kind"="Cluster"
```
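The `connection refused` errors above all point at the capm3 conversion webhook; while the capm3 pod is replaced during the upgrade, its webhook service briefly has no ready endpoints. A quick way to check (names taken from the error messages above):
```
$ kubectl get pods -n capm3-system
$ kubectl get endpoints capm3-webhook-service -n capm3-system
```
If the endpoints list is empty, the webhook is not serving yet and the conversion errors can be expected to persist until the new capm3 pod becomes ready.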
2. Most of the capi objects (e.g. the Machine and Cluster objects) reference both v1alpha3 and v1alpha4 for some time, almost 15-30 minutes. Later they are changed to v1alpha4, while still carrying both v1alpha3 and v1alpha4 references
<details>
<summary>capi object v1alpha3</summary>
```
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
annotations:
cluster.x-k8s.io/conversion-data: '{"apiVersion":"cluster.x-k8s.io/v1alpha4","kind":"Machine","spec":{"bootstrap":{"configRef":{"apiVersion":"bootstrap.cluster.x-k8s.io/v1alpha3","kind":"KubeadmConfig","name":"cluster-controlplane-xbl2n","namespace":"target-infra","uid":"f6b7d6e1-e6da-45dd-a4ed-7096c266747a"},"dataSecretName":"cluster-controlplane-xbl2n"},"clusterName":"target-cluster","infrastructureRef":{"apiVersion":"infrastructure.cluster.x-k8s.io/v1alpha5","kind":"Metal3Machine","name":"cluster-controlplane-gplqq","namespace":"target-infra","uid":"b023caae-40fa-40df-bd3c-da7e052089d2"},"providerID":"metal3://110b1266-d094-41f2-a35c-54d1eead09f5","version":"v1.19.1"},"status":{"addresses":[{"address":"fe80::5054:ff:fe9b:274c%ens3","type":"InternalIP"},{"address":"10.23.24.245","type":"InternalIP"},{"address":"ubuntu","type":"Hostname"},{"address":"ubuntu","type":"InternalDNS"}],"bootstrapReady":true,"conditions":[{"lastTransitionTime":"2021-08-23T11:56:28Z","status":"True","type":"Ready"},{"lastTransitionTime":"2021-08-23T12:25:12Z","status":"True","type":"APIServerPodHealthy"},{"lastTransitionTime":"2021-08-23T11:56:26Z","status":"True","type":"BootstrapReady"},{"lastTransitionTime":"2021-08-23T12:25:12Z","status":"True","type":"ControllerManagerPodHealthy"},{"lastTransitionTime":"2021-08-23T12:25:13Z","status":"True","type":"EtcdMemberHealthy"},{"lastTransitionTime":"2021-08-23T12:25:12Z","status":"True","type":"EtcdPodHealthy"},{"lastTransitionTime":"2021-08-23T11:56:28Z","status":"True","type":"InfrastructureReady"},{"lastTransitionTime":"2021-08-23T12:24:44Z","status":"True","type":"NodeHealthy"},{"lastTransitionTime":"2021-08-23T12:25:12Z","status":"True","type":"SchedulerPodHealthy"}],"infrastructureReady":true,"lastUpdated":"2021-08-23T11:56:28Z","nodeInfo":{"architecture":"amd64","bootID":"a3cebc32-a87f-43d6-b3c1-569e5011ab6f","containerRuntimeVersion":"containerd://1.4.9","kernelVersion":"5.4.0-81-generic","kubeProxyVersion":"v1.19.1","kubeletVersion":
"v1.19.1","machineID":"89eb7881568c4d00a3ec249da0dfc6a1","operatingSystem":"linux","osImage":"Ubuntu
20.04.2 LTS","systemUUID":"89eb7881-568c-4d00-a3ec-249da0dfc6a1"},"nodeRef":{"apiVersion":"v1","kind":"Node","name":"node01","uid":"e280cf86-a4f5-4ca4-94a9-408a24284b06"},"observedGeneration":2,"phase":"Running"}}'
controlplane.cluster.x-k8s.io/kubeadm-cluster-configuration: '{"etcd":{},"networking":{"serviceSubnet":"10.0.0.0/20","podSubnet":"192.168.16.0/20","dnsDomain":"cluster.local"},"apiServer":{"extraArgs":{"allow-privileged":"true","authorization-mode":"Node,RBAC","enable-admission-plugins":"NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,NodeRestriction","kubelet-preferred-address-types":"InternalIP,ExternalIP,Hostname","requestheader-allowed-names":"front-proxy-client","requestheader-group-headers":"X-Remote-Group","requestheader-username-headers":"X-Remote-User","service-cluster-ip-range":"10.0.0.0/20","service-node-port-range":"80-32767","tls-cipher-suites":"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA","tls-min-version":"VersionTLS12","v":"2"},"timeoutForControlPlane":"16m40s"},"controllerManager":{"extraArgs":{"bind-address":"127.0.0.1","cluster-cidr":"192.168.16.0/20","configure-cloud-routes":"false","enable-hostpath-provisioner":"true","node-monitor-grace-period":"20s","node-monitor-period":"5s","pod-eviction-timeout":"60s","port":"0","terminated-pod-gc-threshold":"1000","use-service-account-credentials":"true","v":"2"}},"scheduler":{},"dns":{},"imageRepository":"k8s.gcr.io"}'
creationTimestamp: "2021-08-23T11:56:24Z"
finalizers:
- machine.cluster.x-k8s.io
generation: 2
labels:
cluster.x-k8s.io/cluster-name: target-cluster
cluster.x-k8s.io/control-plane: ""
managedFields:
- apiVersion: cluster.x-k8s.io/v1alpha3
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:controlplane.cluster.x-k8s.io/kubeadm-cluster-configuration: {}
f:finalizers:
.: {}
v:"machine.cluster.x-k8s.io": {}
f:labels:
.: {}
f:cluster.x-k8s.io/cluster-name: {}
f:cluster.x-k8s.io/control-plane: {}
f:ownerReferences:
.: {}
k:{"uid":"3725bf20-57dd-4841-adab-590b7c466769"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:spec:
.: {}
f:bootstrap:
.: {}
f:configRef:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:namespace: {}
f:uid: {}
f:dataSecretName: {}
f:clusterName: {}
f:infrastructureRef:
.: {}
f:kind: {}
f:name: {}
f:namespace: {}
f:uid: {}
f:providerID: {}
f:version: {}
manager: clusterctl
operation: Update
time: "2021-08-23T11:56:24Z"
- apiVersion: cluster.x-k8s.io/v1alpha3
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"a3d07a73-7f41-4fe2-98e6-995a87c9562b"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:status:
.: {}
f:addresses: {}
f:bootstrapReady: {}
f:infrastructureReady: {}
f:lastUpdated: {}
f:nodeRef:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:uid: {}
f:phase: {}
manager: manager
operation: Update
time: "2021-08-23T11:56:28Z"
- apiVersion: cluster.x-k8s.io/v1alpha4
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:infrastructureRef:
f:apiVersion: {}
f:status:
f:conditions: {}
f:nodeInfo:
.: {}
f:architecture: {}
f:bootID: {}
f:containerRuntimeVersion: {}
f:kernelVersion: {}
f:kubeProxyVersion: {}
f:kubeletVersion: {}
f:machineID: {}
f:operatingSystem: {}
f:osImage: {}
f:systemUUID: {}
f:observedGeneration: {}
manager: manager
operation: Update
time: "2021-08-23T12:25:13Z"
name: cluster-controlplane-88vks
namespace: target-infra
ownerReferences:
- apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
blockOwnerDeletion: true
controller: true
kind: KubeadmControlPlane
name: cluster-controlplane
uid: 3725bf20-57dd-4841-adab-590b7c466769
resourceVersion: "21608"
selfLink: /apis/cluster.x-k8s.io/v1alpha3/namespaces/target-infra/machines/cluster-controlplane-88vks
uid: f1b1d2df-9db6-457d-92dc-79369f9faee5
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfig
name: cluster-controlplane-xbl2n
namespace: target-infra
uid: f6b7d6e1-e6da-45dd-a4ed-7096c266747a
dataSecretName: cluster-controlplane-xbl2n
clusterName: target-cluster
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha5
kind: Metal3Machine
name: cluster-controlplane-gplqq
namespace: target-infra
uid: b023caae-40fa-40df-bd3c-da7e052089d2
providerID: metal3://110b1266-d094-41f2-a35c-54d1eead09f5
version: v1.19.1
status:
addresses:
- address: fe80::5054:ff:fe9b:274c%ens3
type: InternalIP
- address: 10.23.24.245
type: InternalIP
- address: ubuntu
type: Hostname
- address: ubuntu
type: InternalDNS
bootstrapReady: true
conditions:
- lastTransitionTime: "2021-08-23T11:56:28Z"
status: "True"
type: Ready
- lastTransitionTime: "2021-08-23T12:25:12Z"
status: "True"
type: APIServerPodHealthy
- lastTransitionTime: "2021-08-23T11:56:26Z"
status: "True"
type: BootstrapReady
- lastTransitionTime: "2021-08-23T12:25:12Z"
status: "True"
type: ControllerManagerPodHealthy
- lastTransitionTime: "2021-08-23T12:25:13Z"
status: "True"
type: EtcdMemberHealthy
- lastTransitionTime: "2021-08-23T12:25:12Z"
status: "True"
type: EtcdPodHealthy
- lastTransitionTime: "2021-08-23T11:56:28Z"
status: "True"
type: InfrastructureReady
- lastTransitionTime: "2021-08-23T12:24:44Z"
status: "True"
type: NodeHealthy
- lastTransitionTime: "2021-08-23T12:25:12Z"
status: "True"
type: SchedulerPodHealthy
infrastructureReady: true
lastUpdated: "2021-08-23T11:56:28Z"
nodeRef:
apiVersion: v1
kind: Node
name: node01
uid: e280cf86-a4f5-4ca4-94a9-408a24284b06
observedGeneration: 2
phase: Running
```
</details>
<details>
<summary>capi object v1alpha4</summary>
```
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Machine
metadata:
annotations:
cluster.x-k8s.io/conversion-data: '{"apiVersion":"cluster.x-k8s.io/v1alpha4","kind":"Machine","spec":{"bootstrap":{"configRef":{"apiVersion":"bootstrap.cluster.x-k8s.io/v1alpha3","kind":"KubeadmConfig","name":"cluster-controlplane-xbl2n","namespace":"target-infra","uid":"f6b7d6e1-e6da-45dd-a4ed-7096c266747a"},"dataSecretName":"cluster-controlplane-xbl2n"},"clusterName":"target-cluster","infrastructureRef":{"apiVersion":"infrastructure.cluster.x-k8s.io/v1alpha5","kind":"Metal3Machine","name":"cluster-controlplane-gplqq","namespace":"target-infra","uid":"b023caae-40fa-40df-bd3c-da7e052089d2"},"providerID":"metal3://110b1266-d094-41f2-a35c-54d1eead09f5","version":"v1.19.1"},"status":{"addresses":[{"address":"fe80::5054:ff:fe9b:274c%ens3","type":"InternalIP"},{"address":"10.23.24.245","type":"InternalIP"},{"address":"ubuntu","type":"Hostname"},{"address":"ubuntu","type":"InternalDNS"}],"bootstrapReady":true,"conditions":[{"lastTransitionTime":"2021-08-23T11:56:28Z","status":"True","type":"Ready"},{"lastTransitionTime":"2021-08-23T12:25:12Z","status":"True","type":"APIServerPodHealthy"},{"lastTransitionTime":"2021-08-23T11:56:26Z","status":"True","type":"BootstrapReady"},{"lastTransitionTime":"2021-08-23T12:25:12Z","status":"True","type":"ControllerManagerPodHealthy"},{"lastTransitionTime":"2021-08-23T12:25:13Z","status":"True","type":"EtcdMemberHealthy"},{"lastTransitionTime":"2021-08-23T12:25:12Z","status":"True","type":"EtcdPodHealthy"},{"lastTransitionTime":"2021-08-23T11:56:28Z","status":"True","type":"InfrastructureReady"},{"lastTransitionTime":"2021-08-23T12:24:44Z","status":"True","type":"NodeHealthy"},{"lastTransitionTime":"2021-08-23T12:25:12Z","status":"True","type":"SchedulerPodHealthy"}],"infrastructureReady":true,"lastUpdated":"2021-08-23T11:56:28Z","nodeInfo":{"architecture":"amd64","bootID":"a3cebc32-a87f-43d6-b3c1-569e5011ab6f","containerRuntimeVersion":"containerd://1.4.9","kernelVersion":"5.4.0-81-generic","kubeProxyVersion":"v1.19.1","kubeletVersion":
"v1.19.1","machineID":"89eb7881568c4d00a3ec249da0dfc6a1","operatingSystem":"linux","osImage":"Ubuntu
20.04.2 LTS","systemUUID":"89eb7881-568c-4d00-a3ec-249da0dfc6a1"},"nodeRef":{"apiVersion":"v1","kind":"Node","name":"node01","uid":"e280cf86-a4f5-4ca4-94a9-408a24284b06"},"observedGeneration":2,"phase":"Running"}}'
controlplane.cluster.x-k8s.io/kubeadm-cluster-configuration: '{"etcd":{},"networking":{"serviceSubnet":"10.0.0.0/20","podSubnet":"192.168.16.0/20","dnsDomain":"cluster.local"},"apiServer":{"extraArgs":{"allow-privileged":"true","authorization-mode":"Node,RBAC","enable-admission-plugins":"NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,NodeRestriction","kubelet-preferred-address-types":"InternalIP,ExternalIP,Hostname","requestheader-allowed-names":"front-proxy-client","requestheader-group-headers":"X-Remote-Group","requestheader-username-headers":"X-Remote-User","service-cluster-ip-range":"10.0.0.0/20","service-node-port-range":"80-32767","tls-cipher-suites":"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA","tls-min-version":"VersionTLS12","v":"2"},"timeoutForControlPlane":"16m40s"},"controllerManager":{"extraArgs":{"bind-address":"127.0.0.1","cluster-cidr":"192.168.16.0/20","configure-cloud-routes":"false","enable-hostpath-provisioner":"true","node-monitor-grace-period":"20s","node-monitor-period":"5s","pod-eviction-timeout":"60s","port":"0","terminated-pod-gc-threshold":"1000","use-service-account-credentials":"true","v":"2"}},"scheduler":{},"dns":{},"imageRepository":"k8s.gcr.io"}'
creationTimestamp: "2021-08-23T11:56:24Z"
finalizers:
- machine.cluster.x-k8s.io
generation: 2
labels:
cluster.x-k8s.io/cluster-name: target-cluster
cluster.x-k8s.io/control-plane: ""
managedFields:
- apiVersion: cluster.x-k8s.io/v1alpha3
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:controlplane.cluster.x-k8s.io/kubeadm-cluster-configuration: {}
f:finalizers:
.: {}
v:"machine.cluster.x-k8s.io": {}
f:labels:
.: {}
f:cluster.x-k8s.io/cluster-name: {}
f:cluster.x-k8s.io/control-plane: {}
f:ownerReferences:
.: {}
k:{"uid":"3725bf20-57dd-4841-adab-590b7c466769"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:spec:
.: {}
f:bootstrap:
.: {}
f:configRef:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:namespace: {}
f:uid: {}
f:dataSecretName: {}
f:clusterName: {}
f:infrastructureRef:
.: {}
f:kind: {}
f:name: {}
f:namespace: {}
f:uid: {}
f:providerID: {}
f:version: {}
manager: clusterctl
operation: Update
time: "2021-08-23T11:56:24Z"
- apiVersion: cluster.x-k8s.io/v1alpha3
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"a3d07a73-7f41-4fe2-98e6-995a87c9562b"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:status:
.: {}
f:addresses: {}
f:bootstrapReady: {}
f:infrastructureReady: {}
f:lastUpdated: {}
f:nodeRef:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:uid: {}
f:phase: {}
manager: manager
operation: Update
time: "2021-08-23T11:56:28Z"
- apiVersion: cluster.x-k8s.io/v1alpha4
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:infrastructureRef:
f:apiVersion: {}
f:status:
f:conditions: {}
f:nodeInfo:
.: {}
f:architecture: {}
f:bootID: {}
f:containerRuntimeVersion: {}
f:kernelVersion: {}
f:kubeProxyVersion: {}
f:kubeletVersion: {}
f:machineID: {}
f:operatingSystem: {}
f:osImage: {}
f:systemUUID: {}
f:observedGeneration: {}
manager: manager
operation: Update
time: "2021-08-23T12:25:13Z"
name: cluster-controlplane-88vks
namespace: target-infra
ownerReferences:
- apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
blockOwnerDeletion: true
controller: true
kind: KubeadmControlPlane
name: cluster-controlplane
uid: 3725bf20-57dd-4841-adab-590b7c466769
resourceVersion: "21608"
selfLink: /apis/cluster.x-k8s.io/v1alpha3/namespaces/target-infra/machines/cluster-controlplane-88vks
uid: f1b1d2df-9db6-457d-92dc-79369f9faee5
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfig
name: cluster-controlplane-xbl2n
namespace: target-infra
uid: f6b7d6e1-e6da-45dd-a4ed-7096c266747a
dataSecretName: cluster-controlplane-xbl2n
clusterName: target-cluster
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha5
kind: Metal3Machine
name: cluster-controlplane-gplqq
namespace: target-infra
uid: b023caae-40fa-40df-bd3c-da7e052089d2
providerID: metal3://110b1266-d094-41f2-a35c-54d1eead09f5
version: v1.19.1
status:
addresses:
- address: fe80::5054:ff:fe9b:274c%ens3
type: InternalIP
- address: 10.23.24.245
type: InternalIP
- address: ubuntu
type: Hostname
- address: ubuntu
type: InternalDNS
bootstrapReady: true
conditions:
- lastTransitionTime: "2021-08-23T11:56:28Z"
status: "True"
type: Ready
- lastTransitionTime: "2021-08-23T12:25:12Z"
status: "True"
type: APIServerPodHealthy
- lastTransitionTime: "2021-08-23T11:56:26Z"
status: "True"
type: BootstrapReady
- lastTransitionTime: "2021-08-23T12:25:12Z"
status: "True"
type: ControllerManagerPodHealthy
- lastTransitionTime: "2021-08-23T12:25:13Z"
status: "True"
type: EtcdMemberHealthy
- lastTransitionTime: "2021-08-23T12:25:12Z"
status: "True"
type: EtcdPodHealthy
- lastTransitionTime: "2021-08-23T11:56:28Z"
status: "True"
type: InfrastructureReady
- lastTransitionTime: "2021-08-23T12:24:44Z"
status: "True"
type: NodeHealthy
- lastTransitionTime: "2021-08-23T12:25:12Z"
status: "True"
type: SchedulerPodHealthy
infrastructureReady: true
lastUpdated: "2021-08-23T11:56:28Z"
nodeRef:
apiVersion: v1
kind: Node
name: node01
uid: e280cf86-a4f5-4ca4-94a9-408a24284b06
observedGeneration: 2
phase: Running
```
</details>
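While watching this transition, objects can be read at an explicit API version using kubectl's fully-qualified `resource.version.group` form, which removes any ambiguity about which version the server is converting to:
```
$ kubectl get machines.v1alpha3.cluster.x-k8s.io -A
$ kubectl get machines.v1alpha4.cluster.x-k8s.io -A
```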
3. The metal3 objects reference both v1alpha4 and v1alpha5; the transition to v1alpha5 also takes time.
<details>
<summary> capm3 object v1alpha4</summary>
```
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: Metal3Machine
metadata:
annotations:
cluster.x-k8s.io/cloned-from-groupkind: Metal3MachineTemplate.infrastructure.cluster.x-k8s.io
cluster.x-k8s.io/cloned-from-name: cluster-controlplane
metal3.io/BareMetalHost: target-infra/node01
creationTimestamp: "2021-08-23T11:56:25Z"
finalizers:
- metal3machine.infrastructure.cluster.x-k8s.io
generation: 1
labels:
cluster.x-k8s.io/cluster-name: target-cluster
cluster.x-k8s.io/control-plane: ""
managedFields:
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:cluster.x-k8s.io/cloned-from-groupkind: {}
f:cluster.x-k8s.io/cloned-from-name: {}
f:metal3.io/BareMetalHost: {}
f:finalizers:
.: {}
v:"metal3machine.infrastructure.cluster.x-k8s.io": {}
f:labels:
.: {}
f:cluster.x-k8s.io/cluster-name: {}
f:cluster.x-k8s.io/control-plane: {}
f:ownerReferences:
.: {}
k:{"uid":"3725bf20-57dd-4841-adab-590b7c466769"}:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:uid: {}
k:{"uid":"f1b1d2df-9db6-457d-92dc-79369f9faee5"}:
.: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:spec:
.: {}
f:hostSelector:
.: {}
f:matchLabels:
.: {}
f:airshipit.org/k8s-role: {}
f:image:
.: {}
f:checksum: {}
f:url: {}
f:providerID: {}
f:status:
f:userData:
.: {}
f:name: {}
f:namespace: {}
manager: clusterctl
operation: Update
time: "2021-08-23T11:56:25Z"
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"a3d07a73-7f41-4fe2-98e6-995a87c9562b"}:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:uid: {}
k:{"uid":"adf67276-a618-4839-a1d0-a80dba0a86d6"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:status:
.: {}
f:addresses: {}
f:ready: {}
manager: manager
operation: Update
time: "2021-08-23T12:24:17Z"
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha5
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"f1b1d2df-9db6-457d-92dc-79369f9faee5"}:
f:apiVersion: {}
manager: manager
operation: Update
time: "2021-08-23T12:24:51Z"
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha5
fieldsType: FieldsV1
fieldsV1:
f:status:
f:lastUpdated: {}
manager: cluster-api-provider-metal3-manager
operation: Update
time: "2021-08-23T12:25:13Z"
name: cluster-controlplane-gplqq
namespace: target-infra
ownerReferences:
- apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
name: cluster-controlplane
uid: 3725bf20-57dd-4841-adab-590b7c466769
- apiVersion: cluster.x-k8s.io/v1alpha4
blockOwnerDeletion: true
controller: true
kind: Machine
name: cluster-controlplane-88vks
uid: f1b1d2df-9db6-457d-92dc-79369f9faee5
resourceVersion: "21609"
selfLink: /apis/infrastructure.cluster.x-k8s.io/v1alpha4/namespaces/target-infra/metal3machines/cluster-controlplane-gplqq
uid: 084b03d6-2945-482e-995a-1dd6e56f960a
spec:
automatedCleaningMode: metadata
hostSelector:
matchLabels:
airshipit.org/k8s-role: controlplane-host
image:
checksum: http://10.23.24.101:80/images/control-plane.qcow2.md5sum
url: http://10.23.24.101:80/images/control-plane.qcow2
providerID: metal3://110b1266-d094-41f2-a35c-54d1eead09f5
status:
addresses:
- address: fe80::5054:ff:fe9b:274c%ens3
type: InternalIP
- address: 10.23.24.245
type: InternalIP
- address: ubuntu
type: Hostname
- address: ubuntu
type: InternalDNS
lastUpdated: "2021-08-23T12:25:13Z"
ready: true
```
</details>
<details>
<summary> capm3 object v1alpha5</summary>
```
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha5
kind: Metal3Machine
metadata:
annotations:
cluster.x-k8s.io/cloned-from-groupkind: Metal3MachineTemplate.infrastructure.cluster.x-k8s.io
cluster.x-k8s.io/cloned-from-name: cluster-controlplane
metal3.io/BareMetalHost: target-infra/node01
creationTimestamp: "2021-08-23T11:56:25Z"
finalizers:
- metal3machine.infrastructure.cluster.x-k8s.io
generation: 1
labels:
cluster.x-k8s.io/cluster-name: target-cluster
cluster.x-k8s.io/control-plane: ""
managedFields:
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:cluster.x-k8s.io/cloned-from-groupkind: {}
f:cluster.x-k8s.io/cloned-from-name: {}
f:metal3.io/BareMetalHost: {}
f:finalizers:
.: {}
v:"metal3machine.infrastructure.cluster.x-k8s.io": {}
f:labels:
.: {}
f:cluster.x-k8s.io/cluster-name: {}
f:cluster.x-k8s.io/control-plane: {}
f:ownerReferences:
.: {}
k:{"uid":"3725bf20-57dd-4841-adab-590b7c466769"}:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:uid: {}
k:{"uid":"f1b1d2df-9db6-457d-92dc-79369f9faee5"}:
.: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:spec:
.: {}
f:hostSelector:
.: {}
f:matchLabels:
.: {}
f:airshipit.org/k8s-role: {}
f:image:
.: {}
f:checksum: {}
f:url: {}
f:providerID: {}
f:status:
f:userData:
.: {}
f:name: {}
f:namespace: {}
manager: clusterctl
operation: Update
time: "2021-08-23T11:56:25Z"
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"a3d07a73-7f41-4fe2-98e6-995a87c9562b"}:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:uid: {}
k:{"uid":"adf67276-a618-4839-a1d0-a80dba0a86d6"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:status:
.: {}
f:addresses: {}
f:ready: {}
manager: manager
operation: Update
time: "2021-08-23T12:24:17Z"
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha5
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
k:{"uid":"f1b1d2df-9db6-457d-92dc-79369f9faee5"}:
f:apiVersion: {}
manager: manager
operation: Update
time: "2021-08-23T12:24:51Z"
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha5
fieldsType: FieldsV1
fieldsV1:
f:status:
f:lastUpdated: {}
manager: cluster-api-provider-metal3-manager
operation: Update
time: "2021-08-23T12:25:13Z"
name: cluster-controlplane-gplqq
namespace: target-infra
ownerReferences:
- apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
name: cluster-controlplane
uid: 3725bf20-57dd-4841-adab-590b7c466769
- apiVersion: cluster.x-k8s.io/v1alpha4
blockOwnerDeletion: true
controller: true
kind: Machine
name: cluster-controlplane-88vks
uid: f1b1d2df-9db6-457d-92dc-79369f9faee5
resourceVersion: "21609"
selfLink: /apis/infrastructure.cluster.x-k8s.io/v1alpha4/namespaces/target-infra/metal3machines/cluster-controlplane-gplqq
uid: 084b03d6-2945-482e-995a-1dd6e56f960a
spec:
automatedCleaningMode: metadata
hostSelector:
matchLabels:
airshipit.org/k8s-role: controlplane-host
image:
checksum: http://10.23.24.101:80/images/control-plane.qcow2.md5sum
url: http://10.23.24.101:80/images/control-plane.qcow2
providerID: metal3://110b1266-d094-41f2-a35c-54d1eead09f5
status:
addresses:
- address: fe80::5054:ff:fe9b:274c%ens3
type: InternalIP
- address: 10.23.24.245
type: InternalIP
- address: ubuntu
type: Hostname
- address: ubuntu
type: InternalDNS
lastUpdated: "2021-08-23T12:25:13Z"
ready: true
```
</details>
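One way to observe how far this conversion has progressed is to inspect which API versions the Metal3Machine CRD serves and stores. A minimal sketch, assuming a standard CAPM3 install, run against the management cluster:

```shell
# Which API versions the Metal3Machine CRD serves, and which one is storage:
kubectl get crd metal3machines.infrastructure.cluster.x-k8s.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{"\tserved="}{.served}{"\tstorage="}{.storage}{"\n"}{end}'

# Versions that still have objects persisted in etcd; after a clean
# transition this should converge to ["v1alpha5"]:
kubectl get crd metal3machines.infrastructure.cluster.x-k8s.io \
  -o jsonpath='{.status.storedVersions}{"\n"}'
```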
#### Tested scenarios:
* Tried changing the kubernetes version:
  * Updated the worker machine deployment templates -- Metal3MachineTemplate, MachineDeployment, Cluster -- in the manifests folder to point to v1alpha4 and re-ran `airshipctl phase run workers-target`.
  * A new Machine object was created instead of the existing one being updated, and the new object is stuck in the Provisioning state with a "no valid provider found" error:
```
kubectl get machines -A
NAMESPACE NAME PROVIDERID PHASE VERSION
target-infra cluster-controlplane-88vks metal3://110b1266-d094-41f2-a35c-54d1eead09f5 Running v1.19.1
target-infra worker-1-5bc87ff868-bqhnv metal3://ba22e722-0a62-4d3b-b617-7198ebcc7b5f Running v1.19.1
target-infra worker-1-95fd87f58-qlrgj Provisioning v1.19.2
```
* Tried changing the controller-manager's `node-monitor-period` to 30s, but the change is not reflected on the control-plane node
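For reference, the `node-monitor-period` attempt above amounts to patching the controller-manager `extraArgs` on the KubeadmControlPlane object. A minimal sketch, assuming the object name and namespace from this test environment:

```shell
# Set node-monitor-period=30s through the kubeadm ClusterConfiguration's
# controller-manager extraArgs on the KubeadmControlPlane object:
kubectl -n target-infra patch kubeadmcontrolplane cluster-controlplane \
  --type merge \
  -p '{"spec":{"kubeadmConfigSpec":{"clusterConfiguration":{"controllerManager":{"extraArgs":{"node-monitor-period":"30s"}}}}}}'
```

Whether KCP actually rolls this flag change out to the running control-plane node is exactly what this test found lacking.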
### Scenario 5: Minor version downgrade CAPI - target cluster on kubernetes v1.19.1
The target cluster is running kubernetes v1.19.1 with capi v0.4.0 and capm3 v0.5.0.
Downgrading a minor version is not currently supported:
* With clusterctl 0.3 version
```
clusterctl upgrade plan
Checking cert-manager version...
Cert-Manager is already up to date
Error: this version of clusterctl could be used only with "v1alpha3" management clusters, "v1alpha4" detected
```
* With clusterctl 0.4 version
```
clusterctl upgrade plan
Checking new release availability...
Latest release available for the v1alpha4 API Version of Cluster API (contract):
NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION
bootstrap-kubeadm capi-kubeadm-bootstrap-system BootstrapProvider v0.4.1 Already up to date
control-plane-kubeadm capi-kubeadm-control-plane-system ControlPlaneProvider v0.4.1 Already up to date
cluster-api capi-system CoreProvider v0.4.1 Already up to date
infrastructure-metal3 capm3-system InfrastructureProvider v0.5.0 Already up to date
You are already up to date!
```
Observations:
* Downgrading a minor version is not currently supported
## Docker provider
### Summary
* Patch version upgrade and downgrade work well for the capi and capd providers -- v0.3.7 to v0.3.22
* Minor version upgrade (v0.3.7 to v0.4.0) is successful, provided that:
  * capd is also upgraded along with the capi components
  * the capd pod is launched on the control-plane node with the Docker socket volume mounted
* There is little effect on the workloads, as the upgrade only updates the images of the capi and capd components
### Tested scenario
* After the upgrade, checked the objects for v1alpha4 references; all were updated properly
* Also tried creating another worker node, which booted up successfully
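The v1alpha4 reference check above can be repeated by looking at the stored versions of the relevant CRDs. A minimal sketch, assuming a standard CAPD-based install:

```shell
# For each core/CAPD CRD, print which API versions still have stored
# objects; after a clean upgrade these should converge to v1alpha4.
for crd in clusters.cluster.x-k8s.io machines.cluster.x-k8s.io \
           dockermachines.infrastructure.cluster.x-k8s.io; do
  printf '%s: ' "${crd}"
  kubectl get crd "${crd}" -o jsonpath='{.status.storedVersions}{"\n"}'
done
```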
## Integration of clusterctl with airshipctl
Since clusterctl is implemented as a KRM function, integrating the upgrade would require:
* Code changes in the pkg directory with the appropriate upgrade function calls, similar to the existing [init function](https://github.com/airshipit/airshipctl/blob/master/pkg/phase/executors/clusterctl.go#L216)
* Passing appropriate upgrade options, similar to the existing [init-options](https://github.com/airshipit/airshipctl/blob/master/manifests/function/clusterctl/clusterctl.yaml#L7), for example:
```diff
apiVersion: airshipit.org/v1alpha1
kind: Clusterctl
metadata:
labels:
airshipit.org/deploy-k8s: "false"
name: clusterctl_init
upgrade-options:
- core-provider: "cluster-api:v0.3.7"
- bootstrap-providers: "kubeadm:v0.3.7"
- infrastructure-providers: "metal3:v0.4.0"
- control-plane-providers: "kubeadm:v0.3.7"
+ core-provider: "capi-system/cluster-api:v0.3.22"
+ bootstrap-providers: "capi-kubeadm-bootstrap-system/kubeadm:v0.3.22"
+ infrastructure-providers: "capm3-system/metal3:v0.4.0"
+ control-plane-providers: "capi-kubeadm-control-plane-system/kubeadm:v0.3.22"
clusterctl: v0.3
management-group: capi-system/cluster-api
providers:
- name: "metal3"
type: "InfrastructureProvider"
url: airshipctl/manifests/function/capm3/v0.4.0
- name: "kubeadm"
type: "BootstrapProvider"
- url: airshipctl/manifests/function/cabpk/v0.3.7
+ url: airshipctl/manifests/function/cabpk/v0.3.22
- name: "cluster-api"
type: "CoreProvider"
- url: airshipctl/manifests/function/capi/v0.3.7
+ url: airshipctl/manifests/function/capi/v0.3.22
- name: "kubeadm"
type: "ControlPlaneProvider"
- url: airshipctl/manifests/function/cacpk/v0.3.7
+ url: airshipctl/manifests/function/cacpk/v0.3.22
```
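For comparison, these options roughly map to the following direct clusterctl invocation (flag names as in the v0.3 CLI; verify against the binary actually in use):

```shell
# Apply the component upgrade with explicit provider versions, pinned to
# the values from the config sketch above.
clusterctl upgrade apply \
  --management-group capi-system/cluster-api \
  --core capi-system/cluster-api:v0.3.22 \
  --bootstrap capi-kubeadm-bootstrap-system/kubeadm:v0.3.22 \
  --control-plane capi-kubeadm-control-plane-system/kubeadm:v0.3.22 \
  --infrastructure capm3-system/metal3:v0.4.0
```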
### Challenges:
* The main challenge is using the appropriate clusterctl binary for the upgrade:
  * A patch version upgrade requires the corresponding clusterctl v0.3.x or v0.4.x binary
  * A minor version upgrade requires the clusterctl v0.4.x binary
  * A patch version upgrade with the v0.3.x clusterctl additionally required the `--management-group` flag
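A wrapper that picks the binary by target version could be sketched as follows; the binary paths and naming scheme are assumptions for illustration, not part of airshipctl today:

```shell
# Hypothetical helper: map a target core-provider version to the
# clusterctl binary that speaks the matching API contract.
select_clusterctl() {
  case "$1" in
    v0.3.*) echo "/usr/local/bin/clusterctl-v0.3" ;;
    v0.4.*) echo "/usr/local/bin/clusterctl-v0.4" ;;
    *) echo "unsupported version: $1" >&2; return 1 ;;
  esac
}

# e.g. a patch upgrade within the v0.3 series:
select_clusterctl "v0.3.22"
```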