Testing of the Calico dual-stack manifests (https://review.opendev.org/c/airship/treasuremap/+/807721) for use with Airship.
Since no bare-metal IPv6 setup was available, the test was done on a private IPv6 network, using a VM as the router (running radvd for IPv6 prefix distribution).
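As a rough sketch of that router VM (the downstream interface name eth1 and the 2001:db8:42::/64 prefix are only illustrative assumptions, not values taken from the test network), the radvd side boils down to:

cat > /etc/radvd.conf <<'EOF'
# Advertise the private IPv6 prefix on the downstream interface so the
# AIAP/target VMs autoconfigure their addresses via SLAAC.
interface eth1
{
    AdvSendAdvert on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    prefix 2001:db8:42::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
        AdvRouterAddr on;
    };
};
EOF
sysctl -w net.ipv6.conf.all.forwarding=1   # the router VM must forward IPv6
systemctl restart radvd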
Though the manifests are slated for Treasuremap, AIAP works fine with airshipctl, so the manifests were moved to airshipctl and tested using AIAP with the setup below.
Test of Calico dual-stack with multiple VMs in AIAP
PS = https://review.opendev.org/c/airship/airshipctl/+/810449
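For context, enabling dual-stack in an operator-managed Calico deployment amounts to defining an IPv6 IP pool alongside the IPv4 one in the tigera-operator Installation resource; a minimal sketch of that idea (the CIDR and encapsulation values are illustrative, not taken from the PS):

cat <<'EOF' | kubectl apply -f -
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    # IPv4 pool (illustrative values)
    - cidr: 192.168.0.0/16
      encapsulation: IPIP
      natOutgoing: Enabled
    # IPv6 pool (illustrative values)
    - cidr: 2001:db8:42:59::/64
      encapsulation: None
      natOutgoing: Enabled
EOF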
The dual-stack PS also relies on the PS that uplifts Kubernetes to 1.21.2: https://review.opendev.org/c/airship/airshipctl/+/802771
It was observed that, in spite of using the Kubernetes 1.21.2 qcow bundle, the resulting target nodes ran v1.19.14, as shown in the test results below.
The workaround was to enable the feature gate IPv6DualStack=true. This would not have been necessary if the 1.21.2 qcow image had worked, since dual-stack is on by default from Kubernetes 1.21 onward.
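Expressed as a plain kubeadm ClusterConfiguration, the workaround looks roughly like this (a sketch only: the CIDRs are illustrative, the file name is arbitrary, and in the airshipctl manifests the same settings travel through Cluster API resources such as KubeadmControlPlane rather than a standalone file):

cat <<'EOF' > clusterconfig-dualstack-sketch.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
featureGates:
  IPv6DualStack: true            # the workaround needed on v1.19.x nodes
networking:
  # dual-stack pod and service CIDRs, IPv4 family listed first (illustrative values)
  podSubnet: 192.168.0.0/16,2001:db8:42:59::/64
  serviceSubnet: 10.96.0.0/12,fd00:10:96::/112
EOF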
The setup below was deployed using Vagrant, with the private IPv6 network provided by radvd (the router advertisement daemon) as described above.
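Once a node is attached to that network, it can be checked that it picked up a SLAAC address and a default route from radvd along these lines (the interface name ens3 is an assumption):

ip -6 addr show dev ens3 scope global   # SLAAC address derived from the advertised prefix
ip -6 route show default                # default route via the router VM's link-local address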
root@airship-in-a-pod:~# virsh list --all
 Id   Name            State
--------------------------------
 4    air-target-1    running
 9    air-worker-2    running
 10   air-worker-1    running
 -    air-ephemeral   shut off
root@airship-in-a-pod:~# ktl get nodes -o wide
NAME     STATUS   ROLES    AGE     VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
node01   Ready    master   5h5m    v1.19.14   10.23.25.102   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   containerd://1.4.11
node03   Ready    worker   4h37m   v1.19.14   10.23.25.103   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   containerd://1.4.11
node04   Ready    worker   4h37m   v1.19.14   10.23.25.104   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   containerd://1.4.11
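A quick way to confirm that each node was assigned both an IPv4 and an IPv6 pod CIDR (a suggested check; this output was not captured during the run):

kubectl get node node01 -o go-template --template='{{range .spec.podCIDRs}}{{printf "%s\n" .}}{{end}}'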
root@airship-in-a-pod:~# ktl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-system calico-kube-controllers-7688c757d7-r2pg4 1/1 Running 0 5h3m
calico-system calico-node-5xpc4 1/1 Running 0 4h36m
calico-system calico-node-f786j 1/1 Running 0 4h36m
calico-system calico-node-vvxnn 1/1 Running 0 5h3m
calico-system calico-typha-5559987c7d-bdwtb 1/1 Running 0 5h3m
calico-system calico-typha-5559987c7d-cwrh4 1/1 Running 0 4h35m
calico-system calico-typha-5559987c7d-d2qpx 1/1 Running 0 4h35m
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-66785d8c6-pq4kn 1/1 Running 0 4h56m
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-58bb994868-qgdjb 1/1 Running 0 4h56m
capi-system capi-controller-manager-5444bcc564-qlk9w 1/1 Running 0 4h56m
capm3-system capm3-controller-manager-656795c95-ftw84 1/1 Running 0 4h56m
capm3-system capm3-ipam-controller-manager-55cfffcdd8-8v5rw 1/1 Running 0 4h56m
cert-manager cert-manager-768bf64dd4-tcpx4 1/1 Running 0 5h3m
cert-manager cert-manager-cainjector-646879549c-wrnw6 1/1 Running 0 5h3m
cert-manager cert-manager-webhook-5fc6b7588-tvlqh 1/1 Running 0 5h3m
default myshell 1/1 Running 1 112m
flux-system helm-controller-7fcd8847c4-w2ksc 1/1 Running 0 5h1m
flux-system source-controller-865c5758f5-hcwwm 1/1 Running 0 5h1m
hardware-classification hardware-classification-controller-manager-57d954ff86-flk8b 2/2 Running 0 5h1m
ingress ingress-ingress-nginx-controller-686dc8f4cb-xqkmm 1/1 Running 0 4h35m
ingress ingress-ingress-nginx-defaultbackend-9bc7fd8c-g4q7m 1/1 Running 0 4h35m
kube-system coredns-5d78b786c6-b9qn5 1/1 Running 0 5h4m
kube-system coredns-5d78b786c6-r6g69 1/1 Running 0 5h4m
kube-system etcd-node01 1/1 Running 0 5h4m
kube-system kube-apiserver-node01 1/1 Running 0 5h4m
kube-system kube-controller-manager-node01 1/1 Running 0 5h4m
kube-system kube-proxy-gxbfw 1/1 Running 0 4h36m
kube-system kube-proxy-qfmwf 1/1 Running 0 5h4m
kube-system kube-proxy-vz7fs 1/1 Running 0 4h36m
kube-system kube-scheduler-node01 1/1 Running 0 5h4m
metal3 baremetal-operator-controller-manager-79577784f5-pwb74 2/2 Running 0 5h1m
metal3 capm3-ironic-5fb88cdc84-xmdwp 7/7 Running 0 5h1m
tigera-operator tigera-operator-5b76777d49-g8grg 1/1 Running 0 5h3m
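Likewise, the Calico IP pools created by the operator can be listed to confirm that both address families are present (a suggested check, not captured here; calicoctl get ippool -o wide shows the same if calicoctl is available):

kubectl get ippools.crd.projectcalico.org -o custom-columns='NAME:.metadata.name,CIDR:.spec.cidr'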
root@airship-in-a-pod:~# ktl get machines -A
NAMESPACE      NAME                         PROVIDERID                                      PHASE     VERSION
target-infra   cluster-controlplane-db4k8   metal3://a0312805-2cc3-4a0b-abf1-9a1ee06ce7b7   Running   v1.21.2
target-infra   worker-1-fbd468ccd-hzjps     metal3://aa30faa4-a92a-429e-a346-eed5c5fb37e8   Running   v1.19.14
target-infra   worker-1-fbd468ccd-msgkc     metal3://37edd0db-847c-4275-b66a-3bb3c9d08e9e   Running   v1.19.14
root@airship-in-a-pod:~# ktl get bmh -A
NAMESPACE      NAME     STATE         CONSUMER                     ONLINE   ERROR
target-infra   node01   provisioned   cluster-controlplane-7nct9   true
target-infra   node03   provisioned   worker-1-bml8q               true
target-infra   node04   provisioned   worker-1-pt2b4               true
Note that the IPs attribute in the pod description shows both an IPv4 and an IPv6 address.
root@airship-in-a-pod:~# ktl describe po myshell
Name: myshell
Namespace: default
Priority: 0
Node: node03/10.23.25.103
Start Time: Sun, 17 Oct 2021 08:27:01 +0000
Labels: run=myshell
Annotations: cni.projectcalico.org/podIP: 192.168.46.179/32
cni.projectcalico.org/podIPs: 192.168.46.179/32,2001:db8:42:59:3881:5dcd:877e:6632/128
Status: Running
IP: 192.168.46.179
IPs:
IP: 192.168.46.179
IP: 2001:db8:42:59:3881:5dcd:877e:6632
Containers:
myshell:
Container ID: containerd://3b6e152e69561bff30e08d62b8ca4a9186f6cd50775661dfc3ab9df841471ae2
Image: busybox
Image ID: docker.io/library/busybox@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57
Port: <none>
Host Port: <none>
Command:
sh
-c
sleep 3600
State: Running
Started: Sun, 17 Oct 2021 10:27:08 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 17 Oct 2021 09:27:06 +0000
Finished: Sun, 17 Oct 2021 10:27:06 +0000
Ready: True
Restart Count: 2
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qjwbf (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-qjwbf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 107s (x3 over 121m) kubelet, node03 Pulling image "busybox"
Normal Created 106s (x3 over 121m) kubelet, node03 Created container myshell
Normal Started 106s (x3 over 121m) kubelet, node03 Started container myshell
Normal Pulled 106s kubelet, node03 Successfully pulled image "busybox" in 1.30599128s
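Beyond the describe output, dual-stack reachability can be spot-checked from inside the pod; a sketch, assuming another pod (here calico-kube-controllers) carries its IPv6 address as the second entry in status.podIPs and that the busybox image ships ping6:

kubectl exec myshell -- ip -6 addr show dev eth0   # the pod itself sees its IPv6 address
OTHER_IP=$(kubectl get pod -n calico-system calico-kube-controllers-7688c757d7-r2pg4 \
  -o jsonpath='{.status.podIPs[1].ip}')
kubectl exec myshell -- ping6 -c 3 "$OTHER_IP"     # ping another pod over IPv6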