# k8s minimal cluster config
## Prerequisites
Flatcar configuration: https://suraj.io/post/2021/01/kubeadm-flatcar/
PhotonOS configuration: read below
### Kubeadm, kubectl, kubelet
First we need to install the packages required for Kubernetes cluster initialization:
```bash
tdnf install ebtables ethtool socat conntrack
```
Then run the script below to install kubeadm, kubectl and kubelet:
```bash
#!/usr/bin/env bash
CNI_VERSION="v0.8.2"
ARCH="amd64"
sudo mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz
DOWNLOAD_DIR=/usr/local/bin
sudo mkdir -p $DOWNLOAD_DIR
CRICTL_VERSION="v1.22.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
RELEASE="v1.22.6"
ARCH="amd64"
cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
sudo chmod +x {kubeadm,kubelet,kubectl}
RELEASE_VERSION="v0.4.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl enable --now kubelet
```
### Install Helm
```bash
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```
### Ports
First you need to open ports 80, 443, 6443 and 10248-10250 for Kubernetes and Rancher:
```bash
sudo iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 10250 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 10249 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 10248 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
```
Then save the iptables config so the rules persist:
```bash
iptables-save | sudo tee /etc/systemd/scripts/ip4save
```
### Docker config
Create and edit `/etc/docker/daemon.json` as below:
```json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
```
Execute:
```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
```
#### Possible fix for kubelet not being healthy:
If kubelet isn't healthy after executing the step above, disabling swap might help:
```bash
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
reboot
```
### Letting iptables see bridged traffic
Make sure that the `br_netfilter` module is loaded. This can be done by running `lsmod | grep br_netfilter`. To load it explicitly call `sudo modprobe br_netfilter`.
As a requirement for your Linux Node's `iptables` to correctly see bridged traffic, you should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config, e.g.
```bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
```
You should also enable packet forwarding. Set `net.ipv4.ip_forward = 1` in one of the `/etc/sysctl.d/` files; on PhotonOS, edit `/etc/sysctl.d/50-security-hardening.conf` and set `net.ipv4.ip_forward = 1` there.
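On PhotonOS this can be done with a one-liner (the same `sed` command is used in the Ansible playbook later in this document):
```bash
sudo sed -i "s/net.ipv4.ip_forward =.*/net.ipv4.ip_forward = 1/g" /etc/sysctl.d/50-security-hardening.conf
```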
Then update sysctl:
```bash
sudo sysctl --system
```
### Environment
For ease of use you should add the following line to `/etc/environment`:
```
KUBECONFIG=/etc/kubernetes/admin.conf
```
Then start a new session for the change to take effect.
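If you don't want to start a new session right away, you can also export the variable in the current shell:
```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
```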
## Cluster initial configuration
### Initialize cluster
On the master node, initialize the cluster with `kubeadm init`. **IMPORTANT:** If your node network overlaps with `192.168.0.0/16`, you need to alter your pod network CIDR, e.g. `kubeadm init --pod-network-cidr=172.16.0.0/12`.
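For example, on a single master node (this CIDR matches the Calico configuration used later in this guide):
```bash
# The output of kubeadm init also prints the `kubeadm join ...` command for worker nodes
sudo kubeadm init --pod-network-cidr=172.16.0.0/12
```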
### Networking plugin
In this step the Calico networking plugin will be installed. First install the Tigera operator and the Calico CRDs:
```bash
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
```
Then download and edit the custom resources. Set `spec.calicoNetwork.ipPools.cidr` to match the CIDR you set in the `kubeadm init` step.
```bash
curl -o calico.yaml https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
```
Edit the CIDR, e.g.:
```yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.22/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 172.16.0.0/12
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.22/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
```
And apply the Calico networking plugin:
```bash
kubectl create -f calico.yaml
```
Wait for the Calico pods to become ready, i.e. until each pod has a `STATUS` of `Running`:
```bash
watch kubectl get pods -n calico-system
```
**IMPORTANT:** Remove the taints on the master so that you can schedule pods on it.
```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```
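As a quick sanity check, verify that the node reports `Ready` and that all system pods reach `Running`:
```bash
kubectl get nodes -o wide
kubectl get pods -A
```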
Done, your cluster is operational. In the next chapter you can install Rancher to manage your clusters.
## Rancher
### Cert-manager installation
In this step you will install cert-manager in your Kubernetes cluster.
First add the Helm repository for the cert-manager chart:
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
```
There are two ways of installing cert-manager and its CRDs.
The first installs the CRDs separately with `kubectl apply`:
```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.7.1
```
Or with the chart's `installCRDs` option:
```bash
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.7.1 \
--set installCRDs=true
```
### Rancher installation (Rancher generated certificates)
Add helm repo:
```bash
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
```
Create namespace:
```bash
kubectl create namespace cattle-system
```
Install the Rancher stable chart with Helm (set `hostname` to match the host's domain name):
```bash
helm install rancher rancher-stable/rancher \
--namespace cattle-system \
--set hostname=cluster-1.test.modino.cloud \
--set bootstrapPassword=admin
```
Finally, wait for Rancher to start:
```bash
kubectl -n cattle-system rollout status deploy/rancher
```
Now you can edit the Rancher service to expose it on an external IP:
```bash
kubectl edit service rancher -n cattle-system
```
And then add these lines under `spec`:
```yaml
externalIPs:
- 192.168.66.101
```
Now access your host in a browser, e.g.
```
https://cluster-1.test.modino.cloud/
```
### Adding a cluster with the Rancher agent
First follow the steps: [Prerequisites](https://hackmd.io/HtjMLGtcTeSwSVQSP-pEQw?both#Prerequisites), [Cluster initial configuration](https://hackmd.io/HtjMLGtcTeSwSVQSP-pEQw?both#Cluster-initial-configuration)
After that, use the Rancher UI to join the cluster:
- select `Import existing` on the homepage
- select `Generic` under "Import any Kubernetes cluster"
- name the cluster and provide initial configuration if needed
- copy the command from the "Registration" tab and execute it on the worker cluster
### Joining new nodes to cluster
First follow the steps: [Prerequisites](https://hackmd.io/HtjMLGtcTeSwSVQSP-pEQw?both#Prerequisites)
Execute `kubeadm token create --print-join-command` on the master node and run the resulting command on the worker node, e.g.:
```bash
kubeadm join 192.168.66.101:6443 --token hul03w.sd3le61hmeo10uiy --discovery-token-ca-cert-hash sha256:01106bd1fad1f436dff29a71cd41fc31b3102524d50e0517098fc42fba2442a7
```
### More about clusters using self-signed certificates
When using self-signed certs you might see errors like:
```
“Certificate chain is not complete, please check if all needed intermediate certificates are included in the server certificate (in the correct order) and if the cacerts setting in Rancher either contains the correct CA certificate (in the case of using self signed certificates) or is empty (in the case of using a certificate signed by a recognized CA). Certificate information is displayed above. error: Get “https://192.168.105.200\“: x509: certificate signed by unknown authority (possibly because of “x509: ECDSA verification failure” while trying to verify candidate authority certificate “dynamiclistener-ca”)” in RKE1 cluster
OR
“Configured cacerts checksum (1382944946dbe8c6faf7d0bd6d33d6593f3416579e75efa6ad852c2e24453016) does not match given --ca-checksum (543edb437be8e3b68c60bb09fc27bde24f26ce62bec2e44e182681c2df6ed06b)” in RKE2 cluster
```
The fix is to replace `tls-rancher-internal-ca`'s `tls.crt` and `tls.key` with `tls-rancher`'s `tls.crt` and `tls.key`.
First copy `tls-rancher`'s `tls.crt` and `tls.key`:
```bash
kubectl edit secret tls-rancher -n cattle-system
```
And replace `tls-rancher-internal-ca`'s `tls.crt` and `tls.key` with the copied data:
```bash
kubectl edit secret tls-rancher-internal-ca -n cattle-system
```
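If you prefer not to edit the secrets by hand, a sketch of the same replacement using `kubectl get -o jsonpath` and `kubectl patch` could look like this:
```bash
# Read tls-rancher's certificate and key (already base64-encoded inside the secret)
CRT=$(kubectl -n cattle-system get secret tls-rancher -o jsonpath='{.data.tls\.crt}')
KEY=$(kubectl -n cattle-system get secret tls-rancher -o jsonpath='{.data.tls\.key}')
# Overwrite tls-rancher-internal-ca's tls.crt and tls.key with the copied data
kubectl -n cattle-system patch secret tls-rancher-internal-ca \
  -p "{\"data\":{\"tls.crt\":\"$CRT\",\"tls.key\":\"$KEY\"}}"
```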
Then restart Rancher, Kubernetes, or the node.
# Creating deployment, service, ingress
## Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-deployment
  replicas: 3
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      imagePullSecrets:
      - name: <REGISTRY_SECRET_NAME>
      containers:
      - name: my-deployment
        image: <IMAGE>
        envFrom:
        - secretRef:
            name: <ENV_SECRETS_NAME>
        imagePullPolicy: Always
        ports:
        - containerPort: <PORT>
```
### Registry credentials
To use images from a private registry you have to provide credentials for that registry. To do so, put them in a secret:
```yaml
apiVersion: v1
data:
  .dockerconfigjson: <DOCKER_CREDENTIALS>
kind: Secret
metadata:
  name: my-registry-secret
  namespace: my-namespace
type: kubernetes.io/dockerconfigjson
```
Where `<DOCKER_CREDENTIALS>` is the base64-encoded content of the credentials file (`~/.docker/config.json`) that is created after executing `docker login`.
More info: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
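The base64 value can be produced from that file, e.g. (GNU coreutils `base64`; `-w0` disables line wrapping):
```bash
base64 -w0 ~/.docker/config.json
```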
### Secrets
If your app needs secrets, you will have to create one. The secret and the deployment have to be in the same namespace.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-namespace
type: Opaque
data:
  SECRET_NAME: <SECRET_IN_BASE64>
```
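Values under `data` must be base64-encoded, for example:
```bash
echo -n 'my-secret-value' | base64
```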
## Service
To match a service to a deployment, you have to use the same selector.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  selector:
    app: my-deployment
  ports:
  - protocol: TCP
    port: <PORT>
    targetPort: <PORT>
```
## Ingress
To access the app from outside of the cluster you have to expose the service. One way of doing that is to configure an Ingress that will route outside traffic to services.
Below are two different tools that can act as an Ingress.
### NGINX Ingress
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-cluster-issuer # needed to get cert from issuer
  name: my-ingress
  namespace: my-namespace
spec:
  defaultBackend: # default redirect if the path doesn't match
    service:
      name: my-service
      port:
        number: 443
  ingressClassName: nginx
  rules:
  - host: <HOST eg. host.io>
    http:
      paths:
      - pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 443
        path: /
  tls:
  - hosts:
    - <HOST>
    secretName: <SECRET_CERT> # will be created by the issuer
```
To use TLS you have to have a certificate. In this example we use cert-manager, so we just have to create an issuer that will create a cert for us.
```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: default
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}
```
#### Additional notes
To route traffic to the right path you might want to use a regex. To do so you will have to add the annotation:
`nginx.ingress.kubernetes.io/use-regex: "true"`
It's also important which part of the path you want to pass on; for that you can use a rewrite, e.g.:
`nginx.ingress.kubernetes.io/rewrite-target: /$3`
See https://kubernetes.github.io/ingress-nginx/examples/rewrite/ for details.
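As an illustration only (the host, service name and path regex are placeholders, following the ingress-nginx rewrite example linked above, where `/$2` forwards everything captured after `/app`):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-rewrite-ingress
  namespace: my-namespace
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # /$2 forwards whatever the second capture group matched, i.e. everything after /app
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: <HOST>
    http:
      paths:
      - path: /app(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              number: 443
```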
### Istio
We need to create a VirtualService that references our service and uses the desired Gateway that receives the traffic.
To use TLS we need a certificate; in this example we will use cert-manager. First we need to create an issuer, and after that we can create a cert using that issuer.
#### Gateway
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: my-namespace
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-testcert
    hosts:
    - "*"
```
#### VirtualService
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-vs
  namespace: my-namespace
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: hello-service.hello-world.svc.cluster.local
        port:
          number: 80
```
#### Issuer
It's important to keep the namespace: `istio-system`
```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: istio-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
  namespace: istio-system
spec:
  selfSigned: {}
```
#### Cert
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-testcert
  namespace: my-namespace
spec:
  secretName: my-testcert
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  dnsNames:
  - <eg. cluster-4.test.modino.cloud>
  issuerRef:
    name: selfsigned-cluster-issuer
    kind: ClusterIssuer
    group: cert-manager.io
```
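You can check whether cert-manager issued the certificate and created its secret:
```bash
kubectl -n my-namespace get certificate my-testcert
kubectl -n my-namespace get secret my-testcert
```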
# Ansible
## Hostfile
Example hostfile:
```yaml
nodes:
  hosts:
    192.168.66.102:
      ansible_user: ansible-test
      ansible_become_password: "{{ cluster2Pass }}"
clusters:
  hosts:
    192.168.66.104:
      ansible_user: ansible-test
      ansible_become_password: "{{ cluster4Pass }}"
```
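The playbooks below can then be run against this inventory, for example (the inventory and playbook file names here are placeholders; `vault.yml` is assumed to contain the `cluster2Pass`/`cluster4Pass` variables):
```bash
ansible-playbook -i hosts.yml initial-config.yml --ask-vault-pass
```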
## Playbooks
### Initial configurations
```yaml
- name: Setup initial config for k8s
  hosts: nodes
  become: yes
  vars_files:
    - vault.yml
  tasks:
    - name: Install ebtables ethtool socat conntrack
      ansible.builtin.shell:
        cmd: tdnf -y install ebtables ethtool socat conntrack
    - name: Install kubeadm kubectl kubelet
      ansible.builtin.script:
        cmd: ./kubeInstall.sh
    - name: Install Helm
      ansible.builtin.shell: |
        curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
        chmod 700 get_helm.sh
        ./get_helm.sh
      args:
        executable: /usr/bin/bash
    - name: Open ports
      ansible.builtin.shell: |
        sudo iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
        sudo iptables -A INPUT -p tcp --dport 10250 -j ACCEPT
        sudo iptables -A INPUT -p tcp --dport 10249 -j ACCEPT
        sudo iptables -A INPUT -p tcp --dport 10248 -j ACCEPT
        sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        iptables-save > /etc/systemd/scripts/ip4save
      args:
        executable: /usr/bin/bash
    - name: Setup docker config
      ansible.builtin.shell: |
        cat <<EOF | tee /etc/docker/daemon.json
        {
          "exec-opts": ["native.cgroupdriver=systemd"]
        }
        EOF
        sudo systemctl daemon-reload
        sudo systemctl restart docker
        sudo systemctl restart kubelet
      args:
        executable: /usr/bin/bash
    - name: Letting iptables see bridged traffic
      ansible.builtin.shell: |
        cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
        br_netfilter
        EOF
        cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
        net.bridge.bridge-nf-call-ip6tables = 1
        net.bridge.bridge-nf-call-iptables = 1
        EOF
      args:
        executable: /usr/bin/bash
    - name: Enable packet forwarding
      ansible.builtin.shell:
        cmd: sed -i "s/net.ipv4.ip_forward =.*/net.ipv4.ip_forward = 1/g" /etc/sysctl.d/50-security-hardening.conf
    - name: Update sysctl
      ansible.builtin.shell:
        cmd: sudo sysctl --system
```
#### kubeInstall.sh
```bash
#!/usr/bin/env bash
CNI_VERSION="v0.8.2"
ARCH="amd64"
sudo mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz
DOWNLOAD_DIR=/usr/local/bin
sudo mkdir -p $DOWNLOAD_DIR
CRICTL_VERSION="v1.22.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
RELEASE="v1.22.6"
ARCH="amd64"
cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
sudo chmod +x {kubeadm,kubelet,kubectl}
RELEASE_VERSION="v0.4.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl enable --now kubelet
```
### Minimal cluster config
```yaml
- name: Setup minimal cluster
  hosts: nodes
  become: yes
  vars_files:
    - vault.yml
  tasks:
    - name: Initialize cluster
      ansible.builtin.shell:
        cmd: kubeadm init --pod-network-cidr=172.16.0.0/12
    - name: Environment config
      ansible.builtin.shell:
        cmd: echo "KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/environment
    - name: reset ssh connection
      meta: reset_connection
    - name: Remove taint from master
      ansible.builtin.shell:
        cmd: kubectl taint nodes --all node-role.kubernetes.io/master-
    - name: install networking plugin
      ansible.builtin.shell: |
        kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
        cat <<EOF | sudo tee ./calico.yaml
        apiVersion: operator.tigera.io/v1
        kind: Installation
        metadata:
          name: default
        spec:
          calicoNetwork:
            ipPools:
            - blockSize: 26
              cidr: 172.16.0.0/12
              encapsulation: VXLANCrossSubnet
              natOutgoing: Enabled
              nodeSelector: all()
        ---
        apiVersion: operator.tigera.io/v1
        kind: APIServer
        metadata:
          name: default
        spec: {}
        EOF
        kubectl create -f calico.yaml
      args:
        executable: /usr/bin/bash
```
### Create cluster with rancher
```yaml
- name: Join node master
  hosts: nodes
  become: yes
  vars_files:
    - vault.yml
  tasks:
    - name: Install cert-manager
      ansible.builtin.shell: |
        helm repo add jetstack https://charts.jetstack.io
        helm repo update
        kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml
        helm install --wait \
          cert-manager jetstack/cert-manager \
          --namespace cert-manager \
          --create-namespace \
          --version v1.7.1
    - name: Install Rancher
      ansible.builtin.shell: |
        helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
        kubectl create namespace cattle-system
        helm install rancher rancher-stable/rancher \
          --namespace cattle-system \
          --set hostname=cluster-2.test.modino.cloud \
          --set bootstrapPassword=admin
```
### Create and join cluster
```yaml
- name: Join node master
  hosts: nodes
  become: yes
  vars_files:
    - vault.yml
  tasks:
    - name: Get command to join cluster
      ansible.builtin.script:
        cmd: ./joinCluster.sh "{{ RANCHER_USERNAME }}" "{{ RANCHER_PASSWORD }}"
```
#### ./joinCluster.sh
```bash
#!/bin/bash
RANCHER_URL=https://cluster-1.test.modino.cloud
USERNAME=$1
PASSWORD=$2
NAME=cluster-ansible-test
# Log in to Rancher and extract the session token
TOKEN=$(curl $RANCHER_URL/v3-public/localProviders/local?action=login -k -c - \
  -H 'Content-Type: application/json' \
  -d '{"description":"UI session","responseType":"body","username":"'$USERNAME'","password":"'$PASSWORD'"}')
TOKEN=$(echo $TOKEN | jq -r ".token")
# Create the imported cluster object
curl --cookie 'R_LOCALE=en-us;R_REDIRECTED=true;R_SESS='$TOKEN'' $RANCHER_URL/v1/provisioning.cattle.io.clusters -k \
  -H 'Content-Type: application/json' \
  -d '{"type":"provisioning.cattle.io.cluster","metadata":{"namespace":"fleet-default","name":"'$NAME'"},"spec":{}}'
sleep 15
# Look up the ID of the cluster we just created
JSON=$(curl --cookie 'R_LOCALE=en-us;R_REDIRECTED=true;R_SESS='$TOKEN'' \
  $RANCHER_URL/v3/clusters -k \
  -H 'Content-Type: application/json')
for row in $(echo "${JSON}" | jq -r '.data[] | @base64'); do
  _jq() {
    echo ${row} | base64 --decode | jq -r ${1}
  }
  if [ "$NAME" == "$(_jq '.name')" ]; then
    ID=$(_jq '.id')
  fi
done
# Find the registration command for that cluster and run it on this node
JSON=$(curl --cookie 'R_LOCALE=en-us;R_REDIRECTED=true;R_SESS='$TOKEN'' \
  $RANCHER_URL/v3/clusterregistrationtoken -k \
  -H 'Content-Type: application/json')
for row in $(echo "${JSON}" | jq -r '.data[] | @base64'); do
  _jq() {
    echo ${row} | base64 --decode | jq -r ${1}
  }
  if [ "$ID" == "$(_jq '.clusterId')" ]; then
    COMMAND=$(_jq '.insecureCommand')
  fi
done
eval ${COMMAND}
```
### Join node
```yaml
- name: Join node master
  hosts: clusters
  become: yes
  vars_files:
    - vault.yml
  tasks:
    - name: Get command
      ansible.builtin.shell:
        cmd: kubeadm token create --print-join-command
      register: kubeadm_join
    - name: Set fact join_command
      set_fact:
        join_command: "{{ kubeadm_join.stdout_lines[0] }}"
- name: Join node slave
  hosts: nodes
  become: yes
  vars_files:
    - vault.yml
  tasks:
    - name: Join cluster as node
      ansible.builtin.shell:
        cmd: "{{ hostvars[groups['clusters'][0]]['join_command'] }}"
```
Hee-Hee! (Well, I hope it will work...)


