# ClusterAPI with OpenStack Provider
# Introduction
The following README details how to set up a simple scenario integrating **ClusterAPI** with the **OpenStack provider** to deploy virtual machines with **Kubernetes clusters** on demand.
# Credentials
OpenStack:
```
https://31.171.250.116
admin
bv6OCC8lVcPD9wjqxJy1Sz33z1TIZDlN
```
# Setup
- 1x VM with **Debian 11.5** running **ClusterAPI**
- 2x VMs with **Debian 11.5** running **Microstack** *hypervisor*
- ClusterAPI v1.2.4
- Microstack version *Ussuri* revision 245
# Requirements
Before installing the ClusterAPI framework, you should have a working k8s cluster.
In brief, each server currently has xx vCPUs (xx GHz), xx GB RAM and xx GB SSD.
Initial credentials: username `cloudsigma`, password `Cloud2022`.
Access through SSH:
```bash
ssh cloudsigma@<IP> -i identity.pem
```
Fix `sudo: unable to resolve host` by running this once on each machine:
```bash
echo "127.0.0.1 $HOSTNAME" | sudo tee -a /etc/hosts
```
(**optional**) Install useful packages
```bash
sudo apt-get update
sudo apt-get install -y tmux vim git bind9-utils
```
# ClusterAPI Setup
Refer to the [ClusterAPI v1.2.5 documentation and quick-start guide](https://cluster-api.sigs.k8s.io/user/quick-start.html).
Install Clusterctl
```
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.2.5/clusterctl-linux-amd64 -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl
```
Test to ensure the version you installed is up-to-date:
```
clusterctl version
```
ClusterAPI supports different providers. In this guide, we will be using the OpenStack provider.
You can deploy the needed infrastructure with the following command:
```
# Initialize the management cluster
clusterctl init --infrastructure openstack
```
You should see output similar to the following:
```
Fetching providers
Installing cert-manager Version="v1.9.1"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.0.0" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.0.0" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.0.0" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-docker" Version="v1.0.0" TargetNamespace="capd-system"
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
```
Then proceed with the OpenStack provider configuration as follows.
Go to the OpenStack UI at `https://<openstack-ip>/project/api_access/` and click to download your `clouds.yaml`.
Add the password to the `auth` section like this:
```
password: "YourOpenStackUIpasswd"
```
**You need a certificate to access the OpenStack services.**
This self-signed certificate is located on the *OpenStack VM* at **/var/snap/microstack/common/etc/ssl/certs/cacert.pem**. Copy it to the ClusterAPI machine, then add the certificate path to *clouds.yaml* like this:
```
cacert: <path-to-your-cert>.pem
```
You should end up with something like this:
```
# This is a clouds.yaml file, which can be used by OpenStack tools as a source
# of configuration on how to connect to a cloud. If this is your only cloud,
# just put this file in ~/.config/openstack/clouds.yaml and tools like
# python-openstackclient will just work with no further config. (You will need
# to add your password to the auth section)
# If you have more than one cloud account, add the cloud entry to the clouds
# section of your existing file and you can refer to them by name with
# OS_CLOUD=openstack or --os-cloud=openstack
clouds:
  openstack:
    auth:
      auth_url: https://31.171.250.116:5000/v3/
      username: "admin"
      password: "bv6OCC8lVcPD9wjqxJy1Sz33z1TIZDlN"
      project_id: a5656a1e09204be0bf65e27684ede8b8
      project_name: "admin"
      user_domain_name: "Default"
    region_name: "microstack"
    interface: "public"
    cacert: /home/cloudsigma/configs/cacert.pem
    identity_api_version: 3
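As a quick sanity check, the snippet below (a sketch; the `CLOUDS_YAML` path is an assumption, adjust it to where you saved the file) greps the file for the keys this guide relies on:

```shell
# Sketch: confirm clouds.yaml contains the keys edited above.
CLOUDS_YAML=${CLOUDS_YAML:-clouds.yaml}    # assumption: file in the current directory
for key in auth_url password cacert; do
  if grep -q "$key:" "$CLOUDS_YAML" 2>/dev/null; then
    echo "ok: $key"
  else
    echo "MISSING: $key"
  fi
done
```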
You can check all the environment variables needed with the following command:
```
clusterctl generate cluster --infrastructure openstack --list-variables capi-quickstart
Required Variables:
- KUBERNETES_VERSION
- OPENSTACK_CLOUD
- OPENSTACK_CLOUD_CACERT_B64
- OPENSTACK_CLOUD_PROVIDER_CONF_B64
- OPENSTACK_CLOUD_YAML_B64
- OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR
- OPENSTACK_DNS_NAMESERVERS
- OPENSTACK_EXTERNAL_NETWORK_ID
- OPENSTACK_FAILURE_DOMAIN
- OPENSTACK_IMAGE_NAME
- OPENSTACK_NODE_MACHINE_FLAVOR
- OPENSTACK_SSH_KEY_NAME
Optional Variables:
- CLUSTER_NAME (defaults to capi-quickstart)
- CONTROL_PLANE_MACHINE_COUNT (defaults to 1)
- WORKER_MACHINE_COUNT (defaults to 0)
```
The following script can be used to export some of the variables:
```
wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
source /tmp/env.rc <path/to/clouds.yaml> openstack
```
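For reference, the core of what `env.rc` does is base64-encode the files and export the results. The sketch below illustrates that (the paths are assumptions; `env.rc` additionally derives `OPENSTACK_CLOUD_PROVIDER_CONF_B64` from the auth fields, so prefer the script for the full set):

```shell
# Sketch of env.rc's main effect; the default paths here are assumptions.
CLOUDS_YAML=${CLOUDS_YAML:-$HOME/.config/openstack/clouds.yaml}
CACERT=${CACERT:-/home/cloudsigma/configs/cacert.pem}
export OPENSTACK_CLOUD=openstack
# Encode the files only if they exist on this machine.
[ -f "$CLOUDS_YAML" ] && export OPENSTACK_CLOUD_YAML_B64=$(base64 -w0 "$CLOUDS_YAML")
[ -f "$CACERT" ] && export OPENSTACK_CLOUD_CACERT_B64=$(base64 -w0 "$CACERT")
```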
Apart from the script, the following OpenStack environment variables are required. See the [provider configuration documentation](https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/main/docs/book/src/clusteropenstack/configuration.md) for details.
```
# The list of nameservers for the OpenStack subnet being created.
# Set this when you create a new network/subnet and access through DNS is required.
export OPENSTACK_DNS_NAMESERVERS=<dns nameserver>
# The failure domain (availability zone) the machine will be created in.
export OPENSTACK_FAILURE_DOMAIN=<availability zone name>
# The flavor for the control plane machines.
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=<flavor>
# The flavor for the worker machines.
export OPENSTACK_NODE_MACHINE_FLAVOR=<flavor>
# The name of the image to use for your server instance.
# If RootVolume is specified, this is ignored and rootVolume is used directly.
export OPENSTACK_IMAGE_NAME=<image name>
# The SSH key pair name.
export OPENSTACK_SSH_KEY_NAME=<ssh key pair name>
# The external network ID.
export OPENSTACK_EXTERNAL_NETWORK_ID=<external network ID>
```
A working example follows:
```
# The list of nameservers for the OpenStack subnet being created.
# Set this when you create a new network/subnet and access through DNS is required.
export OPENSTACK_DNS_NAMESERVERS=1.1.1.1
# The failure domain (availability zone) the machine will be created in.
# microstack.openstack availability zone list
export OPENSTACK_FAILURE_DOMAIN=nova
# The flavor for the control plane machines.
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=m1.tiny
# The flavor for the worker machines.
export OPENSTACK_NODE_MACHINE_FLAVOR=m1.tiny
# The name of the image to use for your server instance.
# microstack.openstack image list
export OPENSTACK_IMAGE_NAME=cirros
# The SSH key pair name.
# microstack.openstack keypair create cs-clusterapi-openstack1
export OPENSTACK_SSH_KEY_NAME=cs-clusterapi-openstack1
# The external network ID.
export OPENSTACK_EXTERNAL_NETWORK_ID=95cbd797-80ac-46a7-9a5d-5e8ca52b38e7
```
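Before generating the manifest, a small helper (a hypothetical sketch, not part of clusterctl) can confirm none of the required variables were missed:

```shell
# Hypothetical helper: print any variable from the list that is unset or empty.
check_vars() {
  missing=0
  for v in "$@"; do
    if [ -z "$(printenv "$v")" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return $missing
}
# Check the variables required by `clusterctl generate cluster` (per the list above).
check_vars OPENSTACK_CLOUD OPENSTACK_CLOUD_YAML_B64 OPENSTACK_FAILURE_DOMAIN \
           OPENSTACK_IMAGE_NAME OPENSTACK_SSH_KEY_NAME OPENSTACK_EXTERNAL_NETWORK_ID \
  && echo "all required variables set" \
  || echo "fix the missing variables above"
```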
You can now generate a cluster manifest:
```
clusterctl generate cluster my-cluster-2 \
--kubernetes-version v1.25.0 \
--control-plane-machine-count=1 \
--worker-machine-count=0 \
> my-cluster-2.yaml
```
This generates a **YAML file** called `my-cluster-2.yaml`, which can be used to create a *Kubernetes cluster* with **one** *control plane machine* and **zero** *worker machines*.
**Important:** Before applying the generated YAML file, edit it to accommodate the desired cluster setup.
Currently, we disable the API server LoadBalancer by changing:
```
spec:
  apiServerLoadBalancer:
-   enabled: true
+   enabled: false
```
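If you prefer to script the edit, a hedged one-liner is shown below (it assumes the only `enabled: true` line in the generated manifest belongs to `apiServerLoadBalancer`; verify before relying on it):

```shell
# Flip the load balancer off in place; the guard skips the edit if the
# manifest is not present in the current directory.
[ -f my-cluster-2.yaml ] && sed -i 's/enabled: true/enabled: false/' my-cluster-2.yaml
```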
Finally, apply the generated `my-cluster-2.yaml` to create the cluster:
```
kubectl apply -f my-cluster-2.yaml
```
You should see output like the following (this sample is taken from the quick-start guide's Docker provider; with the OpenStack provider the resource kinds and names will differ accordingly):
```
cluster.cluster.x-k8s.io/capi-quickstart created
dockercluster.infrastructure.cluster.x-k8s.io/capi-quickstart created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-quickstart-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane created
machinedeployment.cluster.x-k8s.io/capi-quickstart-md-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capi-quickstart-md-0 created
```
If all the steps work, a single VM should be created in OpenStack with Kubernetes installed in it.
```
kubectl get cluster
NAME           PHASE         AGE    VERSION
my-cluster-2   Provisioned   3d3h
```
If not, you can troubleshoot the cluster creation step with:
```
kubectl describe cluster my-cluster-2
clusterctl describe cluster my-cluster-2   # condensed tree view of the cluster's resources
```
Or check the ClusterAPI controller manager logs:
```
# the pod name suffix differs per installation; list pods with: kubectl get pods -n capo-system
kubectl logs -n capo-system capo-controller-manager-7bccc4fc4d-jrrjd
```
# Clean-up
To delete the cluster run the command:
```
kubectl delete cluster my-cluster-2
```
# OpenStack (microstack)
Refer to the microstack documentation:
- https://microstack.run/docs/single-node
- https://opendev.org/x/microstack
## Installation
Install microstack with snap:
```
sudo apt update
sudo apt install snapd
sudo snap install core
sudo snap install microstack --beta
```
Check microstack version and revision:
```
$ snap list microstack
Name        Version  Rev  Tracking     Publisher   Notes
microstack  ussuri   245  latest/beta  canonical✓  -
```
Export the snap directory to your `PATH`, and update the sudo `secure_path` with `visudo`:
```
export PATH=/snap/bin:$PATH
```
Now you can initialize the microstack service:
```
sudo microstack init --auto --control
```
### Note
The initialization step **automatically deploys**, configures, and starts *OpenStack services*. In particular, it will create the **database**, **networks**, an **image**, **several flavors**, and **ICMP/SSH security groups**. This can all be done within *10 to 20 minutes* depending on your machine.
You should now be able to run OpenStack commands:
```
microstack.openstack [command]
```
You can create an instance named *test* with the *cirros* image:
```
microstack launch cirros --name test
```
Example output:
```
Creating local "microstack" ssh key at /home/ubuntu/snap/microstack/common/.ssh/id_microstack
Launching server ...
Allocating floating ip ...
Server test launched! (status is BUILD)
Access it with `ssh -i /home/ubuntu/snap/microstack/common/.ssh/id_microstack cirros@10.20.20.199`
```
Enable IP Forwarding:
```
sudo sysctl -w net.ipv4.ip_forward=1
```
To make it permanent, add the following to `/etc/sysctl.conf`:
```
net.ipv4.ip_forward=1
```
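To keep that edit safe across re-runs, a small helper (an illustrative sketch, not microstack tooling) appends the line only when it is missing; on the hypervisor, run it against `/etc/sysctl.conf` as root and reload with `sudo sysctl -p`:

```shell
# Illustrative helper: append a line to a file only if it is not already there.
append_once() {
  grep -qxF "$1" "$2" 2>/dev/null || echo "$1" >> "$2"
}
# On the real machine (needs root):
#   append_once 'net.ipv4.ip_forward=1' /etc/sysctl.conf
#   sysctl -p
```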
Useful commands:
```
microstack.openstack network list
microstack.openstack subnet list
microstack.openstack router list
microstack.openstack flavor list
microstack.openstack keypair list
microstack.openstack image list
microstack.openstack security group rule list
```
To access the OpenStack dashboard, you need the admin password:
```
sudo snap get microstack config.credentials.keystone-password
```
The dashboard is available at `https://<your-ip>`.
# Known Issues
## ClusterAPI
During the ClusterAPI setup (`clusterctl init --infrastructure openstack`) we faced issues pulling the cert-manager images from Quay.io. If you hit the same problem, install cert-manager manually:
```
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.0/cert-manager.yaml
```
## OpenStack
Microstack currently does not work with the default CS images of Ubuntu 20.04 and 22.04; further investigation is needed.
With the CS Ubuntu 20.04/22.04 and Debian 11.5 images, `microstack init` does not pick the correct hostname to use as the nova compute host. You can check your configuration with the following command:
```
microstack.openstack compute service list --service nova-compute
```
If the hostname listed doesn't correspond to your machine's hostname, change it in the nova configuration:
```
sudo vim /var/snap/microstack/common/etc/nova/nova.conf.d/nova-snap.conf
```
And set the correct hostname and IP:
```
host = GVA3-CHA-SRVR01
my_ip = 31.171.250.116
```
Then restart the service:
```
sudo systemctl restart snap.microstack.nova-compute.service
```
Make sure the service list output looks like the following:
```
microstack.openstack compute service list --service nova-compute
+----+--------------+-----------------+------+---------+-------+----------------------------+
| ID | Binary       | Host            | Zone | Status  | State | Updated At                 |
+----+--------------+-----------------+------+---------+-------+----------------------------+
|  4 | nova-compute | GVA3-CHA-SRVR01 | nova | enabled | up    | 2022-11-25T12:01:31.000000 |
+----+--------------+-----------------+------+---------+-------+----------------------------+
```
If OpenStack registers an additional compute service instead of updating the previous one, delete the stale entry:
```
microstack.openstack compute service delete <id_of_compute_node>
```
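If several stale entries pile up, the filter below can help spot the IDs to delete. It is a sketch that parses the ASCII table shown above; the field positions are an assumption about that table layout, so double-check against your own output:

```shell
# Sketch: print the ID column of nova-compute rows whose State is "down".
list_down() {
  awk -F'|' '/nova-compute/ {
    gsub(/ /, "", $2); gsub(/ /, "", $7);   # trim the ID and State cells
    if ($7 == "down") print $2
  }' "$@"
}
# Usage: microstack.openstack compute service list --service nova-compute | list_down
```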