# SUSE Rancher - Useful notes
## Install Rancher with `k3d`
This guide lets you easily install Rancher in a `k3d`-based cluster.
This setup uses the [Rancher-generated TLS certificate](https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster#3-choose-your-ssl-configuration) SSL configuration.
Create a cluster with `k3d`:
```shell
k3d cluster create test-cluster
```
To use a specific Kubernetes version:
```shell
k3d cluster create test-cluster --image rancher/k3s:v1.30.0-rc1-k3s1
```
Add the `rancher` chart repo (`latest`) with `helm`:
```shell
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
```
Create the Kubernetes `Namespace` to host Rancher:
```shell
kubectl create namespace cattle-system
```
Install `cert-manager` with `helm`:
```shell
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
```
Install Rancher with `helm`:
```shell
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=rancher.my.org --set bootstrapPassword=admin
```
If you want to install a specific version:
```shell
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=rancher.my.org --set bootstrapPassword=admin --version=2.9.3
```
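To see which chart versions are available before pinning one, `helm search repo` can be used (assuming the repo was added as `rancher-latest` as above):

```shell
# List all chart versions published in the rancher-latest repo
helm search repo rancher-latest/rancher --versions
```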
Wait for Rancher to be rolled out:
```shell
kubectl -n cattle-system rollout status deploy/rancher
```
Set the domain name `rancher.my.org` in your `/etc/hosts` file (so that the name resolves locally) and expose the deployment to forward traffic from the cluster:
```shell
kubectl expose deployment rancher --name=rancher-lb --port=443 --type=LoadBalancer -n cattle-system
```
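The `/etc/hosts` entry itself can be added like this (assuming the LoadBalancer is reachable on the local machine; adjust the IP otherwise):

```shell
# Map the Rancher hostname to the local machine
echo "127.0.0.1 rancher.my.org" | sudo tee -a /etc/hosts
```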
Reach Rancher from the browser at: `https://rancher.my.org:4443/`
Alternatively, use the `port-forward` command from `kubectl`:
```shell
kubectl port-forward -n cattle-system svc/rancher 8000:443
```
To retrieve the user password:
```shell
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{ .data.bootstrapPassword|base64decode}}{{ "\n" }}'
```
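An equivalent read using `jsonpath`, decoding the base64 value locally (same `bootstrap-secret` as above):

```shell
kubectl get secret -n cattle-system bootstrap-secret \
  -o jsonpath='{.data.bootstrapPassword}' | base64 -d
```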
## Rancher RBAC
Rancher offers several RBAC features designed to help users manage permissions more easily than with standard Kubernetes RBAC.
In detail:
* `GlobalRole` grants access to the local cluster (except for the `inheritedClusterRole` field, which grants access to all downstream clusters)
* `RoleTemplate` can be used to grant access to a downstream cluster (`ClusterRoleTemplateBinding`) or project (`ProjectRoleTemplateBinding`)
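As a sketch of how the downstream-cluster path fits together (field names recalled from the `management.cattle.io/v3` API; treat them as assumptions to verify against the actual CRDs):

```yaml
# Hypothetical RoleTemplate granting read access to pods in a downstream cluster
apiVersion: management.cattle.io/v3
kind: RoleTemplate
metadata:
  name: pod-reader
displayName: Pod Reader
context: cluster
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Binding the template to a user on a specific downstream cluster
apiVersion: management.cattle.io/v3
kind: ClusterRoleTemplateBinding
metadata:
  name: pod-reader-binding
  namespace: c-m-xxxxxxxx   # downstream cluster ID (placeholder)
clusterName: c-m-xxxxxxxx   # placeholder
userName: u-abcde           # placeholder Rancher user ID
roleTemplateName: pod-reader
```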

Reference: https://github.com/rancherlabs/engineering-infra/tree/main/docs/rbac
## Setup Keycloak with Rancher (OIDC)
Run **Keycloak**:
```shell
docker run -it --rm -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -e KEYCLOAK_LOGLEVEL=DEBUG quay.io/keycloak/keycloak:24.0.5 start-dev
```
To provision **Keycloak** automatically, type the following command:
```shell
docker run -it --rm -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -e KEYCLOAK_LOGLEVEL=DEBUG -v /home/alessio/Documents/github/work/keycloak-config/:/opt/keycloak/data/import quay.io/keycloak/keycloak:24.0.5 start-dev --import-realm
```
In case you are using **ngrok**:
```shell
docker run -it --rm -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -e KEYCLOAK_LOGLEVEL=DEBUG -e KC_HOSTNAME_URL=https://0a80b2bbac1f.ngrok.app/ -e KC_HOSTNAME_ADMIN_URL=https://0a80b2bbac1f.ngrok.app/ -v /home/alessio/Documents/github/work/keycloak-config/:/opt/keycloak/data/import quay.io/keycloak/keycloak:24.0.5 start-dev --import-realm
```
Run **Rancher**:
```shell
sudo docker run --privileged -it --rm -p 80:80 -p 443:443 --dns 1.1.1.1 -e CATTLE_BOOTSTRAP_PASSWORD=password rancher/rancher
```
Run **Rancher** and use local `kubectl`:
```shell
docker run --privileged -it --rm -p 80:80 -p 443:443 -p 6443:6443 --dns 1.1.1.1 -e CATTLE_BOOTSTRAP_PASSWORD=password rancher/rancher
docker cp <container_id>:/etc/rancher/k3s ./kube/
export KUBECONFIG=./kube/k3s/k3s.yaml
```
Get **Rancher** and **Keycloak** IP addresses:
```shell
docker ps
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-id>
```
### Configure Keycloak
To configure **Keycloak**, follow the instructions below or this guide: https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-oidc
* Create the realm
* Create the user
* Create credentials for the user (disable Temporary button)
* Create client
* Client Scopes
* Select "rancher-dedicated"
* Select Add Mapper (By configuration)
* Create Groups Mapper
* Create Client Audience
* Create Groups Path
### Configure Rancher
* [Users & Auth] -> [Auth] -> [Keycloak OIDC]
**Keycloak** URL: `https://<keycloak-ngrok>:8080`
**Rancher** URL: `https://<rancher-ngrok>`
**Issuer** URL: `http://<keycloak-ngrok>/realms/myrealm`
**Auth Endpoint** URL: `http://<keycloak-ngrok>/realms/myrealm/protocol/openid-connect/auth`
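A quick way to double-check the issuer and endpoint URLs is Keycloak's OIDC discovery document (assuming the realm is named `myrealm` and Keycloak listens on `localhost:8080`):

```shell
# Print the discovery document; issuer and authorization_endpoint should
# match the values entered in Rancher
curl -s http://localhost:8080/realms/myrealm/.well-known/openid-configuration
```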

## Setup LDAP with Rancher
I followed this guide from Enrico's gists: https://gist.github.com/enrichman/f14a1689ae315f83d8d2efe28669ef9e
Use the following `docker-compose.yaml` to setup the environment
```yaml
---
version: '3.7'
services:
  openldap:
    image: osixia/openldap:1.5.0
    container_name: openldap
    hostname: openldap
    ports:
      - "389:389"
      - "636:636"
    environment:
      - LDAP_ORGANISATION=example
      - LDAP_DOMAIN=example.com
      - LDAP_ADMIN_USERNAME=admin
      - LDAP_ADMIN_PASSWORD=password
      - LDAP_CONFIG_PASSWORD=password
      - "LDAP_BASE_DN=dc=example,dc=com"
      - LDAP_READONLY_USER=true
      - LDAP_READONLY_USER_USERNAME=user-ro
      - LDAP_READONLY_USER_PASSWORD=ro_pass
    networks:
      - openldap
  phpldapadmin:
    image: osixia/phpldapadmin:0.9.0
    container_name: phpldapadmin
    hostname: phpldapadmin
    ports:
      - "80:80"
    environment:
      - PHPLDAPADMIN_LDAP_HOSTS=openldap
      - PHPLDAPADMIN_HTTPS=false
    depends_on:
      - openldap
    networks:
      - openldap
  rancher:
    image: rancher/rancher:latest
    container_name: rancher
    hostname: rancher
    privileged: true
    dns:
      - 1.1.1.1
    ports:
      - "8080:80"
      - "443:443"
    environment:
      - CATTLE_BOOTSTRAP_PASSWORD=password12345
    depends_on:
      - openldap
    networks:
      - openldap
networks:
  openldap:
    driver: bridge
```
Import this file on `phpldapadmin` (all the hashed passwords are `password`):
```txt
# LDIF Export for dc=example,dc=com
# Server: openldap (openldap)
# Search Scope: sub
# Search Filter: (objectClass=*)
# Total Entries: 6
#
# Generated by phpLDAPadmin (http://phpldapadmin.sourceforge.net) on July 19, 2024 3:28 pm
# Version: 1.2.5
version: 1

# Entry 1: dc=example,dc=com
dn: dc=example,dc=com
dc: example
o: example
objectclass: top
objectclass: dcObject
objectclass: organization

# Entry 2: cn=user-ro,dc=example,dc=com
dn: cn=user-ro,dc=example,dc=com
cn: user-ro
description: LDAP read only user
objectclass: simpleSecurityObject
objectclass: organizationalRole
userpassword: {SSHA}SJbTe6zLxF1r0jMa4unwlO99ecomNtKW

# Entry 3: ou=groups,dc=example,dc=com
dn: ou=groups,dc=example,dc=com
objectclass: organizationalUnit
objectclass: top
ou: groups

# Entry 4: cn=dev,ou=groups,dc=example,dc=com
dn: cn=dev,ou=groups,dc=example,dc=com
cn: dev
gidnumber: 500
objectclass: posixGroup
objectclass: top

# Entry 5: ou=users,dc=example,dc=com
dn: ou=users,dc=example,dc=com
objectclass: organizationalUnit
objectclass: top
ou: users

# Entry 6: cn=alessio,ou=users,dc=example,dc=com
dn: cn=alessio,ou=users,dc=example,dc=com
cn: alessio
gidnumber: 500
givenname: alessio
homedirectory: /home/users/alessio
objectclass: inetOrgPerson
objectclass: posixAccount
objectclass: top
sn: alessio
uid: alessio
uidnumber: 1000
userpassword: {MD5}X03MO1qnZdYdgyfeuILPmQ==
```
### From Rancher
* **Hostname/IP:** `localhost`
* **Service Account Distinguished Name:** `cn=admin,dc=example,dc=com`
* **Service Account Password:** `password`
* **User Search Base:** `ou=users,dc=example,dc=com`
* **Group Search Base:** `ou=groups,dc=example,dc=com`
* **Username:** `alessio`
* **Password:** `password`
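Before configuring Rancher, the directory can be sanity-checked with `ldapsearch` (same bind DN and search base as above):

```shell
# Bind as the admin service account and list the users subtree
ldapsearch -x -H ldap://localhost:389 \
  -D "cn=admin,dc=example,dc=com" -w password \
  -b "ou=users,dc=example,dc=com" "(objectClass=inetOrgPerson)"
```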
## WIP: Join k3d as downstream cluster from Rancher (docker)
Set up the environment with `docker-compose` (`rancher`, `ldap`, etc.).
Then create a **Kubernetes** cluster with `k3d`:
**N.B.** always make sure you have `nameserver 8.8.8.8` in your `/etc/resolv.conf`, to avoid weird DNS issues.
```bash
k3d cluster create --network ldap-config_openldap downstream-cluster -v ~/Documents/github/work/ldap-config/:/tmp@server:0 --image rancher/k3s:v1.26.4-k3s1
```
In this case we are joining the existing network (e.g. `ldap-config_openldap`) and mounting a volume that contains static binaries, such as `curl`, that are not present in the `k3d` image.
Once `k3d` is ready, we can import it from **Rancher**. Remember to add the environment variables to be injected into the `cattle-agent`:
* `CATTLE_AGENT_STRICT_VERIFY`: `false`
* `STRICT_VERIFY`: `false`
### Troubleshooting
In case the agent hangs waiting for the API to become available, edit the **cattle-agent** `Deployment` and change the `dnsPolicy` parameter value from `ClusterFirst` to `Default`
(https://forums.rancher.com/t/waiting-for-api-to-be-available-for-downstream-eks-cluster/37898/3)
*NOTE*
The procedure is not working yet and needs some adjustment.
## Develop the terraform provider rancher2
To manually test the `terraform-provider-rancher2`, once the plugin is built you have to move it under the right directory so that it gets picked up by `terraform`.
Create the right directory tree:
```bash
mkdir -p ~/.terraform.d/plugins/terraform.local/local/rancher2/0.0.0/linux_amd64
```
Specify the custom plugin in the `terraform` script by adding this:
```hcl
terraform {
required_providers {
rancher2 = {
source = "terraform.local/local/rancher2" # same path as above ↑↑↑
version = "0.0.0"
}
}
}
```
Run `terraform init` to set up the plugin.
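Alternatively, recent Terraform versions support a `dev_overrides` block in the CLI configuration (`~/.terraformrc`), which points Terraform straight at the build directory and skips the installed-plugin lookup for that provider:

```hcl
provider_installation {
  dev_overrides {
    # Example path; point it at the directory containing the built binary
    "terraform.local/local/rancher2" = "/home/user/go/bin"
  }
  # Fall back to normal installation for all other providers
  direct {}
}
```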
## Create Cloud-init VM
I'm using a `cloud-init` VM to create a local node on which to install **k3s/rke2**.
Download the `cloud-init` image from: `https://cloud-images.ubuntu.com/`.
Run this command to create a dedicated disk with more capacity:
```shell
qemu-img create -b Downloads/noble-server-cloudimg-amd64.img -f qcow2 -F qcow2 downstream.img 35G
```
Create the files to setup the VM with `cloud-init`:
`user-data`:
```yaml
#cloud-config
users:
  - name: rancher
    passwd: "$5$v80c4leA1M4qdXbi$igc2QjPYcIwkqBe4O3l3itqy5eBkWkAiYYA1fs1CEG/" #passwd: password
    ssh_authorized_keys:
      - ssh-ed25519 ...
    shell: /bin/bash
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    lock_passwd: false
ssh_pwauth: true
disable_root: false
```
`meta-data`:
```yaml
instance-id: downstream
local-hostname: downstream
```
Validate the configuration with the following command:
```shell
sudo cloud-init schema -c user-data --annotate
```
Create the `iso` image with this command:
```shell
genisoimage -output cloud-init.iso -V cidata -r -J user-data meta-data
```
Run `virt-manager` to complete the installation (remember to add a CD-ROM storage device loading the `cloud-init.iso` file and move it to the first boot option).
## Tips & Tricks
### Rebase forked branch with upstream branch
E.g. `local-branch` needs to be rebased onto `release/v2.9` (some fixes have been added).
Sync the fork (branch `release/v2.9`) from the GitHub UI.
On the terminal:
```bash
git checkout release/v2.9
git pull
git checkout local-branch
git rebase release/v2.9
git push -f
```
### Add new branch from remote into fork
In case the upstream repo creates a new branch that you didn't fork (because it did not exist at the time you forked).
E.g. the `main` branch didn't exist when I forked `rancher/rancher`.
To update my fork `alegrey91/rancher`, do as follows (from `alegrey91/rancher`):
```bash
git remote add upstream https://github.com/rancher/rancher
git fetch upstream
git checkout -b new-branch upstream/new-branch
git push -u origin new-branch
```
### Delete all the stopped containers
To delete all the stopped containers:
```bash
for c in $(docker ps -a | tail -n +2 | awk '{print $1}'); do docker rm $c; done
```
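Docker also has a built-in command that removes all stopped containers in one go:

```shell
# Remove all stopped containers without prompting for confirmation
docker container prune -f
```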
### Mocks
If you want to generate mocks for a specific interface:
```bash
mockgen -destination=pkg/controllers/management/auth/zz_user_fakes.go -package=auth k8s.io/client-go/tools/cache Indexer
```
Where `k8s.io/client-go/tools/cache` is the package that defines the `Indexer` interface.
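If `mockgen` is not available locally, it can be installed from the maintained fork (assuming the `go.uber.org/mock` module, which superseded `github.com/golang/mock`):

```shell
go install go.uber.org/mock/mockgen@latest
```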
### Ngrok
If you are using **ngrok** to expose **Rancher** behind a public URL, remember to update the `server-url` setting in the Rancher settings dashboard.
```shell
ngrok http https://localhost:8443
```
### Cherry-Picking
To backport commits from one branch to another, it is helpful to use the `git cherry-pick` command.
To backport a commit, go to the branch that already has it, find the commit hash with `git log`, then `checkout` the branch where you want to port the commit and run:
```shell
git cherry-pick <commit_hash>
```
Since you are importing an existing commit, there's no need to `git add` / `git commit`: that's already done.
The only thing you need to do is `git push`.
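The whole flow can be reproduced end-to-end in a throwaway repository (hypothetical branch and file names):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "base"
# Create a branch carrying the commit we want to port
git checkout -q -b bugfix-branch
echo "the fix" > fix.txt
git add fix.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "fix: important bug"
hash=$(git rev-parse HEAD)
# Back on main, cherry-pick brings the commit (and its changes) over
git checkout -q main
git -c user.email=demo@example.com -c user.name=demo cherry-pick "$hash"
cat fix.txt   # → the fix
```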