# Kubernetes Cluster on Netcup for USEFLEDS
## Info
**Nodes:** (details on [Github](https://github.com/OYD-private/netcup-k8s/tree/main/servers))
* Master: k8s-0.oydapp.eu (v22019071530492888)
  internal: k8s-0 and k8s-master.local, IP: `152.89.104.40`
* Worker:
  * k8s-1.oydapp.eu (v22019071530492886)
    internal: k8s-1 and k8s-worker01.local, IP: `152.89.105.62`
  * k8s-2.oydapp.eu (v22019071530492887)
    internal: k8s-2 and k8s-worker02.local, IP: `152.89.107.39`
* Storage:
* IP: `37.120.184.76` (v220200315304112158)
## Baremetal Setup
Resources:
* [How to Install Kubernetes Cluster on Debian](https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/)
* kubeadm init: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
* login to Server Control Panel to deploy Linux image: https://www.servercontrolpanel.de/SCP/Home
Steps:
1) deploy the latest Debian image with minimal setup
(don't configure a user or SSH keys; afterwards log in via ssh with the provided root password)
2) install Kubernetes Cluster
<details><summary>Configure accounts</summary>
```bash=
# configure vi as default editor
update-alternatives --config editor
# enable passwordless sudo
visudo # -> update at bottom: %sudo ALL=(ALL:ALL) NOPASSWD:ALL
```
for each user:
```bash=
adduser christoph --disabled-password
su christoph
cd
mkdir .ssh
chmod 700 .ssh
vi .ssh/authorized_keys # <- copy pubkeys
vi .vimrc # add line: set nocompatible
exit # back to root
usermod -aG sudo christoph
```
</details>
<details><summary>Basic system configuration</summary>
run as a regular user:
```bash=
# install additional packages
sudo apt-get update
sudo apt-get -y install iptables-persistent gnupg gnupg2 curl software-properties-common
# disable ssh root access with password
sudo vi /etc/ssh/sshd_config # uncomment "PermitRootLogin prohibit-password"
# restart sshd to apply changes
sudo systemctl restart sshd
# set hostname (replace xxx with this node's name)
sudo hostnamectl set-hostname "k8s-xxx.local"
sudo vi /etc/hosts # <- add the following entries:
#   152.89.104.40 k8s-master.local k8s-0
#   152.89.105.62 k8s-worker01.local k8s-1
#   152.89.107.39 k8s-worker02.local k8s-2
# disable Swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
</details>
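To verify the changes took effect (standard checks, nothing cluster-specific):
```bash=
# no output means no active swap
swapon --show
# confirm the new hostname
hostnamectl
```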
<details><summary>Configure Firewall</summary>
```bash=
cat << EOF | sudo tee /etc/iptables/rules.v4
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -s 193.170.140.34 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A INPUT -s 152.89.105.62 -j ACCEPT
-A INPUT -s 152.89.107.39 -j ACCEPT
-A INPUT -s 152.89.104.40 -j ACCEPT
-A INPUT -s 37.120.184.76 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED,UNTRACKED -j ACCEPT
COMMIT
EOF
cat << EOF | sudo tee /etc/iptables/rules.v6
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
#-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED,UNTRACKED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmpv6 -j ACCEPT
COMMIT
EOF
```
</details>
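The files above are only loaded at boot by `iptables-persistent`; to apply them to the running system immediately:
```bash=
# load the saved rule sets into the running kernel
sudo iptables-restore < /etc/iptables/rules.v4
sudo ip6tables-restore < /etc/iptables/rules.v6
# inspect the active rules
sudo iptables -L -n -v
```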
<details><summary>Install and configure containerd</summary>
```bash=
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# to apply above changes:
sudo sysctl --system
# install containerd
sudo apt-get -y install containerd
# configure containerd so that it works with Kubernetes
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
# set cgroupdriver to systemd: "SystemdCgroup = true"
sudo vi /etc/containerd/config.toml
# restart and enable containerd service
sudo systemctl restart containerd
sudo systemctl enable containerd
```
</details>
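Alternatively, the cgroup-driver edit can be done non-interactively; this assumes the default generated config, where the line reads `SystemdCgroup = false`:
```bash=
# flip SystemdCgroup from false to true in the generated default config
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```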
<details><summary>Install Kubernetes tools</summary>
```bash=
# add Kubernetes Apt repository
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/google.gpg
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
# install Kubernetes
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
</details>
<details><summary>Setup master node</summary>
```bash=
sudo kubeadm init --control-plane-endpoint=k8s-0.oydapp.eu
# save output for worker nodes to join
# store kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
</details>
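A quick sanity check after init; the node stays `NotReady` until the pod network (Calico, below) is installed:
```bash=
# control-plane node should be listed (NotReady until the CNI is in place)
kubectl get nodes
# system pods should start in kube-system
kubectl get pods -n kube-system
```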
<details><summary>Setup worker nodes</summary>
update the command with the output from the control-plane initialization on [Github](https://github.com/OYD-private/netcup-k8s/blob/main/servers/control-plane.txt)
```bash=
sudo kubeadm join k8s-0.oydapp.eu:6443 --token xxx \
        --discovery-token-ca-cert-hash sha256:xxx
```
```
</details>
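If the saved token has expired (kubeadm join tokens are valid for 24 hours by default), a fresh join command can be printed on the master:
```bash=
# run on the master node; prints a complete kubeadm join command with a new token
sudo kubeadm token create --print-join-command
```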
<details><summary>Setup Pod Network Using Calico on Master</summary>
```bash=
# left as a placeholder in the original notes; a typical Calico install looks like
# this (the manifest version is an assumption, check the Calico docs for the current one):
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
```
</details>
## Basic Kubernetes Setup
- [x] [Prerequisites](#Prerequisites)
- [x] [Dashboard](#Dashboard-Installation)
- [x] [Ingress & Certificates](#Ingress-amp-Certificates)
- [x] [Postgres](#PostgreSQL-Cluster)
- skipped for now: LoadBalancer: https://metallb.universe.tf/installation/
- [ ] Monitoring
### Prerequisites
* kubeconfig: [YAML on Github](https://github.com/OYD-private/netcup-k8s/blob/main/config.yml)
<details><summary>Tests</summary>
```bash=
# check version
kubectl --kubeconfig=config.yml version
# to skip --kubeconfig
export KUBECONFIG=/path/to/config.yml
kubectl get pods
```
</details>
<details><summary>Helper</summary>
shortcuts for `.bashrc` / `.zshrc`
```
export K8S_CONFIG="/path/to/config.yml"
# kubectl with config
k8() {
    kubectl --kubeconfig="$K8S_CONFIG" "$@"
}
# get Pods matching first argument (or all, if no argument given)
k8p() {
    if [ -z "$1" ]; then
        kubectl --kubeconfig="$K8S_CONFIG" get pods
    else
        kubectl --kubeconfig="$K8S_CONFIG" get pods | grep "$1"
    fi
}
# get last 200 lines of log for first Pod matching argument
k8l() {
    kubectl --kubeconfig="$K8S_CONFIG" logs -f --tail 200 $(kubectl --kubeconfig="$K8S_CONFIG" get pods | grep "$1" | awk '{print $1}' | head -n 1)
}
# connect with a shell to the first Pod matching argument
k8c() {
    kubectl --kubeconfig="$K8S_CONFIG" exec -it $(kubectl --kubeconfig="$K8S_CONFIG" get pods | grep "$1" | awk '{print $1}' | head -n 1) -- bash
}
```
</details>
:::info
all commands below assume the helper functions defined above, e.g. `k8 ...`
:::
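For example, with the helpers loaded (the pod name is just an illustration):
```bash=
k8 get nodes    # plain kubectl against the cluster
k8p regapi      # list pods whose name matches "regapi"
k8l regapi      # tail the log of the first matching pod
k8c regapi      # open a shell in the first matching pod
```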
### Kubernetes & Server upgrade
* read the howto matching the version you want to upgrade to:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
The upgrade is always done by updating kubeadm first, then running sanity checks, followed by upgrading the cluster. After that, the kubelet is upgraded via apt and restarted. During the upgrade and restart, pods are migrated with `kubectl drain` and `kubectl uncordon`, so there is no outage.
The upgrade has to be done on the control-plane nodes first and then on the worker nodes.
Draining nodes can also be used to apply OS upgrades and reboots to the servers.
Skipping minor versions during an upgrade is not supported, so the procedure may have to be repeated several times.
<details><summary>Install (new) Kubeadm Repo</summary>
> comment out or remove the old repo in /etc/apt/sources.list.d/archive_uri-http_apt_kubernetes_io_-bookworm.list
```
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
apt-get update
```
Change the version in the repo URL to the minor version of the cluster you want to upgrade to.
</details>
<details><summary>Prepare Cluster with kubeadm</summary>
Replace the x in `1.31.x-*` with the latest patch version available in the repository configured in `/etc/apt/sources.list.d/kubernetes.list`.
Upgrade only kubeadm and prevent further automatic upgrades:
```
sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.31.x-*' && \
sudo apt-mark hold kubeadm
```
Check the version of kubeadm; it should match the version from the repo:
`kubeadm version`
Run diagnostic checks to see if the cluster can be upgraded:
`sudo kubeadm upgrade plan`
If OK, upgrade the cluster with the supplied command:
`sudo kubeadm upgrade apply v1.31.x`
</details>
<details><summary>Upgrade kubelet packages and reload</summary>
this has to be done on both control-plane and worker nodes
at any time you can check the status and version of the cluster with:
`kubectl get nodes`
first drain the node so pods are rescheduled on the other nodes:
`kubectl drain <node-to-drain> --ignore-daemonsets`
upgrade the packages and prevent automatic upgrades; replace the x in `1.31.x-*` with the latest patch version in the repository:
```
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.31.x-*' kubectl='1.31.x-*' && \
sudo apt-mark hold kubelet kubectl
```
restart the service
```
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```
put the node back in service; replace `<node-to-uncordon>` with the name of your node:
`kubectl uncordon <node-to-uncordon>`
</details>
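The same drain/uncordon pattern covers plain OS maintenance; a minimal sketch for one node at a time:
```bash=
# move workloads off the node
kubectl drain <node> --ignore-daemonsets
# apply OS updates and reboot
sudo apt-get update && sudo apt-get -y upgrade
sudo reboot
# once the node is back, allow scheduling again
kubectl uncordon <node>
```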
### Renew Certificates
* cluster usually renews certificates in time
* check status with `sudo kubeadm certs check-expiration`
* renew manually with `sudo kubeadm certs renew all`
* copy the admin kubeconfig to the current user's home: `sudo cp /etc/kubernetes/admin.conf ~/.kube/config`
* renew the local copy: update `~/dev/deploy/netcup-k8s/config.yml`
### Dashboard Installation
Installation:
* deploy dashboard
```bash=
k8 apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```
* create user
config on [Github `dashboard/`](https://github.com/OYD-private/netcup-k8s/tree/main/dashboard)
```bash=
k8 apply -f serviceAccount.yml
k8 create -f clusterRoleBinding.yml # might already exist
```
Access Dashboard:
* create token: `k8 -n kubernetes-dashboard create token admin-user`
* run `k8 proxy`
* [open Dashboard](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/)
### Metrics Server
Info: https://github.com/kubernetes-sigs/metrics-server
config on [Github `metrics/`](https://github.com/OYD-private/netcup-k8s/tree/main/metrics) ([disabled TLS](https://github.com/OYD-private/netcup-k8s/blob/656a476562c9809b61ca2be51180ee67a10f8a65/metrics/high-availability-1.21%2B.yaml#L150) for now)
```bash=
k8 apply -f high-availability-1.21+.yaml
```
use:
```bash=
k8 top pod --all-namespaces --sort-by=memory
```
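Node-level metrics work the same way:
```bash=
k8 top node
```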
### Ingress & Certificates
Steps:
* install [ingress-nginx](https://github.com/kubernetes/ingress-nginx)
config on [Github `ingress/`](https://github.com/OYD-private/netcup-k8s/tree/main/ingress) (requires setting the external IP)
```bash=
k8 apply -f ingress-nginx.yaml
```
* install [cert-manager](https://cert-manager.io/docs/installation/kubectl/)
```bash=
k8 apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.2/cert-manager.yaml
```
* configure [issuer](https://cert-manager.io/docs/configuration/acme/)
config on [Github `ingress/`](https://github.com/OYD-private/netcup-k8s/tree/main/ingress)
```bash=
k8 apply -f letsencrypt-production.yaml
```
* deploy demo container
config on [Github `ingress/`](https://github.com/OYD-private/netcup-k8s/tree/main/ingress)
```bash=
k8 apply -f demo-pod.yaml
```
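To check that cert-manager actually issued a certificate for the demo host (assuming the issuer from `letsencrypt-production.yaml` is a ClusterIssuer):
```bash=
# issuer should report Ready
k8 get clusterissuer
# certificate resources and their status across namespaces
k8 get certificate -A
# inspect events if a certificate is stuck
k8 describe certificate <name>
```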
## PostgreSQL Cluster
Postgres-Operator Kubegres: https://www.kubegres.io
### Installation
* Github: https://github.com/reactive-tech/kubegres
* Docs: https://www.kubegres.io/doc/getting-started.html
### Basics
Steps:
* install Kubegres operator
```bash=
k8 apply -f https://raw.githubusercontent.com/reactive-tech/kubegres/v1.17/kubegres.yaml
```
* create Secret resource
```bash=
k8 apply -f postgres-secret.yaml
```
### Setup NFS Client Provisioner for Storage Class
config on [Github `postgres/`](https://github.com/OYD-private/netcup-k8s/tree/main/postgres)
Steps:
* enable access of K8s nodes to storage server
* login as root to storage server (`ssh user@37.120.184.76` and `sudo su`)
* edit exports: `vi /etc/exports` and add a line with the IP of each node for the relevant mount point `/var/nfs/postgres` (see the example entries after this list)
* 152.89.104.40
* 152.89.105.62
* 152.89.107.39
* restart nfs-server: `systemctl restart nfs-kernel-server`
* check access with: `showmount -e 37.120.184.76`
* install `nfs-common` package on each node
`sudo apt-get install -y nfs-common`
* deploy nfs-provisioner:
```bash=
k8 apply -f namespace_nfs-provisioner.yaml
k8 apply -f serviceaccount_nfs-provisioner.yaml
k8 apply -f clusterrole_nfs-provisioner.yaml
k8 apply -f clusterrolebinding_nfs-provisioner.yaml
k8 apply -f deployment_nfs-provisioner.yaml
#check
k8 get pods -n nfs-provisioner
```
* create StorageClass
```bash=
k8 apply -f storageclass.yaml
```
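Example `/etc/exports` entries on the storage server for the three node IPs above (the export options are an assumption; adjust to your policy):
```bash=
# /etc/exports on 37.120.184.76
/var/nfs/postgres 152.89.104.40(rw,sync,no_subtree_check)
/var/nfs/postgres 152.89.105.62(rw,sync,no_subtree_check)
/var/nfs/postgres 152.89.107.39(rw,sync,no_subtree_check)
```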
### Create a cluster of PostgreSQL instances
currently using PostgreSQL v15.5 ([versions](https://www.postgresql.org/support/versioning/))
```bash=
k8 apply -f postgres.yaml
# check
k8 get pod,statefulset,svc,configmap,pv,pvc -o wide
# connect with psql
k8 exec -it postgres-1-0 -- psql -h postgres -U postgres -d postgres
```
Password is `superUserPassword` from `postgres-secret.yaml`
## Instances
### chats.go-data.at
```bash=
k8 set image deployment.v1.apps/chat-disp \
chat-disp=oydeu/dc-intermediary:240129
```
### DEC112 Staging Environment
* use customised ingress: config on [Github `ingress/`](https://github.com/OYD-private/netcup-k8s/tree/main/ingress)
```bash=
k8 apply -f ingress-nginx-dec112.yaml
```
* Redis deployment: config on [Github](https://github.com/OYD-private/netcup-k8s/tree/main/redis)
```bash=
k8 apply -f redis-config.yaml
k8 apply -f redis-statefulset.yaml
k8 apply -f redis-service.yaml
```
to run `redis-cli`, get the Redis master password from [Github](https://github.com/OYD-private/netcup-k8s/tree/main/redis/redis-config.yaml)
```bash=
k8 exec -it redis-0 -- redis-cli
auth secret-password-here
pubsub channels #example command
keys * #list all keys
hgetall key #detail for key
```
* RegAPI deployment: config on [Github](https://github.com/OYD-private/netcup-k8s/tree/main/regapi)
```bash=
k8 apply -f regapi-secrets.yaml
k8 apply -f regapi-deploy.yaml
k8 apply -f sms-deploy.yaml
k8 apply -f ida-deploy.yaml
k8 apply -f sip-deploy.yaml
```
updates:
```bash=
k8 set image deployment.v1.apps/regapi regapi=oydeu/dc-regapi:240221a
k8 set image deployment.v1.apps/sms-plugin sms-plugin=oydeu/dc-decsms:240221b
k8 set image deployment.v1.apps/ida-plugin ida-plugin=oydeu/dc-decida:240221
k8 set image deployment.v1.apps/sip-plugin sip-plugin=oydeu/dc-decsip:240221a
```
quick test:
```bash=
# init
REGAPI="https://regapi.data-container.net"
DID=`echo '' | oydid create --json --add-x25519pubkey-keyAgreement | jq -r '.did'`
TOKEN=`oydid auth $DID $REGAPI --json | jq -r '.access_token'`
# send SMS
REG_ID=`echo '{"header": {"method": "sms","action": "init"},
"payload":{"phone_number": "004367761753112",
"email": "christoph.fabianek@gmail.com"}}' | \
curl -H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d @- -s \
-X PUT $REGAPI/api/v3/register | \
jq -r '.reg_id'`
# enter SMS-Code
CODE=12345678
echo '{"header": {"method": "sms","action": "SmsVerificationCode"},
"payload":{}}' | \
jq --arg reg_id "$REG_ID" '.header += {"reg_id": $reg_id}' | \
jq --arg code "$CODE" '.payload += {"sms_code": $code}' | \
curl -H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d @- \
-X PUT $REGAPI/api/v3/register
# check state
curl -H "Authorization: Bearer $TOKEN" \
"$REGAPI/api/v3/register?reg_id=$REG_ID"
```
* Kamailio deployment: set ghcr.io and use config on [Github](https://github.com/OYD-private/netcup-k8s/tree/main/kamailio)
```bash=
k8 apply -f kamailio-deploy.yaml
```
* DEC112 Chatbot deployment: config on [Github](https://github.com/OYD-private/netcup-k8s/tree/main/chatbot)
```bash=
k8 apply -f chatbot-deploy.yaml
```
update:
```bash=
k8 set image deployment.v1.apps/dec112-chatbot \
dec112-chatbot=oydeu/dc-chatbot:240211
```
* Wallet Onboarding deployment: config on [Github](https://github.com/OYD-private/netcup-k8s/tree/main/regapi)
```bash=
k8 apply -f wallet-deploy.yaml
```
update:
```bash=
# update image
k8 set image deployment.v1.apps/wallet-onboarding \
wallet-onboarding=oydeu/dc-dec_onboarding:240212
# update environment variable
k8 set env deployment/wallet-onboarding \
REGAPI_APPKEY=""
```
## Monitoring
### Links
* Details: https://hackmd.io/VtIODwZhREOmM429FDCirA?view
* Tutorial: [Squadcast](https://www.squadcast.com/blog/infrastructure-monitoring-using-kube-prometheus-operator)
### ToDos
* [ ] how to configure alerts
* configure an alert for node unavailability and notify in Slack
Access Dashboard:
* run `k8 --namespace monitoring port-forward svc/prometheus 9090`
* open [Prometheus](http://localhost:9090/)
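With the port-forward running, Prometheus can also be queried from the shell; the `up` metric is built in, the `jq` filter is just an example:
```bash=
# which scrape targets are up?
curl -s 'http://localhost:9090/api/v1/query?query=up' | \
  jq '.data.result[] | {job: .metric.job, instance: .metric.instance, up: .value[1]}'
```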