# k8s on bare-metal
[TOC]
## qemu-guest-agent
Install this only if the machine is a virtual machine; it lets the hypervisor query and control the guest.
```bash=
sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent
```
## swapoff
```bash=
sudo swapoff -a
sudo vim /etc/fstab
# comment out the /swap.img line so swap stays disabled after reboot
```
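The manual edit can also be done non-interactively; this sed comments every uncommented swap line (the `/swap.img` path is the Ubuntu default — adjust the pattern if your entry differs):

```bash=
sudo sed -i '/swap/ s/^\([^#]\)/#\1/' /etc/fstab
```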
## docker
https://docs.docker.com/engine/install/debian/#install-using-the-repository
https://docs.docker.com/engine/install/linux-postinstall/
```bash=
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# fix: this host runs Ubuntu, so point the repo at ubuntu instead of debian
sudo sed -i 's/debian/ubuntu/g' /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
Post-install (run `docker` without `sudo`):
```bash=
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world
```
## cri-dockerd
https://github.com/Mirantis/cri-dockerd
```bash=
curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.7/cri-dockerd_0.3.7.3-0.ubuntu-jammy_amd64.deb
sudo apt install ./cri-dockerd_0.3.7.3-0.ubuntu-jammy_amd64.deb
rm ./cri-dockerd_0.3.7.3-0.ubuntu-jammy_amd64.deb
```
## kubeadm + kubelet + kubectl
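A sketch of installing the three tools from the community-owned `pkgs.k8s.io` apt repo, pinned to the v1.28 minor to match the `kubernetesVersion` used in the init config below; the sysctl settings are the kernel prerequisites kubeadm's preflight checks expect:

```bash=
# kernel prerequisites: ip forwarding and bridged traffic visible to iptables
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
# install kubeadm, kubelet, kubectl and hold them so apt upgrades don't skew versions
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```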
## Master Node (Control Plane)
https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
### oh-my-bash
https://github.com/ohmybash/oh-my-bash
```bash=
bash -c "$(curl -fsSL https://raw.githubusercontent.com/ohmybash/oh-my-bash/master/tools/install.sh)"
```
### kubeadm init
https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/
```bash=
# sudo kubeadm init --cri-socket /run/cri-dockerd.sock
```
https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/#kubeadm-k8s-io-v1beta3-APIEndpoint
```bash=
KUBEADM_TOKEN=$(kubeadm token generate)
HOST_IP=10.121.240.133
echo "kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: $KUBEADM_TOKEN
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  # master ip address
  advertiseAddress: $HOST_IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  # master hostname
  name: $HOSTNAME
  taints: null
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
apiServer:
  timeoutForControlPlane: 4m0s
# master ip address:6443
controlPlaneEndpoint: \"$HOST_IP:6443\"
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
" > kubeadm-config.yaml
sudo kubeadm init --config kubeadm-config.yaml
```
:::spoiler kdto-0
```bash=
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.121.240.140:6443 --token hwcg4h.0egzhg2iziunw79k \
--discovery-token-ca-cert-hash sha256:e665256a30b391eaa41c8961c2c23409cc8c9d97473223207dcda2d6a5d2b0c6 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.121.240.140:6443 --token hwcg4h.0egzhg2iziunw79k \
--discovery-token-ca-cert-hash sha256:e665256a30b391eaa41c8961c2c23409cc8c9d97473223207dcda2d6a5d2b0c6
```
:::
:::spoiler kdto-single-0
```bash=
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.121.240.133:6443 --token cw5ccx.dc77t52tswgb8367 \
--discovery-token-ca-cert-hash sha256:d5c3d6a7c425f8f8809b910ea5f31fa377b74f580d30f20c5b577e4b83c4391a \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.121.240.133:6443 --cri-socket unix:///var/run/cri-dockerd.sock --token cw5ccx.dc77t52tswgb8367 \
--discovery-token-ca-cert-hash sha256:d5c3d6a7c425f8f8809b910ea5f31fa377b74f580d30f20c5b577e4b83c4391a
```
:::
### reset (ONLY WHEN RESETTING)
```bash=
sudo kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock
# reset iptables
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# only needed with an external etcd; kubeadm reset already wipes /var/lib/etcd
etcdctl del "" --prefix
sudo rm -r /etc/cni/net.d/
```
### kube config
```bash=
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
### cilium-cli
https://github.com/cilium/cilium-cli/releases/tag/v0.15.0
```bash=
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
```
### cilium
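The commands below assume Helm is already installed, which earlier steps don't cover; one option is the official install script:

```bash=
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
```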
```bash=
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.14.3 --namespace kube-system
cilium version --client
cilium status --wait
cilium connectivity test
```
### fluent-bit
```bash=
# alternative: clone the chart locally so values.yaml can be edited before install
git clone https://github.com/fluent/helm-charts
vim helm-charts/charts/fluent-bit/values.yaml
helm install fluent-bit ./helm-charts/charts/fluent-bit
```
[Fluent-bit: a log forwarder](https://docs.fluentbit.io/manual/installation/kubernetes#installing-with-helm-chart)
```bash=
helm repo add fluent https://fluent.github.io/helm-charts
helm upgrade --install fluent-bit fluent/fluent-bit
# export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=fluent-bit,app.kubernetes.io/instance=fluent-bit" -o jsonpath="{.items[0].metadata.name}")
# kubectl --namespace default port-forward $POD_NAME 2020:2020
# curl http://127.0.0.1:2020
```
:::spoiler pv-log-collector.yaml
```yaml=
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-logs
  labels:
    app: logs-collector
spec:
  accessModes:
  - "ReadWriteOnce"
  storageClassName: manual
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: logs-log-collector-0
    namespace: logging
  capacity:
    storage: 5Gi
  hostPath:
    path: /logs
```
:::
```bash=
kubectl apply -f pv-log-collector.yaml
```
[knative logging](https://knative.dev/docs/serving/observability/logging/collecting-logs/#setting-up-the-collector)
```bash=
# apply the collector
kubectl apply -f https://github.com/knative/docs/raw/main/docs/serving/observability/logging/fluent-bit-collector.yaml
```
```bash=
kubectl edit -n logging configmap/log-collector-config
# in fluent-bit.conf, add:
#     @INCLUDE output-forward.conf
# and add a new key next to it:
#   output-forward.conf: |
#     [OUTPUT]
#         Name forward
#         Host log-collector.logging
#         Port 24224
#         Require_ack_response True
```
```bash=
kubectl edit configmaps/fluent-bit
# remove every existing [OUTPUT] section and add:
#     [OUTPUT]
#         Name forward
#         Host log-collector.logging
#         Port 24224
#         Require_ack_response True
```
```bash=
# Restart fluent-bit by deleting the pod
kubectl delete pods $(kubectl get pods --namespace default -l "app.kubernetes.io/name=fluent-bit,app.kubernetes.io/instance=fluent-bit" -o jsonpath="{.items[0].metadata.name}")
# Wait for the daemonset to start a new pod, log it to check errors
kubectl logs $(kubectl get pods --namespace default -l "app.kubernetes.io/name=fluent-bit,app.kubernetes.io/instance=fluent-bit" -o jsonpath="{.items[0].metadata.name}")
```
Connecting to the log collector
```bash=
# Port forward the logging pod
kubectl port-forward --namespace logging service/log-collector 8080:80
```
```bash=
# from a local workstation, tunnel the forwarded port over SSH
ssh -L localhost:8080:localhost:8080 [HOST_IP]
```
Then open [http://localhost:8080/](http://localhost:8080/) in a browser.
## Worker Nodes
```bash=
sudo kubeadm join 10.121.240.140:6443 --cri-socket unix:///var/run/cri-dockerd.sock --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
# example
# sudo kubeadm join 10.121.240.140:6443 --cri-socket /run/cri-dockerd.sock --token hwcg4h.0egzhg2iziunw79k \
# --discovery-token-ca-cert-hash sha256:e665256a30b391eaa41c8961c2c23409cc8c9d97473223207dcda2d6a5d2b0c6
```
## Controller
* specify cluster kubeconfig
```bash=
KUBECONFIG=~/.kube/config_kdto-0 kubectl get nodes
```
* Install Go (needed to build the knative CLI from source below)
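A sketch of installing the toolchain from the upstream tarball; the version number here is an assumption — check go.dev/dl for the current release:

```bash=
GO_VERSION=1.21.4
curl -LO "https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz"
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf "go${GO_VERSION}.linux-amd64.tar.gz"
rm "go${GO_VERSION}.linux-amd64.tar.gz"
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
export PATH=$PATH:/usr/local/go/bin
go version
```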
### knative
Install knative Serving and Eventing
```bash=
# Serving
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.12.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.12.0/serving-core.yaml
# Eventing
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.12.0/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.12.0/eventing-core.yaml
```
### metallb
https://metallb.universe.tf/installation/
https://metallb.universe.tf/configuration/
```bash=
# see what changes would be made, returns nonzero returncode if different
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system
# actually apply the changes, returns nonzero returncode on errors only
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
# install by manifest
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
echo "apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.100.0/24" > metallb-config.yaml
kubectl apply -f metallb-config.yaml
```
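With metallb v0.13+ in layer-2 mode, addresses from a pool are only announced once an `L2Advertisement` references it; a minimal one for the pool above (the resource name `first-pool-l2` is arbitrary):

```bash=
echo "apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool" > metallb-l2.yaml
kubectl apply -f metallb-l2.yaml
```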
### kourier
:::spoiler Without metallb and with kind
```bash=
# Without metallb and with kind
curl -LO https://github.com/knative/net-kourier/releases/download/knative-v1.12.1/kourier.yaml
vim kourier.yaml
# nodePort: 31080
# nodePort: 31443
# LoadBalancer -> NodePort
kubectl apply -f kourier.yaml
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
kubectl patch configmap/config-domain \
--namespace knative-serving \
--type merge \
--patch '{"data":{"127.0.0.1.sslip.io":""}}'
kubectl --namespace kourier-system get service kourier
```
:::
```bash=
# With metallb
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.12.1/kourier.yaml
kubectl --namespace kourier-system get service kourier
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
```
### magic DNS
Gives a sslip.io domain to the IP that metallb assigned to kourier.
```bash=
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.12.0/serving-default-domain.yaml
```
### knative-cli (kn)
```bash=
git clone https://github.com/knative/client.git knative-cli
cd knative-cli/
hack/build.sh -f
sudo mv kn /usr/local/bin
cd ..
rm -rf knative-cli
kn version
```
### hello world service test
:::spoiler To delete the service
```bash=
kn service delete hello
```
:::
```bash=
kn service create hello \
--image gcr.io/knative-samples/helloworld-go \
--port 8080 \
--env TARGET=World
```
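Once the service reports Ready it can be invoked; `-o url` makes `kn` print just the route, and the resulting domain depends on the magic-DNS setup above:

```bash=
kn service describe hello -o url
curl "$(kn service describe hello -o url)"
```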