# K8S Guide
## Prerequisites
* Set up servers for the control plane and nodes
* [Open firewall ports according to the Kubernetes requirements](https://kubernetes.io/docs/reference/networking/ports-and-protocols/) (a firewalld sketch follows this list)
* **Control Plane:** Ingress (tcp: 6443, 2379, 2380, 10250, 10259, 10257, 443)
* **Node:** Ingress (tcp: 10250, 30000-32767, 443)
* **Reverse Proxy:** Ingress (tcp: 80, 443)
* **CP - Load Balancer:** Ingress (tcp: 6443)
* **Calico:** Ingress (tcp: 179, 5473, 6443, 2379 udp: 4789, 51820, 51821)
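If the hosts use firewalld (common on RHEL-family servers), the control-plane ports listed above could be opened along these lines. This is only a sketch; adjust the port list per node role and to your own firewall tooling.
```
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --reload
```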
> Forwarding IPv4 and letting iptables see bridged traffic
```
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
```
```
sudo modprobe overlay
sudo modprobe br_netfilter
```
```
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
EOF
```
> On GCP instances, apply the same settings to the GCE network-security override file
```
cat <<EOF | sudo tee /etc/sysctl.d/60-gce-network-security.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
EOF
```
> On hardened RHEL images, apply the same settings to the hardening sysctl config
```
cat <<EOF | sudo tee /etc/sysctl.d/99-sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
EOF
```
```
sudo sysctl --system
```
> Verify that each value is set to 1
```
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```
## Install Container Runtime
* [using containerd](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd)
* [containerd on RPM provided by Docker](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#option-2-from-apt-get-or-dnf)
### Install Containerd from Docker
```
sudo yum install -y yum-utils
```
```
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
```
```
sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
```
sudo systemctl start docker
```
> Verify the installation
```
sudo docker run hello-world
```
### [Configure systemd as the cgroup driver for the container runtime](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd)
```
sudo vi /etc/containerd/config.toml
```
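If `/etc/containerd/config.toml` is missing or only contains the minimal config shipped with the Docker packages, a full default configuration can be generated first (optional; note that this overwrites any existing edits):
```
containerd config default | sudo tee /etc/containerd/config.toml
```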
> Enable the CRI plugin by commenting out (or removing) the following line
```
#disabled_plugins = ["cri"]
```
```
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
```
### [Overriding the sandbox (pause) image](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#override-pause-image-containerd)
```
sudo vi /etc/containerd/config.toml
```
```
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.k8s.io/pause:3.2"
```
```
sudo systemctl restart containerd
```
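A quick check that containerd came back up with the new configuration:
```
# should print: active
sudo systemctl is-active containerd
```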
---
## Install Kubeadm, Kubelet, Kubectl
```
# swap must be disabled for the kubelet to work properly
sudo swapoff -a
```
> Verify that swap now shows 0 bytes
```
free -h
```
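`swapoff -a` only lasts until the next reboot; to keep swap disabled permanently, the swap entries in `/etc/fstab` can be commented out. A minimal sketch (double-check the file afterwards):
```
# comment out any fstab line that mounts swap
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```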
```
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl
EOF
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
```
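A quick sanity check that the packages were installed and the service is enabled (the kubelet will restart in a loop until `kubeadm init` or `kubeadm join` provides its configuration; that is expected):
```
kubeadm version
kubectl version --client
kubelet --version
systemctl is-enabled kubelet
```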
---
## Load Balancer For HA Control Plane
### Install nginx
```
sudo yum install nginx
```
### nginx.conf
```
http {
}

# place the stream {} block at the same level as http {}
stream {
    upstream apiserver_read {
        server <IP_ControlPlane_1>:6443;   # control plane 1
        server <IP_ControlPlane_2>:6443;   # control plane 2
        server <IP_ControlPlane_3>:6443;   # control plane 3
    }
    server {
        listen 6443;
        proxy_pass apiserver_read;
    }
}
```
- Note: a control plane whose IP is not listed in the upstream block will still deploy normally, but once one of the control planes goes "NotReady", pods will fail to come up.
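After editing nginx.conf, the configuration can be validated and the service enabled; once the first control plane has been initialized, the API server should answer through the load balancer. A sketch (`<LB_IP>` is the load balancer address):
```
sudo nginx -t
sudo systemctl enable --now nginx
# once a control plane is up, this should return the API server version (or at least an HTTP response)
curl -k https://<LB_IP>:6443/version
```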
> Disable SELinux
```
vi /etc/sysconfig/selinux
```
```
# set the following in the SELinux config file
SELINUX=disabled
```
```
reboot
```
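After the reboot, the SELinux state can be confirmed:
```
getenforce
# expected: Disabled (or Permissive if only "setenforce 0" was used)
```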
---
## Kubeadm Init
### [Configure systemd as the cgroup driver for kubelet](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver)
* Configured after the Kubernetes packages have been installed
```
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
networking:
  podSubnet: "10.244.0.0/16"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
```
>Kubeadm init from config file
```
sudo kubeadm init --config kubeadm-config.yaml
```
>Kubeadm init for HA Control Plane
```
sudo kubeadm init \
  --control-plane-endpoint "<LB_IP>:6443" \
  --upload-certs \
  --pod-network-cidr <POD_CIDR>/16 \
  --apiserver-advertise-address=<IP_ControlPlane_1>
```
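When `kubeadm init` succeeds it prints follow-up commands; the usual step to make `kubectl` work for the current user is:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```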
> Remove node taints (the trailing `-` deletes the taint so pods can be scheduled on control-plane nodes)
```
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node.kubernetes.io/not-ready-
```
---
## Install a Pod Network Add-on (CNI)
### Flannel
```
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```
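A quick check that the Flannel pods come up and the nodes become Ready (the namespace may differ between Flannel releases, so grep across all namespaces):
```
kubectl get pods -A | grep -i flannel
# nodes should move to Ready once the CNI is running
kubectl get nodes
```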
### Calico
```
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
```
```
# custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16   # must match the pod CIDR passed to kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
```
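The custom resources above still need to be applied after the operator is created; the pods can then be watched until they are all Running (these commands follow the Calico operator quickstart):
```
kubectl create -f custom-resources.yaml
watch kubectl get pods -n calico-system
```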
---
## Kubeadm Join (CP & Worker)
### Kubeadm Join for HA Control Plane
```
kubeadm join <IP_LB>:6443 \
  --token <TOKEN> \
  --discovery-token-ca-cert-hash sha256:<DISCOVERY_TOKEN> \
  --control-plane \
  --certificate-key <CERTIFICATE_KEY>
```
### Kubeadm Join for Worker Node
```
sudo kubeadm join <IP_LB>:6443 \
  --token <TOKEN> \
  --discovery-token-ca-cert-hash sha256:<DISCOVERY_TOKEN>
```
> Regenerate the certificate key for control-plane joins (the uploaded certificates expire after 2 hours)
```
sudo kubeadm init phase upload-certs --upload-certs
```
> Generate a token and print the full worker join command
```
kubeadm token create --print-join-command
```
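If only the token is at hand, the value for `--discovery-token-ca-cert-hash` can also be computed on a control-plane node (command as documented for kubeadm):
```
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```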
---
## Setup Istio
```
curl -L https://istio.io/downloadIstio | sh -
```
```
cd istio-1.17.2
```
```
export PATH=$PWD/bin:$PATH
```
> Install Istio for production (CNI plugin, egress gateway, ingress gateway exposed as NodePort)
```
istioctl install \
--set profile=default \
--set components.cni.enabled=true \
--set components.egressGateways[0].enabled=true \
--set components.egressGateways[0].name=istio-egressgateway \
--set values.gateways.istio-ingressgateway.type=NodePort -y
```
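To check that the Istio control plane and gateways came up:
```
kubectl get pods -n istio-system
# prints the client and control-plane versions once istiod is reachable
istioctl version
```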
---
## Notes:
### Node Labeling
```
kubectl label nodes <NODES_NAME> <KEY>=<VALUE>
kubectl label --overwrite nodes <NODES_NAME> <KEY>=<VALUE>
kubectl get nodes --show-labels -l <KEY>=<VALUE>
```
### Node Role Labeling for Worker Nodes
```
kubectl label nodes <NODE_NAME> node-role.kubernetes.io/worker-node=
```
### Istio Namespace Sidecar Injection
```
kubectl label namespace <NAMESPACE> istio-injection=enabled
```
> Show logs from all containers in a pod
```
kubectl logs <POD_NAME> --all-containers
```
> Deploy the sample app to check that Istio is working properly
```
# each pod should show 2/2 containers once the sidecar is injected
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
```
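After deploying Bookinfo into a namespace with injection enabled, each pod should report 2/2 ready containers (app plus Envoy sidecar). A quick in-mesh check along the lines of the Istio Bookinfo docs (assumes the `ratings` pod label and `productpage:9080` service from that sample):
```
kubectl get pods
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" \
  -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
```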
> Run Skaffold
```
skaffold run -n <NAMESPACE> --kube-context kubernetes-admin@kubernetes
```
> Nodes need internet access to pull images from the registry