# Build Kubernetes Cluster HA with KubeSphere
## Environment
| Role | Hostname | IP | RAM | CPU | OS |
| -------- | -------- | -------- | -------- | -------- | -------- |
| VIP | - | 192.168.0.100 | - | - | - |
| Load Balancer | load-balancer-1 | 192.168.0.166 | 4 GB | 2 | ubuntu-focal |
| Load Balancer | load-balancer-2 | 192.168.0.217 | 4 GB | 2 | ubuntu-focal |
| Load Balancer | load-balancer-3 | 192.168.0.76 | 4 GB | 2 | ubuntu-focal |
| Master | master-1 | 192.168.0.147 | 4 GB | 2 | ubuntu-focal |
| Master | master-2 | 192.168.0.65 | 4 GB | 2 | ubuntu-focal |
| Master | master-3 | 192.168.0.180 | 4 GB | 2 | ubuntu-focal |
| Worker | worker-1 | 192.168.0.123 | 4 GB | 2 | ubuntu-focal |
| Worker | worker-2 | 192.168.0.188 | 4 GB | 2 | ubuntu-focal |
| Worker | worker-3 | 192.168.0.89 | 4 GB | 2 | ubuntu-focal |
## All Nodes
Install the dependencies required by Kubernetes and KubeKey on every node:
```shell
sudo apt install -y socat conntrack ebtables ipset
```
Map the cluster hostnames on every node:
```shell
sudo nano /etc/hosts
```
```
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
192.168.0.166 load-balancer-1
192.168.0.217 load-balancer-2
192.168.0.76 load-balancer-3
192.168.0.147 master-1
192.168.0.65 master-2
192.168.0.180 master-3
192.168.0.123 worker-1
192.168.0.188 worker-2
192.168.0.89 worker-3
```
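To confirm the mapping works, a quick resolution check can be run on each node (a sketch; `getent hosts` consults `/etc/hosts` and prints nothing for unknown names):

```shell
# Verify that every cluster hostname resolves locally
for host in load-balancer-1 load-balancer-2 load-balancer-3 \
            master-1 master-2 master-3 \
            worker-1 worker-2 worker-3; do
  getent hosts "$host" >/dev/null \
    && echo "$host: OK" \
    || echo "$host: NOT RESOLVED"
done
```

Any `NOT RESOLVED` line means the `/etc/hosts` entry on that node is missing or mistyped.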
## Load Balancer
Install HAProxy and Keepalived on all three load-balancer nodes:
```shell
sudo apt install -y keepalived haproxy
```
Configure HAProxy to forward API-server traffic to all three masters (`option tcplog` is only valid in a frontend, so it is omitted from the backend):
```shell
sudo nano /etc/haproxy/haproxy.cfg
```
```
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.0.147:6443 check # master-1
    server kube-apiserver-2 192.168.0.65:6443 check  # master-2
    server kube-apiserver-3 192.168.0.180:6443 check # master-3
```
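Before restarting the service, it is worth validating the configuration syntax:

```shell
# -c checks the configuration file and exits; no traffic is affected
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```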
Restart and enable the HAProxy service:
```shell
sudo systemctl restart haproxy && sudo systemctl enable haproxy
```
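A quick sanity check that HAProxy is up and listening on the frontend port:

```shell
# Should show a LISTEN socket on *:6443 owned by haproxy
sudo ss -tlnp | grep 6443
```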
Configure Keepalived
```shell
sudo nano /etc/keepalived/keepalived.conf
```
### Load Balancer 1
```
global_defs {
    notification_email {
    }
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance haproxy-vip {
    state MASTER
    priority 101
    interface ens3                   # adjust to this node's network interface
    virtual_router_id 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_src_ip 192.168.0.166     # load-balancer-1 (this node)
    unicast_peer {
        192.168.0.217                # load-balancer-2
        192.168.0.76                 # load-balancer-3
    }
    virtual_ipaddress {
        192.168.0.100/24             # VIP
    }
    track_script {
        chk_haproxy
    }
}
```
### Load Balancer 2
```
global_defs {
    notification_email {
    }
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance haproxy-vip {
    state BACKUP
    priority 100
    interface ens3                   # adjust to this node's network interface
    virtual_router_id 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_src_ip 192.168.0.217     # load-balancer-2 (this node)
    unicast_peer {
        192.168.0.166                # load-balancer-1
        192.168.0.76                 # load-balancer-3
    }
    virtual_ipaddress {
        192.168.0.100/24             # VIP
    }
    track_script {
        chk_haproxy
    }
}
```
### Load Balancer 3
```
global_defs {
    notification_email {
    }
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance haproxy-vip {
    state BACKUP
    priority 99
    interface ens3                   # adjust to this node's network interface
    virtual_router_id 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_src_ip 192.168.0.76      # load-balancer-3 (this node)
    unicast_peer {
        192.168.0.166                # load-balancer-1
        192.168.0.217                # load-balancer-2
    }
    virtual_ipaddress {
        192.168.0.100/24             # VIP
    }
    track_script {
        chk_haproxy
    }
}
```
Restart and enable the Keepalived service:
```shell
sudo systemctl restart keepalived && sudo systemctl enable keepalived
```
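Only one load balancer holds the VIP at a time. To see which node currently owns it (assuming the interface is `ens3`, as configured above):

```shell
# On the active node this prints the VIP; on the backups it prints nothing
ip addr show ens3 | grep 192.168.0.100
```

Stopping HAProxy on the active node (`sudo systemctl stop haproxy`) should cause the VIP to move to a backup within a few seconds, which is a simple failover test.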
## Master
Download KubeKey (the following steps are run on master-1 only):
```shell
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.2 sh -
```
Make the binary executable
```shell
chmod +x kk
```
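Confirm the binary runs:

```shell
# Print the KubeKey version to verify the download
./kk version
```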
Generate the configuration file
```shell
./kk create config --with-kubernetes v1.23.10 -f k8s-cluster.yaml
```
Customize it to match your environment
```shell
nano k8s-cluster.yaml
```
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master-1, address: 192.168.0.147, internalAddress: 192.168.0.147, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: master-2, address: 192.168.0.65, internalAddress: 192.168.0.65, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: master-3, address: 192.168.0.180, internalAddress: 192.168.0.180, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: worker-1, address: 192.168.0.123, internalAddress: 192.168.0.123, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: worker-2, address: 192.168.0.188, internalAddress: 192.168.0.188, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: worker-3, address: 192.168.0.89, internalAddress: 192.168.0.89, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - master-1
    - master-2
    - master-3
    control-plane:
    - master-1
    - master-2
    - master-3
    worker:
    - worker-1
    - worker-2
    - worker-3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: "192.168.0.100"   # the Keepalived VIP
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
```
Deploy Cluster
```shell
./kk create cluster -f k8s-cluster.yaml
```
Verify the cluster from master-1:
```shell
ubuntu@master-1:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
master-1 Ready control-plane,master 13h v1.23.10
master-2 Ready control-plane,master 13h v1.23.10
master-3 Ready control-plane,master 13h v1.23.10
worker-1 Ready worker 13h v1.23.10
worker-2 Ready worker 13h v1.23.10
worker-3 Ready worker 13h v1.23.10
ubuntu@master-1:~$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-84897d7cdf-skdmh 1/1 Running 1 (13h ago) 13h
kube-system calico-node-4wqxx 1/1 Running 0 13h
kube-system calico-node-6j7gc 1/1 Running 0 13h
kube-system calico-node-km82x 1/1 Running 0 13h
kube-system calico-node-lmvkt 1/1 Running 0 13h
kube-system calico-node-rnxl8 1/1 Running 0 13h
kube-system calico-node-x7jkv 1/1 Running 0 13h
kube-system coredns-b7c47bcdc-84dpz 1/1 Running 0 13h
kube-system coredns-b7c47bcdc-lgrb5 1/1 Running 0 13h
kube-system kube-apiserver-master-1 1/1 Running 0 13h
kube-system kube-apiserver-master-2 1/1 Running 0 13h
kube-system kube-apiserver-master-3 1/1 Running 0 13h
kube-system kube-controller-manager-master-1 1/1 Running 0 13h
kube-system kube-controller-manager-master-2 1/1 Running 0 13h
kube-system kube-controller-manager-master-3 1/1 Running 0 13h
kube-system kube-proxy-ccfh6 1/1 Running 0 13h
kube-system kube-proxy-p77xd 1/1 Running 0 13h
kube-system kube-proxy-p8xg2 1/1 Running 0 13h
kube-system kube-proxy-v6z84 1/1 Running 0 13h
kube-system kube-proxy-vq2ht 1/1 Running 0 13h
kube-system kube-proxy-z5xz8 1/1 Running 0 13h
kube-system kube-scheduler-master-1 1/1 Running 0 13h
kube-system kube-scheduler-master-2 1/1 Running 0 13h
kube-system kube-scheduler-master-3 1/1 Running 0 13h
kube-system nodelocaldns-9px5q 1/1 Running 0 13h
kube-system nodelocaldns-f8pc7 1/1 Running 0 13h
kube-system nodelocaldns-g8rz5 1/1 Running 0 13h
kube-system nodelocaldns-glfg2 1/1 Running 0 13h
kube-system nodelocaldns-h2jqq 1/1 Running 0 13h
kube-system nodelocaldns-s9rnm 1/1 Running 0 13h
```
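It is also worth confirming that the API server is reachable through the VIP rather than through an individual master (a quick check; `-k` skips certificate verification for the self-signed cluster CA):

```shell
# Query the API server's version endpoint via the load-balanced VIP
curl -k https://192.168.0.100:6443/version
```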
Enable kubectl Autocompletion
```shell
sudo apt install -y bash-completion
```
```shell
echo 'source <(kubectl completion bash)' >>~/.bashrc
```
```shell
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl >/dev/null
```
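Completion takes effect in new shells; to pick it up in the current session:

```shell
# Reload the shell configuration written above
source ~/.bashrc
```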