# Kube-vip on kubeadm: Using the LoadBalancer Feature
## Deploy the Kube-vip RBAC
```
$ kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
```
## Deploy the Kube-vip DaemonSet
```
# Declare the network interface name
$ export INTERFACE=ens18
# Fetch the latest kube-vip release tag
$ export KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
# Check the kube-vip version
$ echo $KVVERSION
v0.9.1
# On a containerd host, alias kube-vip to run the image via ctr
$ alias kube-vip="sudo ctr -n k8s.io image pull ghcr.io/kube-vip/kube-vip:$KVVERSION;sudo ctr -n k8s.io run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"
```
* Deploy Kube-vip
```
$ kube-vip manifest daemonset --services --inCluster --arp --interface $INTERFACE | kubectl apply -f -
```
* The DaemonSet pods are created on every worker node
* kube-vip-ds (DaemonSet): handles the underlying network advertisement (ARP/BGP) and traffic routing.
```
$ kubectl -n kube-system get po -l app.kubernetes.io/name=kube-vip-ds
NAME                READY   STATUS    RESTARTS   AGE
kube-vip-ds-6s9mf   1/1     Running   0          43s
kube-vip-ds-jjj9b   1/1     Running   0          43s
kube-vip-ds-wcr6v   1/1     Running   0          43s
```
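If you want to review the generated manifest before applying it, run the `kube-vip manifest daemonset …` command without piping it into `kubectl`. A rough, version-dependent excerpt is sketched below; the env var names come from the kube-vip flags documentation, while the exact values shown are assumptions for this setup:
```yaml
# Illustrative excerpt of the generated DaemonSet (not the full manifest);
# regenerate the real one with:
#   kube-vip manifest daemonset --services --inCluster --arp --interface $INTERFACE
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-vip-ds
  namespace: kube-system
spec:
  template:
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.9.1
        args:
        - manager
        env:
        - name: vip_interface   # from --interface: NIC the VIP is bound to
          value: ens18
        - name: vip_arp         # from --arp: advertise the VIP via ARP
          value: "true"
        - name: svc_enable      # from --services: watch LoadBalancer Services
          value: "true"
```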
## Deploy kube-vip-cloud-provider
* kube-vip-cloud-provider (Deployment): acts as a Kubernetes controller that watches Services and assigns VIPs (IP address management).
```
$ kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml
$ kubectl -n kube-system get po -l app=kube-vip
NAME                                      READY   STATUS    RESTARTS   AGE
kube-vip-cloud-provider-fb9c65946-85rd2   1/1     Running   0          5m26s
```
* Create an IP range for the load balancer; the usable range is `10.10.7.50-10.10.7.55`
```
$ kubectl create configmap --namespace kube-system kubevip --from-literal range-global=10.10.7.50-10.10.7.55
```
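The same pool can also be declared as a YAML ConfigMap, which is easier to version-control and to extend with per-namespace pools. The key names `range-global` and `range-<namespace>` come from the kube-vip-cloud-provider documentation; the `dev` namespace and its range below are only examples:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  range-global: 10.10.7.50-10.10.7.55   # default pool for all namespaces
  range-dev: 10.10.7.60-10.10.7.65      # example pool used only by Services in the "dev" namespace
```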
```
$ kubectl -n kube-system describe cm kubevip
Name:         kubevip
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
range-global:
----
10.10.7.50-10.10.7.55

BinaryData
====

Events:  <none>
```
## Verification
* Create a Deployment
```
$ echo 'apiVersion: apps/v1
kind: Deployment
metadata:
  name: s1-dep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: s1-dep
  template:
    metadata:
      labels:
        app: s1-dep
    spec:
      containers:
      - name: app
        image: quay.io/flysangel/image:app.golang' | kubectl apply -f -
```
* Create a LoadBalancer Service
```
$ echo 'apiVersion: v1
kind: Service
metadata:
  name: s1
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: s1-dep
  type: LoadBalancer' | kubectl apply -f -
```
```
$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.244.0.1       <none>        443/TCP        20h
s1           LoadBalancer   10.244.177.105   10.10.7.50    80:31244/TCP   12s
```
* Access the service from outside the cluster
```
$ curl 10.10.7.50
{"message":"Hello Golang"}
```
## Verify Failover
* Confirm the service VIP currently resides on node w1
```
$ kubectl -n kube-system get lease plndr-svcs-lock
NAME              HOLDER   AGE
plndr-svcs-lock   w1       7m30s
```
* Shut down node w1, then confirm the service VIP immediately fails over to node w3 and the service remains reachable
```
$ kubectl -n kube-system get lease plndr-svcs-lock
NAME              HOLDER   AGE
plndr-svcs-lock   w3       8m31s
```
## Test That the VIP Stays on the Same Node as the Pod and Follows the Pod When It Moves
* Clean up the environment
```
$ kubectl delete svc/s1 deployment/s1-dep
```
* Reconfigure the kube-vip DaemonSet to add the `svc_election=true` environment variable
```
$ kubectl -n kube-system edit ds kube-vip-ds
spec:
  ......
  template:
    ......
    spec:
      containers:
      - args:
        - manager
        env:
        - name: svc_election   # add this line
          value: "true"        # add this line
```
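As a non-interactive alternative to `kubectl edit`, the same environment variable can be added with a strategic merge patch. A minimal sketch, assuming the container in the generated DaemonSet is named `kube-vip` and using the example file name `svc-election-patch.yaml`:
```yaml
# svc-election-patch.yaml - adds svc_election=true to the kube-vip container.
# Strategic merge patches match list items by name, so the existing
# args and other env vars are preserved.
spec:
  template:
    spec:
      containers:
      - name: kube-vip
        env:
        - name: svc_election
          value: "true"
```
Apply it with `kubectl -n kube-system patch ds kube-vip-ds --patch-file svc-election-patch.yaml`.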
* To enable `svc_election=true` from the very first deployment, pass `--servicesElection` when generating the manifest
```
$ kube-vip manifest daemonset --services --servicesElection --inCluster --arp --interface $INTERFACE | kubectl apply -f -
```
* Scale the replica count down to one
```
$ echo 'apiVersion: apps/v1
kind: Deployment
metadata:
  name: s1-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: s1-dep
  template:
    metadata:
      labels:
        app: s1-dep
    spec:
      containers:
      - name: app
        image: quay.io/flysangel/image:app.golang' | kubectl apply -f -
```
* The Service must set `externalTrafficPolicy: Local`: kube-vip's `svc_election=true` only binds the VIP to a node with a local endpoint when the Service uses `externalTrafficPolicy: Local`.
```
$ echo 'apiVersion: v1
kind: Service
metadata:
  name: s1
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: s1-dep
  type: LoadBalancer' | kubectl apply -f -
```
* The pod is now scheduled onto node w1
```
$ kubectl get po -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP          NODE   NOMINATED NODE   READINESS GATES
s1-dep-6f646b4998-qcqk2   1/1     Running   0          17s   10.0.3.42   w1     <none>           <none>
$ kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        17d
s1           LoadBalancer   10.96.28.180   10.10.7.50    80:32377/TCP   36s
```
* On node w1 you can see the VIP has been assigned to this node as well
```
bigred@w1:~$ ip a s ens18
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:6e:d9:18 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 10.10.7.35/24 brd 10.10.7.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet 10.10.7.50/32 scope global deprecated ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fe6e:d918/64 scope link
       valid_lft forever preferred_lft forever
```
* Reschedule the pod
```
$ kubectl cordon w1
$ kubectl delete po s1-dep-6f646b4998-qcqk2
```
* The pod has now been rescheduled onto node m2
```
$ kubectl get po -owide
NAME                      READY   STATUS    RESTARTS   AGE     IP          NODE   NOMINATED NODE   READINESS GATES
s1-dep-6f646b4998-lbtlf   1/1     Running   0          6m21s   10.0.1.60   m2     <none>           <none>
```
* Checking the interface on node m2, we can see that the VIP follows the pod when it moves, even though no node has failed
```
bigred@m2:~$ ip a s ens18
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:ae:8a:3d brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 10.10.7.33/24 brd 10.10.7.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet 10.10.7.50/32 scope global deprecated ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:feae:8a3d/64 scope link
       valid_lft forever preferred_lft forever
```
```
$ kubectl uncordon w1
```
* Scale the pod replicas to 2; the pods land on nodes m2 and m3
```
$ kubectl scale deploy s1-dep --replicas=2
```
```
$ kubectl get po -owide
NAME                      READY   STATUS    RESTARTS   AGE    IP           NODE   NOMINATED NODE   READINESS GATES
s1-dep-6f646b4998-b9g2r   1/1     Running   0          13s    10.0.2.163   m3     <none>           <none>
s1-dep-6f646b4998-lbtlf   1/1     Running   0          9m3s   10.0.1.60    m2     <none>           <none>
```
* The VIP is still on node m2 at this point
```
bigred@m2:~$ ip a s ens18
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:ae:8a:3d brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 10.10.7.33/24 brd 10.10.7.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet 10.10.7.50/32 scope global deprecated ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:feae:8a3d/64 scope link
       valid_lft forever preferred_lft forever
```
* Evict the pod from node m2
```
$ kubectl cordon m2
$ kubectl delete po s1-dep-6f646b4998-lbtlf
```
* The pod is reassigned to node w1
```
$ kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
s1-dep-6f646b4998-b9g2r   1/1     Running   0          14m   10.0.2.163   m3     <none>           <none>
s1-dep-6f646b4998-lp4wb   1/1     Running   0          4s    10.0.3.212   w1     <none>           <none>
```
* The VIP has now moved to node m3, because with `svc_election=true` only the nodes running pods of the Service take part in the leader election.
```
bigred@m3:~$ ip a s ens18
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:a0:25:74 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 10.10.7.34/24 brd 10.10.7.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet 10.10.7.50/32 scope global deprecated ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fea0:2574/64 scope link
       valid_lft forever preferred_lft forever
```
## Requesting a Specific IP for a LoadBalancer Service
```
apiVersion: v1
kind: Service
metadata:
  name: nginx-static-lb
  annotations:
    # Request a specific IP via annotation
    kube-vip.io/loadbalancerIPs: 10.10.7.55
spec:
  type: LoadBalancer
  selector:
    app: s1-dep
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    name: web
## References
https://kube-vip.io/docs/usage/kind/#deploy-kube-vip-as-a-daemonset
https://kube-vip.io/docs/installation/flags/