# Harvester Cloud Provider

* Vanilla Kubernetes cannot provision `LoadBalancer` type services on its own; such a service stays pending until something like MetalLB is installed in the cluster to hand out addresses.
* With the Harvester Cloud Provider, a guest cluster created on Harvester no longer needs to install MetalLB itself: the cloud provider assigns IPs to `LoadBalancer` services directly.

## Setup

* Run the following command on the Harvester cluster.
> `bash -s <serviceaccount name> <namespace>`: note which service account (sa) your guest cluster should be created with, and which namespace it sits in.

```
$ curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s default default
......
########## cloud config ############
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TnpJNU1qTXhPRFF6TUI0WERUSTBNVEF4T0RBMk1UQTBNMW9YRFRNME1UQXhOakEyTVRBMApNMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRjeU9USXpNVGcwTXpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJGTDV5RkFEM0dnNzZBVnh3RWNNL0liQnVCdVR2MmdzU0dyeEZGVlgKNEtSU1VzYkMwRW9wNHFvODVmaGw5c21QSi9QaVlYOUt6bktFT0x1YjBFeXVmMkdqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCVDYwM2ZUeStzNjl6cFRLc21DCnFIZko2dXUrOGpBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBNWRTNHBkdSs2Q1htU0JPUWpJWXZ2emt2K0k3Ny8KR1JDeGl0dG94WWovMUFJaEFLcmZWenkzb1FsZzV2NktqZERlLzBtcUowdzMrUnp4NnlibTdXTEY3YWozCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    namespace: default
    user: default-default-default
  name: default-default-default
current-context: default-default-default
kind: Config
preferences: {}
users:
- name: default-default-default
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ik5wRWJzQnBvcGhQNGctYy1VNC1aZkhRZXEyR0lXNHc5S3RUZUJnYTBWZ28ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBjN2ExZGFlLWM4NjYtNGU4NC05NTJmLTgxNjQzZjc4NGFiZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.KKub9rmfNzceXR1b96RG406eYVSk0eLXMzwMEG3GTwvuP2RNFa6oJcYTBi6xWY5KYmmaJxGmpH8bbjr7qbA_-ud66Bcej9w98Zwh1wLVFcfRa7Uia6Ml0S39k62oxk2Q0qVQG_qJr6O6wDDo7_dDI4rLdS2ss10nuX9pqVENhQ7tsjuTa0_BcMh8oTMEH8Rlz1tF1tMrS5vywJ6VIfkVERdtxWPT9W33NWGVE5Y51H5L1MAXPUjkIgoPiNaIQIGZznlf5wjNyKlXJkj4Vhl2jXPGhEJhB9CLk3AsbUIb3eePiNxdpozJzcFD8yg6P-oOyNJDoympyP-zWTf8u0dKkg

########## cloud-init user data ############
write_files:
- encoding: b64
  content: YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIGNlcnRpZmljYXRlLWF1dGhvcml0eS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VKbFZFTkRRVklyWjBGM1NVSkJaMGxDUVVSQlMwSm5aM0ZvYTJwUFVGRlJSRUZxUVd0TlUwbDNTVUZaUkZaUlVVUkVRbXg1WVRKVmVVeFlUbXdLWTI1YWJHTnBNV3BaVlVGNFRucEpOVTFxVFhoUFJGRjZUVUkwV0VSVVNUQk5WRUY0VDBSQk1rMVVRVEJOTVc5WVJGUk5NRTFVUVhoT2FrRXlUVlJCTUFwTk1XOTNTa1JGYVUxRFFVZEJNVlZGUVhkM1dtTnRkR3hOYVRGNldsaEtNbHBZU1hSWk1rWkJUVlJqZVU5VVNYcE5WR2N3VFhwQ1drMUNUVWRDZVhGSENsTk5ORGxCWjBWSFEwTnhSMU5OTkRsQmQwVklRVEJKUVVKR1REVjVSa0ZFTTBkbk56WkJWbmgzUldOTkwwbGlRblZDZFZSMk1tZHpVMGR5ZUVaR1ZsZ0tORXRTVTFWellrTXdSVzl3TkhGdk9EVm1hR3c1YzIxUVNpOVFhVmxZT1V0NmJrdEZUMHgxWWpCRmVYVm1Na2RxVVdwQ1FVMUJORWRCTVZWa1JIZEZRZ292ZDFGRlFYZEpRM0JFUVZCQ1owNVdTRkpOUWtGbU9FVkNWRUZFUVZGSUwwMUNNRWRCTVZWa1JHZFJWMEpDVkRZd00yWlVlU3R6TmpsNmNGUkxjMjFEQ25GSVprbzJkWFVyT0dwQlMwSm5aM0ZvYTJwUFVGRlJSRUZuVGtsQlJFSkdRV2xCTldSVE5IQmtkUzcyUTFodFUwSlBVV3BKV1haMmVtdDJLMGszTnk4S1IxSkRlR2wwZEc5NFdXb3ZNVUZKYUVGTGNtWldlbmt6YjFGc1p6VjJOa3RxWkVSbEx6QnRjVW93ZHpNclVucDRObmxpYlRkWFRFWTNZV296Q2kwdExTMHRSVTVFSUVORlVsUkpSa2xEUVZSRkxTMHRMUzBLCiAgICBzZXJ2ZXI6IGh0dHBzOi8vMTI3LjAuMC4xOjY0NDMKICBuYW1lOiBkZWZhdWx0CmNvbnRleHRzOgotIGNvbnRleHQ6CiAgICBjbHVzdGVyOiBkZWZhdWx0CiAgICBuYW1lc3BhY2U6IGRlZmF1bHQKICAgIHVzZXI6IGRlZmF1bHQtZGVmYXVsdC1kZWZhdWx0CiAgbmFtZTogZGVmYXVsdC1kZWZhdWx0LWRlZmF1bHQKY3VycmVudC1jb250ZXh0OiBkZWZhdWx0LWRlZmF1bHQtZGVmYXVsdApraW5kOiBDb25maWcKcHJlZmVyZW5jZXM6IHt9CnVzZXJzOgotIG5hbWU6IGRlZmF1bHQtZGVmYXVsdC1kZWZhdWx0CiAgdXNlcjoKICAgIHRva2VuOiBleUpoYkdjaU9pSlNVekkxTmlJc0ltdHBaQ0k2SWs1d1JXSnpRbkJ2Y0doUU5HY3RZeTFWTkMxYVpraFJaWEV5UjBsWE5IYzVTM1JVWlVKbllUQldaMjhpZlEuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkltUmxabUYxYkhRdGRHOXJaVzRpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1dVlXMWxJam9pWkdWbVlYVnNkQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJakJqTjJFeFpHRmxMV000TmpZdE5HVTROQzA1TlRKbUxUZ3hOalF6WmpjNE5HRmlaaUlzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwa1pXWmhkV3gwT21SbFptRjFiSFFpZlEuS0t1YjlybWZOemNlWFIxYjk2Ukc0MDZlWVZTazBlTFhNendNRUczR1R3dnVQMlJORmE2b0pjWVRCaTZ4V1k1S1ltbWFKeEdtcEg4YmJqcjdxYkFfLXVkNjZCY2VqOXc5OFp3aDF3TFZGY2ZSYTdVaWE2TWwwUzM5azYyb3hrMlEwcVZRR19xSnI2TzZ3RERvN19kREk0ckxkUzJzczEwbnVYOXBxVkVOaFE3dHNqdVRhMF9CY01oOG9UTUVIOFJsejF0RjF0TXJTNXZ5d0o2Vklma1ZFUmR0eFdQVDlXMzNOV0dWRTVZNTFINUwxTUFYUFVqa0lnb1BpTmFJUUlHWnpubGY1d2pOeUtsWEprajRWaGwyalhQR2hFSmhCOUNMazNBc2JVSWIzZWVQaU54ZHBvekp6Y0ZEOHlnNlAtb095TkpEb3ltcHlQLXpXVGY4dTBkS2tnCg==
  owner: root:root
  path: /etc/kubernetes/cloud-config
  permissions: '0644'
```
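* Before wiring the output into a guest cluster, the generated kubeconfig can optionally be sanity-checked. The sketch below is only an illustration under two assumptions: the `cloud config` section of the script output is saved to a local file (the file name `harvester-cloud-config` is made up here), and the command is run where the `server:` address in that kubeconfig is reachable (the sample above points at `https://127.0.0.1:6443`, i.e. the Harvester node itself).

```
# Save the "cloud config" section of the script output to a file
# (the file name is an arbitrary choice for this example).
$ vim harvester-cloud-config          # paste the cloud config output here

# Confirm the embedded service-account token authenticates against the
# Harvester API server referenced in the kubeconfig.
$ kubectl --kubeconfig harvester-cloud-config get --raw /version
```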
* Create an RKE2 cluster through Harvester.
![image](https://hackmd.io/_uploads/BJSeA1vm1e.png)
* Under `User Data:`, append the `cloud-init user data` output from the previous step to the `#cloud-config` block, then finish the remaining settings and create the RKE2 cluster.
![image](https://hackmd.io/_uploads/S1IC6kPXJl.png)
* Go to the Harvester management UI and create an IP Pool.
![image](https://hackmd.io/_uploads/B1ZGkxvQJx.png)
* Fill in the IP Range; the pool planned here has 6 usable IPs.
![image](https://hackmd.io/_uploads/SkJUkgP7yg.png)
* Choose the VM Network and which clusters are allowed to use this IP pool; the example below lets every cluster use it.
![image](https://hackmd.io/_uploads/BkWwyxPXJg.png)

## Verification

* Create a pod and a `LoadBalancer` type service in the guest cluster; the service must carry the annotation shown below.

```
$ echo 'apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - name: http
      containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloudprovider.harvesterhci.io/ipam: pool # this annotation is required
  name: lb-svc
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: nginx
  type: LoadBalancer' | kubectl apply -f -
```
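* Besides `pool` mode, the Harvester cloud provider's IPAM annotation also supports a DHCP mode according to the Harvester documentation. The variant below is only a sketch (the service name `lb-svc-dhcp` is made up); it assumes the VM network the guest cluster sits on has a DHCP server that can hand out the external IP.

```
apiVersion: v1
kind: Service
metadata:
  name: lb-svc-dhcp
  namespace: default
  annotations:
    cloudprovider.harvesterhci.io/ipam: dhcp   # request the external IP via DHCP instead of an IP pool
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  type: LoadBalancer
```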
* Even though the guest cluster has no MetalLB, the `LoadBalancer` type service is created successfully and gets the IP `172.20.0.200` assigned to it.

```
$ kubectl get po,svc
NAME            READY   STATUS    RESTARTS   AGE
pod/nginx-pod   1/1     Running   0          2m49s

NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
service/kubernetes   ClusterIP      10.43.0.1     <none>         443/TCP        27m
service/lb-svc       LoadBalancer   10.43.93.16   172.20.0.200   80:32284/TCP   2m49s
```

* Back in the Harvester management UI, the Load Balancers page shows which cluster the `LoadBalancer` type service belongs to.
![image](https://hackmd.io/_uploads/r1xfMgDmyl.png)

## Troubleshooting

* Note: if the downstream cluster's YAML is missing the `global.cattle` setting, the Harvester Load Balancers page cannot find the "demo" cluster; it keeps matching a cluster named "kubernetes" instead and reports the error below (a hedged sketch of where this setting lives follows the references).
![image](https://hackmd.io/_uploads/BkVhFew7Jl.png)

## References

https://docs.harvesterhci.io/v1.2/rancher/cloud-provider/
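* Regarding the troubleshooting note above: the exact YAML being edited is not shown in this note, so the snippet below is only a guess at where `global.cattle` typically sits, namely in the chart values of the `harvester-cloud-provider` chart inside the downstream (Rancher-provisioned) cluster spec. The `rkeConfig.chartValues` path and the `clusterName: demo` key and value are assumptions, not taken from the source; adjust them to your own provisioning YAML.

```
# Hypothetical placement of the global.cattle setting in the downstream
# cluster spec (provisioning.cattle.io/v1 Cluster); field paths are assumptions.
spec:
  rkeConfig:
    chartValues:
      harvester-cloud-provider:
        global:
          cattle:
            clusterName: demo   # should match the guest cluster name shown in Harvester
```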