[k8s] kube-proxy
============

###### tags: `kubernetes`

:::info
+ `kube-proxy` watches the ***controlplane*** for `Service` resources being added or removed, and updates the iptables rules on its local node accordingly.
+ kube-proxy is the component that implements the ***Service*** abstraction (service discovery and load balancing).
+ These `iptables rules` intercept traffic headed for a Service (ClusterIP:port) and redirect it to one of the endpoints (Pods) that the Service fronts.
+ It runs as a DaemonSet, and its config is stored in a `ConfigMap`.
:::

![](https://hackmd.io/_uploads/HJu5gHOHh.png =400x)

+ kube-proxy runs on every K8s node and watches the cluster's Service objects and Endpoints through the K8s API.
+ In iptables mode, kube-proxy writes rules on the node to forward traffic (load balancing); the more Services there are, the more iptables rules it generates.
+ ==iptables== mode forwards every packet through the kernel's iptables, so its performance overhead should not be ignored.
+ ==ipvs== mode was introduced to address the performance problems of iptables mode; it applies rule updates incrementally and keeps existing connections intact while a Service is being updated. The mode is selected with the `mode` field in the kube-proxy ConfigMap (switching it is sketched at the end of this note).

## insufficiency of kube-proxy
+ kube-proxy currently supports only TCP and UDP, not HTTP, and it has no health-check mechanism.
+ These gaps have to be covered by another component: an Ingress Controller.

## how to monitor kube-proxy
![](https://i.imgur.com/02gniex.png)
+ The kube-proxy Pods' logs and the Prometheus metrics endpoint are the main signals to watch; a command sketch is given at the end of this note.

## how to config & troubleshoot
+ `kube-proxy` is a `DaemonSet`
+ its config is stored in a `ConfigMap`

```shell=
$ k get ds -n kube-system
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   15m
```

```yaml=
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: k8s.gcr.io/kube-proxy:v1.18.0
        name: kube-proxy
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
      - hostPath:
          path: /lib/modules
          type: ""
        name: lib-modules
```

```yaml=
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    kubeadm.kubernetes.io/component-config.hash: sha256:3c1c57a3d54be93d94daaa656d9428e3fbaf8d0cbb4d2221339c2d450391ee35
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "250"
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://controlplane:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
```
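
To view or change this configuration in practice, it is usually enough to work with the two objects dumped above. A minimal sketch, assuming a kubeadm-provisioned cluster where both the DaemonSet and the ConfigMap are named `kube-proxy` in `kube-system` (as in the manifests above); switching to ipvs mode is just one example of a change:

```shell=
# dump the DaemonSet and the ConfigMap that carries config.conf / kubeconfig.conf
$ kubectl -n kube-system get ds kube-proxy -o yaml
$ kubectl -n kube-system get cm kube-proxy -o yaml

# edit the ConfigMap in place, e.g. set `mode: "ipvs"` inside config.conf
$ kubectl -n kube-system edit cm kube-proxy

# kube-proxy only reads config.conf at startup, so restart the DaemonSet
# to pick up the change
$ kubectl -n kube-system rollout restart ds kube-proxy
```

Note that ipvs mode also needs the IPVS kernel modules (`ip_vs`, `ip_vs_rr`, ...) to be available on every node; kube-proxy of this era otherwise falls back to iptables mode.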
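
For the monitoring side, the first-line signals are the kube-proxy Pods' logs and their Prometheus metrics. A sketch, assuming the defaults in the config above (an empty `metricsBindAddress` falls back to `127.0.0.1:10249`, and an empty `healthzBindAddress` to `0.0.0.0:10256`):

```shell=
# is kube-proxy running on every node?
$ kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide

# the startup log reports which proxier is actually in use,
# e.g. "Using iptables Proxier."
$ kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20

# from a node: health check and Prometheus metrics
$ curl -s http://127.0.0.1:10256/healthz
$ curl -s http://127.0.0.1:10249/metrics | grep kubeproxy_sync_proxy_rules
```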
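
Finally, the claim from the intro (that traffic to a ClusterIP is intercepted by iptables rules and redirected to one of the endpoint Pods) can be checked on any node running in iptables mode. A sketch; `my-service` is a hypothetical Service name:

```shell=
# top-level chain that every ClusterIP:port destination is matched against
$ iptables -t nat -L KUBE-SERVICES -n | head

# all NAT rules kube-proxy generated for one Service, including the
# per-endpoint KUBE-SEP-* chains that DNAT to the Pod IPs
$ iptables-save -t nat | grep my-service
```

Each Service gets its own KUBE-SVC-* chain and each backing Pod a KUBE-SEP-* chain, which is why the rule count grows with the number of Services, as noted earlier.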