---
tags: Kubernetes, network policy, global network policy, calico
description: What are Network Policy and Global Network Policy?
robots: index, follow
---
<style>
html, body, .ui-content {
background-color: #333;
color: #ddd;
}
.markdown-body h1,
.markdown-body h2,
.markdown-body h3,
.markdown-body h4,
.markdown-body h5,
.markdown-body h6 {
color: #ddd;
}
.markdown-body h1,
.markdown-body h2 {
border-bottom-color: #ffffff69;
}
.markdown-body h1 .octicon-link,
.markdown-body h2 .octicon-link,
.markdown-body h3 .octicon-link,
.markdown-body h4 .octicon-link,
.markdown-body h5 .octicon-link,
.markdown-body h6 .octicon-link {
color: #fff;
}
.markdown-body img {
background-color: transparent;
}
.ui-toc-dropdown .nav>.active:focus>a, .ui-toc-dropdown .nav>.active:hover>a, .ui-toc-dropdown .nav>.active>a {
color: white;
border-left: 2px solid white;
}
.expand-toggle:hover,
.expand-toggle:focus,
.back-to-top:hover,
.back-to-top:focus,
.go-to-bottom:hover,
.go-to-bottom:focus {
color: white;
}
.ui-toc-dropdown {
background-color: #333;
}
.ui-toc-label.btn {
background-color: #191919;
color: white;
}
.ui-toc-dropdown .nav>li>a:focus,
.ui-toc-dropdown .nav>li>a:hover {
color: white;
border-left: 1px solid white;
}
.markdown-body blockquote {
color: #bcbcbc;
}
.markdown-body table tr {
background-color: #5f5f5f;
}
.markdown-body table tr:nth-child(2n) {
background-color: #4f4f4f;
}
.markdown-body code,
.markdown-body tt {
color: #eee;
background-color: rgba(230, 230, 230, 0.36);
}
a,
.open-files-container li.selected a {
color: #5EB7E0;
}
</style>
# Introduction to Calico Network Policy
Before discussing Calico Network Policy, recall how we used to work with firewalls: we defined rules for INPUT and OUTPUT, opened specific ports, and so on, so that services could be reached as intended while abnormal external access was blocked.
In the Kubernetes era, access control goes beyond the traditional firewall down to the container level. Put bluntly, things got more complicated: a traditional firewall can do only so much for container management, unless you upgrade to next-generation firewall appliances. This is why we need Network Policy for finer-grained network management.
In Kubernetes, containers get their networking through a CNI (Container Network Interface) plugin, and some CNI plugins support Network Policy. The most common one, Calico, does. Beyond the open-source edition, Calico also has a commercial edition with commercial support from Tigera. It offers more ways to use the product, plus observability features: in addition to ordinary monitoring, administrators can see container packet activity, traffic information, and so on. Its standout feature is a visual interface that shows the current traffic situation after a Network Policy has been applied.

*Commercial edition UI: network traffic topology, with no extra sidecar required*

*Commercial edition UI: pie chart of traffic broken down by namespace*
Common abbreviations:
```
CPS: Connections Per Second.
PPS: Packets Per Second.
BPS: Bits Per Second.
```
All of this can be set up through the enterprise edition's installation script; if you are already running the open-source Calico, the script will also detect the existing installation and adjust its version.
## 1. Differences between the open-source, cloud-managed, and commercial editions
Compared with the cloud-managed and commercial editions, the open-source edition lacks the following features:
* Egress access controls (DNS policies, egress gateways)
* Extend firewall to Kubernetes
* Hierarchical tiers
* FQDN / DNS based policy
* Micro-segmentation across host/VMs/containers
* Security policy preview, staging, and recommendation
* Compliance reporting and alerts
* Intrusion detection & prevention (IDS / IPS) for Kubernetes
* SIEM Integrations
* Application Layer (L7) observability
* Dynamic packet capture
* DNS dashboards
### 1.1 Global Network Policy is a commercial-edition feature.
As a CNI, the commercial and cloud-managed editions offer more than networking features: they bring a comprehensive upgrade in environment-wide visibility, rule flexibility, and response strategy.

Besides the features missing from the open-source edition, Global Network Policy also runs into API version (v1 vs. v3) issues, as in the following output:
```shell=
inwinstack@master:~$ kubectl create -f GNP.yaml
error: unable to recognize "GNP.yaml": no matches for kind "GlobalNetworkPolicy" in version "projectcalico.org/v3"
inwinstack@master:~$ kubectl api-resources |grep GlobalNetworkPolicy
globalnetworkpolicies crd.projectcalico.org/v1 false GlobalNetworkPolicy
```
The Global Network Policy examples below all use v3; everything else (e.g. creating pods, services, and so on) uses the standard Kubernetes specs.
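One way to make the `projectcalico.org/v3` API available through kubectl is to enable the Calico API server. The sketch below assumes Calico was installed via the tigera-operator; on other installs, `calicoctl` is the usual client for v3 resources.

```yaml
# Sketch: enable the Calico API server so kubectl can serve
# projectcalico.org/v3 resources. Assumes a tigera-operator install.
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
```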
## 2. Network Policy and Global Network Policy
In Calico, policies come in two scopes: global and namespaced. The difference is that a Global Network Policy controls the entire cluster, and every namespaced network policy must follow its rules; only when precedences are equal can a namespaced Network Policy override a global one. As noted above, a Global Network Policy can also be given the highest precedence so that no other rule takes effect.
This is the same first-match concept found in classic iptables, and Global Network Policy behaves the same way. We can combine this property with RBAC (Role-Based Access Control) to split rule management across teams: the platform team manages global network policies, while developer and test teams may use them but not modify them, which helps prevent problems caused by incorrect rules.
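The RBAC split described above can be sketched as a ClusterRole that gives app teams read-only access to global policies while the platform team keeps write access. The group name `dev-team` is a hypothetical example, not part of this lab.

```yaml
# Sketch: read-only access to Calico global network policies for app teams.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gnp-read-only
rules:
- apiGroups: ["projectcalico.org", "crd.projectcalico.org"]
  resources: ["globalnetworkpolicies"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gnp-read-only-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gnp-read-only
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: dev-team # hypothetical group name
```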
When splitting permissions, many people raise the obvious worry: besides the extra responsibility, what if a team does not know how to use them? The commercial and cloud editions provide a preview feature, so you can observe the effect before applying a policy.

~~The best approach is still to leave it to the original team.~~
Still, in an era of ever-faster change, one more piece of knowledge means one fewer all-nighter.
~~Ideally the dashboard just shows that it is not my problem.~~
Most Kubernetes deployments use a flat network architecture, though some use SDN (Software-Defined Networking) to handle networking separately; either way, the network generally needs some management.
In a default Kubernetes environment, however, everything can talk to everything, with no restrictions. Many problems can start here: after an intrusion, for example, an attacker can scan the whole subnet and gather information. There are many intrusion vectors, which deserve their own article some other day. Back to our main topic: we want Global Network Policy to solve these problems. Below we simulate a few scenarios:
1. By default, all namespaces are denied access to one another; only specified namespaces can communicate (Global Network Policy: default deny rule).
2. Adjust Global Network Policy precedence so that ordinary-precedence Network Policies stop taking effect.
3. Bonus: what do you do when a bad rule locks you out?
## 3. Test Lab
### 3.1. Environment
**Cluster managed by Harvester:**
**Spec:**
1. Ubuntu 18.04, 4 cores, 8 GB RAM, 70 GB disk.
2. Roles: 1 master, 2 workers.
3. K8S version: 1.22.1
4. CNI: Calico
5. Installation script: https://github.com/yansheng133/initk8sfortraining

6. [Register for Calico Cloud](https://www.calicocloud.io/home "Try Calico Cloud")
Click Managed Cluster.

Choose your cluster type.

Check for compatibility issues.

Copy the generated installation script and run it in the cluster.

:::info
1. The lab environment can also be built on other virtualization platforms.
2. Calico can be installed through many deployment mechanisms; some special features require disabling kube-proxy, so take extra care if kube-proxy is not under your control.
3. Calico can also be used in public cloud environments.
:::
### 3.2. Setting up the test containers
First, create a pod (testconn) that can connect anywhere.
```shell=
inwinstack@master:~$ kubectl run testconn --image=nginx
pod/testconn created
```
Check the Kubernetes API service IP address.
```shell=
inwinstack@master:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
#install the packages testconn needs
inwinstack@master:~$ kubectl exec -it testconn -- bash
root@testconn:/# apt-get update && apt-get install iputils-ping wget dnsutils -y
Get:1 http://security.debian.org/debian-security bullseye-security InRelease [44.1 kB]
Get:2 http://deb.debian.org/debian bullseye InRelease [116 kB]
Get:3 http://deb.debian.org/debian bullseye-updates InRelease [39.4 kB]
...
...
Setting up bind9-host (1:9.16.22-1~deb11u1) ...
Setting up bind9-dnsutils (1:9.16.22-1~deb11u1) ...
Setting up dnsutils (1:9.16.22-1~deb11u1) ...
Processing triggers for libc-bin (2.31-13+deb11u2) ...
#a quick ping test
root@testconn:/# ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=114 time=6.27 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=114 time=6.46 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=114 time=6.25 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=114 time=12.2 ms
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 6.248/7.785/12.165/2.530 ms
#try fetching a file
root@testconn:/# wget www.google.com
--2022-01-25 02:53:32-- http://www.google.com/
Resolving www.google.com (www.google.com)... 172.217.160.68, 2404:6800:4012:4::2004
Connecting to www.google.com (www.google.com)|172.217.160.68|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: 'index.html'
index.html [ <=> ] 14.27K --.-KB/s in 0.005s
2022-01-25 02:53:32 (2.90 MB/s) - 'index.html' saved [14614]
```
Test DNS connectivity.
```
root@testconn:/# nslookup 10.96.0.1
1.0.96.10.in-addr.arpa name = kubernetes.default.svc.cluster.local.
root@testconn:/# nslookup kube-dns.kube-system.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kube-dns.kube-system.svc.cluster.local
Address: 10.96.0.10
#test access to the default Kubernetes API server address from inside the pod
root@testconn:/# curl -k https://10.96.0.1
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
```
Create a small web service; the installation script folder contains the YAML files you need.
```shell=
inwinstack@master:~$ kubectl create -f initk8sfortraining/sample/03/internal.yaml
namespace/internalservice created
pod/internalweb created
service/internalweb created
pod/networkpod created
inwinstack@master:~$ kubectl -n internalservice get pod,svc
NAME READY STATUS RESTARTS AGE
pod/internalweb 1/1 Running 0 26m
pod/networkpod 1/1 Running 0 26m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/internalweb NodePort 10.97.127.169 <none> 9000:30014/TCP 26m
#grab the node list
inwinstack@master:~$ kubectl get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master25237.inwinstack.lab Ready control-plane,master 18h v1.22.1 172.24.0.163 <none> Ubuntu 18.04.6 LTS 5.4.0-77-generic docker://20.10.12
worker26987.inwinstack.lab Ready <none> 18h v1.22.1 172.24.0.156 <none> Ubuntu 18.04.6 LTS 5.4.0-77-generic docker://20.10.12
worker8236.inwinstack.lab Ready <none> 18h v1.22.1 172.24.0.155 <none> Ubuntu 18.04.6 LTS 5.4.0-77-generic docker://20.10.12
inwin1@registry:~$ curl 172.24.0.163:30014
==this is internal web service==
```
*internal service*
```yaml=
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: internalservice
  labels:
    name: internalservice
spec: {}
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: internalweb
  name: internalweb
  namespace: internalservice
spec:
  initContainers:
  - name: webcontentgen
    image: busybox
    command: ['sh', '-c', "echo '==this is internal web service==' > /opt/web/index.html"]
    volumeMounts:
    - mountPath: /opt/web
      name: web
  containers:
  - image: nginx
    name: internalweb
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: web
    resources: {}
  volumes:
  - name: web
    emptyDir: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: internalweb
  name: internalweb
  namespace: internalservice
spec:
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 80
  selector:
    run: internalweb
  type: NodePort
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: networkpod
  name: networkpod
  namespace: internalservice
spec:
  containers:
  - args:
    - /bin/sleep
    - "3600"
    image: busybox
    name: networkpod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
:::danger
1. The operations above may look routine and appear in many tutorials, but they hide quite a few problems; we will explore those later :)
:::
### 3.3. Verifying the base environment
1. Without a default deny rule, everyone's services are reachable :)
Use testconn to check for responses.
```shell=
inwinstack@master:~$ kubectl exec -it testconn -- bash
root@testconn:/# nslookup internalweb.internalservice.svc.cluster.local.
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: internalweb.internalservice.svc.cluster.local
Address: 10.97.127.169
root@testconn:/# curl internalweb.internalservice.svc.cluster.local:9000
==this is internal web service==
```
Test connectivity to kube-dns from networkpod in the internalservice namespace.
```shell=
inwinstack@master:~$ kubectl -n internalservice exec -it networkpod -- sh
/ # nslookup 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10:53
10.0.96.10.in-addr.arpa name = kube-dns.kube-system.svc.cluster.local
```
:::info
Okay, the basic environment is set up, and DNS and connectivity checks all pass. We can now move to the next stage: configuring Global Network Policy.
:::
2. Configuring Global Network Policy
The default deny-all rule:
:::danger
1. Once the "default-deny" rule is applied, the cluster locks up because no traffic is allowed. It is shown here only to illustrate the basic spec; we strongly advise against applying it as-is.
2. Compare "default-deny" with "default-app-policy" below.
:::
```yaml=
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: all() # select all pods
  types: # enable both ingress and egress
  - Ingress
  # no ingress rules are defined, so no ingress traffic is allowed
  - Egress
  # no egress rules are defined, so no egress traffic is allowed
```
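Since this cluster-wide rule locks everything, a safer way to experiment is a default deny scoped to a single namespace. This is a sketch using the standard Kubernetes NetworkPolicy API against the lab's internalservice namespace:

```yaml
# Sketch: default deny limited to the internalservice namespace only.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: internalservice
spec:
  podSelector: {} # all pods in this namespace
  policyTypes:
  - Ingress
  - Egress
```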
Allow DNS only for the selected namespaces.
First, look at the definition:
```yaml=
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-app-policy
spec:
  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system", "default", "tigera-system"}
  # GNP default-app-policy applies to every namespace except
  # "kube-system", "calico-system", "default", and "tigera-system"
  types:
  - Ingress
  - Egress
  egress:
  - action: Allow
    protocol: UDP
    destination:
      selector: k8s-app == "kube-dns" # only pods labeled kube-dns are reachable
      ports:
      - 53 # allow port 53, the standard DNS port
```
First, check which pods carry the kube-dns label; they should be in kube-system.
```shell=
inwinstack@master:~$ kubectl -n kube-system get po --show-labels -owide |grep kube-dns
coredns-78fcd69978-bn8bb 1/1 Running 1 (17h ago) 19h 10.6.28.203 master25237.inwinstack.lab <none> <none> k8s-app=kube-dns,pod-template-hash=78fcd69978
coredns-78fcd69978-nw9pt 1/1 Running 1 (17h ago) 19h 10.6.28.202 master25237.inwinstack.lab <none> <none> k8s-app=kube-dns,pod-template-hash=78fcd69978
```
We can see that this environment has two DNS pods, both on master25237.inwinstack.lab.
Let's also look at the service information.
```shell=
inwinstack@master:~$ kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 19h
metrics-server ClusterIP 10.104.57.107 <none> 443/TCP 19h
traefik-ingress-service ClusterIP 10.109.120.97 <none> 80/TCP,8080/TCP 19h
```
Time to apply the Global Network Policy.
```shell=
inwinstack@master:~$ kubectl create -f gnp.yaml
globalnetworkpolicy.projectcalico.org/default.default-app-policy created
inwinstack@master:~$ kubectl get globalnetworkpolicy
NAME CREATED AT
default.default-app-policy 2022-02-11T01:42:11Z
```
Re-run the earlier tests. If an FQDN still resolves, it is probably cache, which will be cleared within seconds.
```shell=
inwinstack@master:~$ kubectl -n internalservice exec -it networkpod -- sh
/ # nslookup 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10:53
10.0.96.10.in-addr.arpa name = kube-dns.kube-system.svc.cluster.local
/ # nslookup internalweb.internalservice.svc.cluster.local.
Server: 10.96.0.10
Address: 10.96.0.10:53
*** Can't find internalweb.internalservice.svc.cluster.local.: No answer
/ # nslookup www.google.com
Server: 10.96.0.10
Address: 10.96.0.10:53
Non-authoritative answer:
Name: www.google.com
Address: 2404:6800:4012:2::2004
*** Can't find www.google.com: No answer
/ # ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ # ping -c 4 10.43.46.165
PING 10.43.46.165 (10.43.46.165): 56 data bytes
--- 10.43.46.165 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
```
:::info
Based on the results above:
1. Forward DNS lookups of in-cluster names (internalweb.internalservice.svc.cluster.local.) return no answer.
2. Forward lookups of external names (www.google.com) return a non-authoritative answer.
3. Pings to 8.8.8.8 get no response.
:::
Recall the relevant fragment of the rule, which governs both external and internal lookups:
```yaml=
  egress:
  - action: Allow
    protocol: UDP
    destination:
      selector: k8s-app == "kube-dns" # only pods labeled kube-dns are reachable
      ports:
      - 53 # allow port 53, the standard DNS port
```
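Note that this fragment only allows UDP. DNS can also run over TCP (for large responses, for instance), so a more complete sketch would add a second egress rule for TCP 53; whether you need it depends on your workloads. This addition is not part of the lab's policy:

```yaml
# Sketch: additional egress rule allowing DNS over TCP as well.
  - action: Allow
    protocol: TCP
    destination:
      selector: k8s-app == "kube-dns"
      ports:
      - 53
```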
Test from the default namespace.
```shell=
inwinstack@master:~$ kubectl exec -it testconn -- bash
root@testconn:/# nslookup kube-dns.kube-system.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kube-dns.kube-system.svc.cluster.local
Address: 10.96.0.10
root@testconn:/# nslookup internalweb.internalservice.svc.cluster.local.
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: internalweb.internalservice.svc.cluster.local
Address: 10.97.127.169
root@testconn:/# nslookup 10.97.127.169
169.127.97.10.in-addr.arpa name = internalweb.internalservice.svc.cluster.local.
root@testconn:/# nslookup www.google.com
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
Name: www.google.com
Address: 172.217.160.68
Name: www.google.com
Address: 2404:6800:4012:2::2004
root@testconn:/# nslookup 172.217.160.68
68.160.217.172.in-addr.arpa name = tsa01s09-in-f4.1e100.net.
Authoritative answers can be found from:
root@testconn:/# curl internalweb.internalservice.svc.cluster.local:9000
^C
root@testconn:/# curl 10.97.127.169:9000
^C
root@testconn:/# ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=114 time=6.01 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=114 time=6.15 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=114 time=5.88 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=114 time=5.93 ms
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 5.882/5.993/6.153/0.103 ms
root@testconn:/# wget www.google.com
--2022-01-27 05:29:55-- http://www.google.com/
Resolving www.google.com (www.google.com)... 142.251.42.228, 2404:6800:4012::2004
Connecting to www.google.com (www.google.com)|142.251.42.228|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: 'index.html'
index.html [ <=> ] 14.23K --.-KB/s in 0.004s
2022-01-27 05:29:56 (3.18 MB/s) - 'index.html' saved [14569]
```
:::info
Based on the results above:
1. In the default namespace, both forward and reverse DNS lookups work.
2. curl to the internalweb.internalservice.svc.cluster.local. service fails, whether by DNS name or by IP.
3. ping and wget to www.google.com both work.
:::
Verification from pods inside the cluster, across namespaces, is complete; next we test from outside the cluster.
Run connection tests against the master (172.24.0.163) and the two workers (172.24.0.155, 172.24.0.156).
Check whether the original NodePort still responds.
```shell=
└[~]> curl 172.24.0.163:30014
^C
└[~]> curl 172.24.0.156:30014
^C
└[~]> curl 172.24.0.155:30014
^C
```
:::info
1. Nothing responds, because everything except DNS is blocked.
:::
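If you wanted to re-open just the web service while keeping the default deny, an ingress allow rule targeting the internalweb pod could look like the sketch below. This policy is not applied in the lab; the name and selector follow the lab's labels:

```yaml
# Sketch: allow ingress to the internalweb pod on its container port 80.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-internalweb
spec:
  selector: run == "internalweb"
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 80
```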
Delete the GlobalNetworkPolicy.
```shell=
inwinstack@master:~$ kubectl delete -f gnp.yaml
globalnetworkpolicy.projectcalico.org "default-app-policy" deleted
```
Re-run the connection tests.
```shell=
└[~]> curl 172.24.0.163:30014
==this is internal web service==
└[~]> curl 172.24.0.155:30014
==this is internal web service==
└[~]> curl 172.24.0.156:30014
==this is internal web service==
```
:::info
1. This confirms that the Global Network Policy behaved correctly.
2. Once the Global Network Policy is removed, the services immediately work as expected again.
:::
## 4. Rescue after a bad rollout
If a Global Network Policy or an ordinary Network Policy is misconfigured, the CLI can become unusable, as with the default-deny Global Network Policy above.
At that point, go back to how the CLI works: kubectl is a client that calls the cluster; the cluster processes the request and returns the result to the client.
The CLI stops working mainly because iptables or other rules block kubectl's calls. In that case, find a pod on the master node that has kubectl installed, upload your kubeconfig, and change the server address to 127.0.0.1; calling the cluster this way lets you repair the environment. For example, in the config sample below, the server address can be changed from https://172.24.0.163:6443 to https://127.0.0.1:6443.
```yaml=
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t....
    ...
    ...0tCg==
    server: https://172.24.0.163:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tL...
    ...
    ...0tCg==
    client-key-data: LS0tLS...
    ...
    ...LQo=
```
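The address swap above can also be scripted. A minimal sketch, using the lab's addresses; `/tmp/config` stands in for the uploaded kubeconfig file:

```shell
# Sketch: rewrite the apiserver address in a copied kubeconfig.
# Here we fabricate a one-line stand-in for the uploaded file.
cat > /tmp/config <<'EOF'
server: https://172.24.0.163:6443
EOF
sed -i 's#https://172\.24\.0\.163:6443#https://127.0.0.1:6443#' /tmp/config
grep 'server:' /tmp/config
```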
## 5. References
[1. Official - Cluster Networking](https://kubernetes.io/docs/concepts/cluster-administration/networking/ "Cluster Networking")
[2. Try Calico Cloud](https://www.calicocloud.io/home "Try Calico Cloud")