# Configure MTU to maximize network performance in kind
## 1. Configure the MTU value in Podman Network
### 1.1. Modify the existing Podman network
```
$ sudo nano /etc/containers/networks/kind.json
```
Add the following options block:
```
"options": {
"mtu": "6000"
},
```
The complete file should look like this:
```
{
     "name": "kind",
     "id": "df72cb97d019a6025357f12b68870a26897b2c288a24b11c5ec0c2f9b3e5b878",
     "driver": "bridge",
     "network_interface": "podman1",
     "created": "2024-09-27T16:14:32.979631978+08:00",
     "subnets": [
          {
               "subnet": "fc00:f853:ccd:e793::/64",
               "gateway": "fc00:f853:ccd:e793::1"
          },
          {
               "subnet": "10.89.0.0/24",
               "gateway": "10.89.0.1"
          }
     ],
     "ipv6_enabled": true,
     "internal": false,
     "dns_enabled": true,
     "options": {
          "mtu": "6000"
     },
     "ipam_options": {
          "driver": "host-local"
     }
}
```
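Alternatively, the MTU can be set at creation time with `podman network create --opt mtu=...`, which avoids hand-editing the JSON. This only helps if the network is created before kind first uses it, since kind otherwise creates the network itself. A sketch, with `--ipv6` added to mirror the dual-stack setup above:
```
$ podman network create --ipv6 -o mtu=6000 kind
```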
### 1.2. Verify that the podman1 interface MTU is now 6000
```
$ ip a s podman1
```
Example output:
```
9: podman1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 6000 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:25:b2:4b:c0:cd brd ff:ff:ff:ff:ff:ff
    inet 10.89.0.1/24 brd 10.89.0.255 scope global podman1
       valid_lft forever preferred_lft forever
    inet6 fc00:f853:ccd:e793::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::5c25:b2ff:fe4b:c0cd/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
```
### 1.3. Stop the kind cluster
```
$ kco c31
```
### 1.4. Start the kind cluster
```
$ kci c31
```
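The restart matters because the node containers' veth interfaces are only recreated (and thus only pick up the new MTU) when the containers start. `kco` and `kci` are local shell aliases for stopping and starting the kind cluster `c31`; without them, the equivalent raw commands (assuming the standard kind node-container names seen later in this guide) would be:
```
$ podman stop c31-control-plane c31-worker c31-worker2
$ podman start c31-control-plane c31-worker c31-worker2
```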
### 1.5. Verify that the kind node veth interfaces now have MTU 6000
```
$ ip a s
```
Example output:
```
10: veth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 6000 qdisc noqueue master podman1 state UP group default qlen 1000
link/ether 4a:c4:0e:46:e5:7e brd ff:ff:ff:ff:ff:ff link-netns netns-9ff96fea-b7ef-8593-61a2-467b31e2d98e
inet6 fe80::48c4:eff:fe46:e57e/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
11: veth1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 6000 qdisc noqueue master podman1 state UP group default qlen 1000
link/ether f2:34:71:74:cf:bd brd ff:ff:ff:ff:ff:ff link-netns netns-86850d7d-bc9e-3509-524b-f40e2cd61514
inet6 fe80::f034:71ff:fe74:cfbd/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
12: veth2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 6000 qdisc noqueue master podman1 state UP group default qlen 1000
link/ether de:6c:ef:f7:e6:f5 brd ff:ff:ff:ff:ff:ff link-netns netns-b5b7d1a1-af28-b060-eec2-3c3f87150110
inet6 fe80::dc6c:efff:fef7:e6f5/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
```
## 2. Configure Calico MTU
### 2.1. Set the Calico MTU to 5950
```!
$ kubectl patch configmap/calico-config -n kube-system --type merge \
-p '{"data":{"veth_mtu": "5950"}}'
```
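The value 5950 follows the Calico sizing guidance: the workload MTU should be the base network MTU minus the encapsulation overhead (50 bytes for VXLAN, 20 for IP-in-IP), so 6000 - 50 = 5950. If Calico is managed by the Tigera operator rather than raw manifests, the same change goes through the `Installation` resource instead of the ConfigMap; a sketch, assuming the default resource name `default`:
```
$ kubectl patch installation.operator.tigera.io default --type merge \
  -p '{"spec":{"calicoNetwork":{"mtu":5950}}}'
```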
### 2.2. Rolling restart all calico-node pods
```
$ kubectl rollout restart daemonset calico-node -n kube-system
```
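To block until the restart has rolled through every node before checking, use `kubectl rollout status`:
```
$ kubectl rollout status daemonset calico-node -n kube-system
```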
### 2.3. Check Calico pod status
```
$ kubectl -n kube-system get pods -l "k8s-app=calico-node"
```
Expected output:
```
NAME READY STATUS RESTARTS AGE
calico-node-2fxx7 1/1 Running 0 64s
calico-node-cw58f 1/1 Running 0 53s
calico-node-x872p 1/1 Running 0 74s
```
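Note that the new MTU only applies to veth interfaces created after the restart, so existing workload pods keep their old value. To confirm it took effect, check `eth0` inside any pod created afterwards (`<pod-name>` is a placeholder); it should print 5950:
```
$ kubectl exec <pod-name> -- cat /sys/class/net/eth0/mtu
5950
```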
## 3. Testing Connectivity Between Kubernetes Pods with Iperf3
### 3.1. Deploy Iperf3 as a DaemonSet
```
$ echo 'apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf3-ds
spec:
  selector:
    matchLabels:
      app: iperf3
  template:
    metadata:
      labels:
        app: iperf3
    spec:
      containers:
      - name: iperf3
        image: leodotcloud/swiss-army-knife
        ports:
        - containerPort: 5201' > ~/iperf3-ds.yaml
```
### 3.2. Create the DaemonSet
```
$ kubectl apply -f ~/iperf3-ds.yaml
```
### 3.3. Check that iperf3 pods are running on all nodes in the cluster
```
$ kubectl get pods -l "app=iperf3" -o wide
```
Example output:
```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
iperf3-ds-49bwz 1/1 Running 0 32m 10.244.166.25 c31-worker2 <none> <none>
iperf3-ds-9bhn5 1/1 Running 0 32m 10.244.146.203 c31-control-plane <none> <none>
iperf3-ds-tdfw9 1/1 Running 0 32m 10.244.152.26 c31-worker <none> <none>
```
### 3.4. Choose a pod to run in server mode
```!
$ kubectl exec -it iperf3-ds-tdfw9 -- iperf3 -s -p 12345
```
Example output:
```
-----------------------------------------------------------
Server listening on 12345
-----------------------------------------------------------
```
### 3.5. Choose a pod to run in client mode
```!
$ kubectl exec -it iperf3-ds-49bwz -- iperf3 -c 10.244.152.26 -p 12345
```
Example output:
```
Connecting to host 10.244.152.26, port 12345
[ 4] local 10.244.166.25 port 36468 connected to 10.244.152.26 port 12345
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.89 GBytes 16.2 Gbits/sec 112 933 KBytes
[ 4] 1.00-2.00 sec 1.85 GBytes 15.9 Gbits/sec 83 979 KBytes
[ 4] 2.00-3.00 sec 1.90 GBytes 16.3 Gbits/sec 121 1.20 MBytes
[ 4] 3.00-4.00 sec 1.92 GBytes 16.5 Gbits/sec 0 1.20 MBytes
[ 4] 4.00-5.00 sec 1.91 GBytes 16.4 Gbits/sec 154 1.20 MBytes
[ 4] 5.00-6.00 sec 1.89 GBytes 16.3 Gbits/sec 0 1.20 MBytes
[ 4] 6.00-7.00 sec 1.88 GBytes 16.2 Gbits/sec 188 1.21 MBytes
[ 4] 7.00-8.00 sec 1.88 GBytes 16.1 Gbits/sec 0 1.21 MBytes
[ 4] 8.00-9.00 sec 1.91 GBytes 16.4 Gbits/sec 238 1.48 MBytes
[ 4] 9.00-10.00 sec 1.85 GBytes 15.9 Gbits/sec 0 1.48 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 18.9 GBytes 16.2 Gbits/sec 896 sender
[ 4] 0.00-10.00 sec 18.9 GBytes 16.2 Gbits/sec receiver
iperf Done.
```
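Before comparing numbers, it can be worth confirming that full-size packets actually cross the pod network unfragmented. A quick check with `ping` and the DF bit set, assuming the `swiss-army-knife` image ships iputils ping; the payload size is the Calico MTU minus 28 bytes of IP and ICMP headers (5950 - 28 = 5922):
```
$ kubectl exec -it iperf3-ds-49bwz -- ping -M do -s 5922 -c 3 10.244.152.26
```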
## 4. Test K8s Pod network performance at other MTU settings
### 4.1. Baseline: podman network MTU set to the original 1500, Calico set to 1480
```!
$ kubectl exec -it iperf3-ds-676l2 -- iperf3 -c 10.244.152.26 -p 12345
```
Example output:
```
Connecting to host 10.244.152.26, port 12345
[ 4] local 10.244.166.23 port 44364 connected to 10.244.152.26 port 12345
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.50 GBytes 12.9 Gbits/sec 135 1.50 MBytes
[ 4] 1.00-2.00 sec 1.49 GBytes 12.8 Gbits/sec 0 1.52 MBytes
[ 4] 2.00-3.00 sec 1.50 GBytes 12.9 Gbits/sec 0 1.52 MBytes
[ 4] 3.00-4.00 sec 1.50 GBytes 12.8 Gbits/sec 646 1.55 MBytes
[ 4] 4.00-5.00 sec 1.51 GBytes 12.9 Gbits/sec 1659 1.09 MBytes
[ 4] 5.00-6.00 sec 1.37 GBytes 11.8 Gbits/sec 0 1.09 MBytes
[ 4] 6.00-7.00 sec 1.49 GBytes 12.8 Gbits/sec 656 1.10 MBytes
[ 4] 7.00-8.00 sec 1.50 GBytes 12.9 Gbits/sec 630 1.12 MBytes
[ 4] 8.00-9.00 sec 1.44 GBytes 12.4 Gbits/sec 438 1.13 MBytes
[ 4] 9.00-10.00 sec 1.50 GBytes 12.9 Gbits/sec 703 1.15 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 14.8 GBytes 12.7 Gbits/sec 4867 sender
[ 4] 0.00-10.00 sec 14.8 GBytes 12.7 Gbits/sec receiver
iperf Done.
```
### 4.2. podman network MTU set to 9000, Calico set to 8950
```!
$ kubectl exec -it iperf3-ds-rlsnj -- iperf3 -c 10.244.152.36 -p 12345
```
Example output:
```
Connecting to host 10.244.152.36, port 12345
[ 4] local 10.244.166.33 port 54388 connected to 10.244.152.36 port 12345
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.81 GBytes 15.5 Gbits/sec 7 3.57 MBytes
[ 4] 1.00-2.00 sec 1.85 GBytes 15.9 Gbits/sec 0 3.60 MBytes
[ 4] 2.00-3.00 sec 1.83 GBytes 15.8 Gbits/sec 0 3.60 MBytes
[ 4] 3.00-4.00 sec 1.84 GBytes 15.8 Gbits/sec 0 3.66 MBytes
[ 4] 4.00-5.00 sec 1.81 GBytes 15.5 Gbits/sec 0 3.66 MBytes
[ 4] 5.00-6.00 sec 1.80 GBytes 15.5 Gbits/sec 0 3.66 MBytes
[ 4] 6.00-7.00 sec 1.78 GBytes 15.3 Gbits/sec 0 4.18 MBytes
[ 4] 7.00-8.00 sec 1.82 GBytes 15.7 Gbits/sec 0 4.18 MBytes
[ 4] 8.00-9.00 sec 1.83 GBytes 15.7 Gbits/sec 0 4.18 MBytes
[ 4] 9.00-10.00 sec 1.79 GBytes 15.4 Gbits/sec 0 4.18 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 18.2 GBytes 15.6 Gbits/sec 7 sender
[ 4] 0.00-10.00 sec 18.2 GBytes 15.6 Gbits/sec receiver
iperf Done.
```
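Pulling the three runs together:

| podman network MTU | Calico MTU | Throughput | Retransmits |
| --- | --- | --- | --- |
| 1500 | 1480 | 12.7 Gbits/sec | 4867 |
| 6000 | 5950 | 16.2 Gbits/sec | 896 |
| 9000 | 8950 | 15.6 Gbits/sec | 7 |

In these runs, MTU 6000/5950 gave the highest throughput, while 9000/8950 traded a little bandwidth for far fewer retransmits; the best setting depends on the underlying host network.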
## Ref
- [How to configure the MTU value in podman network - Red Hat kb](https://access.redhat.com/solutions/7007687)
- [Configure MTU to maximize network performance - Calico Docs](https://docs.tigera.io/calico/latest/networking/configuring/mtu#configure-mtu)
- [Testing Connectivity Between Kubernetes Pods with Iperf3 - SUSE kb](https://www.suse.com/support/kb/doc/?id=000020954)