
Installing k8s & playing with Multus

https://thinkit.co.jp/article/18188

Followed it partway through.

    1  sudo apt-get install -y apt-transport-https ca-certificates curl
    2  echo "$(logname) ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/$(logname)
    3  sudo apt-get update
    4  ip a
    5  sudo apt-get install -y apt-transport-https ca-certificates curl
    6  sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
    7  curl https://packages.cloud.google.com/apt/doc/apt-key.gpg
    8  sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
    9  sudo mydir -p 777 /etc/apt/keystrings
   10  sudo mkdir -p 777 /etc/apt/keystrings
   11  sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
   12  sudo rm -rf /etc/apt/keystrings
   13  sudo mkdir -p 777 /etc/apt/keyrings
   14  sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
   15  echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
   16  sudo apt-get update
   17  sudo apt-get install -y kubelet kubeadm kubectl
   18  sudo apt-mark hold kubelet kubeadm kubectl
   19  cat /var/lib/kubelet/kubeadm-flags.env
   20  ip a
   21  sudo dhclient -r ens160
   22  sudo hostanmectl set-hostname master
   23  sudo hostnamectl set-hostname master
   24  ip a
   25  history
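
The history above includes a few mistyped attempts (mydir, keystrings, and mkdir -p 777, which actually creates a directory named 777). Distilled from the transcript, the working sequence is roughly:

sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo hostnamectl set-hostname master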

Clone the master VM twice (these become the workers).

Turn swap off

katsuma@master:~$ sudo swapoff -a
katsuma@master:~$ sudo vi /etc/fstab
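
The vi session is for commenting out the swap entry in /etc/fstab so the change survives a reboot; a non-interactive way to do the same, assuming a standard fstab layout:

sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab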

Environment setup: container runtime
https://kubernetes.io/ja/docs/setup/production-environment/container-runtimes/

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

katsuma@master:~$ sudo modprobe overlay
katsuma@master:~$ sudo modprobe br_netfilter
katsuma@master:~$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables  = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> net.ipv4.ip_forward                 = 1
> EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
katsuma@master:~$ sudo sysctl --system

Verify the settings with the following:

katsuma@master:~$ lsmod | grep br_netfilter
br_netfilter           28672  0
bridge                176128  1 br_netfilter
katsuma@master:~$ lsmod | grep overlay
overlay               118784  0
katsuma@master:~$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1

Install containerd
https://github.com/containerd/containerd/blob/main/docs/getting-started.md

The official docs were hard to follow, so install it this way instead:

sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
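
One assumption worth flagging: kubeadm recommends the systemd cgroup driver with containerd, and the default config.toml generated above leaves SystemdCgroup = false. If kubelet and containerd disagree on the cgroup driver, pods can crash-loop; the usual fix (not recorded in this run) is:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd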

Initialize (init) the master. The default pod CIDR would overlap with the host network (192.168.4.0/24) here, so change it:

katsuma@master:~$ sudo kubeadm init --pod-network-cidr=192.168.100.0/24
[init] Using Kubernetes version: v1.27.1

///snip///

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.4.151:6443 --token zces31.85tolwvckjk435cj \
	--discovery-token-ca-cert-hash sha256:101463662512ee066b1c1706e1a2f82fd038477e3d809f3ff12fdfd243bf174e

Make kubectl usable:

katsuma@master:~$ mkdir -p $HOME/.kube
katsuma@master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
katsuma@master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run the join on the worker side:

katsuma@worker-1:~$ sudo kubeadm join 192.168.4.151:6443 --token zces31.85tolwvckjk435cj \
> --discovery-token-ca-cert-hash sha256:101463662512ee066b1c1706e1a2f82fd038477e3d809f3ff12fdfd243bf174e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
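
If the token printed by kubeadm init expires (24h by default) or gets lost, a fresh join command can be generated on the master:

sudo kubeadm token create --print-join-command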

Run get nodes on the master and confirm the nodes are registered (still NotReady at this point):

katsuma@master:~$ kubectl get nodes
NAME       STATUS     ROLES           AGE     VERSION
master     NotReady   control-plane   4m23s   v1.27.1
worker-1   NotReady   <none>          32s     v1.27.1
worker-2   NotReady   <none>          26s     v1.27.1

Install Calico
Reference: https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart#install-calico
The quickstart above assumes a different CIDR, so instead follow
https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico
and download the config once, rewrite the cidr, then create it.

katsuma@master:~$ curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   827  100   827    0     0   3062      0 --:--:-- --:--:-- --:--:--  3062
katsuma@master:~$ vi custom-resources.yaml
katsuma@master:~$ cat custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 192.168.100.0/24
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
katsuma@master:~$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
katsuma@master:~$ kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

Confirm that all pods reach Running (takes a few minutes):

katsuma@master:~$ kubectl get pods -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-5f579d4567-9fbnh          1/1     Running   0          109s
calico-apiserver   calico-apiserver-5f579d4567-z6khb          1/1     Running   0          109s
calico-system      calico-kube-controllers-789dc4c76b-mx9dt   1/1     Running   0          2m53s
calico-system      calico-node-ghn89                          1/1     Running   0          2m53s
calico-system      calico-node-rpgw6                          1/1     Running   0          2m53s
calico-system      calico-node-vvwcn                          1/1     Running   0          2m53s
calico-system      calico-typha-df795454f-hn42g               1/1     Running   0          2m44s
calico-system      calico-typha-df795454f-x7j9g               1/1     Running   0          2m53s
calico-system      csi-node-driver-gk5lz                      2/2     Running   0          2m53s
calico-system      csi-node-driver-mhzcc                      2/2     Running   0          2m53s
calico-system      csi-node-driver-rfkkh                      2/2     Running   0          2m53s
kube-system        coredns-5d78c9869d-5nphv                   1/1     Running   0          50m
kube-system        coredns-5d78c9869d-mfhcn                   1/1     Running   0          50m
kube-system        etcd-master                                1/1     Running   0          50m
kube-system        kube-apiserver-master                      1/1     Running   0          50m
kube-system        kube-controller-manager-master             1/1     Running   0          50m
kube-system        kube-proxy-hbtwr                           1/1     Running   0          47m
kube-system        kube-proxy-l54rg                           1/1     Running   0          46m
kube-system        kube-proxy-qmb7v                           1/1     Running   0          50m
kube-system        kube-scheduler-master                      1/1     Running   0          50m
tigera-operator    tigera-operator-549d4f9bdb-bbpv4           1/1     Running   0          3m1s

Once Calico is running, inter-node networking comes up and the nodes turn Ready:

katsuma@master:~$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master     Ready    control-plane   51m   v1.27.1
worker-1   Ready    <none>          47m   v1.27.1
worker-2   Ready    <none>          47m   v1.27.1

Spin up a Pod

Write a manifest file and apply it:

katsuma@master:~/obenkyo$ cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:1.17
    name: nginx
katsuma@master:~/obenkyo$ kubectl apply -f nginx.yaml
pod/nginx created

Confirm the Pod is up:

katsuma@master:~/obenkyo$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          41s

describe gives more detail:

katsuma@master:~/obenkyo$ kubectl describe pod nginx
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             worker-2/192.168.4.153
Start Time:       Sat, 13 May 2023 16:42:10 +0000
Labels:           <none>
Annotations:      cni.projectcalico.org/containerID: 9d304d8250f126294930fe580c80bff1e74f8ed015bc457ae78bc52064fc7985
                  cni.projectcalico.org/podIP: 192.168.100.195/32
                  cni.projectcalico.org/podIPs: 192.168.100.195/32
Status:           Running
IP:               192.168.100.195
IPs:
  IP:  192.168.100.195
Containers:
  nginx:
    Container ID:   containerd://cb4dba62739549ddf95c12d34abf48a6aa18fff3cab21b2ecdc4471ab73a699e
    Image:          nginx:1.17
    Image ID:       docker.io/library/nginx@sha256:6fff55753e3b34e36e24e37039ee9eae1fe38a6420d8ae16ef37c92d1eb26699
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 13 May 2023 16:42:19 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-twfkr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-twfkr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  104s  default-scheduler  Successfully assigned default/nginx to worker-2
  Normal  Pulling    104s  kubelet            Pulling image "nginx:1.17"
  Normal  Pulled     96s   kubelet            Successfully pulled image "nginx:1.17" in 8.047756387s (8.047761277s including waiting)
  Normal  Created    96s   kubelet            Created container nginx
  Normal  Started    95s   kubelet            Started container nginx

Try curl against it:

katsuma@master:~/obenkyo$ curl 192.168.100.195
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

A bit more study

https://qiita.com/minorun365/items/0441e4878f0984a9fc0a

Create and start a bastion Pod (fumidai = stepping stone):

katsuma@master:~/obenkyo$ cat fumidai-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: fumidai
spec:
  containers:
  - name: bastion
    image: debian:stable
    command: ["sleep", "infinity"]
katsuma@master:~/obenkyo$ kubectl apply -f fumidai-pod.yaml
pod/fumidai created

Confirm it started (this time both ended up on worker-2):

katsuma@master:~/obenkyo$ kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE     IP                NODE       NOMINATED NODE   READINESS GATES
fumidai   1/1     Running   0          44s     192.168.100.196   worker-2   <none>           <none>
nginx     1/1     Running   0          8m51s   192.168.100.195   worker-2   <none>           <none>

Exec into the pod and curl:

katsuma@master:~/obenkyo$ kubectl exec -it fumidai -- bash
root@fumidai:/# curl
bash: curl: command not found
root@fumidai:/# apt update && apt install -y curl

///snip///

root@fumidai:/# curl 192.168.100.195
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Stand up three with a ReplicaSet

Write the ReplicaSet YAML:

katsuma@master:~/obenkyo$ cat nginx-replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: triple-nginx
  labels:
    component: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      component: nginx
  template:
    metadata:
      labels:
        component: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

Stand them up:

katsuma@master:~/obenkyo$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
fumidai   1/1     Running   0          6m18s
nginx     1/1     Running   0          14m
katsuma@master:~/obenkyo$ kubectl apply -f nginx-replicaset.yaml
replicaset.apps/triple-nginx created
katsuma@master:~/obenkyo$ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
fumidai              1/1     Running   0          7m11s
nginx                1/1     Running   0          15m
triple-nginx-9zqm4   1/1     Running   0          33s
triple-nginx-qc655   1/1     Running   0          33s
triple-nginx-qtlh6   1/1     Running   0          33s

Break one and confirm it recovers:

katsuma@master:~/obenkyo$ kubectl get pods -o wide
NAME                 READY   STATUS    RESTARTS   AGE     IP                NODE       NOMINATED NODE   READINESS GATES
fumidai              1/1     Running   0          8m32s   192.168.100.196   worker-2   <none>           <none>
triple-nginx-9zqm4   1/1     Running   0          114s    192.168.100.197   worker-2   <none>           <none>
triple-nginx-qc655   1/1     Running   0          114s    192.168.100.67    worker-1   <none>           <none>
triple-nginx-qtlh6   1/1     Running   0          114s    192.168.100.68    worker-1   <none>           <none>
katsuma@master:~/obenkyo$ kubectl delete pod triple-nginx-9zqm4
pod "triple-nginx-9zqm4" deleted
katsuma@master:~/obenkyo$ kubectl get pods -o wide
NAME                 READY   STATUS              RESTARTS   AGE     IP                NODE       NOMINATED NODE   READINESS GATES
fumidai              1/1     Running             0          8m44s   192.168.100.196   worker-2   <none>           <none>
triple-nginx-5kzbh   0/1     ContainerCreating   0          2s      <none>            worker-2   <none>           <none>
triple-nginx-qc655   1/1     Running             0          2m6s    192.168.100.67    worker-1   <none>           <none>
triple-nginx-qtlh6   1/1     Running             0          2m6s    192.168.100.68    worker-1   <none>           <none>
katsuma@master:~/obenkyo$ kubectl get pods -o wide
NAME                 READY   STATUS    RESTARTS   AGE     IP                NODE       NOMINATED NODE   READINESS GATES
fumidai              1/1     Running   0          8m51s   192.168.100.196   worker-2   <none>           <none>
triple-nginx-5kzbh   1/1     Running   0          9s      192.168.100.198   worker-2   <none>           <none>
triple-nginx-qc655   1/1     Running   0          2m13s   192.168.100.67    worker-1   <none>           <none>
triple-nginx-qtlh6   1/1     Running   0          2m13s   192.168.100.68    worker-1   <none>           <none>
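
As an aside, the ReplicaSet can also be resized without touching the YAML — a quick sketch:

kubectl scale replicaset triple-nginx --replicas=5
kubectl get pods    # two more triple-nginx-* pods should appear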

Load balancing

Write the Service manifest:

katsuma@master:~/obenkyo$ cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    component: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
katsuma@master:~/obenkyo$ kubectl apply -f nginx-service.yaml
service/my-nginx created

The Service gets added:

katsuma@master:~/obenkyo$ kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   75m
my-nginx     ClusterIP   10.106.167.240   <none>        80/TCP    35s
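
The Service also gets a cluster DNS name from CoreDNS, which is what the bastion relies on below; both the short name and the FQDN should resolve from any Pod in the default namespace (a sketch, reusing the fumidai pod that already has curl installed):

kubectl exec fumidai -- curl -s my-nginx.default.svc.cluster.local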

curl from the bastion:

katsuma@master:~/obenkyo$ kubectl exec -it fumidai -- bash
root@fumidai:/# curl my-nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Rolling update

Write the v1 manifest:

katsuma@master:~/obenkyo$ cat <<EOF > nginx-deployment-v1.yaml
> apiVersion: apps/v1
> kind: Deployment
> metadata:
>   name: triple-nginx
>   labels:
>     component: nginx
> spec:
>   replicas: 3
>   selector:
>     matchLabels:
>       component: nginx
>   template:
>     metadata:
>       labels:
>         component: nginx
>     spec:
>       containers:
>       - name: nginx
>         image: nginx:1.20
> EOF

Write the v2 manifest as well:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: triple-nginx
  labels:
    component: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      component: nginx
  template:
    metadata:
      labels:
        component: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21

Try deploying:

katsuma@master:~/obenkyo$ kubectl apply -f nginx-deployment-v2.yaml
deployment.apps/triple-nginx configured
katsuma@master:~/obenkyo$ kubectl apply -f nginx-deployment-v1.yaml
deployment.apps/triple-nginx configured
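
kubectl's rollout subcommands give visibility and an escape hatch during this (not used in this run, but standard):

kubectl rollout status deployment/triple-nginx    # watch the update progress
kubectl rollout history deployment/triple-nginx   # list revisions
kubectl rollout undo deployment/triple-nginx      # roll back to the previous revision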

curl periodically from the bastion and the version switches over:

root@fumidai:/# while true; do echo "-------"; date; curl -s -i my-nginx | grep -E 'Server'; sleep 1; done
-------
Sat May 13 17:12:48 UTC 2023
Server: nginx/1.21.6
-------
Sat May 13 17:12:49 UTC 2023
Server: nginx/1.21.6
-------
Sat May 13 17:12:50 UTC 2023
Server: nginx/1.21.6
-------
Sat May 13 17:12:51 UTC 2023
Server: nginx/1.21.6
-------
Sat May 13 17:12:52 UTC 2023
Server: nginx/1.21.6
-------
Sat May 13 17:12:53 UTC 2023
Server: nginx/1.21.6
-------
Sat May 13 17:12:54 UTC 2023
Server: nginx/1.21.6
-------
Sat May 13 17:12:55 UTC 2023
Server: nginx/1.20.2
-------
Sat May 13 17:12:56 UTC 2023
Server: nginx/1.20.2
^C

ingress

Reference: https://amateur-engineer-blog.com/kubernetes-ingress/

Ingress needs a controller, so create ingress-nginx:


katsuma@master:~/obenkyo/ingress$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
katsuma@master:~/obenkyo/ingress$ kubectl get -n ingress-nginx service,deployment,pod
NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.102.251.45   <pending>     80:30499/TCP,443:30287/TCP   19s
service/ingress-nginx-controller-admission   ClusterIP      10.110.130.21   <none>        443/TCP                      19s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   0/1     1            0           19s

NAME                                           READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-zgvrp       0/1     Completed   0          19s
pod/ingress-nginx-admission-patch-qxsf8        0/1     Completed   1          19s
pod/ingress-nginx-controller-cccc7499c-b5qd7   0/1     Running     0          19s

Create the three manifests:

katsuma@master:~/obenkyo/ingress$ cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: web-container
        image: nginx
        ports:
          - containerPort: 80
katsuma@master:~/obenkyo/ingress$ cat service.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: ClusterIP
katsuma@master:~/obenkyo/ingress$ cat ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: "nginx"
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80

Apply all three:

katsuma@master:~/obenkyo/ingress$ kubectl apply -f deployment.yml
deployment.apps/myapp created
katsuma@master:~/obenkyo/ingress$ kubectl apply -f service.yml
service/myapp-service created
katsuma@master:~/obenkyo/ingress$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/myapp created

Check:

katsuma@master:~/obenkyo/ingress$ kubectl get deployment,service,ingress
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myapp          3/3     3            3           26s
deployment.apps/triple-nginx   3/3     3            3           22m

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP   105m
service/my-nginx        ClusterIP   10.106.167.240   <none>        80/TCP    30m
service/myapp-service   ClusterIP   10.96.6.235      <none>        80/TCP    22s

NAME                              CLASS   HOSTS   ADDRESS   PORTS   AGE
ingress.networking.k8s.io/myapp   nginx   *                 80      14s
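
With no cloud load balancer on this bare-metal cluster, the controller's EXTERNAL-IP stays <pending>; it is still reachable through its NodePort (30499 for HTTP in the output above) on any node, e.g. worker-2's address seen earlier:

curl http://192.168.4.153:30499/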

Use the GUI

Reference: https://kubernetes.io/ja/docs/tasks/access-application-cluster/web-ui-dashboard/

Deploy the Dashboard following the official steps:

katsuma@master:~/obenkyo/ingress$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
Warning: spec.template.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: non-functional in v1.27+; use the "seccompProfile" field instead
deployment.apps/dashboard-metrics-scraper created
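
Per the referenced page, the dashboard UI is then reached through kubectl proxy:

kubectl proxy
# then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/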

Playing with Multus

Reference: https://qiita.com/masashicco/items/f39e09040735c8c2f826
Reference: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/quickstart.md

First, an overview

  • Multus CNI is a Kubernetes CNI plugin that lets you attach multiple NICs to a Pod
  • There are two deployment flavors:
    • thin plugin
    • thick plugin: made up of two pieces, multus-daemon and the multus-shim CNI plugin
      • multus-daemon is deployed to every node and supports extra features such as metrics
  • What the Multus daemonset does:
    • starts the daemonset and places the Multus binary in /opt/cni/bin on each node
    • creates /etc/cni/net.d/00-multus.conf
    • creates the /etc/cni/net.d/multus.d directory on each node, holding the credentials Multus uses to reach the Kubernetes API

Deployment

Attach two extra interfaces to the workers:

22: ens224: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0c:29:0d:02:da brd ff:ff:ff:ff:ff:ff
23: ens192: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0c:29:0d:02:d0 brd ff:ff:ff:ff:ff:ff

Reference: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/quickstart.md
git clone and apply:

katsuma@master:~/multus$ git clone https://github.com/k8snetworkplumbingwg/multus-cni.git && cd multus-cni
Cloning into 'multus-cni'...
remote: Enumerating objects: 40040, done.
remote: Counting objects: 100% (40040/40040), done.
remote: Compressing objects: 100% (19271/19271), done.
remote: Total 40040 (delta 19479), reused 38997 (delta 18983), pack-reused 0
Receiving objects: 100% (40040/40040), 47.68 MiB | 21.07 MiB/s, done.
Resolving deltas: 100% (19479/19479), done.
katsuma@master:~/multus/multus-cni$ ls
cmd  CODE_OF_CONDUCT.md  CONTRIBUTING.md  deployments  docs  e2e  examples  go.mod  go.sum  hack  images  LICENSE  NOTICE  pkg  README.md  vendor
katsuma@master:~/multus/multus-cni$ cat ./deployments/multus-daemonset-thick.yml | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
clusterrole.rbac.authorization.k8s.io/multus created
clusterrolebinding.rbac.authorization.k8s.io/multus created
serviceaccount/multus created
configmap/multus-daemon-config created
daemonset.apps/kube-multus-ds created
katsuma@master:~/multus/multus-cni$ kubectl get pods --all-namespaces | grep -i multus
kube-system            kube-multus-ds-bsmcz                         1/1     Running     0          11m
kube-system            kube-multus-ds-qfzpr                         1/1     Running     0          11m
kube-system            kube-multus-ds-wmkqt                         1/1     Running     0          11m

Each node now has the generated config (the credentials?):

katsuma@worker-1:~$ sudo cat /etc/cni/net.d/00-multus.conf
{"capabilities":{"bandwidth":true,"portMappings":true},"cniVersion":"0.3.1","logLevel":"verbose","name":"multus-cni-network","clusterNetwork":"/host/etc/cni/net.d/10-calico.conflist","type":"multus-shim","socketDir":"/host/run/multus/"}

To attach an additional interface, create a CRD (Custom Resource Definition) object. This resource defines the interface that gets attached to the Pod.

The CNI config looks like this:

{
  "cniVersion": "0.3.0",
  "type": "loopback",
  "additional": "information"
}
  • cniVersion: the version of the CNI spec the config follows
  • type: tells CNI which binary to invoke (one of the following):
      katsuma@worker-1:~$ ls /opt/cni/bin/
      bandwidth    calico       dhcp         firewall     host-device  install      loopback     multus-shim  ptp          static       vlan
      bridge       calico-ipam  dummy        flannel      host-local   ipvlan       macvlan      portmap      sbr          tuning       vrf
  • additional: extra parameters specific to the binary named by type

These CNI configs are read every time a Pod is created or deleted, so applying a change to an existing Pod requires restarting it.

Try creating a macvlan:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf  ← used later (referenced from the Pod annotation)
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "eth0", ←各ノードで持っている必要がある
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "192.168.1.1"
      }
    }'

Once created, it shows up in network-attachment-definitions (the file actually applied used ens224 as master):

katsuma@master:~/multus$ cat multus-macvlan.yaml | kubectl create -f -
networkattachmentdefinition.k8s.cni.cncf.io/macvlan-conf created
katsuma@master:~/multus$ kubectl get network-attachment-definitions
NAME           AGE
macvlan-conf   5s
katsuma@master:~/multus$ kubectl describe network-attachment-definitions macvlan-conf
Name:         macvlan-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  k8s.cni.cncf.io/v1
Kind:         NetworkAttachmentDefinition
Metadata:
  Creation Timestamp:  2023-05-15T15:34:28Z
  Generation:          1
  Resource Version:    368142
  UID:                 18665937-e46f-4969-ba4a-5d39493e46bc
Spec:
  Config:  { "cniVersion": "0.3.0", "type": "macvlan", "master": "ens224", "mode": "bridge", "ipam": { "type": "host-local", "subnet": "192.168.1.0/24", "rangeStart": "192.168.1.200", "rangeEnd": "192.168.1.216", "routes": [ { "dst": "0.0.0.0/0" } ], "gateway": "192.168.1.1" } }
Events:    <none>

Stand up a Pod:

katsuma@master:~/multus$ cat multus-samplepod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
katsuma@master:~/multus$ cat multus-samplepod.yaml |kubectl create -f -
pod/samplepod created

It came up on worker-1!

katsuma@master:~/multus$ kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE    IP                NODE       NOMINATED NODE   READINESS GATES
fumidai                         1/1     Running   0          46h    192.168.100.196   worker-2   <none>           <none>
myapp-7c79cbcb7-h8gvp           1/1     Running   0          46h    192.168.100.208   worker-2   <none>           <none>
myapp-7c79cbcb7-jkv86           1/1     Running   0          46h    192.168.100.79    worker-1   <none>           <none>
myapp-7c79cbcb7-ntd5s           1/1     Running   0          46h    192.168.100.207   worker-2   <none>           <none>
samplepod                       1/1     Running   0          111s   192.168.100.81    worker-1   <none>           <none>
triple-nginx-79b59fbc9b-9nwrr   1/1     Running   0          46h    192.168.100.76    worker-1   <none>           <none>
triple-nginx-79b59fbc9b-wdpt2   1/1     Running   0          46h    192.168.100.77    worker-1   <none>           <none>
triple-nginx-79b59fbc9b-x2hgg   1/1     Running   0          46h    192.168.100.204   worker-2   <none>           <none>

Something new appeared on worker-1!

katsuma@worker-1:~$ ip l
22: ens224: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:0d:02:da brd ff:ff:ff:ff:ff:ff
23: ens192: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:0d:02:d0 brd ff:ff:ff:ff:ff:ff
24: cali3de775f9cb2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-efa3fc9c-c9fc-a547-6933-b30f1b0b0a03

Peeking inside the container shows two NICs: eth0 is the default network, net1 is the macvlan added by Multus.

katsuma@master:~/multus$ kubectl exec -it samplepod -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
    link/ether d6:f8:d0:ee:b7:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.81/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d4f8:d0ff:feee:b78c/64 scope link
       valid_lft forever preferred_lft forever
4: net1@if22: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 6e:31:bf:e4:68:e7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.200/24 brd 192.168.1.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::6c31:bfff:fee4:68e7/64 scope link
       valid_lft forever preferred_lft forever
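
To confirm net1 actually carries traffic, one option is a second copy of the pod (same annotation, different name); host-local IPAM would hand it the next address in the range, so a ping across the macvlan looks roughly like this (an assumption — both pods need to sit on the L2 segment behind the macvlan master):

kubectl exec -it samplepod -- ping -c 3 -I net1 192.168.1.201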

From here, try something vGW-like (a virtual gateway)

katsuma@master:~/multus$ kubectl create -f multus-sgiplane.yaml
networkattachmentdefinition.k8s.cni.cncf.io/macvlan-sgiplane created
katsuma@master:~/multus$ kubectl create -f multus-ecplane.yaml
networkattachmentdefinition.k8s.cni.cncf.io/macvlan-ecplane created
katsuma@master:~/multus$ kubectl get network-attachment-definitions
NAME               AGE
macvlan-conf       20m
macvlan-ecplane    5s
macvlan-sgiplane   9s
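
The contents of those two manifests aren't pasted above; judging from multus-sgiplane2.yaml below and the addresses used later (192.168.1.0/24 on the SGi side, 192.168.2.0/24 on the EC side), macvlan-ecplane is presumably something along these lines (a hypothetical reconstruction):

cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-ecplane
spec:
  # assumption: ens192 is the EC-plane NIC; sgiplane would be the same with ens224
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "ens192",
      "mode": "bridge",
      "ipam": { "type": "static" }
    }'
EOF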

Assign static IPs with Multus

Try it with the following (failed on its own — presumably because static IPAM needs the addresses handed in per Pod; the ips field in the annotations below does exactly that):

katsuma@master:~/multus$ cat multus-sgiplane2.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-sgiplane
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "ens224",
      "mode": "bridge",
      "ipam": {
        "type": "static"
      }
    }'

Forward with FRR containers

Prepare the following manifest file and apply it:

katsuma@master:~/multus$ cat multus-frr.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frr1
  annotations:
          k8s.v1.cni.cncf.io/networks: '[{"name": "macvlan-sgiplane", "ips": ["192.168.1.11/24"]}, {"name": "macvlan-ecplane", "ips": ["192.168.2.11/24"]}]'
spec:
  containers:
  - name: frr1
    image: frrouting/frr:latest
    securityContext:
      privileged: true
---
apiVersion: v1
kind: Pod
metadata:
  name: frr2
  annotations:
          k8s.v1.cni.cncf.io/networks: '[{"name": "macvlan-sgiplane", "ips": ["192.168.1.12/24"]}, {"name": "macvlan-ecplane", "ips": ["192.168.2.12/24"]}]'
spec:
  containers:
  - name: frr2
    image: frrouting/frr:latest
    securityContext:
      privileged: true

Enter vtysh and turn on ip forwarding, and forwarding works:

katsuma@master:~/multus$ kubectl exec -it frr1 -- touch /etc/frr/vtysh.conf
katsuma@master:~/multus$ kubectl exec -it frr1 -- vtysh -c configure -c 'ip forwarding'
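
Whether it took can be checked from vtysh as well (a sketch; show ip forwarding is a standard zebra command):

kubectl exec -it frr1 -- vtysh -c 'show ip forwarding'
# expected: "IP forwarding is on"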

Check whether privileged is actually required → it was

Try removing privileged from frr1 only in the manifest:

katsuma@master:~/multus$ cat multus-frr.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frr1
  annotations:
          k8s.v1.cni.cncf.io/networks: '[{"name": "macvlan-sgiplane", "ips": ["192.168.1.11/24"]}, {"name": "macvlan-ecplane", "ips": ["192.168.2.11/24"]}]'
spec:
  containers:
  - name: frr1
    image: frrouting/frr:latest
    #securityContext:
    #  privileged: true
---
apiVersion: v1
kind: Pod
metadata:
  name: frr2
  annotations:
          k8s.v1.cni.cncf.io/networks: '[{"name": "macvlan-sgiplane", "ips": ["192.168.1.12/24"]}, {"name": "macvlan-ecplane", "ips": ["192.168.2.12/24"]}]'
spec:
  containers:
  - name: frr2
    image: frrouting/frr:latest
    securityContext:
      privileged: true

The config won't take:

frr1(config)# ip forwarding
zebra is not running

Without privileged, zebra fails to start, which seems to be the cause of the error:

katsuma@master:~/multus$ kubectl exec -it frr1 -- ps
PID   USER     TIME  COMMAND
    1 root      0:00 /sbin/tini -- /usr/lib/frr/docker-start
    7 root      0:00 {docker-start} /bin/bash /usr/lib/frr/docker-start
   13 root      0:00 /usr/lib/frr/watchfrr zebra staticd
   52 frr       0:00 /usr/lib/frr/staticd -d -F traditional -A 127.0.0.1
   55 root      0:00 ps
katsuma@master:~/multus$ kubectl exec -it frr2 -- ps
PID   USER     TIME  COMMAND
    1 root      0:19 /sbin/tini -- /usr/lib/frr/docker-start
    7 root      0:00 {docker-start} /bin/bash /usr/lib/frr/docker-start
   13 root      0:40 /usr/lib/frr/watchfrr zebra staticd
   23 frr       0:16 /usr/lib/frr/zebra -d -F traditional -A 127.0.0.1 -s 90000
   28 frr       0:15 /usr/lib/frr/staticd -d -F traditional -A 127.0.0.1
   32 root      0:00 ps

Try specifying the node

Reference: https://kubernetes.io/ja/docs/concepts/scheduling-eviction/assign-pod-node/#nodename
Specifying nodeName skips scheduling entirely, so it doesn't look like a common way to do this (a nodeSelector sketch follows the manifest below).


katsuma@master:~/multus$ cat multus-frr.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frr1
  annotations:
          k8s.v1.cni.cncf.io/networks: '[{"name": "macvlan-sgiplane", "ips": ["192.168.1.11/24"]}, {"name": "macvlan-ecplane", "ips": ["192.168.2.11/24"]}]'
spec:
  containers:
  - name: frr1
    image: frrouting/frr
    securityContext:
      privileged: true
  nodeName: worker-1
---
apiVersion: v1
kind: Pod
metadata:
  name: frr2
  annotations:
          k8s.v1.cni.cncf.io/networks: '[{"name": "macvlan-sgiplane", "ips": ["192.168.1.12/24"]}, {"name": "macvlan-ecplane", "ips": ["192.168.2.12/24"]}]'
spec:
  containers:
  - name: frr2
    image: frrouting/frr
    securityContext:
      privileged: true
  nodeName: worker-1
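
The more conventional alternative keeps the scheduler in the loop via nodeSelector (or node affinity) — a sketch, with a hypothetical pod name:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: frr-pinned              # hypothetical example
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1    # well-known label set on every node
  containers:
  - name: frr
    image: frrouting/frr
    securityContext:
      privileged: true
EOF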

Inject the forwarding config at startup

Reference:

With lifecycle (postStart) you can specify a command to run when the Pod starts, separate from the entrypoint/cmd.
Running the command immediately after startup errors out, so add a sleep first.

apiVersion: v1
kind: Pod
metadata:
  name: frr1
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{"name": "macvlan-sgiplane", "ips": ["192.168.1.11/24"]}, {"name": "macvlan-ecplane", "ips": ["192.168.2.11/24"]}]'
spec:
  containers:
  - name: frr1
    image: frrouting/frr
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
    lifecycle:
      postStart:
        exec:
          command:
            - /bin/sh
            - -c
            - sleep 10 && touch /etc/frr/vtysh.conf && vtysh -c configure -c "ip forwarding"

Confirm that the Pod automatically moves to another node when its node is drained

Prepare a replicated set of Pods; below this is done with a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-frr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-frr
  template:
    metadata:
      labels:
        app: deployment-frr
      annotations:
        k8s.v1.cni.cncf.io/networks: '[{"name": "macvlan-sgiplane", "ips": ["192.168.1.11/24"]}, {"name": "macvlan-ecplane", "ips": ["192.168.2.11/24"]}]'
    spec:
      containers:
      - name: frr
        image: frrouting/frr
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        lifecycle:
          postStart:
            exec:
              command:
                - /bin/sh
                - -c
                - sleep 10 && touch /etc/frr/vtysh.conf && vtysh -c configure -c "ip forwarding"

Drain while a ping is flowing

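With a continuous ping running against the FRR Pod's static address (192.168.1.11), drain the node hosting it — a sketch, assuming it landed on worker-1:

kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
# when done testing, let pods schedule there again:
kubectl uncordon worker-1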

It takes a while for the container to start, but connectivity resumes after roughly 10 seconds.
However, everything ends up piled onto worker-2:

NAME                                  READY   STATUS    RESTARTS        AGE   IP                NODE       NOMINATED NODE   READINESS GATES
pod/deployment-frr-7c4c8b7477-mzf55   1/1     Running   0               20m   192.168.100.245   worker-2   <none>           <none>
pod/fumidai                           1/1     Running   1 (3h12m ago)   9d    192.168.100.216   worker-2   <none>           <none>
pod/myapp-7c79cbcb7-h8gvp             1/1     Running   1 (3h12m ago)   9d    192.168.100.217   worker-2   <none>           <none>
pod/myapp-7c79cbcb7-ntd5s             1/1     Running   1 (3h12m ago)   9d    192.168.100.218   worker-2   <none>           <none>
pod/myapp-7c79cbcb7-s5q2n             1/1     Running   0               20m   192.168.100.243   worker-2   <none>           <none>
pod/triple-nginx-79b59fbc9b-clr5q     1/1     Running   0               20m   192.168.100.242   worker-2   <none>           <none>
pod/triple-nginx-79b59fbc9b-v8dfx     1/1     Running   0               20m   192.168.100.241   worker-2   <none>           <none>
pod/triple-nginx-79b59fbc9b-x2hgg     1/1     Running   1 (3h12m ago)   9d    192.168.100.214   worker-2   <none>           <none>

configmap

Create params.txt:

name hoge
age 1
country piyo

create it:

katsuma@master:~/obenkyo2/configmap$ kubectl create configmap params-test --from-file=params.txt
configmap/params-test created
katsuma@master:~/obenkyo2/configmap$ kubectl get configmap
NAME               DATA   AGE
kube-root-ca.crt   1      10d
params-test        1      5s

The contents are visible too, via get -o yaml:

katsuma@master:~/obenkyo2/configmap$ kubectl get configmap params-test -o yaml
apiVersion: v1
data:
  params.txt: |-
    name hoge
    age 1
    country piyo
kind: ConfigMap
metadata:
  creationTimestamp: "2023-05-24T14:47:28Z"
  name: params-test
  namespace: default
  resourceVersion: "2030588"
  uid: b731d289-29c6-41b7-b34b-c1f3e8d99fa4
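
Nothing consumes the ConfigMap yet; a minimal sketch of mounting it into a Pod as a file (hypothetical pod name):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: params-reader            # hypothetical example
spec:
  containers:
  - name: reader
    image: debian:stable
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: params
      mountPath: /etc/params
  volumes:
  - name: params
    configMap:
      name: params-test
EOF
# the file then shows up inside the container:
kubectl exec params-reader -- cat /etc/params/params.txt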