# Calico Installation Tutorial
###### tags: `build wazuh`
> https://www.lixueduan.com/posts/kubernetes/01-install/
## 1. Download the manifest and pull the images
First, fetch the official YAML manifest:
```bash
curl https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calico.yaml -O
```
Check which images are needed:
```bash
[root@k8s-1 ~]# cat calico.yaml |grep docker.io|awk {'print $2'}
docker.io/calico/cni:v3.23.1
docker.io/calico/cni:v3.23.1
docker.io/calico/node:v3.23.1
docker.io/calico/kube-controllers:v3.23.1
```
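The one-liner above can also be wrapped in a small helper that matches the `image:` key directly and deduplicates the list (the cni image appears twice because two containers in the manifest reference it). A sketch; `list_images` is an illustrative name:

```bash
# list_images FILE: print the distinct image references in a manifest.
# Matching on the "image:" key is slightly more robust than grepping for
# docker.io, since it also catches images hosted on other registries.
list_images() {
  grep -E '^[[:space:]]*image:' "$1" | awk '{print $2}' | sort -u
}

# usage, once calico.yaml has been downloaded:
if [ -f calico.yaml ]; then list_images calico.yaml; fi
```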
Pull them manually (note: when the cluster runs on containerd via CRI, kubelet looks for images in the `k8s.io` containerd namespace, while `ctr` defaults to the `default` namespace; add `-n k8s.io` to the `ctr` command if the pods later report image-pull errors):
```bash
for i in $(grep docker.io calico.yaml | awk '{print $2}'); do ctr images pull "$i"; done
```
Finally, check that the images were actually pulled:
```bash
[root@k8s-2 ~]# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/calico/cni:v3.23.1 application/vnd.docker.distribution.manifest.list.v2+json sha256:26802bb7714fda18b93765e908f2d48b0230fd1c620789ba2502549afcde4338 105.4 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le -
docker.io/calico/kube-controllers:v3.23.1 application/vnd.docker.distribution.manifest.list.v2+json sha256:e8b2af28f2c283a38b4d80436e2d2a25e70f2820d97d1a8684609d42c3973afb 53.8 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le -
docker.io/calico/node:v3.23.1 application/vnd.docker.distribution.manifest.list.v2+json sha256:d2c1613ef26c9ad43af40527691db1f3ad640291d5e4655ae27f1dd9222cc380 73.0 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le -
```
## 2. Configure the network interface name
Run this step on k8s-1.
By default, calico looks for an interface named eth0. If the machine's interface has a different name, calico may fail to start, so the name must be configured manually.
```bash
[root@k8s-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
```
Here the interface is named ens33, which does not match the default, so edit calico.yaml to specify it explicitly:
```bash
vi calico.yaml
```
Then search for CLUSTER_TYPE and locate this block:
```yaml
- name: CLUSTER_TYPE
value: "k8s,bgp"
```
Then add an **IP_AUTODETECTION_METHOD** field at the same level as CLUSTER_TYPE:
```yaml
# value specifies the interface name; the interface here is ens33, so the wildcard ens.* is used
- name: IP_AUTODETECTION_METHOD
value: "interface=ens.*"
```
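For reference, `interface=` is only one of Calico's IP autodetection methods. Per the Calico documentation, alternatives include `can-reach=` (pick the interface that has a route to a given address) and `skip-interface=` (exclude interfaces by regex), for example:

```yaml
# pick the interface that has a route to this destination
- name: IP_AUTODETECTION_METHOD
  value: "can-reach=8.8.8.8"
```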
## 3. Deploy
Run this step on k8s-1:
```bash
kubectl apply -f calico.yaml
```
If all goes well, calico will be up after a short wait. Check it with:
```bash
[root@k8s-1 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6c75955484-hhvh6 1/1 Running 0 7m37s
kube-system calico-node-5xjqd 1/1 Running 0 7m37s
kube-system calico-node-6lnd6 1/1 Running 0 7m37s
kube-system calico-node-vkgfr 1/1 Running 0 7m37s
kube-system coredns-6d8c4cb4d-8gxsf 1/1 Running 0 20m
kube-system coredns-6d8c4cb4d-m596j 1/1 Running 0 20m
kube-system etcd-k8s-1 1/1 Running 0 20m
kube-system kube-apiserver-k8s-1 1/1 Running 0 20m
kube-system kube-controller-manager-k8s-1 1/1 Running 1 (6m16s ago) 20m
kube-system kube-proxy-5qj6j 1/1 Running 0 20m
kube-system kube-proxy-rhwb7 1/1 Running 0 20m
kube-system kube-proxy-xzswm 1/1 Running 0 20m
kube-system kube-scheduler-k8s-1 1/1 Running 1 (5m56s ago) 20m
```
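As a scripted follow-up, the listing above can be filtered for pods that are not yet Running. A sketch; `not_ready` is an illustrative helper and assumes the default `kubectl get pods -A` column layout, where STATUS is the 4th column:

```bash
# not_ready: read `kubectl get pods -A` output on stdin and print any pod
# whose STATUS column is not "Running" (the header line is skipped).
not_ready() {
  awk 'NR > 1 && $4 != "Running" { print $1 "/" $2 "\t" $4 }'
}

# usage: kubectl get pods -A | not_ready
```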
Once the calico-* pods and coredns are all Running, the installation is complete.

If a pod stays stuck after a configuration change, it can be force-recreated by piping its manifest back through `kubectl replace --force`:
```bash
kubectl get pod ${POD_NAME} -n ${NAMESPACE} -o yaml | kubectl replace --force -f -
```
For example:
```bash
kubectl get pod calico-node-68fnx -n kube-system -o yaml | kubectl replace --force -f -
kubectl get pod calico-node-5d6zb -n kube-system -o yaml | kubectl replace --force -f -
kubectl get pod calico-kube-controllers-56cdb7c587-blc9l -n kube-system -o yaml | kubectl replace --force -f -
```
## 4. FAQ
The calico controller fails to start, with the following error:
```bash
client.go 272: Error getting cluster information config ClusterInformation="default" error=Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded
```
Checking the pod's logs reveals an error indicating the kernel version is too low; a 4.x or newer kernel is required. Upgrading the kernel resolves the problem.
This guide was written against a 5.18 kernel, so the issue should not appear here.
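To verify the kernel before installing, a quick check along these lines can help. A sketch; `kernel_ok` is a hypothetical helper, and it assumes version strings shaped like `5.18.0-...`:

```bash
# kernel_ok VERSION: succeed if the kernel major version is at least 4,
# the minimum suggested above. (kernel_ok is an illustrative name.)
kernel_ok() {
  local ver="${1%%-*}"      # drop the "-300.fc36"-style suffix
  local major="${ver%%.*}"  # keep only the leading major number
  [ "$major" -ge 4 ]
}

if kernel_ok "$(uname -r)"; then
  echo "kernel is new enough for calico"
else
  echo "kernel too old; upgrade before installing calico"
fi
```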
## 5. Check cluster status
Run this step on k8s-1.
Check the status of each component:
```bash
[root@k8s-1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
```
View cluster info:
```bash
[root@k8s-1 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.2.131:6443
CoreDNS is running at https://192.168.2.131:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
Check node status:
```bash
[root@k8s-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-1 Ready control-plane,master 22m v1.23.5
k8s-2 Ready <none> 21m v1.23.5
k8s-3 Ready <none> 21m v1.23.5
```