# RKE2 & Rancher Prime Installation
<style>
.indent-title-1{
margin-left: 1em;
}
.indent-title-2{
margin-left: 2em;
}
.indent-title-3{
margin-left: 3em;
}
</style>
## Preface
<div class="indent-title-1">
This article mainly covers how to install Rancher Kubernetes Engine 2 (RKE2) or K3s, and how to deploy Rancher Prime on the resulting Kubernetes cluster.
You can jump to a specific chapter through the table of contents below.
:::warning
:::spoiler {state="open"} Table of Contents
[TOC]
:::
</div>
# Install RKE2 v1.31.2
## 1. Pre-installation Environment Checks
- Make sure your environment meets the [RKE2 Requirements](https://docs.rke2.io/install/requirements).
- If NetworkManager is installed and enabled on the host, make sure it is configured to ignore CNI-managed interfaces.
- For RKE2 1.21 and newer, if the host kernel supports AppArmor, the AppArmor tools (usually provided by the apparmor-parser package) must also be installed before installing RKE2.
- The RKE2 installation must be run as the root user or via sudo.
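The NetworkManager requirement above can be met with a small drop-in configuration file. A minimal sketch (the file name `rke2-cni.conf` and the interface-name patterns are illustrative; it writes to the current directory for demonstration — on a real node set `CONF_DIR=/etc/NetworkManager/conf.d`, run as root, and reload NetworkManager afterwards):

```shell
# Sketch: make NetworkManager ignore CNI-managed interfaces.
# CONF_DIR defaults to the current directory for illustration; on a real
# node use CONF_DIR=/etc/NetworkManager/conf.d and run as root.
CONF_DIR="${CONF_DIR:-.}"
cat <<'EOF' > "${CONF_DIR}/rke2-cni.conf"
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:flannel*;interface-name:cilium_*
EOF
cat "${CONF_DIR}/rke2-cni.conf"
```

After placing the file under `/etc/NetworkManager/conf.d/`, apply it with `sudo systemctl reload NetworkManager`.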
### Hardware
<div class="indent-title-1">
Linux/Windows
- RAM: 4 GB minimum (we recommend at least 8 GB)
- CPU: 2 cores minimum (we recommend at least 4)
- Disk: SSD
> [RKE2 Hardware](https://docs.rke2.io/install/requirements#hardware)
</div>
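A quick way to compare a host against these minimums before installing (a sketch using standard Linux tools; the thresholds follow the numbers above):

```shell
# Compare this host's CPU count and total RAM to the RKE2 minimums
# (2 CPUs, 4 GB RAM).
cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "CPUs=${cpus}, RAM=$((mem_kb / 1024)) MiB"
if [ "${cpus}" -ge 2 ] && [ "${mem_kb}" -ge $((4 * 1024 * 1024)) ]; then
  echo "meets RKE2 minimums"
else
  echo "below RKE2 minimums"
fi
```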
## 2. Server Node Installation
### 1. Config RKE2 Basic Parameters
<div class="indent-title-1">
**1-1. Download the RKE2 install script and make it executable**
Run the following command on the Server node:
```bash!
~> curl -sfL https://get.rke2.io --output install.sh && chmod +x install.sh
```
> RKE2 provides an install script that installs it as a systemd service on systemd-based systems.
</div>
<div class="indent-title-1">
**1-2. Edit the Server node configuration file**
- The IP addresses for `advertise-address`, `node-ip`, and `tls-san` must be adjusted for your environment.
```
# Define which NIC Kubernetes should run on
$ NIC='eth1'
$ IP=$(ip -4 -o addr show ${NIC} | awk '{print $4}' | cut -d/ -f1)
# Define the node name
$ NODE_NAME='m1'
```
```!
~> sudo mkdir -p /etc/rancher/rke2/ && \
cat <<EOF | sudo tee /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: "0644"
debug: False
advertise-address: "${IP}"
node-ip: "${IP}"
node-name:
- "${NODE_NAME}"
token: "hehe"
tls-san:
- "${NODE_NAME}"
- "${IP}"
- "127.0.0.1"
etcd-extra-env: TZ=Asia/Taipei
kube-apiserver-extra-env: TZ=Asia/Taipei
kube-controller-manager-extra-env: TZ=Asia/Taipei
kube-proxy-extra-env: TZ=Asia/Taipei
kube-scheduler-extra-env: TZ=Asia/Taipei
cloud-controller-manager-extra-env: TZ=Asia/Taipei
kubelet-arg:
- container-log-max-files=5
- container-log-max-size=10Mi
- "--config=/etc/kubernetes/kubeletconfig.yml"
- "--eviction-hard=memory.available<1000Mi,imagefs.available<5%,nodefs.available<5%"
# The following parameters must be identical on all server (master) nodes
disable-cloud-controller: true
disable-kube-proxy: true
disable:
- rke2-canal
- rke2-kube-proxy
cni:
- "multus"
- "cilium"
#selinux: false
EOF
```
</div>
<div class="indent-title-1">
**1-3. Edit the Cilium configuration**
```!
~> sudo mkdir -p /var/lib/rancher/rke2/server/manifests && \
cat <<EOF | sudo tee /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
name: rke2-cilium
namespace: kube-system
spec:
valuesContent: |-
operator:
replicas: 1
podAnnotations:
container.apparmor.security.beta.kubernetes.io/cilium-agent: "unconfined"
container.apparmor.security.beta.kubernetes.io/clean-cilium-state: "unconfined"
container.apparmor.security.beta.kubernetes.io/mount-cgroup: "unconfined"
container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: "unconfined"
devices: "eth1"
nodePort:
directRoutingDevice: "eth1"
autoDirectNodeRoutes: true
hubble:
enabled: true
relay:
enabled: true
ui:
enabled: true
frontend:
server:
ipv6:
enabled: false
service:
type: NodePort
enableLBIPAM: false
ipv4NativeRoutingCIDR: 10.42.0.0/16
k8sServiceHost: 127.0.0.1
k8sServicePort: 6443
kubeProxyReplacement: true
routingMode: native
l2announcements:
enabled: true
EOF
```
</div>
<div class="indent-title-1">
**1-4. Edit the kubelet configuration**
```!
~> sudo mkdir -p /etc/kubernetes && \
cat <<EOF | sudo tee /etc/kubernetes/kubeletconfig.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
memory: "1Gi"
kubeReserved:
memory: "2Gi"
EOF
```
1. **`evictionHard`**:
    - **Purpose**: Sets hard eviction thresholds. When a system resource (such as memory) drops below the configured value, the kubelet immediately terminates some Pods to reclaim resources and keep the node stable.
    - **Defaults**: Enabled by default; in upstream Kubernetes the memory threshold is 100 MiB and ephemeral storage is 10%, while RKE2 defaults to 5% for both imagefs and nodefs.
    - **Example**: `memory.available: "500Mi"` means the kubelet triggers eviction when available memory falls below 500 MiB.
2. **`evictionMinimumReclaim`**:
    - **Purpose**: Sets the minimum amount of resources the kubelet should reclaim per eviction. This helps avoid frequent evictions by ensuring each one frees enough resources.
    - **Example**: `memory.available: "10Mi"` means each eviction must reclaim at least 10 MiB of memory.
3. **`systemReserved`**:
    - **Purpose**: Reserves resources for system daemons (non-Kubernetes components such as sshd and networking) so these services keep running under resource pressure.
    - **Example**: `memory: "1Gi"` reserves 1 GiB of memory for system services.
4. **`kubeReserved`**:
    - **Purpose**: Reserves resources for Kubernetes system daemons (such as the kubelet and the container runtime) to keep them stable under resource pressure.
    - **Example**: `memory: "2Gi"` reserves 2 GiB of memory for Kubernetes system daemons.
</div>
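Putting the reservations together: a node's allocatable memory is its capacity minus `systemReserved`, minus `kubeReserved`, minus the hard-eviction threshold. A worked example with the values configured above (the 8 GiB node capacity is an assumption for illustration):

```shell
# Allocatable = Capacity - systemReserved - kubeReserved - evictionHard
capacity_mi=8192         # assumed 8 GiB node
system_reserved_mi=1024  # systemReserved: memory 1Gi
kube_reserved_mi=2048    # kubeReserved: memory 2Gi
eviction_hard_mi=1000    # --eviction-hard=memory.available<1000Mi
allocatable_mi=$((capacity_mi - system_reserved_mi - kube_reserved_mi - eviction_hard_mi))
echo "Allocatable memory: ${allocatable_mi}Mi"
```

Once the node is up, you can compare this against the Capacity and Allocatable sections of `kubectl describe node <name>`.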
### 2. Install RKE2 v1.31.2
<div class="indent-title-1">
**2-1. Run the install command**
```shell=!
~> sudo INSTALL_RKE2_CHANNEL=v1.31.2+rke2r1 ./install.sh
```
> - `INSTALL_RKE2_CHANNEL`
> - Specifies the release channel to download RKE2 from; the default is `latest`
:::spoiler Screen output of the command above
```=
[WARN] /usr/local is read-only or a mount point; installing to /opt/rke2
[INFO] finding release for channel v1.31.2+rke2r1
[INFO] using v1.31.2+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.31.2+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.31.2+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /opt/rke2
[INFO] updating tarball contents to reflect install path
[INFO] moving systemd units to /etc/systemd/system
[INFO] install complete; you may want to run: export PATH=$PATH:/opt/rke2/bin
```
:::
**2-2. Add `/opt/rke2/bin` to the PATH environment variable**
```
~> export PATH=$PATH:/opt/rke2/bin
```
</div>
### 3. Enable RKE2 and set up kubeconfig
<div class="indent-title-1">
**3-1. Enable the rke2-server service at boot and start it immediately**
```shell=!
~> sudo systemctl enable rke2-server --now
```
Screen output of the command above
```!
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-server.service → /etc/systemd/system/rke2-server.service.
```
**3-2. Set up kubeconfig and crictl**
```!
~> mkdir -p $HOME/.kube && \
sudo cp /etc/rancher/rke2/rke2.yaml $HOME/.kube/config && \
sudo chown $(id -u):$(id -g) $HOME/.kube/config && \
sudo cp /var/lib/rancher/rke2/bin/* /usr/local/bin/ && \
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/k3s/containerd/containerd.sock
image-endpoint: unix:///run/k3s/containerd/containerd.sock
timeout: 10
EOF
```
</div>
### 4. Check Pod status
<div class="indent-title-1">
```!
~> kubectl get po -A
```
Screen output:
```shell=
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-operator-84b8ff5b6c-stfdw 1/1 Running 0 3m47s
kube-system cilium-whvhw 1/1 Running 0 3m46s
kube-system etcd-m1 1/1 Running 0 10m
kube-system helm-install-rke2-cilium-z7qsz 0/1 Completed 0 3m48s
kube-system helm-install-rke2-coredns-9cg6t 0/1 Completed 0 10m
kube-system helm-install-rke2-ingress-nginx-fscw4 0/1 Completed 0 10m
kube-system helm-install-rke2-metrics-server-gw8qc 0/1 Completed 0 10m
kube-system helm-install-rke2-multus-j8h9d 0/1 Completed 0 10m
kube-system helm-install-rke2-snapshot-controller-crd-h58vl 0/1 Completed 0 10m
kube-system helm-install-rke2-snapshot-controller-v6lwh 0/1 Completed 1 10m
kube-system helm-install-rke2-snapshot-validation-webhook-gz77q 0/1 Completed 0 10m
kube-system kube-apiserver-m1 1/1 Running 0 10m
kube-system kube-controller-manager-m1 1/1 Running 0 10m
kube-system kube-scheduler-m1 1/1 Running 0 10m
kube-system rke2-coredns-rke2-coredns-79f8cbfcf7-rgzzg 1/1 Running 0 9m54s
kube-system rke2-coredns-rke2-coredns-autoscaler-7cf7db4bdc-t8rn7 1/1 Running 0 9m54s
kube-system rke2-ingress-nginx-controller-qj4fh 1/1 Running 0 3m17s
kube-system rke2-metrics-server-6d887864c-vdkgn 1/1 Running 0 3m29s
kube-system rke2-multus-qvs8w 1/1 Running 0 52s
kube-system rke2-snapshot-controller-658d97fccc-7gkrv 1/1 Running 0 3m28s
kube-system rke2-snapshot-validation-webhook-784bcc6c8-xfq68 1/1 Running 0 3m29s
```
</div>
## 3. Join Additional Server Nodes
### 1. Config RKE2 Basic Parameters
<div class="indent-title-1">
**1-1. Download the RKE2 install script and make it executable**
**Run the following commands on the additional Server node**
```bash!
~> curl -sfL https://get.rke2.io --output install.sh && chmod +x install.sh
```
> RKE2 provides an install script that installs it as a systemd service on systemd-based systems.
**1-2. Edit the configuration file of the additional RKE2 Server node**
```
# Define which NIC Kubernetes should run on
$ NIC='eth1'
$ IP=$(ip -4 -o addr show ${NIC} | awk '{print $4}' | cut -d/ -f1)
# Define the IP address of the API Server
$ ServerIP='192.168.11.35'
# Define the node name
$ NODE_NAME='m2'
```
```!
~> sudo mkdir -p /etc/rancher/rke2/ && \
cat <<EOF | sudo tee /etc/rancher/rke2/config.yaml
server: https://${ServerIP}:9345
write-kubeconfig-mode: "0644"
debug: False
advertise-address: "${IP}"
node-ip: "${IP}"
node-name:
- "${NODE_NAME}"
token: "hehe"
tls-san:
- "${NODE_NAME}"
- "${IP}"
- "127.0.0.1"
etcd-extra-env: TZ=Asia/Taipei
kube-apiserver-extra-env: TZ=Asia/Taipei
kube-controller-manager-extra-env: TZ=Asia/Taipei
kube-proxy-extra-env: TZ=Asia/Taipei
kube-scheduler-extra-env: TZ=Asia/Taipei
cloud-controller-manager-extra-env: TZ=Asia/Taipei
kubelet-arg:
- container-log-max-files=5
- container-log-max-size=10Mi
- "--config=/etc/kubernetes/kubeletconfig.yml"
- "--eviction-hard=memory.available<1000Mi,imagefs.available<5%,nodefs.available<5%"
# The following parameters must be identical on all server (master) nodes
disable-cloud-controller: true
disable-kube-proxy: true
disable:
- rke2-canal
- rke2-kube-proxy
cni:
- "multus"
- "cilium"
#selinux: false
EOF
```
> **Note: `server` must point to the IP of the first RKE2 Server node**
</div>
**1-3. Edit the kubelet configuration**
```!
~> sudo mkdir -p /etc/kubernetes && \
cat <<EOF | sudo tee /etc/kubernetes/kubeletconfig.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
memory: "1Gi"
kubeReserved:
memory: "2Gi"
EOF
```
### 2. Install RKE2 v1.31.2
<div class="indent-title-1">
**2-1. Run the install command**
```shell=!
~> sudo INSTALL_RKE2_CHANNEL=v1.31.2+rke2r1 ./install.sh
```
> - `INSTALL_RKE2_CHANNEL`
> - Specifies the release channel to download RKE2 from; the default is `latest`
:::spoiler Screen output of the command above
```=
[WARN] /usr/local is read-only or a mount point; installing to /opt/rke2
[INFO] finding release for channel v1.31.2+rke2r1
[INFO] using v1.31.2+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.31.2+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.31.2+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /opt/rke2
[INFO] updating tarball contents to reflect install path
[INFO] moving systemd units to /etc/systemd/system
[INFO] install complete; you may want to run: export PATH=$PATH:/opt/rke2/bin
```
:::
**2-2. Add `/opt/rke2/bin` to the PATH environment variable**
```
~> export PATH=$PATH:/opt/rke2/bin
```
</div>
### 3. Enable RKE2 and set up kubeconfig
<div class="indent-title-1">
**3-1. Enable the rke2-server service at boot and start it immediately**
```shell=!
~> sudo systemctl enable rke2-server --now
```
Screen output of the command above
```!
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-server.service → /etc/systemd/system/rke2-server.service.
```
</div>
### 4. Check Pod status
<div class="indent-title-1">
```!
~> kubectl get po -A
```
Screen output:
```shell=
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-operator-84b8ff5b6c-stfdw 1/1 Running 4 (145m ago) 4h9m
kube-system cilium-p45b4 1/1 Running 1 (145m ago) 3h44m
kube-system cilium-qvqps 1/1 Running 0 2m28s
kube-system etcd-m1 1/1 Running 1 4h15m
kube-system etcd-m2 1/1 Running 0 2m1s
kube-system helm-install-rke2-cilium-z7qsz 0/1 Completed 0 4h9m
kube-system helm-install-rke2-coredns-9cg6t 0/1 Completed 0 4h15m
kube-system helm-install-rke2-ingress-nginx-fscw4 0/1 Completed 0 4h15m
kube-system helm-install-rke2-metrics-server-gw8qc 0/1 Completed 0 4h15m
kube-system helm-install-rke2-multus-j8h9d 0/1 Completed 0 4h15m
kube-system helm-install-rke2-snapshot-controller-crd-h58vl 0/1 Completed 0 4h15m
kube-system helm-install-rke2-snapshot-controller-v6lwh 0/1 Completed 1 4h15m
kube-system helm-install-rke2-snapshot-validation-webhook-gz77q 0/1 Completed 0 4h15m
kube-system kube-apiserver-m1 1/1 Running 2 3h24m
kube-system kube-apiserver-m2 1/1 Running 0 2m14s
kube-system kube-controller-manager-m1 1/1 Running 10 (145m ago) 4h15m
kube-system kube-controller-manager-m2 1/1 Running 0 2m7s
kube-system kube-scheduler-m1 1/1 Running 2 (145m ago) 4h15m
kube-system kube-scheduler-m2 1/1 Running 0 2m7s
kube-system rke2-coredns-rke2-coredns-79f8cbfcf7-9rh2l 1/1 Running 0 2m22s
kube-system rke2-coredns-rke2-coredns-79f8cbfcf7-bwvtc 1/1 Running 0 145m
kube-system rke2-coredns-rke2-coredns-autoscaler-7cf7db4bdc-t8ff7 1/1 Running 1 (145m ago) 3h44m
kube-system rke2-ingress-nginx-controller-8qrjf 1/1 Running 0 108s
kube-system rke2-ingress-nginx-controller-rwpdf 1/1 Running 1 (145m ago) 3h23m
kube-system rke2-metrics-server-6d887864c-vdkgn 0/1 Completed 0 4h9m
kube-system rke2-metrics-server-6d887864c-vdt5s 1/1 Running 1 (145m ago) 3h44m
kube-system rke2-multus-hfqc6 1/1 Running 1 (145m ago) 3h44m
kube-system rke2-multus-t7vln 1/1 Running 2 (2m ago) 2m28s
kube-system rke2-snapshot-controller-658d97fccc-7gkrv 1/1 Running 7 (145m ago) 4h9m
kube-system rke2-snapshot-validation-webhook-784bcc6c8-xfq68 1/1 Running 1 (145m ago) 4h9m
```
</div>
## 4. Join Agent Nodes
### 1. Config RKE2 Basic Parameters
<div class="indent-title-1">
**1-1. Download the RKE2 install script and make it executable**
**Run the following commands on the Agent node**
```bash!
~> curl -sfL https://get.rke2.io --output install.sh && chmod +x install.sh
```
> RKE2 provides an install script that installs it as a systemd service on systemd-based systems.
**1-2. Edit the RKE2 Agent node configuration file**
```
# Define which NIC Kubernetes should run on
$ NIC='eth1'
$ IP=$(ip -4 -o addr show ${NIC} | awk '{print $4}' | cut -d/ -f1)
# Define the IP address of the API Server
$ ServerIP='192.168.11.35'
# Define the node name
$ NODE_NAME='w1'
```
```!
~> sudo mkdir -p /etc/rancher/rke2/ && \
cat <<EOF | sudo tee /etc/rancher/rke2/config.yaml
server: https://${ServerIP}:9345
write-kubeconfig-mode: "0644"
debug: False
node-ip: "${IP}"
node-name:
- "${NODE_NAME}"
token: "hehe"
etcd-extra-env: TZ=Asia/Taipei
kube-apiserver-extra-env: TZ=Asia/Taipei
kube-controller-manager-extra-env: TZ=Asia/Taipei
kube-proxy-extra-env: TZ=Asia/Taipei
kube-scheduler-extra-env: TZ=Asia/Taipei
cloud-controller-manager-extra-env: TZ=Asia/Taipei
kubelet-arg:
- container-log-max-files=5
- container-log-max-size=10Mi
- "--config=/etc/kubernetes/kubeletconfig.yml"
- "--eviction-hard=memory.available<1000Mi,imagefs.available<5%,nodefs.available<5%"
#selinux: false
EOF
```
> **Note: `server` must point to the IP of the first RKE2 Server node**
</div>
**1-3. Edit the kubelet configuration**
```!
~> sudo mkdir -p /etc/kubernetes && \
cat <<EOF | sudo tee /etc/kubernetes/kubeletconfig.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
memory: "1Gi"
kubeReserved:
memory: "2Gi"
EOF
```
### 2. Install RKE2 v1.31.2
<div class="indent-title-1">
**2-1. Run the install command**
```shell=!
~> sudo INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_CHANNEL=v1.31.2+rke2r1 ./install.sh
```
> - `INSTALL_RKE2_CHANNEL`
> - Specifies the release channel to download RKE2 from; the default is `latest`
:::spoiler Screen output of the command above
```=
[WARN] /usr/local is read-only or a mount point; installing to /opt/rke2
[INFO] finding release for channel v1.31.2+rke2r1
[INFO] using v1.31.2+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.31.2+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.31.2+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /opt/rke2
[INFO] updating tarball contents to reflect install path
[INFO] moving systemd units to /etc/systemd/system
[INFO] install complete; you may want to run: export PATH=$PATH:/opt/rke2/bin
```
:::
**2-2. Add `/opt/rke2/bin` to the PATH environment variable**
```
~> export PATH=$PATH:/opt/rke2/bin
```
</div>
### 3. Enable RKE2
<div class="indent-title-1">
**3-1. Enable the rke2-agent service at boot and start it immediately**
```shell=!
~> sudo systemctl enable rke2-agent --now
```
Screen output of the command above
```!
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-agent.service → /etc/systemd/system/rke2-agent.service.
```
</div>
### 4. Check Pod status
<div class="indent-title-1">
Run the following command on the first RKE2 Server:
```!
~> kubectl get nodes,po -A
```
Screen output:
```shell=
NAME STATUS ROLES AGE VERSION
node/m1 Ready control-plane,etcd,master 4h32m v1.31.1+rke2r1
node/m2 Ready control-plane,etcd,master 18m v1.31.1+rke2r1
node/m3 Ready control-plane,etcd,master 12m v1.31.1+rke2r1
node/w1 Ready <none> 84s v1.31.1+rke2r1
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/cilium-2w97z 1/1 Running 0 84s
kube-system pod/cilium-b744m 1/1 Running 0 12m
kube-system pod/cilium-operator-84b8ff5b6c-stfdw 1/1 Running 4 (162m ago) 4h25m
kube-system pod/cilium-p45b4 1/1 Running 1 (162m ago) 4h
kube-system pod/cilium-qvqps 1/1 Running 0 18m
kube-system pod/etcd-m1 1/1 Running 1 4h32m
kube-system pod/etcd-m2 1/1 Running 0 18m
kube-system pod/etcd-m3 1/1 Running 0 11m
kube-system pod/helm-install-rke2-cilium-z7qsz 0/1 Completed 0 4h25m
kube-system pod/helm-install-rke2-coredns-9cg6t 0/1 Completed 0 4h32m
kube-system pod/helm-install-rke2-ingress-nginx-fscw4 0/1 Completed 0 4h32m
kube-system pod/helm-install-rke2-metrics-server-gw8qc 0/1 Completed 0 4h32m
kube-system pod/helm-install-rke2-multus-j8h9d 0/1 Completed 0 4h32m
kube-system pod/helm-install-rke2-snapshot-controller-crd-h58vl 0/1 Completed 0 4h32m
kube-system pod/helm-install-rke2-snapshot-controller-v6lwh 0/1 Completed 1 4h32m
kube-system pod/helm-install-rke2-snapshot-validation-webhook-gz77q 0/1 Completed 0 4h32m
kube-system pod/kube-apiserver-m1 1/1 Running 2 3h41m
kube-system pod/kube-apiserver-m2 1/1 Running 0 18m
kube-system pod/kube-apiserver-m3 1/1 Running 0 12m
kube-system pod/kube-controller-manager-m1 1/1 Running 10 (162m ago) 4h32m
kube-system pod/kube-controller-manager-m2 1/1 Running 0 18m
kube-system pod/kube-controller-manager-m3 1/1 Running 0 12m
kube-system pod/kube-scheduler-m1 1/1 Running 2 (162m ago) 4h32m
kube-system pod/kube-scheduler-m2 1/1 Running 0 18m
kube-system pod/kube-scheduler-m3 1/1 Running 0 12m
kube-system pod/rke2-coredns-rke2-coredns-79f8cbfcf7-9rh2l 1/1 Running 0 18m
kube-system pod/rke2-coredns-rke2-coredns-79f8cbfcf7-bwvtc 1/1 Running 0 161m
kube-system pod/rke2-coredns-rke2-coredns-autoscaler-7cf7db4bdc-t8ff7 1/1 Running 1 (162m ago) 4h
kube-system pod/rke2-ingress-nginx-controller-4hns2 1/1 Running 0 48s
kube-system pod/rke2-ingress-nginx-controller-7c5bp 1/1 Running 0 11m
kube-system pod/rke2-ingress-nginx-controller-8qrjf 1/1 Running 0 18m
kube-system pod/rke2-ingress-nginx-controller-rwpdf 1/1 Running 1 (162m ago) 3h40m
kube-system pod/rke2-metrics-server-6d887864c-vdkgn 0/1 Completed 0 4h25m
kube-system pod/rke2-metrics-server-6d887864c-vdt5s 1/1 Running 1 (162m ago) 4h
kube-system pod/rke2-multus-6vcp7 1/1 Running 3 (50s ago) 84s
kube-system pod/rke2-multus-bxdkm 1/1 Running 2 (11m ago) 12m
kube-system pod/rke2-multus-hfqc6 1/1 Running 1 (162m ago) 4h
kube-system pod/rke2-multus-t7vln 1/1 Running 2 (18m ago) 18m
kube-system pod/rke2-snapshot-controller-658d97fccc-7gkrv 1/1 Running 7 (162m ago) 4h25m
kube-system pod/rke2-snapshot-validation-webhook-784bcc6c8-xfq68 1/1 Running 1 (162m ago) 4h25m
```
</div>
---
# Install K3S v1.26.9
1. Installation command
<div class="indent-title-2">
```!
$ curl -sfL https://get.k3s.io/ | INSTALL_K3S_VERSION=v1.26.9+k3s1 K3S_KUBECONFIG_MODE="644" sh -
```
</div>
2. Set up kubeconfig
<div class="indent-title-2">
```!
$ mkdir -p $HOME/.kube && sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config && sudo chmod 600 $HOME/.kube/config
```
</div>
---
# Helm Install Rancher Prime v2.8.2
## 1. Add the Helm Chart Repository
### 1.1 Install helm3
Run the following commands on the first Master Node
1. Download the Helm tarball
<div class="indent-title-1">
```shell=!
~> wget https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz
```
Screen output
```=
--2022-09-21 09:06:57-- https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz
Resolving get.helm.sh (get.helm.sh)... 152.199.39.108, 2606:2800:247:1cb7:261b:1f9c:2074:3c
Connecting to get.helm.sh (get.helm.sh)|152.199.39.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13633605 (13M) [application/x-tar]
Saving to: ‘helm-v3.8.2-linux-amd64.tar.gz’
helm-v3.8.2-linux-amd64.tar.gz 100%[=============================================================>] 13.00M 5.95MB/s in 2.2s
2022-09-21 09:07:00 (5.95 MB/s) - ‘helm-v3.8.2-linux-amd64.tar.gz’ saved [13633605/13633605]
```
</div>
2. Extract the Helm tarball
<div class="indent-title-1">
```
~> tar zxvf helm-v3.8.2-linux-amd64.tar.gz
```
Screen output
```
linux-amd64/
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/README.md
```
</div>
3. Copy the helm binary into `/usr/local/bin/`, a directory on PATH
<div class="indent-title-1">
```
~> sudo cp linux-amd64/helm /usr/local/bin/
```
</div>
<div class="indent-title-1">
:::spoiler {state="open"} Extra: install the latest Helm with a single command
```
~> curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```
:::
</div>
### 1.2 Add the Helm Chart Repository
<div class="indent-title-1">
Prime: Recommended for production environments
```
~> helm repo add rancher-prime https://charts.rancher.com/server-charts/prime
```
:::spoiler Extra — Latest: recommended for trying out the newest features
```
~> helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
```
:::
:::spoiler Extra — Stable: recommended for production environments
```
~> helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
```
:::
</div>
## 2. Create a Namespace for Rancher
<div class="indent-title-1">
This should always be `cattle-system`
```
~> kubectl create ns cattle-system
```
</div>
## 3. Choose your SSL Configuration
::: info
::: spoiler Using a self-signed certificate
1. Create and switch to a working directory
<div class="indent-title-2">
```!
~> mkdir ssl;cd ssl
```
</div>
2. Download the self-signed certificate generation script and make it executable
<div class="indent-title-2">
```!
~> wget https://raw.githubusercontent.com/cooloo9871/openssl/master/mk && \
chmod +x mk
```
</div>
3. Run the script to generate the self-signed certificate
<div class="indent-title-2">
```!
~> ./mk create rancher.example.com 192.168.11.60
```
> The first argument is the Rancher domain name
> The second argument is the LB IP (for HA Rancher) or the IP of the Rancher host itself
Screen output
```!
Generating RSA private key, 4096 bit long modulus (2 primes)
.........................................................................................................................................................................................................................++++
....................................................................................................................................................................................................................................................++++
e is 65537 (0x010001)
Generating RSA private key, 4096 bit long modulus (2 primes)
..........................................................++++
..++++
e is 65537 (0x010001)
Signature ok
subject=CN = example
Getting CA Private Key
```
Check that the output matches expectations
```!
~/ssl> ls -l
total 32
-rw------- 1 rancher users 3326 Feb 13 13:58 ca-key.pem
-rw-r--r-- 1 rancher users 2009 Feb 13 13:58 ca.pem
-rw-r--r-- 1 rancher users 41 Feb 13 13:58 ca.srl
-rw-r--r-- 1 rancher users 1582 Feb 13 13:58 cert.csr
-rw------- 1 rancher users 3243 Feb 13 13:58 cert-key.pem
-rw-r--r-- 1 rancher users 1874 Feb 13 13:58 cert.pem
-rw-r--r-- 1 rancher users 86 Feb 13 13:58 extfile.cnf
-rwxr-xr-x 1 rancher users 1166 Feb 13 13:57 mk
```
</div>
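For reference, the core of what the mk script does can be sketched with plain openssl: create a CA, then sign a server certificate whose SANs include the Rancher FQDN and the LB IP (key size, validity, and subject names here are illustrative; the real script may add further extensions):

```shell
# 1) Self-signed CA; 2) CSR for the Rancher FQDN; 3) sign it with SANs.
CN=rancher.example.com
IP=192.168.11.60
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca-key.pem -out ca.pem -subj "/CN=example"
openssl req -newkey rsa:2048 -nodes \
  -keyout cert-key.pem -out cert.csr -subj "/CN=${CN}"
printf 'subjectAltName=DNS:%s,IP:%s\n' "${CN}" "${IP}" > extfile.cnf
openssl x509 -req -in cert.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -days 365 -out cert.pem -extfile extfile.cnf
openssl verify -CAfile ca.pem cert.pem
```

`openssl verify` should report `cert.pem: OK`, and the file names produced match the `ls -l` listing above.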
4. Create the certificate secret resource
<div class="indent-title-2">
Concatenate the server certificate and any intermediate certificates into a file named tls.crt, and provide the corresponding private key in a file named tls.key.
```!
~/ssl> kubectl -n cattle-system create secret tls tls-rancher-ingress \
--cert=cert.pem \
--key=cert-key.pem
~/ssl> kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem=./cert.pem
~/ssl> kubectl -n cattle-system get secret
NAME TYPE DATA AGE
default-token-h9lwm kubernetes.io/service-account-token 3 2m15s
tls-ca Opaque 1 13s
tls-rancher-ingress kubernetes.io/tls 2 52s
```
</div>
5. Copy ca.pem to every Node of every downstream cluster managed by Rancher
<div class="indent-title-2">
After this, you no longer need to check "insecure" when registering clusters with Rancher.
```!
~/ssl> scp ca.pem rancher@<client>:~
# ssh to the client
$ sudo cp ca.pem /etc/pki/trust/anchors/
$ sudo cp ca.pem /usr/share/pki/trust/anchors/
$ sudo update-ca-certificates
```
</div>
:::
:::
::: info
::: spoiler Using cert-manager to issue the certificate
1. Install from the cert-manager release manifest
```
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.18.2/cert-manager.yaml
```
2. Check cert-manager pod status
<div class="indent-title-2">
```!
$ kubectl get pods -n cert-manager
```
Screen output:
```!
NAME READY STATUS RESTARTS AGE
cert-manager-64f9f45d6f-flvjt 1/1 Running 0 2m53s
cert-manager-cainjector-56bbdd5c47-2qrct 1/1 Running 0 2m53s
cert-manager-webhook-d4f4545d7-l8svx 1/1 Running 0 2m53s
```
</div>
<br>
:::
:::
## 4. Install Rancher Prime with Helm and Your Chosen Certificate Option
### Install Rancher Prime with a self-signed certificate
<div class="indent-title-1">
```!
~> helm install rancher rancher-prime/rancher \
--namespace cattle-system \
--version=2.8.2 \
--set bootstrapPassword=rancheradmin \
--set hostname=rancher.example.com \
--set replicas=3 \
--set ingress.tls.source=secret \
--set privateCA=true \
--set global.cattle.psp.enabled=false
```
</div>
### Install Rancher Prime with a certificate from cert-manager
<div class="indent-title-1">
```!
~> helm install rancher rancher-prime/rancher \
--namespace cattle-system \
--set hostname=rancher.antony520.com \
--set bootstrapPassword=rancheradmin \
--set global.cattle.psp.enabled=false \
--set replicas=3 \
--version 2.8.3
```
</div>
## 5. Verify that the Rancher Server is Successfully Deployed
<div class="indent-title-1">
```!
~> kubectl -n cattle-system get po
```
Screen output:
```
NAME READY STATUS RESTARTS AGE
helm-operation-cnlsx 0/2 Completed 0 14m
helm-operation-hfpdh 0/2 Completed 0 15m
helm-operation-llz2m 0/2 Completed 0 15m
helm-operation-pcq4s 0/2 Completed 0 16m
helm-operation-tw5hw 0/2 Completed 0 15m
rancher-647cdd469b-7ctmw 1/1 Running 0 20m
rancher-647cdd469b-l5q7w 1/1 Running 1 (17m ago) 20m
rancher-647cdd469b-mb4rc 1/1 Running 1 (17m ago) 20m
rancher-webhook-fb6768c79-fk9jl 1/1 Running 0 15m
```
</div>
### Check name resolution
<div class="indent-title-1">
```!
~> curl -k -H "host: rancher.example.com" https://192.168.11.50/dashboard/
```
</div>
### Export the certificate signed by cert-manager
<div class="indent-title-1">
```!
~> curl -k -s -fL rancher.example.com/v3/settings/cacerts | jq -r .value
```
</div>
### Import the certificate into the browser
* On the client, import ca.pem into the browser (use this method with a self-signed certificate)
<div class="indent-title-2">

</div>
* You can then log in to Rancher over HTTPS
<div class="indent-title-2">

</div>
## Required settings before creating RKE2 clusters from Rancher
- Nodes must resolve Rancher's FQDN through DNS, not through `/etc/hosts`; otherwise the Rancher agent (cattle-cluster-agent) will be unable to reach Rancher during cluster creation.
- If you still want to resolve the Rancher FQDN's IP address via `/etc/hosts`, follow these steps:
<div class="indent-title-2">
1. Edit the patch YAML file
<div class="indent-title-2">
```!
$ cat <<EOF > patch-file-hostalias.yaml
spec:
template:
spec:
hostAliases:
- ip: "192.168.0.11"
hostnames:
- "antony-rancher.example.com"
EOF
```
> `hostAliases` takes the IP address of the host where Rancher runs
> `hostnames` takes Rancher's FQDN
</div>
2. Patch the `cattle-cluster-agent` Deployment
<div class="indent-title-2">
```!
$ kubectl -n cattle-system patch deployment cattle-cluster-agent --patch-file patch-file-hostalias.yaml
```
</div>
3. Check the Pod status
<div class="indent-title-2">
```!
$ kubectl -n cattle-system get pods -l app=cattle-cluster-agent
```
Screen output:
```
NAME READY STATUS RESTARTS AGE
cattle-cluster-agent-6c54877576-hmtgq 1/1 Running 0 22m
cattle-cluster-agent-6c54877576-zt6mb 1/1 Running 0 22m
```
</div>
:::danger
Note: never use this method in production; it is for test environments only, because the patched settings may be overwritten whenever the downstream cluster is updated.
:::
</div>
## Required settings after creating RKE2 clusters from Rancher
Settings for every Control Plane Node
<div class="indent-title-1">
```!
# Copy kubectl and other commands
~> sudo cp /var/lib/rancher/rke2/bin/* /usr/local/bin/
# Copy rke2 Command
~> sudo cp /opt/rke2/bin/* /usr/local/bin/
# Setup kubeconfig
~> mkdir -p $HOME/.kube && sudo cp /etc/rancher/rke2/rke2.yaml $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Setup crictl config
~> cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/k3s/containerd/containerd.sock
image-endpoint: unix:///run/k3s/containerd/containerd.sock
timeout: 10
EOF
```
</div>
Settings for every Worker Node
<div class="indent-title-1">
```!
~> sudo cp /opt/rke2/bin/* /usr/local/bin/ && \
sudo cp /var/lib/rancher/rke2/bin/* /usr/local/bin/
# Setup crictl config
~> cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/k3s/containerd/containerd.sock
image-endpoint: unix:///run/k3s/containerd/containerd.sock
timeout: 10
EOF
```
</div>
# Helm upgrade Rancher
1. Update your local helm repo cache
<div class="indent-title-2">
```
$ helm repo update
```
</div>
2. Export the current values to a file:
<div class="indent-title-2">
```
$ helm get values rancher -n cattle-system -o yaml > values.yaml
```
</div>
3. Query the available Rancher versions with Helm:
<div class="indent-title-2">
```
$ helm -n cattle-system repo ls
- name: rancher-prime
url: https://charts.rancher.com/server-charts/prime
$ helm search repo rancher-prime --versions
NAME CHART VERSION APP VERSION DESCRIPTION
rancher-prime/rancher 2.8.2 v2.8.2 Install Rancher Server to manage Kubernetes clu...
...
```
</div>
4. Upgrade Rancher Version
<div class="indent-title-2">
```
$ helm upgrade rancher rancher-<CHART_REPO>/rancher \
--namespace cattle-system \
-f values.yaml \
--version=2.8.2
```
</div>
# Configuring a Private Image Registry for RKE2
<div class="indent-title-1">
You can configure this through Rancher or by editing the Containerd configuration file manually. Settings made in Rancher are applied to every Node in the cluster, while manual configuration must be repeated on each Node.
</div>
## Configuring via Rancher
1. Edit the cluster configuration
<div class="indent-title-2">

</div>
2. Enter the image registry information
<div class="indent-title-2">
:::spoiler Click Registries in the left-hand menu

:::
:::spoiler Fill in the Mirrors section
* Registry Hostname: `10.43.0.11:5000`
* Mirror Endpoints: `http://10.43.0.11:5000`

:::
:::spoiler Fill in the Registry Authentication section
* Registry Hostname: `10.43.0.11:5000`
* TLS Secret: `None`
* Authentication: `Create a HTTP Basic Auth Secret`
* Username: `docker`
* Password: `xxx`
* [x] Skip TLS Verifications

:::
</div>
3. Save and apply the settings to the whole cluster
<div class="indent-title-2">
:::spoiler Click the Save button

:::
</div>
## Manual configuration
1. Things to note:
<div class="indent-title-1">
- If you change only one Node, only that Node will be able to pull from or push to the configured image registry.
- If RKE2 is managed by Rancher and the `config.toml.tmpl` file is deleted, the settings configured in Rancher are synced back. Conversely, as long as the file exists, values set in Rancher will never be applied until `config.toml.tmpl` is deleted.
</div>
2. Make a copy of the Containerd configuration file
<div class="indent-title-2">
```!
$ sudo cp /var/lib/rancher/rke2/agent/etc/containerd/config.toml /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
```
> Important: the file name must be `config.toml.tmpl`
</div>
3. Edit the contents of `config.toml.tmpl`
<div class="indent-title-2">
```
$ sudo nano /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
```
Add the following content:
```
[plugins."io.containerd.grpc.v1.cri".registry.configs."HARBOR_FQDN".auth]
username = "xxx"
password = "xxx"
[plugins."io.containerd.grpc.v1.cri".registry.configs."HARBOR_FQDN".tls]
insecure_skip_verify = true
```
> If the image registry is served over HTTP, you must additionally add the following
> ```
> [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
>
>[plugins."io.containerd.grpc.v1.cri".registry.mirrors."{FQDN or IP Address}:Port"]
> endpoint = ["http://{FQDN or IP Address}:Port"]
> ```
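Putting both fragments together, a complete addition to `config.toml.tmpl` for an insecure HTTP registry at `10.43.0.11:5000` (the address and credentials are illustrative, matching the Rancher example above) would look like:

```toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]

[plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.43.0.11:5000"]
  endpoint = ["http://10.43.0.11:5000"]

[plugins."io.containerd.grpc.v1.cri".registry.configs."10.43.0.11:5000".auth]
  username = "docker"
  password = "xxx"

[plugins."io.containerd.grpc.v1.cri".registry.configs."10.43.0.11:5000".tls]
  insecure_skip_verify = true
```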
</div>
4. Restart the RKE2 service
<div class="indent-title-2">
```
$ sudo systemctl restart rke2-server.service
```
> On an RKE2 worker node, restart `rke2-agent.service` instead
</div>
5. Delete the `config.toml.tmpl` file (optional)
<div class="indent-title-2">
```
$ sudo rm /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
```
> **Important! If RKE2 is managed by Rancher, any image registry settings made in Rancher are overridden by this file as long as it exists, so decide for yourself whether to delete it.**
</div>
# Enable Rancher Audit Log
## Prerequisites
<div class="indent-title-1">
:::info
Note: for Rancher to receive audit logs, the Kubernetes cluster it runs on must also have audit logging enabled
:::
</div>
## 1. Enable the audit log in RKE2
### Step 1: Update the RKE2 Configuration File
<div class="indent-title-1">
```!
$ echo "audit-policy-file: /etc/rancher/rke2/audit-policy.yaml" | sudo tee -a /etc/rancher/rke2/config.yaml
```
</div>
### Step 2: Create an Audit Policy File
<div class="indent-title-1">
```
$ cat <<EOF | sudo tee /etc/rancher/rke2/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
metadata:
  creationTimestamp: null
rules:
- level: Metadata
EOF
```
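The single-rule policy above records metadata for every request. The `audit.k8s.io/v1` schema supports finer-grained rules (first match wins); a sketch of a more selective policy, as one possible refinement:
```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Never log read-only probes against health/metrics endpoints
  - level: None
    nonResourceURLs: ["/healthz*", "/readyz*", "/metrics"]
  # Keep Secrets at Metadata so their contents never reach the log
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Log full request bodies for all write operations
  - level: Request
    verbs: ["create", "update", "patch", "delete"]
  # Everything else: metadata only
  - level: Metadata
```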
</div>
### Step 3: Restart RKE2 Server
<div class="indent-title-1">
```!
$ sudo systemctl restart rke2-server
```
</div>
### Step 4: Monitor Audit Log
<div class="indent-title-1">
```!
$ sudo tail -f /var/lib/rancher/rke2/server/logs/audit.log
```
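Each line of `audit.log` is a JSON-encoded event, so quick questions can be answered with plain `grep` even without `jq`. A sketch using sample lines in place of the live file (on a node, point the commands at the `audit.log` path above instead):
```shell
# Sample audit events standing in for the real log
cat <<'EOF' > /tmp/audit-sample.log
{"kind":"Event","verb":"get","user":{"username":"system:admin"}}
{"kind":"Event","verb":"list","user":{"username":"system:admin"}}
{"kind":"Event","verb":"get","user":{"username":"kubelet"}}
EOF
# Count events per verb, busiest first
grep -o '"verb":"[a-z]*"' /tmp/audit-sample.log | sort | uniq -c | sort -rn
# → 2 occurrences of "get", 1 of "list"
```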
</div>
## 2. Enable the Rancher Audit Log
### Step 1: Export the current values to a file:
<div class="indent-title-1">
```
$ helm get values rancher -n cattle-system -o yaml > values.yaml
```
</div>
### Step 2: Add the audit log settings
<div class="indent-title-1">
```!
$ cat <<EOF | sudo tee -a values.yaml
auditLog:
  destination: sidecar
  hostPath: /var/log/rancher/audit/
  level: 3
  maxAge: 3
  maxBackup: 1
  maxSize: 100
EOF
```
</div>
### Step 3: Upgrade Rancher to enable the audit log
<div class="indent-title-1">
```!
$ helm upgrade rancher rancher-prime/rancher \
--namespace cattle-system \
-f values.yaml
```
</div>
# Limit the Total Number of Pods in an RKE2 Cluster
1. Click "Cluster Management", then click "Edit Config" next to the cluster you want to configure
<div class="indent-title-2">

</div>
2. In the Cluster Configuration menu on the left, click "Advanced", then click "Add Argument" under the Additional Kubelet Args section
<div class="indent-title-2">

</div>
3. Enter the following value, then click "Save" in the lower right corner to apply it
<div class="indent-title-2">
```
max-pods=250
```
> `max-pods` sets the maximum number of Pods that can run on each kubelet.
> When configured from Rancher, this argument is applied to every node in the Kubernetes cluster.

</div>
4. Return to "Home"
<div class="indent-title-2">

> My cluster has 3 nodes, so 250 x 3 = 750.
> For background, see my test: [What happens when the number of K8S Pods exceeds 110?](https://hackmd.io/@QI-AN/What-happens-when-the-number-of-K8S-Pods-exceeds-110)
</div>
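As a sanity check, the per-node limit multiplies out to the cluster-wide ceiling mentioned in the note above:
```shell
# max-pods is enforced per kubelet, so the cluster ceiling is the
# per-node limit times the node count (3 nodes in this example)
MAX_PODS=250
NODES=3
echo $(( MAX_PODS * NODES ))   # → 750
```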
# Troubleshooting
## A joining master node was accidentally given an existing node name, and etcd holds on to the original node's registration
Error message
```
ETCD join failed: duplicate node name found, please use a unique name for this node
```
Solution
1. Exec into the etcd pod
```
kubectl -n kube-system exec -it etcd-staging-kube-master-1.fqdn.com -- bash
```
2. List the etcd members
```!
etcdctl --cert /var/lib/rancher/rke2/server/tls/etcd/server-client.crt --key /var/lib/rancher/rke2/server/tls/etcd/server-client.key --endpoints https://127.0.0.1:2379 --cacert /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt member list
```
Output
```=
121f9e856446fcc4, started, rsm2-f826f76c, https://192.168.11.62:2380, https://192.168.11.62:2379, false
67f4840cc79f9560, started, rsm1-168e267e, https://192.168.11.61:2380, https://192.168.11.61:2379, false
6b20c6a3411cab14, started, rsm3-2126c18e, https://192.168.11.63:2380, https://192.168.11.63:2379, false
```
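The member ID handed to `member remove` in the next step can be extracted mechanically from the listing. A sketch against the sample output above, assuming `rsm3-2126c18e` is the duplicate node name:
```shell
# Save the `member list` output, then pull out the ID (field 1) whose
# member name (field 3) matches the node to remove
cat <<'EOF' > /tmp/members.txt
121f9e856446fcc4, started, rsm2-f826f76c, https://192.168.11.62:2380, https://192.168.11.62:2379, false
67f4840cc79f9560, started, rsm1-168e267e, https://192.168.11.61:2380, https://192.168.11.61:2379, false
6b20c6a3411cab14, started, rsm3-2126c18e, https://192.168.11.63:2380, https://192.168.11.63:2379, false
EOF
awk -F', ' '$3 == "rsm3-2126c18e" { print $1 }' /tmp/members.txt   # → 6b20c6a3411cab14
```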
3. Remove the duplicate etcd member
```!
etcdctl --cert /var/lib/rancher/rke2/server/tls/etcd/server-client.crt --key /var/lib/rancher/rke2/server/tls/etcd/server-client.key --endpoints https://127.0.0.1:2379 --cacert /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt member remove 6b20c6a3411cab14
```
Output
```
Member 6b20c6a3411cab14 removed from cluster 9ea915bf6adaa894
```
# Manually Renew the Certificate Cert-manager Issues for Rancher
:::info
The certificate Rancher obtains through cert-manager is valid for 90 days by default; cert-manager attempts to renew it automatically on day 60.
:::
1. SSH to a machine that can run `kubectl` against the upstream cluster (the Kubernetes cluster where Rancher runs)
2. Download the cert-manager CLI (`cmctl`)
<div class="indent-title-2">
```!
$ curl -fsSL -o cmctl.tar.gz https://github.com/cert-manager/cert-manager/releases/latest/download/cmctl-linux-amd64.tar.gz && tar xzf cmctl.tar.gz && sudo mv cmctl /usr/local/bin
```
</div>
3. Check the validity dates of the certificate Rancher is currently serving
<div class="indent-title-2">
```!
$ echo | openssl s_client -connect `kubectl -n cattle-system get ing --no-headers -o custom-columns=HOSTS:.spec.rules[*].host`:443 2>/dev/null | openssl x509 -noout -dates
```
Output:
```
notBefore=Feb 19 05:31:37 2024 GMT
notAfter=May 19 05:31:37 2024 GMT
```
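The two dates above should be 90 days apart (cert-manager's default validity). A quick check, assuming GNU `date` (BSD `date` parses arguments differently):
```shell
# Difference in whole days between the notBefore/notAfter pair from the
# openssl output above; 86400 seconds per day
NOT_BEFORE="Feb 19 05:31:37 2024 GMT"
NOT_AFTER="May 19 05:31:37 2024 GMT"
echo $(( ( $(date -d "$NOT_AFTER" +%s) - $(date -d "$NOT_BEFORE" +%s) ) / 86400 ))   # → 90
```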
</div>
4. Manually renew the Rancher certificate
<div class="indent-title-2">
4.1. Check the name of the certificate Rancher uses in Kubernetes
<div class="indent-title-2">
```!
$ kubectl -n cattle-system get certificate
```
Output:
```
NAME READY SECRET AGE
tls-rancher-ingress True tls-rancher-ingress 2d21h
```
</div>
4.2. Manually renew the certificate
<div class="indent-title-2">
```!
$ cmctl -n cattle-system renew tls-rancher-ingress
```
Output:
```
Manually triggered issuance of Certificate cattle-system/tls-rancher-ingress
```
</div>
4.3. Confirm that a new CertificateRequest was issued
<div class="indent-title-2">
```!
$ kubectl -n cattle-system get certificaterequest --sort-by=.metadata.creationTimestamp
```
Output:
```
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
tls-rancher-ingress-9jb67 True True rancher system:serviceaccount:cert-manager:cert-manager 2d21h
tls-rancher-ingress-m59sk True True rancher system:serviceaccount:cert-manager:cert-manager 3s
```
</div>
</div>
5. Inspect the certificate's current status in detail
<div class="indent-title-2">
```!
$ cmctl -n cattle-system status certificate tls-rancher-ingress
```
Output:
```
Name: tls-rancher-ingress
Namespace: cattle-system
Created at: 2024-02-16T17:08:39+08:00
Conditions:
Ready: True, Reason: Ready, Message: Certificate is up to date and has not expired
DNS Names:
- 172.20.0.37.nip.io
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Requested 13m cert-manager-certificates-request-manager Created new CertificateRequest resource "tls-rancher-ingress-7hdcg"
Normal Requested 6m49s cert-manager-certificates-request-manager Created new CertificateRequest resource "tls-rancher-ingress-8hhjl"
Normal Reused 2m56s (x6 over 156m) cert-manager-certificates-key-manager Reusing private key stored in existing Secret resource "tls-rancher-ingress"
Normal Issuing 2m56s (x6 over 156m) cert-manager-certificates-issuing The certificate has been successfully issued
Normal Requested 2m56s cert-manager-certificates-request-manager Created new CertificateRequest resource "tls-rancher-ingress-m59sk"
Issuer:
Name: rancher
Kind: Issuer
Conditions:
Ready: True, Reason: KeyPairVerified, Message: Signing CA verified
Events: <none>
Secret:
Name: tls-rancher-ingress
Issuer Country:
Issuer Organisation: dynamiclistener-org
Issuer Common Name: dynamiclistener-ca@1708074590
Key Usage: Digital Signature, Key Encipherment
Extended Key Usages:
Public Key Algorithm: RSA
Signature Algorithm: ECDSA-SHA256
Subject Key ID:
Authority Key ID: bc3674dd7762dca834cb5586c463a2c858d11528
Serial Number: c9246b491d331f8d197eed83d4de06d5
Events: <none>
Not Before: 2024-02-19T14:47:46+08:00
Not After: 2024-05-19T14:47:46+08:00
Renewal Time: 2024-04-19T14:47:46+08:00
No CertificateRequest found for this Certificate
```
> - `Not Before`: the start of the certificate's validity period
> - `Not After`: the end of the certificate's validity period
> - `Renewal Time`: the time at which cert-manager will attempt renewal, recorded in the certificate's `status.renewalTime` field
</div>
</div>
</div>
# References
- [RKE2 Quick Start - RKE2 Docs](https://docs.rke2.io/install/quickstart)
- [RKE2 High Availability - RKE2 Docs](https://docs.rke2.io/install/ha)
- [Rancher Support Matrix - SUSE Docs](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/rancher-v2-7-6/)
- [Install/Upgrade Rancher on a Kubernetes Cluster - Rancher Docs](https://ranchermanager.docs.rancher.com/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster)
- [Containerd Registry Configuration - RKE2 Docs](https://docs.rke2.io/install/containerd_registry_configuration)
- [Renew Certificate - Cert-manager Docs](https://cert-manager.io/docs/reference/cmctl/#renew)
- [Reissuance triggered by expiry (renewal) - Cert-manager Docs](https://cert-manager.io/docs/usage/certificate/#reissuance-triggered-by-expiry-renewal)