# Step-by-Step: Install OpenShift 4.19 with the Agent-based Installer in a Disconnected Environment
<style>
.indent-title-1{
margin-left: 1em;
}
.indent-title-2{
margin-left: 2em;
}
.indent-title-3{
margin-left: 3em;
}
</style>
## 0. Preface
<div class="indent-title-1">
This article walks through installing OpenShift Container Platform 4.19 in a fully disconnected (air-gapped) environment, using the Agent-based Installer on multiple VMs to build a cluster of 3 control-plane nodes and 3 worker nodes.
The simplified architecture is shown below:

You can expand the table of contents below and jump to the section you want.
:::warning
:::spoiler Table of Contents
[TOC]
:::
</div>
## 1. Environment Planning
|Hostname| Role / Service | Description |
|--------|-------------------|------------------------------------------------------------------|
|bastion| Cluster installer/bastion | Jump host that drives the OCP installation; also hosts the load balancer, DNS server, and Project Quay services |
|master-1| rendezvous host/bootstrap | A key role during installation: the installer first stages the cluster bootstrap here, then scales out to the remaining master nodes |
|master-[1-3]| Master node | OCP control-plane nodes; there must be three of them |
|worker-[1-3]| Worker/compute node | Nodes that run the applications |
|bastion| DNS Server | Provides forward and reverse name resolution (FQDN to IP, and IP back to FQDN) |
|bastion| HA Proxy | Provides the load-balancing service |
|bastion| Project Quay | Provides the image registry service |
### 1.1. Hardware Resource Requirements
<div class="indent-title-1">

> Reference: [Minimum resource requirements - Red Hat Docs](https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/installing_on_bare_metal/index#installation-minimum-resource-requirements_ipi-install-prerequisites)
</div>
### 1.2. Environment Architecture
Create the following 7 VMs on your virtualization platform (Proxmox/vCenter/...), <font color=red>and record the MAC address of each VM (except bastion)</font>
- 1 × bastion (RHEL)
- 3 × master (RHCOS)
- 3 × worker (RHCOS)
### 1.3. Hostname Format
```
HOSTNAME.CLUSTER_NAME.DOMAIN_NAME
```
example:
```
bastion.topgun.kubeantony.com
```
- `HOSTNAME` is `bastion`
- `CLUSTER_NAME` is `topgun`
- `DOMAIN_NAME` is `kubeantony.com`
### 1.4. Software Subscriptions
- Red Hat Enterprise Linux subscription
  - 60-day free personal trial: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux/server/trial
- Red Hat OpenShift Container Platform subscription
  - 60-day free personal trial: https://www.redhat.com/en/technologies/cloud-computing/openshift/ocp-self-managed-trial
## 2. Install and Configure the Bastion Host
### 2.1. Install and Configure Red Hat Enterprise Linux 9
- [Install Red Hat Enterprise Linux 9 - server-world](https://www.server-world.info/en/note?os=Other&p=rhel&f=5)
### 2.2. Download the OCP Installer and the OpenShift CLI
Download the following two items from the [Red Hat Customer Portal](https://access.redhat.com/downloads/content/290/ver=4.19/rhel---9/4.19.10/x86_64/product-software) (an account is required)
1. **OpenShift v4.19.10 Linux Installer**
2. **OpenShift v4.19.10 Linux Client**
Run the following command on the bastion host
<div class="indent-title-1">
```!
OVERSION="4.19.10"
wget --show-progress -qO "openshift-install-linux-${OVERSION}.tar.gz" "<installer download URL>" && wget --show-progress -qO "oc-${OVERSION}-linux.tar.gz" "<client download URL>"
```
> - `oc-4.19.10-linux.tar.gz` is the OpenShift CLI archive
> - `openshift-install-linux-4.19.10.tar.gz` is the program used to install OCP
:::warning
Note! Each download URL must be wrapped in double quotes, otherwise the command fails.
:::
</div>
### 2.3. Extract the Archives and Put oc, openshift-install, and kubectl on the PATH
<div class="indent-title-1">
```!
tar -xvf oc-${OVERSION}-linux.tar.gz && \
tar -xvf openshift-install-linux-${OVERSION}.tar.gz && \
sudo mv oc kubectl openshift-install /usr/local/bin
```
Output:
```!
README.md
kubectl
oc
README.md
openshift-install
```
</div>
### 2.4. Verify the oc Command Version
<div class="indent-title-1">
```!
oc version
```
Output:
<pre>
Client Version: <font color=red>4.19.10</font>
Kustomize Version: <font color=blue>v5.5.0</font>
</pre>
</div>
### 2.5. Verify the openshift-install Command Version
<div class="indent-title-1">
```!
openshift-install version
```
Output:
<pre>
openshift-install <font color=red>4.19.10</font>
built from commit 87bc6d06e8abd759e92112b434a180c2ddff41f1
release image quay.io/openshift-release-dev/ocp-release@sha256:2f9145136fb387d43c7fff55b30a036c14eb96b0992c292274b6f543c6c33857
release architecture amd64
</pre>
</div>
### 2.6. Verify the kubectl Command Version
<div class="indent-title-1">
```!
kubectl version --client --output=yaml
```
Output:
<pre>
clientVersion:
  buildDate: "2025-08-26T15:03:13Z"
  compiler: gc
  gitCommit: 298429ba9831d1d72b89edd9beb82a6ee665c3b7
  gitTreeState: clean
  gitVersion: v1.32.1
  goVersion: go1.23.9 (Red Hat 1.23.9-1.el9_6) X:strictfipsruntime
  major: "1"
  minor: "32"
  platform: linux/amd64
kustomizeVersion: <font color=blue>v5.5.0</font>
</pre>
</div>
### 2.7. Install the DNS Server
<div class="indent-title-1">
```
sudo yum -y install bind
```
Output:
```
...(earlier output omitted)
Complete!
```
</div>
### 2.8. Edit the DNS Server Configuration File named.conf
<div class="indent-title-1">
```
sudo nano /etc/named.conf
```
Two parts need to be changed:
1. Change the values of `listen-on port 53` and `allow-query` to **`any`**
<div class="indent-title-2">
File content:
<pre>
options {
        listen-on port 53 { <font color=red>any</font>; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        secroots-file   "/var/named/data/named.secroots";
        recursing-file  "/var/named/data/named.recursing";
        allow-query     { <font color=red>any</font>; };
</pre>
</div>
2. Assuming the bastion host's hostname is `bastion.topgun.kubeantony.com`
and its IP is `192.168.11.21`, append the following to the very end of `/etc/named.conf`:
<div class="indent-title-2">
```
zone "kubeantony.com" {
type master;
file "/etc/named/zones/db.kubeantony.com";
};
zone "11.168.192.in-addr.arpa" {
type master;
file "/etc/named/zones/db.kubeantony.com.reverse";
};
```
:::info
In this example:
- the cluster name is `topgun`
- the base domain name is `kubeantony.com`
:::
</div>
</div>
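Before moving on, it is worth validating the edited file; a minimal check using `named-checkconf`, which ships with the bind package:
```
# Prints nothing and exits 0 when the configuration is syntactically valid
sudo named-checkconf /etc/named.conf
```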
### 2.9. Configure Forward Name Resolution on the DNS Server
<div class="indent-title-1">
```!
sudo mkdir /etc/named/zones && \
sudo nano /etc/named/zones/db.kubeantony.com
```
File content:
<pre>
$TTL 1W
@       IN SOA  <font color=red>bastion.topgun.kubeantony.com.</font> root (
                2019070700      ; serial
                3H              ; refresh (3 hours)
                30M             ; retry (30 minutes)
                2W              ; expiry (2 weeks)
                1W )            ; minimum (1 week)
        IN NS   <font color=red>bastion.topgun.kubeantony.com.</font>
        IN MX 10 smtp.kubeantony.com.
;
;
<font color=red>bastion.topgun.kubeantony.com.</font>    IN A    <font color=red>192.168.11.21</font>
<font color=red>smtp.kubeantony.com.</font>              IN A    <font color=red>192.168.11.21</font>
<font color=red>quay.kubeantony.com.</font>              IN A    <font color=red>192.168.11.21</font>
;
<font color=red>api.topgun.kubeantony.com.</font>        IN A    <font color=red>192.168.11.21</font>
<font color=red>api-int.topgun.kubeantony.com.</font>    IN A    <font color=red>192.168.11.21</font>
;
<font color=red>*.apps.topgun.kubeantony.com.</font>     IN A    <font color=red>192.168.11.21</font>
;
<font color=red>master-1.topgun.kubeantony.com.</font>   IN A    <font color=red>192.168.11.23</font>
<font color=red>master-2.topgun.kubeantony.com.</font>   IN A    <font color=red>192.168.11.24</font>
<font color=red>master-3.topgun.kubeantony.com.</font>   IN A    <font color=red>192.168.11.25</font>
;
<font color=red>worker-1.topgun.kubeantony.com.</font>   IN A    <font color=red>192.168.11.26</font>
<font color=red>worker-2.topgun.kubeantony.com.</font>   IN A    <font color=red>192.168.11.27</font>
<font color=red>worker-3.topgun.kubeantony.com.</font>   IN A    <font color=red>192.168.11.28</font>
;
;EOF
</pre>
:::danger
**Note: the values in red must be adjusted to match your environment**
:::
</div>
### 2.10. Configure Reverse Name Resolution on the DNS Server
<div class="indent-title-1">
```!
sudo nano /etc/named/zones/db.kubeantony.com.reverse
```
File content:
<pre>
$TTL 1W
@       IN SOA  <font color=red>bastion.topgun.kubeantony.com.</font> root (
                2019070700      ; serial
                3H              ; refresh (3 hours)
                30M             ; retry (30 minutes)
                2W              ; expiry (2 weeks)
                1W )            ; minimum (1 week)
        IN NS   <font color=red>bastion.topgun.kubeantony.com.</font>
;
<font color=red>21.11.168.192.in-addr.arpa.</font>       IN PTR  <font color=red>api.topgun.kubeantony.com.</font>
<font color=red>21.11.168.192.in-addr.arpa.</font>       IN PTR  <font color=red>api-int.topgun.kubeantony.com.</font>
;
<font color=red>23.11.168.192.in-addr.arpa.</font>       IN PTR  <font color=red>master-1.topgun.kubeantony.com.</font>
<font color=red>24.11.168.192.in-addr.arpa.</font>       IN PTR  <font color=red>master-2.topgun.kubeantony.com.</font>
<font color=red>25.11.168.192.in-addr.arpa.</font>       IN PTR  <font color=red>master-3.topgun.kubeantony.com.</font>
;
<font color=red>26.11.168.192.in-addr.arpa.</font>       IN PTR  <font color=red>worker-1.topgun.kubeantony.com.</font>
<font color=red>27.11.168.192.in-addr.arpa.</font>       IN PTR  <font color=red>worker-2.topgun.kubeantony.com.</font>
<font color=red>28.11.168.192.in-addr.arpa.</font>       IN PTR  <font color=red>worker-3.topgun.kubeantony.com.</font>
;
;EOF
</pre>
:::danger
**Note: the values in red must be adjusted to match your environment**
:::
</div>
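Optionally validate both zone files before starting the server. `named-checkzone` (also part of the bind package) loads a zone exactly the way named would:
```
# Validate the forward zone
sudo named-checkzone kubeantony.com /etc/named/zones/db.kubeantony.com
# Validate the reverse zone
sudo named-checkzone 11.168.192.in-addr.arpa /etc/named/zones/db.kubeantony.com.reverse
```
Each command should finish with `OK`.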
### 2.11. Start the DNS Server
<div class="indent-title-1">
```
sudo systemctl enable named --now
```
Output:
```!
Created symlink /etc/systemd/system/multi-user.target.wants/named.service → /usr/lib/systemd/system/named.service.
```
Check:
```
sudo systemctl status named
```
Output:
<pre>
● named.service - Berkeley Internet Name Domain (DNS)
   Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor preset: disabled)
   Active: <font color=green>active (running)</font> since Mon 2023-07-17 16:50:31 CST; 1s ago
...(output below omitted)
</pre>
</div>
### 2.12. Add the Bastion's IP as a DNS Server
<div class="indent-title-1">
```!
sudo nmcli connection modify 'ens18' ipv4.dns '192.168.11.21' +ipv4.dns '8.8.8.8'
sudo systemctl restart NetworkManager
```
Check:
```
sudo cat /etc/resolv.conf
```
Output:
```
# Generated by NetworkManager
search topgun.kubeantony.com
nameserver 192.168.11.21
nameserver 8.8.8.8
```
</div>
### 2.13. Disable the Firewall
<div class="indent-title-1">
```
sudo systemctl disable firewalld.service --now
```
Verify that it is disabled:
```
sudo systemctl status firewalld.service --no-pager
```
Output:
```
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
...(output below omitted)
```
</div>
### 2.14. Verify DNS Forward Resolution
<div class="indent-title-1">
```
# Define the forward (A-record) zone file
AZF="/etc/named/zones/db.kubeantony.com"
# Extract every hostname that has an A record from the zone file
ARecord=$(sudo grep -w 'A' $AZF | cut -d " " -f 1)
# Query each hostname, using the first nameserver in /etc/resolv.conf
for i in $ARecord; do dig @$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf) ${i} +short; done
```
Output:
```
192.168.11.21
192.168.11.21
192.168.11.21
192.168.11.21
192.168.11.21
192.168.11.23
192.168.11.24
192.168.11.25
192.168.11.26
192.168.11.27
192.168.11.28
```
</div>
### 2.15. Verify DNS Reverse Resolution
<div class="indent-title-1">
```
# Path to the PTR (reverse) zone file
PZF="/etc/named/zones/db.kubeantony.com.reverse"
# Extract all PTR records, convert them to standard IPv4 form, and deduplicate
PTRRecord=$(sudo grep 'PTR' $PZF | awk '{split($1,a,"."); print a[4]"."a[3]"."a[2]"."a[1]}' | sort -u)
# Run a reverse DNS lookup for each IP, using the first nameserver
for i in $PTRRecord; do dig -x ${i} @$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf) +short; done
```
Output:
```
api-int.topgun.kubeantony.com.
api.topgun.kubeantony.com.
master-1.topgun.kubeantony.com.
master-2.topgun.kubeantony.com.
master-3.topgun.kubeantony.com.
worker-1.topgun.kubeantony.com.
worker-2.topgun.kubeantony.com.
worker-3.topgun.kubeantony.com.
```
</div>
### 2.16. Install the HAProxy Service
<div class="indent-title-1">
```
sudo yum -y install haproxy
```
Output:
```
...(earlier output omitted)
Complete!
```
</div>
### 2.17. Configure HAProxy
<div class="indent-title-1">
Empty out the default configuration file first
```
cat /dev/null | sudo tee /etc/haproxy/haproxy.cfg
```
Then edit it
```
sudo nano /etc/haproxy/haproxy.cfg
```
Copy the following content into the file:
<pre>
global
    log         127.0.0.1 local2
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    daemon
defaults
    mode                    http
    log                     global
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
listen api-server-6443
    bind *:6443
    mode tcp
    server <font color=red>master-1 master-1.topgun.kubeantony.com:6443 check inter 1s</font>
    server <font color=red>master-2 master-2.topgun.kubeantony.com:6443 check inter 1s</font>
    server <font color=red>master-3 master-3.topgun.kubeantony.com:6443 check inter 1s</font>
listen machine-config-server-22623
    bind *:22623
    mode tcp
    server <font color=red>master-1 master-1.topgun.kubeantony.com:22623 check inter 1s</font>
    server <font color=red>master-2 master-2.topgun.kubeantony.com:22623 check inter 1s</font>
    server <font color=red>master-3 master-3.topgun.kubeantony.com:22623 check inter 1s</font>
listen ingress-router-443
    bind *:443
    mode tcp
    balance source
    server <font color=red>worker-1 worker-1.topgun.kubeantony.com:443 check inter 1s</font>
    server <font color=red>worker-2 worker-2.topgun.kubeantony.com:443 check inter 1s</font>
    server <font color=red>worker-3 worker-3.topgun.kubeantony.com:443 check inter 1s</font>
listen ingress-router-80
    bind *:80
    mode tcp
    balance source
    server <font color=red>worker-1 worker-1.topgun.kubeantony.com:80 check inter 1s</font>
    server <font color=red>worker-2 worker-2.topgun.kubeantony.com:80 check inter 1s</font>
    server <font color=red>worker-3 worker-3.topgun.kubeantony.com:80 check inter 1s</font>
</pre>
:::danger
**Note: the values in red must be adjusted to match your environment**
:::
</div>
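Before starting the service, HAProxy can validate the configuration itself; a quick check:
```
# -c parses the configuration and exits without starting the proxy
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```
A valid file prints `Configuration file is valid`.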
### 2.18. Allow HAProxy to Bind to the Required TCP Ports (SELinux)
<div class="indent-title-1">
```
sudo setsebool -P haproxy_connect_any=1
```
> If you are using HAProxy as a load balancer and SELinux is set to **`enforcing`**, you must ensure that the HAProxy service can bind to the configured TCP port by running **`setsebool -P haproxy_connect_any=1`**.
</div>
### 2.19. Start the HAProxy Service and Enable It at Boot
<div class="indent-title-1">
```
sudo systemctl enable --now haproxy.service
```
Output:
```!
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.
```
Check:
```
sudo systemctl status haproxy.service
```
Output:
<pre>
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: <font color=green>active (running)</font> since Mon 2023-07-17 17:26:21 CST; 11min ago
...(output below omitted)
</pre>
</div>
### 2.20. Download the pull-secret
<div class="indent-title-1">
- Run the following commands
```
mkdir ~/ocp4; cd ~/ocp4;
nano pull-secret.txt
```
- Download it from the link below. Note: **you must be logged in first**
  - [Download Pull secret](https://console.redhat.com/openshift/create/local)

<div class="indent-title-2">
> You can click "**Download pull secret**", or click "**Copy pull secret**" to copy the pull secret to the clipboard and paste it into `pull-secret.txt`
</div>
</div>
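A quick optional sanity check: the pull secret must be valid JSON, and `podman`/`oc` will fail in confusing ways if a stray character sneaks in while pasting. This assumes `jq` is installed on the bastion:
```
# jq exits non-zero if the file is not valid JSON
jq . ~/ocp4/pull-secret.txt > /dev/null && echo "pull-secret.txt is valid JSON"
```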
## 3. Install the Project Quay Container Registry
### 3.1. Prerequisites
* 2 or more virtual CPUs
* 4 GB or more of memory
* At least about 30 GB of disk space, broken down roughly as follows:
  * about 10 GB for the Red Hat Enterprise Linux (RHEL) operating system
  * about 10 GB for Podman storage, to run the three containers
  * about 10 GB for Project Quay's local storage
* `podman` is installed
* The machine has internet access
### 3.2. Provision a Dedicated Disk for Project Quay
1. Add a new disk to the VM, dedicated to Project Quay
2. Check the new disk's device name
```
lsblk
```
Result:
```
...
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdb 8:16 0 100G 0 disk
```
3. Store the device name in a variable
```
D="/dev/sdb"
```
4. Use parted to create a GPT partition from 1 MiB to the end of the disk, and mark it for LVM use
```
sudo parted -s ${D} mklabel gpt
sudo parted -s ${D} unit MiB mkpart primary 1 100%
# Mark the partition for LVM use, as described above
sudo parted -s ${D} set 1 lvm on
```
5. Initialize the new partition as an LVM physical volume (PV)
```
sudo pvcreate ${D}1
```
6. Create a volume group (VG) named vg-data
```
sudo vgcreate vg-data ${D}1
```
7. Create a logical volume (LV) named lv-quay that uses all remaining space in the VG
```
sudo lvcreate -l 100%FREE -n lv-quay vg-data
```
8. Format the LV with an XFS file system for robust, efficient storage
```
sudo mkfs.xfs /dev/vg-data/lv-quay
```
9. Create the mount point that Project Quay will use
```
sudo mkdir -p /data/quay
```
10. Add the LV to `/etc/fstab` so it is mounted automatically at boot
```
echo '/dev/vg-data/lv-quay /data/quay xfs defaults 0 0' | sudo tee -a /etc/fstab
```
11. Mount the LV and reload systemd so it picks up the fstab change
```
sudo mount -a && sudo systemctl daemon-reload
```
12. Create the storage directories Project Quay needs (after mounting, so they live on the new LV) and hand ownership to the current user; a quick verification follows this list
```
sudo mkdir -p /data/quay/{root,sqlite-storage,quay-storage}
sudo chown -R $(id -u):$(id -g) /data/quay
```
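A minimal sketch to confirm the mount and directories look right before installing Quay:
```
# The LV should be mounted at /data/quay with its full size available
df -h /data/quay
# The three Quay directories should exist on the mounted LV
ls -ld /data/quay/{root,sqlite-storage,quay-storage}
```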
### 3.3. Confirm the DNS A Record Is Ready
1. Run the following query
```
dig quay.kubeantony.com +short
```
Result:
```
192.168.11.21
```
### 3.4. Install Project Quay via mirror-registry
> mirror-registry: this application lets you install Quay and its required components with a single CLI command. Its purpose is to provide an image registry to hold the OpenShift images.
1. Create the project working directory and switch into it
```
mkdir ~/project-quay; cd ~/project-quay/
```
2. Download the mirror-registry tool
```
wget https://mirror.openshift.com/pub/cgw/mirror-registry/latest/mirror-registry-amd64.tar.gz
```
3. Extract the downloaded tarball into the working directory
```
tar xvzf mirror-registry-amd64.tar.gz -C ~/project-quay/
```
4. Run the installer, setting Quay's storage paths, hostname, and the initial admin credentials
```
~/project-quay/mirror-registry install \
--quayRoot /data/quay/root \
--quayStorage /data/quay/quay-storage \
--sqliteStorage /data/quay/sqlite-storage \
--initUser admin \
--initPassword redhat123 \
--quayHostname quay.kubeantony.com
```
Successful result:
```
...
INFO[2025-09-16 23:50:07] Quay installed successfully, config data is stored in /data/quay/root
INFO[2025-09-16 23:50:07] Quay is available at https://quay.kubeantony.com:8443 with credentials (admin, redhat123)
```
5. Check that the Quay, Redis, and related app containers started successfully
```
podman ps -a
```
Expected result:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dab9b2b04011 registry.access.redhat.com/ubi8/pause:8.10-5 infinity 3 minutes ago Up 3 minutes 0.0.0.0:8443->8443/tcp 31e6ad76f0aa-infra
cd50dd26d5d3 registry.redhat.io/rhel8/redis-6:1-190 run-redis 3 minutes ago Up 3 minutes 0.0.0.0:8443->8443/tcp, 6379/tcp quay-redis
e5386bb42056 registry.redhat.io/quay/quay-rhel8:v3.12.10 registry 2 minutes ago Up 3 minutes 0.0.0.0:8443->8443/tcp, 7443/tcp, 8080/tcp quay-app
```
6. Copy the root certificate Quay generated (rootCA.pem) into the system's trusted-certificate directory
```
sudo cp -v /data/quay/root/quay-rootCA/rootCA.pem /etc/pki/ca-trust/source/anchors/
```
7. Update the system CA trust store so HTTPS connections trust this certificate
```
sudo update-ca-trust
```
8. Log in to the Quay registry with podman to confirm the account and connectivity both work
```
podman login -u admin -p redhat123 quay.kubeantony.com:8443
```
Expected result:
```
Login Succeeded!
```
9. Query the state of the Quay-related services via systemd
```
systemctl --user list-units --type=service | grep "quay"
```
Result:
```
quay-app.service loaded active running Quay Container
quay-pod.service loaded active exited Infra Container for Quay
quay-redis.service loaded active running Redis Podman Container for Quay
```
> To restart the Quay pod via systemd (this also restarts the Quay app and Redis containers):
> ```
> systemctl --user restart quay-pod
> ```
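As an optional end-to-end check that the registry answers over HTTPS with the newly trusted certificate, you can query Quay's instance health endpoint (the `/health/instance` path is assumed from Quay's standard health API):
```
curl -s https://quay.kubeantony.com:8443/health/instance
```
A healthy instance returns a small JSON document containing `"status_code": 200`.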
### 3.5. Use `oc-mirror` to Download and Package the Container Images Required by the OpenShift Cluster, and Upload Them to the Local Project Quay
1. Switch to the working directory
```
cd ~/ocp4
```
2. Download the `oc-mirror` CLI tool
```
wget https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest/oc-mirror.rhel9.tar.gz
```
3. Extract the `oc-mirror` binary into a directory on $PATH so it can be run directly
```
mkdir -p $HOME/.local/bin
tar xvzf ./oc-mirror.rhel9.tar.gz -C $HOME/.local/bin
chmod +x $HOME/.local/bin/oc-mirror
```
4. Create the standard directory for registry credentials
```
mkdir -p $HOME/.docker
```
5. Move the Red Hat pull secret to the standard credentials path, used to pull images from the official registries
```
mv -v pull-secret.txt $HOME/.docker/config.json
```
6. Log in to the local Quay registry; the credentials are added to config.json automatically
```
podman login -u admin \
-p redhat123 \
--authfile $HOME/.docker/config.json \
quay.kubeantony.com:8443
```
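A small optional check that `config.json` now holds both credential sets (the Red Hat registries plus the local Quay), assuming `jq` is available:
```
# List every registry that has an auth entry in the combined file
jq -r '.auths | keys[]' $HOME/.docker/config.json
```
The output should include `quay.kubeantony.com:8443` alongside the Red Hat registries.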
7. Back up the combined file containing both the Red Hat and local Quay credentials
```
cp -v $HOME/.docker/config.json ~/ocp4/pull-secret.json
```
8. List the OperatorHub catalogs available for the targeted OpenShift version (4.19)
```
oc-mirror list operators --catalogs --version=4.19
```
Result:
```
Available OpenShift OperatorHub catalogs:
OpenShift 4.19:
registry.redhat.io/redhat/redhat-operator-index:v4.19
registry.redhat.io/redhat/certified-operator-index:v4.19
registry.redhat.io/redhat/community-operator-index:v4.19
registry.redhat.io/redhat/redhat-marketplace-index:v4.19
```
9. List all Operator packages in a specific catalog and save the result to a file
```
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.19 > ~/ocp4/redhat-operator-index-package.out
```
10. Search the list for specific Operators to confirm their package names and available channels
```
cat ~/ocp4/redhat-operator-index-package.out | grep -E "opentelemetry-product|cluster-logging|cluster-observability-operator|quay-operator"
```
Result:
```
cluster-logging stable-6.3
cluster-observability-operator stable
opentelemetry-product stable
quay-operator stable-3.15
```
11. List all available channels related to OpenShift 4.19
```
oc-mirror list releases --version 4.19 --channels
```
Result:
```
Listing channels for version 4.19.
fast-4.18
stable-4.18
candidate-4.19
fast-4.19
stable-4.19
candidate-4.20
candidate-4.18
eus-4.18
```
12. List the OpenShift versions installable from the `stable-4.19` channel
```
oc-mirror list releases --channel stable-4.19
```
Result:
```
Channel: stable-4.19
Architecture: amd64
...
4.19.3
4.19.4
4.19.5
4.19.6
4.19.7
4.19.9
4.19.10
```
13. Create the local directory that will hold the mirror tar file
```
mkdir -p /data/quay/mirror_img
```
14. Create the ImageSetConfiguration file that defines what to mirror
```
nano imageset-config.yaml
```
File content:
```yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  # Local path where the mirrored image set is stored
  local:
    path: /data/quay/mirror_img
mirror:
  # Mirror the OpenShift platform's core release images
  platform:
    channels:
    - name: stable-4.19 # the targeted OCP release channel
      type: ocp
  # Mirror the specified Operators
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.19
    # Packages to download, and the channel for each
    packages:
    - name: opentelemetry-product
      channels:
      - name: stable
    - name: cluster-logging
      channels:
      - name: stable-6.3
    - name: cluster-observability-operator
      channels:
      - name: stable
    - name: loki-operator
      channels:
      - name: stable-6.3
  # Mirror additional standalone images
  additionalImages:
  - name: registry.redhat.io/ubi8/ubi:latest
  helm: {}
```
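Before starting the full multi-gigabyte download in the next step, you can optionally let `oc-mirror` resolve the image set without pulling anything, assuming your oc-mirror build supports the documented `--dry-run` flag:
```
oc mirror --config=imageset-config.yaml file:///data/quay/mirror_img --dry-run
```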
15. Run the mirroring process: download every image defined in the config from the internet and package them into a tar file stored locally
```
oc mirror --verbose 3 \
--config=imageset-config.yaml \
file:///data/quay/mirror_img
```
16. Confirm the tar file was created
```
ls -lh /data/quay/mirror_img
```
Result:
```
total 39G
-rw-r--r--. 1 bigred bigred 39G Sep 15 16:35 mirror_seq1_000000.tar
drwxr-xr-x. 3 bigred bigred 17 Sep 15 16:11 oc-mirror-workspace
drwxr-x---. 2 bigred bigred 28 Sep 15 16:35 publish
```
17. Push every image in the local tar file to the target Project Quay registry
```
oc mirror --verbose 3 \
--from=/data/quay/mirror_img/mirror_seq1_000000.tar \
docker://quay.kubeantony.com:8443
```
## 4. Create the Required Cluster Configuration Files
### 4.1. Configure `install-config.yaml`
```
nano ~/ocp4/install-config.yaml
```
> Fields that will differ in every environment:
> - `networking.machineNetwork.cidr`: the network the cluster nodes live on.
> - The `pullSecret` field content: `jq -c . ~/ocp4/pull-secret.json`.
> - The `sshKey` field content: `cat ~/.ssh/id_rsa.pub`; if you have no SSH public key yet, generate one with `ssh-keygen -t rsa -P ''`.
> - The `additionalTrustBundle` field content: `cat /data/quay/root/quay-rootCA/rootCA.pem`.
> - To write the certificate into the file in the correct format:
> 1. Install yq
> ```
> sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/local/bin/yq
> sudo chmod +x /usr/local/bin/yq
> ```
> 2. Define file-path variables for readability
> ```
> CERT_FILE="/data/quay/root/quay-rootCA/rootCA.pem"
> CONFIG_FILE="$HOME/ocp4/install-config.yaml"
> ```
> 3. Read the certificate and write it into the YAML file with yq
> ```
> CERT_CONTENT=$(cat "$CERT_FILE") yq -i '.additionalTrustBundle = strenv(CERT_CONTENT)' "$CONFIG_FILE"
> ```
> - `imageContentSources.mirrors`: replace `quay.kubeantony.com:8443` with your own Quay `FQDN:PORT`
File content:
```
apiVersion: v1
baseDomain: kubeantony.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: topgun
networking:
  clusterNetwork:
  - cidr: 10.244.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.11.0/24
  networkType: OVNKubernetes
  serviceNetwork:
  - 10.98.0.0/16
platform:
  none: {}
pullSecret: '...'
sshKey: '...'
additionalTrustBundlePolicy: Always
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  MIID3zCCAsegAwIBAgIUA/xNrN5qECVnywPZoUuA+VblAwowDQYJKoZIhvcNAQEL
  BQAwbTELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAlZBMREwDwYDVQQHDAhOZXcgWW9y
  azENMAsGA1UECgwEUXVheTERMA8GA1UECwwIRGl2aXNpb24xHDAaBgNVBAMME3F1
  YXkua3ViZWFudG9ueS5jb20wHhcNMjUwNzAxMTU1MDEzWhcNMjgwNDIwMTU1MDEz
  WjBtMQswCQYDVQQGEwJVUzELMAkGA1UECAwCVkExETAPBgNVBAcMCE5ldyBZb3Jr
  MQ0wCwYDVQQKDARRdWF5MREwDwYDVQQLDAhEaXZpc2lvbjEcMBoGA1UEAwwTcXVh
  eS5rdWJlYW50b255LmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
  ALPGUbzh8ic5Qz8B3z0G66fyXsTYGYoFf37VXVL8oCxliKWEwz9kDutWyTjWI1Su
  3JHFzxjq0gWkeC0vtVmfTFmNtId1zQrXAk6np0HPrgW73msy5UW5c7q98BoAS7Aq
  wLDH7NsnUVclBnnn3yozLNRFVF/DrgY8JDhcPoeuFljb0pwMCEwnkYYi7RR+7Tmx
  Kkt5hTaraRyH2i/ja6pjNmTtR2TVQKfsKgy5yCZOIcrEjVZMRIdqna1z1EhFdW8R
  GGiBZwScfkjimUV8HIXWs+QfS7EpQSmC/CK+dR2IPawVoimyN9pff1U60p23Y3dV
  vNf8TH0Crj6zfpaWMix1o3ECAwEAAaN3MHUwCwYDVR0PBAQDAgLkMBMGA1UdJQQM
  MAoGCCsGAQUFBwMBMB4GA1UdEQQXMBWCE3F1YXkua3ViZWFudG9ueS5jb20wEgYD
  VR0TAQH/BAgwBgEB/wIBATAdBgNVHQ4EFgQUKDUsDN+UCyJaNeA/xv/4/DUrspAw
  DQYJKoZIhvcNAQELBQADggEBAIjUZ2aUCapDbfjuEJeOQOIuS4ZfrM0RSAx/kMHl
  uV22Wz2bbjLGX8EC9Y1zajYV00vVPC/2sH8hGvnUiznTngY5KSWypsx42BY9eMbN
  lCATIuayyByWSdqn9BxZ8E7yzwSm1B9529aVFuuT8yNmEx+Xhe1wtJRYZQBArPZA
  rbrouCUqrLm0LNKo5L/rMKlWdYY5QwpzY7UQcENCe4wO8xfFnAOP5uJTrSUp1R1N
  Q5JiA+RPoSOZ/MkfTOO4ibW8DlqZt+s3NSpbfHIFnPuc9uK4EoAs7TDEHo53ORct
  x5TvHNuXmfqYMIqaDMfw2FlpvzhfZHucI+a/Nb46XcAb4cg=
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - quay.kubeantony.com:8443/openshift/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
  - quay.kubeantony.com:8443/openshift/release-images
  source: quay.io/openshift-release-dev/ocp-release
```
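An optional sanity check that the finished file parses and that the secret fields were actually populated, reusing the yq installed above:
```
# Each expression should print a non-empty value / true
yq '.metadata.name, .baseDomain, (.pullSecret | length > 0), (.additionalTrustBundle | length > 0)' ~/ocp4/install-config.yaml
```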
### 4.2. Configure `agent-config.yaml`
```
nano ~/ocp4/agent-config.yaml
```
> Fields that will differ in every environment:
> - `rendezvousIP`: set this to the IP of the first control-plane node
> - `additionalNTPSources`: your environment's NTP server
> - `hosts.interfaces.macAddress`: the MAC address of each node
> - `hosts.networkConfig`: the corresponding network settings
File content:
```yaml
apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: topgun
rendezvousIP: 192.168.11.23
additionalNTPSources:
- 192.168.11.11
hosts:
- hostname: master-1
  role: master
  interfaces:
  - name: ens18
    macAddress: BC:24:11:99:B8:1B
  networkConfig:
    interfaces:
    - name: ens18
      type: ethernet
      state: up
      mac-address: BC:24:11:99:B8:1B
      ipv4:
        enabled: true
        address:
        - ip: 192.168.11.23
          prefix-length: 24
        dhcp: false
    dns-resolver:
      config:
        server:
        - 192.168.11.21
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.11.253
        next-hop-interface: ens18
        table-id: 254
  rootDeviceHints:
    deviceName: /dev/sda
- hostname: master-2
  role: master
  interfaces:
  - name: ens18
    macAddress: BC:24:11:F5:E5:E4
  networkConfig:
    interfaces:
    - name: ens18
      type: ethernet
      state: up
      mac-address: BC:24:11:F5:E5:E4
      ipv4:
        enabled: true
        address:
        - ip: 192.168.11.24
          prefix-length: 24
        dhcp: false
    dns-resolver:
      config:
        server:
        - 192.168.11.21
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.11.253
        next-hop-interface: ens18
        table-id: 254
  rootDeviceHints:
    deviceName: /dev/sda
- hostname: master-3
  role: master
  interfaces:
  - name: ens18
    macAddress: BC:24:11:9C:1C:0F
  networkConfig:
    interfaces:
    - name: ens18
      type: ethernet
      state: up
      mac-address: BC:24:11:9C:1C:0F
      ipv4:
        enabled: true
        address:
        - ip: 192.168.11.25
          prefix-length: 24
        dhcp: false
    dns-resolver:
      config:
        server:
        - 192.168.11.21
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.11.253
        next-hop-interface: ens18
        table-id: 254
  rootDeviceHints:
    deviceName: /dev/sda
- hostname: worker-1
  role: worker
  interfaces:
  - name: ens18
    macAddress: BC:24:11:28:F0:89
  networkConfig:
    interfaces:
    - name: ens18
      type: ethernet
      state: up
      mac-address: BC:24:11:28:F0:89
      ipv4:
        enabled: true
        address:
        - ip: 192.168.11.26
          prefix-length: 24
        dhcp: false
    dns-resolver:
      config:
        server:
        - 192.168.11.21
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.11.253
        next-hop-interface: ens18
        table-id: 254
  rootDeviceHints:
    deviceName: /dev/sda
- hostname: worker-2
  role: worker
  interfaces:
  - name: ens18
    macAddress: BC:24:11:77:1E:9E
  networkConfig:
    interfaces:
    - name: ens18
      type: ethernet
      state: up
      mac-address: BC:24:11:77:1E:9E
      ipv4:
        enabled: true
        address:
        - ip: 192.168.11.27
          prefix-length: 24
        dhcp: false
    dns-resolver:
      config:
        server:
        - 192.168.11.21
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.11.253
        next-hop-interface: ens18
        table-id: 254
  rootDeviceHints:
    deviceName: /dev/sda
- hostname: worker-3
  role: worker
  interfaces:
  - name: ens18
    macAddress: BC:24:11:F0:74:F3
  networkConfig:
    interfaces:
    - name: ens18
      type: ethernet
      state: up
      mac-address: BC:24:11:F0:74:F3
      ipv4:
        enabled: true
        address:
        - ip: 192.168.11.28
          prefix-length: 24
        dhcp: false
    dns-resolver:
      config:
        server:
        - 192.168.11.21
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.11.253
        next-hop-interface: ens18
        table-id: 254
  rootDeviceHints:
    deviceName: /dev/sda
```
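A small sanity check before generating the ISO: every MAC address should appear exactly twice (once under `interfaces`, once under `networkConfig`), and no MAC should be shared between hosts:
```
# Count the occurrences of each MAC address in the file
grep -iEo '([0-9a-f]{2}:){5}[0-9a-f]{2}' ~/ocp4/agent-config.yaml | sort | uniq -c
```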
### 4.3. Generate the Installation ISO
1. Install the nmstate package first
```
sudo yum install -y nmstate
```
2. Back up the configuration files (the installer consumes them from the target directory when generating the ISO)
```
cp agent-config.yaml agent-config.yaml.bk
cp install-config.yaml install-config.yaml.bk
```
3. Then generate the ISO
```
openshift-install --dir ~/ocp4/ agent create image
```
Output:
```
INFO Configuration has 3 master replicas and 3 worker replicas
INFO The rendezvous host IP (node0 IP) is 192.168.11.23
INFO Extracting base ISO from release payload
INFO Base ISO obtained from release and cached at [/home/bigred/.cache/agent/image_cache/coreos-x86_64.iso]
INFO Consuming Install Config from target directory
INFO Consuming Agent Config from target directory
INFO Generated ISO at /home/bigred/ocp4/agent.x86_64.iso.
```
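Optionally confirm the ISO is in place before uploading it; the path matches the installer's final log line above:
```
ls -lh ~/ocp4/agent.x86_64.iso
```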
## 5. Install Red Hat OpenShift
### 5.1. Installation Steps
1. Upload the ISO to the virtualization platform and attach it to each VM
2. Boot each node from the ISO on the virtualization platform

3. Track and verify the installation progress (run the following commands on the bastion host)
**Check which nodes have finished installing and need to reboot**:
```
openshift-install --dir ~/ocp4/ agent wait-for bootstrap-complete --log-level=debug
```
:::danger
**Note! The command above tells you when a VM has finished writing everything to its disk and is entering the automatic-reboot phase. After you see such a message, the host will reboot on its own within roughly 30 seconds to 2 minutes; make sure the VM then boots from its hard disk, not from the ISO again.**
:::
Output of a successful installation:
```
...
INFO Host: master-3, reached installation stage Writing image to disk: 100%
INFO Host: worker-3, reached installation stage Waiting for control plane
INFO Bootstrap Kube API Initialized
INFO Host: master-1, reached installation stage Waiting for control plane: Waiting for masters to join bootstrap control plane
INFO Uploaded logs for host master-2 cluster 84a2dfac-6f61-4de2-93e8-185d5e342f02
INFO Host: master-2, reached installation stage Rebooting
INFO Host: master-3, reached installation stage Rebooting
INFO Host: master-1, reached installation stage Waiting for bootkube
INFO Host: master-3, reached installation stage Done
INFO Node master-2 has been rebooted 1 times before completing installation
INFO Node master-3 has been rebooted 1 times before completing installation
INFO Host: worker-2, reached installation stage Rebooting
INFO Host: worker-1, reached installation stage Rebooting
INFO Host: master-1, reached installation stage Waiting for bootkube: waiting for ETCD bootstrap to be complete
INFO Bootstrap configMap status is complete
INFO Bootstrap is complete
INFO cluster bootstrap is complete
```
4. Confirm that, after the reboots, OpenShift finishes installing automatically
```
openshift-install --dir ~/ocp4 agent wait-for install-complete --log-level=debug
```
Output:
```
...
INFO Cluster is installed
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run
INFO export KUBECONFIG=/home/bbg/ocp4/auth/kubeconfig
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.topgun.kubeantony.com
INFO Login to the console with user: "kubeadmin", and password: "JyAcY-VDp6D-DYdHp-T4Twr"
```
5. Set up the kubeconfig
```
mkdir ~/.kube && \
cp ~/ocp4/auth/kubeconfig ~/.kube/config
```
6. Configure Bash completion for the oc command for the current user
```
oc completion bash > ~/.oc_completion.sh
echo "source ~/.oc_completion.sh" >> ~/.bashrc
source ~/.bashrc
```
7. Check the cluster node status
```
oc get nodes
```
Output:
```
NAME STATUS ROLES AGE VERSION
master-1 Ready control-plane,master 33m v1.32.7
master-2 Ready control-plane,master 46m v1.32.7
master-3 Ready control-plane,master 47m v1.32.7
worker-1 Ready worker 40m v1.32.7
worker-2 Ready worker 37m v1.32.7
worker-3 Ready worker 41m v1.32.7
```
8. Check whether all of the cluster's core components are healthy
```
oc get co
```
Output:
```
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
authentication 4.19.10 True False False 28m
baremetal 4.19.10 True False False 45m
cloud-controller-manager 4.19.10 True False False 47m
cloud-credential 4.19.10 True False False 48m
cluster-autoscaler 4.19.10 True False False 45m
config-operator 4.19.10 True False False 46m
console 4.19.10 True False False 32m
control-plane-machine-set 4.19.10 True False False 45m
csi-snapshot-controller 4.19.10 True False False 45m
dns 4.19.10 True False False 45m
etcd 4.19.10 True False False 43m
image-registry 4.19.10 True False False 35m
ingress 4.19.10 True False False 37m
insights 4.19.10 True False False 45m
kube-apiserver 4.19.10 True False False 40m
kube-controller-manager 4.19.10 True False False 41m
kube-scheduler 4.19.10 True False False 43m
kube-storage-version-migrator 4.19.10 True False False 46m
machine-api 4.19.10 True False False 45m
machine-approver 4.19.10 True False False 45m
machine-config 4.19.10 True False False 45m
marketplace 4.19.10 True False False 45m
monitoring 4.19.10 True False False 35m
network 4.19.10 True False False 46m
node-tuning 4.19.10 True False False 35m
olm 4.19.10 True False False 45m
openshift-apiserver 4.19.10 True False False 36m
openshift-controller-manager 4.19.10 True False False 41m
openshift-samples 4.19.10 True False False 31m
operator-lifecycle-manager 4.19.10 True False False 45m
operator-lifecycle-manager-catalog 4.19.10 True False False 45m
operator-lifecycle-manager-packageserver 4.19.10 True False False 36m
service-ca 4.19.10 True False False 46m
storage 4.19.10 True False False 45m
```
### 5.2. Required Configuration After OpenShift Finishes Installing
1. Disable the default OperatorHub sources so that the custom mirror sources are used
```
oc patch OperatorHub cluster --type json \
-p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
```
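To confirm the patch took effect, the field should now report `true`:
```
oc get operatorhub cluster -o jsonpath='{.spec.disableAllDefaultSources}{"\n"}'
```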
2. List the files generated by the `oc-mirror` run
```
ls -l $(ls -td ~/ocp4/oc-mirror-workspace/results-*/ | head -n 1)
```
Result:
```
total 60
-rwxr-xr-x. 1 bigred bigred 234 Sep 15 17:07 catalogSource-cs-redhat-operator-index.yaml
drwxr-xr-x. 2 bigred bigred 6 Sep 15 17:01 charts
-rwxr-xr-x. 1 bigred bigred 1407 Sep 15 17:07 imageContentSourcePolicy.yaml
-rw-r--r--. 1 bigred bigred 49707 Sep 15 17:07 mapping.txt
drwxr-xr-x. 2 bigred bigred 52 Sep 15 17:07 release-signatures
```
3. Apply the ImageContentSourcePolicy so image requests are redirected to the local mirror registry
```
oc create -f $(ls -td ~/ocp4/oc-mirror-workspace/results-*/ | head -n 1)imageContentSourcePolicy.yaml
```
4. Create the new CatalogSource so Operator Lifecycle Manager (OLM) can find Operators in the local mirror registry
```
oc create -f $(ls -td ~/ocp4/oc-mirror-workspace/results-*/ | head -n 1)/catalogSource-cs-redhat-operator-index.yaml
```
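A follow-up check that the mirrored catalog is being served (readiness can take a minute or two, and the exact output varies):
```
# The new CatalogSource should be listed in the marketplace namespace
oc get catalogsource -n openshift-marketplace
# Once its pod is ready, packages from the mirrored catalog become visible
oc get packagemanifests -n openshift-marketplace | head
```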
5. Export the cluster-wide pull secret to a `.dockerconfigjson` file in the current directory
```
oc extract secret/pull-secret -n openshift-config --confirm --to=.
```
6. Remove the `cloud.openshift.com` credentials from the pull secret
```
jq 'del(.auths["cloud.openshift.com"])' .dockerconfigjson > .new-dockerconfigjson
```
7. Update the cluster pull secret with the edited content (with `cloud.openshift.com` removed)
```
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.new-dockerconfigjson
```
8. Restart the Insights Operator so the new settings take effect
```
oc -n openshift-insights rollout restart deployment insights-operator
```
9. Check the Insights Operator pod status to confirm the restart succeeded
```
oc get pod -n openshift-insights -l app=insights-operator
```
Result:
```
NAME READY STATUS RESTARTS AGE
insights-operator-58bd6cc4c7-lpc2q 1/1 Running 0 5s
```
10. Check that the Insights Cluster Operator (CO) is healthy
```
oc get co insights
```
Result:
```
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
insights 4.19.10 True False False 51m
```
## 6. References
- [Deploying Red Hat OpenShift Operators in a disconnected environment - Red Hat blog](https://www.redhat.com/en/blog/deploying-red-hat-openshift-operators-disconnected-environment)
- [Installing OpenShift in a disconnected network, step-by-step](https://hackmd.io/@johnsimcall/Sk1gG5G6o)
- [How to install Operators in OpenShift on an Air-Gapped environment | oc-mirror](https://medium.com/@santivelazquez/how-to-install-operators-in-openshift-on-an-air-gapped-environment-oc-mirror-480204e06ac6)
- [Disconnected environments - RedHat Docs](https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html-single/disconnected_environments/index#connected-to-disconnected)