# OpenShift 4.10 Installation Guide
Background
---
Digital transformation pushes enterprises to adjust software features more efficiently and more frequently in response to the market. Google introduced Kubernetes to the world in 2014 and released v1.0 in 2015. Kubernetes is also called k8s (there are 8 letters between the k and the s). Kubernetes blurred the boundary between software development and IT operations: nearly everything IT needs, from infrastructure and networking to software development and deployment pipelines, is brought together on one platform, giving both teams a common ground and letting them ship software that meets customer needs faster.
Why OpenShift
---
After Google donated Kubernetes to the CNCF as an open-source project, the market filled up with distributions and variants. The concepts are the same, but adopting one can mean spending a lot of time installing and integrating components; when you hit a problem, you can only report it to the community and wait for the next release, and once the fix ships you have to upgrade and redeploy all over again.
What is OpenShift? OpenShift (OCP for short) is Red Hat's product: a Kubernetes platform that Red Hat validates, integrates, and backs with enterprise-grade support. With an OpenShift subscription, an enterprise can adopt a container platform faster and with less effort.
Why choose OpenShift? The reason is simple: OpenShift ships the components a k8s platform needs already validated and tested, which saves the time you would otherwise spend validating them yourself and gives you more stability and security than a self-assembled open-source stack. OCP also adds plenty of convenient tooling: a GUI, the richer oc CLI, and OperatorHub. Compare it with the open-source world, where you install every tool by hand; in OCP, installing a tool from OperatorHub is as simple as a one-click install from the iPhone App Store.
Now for the part you probably care about more: the installation itself.
# OpenShift 4.10
## 1. Overview
This guide uses a disconnected, user-provisioned infrastructure (UPI) installation. That is, it assumes the customer's production environment cannot reach the internet for network and security reasons, so the installation packages must first be downloaded in a connected environment and then moved into the installation environment. It also relies on some existing infrastructure provided and operated by the customer, for example operating system installation, DNS, and load balancer configuration.
Alternatively, you can choose full-stack automation, installer-provisioned infrastructure (IPI), which configures the items above for you during installation and needs only minimal intervention; it is simpler and faster.
Besides on-premises private clouds and bare metal, OpenShift can also be deployed on public clouds such as Azure, AWS, and IBM Cloud.
## 2. Prerequisites
* This guide installs OCP 4.10.4
* Note that the oc CLI version and the installer version must match
* A load balancer and DNS are both required; use the customer's existing services or build your own on the bastion
* https://console.redhat.com/openshift/install/metal/user-provisioned
* https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.10.4/
* https://console.redhat.com/openshift/downloads#tool-mirror-registry
* Tip: copy the links above and change the version in the URL to download the release you want (see the sketch below)
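For reference, a minimal sketch of fetching the 4.10.4 client and installer tarballs from the public mirror on an internet-connected machine (file names follow the mirror URL above; the pull secret and the mirror-registry tarball have to be downloaded from console.redhat.com after logging in):
```bash=
# Run on a machine with internet access, then move the files to the bastion
BASE=https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.10.4
curl -LO ${BASE}/openshift-client-linux-4.10.4.tar.gz
curl -LO ${BASE}/openshift-install-linux-4.10.4.tar.gz
```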
| Item | Version | Purpose | Used on |
| -------- | -------- | -------- |-------- |
| RHEL | 8.5 | Operating system for the bastion | bastion |
| RHCOS | 4.10.3 | Operating system for the cluster nodes | master, worker, infra, etc. |
| pull secret | x | Required when pulling images | bastion |
| oc cli | 4.10.4 | OpenShift CLI | bastion |
| openshift-install | 4.10.4 | Installer required to deploy OpenShift | bastion |
| mirror registry | x | Hosts an image registry inside the disconnected environment | bastion (or wherever needed) |
## 2.1 Terminology

| Term | Purpose | Notes |
| -------- | -------- | -------- |
| Cluster installer / bastion | Jump host from which the installation is launched; in this lab it also hosts the load balancer, DNS, and web server | |
| bootstrap | Key role during installation: the installer first brings the cluster up here, then the control plane is scaled out to the master nodes | |
| Master node | Control-plane nodes that coordinate cluster resources; exactly three are required | |
| infra node | Optional; dedicated nodes for platform services such as logging | |
| Worker / compute node | Where applications run | |
| ignition file | Files the installer generates from the install-config YAML; the nodes consume them when OCP is installed | |
| DNS | Domain Name System: name-to-IP and IP-to-name resolution | |
| HAProxy | Load balancer | |
## 2.2 Network planning and configuration
### 2.2.1 Subnets
| Purpose | CIDR |
| -------- | -------- |
| OCP subnet CIDR | 172.16.36.0/24 |
| OCP pod network (CNI) | 10.129.0.0/16 |
| OCP service network | 172.31.0.0/16 |
### 2.2.2 Disable SELinux
This is a test environment, so SELinux is disabled. For a customer's production environment, discuss it with the customer first; if SELinux is not configured correctly, HAProxy may fail to start.
```bash=
vi /etc/selinux/config
SELINUX=disabled
```
```bash=
setenforce 0
```
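A quick check that the change took effect (`setenforce 0` switches the running system to Permissive; the config-file change only applies after a reboot):
```bash=
getenforce    # expect Permissive now, or Disabled after a reboot
sestatus      # shows both the current mode and the mode from /etc/selinux/config
```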
### 2.2.3 DNS configuration
Set up and run DNS on the bastion.
1. Install BIND
```bash=
yum install bind bind-utils -y
```
2. Configure BIND
```bash=
vim /etc/named.conf
```
```bash=
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
options {
    listen-on port 53 { any; };
#   listen-on-v6 port 53 { ::1; };
    directory           "/var/named";
    dump-file           "/var/named/data/cache_dump.db";
    statistics-file     "/var/named/data/named_stats.txt";
    memstatistics-file  "/var/named/data/named_mem_stats.txt";
    secroots-file       "/var/named/data/named.secroots";
    recursing-file      "/var/named/data/named.recursing";
    allow-query         { any; };

    /*
     - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
     - If you are building a RECURSIVE (caching) DNS server, you need to enable
       recursion.
     - If your recursive DNS server has a public IP address, you MUST enable access
       control to limit queries to your legitimate users. Failing to do so will
       cause your server to become part of large scale DNS amplification
       attacks. Implementing BCP38 within your network would greatly
       reduce such attack surface
    */
    recursion yes;

#   dnssec-enable yes;
    dnssec-enable no;
#   dnssec-validation yes;
    dnssec-validation no;

    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.root.key";

    managed-keys-directory "/var/named/dynamic";

    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";

    /* https://fedoraproject.org/wiki/Changes/CryptoPolicy */
#   include "/etc/crypto-policies/back-ends/bind.config";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

zone "." IN {
    type hint;
    file "named.ca";
};

zone "ocp4.lab.com" IN {
    type master;
    file "named.ocp4.lab.com";
};

zone "36.16.172.in-addr.arpa" IN {
    type master;
    file "rev.36.16.172";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
```
3. Create the forward zone records
```bash=
vim /var/named/named.ocp4.lab.com
```
```bash=
$TTL 1D
@           IN  SOA     @ bastion.ocp4.lab.com. (
                        2020040819  ; serial
                        3H          ; refresh
                        15M         ; retry
                        1W          ; expire
                        1D )        ; minimum
@           IN  NS      bastion.ocp4.lab.com.
@           IN  A       172.16.36.10
bastion     IN  A       172.16.36.10
registry    IN  A       172.16.36.10
api         IN  A       172.16.36.10
api-int     IN  A       172.16.36.10
*.apps      IN  A       172.16.36.10
bootstrap   IN  A       172.16.36.11
master-1    IN  A       172.16.36.21
master-2    IN  A       172.16.36.22
master-3    IN  A       172.16.36.23
infra-1     IN  A       172.16.36.31
infra-2     IN  A       172.16.36.32
worker-1    IN  A       172.16.36.41
worker-2    IN  A       172.16.36.42
worker-3    IN  A       172.16.36.43
```
4. Create the reverse zone records
```bash=
vim /var/named/rev.36.16.172
```
```bash=
$TTL 1D
@       IN  SOA     @ bastion.ocp4.lab.com. (
                    2020040819  ; serial
                    1D          ; refresh
                    1H          ; retry
                    1W          ; expire
                    3H )        ; minimum
@       IN  NS      bastion.ocp4.lab.com.
36.16.172.in-addr.arpa.     IN  PTR     bastion.ocp4.lab.com.
10      IN  PTR     bastion.ocp4.lab.com.
10      IN  PTR     registry.ocp4.lab.com.
10      IN  PTR     api.ocp4.lab.com.
10      IN  PTR     api-int.ocp4.lab.com.
11      IN  PTR     bootstrap.ocp4.lab.com.
21      IN  PTR     master-1.ocp4.lab.com.
22      IN  PTR     master-2.ocp4.lab.com.
23      IN  PTR     master-3.ocp4.lab.com.
31      IN  PTR     infra-1.ocp4.lab.com.
32      IN  PTR     infra-2.ocp4.lab.com.
41      IN  PTR     worker-1.ocp4.lab.com.
42      IN  PTR     worker-2.ocp4.lab.com.
43      IN  PTR     worker-3.ocp4.lab.com.
```
5. Fix file ownership and permissions
```bash=
chgrp named /var/named/named.ocp4.lab.com && \
chmod 640 /var/named/named.ocp4.lab.com && \
chgrp named /var/named/rev.36.16.172 && \
chmod 640 /var/named/rev.36.16.172
```
6. Open the firewall and start the DNS service
```bash=
firewall-cmd --add-service=dns --permanent
firewall-cmd --reload
systemctl restart named
```
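Before moving on, it is worth validating the configuration and resolution; a sketch using the standard BIND tools:
```bash=
# Syntax-check the main config and both zone files
named-checkconf /etc/named.conf
named-checkzone ocp4.lab.com /var/named/named.ocp4.lab.com
named-checkzone 36.16.172.in-addr.arpa /var/named/rev.36.16.172

# Forward, wildcard, and reverse lookups against the local BIND
dig +short @127.0.0.1 api.ocp4.lab.com
dig +short @127.0.0.1 test.apps.ocp4.lab.com
dig +short @127.0.0.1 -x 172.16.36.21
```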
### 2.2.4 HAProxy configuration
1. Install HAProxy
```bash=
yum install haproxy -y
```
2. Configure HAProxy
```bash=
vim /etc/haproxy/haproxy.cfg
```
```bash=
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

listen stats                    # Define a listen section called "stats"
    bind *:8081                 # Listen on localhost:<port-number>
    mode http
    stats enable                # Enable stats page
    stats hide-version          # Hide HAProxy version
    stats uri /haproxy_stats    # Stats URI

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
frontend openshift-api-server
    bind *:6443
    default_backend openshift-api-server
    mode tcp
    option tcplog

backend openshift-api-server
    balance source
    mode tcp
    server master-1 172.16.36.21:6443 check
    server master-2 172.16.36.22:6443 check
    server master-3 172.16.36.23:6443 check
    server bootstrap 172.16.36.11:6443 check

#---------------------------------------------------------------------
frontend machine-config-server
    bind *:22623
    default_backend machine-config-server
    mode tcp
    option tcplog

backend machine-config-server
    balance source
    mode tcp
    server master-1 172.16.36.21:22623 check
    server master-2 172.16.36.22:22623 check
    server master-3 172.16.36.23:22623 check
    server bootstrap 172.16.36.11:22623 check

#---------------------------------------------------------------------
frontend ingress-http
    bind *:80
    default_backend ingress-http
    mode tcp
    option tcplog

backend ingress-http
    balance source
    mode tcp
    server infra-1 172.16.36.31:80 check
    server infra-2 172.16.36.32:80 check

#---------------------------------------------------------------------
frontend ingress-https
    bind *:443
    default_backend ingress-https
    mode tcp
    option tcplog

backend ingress-https
    balance source
    mode tcp
    server infra-1 172.16.36.31:443 check
    server infra-2 172.16.36.32:443 check
```
3. Configure the firewall
```bash=
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=22623/tcp
firewall-cmd --permanent --add-port=8081/tcp   # HAProxy stats page (step 5)
firewall-cmd --reload
```
4. Start HAProxy
```bash=
systemctl restart haproxy
```
5. Check in a browser that HAProxy came up correctly
Open a browser and go to http://<bastion IP>:8081/haproxy_stats
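It can also be checked from the shell; a sketch that validates the config file and confirms the listeners are up:
```bash=
haproxy -c -f /etc/haproxy/haproxy.cfg                  # configuration syntax check
ss -tlnp | grep -E ':(80|443|6443|22623|8081)\b'        # frontends and stats page listening
curl -s http://127.0.0.1:8081/haproxy_stats | head -5   # stats page reachable locally
```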
### 2.2.5 Configure httpd
The master/infra/worker node install files will later be fetched with curl from this self-hosted web service.
1. Install httpd
```bash=
yum install httpd -y
```
2. Change the listen port
```bash=
vi /etc/httpd/conf/httpd.conf
```
Change the default port 80 to 8080 (port 80 on the bastion is already taken by the HAProxy ingress frontend).
```bash=
Listen 8080
```
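The install scripts and ignition files will later be served from http://172.16.36.10:8080/ignition/..., so the service also needs to be started and reachable. A minimal sketch, assuming the default document root /var/www/html:
```bash=
mkdir -p /var/www/html/ignition              # directory the install URLs point at
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload
systemctl enable --now httpd
curl -I http://127.0.0.1:8080/               # expect an HTTP response header
```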
### 2.2.6 Install the OpenShift CLI (oc) on the bastion
```bash=
tar xvzf openshift-client-linux-4.10.4.tar.gz
echo $PATH
sudo cp ./{oc,kubectl} /usr/local/bin/
oc version
```
### 2.2.7 Install openshift-install on the bastion
```bash=
tar xvzf openshift-install-linux-4.10.4.tar.gz
echo $PATH
sudo cp openshift-install /usr/local/bin/
openshift-install version
```
### 2.2.8 Generate an SSH key on the bastion
```bash=
ssh-keygen     # press Enter through all the prompts
ssh-copy-id    # optional: copies the key so later SSH logins do not need a password
```
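For reference, a non-interactive equivalent (key type, size, and comment are arbitrary choices):
```bash=
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa -C "ocp4-bastion"
cat ~/.ssh/id_rsa.pub    # this public key is pasted into install-config.yaml later as sshKey
```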
## 3. Install the mirror registry (Quay)
### 3.1 Install the mirror registry
* A required step for a disconnected OpenShift installation: the mirror registry is installed first, then the release content is downloaded into it so the packages are available when the OpenShift cluster is deployed later.
* It is a comparatively simple one-line command, although a few authentication settings still have to be completed afterwards.
```bash=
./mirror-registry install --quayHostname $(hostname -f) --initPassword password --ssh-key <~/.ssh/my_id_rsa>
```
When the installation finishes, it prints the connection details, credentials, and other information needed later, for example:
```bash=
INFO[2022-06-05 11:04:55] Quay installed successfully, permanent data is stored in /etc/quay-install
INFO[2022-06-05 11:04:55] Quay is available at https://bastion.ocp4.lab.com:8443 with credentials (init, password)
```
```yaml=
imageContentSources:
- mirrors:
  - bastion.ocp4.lab.com:8443/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - bastion.ocp4.lab.com:8443/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
```
Encode the (init, password) credentials with base64:
```bash=
$ echo -n 'init:password' | base64 -w0
aW5xxxxxxxxxxxxxZWQ= $
```
Next, pretty-print the pull secret prepared earlier into JSON with the following command, then edit it:
```bash=
cat pull-secret.txt | jq . > pull-secret.json
```
Add the base64-encoded credentials from the step above to pull-secret.json in the format below.
Watch the indentation and punctuation carefully:
```yaml=
"auths": {
"bastion.ocp4.lab.com:8443": {
"auth": "aW5xxxxxxxxxxxxxZWQ="
}
```
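If you prefer not to edit the JSON by hand, the same entry can be merged with jq; a sketch, reusing the hostname and the base64 string generated above:
```bash=
AUTH=$(echo -n 'init:password' | base64 -w0)
jq --arg reg "bastion.ocp4.lab.com:8443" --arg auth "$AUTH" \
   '.auths[$reg] = {"auth": $auth}' pull-secret.json > pull-secret.json.new \
  && mv pull-secret.json.new pull-secret.json
```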
Installing the mirror registry also generates a CA certificate, which must be added to the system trust store:
```bash=
$ cp /etc/quay-install/quay-rootCA/rootCA.pem /usr/share/pki/ca-trust-source/anchors/rootCA.cert
$ update-ca-trust
```
Verify that you can log in:
```bash=
podman login -u init --authfile pull-secret.json bastion.ocp4.lab.com:8443
Password:
Login Succeeded!
```
### 3.2 Download the packages needed for installation
#### 3.2.1 Prepare the environment
Create the backup directory:
```bash=
mkdir -p $HOME/openshift4/registry/images
```
Create an environment-variable file:
```bash=
vim upgrade-env
```
```bash=
export OCP_RELEASE=$(oc version -o json --client | jq -r '.releaseClientVersion')
export LOCAL_REGISTRY="bastion.ocp4.lab.com:8443"
export LOCAL_REPOSITORY="ocp4/openshift4"
export PRODUCT_REPO="openshift-release-dev"
export LOCAL_SECRET_JSON=$HOME/mirror-registry/pull-secret.json
export RELEASE_NAME="ocp-release"
export ARCHITECTURE=x86_64
export REMOVABLE_MEDIA_PATH="$HOME/openshift4/registry/images"
```
Load the environment variables:
```bash=
source upgrade-env
```
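A quick sanity check that the variables resolved (OCP_RELEASE in particular depends on the oc client installed earlier):
```bash=
echo "release=${OCP_RELEASE} registry=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}"
# expect something like: release=4.10.4 registry=bastion.ocp4.lab.com:8443/ocp4/openshift4
```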
---
#### 3.2.2 Start mirroring
* If this machine cannot reach the internet, prepare a USB drive and follow 3.2.2.1; if it can, just run the single command in 3.2.2.2.
##### 3.2.2.1 Without internet access
Plug in the USB drive and use oc adm to list the release content first (dry run):
```bash=
oc adm release mirror -a ${LOCAL_SECRET_JSON} --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} --dry-run
```
Start downloading to the USB drive:
```bash=
oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}
```
Remove the USB drive, attach it in the production environment, and upload:
```bash=
oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}
```
##### 3.2.2.2 With internet access
```bash=
oc adm release mirror -a ${LOCAL_SECRET_JSON} --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}
```
The mirror registry is now installed and populated. Log in with the init account in a browser and check that the ocp4 repository exists:
https://bastion.ocp4.lab.com:8443
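The same check can be done from the shell through the standard registry API; a sketch using the init credentials and the CA trusted earlier:
```bash=
curl -u init:password https://bastion.ocp4.lab.com:8443/v2/_catalog
# expected output, roughly: {"repositories":["ocp4/openshift4"]}
```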

---
## 4. OCP4 installation
### 4.1 Create the ignition files
Remember the openshift-install binary installed earlier? It is now used to generate the ignition files.
1. Create the ignition working directory
```bash=
mkdir -p ~/openshift4/ignition
cd ~/openshift4/ignition
```
2. Edit install-config.yaml
```bash=
vim install-config.yaml
```
Add the mirror registry, SSH key, and CA information to it. The completed file looks like this:
```yaml=
apiVersion: v1
baseDomain: lab.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp4
networking:
  clusterNetwork:
  - cidr: 10.129.0.0/16
    hostPrefix: 24
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.31.0.0/16
platform:
  none: {}
# The imageContentSources block below is the output printed when the mirror registry was populated
imageContentSources:
- mirrors:
  - bastion.ocp4.lab.com:8443/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - bastion.ocp4.lab.com:8443/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
pullSecret: '<PULL_SECRET - the pull-secret.json with the init entry added, converted back to a single line>'
sshKey: '<SSH_PUB_KEY>'
additionalTrustBundle: |
  <REGISTRY_CA_PEM>
```
Convert pull-secret.json back into a single line:
```bash=
jq -c < pull-secret.json
```
cat the following files and paste their contents into the corresponding fields above:
* cat ~/.ssh/id_rsa.pub
* cat /etc/quay-install/quay-rootCA/rootCA.pem
Back up install-config.yaml if you want to keep a copy, because the next steps consume it and convert it into other files.
```bash=
cp install-config.yaml install-config.yaml.bak
```
3. Generate the manifests
```bash=
openshift-install create manifests --dir=$HOME/openshift4/ignition
```
4. Prevent application workloads from being scheduled on the masters, using sed
```bash=
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml
```
5. Generate the ignition configs
```bash=
openshift-install create ignition-configs --dir=$HOME/openshift4/ignition
```
6. Write the install scripts. They will be placed on the web server so the bootstrap, master, and worker nodes can fetch them during installation (see the sketch after the three scripts for copying everything into the web root).
* Bootstrap
```bash=
vim bootstrap-install
```
```bash=
#!/bin/bash
sudo coreos-installer install /dev/sda \
--insecure-ignition \
--ignition-url=http://172.16.36.10:8080/ignition/bootstrap.ign \
--firstboot-args 'rd.neednet=1' \
--copy-network
```
* Master
```bash=
vim master-install
```
```bash=
#!/bin/bash
sudo coreos-installer install /dev/sda \
--insecure-ignition \
--ignition-url=http://172.16.36.10:8080/ignition/master.ign \
--firstboot-args 'rd.neednet=1' \
--copy-network
```
* Worker
```bash=
vim worker-install
```
```bash=
#!/bin/bash
sudo coreos-installer install /dev/sda \
--insecure-ignition \
--ignition-url=http://172.16.36.10:8080/ignition/worker.ign \
--firstboot-args 'rd.neednet=1' \
--copy-network
```
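As mentioned in step 6, the scripts above fetch their .ign files over HTTP, so both the scripts and the ignition files generated in step 5 must be copied into the web server's document root. A sketch, assuming the default /var/www/html and the ignition directory created for httpd earlier:
```bash=
cp $HOME/openshift4/ignition/*.ign /var/www/html/ignition/
cp bootstrap-install master-install worker-install /var/www/html/ignition/
chmod -R a+r /var/www/html/ignition/                                    # make sure httpd can read the files
curl -s http://172.16.36.10:8080/ignition/bootstrap.ign | head -c 200   # quick reachability check
```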
7. Start installing OCP
1. My environment uses virtual machines on RHV. Provision the VMs first and boot them from the RHCOS ISO into the live operating system.
2. Use nmtui to configure the NIC, IP, DNS, and gateway, then bring the interface down and back up.
* Some environments instead run a DHCP server and bind MAC addresses so each node picks up a pre-assigned IP.
The next three steps are almost identical; just be careful not to use the wrong file. The order is bootstrap -> master -> worker.
Check that the network is working:
```bash=
# verify forward and reverse resolution with nslookup
ping 172.16.36.10
nslookup 172.16.36.10
nslookup bootstrap.ocp4.lab.com
```
If the network is fine, fetch the install script with curl and run it.
bootstrap
```bash=
curl http://172.16.36.10:8080/ignition/bootstrap-install --output bootstrap-install
chmod +x bootstrap-install
./bootstrap-install
```
master
```bash=
curl http://172.16.36.10:8080/ignition/master-install --output master-install
chmod +x master-install
./master-install
```
worker
```bash=
curl http://172.16.36.10:8080/ignition/worker-install --output worker-install
chmod +x worker-install
./worker-install
```
After the install finishes, check the disk and reboot:
```bash=
lsblk
```
Once every node checks out, reboot them all at roughly the same time, in order (reboot directly from the VM manager and boot from the hard disk).
---
8. The nodes then install in turn. You can also SSH into individual nodes to check on their status:
```bash=
ssh core@bootstrap.ocp4.lab.com
sudo crictl pods
journalctl -b -f -u release-image.service -u bootkube.service
```
Or check from the bastion with:
```bash=
openshift-install wait-for bootstrap-complete --dir=$HOME/openshift4/ignition --log-level debug
```

When the installation completes successfully (as in the screenshot above), the installer prints the web console URL and login credentials; the kubeadmin password can also be found later in ~/openshift4/ignition/auth/kubeadmin-password.
Back on the bastion, approve the nodes other than the masters so they can join the cluster.
Export the kubeconfig environment variable:
```bash=
export KUBECONFIG=$HOME/openshift4/ignition/auth/kubeconfig
```
Check which nodes are waiting to join:
```bash=
oc get csr
```
Approve the pending nodes:
```bash=
oc adm certificate approve <CSR_NAME>
```
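When several nodes join at once, approving CSRs one by one gets tedious. A commonly used pattern (not specific to this guide) approves everything still pending; run it more than once, because node CSRs arrive in two rounds (client then server certificates):
```bash=
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve
```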
9. Check the environment
```bash=
oc get node
oc get clusterversion
```
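It also helps to watch the cluster operators settle and to let the installer confirm completion; a sketch:
```bash=
oc get clusteroperators    # healthy when every operator is Available=True and Degraded=False
openshift-install wait-for install-complete --dir=$HOME/openshift4/ignition --log-level debug
```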
10. Open a browser and log in with the credentials.

---
Congratulations, the installation is complete!