# home lab OCP 4.11
## Bastion
### DNS server
``` bash=
$ dnf install -y bind bind-utils
$ vi /etc/named.conf
---
acl clients { 0.0.0.0/0; };
options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { any; };
        directory "/var/named";
...
        allow-query { localhost; clients; };
...
};
...
include "/etc/named/home.lab.zones";
---
$ vi /etc/named/home.lab.zones
---
//forward zone
zone "home.lab" IN {
        type master;
        file "home.lab.zone";
        allow-update { none; };
};
//reverse zone
zone "1.168.192.in-addr.arpa" IN {
        type master;
        file "1.168.192.zone";
};
---
$ vi /var/named/home.lab.zone
---
$TTL 86400
@       IN SOA dns.home.lab. admin.home.lab. (
                        42      ; serial
                        3H      ; refresh
                        15M     ; retry
                        1W      ; expiry
                        1D )    ; minimum
;
                IN A    192.168.1.100
                IN NS   dns.home.lab.
registry        IN A    192.168.1.100
dns             IN A    192.168.1.100
api.ocp4        IN A    192.168.1.100
api-int.ocp4    IN A    192.168.1.100
*.apps.ocp4     IN A    192.168.1.100
master-1.ocp4   IN A    192.168.1.101
master-2.ocp4   IN A    192.168.1.102
master-3.ocp4   IN A    192.168.1.103
bootstrap.ocp4  IN A    192.168.1.104
infra-1.ocp4    IN A    192.168.1.105
---
$ chown root:named /var/named/home.lab.zone
$ chmod 640 /var/named/home.lab.zone
$ vi /var/named/1.168.192.zone
---
$TTL 86400
@       IN SOA dns.home.lab. root.home.lab. (
                        1997022700      ; serial
                        28800           ; refresh
                        14400           ; retry
                        3600000         ; expire
                        86400           ; minimum
                        )
        IN NS dns.home.lab.
;
101     IN PTR master-1.ocp4.home.lab.
102     IN PTR master-2.ocp4.home.lab.
103     IN PTR master-3.ocp4.home.lab.
104     IN PTR bootstrap.ocp4.home.lab.
105     IN PTR infra-1.ocp4.home.lab.
---
$ chown root:named /var/named/1.168.192.zone
$ chmod 640 /var/named/1.168.192.zone
$ named-checkzone home.lab /var/named/home.lab.zone
$ named-checkzone 1.168.192.in-addr.arpa /var/named/1.168.192.zone
$ systemctl start named
$ systemctl enable named
$ firewall-cmd --add-service=dns --permanent
$ firewall-cmd --reload
```
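A quick check from the bastion confirms the new zones resolve; `dig` ships with the bind-utils package installed above, and the expected answers follow directly from the zone files.
``` bash=
$ dig +short @192.168.1.100 api.ocp4.home.lab          # 192.168.1.100
$ dig +short @192.168.1.100 test.apps.ocp4.home.lab    # wildcard *.apps.ocp4 -> 192.168.1.100
$ dig +short @192.168.1.100 -x 192.168.1.101           # master-1.ocp4.home.lab.
```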
### NTP server
``` bash=
$ vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (https://www.pool.ntp.org/join.html).
#server time.stdtime.gov.tw
#server tock.stdtime.gov.tw
...
# Allow NTP client access from local network.
allow 192.168.1.0/24
# Serve time even if not synchronized to a time source.
local stratum 3
...
$ systemctl enable chronyd --now
```
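`chronyc` (shipped with the chrony package) can confirm the bastion is serving time and which clients are syncing:
``` bash=
$ chronyc sources -v    # upstream sources used by the bastion
$ chronyc clients       # cluster nodes appear here once they start syncing
```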
### HAProxy
``` bash=
$ dnf install haproxy
$ semanage port --add --type http_port_t --proto tcp 22623
$ semanage port --add --type http_port_t --proto tcp 6443
$ semanage port -l | grep http_port_t
http_port_t tcp 6443, 22623, 80, 81, 443, 488, 8008, 8009, 8443, 9000
$ vi /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# https://www.haproxy.org/download/1.8/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    daemon

defaults
    mode                    http
    log                     global
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend stats
    bind *:9000
    mode http
    log global
    maxconn 10
    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    stats show-desc Stats for ocp4 cluster
    stats auth admin:ocp4
    stats uri /stats

frontend openshift-api-server
    bind *:6443
    default_backend openshift-api-server
    mode tcp
    option tcplog

backend openshift-api-server
    balance source
    mode tcp
    # Keep the bootstrap server uncommented during the initial installation;
    # comment it out again after bootstrap completes.
    server master-1 192.168.1.101:6443 check
    server master-2 192.168.1.102:6443 check
    server master-3 192.168.1.103:6443 check
    #server bootstrap 192.168.1.104:6443 check
#---------------------------------------------------------------------
frontend machine-config-server
    bind *:22623
    default_backend machine-config-server
    mode tcp
    option tcplog

backend machine-config-server
    balance source
    mode tcp
    server master-1 192.168.1.101:22623 check
    server master-2 192.168.1.102:22623 check
    server master-3 192.168.1.103:22623 check
    #server bootstrap 192.168.1.104:22623 check
#---------------------------------------------------------------------
frontend ingress-http
    bind *:80
    default_backend ingress-http
    mode tcp
    option tcplog

backend ingress-http
    balance source
    mode tcp
    server infra-1 192.168.1.105:80 check
    server infra-2 192.168.1.106:80 check
#---------------------------------------------------------------------
frontend ingress-https
    bind *:443
    default_backend ingress-https
    mode tcp
    option tcplog

backend ingress-https
    balance source
    mode tcp
    server infra-1 192.168.1.105:443 check
    server infra-2 192.168.1.106:443 check
```
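The configuration can be syntax-checked and the service enabled; haproxy is never started elsewhere in this guide, so this is needed before the cluster install:
``` bash=
$ haproxy -c -f /etc/haproxy/haproxy.cfg   # config syntax check
$ systemctl enable haproxy --now
$ systemctl status haproxy
```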
### Registry
``` bash=
$ dnf install -y httpd-tools #htpasswd
$ export MAIN_DIR=/opt/registry
$ mkdir -p $MAIN_DIR/{auth,certs,data,conf}
$ htpasswd -bBc $MAIN_DIR/auth/htpasswd admin redhat
$ semanage fcontext -a -t container_file_t '/opt/registry/(auth|certs|data|conf)(/.*)?'
$ semanage fcontext -l | grep '/opt/registry'
$ restorecon -Rv /opt/registry
$ podman run -d -p 5000:5000 --restart=always --name registry \
-v $MAIN_DIR/conf:/etc/docker/registry \
-v $MAIN_DIR/data:/var/lib/registry \
-v $MAIN_DIR/auth:/auth \
-e "REGISTRY_STORAGE_DELETE_ENABLED=true"
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v $MAIN_DIR/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
registry:2
$ podman generate systemd --new --files --name registry
$ cp -Z container-registry.service /etc/systemd/system
$ systemctl daemon-reload
$ systemctl enable --now container-registry.service
```
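The certs directory above must hold registry.crt/registry.key before the container starts. A minimal self-signed example follows; the CN/SAN registry.home.lab and the root_CA.crt anchor path are assumptions that match the names used later in this guide.
``` bash=
# Assumption: a self-signed certificate for registry.home.lab, trusted locally and reused later as root_CA.crt
$ openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout $MAIN_DIR/certs/registry.key \
    -x509 -days 3650 \
    -subj "/CN=registry.home.lab" \
    -addext "subjectAltName = DNS:registry.home.lab" \
    -out $MAIN_DIR/certs/registry.crt
$ cp $MAIN_DIR/certs/registry.crt /etc/pki/ca-trust/source/anchors/root_CA.crt
$ update-ca-trust
$ curl -u admin:redhat https://registry.home.lab:5000/v2/_catalog   # expect {"repositories":[]}
```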
### web server
``` bash=
$ dnf install httpd
$ vi /etc/httpd/conf/httpd.conf
...
Listen 8080
...
$ systemctl enable httpd --now
```
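A quick check that httpd is answering on the alternate port (192.168.1.100:8080 is the BASTION_IP used later in install.sh):
``` bash=
$ curl -I http://192.168.1.100:8080/
```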
### NFS server
#### Install the NFS packages
```shell=
$ dnf install -y nfs-utils rpcbind
```
#### Enable the NFS services
```shell=
$ systemctl enable rpcbind --now
$ systemctl enable nfs-server --now
$ systemctl status rpcbind
$ systemctl status nfs-server
```
#### Create the directory and set permissions
Create the NFS share directory
```shell=
$ mkdir /opt/nfs
$ chmod -R 755 /opt/nfs
$ chown -R 1000:1000 /opt/nfs
```
#### Configure the NFS export
``` bash=
$ vi /etc/exports
/opt/nfs 192.168.1.0/24(rw,sync,no_root_squash,all_squash,anonuid=1000,anongid=1000)
```
Apply the export with the following commands
```shell=
$ exportfs -arv
$ showmount -e localhost
```
#### Configure SELinux
```shell=
$ setsebool -P nfs_export_all_rw on
$ setsebool -P nfs_export_all_ro on
```
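A quick client-side test from any host in 192.168.1.0/24 (the /mnt mount point is only an example):
```shell=
$ showmount -e 192.168.1.100
$ mount -t nfs 192.168.1.100:/opt/nfs /mnt
$ touch /mnt/test && ls -ln /mnt/test   # owned by 1000:1000 because of anonuid/anongid
$ umount /mnt
```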
### firewall
``` bash=
$ firewall-cmd --add-service={dns,http,https,ntp,mountd,rpc-bind,nfs} --permanent
$ firewall-cmd --add-port={22623,6443,8080,5000,9000}/tcp --permanent
$ firewall-cmd --reload
$ firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: ens192
sources:
services: dhcpv6-client dns http https mountd nfs ntp rpc-bind ssh
ports: 22623/tcp 6443/tcp 8080/tcp 5000/tcp 9000/tcp
protocols:
forward: yes
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
```
## OCP file preparation
### Mirror the Red Hat OCP images to a local directory
> Requires pull-secret.json and an internet-connected machine with oc installed
> rhcos.raw.gz
``` bash=
## check the release info
$ oc adm release info -a pull-secret.json quay.io/openshift-release-dev/ocp-release:4.11.21-x86_64
$ mkdir mirror
$ oc adm -a pull-secret.json release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.21-x86_64 --to-dir=mirror/
$ tar zcvf mirror.tar.gz mirror # produces mirror.tar.gz
```
### Install the oc client on the Bastion
> openshift-client-linux-4.11.21.tar.gz
> openshift-install-linux-4.11.21.tar.gz
> oc-mirror.tar.gz
> pull-secret.txt
``` bash=
$ tar xvzf openshift-client-linux-4.11.21.tar.gz
$ tar xvzf openshift-install-linux-4.11.21.tar.gz
$ tar xvzf oc-mirror.tar.gz
$ install openshift-install /usr/local/bin/openshift-install
$ install oc /usr/local/bin/oc
$ install kubectl /usr/local/bin/kubectl
$ install oc-mirror /usr/local/bin/oc-mirror
## Set up bash completion for tab completion; log out of the shell and back in afterwards.
$ dnf install bash-completion
$ oc completion bash > /etc/bash_completion.d/oc
$ kubectl completion bash > /etc/bash_completion.d/kubectl
$ oc-mirror completion bash > /etc/bash_completion.d/oc-mirror
```
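The installed client versions can be confirmed before continuing:
``` bash=
$ oc version --client
$ openshift-install version
$ oc-mirror version
```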
### Push the local OCP images from the Bastion to the private registry
> Requires mirror.tar.gz and pull-secret.json
``` bash=
## base64-encode the private registry user:pass
$ echo -n admin:redhat | base64
<base64 admin:redhat>
## add the entry to pull-secret.json
$ vi pull-secret.json
{
  "auths": {
    ...
    "registry.home.lab:5000": {
      "auth": "<base64 admin:redhat>"
    }
  }
}
## set the variables for the registry sync
$ LOCAL_SECRET_JSON='pull-secret.json'
$ OCP_RELEASE=4.11.21-x86_64
$ LOCAL_REGISTRY='registry.home.lab:5000'
$ LOCAL_REPOSITORY="ocp4/release"
$ tar xvzf mirror.tar.gz
$ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=mirror/ "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} --insecure
```
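To confirm the release landed in the private registry (credentials and variables are the ones set above):
``` bash=
$ curl -u admin:redhat https://registry.home.lab:5000/v2/_catalog
$ oc adm release info -a ${LOCAL_SECRET_JSON} ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}
```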
### Create the CoreOS node SSH key
``` bash=
$ ssh-keygen -t rsa -b 4096 -N '' -f ~/ocp_ssh_key
```
### Prepare the installation config
> rhcos.raw.gz downloaded from the official Red Hat mirror
``` bash=
## Have the private registry certificate ready
## Requires pull-secret.json
## domain name -> DOMAIN
## cluster name -> CLUSTERID
## registry URL -> REGISTRY_URL
## registry certificate content -> ROOT_CA
## CoreOS ssh key -> OCP_SSH_KEY
## pull-secret.json content -> PULL_SECRET
$ vi ~/install-config.yaml
apiVersion: v1
baseDomain: ${DOMAIN}          #<- adjust for your environment
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ${CLUSTERID}           #<- adjust for your environment
networking:
  clusterNetwork:
  - cidr: 10.92.0.0/16         #<- adjust for your environment
    hostPrefix: 23             #<- adjust for your environment
  networkType: OpenShiftSDN
  serviceNetwork:
  - 10.251.0.0/16              #<- adjust for your environment
platform:
  none: {}
pullSecret: '${PULL_SECRET}'
sshKey: '${OCP_SSH_KEY}'
additionalTrustBundle: |
${ROOT_CA}
imageContentSources:
- mirrors:
  - ${REGISTRY_URL}
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - ${REGISTRY_URL}
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
$ export DOMAIN=home.lab
$ export CLUSTERID=ocp4
$ export PULL_SECRET=$(cat ~/pull-secret.json)
$ export OCP_SSH_KEY=$(cat ~/ocp_ssh_key.pub)
$ export ROOT_CA=$(sed 's/^/ /g' /etc/pki/ca-trust/source/anchors/root_CA.crt)
$ export REGISTRY_URL=registry.home.lab:5000/ocp4/release
$ mkdir install
$ envsubst < install-config.yaml > ./install/install-config.yaml
$ cd ./install
$ openshift-install create manifests
$ cat manifests/cluster-scheduler-02-config.yml # check whether mastersSchedulable is true
$ sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml
$ cat manifests/cluster-scheduler-02-config.yml # confirm mastersSchedulable is now false
$ openshift-install create ignition-configs
$ tree ./ # confirm the following files were generated
./
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
$ mkdir /var/www/html/ocp4
$ cp *.ign /var/www/html/ocp4
$ cp rhcos.raw.gz /var/www/html/ocp4 # the RHCOS raw image downloaded earlier (adjust the source path)
$ cat >> /var/www/html/install.sh
#!/bin/sh
set -x
export BASTION_IP=192.168.1.100:8080
sudo coreos-installer install /dev/sda \
--insecure \
--insecure-ignition \
-u http://${BASTION_IP}/ocp4/rhcos.raw.gz \
-I http://${BASTION_IP}/ocp4/${CLUSTER_ROLE}.ign \
--firstboot-args 'rd.neednet=1' \
--copy-network
<ctrl+D>
$ chown apache:apache -R /var/www/html/
$ restorecon -Rv /var/www/html/
```
## Install the OpenShift cluster
### Bootstrap installation
``` shell=
$ nmtui # configure networking
$ curl -O http://192.168.1.100:8080/install.sh
$ chmod 755 install.sh
$ CLUSTER_ROLE=bootstrap ./install.sh
# ssh to the bootstrap node from the Bastion
$ ssh -i ~/ocp_ssh_key core@192.168.1.104
$ journalctl -b -f -u bootkube.service # watch the installation progress
```
### Master installation
``` shell=
$ nmtui # configure networking
$ curl -O http://192.168.1.100:8080/install.sh
$ chmod 755 install.sh
$ CLUSTER_ROLE=master ./install.sh
```
### Installation progress
The following steps can be used to track the OCP installation progress.
1. Log in to the bootstrap node and run the following to watch the installation log:
```shell=
$ sudo journalctl -b -f -u release-image.service -u bootkube.service
```
2. Log in to the bastion and run the following to confirm the master nodes have joined the cluster:
```shell=
$ oc --kubeconfig=./install/auth/kubeconfig get node -w
$ openshift-install wait-for bootstrap-complete --dir=./install --log-level debug
```
### Worker node installation
``` shell=
$ nmtui # configure networking
$ curl -O http://192.168.1.100:8080/install.sh
$ chmod 755 install.sh
$ CLUSTER_ROLE=worker ./install.sh
```
### Add the worker nodes
> On the Bastion node
Approve all pending CSRs at once (run this at least twice; a second batch of CSRs appears after the first is approved).
``` shell=
$ oc get csr | grep 'Pending' | awk '{print $1}' | xargs -n 1 oc adm certificate approve # approve pending CSRs
```
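Once all nodes have joined and the cluster operators settle, the installation can be confirmed from the bastion; the paths assume the ./install directory used earlier:
``` shell=
$ openshift-install wait-for install-complete --dir=./install --log-level debug
$ export KUBECONFIG=./install/auth/kubeconfig
$ oc get clusteroperators
$ oc get node
```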
### Set up quick ssh login from the bastion
> Using the root user as an example
``` bash=
$ cp ~/ocp_ssh_key ~/.ssh/
$ cat >> ~/.ssh/config
Host *.ocp4.home.lab
User core
IdentityFile /root/.ssh/ocp_ssh_key
Host 192.168.1.101 192.168.1.102 192.168.1.103 192.168.1.104 192.168.1.105 192.168.1.106 192.168.1.107 192.168.1.108
User core
IdentityFile /root/.ssh/ocp_ssh_key
$ ssh 192.168.1.101 # ssh works directly now
```
### Configure NTP (chrony)
Prepare the MachineConfig for RHCOS.
``` shell=
$ vi ./chrony-template.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${MC_ROLE}
  name: 99-${MC_ROLE}s-chrony-configuration
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 3.1.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,${CHRONY_BASE64}
        mode: 420
        overwrite: true
        path: /etc/chrony.conf
  osImageURL: ""
```
### Apply the Machine Config
> Generate the Machine Config files and apply them to OCP.
This step will reboot the corresponding OCP nodes.
``` shell=
$ cat >> ./chrony.conf
server 192.168.1.100 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
$ export CHRONY_BASE64=$(cat ./chrony.conf | base64 -w 0)
$ export MC_ROLE="master"
$ envsubst < chrony-template.yaml > chrony-${MC_ROLE}.yaml
$ export MC_ROLE="worker"
$ envsubst < chrony-template.yaml > chrony-${MC_ROLE}.yaml
$ oc apply -f chrony-master.yaml
$ oc apply -f chrony-worker.yaml
```
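Rollout progress can be watched per pool; the affected nodes reboot one at a time:
``` shell=
$ oc get mcp -w
$ oc get node
```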
### Configure node labels
#### Set the infra/router/logger node labels
```shell=
$ oc label node <infra-node> node-role.kubernetes.io/infra=""
$ oc label node <worker-node> node-role.kubernetes.io/app=""
```
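The labels can be verified with label columns:
```shell=
$ oc get node -L node-role.kubernetes.io/infra -L node-role.kubernetes.io/app
```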
#### Configure the Machine Config Pool
> Create `infra-mcp.yaml`
```yaml=
# Remember to use `:set paste` in vi
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values:
      - worker
      - infra
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""
```
> Create the new Machine Config Pool
> **Note: this step will reboot the corresponding OCP nodes**
```shell=
$ oc apply -f infra-mcp.yaml
$ oc get mcp
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
infra rendered-infra-883f99ee56ac656339dff82b4036ca46 True False False 2 2 2 0 5m12s
master rendered-master-a08cefca086c4f3dd0f97d8f47caae11 True False False 3 3 3 0 14d
worker rendered-worker-883f99ee56ac656339dff82b4036ca46 True False False 0 0 0 0 14d
```
#### Remove the redundant worker labels
> Configure the Machine Config Pool first and confirm the updates have completed
> **This step will reboot the corresponding OCP nodes**
```shell=
$ oc label node <infra-node> node-role.kubernetes.io/worker-
```
### Router configuration
> Pin the router components to the infra nodes (nodePlacement) and set the number of replicas.
```shell=
$ oc edit IngressController default -n openshift-ingress-operator
...
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
  replicas: 2
$ oc -n openshift-ingress get pod -o wide
```
### Web Console access
``` bash=
$ oc -n openshift-console get route console -o=jsonpath='{.spec.host}{"\n"}'
console-openshift-console.apps.ocp4.home.lab
```
### Offline Operator Marketplace configuration
> **Note: remember to adjust the URLs in the yaml files to match your environment**
#### Disable the default OperatorHub
> Before removal
```shell=
$ oc -n openshift-marketplace get po
```

> After removal
```shell=
$ oc patch OperatorHub cluster --type json -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
$ oc -n openshift-marketplace get po
```

#### Download the external OperatorHub images
> **Prerequisites on an internet-connected machine:**
> podman version 1.9.3+
> oc-mirror
> pull-secret.json
``` bash=
$ wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.11.21/oc-mirror.tar.gz
$ tar xvzf oc-mirror.tar.gz
$ install oc-mirror /usr/local/bin/oc-mirror
$ mkdir ~/.docker
$ cat pull-secret.json > ~/.docker/config.json
$ oc mirror init > mirror-config.yaml # generate the initial yaml, then edit the packages to mirror
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  local:
    path: ./
mirror:
  platform:
    channels:
    - name: stable-4.11
      type: ocp
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
    packages:
    - name: serverless-operator
      channels:
      - name: stable
  additionalImages:
  - name: registry.redhat.io/ubi8/ubi:latest
  helm: {}
$ oc-mirror --config mirror-config.yaml file://operator-mirror --ignore-history
$ tar zcvf operator-mirror.tar.gz operator-mirror
```
#### Create the offline OperatorHub
> Requires operator-mirror.tar.gz
``` bash=
$ tar xvzf operator-mirror.tar.gz
$ cd operator-mirror
$ oc-mirror --from mirror_seq2_000000.tar docker://registry.home.lab:5000/redhat-operator --skip-metadata-check
$ cd oc-mirror-workspace/results-1673542316
$ oc apply -f catalogSource-redhat-operator-index.yaml
$ oc apply -f imageContentSourcePolicy.yaml
```
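To confirm the offline catalog is available and serving packages (the CatalogSource name comes from the generated yaml):
``` bash=
$ oc -n openshift-marketplace get catalogsource
$ oc -n openshift-marketplace get pod
$ oc -n openshift-marketplace get packagemanifest
```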
### Disable external update checks
> Remove the sample image streams
```shell=
$ oc patch configs.samples.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Removed"}}'
$ oc extract secret/pull-secret -n openshift-config --to=.
$ ls -a
.dockerconfigjson
$ vi .dockerconfigjson #Remove the cloud.openshift.com JSON entry.
$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.dockerconfigjson
$ oc delete deployment insights-operator -n openshift-insights
```
> Remove the cluster version update channel
```shell=
$ oc patch clusterversion/version --type=merge --patch '{"spec":{"channel":""}}'
```
### Configure the NFS Provisioner
- [Reference here](https://hackmd.io/cmrZkn_lTs6vyQjx1ipkaw#Nfs-subdir-external-provisioner)
- [github](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)
---
## Operator installation
### 1. Log in to the web console
Log in using the following URL:
[https://console-openshift-console.apps.ocp4.home.lab/](https://console-openshift-console.apps.ocp4.home.lab/)
### 2. Search for the available Operators
Click Operators -> OperatorHub -> My Operator
### 3. Install the Operators
Install the following Operators:
- Local Storage
  - 4.11.0-202212070335 provided by Red Hat
- Loki Operator
  - 5.5.5 provided by Red Hat
(For the later installs, read the next chapter before applying the settings in this section.)
Click Install -> check the following options
- Enable `operator recommended cluster monitoring on this namespace`
  - only needs to be checked if the option is present
- Approve Strategy -> Manual
### 4. The first installation requires a manual approval
Click Manual approval required -> Approve
---
## Loki installation
### Install the Local Storage Operator
#### 1. Install the Local Storage Operator following the documented steps
> **See the [Operator installation](#Operator-installation) chapter**
Install the following Operator:
- Local Storage
  - 4.6.0-202106010807.p0.git.fa3468d provided by Red Hat
```
https://docs.openshift.com/container-platform/4.6/storage/persistent_storage/persistent-storage-local.html#local-storage-install_persistent-storage-local
```
#### 2. Set the nodeSelector
```shell=
$ oc annotate ns openshift-local-storage openshift.io/node-selector=""
```
#### 3. Create the Local Volume
- Create the file `~/ocp_config/efk.localvolume.yaml`
```yaml=
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
name: "local-efk-disks"
namespace: "openshift-local-storage"
spec:
nodeSelector:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <logger-node-1> # oc get node, label = logger
- <logger-node-2>
- <logger-node-3>
storageClassDevices:
- storageClassName: "local-efk-sc"
volumeMode: Filesystem
fsType: xfs
devicePaths:
- /dev/sdb
```
- Apply the yaml
```shell=
$ oc apply -f efk.localvolume.yaml
```
- Check the Disk Maker pods
```shell=
$ oc -n openshift-local-storage get po
```

- Check the PVs
```shell=
$ oc get pv
```

- Check the StorageClass
```shell=
$ oc get sc
```

---