# Offline Installation of NeuVector 5.2.0 on OpenShift Container Platform 4.12
<style>
.indent-title-1{
margin-left: 1em;
}
.indent-title-2{
margin-left: 2em;
}
.indent-title-3{
margin-left: 3em;
}
</style>
# Preface
<div class="indent-title-1">
This article covers:
1. How to install NeuVector 5.2.0 on OpenShift Container Platform 4.12
2. How to configure the NeuVector web console to use a self-signed certificate
Expand the table of contents below and click an entry to jump to that section.
:::warning
:::spoiler 文章目錄
[TOC]
:::
</div>
# Pre-Installation Environment Checks
## 1. Confirm the Image Registry address, retag the Images, and push them to the Image Registry
- Make sure every Node in the cluster can pull images from this Image Registry
<div class="indent-title-2">
```
docker login -u <user_name> -p `oc whoami -t` docker-registry.default.svc:5000
docker tag docker.io/neuvector/enforcer:<version> docker-registry.default.svc:5000/neuvector/enforcer:<version>
docker tag docker.io/neuvector/controller:<version> docker-registry.default.svc:5000/neuvector/controller:<version>
docker tag docker.io/neuvector/manager:<version> docker-registry.default.svc:5000/neuvector/manager:<version>
docker tag docker.io/neuvector/scanner docker-registry.default.svc:5000/neuvector/scanner
docker tag docker.io/neuvector/updater docker-registry.default.svc:5000/neuvector/updater
docker push docker-registry.default.svc:5000/neuvector/enforcer:<version>
docker push docker-registry.default.svc:5000/neuvector/controller:<version>
docker push docker-registry.default.svc:5000/neuvector/manager:<version>
docker push docker-registry.default.svc:5000/neuvector/scanner
docker push docker-registry.default.svc:5000/neuvector/updater
docker logout docker-registry.default.svc:5000
```
</div>
## 2. Check whether the cluster has a defaultNodeSelector
<div class="indent-title-1">
```
$ oc get scheduler cluster -o yaml | grep -A10 defaultNodeSelector
```
If there is no output, skip this step.
Output:
```
spec:
defaultNodeSelector: node-role.kubernetes.io/worker=
```
- If you see output similar to the above, apply the following workaround
### Workaround
<div class="indent-title-1">
```
$ oc annotate namespace neuvector openshift.io/node-selector=
```
If the NeuVector Enforcer DaemonSet has already been created, delete its Pods so they get recreated
```
$ oc delete pod -l app=neuvector-enforcer-pod
```
If you skip this step, the Enforcer Pods may end up Pending after NeuVector is deployed, with an error message similar to the following
```!
0/11 nodes are available: 1 node(s) didn't match Pod's node affinity/selector. preemption: 0/11 nodes are available: 1 Preemption is not helpful for scheduling, 10 No preemption victims found for incoming pod.
```
</div>
</div>
## 3. Check the location of the CRI-O Socket
- The NeuVector Controller and Enforcer Pods mount the CRI-O socket, so the path must be correct
- If the CRI-O socket is not at `/var/run/crio/crio.sock`, the NeuVector Yaml file must be adjusted accordingly
<div class="indent-title-2">
```!
$ for i in $(oc get node -o=jsonpath='{.items[*].metadata.name}')
do
echo
echo "$i"
echo
cat <<EOF | oc debug node/"$i" 2> /dev/null
chroot /host
ps -eaf | grep kubelet | tr ' ' '\n' | grep -w -- '--container-runtime-endpoint'
EOF
done
```
Output:
```!
master.ocp4.example.com
--container-runtime-endpoint=/var/run/crio/crio.sock
worker1.ocp4.example.com
--container-runtime-endpoint=/var/run/crio/crio.sock
worker2.ocp4.example.com
--container-runtime-endpoint=/var/run/crio/crio.sock
```
</div>
## 4. Check every Node in the cluster for additional Taints
<div class="indent-title-1">
```!
$ for i in $(oc get node -o=jsonpath='{.items[*].metadata.name}')
do
echo
echo "$i"
echo
if [[ -z $(oc get node "$i" -o=jsonpath='{.spec.taints}') ]]; then
echo " No Taints on the $i"
else
echo " $(oc get node "$i" -o=jsonpath='{.spec.taints}')"
fi
done
```
Output:
```
master.ocp4.example.com
[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]
worker1.ocp4.example.com
No Taints on the worker1.ocp4.example.com
worker2.ocp4.example.com
No Taints on the worker2.ocp4.example.com
```
- If any Taints other than the ones above appear, you need to modify the Enforcer Pod section of the NeuVector Yaml file
<div class="indent-title-2">
<pre>
spec:
template:
spec:
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
<font color=red>- effect: NoSchedule
key: mykey
value: myvalue</font>
</pre>
> If there are additional Taints, add them where the red text is shown. The tolerations must match all of the taint information defined on the tainted nodes; otherwise the Enforcer Pods cannot be deployed on those nodes
</div>
</div>
## Script that runs all of the checks at once
<div class="indent-title-1">
```!
$ curl -s https://raw.githubusercontent.com/braveantony/bash-script/main/precheck.sh | bash
```
:::spoiler Script contents
```!
#!/bin/bash
# check Env before installing Neuvector on OCP
## check Taint
echo "Starting Check The Taint Info On a Node"
for i in $(oc get node -o=jsonpath='{.items[*].metadata.name}')
do
echo
echo "$i"
echo
if [[ -z $(oc get node "$i" -o=jsonpath='{.spec.taints}') ]]; then
echo " No Taints on the $i"
else
echo " $(oc get node "$i" -o=jsonpath='{.spec.taints}')"
fi
done
echo
echo "Note: All taint info must match to schedule Enforcers on nodes"
echo "=================="
## check CRI-O run-time
echo "Starting Check CRI-O run-time"
for i in $(oc get node -o=jsonpath='{.items[*].metadata.name}')
do
echo
echo "$i"
echo
cat <<EOF | oc debug node/"$i" 2> /dev/null
chroot /host
ps -eaf | grep kubelet | tr ' ' '\n' | grep -w -- '--container-runtime-endpoint'
EOF
done
echo "=================="
## check defaultNodeSelector
echo "Starting Check defaultNodeSelector"
if [[ -z $(oc get scheduler cluster -o yaml | grep defaultNodeSelector) ]]; then
echo " No defaultNodeSelector"
else
oc get scheduler cluster -o yaml | grep -A10 defaultNodeSelector
fi
echo "=================="
echo "End Check"
```
:::
</div>
---
# Installing NeuVector
## 1. Login as a normal user
<div class="indent-title-1">
```
$ oc login -u <user_name>
```
</div>
## 2. Create a new project
<div class="indent-title-1">
```
$ oc new-project neuvector
```
</div>
## 3. Login as system:admin account
<div class="indent-title-1">
```
$ oc login -u system:admin
```
</div>
## 4. Create Service Accounts and Grant Access to the Privileged SCC
<div class="indent-title-1">
```!
$ oc create sa controller -n neuvector
$ oc create sa enforcer -n neuvector
$ oc create sa basic -n neuvector
$ oc create sa updater -n neuvector
$ oc -n neuvector adm policy add-scc-to-user privileged -z controller -z enforcer
```
</div>
## 5. Create the custom resource definitions (CRDs) for NeuVector security rules. For OpenShift 4.6+ (Kubernetes 1.19+):
<div class="indent-title-1">
```!
$ oc apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.2.0/crd-k8s-1.19.yaml
$ oc apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.2.0/waf-crd-k8s-1.19.yaml
$ oc apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.2.0/dlp-crd-k8s-1.19.yaml
$ oc apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.2.0/admission-crd-k8s-1.19.yaml
```
</div>
## 6. Add read permission to access the kubernetes API and OpenShift RBACs.
<div class="indent-title-1">
```
oc create clusterrole neuvector-binding-app --verb=get,list,watch,update --resource=nodes,pods,services,namespaces
oc create clusterrole neuvector-binding-rbac --verb=get,list,watch --resource=rolebindings.rbac.authorization.k8s.io,roles.rbac.authorization.k8s.io,clusterrolebindings.rbac.authorization.k8s.io,clusterroles.rbac.authorization.k8s.io,imagestreams.image.openshift.io
oc adm policy add-cluster-role-to-user neuvector-binding-app system:serviceaccount:neuvector:controller
oc adm policy add-cluster-role-to-user neuvector-binding-rbac system:serviceaccount:neuvector:controller
oc create clusterrole neuvector-binding-admission --verb=get,list,watch,create,update,delete --resource=validatingwebhookconfigurations,mutatingwebhookconfigurations
oc adm policy add-cluster-role-to-user neuvector-binding-admission system:serviceaccount:neuvector:controller
oc create clusterrole neuvector-binding-customresourcedefinition --verb=watch,create,get,update --resource=customresourcedefinitions
oc adm policy add-cluster-role-to-user neuvector-binding-customresourcedefinition system:serviceaccount:neuvector:controller
oc create clusterrole neuvector-binding-nvsecurityrules --verb=list,delete --resource=nvsecurityrules,nvclustersecurityrules
oc adm policy add-cluster-role-to-user neuvector-binding-nvsecurityrules system:serviceaccount:neuvector:controller
oc adm policy add-cluster-role-to-user view system:serviceaccount:neuvector:controller --rolebinding-name=neuvector-binding-view
oc create clusterrole neuvector-binding-nvwafsecurityrules --verb=list,delete --resource=nvwafsecurityrules
oc adm policy add-cluster-role-to-user neuvector-binding-nvwafsecurityrules system:serviceaccount:neuvector:controller
oc create clusterrole neuvector-binding-nvadmissioncontrolsecurityrules --verb=list,delete --resource=nvadmissioncontrolsecurityrules
oc adm policy add-cluster-role-to-user neuvector-binding-nvadmissioncontrolsecurityrules system:serviceaccount:neuvector:controller
oc create clusterrole neuvector-binding-nvdlpsecurityrules --verb=list,delete --resource=nvdlpsecurityrules
oc adm policy add-cluster-role-to-user neuvector-binding-nvdlpsecurityrules system:serviceaccount:neuvector:controller
oc create role neuvector-binding-scanner --verb=get,patch,update,watch --resource=deployments -n neuvector
oc adm policy add-role-to-user neuvector-binding-scanner system:serviceaccount:neuvector:updater system:serviceaccount:neuvector:controller -n neuvector --role-namespace neuvector
oc create clusterrole neuvector-binding-csp-usages --verb=get,create,update,delete --resource=cspadapterusagerecords
oc adm policy add-cluster-role-to-user neuvector-binding-csp-usages system:serviceaccount:neuvector:controller
oc create clusterrole neuvector-binding-co --verb=get,list --resource=clusteroperators
oc adm policy add-cluster-role-to-user neuvector-binding-co system:serviceaccount:neuvector:enforcer system:serviceaccount:neuvector:controller
```
</div>
## 7. Run the following command to check whether the neuvector/controller, neuvector/enforcer, and neuvector/updater service accounts were added successfully.
<div class="indent-title-1">
```!
$ oc get ClusterRoleBinding neuvector-binding-app neuvector-binding-rbac neuvector-binding-admission neuvector-binding-customresourcedefinition neuvector-binding-nvsecurityrules neuvector-binding-view neuvector-binding-nvwafsecurityrules neuvector-binding-nvadmissioncontrolsecurityrules neuvector-binding-nvdlpsecurityrules neuvector-binding-csp-usages neuvector-binding-co -o wide
```
</div>
## OpenShift Deployment Examples for NeuVector
<div class="indent-title-1">
:::spoiler Sample Yaml File
```
# neuvector yaml version for NeuVector 5.2.x on CRI-O
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-crd-webhook
namespace: neuvector
spec:
ports:
- port: 443
targetPort: 30443
protocol: TCP
name: crd-webhook
type: ClusterIP
selector:
app: neuvector-controller-pod
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-admission-webhook
namespace: neuvector
spec:
ports:
- port: 443
targetPort: 20443
protocol: TCP
name: admission-webhook
type: ClusterIP
selector:
app: neuvector-controller-pod
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-service-webui
namespace: neuvector
spec:
ports:
- port: 8443
name: manager
protocol: TCP
type: ClusterIP
selector:
app: neuvector-manager-pod
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-controller
namespace: neuvector
spec:
ports:
- port: 18300
protocol: "TCP"
name: "cluster-tcp-18300"
- port: 18301
protocol: "TCP"
name: "cluster-tcp-18301"
- port: 18301
protocol: "UDP"
name: "cluster-udp-18301"
clusterIP: None
selector:
app: neuvector-controller-pod
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: neuvector-route-webui
namespace: neuvector
spec:
to:
kind: Service
name: neuvector-service-webui
port:
targetPort: manager
tls:
termination: passthrough
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: neuvector-manager-pod
namespace: neuvector
spec:
selector:
matchLabels:
app: neuvector-manager-pod
replicas: 1
template:
metadata:
labels:
app: neuvector-manager-pod
spec:
serviceAccountName: basic
serviceAccount: basic
containers:
- name: neuvector-manager-pod
image: image-registry.openshift-image-registry.svc:5000/neuvector/manager:<version>
env:
- name: CTRL_SERVER_IP
value: neuvector-svc-controller.neuvector
restartPolicy: Always
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: neuvector-controller-pod
namespace: neuvector
spec:
selector:
matchLabels:
app: neuvector-controller-pod
minReadySeconds: 60
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
replicas: 3
template:
metadata:
labels:
app: neuvector-controller-pod
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- neuvector-controller-pod
topologyKey: "kubernetes.io/hostname"
serviceAccountName: controller
serviceAccount: controller
containers:
- name: neuvector-controller-pod
image: image-registry.openshift-image-registry.svc:5000/neuvector/controller:<version>
securityContext:
privileged: true
readinessProbe:
exec:
command:
- cat
- /tmp/ready
initialDelaySeconds: 5
periodSeconds: 5
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.neuvector
- name: CLUSTER_ADVERTISED_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CLUSTER_BIND_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /var/neuvector
name: nv-share
readOnly: false
- mountPath: /var/run/crio/crio.sock
name: runtime-sock
readOnly: true
- mountPath: /host/proc
name: proc-vol
readOnly: true
- mountPath: /host/cgroup
name: cgroup-vol
readOnly: true
- mountPath: /etc/config
name: config-volume
readOnly: true
terminationGracePeriodSeconds: 300
restartPolicy: Always
volumes:
- name: nv-share
hostPath:
path: /var/neuvector
- name: runtime-sock
hostPath:
path: /var/run/crio/crio.sock
- name: proc-vol
hostPath:
path: /proc
- name: cgroup-vol
hostPath:
path: /sys/fs/cgroup
- name: config-volume
projected:
sources:
- configMap:
name: neuvector-init
optional: true
- secret:
name: neuvector-init
optional: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: neuvector-enforcer-pod
namespace: neuvector
spec:
selector:
matchLabels:
app: neuvector-enforcer-pod
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: neuvector-enforcer-pod
spec:
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
hostPID: true
serviceAccountName: enforcer
serviceAccount: enforcer
containers:
- name: neuvector-enforcer-pod
image: image-registry.openshift-image-registry.svc:5000/neuvector/enforcer:<version>
securityContext:
privileged: true
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.neuvector
- name: CLUSTER_ADVERTISED_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CLUSTER_BIND_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /lib/modules
name: modules-vol
readOnly: true
- mountPath: /var/run/crio/crio.sock
name: runtime-sock
readOnly: true
- mountPath: /host/proc
name: proc-vol
readOnly: true
- mountPath: /host/cgroup
name: cgroup-vol
readOnly: true
terminationGracePeriodSeconds: 1200
restartPolicy: Always
volumes:
- name: modules-vol
hostPath:
path: /lib/modules
- name: runtime-sock
hostPath:
path: /var/run/crio/crio.sock
- name: proc-vol
hostPath:
path: /proc
- name: cgroup-vol
hostPath:
path: /sys/fs/cgroup
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: neuvector-scanner-pod
namespace: neuvector
spec:
selector:
matchLabels:
app: neuvector-scanner-pod
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
replicas: 2
template:
metadata:
labels:
app: neuvector-scanner-pod
spec:
serviceAccountName: basic
serviceAccount: basic
containers:
- name: neuvector-scanner-pod
image: image-registry.openshift-image-registry.svc:5000/neuvector/scanner:latest
imagePullPolicy: Always
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.neuvector
restartPolicy: Always
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: neuvector-updater-pod
namespace: neuvector
spec:
schedule: "0 0 * * *"
jobTemplate:
spec:
template:
metadata:
labels:
app: neuvector-updater-pod
spec:
serviceAccountName: updater
serviceAccount: updater
containers:
- name: neuvector-updater-pod
image: image-registry.openshift-image-registry.svc:5000/neuvector/updater:latest
imagePullPolicy: Always
command:
- /bin/sh
- -c
- TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`; /usr/bin/curl -kv -X PATCH -H "Authorization:Bearer $TOKEN" -H "Content-Type:application/strategic-merge-patch+json" -d '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'`date +%Y-%m-%dT%H:%M:%S%z`'"}}}}}' 'https://kubernetes.default/apis/apps/v1/namespaces/neuvector/deployments/neuvector-scanner-pod'
restartPolicy: Never
```
:::
</div>
## 8. Modify the Sample Yaml File
<div class="indent-title-1">

8-1. Change the Image Registry location
8-2. Change the CRI-O socket path; if the CRI-O socket is already at `/var/run/crio/crio.sock`, no change is needed
8-3. Add tolerations for any additional Taints, other than
- `[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]`
- `[{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]`
8-4. Change the type of the neuvector-service-webui Service to `NodePort`
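The substitutions in 8-1 and 8-2 can be scripted. A minimal `sed` sketch, shown on a two-line excerpt so it is self-contained; run the same expressions against your full `neuvector.yaml` (the `REGISTRY`, `VERSION`, and `SOCK` values below are examples, adjust them to your environment):

```shell
# Example values -- adjust to your environment before running.
REGISTRY="image-registry.openshift-image-registry.svc:5000"
VERSION="5.2.0"
SOCK="/var/run/crio/crio.sock"

# A two-line excerpt standing in for the full neuvector.yaml.
cat > excerpt.yaml <<'EOF'
        image: image-registry.openshift-image-registry.svc:5000/neuvector/manager:<version>
          path: /var/run/crio/crio.sock
EOF

# 8-1: registry and image tag; 8-2: CRI-O socket path.
sed -i \
  -e "s|image-registry.openshift-image-registry.svc:5000|${REGISTRY}|g" \
  -e "s|<version>|${VERSION}|g" \
  -e "s|/var/run/crio/crio.sock|${SOCK}|g" \
  excerpt.yaml

cat excerpt.yaml
```

The NodePort change in 8-4 is a one-line edit to the `neuvector-service-webui` Service and is easiest to do by hand.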
</div>
## 9. Deploy NeuVector
<div class="indent-title-1">
```
$ oc create -f neuvector.yaml
```
Output:
```
service/neuvector-svc-crd-webhook created
service/neuvector-svc-admission-webhook created
service/neuvector-service-webui created
service/neuvector-svc-controller created
route.route.openshift.io/neuvector-route-webui created
deployment.apps/neuvector-manager-pod created
deployment.apps/neuvector-controller-pod created
daemonset.apps/neuvector-enforcer-pod created
deployment.apps/neuvector-scanner-pod created
cronjob.batch/neuvector-updater-pod created
```
</div>
## 10. Check the NeuVector Deployment Status
<div class="indent-title-1">
```
$ oc get pods,svc -o wide
```
Output:
```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/neuvector-controller-pod-6655ddf6b-kf69d 1/1 Running 0 10m 10.129.1.168 worker2.ocp4.example.com <none> <none>
pod/neuvector-controller-pod-6655ddf6b-l89tf 1/1 Running 0 10m 10.130.0.40 worker1.ocp4.example.com <none> <none>
pod/neuvector-controller-pod-6655ddf6b-ljhbq 1/1 Running 0 10m 10.129.1.170 worker2.ocp4.example.com <none> <none>
pod/neuvector-enforcer-pod-5d2j9 1/1 Running 0 10m 10.130.0.38 worker1.ocp4.example.com <none> <none>
pod/neuvector-enforcer-pod-vc6tk 1/1 Running 0 10m 10.129.1.166 worker2.ocp4.example.com <none> <none>
pod/neuvector-enforcer-pod-vpg56 1/1 Running 0 10m 10.128.0.249 master.ocp4.example.com <none> <none>
pod/neuvector-manager-pod-896b46c9f-bgm8n 1/1 Running 0 10m 10.129.1.169 worker2.ocp4.example.com <none> <none>
pod/neuvector-scanner-pod-56c798bb86-8dh2m 1/1 Running 0 10m 10.130.0.39 worker1.ocp4.example.com <none> <none>
pod/neuvector-scanner-pod-56c798bb86-kvgrh 1/1 Running 0 10m 10.129.1.167 worker2.ocp4.example.com <none> <none>
pod/web 1/1 Running 0 47m 10.129.1.163 worker2.ocp4.example.com <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/neuvector-service-webui NodePort 172.30.139.142 <none> 8443:32744/TCP 10m app=neuvector-manager-pod
service/neuvector-svc-admission-webhook ClusterIP 172.30.188.201 <none> 443/TCP 10m app=neuvector-controller-pod
service/neuvector-svc-controller ClusterIP None <none> 18300/TCP,18301/TCP,18301/UDP 10m app=neuvector-controller-pod
service/neuvector-svc-crd-webhook ClusterIP 172.30.208.182 <none> 443/TCP 10m app=neuvector-controller-p
```
> Check which Node the `neuvector-manager-pod-*-*` Pod is running on,
> as well as which NodePort the `neuvector-service-webui` Service exposes
</div>
## 11. Access the NeuVector WebUI
<div class="indent-title-1">
```
https://<worker2.ocp4.example.com 的 IP 位址>:32744
```
- Default username: admin
- Default password: admin

</div>
## Neuvector Dashboard
<div class="indent-title-1">

</div>
---
# Replacing Self-Signed Certificate
## Prerequisite
<div class="indent-title-1">
The NeuVector web console supports two self-signed certificate key formats:
- PKCS8 (Private-Key Information Syntax Standard)
- PKCS1 (RSA Cryptography Standard)
The self-signed certificate can be replaced using either of these PKCS formats.
### What is the difference between "PKCS1" and "PKCS8"?
<div class="indent-title-1">
PKCS is short for "Public Key Cryptography Standards".
- PKCS1 defines the format of RSA public and private keys and is the most basic format.
- Keys generated with `openssl` (1.x) are PKCS1 private keys by default; no public key file is written, since the public key can be derived from the private key
- PKCS8 is a standard for handling private keys of any algorithm, not just RSA.
<div class="indent-title-1">
| Private key format | Supported algorithms | First line of the key file |
|---|---|---|
| PKCS#1 | RSA | `-----BEGIN RSA PRIVATE KEY-----` |
| PKCS#8 | RSA, DSA, ECDSA, etc. | `-----BEGIN PRIVATE KEY-----` |
</div>
</div>
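If you ever need to convert between the two formats, `openssl` can rewrap an existing key as PKCS#8 without regenerating it. A short sketch (the file names `demo.key` and `demo-pkcs8.key` are examples):

```shell
# Generate an RSA key, then rewrap it as an unencrypted PKCS#8 key.
# Note: OpenSSL 1.x `genrsa` writes PKCS#1 by default, while OpenSSL 3.x
# already writes PKCS#8; the -topk8 conversion is harmless in either case.
openssl genrsa -out demo.key 2048
openssl pkcs8 -topk8 -nocrypt -in demo.key -out demo-pkcs8.key

# The PKCS#8 file starts with the generic header shown in the table above.
head -n 1 demo-pkcs8.key
```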
</div>
## Create Certificate Authority
<div class="indent-title-1">
We need to create our own root CA certificate so the browser can trust the certificates we sign with it.
```
$ mkdir ssl && cd ssl
```
Run the following `openssl` command to create `rootCA.key` and `rootCA.crt`.
Replace `SUSE.Lab` with your own domain name or IP address.
```
$ openssl req -x509 \
-sha256 -days 365 \
-nodes \
-newkey rsa:2048 \
-subj "/CN=SUSE.Lab/C=TW/L=Taipei" \
-keyout rootCA.key -out rootCA.crt
```
> `rootCA.key` and `rootCA.crt` will be used later to sign the SSL certificate.
</div>
## Create Self-Signed Certificates using OpenSSL
### 1. Create the Server Private Key
<div class="indent-title-1">
```
$ openssl genrsa -out server.key 2048
```
</div>
### 2. Create Certificate Signing Request Configuration
- Create a `csr.conf` file containing all the information needed to generate the CSR.
- Replace `ocp4.example.com` with your own domain name or IP address.
<div class="indent-title-1">
```
$ cat > csr.conf <<EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[dn]
C = TW
ST = Taiwan
L = Taipei
O = SUSE
OU = IT Department
emailAddress = admin@example.com
CN = ocp4.example.com
[req_ext]
subjectAltName = @alt_names
[alt_names]
DNS.1 = ocp4.example.com
DNS.2 = *.apps.ocp4.example.com
DNS.3 = api.ocp4.example.com
DNS.4 = api-int.ocp4.example.com
IP.1 = 192.168.11.80
EOF
```
</div>
### 3. Generate Certificate Signing Request (CSR) Using Server Private Key
- Run the following command to generate `server.csr`
<div class="indent-title-1">
```
$ openssl req -new -key server.key -out server.csr -config csr.conf
```
</div>
### 4. Create a Certificate Extensions File
- Run the following command to create the `cert.conf` file, which will be passed to `openssl` when generating the SSL certificate.
- Replace `*.apps.ocp4.example.com` with your domain name or IP address.
<div class="indent-title-1">
```!
$ cat > cert.conf <<EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.apps.ocp4.example.com
EOF
```
</div>
### 5. Generate the SSL Certificate With Our Self-Signed CA
<div class="indent-title-1">
Run the following command to generate the SSL certificate, signed with the `rootCA.crt` and `rootCA.key` of the certificate authority we created earlier.
```
$ openssl x509 -req \
-in server.csr \
-CA rootCA.crt -CAkey rootCA.key \
-CAcreateserial -out server.crt \
-days 365 \
-sha256 -extfile cert.conf
```
> The command above produces the `server.crt` certificate, which we will use together with `server.key` to enable SSL on the NeuVector web console.
</div>
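For reference, the whole flow above (CA creation, server key, CSR, signing) can be condensed into a short non-interactive script. This is a sketch using the same example names and SAN as above; `-addext` requires OpenSSL 1.1.1+, and the minimal `cert.conf` here carries only the SAN extension:

```shell
# 1) Root CA
openssl req -x509 -sha256 -days 365 -nodes -newkey rsa:2048 \
  -subj "/CN=SUSE.Lab/C=TW/L=Taipei" -keyout rootCA.key -out rootCA.crt
# 2) Server key + CSR with a SAN
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr \
  -subj "/CN=ocp4.example.com" \
  -addext "subjectAltName=DNS:*.apps.ocp4.example.com"
# 3) Extensions file + signing
printf 'subjectAltName=DNS:*.apps.ocp4.example.com\n' > cert.conf
openssl x509 -req -in server.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -out server.crt -days 365 -sha256 -extfile cert.conf
# 4) Sanity check: the leaf certificate must verify against our CA
openssl verify -CAfile rootCA.crt server.crt
```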
## Create the secret from the generated key and certificate files
<div class="indent-title-1">
Run the following command to create a secret from the private key and certificate that NeuVector will use
```!
$ oc create secret generic https-cert -n neuvector --from-file=server.key --from-file=server.crt
```
</div>
## Edit the yaml directly for the manager and controller deployments to add the mounts
<div class="indent-title-1">
```!
spec:
template:
spec:
containers:
volumeMounts:
- mountPath: /etc/neuvector/certs/ssl-cert.key
name: cert
readOnly: true
subPath: server.key
- mountPath: /etc/neuvector/certs/ssl-cert.pem
name: cert
readOnly: true
subPath: server.crt
volumes:
- name: cert
secret:
defaultMode: 420
secretName: https-cert
```
:::spoiler Full Yaml File
```
# neuvector yaml version for NeuVector 5.2.x on CRI-O
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-crd-webhook
namespace: neuvector
spec:
ports:
- port: 443
targetPort: 30443
protocol: TCP
name: crd-webhook
type: ClusterIP
selector:
app: neuvector-controller-pod
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-admission-webhook
namespace: neuvector
spec:
ports:
- port: 443
targetPort: 20443
protocol: TCP
name: admission-webhook
type: ClusterIP
selector:
app: neuvector-controller-pod
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-service-webui
namespace: neuvector
spec:
ports:
- port: 443
targetPort: 8443
name: manager
protocol: TCP
type: ClusterIP
selector:
app: neuvector-manager-pod
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-controller
namespace: neuvector
spec:
ports:
- port: 18300
protocol: "TCP"
name: "cluster-tcp-18300"
- port: 18301
protocol: "TCP"
name: "cluster-tcp-18301"
- port: 18301
protocol: "UDP"
name: "cluster-udp-18301"
clusterIP: None
selector:
app: neuvector-controller-pod
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: neuvector-route-webui
namespace: neuvector
spec:
to:
kind: Service
name: neuvector-service-webui
port:
targetPort: manager
tls:
termination: passthrough
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: neuvector-manager-pod
namespace: neuvector
spec:
selector:
matchLabels:
app: neuvector-manager-pod
replicas: 1
template:
metadata:
labels:
app: neuvector-manager-pod
spec:
serviceAccountName: basic
serviceAccount: basic
containers:
- name: neuvector-manager-pod
image: docker.io/neuvector/manager:5.2.0
env:
- name: CTRL_SERVER_IP
value: neuvector-svc-controller.neuvector
volumeMounts:
- mountPath: /etc/neuvector/certs/ssl-cert.key
name: cert
readOnly: true
subPath: server.key
- mountPath: /etc/neuvector/certs/ssl-cert.pem
name: cert
readOnly: true
subPath: server.crt
volumes:
- name: cert
secret:
defaultMode: 420
secretName: https-cert
restartPolicy: Always
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: neuvector-controller-pod
namespace: neuvector
spec:
selector:
matchLabels:
app: neuvector-controller-pod
minReadySeconds: 60
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
replicas: 3
template:
metadata:
labels:
app: neuvector-controller-pod
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- neuvector-controller-pod
topologyKey: "kubernetes.io/hostname"
serviceAccountName: controller
serviceAccount: controller
containers:
- name: neuvector-controller-pod
image: docker.io/neuvector/controller:5.2.0
securityContext:
privileged: true
readinessProbe:
exec:
command:
- cat
- /tmp/ready
initialDelaySeconds: 5
periodSeconds: 5
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.neuvector
- name: CLUSTER_ADVERTISED_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CLUSTER_BIND_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /var/neuvector
name: nv-share
readOnly: false
- mountPath: /var/run/crio/crio.sock
name: runtime-sock
readOnly: true
- mountPath: /host/proc
name: proc-vol
readOnly: true
- mountPath: /host/cgroup
name: cgroup-vol
readOnly: true
- mountPath: /etc/config
name: config-volume
readOnly: true
- mountPath: /etc/neuvector/certs/ssl-cert.key
name: cert
readOnly: true
subPath: server.key
- mountPath: /etc/neuvector/certs/ssl-cert.pem
name: cert
readOnly: true
subPath: server.crt
terminationGracePeriodSeconds: 300
restartPolicy: Always
volumes:
- name: nv-share
hostPath:
path: /var/neuvector
- name: runtime-sock
hostPath:
path: /var/run/crio/crio.sock
- name: proc-vol
hostPath:
path: /proc
- name: cgroup-vol
hostPath:
path: /sys/fs/cgroup
- name: cert
secret:
defaultMode: 420
secretName: https-cert
- name: config-volume
projected:
sources:
- configMap:
name: neuvector-init
optional: true
- secret:
name: neuvector-init
optional: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: neuvector-enforcer-pod
namespace: neuvector
spec:
selector:
matchLabels:
app: neuvector-enforcer-pod
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: neuvector-enforcer-pod
spec:
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
hostPID: true
serviceAccountName: enforcer
serviceAccount: enforcer
containers:
- name: neuvector-enforcer-pod
image: docker.io/neuvector/enforcer:5.2.0
securityContext:
privileged: true
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.neuvector
- name: CLUSTER_ADVERTISED_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CLUSTER_BIND_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /lib/modules
name: modules-vol
readOnly: true
- mountPath: /var/run/crio/crio.sock
name: runtime-sock
readOnly: true
- mountPath: /host/proc
name: proc-vol
readOnly: true
- mountPath: /host/cgroup
name: cgroup-vol
readOnly: true
terminationGracePeriodSeconds: 1200
restartPolicy: Always
volumes:
- name: modules-vol
hostPath:
path: /lib/modules
- name: runtime-sock
hostPath:
path: /var/run/crio/crio.sock
- name: proc-vol
hostPath:
path: /proc
- name: cgroup-vol
hostPath:
path: /sys/fs/cgroup
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: neuvector-scanner-pod
namespace: neuvector
spec:
selector:
matchLabels:
app: neuvector-scanner-pod
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
replicas: 2
template:
metadata:
labels:
app: neuvector-scanner-pod
spec:
serviceAccountName: basic
serviceAccount: basic
containers:
- name: neuvector-scanner-pod
image: docker.io/neuvector/scanner:latest
imagePullPolicy: Always
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.neuvector
restartPolicy: Always
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: neuvector-updater-pod
namespace: neuvector
spec:
schedule: "0 0 * * *"
jobTemplate:
spec:
template:
metadata:
labels:
app: neuvector-updater-pod
spec:
serviceAccountName: updater
serviceAccount: updater
containers:
- name: neuvector-updater-pod
image: docker.io/neuvector/updater:latest
imagePullPolicy: Always
command:
- /bin/sh
- -c
- TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`; /usr/bin/curl -kv -X PATCH -H "Authorization:Bearer $TOKEN" -H "Content-Type:application/strategic-merge-patch+json" -d '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'`date +%Y-%m-%dT%H:%M:%S%z`'"}}}}}' 'https://kubernetes.default/apis/apps/v1/namespaces/neuvector/deployments/neuvector-scanner-pod'
restartPolicy: Never
```
:::
</div>
## What is Route in OpenShift?
<div class="indent-title-1">
An OpenShift Container Platform route **exposes a service at a host name**, such as `www.example.com`, so that external clients can reach it by name.
### Route Types

### What is a Passthrough Route
<div class="indent-title-1">
```
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: neuvector-route-webui
namespace: neuvector
spec:
to:
kind: Service
name: neuvector-service-webui
port:
targetPort: manager
tls:
termination: passthrough
```
- `spec.tls.termination: passthrough`: with passthrough termination, **encrypted traffic is sent straight to the destination without the router providing TLS termination.** Therefore, no key or certificate is required on the route.
</div>
</div>
## Deploy NeuVector
<div class="indent-title-1">
```
$ oc create -f neuvector.yaml
```
Run the following command to check the Pod status and the route
```
$ oc get pod,route
```
Output:
```
NAME READY STATUS RESTARTS AGE
pod/neuvector-controller-pod-76bbbfc5dc-5t8lk 1/1 Running 0 167m
pod/neuvector-controller-pod-76bbbfc5dc-8znrb 1/1 Running 0 167m
pod/neuvector-controller-pod-76bbbfc5dc-t5tbb 1/1 Running 0 167m
pod/neuvector-enforcer-pod-4kgrn 1/1 Running 0 167m
pod/neuvector-enforcer-pod-kwsrr 1/1 Running 0 167m
pod/neuvector-enforcer-pod-lwnkp 1/1 Running 0 167m
pod/neuvector-manager-pod-6bb5555c7b-cx7lr 1/1 Running 0 167m
pod/neuvector-scanner-pod-56c798bb86-fvp8l 1/1 Running 0 167m
pod/neuvector-scanner-pod-56c798bb86-scqjk 1/1 Running 0 167m
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/neuvector-route-webui neuvector-route-webui-neuvector.apps.ocp4.example.com neuvector-service-webui manager passthrough None
```
</div>
### Testing the NeuVector web console locally
1. Copy the CA certificate to the trust anchors directory
<div class="indent-title-2">
```!
$ sudo cp rootCA.crt /etc/pki/ca-trust/source/anchors/
```
</div>
2. Update the Linux system's CA (Certificate Authority) trust store to trust our own CA certificate
<div class="indent-title-2">
```!
$ sudo update-ca-trust extract
```
</div>
3. Verify that our CA certificate is now recognized as issued by a trusted root certificate authority (CA)
<div class="indent-title-2">
```!
$ openssl verify /etc/pki/ca-trust/source/anchors/rootCA.crt
```
Output:
```
/etc/pki/ca-trust/source/anchors/rootCA.crt: OK
```
</div>
4. Access the NeuVector web console
<div class="indent-title-2">
```!
$ curl https://neuvector-route-webui-neuvector.apps.ocp4.example.com
```
> Run `oc get route neuvector-route-webui` to get the domain name
Output:
```!
This and all future requests should be directed to <a href="/index.html?v=2358c7ec6b">this URI</a>.
```
</div>
### Connecting to the NeuVector web console with Firefox
- Import `rootCA.crt` into the browser
- `Settings` > `Privacy & Security` > scroll down and click `View Certificates` > `Import` > select `rootCA.crt` and click `Open` > `OK`
- Browse to: `https://neuvector-route-webui-neuvector.apps.ocp4.example.com/`
<div class="indent-title-2">

</div>
---
# References
## Neuvector
- [Deploy Separate NeuVector Components with RedHat OpenShift - Neuvector Docs](https://open-docs.neuvector.com/deploying/openshift)
- [Replacing Self-Signed Certificate - Neuvector Docs](https://open-docs.neuvector.com/configuration/console/replacecert)
## Self-Signed Certificate
- [OpenSSL& public key and private key & Certificate](https://ji3g4zo6qi6.medium.com/openssl-public-key-and-private-key-certificate-28b990457496)
- [PKCS#1 and PKCS#8 format for RSA private key](https://stackoverflow.com/questions/48958304/pkcs1-and-pkcs8-format-for-rsa-private-key)
- [簡單了解 PKCS 規範](https://razeen.me/posts/introduce-pkcs/)
- [How to Create Self-Signed Certificates using OpenSSL](https://devopscube.com/create-self-signed-certificates-openssl/)
## OpenShift Route
- [Understanding OpenShift Route](https://medium.com/swlh/understanding-openshift-route-bd973d8a620a)
- [Creating a passthrough route - RedHat Docs](https://docs.openshift.com/container-platform/4.12/networking/routes/secured-routes.html#nw-ingress-creating-a-passthrough-route_secured-routes)