# Forwarding OpenShift audit logs to Elasticsearch via openshift-logging
## 1. Prerequisites
- The following components are already installed in the OpenShift cluster:
  - Elasticsearch + Kibana
  - OpenShift Logging operator
## 2. Setup
1. Create a service account for the log collector
```
oc create sa logging-collector -n openshift-logging
```
2. Grant the service account the permissions it needs to collect and forward logs. In this example the collector is authorized to collect infrastructure, application, and audit logs.
```
oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging
oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging
oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
oc adm policy add-cluster-role-to-user collect-audit-logs -z logging-collector -n openshift-logging
```
3. Create the Elasticsearch TLS secret in `openshift-logging` (copied from the `elastic` namespace)
```
oc -n elastic get secret elasticsearch-es-http-certs-internal -o json | \
jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.managedFields, .metadata.ownerReferences, .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration")' | \
oc -n openshift-logging create -f -
```
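The `jq` filter above strips the metadata fields that are tied to the source namespace and object identity, so the secret can be recreated cleanly in `openshift-logging`. A minimal local sketch of the same idea, using a stand-in JSON document (hypothetical values, not the real secret):

```shell
# Stand-in for the exported secret (hypothetical values)
cat <<'EOF' > /tmp/sample-secret.json
{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "elasticsearch-es-http-certs-internal",
    "namespace": "elastic",
    "uid": "1234-abcd",
    "resourceVersion": "987654"
  }
}
EOF

# del() removes the namespace-bound fields; what remains is safe to re-create elsewhere
jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion)' /tmp/sample-secret.json
```

After the filter only `metadata.name` (plus, on a real secret, `type` and `data`) remains, which is why piping it to `oc create -f -` recreates the secret in whatever namespace `-n` points at.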
4. Look up the Kibana route, then open it in a browser and log in
```
oc -n elastic get route
```
Sample output:
```
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
elasticsearch-sample elasticsearch-sample-elastic.apps.topgun.example.com elasticsearch-es-http <all> passthrough/Redirect None
kibana-sample kibana-sample-elastic.apps.topgun.example.com kibana-kb-http <all> passthrough/Redirect None
```
> In this example: `kibana-sample-elastic.apps.topgun.example.com`
5. In the left-hand menu, go to `Management/Dev tools` and run the following request. It creates a role named `openshift_log_forwarder` that has write access only to indices whose names start with the common OpenShift prefixes `app-*`, `infra-*`, and `audit-*`.
```json
PUT /_security/role/openshift_log_forwarder
{
  "cluster": [
    "monitor",
    "manage_index_templates"
  ],
  "indices": [
    {
      "names": [
        "app-*",
        "infra-*",
        "audit-*"
      ],
      "privileges": [
        "write",
        "create_index"
      ],
      "allow_restricted_indices": false
    }
  ]
}
```
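To confirm the role exists as expected, you can query it back in the same Dev Tools console (standard Elasticsearch security API):

```
GET /_security/role/openshift_log_forwarder
```

The response should echo the cluster privileges and index patterns defined above.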
6. In the left-hand menu, go to `Management/Stack Management` -> `Security/Users` -> `Create user` to create a user (e.g. `ocp-forwarder-user`) and assign it the `openshift_log_forwarder` role created above.

7. Fill in the account details
- Username: `ocp-forwarder-user`
- Full name: `ocp-forwarder-user`
- Password: `password`
- Confirm password: `password`
- Roles: expand the drop-down and select the `openshift_log_forwarder` role created earlier

8. Create the Elasticsearch user secret YAML file
```
cat <<EOF > es-userinfo.yaml
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-user
  namespace: openshift-logging
type: Opaque
stringData:
  username: ocp-forwarder-user
  password: password
EOF
```
9. Create the secret
```
oc apply -f es-userinfo.yaml
```
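A detail worth knowing: `stringData` is a write-time convenience, and Kubernetes stores the values base64-encoded under the secret's `data` field. The encoding is plain base64, as a quick local check shows:

```shell
# stringData values end up base64-encoded in the secret's .data field
echo -n 'password' | base64
# → cGFzc3dvcmQ=
```

To inspect a stored value later, decode it back, e.g. `oc get secret elasticsearch-user -n openshift-logging -o jsonpath='{.data.password}' | base64 -d`.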
10. Create the `ClusterLogForwarder` CR YAML file
```
cat <<EOF > clf_elasticsearch.yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  managementState: Managed
  outputs:
  - name: audit-elasticsearch
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch-es-http.elastic.svc.cluster.local:9200
      version: 8
      index: audit-write
      authentication:
        username:
          key: username
          secretName: elasticsearch-user
        password:
          key: password
          secretName: elasticsearch-user
    tls:
      insecureSkipVerify: true
  pipelines:
  - name: audit
    inputRefs:
    - audit
    outputRefs:
    - audit-elasticsearch
  serviceAccount:
    name: logging-collector
EOF
```
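Note that `insecureSkipVerify: true` disables TLS certificate verification for the connection to Elasticsearch (the Vector warnings shown in step 13 come from exactly this setting). For a stricter setup, the output's `tls` section can instead reference the CA from the certs secret copied in step 3 — a sketch, assuming that secret carries a `ca.crt` key:

```yaml
    tls:
      ca:
        key: ca.crt
        secretName: elasticsearch-es-http-certs-internal
```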
11. Create the `ClusterLogForwarder`
```
oc apply -f clf_elasticsearch.yaml
```
12. Check the pod status
```
oc get pods -n openshift-logging
```
```
Expected output:
```
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-86bf5bc9b6-b27wh 1/1 Running 0 127m
collector-5qvt4 1/1 Running 0 26m
collector-6vpxt 1/1 Running 0 26m
collector-hnzhm 1/1 Running 0 26m
collector-s8z9g 1/1 Running 0 26m
collector-tz2lm 1/1 Running 0 26m
collector-x5l5t 1/1 Running 0 26m
```
13. Inspect a collector pod's logs for anomalies
```
oc logs collector-x5l5t -n openshift-logging
```
```
Expected output:
```
Creating the directory used for persisting Vector state /var/lib/vector/openshift-logging/collector
Starting Vector process...
2025-09-12T09:02:20.158032Z WARN sink{component_kind="sink" component_id=output_audit_elasticsearch component_type=elasticsearch}: vector_core::tls::settings: The `verify_certificate` option is DISABLED, this may lead to security vulnerabilities.
2025-09-12T09:02:20.158081Z WARN sink{component_kind="sink" component_id=output_audit_elasticsearch component_type=elasticsearch}: vector_core::tls::settings: The `verify_hostname` option is DISABLED, this may lead to security vulnerabilities.
2025-09-12T09:02:20.287291Z WARN vector::internal_events::file::source: Currently ignoring file too small to fingerprint. file=/var/log/ovn/acl-audit-log.log
```
14. For internal lab testing, the index can be reduced to a single copy of the data (primary only, no replicas).
In the left-hand menu, go to `Management/Dev tools` and run the following:
```
PUT /your-index-name/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
```
15. Open Kibana in a browser and check that logs are being received
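Besides browsing in Kibana, a quick sanity check is a count query in Dev Tools against the index named in the forwarder output (`index: audit-write`):

```
GET /audit-write/_count
```

A non-zero `count` confirms that audit events are reaching Elasticsearch.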
