---
title: 'ELK installation steps'
tags: 'elasticsearch'
---
ELK Installation Steps
===
Machine Specs
---
- Elasticsearch: 3 nodes (one cluster)
- Kibana: installed on one of the Elasticsearch nodes
Install Elasticsearch
---
```bash=
# Install Java 8 (required by Elasticsearch 6.x)
yum -y install java-1.8.0-openjdk.x86_64
# Install Elasticsearch 6.4.0 from the official RPM
rpm -ivh https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.rpm
# Edit the main config and the JVM heap settings (file contents are listed below)
vi /etc/elasticsearch/elasticsearch.yml
vi /etc/elasticsearch/jvm.options
# Start Elasticsearch on boot
systemctl enable elasticsearch
```
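A quick sanity check after the packages are installed (nothing beyond the RPMs above is assumed):
```bash=
# Confirm the Java runtime and the Elasticsearch package are present
java -version
rpm -q elasticsearch
```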
Install Kibana
---
```bash=
# Install Kibana 6.4.0 from the official RPM
rpm -ivh https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-x86_64.rpm
# Start Kibana on boot
systemctl enable kibana
# Edit the Kibana config (file contents are listed below)
vi /etc/kibana/kibana.yml
```
Install Search Guard
---
```bash=
/usr/share/elasticsearch/bin/elasticsearch-plugin install -b https://releases.floragunn.com/search-guard-6/6.4.0-25.5/search-guard-6-6.4.0-25.5.zip
cd /usr/share/elasticsearch/plugins/search-guard-6/tools/
bash ./install_demo_configuration.sh
/usr/share/kibana/bin/kibana-plugin install https://releases.floragunn.com/search-guard-kibana-plugin/6.4.0-17/search-guard-kibana-plugin-6.4.0-17.zip
```
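install_demo_configuration.sh writes demo certificates into /etc/elasticsearch and appends the Search Guard settings shown further below to elasticsearch.yml. It typically also generates an sgadmin_demo.sh helper in the same tools directory; a hedged sketch of loading (or re-loading after edits) the sgconfig files into the Search Guard index once Elasticsearch is running (paths follow the demo layout and may differ):
```bash=
# Assumption: demo layout created by install_demo_configuration.sh
cd /usr/share/elasticsearch/plugins/search-guard-6/tools/
bash ./sgadmin_demo.sh   # pushes sgconfig/* into the searchguard index
```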
Start the Services
---
```bash=
systemctl start elasticsearch
systemctl start kibana
```
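Once both services are up, the stack can be checked over HTTPS. The sketch below assumes the Search Guard demo configuration, whose built-in admin user is admin/admin; -k is needed because the demo certificates are self-signed:
```bash=
systemctl status elasticsearch kibana
# Cluster health should report green once all three nodes have joined
curl -k -u admin:admin "https://localhost:9200/_cluster/health?pretty"
```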
Elasticsearch Config File Contents
---
:::info
/etc/elasticsearch/elasticsearch.yml
:::
```yaml=
# ------ Cluster ------
cluster.name: ELK-cluster
# ------ Node ------
node.name: elasticsearch
# ------ Paths ------
path.data: /mnt/disks/elastic_data
path.logs: /var/log/elasticsearch
# ------ Network ------
network.host: 0.0.0.0
http.port: 9200
# ------ Discovery ------
discovery.zen.ping.unicast.hosts: ["ip", "ip", "ip"]
discovery.zen.minimum_master_nodes: 2
# ------ Various ------
ingest.grok.watchdog.interval: 5s
ingest.grok.watchdog.max_execution_time: 5s
# ------ Search Guard ------
searchguard.ssl.transport.pemcert_filepath: esnode.pem
searchguard.ssl.transport.pemkey_filepath: esnode-key.pem
searchguard.ssl.transport.pemtrustedcas_filepath: root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: esnode.pem
searchguard.ssl.http.pemkey_filepath: esnode-key.pem
searchguard.ssl.http.pemtrustedcas_filepath: root-ca.pem
searchguard.allow_unsafe_democertificates: true
searchguard.allow_default_init_sgindex: true
searchguard.authcz.admin_dn:
- CN=kirk,OU=client,O=client,L=test, C=de
searchguard.audit.type: internal_elasticsearch
searchguard.enable_snapshot_restore_privilege: true
searchguard.check_snapshot_restore_write_privileges: true
searchguard.restapi.roles_enabled: ["sg_all_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3
xpack.security.enabled: false
searchguard.enterprise_modules_enabled: false
```
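node.name should be given a distinct value on each of the three machines, and discovery.zen.ping.unicast.hosts should list the real IPs of all three nodes. After startup, cluster membership can be verified with the _cat/nodes API (again assuming the demo admin/admin credentials):
```bash=
# One row per node; the elected master is marked with * in the master column
curl -k -u admin:admin "https://localhost:9200/_cat/nodes?v"
```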
jvm.options Config File Contents
---
:::info
/etc/elasticsearch/jvm.options
:::
```yaml=
# Xms represents the initial size of total heap space
-Xms8g
# Xmx represents the maximum size of total heap space
-Xmx8g
```
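-Xms and -Xmx are intentionally identical so the heap never has to resize at runtime. Whether the running node actually picked up the 8g heap can be checked via the _cat/nodes API (demo admin/admin credentials assumed):
```bash=
# heap.max should show roughly 8gb on every node
curl -k -u admin:admin "https://localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent"
```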
Kibana Config File Contents
---
:::info
/etc/kibana/kibana.yml
:::
```yaml=
server.host: 0.0.0.0
elasticsearch.url: "https://localhost:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "kibana"
elasticsearch.ssl.verificationMode: none
elasticsearch.requestTimeout: 60000
elasticsearch.requestHeadersWhitelist: ["authorization", "sgtenant" ]
xpack.security.enabled: false
```
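Kibana listens on port 5601 by default. A minimal liveness check against its status API; note that once the Search Guard Kibana plugin is active the endpoint may require the credentials configured above:
```bash=
# Expect the overall state to be "green" once Elasticsearch is reachable
curl -s "http://localhost:5601/api/status"
```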
K8S Deployment Configuration
===
Filebeat Config Files
---
:::info
ConfigMap
:::
```yaml=
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: logger
  name: abc-filebeat-config
  labels:
    version: v1
data:
  filebeat.yaml: |
    filebeat.config.inputs:
      enabled: true
      path: /etc/filebeat/filebeat.yaml
      reload.enabled: true
      reload.period: 10s
    filebeat.inputs:
    - type: log
      fields:
        source: abc.nginx.access
      paths:
      - /home/durian/abc/nginx/s.*.log
      - /home/durian/abc/nginx/m.*.log
    - type: log
      fields:
        source: abc.production.access
      paths:
      - /home/durian/abc/prod/production.access.log
    output.elasticsearch:
      hosts: '${ES_HOSTS}'
      indices:
      - index: "abc.nginx.access-%{+yyyy.MM.dd}"
        when.equals:
          fields.source: abc.nginx.access
      - index: "abc.production.access-%{+yyyy.MM.dd}"
        when.equals:
          fields.source: abc.production.access
      pipelines:
      - pipeline: abc.nginx.access
        when.equals:
          fields.source: abc.nginx.access
      - pipeline: abc.production.access
        when.equals:
          fields.source: abc.production.access
```
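Before rolling the DaemonSet out, the configuration can be validated with Filebeat's built-in test commands from inside a running pod (the pod name below is a placeholder):
```bash=
# Assumption: <pod-name> is one of the filebeat-abc pods created by the DaemonSet below
kubectl -n logger exec <pod-name> -- filebeat test config -c /etc/filebeat/filebeat.yaml
kubectl -n logger exec <pod-name> -- filebeat test output -c /etc/filebeat/filebeat.yaml
```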
:::info
Daemonset
:::
```yaml=
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: logger
  name: filebeat-abc
  labels:
    app: filebeat-abc
spec:
  selector:
    matchLabels:
      app: filebeat-abc
  template:
    metadata:
      labels:
        app: filebeat-abc
    spec:
      nodeSelector:
        app: abc
      initContainers:
      - image: busybox
        name: filebeat-permission-fix
        command:
        - sh
        - -c
        - |
          chmod -R 777 /var/lib/filebeat
        volumeMounts:
        - name: filebeat-log
          mountPath: /var/log/filebeat
        - name: filebeat-lib
          mountPath: /var/lib/filebeat
      containers:
      - name: filebeat-abc
        image: elastic/filebeat:7.6.0
        env:
        - name: ES_HOSTS
          value: "10.0.0.1,10.0.0.2,10.0.0.3"
        args: [
          "-c", "/etc/filebeat/filebeat.yaml",
          "-path.data", "/var/lib/filebeat",
          "-e",
        ]
        resources:
          limits:
            cpu: "300m"
          requests:
            cpu: "100m"
        volumeMounts:
        - name: pod-log
          mountPath: /home/durian
        - name: filebeat-lib
          mountPath: /var/lib/filebeat
        - name: filebeat-log
          mountPath: /var/log/filebeat
        - name: filebeat-config
          mountPath: /etc/filebeat
      terminationGracePeriodSeconds: 30
      volumes:
      - name: pod-log
        hostPath:
          path: /home/durian
          type: DirectoryOrCreate
      - name: filebeat-lib
        hostPath:
          path: /var/lib/filebeat
          type: DirectoryOrCreate
      - name: filebeat-log
        hostPath:
          path: /var/log/filebeat
          type: DirectoryOrCreate
      - name: filebeat-config
        configMap:
          name: abc-filebeat-config
```
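Both manifests are applied with kubectl; the file names below are placeholders for wherever the ConfigMap and DaemonSet above are saved:
```bash=
kubectl apply -f filebeat-configmap.yaml -f filebeat-daemonset.yaml
# Expect one pod per node carrying the app=abc label
kubectl -n logger get pods -l app=filebeat-abc -o wide
kubectl -n logger logs ds/filebeat-abc --tail=50
```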
ELK Detailed Configuration
===
Elasticsearch Pipeline Configuration
---
:::info
GET _ingest/pipeline/abc.nginx.access
:::
```json=
{
"abc.nginx.access" : {
"processors" : [
{
"dissect" : {
"field" : "message",
"pattern" : "%{remote_addr} - %{remote_user} [%{@timestamp}] \"%{request_method} %{request} HTTP/%{httpversion}\" %{status} %{bytes_sent} \"%{http_referer}\" \"%{http_user_agent}\" \"%{http_x_forwarded_for}\" \"%{request_time}\" \"%{upstream_response_time}\" \"%{http_x_request_id}\" \"%{http_host}\""
}
},
{
"dissect" : {
"field" : "request",
"pattern" : "%{request_url}?%{request_param}",
"ignore_failure" : true
}
},
{
"kv" : {
"field" : "request_param",
"target_field" : "request_params",
"field_split" : "&",
"value_split" : "=",
"ignore_failure" : true
}
},
{
"remove" : {
"field" : [
"request_param"
],
"ignore_failure" : true
}
},
{
"date" : {
"field" : "@timestamp",
"target_field" : "@timestamp",
"formats" : [
"dd/MMM/YYYY:H:m:s +0800"
],
"timezone" : "Asia/Taipei"
}
},
{
"convert" : {
"field" : "status",
"type" : "integer"
}
},
{
"convert" : {
"field" : "request_time",
"type" : "float"
}
},
{
"convert" : {
"field" : "upstream_response_time",
"type" : "float"
}
}
],
"on_failure" : [
{
"set" : {
"field" : "error.message",
"value" : "{{ _ingest.on_failure_message }}"
}
}
]
}
}
```
:::info
PUT _ingest/pipeline/abc.nginx.access
:::
```json=
{
"processors" : [
{
"dissect" : {
"field" : "message",
"pattern" : "%{remote_addr} - %{remote_user} [%{@timestamp}] \"%{request_method} %{request} HTTP/%{httpversion}\" %{status} %{bytes_sent} \"%{http_referer}\" \"%{http_user_agent}\" \"%{http_x_forwarded_for}\" \"%{request_time}\" \"%{upstream_response_time}\" \"%{http_x_request_id}\" \"%{http_host}\""
}
},
{
"dissect" : {
"field" : "request",
"pattern" : "%{request_url}?%{request_param}",
"ignore_failure" : true
}
},
{
"kv" : {
"field" : "request_param",
"target_field" : "request_params",
"field_split" : "&",
"value_split" : "=",
"ignore_failure" : true
}
},
{
"remove" : {
"field" : [
"request_param"
],
"ignore_failure" : true
}
},
{
"date" : {
"field" : "@timestamp",
"target_field" : "@timestamp",
"formats" : [
"dd/MMM/YYYY:H:m:s +0800"
],
"timezone" : "Asia/Taipei"
}
},
{
"convert" : {
"field" : "status",
"type" : "integer"
}
},
{
"convert" : {
"field" : "request_time",
"type" : "float"
}
},
{
"convert" : {
"field" : "upstream_response_time",
"type" : "float"
}
}
],
"on_failure" : [
{
"set" : {
"field" : "error.message",
"value" : "{{ _ingest.on_failure_message }}"
}
}
]
}
```
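The same body can be created from the shell and dry-run against a sample nginx log line with the _simulate API before Filebeat starts shipping to it. The sketch assumes the demo admin/admin credentials and that the PUT body above is saved as pipeline.json:
```bash=
# Create or update the pipeline from the saved body
curl -k -u admin:admin -H 'Content-Type: application/json' \
  -X PUT "https://localhost:9200/_ingest/pipeline/abc.nginx.access" -d @pipeline.json
# Dry-run the pipeline against a single sample access-log line
curl -k -u admin:admin -H 'Content-Type: application/json' \
  -X POST "https://localhost:9200/_ingest/pipeline/abc.nginx.access/_simulate" \
  -d '{"docs":[{"_source":{"message":"1.2.3.4 - - [01/Jan/2020:12:00:00 +0800] \"GET /index?a=1 HTTP/1.1\" 200 123 \"-\" \"curl\" \"-\" \"0.001\" \"0.001\" \"-\" \"example.com\""}}]}'
```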
### References
[grok-patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns)