# ELK Notes
###### tags: `ELK` `Ctbc`

The logs live in the Log folder.
## Logstash
[Logstash下載](https://www.elastic.co/downloads/logstash)

```console
tar xzvf logstash-8.7.0-linux-x86_64.tar.gz
```
Create a config file named first-pipeline.conf inside the logstash-8.7.0 folder:
```conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{TIME:TIME}\| %{WORD:STATUS}\| *%{WORD:LABEL}\| *%{NUMBER:CODE}\|%{GREEDYDATA:MESSAGE}" }
  }
}
output {
  stdout { codec => rubydebug }
}
# output {
#   elasticsearch {
#     hosts => ["http://localhost:9200"]
#     index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
#     #user => "elastic"
#     #password => "changeme"
#   }
# }
```
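The grok pattern above can be sanity-checked offline. Below is a rough Python regex equivalent (TIME, WORD, NUMBER, and GREEDYDATA are approximated; the sample log line is made up for illustration):

```python
import re

# Approximate regex equivalents of the grok patterns used above:
#   TIME -> HH:MM:SS with optional fractional seconds
#   WORD -> \w+, NUMBER -> digits, GREEDYDATA -> .*
pattern = re.compile(
    r"(?P<TIME>\d{2}:\d{2}:\d{2}(?:[.,]\d+)?)\| "
    r"(?P<STATUS>\w+)\| *"
    r"(?P<LABEL>\w+)\| *"
    r"(?P<CODE>\d+)\|"
    r"(?P<MESSAGE>.*)"
)

line = "10:15:30.123| ERROR| Payment| 83| connection timed out"
fields = pattern.match(line).groupdict()
print(fields)
```

If the match fails on a real log line, adjust the pattern in the pipeline and re-test with the grok debugging tools listed further below.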
### HTTP POST output
```conf
output {
  http {
    http_method => "post"
    url => "http://172.24.34.250:8088/api/elk2angelia/connectSource"
  }
}
```
### Removing or adding output fields
```conf
mutate {
  remove_field => ["@version","input","host","tags","log","event","agent","ecs"]
}
mutate {
  add_field => {
    "NotifyMethod" => "Angelia,Teams,Sms,Mail"
  }
}
```
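As a quick mental model, the two mutate blocks above behave like dictionary operations on the event (the sample event fields here are hypothetical):

```python
# A toy event roughly as Logstash might hold it before the mutate filters run.
event = {
    "@version": "1",
    "host": {"name": "web01"},
    "tags": ["beats_input_codec_plain_applied"],
    "message": "10:15:30.123| ERROR| Payment| 83| boom",
}

# remove_field: drop the listed keys if they exist.
for field in ["@version", "input", "host", "tags", "log", "event", "agent", "ecs"]:
    event.pop(field, None)

# add_field: attach a new key to every event.
event["NotifyMethod"] = "Angelia,Teams,Sms,Mail"
print(event)
```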
### Multiline settings for exceptions (stack traces)
Filebeat input options for keeping a multi-line exception in a single event:
```yaml
multiline.pattern: '^[[:space:]]'
multiline.negate: false
multiline.match: after
```

https://blog.51cto.com/NIO4444/3840846
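A small Python sketch of the merging rule above: with `negate: false` and `match: after`, a line matching `^[[:space:]]` (i.e. starting with whitespace) is appended after the previous event, so an indented stack trace stays in one event. The log lines are made up:

```python
lines = [
    "10:15:30.123| ERROR| App| 83| boom",
    "    at com.example.Foo.bar(Foo.java:10)",
    "    at com.example.Main.main(Main.java:5)",
    "10:15:31.000| INFO| App| 0| ok",
]

events, current = [], []
for line in lines:
    if line[:1].isspace() and current:
        # pattern matched (negate=false), match=after:
        # glue the continuation line to the previous event
        current.append(line)
    else:
        if current:
            events.append("\n".join(current))
        current = [line]
if current:
    events.append("\n".join(current))

print(len(events))  # the stack trace collapses into the first event
```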
### Log filter example
```
%{TIME:TIME}\| *%{WORD:STATUS}\| *%{WORD:LABEL}\| *%{NUMBER:CODE}\|%{GREEDYDATA:MESSAGE}
```
### Replacing the timestamp with the log's original time (the raw log has only a time, no date, so the date is generated with Ruby)
```conf
grok {
  match => {
    "message" => "%{TIME:logtime}\| *%{LOGLEVEL:level}\|"
  }
}
ruby {
  code => 'event.set("today_date", Time.now.strftime("%Y-%m-%d"))'
}
mutate {
  add_field => {
    "logdate" => "%{today_date} %{logtime}"
  }
}
date {
  timezone => "Asia/Taipei"
  match => ["logdate", "yyyy-MM-dd HH:mm:ss.SSS", "yyyy-MM-dd HH:mm:ss,SSS"]
  remove_field => ["today_date","logdate"]
}
```
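The same idea expressed in Python terms: take the time captured from the log, prepend today's date, and parse the result. This mirrors the ruby + mutate + date filters above; the sample time is made up:

```python
from datetime import datetime

logtime = "10:15:30.123"                      # what %{TIME:logtime} captured
today = datetime.now().strftime("%Y-%m-%d")   # what the ruby filter generates
logdate = f"{today} {logtime}"                # what mutate add_field builds

# what the date filter parses (comma variant normalized the same way)
ts = datetime.strptime(logdate.replace(",", "."), "%Y-%m-%d %H:%M:%S.%f")
print(ts.isoformat())
```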
### Grok debugging tools
https://grokdebugger.com/
https://grokconstructor.appspot.com/do/match
Start Logstash from the bin directory:
```console
./logstash -f first-pipeline.conf
# auto-reload the pipeline config when it changes:
./logstash -f first-pipeline.conf --config.reload.automatic
```
## Filebeat
[Filebeat下載](https://www.elastic.co/downloads/beats/filebeat)
```console
tar xzvf filebeat-8.7.0-linux-x86_64.tar.gz
```
Edit filebeat.yml:
```yaml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all
# the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
- type: log
  enabled: true
  paths:
    - /home/elkusr/Log/*.log

# filestream is an input for collecting log messages from files.
- type: filestream
  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id
  # Change to true to enable this input configuration.
  enabled: false
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana
# API. This requires a Kibana endpoint configuration.
setup.kibana:
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # If you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # ID of the Kibana Space into which the dashboards should be loaded.
  # By default, the Default Space will be used.
  #space.id:

# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
  loadbalance: true
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
Multiple Logstash ports for redundancy:
```yaml
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044","localhost:5045"]
  loadbalance: true
```
Start Filebeat:
```console
./filebeat -e
# with an explicit config file and debug output for publish events:
./filebeat -e -c filebeat.yml -d "publish"
```
## Results
From here on, any change to a log file in the Log folder is picked up by Filebeat and printed by Logstash. To ship the data to Elasticsearch instead, change the Logstash output; filters can be added as well.
### Adding a log entry

### Filebeat picks up the new log

### Logstash prints the log successfully

## Elasticsearch
### Starting
After installation, start it from the bin directory:
```console
./elasticsearch
```
Check that it is up with a GET request to http://localhost:9200.
### Startup problem
Error message: received plaintext http traffic on an https channel, closing connection Netty4HttpChannel

Cause: SSL is enabled for the HTTP layer.
Fix: in ES/config/elasticsearch.yml, set `xpack.security.http.ssl.enabled` to `false`.
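In YAML form, the corresponding block in config/elasticsearch.yml looks like this:

```yaml
# Disable TLS on the HTTP API so plain http:// requests are accepted
xpack.security.http.ssl:
  enabled: false
```

Note that this disables encryption for the REST API, so it is only appropriate for local testing.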

### Setting passwords
Remember to start Elasticsearch first, then run one of the following under bin:
`./elasticsearch-setup-passwords auto` generates passwords automatically
`./elasticsearch-setup-passwords interactive` lets you type them in manually
When finished, log in on the web page with the elastic account and its password.

### Query DSL
```console
curl http://localhost:9200
curl -X GET "localhost:9200/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": { "CODE": { "query": "83" } }
  }
}
'
```
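The same query can be built and sent from Python using only the standard library; the actual request is commented out because it needs a running Elasticsearch:

```python
import json
import urllib.request

# The same match query as the curl example above.
query = {"query": {"match": {"CODE": {"query": "83"}}}}
body = json.dumps(query).encode()

req = urllib.request.Request(
    "http://localhost:9200/_search?pretty",
    data=body,
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)   # requires Elasticsearch running
# print(response.read().decode())
print(body.decode())
```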
## Kibana
```console
tar xzvf kibana-8.7.1-linux-x86_64.tar.gz
```
Edit config/kibana.yml:
```yaml
xpack.security.enabled: true
server.port: 5601
server.host: "server.host"
elasticsearch.hosts: ["http://server.host:9200"]
elasticsearch.username: "kibana"   # in 7.9.2: "kibana_system"
elasticsearch.password: "your-password"
```
Then start Kibana and open http://localhost:5601/ in a browser:
```console
./kibana --allow-root
```
## References
Official documentation:
https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation-configuration.html
Uncle Joe gets you started with Elastic Stack - exploring and practicing Observability (iThome series):
https://ithelp.ithome.com.tw/articles/10274299
https://ithelp.ithome.com.tw/articles/10274944
https://ithelp.ithome.com.tw/articles/10275576
[ELK] tutorial and introduction:
https://medium.com/%E7%A8%8B%E5%BC%8F%E4%B9%BE%E8%B2%A8/elk-%E6%95%99%E5%AD%B8%E8%88%87%E4%BB%8B%E7%B4%B9-c54af6f06e61
ELK: building a distributed log collection system:
https://lufor129.medium.com/elk-%E5%AF%A6%E4%BD%9C%E5%88%86%E6%95%A3%E5%BC%8Flog%E6%8E%A1%E9%9B%86%E7%B3%BB%E7%B5%B1-d3e729624af4
[ELK tutorial] Quick install of Elasticsearch, Logstash, Kibana, and Filebeat on CentOS (just copy and paste):
https://medium.com/@d101201007/centos7-elk-filebeat-%E6%8C%87%E4%BB%A4%E5%AE%89%E8%A3%9D-%E7%85%A7%E8%91%97%E8%B2%BC%E4%B8%8A%E5%B0%B1%E5%B0%8D%E4%BA%86-73f456381491
Deep pagination problem:
https://kucw.github.io/blog/2018/6/elasticsearch-scroll/