---
title: ELK
tags: ELK
---

# ELK

> [name=陳信安]
> [time=TUE, OCT 29, 2020 11:00 AM]

---

# Agenda

* What is ELK
* Grok
* Installing ELK
* Usage

---

## What is ELK

---

ELK is a suite of log monitoring and analysis tools developed by the company Elastic, named after its components: Elasticsearch, Logstash, and Kibana.

---

### Elasticsearch

Elasticsearch is a distributed full-text search engine written in Java and one of the most popular enterprise search engines today, offering near real-time search that is stable, reliable, and fast. In this stack it handles log indexing, analysis, and storage.

---

### Logstash

Logstash collects, transforms, and parses logs, then hands the processed logs to Elasticsearch for storage. Its main job here is parsing logs.

---

### Kibana

Kibana turns logs into a variety of charts, giving users powerful log visualization. Its main job here is the log viewing interface.

---

### ELK Architecture Diagram

---

## Grok

Logstash filters and transforms log data, then stores it at the configured destination. Before storage, every event passes through a three-stage pipeline (Inputs, Filters, Outputs) that filters and forwards data arriving from different sources.

---

[Grok](https://grokdebug.herokuapp.com/) parses log data into structured, queryable content.

---

Next, we write rules so that Logstash can extract the log content we want.

Click Discover and paste `ERROR [2020-10-29 12:34:56] 我是內容` (the trailing `我是內容`, "I am the content", is the sample message body). After filtering, you will see that two fields of this string are recognized.

---

Next, click Debugger, paste the patterns Discover just identified, `%{CISCO_REASON}%{SYSLOG5424SD} 我是內容`, and click Go.

---

You can see that grok breaks the first two fields out into JSON.

---

Next, we identify the remaining third field, `我是內容`. Click Patterns and choose grok-patterns to browse the many available rules; here we use `GREEDYDATA .*`.

---

Click Debugger again and enter `%{GREEDYDATA:message}`; the third field is now recognized as well, so the complete pattern for this sample line is `%{CISCO_REASON}%{SYSLOG5424SD} %{GREEDYDATA:message}`.

---

## Installing ELK

[Cloud Elastic](https://cloud.elastic.co/registration?elektra=downloads-overview&storm=elasticsearch)

---

Choose the ELK deployment template.

---

Choose a cloud provider and an ELK version, then click Create deployment in the lower right.

---

Click Elasticsearch / Copy endpoint on the left (the URL logs will be shipped to).

---

ELK hardware configuration

---

Open the Kibana interface.

---

Click Explore on my own.

---

Build the Logstash image

* Dockerfile

```=
# bundle our pipeline config into the image and point Logstash at it
FROM logstash:7.9.3
COPY conf.d /etc/logstash/conf.d
CMD ["-f", "/etc/logstash/conf.d"]
```

---

* logstash.conf

```
# Input: pull log events that Filebeat buffered into the Redis list "log"
input {
  redis {
    host => "redis"
    port => 6379
    data_type => "list"
    key => "log"
    password => "abcdqazwsxedc"
  }
}

# Filter: parse our custom log format, normalize the timestamp,
# and enrich/clean up fields before indexing
filter {
  if [fields][service] == "customlog" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:[@metadata][timestamp]} %{NUMBER:threadid} %{LOGLEVEL:loglevel} %{NOTSPACE:logger} %{GREEDYDATA:message}"]
      overwrite => [ "message" ]
    }
    date {
      match => [ "[@metadata][timestamp]", "YYYY-MM-dd HH:mm:ss.SSS" ]
      timezone => "UTC"
    }
    mutate {
      convert => { "threadid" => "integer" }
      add_field => {
        "hostname" => "%{[beat][hostname]}"
        "servertype" => "%{[fields][servertype]}"
        "[@metadata][env]" => "%{[fields][env]}"
      }
      # keep [fields]: the output index name below still references it
      remove_field => ["beat"]
    }
  }
}

# Output: index into the Cloud Elastic deployment, one index per env/service/day
output {
  elasticsearch {
    hosts => ["cloud elastic url"]
    user => "cloud elastic user"
    password => "cloud elastic password"
    index => "%{[fields][env]}_%{[fields][service]}-%{+YYYY.MM.dd}"
  }
}
```

---

Install the Windows version of Filebeat from the [official downloads page](https://www.elastic.co/downloads/beats/filebeat).

---

Open PowerShell as Administrator, change into the Filebeat directory, and run the install script `.\install-service-filebeat.ps1`.

---

Start the service with `net start filebeat`.

---

* filebeat.yml: pay attention to the log paths and the log output target (here, the Redis list `log` that logstash.conf above reads from).

---

Run Redis and Logstash

```=
version: "3"
services:
  logstash:
    container_name: logstash
    # assumes the image built from the Dockerfile above was tagged logstash:7.9.3
    image: logstash:7.9.3
    ports:
      - 5044:5044
    restart: always
    environment:
      LOG_LEVEL: error
    networks:
      service_net:
        ipv4_address: 172.22.238.11
    # Elasticsearch lives on Cloud Elastic, so Logstash only waits for Redis
    depends_on:
      - redis
  redis:
    container_name: redis
    image: redis:3.2.4
    entrypoint: redis-server --maxmemory "4gb" --appendonly yes --requirepass abcdqazwsxedc
    ports:
      - 6379:6379
    restart: always
    volumes:
      - /data/redis:/data
    networks:
      service_net:
        ipv4_address: 172.22.238.12
networks:
  service_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.238.0/24
```

---

Use [Redis Desktop Manager](https://github.com/uglide/RedisDesktopManager) to confirm that logs are landing in the Redis buffer.

---

## Usage

Create an index pattern: click Manage spaces in the upper left -> Kibana - Index Patterns.

---

Click Create index pattern.

---

Enter testev_customlog-* and click Next step.

---

Select @timestamp, then click Create index pattern.

---

The index pattern is created.

---

Back in Kibana's Discover view, you can now see the newly added index.

---

Common commands

| Command | Description |
|-|-|
| GET _cat/indices/?v | Show index details |
| GET _cat/nodes/?v | Check cluster status |
| DELETE <index name>-<date> | Delete an index |
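These commands are run in Kibana's Dev Tools console, which the next slide opens. A quick example session; the concrete index name is illustrative, following the `testev_customlog-*` pattern created above:

```
GET _cat/indices/?v
GET _cat/nodes/?v
DELETE testev_customlog-2020.10.29
```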
---

Click Dev Tools on the left.

---

View the indexes.
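---

Outside Kibana, the same checks can be run directly against the Elasticsearch endpoint copied from Cloud Elastic earlier. A minimal curl sketch; the endpoint and credentials stand for the same placeholders used in logstash.conf:

```
curl -u "<cloud elastic user>:<cloud elastic password>" \
  "https://<cloud elastic url>/_cat/indices/testev_customlog-*?v"
```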