---
title: ELK
tags: ELK
---

# ELK

> [name=陳信安]
> [time=TUE, OCT 29, 2020 11:00 AM]

---

# Agenda
* What is ELK
* Grok
* Installing ELK
* Usage

---

## What is ELK

---

ELK is a log monitoring and analysis stack developed by the company Elastic, made up of Elasticsearch, Logstash, and Kibana.

---

### Elasticsearch
Elasticsearch is a distributed full-text search engine written in Java and one of today's most popular enterprise search engines, offering near real-time search that is stable, reliable, and fast. In the stack, it handles log indexing, analysis, and storage.

---

### Logstash
Logstash collects, transforms, and parses logs, then hands the processed events to Elasticsearch for storage. Its main job is log parsing.

---

### Kibana
Kibana turns logs into a variety of charts, giving users powerful log visualization. Its main job is the log UI.

---

### ELK architecture
![](https://i.imgur.com/egXht0y.png)

---

## Grok
Logstash filters and transforms log events and then stores them at a designated destination.
Before storage, every event passes through three stages (inputs, filters, outputs) that filter and forward data arriving from different sources.
![](https://i.imgur.com/VW2Is1T.png)

---

[Grok](https://grokdebug.herokuapp.com/) parses log messages into structured, queryable content.

---

Next, we write rules so that Logstash can extract the log content we care about.
Click Discover and paste ```ERROR [2020-10-29 12:34:56] 我是內容```; after filtering, you will see that two fields of this string are recognized.
![](https://i.imgur.com/SUYKvkL.png)

---

Next, click Debugger, paste the pattern just identified in Discover, ```%{CISCO_REASON}%{SYSLOG5424SD} 我是內容```, and click Go.

---

![](https://i.imgur.com/VSgcsj0.png)
You can see that grok has decomposed the first two fields into JSON.

---

Next, we further identify the remaining third field, ```我是內容```. Click Patterns and choose grok-patterns to see the many available rules; here we use ```GREEDYDATA .*```.
![](https://i.imgur.com/irvVtff.png)

---

Click Debugger again and enter ```%{GREEDYDATA:message}``` to see the third field recognized as well.
![](https://i.imgur.com/OduUBeH.png)

---

## Installing ELK
[Cloud Elastic](https://cloud.elastic.co/registration?elektra=downloads-overview&storm=elasticsearch)
![](https://i.imgur.com/4VpoE3M.png)

---

Choose the ELK deployment type.
![](https://i.imgur.com/JJwHarB.png)

---

Choose the cloud provider and ELK version, then click Create deployment at the bottom right.
![](https://i.imgur.com/x2mG9Od.png)

---

Click Elasticsearch / Copy endpoint on the left (the URL that logs are shipped to).
![](https://i.imgur.com/qPgQ0ql.png)

---

ELK hardware configuration
![](https://i.imgur.com/64SOiIx.png)

---

Open the Kibana UI.
![](https://i.imgur.com/A4bNUP5.png)

---

Click Explore on my own.
![](https://i.imgur.com/2zqaBoX.png)

---

Build the Logstash image
* Dockerfile
```=
FROM logstash:7.9.3
COPY conf.d /etc/logstash/conf.d
CMD ["-f", "/etc/logstash/conf.d"]
```

---

* logstash.conf
```
input {
  redis {
    host => "redis"
    port => 6379
    data_type => "list"
    key => "log"
    password
      => "abcdqazwsxedc"
  }
}

filter {
  if [fields][service] == "customlog" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:[@metadata][timestamp]} %{NUMBER:threadid} %{LOGLEVEL:loglevel} %{NOTSPACE:logger} %{GREEDYDATA:message}"]
      overwrite => [ "message" ]
    }
    date {
      match => [ "[@metadata][timestamp]", "YYYY-MM-dd HH:mm:ss.SSS" ]
      timezone => "UTC"
    }
    mutate {
      convert => { "threadid" => "integer" }
      add_field => {
        "hostname" => "%{[beat][hostname]}"
        "servertype" => "%{[fields][servertype]}"
        "[@metadata][env]" => "%{[fields][env]}"
      }
      remove_field => ["beat", "fields"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["cloud elastic url"]
    user => "cloud elastic user"
    password => "cloud elastic password"
    index => "%{[fields][env]}_%{[fields][service]}-%{+YYYY.MM.dd}"
  }
}
```

---

Install the Windows version of Filebeat
[Official download](https://www.elastic.co/downloads/beats/filebeat)
![](https://i.imgur.com/aQnKOIw.png)

---

Open PowerShell as administrator, change to the Filebeat directory, and install the service with ```.\install-service-filebeat.ps1```.
![](https://i.imgur.com/QMp224N.png)

---

Start the service with ```net start filebeat```.
![](https://i.imgur.com/HMM0xG4.png)

---

* filebeat.yml

Pay attention to the log paths and the log output target.
![](https://i.imgur.com/o2Inflt.png)

---

![](https://i.imgur.com/RHcpx86.png)

---

Run Redis and Logstash
```=
version: "3"
services:
  logstash:
    container_name: logstash
    image: logstash:7.9.3
    ports:
      - 5044:5044
    restart: always
    environment:
      LOG_LEVEL: error
    networks:
      service_net:
        ipv4_address: 172.22.238.11
  redis:
    container_name: redis
    image: redis:3.2.4
    entrypoint: redis-server --maxmemory "4gb" --appendonly yes --requirepass abcdqazwsxedc
    ports:
      - 6379:6379
    restart: always
    volumes:
      - /data/redis:/data
    networks:
      service_net:
        ipv4_address: 172.22.238.12
networks:
  service_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.238.0/24
```

---

Use [Redis Desktop Manager](https://github.com/uglide/RedisDesktopManager) to confirm that logs are being written to the Redis buffer.
![](https://i.imgur.com/gCdrQbf.png)

---

## Usage
Create an index pattern.
Click Manage spaces at the top left -> Kibana - Index Patterns
![](https://i.imgur.com/jEnBFjM.png)

---

Create
index pattern
![](https://i.imgur.com/NYkYFis.png)

---

Enter testev_customlog-* and click Next step.
![](https://i.imgur.com/BUGbPYc.png)

---

Select @timestamp and click Create index pattern.
![](https://i.imgur.com/oXrLWze.png)

---

The index pattern is created.
![](https://i.imgur.com/zP7EG7S.png)

---

Back in Kibana's Discover view, you can see the newly added index.
![](https://i.imgur.com/SbMN9QW.png)

---

Common commands

|Command|Description|
|-|-|
|GET _cat/indices/?v|show index details|
|GET _cat/nodes/?v|check cluster status|
|DELETE 【index name】-【date】|delete an index|

---

Click Dev Tools on the left.
![](https://i.imgur.com/YrtlFOa.png)

---

View the indices.
![](https://i.imgur.com/HCgTPzS.png)
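
---

The grok expression in the customlog filter of logstash.conf can be approximated with an ordinary regular expression. A rough Python sketch (the sample line and logger name are hypothetical, and the real grok `TIMESTAMP_ISO8601` and `LOGLEVEL` patterns accept more variants than shown here):

```python
import re

# Approximates the grok pattern:
#   %{TIMESTAMP_ISO8601:[@metadata][timestamp]} %{NUMBER:threadid}
#   %{LOGLEVEL:loglevel} %{NOTSPACE:logger} %{GREEDYDATA:message}
LOG_LINE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) "
    r"(?P<threadid>\d+) "
    r"(?P<loglevel>TRACE|DEBUG|INFO|WARN|ERROR|FATAL) "
    r"(?P<logger>\S+) "
    r"(?P<message>.*)"
)

def parse_log_line(line):
    """Return the captured fields as a dict, or None when the line does not match."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None

# Hypothetical log line in the format the filter expects.
sample = "2020-10-29 12:34:56.789 42 ERROR MyApp.Worker queue connection lost"
print(parse_log_line(sample))
```

Lines that fail to match would be tagged `_grokparsefailure` by Logstash; here the sketch simply returns None.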
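
---

The `index` option in the Logstash elasticsearch output builds one index per environment, service, and day, which is why the Kibana index pattern ends in `-*`. A minimal sketch of that naming scheme (the field values are the ones used in this walkthrough):

```python
from datetime import date

def daily_index(env, service, day):
    """Mirror the Logstash index template %{[fields][env]}_%{[fields][service]}-%{+YYYY.MM.dd}."""
    return f"{env}_{service}-{day:%Y.%m.%d}"

# One index is created per day; the pattern testev_customlog-* matches the whole series.
print(daily_index("testev", "customlog", date(2020, 10, 29)))
```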