# Running Kafka with Docker: Create a Topic, Produce Data, Consume Data

###### tags: `Data Engineering`
###### Last updated: 2025-09-09

## Starting the Kafka service

### 1. Write the YAML file: docker-compose.yml

```yaml
services:
  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - "9092:9092"
      - "9093:9093"
    environment:
      - TZ=Asia/Taipei
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka:9094
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,SASL_PLAINTEXT://:9093,CONTROLLER://:9094
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,SASL_PLAINTEXT://kafka:9093
      - KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
      - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
      - KAFKA_CFG_SECURITY_INTER_BROKER_PROTOCOL=SASL_PLAINTEXT
      - KAFKA_JAAS_ENABLED=true
      - KAFKA_CLIENT_USERS=${KAFKA_CLIENT_USERS:-infolink}
      - KAFKA_CLIENT_PASSWORDS=${KAFKA_CLIENT_PASSWORDS:-29048382}
      - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
      - KAFKA_ENABLE_KRAFT=yes
    # volumes:
    #   # - ~/Documents/KafkaCluster/kraft:/bitnami/kafka:rw
    #   - ~/Documents/KafkaData:/bitnami/kafka:rw
    #   # - ./kafka/scripts/run-init.sh:/docker-entrypoint-initdb.d/run-init.sh:ro
    #   # - ./kafka/scripts/init-topics.sh:/init-topics.sh:ro
    volumes:
      - kafka_data:/bitnami/kafka
      # - ./kafka/scripts/run-init.sh:/docker-entrypoint-initdb.d/run-init.sh:ro
      # - ./kafka/scripts/init-topics.sh:/init-topics.sh:ro
    restart: always
    # links:
    #   - fluentd
    # logging:
    #   driver: ${LOGGING_DRIVER:-json-file}
    #   options:
    #     fluentd-address: localhost:24224
    #     fluentd-async: "true"
    #     tag: "{{.Name}}"
    networks:
      - fep-net

  fep:
    # image: infolink/fep:latest
    build: fep
    privileged: true
    environment:
      - TZ=Asia/Taipei
    volumes:
      - ./fep/fep.properties:/usr/app/fep.properties
      - ./fep/log4j2.xml:/usr/app/log4j2.xml
      - ./fep/log:/usr/app/log
      - ./fep/license:/usr/app/license
      - /sbin/dmidecode:/usr/sbin/dmidecode
      - /dev/mem:/dev/mem
      - /sys:/sys
      - /dev:/dev
    ports:
      - ${SERVER_PORT:-80}:80
    # healthcheck:
    #   test: curl --fail http://localhost:80 || exit 1
    #   interval: 60s
    #   retries: 5
    #   start_period: 20s
    #   timeout: 10s
    restart: always
    networks:
      - fep-net

networks:
  fep-net:
    driver: bridge

volumes:
  # zookeeper_data:
  #   driver: local
  kafka_data:
    driver: local
```

### 2. `cd` into the directory containing the YAML file

### 3. Run `docker compose up -d`

First, find the Kafka container's ID or name:

```bash
docker ps
```

Enter the container (use the ID of the Kafka container; yours will differ):

```bash
docker exec -it 8fbc467b6e79 bash
```

List the available CLI tools:

```bash
ls /opt/bitnami/kafka/bin
```

## Create a topic inside the container and put data into it

### 1. Create a topic

Inside the container shell, run:

```bash
/opt/bitnami/kafka/bin/kafka-topics.sh \
  --create \
  --topic mytopic \
  --bootstrap-server localhost:9092 \
  --partitions 1 \
  --replication-factor 1
```

List all topics (to check that the topic you just created exists):

```bash
/opt/bitnami/kafka/bin/kafka-topics.sh \
  --list \
  --bootstrap-server localhost:9092
```

### 2. Send data to the topic (Producer)

The producer sends messages (writes data) to `mytopic`:

```bash
/opt/bitnami/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic mytopic
```

> You can now type text; each press of Enter sends one record. Press Ctrl+C to exit.

### 3. View the data (Consumer)

```bash
/opt/bitnami/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic mytopic \
  --from-beginning
```

> You should see every line the producer just typed.
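The commands above use the unauthenticated PLAINTEXT listener on port 9092. The Compose file also exposes a SASL_PLAINTEXT listener on port 9093, secured with the default `infolink` / `29048382` credentials from `KAFKA_CLIENT_USERS` / `KAFKA_CLIENT_PASSWORDS`. As a minimal sketch (the `/tmp/client.properties` path is just an illustrative choice, not part of the original setup), you can exercise that listener from inside the container like this:

```bash
# Client settings for the SASL_PLAINTEXT listener on port 9093,
# matching KAFKA_CLIENT_USERS / KAFKA_CLIENT_PASSWORDS in docker-compose.yml.
cat > /tmp/client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="infolink" password="29048382";
EOF

# Consume through the authenticated listener instead of the plaintext one.
/opt/bitnami/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9093 \
  --consumer.config /tmp/client.properties \
  --topic mytopic \
  --from-beginning
```

The same properties file works for the producer (`--producer.config`) and for admin tools such as `kafka-topics.sh` (`--command-config`).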
## Kafka architecture

### Concepts: the main components of Kafka

1. **Producer**
   Writes data into Kafka topics.
   Examples: applications, sensors, log collection systems.
2. **Broker**
   The Kafka server itself.
   The docker-compose file above starts a single broker (`bitnami/kafka`).
3. **Topic**
   The unit used to categorize data.
   Each topic can be split into multiple partitions to increase throughput (see the `--describe` sketch at the end of this section).
4. **Consumer**
   An application that reads data from a topic.
   It can run on its own or join a consumer group, whose members divide up the partitions of the same topic (see the consumer-group sketch at the end of this section).
5. **ZooKeeper / KRaft** (this setup uses KRaft mode)
   Early versions of Kafka needed ZooKeeper to manage the cluster.
   Newer versions can run in KRaft (Kafka Raft) mode, which needs no external ZooKeeper.
6. **Connect / Streams**
   **Kafka Connect**: moves data out of Kafka into a DB, Elasticsearch, or a data lake.
   **Kafka Streams**: performs real-time computation and transformation between topics.

### Taking a look at the architecture: approach

1. Draw a Kafka architecture diagram:

   

2. Confirm the current state with the CLI.

   List topics:

   ```bash
   kafka-topics.sh --list --bootstrap-server localhost:9092
   ```

   List consumer groups:

   ```bash
   kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
   ```
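Item 3 above says a topic can be split into multiple partitions. To see this for `mytopic` (created earlier with a single partition), `kafka-topics.sh --describe` prints the partition count, leader, and replica placement; a sketch run from inside the container:

```bash
# Show partition count, replication factor, leader, and replica placement.
/opt/bitnami/kafka/bin/kafka-topics.sh \
  --describe \
  --topic mytopic \
  --bootstrap-server localhost:9092

# Add partitions later if you need more parallelism
# (the partition count can only be increased, never decreased).
/opt/bitnami/kafka/bin/kafka-topics.sh \
  --alter \
  --topic mytopic \
  --partitions 3 \
  --bootstrap-server localhost:9092
```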
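The `kafka-consumer-groups.sh --list` command above only shows group names. A minimal sketch of consumer groups in action (the group name `mygroup` is an illustrative choice, not from the original setup):

```bash
# Terminal 1 (and optionally Terminal 2): consumers that join the same
# group share the topic's partitions between them.
/opt/bitnami/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic mytopic \
  --group mygroup

# Another terminal: per-partition committed offsets, end offsets, and lag.
/opt/bitnami/kafka/bin/kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group mygroup
```

With a single partition, only one group member receives data at a time; increasing the partition count (as in the previous sketch) lets additional members share the load.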