---
tags: Kafka
title: Kafka_technical_manual
---

# [Kafka_QuickStart (include Download)](https://kafka.apache.org/quickstart)

### step1: [download](https://www.apache.org/dyn/closer.cgi?path=/kafka/2.6.0/kafka_2.13-2.6.0.tgz)

```
$ tar -xzf kafka_2.13-2.6.0.tgz
$ cd kafka_2.13-2.6.0
```

### step2:

(1.) Start the "ZooKeeper" server

`$ bin/zookeeper-server-start.sh config/zookeeper.properties`

> [name=WithoutSounded] bin/"zookeeper"-server-start.sh

(2.) Start the "Kafka" server

`$ bin/kafka-server-start.sh config/server.properties`

> [name=WithoutSounded] bin/"kafka"-server-start.sh

### step3: Create a Kafka Topic

`$ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092`

> [name=WithoutSounded] Checking the topic:
> `bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092`

### step4: Write some Events

`$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092`

```
I love NDHU
I love PCDM
I love CSIE
```

Press `Ctrl-C` to stop the producer client at any time.

### step5: Read the Events

`$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092`

### step5.5: Check the Topics

`$ bin/kafka-topics.sh --list --bootstrap-server localhost:9092`

### step6: (SKIP) Import/export your data as streams of events with Kafka Connect

### step7: (SKIP) Process your events with Kafka Streams

### step8: Tear Down

(1.) Stop the producer and consumer clients with `Ctrl-C`
(2.) Stop the Kafka broker with `Ctrl-C`
(3.) Stop ZooKeeper with `Ctrl-C`
(4.) Delete any data of your local Kafka environment (I mean ANY):

`$ rm -rf /tmp/kafka-logs /tmp/zookeeper`

### Others:

```
cluster/kafka/bin/zookeeper-server-start.sh cluster/kafka/config/zookeeper.properties
cluster/kafka/bin/kafka-server-start.sh cluster/kafka/config/server.properties
start-all.sh
start-master.sh
pcdm@master:~/cluster/program/testing$ python3 consumer.py
pcdm@master:~/cluster/program/testing$ python3 produceFile1.py
pcdm@master:~/cluster/program/testing$ python3 produceFile2.py
pcdm@master:~/cluster/program/testing$ python3 produceFile3.py
pcdm@master:~$ cluster/spark/bin/spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1 cluster/program/playingStructureStreaming/kafka/structured_kafka_wordcount.py localhost:9092 subscribe Topic1
```

```
listeners=LISTENER_LAN://0.0.0.0:9092,LISTENER_WAN://0.0.0.0:9094
advertised.listeners=LISTENER_LAN://192.168.1.10:9092,LISTENER_WAN://PUBLIC_IP_ADDRESS_HERE:9094
listener.security.protocol.map=LISTENER_LAN:PLAINTEXT,LISTENER_WAN:PLAINTEXT
inter.broker.listener.name=LISTENER_LAN
```

```
cluster/kafka/bin/kafka-producer-perf-test.sh --topic pcdm --num-records 1000000 --record-size 200 --throughput 1 --producer-props bootstrap.servers=localhost:9092
```
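A quick sanity check of the `kafka-producer-perf-test.sh` flags above: `--throughput 1` throttles the producer to one record per second, so this particular run would take a very long time to finish (`--throughput -1` disables the throttle). The arithmetic is plain Python on the flag values, no broker needed:

```python
# Arithmetic on the kafka-producer-perf-test.sh flags used above.
num_records = 1_000_000   # --num-records
record_size = 200         # --record-size, bytes per record
throughput  = 1           # --throughput, records per second (throttle)

total_bytes = num_records * record_size   # total payload sent
duration_s  = num_records / throughput    # seconds the throttled run takes

print(total_bytes)            # 200000000 bytes = 200 MB of payload
print(duration_s / 86400)     # ~11.57 days at 1 record/s
```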
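The `--from-beginning` flag in step 5 controls where the console consumer starts in the topic's log: from offset 0, or only from records produced after it attaches. A rough illustration in plain Python (the `TopicLog` class and its method names are invented for this sketch, not a real Kafka client API; no broker needed):

```python
# Toy model of a single-partition Kafka log, invented to illustrate
# what --from-beginning changes for a consumer.
class TopicLog:
    def __init__(self):
        self.records = []                  # append-only log

    def produce(self, value):
        self.records.append(value)
        return len(self.records) - 1       # offset of the new record

    def consume(self, from_beginning=False):
        # --from-beginning => start at offset 0;
        # otherwise start at the current end (future records only).
        start = 0 if from_beginning else len(self.records)
        return self.records[start:]

log = TopicLog()
for event in ["I love NDHU", "I love PCDM", "I love CSIE"]:
    log.produce(event)

print(log.consume(from_beginning=True))    # all three events replayed
print(log.consume())                       # []: no new events since attach
```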
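When a topic has more than one partition, the producer in step 4 must decide which partition each record goes to. Kafka's default partitioner hashes the record key (with murmur2) modulo the partition count; the sketch below uses `zlib.crc32` purely as a stand-in to show the idea, so the actual partition numbers will differ from a real broker's:

```python
import zlib

# Simplified stand-in for Kafka's default partitioner: the real
# producer hashes the record key with murmur2; crc32 is used here
# only to demonstrate "hash(key) mod num_partitions".
def pick_partition(key: bytes, num_partitions: int) -> int:
    return zlib.crc32(key) % num_partitions

# Records with the same key always land on the same partition,
# which is what preserves per-key ordering in Kafka.
assert pick_partition(b"user-42", 3) == pick_partition(b"user-42", 3)
```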