# πŸš€ **Kafka Tutorial – Part 4: Kafka Topics, Partitions & Replication Deep Dive – Mastering Scalability, Durability & Data Management**

**#ApacheKafka #KafkaTopics #Partitioning #Replication #DataRetention #LogCompaction #KafkaAdmin #KafkaTutorial #DataEngineering #StreamingArchitecture**

---

## πŸ”Ή **Table of Contents**

1. [Recap of Part 3](#recap-of-part-3)
2. [What Are Kafka Topics? A Deep Revisit](#what-are-kafka-topics-a-deep-revisit)
3. [Understanding Partitions: The Engine of Parallelism](#understanding-partitions-the-engine-of-parallelism)
4. [How to Choose the Right Number of Partitions](#how-to-choose-the-right-number-of-partitions)
5. [Replication: Ensuring Fault Tolerance & High Availability](#replication-ensuring-fault-tolerance--high-availability)
6. [In-Sync Replicas (ISR) & Leader Election](#in-sync-replicas-isr--leader-election)
7. [Critical Topic-Level Configurations](#critical-topic-level-configurations)
8. [Data Retention Policies: Time vs Size](#data-retention-policies-time-vs-size)
9. [Log Compaction vs Log Deletion](#log-compaction-vs-log-deletion)
10. [Managing Topics with Kafka CLI & Admin API](#managing-topics-with-kafka-cli--admin-api)
11. [Reassigning Partitions & Expanding Topics](#reassigning-partitions--expanding-topics)
12. [Monitoring Topics & Brokers](#monitoring-topics--brokers)
13. [Common Pitfalls & Best Practices](#common-pitfalls--best-practices)
14. [Visualizing Topic Architecture (Diagram)](#visualizing-topic-architecture-diagram)
15. [Summary & What’s Next in Part 5](#summary--whats-next-in-part-5)

---

## πŸ” **1. Recap of Part 3**

In **Part 3**, we mastered **Kafka Consumers**:

- Built robust consumers in **Java and Python**.
- Understood **consumer groups**, **rebalancing**, and **partition assignment**.
- Learned about **offsets**, **auto vs manual commits**, and **at-least-once delivery**.
- Handled **failures**, **duplicates**, and **consumer lag**.
- Explored performance tuning and monitoring.

Now, in **Part 4**, we go **deep into the storage layer** of Kafka: **Topics, Partitions, and Replication**.

You’ll learn how Kafka ensures **durability**, **scalability**, and **fault tolerance** β€” and how to **configure topics like a pro**.

Let’s dive in!

---

## πŸ“ **2. What Are Kafka Topics? A Deep Revisit**

A **topic** is more than just a "category" for messages β€” it's a **distributed, replicated, immutable commit log**.

> πŸ”Ή Think of a topic as a **"table"** in a distributed database, but optimized for **append-only, high-throughput writes**.

Each topic has:

- A **name** (e.g., `user-events`)
- **Partitions** (e.g., 6)
- **Replication factor** (e.g., 3)
- **Configuration** (retention, cleanup, etc.)

### πŸ”§ Topic Anatomy:

```
Topic: user-events
β”œβ”€β”€ Partition 0
β”‚   β”œβ”€β”€ Leader: Broker 0
β”‚   β”œβ”€β”€ Replicas: [0, 1, 2]
β”‚   β”œβ”€β”€ ISR: [0, 1]
β”‚   └── Data: [msg0, msg1, msg2, ...] (stored on disk)
β”œβ”€β”€ Partition 1
β”‚   β”œβ”€β”€ Leader: Broker 1
β”‚   β”œβ”€β”€ Replicas: [1, 2, 0]
β”‚   β”œβ”€β”€ ISR: [1, 2]
β”‚   └── Data: [msg0, msg1, ...]
└── ...
```

> βœ… Topics are **the foundation** of Kafka’s scalability and durability.

---

## πŸ”€ **3. Understanding Partitions: The Engine of Parallelism**

### πŸ”Ή What is a Partition?

A **partition** is an **ordered, immutable sequence of messages** that is continually appended to.

> βœ… Each partition is stored on disk as a **directory of segment files** under the broker’s `log.dirs`.

Key properties (demonstrated in the sketch below):

- Messages in a partition are **ordered**.
- Each message has a unique **offset** (0, 1, 2, ...).
- Partitions are **independently replicated**.
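To make these guarantees concrete, here is a minimal sketch β€” the topic name `user-events`, the key `user42`, and the local broker address are assumptions β€” that sends keyed records and prints the partition and offset Kafka assigned. Records that share a key land in the same partition, and their offsets increase strictly in send order.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Map;

public class PartitionOrderingDemo {
    public static void main(String[] args) throws Exception {
        Map<String, Object> props = Map.of(
            "bootstrap.servers", "localhost:9092",  // assumed local broker
            "key.serializer", StringSerializer.class.getName(),
            "value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 5; i++) {
                // Same key => same partition, so these 5 records are strictly ordered.
                ProducerRecord<String, String> record =
                    new ProducerRecord<>("user-events", "user42", "event-" + i);
                RecordMetadata meta = producer.send(record).get(); // wait for the ack
                System.out.printf("key=user42 partition=%d offset=%d%n",
                    meta.partition(), meta.offset());
            }
        }
    }
}
```

Run it twice and the offsets keep climbing: the partition log is append-only, and offsets are never reused.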
### πŸ”Ή Why Partitions Matter

| Benefit | Description |
|--------|-------------|
| **Scalability** | More partitions = more parallelism (producers & consumers) |
| **Throughput** | Multiple producers/consumers can write/read simultaneously |
| **Load Distribution** | Partitions spread across brokers |

> ⚠️ But **too many partitions** can hurt performance β€” we’ll cover trade-offs.

---

## πŸ”’ **4. How to Choose the Right Number of Partitions**

This is one of the **most important design decisions** in Kafka.

### βœ… **Factors to Consider**

| Factor | Guidance |
|-------|---------|
| **Throughput Needs** | More partitions = higher max throughput |
| **Consumer Parallelism** | At most one consumer per partition within a group |
| **Broker Resources** | Each partition uses file handles, memory, network |
| **Rebalance Time** | More partitions = longer rebalances |
| **ZooKeeper Load** | Cluster metadata grows with partition count |

### πŸ“Š **Rule of Thumb: Start Small, Scale Later**

| Use Case | Recommended Partitions |
|--------|------------------------|
| Dev / Testing | 1–3 |
| Medium App (10K msg/sec) | 6–12 |
| High Throughput (100K+ msg/sec) | 24–100+ |

> βœ… You can **increase partitions later**, but **never decrease**.

---

### πŸ›‘ **Pitfall: Over-Partitioning**

Too many partitions cause:

- High memory usage on brokers.
- Slower leader elections.
- Longer consumer rebalances.
- Increased ZooKeeper load.

> πŸ“Œ **Rule of thumb** (Confluent’s guidance for ZooKeeper-based clusters): keep each broker under roughly **4,000 partitions**, and the cluster under ~200,000 in total.

---

## πŸ” **5. Replication: Ensuring Fault Tolerance & High Availability**

Kafka replicates partitions across brokers to **prevent data loss**.

### πŸ”Ή **Replication Factor**

Number of copies of each partition.

```bash
--replication-factor 3
```

> βœ… **Production: Always use 3** (1 leader + 2 followers).

With RF=3:

- One **leader** (handles reads/writes).
- Two **followers** (replicate data).

If the leader fails β†’ a follower becomes leader.

---

## πŸ”„ **6. In-Sync Replicas (ISR) & Leader Election**

### πŸ”Ή **What is ISR?**

**In-Sync Replicas (ISR)** are replicas that are **up-to-date** with the leader.

- If a follower falls too far behind (e.g., network issues or a slow broker), the leader removes it from the ISR.
- By default, only ISR members can become leader.

### πŸ”Ή **Example:**

```
Partition 0:
  Leader: Broker 0
  Replicas: [0, 1, 2]
  ISR: [0, 1]   ← Broker 2 is lagging
```

If Broker 0 fails β†’ **Broker 1 becomes leader** (only ISR members are eligible).

> βœ… This prevents **data loss** during failover.

---

### βš™οΈ **Critical Replication Settings**

| Config | Purpose |
|-------|--------|
| `replica.lag.time.max.ms` | Max time a follower may lag before leaving the ISR (default: 30s) |
| `min.insync.replicas` | Minimum number of in-sync replicas required for `acks=all` writes to succeed (e.g., 2) |
| `unclean.leader.election.enable` | Allow non-ISR replicas to become leader (❌ **set to false in prod**) |

> βœ… **Best Practice**:
> ```properties
> min.insync.replicas=2
> unclean.leader.election.enable=false
> ```

This ensures **durability** even during broker failures. The sketch below shows the producer side of this contract.
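The broker-side `min.insync.replicas=2` setting only protects writes sent with `acks=all`, so the two must be configured together. Here is a hedged sketch of the producer side; the topic name, key, and broker address are placeholders.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class DurableProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // acks=all: the leader answers only after every ISR member has the record.
        // With min.insync.replicas=2 on the topic, at least leader + 1 follower
        // must append the write, or the send fails instead of silently losing data.
        props.put("acks", "all");
        // Idempotence de-duplicates retried sends broker-side (requires acks=all).
        props.put("enable.idempotence", "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("user-events", "user42", "signup"),
                (meta, err) -> {
                    // If the ISR shrinks below min.insync.replicas, err surfaces as
                    // NotEnoughReplicasException rather than a lost write.
                    if (err != null) System.err.println("Send failed: " + err);
                });
            producer.flush();
        }
    }
}
```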
---

## βš™οΈ **7. Critical Topic-Level Configurations**

You can set configs **per topic** or globally in `server.properties`.

### πŸ”Ή **Essential Topic Configs**

| Config | Default | Meaning |
|-------|--------|--------|
| `retention.ms` | 604800000 (7 days) | How long to keep messages |
| `retention.bytes` | -1 (unlimited) | Max size per partition |
| `cleanup.policy` | delete | delete or compact |
| `segment.bytes` | 1GB | Size of each log segment file |
| `max.message.bytes` | ~1MB | Max message size |
| `compression.type` | producer | none, gzip, snappy, zstd |

---

### βœ… **Example: Create a Topic with Custom Configs**

```bash
# Retain for 3 days, compact per key, use 512MB segments
bin/kafka-topics.sh --create \
  --topic user-profiles \
  --bootstrap-server localhost:9092 \
  --partitions 6 \
  --replication-factor 3 \
  --config retention.ms=259200000 \
  --config cleanup.policy=compact,delete \
  --config segment.bytes=536870912
```

> βœ… This topic keeps the **latest value per key** (compaction) and also deletes anything older than **3 days**. Note that with `cleanup.policy=compact` alone, `retention.ms` would not delete old data β€” the combined `compact,delete` policy is what makes both apply.

---

## πŸ—‘οΈ **8. Data Retention Policies: Time vs Size**

Kafka deletes old data based on:

- **Time**: `retention.ms`
- **Size**: `retention.bytes`

### πŸ”Ή **Retention by Time (Default)**

```properties
# 7 days
retention.ms=604800000
```

Old segments are deleted once all their messages exceed the retention window.

---

### πŸ”Ή **Retention by Size**

```properties
# 1GB per partition
retention.bytes=1073741824
```

Useful when disk space is limited.

> βœ… You can combine both β€” whichever limit is hit first triggers deletion.

---

## 🧼 **9. Log Compaction vs Log Deletion**

### πŸ”Ή **Log Deletion (Default)**

Messages are deleted after `retention.ms`.

> βœ… Good for **event streams** (e.g., clickstream, logs).

```
Partition: click-events
[click1] β†’ [click2] β†’ [click3] β†’ [click4] β†’ ... β†’ DELETED after 7 days
```

---

### πŸ”Ή **Log Compaction**

Kafka keeps (at least) **the latest value** for each key; superseded values are cleaned up in the background.

> βœ… Perfect for **state-based data** (e.g., user profiles, inventory).

#### Example: Topic: `user-profiles` with key = `user_id`

```
Offset | Key   | Value
-------|-------|-----------------------------------
0      | user1 | {name: "Alice", status: "active"}
1      | user2 | {name: "Bob", status: "inactive"}
2      | user1 | {name: "Alice", status: "premium"}  ← Only this remains for user1
```

After compaction:

- `user1`: latest value
- `user2`: latest value

> βœ… Consumers see the **final state** of each key.

Set:

```properties
cleanup.policy=compact
```

---

### πŸ†š **When to Use Which?**

| Use Case | Policy |
|--------|--------|
| Event Logs, Clickstreams | `delete` |
| User Profiles, Configs | `compact` |
| Both: Keep recent + latest | `compact,delete` |

> βœ… Example: `cleanup.policy=compact,delete` β†’ compact **and** delete after 7 days.
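Here is a hedged sketch of creating a compacted topic programmatically; the topic name and the tuning values are illustrative, not prescriptions. The config keys themselves (`min.cleanable.dirty.ratio`, `segment.ms`, `delete.retention.ms`) are standard Kafka topic configs.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.Map;

public class CompactedTopicSetup {
    public static void main(String[] args) throws Exception {
        try (AdminClient admin = AdminClient.create(
                Collections.singletonMap("bootstrap.servers", "localhost:9092"))) {
            NewTopic topic = new NewTopic("user-profiles", 6, (short) 3)
                .configs(Map.of(
                    "cleanup.policy", "compact",
                    // Start cleaning once 50% of the log holds superseded keys.
                    "min.cleanable.dirty.ratio", "0.5",
                    // Roll segments every 10 min: only closed segments get compacted.
                    "segment.ms", "600000",
                    // Keep tombstones (null-value records) 24h so consumers see deletes.
                    "delete.retention.ms", "86400000"));
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```

Deleting a key from a compacted topic means producing a **tombstone**, e.g. `new ProducerRecord<>("user-profiles", "user42", null)`; compaction removes the key entirely once the tombstone itself expires.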
---

## πŸ› οΈ **10. Managing Topics with Kafka CLI & Admin API**

### βœ… **CLI: List, Describe, Delete Topics**

```bash
# List topics
bin/kafka-topics.sh --list --bootstrap-server localhost:9092

# Describe topic
bin/kafka-topics.sh --describe --topic user-events --bootstrap-server localhost:9092
```

Output:

```
Topic: user-events  PartitionCount: 3  ReplicationFactor: 3
  Topic: user-events  Partition: 0  Leader: 0  Replicas: 0,1,2  Isr: 0,1
  Topic: user-events  Partition: 1  Leader: 1  Replicas: 1,2,0  Isr: 1,2
```

```bash
# Delete topic
bin/kafka-topics.sh --delete --topic old-topic --bootstrap-server localhost:9092
```

> ⚠️ `delete.topic.enable` must be `true` in `server.properties` (it has defaulted to `true` since Kafka 1.0).

---

### βœ… **Java Admin API: Programmatic Control**

```java
import org.apache.kafka.clients.admin.*;
import java.util.*;

public class TopicAdmin {
    public static void main(String[] args) throws Exception {
        try (AdminClient admin = AdminClient.create(Collections.singletonMap(
                "bootstrap.servers", "localhost:9092"))) {

            // Create topic: 6 partitions, replication factor 3.
            NewTopic topic = new NewTopic("new-events", 6, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();

            // Describe topic and print its partition layout.
            DescribeTopicsResult result = admin.describeTopics(Arrays.asList("user-events"));
            Map<String, TopicDescription> desc = result.all().get();
            desc.forEach((name, d) -> System.out.println(name + ": " + d));
        }
    }
}
```

> βœ… Use the Admin API for automation and CI/CD.

---

## πŸ” **11. Reassigning Partitions & Expanding Topics**

### βœ… **Increase Partitions**

You can **increase** partitions (but not decrease).

```bash
bin/kafka-topics.sh --alter \
  --topic user-events \
  --partitions 12 \
  --bootstrap-server localhost:9092
```

> ⚠️ **Warning**: Adding partitions changes the key β†’ partition mapping, so new records for an existing key may land in a different partition. Per-key ordering across the resize is not preserved.

---

### βœ… **Reassign Partitions (Move Data)**

When adding new brokers, redistribute partitions.

#### Step 1: Generate a reassignment plan

```bash
bin/kafka-reassign-partitions.sh --generate \
  --topics-to-move-json-file topics.json \
  --broker-list "0,1,2,3" \
  --bootstrap-server localhost:9092
```

`topics.json`:

```json
{
  "version": 1,
  "topics": [
    { "topic": "user-events" }
  ]
}
```

Save the "Proposed partition reassignment configuration" from the output as `reassign.json`.

#### Step 2: Execute the reassignment

```bash
bin/kafka-reassign-partitions.sh --execute \
  --reassignment-json-file reassign.json \
  --bootstrap-server localhost:9092
```

Run the same command with `--verify` instead of `--execute` to check progress.

> βœ… This moves partitions to balance load. (Older Kafka versions used `--zookeeper localhost:2181` instead of `--bootstrap-server`.)

---

## πŸ“Š **12. Monitoring Topics & Brokers**

### βœ… **Key Metrics to Monitor**

| Metric | Tool | Why It Matters |
|-------|------|---------------|
| **Under-replicated partitions** | `kafka-topics.sh --describe` | Indicates replication issues |
| **Active controllers** | JMX / Prometheus | Should be exactly 1 cluster-wide |
| **Request rate** | Kafka Exporter + Grafana | Detect traffic spikes |
| **Disk usage** | `df -h` / Prometheus | Avoid running out of space |
| **Consumer lag** | `kafka-consumer-groups.sh` | Processing delays |

---

### βœ… **Check Under-Replicated Partitions**

```bash
bin/kafka-topics.sh --describe --under-replicated-partitions --bootstrap-server localhost:9092
```

Output:

```
Topic: user-events  Partition: 0  Leader: 0  Replicas: 0,1,2  Isr: 0,1
```

If `Isr` lists fewer brokers than `Replicas`, investigate!

---

## ⚠️ **13. Common Pitfalls & Best Practices**

### ❌ **Pitfall 1: Too Many Small Partitions**

- High overhead per partition.
- Slows down recovery.

βœ… **Fix**: Size partitions to hold roughly **1–10GB** each.

---

### ❌ **Pitfall 2: Using `unclean.leader.election.enable=true`**

- Allows **data loss** during failover.

βœ… **Fix**: Set to `false` in production.

---

### ❌ **Pitfall 3: Ignoring Disk Space**

- Kafka stores data on disk.
- A full disk crashes the broker.

βœ… Monitor disk usage; `log.retention.check.interval.ms=300000` (5 min, the default) controls how often expired segments are pruned.

---

### βœ… **Best Practice 1: Use 3x Replication**

- Survives 1 broker failure.

---

### βœ… **Best Practice 2: Use `min.insync.replicas=2`**

- With `acks=all`, ensures durability.

---

### βœ… **Best Practice 3: Plan Partitions Ahead**

- Hard to change later without disrupting key-based ordering.
- Aim for **future scalability**.

---

### βœ… **Best Practice 4: Use Log Compaction for State Topics**

- Avoids reprocessing all events.
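To put the monitoring advice from section 12 into practice, here is a hedged sketch (class name and broker address are mine) that flags under-replicated partitions from code β€” the same signal as the `--under-replicated-partitions` CLI flag β€” so the check can run inside an alerting job.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;
import java.util.Collections;
import java.util.Map;
import java.util.Set;

public class UnderReplicatedCheck {
    public static void main(String[] args) throws Exception {
        try (AdminClient admin = AdminClient.create(
                Collections.singletonMap("bootstrap.servers", "localhost:9092"))) {
            // Fetch every topic name, then each topic's partition layout.
            Set<String> topics = admin.listTopics().names().get();
            Map<String, TopicDescription> descriptions =
                admin.describeTopics(topics).all().get();

            descriptions.forEach((name, desc) -> {
                for (TopicPartitionInfo p : desc.partitions()) {
                    // Under-replicated = ISR smaller than the assigned replica list.
                    if (p.isr().size() < p.replicas().size()) {
                        System.out.printf("UNDER-REPLICATED %s-%d: replicas=%d, isr=%d%n",
                            name, p.partition(), p.replicas().size(), p.isr().size());
                    }
                }
            });
        }
    }
}
```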
---

## πŸ–ΌοΈ **14. Visualizing Topic Architecture (Diagram)**

```
Topic: user-events (6 partitions, RF=3)

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Partition 0: Leader=B0, Replicas=[B0,B1,B2], ISR=[B0,B1] β”‚
β”‚ Partition 1: Leader=B1, Replicas=[B1,B2,B0], ISR=[B1,B2] β”‚
β”‚ Partition 2: Leader=B2, Replicas=[B2,B0,B1], ISR=[B2,B0] β”‚
β”‚ ...                                                      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
      β–²                β–²                β–²
      β”‚                β”‚                β”‚
 +----+-----+     +----+----+     +-----+-----+
 | Broker 0 |     | Broker 1|     | Broker 2  |
 |  (Disk)  |     |  (Disk) |     |  (Disk)   |
 +----------+     +---------+     +-----------+
```

> πŸ” Data is **replicated**, **partitioned**, and **durable**.

---

## 🏁 **15. Summary & What’s Next in Part 5**

### βœ… **What You’ve Learned in Part 4**

- How **partitions** enable scalability and parallelism.
- How **replication** ensures fault tolerance.
- The role of **ISR** and safe leader election.
- **Topic configurations** for retention, compaction, and performance.
- How to **manage topics** via CLI and Admin API.
- **Log compaction** vs deletion for different use cases.
- Monitoring and best practices for production.

---

### πŸ”œ **What’s Coming in Part 5: Kafka Schema Management & Serialization (Avro, Protobuf, JSON Schema)**

In the next part, we’ll explore:

- πŸ“ **Why Schemas Matter** in event-driven systems.
- πŸ”„ **Schema Evolution**: Backward, Forward, Full Compatibility.
- πŸ› οΈ **Confluent Schema Registry** setup and usage.
- πŸ§ͺ **Avro, Protobuf, JSON Schema** comparison.
- πŸ“¦ **Serializers/Deserializers** with Schema Registry.
- 🧩 **Integration with Kafka Producers & Consumers**.
- πŸ” **Schema Validation & Governance**.

> πŸ“Œ **#KafkaSchema #Avro #Protobuf #SchemaRegistry #EventSourcing #KafkaTutorial**

---

## πŸ™Œ Final Words

You’re now a **Kafka Storage Expert**. You understand how data is **organized, replicated, and retained** β€” the backbone of any reliable streaming system.

> πŸ’¬ **"A well-designed topic isn’t just scalable β€” it’s self-healing, durable, and future-proof."**

In **Part 5**, we tackle **data structure and schema management** β€” because in the real world, data isn’t just bytes… it’s **meaning**.

---

πŸ“Œ **Pro Tip**: Document your topic designs. Include partition count, retention, and compaction policy.

πŸ” **Share this guide** with your team to build **production-grade Kafka architectures**.

---

πŸ“· **Image: Log Compaction vs Deletion**
*(Two columns: one showing deletion over time, one showing compaction keeping the latest value per key)*

```
Log Deletion:              Log Compaction:
[msg1]                     [user1: v1]
[msg2]                     [user2: v1]
[msg3] β†’ DELETED           [user1: v2] β†’ [user1: v2]
[msg4]                     [user2: v2] β†’ [user2: v2]
```

---

βœ… **You're now ready for Part 5!**
We're diving into **Kafka Schema Management** β€” the key to **evolvable, reliable, and interoperable** data systems.

#KafkaTutorial #LearnKafka #KafkaTopics #Partitioning #Replication #DataRetention #LogCompaction #ApacheKafka #DataEngineering #StreamingPlatform