# Redis
{%preview https://www.hellointerview.com/learn/system-design/deep-dives/redis %}
## ⚡ Why is in-memory storage faster than disk storage?
| Aspect | Memory (RAM) | Disk (SSD/HDD) |
|------|----------------|------------------|
| **Access speed** | Nanosecond (ns) scale | Microsecond (μs) to millisecond (ms) scale |
| **I/O operations** | Direct access, no mechanical steps | Requires seek, read, and write steps |
| **Latency** | Extremely low | Relatively high |
| **Typical use** | Caching, real-time tasks, transient data | Permanent storage, history, transactional data |
Memory is accessed directly by the CPU, while disk goes through I/O operations; that is the root cause of the speed difference.
---
## 💥 Why does in-memory storage risk data loss?
Memory is **volatile storage**: when the system restarts, crashes, or loses power, the data disappears. This is why Redis (which keeps data in memory by default) raises durability concerns.
### Redis persistence options:
| Mode | Description | Data loss risk |
|------|------|----------------|
| **RDB (snapshot)** | Periodically saves a snapshot of the dataset to disk | May lose everything written since the last snapshot |
| **AOF (Append-Only File)** | Logs every write to disk | Can be configured to fsync every second, but may still lose a few seconds of data |
| **MemoryDB (AWS)** | Redis-compatible, with disk-backed persistence | Higher durability, slightly slower |
---
## 🧠 Why still choose Redis?
Because in some scenarios, **speed matters more than durability**, for example:
- Task queues / scheduling (e.g. Celery)
- Caching query results
- Transient retry state (a small amount of data loss is acceptable)
- Real-time analytics (e.g. dashboards)
If you need **high durability**, consider:
- Redis + AOF (fsync every second)
- MemoryDB (AWS)
- Or replicating critical data into a durable database such as MongoDB / PostgreSQL
---
### ✅ Summary:
| Storage | Pros | Cons | Best for |
|----------|------|------|-----------|
| **Memory (Redis)** | Fast, low latency | Possible data loss | Transient data, caching, task scheduling |
| **Disk (MongoDB/PostgreSQL)** | Highly durable | Slower access | Permanent storage, transaction records |
---
This statement highlights a **core truth about how Redis works in a clustered or sharded environment**: in Redis, **key naming = data distribution strategy**.
Let's unpack it step by step:
---
## **Why is key choice important in Redis clusters?**
Because in **Redis Cluster mode**, **keys are distributed across nodes based on a hash slot algorithm**. That means:
> 🧠 The **key name directly determines which Redis node stores the data.**
So, the way you **design or prefix** your keys will **control data locality, performance, and scalability**.
---
## ⚙️ How does Redis Cluster store keys?
* Redis Cluster uses **CRC16(key) % 16384** to compute a **hash slot** for each key.
* Redis Cluster has **16,384 slots** in total, divided among the nodes in the cluster.
* Each node is responsible for a portion of those slots.
### ✅ Example:
```bash
Key: "user:123" → CRC16("user:123") → slot 7650 → Node A
Key: "session:123" → slot 13422 → Node B
```
Thus:
> 👇 If two keys hash to different slots, they can end up on different nodes.
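The slot computation above can be reproduced in a few lines. This is a sketch of the CRC-16/XMODEM variant Redis Cluster uses (note the slot numbers in the example above are illustrative, not computed; hash tags are ignored here for simplicity):

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM: polynomial 0x1021, initial value 0, MSB-first."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Slot = CRC16(key) mod 16384."""
    return crc16(key.encode()) % 16384

print(hash_slot("user:123"), hash_slot("session:123"))
```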
---
## 🧩 What problems can happen?
### ❌ 1. Keys for the same logical entity get split
Suppose you store user session info:
```plaintext
"user:123:name" → Node A
"user:123:session" → Node B
```
If they land on different slots, and you want to **read/write them together (like in a transaction)**, Redis Cluster **rejects the multi-key operation** with a `CROSSSLOT` error; atomic multi-key commands require all keys to live **in the same hash slot** (and therefore on the same node).
---
## ✅ 🔒 Redis offers a solution: **Hash Tags**
You can **control slot placement** with `{curly brackets}`:
```plaintext
"user:{123}:name" → slot X
"user:{123}:session" → slot X ✅ same slot!
```
Redis uses **only the content inside `{}`** for hashing.
> ✅ So, keys with `{123}` in the name are **guaranteed to land on the same node**, enabling you to do things like `MGET`, `MULTI`, etc.
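The hash-tag rule can be made precise with a small helper: only the first non-empty `{...}` counts, and an empty `{}` is ignored. A minimal sketch (the function name `hash_tag` is ours, not a Redis API):

```python
def hash_tag(key: str) -> str:
    """Return the part of the key Redis Cluster actually hashes.

    Rule: if the key contains '{', and a '}' follows it with at least
    one character in between, only that substring is hashed; otherwise
    the whole key is hashed.
    """
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # '{}' with nothing inside does not count
            return key[start + 1:end]
    return key

print(hash_tag("user:{123}:name"))     # '123'
print(hash_tag("user:{123}:session"))  # '123' -> same slot as above
print(hash_tag("foo{}bar"))            # 'foo{}bar' (empty tag ignored)
```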
---
## 💡 Summary: Why key naming = architecture
| Concern | Effect |
| ----------------------------- | ----------------------------------------------- |
| Poor key naming | Data lands on scattered nodes → inefficient ops |
| Good key grouping (hash tags) | Enables atomic ops, fast MGET/MSET |
| Predictable key patterns | Helps debug, scale, shard, and monitor easily |
| Namespace design (`:`) | Makes logical grouping and inspection easier |
---
## ✅ Best Practices for Key Design in Redis Cluster
1. **Use consistent prefixes**: e.g. `user:{user_id}:profile`
2. **Use hash tags**: `{user_id}` to group related keys together
3. **Avoid high-cardinality randomness in prefixes** (e.g. UUIDs in prefix)
4. **Include semantic meaning**: `session:{user_id}:last_seen` > `xyz123`
---
## 👀 Why this matters in your case (crawler & retry system)
If you’re using Redis to:
* Store retry state per URL
* Track session queue state
* Manage concurrent crawl locks per domain
Then **key design affects whether related retry keys land on the same Redis node**, which determines:
* Whether `MGET`, `INCR`, or `WATCH/MULTI/EXEC` will work
* How efficiently you can scan/retry or clean up jobs
> TL;DR: **The keys are your sharding strategy** in Redis. That’s why their design is not just a naming issue — it's a **scaling and correctness issue**.
---
## With a single node, all slots live on that node, right?
✅ Correct — **if you have a single Redis node**, then:
> 🔐 **All 16,384 hash slots are assigned to that one node.**
This means:
* **No sharding or partitioning happens.**
* All keys — regardless of their names or hash slot values — will be stored on **the same machine**.
* The `{}` hash tags are **irrelevant** in this case, because **all keys land on the same place anyway**.
---
## 🔍 But then… Why care about key naming?
Even on a single node, good key naming is still important for:
| Concern | Why it matters |
| --------------------------------- | ------------------------------------------------------------------ |
| **Readability & maintainability** | Easy to debug and inspect in CLI / UI |
| **Future scaling** | You're future-proofing for Redis Cluster mode |
| **Namespacing** | Avoid collisions across domains (e.g. `user:123` vs `session:123`) |
| **Data modeling** | Helps separate different logical entities |
> 🧠 Think of it as good hygiene now, and **critical design** later when you scale.
---
## 🧠 Pro Tip
If you **expect to scale to Redis Cluster in the future**, **design your keys as if they will be sharded**, for example:
```plaintext
"article:{article_id}:content"
"article:{article_id}:metadata"
"retry:{url_hash}:attempts"
```
That way:
* Your **code is ready** for future cluster migration
* You can later **control co-location** of related keys across nodes
---
## 📌 Typical use of Redis as a cache
> The most common use case for Redis is as a caching system.
### ✅ Point 1: Redis is a key-value cache (a HashMap)
Redis's basic structure is:
```plaintext
key => value
```
Using Redis as a cache means storing data (e.g. query results, HTML pages, API responses) **in Redis as key-value pairs**.
Redis behaves like a "distributed cache HashMap" that offers:
* Fast access
* Automatic distribution (if you use Redis Cluster)
---
### ✅ Point 2: Redis scales out easily
As data grows, you can simply add more Redis nodes to spread the load and storage.
> 💡 With Redis Cluster, data is automatically distributed across nodes by key (the hash slot mechanism described earlier).
---
## ⏱️ TTL (Time To Live)
The other key ingredient of caching is an expiration time:
```plaintext
set("page:/docs/abc", html, ex=3600)
```
### ✅ Redis guarantees:
* Keys past their TTL are deleted automatically
* You never read an expired value (even though deletion is lazy)
This lets you:
* Bound Redis's memory footprint
* Cache only fresh data
* Evict stale entries automatically instead of blowing up memory
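The lazy-deletion guarantee ("you never read an expired value, even though the key may be physically removed later") can be mimicked with a toy in-process cache. This is a sketch of the idea only, not of Redis's actual implementation; the `TTLCache` class is made up for illustration:

```python
import time

class TTLCache:
    """Toy key-value store with Redis-style lazy expiration on read."""

    def __init__(self):
        self._data = {}  # key -> (value, absolute expiry time)

    def set(self, key, value, ex):
        """Store value under key, expiring `ex` seconds from now."""
        self._data[key] = (value, time.monotonic() + ex)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # lazy deletion: evict on access
            return default
        return value

cache = TTLCache()
cache.set("page:/docs/abc", "<html>...</html>", ex=3600)
print(cache.get("page:/docs/abc"))
```

Real Redis also runs a background sweep so expired keys do not linger forever; the read-path check above is what guarantees you never see a stale value.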
---
## 🔥 The Hot Key problem
This section calls out a **weakness common to all caches**:
> Redis is powerful, but it **cannot solve the hot key problem**; Memcached and DynamoDB run into it too.
### ❓ What is a hot key?
A hot key appears when a large number of users request the same key at the same time:
```plaintext
get("article:123456")
```
* Redis keeps serving this one key from memory
* Network traffic, CPU, and memory pressure all pile onto this single key
* The result is a bottleneck (single-point pressure)
---
## 🧠 Summary
| Concept | Explanation |
| ---------- | ---------------------------------------------------- |
| Redis as a cache | Redis is an efficient key-value cache, well suited to caching API responses, pages, and query results. |
| Easy to scale | Redis Cluster makes it easy to add nodes and spread key load without complex design. |
| TTL bounds the cache | Each entry gets an expiration time; Redis never returns expired data and evicts entries by TTL. |
| Hot key problem | When everyone requests the same key, that key becomes a single point of pressure; every caching system struggles with this. |
---
## Redis as a Distributed Lock
### 🔹 Original:
> Redis as a Distributed Lock
> Another common use of Redis in system design settings is as a distributed lock.
✅ **Paraphrase:**
Another common use of Redis is as a **distributed lock**: a mechanism for preventing multiple processes from modifying the same data at the same time.
---
### 🔹 Original:
> Occasionally we have data in our system and we need to maintain consistency during updates (e.g. the very common Design Ticketmaster system design question), or we need to make sure multiple people aren't performing an action at the same time (e.g. Design Uber).
✅ **Paraphrase:**
Sometimes data must stay consistent during updates (e.g. a ticketing system like Ticketmaster), or only one actor may perform an action at a time (e.g. a ride-hailing system like Uber); these scenarios call for a lock.
---
### 🔹 Original:
> Most databases (including Redis) will offer some consistency guarantees. If your core database can provide consistency, don't rely on a distributed lock which may introduce extra complexity and issues.
✅ **Paraphrase:**
Most databases (including Redis) already offer some consistency guarantees. **If your primary database can provide the consistency you need, don't add a distributed lock**; the locking machinery brings extra complexity and new failure modes.
🧠 **Note:**
A Redis distributed lock is not a silver bullet; **use it only when you genuinely need coordination across machines or processes.**
---
### 🔹 Original:
> Your interviewer will likely ask you to think through the edge cases in order to make sure you really understand the concept.
✅ **Paraphrase:**
In an interview, expect many questions about **edge cases**, meant to verify you truly understand when a lock is needed and how to use one correctly.
---
## ✅ A simple Redis lock (INCR + TTL)
### 🔹 Original:
> A very simple distributed lock with a timeout might use the atomic increment (INCR) with a TTL.
> When we want to try to acquire the lock, we run INCR.
> If the response is 1 (i.e. we own the lock), we proceed.
> If the response is > 1 (i.e. someone else has the lock), we wait and retry again later.
> When we're done with the lock, we can DEL the key so that other processes can make use of it.
✅ **Paraphrase:**
A simple Redis distributed lock combines `INCR` with a TTL:
* Run `INCR` to atomically increment a counter:
  * If the reply is `1`, you were first to lock the resource: you now hold the lock.
  * If the reply is > 1, someone else holds it: wait and retry later.
* Also set a TTL, so that if the holder **forgets to release (DEL)**, the lock does not stay stuck forever.
* When the work is done, `DEL` the key so others can acquire the lock.
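The steps above can be sketched as follows. The method names (`incr`, `expire`, `delete`) match redis-py, but `FakeRedis` here is a made-up in-memory stand-in so the example runs without a server (it does not actually enforce TTLs):

```python
class FakeRedis:
    """Made-up in-memory stand-in for a Redis client (no real TTL handling)."""
    def __init__(self):
        self._store = {}

    def incr(self, key):
        self._store[key] = self._store.get(key, 0) + 1
        return self._store[key]

    def expire(self, key, ttl):
        pass  # a real client would arm the key's time-to-live here

    def delete(self, key):
        self._store.pop(key, None)

def acquire_lock(r, key, ttl=10):
    """INCR is atomic, so exactly one caller sees the counter go 0 -> 1."""
    if r.incr(key) == 1:
        r.expire(key, ttl)  # safety net if the holder crashes before DEL
        return True
    return False            # someone else holds the lock: wait and retry

def release_lock(r, key):
    r.delete(key)
```

Note the small gap between `INCR` and `EXPIRE`: if the process dies in between, the key has no TTL and the lock can stick, which is one reason the single-command `SET key value NX EX ttl` form is often preferred in practice.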
---
## 🔐 A more robust approach: Redlock (redundancy + safety)
### 🔹 Original:
> More sophisticated locks in Redis can use the Redlock algorithm.
✅ **Paraphrase:**
If you need a safer, more fault-tolerant Redis lock, use the **Redlock algorithm**.
🔧 Redlock was [proposed by Antirez](https://redis.io/docs/latest/develop/use/patterns/distributed-locks/#the-redlock-algorithm), the creator of Redis, and aims to provide:
* Multi-node fault tolerance (the lock keeps working even if some nodes fail)
* Protection against problems such as clock drift and stuck locks
* A fit for cross-datacenter, high-availability locking scenarios
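At its core, Redlock tries to take the same lock on several independent Redis instances and succeeds only if a majority grant it within the lock's validity window. A simplified sketch of that majority logic; `FakeNode` is a made-up stand-in (real client libraries also handle retries, per-node timeouts, and clock-drift margins):

```python
import time
import uuid

class FakeNode:
    """Made-up single-instance client supporting SET-if-absent and safe delete."""
    def __init__(self):
        self._store = {}

    def set_nx(self, key, value):
        if key in self._store:
            return False
        self._store[key] = value
        return True

    def delete_if_owner(self, key, token):
        # Only delete a lock we set ourselves (compare the random token)
        if self._store.get(key) == token:
            del self._store[key]

def redlock_acquire(nodes, key, ttl_ms=10_000):
    """Return a lock token if a majority of nodes granted the lock in time."""
    token = str(uuid.uuid4())
    start = time.monotonic()
    granted = sum(1 for n in nodes if n.set_nx(key, token))
    elapsed_ms = (time.monotonic() - start) * 1000
    if granted >= len(nodes) // 2 + 1 and elapsed_ms < ttl_ms:
        return token
    for n in nodes:  # failed: undo only our own partial acquisitions
        n.delete_if_owner(key, token)
    return None

def redlock_release(nodes, key, token):
    for n in nodes:
        n.delete_if_owner(key, token)
```

The random token per acquisition is what makes release safe: a client can never delete a lock that a later holder acquired.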
---
## ✅ Summary: Redis distributed lock essentials
| Concept | Explanation |
| ---------- | ------------------------------------------ |
| **When do you need a lock?** | When multiple users modify a resource concurrently, or operations must stay consistent (e.g. ticket or order grabbing) |
| **Simple implementation** | `INCR` + TTL + `DEL` |
| **Advanced implementation** | Redlock, suited to cross-node coordination |
| **Caveat** | If the core DB already guarantees consistency, skip the extra Redis lock; it only adds complexity and bugs |
| **Interview focus** | Clearly explain retry logic, lock-failure handling, TTL, and recovery: the edge cases |
---