# Building a Lightning-Fast JSON-RPC Server in Rust 🦀
## A Journey Through Async Rust, Network Protocols, and System Design
> Imagine you're building a blockchain node, a cryptocurrency wallet backend, or a distributed system. You need different parts of your application, or even different applications, to talk to each other. But here's the catch: some services run on your local machine, others on remote servers, and they all need to communicate reliably, efficiently, and securely.
> This is where **RPC (Remote Procedure Call)** comes in. It lets you call functions on remote machines as if they were local. Think of it like calling your friend on the phone instead of walking to their house: same conversation, just faster and more convenient.
> JSON-RPC 2.0 is a stateless, lightweight remote procedure call protocol. It defines:
> - Request format: method, params, id
> - Response format: result/error, id
> - Batch requests: Multiple requests in one shot
> - Error codes: Standardized error reporting
> Why JSON-RPC over REST?
> - Method names are explicit (no guessing HTTP verbs)
> - Batch requests are native
> - Strict error handling
> - Single endpoint (no route proliferation)
> - Stateless by design
> I built **[DiceRPC](https://github.com/dicethedev/DiceRPC)** - a lightweight, production-ready JSON-RPC 2.0 framework in Rust that supports both [**TCP**](https://en.wikipedia.org/wiki/Transmission_Control_Protocol) and [**HTTP**](https://en.wikipedia.org/wiki/HTTP) transports, complete with authentication, metrics, batch requests, and graceful shutdown. It's fast, type-safe, and actually fun to use.
> **Tech Stack:** Rust, Tokio (async runtime), Axum (HTTP), JSON-RPC 2.0.
---
# Understanding the Problem
## What is JSON-RPC?
Imagine you have a robot friend. You want to ask your robot to do things for you:
- "Hey robot, what time is it?"
- "Hey robot, add 2 + 2"
- "Hey robot, save this picture"
But you can't just talk to the robot, you need to send **messages** in a special format that the robot understands.
[**JSON-RPC**](https://www.jsonrpc.org/specification) is like a rulebook for writing these messages. It tells you:
```json
// You asking the robot
{
  "jsonrpc": "2.0",
  "method": "add",
  "params": {"a": 2, "b": 2},
  "id": 1
}

// Robot's answer
{
  "jsonrpc": "2.0",
  "result": 4,
  "id": 1
}
```

Simple, right? That's JSON-RPC!
Most existing RPC frameworks are:
- **Too heavy:** Enterprise solutions with features you don't need
- **Too rigid:** Lock you into specific transport layers or architectures
- **Not Rust-native:** Built for other languages, awkward to use in Rust
I wanted something **lightweight**, **flexible**, and **Rust-first**. Something I could understand completely, customize freely, and use to learn Rust's powerful features.
So, I built **DiceRPC**, a complete **JSON-RPC 2.0** framework in Rust that supports both **TCP** and **HTTP** transports, authentication, metrics, batch requests, and graceful shutdown. Think of it as a fast, type-safe postman for programs to talk to each other.
## What I Built (The Big Picture)
**DiceRPC** is a complete framework that lets you:
1. **Create a server** that receives these robot messages
2. **Send messages** over the internet (HTTP) or direct connections (TCP)
3. **Handle many requests at once** (like a restaurant serving multiple customers)
4. **Keep things secure** with API keys
5. **Monitor everything** with built-in metrics
## DiceRPC Architecture Overview
```mermaid
graph TB
subgraph Client["Client Layer"]
CLI[CLI Client]
EXT[External Clients]
end
subgraph Transport["Transport Layer"]
TCP[TCP Transport]
HTTP[HTTP Transport]
end
subgraph Middleware["Middleware Layer"]
AUTH[Authentication]
BATCH[Batch Handler]
end
subgraph Core["Core RPC Engine"]
ROUTER[Request Router]
REGISTRY[Handler Registry]
EXEC[Async Executor]
end
subgraph Logic["Business Logic"]
H1[ping]
H2[get_balance]
H3[transfer]
H4[get_transaction]
end
subgraph State["State Management"]
STORE[StateStore]
MEM[In-Memory DB]
end
subgraph Observability["Observability"]
METRICS[Metrics]
LOGS[Logging]
end
CLI --> TCP
EXT --> HTTP
EXT --> TCP
TCP --> BATCH
HTTP --> BATCH
BATCH --> AUTH
AUTH --> ROUTER
ROUTER --> REGISTRY
REGISTRY --> EXEC
EXEC --> H1
EXEC --> H2
EXEC --> H3
EXEC --> H4
H2 --> STORE
H3 --> STORE
H4 --> STORE
STORE --> MEM
ROUTER --> METRICS
AUTH --> LOGS
style TCP fill:#e1f5ff
style HTTP fill:#e1f5ff
style AUTH fill:#fff3e0
style ROUTER fill:#f3e5f5
style STORE fill:#e8f5e9
style METRICS fill:#fce4ec
```
## Layer by Layer
1. **Client Layer**
**CLI Client:** Built-in command-line tool for testing RPC methods
**External Clients:** Any application connecting via TCP/HTTP to make RPC calls
2. **Transport Layer**
**TCP Transport:** Raw TCP socket connections with length-prefixed framing (Port 4000)
**HTTP Transport:** REST-like JSON-RPC over HTTP using Axum (Port 3000)
```
// TCP: Binary framed protocol
[4-byte length][JSON payload]

// HTTP: Standard HTTP POST
POST /rpc
Content-Type: application/json
```
3. **Middleware Layer**
**Authentication:** Validates API keys before processing (supports params or headers)
**Batch Handler:** Processes single requests or batches of requests concurrently
4. **Core RPC Engine**
**Request Router:** Matches incoming method names to registered handlers
**Handler Registry:** Thread-safe HashMap storing all registered RPC methods
**Async Executor:** Runs handlers concurrently using Tokio async runtime
5. **Business Logic**
**ping:** Health check endpoint (returns "pong")
**get_balance:** Fetches account balance from state
**transfer:** Moves funds between accounts atomically
**get_transaction:** Retrieves transaction details by ID
6. **State Management**
**StateStore:** In-memory storage for accounts and transactions
**In-Memory DB:** RwLock-protected HashMaps for concurrent read/write access.
7. **Observability**
**Metrics:** Tracks requests, errors, latency, and method call counts
**Logging:** Request/response logging with tracing for debugging
## Why I Built This
I wanted to understand things like:
- **How do microservices communicate?** (Netflix, Uber - they all use RPC)
- **How does Rust handle concurrency?** (async/await, Arc, locks)
- **How to build production-grade systems?** (metrics, auth, graceful shutdown)
### Why Not Use Existing Frameworks?
I could have used [`jsonrpsee`](https://docs.rs/jsonrpsee) or [`tarpc`](https://github.com/google/tarpc), but:
- **Learning** - Building from scratch = deep understanding
- **Control** - Every design decision is mine
- **Portfolio** - Shows I can build complex systems
## The Core Components
### 1. The RPC Server
Think of it as a vending machine: you punch in a code (method name), insert some money (params), and get your snack (result).
```rust
pub struct RpcServer {
    handlers: RwLock<HashMap<String, Arc<Handler>>>,
}

impl RpcServer {
    pub async fn register<F, Fut>(&self, method: &str, f: F)
    where
        F: Fn(Value) -> Fut + Send + Sync + 'static,
        Fut: Future<Output = Result<Value, RpcErrorObj>> + Send + 'static,
    {
        // Store handler in thread-safe map
        let handler = Arc::new(move |params| Box::pin(f(params)));
        self.handlers.write().await.insert(method.to_string(), handler);
    }

    pub async fn handle_request(&self, req: RpcRequest) -> RpcResponse {
        // Look up handler, call it, return response
    }
}
```
Using `RwLock<HashMap>` allows concurrent reads (many requests) but exclusive writes (handler registration). Perfect for read-heavy workloads.
### 2. Transport Abstraction
**Why both TCP and HTTP?**
| Feature | TCP | HTTP |
|------------------|--------------|-----------------|
| Speed | ⚡⚡⚡⚡⚡ | ⚡⚡⚡ |
| Browser support | NO | YES |
| Binary data | YES | Meh |
| Debugging | Hard | Easy (curl) |
| Load balancing | Complex | Easy |
| Use case | Backend-to-backend | Web clients |
The sweetest part: both transports share the same RPC core!
```rust
// TCP Server
let config = TcpServerConfig::new(addr, server)
    .with_auth(auth)
    .with_metrics(metrics);
run_with_framing(config).await?;

// HTTP Server
HttpTransport::new(server)
    .with_auth(auth)
    .with_metrics(metrics)
    .serve(addr).await?;
```
Same logic, different transport. That's modularity!
### 3. Framing Protocol
**The Problem:** TCP is a byte stream, not a message stream. How do you know where one message ends and another begins?
**Bad Solution:** Use newlines (`\n`) as delimiters
- Breaks with binary data
- Hard to handle large messages
- Vulnerable to attacks
**Good Solution:** Length-prefixed framing
```
[4 bytes: message length][message payload]
```
```rust
pub async fn write_frame<W>(writer: &mut W, data: &[u8]) -> Result<()>
where
    W: AsyncWriteExt + Unpin,
{
    let len = data.len() as u32;
    writer.write_all(&len.to_be_bytes()).await?; // Write length
    writer.write_all(data).await?;               // Write payload
    Ok(())
}
```
Why this rocks:
- Works with any data (binary or text)
- Simple to implement
- Efficient (no scanning for delimiters)
- Prevents denial-of-service (size limits)
### 4. Authentication Middleware
**The Strategy Pattern in Action:**
```rust
pub enum AuthStrategy {
    None,           // YOLO mode
    ApiKeyInParams, // Key in JSON params
    ApiKeyInHeader, // Key in HTTP headers
}

pub struct AuthMiddleware {
    strategy: AuthStrategy,
    valid_keys: Arc<RwLock<HashSet<String>>>,
}
```
**Beautiful part:** The RPC server doesn't know about auth! It's injected via trait:
```rust
pub trait AuthenticatedServer {
    async fn handle_authenticated_request(
        &self,
        req: RpcRequest,
        auth: &AuthMiddleware,
    ) -> RpcResponse;
}
```
### 5. Metrics System
**What gets measured gets managed:**
```rust
pub struct Metrics {
    total_requests: AtomicU64, // Lock-free counters!
    total_success: AtomicU64,
    total_errors: AtomicU64,
    avg_duration_us: Arc<RwLock<u64>>,
    method_counts: Arc<RwLock<HashMap<String, u64>>>,
}
```

**The magic:** `AtomicU64` for the counters = lock-free concurrency.
**Request Tracer Pattern:**
```rust
let tracer = RequestTracer::new(method, metrics);
// ... process request ...
if success {
    tracer.success().await;
} else {
    tracer.error("reason").await;
}
```

The tracer automatically records timing on drop. That's RAII, baby!
## Core Features
1. **Batch Requests**
Send multiple RPCs in one HTTP call:
```bash
curl -X POST http://localhost:3000/rpc -d '[
  {"jsonrpc":"2.0","method":"ping","params":{},"id":1},
  {"jsonrpc":"2.0","method":"get_balance","params":{"address":"0xAlice"},"id":2},
  {"jsonrpc":"2.0","method":"transfer","params":{"from":"0xAlice","to":"0xBob","amount":100},"id":3}
]'
# All processed concurrently, returned as array
```
Here's how it's implemented:
```rust
match batch {
    BatchRequest::Batch(requests) => {
        // Process all concurrently
        let futures: Vec<_> = requests
            .into_iter()
            .map(|req| server.handle_request(req))
            .collect();
        let responses = join_all(futures).await;
        BatchResponse::Batch(responses)
    }
}
```
2. **Built-in Metrics**
Every request is tracked automatically:
```bash
curl http://localhost:3000/metrics
```

**Response:**

```json
{
  "total_requests": 12450,
  "total_success": 12380,
  "total_errors": 70,
  "avg_duration_us": 234,
  "method_counts": {
    "ping": 5000,
    "get_balance": 4200,
    "transfer": 3250
  }
}
```
**How it works:**
```rust
pub struct RequestTracer {
    method: String,
    start: Instant,
    metrics: Arc<Metrics>,
}

impl RequestTracer {
    pub async fn success(self) {
        let duration = self.start.elapsed();
        self.metrics.record_success();
        self.metrics.record_duration(duration).await;
    }
}

// Usage in handlers
let tracer = RequestTracer::new("transfer", metrics);
// ... process request ...
tracer.success().await; // or tracer.error("reason").await
```
3. **Graceful Shutdown**
No more dropped connections when stopping the server:
```rust
// Listen for CTRL+C
let shutdown = Arc::new(ShutdownCoordinator::new());
tokio::spawn(async move {
    shutdown.wait_for_signal().await;
});

// Main loop
loop {
    tokio::select! {
        accept_result = listener.accept() => {
            // Handle connection
        }
        _ = shutdown_rx.recv() => {
            info!("Shutting down gracefully...");
            break; // Exit loop, finish existing connections
        }
    }
}
```
4. **Type-Safe Error Handling**
Errors are properly typed and match JSON-RPC spec:
```rust
pub struct RpcErrorObj {
    pub code: i64,
    pub message: String,
    pub data: Option<Value>,
}

// Standard error codes
pub const PARSE_ERROR: i64 = -32700;
pub const INVALID_REQUEST: i64 = -32600;
pub const METHOD_NOT_FOUND: i64 = -32601;
pub const INVALID_PARAMS: i64 = -32602;
```
5. **Pluggable Authentication**
```rust
// Add auth to any transport
let auth = Arc::new(AuthMiddleware::new(
    AuthStrategy::ApiKeyInParams
));
auth.add_key("secret-key-123").await;

HttpTransport::new(server)
    .with_auth(auth)
    .serve("0.0.0.0:3000").await?;
```
# What Rust Taught Me
1. **Ownership Makes You Think Harder (In a Good Way)**
Before Rust:
```python
def process_request(request):
    data = request.data
    # Who owns 'data'? Is it copied? Shared?
    # Will it be freed? When?
    return transform(data)
```
After Rust:
```rust
fn process_request(request: Request) -> Result<Response> {
    let data = request.data; // data moved
    // Compiler: "request is now invalid"
    // Me: "Oh right, I need to think about this"
    Ok(transform(data))
}
```
**The lesson for me:** The borrow checker isn't fighting you; it's teaching you to think about data flow.
2. **Fearless Concurrency is Real**

This code is safe. No data races. Guaranteed by the compiler:
```rust
let server = Arc::new(RpcServer::new());

// Spawn 1000 tasks, all accessing server
for i in 0..1000 {
    let server = server.clone(); // Arc: shared ownership
    tokio::spawn(async move {
        server.handle_request(req).await;
    });
}
```
**Why it works:**
- `Arc` provides shared ownership (thread-safe reference counting)
- `RwLock` allows multiple readers OR one writer
- Compiler ensures no data races at compile time
In other languages, you'd need runtime checks, mutexes, and prayers.
3. **Error Handling as a First-Class Citizen**
```rust
fn transfer(from: &str, to: &str, amount: u64)
    -> Result<Transaction, TransferError>
{
    // Compiler forces you to handle ALL error cases
    if amount == 0 {
        return Err(TransferError::InvalidAmount);
    }
    // ...
}
```
**The lesson for me:** `Result<T, E>` makes errors explicit and forces you to handle them.
4. **Zero-Cost Abstractions Are Not Marketing**
```rust
// This high-level code...
let filtered: Vec<_> = numbers
    .iter()
    .filter(|&&x| x > 10)
    .map(|&x| x * 2)
    .collect();

// ...compiles to the same assembly as:
let mut filtered = Vec::new();
for x in &numbers {
    if *x > 10 {
        filtered.push(x * 2);
    }
}
```
5. **The Type System is Your Friend**
```rust
// This won't compile (good!)
let server: Arc<RpcServer> = Arc::new(RpcServer::new());
let handler = server; // Move
let another = server; // ERROR: server was moved

// This will compile
let server: Arc<RpcServer> = Arc::new(RpcServer::new());
let handler = server.clone(); // Clone the Arc
let another = server.clone(); // Clone again - both valid
```
**The lesson for me:** If it compiles, it's probably correct. Far fewer "works on my machine" bugs.
# Technical Deep Dives
## How Async Handlers Work
The magic behind type-safe async handlers:
```rust
// Users write simple functions
server.register("ping", |params| async move {
    Ok(json!("pong"))
}).await;

// But we store them as trait objects
pub type HandlerFuture = Pin<Box<dyn Future<Output = Result<Value>> + Send>>;
pub type Handler = dyn Fn(Value) -> HandlerFuture + Send + Sync;

// The registration wraps user functions
pub async fn register<F, Fut>(&self, method: &str, f: F)
where
    F: Fn(Value) -> Fut + Send + Sync + 'static,
    Fut: Future<Output = Result<Value>> + Send + 'static,
{
    let handler: Arc<Handler> = Arc::new(move |params| {
        Box::pin(f(params)) // Wrap future in Pin<Box<>>
    });
    self.handlers.write().await.insert(method.to_string(), handler);
}
```
**Why this matters:**
- Users write simple async functions
- Framework handles all the complexity
- Type-safe at compile time
- No runtime overhead
## The Framing Protocol
Length-prefixed framing explained:
```rust
pub async fn write_frame<W>(writer: &mut W, data: &[u8]) -> Result<()>
where
    W: AsyncWriteExt + Unpin,
{
    // Write 4-byte length prefix (big-endian)
    let len = data.len() as u32;
    writer.write_all(&len.to_be_bytes()).await?;
    // Write payload
    writer.write_all(data).await?;
    Ok(())
}

pub async fn read_frame<R>(reader: &mut R) -> Result<Vec<u8>>
where
    R: AsyncReadExt + Unpin,
{
    // Read 4-byte length
    let mut len_bytes = [0u8; 4];
    reader.read_exact(&mut len_bytes).await?;
    let len = u32::from_be_bytes(len_bytes) as usize;
    // Read payload
    let mut payload = vec![0u8; len];
    reader.read_exact(&mut payload).await?;
    Ok(payload)
}
```
### Advantages over newline-delimited:
1. **Binary safe:** Can send any data
2. **No escaping:** JSON strings with newlines work fine
3. **Efficient:** No scanning, exact reads
4. **Predictable:** Always know how much to read
## The Batch Request Parser
How we handle both single and batch requests seamlessly:
```rust
#[derive(Debug, Deserialize)]
#[serde(untagged)] // Try variants in declaration order
pub enum BatchRequest {
    Single(RpcRequest),
    Batch(Vec<RpcRequest>),
}

// Serde automatically tries each variant in order:
// 1. Parse as object → Single
// 2. Parse as array → Batch

// Usage:
let batch: BatchRequest = serde_json::from_str(json_str)?;
match batch {
    BatchRequest::Single(req) => {
        // Handle one
    }
    BatchRequest::Batch(reqs) => {
        // Handle many
    }
}
```
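The intuition behind the untagged enum: a JSON batch starts with `[`, a single request with `{`. Serde actually attempts a full deserialization of each variant in order rather than peeking at one byte, but a hand-rolled, std-only classifier captures the dispatch idea:

```rust
// What #[serde(untagged)] achieves for BatchRequest, as first-byte intuition
#[derive(Debug, PartialEq)]
enum RequestKind {
    Single,
    Batch,
    Invalid,
}

fn classify(json: &str) -> RequestKind {
    // An object opens with '{', an array with '['; anything else
    // can't be a valid JSON-RPC request or batch
    match json.trim_start().chars().next() {
        Some('{') => RequestKind::Single,
        Some('[') => RequestKind::Batch,
        _ => RequestKind::Invalid,
    }
}

fn main() {
    assert_eq!(
        classify(r#"{"jsonrpc":"2.0","method":"ping","id":1}"#),
        RequestKind::Single
    );
    assert_eq!(
        classify(r#"[{"jsonrpc":"2.0","method":"ping","id":1}]"#),
        RequestKind::Batch
    );
    assert_eq!(classify("not json"), RequestKind::Invalid);
    println!("all classified");
}
```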
## Challenges & Solutions
### Challenge 1: Generic Handler Registration
**Problem:** How to accept any async function as a handler?
```rust
// Want to support all of these:
server.register("ping", |_| async { Ok(json!("pong")) }).await;
server.register("echo", |p| async move { Ok(p) }).await;
server.register("complex", |p| some_async_fn(p)).await;
```
**Solution:** Generic trait bounds + type erasure
```rust
pub async fn register<F, Fut>(&self, method: &str, f: F)
where
    F: Fn(Value) -> Fut + Send + Sync + 'static,
    Fut: Future<Output = Result<Value>> + Send + 'static,
{
    // Type-erase to Handler trait object
    let handler: Arc<Handler> = Arc::new(move |params| {
        Box::pin(f(params))
    });
    self.handlers.write().await.insert(method.to_string(), handler);
}
```
### Challenge 2: Metrics Without Performance Hit
**Problem:** Don't want metrics to slow down every request
**Solution:** Lock-free atomic operations + async recording
```rust
pub struct Metrics {
    total_requests: AtomicU64, // No locks!
    total_success: AtomicU64,
    // ... but these need locks (less frequent)
    method_counts: Arc<RwLock<HashMap<String, u64>>>,
}

impl Metrics {
    pub fn record_request(&self) {
        // Fast: no locks, just atomic increment
        self.total_requests.fetch_add(1, Ordering::Relaxed);
    }

    pub async fn record_method(&self, method: &str) {
        // Slower but infrequent: lock and update map
        let mut counts = self.method_counts.write().await;
        *counts.entry(method.to_string()).or_insert(0) += 1;
    }
}
```
### Challenge 3: Graceful Shutdown with Active Connections
**Problem:** Don't drop connections mid-request when shutting down
**Solution:** Broadcast channel + tokio::select!
```rust
pub struct ShutdownCoordinator {
    tx: broadcast::Sender<()>,
}

impl ShutdownCoordinator {
    pub async fn wait_for_signal(&self) {
        signal::ctrl_c().await.unwrap();
        let _ = self.tx.send(()); // Notify all listeners
    }
}

// In server loop
let mut shutdown_rx = shutdown.subscribe();
loop {
    tokio::select! {
        Ok((socket, _)) = listener.accept() => {
            // Handle new connection
        }
        _ = shutdown_rx.recv() => {
            info!("Received shutdown signal");
            break; // Exit loop, active connections finish
        }
    }
}
```
### Challenge 4: Feature-Gated HTTP vs TCP
**Problem:** Not everyone needs both transports
**Solution:** Cargo features + conditional compilation
```toml
[features]
default = ["tcp"]
tcp = []
http = ["dep:axum", "dep:tower", "dep:tower-http"]
full = ["tcp", "http"]
```
```rust
#[cfg(feature = "http")]
pub mod http_transport;
#[cfg(feature = "tcp")]
pub mod tcp;
```
**Result:** Users only compile what they use!
## What I'd Do Differently
### Things That Worked Well
1. **Starting with TCP first** - Simpler, taught fundamentals
2. **Writing tests early** - Caught bugs before they cascaded
3. **Using Axum for HTTP** - Amazing ergonomics
4. **Length-prefixed framing** - More robust than newlines
5. **Metrics from day 1** - Invaluable for debugging
### Things I'd Change
1. **Use tower middleware earlier** - Middleware layering is cleaner
2. **Add structured logging** - Plain logs get messy fast
3. **Write more benchmarks** - Performance should be measured, not guessed
4. **Add OpenAPI/Swagger** - HTTP API docs are important
5. **Connection pooling** - For the TCP client
# Getting Started with DiceRPC
### Installation
```bash
# Clone the repository
git clone https://github.com/dicethedev/DiceRPC
cd DiceRPC
# Check Rust version (requires 1.70+)
rustc --version
# Build the project
cargo build --release
# Run tests
cargo test
# Build with all features
cargo build --release --features full
```
### Project Structure
```
DiceRpc/
├── src/
│ ├── main.rs # CLI entry point
│ ├── lib.rs # Library exports
│ ├── rpc.rs # Core RPC server
│ ├── state.rs # State management
│ ├── client/ # TCP client
│ ├── middleware/ # Auth & middleware
│ ├── server/ # Server implementations
│ ├── transport/ # TCP & HTTP transports
│ └── util/ # Batch processing, etc.
├── examples/ # Example programs
├── tests/ # Integration tests
└── Cargo.toml
```
## Running DiceRPC
### Mode 1: Basic TCP Server (No Auth, No Metrics)
#### Perfect for: Learning, testing, simple use cases
```bash
cargo run -- server --addr 127.0.0.1:4000
```
**Output:**
```
DiceRPC server listening on 127.0.0.1:4000
```
**Test it:**
```bash
# In another terminal
cargo run -- client --method ping
# Or with parameters
cargo run -- client --method get_balance --params '{"address":"0x123"}'
```
### Mode 2: Advanced TCP Server (Framed + Auth + Metrics)
#### Production, high-throughput scenarios
```bash
cargo run --features tcp -- tcp-server --addr 127.0.0.1:4000 --auth
```
**Output:**
```
╔══════════════════════════════════════╗
║ DiceRPC Server Started ║
╚══════════════════════════════════════╝
Transport: TCP (Framed)
Address: 127.0.0.1:4000
Authentication enabled. Valid keys: dev-key-123, prod-key-456
Features enabled:
Length-prefixed framing
Metrics collection
Persistent state
Authentication
Ready to accept connections
```
**Features**
- Length-prefixed framing (handles large messages)
- API key authentication
- Real-time metrics
- Graceful shutdown (Ctrl+C)
- Batch request support
### Mode 3: HTTP Server (REST API Style)
#### Web clients, browser integration, microservices
```bash
cargo run --features http -- http-server --addr 127.0.0.1:3000 --auth
```
**Output:**
```
╔══════════════════════════════════════╗
║ DiceRPC Server Started ║
╚══════════════════════════════════════╝
Transport: HTTP
Address: 127.0.0.1:3000
Authentication enabled. Valid keys: dev-key-123, prod-key-456
Features enabled:
HTTP/REST transport
Metrics collection
Persistent state
Batch request support
Authentication
Endpoints:
POST http://127.0.0.1:3000/
POST http://127.0.0.1:3000/rpc
GET http://127.0.0.1:3000/metrics
GET http://127.0.0.1:3000/health
```
## Practical Examples
### Example 1: Simple Ping-Pong
**Start server:**
```bash
cargo run -- server
```
**Client call:**
```bash
cargo run -- client --method ping
```
**Response:**
```json
{
  "jsonrpc": "2.0",
  "result": "pong",
  "id": 1
}
```
**With curl (HTTP mode):**
```bash
curl -X POST http://127.0.0.1:3000/rpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "ping",
    "params": {},
    "id": 1
  }'
```
### Example 2: Account Balance System
#### Scenario: Building a simple banking system
**Start HTTP server with state:**
```bash
cargo run --features http --example http_with_state
```
The server automatically initializes demo accounts:
```bash
0xAlice: 10000
0xBob: 5000
0xCharlie: 7500
```
**Check Alice's balance:**
```bash
curl -X POST http://127.0.0.1:3000/rpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "get_balance",
    "params": {"address": "0xAlice"},
    "id": 1
  }' | jq
```
**Response:**
```json
{
  "jsonrpc": "2.0",
  "result": {
    "address": "0xAlice",
    "balance": "10000"
  },
  "id": 1
}
```
**Transfer money:**
```bash
curl -X POST http://127.0.0.1:3000/rpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "transfer",
    "params": {
      "from": "0xAlice",
      "to": "0xBob",
      "amount": 1000
    },
    "id": 2
  }' | jq
```
**Response:**
```json
{
  "jsonrpc": "2.0",
  "result": {
    "txid": "b6e1a47b-9cf1-42f1-b087-30d44c48e4f3",
    "from": "0xAlice",
    "to": "0xBob",
    "amount": 1000,
    "status": "pending"
  },
  "id": 2
}
```
**Verify the transfer:**
```bash
# Check Alice's new balance
curl -X POST http://127.0.0.1:3000/rpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "get_balance",
    "params": {"address": "0xAlice"},
    "id": 3
  }' | jq
# Result: balance is now 9000

# Check Bob's new balance
curl -X POST http://127.0.0.1:3000/rpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "get_balance",
    "params": {"address": "0xBob"},
    "id": 4
  }' | jq
# Result: balance is now 6000
```
## Running the Examples
DiceRPC comes with comprehensive examples:
**Basic Examples**
```bash
# Basic TCP server
cargo run --example tcp_basic
# TCP with state
cargo run --example tcp_with_state --features tcp
# TCP with framing
cargo run --example tcp_framed --features tcp
# TCP with authentication
cargo run --example tcp_with_auth --features tcp
# Full-featured TCP
cargo run --example tcp_full_featured --features tcp
```
**HTTP Examples**
```bash
# Basic HTTP server
cargo run --example http_basic --features http
# HTTP with state
cargo run --example http_with_state --features http
# HTTP with authentication
cargo run --example http_with_auth --features http
# HTTP batch requests demo
cargo run --example http_batch_requests --features http
# Full-featured HTTP
cargo run --example http_full_featured --features http
```
## Configuration Options
**Environment Variables**
```bash
# API keys (comma-separated)
export API_KEYS="key1,key2,key3"
# Bind address for HTTP
export HTTP_ADDR="0.0.0.0:3000"
# Bind address for TCP
export TCP_ADDR="0.0.0.0:4000"
# Logging level
export RUST_LOG="dice_rpc=debug,tower_http=debug"
# Run with env vars
cargo run --features http -- http-server
```
**Command Line Options**
```bash
# Help
cargo run -- --help
# TCP server with custom address
cargo run -- server --addr 0.0.0.0:5000
# HTTP server with auth
cargo run --features http -- http-server --addr 0.0.0.0:8080 --auth
# TCP server with framing and auth
cargo run --features tcp -- tcp-server --addr 0.0.0.0:9000 --auth
# Client with custom address
cargo run -- client --addr 127.0.0.1:5000 --method ping
```
# A Real-World Scenario: Microservice Communication
**Scenario:** You have a microservices architecture where services need to communicate.
**Service 1:** **Payment Service (TCP)**
```rust
// payment_service.rs
use dice_rpc::*;
use std::sync::Arc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let server = Arc::new(RpcServer::new());
    let state = Arc::new(state::StateStore::new());

    // Initialize accounts
    state.set_balance("merchant_123", 0).await;
    state.set_balance("customer_456", 10000).await;

    server::handlers::register_stateful_handlers(&server, state).await;

    // Run TCP server for fast inter-service communication
    let config = transport::tcp::TcpServerConfig::new("0.0.0.0:4000", server);
    transport::tcp::run_with_framing(config).await?;
    Ok(())
}
```
**Service 2:** **Web Gateway (HTTP)**
```rust
// web_gateway.rs
use dice_rpc::*;
use std::sync::Arc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let server = Arc::new(RpcServer::new());

    // Register handlers that proxy to the payment service
    server.register("make_payment", |params| async move {
        // Call payment service via TCP
        // Return result to web client
        Ok(json!({"status": "success"}))
    }).await;

    // Expose HTTP for web clients
    transport::HttpTransport::new(server)
        .serve("0.0.0.0:3000")
        .await?;
    Ok(())
}
```
### Architecture
```mermaid
flowchart LR
Browser["Browser"] -->|"HTTP"| WebGateway["Web Gateway<br/>(HTTP)"]
WebGateway -->|"TCP (fast)"| PaymentService["Payment Service<br/>(TCP)"]
```
**Benefits:**
- HTTP for public-facing API (easy to use)
- TCP for internal services (fast, efficient)
- Same code, different transports
- Built-in auth, metrics, and monitoring
# What's Next? Future Improvements
**Immediate Improvements**
- Persistent Storage - Replace in-memory state with SQLite/PostgreSQL
- TLS Support - Encrypted connections
- WebSocket Transport - For real-time bidirectional communication
- Prometheus Metrics - Standard monitoring format
- Rate Limiting - Middleware to prevent abuse
- CLI tool improvements
**Advanced Features**
- Pluggable Serialization - MessagePack, CBOR
- Streaming Responses - For large datasets
- Service Discovery - Consul/etcd integration
- Load Balancing Support - Multiple server instances
- gRPC bridge
# Final thoughts
Building **DiceRPC** taught me that Rust isn't just about speed, it's about confidence.
Every time I refactor, the compiler catches my mistakes. Every time I add concurrency, the type system ensures safety. Every time I deploy, I sleep soundly knowing there won't be data races or memory leaks.
**Is Rust hard?** Yes.
**Is it worth it?** Absolutely.
Would I do it again? **In a heartbeat**.
### For Rust Beginners
1. **Async Rust is hard but powerful** - Stick with it
2. **Types are your friends** - Lean into the type system
3. **Compiler errors are helping you** - Read them carefully
4. **Arc/Mutex/RwLock are your concurrency toolkit** - Learn when to use each
### For Experienced Developers
1. **Rust isn't just fast, it's correct** - Less debugging at 3 AM
2. **Zero-cost abstractions are real** - Write beautiful code without guilt
3. **The ecosystem is maturing** - Tokio, Axum, Serde are production-ready
4. **Rust forces good design** - The borrow checker makes you think
### For System Designers
1. **Modularity from day one** - Separate transport from logic
2. **Feature flags are essential** - Don't force dependencies
3. **Metrics aren't optional** - Build observability in
4. **Graceful degradation matters** - Handle errors explicitly
# Links & references
- [Rust — install](https://www.rust-lang.org/tools/install)
- [Axum documentation](https://docs.rs/axum)
- [Tokio tutorial](https://tokio.rs/tokio/tutorial)
- [Async Book](https://rust-lang.github.io/async-book/)
- [JSON-RPC 2.0 Specification](https://www.jsonrpc.org/specification)
**Similar Projects:**
[jsonrpc-core](https://github.com/paritytech/jsonrpc) - More features, more complex
[tarpc](https://github.com/google/tarpc) - Different approach
[tonic](https://github.com/hyperium/tonic) - gRPC for Rust
**All Available Methods**
| Method | Parameters | Returns | Description |
|---------------------|------------------------------------|---------------------------------------------------|-----------------------------------|
| `ping` | `None` | `"pong"` | Health check |
| `get_balance` | `address: string` | `{ address, balance }` | Get account balances |
| `set_balance` | `address: string, balance: u64` | `{ address, balance, success }` | Set account balance |
| `transfer` | `from: string, to: string, amount: u64` | `{ txid, from, to, amount, status }` | Transfer funds |
| `get_transaction` | `txid: string` | `{ txid, from, to, amount, timestamp, status }` | Get transaction details |
| `confirm_transaction` | `txid: string` | `{ txid, status, success }` | Confirm pending transaction |
| `get_transactions` | `address: string` | `{ address, transactions[] }` | Get all transactions for address |
| `list_accounts` | `None` | `{ accounts[], count }` | List all accounts |
**Error Codes**
| Code | Meaning |
|---------|----------------------------------|
| -32700 | Parse error |
| -32600 | Invalid request |
| -32601 | Method not found |
| -32602 | Invalid params |
| -32603 | Internal error |
| -32001 | Auth error (invalid key) |
| -32002 | Auth required |
| -32000 | Application error |
## Let's Connect!
Built something cool with DiceRPC? Want to contribute? Just want to chat about Rust?
GitHub: @dicethedev
Twitter: @dicethedev
Email: dicethedev@gmail.com