Aptos Architecture Deep Dive: Consensus, Execution, and Storage
===============================================================
Introduction
------------
Aptos is a high-performance Layer-1 blockchain launched in 2022 by the former Meta Diem team.
Written in Rust, Aptos is designed around **low latency**, **high throughput**, and **strong safety guarantees**, targeting use cases such as real-time trading, payments, and on-chain financial infrastructure.
**Aptos overview**

| Network Performance | Network Scale | On-Chain Activity | User Cost & Accessibility |
| --- | --- | --- | --- |
| Live TPS: ~50<br>Historical peak TPS: 22,000+<br>Testnet peak TPS: 30,000+<br>Finality: sub-second<br>Block time: ~70 ms | Active validators: ~140<br>Geographic coverage: 23+ countries, 54+ cities<br>Network reliability: high global distribution | Total accounts: 110M<br>Daily active users (peak): 15M+<br>Smart contracts deployed: 3,000+<br>Stablecoin flows: weekly inflows peaked at $1.58B | Average gas fee: ~$0.001–$0.01 per transaction<br>User experience: consistently low fees even during high network load |
Rather than optimizing a single component, Aptos takes a **full-stack approach**, rethinking:
- Consensus and ordering
- Execution and parallelism
- Storage and state scalability
- Account and asset models
This post provides a **system-level overview** of the Aptos on-chain architecture, focusing on **consensus, execution, and storage**, with an emphasis on _why_ these design choices matter.
---
High-Level Architecture
-----------------------
At a high level, an Aptos validator is organized as a **modular pipeline**:
`Clients -> P2P Network -> Mempool & Data Availability -> Consensus (Ordering) -> Execution -> Storage`
Each layer is designed to scale independently and to pipeline with the others, enabling sub-second finality and high throughput under load.

---
Consensus Layer: From Batches to Finality
-----------------------------------------
### Transaction Flow Overview
In Aptos, consensus is not just about blocks—it is about **ordering transaction batches with strong availability guarantees**.
The simplified flow is:
`Mempool → Quorum Store (batching + availability) → Consensus (ordering) → Execution`
This separation allows Aptos to decouple **data availability** from **ordering**, which is critical for scalability.

---
### Quorum Store: Data Availability First
**Quorum Store (QS)** replaces traditional leader-centric mempools.
Each validator:
- Locally batches transactions
- Broadcasts batch metadata
- Collects **2f+1 signatures** to form an _availability proof_
Only batches that reach quorum are eligible for ordering.
Key properties:
- No single leader bottleneck
- Data availability is guaranteed _before_ ordering
- Validators vote on **batch IDs**, not raw transactions
Internally, batch references form a **DAG across rounds**, which consensus later linearizes.
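The batching-and-certification flow above can be sketched in a few lines of Python. This is an illustrative toy, not the Aptos implementation: the `Batch` class, `quorum_threshold`, and the use of a plain SHA-256 digest as the batch ID are all assumptions made for the sketch.

```python
# Toy sketch of Quorum Store availability certification.
# Assumes n = 3f + 1 validators; a batch gains an availability proof
# once 2f + 1 validators have signed its metadata.

import hashlib

def quorum_threshold(n: int) -> int:
    """For n = 3f + 1 validators, a quorum is 2f + 1 signatures."""
    f = (n - 1) // 3
    return 2 * f + 1

class Batch:
    def __init__(self, txns: list[bytes]):
        self.txns = txns
        # Validators vote on the batch ID (a digest), not raw transactions.
        self.batch_id = hashlib.sha256(b"".join(txns)).hexdigest()
        self.signatures: set[str] = set()

    def add_signature(self, validator: str) -> None:
        self.signatures.add(validator)

    def has_availability_proof(self, n: int) -> bool:
        # Only batches that reach quorum become eligible for ordering.
        return len(self.signatures) >= quorum_threshold(n)

# Example: with 4 validators (f = 1), 3 signatures certify availability.
batch = Batch([b"tx1", b"tx2"])
for v in ["val-0", "val-1", "val-2"]:
    batch.add_signature(v)
assert batch.has_availability_proof(n=4)
```

Because consensus later votes only on compact batch IDs backed by these proofs, no single leader ever has to disseminate the full transaction data.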
---
### Raptr & VelociRaptr: Prefix Consensus
Aptos’s latest consensus generation is **Raptr**, with **VelociRaptr** as an optimized pipeline version.
Instead of committing blocks one-by-one, Raptr introduces **Prefix Consensus**:
- Validators propose blocks that reference batch IDs
- Validators vote
- From overlapping votes, a **common prefix** is extracted
- This prefix is _immediately finalized_
Important implications:
- Finality does **not** wait for a specific block
- Uncommitted tail blocks can be discarded safely
- Prefixes **never roll back**, thanks to quorum intersection
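The prefix-extraction step can be illustrated with a small sketch. This is a deliberate simplification of Raptr, not its actual algorithm: each vote is modeled as an ordered list of batch IDs, and the finalized prefix is the longest prefix supported by a quorum of votes.

```python
# Toy sketch of prefix extraction in Prefix Consensus.
# A prefix is finalized once a quorum of votes agrees on it, so it can
# never roll back; divergent tails are simply discarded.

def common_prefix(votes: list[list[str]], quorum: int) -> list[str]:
    """Longest prefix agreed on by at least `quorum` votes."""
    prefix: list[str] = []
    depth = 0
    while True:
        # Count support for each candidate batch ID at this depth.
        candidates: dict[str, int] = {}
        for vote in votes:
            if len(vote) > depth and vote[:depth] == prefix:
                candidates[vote[depth]] = candidates.get(vote[depth], 0) + 1
        # Extend the prefix only if some batch ID has quorum support.
        winner = next((b for b, c in candidates.items() if c >= quorum), None)
        if winner is None:
            return prefix
        prefix.append(winner)
        depth += 1

votes = [
    ["b1", "b2", "b3"],
    ["b1", "b2", "b4"],
    ["b1", "b2"],
    ["b1", "b5"],
]
# With a quorum of 3 out of 4 votes, only ["b1", "b2"] is finalized;
# the divergent tails (b3, b4, b5) can be safely discarded.
assert common_prefix(votes, quorum=3) == ["b1", "b2"]
```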
VelociRaptr further pipelines:
- Proposal
- Voting
- Ordering
- Execution
This reduces block proposal time to the **tens of milliseconds range** (proposal time, not full network latency).
---
Execution Layer: Parallelism as a First-Class Citizen
-----------------------------------------------------
The execution layer is where Aptos gains most of its raw throughput.
### Core Components
- **Move VM** – secure, resource-oriented virtual machine
- **Block-STM** – optimistic parallel execution engine
- **Shardines** – sharded execution (early stage)
---
### Block-STM: Optimistic Parallel Execution
Block-STM executes transactions **in parallel**, assuming conflicts are rare:
1. Transactions are executed concurrently across threads
2. Each transaction tracks its read/write set
3. Conflicts are detected after execution
4. Conflicting transactions are selectively re-executed
5. Final commit order is deterministic
Key advantages:
- Near-linear scaling with CPU cores
- No need for developers to declare access lists
- Deterministic results across all validators
This design is especially effective for heterogeneous workloads (DeFi, payments, gaming).
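The five-step loop above can be modeled with a sequential sketch. Real Block-STM runs transactions across threads against a multi-version data structure; this toy keeps only the conflict logic, and names like `make_transfer` are hypothetical.

```python
# Toy model of Block-STM's optimistic execute/validate/re-execute loop.

def make_transfer(src: str, dst: str, amt: int):
    """A transaction that reads two balances and writes both."""
    def tx(state: dict):
        reads = {src, dst}
        writes = {src: state[src] - amt, dst: state[dst] + amt}
        return reads, writes
    return tx

def execute_block(state: dict, txns: list) -> dict:
    # Pass 1: optimistic execution; every transaction sees the
    # pre-block state, as if all ran in parallel.
    results = [tx(state) for tx in txns]
    committed = dict(state)
    dirty: set = set()  # keys written by already-committed transactions
    for i, tx in enumerate(txns):
        reads, writes = results[i]
        if reads & dirty:
            # Conflict: an earlier transaction wrote a key we read,
            # so selectively re-execute against the committed state.
            reads, writes = tx(committed)
        committed.update(writes)   # commit order == block order,
        dirty |= writes.keys()     # so the result is deterministic
    return committed

state = {"alice": 100, "bob": 50, "carol": 0}
block = [
    make_transfer("alice", "bob", 10),  # writes alice, bob
    make_transfer("bob", "carol", 5),   # reads bob -> conflict, re-runs
]
assert execute_block(state, block) == {"alice": 90, "bob": 55, "carol": 5}
```

Note that the second transfer's optimistic result (computed from bob's pre-block balance of 50) would have been wrong; conflict detection catches this and re-executes it, so the final state matches sequential execution without any developer-declared access lists.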
---
### Shardines: Scaling Beyond a Single Node
While Block-STM scales **vertically** within a node, **Shardines** targets **horizontal execution scaling**.
Shardines introduces:
- Multiple executor shards per validator
- Each shard runs its own Block-STM instance
- A coordinator partitions transactions
- A state aggregator merges results
Early test results show:
- ~35–40k TPS per shard
- 1M+ TPS aggregate throughput
- Near-linear scalability with shard count
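The coordinator/shard/aggregator flow can be sketched as follows. This is an illustrative toy under a strong simplifying assumption: transactions are partitioned by sender and no cross-shard conflicts exist, which real Shardines must handle; all names here are hypothetical.

```python
# Toy sketch of Shardines-style partitioned execution.

import hashlib

NUM_SHARDS = 4

def shard_of(sender: str) -> int:
    # Coordinator: deterministically assign a sender to a shard.
    return hashlib.sha256(sender.encode()).digest()[0] % NUM_SHARDS

def execute_sharded(txns: list) -> dict:
    # Coordinator partitions transactions by sender.
    shards: dict[int, list] = {}
    for sender, key, value in txns:
        shards.setdefault(shard_of(sender), []).append((key, value))
    # Each shard executes its slice independently (conceptually, each
    # shard would run its own Block-STM instance).
    shard_writes = {sid: dict(batch) for sid, batch in shards.items()}
    # State aggregator merges the (here disjoint) per-shard write sets.
    merged: dict = {}
    for writes in shard_writes.values():
        merged.update(writes)
    return merged

txns = [("alice", "alice/balance", 90), ("bob", "bob/balance", 55)]
assert execute_sharded(txns) == {"alice/balance": 90, "bob/balance": 55}
```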
Shardines is **validator-internal sharding**, not cross-validator sharding, and is currently in early testing.

---
Account Model & Move VM
-----------------------
Aptos uses **Resource-Oriented Accounts**, powered by the Move language.
### Key Principles
- All on-chain assets are **Resources**
- Resources cannot be copied, dropped, or forged
- Only the defining module can create or destroy them
- The Move VM enforces these rules at the bytecode level
This eliminates entire classes of bugs common in EVM-based token contracts (e.g., balance inconsistencies, reentrancy-driven asset duplication).
Each account logically contains:
- Address
- Resources (typed state)
- Modules (code)
This tight coupling between assets and types also enables more efficient parallel execution.
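As a loose runtime analogy for these rules (Move actually enforces them *statically* in the bytecode verifier, at no runtime cost), consider a sketch in which a `Coin` cannot be copied and only the defining "module" can create one. All names here are hypothetical.

```python
# Loose Python analogy for Move resource semantics (illustrative only).

import copy

class Coin:
    _minting = False  # only mint() below may construct a Coin

    def __init__(self, value: int):
        if not Coin._minting:
            raise PermissionError("only the defining module can create a Coin")
        self.value = value

    def __copy__(self):
        raise TypeError("resources cannot be copied")

    def __deepcopy__(self, memo):
        raise TypeError("resources cannot be copied")

def mint(value: int) -> "Coin":
    """The defining module's only way to create the resource."""
    Coin._minting = True
    try:
        return Coin(value)
    finally:
        Coin._minting = False

wallet = mint(10)
assert wallet.value == 10

try:
    copy.copy(wallet)   # duplication (forging) is rejected
    raise AssertionError("copy should have failed")
except TypeError:
    pass

try:
    Coin(1_000_000)     # direct construction is rejected
except PermissionError:
    pass
```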
---
Storage Layer: Scaling State Safely
-----------------------------------
### Core Components
- **State View** (multi-version)
- **Jellyfish Merkle Tree (JMT)**
- **RocksDB**
- **State Pruner**
The JMT provides:
- Efficient proofs
- Versioned state access
- Deterministic state roots per block
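Versioned state access can be illustrated with a minimal multi-version map. This is a toy model, not the JMT: the real tree stores `(key, version)` nodes in RocksDB and additionally produces Merkle proofs, which this sketch omits.

```python
# Toy model of versioned state reads (JMT-style multi-versioning).

class VersionedState:
    def __init__(self):
        # key -> list of (version, value), appended in version order
        self.entries: dict[str, list[tuple[int, object]]] = {}

    def put(self, key: str, version: int, value: object) -> None:
        self.entries.setdefault(key, []).append((version, value))

    def get(self, key: str, version: int):
        """Read the latest value written at or before `version`."""
        result = None
        for v, val in self.entries.get(key, []):
            if v <= version:
                result = val
            else:
                break
        return result

state = VersionedState()
state.put("alice/balance", 1, 100)
state.put("alice/balance", 5, 90)
assert state.get("alice/balance", 3) == 100  # historical view
assert state.get("alice/balance", 7) == 90   # latest view
assert state.get("alice/balance", 0) is None # before first write
```

Keeping every version addressable is what lets each block commit to a deterministic state root while old versions remain readable until the State Pruner removes them.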
---
### Storage Sharding (AIP-97)
To address long-term state growth, Aptos introduces **Storage Sharding**:
- Logical partitioning into **256 shards**
- Each shard maintains its own subtree
- Root hash is derived from shard roots
This improves:
- Write amplification
- Parallel reads
- Long-term state sustainability
Storage sharding is transparent to execution and consensus.
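The shard-selection and root-derivation scheme can be sketched as follows. The hashing construction here is an assumption made for illustration, not the exact AIP-97 specification.

```python
# Toy sketch of 256-way storage sharding with a derived global root.

import hashlib

NUM_SHARDS = 256

def shard_index(state_key: bytes) -> int:
    # The first byte of the key hash selects one of 256 shards.
    return hashlib.sha256(state_key).digest()[0]

def global_root(shard_roots: list[bytes]) -> bytes:
    # Commit to all 256 shard subtree roots. Execution and consensus
    # only ever see this single root, which is what keeps sharding
    # transparent to the layers above.
    assert len(shard_roots) == NUM_SHARDS
    return hashlib.sha256(b"".join(shard_roots)).digest()

roots = [hashlib.sha256(bytes([i])).digest() for i in range(NUM_SHARDS)]
assert 0 <= shard_index(b"alice/balance") < NUM_SHARDS
assert len(global_root(roots)) == 32  # a single 256-bit commitment
```

Because writes for different shards touch disjoint subtrees, they can proceed in parallel and each subtree stays shallower than one monolithic tree, which is where the write-amplification and parallel-read gains come from.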
---
Toward Aptos 2.0
----------------
Aptos’s longer-term vision is a **real-time on-chain trading engine**, enabled by:
- Ultra-low-latency consensus (Archon / Raptr evolution)
- Namespaces for resource isolation
- Sub-30ms proposal times
- Deep execution and storage pipelining
---
Conclusion
----------
Aptos stands out not because of a single optimization, but because of **tight integration across layers**:
- DAG-based data availability
- Prefix-based consensus finality
- Optimistic parallel execution
- Resource-safe programming model
- Sharded execution and storage roadmaps
Together, these design choices position Aptos as one of the most ambitious attempts at building a **globally scalable, low-latency Layer-1**.