# 3 - dCBOR Deep Dive: Why "almost" deterministic isn't enough

*This is a collaborative post with Rebecca Turner, who did an amazing job reviewing, compiling, and improving the text. Thank you, Rebecca!*

In the previous posts exploring data serialization for Web3 ([Part 1: JSON vs CBOR](https://hackmd.io/@leonardocustodio/json-vs-cbor) and [Part 2: An Intro to dCBOR](https://hackmd.io/@leonardocustodio/deterministic-data-intro-to-dcbor)), we introduced dCBOR and touched on the high-level need for deterministic data in blockchains. Today in part 3, we want to go deeper and explore some concrete examples that show **why** determinism is non-negotiable for blockchain applications, and why dCBOR's strict rules exist to solve real-world engineering problems, not just to satisfy abstract mathematical purity.

This article covers:

* the core problem with non-determinism
* some real-world examples:
  * breakdowns in distributed consensus
  * failures verifying digital signatures
  * deduplication and storage loss in content-addressable systems
  * cache invalidation
  * breaking Gordian Envelopes
* how the above informs dCBOR's strict rules
* what this means for us as developers

## 1. The core problem: bytes, not objects

As we saw in the first post, logically identical data can produce completely different hashes when serialized differently. In blockchain systems, **we sign bytes, not objects**. If the bytes drift, the signature breaks.

This creates a class of bugs that are a nightmare to debug: they appear intermittent, platform-dependent, version-dependent, and often non-reproducible. The root cause lies not in application logic but in a layer that developers typically assume to be transparent and reliable.

Let's look at five real-world scenarios where this breaks systems.

## 2. Why does this matter? Real-world failure scenarios

### 2.1. Distributed consensus breakdowns: transaction ordering

In a blockchain system (that is, where nodes must independently verify, hash, then reach consensus on the state of transactions), non-determinism introduces transaction ordering disagreements.

**What should happen:**

```
Transaction Data (logical):
  { "from": "0xAlice", "to": "0xBob", "amount": 100 }

All nodes compute hash: 0xabcd1234... (deterministic)
All nodes agree ✓
```

**What happens with non-deterministic serialization:**

```
Node A serializes as: {"from": "0xAlice", "to": "0xBob", "amount": 100}
  Hash: 0xabcd1234...

Node B's CBOR library (different version or language):
  Encodes the same logical data but in a different byte sequence
  Because the implementation doesn't enforce map key ordering
  Hash: 0xdef56789...

Node C's serialization (different implementation):
  Hash: 0x98765432...
```

**The result:** Nodes compute different hashes for the same transaction. They cannot agree on whether a given transaction is in the block. Some nodes accept the block, others reject it. The consensus mechanism breaks. The blockchain forks.

This has indeed [happened in blockchain systems](https://github.com/akircanski/coinbugs) where serialization rules weren't strict enough.
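To make the byte drift concrete, here is a minimal Python sketch. It assumes the third-party `cbor2` package, which is a general CBOR codec rather than a dCBOR implementation, but that is enough to show the core effect: the same logical transaction, handed to the encoder with two different key orders, yields different bytes and therefore different hashes until a canonical key order is enforced.

```python
# A minimal sketch of hash drift caused by map key ordering.
# Assumes the third-party `cbor2` package (pip install cbor2); cbor2 is a
# general CBOR codec, not a dCBOR implementation, but it illustrates the point.
import hashlib

import cbor2

# The same logical transaction, built with two different key orders,
# as two nodes (or two libraries) might produce it.
tx_node_a = {"from": "0xAlice", "to": "0xBob", "amount": 100}
tx_node_b = {"amount": 100, "to": "0xBob", "from": "0xAlice"}

# Default encoding preserves insertion order: different bytes, different hashes.
hash_a = hashlib.sha256(cbor2.dumps(tx_node_a)).hexdigest()
hash_b = hashlib.sha256(cbor2.dumps(tx_node_b)).hexdigest()
print(hash_a == hash_b)  # False: the "same" transaction hashes differently

# With a canonical key ordering enforced, both nodes agree on the bytes.
canon_a = hashlib.sha256(cbor2.dumps(tx_node_a, canonical=True)).hexdigest()
canon_b = hashlib.sha256(cbor2.dumps(tx_node_b, canonical=True)).hexdigest()
print(canon_a == canon_b)  # True: one logical value, one hash
```

Full dCBOR goes well beyond `canonical=True` (numeric reduction, NFC normalization, decoder-side validation), but even this small example shows how two honest nodes can compute two different hashes for one transaction.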
### 2.2. Digital signature verification failures

Imagine:

- Alice signs a transaction using Signing Service A (which uses dCBOR)
- The signature is computed as: `Signature = Sign(Hash(Serialize(transaction_data)))`
- Bob tries to verify the signature using Verification Service B (which uses less strict CBOR)

**What should happen:**

```
Alice:
  data = {sender: Alice, amount: 100}
  canonicalized = dCBOR.encode(data)    // Deterministic
  hash = SHA256(canonicalized)          // e.g., 0xabcd1234
  signature = Sign(private_key, hash)

Bob (verifies with same rules):
  data = {sender: Alice, amount: 100}
  canonicalized = dCBOR.encode(data)    // SAME canonicalization
  hash = SHA256(canonicalized)          // e.g., 0xabcd1234 (SAME!)
  Verify(public_key, signature, hash)   // ✓ SUCCESS
```

**What happens with non-deterministic serialization:**

```
Alice (using deterministic rules):
  serialized = encode(data)       // Gets specific byte sequence A
  hash = SHA256(sequence A)       // 0xabcd1234
  signature = Sign(private_key, hash)

Bob (using different library/version):
  serialized = encode(same data)  // Gets DIFFERENT byte sequence B!
  hash = SHA256(sequence B)       // 0xdef56789 (DIFFERENT!)
  Verify(public_key, signature, 0xdef56789)  // ✗ FAILS!
```

**The result:** A perfectly valid transaction signed by Alice cannot be verified by Bob's system. In a blockchain context, the transaction is rejected as invalid, user funds might be lost, and the entire transaction validity model breaks. This failure is particularly dangerous because it can be exploited, not just triggered accidentally.

### 2.3. Deduplication and storage loss in content-addressable systems

In a blockchain system that uses IPFS or similar content-addressable storage, multiple users store logically identical Envelope data. The system should deduplicate (store only once, reference many times).

**What should happen:**

```
User A stores Envelope: {id: 1, metadata: {created: 2023}}
  hash = SHA256(dCBOR.encode(envelope))   // 0xabcd1234
  stored_at = "ipfs://Qm..." (keyed by 0xabcd1234)

User B stores same Envelope: {id: 1, metadata: {created: 2023}}
  hash = SHA256(dCBOR.encode(envelope))   // 0xabcd1234 (SAME!)
  storage_check: "ipfs://Qm..." already exists
  references_count = 2, storage_used = 1x

Total storage: 1 copy, 2 references ✓
```

**What happens with non-deterministic serialization:**

```
User A stores: {id: 1, metadata: {created: 2023}}
  Serialized by System 1: 0xabcd1234...
  Hash: QmAAA...

User B stores same logical data:
  Serialized by System 2: 0xdef56789...
  Hash: QmBBB... (DIFFERENT!)
  storage_check: doesn't find QmAAA...
  stores as new entry

Total storage: 2 copies of identical data
Storage wasted: 100%
```

**The result:** This failure mode affects not just correctness but economics and resource efficiency.

- Storage space is wasted on redundant copies
- Bandwidth is wasted transmitting duplicates
- Deduplication is completely ineffective
- In large-scale systems, this adds significant cost.
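The deduplication failure is easy to reproduce with a toy content-addressed store. The sketch below is a simplification under stated assumptions: a plain Python dict keyed by the SHA-256 of the encoded bytes stands in for IPFS-style content addressing, and `cbor2` is again used purely for illustration, not as a dCBOR encoder.

```python
# Toy content-addressed store: entries are keyed by the hash of their encoding.
# Illustrative only; a real system would use a true dCBOR encoder and CIDs.
import hashlib

import cbor2

store: dict[str, bytes] = {}

def put(obj, deterministic: bool) -> str:
    """Encode obj, key it by the SHA-256 of the bytes, and keep one copy per key."""
    encoded = cbor2.dumps(obj, canonical=deterministic)
    key = hashlib.sha256(encoded).hexdigest()
    store.setdefault(key, encoded)  # dedup: only the first copy is kept
    return key

# Two users store the same logical envelope, built with different key order.
envelope_a = {"id": 1, "metadata": {"created": 2023}}
envelope_b = {"metadata": {"created": 2023}, "id": 1}

# Non-deterministic encoding: two keys, two copies, dedup fails.
store.clear()
put(envelope_a, deterministic=False)
put(envelope_b, deterministic=False)
print(len(store))  # 2 -> identical data stored twice

# Deterministic encoding: one key, one copy, dedup works.
store.clear()
put(envelope_a, deterministic=True)
put(envelope_b, deterministic=True)
print(len(store))  # 1 -> stored once, referenced twice
```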
### 2.4. Cache invalidation and computation waste: the query cache can't find previous results

A blockchain system maintains a cache of computed query results. The cache key is a hash of the query parameters, so the same query reuses cached results.

**What should happen:**

```
Query 1: get_balance(user: Alice, block: 100)
  Serialize: produces same bytes every time
  Hash: 0xabcd1234
  Cache miss: compute balance
  Store result in cache[0xabcd1234] = 1000 tokens

Query 2: get_balance(user: Alice, block: 100)   // Same query
  Serialize: produces SAME bytes
  Hash: 0xabcd1234
  Cache hit: return 1000 tokens ✓ (no computation needed)

Cache effectiveness: 50% hit rate ✓
```

**What happens with non-deterministic serialization:**

```
Query 1: get_balance(user: Alice, block: 100)
  Serialized as: {...}
  Hash: 0xabcd1234
  Cache miss: compute balance
  Store in cache[0xabcd1234] = 1000

Query 2: get_balance(user: Alice, block: 100)   // Same query
  Serialized differently by different system: {...}
  Hash: 0xdef56789 (DIFFERENT!)
  Cache miss: recompute balance
  Store in cache[0xdef56789] = 1000

Query 3: get_balance(user: Alice, block: 100)   // Still same query
  Hash: 0x98765432 (DIFFERENT again!)
  Cache miss: recompute yet again

Now we have cache entries for 0xabcd, 0xdef5, 0x9876... all duplicate results
Cache effectiveness: 0% hit rate ✗
```

**The result:** This is a subtler failure that causes performance degradation rather than correctness issues.

- Every identical query is computed multiple times; the cache becomes useless
- Computational load increases proportionally; system performance degrades
- For a high-traffic blockchain system, this could mean a 10x-100x increase in CPU usage.

### 2.5. Breaking Gordian Envelopes: elided data cannot be verified

**Setup:**

- Alice creates a Gordian Envelope with sensitive financial data
- She computes cryptographic digests of each component
- She can later elide (redact) components while proving they were in the original
- Bob wants to verify the proof

**What should happen (simplified):**

```
Original Envelope tree:
  Root hash = hash(metadata + contents)
  Contents = hash(transaction A) + hash(transaction B) + hash(transaction C)

Alice elides transaction B, shares:
  - transaction A (unredacted)
  - hash(transaction B) (proof it was there)
  - transaction C (unredacted)
  - Root hash (to prove against)

Bob verifies:
  - Computes hash of transaction A → matches expected
  - Uses provided hash(transaction B)
  - Computes hash of transaction C → matches expected
  - Rebuilds root hash from all three → should match provided root hash
  - ✓ Verification succeeds, proving B was in the original
```

**What happens with non-deterministic serialization:**

```
Alice creates envelope with deterministic encoding
  Computes hashes: hash(A), hash(B), hash(C), root_hash
  She sends to Bob, but Bob's system uses less strict CBOR

When Bob receives transaction A and tries to verify:
  Bob's system re-encodes transaction A → Different byte sequence!
  Bob computes hash → doesn't match expected hash
  Verification fails: either data was tampered with, or proof is invalid
  Bob cannot prove data integrity, even though the data is legitimate
```

**The result:** This failure mode is specific to systems like Gordian Envelope that depend on cryptographic proofs for selective disclosure. For privacy-preserving systems:

- The privacy mechanism breaks down
- Users cannot selectively disclose information with cryptographic proofs
- Legitimate transactions fail verification (false negatives), even though the data is intact
- The entire Gordian Envelope system becomes unusable.
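To make the elision flow a little more tangible, here is a stdlib-only Python sketch. It is deliberately simplified: the `encode` helper and the flat concatenation of digests are stand-ins, not the real Gordian Envelope digest tree (which hashes dCBOR-encoded structures). The narrow point it illustrates is that the proof only holds if everyone hashes exactly the same bytes.

```python
# Simplified elision proof: hash a fixed encoding of each part, combine into a
# root hash, then verify with one part replaced by its digest.
# This is a toy model, NOT the real Gordian Envelope digest tree.
import hashlib

def encode(obj) -> bytes:
    # Stand-in for a deterministic encoder such as dCBOR.
    return repr(obj).encode("utf-8")

def digest(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

tx_a = {"to": "0xBob", "amount": 100}
tx_b = {"to": "0xCarol", "amount": 250}  # the sensitive one Alice will elide
tx_c = {"to": "0xDave", "amount": 75}

# Alice builds the proof material.
leaf_hashes = [digest(encode(tx)) for tx in (tx_a, tx_b, tx_c)]
root_hash = digest(b"".join(leaf_hashes))

# Alice shares tx_a, only the hash of tx_b, tx_c, and the root hash.
shared = [encode(tx_a), leaf_hashes[1], encode(tx_c)]

# Bob rebuilds the root from what he received.
rebuilt = digest(digest(shared[0]) + shared[1] + digest(shared[2]))
print(rebuilt == root_hash)  # True: B was provably part of the original

# If Bob's side re-encodes tx_a even slightly differently (extra space, key
# reordering, float vs int...), the leaf hash changes and the proof breaks.
drifted_encoding = encode(tx_a) + b" "
rebuilt_bad = digest(digest(drifted_encoding) + shared[1] + digest(shared[2]))
print(rebuilt_bad == root_hash)  # False: same logical data, failed proof
```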
## 3. Why dCBOR's Strict Rules Exist

To support these use cases, dCBOR is opinionated. It doesn't just suggest "good practices"; it makes non-conforming encodings invalid. Each rule is there because **each rule prevents a specific category of failures at scale**. Based on the *CBOR, dCBOR, and Gordian Envelope Book*, here are the critical constraints:

| Rule | What It Prevents |
| :---- | :---- |
| **Numeric Reduction** (2.0 → 2) | Cache misses from equivalent numbers with different encodings |
| **Mandatory Map Key Sorting** | Consensus failure, signature verification failure, deduplication breakdown |
| **NFC String Normalization** | Hash mismatches from Unicode-equivalent strings |
| **No Indefinite Lengths** | Arrays/maps encoded differently with/without length prefix |
| **Restricted Simple Values** | Cross-implementation ambiguity in reserved values |
| **Mandatory Decoder Validation** | Systems accepting non-deterministic encodings |

(A short sketch at the end of this post shows the first rule, numeric reduction, in action.)

## 4. Conclusion

Determinism isn't just about data hygiene; it's the bedrock of verifiable computing. Using dCBOR provides:

1. **Correctness guarantees** - Cryptographic operations work as designed
2. **Interoperability** - Systems in different languages/versions agree
3. **Efficiency** - Caching and deduplication work correctly
4. **Debuggability** - Non-determinism can be ruled out as a root cause of failures
5. **Scalability** - These guarantees hold as systems grow

Watch this space for the next post in this series where we'll explore **Gordian Envelopes** and how they leverage dCBOR's deterministic guarantees to build privacy-preserving, verifiable data structures.
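As promised in section 3, here is one last sketch showing the numeric reduction rule in action. It again leans on the third-party `cbor2` package, which is not a dCBOR encoder (that is exactly why it exhibits the problem): a balance query whose block number arrives as `100` in one system and as `100.0` in another produces two different encodings, two different hashes, and therefore two different cache keys.

```python
# Why dCBOR's numeric reduction rule exists: without it, numerically equal
# values can encode (and therefore hash) differently.
# Uses the third-party `cbor2` package for illustration; it is NOT dCBOR,
# so it happily produces both encodings.
import hashlib

import cbor2

query_int = {"block": 100}      # block number as an integer
query_float = {"block": 100.0}  # same block number, arrived as a float

bytes_int = cbor2.dumps(query_int, canonical=True)
bytes_float = cbor2.dumps(query_float, canonical=True)

print(bytes_int == bytes_float)  # False: 100 and 100.0 encode differently
print(hashlib.sha256(bytes_int).hexdigest()[:16])
print(hashlib.sha256(bytes_float).hexdigest()[:16])  # a different cache key!

# dCBOR's numeric reduction collapses 100.0 to 100 before encoding, so both
# callers would produce the same bytes, the same hash, and the same cache hit.
```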