
DA Architecture of Twine L2

Introduction

The aim of this research is to design a DA solution for Twine that caters to its unique needs. We will first develop and implement the DA layer for Twine using both Ethereum L1 and Celestia, and then extend the design to incorporate a volition-style model, enabling contract deployers and account addresses to choose their own preferred DA while maintaining a system that optimizes for security, cost, and efficiency at a granular level. The article will cover the details needed for actual implementation and the code modifications required in Twine's node and sequencer, which APIs to interact with (including the eth client's and the Celestia client's), a high-level construction of the ZK circuits required to prove data availability and L2 execution, and the state management and database changes to be introduced in the node.

High level overview

A very high level overview can be:

  1. Posting of transaction data to the selected DA layer.
  2. Gathering the required proofs and data from the DA layer once the data is posted.
  3. Sending the gathered data and public inputs, along with a proof of correct state transition, to the settlement contract on L1, which verifies data availability as well as correct rollup execution.

Data posting

Data posting to Celestia

If the sequencer selects a transaction that needs to be posted to the Celestia DA layer, the Twine node has to modify the user transaction as follows:
the Twine node converts the user transaction into the blob transaction format (BlobTx). A blob transaction contains two components: a standard Cosmos SDK transaction called MsgPayForBlobs and one or more Blobs of data. This conversion can be handled internally by our node.
Once the transaction has been received, Twine's execution client processes it, generating the necessary state updates and the new state root. With the state updates, new state root, and transaction data in hand, a ZK proof of execution correctness has to be generated; this is required for submission to the L1 settlement contract for STF verification (as we're a ZK rollup). We also need to store this ZK proof on our DA, since Twine full nodes will need it later for re-execution and verification during sync. Once all transactions have been executed and we have the necessary state updates, we can proceed with DA posting and settlement.
Expected transaction format received from the user through Twine's RPC:

// User transaction received by Twine node
#[derive(Debug, Clone)]
pub struct TwineTransaction {
   // Basic transaction fields
   pub nonce: u64,
   pub from: Address, // [u8; 20]
   pub to: Address,
   pub value: U256,
   pub data: Vec<u8>,
   pub gas_limit: u64,
   pub gas_price: U256,
   pub chain_id: u64,

   // L2 specific fields 
   pub l2_gas_limit: u64,
   pub l2_gas_price: U256,
   pub gas_tip_cap: U256,
   pub gas_fee_cap: U256,
   pub access_list: Vec<AccessTuple>,

   // Signature values
   pub v: U256,
   pub r: U256,
   pub s: U256,

   // Hash of transaction
   pub hash: H256, // [u8; 32]
}

#[derive(Debug, Clone)] // Clone is required because TwineTransaction derives Clone
pub struct AccessTuple {
   pub address: Address,
   pub storage_keys: Vec<H256>,
}

// When batched for Celestia posting
#[derive(Debug, Clone)] // Clone is needed: the sequencer clones the batch when submitting to Celestia
pub struct BatchForDA {
   // Block metadata
   pub block_number: u64,
   pub timestamp: u64,

   // Transactions
   pub txs: Vec<L2Transaction>,

   // State Updates from transaction execution 
   pub state_updates: Vec<StateUpdate>,

   // ZK Proofs
   pub zk_proofs: Vec<ZKProof>,

   // Batch metadata
   pub batch_root: H256,
   pub state_root: H256, 
   pub receipts_root: H256,
}

#[derive(Debug, Clone)]
pub struct StateUpdate {
   pub address: Address,
   pub storage_key: H256,
   pub storage_val: H256,
   pub old_balance: U256,
   pub new_balance: U256, 
   pub nonce: u64,
   pub code_hash: H256,
}

#[derive(Debug, Clone)]
pub struct ZKProof {
   // Public inputs
   pub old_state_root: H256,
   pub new_state_root: H256,
   pub transactions_root: H256,

   // The actual proof
   pub proof: Vec<u8>,

   // Verification key
   pub verification_key: Vec<u8>,
}

// Common types
pub type Address = [u8; 20];
pub type H256 = [u8; 32];
pub type U256 = primitive_types::U256;
// The sequencer operates on batched user transactions under this alias
pub type L2Transaction = TwineTransaction;
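
Before posting, the batch has to be wrapped into a blob under Twine's namespace. Below is a minimal sketch of that conversion, assuming BatchForDA and its fields derive serde::Serialize; the actual MsgPayForBlobs/BlobTx (including the share commitment) is built by the Celestia client library from this blob:

use serde::Serialize;

// Sketch of the batch → blob conversion; shows the data flow only.
#[derive(Debug, Clone, Serialize)]
pub struct CelestiaBlob {
    pub namespace: Vec<u8>, // version byte + namespace ID
    pub data: Vec<u8>,      // canonical encoding of the batch
}

pub fn batch_to_blob(batch: &BatchForDA, namespace: &[u8]) -> anyhow::Result<CelestiaBlob> {
    // Any canonical encoding works, as long as full nodes use the same one
    // when re-reading batches from Celestia during sync.
    let data = serde_json::to_vec(batch)?;
    Ok(CelestiaBlob {
        namespace: namespace.to_vec(),
        data,
    })
}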

Once the node has received a TwineTransaction, the sequencer can process it as described earlier, generating state updates and ZK proofs:

impl Sequencer {
   pub async fn produce_block(&mut self, txs: Vec<L2Transaction>) -> Result<Block, Error> {
       // 1. Process transactions & generate proofs
       let (state_updates, new_state_root) = self.process_transactions(&txs)?;
       let zk_proof = self.generate_zk_proof(&txs, &state_updates, new_state_root)?;

       // Compute the batch root before `txs` is moved into the batch struct
       let batch_root = compute_batch_root(&txs);

       // 2. Post data to Celestia
       let data = BatchForDA {
           block_number: self.blocks.len() as u64 + 1,
           timestamp: get_current_timestamp(),
           txs,
           state_updates,
           zk_proofs: vec![zk_proof],
           batch_root,
           state_root: new_state_root,
           receipts_root: H256::default(), // Compute from tx receipts
       };

       let span = self.celestia_client
           .submit_block_data(data.clone())
           .await?;

       // 3. Create and sign header
       let header = Header {
           height: (self.blocks.len() as u64) + 1,
           previous_hash: self.blocks.last()
               .map(|b| b.hash())
               .unwrap_or_default(),
           namespace: self.namespace.clone(),
           span,
           state_root: new_state_root,
           sequencer_signature: None, // Will be set below
       };

       // Sign the header
       let signature = self.key.sign(&header.sign_bytes())?;
       let mut signed_header = header;
       signed_header.sequencer_signature = Some(signature);

       let block = Block {
           data,
           header: signed_header,
       };

       Ok(block)
   }
}


// Supporting types and structs
#[derive(Debug, Clone)]
pub struct Block {
   pub data: BatchForDA,
   pub header: Header,
}

#[derive(Debug, Clone)]
pub struct Header {
   pub height: u64,
   pub previous_hash: H256,
    // celestia specific information
   pub namespace: Vec<u8>,
    // blobstream specific information
   pub span: Span,
   pub state_root: H256,
   pub sequencer_signature: Option<Signature>,
}

#[derive(Debug, Clone)]
pub struct Span {
   pub celestia_height: u64,
   pub data_share_start: u64,
   pub data_share_len: u64,
}

Celestia-specific rules:

Once we have submitted the batch of transactions and their blob data, we need to get the Span from Celestia nodes. The goal of this structure is to locate the data within the Celestia block so that we can prove the data's inclusion via Blobstream during settlement. This Span, along with the namespace, will be included in the Header, and the Header will be posted to Ethereum.
The Header collected here is crucial for DA verification in the L1 settlement contract. This verification can only be performed once commitments from the Celestia validator set (the data root tuple roots) have been relayed to the Blobstream contract on Ethereum. Once the Blobstream contract is updated, the sequencer can start the L1 settlement process.
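
For reference, a minimal sketch of the celestia_client call used by the sequencer above. It assumes a JSON-RPC connection to a celestia-node: blob.Submit exists in the node API, but the parameter encoding and the follow-up blob.Get query used here to recover the share range (including its index/shares_len response fields) are assumptions to verify against the client library in use.

use jsonrpsee::core::client::ClientT;
use jsonrpsee::http_client::HttpClient;
use jsonrpsee::rpc_params;
use serde_json::Value;

pub struct CelestiaClient {
    rpc: HttpClient,    // e.g. HttpClientBuilder::default().build(node_url)?
    namespace: Vec<u8>, // Twine's namespace
}

impl CelestiaClient {
    pub async fn submit_block_data(&self, data: BatchForDA) -> anyhow::Result<Span> {
        let blob = batch_to_blob(&data, &self.namespace)?; // see the earlier sketch
        // blob.Submit returns the Celestia height at which the blob landed.
        let height: u64 = self
            .rpc
            .request("blob.Submit", rpc_params![vec![blob]])
            .await?;
        // Hypothetical follow-up query to locate the blob's shares in the square.
        let located: Value = self
            .rpc
            .request("blob.Get", rpc_params![height, self.namespace.clone()])
            .await?;
        Ok(Span {
            celestia_height: height,
            data_share_start: located["index"].as_u64().unwrap_or_default(),
            data_share_len: located["shares_len"].as_u64().unwrap_or_default(),
        })
    }
}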

L1 settlement:
Twine's settlement on Ethereum will be extended to cover not only state change verification but also DA verification.
Verifications needed on the settlement contract:

  • L2-specific verifications:
    1. Twine's state change verification (no changes here)
  • Celestia verifications:
    1. Verify that the sequence of spans is valid, i.e., that it is part of the Celestia block referenced by its height.
    2. Verify the proof of the rollup data to the data root.

Celestia verifications:

  1. Verification of the validity of the sequence of spans:
    By construction, if the sequence of spans refers to a certain location in the square, that location is the data. This location can be in the reserved namespaces, the parity bytes, etc.; what matters is that it's part of the square. So to prove that the sequence of spans is invalid, i.e., refers to data that is not available on Celestia, it is necessary and sufficient to show that the sequence of spans doesn't belong to the Celestia block, i.e., that the span is out of bounds.

    We can create this proof by generating a binary Merkle proof of any row/column to the Celestia data root. This is done by querying Celestia nodes via the transaction_inclusion_proof endpoint; the Twine node does this ahead of time, before sending the proof to the L1 settlement contract, and some conversion of the proof is required to make it usable by the DAVerifier Solidity library (a sketch of this query follows the contract below). The proof provides the total, i.e., the number of rows/columns in the extended data square, which can be used to calculate the square size. The computeSquareSizeFromRowProof method in the DAVerifier library calculates the square size from a row proof or a share proof.

    Then, we use that information to check whether the share index provided in the header is out of the square size bounds. In other words, we check that startIndex and startIndex + dataLen fall within the range [0, 4*square_size].

    import {DAVerifier} from "@blobstream/lib/verifier/DAVerifier.sol";
    import {BinaryMerkleProof} from "@blobstream/lib/tree/binary/BinaryMerkleProof.sol";

    contract L1Settlement {
        // .....other verifications.....

        struct Span {
            uint256 celestiaHeight;
            uint256 startIndex;
            uint256 dataLen;
        }

        function verifySpanValidity(
            Span calldata span,
            bytes32[] calldata rowData,
            BinaryMerkleProof calldata rowProof,
            bytes32 dataRoot
        ) public view returns (bool) {
            // 1. Verify the row proof is valid against the data root.
            // DAVerifier is a library, so its helpers are called directly;
            // verifyRowInclusion stands in for its row-root verification helper.
            require(
                DAVerifier.verifyRowInclusion(rowData, rowProof, dataRoot),
                "Invalid row proof"
            );

            // 2. Calculate the square size from the row proof
            uint256 squareSize = DAVerifier.computeSquareSizeFromRowProof(
                rowProof.numLeaves // total number of rows/columns in the extended square
            );

            // 3. Calculate the maximum valid index (4 * square_size)
            uint256 maxValidIndex = 4 * squareSize;

            // 4. Verify span bounds
            if (span.startIndex >= maxValidIndex) {
                return false;
            }

            if (span.startIndex + span.dataLen > maxValidIndex) {
                return false;
            }

            return true;
        }

        //....other verifications....
    }
    
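    On the node side, fetching this proof could look like the sketch below. The transaction_inclusion_proof endpoint name comes from the Blobstream documentation; the query parameters and the response layout used here are assumptions to be checked against the Celestia node version in use.

    use serde_json::Value;

    // Sketch: fetch a transaction/row inclusion proof from a celestia-core node.
    pub async fn fetch_tx_inclusion_proof(
        rpc_url: &str,
        height: u64,
        tx_index: u64,
    ) -> anyhow::Result<Value> {
        // Hypothetical query string; parameter names are assumptions.
        let url = format!("{rpc_url}/transaction_inclusion_proof?height={height}&txIndex={tx_index}");
        let proof: Value = reqwest::get(&url).await?.json().await?;
        // The raw proof still has to be converted into the structs expected by
        // the DAVerifier Solidity library before being sent to L1.
        Ok(proof)
    }
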
  2. Verifying the proof of the rollup data to the data root:
    This part needs to verify the following three proofs on the verifier contract:

    • Prove that the data root tuple is committed to by the Blobstream smart contract: to do this, we provide a Merkle proof of the data root tuple to a data root tuple root. This proof can be created using the data_root_inclusion_proof query. The proof, along with public inputs like celestiaBlockHeight and the tuple, has to be passed to the blobstream.verifyAttestation function, which is called by the L1 settlement contract.

    • Prove that the data is part of the data root (data inclusion): we provide two proofs: a namespace Merkle proof of the data to a row root, produced by proving the shares that contain the data to the row root, and a binary Merkle proof of the row root to the data root. Both can be generated using the ProveShares query.
      Once the namespace Merkle proofs (share → row root) and the binary Merkle proof (row root → data root) have been obtained by querying Celestia nodes, we verify them in our L1Settlement contract.

    • Prove that the data is in the sequence of spans, i.e., that it is actually present at the location on Celestia pointed to by the span we already have: to prove that the data is part of the rollup sequence of spans, we take the authenticated share proofs obtained earlier and use the shares' begin/end keys to define the shares' positions in the row. Then we use the row proof to get the row index in the extended Celestia square and derive the index of the share in row-major order. Finally, we compare the computed index against the sequence of spans to be sure the data/shares are part of the rollup data. These computations are carried out in a ZK circuit (sketched after the contract below), and the generated proof of correct location is verified in our L1Settlement contract.

      contract L1Settlement {
          // Blobstream contract for DA verification
          IDAOracle public blobstream;

          // Generated verifier contract from the data-location circuit
          IGroth16Verifier public verifier;

          //.....other verifications....

          struct Span {
              uint256 startIndex;
              uint256 dataLen;
              uint256 celestiaHeight;
          }

          function verifyDataInclusionAndLocation(
              bytes[] memory shareData,
              BinaryMerkleProof memory dataTupleProof,
              DataRootTuple calldata tuple,
              NamespaceMerkleMultiproof[] memory shareProofs, // NMT proofs
              NamespaceNode[] memory rowRoots,                // Row roots
              BinaryMerkleProof[] memory rowProofs,           // Binary proofs
              bytes32 dataRoot,                               // Celestia block's data root
              Span memory claimedSpan
          ) public view returns (bool) {

              // 1. Verify data is available through Blobstream.
              // Note: IDAOracle.verifyAttestation's first argument is the
              // attestation nonce (_tupleRootNonce); passing the Celestia
              // height here is a simplification of that lookup.
              require(
                  blobstream.verifyAttestation(
                      claimedSpan.celestiaHeight,
                      tuple,
                      dataTupleProof
                  ),
                  "Data not available in Celestia"
              );

              // 2. Verify namespace merkle proofs (shares → row roots)
              require(
                  verifier.verifyShareInclusion(
                      shareData,
                      shareProofs,
                      rowRoots
                  ),
                  "Invalid share proof"
              );

              // 3. Verify binary merkle proofs (row roots → data root)
              require(
                  verifier.verifyRowInclusion(
                      rowRoots,
                      rowProofs,
                      dataRoot
                  ),
                  "Invalid row proof"
              );

              // 4. Verify that the claimed data location matches the data's
              //    actual location on Celestia
              require(
                  verifier.verifyDataLocation(
                      shareProofs,
                      rowProofs,
                      claimedSpan
                  ),
                  "Invalid data location"
              );

              return true;
          }

          //.......other verifications
      }
      

      The circuit for generating the above verifier contract can be written without much difficulty; it can be discussed further if needed.
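
      For reference, here is the core computation such a circuit has to constrain, sketched in plain Rust outside any proving framework. ShareLocation, square_width, and the field names are illustrative; in the actual circuit these values come from the authenticated share and row proofs.

      // Plain-Rust sketch of the row-major location check the circuit enforces.
      pub struct ShareLocation {
          pub row_index: u64,   // row index in the extended square (from the row proof)
          pub share_start: u64, // first proven share's index within the row (from NMT proof keys)
          pub share_count: u64, // number of shares proven
      }

      // `square_width` is the number of shares per row of the extended square.
      pub fn location_matches_span(loc: &ShareLocation, square_width: u64, span: &Span) -> bool {
          // Index of the first proven share in row-major order over the square.
          let global_start = loc.row_index * square_width + loc.share_start;
          // Every proven share must fall inside the claimed span.
          global_start >= span.data_share_start
              && global_start + loc.share_count <= span.data_share_start + span.data_share_len
      }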

Note: The header posting transaction and the L1Settlement proof verification are atomic; a header will only be accepted by L1 if its proof verification has succeeded. This transaction is also performed only once the Blobstream contract on L1 has been updated with the latest Celestia commitments. It will be a raw Ethereum transaction to the L1Settlement contract's address, carrying the tuple, proof, header, and the sequencer's header signature as call data.

Full node synchronization

There are a few different mechanisms that could be used to download blocks for full node synchronization; this is required when new L2 nodes join the network by downloading all data and re-executing all transactions themselves. The simplest solution is for full nodes to wait until the blocks and the headers are posted to the respective chains, and then download each as they are posted. It would also be possible to gossip the headers ahead of time and download the rollup blocks from Celestia instead of waiting for the headers to be posted to Ethereum. It's also possible to download the headers and the block data like a normal blockchain via a gossiping network, and only fall back to downloading the data and headers from Celestia and Ethereum if the gossiping network is unavailable or the sequencer is malicious. Alternatively, we can use a snap-sync mechanism to provide syncing points from which new nodes can start synchronization.

Posting data to ethereum

Posting to Ethereum is much simpler: everything is posted to Ethereum as call data in a raw Ethereum transaction that includes the Block and the zk_proof for state-transition verification. The L1Settlement contract only performs state-transition verification and some minor checks during settlement.
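
As a sketch of what this could look like from the node side, using the ethers-rs crate: the submitBatch function name and its ABI are assumptions for illustration, and the real L1Settlement interface will differ.

use std::sync::Arc;
use ethers::prelude::*;

// Hypothetical minimal ABI for the settlement entry point.
abigen!(
    L1Settlement,
    r#"[
        function submitBatch(bytes blockData, bytes zkProof)
    ]"#
);

pub async fn post_block_to_ethereum(
    client: Arc<SignerMiddleware<Provider<Http>, LocalWallet>>,
    settlement_addr: Address,
    block_bytes: Vec<u8>, // serialized Block
    proof_bytes: Vec<u8>, // serialized zk_proof
) -> anyhow::Result<TxHash> {
    let contract = L1Settlement::new(settlement_addr, client);
    // Everything goes in as call data; the contract verifies the ZK proof of
    // the state transition during settlement.
    let pending = contract
        .submit_batch(block_bytes.into(), proof_bytes.into())
        .send()
        .await?;
    Ok(pending.tx_hash())
}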

I have not accounted for blob transactions yet; this blog will have to be modified to include blob transactions as well.

ZK rollups would provide two commitments to their transaction or state delta data: the KZG commitment in the blob and a commitment using whatever proof system the ZK rollup uses internally. They would use a proof-of-equivalence protocol, based on the point evaluation precompile, to prove that the KZG commitment (which the protocol ensures points to available data) and the ZK rollup's own commitment refer to the same data.

Volition

State Management design

  • State Update Verification Process:
    When a transaction occurs, the system performs a multi-step verification process. First, it identifies all addresses touched by the transaction and checks their DA layer preferences. The transaction data is then posted to Ethereum if the sender opted for Ethereum DA, but the state updates are split based on the preferences of affected addresses. The system generates a CompleteStateUpdate structure that contains three state roots: the complete L2 state root, Ethereum DA state root, and Celestia DA state root. The verification process ensures that no address appears in both DA layers' updates and that the combination of split states matches the complete state. This is done by maintaining mappings (ethereumUpdates and celestiaUpdates) that track which addresses were updated in each DA layer. A validationHash is computed by combining all three roots, and this is signed by the sequencer to ensure integrity.
  • Atomic Update Mechanism (see the sketch after the associated data structures below):
    The atomic update mechanism ensures that either all state changes are applied successfully across all three tries (complete state, Ethereum state, and Celestia state) or none of them are. This is implemented using a two-phase-commit-like protocol in the updateTries function. The process begins with beginAtomicUpdate() calls on all three tries, then applies the updates to the complete state trie first. For each address in the update, the corresponding state is copied to either the Ethereum or Celestia trie based on the address's DA preference. The system verifies that the resulting roots match the expected roots from the CompleteStateUpdate. If any step fails, all tries are reverted to their previous state using revertAtomicUpdate(). Only when all updates are successful and verified does the system call commitAtomicUpdate() on all tries.
  • Recovery Process:
    The recovery mechanism is designed to handle scenarios where the L2 state needs to be reconstructed from the DA layers. The recoverStateFromDA function first fetches all state updates from both Ethereum and Celestia. It then reconstructs the complete state by applying these updates in sequence, considering the timestamp and batch index to ensure correct ordering. The reconstruction process validates each state update's proofs and signatures, and ensures that no conflicts exist between the DA layers. If conflicts are detected, the system applies predetermined resolution rules (for example, taking the state with the stronger consensus or more recent timestamp). The reconstructed state must pass verification against both DA layers' data before being accepted as the new current state.
  • Proof Generation System:
    The proof generation system creates Merkle proofs for state updates in each DA layer. For each affected address, the system generates proofs that demonstrate the state change in the appropriate DA layer. These proofs are stored in the stateProofs mapping within the StateUpdate structure. The system optimizes proof generation by batching similar updates and using compact proof formats. For addresses that appear in multiple transactions within a batch, the system generates a single consolidated proof showing the final state. The proofs are designed to be verifiable independently on either DA layer, allowing users to prove their state updates without needing access to both layers. The system also generates absence proofs when necessary to prove that a particular address's state was not modified in a given batch.

Associated data structures:

/// Represents the Data Availability layer choice
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum DALayer {
    Ethereum,
    Celestia,
}

/// Account state in the Twine system
#[derive(Debug, Clone)]
pub struct Account {
    pub nonce: U256,
    pub balance: U256,
    pub storage_root: H256,
    pub code_hash: H256,
    pub da_preference: DALayer,
}

/// Transaction in the Twine system
#[derive(Debug, Clone)]
pub struct Transaction {
    pub from: Address,
    pub to: Address,
    pub value: U256,
    pub nonce: U256,
    pub data: Bytes,
    pub signature: Bytes,
    pub sender_da_layer: DALayer,
}

/// Batch of transactions with metadata
#[derive(Debug, Clone)]
pub struct BatchData {
    pub batch_root: H256,
    pub batch_size: U256,
    pub prev_total_elements: U256,
    pub transactions: Bytes,
    pub signatures: Vec<H256>,
    pub batch_index: U256,
}

/// State update for a specific DA layer
#[derive(Debug, Clone)]
pub struct StateUpdate {
    pub state_root: H256,
    pub parent_state_root: H256,
    pub timestamp: U256,
    pub batch_index: U256,
    pub sequencer: Address,
    pub state_proofs: HashMap<Address, Bytes>,
}

/// Complete state update across both DA layers(Twine's state)
#[derive(Debug, Clone)]
pub struct CompleteStateUpdate {
    pub complete_state_root: H256,
    pub ethereum_state_root: H256,
    pub celestia_state_root: H256,
    pub batch_index: U256,
    pub timestamp: U256,
    pub ethereum_updates: HashMap<Address, bool>,
    pub celestia_updates: HashMap<Address, bool>,
    pub validation_hash: H256,
    pub sequencer_signature: Bytes,
}
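
The atomic update mechanism from the state management design above could look roughly like the following. This is a minimal sketch: StateManager, the trie API (begin_atomic_update / commit_atomic_update / revert_atomic_update, insert, root), and the Error variant are assumptions that mirror the prose, not an existing implementation.

use std::collections::HashMap;

impl StateManager {
    pub fn update_tries(
        &mut self,
        update: &CompleteStateUpdate,
        accounts: &HashMap<Address, Account>,
    ) -> Result<(), Error> {
        // Phase 1: open an atomic update on all three tries.
        self.complete_trie.begin_atomic_update();
        self.ethereum_trie.begin_atomic_update();
        self.celestia_trie.begin_atomic_update();

        let result = (|| {
            for (addr, account) in accounts {
                // Apply to the complete state trie first.
                self.complete_trie.insert(addr, account)?;
                // Mirror into the DA-specific trie chosen by this account.
                match account.da_preference {
                    DALayer::Ethereum => self.ethereum_trie.insert(addr, account)?,
                    DALayer::Celestia => self.celestia_trie.insert(addr, account)?,
                }
            }
            // Resulting roots must match the roots committed to in the
            // CompleteStateUpdate, or the whole update is rejected.
            if self.complete_trie.root() != update.complete_state_root
                || self.ethereum_trie.root() != update.ethereum_state_root
                || self.celestia_trie.root() != update.celestia_state_root
            {
                return Err(Error::RootMismatch);
            }
            Ok(())
        })();

        // Phase 2: commit everywhere on success, revert everywhere on failure.
        match result {
            Ok(()) => {
                self.complete_trie.commit_atomic_update();
                self.ethereum_trie.commit_atomic_update();
                self.celestia_trie.commit_atomic_update();
                Ok(())
            }
            Err(e) => {
                self.complete_trie.revert_atomic_update();
                self.ethereum_trie.revert_atomic_update();
                self.celestia_trie.revert_atomic_update();
                Err(e)
            }
        }
    }
}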

Here, CompleteStateUpdate will be used for updating Twine's state, which is a representation of the complete state of Twine L2.
So when a transaction is initiated by an address that has opted for Ethereum as its DA layer, and the transaction also touches addresses with different DA preferences (some Celestia, some Ethereum), the sequencer first processes all such transactions, then updates the main state trie, and then uses this data to update the Ethereum and Celestia state tries. During data posting, the BatchData of these transactions is posted to Ethereum (as the initiator's DA preference was Ethereum), and an Ethereum-specific StateUpdate is created and posted to Ethereum, consisting of the state updates of only those addresses that have opted for Ethereum as their DA layer (essentially the updates of the ethereum-da state trie). Similarly, another Celestia-specific StateUpdate is created, and only this StateUpdate is posted to Celestia; there is no need to post BatchData to Celestia. A sketch of this posting flow follows below.
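
The post_* functions below are placeholders for the Ethereum and Celestia submission paths described earlier; the function only captures the routing logic from the prose above.

// Sketch: BatchData is posted once, to the initiator's DA; each DA layer
// always receives the StateUpdate covering only its own addresses.
pub async fn post_batch(
    batch: &BatchData,
    initiator_da: DALayer,
    eth_update: &StateUpdate,      // updates of ethereum-DA addresses only
    celestia_update: &StateUpdate, // updates of celestia-DA addresses only
) -> anyhow::Result<()> {
    match initiator_da {
        DALayer::Ethereum => post_batch_to_ethereum(batch).await?,
        DALayer::Celestia => post_batch_to_celestia(batch).await?,
    }
    post_state_update_to_ethereum(eth_update).await?;
    post_state_update_to_celestia(celestia_update).await?;
    Ok(())
}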

Gas charging design

Detailed analysis: TODO.
The gas charging dynamics are already clear from the transaction flow; we only need to calculate gas_required_for_DA_i for each DA that Twine supports and sum them to get the total gas to charge the user for each transaction (see the sketch below the formula).

`gas_required_for_DA_i` = `bytes_stored_on_DA_i` * `gas_of_per_byte_storage_of_DA_i` + `fixed_costs_of_DA_i`
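
A direct translation of this formula into code; the per-DA parameters are placeholders to be measured and configured:

/// Per-DA cost parameters (placeholders, to be measured per DA layer).
pub struct DACostParams {
    pub gas_per_byte: u64, // gas_of_per_byte_storage_of_DA_i
    pub fixed_cost: u64,   // fixed_costs_of_DA_i
}

/// Total DA gas for one transaction: the sum of gas_required_for_DA_i over
/// every DA layer the transaction stores bytes on.
pub fn total_da_gas(per_da: &[(u64, DACostParams)]) -> u64 {
    per_da
        .iter()
        .map(|(bytes_stored, p)| bytes_stored * p.gas_per_byte + p.fixed_cost)
        .sum()
}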

Transactions within the same DA preference

For transactions within the same DA preference, the flow and architecture are the same as described above: for Celestia-only transactions see Data posting to Celestia, and for Ethereum-only transactions see Posting data to ethereum.

The case of cross-DA preference transactions

For cross-DA transactions that touch addresses with different DA choices, the state management architecture above is accompanied by additional modifications to account for concrete cross-DA verifications.

(The two sections below need more elaboration; they describe the cross-DA proof verifications.)

Sender (Celestia DA) to receiver (Ethereum DA)

In a cross-DA preference transaction where Alice (using Celestia DA) sends tokens to Bob (using Ethereum DA), the flow starts with Alice submitting her transaction to the L2 sequencer. The sequencer first posts the complete transaction data to Celestia (Alice's chosen DA), which includes details like the sender (Alice), the receiver (Bob), the value, and, importantly, a marker indicating this is a cross-DA transaction targeting Ethereum. Once Celestia confirms this transaction, a temporary state update occurs, marking Alice's balance reduction and a pending reference to the upcoming Ethereum proof. The sequencer then waits for Celestia's confirmation before proceeding to the Ethereum side. For Ethereum (Bob's DA), instead of posting the full transaction again, the sequencer posts a proof package containing the Celestia block hash, the transaction hash, and a Merkle proof verifying that the transaction was indeed posted to Celestia. This proof, once confirmed on Ethereum, allows Bob to receive the tokens. Finally, the state is updated on both sides with cross-references: the Celestia state (Alice's side) gets a reference to the Ethereum proof hash, while the Ethereum state (Bob's side) gets a reference to the original Celestia transaction hash. This cross-referencing ensures that the transaction can be verified on both chains and maintains a clear audit trail. The process completes with both Alice and Bob receiving confirmation of the successful transfer.
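
The proof package referenced in this flow (and, with source and target swapped, in the Ethereum-to-Celestia flow below) could be represented as follows; field names are illustrative, not a fixed wire format:

/// Sketch of the cross-DA proof package posted to the receiver's DA layer.
#[derive(Debug, Clone)]
pub struct CrossDAProof {
    pub source_da: DALayer,       // DA layer the original transaction was posted to
    pub source_block_hash: H256,  // block hash on the source DA (e.g. Celestia)
    pub source_tx_hash: H256,     // hash of the original transaction
    pub inclusion_proof: Vec<u8>, // Merkle proof that the tx was posted to the source DA
}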

Sender (Ethereum DA) to receiver (Celestia DA)

In a cross-DA preference transaction where Alice (using Ethereum DA) sends tokens to Bob (using Celestia DA), the process begins with Alice submitting her transaction to the L2 sequencer. The sequencer's first action is to post the complete transaction data to Ethereum (Alice's chosen DA), including the sender (Alice), the receiver (Bob), the value, and a special marker indicating this is a cross-DA transaction targeting Celestia. After posting to Ethereum, the sequencer initiates a state update marking Alice's balance reduction and creates a pending reference for the upcoming Celestia proof. A critical step here is waiting for Ethereum's confirmation: this wait is necessary because Celestia will need to verify that the transaction exists on Ethereum. Once Ethereum confirms the transaction, the sequencer prepares a proof package for Celestia that includes the Ethereum block hash, the transaction hash, and a Merkle proof demonstrating that this transaction was legitimately posted to Ethereum. This proof package is then posted to Celestia, allowing Bob to receive his tokens. The final stage involves updating state references on both sides: the Ethereum state (Alice's side) receives a reference to the Celestia proof hash, while the Celestia state (Bob's side) maintains a reference to the original Ethereum transaction hash. These cross-references create a verifiable trail of the transaction across both DA layers. The process concludes with both Alice and Bob receiving confirmation of the successful transfer. The key difference from the Celestia-to-Ethereum flow is the sequence of proofs and verifications, as the source and target chains are swapped, but the fundamental principle of maintaining cross-chain references remains the same.