# Onboarding DAC

Milestone: https://gitlab.com/tezos/tezos/-/milestones/156#tab-issues

Design doc: https://hackmd.io/PBNbp7gfRbevwdnCjMlmng

Open questions: https://hackmd.io/BUsS_LVvQ6eAinpHDymPcg

Source code: `grep -r "_dac_" src/`

## Terms - what is it?

- DAC: Data Availability Committee
- Merkle-based hashing scheme: the scheme used to paginate DAC data into pages and fold the page hashes into a single root hash
- DAC data / DAC messages: the messages that will be fetched by the **rollup node** and applied by the [WASM PVM](https://hackmd.io/5xagcBl2R8inMTraPjPONA?view).
- DAC pages: can be retrieved (recovered) by the DAC for **signing**
- DAC page hash: the hash of a DAC page; once the page is signed by enough members (? how many) of the DAC, its hash is posted to the L1 inbox.

## Onboarding

### The big pictures

- SORU big picture
:::spoiler
![](https://hackmd.io/_uploads/H13zCOWWn.png)
:::
- Slots big picture (may not be relevant anymore)
:::spoiler
![](https://hackmd.io/_uploads/S1zYYFbW3.png)
:::

### 1. What is DAC?

A DAC message is one of the external messages that a manager operation sends to the rollup's global inbox. DAC members retrieve DAC page(s) and sign them -> once a DAC page is signed by enough members, its hash is posted to the L1 inbox. The rollup node imports into the PVM the messages whose hash has been requested by the latter (who?).

### 2. What is the goal of it?

The goal is to ensure that:
- the **rollup node** and **L1** can handle DAC data;
- a **Merkle-based hashing scheme** is provided for paginating DAC data, with support for it in **kernels**.

### 3. What are the main points in the design?

#### Phase 1: single rollup node
:::spoiler
The rollup node is able to receive messages requesting it to read a DAC page (this page's hash can be pushed to the L1 inbox).
Scenario for testing: publishing and importing DAC messages via Tezt.

This service (can be the DAL or rollup node) has:
- a command for publishing data as files into a rollup node
- it is a sidecar for the rollup node
- it is able to access the folder where the rollup node reads reveal data
- it is able to access the private keys of all (or a majority of) DAC members (the purpose is to compute the aggregated signatures that attest the availability of the data)

`<data>` as a request payload:
- less than 4Kb:
    - `<data>`'s contents are hashed --> `<hash>`
    - the final `<hash>` is signed by the DAC --> obtain `<signature>`
    - file: `<data'> + <hash>` --> written into the folder that the rollup node uses to retrieve DAC messages (`<data'>` is `<data>` in serialized form).
    - `<hash> + <signature>`: now available to be injected into the L1 inbox. This finishes the procedure.
- more than 4Kb: the file is chunked/divided/split into pieces of 4Kb each.
    - each chunk's contents are hashed --> `<chunk-hash>`
    - file: `<signed-chunk> + <chunk-hash>` is saved into the folder that the rollup node uses to retrieve DAC messages.
    - all the `<chunk-hash>`es from the first step are grouped together, and the result is processed again from step 1, as if it were a new file.
    - **data-chunks**: chunks from the first iteration of the workflow above contain the contents of the original file.
    - **pointer-chunks**: chunks constructed in further iterations contain hashes of other chunks.

Note: it is always possible to reconstruct the original file from the first page whose hash is injected in the L1 (provided all the pages are stored on disk).
:::

#### The proposed design
:::spoiler
- The DAL node has 3 modes:
    - A coordinator mode:
        - get the payload, split it into pages of 4096 bytes each, compute the final root page hash -> send it to the DAC members for signing -> get back the signed root page hashes -> aggregate those signatures.
- A committee member mode:
    - get the root page hashes from the coordinator -> download and store the payloads based on their root page hashes -> sign them -> send the signatures back to the coordinator.
- An observer mode:
    - get the root page hash from the coordinator -> download and store the payload based on the root page hash -> when the rollup node requests missing data, forward the request to the DAC members.
:::

_Below is a detailed description of the DAL node_:

##### DAC coordinator

The DAC coordinator consists of several components:
- An RPC server
- An entrypoint to the DAC
- A data streamer
- A signature manager

1. RPC server
:::spoiler
The RPC server serves requests from users and receives signatures from DAC members.
- `PUT /dac/payload`: the user requests the DAC to store and sign a payload.
- `GET /dac/pages/{page_hash}`: retrieve the contents of a page hash.
- `PUT /dac/signatures/{root_page_hash}`: collect a signature for a root page hash from a DAC member.
:::

2. Entrypoint to the DAC
:::spoiler
- `PUT /dac/preimage` is the entrypoint to the DAC coordinator, where payloads are sent.
- After receiving a payload, the coordinator will compute the **merkle tree of pages** and then store each page on disk.
- The coordinator will also compute **an expiration level** by summing:
    - the `current_level` of the L1 node that the coordinator uses to track the heads, and
    - an `expiration_lag` specified in the node configuration.

Then the **root page hashes** and **expiration levels** are streamed via the **data streamer**.
:::

3. The data streamer
:::spoiler
After receiving the **root page hashes** and **expiration levels** from the coordinator, the data streamer is responsible for <u>providing</u> them to the committee members. The root page hashes are added to a `Lwt_stream.t` by the message aggregator. A stream-serving function that returns the next element of the stream can then be defined.
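Conceptually, the data streamer is a broadcast channel: the coordinator publishes `(root_page_hash, expiration_level)` pairs, and every subscriber receives each pair published after it subscribed. A minimal Python sketch of that idea (class and method names are mine, not the actual module; the real code is OCaml built on `Lwt_stream`/`Lwt_watcher`):

```python
import queue

class RootHashStreamer:
    """Toy broadcast streamer: every subscriber receives every
    (root_page_hash, expiration_level) pair published after it subscribed."""

    def __init__(self) -> None:
        self._subscribers: list[queue.SimpleQueue] = []

    def make_subscription(self) -> queue.SimpleQueue:
        # Each subscriber (committee member or observer) gets its own queue.
        q = queue.SimpleQueue()
        self._subscribers.append(q)
        return q

    def publish(self, root_hash: bytes, expiration_level: int) -> None:
        # Broadcast the pair to all current subscribers.
        for q in self._subscribers:
            q.put((root_hash, expiration_level))

streamer = RootHashStreamer()
member = streamer.make_subscription()  # a committee member subscribes
streamer.publish(b"\x01" * 32, 100)    # coordinator streams a new root hash
print(member.get())                     # the (root_hash, expiration_level) pair
```

In the real implementation the subscription additionally returns a stopper so the client can tear the stream down.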
An API of a Data_streamer can look like:
:::spoiler
```
module Root_hash_streamer : sig
  type t

  type configuration

  val init : configuration -> t

  val publish : t -> Dac_hash.t * int32 -> unit tzresult Lwt.t

  val make_subscription :
    t -> ((Dac_hash.t * int32) Lwt_stream.t * Lwt_watcher.stopper) tzresult Lwt.t
end
```
The coordinator will:
- manage a `Data_streamer.t` with the streamer.
- provide a streaming endpoint (which will be called by committee members and observers).

The function `make_subscription` will be triggered when the streaming endpoint is called on the coordinator side. On the user side, this returns a `Lwt_stream.t` together with a stopper for the monitoring of the root page hashes that have been streamed.
:::

4. The signature manager
:::spoiler
- `PUT /signature/{root_page_hash}` (the request includes the root page hash and expiration level) is the endpoint of the signature manager. The DAC members use this endpoint to communicate with each other.

The responsibilities of a signature manager are:
- Receive signatures and verify that they are valid.
- Prevent malicious actors (an incorrect signature --> fails signature verification on the kernel side --> the whole payload corresponding to the root page hash is discarded).
    - The DAC committee does this job (the signature manager needs to know the list of DAC members). Currently, the list of DAC members is stored in the DAL node configuration. In the future, it may be stored in the DAC node instead.
- Store the verified signature in the signature storage.
    - `(root_page_hash, expiration_level) list option` is the signature storage. Each position in this list corresponds to the signature of a DAC member if available, otherwise `None`.

When adding a new signature, the DAL node has to check whether a sufficient number of signatures **attesting** data availability has been collected for the root page hash. This threshold is stored in the DAC node configuration.
- The signature manager will then compute the **aggregate signature** (combined with the root page hash --> produce the external message --> inject into L1). At the end of this process, the signature manager will post the new signature to an external endpoint (this external endpoint is specified in the DAC node configuration).
:::

##### The Committee member
:::spoiler
- The data streamer is where the committee member discovers the root page hashes. These root page hashes are advertised by the DAC coordinator.
- After receiving a new root page hash, the DAC member will download the data from the DAC coordinator, using `GET /page/{page_hash}`.
- Checking the `{page_hash}`:
    - compute the hash of the downloaded page
    - check it against the requested `{page_hash}`:
        - Mismatch: the download terminates, and the DAC member does not sign the (root page hash, expiration level).
        - Match: proceed to sign the (root page hash, expiration level). Then communicate the signature --> to the DAC node (call the `dac/signature/{root_page_hash}` endpoint).
- The `GET /page/{page_hash}` endpoint will also be used by observers to retrieve missing pages, when requested by the rollup node.
:::

##### The Observer
:::spoiler
The observer lives on the same host as the rollup node.
- Rollup node: downloads pages from the DAC coordinator and saves them into its own page storage.
- `HEAD /dac/notify_missing_page/{page_hash}` is the endpoint of the observer; it can immediately return a 200 response. The observer will then **broadcast** a request to all of the DAC members to retrieve the page.
:::

##### The rollup node
:::spoiler
- `dac/notify_missing_page/{page_hash}` is an asynchronous endpoint. The rollup node will use this endpoint when the PVM requests a page that the rollup node does not have in its own page storage. It can be asynchronous because the observer lives on the same host as the rollup node.
:::

### 4. What is the current state of the implementation, tests, etc.?
#### The implementation of the Coordinator
:::spoiler
Milestones:
- Production and streaming of pages: https://gitlab.com/tezos/tezos/-/milestones/159#tab-issues
- Signature verification and injection: https://gitlab.com/tezos/tezos/-/milestones/161#tab-issues

<u>Goal of the coordinator</u>:
- RPC server: handle requests from users and receive signatures from DAC members.
- An entrypoint to the DAC: handle the payloads, compute the merkle tree of pages and store the pages on disk. The `(root_page_hash, expiration_level) list option` is used by the data streamer.
- A data streamer: provide the `(root_page_hash, expiration_level)` pairs to the committee members.
- A signature manager: receive signatures and verify whether they are valid; verified signatures are stored in the signature storage. It also adds new signatures (aggregate signature).

The milestone "Production and streaming of pages" handles:
- the RPC server
- the entrypoint to the DAC
- the data streamer

The milestone "Signature verification and injection" handles:
- the signature manager

##### Production and streaming of pages
:::spoiler
- The DAL node can run in coordinator mode
- RPC server:
    - provides an RPC endpoint to fetch a single page
    - provides an RPC endpoint to collect signatures from signers
    - the root page hashes produced are streamed to signers

List of MRs:
:::spoiler
- [7310](https://gitlab.com/tezos/tezos/-/merge_requests/7310): move protocol-dependent logic to its own plugin.
:::spoiler
Reason: isolate the protocol-related logic of DAC from DAL.
Action: move the DAC RPC server and related files from the DAL plugin --> a new DAC plugin.
:::info
- in `manifest/main.ml`:
    - add a function `val dac`,
    - add a `?dac` parameter in `make`,
    - define `dac` to register `path//lib_dac` as a protocol-specific library for DAC,
    - `_dac_tests` registers `path//lib_dac/test` for testing.
- in `src/proto_alpha/lib_dac_plugin/test/test_helpers.ml`:
    - define a module `Unit_test`:
        - `spec: string -> unit Alcotest_lwt.test_case list -> string * unit Alcotest_lwt.test_case list`. Example of calling this function:
```
Test_helpers.Unit_test.spec
  (Protocol.name ^ ": Dac_plugin_registration.ml")
  tests;
```
- in `src/lib_dal_node/dal_plugin.ml(i)` and `src/proto_alpha/lib_dac_plugin/dac_plugin_registration.ml`: define the plugin for the DAL node
    - in `dal_plugin.ml`: define functions to `register` a DAL plugin, and to `get` a DAL plugin given a hash. A module `T` of the DAL plugin has a module `Proto`, which is a `Registered_protocol.T`.
    - in `dac_plugin_registration.ml`: the module `Plugin` has a module `Proto`, which is a `Registerer.Registered`.
- Now the DAL node registers both the DAL and DAC plugins: `src/bin_dal_node`
    - `RPC_server.ml`
    - `daemon.ml`
    - `event_legacy.ml`
    - `node_context.ml`
:::
- [7345](https://gitlab.com/tezos/tezos/-/merge_requests/7345): this MR renames the endpoints of the DAC node server from `/plugin/dac/<endpoint_name>` to `/<endpoint_name>`. <span style=color:red>This work has been removed (?)</span>
:::spoiler
:::info
- This task is done in `src/bin_dac_node/RPC_server`, `proto_alpha/lib_dac_plugin/RPC.ml` and `tezt/lib_tezos/rollup.ml`
:::
- [7349](https://gitlab.com/tezos/tezos/-/merge_requests/7349): this MR is about PBT for DAC pages encoding; it does 2 things:
    - factors out some helper functions in `test_dac_pages_encoding.ml`
    - introduces a simple PBT for the round-trip (serialization/deserialization) of a DAC payload posted via a single request to the new stack-based implementation.
- [7451](https://gitlab.com/tezos/tezos/-/merge_requests/7451): define the page storage signature for the DAC node and its implementation via the filesystem.
    - In `src/lib_dac_node/page_store.ml(i)`
- [7524](https://gitlab.com/tezos/tezos/-/merge_requests/7524): this MR adds the coordinator client (DAC client) context.
    - In `src/lib_dac_client/dac_node_client.ml(i)`
- [7548](https://gitlab.com/tezos/tezos/-/merge_requests/7548): this MR works on the streaming of root hashes.
:::spoiler
:::info
- First, add the root hash streamer to the DAC node context:
    - in `src/lib_dac_node/node_context.ml(i)`
- Define an RPC `GET monitor/root_hashes`; this is a monitoring service.
    - In `src/lib_dac_node/monitor_services.ml(i)`: define a GET service using the function `Tezos_rpc.Service.get_service`. Bind this service to a Tezos context using `Tezos_rpc.Context.make_stream_call`.
- After defining the RPC above, we need to register it with the RPC server.
    - In `src/lib_dac_node/RPC_server.ml`, the function is `register_monitor_root_hashes`; it takes a dac_plugin, a hash streamer, and a directory. The directory is registered using the function `Tezos_rpc.Directory.gen_register`.
    - Then, in the function `register`, add a case for this newly registered RPC.
- Now we emit a new event for handling a new subscription to the hash streamer. The event is handled in `src/lib_dac_node/event.ml` in the function `handle_new_subscription_to_hash_streamer`.
:::
- [7595](https://gitlab.com/tezos/tezos/-/merge_requests/7595): this MR makes the implementation of the Preimage_store uniform.
- [7621](https://gitlab.com/tezos/tezos/-/merge_requests/7621): this MR refines and implements the root hash streamer interface. The tests for the data streamer are at `src/lib_dac_node/test/test_data_streamer.ml`.
- [7812](https://gitlab.com/tezos/tezos/-/merge_requests/7812): this MR implements the `Coordinator`'s `POST /preimage` endpoint. It also binds the coordinator's `POST /preimage` in `tezt/lib_tezos/rollup.ml(i)` with the function `coordinator_store_preimage`, and adds a Tezt test for it in `tezt/tests/dac.ml` in the function `test_coordinator_post_preimage_endpoint`.
- [7876](https://gitlab.com/tezos/tezos/-/merge_requests/7876): this MR removes the `Lwt` that isn't used inside the data streamer.
- [7389](https://gitlab.com/tezos/tezos/-/merge_requests/7389): this MR adds a new PBT for `Merkle_tree.Make_buffered`. (in review)
- [8164](https://gitlab.com/tezos/tezos/-/merge_requests/8164): this MR improves the Page_encoding module.
:::

##### Signature verification and injection
:::spoiler
List of MRs:
- [7822](https://gitlab.com/tezos/tezos/-/merge_requests/7822): this MR adds the implementation of handling/storing DAC member signatures, and a Tezt test for it.
    - In `src/lib_dac_node/signature_manager.ml(i)`
:::

#### The implementation of the Member/Observer
:::spoiler
Milestone: https://gitlab.com/tezos/tezos/-/milestones/160#tab-issues

<u>Goal of the observer</u>: give the missing page to the rollup node when requested, broadcasting a request to all DAC members to retrieve the page.

List of MRs:
:::spoiler
- [7738](https://gitlab.com/tezos/tezos/-/merge_requests/7738): this MR implements committee members downloading advertised pages from the Coordinator.
    - In `src/lib_dac_node/page_store`/`page_encoding`. The test is in `tezt/tests/dac.ml`; the payload example is in `tezt/tests/dac_example_payloads/preimage.json`
- [7750](https://gitlab.com/tezos/tezos/-/merge_requests/7750): this MR verifies the hashes of downloaded pages in the committee member/observer, and adds tests for it.
    - In `src/lib_dac_node/page_store`; tests in `src/proto_alpha/lib_dac_plugin/test/test_dac_pages_encoding.ml`
:::

#### The implementation of the Rollup node integration
:::spoiler
Milestone: https://gitlab.com/tezos/tezos/-/milestones/162#tab-issues

The goal is to integrate the rollup node with DAC.
:::

### 5. Git branch that contains my notes on code

My onboarding branch: https://gitlab.com/marigold/tezos/-/commits/quyen@dac_oboarding

The code of DAC:
- `src/bin_dac_node`: this contains the binary (command-line) features of the DAC node.
- `src/lib_dac`: this is the library of DAC. It contains:
    - __RPC services__
    :::spoiler
    - Module `Coordinator`
    :::spoiler
    - The RPC services contain a module `Coordinator`.
Inside, there is an RPC `POST dac/preimage`; this RPC sends a `payload` to the DAC coordinator (serializing the DAC payload). It returns a __root_page_hash__ (representing the stored preimage), and pushes (streams) it to the subscribed `Observers` and `DAC members`/committee members.
:::info
`Tezos_rpc.Service.post_service`
`Tezos_rpc.Query.empty`
`Tezos_rpc.Path.(open_root / "preimage")`
:::
There is a TODO to return the pair `(root_page_hash, expr_level)` instead of only `root_page_hash` (`~output:P.encoding`; this `encoding` is defined in `src/lib_dac/dac_plugin`, it is the encoding of reveal hashes).
- `POST dac/store_preimage`: this RPC posts a payload using a pagination scheme and returns a (__root_page_hash__, raw bytes of the external_message).
:::spoiler
- Input: a given [pagination_scheme] `~input:store_preimage_request_encoding: (payload: Bytes.t * pagination_scheme: Pagination_scheme.t)`. The DAC reveal data is split because a pagination scheme splits a payload into a set of pages of 4096 bytes each.
- Output: it returns the `base58`-encoded __root_page_hash__ and the raw bytes `~output:(store_preimage_response_encoding ctx): (root_page_hash: Dac_plugin.hash * external_message: Bytes.t)`.
:::
- `GET dac/verify_signature`: verify the signature of an external message to inject in L1.
:::spoiler
This RPC requests the DAL node to verify the signature of the external message. The DAC committee of the DAL node must be the same one that was used to produce the external message.
`Tezos_rpc.Service.get_service`
`~query:external_message_query`
:::
- `GET dac/preimage`: this RPC requests a preimage's hash/page hash (consisting of a single page, from cctxt) and returns its contents if found.
:::spoiler
- Output: in the success case, the raw page as a sequence of bytes.
:::info
`Tezos_rpc.Service.get_service`
`~query:Tezos_rpc.Query.empty`
`~output:Data_encoding.bytes`
:::
- `PUT dac/member_signature`: this RPC verifies and stores the DAC member signature of a root page hash.
:::spoiler
:::info
`Tezos_rpc.Service.put_service`
`~query:Tezos_rpc.Query.empty`
`~input:(Signature_repr.encoding dac_plugin)` (root_hash; signature: `Tezos_crypto.Aggregate_signature.encoding`; signer_pkh: `Tezos_crypto.Aggregate_signature.Public_key_hash.encoding`)
`~output:Data_encoding.empty`
:::
- `GET dac/certificate`: retrieve the DAC certificate associated with the given __root_page_hash__.
:::spoiler
`Tezos_rpc.Service.get_service`
`~query:Tezos_rpc.Query.empty`
`~output:(Data_encoding.option (Certificate_repr.encoding (module P)))`
:::
- `GET dac/missing_page/[page_hash]`: the observer fetches the missing page from a Coordinator node. The missing page is then saved to a page store before returning the page as a response.
:::spoiler
`Tezos_rpc.Service.get_service`
`~query:Tezos_rpc.Query.empty`
`~output:Data_encoding.bytes`
:::
:::
- __DAC plugin__: handles the DAC plugin; one can register a new [Dac_plugin.T] or search for a registered DAC plugin (get).
:::spoiler
:::info
- The register function derives (obtains) and registers a new [Dac_plugin.T] given an [of_bytes] function. The `make_plugin` input has to be built beforehand. `src/proto_alpha/lib_dac_plugin/dac_plugin_registration.ml` shows how to define `make_plugin` and call the `register` function to register this new plugin.
```
(* dac_plugin_registration.ml *)
let make_plugin : (bytes -> Dac_plugin.hash) -> (module Dac_plugin.T) =
 fun of_bytes ->
  let module Plugin = Make (struct
    let of_bytes = of_bytes
  end) in
  (module Plugin)

let () = Dac_plugin.register make_plugin
```
The function `get` takes a `hash` to search for:
:::info
```
Dac_plugin.get protocols.current_protocol
Dac_plugin.get protocols.next_protocol
Dac_plugin.get Protocol.hash
```
- A module [Dac_plugin.T] contains:
    - a module `module Proto : Registered_protocol.T`, and functions that manipulate the hash.
    - <span style=color:red>NOTE</span>: the functions `of_hex`/`to_hex` take a `string`; should they be renamed to `of_string`/`to_string` instead?
:::
- __Signature representation__: a representation of a committee member signature.
:::spoiler
:::info
```
type t = {
  root_hash : Dac_plugin.hash;
  signature : Tezos_crypto.Aggregate_signature.t;
  signature_pkh : Tezos_crypto.Aggregate_signature.public_key_hash;
}
```
A signature is either a `Bls12_381` signature or unknown bytes.
:::
- __Certificate representation__: a representation of a DAC certificate.
:::spoiler
:::info
```
type t = {
  root_hash : Dac_plugin.hash;
  aggregate_signature : Tezos_crypto.Aggregate_signature.signature;
  witness : Z.t;
}
```
:::
- __Pagination scheme__: converts a payload into a set of pages of 4096 bytes each. These schemes are supported by the DAC node.
:::spoiler
:::info
```
type t = Merkle_tree_V0 | Hash_chain_V0
```
:::
- `src/lib_dac_client`: an instance of `Tezos_client_base.Client_context` that only handles IOs and RPCs. It can be used for key- and RPC-related commands.
:::spoiler
:::info
- It calls the functions defined in `src/lib_dac/RPC_services.ml` and adds the `cctxt`, for instance:
```
module Coordinator : sig
  val post_preimage :
    Dac_plugin.t -> #cctxt -> payload:bytes -> Dac_plugin.hash tzresult Lwt.t
end
```
The implementation looks like:
:::info
```
module Coordinator = struct
  let post_preimage (plugin : Dac_plugin.t) (cctxt : #cctxt) ~payload =
    cctxt#call_service (RPC_services.Coordinator.post_preimage plugin) () () payload
end
```
:::
- `src/proto_alpha/lib_dac_plugin`: defines a client mode for interacting with a DAC node in Observer mode. This client should only be used by components that are compiled with a protocol. It also registers the DAC plugin for a SC_rollup_reveal_hash.
    - <span style=color:red>Questions:</span>
        - How does it differ from lib_dac? Is it an integration between lib_dac and the reveal channel of SORU?
        - Why is the type `type t = unit` in dac_observer_client?
        - Is `fetch_preimage` a TODO? I think this file is a TODO.
:::spoiler
- tests: it contains the tests for
    - dac_plugin_registration, and
    :::spoiler
    - Test the data encoding roundtrip between a DAC hash and a reveal hash:
        - Binary
            - Question: the test checks that the bytes DAC hash roundtrip equals the bytes reveal hash roundtrip, so why does the text say the "roundtrip hash is not equal"? Same for assert_equal_bytes.
        - Hex
    - Test equality between a DAC hash and a reveal hash:
        - bytes DAC hash = bytes reveal hash
        - string DAC hash = string reveal hash
    - Check that the JSON-encoded DAC hash string is a Hex string. (TODO: the wrong test case is called)
    :::
    - dac pages encoding
:::
- `src/lib_dac_node`: the DAC node is the heart of DAC.
    - Page manager: it contains the page manager and the page storage.
    - __Pages manager__: shows how to encode a page; reads/writes to the page storage. When DAC members request a page from the page manager, it will send the page(s) to them if found. Page encoding is a library for encoding payloads of arbitrary size in formats that can be decoded by the SC-rollup kernels.
    :::spoiler
    - A page has a maximum size.
    - It also has versions: a contents version and a hashes version.
    - It has a module `Dac_codec`: this module encodes a payload (`bytes`) as a whole and returns the calculated __root_page_hash__ (serialize/deserialize the payload).
        - Serialize payload: take a raw payload and convert it into the root_page_hash (`Dac_plugin.hash`).
        - Deserialize payload: take the `Dac_plugin.hash` and convert it back to the `bytes` (raw information/payload).
    :::
    - __Page_encoding__
    :::spoiler
    A page is either a [Contents] page (containing a chunk of the payload to serialize), or a [Hashes] page (containing a list of hashes). There is a [max_page_size] bound in bytes.
    - Merkle tree: Merkle tree encodings of DAC pages/payloads are versioned, which allows multiple hashing schemes to be used. The branching factor is arbitrary, with >= 2 branches.
    - A Merkle tree page is either a:
        - Contents page: contains a chunk of the payload to be serialized; or
        - Hashes page: contains a list of hashes.

      The size of both is bounded by [max_page_size]. The current bound for the max version is 127. The preamble (the prefix of each contents/hashes page) has 5 bytes: 1 byte denotes the version, and 4 bytes encode the size of the rest of the page.
    - Serialize:
        - Question: need a good explanation of the Merkle tree. TODO: rework the docstring in the code.
        - A payload (a large sequence of bytes) is split into several pages of fixed size. Each page is prefixed with a small sequence of bytes (also of fixed size); this is the `preamble` of the page. `Contents pages` are the pages obtained directly from the original payload; they are the leaves of the Merkle tree.
        - Each contents page is then hashed. The size of each hash is fixed. The hashes are concatenated together. (Question: after obtaining the long sequence of hashes, is it split into same-sized pages, with each such page a node of the tree?)
    - Deserialize: reconstruct the original payload from its Merkle tree root hash.
    - Hash chain/Merkle list: encoding of the DAC payload as a hash chain/Merkle list. This encoding is specific to the Arith PVM.
    :::
    - __Page storage__: the database/storage of pages; the page manager reads/writes to it. This storage is backed by the local filesystem.
    :::spoiler
    TODO
    :::
    - __Data streamer__: how to communicate with the observer and DAC members.
    :::spoiler
    :::
    - __Signature manager__: it contains the signature manager and the signature storage.
        - Signature manager: computes the aggregate signature and the bitset of DAC members that signed the __root_page_hash__.
        :::spoiler
        :::
        - Signature storage
        :::spoiler
        :::
    - __Handler__:
    :::spoiler
    :::
    - __External message/message aggregator__: it receives a new head from the L1 tracker and finalises the payload; it stores pages to the page manager when complete. It streams/sends the __root_page_hash__ to the data streamer.
:::spoiler
:::
- Together, a DAC node also needs:
    - A __configuration of the node__: a node can save/write its configuration file to [config.data_dir], load a configuration file from [data_dir], and switch between different modes.
    :::spoiler
    - A configuration of a node has:
        - a data directory: the path to the DAC node. The default path is `${HOME}/.tezos-dac-node` (Question: should it be octez-dac-node?)
        - an RPC address: the address the DAC node listens to. The default address is `127.0.0.1`
        - an RPC port: the port the DAC node listens to. The default port is 10832.
        - a reveal data directory: the directory where the DAC node saves pages. The default is `${HOME}/.tezos_rollup_node/wasm_2_0_0` (Question: should it be octez-rollup-node?)
        - DAC mode: the operation mode of the DAC, one of:
            - a Coordinator mode
            :::spoiler
            - A coordinator mode contains:
                - a threshold (the number of committee member signatures expected),
                - a list of committee member addresses, aka their public key hashes.
            - The implementation defines a type `t` and then uses `Data_encoding` to encode it.
            :::
            - a Committee member mode
            :::spoiler
            A committee member contains:
            - the coordinator RPC address,
            - the coordinator RPC port, and
            - its own address, aka its public key hash.
            :::
            - an Observer mode
            :::spoiler
            An observer contains:
            - the coordinator RPC address
            - the coordinator RPC port
            :::
            - and a Legacy mode (Question: when do we use the legacy mode?)
            :::spoiler
            A legacy mode contains:
            - a threshold
            - a list of committee member addresses
            - an optional configuration of a DAC context: it contains host and port information
            - an optional committee member address
            :::
    - __Node context__: after the DAC node has its configuration, the node context is where the node makes its interactions with the outside world.
    :::spoiler
    A node context has:
    - a status that can change: [Ready] or [Starting]. A ready context contains the DAC plugin and the root hash streamer.
    - it stores the node configuration.
    - it has the information of the Tezos node context.
    - it can optionally store the context of a DAC node client (the coordinator). This is used only for integration tests, where all the nodes are in legacy mode.
    - it has the information of the page storage.
    - it has the information of the node storage, which is an Irmin store.

    A node has these features:
    - `init`: initializes the DAC node context to [Starting].
    - get/set: manipulate the status of the DAC node; get the node configuration; get the Tezos node context; get the DAC plugin; get a stored page; get node information stored in Irmin for a specific mode; get the list of DAC committee members; get the DAC node client context, aka the coordinator.
    :::
    - __Event__: declares all the events of the DAC node.
    :::spoiler
    Below is a list of events of a DAC node:
    - Starting the DAC node
    - Shutting down the DAC node
    - DAC node is ready
    - The directory of the DAC node is ready
    - The DAC node storage is ready
    - RPC server is ready
    - The DAC committee member is ready
    - Tracking the Layer 1 node:
        - New head is updated
        - Started tracking the Layer 1 node
    - Tracking the protocol DAC plugin:
        - Resolved
        - Not resolved
    - Error for the daemon
    - DAC node threshold has not reached the required number
    - Handling the DAC committee members:
        - Committee member does not exist in a wallet
        - No committee member address provided; the DAC node will only deserialize payloads, not sign them.
        - Committee member cannot sign DAC __root_page_hashes__. This happens when the tz4 address's secret key URI is not available.
        - Committee member has no public key. A tz4 public key hash exists, but its public key is not available.
    - Handling the data streamer:
        - A new subscription of another DAC node is added to the data streamer (hash).
        - Subscribed to the root hashes stream
    - Handling a __root_hash__:
        - Received a new root hash via the monitoring RPC
        - Finished processing a previously received root hash
        - New root hash pushed to the data streamer
        - Failed to process a root hash; raise an error
    - Handling signatures:
        - Cannot retrieve/get keys from an address. This happens when the DAC node is not in committee member mode.
    - Handling a missing page:
        - Successfully fetched the missing page for a hash.
        - TODO?
    :::
    - __Daemon__: the daemon of the DAC node.
    :::spoiler
    - It defines a daemon with different handlers:
        - handle a streamed call
        - resolve plugin and set ready: monitor heads and try to resolve the DAC protocol plugin corresponding to the protocol of the targeted node
        - new head: monitor heads and store published slot headers by block hash
        - push payload signature: call `PUT /dac_member_signature` to submit a member signature
        - new root hash: only available when the DAC node configuration is in coordinator mode
    - The daemon is able to:
        - get all the committee members' keys
        - be run
    :::
    - __RPC server__: defines all the RPC interactions with the DAC node, using `Tezos_rpc_http` and `Tezos_rpc_http_server`.
    :::spoiler
    - Register the RPC server:
        - register preimage
            - post store preimage
            - get preimage
            - coordinator post preimage
        - register get verify signature
        - register get certificate of committee members
        - register get missing page
        - register put DAC member signature
    - Start a legacy mode
    - Shutdown the RPC server
    - Initialize the finalizer of the RPC server
    :::
    - __Monitor services__: monitor services that happen on the DAC node.
    :::spoiler
    - __root_page_hashes__: returns a stream of __root_page_hashes__ and a stopper for it. The stream is produced by calling the RPC `GET /monitor/root_hashes`.
    :::
    - __Manage wallet__: manages the wallet information (keys) for coordinators, committee members, and legacy mode. There is a helper for manipulating the wallet.
    :::spoiler
    - Helper:
        - get the keys of a context given a public key hash
        - get a public key in a context given the address
        - check whether one can verify a public key
        - check whether one can verify a secret key URI
    - Coordinator keys
    :::info
    ```
    type t = {
      public_key_hash : Tezos_crypto.Aggregate_signature.public_key_hash;
      public_key_opt : Tezos_crypto.Aggregate_signature.public_key option;
    }
    ```
    - Committee member keys
    :::info
    ```
    type t = {
      public_key_hash : Tezos_crypto.Aggregate_signature.public_key_hash;
      secret_key_uri : Client_keys.aggregate_sk_uri;
    }
    ```
    - Legacy keys
    :::info
    ```
    type t = {
      public_key_hash : Tezos_crypto.Aggregate_signature.public_key_hash;
      public_key_opt : Tezos_crypto.Aggregate_signature.public_key option;
      secret_key_uri_opt : Tezos_crypto.Aggregate_signature.sk_uri option;
    }
    ```
    :::
    - __Storage__: the DAC node store is backed by Irmin and stores the following information:
    :::spoiler
    - Signature:
        - the primary key: the __root_page_hash__ (the DAC plugin hash). Currently, the root page hashes are strings instead of [`Dac_hash.t`]; the reason is to avoid runtime functorization of the module (?).
        - the secondary key: the committee member public key hash
        - the value: the signature.
    - Certificate:
        - the key: the __root_page_hash__ (DAC plugin hash)
        - the value: the signature and the witnesses.
    :::

For tests:
- `tezt/lib_tezos/`
- `tezt/tests/`

### 6. What are the use cases for DAC?
## References

- Martin's notes: https://hackmd.io/0DCqMzx7S1m43jBMeu2YHg?view
- Old SCORU review code party: https://hackmd.io/i2uTFLRMTQKFbnHOTl0ilQ
- Lin's notes on reading SCORU code: https://hackmd.io/sq-CqJNSQWShjI_SmmDRXg?view
- My own notes about SORU, WASM, PVM, Kernel: https://hackmd.io/5xagcBl2R8inMTraPjPONA?view
- Rollup use cases: https://hackmd.io/1Wn46JCdTVyAXjw6Z9BLqA?view
- Glossary: https://hackmd.io/5L0GUPb8TduWffJTHyL18Q?view
- Drawing of Andrea: https://excalidraw.com/#room=3557fce8b122b29572f6,wunnJnsUAu_4aE3XXloZ8A
- Paper on Cryptoeconomic Security for Data Availability Committees: https://arxiv.org/pdf/2208.02999.pdf
- An incomplete guide to rollups: https://vitalik.ca/general/2021/01/05/rollup.html