Onboarding DAC
Milestone: https://gitlab.com/tezos/tezos/-/milestones/156#tab-issues
Design doc: https://hackmd.io/PBNbp7gfRbevwdnCjMlmng
Open questions: https://hackmd.io/BUsS_LVvQ6eAinpHDymPcg
Source code: grep -r "*_dac_*" in src/
Terms
- DAC: Data Availability Committee
- Merkle-based hashing scheme: the scheme used to paginate DAC data into fixed-size pages linked by hashes.
- DAC data / DAC messages: the messages that are fetched by the rollup node and applied by the WASM PVM.
- DAC pages: can be retrieved (recovered) by the DAC for signing.
- DAC page hash: the hash of a DAC page signed by enough DAC members (open question: how many?); this hash is posted to the L1 inbox.
Onboarding
The big picture
- SORU big picture

- Slots big picture (may not be relevant anymore)

1. What is DAC?
A DAC message is one of the external messages that a manager operation sends to the global inbox of the rollup.
DAC members retrieve DAC page(s) and sign them; once a page has been signed by enough members, its hash is posted to the L1 inbox.
The rollup node imports into the PVM the messages whose hashes have been requested by the latter (who? presumably the PVM).
2. What is the goal of it?
The goal is to ensure that:
- the rollup node and L1 can handle DAC data.
- a Merkle-based hashing scheme for paginating DAC data is provided, with support in kernels.
3. What are the main points in the design?
Phase 1: single rollup node
The rollup node is able to receive messages that request reading a DAC page (this page can be pushed to the L1 inbox).
Scenario for testing: publishing and importing DAC messages via Tezt
This service (it can be the DAL node or the rollup node):
- has a command for publishing data as files into a rollup node
- is a sidecar for the rollup node
- is able to access the folder from which the rollup node reads reveal data
- is able to access the private keys of all (or a majority of) DAC members (the purpose is to compute the aggregated signatures that attest the availability of the data)
<data> as a request payload:
- less than 4KB:
- <data>'s contents are hashed -> <hash>
- the final <hash> is signed by the DAC -> obtain <signature>
- the file <data'> + <hash> is written into the folder that the rollup node uses to retrieve DAC messages (<data'> is <data> in a suitable format).
- <hash> + <signature>: now available to be injected into the L1 inbox. This finishes the procedure.
- more than 4KB: the file is chunked/split into pieces of 4KB each.
- each chunk's contents are hashed -> <chunk-hash>
- the file <signed-chunk> + <chunk-hash> is saved into the folder the rollup node uses.
- all the <chunk-hash>es from the first step are grouped together, and the result is processed again from step 1, as if it were a new file
- data-chunks: chunks from the first iteration of the workflow above contain the contents of the original file.
- pointer-chunks: chunks constructed in further iterations contain hashes of other chunks.
Note: it is always possible to reconstruct the original file from the first page whose hash is injected into L1 (provided all the pages are stored on disk).
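The chunking workflow above can be sketched as follows (an illustrative Python sketch, not the OCaml implementation; sha256 and the constants are stand-ins for the actual hashing scheme and page layout):

```python
import hashlib

PAGE_SIZE = 4096   # assumed page size (4KB)
HASH_SIZE = 32     # sha256 stands in for the real hashing scheme

store = {}         # page store: hash -> page contents

def save_page(page: bytes) -> bytes:
    h = hashlib.sha256(page).digest()
    store[h] = page
    return h

def serialize(payload: bytes) -> bytes:
    """Split the payload into data-chunks, then repeatedly group the
    resulting hashes into pointer-chunks until one root hash remains."""
    hashes = [save_page(payload[i:i + PAGE_SIZE])
              for i in range(0, len(payload), PAGE_SIZE)]
    if not hashes:
        hashes = [save_page(b"")]
    while len(hashes) > 1:
        per_page = PAGE_SIZE // HASH_SIZE    # hashes that fit in one page
        grouped = [b"".join(hashes[i:i + per_page])
                   for i in range(0, len(hashes), per_page)]
        hashes = [save_page(g) for g in grouped]
    return hashes[0]
```

Since every intermediate page is saved in the store, the original file can be rebuilt from the single root hash, which is what allows the note above about reconstruction to hold.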
The proposed design
- The DAL node has 3 modes:
- Coordinator mode:
- gets the payload, splits it into pages of 4096 bytes each, computes the final root page hash -> sends it to the DAC members for signing -> gets back the signed root page hashes -> aggregates those signatures.
- Committee member mode
- gets the root page hashes from the coordinator -> downloads and stores the corresponding payloads based on their root page hashes -> signs them -> sends the signatures back to the coordinator.
- Observer mode
- gets the root page hash from the coordinator -> downloads and stores the payload based on the root page hash -> when the rollup node asks for missing data, forwards the request to the DAC members.
Below are the details of the DAL node:
DAC coordinator
The DAC coordinator consists of several components:
- an RPC server
- an entrypoint to the DAC
- a data streamer
- a signature manager
- RPC server
The RPC server serves requests from users and receives signatures from DAC members.
- PUT /dac/payload: the user requests the DAC to store and sign a payload.
- GET /dac/pages/{page_hash}: retrieve the contents of a page by its hash.
- PUT /dac/signatures/{root_page_hash}: collect a signature for a root page hash from a DAC member.
- Entrypoint to the DAC
PUT /dac/preimage is the entrypoint to the DAC coordinator, where payloads are sent.
- After receiving a payload, the coordinator computes the Merkle tree of pages and then stores each page on disk.
- The coordinator also computes an expiration level by summing the current_level of the L1 node that the coordinator uses to track heads and the expiration_lag specified in the node configuration (for example, current_level 1000 with expiration_lag 50 gives expiration level 1050).
Then the root page hashes and expiration levels are streamed via the data streamer.
- The data streamer
After receiving the root page hashes and expiration levels from the coordinator, the data streamer is responsible for providing them to the committee members.
The root page hashes are added to a Lwt_stream.t by the message aggregator.
A stream server that returns the next element of the stream can then be defined.
An API of a Data_streamer can look like:
The coordinator will:
- manage a Data_streamer.t with a streamer.
- provide a streamed endpoint (which will be called by committee members and observers).
The function make_subscription is triggered when the streamed endpoint is called, on the coordinator side.
On the user side, this returns a Lwt_stream.t and a stopper for the function that monitors the root page hashes that have been streamed.
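As a rough model of this subscription mechanism, here is a Python sketch (queues stand in for Lwt_stream.t; the names DataStreamer, make_subscription and publish are only suggestive, not the actual API):

```python
from queue import SimpleQueue

class DataStreamer:
    """Minimal pub/sub model: the coordinator publishes root page
    hashes; each subscriber receives its own stream of them."""

    def __init__(self):
        self.subscribers = []

    def make_subscription(self):
        # Called when a committee member or observer hits the streamed
        # endpoint; returns a stream plus a stopper to unsubscribe.
        q = SimpleQueue()
        self.subscribers.append(q)
        stopper = lambda: self.subscribers.remove(q)
        return q, stopper

    def publish(self, root_page_hash, expiration_level):
        # The coordinator pushes each (hash, expiration) pair to
        # every currently subscribed stream.
        for q in self.subscribers:
            q.put((root_page_hash, expiration_level))
```

Usage: a subscriber calls `make_subscription()`, reads pairs from the returned queue, and calls the stopper when done, after which it no longer receives published hashes.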
- The signature manager
PUT /signature/{root_page_hash} (this includes the root page hash and expiration level) is the endpoint of the signature manager. The DAC members use this endpoint to communicate with each other.
The responsibilities of the signature manager are:
- Receive signatures and verify that each signature is valid.
- Prevent malicious actors (an incorrect signature would fail signature verification on the kernel side, and the whole payload corresponding to the root page hash would be discarded).
- The DAC committee does this job (the signature manager needs to know the list of DAC members). Currently, the list of DAC members is stored in the DAL node configuration. In the future, it may be stored in the DAC node configuration instead.
- A verified signature is stored in the signature storage.
The signature storage is a (root_page_hash, expiration_level) list option. Each position in this list corresponds to the signature of a DAC member if available, otherwise None.
When adding a new signature, the DAL node has to check whether a sufficient number of signatures attesting the availability of the data has been collected for the root page hash. This threshold is stored in the DAC node configuration.
- The signature manager computes the aggregate signature (combined with the root page hash -> produces the external message -> injected into L1).
At the end of this process the signature manager posts the new signature to an external endpoint (specified in the DAC node configuration).
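The per-member signature slots and the threshold check can be sketched like this (an illustrative Python model; real aggregation uses BLS aggregate signatures, which concatenation here only mocks):

```python
class SignatureManager:
    """Sketch: one optional signature slot per committee member,
    with a threshold check before aggregation."""

    def __init__(self, members, threshold):
        self.members = members        # list of member public key hashes
        self.threshold = threshold    # required number of signatures
        self.storage = {}             # root_page_hash -> [sig or None]

    def add_signature(self, root_page_hash, member, signature):
        # Store the signature in the slot of the signing member and
        # report whether enough signatures have now been collected.
        slots = self.storage.setdefault(
            root_page_hash, [None] * len(self.members))
        slots[self.members.index(member)] = signature
        return self.enough(root_page_hash)

    def enough(self, root_page_hash):
        slots = self.storage.get(root_page_hash, [])
        return sum(s is not None for s in slots) >= self.threshold

    def aggregate(self, root_page_hash):
        # Placeholder for the real BLS aggregate signature.
        slots = self.storage[root_page_hash]
        return b"".join(s for s in slots if s is not None)
```

Once `add_signature` reports the threshold is reached, the aggregate can be combined with the root page hash into the external message to inject into L1.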
The committee member
- The Data_stream is how the committee member discovers the root page hashes advertised by the DAC coordinator.
- After receiving a new root page hash, the DAC member downloads the data from the DAC coordinator, using GET /page/{page_hash}.
- Checking the {page_hash}:
- compute the hash of the downloaded page
- compare it with the requested {page_hash}:
- Mismatch: the download terminates and the DAC member does not sign the (root page hash, expiration level) pair.
- Match: proceed to sign the (root page hash, expiration level), then communicate the signature to the DAC node (by calling the dac/signature/{root_page_hash} endpoint).
The GET /page/{page_hash} endpoint will also be used by observers to retrieve missing pages, when requested by the rollup node.
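The hash check a committee member performs on a downloaded page can be sketched as follows (Python; sha256 stands in for the real hashing scheme, and `download`/`sign` are hypothetical callbacks, not actual functions from the codebase):

```python
import hashlib

def verify_page(requested_hash: bytes, page: bytes) -> bool:
    """A member only signs if the downloaded page hashes back to the
    hash it requested (sha256 stands in for the real scheme)."""
    return hashlib.sha256(page).digest() == requested_hash

def on_new_root_hash(root_hash, expiration_level, download, sign):
    # download: fetches page contents, i.e. GET /page/{page_hash}
    # sign: signs the (root page hash, expiration level) pair
    page = download(root_hash)
    if not verify_page(root_hash, page):
        return None          # mismatch: terminate, do not sign
    return sign((root_hash, expiration_level))
```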
The observer
The observer lives on the same host as the rollup node.
- Rollup node: the observer downloads pages from the DAC coordinator and saves them into its own page storage.
HEAD /dac/notify_missing_page/{page_hash} is the endpoint of the observer; it can immediately return a 200 response.
The observer then broadcasts a request to all of the DAC members to retrieve the page.
The rollup node
dac/notify_missing_page/{page_hash} is an asynchronous endpoint. The rollup node uses this endpoint when the PVM requests a page that the rollup node does not have in its own page storage.
It can be asynchronous because the observer lives on the same host as the rollup node.
4. What is the current state of the implementations, tests, etc.?
The implementation of the coordinator
Milestones:
Goal of the coordinator:
- An RPC server: handle requests from users and receive signatures from DAC members.
- An entrypoint to the DAC: handle the payloads, compute the Merkle tree of pages and store them on disk. The (root_page_hash, expiration_level) list option is used by the data streamer.
- A data streamer: provide the (root_page_hash, expiration_level) pairs to the committee members.
- A signature manager: receive signatures, verify whether each is valid, store verified signatures in the signature storage, and add new (aggregate) signatures.
The milestone "Production and streaming of pages" handles:
- the RPC server
- the entrypoint to the DAC
- the data streamer
The milestone "Signature verification and injection" handles the rest (described in its own section below).
Production and streaming of pages
- the DAL node can run in coordinator mode
- RPC server:
- provides an RPC endpoint to fetch a single page
- provides an RPC endpoint to collect signatures from signers
- the produced root page hashes are streamed to signers
List of MRs:
- 7310: move protocol-dependent logic to its own plugin.
Reason: isolate the protocol-related logic of DAC from DAL.
Action: move the DAC RPC server and related files from the DAL plugin to a new DAC plugin.
- in manifest/main.ml:
- add the function val dac,
- add the ?dac parameter in make,
- define dac to register path//lib_dac as a protocol-specific library for DAC,
- _dac_tests registers path//lib_dac/test for testing.
- in src/proto_alpha/lib_dac_plugin/test/test_helpers.ml:
- define a module Unit_test:
spec: string -> unit Alcotest_lwt.test_case list -> string * unit Alcotest_lwt.test_case list: an example of calling this function
- in src/lib_dal_node/dal_plugin.ml(i) and src/proto_alpha/lib_dac_plugin/dac_plugin_registration.ml: define the plugin for the DAL node
- in dal_plugin.ml: define functions to register a DAL plugin, and to get a DAL plugin given a hash, where a module T of the DAL plugin has a module Proto that is a Registered_protocol.T.
- in dac_plugin_registration.ml: the module Plugin has a module Proto that is a Registerer.Registered
- Now the DAL node registers both the DAL and DAC plugins:
src/bin_dal_node
RPC_server.ml
daemon.ml
event_legacy.ml
node_context.ml
- 7345: this MR renames the endpoints of the DAC node server from /plugin/dac/<endpoint_name> to /<endpoint_name>. (This work has since been removed?)
- The task is done in `src/bin_dac_node/RPC_server`, `proto_alpha/lib_dac_plugin/RPC.ml` and `tezt/lib_tezos/rollup.ml`
- 7349: this MR is about PBT for DAC pages encoding; it does 2 things:
- factors out some helper functions in test_dac_pages_encoding.ml
- introduces a simple PBT for the round trip (serialization/deserialization) of the DAC payload posted via a single request to the new stack-based implementation.
- 7451: define the page storage signature for the DAC node and its implementation via the filesystem.
- in src/lib_dac_node/page_store.ml(i)
- 7524: this MR adds the coordinator client (DAC client) context.
- in src/lib_dac_client/dac_node_client.ml(i)
- 7548: this MR works on the streaming of root hashes.
- First, add a root hash streamer to the DAC node context
- in the file `src/lib_dac_node/node_context.ml(i)`
- Define an RPC `GET monitor/root_hashes`; this is a monitoring service.
- In the file `src/lib_dac_node/monitor_services.ml(i)`: define a GET service using the function `Tezos_rpc.Service.get_service`, and bind this service to a Tezos context using `Tezos_rpc.Context.make_stream_call`.
- After defining the RPC above, we need to register it with the RPC server.
- In the file `src/lib_dac_node/RPC_server.ml`, the function `register_monitor_root_hashes` takes a dac_plugin, a hash streamer, and a directory; the directory is registered using the function `Tezos_rpc.Directory.gen_register`.
- Then, in the function `register`, add a case for this newly registered RPC.
- Finally, emit a new event for handling a new subscription to the hash streamer. The event is handled in the file `src/lib_dac_node/event.ml` by the function `handle_new_subscription_to_hash_streamer`.
- 7595: this MR makes the implementation of Preimage_store uniform.
- 7621: this MR refines and implements the root hash streamer interface. The tests for the data streamer are at src/lib_dac_node/test/test_data_streamer.ml.
- 7812: this MR implements the Coordinator's POST /preimage endpoint. It also binds the coordinator's POST /preimage in tezt/lib_tezos/rollup.ml(i) with the function coordinator_store_preimage, and adds a Tezt test for it in tezt/tests/dac.ml in the function test_coordinator_post_preimage_endpoint.
- 7876: this MR removes the Lwt that isn't used inside the data streamer.
- 7389: this MR adds a new PBT for Merkle_tree.Make_buffered. (in review)
- 8164: this MR improves the Page_encoding module.
Signature verification and injection
List of MRs
- 7822: this MR adds the implementation of storing DAC member signatures, and a Tezt test for it.
- in src/lib_dac_node/signature_manager.ml(i)
The implementation of the member/observer
Milestone: https://gitlab.com/tezos/tezos/-/milestones/160#tab-issues
Goal of the observer
Provide the missing page to the rollup node when requested, by broadcasting a request to all DAC members to retrieve the page.
List of MRs
- 7738: this MR implements committee members downloading advertised pages from the coordinator.
- in src/lib_dac_node/page_store/page_encoding. The test is in tezt/tests/dac.ml; the payload example is in tezt/tests/dac_example_payloads/preimage.json
- 7750: this MR verifies the hash of downloaded pages in the committee member/observer, and adds tests for it.
- in src/lib_dac_node/page_store, tests in src/proto_alpha/lib_dac_plugin/test/test_dac_pages_encoding.ml
The implementation of the rollup node integration
Milestone: https://gitlab.com/tezos/tezos/-/milestones/162#tab-issues
The goal is to integrate the rollup node with DAC.
5. Git branch that contains my notes on code:
My onboarding branch: https://gitlab.com/marigold/tezos/-/commits/quyen@dac_oboarding
The code of DAC:
src/bin_dac_node: this contains the binary features (command line) of the DAC node.
src/lib_dac: this is the library of DAC. It contains:
- RPC services
- Module Coordinator
- The RPC services contain a module Coordinator. Inside is an RPC POST dac/preimage; this RPC sends a payload to the DAC coordinator (serializing the DAC payload). It returns a root_page_hash (representing the stored preimage), and pushes (streams) it to the subscribed observers and DAC/committee members.
Tezos_rpc.Service.post_service
Tezos_rpc.Query.empty
Tezos_rpc.Path.(open_root / "preimage")
There is a TODO to return the pair (root_page_hash, expr_level) instead of only root_page_hash.
(~output:P.encoding; this encoding is defined in src/lib_dac/dac_plugin, it is the encoding of reveal hashes)
POST dac/store_preimage: this RPC posts a payload using a pagination scheme and returns a (root_page_hash, raw bytes of external_message).
- Input: a given [pagination_scheme]; ~input:store_preimage_request_encoding: (payload: Bytes.t * pagination_scheme: Pagination_scheme.t). The DAC reveal data is split because a pagination scheme splits a payload into a set of pages of 4096 bytes each.
- Output: the base58-encoded root_page_hash and the raw bytes; ~output:(store_preimage_response_encoding ctx): (root_page_hash: Dac_plugin.hash * external_message: Bytes.t).
GET dac/verify_signature: verify the signature of an external message to inject into L1.
This RPC requests the DAL node to verify the signature of the external message. The DAC committee of the DAL node must be the same as the one used to produce the external message.
Tezos_rpc.Service.get_service
~query:external_message_query
GET dac/preimage: this RPC requests a preimage's page hash (this consists of a single page from cctxt) and returns its contents if found.
- Output: on success, the raw page as a sequence of bytes.
Tezos_rpc.Service.get_service
~query:Tezos_rpc.Query.empty
~output:Data_encoding.bytes
PUT dac/member_signature: this RPC verifies and stores the DAC member signature of a root page hash.
Tezos_rpc.Service.put_service
~query:Tezos_rpc.Query.empty
~input:(Signature_repr.encoding dac_plugin) (root_hash, signature: Tezos_crypto.Aggregate_signature.encoding, signer_pkh: Tezos_crypto.Aggregate_signature.Public_key_hash.encoding)
~output:Data_encoding.empty
GET dac/certificate: retrieve the DAC certificate associated with the given root_page_hash.
Tezos_rpc.Service.get_service
~query:Tezos_rpc.Query.empty
~output:(Data_encoding.option (Certificate_repr.encoding (module P)))
GET dac/missing_page/[page_hash]: the observer fetches the missing page from a coordinator node. The missing page is then saved to a page store before being returned as the response.
Tezos_rpc.Service.get_service
~query:Tezos_rpc.Query.empty
~output:Data_encoding.bytes
- DAC plugin: handles the DAC plugin, where one can register a new [Dac_plugin.T] or look up (get) a registered Dac_plugin.
- The register function derives and registers a new [Dac_plugin.T] given an [of_bytes]. The make_plugin input has to be built beforehand. src/proto_alpha/lib_dac_plugin/dac_plugin_registration.ml shows how to define make_plugin and call the register function to register this new plugin.
The function get takes a hash to search for.
- A module [Dac_plugin.T] contains:
- a module Proto: Registered_protocol.T, and functions that manipulate the hash.
- NOTE: the functions of_hex/to_hex take a string; should they be renamed to of_string/to_string instead?
- Signature representation: a representation of a committee member signature.
A signature is either a Bls12_381 signature or unknown bytes.
- Certificate representation: a representation of a DAC certificate.
- Pagination scheme: converts a payload into a set of pages of 4096 bytes each. These schemes are supported by the DAC node.
src/lib_dac_client: this is an instance of Tezos_client_base.Client_context that only handles IOs and RPCs. It can be used for keys- and RPC-related commands.
- It calls the functions defined in src/lib_dac/RPC_services.ml and adds the cctxt, for instance:
The implementation looks like:
src/proto_alpha/lib_dac_plugin: defines a client mode for interacting with a DAC node in observer mode. This client should only be used by components that are compiled with a protocol. It also registers the DAC plugin in an SC_rollup_reveal_hash.
- Questions:
- how does it differ from lib_dac? Is it an integration between lib_dac and the reveal channel of SORU?
- Why the type type t = unit in dac_observer_client? Is fetch_preimage a TODO? I think this file is a TODO.
- tests: contains the tests for:
- dac_plugin_registration, and
- the data encoding round trip between the DAC hash and the reveal hash:
- Binary
- Question: the test checks bytes dac hash roundtrip = bytes reveal hash roundtrip; why does the text say the "roundtrip hash is not equal"? The same applies to assert_equal_bytes.
- Hex
- Test equality between the DAC hash and the reveal hash
- bytes dac hash = bytes reveal hash
- string dac hash = string reveal hash
- Check that the JSON-encoded DAC hash string is a hex string. (TODO: it calls the wrong test case)
- dac pages encoding
src/lib_dac_node: the DAC node is the heart of DAC.
- Page manager: it contains the page manager and the page storage.
- Page manager: shows how to encode a page, and reads/writes to the page storage. When a DAC member requests a page from the page manager, the page manager sends that page (if found) to the DAC member. Page encoding is a library for encoding payloads of arbitrary size in formats that can be decoded by the SC-rollup kernels.
- A page has a maximum size.
- It also has versions: a contents version and a hashes version.
- It has a module Dac_codec: this module encodes a payload of bytes as a whole and returns the calculated root_page_hash (serializes/deserializes the payload).
- Serialize payload: take a raw payload and convert it into the root_page_hash (Dac_plugin.hash).
- Deserialize payload: take the Dac_plugin.hash and convert it back to the bytes (raw payload).
- Page_encoding
A page is either a [Contents] page (containing a chunk of the payload to serialize) or a [Hashes] page (containing a list of hashes). There is a [max_page_size] in bytes.
- Merkle tree: Merkle tree encodings of DAC pages/payloads are versioned. This allows multiple hashing schemes to be used. The branching factor is arbitrary, with >= 2 branches.
- A Merkle tree page is either:
- a Contents page: containing a chunk of the payload to be serialized; or
- a Hashes page: containing a list of hashes.
The size of both is bounded by [max_page_size]. The current bound for max_version is 127. The preamble (the prefix of each Contents or Hashes page) has 5 bytes: 1 byte denotes the version, and 4 bytes encode the size of the rest of the page.
- Serialize: Question: need a good explanation of the Merkle tree? TODO: rework the docstring in the code.
- A payload (a large sequence of bytes) is split into several pages of fixed size. Each page is prefixed with a small fixed-size sequence of bytes: the preamble of the page. Contents pages are the pages obtained directly from the original payload; they are the leaves of the Merkle tree.
- Each Contents page is then hashed. The size of each hash is fixed. The hashes are concatenated together. (Question: after obtaining the long sequence of hashes, are they split into same-size pages, with each page a node of the tree?)
- Deserialize: reconstruct the original payload from its Merkle tree root hash.
- Hash chain/Merkle list: an encoding of the DAC payload as a hash chain/Merkle list. This encoding is specific to the Arith PVM.
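The 5-byte preamble described above (1 version byte bounded by 127, then 4 size bytes) can be sketched as follows (illustrative Python; the big-endian byte order is an assumption of this sketch, not confirmed by the source):

```python
import struct

MAX_PAGE_SIZE = 4096  # assumed [max_page_size]

def encode_page(version: int, body: bytes) -> bytes:
    """Prefix the body with the 5-byte preamble: 1 byte for the
    version (bounded by 127) and 4 bytes for the body size."""
    assert 0 <= version <= 127
    page = struct.pack(">BI", version, len(body)) + body
    assert len(page) <= MAX_PAGE_SIZE
    return page

def decode_page(page: bytes):
    """Read the preamble back and return (version, body)."""
    version, size = struct.unpack(">BI", page[:5])
    body = page[5:5 + size]
    assert len(body) == size
    return version, body
```

The version byte is what lets a deserializer tell a Contents page from a Hashes page, so the reconstruction from the root hash can recurse correctly.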
- Page storage: a database/storage of pages, read/written by the page manager. This storage is backed by the local filesystem.
TODO
- Data streamer: how to communicate with the observer and DAC members
- Signature manager: it contains the signature manager and the signature storage.
- Signature manager: computes the aggregate signature and the bitset of DAC members that signed the root_page_hash.
- Signature storage
- Handler:
- External message/message aggregator: it receives a new_head from the L1 tracker and finalises the payload; it stores pages via the page manager when complete, and streams/sends the root_page_hash to the data streamer.
- Together, a DAC node also needs:
- A node configuration: a node can save/write a configuration file to [config.data_dir], load a configuration file from [data_dir] and change to a different mode.
- A node configuration has:
- a data directory: the path to the DAC node data. The default path is ${HOME}/.tezos-dac-node (Question: should it be octez-dac-node?)
- an RPC address: the address the DAC node listens on. The default address is 127.0.0.1
- an RPC port: the port the DAC node listens on. The default port is 10832.
- a reveal data directory: the directory where the DAC node saves pages. The default is ${HOME}/.tezos_rollup_node/wasm_2_0_0 (Question: should it be octez-rollup-node?)
- a DAC mode: the operation mode of the DAC; the modes are:
- Coordinator mode
A coordinator mode configuration contains:
- a threshold number (defining the range in which we expect our network to perform, i.e. the number of signatures required).
- a list of committee member addresses, i.e. their public key hashes.
- The implementation defines a type t and then uses Data_encoding to encode it.
- Committee member mode
A committee member configuration contains:
- the coordinator RPC address,
- the coordinator RPC port, and
- its own address, i.e. its public key hash.
- Observer mode
An observer configuration contains:
- the coordinator RPC address
- the coordinator RPC port
- Legacy mode (Question: when do we use the legacy mode?)
A legacy mode configuration contains:
- a threshold number
- a list of committee member addresses
- an optional DAC context configuration: it contains host and port information.
- an optional committee member address
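Putting the fields above together, a coordinator-mode configuration file might look like the following (purely hypothetical field names and shape for illustration; the actual schema is whatever the node's Data_encoding defines):

```json
{
  "data_dir": "${HOME}/.tezos-dac-node",
  "rpc_addr": "127.0.0.1",
  "rpc_port": 10832,
  "reveal_data_dir": "${HOME}/.tezos_rollup_node/wasm_2_0_0",
  "mode": {
    "coordinator": {
      "threshold": 2,
      "committee_members": ["tz4...", "tz4...", "tz4..."]
    }
  }
}
```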
- Node context: once the DAC node has its configuration, the node context is where the node makes its interactions with the outside world.
A node context has:
- a status that can change: ready or starting. A ready context contains the DAC plugin and the hash streamer of the data streamer.
- the node configuration.
- the information of the Tezos node context.
- an optional DAC node client (coordinator) context. This is used only for integration tests, where all the nodes are in legacy mode.
- the page storage information
- the node storage information; it is an Irmin store.
A node has these features:
init: initiates the DAC node context to [Starting].
- get/set: manipulate the status of a DAC node, get the node configuration, get the Tezos node context, get the DAC plugin, get a stored page, get node information stored in Irmin in a specific mode, get the list of DAC committee members, and get the DAC node client context, i.e. the coordinator.
- Event: declares all the events of the DAC node.
Below is a list of events of a DAC node:
- Starting the DAC node
- Shutting down the DAC node
- The DAC node is ready
- The directory of the DAC node is ready
- The DAC node storage is ready
- The RPC server is ready
- The DAC committee member is ready
- Tracking the Layer 1 node:
- A new head is updated
- Started tracking the Layer 1 node
- Tracking the protocol DAC plugin
- Daemon error
- The DAC node threshold has not reached its required number
- Handling the DAC committee members:
- A committee member does not exist in a wallet
- No committee member address provided; the DAC node will only deserialize payloads, not sign them.
- A committee member cannot sign DAC root_page_hashes. This happens when the tz4 address's secret key URI is not available.
- A committee member has no public key. There is a public key hash (tz4), but its public key is not available.
- Handling the data streamer:
- A new subscription of another DAC node is added to the data streamer (hash).
- Subscribed to the root hashes stream
- Handling root_hash:
- Received a new root hash via the monitoring RPC
- Finished processing a previously received root hash
- A new root hash is pushed to the data streamer
- Failed to process a root hash; raise an error
- Handling signatures:
- Cannot retrieve keys for an address. This happens when the DAC node is not in committee member mode.
- Handling getting the missing page:
- Successfully fetched the missing page for a hash.
- TODO?
- Daemon: the daemon of the DAC node.
It defines a daemon with different handlers:
- handle a streamed call
- resolve plugin and set ready: monitor heads and try to resolve the DAC protocol plugin corresponding to the protocol of the targeted node
- new head: monitor heads and store published slot headers by block hash.
- push payload signature: call PUT /dac_member_signature to submit a member signature.
- new root hash: only available when the DAC node configuration is in coordinator mode.
This daemon is able to:
- get all the committee member keys
- be run
- RPC server: defines all the RPC interactions with the DAC node, using Tezos_rpc_http and Tezos_rpc_http_server.
- Registering the RPC server:
- register for preimage
- post store preimage
- get preimage
- coordinator post preimage
- register get verify signature
- register get certificate of committee members
- register get missing page
- register put DAC member signature
- Start a legacy mode
- Shut down the RPC server
- Initialize the RPC server finalizer
- Monitor services: the monitoring services of the DAC node.
- root_page_hashes: returns a stream of root_page_hashes and a stopper for it. The stream is produced by calling the RPC GET /monitor/root_hashes
- Manage wallet: manages the wallet information (keys) for coordinators, committee members, and legacy mode. There is a helper for manipulating the wallet:
- get the keys of a context given a public key hash
- get a public key in a context given the address
- check whether one can verify a public key
- check whether one can verify a secret key URI.
- Coordinator keys
- Committee member keys
- Legacy keys
- Storage: the DAC node storage is backed by Irmin and stores this information:
- Signature:
- the primary key: the root_page_hash (the DAC plugin hash). Currently, the root_page_hashes are strings instead of [Dac_hash.t]; the reason is to avoid runtime functorization of the module (?).
- the secondary key: the committee member public key hash
- the value: the signature.
- Certificate:
- the key: the root_page_hash (the DAC plugin hash)
- the value: the signature and the number of witnesses.
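The two-level keying above can be modeled as a pair of maps (a Python sketch of the shape of the data only; the real store is Irmin-backed, and the class and method names here are invented):

```python
class DacStorage:
    """Model of the store: signatures are keyed by
    (root_page_hash, member_pkh); certificates by root_page_hash."""

    def __init__(self):
        self.signatures = {}    # (root_hash, member_pkh) -> signature
        self.certificates = {}  # root_hash -> (agg_signature, witnesses)

    def put_signature(self, root_hash, member_pkh, signature):
        self.signatures[(root_hash, member_pkh)] = signature

    def get_signature(self, root_hash, member_pkh):
        return self.signatures.get((root_hash, member_pkh))

    def put_certificate(self, root_hash, agg_signature, witnesses):
        # The certificate pairs the aggregate signature with the
        # number of members that witnessed (signed) the root hash.
        self.certificates[root_hash] = (agg_signature, witnesses)
```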
For tests:
tezt/lib_tezos/
tezt/tests/
6. What are the use cases for DAC?
References