The DAL node is the main component of the Data-Availability Layer (DAL). The objectives of the DAL node can be split in two:
The users of the DAL node can be:
This document does not discuss the P2P interactions. To simulate a P2P interaction, we rely on the API.
The goal of this document is therefore to describe the backend of the DAL node and an API adapted to the different users described above.
We describe in this section the various use cases for the DAL node:
We do not tackle the question of how long this data must be kept available by the DAL node.
These various use cases lead to a notion of *profile*, as described below.
Several data structures are involved in the DAL node. We give a rough description of them here.
The DAL node stores raw data that we call a *slot*. A slot has a fixed size. A slot can be split in two ways:

- *pages*, which are chunks of the original slot;
- *shards*, which encode chunks of the original slot with an erasure code. The main property of shards is that any sufficiently large subset of the shards can be used to reconstruct the original slot.
In the DAL node, those notions are abstract:

```ocaml
type slot = bytes
type shard
type page
```
Only the cryptographic primitives know how to interpret them.
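As an illustration, this abstraction could be summarized by a module signature along the following lines. This is a sketch only: the function names below are illustrative and are not the actual cryptobox interface.

```ocaml
(* Rough sketch (not the actual codebase signature) of how the DAL node sees
   these abstract types: only the cryptographic primitives can interpret
   shards and pages. *)
module type CRYPTOBOX = sig
  type slot = bytes
  type shard
  type page

  (* Split a slot into pages, i.e. plain chunks of the slot. *)
  val pages_of_slot : slot -> page list

  (* Erasure-encode a slot into shards. *)
  val shards_of_slot : slot -> shard list

  (* Reconstruct the slot from any sufficiently large subset of shards. *)
  val slot_of_shards : shard list -> slot option
end
```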
To prevent spamming, we must be able to prove that a `shard` or a `page` is related to the original slot. To do so, there is a notion of *commitment* and *proofs*:

```ocaml
type commitment
type shard_proof
type page_proof
```

Again, those data types can only be interpreted by the cryptographic primitives.
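An illustrative signature for the proof-related primitives could look as follows; again, the names are placeholders, not the actual cryptobox API.

```ocaml
(* Illustrative commitment/proof interface: a proof ties a shard or a page at
   a given index back to the commitment of the original slot. *)
module type COMMITMENT_SCHEME = sig
  type slot = bytes
  type shard
  type page
  type commitment
  type shard_proof
  type page_proof

  (* Commit to a slot. *)
  val commit : slot -> commitment

  (* Check that the shard (resp. page) at the given index belongs to the slot
     committed to by [commitment]. *)
  val verify_shard : commitment -> index:int -> shard -> shard_proof -> bool
  val verify_page : commitment -> index:int -> page -> page_proof -> bool
end
```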
The DAL node must interact with the cryptographic primitives to provide the various services mentioned above. The cryptographic primitives are parameterized by some constants:

- `slot_size`: the size of a slot;
- `number_of_shards`: the number of shards per slot;
- `number_of_pages`: the number of pages per slot;
- `redundancy_factor`: the erasure-code factor used for the shards.

There is a dependency between those parameters and the data stored by the DAL node, as well as the validation of that data. Consequently, it is essential to always be able to retrieve those parameters when using the API.
It is important for this design to assume that those parameters can change, in particular because some parameters are correlated with other parameters of the economic protocol. For example, `slot_size` depends on `time_between_blocks`.
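To illustrate the dependency between these parameters, here is a small sketch of the arithmetic one could expect, assuming the erasure code expands the slot by `redundancy_factor`. These formulas are illustrative, not the canonical cryptobox definitions.

```ocaml
(* Illustrative relations between the DAL parameters, under the assumption
   that the erasure code expands the slot by [redundancy_factor]. *)
let page_size ~slot_size ~number_of_pages = slot_size / number_of_pages

let shard_size ~slot_size ~redundancy_factor ~number_of_shards =
  slot_size * redundancy_factor / number_of_shards

(* Any subset of this many shards should suffice to reconstruct the slot. *)
let shards_needed ~number_of_shards ~redundancy_factor =
  number_of_shards / redundancy_factor
```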
The following diagram gives an overview of the storage of the DAL node, viewed as a tree. Whenever an item starts with `<` and ends with `>`, it is a variable whose range belongs to the corresponding datatype. Hence `<commitment>` means any valid representation (as an OCaml string) of a commitment. Only leaves contain datatypes. All datatypes have a fixed length.

This storage is an over-approximation of what would actually be stored.
For example, it may be faster to recompute the `commitment_proof` than to store it (the same argument applies to `page`). A benchmark should determine whether storing them is worthwhile. For performance reasons, it may also be interesting to provide a cache for the various data stored. We can expect that shards related to a slot header with status `Waiting_for_attestations` will be requested many times in a relatively short amount of time.

Finally, concerning concurrency, we should ensure that writes are done atomically and that the node supports several readers and writers (each connection can read/write data).
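For instance, write atomicity could be obtained with the usual write-to-temporary-file-then-rename pattern, serialized behind a mutex. This is only a sketch of the idea, not the node's actual storage code.

```ocaml
(* Minimal sketch of atomic writes for the store, assuming a single writer
   process: data is written to a temporary file and then renamed, which is
   atomic on POSIX file systems. Concurrent writers are serialized by a
   mutex; readers only ever see complete files. *)
open Lwt.Infix

let write_mutex = Lwt_mutex.create ()

let atomic_write ~path ~data =
  Lwt_mutex.with_lock write_mutex (fun () ->
      let tmp = path ^ ".tmp" in
      Lwt_io.with_file ~mode:Lwt_io.Output tmp (fun oc -> Lwt_io.write oc data)
      >>= fun () -> Lwt_unix.rename tmp path)
```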
The DAL node provides an API so that we can interact with it. This API can be used by external users such as rollup nodes or attestors, but it can also be used internally via the notion of *profile*, as explained in the next section.
`POST /commitments` (rename: `/slots` -> `/commitments`)

Input: a `slot` with a length of exactly 1 MiB.

`PATCH /commitments/<commitment>`

Input:

```ocaml
type input = {
  published_level : level;
  slot_index : slot_index;
}
```
`GET /commitments/<commitment>`

Returns the `slot` associated with `<commitment>`. Error if `<commitment>` is not known.

`PUT /slots/<slot_id>`

`GET /commitments/<commitment>/proof`

Returns the `commitment_proof` that the `<slot>` corresponding to `<commitment>` is below the size of the commitment.

`GET /commitments/<commitment>/headers`
```ocaml
type attestation_status =
  | Waiting_for_attestations (* The slot header was published onto the L1 but remains to be confirmed. *)
  | Attested                 (* The slot header was published and confirmed onto the L1. *)
  | Unattested               (* The slot header was published but not confirmed onto the L1. *)

type status =
  | [ attestation_status ]
  | Not_selected (* The slot header was published onto the L1 but was not selected as the slot header for this index. *)
  | Unseen       (* The slot header was never seen in a block. This can happen if the RPC `PATCH /slots/<commitment>` was called but the corresponding slot header was never included into a block. *)

type slot_header = {
  level : level;
  commitment : commitment;
  slot_index : slot_index;
  status : status;
}
```

Returns the slot headers associated with `<commitment>` that were published onto the L1. Considering one `<commitment>`, it can only have one status at a time, which may change depending on the L1.
Error if `<commitment>` is unknown.

`GET /levels/<published_level>/slot_indices/<slot_index>/commitment`

Returns the commitment associated with `<published_level>` and `<slot_index>`. This commitment must have the status `Confirmed` or `Not_selected`.

Error if no commitment is associated with level `<level>` and slot index `<slot_index>`. `Not_selected` is not handled (returns …).

`GET /levels/<published_level>/headers`
What if `<level>` is in the future?

`GET /levels/<published_level>/<slot_index>/status`

Returns `None` or a status:

```ocaml
type status = attestation_status =
  | Waiting_for_attestations
  | Attested
  | Unattested
```
`GET /slots/<published_level>/<slot_index>/slot`

Returns the corresponding `slot`, or `None`. Error if `<slot_id>` is not known.

`POST /commitments/<commitment>/shards`

Input:

```ocaml
type input = {
  with_proof : bool;
}
```

If `None` is given as input, then all the shards are computed. If `with_proof` is given, the proofs associated with each computed shard are also computed. Error if `<commitment>` is not known.

`GET /slots/<published_level>/<slot_index>/shards/<shard_index>`
Errors: `<slot_id>` is not known; `<shard_index>` is not in the correct range; `<shard_index>` is not known.

`GET /slots/<published_level>/<slot_index>/shards/<shard_index>/proof`

Errors: `<slot_id>` is not known; `<shard_index>` is not in the correct range; `<shard_index>` is not known.

`GET /slots/<published_level>/<slot_index>/page/<page_index>`

Errors: `<slot_id>` is not known; `<page_index>` is not in the correct range.

`GET /slots/<published_level>/<slot_index>/page/<page_index>/proof`

Errors: `<slot_id>` is not known; `<page_index>` is not in the correct range; `<page_index>` is not known.

`GET /level/<published_level>/confirmed_level`
Error if `<published_level>` is too far in the future.

`PATCH /profiles {profile}`

```ocaml
type profile =
  | Attestor of public_key_hash
  | Consumer of slot_index
  | Producer of slot_index
  | Observer of [`Slot of slot_index | `Shards of public_key_hash]
  | Archive
```

Error if `<slot_index>` is not in the correct range. Implemented in !7104 for the `Attestor` profile.
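For illustration, registering a profile over the RPC could look like the sketch below, using `cohttp-lwt-unix`. The JSON encoding of `profile` and the node address are assumptions, not the actual encoding defined by the node.

```ocaml
(* Sketch: register an [Attestor] profile via `PATCH /profiles`. The JSON
   payload and the RPC address are hypothetical, for illustration only. *)
open Lwt.Infix
open Cohttp_lwt_unix

let base = Uri.of_string "http://localhost:10732"

let register_attestor pkh =
  let body =
    Cohttp_lwt.Body.of_string
      (Printf.sprintf "{\"kind\": \"attestor\", \"public_key_hash\": %S}" pkh)
  in
  Client.patch ~body (Uri.with_path base "/profiles") >>= fun (resp, _body) ->
  Lwt.return (Cohttp.Response.status resp)
```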
`GET /profiles`

Implemented in !7104.

`GET /profiles/<public_key_hash>/attested_levels/<level>/assigned_shard_indices`

`GET /profiles/<public_key_hash>/<slot_id>`

Returns `true` if the DAL node has the corresponding shards associated with this public key hash for the given slot.

`GET /profiles/<public_key_hash>/attested_levels/<level>/attestable_slots`

Returns `Attestable_slots of slot_set | Not_in_committee`.

`GET /monitor/slot_headers`

`GET /monitor/<slot_id>/shards`

`GET /limit/<level>`
```ocaml
type output = {
  slot_size : int;
  number_of_shards : int;
  number_of_slots : int;
  page_size : int;
  shard_size : int;
  number_of_pages : int;
  endorsement_lag : int;
  shard_proof_size : int;
  page_proof_size : int;
  commitment_size : int;
}
```

Error if `<level>` is in the future or too far in the past.

The following RPCs can be used to delete the corresponding resources:
- `DELETE /slots/<slot_id>/pages/<page_index>/proof`
- `DELETE /slots/<slot_id>/pages/<page_index>`
- `DELETE /slots/<slot_id>/pages`
- `DELETE /slots/<slot_id>/shards/<shard_index>/proof`
- `DELETE /slots/<slot_id>/shards/<shard_index>`
- `DELETE /slots/<slot_id>/shards`
- `DELETE /slots/<slot_id>`
- `DELETE /slots/<level>`
- `DELETE /profile/<public_key_hash>`
Note: this interface could be refined depending on roadblocks identified while executing end-to-end tests. Not all of these RPCs are strictly needed to implement a demo.
We describe below the various use cases of this API depending on the various `profiles` defined below. Each profile can be implemented using the API provided above. For each profile below, except the archive profile, the `DELETE` RPCs can be called for slot headers that are too old.

Some of those profiles could be provided directly by the DAL node, such as the slot producer or the slot consumer, while other profiles could be implemented externally.
Producing a slot `SLOT` could be implemented as follows (a sketch of this flow is given after the list):

1. `POST /slot {SLOT}` -> returns a `commitment`. See/replaced by `POST /commitments`.
2. `PATCH /slot/<commitment> {level; slot_index}` -> associates a slot header with the `commitment`. See/replaced by `PATCH /commitments/<commitment>`.
3. `GET /slot/<commitment>/proof` -> returns a `commitment_proof`. See/replaced by `GET /commitments/<commitment>/proof`.
4. Publish the `slot_header` with the commitment proof onto the L1.
5. `POST /slot/<slot_id>/shards {with_proof=true}`. This call can be costly (between 5 and 10 seconds).
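The following OCaml sketch illustrates this flow against a local DAL node using `cohttp-lwt-unix`. The address and port (`localhost:10732`), the plain-string bodies, and the JSON payloads are assumptions for illustration; error handling and JSON decoding are omitted.

```ocaml
(* Sketch of a slot producer talking to a DAL node over its RPC interface.
   Assumptions: the node listens on localhost:10732 and bodies are plain
   strings / JSON as shown. *)
open Lwt.Infix
open Cohttp_lwt_unix

let base = Uri.of_string "http://localhost:10732"

let produce_slot ~slot ~published_level ~slot_index =
  (* 1. Post the raw slot; the node answers with its commitment. *)
  Client.post ~body:(Cohttp_lwt.Body.of_string slot)
    (Uri.with_path base "/commitments")
  >>= fun (_resp, body) ->
  Cohttp_lwt.Body.to_string body >>= fun commitment ->
  (* 2. Associate the slot header (published_level, slot_index) to it. *)
  let header =
    Printf.sprintf "{\"published_level\": %d, \"slot_index\": %d}"
      published_level slot_index
  in
  Client.patch ~body:(Cohttp_lwt.Body.of_string header)
    (Uri.with_path base ("/commitments/" ^ commitment))
  >>= fun _ ->
  (* 3. Retrieve the commitment proof, to be published with the header on L1. *)
  Client.get (Uri.with_path base ("/commitments/" ^ commitment ^ "/proof"))
  >>= fun (_resp, body) ->
  Cohttp_lwt.Body.to_string body >>= fun commitment_proof ->
  (* 4. (Out of scope here) publish the slot header and proof onto the L1. *)
  (* 5. Ask the node to compute and store the shards with their proofs. *)
  Client.post ~body:(Cohttp_lwt.Body.of_string {|{"with_proof": true}|})
    (Uri.with_path base ("/commitments/" ^ commitment ^ "/shards"))
  >>= fun _ -> Lwt.return (commitment, commitment_proof)
```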
A slot consumer tracking `SLOT_INDEX` could be implemented as follows (a sketch is given after the list):

1. `GET /monitor/slot_headers?slot_index=SLOT_INDEX&confirmed=true` -> returns a stream of slot headers.
2. For each slot header `SH` produced by the stream:
   - `GET /slots/<commitment>/slot`
   - `GET /commitments/<commitment>`
   - `GET /slots/<published_level>/<slot_index>/pages/<page>` to get a page of a slot requested by the PVM
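A minimal consumer-side sketch, again using `cohttp-lwt-unix` with an assumed address, fetching data for one confirmed slot header; the monitoring stream itself is omitted for brevity.

```ocaml
(* Sketch: fetch a slot (by commitment) or a single page (by slot id) from a
   local DAL node. Paths follow the list above; the port is an assumption. *)
open Lwt.Infix
open Cohttp_lwt_unix

let base = Uri.of_string "http://localhost:10732"

let fetch_slot commitment =
  Client.get (Uri.with_path base ("/commitments/" ^ commitment))
  >>= fun (_resp, body) -> Cohttp_lwt.Body.to_string body

let fetch_page ~published_level ~slot_index ~page_index =
  let path =
    Printf.sprintf "/slots/%d/%d/pages/%d" published_level slot_index page_index
  in
  Client.get (Uri.with_path base path) >>= fun (_resp, body) ->
  Cohttp_lwt.Body.to_string body
```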
An attestor with the public key hash `PUBLIC_KEY_HASH` can be implemented as follows (a sketch is given after the list):

1. `GET /monitor/slot_headers` -> returns a stream of slot headers.
2. For each slot header `SH` produced by the stream:
   - `GET /slots/SH/info`
   - `GET /profile/PUBLIC_KEY_HASH/SH` (TODO: the attestor must do this request at the appropriate moment)
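As an illustration, the last step could look like the sketch below (`cohttp-lwt-unix`, assumed address, and an assumed plain `true`/`false` response body).

```ocaml
(* Sketch: ask the DAL node whether it holds the shards assigned to [pkh] for
   the slot header [sh], mirroring `GET /profile/PUBLIC_KEY_HASH/SH`. The
   address and the plain "true"/"false" body are assumptions. *)
open Lwt.Infix
open Cohttp_lwt_unix

let base = Uri.of_string "http://localhost:10732"

let has_assigned_shards ~pkh ~sh =
  Client.get (Uri.with_path base (Printf.sprintf "/profile/%s/%s" pkh sh))
  >>= fun (_resp, body) ->
  Cohttp_lwt.Body.to_string body >|= fun answer ->
  String.trim answer = "true"
```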
An observer can observe slot headers via `GET /monitor/slot_headers`.
An archiver aims to archive everything that went through the DAL. It can be implemented as follows (a sketch is given after the list):

1. `GET /monitor/slot_headers` -> returns a stream of slot headers.
2. For each slot header `SH` produced by the stream:
   - `GET /slots/SH/info`
   - `GET /slots/SH/shards/<shard_index>`
   - `GET /slots/SH/shards/<shard_index>/proof`
   - `PUT /slots/SH/content {reconstruct=true}`
   - `GET /slots/SH/pages/<page_index>`
   - `GET /slots/SH/pages/<page_index>/proof`

Only the `archiver` profile will be able to know old slot headers.
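A sketch of the archiving loop for a single slot header, following the steps above; `cohttp-lwt-unix`, the address, and the way the shard count is obtained (normally from `GET /limit/<level>`) are assumptions, and local storage of the fetched data is out of scope.

```ocaml
(* Sketch: download every shard and its proof for one slot header [sh]. *)
open Lwt.Infix
open Cohttp_lwt_unix

let base = Uri.of_string "http://localhost:10732"

let get path =
  Client.get (Uri.with_path base path) >>= fun (_resp, body) ->
  Cohttp_lwt.Body.to_string body

let archive_slot_header ~sh ~number_of_shards =
  get (Printf.sprintf "/slots/%s/info" sh) >>= fun _info ->
  (* Fetch each shard together with its proof; handing them to local storage
     is omitted in this sketch. *)
  Lwt_list.iter_s
    (fun i ->
      get (Printf.sprintf "/slots/%s/shards/%d" sh i) >>= fun _shard ->
      get (Printf.sprintf "/slots/%s/shards/%d/proof" sh i) >>= fun _proof ->
      Lwt.return_unit)
    (List.init number_of_shards (fun i -> i))
```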
There are mainly two difficulties in handling the switch of protocol properly:

- DAL parameters can change, which means the cryptographic primitives may change too. Except for the `archive` profile, we only need to deal with at most 2 (maybe 3) protocol switches. The design ensures this is doable, even though it is not a necessity for the demo. In particular, it requires some anticipation to publish a slot header for the first level of a protocol, because the DAL parameters for this protocol must be known in advance.
- The DAL node uses a plugin mechanism to get protocol-specific data such as the DAL parameters. We must ensure we can switch from one plugin to another. This should be similar to what is currently done for the prevalidator. However, the DAL node must still be able to answer requests for a previous protocol for some amount of time.
This document does not aim to solve these problems. Instead, it aims to provide a design that allows solving them without redesigning the internals of the DAL node.