Brainstorming doc: [https://hackmd.io/FAXOf-PUTAOP--2_kuNb-A](https://hackmd.io/FAXOf-PUTAOP--2_kuNb-A)
## Plan
- WORKER: L1 mode and L2 mode
- For L1 mode
- [ ] track "injected" proofs through the pipeline
- [ ] when we submit the aggregated proof to the contract, populate the `offChainSubmissionMarkers` flags
- [ ] start injecting proofs from the off-chain submission DB
- [ ] track when off-chain submissions are fully aggregated
- [ ] "catchup" (determine how much of the next off-chain submission has already been aggregated)
## Scope
This design covers:
- Changes to the `UpaVerifier` contract.
- Design of the off-chain RPC endpoint and worker.
- L2 considerations (can we re-use off-chain design?)
It does not cover:
- How or when are fees charged?
- Incorporating blobs/DA layers
- Bundling off-chain submissions
- Batch filling
- Multi-worker
## Goals and requirements
1. One verifier contract that can handle both on-chain and off-chain submissions
- Must support both on/off-chain in the same aggregated proof so we have one "liquidity pool" of proofs.
- Must retain censorship resistance of on-chain submissions
- Ideally has some mechanism to "advance" the on-chain submission queue that does not cost too much gas
2. Off-chain RPC endpoint and worker
- Must be able to sequence both on-chain and off-chain submissions into the same aggregated proof.
## RPC Endpoint and Worker
We must aggregate on-chain submissions in order, but we have some flexibility in how we interleave off-chain submissions. To simplify the Verifier contract logic, we will maintain:
**Invariant:** Aggregated batches have all on-chain submissions at the front followed by all off-chain submissions at the back.
Other details:
- The endpoint should be able to take a signed transaction to be executed once a proof has been verified.
- Scheduling choices for worker (not exhaustive):
1. Greedily aggregate on-chain submissions first.
- May cause off-chain deadlines to be missed
2. Dovetail on-chain and off-chain submissions
## Verifier Contract Modifications
Currently we have
```solidity
function verifyAggregatedProof(
    bytes calldata proof,
    bytes32[] calldata proofIds,
    SubmissionProof[] calldata submissionProofs
)
```
with
```solidity
struct SubmissionProof {
    bytes32 submissionId;
    bytes32[] proof;
}
```
This `SubmissionProof` is needed for on-chain submissions to ensure that the proofIds in an interval belong to a submissionId matching the one stored in the proofReceiver. For off-chain submissions we no longer need this consistency check: we can simply store the current submission's proofIds and then, once we reach the end of the submission, compute the submissionId to mark verified from within the Verifier contract.
### `VerifyAggregatedProof`
```solidity
function verifyAggregatedProof(
    bytes calldata proof,
    bytes32[] calldata proofIds,
    SubmissionProof[] calldata submissionProofs,
    uint256[] calldata offChainSubmissionMarkers
)
```
We add one more argument, `offChainSubmissionMarkers`, which is a `bool[]` packed into a `uint256[]`. It marks each off-chain proofId with a bool; a 1 indicates the end of a submission. (We could make this a fixed-size array based on the aggregation batch size. Right now `offChainSubmissionMarkers` will only contain one entry, since our aggregation batch size is 32.) The gas cost impact is very low (2 x 32 bytes of calldata x 10 gas / byte $\approx$ 640 gas).
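For illustration, here is one way the packed flags could be read; the helper name `markerAt` and the bit order (lowest bit = first off-chain proofId) are assumptions, not the actual implementation:
```solidity
// Sketch: read flag i from the packed bool array. A set bit means
// "this off-chain proofId is the last proof of its submission".
function markerAt(uint256[] calldata markers, uint256 i)
    internal
    pure
    returns (bool)
{
    return ((markers[i / 256] >> (i % 256)) & 1) == 1;
}
```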
`verifyAggregatedProof` will process the (on-chain) proofIds using `submissionProofs` first. Once there are no more `submissionProofs`, we know the remaining proofIds belong to off-chain submissions, and at that point we start using `offChainSubmissionMarkers` to determine submission boundaries.
**Storage:** We will add a storage array `currentOffChainSubmissionProofIds` that stores the proofIds so far for the current partially verified off-chain submission.
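Putting these together, a rough sketch of how the off-chain tail of `proofIds` could be consumed. `numOnChain` and `computeSubmissionId` are placeholders for values the contract already knows how to derive, and `verifiedAtBlock` is introduced in the next section:
```solidity
// Sketch only: process the off-chain tail of `proofIds`, where the first
// `numOnChain` entries were already consumed by the `submissionProofs` pass.
for (uint256 i = numOnChain; i < proofIds.length; i++) {
    currentOffChainSubmissionProofIds.push(proofIds[i]);
    if (markerAt(offChainSubmissionMarkers, i - numOnChain)) {
        // End of an off-chain submission: derive its submissionId from the
        // accumulated proofIds and mark it verified (see `verifiedAtBlock`).
        bytes32 submissionId =
            computeSubmissionId(currentOffChainSubmissionProofIds);
        verifiedAtBlock[submissionId] = block.number;
        delete currentOffChainSubmissionProofIds;
    }
}
```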
### `verifiedAtBlock` to hold off-chain submission status
```solidity
mapping(bytes32 => uint256) public verifiedAtBlock;
```
For on-chain submissions we needed to keep track of `numVerifiedInSubmission` to record partial progress towards verifying the current submission. For off-chain submissions, `currentOffChainSubmissionProofIds` tracks partial progress instead, so we only need to store whether a submissionId was verified, not the number of proofs verified. To facilitate the payment note system, we will do a bit more and store the block at which a submission was verified.
### `SubmissionId` and `isSubmissionVerified`
We don't have ideal options for checking the verification status of a proof/submission:
1. Separate methods `isOnChainSubmissionVerified`, `isOffChainSubmissionVerified`
- Bad UX
2. One `isSubmissionVerified` that checks both on-chain and off-chain submission statuses.
- Either needs an extra bool flag
- Or pays extra cost when checking on-chain submission status: another SLOAD (~2k gas) to check `verifiedAtBlock` before `numVerifiedInSubmission`.
3. Use the first byte of submissionId to indicate on-chain vs off-chain submission?
- Still a bit awkward in the single-proof submission case.
- Also, the app contract computes the submissionId: it would have to add code to look up both on-chain and off-chain submissions if it submits through both channels.
Out of these, I'd choose **option 2** and pay for an extra SLOAD when checking the status of on-chain submissions (since the 2k gas is less impactful there). Also in some sense this SLOAD is offset by the packing of `numVerifiedInSubmission`.
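A minimal sketch of option 2, treating `numVerifiedInSubmission` as a mapping and using a placeholder `numProofsInSubmission` for however the contract recovers the on-chain submission's size:
```solidity
// Sketch of option 2. The off-chain check is the extra SLOAD: for on-chain
// submissions it will always miss, which is where the ~2k gas is paid.
function isSubmissionVerified(bytes32 submissionId) public view returns (bool) {
    // Off-chain submissions: verified iff verifiedAtBlock is set.
    if (verifiedAtBlock[submissionId] != 0) {
        return true;
    }
    // On-chain submissions: verified iff every proof in the submission has
    // been verified. `numProofsInSubmission` is a placeholder.
    uint256 total = numProofsInSubmission(submissionId);
    return total != 0 && numVerifiedInSubmission[submissionId] == total;
}
```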
### On-chain advancement
Aggregators commit to aggregating off-chain submissions by some deadline (whereas they have no such time constraint for on-chain submissions), so they may be incentivised to focus on off-chain submissions when they have a large backlog.
We need to give on-chain submitters *some kind of guarantee of liveness*, namely that aggregation of on-chain proofs will not be stalled due to an aggregator promising to aggregate off-chain submissions by some deadline.
- This condition should not require the worker to abandon an in-progress aggregation batch. For example: when the on-chain queue is not empty, no more than 5 consecutive aggregated batches may contain no on-chain submissions.
- Impl sketch (see the code after the note below): `UpaVerifier` has a counter `num_all_off_chain` that increments if `!(on-chain queue empty) && !(agg. batch has on-chain submission)` and is reset if `agg. batch has on-chain submission`.
- The above allows the worker to serve the on-chain queue only once every 5 batches.
> Note: this looks like it may cost ~10k gas per aggregation:
> - checking whether the on-chain queue is empty (~5k gas to access external contract storage)
> - incrementing the counter (~3-5k gas to read and update a nonzero storage slot)
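A sketch of the counter logic from the impl sketch above; the names, the `proofReceiver.queueIsEmpty()` call, and reverting as the enforcement mechanism are all assumptions:
```solidity
// Sketch: at most MAX_ALL_OFF_CHAIN consecutive batches may skip the
// on-chain queue while it is non-empty.
uint256 constant MAX_ALL_OFF_CHAIN = 5;
uint256 numAllOffChain;

function _enforceOnChainLiveness(bool batchHasOnChainSubmission) internal {
    if (batchHasOnChainSubmission) {
        numAllOffChain = 0;
    } else if (!proofReceiver.queueIsEmpty()) {
        numAllOffChain++;
        require(numAllOffChain <= MAX_ALL_OFF_CHAIN, "on-chain queue starved");
    }
}
```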
#### Proposal: minimal liveness
- Maintain a bounded queue of size `N` called `delayedLastSubmissionIdx`
  - fixed set of `N` storage slots, potentially packed so it can be updated by writing to a single slot (~2k gas?); see the packed layout sketched at the end of this proposal
- For each aggregated proof, call
```solidity
// Get the index of the last on-chain submission
uint64 lastSubmissionIdx = upaReceiver.getNextSubmissionIdx();
delayedLastSubmissionIdx.enqueue(lastSubmissionIdx);
```
- `delayedLastSubmissionIdx.tail()` is then always the `nextSubmissionIdx` as of `N` aggregated batches ago.
- If the following condition is true, the next aggregated batch MUST contain at least one on-chain submitted proof:
```solidity
// delayedLastSubmissionIdx.tail() is the last submission index
// as of N aggregated batches ago.
if (lastVerifiedSubmissionIdx < delayedLastSubmissionIdx.tail()) {
// We have not aggregated a full on-chain submission for N blocks,
// even though there are pending on-chain proofs.
// We must include some on-chain submitted proofs
}
```
> Note: what if there is a single invalid on-chain submission? Aggregator cannot make progress.
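As a sketch of the packed-slot idea, with an assumed `N = 4`: four `uint64` indices fit in one 256-bit word, so `enqueue` is a single SSTORE. Names and the choice of `N` are assumptions:
```solidity
// Sketch: delayedLastSubmissionIdx as 4 uint64 entries packed in one slot.
// The newest entry lives in the low 64 bits, the oldest in the high 64 bits.
uint256 private delayedLastSubmissionIdxPacked;

function _enqueue(uint64 idx) internal {
    // Shifting left drops the oldest entry and appends the newest.
    delayedLastSubmissionIdxPacked =
        (delayedLastSubmissionIdxPacked << 64) | idx;
}

function _tail() internal view returns (uint64) {
    // The entry enqueued N = 4 batches ago.
    return uint64(delayedLastSubmissionIdxPacked >> 192);
}
```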
#### Proposal: Challenge system
We already record the `submissionBlockNumber` of on-chain submissions. For aggregated off-chain submissions, we populate a map of the form:
```solidity=
mapping(bytes32 submissionId => uint256 blockNumber) aggregatedOffChainSubmissions;
```
If a challenger (restricted to the submitter?) supplies:
1. an on-chain submissionId for an unverified submission, submitted at block number $b$, and
2. $N$ off-chain submissionIds that were verified after block $b$,

then we have waited too long to service the on-chain queue, so the aggregator is penalized / the submitter gets a refund. We can store the on-chain submission referenced in the challenge to avoid repeated challenges against the same submission.
> Note: this measures submissions rather than blocks.
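A rough sketch of the challenge entry point; every name here (`challengeCensorship`, `challenged`, `N`, and `submissionBlockNumber` as a mapping) is an assumption rather than a finalized interface, and it assumes the `isSubmissionVerified` sketched earlier is public:
```solidity
// Sketch only. Duplicate entries in offChainSubmissionIds would also need
// to be rejected in a real implementation.
function challengeCensorship(
    bytes32 onChainSubmissionId,
    bytes32[] calldata offChainSubmissionIds
) external {
    uint256 b = submissionBlockNumber[onChainSubmissionId];
    require(b != 0, "unknown submission");
    require(!isSubmissionVerified(onChainSubmissionId), "already verified");
    require(!challenged[onChainSubmissionId], "already challenged");
    require(offChainSubmissionIds.length >= N, "not enough evidence");
    for (uint256 i = 0; i < offChainSubmissionIds.length; i++) {
        // Each cited off-chain submission must have been verified after b.
        require(
            aggregatedOffChainSubmissions[offChainSubmissionIds[i]] > b,
            "not verified after b"
        );
    }
    challenged[onChainSubmissionId] = true; // avoid repeated challenges
    // ... penalize the aggregator / refund the submitter
}
```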
## Re-using this on L2
We can re-use the above design on L2. If we simply remove all the on-chain codepaths, the design becomes the same as our previous L2 design.