This is a "hand wavy" description of a simple singleton action manager in the context of CometBFT/ABCI. ``` NOTE: on reflection, this idea is not proving general and robust enough to presue as a useful technique ``` In decentralized systems there are occasionally requirements to perform actions on external systems that require single actors, or are otherwise inefficient to make idempotent. (Explain L1 aggregate/settlement as an example) Decentralized CometBFT clusters go through predictable state transitions per round with a nominated proposer. This can be leveraged to trigger singleton actions as long: - app proposer is allowed to skip block proposal (on action error) - action duration added to block production rate is acceptable - action inputs are computed by each node uniformly from block - inputs are included in app hash & state (What happens if action succeeds but block production fails? Might have to require external action is ultimately idempotent, and this setup merely reduces duplicate actions as much as possible, but no more) ### App State App state should include the action inputs as part of the app hash. ```go type State struct { Size int64 `json:"size"` Height int64 `json:"height"` Action []byte `json:"action"` } ``` ### Propose The proposer for the round is chosen. This is the opportunity to process the action input from the previous round. Failure to do so should produce a nil proposal, allowing next proposer to try both processing the action and proposing next block. In `PrepareProposal` ```go // before proposing block if err := app.processAction(app.state.action); err != nil { log.Warn("error performing action", err) return nil, nil // produce nil proposal } // continue with block proposal ... ``` - Failure to perform action skips block proposal, allowing cometBFT to select the next proposer to try again (failover) - Success allows this round to continue ### Finalize Finalizing the next block & commit produces the next action to perform in the next round. In `FinalizeBlock` ```go // execute transactions & compute next action inputs input := // computed from block app.state.setActionInput(input...) // return finalize response ``` In `Commit` ```go app.state.Save() // includes input in app hash ``` ### Marcello's Heimdall Flow By the way, with regards to checkpoint mechanism, basically it works the following way: - `heimdall` holds a validators list which rotates based on stake, that defines the voting power (e.g. see this endpoint) - *Q: would be helpful to understand the details better here. Our assumption is that this proposer selection can't be the same as proposer selection for Heimdall blocks.* - when a new proposer (validator) is selected, it will propose a new checkpoint, a data structure (retrieved from `bor`) containing the `childChain` (L2) `RootHash`, `StartBlock`, `EndBlock` and some info about the proposer (e.g. see this endpoint) - this proposal will undergo a round of `tendermint` consensus (this will be replaced with `cometBFT` consensus) - *Q: what is the consensus 'on'. I would assume it's at least 1) that the proposer is valid - ie. it is their turn and 2) that the checkpoint is valid - ie. correct hash etc.* - if the proposal collects positive votes from 2/3+ of `heimdall` validators, it gets submitted to L1 Ethereum on this contract (and the proposer gets the corresponding reward when successful…) - the checkpoint may succeed or fail on Ethereum block (e.g. 
### Marcello's Heimdall Flow

By the way, with regard to the checkpoint mechanism, it basically works the following way:

- `heimdall` holds a validator list which rotates based on stake, and which defines the voting power (e.g. see this endpoint)
  - *Q: it would be helpful to understand the details better here. Our assumption is that this proposer selection can't be the same as the proposer selection for Heimdall blocks.*
- when a new proposer (validator) is selected, it proposes a new checkpoint: a data structure (retrieved from `bor`) containing the `childChain` (L2) `RootHash`, `StartBlock`, `EndBlock`, and some info about the proposer (e.g. see this endpoint)
- this proposal undergoes a round of `tendermint` consensus (this will be replaced with `cometBFT` consensus)
  - *Q: what is the consensus 'on'? We would assume it is at least 1) that the proposer is valid, i.e. it is their turn, and 2) that the checkpoint is valid, i.e. correct hash etc.*
- if the proposal collects positive votes from 2/3+ of `heimdall` validators, it gets submitted to L1 Ethereum on this contract (and the proposer gets the corresponding reward when successful…)
- the checkpoint may succeed or fail on Ethereum (e.g. `submitCheckpoint` through this contract method)
- based on that, `Ack` or `No-Ack` `heimdall` transactions are executed, and those messages rotate the sequence of the `checkpoint_proposer` list (therefore a new proposer for the next checkpoint gets selected)
  - *Q: is this done by the 'watcher' process, described below?*
- if `submitCheckpoint` is successful, a `NewHeaderBlock` event is emitted by the `rootChain` (L1) contract
- in the meantime, the `heimdall bridge` (a process running in parallel to the `heimdalld` service on every node) is polling some L1 contracts waiting for interesting events (including the one we are waiting for…)
- if the event arrives, the bridge detects it and p2p-broadcasts an `Ack` message to the network (which rotates the proposer and starts the process again...)
- if the event does not arrive within a certain time interval, a `NoAck` message is required, hence a new proposer is selected (and the process starts all over again…; see the sketch after this list)
  - *Q: is this just a timeout parameter?*
- FYI, all the messages, txs, and data structures are based on cosmos-sdk (and eventually extended with `heimdall` business logic)
- in case it helps, other info about the aforementioned endpoints is available in the swagger json file
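To pin down our reading of the ack/no-ack flow, here is a minimal hypothetical sketch of the bridge's watcher loop, assuming the no-ack path really is driven by a simple timeout (one of the open questions above). All names (`pollNewHeaderBlock`, `broadcastAck`, `broadcastNoAck`) are placeholders, not Heimdall's actual API:

```go
package bridge

import (
	"context"
	"time"
)

// NewHeaderBlockEvent is a stand-in for the event emitted by the
// rootChain (L1) contract on a successful submitCheckpoint.
type NewHeaderBlockEvent struct {
	Start, End uint64
	RootHash   [32]byte
}

// pollNewHeaderBlock would subscribe to the L1 contract logs; stubbed here.
func pollNewHeaderBlock(ctx context.Context) <-chan NewHeaderBlockEvent {
	return make(chan NewHeaderBlockEvent)
}

func broadcastAck(ev NewHeaderBlockEvent) { /* p2p-broadcast an Ack tx */ }
func broadcastNoAck()                     { /* p2p-broadcast a No-Ack tx */ }

// watchCheckpoint waits for the NewHeaderBlock event and Acks it, or
// No-Acks after the timeout so that a new checkpoint proposer is selected.
func watchCheckpoint(ctx context.Context, timeout time.Duration) {
	select {
	case ev := <-pollNewHeaderBlock(ctx):
		broadcastAck(ev) // rotates checkpoint_proposer; the cycle restarts
	case <-time.After(timeout):
		broadcastNoAck() // also rotates the proposer; the cycle restarts
	case <-ctx.Done():
		// shutting down
	}
}
```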