
Autonomous Contracts

Motivation

Some app-chains might want actions to be triggered automatically, based on certain conditions or at a specific frequency. This is usually done through keeper networks, but making it available natively has significant advantages, especially security-wise.

Specs

Scheduler Contract

Contract responsible for job registration and triggering. It should implement the following interface:

```rust
use starknet::account::Call;

struct Policy { ... }

struct Job {
    last_block_executed: u64,
    calls: Span<Call>,
    policy: Policy,
}

#[starknet::interface]
trait ISchedulerABI<TContractState> {
    fn trigger_job(ref self: TContractState, job_id: u64) -> Array<Span<felt252>>;
    fn is_ready(self: @TContractState, job_id: u64) -> bool;
    fn register_job(ref self: TContractState, calls: Span<Call>, policy: Policy) -> u64;
    fn update_job(
        ref self: TContractState, job_id: u64, calls: Option<Span<Call>>, policy: Option<Policy>
    );
    fn remove_job(ref self: TContractState, job_id: u64);
}
```
  • register_job: Generates a job id and stores it in the contract's storage along with its execution policy and the job's admin.
  • update_job: Updates the given job, only callable by the job's admin.
  • remove_job: Removes the given job, only callable by the job's admin.
  • trigger_job: Essentially does what ArgentAccount does for a multicall, then updates the job in storage (a rough sketch of this flow follows the list).
  • is_ready: Returns whether the corresponding job can be triggered or not based on its policy property. Can be used by off-chain services to schedule job execution.
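
As a rough illustration only (not part of the spec), the trigger path could look like the minimal sketch below. It assumes a `jobs` storage map, an `execute_calls` helper wrapping `call_contract_syscall` the way a multicall account does, and the `policy_is_ready` helper sketched further down under the policy question; all of these names are hypothetical, and how a `Span<Call>` is actually persisted in storage is glossed over.

```rust
// Hypothetical trigger path; `jobs`, `execute_calls` and `policy_is_ready`
// are illustrative names, not part of the interface above.
fn trigger_job(ref self: ContractState, job_id: u64) -> Array<Span<felt252>> {
    let mut job = self.jobs.read(job_id);
    let current_block = starknet::get_block_info().unbox().block_number;

    // Refuse to run if the job's policy is not satisfied yet.
    assert(
        policy_is_ready(job.policy, job.last_block_executed, current_block), 'JOB_NOT_READY'
    );

    // Execute every call of the job (same pattern as a multicall account
    // such as ArgentAccount), collecting each call's return data.
    let results = execute_calls(job.calls);

    // Record the execution so the policy can be evaluated next time.
    job.last_block_executed = current_block;
    self.jobs.write(job_id, job);

    results
}
```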

Q:

  • Should we make the Scheduler an Account contract? In that case, should we have one account contract per policy and perform the verification in __validate__?
  • Should the job registration happen at the contract level or in Substrate's runtime?
    Most probably, we want it to happen at the contract level to avoid unnecessary interactions with the runtime and to keep it provable.
  • How should the policy be defined?
    There are two kinds of policies: time and condition. Massa handles this by letting you trigger jobs based on storage changes and at a certain timestamp (a strawman encoding is sketched below).
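
Purely as a strawman for that last question, the two kinds of policies could be encoded along these lines; the condition variant and every field name here are assumptions, not decisions.

```rust
// Strawman Policy encoding; nothing here is final.
#[derive(Copy, Drop, Serde)]
struct ConditionSpec {
    // Illustrative: "trigger once this storage slot of `target` holds `expected_value`".
    target: starknet::ContractAddress,
    storage_key: felt252,
    expected_value: felt252,
}

#[derive(Copy, Drop, Serde)]
enum Policy {
    // Time-based: run at most once every `n` blocks.
    EveryNBlocks: u64,
    // Condition-based: run when some on-chain state matches.
    Condition: ConditionSpec,
}

// Pure helper behind `is_ready`; condition evaluation is intentionally left out.
fn policy_is_ready(policy: Policy, last_block_executed: u64, current_block: u64) -> bool {
    match policy {
        Policy::EveryNBlocks(n) => current_block >= last_block_executed + n,
        Policy::Condition(_) => false, // would read the target's storage and compare
    }
}
```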

Scheduling Pallet

Optional pallet that can be added to facilitate the execution of scheduled jobs.
The pallet looks at registered jobs and, using the on_idle hook (i.e. only with the remaining block space), executes the jobs whose conditions are met.

Priority Policy
If multiple jobs are meant to be called at a given block but there is not enough block space, the pallet has to decide which ones go first.
This can be defined at the pallet's config level; the exact format remains to be defined.
There is no guarantee that a job will be executed at a specific block, but provided the chain operates normally, execution should happen around that block.

Fees

A few designs are possible:

  1. The Scheduler is a regular contract; whoever calls the trigger_job function pays through their account contract. In the case of the pallet, that would be the sequencer's account, which should be rewarded for doing this.
  2. The Scheduler is an account contract and pays for all the jobs. It should keep track of a balance per job, and a way to refill that balance will need to be added (see the sketch after this list).
  3. Job registration can only be called by the sequencer. In this case, the pallet can give a quote by simulating the job's cost/frequency, and it only registers the job if the user provides a sufficient amount.
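
For design 2 specifically, the Scheduler would need some per-job prepaid balance bookkeeping. A minimal sketch, assuming a hypothetical `job_balances` storage map from job id to a balance in the fee token (the actual token transfer and fee measurement are out of scope here):

```rust
// Hypothetical per-job fee bookkeeping for design 2; names are illustrative.
fn fund_job(ref self: ContractState, job_id: u64, amount: u256) {
    // Assumes the caller has already transferred `amount` of the fee token
    // to the Scheduler (transfer logic omitted).
    let current = self.job_balances.read(job_id);
    self.job_balances.write(job_id, current + amount);
}

fn charge_job(ref self: ContractState, job_id: u64, fee: u256) {
    // Called from the trigger path: deduct the execution cost and refuse to
    // run jobs that can no longer pay for themselves.
    let current = self.job_balances.read(job_id);
    assert(current >= fee, 'INSUFFICIENT_JOB_BALANCE');
    self.job_balances.write(job_id, current - fee);
}
```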

In real-world use cases, most of the incentive for executing jobs will come from the MEV that can be extracted from them.

Possible Attack Vectors

  1. DoS by scheduling big jobs

The attack goes as follows: an attacker schedules a job, at exactly the same frequency as another job (or even every block, if they have the funds for it), that takes up to 80% of the block space reserved for jobs.

e.g. max steps/block = 8M, max steps/tx = 1M, block space left for jobs = 20% of the total block space (i.e. 1.6M steps)

In that case the attacker only needs to schedule jobs for ±1.5M steps to prevent anyone from executing jobs > 500k steps.

Conclusion: custom limits need to be set for transactions executed autonomously.
