**Date:** 2026-04-27
**Hosts:** Matt & Felix
**Time:** 13:30
**Matt:** All right. Welcome. We were talking about what was the best thing to do today, and I feel that right now there are a lot of different ideas around what account abstraction is, what is the most important thing to be doing with account abstraction. Felix and I have been spending the last few hours just trying to think through, go through documents that have been shared, and come up with a list of things that at least we feel are important goals. We can use that as a starting point to begin this discussion. If you feel it's missing things, we can try and add stuff. If you feel things are on there that aren't really core requirements of account abstraction, we can debate that.
Maybe one good goal to leave this room with is to say this is kind of the set of properties that a solution to account abstraction needs to have. If we do that quickly, then we can start talking more about how we achieve these properties with the native account abstraction proposal. I'm just trying to be realistic with the time we have today. We can talk about it more throughout the week, but just leaving this room saying, okay, we kind of all know and agree on a set of constraints for account abstraction — that sounds like a great place. Does that sound good?
So I sent our list on the breakout Telegram as well. I'll just start writing some of these things up here.
### Primary Goals
**Matt:** The first primary goal is we want to be able to support **alternative signature schemes**. Many things fall out from underneath this — multi-sigs; post-quantum, which is a big topic that lives under signature schemes but also lives in other places; things like R1 (secp256r1); many different things. We want to support any type of signature scheme. There's a debate sometimes about whether we should just support signature schemes that have precompiles or also support any signature scheme that you can write in the EVM. That's something that's up for debate.
Next one — **post-quantum signature aggregation**.
We want to be able to let users **batch their transaction calls**. One thing we've realized with the frame transaction is that there's a lot of demand to support batching at the top level of the transaction. With 7702, with 4337, a lot of thinking in the past was that we would just let the applications and the wallet developers build their own batching protocols. With the revitalization of the discussion with frame transactions, we're getting a lot of comments from wallets saying they would really like batching to be supported at the protocol level. That would mean we have some top-level frame or transaction object that says this is a list of calls.
**Speaker 8 (Question):** Does that include atomicity of that batch?
**Felix:** Yeah, that's what we're talking about. When we say call batching, what we mean is you want to be able to do multiple top-level actions and have them be atomic. There are extensions where people say I want to do multiple groups of atomic actions inside of a transaction. But this is basically what we're talking about — it's about the atomicity. And how do we marry this with the general model of transactions? Like, are there parts of the transaction that are covered by this and other parts which are not?
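A toy sketch of the atomicity being described — multiple top-level calls that all succeed or all revert. The `Call` shape and the balance-only state model are invented for illustration, not part of any proposal:

```python
from dataclasses import dataclass

@dataclass
class Call:
    target: str   # destination address (illustrative string)
    value: int    # amount to transfer

def execute_batch(state: dict, calls: list[Call]) -> dict:
    """Execute a batch of top-level calls atomically: work on a
    scratch copy and commit only if every call succeeds. Any
    failure leaves the original state untouched."""
    scratch = {**state, "log": list(state.get("log", []))}
    for call in calls:
        if call.value > scratch["balance"]:
            raise RuntimeError("call failed; whole batch reverts")
        scratch["balance"] -= call.value
        scratch["log"].append(call.target)
    return scratch  # the commit point: all calls succeeded
```

A real EVM would checkpoint and roll back state rather than copy it, but the property is the same: no partial execution of the batch is ever observable.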
**Speaker 8:** Account recovery — these are like key management topics, right? Transaction assertions.
**Felix:** Let's quickly go over this. When you say account recovery, mostly what we mean is just this wider topic of key management — how users manage their keys. There's right now this link between the public key and the account. Up here we're talking about signature schemes — it's open-ended, but this discussion can in some ways be about the cryptographic schemes you want to employ. But there's also this other thing where you want to have all these features to say I have an account on chain, and that account has one or more keys, but I want to be able to potentially change those keys later, or have some kind of recovery mechanism where you have an account that has one key that does the transactions and then another key that can be used to reset it. These kinds of schemes. It's not 100% related with the signature schemes — you need cryptographic signatures to make these features work, but they are more like smart account features.
What about **post-transaction assertions**? This is a big topic that we hope will be discussed in this context — I think this is more of a frame-transaction-specific thing, and with our proposal it becomes fairly easily possible. What is meant there is just this idea that when you perform an action on-chain, you want to be able to declare the intended outcome of that action. For example, when you're making a contract call that transfers tokens, you want to put an assertion into the transaction that says this transaction should only be successfully included — as in it should not be reverted — when the one and only outcome is the transfer of these tokens.
This gives you a layer of safety when it comes to trusting certain contracts. You could fully audit each contract you're interacting with, but we know that's not realistic — users will typically not do this. So you'd rather have the protocol enable you to state your assertions and be sure the transaction will only be included when they are met. Frederick is the one who is most behind this. It's more security-driven — us trying to increase the level of security that users of Ethereum can expect.
**Matt:** Do you want to say a quick thing on how you guys came up with this as a high priority, Tim?
**Tim:** Yeah, this came from 1TS, the Trillion Dollar Security initiative, where a lot of users were saying that one of the major problems the ecosystem is seeing today is these drainers and scams and phishing attacks, especially when interacting with malicious contracts. It might appear that by interacting with this contract you're going to receive an NFT, but on top of receiving the NFT you also end up giving permission to spend all your USDC. The idea is that by having these assertions you can say "I'm going to receive the NFT, but if anything else happens, then revert the transaction." It could also be that the contract looks fine, but between the time you review it, it might have been upgraded or had things altered. This would also be able to prevent those kinds of attacks against users.
**Matt:** You could even be simulating it locally with your own node, but if someone had access to your computer and changed the binary you were running, the simulation would seem fine. The assertion information is sent to your hardware wallet, displayed in clear text — "this is the thing that is going to happen" — and it's actually asserted at the end of the transaction. If it isn't upheld, the transaction will revert.
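The mechanics being described can be sketched in a few lines. The dict-based "state" and the predicate shape are invented stand-ins for real chain state; the point is only that declared assertions are checked after execution and any violation reverts:

```python
from typing import Callable

# A post-transaction assertion is a predicate over pre- and post-state.
Assertion = Callable[[dict, dict], bool]

def apply_with_assertions(pre: dict, tx: Callable[[dict], dict],
                          assertions: list[Assertion]) -> dict:
    """Run the transaction, then check every declared assertion.
    Any violation reverts the transaction to the pre-state."""
    post = tx(dict(pre))
    if all(check(pre, post) for check in assertions):
        return post
    return pre  # an assertion was violated: revert

# The NFT example from the discussion: expect exactly one new NFT,
# and no change to the USDC allowance.
got_one_nft = lambda pre, post: post["nfts"] == pre["nfts"] + 1
allowance_unchanged = lambda pre, post: post["usdc"] == pre["usdc"]

# A malicious contract mints the NFT but also grants itself an
# unlimited allowance — the second assertion trips and the tx reverts:
malicious = lambda s: {**s, "nfts": s["nfts"] + 1, "usdc": 2**256 - 1}
result = apply_with_assertions({"nfts": 0, "usdc": 0}, malicious,
                               [got_one_nft, allowance_unchanged])
# result is the untouched pre-state
```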
**Speaker 8:** I don't know if you already know, but there are already solutions out of protocol that are doing this. For example, on Linea there is a solution.
**Matt:** What's the solution from Linea?
**Tim:** Phylax. We've been talking quite a bit with them. They're approaching it differently — they are primarily focusing on protecting protocols, making sure that when a user interacts with a protocol, the protocol doesn't end up doing something that causes big problems on the protocol. But this is primarily focused on end users' protection.
**Speaker 8:** These simulations are mostly for accidents today — you don't want to accidentally swap with too high slippage and lose everything. But they are not good for security because it's trivial to make something that simulates differently than it executes later. With private order flows, it's quite easy for someone to sandwich your transaction. You review it, you submit it, and the builder puts an upgrade in front that changes the code to be malicious, and after, reverses it back. Your transaction now does something else because you never signed over the code that is actually executed. The code can change.
The other thing I wonder is how far does something like Gnosis Safe get us? They have account recovery, batching, probably a way to change keys.
**Matt:** Yeah, you're missing something — we want people to be able to use the layer one transaction pool to submit their transactions. This removes the need to have a relayer. We also want them to be able to use things like FOCIL. If you have a Gnosis Safe today, you can't use FOCIL when it becomes live on mainnet because you don't have an externally owned account that FOCIL understands. So you'd have to have someone relaying or have a gas account yourself to relay. That should also be a core goal.
**Speaker 8:** Another feature is the fee payer — sometimes you want it different. That's a Solana feature.
### Stretch Goals & Native AA Definition
**Felix:** Yeah, we're getting to the stretch goals. We didn't really state the obvious — for us, native AA means that you can send a transaction from a contract. The whole goal is making that work. What we get out of this is all these things. It's more about exploring how exactly contracts can do all these things in a good way. The basic premise is that with native AA, you can send a transaction from a contract. This is the reason why 7702 gets us halfway there — you can turn your key-based account into a contract which then can do some of these things, but it's not the same as saying there is an account that is fully detached from a key and it can still do all these things. This is where it gets confusing. There are some proposals that say "I solved AA," but it's just 7702.
Down here, TX assertions — let's discuss it later, it's a big topic, not 100% related with native AA. It fits really well with our proposal but isn't a really native AA thing.
**Stretch goals:**
**Default account discussion** — when users currently have key-based accounts (EOAs, externally-owned accounts), if we now introduce a scheme for native account abstraction, should EOAs be treated in an equivalent way, or is it a divided system where you can use one type of thing with EOAs and a totally different thing with contract-based accounts? In the frame transaction proposal, we have a thing that says if the account doesn't have code (so it's not a smart account), we just treat it as a smart account with a certain minimal feature set.
**Speaker 8:** That way you can inherit the alternative signature schemes even without...
**Felix:** There is some nuanced discussion there. For Matt, one of the things that came up early on is that this would also give us a chance to actually improve the user experience over time of these accounts because we can change the behavior of the default account. It would be done in native code, so we can keep adding stuff in forks. That would be an opportunity to say "your EOA can now do this." I'm not so much on board with these things, but it's something to consider.
**Matt:** One reason this came up is we spent quite a bit of time trying to talk to Rabby, which is one of the major wallets on Ethereum at this point. They don't support 7702, and it's been a big point of contention between Rabby, their users, and the whole 7702 work. The reason they don't support 7702 is they don't want to have any transaction type where they have to deploy code into their EOA account. They feel this is not a good user experience. We can debate them in circles about how you can batch this operation with whatever operation the user wants to do, but they have this fundamental belief that if they're going to deploy code to the account, it's not something they want to support. The default account is a nice way around this — their users can inherit all the account abstraction benefits without having to actually deploy any code.
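The default-account idea above can be sketched very simply — a codeless account resolves to a protocol-defined implementation instead of being rejected. The `DEFAULT_IMPLEMENTATION` marker and the account shape are invented for illustration:

```python
# Hypothetical protocol-defined implementation that every codeless
# account falls back to: single ECDSA key, minimal feature set, etc.
DEFAULT_IMPLEMENTATION = "protocol_default_account_v1"

def account_implementation(account: dict) -> str:
    """Return the code governing this account. EOAs (no deployed
    code) inherit the default, so they get AA features without
    deploying anything — and the default can be upgraded in forks."""
    return account.get("code") or DEFAULT_IMPLEMENTATION

eoa = {"balance": 5}                            # plain key-based account
smart = {"balance": 5, "code": "my_multisig"}   # deployed smart account
```

This is what lets a wallet like Rabby's users inherit AA behavior without any code-deployment step.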
**Felix:** Next point — **gas sponsoring** is also a really important topic. We want people to be able to send a transaction where the fees are paid by another account, and it doesn't have to be the sender of the transaction. We have specific ideas about this. It can go very deep because it relates to "can you pay your transaction fees in another currency?" We want this basic ability to sponsor the fees of the transaction in ETH. For us this is more required because people really want this. But paying fees in tokens is much more complicated — it requires either a relayer or a really complex on-chain action to ad hoc swap.
**Potuz:** Can it be done at the application layer? If you have this gas abstraction where someone else can pay, then a contract can pay.
**Matt:** One goal is we'd like this to happen without a relayer. Even if we just do this, you'll always be able to have some application system where a relayer can step in.
**Potuz:** You can do gas abstraction without a relay, and that enables paying with any token by just allowing Uniswap to be the contract that pays for gas.
**Felix:** Yeah, but the details get deep. The basic sponsoring thing is important. The advanced sponsoring with tokens — it's nice if we can solve it and we know how to solve it; it can be done with the frame transaction, but there are many details.
**Matt:** It gets harder if we think about the future of statelessness and you may not have all the state.
**Felix:** Then this point — **2D nonces / flexible nonces**. People want flexibility with their nonces. The 2D thing is one way to do it, but people don't want to be constrained by sequential ordering of transactions per account. We feel it's okay if we don't have this in the beginning, but it will come.
**Speaker 8:** What's the benefit of a flexible nonce?
**Felix:** Concrete example — privacy pools. Like Tornado Cash or Aztec. Let's say you have money in one of these pools and you want to redeem it. There's an object called the nullifier that gives you access to your funds. Now imagine where contracts can originate transactions — you have a transaction originated by the privacy pool, and any member can send a transaction from that pool redeeming their funds. If there's a nonce, you'd have to synchronize with all other members to get the ordering. So for these transactions, the nullifier effectively is the nonce because it can only be used one time to redeem. You'd want the protocol to say in this case the nonces are different. So it's mostly for native accounts, not EOAs. People have put a lot of justification for these things and it has benefits for specific applications.
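The contrast between a sequential nonce and a nullifier-style scheme can be made concrete. This is a toy sketch — real nullifiers come out of a ZK proof, here they are just opaque bytes:

```python
class SequentialNonce:
    """Standard account nonce: transactions must arrive in order,
    so concurrent senders from one account must coordinate."""
    def __init__(self):
        self.next = 0
    def use(self, nonce: int) -> bool:
        if nonce != self.next:
            return False
        self.next += 1
        return True

class NullifierSet:
    """Privacy-pool style replay protection: any unused nullifier
    is valid, in any order, so pool members need not coordinate —
    the nullifier effectively *is* the nonce."""
    def __init__(self):
        self.spent: set[bytes] = set()
    def use(self, nullifier: bytes) -> bool:
        if nullifier in self.spent:
            return False
        self.spent.add(nullifier)
        return True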
**Speaker 8:** It's like a UTXO instead of an account model.
**Felix:** Vitalik probably has more comments on this.
**Speaker 8:** Bitcoin in a way is account abstraction because each coin is not controlled by a signature, it's controlled by a script.
**Felix:** No, we went all this way just to discover Bitcoin in Paris. We will become Bitcoin at some point.
**Speaker 8:** But you could have ZK verifiers on Bitcoin.
**Felix:** This last thing — the **keystore wallet**. This is the bottom of the iceberg. The keystore wallet has a horrible name. The idea is you create an account on L1 — you deploy it to L1 and this would automatically give you that same account on all the L2s. It's a one-time "create my account in the whole Ethereum ecosystem." Right now when you interact with L2s you have to have an account there, which can be complicated, especially in native AA because the key of the account is not necessarily attached to the address. If you want to do key management features, you'd have to repeat all your key management actions on all L2s. So you want this stuff to depend on operations like L1SLOAD where the L2 can read your account on L1, access the configuration, and apply that on another chain. If we get to this point today, I'd be surprised.
**Matt:** It's not even just L2s — could be other L1s that also use K1.
**Vitalik:** The other one is privacy protocols — if you have 100 Railgun coins, you use the same technology to send one transaction to change the key to all of them without linking them to each other.
**Matt:** One core goal somewhat related — we need to make sure there are unified account addresses for your smart account. We can't have a world where a user has one address on Ethereum mainnet and a different address on an L2 even though it's the same contract code. We're mostly expecting we'll use 7997 to have factory contract deployment.
**Felix:** How the account deployment works is a big topic — how exactly will you install your smart account to the chain.
**Speaker 8:** We still have the legacy transaction without chain ID that gives you the same address on every network.
**Felix:** Yeah, you can do the universal deployment method, but we're thinking about this in the context of post-quantum. It will become unviable. We want to replace it with things like 7997 — universal deployment proxy. Basically just CREATE2 as a contract.
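The CREATE2-style derivation that makes addresses chain-independent is `keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:]` — nothing chain- or nonce-specific goes in, so the same inputs give the same address everywhere. A sketch, with `sha256` standing in for keccak-256 since the latter isn't in the Python stdlib:

```python
import hashlib

def deterministic_address(deployer: bytes, salt: bytes,
                          init_code: bytes) -> bytes:
    """CREATE2-style address derivation. Real Ethereum uses
    keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:];
    sha256 is a stand-in here. No nonce or chain ID is involved,
    which is exactly why the address is the same on every chain."""
    h = lambda b: hashlib.sha256(b).digest()
    return h(b"\xff" + deployer + salt + h(init_code))[12:]
```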
**Speaker 8:** How much of this is tied to the signature schemes? I think we have two things mixed together. One is the signature scheme, which could just be, instead of v, r, s, a prefix plus binary data. The other is AA itself.
**Felix:** It's never that simple. The main thing for us is that we want people installing contracts to the chain or somehow proxying to contracts to become their account. In this model we have to think about how they actually get the account on chain. So we have to figure out how they can do that.
Okay, so this was everything — the whole goals thing. Do you guys have more?
### Additional Topics
**Speaker 8:** CowSwap limit orders. You pre-authorize something and then someone else can randomly spend your money according to some rules until you cancel. Partial fills. Every hour you get a little bit of transfers.
**Matt:** This is just talking about arbitrary validation. We're talking signature schemes — maybe one thing missing is arbitrary validation logic. In your CowSwap example, related trades — you can also imagine daily spending limits.
**Felix:** Doesn't have to be a signature.
**Vitalik:** This definitely requires talking to a builder, but you can go in the other direction and literally stick a zero slippage requirement into the validation phase.
**Felix:** Specifically we didn't want to frame this in the context of the frame transaction because we think you can solve all these things in a really nice way. We don't just think about replacing the signature scheme but also want people to have these arbitrary kinds of rules. With multi-sig, if you want to originate a transaction, that's not just signature verification — there's management of members.
**Matt:** We could potentially list out all known signature schemes or things we'd want to support, and there's probably still more things you'd want in the validation.
**Speaker 8:** It should just not be enshrined — like we can only do these three signatures now.
**Felix:** For us this has been a big goal — to keep this open. There's the counterpoint that one of the big criticisms of the frame transaction proposal is that if we make validation just like arbitrary computation (EVM), there have always been people who say "why can't we just list out these ten things we want to do and make a purpose-built shrink-wrap solution that does these ten things?"
**Speaker 1:** Ben, you've been arguing more from that side — how can we satisfy these use cases? Frame transactions are quite convoluted. Why don't we directly address the problem?
**Felix:** We think with the frame transaction we directly address the problem.
**Matt:** Like Ethan is saying — let's support CowSwap and let me sign some special limit order.
**Speaker 1:** I mean CowSwap is just a permit-equivalent signature.
**Matt:** Permit equivalent signature. So you have... if you want to trade your ETH? You'd have to wrap it to WETH.
**Speaker 1:** It's a smart contract that has "I accept batch, I accept permit, I accept..."
**Felix:** What is really the advantage of a completely open signature scheme given that we're forcing everybody to upgrade every N months?
**Speaker 3:** What do you mean forcing? What do you mean we're forcing everyone to upgrade?
**Speaker 3:** If we had a list of signature algorithms, everybody gets them after six months if we add a new one.
**Felix:** For us there are multiple benefits. This is a chance for us to also decouple ourselves from validation a little bit because we no longer have to — people can do whatever they want because they have arbitrary compute. It becomes more like we extend the platform for them to enable them to do more things. We don't tell them exactly what to do — we give them the computation platform and they can implement it. That's empowering the user in the maximum way.
**Speaker 12:** It's shifting responsibility from core development to the wallets or whoever wants to use it. The problem is wallets don't want that responsibility.
**Felix:** Yes and no. We feel like this is a philosophical debate.
**Speaker 12:** If we say that 10 important developers are vouching for these 10 contracts that everybody can use, it can shift the responsibility back to Ethereum core developers.
**Felix:** We'd be fine with this as long as users still have the option to do it themselves. We could come up with ERCs that specify validation, and you could vouch for these ERCs.
**Speaker 1:** There's a disproportionate thing — if we approve it as a precompile, the cost to run it is very different than if somebody wrote a smart contract to validate a signature. There's another philosophical thing — if you ask Vitalik for example, we should just remove the precompiles anyway.
**Felix:** Wait, how are you going to remove the precompile with post-quantum?
**Speaker 1:** Looking at making transactions very expensive.
**Felix:** We're not talking about philosophical arguments, we're not talking about next fork actions. We have to agree on the high-level direction. This has been the whole problem with the AA debate — people want different things and it's very subjective. We have to set the high-level direction for the future. With precompiles, I personally think precompiles are okay, but everyone has their own view.
### Constraints Discussion
**Potuz:** I see you have the goals as high-level goals. I wonder if you should make also at a high level what your constraints are. For example, you've already made one — you don't want to have relayers. I talk about constraints because one I wanted to give you, which I'm sure you're aware of, is that arbitrary validation would break rollups. This is the biggest fear for the sequencer because the sequencer can actually input invalid transactions into the chain, and the rollup needs to deal with the fact that invalid transactions will be in the sequence. You don't want the sequencer to DOS the nodes on the L2. This is the biggest problem Arbitrum has now with frame transactions — they don't know how to solve that problem.
**Matt:** They put the call data on the chain — that's already pretty high cost.
**Potuz:** The L2 node is working at that limit. It has a cost on putting an invalid transaction. With arbitrary validation, the cost is still the call data, but the compute can be very high. You can DOS the chain easily by being the sequencer and putting invalid transactions on the sequence. They don't know how to solve this. So I wonder if this should be part of the constraints. Do you want to support L2s?
**Matt:** Have them put $100 associated with the block. If the block is invalid, that's provable, then they lose their deposit.
**Speaker 8:** I don't see that big of an issue, honestly.
**Speaker 3:** Listening to the AA debate, you end up talking about requirements and then a whole set of people talk about constraints. Unless you put both together, it's going to be really difficult because there's always an argument in both directions. You want to know what are the goals, what are the constraints we're prepared to deal with, and then something should fall out.
**Felix:** We can give a pretty concise update. Now we are at Ethereum Foundation, and EF recently made it clear where they stand. It means we have things like the **walk-away test** — we want to design a solution that doesn't require us to constantly fiddle with it. We want a long-lasting solution. That's a big constraint. **No relayers** for us is really important — even if most activity on chain will involve relayers, there has to be a way for users to do it without a relayer. This is what we stand for as an organization. Different for other people in this room working for other organizations who aren't constrained by the same values.
**Compatibility with statelessness** is a big topic. There's the concept of AA VOPS that people have been discussing.
It has to be compatible with the future — we have plans for stateless nodes and whatever solution we come up with for native AA has to work with this.
**Matt:** But what you just said about precompiles already kills ZK EVM. There's no way in ZK EVM you can prove post-quantum precompiles. It's gonna be extremely expensive.
**Vitalik:** On just that issue briefly — there are two types of post-quantum signatures: lattice-based and hash-based. If we go the hash-based route, the precompile already exists — it's the hash precompile. We can already make signatures work with hashes at between roughly 150K and 200K gas. For lattices there are basically two routes. The challenge is that once you get into lattices, there are a bunch of different options. The approach we're exploring is a precompile for vector math, which is the hard building block of lattices. It looks like it's getting gas costs down to the point where call data is close to being the larger component. Fundamentally, the reason this stuff is expensive in the EVM today is two things stacking. One is that almost no post-quantum stuff is big-int — it's all 32-bit math. You're using a 256-bit resource to do 32-bit things, so you have to charge for 256 bits, and inside a prover you have to take a 256-bit path. The other is that each operation also carries one unit of EVM overhead. If instead you pay one unit of EVM overhead per 16 or 1024 operations, then asymptotically the same things that make it viable in raw execution also make it viable in the prover.
**Felix:** It's a good point. It has to be compatible with ZK EVM.
**Matt:** At least it should make it clear that only hash-based signatures will be available in frame at the beginning, post-quantum-wise.
**Felix:** Yeah, definitely at the moment. The hash-based ones are where we know the most how to solve it. We have a really concrete proposal — how hash-based signatures are going to work in the context of aggregation, block sizes, mempool. Not fully done, but we've been working on it and feel it's a solvable problem.
**Speaker 9 (Julio):** I just think maybe you should remove from the goals the PQC aggregation because we don't have any actual concrete idea how to do it yet. The second thing — you can actually figure it out later. And third, I don't think there is any good way to do it either with frames or with any other proposal right now.
**Matt:** My idea is not that we have PQ aggregation in the transaction type, more that we would support aggregation in the future. You can easily build an AA solution that does not do a good job of supporting aggregation in the future. If the signature is available to be introspected in the EVM, you can't really later remove the signature.
**Speaker 9:** I don't know how you'd do post-quantum signature aggregation with frames.
**Matt:** My proposal — we add a signature object to the top level where you have four elements: the signature type (if we support multiple), the message you're signing over (the message hash), the public key (or hash of the public key), and the signature. That information is viewable except for the signature in the EVM. This is the critical part.
**Felix:** The only thing you really need for aggregation is that the EVM cannot have access to the signature itself — it cannot introspect it because that means you can remove it later. As long as we have that, the protocol can strip it away and handle the signature in another way — like segregated witness. We have proposals — there's a proposal for post-quantum signature aggregation, the lean multi-sig. It's a good proposal, making really good progress.
**Matt:** As long as we support that and we don't enshrine some mechanism of viewing the signature in the EVM, this box is basically checked.
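The structure Matt describes — a signature object whose raw signature is opaque to execution so the protocol can later strip and aggregate it — can be sketched directly. The class and field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SignatureObject:
    sig_type: int       # which scheme, if multiple are supported
    message_hash: bytes # what was signed over
    public_key: bytes   # or a hash of the public key
    signature: bytes    # opaque to the EVM: never introspectable

def evm_view(sig: SignatureObject) -> dict:
    """What execution may see: everything except the raw signature.
    Because no contract can depend on the signature bytes, the
    protocol is free to remove them later."""
    return {"sig_type": sig.sig_type,
            "message_hash": sig.message_hash,
            "public_key": sig.public_key}

def strip_for_aggregation(sigs: list[SignatureObject]) -> list[SignatureObject]:
    """Segregated-witness style: drop raw signatures once the block
    carries a single aggregate proof that all of them verified."""
    return [SignatureObject(s.sig_type, s.message_hash, s.public_key, b"")
            for s in sigs]
```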
**Felix:** With post-quantum signature aggregation — this is not going to be part of the frame transaction EIP. We just want to make sure this is possible later.
**Speaker 1:** Sounds more like a constraint.
**Felix:** It's more like a constraint, yeah.
**Speaker 1:** Isn't that 100% against allowing the EVM to validate the signature?
**Vitalik:** No. You need the signature to be a piece of call data that can only be accessed by a pure function. That pure function can be EVM and that's fine.
**Felix:** So you could submit it on chain — it's just kind of big and expensive. So back to optimizing for the user.
**Speaker 1:** But the flexibility you started with — where any other protocol can write their own validation...
**Felix:** This is good because it's composable. You're not constrained by putting a single PQ signature on your transaction — you can put three of them, or use PQ signatures in other ways. People will have to validate post-quantum signatures on-chain anyway, even for things that aren't transactions. Once we have that primitive, we can link it into transaction validation by making it accessible to the EVM in a specific way.
**Matt:** I don't think we are generally expecting people to implement their own cryptographic operation in EVM without a precompile or some support of a precompile. But we are building for a world where they have arbitrary other validity requirements. Like "I don't want my account to submit a transaction if my spending limit has been reached for today."
**Matt:** They're separate concerns, but both are validity concerns. You can't do a daily-limits check after you've already validated the transaction, because someone can grief gas costs from the account. If I've hit my daily limit, people can keep replaying transactions. So we don't want the gas to be paid for those.
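The point about checking the limit during validation rather than execution can be sketched as follows (the account shape is invented; the key property is that an over-limit transaction is rejected before any gas is charged):

```python
class DailyLimitAccount:
    """Toy account with a validation-phase spending-limit check.
    A transaction that would exceed today's limit fails validation,
    so no gas is charged and replays cannot drain the account."""
    def __init__(self, limit: int):
        self.limit = limit
        self.spent_today = 0

    def validate(self, value: int) -> bool:
        # Runs before any gas is paid: the arbitrary validity rule.
        return self.spent_today + value <= self.limit

    def execute(self, value: int) -> None:
        if not self.validate(value):
            raise ValueError("rejected in validation: no gas paid")
        self.spent_today += value
```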
**Speaker 1:** But you can only do that if somebody else is creating transactions that spend your gas.
**Matt:** Yeah. Depends on exactly what your account setup is.
**Felix:** For us it's about these problems. We want to find a proposal that addresses all these problems at the same time. Some are more important than others, but if we just address every problem individually it's not going to compose as well as if we think about all of them at the same time. That's how we ended up with the frame transaction. The proposal may be misunderstood at some points or there may be concerns. We're willing to go back. This isn't the frame transaction session — we're trying to talk it out with you because every time we bring it up, everyone misunderstands what we're trying to say. Can we just go back to the fundamentals and see where we're coming from?
**Speaker 8:** Just one more for aggregation — we have BLS today, so the same thing could already be done with BLS. We have a BLS signature and don't include it in the transaction, but it's segregated. When we build the block, we have one field in the block header that is the master BLS.
**Felix:** Exactly, this is what we want to do.
**Speaker 8:** And the one thing that's missing is just a commitment to the sender address.
**Felix:** Exactly. By PQ signature aggregation, we mean there has to be some kind of commitment in the block that says all signatures done in this block are valid, with a proof. You're compressing the computation of all signature validations into this thing.
**Speaker 8:** And we can drop the signatures as soon as the block is finished.
**Felix:** Exactly. It's similar to blobs — the signature only exists for the duration of validation, but at some point later you only have to prove it was done successfully, and this proof can be accessed in the EVM.
**Speaker 8:** BLS isn't post-quantum, but the mechanism is the same — doesn't have to be PQ.
**Felix:** This is why EF is also working heavily on this lean multi-sig — it's a replacement for BLS also for the beacon chain. The beacon chain relies very heavily on EC cryptography like BLS. We need a post-quantum signature that's aggregatable. The lean multi-sig is making really good progress. It's very secure — hash-based signature aggregated by a hash-based ZK scheme. If we pick the right hash function, this will never be broken. Unfortunately at the start they picked Poseidon because it had better numbers, and it was broken later. But if we pick a more conservative hash function, we expect this to last a long time.
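A toy sketch of the lifecycle described above, where per-transaction signatures exist only while the block is validated and are then replaced by a single aggregate commitment. A hash stands in for a real BLS or hash-based ZK aggregate, and every name here is hypothetical:

```python
# Sketch: signatures live only during block validation; afterwards the block
# keeps one aggregate commitment that all of them verified, like blob data
# that can be dropped after its retention window. (Hypothetical structure.)
import hashlib

def aggregate_commitment(senders, signatures):
    # Stand-in for a real aggregate proof: commits to the ordered senders
    # (the "commitment to the sender address" mentioned in the discussion)
    # plus the verified signatures.
    h = hashlib.sha256()
    for sender, sig in zip(senders, signatures):
        h.update(sender + sig)
    return h.digest()

def seal_block(txs, verify_sig):
    senders = [tx["sender"] for tx in txs]
    sigs = [tx["sig"] for tx in txs]
    if not all(verify_sig(s, sig) for s, sig in zip(senders, sigs)):
        raise ValueError("invalid signature in block")
    proof = aggregate_commitment(senders, sigs)
    # Signatures can now be dropped; only the aggregate proof is retained.
    stripped = [{"sender": tx["sender"], "payload": tx["payload"]} for tx in txs]
    return {"txs": stripped, "aggregate_proof": proof}
```

The point is only the shape of the mechanism: validate everything once, commit, then discard the bulky per-transaction material.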
### Mempool & Constraints
**Toni:** Looking at the listed features, the top half plays very well already with mempool constraints. Privacy pools you could say is basically 2D nonces plus gas abstraction but with ERC-20s maybe. As soon as we ignore ERC-20 sponsoring, it gets quite easy. So is it fair to assume we can get to good solutions if we ignore ERC-20s?
**Felix:** Honestly, R1 should never be — can you just forget about R1?
**Toni:** With ERC-20s you suddenly need a lot of state in the verify frame. As soon as you think about the rest from the EVM perspective, it's quite easy. I have no clue how much PQ signature verification costs, but the rest feels like it fits easily in 200-300K verify gas.
**Felix:** This is also a big constraint for us. We absolutely want native AA to be compatible with the public mempool because we identified that as one of the key issues with earlier account abstraction proposals like 7701 — they had a specialized mempool that has to be operated by specialized entities, and it was never really clear who would even run this. Whatever we come up with, at least most of these features have to be available to users using the public mempool and integrated with the protocol.
**Potuz:** I'd like to see what the order of priorities is. One priority for me would be a decentralized set of includers. If we ever move to lean, the set of includers should be similar to the set of attesters. It's hard for me to think of a world where AA like this works, the decentralized set of includers works, and those includers are also stateless, all at the same time. The interaction of the three — AA, FOCIL, and statelessness — seems very complicated. I'd like you to understand it and be able to say "if we go with this and this, then this third one cannot be included."
**Felix:** We have some thoughts. Bossi, you have thoughts on this, yeah?
**Bossi:** We have this public mempool thing — the public mempool is a requirement because to do FOCIL you need an operating mempool. All participants need pre-visibility of transactions so they can build an inclusion list. So mempool is a requirement for FOCIL but state isn't necessarily a requirement for FOCIL.
**Felix:** For statelessness, there are two ideas. One that's been around for a long time is **VOPS** — Validation Only Partial Statefulness. You don't have to keep all state, just some. With AA, any account abstraction scheme will require updating the VOPS definition because by definition you can access more stuff.
**Potuz:** If we want our users to use these kinds of transactions and be censorship-resistant, we want them to send these transactions to includers, so includers need state to validate them.
**Felix:** We have a specific idea. For the public mempool we have a document with three strategies. With native AA you can do anything — that's great, but not compatible with mempool. So it's all about policies — which kinds of transactions are actually possible in public mempool. You can put any transaction in the block, but only some go in the mempool. Defining these rules is a bit adjacent to protocol rules because mempool rules are never actual protocol rules.
We have a set of policies that can support most use cases. With strategy two there's the concept of **AA VOPS** — you keep only the accounts, their codes, and the top-N storage slots. You're removing deep storage — not keeping all the big storages of big contracts. It saves space.
**Matt:** AA VOPS takes like 20 gigs. So with 20 gigs you can be an includer.
**Felix:** That's one thing. The other strategy is **mempool sharding** — our proposal is specifically compatible with mempool sharding. However, that means we have to be more careful about the FOCIL definitions because you need more includers to cover all the state space. The way we define strategy 2 means you can keep only a part of the state, and from that range you keep accounts, their codes, and top-N storages. That further reduces stuff you have to keep, but comes with more constraints.
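A rough sketch of the AA VOPS state reduction as described above — keep each account's record, its code, and only its top-N storage slots. The structure and names here are hypothetical, not from any spec:

```python
# Sketch of "AA VOPS": a validation-only partial state keeping, per account,
# the account record, its code, and only the top-N storage slots.
# (All names hypothetical.)

TOP_N = 4  # assumed number of retained storage slots per account

def build_aa_vops(full_state, slot_access_counts, top_n=TOP_N):
    """Reduce a full state into a validation-only partial state."""
    partial = {}
    for addr, account in full_state.items():
        # Rank this account's slots by how often validation code touches them.
        ranked = sorted(
            account["storage"],
            key=lambda slot: slot_access_counts.get((addr, slot), 0),
            reverse=True,
        )
        kept = ranked[:top_n]
        partial[addr] = {
            "balance": account["balance"],
            "nonce": account["nonce"],
            "code": account["code"],
            "storage": {s: account["storage"][s] for s in kept},
        }
    return partial

def can_validate(partial, addr, needed_slots):
    """A node holding only the partial state can validate a transaction
    whose validation frame touches only retained slots."""
    acct = partial.get(addr)
    if acct is None:
        return False
    return all(slot in acct["storage"] for slot in needed_slots)
```

The "deep storage" of big contracts is exactly what `build_aa_vops` drops, which is why this can fit in something like 20 gigabytes.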
**Toni:** I'm not sure AA VOPS is the solution, because the privacy pool wouldn't work, for example — the privacy pool itself would be the transaction sender, and you'd need more storage from the privacy pool itself. I like the other partial statelessness solution where you keep more storage of certain accounts.
**Matt:** I don't think there's any way that we can support privacy pool withdrawal without adding it to a policy and saying we're going to track the full state.
**Matt:** That is more acceptable than the ERC-20s, right? Because you have very few privacy pools.
**Matt:** I think the big debate with how do we do gas abstraction without relayers in stateless is — are we going to support ERC-20s in the public transaction pool? If we don't support them, it's fairly straightforward.
**Toni:** Privacy pools — since Tornado Cash exists, this is a few gigs, probably not even a gigabyte. So this isn't a heavy constraint on state.
**Felix:** It is possible, but then we enter this world of really custom policies. We don't want to make a protocol rule that says "please keep Tornado Cash" — it just feels wrong. We have to design the system in a way that yes, it makes it possible, but this is why we have it as a stretch goal. It's really hard. But I think this is legit hard — has nothing to do with our proposal. Generally a hard problem.
**Carlos:** For all the things you've listed above stretch — basically the top priorities — with the first four slots of every account stored, is that enough to actually cover all of them?
**Felix:** I think so. Maybe not the first four — more like a couple more.
**Matt:** Multi-signatures depending on how big your set of accounts is, is probably the highest storage load.
**Felix:** The top-N storage is contentious — how many storage slots do you actually need to do something useful? You can do a lot with four. You need bounded storage. I always like to say "top N" because it sounds better, but to be honest, the N can't be very large.
**Matt:** That's the distinction between supporting only ETH for gas abstraction versus ERC-20s. With ERC-20s you need the balance, and top-N balances don't really work.
**Carlos:** In the latest breakout call, L2s and wallets were there and it was pretty obvious they were assuming ERC-20 sponsorship was going to be there. They are not envisioning a world where this doesn't come with AA.
**Matt:** We can still support ERC-20 sponsorship — it just requires a relayer. We're saying what can we do in the core protocol? These things will always be possible with relayers. We're not going to stop that ability. We just know these are hard problems. The without-relayer is really difficult. Are we going to do it for everything, all ERC-20s, or just for ether? Is it acceptable to say we support relayer-less gas abstraction but you can only use ETH?
**Felix:** That's acceptable for us. It's a really good step. It would be a huge step if we could do this.
**Matt:** It's going to be much easier to add more functionality if we actually have a transaction pool with policies. We're going from not having anything to something.
**Felix:** We have policies, but they are client-specific.
**Guillaume:** With those privacy pools, do you need them at transaction validation like with FOCIL? What kind of data do you need? We could make a hybrid model — if you're sending a regular transaction, VOPS nodes or partial stateless full nodes will take care of you. But if you want a more complicated usage, you just have to pass some witness. It's a spectrum.
**Felix:** Maybe that's the thing. For certain contracts it is possible to construct a witness that 100% proves you can do this action. For other contracts with arbitrary state access, it's not possible. Then we enter this world of "can you actually define a protocol that allows you to create this witness?" That's a different problem.
**Toni:** How do you make sure you don't have to update that witness every slot again?
**Guillaume:** Either nothing has been touched and your proof is unexpired, or it was touched recently in which case the client keeps enough data in RAM that it can easily update your thing. So all you need to do is send a transaction that is a proof that is recent enough.
**Toni:** That would even work for non-privacy-pool use cases. For privacy pools you add something to an array and you have a proof against the Merkle root, so nothing can change and be invalidated except that your nullifier was suddenly spent. But your solution would even work for things that can actually be invalidated.
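Guillaume's witness-freshness idea above could be sketched roughly like this — a transaction carries a witness proved against a recent root, and the client either accepts it as-is, patches it from recently touched state it keeps in RAM, or rejects it as too old. The structure and constants are assumptions for illustration:

```python
# Sketch of the witness-freshness check: either nothing the witness covers
# was touched since its root, or the client patches it from in-memory diffs
# of recent blocks. (Hypothetical structure and constants.)

MAX_WITNESS_AGE = 32  # assumed number of blocks a witness may lag the head

def refresh_witness(witness, head_number, recent_diffs):
    """witness = {"root_number": int, "values": {key: value}}
    recent_diffs = {block_number: {key: new_value}} kept in RAM."""
    age = head_number - witness["root_number"]
    if age > MAX_WITNESS_AGE:
        return None  # too old: the sender must re-prove against a newer root
    patched = dict(witness["values"])
    for n in range(witness["root_number"] + 1, head_number + 1):
        for key, new_value in recent_diffs.get(n, {}).items():
            if key in patched:
                patched[key] = new_value  # touched recently: update in place
    return patched
```

For a privacy-pool withdrawal, as Toni notes, the only patch that could matter is a nullifier getting spent; for arbitrary contracts the same mechanism covers any recently touched key.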
**Felix:** Good. So how do you guys feel about it? Should we...
**Toni:** It feels like the only use case that won't work is ERC-20 sponsorship.
**Potuz:** That's wallets — in real life wallets are going to be the relay anyway. I don't worry about this issue of ERC-20 sponsorship.
**Felix:** I said in the beginning, for us specifically doing it without the relay is really important. There are many people who just don't care about this.
**Potuz:** No, I do care — but I'm pointing out we should do what we can do without a relay and let wallets deal with being a relay for the parts we're not supporting. The UX wouldn't change. People are sending transactions via wallets, not through Geth.
**Felix:** I know. It's just for us it's important to maximize the amount of things you can do without a relay.
**Toni:** The one thing that wouldn't work is if I withdraw USDC or USDT or DAI from Railgun and directly pay the miner. But for most use cases you can keep using relayers.
**Speaker 1:** Is the wallet going to want to sponsor you for withdrawing from a privacy pool? After a few incidents of North Korean hackers using it, will wallets still say "this is on chain and trustless"? I'm sure legal cleared it in at least one wallet that my own other company owns. If the wallet is paying for the transaction that withdrew from the privacy pool, it's a contract that's paying — a contract paying in ETH. The contract is the relayer for what we support, and the contract has the logic to make the swap.
**Matt:** I think it's okay for us to say we won't support ERC-20 gas abstraction without a relay in the core protocol in the beginning.
**Felix:** Yeah, we don't have to absorb everything in this thought, but we want to have the platform to ramp up to these things.
**Speaker 12:** We keep saying wallets are going to do whatever they want, but responsibility matters here. Suppose we said there are, say, 10 contracts you can use that we've verified — wallets are going to use those. With EIP-7702 you can put anything, but wallets say "I don't want to allow the user to do that — it's scary to allow anything to be installed on somebody's account." The contracts we want wallets to use should be embedded inside.
**Felix:** Yeah, we hated this in the beginning too. When we started working on frame transaction, Matt was also mostly on the side of saying "the account should just have logic for everything." But over time we realized nobody really wants this. Now we are much more aligned — the account should be as minimal as possible. What are the things the account specifically has to do? With batching, we always thought "you can call your own account, have a for-loop there, do the calls — there's no problem." But people don't really want to install this logic into their account. Now we designed the solution so this is no longer a part of it. The only responsibility the account has is the minimal thing of verifying the signature/validity of the transaction and possibly imposing some limits like spending limits because this is directly related to account balance.
**Speaker 12:** My mental model is basically — there are two parts of AA. One is **authentication**: this sender that is sent with the transaction, can it be representing this account? The init data or some contract decided state that is tied to that account that's going to be executed says "this sender with this input can access, can represent myself as the sender." The second part is basically **who is paying fee**. It can be a separate contract. There needs to be some logic that says "this sender, when executing this contract, is going to return the ETH that's needed for execution of this contract."
**Felix:** Yeah, that's why we have the two approval scopes in the frame transaction — approval of sender, approval of payment. These are the two basic things, the only two things the account needs to have input on. All these other things are more like — we want them to work with this model but we don't want people to install tons of code into their account because that makes it less secure or requires you to upgrade your account. We want people to mostly deploy their account and be happy with it for a long time. What we're seeing now especially with ERC-4337 is people making really complex accounts with loadable modules and all this stuff. We don't really want that to happen.
**Speaker 8:** What about memos? Just a little thing that combines the stuff inside a batch, or even when you have something across layers, just some persistent end-to-end ID.
**Matt:** A little note that says "this transaction was sending to my other wallet" or "this is an ID for this interaction" so an exchange could interpret it, or a bridge.
**Felix:** I guess we can add a text mode frame.
**Toni:** What's the difference to putting it in data?
**Speaker 8:** Why not put it in the data? Because the data often doesn't carry it — a transfer log doesn't emit the memo. The way it's usually solved today is that for every bridge transaction you have to create a new key pair, and that's your wallet for that one specific flow. That's like an ERC-20 mistake from the start.
**Speaker 3:** Are you talking about an end-to-end identifier? That's a separate thing, right? It's a common thing in financial transactions.
**Speaker 8:** Exactly. Maybe this is an opportunity.
**Felix:** We can add a way to add data to a transaction.
**Speaker 3:** This is more specific than data. If you're going to do batches, can you add a specific field which is the end-to-end identifier? It allows you to tie stuff together at a user level.
**Matt:** Does it need to be at like a batch identifier, transaction identifier, frame identifier?
**Speaker 3:** Any of those. It's basically a user's identifier for the whole compound transaction.
**Felix:** That's a general thing in Ethereum — you have a transaction identifier. Transactions are top-level objects with an identifier.
**Speaker 8:** No, it's not on chain. It's just an RPC artifact. It's not in a tree.
**Matt:** The transaction hash is in the transaction hash tree.
**Speaker 8:** It's a different hash. It's like the MPT prefix plus the transaction. It's an overall interaction thing for a multi-step transaction that goes into an L2, bridges something, then triggers something.
**Felix:** Like a cross-reference. If you could attach arbitrary data to a transaction, you'd have that. We can add something for this.
### Privacy Pools Revisited
**Speaker 13:** What would it take to get privacy pools to the top section? It's just insane that we're not going to do privacy pools when we all want them.
**Matt:** Half my Ethereum transactions are privacy pools.
**Potuz:** You shouldn't assume that the people who didn't raise their hands didn't want it.
**Toni:** It works with ETH sponsorships, and even if I want to transfer ERC-20s out of privacy pools, I could still use a builder, pay a high priority fee, and builders would be more than happy to include my transaction. I'd love to have "pay my withdrawals with my tokens from privacy pools" — would be nice, but I can definitely see the problems.
**Matt:** I'm not assuming — I'm saying we can't ship AA if it doesn't deliver this. The reason I put it as a stretch goal was because of this concern of statelessness. We would have to say "this is a canonical privacy pool." You have to choose. That's a difficult thing to do. Guillaume made a good point about submitting a witness for the nullifier — that works with privacy pools but doesn't really work for gas abstraction. Considering that, I could just say it's a core goal because it's straightforward in that sense.
**Matt:** But we have the chance to give real privacy within the protocol already, which has never been a priority. We have the chance to do that now. If we let it go, this feature might not arrive at any point, and I'd always need a relayer to get any sort of privacy.
**Matt:** I have no problem with it being up here. The stateless reasoning was the one reason for putting it as stretch.
**Matt:** No, the stateless reasoning is for ERC-20s, which is a much deeper issue. For privacy pools there's two or three of them — you can whitelist them basically.
**Matt:** Sure, but then you can run one transaction per pool.
**Toni:** For privacy pools you don't even have the state issue because the pool would be the transaction sender and you're allowed to touch the transaction sender's storage.
**Matt:** Then you need 2D nonces. At some point we probably in the client have to say "here are the privacy pools we support in the public mempool" — and that's a very hard conversation.
**Vitalik:** Do we have to support privacy pools or is there a way to enshrine one gadget where the gadget is like a nullifier handler, and then any privacy pool can use it — it just has to wrap around that?
**Toni:** Yeah, you can do it. I had that in my pre-research post. You can have one contract that all privacy tools use to store their nullifier and Merkle roots. Canonical. Then clients only have to agree we support this canonical privacy pool handler. All privacy pools can plug in.
**Felix:** This has been our solution recipe for other things as well. For gas abstraction in the public mempool we have this canonical paymaster solution. It looks a bit strange at start but it just means we support a very specific contract which has known properties, and because it has known properties we can model it in the mempool and that makes it safe. It doesn't actually constrain wallets very much because they can have a lot of logic that feeds into this contract. But for mempool compatibility we ultimately need to know how the gas-paying contract behaves. If we can come to a definition of what is the canonical thing that all privacy pools need, then it's actually really easy to support the public mempool. We just haven't found that thing yet.
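The canonical nullifier handler Toni and Vitalik describe could be sketched like this: one shared registry holds nullifiers and Merkle roots for every privacy pool, so mempool policy only has to model a single contract with known properties. The interface and names here are hypothetical:

```python
# Sketch of a "canonical nullifier handler": all privacy pools plug into one
# registry of Merkle roots and spent nullifiers, so clients can agree to
# support just this one thing in the public mempool. (Hypothetical names.)

class NullifierHandler:
    def __init__(self):
        self.roots = {}    # pool -> set of known Merkle roots
        self.spent = set() # (pool, nullifier) pairs already used

    def register_root(self, pool, root):
        # A pool records each new Merkle root as its note tree grows.
        self.roots.setdefault(pool, set()).add(root)

    def is_spendable(self, pool, root, nullifier):
        """Mempool-side validity check: the root must be known for this pool
        and the nullifier must not have been spent yet."""
        return (root in self.roots.get(pool, set())
                and (pool, nullifier) not in self.spent)

    def spend(self, pool, nullifier):
        # Called on withdrawal; double-spends are rejected.
        if (pool, nullifier) in self.spent:
            raise ValueError("nullifier already spent")
        self.spent.add((pool, nullifier))
```

Because every pool routes through the same `is_spendable`-style check, a mempool can reason about withdrawal validity without whitelisting individual pools.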
### L2 DOS Concerns
**Potuz:** I wouldn't dismiss the L2 problem — I think it doesn't have a solution. I looked into this for Arbitrum and I think it doesn't have a solution currently on the two issues I mentioned. It's fine — it doesn't break L2s, but I think we should be prepared for L2s not being able to support the whole thing.
**Felix:** Yeah, but that's bad.
**Potuz:** You guys have essentially a censoring policy for the mempool, which is correct — we should have such a thing. The mempool can't handle everything. The sequencer doesn't have a mempool. It cannot censor arbitrarily. So it has to deal with working at capacity. The nodes work at capacity. It's not like an L1 node that doesn't really work at capacity. The nodes for L2 typically work at full capacity and they can be DOS'd by the sequencer itself. L2 nodes need to work under the assumption that the sequencer itself can be malicious and want to DOS the nodes of the L2.
**Matt:** How do they DOS the nodes?
**Potuz:** They DOS nodes by inputting invalid transactions. Today, for an L2, that's more expensive for the attacker than the computation it imposes, because they have to post it as data and pay for that data, while the computation it can cause is bounded. With arbitrary validation, the cost is still just the calldata, but the compute can be very high. You can DOS the chain by being the sequencer and putting invalid transactions in the sequence. This is a problem they don't know how to solve.
**Matt:** But you're still bound by the gas limit of the block.
**Speaker 8:** In both cases you're bound by the gas limit of L1, but they have a much larger gas limit on L2.
**Potuz:** And they're already working at that limit. There's a bound but they can break it by orders of magnitude. It's very cheap to put a full block on L1 of data that causes a lot of computation, much larger than the gas limit on L2.
**Matt:** If you reach the gas limit on the L2 while validating the block, then the block is invalid, so you stop executing.
**Potuz:** No, there's no gas limit on L2 either — L2 doesn't have a gas limit on blocks. Arbitrum doesn't even have the notion of blocks: there are no blocks, there's a sequence. These two problems have been studied by the L2s and they don't know how to solve them.
**Felix:** In Ethereum L1 we have this transaction-level gas limit — at the moment it's 16 million gas. So is this not a thing on L2?
**Potuz:** They do have a thing like this. But they also have different ways of accounting for gas. They have multi-dimensional gas that we do not have.
**Felix:** Yeah, oh yeah — compatibility with multi-dimensional gas.
**Potuz:** One of my solutions is to add an extra dimension of multi-dimensional gas for frame transactions. But that doesn't solve it, because the sequencer can still put invalid transactions in the sequence.
**Felix:** With the transaction-level gas limit, it kind of solves some of these concerns. The way we think about it — when you hit an invalid transaction, the block is over. So you have to have a sequence of valid transactions up to a point.
**Potuz:** Yeah, but in L1 you stop the block. In L2, the whole invalid transaction goes in the sequence and gets posted and needs to be processed by the L2 node.
**Vitalik:** Just to bisect the problem — we're saying the status quo is fine, and a big part of why is that signatures are 65 bytes and 3000 gas worth of EC recover, right? Hypothetically, if there was one other signature type supported — purely hypothetical — that cost 100 times more compute but was also 100 times bigger, 6500 bytes; or if it's 100 times more compute and 100 times more gas, then it's basically the same thing. The problem arises because we're creating a situation where you might have a small amount of data that costs a large amount of compute. So hypothetically — this is a dumb idea, a strawman — what if you had a transaction validity rule that said the byte count of the transaction has to be at least the gas consumption divided by 100? If you have high gas consumption, too bad, you have to flood the transaction with more zeros; it would still satisfy the same rule. The need to post it into blobs is the constraint that makes it safe.
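The strawman validity rule could be sketched in a few lines — the constant and function names are assumptions, only the arithmetic is from the discussion:

```python
# Sketch of the strawman rule: a transaction is valid only if its byte count
# is at least its gas consumption divided by a constant, so compute-heavy
# validation must also be "paid for" in data. (Hypothetical names.)

GAS_PER_BYTE = 100  # strawman constant from the discussion

def min_size_ok(tx_bytes, gas_used, gas_per_byte=GAS_PER_BYTE):
    return len(tx_bytes) >= gas_used / gas_per_byte

def pad_to_valid(tx_bytes, gas_used, gas_per_byte=GAS_PER_BYTE):
    """If the transaction is too compute-dense, flood it with zero bytes
    until it satisfies the rule, as suggested in the discussion."""
    needed = -(-gas_used // gas_per_byte)  # ceiling division
    if len(tx_bytes) < needed:
        tx_bytes = tx_bytes + b"\x00" * (needed - len(tx_bytes))
    return tx_bytes
```

A 65-byte ECDSA transaction at 3000 gas passes (3000 / 100 = 30 ≤ 65), while a 1-million-gas validation would need at least 10,000 bytes under this constant.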
**Speaker 1:** But it doesn't, because you could simultaneously do these huge transactions on every L2. They all have to post to blobs — more capacity than blobs can handle.
**Vitalik:** If we have this rule, if someone was a DOS attacker — case one, they DOS attack by sending a 1 million gas transaction that has 100,000 bytes. Case two, they send a thousand EC recover transactions. How are those cases different?
**Speaker 1:** For an L2, larger call data is probably worse.
**Vitalik:** If we set the constants so total compute and total call data are the same...
**Potuz:** It might actually solve it, but more generally we might not need a solution to this problem — we might just want to make it identifiable. As soon as the node can identify it, we don't need to solve the problem for the L2s; we only need to make it identifiable on chain so the node can drop it. But we should be aware of their worries, because they're going to be clients.
**Felix:** What if for our proposal we have this validation gas concept?
**Felix:** This is conceptually interesting. The reason we propose frame transactions how it is — there's a philosophical concept. You could design an AA scheme like in the early days where you have a single EVM call and during the course of this call somehow the transaction becomes valid by certain opcodes getting hit, then the transaction is valid and just keeps executing. That was the start of all AA — your guys' proposal saying "the PAYGAS opcode." But this has the problem that at the protocol level you can't distinguish where is the validation phase versus the actual execution.
With frame transactions, one of the key insights — why we propose frames in the first place — is because it's a way for the user to break down what they want to do in chunks that the protocol can identify the meaning of. The protocol can see "this is the validation phase of the transaction." It's still arbitrary computation, but it's clearly labeled as validation, and it has the property that it cannot actually write to the state. The EVM during validation is restricted. This is done for a really good reason — it gives certain guarantees for the mempool. In the L2s, if this is a problem, they could constrain the validation phase of the transaction in a specific way to remove this.
**Potuz:** I believe their solution is going to be: we're not going to support arbitrary validation.
**Felix:** What do we mean by arbitrary validation? It involves the EVM because it gives flexibility. But if you can bound it and the bound is lower than the transaction gas limit, then it's totally fine. We're going to be doing the same thing in the mempool. On L1 you can include a transaction with validation phase that spends 16 million gas, but it's never going to go in the mempool.
**Potuz:** This is going to be the solution that sequencers and builders are going to have to implement themselves to not be DOS'd. There's still the problem of the sequencer trying to include things invalid on the chain.
**Felix:** With validation gas, it makes it totally equivalent to a signature validation. Gas is a unit of how much computation can happen. It's the same as having a signature scheme that has a cost of computation. You can put these two things equivalent — "I'm willing to give this much computation; in the worst case, I'll be expending this much computation to determine if it's valid." It doesn't matter if it's EVM or not. For us, we think it's important to make this part EVM because it gives flexibility to the user. It's fundamentally equivalent.
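The property Felix describes — frames labeled by role, with a read-only, separately bounded validation phase — could be sketched as a mempool (or L2) policy check. The structure and limits are hypothetical; only the 16M transaction gas limit is from the discussion:

```python
# Sketch: because frames are labeled, a mempool or L2 can bound just the
# validation phase, independent of the overall transaction gas limit.
# (Hypothetical transaction structure and policy constant.)

TX_GAS_LIMIT = 16_000_000          # L1 transaction-level gas limit (from the discussion)
MEMPOOL_VALIDATION_GAS = 300_000   # assumed mempool policy bound on validation

def mempool_accepts(tx):
    """tx = {"frames": [{"kind": str, "gas": int, "writes_state": bool}]}"""
    total = sum(f["gas"] for f in tx["frames"])
    if total > TX_GAS_LIMIT:
        return False
    for f in tx["frames"]:
        if f["kind"] == "validation":
            # Validation frames are labeled, must be read-only, and are held
            # to a much tighter bound than execution frames.
            if f["writes_state"] or f["gas"] > MEMPOOL_VALIDATION_GAS:
                return False
    return True
```

A block producer could still include a transaction whose validation frame spends 16 million gas; this check only governs what the public mempool propagates, which is exactly the policy/protocol split described above.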
### Closing Discussion
**Matt:** We're one and a half hours in, so we should probably close it down in the next five minutes or so. Unless there's a big topic some people wanted to discuss.
**Felix:** Post-quantum stark aggregation, transaction assertions — we can bring those back if you guys are still interested. Just let us know. We can also talk more this way.
**Vitalik:** We don't have to do everything right at this exact moment. Thinking through this question of different options of what makes sense for altruists to do feels important.
**Speaker 1:** Where in this list is — you don't want relayers. Where's the priority for other people to pay for transactions?
**Felix:** Can you be more specific?
**Speaker 1:** I did put forward another proposal — the contract payer transaction. You have your signing key and a contract one-to-one to pay for you. The contract holds the ETH, you sign for the transaction, the contract pays the gas. That doesn't fit if you want somebody else. So is that not a relayer? Is that out of scope?
**Matt:** That's gas sponsorship. Are we going to support this as permissionless relayer functionality? If you have ETH, you can submit your transactions. The whole idea of contracts originating transactions is that if you don't have ETH and you're not paying in ERC-20s, someone else has to pay for your transaction.
**Vitalik:** One motivating use case other than paying with ERC-20s — the goal of sponsorship is you want users who have not already obtained cryptocurrency to be able to send an on-chain transaction. The constraint that all forms of sponsorship have is you need some kind of anti-Sybil. You need someone who's willing to pay, but you also need anti-Sybil because no one's willing to pay infinite money. With wallets, there's implicit anti-Sybil — they have their own mechanism, whether based off accounts or web2 or IP addresses.
Here's the exact example. Let's say you want to do a vote on chain — say a country wants to do a vote on chain. Most users are people who haven't touched cryptocurrency before. For censorship resistance reasons, you want to give them the ability to send a transaction. You want that ability to be sort of real and not dependent on some central operator that the government itself could shut down. The thing you'd do is use ZK Passport. The transaction would have to contain a ZK Passport proof and could also contain a proof that it's a valid vote. You'd have logic that says "this guy is eligible for sponsorship" — the transaction gets included and the fee gets paid out of the government account.
That's a thing that if we push it in the stretch direction, you are able to do. But the level of hardness is similar to sponsoring with ERC-20s. If we take the more restrictive road, then that's a thing we're saying we're not prioritizing, and people in that position would have to find a relayer.
**Speaker 1:** Would you do an L2 or something where you say "I'm going to pay for all these transactions"?
**Vitalik:** L2s are an interesting analogy. In the context of L2s, if L2s value providing censorship resistance — if they value meeting a stage one definition — then they have an escape hatch. An L2 by default is a batching mechanism where lots of users send in transactions and the data goes into blobs and computation happens somewhere else. But that inherently means you need some batching actor that's off-chain. The escape hatch mechanism says if that batching mechanism is broken, a user can always self-submit. In the case of this kind of vote, probably it would happen the same way — by default they'd vote through a server, send their vote to a server, eventually within an hour the server puts it on chain, gives back a proof. That's your default path. Basically that is an L2. If it fails, you as a user have this backup option. If you really value censorship resistance, you want to give people that extra path. Even if you're doing an L2, if you value that goal highly, you need to include this backup channel. Whatever that backup channel is, it is nicer if it works without a relay and without requiring someone to already be a crypto user.
**Speaker 1:** If we were saying a country with a million people doing a vote on L1 using frame transactions — practically, the gas would be sky high.
**Vitalik:** Yeah, it's expensive. With the ambitious 1000x roadmap it gets there. But the more restrictive claim is: if you do an L2, the happy case is people's votes get bundled up into blobs, and we agree that's fine. The unhappy case is you need a user who's being censored to be able to self-submit. That is expensive, but it's something where you agree there's value in making that path exist.
**Speaker 1:** I do accept putting up an L2 is a very expensive way to do gas sponsorship.
**Vitalik:** What I'm saying is actually a bit different — the definition of an L2 is actually a superset of a relayer.
### Wrap-up
**Speaker 3:** Probably a good place to start. That was nice. Where are we in this process?
**Felix:** I think it was just good to have this session where we all talk about the problem domain and resolve these things. In the past calls it's just been us saying "hey guys, frame transaction is really nice, we can support this and that." It's not the same as just sitting together. It ratcheted forward a bit.
**Speaker 3:** How do you stop us from sliding back to the next time we talk about it?
**Felix:** I'm not sure the weekly calls are a productive way to actually do this. But at least I feel I was able to present it better than ever before. This whole separation of constraints and goals was interesting.
**Speaker 3:** From my perspective, just looking at it as: we'll probably end up implementing whatever we get. Ideally we'll agree on something for Hekatar that's on this list. It feels like there's what's in 8141, and alongside it now we're getting a series of other EIPs. It feels like we're going to need to judge them all together.
**Matt:** At some point we have to stop doing that every week on Thursday and start implementing something. This is to establish a shared understanding of what the success criteria for AA are.
**Felix:** For us we just came at it from totally different perspectives. There was already proposal 7701, there were proposals before, and we kind of figured it out now and were really hoping everyone would see what we came up with was great. Obviously it didn't work that way. It's mostly because we approached it kind of wrong. We should have maybe front-loaded all of this or had this kind of session at the start. We weren't really aware of the level of knowledge or understanding people have about this thing. We didn't have this shared starting context.
**Speaker 3:** You're sort of starting with the solution.
**Felix:** I know, but for us it was just like work brain — we were already in this all the time, so this was what our discussions were all about.
**Speaker 3:** It feels like it'd be quite good to capture this and make it more public. Otherwise literally every Thursday will be the same discussion.
**Speaker 8:** I can take a picture of this and try to write it out a bit more.
**Potuz:** I assume the frame transactions you guys are pushing are a solution to this problem that satisfies those constraints, or at least most of them. There are other competing EIPs that claim to solve some of these things. The point is, if at least you guys agree these are the things we must satisfy and the constraints, then you can actually start judging and DFI'ing some other EIPs. Maybe there's another solution better than frames, who knows. But at least you have a hard constraint.
**Felix:** I don't know if we fully agree on all these things now, but we can work on different deeper questions in these things. We made some progress today.
**Matt:** Is the canonical paymaster still going to happen?
**Felix:** It's a very specific thing.
**Matt:** You kind of need the canonical paymaster.
**Matt:** Yes, even if we're using relayers.
**Matt:** So if we enshrine this canonical paymaster for mempool rules and wallets basically just want ERC-20 sponsorships at any cost, do they go with a different canonical paymaster or do their own?
**Matt:** Canonical paymaster supports ERC-20 payments. There's a difference between a relayer relaying a transaction and taking the payment in ERC-20 versus me as a user not interacting with a relayer but paying for the gas of my transaction with an ERC-20.
**Matt:** How can that be included in FOCIL?
**Matt:** Because validity only requires the signature of the user and of the relayer. The payment is validated after. The relayer determines if the ERC-20 payment is valid and if they're going to get paid from the transaction. If the payment doesn't come through, maybe there's a griefing vector against the relayer, but that happens after the gas payment. So to propagate around the transaction pool, you don't even look at the ERC-20 balance.
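The validity rule just described (mempool propagation checks only the user's and relayer's signatures, never the ERC-20 balance) can be sketched like this. All types and the toy signature check are hypothetical stand-ins; a real client would do ECDSA or alternative-scheme verification against the transaction hash.

```python
from dataclasses import dataclass

@dataclass
class SponsoredTx:
    user: str
    relayer: str
    user_sig: bytes
    relayer_sig: bytes
    payload: bytes

# Toy signature check standing in for real signature recovery.
def sig_valid(signer: str, sig: bytes, payload: bytes) -> bool:
    return sig == (signer + ":").encode() + payload

def mempool_admissible(tx: SponsoredTx) -> bool:
    """Validity for propagation: only the two signatures matter.
    The ERC-20 payment to the relayer is deliberately NOT checked here;
    if it later fails, that is a griefing risk the relayer accepted by
    signing, and it never affects gossip or FOCIL inclusion."""
    return (sig_valid(tx.user, tx.user_sig, tx.payload)
            and sig_valid(tx.relayer, tx.relayer_sig, tx.payload))
```

The design choice this illustrates: by moving payment risk onto the relayer's signature, mempool nodes never need to read ERC-20 state to decide whether to propagate.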
**Matt:** If 99% of users interact through wallets, which is the case, and wallets go with something that doesn't satisfy public mempool rules and therefore those transactions are not FOCIL-includable...
**Matt:** They're going to be following the public mempool rules. If they want to do something that's not going to follow the public mempool rules — public mempool rules say you cannot read more than the first N storage slots. The validation path is what matters.
**Matt:** That I agree with. But I'm trying to understand how that fits with the ERC-20 balance.
**Felix:** The point is that relayers are unreasonably efficient. The relayer sees a transaction, can determine if the payment will happen, then puts its signature on it. For the purpose of the mempool, we only have to check that signature.
**Matt:** The reason we need a canonical paymaster is because you might have a relayer with many transactions that depend on its balance. We can't allow transactions in mempool to depend on one account without some confidence that account isn't going to rug all of them.
**Felix:** That's the whole point of the canonical paymaster — it's just a contract that checks if the signature of the relayer is valid.
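A minimal sketch of the canonical paymaster idea as Felix frames it: a single well-known contract that holds relayer deposits and releases gas payment only when the relayer's signature over the transaction is valid, so mempool transactions depending on one relayer account are backed by a balance that cannot be silently rugged. Class and method names are illustrative, and the toy signature check again stands in for real recovery.

```python
# Hypothetical sketch, not the proposed contract: deposits are tracked
# per relayer, and gas is paid out only against a valid relayer signature.
class CanonicalPaymaster:
    def __init__(self) -> None:
        self.deposits: dict[str, int] = {}

    def deposit(self, relayer: str, amount: int) -> None:
        self.deposits[relayer] = self.deposits.get(relayer, 0) + amount

    def pay_gas(self, relayer: str, relayer_sig: bytes,
                tx_hash: bytes, gas_cost: int) -> bool:
        # Toy check standing in for ECDSA recovery of the relayer key.
        if relayer_sig != relayer.encode() + tx_hash:
            return False
        # Gas is only released from the relayer's own deposit, so the
        # mempool can trust many pending transactions against one account.
        if self.deposits.get(relayer, 0) < gas_cost:
            return False
        self.deposits[relayer] -= gas_cost
        return True
```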
**Potuz:** There's a bunch of consensus devs here. Can we make canonical paymasters validators and just take their money if they fail?
**Felix:** There is paymaster staking and stuff, but we don't want to get into it now.
**Toni:** Before everybody wanders off and forgets what happened here, there were at least two decisions made: number one, no ERC-20 sponsorship out of the gate; number two, only hash-based PQ algorithms available in the beginning.
**Matt:** I don't think that's a real decision.
**Speaker 3:** There was rough consensus on that as a starting point. You can't record that without all the rest of the stuff on the board.
**Speaker 8:** We took a picture of it. Really high definition.
**Felix:** I can transcribe this whole thing.
**Toni:** Put the priorities at the very top there.
**Matt:** Alright, thanks guys. Thank you. We'll discuss this more throughout the week, I'm sure.