# PQ Interop - leanSpecs Meeting 01
## Meeting Overview
**Date:** August 19, 2025
**Recording:** [here](https://drive.google.com/file/d/1kbE3Wjz5jZVLBOhqvF39YZs0R4wc9btn/view?usp=drive_link)
## Links & Resources Discussed
Lean Specs Repository: https://github.com/leanEthereum/leanSpec
(Posted by Will Corcoran at 00:10:06)
OpenRPC Website: https://open-rpc.org/
(Posted by danceratopz at 00:36:25 in response to Sam's comment about OpenRPC)
Beacon Attestation Subnet Specification: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#beacon_attestation_subnet_id
(Posted by Justin Traglia at 00:40:13 regarding networking specifications)
Progressive List EIP PR: https://github.com/ethereum/consensus-specs/pull/4445
(Posted by Justin Traglia at 00:44:06 in reference to the progressive list EIP)
---
## Meeting Notes
- **PQ Interop DevNet Goals**: Aimed at establishing a post-quantum signature scheme with multiple clients (primarily Zeam and Ream) for signing, aggregating, and validating signatures. Focus on minimal fork choice and consensus mechanisms without distracting from core ACD and beacon chain objectives. Targeting 1–2 DevNets before October IRL interop event.
- **Lean Specs Structure**: Switched to markdown from Python to unblock implementers due to type system issues (e.g., incompatible uint64, variable list types). Long-term goal: single repo with unified typing library (implementing SSZ if needed). Modular approach confirmed for cryptographic, CL, and networking components.
- **Documentation Best Practices**: Beacon chain specs were too terse, leading to duplicate annotation efforts. Recommendation: minimal code with generous comments, using AI to enhance documentation. Markdown useful for initial clarity; avoid over-reliance to ease Python transition.
- **Implementation Progress**: Koala Bear and Poseidon2 subspecs completed; signature verification spec in progress (Thomas, under Benedict/Dimitri review). Plan to integrate Vitalik’s 3SF mini into lean specs with state transition and fork choice PRs. Networking/RPC specs to be defined (potentially as executable code).
- **Tooling and Type System**: SSZ implementation with execution specs’ types needed (~2 weeks, per RLP precedent). Variable list types are main blocker; Unnawut to assist post-call. Single typing library preferred to reduce overlap.
- **Broader Outcomes**: Lean consensus specs prioritized initially; lean execution and data specs in early stages (kickoff August 22). Joint ownership of lean specs among teams, with Thomas volunteering for increased responsibility.
**Action Items**:
- Draft PR for 3SF mini state transition and fork choice specs (Gajinder).
- Implement SSZ with execution specs’ types; resolve variable list type issue (Sam + Unnawut, coordinate with HWW if needed).
- Create markdown documentation for networking/RPC specs; explore executable RPCs with OpenRPC (Felipe/Thomas).
- Share meeting notes and align with DevNet team on Wednesday call (Will).
- Coordinate with PandaOps for tooling guidance (Gajinder).
- Plan reference spec tests for clients (Felipe/STEEL team to support).
---
## Meeting Transcript
### Introduction and Context
**Will:** This is the post-quantum interop DevNet team that's standing up the first iterative passes at a new consensus client. We've been working quietly once a week; we've got a call where we meet. There are cryptographers and researchers, networking experts, and client teams, primarily from the Zeam and Ream teams, getting together and working. We've got an IRL interop, a small event, coming together in October, and we're trying to pull together hopefully one or two DevNets before that meeting. So really the intention behind the DevNet is to land on a post-quantum signature scheme and have multiple clients that are able to aggregate, sign, and test the validity of those signatures, and then to land on some new basics of fork choice and consensus. In parallel to that, we're trying to do it in a way that doesn't distract from the core objectives of ACD and the beacon chain clients. So this group is going to be working on writing specifications and organizing them, and we've got our own little GitHub board. Really, we wanted to start this off as just a little bit of an information-sharing session. I know Felipe is probably going to add some additional context, but we wanted to minimize the burden on the protocol security team and the Steel team, with all of your expertise around writing specs, and hopefully we can get a little bit of best practices so that the participants in the PQ interop can run with that and minimize the additional work that might come out of supporting something like this. We'll be doing something similar with PandaOps, and we have a call to make about metrics. We're just trying to set the stage and get all of our basic infrastructure set up so that we can embark on this process. So, Felipe, I don't know if you want to add anything to that.
**Felipe:** Sure, yeah, I can pick up. So I came in to try and help the Python side of these specs as much as possible, just using very modern tooling. But I do feel like that repository is getting to a point where some decisions need to be made as a group on where it's headed. As long as no one implementing the DevNet-0 specs is blocked, I think we're in a good place now. We did start implementing things in Python, and then it got to a point where maybe some typing libraries needed some updates for SSZ and things like this. Eventually, I think it will be nice if somebody with Python experience takes over the lean specs side of things, and we would definitely, on the Steel end of things, help steer in that direction. But if somebody has ownership of that library, it would be really nice to coordinate there. For right now, the specs switched to markdown, and I think that's great; I think it's going to unblock implementers for the DevNet. But I also think it's a good time to open up the conversation about how we want to structure things, then start with the Python code, create documentation for the lean specs, and have that library take its own shape. Right now, the way the execution specs work, every fork is recreated upon itself, and they use something called docc to generate their docs; consensus is very different. So I thought this was a good time to have an initial conversation on where we want the lean specs to go. Obviously, the main goal is to have implementers unblocked and moving forward, but maybe this conversation can happen in parallel, and we can decide who might take ownership of the lean specs Python side of things.
**Gajinder:** I guess the responsibility would be sort of shared among the teams. And yeah, so we can as a whole take ownership of lean specs and learn what structure is there and how to maintain it. But I don't think there is one person who will be responsible for it; there would be a sort of joint ownership.
**Felipe:** Right, yeah, I guess also, Justin Drake, I don’t know if you had any sort of ideas on the documentation side of things. I don’t know if there’s lessons learned from the execution side or consensus side that either would have been done differently or not. So maybe this is like one of those initial discussions of which tools we should use and go from there.
**Justin Drake:** Yeah, so there are several lessons learned. One is that for the beacon chain we were a little bit too terse on the documentation, and that led to two separate efforts to write annotated specs, one by Ben and one by Vitalik; it would have been more efficient if we had more documentation in the specs themselves. The other thing that comes to mind is that I think we should avoid the whole markdown approach that we've taken historically, and I don't have a strong opinion as to how it should be done now. What you described with docc sounds reasonable, but ultimately I'm happy to follow your guidance there; I think you have the most context. In terms of who should take ownership of the specs, I agree with Gajinder, it's ultimately a group effort. I think that historically it's been the Ethereum Foundation researchers that have played a heavy role, because they know in some sense the specs are their deliverable, their remit. And I know that Thomas (Coratger) has started the writing process, at least for the Poseidon and Koala Bear parts. So I guess I'd be curious to hear from Thomas (Coratger) if he's interested in taking more responsibility, at least for a subset of the specs.
**Thomas (Coratger):** Yeah, sure. As I just mentioned, I already wrote some part of it. I started with a very simple part, Koala Bear and Poseidon; this is almost done. I am currently working on the signature spec under the guidance and review of Benedict and Dimitri. So yeah, I would be interested in taking more responsibility for this, and I'm also fairly comfortable with everything related to the Python side, like typing and deep Python specification work. So yeah, happy to take more responsibility.
**Justin Drake:** I mean, I think one thing that we did do well in the beacon spec is that it was very succinct in terms of actual lines of code, and there was a lot of emphasis on boiling things down to the absolute minimum, to the essence. I think we went a bit overboard with that when it came to comments. So what I would suggest is having minimal lines of code but very, very generous comments.
**Thomas (Coratger):** Yeah, this is what I initially tried to do, especially for what I did, because it was cryptographic parts and so kind of hard to understand if you are not familiar with them. So I tried to minimize the lines of code, do maximum typing in order to have it secure, and added some tests and very generous comments as well, to specify line by line what we are doing, why we are doing it, and the equations where there are some. I think maybe using AI to elaborate and write documentation would be very nice, because I often find AI explains things much better and in clearer ways.
**Felipe:** Yeah, for sure. If we take the example of Poseidon, I already have some personal notes and diagrams explaining Poseidon, and I already have the full documentation of Plonky3. So I took examples from those, and if some sentences weren't clear, I just used AI to enhance them and checked what is okay and what is not.
**Justin Traglia:** I have a question. Will this repo, lean spec, contain specifications for all layers, like CL, EL, cryptography? It's probably TBD, it sounds like. Do you have some guidance or thoughts on how to structure that? Would you advocate for one unified repo or multiple sub-repos?
**Justin Drake:** I mean, I haven't thought about it too much, but I think if I were to restart, I would probably just have a single repo. Yeah, I would agree as well on the single repo, and the hope is that what is currently EELS, which is presumably quite a large repo, would be condensed to just a few lines of code in the context of lean execution, where we would have RISC-V. Maybe this is an oversimplification, but if you specify RISC-V in Python, it's about 50 lines of code. So I'm hoping that the complexity goes down so much that we can fit everything in one repo. Having said that, the kickoff call for the research of lean execution is happening on Friday, August 22nd, so it's extremely early days, and I don't expect lines of code to be written in the short term. And for lean data, which is a rewrite of the blobs and KZG and all of that stuff, that's potentially even further out than lean execution. So, practically speaking, in the short term I expect the repo to only contain lean consensus.
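To make the "RISC-V in about 50 lines of Python" intuition concrete, here is a toy, hypothetical sketch of a fetch-decode-execute step for a tiny RV32I subset. It is not the lean execution spec (which does not exist yet); a real spec would cover the full instruction set, memory model, and traps.

```python
# Toy sketch only: one fetch-decode-execute step for a tiny RV32I subset,
# illustrating why a minimal RISC-V interpreter in Python stays very small.
# Nothing here is part of any actual lean execution spec.

def sign_extend(value: int, bits: int) -> int:
    sign_bit = 1 << (bits - 1)
    return (value & (sign_bit - 1)) - (value & sign_bit)

def step(regs: list[int], pc: int, fetch_word) -> int:
    """Execute the 32-bit instruction at pc; return the next pc."""
    inst = fetch_word(pc)
    opcode, rd = inst & 0x7F, (inst >> 7) & 0x1F
    funct3 = (inst >> 12) & 0x7
    rs1, rs2 = (inst >> 15) & 0x1F, (inst >> 20) & 0x1F
    next_pc = (pc + 4) & 0xFFFFFFFF
    if opcode == 0x13 and funct3 == 0:                          # ADDI rd, rs1, imm
        regs[rd] = (regs[rs1] + sign_extend(inst >> 20, 12)) & 0xFFFFFFFF
    elif opcode == 0x33 and funct3 == 0 and (inst >> 25) == 0:  # ADD rd, rs1, rs2
        regs[rd] = (regs[rs1] + regs[rs2]) & 0xFFFFFFFF
    elif opcode == 0x63 and funct3 == 0:                        # BEQ rs1, rs2, offset
        imm = sign_extend(((inst >> 31) << 12) | (((inst >> 7) & 1) << 11)
                          | (((inst >> 25) & 0x3F) << 5) | (((inst >> 8) & 0xF) << 1), 13)
        if regs[rs1] == regs[rs2]:
            next_pc = (pc + imm) & 0xFFFFFFFF
    regs[0] = 0                                                 # x0 is hard-wired to zero
    return next_pc
```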
**Will:** On that note, yeah, Katya is also on the call, Justin. We met a few weeks ago with her and you and talked about metrics, so we're starting off by making sure that that's also part of the lean Ethereum org and trying to get that going in parallel with this first pass of specifications.
**Will:** Oh, any observations from Ream?
**Unnawut:** I think there's a part of this that I've already run into, where the Python types maybe don't play as well across all libraries as we'd like them to. But in an ideal world, we would sync similar types across the spec repos. And so for this, I feel like if we do have somebody working strictly on the Python side, letting implementers work on the DevNet and helping them with the Python side of things, we can drive a little bit more synergy between the execution specs, the lean specs, and anywhere else that makes sense. I'd like to have more time that I can dedicate to this, but I don't know if that's possible. And so part of the idea for this call is to try to figure out if we can get somebody aligned on the Python side for lean spec, and we'd be happy to help support that from the Steel side as well.
**Unnawut:** And for me, I guess I don't have specific questions, but maybe, if I may, I want to share my experience so far trying to write the lean specs with the framework that Felipe and the team set up. There were some roadblocks where I couldn't continue, so maybe I'll start there. Basically, right now for the lean spec we have this one PR in terms of specifications. Initially I started writing the specs in Python first, but I stumbled across a few issues. So maybe I'll go through the structure of what the specs contain. This is basically the comprehensive PR for the first round of the specs. It contains some of the parameters, like the slot duration and the historical root limits, which I kept really limited for now, and so on. Similarly for containers, there is a container for the checkpoint, for the state, for the block, and so on. And then for the networking, there is the transport, the identification, the protocol negotiation, the gossipsub messages, the topics, and also the request-response format. For the validators it's pretty early, but we're basically trying to define how we actually identify the validators just for the DevNet, since we don't have a proper RANDAO to do any validator shuffling for now, and so on. Now, the problem I hit when I tried to convert this to Python specs is, for example, what Felipe already mentioned: these containers, once put into the framework that's based on the execution specs, the types were not compatible. A lot of the time they are actually the same types, but defined differently. For example, in the spec it's a uint64 based on one Python library, I don't remember which, while the execution specs use another set of libraries to define the types. And the beacon specs are pretty much driven by the SSZ containers and types, so a lot of what they define, for example a list with both an element type and a maximum length, doesn't seem possible with the current execution specs types. Also, the types that are available for the beacon specs are actually scattered across maybe two or three libraries. So when I tried to combine these, I actually had to use two or three libraries to come up with these containers, and at the end I figured out I wasn't able to define the list in this way, where it has both the element type and the maximum length. And then I think there are also questions about stuff that isn't really Python specs or Python code, for example defining what transport the client needs, or the specs for the gossipsub, stuff like that. So those are also questions: what's the ideal way to define that kind of spec in the lean spec framework that exists today. And I guess there's also another question: we are also looking into some of the RPCs that we want to build right now. We are not proposing them as part of the specs, only for Ream to debug, but I guess eventually there's the question of whether the RPC specs would live in here as well, and how we actually define that, because I guess right now RPCs are usually defined as Swagger specs. So that's also another question of how we incorporate them here if we actually want to have a single repo.
And then at the end of the day, there's the question of timeline. I was stuck trying to implement the list type here, and, as feedback, you actually need a lot of Python expertise to work this out. So having a dedicated person, someone knowledgeable in Python, to help maintain this for the lean specs would be nice. But even if we have that person, I think it's going to take some time before everything is ready for us to define proper specs with this. So that's also another question: in the meantime, would it make sense to continue using markdown, and for how long, and what's the timeline expectation for when we should actually get everything into the same repo and the same specs framework for these kinds of specs. So yeah, not really a single question, but I wanted to share my experience working on this for the past week or two.
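To illustrate the mismatch described above, here is a minimal sketch of the kind of typed container the lean spec PR defines, written with remerkleable, the SSZ typing library the consensus side mentions later in this call. The container layout and limits are hypothetical; the point is that an SSZ list carries both its element type and its maximum length in the type itself, which the execution-specs typing library does not currently express.

```python
# Hypothetical container sketch using remerkleable; field names and limits are
# illustrative, not the actual lean spec.
from remerkleable.basic import uint64
from remerkleable.byte_arrays import Bytes32
from remerkleable.complex import Container, List

HISTORICAL_ROOTS_LIMIT = 2**18  # illustrative bound

class Checkpoint(Container):
    root: Bytes32
    slot: uint64

class State(Container):
    slot: uint64
    latest_justified: Checkpoint
    # The element type *and* the maximum length are part of the type:
    historical_block_hashes: List[Bytes32, HISTORICAL_ROOTS_LIMIT]
```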
**Will:** Oh, great, thanks for sharing that.
**Justin Drake:** Yeah, this is super helpful context. I didn't realize there was this whole issue around types. Is there anyone from the EELS team that can provide context on why their types came second and went down their own path? Maybe they have good reasons for that, and maybe they can provide context as to what they think we should do.
**Sam:** Historically, I think we just started with our types in our own repository, and then they kind of grew into their own library. As for guidance of what we should do going forward, I think it would be ideal to try and get down to one typing library with all of the common stuff. There’s a lot of engineering time spent on maintaining these libraries, and they’re basically 80% overlap, so yeah, I think we should get down to one set of types. And have you implemented SSZ with your set of types?
**Felipe:** Not yet. We haven't needed it; we have a tiny subset of it implemented for one of the EIPs. And is it planned to implement a larger subset for maybe a future EIP?
**Sam:** No, but I mean, if that’s the blocker for getting down to one type system, I think that’s something we can do.
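For scope, here is a rough sketch of what the basic-type part of an SSZ implementation involves under the standard SSZ rules (fixed-width little-endian integers, one-byte booleans, fixed-size containers as plain concatenation). Variable-size fields additionally need 4-byte offsets and list limits for merkleization, which is the part that would still have to be designed against the EELS types; function names below are illustrative only, not an EELS API.

```python
# Illustrative SSZ helpers only, not an existing EELS API.

def serialize_uint(value: int, byte_length: int) -> bytes:
    # uintN is serialized as exactly N/8 bytes, little-endian
    return value.to_bytes(byte_length, "little")

def serialize_bool(value: bool) -> bytes:
    return b"\x01" if value else b"\x00"

def serialize_fixed_container(serialized_fields: list[bytes]) -> bytes:
    # A container whose fields are all fixed-size serializes to their concatenation
    return b"".join(serialized_fields)

# Example: serialize_uint(4000, 8) == b"\xa0\x0f\x00\x00\x00\x00\x00\x00"
```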
**Thomas:** Okay, great. I don't have a strong opinion, but it sounds like, in the long term, and I don't know if people have different opinions, it feels like using whatever you guys have as the reference and implementing SSZ with it is the way forward. Also, I know that you did the markdown stuff to showcase what you want to do and to avoid the typing issues. But maybe it works better not to go super far with the markdown, because then it would be kind of a long path to convert everything from markdown to Python. So maybe going directly into Python would be the way to go once we fix the type system and such, so that we have a clear hierarchy with subspecs, clear tests, and so on. Then it is easier to navigate and modify the spec when we need to, instead of going full markdown and then spending a couple of months transferring it to Python.
**Gajinder:** On the contrary, I find markdown to be very helpful for understanding, and I think going from markdown to Python should not be a big leap, in the sense that it shouldn't take months; we should take months for the spec itself, and then maybe a short duration to implement it in Python or Rust or whatever. And I think markdown is quite important for understanding what's going on, and also for having things that are not interlinked, for example, something in the networking layer that you want to define that is not interlinked with the main beacon chain workings, as well as fork choice. So in that regard, yeah, I would still like to have markdown.
**Felipe:** So, Gajinder, are you saying that over the long term you think having markdown is good because it’s kind of a way to materialize the codebase?
**Gajinder:** So I have found markdown to be very helpful and interesting, basically a succinct way to understand what's going on without getting into the details of what the language is doing. I do agree that we need to find a way to keep things very, very modular; for example, the cryptographic stuff should probably be its own module, separate from the CL, partly because it might be reused in other places. But presumably there is a Python-native way of building these modules, right? What would you recommend, maybe Felipe or someone else, on the modularization?
**Felipe:** Yeah, I would recommend keeping it separate for sure, just in case it could be used as a dependency somewhere else. The only sort of blocker there is really just finding the time to make sure that all of the typing can work with SSZ and with anything else that comes up as these specs are written out. But yeah, I think making it modular makes a whole lot of sense. For example, in the markdown you just presented, we have a chain subspec right now with some constants like slot duration and other stuff. I know they are uint64, and once we have the correct type for that, I think it makes a lot of sense to just have a `chain.py` subspec and put these constants there, so the pyspec is already ready.
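As a minimal sketch of what such a chain subspec module could look like (constant names and values are purely illustrative, and the import assumes whichever shared typing library is eventually chosen):

```python
# chain.py subspec sketch; hypothetical constants, not spec decisions
from remerkleable.basic import uint64  # placeholder until the shared typing library is settled

SECONDS_PER_SLOT = uint64(4)            # illustrative value only
HISTORICAL_ROOTS_LIMIT = uint64(2**18)  # illustrative value only
```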
**Thomas (Coratger):** Yes, we currently have the Koala Bear and Poseidon2 subspecs, and we have the signatures subspec in a PR; it is currently being reviewed by Benedict, who is on holiday, so it may take a bit longer to be merged. But this is ongoing. To be noted that doing the full spec for the signatures can take a while, so what we decided for now is not to spec the key generation and the signing process but just the verification process, because this is the one that we will use. For key generation and signing, you simply rely on the Rust implementation; there is nothing in the Python spec for now, just empty functions.
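A sketch of the split Thomas describes, with hypothetical function names: only verification gets a real Python specification for now, while key generation and signing stay as empty stubs that defer to the Rust implementation.

```python
# Hypothetical layout of the signatures subspec for devnet-0.

def generate_keys(seed: bytes):
    """Deliberately unspecified for now; clients rely on the Rust implementation."""
    raise NotImplementedError("key generation is not specced for devnet-0")

def sign(secret_key, message: bytes):
    """Deliberately unspecified for now; clients rely on the Rust implementation."""
    raise NotImplementedError("signing is not specced for devnet-0")

def verify(public_key, message: bytes, signature) -> bool:
    """The only part being specced: the verification algorithm (hash-based,
    over Koala Bear with Poseidon2 in the actual spec) would be written out here."""
    ...
```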
**Gajinder:** And then back in the beacon chain days, we had also noted Zeam working on a subspec for a PS and 3SF. Is that also something that is going to be coming together?
**Justin Drake:** So one of the outcomes of the last couple of months is that the consensus team is doing a review of some of the more modern, latest designs, for example Alpenglow from Solana. We actually hired one of the authors of these modern consensus mechanisms. And the idea here is that in addition to having this super strong economic finality, there would be a fast path for a very fast safe head, which might have a latency of only one slot as opposed to three slots. So this is one of the key topics that's going to be discussed in Cambridge in the architecture plus consensus workshop. So right now, I think it's a little bit too early to be working on the speccing for that. Having said that, we have 3SF mini that could potentially act as a temporary solution, or some people have even suggested...
**Gajinder:** So that's how we were planning to do it. And if needed for the first DevNet, we can even transfer Vitalik's 3SF mini code into the lean spec, just to have an initial spec for the DevNet. This is not a problem.
**Gajinder:** Yeah, so basically, following that PR, I was intending to do a PR to add the state transition functions that we need, because that PR right now is just about containers. So my intention was to follow it up with the state transition function, where we have process operations and will basically process the votes, taking Vitalik's 3SF mini and converting it into our spec, and then also add a fork choice file, where we will again describe the 3SF mini fork choice. So this, I think, we need to do as a follow-up to that PR.
**Will:** So I think you shared the link to that earlier. Is there anything on the consensus tooling side that you wanted to mention?
**Justin Drake:** Sure, I mean, we use remerkleable for basic SSZ types like uint64 as well. In terms of just random ideas and thoughts: if I were to do it again, I would also include executable networking specifications, like for the RPC layers and gossip and stuff. That's something we didn't really do in the beacon chain. And what else was I thinking? What is the long-term plan for this? Like, will these eventually go back into consensus specs and execution specs, or will they just live on as lean specs and we all move to that specification repo? Do we know yet, or do we have intentions?
**Felipe:** I think it’s too early to tell. I think the default should probably be that we integrate with what we have, but then as you noted, there’s potentially benefits just having everything under one home. So maybe one of the existing repos will become the long-term home. I think at some point it would be nice for the lean Ethereum organization to be deprecated and it’s just Ethereum. But it’s way too early for that right now. And I really like your idea of having executable RPCs. That’s great. It would make it easier to generate tests for it as well. Will you guys be publishing reference spec tests for the clients or for all of the clients to use?
**Sam:** I think we should do it for sure. If you guys have any questions when making that, feel free to reach out to me or Steel. I think we have some experience with that.
**Will:** Yeah, I think that this is obviously a very organic process, and we're going to be learning as we go. I think it would be great if we could find some reasonable cadence to get critiques from this group, so that we can interrogate the way the repos are being organized, keep gleaning the knowledge you've picked up over the years and the observations you have about your current work, and make sure we're staying on course and making the best of this effort, so that when we look back in the future and things from lean Ethereum migrate over to Ethereum, everyone's happy and feels like it was a positive outcome and not just some new Frankenstein situation. You also mentioned executable specs for networking. I can see that the RPCs are in some sense pure functions, but how do you envision the networking being specced out?
**Justin Drake:** It's a good question. I mean, you have to abstract it in some way; you have to start somewhere. And I don't really know exactly: each RPC function would have its own handler function defined in Python, and then you can only go so far with that. So I'm not entirely sure.
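As an illustration of that idea (method and field names below are hypothetical, not a proposal), an executable RPC spec could be little more than a registry of pure Python handlers, from which an OpenRPC document and tests might later be generated:

```python
# Hypothetical sketch of an "executable RPC spec": the registry is the spec.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Status:
    head_slot: int
    head_root: bytes

RPC_HANDLERS: Dict[str, Callable] = {}

def rpc_method(name: str):
    def register(fn: Callable) -> Callable:
        RPC_HANDLERS[name] = fn
        return fn
    return register

@rpc_method("lean_getStatus")  # hypothetical method name
def get_status(node_state) -> Status:
    """A pure function of the node's local view; no transport details here."""
    return Status(head_slot=node_state.head_slot, head_root=node_state.head_root)

# A test harness or server shim would dispatch via RPC_HANDLERS["lean_getStatus"](state).
```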
**Justin Drake:** One completely different topic is that Luca from L2BEAT has started working on the execute precompile for native rollups, and you can think of that as being, in some sense, part of lean execution. His starting point is EELS, he's writing an EIP using EELS, and he's been very happy with the state of the specs. One thing that I've been meaning to do is introduce him to the EELS team. And I'm just using this as a signal, I guess, that whatever EELS is doing right now is possibly the right way forward, not just for lean execution but for lean consensus as well.
**Will:** Great. Well, this has been super helpful, and I’m happy to gather the notes and share them with people just to make sure that we agree that we’re landing in the same direction or pointing in the same direction. Yeah, like I said, we host a little breakout room every Wednesday at 2 PM UTC, so tomorrow, and we’ll make sure to cover what we learned here on that call tomorrow. And yeah, feel free to join that call anytime you want and listen in, and we’ll hopefully be sharing some results of these DevNets here soon and try to keep you guys apprised of how things are moving.
**Will:** All right. Any last thoughts?
**Sam:** I want to confirm one of the next steps, around writing an implementation of SSZ with the current EELS types. Is there an estimate of how much time it would take, and is there someone who would want to take responsibility for doing that?
**Felipe:** So I know the specs and our types pretty well, but I don't know SSZ very well, so I can't give an estimate there. I can say that for RLP it took maybe two weeks to write our implementation, so maybe that'll give you somewhat of an idea.
**Will:** Okay, perfect. Thank you. I mean, one person that might be worth engaging if we get stuck is Hsiao-Wei. I think this was the role that she had maintaining the specs, and I think she's quite familiar with the current typing as well as SSZ.
**Ream:** And actually, just a bit more context regarding the blockers for the SSZ types. I think the variable list is the only big blocker. The rest are just basic types which are available anyway. So I guess, yeah, if you figure out how to handle the variable list, I think we are good to do the specs in Python.
**Sam:** So by handling, you mean how to implement it? What do you mean by that?
**Ream:** I mean, basically just exposing the type so that we can actually define the variable list type in Python. So basically, we need that list implementation in Python; that is what I mean.
**Unnawut:** I think I might be able to help out with this. I can reach out after the call about the bounded-length list.
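For concreteness, a minimal sketch (not the EELS library, and purely hypothetical) of how a bounded, SSZ-style list type could be exposed in plain Python so that a spec can write something like `List[uint64, LIMIT]`:

```python
# Hypothetical bounded-list type; the real work is integrating this with the
# existing EELS type system and with SSZ serialization/merkleization.
from typing import Any, Iterable, Tuple, Type

class BoundedList(list):
    element_type: Type = object
    limit: int = 0

    def __class_getitem__(cls, params: Tuple[Type, int]) -> Type:
        elem, limit = params
        # Produce a concrete subclass that carries its element type and maximum length.
        return type(f"List[{elem.__name__}, {limit}]", (cls,),
                    {"element_type": elem, "limit": limit})

    def __init__(self, items: Iterable[Any] = ()):
        checked = [self.element_type(item) for item in items]
        if len(checked) > self.limit:
            raise ValueError(f"list exceeds limit of {self.limit} elements")
        super().__init__(checked)

# Usage sketch: Votes = BoundedList[int, 4096]; votes = Votes([1, 2, 3])
```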
**Sam:** Yeah, sure. There's also this new idea of progressive lists: instead of a fixed-length or fixed-maximum-length list, you have something that grows dynamically. It's part of an EIP, and we don't actually use it yet, but it might be something that links back to lean spec use. I can try to find a link.
**Ream:** Yeah, I have seen the EIP, and it’s quite fascinating, but I guess as of now, we’ll just focus on the normal list and then maybe at some point see if we want to add progressive list.
**Justin Traglia:** Sounds good. For reference, here's the PR; there's a link to the EIP in there.
**Will:** Thanks all. Thanks. I’ll see a lot of you tomorrow. Thank you.