This guide will set up
11/17/2023 Contextual logging:
9/8/2023 Today in BOLD, challenges can have subchallenges, and we have a total of 3 levels. We do this because at each challenge, validators have to commit to N state roots of a block’s history, which can be very expensive to compute. For example, computing 2048 machine hashes for the execution of a block and collecting them takes around 5 minutes on a modern MacBook Pro.
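A minimal Go sketch of that structure, purely illustrative: a challenge at one level holds a history commitment over N state roots and may spawn subchallenges one level down. The level names and the commitmentCost helper are assumptions for the example (the cost just back-solves the ~5 minutes for 2048 hashes quoted above); none of this is BOLD’s actual types.

```go
package main

import (
	"fmt"
	"time"
)

// Level of a challenge in a three-level dispute. Names are illustrative; the
// point is only that each level re-commits to N roots of a finer history.
type Level int

const (
	BlockLevel Level = iota
	BigStepLevel
	SmallStepLevel
)

// Challenge is a hypothetical representation of one challenge: a history
// commitment over N state roots plus any subchallenges opened one level down.
type Challenge struct {
	Lvl           Level
	StateRoots    [][32]byte // the N roots the validator commits to
	Subchallenges []*Challenge
}

// commitmentCost is a back-of-envelope estimate of the time to build one
// commitment, using the figure from the note: ~2048 machine hashes in ~5
// minutes on a laptop, i.e. roughly 146ms per hash.
func commitmentCost(numRoots int) time.Duration {
	perHash := 5 * time.Minute / 2048
	return time.Duration(numRoots) * perHash
}

func main() {
	sub := &Challenge{Lvl: BigStepLevel, StateRoots: make([][32]byte, 2048)}
	top := &Challenge{Lvl: BlockLevel, StateRoots: make([][32]byte, 2048), Subchallenges: []*Challenge{sub}}
	fmt.Println(len(top.Subchallenges), commitmentCost(len(sub.StateRoots))) // 1 5m0s
}
```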
8/16/2023 If you knew what you know now, and you were starting a new CL from scratch, what structures would you use? Let me start myself to set the tone. You can make it language-specific (for example Go, and avoid templated methods), but I am not really interested in the small details, rather in the main components of the critical parts of the code. Typical questions I’m thinking about are:

Would you use a custom allocator for the beacon-state/validators/attestations structures?

Would you use a struct-like object for them or a functional-style object? The typical situation is this: we have a buffer with 8192 roots in the BeaconState, but these roots are sequential. If the state for slot N had roots up to root r, the state for N+1 will share essentially all of the roots with the state for N, except at one point where a single root changed. If you store vectors and want to share the common part, you are led to structures like the multi-value slice or similar, where you have a base slice and store the diffs somewhere else (or some mathematically equivalent arrangement). Another way to store the object is as a linked list (with some additions to make it into a ring buffer): when you change one element you change a single node on that list. Someone holding the pointer to the current node resolves to the current full ring buffer, while someone holding the pointer to the previous-to-modified node resolves (by traversing the links from it) to the previous full ring buffer. If you have a fork this still works; you just need to add a few pointers on either side of the fork. The point is that, from the structure’s own point of view, there is no “base state” plus diffs: you can just drop the pointer and let the GC in Go, or the destructor in C++, deal with the rest. There is no need to prune a multi-value slice yourself. On the other hand, the multi-value slice would incur way less memory fragmentation if you are dealing with a single state in the happy case (see the ring-buffer sketch at the end of this entry). There are variations of this for all structures in the BeaconState, and there are pros and cons for each of them.

What sort of thread management would you use? (I think here there won’t be a disagreement, and just relying on the Go runtime is fine.)

What sort of fork management would you use? I originally thought about templating, something like having the fork be the template parameter of a BeaconBlock structure. In C++ at least, the best structure seems to be vertical polymorphism in the structures themselves, like having class AltairBlock : public Phase0Block, but not using methods of these structures; rather, pure functions like attestations(block Block) []Attestations, and just letting AltairBlock and Phase0Block satisfy the concept Block, which in this case would mean they both have a member attestations, so you can simply implement that function as return block.attestations. In Go this seems to be quite complicated: generics, as far as I could read, do not allow us to do this, and while we do have interfaces, we are still forced to repeat the underlying structures completely for AltairBlock and Phase0Block instead of invisibly inheriting and overriding only the changed parts (see the Go sketch at the end of this entry).

Would you trade security and correctness for performance? I think this is a funny one: if you’re coding for 0.1% of the market, then you can do lots of shit like only keeping the last two states and breaking if there’s a depth-3 reorg.
That’s an extreme case, but it’s probably not so far from what Erigon’s CL may be; I would bet that they just follow blindly whatever is voted. But beyond the extreme cases, how about optimistic sync? I am pretty sure we are the only ones that considered a recursive system in which a bunch of branches turned out to be invalid. I am convinced that you can simplify the optimistic-status handling a lot if you just ignore correctness in a bunch of cases, with no real danger to the network.

How would you deal with the engine? Would you change the interaction with the engine if you knew that ALL your users are using MEV-Boost? Would you optimize the builder code to the detriment of the local-block code? For example, it could be perfectly valid not to wait for local blocks, and not to request them in advance until late in the slot, if we know we are not resource-heavy. Not waiting for the result of FCU is also fine, since a VALID response from NewPayload already gives us the guarantee we need (this is something we could do right now in Prysm; see the engine sketch at the end of this entry).
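On the linked-list alternative for the 8192-root buffer, here is a minimal Go sketch of the idea exactly as described above: each state holds a pointer to its newest node, an update allocates exactly one node, older states still resolve their own view by walking the links, and dropping a head pointer lets the GC collect whatever nobody references. The node, push, and view names are made up for the example; this is not Prysm’s multi-value slice.

```go
package main

import "fmt"

// node is one cell of a persistent, singly linked history of roots. A state
// holds a pointer to its newest node; walking prev links reconstructs the
// ring buffer as seen by that state.
type node struct {
	root [32]byte
	prev *node
}

const bufLen = 8192 // SLOTS_PER_HISTORICAL_ROOT

// push returns a new head sharing all older cells with the previous head.
// Only one node is allocated per state transition; old states keep resolving
// their own view, and dropping a head pointer lets the GC reclaim the tail.
func push(head *node, root [32]byte) *node {
	return &node{root: root, prev: head}
}

// view materializes the most recent bufLen roots (newest first) as seen from
// the given head.
func view(head *node) [][32]byte {
	out := make([][32]byte, 0, bufLen)
	for n := head; n != nil && len(out) < bufLen; n = n.prev {
		out = append(out, n.root)
	}
	return out
}

func main() {
	var stateN *node
	for i := 0; i < 3; i++ {
		stateN = push(stateN, [32]byte{byte(i)})
	}
	stateN1 := push(stateN, [32]byte{0xff}) // next slot or a fork: one allocation
	fmt.Println(len(view(stateN)), len(view(stateN1)))   // 3 4
	fmt.Println(view(stateN1)[0][0], view(stateN)[0][0]) // 255 2: both views coexist
}
```

The trade-off mentioned above shows up directly here: views are cheap to fork and pruning is just the GC, but every read is a pointer chase, whereas a flat slice plus diffs reads contiguously at the cost of explicit pruning.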
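On the fork-management question, the closest Go idiom to the C++ “concept + pure function” approach seems to be an interface with an accessor method plus struct embedding. A hedged sketch with simplified placeholder types (not Prysm’s real ones):

```go
package main

import "fmt"

// Attestation and the block types below are simplified placeholders; the
// sketch only shows the nearest Go equivalent of the C++ approach above.
type Attestation struct{ Slot uint64 }

// Block is the interface every fork’s block satisfies. In Go the accessor has
// to be a method; there is no way to say "any struct with an attestations
// field", which is exactly the limitation the note complains about.
type Block interface {
	Attestations() []Attestation
}

// Phase0Block carries the fields common to all forks.
type Phase0Block struct {
	Atts []Attestation
}

func (b *Phase0Block) Attestations() []Attestation { return b.Atts }

// AltairBlock embeds Phase0Block, so it inherits the fields and the method
// set; only genuinely new fields need to be spelled out.
type AltairBlock struct {
	Phase0Block
	SyncAggregateBits []byte
}

// countAttestations is a fork-agnostic "pure function" over any Block.
func countAttestations(b Block) int { return len(b.Attestations()) }

func main() {
	p := &Phase0Block{Atts: make([]Attestation, 2)}
	a := &AltairBlock{Phase0Block: Phase0Block{Atts: make([]Attestation, 3)}}
	fmt.Println(countAttestations(p), countAttestations(a)) // 2 3
}
```

Embedding gets you the field reuse, but the accessor still has to be spelled out as a method on the base type, so the repetition moves from fields to method boilerplate rather than disappearing.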
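And on not waiting for FCU: a sketch of what that could look like, assuming a hypothetical EngineClient wrapper. The method names mirror engine_newPayload / engine_forkchoiceUpdated but are not Prysm’s actual execution-layer interface.

```go
package main

import (
	"context"
	"log"
)

// EngineClient is a hypothetical wrapper over the Engine API used only for
// this sketch.
type EngineClient interface {
	NewPayload(ctx context.Context, payload []byte) (status string, err error)
	ForkchoiceUpdated(ctx context.Context, headHash [32]byte) error
}

// importBlock sketches the optimization in the note: once NewPayload has
// answered VALID, the payload is already known to be valid, so we can fire
// the FCU asynchronously instead of blocking block processing on its result.
func importBlock(ctx context.Context, ec EngineClient, payload []byte, headHash [32]byte) error {
	status, err := ec.NewPayload(ctx, payload)
	if err != nil {
		return err
	}
	if status == "VALID" {
		// Don't wait: the FCU result adds nothing we need before continuing
		// with fork choice and attestation duties.
		go func() {
			if err := ec.ForkchoiceUpdated(context.Background(), headHash); err != nil {
				log.Printf("async FCU failed: %v", err)
			}
		}()
		return nil
	}
	// For SYNCING/INVALID and friends, keep the conservative, blocking path.
	return ec.ForkchoiceUpdated(ctx, headHash)
}

func main() {}
```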
8/2/2023