# JAM0 @ Bangkok - November 12, 2024 Notes
[Here are Sourabh Niyogi's notes, to be compiled into docs.jamcha.in by end of November along with Nov 16 session notes]
M1 Import Blocks. As of Nov 2024, teams are focussed on M1 "Import Blocks", taking baby steps through the following modes (see the sketch after this list):
* mode=fallback: extrinsic-less blocks [M0.0]
* mode=safrole: same as above but with |E_T| > 0 [M0.1]
* mode=assurances: same as above but with |E_G| > 0, |E_A| > 0, |E_P| > 0 as well [M0.4]
* mode=disputes: same as above but with |E_D| > 0 [M0.5]
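As an illustration of what jamtest might assert per mode, here is a minimal TypeScript sketch. The `Mode` names mirror the list above, but the `ExtrinsicCounts` shape and its field names are assumptions for illustration, not the actual JAM codec layout.

```typescript
// Hypothetical extrinsic counts extracted from a decoded block.
// Field names are illustrative; the real JAM codec layout differs.
interface ExtrinsicCounts {
  tickets: number;    // |E_T|
  guarantees: number; // |E_G|
  assurances: number; // |E_A|
  preimages: number;  // |E_P|
  disputes: number;   // |E_D|
}

type Mode = "fallback" | "safrole" | "assurances" | "disputes";

// Returns true when the block's extrinsic profile matches what the
// given test mode is meant to exercise (per the list above).
function matchesMode(c: ExtrinsicCounts, mode: Mode): boolean {
  switch (mode) {
    case "fallback":   // extrinsic-less blocks only
      return c.tickets === 0 && c.guarantees === 0 && c.assurances === 0
          && c.preimages === 0 && c.disputes === 0;
    case "safrole":    // tickets flowing, nothing else
      return c.tickets > 0 && c.guarantees === 0 && c.assurances === 0
          && c.preimages === 0 && c.disputes === 0;
    case "assurances": // guarantees, assurances and preimages as well
      return c.guarantees > 0 && c.assurances > 0 && c.preimages > 0;
    case "disputes":   // disputes present too
      return c.disputes > 0;
  }
}
```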
There are not enough teams who have processed work packages yet, so the consensus is that focussing on just the first two (fallback + safrole) is best, since it unlocks state-root comparison issues on a tiny number of leaves (15-17). Following Gavin's advice from Saturday, we can plan a fuzz test that sends valid/invalid blocks into a single node: a jamtest binary with cli flags for the mode, plus an endpoint for basic fuzz testing. This would POST a set of JAM-codec-encoded blocks ("jam_importblock") to the endpoint with HTTP alone. Setting C14/C15 aside, teams can have their implementations pass this fuzz test without implementing { JAMNP, erasure coding, PVM }.
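A minimal sketch of that harness in TypeScript, assuming a node exposing a plain-HTTP `jam_importblock` endpoint on localhost with a simple per-block accept/reject response; the port, path, and response shape are all assumptions to be pinned down on Friday.

```typescript
// Minimal fuzz-harness sketch: POST JAM-codec-encoded blocks to a node
// and record which ones it accepts. The endpoint URL and the
// "2xx = imported" convention are assumptions, not an agreed RPC.
async function importBlock(encodedBlock: Uint8Array): Promise<boolean> {
  const res = await fetch("http://127.0.0.1:9900/jam_importblock", {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body: encodedBlock,
  });
  return res.ok; // assume 2xx = block imported, 4xx/5xx = rejected
}

async function runFuzzSet(blocks: Uint8Array[]): Promise<void> {
  for (const [i, block] of blocks.entries()) {
    const accepted = await importBlock(block);
    console.log(`block ${i}: ${accepted ? "imported" : "rejected"}`);
  }
}
```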
M2 Author. Here we bring in authoring with JAMNP. There is consensus that focussing on just UP 0 and CE 128 is best. A starting point can be to POST some bytes and expect some bytes back with HTTP alone, which is sufficient for most JAMNP CEs in addition to UP 0 + CE 128. The endpoint should be full QUIC eventually, however. The same jamtest binary can be used to support a specific set of UP 0 + CE tests given certain cli inputs.
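In the same spirit, a request/response CE can be stubbed over HTTP before QUIC lands. A sketch, assuming a `/ce/<id>` path scheme for local testing; that scheme is purely an illustrative assumption and not part of JAMNP.

```typescript
// HTTP stand-in for a JAMNP CE exchange: POST the request bytes for a
// given CE id, get the response bytes back. The /ce/<id> path scheme
// is an assumption for local testing only.
async function ceExchange(ceId: number, request: Uint8Array): Promise<Uint8Array> {
  const res = await fetch(`http://127.0.0.1:9900/ce/${ceId}`, {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body: request,
  });
  if (!res.ok) throw new Error(`CE ${ceId} failed: HTTP ${res.status}`);
  return new Uint8Array(await res.arrayBuffer());
}

// Example: exercise CE 128 with some already-encoded request payload.
// ceExchange(128, encodedRequest).then(bytes => console.log(bytes.length));
```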
We can develop a writeup of the cli flags covering both M1+M2 for jamtest on Friday and get into precise RPC nitty-gritty. For both M1+M2, it appears easier for teams to work with the above jamtest model than docker-compose; in particular, it supports teams working with interpreted languages (e.g. TypeScript) without complex obfuscation. In addition, it may be able to cover the V=1023 setting in a way that docker-compose cannot realistically do, which makes it possible to plug into JAM Toaster more easily and skip any non-GP-conformance issues.
Tooling is valuable, but only a few teams are engaged in tools, so a bounty is probably overkill. Emiel guided us through the https://fluffylabs.dev/ work; they have developed many useful things beyond the beloved GP Reader. Of particular note is https://trie.fluffylabs.dev/, which is directly related to the above. Importing a trie snapshot to support M1 Import Blocks makes sense for this tool -- this would enable teams to quickly diagnose state-root differences with jamtest (or find problems with the tool itself). This concept deserves some follow-through with fluffylabs (not present here this week).
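To make the snapshot idea concrete, here is a sketch of the kind of diff that would help trace a state-root mismatch back to specific leaves; the snapshot format (a flat map of hex key to hex value) is an assumption, not the tool's actual export format.

```typescript
// Compare two flat state snapshots (hex key -> hex value) and report
// the keys whose values differ. The flat-map format is an assumed
// export shape for illustration.
type Snapshot = Record<string, string>;

function diffSnapshots(ours: Snapshot, theirs: Snapshot): string[] {
  const keys = new Set([...Object.keys(ours), ...Object.keys(theirs)]);
  const differing: string[] = [];
  for (const key of keys) {
    if (ours[key] !== theirs[key]) differing.push(key);
  }
  return differing;
}

// With only 15-17 leaves in fallback/safrole mode, printing every
// differing key is cheap and usually pinpoints the faulty transition.
// console.log(diffSnapshots(loadedOurs, loadedTheirs));
```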
A JAM collective may be worth building, but there are not enough tool builders to justify it at this time, at least among the participants in the session. Basically teams have a lot of progress to make on their implementations (being able to process work packages) before implementing JAM Services. Perhaps in the next JAM0 session in early 2025 or with a larger number of JAM implementers we could reach a different conclusion.
https://jamcha.in/ is a fine place for knowledge sharing (docs / tutorials), supported by Oliver -- https://github.com/JamBrains/jam-chain
We did not cover the Telemetry dashboard/UI/UX/Indexer in the session, but at Bryan's recommendation we invited Yongfeng (who leads Subsquare) Tuesday evening to discuss. A W3F RFP or OpenGov proposal for a design matching Shawn T's "Polkadot Cloud" concepts appears highly worthwhile for UI/UX teams like Subsquare + fluffylabs. We should be able to use standard open-source tools like Grafana and Prometheus in parallel and instrument a JAM Toaster node explorer. The format for instrumenting nodes at the level of detail of E_T, E_G, E_A, E_P, E_D and the key JAM state variables in C1-C15 deserves a deep-dive follow-up discussion, potentially on Friday at 11am?
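As a strawman for that discussion, a per-block telemetry record could expose the extrinsic counts directly; every field name below is invented for illustration, and which C1-C15 state components to expose is left as the open question it is.

```typescript
// Strawman per-block telemetry record a node could emit for the
// dashboard/indexer; all field names are invented for illustration.
interface BlockTelemetry {
  slot: number;          // timeslot of the block
  headerHash: string;    // hex-encoded header hash
  tickets: number;       // |E_T|
  guarantees: number;    // |E_G|
  assurances: number;    // |E_A|
  preimages: number;     // |E_P|
  disputes: number;      // |E_D|
  // Which of the C1-C15 state components to expose, and in what
  // encoding, is exactly the question for the Friday deep dive.
}

// Counts like these map naturally onto Prometheus gauges/counters,
// which Grafana can then chart per node across the JAM Toaster.
```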