# Erasure-coding requirements

Each validator in JAM will need to:

* be a guarantor for one core:
  * this involves constructing a work-package and potentially checking other guarantors' packages - 3x erasure-coding construction
* audit:
  * 10x recovery

Availability recovery involves no reconstruction in the happy path (when the systematic chunks arrive) and one reconstruction otherwise, but it still needs to encode again in order to check the erasure root. So we care about the speed of erasure encoding, not decoding.

With the SIMD backend, the construction cost per 6 MB blob is around 25 ms on laptop-level hardware. Decoding costs even less (around 6 ms). So we can easily do all of that work on less than one CPU core (a minimal benchmark sketch is included at the end of this note).

The SIMD backend works with SSE and AVX2 on x86_64 and with NEON on aarch64 (e.g. Apple M1), so almost all CPUs currently on the market should be covered.

The non-SIMD backend costs around 100-150 ms per blob, which is not too bad and might even work with one CPU core, but it is probably safer to budget 2-3 cores in case we need to decode (around 200 ms) and/or require lower latency. The good thing is that the work is parallelizable.

### Alternative design

Instead of using binary-tree merklization, we could do something similar to Danksharding and Avail (which has a Substrate-based implementation) and use KZG commitments. In short, the guarantor (or a builder) constructs a proof (instead of an erasure root) that guarantees the encoding is correct without the need to re-encode the data. The cost of verifying the proof should be a few milliseconds, according to https://github.com/grandinetech/rust-kzg?tab=readme-ov-file#verify-kzg-proof. The trade-off is that we require more compute on the prover (builder) side: https://github.com/grandinetech/rust-kzg?tab=readme-ov-file#compute-kzg-proof. But this is done only once per work-package.
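To make the KZG trade-off concrete, here is a rough Rust sketch of how the responsibilities would split. Everything in it is hypothetical: the type and function names are placeholders (not the rust-kzg API), the bodies are stubs, and a real scheme would likely commit to the polynomial underlying the erasure coding and use per-chunk opening proofs rather than the single blob-level proof shown here.

```rust
// Hypothetical placeholder types and functions - NOT the rust-kzg API.
// The sketch only illustrates who does what, and how often.

pub struct TrustedSetup;             // output of a one-off trusted-setup ceremony
pub struct Blob(pub Vec<u8>);        // the ~6 MB work-package data
pub struct Commitment(pub [u8; 48]); // KZG commitment, taking the place of the erasure root
pub struct Proof(pub [u8; 48]);      // proof that the encoding is consistent with the commitment

/// Builder/guarantor side, run once per work-package.
/// This is the expensive part (cf. the compute-kzg-proof benchmarks linked above).
pub fn commit_and_prove(_setup: &TrustedSetup, _blob: &Blob) -> (Commitment, Proof) {
    todo!("backend-specific")
}

/// Any other validator holding a chunk: a few milliseconds per check
/// (cf. the verify-kzg-proof benchmarks linked above), and crucially it does
/// not require re-encoding the full blob the way the erasure-root check does.
pub fn verify_chunk(
    _setup: &TrustedSetup,
    _commitment: &Commitment,
    _proof: &Proof,
    _chunk: &[u8],
    _chunk_index: u32,
) -> bool {
    todo!("backend-specific")
}
```

The point of the sketch is the asymmetry: `commit_and_prove` runs once per work-package on the builder/guarantor, while `verify_chunk` is what every other validator runs instead of re-encoding the blob to recompute the erasure root.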
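For the baseline design, here is a minimal benchmark sketch for the encoding-cost figures quoted above (the ~25 ms encode and ~6 ms decode per 6 MB blob). It assumes the reed-solomon-simd crate; whether that is the exact SIMD backend measured above is an assumption on my part, and the chunk counts (342 systematic chunks out of 1023, i.e. the one-third threshold with 1023 validators) and shard size are illustrative rather than taken from the graypaper.

```rust
// Cargo.toml (assumed): reed-solomon-simd = "2"
use std::time::Instant;

fn main() -> Result<(), reed_solomon_simd::Error> {
    // Illustrative parameters only; the real chunk counts/sizes come from the graypaper.
    const ORIGINAL: usize = 342;             // systematic chunks (1/3 of 1023 validators)
    const RECOVERY: usize = 1023 - ORIGINAL; // redundancy chunks
    const SHARD_BYTES: usize = 18_432;       // 342 * 18_432 bytes ~= 6 MB blob

    // A fake 6 MB blob, already split into equal-sized systematic shards.
    let original: Vec<Vec<u8>> = (0..ORIGINAL)
        .map(|i| vec![i as u8; SHARD_BYTES])
        .collect();

    // Encoding: the cost a guarantor pays per work-package, and again whenever
    // a recovering validator re-encodes to check the erasure root.
    let t = Instant::now();
    let recovery = reed_solomon_simd::encode(ORIGINAL, RECOVERY, &original)?;
    println!("encode: {} recovery shards in {:?}", recovery.len(), t.elapsed());

    // Decoding: only needed off the happy path. Pretend the first 100 systematic
    // shards were lost and restore them from 100 recovery shards.
    let t = Instant::now();
    let restored = reed_solomon_simd::decode(
        ORIGINAL,
        RECOVERY,
        original.iter().enumerate().skip(100), // surviving systematic shards
        recovery.iter().enumerate().take(100), // 100 recovery shards
    )?;
    println!("decode: {} shards restored in {:?}", restored.len(), t.elapsed());

    Ok(())
}
```

Run with `cargo run --release` (debug-build timings are not meaningful). The crate should pick an SSE/AVX2/NEON code path at runtime where available, which is what the SIMD vs. non-SIMD numbers above hinge on.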