The goal of this work is to derive a theoretically optimal pubsub algorithm and estimate its properties with respect to the bounding conditions: network bandwidth and latency.
It would be helpful to know the theoretical maximum speed of message dissemination within p2p networks, and to understand how far any specific pubsub algorithm (GossipSub, Episub, or any other) is from the maximum possible performance (a back-of-the-envelope estimate is sketched after the assumptions below).
Special thanks to Tanguy Menduist for good ideas, discussions, and review.
Model assumptions/simplifications
All nodes have the same bandwidth
All connections have the same latency
Only the dissemination of a single message is considered (i.e. no other messages are disseminated simultaneously)
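As a quick illustration of how the bounding conditions combine under these assumptions, here is a back-of-the-envelope estimate (added for illustration, not a derivation of the optimum): if every node that already has the message forwards the full message to one uninformed peer, the informed set at most doubles every $M/B + L$, where $M$ is the message size, $B$ the per-node bandwidth, and $L$ the connection latency. Reaching all $N$ nodes with this naive strategy then takes roughly

$$
T_{\text{naive}} \approx \lceil \log_2 N \rceil \cdot \left(\frac{M}{B} + L\right)
$$

An optimal algorithm can do better, e.g. by splitting the message into chunks and pipelining them so the latency term is paid far fewer times; quantifying that kind of gap is the point of this work.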
Note: this is a high-level idea of how the networking might be organized to meet future data sharding needs. It is a kind of follow-up to the PeerDAS and SubnetDAS proposals: in some aspects it complements them, and in others it serves as an alternative.
The very basic idea is to split the network into N Network Shards (let's assume N = 32 as an initial approach) and let every shard take care of (a toy shard-assignment sketch follows this list):
disseminating (Push) 1/N of the whole Gossip network traffic by serving as a backbone for various subnets (e.g. attestation, sync committee, and future DA sampling subnets)
custodying and serving (Pull) 1/N of the network data (DA samples, blob slices, and potentially blocks)
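Purely for illustration, here is a minimal Kotlin sketch of how subnets and data items might be mapped onto the N shards; the function names and the modulo/hash mapping are hypothetical, not part of any spec:

```kotlin
const val SHARD_COUNT = 32 // N = 32, the initial assumption above

// Hypothetical assignment: a subnet's gossip backbone and a data item's
// custody are both mapped to one of the N network shards. Any uniform,
// stable mapping would do; a simple modulo is used here for clarity.
fun backboneShard(subnetId: Int): Int = Math.floorMod(subnetId, SHARD_COUNT)
fun custodyShard(dataId: Long): Int = Math.floorMod(dataId.hashCode(), SHARD_COUNT)

fun main() {
    println("attestation subnet 37 backbone -> shard ${backboneShard(37)}")
    println("data item 123456 custody -> shard ${custodyShard(123456L)}")
}
```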
Data dissemination (Gossip subnet backbones)
This idea might be thought of as a generalized Attnet Revamp spec PR
Note
This is a draft version of the refactored Sharding DAS specification:
https://github.com/ethereum/eth2.0-specs/blob/dev/specs/das/das-core.md
The goals of this refactoring
Fill the gaps in the original version
Allow arbitrary-length data to be sampled/reconstructed/proven (not just power-of-2 lengths as in the original version; a short illustration of the underlying polynomial view follows this list)
Generalize the spec by relying on fundamental concepts (like polynomials and pairings) instead of specific optimized functions (FFT, FK20, KZG library functions)
(a special case of the above point) Get rid of the numerous point permutations in the spec, as they are only applicable to optimized algorithms' inputs/outputs. Those permutations were the major source of confusion for me while trying to understand the spec core.
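To make the arbitrary-length goal above concrete (my paraphrase of the standard polynomial view, not the spec's notation): the data $d_0, \dots, d_{k-1}$ is interpreted as evaluations of a polynomial $P$ of degree less than $k$, and samples/proofs are just further evaluations of the same $P$:

$$
P(x_i) = d_i, \quad 0 \le i < k, \qquad \deg P < k
$$

Lagrange interpolation recovers $P$ from any $k$ of its evaluations regardless of whether $k$ is a power of two; FFT-based interpolation merely optimizes the power-of-two case, which is why it can be kept out of the core spec.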
Github: https://github.com/Nashatyrev/artemis/tree/rayonism/shard
Running a testnet of 3 nodes ([1] <-> [2] <-> [3])
Node private keys
need to be in separate files p2p-privateKey-1.hex, p2p-privateKey-2.hex, and p2p-privateKey-3.hex (a minimal key-loading sketch follows the list):
080212206FD7AF84823661AE4F0FDB05D83BD5569B4270FE94C8AD79CCB7E3F843DEFCB1
080212206FD7AF84823661AE4F0FDB05D83BD5569B4270FE94C8AD79CCB7E3F843DEFCB2
080212206FD7AF84823661AE4F0FDB05D83BD5569B4270FE94C8AD79CCB7E3F843DEFCB3
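For illustration only, a minimal Kotlin sketch of reading one of these plain-hex key files into raw bytes (the file names are from this doc; the helper is hypothetical, and wiring the bytes into the node's libp2p identity depends on the client's API):

```kotlin
import java.io.File

// Decode a plain hex string (as stored in p2p-privateKey-N.hex) into bytes.
fun hexToBytes(hex: String): ByteArray =
    hex.trim().chunked(2).map { it.toInt(16).toByte() }.toByteArray()

fun main() {
    val keyBytes = hexToBytes(File("p2p-privateKey-1.hex").readText())
    println("loaded ${keyBytes.size} key bytes") // 36 bytes for the keys above
}
```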
[x] Add new/modify existing SSZ structures
[x] Implement block/epoch transition logic
[x] Add sanity transition tests
[x] Add gossip shard_header topic with validation
[x] Add beacon block proposer logic to include shard_headers from the gossip topic
[x] Add beacon attester logic to 'vote' for a shard header
(no actual attestation logic: just take the first non-empty header, if any)
[x] Add a shard_blob_{shard} topic 'router' which just directly exposes this topic via a REST API for a 'Shard Blob Node' (see the topic-name sketch after this list)
(Open question: is gossip message validation performed on the 'Shard Blob Node' side?)
[ ] Add REST API for ShardBlob publishing for a 'Shard Blob Node'
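For context, eth2 gossip topic names follow the `/eth2/<fork_digest>/<name>/<encoding>` pattern, so a per-shard blob topic helper might look like the hypothetical Kotlin sketch below (the exact topic name and encoding here are assumptions, not taken from a spec):

```kotlin
// Hypothetical helper producing a per-shard gossip topic name following
// the usual eth2 convention /eth2/<fork_digest>/<topic_name>/<encoding>.
fun shardBlobTopic(forkDigest: String, shard: Int): String =
    "/eth2/$forkDigest/shard_blob_$shard/ssz_snappy"

fun main() {
    // e.g. /eth2/b5303f2a/shard_blob_3/ssz_snappy
    println(shardBlobTopic("b5303f2a", 3))
}
```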
# Gossip pubsub simulation in the context of Beacon Chain
The network of Libp2p Gossip nodes is simulated to find out how different gossip (and non-gossip) parameters affect message dissemination and network traffic.
## Simulator overview
- Based on JVM Libp2p Gossip implementation
- Written in Kotlin
- Currently implemented in the form of [tests](https://github.com/libp2p/jvm-libp2p/blob/feature/simulator/src/test/kotlin/io/libp2p/simulate/gossip/Simulation1.kt) in a separate [jvm-libp2p branch](https://github.com/libp2p/jvm-libp2p/tree/feature/simulator) (a toy round-based sketch of such a simulation follows)
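The linked simulator is the real thing; as a self-contained illustration of the kind of measurement it performs (dissemination speed vs. gossip degree), here is a toy round-based model. It is not the jvm-libp2p code and assumes uniform latency and unlimited bandwidth:

```kotlin
import kotlin.random.Random

// Toy round-based gossip: each round, every node that has the message
// forwards it to `degree` random peers. Returns the number of rounds
// until the whole network has the message.
fun simulateGossip(nodeCount: Int, degree: Int, rng: Random = Random(42)): Int {
    val hasMessage = BooleanArray(nodeCount).also { it[0] = true }
    var informed = 1
    var rounds = 0
    while (informed < nodeCount) {
        // snapshot of current senders; nodes informed this round send next round
        val senders = (0 until nodeCount).filter { hasMessage[it] }
        for (sender in senders) {
            repeat(degree) {
                val peer = rng.nextInt(nodeCount)
                if (!hasMessage[peer]) {
                    hasMessage[peer] = true
                    informed++
                }
            }
        }
        rounds++
    }
    return rounds
}

fun main() {
    for (degree in listOf(3, 6, 12)) {
        println("degree=$degree -> ${simulateGossip(10_000, degree)} rounds")
    }
}
```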