# National Portrait Gallery
Simulating the number of messages and their timings.
In our simulation, let's assume there are `V` validators and `N` nodes. It might be easiest to make `V` a multiple of `N`, with `V` somewhere around 400k-450k.
So if `N` is 50, let's make `V` 400k, giving 8k vals per node. If there are 25 nodes, then 16k vals per node, etc.
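As a rough sketch, the simulation parameters could be captured like this (all names here are hypothetical, not from any existing codebase):

```python
# Hypothetical simulation parameters (illustrative only).
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32

N = 50                    # number of simulated nodes
V = 400_000               # total validators, chosen as a multiple of N
VALS_PER_NODE = V // N    # 8_000 validators per node in this example
```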
## BeaconBlock Topic
### Real Case
In the real world, every slot, at the start of the slot, a single validator sends a BeaconBlock.
### Simulation Case
If our validators are uniformly distributed amongst the nodes, just randomly select one node every 12 seconds to send a BeaconBlock.
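A minimal sketch of that selection, assuming the simulation driver keeps a `nodes` list and ticks once per slot (names are illustrative):

```python
import random

def beacon_block_publisher(nodes: list[int]) -> int:
    """Pick one node, uniformly at random, to publish the BeaconBlock this slot."""
    return random.choice(nodes)
```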
## Aggregate And Proof Topic
### Real Case
There are 64 subnets, and with over 250k vals (currently the case) all of them are in use. So every slot, attestations are being sent on every subnet. On each subnet, a validator takes a hash of its slot signature `mod` a fraction of the committee size, such that on average 16 validators per subnet get chosen to be "aggregators". These 16 validators collect attestations on their subnet and aggregate them, publishing the result on the `AggregateAndProofTopic`. We therefore expect (on average) 16*64 = 1024 messages per slot on this topic.
There is an annoying wrinkle: the same attestations can be aggregated by different validators (which is pretty likely to happen). Even though the resulting messages are unique as far as gossipsub is concerned (the signature is different, therefore the message is different), nodes do not propagate duplicate aggregates. Therefore, only around 20-30% of these make it through the network, so we might want to simulate that.
### Simulation Case
The 16 validators per subnet should be chosen uniformly at random. We therefore expect, per slot, 16*64 = 1024 validators at random to publish on the aggregate and proof topic. Since we have validators uniformly distributed across nodes (i.e. an equal number of vals per node), we should be able to randomly and uniformly select nodes to publish these 1024 messages on this topic. Due to the duplicate behaviour mentioned above, let's take 30% of these, i.e. 307 messages.
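A sketch of how that per-slot schedule could be produced, assuming a `nodes` list and sampling publishers with replacement (the 0.3 factor is the rough propagation share mentioned above):

```python
import random

TARGET_AGGREGATORS_PER_SUBNET = 16
ATTESTATION_SUBNET_COUNT = 64
PROPAGATION_FACTOR = 0.3  # rough share of aggregates that actually propagate

def aggregate_and_proof_publishers(nodes: list[int]) -> list[int]:
    """Nodes (sampled with replacement) publishing an AggregateAndProof this slot."""
    msgs = round(TARGET_AGGREGATORS_PER_SUBNET * ATTESTATION_SUBNET_COUNT * PROPAGATION_FACTOR)  # ~307
    return random.choices(nodes, k=msgs)
```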
## Subnet Topics
### Real Case
There are 64 subnets. The total validators, `V`, are split up such that each submits one attestation per epoch. This means per epoch we expect to see V attestations at a network level (individual nodes don't need to see all of them).
Therefore, per slot, the total messages sent will be V/32. These should be divided evenly among the subnets, so on each subnet we expect V/32/64 = V/2048 messages per slot. For a 400k val network, that's about 195 messages per subnet. Again, validators are randomly shuffled into slots and subnets.
### Simulation Case
If validators are spread uniformly across the nodes, then on each subnet topic we should expect V/2048 messages per slot. So we should be able to randomly and uniformly select nodes to send V/2048 messages on each subnet every slot.
The number of subnets a node is subscribed to mainly depends on its long-lived subnets. In our simulation, our nodes should probably be subscribed to all subnets. We could try having them subscribed to a random half and see how it behaves, but that complexity is something we can add later.
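A sketch of the per-subnet attestation schedule under these assumptions (names are illustrative):

```python
import random

SLOTS_PER_EPOCH = 32
ATTESTATION_SUBNET_COUNT = 64

def attestation_subnet_schedule(nodes: list[int], total_validators: int) -> dict[int, list[int]]:
    """For each attestation subnet, the nodes publishing an attestation this slot."""
    # V / 2048 messages per subnet per slot, ~195 for 400k validators.
    msgs_per_subnet = total_validators // (SLOTS_PER_EPOCH * ATTESTATION_SUBNET_COUNT)
    return {
        subnet: random.choices(nodes, k=msgs_per_subnet)
        for subnet in range(ATTESTATION_SUBNET_COUNT)
    }
```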
## Exits, Slashings
Let's ignore these; there is rarely any traffic on them.
## SignedContributionAndProof Topic
### Real Case
This is like the AggregateAndProof topic, but for aggregating messages from the sync committees. Just like the aggregate topic, we target 16 aggregators per sync committee subnet. There are 4 sync committee subnets, so we expect 16*4 = 64 messages per slot on this topic.
Again the same duplicate behaviour occurs (as in the AggregateAndProof topic) and only around 20-30% make it through. Let's simulate with 30%, so 19 messages per slot.
### Simulation Case
We can just randomly select nodes to send these 19 messages per slot on this topic.
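The same shape as the aggregate sketch above, scaled to 4 sync committee subnets (again a sketch under the same assumptions, not a definitive implementation):

```python
import random

TARGET_AGGREGATORS_PER_SUBNET = 16
SYNC_COMMITTEE_SUBNET_COUNT = 4
PROPAGATION_FACTOR = 0.3

def contribution_and_proof_publishers(nodes: list[int]) -> list[int]:
    """Nodes publishing a SignedContributionAndProof this slot (~19 messages)."""
    msgs = round(TARGET_AGGREGATORS_PER_SUBNET * SYNC_COMMITTEE_SUBNET_COUNT * PROPAGATION_FACTOR)
    return random.choices(nodes, k=msgs)
```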
## Sync Committee Messages
### Real Case
There are 512 validators randomly selected to be part of the sync committee. These validators are split across 4 "subnets". Every slot, each of them signs the previous block root, producing a SyncCommitteeMessage. Therefore on each sync committee subnet, we expect to see 512/4 = 128 messages per slot.
### Simulation Case
Again these are distributed uniformly. Technically a sync committee lasts quite a while (256 epochs, roughly 27 hours), so in the real case we would be publishing from the same set of validators for a long time. We can ignore this and just randomly and uniformly select nodes to send 128 messages per slot per sync committee subnet (there are 4 of them).
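A sketch of the per-subnet sync committee schedule under those simplifications:

```python
import random

SYNC_COMMITTEE_SIZE = 512
SYNC_COMMITTEE_SUBNET_COUNT = 4

def sync_committee_schedule(nodes: list[int]) -> dict[int, list[int]]:
    """For each sync committee subnet, the nodes publishing a sync committee message this slot."""
    msgs_per_subnet = SYNC_COMMITTEE_SIZE // SYNC_COMMITTEE_SUBNET_COUNT  # 128
    return {
        subnet: random.choices(nodes, k=msgs_per_subnet)
        for subnet in range(SYNC_COMMITTEE_SUBNET_COUNT)
    }
```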
## TLDR
| Topic | Strategy |
| -------- | -------- |
| BeaconBlock | 1 random node per slot |
| AggregateAndProof | 307 messages per slot, sent by randomly selected nodes |
| Subnets | V/2048 messages per subnet per slot, sent by randomly selected nodes |
| SignedContributionAndProof | 19 messages per slot, sent by randomly selected nodes |
| Sync Committee Subnets | 128 messages per subnet per slot (4 subnets), sent by randomly selected nodes |
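Putting the table's numbers together, a rough per-slot message budget for a 400k-validator network might look like this (illustrative only, names are hypothetical):

```python
def messages_per_slot(total_validators: int = 400_000) -> dict[str, int]:
    """Approximate per-slot message counts implied by the table above."""
    return {
        "beacon_block": 1,
        "aggregate_and_proof": round(16 * 64 * 0.3),            # ~307
        "attestations_per_subnet": total_validators // 2048,    # ~195, on each of 64 subnets
        "signed_contribution_and_proof": round(16 * 4 * 0.3),   # ~19
        "sync_committee_per_subnet": 512 // 4,                  # 128, on each of 4 subnets
    }
```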