# SSV network topologies and what it means for scale
## Option #1 Pubsub topic for each new validator
A new pubsub topic is created for each new validator joining the network.
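A minimal sketch of what a per-validator topic could look like (the `ssv.validator.` prefix is just an assumed naming convention, not the actual SSV topic format):

```go
package main

import "fmt"

// topicForValidator gives every validator its own dedicated pubsub topic,
// keyed directly by the validator's public key.
func topicForValidator(validatorPubKeyHex string) string {
	return fmt.Sprintf("ssv.validator.%s", validatorPubKeyHex)
}

func main() {
	fmt.Println(topicForValidator("a1b2c3")) // one topic per validator
}
```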
**Pros**:
* Minimize "non-relevant" messages per node
* Simple
**Cons**:
* Large # of topics + topic overhead
* Large peer count per node?
* Potentially infinite # of topics, with unknown effects on the network
* Network not very robust; the small number of nodes on each topic makes it easier to attack
## Option #2 Constant number of topics
A constant number of topics is defined (128, 256, etc.) and all validators are deterministically mapped to them.
Nodes join only the topics in which they have a validator.
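A sketch of the deterministic mapping, assuming the topic index is derived from a hash of the validator public key (topic naming and hash choice are illustrative, not the actual SSV implementation):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

const numTopics = 128 // constant topic count fixed up front

// topicFor maps a validator public key to one of numTopics topics; every node
// computes the same topic for the same validator.
func topicFor(validatorPubKey []byte) string {
	h := sha256.Sum256(validatorPubKey)
	return fmt.Sprintf("ssv.topic.%d", binary.BigEndian.Uint64(h[:8])%numTopics)
}

func main() {
	fmt.Println(topicFor([]byte("example-validator-pubkey")))
}
```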
**Pros**:
* Predictable # of topics and their overhead
* Simple
* Robust network, hard to attack
**Cons**:
* High message-processing overhead (nodes see many non-relevant messages)
* Harder to scale; some nodes will potentially need to connect to all topics and process all network messages
| # of topics | Expected # of validators to hit all topics (coupon collector) |
| -------- | -------- |
| 128 | 695 |
| 256 | 1568 |
| 512 | 3490 |
| 1024 | 7689 |
[*Using the coupon collector calculation*](https://en.wikipedia.org/wiki/Coupon_collector%27s_problem)
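The table values match the coupon collector expectation E[T] = n·H_n (n topics, H_n the n-th harmonic number). A small sketch reproducing them:

```go
package main

import "fmt"

// expectedValidatorsToCoverAllTopics returns n * H_n, the expected number of
// uniformly mapped validators needed before every one of n topics is used.
func expectedValidatorsToCoverAllTopics(n int) float64 {
	harmonic := 0.0
	for i := 1; i <= n; i++ {
		harmonic += 1.0 / float64(i)
	}
	return float64(n) * harmonic
}

func main() {
	for _, n := range []int{128, 256, 512, 1024} {
		fmt.Printf("%d topics -> ~%.0f validators\n", n, expectedValidatorsToCoverAllTopics(n))
	}
	// Prints ~695, ~1568, ~3490, ~7689, matching the table above.
}
```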
## Option #3 Dynamic number of topics
When validators register, they are assigned a unique incremental index.
A new topic is created for every X sequential indices (topic 0 for IDs 0-9, topic 1 for IDs 10-19, etc.).
Topics are created on the fly.
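A sketch of the index-based mapping with an assumed X = 10 (topic naming is illustrative):

```go
package main

import "fmt"

const validatorsPerTopic = 10 // X in the text: IDs 0-9 -> topic 0, 10-19 -> topic 1, ...

// topicForIndex maps a validator's sequential registration index to its topic.
func topicForIndex(validatorIndex uint64) string {
	return fmt.Sprintf("ssv.topic.%d", validatorIndex/validatorsPerTopic)
}

func main() {
	for _, idx := range []uint64{0, 9, 10, 42} {
		fmt.Printf("validator %d -> %s\n", idx, topicForIndex(idx))
	}
}
```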
**Pros**
* Reduces the number of topics compared to option #1, reducing overhead
* Simple
* More robust network than option #1, but less robust than option #2
**Cons**
* Still a large # of topics (# of validators / X, compared to # of validators for option #1)
* Small-to-medium processing overhead from non-relevant messages
* Easier to scale than option #2 but still hard to scale
## Option #4 Reusable operator groups
A new topic is created for each unique (ordered) operator group, i.e. all validators using operators 1,2,3,4 will be on the same topic.
Assumes users actually reuse operator groups.
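One way the group-to-topic derivation could work is to sort the operator IDs and hash them, so every validator that picks the same operator set lands on the same topic (the encoding and topic naming below are assumptions, not the actual SSV scheme):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
	"strings"
)

// topicForOperatorGroup derives one topic per unique (ordered) operator group.
func topicForOperatorGroup(operatorIDs []uint64) string {
	ids := append([]uint64(nil), operatorIDs...)
	sort.Slice(ids, func(i, j int) bool { return ids[i] < ids[j] })

	parts := make([]string, len(ids))
	for i, id := range ids {
		parts[i] = fmt.Sprintf("%d", id)
	}
	h := sha256.Sum256([]byte(strings.Join(parts, ",")))
	return fmt.Sprintf("ssv.group.%x", h[:4])
}

func main() {
	// Both validators use operators {1,2,3,4}, so they end up on the same topic.
	fmt.Println(topicForOperatorGroup([]uint64{1, 2, 3, 4}))
	fmt.Println(topicForOperatorGroup([]uint64{4, 3, 2, 1}))
}
```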
**Pros**
* A smarter way to do options #1 and #3
* Minimal message processing overhead, nodes process only relevant messages
**Cons**
* Harder to model # of topics
* Still an infinite number of topics potentially
We can assume a normal distribution for topic (operator group) reuse.
We can introduce incentives for reusing groups (for example, a discount on network fees) to lower the distribution's variance (meaning more validators use existing groups/topics).
Below is an example of group/topic reuse.
For the mean I've used 50, meaning 50% of all created groups have 50 or fewer validators and 50% have more.
We can also calculate a range: in the example below, 57% of all topics/groups will have 30-70 validators in them.
The lower the variance, the more topics/groups are reused.
[Normal distribution calculator](https://onlinestatbook.com/2/calculators/normal_dist.html)
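A sketch of the range calculation, assuming mean 50 and a standard deviation of ~25 (an assumed value that gives roughly the quoted ~57% probability of a group having 30-70 validators):

```go
package main

import (
	"fmt"
	"math"
)

// normalCDF is the cumulative distribution function of a normal distribution.
func normalCDF(x, mean, sigma float64) float64 {
	return 0.5 * (1 + math.Erf((x-mean)/(sigma*math.Sqrt2)))
}

func main() {
	mean, sigma := 50.0, 25.0 // sigma chosen for illustration only
	p := normalCDF(70, mean, sigma) - normalCDF(30, mean, sigma)
	fmt.Printf("P(30 <= validators per group <= 70) = %.0f%%\n", p*100) // ~58%
}
```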