## Pool Selection Criteria
> source: https://medium.com/gauntlet-networks/balancer-v2-pools-trading-fee-methodology-7a65df671b8c
Taken from the above article, here are the highlights and extracted equations.
### Equation
$$
\ln L + \ln V_{30} + \ln V_{5} + \ln F + \ln MC
$$
- where $L$ is the pool liquidity,
- $V_{n}$ represents the volume over the past $n$ days,
- $F$ represents the total fees collected, and
- $MC$ represents the market cap of the base token.
We chose this metric to allow for each factor to contribute substantially while also reducing the impact of large outliers in any particular factor.
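The log-sum metric above is straightforward to compute; this is a minimal sketch, and `pool_score` with its parameter names is an illustrative choice, not taken from the article:

```python
import math

def pool_score(liquidity: float, vol_30d: float, vol_5d: float,
               fees: float, market_cap: float) -> float:
    """Selection metric: ln L + ln V30 + ln V5 + ln F + ln MC.

    Summing logs lets every factor contribute while damping the
    impact of a large outlier in any single factor.
    """
    return sum(math.log(x) for x in (liquidity, vol_30d, vol_5d,
                                     fees, market_cap))
```

Because the metric is a sum of logs, doubling any one factor adds the same constant ($\ln 2$) to the score regardless of that factor's magnitude.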
For a fee $f$,
$\Delta f=\sum_{i} w_{i} g_{i}$
where
$g_{i}=M_{i} \tanh \left(h_{i}\right)$
where $M_{i}$ is the max percentage change for the $i$ th factor and
- $w_{0}=20\%$, $h_{0}=$ fee change that maximizes LP income (fees $+ L$), produced via agent-based simulation
- $w_{1}=20\%$, $h_{1}=$ fee bias towards estimated LP breakeven ROI (rewards + fees $+ L$) through liquidity adjustments from the V1 to V2 migration
- $w_{2}=10\%$, $h_{2}=$ fee change based on 30D realized volatility
- $w_{3}=10\%$, $h_{3}=$ fee correction based on 30D impermanent loss
- $w_{4}=10\%$, $h_{4}=$ fee adjustment based on organic volume share
- $w_{5}=10\%$, $h_{5}=$ fee change based on Balancer utilization relative to Uniswap and Sushiswap
- $w_{6}=10\%$, $h_{6}=$ fee change based on Balancer liquidity relative to circulating token supply
- $w_{7}=10\%$, $h_{7}=$ fee change based on expected changes in liquidity mining rewards
and the model output is $f+\Delta f$
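The fee update can be sketched as follows, assuming the weights $w_i$, max changes $M_i$, and raw factor signals $h_i$ arrive as parallel sequences (the function names here are illustrative, not from the article):

```python
import math
from typing import Sequence

def fee_delta(weights: Sequence[float], max_changes: Sequence[float],
              signals: Sequence[float]) -> float:
    """Delta f = sum_i w_i * g_i, where g_i = M_i * tanh(h_i).

    tanh bounds each factor's contribution to the range [-M_i, M_i],
    so no single signal can move the fee by more than w_i * M_i.
    """
    return sum(w * m * math.tanh(h)
               for w, m, h in zip(weights, max_changes, signals))

def updated_fee(f: float, weights: Sequence[float],
                max_changes: Sequence[float],
                signals: Sequence[float]) -> float:
    """Model output: f + Delta f."""
    return f + fee_delta(weights, max_changes, signals)
```

A very large positive signal saturates $\tanh$ at 1, so that factor contributes exactly its cap $w_i M_i$.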
### Chain Reorgs
We need to detect reorgs. Each time we query for a block, we should have a reference for what we expect the parent hash to be. A reorg is detected when the retrieved block's parent hash does not match the expected parent hash.
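That check is a single hash comparison; this sketch assumes a hypothetical `Block` record carrying `hash` and `parent_hash` fields:

```python
from typing import NamedTuple

class Block(NamedTuple):
    number: int
    hash: str
    parent_hash: str

def detect_reorg(new_block: Block, last_known: Block) -> bool:
    """A reorg is detected when the retrieved block's parent hash
    does not match the hash of the block we previously stored."""
    return new_block.parent_hash != last_known.hash
```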
Next we need to add some metadata to how the Exfiltrator sends data to the Loader. Instead of passing raw blocks we should do something like this:
```python
from typing import NamedTuple, Tuple

class ChainSegment(NamedTuple):
    blocks: Tuple[Block, ...]
    is_reorg: bool = False
```
For the normal case, the exfiltrator would transmit
```python
ChainSegment(blocks=(next_block,), is_reorg=False)
```
When a reorg is encountered, the exfiltrator traces backwards up the `parent_hash` links until it encounters a previously known block, then transmits `ChainSegment(blocks=new_chain_segment, is_reorg=True)`. We can bound reorg detection to a fixed maximum window and error out if tracing backwards up the chain exceeds this limit.
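The trace-back might look like this sketch; `Block`, `fetch_block`, and `MAX_REORG_DEPTH` are assumed names, not part of any existing interface:

```python
from typing import Callable, List, NamedTuple, Set

class Block(NamedTuple):
    hash: str
    parent_hash: str

# Assumed bound on how far back we are willing to trace a reorg.
MAX_REORG_DEPTH = 64

def trace_reorg(tip: Block,
                fetch_block: Callable[[str], Block],
                known_hashes: Set[str]) -> List[Block]:
    """Walk parent_hash links from the new tip back to a previously
    known block, returning the replacement segment oldest-first.

    Raises if the walk exceeds MAX_REORG_DEPTH, per the bounded
    detection window described above.
    """
    segment = [tip]
    cursor = tip
    for _ in range(MAX_REORG_DEPTH):
        if cursor.parent_hash in known_hashes:
            segment.reverse()  # oldest-first, ready for ChainSegment(blocks=...)
            return segment
        cursor = fetch_block(cursor.parent_hash)
        segment.append(cursor)
    raise RuntimeError("reorg exceeds maximum detection window")
```

Returning the segment oldest-first means the Loader can apply it in order without further sorting.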
### Reconciliation
If this throws an exception, removals may have been announced for blocks that are actually still in history, since throwing results in no history update. We can't catch errors here because there isn't a clear way to recover from them; the failure may be a downstream system telling us that the block removal isn't possible because it is in a bad state. We could try re-announcing the successfully added blocks, but the failed block would still be a problem (should it be re-announced?), and the addition announcements may also fail.
The user receiving this notification won't have any visibility into the updated block history yet. Should we announce new blocks in a `setTimeout`? Should we provide block history with new logs? An announcement failure will unwind the stack and return the original `blockHistory`; if we are in the process of backfilling, we may have already announced previous blocks that won't actually end up in history (they won't get removed if a reorg occurs and may be re-announced). We can't catch errors thrown by the callback because it may be trying to signal to us that the block has become invalid and is un-processable.