# Induced Inertia due to the Alpha-Beta Filter Parameters

###### tags: `Analysis`, `Phase 2`

:::info
Up to date as of August 2020
:::

*Author: Danilo Lessa Bernardineli (BlockScience)*

## Synopsis

The choice of parameters for the alpha-beta filter can induce chaotic behaviour in the filtered variables through updating inertia, which is associated with lower parameter values. Mitigation is done through larger alpha-beta parameter choices, which can be justified by the fact that simulations with different sampling intervals must use different alpha-beta values; this requirement is captured through a geometric relation.

## Chaotic behaviour due to updating inertia

We performed a simulation of the system model with a 5-day time granularity, the usual parameter choices, and $\alpha = 9.25 \times 10^{-4}$ and $\beta = 2.84 \times 10^{-7}$. Those parameters are the original ones, defined for a per-epoch sampling interval (see the relation derived below).

### Simulation behaviour

With the above parameters, the initial pledge mechanism fails to adjust to network changes in a satisfactory way due to inertia when updating the variables. This induces chaotic behaviour when the locked fraction is near 100%, which causes the TCS mechanism to use negative values.

#### Initial pledge per unit of power

![](https://i.imgur.com/hhPjsh5.png)

#### FIL locked fraction

![](https://i.imgur.com/Q705eHg.png)

#### TCS factor of the total initial pledge

![](https://i.imgur.com/Wt50dYU.png)

## Mitigation

We have found empirically that higher values of $\alpha$ and $\beta$ tend to make the mechanism work closer to the expected behaviour. Specifically, the closer $\alpha$ is to 1.0, the better the system behaves according to expectations.

#### LF with $\alpha=1$ and $\beta=0.004$

![](https://i.imgur.com/EYj5Trc.png)

#### LF for a sweep on $\alpha$

![](https://i.imgur.com/iNkd0ws.png)

### Relation of $\alpha$ and $\beta$ for different sampling intervals

When selecting parameters for a steady-state Kalman filter, there is a non-trivial relation between them and other quantities, such as the noise and the sampling interval (https://en.wikipedia.org/wiki/Alpha_beta_filter#Choice_of_parameters). This generates the hypothesis that different timescales must necessarily use different parameters, and that there is no trivial relation between them.

As a mitigation workaround, it is proposed that one way of connecting different sampling intervals is through the map $\alpha \leftarrow 1-(1-\alpha_0)^{\frac{T}{T_0}}$, where $T_0$ is the sampling interval of a source simulation and $\alpha_0$ is its filter parameter, while $T$ and $\alpha$ are associated with the simulation at a different sampling interval.

The above map has geometric properties that make it attractive as a working relation:

* It is approximately linear for $T$ in the neighborhood of $T_0$.
* The mapped $\alpha$ always falls within the $[0, 1]$ domain.
* It is consistent when applying multiple successive transformations.
* It is consistent with the limits $T \rightarrow 0$ and $T \rightarrow \infty$.

When we apply the above map to the original parameters for a sampling interval of 5 days = 14400 epochs, we obtain the following values:

$\alpha_c \approx 1.0$

$\beta_c \approx 0.004$

which is compatible with the values we had to adopt empirically to make the model match expectations.
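To make the relation concrete, here is a minimal sketch (illustrative Python, not part of the model code; the function name `remap_parameter` is assumed) that applies the map to the original per-epoch parameters for $T = 14400$ epochs and checks the consistency property under successive transformations:

```python
def remap_parameter(param_0: float, T: float, T_0: float = 1.0) -> float:
    """Map a filter parameter defined at sampling interval T_0
    onto a simulation with sampling interval T."""
    return 1 - (1 - param_0) ** (T / T_0)


# Original per-epoch parameters (T_0 = 1 epoch)
alpha_0 = 9.25e-4
beta_0 = 2.84e-7

# Target sampling interval: 5 days = 14,400 epochs
T = 14_400

alpha_c = remap_parameter(alpha_0, T)
beta_c = remap_parameter(beta_0, T)
print(f"alpha_c = {alpha_c:.6f}")  # ~0.999998, i.e. alpha_c ≈ 1.0
print(f"beta_c  = {beta_c:.6f}")   # ~0.004081, i.e. beta_c ≈ 0.004

# Consistency under successive transformations: remapping
# 1 epoch -> 60 epochs -> 14,400 epochs matches the direct remap.
two_step = remap_parameter(remap_parameter(alpha_0, 60), T, T_0=60)
assert abs(two_step - alpha_c) < 1e-9
```

The printed values reproduce the $\alpha_c \approx 1.0$ and $\beta_c \approx 0.004$ quoted above.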