# FEL with parametric bootstrap and much better visualization

> Improving method performance for smaller alignments/test sets (HyPhy versions 2.5.33 and later)

FEL (Fixed Effects Likelihood) is a tool that we [originally developed in 2005](https://academic.oup.com/mbe/article/22/5/1208/1066893) to perform a "non-parametric" test of natural selection acting on individual alignment sites. The method estimates, site by site, a pair of evolutionary substitution rates: &alpha; (synonymous substitutions) and &beta; (non-synonymous substitutions), and performs the statistical hypothesis test <tt>is &alpha; = &beta;?</tt>. If the null hypothesis is rejected at some significance level (e.g., p≤0.05), then selection is inferred: negative/purifying if &beta; < &alpha;, and positive/diversifying otherwise.

The significance of the test is derived using standard asymptotics, which [work well if the sample size is large enough](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0094534). In this case, the sample size is roughly the number of tested branches, which can be small (even ONE!). Motivated in part by our own analysis of small samples (~20 sequences) of canine and feline coronaviruses, we modified FEL to use a parametric bootstrap at each site to obtain significance. This is, of course, much more expensive (by a factor of roughly K, where K is the number of bootstrap replicates), but it should yield a more accurate characterization of the null distribution of the test statistic and better detection of non-neutral evolution.

Another context in which FEL is often used is the estimation of site-by-site dN/dS (&omega;). These estimates are quite noisy, and we generally do not recommend using them directly. But when coupled with some degree of uncertainty assessment, the estimates become more useful. To make this possible, we added an option to compute profile likelihood confidence intervals for each site.

These options are available via command line arguments (`--resample N`, where `N` is the number of bootstrap replicates to draw, and `--ci Yes` to compute confidence intervals) and via the updated www.datamonkey.org submission page; see the example invocation at the end of this post.

Finally, we completely reworked the FEL visualization page (http://vision.hyphy.org/FEL), now based on the ObservableHQ framework, with visualization design targeted to the method.

### Site-by-site estimates of dN/dS with uncertainty quantification

> Maximum likelihood estimates of dN/dS at each site, together with estimated profile confidence intervals (if available). dN/dS = 1 (neutrality) is depicted as a horizontal gray line.

![](https://i.imgur.com/Hbg0n9Y.png)

### Site-by-site estimates of individual rates

> Maximum likelihood estimates of synonymous (&alpha;) and non-synonymous (&beta;) rates at each site, shown as bars. The line shows the estimates under the null model (&alpha;=&beta;). Estimates above 10 are censored at this value.

![](https://i.imgur.com/0cHMLO7.png)

### Comparing asymptotic and bootstrapped p-values

> Compare the level of agreement and identify which sites differ in classification (crosses).

![](https://i.imgur.com/mT8D3wx.png)

### Identify the degree of agreement between asymptotic and bootstrap p-value distributions

> Here's an example of good and poor agreement, depending on the site.

![](https://i.imgur.com/1DDympc.png)
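
For reference, here is a minimal command-line sketch of the new options. This is a hedged example, not an official recipe: the alignment and tree file names are placeholders, and `--alignment`/`--tree` are assumed to be the usual HyPhy arguments for supplying input data; only `--resample` and `--ci` are the options introduced above.

```bash
# A sketch (placeholder file names); requires HyPhy 2.5.33 or later.
# --resample 100 : draw 100 parametric bootstrap replicates per site to obtain p-values
# --ci Yes       : compute profile-likelihood confidence intervals for site-level dN/dS
hyphy fel --alignment my_alignment.fasta --tree my_tree.nwk --resample 100 --ci Yes
```

More replicates give a smoother estimate of the null distribution at a proportional increase in run time, so the replicate count is a direct speed/accuracy trade-off.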