# Computation in Superposition: Math Overview
"Toward a Mathematical Framework for Computation in Superposition" (TMF) [[link](https://www.lesswrong.com/posts/2roZtSr5TGmLjXMnT/#1_4_ANDs_with_many_inputs__computation_of_small_boolean_circuits_in_a_single_layer)] provides a proof of concept for how superposition can allow parallel on many features in parallel. As a mathematical description, it serves as a starting place for understanding more complex types of superposition computation.
Here we give a condensed version of the points made in the post.
## Computing ANDs in superposition

* **Features:** Say we have a set of $m$ sparse, boolean features (eg, a set of topics that are either present or not in a text).
* Denote features by $f_{\alpha}$. Let $\ell \ll m$ be the number of features "on" at the same time.
* **Neurons:** In the MLP layer, $d$ neurons each take a random subset of the features as inputs. A neuron is nonzero only if at least two of its input features are "on".
* $z_i = ReLU( \sum_{\alpha} W_{i\alpha} f_{\alpha}- 1)$ where $W_{i\alpha}$ is 1 with probability $p$ and 0 otherwise.
* Each neuron's inputs define a set $S_i$ of features.
* $z_i = 1$ if exactly two features in $S_i$ are "on". More generally, $z_i = k-1$ if $k \geq 2$ features in $S_i$ are "on", and $z_i = 0$ otherwise.
* **Read off ANDs:** To linearly read off the AND operation for two features, we take the average over neurons that have both features as inputs.
* AND estimate $\widehat{f_{\alpha} \land f_{\beta}} = \displaystyle \mathop{\mathbb{E}}_{f_{\alpha}, f_{\beta} \in S_i}[z_i] = \sum_i z_i \frac{W_{i\alpha}W_{i\beta}}{\sum_j W_{j\alpha}W_{j\beta}}$
* We expect the $z_i$ included in the sum to typically be $1$ when $f_{\alpha}$ and $f_{\beta}$ are "on" and $0$ otherwise, giving the correct AND behavior. Rare events when other "on" features are included in the sums add noise to these values.
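A minimal numpy sketch of the construction above; the parameter values and the helper name `read_and` are illustrative choices, not from the post.

```python
# Minimal sketch of the U-AND construction above. Parameter values and the
# helper name `read_and` are illustrative choices, not from the post.
import numpy as np

rng = np.random.default_rng(0)
m, d, p, ell = 500, 10_000, 0.06, 3   # features, neurons, connection prob., # active

# W[i, a] = 1 with probability p: neuron i takes feature a as an input.
W = (rng.random((d, m)) < p).astype(np.float32)

# Sparse boolean input: exactly ell features are "on".
on = rng.choice(m, size=ell, replace=False)
f = np.zeros(m, dtype=np.float32)
f[on] = 1.0

# z_i = ReLU(sum_a W_ia f_a - 1): nonzero only if >= 2 connected features are "on".
z = np.maximum(W @ f - 1.0, 0.0)

def read_and(alpha, beta):
    """Estimate f_alpha AND f_beta: average z over neurons with both as inputs."""
    mask = W[:, alpha] * W[:, beta]
    return float((z * mask).sum() / mask.sum())

off = np.setdiff1d(np.arange(m), on)[0]
print(read_and(on[0], on[1]))  # both "on": expect ~1 + p*(ell - 2)
print(read_and(on[0], off))    # one "off": expect ~p*ell noise
```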
* **Interference:** Noisy contributions to the read-off from other neurons being "on" are negligible if the features are sufficiently sparse and the random subsets of feature inputs are small enough.
* Overall, the noise contributions are small if the expected number of nonzero noise terms in a neuron's sum over features is small. Since $\ell$ features are "on", each contributing to the sum with probability $p$, this requires $p\ell \ll 1$.
* Consider a neuron $z_i$ with both $f_{\alpha}$ and $f_{\beta}$ as inputs.
* If both features are "on", then the neuron is in the linear piece of the ReLU, so $z_i = 1 + \sum_{\gamma\neq \alpha,\beta} W_{i\gamma}f_{\gamma}= 1 + \Delta$. There are $\ell-2$ other features that are "on" and a probability of $p$ of each feature contributing to the sum. So $E[\Delta]$ contributes $p(\ell -2)$ to $\widehat{f_{\alpha} \land f_{\beta}}$. This is only small if $p \ell \ll 1$, ie the probability of extra features being "on" and connected to the neuron is small.
* If only one feature is "on", then a similar calculation shows the noise contribution is also $\sim p \ell$.
* If both features are "off", then there is only noise when two other features in the sum are "on". This is more rare, so the noise contribution is smaller, $\sim (p\ell)^2$ if $p\ell \ll 1$.
* **Universal AND**: It's possible to read off all pairwise ANDs with a layer size that is only a power of $\log m$.
* To ensure a high probability that every pair of features can be read off, we need $p > \sqrt{\frac{2\log m}{d}}$.
* A given pair of features will be inputs into $dp^2$ neurons on average. The probability of **not** appearing together in any neuron is $\exp(-dp^2)$, from the Poisson distribution.
* Across $\binom{m}{2}$ pairs, the probability of **all** pairs being in at least one neuron's inputs is $(1-\exp(-dp^2))^{\binom{m}{2}} \approx 1 - \binom{m}{2}\exp(-dp^2)$. For the second term to be small, we need $p > \sqrt{\frac{2\log m}{d}}$.
* The post makes the particular choice $p = \log^2 m/\sqrt{d}$. Then, each estimate $\widehat{f_{\alpha} \land f_{\beta}}$ is an average over $\log^4 m$ terms.
* If $\ell$ is also a power of $\log m$, then $d$ can be a power of $\log m$ while still being large enough to keep the interference terms small.
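A quick numerical check of this scaling; natural logs are assumed, and the values of $m$ and $d$ are arbitrary examples, not from the post.

```python
# Numerical check of the Universal-AND scaling above (natural logs assumed;
# the values of m and d are arbitrary examples, not from the post).
import numpy as np

m, d = 10_000, 100_000
p_min = np.sqrt(2 * np.log(m) / d)     # threshold to cover all pairs w.h.p.
p = np.log(m) ** 2 / np.sqrt(d)        # the post's choice p = log^2(m) / sqrt(d)
terms_per_and = d * p ** 2             # = log(m)^4 neurons averaged per AND estimate
miss_bound = (m * (m - 1) / 2) * np.exp(-d * p ** 2)  # union bound on some pair being missed

print(p_min, p, terms_per_and, miss_bound)
```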
* **Realistic regimes:**
* Based on values from Anthropic's "Towards Monosemanticity", it wouldn't be possible to use the U-AND construction to compute ANDs for all pairs of features.
* $m \sim 4,000$ (range 500-130K), technically these are features found in the neuron activations.
* $\ell \sim 10-20$.
* $d\sim 500$
* Constraints on $p$:
* $p \ll 1/\ell = 0.05-0.1$
* $p > \sqrt{\frac{2\log m}{d}} = 0.18$
* If we take $p = 0.03$, then we would only be able to extract $1 - \exp(-dp^2) = 36\%$ of all ANDs.
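The numbers in these bullets can be reproduced directly; a back-of-envelope check using the rough values quoted above.

```python
# Back-of-envelope check of the numbers quoted above, using the rough
# "Towards Monosemanticity" values m ~ 4,000 and d ~ 500.
import numpy as np

m, d = 4_000, 500
print(np.sqrt(2 * np.log(m) / d))  # ~0.18: p needed to cover all pairs
p = 0.03                           # an example value below the 1/ell ~ 0.05-0.1 constraint
print(1 - np.exp(-d * p ** 2))     # ~0.36: fraction of pairs readable at this p
```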
## Extensions
* **When inputs are in superposition:** The construction also works if the inputs are already in superposition. However, in this case the number of ANDs that can be computed is of order the number of parameters in the layer, $O(d_{in} d)$.
* Let the $m$ sparse, boolean features be an "overbasis" for a $d_{in}$-dim space. So $\vec{f_1},\dots, \vec{f_m} \in \mathbb{R}^{d_{in}}$. The input is a sum of $\ell$ feature vectors.
* Decompose the neuron's weight matrix into $W = \hat{W}F$, where $F_{\alpha i} = (\vec{f_\alpha})_i$ ($m \times d_{in}$ dims) puts the input back into the feature basis via a dot product. $\hat{W}$ is the same as in the U-AND construction above.
* After applying $F$, the features are either "on" with value 1 or "off" with a small random value of standard deviation $1/\sqrt{d_{in}}$.
* Unlike before, the interference terms now add up for "off" features. If a neuron connects to $\sim d_{in}$ features via $\hat{W}$, then the interference terms are order 1 and cause "misfires". Avoiding this requires $p < \frac{d_{in}}{m}$.
* For large $m$, we cannot compute all pairwise ANDs, since $p$ would be too small for every pair of features to appear together in some neuron's inputs.
* Instead, we have to choose $\sim d_{in}d$ ANDs to compute. The construction is essentially the same, restricted to the features involved in the chosen ANDs.
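A minimal sketch of this variant, under the assumption that the overbasis consists of random unit vectors; the dimensions, parameter values, and helper names are illustrative, not from the post.

```python
# Sketch of the inputs-in-superposition variant above. The random unit
# feature directions, parameter values, and helper names are illustrative
# assumptions, not from the post.
import numpy as np

rng = np.random.default_rng(0)
m, d_in, d, p, ell = 500, 300, 20_000, 0.03, 3

# Overbasis: m random (nearly orthogonal) unit feature directions in R^{d_in}.
F = rng.normal(size=(m, d_in)).astype(np.float32)
F /= np.linalg.norm(F, axis=1, keepdims=True)

W_hat = (rng.random((d, m)) < p).astype(np.float32)  # boolean U-AND weights as before
W = W_hat @ F                                        # full weight matrix W = W_hat F

on = rng.choice(m, size=ell, replace=False)
x = F[on].sum(axis=0)                                # input: sum of ell feature vectors

z = np.maximum(W @ x - 1.0, 0.0)                     # same neuron nonlinearity as before

def read_and(alpha, beta):
    mask = W_hat[:, alpha] * W_hat[:, beta]
    return float((z * mask).sum() / mask.sum())

off = np.setdiff1d(np.arange(m), on)[0]
# Both "on": ~1. One "off": small, but noisier than the boolean-input case,
# since interference from the ~p*m connected "off" directions accumulates.
print(read_and(on[0], on[1]), read_and(on[0], off))
```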
* **Random Gaussian weights for inputs not in superposition**:
* A layer with random Gaussian weights would have neuron activations $z_i = ReLU(\sum_\alpha W_{i\alpha} f_\alpha - b_i)$ for boolean feature values $f_\alpha$.
* We need read-off weights $r_i$ that are correlated with $z_i$ when the two features $f_\alpha$ and $f_\beta$ are both active, but uncorrelated otherwise.
* To read off $f_{\alpha} \land f_{\beta}$, we can use weights $$r_i = \frac{W_{i \alpha}W_{i\beta}}{E[\sum_j W_{j\alpha}W_{j\beta}z_j |\ \alpha, \beta \text{ active}]},$$ where the expectation is over inputs. Because of the expectation, the read-off weights would need to be learned.
* If both features are active, then by design we'd expect $E[\sum_i r_i z_i] = 1$.
* If one or zero features are active, then $E[\sum_i r_i z_i] = 0$ since either $W_{i\alpha}$ or $W_{i\beta}$ is uncorrelated with $z_i$.
* In general, the noise in the numerator sum ($\sum_i z_i W_{i\alpha}W_{i\beta}$) should scale as $\sqrt{\ell \ k}\sigma_W^3$ for $\ell$ active features and $k$ nonzero neuron activations, $z_i$. The denominator goes as $k \sigma_W^3$, so the readoff noise is $\sim \sqrt{\ell/k}$.
* If the bias is learned, then its value is likely determined by minimizing the noise term $E[-b_i + y \ | \ z_i = x - b_i + y > 0]$, where $x, y$ are random Gaussian variables with mean zero and standard deviations $\sigma_W \sqrt{2}$ and $\sigma_W \sqrt{\ell-2}$ that represent the sums over the two AND features and the other active features, respectively.
* It's possible to also read off $f_{\alpha} \land f_{\beta}$ using the weights $$r_i = \frac{W_{i\alpha}+W_{i\beta}}{E[\sum_j(W_{j\alpha}+W_{j\beta})z_j |\ \alpha, \beta \text{ active}]}.$$
* This approach has the advantage of a higher correlation between $r_i$ and $z_i$, but the disadvantage that the read-off is nonzero when only one feature is active, going something like $$E[\sum_i r_i z_i] \approx \frac{E[\sum_i (W_{i\alpha})^2 z_i]}{E[\sum_i(W_{i\alpha}+W_{i\beta})^2 z_i]}\approx \frac{k \sigma_W^2}{k(2\sigma_W)^2} \approx \frac{1}{4}$$.
* This approach is perhaps favorable when $d$ is not large, so that the noise in the first approach would be large.
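A rough sketch of the first read-off scheme above. The bias choice, sample count, and parameter values are assumptions for illustration, and the normalizing expectation is estimated by Monte Carlo, standing in for read-off weights that would in practice be learned.

```python
# Rough sketch of the Gaussian-weight read-off above. The bias choice, sample
# count, and parameter values are assumptions for illustration; the expectation
# in the denominator is estimated by Monte Carlo, standing in for read-off
# weights that would in practice be learned.
import numpy as np

rng = np.random.default_rng(0)
m, d, ell, sigma_w = 200, 4_000, 4, 1.0

W = rng.normal(scale=sigma_w, size=(d, m))
b = sigma_w * np.sqrt(ell)   # crude bias so that most neurons stay off (assumption)

def activations(active):
    f = np.zeros(m)
    f[active] = 1.0
    return np.maximum(W @ f - b, 0.0)

alpha, beta = 0, 1

# Estimate E[sum_i W_ia W_ib z_i | alpha, beta active] over random sparse inputs.
samples = []
for _ in range(500):
    rest = rng.choice(np.arange(2, m), size=ell - 2, replace=False)
    z = activations(np.concatenate(([alpha, beta], rest)))
    samples.append(np.sum(W[:, alpha] * W[:, beta] * z))
r = W[:, alpha] * W[:, beta] / np.mean(samples)  # read-off weights for alpha AND beta

# Both active -> read-off ~1; only alpha active -> ~0, up to ~sqrt(ell/k) noise.
rest = rng.choice(np.arange(2, m), size=ell - 2, replace=False)
z_both = activations(np.concatenate(([alpha, beta], rest)))
rest = rng.choice(np.arange(2, m), size=ell - 1, replace=False)
z_one = activations(np.concatenate(([alpha], rest)))
print(r @ z_both, r @ z_one)
```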
* **Stacking U-AND layers:** It might be possible to stack multiple AND layers in a row to start computing more complex boolean functions. If interference is not carefully managed, errors are likely to propagate.
* **ANDs of three or more features:** The same type of construction can be used to compute ANDs up to order $k$ (e.g., $f_1 \land f_2 \land \dots \land f_k$) simultaneously.
* If a neuron's inputs contain three features, then its value is 2 if all three are "on" and 1 if exactly two are "on".
* An AND of three features (say $f_1$, $f_2$, $f_3$) can be computed by adding or subtracting averages over neurons with two or three of the features:
* $\widehat{f_1 \land f_2 \land f_3} = \displaystyle\mathop{\mathbb{E}}_{1,2\in S_i}[z_i] + \mathop{\mathbb{E}}_{1,3\in S_i}[z_i] + \mathop{\mathbb{E}}_{2,3\in S_i}[z_i] - \mathop{\mathbb{E}}_{1,2,3 \in S_i}[z_i]$.
* Similar constructions work for higher-order ANDs.
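A sketch of the order-3 read-off using the formula above, reusing the boolean U-AND construction; the parameters and helper names are illustrative, and the last term requires $dp^3 \gtrsim 1$ so that some neurons contain all three features.

```python
# Sketch of the order-3 read-off above, reusing the boolean U-AND construction.
# Parameters and helper names are illustrative; d*p^3 must be >~ 1 so that some
# neurons contain all three features at once.
import numpy as np

rng = np.random.default_rng(0)
m, d, p = 300, 50_000, 0.06

W = (rng.random((d, m)) < p).astype(np.float32)

def avg_over(z, feats):
    """Average z_i over neurons whose input set contains all of `feats`."""
    mask = W[:, feats].prod(axis=1)
    return float((z * mask).sum() / mask.sum())

def and3(z, a, b, c):
    return (avg_over(z, [a, b]) + avg_over(z, [a, c]) + avg_over(z, [b, c])
            - avg_over(z, [a, b, c]))

for on in ([0, 1, 2], [0, 1, 7], [0, 7, 8]):  # which features are "on"
    f = np.zeros(m, dtype=np.float32)
    f[on] = 1.0
    z = np.maximum(W @ f - 1.0, 0.0)
    print(on, and3(z, 0, 1, 2))  # noticeably ~1 only when 0, 1, 2 are all "on"
```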