# Small Fields in Plonky3
[Plonky3](https://github.com/Plonky3/Plonky3) is a general-purpose toolkit for implementing polynomial IOPs. In it we find implementations of several different finite fields. While these fields look superficially similar, the choice of finite field can make a major difference to proving times, so it's essential to choose the right field for a given application. The goal of this note is to give a quick rundown of the different options and their advantages and disadvantages, along with some concrete timing data.
Currently, Plonky3 contains `5` finite fields:
- One large field `Bn254`,
- One `64` bit field `Goldilocks`,
- Three `31` bit fields `BabyBear, KoalaBear` and `Mersenne31`.
Whilst `Bn254` and `Goldilocks` are useful in some circumstances, Plonky3 is mostly concerned with its `31`-bit fields. These are the fields for which the surrounding code has been optimised, and proofs using these fields are noticeably faster. Due to this, these are the fields we focus on here.
# 31 Bit Fields
One of the main lessons from the development of STARKs over the last few years has been that STARKs over smaller fields produce smaller and faster proofs. While it is possible to take this to its natural mathematical limit, the field of `2` elements (see [Binius](https://eprint.iacr.org/2023/1784)), there are also a few reasons to stop at a field which fits nicely into `32` bits.
The most natural reason is that modern CPUs contain a lot of support for `32`-bit integer operations, which can be leveraged for relatively cheap field operations. Indeed, when we look at Intel SIMD instructions, we find that some operations (such as [widening multiplication](https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=mm256_mul_ep)) exist only for `32`-bit integers.
Another reason is that smaller primes come with drawbacks of their own. When verifying operations such as integer addition/multiplication (say, using the [Casting out Primes](https://eprint.iacr.org/2022/1470) method), the cost increases dramatically as the size of the prime decreases.
In practice we find that there are a couple of additional advantages possessed by `31`-bit primes when compared to their `32`-bit cousins. In particular, as the sum of two `31`-bit integers always fits in `32` bits, we avoid having to handle overflow when performing SIMD addition.
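The overflow point can be made concrete with a small sketch: for a `31`-bit prime, a modular addition needs only one plain add and one conditional subtraction within a `32`-bit lane. This is illustrative Python, not Plonky3's actual SIMD code.

```python
P = 2**31 - 2**27 + 1  # any 31-bit prime works the same way; BabyBear shown


def add_mod(a, b):
    """Modular addition of two field elements a, b < P < 2^31."""
    s = a + b                     # s < 2^32: cannot overflow a 32-bit lane
    return s - P if s >= P else s # single conditional subtraction
```

With a `32`-bit prime, `a + b` could exceed `2^32` and the lane would wrap, forcing extra carry handling.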
There are two different implementations of `31` bit fields within Plonky3. The `MontyField31` struct is a generic implementation which produces prime fields over any `31`-bit prime. There is also a specialized implementation `Mersenne31` for the prime $p = 2^{31} - 1$ which takes advantage of its special structure.
## Monty-31
In the `MontyField31` struct, elements are saved in [Montgomery form](https://en.wikipedia.org/wiki/Montgomery_modular_multiplication) allowing for a more efficient reduction algorithm for multiplication.
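The reduction idea can be sketched in a few lines. This is a minimal illustration of Montgomery reduction for a `31`-bit prime with `R = 2^32`; `MontyField31` is built on the same idea, but the code below is not its actual implementation.

```python
P = 2**31 - 2**27 + 1       # the BabyBear prime (any 31-bit prime works)
R = 2**32
NEG_P_INV = pow(-P, -1, R)  # precomputed constant: -P^{-1} mod R


def monty_reduce(x):
    """For 0 <= x < P * R, return x * R^{-1} mod P without a division by P."""
    t = (x * NEG_P_INV) % R  # chosen so that x + t*P ≡ 0 (mod R)
    u = (x + t * P) // R     # exact division: the low 32 bits cancel
    return u - P if u >= P else u


def monty_mul(a, b):
    """Multiply two elements stored in Montgomery form (a = x*R mod P)."""
    return monty_reduce(a * b)
```

Elements enter Montgomery form via `x * R mod P` and leave it via `monty_reduce`; the payoff is that the only expensive steps are multiplications and shifts, with no division by `P`.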
The speed of addition/multiplication in these fields is identical regardless of the prime, which gives leeway to optimise the choice of prime. Due to this, Plonky3 contains two different Monty-31 fields, `BabyBear` and `KoalaBear`, which solve slightly different optimisation problems.
### BabyBear
One important feature to keep track of in a prime field is its two-adicity: the largest `n` such that `2^n` divides `p - 1`. This gives a bound on the maximum trace length for proofs over the field. The `BabyBear` prime, defined to be $$p = 2^{31} - 2^{27} + 1,$$ was introduced by [RISC Zero](https://dev.risczero.com/proof-system-in-detail.pdf) and has the maximal two-adicity (`27`) of all primes of `31` bits and below. Due to this, the `BabyBear` prime has been the standard `31`-bit prime for the last couple of years.
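The two-adicity figures quoted in this note are easy to check directly; here is a quick sketch:

```python
def two_adicity(n):
    """Return the largest v such that 2^v divides n."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v


baby_bear = 2**31 - 2**27 + 1
print(two_adicity(baby_bear - 1))  # → 27, since p - 1 = 15 * 2^27
```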
### KoalaBear
The `KoalaBear` prime comes from solving a slightly more complicated optimisation problem involving arithmetic hash functions.
An arithmetic hash function (E.g. [Poseidon2](https://eprint.iacr.org/2023/323) or [Rescue](https://eprint.iacr.org/2019/426)) is a hash function which aims to mix data using finite field operations. This makes STARKs for these hash functions much smaller than STARKs for more standard hash functions (E.g. [KECCAK](https://keccak.team/index.html) or [Blake3](https://github.com/BLAKE3-team/BLAKE3-specs/blob/master/blake3.pdf)).
One requirement of arithmetic hash functions is a choice of integer `1 < d < p - 1` relatively prime to `p - 1`. Given such a `d`, the map $x \to x^d$ is a permutation of $\mathbb{F}_p$. In general, minimising `d` both speeds up the hash function and shrinks the STARK which proves the hash's correctness, without compromising security[^1].
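To see why $x \to x^d$ is a permutation when `gcd(d, p - 1) = 1`: `d` is then invertible modulo `p - 1`, and the inverse exponent undoes the map. A toy check (the prime and sample value are illustrative):

```python
p = 2**31 - 2**24 + 1   # the KoalaBear prime, where d = 3 is coprime to p - 1
d = 3
e = pow(d, -1, p - 1)   # inverse exponent; exists exactly when gcd(d, p-1) = 1

x = 123456789
y = pow(x, d, p)        # apply the power map x -> x^d
assert pow(y, e, p) == x  # the inverse exponent recovers x, so the map is a bijection
```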
One small disadvantage of the `BabyBear` prime is that, $$2^{31} - 2^{27} + 1 = 15 \times 2^{27} + 1$$ and so the smallest available choice for `d` will be `7`. Due to this, [in Plonky3 we introduce](https://github.com/Plonky3/Plonky3/pull/329) the `KoalaBear` prime defined by $$p = 2^{31} - 2^{24} + 1 = 127 \times 2^{24} + 1.$$ At the cost of a slightly lower two-adicity `24`, this allows us to pick `d = 3`.
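The smallest valid `d` for each prime follows directly from the factorisations above, and can be confirmed with a short search:

```python
from math import gcd


def smallest_d(p):
    """Smallest exponent 1 < d < p - 1 with gcd(d, p - 1) = 1."""
    d = 2
    while gcd(d, p - 1) != 1:
        d += 1
    return d


print(smallest_d(2**31 - 2**27 + 1))  # → 7  (BabyBear:  p - 1 = 15 * 2^27)
print(smallest_d(2**31 - 2**24 + 1))  # → 3  (KoalaBear: p - 1 = 127 * 2^24)
```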
It's interesting to note that the `KoalaBear` prime is not the unique solution to this particular optimisation problem. Two other options worth investigating are $2^{31} - 2^{30} + 2^{27} + 2^{24} + 1$ and $2^{29} - 2^{26} + 1$. Both allow for `d = 3`, and the second even has higher two-adicity (`26`), though with the notable drawback of being `2` bits (a factor of `4`) smaller.
## Mersenne31
Finally we come to the more unusual choice of $p = 2^{31} - 1$. The advantage of this field is that modular reductions can be implemented using shifts, as $$2^{31} \equiv 1 \mod p.$$ This gives SIMD multiplication noticeably lower latency and higher throughput. However, the two-adicity of this field is the worst possible value, `1`, which makes the standard method of constructing a univariate STARK impossible. Luckily for us, the recent [Circle STARK](https://eprint.iacr.org/2024/278) work provides a way around this by using the circle subgroup of $\mathbb{F}_{p^2}$ as the underlying FFT group instead of the standard multiplicative subgroup.
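The shift-based reduction can be sketched as follows; this is illustrative Python showing the folding trick, not Plonky3's SIMD implementation.

```python
P = 2**31 - 1  # the Mersenne31 prime


def reduce_m31(x):
    """Reduce x < 2^62 (e.g. a product of two field elements) mod P,
    using 2^31 ≡ 1 (mod P) to fold the high bits onto the low bits."""
    x = (x & P) + (x >> 31)       # after this fold, x < 2^32
    x = (x & P) + (x >> 31)       # after this fold, x <= 2^31
    return x - P if x >= P else x # final conditional subtraction
```

Since products of two `31`-bit elements are below $2^{62}$, two shift-and-add folds plus one conditional subtraction give a full reduction, with no multiplications at all; this is where the latency/throughput advantage comes from.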
Unfortunately, as with `BabyBear`, `p - 1` is divisible by `3`. Hence the smallest available power map for arithmetic hash functions is $x \to x^5$: a little better than `BabyBear` but a little worse than `KoalaBear`.
# Comparisons
There are a variety of ways to compare these three field options[^2]. The simplest approach is to compare the cost of the basic field operations: addition and multiplication. These costs depend on the architecture, as Plonky3 makes use of SIMD instructions to speed up operations. We find that the speed of addition is the same for all fields but, as mentioned earlier, `Mersenne31` has faster multiplication. We also report multiple latencies in cases where they differ for the left- and right-hand operands. These tables [previously appeared](https://eprint.iacr.org/2023/824), though the numbers have improved slightly in the interim.
#### Addition: Throughput and Latency
| | Throughput (ele/cyc) | Latency (cyc) |
| ------------ | ---------- | -------- |
| Neon | 5.33 | 6 |
| AVX-2 | 8 | 3 |
| AVX-512 | 10.67 | 3 |
#### Multiplication: Throughput and Latency
| | Mersenne31: Throughput (ele/cyc) | Mersenne31: Latency (cyc) | MontyField31: Throughput (ele/cyc) | MontyField31: Latency (cyc) |
|-|-|-|-|-|
| Neon | 3.2 | 10 |2.29|11, 14|
| AVX-2 | 2 | 13 |1.71|21|
| AVX-512 | 2.91 | 15, 14 |2.46|21|
Outside of the basic operations, the majority of proving time is taken up by Discrete Fourier Transforms (DFTs) and hash functions.
### DFTs
Long term, we expect all the DFTs to end up at similar speeds. Currently, however, the standard DFT is faster (particularly in a parallelized setting) than the circle DFT used for the Circle STARK. Due to this, proofs involving `Mersenne31` are currently a little slower.
### Hash Functions
There are two different cases here. When we work with a hash function like `Keccak` the field choice is immaterial. However, it matters a lot for arithmetic hash functions like `Poseidon2`. As mentioned earlier, these rely on a choice of `d` such that $x \to x^d$ is a permutation on field elements. For our fields, the optimal choices are:
$$
\begin{align*}
\text{KoalaBear} : \quad &x \to x^3,
\\ \text{Mersenne31}: \quad & x \to x^5,
\\ \text{BabyBear}: \quad & x \to x^7.
\end{align*}
$$ Using these, we find that the speed of the arithmetic hash `Poseidon2` is roughly identical for `KoalaBear` and `Mersenne31` but slower for `BabyBear`.
#### Poseidon2 Timings[^3]
| Field |AVX-2: WIDTH 16 | AVX-2: WIDTH 24 |AVX-512: WIDTH 16 | AVX-512: WIDTH 24 |
|-|-|-|-|-|
| Mersenne31 | 0.71μs | 1.3μs | 1μs | 1.7μs |
| KoalaBear | 0.78μs | 1.3μs | 1.1μs | 1.7μs |
| BabyBear | 1μs | 1.7μs | 1.3μs | 2.3μs |
Additionally, `KoalaBear` has a major advantage when it comes to proving that an arithmetic hash is correct. As the constraint degree in the Plonky3 system is `3`, verifying an operation of the form $z = x^5$ or $z = x^7$ requires saving an intermediate element $y = x^3$. We then verify that $y = x^3$ and either $z = x^2 \times y$ or $z = x \times y^2$ depending on the case. This makes these operations twice as expensive to verify compared to $z = x^3$. Hence the trace for arithmetic hash proofs is roughly `50%` smaller when the base field is `KoalaBear`.
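The decomposition above can be written out concretely. This is a toy check in Python (the field and sample values are illustrative), showing the two degree-`3` constraints that replace the single degree-`7` one:

```python
P = 2**31 - 2**27 + 1  # BabyBear, where the power map is x -> x^7


def check_x7(x, y, z):
    """Verify z = x^7 using two constraints of degree at most 3."""
    c1 = (y - x * x * x) % P  # degree-3 constraint: y = x^3
    c2 = (z - x * y * y) % P  # degree-3 constraint: z = x * y^2 = x^7
    return c1 == 0 and c2 == 0


x = 12345
y = pow(x, 3, P)  # the saved intermediate element
z = pow(x, 7, P)
assert check_x7(x, y, z)
```

For `KoalaBear`, the single constraint $z = x^3$ suffices, with no intermediate column at all.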
### End to End Proofs
Whilst the previous discussion gives a good intuition for what we should expect, fundamentally what matters are concrete benchmarks. Hence, in the following table, I give the current proving times[^4] for a collection of different proof statements and Merkle hashes. These will likely improve as we continue to optimise Plonky3, so I'll update this table over time. These numbers are current as of `11-Dec-2024`.
|Proof Statement|Merkle Hash|Field Used|Trace Dimensions|Time|
|-|-|-|-|-|
|$2^{19}$ Native Poseidon2 Permutations|Keccak|KoalaBear|$1320\times 65536$|530ms|
|||Mersenne31|$2408\times 65536$|1.9s|
|||BabyBear|$2392\times 65536$|1.17s|
||Poseidon2|KoalaBear|$1320\times 65536$|700ms|
|||Mersenne31|$2408\times 65536$|2.2s|
|||BabyBear|$2392\times 65536$|1.65s|
|$1365$ Keccak Permutations|Keccak|KoalaBear|$2633 \times 32768$|700ms|
|||Mersenne31|$2633 \times 32768$|1.15s|
|||BabyBear|$2633 \times 32768$|700ms|
||Poseidon2|KoalaBear|$2633 \times 32768$|860ms|
|||Mersenne31|$2633 \times 32768$|1.2s|
|||BabyBear|$2633 \times 32768$|940ms|
||Keccak|Goldilocks|$2633 \times 32768$|1.73s|
|$2^{13}$ Blake3 Permutations|Keccak|KoalaBear|$9168 \times 8192$|670ms|
|||Mersenne31|$9168 \times 8192$|950ms|
|||BabyBear|$9168 \times 8192$|670ms|
||Poseidon2|KoalaBear|$9168 \times 8192$|810ms|
|||Mersenne31|$9168 \times 8192$|1s|
|||BabyBear|$9168 \times 8192$|890ms|
Given the above, we can see that the precise choice of field depends on the use case, but for the most part users should gravitate towards `KoalaBear`.
- If you are planning to recursively verify proofs, at some point the key bottleneck will be verifying arithmetic hashes. Hence for these systems `KoalaBear` should be preferred. This is also currently the fastest in all cases.
- If you want the fastest general purpose prime, you might want to gravitate towards `Mersenne31` in the long term. It is currently being held back due to using a less optimised Fast Fourier Transform but we expect it to be slightly faster than `KoalaBear` eventually thanks to faster field operations.
- If you are likely to encounter trace lengths above $2^{24}$ but want to use a more tried and tested (and currently faster) proof system, `BabyBear` is the best option.
[^1]: See [the Poseidon paper](https://eprint.iacr.org/2023/323) or [our implementation](https://github.com/Plonky3/Plonky3/blob/main/poseidon2/src/round_numbers.rs) for more details. Essentially, as `d` changes there are a few parameters which need to be tuned to maintain security but these turn out to have only minor effects on hash speed and the size of the STARK.
[^2]: The comparisons between the different fields will change as Plonky3 continues to be optimised. Hence I will endeavor to update this section over time. The current numbers were taken on December 11 using [PR #576](https://github.com/Plonky3/Plonky3/pull/576).
[^3]: Data from [PR #528](https://github.com/Plonky3/Plonky3/pull/528).
[^4]: All tests were run on my laptop, which has a Raptor Lake Intel Core i9 CPU supporting `AVX-2`. Additionally, all tests were run with the parallel feature enabled and the optimal DFT chosen (either `Radix2DitParallel` or `RecursiveDFT`, depending on the trace dimensions). All `BabyBear` and `KoalaBear` tests were performed using the command line interface introduced in [PR #576](https://github.com/Plonky3/Plonky3/pull/576).