Keccak implementations

Implementations here: https://github.com/Brechtpd/zkevm-circuits/tree/keccak-playground/zkevm-circuits/src/keccak_circuit

We use 1 keccak_f == 24 internal rounds == hashing up to 136 bytes.

Both implementations dynamically create the largest lookup tables the circuit height allows, and then pack as much as possible into each lookup to reduce the number of lookups. This is why specific lookup counts are often not given: they depend heavily on the circuit height.

Bit implementation

The general idea here is that we can do bitwise operations using simple (and efficient) custom gates, but only if we work on bit values. For example, the xor of two bits:

a.expr() + b.expr() - 2.expr() * a.expr() * b.expr()

However, as you can see above, the expression requires a multiplication of the inputs. To make sure we don't blow up the degree of the expressions, we have to be careful not to do too much inside a single custom gate, and use intermediate cell values where necessary. The aim for now is a maximum expression degree of 5.
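As a sanity check, the gate above can be evaluated on plain integers. The sketch below (with a hypothetical xor_expr helper; in the circuit these are halo2 expressions over a prime field) confirms that a + b - 2ab agrees with xor on bit values:

```rust
// Sketch: for a, b in {0, 1}, a + b - 2ab equals a XOR b.
// Plain u64 arithmetic stands in for field arithmetic here.
fn xor_expr(a: u64, b: u64) -> u64 {
    a + b - 2 * a * b
}

fn main() {
    for a in 0..2u64 {
        for b in 0..2u64 {
            assert_eq!(xor_expr(a, b), a ^ b);
        }
    }
}
```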

Internal round implementation

All 25 state words are stored as bits (b array in the circuit). A word is 64 bits so that's 25*64 = 1600 cells.

Theta

First we have to xor 10 bits together. Doing this with just expressions would give an expression of degree 10. Instead we add most of them together and then use a lookup to normalize the result:

c[i][k] = normalize(
    xor::expr(b[pi][0][k].clone(), b[pi][1][k].clone())
    + xor::expr(b[pi][2][k].clone(), b[pi][3][k].clone())
    + xor::expr(b[pi][4][k].clone(), b[ni][0][pk].clone())
    + xor::expr(b[ni][1][pk].clone(), b[ni][2][pk].clone())
    + xor::expr(b[ni][3][pk].clone(), b[ni][4][pk].clone())
);

We can still calculate a single xor with an expression within our expression degree limit of 5 (lookups have input expression degree 4 for a single cell input); this keeps the lookup table smaller (inputs in [0,5] instead of [0,10]). The remaining xors are done using simple additions, so we need a lookup to reduce the result from [0,5] back to [0,1]. We only have to do 320 of these, and depending on how big we can make the lookup table (limited by the circuit height), we can pack multiple of them into a single lookup.
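The idea can be sketched outside the circuit as follows (hypothetical theta_c and normalize helpers; plain integers stand in for field elements, and normalize models the [0,5] -> [0,1] lookup table):

```rust
// Degree-2 xor of two bits inside a gate.
fn xor_expr(a: u64, b: u64) -> u64 {
    a + b - 2 * a * b
}

// Models the normalization lookup table: maps a sum in [0,5] to its parity.
fn normalize(sum: u64) -> u64 {
    const TABLE: [u64; 6] = [0, 1, 0, 1, 0, 1];
    TABLE[sum as usize]
}

// Parity of 10 bits: 5 in-gate xors summed, then one normalize lookup.
fn theta_c(bits: [u64; 10]) -> u64 {
    let sum: u64 = bits.chunks(2).map(|p| xor_expr(p[0], p[1])).sum();
    normalize(sum)
}

fn main() {
    // Six ones -> even parity; a single one -> odd parity.
    assert_eq!(theta_c([1, 0, 1, 1, 0, 1, 0, 0, 1, 1]), 0);
    assert_eq!(theta_c([1, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 1);
}
```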

Now that we have c we can easily calculate the post theta state using a simple xor using a custom gate:

b[i][j][k] = xor::expr(b[i][j][k], c[i][k]);

After this step b[i][j][k] has degree 2.

Rho/Pi

Nothing to do here: the state bits just get shuffled, and because we operate on bits we can do any rotation for free.

Chi/Iota

For this step we also just use custom gates. We have to be a bit careful to make efficient use of the available expression degree. For Chi we have to check that next_b[i][j][k] == b[i][j][k] ^ ((~b[(i+1)%5][j][k]) & b[(i+2)%5][j][k]). We can use the fact that the initial state of the next internal round is stored in the next row. This is equivalent to checking that b[i][j][k] ^ next_b[i][j][k] == (~b[(i+1)%5][j][k]) & b[(i+2)%5][j][k], which splits the check into two expressions of lower degree (4 and 3).

cb.require_equal(
  not::expr(b[(i + 1) % 5][j][k]) * b[(i + 2) % 5][j][k],
  xor::expr(b[i][j][k].clone(), b_next[i][j][k].clone()),
);
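The equivalence of the two forms of the constraint can be checked exhaustively on bit values. This is a hypothetical sketch (chi_forms_agree is not circuit code; plain integers stand in for field values):

```rust
fn xor(a: u64, b: u64) -> u64 { a + b - 2 * a * b }
fn not(a: u64) -> u64 { 1 - a }

// True iff "b0 ^ next == (~b1) & b2" holds for the `next` defined by the
// original constraint "next == b0 ^ ((~b1) & b2)".
fn chi_forms_agree(b0: u64, b1: u64, b2: u64) -> bool {
    let next = xor(b0, not(b1) * b2);
    xor(b0, next) == not(b1) * b2
}

fn main() {
    // Check all 8 bit combinations.
    for bits in 0..8u64 {
        let (b0, b1, b2) = (bits & 1, (bits >> 1) & 1, (bits >> 2) & 1);
        assert!(chi_forms_agree(b0, b1, b2));
    }
}
```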

The iota step for the first state word is done very similarly to the above, by also xoring the round constant (stored in a fixed column) with the next state bit:

cb.require_equal(
  not::expr(b[(i + 1) % 5][j][k]) * b[(i + 2) % 5][j][k],
  xor::expr(
      xor::expr(b[i][j][k].clone(), b_next[i][j][k].clone()),
      meta.query_fixed(iota_bits[iota_counter], Rotation::cur())
  )
);

This only needs to be done for a small number of bits (see IOTA_ROUND_BIT_POS). The extra xor increases the degree of the second expression by 1, so both expressions now have degree 4.

Absorption

For each keccak_f we have to absorb 17 words of data every 24 internal rows. To reduce the number of columns needed to store this data we spread that data over the 24 rows we need to do the internal rounds. The absorption is done using a simple xor using custom gates:

cb.require_equal(
    xor::expr(b[i][j][k].clone(), a_next[a_slice][k].clone()),
    b_next[i][j][k].clone(),
);

Putting it all together

To make things easy we use 25 rows per keccak_f:

  • We enable the round selector for calculating the 24 internal rounds
  • We enable the absorb selector for calculating the absorption a single time

Characteristics

We do most of the calculations on boolean (or very small) values which is sometimes very helpful to speed things up.

Columns

Per row we need:

  • 1600 columns to store the state bits
  • 320 columns to store the theta c bits
  • 17*64/24 ≈ 45 columns to store the absorb bits

~2000 columns when doing an internal round/row (but an internal round could be split up over multiple rows without too much loss of efficiency). This is a large amount of columns, but at least for the prover this isn't bad at all. These columns only contain 0 or 1 as value and so calculating the MSM for these columns, normally the biggest cost, is actually very cheap.

Lookups

We only need a small number of lookups to calculate the theta c values. Alternatively we could do this using custom gates as well, but the number of lookups is quite small so probably won't make that much of a difference.

Custom gates

We have a large amount of custom gates, but because these are very simple expressions the prover cost for these is relatively small. We also make sure the expression degree doesn't exceed 5 so the extended domain is only 4x larger.

Packed implementation

Very similar to the bit implementation above, just a slightly different way of doing things.

Multiple bits are packed inside a single field element. The value per bit will never exceed 5, so for simplicity we always store each bit's value in 3 bits.

Because we don't work on bit values any more, almost all operations need to be implemented using lookups in some way. We make sure we don't needlessly increase the degree of the lookup input expressions by requiring extra selectors to enable/disable lookups. Instead, each lookup has its own dedicated column that can always remain enabled. This allows us to achieve the lowest degree possible (when using lookups).

And so this implementation depends on lookups being cheap, and currently they are still quite expensive. However, with schemes like Caulk, lookups are likely only to get cheaper and cheaper (or at least to allow very large lookup tables, so we can pack a lot of bits into a single lookup to reduce the number of lookups we have to do).

Internal round implementation

All 25 state words are stored as single field elements. We can do this because 64 * 3 = 192 bits fits in a ~253-bit field element. Often, though, these are stored in multiple smaller parts to be able to do lookups on them.
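A small-scale sketch of the packed encoding (hypothetical pack/unpack helpers, packing an 8-bit word at 3 bits per bit into a u64; the circuit packs all 64 bits of a word into a single field element):

```rust
// Each bit gets a 3-bit lane, so per-bit values up to 7 fit without
// interfering with neighbouring bits (the circuit never exceeds 5).
const BITS_PER_LANE: u32 = 3;

fn pack(bits: &[u64]) -> u64 {
    // bits[0] ends up in the least significant lane.
    bits.iter().rev().fold(0, |acc, &b| (acc << BITS_PER_LANE) | b)
}

fn unpack(packed: u64, len: usize) -> Vec<u64> {
    (0..len).map(|i| (packed >> (i as u32 * BITS_PER_LANE)) & 0x7).collect()
}

fn main() {
    let bits = vec![1, 0, 1, 1, 0, 0, 1, 0];
    let packed = pack(&bits);
    assert_eq!(unpack(packed, 8), bits);
    // Adding two packed words adds the per-bit lanes without carries,
    // as long as each lane value stays below 8.
    let sums = unpack(pack(&bits) + pack(&bits), 8);
    assert_eq!(sums, bits.iter().map(|b| 2 * b).collect::<Vec<_>>());
}
```

The carry-free lane addition is what makes expressions like the post-theta b[i][j] + c[i] work on packed words.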

Theta

We first calculate

bc[i] = normalize(b[i][0] + b[i][1] + b[i][2] + b[i][3] + b[i][4])

We only have to calculate 5 of these, one for each i. The normalization is done using lookups ([0,5] -> [0,1]), so we do the additions and then split up the result into multiple parts (each containing multiple bits). c is then calculated from those parts as the expression c[i] = bc[(i + 4) % 5] + rot(bc[(i + 1) % 5], 1).

Because we know we need both bc[i] and the rotated value rot(bc[i], 1), we already take this into account when splitting up the b[i][0] + b[i][1] + b[i][2] + b[i][3] + b[i][4] value: we give the last bit its own part. This way both values can be calculated from the same lookup parts, simply by moving the last part (containing the single bit value) from the end to the start to get the rotated value. This allows us to calculate rot(bc[i], 1) "for free", though in most cases we need to store a single extra part to be able to do so.
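The trick can be sketched on a small 8-bit word (hypothetical rotate_via_parts helper; plain bit vectors stand in for the packed parts, which in the circuit come out of the normalization lookups):

```rust
// Rotate left by 1 by reordering parts: split so the last bit is its
// own part, then move that part from the end to the front.
fn rotate_via_parts(bits: &[u64; 8]) -> Vec<u64> {
    // Parts: [b0..b2], [b3..b6], [b7].
    let (low, mid, last) = (&bits[0..3], &bits[3..7], &bits[7..8]);
    // rot(bc, 1): the single-bit part wraps around to the start.
    last.iter().chain(low).chain(mid).copied().collect()
}

fn main() {
    let bits = [1, 0, 1, 1, 0, 0, 1, 0];
    // Direct rotate-left-by-1 on bit positions: out[0] = in[7], out[k] = in[k-1].
    let mut direct = vec![bits[7]];
    direct.extend_from_slice(&bits[..7]);
    assert_eq!(rotate_via_parts(&bits), direct);
}
```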

Finally we calculate the post theta state using an expression

b[i][j] = b[i][j] + c[i];

After this step b[i][j] has degree 1 and is in the range [0,3] (we don't normalize the result here just yet).

Rho/Pi/Chi

Now we have to do the Rho/Pi rotations and the Chi logic operations.

For the Rho/Pi rotations we have to split up the words in parts to be able to shuffle the bits.

For Chi we have to calculate next_b[i][j] == b[i][j] ^ ((~b[(i+1)%5][j]) & b[(i+2)%5][j]) using lookups, which also requires us to split up the words into multiple parts. We calculate the Chi transform a ^ ((~b) & c) as chi_lookup[3 - 2*a + b - c] with chi_lookup = [0, 1, 1, 0, 0].
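The lookup can be checked against the logical definition for all bit values (hypothetical sketch; plain integers stand in for field values):

```rust
// The 5-entry chi lookup table from the text.
const CHI_LOOKUP: [u64; 5] = [0, 1, 1, 0, 0];

// a ^ ((~b) & c), computed per bit as chi_lookup[3 - 2a + b - c].
// The index stays in [0,4] for bit inputs, so no underflow occurs.
fn chi(a: u64, b: u64, c: u64) -> u64 {
    CHI_LOOKUP[(3 - 2 * a + b - c) as usize]
}

fn main() {
    for a in 0..2u64 {
        for b in 0..2u64 {
            for c in 0..2u64 {
                assert_eq!(chi(a, b, c), a ^ ((!b & 1) & c));
            }
        }
    }
}
```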

We handle them together because of the following observations:

  • To be able to do the necessary rotations we have to split up the words into parts.
  • b[i][j] is currently in the range [0,3] so we may as well normalize the output at the same time.
  • To calculate the Chi operations we would normally recombine and split up the words into parts again to use as inputs in the chi lookups.

Instead of splitting up the words into parts, then recombining and splitting up again, we split up the words just once, in a way that allows us to do both the rotation and Chi on the same parts. This saves a significant number of columns.

For example, take 12345678abcd as a 12-bit word. For different rotations we can make sure that the parts after rotation always line up nicely, so the Chi transform and lookups can still be done just as efficiently without having to recombine and split again:

  • rot 1: 123|4567|8abc|d -> d|123|4567|8abc
  • rot 2: 12|3456|78ab|cd -> cd|12|3456|78ab
  • rot 5: 123|4567|8abc|d -> 8abc|d|123|4567

Notice how the parts for the rotated values line up nicely in parts of 4 bits (when combining the parts that are smaller than 4, which are always adjacent to each other). We then calculate 3 - 2*a + b - c on those part values and pass that into the chi_lookup table to finally get the results after Chi.
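The part reshuffling for the example word above can be sketched as follows (hypothetical rotate_parts helper; characters stand in for bits, and a rotation just moves whole parts from the end to the front):

```rust
// Rotate a word by moving its last `rot_parts` parts to the front.
fn rotate_parts(parts: &[&str], rot_parts: usize) -> String {
    let n = parts.len();
    parts[n - rot_parts..].iter().chain(&parts[..n - rot_parts]).copied().collect()
}

fn main() {
    // rot 1: 123|4567|8abc|d -> d|123|4567|8abc, aligning as d123|4567|8abc.
    assert_eq!(rotate_parts(&["123", "4567", "8abc", "d"], 1), "d12345678abc");
    // rot 5: 123|4567|8abc|d -> 8abc|d|123|4567, aligning as 8abc|d123|4567
    // (the last two parts together are 5 bits long).
    assert_eq!(rotate_parts(&["123", "4567", "8abc", "d"], 2), "8abcd1234567");
}
```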

This is also the reason why we do theta and its normalization the way we do. The normalization needs lookups in [0,3]; Chi needs lookups in [0,4]. Because both need similarly sized lookup tables, the part sizes we need to split the words into for both operations are very similar.

To split this up over multiple rows we can make use of the fact that in next_b[i][j] == b[i][j] ^ ((~b[(i+1)%5][j]) & b[(i+2)%5][j]), j remains static and i is accessed in a wrap-around manner (so all b[i]s should ideally be stored on the same row, letting each row do exactly the same thing). So when splitting the words into parts over multiple rows, we just have to make sure that all b[i]s are stored on the same row. This isn't actually that strong a requirement, because the words are split into multiple parts, and so only the parts at the same position within those words need to be on the same row.

Iota

Done simply by splitting up b[0] in parts and doing the xor using lookups. This only needs to be done on a single word so it doesn't really matter much how this is done.

Absorption

For each keccak_f we have to absorb 17 words of data every 24 internal rows. To reduce the number of columns needed to store this data, and to reduce the number of lookups per row, we spread the absorption over 17 rows, each row absorbing exactly 1 word. The absorption is done using a simple xor using lookups.

The way this is done is by storing absorb_from, absorb_data, and absorb_result per row.

  • absorb_from is a state word pre absorption.
  • absorb_data is the data word to absorb.
  • absorb_result is the state word post absorption, calculated by doing the xor using lookups.

This data is then used when we actually need to absorb the data (using rotations to reach the next rows). We verify that the state word at that point matches the absorb_from value, and then enforce the next state word to be absorb_result.
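A minimal sketch of one absorption row (hypothetical absorb_row helper; bit vectors stand in for the packed words, and the per-bit xor models what the circuit does with lookups):

```rust
// absorb_result = absorb_from ^ absorb_data, bit by bit.
fn absorb_row(absorb_from: &[u64], absorb_data: &[u64]) -> Vec<u64> {
    absorb_from.iter().zip(absorb_data).map(|(s, d)| s ^ d).collect()
}

fn main() {
    let absorb_from = [1, 0, 1, 1]; // state word pre absorption
    let absorb_data = [1, 1, 0, 1]; // data word to absorb
    // absorb_result becomes the state word on the next row.
    assert_eq!(absorb_row(&absorb_from, &absorb_data), vec![0, 1, 1, 0]);
}
```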

Characteristics

By packing multiple bits in a single field element we can reduce the amount of cells we need to use to store all necessary data. However by doing so we need to depend on lookups and so we have to use a large amount of them.

In a prover without zk we can limit the expression degree to just 3 (we pretty much only add cells together, or multiply them with constants), which makes the extended domain only 2x larger.

Columns

Per row we need:

  • 25 columns to store the state words
  • ~750 columns to store the intermediate part values
  • 3+ columns to store the absorb data + intermediate parts

~800 columns when doing an internal round/row (but can be split up over multiple rows).

Lookups

We need ~500 lookups per row. The extended domain is small, but this amount of lookups is still expensive to do.

Custom gates

We only need a small number of custom gates to link the lookups together. Their impact on prover performance is pretty much insignificant. This is why, when splitting the work of a single internal round over multiple rows, we don't really care about splitting the custom gates over those rows and simply enable them on one row and disable them on all the others.

Performance report printed in code

Some stats are printed in the code to help see how expensive some parts are.

  • println!("Lookups: {}", lookup_counter): prints how many lookups are done in this part.
  • println!("Columns: {}", cell_manager.get_width()): prints the total width of the circuit (the height per internal round is configurable and fixed; the circuit grows in width).

At the end some extra stats are printed:

  • println!("Degree: {}", meta.degree()): the circuit expression degree (impacts how big the extended domain is)
  • println!("Minimum rows: {}", meta.minimum_rows()): ~amount of rows that are unusable, limits the size of the lookup tables a bit (but will be 0 after zk is removed)
  • println!("num unused cells: {}", cell_manager.get_num_unused_cells()): To split the work over multiple rows, we sometimes leave some gaps on some rows.
  • println!("part_size x: {}", get_num_bits_per_x_lookup()): The larger the height of the circuit, the larger we can make the lookup tables. The implementations dynamically make the lookup tables as large as possible to minimize the number of lookups that need to be done.