
Overview

Let \(F\) be a file with \(b\) blocks. For given parameters \(k\) and \(m\), erasure coding proceeds as follows.

We begin by computing, from \(b\) and \(k\), the number \(s = \lceil \frac{b}{k} \rceil\) of encoding steps. We then split the \(b\) blocks of \(F\) into \(k\) buckets of \(s\) blocks each (Figure 1). Since \(b\) is not in general divisible by \(k\), we pad \(F\) with \(b_e\) empty blocks so that \(b_e + b\) is. We refer to this padded file as \(F_r\), and to its block count as \(b_r = b_e + b = s \times k\).
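As a quick sketch of this arithmetic (in Python rather than the project's Nim; encoding_layout is a hypothetical helper, not part of the codebase):

```python
import math

def encoding_layout(b: int, k: int) -> tuple[int, int, int]:
    """Given b data blocks and k buckets, return (s, b_e, b_r).

    s   = ceil(b / k), the number of encoding steps (blocks per bucket)
    b_e = empty padding blocks needed so that the total is divisible by k
    b_r = b_e + b = s * k, the padded block count
    """
    s = math.ceil(b / k)
    b_r = s * k
    b_e = b_r - b
    return s, b_e, b_r
```

For example, a 5-block file with \(k = 2\) needs one padding block: encoding_layout(5, 2) yields 3 steps, 1 empty block, and a padded count of 6.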


Figure 1. Splitting of blocks into step buckets.

The file is then extended with \(b_p = m \times s\) parity blocks, where the input to each parity block depends on the indexing strategy.

Code

Encoding parameters are computed from a Manifest as follows:

proc init*(
    _: type EncodingParams,
    manifest: Manifest,
    ecK: Natural,
    ecM: Natural,
    strategy: StrategyType): ?!EncodingParams =
  if ecK > manifest.blocksCount:
    return failure(
      "Unable to encode manifest, not enough blocks, ecK = " &
        $ecK &
        ", blocksCount = " &
        $manifest.blocksCount)

  let
    rounded = roundUp(manifest.blocksCount, ecK)
    steps = divUp(rounded, ecK)
    blocksCount = rounded + (steps * ecM)

  success EncodingParams(
    ecK: ecK,
    ecM: ecM,
    rounded: rounded,
    steps: steps,
    blocksCount: blocksCount,
    strategy: strategy
  )

This uses somewhat funny math to compute \(\lceil b/k \rceil\):

func divUp*[T: SomeInteger](a, b : T): T =
  ## Division with result rounded up (rather than truncated as in 'div')
  assert(b != T(0))
  if a == T(0):
    T(0)
  else:
    ((a - T(1)) div b) + T(1)

func roundUp*[T](a, b : T): T =
  ## Round up 'a' to the next value divisible by 'b'
  divUp(a, b) * b

It basically says that, for \(a \neq 0\), we have that:

\[ \left\lfloor \frac{a - 1}{b} \right\rfloor + 1 = \left\lceil \frac{a}{b}\right\rceil \]

This can be seen to hold when \(a\) is divisible by \(b\): if \(q = a/b\), then \(\left\lfloor \frac{a - 1}{b} \right\rfloor = q - 1\) and \(q - 1 + 1 = q = \left\lfloor \frac{a}{b}\right\rfloor = \left\lceil \frac{a}{b}\right\rceil\). If \(0 < a < b\), both sides equal \(1\), so it is trivially true. If \(0 < b < a\) and \(a\) is not divisible by \(b\), on the other hand:

\[ \left\lfloor \frac{a - 1}{b} \right\rfloor + 1 = \left\lfloor \frac{(qb + r)- 1}{b} \right\rfloor + 1 \]

where \(1 \leq r < b\) and \(\lceil a/b \rceil = q + 1\). Taking the expression above, we see that:

\[ \left\lfloor \frac{qb + (r - 1)}{b} \right\rfloor + 1= \left\lfloor q + \frac{(r - 1)}{b} \right\rfloor + 1= q + 1 \]

since \(1 \leq r < b\) and hence \((r - 1) / b < 1\). \(_{\blacksquare}\)
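As a sanity check on the identity (a Python sketch, not part of the codebase; div_up mirrors the Nim divUp):

```python
def div_up(a: int, b: int) -> int:
    """Mirror of the Nim divUp: ceiling division via floor((a - 1) / b) + 1."""
    assert b != 0
    return 0 if a == 0 else (a - 1) // b + 1

# Brute-force check of floor((a - 1)/b) + 1 == ceil(a/b) over a range of values.
assert all(
    div_up(a, b) == -(a // -b)  # -(a // -b) is ceiling division in Python
    for a in range(1, 200)
    for b in range(1, 50)
)
```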

In the code above, we have the following mapping for the EncodingParams:

  • ecK \(= k\);
  • ecM \(= m\);
  • steps \(= s = \lceil \frac{b}{k} \rceil\);
  • rounded \(= b_r = s \times k\);
  • blocksCount \(= b_r + b_p\), where \(b_p = s \times m\) (includes empty and parity blocks).
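To make the mapping concrete, here is a hypothetical Python helper (the codebase is Nim) mirroring the arithmetic in init:

```python
def encoding_params(blocks_count: int, ec_k: int, ec_m: int) -> dict:
    """Mirror of EncodingParams.init's arithmetic (illustrative sketch only)."""
    assert ec_k <= blocks_count, "not enough blocks to encode"
    rounded = -(blocks_count // -ec_k) * ec_k  # roundUp(blocksCount, ecK)
    steps = rounded // ec_k                    # divUp(rounded, ecK); exact here
    blocks_total = rounded + steps * ec_m      # data + padding + parity
    return {"rounded": rounded, "steps": steps, "blocksCount": blocks_total}
```

For a 4-block file with \(k = 2\), \(m = 1\) this gives rounded = 4, steps = 2 and blocksCount = 6.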

Note that this can be confusing, as Manifest also has a blocksCount attribute, but in the non-encoded manifest this refers to \(b\) and not \(b_r + b_p\). This is because blocksCount is calculated from datasetSize:

func blocksCount*(self: Manifest): int =
  divUp(self.datasetSize.int, self.blockSize.int)

and datasetSize does get updated when we create the encoded Manifest:

let encodedManifest = Manifest.new(
  manifest = manifest,
  treeCid = treeCid,
  datasetSize = (manifest.blockSize.int * params.blocksCount).NBytes,
  ecK = params.ecK,
  ecM = params.ecM,
  strategy = params.strategy
)

The encoded manifest does keep track of the original dataset size under the originalDatasetSize attribute. I would argue that for consistency and to avoid confusion, having an originalBlocksCount akin to blocksCount but which operates on originalDatasetSize would go a long way.

Mapping from API requests. API requests specify the parameters nodes and tolerance. Nodes is the total number of nodes, and tolerance is the maximum number of nodes we are willing to see fail before we lose any data. Considering the above, it should be clear that:

  • \(k\) = nodes \(-\) tolerance;
  • \(m\) = tolerance.
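This translation can be sketched as follows (ec_params_from_request is a hypothetical helper, not in the codebase):

```python
def ec_params_from_request(nodes: int, tolerance: int) -> tuple[int, int]:
    """Map API-level (nodes, tolerance) onto erasure-coding (k, m)."""
    assert 0 < tolerance < nodes, "tolerance must leave at least one data share"
    k = nodes - tolerance  # data shares
    m = tolerance          # parity shares: up to m nodes may fail
    return k, m
```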

Explaining the Numbers in The EC Decoding Test

For the EC decoding test I am using Eric's integration test with a version of the King James Bible truncated to \(262\,144\) bytes. Using a plain text file helps distinguish actual content from gibberish in the output, and makes it easy to search, say, for the end of the file and visually check that all the content is there.

I am getting a file with \(786\,432\) bytes back from the API, or exactly \(3\) times the size of the original file. This contains the original text plus gibberish amounting to \(2\) times the original file.

We make an API call with settings nodes = 3 and tolerance = 1, which means \(k = 2\) and \(m = 1\).

We are using the default block size, which is set to:

DefaultBlockSize* = NBytes 1024*64 # 65536

This means our file has exactly \(4\) blocks (\(262\,144 / 65\,536 = 4\)). Since \(k = 2\), we should have:

  • \(2\) encoding steps;
  • no padding (no extra empty blocks);
  • \(2\) parity blocks of size \(65\,536\).

The total expected size for the encoded file, therefore, is \(6\) blocks, or \(393\,216\) bytes.

The fact that we get an output of size \(786\,432\) is therefore rather puzzling: this amounts to \(12\) blocks, of which the first \(4\) are the content of the file (which we can confirm by inspecting the output) and the remaining \(8\) are jumbled gibberish.

Since Leopard will not even allow coding rates below \(0.5\), we know those cannot all be parity blocks, unless padding is broken.
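The arithmetic behind this expectation can be written out as a sketch (values taken from the test above):

```python
BLOCK_SIZE = 65_536       # DefaultBlockSize
ORIGINAL_SIZE = 262_144   # the truncated input file
K, M = 2, 1               # from nodes = 3, tolerance = 1

blocks = ORIGINAL_SIZE // BLOCK_SIZE           # 4 data blocks, no remainder
steps = -(blocks // -K)                        # ceil(4 / 2) = 2 encoding steps
expected_blocks = blocks + steps * M           # 4 data + 2 parity = 6 blocks
                                               # (4 divides k, so no padding)
expected_size = expected_blocks * BLOCK_SIZE   # 393_216 bytes

observed_size = 786_432                        # what the API actually returned
observed_blocks = observed_size // BLOCK_SIZE  # 12 blocks: twice the expectation
```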

Encoding

Analysing instrumentation output from Codex, we get:

-------------------- ENCODING ------------------------
 * ecM is: 1
 * ecK is: 2
 * Steps are: 2
 * Original blocks are: 4
 * Rounded blocks are: 4
 * Total blocks are: 6
   INTERLEAVING STEP (0)
   * Blocks resolved: 2
   * Padding blocks added: 0
   DONE INTERLEAVING STEP (0)
TRC 2024-06-19 11:26:18.327-03:00 Erasure coding data
TRC 2024-06-19 11:26:18.336-03:00 Adding parity block
   INTERLEAVING STEP (1)
   * Blocks resolved: 2
   * Padding blocks added: 0
   DONE INTERLEAVING STEP (1)
TRC 2024-06-19 11:26:18.343-03:00 Erasure coding data
TRC 2024-06-19 11:26:18.344-03:00 Adding parity block
 * added a total of parity blocks: 2
 * treeCid is: zDzSvJTfAh9dU5pnpYpULXAzfsonLZ9iJMNzEBXT2Uw25g6JVbwP
 * original manifest blocksCount is: 4
 * manifest blocksCount is: 6
 * manifest blockSize is:65536'NByte
 * manifest datasetSize is: 393216'NByte
 * manifest original dataset size is: 262144'NByte
TRC 2024-06-19 11:26:18.351-03:00 Encoded data successfully                  topics="codex erasure" tid=107940 treeCid=zDz*6JVbwP blocksCount=6 steps=2 ecK=2 rounded_blocks=4 ecM=1
-------------------- ENCODING DONE ------------------------

These top-level numbers look correct: the size of the new dataset checks out, as do \(b_r\) (\(4\)), the number of steps (\(2\)), parity blocks (\(2\)), and empty padding blocks (\(0\)).

Decoding

Decoding has a structure similar to, though slightly more complex than, encoding's. prepareDecodingData shows the code for a decoding step, which fetches blocks and arranges their content into an array of arrays. Each inner array should contain the contents of a block, or be empty if the block is missing. Empty arrays will later be interpreted as erasures by Leopard.

Since this produces the input for a single interleaved decoding step, the output arrays have size \(k\) (data) and \(m\) (parity), and the total number of non-empty inner arrays must be \(\geq k\) for decoding to succeed. If it is not, decoding will fail and an error will be generated at a later step.

proc prepareDecodingData(
    self: Erasure,
    encoded: Manifest,
    step: Natural,
    data: ref seq[seq[byte]],
    parityData: ref seq[seq[byte]],
    cids: ref seq[Cid],
    emptyBlock: seq[byte]): Future[?!(Natural, Natural)] {.async.} =
  ## Prepare data for decoding
  ## `encoded`    - the encoded manifest
  ## `step`       - the current step
  ## `data`       - the data to be prepared
  ## `parityData` - the parityData to be prepared
  ## `cids`       - cids of prepared data
  ## `emptyBlock` - the empty block to be used for padding
  ##

  let
    strategy = encoded.protectedStrategy.init(
      firstIndex = 0,
      lastIndex = encoded.blocksCount - 1,
      iterations = encoded.steps
    )
    indicies = toSeq(strategy.getIndicies(step))
    pendingBlocksIter = self.getPendingBlocks(encoded, indicies)

  var
    dataPieces = 0
    parityPieces = 0
    resolved = 0

  for fut in pendingBlocksIter:
    # Continue to receive blocks until we have just enough for decoding
    # or no more blocks can arrive
    if resolved >= encoded.ecK:
      break

    let (blkOrErr, idx) = await fut
    without blk =? blkOrErr, err:
      trace "Failed retreiving a block", idx, treeCid = encoded.treeCid, msg = err.msg
      continue

    let pos = indexToPos(encoded.steps, idx, step)

    logScope:
      cid   = blk.cid
      idx   = idx
      pos   = pos
      step  = step
      empty = blk.isEmpty

    cids[idx] = blk.cid
    if idx >= encoded.rounded and pos >= encoded.ecK:
      trace "Retrieved parity block"
      shallowCopy(parityData[pos - encoded.ecK], if blk.isEmpty: emptyBlock else: blk.data)
      parityPieces.inc
    else:
      trace "Retrieved data block"
      shallowCopy(data[pos], if blk.isEmpty: emptyBlock else: blk.data)
      dataPieces.inc

    resolved.inc

  return success (dataPieces.Natural, parityPieces.Natural)

The code above initializes the indexing strategy with blocksCount and steps, which we know to be computed correctly in encoding: this will provide us with the block mapping for this interleaving step.

It then issues an async fetch for all required blocks, and processes them as they arrive in the main for loop. The loop stops as soon as enough (\(k\)) blocks have been fetched, effectively trading IO for CPU.

The loop body processes each block retrieval. idx represents the position of the block in the overall encoded file, and should be a number between \(0\) and \(b_r + b_p - 1\), where \(b_p = s \times m\) counts the parity blocks. In the case of our file, this is a value between \(0\) and \(5\).

The call to indexToPos then performs a key transformation: it extracts the reverse mapping for the interleaving. One key aspect of this transformation is that it does not work for arbitrary interleaving strategies.

The stepped strategy maps position \(0 \leq j < k\) at step \(0 \leq i < s\) into the \(i^{th}\) element of the \(j^{th}\) \(s\)-sized bucket (Figure 1). If we let \(l\) be the mapped index, we get:

\[ \begin{equation} i + j\times s = l \iff j = \frac{l - i}{s} \label{eq:stepped} \tag{1} \end{equation} \]

where the equation for \(j\) is the reverse mapping. Not surprisingly, the mapping strategy in indexToPos is defined as:

func indexToPos(steps, idx, step: int): int {.inline.} =
  ## Convert an index to a position in the encoded
  ## dataset
  ## `idx`  - the index to convert
  ## `step` - the current step
  ## `pos`  - the position in the encoded dataset
  ##
  (idx - step) div steps

One implication of this is that the only interleaving strategy that currently will work with the EC module is the stepped strategy.
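A Python sketch of the round trip (assuming the stepped layout described above; both helpers are illustrative, index_to_pos mirrors the Nim indexToPos) makes the relationship concrete:

```python
def stepped_index(j: int, i: int, s: int) -> int:
    """Forward map: position j at step i -> index i + j*s in the encoded file."""
    return i + j * s

def index_to_pos(steps: int, idx: int, step: int) -> int:
    """Mirror of indexToPos: recover position j from index idx at a given step."""
    return (idx - step) // steps

s, k = 2, 2  # layout of the test file: 2 steps, 2 buckets
# Every (step, position) pair survives the round trip.
assert all(
    index_to_pos(s, stepped_index(j, i, s), i) == j
    for i in range(s)
    for j in range(k)
)
```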

In that same vein, the decode function also has one bit which will only work with the stepped strategy:

for i in 0..<encoded.ecK:
  let idx = i * encoded.steps + step

  if data[i].len <= 0 and not cids[idx].isEmpty:
    without blk =? bt.Block.new(recovered[i]), error:
      trace "Unable to create block!", exc = error.msg
      return failure(error)

    trace "Recovered block", cid = blk.cid, index = i

    if isErr (await self.store.putBlock(blk)):
      trace "Unable to store block!", cid = blk.cid
      return failure("Unable to store block!")

    cids[idx] = blk.cid
    recoveredIndices.add(idx)

This scans through the data blocks fed into the current step and checks that there are matching recovery blocks in the recovered array. Note that the index mapping used to pick the blocks that took part in step \(i\) (idx = i * encoded.steps + step) is exactly the same formula we had deduced in Eq. \(\eqref{eq:stepped}\).
