## Overview
Let \(F\) be a file with \(b\) blocks. For given parameters \(k\) and \(m\), erasure coding proceeds as follows.
We begin by computing, from \(b\) and \(k\), the number \(s = \lceil \frac{b}{k} \rceil\) of encoding steps. We then split the \(b\) blocks of \(F\) into \(k\) buckets of \(s\) blocks each (Figure 1). Since \(b\) may not, in general, be divisible by \(k\), we pad \(F\) with \(b_e\) empty blocks so that \(b_e + b\) is. We refer to the padded file as \(F_r\), and to its block count as \(b_r = b_e + b = s \times k\).
Figure 1. Splitting of blocks into step buckets.
The file is then extended with \(m \times s\) parity blocks, where the input for each parity block depends on the indexing strategy.
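To make the layout concrete, here is a small Python sketch of the computation just described (illustrative only, not the Codex implementation):

```python
import math

def encoding_layout(b: int, k: int, m: int):
    """Compute the layout described above: s encoding steps, b_e empty
    padding blocks, the padded block count b_r, and m * s parity blocks."""
    s = math.ceil(b / k)   # encoding steps = blocks per bucket
    b_r = s * k            # padded block count, divisible by k
    b_e = b_r - b          # empty blocks appended to F
    b_p = m * s            # parity blocks appended after the data
    return s, b_e, b_r, b_p

# A 10-block file with k = 3, m = 2: 4 steps, 2 padding blocks, 8 parity blocks.
print(encoding_layout(10, 3, 2))  # → (4, 2, 12, 8)
```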
## Code
Encoding parameters are computed from a `Manifest` as follows:
This uses somewhat funny math to compute \(\lceil b/k \rceil\): it basically says that, for \(a \neq 0\), we have that:
\[ \left\lfloor \frac{a - 1}{b} \right\rfloor + 1 = \left\lceil \frac{a}{b}\right\rceil \]
This can be seen to hold if \(a\) is divisible by \(b\): if \(q = a/b\), then \(\left\lfloor \frac{a - 1}{b} \right\rfloor = q - 1\) and \(q - 1 + 1 = q = \left\lfloor \frac{a}{b}\right\rfloor = \left\lceil \frac{a}{b}\right\rceil\). If \(0 < a < b\), it also holds trivially: the left-hand side is \(0 + 1 = 1 = \left\lceil \frac{a}{b}\right\rceil\). If \(0 < b < a\) and \(a\) is not divisible by \(b\), on the other hand:
\[ \left\lfloor \frac{a - 1}{b} \right\rfloor + 1 = \left\lfloor \frac{(qb + r)- 1}{b} \right\rfloor + 1 \]
where \(1 \leq r < b\) and \(\lceil a/b \rceil = q + 1\). Taking the expression above, we see that:
\[ \left\lfloor \frac{qb + (r - 1)}{b} \right\rfloor + 1= \left\lfloor q + \frac{(r - 1)}{b} \right\rfloor + 1= q + 1 \]
since \(1 \leq r < b\) and hence \((r - 1) / b < 1\). \(_{\blacksquare}\)
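The identity is also easy to sanity-check exhaustively over small positive integers:

```python
import math

# Check that floor((a - 1) / b) + 1 == ceil(a / b) for 1 <= a, b < 200.
for a in range(1, 200):
    for b in range(1, 200):
        assert (a - 1) // b + 1 == math.ceil(a / b)
print("identity holds for 1 <= a, b < 200")
```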
In the code above, we have the following mapping for the `EncodingParams`:

Note that this can be confusing, as `Manifest` also has a `blocksCount` attribute, but in the non-encoded manifest this refers to \(b\) and not \(b_r + b_p\). This is because `blocksCount` is calculated from `datasetSize`:

and `datasetSize` does get updated when we create the encoded `Manifest`:

The encoded manifest does keep track of the original dataset size under the `originalDatasetSize` attribute. I would argue that, for consistency and to avoid confusion, having an `originalBlocksCount` attribute akin to `blocksCount` but operating on `originalDatasetSize` would go a long way.

**Mapping from API requests.** API requests specify the parameters `nodes` and `tolerance`: `nodes` is the total number of nodes, and `tolerance` the maximum number of nodes we are willing to see fail before we lose any data. Considering the above, it should be clear that:

- \(k =\) `nodes` \(-\) `tolerance`;
- \(m =\) `tolerance`.
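A minimal sketch of this mapping (the function name is mine, not from the Codex codebase):

```python
def ec_params(nodes: int, tolerance: int):
    """Map API parameters to erasure-coding parameters: k data buckets
    and m parity buckets per encoding step."""
    if not 0 < tolerance < nodes:
        raise ValueError("tolerance must be strictly between 0 and nodes")
    k = nodes - tolerance  # data buckets
    m = tolerance          # parity buckets
    return k, m

print(ec_params(3, 1))  # → (2, 1)
```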
## Explaining the Numbers in the EC Decoding Test
For the EC decoding test I am using Eric's integration test with a version of the King James Bible truncated to \(262\,144\) bytes. Using a plain-text file helps distinguish actual content from gibberish in the output, and it makes it easy to search for, say, the end of the file and visually check that all the content is there.
I am getting a file with \(786\,432\) bytes back from the API, or about \(3\) times the size of the original file. This contains the original text plus gibberish which amounts to \(2\) times the original file.
We make an API call with settings `nodes = 3` and `tolerance = 1`, which means \(k = 2\) and \(m = 1\). We are using the default block size, which is set to:
This means our file has exactly \(4\) blocks. Since \(k = 2\), this means we should have:

- \(s = \lceil 4/2 \rceil = 2\) encoding steps;
- \(b_e = 0\) padding blocks;
- \(m \times s = 2\) parity blocks.
The total expected size for the encoded file, therefore, is \(6\) blocks.
The fact that we get an output of size \(786\,432\) is therefore rather puzzling: this amounts to \(12\) blocks, of which the first \(4\) are the content of the file – which we can confirm by inspecting the output – and the next \(8\) are jumbled gibberish.
Since Leopard will not even allow coding rates below \(0.5\), we know those cannot all be parity blocks, unless padding is broken.
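The expected-vs-observed arithmetic, spelled out (the \(65\,536\)-byte block size is inferred from the file being exactly \(4\) blocks of \(262\,144\) bytes):

```python
BLOCK_SIZE = 65_536        # inferred: 262_144 bytes / 4 blocks

b = 262_144 // BLOCK_SIZE  # 4 data blocks
k, m = 2, 1
s = -(-b // k)             # ceil(4 / 2) = 2 encoding steps
expected_blocks = b + m * s               # 4 data + 2 parity = 6
observed_blocks = 786_432 // BLOCK_SIZE   # what the API returned

print(expected_blocks, observed_blocks)  # → 6 12
```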
### Encoding
Analysing instrumentation output from Codex, we get:
These top-level numbers seem roughly correct: the size of the new dataset is right, as are \(b_r\) (\(4\)), the number of steps (\(2\)), parity blocks (\(2\)), and empty padding blocks (\(0\)).
### Decoding
Decoding has a similar yet slightly more complex structure than encoding.
`prepareDecodingData` shows the code for a decoding step, which fetches blocks and arranges their content into an array of arrays. Each inner array should contain the contents of a block, or be empty if the block is missing. Missing arrays will later be interpreted as erasures by Leopard.

Since this produces the input for a single interleaved decoding step, the output arrays have sizes \(k\) (`data`) and \(m\) (`parity`), and the total number of non-empty inner arrays must be \(\geq k\) for decoding to succeed. If it is not, decoding will fail and an error will be generated at a later step.

The code above initializes the indexing strategy with `blocksCount` and `steps`, which we know to be computed correctly in encoding: this will provide us with the block mapping for this interleaving step.

It then issues an async fetch for all required blocks, and processes those as they arrive in the main for loop (lines \(31\)–\(62\)). It stops as soon as enough (\(k\)) blocks have been fetched (line \(34\)), effectively trading IO for CPU.
Line \(37\) processes a block retrieval. `idx` represents the position of the block within the overall file, and should be a number between \(0\) and \(b_r + b_p - 1\). In the case of our file, this should be a value between \(0\) and \(5\).

Line \(43\) then performs a key transformation: extracting the reverse mapping for the interleaving. One key aspect of this transformation is that it does not work for arbitrary interleaving strategies.
The stepping strategy maps index \(0 \leq j < k\) at step \(0 \leq i < s\) into the \(i^{th}\) element of the \(j^{th}\) \(s\)-sized bucket (Figure 1). If we let \(l\) be the mapped index, we get:
\[ \begin{equation} i + j\times s = l \iff j = \frac{l - i}{s} \label{eq:stepped} \tag{1} \end{equation} \]
where the equation for \(j\) is the reverse mapping. Not surprisingly, the mapping strategy in `indexToPos` is defined as:

One implication of this is that the only interleaving strategy that will currently work with the EC module is the stepped strategy.
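A sketch of the stepped mapping and its inverse, following Eq. \(\eqref{eq:stepped}\) (function names are mine, not Codex's):

```python
def stepped_index_to_pos(step: int, index: int, steps: int) -> int:
    """Forward mapping l = j * s + i: block `index` (j) at `step` (i)
    lands at the i-th element of the j-th s-sized bucket."""
    return index * steps + step

def stepped_pos_to_index(step: int, pos: int, steps: int) -> int:
    """Reverse mapping j = (l - i) / s, as in Eq. (1)."""
    return (pos - step) // steps

# With s = 2 steps and k = 2 buckets, step 0 reads blocks 0 and 2:
print([stepped_index_to_pos(0, j, 2) for j in range(2)])  # → [0, 2]
```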
In that same vein, the `decode` function also has one bit which will only work with the stepped strategy:

This scans through the data blocks fed into the current step and checks that there are matching recovery blocks in the `recovered` array. Note that the index mapping used in line \(2\) to pick the blocks that took part in step \(i\) is exactly the same formula we deduced in Eq. \(\eqref{eq:stepped}\).