# 2.1 Upload/Download API
## Uploading Chunks
### POST /chunk
#### Body:
```
{
"data_root": "<Base64URL encoded data merkle root>",
"data_size": "a number, the size of transaction in bytes",
"data_path": "<Base64URL encoded inclusion proof>",
"chunk": "<Base64URL encoded data chunk>",
"offset": "<a number from [start_offset, start_offset + chunk size), relative to other chunks>"
}
```
**NOTE** data_size is required in addition to data_root because the same data root may be submitted
with different transaction sizes. To avoid overlapping chunks, the data root always comes paired with the size.
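As an illustration, the request body above can be assembled like this. This is a minimal sketch; the helper names are hypothetical, and sending the numeric fields as decimal strings is an assumption (values may range up to 2^256, beyond what many JSON number parsers handle).

```python
import base64
import json

def b64url(data: bytes) -> str:
    # Base64URL without padding, as used for the binary fields
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def chunk_payload(data_root: bytes, data_size: int,
                  data_path: bytes, chunk: bytes, offset: int) -> str:
    # Hypothetical helper: builds the JSON body for POST /chunk
    return json.dumps({
        "data_root": b64url(data_root),
        "data_size": str(data_size),
        "data_path": b64url(data_path),
        "chunk": b64url(chunk),
        "offset": str(offset),
    })
```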
#### Responses:
#### success
```
200
```
#### chunk_too_big
```
400 {"error": "chunk_too_big"}
```
When the chunk is bigger than 256 KiB.
#### data_path_too_big
```
400 {"error": "data_path_too_big"}
```
When the proof is bigger than 256 KiB.
#### offset_too_big
```
400 {"error": "offset_too_big"}
```
When the offset is bigger than 2^256.
#### data_size_too_big
```
400 {"error": "data_size_too_big"}
```
When the data size is bigger than 2^256.
#### chunk_proof_ratio_not_attractive
```
400 {"error": "chunk_proof_ratio_not_attractive"}
```
When data_path is bigger than the chunk.
**NOTE** If the original data is too small, it should not be uploaded in chunks.
#### data_root_not_found
```
400 {"error": "data_root_not_found"}
```
When the node hasn't seen the header of the corresponding transaction yet.
#### exceeds_disk_pool_size_limit
```
400 {"error": "exceeds_disk_pool_size_limit"}
```
The corresponding transaction is pending and one of the following limits has been reached:
- 50 MiB worth of chunks have already been accepted by this node for this (data root, data size) pair;
- 2 GiB worth of chunks across all pending transactions have been accepted by this node.
#### invalid_proof
```
400 {"error": "invalid_proof"}
```
#### not_joined
```
503 {"error": "not_joined"}
```
When the node has not joined the network yet.
#### timeout
```
503 {"error": "timeout"}
```
## Downloading Chunks
### GET /tx/<id>/data
The endpoint serves the transaction data regardless of how it was uploaded.
#### Responses:
#### success
```
200 <Base64URL encoded data>
```
#### tx_data_too_big
```
400 {"error": "tx_data_too_big"}
```
When the data is bigger than 12 MiB.
**NOTE** Data bigger than that has to be downloaded chunk by chunk.
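The success body is Base64URL-encoded data, typically without padding. A hypothetical decoding helper (the padding-restoration step is an assumption about how the response is encoded):

```python
import base64

def b64url_decode(encoded: str) -> bytes:
    # Restore the "=" padding that unpadded Base64URL omits,
    # then decode with the URL-safe alphabet
    padding = "=" * (-len(encoded) % 4)
    return base64.urlsafe_b64decode(encoded + padding)
```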
#### 404
```
404
```
#### not_joined
```
503 {"error": "not_joined"}
```
When the node has not joined the network yet.
#### timeout
```
503 {"error": "timeout"}
```
### GET /tx/<id>/offset
Get the absolute end offset and size of a transaction.
**NOTE** The client may use this information to fetch all chunks of a transaction. Start at the end offset and fetch a chunk via `GET /chunk/<offset>`. Subtract its size from the remaining transaction size; while data remains, subtract the chunk size from the offset and fetch the next chunk.
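The walk described above can be sketched as a generator of the offsets to request. `chunk_size_at` is a hypothetical stand-in for a real `GET /chunk/<offset>` call that returns the size of the decoded chunk stored at that offset:

```python
from typing import Callable, Iterator

def chunk_offsets(end_offset: int, tx_size: int,
                  chunk_size_at: Callable[[int], int]) -> Iterator[int]:
    # Walk backwards from the transaction's absolute end offset,
    # yielding one offset per chunk until the whole size is covered.
    offset, remaining = end_offset, tx_size
    while remaining > 0:
        yield offset
        size = chunk_size_at(offset)
        remaining -= size
        offset -= size
```

For example, a 300-byte transaction ending at offset 1000 and stored in 100-byte chunks would be fetched at offsets 1000, 900, and 800.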
#### Responses:
#### success
```
200 {"offset": "...", "size": "..."}
```
#### invalid_address
```
400 {"error": "invalid_address"}
```
When the TX ID is not a valid Base64URL string.
#### 404
```
404
```
#### not_joined
```
503 {"error": "not_joined"}
```
When the node has not joined the network yet.
#### timeout
```
503 {"error": "timeout"}
```
### GET /chunk/<offset>
#### Responses:
#### success
```
200 {
"chunk": "<Base64URL encoded chunk>",
"data_path": "<Base64URL encoded proof>",
"tx_path": "<Base64URL encoded proof>"
}
```
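Because chunks are fetched by walking down from the transaction's end offset (see `GET /tx/<id>/offset`), the decoded pieces arrive last-chunk-first. A minimal reassembly sketch under that assumption:

```python
def reassemble(chunks_last_first: list) -> bytes:
    # Chunks collected from the end offset backwards are in reverse
    # order; flip them to restore the original byte order.
    return b"".join(reversed(chunks_last_first))
```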
#### 404
```
404
```
#### offset_out_of_bounds
```
400 {"error": "offset_out_of_bounds"}
```
When the offset is outside the `[0, 2^256]` range.
#### invalid_offset
```
400 {"error": "invalid_offset"}
```
When the offset is not an integer.
#### not_joined
```
503 {"error": "not_joined"}
```
When the node has not joined the network yet.
#### timeout
```
503 {"error": "timeout"}
```