# Protocol Stuff
## The Problem
To create a protocol that can handle (reliable?) data transfer over a very lossy, high-packet-loss network, where the dominant type of error is burst errors rather than bit errors.
Bit Error: a mismatch of a single bit (possibly in multiple places) in a data stream.
```
0000 0110 0001 0011 is sent
1000 0110 0000 0010 is received
There are 3 bit errors
```
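For reference, counting bit errors between two equal-length byte strings is just the Hamming distance of their XOR; a minimal sketch (the function name and sample values below are illustrative only):
```python
def count_bit_errors(sent: bytes, received: bytes) -> int:
    """Count differing bits (Hamming distance) between two equal-length byte strings."""
    assert len(sent) == len(received)
    return sum(bin(a ^ b).count("1") for a, b in zip(sent, received))

# Example from above: 0000 0110 0001 0011 sent vs 1000 0110 0000 0010 received
sent = bytes([0b00000110, 0b00010011])
recv = bytes([0b10000110, 0b00000010])
print(count_bit_errors(sent, recv))  # -> 3
```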
Burst Error: a mismatch or loss of more than one consecutive bit (possibly in multiple places) in a data stream.
```
0000 0110 0001 0011 is sent
1111 0XXX 0001 XXXX is received
There are 3 burst errors
(1) -> 1111: 4 bits are flipped
(2) -> XXX:  3 bits are lost
(3) -> XXXX: 4 bits are lost
```
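Burst errors can be located by grouping consecutive corrupted bits; a rough sketch, assuming we already have a per-bit error mask for the example above (1 = corrupted/lost, 0 = intact):
```python
from itertools import groupby

def find_bursts(error_mask):
    """Group consecutive corrupted bits; a run longer than 1 bit is a burst."""
    bursts = []
    pos = 0
    for corrupted, run in groupby(error_mask):
        length = len(list(run))
        if corrupted and length > 1:
            bursts.append((pos, length))  # (start bit index, burst length)
        pos += length
    return bursts

# Error mask for the example above
mask = [1,1,1,1, 0,1,1,1, 0,0,0,0, 1,1,1,1]
print(find_bursts(mask))  # -> [(0, 4), (5, 3), (12, 4)]
```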
Reliable: when you send a packet, the recipient will ACKnowledge it, or NAK (negative-acknowledge) it if there is an error.
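For illustration only, a bare-bones stop-and-wait sender over UDP showing the ACK/NAK idea; the timeout, retry count and on-wire ACK/NAK encoding here are placeholders, not part of the protocol design:
```python
import socket

ACK, NAK = b"ACK", b"NAK"

def send_reliable(sock: socket.socket, addr, payload: bytes, retries: int = 5) -> bool:
    """Stop-and-wait: resend until the receiver ACKs, give up after `retries` attempts."""
    sock.settimeout(1.0)  # a silent receiver is treated like a NAK
    for _ in range(retries):
        sock.sendto(payload, addr)
        try:
            reply, _ = sock.recvfrom(16)
        except socket.timeout:
            continue  # no reply -> resend
        if reply == ACK:
            return True
        # NAK (or garbage) -> resend
    return False
```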
## Ideas
## Sample Code 1 (using pyeclib)
```python
from pyeclib.ec_iface import ECDriver
import hashlib
ec_driver = ECDriver(k=20, m=10, ec_type="liberasurecode_rs_vand")
def encode(filebytes):
    # Split the input into k data fragments + m parity fragments
    fragments = ec_driver.encode(filebytes)
    print("frags: %d" % len(fragments))
    for i in range(len(fragments)):
        print("[%d] len=%d" % (i, len(fragments[i])))
    return fragments

def decode(recvfrags):
    # Reconstruct the original bytes from the received fragments
    return ec_driver.decode(recvfrags)

with open("/home/pi/team01/data/data1", "rb") as fp:
    whole_file_str = fp.read(25600)

print("before FEC: %s" % hashlib.md5(whole_file_str).hexdigest())
frags = encode(whole_file_str)
reconstructed = decode(frags)
print("after FEC: %s" % hashlib.md5(reconstructed).hexdigest())
```
This is a quick-and-dirty sample working on 25600 bytes, with 20 data shards (k) and 10 parity shards (m), using pyeclib (https://opendev.org/openstack/pyeclib).
It produces roughly 1360 bytes per shard, which fits nicely into a single packet. The total size is 1360 x (20 + 10) = 40800 bytes, i.e. about 15200 additional bytes per 25600 bytes of data, or roughly 59% overhead.
With the chosen k and m, the decoder can reconstruct the original data from any 20 of the 30 shards, i.e. it tolerates the loss of up to 10 shards (m), regardless of which shards are lost.
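To sanity-check that claim, the same driver can decode from any subset of at least k = 20 fragments; a quick follow-on to the sample above (reusing `frags`, `ec_driver` and `hashlib` from it):
```python
import random

# Simulate losing 10 of the 30 fragments -- the maximum this k=20, m=10 code tolerates
survivors = random.sample(frags, 20)
recovered = ec_driver.decode(survivors)
print("after dropping 10 frags: %s" % hashlib.md5(recovered).hexdigest())  # should match "before FEC"
```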