---
tags: datdot
---
```flow
e=>end: Data is indeed hosted
chain=>operation: CHAIN
io1=>inputoutput: Selects random (X=10) attestors
att=>operation: ATTESTORS
io2a=>inputoutput: Attestors join secret swarm
io2b=>inputoutput: Attestors make shared pseudo random number generator
io2c=>inputoutput: generate X random numbers
io2b1=>inputoutput: Each selected Attestor joins the feed swarm at a pseudo-random time between NOW and END of challenge, derived from the random numbers.
io2d=>inputoutput: Attestor observes a request from random swarm peers or makes one up if no peers are present
io2d1=>inputoutput: Attestor identifies the hoster based on the NoiseKey
io2e=>inputoutput: Attestor makes a commonly observed request to hoster
host=>operation: HOST
io0a=>inputoutput: After (N~100?) chunks, Hoster sends a new signed ext message*
io2f=>inputoutput: Attestor sends Hoster's signed ext message + hashes of received chunks with their merkle proofs and the REPORT* to the Chain
io2g=>inputoutput: When last attestor reports before deadline, the new attestation starts
io2h=>inputoutput: Chain rewards attestors and hoster on successful challenge
chain->io1->att->io2a->io2b->io2c->io2b1->io2d->io2d1->io2e->host->io0a->io2f
```
@TODO: write detailed reasons why the process is/was chosen as described in the above flow diagram
**some details:**
* **New signed ext message*:**
* Hoster signs indices and hashes of chunks they served
* **Report*:**
* throughput, latency etc.
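
For illustration, a minimal sketch of what these two payloads could look like; the field names, units, sha256 hashing and ed25519 signing here are assumptions, not a finalized format:
```js
// hypothetical shapes only; field names and units are assumptions
const crypto = require('crypto')

// stand-in hoster keypair (the real one would be the hoster's existing identity key)
const { privateKey } = crypto.generateKeyPairSync('ed25519')

// "new signed ext message": the hoster signs indices + hashes of the chunks it served
function makeSignedExtMessage (servedChunks) { // servedChunks: [{ index, data }]
  const payload = servedChunks.map(({ index, data }) => ({
    index,
    hash: crypto.createHash('sha256').update(data).digest('hex')
  }))
  const signature = crypto.sign(null, Buffer.from(JSON.stringify(payload)), privateKey)
  return { payload, signature: signature.toString('hex') }
}

// "REPORT": the attestor's own measurements of the hoster
const exampleReport = {
  throughput: 1250000, // bytes per second (assumed unit)
  latency: 87          // milliseconds (assumed unit)
}
```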
**implementation idea**
- x attestors selected + interval deadline (block nr: e.g. **12345**)
- attestors join the custom swarm
- attestors measure/benchmark each other (throughput, latency, ...) => attestor_metrics
- attestors collectively decide when to audit the hoster
- attestors randomly check the performance (join the hoster's swarm)
- they report results to the chain
- when last one reports, chain starts new attestation
- if !lastReport, chain starts new Attestation after the deadline
```jsx
const crypto = require('crypto')
const chain = require('chain')
const hash = require('hash')
const jitterTime = require('jitter-time')
const keypair = crypto.keypair()
const id = await chain.signup(keypair.publickey)
// @TODO: implement:
const coinflipswarm = require('_coinflipswarm') // uses _benchmark_attestor
const audit_hoster = require('_audit_hoster')
const audit_attestors = require('_audit_attestors')
chain.on('proof-of-performance', executeProofOfPerformance)
function executeProofOfPerformance (event) {
  // @TODO: implement this as a proper module
  const report = {}
  const { attestors, NOW, hoster } = event.data
  const END = NOW + 12345 // interval deadline (block number)
  coinflipswarm(attestors, result => {
    const { randomnumbers, attestor_metrics } = result
    report.attestor_metrics = attestor_metrics
    var X = attestors.length
    var Y = attestors.indexOf(id)
    var random = PRNG(randomnumbers)
    // spread the attestors' join times over the challenge interval, plus some jitter
    const jointime = NOW + (END - NOW) * (Y / X) + jitterTime(Y, 0.2)
    audit_hoster({ hoster, jointime }, data => {
      const { hoster_metrics, proofs } = data
      report.hoster_metrics = hoster_metrics
      report.proofs = proofs
      report.random = random
      // reveal the random number to the chain
      // SUBMIT AFTER deadline / "END"
      chain.submit('result', [event.id, report])
    })
  })
}

function PRNG (randomnumbers) { // for EXAMPLE
  // @TODO: implement this as a proper module
  var seed = ''
  for (var i = 0; i < randomnumbers.length; i++) {
    seed = hash(seed + randomnumbers[i])
  }
  return function next () {
    const number = parseInt(seed.slice(0, 13), 16) // derive a number from the current seed (assumes hash returns a hex string)
    seed = hash(seed + i++)
    return number
  }
}
```
### require('_audit_hoster') // + uses require('_benchmark_peer')
```js
// see flow diagram above
// and:
require('_benchmark_peer')()
```
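A possible shape for this module, following the flow diagram above; `_swarm` and its `join`/`peers`/`request` API are assumed helpers (the real hyperswarm/hypercore wiring is not shown), so treat this as a sketch only:
```js
// sketch only: '_swarm', `feed.observedRequests`, `peer.noiseKey`, `peer.request` are assumed interfaces
const benchmark_peer = require('_benchmark_peer')
const swarm = require('_swarm')

module.exports = function audit_hoster ({ hoster, jointime }, done) {
  setTimeout(async () => {
    const feed = await swarm.join(hoster.feedKey)                        // join the feed swarm at the agreed pseudo-random time
    const peer = feed.peers.find(p => p.noiseKey === hoster.noiseKey)    // identify the hoster by its NoiseKey
    const request = feed.observedRequests[0]                             // reuse a commonly observed request ...
      || { index: Math.floor(Math.random() * hoster.feedLength) }        // ... or make one up if no peers requested anything
    const hoster_metrics = await benchmark_peer(peer)                    // latency, throughput, ... (see _benchmark_peer below)
    const { chunks, extMessage } = await peer.request(request)           // chunks with merkle proofs + signed ext message
    done({ hoster_metrics, proofs: { extMessage, chunks } })
  }, jointime - Date.now())
}
```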
### require('_audit_attestors')
```js
// observe the chain to see whether the other attestors report around the correct time;
// if they don't, report proof to the chain after the deadline ends to get the reward
```
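A minimal sketch of that logic; the `chain` pseudo-API and the `event.attestor` field mirror the pseudo-code in the main example above and are assumptions:
```js
// sketch only: the chain pseudo-API and event shape are assumptions
const chain = require('chain')

module.exports = function audit_attestors ({ attestors, END }, done) {
  const reported = new Set()
  chain.on('result', event => {                        // watch the chain for the other attestors' reports
    if (attestors.includes(event.attestor)) reported.add(event.attestor)
  })
  setTimeout(() => {                                    // after the deadline, report whoever stayed silent
    const missing = attestors.filter(id => !reported.has(id))
    if (missing.length) chain.submit('missing-reports', missing) // proof to the chain => reward
    done(missing)
  }, END - Date.now())
}
```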
### require('_benchmark_peer')
```js
// @TODO: measure latency, throughput, ... of a peer and compile a report
// Purpose of the report:
// At some point in the future, reports should be combined with other reports to improve the quality of the hosting service,
// e.g. when selecting hosting peers, to be able to give the requested service level agreements
// and to make sure hosted data is available with enough bandwidth, the requested latency, etc.
```
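A rough sketch of what such a benchmark could do; `peer.request({ index })` resolving to a chunk Buffer is an assumed interface:
```js
// sketch only: `peer.request({ index })` resolving to a Buffer is an assumed interface
module.exports = async function benchmark_peer (peer, samples = 5) {
  const latencies = []
  let bytes = 0
  const started = Date.now()
  for (let i = 0; i < samples; i++) {
    const t0 = Date.now()
    const chunk = await peer.request({ index: i })    // ask the peer for some chunk
    latencies.push(Date.now() - t0)                   // round-trip time for this request
    bytes += chunk.length
  }
  return {
    latency: latencies.reduce((a, b) => a + b, 0) / latencies.length, // average ms per request
    throughput: bytes / ((Date.now() - started) / 1000)               // bytes per second
  }
}
```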
### require('_coinflipswarm') // + uses require('_benchmark_peer')
```js
// 1. use the opportunity to measure latency, throughput, etc. of the hoster
require('_benchmark_peer')()
// 2. ...
```
#### Short GIST (in CabalChat Style)
1. e.g. 20 peers join one "cabal chat room"
2. reveal who you are "on chain" (1st msg)
3. every peer makes a random number and says its **hash** to the chat (2nd msg)
4. after seeing another peer's 2nd message, the observer copies and says it
5. after repeating the 2nd message of all 19 other peers, wait until
6. you observe that everyone else has also repeated the 2nd messages of their 19 other peers (and only a single value per peer's 2nd message!), then:
7. check whether all 19 peers said everyone else's random hash and none conflict
8. A: if true: say your random number
9. B: else report the ones who failed to the chain
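
A sketch of steps 2-9 in code; `room.say` / `room.on('message', ...)` is an assumed cabal-chat-like interface, `chain` is the same pseudo-API as above, and the conflict handling is reduced to the bare minimum:
```js
// sketch only: `room` (cabal-chat-like) and `chain` are assumed interfaces
const crypto = require('crypto')
const sha = v => crypto.createHash('sha256').update(String(v)).digest('hex')

module.exports = function coinflip ({ room, chain, id, peers }, done) { // peers: ids of all expected peers
  const x = crypto.randomBytes(32).toString('hex')   // my random number
  const commits = new Map()                          // peer id -> committed hash (their 2nd msg)
  const echoes = new Map()                           // peer id -> set of peers whose commit they echoed
  let revealed = false

  room.say({ type: 'hello', id })                    // 1st msg: reveal who you are
  room.say({ type: 'commit', id, hash: sha(x) })     // 2nd msg: hash of my random number

  room.on('message', msg => {
    if (msg.type === 'commit') {
      if (commits.has(msg.id) && commits.get(msg.id) !== msg.hash) {
        return chain.submit('conflicting-commit', msg.id)    // two different 2nd messages => report to the chain
      }
      commits.set(msg.id, msg.hash)
      room.say({ type: 'echo', id, of: msg.id, hash: msg.hash }) // copy the 2nd message and say it
    }
    if (msg.type === 'echo') {
      if (!echoes.has(msg.id)) echoes.set(msg.id, new Set())
      echoes.get(msg.id).add(msg.of)
    }
    // once every peer has committed and echoed every other peer's commit, reveal
    const allCommitted = peers.every(p => commits.has(p))
    const allEchoed = peers.every(p => (echoes.get(p) || new Set()).size >= peers.length - 1)
    if (!revealed && allCommitted && allEchoed) {
      revealed = true
      room.say({ type: 'reveal', id, x })
      done(x)
    }
  })
}
```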
-----
**additional information:**
# Spontaneous Random Number "Consensus"
Given a known, online swarm of possibly Byzantine peers, come to consensus on a single shared random number. Partially inspired by Tendermint consensus.
## Exit Conditions
1) success
2) failure: offline participant(s)
3) failure: provably malicious participant(s)
## Process
(It is assumed that you already have a method to select peers and create topics and channels between them, messages in topics are authenticated/verifiable, and messages in channels are private)
* Join swarm including all peers.
* Generate random number, x
* Generate hash(x), y
* Post y to topic
* Repost y to topic whenever you see a *new* expected peer post sender.y
* Send (topic, x) directly to a peer the first time you see them post their sender.y
* if you see a peer post two different values of sender.y at any point, exit with condition (3)
* for every message sender.x *you receive* over a direct channel, verify that hash(sender.x) = sender.y for the declared topic, if this assumption does not hold, exit with condition (3)
* If you do not have values for sender.x for all expected peers before some defined timeout, exit with condition (2)
* else take all values of sender.x and... use magic(sender1.x, sender2.x, ..., sendern.x) to generate your proposed final random number magicx
extension for "confirmation":
* select a random peer, ping them hash(magicx+1). ("I know magicx, do you?")
* if you receive ping hash(magicx+1), pong with hash(magicx)+1. ("I also know magicx, see?")
* if you receive ping any other number z, send a fraud proof ("look, they said they know magicx, but they got it wrong!") to every other known peer, and exit with condition (3)
* if you receive a fraud proof, exit with condition (3)
* if you get to some timeout without failing, exit with condition (1)
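
A sketch of the confirmation extension; `sendTo`, `onDirectMessage` and `broadcast` are assumed direct-channel helpers, and the timeout value is arbitrary:
```js
// sketch only: sendTo / onDirectMessage / broadcast are assumed channel helpers
const crypto = require('crypto')
const sha = v => crypto.createHash('sha256').update(String(v)).digest('hex')

module.exports = function confirm ({ peers, magicx, sendTo, onDirectMessage, broadcast }, done, timeout = 30000) {
  let finished = false
  const finish = (err, value) => { if (!finished) { finished = true; done(err, value) } }
  const target = peers[Math.floor(Math.random() * peers.length)]       // select a random peer
  sendTo(target, { type: 'ping', value: sha(magicx + 1) })             // "I know magicx, do you?"
  onDirectMessage(({ from, type, value }) => {
    if (type === 'ping') {
      if (value === sha(magicx + 1)) {
        sendTo(from, { type: 'pong', value: sha(magicx) + 1 })         // "I also know magicx, see?"
      } else {
        broadcast({ type: 'fraud-proof', peer: from, value })          // they claimed to know magicx but got it wrong
        finish(new Error('exit condition (3): malicious participant'))
      }
    }
    if (type === 'fraud-proof') finish(new Error('exit condition (3): malicious participant'))
  })
  setTimeout(() => finish(null, magicx), timeout)                      // no failure until the timeout => exit condition (1)
}
```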
## candidates for magic()
magic() should have the property that, as long as at least one peer honestly supplies true randomness as x, the output is cryptographically random.
this can be done by one of:
* concatenating all inputs x and then calculating the hash of the resulting large value (this makes no assumptions about the actual randomness of the inputs, only about the hash algo used, but some ordering is required to make the randomness replicable)
* calculating the xor of all inputs x and then calculating the hash of the resulting smaller value (this makes no assumptions about the actual randomness of the inputs, and is probably much faster to calculate if the set of peers coming to consensus is large)
* calculating the xor of all inputs x and directly using the resulting value ([this assumes at least one input is high quality randomness](https://security.stackexchange.com/questions/195156/when-you-xor-a-random-number-with-non-random-number-does-that-give-you-a-new-ra) but is objectively the fastest/cheapest and doesn't need an additional hashing step); however, if all peers have the same flaw, the randomness will be compromised.
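
The three candidates as a minimal sketch in code; sha256 and sorting the inputs (to get a replicable ordering) are assumptions, and the inputs are expected to be equal-length Buffers:
```js
// sketch only: xs is an array of equal-length Buffers, one x per peer
const crypto = require('crypto')
const sha = buf => crypto.createHash('sha256').update(buf).digest()

function xorAll (xs) {
  const out = Buffer.alloc(xs[0].length)
  for (const x of xs) for (let i = 0; i < out.length; i++) out[i] ^= x[i]
  return out
}

// 1) hash of the concatenation (sorting gives every peer the same ordering)
const magicConcatHash = xs => sha(Buffer.concat([...xs].sort(Buffer.compare)))

// 2) hash of the xor of all inputs (smaller value to hash, no ordering needed)
const magicXorHash = xs => sha(xorAll(xs))

// 3) raw xor, used directly (cheapest; assumes at least one input is high quality randomness)
const magicXor = xs => xorAll(xs)
```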
## Handling failure:
It's probably reasonable in the scenario of exit condition (2) to simply retry with backoff (using new topics, so messages are not mixed!) after some period of time; after some number of retries, just fail completely.
In the scenario of exit condition (3), since the misbehaviour is objectively provable (two conflicting messages on the same topic, or an x on a channel that doesn't correspond to the y on the declared topic), just drop this peer and create a new topic without the peer present. If there are too few peers left to continue, just fail completely.