###### tags: `Meeting Summary`

# Meeting summary 26-27/11/2020 and 15/12/2020

![](https://i.imgur.com/cBFsmpJ.jpg)
*Whiteboard on the 26th*

## Topics discussed

1. Tradeoffs between uncoordinated and coordinated task selection strategies
2. Policies on how to handle unsolved tasks
3. Metrics to measure the coordinated/uncoordinated strategy tradeoffs
4. Interesting classes of tasks to consider, and considerations about solvability
5. (still to discuss) How to decide the value of the Task reward provided by the Supplier
6. (new topic) Who should provide the target for the optimization of useful tasks? The Supplier? The system?
7. (new topic) What weight should the proposed useful block have in the longest chain rule?

### Discussion on topic 1

The selection strategies are:
- Uncoordinated
- Coordinated (same task)
- Coordinated (multi-task)

We differentiate between:
- How to implement task selection strategies technically
- Heuristics for task selection strategies

#### On how to implement task selection strategies technically

Technically, we might use a VRF in Algorand style to drive the task selection. However, it differs from Algorand in the meaning of two input parameters of the algorithm, w and W. Additionally, *role* and $\tau$ are not needed. Using Algorand's VRF might therefore require modifications to the algorithm.

In the coordinated case we can force:
- Everybody to work on the same task
- Everybody to solve different tasks

Forcing everybody to work on the same task means that we are using a lot of computational power to solve different problem instances of the same problem template. So, depending on the technique used to guarantee **sensitivity**, the actual problems solved (even in the case of the same task) might be quite different from each other.

#### On task difficulty

:::info
**Observation**: The difficulty to solve a task is a function (combination) of the input data provided by the Supplier and the algorithm/approach used to solve the task.
Fixing the input data and using a more/less efficient algorithm makes the solution more/less quick to compute.

Parameters that affect Bitcoin's solution proposal:
- Hash/s of the hardware
- Implementation of the SHA256 function
- Block input size

Example of SHA256 duration as a function of input size, from a Texas Instruments [technical report](https://www.ti.com/lit/an/swra667/swra667.pdf?ts=1606403132194&ref_url=https%253A%252F%252Fwww.ti.com%252Fproduct%252FCC2642R):
![](https://i.imgur.com/LfTYyGJ.png)

In ML, for example, [single layer perceptrons are only capable of learning linearly separable patterns](https://medium.com/@lucaspereira0612/solving-xor-with-a-single-perceptron-34539f395182) and [sufficiently overspecified networks are easy to train](https://papers.nips.cc/paper/2014/file/3a0772443a0739141292a5429b952fe6-Paper.pdf).

Should we differentiate between difficulty due to input data and difficulty due to solution method?
:::

Issue: in the uncoordinated task selection, the Supplier has insight into the difficulty due to input data.

Is it really a problem if we do not satisfy creator-freeness for tasks? What is the impact on security if a solver could partially reduce the difficulty for the next block because of the computation executed on the current block?

### Discussion on topic 2

Tasks that are not solved can be managed in a few different ways. In any case, it makes sense to penalize old tasks in the selection because they might be practically unsolvable.

For coordinated multi-task selection:
- We assign the task to be solved only for a specific amount of rounds per solver.
- Unsolved tasks are reinserted in the "waiting for solution" tasks pool.
  We can either vary or keep constant the probability to select a task based on its age.
- Give a fixed amount of

For coordinated same-task selection:
- Reschedule the unsolved task at some point in the future
- Drop the unsolved task

For uncoordinated task selection:
- Up to the Solver's policy

### Discussion on topic 3

Possible metrics are:
- Amount of wasted computation
- Amount of resources used to propose a task solution
- Fairness of solvers
- Validation efficiency
- Delay to start solving a task (due to task metadata fetching)
- Starvation of tasks
- Time to propose a solution of a task
- Number of attempts to solve a useful task
- Latency (of consensus, for example)
- Throughput
- Scalability (number of nodes)

To discuss validation efficiency, we need to consider the metadata necessary for validation.

How to measure starvation of a task? Proposal: at $t_{0}$ we start measuring the time (number of blocks) it takes for $n$ tasks to be solved. The task starvation at the system level is the average time used to solve the $n$ tasks.

#### Amount of wasted computation

#### Validation efficiency

Validation needs the Supplier data. What is the bandwidth requirement for validation? There is room for optimization in coordinated same-task selection.

![](https://i.imgur.com/TlMggDD.jpg)
*Whiteboard on the 27th*

### Discussion on topic 4

Classes of tasks we want to consider:
- kOV
- ML tasks. We probably need to restrict to a subclass of ML tasks.

Properties to classify a task:
- Solvability: it is possible to tell in a short amount of time if a task is solvable at all. Solvability can be deterministic/probabilistic, and practical/theoretical.
- Switchability: there is no disadvantage for a solver in switching the problem instance. The importance of switchability is tied to the transaction pattern and the solution frequency.
- Transferring costs of problem inputs and outputs
- Memorylessness: the work executed to solve a problem cannot be reused. Related to the *Unsolved tasks reuse coefficient*.
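Returning to the starvation metric proposed under topic 3, a minimal Python sketch of the measurement could look as follows. The block-height clock and the `solved_blocks` input are illustrative assumptions, not part of any agreed design:

```python
def starvation(solved_blocks, t0, n):
    """System-level task starvation: the average number of blocks it takes,
    starting at block t0, for the first n tasks to be solved.

    solved_blocks: block heights at which tasks were solved (any order).
    """
    # Keep only tasks solved at or after the measurement start t0,
    # order them by solving time, and take the first n.
    waits = sorted(b - t0 for b in solved_blocks if b >= t0)[:n]
    if len(waits) < n:
        raise ValueError("fewer than n tasks solved after t0")
    return sum(waits) / n

# Example: measuring from block 100, first five solved tasks
print(starvation([102, 105, 110, 111, 120], t0=100, n=5))  # → 9.6
```

Varying $n$ would trade measurement latency against sensitivity to individual slow tasks.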
![](https://i.imgur.com/JAZI3zm.png)

### Discussion on topic 5

The approaches are:
- The Supplier is free to choose the amount of the Problem Reward
- The amount of the Problem Reward is fixed at the system level

---

kOV might be too hard to model analytically (the block solution proposal), so we would need a plan B: another class of tasks for the experiments. It is easier to find the parameters if the probability distribution is known, rather than to prove that some samples belong to a probability distribution.

:::danger
Tasks
- Determine if kOV is a memoryless puzzle.
- Determine the probability distribution of the proposed kOV algorithm.
:::

:::info
**Structure of the paper**
In the first part we describe the general design principles and the tradeoffs. In the second part we fix a specific class of tasks (the one kOV belongs to) and go deeper into the design targeted for that specific set of tasks.
Implementation of the
:::

In the paper, we need to motivate why our useful proof of work system is hybrid.

### 15/12/2020

![](https://i.imgur.com/j7WjNq1.jpg)
*Board on the 15th of December*

Topic and beginning of the discussion: justify why hybrid.

Definition of usefulness: the solution has value externally to the blockchain system.

- Task generation model
  - Internally generated, with no external supplier
  - Multiple independent suppliers
  - Single organization
  - More complex schemes
- Payment model
  - Internally provided payments, with no external payer
  - Externally provided payments
  - Single organization that pays the solvers

Q: **What do we regard as "controllable" complexity? What is fine-grained complexity?**
A: It is controllable if we can modify a parameter in the task instance so that we adjust the expected time to produce a valid solution.

Q: **At which point is the complexity so uncontrollable that we need to go hybrid?**
A:

:::warning
TASK: review how other systems control the complexity of their tasks (adjustability).
:::
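As a reference point for the "controllable complexity" question, a minimal sketch of the Bitcoin-style approach, where a single parameter (`difficulty_bits`) tunes the expected time to produce a valid solution. This is an illustrative toy, not part of the proposed system:

```python
import hashlib

def solve(task_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(task_data || nonce) has a value below
    a target determined by difficulty_bits.

    The expected number of attempts is 2 ** difficulty_bits, so increasing
    the parameter by one roughly doubles the expected time-to-solution:
    complexity is controllable in a fine-grained way.
    """
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(task_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# ~2**12 = 4096 expected attempts; each attempt costs one hash
nonce = solve(b"task-instance", difficulty_bits=12)
```

Useful tasks such as kOV or ML training lack such a clean tuning knob, which is one way to frame when a hybrid design becomes necessary.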