# NN training optimisations

## Intro

We want to make proofs about various optimisation challenges. Here we define a commit-reveal method so that a prover can convince a verifier that their solution to the optimisation challenge falls inside certain bounds. The key innovation here is that we don't require the prover to prove everything, only a subset. This reduces the required prover work by orders of magnitude.

## Example Setting

Neural network: there exists a neural network whose evaluations we can prove with a ZKP.

Testing set: there is a set of data that was used to train this network.

The training algorithm works by evaluating the network many times with different weights, then taking the step that reduces the error the most.

## Intuition

* With POW mining pools we don't check every proof of work a miner performs. Instead we ask them to give us a few proofs that were really hard to make. How hard a proof was to make is called the difficulty.

### Probability Distribution

Table 1 shows the probability of not finding a satisfying solution after n attempts on a challenge of difficulty 50%. This means that every time you try to find a solution you have a 50% chance of success. Fig 1 plots this: https://docs.google.com/spreadsheets/d/1ExPjMY08n-Mx8dyox3FL84LSR4W6XrBO2b-hQkCG7xM/edit?usp=sharing
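A minimal sketch of this calculation (plain Python, not part of the original post): with per-attempt success probability `p`, the probability of still having no solution after `n` independent attempts is `(1 - p) ** n`.

```python
# Probability of *not* finding a satisfying solution after n attempts,
# when each independent attempt succeeds with probability p.
def miss_probability(p: float, n: int) -> float:
    return (1.0 - p) ** n

# Reproduce the shape of Table 1 for a 50% difficulty challenge.
for n in [1, 2, 4, 8, 16, 32]:
    print(f"after {n:2d} attempts: P(no solution) = {miss_probability(0.5, n):.10f}")
```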
* Note that with many evaluations the probability of finding a solution approaches 100%.
* We can use this to ask our prover to give us half the number of solutions and still get roughly the same error bounds.
* If we use a difficulty of 99.9999% rather than 50%, we grow the work (the number of evaluations) the prover needs to do to find a satisfying solution. But this also reduces the number of evaluations we check in order to be convinced of the bound; see the worked numbers below.
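To make that trade-off concrete (assuming independent attempts that each succeed with probability $q$, i.e. a difficulty of $1 - q$):

$$
P(\text{no solution after } n \text{ attempts}) = (1 - q)^n,
\qquad
\mathbb{E}[\text{attempts per revealed solution}] = \frac{1}{q}.
$$

So at $q = 0.5$ about one evaluation in two qualifies as a solution, while at $q = 10^{-6}$ (a 99.9999% difficulty) the prover does roughly a million evaluations per revealed solution, and the verifier checks roughly one evaluation in a million.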
## Protocol

We want to prove, for a neural network, that our evaluation score is inside a certain bound.

The prover commits to

1. The neural network
2. The evaluation set
3. A Merkle tree of their claimed evaluations and the result of each
4. The total result of the neural network

They commit to these and generate a random challenge r.
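A hedged sketch of the commit step, assuming SHA-256 commitments and a Fiat-Shamir-style derivation of the random challenge `r` from the commitment; the post doesn't fix the hash function, the tree layout, or how `r` is generated, so all of those are illustrative choices.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Binary Merkle tree over hashed leaves; odd levels duplicate the last node."""
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def commit(weights: bytes, eval_set: list[bytes],
           evaluations: list[bytes], total_result: bytes):
    """Commit to items 1-4, then derive the challenge r from the commitment."""
    commitment = H(H(weights)
                   + merkle_root(eval_set)
                   + merkle_root(evaluations)
                   + H(total_result))
    r = int.from_bytes(H(commitment + b"challenge"), "big")  # Fiat-Shamir style
    return commitment, r
```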
### Difficulty

How can we define difficulty? We take the hash of the weights in the neural network modulo the random challenge; this is our difficulty. We then ask our prover to reveal x elements of their Merkle tree that have the correct difficulty, and also to provide a ZKP that those evaluations are correct.
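One possible reading of that rule as code: a leaf qualifies for reveal when the hash of the weights together with the leaf, reduced modulo `r`, falls below a target. The post only says "hash of the weights % a random challenge", so this exact combination, and the names `qualifies` and `revealable`, are assumptions of the sketch.

```python
import hashlib

def qualifies(leaf: bytes, weights: bytes, r: int, target: int) -> bool:
    # A leaf passes with probability roughly target / r,
    # so target controls how many elements the prover must reveal.
    h = int.from_bytes(hashlib.sha256(weights + leaf).digest(), "big")
    return (h % r) < target

def revealable(leaves: list[bytes], weights: bytes, r: int, target: int) -> list[int]:
    """Indices of Merkle leaves the prover must open (alongside a ZKP of each)."""
    return [i for i, leaf in enumerate(leaves) if qualifies(leaf, weights, r, target)]
```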
## Attacks

### Attack 1: Hidden elements

A prover includes extra elements in the set that are not correct. These elements effectively reduce the size of the total training set, which makes it harder for the prover to reach the claimed difficulty. This is the same as trying to find an even better evaluation of the neural network.

## Conclusion

* We can use this to avoid making proofs about all of our NN evaluations, and instead make proofs only about the ones requested in our challenge.
* We can leverage this to make validity proofs about neural network training at lower cost.
* This can be used to build an incentivised network for training neural networks that are public goods, like neural networks for drug discovery or disease recognition.