# NEAT and the toric code
## Results: NNs trained at specific error rates

*Legend:* Fraction of correctly solved error configurations (over 100 error instances) for different code distances and simulation parameters. For each error rate, a population of NNs was evolved with the NEAT algorithm, trained on error configurations generated at that error rate. The figure shows the fitness of the best individual, averaged over 5 independent NEAT runs (cross-validation).
**Commentary:** Maybe not so relevant, because the NNs should ultimately be trained on puzzles generated at several different error rates.
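For reference, here is a rough sketch of how one such run and its fitness evaluation could look with the neat-python package. The environment interaction is abstracted into a hypothetical `solves()` stub (the actual puzzle-solving logic is not shown), and the config-file name is made up; the neat-python calls themselves are the real API.

```python
import functools
import neat

def solves(net, puzzle):
    """Hypothetical stand-in: True iff the network removes the syndrome of
    this error configuration without creating a logical error."""
    raise NotImplementedError

def eval_genomes(genomes, config, puzzles):
    # Fitness = fraction of correctly solved error configurations.
    for _genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        genome.fitness = sum(solves(net, p) for p in puzzles) / len(puzzles)

def run_once(config_path, puzzles, n_generations=100):
    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         config_path)
    # neat-python calls the fitness function as f(genomes, config),
    # so the training puzzles are bound via functools.partial.
    return neat.Population(config).run(
        functools.partial(eval_genomes, puzzles=puzzles), n_generations)

# Averaging over 5 independent runs (the "cross-validation" above):
# fitnesses = [run_once("neat-toric.cfg", puzzles).fitness for _ in range(5)]
```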
## Results: NNs trained on all error rates
All NNs are trained on 100 puzzles generated at error rates drawn from [0.01, 0.05, 0.1, 0.15].
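A minimal sketch of how such a mixed-rate training set could be generated, assuming pure bit-flip noise on a periodic d x d lattice with syndromes measured on the plaquettes (the conventions and all names are illustrative, not taken from our code):

```python
import numpy as np

ERROR_RATES = [0.01, 0.05, 0.1, 0.15]

def toric_plaquette_checks(d):
    """Plaquette parity-check matrix H (d^2 x 2*d^2) of the distance-d toric
    code: H[p, k] = 1 iff qubit (edge) k lies on the boundary of plaquette p.
    Edge layout: d^2 horizontal edges first, then d^2 vertical edges."""
    H = np.zeros((d * d, 2 * d * d), dtype=np.uint8)
    h = lambda i, j: (i % d) * d + (j % d)            # horizontal edge (i, j)
    v = lambda i, j: d * d + (i % d) * d + (j % d)    # vertical edge (i, j)
    for i in range(d):
        for j in range(d):
            p = i * d + j
            H[p, h(i, j)] = H[p, h(i + 1, j)] = 1     # top / bottom edge
            H[p, v(i, j)] = H[p, v(i, j + 1)] = 1     # left / right edge
    return H

def make_training_set(d, n_puzzles=100, seed=0):
    """Each puzzle is an (error, syndrome) pair; the error rate is drawn
    uniformly from ERROR_RATES so that no network specializes on one level."""
    rng = np.random.default_rng(seed)
    H = toric_plaquette_checks(d)
    puzzles = []
    for _ in range(n_puzzles):
        p = rng.choice(ERROR_RATES)
        error = (rng.random(2 * d * d) < p).astype(np.uint8)
        puzzles.append((error, (H @ error) % 2))
    return puzzles
```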

*Legend:* Fraction of correctly solved error configurations among 1000 newly generated puzzles. The figure shows the evaluation of the best NNs obtained after 100 generations, for d=3 and for d=5.
> [Evert]: Can we add the MWPM curves in this figure?
> And can we make absolutely sure that the toric code environment is 'correct', i.e. that the check for logical errors is also correct in the $d=5$ case?
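On both points, a hedged sketch: an MWPM baseline built on the PyMatching package (an assumption on our side; any MWPM implementation would do) plus an explicit logical-error check that is valid for any $d$, reusing `toric_plaquette_checks()` from the sketch above.

```python
import numpy as np
import pymatching

def is_logical_error(net_error, d):
    """For a zero-syndrome chain net_error = (error + correction) % 2,
    the correction is harmless iff the chain is homologically trivial:
    even parity of horizontal edges crossing a fixed vertical cut AND
    even parity of vertical edges crossing a fixed horizontal cut."""
    hx = net_error[: d * d].reshape(d, d)     # horizontal-edge part
    vx = net_error[d * d :].reshape(d, d)     # vertical-edge part
    return bool(hx[:, 0].sum() % 2 or vx[0, :].sum() % 2)

def mwpm_success_rate(d, p, n_trials=1000, seed=1):
    H = toric_plaquette_checks(d)
    matching = pymatching.Matching(H)
    rng = np.random.default_rng(seed)
    n_ok = 0
    for _ in range(n_trials):
        error = (rng.random(2 * d * d) < p).astype(np.uint8)
        correction = matching.decode((H @ error) % 2)
        net = (error + correction) % 2
        assert not ((H @ net) % 2).any()      # MWPM always clears the syndrome
        n_ok += not is_logical_error(net, d)
    return n_ok / n_trials

# MWPM reference curves for the figure:
for d in (3, 5):
    print(d, [mwpm_success_rate(d, p) for p in (0.01, 0.05, 0.1, 0.15)])
```

The check relies on the fact that the crossing parity of a syndrome-free chain with a fixed cut is a homology invariant on the torus, so the same test works for $d=3$ and $d=5$ alike.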
**Commentary:** The fitness is clearly worse than in the previous case: it is harder for a less specialized NN to perform equally well on puzzles of varying difficulty.
The absence of a crossing between the d=3 and d=5 curves could come from poor training due to suboptimal hyperparameters (a grid search would help; see the sketch below).
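A minimal grid-search sketch with neat-python: the parameter names are genuine neat-python options, but which ones matter here (and their ranges) is a guess, and `fitness_fn` would be e.g. a partial of `eval_genomes()` from the first sketch with the training puzzles bound.

```python
import itertools
import neat

def grid_search(config_path, fitness_fn, n_generations=100):
    """fitness_fn(genomes, config) must assign genome.fitness in place."""
    best_params, best_fitness = None, -1.0
    for add_prob, w_rate in itertools.product([0.05, 0.2, 0.5], [0.4, 0.8]):
        config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                             neat.DefaultSpeciesSet, neat.DefaultStagnation,
                             config_path)
        config.genome_config.conn_add_prob = add_prob    # structural mutations
        config.genome_config.node_add_prob = add_prob
        config.genome_config.weight_mutate_rate = w_rate
        winner = neat.Population(config).run(fitness_fn, n_generations)
        if winner.fitness > best_fitness:
            best_params, best_fitness = (add_prob, w_rate), winner.fitness
    return best_params, best_fitness
```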

*Legend:* Fitness of the population during the d=5 NEAT run shown above.
### Possible improvements
- 100 generations do not seem to be enough; with a population of 100 NNs on 4 cores, one generation takes about 20 seconds for d=5, so a full run takes roughly 30 minutes for d=5 and 5 minutes for d=3
- I launched a d=5 run with 200 generations and 400 training puzzles per NN, to see whether this improves the results
- Maybe optimize the input by removing the star-operator information (if only bit-flip errors are simulated, the star syndromes are always trivial and carry no information)