# Review of state-of-the-art
MWPM is biased towards the lowest-energy (minimum-weight) errors, whereas NN decoders are biased towards the most probable errors; a toy example follows below.
NNs are a versatile approach that can accommodate different error models (X/Y correlations, depolarization, bit flips).
In the near future, only small code distances will be experimentally viable, so heuristic approaches are welcome.
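A toy illustration of the gap between the two targets (my own example, not from the papers below): under depolarizing noise each qubit suffers $X$, $Y$, or $Z$ with probability $p/3$. MWPM, run independently on the star and plaquette syndromes, assigns a single $Y$ error on qubit $i$ the same total weight as an $X$ on qubit $i$ plus a $Z$ on another qubit $j$, even though

$$\Pr[Y_i] = \frac{p}{3} \;\gg\; \frac{p^2}{9} = \Pr[X_i Z_j],$$

so the minimum-weight correction is not always the most probable one; a decoder trained directly on error samples can learn such correlations.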
## Deep Neural Network Probabilistic Decoder for Stabilizer Codes, Stefan Krastanov & Liang Jiang
- NN: a deep neural network (18 hidden layers). They show that performance increases monotonically with depth, up to a point of diminishing returns (15 hidden layers for distance=5)
- input data: the syndrome, i.e. the $L^2$ plaquette and $L^2$ star operator measurements
- output data: the $2L^2$ physical qubits, with 2 neurons each corresponding to the eigenvalues of the $Z$ and $X$ operators
- dataset size: 1 billion (syndrome, error) pairs
- one network trained per error rate (although they find some robustness when applied to other error rates)
- does not use the symmetries of the toric code
- maximum distance: 9
- results: 16.4% threshold for the depolarizing error model (comparable to the best known, from [Duclos-Cianci & Poulin, Fast Decoders for Topological Quantum Codes])

- sampling difficulties: the decoder can suggest correcting chains that do not reproduce the measured syndrome, so candidates must be resampled

> The give-up threshold is the number of resamplings allowed to obtain a correcting chain consistent with the measured syndrome
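A minimal PyTorch sketch of this setup (my reconstruction, not the authors' code; the hidden width, activation, and the `syndrome_of` check are assumptions):

```python
import torch
import torch.nn as nn

L = 5  # lattice size; the paper trains up to distance 9

# Deep feedforward decoder as described above: 2*L^2 syndrome bits in
# (plaquette + star outcomes), 4*L^2 sigmoid outputs, two per physical
# qubit (estimated probabilities of a Z flip and of an X flip).
def make_decoder(L: int, hidden_layers: int = 18, width: int = 128) -> nn.Sequential:
    layers = [nn.Linear(2 * L * L, width), nn.ReLU()]
    for _ in range(hidden_layers - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers += [nn.Linear(width, 4 * L * L), nn.Sigmoid()]
    return nn.Sequential(*layers)

# Sampling with a give-up threshold: draw candidate corrections from the
# per-qubit flip probabilities until one reproduces the measured syndrome.
# `syndrome_of` is a hypothetical stand-in for the toric-code parity checks.
def decode(decoder, syndrome, syndrome_of, give_up: int = 100):
    probs = decoder(syndrome)
    for _ in range(give_up):
        candidate = torch.bernoulli(probs)
        if torch.equal(syndrome_of(candidate), syndrome):
            return candidate
    return None  # reached the give-up threshold without a consistent chain
```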
## Decoding small surface codes with feedforward neural networks [Varsamopoulos, Criger, Bertels]
- shallow feedforward neural networks

- $10^6$ samples for large codes
- two error models: QEC, and fault-tolerant (where the probability of a qubit error and of a measurement error is the same on each qubit)
- up to distance=7
- surface code
- always chooses the smallest number of corrections (like MWPM)
## Comparing neural network based decoders for the surface code [Varsamopoulos, Bertels, Almudever]
- surface code
- depolarizing error and circuit noise models
- distinguishes two categories of decoders: (i) a low-level one that searches for exact corrections at the physical level (which physical qubits to flip); (ii) a high-level one that searches for corrections that restore the logical state (returns which logical error happened); see the sketch after this list
- the dataset is obtained by sampling at a single error rate only, which is realistic for experiments
- a single decoder is used for all error rates
- compares RNN and FFNN architectures
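A shape-level sketch of the two categories (my illustration; I assume a distance-5 unrotated surface code with 41 data qubits and 40 stabilizers, and the layer sizes are arbitrary):

```python
import torch.nn as nn

n_syndrome, n_qubits = 40, 41  # assumed d=5 unrotated surface code

# (i) Low-level decoder: one output per physical qubit and Pauli type,
# i.e. it answers "which physical qubits to flip".
low_level = nn.Sequential(
    nn.Linear(n_syndrome, 128), nn.ReLU(),
    nn.Linear(128, 2 * n_qubits), nn.Sigmoid(),
)

# (ii) High-level decoder: a classifier over the logical error class
# {I, X, Z, Y}, i.e. it answers "which logical error happened" so that a
# simple baseline correction can be adjusted accordingly.
high_level = nn.Sequential(
    nn.Linear(n_syndrome, 128), nn.ReLU(),
    nn.Linear(128, 4),  # logits over the four logical classes
)
```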

## Decoding surface code with a distributed neural network based decoder [Varsamopoulos, Bertels, Almudever]
- see this paper for a list of non-neural decoding algorithms: Markov-chain Monte Carlo, maximum-likelihood decoder, MWPM, renormalization group (RG), cellular automaton
## Fast Decoders for Topological Quantum Codes [Duclos-Cianci, Poulin]
- state-of-the-art for the 2D toric code (a renormalization-group decoder)
- Bit-flip error threshold: 7.9%
- Depolarizing threshold: 15.5%
- Simulated up to distance 1024
## Neural Network Decoders for Large-Distance 2D Toric Codes [Xiaotong Ni]
- toric code
- CNN, to exploit the translation invariance of the torus (see the sketch below)
- able to match MWPM performance
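A sketch of how the translation invariance can be exploited (my illustration; channel counts and depth are assumptions): convolutions with circular padding respect the periodic boundary of the torus, so the same filters apply at every lattice site.

```python
import torch
import torch.nn as nn

L = 16  # lattice size

# The two L x L syndrome sublattices (plaquettes and stars) enter as
# channels; circular padding implements the periodic boundary of the torus.
cnn_decoder = nn.Sequential(
    nn.Conv2d(2, 32, kernel_size=3, padding=1, padding_mode='circular'),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1, padding_mode='circular'),
    nn.ReLU(),
    nn.Conv2d(32, 2, kernel_size=1),  # one logit per edge qubit (two per site)
)

syndrome = torch.randint(0, 2, (1, 2, L, L)).float()
per_qubit_logits = cnn_decoder(syndrome)  # shape (1, 2, L, L)
```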
## Machine-learning-assisted correction of correlated qubit errors in a topological code [P. Baireuther, T. E. O’Brien, B. Tarasinski, and C. W. J. Beenakker]
I don't understand the paper
## Quantum error correction for the toric code using deep reinforcement learning [Philip Andreasson, Joel Johansson, Simon Liljestrand, and Mats Granath]
- deep CNN with 573028 parameters for $d=5$, 1228388 for $d=7$
- Q-learning
- toric code up to distance $d=7$
- Results similar to MWPM for bit-flip errors
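A minimal sketch of the Q-learning setup (the action set, one X flip per qubit, and the network shape are my assumptions, not the paper's exact agent):

```python
import torch
import torch.nn as nn

d = 5  # code distance

# Q-network: reads the d x d plaquette syndrome (bit-flip noise) and outputs
# one Q-value per candidate action, here a Pauli-X flip on each of the
# 2*d^2 edge qubits.
q_net = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1, padding_mode='circular'),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * d * d, 2 * d * d),
)

syndrome = torch.randint(0, 2, (1, 1, d, d)).float()
q_values = q_net(syndrome)       # shape (1, 2*d^2)
action = q_values.argmax(dim=1)  # greedy policy: flip the best-scoring qubit
```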
