# Why Quantum Error Correction Matters
Quantum error correction (QEC) is crucial for scaling quantum computers to practically useful sizes. Although QEC is over 30 years old as a field of study, it is only now beginning to see serious experimental application. However, the error rates and limited connectivity of today's real-world devices remain a challenge for these error-correction schemes. Currently we are here:
![](https://i.imgur.com/bCVfWUe.png)
# ***Definition of Physical and Logical Qubits***
Quantum computing requires qubits to encode information. Most quantum algorithms developed over the last several decades assume that these qubits are perfect: they can be prepared in whatever state we choose and controlled with absolute accuracy. *Logical qubits* are qubits that satisfy these assumptions.
Over the last few decades, there have also been significant advances in finding physical systems that act as qubits, with higher-quality qubits being demonstrated all the time. However, their faults can never be completely removed, so these qubits will always be far too imperfect to function as logical qubits directly. We call them *physical qubits* instead.
In the current era of quantum computing, we attempt to use physical qubits despite their faults by building customized algorithms and applying error-mitigation techniques. For the future era of fault tolerance, however, we must develop techniques to build logical qubits from physical qubits. This will be accomplished through quantum error correction, in which each logical qubit is encoded in a large number of physical qubits.
# **QEC Surface Code**
To succeed, large computations, whether classical or quantum, require each of their operations to have a low logical error rate. If the wrong results of even a few operations are passed forward, the entire computation produces random outcomes. This is an instance of computer science's well-known "*Garbage In, Garbage Out*" principle. To avoid this, the states of the qubits we wish to compute with must be encoded in quantum error-correcting codes. These codes enable the detection of faults and the reversal of their effects, preventing errors from propagating.
There are many classes of quantum error-correcting codes, and no single answer to which one is best. So what codes are there, and how do we compare them?
Let's look at the key parameters:
*A code in general encodes k logical qubits into n physical qubits.*
Furthermore, the code has a distance d, which means it can correct up to ⌊(d − 1)/2⌋ errors, just under d/2. So typically we would like a high distance d and a high k relative to n.
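The [[n, k, d]] bookkeeping above can be sketched in a few lines of code. The example code parameters below are standard, well-known ones (the five-qubit code, the Steane code, and the rotated surface code with d² data qubits), listed for illustration only:

```python
# Correctable errors for an [[n, k, d]] code: t = floor((d - 1) / 2).

def correctable_errors(d: int) -> int:
    """Number of arbitrary single-qubit errors a distance-d code can correct."""
    return (d - 1) // 2

codes = {
    "[[5, 1, 3]] five-qubit code": (5, 1, 3),
    "[[7, 1, 3]] Steane code": (7, 1, 3),
    "[[49, 1, 7]] distance-7 surface code": (49, 1, 7),
}

for name, (n, k, d) in codes.items():
    t = correctable_errors(d)
    print(f"{name}: {k} logical qubit(s) in {n} physical, corrects up to {t} error(s)")
```

Note that all three distance-3 codes correct exactly one arbitrary error, while distance 7 buys three.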
# ***Error Types of Qubits***
First of all, a qubit can undergo two types of elementary errors: bit-flip errors and phase-flip errors. All other errors on a qubit are linear combinations and/or products of these.
Quantum error-correcting codes are designed so that only errors on small enough subsets of qubits can be corrected.
For example, with a distance-3 code on 7 qubits such as the Steane code, an error on any single qubit can be corrected, but not errors on arbitrary subsets of 2 or more qubits.
Error correction is therefore effective when errors on single qubits are much more likely than errors on pairs of qubits.
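The same single-error-versus-double-error behaviour can be seen in the simplest possible example: a classical sketch of the 3-qubit bit-flip repetition code, decoded by majority vote. (This toy model handles only bit flips; a real quantum code must also handle phase flips.)

```python
# Minimal classical sketch of the 3-qubit bit-flip repetition code.
from collections import Counter

def encode(bit: int) -> list[int]:
    # The logical bit is copied across three (qu)bits.
    return [bit, bit, bit]

def apply_bit_flips(codeword: list[int], flipped: set[int]) -> list[int]:
    # Flip the bits at the given positions.
    return [b ^ 1 if i in flipped else b for i, b in enumerate(codeword)]

def decode(codeword: list[int]) -> int:
    # Majority vote recovers the logical value.
    return Counter(codeword).most_common(1)[0][0]

logical = 1
# One bit flip: the majority vote still recovers the logical value.
assert decode(apply_bit_flips(encode(logical), {0})) == logical
# Two bit flips: the majority is now wrong, and the logical bit fails.
assert decode(apply_bit_flips(encode(logical), {0, 2})) != logical
print("single error corrected, double error not")
```

If single flips occur with probability p, a double flip occurs with probability of order p², which is why error correction pays off when p is small.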
![](https://i.imgur.com/2wE6jfr.png)
# Surface Code Examples
For example, by representing a single logical qubit with, say, 49 physical qubits, one can correct errors on larger subsets of qubits.
This means that the failure rate of the logical qubit, which is determined by the errors that do not get corrected, can become very small.
With 10,000 qubits it may be possible to reach a failure rate of 10⁻¹⁵, particularly with a code called the surface code.
An attractive feature of the surface code is that its qubits can be physically placed on a 2D planar chip and only local connections are needed for error correction and logical gates.
Shown below is a distance-7 surface code with 49 qubits; there is a qubit on each lattice site:
![](https://i.imgur.com/KsJNvkz.png)
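The planar layout is simple to sketch: in the rotated variant of the surface code (assumed here), a distance-d code places d × d data qubits on a square grid, so error correction only ever needs nearest-neighbour interactions:

```python
# Sketch of the data-qubit layout of a distance-d rotated surface code:
# d * d data qubits on a square grid, one per lattice site.

def data_qubit_grid(d: int) -> list[tuple[int, int]]:
    return [(row, col) for row in range(d) for col in range(d)]

grid = data_qubit_grid(7)
print(len(grid))  # 49 data qubits for distance 7
```

(The ancilla qubits used for the actual error-checking measurements sit between these sites and are omitted from this sketch.)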
# ***Noise Threshold / Fault Tolerance***
The logical failure probability of the logical qubit is determined by how well this error correction is performed. We can imagine using larger and larger surface codes, with distance 3 and up. If the error rate on each gate is below some critical value, the error-correction cycles improve with larger d.
In that regime, the logical failure rate decreases exponentially in d. However, if the error rate on each gate is above that critical value, error correction instead becomes worse with larger d.
The whole error-correcting code then makes errors more likely rather than less likely.
The critical error rate per gate is called the noise threshold. This threshold depends on how the error information is processed and on what types of errors occur. For the surface code, it lies somewhere between 0.5% and 1%.
Reaching gate error rates this low (below roughly 0.5%) or lower is thus an important target for qubit hardware.
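The threshold behaviour can be sketched with a common back-of-the-envelope heuristic for the surface code, p_logical ≈ A · (p / p_th)^((d+1)/2). The prefactor A = 0.1 and threshold p_th = 1% below are illustrative assumptions, not measured values:

```python
# Toy model of surface-code threshold behaviour (heuristic, not exact):
# p_logical ~ A * (p / p_th) ** ((d + 1) / 2), capped at 1.

A, P_TH = 0.1, 0.01  # assumed prefactor and threshold

def logical_failure_rate(p: float, d: int) -> float:
    return min(1.0, A * (p / P_TH) ** ((d + 1) // 2))

for p in (0.001, 0.02):  # one gate error rate below threshold, one above
    rates = [logical_failure_rate(p, d) for d in (3, 5, 7, 9)]
    trend = "improves" if rates[-1] < rates[0] else "gets worse"
    print(f"p = {p}: {trend} with distance -> {[f'{r:.1e}' for r in rates]}")
```

Below threshold, each increase in d suppresses the logical failure rate by another factor of p/p_th; above threshold, growing the code only makes things worse.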
We are currently on a long and steep road towards fault-tolerant quantum computing. And on this road there are many steps and milestones.
The future of quantum computing relies on how well we do QEC, and I'm sure we will find many brilliant ways to do it better. *Stay Curious and Think Quantum!*
I would love to hear your thoughts and am happy to answer your questions. Feel free
to contact me at hello@qunicorn.co.uk
*Resources*
Ted Yoder *(IBM Quantum Research)*
Prof. Barbara Terhal (*QuTech Academy / TU Delft*)
Qiskit Documentation (*IBM Quantum*)