Quantum computers, which operate based on the fundamental laws of quantum mechanics first observed at the atomic scale, are an emerging technology believed to be capable of computational tasks out of the reach of classical digital computers. Potential applications of quantum computers range from physical simulation – like calculating electronic energies of complex molecules – to cryptography – such as cracking RSA encryption [1]. Despite their revolutionary potential, quantum computers are extremely sensitive to noise introduced through interactions with their environment. This noise leads to computational errors, and today’s quantum computers are too error-prone to solve problems of practical utility where they could outperform their classical digital counterparts. We anticipate that quantum error correction will be needed to protect quantum computers from noise.
Quantum error correction is a powerful tool for combating the effects of noise. As with error correction in classical systems, quantum error correction can exponentially suppress the rate of errors by encoding information redundantly. Redundancy protects against noise, but it comes at a price: an increase in the number of physical quantum bits (qubits) used for computation, and an increase in the complexity and duration of computations. The overhead associated with error correction can be significant (at the scale of 1000x increases in qubit counts) when implemented using the error-prone hardware of a quantum computer. This has led to growing interest in so-called “hardware-efficient” strategies for quantum error correction [2].
In this post, we dive into the results of one of our latest experiments at the AWS Center for Quantum Computing, published today in Physical Review X [3]. We introduce a type of qubit developed at AWS that converts the majority of errors into a class of errors called “erasure errors”. Erasure error detection and correction, under the right circumstances, can lead to significant reductions in error-correction overhead. Our work demonstrates initial steps towards implementing this strategy using existing quantum hardware based on superconducting quantum circuits, and indicates a potential accelerated path forward for building quantum computers of practical utility.
The types of errors that affect qubits
When we talk about protecting quantum computers from errors, what types of errors are we talking about? Quantum computers are built from qubits, which can be in one of two quantum states (often labeled |0⟩ or |1⟩), or any superposition of these states. Just like a classical bit can accidentally flip from 0 to 1 or from 1 to 0, a quantum bit can also experience a so-called “bit-flip” error where |0⟩ flips to |1⟩ or vice versa. But unlike classical bits, quantum bits can also suffer from “phase-flip” errors, where a superposition |0⟩ + |1⟩ flips to |0⟩ – |1⟩. In fact, the difficulty of correcting both bit-flip and phase-flip errors is one reason why error correction is much harder for quantum computers than for classical computers.
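For readers who want to see these errors concretely, here is a minimal numpy sketch (purely illustrative, not part of the experiment) of the two error types as the Pauli X and Z operators acting on single-qubit states:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])        # |0>
ket1 = np.array([0.0, 1.0])        # |1>
plus = (ket0 + ket1) / np.sqrt(2)  # the superposition |0> + |1>, normalized

X = np.array([[0, 1], [1, 0]])     # bit-flip (Pauli X)
Z = np.array([[1, 0], [0, -1]])    # phase-flip (Pauli Z)

print(X @ ket0)  # bit-flip:   |0>       -> |1>
print(Z @ plus)  # phase-flip: |0> + |1> -> |0> - |1>
```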
Bit-flip and phase-flip errors share a common trait: they silently corrupt the state of the qubit without the operator knowing. But recently, another class of errors has drawn significant attention, known as “erasure errors.” These errors also corrupt the state of the qubit, but unlike bit-flip and phase-flip errors, erasure errors raise a flag to indicate that the qubit was corrupted.
If we can build a quantum computer out of qubits that only have erasure errors, it becomes much easier to correct errors as they occur since we actually know which qubits were corrupted. This extra information allows us to reduce the redundancy and overhead that would be needed to correct an equal number of silent bit-flip and phase-flip errors. This idea has recently been gaining traction as a compelling strategy to reduce error-correction overhead in several platforms for experimental quantum computing, including in neutral atoms [4-6], superconducting circuits [7-8], and trapped ions [9].
Designing an “erasure qubit” with superconducting transmons
An “erasure qubit” is a qubit which is designed to be primarily limited by erasure errors, with only a minimal contribution of bit-flip or phase-flip errors. To build such a qubit, we need to encode our qubit in a protected way such that the physical processes that drive errors in our hardware can only cause erasure errors.
Our approach at the AWS Center for Quantum Computing was to build erasure qubits out of standard qubit components, called transmons [10]. Transmons are superconducting circuit elements whose discrete quantum states can be controlled and used for computation. It is common to rely on the lowest two energy states of the transmon, |0⟩ and |1⟩, to encode a single qubit. These states can be thought of as the transmon containing either 0 photons or 1 photon, respectively.
One of the main sources of errors in transmons is photon loss, in which the photon leaks out of the circuit and causes relaxation of the transmon from |1⟩ to |0⟩. This is one example of a silent bit-flip error that scrambles the information in a single transmon. To avoid such silent errors, we use an alternate encoding proposed by our AWS team [7] that allows us to flag relaxation events when they occur.
Specifically, we use a “dual-rail” encoding in two transmons, where the qubit is defined by the two states in which just the left transmon has a photon (|10⟩), or just the right transmon has a photon (|01⟩). In such an encoding, photon loss does not cause silent errors between |01⟩ and |10⟩, but instead causes a transition into a third state in which there are no photons left in the system (|00⟩). As long as we can check the system to see if it still has one photon, we can find out if an error occurred and “flag” an erasure error if no photons are detected.
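To make the structure of this encoding concrete, here is a minimal numpy sketch (a toy model with each transmon truncated to two levels, not a simulation of the device) showing that photon loss maps both codewords to the flagged state |00⟩ rather than to the other codeword:

```python
import numpy as np

a = np.array([[0.0, 1.0],
              [0.0, 0.0]])  # photon loss (annihilation) on a single transmon
I = np.eye(2)

def ket(b1, b2):
    """Two-transmon basis state |b1 b2>."""
    v = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
    return np.kron(v[b1], v[b2])

loss_left = np.kron(a, I)   # photon leaks out of the left transmon
loss_right = np.kron(I, a)  # photon leaks out of the right transmon

# Photon loss takes both codewords to |00>, never to the other codeword:
print(loss_right @ ket(0, 1))  # |01> -> |00>
print(loss_left @ ket(1, 0))   # |10> -> |00>
```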
We’re almost done with our qubit construction – but unfortunately relaxation is not the only error mechanism in transmons that requires attention. A second important source of errors is so-called “dephasing”, or fluctuations in the transmon energy, which would cause the states |01⟩ and |10⟩ to experience silent phase-flip errors.
The solution to this problem requires a quick technical deep-dive: rather than letting the transmons operate independently, we couple them together. The single photon is now shared by the two transmons, either symmetrically or anti-symmetrically, which defines our “logical” qubit states: |0L⟩=|01⟩−|10⟩ and |1L⟩=|01⟩+|10⟩. Since these states both contain half a photon on average in each transmon, it turns out that their energy gap is largely insensitive to the underlying transmon energy fluctuations. While transmon dephasing can still cause both bit-flips and phase-flips of the dual-rail qubit, these rates can be suppressed by multiple orders of magnitude compared to the error rates on the underlying transmons. This enables the dual-rail qubit to be highly coherent, even if the underlying transmon qubit building blocks are themselves very noisy.
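The origin of this insensitivity can be illustrated with a toy model. In the single-photon subspace spanned by |01⟩ and |10⟩, the coupled pair is described by a two-level Hamiltonian H = (δ/2)σz + gσx, where g is the coupling and δ a fluctuation in the transmon energies; the gap between the symmetric and antisymmetric states is 2√(g² + δ²/4), which depends on δ only at second order. A short numpy sketch with illustrative (not device) parameters:

```python
import numpy as np

g = 2 * np.pi * 100e6  # transmon-transmon coupling (rad/s); illustrative value

def gap(delta):
    """Energy gap between the symmetric and antisymmetric single-photon states."""
    H = np.array([[delta / 2, g],
                  [g, -delta / 2]])
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]  # equals 2 * sqrt(g**2 + delta**2 / 4)

# A transmon energy fluctuation delta shifts the gap only at second order:
for delta in [0.0, 0.01 * g, 0.1 * g]:
    print(delta / g, (gap(delta) - gap(0.0)) / gap(0.0))
# relative gap shift ~ delta**2 / (8 g**2), hence strongly suppressed dephasing
```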
At this stage, we have the core ingredients of an erasure qubit: a pair of states |0L⟩,|1L⟩ which should have very few bit-flip and phase-flip errors, and which should primarily be affected just by erasure errors (leakage to the state |00⟩).
The next step is to test it experimentally and answer two questions:
- How rare are the residual bit-flip and phase-flip errors relative to the erasure errors?
- Can we detect the leakage to |00⟩ in order to truly flag photon loss as erasure errors?
Erasure qubit performance
To test the use of the dual-rail system as an erasure qubit, our team at the AWS Center for Quantum Computing designed and fabricated a device composed of three transmons – two of which encode the dual-rail qubit, while the third is used as an ancilla for detecting and flagging erasure errors. This device was cooled down to 10 millikelvin in a dilution fridge.
Error rates on the dual-rail qubit
We examine the rates of the different types of errors by preparing the system in various states and tracking its evolution over time. In particular, our goal is to determine the fraction of total errors that are erasure errors versus silent bit-flip and phase-flip errors. The erasure error rate is measured by initializing the system in a state such as |0L⟩ or |1L⟩ and measuring the probability of finding the system in |00⟩ after some time. We find that this decay occurs on a typical timescale of around 30 µs, consistent with independent measurements of the rate of photons leaking out of these transmons.
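To sketch how such a timescale can be extracted (using synthetic data in place of the real measurement record, and assuming for simplicity that leakage to |00⟩ is the only decay channel), one can fit the measured |00⟩ population to an exponential saturation curve:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in data: probability of finding the pair in |00> vs. wait
# time, generated with a 30 us timescale plus a little measurement noise.
t = np.linspace(0, 100e-6, 20)  # wait times (s)
rng = np.random.default_rng(0)
p00 = 1 - np.exp(-t / 30e-6) + rng.normal(0, 0.01, t.size)

def model(t, T):
    # Assumes leakage to |00> dominates, so the population saturates at 1.
    return 1 - np.exp(-t / T)

(T_fit,), _ = curve_fit(model, t, p00, p0=[20e-6])
print(f"erasure timescale ~ {T_fit * 1e6:.0f} us")  # ~30 us
```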
The bit-flip and phase-flip error rates are extracted by focusing on the instances where a photon was not lost. Bit-flip errors are measured by preparing the system in |1L⟩ and measuring the probability of finding it in |0L⟩ at a later time (or vice versa). Similarly, to measure phase-flip errors we prepare the dual-rail qubit in a superposition |0L⟩+|1L⟩ and measure the probability of finding it in the wrong superposition (|0L⟩−|1L⟩) at a later time. These two types of measurements reveal that bit-flip and phase-flip errors take place on a much longer timescale of close to 1 ms, roughly 30x longer than the timescale for errors on the underlying transmons composing the dual-rail qubit.
As anticipated, the ratio of error timescales indicates that the vast majority of errors (> 96%) in our dual-rail qubit are erasure errors (leakage to |00⟩), with only a small fraction (< 4%) of residual (silent) bit-flip and phase-flip errors. This is a strong indicator that if we can accurately flag the erasures, then we can efficiently correct most errors that occur in this system.
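As a quick sanity check, these two timescales can be converted into error rates to reproduce that fraction. The sketch below uses the rounded values quoted above:

```python
# Back-of-the-envelope check of the erasure fraction from the two timescales.
T_erasure = 30e-6  # ~30 us: leakage to |00> via photon loss
T_pauli = 1e-3     # ~1 ms: residual (silent) bit-flip / phase-flip errors

rate_erasure = 1 / T_erasure
rate_pauli = 1 / T_pauli

erasure_fraction = rate_erasure / (rate_erasure + rate_pauli)
print(f"{erasure_fraction:.1%}")  # ~97.1%, consistent with the >96% above
```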
Detecting erasure errors using an ancilla qubit
To check if a qubit had an erasure error, we have to do a measurement to see if the dual-rail system is in the |00⟩ state. In particular, we want to distinguish |00⟩ from the |0L⟩,|1L⟩ states. However, it is important that we do not reveal the dual-rail qubit’s logical state if no erasure has occurred. For example, if the dual-rail pair is in |0L⟩+|1L⟩, the erasure check should indicate that no erasure has occurred and leave the system in this superposition (without collapsing it into either |0L⟩ or |1L⟩, which would effectively introduce new silent phase-flip errors into the system).
This is a rather delicate measurement to perform. To accomplish it, we use a third “ancilla” transmon which is weakly coupled to the dual-rail qubit. The presence or absence of the single photon in the dual-rail pair shifts the energy of the ancilla. This shift is roughly the same for the two dual-rail states |0L⟩ and |1L⟩, which each contain a single photon, but it is different for the state |00⟩, in which there are no photons. As a result, by measuring the energy of the ancilla transmon, we can determine whether the dual-rail qubit has decayed to |00⟩ without distinguishing |0L⟩ from |1L⟩.
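In the ideal limit, this erasure check behaves as a projective measurement of photon number only. The following numpy sketch (an idealized toy model, not a simulation of the device or its readout) illustrates why the “no erasure” outcome leaves a logical superposition intact:

```python
import numpy as np

def ket(b1, b2):
    """Two-transmon basis state |b1 b2>."""
    v = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
    return np.kron(v[b1], v[b2])

ket00, ket01, ket10 = ket(0, 0), ket(0, 1), ket(1, 0)
k0L = (ket01 - ket10) / np.sqrt(2)  # |0L>
k1L = (ket01 + ket10) / np.sqrt(2)  # |1L>

# The check resolves photon number only: it projects onto the zero-photon
# subspace (erasure flagged) or the one-photon subspace (no erasure).
P_erasure = np.outer(ket00, ket00)
P_code = np.outer(ket01, ket01) + np.outer(ket10, ket10)

psi = 0.6 * k0L + 0.8 * k1L         # an arbitrary logical superposition
post = P_code @ psi                 # state after a "no erasure" outcome
print(abs(np.dot(post, psi)) ** 2)  # 1.0: the superposition is untouched
```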
This approach, at least in principle, allows high-fidelity detection of erasure errors without causing additional errors on the dual-rail qubit. To verify this point experimentally, we prepare the system in a superposition |0L⟩+|1L⟩ and perform a sequence of repeated erasure checks to see if the superposition is destroyed. We find very low rates of errors (<0.1%) induced by each check, validating this approach as a viable method of erasure detection.
Conclusion
These experiments, described in our recent publication [3], complete the picture of our dual-rail qubit as an “erasure qubit” – most errors that occur are indeed erasure errors, and those erasure errors can be detected without introducing new errors into the system. This work is just the beginning of an exciting path with transmon-based erasure qubits, with major next steps needed to complete the error-correction toolbox with these new qubits and scale up to larger systems. By incorporating new ideas like erasure errors into standard transmon processors, we hope to improve error correction performance in ways that are compatible with other state-of-the-art developments in the field.
Indeed, this strategy of hardware-efficient error correction is just one part of a broad landscape of technical challenges in scaling up quantum computers, complementing other critical work such as systematic improvement of device performance and construction of scalable control systems. All of these pieces of technological development must come together on the incredibly exciting road towards large-scale, useful quantum computing.
References
[1] A. Dalzell et al., arXiv:2310.03011 (2023); https://aws.amazon.com/blogs/quantum-computing/constructing-end-to-end-quantum-algorithm/
[2] A. Kubica, https://aws.amazon.com/blogs/quantum-computing/quantum-error-correction-in-the-presence-of-biased-noise/
[3] H. Levine, A. Haim et al., Physical Review X 14, 011051 (2024)
[4] Y. Wu et al., Nature Communications 13, 4657 (2022)
[5] S. Ma et al., Nature 622, 279 (2023)
[6] P. Scholl et al., Nature 622, 273 (2023)
[7] A. Kubica et al., Physical Review X 13, 041022 (2023)
[8] J. Teoh et al., PNAS 120, 41 (2023)
[9] M. Kang et al., PRX Quantum 4, 020358 (2023)
[10] P. Krantz et al., Applied Physics Reviews 6, 021318 (2019)