Ultrahigh Error Threshold for Surface Codes with Biased Noise

David K. Tuckett and Stephen D. Bartlett
Centre for Engineered Quantum Systems, School of Physics, The University of Sydney, Sydney, NSW 2006, Australia

Steven T. Flammia
Centre for Engineered Quantum Systems, School of Physics, The University of Sydney, Sydney, NSW 2006, Australia
Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
15 December 2017
Abstract

We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

For quantum computing to be possible, fragile quantum information must be protected from errors by encoding it in a suitable quantum error correcting code. The surface code Bravyi and Kitaev (1998) and related topological stabilizer codes Terhal (2015) are quite remarkable among the diverse range of quantum error correcting codes in their ability to protect quantum information against local noise. Topological codes can have surprisingly large error thresholds—the break-even error rate below which errors can be corrected with arbitrarily high probability—despite using stabilizers that act on only a small number of neighboring qubits Dennis et al. (2002). It is the combination of these high error thresholds and local stabilizers that makes topological codes, and the surface code in particular, popular choices for many quantum computing architectures.

Here we demonstrate a significant increase in the error threshold for a surface code when the noise is biased, i.e., when one Pauli error occurs at a higher rate than others. For qubits defined by nondegenerate energy levels with a Hamiltonian proportional to the Pauli Z operator, the noise model is typically described by a dephasing (Z-error) rate that is much greater than the rates for relaxation and other energy-nonpreserving errors. Such biased noise is common in many quantum architectures, including superconducting qubits Aliferis et al. (2009), quantum dots Shulman et al. (2012), and trapped ions Nigg et al. (2014), among others. The increased error threshold is achieved by tailoring the standard surface code stabilizers to the noise in an extremely simple way and by employing a decoder that accounts for correlations in the error syndrome. In particular, using the tensor network decoder of Bravyi, Suchara, and Vargo (BSV) Bravyi et al. (2014), we give evidence that the error correction threshold of this tailored surface code with pure Z noise is 43.7(1)%, a fourfold increase over the optimal surface code threshold for pure Z noise of approximately 10.9% Bravyi et al. (2014).

These gains result from the following simple observations. For a Z error in the standard formulation of the surface code, the stabilizers consisting of products of Z around each plaquette of the square lattice contribute no useful syndrome information. Exchanging these Z-type stabilizers for products of Y around each plaquette still results in a valid quantum surface code, since these Y-type stabilizers commute with the original X-type vertex stabilizers. But now there are twice as many bits of syndrome information about the Z errors. Taking advantage of these extra syndrome bits requires an optimized decoder that can use the correlations between the two syndrome types. The standard decoder based on minimum-weight matching breaks down at this point, but the BSV decoder is specifically designed to handle such correlations. We show that the bond dimension χ, which defines the scale of correlation in the BSV decoder, needs to be large to achieve optimal decoding, so in that sense accounting for these correlations is actually necessary. These two ideas—doubling the number of useful syndrome bits and a decoder that makes optimal use of them—give an intuition that captures the essential reason for the increased threshold. It is nonetheless remarkable just how large an effect this simple change makes.
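These two observations are easy to verify directly. The following short Python sketch (our own illustration, on a toy six-qubit patch rather than the full lattice) checks that a Y-type plaquette operator commutes with an overlapping X-type vertex operator, and that a single Z error on a shared qubit now flips a plaquette syndrome bit in addition to a vertex syndrome bit.

```python
# Minimal check of the commutation and syndrome-doubling claims.  Two Pauli
# strings commute iff they anticommute on an even number of sites.  The toy
# arrangement mimics the two-edge overlap of an adjacent vertex and plaquette.

def anticommute(a: str, b: str) -> bool:
    """True if single-qubit Paulis a, b anticommute."""
    return a != "I" and b != "I" and a != b

def commutes(p: str, q: str) -> bool:
    """True if the n-qubit Pauli strings p and q commute."""
    return sum(anticommute(a, b) for a, b in zip(p, q)) % 2 == 0

vertex_X = "XXXXII"   # X-type vertex stabilizer on qubits 0-3
plaq_Z   = "IIZZZZ"   # standard Z-type plaquette stabilizer on qubits 2-5
plaq_Y   = "IIYYYY"   # modified Y-type plaquette stabilizer on qubits 2-5

assert commutes(vertex_X, plaq_Z)   # the standard code is a valid stabilizer code
assert commutes(vertex_X, plaq_Y)   # ...and so is the modified one (even overlap)

z_error = "IIZIII"                  # a single Z error on a shared qubit
print(commutes(z_error, vertex_X))  # False: flips the vertex syndrome bit
print(commutes(z_error, plaq_Z))    # True:  the standard plaquette gives no information
print(commutes(z_error, plaq_Y))    # False: the modified plaquette also flips -> twice the syndrome data
```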

We also consider more general Pauli error models, where Z errors occur more frequently than X and Y errors, with a nonzero bias ratio η of the error rates. We show that the tailored surface code exhibits these significant gains in the error threshold even for modest error biases in physically relevant regimes: for a bias of η = 10 (meaning dephasing errors occur 10 times more frequently than all other errors combined), the error threshold is already 28.2(2)%. Figure 1 presents our main result of the threshold scaling as a function of bias. Notably, we find that the tailored surface code together with the BSV decoder performs near the hashing bound for all values of the bias.

Error correction with the surface code.—

The surface code Bravyi and Kitaev (1998) is defined by a 2D square lattice having qubits on the edges with a set of local stabilizer generators. In the usual prescription, for each vertex (or plaquette), the stabilizer consists of the product of the X (or Z) operators acting on the neighboring edges. We simply exchange the roles of Z and Y, as shown in Fig. 2. By choosing appropriate “rough” and “smooth” boundary conditions along the vertical and horizontal edges, the code space encodes one logical qubit into the joint +1 eigenspace of all the commuting stabilizers, with a code distance given by the linear size of the lattice.

A large effort has been devoted to understanding error correction with the surface code and the closely related toric code Kitaev (2003). The majority of this effort has focused on the cases of either pure X noise, or depolarizing noise where X, Y, and Z errors happen with equal probability; see Refs. Terhal (2015); Brown et al. (2016) for recent literature reviews. Once a noise model is fixed, one must define a decoder, and the most popular choice is based on minimum-weight matching (MWM). This decoder treats X and Z noise independently, and it has an error threshold of around 10.3% for pure X noise with a naive implementation Dennis et al. (2002); Wang et al. (2003), or 10.6% with some further optimization Stace and Barrett (2010). Many other decoders have been proposed, however, and these are judged according to their various strengths and weaknesses, including the threshold error rate, the logical failure rate below threshold, robustness to measurement errors (fault tolerance), speed, and parallelizability. Of particular note are the decoders of Refs. Duclos-Cianci and Poulin (2010, 2014); Fowler (2013); Wootton and Loss (2012); Delfosse and Tillich (2014); Hutter et al. (2014); Torlai and Melko (2017); Baireuther et al. (2017); Krastanov and Jiang (2017), since these either can handle, or can be modified to handle, correlations beyond the paradigm of independent X and Z errors.

Figure 1: Threshold error rate p_th as a function of bias η. The dark gray line is the zero-rate hashing bound for the associated Pauli error channel. Lighter gray lines show the hashing bound at small nonzero rates for comparison; the surface code family considered here has rate 1/n for n physical qubits. Blue points show the estimates for the threshold using the fitting procedure described in the main text, together with 1-standard-deviation error bars. The point at the largest bias value corresponds to infinite bias, i.e., only Z errors.

The BSV decoder.—

Our choice of the BSV decoder Bravyi et al. (2014) is motivated by the fact that it gives an efficient approximation to the optimal maximum likelihood (ML) decoder, which maximizes the a posteriori probability of a given logical error conditioned on an observed syndrome. This decoder has also previously been used to perform nearly optimal decoding of depolarizing noise Bravyi et al. (2014), achieving an error threshold close to estimates from statistical physics arguments that the threshold should be 18.9% Bombin et al. (2012). (In fact, our own estimate of the depolarizing threshold using the BSV decoder is close to this value.) Because it approximates the ML decoder, the BSV decoder is a natural choice for finding the maximum value of the threshold for biased noise models.

The decoder works by defining a tensor network with local tensors associated with the qubits and stabilizers of the code. The geometry of the tensor network respects the geometry of the code. Each index on the local tensors has dimension 2 initially, but during the contraction sequence this dimension grows until it is bounded by χ, called the bond dimension. When χ is exponentially large in n, the number of physical qubits, the contraction value of the tensor network returns the exact probabilities, conditioned on the syndrome, of each of the four logical error classes. Such an implementation would be highly inefficient, but using a truncation procedure during the tensor contraction allows one to work with any fixed value of χ at a polynomial runtime of O(nχ³). In this way, the algorithm provides an efficient and tunable approximation of the exact ML decoder, and in practice small values of χ were observed to work well Bravyi et al. (2014). We refer the reader to Ref. Bravyi et al. (2014) for the full details of this decoder.
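To illustrate the role of the bond dimension, the following minimal Python sketch shows the generic matrix-product-state compression step (merge two neighboring tensors, take an SVD, and keep only the χ largest singular values) that keeps a bond index bounded by χ. It is a generic illustration of the truncation idea, not the actual BSV contraction algorithm.

```python
# Sketch of bond-dimension truncation via SVD (illustrative only).
import numpy as np

def truncate_bond(A, B, chi):
    """A: (left, d, bond), B: (bond, d, right). Return a compressed pair with bond <= chi."""
    left, d1, bond = A.shape
    bond2, d2, right = B.shape
    assert bond == bond2
    theta = np.tensordot(A, B, axes=(2, 0))            # (left, d1, d2, right)
    theta = theta.reshape(left * d1, d2 * right)
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    k = min(chi, len(S))                               # keep only the chi largest singular values
    U, S, Vh = U[:, :k], S[:k], Vh[:k, :]
    A_new = U.reshape(left, d1, k)
    B_new = (np.diag(S) @ Vh).reshape(k, d2, right)
    return A_new, B_new

# Example: compress a bond of dimension 8 down to chi = 4.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 2, 8))
B = rng.normal(size=(8, 2, 4))
A4, B4 = truncate_bond(A, B, chi=4)
print(A4.shape, B4.shape)   # (4, 2, 4) (4, 2, 4)
```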

Figure 2: The modified surface code, tailored for biased noise, with logical operators given by a product of X along the top edge and a product of Y along the left edge. The stabilizers are shown at right.

Biased Pauli error model.—

A Pauli error channel is defined by an array of probabilities (p_I, p_X, p_Y, p_Z) corresponding to each Pauli operator I (no error), X, Y, and Z, respectively. We define p = p_X + p_Y + p_Z to be the probability of any single-qubit error, and we always consider the case of independent, identically distributed noise. We define the bias η to be the ratio of the probability of a Z error occurring to the total probability of a non-Z Pauli error occurring, so that η = p_Z / (p_X + p_Y). For simplicity, we consider the special case p_X = p_Y in what follows. Then for total error probability p, Z errors occur with probability p_Z = ηp/(η + 1), and p_X = p_Y = p/(2(η + 1)). When η = 1/2, this gives the standard depolarizing channel with probability p/3 for each nontrivial Pauli error, and taking the limit η → ∞ gives only Z errors with probability p. Biased Pauli error models have been considered by a number of authors Aliferis and Preskill (2008); Aliferis et al. (2009); Röthlisberger et al. (2012); Napp and Preskill (2013); Brooks and Preskill (2013); Webster et al. (2015); Robertson et al. (2017), but we note that there are several different conventions for the definition of bias. Comparison between channels with different bias but the same total error rate is facilitated by the fact that the channel fidelity to the identity is a function only of p.
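For concreteness, this parametrization can be written in a few lines of Python; the helper below is our own illustration, not code from the paper.

```python
# Convert a total error probability p and bias eta into Pauli channel probabilities:
# p_Z = eta * p / (eta + 1), p_X = p_Y = p / (2 * (eta + 1)).

def biased_pauli_channel(p: float, eta: float):
    """Return (p_I, p_X, p_Y, p_Z) for total error rate p and bias eta = p_Z / (p_X + p_Y)."""
    if eta == float("inf"):
        p_x = p_y = 0.0
        p_z = p
    else:
        p_z = eta * p / (eta + 1.0)
        p_x = p_y = p / (2.0 * (eta + 1.0))
    return (1.0 - p, p_x, p_y, p_z)

print(biased_pauli_channel(0.1, 0.5))           # depolarizing: each nontrivial error has probability p/3
print(biased_pauli_channel(0.1, 10))            # Z ten times more likely than X and Y combined
print(biased_pauli_channel(0.1, float("inf")))  # pure dephasing
```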

Hashing bound.—

The quantum capacity is the maximum achievable rate at which one can transmit quantum information through a noisy channel Wilde (2013). The hashing bound Lloyd (1997); Shor (2002); Devetak (2005) is an achievable rate which is generally less than the quantum capacity DiVincenzo et al. (1998). For Pauli error channels, the hashing bound takes a particularly simple form Wilde (2013) and says that there exist quantum stabilizer codes that achieve a rate R = 1 − H(p_I, p_X, p_Y, p_Z), with H being the Shannon entropy of the probability vector. The proof of achievability involves using random codes, and it is generally hard to find explicit codes and decoders that perform at or above this rate for an arbitrary channel, especially if one wishes to impose additional constraints such as local stabilizers. The quantum capacity itself is still unknown for any Pauli channel where at least two of p_X, p_Y, p_Z are nonzero.
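As a concrete illustration, the zero-rate hashing bound used as the benchmark in Fig. 1 can be computed numerically as the error rate at which the rate R drops to zero. The following short Python sketch (our own, with an arbitrary bisection tolerance) does this for the biased channel defined above.

```python
# Zero-rate hashing bound: the largest p with R(p) = 1 - H(p_I, p_X, p_Y, p_Z) >= 0.
import math

def shannon_entropy(probs):
    return -sum(q * math.log2(q) for q in probs if q > 0)

def hashing_rate(p, eta):
    p_z = eta * p / (eta + 1.0)
    p_x = p_y = p / (2.0 * (eta + 1.0))
    return 1.0 - shannon_entropy((1.0 - p, p_x, p_y, p_z))

def hashing_threshold(eta, tol=1e-9):
    lo, hi = 0.0, 0.5   # the rate is positive at small p and negative at p = 0.5 for finite bias
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if hashing_rate(mid, eta) > 0 else (lo, mid)
    return lo

print(hashing_threshold(0.5))   # ~0.189, the depolarizing hashing bound
print(hashing_threshold(10))    # hashing bound at bias eta = 10, just under 0.28
```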

Figure 3: Exponential decay of the logical failure rate f with respect to code distance d in the below-threshold regime p < p_th, at a fixed bias and fixed error rates. We observe scaling behavior of the form f ∝ exp(-αd), where the decay rate α depends on the bias and is an increasing function of the distance below threshold p_th - p. In this bias regime, the decoder performance is likely farthest from optimal, but the decay is still clearly exponential over this range. Other values of the bias show the same general scaling behavior, though with different decay rates α. The statistical error bars from 30 000 trials per point are smaller than the individual plot points in every case.

Numerics.—

Our numerical implementation makes only a minor modification to the BSV decoder. To avoid changing the definitions of the tensors used in Ref. Bravyi et al. (2014), we use the symmetry under exchanging Y and Z, by which decoding the modified surface code against Z-biased noise is equivalent to decoding the standard surface code against the corresponding Y-biased noise. Then all of the definitions in Ref. Bravyi et al. (2014) carry over unchanged. The only difference is that we perform two tensor network contractions for each decoding sequence. There is an arbitrary choice as to whether to contract the network row-wise or column-wise. Rather than pick just one, we average the values of both contractions. We empirically observe improved performance with this modification.

For each value of the bias η = 0.5, 1, 3, 10, 30, 100, 300, 1000, ∞, we estimate the logical failure rate using the BSV decoder to obtain the sample mean failure rate on 30 000 random trials for a selection of physical error rates in the region near the threshold, for code distances d = 9, 13, 17, 21. We use a rather large value of the bond dimension χ for our simulations, although for low bias we already observe that the decoder converges well with considerably smaller χ. However, we still do not observe complete convergence of the decoder at our largest χ in the regime of intermediate bias. The decoder convergence with χ is displayed in Fig. 4, which shows the estimated logical failure rate near the threshold. Performance of the decoder and convergence with χ generally improve as the bias increases again beyond this intermediate regime, but it is likely that further improvements are possible there. Although the decoder is not achieving an optimal failure rate in the intermediate regime, we see excellent convergence for most of the range of bias, and across the full range of bias we observe threshold behavior. Moreover, this threshold is at or near the hashing bound for all values of the bias. In the regions that are a fixed distance below the threshold, as in Fig. 3, we observe an exponential decay in the logical failure rate, f ∝ exp(-αd), where the decay rate α may depend on the bias and is an increasing function of the distance below threshold. This constitutes strong evidence of an error correction threshold.

Figure 4: Convergence of the decoder as a function of the bond dimension χ near the threshold at a fixed code distance. We observe that the logical failure rates stabilize with increasing χ for both low and high biases. However, in the intermediate bias regime the failure rate is still decreasing noticeably between increments of χ, suggesting that a larger bond dimension would be required for a good approximation to the optimal ML decoder.

We note that the value of χ used above was the largest in our simulations, so we do not know if the saturation of the decoder performance at large bias is a real effect, or a side effect of having too small a value of χ. Although we observe convergence and threshold behavior, we do not know how much the performance might improve for larger values of χ since, as seen in Fig. 4, there is apparently still some room for improvement. It is possible that the saturation is a real effect, however, since even at infinite bias there are still logical errors of weight comparable to the code distance d that consist only of Z errors. This is in contrast to the classical repetition code, which has a threshold of 50% and a distance that grows linearly with the number of bits. One possibility to address this is to use a surface code with side lengths d1 and d2 that are relatively prime, for example just choosing d2 = d1 + 1. We empirically observe that the Z-distance (i.e., the distance when restricted only to Z errors) of the code scales like d1 d2 for this modification of the surface code. In fact, on a toric code with side lengths both odd and relatively prime, the Z-distance is provably the product of the side lengths Bravyi (2017). These observations are currently being explored, and will be addressed in more detail in forthcoming work.

Figure 5: Logical failure rate as a function of the rescaled error rate x for biases η = 10, 100, ∞. The solid line is the best fit to the model f = A + Bx + Cx² described in the main text. The insets show the raw sample means over 30 000 runs for various code distances d, and the dotted gray vertical line indicates the hashing bound. Even for the intermediate-bias case, where the decoder performance was likely furthest from optimal, we still see good agreement with the fit model.

To obtain an explicit estimate of the threshold p_th, we use the critical exponent method of Ref. Wang et al. (2003). If we define a correlation length ξ = |p - p_th|^(-ν) for some critical exponent ν, then near the critical point, where ξ diverges, we expect that the behavior of the code is scale invariant. In this regime, since the code distance d corresponds to a physical length, the failure probability should depend only on the dimensionless ratio d/ξ, a conjecture that was first empirically verified in Ref. Wang et al. (2003). This suggests defining a rescaled variable x = (p - p_th) d^(1/ν), so that the failure rate expanded as a power series in x is explicitly scale invariant at the critical point corresponding to x = 0. It is then natural to consider a model for the failure rate given by a truncated Taylor expansion in the neighborhood around x = 0. We use a quadratic model, f = A + Bx + Cx², and then fit to this model to find p_th and ν and the nuisance parameters A, B, and C. A discussion of the limits of the validity of this universal scaling hypothesis can be found in Ref. Watson and Barrett (2014). We plot our estimates of the failure rate for various values of p and d for the representative cases of η = 10, 100, ∞ in Fig. 5, together with rescaled data as a function of x. A visual inspection confirms good qualitative agreement with the model.
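To make the fitting procedure concrete, here is a minimal Python sketch using scipy's curve_fit on synthetic data. The generated "failure rates" and the parameter choices (assumed threshold, exponent, noise level) are illustrative only and are not the paper's data.

```python
# Sketch of the critical-exponent fit of Wang, Harrington, and Preskill on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def model(X, p_th, nu, A, B, C):
    p, d = X
    x = (p - p_th) * d ** (1.0 / nu)   # rescaled variable
    return A + B * x + C * x ** 2      # quadratic (truncated Taylor) model

# Synthetic data: failure rates f(p, d) near an assumed threshold of 0.28.
distances = np.array([9, 13, 17, 21])
ps = np.linspace(0.26, 0.30, 9)
P, D = np.meshgrid(ps, distances)
true = model((P.ravel(), D.ravel()), 0.28, 1.5, 0.25, 1.2, 0.9)
f = true + np.random.default_rng(1).normal(scale=0.005, size=true.shape)

popt, pcov = curve_fit(model, (P.ravel(), D.ravel()), f,
                       p0=[0.28, 1.0, 0.2, 1.0, 1.0])
p_th, nu = popt[0], popt[1]
print(f"p_th = {p_th:.4f} +/- {np.sqrt(pcov[0, 0]):.4f}, nu = {nu:.2f}")
```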

The critical exponent method gives precise estimates of p_th with low statistical uncertainty. However, systematic biases might affect the accuracy of the estimate and must be accounted for. Finite-size effects typically cause threshold estimates to decrease as larger and larger code distances are added to the estimate. Additionally, the suboptimality of the decoder due to small χ in the intermediate bias regime may have led us to overestimate each individual logical failure rate. This latter effect does not directly imply that we have also overestimated the threshold p_th, and the data remain consistent with the fit model in spite of it, as can be seen in Fig. 5. On balance, we expect that our estimates might decrease somewhat in the intermediate bias regime. Our final error bars were obtained by jackknife resampling, i.e., by computing, for each fixed bias η, the spread in estimates for p_th when rerunning the fit procedure with a single code distance removed, for each choice of removed distance. Our results are summarized in Fig. 1.
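The jackknife procedure just described can be written compactly. The following is a rough Python sketch (our own illustration, not the authors' analysis code): `fit_threshold` is a hypothetical stand-in for the critical-exponent fit shown earlier, and the standard jackknife standard-error formula is used here as one concrete choice of "spread".

```python
# Leave-one-distance-out jackknife over threshold refits.
import numpy as np

def jackknife_error(distances, fit_threshold):
    """fit_threshold(subset_of_distances) -> p_th estimate from a refit on that subset."""
    estimates = np.array([
        fit_threshold([d for d in distances if d != leave_out])
        for leave_out in distances
    ])
    n = len(estimates)
    # One concrete measure of spread: the standard jackknife standard error.
    return np.sqrt((n - 1) / n * np.sum((estimates - estimates.mean()) ** 2))

# Usage with a dummy fit function standing in for the curve fit above (illustration only):
print(jackknife_error([9, 13, 17, 21], lambda ds: 0.28 + 0.0001 * sum(ds)))
```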

Fault-tolerant syndrome extraction.—

Our study has focused on the error correction threshold under the assumption of ideal syndrome extraction. To see if the gains observed in this setting carry over to applications in fault-tolerant quantum computing, one would need to consider the effects of faulty syndrome measurements and gates. A full fault-tolerant analysis is beyond the scope of this work, but we briefly consider the key issues here.

First, the BSV decoder that we have used to investigate this ultrahigh error threshold is not fault tolerant, but some clustering decoders are Duclos-Cianci and Poulin (2014). Developing efficient, practical fault-tolerant decoders with the highest achievable thresholds remains a significant challenge for the field.

An added complication with a biased noise model is that the gates that perform the syndrome extraction must at least approximately preserve the noise bias in order to maintain an advantage Aliferis et al. (2009). For the tailored surface code studied here, one could appeal to the techniques of Refs. Aliferis et al. (2009); Brooks and Preskill (2013), where we note that Y-type syndromes can be measured using a minor modification of the Z-syndrome measurement scheme. We note that these syndrome extraction circuits are significantly more complex (involving the use of both ancilla cat states and gate teleportation) compared with the standard approach for the surface code with unbiased noise, and this added complexity will undoubtedly reduce the threshold.

More optimistically, we note that the standard method for syndrome extraction in the surface code Fowler et al. (2012) can be directly adapted to this tailored code and maintains biased noise on the data qubits. Ancilla qubits are placed in the centers of both the plaquette and vertex stabilizers of Fig. 2, and they are both initialized and measured in the X basis. Sequences of controlled-X (vertex) and controlled-Y (plaquette) gates, with the ancilla as the control and the data qubits as the targets, yield the required syndrome measurements analogous to the standard method. In this scheme, we note that high-rate Z errors on the ancilla are never mapped to the data qubits; low-rate X and Y errors on the ancilla can cause errors on the data qubits, but the noise remains biased. Measurement errors will occur at the high rate, but this can be accommodated by repeated measurement. Note that, as argued by Aliferis and Preskill Aliferis et al. (2009), native controlled-X and controlled-Y gates are perhaps not well motivated in a system with a noise bias, but nonetheless this simple scheme illustrates that, in principle, syndromes can be extracted in this code while preserving the noise bias. To develop a full fault-tolerant syndrome extraction circuit in a noise-biased system would require a complete specification of the native gates in the system and an understanding of their associated noise models.
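The bias-preservation claim for this extraction circuit can be checked directly by conjugating ancilla errors through the two-qubit gates. The following small Python sketch (our own check, using explicit 4 x 4 matrices) confirms that a Z error on the ancilla control commutes through both controlled-X and controlled-Y, while ancilla X errors spread only X or Y errors, at the low rate, to the data qubit.

```python
# Propagate ancilla (control) errors through CX and CY gates (data qubit = target).
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def controlled(U):
    """Controlled-U with qubit 0 (ancilla) as control and qubit 1 (data) as target."""
    return np.block([[np.eye(2), np.zeros((2, 2))], [np.zeros((2, 2)), U]])

def conjugate(gate, error):
    return gate @ error @ gate.conj().T

def name(op):
    """Identify a two-qubit Pauli (up to sign)."""
    labels = [("I", I), ("X", X), ("Y", Y), ("Z", Z)]
    for la, P in labels:
        for lb, Q in labels:
            if np.allclose(op, np.kron(P, Q)) or np.allclose(op, -np.kron(P, Q)):
                return la + lb
    return "?"

CX, CY = controlled(X), controlled(Y)
print(name(conjugate(CX, np.kron(Z, I))))  # ZI: high-rate Z on the ancilla never reaches the data
print(name(conjugate(CY, np.kron(Z, I))))  # ZI: same for the plaquette (CY) circuit
print(name(conjugate(CX, np.kron(X, I))))  # XX: low-rate ancilla X spreads an X to the data
print(name(conjugate(CY, np.kron(X, I))))  # XY: low-rate ancilla X spreads a Y to the data
```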

Discussion.—

Our numerical results strongly suggest that in systems that exhibit an error bias, there are significant gains to be had for quantum error correction with codes and decoders that are tailored to exploit this bias. It is remarkable that the tailored surface code performs at the hashing bound across a large range of biases. This means that it is not just a good code for a particular error model, but broadly good for any local Pauli error channel once it is tailored to the specific noise bias. It is also remarkable that a topological code, limited to local stabilizers, does so well in this regard.

Many realizations of qubits based on nondegenerate energy levels of some quantum system have a bias—often quite significant—towards dephasing (Z errors) relative to energy-nonconserving errors (X and Y errors). This suggests tailoring other codes, and in particular other topological codes, to have error syndromes generated by X- and Y-type stabilizers. Even larger gains might be had by considering biased noise in qudit surface codes Anwar et al. (2014); Watson et al. (2015).

For qubit topological stabilizer codes, the threshold for exact ML decoding with general Pauli noise can be determined using the techniques of Ref. Bombin et al. (2012), which mapped the ML decoder’s threshold to a phase transition in a pair of coupled random-bond Ising models. It would be interesting to explore this phase boundary for general Pauli noise beyond the depolarizing channel that was studied numerically in Ref. Bombin et al. (2012).

We have employed the BSV decoder to obtain our threshold estimates because of its near-optimal performance, but it is not the most efficient or practical decoder for many purposes. One outstanding challenge is to find good practical decoders that can work as well or nearly as well across a range of biases. The clustering-type decoders Duclos-Cianci and Poulin (2010, 2014) appear well suited for this task, and they have the added advantage that some versions of these decoders (e.g., Ref. Bravyi and Haah (2013)) generalize naturally to all Abelian anyon models such as the qudit surface codes.

The most pressing open question related to this work is whether the substantial gains observed here can be preserved in the context of fault-tolerant quantum computing.

Acknowledgements.—

This work is supported by the Australian Research Council (ARC) via Centre of Excellence in Engineered Quantum Systems (EQuS) Project No. CE110001013 and Future Fellowship No. FT130101744, by the U.S. Army Research Office Grants No. W911NF-14-1-0098 and No. W911NF-14-1-0103, and by the Sydney Informatics Hub for access to high-performance computing resources.

References
