Ultrahigh Error Threshold for Surface Codes with Biased Noise
We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
For quantum computing to be possible, fragile quantum information must be protected from errors by encoding it in a suitable quantum error correcting code. The surface code Bravyi and Kitaev (1998) and related topological stabilizer codes Terhal (2015) are quite remarkable among the diverse range of quantum error correcting codes in their ability to protect quantum information against local noise. Topological codes can have surprisingly large error thresholds—the break-even error rate below which errors can be corrected with arbitrarily high probability—despite using stabilizers that act on only a small number of neighboring qubits Dennis et al. (2002). It is the combination of these high error thresholds and local stabilizers that makes topological codes, and the surface code in particular, popular choices for many quantum computing architectures.
Here we demonstrate a significant increase in the error threshold for a surface code when the noise is biased, i.e., when one type of Pauli error occurs at a higher rate than the others. For qubits defined by nondegenerate energy levels with a Hamiltonian proportional to the Pauli Z operator, the noise model is typically described by a dephasing (Z-error) rate that is much greater than the rates for relaxation and other energy-nonpreserving errors. Such biased noise is common in many quantum architectures, including superconducting qubits Aliferis et al. (2009), quantum dots Shulman et al. (2012), and trapped ions Nigg et al. (2014), among others. The increased error threshold is achieved by tailoring the standard surface code stabilizers to the noise in an extremely simple way and by employing a decoder that accounts for correlations in the error syndrome. In particular, using the tensor network decoder of Bravyi, Suchara and Vargo (BSV) Bravyi et al. (2014), we give evidence that the error correction threshold of this tailored surface code with pure Z noise is 43.7(1)%, a fourfold increase over the optimal surface code threshold for pure Z noise of 10.9% Bravyi et al. (2014).
These gains result from the following simple observations. For a Z error in the standard formulation of the surface code, the stabilizers consisting of products of Z around each plaquette of the square lattice contribute no useful syndrome information. Exchanging these Z-type stabilizers for products of Y around each plaquette still results in a valid quantum surface code, since these Y-type stabilizers commute with the original X-type stabilizers. But now there are twice as many bits of syndrome information about the Z errors. Taking advantage of these extra syndrome bits requires an optimized decoder that can use the correlations between the two syndrome types. The standard decoder based on minimum-weight matching breaks down at this point, but the BSV decoder is specifically designed to handle such correlations. We show that the parameter χ, which defines the scale of correlation in the BSV decoder, needs to be large to achieve optimal decoding, so in that sense accounting for these correlations is actually necessary. These two ideas—doubling the number of useful syndrome bits and a decoder that makes optimal use of them—give an intuition that captures the essential reason for the increased threshold. It is nonetheless remarkable just how large an effect this simple change makes.
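As an illustration of this doubling of syndrome information, the following sketch (ours, not part of the paper) uses the binary symplectic representation of Pauli operators to verify the commutation bookkeeping: a single Z error is invisible to a Z-type plaquette but flips a Y-type one, while Y-type and X-type stabilizers that overlap on an even number of qubits still commute.

```python
import numpy as np

def pauli(op, support, n):
    """Binary symplectic (x, z) representation of a product of identical
    single-qubit Paulis acting on the qubits in `support`."""
    x, z = np.zeros(n, dtype=int), np.zeros(n, dtype=int)
    for q in support:
        if op in ('X', 'Y'):
            x[q] = 1
        if op in ('Z', 'Y'):
            z[q] = 1
    return x, z

def commutes(p1, p2):
    """Two Paulis commute iff their symplectic inner product is 0 mod 2."""
    (x1, z1), (x2, z2) = p1, p2
    return (np.dot(x1, z2) + np.dot(z1, x2)) % 2 == 0

n = 4  # the qubits around a single plaquette
z_error = pauli('Z', [0], n)                       # one dephasing error
print(commutes(z_error, pauli('Z', range(4), n)))  # True: ZZZZ sees nothing
print(commutes(z_error, pauli('Y', range(4), n)))  # False: YYYY fires
# A Y-type plaquette and an X-type vertex share two qubits on the lattice,
# an even overlap, so they commute:
print(commutes(pauli('Y', [0, 1], 2), pauli('X', [0, 1], 2)))  # True
```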
We also consider more general Pauli error models, where Z errors occur more frequently than X and Y errors with a nonzero bias ratio η of the error rates. We show that the tailored surface code exhibits these significant gains in the error threshold even for modest error biases in physically relevant regimes: for a bias of η = 10 (meaning dephasing errors occur 10 times more frequently than all other errors combined), the error threshold is already 28.2(2)%. Figure 1 presents our main result of the threshold scaling as a function of bias. Notably, we find that the tailored surface code together with the BSV decoder performs near the hashing bound for all values of the bias.
Error correction with the surface code.—
The surface code Bravyi and Kitaev (1998) is defined by a 2D square lattice having qubits on the edges with a set of local stabilizer generators. In the usual prescription, for each vertex (or plaquette), the stabilizer consists of the product of the X (or Z) operators acting on the neighboring edges. We simply exchange the roles of Z and Y, as shown in Fig. 2. By choosing appropriate “rough” and “smooth” boundary conditions along the vertical and horizontal edges, the code space encodes one logical qubit into the joint eigenspace of all the commuting stabilizers with a code distance d given by the linear size of the lattice.
A large effort has been devoted to understanding error correction of the surface code and the closely related toric code Kitaev (2003). The majority of this effort has focused on the cases of either pure X noise, or depolarizing noise where X, Y, and Z errors happen with equal probability; see Refs. Terhal (2015); Brown et al. (2016) for recent literature reviews. Once a noise model is fixed, one must define a decoder, and the most popular choice is based on minimum-weight matching (MWM). This decoder treats X and Z noise independently, and it has an error threshold of around 10.3% for pure X noise with a naive implementation Dennis et al. (2002); Wang et al. (2003), or 10.6% with some further optimization Stace and Barrett (2010). Many other decoders have been proposed, however, and these are judged according to their various strengths and weaknesses, including the threshold error rate, the logical failure rate below threshold, robustness to measurement errors (fault tolerance), speed, and parallelizability. Of particular note are the decoders of Refs. Duclos-Cianci and Poulin (2010, 2014); Fowler (2013); Wootton and Loss (2012); Delfosse and Tillich (2014); Hutter et al. (2014); Torlai and Melko (2017); Baireuther et al. (2017); Krastanov and Jiang (2017), since these either can handle, or can be modified to handle, correlations beyond the paradigm of independent X and Z errors.
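To make the MWM baseline concrete, here is a toy brute-force version (our illustration only; production MWM decoders use Edmonds' blossom algorithm on a weighted decoding graph). It pairs up syndrome defects so as to minimize the total Manhattan distance between paired defects, which corresponds to the most probable explanation under independent errors.

```python
def pairings(points):
    """Yield all ways to partition an even-sized list of points into pairs."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pairings(remaining):
            yield [(first, partner)] + tail

def min_weight_pairing(defects):
    """Minimum-total-weight pairing of syndrome defects under Manhattan
    distance; exhaustive search, so only feasible for a handful of defects."""
    best_weight, best_match = None, None
    for match in pairings(list(defects)):
        weight = sum(abs(ax - bx) + abs(ay - by)
                     for (ax, ay), (bx, by) in match)
        if best_weight is None or weight < best_weight:
            best_weight, best_match = weight, match
    return best_weight, best_match

weight, pairs = min_weight_pairing([(0, 0), (0, 1), (5, 5), (5, 7)])
print(weight)  # 3: nearby defects are matched rather than distant ones
```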
The BSV decoder.—
Our choice of the BSV decoder Bravyi et al. (2014) is motivated by the fact that it gives an efficient approximation to the optimal maximum likelihood (ML) decoder, which maximizes the a posteriori probability of a given logical error conditioned on an observed syndrome. This decoder has also previously been used to do nearly optimal decoding of depolarizing noise Bravyi et al. (2014), achieving an error threshold close to estimates from statistical physics arguments that the threshold should be 18.9% Bombin et al. (2012). [In fact, our own estimate of the depolarizing threshold using the BSV decoder is 18.9(3)%.] Because it approximates the ML decoder, the BSV decoder is a natural choice for finding the maximum value of the threshold for biased noise models.
The decoder works by defining a tensor network with local tensors associated with the qubits and stabilizers of the code. The geometry of the tensor network respects the geometry of the code. Each index on the local tensors has dimension 2 initially, but during the contraction sequence, this dimension grows until it is bounded by a parameter χ, called the bond dimension. When χ is exponentially large in n, the number of physical qubits, the contraction value of the tensor network returns the exact probabilities, conditioned on the syndrome, of each of the four logical error classes. Such an implementation would be highly inefficient, but using a truncation procedure during the tensor contraction allows one to work with any fixed value of χ with a polynomial runtime of O(nχ³). In this way, the algorithm provides an efficient and tunable approximation of the exact ML decoder, and in practice small values of χ were observed to work well Bravyi et al. (2014). We refer the reader to Ref. Bravyi et al. (2014) for the full details of this decoder.
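The truncation step that keeps the bond dimension at χ can be illustrated generically with an SVD compression (this sketch is ours and is not the BSV implementation): after a contraction sweep grows a boundary tensor, only the χ largest singular values are retained, which is the operation that bounds the runtime.

```python
import numpy as np

def truncate(theta, chi):
    """Split a two-index tensor with an SVD, keeping at most chi singular
    values -- the compression applied after each contraction sweep."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(chi, len(s))
    return u[:, :keep], s[:keep], vh[:keep, :]

rng = np.random.default_rng(0)
# A boundary tensor whose bond dimension has grown to 16 during contraction.
theta = rng.normal(size=(16, 16))
u, s, vh = truncate(theta, chi=8)
approx = u @ np.diag(s) @ vh
# Relative error introduced by halving the bond dimension.
err = np.linalg.norm(theta - approx) / np.linalg.norm(theta)
```

Keeping all 16 singular values reproduces the tensor exactly; the tunable accuracy of the decoder comes from how quickly the discarded singular values decay.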
Biased Pauli error model.—
A Pauli error channel is defined by an array (p_I, p_X, p_Y, p_Z) corresponding to the probabilities for each Pauli operator I (no error), X, Y, and Z, respectively. We define p = p_X + p_Y + p_Z to be the probability of any single-qubit error, and we always consider the case of independent, identically distributed noise. We define the bias η to be the ratio of the probability of a Z error occurring to the total probability of a non-Z Pauli error occurring, so that η = p_Z/(p_X + p_Y). For simplicity, we consider the special case p_X = p_Y in what follows. Then for total error probability p, Z errors occur with probability p_Z = pη/(η + 1), and p_X = p_Y = p/(2(η + 1)). When η = 1/2, this gives the standard depolarizing channel with probability p/3 for each nontrivial Pauli error, and taking the limit η → ∞ gives only Z errors with probability p. Biased Pauli error models have been considered by a number of authors Aliferis and Preskill (2008); Aliferis et al. (2009); Röthlisberger et al. (2012); Napp and Preskill (2013); Brooks and Preskill (2013); Webster et al. (2015); Robertson et al. (2017), but we note that there are several different conventions for the definition of bias. Comparison between channels with different bias but the same total error rate is facilitated by the fact that the channel fidelity to the identity is a function only of p.
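For concreteness, this parametrization can be written as a small helper, a direct transcription of the definitions above with p_X = p_Y assumed:

```python
import math

def biased_pauli_probs(p, eta):
    """Return (p_I, p_X, p_Y, p_Z) for total error probability p and
    bias eta = p_Z / (p_X + p_Y), in the special case p_X = p_Y."""
    if math.isinf(eta):
        return 1 - p, 0.0, 0.0, p   # pure dephasing limit
    p_z = p * eta / (eta + 1)
    p_x = p_y = p / (2 * (eta + 1))
    return 1 - p, p_x, p_y, p_z
```

At η = 1/2 this reproduces the depolarizing channel, with each nontrivial Pauli occurring with probability p/3.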
The quantum capacity is the maximum achievable rate at which one can transmit quantum information through a noisy channel Wilde (2013). The hashing bound Lloyd (1997); Shor (2002); Devetak (2005) is an achievable rate which is generally less than the quantum capacity DiVincenzo et al. (1998). For Pauli error channels, the hashing bound takes a particularly simple form Wilde (2013) and says that there exist quantum stabilizer codes that achieve a rate R = 1 − H(p̄), with H being the Shannon entropy of the probability array p̄ = (p_I, p_X, p_Y, p_Z). The proof of achievability involves using random codes, and it is generally hard to find explicit codes and decoders that perform at or above this rate for an arbitrary channel, especially if one wishes to impose additional constraints such as local stabilizers. The quantum capacity itself is still unknown for any Pauli channel where at least two of p_X, p_Y, p_Z are nonzero.
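The hashing-bound threshold for this channel family, i.e., the error rate p at which R = 1 − H(p̄) crosses zero, can be computed by simple bisection. This is our own sketch, using the p_X = p_Y parametrization of the biased channel:

```python
from math import log2, inf

def shannon_entropy(probs):
    return -sum(q * log2(q) for q in probs if q > 0)

def hashing_rate(p, eta):
    """Achievable rate 1 - H(p_bar) for the biased channel with p_X = p_Y."""
    if eta == inf:
        p_x = p_y = 0.0
        p_z = p
    else:
        p_z = p * eta / (eta + 1)
        p_x = p_y = p / (2 * (eta + 1))
    return 1 - shannon_entropy([1 - p, p_x, p_y, p_z])

def hashing_threshold(eta, tol=1e-10):
    """Bisect for the error rate where the hashing-bound rate hits zero."""
    lo, hi = 1e-9, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if hashing_rate(mid, eta) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For η = 1/2 this recovers the familiar depolarizing hashing-bound threshold of about 18.9%, and the threshold increases monotonically with the bias, approaching 50% in the pure-dephasing limit.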
Our numerical implementation makes only a minor modification to the BSV decoder. To avoid changing the definitions of the tensors used in Ref. Bravyi et al. (2014), we use the symmetry by which we can exchange the role of Z noise in the modified surface code with the role of Y noise in the standard surface code. Then all of the definitions in Ref. Bravyi et al. (2014) carry over unchanged. The only difference is that we perform two tensor network contractions for each decoding sequence. There is an arbitrary choice as to whether to contract the network row-wise or column-wise. Rather than pick just one, we average the values of both contractions. We empirically observe improved performance with this modification.
For each value of the bias η = 0.5, 1, 3, 10, 30, 100, 300, 1000, ∞, we estimate the logical failure rate using the BSV decoder to obtain the sample mean failure rate on 30 000 random trials for a selection of physical error rates in the region near the threshold for code distances d = 9, 13, 17, 21. We use a rather large value of the bond dimension χ for our simulations, although for low bias we already observe that the decoder converges well with much smaller χ. However, we still do not observe complete convergence of the decoder in the regime of intermediate bias. The decoder convergence with χ is displayed in Fig. 4, which shows the estimate of the logical failure rate for a representative code distance near the threshold. Performance of the decoder and convergence with χ generally improve as bias increases again beyond this intermediate regime, but it is likely that further improvements are possible there. Although the decoder is not achieving an optimal failure rate in the intermediate regime, we see excellent convergence for most of the range of bias, and across the full range of bias we observe threshold behavior. Moreover, this threshold is at or near the hashing bound for all values of the bias. In the regions that are a fixed distance below the threshold, as in Fig. 3, we observe an exponential decay in the logical failure rate with the code distance d, where the decay rate may depend on the bias and is an increasing function of the distance below threshold. This constitutes strong evidence of an error correction threshold.
We note that we did not increase χ beyond the value used in our simulations, so we do not know if the saturation of the decoder performance at large bias is a real effect, or a side effect of having too small a value of χ. Although we observe convergence and threshold behavior, we do not know how much the performance might improve for larger values of χ since, as seen in Fig. 4, there is apparently still some room for improvement. It is possible that the saturation is a real effect, however, since even at infinite bias there are still logical errors of weight d that consist only of Z errors. This is in contrast to the classical repetition code, which has a threshold of 50% and a distance equal to the number of physical bits. One possibility to address this is to use a surface code with side lengths d1 and d2, where d1 and d2 are relatively prime, for example just choosing d2 = d1 + 1. We empirically observe that the Z-distance (i.e., the distance when restricted only to Z errors) of the code scales like d1·d2 for this modification of the surface code. In fact, on a toric code with side lengths d1 and d2 both odd and relatively prime, the Z-distance is provably d1·d2 Bravyi (2017). These observations are currently being explored, and will be addressed in more detail in forthcoming work.
To obtain an explicit estimate of the threshold p_th, we use the critical exponent method of Ref. Wang et al. (2003). If we define a correlation length ξ = |p − p_th|^(−ν) for some critical exponent ν, then in the regime where d ≪ ξ we expect that the behavior of the code is scale invariant. In this regime, since the code distance d corresponds to a physical length, the failure probability should depend only on the dimensionless ratio d/ξ, a conjecture that was first empirically verified in Ref. Wang et al. (2003). This suggests defining a rescaled variable x = (p − p_th) d^(1/ν) so that the failure rate expanded as a power series in x is explicitly scale invariant at the critical point x = 0 corresponding to p = p_th. It is then natural to consider a model for the failure rate given by a truncated Taylor expansion in the neighborhood around x = 0. We use a quadratic model, f = A + Bx + Cx², and then fit to this model to find p_th, ν and the nuisance parameters A, B, C. A discussion on the limits of the validity of this universal scaling hypothesis can be found in Ref. Watson and Barrett (2014). We plot our estimates of the failure rate for various values of p and d for the representative cases of η = 10, 100, ∞ in Fig. 5 together with rescaled data as a function of x. A visual inspection confirms good qualitative agreement with the model.
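The fitting procedure can be sketched as follows. The data here are synthetic, generated from a known scaling form purely to exercise the method (none of the numbers are from our simulations), and a simple grid search over (p_th, ν) stands in for a nonlinear fit, with the quadratic coefficients handled by linear least squares at each grid point.

```python
import numpy as np

def rescale(p, d, p_th, nu):
    """Rescaled variable x = (p - p_th) * d**(1/nu) of the scaling ansatz."""
    return (p - p_th) * d ** (1.0 / nu)

# Synthetic logical-failure data drawn from a known quadratic scaling form.
true_pth, true_nu = 0.28, 1.5
A, B, C = 0.15, 0.9, 0.7
ps = np.linspace(0.26, 0.30, 9)
ds = [9, 13, 17, 21]
pts = [(p, d) for d in ds for p in ps]
fs = np.array([A + B * rescale(p, d, true_pth, true_nu)
               + C * rescale(p, d, true_pth, true_nu) ** 2 for p, d in pts])

# Grid search over (p_th, nu); for each candidate, the quadratic
# coefficients are nuisance parameters fit by linear least squares.
best = None
for p_th in np.linspace(0.27, 0.29, 41):
    for nu in np.linspace(1.0, 2.0, 41):
        x = np.array([rescale(p, d, p_th, nu) for p, d in pts])
        _, res, *_ = np.linalg.lstsq(np.vander(x, 3), fs, rcond=None)
        sse = float(res[0]) if res.size else np.inf
        if best is None or sse < best[0]:
            best = (sse, p_th, nu)

_, fit_pth, fit_nu = best  # recovers the parameters used to generate the data
```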
The critical exponent method gives precise estimates of p_th with low statistical uncertainty. However, systematic biases might affect the accuracy of the estimate and must be accounted for. Finite-size effects typically cause threshold estimates to decrease as larger and larger code distances are added to the estimate. Additionally, the suboptimality of the decoder due to small χ values in the intermediate bias regime may have led us to overestimate each individual logical failure rate. This latter effect does not directly imply that we have also overestimated the threshold p_th, and the data remain consistent with the fit model in spite of this, as can be seen in Fig. 5. On balance, we expect that our estimates might decrease somewhat in the intermediate bias regime. Our final error bars were obtained by jackknife resampling, i.e., by computing, for each fixed η, the spread in estimates for p_th when rerunning the fit procedure with a single code distance removed, for each choice of the removed distance. Our results are summarized in Fig. 1.
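The jackknife error bars can be illustrated with the textbook leave-one-out formula (a generic sketch; in our case the leave-one-out "samples" are the code distances removed one at a time from the fit):

```python
import numpy as np

def jackknife_error(samples, estimator):
    """Leave-one-out jackknife standard error of an estimator on samples."""
    n = len(samples)
    leave_one_out = np.array(
        [estimator(np.delete(samples, i)) for i in range(n)])
    # Standard jackknife variance formula.
    return np.sqrt((n - 1) / n
                   * np.sum((leave_one_out - leave_one_out.mean()) ** 2))
```

For the sample mean, this reproduces the usual standard error s/√n, which is a quick consistency check on the implementation.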
Fault tolerant syndrome extraction.—
Our study has focused on the error correction threshold under the assumption of ideal syndrome extraction. To see if the gains observed in this setting carry over to applications in fault-tolerant quantum computing, one would need to consider the effects of faulty syndrome measurements and gates. A full fault-tolerant analysis is beyond the scope of this work, but we briefly consider the key issues here.
First, the BSV decoder that we have used to investigate this ultrahigh error threshold is not fault tolerant, but some clustering decoders are Duclos-Cianci and Poulin (2014). Developing efficient, practical fault-tolerant decoders with the highest achievable thresholds remains a significant challenge for the field.
An added complication with a biased noise model is that the gates that perform the syndrome extraction must at least approximately preserve the noise bias in order to maintain an advantage Aliferis et al. (2009). For the tailored surface code studied here, one could appeal to the techniques of Refs. Aliferis et al. (2009); Brooks and Preskill (2013), where we note that Y-type syndromes can be measured using a minor modification of the Z-syndrome measurement scheme. We note that these syndrome extraction circuits are significantly more complex (involving the use of both ancilla cat states and gate teleportation) compared with the standard approach for the surface code with unbiased noise, and this added complexity will undoubtedly reduce the threshold.
More optimistically, we note that the standard method for syndrome extraction in the surface code Fowler et al. (2012) can be directly adapted to this tailored code and maintains biased noise on the data qubits. Ancilla qubits are placed in the centers of both the plaquette and vertex stabilizers of Fig. 2, and they will be both initialized and measured in the X basis. Sequences of controlled-X (vertex) and controlled-Y (plaquette) gates, with the ancilla as the control and data qubits as the target, yield the required syndrome measurements analogous to the standard method. In this scheme, we note that high-rate Z errors on the ancilla are never mapped to the data qubits; low-rate X and Y errors on the ancilla can cause errors on the data qubits but the noise remains biased. Measurement errors will occur at the high rate, but this can be accommodated by repeated measurement. Note that, as argued by Aliferis and Preskill Aliferis et al. (2009), native controlled-X and controlled-Y gates are perhaps not well motivated in a system with a noise bias, but nonetheless this simple scheme illustrates that, in principle, syndromes can be extracted in this code while preserving the noise bias. To develop a full fault-tolerant syndrome extraction circuit in a noise-biased system would require a complete specification of the native gates in the system and an understanding of their associated noise models.
Our numerical results strongly suggest that in systems that exhibit an error bias, there are significant gains to be had for quantum error correction with codes and decoders that are tailored to exploit this bias. It is remarkable that the tailored surface code performs at the hashing bound across a large range of biases. This means that it is not just a good code for a particular error model, but broadly good for any local Pauli error channel once it is tailored to the specific noise bias. It is also remarkable that a topological code, limited to local stabilizers, does so well in this regard.
Many realizations of qubits based on nondegenerate energy levels of some quantum system have a bias—often quite significant—towards dephasing (Z errors) relative to energy-nonconserving errors (X and Y errors). This suggests tailoring other codes, and in particular other topological codes, to have error syndromes generated by X- and Y-type stabilizers. Even larger gains might be had by considering biased noise in qudit surface codes Anwar et al. (2014); Watson et al. (2015).
For qubit topological stabilizer codes, the threshold for exact ML decoding with general Pauli noise can be determined using the techniques of Ref. Bombin et al. (2012), which mapped the ML decoder’s threshold to a phase transition in a pair of coupled random-bond Ising models. It would be interesting to explore this phase boundary for general Pauli noise beyond the depolarizing channel that was studied numerically in Ref. Bombin et al. (2012).
We have employed the BSV decoder to obtain our threshold estimates because of its near-optimal performance, but it is not the most efficient or practical decoder for many purposes. One outstanding challenge is to find good practical decoders that can work as well or nearly as well across a range of biases. The clustering-type decoders Duclos-Cianci and Poulin (2010, 2014) appear well suited for this task, and they have the added advantage that some versions of these decoders (e.g., Ref. Bravyi and Haah (2013)) generalize naturally to all Abelian anyon models such as the qudit surface codes.
The most pressing open question related to this work is whether the substantial gains observed here can be preserved in the context of fault-tolerant quantum computing.
This work is supported by the Australian Research Council (ARC) via Centre of Excellence in Engineered Quantum Systems (EQuS) Project No. CE110001013 and Future Fellowship No. FT130101744, by the U.S. Army Research Office Grants No. W911NF-14-1-0098 and No. W911NF-14-1-0103, and by the Sydney Informatics Hub for access to high-performance computing resources.
- Bravyi and Kitaev (1998) S. B. Bravyi and A. Y. Kitaev, “Quantum codes on a lattice with boundary,” (1998), quant-ph/9811052 .
- Terhal (2015) B. M. Terhal, “Quantum error correction for quantum memories,” Rev. Mod. Phys. 87, 307 (2015), arXiv:1302.3428 .
- Dennis et al. (2002) E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, “Topological quantum memory,” J. Math. Phys. (N.Y.) 43, 4452 (2002), quant-ph/0110143 .
- Aliferis et al. (2009) P. Aliferis, F. Brito, D. P. DiVincenzo, J. Preskill, M. Steffen, and B. M. Terhal, “Fault-tolerant computing with biased-noise superconducting qubits: A case study,” New J. Phys. 11, 013061 (2009), arXiv:0806.0383 .
- Shulman et al. (2012) M. D. Shulman, O. E. Dial, S. P. Harvey, H. Bluhm, V. Umansky, and A. Yacoby, “Demonstration of entanglement of electrostatically coupled singlet-triplet qubits,” Science 336, 202 (2012), arXiv:1202.1828 .
- Nigg et al. (2014) D. Nigg, M. Müller, E. A. Martinez, P. Schindler, M. Hennrich, T. Monz, M. A. Martin-Delgado, and R. Blatt, “Quantum computations on a topologically encoded qubit,” Science 345, 302 (2014), arXiv:1403.5426 .
- Bravyi et al. (2014) S. Bravyi, M. Suchara, and A. Vargo, “Efficient algorithms for maximum likelihood decoding in the surface code,” Phys. Rev. A 90, 032326 (2014), arXiv:1405.4883 .
- Kitaev (2003) A. Y. Kitaev, “Fault-tolerant quantum computation by anyons,” Ann. Phys. (Amsterdam) 303, 2 (2003), quant-ph/9707021 .
- Brown et al. (2016) B. J. Brown, D. Loss, J. K. Pachos, C. N. Self, and J. R. Wootton, “Quantum memories at finite temperature,” Rev. Mod. Phys. 88, 045005 (2016), arXiv:1411.6643 .
- Wang et al. (2003) C. Wang, J. Harrington, and J. Preskill, “Confinement-Higgs transition in a disordered gauge theory and the accuracy threshold for quantum memory,” Ann. Phys. (Amsterdam) 303, 31 (2003), quant-ph/0207088 .
- Stace and Barrett (2010) T. M. Stace and S. D. Barrett, “Error correction and degeneracy in surface codes suffering loss,” Phys. Rev. A 81, 022317 (2010), arXiv:0912.1159 .
- Duclos-Cianci and Poulin (2010) G. Duclos-Cianci and D. Poulin, “Fast Decoders for Topological Quantum Codes,” Phys. Rev. Lett. 104, 050504 (2010), arXiv:0911.0581 .
- Duclos-Cianci and Poulin (2014) G. Duclos-Cianci and D. Poulin, “Fault-tolerant renormalization group decoder for Abelian topological codes,” Quantum Inf. Comput. 14, 0721 (2014), arXiv:1304.6100 .
- Fowler (2013) A. G. Fowler, “Optimal complexity correction of correlated errors in the surface code,” (2013), arXiv:1310.0863 .
- Wootton and Loss (2012) J. R. Wootton and D. Loss, “High Threshold Error Correction for the Surface Code,” Phys. Rev. Lett. 109, 160503 (2012), arXiv:1202.4316 .
- Delfosse and Tillich (2014) N. Delfosse and J.-P. Tillich, “A decoding algorithm for CSS codes using the X/Z correlations,” in 2014 IEEE International Symposium on Information Theory (IEEE, 2014) arXiv:1401.6975 .
- Hutter et al. (2014) A. Hutter, J. R. Wootton, and D. Loss, “Efficient Markov chain Monte Carlo algorithm for the surface code,” Phys. Rev. A 89, 022326 (2014), arXiv:1302.2669 .
- Torlai and Melko (2017) G. Torlai and R. G. Melko, “Neural Decoder for Topological Codes,” Phys. Rev. Lett. 119, 030501 (2017), arXiv:1610.04238 .
- Baireuther et al. (2017) P. Baireuther, T. E. O’Brien, B. Tarasinski, and C. W. J. Beenakker, “Machine-learning-assisted correction of correlated qubit errors in a topological code,” (2017), arXiv:1705.07855 .
- Krastanov and Jiang (2017) S. Krastanov and L. Jiang, “Deep neural network probabilistic decoder for stabilizer codes,” Sci. Rep. 7, 11003 (2017), arXiv:1705.09334 .
- Bombin et al. (2012) H. Bombin, R. S. Andrist, M. Ohzeki, H. G. Katzgraber, and M. A. Martin-Delgado, “Strong Resilience of Topological Codes to Depolarization,” Phys. Rev. X 2, 021004 (2012), arXiv:1202.1852 .
- Aliferis and Preskill (2008) P. Aliferis and J. Preskill, “Fault-tolerant quantum computation against biased noise,” Phys. Rev. A 78, 052331 (2008), arXiv:0710.1301 .
- Röthlisberger et al. (2012) B. Röthlisberger, J. R. Wootton, R. M. Heath, J. K. Pachos, and D. Loss, “Incoherent dynamics in the toric code subject to disorder,” Phys. Rev. A 85, 022313 (2012), arXiv:1112.1613 .
- Napp and Preskill (2013) J. Napp and J. Preskill, “Optimal Bacon-Shor codes,” Quantum Inf. Comput. 13, 0490 (2013), arXiv:1209.0794 .
- Brooks and Preskill (2013) P. Brooks and J. Preskill, “Fault-tolerant quantum computation with asymmetric Bacon-Shor codes,” Phys. Rev. A 87, 032310 (2013), arXiv:1211.1400 .
- Webster et al. (2015) P. Webster, S. D. Bartlett, and D. Poulin, “Reducing the overhead for quantum computation when noise is biased,” Phys. Rev. A 92, 062309 (2015), arXiv:1509.05032 .
- Robertson et al. (2017) A. Robertson, C. Granade, S. D. Bartlett, and S. T. Flammia, “Tailored Codes for Small Quantum Memories,” Phys. Rev. Applied 8, 064004 (2017), arXiv:1703.08179 .
- Wilde (2013) M. Wilde, Quantum Information Theory (Cambridge University Press, Cambridge, England, 2013) arXiv:1106.1445 .
- Lloyd (1997) S. Lloyd, “Capacity of the noisy quantum channel,” Phys. Rev. A 55, 1613 (1997), quant-ph/9604015 .
- Shor (2002) P. W. Shor, “The quantum channel capacity and coherent information,” lecture notes, MSRI Workshop on Quantum Computation, San Francisco (November 2002).
- Devetak (2005) I. Devetak, “The private classical capacity and quantum capacity of a quantum channel,” IEEE Trans. Inf. Theory 51, 44 (2005), quant-ph/0304127 .
- DiVincenzo et al. (1998) D. P. DiVincenzo, P. W. Shor, and J. A. Smolin, “Quantum-channel capacity of very noisy channels,” Phys. Rev. A 57, 830 (1998), quant-ph/9706061 .
- Bravyi (2017) S. Bravyi, private communication (2017).
- Watson and Barrett (2014) F. H. E. Watson and S. D. Barrett, “Logical error rate scaling of the toric code,” New J. Phys. 16, 093045 (2014), arXiv:1312.5213 .
- Fowler et al. (2012) A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, “Surface codes: Towards practical large-scale quantum computation,” Phys. Rev. A 86, 032324 (2012), arXiv:1208.0928 .
- Anwar et al. (2014) H. Anwar, B. J. Brown, E. T. Campbell, and D. E. Browne, “Fast decoders for qudit topological codes,” New J. Phys. 16, 063038 (2014), arXiv:1311.4895 .
- Watson et al. (2015) F. H. E. Watson, H. Anwar, and D. E. Browne, “Fast fault-tolerant decoder for qubit and qudit surface codes,” Phys. Rev. A 92, 032309 (2015), arXiv:1411.3028 .
- Bravyi and Haah (2013) S. Bravyi and J. Haah, “Quantum Self-Correction in the 3D Cubic Code Model,” Phys. Rev. Lett. 111, 200501 (2013), arXiv:1112.3252 .