# Numerical Techniques for Finding the Distances of Quantum Codes

###### Abstract

We survey the existing techniques for calculating code distances of classical codes and apply these techniques to generic quantum codes. For classical and quantum LDPC codes, we also present a new linked-cluster technique. It reduces the complexity exponents of all existing deterministic techniques designed for codes with small relative distances (which include all known families of quantum LDPC codes), and also surpasses the probabilistic technique for sufficiently high code rates.

## I Introduction

Quantum error correction (QEC) [1, 2, 3] is a critical part of quantum computing due to the fragility of quantum states. To date, surface (toric) quantum codes [4, 5] and related topological color codes [6, 7, 8] have emerged as prime contenders [9, 10] for efficient quantum design due to two important advantages. First, they only require simple local gates for quantum syndrome measurements, and second, they efficiently correct errors below a threshold of about 1% per gate. Unfortunately, the locality also limits [11] such codes to an asymptotically zero rate, which would make a useful quantum computer prohibitively large. Therefore, there is much interest in designing feasible quantum codes with no locality restrictions.

A more general class of codes is quantum low-density-parity-check (LDPC) codes [12, 13]. These codes assume no locality but only require that stabilizer generators (parity checks) have low weight. Unlike surface or color codes, quantum LDPC codes can have a finite rate $R = k/n$. Also, long LDPC codes have a nonzero error probability threshold, both in the standard setting when a syndrome is measured exactly, and in a fault-tolerant setting, when syndrome measurements include errors [14]. This non-zero error threshold is even more noteworthy given that known quantum LDPC codes have distances scaling as a square root of the blocklength $n$, unlike the linear scaling available in classical LDPC codes [15, 16, 17, 18]. Quantum LDPC codes can have finite rate and linear distance [19] if the weights of stabilizer generators are allowed to scale as a square root of $n$. An important open problem is to find bounds on the distance of quantum LDPC codes with stabilizer generators of limited weight.

This paper addresses numerical algorithms for finding distances of quantum and classical LDPC codes. To make a valid comparison, we first survey several existing classical algorithms that were previously used for generic random codes meeting the Gilbert-Varshamov (GV) bound. Here we re-apply these techniques to find the distances of quantum codes. Then we turn to new techniques that are specific to LDPC codes. Note that most error patterns for such codes form small clusters that affect disjoint sets of stabilizer generators [14]. While some errors can have huge weight, they can always be detected if the size of each cluster is below the code distance $d$. We then design an algorithm that verifies the code distance by checking the error patterns that correspond to connected error clusters. For any error weight $w$, such clusters form an exponentially small fraction of generic errors of the same weight. Therefore, we consider the worst-case scenario that holds for any LDPC code and can be applied in the quantum setting. This cluster-based algorithm exponentially reduces the complexity of the known deterministic techniques for sufficiently small relative distance $\delta = d/n$, which is the case for all known families of weight-limited quantum LDPC codes. The new algorithm also outperforms probabilistic techniques for high-rate codes with small relative distance.

## II Background

Let $\mathcal{C}$ be a linear $q$-ary code of length $n$ and dimension $k$ in the vector space $\mathbb{F}_q^n$ over the field $\mathbb{F}_q$. This code is uniquely specified by its parity check matrix $H$, namely $\mathcal{C} = \{c \in \mathbb{F}_q^n : Hc^T = 0\}$. Let $d$ denote the Hamming distance of the code $\mathcal{C}$.

A quantum (qubit) stabilizer code is a $2^k$-dimensional subspace of the $n$-qubit Hilbert space $\mathbb{C}^{2^n}$, a common eigenspace of all operators in an Abelian stabilizer group $\mathcal{S} \subset \mathcal{P}_n$, $-\mathbb{1} \notin \mathcal{S}$, where the $n$-qubit Pauli group $\mathcal{P}_n$ is generated by tensor products of the $X$ and $Z$ single-qubit Pauli operators. The stabilizer is typically specified in terms of its $n-k$ generators, $\mathcal{S} = \langle G_1, \ldots, G_{n-k} \rangle$; measuring the generators produces the syndrome vector. The weight of a Pauli operator is the number of qubits it affects. The distance $d$ of a quantum code is the minimum weight of an operator $U$ which commutes with all operators from the stabilizer $\mathcal{S}$, but is not a part of the stabilizer, $U \notin \mathcal{S}$.

A Pauli operator $U = i^m X^a Z^b$, where $a, b \in \{0,1\}^n$ and $X^a = X_1^{a_1} \cdots X_n^{a_n}$, $Z^b = Z_1^{b_1} \cdots Z_n^{b_n}$, can be mapped, up to a phase, to a quaternary vector $e \equiv a + \omega b$, where $\bar{\omega} \equiv \omega^2 = \omega + 1$. A product of two quantum operators corresponds to a sum (mod 2) of the corresponding vectors. Two Pauli operators commute if and only if the trace inner product $e_1 \ast e_2 \equiv e_1 \cdot \bar{e}_2 + \bar{e}_1 \cdot e_2$ of the corresponding vectors is zero. With this map, generators of a stabilizer group are mapped to the rows of a parity check matrix $H$ of an additive code over $\mathbb{F}_4$, with the condition that any two rows yield a nil trace inner product [20]. The vectors generated by the rows of $H$ correspond to stabilizer generators that act trivially on the code; these vectors form the degeneracy group and are omitted from the distance calculation.
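In the equivalent binary symplectic language, a Pauli operator $X^aZ^b$ is stored as a pair of binary vectors $(a, b)$, and the trace inner product reduces to the symplectic form $a_1 \cdot b_2 + b_1 \cdot a_2 \pmod 2$. A minimal sketch of this commutativity test (the function name is ours, chosen for illustration):

```python
import numpy as np

def commute(a1, b1, a2, b2):
    """Check whether Pauli operators X^a1 Z^b1 and X^a2 Z^b2 commute.

    Two Paulis commute iff the symplectic inner product
    a1 . b2 + b1 . a2 (mod 2) vanishes -- the binary form of the
    GF(4) trace inner product described above."""
    a1, b1, a2, b2 = map(np.asarray, (a1, b1, a2, b2))
    return (a1 @ b2 + b1 @ a2) % 2 == 0

# X on qubit 1 and Z on qubit 1 overlap and anticommute:
assert not commute([1, 0], [0, 0], [0, 0], [1, 0])
# X1 X2 and Z1 Z2 overlap on an even number of qubits and commute:
assert commute([1, 1], [0, 0], [0, 0], [1, 1])
```

The same routine, applied row-by-row, verifies that a candidate parity check matrix of a stabilizer code is self-orthogonal under this form.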

An LDPC code, quantum or classical, is a code with a sparse parity check matrix. For a regular $(\ell, m)$ LDPC code, every column and every row of $H$ have weights $\ell$ and $m$, respectively, while for an $(\ell, m)$-limited LDPC code these weights are only bounded from above by $\ell$ and $m$. A huge advantage of classical LDPC codes is that they can be decoded in linear time using belief propagation (BP) and related iterative methods [21, 22]. Unfortunately, this is not necessarily the case for quantum LDPC codes, which have many short loops of length four in their Tanner graphs. In turn, these loops cause a drastic deterioration in the convergence of the BP algorithm [23]. This problem can be circumvented with specially designed quantum codes [24, 18], but a general solution is not known. One alternative that has polynomial complexity in $n$ and approaches linear complexity for very low error rates is the cluster-based decoding of Ref. [14].

## III Generic techniques for distance calculation

The problem of verifying the distance of a linear code (finding a minimum-weight codeword) is related to the decoding problem: find an error of minimum weight that gives the same syndrome as the received vector. The number of required operations usually scales exponentially with the blocklength $n$, and we characterize the complexity $2^{nF}$ by its exponent $F$ (in the limit $n \to \infty$). For example, for a linear $q$-ary code with $k$ information symbols, inspection of all $q^k$ distinct codewords has (time) complexity exponent $F = R \log_2 q$, where $R = k/n$ is the code rate. Given substantially larger memory, one can instead consider the syndrome table that stores the list of all $q^{n-k}$ syndromes and coset leaders. This setting gives (space) complexity exponent $(1-R)\log_2 q$.
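As a baseline illustration of the exhaustive exponent $F = R\log_2 q$ (a toy sketch for the binary case, not one of the surveyed algorithms; the names are ours):

```python
import itertools
import numpy as np

def distance_brute_force(G):
    """Minimum Hamming weight over all 2^k - 1 nonzero codewords of the
    binary code with k x n generator matrix G.  Time ~ 2^k = 2^{Rn},
    i.e., complexity exponent F = R in the notation of the text."""
    G = np.asarray(G) % 2
    k, n = G.shape
    best = n
    for m in itertools.product([0, 1], repeat=k):
        if not any(m):
            continue                       # skip the zero codeword
        w = int(((np.array(m) @ G) % 2).sum())
        best = min(best, w)
    return best

# [7,4] Hamming code in standard form, distance 3:
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
assert distance_brute_force(G) == 3
```

This is only feasible for very small $k$, which is precisely why the techniques surveyed below matter.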

### III-A Sliding window (SW) technique

This technique was proposed in Ref. [25] for correction of binary errors and generalized in Ref. [26] to soft-decision decoding (where more reliable positions have higher error costs). A related technique has also been considered in Refs. [27, 28]. The following proposition addresses this technique for quantum codes. Let $h_q(x) = x\log_q(q-1) - x\log_q x - (1-x)\log_q(1-x)$ be the $q$-ary entropy function. Below we consider both generic stabilizer codes and those that meet the quantum GV bound

$$ \frac{k}{n} \,=\, 1 - 2\,h_4(d/n), \qquad n \to \infty. \qquad (1) $$

###### Proposition 1.

Code distance of a random quantum stabilizer code can be found with complexity exponent

$$ F_{\rm SW}(R,\delta) \,=\, (1+R)\, h_4(\delta), \qquad \delta \equiv d/n. \qquad (2) $$

For random stabilizer codes that meet the GV bound (1),

$$ F_{\rm SW} \,=\, \tfrac{1}{2}\,(1-R^2). \qquad (3) $$

###### Proof.

The SW technique uses only $k$ consecutive positions to recover a codeword of a $q$-ary linear code. For example, any $k$ consecutive positions suffice in a cyclic code. It is also easy to verify that in most (random) generator matrices any $k$ consecutive columns form a submatrix of the maximum rank $k$. Thus, in most random codes, a codeword can be recovered by encoding its (error-free) $k$ consecutive symbols. To find a codeword of any given weight $w$, we choose a sliding window of length $k$ that begins in position $i = 1, \ldots, n$ (cyclically). Note that a sliding window can change its weight only by one when it moves from position $i$ to $i+1$; thus at least one of the windows will have no more than the average Hamming weight $s = \lceil wk/n \rceil$. Our algorithm takes all possible positions $i$ and weights $s \le \lceil wk/n \rceil$. We assume that the current window is corrupted in $s$ positions and encode all

$$ N_s \,=\, \binom{k}{s}\,(q-1)^s \qquad (4) $$

vectors of length $k$ and weight $s$. The procedure stops for some $w$ once we find an encoded codeword of weight $w$. Finally, such a vector is tested for linear dependence with the rows of the parity check matrix $H$. This gives the overall SW-complexity of order $n^2 N_s$ with complexity exponent $F = R\,[\,h_2(\delta) + \delta\log_2(q-1)\,]$.

To apply the SW procedure to a (degenerate) quantum code, note that an $[[n,k]]$ stabilizer code is related to an additive quaternary code that is defined in a space of $4^n$ vectors and has only $2^{n-k}$ distinct syndromes, where $n-k$ is the redundancy of the quantum code. Thus, the effective rate is $\tilde{R} = (n+k)/2n$ (this construction is analogous to the pseudogenerators introduced in Ref. [29]), which gives the binary complexity exponent (2). Finally, estimate (3) follows from (1).
∎
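A minimal binary-case sketch of the SW search follows; the helper names are ours, and a cyclic code is used so that every window of $k$ consecutive positions is an information set, as assumed in the proof:

```python
import itertools
import numpy as np

def gf2_inv(A):
    """Invert a square binary matrix over GF(2); return None if singular."""
    A = (np.array(A) % 2).astype(int)
    n = A.shape[0]
    M = np.concatenate([A, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r, col]), None)
        if piv is None:
            return None
        M[[col, piv]] = M[[piv, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return M[:, n:]

def distance_sliding_window(G):
    """Sliding-window distance search for a binary code with k x n
    generator G.  For each target weight w, some cyclic window of k
    consecutive positions carries at most s = ceil(w*k/n) nonzero
    symbols of a weight-w codeword; every pattern of weight <= s on an
    invertible window is re-encoded to a full codeword."""
    G = np.asarray(G) % 2
    k, n = G.shape
    best = n
    for w in range(1, n + 1):
        s_max = -(-w * k // n)                    # ceil(w*k/n)
        for start in range(n):
            window = [(start + j) % n for j in range(k)]
            Gw_inv = gf2_inv(G[:, window])
            if Gw_inv is None:                    # not an information set
                continue
            for s in range(1, s_max + 1):
                for supp in itertools.combinations(range(k), s):
                    v = np.zeros(k, dtype=int)
                    v[list(supp)] = 1
                    wt = int(((v @ Gw_inv @ G) % 2).sum())
                    if wt:
                        best = min(best, wt)
        if best <= w:                             # no lighter codeword exists
            return best
    return best

# Cyclic [7,4,3] Hamming code, generator polynomial 1 + x + x^3:
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [0, 0, 1, 1, 0, 1, 0],
     [0, 0, 0, 1, 1, 0, 1]]
assert distance_sliding_window(G) == 3
```

The inner enumeration over weight-$s$ patterns is the source of the exponent (2); everything else is polynomial overhead.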

Note that classical binary codes that meet the GV bound have complexity exponent $F = R(1-R)$, which achieves its maximum at $R = 1/2$. By contrast, quantum codes achieve the maximum complexity $F = 1/2$ at the rate $R = 0$. Note also that quantum codes of low rate and small relative distance have a complexity exponent of order $\delta\log_2(1/\delta)$, dominated by the factor logarithmic in $\delta$.

### III-B Random window (RW) technique [30, 31, 32]

###### Proposition 2.

Code distance of a random quantum stabilizer code can be found with complexity exponent

$$ F_{\rm RW} \,=\, h_2(\delta) \,-\, \frac{1-R}{2}\; h_2\!\left(\frac{2\delta}{1-R}\right). \qquad (5) $$

###### Proof.

Given a random $q$-ary linear code, we randomly choose $m = k(1+\epsilon)$ positions, where $\epsilon$ is some small positive number. We wish to find an $m$-set that intersects the support of some unknown codeword of weight $w$ in at most $p$ positions, for a fixed small $p$. Let $T$ denote the number of random trials needed to find such a set with a high probability $1 - o(1)$. Also, let $L$ be the minimum number of $m$-sets needed to necessarily cover any (unknown) $w$-set in this way. It is easy to check [33] that

$$ L \,\ge\, \binom{n}{w} \bigg/ \sum_{i=0}^{p} \binom{m}{i}\binom{n-m}{w-i} \qquad (6) $$

and that $T = O(nL)$ trials suffice. Below we let $\epsilon \to 0$ and keep $p$ fixed.

The RW-algorithm performs $T$ trials of choosing $m$ random positions. Each trial gives a random $k \times m$ submatrix $G_m$ of a (random) generator matrix $G$. It is easy to verify that $G_m$ has full rank $k$ with a high probability (moreover, most generator matrices have all their $k \times m$ submatrices of rank $k$ or nearly so). Thus, a typical $m$-set has a subset of $k$ information bits. If the current $m$-set includes such a subset, we consider all vectors of weight $p$ or less on these information bits and re-encode them into the codewords of length $n$. Otherwise, we discard the $m$-set and proceed further. The algorithm stops once we obtain a codeword of weight $w$. The overall complexity has the order of $T$ times a polynomial factor, with the binary complexity exponent $F = h_2(\delta) - (1-R)\, h_2\!\left(\frac{\delta}{1-R}\right)$ for classical binary codes.

For stabilizer codes, we obtain (5) using their effective rate $\tilde{R} = (n+k)/2n$. ∎
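A simplified binary sketch in the spirit of the RW search (the trial budget, the choice $p = 2$, and all names are our own illustrative assumptions):

```python
import itertools
import numpy as np

def gf2_inv(A):
    """Invert a square binary matrix over GF(2); return None if singular."""
    A = (np.array(A) % 2).astype(int)
    n = A.shape[0]
    M = np.concatenate([A, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r, col]), None)
        if piv is None:
            return None
        M[[col, piv]] = M[[piv, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return M[:, n:]

def distance_random_window(G, trials=200, p=2, seed=7):
    """Probabilistic search: sample a random k-set of positions; if it
    is an information set, re-encode every message pattern of weight
    <= p on it and keep the lightest nonzero codeword found.  Succeeds
    with high probability once some sampled information set meets a
    minimum-weight codeword in at most p positions."""
    G = np.asarray(G) % 2
    k, n = G.shape
    rng = np.random.default_rng(seed)
    best = n
    for _ in range(trials):
        window = sorted(rng.choice(n, size=k, replace=False))
        Gw_inv = gf2_inv(G[:, window])
        if Gw_inv is None:
            continue
        for s in range(1, p + 1):
            for supp in itertools.combinations(range(k), s):
                v = np.zeros(k, dtype=int)
                v[list(supp)] = 1
                wt = int(((v @ Gw_inv @ G) % 2).sum())
                if wt:
                    best = min(best, wt)
    return best

# [7,4,3] Hamming code: every information set meets some weight-3
# codeword in at most 2 positions, so p = 2 suffices here.
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [0, 0, 1, 1, 0, 1, 0],
     [0, 0, 0, 1, 1, 0, 1]]
assert distance_random_window(G) == 3
```

Unlike the deterministic techniques, the answer is guaranteed only with high probability, which is reflected in the trial count $T$ of the proof.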

### III-C Bipartition match (BM) technique [34]

###### Proposition 3.

Code distance of any quantum stabilizer code can be found with complexity exponent

$$ F_{\rm BM} \,=\, h_4(\delta). \qquad (7) $$

For random stabilizer codes that meet the GV bound (1),

$$ F_{\rm BM} \,=\, \tfrac{1}{2}\,(1-R). \qquad (8) $$

###### Proof.

We use a sliding (“left”) window of length $n/2$ starting in any position $i$. For any unknown vector of weight $w$, at least one position $i$ produces a window of the average weight (rounded down to the closest integer) $\lfloor w/2 \rfloor$. The remaining (right) window of length $n/2$ will have the weight $\lceil w/2 \rceil$. We calculate the syndromes of all vectors $u$ and $v$ of weights $\lfloor w/2 \rfloor$ and $\lceil w/2 \rceil$ on the left and right windows, respectively, and try to find two vectors that give identical syndromes, $Hu^T = Hv^T$, and therefore form a codeword $u + v$. Clearly, each set $\{u\}$ and $\{v\}$ has size $N$ of order $\binom{n/2}{w/2}(q-1)^{w/2}$. Finding two elements $u$, $v$ with equal syndromes requires complexity of order $N \log N$, by sorting the elements of the combined set. Thus, finding a code vector of weight $w$ in any classical $q$-ary code requires complexity of order $2^{nF}$, where $F = \frac{1}{2}[\,h_2(\delta) + \delta\log_2(q-1)\,]$. For binary codes on the GV bound, $F = (1-R)/2$. The arguments of the previous propositions then give exponents (7) and (8) for stabilizer codes. ∎

Note that the BM-technique works for any linear code, unlike the two previous techniques, which are provably valid only for random codes. It is also the only technique that can be transferred to quantum codes without any performance loss. Note also that $F_{\rm BM}$ is always below the SW exponent $F_{\rm SW}$, and is below the RW exponent $F_{\rm RW}$ for very high rates. This is due to the fact that $F_{\rm RW} \to h_2(\delta)$ for $R \to 1$, which is twice the value of the binary BM exponent $h_2(\delta)/2$.
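The bipartition match can be sketched as follows for a binary code; hashing the left-window syndromes replaces the sorting step with an equivalent table lookup (names are ours):

```python
import itertools
import numpy as np

def distance_bipartition(H):
    """Meet-in-the-middle distance search for a binary code with r x n
    parity check H.  A weight-w codeword splits, for some cyclic
    half-window, into a left part of weight floor(w/2) and a right part
    of weight ceil(w/2); equal syndromes on the two parts identify a
    codeword, since the disjoint supports sum to a weight-w vector."""
    H = np.asarray(H) % 2
    r, n = H.shape
    half = n // 2
    for w in range(1, n + 1):
        wl, wr = w // 2, w - w // 2
        for start in range(n):
            left = [(start + j) % n for j in range(half)]
            right = [(start + half + j) % n for j in range(n - half)]
            # syndromes of all weight-wl vectors on the left window
            table = set()
            for supp in itertools.combinations(left, wl):
                e = np.zeros(n, dtype=int)
                e[list(supp)] = 1
                table.add(tuple((H @ e) % 2))
            # match against weight-wr vectors on the right window
            for supp in itertools.combinations(right, wr):
                e = np.zeros(n, dtype=int)
                e[list(supp)] = 1
                if tuple((H @ e) % 2) in table:
                    return w        # first match at the smallest w is d
    return None

# [7,4,3] Hamming code parity check (columns are 1..7 in binary):
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
assert distance_bipartition(H) == 3
```

Because the weights ascend, the first match certifies the distance; no match at weight $w$ is possible when $w < d$.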

### III-D Punctured bipartition technique [35]

###### Proposition 4.

Code distance of a random quantum stabilizer code can be found with complexity exponent

$$ F_{\rm PB} \,=\, \frac{(1+R)\, h_4(\delta)}{2 - h_4(\delta)}. \qquad (9) $$

For random stabilizer codes that meet the GV bound (1),

$$ F_{\rm PB} \,=\, \frac{1-R^2}{3+R}. \qquad (10) $$

###### Proof.

We now combine the SW and BM techniques and consider a sliding window of length $\mu n$ that exceeds the dimension $k$ by a factor of $\mu/R > 1$. It is easy to verify that most random codes include at least one information $k$-set on any sliding window of length $\mu n$. Thus, any such window forms a punctured linear code with a smaller redundancy $\mu n - k$. Also, any codeword of weight $w$ has no more than the average weight $\mu w$ in one or more sliding windows. For simplicity, let $\mu n$ and $\mu w$ be even. We then use bipartition on each window and consider all vectors $u$ and $v$ of weight $\mu w/2$ on each half of length $\mu n/2$. The corresponding sets $\{u\}$ and $\{v\}$ have size $2^{n\mu h/2}$, where $h = h_2(\delta) + \delta\log_2(q-1)$. We then seek all matching pairs $(u, v)$ that have the same syndrome of the punctured code. Each such pair represents some code vector of the punctured code and is re-encoded to the full length $n$. For each $w$, we stop the procedure once we find a re-encoded vector of weight $w$. Thus, we use punctured codes and lower the BM-complexity to the order of $2^{n\mu h/2}$.

However, it can be verified that some (short) syndromes of length $\mu n - k$ can each appear in many vectors $u$ and $v$, unlike the original BM-case. It turns out [35] that the above choice of the parameter $\mu$ limits the number of such matching combinations to the same order $2^{n\mu h/2}$. Thus, we have to re-encode all these candidate code vectors of the punctured code. The end result is a smaller complexity of order $2^{nF}$, where $F = \mu h/2 = Rh/(2-h)$ for a binary code.

The transition from classical codes to quantum codes does not affect the bipartition itself. However, our sliding algorithm again depends on the effective quantum code rate $\tilde{R} = (n+k)/2n$. This changes the exponent $Rh/(2-h)$ for classical codes to the exponent (9) for stabilizer codes. The quantum GV bound (1) gives (10). ∎

For random codes of high rate that meet the GV bound, this technique gives the lowest known exponents: $(1-R^2)/(3+R)$ for stabilizer codes and $R(1-R)/(1+R)$ for binary codes. However, it cannot be provably applied to an arbitrary linear code, unlike the simpler bipartition technique. Finally, the above propositions can be applied to the narrower class of the Calderbank-Shor-Steane (CSS) codes. Here the parity check matrix is a direct sum $H = H_X \oplus H_Z$, and the commutativity condition simplifies to $H_X H_Z^T = 0$. A CSS code with equal-size blocks $H_X$ and $H_Z$ has the same effective rate $\tilde{R} = (n+k)/2n$ since both the $X$- and $Z$-codes include $(n+k)/2$ information bits. It is readily verified that CSS codes have binary complexity exponents given by expressions similar to Eqs. (2), (5), (7), (9), where one must substitute the quaternary entropy term $2h_4(\delta)$ with the binary entropy $h_2(\delta)$.
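As a small check of these CSS conditions, one can take $H_X = H_Z$ equal to the parity check matrix of the [7,4] Hamming code, which contains its dual; this yields the [[7,1,3]] Steane code. A sketch with our own helper names:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = (np.array(M) % 2).astype(int)
    rank = 0
    for col in range(M.shape[1]):
        piv = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if piv is None:
            continue
        M[[rank, piv]] = M[[piv, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

# Hamming [7,4] parity check; the code contains its dual, so the pair
# H_X = H_Z = H defines the [[7,1,3]] Steane code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
HX = HZ = H
assert not ((HX @ HZ.T) % 2).any()       # CSS commutativity: H_X H_Z^T = 0
n = H.shape[1]
k = n - gf2_rank(HX) - gf2_rank(HZ)      # number of logical qubits
assert k == 1
```

The same two checks (commutativity and rank counting) apply to any candidate CSS pair before a distance search is run.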

## IV Linked-cluster technique

Let $\mathcal{A}(\ell, m)$ be an ensemble of regular $(\ell, m)$ LDPC codes, in which every column and every row of the matrix $H$ has weight $\ell$ and $m$, respectively. The following technique is designed as an alternative to the BP technique used in [36] to find code distance. First, note that with quantum codes, BP can yield decoding failures [23], while our setting requires an error-free guarantee. The second, more important, reason is that we consider very specific, self-orthogonal LDPC codes that can be used in the quantum setting. These self-orthogonal codes represent very atypical elements of $\mathcal{A}(\ell, m)$ and can have drastically different parameters. In particular, the existing constructions of such codes have low distance scaling as $\sqrt{n}$ (up to logarithmic factors), whereas a typical $(\ell, m)$-code has a linearly growing distance. Thus, we consider the worst-case scenario in $\mathcal{A}(\ell, m)$, which can be provably applied to any code.

For an $(\ell, m)$-code, we represent all (qu)bits as nodes of a graph $\mathcal{G}$ with vertex set $V = \{1, \ldots, n\}$ and connect two nodes iff there is a parity check that includes both positions. A codeword is defined by its support $S \subseteq V$ and induces the subgraph $\mathcal{G}_S$ that forms one or more clusters and has no edges outside of $\mathcal{G}_S$. Generally, we will make no distinction between the set $S$ and the corresponding subgraph. Note that disconnected clusters affect disjoint sets of the parity checks. This implies

###### Lemma 1 (Lemma 1 from Ref. [14]).

A minimum-weight codeword of a $q$-ary linear code forms a linked cluster on $\mathcal{G}$.

###### Proof.

Let a minimum-weight support $S$ include disconnected parts, say $S_1$ and $S_2$. These parts satisfy disjoint sets of parity checks. Then the vectors generated by $S_1$ and $S_2$ separately belong to our code and have smaller weights. Contradiction. ∎

Linked-cluster algorithm. The following breadth-first algorithm inspects all fully-linked clusters of a given weight $w$. Let us assume that $i_0$ is the starting position in the support $S$ of an unknown codeword of weight $w$. Position $i_0$ belongs to some parity checks which form the list $P$. To satisfy the first parity check in $P$, we arbitrarily select some (odd) number of its remaining parity-check positions. These positions are now included in the current support $S$. Any time a new position is selected, we also append the list $P$ with the new parity checks which include this position. We then proceed with the subsequent parity checks as follows. Let a check $p \in P$ overlap with the previously processed parity checks in $a$ positions, and let $b$ be the number of 1s selected in these positions. Then $p$ can use only the remaining $m - a$ positions to pick up some $c$ new positions. If $b$ is odd, the algorithm adds some odd number $c$ of positions from $p$, but (temporarily) skips this check if $b$ is even. This parity check can be re-processed in some later step if its updated overlap count becomes odd. The process is stopped once we add $w$ positions. The result is a binary codeword with support $S$ if all processed odd-type checks are satisfied and all unprocessed checks have even overlap with $S$. At this point, adding some symbols in any even-type check can only increase the weight of a codeword. For a $q$-ary code, we perform summation over $\mathbb{F}_q$ and need to check the rank of the matrix formed by the corresponding columns of the check matrix $H$. For a quantum stabilizer code, we also need to verify that any obtained codeword is linearly independent from the rows of $H$.
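A simplified binary sketch of the idea: instead of processing parity checks in the breadth-first order described above, the code below grows connected supports on the adjacency graph directly and tests each one against $H$. This is equivalent on small examples but makes no attempt at the bookkeeping that gives the stated complexity; all names are ours.

```python
import itertools
import numpy as np

def distance_linked_cluster(H, w_max):
    """Minimum weight of a binary codeword whose support is a single
    connected cluster of size <= w_max on the adjacency graph of the
    code (two positions are linked iff they share a parity check).
    By the lemma above, this equals the code distance when w_max >= d."""
    H = np.asarray(H) % 2
    n = H.shape[1]
    adj = [set() for _ in range(n)]
    for row in H:
        for i, j in itertools.combinations(np.flatnonzero(row), 2):
            adj[i].add(j)
            adj[j].add(i)
    best = None
    seen = set()

    def grow(cluster, frontier):
        nonlocal best
        key = frozenset(cluster)
        if key in seen or (best is not None and len(cluster) >= best):
            return
        seen.add(key)
        e = np.zeros(n, dtype=int)
        e[list(cluster)] = 1
        if not ((H @ e) % 2).any():      # columns in the cluster sum to 0
            best = len(cluster)
        if len(cluster) < w_max:
            for v in sorted(frontier):   # extend the cluster by one node
                grow(cluster | {v}, (frontier | adj[v]) - cluster - {v})

    for v0 in range(n):
        grow({v0}, set(adj[v0]))
    return best

# [7,4,3] Hamming code parity check (a dense toy stand-in for an LDPC
# matrix): the weight-3 codewords form connected clusters, so d = 3.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
assert distance_linked_cluster(H, 3) == 3
```

For a sparse $H$ the adjacency lists are short, which is what restricts the enumeration to the exponentially small family of connected supports counted below.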

At step $j$, the algorithm selects some $c_j \ge 1$ new positions of the current parity check, which can be done in at most $\binom{m}{c_j}$ ways. Thus, the total number $N_w$ of choices to select $w$ positions is bounded by

$$ N_w \,\le\, \sum_{t=1}^{w}\ \sum_{c_1 + \cdots + c_t = w}\ \prod_{j=1}^{t} (1 - \delta_{c_j, 0})\binom{m}{c_j}. $$

Here $\delta_{a,b}$ is the Kronecker symbol, and $t$ is the number of terms in the decomposition $w = c_1 + \cdots + c_t$.

To estimate $N_w$, introduce the generating function $\Phi(z) = \sum_{w} N_w z^w$. Easy summation gives, for $(1+z)^m < 2$:

$$ \Phi(z) \,=\, \frac{1}{2 - (1+z)^m}. \qquad (11) $$

Finally, we use the contour integration of $\Phi(z)/z^{w+1}$ to find the coefficients $N_w$, which scale as $z_*^{-w}$, where $z_*$ is the positive root of the denominator. Let $z_* = 2^{1/m} - 1$ in the case of a binary code; in the case of a $q$-ary code, each selected position can also take any of the $q-1$ nonzero values, which reduces the root by the factor $q-1$. We have:

###### Proposition 5.

A codeword of weight $w = \delta n$ in an $(\ell, m)$-code can be found with complexity exponent $F = \delta \log_2(1/z_*)$, where $\log_2(1/z_*)$ grows monotonically with $m$.

More precise estimates of the coefficients $N_w$ also give specific constant factors, which can be important for small values of $n$. Finally, note that while the cluster technique has high complexity for large $\ell$ and $m$, its exponent is linear in the relative distance $\delta$. In comparison, the deterministic techniques of Sec. III give higher exponents of order $\delta\log_2(1/\delta)$ in this limit. All known quantum LDPC codes with limited-weight stabilizer generators have $\delta \to 0$, and the linked-cluster technique gives the lowest complexity for these codes. Note that the RW technique also gives an exponent linear in $\delta$, which is bounded by $\delta\log_2\frac{2}{1-R}$. Our cluster technique still lowers this exponent for high code rates $R$.

## V Conclusion

In this paper, we considered different techniques for finding code distances of stabilizer quantum codes. For sparse quantum LDPC codes, we proposed a new cluster-based technique. This technique reduces the complexity exponents of the existing non-probabilistic algorithms for codes with sufficiently small relative distances. In particular, this is the case for all known families of quantum LDPC codes, which have distances of order $\sqrt{n}$ (up to logarithmic factors) or less. The cluster-based technique also beats the probabilistic random-window technique for high-rate codes.

Acknowledgment. This work was supported in part by the U.S. Army Research Office under Grant No. W911NF-11-1-0027, and by the NSF under Grant No. 1018935.

## References

- [1] P. W. Shor, “Scheme for reducing decoherence in quantum computer memory,” Phys. Rev. A, vol. 52, p. R2493, 1995. [Online]. Available: http://link.aps.org/abstract/PRA/v52/pR2493
- [2] E. Knill and R. Laflamme, “Theory of quantum error-correcting codes,” Phys. Rev. A, vol. 55, pp. 900–911, 1997. [Online]. Available: http://dx.doi.org/10.1103/PhysRevA.55.900
- [3] C. Bennett, D. DiVincenzo, J. Smolin, and W. Wootters, “Mixed state entanglement and quantum error correction,” Phys. Rev. A, vol. 54, p. 3824, 1996. [Online]. Available: http://dx.doi.org/10.1103/PhysRevA.54.3824
- [4] A. Y. Kitaev, “Fault-tolerant quantum computation by anyons,” Ann. Phys., vol. 303, p. 2, 2003. [Online]. Available: http://arxiv.org/abs/quant-ph/9707021
- [5] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, “Topological quantum memory,” J. Math. Phys., vol. 43, p. 4452, 2002. [Online]. Available: http://dx.doi.org/10.1063/1.1499754
- [6] H. Bombin and M. A. Martin-Delgado, “Topological quantum distillation,” Phys. Rev. Lett., vol. 97, p. 180501, Oct 2006. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevLett.97.180501
- [7] ——, “Optimal resources for topological two-dimensional stabilizer codes: Comparative study,” Phys. Rev. A, vol. 76, no. 1, p. 012305, Jul 2007.
- [8] ——, “Homological error correction: Classical and quantum codes,” Journal of Mathematical Physics, vol. 48, no. 5, p. 052105, 2007. [Online]. Available: http://scitation.aip.org/content/aip/journal/jmp/48/5/10.1063/1.2731356
- [9] R. Raussendorf and J. Harrington, “Fault-tolerant quantum computation with high threshold in two dimensions,” Phys. Rev. Lett., vol. 98, p. 190504, 2007. [Online]. Available: http://link.aps.org/abstract/PRL/v98/e190504
- [10] H. G. Katzgraber, H. Bombin, and M. A. Martin-Delgado, “Error threshold for color codes and random three-body ising models,” Phys. Rev. Lett., vol. 103, p. 090501, Aug 2009. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevLett.103.090501
- [11] S. Bravyi, D. Poulin, and B. Terhal, “Tradeoffs for reliable quantum information storage in 2D systems,” Phys. Rev. Lett., vol. 104, p. 050503, Feb 2010. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevLett.104.050503
- [12] M. S. Postol, “A proposed quantum low density parity check code,” 2001, unpublished. [Online]. Available: http://arxiv.org/abs/quant-ph/0108131
- [13] D. J. C. MacKay, G. Mitchison, and P. L. McFadden, “Sparse-graph codes for quantum error correction,” IEEE Trans. Info. Th., vol. 59, pp. 2315–30, 2004. [Online]. Available: http://dx.doi.org/10.1109/TIT.2004.834737
- [14] A. A. Kovalev and L. P. Pryadko, “Fault tolerance of quantum low-density parity check codes with sublinear distance scaling,” Phys. Rev. A, vol. 87, p. 020304(R), Feb 2013. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevA.87.020304
- [15] J.-P. Tillich and G. Zemor, “Quantum LDPC codes with positive rate and minimum distance proportional to $\sqrt{n}$,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), June 2009, pp. 799–803.
- [16] A. A. Kovalev and L. P. Pryadko, “Improved quantum hypergraph-product LDPC codes,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), July 2012, pp. 348–352.
- [17] ——, “Quantum Kronecker sum-product low-density parity-check codes with finite rate,” Phys. Rev. A, vol. 88, p. 012311, July 2013. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevA.88.012311
- [18] I. Andriyanova, D. Maurice, and J.-P. Tillich, “New constructions of CSS codes obtained by moving to higher alphabets,” 2012, unpublished.
- [19] S. Bravyi and M. B. Hastings, “Homological product codes,” 2013, unpublished.
- [20] A. R. Calderbank, E. M. Rains, P. M. Shor, and N. J. A. Sloane, “Quantum error correction via codes over GF(4),” IEEE Trans. Info. Theory, vol. 44, pp. 1369–1387, 1998. [Online]. Available: http://dx.doi.org/10.1109/18.681315
- [21] R. Gallager, “Low-density parity-check codes,” IRE Trans. Inf. Theory, vol. 8, no. 1, pp. 21–28, Jan 1962.
- [22] D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms. New York, NY, USA: Cambridge University Press, 2003. [Online]. Available: http://www.cs.toronto.edu/~mackay/itila/p0.html
- [23] D. Poulin and Y. Chung, “On the iterative decoding of sparse quantum codes,” Quant. Info. and Comp., vol. 8, p. 987, 2008.
- [24] K. Kasai, M. Hagiwara, H. Imai, and K. Sakaniwa, “Quantum error correction beyond the bounded distance decoding limit,” IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 1223 –1230, Feb 2012.
- [25] G. S. Evseev, “Complexity of decoding for linear codes.” Probl. Peredachi Informacii, vol. 19, pp. 3–8, 1983, (In Russian). [Online]. Available: http://mi.mathnet.ru/ppi1159
- [26] I. Dumer, “Suboptimal decoding of linear codes: partition technique,” IEEE Trans. Inf. Theory, vol. 42, no. 6, pp. 1971 –1986, Nov 1996.
- [27] K.-H. Zimmermann, “Integral Hecke modules, integral generalized Reed-Muller codes, and linear codes,” Technische Universität Hamburg-Harburg, Tech. Rep. 3-96, 1996.
- [28] M. Grassl, “Searching for linear codes with large minimum distance,” in Discovering Mathematics with Magma, ser. Algorithms and Computation in Mathematics, W. Bosma and J. Cannon, Eds. Springer Berlin Heidelberg, 2006, vol. 19, pp. 287–313. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-37634-7_13
- [29] G. White and M. Grassl, “A new minimum weight algorithm for additive codes,” in 2006 IEEE Intern. Symp. Inform. Theory, July 2006, pp. 1119–1123.
- [30] J. S. Leon, “A probabilistic algorithm for computing minimum weights of large error-correcting codes,” IEEE Trans. Info. Theory, vol. 34, no. 5, pp. 1354 –1359, Sep 1988.
- [31] E. A. Kruk, “Decoding complexity bound for linear block codes,” Probl. Peredachi Inf., vol. 25, no. 3, pp. 103–107, 1989, (In Russian). [Online]. Available: http://mi.mathnet.ru/eng/ppi665
- [32] J. T. Coffey and R. M. Goodman, “The complexity of information set decoding,” IEEE Trans. Info. Theory, vol. 36, no. 5, pp. 1031 –1037, Sep 1990.
- [33] P. Erdos and J. Spencer, Probabilistic methods in combinatorics. Budapest: Akademiai Kiado, 1974.
- [34] I. I. Dumer, “Two decoding algorithms for linear codes,” Probl. Peredachi Informacii, vol. 25, pp. 24–32, 1989, (In Russian). [Online]. Available: http://mi.mathnet.ru/ppi635
- [35] I. Dumer, “Soft-decision decoding using punctured codes,” IEEE Trans. Inf. Theory, vol. 47, no. 1, pp. 59–71, Jan 2001.
- [36] X.-Y. Hu, M. P. C. Fossorier, and E. Eleftheriou, “On the computation of the minimum distance of low-density parity-check codes,” in Communications, 2004 IEEE International Conference on, vol. 2, June 2004, pp. 767–771.