Entanglement-assisted quantum turbo codes
Abstract
An unexpected breakdown in the existing theory of quantum serial turbo coding is that a quantum convolutional encoder cannot simultaneously be recursive and noncatastrophic. These properties are essential for quantum turbo code families to have a minimum distance growing with blocklength and for their iterative decoding algorithm to converge, respectively. Here, we show that the entanglement-assisted paradigm simplifies the theory of quantum turbo codes, in the sense that an entanglement-assisted quantum (EAQ) convolutional encoder can possess both of the aforementioned desirable properties. We give several examples of EAQ convolutional encoders that are both recursive and noncatastrophic and detail their relevant parameters. We then modify the quantum turbo decoding algorithm of Poulin et al. so that the constituent decoders pass along only “extrinsic information” to each other rather than a posteriori probabilities, and this leads to a significant improvement in the performance of unassisted quantum turbo codes. Other simulation results indicate that entanglement-assisted turbo codes can operate reliably in a noise regime 4.73 dB beyond that of standard quantum turbo codes, when used on a memoryless depolarizing channel. Furthermore, several of our quantum turbo codes are within 1 dB or less of their hashing limits, so that the performance of quantum turbo codes is now on par with that of classical turbo codes. Finally, we prove that entanglement is the resource that enables a convolutional encoder to be both noncatastrophic and recursive, because an encoder acting on only information qubits, classical bits, gauge qubits, and ancilla qubits cannot simultaneously satisfy both properties.
quantum communication, entanglement-assisted quantum turbo code, entanglement-assisted quantum error correction, recursive, noncatastrophic, entanglement-assisted quantum convolutional code
1 Introduction
Classical turbo codes represent one of the great successes of the modern coding era [1, 2, 3, 4]. These near-Shannon-limit codes have efficient encodings, they offer astounding performance on memoryless channels, and their iterative decoding algorithm quickly converges to an accurate error estimate. They are “probabilistic codes,” meaning that they possess sufficient structure to ensure efficient encoding and decoding, yet they have enough randomness to allow for analysis of their performance with the probabilistic method [2, 4, 5].
The theory of quantum turbo codes is much younger than its classical counterpart [6], and we still stand to learn more regarding these codes’ performance and structure. Poulin et al. set this theory on a firm foundation [6] in an attempt to construct explicit quantum codes that come close to achieving the quantum capacity of a quantum channel [7, 8, 9, 10]. The structure of a quantum serial turbo code is similar to its classical counterpart—one quantum convolutional encoder [11, 12] followed by a quantum interleaver and another quantum convolutional encoder. The encoder “closer to the channel” is the inner encoder, and the one “farther from the channel” is the outer encoder. One of the insights of Poulin et al. in Ref. [6] was to “quantize” the classical notion of a state diagram [13, 14]—this diagram helps in analyzing important properties of the constituent quantum convolutional encoders that directly affect the performance of the resulting quantum turbo code.
Despite Poulin et al.’s success in providing a solid theoretical construction, they discovered an unexpected breakdown in the theory of quantum turbo codes. They found that quantum convolutional encoders cannot be simultaneously noncatastrophic and recursive, two desirable properties that can hold simultaneously for classical convolutional encoders and are one of the reasons underpinning the high performance of classical turbo codes [2, 4]. These two respective properties ensure that an iterative decoder performs well in estimating errors and that the turbo code family has a minimum distance growing almost linearly with the length of the code [5, 15, 16]. Quantum convolutional encoders cannot have these properties simultaneously, essentially because stabilizer operators must satisfy stringent commutativity constraints in order to form a valid quantum code (see Theorem 1 of Ref. [6] or the simplified proof in Ref. [17]). Thus, the existing quantum turbo codes with noncatastrophic constituent quantum convolutional encoders do not have a growing minimum distance, but Poulin et al. conducted numerical simulations showing that the performance of their quantum turbo codes is nevertheless good in practice.
The breakdown in the quantum turbo coding theory has led researchers to ponder
whether some modification of the quantum turbo code construction could achieve
both a growing minimum distance and a convergent iterative decoding
algorithm [18]. One possibility is simply to change the paradigm for
quantum error correction, by allowing the sender and the receiver access to
shared entanglement before communication begins. This paradigm is known as the
“entanglement-assisted” setting, and it
simplifies both the theory of quantum error correction [19, 20] and
the theory of quantum channels [21, 22]. In
entanglement-assisted quantum (EAQ) error correction, it is not necessary for
a set of stabilizer operators to satisfy the stringent commutativity
constraints that a standard quantum code should satisfy, allowing us to
produce EAQ codes from arbitrary classical codes.
Indeed, one can ask, in the spirit of the musing of Hayden et al. in Ref. [26]: “To what extent does the addition of free entanglement make quantum information theory similar to classical information theory?”
A naive attempt at constructing entanglement-assisted quantum turbo codes (EAQTCs) would be to produce them from classical turbo codes simply by following the recipe given in Refs. [19, 20]. That is, one could use the parity-check matrix of a classical turbo code to build an EAQTC according to the well-known CSS construction [27, 28] and its entanglement-assisted generalization [19, 20]. This approach, however, suffers from several drawbacks that are not present when one constructs EAQTCs from first principles:

Following the recipe of Refs. [19, 20] is really just a “blind import,” and as such, it precludes us from understanding the theory of EAQTCs at a deeper level. As discussed before, there are important theoretical issues with the theory of quantum turbo coding [6], and understanding the state diagram of an EAQTC could in turn be helpful for understanding issues having to do with recursiveness and noncatastrophicity.

The “blind import” approach does not provide any insight for achieving an encoding efficiency beyond the efficiency given by the encoding algorithms from Refs. [29, 30]. In comparison, the “first principles” approach given here leads to an encoder with a complexity linear in the block length. Also, a first-principles approach gives control over the number of memory qubits used by the constituent quantum convolutional encoders [31], and this is an important parameter contributing to the complexity of the encoder and the decoding algorithm.
The “blind import” approach does not provide any insight for constructing a decoding algorithm to take advantage of important effects such as degeneracy [32, 33], nor is it clear that the decoding algorithm will be as efficient as one could have from a first principles approach. Indeed, the first principles approach given here leads to a decoding algorithm (based on that from Ref. [6]) with a complexity linear in the block length.

The “blind import” approach does not give any clear control over the entanglement consumption rate of the resulting EAQTC, other than that which is given by the formulas in Refs. [34, 35]. Given that shared entanglement is a precious resource, it would be desirable to minimize its consumption. The first principles approach outlined here gives the quantum code designer precise control over the entanglement consumption rate of the resulting EAQTC.
Clearly, given all of the above, it is a worthwhile endeavor to construct a theory of EAQTCs from first principles.
2 Summary of Results
In this paper, we show that entanglement assistance simplifies the theory of quantum turbo codes in several important ways, we significantly enhance the performance of the quantum turbo decoding algorithm from Ref. [6], and we examine the effect of adding entanglement assistance on the performance of quantum turbo codes. Specifically,

We develop a “first principles” approach to entanglement-assisted quantum turbo codes. Although this theory is admittedly a straightforward extension of the theory of entanglement-assisted codes [19, 20] and quantum turbo codes [6], it is necessary for us to develop it in order to understand how notions such as the state diagram, noncatastrophicity, and recursiveness change in the entanglement-assisted setting.

We show how to circumvent the “no-go” theorem of Ref. [6]. In particular, we find many examples of EAQ convolutional encoders that can simultaneously be recursive and noncatastrophic.

We enhance the performance of the quantum turbo decoding algorithm from Ref. [6], by having the constituent decoders pass along “extrinsic information” to each other rather than a posteriori probabilities as in the decoding algorithm of Ref. [6]. This modification is consistent with how classical turbo decoding algorithms operate and is one of the reasons why they perform near the Shannon limit. In particular, several of our quantum turbo codes are within 1 dB or less of their hashing limits, so that the performance of quantum turbo codes is now on par with that of classical turbo codes.

Our simulations explore the effects of adding entanglement assistance in various ways to unassisted quantum turbo codes. The results of these simulations indicate that adding entanglement assistance improves their performance on a memoryless depolarizing channel (as one would expect), but they also suggest how to make judicious use of entanglement consumption in a quantum turbo code. We also consider the more practical situation in which the entanglement is noisy and find that particular entanglement-assisted quantum turbo codes have a certain amount of robustness to this noise.

We broaden the scope of the “no-go” theorem of Ref. [6] to quantum convolutional encoders acting on logical qubits, classical bits, ancilla qubits, and gauge (mixed-state) qubits (i.e., we prove that all such encoders cannot be both recursive and noncatastrophic). This result implies that entanglement is the resource enabling a quantum convolutional encoder to be both recursive and noncatastrophic.

We finally explore how recursiveness and noncatastrophicity (or their absence) are preserved under various resource substitutions in a quantum convolutional encoder, such as converting ancilla qubits to classical bits, converting ancilla qubits to ebits, etc. This exploration reveals the relationships underpinning different kinds of quantum convolutional encoders.
The ability of an entanglement-assisted quantum turbo code to be simultaneously recursive and noncatastrophic has important implications. A “quantized” version of the result in Ref. [5] implies that the quantum serial turbo code family formed by employing such an encoder along with another noncatastrophic encoder has a minimum distance growing with the length of the code [16], and noncatastrophicity implies that it has good iterative decoding performance. This result for EAQ convolutional encoders holds partly because all four Pauli operators acting on half of a Bell state are distinguishable when performing a measurement on both qubits in the Bell state (much like the superdense coding effect [36]). The ability of EAQ convolutional encoders to be simultaneously noncatastrophic and recursive is another way in which shared entanglement aids in a straightforward quantization of a classical result—this resource thus simplifies and enhances the existing theory of quantum turbo coding.
Regarding our simulations, we found two quantum convolutional encoders that are comparable to the first and third encoders of Poulin et al. [6], in the sense that they have the same number of memory qubits, information qubits, ancilla qubits, and a comparable distance spectrum. These encoders are noncatastrophic, and they become recursive after replacing all of the ancilla qubits with ebits. Additionally, the encoders with full entanglement assistance have a distance spectrum much improved over the unassisted encoders, essentially because entanglement increases the ability of a code to correct errors [37].
We constructed a quantum serial turbo code with these encoders and conducted four types of simulations: the first with the unassisted encoders, a second with full entanglement assistance, a third with the inner encoder assisted, and a fourth with the outer encoder assisted. Due to our enhancement of the quantum turbo decoding algorithm from Ref. [6], the unassisted quantum turbo codes perform significantly better than those in Ref. [6]. The encoders with full entanglement assistance have an improvement in performance over the unassisted ones, in the sense that they can operate reliably in a noise regime several dB beyond the unassisted turbo codes. This is due to the improvement in the distance spectrum and is also due to the encoder becoming recursive. Also, these codes come close to achieving the entanglement-assisted hashing bound [21, 38], which is the ultimate limit on their performance. The quantum turbo codes with inner-encoder entanglement assistance perform a few dB below the fully assisted code, but other simulations indicate that they have the advantage of being more tolerant to noise on the ebits.
We organize this paper as follows. Section 3 establishes notation and definitions for EAQ codes similar to those in Ref. [6], and it also shows a way in which an EAQ code with only ebits is remarkably similar to a classical code. Section 4 defines the state diagram of an EAQ convolutional encoder—it reveals whether an encoder is noncatastrophic and recursive, and we review how to check for these properties. Section 5 gives several examples of noncatastrophic, recursive EAQ convolutional encoders and details their distance spectra. We discuss the construction of an EAQ serial turbo code in Section 6 and give several combinations of serial concatenations that have good minimum-distance scaling. In Section 7, we detail how to modify the quantum turbo decoding algorithm from Ref. [6] such that the constituent decoders pass along only extrinsic information, and we discuss why this leads to an improvement in performance. Section 8 contains our simulation results with accompanying interpretations of them. In Section 9, we show that entanglement is in fact the resource that enables a convolutional encoder to be both recursive and noncatastrophic—a corollary of Theorem 1 in Ref. [6] states that other resources such as classical bits, gauge qubits, and ancilla qubits do not help. Section 10 then discusses encoders that act on information qubits, ancilla qubits, ebits, and classical bits, and it gives an example of an encoder that can be recursive and noncatastrophic. Section 11 states some general observations regarding recursiveness and noncatastrophicity for different types of encoders. The conclusion summarizes our contribution and states many open questions.
3 EAQ Codes
We first review some important ideas from the theory of EAQ codes in order to prepare us for defining convolutional versions of them along with their corresponding state diagrams. The development is similar to that in Section III of Ref. [6].
The encoder of an EAQ code produces an encoded state by acting on a set of information qubits in a state , ancilla qubits, and ebits:
where
and the sender Alice possesses halves of the entangled pairs while the
receiver Bob possesses the other halves (see Figure 1).
In what follows, we abuse notation by having refer to the
“Clifford group” unitary operator that acts
as above, but having it also refer to a binary matrix that acts on binary
vectors—these binary vectors represent different Pauli operators that are
part of the specification of an EAQ code (e.g., see Ref. [20]).
Suppose now that Alice transmits her qubits of the encoded state over a noisy Pauli channel. Then the resulting state is where is some fold tensor product of Pauli operators (in what follows, we simply say an “qubit Pauli operator”). Suppose that Bob applies the inverse of the encoding to the state . The resulting state has the following form:
(1) 
where is some qubit Pauli operator, is an qubit Pauli operator, and is some qubit Pauli operator. Observe that
where if and otherwise. The fact that is invariant under the application of a Pauli operator implies that a quantum code can be degenerate (where two different errors mapping to the same syndrome have the same effect on the encoded state). Also, observe that
where denotes the four distinguishable Bell states and
Thus all four Pauli operators are distinguishable in this case by performing a Bell measurement (similar to the superdense coding effect [36]). Ebits do not contribute to the degeneracy of a quantum code because different errors lead to distinct measurement results.
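This superdense-coding-style distinguishability is easy to verify directly. The following sketch (plain Python with hand-rolled linear algebra, so that it is self-contained) applies each of the four Paulis to Alice's half of the Bell state and checks that the four outcomes are mutually orthogonal, and hence perfectly distinguishable by a Bell measurement:

```python
import math

# Single-qubit Pauli matrices as 2x2 nested lists (complex entries).
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def kron(a, b):
    # Kronecker product of two 2x2 matrices, giving a 4x4 matrix.
    return [[a[i][j] * b[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def apply(m, v):
    # Multiply a 4x4 matrix by a length-4 state vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def overlap(u, v):
    # Inner product <u|v>.
    return sum(u[i].conjugate() * v[i] for i in range(4))

# |Phi+> = (|00> + |11>)/sqrt(2), with amplitudes ordered |00>,|01>,|10>,|11>.
bell = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]

# Apply each Pauli to Alice's half only; Bob's half is untouched.
states = [apply(kron(p, I), bell) for p in (I, X, Y, Z)]

# The four results are the four orthogonal Bell states, so a Bell
# measurement on both qubits identifies which Pauli acted.
for i in range(4):
    for j in range(i + 1, 4):
        assert abs(overlap(states[i], states[j])) < 1e-12
```

This is exactly why errors acting on ebit halves never go unnoticed: each Pauli maps the shared pair to a distinct, orthogonal Bell state.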
Bob can perform basis measurements on the ancillas and Bell measurements on the ebits to determine the syndrome of the Pauli error :
Consider the following relation between the binary representations of , , , , and in (1):
The syndrome only partially determines , but it fully determines . Let us decompose the binary representation of as and that of as . When Bob performs his measurements, he determines , , and . That is, he determines the following relations between the components of and the components of the syndrome :
The syndrome also determines the components and of :
The phenomenon of degeneracy represents the most radical departure of quantum coding from classical coding [32, 33]. Consider two different physical errors and that differ only by operators acting on the ancillas:
where is a length zero vector (the binary representation of a fold tensor product of identity operators). These different errors lead to the same error syndrome. In the classical world, this would present a problem for error correction. But this situation does not cause a problem in the quantum world for the errors and —the logical error affecting the encoded quantum information is the same for both and , and Bob can correct either of these errors simply by applying after decoding.
We now define several sets of operators that are important for determining the properties and performance of EAQ codes. The set of harmless, undetected errors consists of all operators that have a zero syndrome, yet have no effect on the encoded state:
This set of operators is equivalent to the isotropic subgroup, in the language of Refs. [19, 20]. It is also analogous to the all-zero codeword of a classical code. The set of harmful, undetected errors consists of all operators that have a zero syndrome, yet change the quantum information in the encoded state:
where. This set of operators corresponds to a single logical transformation on the encoded state, depending on the choice of , and it is analogous to a single codeword of a classical code. The set corresponds to a particular logical transformation and syndrome, and is thus a logical coset:
It is analogous to a single erred codeword of a classical code if , , or is nonzero. The operator codewords of an EAQ code belong to the following set :
(2) 
where is an arbitrary qubit Pauli operator. The set is equivalent to the full set of logical operators for the code, and it is analogous to the set of all codewords of a classical code. These definitions lead to the definition of the minimum distance of an EAQ code as the minimum weight of an operator in :
This definition is similar to the definition of the distance of a classical
code, but it incorporates the coset structure of a quantum code. Thus, we can
determine the performance of the code in terms of distance by tracking its
logical operators, and this intuition is important when we move on to EAQ
convolutional codes. Also, the logical operators play an important role in
decoding because a maximum likelihood decoding algorithm for a quantum code
estimates the most likely logical error given the syndrome and a particular
physical noise model.
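In the binary representation mentioned earlier, where each Pauli operator corresponds to a pair of bit strings (z|x), the weight entering this minimum-distance definition can be computed as follows (a minimal sketch; the example operator is ours, not taken from the paper):

```python
def pauli_weight(z, x):
    # Weight of an n-qubit Pauli given its binary (z|x) representation:
    # the number of tensor factors on which it acts as X, Y, or Z.
    return sum(1 for zi, xi in zip(z, x) if zi or xi)

# Z (x) I (x) Y acts nontrivially on qubits 1 and 3, so its weight is 2.
assert pauli_weight((1, 0, 1), (0, 0, 1)) == 2
```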
We end this section with a final remark concerning EAQ codes and the musing of Hayden et al. in Ref. [26]. Suppose that an EAQ code does not exploit any ancilla qubits and uses only ebits. Then the features of the code become remarkably similar to those of a classical code. Degeneracy, a uniquely quantum feature of a code, does not occur in this case because the syndrome completely determines the error, in analogy with error correction in the classical world. Also, the code loses its coset structure, so that is equal to the identity operator, is equivalent to just one logical operator, and the definition of the code’s minimum distance is the same as the classical definition.
4 EAQ convolutional codes
An EAQ convolutional code is a particular type of EAQ code that has a convolutional structure. In past work on this topic [23, 24, 39], we adopted the “Grassl-Rötteler” approach to this theory [40], by beginning with a mathematical description of the code and determining a Grassl-Rötteler pearl-necklace encoder that can encode it. Here, we adopt the approach of Poulin et al. [6], which in turn heavily borrows from ideas in classical convolutional coding [13]. We begin with a seed transformation (a “convolutional encoder” or a “quantum shift-register circuit”) and determine its state diagram, which yields important properties of the encoder. We can always rearrange a Grassl-Rötteler pearl-necklace encoder as a convolutional encoder [41, 42, 43], but it is not clear that every convolutional encoder admits a form as a Grassl-Rötteler pearl-necklace encoder. For this reason and others, we adopt the Poulin et al. approach in what follows.
An EAQ convolutional encoder is a “Clifford
group” unitary that acts on memory qubits,
information qubits, ancilla qubits, and halves of ebits to produce a
set of memory qubits and physical or channel
qubits,
where acts on the output memory qubits, acts on the output physical qubits, acts on the input memory qubits, acts on the information qubits, acts on the ancilla qubits, and acts on the halves of ebits. Although the quantum states in these registers can be continuous in nature, the act of syndrome measurement discretizes the errors acting on them, and the above classical representation is useful for analysis of the code’s properties and the flow of the logical operators through the encoding circuit (recall that the goal of a decoding algorithm is to produce good estimates of logical errors). This representation is similar to the shift-register representation of classical convolutional codes, with the difference that the representation there corresponds to the actual flow of bit information through a convolutional circuit, while here it is merely a useful tool for analyzing the properties of the encoder.
The overall encoding operation for the code is the transformation induced by repeated application of the above seed transformation to a quantum data stream broken up into periodic blocks of information qubits, ancilla qubits, and halves of ebits while feeding the output memory qubits of one transformation as the input memory qubits of the next (see Figure 6 of Ref. [6] for a visual aid). The advantage of a quantum convolutional encoder is that the complexity of the overall encoding scales only linearly with the length of the code for a fixed memory size, while the decoding complexity scales linearly with the length of the code by employing a local maximum likelihood decoder combined with a belief propagation algorithm [6]. The quantum communication rate of the code is essentially while the entanglement consumption rate is , if the length of the code becomes large compared to .
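The frame-threading described above can be sketched abstractly. The seed rule below is a hypothetical classical stand-in (a real encoder acts on (z|x) bit pairs through its symplectic matrix), but it shows how the output memory of one frame feeds the next and why the total cost is linear in the number of frames for a fixed memory size:

```python
def convolutional_encode(seed, memory, frames):
    # Apply the seed transformation once per frame, feeding the output
    # memory of one application in as the input memory of the next.
    outputs = []
    for frame in frames:
        memory, physical = seed(memory, frame)
        outputs.append(physical)
    return memory, outputs

def toy_seed(mem, frame):
    # Hypothetical 1-bit-memory rule, purely for illustration: the new
    # memory is mem XOR frame[0]; the physical output is two bits.
    return mem ^ frame[0], (mem, frame[0] ^ frame[1])

final_mem, phys = convolutional_encode(toy_seed, 0, [(1, 0), (0, 1), (1, 1)])
assert phys == [(0, 1), (1, 1), (1, 0)] and final_mem == 0
```

Each frame costs a constant amount of work, so the total encoding cost grows linearly with the number of frames, mirroring the complexity claim above.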
4.1 State Diagram
The state diagram of an EAQ convolutional encoder is the most important tool for analyzing its properties, and it is the formal quantization of a classical convolutional encoder’s state diagram [13, 14]. It examines the flow of the logical operators through the encoder with a finitestate machine approach, and this representation is important for analyzing both its distance and its performance under the iterative decoding algorithm of Ref. [6]. The state diagram is a directed multigraph with vertices that we think of as memory states. We label each memory state with an qubit Pauli operator . We connect two vertices with a directed edge from , labeled as , if there exists a qubit Pauli operator , an qubit Pauli operator , and an qubit Pauli operator such that
(3) 
We refer to the labels and of an edge as the respective logical and physical label.
As an example, consider the transformation depicted in Figure 2. It acts on one memory qubit, one information qubit, and one half of an ebit to produce two channel qubits and one memory qubit. Figure 3 illustrates the state diagram corresponding to this transformation. There are four memory states because there is only one memory qubit, and there are 16 edges because there are four memory states and four logical operators for one information qubit and one ebit.
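The edge rule in (3) can be enumerated mechanically once the encoder is written as a binary matrix acting on (z|x) bit pairs. The sketch below uses a hypothetical invertible matrix in place of a genuine symplectic Clifford representation, purely to show how the multigraph is built:

```python
from itertools import product

# Hypothetical invertible 6x6 binary matrix standing in for the binary
# representation of a seed transformation on 1 memory qubit, 1 information
# qubit, and 1 ebit half (two (z|x) bits per qubit). A genuine Clifford
# encoder's matrix must additionally be symplectic; this one merely
# illustrates how the state diagram's edges are enumerated.
E = [
    [1, 1, 0, 0, 1, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
]

def mul(matrix, vec):
    # Matrix-vector product over GF(2).
    return tuple(sum(row[c] & vec[c] for c in range(len(vec))) % 2
                 for row in matrix)

# One directed edge per (memory state, logical label) pair: the first two
# bits are the input memory state, the remaining four the logical label;
# the output splits into the next memory state and the physical label.
edges = {}
for bits in product((0, 1), repeat=6):
    mem_in, logical = bits[:2], bits[2:]
    out = mul(E, bits)
    mem_out, physical = out[:2], out[2:]
    edges.setdefault(mem_in, []).append((logical, mem_out, physical))

# Four memory states (one memory qubit) and 16 logical labels per state.
assert len(edges) == 4 and all(len(v) == 16 for v in edges.values())
```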
4.2 Noncatastrophicity
We now recall the definition of noncatastrophicity in Ref. [6], which is the formal quantization of the definition in Section IX of Ref. [13]. The definition from Ref. [6] is the same for an EAQ convolutional encoder because it depends on the iterative decoding algorithm used to decode the code, and we can exploit a slight variation of the iterative decoding algorithm in Ref. [6] to decode EAQ convolutional codes. A path through the state diagram is a sequence , …, of vertices such that is an edge belonging to it. Each logical operator of in (2) corresponds to a path in the state diagram, with the sequence of vertices in the path being the states of memory traversed while encoding the logical operator. The physical and logical weights of a logical operator are equal to the sums of the corresponding weights of the edges traversed in a path that encodes the logical operator. A zero physical-weight cycle is a cycle in the state diagram such that all edges in the cycle have zero physical weight. Finally, an EAQ encoder acting on memory qubits, information qubits, ancilla qubits, and ebits is noncatastrophic if every zero physical-weight cycle in its state diagram has zero logical weight.
Why is this definition an appropriate definition of noncatastrophicity? First, suppose that we modify the circuit in Figure 2 so that the ebit is replaced by an ancilla qubit (it thus becomes the same as Figure 8 of Ref. [6]). Such a replacement leads to a doubling of the number of edges in the state diagram in our Figure 3 because the state diagram should include all of the transitions where a operator acts on the ancilla qubits (these all lead to other logical operators in their case). The new state diagram (see Figure 9 of Ref. [6]) then includes a self-loop at memory state with zero physical weight and nonzero logical weight, and it is thus catastrophic according to the definition. The problem with such a loop is that it can “throw off” the iterative decoding algorithm. If a error were to act on the second physical qubit in one frame of the code (while the identity acts on the rest), then “pushing” this error through the inverse of the encoder applies an operator to the ancilla qubit and produces one syndrome bit for that frame of the code, but it propagates errors onto every logical qubit in the stream while applying errors to every memory qubit. All of these other errors go undetected by an iterative decoder because the error propagation does not trigger any additional syndrome bits besides the initial one that the error triggered.
Observe that the state diagram in Figure 3 for the EAQ convolutional encoder does not feature a zero physical-weight cycle with nonzero logical weight. Thus, the encoder is noncatastrophic, illustrating another departure from the classical theory of turbo codes: noncatastrophicity in the quantum world is not only a property of the encoder, but it also depends on the resources available for encoding. If we analyze the above scenario that leads to catastrophic error propagation in the unassisted encoder, we see that it does not lead to such propagation for the entanglement-assisted encoder in Figure 2. Indeed, suppose again that a error acts on the second physical qubit in a particular frame of the code (with the identity acting on all other qubits). Pushing this error through the inverse of the encoder leads to an operator acting on the ebit and a operator acting on the memory. The operator is detectable by a Bell measurement, and the operator acting on the memory propagates to a operator acting on an ebit in the next frame and then to all ebits and information qubits in successive frames. These operators acting on the ebits are all detectable by a Bell measurement (whereas they are not detectable when acting on an ancilla), so the iterative decoding algorithm can still correct for errors in the propagation because all of these errors trigger syndromes.
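The noncatastrophicity test above reduces to a graph computation: restrict the state diagram to its zero physical-weight edges, and demand that every such edge lying on a cycle carries zero logical weight. A minimal sketch over an abstract edge list (all states and weights below are hypothetical, for illustration only):

```python
def noncatastrophic(edges):
    # edges: list of (u, v, physical_weight, logical_weight) tuples for
    # the full state diagram; we restrict to zero physical-weight edges.
    zero = [e for e in edges if e[2] == 0]
    adj = {}
    for u, v, _pw, _lw in zero:
        adj.setdefault(u, []).append(v)

    def reaches(src, dst):
        # Depth-first search: can src reach dst inside the restricted graph?
        seen, stack = set(), [src]
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(adj.get(node, []))
        return False

    # An edge u -> v lies on a zero physical-weight cycle iff v reaches u;
    # noncatastrophicity demands every such edge have zero logical weight.
    return all(lw == 0 for u, v, pw, lw in zero if reaches(v, u))

# Toy diagrams with hypothetical states and weights.
good = [('I', 'I', 0, 0), ('I', 'M', 2, 1), ('M', 'I', 1, 1)]
bad = good + [('M', 'M', 0, 1)]  # zero physical weight, nonzero logical
assert noncatastrophic(good) and not noncatastrophic(bad)
```

The `bad` diagram reproduces the catastrophic self-loop discussed above: an error can circulate forever, accumulating logical weight while emitting no physical (hence no syndrome) information.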
4.3 Recursiveness
Recursiveness is another desirable property for an EAQ convolutional encoder
when it is employed as the inner encoder of a quantum serial turbo
code.
It might seem demanding to determine whether this condition holds for every possible input, but we can exploit the state diagram to check it. The algorithm for checking recursiveness is as follows. First, we define an admissible path to be a path whose first edge is not part of a zero physical-weight cycle [6]. Now, consider any vertex belonging to a zero physical-weight loop and any admissible path beginning at this vertex with logical weight one. The encoder is recursive if no such path contains a zero physical-weight loop [6]. The idea is that a weight-one logical operator is not sufficient to drive a recursive encoder to a memory state that is part of a zero physical-weight loop—the minimum weight of a logical operator that does so is two.
The example encoder in Figure 2 is not recursive. The only vertex in the state diagram in Figure 3 belonging to a zero physical-weight cycle is the vertex labeled . If the logical input to the encoder is a operator followed by an infinite sequence of identity operators, the circuit outputs as the physical output, returns to the memory state , and then outputs the identity for the rest of time. Thus, the response to this weight-one input is finite, and the circuit is nonrecursive. Although this example is nonrecursive, Section 5 details many examples of EAQ convolutional encoders that are both recursive and noncatastrophic.
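Under the simplifying assumptions that the encoder's identity-driven evolution is deterministic and that the admissibility bookkeeping is elided, the recursiveness test can be sketched as an orbit computation: feed one weight-one input, then identities forever, and see whether the orbit settles into a zero physical-weight cycle. All states, inputs, and weights below are hypothetical:

```python
def is_recursive(step, zero_cycle_states, weight_one_inputs, identity):
    # step: (memory state, logical input) -> (next state, physical weight).
    # Drive the encoder with one weight-one input and then identities;
    # since the state space is finite, the orbit eventually cycles. The
    # encoder is recursive iff no such orbit settles into a cycle of zero
    # physical weight (i.e., every impulse response is infinite).
    for start in zero_cycle_states:
        for inp in weight_one_inputs:
            state, _ = step(start, inp)
            seen = []
            while state not in seen:
                seen.append(state)
                state, _ = step(state, identity)
            # The tail cycle begins at the first repeated state.
            tail = seen[seen.index(state):]
            if all(step(s, identity)[1] == 0 for s in tail):
                return False
    return True

def make_step(table):
    return lambda state, inp: table[(state, inp)]

# Toy 2-state machines ('id' is the identity input, 'X' the single
# weight-one input, 'I' the state on a zero physical-weight cycle).
recursive_table = {('I', 'id'): ('I', 0), ('I', 'X'): ('M', 1),
                   ('M', 'id'): ('M', 1), ('M', 'X'): ('I', 1)}
finite_table = dict(recursive_table)
finite_table[('M', 'id')] = ('I', 1)  # orbit falls back into the zero loop

assert is_recursive(make_step(recursive_table), ['I'], ['X'], 'id')
assert not is_recursive(make_step(finite_table), ['I'], ['X'], 'id')
```

The second machine mimics the Figure 2 example: the weight-one impulse produces a finite physical response before the encoder returns to its quiescent loop, so it is nonrecursive.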
4.4 Distance Spectrum
We end this section by reviewing the performance measures from Ref. [6], which are also quantizations of the classical measures [13]. The distance spectrum of an EAQ convolutional encoder is the number of admissible paths beginning and ending in memory states that are part of a zero physical-weight cycle, where the physical weight of each admissible path is and the logical weight is greater than zero. This performance measure is most similar to the weight enumerator polynomial of a quantum block code [44], which helps give an upper bound on the error probability of a nondegenerate quantum code on a depolarizing channel under maximum likelihood decoding [45]. The distance spectrum incorporates the translational invariance of a quantum convolutional code and gives an indication of its performance on a memoryless depolarizing channel. Appendix 13 details a simple method to compute the distance spectrum by using the state diagram and ideas rooted in Refs. [13, 46, 47, 48]. The free distance of an EAQ convolutional encoder is the smallest weight for which , and this parameter is one indicator of the performance of a quantum serial turbo code employing constituent convolutional encoders. Although one of the applications of our EAQ convolutional encoders is as the inner encoder in a quantum serial turbo coding scheme, we can also use them as outer encoders in a quantum serial turbo code and use the free distance to show that its minimum distance grows near-linearly when combined with a noncatastrophic, recursive inner encoder.
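The path-counting behind the distance spectrum can be sketched with a breadth-first search over a toy state diagram. This simplified version assumes, for termination, that every traversed edge has positive physical weight (the full computation in Appendix 13 must also handle zero physical-weight edges away from the start state), and counts first-return paths binned by physical weight:

```python
from collections import Counter

def distance_spectrum(edges, start, max_weight):
    # edges: list of (u, v, physical_weight, logical_weight) tuples.
    # Count paths that leave `start`, return to it for the first time,
    # and have positive logical weight, binned by total physical weight.
    adj = {}
    for u, v, pw, lw in edges:
        adj.setdefault(u, []).append((v, pw, lw))
    spectrum = Counter()
    frontier = [(v, pw, lw) for v, pw, lw in adj.get(start, [])
                if 0 < pw <= max_weight]
    while frontier:
        nxt = []
        for state, pw, lw in frontier:
            if state == start:
                if lw > 0:
                    spectrum[pw] += 1  # completed admissible return path
                continue
            for v, epw, elw in adj.get(state, []):
                if epw > 0 and pw + epw <= max_weight:
                    nxt.append((v, pw + epw, lw + elw))
        frontier = nxt
    return spectrum

# Toy diagram (hypothetical weights): one return path per physical weight.
toy = [('I', 'M', 1, 1), ('M', 'M', 1, 0), ('M', 'I', 1, 1)]
spectrum = distance_spectrum(toy, 'I', 4)
assert dict(spectrum) == {2: 1, 3: 1, 4: 1}
```

In this toy example the free distance would be 2, the smallest physical weight at which the spectrum is nonzero.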
5 Example Encoders
Our first example of an EAQ convolutional encoder is the simplest example that is both recursive and noncatastrophic. It exploits one memory qubit and one ebit to encode one information qubit per frame. We discuss this first example in detail, verifying its noncatastrophicity and recursiveness, and we give a method to compute its distance spectrum. Appendix 14 then gives tables that detail our other examples. We found all of these examples by picking encoders uniformly at random from the Clifford group, according to the algorithm in Section VI-A.2 of Ref. [49].
The seed transformation for our first example is as follows:
(4) 
where the first input qubit is the memory qubit, the second input qubit is the information qubit, the third is Alice’s half of the ebit, the first output qubit is the memory qubit, and the last two outputs are the physical qubits. We can abbreviate the above encoding by taking the binary representation of each row at the output and encoding it as a decimal number. For example, the first row has the binary representation , which is the decimal number 33. Thus, we can specify this encoder as .
The seed transformation in (4) leads to the state diagram of Figure 4, by exploiting (3). We can readily check that the encoder is noncatastrophic and recursive by inspecting Figure 4. The only cycle with zero physical weight is the self-loop at the identity memory state with zero logical weight. The encoder is thus noncatastrophic. To verify recursiveness, note again that the only vertex belonging to a zero physical-weight cycle is the self-loop at the identity memory state. We now consider all weight-one admissible paths that begin in this state. If we input a logical , we follow the edge to the memory state. Inputting the identity operator for the rest of time keeps us in the self-loop at memory state while still outputting nonzero physical-weight operators. We can then check that inputting a logical operator followed by identities keeps us in the self-loop at the memory state while still outputting nonzero physical-weight operators, and inputting a logical operator followed by identities keeps us in the self-loop at the memory state. The encoder is thus recursive. Appendix 14 lists many more examples of entanglement-assisted quantum convolutional encoders that are both recursive and noncatastrophic.
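The noncatastrophicity check just described can be automated: search the state diagram for a cycle whose edges all have zero physical weight but whose total logical weight is nonzero. The sketch below does this on the same illustrative edge format as before (states, weights, and diagrams are hypothetical, not the paper's actual encoders):

```python
# A minimal sketch (not the paper's actual encoders): check a state
# diagram for catastrophic cycles, i.e. cycles with zero physical
# weight but nonzero logical weight. Edge format:
# edges[state] = [(next_state, physical_weight, logical_weight), ...]
def is_noncatastrophic(edges):
    # Restrict to the subgraph of zero physical-weight transitions.
    zero = {s: [(t, lw) for (t, pw, lw) in outs if pw == 0]
            for s, outs in edges.items()}

    def cycle_with_logical_weight(start):
        # DFS over zero-weight edges, tracking accumulated logical weight
        # along simple paths that close back at `start`.
        stack = [(start, 0, frozenset())]
        while stack:
            state, lw, seen = stack.pop()
            for nxt, dlw in zero.get(state, []):
                if nxt == start and lw + dlw > 0:
                    return True          # catastrophic cycle found
                if nxt not in seen:
                    stack.append((nxt, lw + dlw, seen | {state}))
        return False

    return not any(cycle_with_logical_weight(s) for s in edges)

good = {"I": [("I", 0, 0), ("M", 2, 1)], "M": [("M", 1, 0), ("I", 1, 1)]}
bad  = {"I": [("I", 0, 1)]}   # zero physical weight, logical weight 1
print(is_noncatastrophic(good), is_noncatastrophic(bad))
```

Searching simple cycles suffices here, since any cycle with positive logical weight contains a simple cycle with positive logical weight.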
6 EAQ Turbo Codes
We comment briefly on the construction and minimum distance of a quantum serial turbo code that employs our example encoders as constituent encoders (the next section details our simulation results with different encoders). The construction of an EAQ serial turbo code is the same as that in Ref. [6] (see Figure 10 there), with the exception that we assume that Alice and Bob share entanglement in the form of ebits before encoding begins. Alice first encodes her stream of information qubits with the outer encoder, performs a quantum interleaver on all of the qubits, and then encodes the resulting stream with the inner encoder. The quantum communication rate of the resulting EAQ turbo code is where is the number of information qubits encoded by the outer encoder, is the number of physical qubits output from the outer encoder, and a similar convention holds for , , and the inner encoder. In order for the qubits to match up properly, must be equal to . The entanglement consumption rate of the code is where and are the total number of ebits consumed by the outer and inner encoder, respectively.
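The explicit rate expressions are elided in this copy, so the following sketch assumes the natural conventions for serial concatenation: the overall quantum rate is the product of the constituent rates, and the ebit rate counts ebits consumed per physical qubit. These assumptions reproduce the 1/9 and 1/4 rates quoted later in the text, and the father-protocol property that the quantum and ebit rates sum to one:

```python
from fractions import Fraction

def serial_rates(outer, inner):
    """Quantum communication rate and entanglement consumption rate of a
    serially concatenated code. Each encoder is (k, n, c): information
    qubits in, physical qubits out, and ebits consumed per frame."""
    ko, no, co = outer
    ki, ni, ci = inner
    assert no % ki == 0, "outer output must match up with inner input"
    frames = no // ki                 # inner frames per outer frame
    n_total = frames * ni             # physical qubits per outer frame
    q_rate = Fraction(ko, n_total)
    e_rate = Fraction(co + frames * ci, n_total)
    return q_rate, e_rate

# Assumed frame parameters: PTO1R is (1, 3, 0) unassisted, and PTO1R-EA
# is (1, 3, 2) with both ancillas replaced by ebits.
print(serial_rates((1, 3, 0), (1, 3, 0)))  # rate-1/9 unassisted turbo code
print(serial_rates((1, 3, 2), (1, 3, 2)))  # father-type code: Q + E = 1
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity in the rate bookkeeping.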
Perhaps the best combination for an EAQ serial turbo code is to choose the inner quantum convolutional encoder to be a recursive, noncatastrophic EAQ convolutional encoder and the outer quantum convolutional encoder to be a nonrecursive, noncatastrophic standard quantum convolutional encoder. This combination reduces entanglement consumption, ensures good iterative decoding performance, and ensures that the minimum distance of the quantum serial turbo code grows as where is the length of the code and is the free distance of the outer quantum convolutional encoder. A proof of this last statement given in Ref. [16] is essentially identical to the classical proof [5]. Section 8.1 shows that this combination also performs well in practice if noise occurs on the ebits. Additionally, choosing the outer quantum convolutional encoder to encode a highly degenerate code may increase the number of errors that the quantum turbo code can correct, in a vein similar to the results in Ref. [33] (though we have not yet fully investigated this possibility). Appendix 14 lists many examples of entanglementassisted quantum turbo codes and discusses their average minimum distance scaling.
7 Incorporating Extrinsic Information into the Quantum Turbo Decoding Algorithm
Poulin et al. proposed an iterative decoding algorithm for quantum turbo codes in Ref. [6]. Their algorithm is based on the exchange of a posteriori information between the constituent quantum convolutional decoders of a quantum turbo code. Since the decoders pass along a posteriori information, successive iterations of the constituent decoders are dependent on one another, and this gives rise to a detrimental positive feedback effect, which prevents their decoding algorithm from achieving the desired gains usually observed in iterative decoding.
To avoid the aforementioned situation, it is necessary to ensure that the a priori information directly related to a given information qubit is not reused in the other constituent decoder [1, 50]. Similar to the approach employed in classical turbo decoding [1, 51], this can be achieved by having one decoder remove a priori information from the a posteriori information before feeding it to the other decoder. More explicitly, the iterative decoding procedure should exchange only “extrinsic” information which is unknown and new to the other decoder.
To see how this works, let us consider a four-port Soft-Input Soft-Output (SISO) decoder [51, 52] that generates soft output information pertaining to a logical error and physical error . As shown in Figure 5, a SISO decoder exploits an A Posteriori Probability (APP) module that accepts the a priori information and as input and outputs the a posteriori information and . The corresponding extrinsic probabilities and for the qubit at time instant are then obtained by discarding the a priori information from the a posteriori information as follows [51, 52]:
(5) 
where and are normalization factors, which ensure that and , respectively. Furthermore, to avoid any numerical instabilities and to reduce the computational complexity, log-domain arithmetic is conventionally employed, which converts the multiplicative operations of (5) to additions, as given below [52]:
(6) 
Therefore, the inputs and outputs of a SISO decoder are the logarithmic a priori and extrinsic probabilities, respectively.
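The extrinsic-information step of (5)-(6) can be sketched directly. The example below assumes distributions over the four single-qubit Pauli labels and shows both the multiplicative form and its log-domain equivalent, where division becomes subtraction and the normalizer becomes a log-sum-exp (the specific probability values are illustrative):

```python
import math

# Sketch of the extrinsic-information computation in (5)-(6), assuming
# distributions over the four single-qubit Pauli labels I, X, Y, Z.
PAULIS = "IXYZ"

def extrinsic(posterior, prior):
    """Divide out the a priori information from the a posteriori
    information and renormalize (multiplicative form, as in (5))."""
    raw = {p: posterior[p] / prior[p] for p in PAULIS}
    norm = sum(raw.values())
    return {p: v / norm for p, v in raw.items()}

def log_extrinsic(log_posterior, log_prior):
    """Log-domain form, as in (6): subtraction replaces division; the
    normalizer becomes a log-sum-exp."""
    raw = {p: log_posterior[p] - log_prior[p] for p in PAULIS}
    log_norm = math.log(sum(math.exp(v) for v in raw.values()))
    return {p: v - log_norm for p, v in raw.items()}

prior = {"I": 0.7, "X": 0.1, "Y": 0.1, "Z": 0.1}
post  = {"I": 0.4, "X": 0.3, "Y": 0.2, "Z": 0.1}
print(extrinsic(post, prior))
```

These extrinsic distributions, rather than the a posteriori ones, are what one constituent SISO decoder passes to the other.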
Based on the above discussion, we have modified the turbo decoding algorithm of Ref. [6] so that the constituent decoders pass along only extrinsic information to each other. See Figure 6 for a depiction of the modified quantum turbo decoding algorithm [53]. In this figure, and denote the logarithmic a priori and extrinsic probabilities of , where . Analogous to Ref. [6], the inner SISO decoder of Figure 6 exploits the physical noise model , syndrome and a priori information , the last of which is initialized to be equiprobable. However, it outputs the extrinsic information , rather than the a posteriori information as in Ref. [6]. The extrinsic information is then interleaved to serve as the a priori information for the outer SISO decoder. The two decoders thereby engage in iterative decoding, which continues until either the a posteriori probability converges to a definite solution, or a prespecified maximum number of iterations is reached.
8 Simulation Results
We performed several simulations of EAQ turbo codes and detail the results in this section. This section begins with a description of the parameters of the constituent quantum convolutional encoders. We then describe how the simulation was run, and we finally discuss and interpret the simulation results.
Our simulation results presented here are certainly not intended to be an exhaustive comparison of codes, but they rather serve to illustrate the enhancement in performance from the modified algorithm described in Section 7, and they constitute an exploration of the effect of adding entanglement assistance to the encoders of Poulin et al. [6]—our original intent was to exploit their encoders, but the lack of a clear exposition of their decimal representation convention has precluded us from doing so. We instead randomly generated and filtered encoders that have attributes comparable to those of their encoders. The first encoder, dubbed “PTO1R,” acts on three input memory qubits, one information qubit, and two ancillas to produce three output memory qubits and three physical qubits. Its decimal representation is
and a truncated distance spectrum polynomial (see Appendix 13) for it is
This encoder is noncatastrophic and quasi-recursive, and its parameters and distance spectrum are comparable to those of Poulin et al.’s first encoder (though they did not comment on whether theirs is quasi-recursive) [6]. Replacing the two ancilla qubits in the encoder with two ebits gives an improvement in its truncated distance spectrum polynomial:
The encoder also becomes recursive after this replacement. Let “PTO1REA” denote the EA version of this encoder.
Our second encoder, dubbed “PTO3R,” acts on four input memory qubits, one information qubit, and one ancilla qubit to produce four output memory qubits and two physical qubits. Its decimal representation is
and a truncated distance spectrum polynomial for it is
The encoder is noncatastrophic and quasi-recursive, with its parameters and distance spectrum comparable to those of Poulin et al.’s third encoder. Replacing the ancilla with an ebit gives an improvement to the truncated distance spectrum polynomial:
and the encoder becomes recursive after this replacement. Let “PTO3REA” denote the EA version of the “PTO3R” encoder.
Our turbo codes consist of interleaved serial concatenation of the above encoders with themselves, and we varied the auxiliary resources of the encoders to be ebits, ancillas, or both. Concatenating PTO1R with itself leads to a rate 1/9 quantum turbo code, and concatenating PTO3R with itself leads to a rate 1/4 turbo code.
We simulated the performance of these EAQ turbo codes on a memoryless depolarizing channel with parameter $p$. The action of the channel on a single-qubit density operator $\rho$ is as follows:
$\rho \mapsto (1-p)\,\rho + \frac{p}{3}\left(X\rho X + Y\rho Y + Z\rho Z\right).$
The benchmarks for the performance of any standard or entanglement-assisted quantum code on a depolarizing channel are the hashing bounds [54, 21]. The quantum hashing bound determines the rate at which a random quantum code can operate reliably for a particular depolarizing parameter $p$, but note that it does not give the ultimate capacity limit because degeneracy can improve performance [33, 55, 56, 57]. On the other hand, the entanglement-assisted hashing bound does give the ultimate capacity at which an EAQ code can operate reliably for a particular depolarizing parameter $p$, whenever an unbounded amount of entanglement is available [21, 38]. The hashing bound for quantum communication is
$Q_{\mathrm{hash}}(p) = 1 - H_2(p) - p\log_2 3,$
where $H_2(\cdot)$ denotes the binary entropy function, and the hashing bound for an entanglement-assisted quantum code is
(7) $Q_{\mathrm{EA}}(p) = 1 - \frac{1}{2}\left[H_2(p) + p\log_2 3\right].$
The father protocol is a quantum communication protocol that achieves the entanglement-assisted quantum capacity [58, 59, 60], while attempting to minimize its entanglement consumption rate. Its entanglement consumption rate is provably optimal for certain channels such as the dephasing channel [61, 62, 63], but it is not necessarily optimal for the depolarizing channel. The entanglement consumption rate of the father protocol on a depolarizing channel is
$E(p) = \frac{1}{2}\left[H_2(p) + p\log_2 3\right].$
This entanglement consumption rate implies that the only resources involved in an encoded transmission of the father protocol are information qubits and ebits because the sum of its quantum communication rate and its entanglement consumption rate is equal to one. Figure 7 plots these different hashing bounds and depicts the location of these bounds for our different quantum turbo codes.
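The hashing-bound formulas themselves are elided in this copy of the text; the sketch below assumes the standard depolarizing-channel expressions, which are consistent with the text's statement that the father protocol's quantum communication rate and entanglement consumption rate sum to one:

```python
import math

def H2(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hashing_bound(p):
    """Unassisted hashing bound for the depolarizing channel."""
    return 1 - H2(p) - p * math.log2(3)

def ea_hashing_bound(p):
    """Entanglement-assisted hashing bound, eq. (7) (maximal entanglement)."""
    return 1 - (H2(p) + p * math.log2(3)) / 2

def father_ebit_rate(p):
    """Entanglement consumption rate of the father protocol."""
    return (H2(p) + p * math.log2(3)) / 2

p = 0.1
print(hashing_bound(p), ea_hashing_bound(p), father_ebit_rate(p))
```

As a sanity check, `ea_hashing_bound(p) + father_ebit_rate(p)` equals one for any `p`, matching the resource accounting described in the text, and `hashing_bound` changes sign near the well-known unassisted hashing limit of roughly 0.1893.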
One can also consider a “hashing region” of rates that are achievable for entanglement-assisted quantum communication [59, 58, 64]—this region is the more relevant benchmark for an entanglement-assisted code that does not consume the maximal amount of entanglement. For the case of a depolarizing channel, this region consists of all rate pairs $(Q, E)$ that satisfy the following bounds:
(8) $Q \leq E + 1 - H_2(p) - p\log_2 3$
(9) $Q \leq 1 - \frac{1}{2}\left[H_2(p) + p\log_2 3\right]$
where $Q$ is the quantum communication rate and $E$ is the entanglement consumption rate. The intersection of these two boundary lines occurs when $E = \frac{1}{2}\left[H_2(p) + p\log_2 3\right]$, the entanglement consumption rate of the father protocol. When an entanglement-assisted quantum code does not consume the maximal amount of entanglement possible (but rather at some rate $E$), we should compare its performance with the boundary in (8).
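A membership check for this region can be sketched as follows. The explicit boundary expressions for (8) and (9) are elided in this copy, so the sketch assumes the standard depolarizing-channel hashing region, which agrees with the text's observation that the two boundaries intersect at the father protocol's ebit rate:

```python
import math

def H2(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def in_hashing_region(q, e, p):
    """True if the rate pair (q, e) satisfies both region bounds,
    assuming the standard depolarizing-channel boundary expressions."""
    s = H2(p) + p * math.log2(3)      # entropy of the Pauli error
    return q <= e + 1 - s and q <= 1 - s / 2

p = 0.15
s = H2(p) + p * math.log2(3)
father = (1 - s / 2, s / 2)           # father protocol rate pair
print(in_hashing_region(father[0] - 1e-9, father[1], p))   # just inside
print(in_hashing_region(father[0] + 0.01, father[1], p))   # outside
```

The father protocol's rate pair sits exactly at the corner where the two boundary lines meet, so perturbing its quantum rate slightly inward keeps it in the region, while pushing it outward violates both bounds.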
We performed Monte Carlo simulations to determine the performance of our example EAQ turbo codes when decoded with an iterative decoding algorithm (the iterative decoding algorithm is as described in Section 7, with the exception that it decodes EAQ turbo codes). We developed a Matlab computer program for these simulations [65].
A single run of a simulation selects a quantum turbo code with a particular number of logical qubits and a random choice of interleaver, and it then generates a Pauli error randomly according to a depolarizing channel with parameter . This Pauli error leads to an error syndrome, and the syndrome and channel model act as inputs to the iterative decoding algorithm. The iterative decoding algorithm terminates if the hard decision on a recovery operation is the same decision from the previous iteration, or it terminates after a maximum number of iterations (though we never observed the number of iterations until convergence exceeding eight). One run of the simulation declares a failure if the estimated recovery operation is different from the correct recovery operation. The ratio of simulation failures to the total number of simulation runs is the word error rate (WER). In the cases in which errors occur more rarely, we ran every configuration (choice of code, depolarizing parameter, and number of logical qubits) until we observed at least 100 failures—this number guarantees a reasonable statistical confidence in the results of the simulations. Of course, in the cases where errors occur more frequently, we observed thousands of errors.
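The overall Monte Carlo loop can be sketched as below. The iterative decoder is replaced by a trivial placeholder (it always guesses the identity recovery), so the resulting numbers are illustrative of the run-until-100-failures procedure only, not of the paper's codes:

```python
import random

# Skeleton of the Monte Carlo WER estimation described above. The
# decoder is a stub, so the numbers are illustrative, not the paper's.
def sample_depolarizing_error(n_qubits, p, rng):
    """I with probability 1-p; else X, Y, or Z uniformly, per qubit."""
    return ["I" if rng.random() > p else rng.choice("XYZ")
            for _ in range(n_qubits)]

def decode_stub(syndrome_unused):
    return None  # placeholder: always guess the trivial recovery

def estimate_wer(n_qubits, p, min_failures=100, max_runs=10**6, seed=7):
    rng, failures, runs = random.Random(seed), 0, 0
    while failures < min_failures and runs < max_runs:
        error = sample_depolarizing_error(n_qubits, p, rng)
        recovery = decode_stub(None)
        # A run fails when the guessed recovery differs from the truth;
        # with the stub, that is whenever any qubit is hit by an error.
        failures += any(e != "I" for e in error)
        runs += 1
    return failures / runs

print(estimate_wer(n_qubits=10, p=0.01))
```

Running each configuration until a fixed number of failures (rather than a fixed number of runs) keeps the relative statistical uncertainty of the WER estimate roughly constant across noise rates.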
Our first simulation involved the serial concatenation of the unassisted PTO1R encoder with itself, and Figure 8(a) displays the results. The performance significantly exceeds that of the first encoder of Poulin et al. (see Figure 12 of Ref. [6]), and there is an improved separation between the curves for increasing blocklength, when comparing Figure 8(a) to Figure 12 of Ref. [6]. A noticeable feature of Figure 8(a) is the existence of a pseudothreshold, such that increasing the number of encoded qubits of the turbo code decreases the WER for all depolarizing noise rates below the pseudothreshold. The pseudothreshold is not a true threshold because this particular turbo code has a bounded minimum distance.
Our second simulation tested the serial concatenation of the PTO1REA encoder with itself, and Figure 8(b) displays the results. This turbo code uses the maximal amount of entanglement at a rate of 8/9 and thus is an instance of the so-called “father” protocol. Entanglement assistance gives the turbo code a dramatic increase in performance, in the sense that it can withstand far higher depolarizing noise rates than the unassisted turbo code. The threshold occurs at —we call this a threshold rather than a pseudothreshold because we expect that the WER should continue to decrease as we increase the number of encoded qubits, but we should clarify that we have not proven that this should hold. However, the EXIT chart analysis of Ref. [53] suggests that this is a true threshold. This threshold is 4.73 dB beyond the pseudothreshold of the unassisted turbo code, and it is within dB of the noise limit given by the EA hashing bound in (7) for a qubit rate 1/9 and ebit rate 8/9 code. This code construction is operating in a noise regime in which standard quantum codes are simply not able to operate (compare with the results in Refs. [66, 67, 68, 6, 69, 70, 71, 72]).
Our third simulation tested the serial concatenation of the PTO1REA inner encoder with the PTO1R outer encoder, and this EAQ turbo code has an entanglement consumption rate of 2/3. The benchmark for comparison is given by (8), so that for a code with qubit rate $Q$ and ebit rate $E$, its noise limit is found by solving for the depolarizing parameter $p$ for which
$Q = E + 1 - H_2(p) - p\log_2 3,$
which in this case is . The inner encoder is recursive, but the outer encoder’s free distance is not as high as that in the previous simulation. Figure 8(c) displays the results. The threshold occurs approximately at a depolarizing noise rate of , which is within dB of the previous threshold and within dB of the noise limit given above.
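The noise-limit computation just described is a one-dimensional root-finding problem, sketched below with bisection. The boundary expression is assumed to be the standard hashing-region boundary, since the explicit formula is elided in this copy:

```python
import math

def H2(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def noise_limit(q_rate, e_rate, lo=1e-6, hi=0.5):
    """Bisect for the depolarizing parameter p at which the assumed
    hashing-region boundary q = e + 1 - H2(p) - p*log2(3) is met."""
    f = lambda p: e_rate + 1 - H2(p) - p * math.log2(3) - q_rate
    for _ in range(200):
        mid = (lo + hi) / 2
        # maintain f(lo) > 0 > f(hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Sanity check: with no entanglement and vanishing rate, this recovers
# the well-known unassisted hashing limit p ~ 0.1893.
print(round(noise_limit(0.0, 0.0), 4))
```

Increasing the ebit rate raises the noise limit, consistent with the observation that entanglement assistance lets the codes operate in higher-noise regimes.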
Our final simulation tested the serial concatenation of the PTO1R inner encoder with the PTO1REA outer encoder, and this EAQ turbo code has an entanglement consumption rate of 2/9. Again, the benchmark for comparison is given by (8), so that for a code with qubit rate $Q$ and ebit rate $E$, its noise limit is found by solving for the depolarizing parameter $p$ for which
$Q = E + 1 - H_2(p) - p\log_2 3,$
which in this case is . The inner encoder is quasi-recursive, and the outer encoder has a significantly higher free distance than in the previous simulation because it has entanglement assistance. However, one can place a constant upper bound on the minimum distance of these turbo codes because the inner encoder is not recursive. This simulation provides a good test to determine the effectiveness of an inner encoder that is quasi-recursive. That is, one might think that quasi-recursiveness of the inner encoder combined with an outer encoder with high free distance would be sufficient to produce a turbo code with good performance (the thought is that this would explain the good performance in the first simulation), but Figure 8(d) suggests that this intuition does not hold. The pseudothreshold occurs at approximately . This pseudothreshold is only dB higher than the threshold for the unassisted code, and it is dB away from the noise limit of given above.
We conducted similar simulations with the PTO3R and PTO3REA encoders, and Figures 9(a–d) display the results. The entanglement consumption rates in Figures 9(a–d) are , , , and , respectively. The results are somewhat similar to the previous simulations, with the difference that the noise limits and thresholds are lower because these codes have higher quantum data transmission rates. The thresholds occur at approximately , , , and in Figures 9(a–d), respectively, and the hashing limits for these codes are approximately , , , and , respectively. Thus, these codes are within 2 dB, 1.6 dB, 1.4 dB, and 2.35 dB of their hashing limits, respectively. Since the code in Figure 9(c) is performing closest to its hashing limit (1.4 dB away), it appears that this EAQ turbo code is making judicious use of the available entanglement by placing the two ebits in the inner encoder. We would have to conduct further simulations to determine whether placing one ebit in the outer encoder and one in the inner encoder would do better, but our suspicion is that the aforementioned use of entanglement is better because the inner encoder is recursive under this choice.
8.1 Noise on Ebits
We conducted another set of simulations to determine how noise on Bob’s half of the ebits affects the performance of a code. This possibility has long been one of the important practical questions concerning the entanglement-assisted paradigm, and some researchers have provided partial answers [73, 74, 75, 76, 78]. We briefly review some of these contributions. Shaw et al. first observed that the Steane code is equivalent to an entanglement-assisted code that can also correct a single error on Bob’s half of the ebit. This observation goes further: any standard, nondegenerate quantum code is equivalent to an entanglement-assisted code that can correct any errors on Alice and Bob’s qubits. This result holds because tracing over any qubits in the original standard code gives a maximally mixed state on qubits, and the purifying qubits are encoded halves of ebits that Alice possesses [77]. Lai and Brun have studied this observation in more detail, by conducting simulations of such entanglement-assisted codes, and they have also studied the case in which a standard stabilizer code is used to protect the ebits [78]. Wilde and Fattal observed that entanglement-assisted codes correcting for errors on Bob’s side slightly improve the threshold for quantum computation if ebit errors occur less frequently than gate errors [74]. Hsieh and Wilde studied this question in the Shannon-theoretic context and determined an expression for capacity when both channel errors and entanglement errors occur [75]. As a side note, Lai and Brun have also looked for codes attempting to achieve the opposite goal [37]—their codes try to maximize the number of channel errors that can be corrected while minimizing the correction on Bob’s half of the ebits.
Figure 10 plots the results of our simulations that allow for noise on Bob’s half of the ebits. We performed three different types of simulations: the first was with the PTO3REA inner / PTO3R outer combination, the second with the PTO3R inner / PTO3REA outer combination, and the third with the PTO3REA inner / PTO3REA outer combination. For each simulation, we kept the channel noise rates fixed at , , and , respectively, because the codes already performed reasonably well at these noise rates, and our goal was to understand the effect of ebit noise on code performance. The codes all performed about the same as they did without ebit noise if we set the ebit noise rate at . Increasing the ebit noise rate an order of magnitude to has the least effect on the PTO3REA inner / PTO3R outer combination. Increasing it further to deteriorates the performance of the other combinations while still having the least effect on the PTO3REA inner / PTO3R outer combination. This result is surprising, considering that this combination has an entanglement consumption rate of 1/2 while the entanglement consumption rate of the PTO3R inner / PTO3REA outer combination is smaller at 1/4. The result suggests that it would be wiser to place ebits in the inner encoder rather than in the outer encoder when these codes operate in practice.
Figure 11 seems to confirm this suggestion. Increasing the number of logical qubits to 1000 for each combination shows improved performance for the PTO3REA inner / PTO3R outer combination if the ebit noise level is not too high, while the other two combinations perform worse. It is surprising that the PTO3REA inner / PTO3R outer combination performs better—increasing the number of logical qubits in turn increases the number of ebits, having more ebits should translate into more noise on the syndromes, and this should in turn affect performance. However, this particular combination seems to exhibit some amount of robustness against ebit noise, provided that the noise is not too high.
9 Recursive, Classically-Enhanced Subsystem Encoders are Catastrophic
We can construct other variations of quantum convolutional encoders and study their state diagrams to determine their properties. One example of a variation mentioned at the end of Ref. [6] is a subsystem convolutional code (based on the idea of a subsystem quantum code [79, 80, 81] that is useful in fault-tolerant quantum computation [82]). Poulin et al. suggest that subsystem convolutional codes might be “a concrete avenue” for circumventing the inability of a quantum convolutional encoder to be simultaneously recursive and noncatastrophic. Such codes exploit a resource known as a “gauge” qubit that can add extra degeneracy beyond that available in a standard stabilizer code. Another variation is an encoder that encodes both classical bits and qubits, and we might wonder if these could be simultaneously recursive and noncatastrophic (such codes are known as “classically-enhanced” codes [83] and are based on trade-off coding ideas from quantum Shannon theory [84]).
Unfortunately, encoders that act on logical qubits, classical bits, ancilla qubits, and gauge qubits cannot possess both properties simultaneously, and we state this result below as a corollary of Theorem 1 in Ref. [6]. This result implies that entanglement is the resource enabling a convolutional encoder to be both recursive and noncatastrophic (there are no other known local resources for quantum codes besides ancilla qubits, classical bits, and gauge qubits).
Corollary 1
Suppose that a classically-enhanced subsystem convolutional encoder is recursive. Then it is catastrophic.
First, consider that the state diagram for a classically-enhanced subsystem quantum convolutional encoder includes an edge from to if there exists a qubit Pauli operator , an qubit Pauli operator , a qubit Pauli operator , and a qubit Pauli operator such that
(10) 
where the binary representations of Pauli operators are the same as in (3). We break the operator acting on the classical bits into two parts because represents the component of the operator that can flip a classical bit from to and back, while represents the component of the operator that has no effect on a classical bit in state or (it merely adds an irrelevant global phase in the case that the bit is equal to ). The entries and are for the “gauge qubits” in a subsystem code [81] (qubits in a maximally mixed state that are invariant under the random application of an arbitrary Pauli operator). We include these transitions in the state diagram because a particular logical operator in a classically-enhanced subsystem code is equivalent up to operators acting on the ancilla qubits, operators acting on the classical bits, and and operators acting on gauge qubits. The set corresponding to a particular logical input consists of all operators of the following form:
where in this case consists of the infinite, repeated, overlapping application of the convolutional encoder .
Suppose that the encoder is recursive. By definition, it follows that every weight-one logical input and its coset has an infinite response. Suppose now that we change the resources in the encoder so that all of the classical bits and gauge qubits become ancilla qubits. The resulting code is now a standard stabilizer code. Additionally, we can remove the operators from the definition of because they no longer play a role as a logical operator, and we can remove the operators from the definition of because they are acting on ancilla qubits. Let denote the new set corresponding to a particular logical operator :
The encoder is still recursive because and because the original encoder is recursive (recursiveness is a property invariant under the replacement of cbits and gauge qubits with ancilla qubits). Then Theorem 1 of Ref. [6] implies that the encoder for the stabilizer code is catastrophic, i.e., it features a zero physicalweight cycle with nonzero logical weight. It immediately follows that the original encoder for the classicallyenhanced subsystem code is catastrophic because its state diagram contains all of the edges of the stabilizer encoder, plus additional edges that correspond to the logical transitions for the classical bits and the operators acting on the gauge qubits.
We should note that the above argument only holds if the original classically-enhanced subsystem convolutional encoder acts on a nonzero number of logical qubits (this, of course, is the case in which we are really interested, in order to have a nonzero quantum communication rate). For example, the encoder in Figure 2 becomes recursive and noncatastrophic when replacing the logical qubit with a classical bit and the ebit with an ancilla. One can construct the state diagram from the specification in (10) and discover that these two properties hold for the encoder acting on the cbit and ancilla.
The above argument also does not apply if an ebit is available as an auxiliary resource for an encoder. We have found an example of a noncatastrophic, recursive encoder that acts on six memory qubits, one logical qubit, one ancilla qubit, one ebit, one cbit, and one gauge qubit. The seed transformation for this example is as follows:
A noncatastrophic, nonrecursive subsystem convolutional encoder with a high free distance could potentially serve well as an outer encoder of a quantum turbo code, but this might not necessarily be beneficial if our aim is to achieve the capacity of a quantum channel. The encoder for a subsystem code is effectively a noisy encoding map because it is a unitary acting on information qubits in a pure state, ancilla qubits in a pure state, and gauge qubits in the maximally mixed state. Ref. [61] proves that an isometric encoder is sufficient to attain capacity, and we can thus restrict our attention to subspace codes rather than subsystem codes. This line of reasoning does not, however, rule out the possibility that iterative decoding could somehow benefit from the extra degeneracy available in a subsystem code, but this extra degeneracy would increase the decoding time.
10 Classically-Enhanced EAQ Encoders
We might also wish to construct classically-enhanced EAQ turbo codes that transmit classical information in addition to quantum information, in an effort to reach the optimal trade-off rates from quantum Shannon theory [61, 62]. These codes are then based on the structure of the codes in Refs. [83, 85]. The state diagram for the encoder includes an edge from to if there exists a qubit Pauli operator , a qubit Pauli operator , a qubit Pauli operator , and an qubit Pauli operator such that
The logical operator acts on qubits while the operator acts on classical bits. We break this latter operator into two parts for the same reason discussed in the previous section. The state diagram for an encoder of this form is similar to that for an EAQ convolutional encoder because it includes all transitions for the classical bits. The difference is in the interpretation of the logical weight of edge transitions and in the logical label of an edge. If an edge features a operator acting on a classical bit, then this operator does not contribute to the logical weight of the transition and does not appear on the logical labels—an appears on the logical label if or acts on the classical bit and an appears on the logical label if an or acts on it.
We have found several examples of recursive, noncatastrophic encoders acting on these resources. One of our examples acts on five memory qubits, one logical qubit, one ancilla qubit, one ebit, and one classical bit. Its seed transformation is as follows:
11 The Preservation of Recursiveness and Non-Catastrophicity under Resource Substitution
The technique used to prove Corollary 1 motivates us to consider which resource substitutions preserve the properties of recursiveness and noncatastrophicity. We have two different cases:
1. A resource substitution that removes edges from the state diagram preserves noncatastrophicity and recursiveness.

2. A resource substitution that adds edges to the state diagram preserves catastrophicity and nonrecursiveness.
To see the first case for recursiveness preservation, consider a general encoder acting on logical qubits, ancilla qubits, ebits, cbits, and gauge qubits. Each logical input to the encoder has the following form:
where the conventions are similar to what we had before. Suppose the encoder is recursive so that , , , and all have an infinite response. Then the encoder is still recursive if we replace an ancilla with an ebit because the original encoder is recursive and we no longer have to consider the operator acting on the replaced ancilla. The encoder is still recursive if we replace a cbit with an ancilla qubit because we no longer have to consider the coset . Finally, it is still recursive if we replace a gauge qubit with an ancilla qubit because we no longer have to consider the operator acting on the replaced gauge qubit.
To see the first case for noncatastrophicity preservation, suppose that an encoder is already noncatastrophic. Then a resource substitution that removes edges from the state diagram preserves noncatastrophicity because this removal cannot create a zero physical-weight cycle with nonzero logical weight.
To see the second case for nonrecursiveness preservation, suppose that an encoder acting on logical qubits and ebits is nonrecursive, meaning that at least one of the weight-one logical inputs X, Y, or Z has a finite response. Then replacing an ebit with an ancilla certainly cannot make the resulting encoder recursive, because we still have to consider the responses to the operators X, Y, and Z, and one of these is already finite by the assumption that the original encoder is nonrecursive. Furthermore, consider a nonrecursive encoder acting on logical qubits, ebits, and ancilla qubits. Replacing some of the ancilla qubits with cbits or gauge qubits cannot make the encoder become recursive, for the same reasons.
To see the second case for catastrophicity preservation, suppose that an encoder is catastrophic. Then a resource substitution that adds edges to the state diagram preserves catastrophicity because any zero physical-weight cycles with nonzero logical weight are still part of the state diagram for the new encoder.
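The noncatastrophicity condition used throughout this section (no zero physical-weight cycle with nonzero logical weight) can be checked mechanically on a finite state diagram. The following Python sketch is our own illustrative code (not the software of Ref. [65]); it uses the fact that an edge of nonzero logical weight lies on such a cycle exactly when its head can reach its tail through zero physical-weight edges:

```python
from collections import defaultdict, deque

# A minimal catastrophicity check over a state diagram, given as a list of
# edges (u, v, physical_weight, logical_weight). The encoder is catastrophic
# iff the subgraph of zero-physical-weight edges contains a cycle with
# nonzero total logical weight. A nonzero-logical-weight edge (u, v) lies
# on such a cycle exactly when v can reach u inside that subgraph.

def is_catastrophic(edges):
    zero = [(u, v, lw) for (u, v, pw, lw) in edges if pw == 0]
    adj = defaultdict(list)
    for u, v, _ in zero:
        adj[u].append(v)

    def reaches(src, dst):
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    return any(lw != 0 and reaches(v, u) for (u, v, lw) in zero)

# Toy diagram: adding a zero-physical-weight self-loop with logical weight 1
# at memory state "M1" makes the encoder catastrophic.
good = [("I", "I", 0, 0), ("I", "M1", 2, 1), ("M1", "I", 1, 0)]
bad = good + [("M1", "M1", 0, 1)]
print(is_catastrophic(good))  # False
print(is_catastrophic(bad))   # True
```

Removing edges can only shrink the zero-physical-weight subgraph searched here, which is precisely why edge-removing substitutions preserve noncatastrophicity.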
The following diagram summarizes all of the above observations. The resources are L for logical qubits, A for ancilla qubits, E for ebits, C for cbits, and G for gauge qubits. Recursiveness and noncatastrophicity preservation flow downwards under the displayed resource substitutions, while nonrecursiveness and catastrophicity preservation flow upwards (substitutions at the same level can go in any order).
We can then understand the proof of Corollary 1 in the context of the above diagram. The original encoder acts on logical qubits, cbits, gauge qubits, and ancilla qubits, and is assumed to be recursive. Resource substitution of the cbits and gauge qubits with ancilla qubits preserves recursiveness, while making the encoder act on only ancilla qubits as its auxiliary resource (the auxiliary resource for a standard quantum code). Using the fact that standard recursive quantum encoders are catastrophic [6], we then back-substitute the resources, which is a catastrophicity-preserving substitution, so that the original encoder is catastrophic as well.
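The downward flow of the substitution diagram can be modeled as a tiny reachability check. The sketch below is merely illustrative and encodes only the substitutions stated above (cbit to ancilla, gauge qubit to ancilla, ancilla to ebit); traversing these edges forward preserves recursiveness and noncatastrophicity, while traversing them backward preserves nonrecursiveness and catastrophicity:

```python
# Resource labels: A = ancilla qubit, E = ebit, C = cbit, G = gauge qubit.
# Each directed edge below is a substitution that removes edges from the
# state diagram, hence preserves recursiveness and noncatastrophicity.
PRESERVING = {
    "C": ["A"],  # cbit -> ancilla
    "G": ["A"],  # gauge qubit -> ancilla
    "A": ["E"],  # ancilla -> ebit
}

def reachable(start, target):
    """Can `start` become `target` via a chain of preserving substitutions?"""
    frontier, seen = [start], {start}
    while frontier:
        resource = frontier.pop()
        if resource == target:
            return True
        for nxt in PRESERVING.get(resource, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Corollary 1's proof direction: cbits and gauge qubits can be replaced by
# ancillas (recursiveness-preserving), reducing a classically-enhanced
# subsystem encoder to a standard one.
print(reachable("C", "A"), reachable("G", "A"))  # True True
print(reachable("E", "A"))                       # False: no downward path
```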
12 Conclusion and Current Work
We have constructed a theory of EAQ serial turbo coding as an extension of Poulin et al.'s theory in Ref. [6]. The introduction of shared entanglement simplifies the theory because an EAQ convolutional encoder can be both recursive and noncatastrophic. These two properties are essential for quantum serial turbo code families to have a minimum distance that grows near-linearly with the length of the code, while still performing well under iterative decoding. We provided many examples of EAQ convolutional encoders that satisfy both properties, and we detailed their parameters. We then showed how the concatenation of these encoders with some of Poulin et al.'s and some of our own leads to EAQ serial turbo codes with near-linear minimum distance scaling. We modified the quantum turbo decoding algorithm from Ref. [6] so that it follows the turbo decoding principle in which the constituent decoders pass along extrinsic information, and this modification led to a significant performance improvement over the algorithm outlined in Ref. [6]. We conducted several simulations of EAQ turbo codes: several of our quantum turbo codes were within 1 dB of their hashing limits, and two notable surprises were that placing ebits in the inner encoder can achieve better performance than expected, both in the scenario with ebit noise and in the scenario without it. Our simulations are generally consistent with the findings in Refs. [39, 37] and other results from quantum Shannon theory [21, 59], namely, that entanglement assistance can significantly enhance error-correction ability. Finally, we considered how to construct the state diagram for encoders that derive from other existing extensions of the theory of quantum error correction, and we showed that classically-enhanced subsystem convolutional encoders cannot simultaneously be recursive and noncatastrophic.
There are many questions to ask going forward from here. One could certainly seek out other entanglement-assisted quantum turbo codes and conduct numerical simulations of their performance. One purpose of our numerical simulations was to illustrate the effect of adding entanglement assistance to the encoders of Poulin et al. [6], and it was not our intent for them to constitute an exhaustive code comparison. It is ongoing work to search for and test many other code combinations, including cases where the inner and outer encoders are recursive or nonrecursive, have high, medium, or low minimum distance, and have varying numbers of memory qubits. Also, if one wished to compare directly the performance of entanglement-assisted turbo codes against the codes in Ref. [6], one way to do so might be to look for higher-rate codes operating near the entanglement-assisted hashing bound that tolerate the same noise level as the codes from Ref. [6]. One could also vary the number of ebits and ancilla qubits present in the various code combinations discussed here.
It would be interesting to explore the performance of the other suggested code structures in Section 10 to determine if they could come close to achieving the optimal rates from quantum Shannon theory [61, 62]. For example, what is the best arrangement for a classically-enhanced EAQ code? Should we place the classical bits in the inner or outer encoder? Are there more clever ways to use entanglement so that we increase error-correcting ability while reducing entanglement consumption? Consider that Hsieh et al. recently constructed a class of entanglement-assisted codes that exploit one ebit and still have good performance [86].
We should stress that the behavior of the entanglement-assisted codes using maximal entanglement is exactly like that of a classical turbo code. There is no degeneracy, and the iterative decoding algorithm is exactly the same as the classical one. Furthermore, analyses of classical turbo codes should apply directly in these cases [4, 87], and it would be good to determine the exact correspondence. These analyses studied the bit error rate rather than the word error rate, so any study of EAQ turbo codes would have to take this difference into account.
Much of the classical literature has focused on the choice of a practical interleaver rather than a random one [88, 89, 90, 91], and it might be interesting to import the knowledge developed there to the quantum case.
Finally, it would be great to find examples of EAQ turbo codes with a positive catalytic rate that outperform the turbo codes in Ref. [6], in the sense that they either have a higher catalytic rate while tolerating the same noise levels or they have the same catalytic rate while tolerating higher noise levels. This is ongoing work.
We acknowledge David Poulin for providing us with a copy of Ref. [16] and for many useful discussions, email interactions, and feedback on the manuscript. We acknowledge Jean-Pierre Tillich for originally suggesting that shared entanglement might help in quantum serial turbo codes. We acknowledge Todd Brun, Hilary Carteret, and Jan Florjanczyk for useful discussions, and Patrick Hayden for the observation that nondegenerate quantum codes lead to entanglement-assisted codes with particular error correction power on Bob's half of the ebits. Zunaira Babar is grateful to Prof. Lajos Hanzo and Dr. Soon Xin Ng (Michael) for their continuous guidance and support. We acknowledge the computer administrators in the McGill School of Computer Science for making their computational resources available for this scientific research. MMW acknowledges the warm hospitality of the ERATO-SORST project, the support of the MDEIE (Québec) PSR-SIIRI international collaboration grant, and the support of the Centre de Recherches Mathématiques in Montreal.
13 Computing the Distance Spectrum
There is a straightforward way to compute the distance spectrum of a quantum convolutional encoder. This technique borrows from similar ideas in the classical theory of convolutional coding [13, 46, 47, 48]. We would like to know the number of admissible paths with a particular weight beginning and ending in memory states that are part of a zero physical-weight cycle. For our example in Section 5, the identity memory state is the only memory state that is part of a zero physical-weight cycle. We create a weight adjacency matrix A whose entries correspond to edges in the state diagram. This matrix has the monomial D^w in entry (i, j) if there is a physical-weight-w edge from vertex i to vertex j (with the exception of the self-loop at the identity memory state). The weight adjacency matrix for our example is
where the vertices are ordered as in the state diagram. Note that we place a zero in the entry corresponding to the self-loop at the identity memory state because we do not want to overcount the number of admissible paths starting and ending in memory states that are part of a zero physical-weight cycle. If we would like the number of admissible paths up to an arbitrary weight that start and end in the identity memory state, then we compute the entry of the following matrix corresponding to the identity memory state:
The coefficient of D^w in this polynomial entry is the number of admissible paths with weight w starting and ending in the identity memory state. One can compute this in some cases using Cramer's rule, for example. We can also approximate the distance spectrum, e.g., by computing the matrix A + A^2 + ... + A^N, where N is some finite positive integer, so that this approximation gives a truncated distance spectrum. Computing the above matrix can be computationally expensive for large N, but we can dramatically reduce the number of computations by truncating the polynomial entries of each partial product above degree N before performing each multiplication. For our example in Section 5, the first ten entries of the distance spectrum polynomial are 0, 0, 0, 2, 5, 6, 23, 54, 122, and 298,
so that this gives a fairly reasonable approximation to the true distance spectrum. These coefficients appear in the second column of Table 2 as the first ten values of the distance spectrum for this first example encoder. Note that there are faster ways of computing the distance spectrum for classical convolutional codes [92], and it remains open to determine how to exploit these techniques for quantum convolutional encoders.
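The truncation strategy described above can be sketched in a few lines of Python. The weight adjacency matrix below is a toy two-state example of our own (not the encoder from Section 5), with matrix entries stored as coefficient lists in the formal variable D:

```python
# Truncated distance-spectrum computation via powers of a weight adjacency
# matrix. Entries are polynomials in the formal variable D, stored as
# coefficient lists (index = physical weight); every product is truncated
# above degree N to keep the computation cheap.

N = 10  # truncation degree for the distance spectrum

def poly_mul(p, q):
    """Multiply two coefficient lists, truncating above degree N."""
    out = [0] * min(len(p) + len(q) - 1, N + 1)
    for i, a in enumerate(p):
        if a == 0:
            continue
        for j, b in enumerate(q):
            if i + j <= N:
                out[i + j] += a * b
    return out

def poly_add(p, q):
    out = [0] * max(len(p), len(q))
    for i, a in enumerate(p):
        out[i] += a
    for j, b in enumerate(q):
        out[j] += b
    return out

def poly_sum(polys):
    acc = [0]
    for p in polys:
        acc = poly_add(acc, p)
    return acc

def mat_mul(A, B):
    n = len(A)
    return [[poly_sum(poly_mul(A[i][k], B[k][j]) for k in range(n))
             for j in range(n)] for i in range(n)]

def mat_add(A, B):
    n = len(A)
    return [[poly_add(A[i][j], B[i][j]) for j in range(n)] for i in range(n)]

D = [0, 1]  # the monomial D
A = [[[0], poly_mul(D, D)],   # identity state -> second state (weight 2)
     [D, D]]                  # second state -> identity / self-loop (weight 1)

# Sum A + A^2 + ...: every edge here has weight >= 1, so paths of weight
# <= N have at most N edges, and 2N multiplications are more than enough.
total, power = A, A
for _ in range(2 * N):
    power = mat_mul(power, A)
    total = mat_add(total, power)

print(total[0][0][:6])  # -> [0, 0, 0, 1, 1, 1]
```

The printed list counts identity-to-identity paths of weight 0 through 5 in the toy diagram; the same routine applied to a real encoder's weight adjacency matrix yields its truncated distance spectrum.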
14 Example Encoders
| Encoder | M | L | A | E | Seed Transformation | Free Dist. |
|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 0 | 1 |  | 3 |
| 2 | 3 | 2 | 0 | 1 |  | 4 |
| 3 | 3 | 3 | 0 | 1 |  | 4 |
| 4 | 3 | 4 | 0 | 1 |  | 3 |
| 5 | 2 | 1 | 1 | 1 |  | 4 |
| 6 | 2 | 1 | 1 | 2 |  | 5 |
| 7 | 2 | 2 | 1 | 1 |  | 3 |
| 8 | 2 | 6 | 0 | 1 |  | 2 |
| 9 | 2 | 8 | 0 | 1 |  | 2 |
| 10 | 2 | 9 | 0 | 1 |  | N/A |
Table 1 lists the specifications of many other examples of EAQ encoders that are both recursive and noncatastrophic—a computer program helped check that these properties hold for each of the examples [65]. Included in the list of example encoders are some which act on ancilla qubits in addition to ebits. These examples demonstrate that we do not necessarily require the auxiliary resource of an EAQ convolutional encoder to be ebits alone in order for the encoder to possess both properties. Table 2 gives a truncated distance spectrum for each of these encoders.
| Weight | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 2 | 0 | 0 | 3 | 0 | 0 | 3 |
| 4 | 5 | 1 | 8 | 32 | 3 | 0 | 22 |
| 5 | 6 | 6 | 69 | 292 | 3 | 1 | 73 |
| 6 | 23 | 49 | 463 | 2,622 | 23 | 1 | 286 |
| 7 | 54 | 218 | 3,478 | 24,848 | 41 | 1 | 1,309 |
| 8 | 122 | 1,077 | 25,057 | 227,262 | 127 | 3 | 5,696 |
| 9 | 298 | 5,477 | 181,959 | 2.1 | 325 | 11 | 23,975 |
| 10 | 737 | 27,428 | 1,326,070 | 1.9 | 1,061 | 17 | 102,132 |
We can construct EAQ serial turbo codes by serially concatenating some of our example “WH encoders” in Table 1 with the “PTO encoders” in Table 1 of Ref. [6] (ordered from left to right). Table 3 details these different combinations, giving their rates and average minimum distance scaling.
| Outer | Inner | Q | E | Min. Dist. |
|---|---|---|---|---|
| PTO1 | WH3 | 1/4 | 1/4 |  |
| PTO2 | WH3 | 1/4 | 1/4 |  |
| PTO3 | WH2 | 1/3 | 1/3 |  |
| PTO3 | WH7 | 1/4 | 1/4 |  |
| PTO3 | WH4 | 2/5 | 1/5 |  |
| PTO2 | WH8 | 2/7 | 1/7 |  |
| PTO3 | WH8 | 3/7 | 1/7 |  |
| PTO3 | WH9 | 4/9 | 1/9 |  |
| PTO2 | WH10 | 3/10 | 1/10 |  |
| WH11 | WH3 | 1/2 | 1/4 |  |
The columns of Table 3 give the outer encoder, the inner encoder, the quantum communication rate, the entanglement consumption rate, and the average minimum distance growth of a particular EAQ turbo code. The first four combinations all have a good average minimum distance scaling, but the catalytic rate of each of these codes is zero because their quantum communication and entanglement consumption rates are equal.
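Since the catalytic rate is simply the difference between the quantum communication rate and the entanglement consumption rate, the catalytic rates of the Table 3 combinations follow from straightforward rate arithmetic; a quick check in Python:

```python
from fractions import Fraction as F

# Catalytic rate = quantum communication rate (Q) minus entanglement
# consumption rate (E). The (Q, E) pairs below are those listed in Table 3.
combos = {
    ("PTO1", "WH3"):  (F(1, 4), F(1, 4)),
    ("PTO2", "WH3"):  (F(1, 4), F(1, 4)),
    ("PTO3", "WH2"):  (F(1, 3), F(1, 3)),
    ("PTO3", "WH7"):  (F(1, 4), F(1, 4)),
    ("PTO3", "WH4"):  (F(2, 5), F(1, 5)),
    ("PTO2", "WH8"):  (F(2, 7), F(1, 7)),
    ("PTO3", "WH8"):  (F(3, 7), F(1, 7)),
    ("PTO3", "WH9"):  (F(4, 9), F(1, 9)),
    ("PTO2", "WH10"): (F(3, 10), F(1, 10)),
    ("WH11", "WH3"):  (F(1, 2), F(1, 4)),
}

for (outer, inner), (q, e) in combos.items():
    print(f"{outer}/{inner}: catalytic rate = {q - e}")
```

The first four combinations print a catalytic rate of 0, while the remaining ones have strictly positive catalytic rates.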
Footnotes
 This holds for EAQ convolutional codes in addition to EAQ block codes [23, 24].
 In our statement that a first principles approach leads to a linear encoding complexity, note that we are fixing the number of the memory qubits to be constant with respect to the blocklength. Of course, in any practical setting, minimizing the number of memory qubits is essential because the complexity of the encoder and decoder grows exponentially with the number of memory qubits.
The representation of the encoder as a binary matrix leads to a loss of global phase information. However, this global phase information is not important because measurement of the syndrome destroys it, and it is not necessary for faithful recovery of the encoded state.
The definitions for the maximum likelihood decoder of an EAQ code are nearly identical to those for stabilizer codes in Section III-C of Ref. [6]. Thus, we do not give them here.
 Physical or channel qubits in the entanglementassisted paradigm are the ones that Alice transmits over the channel.
 Recall that the inner encoder is the one closer to the channel, and the outer encoder is the one farther from the channel.
This definition thus implies that a recursive encoder is infinite-depth (it transforms a finite-weight Pauli operator to an infinite-weight one) [40, 23], but the other implication does not necessarily have to hold.
 Note that this convention is different from that of Poulin et al. in Ref. [6].
 Since , the a posteriori probability of is the same as the extrinsic information, according to (6).
As discussed in Section 4.4 of Ref. [6], the minimum distance of a quantum turbo code is upper bounded by the weight-one minimum distance of the inner encoder times the free distance of the outer encoder. Whenever the inner encoder is nonrecursive, the weight-one minimum distance is bounded by some constant (not growing with the blocklength), implying a constant bound on the minimum distance of the turbo code.
 The catalytic rate is the difference between the quantum communication rate and the entanglement consumption rate [20].
References
C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo-codes,” in Technical Program of the IEEE International Conference on Communications, vol. 2, Geneva, Switzerland, May 1993, pp. 1064–1070.
 S. Benedetto and G. Montorsi, “Unveiling turbo codes: Some results on parallel concatenated coding schemes,” IEEE Transactions on Information Theory, vol. 42, no. 2, pp. 409–428, March 1996.
C. Berrou and A. Glavieux, “Near optimum error correcting coding and decoding: Turbo-codes,” IEEE Transactions on Communications, vol. 44, no. 10, pp. 1261–1271, October 1996.
 S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, “Serial concatenation of interleaved codes: performance analysis, design, and iterative decoding,” IEEE Transactions on Information Theory, vol. 44, no. 3, pp. 909–926, May 1998.
N. Kahale and R. Urbanke, “On the minimum distance of parallel and serially concatenated codes,” in Proceedings of the International Symposium on Information Theory, Cambridge, Massachusetts, USA, August 1998, p. 31. [Online]. Available: http://lthcwww.epfl.ch/~ruediger/papers/weight.ps
 D. Poulin, J.-P. Tillich, and H. Ollivier, “Quantum serial turbo-codes,” IEEE Transactions on Information Theory, vol. 55, no. 6, pp. 2776–2798, June 2009.
 S. Lloyd, “Capacity of the noisy quantum channel,” Physical Review A, vol. 55, no. 3, pp. 1613–1622, March 1997.
 P. W. Shor, “The quantum channel capacity and coherent information,” in Lecture Notes, MSRI Workshop on Quantum Computation, 2002.
 I. Devetak, “The private classical capacity and quantum capacity of a quantum channel,” IEEE Transactions on Information Theory, vol. 51, pp. 44–55, January 2005.
 P. Hayden, M. Horodecki, A. Winter, and J. Yard, “A decoupling approach to the quantum capacity,” Open Systems & Information Dynamics, vol. 15, pp. 7–19, March 2008.
H. Ollivier and J.-P. Tillich, “Description of a quantum convolutional code,” Physical Review Letters, vol. 91, no. 17, p. 177902, October 2003.
 G. D. Forney, M. Grassl, and S. Guha, “Convolutional and tail-biting quantum error-correcting codes,” IEEE Transactions on Information Theory, vol. 53, pp. 865–880, 2007.
 A. J. Viterbi, “Convolutional codes and their performance in communication systems,” IEEE Transactions on Communication Technology, vol. 19, no. 5, pp. 751–772, October 1971.
 A. J. Viterbi, A. M. Viterbi, and N. T. Sindhushayana, “Interleaved concatenated codes: New perspectives on approaching the Shannon limit,” Proceedings of the National Academy of Sciences of the United States of America, vol. 94, pp. 9525–9531, September 1997.
 D. Poulin, “Iterative quantum coding schemes: LDPC and turbo codes,” Online Presentation, April 2009, slide 92. [Online]. Available: http://www.physique.usherbrooke.ca/~dpoulin/Documents/IDQC09_McGill.pdf
 H. Ollivier, D. Poulin, and J.-P. Tillich, “Quantum turbo codes,” October 2008, unpublished manuscript.
 M. Houshmand and M. M. Wilde, “Recursive quantum convolutional encoders are catastrophic: A simple proof,” September 2012, arXiv:1209.0082.
 J.-P. Tillich, “Quantum codes suitable for iterative decoding,” Online presentation, May 2009. [Online]. Available: http://www.infres.enst.fr/~markham/QuPa/28May/exposeJPTillich.pdf
T. A. Brun, I. Devetak, and M.-H. Hsieh, “Correcting quantum errors with entanglement,” Science, vol. 314, no. 5798, pp. 436–439, October 2006.
 I. Devetak, T. A. Brun, and M.-H. Hsieh, New Trends in Mathematical Physics. Springer Netherlands, 2009, ch. Entanglement-Assisted Quantum Error-Correcting Codes, pp. 161–172.
 C. H. Bennett, P. W. Shor, J. A. Smolin, and A. V. Thapliyal, “Entanglement-assisted classical capacity of noisy quantum channels,” Physical Review Letters, vol. 83, no. 15, pp. 3081–3084, October 1999.
 ——, “Entanglement-assisted capacity of a quantum channel and the reverse Shannon theorem,” IEEE Transactions on Information Theory, vol. 48, pp. 2637–2655, 2002.
 M. M. Wilde and T. A. Brun, “Entanglement-assisted quantum convolutional coding,” Physical Review A, vol. 81, no. 4, p. 042333, April 2010.
 ——, “Quantum convolutional coding with shared entanglement: General structure,” Quantum Information Processing, vol. 9, no. 5, pp. 509–540, October 2010, arXiv:0807.3803.
 C. E. Shannon, “A mathematical theory of communication,” Bell System Technical Journal, vol. 27, pp. 379–423, 1948.
 F. Dupuis, P. Hayden, and K. Li, “A father protocol for quantum broadcast channels,” IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2946–2956, June 2010.
 A. R. Calderbank and P. W. Shor, “Good quantum errorcorrecting codes exist,” Physical Review A, vol. 54, no. 2, pp. 1098–1105, August 1996.
 A. M. Steane, “Error correcting codes in quantum theory,” Physical Review Letters, vol. 77, no. 5, pp. 793–797, July 1996.
 D. Gottesman, “Stabilizer codes and quantum error correction,” Ph.D. dissertation, California Institute of Technology, 1997.
 M. Grassl, “Convolutional and block quantum errorcorrecting codes,” in IEEE Information Theory Workshop, Chengdu, October 2006, pp. 144–148.
M. Houshmand, S. Hosseini-Khayat, and M. M. Wilde, “Minimal-memory, noncatastrophic, polynomial-depth quantum convolutional encoders,” Accepted for publication in IEEE Transactions on Information Theory, 2012, arXiv:1105.0649.
 P. W. Shor and J. Smolin, “Quantum error-correcting codes need not completely reveal the error syndrome,” April 1996, arXiv:quant-ph/9604006.
 D. P. DiVincenzo, P. W. Shor, and J. A. Smolin, “Quantumchannel capacity of very noisy channels,” Physical Review A, vol. 57, no. 2, pp. 830–839, February 1998.
M.-H. Hsieh, I. Devetak, and T. A. Brun, “General entanglement-assisted quantum error-correcting codes,” Physical Review A, vol. 76, p. 062313, 2007.
 M. M. Wilde and T. A. Brun, “Optimal entanglement formulas for entanglement-assisted quantum coding,” Physical Review A, vol. 77, p. 064302, 2008.
 C. H. Bennett and S. J. Wiesner, “Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states,” Physical Review Letters, vol. 69, no. 20, pp. 2881–2884, November 1992.
 C.-Y. Lai and T. Brun, “Entanglement increases the error-correcting ability of quantum error-correcting codes,” August 2010, arXiv:1008.2598.
 G. Bowen, “Entanglement required in achieving entanglementassisted channel capacities,” Physical Review A, vol. 66, no. 5, p. 052313, November 2002.
 M. M. Wilde and T. A. Brun, “Extra shared entanglement reduces memory demand in quantum convolutional coding,” Physical Review A, vol. 79, no. 3, p. 032313, March 2009.
M. Grassl and M. Rötteler, “Noncatastrophic encoders and encoder inverses for quantum convolutional codes,” in Proceedings of the IEEE International Symposium on Information Theory, Seattle, Washington, USA, July 2006, pp. 1109–1113, arXiv:quant-ph/0602129.
 M. M. Wilde, “Quantum-shift-register circuits,” Physical Review A, vol. 79, no. 6, p. 062325, June 2009.
 M. Houshmand, S. Hosseini-Khayat, and M. M. Wilde, “Minimal memory requirements for pearl necklace encoders of quantum convolutional codes,” IEEE Transactions on Computers, vol. 61, no. 3, pp. 299–312, March 2012, arXiv:1004.5179.
 M. Houshmand and S. Hosseini-Khayat, “Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes,” Physical Review A, vol. 83, p. 022308, February 2011, arXiv:1009.2242.
 P. Shor and R. Laflamme, “Quantum analog of the MacWilliams identities for classical coding theory,” Physical Review Letters, vol. 78, no. 8, pp. 1600–1602, February 1997.
 D. Poulin, “Optimal and efficient decoding of concatenated quantum block codes,” Physical Review A, vol. 74, no. 5, p. 052333, November 2006.
 R. J. McEliece, Communications and Coding (P. G. Farrell 60th birthday celebration). New York: John Wiley & Sons, 1998, ch. How to Compute Weight Enumerators for Convolutional Codes, pp. 121–141.
 R. Johannesson and K. S. Zigangirov, Fundamentals of Convolutional Coding. WileyIEEE Press, 1999.
 R. J. McEliece, The Theory of Information and Coding. Cambridge University Press, 2002.
 D. P. DiVincenzo, D. W. Leung, and B. M. Terhal, “Quantum data hiding,” IEEE Transactions on Information Theory, vol. 48, no. 3, pp. 580–598, March 2002.
L. Hanzo, T. H. Liew, B. L. Yeap, R. Y. S. Tee, and S. X. Ng, Turbo Coding, Turbo Equalisation and Space-Time Coding: EXIT-Chart-Aided Near-Capacity Designs for Wireless Channels, 2nd Edition. New York, USA: John Wiley IEEE Press, March 2011.
 S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, “A soft-input soft-output APP module for iterative decoding of concatenated codes,” IEEE Communications Letters, vol. 1, no. 1, pp. 22–24, 1997.
 L. Hanzo, R. G. Maunder, J. Wang, and L. Yang, Near-Capacity Variable-Length Coding: Regular and EXIT-Chart-Aided Irregular Designs. John Wiley IEEE Press, 2011.
 Z. Babar, S. X. Ng, and L. Hanzo, “Convergence analysis of quantum turbo codes using EXIT charts,” 2013, unpublished manuscript.
 C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, “Mixed-state entanglement and quantum error correction,” Physical Review A, vol. 54, no. 5, pp. 3824–3851, November 1996.
 G. Smith and J. A. Smolin, “Degenerate quantum codes for Pauli channels,” Physical Review Letters, vol. 98, no. 3, p. 030501, January 2007.
 J. Fern and K. B. Whaley, “Lower bounds on the nonzero capacity of Pauli channels,” Physical Review A, vol. 78, no. 6, p. 062335, December 2008.
J. Fern, “Correctable noise of quantum-error-correcting codes under adaptive concatenation,” Physical Review A, vol. 77, no. 1, p. 010301, January 2008.
 I. Devetak, A. W. Harrow, and A. Winter, “A family of quantum protocols,” Physical Review Letters, vol. 93, no. 23, p. 230504, December 2004.
 ——, “A resource framework for quantum Shannon theory,” IEEE Transactions on Information Theory, vol. 54, no. 10, pp. 4587–4618, October 2008.
 A. Abeyesinghe, I. Devetak, P. Hayden, and A. Winter, “The mother of all protocols: restructuring quantum information’s family tree,” Proceedings of the Royal Society A, vol. 465, no. 2108, pp. 2537–2563, 2009.
M.-H. Hsieh and M. M. Wilde, “Entanglement-assisted communication of classical and quantum information,” IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4682–4704, September 2010, arXiv:0811.4227.
 ——, “Trading classical communication, quantum communication, and entanglement in quantum Shannon theory,” IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4705–4730, September 2010, arXiv:0901.3038.
 M. M. Wilde and M.-H. Hsieh, “The quantum dynamic capacity formula of a quantum channel,” Quantum Information Processing, vol. 11, no. 6, pp. 1431–1463, 2012, arXiv:1004.0458.
 C.-Y. Lai, T. A. Brun, and M. M. Wilde, “Dualities and identities for entanglement-assisted quantum codes,” October 2010, arXiv:1010.5506.
 M. M. Wilde, “EATurbo,” http://code.google.com/p/eaturbo/, September 2010, Matlab and MEX software for characterizing and simulating entanglementassisted quantum turbo codes (source code available under a GPL license).
D. J. MacKay, G. Mitchison, and P. L. McFadden, “Sparse-graph codes for quantum error correction,” IEEE Transactions on Information Theory, vol. 50, no. 10, p. 2315, October 2004.
 M. Hagiwara and H. Imai, “Quantum quasi-cyclic LDPC codes,” in Proceedings of the 2007 IEEE International Symposium on Information Theory, Nice, France, June 2007, pp. 806–810, arXiv:quant-ph/0701020.
 T. Camara, H. Ollivier, and J.-P. Tillich, “A class of quantum LDPC codes: construction and performances under iterative decoding,” in Proceedings of the 2007 International Symposium on Information Theory, Nice, France, June 2007, pp. 811–815.
 M.-H. Hsieh, T. A. Brun, and I. Devetak, “Entanglement-assisted quantum quasi-cyclic low-density parity-check codes,” Physical Review A, vol. 79, no. 3, p. 032340, March 2009.
 P. Tan and J. Li, “Efficient quantum stabilizer codes: LDPC and LDPC-convolutional constructions,” IEEE Transactions on Information Theory, vol. 56, no. 1, pp. 476–491, January 2010.
 K. Kasai, M. Hagiwara, H. Imai, and K. Sakaniwa, “Quantum error correction beyond the bounded distance decoding limit,” IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 1223–1230, February 2012, arXiv:1007.1778.
 Y. Fujiwara, D. Clark, P. Vandendriessche, M. De Boeck, and V. D. Tonchev, “Entanglement-assisted quantum low-density parity-check codes,” Physical Review A, vol. 82, p. 042338, October 2010, arXiv:1008.4747.
 B. Shaw, M. M. Wilde, O. Oreshkov, I. Kremsky, and D. Lidar, “Encoding one logical qubit into six physical qubits,” Physical Review A, vol. 78, p. 012337, 2008.
 M. M. Wilde and D. Fattal, “Nonlocal quantum information in bipartite quantum error correction,” Quantum Information Processing, vol. 9, no. 5, pp. 591–610, September 2009.
M. M. Wilde and M.-H. Hsieh, “Entanglement generation with a quantum channel and a shared state,” in Proceedings of the 2010 IEEE International Symposium on Information Theory, Austin, Texas, USA, June 2010, pp. 2713–2717.
 Y. Dong, X. Deng, M. Jiang, Q. Chen, and S. Yu, “Entanglement-enhanced quantum error-correcting codes,” Physical Review A, vol. 79, no. 4, p. 042342, April 2009.
 J. Preskill, Lecture Notes on Quantum Computation, 1999, ch. Quantum Error Correction (Chapter 7), pp. 15–16. [Online]. Available: http://www.theory.caltech.edu/people/preskill/ph229/notes/chap7.pdf
 C.-Y. Lai and T. A. Brun, “Entanglement-assisted quantum error-correcting codes with imperfect ebits,” Physical Review A, vol. 86, p. 032319, September 2012, arXiv:1204.0302.
 D. Kribs, R. Laflamme, and D. Poulin, “Unified and generalized approach to quantum error correction,” Physical Review Letters, vol. 94, no. 18, p. 180501, 2005.
 D. W. Kribs, R. Laflamme, D. Poulin, and M. Lesosky, “Operator quantum error correction,” Quantum Information & Computation, vol. 6, pp. 383–399, 2006.
 D. Poulin, “Stabilizer formalism for operator quantum error correction,” Physical Review Letters, vol. 95, no. 23, p. 230504, 2005.
 P. Aliferis and A. W. Cross, “Subsystem fault tolerance with the BaconShor code,” Physical Review Letters, vol. 98, no. 22, p. 220502, 2007.
I. Kremsky, M.-H. Hsieh, and T. A. Brun, “Classical enhancement of quantum-error-correcting codes,” Physical Review A, vol. 78, no. 1, p. 012341, 2008.
 I. Devetak and P. W. Shor, “The capacity of a quantum channel for simultaneous transmission of classical and quantum information,” Communications in Mathematical Physics, vol. 256, pp. 287–303, 2005.
 M. M. Wilde and T. A. Brun, “Unified quantum convolutional coding,” in Proceedings of the IEEE International Symposium on Information Theory, Toronto, Ontario, Canada, July 2008, pp. 359–363, arXiv:0801.0821.
M.-H. Hsieh, W.-T. Yen, and L.-Y. Hsu, “High performance entanglement-assisted quantum LDPC codes need little entanglement,” IEEE Transactions on Information Theory, vol. 57, no. 3, pp. 1761–1769, March 2011, arXiv:0906.5532.
 H. Jin and R. J. McEliece, “Coding theorems for turbo code ensembles,” IEEE Transactions on Information Theory, vol. 48, no. 6, pp. 1451–1461, June 2002.
 A. S. Barbulescu and S. S. Pietrobon, “Interleaver design for turbo codes,” Electronics Letters, vol. 30, no. 25, pp. 2107–2108, December 1994.
 S. Dolinar and D. Divsalar, “Weight distribution for turbo codes using random and nonrandom permutations,” JPL Progress report, vol. 42, no. 122, pp. 56–65, August 1995.
 J. Yuan, B. Vucetic, and W. Feng, “Combined turbo codes and interleaver design,” IEEE Transactions on Communications, vol. 47, no. 4, pp. 484–487, April 1999.
 H. R. Sadjadpour, N. J. A. Sloane, M. Salehi, and G. Nebe, “Interleaver design for turbo codes,” IEEE Journal on Selected Areas in Communications, vol. 19, no. 5, pp. 831–837, May 2001.
 M. L. Cedervall and R. Johannesson, “A fast algorithm for computing distance spectrum of convolutional codes,” IEEE Transactions on Information Theory, vol. 35, no. 6, pp. 1146–1159, November 1989.