Superadditivity of communication capacity using entangled inputs

M. B. Hastings
Center for Nonlinear Studies and Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545

The design of error-correcting codes used in modern communications relies on information theory to quantify the capacity of a noisy channel to send information [infthy]. This capacity can be expressed using the mutual information between input and output for a single use of the channel: although correlations between subsequent input bits are used to correct errors, they cannot increase the capacity. For quantum channels, it has been an open question whether entangled input states can increase the capacity to send classical information [holevo]. The additivity conjecture [moec, equiv] states that entanglement does not help, making practical computations of the capacity possible. While additivity is widely believed to be true, there is no proof. Here we show that additivity is false, by constructing a random counter-example. Our results show that the most basic question of the classical capacity of a quantum channel remains open, with further work needed to determine in which other situations entanglement can boost capacity.

In the classical setting, Shannon presented a formal definition of a noisy channel as a probabilistic map from input states to output states. In the quantum setting, the channel becomes a linear, completely positive, trace-preserving map from density matrices to density matrices, modeling noise in the system due to interaction with an environment. Such a channel can be used to send either quantum or classical information. In the first case, a dramatic violation of operational additivity was recently shown, in that there exist two channels, each of which has zero capacity to send quantum information no matter how many times it is used, but which can be used in tandem to send quantum information [jy].

Here we address the classical capacity of a quantum channel. To specify how information is encoded in the channel, we must pick a set of states $\rho_i$ which we use as input signals with probabilities $p_i$. Then the Holevo formula [holevo] for the capacity is:

$$\chi = S\Bigl(\sum_i p_i \mathcal{E}(\rho_i)\Bigr) - \sum_i p_i S\bigl(\mathcal{E}(\rho_i)\bigr), \qquad (1)$$

where $S(\rho) = -\operatorname{Tr}(\rho \ln \rho)$ is the von Neumann entropy and $\mathcal{E}$ is the channel. The maximum capacity of a channel is the maximum over all input ensembles:

$$\chi_{\max}(\mathcal{E}) = \max_{\{p_i, \rho_i\}} \Bigl[ S\Bigl(\sum_i p_i \mathcal{E}(\rho_i)\Bigr) - \sum_i p_i S\bigl(\mathcal{E}(\rho_i)\bigr) \Bigr]. \qquad (2)$$

Suppose we have two different channels, $\mathcal{E}_1$ and $\mathcal{E}_2$. To compute the capacity of the combined channel $\mathcal{E}_1 \otimes \mathcal{E}_2$, it seems necessary to consider entangled input states between the two channels. Similarly, when using the same channel multiple times, it may be useful to use input states which are entangled across multiple uses of the same channel. The additivity conjecture (see Figure 1) is the conjecture that this does not help and that instead

$$\chi_{\max}(\mathcal{E}_1 \otimes \mathcal{E}_2) = \chi_{\max}(\mathcal{E}_1) + \chi_{\max}(\mathcal{E}_2). \qquad (3)$$

The additivity conjecture would make it possible to compute the classical capacity of a quantum channel. Further, Shor [equiv] showed that several different additivity conjectures in quantum information theory are all equivalent. These are the additivity conjecture for the Holevo capacity, the additivity conjecture for entanglement of formation [eof], strong superadditivity of entanglement of formation [sa], and the additivity conjecture for minimum output entropy [moec]. In this Letter, we show that all of these conjectures are false, by constructing a counterexample to the last of these conjectures. Given a channel $\mathcal{E}$, define the minimum output entropy by

$$S_{\min}(\mathcal{E}) = \min_{|\psi\rangle} S\bigl(\mathcal{E}(|\psi\rangle\langle\psi|)\bigr). \qquad (4)$$

The minimum output entropy conjecture is that, for all channels $\mathcal{E}_1$ and $\mathcal{E}_2$, we have

$$S_{\min}(\mathcal{E}_1 \otimes \mathcal{E}_2) = S_{\min}(\mathcal{E}_1) + S_{\min}(\mathcal{E}_2). \qquad (5)$$

A counterexample to this conjecture would be an entangled input state which has a lower output entropy, and hence is more resistant to noise, than any unentangled state (see Figure 2).
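To make definition (4) concrete, the following minimal sketch estimates $S_{\min}$ numerically for a simple qubit dephasing channel by sampling random pure inputs. The channel, the sampling approach, and all names here are purely illustrative, not the construction used in this Letter.

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy S(rho) = -Tr(rho ln rho), in nats
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

# Example channel: qubit dephasing, E(rho) = (1-q) rho + q Z rho Z
q = 0.2
Z = np.diag([1.0, -1.0])
def channel(rho):
    return (1 - q) * rho + q * Z @ rho @ Z

# Crude estimate of S_min by sampling random pure input states
rng = np.random.default_rng(0)
best = np.inf
for _ in range(5000):
    v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    v /= np.linalg.norm(v)
    best = min(best, entropy(channel(np.outer(v, v.conj()))))
print(best)  # approaches 0: |0> passes through dephasing unchanged
```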

Our counterexample to the additivity of minimum output entropy is based on a random construction, similar to those Winter and Hayden used to show violation of the maximal $p$-norm multiplicativity conjecture for all $p > 1$ [aw, ph, phaw]. For $p \to 1$, this violation would imply violation of the minimum output entropy conjecture; however, the counterexample found in [ph] requires a matrix size which diverges as $p \to 1$. We use different system and environment sizes (note that $N \gg D$ in our construction below) and make a different analysis of the probability of different output entropies. Other violations are known for $p$ close to $0$ [smallp].

We define a pair of channels $\mathcal{E}$ and $\bar{\mathcal{E}}$ which are complex conjugates of each other. Each channel acts on an input density matrix $\rho$ by randomly choosing a unitary from a small set of $D$ unitaries and applying it to $\rho$. This models a situation in which the unitary evolution of the system is determined by an unknown state of the environment. We define

$$\mathcal{E}(\rho) = \sum_{i=1}^{D} p_i\, U_i \rho U_i^{\dagger}, \qquad (6)$$

where the $U_i$ are $N$-by-$N$ unitary matrices, chosen at random from the Haar measure, and the probabilities $p_i$ are chosen randomly as described in the Supplemental Equations. The $p_i$ are all roughly equal to $1/D$. We pick

$$N \gg D \gg 1. \qquad (7)$$
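As an illustration of the construction in Eqs. (6,7), here is a minimal numerical sketch that builds such a random mixed-unitary channel with numpy; for simplicity it takes the $p_i$ exactly uniform, a simplification of the randomized choice described in the Supplemental Equations, and all function names are illustrative.

```python
import numpy as np

def haar_unitary(n, rng):
    # Haar-random unitary: QR of a complex Ginibre matrix, with the
    # phases of R's diagonal fixed so the distribution is exactly Haar.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def random_channel(N, D, rng):
    # E(rho) = sum_i p_i U_i rho U_i^dagger, as in Eq. (6)
    Us = [haar_unitary(N, rng) for _ in range(D)]
    p = np.full(D, 1.0 / D)  # simplification: uniform weights
    def apply(rho):
        return sum(pi * U @ rho @ U.conj().T for pi, U in zip(p, Us))
    return apply, Us, p
```

The conjugate channel $\bar{\mathcal{E}}$ is obtained by replacing each $U_i$ with its elementwise complex conjugate $\bar{U}_i$.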

We show in the Supplemental Equations that

Theorem 1.

For sufficiently large $D$, for sufficiently large $N$, there is a non-zero probability that a random choice of the $U_i$ from the Haar measure and of the $p_i$ (as described in the Supplemental Equations) will give a channel $\mathcal{E}$ such that

$$S_{\min}(\mathcal{E} \otimes \bar{\mathcal{E}}) < S_{\min}(\mathcal{E}) + S_{\min}(\bar{\mathcal{E}}). \qquad (8)$$

The required size of $N$ depends on $D$.

For any pure state input, the output entropy of $\mathcal{E}$ is at most $\ln(D)$ and that of $\mathcal{E} \otimes \bar{\mathcal{E}}$ is at most $2\ln(D)$. To show Theorem 1, we first exhibit an entangled state with a lower output entropy for the channel $\mathcal{E} \otimes \bar{\mathcal{E}}$. The entangled state we use is the maximally entangled state:

$$|\Phi\rangle = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} |i\rangle \otimes |i\rangle. \qquad (9)$$

As shown in Lemma 1 in the Supplemental Equations, the output entropy for this state is bounded by

$$S\bigl((\mathcal{E} \otimes \bar{\mathcal{E}})(|\Phi\rangle\langle\Phi|)\bigr) \le 2\ln(D) - \frac{\ln(D)}{D}. \qquad (10)$$
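The bound (10) can be checked numerically at small sizes. In this sketch (an illustrative check, with uniform $p_i = 1/D$, for which the right-hand side of (10) is the exact value of the bound derived in Lemma 1), the computed entropy always comes out at or below $2\ln(D) - \ln(D)/D$:

```python
import numpy as np

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

N, D = 8, 3
rng = np.random.default_rng(7)
Us = [haar_unitary(N, rng) for _ in range(D)]
p = np.full(D, 1.0 / D)

phi = np.eye(N).reshape(-1) / np.sqrt(N)           # |Phi> of Eq. (9)
rho = np.zeros((N * N, N * N), dtype=complex)
for i in range(D):
    for j in range(D):
        v = np.kron(Us[i], Us[j].conj()) @ phi      # (U_i x Ubar_j)|Phi>
        rho += p[i] * p[j] * np.outer(v, v.conj())

lam = np.linalg.eigvalsh(rho)
lam = lam[lam > 1e-12]
S = -np.sum(lam * np.log(lam))
print(S, 2 * np.log(D) - np.log(D) / D)             # S <= the bound of Eq. (10)
```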

We then use the random properties of the channel to show that no product state input can obtain such a low output entropy. Lemmas 2-5 in the Supplemental Equations show that, with non-zero probability, the entropy $S_{\min}(\mathcal{E})$ is at least $\ln(D) - \delta S_{\max}$, for

$$\delta S_{\max} = \frac{c}{D} + \delta(N), \qquad (11)$$

where $c$ is a constant and $\lim_{N \to \infty} \delta(N) = 0$. Thus, since $2c/D < \ln(D)/D$ for large enough $D$, for large enough $N$ we have $S_{\min}(\mathcal{E}) + S_{\min}(\bar{\mathcal{E}}) \ge 2\ln(D) - 2\,\delta S_{\max} > 2\ln(D) - \ln(D)/D \ge S\bigl((\mathcal{E} \otimes \bar{\mathcal{E}})(|\Phi\rangle\langle\Phi|)\bigr) \ge S_{\min}(\mathcal{E} \otimes \bar{\mathcal{E}})$, and the theorem follows.

The output entropy can be understood differently: for a given pure state input, can we determine from the output which of the $D$ unitaries was applied? Recall that

$$(U \otimes \bar{U})\,|\Phi\rangle = |\Phi\rangle \qquad (12)$$

for any unitary $U$. This means that, for the maximally entangled state, if a unitary $U_i$ was applied to one subsystem, and $\bar{U}_i$ was applied to the other subsystem, we cannot determine which unitary was applied by looking at the output. This is the key idea behind Eq. (10).
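The identity (12) is easy to verify directly; a short numerical check (illustrative), using the row-major vectorization identity $(A \otimes B)\,\mathrm{vec}(M) = \mathrm{vec}(A M B^{T})$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
U, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
phi = np.eye(N).reshape(-1) / np.sqrt(N)              # maximally entangled |Phi>
print(np.allclose(np.kron(U, U.conj()) @ phi, phi))   # True: Eq. (12)
```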

Note that the minimum output entropy of $\mathcal{E}$ must be less than $\ln(D)$ by an amount at least of order $1/D$. Suppose $U_1$ and $U_2$ are the two unitaries with the largest probabilities $p_1$ and $p_2$. Choose a state $|\psi\rangle$ which is an eigenvector of $U_1^{\dagger} U_2$. For this state, we cannot distinguish between the states $U_1 |\psi\rangle$ and $U_2 |\psi\rangle$, and so

$$S\bigl(\mathcal{E}(|\psi\rangle\langle\psi|)\bigr) \le -(p_1 + p_2) \ln(p_1 + p_2) - \sum_{i > 2} p_i \ln(p_i) \approx \ln(D) - O(1/D). \qquad (13)$$

Our randomized analysis bounds how much further the output entropy of the channel can be lowered for a random choice of the $U_i$.

Our work raises the question of how strong a violation of additivity is possible. The relative violation we have found is numerically small, but it may be possible to increase this, and to find new situations in which entangled inputs can be used to increase channel capacity, or novel situations in which entanglement can be used to protect against decoherence in practical devices. The map $\mathcal{E}$ is similar to that used [rugqe] to construct random quantum expanders [qe1, qe2], raising the possibility that deterministic expander constructions can provide stronger violations of additivity.

While we have used two different channels, it is also possible to find a single channel $\mathcal{E}$ such that $S_{\min}(\mathcal{E} \otimes \mathcal{E}) < 2 S_{\min}(\mathcal{E})$, by choosing the $U_i$ from the orthogonal group, so that $\mathcal{E} = \bar{\mathcal{E}}$. Alternately, we can add an extra classical input used to “switch” between $\mathcal{E}$ and $\bar{\mathcal{E}}$, as suggested to us by P. Hayden.

The equivalence of the different additivity conjectures [equiv] means that the violation of any one of the conjectures has profound impacts. The violation of additivity of the Holevo capacity means that the problem of channel capacity remains open, since if a channel is used many times, we must do an intractable optimization over all entangled inputs to find the maximum capacity. However, we conjecture that additivity holds for all channels of the form

$$\mathcal{E}_1 \otimes \mathcal{E}_2. \qquad (14)$$

Our intuition for this conjecture is that we believe that multi-party entanglement (between the inputs to three or more channels) is not useful, because it is very unlikely for all channels to apply the same unitary; note that the state $|\Phi\rangle$ has a low output entropy precisely because it is left unchanged, as in Eq. (12), if both channels apply corresponding unitaries. This two-letter additivity conjecture would allow us to restrict our attention to input states with a bipartite entanglement structure, possibly opening the way to computing the capacity for arbitrary channels.

Acknowledgments— I thank J. Yard, P. Hayden, and A. Harrow. This work was supported by U. S. DOE Contract No. DE-AC52-06NA25396.

Figure 1: Communicating classical information over a quantum channel. A set of states $\rho_i$ are used with probabilities $p_i$ as signal states on the channel. In (a), we use input states which are unentangled between channels $\mathcal{E}_1$ and $\mathcal{E}_2$. In (b), we allow entanglement. The capacity in (a) is equal to $\chi_{\max}(\mathcal{E}_1) + \chi_{\max}(\mathcal{E}_2)$. The question addressed is whether entangling, as shown in (b), can increase this capacity.
Figure 2: Minimum output entropy of a quantum channel. A pure state is input to the channel. While the input is a pure state, the output may be a mixed state. We attempt to minimize the entropy of the output state over all pure input states. The question addressed is whether an entangled input state, as shown in (b), can have a lower output entropy for the channel $\mathcal{E}_1 \otimes \mathcal{E}_2$ than the sum of the minimum output entropies for the two channels.

Supplemental Equations:

To choose the $p_i$, we first choose a set of amplitudes $x_i$ as follows. For $i = 1, \dots, D$, pick $x_i$ independently from a probability distribution $P(x_i)$ with

$$P(x_i) \propto x_i^{2N-1} \exp(-N x_i^2), \qquad (15)$$

where the proportionality constant is chosen such that $\int_0^{\infty} P(x)\, dx = 1$. This distribution is the same as that of the length of a random vector chosen from a Gaussian distribution in $N$ complex dimensions. Then, define

$$x^2 = \sum_{i=1}^{D} x_i^2. \qquad (16)$$

Then we set

$$p_i = \frac{x_i^2}{x^2}, \qquad (17)$$

so that $\sum_i p_i = 1$ and the channel is

$$\mathcal{E}(\rho) = \sum_{i=1}^{D} p_i\, U_i \rho U_i^{\dagger}. \qquad (18)$$

The only reason in what follows for not choosing all the probabilities equal to $1/D$ is that the choice we made will allow us to appeal to certain exact results on random bipartite states later.
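A quick sketch (illustrative) of this sampling procedure, drawing each $x_i$ as the length of a Gaussian vector in $N$ complex dimensions; the $1/\sqrt{2N}$ normalization is a convention chosen here, and it drops out of the ratio in Eq. (17):

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 1000, 10
g = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
x = np.linalg.norm(g, axis=1) / np.sqrt(2 * N)   # x_i as in Eq. (15)
p = x**2 / np.sum(x**2)                          # Eqs. (16,17)
print(np.round(p, 3))                            # all p_i close to 1/D = 0.1
```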

We also define the conjugate channel

$$\bar{\mathcal{E}}(\rho) = \sum_{i=1}^{D} p_i\, \bar{U}_i \rho \bar{U}_i^{\dagger}. \qquad (19)$$

As shown in [cc],

$$S_{\min}(\bar{\mathcal{E}}) = S_{\min}(\mathcal{E}). \qquad (20)$$

In the notation that follows, we will take

$$N \gg D \gg 1. \qquad (21)$$

We use “computer science” big-O notation throughout, rather than “physics” big-O notation. That is, if we state that a quantity is $O(f)$, it means that it is asymptotically bounded by a constant times $f$, and may in fact be much smaller. For example, $1/N^2$ is $O(1/N)$ in computer science notation but not in physics notation.

Theorem 1 follows from two lemmas below, Lemmas 1 and 5, which give small corrections to the naive estimates of $2\ln(D)$ and $\ln(D)$ for the entropies. Lemma 1 upper bounds $S\bigl((\mathcal{E} \otimes \bar{\mathcal{E}})(|\Phi\rangle\langle\Phi|)\bigr)$ by $2\ln(D) - \ln(D)/D$. Lemma 5 shows that for given $D$, for sufficiently large $N$, with non-zero probability, the entropy $S_{\min}(\mathcal{E})$ is at least $\ln(D) - \delta S_{\max}$, for

$$\delta S_{\max} = \frac{c}{D} + \delta(N), \qquad (22)$$

where $c$ is a constant and $\lim_{N \to \infty} \delta(N) = 0$. Thus, since $2c/D < \ln(D)/D$ for large enough $D$, for large enough $N$ we have $2 S_{\min}(\mathcal{E}) > 2\ln(D) - \ln(D)/D$, and the theorem follows.

Lemma 1.

For any $\mathcal{E}$ and $\bar{\mathcal{E}}$ defined as above, we have

$$S\bigl((\mathcal{E} \otimes \bar{\mathcal{E}})(|\Phi\rangle\langle\Phi|)\bigr) \le 2\ln(D) - \frac{\ln(D)}{D}. \qquad (23)$$

Proof.

Consider the maximally entangled state, $|\Phi\rangle$. Then,

$$(\mathcal{E} \otimes \bar{\mathcal{E}})(|\Phi\rangle\langle\Phi|) = \Bigl(\sum_i p_i^2\Bigr) |\Phi\rangle\langle\Phi| + \sum_{i \neq j} p_i p_j\, (U_i \otimes \bar{U}_j) |\Phi\rangle\langle\Phi| (U_i \otimes \bar{U}_j)^{\dagger}, \qquad (24)$$

where we have used Eq. (12) to collapse the $i = j$ terms. Since the states $|\Phi\rangle\langle\Phi|$ and $(U_i \otimes \bar{U}_j) |\Phi\rangle\langle\Phi| (U_i \otimes \bar{U}_j)^{\dagger}$ are pure states, the entropy of the state in (24) is bounded by

$$S\bigl((\mathcal{E} \otimes \bar{\mathcal{E}})(|\Phi\rangle\langle\Phi|)\bigr) \le -\Bigl(\sum_i p_i^2\Bigr) \ln\Bigl(\sum_i p_i^2\Bigr) - \sum_{i \neq j} p_i p_j \ln(p_i p_j). \qquad (25)$$

To show Eq. (25), let $\rho = (\mathcal{E} \otimes \bar{\mathcal{E}})(|\Phi\rangle\langle\Phi|)$ and let $\rho_{ij} = (U_i \otimes \bar{U}_j) |\Phi\rangle\langle\Phi| (U_i \otimes \bar{U}_j)^{\dagger}$. Note that $\rho \ge (\sum_k p_k^2) |\Phi\rangle\langle\Phi|$ and $\rho \ge p_i p_j \rho_{ij}$ for all $i \neq j$. Then, the entropy is equal to

$$S(\rho) = -\Bigl(\sum_k p_k^2\Bigr) \operatorname{Tr}\bigl(|\Phi\rangle\langle\Phi| \ln \rho\bigr) - \sum_{i \neq j} p_i p_j \operatorname{Tr}\bigl(\rho_{ij} \ln \rho\bigr). \qquad (26)$$

Using the fact that the logarithm is an operator monotone function [opmon], we find that $-\operatorname{Tr}\bigl(|\Phi\rangle\langle\Phi| \ln \rho\bigr) \le -\ln\bigl(\sum_k p_k^2\bigr)$, and also that $-\operatorname{Tr}\bigl(\rho_{ij} \ln \rho\bigr) \le -\ln(p_i p_j)$ for all $i \neq j$. Inserting these inequalities into Eq. (26), we arrive at Eq. (25).

We claim that the right-hand side of Eq. (25) is bounded by

$$-\Bigl(\sum_i p_i^2\Bigr) \ln\Bigl(\sum_i p_i^2\Bigr) - \sum_{i \neq j} p_i p_j \ln(p_i p_j) \le 2\ln(D) - \frac{\ln(D)}{D}. \qquad (27)$$

To show Eq. (27), define $P = \sum_i p_i^2$. We claim that $P \ge 1/D$. To see this, consider the real vectors $(p_1, \dots, p_D)$ and $(1, \dots, 1)$. The inner product of these vectors is equal to $1$ since $\sum_i p_i = 1$, while the norms of the vectors are $\sqrt{P}$ and $\sqrt{D}$, respectively. Applying the Cauchy-Schwarz inequality to this inner product, we find that $1 \le P D$, so $P \ge 1/D$ as claimed. Then the left-hand side of Eq. (27) is bounded by

$$-P \ln(P) - \sum_{i \neq j} p_i p_j \ln(p_i p_j) \le -P \ln(P) - (1 - P) \ln(1 - P) + (1 - P) \ln(D^2 - D) \equiv g(P), \qquad (28)$$

where the inequality follows because the entropy of the sub-distribution $\{p_i p_j\}_{i \neq j}$, which has total weight $1 - P$, is largest when all $D^2 - D$ of its elements are equal. The right-hand side of Eq. (28) is maximized over $P \ge 1/D$ at $P = 1/D$, giving Eq. (27), which implies Eq. (23). ∎
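Writing $g(P)$ for the right-hand side of Eq. (28), the evaluation at $P = 1/D$ is a short computation:

$$\begin{aligned}
g\Bigl(\frac{1}{D}\Bigr) &= \frac{\ln D}{D} - \Bigl(1 - \frac{1}{D}\Bigr) \ln\Bigl(1 - \frac{1}{D}\Bigr) + \Bigl(1 - \frac{1}{D}\Bigr) \ln(D^2 - D) \\
&= \frac{\ln D}{D} + \Bigl(1 - \frac{1}{D}\Bigr) \ln \frac{D^2 - D}{1 - 1/D} = \frac{\ln D}{D} + \Bigl(1 - \frac{1}{D}\Bigr) \ln(D^2) \\
&= \frac{\ln D}{D} + 2 \ln D - \frac{2 \ln D}{D} = 2 \ln D - \frac{\ln D}{D}.
\end{aligned}$$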

Lemma 2.

Consider a random bipartite pure state on a bipartite system with subsystems $A$ and $B$ with dimensions $D$ and $N$ respectively. Let $\rho_A$ be the reduced density matrix on $A$. Then, the probability density that $\rho_A$ has a given set of eigenvalues, $\lambda_1, \dots, \lambda_D$, is bounded by

$$P(\{\lambda_i\}) \le e^{O(D^2 \ln(ND))}\, D^{D(N-D)} \prod_i \lambda_i^{N-D} = e^{O(D^2 \ln(ND))}\, e^{(N-D) \Lambda(\{\lambda_i\})}, \qquad (29)$$

where we define

$$\Lambda(\{\lambda_i\}) = \sum_{i=1}^{D} \ln(D \lambda_i). \qquad (30)$$

Note that $\Lambda(\{\lambda_i\}) \le 0$ for all $\{\lambda_i\}$, by the concavity of the logarithm.

Similarly, consider a random pure state $|\psi\rangle$ on an $N$-dimensional space, and a channel $\mathcal{E}$ as defined in Eq. (18), with unitaries $U_i$ chosen randomly from the Haar measure and the numbers $x_i$ chosen as described in Eq. (15), and with $D \le N$. Then, the probability density that the eigenvalues of $\mathcal{E}(|\psi\rangle\langle\psi|)$ assume given values is bounded by the same function as above.

Proof.

As shown in [pl, pl2], the exact probability distribution of eigenvalues is

$$P(\{\lambda_i\}) \propto \delta\Bigl(1 - \sum_i \lambda_i\Bigr) \prod_{i<j} (\lambda_i - \lambda_j)^2 \prod_i \lambda_i^{N-D}, \qquad (31)$$

where the constant of proportionality is given by the requirement that the probability distribution integrate to unity. The proportionality constant is at most $e^{O(D^2 \ln(ND))}\, D^{D(N-D)}$ as we show below, and for all $\{\lambda_i\}$

$$\prod_{i<j} (\lambda_i - \lambda_j)^2 \le 1, \qquad (32)$$

so Eq. (29) follows. The second equality in (29) holds because $D^{D(N-D)} \prod_i \lambda_i^{N-D} = e^{(N-D) \sum_i \ln(D \lambda_i)}$.

Given a random pure state $|\psi\rangle$, with $U_i$ and $x_i$ chosen as described above, the state $\mathcal{E}(|\psi\rangle\langle\psi|)$ has the same eigenvalue distribution as the reduced density matrix of a random bipartite state, so the second result follows. To see that the eigenvalue distribution of a random bipartite state in $D \times N$ dimensions is indeed the same as that of $\mathcal{E}(|\psi\rangle\langle\psi|)$, we consider the reduced density matrix on the $N$-dimensional system of the random bipartite state and show that it has the same statistical properties as $\mathcal{E}(|\psi\rangle\langle\psi|)$. We choose the different amplitudes of the unnormalized bipartite state from a Gaussian distribution. Equivalently, for each $i$ corresponding to a given state of the environment, we choose an $N$-dimensional vector $v_i$ from a Gaussian distribution. Thus, before normalization, the reduced density matrix of the random bipartite state on the $N$-dimensional system has the same statistics as the sum $\sum_{i=1}^{D} |v_i\rangle\langle v_i|$, where the $v_i$ are vectors drawn from a Gaussian distribution. The state $\mathcal{E}(|\psi\rangle\langle\psi|)$ is the sum $(1/x^2) \sum_i x_i^2\, U_i |\psi\rangle\langle\psi| U_i^{\dagger}$. The $x_i$ have the same statistics as the lengths of the $v_i$, while the directions of the vectors $U_i |\psi\rangle$ are independent and uniformly distributed, as are the directions of the $v_i$. The factor of $1/x^2$ takes into account the normalization, so that $\mathcal{E}(|\psi\rangle\langle\psi|)$ indeed has the same statistics as the reduced density matrix of the normalized bipartite state, as claimed.

Finally, we show how to upper bound the proportionality constant. One approach is to keep track of constant factors in the derivation of [pl, pl2]. Another approach, which we explain here, is to lower bound the integral $Z = \int \delta(1 - \sum_i \lambda_i) \prod_{i<j} (\lambda_i - \lambda_j)^2 \prod_i \lambda_i^{N-D}\, d^D\lambda$. As a lower bound on the integral, we restrict to a subregion of the integration domain: we assume that the $i$-th eigenvalue falls into a narrow interval of width $1/(ND^2)$, and we choose these intervals such that $(\lambda_i - \lambda_j)^2 \ge 1/(2ND)^2$ for $i \neq j$ and such that $\prod_i \lambda_i^{N-D} \ge D^{-D(N-D)}\, e^{-O(D^2)}$. To do this, for example, we can require that the $i$-th eigenvalue obey $|\lambda_i - \mu_i| \le 1/(2ND^2)$, with centers $\mu_i = 1/D + \bigl(i - (D+1)/2\bigr)/(ND)$. Then, in this subregion, $\prod_{i<j} (\lambda_i - \lambda_j)^2 \ge e^{-O(D^2 \ln(ND))}$, and $\prod_i \lambda_i^{N-D} \ge D^{-D(N-D)}\, e^{-O(D^2)}$. The centers of the intervals were chosen such that if each eigenvalue is at its center, then $\sum_i \lambda_i = 1$; we can then estimate the volume of the subregion, intersected with the constraint surface $\sum_i \lambda_i = 1$, as $e^{-O(D \ln(ND))}$. Combining these estimates, we lower bound the integral by $D^{-D(N-D)}\, e^{-O(D^2 \ln(ND))}$, as desired. ∎

Remark: In order to get some understanding of the probability of having a given fluctuation in the entropy, we consider a Taylor expansion about the maximally mixed state. The next three paragraphs are not intended to be rigorous and are not used in the later proof. Instead, they are intended to, first, give some rough idea of the probability of a given fluctuation in the entropy, and, second, explain why $\epsilon$-nets do not suffice to give sufficiently tight bounds on the probability of having a given fluctuation in the entropy and hence why we turn to a slightly more complicated way of estimating this probability in Lemmas 3-5.

If all the eigenvalues are close to $1/D$, so that $\lambda_i = 1/D + \delta\lambda_i$ for small $\delta\lambda_i$, we can Taylor expand the last expression in (29), using $\ln(1 + u) \approx u - u^2/2$ and $\sum_i \delta\lambda_i = 0$, to get:

$$\Lambda(\{\lambda_i\}) \approx -\frac{D^2}{2} \sum_i \delta\lambda_i^2. \qquad (33)$$

Similarly, we can expand the entropy,

$$\delta S \equiv \ln(D) - S(\{\lambda_i\}) \approx \frac{D}{2} \sum_i \delta\lambda_i^2. \qquad (34)$$

Using Eqs. (33,34), we find that the probability of having a given $\delta S$ is roughly $e^{-ND\,\delta S}$, up to the $e^{O(D^2 \ln(ND))}$ prefactor.
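In more detail (a heuristic step, approximating $N - D \approx N$), eliminating $\sum_i \delta\lambda_i^2$ between Eqs. (33) and (34) gives the exponent:

$$\sum_i \delta\lambda_i^2 = \frac{2\, \delta S}{D} \quad \Longrightarrow \quad e^{(N-D)\Lambda} \approx e^{-\frac{N D^2}{2} \sum_i \delta\lambda_i^2} = e^{-ND\, \delta S}.$$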

Using $\epsilon$-nets, these estimates (33,34) give some motivation for the construction we are using, but just fail to give a good enough bound on their own: define an $\epsilon$-net with distance $\epsilon$ between points on the net. There are then $(c/\epsilon)^{2N}$ points in the net, for some constant $c$. Then, the probability that, for a random channel, at least one point on the net has a given $\delta S$ is bounded by $(c/\epsilon)^{2N} e^{-ND\,\delta S}$. Thus, the probability of having such a $\delta S$ is less than one for $\delta S > 2\ln(c/\epsilon)/D$. However, in order to use $\epsilon$-nets to show that it is unlikely to have any state with given $\delta S$, we need to take a sufficiently dense $\epsilon$-net. If there exists a state with given $\delta S$, then any state within distance $\epsilon$ of it will have, by the Fannes inequality [fannes], a $\delta S$ smaller by at most $O(\epsilon \ln(N))$, and therefore we will need to take an $\epsilon$ of roughly $\delta S/\ln(N)$ in order to use the bounds on $\delta S$ for points on the net to get bounds on $\delta S$ with an accuracy $\delta S$.
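Under these rough estimates, the union bound over the net reads:

$$\Pr\bigl[\exists\ \text{net point with entropy deficit } \delta S\bigr] \le \Bigl(\frac{c}{\epsilon}\Bigr)^{2N} e^{-ND\,\delta S} = e^{2N \ln(c/\epsilon) - ND\,\delta S},$$

which is small only for $\delta S > 2\ln(c/\epsilon)/D$; taking $\epsilon \sim \delta S/\ln(N)$ then forces $\delta S \gtrsim (2/D)\ln\bigl(c \ln(N)/\delta S\bigr)$, which grows with $N$ and so misses the target $\delta S_{\max} = c/D + \delta(N)$.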

However, in fact this Fannes inequality estimate is usually an overestimate of the change in entropy. Given a state $|\psi_0\rangle$ with a large $\delta S$, random nearby states $|\psi\rangle$ can be written as a linear combination of $|\psi_0\rangle$ with a random orthogonal vector $|\psi_\perp\rangle$. Since $\mathcal{E}(|\psi_\perp\rangle\langle\psi_\perp|)$ will typically be close to a maximally mixed state for random $|\psi_\perp\rangle$, and typically will also have almost vanishing trace with $\mathcal{E}(|\psi_0\rangle\langle\psi_0|)$, the state $\mathcal{E}(|\psi\rangle\langle\psi|)$ will typically be close to a mixture of $\mathcal{E}(|\psi_0\rangle\langle\psi_0|)$ with the maximally mixed state, and hence will also have a relatively large $\delta S$. This idea motivates what follows.

Definitions: We will say that a density matrix $\rho$ is “close to maximally mixed” if the eigenvalues $\lambda_i$ of $\rho$ all obey

$$\Bigl|\lambda_i - \frac{1}{D}\Bigr| \le c_1 \sqrt{\frac{\ln(ND)}{N}}, \qquad (35)$$

where the constant $c_1$ will be chosen later. For any given channel $\mathcal{E}$, let $P_{good}(\mathcal{E})$ denote the probability that, for a randomly chosen $|\psi\rangle$, the density matrix $\mathcal{E}(|\psi\rangle\langle\psi|)$ is close to maximally mixed. Let $P_{bad}$ denote the probability that a random choice of the $U_i$ from the Haar measure and a random choice of the numbers $x_i$ produces a channel $\mathcal{E}$ such that $P_{good}(\mathcal{E})$ is less than $1/2$. Note: we are defining $P_{bad}$ to be the probability of a probability here. Then,

Lemma 3.

For an appropriate choice of the constant $c_1$, the probability $P_{bad}$ can be made arbitrarily close to zero for all sufficiently large $N$ and $D$.

Proof.

The probability $P_{bad}$ is less than or equal to $2$ times the probability that for a random $|\psi\rangle$, random $U_i$, and random $x_i$, the density matrix $\mathcal{E}(|\psi\rangle\langle\psi|)$ is not close to maximally mixed. From (29), and as we will explain further in the next paragraph, this probability is bounded by the maximum, over $\{\lambda_i\}$ such that $|\lambda_i - 1/D| > c_1 \sqrt{\ln(ND)/N}$ for some $i$, of

$$e^{O(D^2 \ln(ND))}\, e^{(N-D) \Lambda(\{\lambda_i\})}. \qquad (36)$$

By picking $c_1$ large enough, we can make this probability

$$e^{O(D^2 \ln(ND))}\, e^{-\Omega(c_1^2 D^2 \ln(ND))} \qquad (37)$$

arbitrarily small for sufficiently large $N$ and $D$.

The fact that $\Lambda(\{\lambda_i\}) \le 0$ for all $\{\lambda_i\}$ is important in the claim that (37) indeed is a bound on the given probability. To compute the probability density for a given set of eigenvalues, $\{\lambda_i\}$, such that for some $i$ we have $|\lambda_i - 1/D| > c_1 \sqrt{\ln(ND)/N}$, we can use the bound $\sum_{j \neq i} \ln(D\lambda_j) \le (D-1) \ln\bigl(D(1 - \lambda_i)/(D-1)\bigr)$ to show that $\Lambda(\{\lambda_i\})$ is bounded by $-\Omega\bigl(D^2 (\lambda_i - 1/D)^2\bigr)$. Therefore, Eq. (37) gives a bound on the probability density under the assumption that for some $i$ we have $|\lambda_i - 1/D| > c_1 \sqrt{\ln(ND)/N}$.

To turn this bound on the probability density into a bound on the probability, note that the total integration volume is bounded by unity, and the set of $\{\lambda_i\}$ such that for some $i$ we have $|\lambda_i - 1/D| > c_1 \sqrt{\ln(ND)/N}$ is a subset of the set of all $\{\lambda_i\}$.

Finally, note that the maximum of Eq. (36) is achieved at $|\lambda_i - 1/D| = c_1 \sqrt{\ln(ND)/N}$, and it is straightforward to control the higher order terms in the Taylor expansion of $\Lambda$ in that case. ∎

The next lemma is the crucial step.

Lemma 4.

Consider a given choice of $U_i$ and $p_i$ which give a channel $\mathcal{E}$ such that $P_{good}(\mathcal{E}) \ge 1/2$. Suppose there exists a state $|\psi_0\rangle$ such that $\mathcal{E}(|\psi_0\rangle\langle\psi_0|)$ has given eigenvalues $\lambda_i^0$. Let $P_{\lambda}$ denote the probability that, for a randomly chosen state $|\psi\rangle$, the density matrix $\mathcal{E}(|\psi\rangle\langle\psi|)$ has eigenvalues $\lambda_i$ which obey

$$\Bigl|\lambda_i - y' \lambda_i^0 - (1 - y') \frac{1}{D}\Bigr| \le 2(1 - y')\, c_1 \sqrt{\frac{\ln(ND)}{N}} + \frac{1}{\mathrm{poly}(N)} \qquad (38)$$

for some $y' \ge y$. Then,

$$P_{\lambda} \ge \Bigl(\frac{1}{2} - \frac{1}{\mathrm{poly}(N)}\Bigr) (1 - y)^{N-1}, \qquad (39)$$

where the power of $N$ in the polynomial in (39) can be made arbitrarily large by an appropriate choice of the polynomial in (38).

Proof.

Consider a random state $|\psi\rangle$. We can write $|\psi\rangle$ as a linear combination of $|\psi_0\rangle$ and a state $|\psi_\perp\rangle$ which is orthogonal to $|\psi_0\rangle$ as follows:

$$|\psi\rangle = \alpha |\psi_0\rangle + \beta |\psi_\perp\rangle, \qquad (40)$$

where $\alpha \ge 0$ and $\beta$ includes a phase: $\alpha^2 + |\beta|^2 = 1$.

For random $|\psi\rangle$, the probability that $\alpha^2 \ge y$ is exponentially small in $Ny$. We can also calculate this probability exactly. Let $A_n$ be the surface area of a unit hypersphere in $n$ dimensions; computing the fraction of the sphere of dimension $2N$ with $\alpha^2 \ge y$ gives that the probability that $\alpha^2 \ge y$ is equal to

$$(1 - y)^{N-1}. \qquad (41)$$
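A quick numerical illustration (not part of the proof) of the scale of the overlap: for a random $|\psi\rangle$ in $N$ dimensions, $\alpha^2 = |\langle\psi_0|\psi\rangle|^2$ concentrates around $1/N$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 512
def rand_state(n):
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return v / np.linalg.norm(v)

psi0, psi = rand_state(N), rand_state(N)
alpha = np.vdot(psi0, psi)                        # <psi0|psi>
perp = psi - alpha * psi0                         # component orthogonal to psi0
print(abs(alpha)**2)                              # typically ~ 1/N
print(abs(alpha)**2 + np.linalg.norm(perp)**2)    # = 1: norms in Eq. (40)
```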

Since $|\psi\rangle$ is random, the probability distribution of $|\psi_\perp\rangle$ is that of a random state constrained to be orthogonal to $|\psi_0\rangle$. One way to generate a random state with this property is to choose a random state $|\chi\rangle$ and set

$$|\psi_\perp\rangle = \frac{\bigl(1 - |\psi_0\rangle\langle\psi_0|\bigr) |\chi\rangle}{\bigl\| \bigl(1 - |\psi_0\rangle\langle\psi_0|\bigr) |\chi\rangle \bigr\|}. \qquad (42)$$

If we choose a random state $|\chi\rangle$, then with probability at least $1/2$, the state $\mathcal{E}(|\chi\rangle\langle\chi|)$ is close to maximally mixed. Further, for any given $|\psi_0\rangle$, the probability that $|\langle\psi_0|\chi\rangle|^2$ is greater than $O(\ln(N)/N)$ is $1/\mathrm{poly}(N)$, and the polynomial can be chosen to be any given power of $N$ by appropriate choice of the constant hidden in the $O(\ln(N)/N)$ notation. Therefore,

$$\Pr\Bigl[\bigl\|\, |\psi_\perp\rangle\langle\psi_\perp| - |\chi\rangle\langle\chi| \,\bigr\|_1 > O\bigl(\sqrt{\ln(N)/N}\bigr)\Bigr] \le \frac{1}{\mathrm{poly}(N)}, \qquad (43)$$

with any desired power of $N$ in the polynomial on the right-hand side (the notation $\|\cdot\|_1$ is used to denote the trace norm here).

Then, since

$$\bigl\| \mathcal{E}(\rho) - \mathcal{E}(\sigma) \bigr\|_1 \le \| \rho - \sigma \|_1 \qquad (44)$$

for any channel $\mathcal{E}$, we find that

$$\Pr\Bigl[\bigl\| \mathcal{E}(|\psi_\perp\rangle\langle\psi_\perp|) - \mathcal{E}(|\chi\rangle\langle\chi|) \bigr\|_1 > O\bigl(\sqrt{\ln(N)/N}\bigr)\Bigr] \le \frac{1}{\mathrm{poly}(N)}, \qquad (45)$$

with again any desired power in the polynomial.

The probability that $\mathcal{E}(|\chi\rangle\langle\chi|)$ is close to maximally mixed is at least $1/2$, and so by (43,45) the probability that the eigenvalues $\mu_i$ of $\mathcal{E}(|\psi_\perp\rangle\langle\psi_\perp|)$ obey

$$\Bigl|\mu_i - \frac{1}{D}\Bigr| \le 2 c_1 \sqrt{\frac{\ln(ND)}{N}} \qquad (46)$$

is at least $1/2 - 1/\mathrm{poly}(N)$. Let

$$\epsilon_N = 2(1 - y)\, c_1 \sqrt{\frac{\ln(ND)}{N}} + \frac{1}{\mathrm{poly}(N)}. \qquad (47)$$

Thus, since

$$\mathcal{E}(|\psi\rangle\langle\psi|) = \alpha^2\, \mathcal{E}(|\psi_0\rangle\langle\psi_0|) + |\beta|^2\, \mathcal{E}(|\psi_\perp\rangle\langle\psi_\perp|) + \alpha \bar{\beta}\, \mathcal{E}(|\psi_0\rangle\langle\psi_\perp|) + \alpha \beta\, \mathcal{E}(|\psi_\perp\rangle\langle\psi_0|), \qquad (48)$$

using Eq. (45) we find that for given $\alpha^2 = y$, the probability that a randomly chosen $|\psi\rangle$ gives a state $\mathcal{E}(|\psi\rangle\langle\psi|)$ with eigenvalues $\lambda_i$ such that

$$\Bigl|\lambda_i - y \lambda_i^0 - (1 - y) \frac{1}{D}\Bigr| \le \epsilon_N \qquad (49)$$

is at least $1/2 - 1/\mathrm{poly}(N)$. Combining this result with the probability of $\alpha^2 \ge y$, the claim of the lemma follows. ∎

We now give the last lemma, which shows a lower bound, with non-zero probability, on $S_{\min}(\mathcal{E})$. The basic idea of the proof is to estimate the probability that a random state input into a random channel gives an output state with moderately low output entropy (defined slightly differently below, in terms of properties of the eigenvalues of the output density matrix). We estimate this probability in two different ways. First, we estimate the probability of such an output state conditioned on the channel being chosen such that there exists some input state with an output entropy less than $\ln(D) - \delta S_{\max}$. Next, we estimate the probability of such an output state, without any conditioning on the channel. By comparing these estimates, we are able to bound the probability of having an input state which gives an output entropy less than $\ln(D) - \delta S_{\max}$.

Lemma 5.

If the unitary matrices $U_i$ are chosen at random from the Haar measure, and the $x_i$ are chosen randomly as described above, then the probability that $S_{\min}(\mathcal{E})$ is less than $\ln(D) - \delta S_{\max}$ is less than one for sufficiently large $N$, for appropriate choice of $c$ and $\delta(N)$. The required $N$ depends on $D$.

Proof.

Let $P_{small}$ denote the probability that $S_{\min}(\mathcal{E}) < \ln(D) - \delta S_{\max}$. Then, with probability at least $P_{small} - P_{bad}$, for random $U_i$ and $x_i$, the channel $\mathcal{E}$ has $S_{\min}(\mathcal{E}) < \ln(D) - \delta S_{\max}$ and has $P_{good}(\mathcal{E}) \ge 1/2$.

Let $|\psi_0\rangle$ be a state which minimizes the output entropy of channel $\mathcal{E}$. By Lemma 4, for such a channel, for a random state $|\psi\rangle$, the density matrix $\mathcal{E}(|\psi\rangle\langle\psi|)$ has eigenvalues which obey

$$\Bigl|\lambda_i - y \lambda_i^0 - (1 - y) \frac{1}{D}\Bigr| \le \epsilon_N \qquad (50)$$

for given $y$, with probability at least

$$\Bigl(\frac{1}{2} - \frac{1}{\mathrm{poly}(N)}\Bigr) (1 - y)^{N-1}. \qquad (51)$$

Therefore, for a random choice of $|\psi\rangle$, the state