On Variational Expressions for Quantum Relative Entropies
Abstract
Distance measures between quantum states like the trace distance and the fidelity can naturally be defined by optimizing a classical distance measure over all measurement statistics that can be obtained from the respective quantum states. In contrast, Petz showed that the measured relative entropy, defined as a maximization of the Kullback–Leibler divergence over projective measurement statistics, is strictly smaller than Umegaki's quantum relative entropy whenever the states do not commute. We extend this result in two ways. First, we show that Petz' conclusion remains true if we allow general positive operator valued measures. Second, we extend the result to Rényi relative entropies and show that for non-commuting states the sandwiched Rényi relative entropy is strictly larger than the measured Rényi relative entropy for α ∈ (1/2, ∞), and strictly smaller for α ∈ [0, 1/2). The latter statement provides counterexamples for the data-processing inequality of the sandwiched Rényi relative entropy for α < 1/2. Our main tool is a new variational expression for the measured Rényi relative entropy, which we further exploit to show that certain lower bounds on quantum conditional mutual information are superadditive.
Keywords: Quantum entropy, measured relative entropy, relative entropy of recovery, additivity in quantum information theory, operator Jensen inequality, convex optimization.
Mathematics Subject Classifications (2010): 94A17, 81Q99, 15A45.
I Measured Relative Entropy
The relative entropy is the basic concept underlying various information measures like entropy, conditional entropy and mutual information. A thorough understanding of its quantum generalization is thus of preeminent importance in quantum information theory. We start by considering measured relative entropy, which is defined as a maximization of the Kullback–Leibler divergence over all measurement statistics that are attainable from two quantum states.
For a positive measure Q on a finite set 𝒳 and a probability measure P on 𝒳 that is absolutely continuous with respect to Q, denoted P ≪ Q, the relative entropy or Kullback–Leibler divergence [kullback51] is defined as

(1) D(P‖Q) := Σ_{x∈𝒳} P(x) log( P(x)/Q(x) ),

where we understand P(x) log(P(x)/Q(x)) = 0 whenever P(x) = 0. By continuity we define it as +∞ if P is not absolutely continuous with respect to Q. (We use log to denote the natural logarithm.)
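As a concrete illustration (this sketch is ours, not part of the original text), definition (1) with the stated conventions can be implemented directly; the function name is our choice.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P||Q) in nats, following (1):
    terms with P(x) = 0 contribute zero, and the divergence is +inf
    if P is not absolutely continuous with respect to Q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if np.any((p > 0) & (q == 0)):
        return np.inf  # P is not absolutely continuous w.r.t. Q
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

For instance, kl_divergence([0.75, 0.25], [0.5, 0.5]) evaluates the divergence between a biased and a uniform bit.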
To extend this concept to quantum systems, Donald [donald86] as well as Hiai and Petz [hiai91] studied measured relative entropy. In the following we restrict ourselves to a d-dimensional Hilbert space for some d ∈ ℕ. Let us denote the set of positive semidefinite operators acting on this space by 𝒫 and the subset of density operators (with unit trace) by 𝒮. For a density operator ρ ∈ 𝒮 and σ ∈ 𝒫, we define two variants of measured relative entropy. The general measured relative entropy is defined as

(2) D^M(ρ‖σ) := sup_{(𝒳, M)} D( P_{ρ,M} ‖ P_{σ,M} ),

where the optimization is over finite sets 𝒳 and positive operator valued measures (POVMs) M on 𝒳. (More formally, M is a map from 𝒳 to positive semidefinite operators and satisfies Σ_{x∈𝒳} M(x) = 1, whereas P_{ρ,M} is a measure on 𝒳 defined via the relation P_{ρ,M}(x) := Tr(M(x)ρ) for any x ∈ 𝒳.) At first sight this definition seems cumbersome because we cannot restrict the size of the sets 𝒳 that we optimize over. Therefore, following the works [donald86, petz86b, hiai91], let us also consider the following projectively measured relative entropy, which is defined as

(3) D^P(ρ‖σ) := sup_{{P_x}_x} Σ_x Tr(P_x ρ) log( Tr(P_x ρ) / Tr(P_x σ) ),

where the maximization is over all sets {P_x}_x of mutually orthogonal projectors and we spelled out the Kullback–Leibler divergence for discrete measures. Note that without loss of generality we can assume that these projectors are rank-1, as any coarse-graining of the measurement outcomes can only reduce the relative entropy due to its data-processing inequality [lindblad75, uhlmann77]. Furthermore, the quantity D^P(ρ‖σ) is finite and the supremum is achieved whenever ρ ≪ σ, which here denotes that the support of ρ is contained in the support of σ. (To verify this, recall that the rank-1 projectors form a compact set and the divergence is lower semicontinuous.)
The first of the following two variational expressions for the (projectively) measured relative entropy is due to Petz [petzbook08]. Note that the second objective function is concave in ω so that the optimization problem has a particularly appealing form.
Lemma 1.
For ρ ∈ 𝒮 and σ ∈ 𝒫 nonzero, the following identities hold:

(4) D^P(ρ‖σ) = sup_{ω>0} Tr(ρ log ω) − log Tr(σω) = sup_{ω>0} Tr(ρ log ω) + 1 − Tr(σω).

Moreover, the suprema are achieved when ρ and σ are positive definite operators, i.e. ρ > 0 and σ > 0.
Proof.
If ρ is not absolutely continuous with respect to σ then the two expressions in the suprema of (4) are unbounded, as expected. We now assume that ρ ≪ σ. Let us consider the second expression in (4). We write the supremum over ω > 0 as two suprema over spectral decompositions ω = Σ_x ω_x P_x, where the ω_x > 0 are the eigenvalues of ω corresponding to the eigenvectors given by rank-1 projectors P_x. Using the fact that Tr(ρ log ω) = Σ_x Tr(P_x ρ) log ω_x, we find

(5) sup_{ω>0} Tr(ρ log ω) + 1 − Tr(σω) = sup_{{P_x}_x} sup_{{ω_x>0}} Σ_x [ Tr(P_x ρ) log ω_x − Tr(P_x σ) ω_x ] + 1.

For x such that Tr(P_x σ) = 0, we also have Tr(P_x ρ) = 0, and thus the corresponding term is zero. When Tr(P_x σ) > 0, let us first consider Tr(P_x ρ) = 0. In this case, the supremum of the x-th term is achieved in the limit ω_x → 0. Now in the case Tr(P_x ρ) > 0 (which is the only possible case when ρ > 0), observe that the expression is concave in ω_x, and the inner supremum is achieved by the local maximum at ω_x = Tr(P_x ρ)/Tr(P_x σ). Plugging this into (5), we find

(6) sup_{ω>0} Tr(ρ log ω) + 1 − Tr(σω) = sup_{{P_x}_x} Σ_x Tr(P_x ρ) log( Tr(P_x ρ) / Tr(P_x σ) ),

where we used Σ_x Tr(P_x ρ) = 1. This is the expression for the measured relative entropy in (3). The remaining supremum is achieved because the set of rank-1 projectors is compact and the divergence is lower semicontinuous.

It remains to show that the two variational expressions in (4) are equivalent. We have log t ≤ t − 1 for all t > 0 and, thus, log Tr(σω) ≤ Tr(σω) − 1 for all ω > 0. This yields

(7) sup_{ω>0} Tr(ρ log ω) − log Tr(σω) ≥ sup_{ω>0} Tr(ρ log ω) + 1 − Tr(σω).

Now note that the expression on the left-hand side is invariant under the substitution ω → γω for γ > 0. Hence, as Tr(σω) > 0 for ω > 0 and nonzero σ, we can add the normalization constraint Tr(σω) = 1 and we have

(8) sup_{ω>0} Tr(ρ log ω) − log Tr(σω) = sup_{ω>0, Tr(σω)=1} Tr(ρ log ω) − log Tr(σω)
(9) = sup_{ω>0, Tr(σω)=1} Tr(ρ log ω) + 1 − Tr(σω) ≤ sup_{ω>0} Tr(ρ log ω) + 1 − Tr(σω),

where we used that log Tr(σω) = 0 = 1 − Tr(σω) when Tr(σω) = 1. ∎
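A quick numerical sanity check (ours, restricted to commuting states so that everything is classical): the second objective in (4) attains the relative entropy at ω = ρ/σ, while any feasible ω > 0 yields a lower bound.

```python
import numpy as np

# Commuting (diagonal) states, represented by their eigenvalues.
p = np.array([0.6, 0.3, 0.1])
q = np.array([0.5, 0.25, 0.25])
D = float(np.sum(p * np.log(p / q)))   # relative entropy D(rho||sigma)

def objective(w):
    """Second objective in (4), Tr(rho log w) + 1 - Tr(sigma w),
    for omega diagonal with positive eigenvalues w."""
    return float(np.sum(p * np.log(w)) + 1.0 - np.sum(q * w))

optimal = objective(p / q)                        # achieved at omega = rho/sigma
feasible = objective(np.array([1.2, 0.8, 1.0]))   # any omega > 0 lower-bounds D
```

This concave maximization is what makes the second form in (4) computationally convenient.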
Using this variational expression, we are able to answer a question left open by Donald [donald86, Sec. 3] as well as Hiai and Petz [hiai91, Sec. 1], namely whether the two definitions of measured relative entropy are equal.
Theorem 2.
For ρ ∈ 𝒮 and σ ∈ 𝒫, we have D^M(ρ‖σ) = D^P(ρ‖σ).
Proof.
The direction D^M(ρ‖σ) ≥ D^P(ρ‖σ) holds trivially, since every projective measurement is a POVM. Moreover, if ρ is not absolutely continuous with respect to σ, we can choose P to be a rank-1 projector such that Tr(Pρ) > 0 and Tr(Pσ) = 0, and thus D^P(ρ‖σ) = D^M(ρ‖σ) = +∞.

It remains to show the direction D^M(ρ‖σ) ≤ D^P(ρ‖σ) when ρ ≪ σ holds. Let (𝒳, M) be a POVM. Recall that the distribution P_{ρ,M} is defined by P_{ρ,M}(x) = Tr(M(x)ρ). Introducing the isometry K := Σ_{x∈𝒳} M(x)^{1/2} ⊗ |x⟩, which satisfies K†K = Σ_x M(x) = 1, we can write, using the classical variational expression established in the proof of Lemma 1,

(10) D( P_{ρ,M} ‖ P_{σ,M} ) = sup_{{ω_x>0}} Σ_x Tr(M(x)ρ) log ω_x + 1 − Σ_x Tr(M(x)σ) ω_x
(11) = sup_{{ω_x>0}} Tr( ρ K† ( 1 ⊗ Σ_x log(ω_x) |x⟩⟨x| ) K ) + 1 − Tr( σ K† ( 1 ⊗ Σ_x ω_x |x⟩⟨x| ) K )
(12) = sup_Ω Tr( ρ K† log(Ω) K ) + 1 − Tr( σ K† Ω K ),

where the last supremum is over all operators of the form Ω = 1 ⊗ Σ_x ω_x |x⟩⟨x| with ω_x > 0. Now observe that for any such Ω, the spectrum of the operator Ω is included in (0, ∞). As a result, we can apply the operator Jensen inequality for the function t ↦ log t, which is operator concave on (0, ∞), and get

(13) Tr( ρ K† log(Ω) K ) ≤ Tr( ρ log( K† Ω K ) ).

Now we simply choose

(14) ω := K† Ω K = Σ_x ω_x M(x),

which allows us to further bound (13), and hence each expression in (12), by sup_{ω>0} Tr(ρ log ω) + 1 − Tr(σω). Comparing this with the variational expression for the measured relative entropy in Lemma 1 yields the desired inequality. ∎
Hence, the measured relative entropy D^M achieves its maximum for projective rank-1 measurements and can be evaluated using the concave optimization problem in Lemma 1.
II Measured Rényi Relative Entropy
Here we extend the results of the previous section to the Rényi divergence. Using the same notation as in the previous section, for α ∈ (0, 1) ∪ (1, ∞) we define the Rényi divergence [renyi61] as

(15) D_α(P‖Q) := (1/(α−1)) log Σ_{x∈𝒳} P(x)^α Q(x)^{1−α}

if α ∈ (0, 1) or P ≪ Q, and as +∞ if α > 1 and P is not absolutely continuous with respect to Q. For α ∈ (0, 1) we rewrite the sum as

(16) Σ_{x∈𝒳} P(x)^α Q(x)^{1−α} = Σ_{x: P(x)Q(x)>0} P(x)^α Q(x)^{1−α}.

Hence we see that absolute continuity is not necessary to keep D_α finite for α ∈ (0, 1). However, the Rényi divergence instead diverges to +∞ when P and Q are orthogonal.¹ (¹P and Q are orthogonal, denoted P ⊥ Q, if there exists no x ∈ 𝒳 such that P(x) > 0 and Q(x) > 0.) It is well known that the Rényi divergence converges to the Kullback–Leibler divergence when α → 1 and we thus set D_1 := D. Moreover, in the limit α → ∞ we find the max-divergence D_∞(P‖Q) := log max_{x: P(x)>0} P(x)/Q(x).
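The classical definition (15)–(16) translates directly into code (our sketch; the function name is ours):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Classical Renyi divergence D_alpha(P||Q) in nats, as in (15),
    for alpha in (0,1) or (1,inf). For alpha < 1 the sum runs over
    the common support, as in (16)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if alpha > 1 and np.any((p > 0) & (q == 0)):
        return np.inf  # P not absolutely continuous w.r.t. Q
    mask = (p > 0) & (q > 0)
    if not np.any(mask):
        return np.inf  # orthogonal distributions
    s = np.sum(p[mask] ** alpha * q[mask] ** (1.0 - alpha))
    return float(np.log(s) / (alpha - 1.0))
```

Evaluating at α close to 1 recovers the Kullback–Leibler divergence, in line with the convention D_1 := D.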
Let us now define the measured Rényi relative entropy as before, namely

(17) D_α^M(ρ‖σ) := sup_{(𝒳, M)} D_α( P_{ρ,M} ‖ P_{σ,M} ).

We will later show that this is equivalent to the following projectively measured Rényi relative entropy, which we define here for α ∈ (1, ∞) as

(18) D_α^P(ρ‖σ) := (1/(α−1)) log sup_{{P_x}_x} Σ_x Tr(P_x ρ)^α Tr(P_x σ)^{1−α},

and analogously for α ∈ (0, 1) with

(19) D_α^P(ρ‖σ) := (1/(α−1)) log inf_{{P_x}_x} Σ_x Tr(P_x ρ)^α Tr(P_x σ)^{1−α}.

Note that the supremum in (18) is achieved and D_α^P(ρ‖σ) is finite whenever ρ ≪ σ. Similarly, the minimum in (19) is nonzero and D_α^P(ρ‖σ) is finite whenever ρ ⊥̸ σ, i.e. when the two states are not orthogonal.
Next we give variational expressions for D_α^P similar to the variational characterization of the measured relative entropy in Lemma 1.
Lemma 3.
For α ∈ (0, 1) ∪ (1, ∞), ρ ∈ 𝒮 and σ ∈ 𝒫, the following identities hold:

(20) D_α^P(ρ‖σ) = (1/(α−1)) log inf_{ω>0} [ α Tr(ρ ω^{(α−1)/α}) + (1−α) Tr(σω) ],
(21) D_α^P(ρ‖σ) = (1/(α−1)) log inf_{ω>0} [ Tr(ρ ω^{(α−1)/α})^α Tr(σω)^{1−α} ],

for α ∈ (0, 1), and with the infima replaced by suprema for α ∈ (1, ∞). The infima and suprema are achieved when ρ > 0 and σ > 0.
The expressions in (21) can be seen as a generalization of Alberti's theorem [alberti83] for the fidelity (which corresponds to α = 1/2) to general α.
Proof.
We first show the identity (20). Let us discuss the case α ∈ (0, 1) in detail. Note that the transformation ω → ω^{α/(α−1)} yields an equivalent way of writing the objective in (20), with α Tr(ρω) + (1−α) Tr(σ ω^{α/(α−1)}) in place of the expression above (the reason for the different ways of writing it is to see that the expressions are convex, respectively concave, in ω, which will be useful later, in particular in Theorem 4). We first write

(22) inf_{ω>0} α Tr(ρ ω^{(α−1)/α}) + (1−α) Tr(σω) = inf_{{P_x}_x} inf_{{ω_x>0}} Σ_x [ α Tr(P_x ρ) ω_x^{(α−1)/α} + (1−α) Tr(P_x σ) ω_x ].

Let x be such that Tr(P_x ρ) and Tr(P_x σ) are both strictly positive (which is the case when ρ > 0 and σ > 0). Then a local (and thus global) minimum for ω_x is easily found at the point where

(23) ω_x = ( Tr(P_x ρ) / Tr(P_x σ) )^α.

If both Tr(P_x ρ) = Tr(P_x σ) = 0 we can choose ω_x arbitrarily. If only Tr(P_x ρ) = 0 the infimum is achieved in the limit ω_x → 0, and if only Tr(P_x σ) = 0 in the limit ω_x → ∞. In all these cases the infimum of the x-th term is zero. Furthermore, it is achieved for a finite, nonzero ω_x when Tr(P_x ρ) > 0 and Tr(P_x σ) > 0. Plugging this solution into the above expression yields

(24) inf_{ω>0} α Tr(ρ ω^{(α−1)/α}) + (1−α) Tr(σω) = inf_{{P_x}_x} Σ_x Tr(P_x ρ)^α Tr(P_x σ)^{1−α}.

This infimum is always achieved since the set we optimize over is compact. Comparing this with the definition of D_α^P in (19) yields the first equality.

For the case α ∈ (1, ∞), when ρ ≪ σ, the proof is analogous to the previous argument. Otherwise, it is simple to see that the supremum is +∞.
As for the measured relative entropy, the restriction to rank-1 projective measurements is in fact not restrictive at all.
Theorem 4.
For α ∈ (0, 1) ∪ (1, ∞), ρ ∈ 𝒮 and σ ∈ 𝒫, we have D_α^M(ρ‖σ) = D_α^P(ρ‖σ).
Proof.
For α ∈ (1, ∞) we follow the steps of the proof of Theorem 2. Consider any finite set 𝒳 and POVM M with induced measures P = P_{ρ,M} and Q = P_{σ,M}. We can write

(27) Σ_x P(x)^α Q(x)^{1−α} = Σ_x [ α Tr(M(x)ρ) ω_x^{(α−1)/α} + (1−α) Tr(M(x)σ) ω_x ] with ω_x := ( P(x)/Q(x) )^α,

where we can restrict the sum over x with P(x) > 0. We then find that the sum satisfies

(28) Σ_x Tr(M(x)ρ) ω_x^{(α−1)/α} = Tr( ρ K† Ω^{(α−1)/α} K ) ≤ Tr( ρ ( K† Ω K )^{(α−1)/α} ),

with K := Σ_x M(x)^{1/2} ⊗ |x⟩ and Ω := 1 ⊗ Σ_x ω_x |x⟩⟨x| as in the proof of Theorem 2, where the inequality again follows by the operator Jensen inequality and the operator concavity of the function t ↦ t^{(α−1)/α} on [0, ∞). Now we set

(29) ω := K† Ω K = Σ_x ω_x M(x).

Thus, using α > 0 and 1 − α < 0, we can bound

(30) Σ_x P(x)^α Q(x)^{1−α} ≤ α Tr(ρ ω^{(α−1)/α}) + (1−α) Tr(σω) ≤ sup_{ω>0} [ α Tr(ρ ω^{(α−1)/α}) + (1−α) Tr(σω) ].

Comparing this with the variational expression in Lemma 3 yields the desired inequality.
For α ∈ (0, 1), we use the same notation as in (27), where we may assume P(x)Q(x) > 0 for all x (the remaining terms are handled by a limiting argument). We further distinguish the cases α ∈ [1/2, 1) and α ∈ (0, 1/2). For α ∈ [1/2, 1), we define

(31) ω := K† Ω K = Σ_x ω_x M(x),

with K := Σ_x M(x)^{1/2} ⊗ |x⟩ and Ω := 1 ⊗ Σ_x ω_x |x⟩⟨x| as before. We can then evaluate

(32) Σ_x Tr(M(x)ρ) ω_x^{(α−1)/α} = Tr( ρ K† Ω^{(α−1)/α} K )
(33) ≥ Tr( ρ ( K† Ω K )^{(α−1)/α} ) = Tr( ρ ω^{(α−1)/α} ),

where we used the operator convexity of t ↦ t^{(α−1)/α} on (0, ∞) for (α−1)/α ∈ [−1, 0) and the operator Jensen inequality. Moreover,

(34) Σ_x Tr(M(x)σ) ω_x = Tr(σω).

As a result, using α > 0 and 1 − α > 0,

(35) Σ_x P(x)^α Q(x)^{1−α} ≥ α Tr(ρ ω^{(α−1)/α}) + (1−α) Tr(σω) ≥ inf_{ω>0} [ α Tr(ρ ω^{(α−1)/α}) + (1−α) Tr(σω) ].

Comparing this with the variational expression in Lemma 3 yields the desired inequality, since 1/(α−1) < 0.
For α ∈ (0, 1/2) we choose ν := K† N K = Σ_x ν_x M(x) with ν_x := ω_x^{(α−1)/α} and N := 1 ⊗ Σ_x ν_x |x⟩⟨x|, so that

(36) Σ_x Tr(M(x)ρ) ν_x = Tr(ρν), and
(37) Σ_x Tr(M(x)σ) ν_x^{α/(α−1)} = Tr( σ K† N^{α/(α−1)} K ) ≥ Tr( σ ν^{α/(α−1)} ),

where we used the operator convexity of t ↦ t^{α/(α−1)} on (0, ∞) for α/(α−1) ∈ (−1, 0) and the operator Jensen inequality, and once again conclude using the variational expression in Lemma 3. ∎
III Achievability of Relative Entropy
III.1 Umegaki's Relative Entropy
Here we compare the measured relative entropy to other notions of quantum relative entropy that have been investigated in the literature and have found operational significance in quantum information theory. Umegaki's quantum relative entropy [umegaki62] has found operational significance as the threshold rate for asymmetric binary quantum hypothesis testing [hiai91]. For ρ ∈ 𝒮 and σ ∈ 𝒫, it is defined as

(38) D(ρ‖σ) := Tr( ρ (log ρ − log σ) )

if ρ ≪ σ, and as +∞ otherwise.
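For full-rank states, definition (38) reduces to an eigendecomposition (this sketch and its function names are ours):

```python
import numpy as np

def _logm(a):
    """Matrix logarithm of a positive definite Hermitian matrix,
    computed via eigendecomposition."""
    vals, vecs = np.linalg.eigh(a)
    return (vecs * np.log(vals)) @ vecs.conj().T

def umegaki(rho, sigma):
    """Umegaki relative entropy D(rho||sigma) = Tr rho(log rho - log sigma),
    in nats, assuming full-rank Hermitian inputs."""
    return float(np.trace(rho @ (_logm(rho) - _logm(sigma))).real)
```

For commuting (diagonal) states this reproduces the classical Kullback–Leibler divergence of the spectra.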
We recall the following variational expression by Petz [petz88] (see also [kosaki86] for another variational expression):

(39) D(ρ‖σ) = sup_{ω>0} Tr(ρ log ω) − log Tr( exp( log σ + log ω ) )
(40) = sup_{ω>0} Tr(ρ log ω) + 1 − Tr( exp( log σ + log ω ) ).

By the data-processing inequality for the relative entropy [lindblad75, uhlmann77] and Theorem 2 we always have

(41) D^M(ρ‖σ) ≤ D(ρ‖σ),
and moreover Petz [petz86b] showed that the inequality is strict if ρ and σ do not commute (for ρ > 0 and σ > 0). Theorem 2 strengthens this to show that the strict inequality persists even when we take a supremum over POVMs. In the following we give an alternative proof of Petz' result and then extend the argument to the Rényi relative entropy in Section III.2.
Proposition 5.
Let ρ ∈ 𝒮 with ρ > 0 and σ ∈ 𝒫 with σ > 0. Then, we have

(42) D^M(ρ‖σ) = D(ρ‖σ) if and only if ρσ = σρ.
Our proof relies on the Golden–Thompson inequality [golden65, thompson65]. It states that for two Hermitian matrices A and B, it holds that

(43) Tr( exp(A + B) ) ≤ Tr( exp(A) exp(B) ),

with equality if and only if AB = BA, as shown in [so92].
Proof.
First, it is evident that equality holds if ρσ = σρ, since then there exists a projective measurement that commutes with ρ and σ and thus does not affect the states. For the following, it is worth writing the variational expressions for the two quantities side by side. Namely, writing ω = exp(H), we have

(44) D(ρ‖σ) = sup_H Tr(ρH) − log Tr( exp( log σ + H ) ),
(45) D^M(ρ‖σ) = max_H Tr(ρH) − log Tr( exp(log σ) exp(H) ),

where we optimize over all Hermitian operators H. Note that, according to Lemma 1, we can write a maximum in (45) because we are assuming ρ > 0 and σ > 0. The inequality in (42) can now be seen as a direct consequence of the Golden–Thompson inequality.

It remains to show that D^M(ρ‖σ) = D(ρ‖σ) implies ρσ = σρ. Let H̄ be any maximizer of the variational problem in (45). Observe now that the equality necessitates

(46) Tr( exp( log σ + H̄ ) ) = Tr( exp(log σ) exp(H̄) ),

which holds if and only if σ exp(H̄) = exp(H̄) σ by the equality condition in (43). Now define the function

(47) f(H) := Tr(ρH) − log Tr( exp(log σ) exp(H) ),

and since H̄ maximizes f, we must have for all Hermitian X,

(48) 0 = Df(H̄)[X] = Tr(ρX) − Tr( σ exp(H̄) X ) / Tr( σ exp(H̄) ).

To evaluate the second summand of this Fréchet derivative we used that σ and exp(H̄) commute. Since this holds for all X we must in fact have ρ = σ exp(H̄)/Tr(σ exp(H̄)), which means that ρσ = σρ, as desired. ∎
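Proposition 5 can be observed numerically for a pair of non-commuting qubit states (the example states and the brute-force grid are ours; for real symmetric ρ and σ the optimal rank-1 projective basis can be taken real, since the measurement statistics only depend on the Bloch components in the x–z plane):

```python
import numpy as np

def _logm(a):
    vals, vecs = np.linalg.eigh(a)
    return (vecs * np.log(vals)) @ vecs.conj().T

rho = np.array([[0.5, 0.4], [0.4, 0.5]])   # eigenvalues (0.9, 0.1), x basis
sigma = np.diag([0.9, 0.1])                # same spectrum, z basis
D = float(np.trace(rho @ (_logm(rho) - _logm(sigma))).real)  # Umegaki

def measured(n=2000):
    """Grid search over rank-1 projective qubit measurements in the
    real plane; returns an essentially tight lower bound on D^M."""
    best = -np.inf
    for t in np.linspace(0.0, np.pi, n):
        v = np.array([np.cos(t), np.sin(t)])
        w = np.array([-np.sin(t), np.cos(t)])
        p = np.array([v @ rho @ v, w @ rho @ w])
        q = np.array([v @ sigma @ v, w @ sigma @ w])
        best = max(best, float(np.sum(p * np.log(p / q))))
    return best
```

Here the best measured value stays visibly below the Umegaki relative entropy, as Proposition 5 predicts for non-commuting states.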
In some sense this result tells us that some quantum correlations, as measured by the relative entropy, do not survive the measurement process. This fact appears in quantum information theory in various different guises, for example in the form of locking classical correlations in quantum states [divincenzo04]. (We also point to [piani09] for the use of measured relative entropy measures in quantum information theory.) Moreover, since Umegaki's relative entropy is the smallest quantum generalization of the Kullback–Leibler divergence that is both additive and satisfies data-processing (see, e.g., [mybook, Sec. 4.2.2]), the same conclusion can be drawn for any sensible quantum relative entropy. (An example being the quantum relative entropy introduced by Belavkin and Staszewski [belavkin82].)
III.2 Sandwiched Rényi Relative Entropy
Next we consider a family of quantum Rényi relative entropies [lennert13, wilde13] that are commonly called sandwiched Rényi relative entropies and have found operational significance since they determine the strong converse exponent in asymmetric binary quantum hypothesis testing [mosonyiogawa13]. They are of particular interest here because they are, for α > 1, the smallest quantum generalization of the Rényi divergence that is both additive and satisfies data-processing [mybook, Sec. 4.2.2]. (Examples of other sensible quantum generalizations are the quantum Rényi relative entropy first studied by Petz [petz86] and the quantum divergences introduced by Matsumoto [matsumoto14].)
For α ∈ (0, 1) ∪ (1, ∞) and ρ ∈ 𝒮, σ ∈ 𝒫, the sandwiched Rényi relative entropy of order α is defined as

(49) D̃_α(ρ‖σ) := (1/(α−1)) log Tr[ ( σ^{(1−α)/2α} ρ σ^{(1−α)/2α} )^α ],

where the same considerations about finiteness as for the measured Rényi relative entropy apply. We also consider the limits α → 1 and α → ∞ of the above expression, for which we have [lennert13]

(50) D̃_1(ρ‖σ) = D(ρ‖σ) and D̃_∞(ρ‖σ) = D_max(ρ‖σ) := log ‖ σ^{−1/2} ρ σ^{−1/2} ‖_∞,

respectively.
respectively. We recall the following variational expression by Frank and Lieb frank13 ():
(51) 
Alternatively, we can also write
(52) 
where we have used the same arguments as in the proof of the second part of Lemma 3. By the dataprocessing inequality for the sandwiched Rényi relative entropy lennert13 (); frank13 (); beigi13 () we always have
(53) 
In the following we give an alternative proof of this fact and show that
(54) 
In contrast, at the boundaries it is known that koenig08 (); fuchs96 (); mosonyiogawa13 (). (We refer to (mosonyiogawa13, , App. A) for a detailed discussion.)
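Numerically, the defining expression (49) is straightforward to evaluate by eigendecomposition (a sketch with our function names, assuming full-rank states):

```python
import numpy as np

def _powm(a, p):
    """Fractional power of a positive definite Hermitian matrix."""
    vals, vecs = np.linalg.eigh(a)
    return (vecs * vals ** p) @ vecs.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    """Sandwiched Renyi relative entropy (49) in nats, full-rank inputs."""
    e = (1.0 - alpha) / (2.0 * alpha)
    x = _powm(sigma, e) @ rho @ _powm(sigma, e)
    return float(np.log(np.trace(_powm(x, alpha)).real) / (alpha - 1.0))
```

For commuting states the expression collapses to the classical Rényi divergence of the spectra, which gives a convenient correctness check.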
Theorem 6.
Let ρ ∈ 𝒮 with ρ > 0 and σ ∈ 𝒫 with σ > 0. For α ∈ (1/2, 1) ∪ (1, ∞), we have

(55) D_α^M(ρ‖σ) = D̃_α(ρ‖σ) if and only if ρσ = σρ.
The argument is similar to the proof of Proposition 5 but with the Golden–Thompson inequality replaced by the Araki–Lieb–Thirring inequality [liebthirring05, araki90]. It states that for positive definite A and B we have

(56) Tr[ ( A^{1/2} B A^{1/2} )^r ] ≤ Tr[ A^{r/2} B^r A^{r/2} ] for r ∈ (−∞, −1] ∪ [1, ∞), and
(57) Tr[ ( A^{1/2} B A^{1/2} )^r ] ≥ Tr[ A^{r/2} B^r A^{r/2} ] for r ∈ [−1, 1],

with equality if and only if AB = BA, except for r ∈ {−1, 0, 1} where equality always holds, as shown in [hiai94].
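Both directions of the Araki–Lieb–Thirring inequality can be probed numerically (the positive definite matrices below are our illustrative choices):

```python
import numpy as np

def _powm(a, p):
    vals, vecs = np.linalg.eigh(a)
    return (vecs * vals ** p) @ vecs.conj().T

# Positive definite, non-commuting test matrices.
A = np.array([[1.0, 0.4], [0.4, 0.6]])
B = np.array([[0.7, -0.2], [-0.2, 1.2]])

def alt_sides(r):
    """Returns (Tr (A^{1/2} B A^{1/2})^r, Tr A^{r/2} B^r A^{r/2})."""
    lhs = float(np.trace(_powm(_powm(A, 0.5) @ B @ _powm(A, 0.5), r)).real)
    rhs = float(np.trace(_powm(A, r / 2) @ _powm(B, r) @ _powm(A, r / 2)).real)
    return lhs, rhs
```

At r = 1 both sides coincide with Tr(AB), while away from ±1 the ordering flips between the two regimes.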
Proof.
We give the proof for α ∈ (1, ∞) and note that the argument for α ∈ (1/2, 1) is analogous (with minima in place of maxima). We have the following variational expressions from Lemma 3 and (51):

(58) D̃_α(ρ‖σ) = (1/(α−1)) log max_{ω>0} [ α Tr(ρω) + (1−α) Tr( ( σ^{(α−1)/2α} ω σ^{(α−1)/2α} )^{α/(α−1)} ) ],
(59) D_α^M(ρ‖σ) = (1/(α−1)) log max_{ω>0} [ α Tr(ρω) + (1−α) Tr( σ^{1/2} ω^{α/(α−1)} σ^{1/2} ) ],

where the existence of the maxima relies on the fact that both operators have full support. (Note also that these two expressions are in fact equivalent for ρσ = σρ.) Since α/(α−1) > 1 and 1 − α < 0, the inequality then follows immediately by the Araki–Lieb–Thirring inequality (56):

(60) Tr( ( σ^{(α−1)/2α} ω σ^{(α−1)/2α} )^{α/(α−1)} ) ≤ Tr( σ^{1/2} ω^{α/(α−1)} σ^{1/2} ).

Furthermore, if ρσ = σρ we have equality. To show that D_α^M(ρ‖σ) = D̃_α(ρ‖σ) implies ρσ = σρ, we define the function

(61) f(ω) := α Tr(ρω) + (1−α) Tr( σ^{1/2} ω^{α/(α−1)} σ^{1/2} ).

For any maximizer ω̄ of the variational problem in (59), we have

(62) 0 = Df(ω̄)[X] = α Tr(ρX) − α Tr( σ ω̄^{1/(α−1)} X )

for all Hermitian X. To evaluate the second summand of this Fréchet derivative we used that σ and ω̄ commute, which holds by the equality condition for Araki–Lieb–Thirring. We thus conclude that ρ = σ ω̄^{1/(α−1)}, which implies that ρσ = σρ. ∎
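The strict gap of Theorem 6 is visible numerically for non-commuting states (the example states, the grid search, and the names below are ours):

```python
import numpy as np

def _powm(a, p):
    vals, vecs = np.linalg.eigh(a)
    return (vecs * vals ** p) @ vecs.conj().T

rho = np.array([[0.5, 0.4], [0.4, 0.5]])   # non-commuting full-rank qubits
sigma = np.diag([0.9, 0.1])
alpha = 2.0                                 # a value in (1, infinity)

e = (1.0 - alpha) / (2.0 * alpha)
x = _powm(sigma, e) @ rho @ _powm(sigma, e)
d_sandwiched = float(np.log(np.trace(_powm(x, alpha)).real) / (alpha - 1.0))

best = -np.inf   # grid search over rank-1 projective qubit measurements
for t in np.linspace(0.0, np.pi, 2000):
    v = np.array([np.cos(t), np.sin(t)])
    w = np.array([-np.sin(t), np.cos(t)])
    p = np.array([v @ rho @ v, w @ rho @ w])
    q = np.array([v @ sigma @ v, w @ sigma @ w])
    best = max(best, float(np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)))
```

The best projective measurement stays strictly below the sandwiched value, in line with Theorem 6.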
IV Violation of Data-Processing for α ∈ (0, 1/2)
As a further application of the variational characterization of the measured Rényi relative entropy, we can show that data-processing for the sandwiched Rényi relative entropy fails for α ∈ (0, 1/2). (Numerical evidence pointed to the fact that data-processing does not hold in this regime [martinthesis].)
Theorem 7.
Let ρ ∈ 𝒮 with ρ > 0 and σ ∈ 𝒫 with σ > 0, and ρσ ≠ σρ. For α ∈ (0, 1/2), we have D_α^M(ρ‖σ) > D̃_α(ρ‖σ).

In particular, there exists a rank-1 projective measurement that achieves a value strictly larger than D̃_α(ρ‖σ) and thus violates the data-processing inequality.
Proof.
First note that ρ > 0 and σ > 0 imply that the two states are not orthogonal and thus both quantities are finite. For α ∈ (0, 1/2) the formulas (58) and (59) still hold, with minima in place of maxima. However, in contrast to the proof of Theorem 6 we have α/(α−1) ∈ (−1, 0). Hence, we find by the Araki–Lieb–Thirring inequality (57) that

(63) Tr( ( σ^{(α−1)/2α} ω σ^{(α−1)/2α} )^{α/(α−1)} ) ≥ Tr( σ^{1/2} ω^{α/(α−1)} σ^{1/2} ).

As in the proof of Theorem 6 we have equality if and only if ρσ = σρ. This implies the claim. ∎
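The violation of data-processing below α = 1/2 can be exhibited concretely (our example states and grid search; any sufficiently fine grid over rank-1 projective qubit measurements finds a measurement whose classical Rényi divergence exceeds the sandwiched quantum value):

```python
import numpy as np

def _powm(a, p):
    vals, vecs = np.linalg.eigh(a)
    return (vecs * vals ** p) @ vecs.conj().T

rho = np.array([[0.5, 0.4], [0.4, 0.5]])   # non-commuting full-rank qubits
sigma = np.diag([0.9, 0.1])
alpha = 0.25                                # below the threshold 1/2

e = (1.0 - alpha) / (2.0 * alpha)
x = _powm(sigma, e) @ rho @ _powm(sigma, e)
d_sandwiched = float(np.log(np.trace(_powm(x, alpha)).real) / (alpha - 1.0))

best = -np.inf   # best rank-1 projective measurement found by grid search
for t in np.linspace(0.0, np.pi, 2000):
    v = np.array([np.cos(t), np.sin(t)])
    w = np.array([-np.sin(t), np.cos(t)])
    p = np.array([v @ rho @ v, w @ rho @ w])
    q = np.array([v @ sigma @ v, w @ sigma @ w])
    best = max(best, float(np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)))

# best > d_sandwiched: the measurement (a channel) increased the
# divergence, so data-processing fails for this alpha.
```

Here the optimal measurement basis lies between the eigenbases of ρ and σ, and the measured value strictly exceeds the sandwiched one.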
V Exploiting Variational Formulas
V.1 Some Optimization Problems in Quantum Information
The variational characterizations of the relative entropy (39)–(40), the sandwiched Rényi relative entropy (51)–(52), and their measured counterparts (Lemma 1 and Lemma 3) can be used to derive properties of various entropic quantities that appear in quantum information theory. We are interested in operational quantities of the form

(64) D^𝕊(ρ) := inf_{σ∈𝕊} D(ρ‖σ),

where D stands for any relative entropy, measured relative entropy, or Rényi variant thereof, and 𝕊 denotes some convex, compact set of states.
where stands for any relative entropy, measured relative entropy, or Rényi variant thereof, and denotes some convex, compact set of states. For Umegaki’s relative entropy , prominent examples for include the set of

separable states, giving rise to the relative entropy of entanglement vedral98 ().

positive partial transpose states, leading to the Rains bound on entanglement distillation rains01 ().

nondistillable states, leading to bounds on entanglement distillation vedral99 ().

quantum Markov states, leading to insights about the robustness properties of these states linden08 ().

locally recoverable states, leading to bounds on the quantum conditional mutual information fawzirenner14 (); seshadreesan14 (); brandao14 ().

extendible states, leading to bounds on squashed entanglement liwinter14 ().
Other examples are conditional Rényi entropies, which are defined by optimizing the sandwiched Rényi relative entropy over a convex set of product states with a fixed marginal; see, e.g., [tomamichel13].
A central question is what properties of the underlying relative entropy translate to properties of the induced measure D^𝕊. For example, all the relative entropies discussed in this paper are superadditive on tensor product states in the sense that

(65) D( ρ₁ ⊗ ρ₂ ‖ σ₁ ⊗ σ₂ ) ≥ D(ρ₁‖σ₁) + D(ρ₂‖σ₂).

We might then ask if we also have

(66) D^𝕊(ρ₁ ⊗ ρ₂) ≥ D^𝕊(ρ₁) + D^𝕊(ρ₂).
To study questions like this we propose to make use of the variational characterizations, which take the generic form

(67) D^𝕊(ρ) = inf_{σ∈𝕊} sup_{ω>0} f(σ, ω) = sup_{ω>0} inf_{σ∈𝕊} f(σ, ω),

where f denotes the objective function of the corresponding variational expression and we made use of Sion's minimax theorem [sion58] for the last equality. We note that the conditions of the minimax theorem are often fulfilled. The minimization over σ ∈ 𝕊 then typically simplifies and is a convex or even semidefinite optimization. (As an example, for the measured relative entropies the objective function becomes linear in σ.) We can then use strong duality of convex optimization to rewrite this minimization as a maximization problem [boyd04]:

(68) inf_{σ∈𝕊} f(σ, ω) = sup_λ g(λ, ω),

where g denotes the associated Lagrange dual function. This leads to the expression