On Variational Expressions for Quantum Relative Entropies

Mario Berta Department of Computing, Imperial College London Institute for Quantum Information and Matter, California Institute of Technology    Omar Fawzi Laboratoire de l’Informatique du Parallélisme, École Normale Supérieure de Lyon    Marco Tomamichel Centre for Quantum Software and Information, University of Technology Sydney
Abstract

Distance measures between quantum states like the trace distance and the fidelity can naturally be defined by optimizing a classical distance measure over all measurement statistics that can be obtained from the respective quantum states. In contrast, Petz showed that the measured relative entropy, defined as a maximization of the Kullback-Leibler divergence over projective measurement statistics, is strictly smaller than Umegaki's quantum relative entropy whenever the states do not commute. We extend this result in two ways. First, we show that Petz' conclusion remains true if we allow general positive operator valued measures. Second, we extend the result to Rényi relative entropies and show that for non-commuting states the sandwiched Rényi relative entropy is strictly larger than the measured Rényi relative entropy for $\alpha \in (\frac12, 1) \cup (1, \infty)$, and strictly smaller for $\alpha \in (0, \frac12)$. The latter statement provides counterexamples for the data-processing inequality of the sandwiched Rényi relative entropy for $\alpha < \frac12$. Our main tool is a new variational expression for the measured Rényi relative entropy, which we further exploit to show that certain lower bounds on quantum conditional mutual information are superadditive.

Keywords: Quantum entropy, measured relative entropy, relative entropy of recovery, additivity in quantum information theory, operator Jensen inequality, convex optimization.

Mathematics Subject Classifications (2010): 94A17, 81Q99, 15A45.

I Measured Relative Entropy

The relative entropy is the basic concept underlying various information measures like entropy, conditional entropy and mutual information. A thorough understanding of its quantum generalization is thus of preeminent importance in quantum information theory. We start by considering measured relative entropy, which is defined as a maximization of the Kullback-Leibler divergence over all measurement statistics that are attainable from two quantum states.

For a positive measure $Q$ on a finite set $\mathcal{X}$ and a probability measure $P$ on $\mathcal{X}$ that is absolutely continuous with respect to $Q$, denoted $P \ll Q$, the relative entropy or Kullback-Leibler divergence kullback51 () is defined as

 D(P\|Q) := \sum_{x\in\mathcal{X}} P(x)\,\log\frac{P(x)}{Q(x)}, (1)

where we understand $P(x)\log\frac{P(x)}{Q(x)} = 0$ whenever $P(x) = 0$. By continuity we define it as $+\infty$ if $P \not\ll Q$. (We use $\log$ to denote the natural logarithm.)
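For concreteness, the definition (1) and its conventions can be evaluated directly; the following sketch is our own illustration (the function name and example distributions are not from the paper), using natural logarithms as in the text.

```python
import math

def kl_divergence(P, Q):
    """Kullback-Leibler divergence D(P||Q) in nats, with the conventions
    0 * log(0/q) = 0 and D = +inf if P is not absolutely continuous
    with respect to Q (i.e. some P(x) > 0 while Q(x) = 0)."""
    div = 0.0
    for p, q in zip(P, Q):
        if p == 0.0:
            continue  # convention: 0 * log(0/q) = 0
        if q == 0.0:
            return math.inf  # P not absolutely continuous w.r.t. Q
        div += p * math.log(p / q)
    return div

P = [0.5, 0.5, 0.0]
Q = [0.25, 0.25, 0.5]  # a positive measure on the same set
print(kl_divergence(P, Q))  # ≈ 0.6931 (log 2)
```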

To extend this concept to quantum systems, Donald donald86 () as well as Hiai and Petz hiai91 () studied measured relative entropy. In the following we restrict ourselves to a $d$-dimensional Hilbert space for some $d \in \mathbb{N}$. Let us denote the set of positive semidefinite operators acting on this space by $\mathcal{P}$ and the subset of density operators (with unit trace) by $\mathcal{S}$. For a density operator $\rho \in \mathcal{S}$ and $\sigma \in \mathcal{P}$, we define two variants of measured relative entropy. The general measured relative entropy is defined as

 D^M(\rho\|\sigma) := \sup_{(\mathcal{X},M)} D(P_{\rho,M}\,\|\,P_{\sigma,M}), (2)

where the optimization is over finite sets $\mathcal{X}$ and positive operator valued measures (POVMs) $M$ on $\mathcal{X}$. (More formally, $M$ is a map from $\mathcal{X}$ to positive semidefinite operators and satisfies $\sum_{x\in\mathcal{X}} M(x) = 1$, whereas $P_{\rho,M}$ is a measure on $\mathcal{X}$ defined via the relation $P_{\rho,M}(x) = \mathrm{tr}[M(x)\rho]$ for any $x \in \mathcal{X}$.) At first sight this definition seems cumbersome because we cannot restrict the size of the set $\mathcal{X}$ that we optimize over. Therefore, following the works donald86 (); petz86b (); hiai91 (), let us also consider the following projectively measured relative entropy, which is defined as

 D^P(\rho\|\sigma) := \sup_{\{P_i\}_{i=1}^d} \left\{ \sum_{i=1}^d \mathrm{tr}[P_i\rho]\,\log\frac{\mathrm{tr}[P_i\rho]}{\mathrm{tr}[P_i\sigma]} \right\}, (3)

where the maximization is over all sets $\{P_i\}_{i=1}^d$ of mutually orthogonal projectors and we spelled out the Kullback-Leibler divergence for discrete measures. Note that without loss of generality we can assume that these projectors are rank-$1$, as any coarse graining of the measurement outcomes can only reduce the relative entropy due to its data-processing inequality lindblad75 (); uhlmann77 (). Furthermore, the quantity is finite and the supremum is achieved whenever $\rho \ll \sigma$, which here denotes that the support of $\rho$ is contained in the support of $\sigma$. (To verify this, recall that the rank-$1$ projectors form a compact set and the divergence is lower semi-continuous.)

The first of the following two variational expressions for the (projectively) measured relative entropy is due to Petz petzbook08 (). Note that the second objective function is concave in $\omega$ so that the optimization problem has a particularly appealing form.

Lemma 1.

For $\rho \in \mathcal{S}$ and $\sigma \in \mathcal{P}$ non-zero, the following identities hold:

 D^P(\rho\|\sigma) = \sup_{\omega>0}\ \mathrm{tr}[\rho\log\omega] - \log\mathrm{tr}[\sigma\omega] = \sup_{\omega>0}\ \mathrm{tr}[\rho\log\omega] + 1 - \mathrm{tr}[\sigma\omega]. (4)

Moreover, the suprema are achieved when $\rho$ and $\sigma$ are positive definite operators, i.e. $\rho > 0$ and $\sigma > 0$.

Proof.

If $\rho \not\ll \sigma$ then the two expressions in the suprema of (4) are unbounded, as expected. We now assume that $\rho \ll \sigma$. Let us consider the second expression in (4). We write the supremum over $\omega > 0$ as two suprema over $\{P_i\}_{i=1}^d$ and $\{\omega_i\}_{i=1}^d$ with $\omega_i > 0$, where the $\omega_i$ are the eigenvalues of $\omega$ corresponding to the eigenvectors given by rank-$1$ projectors $P_i$. Using the fact that $\omega = \sum_i \omega_i P_i$, we find

 \sup_{\omega>0}\ \mathrm{tr}[\rho\log\omega] + 1 - \mathrm{tr}[\sigma\omega] = \sup_{\{P_i\}_{i=1}^d} \sup_{\{\omega_i\}_{i=1}^d} \sum_{i=1}^d \mathrm{tr}[P_i\rho]\,(\log\omega_i + 1) - \mathrm{tr}[P_i\sigma]\,\omega_i. (5)

For $i$ such that $\mathrm{tr}[P_i\sigma] = 0$, we also have $\mathrm{tr}[P_i\rho] = 0$ since $\rho \ll \sigma$, and thus the corresponding term is zero. When $\mathrm{tr}[P_i\sigma] > 0$, let us first consider $\mathrm{tr}[P_i\rho] = 0$. In this case, the supremum of the $i$-th term is achieved in the limit $\omega_i \to 0$. Now in the case $\mathrm{tr}[P_i\rho] > 0$ (which is the only remaining case), observe that the expression is concave in $\omega_i$, so the inner supremum is achieved by the local maximum at $\omega_i = \mathrm{tr}[P_i\rho]/\mathrm{tr}[P_i\sigma]$. Plugging this into (5), we find

 \sup_{\{P_i\}_{i=1}^d} \sum_{i=1}^d \mathrm{tr}[P_i\rho]\,\log\frac{\mathrm{tr}[P_i\rho]}{\mathrm{tr}[P_i\sigma]} + \mathrm{tr}[P_i\rho] - \mathrm{tr}[P_i\rho] = \sup_{\{P_i\}_{i=1}^d} \sum_{i=1}^d \mathrm{tr}[P_i\rho]\,\log\frac{\mathrm{tr}[P_i\rho]}{\mathrm{tr}[P_i\sigma]}. (6)

This is the expression for the measured relative entropy in (3). The remaining supremum is achieved because the set of rank-1 projectors is compact and the divergence is lower semi-continuous.

It remains to show that the two variational expressions in (4) are equivalent. We have $\log t \le t - 1$ for all $t > 0$ and, thus, $-\log\mathrm{tr}[\sigma\omega] \ge 1 - \mathrm{tr}[\sigma\omega]$ for all $\omega > 0$. This yields

 \sup_{\omega>0}\ \mathrm{tr}[\rho\log\omega] - \log\mathrm{tr}[\sigma\omega] \ge \sup_{\omega>0}\ \mathrm{tr}[\rho\log\omega] + 1 - \mathrm{tr}[\sigma\omega]. (7)

Now note that the expression on the left-hand side is invariant under the substitution $\omega \mapsto \gamma\omega$ for $\gamma > 0$. Hence, as $\mathrm{tr}[\sigma\omega] > 0$ for $\omega > 0$ and non-zero $\sigma$, we can add the normalization constraint $\mathrm{tr}[\sigma\omega] = 1$ and we have

 \sup_{\omega>0}\ \mathrm{tr}[\rho\log\omega] - \log\mathrm{tr}[\sigma\omega] = \sup_{\omega>0,\,\mathrm{tr}[\sigma\omega]=1}\ \mathrm{tr}[\rho\log\omega] - \log\mathrm{tr}[\sigma\omega] (8)
 \le \sup_{\omega>0}\ \mathrm{tr}[\rho\log\omega] + 1 - \mathrm{tr}[\sigma\omega], (9)

where we used that $\log\mathrm{tr}[\sigma\omega] = 0 = 1 - \mathrm{tr}[\sigma\omega]$ when $\mathrm{tr}[\sigma\omega] = 1$. ∎
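As a quick numerical sanity check of Lemma 1 (our own sketch, not from the paper): for commuting full-rank states the projectively measured relative entropy reduces to the classical divergence of the eigenvalues, the concave objective never exceeds this value over diagonal $\omega$, and equality holds at the stationary point $\omega_i = \mathrm{tr}[P_i\rho]/\mathrm{tr}[P_i\sigma]$ identified in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)

# Commuting (diagonal) full-rank qutrit states.
p = np.array([0.5, 0.3, 0.2])   # eigenvalues of rho
q = np.array([0.2, 0.3, 0.5])   # eigenvalues of sigma

D = float(np.sum(p * np.log(p / q)))  # D_P(rho||sigma) in the commuting case

def objective(w):
    """tr[rho log w] + 1 - tr[sigma w] for diagonal w > 0."""
    return float(np.sum(p * np.log(w)) + 1.0 - np.sum(q * w))

w_star = p / q                   # stationary point from the proof of Lemma 1
assert abs(objective(w_star) - D) < 1e-12

# Random feasible points never beat the stationary point (concavity).
for _ in range(1000):
    w = rng.uniform(0.01, 10.0, size=3)
    assert objective(w) <= D + 1e-12
print(D)  # ≈ 0.2749
```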

Using this variational expression, we are able to answer a question left open by Donald (donald86, Sec. 3) as well as Hiai and Petz (hiai91, Sec. 1), namely whether the two definitions of measured relative entropy are equal.

Theorem 2.

For $\rho \in \mathcal{S}$ and $\sigma \in \mathcal{P}$, we have $D^P(\rho\|\sigma) = D^M(\rho\|\sigma)$.

Proof.

The direction '$\le$' holds trivially. Moreover, if $\rho \not\ll \sigma$, we can choose $P$ to be a rank-$1$ projector such that $\mathrm{tr}[P\rho] > 0$ and $\mathrm{tr}[P\sigma] = 0$, and thus $D^P(\rho\|\sigma) = D^M(\rho\|\sigma) = +\infty$.

It remains to show the direction '$\ge$' when $\rho \ll \sigma$ holds. Let $M$ be a POVM on a finite set $\mathcal{X}$. Recall that the distribution $P_{\rho,M}$ is defined by $P_{\rho,M}(x) = \mathrm{tr}[M(x)\rho]$. Introducing $\bar{\mathcal{X}} := \{x \in \mathcal{X} : P_{\sigma,M}(x) > 0\}$, we can write

 D(P_{\rho,M}\|P_{\sigma,M}) = \sum_{x\in\bar{\mathcal{X}}} P_{\rho,M}(x)\,\log\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)} (10)
 = \mathrm{tr}\Big[\rho \sum_{x\in\bar{\mathcal{X}}} M(x)\,\log\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big] (11)
 = \mathrm{tr}\Big[\rho \sum_{x\in\bar{\mathcal{X}}} \sqrt{M(x)}\,\log\Big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\cdot 1\Big)\sqrt{M(x)}\Big]. (12)

Now observe that for any $x \in \bar{\mathcal{X}}$, the spectrum of the operator $\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\cdot 1$ is included in $[0,\infty)$. As a result, we can apply the operator Jensen inequality for the function $\log$, which is operator concave on $(0,\infty)$, and get

 D(P_{\rho,M}\|P_{\sigma,M}) \le \mathrm{tr}\Big[\rho\,\log\Big(\sum_{x\in\bar{\mathcal{X}}} M(x)\,\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big)\Big]. (13)

Now we simply choose

 \omega = \sum_{x\in\bar{\mathcal{X}}} M(x)\,\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)} \quad\text{so that}\quad \mathrm{tr}[\sigma\omega] = \sum_{x\in\bar{\mathcal{X}}} P_{\sigma,M}(x)\,\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)} = \sum_{x\in\mathcal{X}} P_{\rho,M}(x) = 1, (14)

which allows us to further bound (13) by $\mathrm{tr}[\rho\log\omega] + 1 - \mathrm{tr}[\sigma\omega]$. Comparing this with the variational expression for the measured relative entropy in Lemma 1 yields the desired inequality. ∎

Hence, the measured relative entropy $D^M(\rho\|\sigma)$ achieves its maximum for projective rank-$1$ measurements and can be evaluated using the concave optimization problem in Lemma 1.
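The relation between measured and quantum quantities can be probed numerically (our own illustrative sketch, not from the paper): the outcome statistics of any POVM yield a classical divergence no larger than Umegaki's relative entropy $\mathrm{tr}[\rho(\log\rho - \log\sigma)]$, which is discussed in Section III. Here we use a random three-outcome qubit POVM; the construction of the random states and POVM is our own.

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(1)

def rand_state(d):
    """Random full-rank density matrix."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = A @ A.conj().T + 0.1 * np.eye(d)
    return R / np.trace(R).real

rho, sigma = rand_state(2), rand_state(2)

# Umegaki relative entropy D(rho||sigma) = tr[rho (log rho - log sigma)].
D = np.trace(rho @ (logm(rho) - logm(sigma))).real

def random_povm(n, d):
    """Normalize n random positive operators so they sum to the identity."""
    ops = []
    for _ in range(n):
        A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        ops.append(A @ A.conj().T)
    S = sum(ops)
    vals, vecs = np.linalg.eigh(S)
    S_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.conj().T
    return [S_inv_half @ O @ S_inv_half for O in ops]

M = random_povm(3, 2)
P = np.array([np.trace(Mx @ rho).real for Mx in M])
Q = np.array([np.trace(Mx @ sigma).real for Mx in M])
D_meas = float(np.sum(P * np.log(P / Q)))
assert D_meas <= D + 1e-9  # measured relative entropy never exceeds Umegaki's
```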

II Measured Rényi Relative Entropy

Here we extend the results of the previous section to the Rényi divergence. Using the same notation as in the previous section, for $\alpha \in (0,1)\cup(1,\infty)$ we define the Rényi divergence renyi61 () as

 D_\alpha(P\|Q) := \frac{1}{\alpha-1}\,\log \sum_{x\in\mathcal{X}} P(x)\Big(\frac{P(x)}{Q(x)}\Big)^{\alpha-1} (15)

if $\alpha \in (0,1)$ or $P \ll Q$, and as $+\infty$ if $\alpha > 1$ and $P \not\ll Q$. For $\alpha \in (0,1)$ we rewrite the sum as

 \sum_{x\in\mathcal{X}} P(x)^{\alpha}\,Q(x)^{1-\alpha}. (16)

Hence we see that absolute continuity is not necessary to keep $D_\alpha(P\|Q)$ finite for $\alpha < 1$. However, the Rényi divergence instead diverges to $+\infty$ when $P$ and $Q$ are orthogonal.¹ It is well known that the Rényi divergence converges to the Kullback-Leibler divergence when $\alpha \to 1$ and we thus set $D_1 := D$. Moreover, in the limit $\alpha \to \infty$ we find the max-divergence $D_\infty(P\|Q) = \log\max_{x\in\mathcal{X}} \frac{P(x)}{Q(x)}$.

¹ $P$ and $Q$ are orthogonal, denoted $P \perp Q$, if there exists an $A \subseteq \mathcal{X}$ such that $P(A) = 0$ and $Q(\mathcal{X}\setminus A) = 0$.
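The classical definition (15), the rewriting (16), and the $\alpha \to 1$ limit can be sketched as follows (our own code; the function name is illustrative):

```python
import math

def renyi_divergence(P, Q, alpha):
    """Classical Rényi divergence of order alpha, in nats."""
    assert alpha > 0 and alpha != 1
    if alpha > 1 and any(p > 0 and q == 0 for p, q in zip(P, Q)):
        return math.inf          # P not absolutely continuous w.r.t. Q
    # rewriting (16): stays finite for alpha < 1 without absolute continuity
    s = sum(p ** alpha * q ** (1 - alpha) for p, q in zip(P, Q) if p > 0)
    if s == 0.0:
        return math.inf          # P and Q orthogonal: divergence is +infinity
    return math.log(s) / (alpha - 1)

P, Q = [0.5, 0.5], [0.9, 0.1]
kl = sum(p * math.log(p / q) for p, q in zip(P, Q))
# alpha -> 1 recovers the Kullback-Leibler divergence
assert abs(renyi_divergence(P, Q, 1.000001) - kl) < 1e-3
```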

Let us now define the measured Rényi relative entropy as before, namely

 D^M_\alpha(\rho\|\sigma) := \sup_{(\mathcal{X},M)} D_\alpha(P_{\rho,M}\,\|\,P_{\sigma,M}). (17)

We will later show that this is equivalent to the following projectively measured Rényi relative entropy, which we define here for $\alpha \in (1,\infty)$ as

 D^P_\alpha(\rho\|\sigma) := \frac{1}{\alpha-1}\,\log Q^P_\alpha(\rho\|\sigma), \quad\text{with}\quad Q^P_\alpha(\rho\|\sigma) := \sup_{\{P_i\}_{i=1}^d}\Big\{\sum_i \mathrm{tr}[P_i\rho]^{\alpha}\,\mathrm{tr}[P_i\sigma]^{1-\alpha}\Big\}, (18)

and analogously for $\alpha \in (0,1)$ with

 Q^P_\alpha(\rho\|\sigma) := \min_{\{P_i\}_{i=1}^d}\Big\{\sum_i \mathrm{tr}[P_i\rho]^{\alpha}\,\mathrm{tr}[P_i\sigma]^{1-\alpha}\Big\}. (19)

Note that the supremum in (18) is achieved and $D^P_\alpha(\rho\|\sigma)$ is finite whenever $\rho \ll \sigma$. Similarly, the minimum in (19) is non-zero and $D^P_\alpha(\rho\|\sigma)$ is finite whenever $\rho \not\perp \sigma$, i.e. when the two states are not orthogonal.

Next we give variational expressions for $Q^P_\alpha$ similar to the variational characterization of the measured relative entropy in Lemma 1.

Lemma 3.

For $\rho \in \mathcal{S}$, $\sigma \in \mathcal{P}$ and $\alpha \in (0,1)\cup(1,\infty)$, the following identities hold:

 Q^P_\alpha(\rho\|\sigma) = \begin{cases} \inf_{\omega>0}\ \alpha\,\mathrm{tr}[\rho\omega] + (1-\alpha)\,\mathrm{tr}\big[\sigma\omega^{\frac{\alpha}{\alpha-1}}\big] & \text{for } \alpha\in(0,\frac12) \\ \inf_{\omega>0}\ \alpha\,\mathrm{tr}\big[\rho\omega^{1-\frac{1}{\alpha}}\big] + (1-\alpha)\,\mathrm{tr}[\sigma\omega] & \text{for } \alpha\in[\frac12,1) \\ \sup_{\omega>0}\ \alpha\,\mathrm{tr}\big[\rho\omega^{1-\frac{1}{\alpha}}\big] + (1-\alpha)\,\mathrm{tr}[\sigma\omega] & \text{for } \alpha\in(1,\infty) \end{cases} (20)
 = \begin{cases} \inf_{\omega>0}\ \mathrm{tr}[\rho\omega]^{\alpha}\,\mathrm{tr}\big[\sigma\omega^{\frac{\alpha}{\alpha-1}}\big]^{1-\alpha} & \text{for } \alpha\in(0,1) \\ \sup_{\omega>0}\ \mathrm{tr}\big[\rho\omega^{1-\frac{1}{\alpha}}\big]^{\alpha}\,\mathrm{tr}[\sigma\omega]^{1-\alpha} & \text{for } \alpha\in(1,\infty). \end{cases} (21)

The infima and suprema are achieved when $\rho > 0$ and $\sigma > 0$.

The expressions (21) can be seen as a generalization of Alberti's theorem alberti83 () for the fidelity (which corresponds to $\alpha = \frac12$) to general $\alpha$.

Proof.

We first show the identity (20). Let us discuss the case $\alpha \in (0,1)$ in detail. Note that the two expressions given for $\alpha \in (0,\frac12)$ and $\alpha \in [\frac12,1)$ are equivalent by the transformation $\omega \mapsto \omega^{\frac{\alpha}{\alpha-1}}$ (the reason for the different ways of writing is to see that the expressions are convex in $\omega$, which will be useful later, in particular in Theorem 4). We first write

 \inf_{\omega>0}\ \alpha\,\mathrm{tr}\big[\rho\omega^{1-\frac{1}{\alpha}}\big] + (1-\alpha)\,\mathrm{tr}[\sigma\omega] = \inf_{\{P_i\}_{i=1}^d}\ \inf_{\{\omega_i\}_{i=1}^d}\ \alpha\sum_i \omega_i^{1-\frac{1}{\alpha}}\,\mathrm{tr}[P_i\rho] + (1-\alpha)\sum_i \omega_i\,\mathrm{tr}[P_i\sigma]. (22)

Let $i$ be such that $\mathrm{tr}[P_i\rho]$ and $\mathrm{tr}[P_i\sigma]$ are both strictly positive (which is the case when $\rho > 0$ and $\sigma > 0$). Then a local (and thus global) minimum for $\omega_i$ is easily found at the point where

 \alpha\Big(1-\frac{1}{\alpha}\Big)\omega_i^{-\frac{1}{\alpha}}\,\mathrm{tr}[P_i\rho] + (1-\alpha)\,\mathrm{tr}[P_i\sigma] = 0 \implies \omega_i = \Big(\frac{\mathrm{tr}[P_i\rho]}{\mathrm{tr}[P_i\sigma]}\Big)^{\alpha}. (23)

If both $\mathrm{tr}[P_i\rho] = \mathrm{tr}[P_i\sigma] = 0$ we can choose $\omega_i$ arbitrarily. If only $\mathrm{tr}[P_i\rho] = 0$ the infimum is achieved in the limit $\omega_i \to 0$, and if only $\mathrm{tr}[P_i\sigma] = 0$ in the limit $\omega_i \to \infty$. In all these cases the infimum of the $i$-th term is zero. Furthermore, it is achieved for a finite, non-zero $\omega_i$ when $\mathrm{tr}[P_i\rho] > 0$ and $\mathrm{tr}[P_i\sigma] > 0$. Plugging this solution into the above expression yields

 \inf_{\{P_i\}_{i=1}^d} \sum_{i\in[d]} \mathrm{tr}[P_i\rho]^{\alpha}\,\mathrm{tr}[P_i\sigma]^{1-\alpha}. (24)

This infimum is always achieved since the set we optimize over is compact. Comparing this with the definition of $Q^P_\alpha$ yields the first equality.

For the case $\alpha \in (1,\infty)$, when $\rho \ll \sigma$, the proof is analogous to the previous argument. Otherwise, it is simple to see that the supremum is $+\infty$.

Now we show (21). Using (20) and the weighted arithmetic-geometric mean inequality we have

 Q^P_\alpha(\rho\|\sigma) \ge \inf_{\omega>0}\ \mathrm{tr}\big[\rho\omega^{1-\frac{1}{\alpha}}\big]^{\alpha}\,\mathrm{tr}[\sigma\omega]^{1-\alpha} \quad\text{for } \alpha\in(0,1). (25)

However, for any feasible $\omega$ in (20) and $\gamma > 0$, $\gamma\omega$ is also feasible, and choosing $\gamma = \big(\mathrm{tr}[\rho\omega^{1-\frac{1}{\alpha}}]/\mathrm{tr}[\sigma\omega]\big)^{\alpha}$ makes the arithmetic-geometric mean inequality tight, which shows that (20) cannot exceed (25). Similarly, by Bernoulli's inequality,

 Q^P_\alpha(\rho\|\sigma) \le \sup_{\omega>0}\ \mathrm{tr}\big[\rho\omega^{1-\frac{1}{\alpha}}\big]^{\alpha}\,\mathrm{tr}[\sigma\omega]^{1-\alpha} \quad\text{for } \alpha\in(1,\infty). (26)

And the same argument as above yields the equality. ∎
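The single-term optimization behind (23) can be checked directly with a scalar computation (our own sketch): for $a, b > 0$ and $\alpha \in (0,1)$, the map $\omega \mapsto \alpha a\,\omega^{1-1/\alpha} + (1-\alpha) b\,\omega$ is convex, is minimized at $\omega = (a/b)^{\alpha}$, and attains the value $a^{\alpha} b^{1-\alpha}$ there.

```python
import numpy as np

def term(a, b, alpha, w):
    """One term of the objective in (22): alpha*a*w^(1-1/alpha) + (1-alpha)*b*w."""
    return alpha * a * w ** (1 - 1 / alpha) + (1 - alpha) * b * w

a, b, alpha = 0.7, 0.4, 0.6
w_star = (a / b) ** alpha                  # stationary point (23)
val = term(a, b, alpha, w_star)
assert abs(val - a ** alpha * b ** (1 - alpha)) < 1e-12  # optimal value

# The stationary point is a global minimum over w > 0 (convexity).
for w in np.linspace(0.01, 20.0, 2000):
    assert term(a, b, alpha, w) >= val - 1e-12
```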

As for the measured relative entropy, the restriction to rank- projective measurements is in fact not restrictive at all.

Theorem 4.

For $\rho \in \mathcal{S}$, $\sigma \in \mathcal{P}$ and $\alpha \in (0,1)\cup(1,\infty)$, we have $D^P_\alpha(\rho\|\sigma) = D^M_\alpha(\rho\|\sigma)$.

Proof.

For $\alpha > 1$ we follow the steps of the proof of Theorem 2. Consider any finite set $\mathcal{X}$ and POVM $M$ with induced measures $P_{\rho,M}$ and $P_{\sigma,M}$. We can write

 D_\alpha(P_{\rho,M}\|P_{\sigma,M}) = \frac{1}{\alpha-1}\,\log \sum_{x\in\bar{\mathcal{X}}} P_{\rho,M}(x)^{\alpha}\,P_{\sigma,M}(x)^{1-\alpha}, (27)

where we can restrict the sum to $\bar{\mathcal{X}}$ (defined as in the proof of Theorem 2). We then find that the sum satisfies

 \sum_{x\in\bar{\mathcal{X}}} P_{\rho,M}(x)\Big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big)^{\alpha-1} \le \mathrm{tr}\Bigg[\rho\Bigg(\sum_{x\in\bar{\mathcal{X}}} M(x)\Big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big)^{\alpha}\Bigg)^{1-\frac{1}{\alpha}}\Bigg], (28)

where the inequality again follows by the operator Jensen inequality and the operator concavity of the function $t \mapsto t^{1-\frac{1}{\alpha}}$ on $[0,\infty)$ for $\alpha > 1$. Now we set

 \omega = \sum_{x\in\bar{\mathcal{X}}} M(x)\Big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big)^{\alpha} \quad\text{so that}\quad \mathrm{tr}[\sigma\omega] = \sum_{x\in\bar{\mathcal{X}}} P_{\rho,M}(x)\Big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big)^{\alpha-1}. (29)

Thus, we can bound

 \sum_{x\in\bar{\mathcal{X}}} P_{\rho,M}(x)\Big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big)^{\alpha-1} \le \alpha\,\mathrm{tr}\big[\rho\omega^{1-\frac{1}{\alpha}}\big] + (1-\alpha)\,\mathrm{tr}[\sigma\omega]. (30)

Comparing this with the variational expression in Lemma 3 yields the desired inequality.

For $\alpha < 1$, we use the same notation as in (27). We further distinguish the cases $\alpha \in (0,\frac12)$ and $\alpha \in [\frac12,1)$. For $\alpha \in (0,\frac12)$, we define

 \omega = \sum_{x\in\bar{\mathcal{X}}} M(x)\Big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big)^{\alpha-1}. (31)

We can then evaluate

 \mathrm{tr}\big[\sigma\omega^{\frac{\alpha}{\alpha-1}}\big] = \mathrm{tr}\Bigg[\sigma\Bigg(\sum_{x\in\bar{\mathcal{X}}} M(x)\Big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big)^{\alpha-1}\Bigg)^{\frac{\alpha}{\alpha-1}}\Bigg] (32)
 \le \mathrm{tr}\Bigg[\sigma \sum_{x\in\bar{\mathcal{X}}} M(x)\Big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big)^{\alpha}\Bigg] = \sum_{x\in\bar{\mathcal{X}}} P_{\rho,M}(x)^{\alpha}\,P_{\sigma,M}(x)^{1-\alpha}, (33)

where we used the operator convexity of $t \mapsto t^{\frac{\alpha}{\alpha-1}}$ on $(0,\infty)$ and the operator Jensen inequality. Moreover,

 \mathrm{tr}[\rho\omega] = \sum_{x\in\bar{\mathcal{X}}} P_{\rho,M}(x)^{\alpha}\,P_{\sigma,M}(x)^{1-\alpha}. (34)

As a result

 \sum_{x\in\bar{\mathcal{X}}} P_{\rho,M}(x)^{\alpha}\,P_{\sigma,M}(x)^{1-\alpha} \ge \alpha\,\mathrm{tr}[\rho\omega] + (1-\alpha)\,\mathrm{tr}\big[\sigma\omega^{\frac{\alpha}{\alpha-1}}\big]. (35)

Comparing this with the variational expression in Lemma 3 yields the desired inequality.

For $\alpha \in [\frac12,1)$ we choose $\omega = \sum_{x\in\bar{\mathcal{X}}} M(x)\big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\big)^{\alpha}$, so that

 \mathrm{tr}\big[\rho\omega^{1-\frac{1}{\alpha}}\big] \le \sum_{x\in\bar{\mathcal{X}}} P_{\rho,M}(x)\Big(\frac{P_{\rho,M}(x)}{P_{\sigma,M}(x)}\Big)^{\alpha-1} (36)
 \mathrm{tr}[\sigma\omega] = \sum_{x\in\bar{\mathcal{X}}} P_{\rho,M}(x)^{\alpha}\,P_{\sigma,M}(x)^{1-\alpha}, (37)

and once again conclude using the variational expression in Lemma 3. ∎

III Achievability of Relative Entropy

III.1 Umegaki's Relative Entropy

Here we compare the measured relative entropy to other notions of quantum relative entropy that have been investigated in the literature and have found operational significance in quantum information theory. Umegaki's quantum relative entropy umegaki62 () has found operational significance as the threshold rate for asymmetric binary quantum hypothesis testing hiai91 (). For $\rho \in \mathcal{S}$ and $\sigma \in \mathcal{P}$, it is defined as

 D(\rho\|\sigma) := \mathrm{tr}[\rho(\log\rho - \log\sigma)] \quad\text{if } \rho \ll \sigma \quad\text{and as } +\infty \text{ otherwise}. (38)

We recall the following variational expression by Petz petz88 () (see also kosaki86 () for another variational expression):

 D(\rho\|\sigma) = \sup_{\omega>0}\ \mathrm{tr}[\rho\log\omega] - \log\mathrm{tr}[\exp(\log\sigma + \log\omega)] (39)
 = \sup_{\omega>0}\ \mathrm{tr}[\rho\log\omega] + 1 - \mathrm{tr}[\exp(\log\sigma + \log\omega)]. (40)

By the data-processing inequality for the relative entropy lindblad75 (); uhlmann77 () and Theorem 2 we always have

 D^P(\rho\|\sigma) = D^M(\rho\|\sigma) \le D(\rho\|\sigma), (41)

and moreover Petz petz86b () showed that the inequality is strict if $\rho$ and $\sigma$ do not commute (for $\rho > 0$ and $\sigma > 0$). Theorem 2 strengthens this to show that the strict inequality persists even when we take a supremum over POVMs. In the following we give an alternative proof of Petz' result and then extend the argument to the Rényi relative entropy in Section III.2.

Proposition 5.

Let $\rho \in \mathcal{S}$ with $\rho > 0$ and $\sigma \in \mathcal{P}$ with $\sigma > 0$. Then, we have

 D^M(\rho\|\sigma) \le D(\rho\|\sigma) \quad\text{with equality if and only if}\quad [\rho,\sigma] = 0. (42)

Our proof relies on the Golden–Thompson inequality golden65 (); thompson65 (). It states that for two Hermitian matrices $X$ and $Y$, it holds that

 \mathrm{tr}[\exp(X+Y)] \le \mathrm{tr}[\exp(X)\exp(Y)] (43)

with equality if and only if $[X,Y] = 0$, as shown in so92 ().

Proof.

First, it is evident that equality holds if $[\rho,\sigma] = 0$, since then there exists a projective measurement that commutes with $\rho$ and $\sigma$ and thus does not affect the states. For the following, it is worth writing the variational expressions for the two quantities side by side. Namely, writing $H = \log\omega$, we have

 D(\rho\|\sigma) = 1 + \sup_{H}\ \mathrm{tr}[\rho H] - \mathrm{tr}[\exp(\log\sigma + H)] (44)
 D^M(\rho\|\sigma) = 1 + \max_{H}\ \mathrm{tr}[\rho H] - \mathrm{tr}[\sigma\exp(H)], (45)

where we optimize over all Hermitian operators $H$. Note that, according to Lemma 1, we can write a maximum for (45) because we are assuming $\rho > 0$ and $\sigma > 0$. The inequality in (42) can now be seen as a direct consequence of the Golden–Thompson inequality.

It remains to show that $D^M(\rho\|\sigma) = D(\rho\|\sigma)$ implies $[\rho,\sigma] = 0$. Let $H^*$ be any maximizer of the variational problem in (45). Observe now that the equality necessitates

 \mathrm{tr}[\exp(\log\sigma + H^*)] = \mathrm{tr}[\sigma\exp(H^*)], (46)

which holds if and only if $[\log\sigma, H^*] = 0$ by the equality condition in (43). Now define the function

 f(H) = \mathrm{tr}[\rho H] - \mathrm{tr}[\sigma\exp(H)], (47)

and since $H^*$ maximizes $f$, we must have for all Hermitian $\Delta$,

 0 = Df(H^*)[\Delta] = \mathrm{tr}[\rho\Delta] - \mathrm{tr}[\sigma\exp(H^*)\Delta]. (48)

To evaluate the second summand of this Fréchet derivative we used that $\sigma$ and $H^*$ commute. Since this holds for all $\Delta$ we must in fact have $\rho = \sigma\exp(H^*)$, which means that $[\rho,\sigma] = 0$, as desired. ∎
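The Golden–Thompson inequality (43) used above, together with its commuting equality case, is easy to probe numerically (our own sketch, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def rand_herm(d):
    """Random Hermitian matrix."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

for _ in range(100):
    X, Y = rand_herm(3), rand_herm(3)
    lhs = np.trace(expm(X + Y)).real
    rhs = np.trace(expm(X) @ expm(Y)).real
    assert lhs <= rhs + 1e-9      # Golden-Thompson inequality (43)

# Equality when [X, Y] = 0: take both diagonal.
X = np.diag(rng.normal(size=3))
Y = np.diag(rng.normal(size=3))
assert abs(np.trace(expm(X + Y)) - np.trace(expm(X) @ expm(Y))) < 1e-9
```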

In some sense this result tells us that some quantum correlations, as measured by the relative entropy, do not survive the measurement process. This fact appears in quantum information theory in various different guises, for example in the form of locking classical correlations in quantum states divincenzo04 (). (We also point to piani09 () for the use of measured relative entropy measures in quantum information theory.) Moreover, since Umegaki’s relative entropy is the smallest quantum generalization of the Kullback-Leibler divergence that is both additive and satisfies data-processing (see, e.g., (mybook, , Sec. 4.2.2)), the same conclusion can be drawn for any sensible quantum relative entropy. (An example being the quantum relative entropy introduced by Belavkin and Staszewski belavkin82 ().)

III.2 Sandwiched Rényi Relative Entropy

Next we consider a family of quantum Rényi relative entropies lennert13 (); wilde13 () that are commonly called sandwiched Rényi relative entropies and have found operational significance since they determine the strong converse exponent in asymmetric binary quantum hypothesis testing mosonyiogawa13 (). They are of particular interest here because they are, for $\alpha > 1$, the smallest quantum generalization of the Rényi divergence that is both additive and satisfies data-processing (mybook, Sec. 4.2.2). (Examples for other sensible quantum generalizations are the quantum Rényi relative entropy first studied by Petz petz86 () and the quantum divergences introduced by Matsumoto matsumoto14 ().)

For $\rho \in \mathcal{S}$ and $\sigma \in \mathcal{P}$, the sandwiched Rényi relative entropy of order $\alpha \in (0,1)\cup(1,\infty)$ is defined as

 D_\alpha(\rho\|\sigma) := \frac{1}{\alpha-1}\,\log Q_\alpha(\rho\|\sigma) \quad\text{with}\quad Q_\alpha(\rho\|\sigma) := \mathrm{tr}\Big[\Big(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\Big)^{\alpha}\Big], (49)

where the same considerations about finiteness as for the measured Rényi relative entropy apply. We also consider the limits $\alpha \to \infty$ and $\alpha \to 1$ of the above expression, for which we have lennert13 (),

 D_\infty(\rho\|\sigma) = \inf\{\lambda\in\mathbb{R} \,|\, \rho \le \exp(\lambda)\,\sigma\} \quad\text{and}\quad D_1(\rho\|\sigma) = D(\rho\|\sigma), (50)

respectively. We recall the following variational expression by Frank and Lieb frank13 ():

 Q_\alpha(\rho\|\sigma) = \begin{cases} \inf_{\omega>0}\ \alpha\,\mathrm{tr}[\rho\omega] + (1-\alpha)\,\mathrm{tr}\Big[\big(\omega^{\frac12}\sigma^{\frac{\alpha-1}{\alpha}}\omega^{\frac12}\big)^{\frac{\alpha}{\alpha-1}}\Big] & \text{for } \alpha\in(0,1) \\ \sup_{\omega>0}\ \alpha\,\mathrm{tr}[\rho\omega] + (1-\alpha)\,\mathrm{tr}\Big[\big(\omega^{\frac12}\sigma^{\frac{\alpha-1}{\alpha}}\omega^{\frac12}\big)^{\frac{\alpha}{\alpha-1}}\Big] & \text{for } \alpha\in(1,\infty). \end{cases} (51)

Alternatively, we can also write

 Q_\alpha(\rho\|\sigma) = \begin{cases} \inf_{\omega>0}\ \mathrm{tr}[\rho\omega]^{\alpha}\,\mathrm{tr}\Big[\big(\omega^{\frac12}\sigma^{\frac{\alpha-1}{\alpha}}\omega^{\frac12}\big)^{\frac{\alpha}{\alpha-1}}\Big]^{1-\alpha} & \text{for } \alpha\in(0,1) \\ \sup_{\omega>0}\ \mathrm{tr}[\rho\omega]^{\alpha}\,\mathrm{tr}\Big[\big(\omega^{\frac12}\sigma^{\frac{\alpha-1}{\alpha}}\omega^{\frac12}\big)^{\frac{\alpha}{\alpha-1}}\Big]^{1-\alpha} & \text{for } \alpha\in(1,\infty), \end{cases} (52)

where we have used the same arguments as in the proof of the second part of Lemma 3. By the data-processing inequality for the sandwiched Rényi relative entropy lennert13 (); frank13 (); beigi13 () we always have

 D^M_\alpha(\rho\|\sigma) \le D_\alpha(\rho\|\sigma) \quad\text{for } \alpha \ge \tfrac12. (53)

In the following we give an alternative proof of this fact and show that

 [\rho,\sigma] \ne 0 \implies D^M_\alpha(\rho\|\sigma) < D_\alpha(\rho\|\sigma) \quad\text{for } \alpha\in(\tfrac12,1)\cup(1,\infty). (54)

In contrast, at the boundaries it is known that $D^M_{1/2} = D_{1/2}$ and $D^M_\infty = D_\infty$ koenig08 (); fuchs96 (); mosonyiogawa13 (). (We refer to (mosonyiogawa13, App. A) for a detailed discussion.)
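Inequality (53) can be illustrated numerically (our own sketch; the random states and measurements are illustrative): for $\alpha = 2$ the sandwiched quantity (49) has a simple closed form, and the statistics of any rank-$1$ projective measurement yield a smaller classical Rényi divergence.

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_state(d):
    """Random full-rank density matrix."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = A @ A.conj().T + 0.1 * np.eye(d)
    return R / np.trace(R).real

def mat_pow(M, p):
    """Fractional power of a positive definite Hermitian matrix."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** p) @ vecs.conj().T

d, alpha = 3, 2.0
rho, sigma = rand_state(d), rand_state(d)

# Sandwiched Renyi relative entropy (49) for alpha = 2.
s = mat_pow(sigma, (1 - alpha) / (2 * alpha))
D_sand = np.log(np.trace(mat_pow(s @ rho @ s, alpha)).real) / (alpha - 1)

# Measured Renyi divergence for 100 random rank-1 projective measurements.
for _ in range(100):
    U = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))[0]
    P = np.array([(U[:, i].conj() @ rho @ U[:, i]).real for i in range(d)])
    Q = np.array([(U[:, i].conj() @ sigma @ U[:, i]).real for i in range(d)])
    D_meas = np.log(np.sum(P ** alpha * Q ** (1 - alpha))) / (alpha - 1)
    assert D_meas <= D_sand + 1e-9   # data-processing, cf. (53)
```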

Theorem 6.

Let $\rho \in \mathcal{S}$ with $\rho > 0$ and $\sigma \in \mathcal{P}$ with $\sigma > 0$. For $\alpha \in (\frac12,1)\cup(1,\infty)$, we have

 D^M_\alpha(\rho\|\sigma) \le D_\alpha(\rho\|\sigma) \quad\text{with equality if and only if}\quad [\rho,\sigma] = 0. (55)

The argument is similar to the proof of Proposition 5, but with the Golden–Thompson inequality replaced by the Araki–Lieb–Thirring inequality liebthirring05 (); araki90 (). It states that for $X, Y \ge 0$ we have

 \mathrm{tr}\big[(YXY)^r\big] \le \mathrm{tr}\big[Y^r X^r Y^r\big] \quad\text{for } r \ge 1, (56)
 \mathrm{tr}\big[(YXY)^r\big] \ge \mathrm{tr}\big[Y^r X^r Y^r\big] \quad\text{for } r \in [0,1], (57)

with equality if and only if $[X,Y] = 0$, except for $r = 1$, as shown in hiai94 ().

Proof.

We give the proof for $\alpha \in (\frac12,1)$ and note that the argument for $\alpha \in (1,\infty)$ is analogous. We have the following variational expressions from Lemma 3 and (51):

 Q_\alpha(\rho\|\sigma) = \inf_{\omega>0}\ \alpha\,\mathrm{tr}[\rho\omega] + (1-\alpha)\,\mathrm{tr}\Big[\big(\omega^{-\frac12}\sigma^{\frac{1-\alpha}{\alpha}}\omega^{-\frac12}\big)^{\frac{\alpha}{1-\alpha}}\Big] (58)
 Q^P_\alpha(\rho\|\sigma) = \min_{\omega>0}\ \alpha\,\mathrm{tr}[\rho\omega] + (1-\alpha)\,\mathrm{tr}\big[\sigma\omega^{\frac{\alpha}{\alpha-1}}\big], (59)

where the existence of the minima relies on the fact that both operators have full support. (Note also that these two expressions are in fact equivalent for commuting $\omega$ and $\sigma$.) Since $\frac{\alpha}{1-\alpha} \ge 1$ for $\alpha \in [\frac12,1)$, the inequality then follows immediately by the Araki–Lieb–Thirring inequality (56):

 Q^P_\alpha(\rho\|\sigma) \ge Q_\alpha(\rho\|\sigma) \implies D^M_\alpha(\rho\|\sigma) \le D_\alpha(\rho\|\sigma). (60)

Furthermore, if $[\rho,\sigma] = 0$ we have equality. To show that $D^M_\alpha(\rho\|\sigma) = D_\alpha(\rho\|\sigma)$ implies $[\rho,\sigma] = 0$, we define the function

 f_\alpha(\omega) = \alpha\,\mathrm{tr}[\rho\omega] + (1-\alpha)\,\mathrm{tr}\big[\sigma\omega^{\frac{\alpha}{\alpha-1}}\big]. (61)

For any minimizer $\omega^*_\alpha$ of the variational problem in (59), we have

 0 = Df_\alpha(\omega^*_\alpha)[\Delta] = \alpha\,\mathrm{tr}[\rho\Delta] - \alpha\,\mathrm{tr}\big[\sigma(\omega^*_\alpha)^{\frac{1}{\alpha-1}}\Delta\big], (62)

for all Hermitian $\Delta$. To evaluate the second summand of this Fréchet derivative we used that $\sigma$ and $\omega^*_\alpha$ commute, which holds by the equality condition for Araki–Lieb–Thirring. We thus conclude that $\rho = \sigma(\omega^*_\alpha)^{\frac{1}{\alpha-1}}$, which implies that $[\rho,\sigma] = 0$. ∎
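The two branches (56) and (57) of the Araki–Lieb–Thirring inequality can likewise be tested numerically (our own sketch; the states are normalized so that absolute tolerances are meaningful):

```python
import numpy as np

rng = np.random.default_rng(4)

def rand_psd(d):
    """Random positive semidefinite matrix, normalized to unit trace."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = A @ A.conj().T
    return R / np.trace(R).real

def mpow(M, p):
    """Fractional power of a Hermitian PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(np.abs(vals) ** p) @ vecs.conj().T

for _ in range(200):
    X, Y = rand_psd(3), rand_psd(3)
    for r in (0.3, 0.7, 1.5, 2.0):
        lhs = np.trace(mpow(Y @ X @ Y, r)).real
        rhs = np.trace(mpow(Y, r) @ mpow(X, r) @ mpow(Y, r)).real
        if r >= 1:
            assert lhs <= rhs + 1e-9    # branch (56)
        else:
            assert lhs >= rhs - 1e-9    # branch (57)
```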

IV Violation of Data-Processing for α < 1/2

As a further application of the variational characterization of the measured Rényi relative entropy, we can show that data-processing for the sandwiched Rényi relative entropy fails for $\alpha < \frac12$. (Numerical evidence pointed to the fact that data-processing does not hold in this regime martinthesis ().)

Theorem 7.

Let $\rho \in \mathcal{S}$ with $\rho > 0$ and $\sigma \in \mathcal{P}$ with $\sigma > 0$, and $[\rho,\sigma] \ne 0$. For $\alpha \in (0,\frac12)$, we have $D^P_\alpha(\rho\|\sigma) > D_\alpha(\rho\|\sigma)$.

In particular, there exists a rank-$1$ projective measurement whose outcome statistics achieve a classical Rényi divergence strictly larger than $D_\alpha(\rho\|\sigma)$, and thus the data-processing inequality is violated.

Proof.

First note that $\rho > 0$ implies that the two states are not orthogonal and thus both quantities are finite. For $\alpha \in (0,\frac12)$ the formulas (58) and (59) still hold. However, in contrast to the proof of Theorem 6, we have $\frac{\alpha}{1-\alpha} < 1$. Hence, we find by the Araki–Lieb–Thirring inequality (57) that

 Q^P_\alpha(\rho\|\sigma) \le Q_\alpha(\rho\|\sigma) \implies D^P_\alpha(\rho\|\sigma) \ge D_\alpha(\rho\|\sigma). (63)

As in the proof of Theorem 6, we have equality if and only if $[\rho,\sigma] = 0$. This implies the claim. ∎
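Theorem 7 can be made concrete with a small numerical search (our own sketch; the specific qubit states and the angle parametrization are illustrative, not from the paper). For real qubit states, scanning rank-$1$ projective measurements in the plane locates a measurement whose classical Rényi divergence exceeds the sandwiched one for $\alpha < \frac12$, in violation of data-processing.

```python
import numpy as np

def mat_pow(M, p):
    """Fractional power of a real symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** p) @ vecs.T

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

alpha = 0.3
sigma = np.diag([0.8, 0.2])
rho = rot(np.pi / 4) @ np.diag([0.9, 0.1]) @ rot(np.pi / 4).T  # [rho, sigma] != 0

# Sandwiched Renyi relative entropy (49).
s = mat_pow(sigma, (1 - alpha) / (2 * alpha))
D_sand = np.log(np.trace(mat_pow(s @ rho @ s, alpha))) / (alpha - 1)

# Scan rank-1 projective measurements {P(theta), 1 - P(theta)}.
best = -np.inf
for t in np.linspace(0, np.pi, 5000):
    v = np.array([np.cos(t), np.sin(t)])
    p, q = v @ rho @ v, v @ sigma @ v
    Qm = p ** alpha * q ** (1 - alpha) + (1 - p) ** alpha * (1 - q) ** (1 - alpha)
    best = max(best, np.log(Qm) / (alpha - 1))

assert best > D_sand   # a measurement strictly increases D_alpha: DPI fails
```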

V Exploiting Variational Formulas

V.1 Some Optimization Problems in Quantum Information

The variational characterizations of the relative entropy (39)–(40), the sandwiched Rényi relative entropy (51)–(52), and their measured counterparts (Lemma 1 and Lemma 3), can be used to derive properties of various entropic quantities that appear in quantum information theory. We are interested in operational quantities of the form

 M(\rho) := \min_{\sigma\in\mathcal{C}} \mathbb{D}(\rho\|\sigma), (64)

where $\mathbb{D}$ stands for any relative entropy, measured relative entropy, or Rényi variant thereof, and $\mathcal{C}$ denotes some convex, compact set of states. For Umegaki's relative entropy $D$, prominent examples for $\mathcal{C}$ include the set of

• separable states, giving rise to the relative entropy of entanglement vedral98 ().

• positive partial transpose states, leading to the Rains bound on entanglement distillation rains01 ().

• non-distillable states, leading to bounds on entanglement distillation vedral99 ().

• quantum Markov states, leading to insights about the robustness properties of these states linden08 ().

• locally recoverable states, leading to bounds on the quantum conditional mutual information fawzirenner14 (); seshadreesan14 (); brandao14 ().

• $k$-extendible states, leading to bounds on squashed entanglement liwinter14 ().

Other examples are conditional Rényi entropies which are defined by optimizing the sandwiched Rényi relative entropy over a convex set of product states with a fixed marginal, see, e.g., tomamichel13 ().
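As a toy instance of an optimization of the form just described (our own sketch, not an example from the paper): take the convex set to be the diagonal (classical) states. Then the minimizer of Umegaki's relative entropy is known to be the dephased state $\Delta(\rho)$, i.e. $\rho$ with its off-diagonal entries removed, and random diagonal states never do better.

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(5)

def rand_state(d):
    """Random full-rank density matrix."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = A @ A.conj().T + 0.1 * np.eye(d)
    return R / np.trace(R).real

def rel_ent(rho, sigma):
    """Umegaki relative entropy for full-rank states."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

d = 3
rho = rand_state(d)
delta = np.diag(np.diag(rho).real.astype(complex))  # dephased state Delta(rho)

M_rho = rel_ent(rho, delta)   # claimed minimum over diagonal states

for _ in range(500):
    w = rng.dirichlet(np.ones(d))                  # random diagonal state
    assert rel_ent(rho, np.diag(w.astype(complex))) >= M_rho - 1e-9
```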

A central question is which properties of the underlying relative entropy translate to properties of the induced measure $M$. For example, all the relative entropies discussed in this paper are superadditive on tensor product states in the sense that

 \mathbb{D}(\rho_1\otimes\rho_2\,\|\,\sigma_1\otimes\sigma_2) \ge \mathbb{D}(\rho_1\|\sigma_1) + \mathbb{D}(\rho_2\|\sigma_2). (65)

We might then ask if we also have

 \min_{\sigma_{12}\in\mathcal{C}_{12}} \mathbb{D}(\rho_1\otimes\rho_2\,\|\,\sigma_{12}) = M(\rho_1\otimes\rho_2) \stackrel{?}{\ge} M(\rho_1) + M(\rho_2) = \min_{\sigma_1\in\mathcal{C}_1} \mathbb{D}(\rho_1\|\sigma_1) + \min_{\sigma_2\in\mathcal{C}_2} \mathbb{D}(\rho_2\|\sigma_2). (66)

To study questions like this we propose to make use of the variational characterizations of the form

 \mathbb{D}(\rho\|\sigma) = \sup_{\omega>0} f(\rho,\sigma,\omega) \quad\text{in order to write}\quad M(\rho) = \min_{\sigma\in\mathcal{C}} \sup_{\omega>0} f(\rho,\sigma,\omega) = \sup_{\omega>0} \min_{\sigma\in\mathcal{C}} f(\rho,\sigma,\omega), (67)

where we made use of Sion's minimax theorem sion58 () for the last equality. We note that the conditions of the minimax theorem are often fulfilled. The minimization over $\sigma$ then typically simplifies and becomes a convex or even semidefinite optimization. (As an example, for the measured relative entropies the objective function becomes linear in $\sigma$.) We can then use strong duality of convex optimization to rewrite this minimization as a maximization problem boyd04 ():

 \min_{\sigma\in\mathcal{C}} f(\rho,\sigma,\omega) = \max_{\bar\sigma\in\bar{\mathcal{C}}} \bar f(\rho,\bar\sigma,\omega). (68)