# Minimum guesswork discrimination between quantum states

Weien Chen (chenweienn@gmail.com), Yongzhi Cao, Hanpin Wang, Yuan Feng (Yuan.Feng@uts.edu.au)
Centre for QCIS, University of Technology, Sydney, Australia
###### Abstract

Error probability is a popular and well-studied optimization criterion for discriminating non-orthogonal quantum states. It captures the threat from an adversary who can query the actual state only once. However, when the adversary is able to use a brute-force strategy to query the state, a discrimination measurement with minimum error probability does not necessarily minimize the number of queries needed to identify the actual state. In light of this, we take Massey's guesswork as the underlying optimization criterion and study the problem of minimum guesswork discrimination. We show that this problem can be reduced to a semidefinite programming problem. Necessary and sufficient conditions under which a measurement achieves minimum guesswork are presented. We also reveal the relation between minimum guesswork and minimum error probability. We show that the two criteria generally disagree with each other, except in the special case of two states. Both upper and lower information-theoretic bounds on minimum guesswork are given. For geometrically uniform quantum states, we provide sufficient conditions under which a measurement achieves minimum guesswork. Moreover, we give the necessary and sufficient condition under which making no measurement at all is the optimal strategy.

Keywords: quantum state discrimination, error probability, guesswork, brute-force strategy, information flow

## 1 Introduction

Since Helstrom's pioneering work on the quantum binary decision problem [1], quantum state discrimination has been extensively studied [2, 3, 1, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. The problem is usually described as a protocol between two parties, conventionally named Alice and Bob. Alice selects a quantum state from a set according to a probability distribution and gives it to Bob. We assume that Bob knows both the set of possible states and their associated probabilities. His aim is to identify the actual prepared state. To this end, Bob performs some quantum measurement on the received state in order to extract information about its index. This gives rise to an optimization problem with regard to Bob's choice of measurement. A number of criteria have been considered to concretize the meaning of this optimality [1, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20], among which error probability and the Shannon mutual information are two representatives. While the former has led to a research line known as minimum error discrimination (MED) [14, 15, 16, 17, 18, 19], the latter corresponds to the study of accessible information [21, 22, 7, 8, 9]. Interestingly, as an alternative to MED, an unambiguous (error-free) scheme of state discrimination has been proposed, which allows a certain fraction of inconclusive measurement outcomes [10, 11, 12].

We point out that quantum state discrimination can be seen as a special case of quantitative information flow (QIF) analysis, which has been an active topic in the security community during the last decades [23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. In QIF analysis, the aim is to quantify the amount of information leaked by a covert channel from a high-level entity, whose secret information (e.g., a password) is mathematically described as a random variable $X$ with alphabet $\mathcal{X}$ and an associated probability distribution $p$, to a low-level entity, whose partial information about $X$ is described as another random variable $Y$ with alphabet $\mathcal{Y}$. The correlation between $X$ and $Y$ is determined by the channel matrix of the covert channel. To put quantum state discrimination in the context of QIF analysis, we may view Alice as the high-level entity and Bob as the low-level entity. The only restriction is that the correlation between these two entities is governed by quantum mechanics: Alice encodes her classical secret messages into quantum states; Bob performs a measurement on Alice's prepared state to get information about $X$ and stores his measurement outcome in $Y$; the channel matrix is then given by the Born rule [33], $p(y \mid x) = \mathrm{Tr}(\rho_x \pi_y)$, where $\pi_y$ is the measurement operator corresponding to the outcome $y$.

In the literature of QIF analysis, researchers have proposed different figures of merit to quantify how successfully a low-level entity Bob can identify the secret value of $X$ given knowledge about $Y$, according to the different adversarial strategies which Bob may adopt. In particular, it is well known that error probability, guesswork, and the Shannon entropy deal with the one-shot strategy, the brute-force strategy, and the subset membership strategy, respectively, and thus play important and complementary roles in QIF analysis [34, 35, 36]. In the quantum setting, the one-shot strategy and the subset membership strategy have clearly been considered: error probability and the Shannon entropy have been widely studied in quantum information theory, and have led to a large amount of research on, besides MED and accessible information discussed above, quantum source coding [5, 37], quantum channel capacity [38, 39, 40], etc. However, to the best of our knowledge, no work has addressed the brute-force strategy in the context of quantum state discrimination.

The above observation motivates us to consider Massey's guesswork [41] as the optimization criterion in quantum state discrimination. We name the new problem minimum guesswork discrimination (MGD). In contrast to the MED scenario, where Bob has only one chance to ask Alice "is $X = x$?" for some $x$ chosen based on his measurement outcome, in this study Bob carries out multiple such queries until hitting Alice's secret message. Guesswork, the new criterion, quantifies the expected number of queries that Bob needs to make. We hope our preliminary step towards the study of the brute-force strategy in the quantum setting will initiate a sibling direction of MED.

The rest of this paper is organized as follows. In Section 2, we first review the classical guessing problem, then extend it to the quantum setting. We show that MGD can be reduced to a semidefinite programming (SDP) problem, and present necessary and sufficient conditions which must be satisfied by an optimal measurement to achieve minimum guesswork. Section 3 is devoted to the relation between the minimum error criterion and the minimum guesswork criterion. We provide both upper and lower information-theoretic bounds on minimum guesswork in Section 4, and sufficient conditions under which a measurement achieves minimum guesswork for geometrically uniform states in Section 5. In Section 6, we answer the question "when would making no measurement at all be the optimal strategy?" by a necessary and sufficient condition on the quantum ensemble. We discuss several other interesting issues worthy of consideration in Section 7 and conclude this work in Section 8.

## 2 Quantum guessing problem

We first review the classical guessing problem. Its quantum variant can then be formalized simply by instantiating the classical problem with quantum information and mechanics. Suppose that Alice has a discrete random variable $X$ with alphabet $\mathcal{X} = \{x_1, \dots, x_n\}$ and the associated probability distribution $p$. Bob, who knows both the alphabet and the distribution, aims to identify the true value of $X$ by repeatedly asking questions of the form "is $X = x$?" until getting the answer "yes". How many guesses is he expected to make? Massey [41] observed that Bob's optimal strategy for minimizing his work is to arrange his queries according to the non-increasing order of the probabilities $p(x_i)$. Formally, the guesswork of Bob is given by

$$G(X) \triangleq \sum_{i=1}^{n} \sigma(i)\, p(x_i), \qquad (1)$$

where $\sigma$ is a permutation on the index set $\{1, \dots, n\}$ such that $\sigma(i) < \sigma(j)$ implies $p(x_i) \geq p(x_j)$. Recall that a permutation on a set is just a one-to-one mapping from the set to itself. Here, $\sigma$ formally represents Bob's guessing strategy: he guesses $x_{\sigma^{-1}(i)}$ in his $i$th query. The guesswork quantifies the expected number of queries that Bob needs to find the actual value of $X$ when applying the strategy $\sigma$. The constraint on $\sigma$ ensures its optimality, in that any other permutation yields greater or equal guesswork.
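As a small illustration (not part of the original paper), the optimal strategy behind Eq.(1), sorting the probabilities in non-increasing order and guessing in that order, can be sketched as follows:

```python
import numpy as np

def guesswork(p):
    """Expected number of queries under the optimal guessing strategy:
    sort probabilities in non-increasing order and guess in that order."""
    q = np.sort(np.asarray(p, dtype=float))[::-1]  # optimal permutation
    return float(np.sum((np.arange(len(q)) + 1) * q))

# Example: for the distribution (1/2, 1/4, 1/8, 1/8),
# G = 1*(1/2) + 2*(1/4) + 3*(1/8) + 4*(1/8) = 1.875.
print(guesswork([0.5, 0.25, 0.125, 0.125]))  # -> 1.875
```

For a uniform distribution on $n$ values this gives $(n+1)/2$, as every guessing order is equally good.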

The above definition of guesswork has been generalized to a conditional version [42], which is more appealing in practice. In addition to Alice's alphabet and the prior distribution on it, Bob may possess some extra knowledge (or side information) $Y$ about $X$. In general, we assume a channel exists between Alice and Bob with input set $\mathcal{X}$ and output set $\mathcal{Y} = \{y_1, \dots, y_m\}$. The probabilistic behavior of this channel is characterized in the standard way, i.e., by conditional probabilities $p(y_j \mid x_i)$. Consequently, given some fixed input random variable $X$ of Alice, we can derive an output random variable $Y$ on Bob's side with the associated distribution obtained by

$$p(y_j) = \sum_{i=1}^{n} p(x_i)\, p(y_j \mid x_i),$$

for each $j \in \{1, \dots, m\}$. As usual, we denote the joint distribution of $X$ and $Y$ by $p(x_i, y_j)$.

Now, instead of consulting the prior distribution of $X$, which leads to the guesswork $G(X)$, Bob applies an optimal guessing strategy to each posterior distribution $p(\cdot \mid y_j)$ when $y_j$ is observed. We denote each corresponding posterior guesswork by $G(X \mid Y = y_j)$. Bob's conditional guesswork is then given by

$$G(X \mid Y) \triangleq \sum_{j=1}^{m} p(y_j)\, G(X \mid Y = y_j) = \sum_{j=1}^{m} p(y_j) \sum_{i=1}^{n} \sigma_j(i)\, p(x_i \mid y_j), \qquad (2)$$

where each $\sigma_j$ is a permutation on $\{1, \dots, n\}$ such that $\sigma_j(i) < \sigma_j(k)$ implies $p(x_i \mid y_j) \geq p(x_k \mid y_j)$. In [42], Arikan showed that extra knowledge always reduces (or at least preserves) guesswork, i.e., the inequality $G(X \mid Y) \leq G(X)$ holds.
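Eq.(2) and Arikan's inequality are easy to check numerically from a joint distribution (an illustrative sketch with a made-up channel, not from the paper):

```python
import numpy as np

def guesswork(p):
    q = np.sort(np.asarray(p, dtype=float))[::-1]
    return float(np.sum((np.arange(len(q)) + 1) * q))

def conditional_guesswork(joint):
    """G(X|Y) from a joint distribution joint[i, j] = p(x_i, y_j):
    apply an optimal guessing strategy to each posterior p(.|y_j)."""
    joint = np.asarray(joint, dtype=float)
    p_y = joint.sum(axis=0)
    return float(sum(p_y[j] * guesswork(joint[:, j] / p_y[j])
                     for j in range(joint.shape[1]) if p_y[j] > 0))

# A noisy channel from X (3 values) to Y (2 values); entries are p(x_i, y_j).
joint = np.array([[0.30, 0.10],
                  [0.10, 0.25],
                  [0.05, 0.20]])
p_x = joint.sum(axis=1)
print(conditional_guesswork(joint), guesswork(p_x))  # 1.6 <= 1.85 (Arikan)
```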

We now define the quantum guessing problem by extending the above scenario to the quantum setting. Alice selects from her alphabet a secret message $x_i$ with probability $p(x_i)$ and encodes it into a (possibly mixed) quantum state $\rho_{x_i}$, which is accessible to Bob. Alice's operation gives rise to an ensemble $\mathcal{E} = \{p(x_i), \rho_{x_i}\}_{i=1}^{n}$ of quantum states living in a finite-dimensional Hilbert space $\mathcal{H}$. We call $\mathcal{E}$ a quantum encoding of $X$. In order to identify Alice's secret message, Bob performs on the received state a positive operator-valued measure (POVM) $\Pi = \{\pi_{y_j}\}_{j=1}^{m}$, which comprises positive semidefinite (PSD) operators satisfying the completeness condition $\sum_{j=1}^{m} \pi_{y_j} = I$, where $I$ is the identity operator on $\mathcal{H}$. The probability that Bob obtains the $j$th measurement outcome when Alice chooses the $i$th message is given by $p(y_j \mid x_i) = \mathrm{Tr}(\rho_{x_i} \pi_{y_j})$. Note that the random variable $Y$ is completely determined by the ensemble $\mathcal{E}$ and the POVM $\Pi$. Hence, minimizing $G(X \mid Y)$ over all possible POVMs leads to the following definition of minimum guesswork:

$$G^{\mathrm{opt}}(\mathcal{E}) \triangleq \min_{\Pi \in \mathcal{M}} G(X \mid Y), \qquad (3)$$

where $\mathcal{M}$ is the set of all POVMs. We name this minimization problem minimum guesswork discrimination (MGD).

For convenience of the following reasoning, we introduce some notation. Let $\mathcal{P}$ be the set of all non-zero PSD operators and $\mathcal{P}_1$ the set of all non-zero rank-one PSD operators. A complete POVM is a POVM comprising only rank-one PSD operators; the set of all complete POVMs is denoted by $\mathcal{M}_C$. Given $\pi \in \mathcal{P}$, the random variable $X_\pi$, which takes values from the alphabet of $X$, is defined by

$$\Pr(X_\pi = x_i) \triangleq \frac{p(x_i)\, \mathrm{Tr}(\rho_{x_i} \pi)}{\sum_{i=1}^{n} p(x_i)\, \mathrm{Tr}(\rho_{x_i} \pi)}.$$

Intuitively, $X_\pi$ describes Bob's posterior distribution over Alice's messages when he obtains the outcome indicated by the measurement operator $\pi$. With this notation, the guesswork can be rewritten as $G(X \mid Y) = \sum_{j=1}^{m} p(y_j)\, G(X_{\pi_{y_j}})$, with $\Pi = \{\pi_{y_j}\}_{j=1}^{m}$ and $p(y_j)$ as given in the preceding paragraph. Sometimes, we write $G(X \mid \Pi)$ instead of $G(X \mid Y)$ to indicate the specific POVM adopted by Bob.
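To make the quantum setting concrete, the following sketch (with an assumed toy ensemble of the pure states $|0\rangle$ and $|+\rangle$, which is our own example, not the paper's) computes $G(X \mid Y)$ for a given ensemble and POVM via the Born rule:

```python
import numpy as np

def guesswork(p):
    q = np.sort(np.asarray(p, dtype=float))[::-1]
    return float(np.sum((np.arange(len(q)) + 1) * q))

def quantum_guesswork(priors, states, povm):
    """G(X|Y) for an ensemble {p(x_i), rho_i} measured with POVM {pi_j}:
    joint probabilities come from the Born rule p(y_j|x_i) = Tr(rho_i pi_j)."""
    total = 0.0
    for pi in povm:
        joint = np.array([p * np.trace(rho @ pi).real
                          for p, rho in zip(priors, states)])
        p_y = joint.sum()
        if p_y > 1e-12:
            total += p_y * guesswork(joint / p_y)
    return total

# Two equiprobable pure states |0> and |+>, measured in the computational basis.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
states = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
print(quantum_guesswork([0.5, 0.5], states, povm))  # -> 1.25
```

This measurement is not claimed to be optimal; it merely illustrates how a POVM induces the random variable $Y$ of Eq.(3).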

In the following, we give several alternative characterizations of $G^{\mathrm{opt}}(\mathcal{E})$. The first one states that the optimum $G^{\mathrm{opt}}(\mathcal{E})$ can always be achieved by a complete measurement.

###### Proposition 1.

Let $\mathcal{E}$ be a quantum encoding of a random variable $X$, and $G^{\mathrm{opt}}(\mathcal{E})$ be defined in Eq.(3). It holds that
$$G^{\mathrm{opt}}(\mathcal{E}) = \min_{\Pi \in \mathcal{M}_C} G(X \mid Y),$$
where $\mathcal{M}_C$ is the set of all complete POVMs.

###### Proof.

Since $G(X \mid Y)$ can be rewritten as $\sum_{j} p(y_j)\, G(X_{\pi_{y_j}})$ for a POVM $\{\pi_{y_j}\}$, and every PSD operator decomposes into a sum of rank-one PSD operators, it is sufficient to prove that for any $\pi_{y_j}$ the guesswork cannot be increased by splitting $\pi_{y_j}$ into two measurement operators $\pi_{y_j'}$ and $\pi_{y_j''}$ such that $\pi_{y_j} = \pi_{y_j'} + \pi_{y_j''}$. Formally, we have the following inference:

$$\begin{aligned} p(y_j)\, G(X_{\pi_{y_j}}) &= p(y_j) \sum_{i=1}^{n} \sigma_j(i)\, p(x_i)\, \frac{\mathrm{Tr}(\rho_{x_i} \pi_{y_j})}{p(y_j)} \\ &= \sum_{i=1}^{n} \sigma_j(i)\, p(x_i)\, \mathrm{Tr}(\rho_{x_i} \pi_{y_j'}) + \sum_{i=1}^{n} \sigma_j(i)\, p(x_i)\, \mathrm{Tr}(\rho_{x_i} \pi_{y_j''}) \\ &\geq \sum_{i=1}^{n} \sigma_{j'}(i)\, p(x_i)\, \mathrm{Tr}(\rho_{x_i} \pi_{y_j'}) + \sum_{i=1}^{n} \sigma_{j''}(i)\, p(x_i)\, \mathrm{Tr}(\rho_{x_i} \pi_{y_j''}) \\ &= p(y_{j'})\, G(X_{\pi_{y_j'}}) + p(y_{j''})\, G(X_{\pi_{y_j''}}), \end{aligned} \qquad (4)$$

where $p(y_{j'})$ and $p(y_{j''})$ are the outcome probabilities induced by $\pi_{y_j'}$ and $\pi_{y_j''}$, and the inequality is due to the fact that the permutation $\sigma_j$ may not be optimal with respect to $X_{\pi_{y_j'}}$ or $X_{\pi_{y_j''}}$. Therefore, from an optimal POVM which achieves $G^{\mathrm{opt}}(\mathcal{E})$, one can always derive a complete POVM which also achieves $G^{\mathrm{opt}}(\mathcal{E})$. ∎

Another alternative characterization of $G^{\mathrm{opt}}(\mathcal{E})$ is suggested by the minimum error criterion used in MED. As shown in Remark 1 in Section 3, to minimize error probability it suffices to consider POVMs $\Pi$ which satisfy $|\Pi| = n$ when $\mathcal{E}$ comprises $n$ states. Here, $|\Pi|$ denotes the number of measurement operators in $\Pi$. Similarly, in MGD, the following simple observation yields an upper bound on $|\Pi|$ in minimizing guesswork. Suppose $\pi_{y_j}$ and $\pi_{y_k}$ are two measurement operators in $\Pi$. If the two posterior distributions induced by $\pi_{y_j}$ and $\pi_{y_k}$ achieve their minimum guesswork $G(X_{\pi_{y_j}})$ and $G(X_{\pi_{y_k}})$ by the same optimal permutation, then merging $\pi_{y_j}$ and $\pi_{y_k}$ into a single measurement operator would not change the overall performance of $\Pi$. Since there are exactly $n!$ different permutations on the index set $\{1, \dots, n\}$, we can reduce $|\Pi|$ to at most $n!$ for an optimal POVM $\Pi$.

###### Proposition 2.

Let $\mathcal{E}$ be a quantum encoding of a random variable $X$, and $G^{\mathrm{opt}}(\mathcal{E})$ be defined in Eq.(3). It holds that

$$G^{\mathrm{opt}}(\mathcal{E}) = \min_{\Pi \in \mathcal{M}_{n!}} G(X \mid Y),$$

where $\mathcal{M}_{n!}$ is the set of all POVMs consisting of exactly $n!$ measurement operators.

Based on Eldar et al.'s analogous results in MED [43], we reduce MGD to an SDP problem, which can be solved numerically to any desired accuracy, and derive necessary and sufficient conditions satisfied by an optimal POVM achieving minimum guesswork. We present the results below and omit the proofs, which simply imitate the reasoning in [43].

###### Proposition 3.

Let $\mathcal{E}$ be a quantum encoding of a random variable $X$, and $G^{\mathrm{opt}}(\mathcal{E})$ be defined in Eq.(3). It holds that

$$G^{\mathrm{opt}}(\mathcal{E}) = \max_{A} \mathrm{Tr}(A),$$

where $A$ ranges over all Hermitian operators on $\mathcal{H}$ satisfying $A \leq \sum_{i=1}^{n} \sigma(i)\, p(x_i)\, \rho_{x_i}$ for any permutation $\sigma$ on $\{1, \dots, n\}$.

###### Proposition 4.

Let $\mathcal{E}$ be a quantum encoding of a random variable $X$, and $G^{\mathrm{opt}}(\mathcal{E})$ be defined in Eq.(3). A POVM $\Pi = \{\pi_{y_j}\}_{j=1}^{n!} \in \mathcal{M}_{n!}$ achieves $G^{\mathrm{opt}}(\mathcal{E})$ if and only if for any permutation $\sigma$ on $\{1, \dots, n\}$ it holds that

$$\sum_{i=1}^{n} \sum_{j=1}^{n!} \sigma_j(i)\, p(x_i)\, \rho_{x_i} \pi_{y_j} \leq \sum_{i=1}^{n} \sigma(i)\, p(x_i)\, \rho_{x_i}.$$

It is worth noting that Proposition 4 can also be proved directly using the technique introduced in [44].

## 3 Relation between MGD and MED

Given two random variables $X$ and $Y$, the unconditional and conditional error probabilities of guessing the value of $X$ are defined respectively by

$$P_{\mathrm{err}}(X) \triangleq 1 - \max_{1 \leq i \leq n} p(x_i), \qquad (5)$$

and

$$P_{\mathrm{err}}(X \mid Y) \triangleq \sum_{j=1}^{m} p(y_j) \left(1 - \max_{1 \leq i \leq n} p(x_i \mid y_j)\right). \qquad (6)$$

The underlying observation here is that to minimize the error probability, Bob should always guess the most probable message according to his prior or posterior distribution. In addition, if $X$ and $Y$ are correlated by some quantum encoding $\mathcal{E}$ and some POVM $\Pi$ as in the preceding section, Bob's error probability can be written as

$$P_{\mathrm{err}}(X \mid Y) = 1 - \sum_{j=1}^{m} \max_{1 \leq i \leq n} p(x_i)\, \mathrm{Tr}(\rho_{x_i} \pi_{y_j}),$$

because of the fact that $p(x_i \mid y_j)\, p(y_j) = p(x_i)\, \mathrm{Tr}(\rho_{x_i} \pi_{y_j})$. The definition of minimum error probability in MED is then given by

$$P^{\mathrm{opt}}_{\mathrm{err}}(\mathcal{E}) \triangleq \min_{\Pi \in \mathcal{M}} P_{\mathrm{err}}(X \mid Y). \qquad (7)$$
###### Remark 1.

It is worth noting that in the literature of MED, error probability is usually formulated by

$$P'_{\mathrm{err}}(X \mid Y) \triangleq 1 - \sum_{i=1}^{n} p(x_i)\, \mathrm{Tr}(\rho_{x_i} \pi_{y_i}),$$

with the POVM $\Pi = \{\pi_{y_i}\}_{i=1}^{n}$ being restricted to comprise exactly $n$ measurement operators, so that outcome $y_i$ is directly interpreted as the guess $x_i$. It is easy to show that this simplified characterization is equivalent to $P_{\mathrm{err}}(X \mid Y)$ in the sense that

$$\min_{\Pi \in \mathcal{M}_n} P'_{\mathrm{err}}(X \mid Y) = \min_{\Pi \in \mathcal{M}} P_{\mathrm{err}}(X \mid Y),$$

where $\mathcal{M}_n$ is the set of all POVMs consisting of exactly $n$ measurement operators.

Now, we are ready to investigate the relation between $G^{\mathrm{opt}}(\mathcal{E})$ and $P^{\mathrm{opt}}_{\mathrm{err}}(\mathcal{E})$. To this end, we start by examining the two notions $G(X)$ and $P_{\mathrm{err}}(X)$ in the classical setting.

###### Lemma 1 ([45], Lemma 2.4).

Let $G(X)$ and $P_{\mathrm{err}}(X)$ be defined in Eq.(1) and Eq.(5), respectively. It holds that

$$G(X) \leq \frac{n}{2} \cdot P_{\mathrm{err}}(X) + 1. \qquad (8)$$

If we assume without loss of generality that $p(x_1) \geq p(x_2) \geq \cdots \geq p(x_n)$, then Eq.(8) achieves equality when $p(x_2) = p(x_3) = \cdots = p(x_n)$. It is straightforward to prove the conditional counterpart of Eq.(8).

###### Corollary 1.

Let $G(X \mid Y)$ and $P_{\mathrm{err}}(X \mid Y)$ be defined in Eq.(2) and Eq.(6), respectively. It holds that

$$G(X \mid Y) \leq \frac{n}{2} \cdot P_{\mathrm{err}}(X \mid Y) + 1.$$

Interestingly, guesswork can also be lower bounded in terms of error probability.

###### Lemma 2.

Let $G(X)$ and $P_{\mathrm{err}}(X)$ be defined in Eq.(1) and Eq.(5), respectively. It holds that

$$G(X) \geq \frac{1}{2(1 - P_{\mathrm{err}}(X))} + \frac{1}{2}, \qquad (9)$$

with the equality achieved when $k$ of the probabilities in $p$ are equal to $1/k$ for some integer $k$ and the other probabilities are all equal to zero.

###### Proof.

Without loss of generality, we assume that $p(x_1) \geq p(x_2) \geq \cdots \geq p(x_n)$. With $p(x_1)$ fixed, in order to minimize $G(X)$ we need to require as many probabilities as possible to be equal to $p(x_1)$. It follows that

$$\begin{aligned} G(X) &\geq \sum_{i=1}^{k} i \cdot p(x_1) + (k+1) \cdot (1 - p(x_1) \cdot k) \\ &= -\frac{p(x_1)}{2} k^2 - \frac{p(x_1)}{2} k + k + 1 \\ &\geq \frac{1}{2 p(x_1)} + \frac{1}{2} \\ &= \frac{1}{2(1 - P_{\mathrm{err}}(X))} + \frac{1}{2}, \end{aligned}$$

where $k = \lfloor 1/p(x_1) \rfloor$. The only non-trivial part of the above reasoning is the second inequality. To prove it, let $t = 1/p(x_1) - k$. Then, it is equivalent to prove that

$$-\frac{k^2}{2(k+t)} - \frac{k}{2(k+t)} + k + 1 \geq \frac{k+t}{2} + \frac{1}{2},$$

which can be further simplified to $t(1-t) \geq 0$. Since $0 \leq t < 1$, we conclude the proof. When $t = 0$, and thus $1/p(x_1) = k$ is an integer, the equality is achieved. ∎
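Both Lemma 1 and Lemma 2, together with the equality case of Lemma 2, are easy to check numerically on random distributions (an illustrative sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def guesswork(p):
    q = np.sort(p)[::-1]
    return float(np.sum((np.arange(len(q)) + 1) * q))

for _ in range(1000):
    n = int(rng.integers(2, 8))
    p = rng.dirichlet(np.ones(n))
    G = guesswork(p)
    perr = 1.0 - p.max()
    assert G <= n / 2 * perr + 1 + 1e-9             # Eq.(8), Lemma 1
    assert G >= 1 / (2 * (1 - perr)) + 0.5 - 1e-9   # Eq.(9), Lemma 2

# Equality case of Lemma 2: k probabilities equal to 1/k gives G = k/2 + 1/2.
k = 4
assert abs(guesswork(np.full(k, 1 / k)) - (k / 2 + 0.5)) < 1e-12
print("bounds hold")
```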

Again, we derive the conditional counterpart of Eq.(9).

###### Corollary 2.

Let $G(X \mid Y)$ and $P_{\mathrm{err}}(X \mid Y)$ be defined in Eq.(2) and Eq.(6), respectively. It holds that

$$G(X \mid Y) \geq \frac{1}{2(1 - P_{\mathrm{err}}(X \mid Y))} + \frac{1}{2}. \qquad (10)$$
###### Proof.

We have the following inference:

$$\begin{aligned} G(X \mid Y) &= \sum_{j=1}^{m} p(y_j)\, G(X \mid Y = y_j) \\ &\geq \sum_{j=1}^{m} p(y_j) \left(\frac{1}{2(1 - P_{\mathrm{err}}(X \mid Y = y_j))} + \frac{1}{2}\right) \\ &\geq \frac{1}{2\left(1 - \sum_{j=1}^{m} p(y_j)\, P_{\mathrm{err}}(X \mid Y = y_j)\right)} + \frac{1}{2} \\ &= \frac{1}{2(1 - P_{\mathrm{err}}(X \mid Y))} + \frac{1}{2}, \end{aligned}$$

where the second inequality is from Jensen’s inequality. ∎

We observe that there is an even stronger connection between guesswork and error probability in the special case where $n = 2$.

###### Lemma 3.

If the alphabet of $X$ comprises exactly two elements, i.e., $\mathcal{X} = \{x_1, x_2\}$, then it holds that

• $G(X) = P_{\mathrm{err}}(X) + 1$;

• $G(X \mid Y) = P_{\mathrm{err}}(X \mid Y) + 1$.

We omit the proof of this lemma, as it follows easily from the definitions of guesswork and error probability. Based on the preceding results, we obtain the relation between $G^{\mathrm{opt}}(\mathcal{E})$ and $P^{\mathrm{opt}}_{\mathrm{err}}(\mathcal{E})$ as follows.

###### Theorem 1.

Let $\mathcal{E}$ be a quantum encoding of a random variable $X$, and $G^{\mathrm{opt}}(\mathcal{E})$ and $P^{\mathrm{opt}}_{\mathrm{err}}(\mathcal{E})$ be defined in Eq.(3) and Eq.(7), respectively. It holds that

$$\frac{1}{2(1 - P^{\mathrm{opt}}_{\mathrm{err}}(\mathcal{E}))} + \frac{1}{2} \leq G^{\mathrm{opt}}(\mathcal{E}) \leq \frac{n}{2}\, P^{\mathrm{opt}}_{\mathrm{err}}(\mathcal{E}) + 1, \qquad (11)$$

and, if $n = 2$,

$$G^{\mathrm{opt}}(\mathcal{E}) = P^{\mathrm{opt}}_{\mathrm{err}}(\mathcal{E}) + 1. \qquad (12)$$
###### Proof.

We prove the left inequality in Eq.(11). Let $\Pi$ be an optimal POVM achieving $G^{\mathrm{opt}}(\mathcal{E})$ and $Y$ the corresponding random variable induced by $\mathcal{E}$ and $\Pi$. Applying Eq.(10), we have the following inference:

$$\begin{aligned} G^{\mathrm{opt}}(\mathcal{E}) &= G(X \mid Y) \\ &\geq \frac{1}{2(1 - P_{\mathrm{err}}(X \mid Y))} + \frac{1}{2} \\ &\geq \frac{1}{2(1 - P^{\mathrm{opt}}_{\mathrm{err}}(\mathcal{E}))} + \frac{1}{2}. \end{aligned}$$

Eq.(12) and the right inequality in Eq.(11) follow from similar reasoning, using instead an optimal POVM achieving $P^{\mathrm{opt}}_{\mathrm{err}}(\mathcal{E})$. ∎
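For two states, Eq.(12) can be checked numerically: the Helstrom minimum error probability $\frac{1}{2}(1 - \|p_1\rho_1 - p_2\rho_2\|_1)$ of a toy qubit ensemble (our own illustrative priors and states), plus one, should coincide with a brute-force minimization of guesswork over projective measurements. A minimal sketch:

```python
import numpy as np

def guesswork(p):
    q = np.sort(p)[::-1]
    return float(np.sum((np.arange(len(q)) + 1) * q))

def ket(theta):
    return np.array([np.cos(theta), np.sin(theta)])

# Two pure qubit states with priors (0.6, 0.4); the angles are arbitrary choices.
priors = np.array([0.6, 0.4])
states = [np.outer(ket(0.0), ket(0.0)),
          np.outer(ket(np.pi / 5), ket(np.pi / 5))]

# Helstrom: minimum error probability is (1 - ||p1 rho1 - p2 rho2||_1) / 2.
delta = priors[0] * states[0] - priors[1] * states[1]
perr_opt = (1.0 - np.abs(np.linalg.eigvalsh(delta)).sum()) / 2

# Brute-force search for minimum guesswork over projective qubit measurements.
best = np.inf
for theta in np.linspace(0.0, np.pi, 2001):
    povm = [np.outer(ket(theta), ket(theta)),
            np.outer(ket(theta + np.pi / 2), ket(theta + np.pi / 2))]
    G = 0.0
    for pi in povm:
        joint = np.array([p * np.trace(r @ pi).real
                          for p, r in zip(priors, states)])
        if joint.sum() > 1e-12:
            G += joint.sum() * guesswork(joint / joint.sum())
    best = min(best, G)

print(best, perr_opt + 1)  # the two values agree up to the grid resolution
```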

This theorem states that the minimum error criterion coincides with the minimum guesswork criterion in the context of two-state discrimination. In general, however, the two criteria may not agree with each other, though there still exists a weak correlation between them, as shown in Eq.(11). The following example shows their disagreement when three states are present.

###### Example 1.

We consider the so-called trine ensemble [46, 47], which consists of three equiprobable pure qubit states (living in a $2$-dimensional Hilbert space), given by

$$|\phi_{x_1}\rangle = |0\rangle, \qquad |\phi_{x_2}\rangle = -\frac{1}{2}|0\rangle + \frac{\sqrt{3}}{2}|1\rangle, \qquad |\phi_{x_3}\rangle = -\frac{1}{2}|0\rangle - \frac{\sqrt{3}}{2}|1\rangle.$$

For the trine ensemble, the $3$-component POVM, denoted by $\Pi_E = \{\pi_{y_i}\}_{i=1}^{3}$, with each operator given by

$$\pi_{y_i} = \frac{2}{3}\rho_{x_i} = \frac{2}{3}|\phi_{x_i}\rangle\langle\phi_{x_i}|,$$

achieves the minimum error with $P^{\mathrm{opt}}_{\mathrm{err}}(\mathcal{E}) = \frac{1}{3}$. This POVM is known as the square-root measurement [46]. On the other hand, in Section 5 we will show that the POVM $\Pi_G = \{\frac{2}{3}|\psi_{x_k}\rangle\langle\psi_{x_k}|\}_{k=1}^{3}$ with

$$|\psi_{x_k}\rangle = \cos\left(\frac{2k\pi}{3} - \frac{7\pi}{12}\right)|0\rangle + \sin\left(\frac{2k\pi}{3} - \frac{7\pi}{12}\right)|1\rangle$$

achieves the minimum guesswork with $G^{\mathrm{opt}}(\mathcal{E}) = 2 - \frac{\sqrt{3}}{3}$. It is also easy to verify that

$$P_{\mathrm{err}}(X \mid \Pi_G) = \frac{2}{3} - \frac{\sqrt{3}}{6} > P_{\mathrm{err}}(X \mid \Pi_E), \qquad G(X \mid \Pi_E) = \frac{3}{2} > G(X \mid \Pi_G),$$

where we write $P_{\mathrm{err}}(X \mid \Pi)$ or $G(X \mid \Pi)$ for $P_{\mathrm{err}}(X \mid Y)$ or $G(X \mid Y)$, with $\Pi \in \{\Pi_E, \Pi_G\}$, to avoid confusion. Indeed, by observing the proof of the optimality of $\Pi_G$ in Section 5, we can conclude that for the trine ensemble there does not exist any POVM which achieves both minimum error and minimum guesswork.
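The numbers in this example can be reproduced numerically from the trine definitions above (the helper functions are our own sketch):

```python
import numpy as np

def guesswork(p):
    q = np.sort(p)[::-1]
    return float(np.sum((np.arange(len(q)) + 1) * q))

def ket(theta):
    return np.array([np.cos(theta), np.sin(theta)])

# Trine ensemble: equiprobable pure states at angles 0, 2pi/3, -2pi/3.
states = [np.outer(ket(t), ket(t)) for t in (0.0, 2 * np.pi / 3, -2 * np.pi / 3)]
priors = [1 / 3] * 3

def stats(povm):
    """Error probability and guesswork of a given POVM on the trine ensemble."""
    perr, G = 1.0, 0.0
    for pi in povm:
        joint = np.array([p * np.trace(r @ pi).real
                          for p, r in zip(priors, states)])
        perr -= joint.max()
        G += joint.sum() * guesswork(joint / joint.sum())
    return perr, G

srm = [(2 / 3) * s for s in states]  # square-root measurement, Pi_E
pg = [(2 / 3) * np.outer(ket(t), ket(t))
      for t in (2 * k * np.pi / 3 - 7 * np.pi / 12 for k in (1, 2, 3))]

perr_E, G_E = stats(srm)
perr_G, G_G = stats(pg)
print(perr_E, G_E)  # 1/3 and 3/2
print(perr_G, G_G)  # 2/3 - sqrt(3)/6 and 2 - sqrt(3)/3
```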

## 4 Upper and lower information-theoretic bounds on minimum guesswork

As a matter of fact, the optimization problem in MED is usually hard to solve analytically. Closed-form results or optimal measurements are only known for a few special quantum systems, e.g., the case with exactly two states [1], equiprobable symmetric states [16], or multiply symmetric states [17]. We believe the same is true for MGD, given the analogy between the two problems. In light of this, upper and lower bounds on minimum guesswork are as desirable as those on minimum error [48, 49, 50, 51, 52]. In what follows, we start by reviewing some existing upper and lower bounds on guesswork in the classical setting. We then combine them with the celebrated Holevo bound and the less well-known subentropy bound on accessible information, resulting in upper and lower bounds on minimum guesswork in the quantum setting.

In [41], Massey proved the following lower bound on the guesswork $G(X)$.

###### Lemma 4 ([41]).

Let $G(X)$ be defined in Eq.(1). Provided $H(X) \geq 2$, it holds that

$$G(X) \geq \frac{1}{4} \cdot 2^{H(X)} + 1.$$

Note that $H(X) \triangleq -\sum_{i=1}^{n} p(x_i) \log p(x_i)$ is the Shannon entropy. Throughout, the logarithm is taken with base $2$. Let us also recall the conditional Shannon entropy:

$$H(X \mid Y) \triangleq \sum_{j=1}^{m} p(y_j)\, H(X \mid Y = y_j) = -\sum_{j=1}^{m} p(y_j) \sum_{i=1}^{n} p(x_i \mid y_j) \log p(x_i \mid y_j).$$

Based on Massey’s result, we derive a similar lower bound on conditional guesswork.

###### Corollary 3.

Let $G(X \mid Y)$ be defined in Eq.(2). Provided $H(X \mid Y = y_j) \geq 2$ for each $j$, it holds that

$$G(X \mid Y) \geq \frac{1}{4} \cdot 2^{H(X \mid Y)} + 1. \qquad (13)$$
###### Proof.

We have the following inference:

$$\begin{aligned} G(X \mid Y) &= \sum_{j=1}^{m} p(y_j)\, G(X \mid Y = y_j) \\ &\geq \sum_{j=1}^{m} p(y_j) \left(\frac{1}{4} \cdot 2^{H(X \mid Y = y_j)} + 1\right) \\ &\geq \frac{1}{4} \cdot 2^{\sum_{j=1}^{m} p(y_j)\, H(X \mid Y = y_j)} + 1 \\ &= \frac{1}{4} \cdot 2^{H(X \mid Y)} + 1, \end{aligned}$$

where the second inequality is from Jensen’s inequality. ∎

An upper bound on guesswork in terms of the Shannon entropy also exists.

###### Lemma 5 ([53]).

Let $G(X)$ be defined in Eq.(1). It holds that

$$G(X) \leq \frac{n-1}{2 \log n}\, H(X) + 1. \qquad (14)$$

Again, we present the conditional counterpart of Eq.(14).

###### Corollary 4.

Let $G(X \mid Y)$ be defined in Eq.(2). It holds that

$$G(X \mid Y) \leq \frac{n-1}{2 \log n}\, H(X \mid Y) + 1. \qquad (15)$$
###### Proof.
$$\begin{aligned} G(X \mid Y) &= \sum_{j=1}^{m} p(y_j)\, G(X \mid Y = y_j) \\ &\leq \sum_{j=1}^{m} p(y_j) \left(\frac{n-1}{2 \log n}\, H(X \mid Y = y_j) + 1\right) \\ &= \frac{n-1}{2 \log n}\, H(X \mid Y) + 1. \end{aligned}$$
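Both entropy bounds, Eq.(13) and Eq.(15), can be checked numerically on random joint distributions (an illustrative sketch; the lower bound is only tested when its per-outcome entropy precondition holds):

```python
import numpy as np

rng = np.random.default_rng(2)

def guesswork(p):
    q = np.sort(p)[::-1]
    return float(np.sum((np.arange(len(q)) + 1) * q))

def shannon(p):
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

n, m = 8, 3
for _ in range(500):
    joint = rng.dirichlet(np.ones(n * m)).reshape(n, m)
    p_y = joint.sum(axis=0)
    G = H = 0.0
    precondition = True
    for j in range(m):
        post = joint[:, j] / p_y[j]
        G += p_y[j] * guesswork(post)
        H += p_y[j] * shannon(post)
        precondition = precondition and shannon(post) >= 2  # needed for Eq.(13)
    assert G <= (n - 1) / (2 * np.log2(n)) * H + 1 + 1e-9   # Eq.(15)
    if precondition:
        assert G >= 2.0 ** H / 4 + 1 - 1e-9                 # Eq.(13)

print("entropy bounds hold")
```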

Now, let us review two bounds on accessible information. Let $\mathcal{E}$ be a quantum encoding of a random variable $X$, and $Y$ be the random variable induced by $\mathcal{E}$ and some POVM $\Pi$ as described in Section 2. The accessible information of the ensemble $\mathcal{E}$ is defined as the maximum mutual information between $X$ and $Y$ obtainable by varying the POVM $\Pi$:

$$I_{\mathrm{acc}}(\mathcal{E}) \triangleq \max_{\Pi \in \mathcal{M}} I(X : Y),$$

where $I(X : Y) = H(X) - H(X \mid Y)$. Although accessible information is difficult to characterize analytically, various upper and lower bounds have been found. In a celebrated paper [21], Holevo bounded $I_{\mathrm{acc}}(\mathcal{E})$ as follows:

$$I_{\mathrm{acc}}(\mathcal{E}) \leq \chi(\mathcal{E}) \triangleq S\left(\sum_{i=1}^{n} p(x_i)\, \rho_{x_i}\right) - \sum_{i=1}^{n} p(x_i)\, S(\rho_{x_i}), \qquad (16)$$

where the von Neumann entropy of a quantum state $\rho$ is given by $S(\rho) \triangleq -\mathrm{Tr}(\rho \log \rho)$, a natural extension of the Shannon entropy. The quantity $\chi(\mathcal{E})$ is usually referred to as the Holevo information of the ensemble $\mathcal{E}$. Interestingly, accessible information can also be lower bounded by a quantity which has an analogous form to the Holevo information. With the notion of subentropy, which is defined by

$$Q(\rho) \triangleq -\sum_{k} \prod_{l \neq k} \frac{\lambda_k}{\lambda_k - \lambda_l}\, \lambda_k \log \lambda_k,$$

with the $\lambda_k$'s being the eigenvalues of the state $\rho$, Jozsa, Robb, and Wootters [22] obtained the following inequality:

$$I_{\mathrm{acc}}(\mathcal{E}) \geq \Lambda(\mathcal{E}) \triangleq Q\left(\sum_{i=1}^{n} p(x_i)\, \rho_{x_i}\right) - \sum_{i=1}^{n} p(x_i)\, Q(\rho_{x_i}). \qquad (17)$$
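A small numerical sketch of the two bounding quantities follows. The two example states are hypothetical, and the subentropy routine assumes a non-degenerate spectrum (the degenerate case is a limit of the displayed formula):

```python
import numpy as np

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def subentropy(rho):
    """Q(rho) for a state with non-degenerate spectrum."""
    lam = np.linalg.eigvalsh(rho)
    q = 0.0
    for k, lk in enumerate(lam):
        if lk > 1e-12:
            coeff = np.prod([lk / (lk - ll)
                             for l, ll in enumerate(lam) if l != k])
            q -= coeff * lk * np.log2(lk)
    return float(q)

# An equiprobable two-state qubit ensemble (illustrative example states).
rho1 = np.diag([0.9, 0.1])
rho2 = np.array([[0.55, 0.15], [0.15, 0.45]])
priors = [0.5, 0.5]
avg = priors[0] * rho1 + priors[1] * rho2

chi = vn_entropy(avg) - sum(p * vn_entropy(r)
                            for p, r in zip(priors, [rho1, rho2]))
lam_bound = subentropy(avg) - sum(p * subentropy(r)
                                  for p, r in zip(priors, [rho1, rho2]))
print(lam_bound, chi)  # Lambda(E) <= I_acc(E) <= chi(E)
```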

Combining Eq.(13) and Eq.(16), we obtain the following lower bound on minimum guesswork.

###### Theorem 2.

Let $\mathcal{E}$ be a quantum encoding of a random variable $X$ and $G^{\mathrm{opt}}(\mathcal{E})$ be defined in Eq.(3). Provided $H(X_\pi) \geq 2$ for any $\pi \in \mathcal{P}$, it holds that

$$G^{\mathrm{opt}}(\mathcal{E}) \geq \frac{1}{4} \cdot 2^{H(X) - \chi(\mathcal{E})} + 1. \qquad (18)$$
###### Proof.

Since

$$I_{\mathrm{acc}}(\mathcal{E}) = H(X) - \min_{\Pi \in \mathcal{M}} H(X \mid Y) \leq \chi(\mathcal{E}),$$

we have that

$$\min_{\Pi \in \mathcal{M}} H(X \mid Y) \geq H(X) - \chi(\mathcal{E}). \qquad (19)$$

Let $\Pi$ be an optimal POVM achieving $G^{\mathrm{opt}}(\mathcal{E})$. Due to the guarantee that $H(X_\pi) \geq 2$ for any $\pi \in \mathcal{P}$, we are safe to apply Eq.(13):

$$G(X \mid Y) \geq \frac{1}{4} \cdot 2^{H(X \mid Y)} + 1,$$

where $Y$ is the random variable induced by $\mathcal{E}$ and $\Pi$. Then, Eq.(19) and Eq.(13) together imply that

$$G^{\mathrm{opt}}(\mathcal{E}) \geq \frac{1}{4} \cdot 2^{H(X) - \chi(\mathcal{E})} + 1,$$

as required. ∎

From the above proof, we see that the equality in Eq.(18) holds if and only if there exists an optimal POVM $\Pi$ achieving $G^{\mathrm{opt}}(\mathcal{E})$ such that $G(X \mid Y) = \frac{1}{4} \cdot 2^{H(X \mid Y)} + 1$ and $H(X \mid Y) = H(X) - \chi(\mathcal{E})$. To satisfy the first equation, we have to require $G(X_{\pi_{y_j}}) = \frac{1}{4} \cdot 2^{H(X_{\pi_{y_j}})} + 1$ for each $j$ (see the proof of Corollary 3). Consequently, the alphabet of $X$ must be countably infinite and each $X_{\pi_{y_j}}$ must obey the geometric distribution $(\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \dots)$ [41]. A trivial case is when all the posterior distributions are identical, e.g.,

$$\Pr(X_\pi = x_i) = \frac{1}{2^i}, \qquad i = 1, 2, \dots,$$

for each $\pi$. In this case, we have that $H(X \mid Y) = H(X) = 2$, which further implies that $\chi(\mathcal{E}) = 0$, as required by the second equation. Therefore, Alice's choice of $\mathcal{E}$ has to be a trivial encoding with all the states being identical! Indeed, this special system satisfies the "no-measurement" condition discussed in Section 6. That is, Bob cannot decrease his prior guesswork by applying any measurement.

Nonetheless, there also exist non-trivial quantum encodings satisfying the equality condition of Eq.(18). We give an example below. Let $X$ be a random variable with the associated distribution given by

$$p(x_1) = p(x_2) = \frac{3}{8}, \qquad p(x_i) = \frac{1}{2^i}, \quad i = 3, 4, \dots$$

Consider the following quantum encoding of $X$:

$$\rho_{x_1} = \frac{2}{3}|0\rangle\langle 0| + \frac{1}{3}|1\rangle\langle 1|, \qquad \rho_{x_2} = \frac{1}{3}|0\rangle\langle 0| + \frac{2}{3}|1\rangle\langle 1|, \qquad \rho_{x_i} = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1|, \quad i = 3, 4, \dots$$

It is straightforward to verify that $H(X_\pi) \geq 2$ for any $\pi \in \mathcal{P}$. Then, with the POVM $\Pi = \{|0\rangle\langle 0|, |1\rangle\langle 1|\}$, we see that both $X_{|0\rangle\langle 0|}$ and $X_{|1\rangle\langle 1|}$ obey the geometric distribution $(\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \dots)$, and thus $G(X \mid Y) = 2$. On the other hand, we can calculate that $H(X) = \frac{13}{4} - \frac{3}{4}\log 3$ and $\chi(\mathcal{E}) = \frac{5}{4} - \frac{3}{4}\log 3$, so that $H(X) - \chi(\mathcal{E}) = 2$. Hence, it holds that $G(X \mid Y) = \frac{1}{4} \cdot 2^{H(X) - \chi(\mathcal{E})} + 1$, as required by the equality condition of Eq.(18).
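The equality claims in this example can be verified numerically by truncating the infinite ensemble (the truncation level N is our own choice; the tail it discards is negligible):

```python
import numpy as np

def guesswork(p):
    q = np.sort(p)[::-1]
    return float(np.sum((np.arange(len(q)) + 1) * q))

def h(p):
    """Shannon entropy in bits of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

N = 60  # number of retained states
priors = np.array([3 / 8, 3 / 8] + [2.0 ** -i for i in range(3, N + 1)])
probs0 = np.array([2 / 3, 1 / 3] + [0.5] * (N - 2))  # <0|rho_i|0> per state

H_X = h(priors)
avg0 = float(priors @ probs0)  # <0|rho_bar|0>, equals 1/2
chi = h([avg0, 1 - avg0]) - float(priors @ [h([q, 1 - q]) for q in probs0])

# Measure in the computational basis {|0><0|, |1><1|}.
post0 = priors * probs0 / avg0
post1 = priors * (1 - probs0) / (1 - avg0)
G = avg0 * guesswork(post0) + (1 - avg0) * guesswork(post1)

print(G, 2 ** (H_X - chi) / 4 + 1)  # both equal 2, up to truncation error
```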

###### Remark 2.

We note that in order to apply Eq.(18) we need to verify in advance the precondition $H(X_\pi) \geq 2$ for any non-zero PSD operator $\pi$. This constraint, which limits the applicability of the bound, originates from the one in Lemma 4. Intuitively, it can only be fulfilled by quantum states whose supports overlap to a certain extent. We give an example where this constraint holds. Consider a $5$-dimensional Hilbert space $\mathcal{H}$ with an orthonormal basis $\{|1\rangle, \dots, |5\rangle\}$. Let $\mathcal{E}$ be a quantum encoding of some uniformly distributed random variable $X$, with the states $\rho_{x_i}$ defined by

$$\rho_{x_i} = \frac{1}{4}\left(I - |i\rangle\langle i|\right),$$

where $I$ is the identity operator on $\mathcal{H}$. For any $\pi \in \mathcal{P}$, we can calculate that $\Pr(X_\pi = x_i) = p_i$ with

$$p_i = \frac{1}{4}\left(1 - \frac{\langle i|\pi|i\rangle}{\mathrm{Tr}(\pi)}\right).$$

It is easy to verify that $\sum_{i=1}^{5} p_i = 1$ and $p_i \leq \frac{1}{4}$ for each $i$, and hence $H(X_\pi) \geq 2$.
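A quick numerical check of this claim on randomly sampled non-zero PSD operators (the sampling scheme is our own):

```python
import numpy as np

rng = np.random.default_rng(1)

def shannon(p):
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

d = 5
for _ in range(200):
    # Random non-zero PSD operator pi on the 5-dimensional space.
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    pi = a @ a.conj().T
    diag = np.real(np.diag(pi))
    p = 0.25 * (1 - diag / np.trace(pi).real)  # posterior Pr(X_pi = x_i)
    assert abs(p.sum() - 1) < 1e-9
    assert shannon(p) >= 2 - 1e-9  # every posterior carries at least 2 bits

print("H(X_pi) >= 2 for all sampled pi")
```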

To bound minimum guesswork in the other direction, we combine Eq.(15) and Eq.(17).

###### Theorem 3.

Let $\mathcal{E}$ be a quantum encoding of a random variable $X$ and $G^{\mathrm{opt}}(\mathcal{E})$ be defined in Eq.(3). It holds that

$$G^{\mathrm{opt}}(\mathcal{E}) \leq \frac{n-1}{2 \log n}\left(H(X) - \Lambda(\mathcal{E})\right) + 1. \qquad (20)$$
###### Proof.

Since

$$I_{\mathrm{acc}}(\mathcal{E}) = H(X) - \min_{\Pi \in \mathcal{M}} H(X \mid Y) \geq \Lambda(\mathcal{E}),$$

we have that

$$\min_{\Pi \in \mathcal{M}} H(X \mid Y) \leq H(X) - \Lambda(\mathcal{E}).$$

Consequently, there must exist some POVM $\Pi$ with induced random variable $Y$ such that $H(X \mid Y) \leq H(X) - \Lambda(\mathcal{E})$. Applying Eq.(15), we obtain the following inference:

$$G^{\mathrm{opt}}(\mathcal{E}) \leq G(X \mid Y) \leq \frac{n-1}{2 \log n}\, H(X \mid Y) + 1 \leq \frac{n-1}{2 \log n}\left(H(X) - \Lambda(\mathcal{E})\right) + 1. \qquad (21)$$

Let us examine when the inequality in Eq.(20) is saturated. First, it is required that $I_{\mathrm{acc}}(\mathcal{E}) = \Lambda(\mathcal{E})$. Otherwise, there must exist a random variable $Y$ induced by $\mathcal{E}$ and some POVM such that $H(X \mid Y) < H(X) - \Lambda(\mathcal{E})$, which further implies that $G^{\mathrm{opt}}(\mathcal{E})$ is strictly less than the RHS term in Eq.(20). According to the discussion in [22], the lower bound on the accessible information of $\mathcal{E}$ can only be achieved by the so-called Scrooge ensemble, or trivially by an ensemble with all the states being identical. It was shown that the amount of information that we can obtain from the Scrooge ensemble by performing a complete POVM is independent of the choice of POVM.

Second, for any complete POVM $\Pi$ and the corresponding random variable $Y$ induced by $\mathcal{E}$ and $\Pi$, we require that $G(X \mid Y) = \frac{n-1}{2 \log n} H(X \mid Y) + 1$ to saturate the second inequality in Eq.(21). It then follows from the proof of Corollary 4 that $G(X \mid Y = y_j) = \frac{n-1}{2 \log n} H(X \mid Y = y_j) + 1$ for any $j$. On the other hand, the equality in Eq.(14) holds if and only if $X$ obeys either the uniform distribution, i.e., $p(x_i) = \frac{1}{n}$ for each $i$, or trivially a point distribution, i.e., $p(x_i) = 1$ for some $i$ [53]. As a consequence, all the quantum states in $\mathcal{E}$ must be either mutually orthogonal or identical. Since mutually orthogonal states cannot form a Scrooge ensemble, we conclude that the equality in Eq.(20) holds if and only if all the states in $\mathcal{E}$ are identical.

## 5 Geometrically uniform states

In this section, we consider MGD for a special type of symmetric quantum states, the geometrically uniform states [13]. We provide sufficient conditions under which a geometrically uniform measurement achieves minimum guesswork. With this technique, we are able to prove the optimality of the POVM $\Pi_G$, as given in Example 1, for the trine ensemble.

Let $\mathcal{G} = \{U_i\}_{i=1}^{n}$ be a finite group of unitary operators. The identity element of $\mathcal{G}$ and the inverse of $U \in \mathcal{G}$ are denoted by $I$ and $U^{-1}$, respectively. An ensemble of states is called geometrically uniform if the $\rho_{x_i}$'s can be generated by a state $\rho$ and a group $\mathcal{G}$ such that $\rho_{x_i} = U_i \rho U_i^\dagger$. It is also assumed that the states are equiprobable, as required by symmetry. Similarly, we can define such symmetry for quantum measurements. A POVM is called geometrically uniform if the measurement operators are generated by some operator $\pi$ and $\mathcal{G}$ such that $\pi_{y_i} = U_i \pi U_i^\dagger$. In [18], Eldar, Megretski, and Verghese showed that the square-root measurement of a geometrically uniform ensemble is also geometrically uniform, and that there always exists a geometrically uniform measurement which achieves the minimum error for this ensemble.

We state our sufficient conditions as follows. Given a unitary operator $U$ and a POVM $\Pi = \{\pi_{y_j}\}$, we use $U \Pi U^\dagger$ to denote the new POVM