General entropy–like uncertainty relations in finite dimensions

Abstract

We revisit entropic formulations of the uncertainty principle for an arbitrary pair of positive operator-valued measures (POVMs) acting on a finite-dimensional Hilbert space. Salicrú generalized $(h,\phi)$-entropies, including the Rényi and Tsallis families among others, are used as uncertainty measures associated with the probability distributions corresponding to the outcomes of the observables. We obtain a nontrivial lower bound for the sum of generalized entropies for any pair of entropic functionals, valid for both pure and mixed states. The bound depends on a triplet of overlaps: the overlap between the elements of each POVM and the overlap between the pair of POVMs. Our approach is inspired by that of de Vicente and Sánchez-Ruiz [Phys. Rev. A 77, 042110 (2008)] and consists in a minimization of the entropy sum subject to the Landau–Pollak inequality that links the maximum probabilities of both observables. We solve the constrained optimization problem in a geometrical way and, furthermore, when dealing with Rényi or Tsallis entropic formulations of the uncertainty principle, we overcome the Hölder conjugacy constraint imposed on the entropic indices by the Riesz–Thorin theorem. In the case of nondegenerate observables, we show that for overlaps above $1/\sqrt{2}$ the bound obtained is optimal; and that, for Rényi entropies, our bound improves the Deutsch one, but the Maassen–Uffink bound prevails for smaller overlaps. Finally, we illustrate by comparing our bound with known previous results in particular cases of Rényi and Tsallis entropies.

1 Introduction

The uncertainty principle (UP), originally formulated by Heisenberg [1], is one of the most characteristic features of the quantum world. The principle establishes that one cannot predict with certainty, and simultaneously, the outcomes of two (or more) incompatible measurements. The study of quantitative formulations of this principle has a long history. The first formulations made use of variances as uncertainty measures, and the principle was expressed, state by state, through the existence of a lower bound for the product of the variances [1]. However, such formulations are not always adequate, since the variance is not always a convenient description of the uncertainty of a random variable. For instance, there exist variables with infinite variance [4]. Moreover, in the case of discrete-spectrum observables, the universal (state-independent) lower bound becomes trivial (zero), and thus Heisenberg-like inequalities do not quantify the UP [5]. For these reasons, many authors have proposed, and continue to propose, alternative formulations using other uncertainty measures. One possibility consists in using information-theoretic measures [10], leading to entropic uncertainty relations (EURs). In this line, pioneering works by Hirschman [13], Bialynicki-Birula and Mycielski [14] based on important results due to Beckner [15], Deutsch [5], or Maassen and Uffink (MU) [6], who proved a result conjectured by Kraus [16], have given rise to different formulations of the principle based on Shannon and generalized one-parameter information entropies, or on entropic moments [17]. Versions using the sum of variances (instead of their product) [46], the Fisher information [47], or moments of various orders [50] have also been developed.

In this contribution, we focus on the formulation of the UP in the case of finite dimensions by using $(h,\phi)$-entropies, a generalization of the Shannon entropy due to Salicrú et al. [51]. In particular, we deal with two well-known one-parameter entropy families, the Rényi and Tsallis ones. Our aim is to obtain a universal and nontrivial bound for the sum of the entropies associated with the outcomes of a pair of positive operator-valued measures. To this end, we follow a method similar to that of de Vicente and Sánchez-Ruiz in Ref. [26], solving the minimization problem for the sum of generalized entropies subject to the Landau–Pollak inequality [53]. We develop a geometrical approach to the problem.

The paper is organized as follows. In Section 2, we begin with basic definitions and notation, we present the problem, and we summarize previous results on EURs that deal with Rényi or Tsallis entropies. In Section 3, we give our main results concerning general entropy-like formulations of the UP in finite dimensions. For the sake of comparison with existing bounds in the literature, in Section 4 we choose some particular cases. A discussion is provided in Section 5. The proofs of our results are given in detail in a series of appendices.

2 Statement of the problem: notation and previous results

2.1 Generalized entropies

We are interested in quantitative formulations of the uncertainty principle, particularly through the use of information-theoretic quantities. More precisely, as a measure of ignorance or lack of information we employ the $(h,\phi)$-entropies of Salicrú et al. [51],

$$H_{(h,\phi)}[p] = h\!\left(\sum_{i} \phi(p_i)\right),$$

for any probability vector $p$, where the entropic functionals $h$ and $\phi$ are such that either $\phi$ is concave and $h$ is increasing, or $\phi$ is convex and $h$ is decreasing. We restrict ourselves here to entropic functionals such that

  • $\phi$ is continuous and strictly concave or strictly convex,

  • $h$ is continuous and strictly monotone,

  • $\phi(0) = 0$ (so that the “elementary” uncertainty associated with an event of zero probability is zero),

  • $h(\phi(1)) = 0$ (without loss of generality).

Many of the well-known cases in the literature satisfy these assumptions (see Refs. [51] for a list of examples). Among them, the most renowned ones are

  • Shannon entropy [10], given by $\phi(x) = -x \ln x$ and $h = \mathrm{id}$, where $\ln$ stands for the natural logarithm, corresponding to $H[p] = -\sum_i p_i \ln p_i$,

  • Rényi entropies [11], introduced in the domain of mathematics from the same axiomatics as the Shannon entropy but relaxing only one property (recursivity is generalized); they are given by $\phi(x) = x^\alpha$ and $h(y) = \frac{\ln y}{1-\alpha}$, where $\alpha \ge 0$, $\alpha \ne 1$, is the entropic index,

  • Tsallis entropies, first introduced by Havrda and Charvát [54] from an axiomatics quite close to that of Shannon, then by Daróczy [55] through a generalization of a functional equation satisfied by the Shannon entropy, and finally by Tsallis [56] in the domain of nonextensive physics; they are given by $\phi(x) = x^\alpha$ and $h(y) = \frac{y-1}{1-\alpha}$, with entropic index $\alpha \ge 0$, $\alpha \ne 1$.

The last two cases belong to a general one-parameter family given by $\phi(x) = x^\alpha$ and $h(y) = \frac{f(y)}{1-\alpha}$,

with $f$ increasing and $f(1) = 0$, and where the entropic index $\alpha$ plays the role of a “magnifying glass”, in the following sense: when $\alpha < 1$, the contribution of the different terms in the sum becomes more uniform with respect to the case $\alpha = 1$, thus stressing the tails of the distribution; conversely, when $\alpha > 1$, the leading probabilities of the distribution are stressed in the summation. As an extreme example, for $\alpha = 0$ the generalized entropy is simply a function of the number of nonzero components of the probability vector $p$, regardless of the values of these probabilities; this measure is closely linked to the $\ell_0$ quasi-norm which measures the sparsity of a representation in signal processing [57]. If additionally $f$ is differentiable with $f'(1) = 1$, the Shannon entropy is recovered from these entropies when $\alpha \to 1$.
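To make the “magnifying glass” role of the index concrete, here is a minimal Python sketch (ours, for illustration only; the test distribution is arbitrary) that evaluates the Rényi entropy for several indices, recovering the support-counting behavior at $\alpha = 0$ and the min-entropy as $\alpha \to \infty$:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy (natural logarithm) of a probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # convention: 0 * log 0 = 0
    if np.isclose(alpha, 1.0):         # Shannon limit
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

p = [0.7, 0.2, 0.05, 0.05]
for alpha in [0.0, 0.5, 1.0, 2.0, np.inf]:
    if np.isinf(alpha):                # min-entropy: -log(max p)
        h = -np.log(np.max(p))
    elif alpha == 0.0:                 # log of the support size (l0 count)
        h = np.log(np.count_nonzero(p))
    else:
        h = renyi_entropy(p, alpha)
    print(f"alpha = {alpha}: H = {h:.4f}")
```

Small indices weight all outcomes almost equally (the entropy approaches the logarithm of the support size), while large indices are dominated by the largest probability.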

The generalized $(h,\phi)$-entropies satisfy the following usual properties:

  • $H_{(h,\phi)}$ is a Schur-concave function of its argument, that is, if $p$ is majorized by $q$, which is denoted $p \prec q$, then $H_{(h,\phi)}[p] \ge H_{(h,\phi)}[q]$. This property is a consequence of the Karamata inequality, which states that if $\phi$ is convex (resp. concave), then $\sum_i \phi(p_i)$ is Schur-convex (resp. Schur-concave) (see [60] or [61]), together with the decreasing (resp. increasing) property of $h$. The property of Schur-concavity is useful in some problems of combinatorial, numerical or statistical analysis [61].

  • $H_{(h,\phi)}[p] \ge 0$, with equality iff the probability distribution is a Kronecker delta: $p_i = \delta_{i,i_0}$ for a certain $i_0$, that is, the $i_0$th outcome appears with certainty so that the ignorance is zero. This property is a consequence of the Schur-concavity of $H_{(h,\phi)}$, since any $p$ is majorized by a Kronecker delta, together with $h(\phi(1)) = 0$.

  • $H_{(h,\phi)}[p] \le H_{(h,\phi)}[(1/N, \dots, 1/N)]$, with equality iff the probability distribution is uniform: $p_i = 1/N$ for all $i$, that is, all outcomes appear with equal probability so that the uncertainty is maximal. Again, this property is a consequence of the Schur-concavity of $H_{(h,\phi)}$, since the uniform distribution is majorized by any $p$ (see [61]).

  • $H_{(h,\phi)}$ is a concave function of $p$ if $h$ is concave; this is due to the facts that: (i) for a concave (resp. convex) function $\phi$, the function $p \mapsto \sum_i \phi(p_i)$ is concave (resp. convex) [62], and (ii) the function $h$ is increasing (resp. decreasing). This property is useful in optimization problems [63]. The Shannon entropy is known to be concave [12]. The Rényi entropy is concave for $\alpha \le 1$; and in fact, it can be shown that there exists an $N$-dependent index greater than 1, up to which the Rényi entropy remains concave [64]. The Tsallis entropy is concave for any index $\alpha > 0$.

Furthermore, the one-parameter entropy is a decreasing function of the index $\alpha$ for fixed $p$. Together with the positivity of the entropy, this ensures the convergence (at least pointwise) when $\alpha \to \infty$, so that the limit could be called the minimal generalized entropy (when it is not identically zero).
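Both the Schur-concavity and the decrease with respect to the index can be checked numerically; the following sketch (ours, using the Rényi entropy as a test case) verifies the majorization ordering, the monotonicity in $\alpha$, and the extreme values $0$ and $\ln N$:

```python
import numpy as np

def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def is_majorized_by(p, q):
    """True if p is majorized by q (partial sums of decreasing rearrangements)."""
    cp = np.cumsum(np.sort(p)[::-1])
    cq = np.cumsum(np.sort(q)[::-1])
    return np.all(cp <= cq + 1e-12)

p = np.array([0.4, 0.35, 0.25])    # p is majorized by q below
q = np.array([0.6, 0.3, 0.1])
assert is_majorized_by(p, q)
for a in [0.3, 1.0, 2.0, 5.0]:
    assert renyi_entropy(p, a) >= renyi_entropy(q, a)   # Schur-concavity
vals = [renyi_entropy(p, a) for a in [0.3, 1.0, 2.0, 5.0]]
assert all(x >= y for x, y in zip(vals, vals[1:]))      # decreasing in alpha
# extremes: a Kronecker delta gives 0, the uniform distribution gives ln N
print(renyi_entropy([1, 0, 0], 2.0), renyi_entropy(np.ones(3) / 3, 2.0), np.log(3))
```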

Finally, note that from the strict monotonicity of the function $h$, there exists a one-to-one mapping between two generalized entropies sharing the same functional $\phi$. For instance, the one-to-one mappings between the Rényi entropy $R_\alpha$ and the Tsallis entropy $T_\alpha$, for a given $\alpha \ne 1$, are

$$T_\alpha = \frac{e^{(1-\alpha) R_\alpha} - 1}{1-\alpha}$$

and

$$R_\alpha = \frac{\ln\left[1 + (1-\alpha)\, T_\alpha\right]}{1-\alpha}.$$
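These mappings are easy to verify numerically; the sketch below (our illustration, with generic helper names) checks them on a sample distribution directly from the definitions of the two entropies:

```python
import numpy as np

def renyi(p, a):
    return np.log(np.sum(np.asarray(p, float) ** a)) / (1.0 - a)

def tsallis(p, a):
    return (np.sum(np.asarray(p, float) ** a) - 1.0) / (1.0 - a)

p, a = [0.5, 0.3, 0.2], 2.0
R, T = renyi(p, a), tsallis(p, a)
# the two mappings displayed above, checked numerically:
assert np.isclose(T, (np.exp((1 - a) * R) - 1) / (1 - a))
assert np.isclose(R, np.log(1 + (1 - a) * T) / (1 - a))
print(R, T)
```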

2.2 Entropic uncertainty relations

Let $\mathcal{H}$ be an $N$-dimensional Hilbert space. A general quantum measurement is described by a positive operator-valued measure (POVM), that is, a set $\{E_i\}$ of Hermitian positive-semidefinite operators satisfying the completeness relation $\sum_i E_i = I$, where $I$ is the identity operator and the number of operators is the number of outcomes. For a given POVM and a quantum system described by a density operator $\rho$ (Hermitian, positive semidefinite, with unit trace) acting on $\mathcal{H}$, the probability of the $i$th outcome is equal to $p_i = \mathrm{Tr}(\rho\, E_i)$.
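As a small self-contained illustration of the rule $p_i = \mathrm{Tr}(\rho E_i)$, the following sketch builds a three-outcome qubit POVM (the “trine” POVM, an arbitrary choice of ours, not taken from the paper) and computes the outcome probabilities of a mixed state:

```python
import numpy as np

# Trine POVM on a qubit: E_i = (2/3)|phi_i><phi_i|, three states 120 degrees
# apart on the Bloch sphere (arbitrary illustrative choice).
povm = []
for theta in [0.0, 2 * np.pi / 3, 4 * np.pi / 3]:
    phi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    povm.append((2 / 3) * np.outer(phi, phi))
assert np.allclose(sum(povm), np.eye(2))                  # completeness

rho = np.array([[0.8, 0.1], [0.1, 0.2]])                  # a mixed state
probs = np.array([np.trace(rho @ E).real for E in povm])  # p_i = Tr(rho E_i)
print(probs, probs.sum())                                 # nonnegative, sums to 1
```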

In this contribution, we consider the $(h,\phi)$-entropies for the probability vectors

associated with the measurements of two POVMs, respectively.

The fact that the sum of $(h,\phi)$-entropies is lower bounded gives rise to an entropy-like formulation of the UP, that is, inequalities of the form

for any two pairs of entropic functionals, where the bound is nontrivial, i.e., nonzero, and universal in the sense of being independent of the state of the quantum system. In particular, dealing with the one-parameter family, we focus on the case where the function $f$ is the same for both entropies, but with an arbitrary pair of nonnegative entropic indices $(\alpha,\beta)$. The ultimate goal is to find the optimal bound, which by definition is obtained by minimization of the left-hand side, i.e.,

In the case of two nondegenerate quantum measurements, the optimal bound depends on the transformation matrix $T$ whose entries are given by

where $\{|a_i\rangle\}$ and $\{|b_j\rangle\}$ are eigenbases of the two observables. From the orthonormality of the bases, $T$ is unitary. A relevant characteristic of such a unitary matrix is its greatest-modulus element,

the so-called overlap $c$ between the eigenbases. From the unitarity of $T$, the overlap lies in the range $[1/\sqrt{N}\,;\,1]$. The case $c = 1/\sqrt{N}$ corresponds to complementary observables, meaning that maximum certainty in the measurement of one of them implies maximum ignorance about the other. In the opposite extreme case, $c = 1$ corresponds to observables sharing (at least) an eigenvector; this situation happens for example when the observables commute.
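Computationally, the overlap is just the greatest-modulus entry of $T$; a minimal sketch (ours) checks the two extreme cases on the Fourier matrix (complementary bases) and the identity (shared eigenbasis):

```python
import numpy as np

def overlap(T):
    """Greatest-modulus entry of the transformation matrix T."""
    return np.max(np.abs(T))

N = 4
k = np.arange(N)
F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)   # Fourier matrix
print(overlap(F), 1 / np.sqrt(N))   # complementary bases: c = 1/sqrt(N)
print(overlap(np.eye(N)))           # shared eigenbasis: c = 1
```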

In this nondegenerate context, finding the optimal bound depending on the transformation matrix is a difficult problem in general; a weaker problem is to restrict to bounds depending on the overlap $c$ instead of on the whole matrix $T$. Thus, the optimal $c$-dependent bound writes

We call it the $c$-optimal bound in order to distinguish it from what we call the $T$-optimal bound.

Similarly, in the general POVM framework, finding the optimal bound is a difficult task. In this context, a relevant characteristic of the pair of POVMs is the triplet of overlaps,

[in the nondegenerate case, the triplet is governed by the overlap $c$ defined above]. A weaker problem is again to restrict to bounds depending only on this triplet, the corresponding optimal bound being

with .

The study of entropic formulations to quantify the UP is not new and has been addressed in various contexts [5]. However, the problem of finding the optimal bounds posed above, whether depending on the full transformation or only on the overlaps, still remains open in many cases. Moreover, many available results correspond to Rényi or Tsallis entropies with conjugated indices (in the sense of Hölder: $\frac{1}{2\alpha} + \frac{1}{2\beta} = 1$), as they are based on the Riesz–Thorin theorem [65]; however, some results have recently been derived for nonconjugated indices in particular situations.

For the sake of later comparison we summarize existing bounds, dealing in particular with Rényi or Tsallis entropies, classified by the entropic measure used and the entropic indices involved. To fix notation, we define the following regions in the $(\alpha,\beta)$-plane:

$$\mathcal{C} = \left\{(\alpha,\beta): \tfrac{1}{2\alpha} + \tfrac{1}{2\beta} = 1\right\}, \qquad \underline{\mathcal{C}} = \left\{(\alpha,\beta): \tfrac{1}{2\alpha} + \tfrac{1}{2\beta} > 1\right\}, \qquad \overline{\mathcal{C}} = \left\{(\alpha,\beta): \tfrac{1}{2\alpha} + \tfrac{1}{2\beta} < 1\right\},$$

which are called the conjugacy curve and the regions “below” and “above” the conjugacy curve, respectively (see Figure 1).

Figure 1: The conjugacy curve $\mathcal{C}$ is represented by the solid line (the positive branch of the hyperbola $\frac{1}{2\alpha}+\frac{1}{2\beta}=1$), while the region $\underline{\mathcal{C}}$ “below” this curve is in dark gray, and the region $\overline{\mathcal{C}}$ “above” that curve is represented in light gray.

Results available in the literature comprise the following:

  • Shannon entropy:

    • Deutsch obtained the first bound in 1983 [5], which is given by $\mathcal{B}^{D}(c) = -2\ln\left(\frac{1+c}{2}\right)$.

    • MU improved the Deutsch bound by using the Riesz–Thorin theorem, in the context of pure states. Their bound is $\mathcal{B}^{MU}(c) = -2\ln c$, and it is not optimal except for complementary observables, that is, for $c = 1/\sqrt{N}$ (both bounds are compared numerically in the sketch closing this review).

    • de Vicente and Sánchez-Ruiz [26] improved the MU bound in a range of overlaps by using the Landau–Pollak inequality that links the maximal probabilities of both observables, in the context of pure states. This bound is not optimal, except for complementary observables (see also [23]) or for qubits ($N = 2$) [22].

    • Recently, Coles and Piani (CP) [44] improved the MU bound in the whole range of the overlap; indeed, they obtained a bound involving the second largest value among the moduli of the entries of the transformation matrix. Moreover, the authors obtained a stronger but implicit bound, and generalized their results to POVMs and bipartite scenarios (see also [45]).

  • Rényi entropies:

    • For conjugated indices, on the curve $\mathcal{C}$, the MU bound remains valid. Rastegin extended this result to the case of mixed states and generalized quantum measurements [66]. These works are mainly based on the Riesz–Thorin theorem. The bound is not tight, except for complementary observables [23].

    • For indices below the conjugacy curve, in $\underline{\mathcal{C}}$, the MU bound remains valid due to the decreasing property of the Rényi entropy with respect to the index. Here again, the bound is optimal for complementary observables [23].

    • For $\alpha = \beta \to \infty$, the Deutsch bound remains valid. This result is due to MU, who solved the minimization of the sum of min-entropies (infinite indices) subject to the Landau–Pollak inequality. Note that the Deutsch bound is valid in the whole positive quadrant (but it is not optimal) due to the decreasing property of the Rényi entropy with respect to the index.

    • For equal indices $\alpha = \beta \ge 1$, Puchała, Rudnicki and Życzkowski (PRZ) recently derived in Ref. [41] a series of bounds depending on the transformation matrix $T$ by using majorization techniques. We denote by $\mathcal{B}^{PRZ}$ the greatest of those bounds; it is not $T$-optimal, although it improves previous ones in several situations. A particular bound of the series (the worst one) depends only on the overlap $c$, but it is not $c$-optimal. Further extensions of this work to mixed states and generalized quantum measurements are given by Friedland [42].

    • For $\alpha, \beta \le 1$, the CP bounds remain valid due to the decreasing property of the Rényi entropy with respect to the index.

    • For $N = 2$ and a range of indices, we recently derived the $T$-optimal bound. It depends only on the overlap, so that it is $c$-optimal as well [40]. Note that this equality is trivial since the overlap alone parametrizes all the moduli $|t_{ij}|$, and in this case the phases play no role (due to the symmetry of the Bloch sphere or from the decomposition of a single-qubit transformation [40]). Numerical solutions have been found in the whole quadrant, and we have been able to derive analytical expressions in some regions. In addition, the states that correspond to the bound were obtained, in terms of the whole matrix $T$.

  • Tsallis entropies:

    • For equal indices and pure states, the inequality

      has been derived in Ref. [20]. This relation can be viewed as a consequence of the fact that the sum of Rényi entropies with equal indices is lower bounded by the Deutsch bound, together with the relation linking the Rényi and Tsallis entropies. This bound has been refined starting from the MU inequality on the conjugacy curve, using the decreasing property of the Rényi entropy with respect to the index together with the same relation.

    • For conjugated indices, following recent works of Rastegin [35], one can obtain the inequality

      (here $\mathrm{id}$ stands for the identity function; $f = \mathrm{id} - 1$ corresponds to the Tsallis case).

    • For indices below the conjugacy curve, this bound remains valid due to the decreasing property of the Tsallis entropy with respect to the entropic index.

    • For $\alpha, \beta \le 1$, the MU, Deutsch and CP bounds remain valid due to the decreasing property of the Tsallis entropy with respect to the index.

    One can find in the literature many bounds improving the above-mentioned ones in special contexts (particular overlaps and/or particular pairs of indices). We refer the interested reader to [21]. For the sake of completeness of this short review, it is worth mentioning that there is a new insight into entropic uncertainty relations that allows the observer to have access to a quantum memory [67]. Also, there exist entropic formulations of the UP for more than two measurements (in particular, for mutually unbiased bases) [70] and for observables with continuous spectra [23]. These topics have many applications in different issues of quantum information, such as entanglement detection, proofs of security of quantum cryptographic protocols, and others [75]. Such studies go beyond the scope of the present paper.

    Finally, it can be shown that some bounds and relations discussed above can be expressed in terms of the generalized entropies of the one-parameter family (with a common function $f$ for both entropies, but any pair of entropic indices):

  • Entropies of the one-parameter family:

    • For conjugated indices, with the additional condition that the involved function is increasing, following the same approach as that of Rastegin in Ref. [35] and using the decreasing property of the entropy with respect to the index, one can prove the relation

      which includes as particular cases the results of MU and of Rastegin.

    • For indices greater than or equal to one: since the entropy is Schur-concave, Corollary 2 of Ref. [41] allows us to derive a $T$-dependent bound for the entropy of the Kronecker product of the two probability vectors. If the entropy satisfies $H[p \otimes q] \le H[p] + H[q]$ for this class, then the sum of entropies inherits the bound. Applying the results of PRZ to the right-hand side, we obtain a bound for the sum of entropies. Rényi and Tsallis entropies with entropic index greater than or equal to one are particular cases.

    • For indices lower than or equal to one: from the Schur-concavity of the entropy we have again a $T$-dependent bound for $H[p \otimes q]$. Now, if $H[p \otimes q] \le H[p] + H[q]$ holds for this class (notice that the Tsallis entropy does not fulfill this property in this case), PRZ results applied to the right-hand side again allow us to obtain a bound for the sum of this class of entropies. Rényi entropies with entropic index lower than or equal to one are particular cases.

    • For $\alpha, \beta \le 1$, the MU, Deutsch and CP bounds remain valid due to the decreasing property of the entropy with respect to the index.
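As announced, the two classical Shannon-entropy bounds recalled above admit a one-line comparison; this minimal sketch (ours) uses the standard expressions $\mathcal{B}^D(c) = -2\ln\frac{1+c}{2}$ and $\mathcal{B}^{MU}(c) = -2\ln c$ and confirms that the MU bound dominates the Deutsch one over the whole range of the overlap:

```python
import numpy as np

def deutsch_bound(c):
    """Deutsch bound B^D(c) = -2 ln((1 + c)/2) for the Shannon entropy sum."""
    return -2 * np.log((1 + c) / 2)

def mu_bound(c):
    """Maassen-Uffink bound B^MU(c) = -2 ln c."""
    return -2 * np.log(c)

N = 10
c = np.linspace(1 / np.sqrt(N), 1, 200)
assert np.all(mu_bound(c) >= deutsch_bound(c) - 1e-12)  # MU improves Deutsch
print(mu_bound(c[0]), deutsch_bound(c[0]))              # largest at c = 1/sqrt(N)
```

Both bounds vanish at $c = 1$ (compatible observables) and are largest for complementary observables.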

3 Generalized entropic uncertainty relations

We extend the results summarized in the preceding section to POVM pairs and to generalized entropies with arbitrary pairs of entropic functionals. Our approach follows that of de Vicente and Sánchez-Ruiz [26], except that here the concomitant optimization problem is mainly solved in a geometrical way. This allows us to generalize the results to arbitrary entropic functionals. Moreover, we use the fact that the Landau–Pollak inequality applies to POVM pairs and to both pure and mixed states [81] to argue that our results include these situations.

Our major results are given by the following Proposition and its three Corollaries:

See Appendix A.

For the sake of simplicity, when dealing with entropies of the one-parameter family (with the same function $f$ for both observables), the bound is simply denoted

Let us note the following facts:

  • is explicitly independent of .

  • Previous results in the literature, in particular that of de Vicente and Sánchez-Ruiz [26], are extended here from Shannon to more general $(h,\phi)$-entropies, the former being recovered as a particular case. Moreover, our result applies in the POVM framework and for both pure and mixed states.

  • For Tsallis entropies, it is straightforward to obtain relations of the type

    that improve and generalize the findings in [20] and are valid for any positive entropic index.

Note that, except in particular cases, the bound is implicit. This is also the case for several bounds in the literature [26]. But, as in [26], the problem is shown to reduce to an optimization over one parameter on a bounded interval, instead of over the whole set of probability components. Notice that from the monotonicity properties established in Appendix A, an explicit lower bound can be obtained:

Note however that this analytic bound is weaker, and that in some cases it turns out to be trivial.
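For concreteness, here is a numerical sketch of the two-step strategy in the Shannon case (our illustration, not the paper's general algorithm). It assumes the standard form of the Landau–Pollak inequality, $\arccos\sqrt{P} + \arccos\sqrt{Q} \ge \arccos c$ for the two maximal probabilities, and our reading of the pure-point structure of Appendix A for the inner minimal entropy:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def shannon(p):
    p = np.asarray(p, float)
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

def h_min(P):
    """Minimal Shannon entropy given that the largest probability equals P;
    attained, in our reading of the pure points of Appendix A, at the
    vector (P, ..., P, 1 - floor(1/P) * P)."""
    k = int(np.floor(1 / P))
    return shannon([P] * k + [1 - k * P])

def bound(c):
    """One-parameter minimization of H_min(P) + H_min(Q), with Q taken as
    large as the Landau-Pollak inequality allows (standard form assumed):
    arccos(sqrt(P)) + arccos(sqrt(Q)) >= arccos(c)."""
    def objective(P):
        theta = np.arccos(c) - np.arccos(np.sqrt(P))
        Q = np.cos(max(theta, 0.0)) ** 2
        return h_min(P) + h_min(Q)
    return minimize_scalar(objective, bounds=(c**2, 1.0), method='bounded').fun

for c in [0.75, 0.9]:
    print(c, bound(c), -2 * np.log(c))
```

For $c = 0.9$ this gives a value noticeably above $-2\ln c$, consistent with the improvement over the MU bound reported below for large overlaps.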

Finally, it is to be noticed that the bound is in general not optimal. Indeed, our method for solving the minimization problem first treats separately the contribution of each observable to the entropy sum, and only in a second step is the link between the observables taken into account, through the Landau–Pollak inequality. In some specific cases, this relative weakness disappears, as we see now.

Hereafter, we consider the case of nondegenerate quantum observables. In this case the triplet of overlaps is governed by the single overlap $c$, and the bound reduces to

with .

As already mentioned, the bound is in general not $c$-optimal. However, it can be shown that this bound does turn out to be optimal for some particular values of the overlap. This is summarized in the following corollary:

See Appendix B.

We suspect that this corollary is also valid when , but we have not been able to prove it yet.

A consequence of the corollary is that, in the corresponding range of the overlap, the bound reduces to the qubit case and improves all $c$-dependent bounds such as those of MU [6] or Rastegin [35] in the context of entropies of the one-parameter family. In particular, since the MU and Rastegin bounds take the same form regardless of the dimension, the improvement holds for any $N$ and any pair of indices. Moreover, it is shown in [40] that, for a certain range of entropic indices and in the context of Rényi entropies, this $c$-optimal bound takes an analytical expression.

Now, we particularize the Proposition to the case of Rényi entropies [setting $f = \log$ in the one-parameter family], which are the most used in the literature on EURs, and compare our bound with previous ones, as detailed in the following two corollaries:

See Appendix C.

This result is particularly interesting above the conjugacy curve, in $\overline{\mathcal{C}}$, where the only explicitly known $c$-dependent bound for Rényi entropies is precisely the Deutsch one.

It is known that the sum of Rényi entropies below the conjugacy curve, in $\underline{\mathcal{C}}$, is lower bounded by the MU result. For large overlaps we were able to improve this bound, but for smaller ones it is not always the case. Indeed, we have:

See Appendix D.

To the best of our knowledge, for overlaps below $1/\sqrt{2}$, the MU result is the tightest $c$-dependent bound in this regime.

4 Comparison with previously known bounds

4.1 Maassen–Uffink, Rastegin and Coles–Piani bounds

We now compare our bound with previously known ones in the nondegenerate context, for Rényi and Tsallis entropies with indices on or below the conjugacy curve, or within $[0\,;\,1]^2$. Relative differences are shown through density plots in Figures 2 to 5, for chosen typical values of the overlap. Positivity of these differences indicates that our bound improves the previous one.

In Figure 2 we plot the relative difference between our bound and the MU one, for entropic indices on and below the conjugacy curve, $(\alpha,\beta) \in \mathcal{C} \cup \underline{\mathcal{C}}$. We observe the following behavior of our bound with respect to the MU result:

  • Up to a first threshold of the overlap ($c = 0.5$ is shown), the relative difference is negative or zero, so our bound does not improve the MU one (Corollary ?).

  • When $c$ is between that threshold and $1/\sqrt{2}$ ($c = 0.706$ is shown), the relative difference is positive or negative (although very small), so our bound improves the MU one in some regions of the $(\alpha,\beta)$-plane. This region is delimited by the white line: the improvement takes place below this curve; we observe that the region of improvement increases with the overlap.

  • When $c$ exceeds $1/\sqrt{2}$ ($c = 0.708$ and $0.9$ are shown), the relative difference is positive, so our bound improves the MU one (Corollary ?); the improvement significantly increases with the overlap.

Figure 2: Rényi entropy case: density plots of $\frac{\mathcal{B}_{\alpha,\beta;\log}(c) - \mathcal{B}^{MU}(c)}{\mathcal{B}_{\alpha,\beta;\log}(c)}$, for $(\alpha,\beta) \in \mathcal{C} \cup \underline{\mathcal{C}}$ when $c = 0.5$, $0.706$, $0.708$ and $0.9$.

In Figure 3 we plot the analogous relative difference with respect to the Rastegin bound, for entropic indices on and below the conjugacy curve, $(\alpha,\beta) \in \mathcal{C} \cup \underline{\mathcal{C}}$. We observe the following behavior with respect to the Rastegin results:

  • Up to $1/\sqrt{2}$ ($c = 0.5$ and $0.6$ are shown), the relative difference is positive or negative, so our bound improves the Rastegin one in some regions of the $(\alpha,\beta)$-plane. The regions where an improvement occurs are outside the domain marked by the black line. These regions always exist (even for the smallest overlaps) and increase with the overlap.

  • When $c$ exceeds $1/\sqrt{2}$ ($c = 0.708$ and $0.9$ are shown), the relative difference is positive, so our bound improves the Rastegin one (Corollary ?), and the improvement increases significantly with the overlap.

Figure 3: Tsallis entropy case: density plots of $\frac{\mathcal{B}_{\alpha,\beta;\mathrm{id}-1}(c) - \mathcal{B}^{R}_{\alpha,\beta;\mathrm{id}-1}(c)}{\mathcal{B}_{\alpha,\beta;\mathrm{id}-1}(c)}$, for $(\alpha,\beta) \in \mathcal{C} \cup \underline{\mathcal{C}}$ when $c = 0.5$, $0.6$, $0.708$ and $0.9$.

In Figures 4 and 5 we plot the analogous relative differences with respect to the best CP bound, for $N = 3$ and $N = 10$, respectively, with the second largest entry modulus taken as small as possible; the entropic indices are $(\alpha,\beta) \in [0\,;\,1]^2$. We observe the following behavior with respect to the Coles–Piani results:

  • For any value of the overlap, the relative difference can be positive or negative, so our bound improves the Coles–Piani one in some regions of the $(\alpha,\beta)$-plane. The regions where an improvement occurs are below the domain marked by the solid line in Figures 4 and 5. These regions generally exist, and their extension grows with the overlap.

  • When $N$ increases, the domain of improvement is smaller. Recall however that the best possible CP bound is plotted here.

Figure 4: Rényi and Tsallis entropy cases for $N = 3$: density plots of $\frac{\mathcal{B}_{\alpha,\beta;f}(c) - \mathcal{B}^{CP^*}(c)}{\mathcal{B}_{\alpha,\beta;f}(c)}$, for $(\alpha,\beta) \in [0\,;\,1]^2$ when $c = 0.6$ and $0.9$.

Figure 5: Same as Figure 4 for $N = 10$.

4.2 Bounds for powers of a circular permutation matrix in the line $\beta = \alpha$

An illustrative example to consider for the evaluation of generalized EURs is given in Ref. [41], where a special class of transformation matrices is used. Indeed, the quantum observables here are such that the transformation between their eigenbases is a power $s$ of a circular $N$-dimensional permutation matrix. We compute our bound in these cases for some chosen equal entropic indices, and we compare our results with the bounds of PRZ, MU and Deutsch in the case of Rényi entropies (Fig. ?), and with the bounds of Rastegin, CP and PRZ in the case of Tsallis entropies (Fig. ?). In this particular example, the second largest entry modulus can be analytically determined, allowing for an analytic expression for both CP bounds. It appears that, whatever $s$, both CP bounds coincide, and that they coincide with the MU bound.
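The precise construction of these matrices did not survive extraction here; assuming that the fractional power of the cyclic shift is taken through its Fourier diagonalization (our reading of the setup, to be checked against Ref. [41]), the overlap $c(s)$ can be computed as follows:

```python
import numpy as np

def shift_power(N, s):
    """Fractional power P^s of the cyclic N x N permutation matrix, computed
    through its Fourier diagonalization (our assumed construction)."""
    k = np.arange(N)
    F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)
    return F @ np.diag(np.exp(2j * np.pi * k * s / N)) @ F.conj().T

N = 3
for s in [0.0, 0.25, 0.5, 0.75, 1.0]:
    T = shift_power(N, s)
    assert np.allclose(T @ T.conj().T, np.eye(N))   # unitary for every s
    print(s, np.max(np.abs(T)))   # overlap c(s): 1 at integer s, dips between
```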

In Fig. ? we plot the bounds for the Rényi entropic formulation of the UP, in terms of the power $s$ in the transformation matrix, for $\alpha = 0.8$ and $1.4$. The overlap $c$ corresponding to the transformation is also shown in the figure. We observe that:

  • For $\alpha = 0.8$, our bound improves both the PRZ and MU ones for a wide range of values of $s$. The fact that our bound can be lower than that of PRZ does not contradict Corollary ?. Indeed, the PRZ bound is $T$-dependent and is evaluated here for a particular $T$; it is not the minimum over all $T$ for a given $c$.

  • For $\alpha = 1.4$, our bound improves the Deutsch result (Corollary ?) as well as the PRZ one for all $s$.

Rényi entropy case: bounds $\mathcal{B} \equiv \mathcal{B}_{\alpha,\alpha;\log}(c)$ (solid line), $\mathcal{B}^{PRZ} \equiv \mathcal{B}^{PRZ}_{\alpha;\log}(T)$ (dashed-dotted line), $\mathcal{B}^{MU} \equiv \mathcal{B}^{MU}(c)$ (left plot, dashed line) and $\mathcal{B}^{D} \equiv \mathcal{B}^{D}(c)$ (right plot, dashed line), in terms of the power $s$ in the transformation matrix for $\alpha = 0.8$ and $1.4$. In addition, we plot the overlap $c$ in terms of $s$ (dotted line).

In Fig. ? we plot the bounds for the Tsallis entropic formulation of the UP, in terms of the power $s$ in the transformation matrix, for $\alpha = 0.8$ and $1.4$. We observe that:

  • For $\alpha = 0.8$, our bound improves both the Coles–Piani and Rastegin ones in a wide range of values of $s$.

  • For $\alpha = 1.4$, our bound improves the PRZ one for all $s$.

Tsallis entropy case: bounds $\mathcal{B}_{\alpha,\alpha;\mathrm{id}-1}(c)$ (solid line), $\mathcal{B}^{R}_{\alpha,\alpha;\mathrm{id}-1}(c)$ (left plot, dashed line), $\mathcal{B}^{CP}(T)$ (left plot, dotted line below that of $\mathcal{B}^{R}_{\alpha,\alpha;\mathrm{id}-1}(c)$), $\mathcal{B}^{PRZ}_{\alpha;\mathrm{id}-1}(T)$ (right plot, dashed-dotted line), in terms of the power $s$ in the transformation matrix for $\alpha = 0.8$ and $1.4$. In addition, we plot the overlap $c$ in terms of $s$ (dotted line).

4.3 Bounds for randomly drawn unitary matrices in the line $\beta = \alpha$

As a further example, we randomly generate unitary matrices sampled according to the Haar (uniform) distribution [83]. We compute our bound in these cases for some chosen equal entropic indices, and we compare our results with the bounds of PRZ, MU and Deutsch in the case of Rényi entropies (Fig. 6), with the bounds of Rastegin and PRZ in the case of Tsallis entropies (Fig. 7), and with the CP bound in both cases (Fig. 8).
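A standard recipe for Haar sampling (Mezzadri's QR-based method, our implementation) together with the resulting spread of overlaps for $N = 3$:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(N, rng):
    """Haar-distributed unitary: QR of a complex Ginibre matrix, with the
    phases of the diagonal of R fixed (Mezzadri's recipe)."""
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

overlaps = [np.max(np.abs(haar_unitary(3, rng))) for _ in range(1000)]
print(min(overlaps), max(overlaps))   # overlaps spread over (1/sqrt(3), 1)
```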

In Figure 6 we plot the bounds for the Rényi entropic formulation of the UP, in terms of the overlap $c$, for $\alpha = 0.2$, $0.8$ and $1.4$. We observe that:

  • For $\alpha = 0.2$, our bound improves the MU one in the whole range of the overlap. We find transformation matrices such that our bound improves the PRZ one, although with a low frequency of occurrence.

  • For $\alpha = 0.8$, our bound improves the MU one for large enough overlaps (Corollary ?). We find transformation matrices such that our bound improves the PRZ one, with a frequency higher than for $\alpha = 0.2$ and increasing with $c$ as well.

  • For $\alpha = 1.4$, our bound improves the Deutsch one in the whole range of the overlap (Corollary ?). Again, we find transformation matrices such that our bound improves the PRZ one, with a frequency higher than for $\alpha = 0.8$ and increasing with $c$ as well.

Figure 6: Rényi entropy case: bounds $\mathcal{B}_{\alpha,\alpha;\log}(c)$ (solid line), $\mathcal{B}^{MU}(c)$ (dashed line, left and middle plots), $\mathcal{B}^{PRZ}_{\alpha;\log}(T)$ (dots), and $\mathcal{B}^{D}(c)$ (dashed line, right plot), in terms of the overlap $c$ for $\alpha = 0.2$, $0.8$ and $1.4$.

In Figure 7 we plot the bounds for the Tsallis entropic formulation of the UP, in terms of the overlap $c$, for $\alpha = 1$, $1.5$ and $2$. We observe that:

  • For $\alpha = 1$, our bound improves the Rastegin one for large enough overlaps (Corollary ?). We find transformation matrices such that our bound improves the PRZ one, with a relatively high frequency of occurrence.

  • For $\alpha = 1.5$, we find transformation matrices such that our bound improves the PRZ one in a wider range of the overlap and with a higher frequency than for $\alpha = 1$.

  • For $\alpha = 2$, for all the sampled matrices we find that our bound improves the PRZ one in the whole range of the overlap.

Figure 7: Tsallis entropy case: bounds $\mathcal{B}_{\alpha,\alpha;\mathrm{id}-1}(c)$ (solid line), $\mathcal{B}^{R}_{\alpha,\alpha;\mathrm{id}-1}(c)$ (dashed line, left plot), and $\mathcal{B}^{PRZ}_{\alpha;\log}(T)$ (dots), in terms of the overlap $c$ for $\alpha = 1$, $1.5$ and $2$.

In Figure 8 we plot our bound, the MU or Rastegin bound, and the CP bound $\mathcal{B}^{\overline{CP}}(T)$, for both the Rényi and Tsallis entropic formulations of the UP, in terms of the overlap $c$, for $\alpha = 0.5$ and $1$. We observe that:

  • For any $\alpha$, our bound improves $\mathcal{B}^{\overline{CP}}(T)$ in a wide range of the overlap $c$.

  • In the Tsallis context, for $\alpha = 0.5$, for all the sampled matrices we find an improvement of $\mathcal{B}^{\overline{CP}}(T)$ in the whole range of the overlap. We observe that the range of values of $c$ for which an improvement of the CP bound occurs decreases with $\alpha$.

Figure 8: Rényi, Tsallis and Shannon entropy cases: bounds $\mathcal{B}_{\alpha,\alpha;f}(c)$ (solid line), $\mathcal{B}^{MU}(c)$ or $\mathcal{B}^{R}_{\alpha,\alpha;\mathrm{id}-1}$ (dashed line), $\mathcal{B}^{\overline{CP}}(T)$ (dots), in terms of the overlap $c$ for $\alpha = 0.5$ (Rényi and Tsallis) and $1$ (Shannon).

We notice that, as the MU, Deutsch, Rastegin and our bounds depend only on the overlap $c$, the same relative behaviors remain valid for dimensions higher than 3. In contrast, that may not be the case for the relation between the CP, PRZ and our bounds, since the former depend on the whole transformation matrix $T$; indeed, we expect an increase of the predominance of the PRZ and CP bounds over other $c$-dependent bounds. However, our bound is easier to calculate than, for instance, the PRZ one, whose computational complexity increases combinatorially with the dimension of the matrix $T$.

5 Concluding remarks

In this contribution we provide a general entropy-like formulation of the uncertainty principle, for any pair of POVMs in the case of pure or mixed states in finite dimensions. The sum of generalized $(h,\phi)$-entropies associated with two POVMs is proposed as a measure of joint uncertainty, and lower bounds for that sum are searched for in terms of the overlaps between the POVMs, which in a sense quantify the degree of incompatibility of the observables. Our main result is summarized in the Proposition of Section 3, where we give an overlap-dependent lower bound for the entropy sum, leading to a family of entropic uncertainty relations. To obtain this, we follow the same approach as de Vicente and Sánchez-Ruiz, appealing to the Landau–Pollak inequality, and we solve the concomitant constrained minimization problem, mainly in a geometrical manner. In this way, the calculation of the bound reduces to the resolution of a straightforward one-dimensional minimization problem.

Our uncertainty relation generalizes previous similar results in several ways, namely, it is valid for:

  • Salicrú generalized entropic forms [including Rényi and Tsallis entropies, which are obtained for $f = \log$ and $f = \mathrm{id} - 1$, respectively],

  • any choice for the pair of entropic functionals (overcoming the limitation due to the Riesz–Thorin theorem, which involves conjugated pairs of indices when dealing with the one-parameter family with the same $f$, as mainly used in the related literature),

  • any pair of positive operator-valued measures, and

  • both pure and mixed states (which is proved without recourse to the concavity property that, for instance, the Rényi entropy does not fulfill in general).

Besides, we show that, in the case of nondegenerate quantum observables with overlap $c$, the bound reduces to a one-dimensional minimization problem. Moreover, for values of the overlap greater than $1/\sqrt{2}$, our bound is $c$-optimal and it reduces to that of the qubit ($N = 2$) case (Corollary ?). In other words, we improve all $c$-dependent bounds in that range of the overlap.

In addition, we go further in the case of Rényi entropies and find that our bound improves the Deutsch one in the whole range of values of the overlap (Corollary ?), and also that our bound does not improve the Maassen–Uffink one for values of the overlap lower than or equal to $1/\sqrt{2}$ (Corollary ?). The former result is particularly interesting for entropic indices above the conjugacy curve where, to our knowledge, the Deutsch bound is the only known one with an analytic expression; the latter result establishes that restricting the domain by the Landau–Pollak inequality leads to a result weaker than using the Riesz–Thorin theorem.

Finally, in Section 4, we provide several examples that exhibit an improvement with respect to known results in the literature, in the cases of Rényi and Tsallis entropies.

The extension of our approach to take into account quantum memory and for more than two POVMs is currently under investigation.

Acknowledgments

SZ and MP are very grateful to the Région Rhône-Alpes (France) for the grants that enabled this work. MP and GMB also acknowledge financial support from CONICET (Argentina), and warm hospitality during their stays at GIPSA-Lab. The authors thank Prof. J.-F. Bercher for useful discussions about the class of Salicrú entropies. The authors acknowledge the anonymous referees for helpful comments.


A Proof of the Proposition

Our aim is, given the probability vectors associated with the two POVMs, to minimize the sum of $(h,\phi)$-entropies subject to the Landau–Pollak inequality. In this way, our method follows and advances on that of de Vicente and Sánchez-Ruiz [26], and consists of two steps:

  1. Minimization of each entropy subject to a fixed maximal probability. At this step, the two sets of probabilities are treated separately. Thus, denoting the minimal entropy accordingly, we arrive at an inequality whose right-hand side depends only on the two maximal probabilities.

  2. Minimization of the resulting right-hand side subject to the Landau–Pollak inequality.

A.1 First step: minimization of the $(h,\phi)$-entropy subject to a given maximum probability

This problem involves looking for the vectors (in the set of probability vectors of $\mathbb{R}^N$) that minimize a given $(h,\phi)$-entropy under the constraint that the maximum probability $P$ is given, i.e., we search for

Notice that, due to the normalization constraint, one necessarily has $P \ge 1/N$.

Note also that in the case $P = 1/N$, all the components are equal to $1/N$ (uniform distribution) and thus the problem becomes trivial.

Using the fact that the function to be optimized is invariant under permutations of the probability components, we can reduce the dimensionality of the problem in the following way: let us fix the maximal component to $P$ and gather the remaining $N-1$ components in a vector $q$; then, solving the optimization problem is equivalent to searching for

where we define

and we denote by $\mathcal{PT}_{\!P}$ the allowed domain for $q$, i.e.,

with $[0\,;\,P]^{N-1}$ denoting an $(N-1)$-dimensional closed hypercube, and the constraint $\sum_i q_i = 1 - P$ corresponding to an $(N-2)$-dimensional hyperplane perpendicular to the vector $(1,\dots,1)$. Notice that the point $\left(\frac{1-P}{N-1}, \dots, \frac{1-P}{N-1}\right)$ is both inside the hypercube and on the hyperplane, which guarantees that the intersection of those sets is not empty.

It can be seen that $\mathcal{PT}_{\!P}$ is a convex polytope [87]; in other words, it is a convex body, the convex hull of its vertices, which are the pure points of this convex set (i.e., the points that cannot be written as convex combinations of other points of the set) [88].

Next, since $\phi$ is a strictly concave (resp. convex) function, the reduced objective is also concave (resp. convex) on the polytope $\mathcal{PT}_{\!P}$. It turns out that it achieves its minimum (resp. maximum) only at one or several of the extreme points (or pure points) of $\mathcal{PT}_{\!P}$ [62]. The problem then consists in determining the set of pure points of $\mathcal{PT}_{\!P}$. Before studying the case of arbitrary $N$, let us illustrate what happens in the cases $N = 2$ and $N = 3$ (the case $N = 2$ is trivial since $\mathcal{PT}_{\!P}$ reduces to the point $1 - P$, and the extremal probability vector is $(P, 1-P)$, where $P$ should be between $1/2$ and 1).

Case $N = 3$.

Two different situations arise for the intersection of the line $q_1 + q_2 = 1 - P$ with the square $[0\,;\,P]^2$, depending on $P$:

  • For $P \ge 1/2$, the line intersects the square in its “lower corner” or, in other words, the restriction of the line to the first quadrant is entirely inside the square: the domain $\mathcal{PT}_{\!P}$ is the whole segment between the points $(1-P, 0)$ and $(0, 1-P)$ [see Figure 9 (left plot)]. These are the pure points, and both lead to the same extremal value of the objective.

  • For $P < 1/2$, the intersection of the line with the square reduces to the segment linking the points $(P, 1-2P)$ and $(1-2P, P)$, which are then the pure points of $\mathcal{PT}_{\!P}$ [see Figure 9 (right plot)]. Both points lead to the same extremal value of the objective.

Notice that the pure points are on the edges of the square.

Figure 9: Domain $\mathcal{PT}_{\!P}$ (line in bold) in the case $N = 3$, for $P = 0.6$ and $0.4$ (from left to right). It is the intersection between the line $q_1 + q_2 = 1 - P$ and the square $[0\,;\,P]^2$. The pure points of $\mathcal{PT}_{\!P}$ are given by the dots.
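The pure points just described can be confirmed by brute force; the following sketch (ours) scans the segment for the two values of $P$ used in Figure 9 and checks that the minimum of the Shannon entropy is attained at the described vertices:

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, float)
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

def brute_min(P, m=2001):
    """Scan the segment q1 + q2 = 1 - P, 0 <= q1, q2 <= P (case N = 3)."""
    q1 = np.linspace(max(0.0, 1 - 2 * P), min(P, 1 - P), m)
    return min(shannon([P, q, 1 - P - q]) for q in q1)

for P in [0.6, 0.4]:
    vertex = [P, min(P, 1 - P), max(1 - 2 * P, 0.0)]   # pure point described above
    print(P, np.isclose(brute_min(P), shannon(vertex)))
```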

Case