Concavity of entropy under thinning

Yaming Yu
Department of Statistics
University of California
Irvine, CA 92697, USA

Oliver Johnson
Department of Mathematics
University of Bristol
University Walk, Bristol, BS8 1TW, UK

Building on the recent work of Johnson (2007) and Yu (2008), we prove that entropy is a concave function with respect to the thinning operation T_α. That is, if X and Y are independent random variables on Z_+ with ultra-log-concave probability mass functions, then

H(T_α X + T_{1−α} Y) ≥ α H(X) + (1 − α) H(Y),  0 ≤ α ≤ 1,

where H denotes the discrete entropy. This is a discrete analogue of the inequality (h denotes the differential entropy)

h(√α X + √(1 − α) Y) ≥ α h(X) + (1 − α) h(Y),  0 ≤ α ≤ 1,

which holds for continuous X and Y with finite variances and is equivalent to Shannon's entropy power inequality. As a consequence we establish a special case of a conjecture of Shepp and Olkin (1981). Possible extensions are also discussed.

Keywords: binomial thinning; convolution; entropy power inequality; Poisson distribution; ultra-log-concavity.

I Introduction

This paper considers information-theoretic properties of the thinning map, an operation on the space of discrete random variables, based on random summation.

Definition 1 (Rényi, [10])

For a discrete random variable X on Z_+ = {0, 1, 2, …}, the thinning operation T_α, α ∈ [0, 1], is defined by

T_α X = Σ_{i=1}^{X} B_i,

where B_1, B_2, … are (i) independent of each other and of X and (ii) identically distributed Bernoulli(α) random variables, i.e., Pr(B_i = 1) = 1 − Pr(B_i = 0) = α for each i.

Equivalently, if the probability mass function (pmf) of X is f, then the pmf of T_α X is

(T_α f)(x) = Σ_{y=x}^{∞} f(y) b_{y,α}(x),  x ∈ Z_+,

where b_{n,α}(x) = (n choose x) α^x (1 − α)^{n−x} is the binomial pmf. (Note that we write T_α both for the map acting on the pmf and for the map acting on the random variable.)
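As a quick numerical illustration (a sketch of our own, not from the paper; `thin` and `thin_by_sampling` are hypothetical helper names), the pmf formula above can be checked against the random-summation definition by simulation:

```python
# Sketch: two equivalent views of the thinning operation T_alpha.
from math import comb
import random

def thin(f, alpha):
    """Pmf of T_alpha X via (T_alpha f)(x) = sum_{y>=x} f(y) * C(y,x) alpha^x (1-alpha)^(y-x)."""
    n = len(f)
    return [sum(f[y] * comb(y, x) * alpha**x * (1 - alpha)**(y - x)
                for y in range(x, n))
            for x in range(n)]

def thin_by_sampling(f, alpha, trials=100_000, seed=0):
    """Monte Carlo version of Definition 1: draw X ~ f, then sum X independent Bernoulli(alpha)."""
    rng = random.Random(seed)
    support = list(range(len(f)))
    counts = [0] * len(f)
    for _ in range(trials):
        x = rng.choices(support, weights=f)[0]
        counts[sum(rng.random() < alpha for _ in range(x))] += 1
    return [c / trials for c in counts]

f = [0.2, 0.5, 0.3]              # an arbitrary pmf on {0, 1, 2}
exact = thin(f, 0.4)
approx = thin_by_sampling(f, 0.4)
```

Note that thinning scales the mean by α: here the mean drops from 1.1 to 0.4 × 1.1.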

We briefly mention other notation used in this paper. We use Po(λ) to denote the Poisson distribution with mean λ, i.e., the pmf is π_λ(x) = e^{−λ} λ^x / x!, x ∈ Z_+. The entropy of a discrete random variable X with pmf f is defined as

H(X) = H(f) = −Σ_{x=0}^{∞} f(x) log f(x),

and the relative entropy between X (with pmf f) and Y (with pmf g) is defined as

D(X ‖ Y) = D(f ‖ g) = Σ_{x=0}^{∞} f(x) log( f(x) / g(x) ).

For convenience we write D(f) = D(f ‖ π_λ), where λ = Σ_x x f(x) is the mean of f.

The thinning operation is intimately associated with the Poisson distribution and Poisson convergence theorems. It plays a significant role in the derivation of a maximum entropy property for the Poisson distribution (Johnson [7]). Recently there has been evidence that, in a number of problems related to information theory, the operation T_α is the discrete counterpart of the operation of scaling a random variable by α; see [5, 6, 7, 14]. Since scaling arguments can give simple proofs of results such as the entropy power inequality, we believe that improved understanding of the thinning operation could lead to discrete analogues of such results.

For example, thinning lies at the heart of the following result (see [5, 6, 14]), which is a Poisson limit theorem with an information-theoretic interpretation.

Theorem 1 (Law of Thin Numbers)

Let f be a pmf on Z_+ with mean λ. Denote by f^{∗n} the n-th convolution of f, i.e., the pmf of X_1 + ⋯ + X_n where X_1, …, X_n are independent and identically distributed (i.i.d.) with pmf f. Then

  1. T_{1/n} f^{∗n} converges point-wise to π_λ as n → ∞;

  2. H(T_{1/n} f^{∗n}) tends to H(Po(λ)) as n → ∞;

  3. as n → ∞, D(T_{1/n} f^{∗n}) monotonically decreases to zero, if it is ever finite;

  4. if f is ultra-log-concave, then H(T_{1/n} f^{∗n}) increases in n.

For Part (4), we recall that a random variable X on Z_+ is called ultra-log-concave, or ULC, if its pmf f is such that the sequence {x! f(x)} is log-concave. Examples of ULC random variables include the binomial and the Poisson. In general, a sum of independent (but not necessarily identically distributed) Bernoulli random variables is ULC. Informally, a ULC random variable is less “spread out” than a Poisson with the same mean. Note that in Part (4) the ULC assumption is natural since, among ULC distributions with a fixed mean, the Poisson achieves maximum entropy ([7, 14]).
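The factorial characterization of ultra-log-concavity is easy to test numerically. A small sketch, with our own hypothetical helper `is_ulc`:

```python
from math import comb, exp, factorial

def is_ulc(f):
    """Check ultra-log-concavity of a pmf given as a list f[0..n]:
    the sequence g(x) = x! * f(x) must be log-concave, i.e.
    g(x)^2 >= g(x-1) * g(x+1) for interior x (up to a tiny relative tolerance)."""
    g = [factorial(x) * p for x, p in enumerate(f)]
    return all(g[x] ** 2 >= g[x - 1] * g[x + 1] * (1 - 1e-9)
               for x in range(1, len(f) - 1))

binom = [comb(4, x) * 0.3**x * 0.7**(4 - x) for x in range(5)]    # Binomial(4, 0.3)
lam = 1.5
poisson = [exp(-lam) * lam**x / factorial(x) for x in range(12)]  # truncated Po(1.5)
spread = [0.5, 0.0, 0.5]  # all mass on {0, 2}: "more spread out" than Poisson
```

The binomial and Poisson pass (for the Poisson, x! π_λ(x) is log-linear, the boundary case), while the two-point pmf on {0, 2} fails.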

Parts (2) and (3) of Theorem 1 (see [5, 6]) resemble the entropic central limit theorem of Barron [2], in that convergence in relative entropy, rather than the usual weak convergence, is established. The monotonicity statements in Parts (3) and (4), proved in [14], can be seen as the discrete analogue of the monotonicity of entropy in the central limit theorem, conjectured by Shannon and proved much later by Artstein et al. [1].

In this work we further explore the behavior of entropy under thinning. Our main result is the following concavity property.

Theorem 2

If X and Y are independent random variables on Z_+ with ultra-log-concave pmfs, then

H(T_α X + T_{1−α} Y) ≥ α H(X) + (1 − α) H(Y),  0 ≤ α ≤ 1.  (1)

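Theorem 2 can be spot-checked numerically for small ULC examples, e.g. binomials (which are Bernoulli sums and hence ULC). A sketch with our own helper functions:

```python
from math import comb, log

def thin(f, a):
    """Pmf of T_a X for a finitely supported pmf f."""
    n = len(f)
    return [sum(f[y] * comb(y, x) * a**x * (1 - a)**(y - x) for y in range(x, n))
            for x in range(n)]

def convolve(f, g):
    """Pmf of the sum of independent variables with pmfs f and g."""
    h = [0.0] * (len(f) + len(g) - 1)
    for x, fx in enumerate(f):
        for y, gy in enumerate(g):
            h[x + y] += fx * gy
    return h

def entropy(f):
    return -sum(p * log(p) for p in f if p > 0)

fX = [comb(3, x) * 0.5**3 for x in range(4)]                 # Binomial(3, 0.5): ULC
fY = [comb(2, x) * 0.2**x * 0.8**(2 - x) for x in range(3)]  # Binomial(2, 0.2): ULC

# Smallest slack in H(T_a X + T_{1-a} Y) >= a H(X) + (1-a) H(Y) over a grid of a.
gap = min(entropy(convolve(thin(fX, a), thin(fY, 1 - a)))
          - (a * entropy(fX) + (1 - a) * entropy(fY))
          for a in [i / 10 for i in range(11)])
```

The gap is zero at the endpoints a = 0, 1 and strictly positive in between for these inputs.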
Theorem 2 is interesting on two accounts. Firstly, it can be seen as an analogue of the inequality

h(√α X + √(1 − α) Y) ≥ α h(X) + (1 − α) h(Y),  (2)

where X and Y are continuous random variables with finite variances and h denotes the differential entropy. The difference between thinning by α in (1) and scaling by √α in (2) is required to control different moments. In the discrete case, the law of small numbers [5] and the corresponding maximum entropy property [7] both require control of the mean, which is achieved by this thinning factor. In the continuous case, the central limit theorem [2] requires control of the variance, which is achieved by this choice of scaling. It is well known that (2) is a reformulation of Shannon's entropy power inequality ([12, 3]). Thus Theorem 2 may be regarded as a first step towards a discrete entropy power inequality (see Section IV for further discussion).

Secondly, Theorem 2 is closely related to an open problem of Shepp and Olkin [11] concerning Bernoulli sums. With a slight abuse of notation, let H(p_1, …, p_n) denote the entropy of the sum S = Σ_{i=1}^{n} B_i, where B_i is an independent Bernoulli random variable with parameter p_i.

Conjecture 1 ([11])

The function H(p_1, …, p_n) is concave in (p_1, …, p_n), i.e.,

H(α p + (1 − α) q) ≥ α H(p) + (1 − α) H(q)  (3)

for all α ∈ [0, 1] and p = (p_1, …, p_n), q = (q_1, …, q_n) ∈ [0, 1]^n.

As noted by Shepp and Olkin [11], H(p_1, …, p_n) is concave in each p_i separately, and is concave in the special case where p_1 = ⋯ = p_n and q_1 = ⋯ = q_n. We provide further evidence supporting Conjecture 1, by proving another special case, which is a consequence of Theorem 2 when applied to Bernoulli sums.

Corollary 1

Relation (3) holds if p_i q_i = 0 for all i.
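This special case, in which p and q have disjoint supports (p_i q_i = 0 for all i), can be spot-checked numerically; the dynamic-programming helper below is our own construction:

```python
from math import log

def bernoulli_sum_pmf(ps):
    """Pmf of B_1 + ... + B_n with independent B_i ~ Bernoulli(ps[i])."""
    f = [1.0]
    for p in ps:
        f = [(1 - p) * (f[x] if x < len(f) else 0.0)
             + p * (f[x - 1] if x > 0 else 0.0)
             for x in range(len(f) + 1)]
    return f

def H(ps):
    return -sum(q * log(q) for q in bernoulli_sum_pmf(ps) if q > 0)

p = [0.3, 0.7, 0.0, 0.0]   # supports of p and q are disjoint:
q = [0.0, 0.0, 0.5, 0.9]   # p_i * q_i = 0 for every i

# Smallest slack in H(a p + (1-a) q) >= a H(p) + (1-a) H(q) over a grid of a.
gap = min(H([a * pi + (1 - a) * qi for pi, qi in zip(p, q)])
          - (a * H(p) + (1 - a) * H(q))
          for a in [i / 20 for i in range(21)])
```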

Conjecture 1 remains open. We are hopeful, however, that the techniques introduced here could help resolve this long-standing problem.

In Section II we collect some basic properties of thinning and ULC distributions, which are used in the proof of Theorem 2 in Section III. Possible extensions are discussed in Section IV.

II Preliminary observations

Basic properties of thinning include the semigroup relation ([7])

T_α (T_β f) = T_{αβ} f,  (4)

and the commuting relation (∗ denotes convolution)

T_α (f ∗ g) = (T_α f) ∗ (T_α g).  (5)
It is (5) that allows us to deduce Corollary 1 from Theorem 2 easily.
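Both relations are straightforward to verify numerically for finitely supported pmfs; a sketch with our own helpers:

```python
from math import comb

def thin(f, a):
    """Pmf of T_a X for a finitely supported pmf f."""
    n = len(f)
    return [sum(f[y] * comb(y, x) * a**x * (1 - a)**(y - x) for y in range(x, n))
            for x in range(n)]

def convolve(f, g):
    """Pmf of the sum of independent variables with pmfs f and g."""
    h = [0.0] * (len(f) + len(g) - 1)
    for x, fx in enumerate(f):
        for y, gy in enumerate(g):
            h[x + y] += fx * gy
    return h

f = [0.1, 0.6, 0.3]
g = [0.4, 0.4, 0.2]
a, b = 0.7, 0.5

# (4): T_a(T_b f) = T_{ab} f, and (5): T_a(f * g) = (T_a f) * (T_a g).
semigroup_err = max(abs(u - v) for u, v in zip(thin(thin(f, b), a), thin(f, a * b)))
commute_err = max(abs(u - v) for u, v in
                  zip(thin(convolve(f, g), a), convolve(thin(f, a), thin(g, a))))
```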

Concerning the ULC property, three important observations ([7]) are

  1. a pmf is ULC if and only if the ratio is a decreasing function of ;

  2. if is ULC, then so is ;

  3. if and are ULC, then so is their convolution .

A key tool for deriving Theorem 2 and related results ([7, 13]) is Chebyshev's rearrangement theorem, which states that the covariance of two increasing functions of the same random variable is non-negative. In other words, if Z is a scalar random variable, and u and v are increasing functions, then (assuming the expectations are finite)

E[u(Z) v(Z)] ≥ E[u(Z)] E[v(Z)].
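A tiny exact illustration with a made-up pmf (our own example, not from the paper), including a decreasing function to show that the inequality's direction can reverse when monotonicity fails:

```python
# Chebyshev's rearrangement inequality: cov(u(Z), v(Z)) >= 0 for increasing u, v.
f = [0.1, 0.2, 0.3, 0.25, 0.15]          # pmf of Z on {0, ..., 4}
u = [x ** 2 for x in range(5)]           # increasing function of x
v = [2 * x + 1 for x in range(5)]        # increasing function of x
w = [5 - x for x in range(5)]            # decreasing: covariance goes negative

ev = lambda h: sum(p * hx for p, hx in zip(f, h))   # expectation under f
cov_uv = ev([a * b for a, b in zip(u, v)]) - ev(u) * ev(v)
cov_uw = ev([a * b for a, b in zip(u, w)]) - ev(u) * ev(w)
```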
III Proof of Theorem 2

The basic idea is to use the decomposition

H(f) = λ(1 − log λ) − D(f) − E(f),

where D(f) = D(f ‖ π_λ) as before, with λ = Σ_x x f(x), and E(f) = −Σ_{x=0}^{∞} f(x) log(x!).

The behavior of the relative entropy under thinning is fairly well understood. In particular, by differentiating with respect to α and then using a data-processing argument, Yu [14] shows that

D(T_α f) ≤ α D(f).  (6)

Further, for any independent X and Y, the data-processing inequality shows that D(f ∗ g) ≤ D(f) + D(g). By taking T_α X and T_{1−α} Y, one concludes that

D(T_α f ∗ T_{1−α} g) ≤ D(T_α f) + D(T_{1−α} g) ≤ α D(f) + (1 − α) D(g).
Therefore, since the mean term t ↦ t(1 − log t) is concave, we only need to prove the corresponding result for E, that is,

E(T_α f ∗ T_{1−α} g) ≤ α E(f) + (1 − α) E(g).  (7)
Unfortunately, matters are more complicated because there is no equivalent of the data-processing inequality, i.e., the inequality E(T_α f) ≤ α E(f) does not always hold. (Consider for example f the pmf of B_1 + B_2, where B_1 and B_2 are i.i.d. Bernoulli with parameter p > 0. This inequality then reduces to α² ≥ α, which clearly fails for all α ∈ (0, 1).)

Nevertheless, it is possible to establish (7) directly. We illustrate the strategy with a related but simpler result, which involves the equivalent of Equation (6) for E.

Proposition 1

For any pmf f on Z_+ with mean λ, we have H(T_α f ∗ π_{(1−α)λ}) ≥ α H(f) + (1 − α) H(Po(λ)) for all α ∈ [0, 1].


Let us assume that the support of f is finite; the general case follows by a truncation argument ([14]). In view of (6), we only need to show that E_α is convex in α, where

E_α = E(T_α f ∗ π_{(1−α)λ}).
By substituting in Equation (8) of [7], we obtain that

and hence, using summation by parts,

In a similar way, using the inequality ,

The last inequality holds since .

Having established the convexity of E_α, we can now deduce the full Proposition using (6). ∎
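Proposition 1 can be spot-checked numerically, even for a pmf that is not ULC; in this sketch (our own code) the Poisson pmf is truncated at 30 terms:

```python
from math import comb, exp, factorial, log

def thin(f, a):
    """Pmf of T_a X for a finitely supported pmf f."""
    n = len(f)
    return [sum(f[y] * comb(y, x) * a**x * (1 - a)**(y - x) for y in range(x, n))
            for x in range(n)]

def convolve(f, g):
    h = [0.0] * (len(f) + len(g) - 1)
    for x, fx in enumerate(f):
        for y, gy in enumerate(g):
            h[x + y] += fx * gy
    return h

def entropy(f):
    return -sum(p * log(p) for p in f if p > 0)

def poisson_pmf(m, terms=30):
    return [exp(-m) * m**x / factorial(x) for x in range(terms)]

f = [0.5, 0.0, 0.5]                      # mean 1; note: not even ULC
lam = sum(x * p for x, p in enumerate(f))

# Smallest slack in H(T_a f * Po((1-a) lam)) >= a H(f) + (1-a) H(Po(lam)).
gap = min(entropy(convolve(thin(f, a), poisson_pmf((1 - a) * lam)))
          - (a * entropy(f) + (1 - a) * entropy(poisson_pmf(lam)))
          for a in [i / 10 for i in range(11)])
```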

Before proving Theorem 2, we note that only a special case of (1) needs to be considered. Indeed, if (1) holds in that special case, then for general X and Y we have


where (4) and (5) are used in the equality, and Proposition 1 is used in (8).


Assume 0 < α < 1, and let f and g denote the pmfs of X and Y, respectively. Assume that the means μ = Σ_x x f(x) and ν = Σ_x x g(x) are positive, to avoid the trivial case. As noted before, we only need to show that

E_α = E(T_α f ∗ T_{1−α} g)

is convex in α. The calculations are similar to (but more involved than) those for Proposition 1, and we omit the details. The key is to express E″_α in a form suitable for applying Chebyshev's rearrangement theorem.



Ultra-log-concavity and dominated convergence permit differentiating term-by-term.

For each fixed y, since one factor decreases in x and the other increases in x, the relevant ratio decreases in x. Since f is ULC, the ratio (x + 1) f(x + 1) / f(x) is decreasing in x. Hence we may apply Chebyshev's rearrangement theorem to the sum over x and obtain (9).


Similarly, considering the sum over y, since the corresponding ratio is decreasing in y for any fixed x, we obtain (10).


Adding up (9) and (10), and noting that

we get

which is nonnegative. ∎

IV Towards a discrete Entropy Power Inequality

In the continuous case, (2) is quickly shown (see [4]) to be equivalent to Shannon's entropy power inequality

e^{2h(X+Y)} ≥ e^{2h(X)} + e^{2h(Y)},  (11)

valid for independent X and Y with finite variances, with equality if and only if X and Y are normal. We aim to formulate a discrete analogue of (11), with the Poisson distribution playing the same role as the normal, since it has the corresponding infinite divisibility and maximum entropy properties.

Observe that the function e^{2h} appearing in (11) is (proportional to) the inverse of the entropy of the normal with variance σ². That is, if we write Λ_N(σ²) = h(N(0, σ²)) = (1/2) log(2πeσ²), then the entropy power is Λ_N^{−1}(h(X)) = e^{2h(X)}/(2πe), so Equation (11) can be written as

Λ_N^{−1}(h(X + Y)) ≥ Λ_N^{−1}(h(X)) + Λ_N^{−1}(h(Y)).
Although there does not exist a corresponding closed-form expression for the entropy of a Poisson random variable, we can denote Λ(λ) = H(Po(λ)). Then Λ is increasing and concave. (The proof of Proposition 1, when specialized to the Poisson case, implies this concavity.) Define

V(X) = Λ^{−1}(H(X)).

That is, V(X) is the mean of the Poisson random variable whose entropy equals H(X). It is tempting to conjecture that the natural discrete analogue of Equation (11) is

V(X + Y) ≥ V(X) + V(Y)

for independent discrete random variables X and Y, with equality if and only if X and Y are Poisson. However, this is not true. A counterexample, provided by an anonymous referee, takes X and Y with the same simple pmf. Since this pmf even lies within the ULC class, the conjecture still fails when restricted to this class.

We believe that the discrete counterpart of the entropy power inequality should involve the thinning operation described above. If so, the natural conjecture is the following, which we refer to as the thinned entropy power inequality.

Conjecture 2

If X and Y are independent random variables with ULC pmfs on Z_+, then (0 ≤ α ≤ 1)

V(T_α X + T_{1−α} Y) ≥ α V(X) + (1 − α) V(Y).  (12)
In a similar way to the continuous case, (12) easily yields the concavity of entropy, Equation (1), as a corollary. Indeed, by (12) and the (increasing) concavity of Λ, we have

H(T_α X + T_{1−α} Y) = Λ(V(T_α X + T_{1−α} Y)) ≥ Λ(α V(X) + (1 − α) V(Y)) ≥ α H(X) + (1 − α) H(Y),

and (1) follows.

Unlike the continuous case, (1) does not easily yield (12). The key issue is the question of scaling. That is, in the continuous case, the entropy power satisfies e^{2h(√α X)} = α e^{2h(X)} for all α > 0 and all X. It is this identity that allows Dembo et al. [4] to deduce (11) from (2).

Such an identity does not hold for thinned random variables. However, we conjecture that

V(T_α X) ≥ α V(X)  (13)

for all α ∈ [0, 1] and ULC X. Note that this Equation (13), which we refer to as the restricted thinned entropy power inequality (RTEPI), is simply the case Y ≡ 0 of the full thinned entropy power inequality (12). If (13) holds, we can use the argument provided by [4] to deduce the following result, which is in some sense close to the full thinned entropy power inequality, although with thinning parameters summing to less than one in general.

Proposition 2

Consider independent ULC random variables X and Y. For any β, γ ∈ [0, 1] such that

β V(X) + γ V(Y) ≤ min{V(X), V(Y)},

if the RTEPI (13) holds then

V(T_β X + T_γ Y) ≥ β V(X) + γ V(Y).
Note that an equivalent formulation of the RTEPI (13) is that if Z is Poisson with H(Z) = H(X), then for any α ∈ [0, 1], H(T_α X) ≥ H(T_α Z). Given X and Y, we define Z_X and Z_Y to be Poisson with H(Z_X) = H(X) and H(Z_Y) = H(Y).

Given β and γ, we pick α such that β ≤ α and γ ≤ 1 − α, so that:

H(T_β X + T_γ Y) = H(T_α (T_{β/α} X) + T_{1−α} (T_{γ/(1−α)} Y))
 ≥ α H(T_{β/α} X) + (1 − α) H(T_{γ/(1−α)} Y)  (14)
 ≥ α H(T_{β/α} Z_X) + (1 − α) H(T_{γ/(1−α)} Z_Y),

where Equation (14) follows by Theorem 2 and the last inequality follows by the reformulated RTEPI.

Now making the (optimal) choice

α = β V(X) / (β V(X) + γ V(Y)),

this inequality becomes

H(T_β X + T_γ Y) ≥ Λ(β V(X) + γ V(Y)).

The result follows by applying Λ^{−1} to both sides. Note that the restrictions on β and γ are required to ensure β ≤ α and γ ≤ 1 − α. ∎

Again assuming (13), Proposition 2 yields the following special case of (12). The reason this argument works is that, as in [4], if X is Poisson then (13) holds with equality for all α ∈ [0, 1].

Corollary 2

If the RTEPI (13) holds, then (12) holds in the special case where X is ULC and Y is Poisson with mean ν such that ν ≤ V(X).


For γ ∈ (0, 1], let Y′ be Poisson with mean (1 − α)ν/γ. Then V(Y′) = (1 − α)ν/γ. The condition ν ≤ V(X) ensures that we can choose γ small enough such that

α V(X) + γ V(Y′) ≤ min{V(X), V(Y′)}.

By Proposition 2,

V(T_α X + T_γ Y′) ≥ α V(X) + γ V(Y′) = α V(X) + (1 − α) V(Y).

The claim follows by noting that T_γ Y′ has the same Poisson distribution, Po((1 − α)ν), as T_{1−α} Y. ∎

We hope to report progress on (12) in future work. Given the fundamental importance of (11), it would also be interesting to see potential applications of (12) (if true) and (1). For example, Oohama [9] used the entropy power inequality (11) to solve the multi-terminal source coding problem, determining the rate at which information can be transmitted from several sources that produce correlated Gaussian signals but are unable to collaborate or communicate with each other, under the addition of Gaussian noise. It would be of interest to know whether (12) could lead to a corresponding result for discrete channels.

Note: Since the submission of this paper to ISIT09, we have found a proof of the restricted thinned entropy power inequality, i.e., Equation (13). The proof, based on [7], is somewhat technical and will be presented in a future work.


  • [1] S. Artstein, K. M. Ball, F. Barthe and A. Naor, “Solution of Shannon’s problem on the monotonicity of entropy,” J. Amer. Math. Soc., vol. 17, no. 4, pp. 975–982 (electronic), 2004.
  • [2] A. R. Barron, “Entropy and the central limit theorem,” Ann. Probab., vol. 14, no. 1, pp. 336–342, 1986.
  • [3] N. M. Blachman, “The convolution inequality for entropy powers,” IEEE Trans. Inform. Theory, vol. 11, pp. 267–271, 1965.
  • [4] A. Dembo, T. M. Cover and J. A. Thomas, “Information theoretic inequalities,” IEEE Trans. Inform. Theory, vol. 37, no. 6, pp. 1501–1518, 1991.
  • [5] P. Harremoës, O. Johnson, and I. Kontoyiannis, “Thinning and the law of small numbers,” in Proc. IEEE Int. Symp. Inform. Theory, Jun. 2007.
  • [6] P. Harremoës, O. Johnson, and I. Kontoyiannis, “Thinning and information projections,” in Proc. IEEE Int. Symp. Inform. Theory, Jul. 2008.
  • [7] O. Johnson, “Log-concavity and the maximum entropy property of the Poisson distribution,” Stochastic Processes and their Applications, vol. 117, no. 6, pp. 791–802, 2007.
  • [8] I. Kontoyiannis, P. Harremoës, and O. T. Johnson, “Entropy and the law of small numbers,” IEEE Trans. Inform. Theory, vol. 51, no. 2, pp. 466–472, Feb. 2005.
  • [9] Y. Oohama, “Rate-distortion theory for Gaussian multiterminal source coding systems with several side informations at the decoder,” IEEE Trans. Inform. Theory, vol. 51, no. 7, pp. 2577–2593, July 2005.
  • [10] A. Rényi, “A characterization of Poisson processes,” Magyar Tud. Akad. Mat. Kutató Int. Közl., vol. 1, pp. 519–527, 1956.
  • [11] L. A. Shepp and I. Olkin, “Entropy of the sum of independent Bernoulli random variables and of the multinomial distribution,” in Contributions to probability, pp. 201–206, Academic Press, New York, 1981.
  • [12] A. J. Stam, “Some inequalities satisfied by the quantities of information of Fisher and Shannon,” Inform. Contr., vol. 2, no. 2, pp. 101–112, Jun. 1959.
  • [13] Y. Yu, “On the maximum entropy properties of the binomial distribution,” IEEE Trans. Inform. Theory, vol. 54, no. 7, pp. 3351–3353, Jul. 2008.
  • [14] Y. Yu, “Monotonic convergence in an information-theoretic law of small numbers,” Technical report, Department of Statistics, University of California, Irvine, 2008.