Concavity of entropy under thinning
Building on the recent work of Johnson (2007) and Yu (2008), we prove that entropy is a concave function with respect to the thinning operation $T_\alpha$. That is, if $X$ and $Y$ are independent random variables on $\mathbb{Z}_+$ with ultra-log-concave probability mass functions, then
$$H(T_\alpha X + T_{1-\alpha} Y) \ge \alpha H(X) + (1-\alpha) H(Y), \qquad 0 \le \alpha \le 1,$$
where $H$ denotes the discrete entropy. This is a discrete analogue of the inequality ($h$ denotes the differential entropy)
$$h(\sqrt{\alpha}\,X + \sqrt{1-\alpha}\,Y) \ge \alpha h(X) + (1-\alpha) h(Y), \qquad 0 \le \alpha \le 1,$$
which holds for continuous $X$ and $Y$ with finite variances and is equivalent to Shannon's entropy power inequality. As a consequence we establish a special case of a conjecture of Shepp and Olkin (1981). Possible extensions are also discussed.
This paper considers information-theoretic properties of the thinning map, an operation on the space of discrete random variables, based on random summation.
Definition 1 (Rényi [10])
For a discrete random variable $X$ on $\mathbb{Z}_+ = \{0, 1, 2, \ldots\}$ and $\alpha \in [0, 1]$, the thinning operation $T_\alpha$ is defined by
$$T_\alpha X = \sum_{i=1}^{X} B_i,$$
where $B_1, B_2, \ldots$ are (i) independent of each other and of $X$ and (ii) identically distributed Bernoulli($\alpha$) random variables, i.e., $\Pr(B_i = 1) = 1 - \Pr(B_i = 0) = \alpha$ for each $i$.
Equivalently, if the probability mass function (pmf) of $X$ is $f$, then the pmf of $T_\alpha X$ is
$$(T_\alpha f)(x) = \sum_{n \ge x} f(n)\, b_{n,\alpha}(x),$$
where $b_{n,\alpha}(x) = \binom{n}{x} \alpha^x (1-\alpha)^{n-x}$ is the binomial pmf. (Note that we write $T_\alpha$ for the map acting on the pmf as well as acting on the random variable.)
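As an illustrative sketch (not part of the paper), the pmf form of Definition 1 can be checked numerically; the helper `thin` below is a hypothetical implementation acting on a pmf stored as a list:

```python
from math import comb

def thin(f, alpha):
    """(T_alpha f)(x) = sum_{n >= x} f(n) C(n, x) alpha^x (1 - alpha)^(n - x)."""
    N = len(f) - 1
    return [sum(f[n] * comb(n, x) * alpha**x * (1 - alpha)**(n - x)
                for n in range(x, N + 1))
            for x in range(N + 1)]

# Thinning a Bernoulli(p) pmf gives Bernoulli(alpha * p).
p, alpha = 0.6, 0.3
g = thin([1 - p, p], alpha)
assert abs(g[1] - alpha * p) < 1e-12 and abs(sum(g) - 1) < 1e-12
```

The same helper is reused (redefined for self-containment) in the later sketches.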
We briefly mention other notation used in this paper. We use $\mathrm{Po}(\lambda)$ to denote the Poisson distribution with mean $\lambda$, i.e., the pmf is $\pi_\lambda(x) = e^{-\lambda}\lambda^x / x!$, $x \in \mathbb{Z}_+$. The entropy of a discrete random variable $X$ with pmf $f$ is defined as
$$H(X) = H(f) = -\sum_{x=0}^{\infty} f(x) \log f(x),$$
and the relative entropy between $X$ (with pmf $f$) and $Y$ (with pmf $g$) is defined as
$$D(X \,\|\, Y) = D(f \,\|\, g) = \sum_{x=0}^{\infty} f(x) \log \frac{f(x)}{g(x)}.$$
For convenience we write $D(f) = D(f \,\|\, \pi_\lambda)$ where $\lambda = \sum_x x f(x)$ is the mean of $f$.
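These definitions translate directly into code; the following sketch (with hypothetical helper names, and truncated supports) checks that $D(\pi_\lambda) = 0$ and that $D$ is positive away from the Poisson:

```python
from math import exp, factorial, log

def poisson_pmf(lam, n_max):
    return [exp(-lam) * lam**x / factorial(x) for x in range(n_max + 1)]

def entropy(f):
    return -sum(p * log(p) for p in f if p > 0)

def D(f):
    """Relative entropy D(f || Po(lambda)) with lambda the mean of f."""
    lam = sum(x * p for x, p in enumerate(f))
    pi = poisson_pmf(lam, len(f) - 1)
    return sum(p * log(p / q) for p, q in zip(f, pi) if p > 0)

assert abs(D(poisson_pmf(0.5, 40))) < 1e-9   # D vanishes at the Poisson itself
assert D([0.5, 0.5]) > 0                     # and is positive otherwise
```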
The thinning operation is intimately associated with the Poisson distribution and Poisson convergence theorems. It plays a significant role in the derivation of a maximum entropy property for the Poisson distribution (Johnson [7]). Recently there has been evidence that, in a number of problems related to information theory, the operation $T_\alpha$ is the discrete counterpart of the operation of scaling a random variable by $\sqrt{\alpha}$; see [5, 6, 7, 14]. Since scaling arguments can give simple proofs of results such as the Entropy Power Inequality, we believe that improved understanding of the thinning operation could lead to discrete analogues of such results.
Theorem 1 (Law of Thin Numbers)
Let $f$ be a pmf on $\mathbb{Z}_+$ with mean $\lambda$. Denote by $f^{*n}$ the $n$th convolution of $f$, i.e., the pmf of $S_n = X_1 + \cdots + X_n$ where $X_1, \ldots, X_n$ are independent and identically distributed (i.i.d.) with pmf $f$. Then
1) $T_{1/n}(f^{*n})$ converges point-wise to $\pi_\lambda$ as $n \to \infty$;
2) $D(T_{1/n}(f^{*n}))$ tends to zero as $n \to \infty$;
3) as $n$ increases, $D(T_{1/n}(f^{*n}))$ monotonically decreases to zero, if it is ever finite;
4) if $f$ is ultra-log-concave, then $H(T_{1/n}(f^{*n}))$ increases in $n$.
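The convergence and monotonicity statements above can be observed numerically; an illustrative sketch (the helpers are hypothetical, not from the paper) checks the monotone decrease in Part (3) for a Bernoulli pmf:

```python
from math import comb, exp, factorial, log

def thin(f, a):
    N = len(f) - 1
    return [sum(f[n] * comb(n, x) * a**x * (1 - a)**(n - x) for n in range(x, N + 1))
            for x in range(N + 1)]

def convolve(f, g):
    h = [0.0] * (len(f) + len(g) - 1)
    for i, p in enumerate(f):
        for j, q in enumerate(g):
            h[i + j] += p * q
    return h

def D(f):
    lam = sum(x * p for x, p in enumerate(f))
    return sum(p * log(p / (exp(-lam) * lam**x / factorial(x)))
               for x, p in enumerate(f) if p > 0)

f = [0.5, 0.5]                 # Bernoulli(1/2), which is ULC
vals, fn = [], f
for n in range(1, 7):
    vals.append(D(thin(fn, 1.0 / n)))   # D(T_{1/n}(f^{*n}))
    fn = convolve(fn, f)
assert all(u > v for u, v in zip(vals, vals[1:]))   # monotone decrease
```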
For Part (4), we recall that a random variable $X$ on $\mathbb{Z}_+$ is called ultra-log-concave, or ULC, if its pmf $f$ is such that the sequence $\{x!\, f(x)\}$ is log-concave. Examples of ULC random variables include the binomial and the Poisson. In general, a sum of independent (but not necessarily identically distributed) Bernoulli random variables is ULC. Informally, a ULC random variable is less “spread out” than a Poisson with the same mean. Note that in Part (4) the ULC assumption is natural since, among ULC distributions with a fixed mean, the Poisson achieves maximum entropy ([7, 14]).
Parts (2) and (3) of Theorem 1 (see [5, 6]) resemble the entropic central limit theorem of Barron [2], in that convergence in relative entropy, rather than the usual weak convergence, is established. The monotonicity statements in Parts (3) and (4), proved in [14], can be seen as the discrete analogue of the monotonicity of entropy in the central limit theorem, conjectured by Shannon and proved much later by Artstein et al. [1].
In this work we further explore the behavior of entropy under thinning. Our main result is the following concavity property.
Theorem 2
If $X$ and $Y$ are independent random variables on $\mathbb{Z}_+$ with ultra-log-concave pmfs, then
$$H(T_\alpha X + T_{1-\alpha} Y) \ge \alpha H(X) + (1-\alpha) H(Y), \qquad 0 \le \alpha \le 1. \tag{1}$$
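Before turning to the proof, the inequality (1) is easy to probe numerically. The sketch below (illustrative only, with hypothetical helpers) checks it for two small ULC pmfs over a grid of $\alpha$:

```python
from math import comb, log

def thin(f, a):
    N = len(f) - 1
    return [sum(f[n] * comb(n, x) * a**x * (1 - a)**(n - x) for n in range(x, N + 1))
            for x in range(N + 1)]

def convolve(f, g):
    h = [0.0] * (len(f) + len(g) - 1)
    for i, p in enumerate(f):
        for j, q in enumerate(g):
            h[i + j] += p * q
    return h

def entropy(f):
    return -sum(p * log(p) for p in f if p > 0)

X = [0.25, 0.5, 0.25]      # Bin(2, 1/2): ULC
Y = [0.7, 0.3]             # Bernoulli(0.3): ULC
for i in range(11):
    a = i / 10
    lhs = entropy(convolve(thin(X, a), thin(Y, 1 - a)))
    assert lhs >= a * entropy(X) + (1 - a) * entropy(Y) - 1e-12
```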
Theorem 2 is interesting on two accounts. Firstly, it can be seen as an analogue of the inequality
$$h(\sqrt{\alpha}\,X + \sqrt{1-\alpha}\,Y) \ge \alpha h(X) + (1-\alpha) h(Y), \tag{2}$$
where $X$ and $Y$ are continuous random variables with finite variances and $h$ denotes the differential entropy. The difference between thinning by $\alpha$ in (1) and scaling by $\sqrt{\alpha}$ in (2) reflects the need to control different moments. In the discrete case, the law of small numbers [8] and the corresponding maximum entropy property [7, 14] both require control of the mean, which is achieved by this thinning factor. In the continuous case, the central limit theorem [2] requires control of the variance, which is achieved by this choice of scaling. It is well-known that (2) is a reformulation of Shannon’s entropy power inequality ([12, 3]). Thus Theorem 2 may be regarded as a first step towards a discrete entropy power inequality (see Section IV for further discussion).
Secondly, Theorem 2 is closely related to an open problem of Shepp and Olkin [11] concerning Bernoulli sums. With a slight abuse of notation let $H(p_1, \ldots, p_n)$ denote the entropy of the sum $S_n = B_1 + \cdots + B_n$, where the $B_i$ are independent Bernoulli random variables with parameters $p_i$.
Conjecture 1 (Shepp and Olkin [11])
The function $H(p_1, \ldots, p_n)$ is concave in $(p_1, \ldots, p_n)$, i.e.,
$$H\big(\alpha p_1 + (1-\alpha) q_1, \ldots, \alpha p_n + (1-\alpha) q_n\big) \ge \alpha H(p_1, \ldots, p_n) + (1-\alpha) H(q_1, \ldots, q_n) \tag{3}$$
for all $\alpha \in [0, 1]$ and $p_i, q_i \in [0, 1]$.
As noted by Shepp and Olkin [11], $H(p_1, \ldots, p_n)$ is concave in each $p_i$, and is concave in the special case where $p_1 = \cdots = p_n$ and $q_1 = \cdots = q_n$. We provide further evidence supporting Conjecture 1, by proving another special case, which is a consequence of Theorem 2 when applied to Bernoulli sums.
Corollary 1
Relation (3) holds if $p_i q_i = 0$ for all $i$.
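A quick numerical check of relation (3) for parameter vectors with disjoint supports (an illustrative sketch; the helper is hypothetical):

```python
from math import log

def bernoulli_sum_entropy(ps):
    """Entropy of B_1 + ... + B_n with B_i ~ Bernoulli(ps[i]), independent."""
    f = [1.0]
    for p in ps:
        g = [0.0] * (len(f) + 1)
        for i, w in enumerate(f):
            g[i] += w * (1 - p)
            g[i + 1] += w * p
        f = g
    return -sum(w * log(w) for w in f if w > 0)

p = (0.7, 0.0)        # p_i q_i = 0 for all i
q = (0.0, 0.4)
for i in range(11):
    a = i / 10
    mix = tuple(a * pi + (1 - a) * qi for pi, qi in zip(p, q))
    assert bernoulli_sum_entropy(mix) >= (a * bernoulli_sum_entropy(p)
                                          + (1 - a) * bernoulli_sum_entropy(q)) - 1e-12
```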
Conjecture 1 remains open. We are hopeful, however, that the techniques introduced here could help resolve this long-standing problem.
In Section II we collect some basic properties of thinning and ULC distributions, which are used in the proof of Theorem 2 in Section III. Possible extensions are discussed in Section IV.
II. Preliminary observations
Basic properties of thinning include the semigroup relation
$$T_\alpha(T_\beta X) = T_{\alpha\beta} X, \qquad \alpha, \beta \in [0, 1],$$
and the commuting relation ($*$ denotes convolution)
$$T_\alpha(f * g) = (T_\alpha f) * (T_\alpha g).$$
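Both relations are finite computations for finitely supported pmfs, so they can be checked mechanically (an illustrative sketch, with the same hypothetical helpers as before):

```python
from math import comb

def thin(f, a):
    N = len(f) - 1
    return [sum(f[n] * comb(n, x) * a**x * (1 - a)**(n - x) for n in range(x, N + 1))
            for x in range(N + 1)]

def convolve(f, g):
    h = [0.0] * (len(f) + len(g) - 1)
    for i, p in enumerate(f):
        for j, q in enumerate(g):
            h[i + j] += p * q
    return h

f, g = [0.1, 0.3, 0.4, 0.2], [0.5, 0.5]
a, b = 0.6, 0.5
# semigroup: T_a(T_b f) = T_{ab} f
assert all(abs(u - v) < 1e-12 for u, v in zip(thin(thin(f, b), a), thin(f, a * b)))
# commuting: T_a(f * g) = (T_a f) * (T_a g)
assert all(abs(u - v) < 1e-12 for u, v in zip(thin(convolve(f, g), a),
                                              convolve(thin(f, a), thin(g, a))))
```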
Concerning the ULC property, three important observations are
1) a pmf $f$ is ULC if and only if the ratio $(x+1) f(x+1) / f(x)$ is a decreasing function of $x$;
2) if $f$ is ULC, then so is $T_\alpha f$;
3) if $f$ and $g$ are ULC, then so is their convolution $f * g$.
A key tool for deriving Theorem 2 and related results ([7, 13]) is Chebyshev’s rearrangement theorem, which states that the covariance of two increasing functions of the same random variable is non-negative. In other words, if $W$ is a scalar random variable, and $\phi$ and $\psi$ are increasing functions, then (assuming the expectations are finite)
$$\mathbb{E}[\phi(W)\,\psi(W)] \ge \mathbb{E}[\phi(W)]\,\mathbb{E}[\psi(W)].$$
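For a finitely supported $W$ the rearrangement inequality is itself a finite computation; a minimal sketch:

```python
def cov(f, phi, psi):
    """Cov(phi(W), psi(W)) for W with pmf f on {0, 1, ..., len(f)-1}."""
    E = lambda g: sum(p * g(x) for x, p in enumerate(f))
    return E(lambda x: phi(x) * psi(x)) - E(phi) * E(psi)

f = [0.2, 0.5, 0.2, 0.1]
assert cov(f, lambda x: x, lambda x: x * x) >= 0    # both functions increasing
assert cov(f, lambda x: x, lambda x: -x) <= 0       # opposite monotonicity
```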
III. Proof of Theorem 2
The basic idea is to use the decomposition
$$H(f) = -D(f) - \Big(\lambda \log \lambda - \lambda - \sum_{x=0}^{\infty} f(x) \log x!\Big), \tag{6}$$
where as before $D(f) = D(f \,\|\, \pi_\lambda)$ with $\lambda = \sum_x x f(x)$, and $0 \log 0 := 0$.
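The decomposition (6) is an algebraic identity, so it can be confirmed numerically (an illustrative sketch with hypothetical helpers):

```python
from math import exp, factorial, log

def entropy(f):
    return -sum(p * log(p) for p in f if p > 0)

def D(f):
    """D(f || Po(lambda)) with lambda the mean of f."""
    lam = sum(x * p for x, p in enumerate(f))
    return sum(p * log(p / (exp(-lam) * lam**x / factorial(x)))
               for x, p in enumerate(f) if p > 0)

f = [0.2, 0.5, 0.2, 0.1]
lam = sum(x * p for x, p in enumerate(f))
second_term = lam * log(lam) - lam - sum(p * log(factorial(x)) for x, p in enumerate(f))
assert abs(entropy(f) - (-D(f) - second_term)) < 1e-12
```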
The behavior of the relative entropy under thinning is fairly well-understood. In particular, by differentiating with respect to $\alpha$ and then using a data-processing argument, Yu [14] shows that
$$D(T_\alpha f) \le \alpha D(f).$$
Further, for any independent $X$ and $Y$, the data-processing inequality shows that $D(X + Y) \le D(X) + D(Y)$. By taking $T_\alpha X$ and $T_{1-\alpha} Y$ in place of $X$ and $Y$, one concludes that
$$D(T_\alpha X + T_{1-\alpha} Y) \le \alpha D(X) + (1-\alpha) D(Y).$$
Therefore we only need to prove the corresponding result for the second term in (6), that is, that
$$E(\alpha) := \lambda_\alpha \log \lambda_\alpha - \lambda_\alpha - \sum_{x} p_\alpha(x) \log x!$$
satisfies $E(\alpha) \le \alpha E(1) + (1-\alpha) E(0)$, where $p_\alpha$ denotes the pmf of $T_\alpha X + T_{1-\alpha} Y$ and $\lambda_\alpha = \alpha \lambda_f + (1-\alpha) \lambda_g$ its mean ($f$ and $g$ being the pmfs of $X$ and $Y$).
Unfortunately, matters are more complicated for this term because there is no equivalent of the data-processing inequality, i.e., writing $\Phi(X) = \lambda \log \lambda - \lambda - \mathbb{E}[\log X!]$ with $\lambda = \mathbb{E}[X]$, the inequality $\Phi(X + Y) \le \Phi(X) + \Phi(Y)$ does not always hold. (Consider for example $X$ and $Y$ i.i.d. Bernoulli with parameter $p$. This inequality then reduces to $(2p - p^2) \log 2 \le 0$, which clearly fails for all $p \in (0, 1]$.)
Proposition 1
For any pmf $f$ on $\mathbb{Z}_+$ with mean $\lambda$, we have
$$H(T_\alpha f) \ge \alpha H(f), \qquad 0 \le \alpha \le 1.$$
By substituting $\log x!$ in Equation (8) of [7], we obtain that
$$\frac{d}{d\alpha} \sum_{x} (T_\alpha f)(x) \log x! = \frac{1}{\alpha} \sum_{x} \big[x\, (T_\alpha f)(x) - (x+1)(T_\alpha f)(x+1)\big] \log x!,$$
and hence, using summation by parts,
$$\frac{d}{d\alpha} \sum_{x} (T_\alpha f)(x) \log x! = \frac{1}{\alpha} \sum_{x} x\, (T_\alpha f)(x) \log x.$$
In a similar way, using the inequality $x \log x - (x-1)\log(x-1) \le 1 + \log x$,
$$\frac{d^2}{d\alpha^2} \sum_{x} (T_\alpha f)(x) \log x! \le \frac{1}{\alpha^2} \sum_{x} x\,(T_\alpha f)(x)\big[(1 + \log x) - \log x\big] = \frac{\lambda}{\alpha}.$$
The last inequality holds since $\sum_x x\,(T_\alpha f)(x) = \alpha\lambda$.
Having established the convexity of $E(\alpha) = \alpha\lambda \log(\alpha\lambda) - \alpha\lambda - \sum_x (T_\alpha f)(x) \log x!$ (note that $\frac{d^2}{d\alpha^2}\big[\alpha\lambda \log(\alpha\lambda) - \alpha\lambda\big] = \lambda/\alpha$), and noting $E(0) = 0$ so that $E(\alpha) \le \alpha E(1)$, we can now deduce the full Proposition using (6). \qed
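Proposition 1’s conclusion, and the convexity used in its proof, can both be probed numerically (an illustrative sketch; helpers hypothetical):

```python
from math import comb, factorial, log

def thin(f, a):
    N = len(f) - 1
    return [sum(f[n] * comb(n, x) * a**x * (1 - a)**(n - x) for n in range(x, N + 1))
            for x in range(N + 1)]

def entropy(f):
    return -sum(p * log(p) for p in f if p > 0)

def E(f0, a):
    """E(a) = lam*log(lam) - lam - sum (T_a f0)(x) log x!, lam the mean of T_a f0."""
    g = thin(f0, a)
    lam = sum(x * p for x, p in enumerate(g))
    return lam * log(lam) - lam - sum(p * log(factorial(x)) for x, p in enumerate(g))

f = [0.25, 0.5, 0.25]                     # Bin(2, 1/2)
grid = [i / 20 for i in range(1, 21)]
# H(T_a f) >= a H(f)
assert all(entropy(thin(f, a)) >= a * entropy(f) - 1e-12 for a in grid)
# midpoint convexity of E on the grid
for i in range(1, len(grid) - 1):
    assert E(f, grid[i]) <= (E(f, grid[i - 1]) + E(f, grid[i + 1])) / 2 + 1e-9
```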
Assume $0 < \alpha < 1$, and let $f$ and $g$ denote the pmfs of $X$ and $Y$ respectively. Assume $\lambda_f > 0$ and $\lambda_g > 0$ to avoid the trivial case. As noted before, we only need to show that
$$E(\alpha) = \lambda_\alpha \log \lambda_\alpha - \lambda_\alpha - \sum_{x} p_\alpha(x) \log x!$$
is convex in $\alpha$ (where $p_\alpha = (T_\alpha f) * (T_{1-\alpha} g)$ and $\lambda_\alpha = \alpha \lambda_f + (1-\alpha) \lambda_g$). The calculations are similar to (but more involved than) those for Proposition 1, and we omit the details. The key is to express the derivative of $\sum_x p_\alpha(x) \log x!$ in the following form suitable for applying Chebyshev’s rearrangement theorem:
$$\frac{d}{d\alpha} \sum_{x} p_\alpha(x) \log x! = \frac{1}{\alpha}\, \mathbb{E}\big[K \log(K + J)\big] - \frac{1}{1-\alpha}\, \mathbb{E}\big[J \log(K + J)\big],$$
where $K \sim T_\alpha f$ and $J \sim T_{1-\alpha} g$ are independent.
Ultra-log-concavity and dominated convergence permit differentiating term-by-term.
For each fixed $j$, since $\log\frac{k+j+1}{k+j}$ decreases in $k$ and $\log(k+j)$ increases in $k$, we know that the summand obtained after differentiating decreases in $k$. Since $T_\alpha f$ is ULC, the ratio $(k+1)(T_\alpha f)(k+1)/(T_\alpha f)(k)$ is also decreasing in $k$. Hence we may apply Chebyshev’s rearrangement theorem to the sum over $k$ and obtain an upper bound in product form.
Similarly, considering the sum over $j$, since the corresponding summand is decreasing in $j$ for any fixed $k$, we may apply the rearrangement theorem once more. Combining the two bounds with the identity $\frac{d^2}{d\alpha^2}\big[\lambda_\alpha \log \lambda_\alpha - \lambda_\alpha\big] = (\lambda_f - \lambda_g)^2/\lambda_\alpha$ shows that $E''(\alpha)$
is nonnegative, in view of the inequality $\log(1+t) \le t$. \qed
IV. Towards a discrete Entropy Power Inequality
Recall the entropy power inequality
$$e^{2h(X+Y)} \ge e^{2h(X)} + e^{2h(Y)}, \tag{11}$$
valid for independent continuous $X$ and $Y$ with finite variances, with equality if and only if $X$ and $Y$ are normal. We aim to formulate a discrete analogue of (11), with the Poisson distribution playing the same role as the normal since it has the corresponding infinite divisibility and maximum entropy properties.
Observe that the function $e^{2h}$ appearing in (11) is (proportional to) the inverse of the entropy of the normal with variance $\sigma^2$. That is, if we write $h_N(\sigma^2) = \frac{1}{2}\log(2\pi e \sigma^2)$, then the entropy power $N(X) = h_N^{-1}(h(X)) = e^{2h(X)}/(2\pi e)$, so Equation (11) can be written as
$$h_N^{-1}\big(h(X+Y)\big) \ge h_N^{-1}\big(h(X)\big) + h_N^{-1}\big(h(Y)\big).$$
Although there does not exist a corresponding closed form expression for the entropy of a Poisson random variable, we can denote $\Lambda(\lambda) = H(\mathrm{Po}(\lambda))$. Then $\Lambda$ is increasing and concave. (The proof of Proposition 1, when specialized to the Poisson case, implies this concavity.) Define the corresponding discrete quantity
$$\Lambda^{-1}\big(H(X)\big).$$
That is, $\Lambda^{-1}(H(X))$ is the mean of the Poisson distribution whose entropy equals $H(X)$. It is tempting to conjecture that the natural discrete analogue of Equation (11) is
$$\Lambda^{-1}\big(H(X+Y)\big) \ge \Lambda^{-1}\big(H(X)\big) + \Lambda^{-1}\big(H(Y)\big)$$
for independent discrete random variables $X$ and $Y$, with equality if and only if $X$ and $Y$ are Poisson. However, this is not true. A counterexample, provided by an anonymous referee, is the case where $X$ and $Y$ both have the pmf $f(0) = 1/4$, $f(1) = 1/2$, $f(2) = 1/4$. Since this pmf even lies within the ULC class, the conjecture still fails when restricted to this class.
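A failure of this kind can be verified numerically; the sketch below (illustrative, with a generously truncated Poisson entropy and a bisection inverse) checks the stated pmf:

```python
from math import exp, log

def Lam(lam, n_max=200):
    """Entropy of Po(lam), truncated at n_max; pmf built iteratively to avoid overflow."""
    if lam <= 0:
        return 0.0
    h, p = 0.0, exp(-lam)
    for x in range(n_max + 1):
        if p > 0:
            h -= p * log(p)
        p *= lam / (x + 1)
    return h

def Lam_inv(h):
    lo, hi = 0.0, 20.0          # Lam is increasing, so bisection applies
    for _ in range(100):
        mid = (lo + hi) / 2
        if Lam(mid) < h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def entropy(f):
    return -sum(p * log(p) for p in f if p > 0)

def convolve(f, g):
    h = [0.0] * (len(f) + len(g) - 1)
    for i, p in enumerate(f):
        for j, q in enumerate(g):
            h[i + j] += p * q
    return h

X = [0.25, 0.5, 0.25]
lhs = Lam_inv(entropy(convolve(X, X)))
rhs = 2 * Lam_inv(entropy(X))
assert lhs < rhs          # the naive discrete analogue of (11) fails here
```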
We believe that the discrete counterpart of the entropy power inequality should involve the thinning operation described above. If so, the natural conjecture is the following, which we refer to as the thinned entropy power inequality.
If $X$ and $Y$ are independent random variables with ULC pmfs on $\mathbb{Z}_+$, then
$$\Lambda^{-1}\big(H(T_\alpha X + T_{1-\alpha} Y)\big) \ge \alpha\, \Lambda^{-1}\big(H(X)\big) + (1-\alpha)\, \Lambda^{-1}\big(H(Y)\big), \qquad 0 \le \alpha \le 1. \tag{12}$$
Note that since $\Lambda$ is concave, (12) implies
$$H(T_\alpha X + T_{1-\alpha} Y) \ge \Lambda\big(\alpha \Lambda^{-1}(H(X)) + (1-\alpha) \Lambda^{-1}(H(Y))\big) \ge \alpha H(X) + (1-\alpha) H(Y),$$
and (1) follows.
Unlike the continuous case, (1) does not easily yield (12). The key issue is the question of scaling. That is, in the continuous case, the entropy power satisfies $N(\sqrt{a}\,X) = a\, N(X)$ for all $a > 0$ and all $X$. It is this result that allows Dembo et al. [4] to deduce (11) from (2).
Such an identity does not hold for thinned random variables. However, we conjecture that
$$\Lambda^{-1}\big(H(T_\alpha X)\big) \ge \alpha\, \Lambda^{-1}\big(H(X)\big) \tag{13}$$
for all $\alpha \in [0, 1]$ and ULC $X$. Note that this Equation (13), which we refer to as the restricted thinned entropy power inequality (RTEPI), is simply the case $Y \equiv 0$ of the full thinned entropy power inequality (12). If (13) holds, we can use the argument provided by [4] to deduce the following result, which is in some sense close to the full thinned entropy power inequality, although with thinning parameters that sum to less than one in general.
Proposition 2
Consider independent ULC random variables $X$ and $Y$, and write $a = \Lambda^{-1}(H(X))$ and $b = \Lambda^{-1}(H(Y))$. For any $\alpha, \beta \in [0, 1]$ such that $\alpha a + \beta b \le \min(a, b)$, if the RTEPI (13) holds then
$$\Lambda^{-1}\big(H(T_\alpha X + T_\beta Y)\big) \ge \alpha\, \Lambda^{-1}\big(H(X)\big) + \beta\, \Lambda^{-1}\big(H(Y)\big).$$
Note that an equivalent formulation of the RTEPI (13) is that if $Z$ is Poisson with $H(Z) = H(X)$, then for any $\alpha \in [0, 1]$, $H(T_\alpha X) \ge H(T_\alpha Z)$. Given $X$ and $Y$ we define $Z$ and $W$ to be Poisson with $H(Z) = H(X)$ and $H(W) = H(Y)$, so that $Z$ and $W$ have means $a$ and $b$ respectively.
Given $\alpha$ and $\beta$, we pick $\gamma$ such that $\alpha \le \gamma \le 1 - \beta$ and so that:
$$H(T_\alpha X + T_\beta Y) = H\big(T_\gamma(T_{\alpha/\gamma} X) + T_{1-\gamma}(T_{\beta/(1-\gamma)} Y)\big) \ge \gamma\, H(T_{\alpha/\gamma} X) + (1-\gamma)\, H(T_{\beta/(1-\gamma)} Y) \ge \gamma\, \Lambda\Big(\frac{\alpha a}{\gamma}\Big) + (1-\gamma)\, \Lambda\Big(\frac{\beta b}{1-\gamma}\Big),$$
using the semigroup relation, Theorem 2 and the RTEPI (13).
Now making the (optimal) choice
$$\gamma = \frac{\alpha a}{\alpha a + \beta b},$$
this inequality becomes
$$H(T_\alpha X + T_\beta Y) \ge \Lambda(\alpha a + \beta b).$$
The result follows by applying $\Lambda^{-1}$ to both sides. Note that the restrictions on $\alpha$ and $\beta$ are required to ensure $\gamma \ge \alpha$ and $1 - \gamma \ge \beta$. \qed
In particular, Proposition 2 yields the following claim: if the RTEPI (13) holds, then for ULC $X$, $\alpha \in [0, 1]$ and any $\mu \le (1-\alpha)\, \Lambda^{-1}(H(X))$,
$$\Lambda^{-1}\big(H(T_\alpha X + \mathrm{Po}(\mu))\big) \ge \alpha\, \Lambda^{-1}\big(H(X)\big) + \mu.$$
For $\beta \in (0, 1]$ let $W_\beta$ be Poisson with mean $\mu/\beta$. Then $\Lambda^{-1}(H(W_\beta)) = \mu/\beta$. The condition $\mu \le (1-\alpha)\, \Lambda^{-1}(H(X))$ ensures that we can choose $\beta$ small enough such that $\alpha a + \mu \le \min(a, \mu/\beta)$, where $a = \Lambda^{-1}(H(X))$.
By Proposition 2,
$$\Lambda^{-1}\big(H(T_\alpha X + T_\beta W_\beta)\big) \ge \alpha a + \mu.$$
The claim follows by noting that $T_\beta W_\beta$ has the same Poisson distribution as $\mathrm{Po}(\mu)$. \qed
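Although (12) and (13) remain conjectures, they are straightforward to probe numerically; the sketch below (illustrative only) checks the RTEPI (13) on a grid for a small ULC pmf, computing $\Lambda^{-1}$ by bisection:

```python
from math import comb, exp, log

def thin(f, a):
    N = len(f) - 1
    return [sum(f[n] * comb(n, x) * a**x * (1 - a)**(n - x) for n in range(x, N + 1))
            for x in range(N + 1)]

def entropy(f):
    return -sum(p * log(p) for p in f if p > 0)

def Lam(lam, n_max=200):
    """Entropy of Po(lam), truncated at n_max."""
    if lam <= 0:
        return 0.0
    h, p = 0.0, exp(-lam)
    for x in range(n_max + 1):
        if p > 0:
            h -= p * log(p)
        p *= lam / (x + 1)
    return h

def Lam_inv(h):
    lo, hi = 0.0, 20.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if Lam(mid) < h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

X = [0.25, 0.5, 0.25]          # Bin(2, 1/2): ULC
for i in range(11):
    a = i / 10
    assert Lam_inv(entropy(thin(X, a))) >= a * Lam_inv(entropy(X)) - 1e-6
```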
We hope to report progress on (12) in future work. Given the fundamental importance of (11), it would also be interesting to see potential applications of (12) (if true) and (1). For example, Oohama [9] used the entropy power inequality (11) to solve the multi-terminal source coding problem. This determined the rate at which information can be transmitted from several sources that produce correlated Gaussian signals, but are unable to collaborate or communicate with each other, under the addition of Gaussian noise. It would be of interest to know whether (12) could lead to a corresponding result for discrete channels.
[1] S. Artstein, K. M. Ball, F. Barthe and A. Naor, “Solution of Shannon’s problem on the monotonicity of entropy,” J. Amer. Math. Soc., vol. 17, no. 4, pp. 975–982 (electronic), 2004.
[2] A. R. Barron, “Entropy and the central limit theorem,” Ann. Probab., vol. 14, no. 1, pp. 336–342, 1986.
[3] N. M. Blachman, “The convolution inequality for entropy powers,” IEEE Trans. Inform. Theory, vol. 11, pp. 267–271, 1965.
[4] A. Dembo, T. M. Cover and J. A. Thomas, “Information theoretic inequalities,” IEEE Trans. Inform. Theory, vol. 37, no. 6, pp. 1501–1518, 1991.
[5] P. Harremoës, O. Johnson, and I. Kontoyiannis, “Thinning and the law of small numbers,” in Proc. IEEE Int. Symp. Inform. Theory, Jun. 2007.
[6] P. Harremoës, O. Johnson, and I. Kontoyiannis, “Thinning and information projections,” in Proc. IEEE Int. Symp. Inform. Theory, Jul. 2008.
[7] O. Johnson, “Log-concavity and the maximum entropy property of the Poisson distribution,” Stochastic Process. Appl., vol. 117, no. 6, pp. 791–802, 2007.
[8] I. Kontoyiannis, P. Harremoës, and O. T. Johnson, “Entropy and the law of small numbers,” IEEE Trans. Inform. Theory, vol. 51, no. 2, pp. 466–472, Feb. 2005.
[9] Y. Oohama, “Rate-distortion theory for Gaussian multiterminal source coding systems with several side informations at the decoder,” IEEE Trans. Inform. Theory, vol. 51, no. 7, pp. 2577–2593, Jul. 2005.
[10] A. Rényi, “A characterization of Poisson processes,” Magyar Tud. Akad. Mat. Kutató Int. Közl., vol. 1, pp. 519–527, 1956.
[11] L. A. Shepp and I. Olkin, “Entropy of the sum of independent Bernoulli random variables and of the multinomial distribution,” in Contributions to Probability, pp. 201–206, Academic Press, New York, 1981.
[12] A. J. Stam, “Some inequalities satisfied by the quantities of information of Fisher and Shannon,” Inform. Contr., vol. 2, no. 2, pp. 101–112, Jun. 1959.
[13] Y. Yu, “On the maximum entropy properties of the binomial distribution,” IEEE Trans. Inform. Theory, vol. 54, no. 7, pp. 3351–3353, Jul. 2008.
[14] Y. Yu, “Monotonic convergence in an information-theoretic law of small numbers,” Technical report, Department of Statistics, University of California, Irvine, 2008. http://arxiv.org/abs/0810.5203