Entropy Jumps for Radially Symmetric Random Vectors


Thomas A. Courtade
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Abstract

We establish a quantitative bound on the entropy jump associated to the sum of independent, identically distributed (IID) radially symmetric random vectors having dimension greater than one. Following the usual approach, we first consider the analogous problem of Fisher information dissipation, and then integrate along the Ornstein-Uhlenbeck semigroup to obtain an entropic inequality. In a departure from previous work, we appeal to a result by Desvillettes and Villani on entropy production associated to the Landau equation. This obviates strong regularity assumptions, such as the presence of a spectral gap and log-concavity of densities, but comes at the expense of assuming radial symmetry. As an application, we give a quantitative estimate of the deficit in the Gaussian logarithmic Sobolev inequality for radially symmetric functions.

1 Introduction

Let $X$ be a random vector on $\mathbb{R}^d$ with density $f$. The entropy associated to $X$ is defined by

$$h(X) = -\int_{\mathbb{R}^d} f(x)\,\log f(x)\,dx, \qquad (1)$$

provided the integral exists. The non-Gaussianness of $X$, denoted by $D(X)$, is given by

$$D(X) = h(G_X) - h(X), \qquad (2)$$

where $G_X$ denotes a Gaussian random vector with the same covariance as $X$. Evidently, $D(X)$ is the relative entropy of $X$ with respect to $G_X$, and is therefore nonnegative. Moreover, $D(X) = 0$ if and only if $X$ is Gaussian.
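The identification of $D(X)$ with a relative entropy is the standard computation below (assuming, as usual, that $G_X$ also matches the mean of $X$); it uses only the fact that $\log f_{G_X}$ is a quadratic polynomial whose expectation is the same under $f$ and $f_{G_X}$, since the two densities share the same mean and covariance:

$$D(f\,\|\,f_{G_X}) = \int f \log\frac{f}{f_{G_X}} = -h(X) - \int f\,\log f_{G_X} = -h(X) - \int f_{G_X}\,\log f_{G_X} = h(G_X) - h(X).$$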

Our main result may be informally stated as follows: Let $X_1, X_2$ be IID radially symmetric random vectors on $\mathbb{R}^d$, $d \geq 2$, with sufficiently regular density $f$. For any

(3)

where the right-hand side of (3) is an explicit function depending on $d$, the regularity of $f$, and a finite number of moments of $X_1$. In particular, if for some , then . A precise statement can be found in Section 3, along with an analogous result for Fisher information and a related estimate that imposes no regularity conditions. In interpreting the result, it is important to note that, although a radially symmetric density has a one-dimensional parameterization, the convolution $f * f$ is inherently a $d$-dimensional operation unless $f$ is Gaussian. Thus, it does not appear that (3) can be easily reduced to a one-dimensional problem.

The quantity $h\big(\tfrac{X_1+X_2}{\sqrt{2}}\big) - h(X_1)$ characterizes the entropy production (or, entropy jump) associated to $X_1$ under rescaled convolution. Similarly, letting $I(\cdot)$ denote Fisher information, the quantity $I(X_1) - I\big(\tfrac{X_1+X_2}{\sqrt{2}}\big)$ characterizes the dissipation of Fisher information. By the convolution inequalities of Shannon [1], Blachman [2] and Stam [3] for entropy power and Fisher information, it follows that both the production of entropy and the dissipation of Fisher information under rescaled convolution are nonnegative. Moreover, both quantities are identically zero if and only if $X_1$ is Gaussian.

The fundamental problem of bounding entropy production (and dissipation of Fisher information) has received considerable attention, yet quantitative bounds are few. In particular, the entropy power inequality establishes that entropy production is strictly greater than zero unless $X_1$ is Gaussian, but gives no indication of how entropy production behaves as, say, a function of $D(X_1)$ when $X_1$ is non-Gaussian. As a consequence, basic stability properties of the entropy power inequality remain elusive, despite it being a central inequality in information theory. For instance, a satisfactory answer to the following question is still out of reach: if the entropy power inequality is nearly saturated, are the random summands involved quantifiably close to Gaussian?

Perhaps the first major result to address the question of entropy production in this setting is due to Carlen and Soffer [4], who showed that for each random vector $X$ with finite second moment and finite Fisher information, there exists a nonnegative function on $[0,\infty)$, strictly increasing from 0, and depending only on the two auxiliary functions

(4)

and

(5)

such that

(6)

Moreover, the same function will do for any density exhibiting the same auxiliary functions (4) and (5). Hence, this provides a nonlinear estimate of entropy production in terms of $D(X)$ that holds uniformly over all probability densities that exhibit the same decay and smoothness properties (appropriately defined). Unfortunately, the proof establishing existence of this function relies on a compactness argument, and therefore falls short of giving satisfactory quantitative bounds.

A random vector $X$ with density $f$ has spectral gap $\lambda > 0$ (equivalently, finite Poincaré constant $1/\lambda$) if, for all smooth functions $g$ with $\mathbb{E}[g(X)] = 0$,

$$\lambda\,\mathbb{E}\big[g(X)^2\big] \le \mathbb{E}\,\big|\nabla g(X)\big|^2. \qquad (7)$$

Generally speaking, a non-zero spectral gap is a very strong regularity condition on the density $f$ (e.g., it implies $X$ has finite moments of all orders).
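As a point of reference, the standard Gaussian satisfies (7) with $\lambda = 1$; this is the Gaussian Poincaré inequality,

$$\mathbb{E}\big[g(Z)^2\big] \le \mathbb{E}\,\big|\nabla g(Z)\big|^2 \qquad \text{for all smooth } g \text{ with } \mathbb{E}[g(Z)] = 0,$$

with equality for linear functions $g(x) = \langle a, x \rangle$.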

In dimension $d = 1$, if $X_1$ has spectral gap $\lambda$, then [5, 6] established the linear bound below. (Technically speaking, Barron and Johnson establish a slightly different inequality; however, a modification of their argument gives the same result as Ball, Barthe and Naor. See the discussion surrounding [7, Theorem 2.4].)

(8)

In dimension $d \geq 2$, Ball and Nguyen [8] recently established an essentially identical result under the additional assumption that $X_1$ is isotropic (i.e., $\mathbb{E}[X_1 X_1^T] = \mathrm{Id}$) with log-concave density $f$. Along these lines, Toscani has established a strengthened entropy power inequality for log-concave densities [9], but the deficit term is qualitative in nature, in contrast to the quantitative estimate obtained by Ball and Nguyen.

Clearly, entropy production and Fisher information dissipation are closely related to convergence rates in the entropic and Fisher information central limit theorems. Generally speaking though, bounds in the spirit of (8) are unnecessarily strong for establishing entropic central limit theorems of the form $D(S_n) \to 0$, where $S_n = \frac{1}{\sqrt{n}}\sum_{i=1}^n X_i$ denotes the normalized sum of $n$ IID copies of $X_1$. Indeed, it was long conjectured that $D(S_n) = O(1/n)$ under moment conditions. This was positively resolved by Bobkov, Chistyakov and Götze [10, 11] using Edgeworth expansions and local limit theorems. By Pinsker’s inequality, we know that $D(S_n)$ dominates squared total-variation distance, so $D(S_n) = O(1/n)$ is interpreted as a version of the Berry-Esseen theorem for the entropic CLT with the optimal convergence rate. However, while the results of Bobkov et al. give good long-range estimates of the form $D(S_n) = c/n + o(1/n)$ (with explicit constants $c$ depending on the cumulants of $X_1$), the smaller-order terms propagate from local limit theorems for Edgeworth expansions and are non-explicit. Thus, explicit bounds for the initial entropy jump cannot be readily obtained.
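For reference, Pinsker's inequality in the normalization used here (natural logarithms) reads

$$\sup_{A}\,\big|P(A) - Q(A)\big|^2 \le \tfrac{1}{2}\,D(P\,\|\,Q),$$

so a bound of the form $D(S_n) = O(1/n)$ indeed implies convergence in total variation at the Berry-Esseen rate $O(n^{-1/2})$.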

Along these lines, we remark that Ledoux, Nourdin and Peccati [12] recently established a slightly weaker convergence rate via the explicit bound

$$D(X\,\|\,Z) \le \frac{S^2(X\,\|\,Z)}{2}\,\log\!\left(1 + \frac{I(X\,\|\,Z)}{S^2(X\,\|\,Z)}\right), \qquad (9)$$

where $I(X\,\|\,Z)$ is the relative Fisher information of $X$ with respect to the standard normal $Z$, and $S(X\,\|\,Z)$ denotes the Stein discrepancy of $X$ with respect to $Z$, which is defined when a Stein kernel exists (see [12] for definitions). In principle, this has potential to give an explicit bound on the entropy production in terms of relative Fisher information by considering sums of two IID random vectors. Unfortunately, a Stein kernel may not always exist; even if it does, further relationships between the Stein discrepancy and $D(X_1)$ or $I(X_1\,\|\,Z)$ would need to be developed to ultimately yield a bound like (3).

Another related line of work in statistical physics considers quantitative bounds on entropy production in the Boltzmann equation (see the review [13] for an overview). The two problems are not the same, but there is a strong analogy between entropy production in the Boltzmann equation and entropy jumps associated to rescaled convolution, as can be seen by comparing [14] to [4]. The details of this rich subject are tangential to the present discussion, but we remark that a major milestone in this area was achieved when the entropy production in the Boltzmann equation was bounded from below by an explicit function of the relative entropy with respect to the Maxwellian equilibrium (and various norms of the underlying density), where the density models the velocity distribution of a particle in a rarefied gas [15, 16]. A key ingredient used to prove this bound was an earlier result by Desvillettes and Villani that controls relative Fisher information via entropy production in the Landau equation:

Lemma 1.

[17] Let $X$ be a random vector on $\mathbb{R}^d$ with finite second moment and density $f$. Then,

(10)

where $\lambda$ is the minimum eigenvalue of the covariance matrix associated to $X$ and, for a nonzero vector $v$, $\Pi_{v^{\perp}}$ denotes the orthogonal projection onto the subspace orthogonal to $v$.

Our proof of (3) follows a program similar to [15, 16], and is conceptually straightforward after the correct ingredients are assembled. In particular, we begin by recognizing that the LHS of (10) resembles dissipation of Fisher information when written in the context of projections (cf. [6, Lemma 3.1]). Using the radial symmetry assumption, we are able to bound the Fisher information dissipation from below by error terms plus entropy production in the Landau equation, which is subsequently bounded by relative Fisher information using Lemma 1. Care must be exercised in order to control error terms (this is where our regularity assumptions enter), but the final result (3) closely parallels that proved in [15] for the Boltzmann equation. We remark that the assumption of a non-vanishing Boltzmann collision kernel in [15] has a symmetrizing effect on the particle density functions involved; the rough analog in the present paper is the radial symmetry assumption.

Organization

The rest of this paper is organized as follows. Section 2 briefly introduces notation and definitions that are used throughout. Main results are stated and proved in Section 3, followed by a brief discussion on potential extensions to non-symmetric distributions. Section 4 gives an application of the results to bounding the deficit in the Gaussian logarithmic Sobolev inequality.

2 Notation and Definitions

For a vector $x \in \mathbb{R}^d$, we let $|x|$ denote its Euclidean norm. For a random variable $X$ on $\mathbb{R}^d$ and $p \geq 1$, we write $\|X\|_p = \big(\mathbb{E}\,|X|^p\big)^{1/p}$ for the usual $L^p$-norm of $X$. It will be convenient to use the same notation for $0 < p < 1$, with the understanding that $\|\cdot\|_p$ is not a norm in this case.

Throughout, $Z$ denotes a standard Gaussian random vector on $\mathbb{R}^d$; the dimension will be clear from context. For a random vector $X$ on $\mathbb{R}^d$, we let $G_X$ be a normalized Gaussian vector, so that $\mathbb{E}|G_X|^2 = \mathbb{E}|X|^2$. For $1 \leq i \leq d$, we denote the coordinates of a random vector $X$ on $\mathbb{R}^d$ by $X = (X^{(1)}, \dots, X^{(d)})$. Thus, for example, $G_X^{(i)}$ is a zero-mean Gaussian random variable with variance $\frac{1}{d}\,\mathbb{E}|X|^2$.

For a random vector $X$ with smooth density $f$ (all densities are with respect to Lebesgue measure), we define the Fisher information

$$I(X) = \int_{\mathbb{R}^d} \frac{|\nabla f(x)|^2}{f(x)}\,dx = \mathbb{E}\,\big|\nabla \log f(X)\big|^2 \qquad (11)$$

and the entropy

$$h(X) = -\int_{\mathbb{R}^d} f(x)\,\log f(x)\,dx, \qquad (12)$$

where ‘$\log$’ denotes the natural logarithm throughout. For random vectors $X, Y$ with respective densities $f, g$, the relative Fisher information is defined by

$$I(X\,\|\,Y) = \int_{\mathbb{R}^d} f(x)\,\Big|\nabla \log \frac{f(x)}{g(x)}\Big|^2\,dx, \qquad (13)$$

and the relative entropy is defined by

$$D(X\,\|\,Y) = \int_{\mathbb{R}^d} f(x)\,\log \frac{f(x)}{g(x)}\,dx. \qquad (14)$$

Evidently, both quantities are nonnegative and

(15)

Finally, we recall two basic inequalities that will be taken for granted several times without explicit reference: for real-valued $a, b$ we have $(a+b)^2 \leq 2a^2 + 2b^2$, and for random variables $X, Y$ we have Minkowski’s inequality $\|X+Y\|_p \leq \|X\|_p + \|Y\|_p$ when $p \geq 1$.

Definition 1.

A random vector $X$ on $\mathbb{R}^d$ with density $f$ is radially symmetric if $f(x) = \varphi(|x|)$, $x \in \mathbb{R}^d$, for some function $\varphi : [0,\infty) \to [0,\infty)$.

We primarily concern ourselves with random vectors that satisfy certain mild regularity conditions. In particular, it is sufficient for our purposes to control $|\nabla \log f(x)|$ pointwise in terms of $|x|$.

Definition 2.

A random vector $X$ on $\mathbb{R}^d$ with smooth density $f$ is $c$-regular if, for all $x \in \mathbb{R}^d$,

$$|\nabla \log f(x)| \leq c\,\big(1 + |x|\big). \qquad (16)$$

We remark that the smoothness requirement on $f$ in the definition of $c$-regularity is stronger than generally required for our purposes. However, it allows us to avoid further qualifications; for instance, the identities in (11) hold for any $c$-regular density. Moreover, since $I(X) = \mathbb{E}\,|\nabla \log f(X)|^2$ for smooth $f$, we have $I(X) < \infty$ for any $c$-regular $X$ with $\mathbb{E}|X|^2 < \infty$.
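Under the linear-growth reading of (16) adopted above, the finiteness claim follows from the elementary estimate

$$I(X) = \mathbb{E}\,|\nabla \log f(X)|^2 \leq c^2\,\mathbb{E}\,(1+|X|)^2 \leq 2c^2\big(1 + \mathbb{E}|X|^2\big) < \infty.$$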

Evidently, $c$-regularity quantifies the smoothness of a density function. The following important example shows that any density can be mollified to make it $c$-regular.

Proposition 1.

[18] Let $X$ and $Z_\sigma$ be independent, where $Z_\sigma \sim N(0, \sigma^2\,\mathrm{Id})$ for some $\sigma > 0$. Then $X + Z_\sigma$ is $c$-regular for an explicit constant $c$.
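To illustrate in the simplest case (again under the linear-growth reading of (16) above): if $X = 0$ almost surely, then $X + Z_\sigma \sim N(0, \sigma^2\,\mathrm{Id})$ has density $f$ satisfying

$$\nabla \log f(x) = -\frac{x}{\sigma^2}, \qquad |\nabla \log f(x)| = \frac{|x|}{\sigma^2} \leq \frac{1}{\sigma^2}\,(1 + |x|),$$

so the mollified vector is $\sigma^{-2}$-regular.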

Observe that, in the notation of the above proposition, if $X$ is radially symmetric then so is $X + Z_\sigma$. Therefore, Proposition 1 provides a convenient means to construct radially symmetric random vectors that are $c$-regular. Indeed, we have the following useful corollaries (proofs are found in the appendix).

Proposition 2.

Let $X$ be a random vector on $\mathbb{R}^d$, and let $X_t = e^{-t} X + \sqrt{1 - e^{-2t}}\,Z$ for $t \geq 0$, with $Z$ independent of $X$. If $X$ is $c$-regular, then $X_t$ is $c_t$-regular for an explicit $c_t$ depending only on $c$ and $t$.

Proposition 3.

Let $R$ be a non-negative random variable with finite second moment and distribution function $F_R$. For any $\epsilon > 0$ and $d \geq 2$, there exists a $c$-regular radially symmetric random vector $X$ on $\mathbb{R}^d$ with finite second moment and satisfying

(17)

where $F_{|X|}$ is the distribution function of $|X|$.

3 Main Results

In this section, we establish quantitative estimates on entropy production and Fisher information dissipation under rescaled convolution. As can be expected, we begin with an inequality for Fisher information, and then obtain a corresponding entropy jump inequality by integrating along the Ornstein-Uhlenbeck semigroup.

3.1 Dissipation of Fisher Information under Rescaled Convolution

Theorem 1.

Let $X_1, X_2$ be IID radially symmetric random vectors on $\mathbb{R}^d$, $d \geq 2$, with $c$-regular density $f$. For any

(18)

where

(19)
Remark 1.

We have made no attempt to optimize the constant in (19).

A few comments are in order. First, we note that inequality (18) is invariant to the scaling $X_i \mapsto a X_i$ for $a > 0$. Indeed, if $X$ is $c$-regular, then a change of variables shows that $aX$ is again regular, with parameter obtained by rescaling $c$. So, using homogeneity of the norms, we find that

(20)

Combined with the property that $I(aX) = a^{-2}\,I(X)$, we have

(21)

which has the same scaling behavior as the LHS of (18). That is,

(22)

Second, inequality (18) does not contain any terms that explicitly depend on dimension. However, it is impossible to say that inequality (18) is dimension-free in the usual sense that both sides scale linearly in dimension when considering product distributions. Indeed, the product of two identical radially symmetric densities is again radially symmetric if and only if the original densities were Gaussian themselves, which corresponds to the degenerate case when the dissipation of Fisher information is identically zero. However, inequality (18) does exhibit dimension-free behavior in the following sense: Suppose for simplicity that $X_1$ is normalized so that $\mathbb{E}|X_1|^2 = d$. Since $X_1$ is radially symmetric, it can be expressed as the product $X_1 = R\,\Theta$ of independent random variables, where $\Theta$ is uniform on the unit sphere $S^{d-1} \subset \mathbb{R}^d$ and $R = |X_1|$ is a nonnegative real-valued random variable satisfying $\mathbb{E}[R^2] = d$. Now, by the log Sobolev inequality and Talagrand’s inequality, we have

(23)

The equality follows since, for any unit vectors $\theta, \theta'$ and any $r, s \geq 0$, we have $|r\theta - s\theta'| \geq |r - s|$. However, this can be achieved with equality by a coupling in which the two directions coincide. Thus, we have

(24)
(25)

Now, we note that , so we have a bound of the form

(26)

where the function is effectively dimension-free in that it only depends on the (one-dimensional) quadratic Wasserstein distance between the laws of $R$ and $|G_{X_1}|$. For large $d$, the law of large numbers implies that $|G_{X_1}|/\sqrt{d} \to 1$ a.s. Therefore, $|G_{X_1}|$ behaves similarly to the constant $\sqrt{d}$ in high dimensions. Indeed, by the triangle inequality,

(27)

So, we see that (18) depends very weakly on $d$ when the marginal distribution of $|X_1|$ is preserved and the dimension varies.
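The law of large numbers claim invoked above is the standard concentration of the norm of a Gaussian vector: if $Z$ is a standard Gaussian vector on $\mathbb{R}^d$, then

$$\frac{|Z|^2}{d} = \frac{1}{d}\sum_{i=1}^{d} \big(Z^{(i)}\big)^2 \;\longrightarrow\; \mathbb{E}\big[(Z^{(1)})^2\big] = 1 \qquad \text{a.s. as } d \to \infty,$$

so $|Z|/\sqrt{d} \to 1$ almost surely.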

One important question remains: as the dimension $d \to \infty$, do there exist random vectors on $\mathbb{R}^d$ with sufficient regularity for which the associated radial random variable $R = |X|$ is not necessarily concentrated around its mean? The answer to this is affirmative in the sense of Proposition 3: we may approximate any distribution function to within arbitrary accuracy, at the (potential) expense of increasing the regularity parameter $c$.

Proof of Theorem 1.

As remarked above, inequality (18) is invariant to scaling. Hence, there is no loss of generality in assuming that $X_1$ is normalized according to $\mathbb{E}|X_1|^2 = d$. Also, since $f$ is radially symmetric, $\frac{X_1 - X_2}{\sqrt{2}}$ is equal to $\frac{X_1 + X_2}{\sqrt{2}}$ in distribution; therefore we seek to lower bound the quantity

(28)

Toward this end, define , and denote its density by . By the projection property of the score function of sums of independent random variables, the following identity holds (e.g., [7, Lemma 3.4]):

(29)

where is the score function of and is the score function of .

For a nonzero vector $v \in \mathbb{R}^d$, let $\Pi_{v^{\perp}}$ denote the orthogonal projection onto the subspace orthogonal to $v$. Now, we have

(30)
(31)
(32)

The inequality follows since by definition, and since is an orthogonal projection. The last equality follows since due to the fact that is the tangential gradient of , which is identically zero due to radial symmetry of .

Next, for any , use the inequality

(33)

to conclude that

(34)

We bound the second term first. By $c$-regularity and the triangle inequality, we have

(35)

So, noting the inclusion

(36)

we have the pointwise inequality

(37)
(38)
(39)

Taking expectations and using the fact that $X_1, X_2$ are IID, we have for any conjugate exponents $p$ and $q$ (i.e., $1/p + 1/q = 1$),

(40)
(41)
(42)
(43)
(44)
(45)

Since $\mathbb{E}|X_1|^2 = d$, radial symmetry implies that the covariance matrix of $X_1$ is the identity. Therefore, by Lemma 1, we have

(46)
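The covariance identity used above is the standard computation for radially symmetric vectors: the off-diagonal entries of $\mathbb{E}[X_1 X_1^T]$ vanish by symmetry and the diagonal entries are equal, so

$$\mathbb{E}\big[X_1 X_1^T\big] = \frac{\mathbb{E}|X_1|^2}{d}\,\mathrm{Id} = \mathrm{Id}$$

under the normalization adopted at the start of the proof; in particular, the minimum eigenvalue appearing in Lemma 1 equals one.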

Continuing from above, we have proved that

(47)

For any , Taking yields the identity

(48)

So, putting , and simplifying, we obtain

(49)
(50)
(51)
(52)

where we have made use of the crude bound and substituted . ∎

3.2 Entropy Production under Rescaled Convolution

As one would expect, we may ‘integrate up’ in Theorem 1 to obtain an entropic version. A precise version of the result stated in Section 1 is given as follows:

Theorem 2.

Let $X_1, X_2$ be IID radially symmetric random vectors on $\mathbb{R}^d$, $d \geq 2$, with $c$-regular density $f$. For any

(53)

where

(54)
Remark 2.

Although the constant appears to grow favorably with dimension $d$, this dimension-dependent growth can cancel to give a bound that is effectively dimension-free. An illustrative example follows the proof.

Proof.

Similar to before, the inequality (53) is scale-invariant. Indeed, all relative entropy terms are invariant to the scaling $X_i \mapsto a X_i$, and the right-hand side transforms consistently, since $aX$ is again regular whenever $X$ is $c$-regular and the norms are homogeneous. Thus, we may assume without loss of generality that $X_1$ is normalized so that $\mathbb{E}|X_1|^2 = d$. Next, define $Y = \frac{X_1 + X_2}{\sqrt{2}}$, and let $X_t, Y_t$ denote the Ornstein-Uhlenbeck evolutes of $X_1$ and $Y$, respectively. That is, for $t \geq 0$,

$$X_t = e^{-t} X_1 + \sqrt{1 - e^{-2t}}\,Z \qquad \text{and} \qquad Y_t = e^{-t} Y + \sqrt{1 - e^{-2t}}\,Z, \qquad (55)$$

with $Z$ independent of $(X_1, X_2)$ in each instance.

By Proposition 2, $X_t$ is $c_t$-regular for all $t \geq 0$. Noting that $Y_t$ is distributed as $\frac{1}{\sqrt{2}}(X_t' + X_t'')$ for IID copies $X_t', X_t''$ of $X_t$, an application of Theorem 1 gives

(56)
(57)

where (57) holds since, for ,

(58)
(59)
(60)
(61)

The bound (61) uses the fact that $|Z|^2$ is a chi-squared random variable with $d$ degrees of freedom, and hence (using standard chi-squared moment identities):

(62)
(63)
(64)
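For reference, the moments of $|Z|^2 \sim \chi^2_d$ admit the closed form

$$\mathbb{E}\big[|Z|^{2k}\big] = 2^k\,\frac{\Gamma(\tfrac{d}{2} + k)}{\Gamma(\tfrac{d}{2})} = d\,(d+2)\cdots(d+2k-2), \qquad k = 1, 2, \dots,$$

so, in particular, $\mathbb{E}|Z|^2 = d$ and $\mathbb{E}|Z|^4 = d(d+2)$.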

Now, the claim will follow by integrating both sides. Indeed, by the classical de Bruijn identity, we have

(65)
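In the form relevant here, the de Bruijn identity along the Ornstein-Uhlenbeck semigroup states that, for a random vector $W$ with smooth density, finite second moment and $D(W\,\|\,Z) < \infty$, whose evolute $W_t$ is defined as in (55),

$$\frac{d}{dt}\,D(W_t\,\|\,Z) = -\,I(W_t\,\|\,Z), \qquad \text{and hence} \qquad D(W\,\|\,Z) = \int_0^{\infty} I(W_t\,\|\,Z)\,dt.$$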

By Jensen’s inequality,

(66)
(67)
(68)

where we used the bound due to exponential decay of information along the semigroup (e.g., [19]), a change of variables, and the identity . Thus, we have proved