On the isoperimetric constant, covariance inequalities and $L_p$-Poincaré inequalities in dimension one


Adrien Saumard (asaumard@gmail.com), CREST, Ensai, Université Bretagne Loire
Jon A. Wellner (jaw@stat.washington.edu), Department of Statistics, University of Washington, Seattle, WA 98195-4322
Abstract

Firstly, we derive in dimension one a new covariance inequality of $L_1$-$L_\infty$ type that characterizes the isoperimetric constant as the best constant achieving the inequality. Secondly, we generalize our result to $L_p$-$L_q$ bounds for the covariance. Consequently, we recover Cheeger's inequality without using the co-area formula. We also prove a generalized weighted Hardy type inequality that is needed to derive our covariance inequalities and that is of independent interest. Finally, we explore some consequences of our covariance inequalities for $L_p$-Poincaré inequalities and moment bounds. In particular, we obtain optimal constants in general $L_p$-Poincaré inequalities for measures with finite isoperimetric constant, thus generalizing in dimension one Cheeger's inequality, which is an $L_p$-Poincaré inequality for $p = 2$, to any integer $p \geq 1$.

Running title: covariance inequalities

t1: Supported by NI-AID grant 2R01 AI29168-04, and by a PIMS postdoctoral fellowship. t2: Supported in part by NSF Grant DMS-1566514, NI-AID grant 2R01 AI291968-04.

AMS subject classifications: Primary 60E15, 60F10.

Keywords: covariance inequality, covariance formula, isoperimetric constant, Cheeger's inequality, moment bounds.


1 The isoperimetric constant

For a measure $\mu$ on $\mathbb{R}^d$ (we will focus on $d = 1$), an isoperimetric inequality is an inequality of the form,
\[
\mu^{+}(A) \ \geq\ h \, \min\{\mu(A),\, 1 - \mu(A)\}, \tag{1.1}
\]
where $h \geq 0$, $A$ is an arbitrary measurable set in $\mathbb{R}^d$ and $\mu^{+}(A)$ stands for the perimeter of $A$, defined to be
\[
\mu^{+}(A) \ =\ \liminf_{\varepsilon \to 0^{+}} \frac{\mu(A^{\varepsilon}) - \mu(A)}{\varepsilon},
\]
where $A^{\varepsilon} = \{x \in \mathbb{R}^d : \exists\, a \in A,\ |x - a| < \varepsilon\}$ is an $\varepsilon$-neighborhood of $A$. The optimal value of $h$ in (1.1) is referred to as the Cheeger isoperimetric constant of $\mu$ and is denoted by $\operatorname{Is}(\mu)$ in what follows. It turns out that the isoperimetric constant is linked to the best constant in Poincaré's inequality; this is the celebrated Cheeger's inequality: it says that if $C_P$ satisfies
\[
\operatorname{Var}_{\mu}(f) \ \leq\ C_P \int f'^{\,2} \, d\mu \tag{1.2}
\]
for every smooth (i.e. locally Lipschitz) function $f$ on $\mathbb{R}$, then we can take $C_P = 4/\operatorname{Is}(\mu)^2$.

On $\mathbb{R}$, the isoperimetric constant achieves the following identity (Bobkov and Houdré [1997a], Theorem 1.3),
\[
\operatorname{Is}(\mu) \ =\ \operatorname*{ess\,inf}_{x \in \mathbb{R}} \ \frac{p_{\mu}(x)}{\min\{F(x),\, 1 - F(x)\}},
\]
where $F(x) = \mu((-\infty, x])$ is the cumulative distribution function of $\mu$ and $p_{\mu}$ is the density of (the absolutely continuous part of) $\mu$. Note that half-lines make this identity transparent: $\mu^{+}((-\infty, x]) = p_{\mu}(x)$, while $\min\{\mu((-\infty, x]),\, 1 - \mu((-\infty, x])\} = \min\{F(x),\, 1 - F(x)\}$, so half-lines exactly realize the ratio above.
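As a concrete illustration of this identity (the following elementary computations are added here for concreteness), consider the two-sided exponential measure $d\nu(x) = \tfrac12 e^{-|x|}\,dx$: one has $F(x) = \tfrac12 e^{x}$ for $x \leq 0$ and $1 - F(x) = \tfrac12 e^{-x}$ for $x \geq 0$, so that
\[
\frac{p_{\nu}(x)}{\min\{F(x),\, 1 - F(x)\}} \ \equiv\ 1, \qquad \text{hence } \operatorname{Is}(\nu) = 1.
\]
Since the optimal Poincaré constant of $\nu$ is known to be $4$, Cheeger's bound $C_P = 4/\operatorname{Is}(\nu)^2$ is attained in this case. For the standard Gaussian measure $\gamma$, the hazard ratio $\varphi(x)/(1 - \Phi(x))$ is increasing on $[0, +\infty)$, so by symmetry the essential infimum is attained at the median and $\operatorname{Is}(\gamma) = 2\varphi(0) = \sqrt{2/\pi}$.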

In addition to being defined via relations on sets (1.1), the isoperimetric constant can be stated through the use of a functional inequality. Indeed, the isoperimetric constant is also the optimal constant $h$ satisfying the following analytic (Cheeger-type) inequality (see for instance p. 192, Bobkov and Houdré [1997a]),
\[
h \int_{\mathbb{R}} |f - m(f)| \, d\mu \ \leq\ \int_{\mathbb{R}} |f'| \, d\mu, \tag{1.3}
\]
where $f$ is an integrable, (locally) Lipschitz function on $\mathbb{R}$ and $m(f)$ is a median of $f$ with respect to $\mu$.
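To see that the constant in (1.3) cannot be improved in general (a simple worked check, added for illustration), take the two-sided exponential measure $\nu$ above and $f(x) = x$, whose median under $\nu$ is $0$; then, with $h = \operatorname{Is}(\nu) = 1$,
\[
\int_{\mathbb{R}} |x| \, d\nu(x) \ =\ 1 \ =\ \int_{\mathbb{R}} |f'| \, d\nu,
\]
so that (1.3) holds with equality.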

Inequality (1.3) is also termed an $L_1$-Poincaré inequality. Instances of $L_1$-Poincaré inequalities could also be considered for a centering of $f$ by its mean rather than its median, since
\[
\int |f - m(f)| \, d\mu \ \leq\ \int |f - \mathbb{E}_{\mu} f| \, d\mu \ \leq\ 2 \int |f - m(f)| \, d\mu. \tag{1.4}
\]
Bobkov and Houdré [1997b], Chapter 14, studied related optimal constants in Sobolev-type inequalities defined from Orlicz norms.
In a different direction, inequality (1.3) may also be seen as a special instance of a covariance inequality. Indeed, by definition of a median, it follows that
\[
\int \operatorname{sign}(f - m(f)) \, d\mu \ =\ 0
\]
(whenever $\mu(f = m(f)) = 0$, which we may assume here), and so,
\[
\operatorname{Is}(\mu) \operatorname{Cov}\big(f, \operatorname{sign}(f - m(f))\big) \ \leq\ \int |f'| \, d\mu, \tag{1.5}
\]
where $\operatorname{sign}(x) = \mathbf{1}_{\{x > 0\}} - \mathbf{1}_{\{x < 0\}}$. Indeed, the centering in the covariance costs nothing here, so that $\operatorname{Cov}(f, \operatorname{sign}(f - m(f))) = \int |f - m(f)| \, d\mu$ and (1.5) is exactly (1.3).

A natural question then arises: can the isoperimetric constant be defined as the optimal constant in a covariance inequality bounding, for suitable functions $g$ and $h$, their covariance $\operatorname{Cov}(g, h)$ by the first absolute moment of $g'$, that is $\int |g'| \, d\mu$? Surely, the upper bound on the covariance will also depend on some function of the magnitude of $h$.

Recently, Menz and Otto [2013] have established in dimension one what they call an asymmetric Brascamp-Lieb inequality: if $\mu$ is a strictly log-concave measure on $\mathbb{R}$, then for smooth and square integrable functions $g$ and $h$ on $\mathbb{R}$,
\[
|\operatorname{Cov}(g, h)| \ \leq\ \left\| \frac{h'}{\varphi''} \right\|_{\infty} \left\| g' \right\|_{1}, \tag{1.6}
\]
where $\varphi$ is the potential of the measure $\mu$, defined by the relation $d\mu(x) = e^{-\varphi(x)} \, dx$, and the norms are taken with respect to $\mu$. This result has been generalized by Carlen, Cordero-Erausquin and Lieb [2013] to higher dimension and to $L_p$-$L_q$ versions (rather than $L_\infty$-$L_1$). For a strictly log-concave measure $\mu$ on $\mathbb{R}^d$, $d \geq 1$, any square integrable locally Lipschitz functions $g$ and $h$ and $p, q \geq 1$ with $1/p + 1/q = 1$, it holds that
\[
|\operatorname{Cov}(g, h)| \ \leq\ \left\| \frac{|\nabla h|}{\lambda_{\min}} \right\|_{p} \left\| \, |\nabla g| \, \right\|_{q},
\]
where $\lambda_{\min}(x)$ is the least eigenvalue of $\operatorname{Hess} \varphi(x)$. Taking $(p, q) = (\infty, 1)$ in dimension one recovers (1.6).

Such results seem close in their form to what we would want to have to generalize (1.3). However, it is well-known (see for instance Ledoux [2004]) that for a log-concave measure $\mu$, if $\varphi'' \geq \varepsilon > 0$ - which means that $\mu$ is strongly log-concave, see for instance Saumard and Wellner [2014] - then
\[
\operatorname{Is}(\mu) \ \geq\ \frac{c}{\sqrt{C_P(\mu)}} \ \geq\ c \sqrt{\varepsilon}
\]
for a universal constant $c > 0$, where $C_P(\mu)$ is the (optimal) Poincaré constant of $\mu$. In particular, Inequality (1.6) implies in this case,
\[
|\operatorname{Cov}(g, h)| \ \leq\ \frac{1}{\varepsilon} \left\| h' \right\|_{\infty} \left\| g' \right\|_{1} \ \leq\ \frac{1}{c^{2} \operatorname{Is}(\mu)^{2}} \left\| h' \right\|_{\infty} \left\| g' \right\|_{1}, \tag{1.7}
\]
for smooth and square integrable functions $g$ and $h$ on $\mathbb{R}$. But the right-hand side of (1.7) involves $\|h'\|_{\infty}$ rather than $\|h\|_{\infty}$, and we a priori don't know if the right-hand side of (1.7) could be changed to $C \operatorname{Is}(\mu)^{-1} \|h\|_{\infty} \|g'\|_{1}$, where $C$ would be a universal constant - note that we will give a positive answer to this question in the following. Hence the connection between inequalities of the form of (1.7) and the isoperimetric constant is not straightforward.

One of the reasons for this difficulty is that if we try to approximate the function $\operatorname{sign}(f - m(f))$ appearing in (1.5) by a sequence of smooth functions in order to use inequality (1.6) or (1.7) and take the limit, then the sequence of sup-norms of the derivatives will diverge to infinity and it is thus hopeless to recover the $L_1$-Poincaré inequality (1.3) in the limit. Another limitation of asymmetric Brascamp-Lieb inequalities is that they hold for strictly log-concave measures, while our expected covariance inequality would ideally be valid for any measure.

In Section 2, taking the dimension to be one, we establish a covariance inequality that is valid for any measure $\mu$ on $\mathbb{R}$ and that indeed generalizes the $L_1$-Poincaré inequality (1.3). Then we will consider in Section 3 extensions of our covariance inequalities that are related to $L_p$-Poincaré inequalities, for $p \geq 1$; the resulting $L_p$-Poincaré inequalities themselves are detailed in Section 4. In particular, we will prove and make use of some generalized (weighted) Hardy-type inequalities, which are of independent interest. We will explore further consequences of our new covariance inequalities in terms of moment estimates in Section 5.

2 A covariance inequality

All norms, expectations and covariances will be taken with respect to a probability measure $\mu$ on $\mathbb{R}$, so that we will skip the related indices in the notation.

Notice that if an $L_1$-Poincaré inequality (with a centering by the median) holds for a measure $\mu$, that is, if there exists $C > 0$ such that for every smooth integrable $f$,
\[
\int |f - m(f)| \, d\mu \ \leq\ C \int |f'| \, d\mu, \tag{2.1}
\]
then if $h$ is in $L_{\infty}(\mu)$, we get, for $g \in L_1(\mu)$ absolutely continuous,
\[
\operatorname{Cov}(g, h) \ \leq\ 2 C \left\| h \right\|_{\infty} \int |g'| \, d\mu, \tag{2.2}
\]
where $\|h\|_{\infty}$ is the essential supremum of $|h|$ with respect to $\mu$. Indeed, $\operatorname{Cov}(g, h) = \mathbb{E}\left[(g - m(g))(h - \mathbb{E} h)\right] \leq \|h - \mathbb{E} h\|_{\infty} \int |g - m(g)| \, d\mu$, and $\|h - \mathbb{E} h\|_{\infty} \leq 2 \|h\|_{\infty}$. Moreover, the optimal constant in (2.1) is $C = \operatorname{Is}(\mu)^{-1}$.

The following theorem states an inequality in dimension one that is sharper in terms of the control of $h$.

Theorem 2.1.

Let $\mu$ be a probability measure with a positive density $p_{\mu}$ on $\mathbb{R}$, cumulative distribution function $F$ and median $M$. Let $g \in L_1(\mu)$ and $h \in L_{\infty}(\mu)$. Assume also that $g$ and $h$ are absolutely continuous. Then we have,
\[
\operatorname{Cov}(g, h) \ \leq\ \frac{\left\| h - \mathbb{E}_{\mu} h \right\|_{\infty}}{\operatorname{Is}(\mu)} \int |g'| \, d\mu, \tag{2.3}
\]
where $\operatorname{Is}(\mu) = \operatorname*{ess\,inf}_{x \in \mathbb{R}} \, p_{\mu}(x) / \min\{F(x),\, 1 - F(x)\}$ and $\mathbb{E}_{\mu} h = \int h \, d\mu$.

Before proving Theorem 2.1, let us recall a representation formula for the covariance of two functions which first appeared in Menz and Otto [2013] and was further studied in Saumard and Wellner [2017]. We define a non-negative and symmetric kernel $k_{\mu}$ on $\mathbb{R}^2$ by
\[
k_{\mu}(x, y) \ =\ F(x \wedge y)\left(1 - F(x \vee y)\right), \tag{2.4}
\]
where $F$ is the distribution function associated with the probability measure $\mu$ on $\mathbb{R}$.
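Note that $k_{\mu}(x, y) = F(x \wedge y) - F(x) F(y) = \operatorname{Cov}\big(\mathbf{1}_{\{X \leq x\}}, \mathbf{1}_{\{X \leq y\}}\big)$ for $X \sim \mu$. For instance (a worked example added for concreteness), if $\mu$ is the uniform measure on $[0, 1]$, then
\[
k_{\mu}(x, y) \ =\ x \wedge y - x y, \qquad x, y \in [0, 1],
\]
which is the covariance function of the Brownian bridge.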

Theorem 2.2 (Corollary 2.2, Saumard and Wellner [2017]).

If $g$ and $h$ are absolutely continuous and $g \in L_p(\mu)$, $h \in L_q(\mu)$ for some $p, q \geq 1$ with $1/p + 1/q = 1$, then
\[
\operatorname{Cov}(g, h) \ =\ \iint_{\mathbb{R}^2} g'(x) \, k_{\mu}(x, y) \, h'(y) \, dx \, dy. \tag{2.5}
\]

The fact that the covariance representation (2.5) of Theorem 2.2 is valid for any probability measure is specific to dimension one. More precisely, Bobkov, Götze and Houdré [2001] proved that, in dimension $d \geq 2$, a covariance identity of the form of (2.5) implies that $\mu$ is Gaussian. Bobkov, Götze and Houdré [2001] also proved some genuine concentration inequalities from such covariance identities.
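As a simple check of (2.5), take $g = h = \operatorname{id}$: the formula then reads $\operatorname{Var}_{\mu}(X) = \iint k_{\mu}(x, y) \, dx \, dy$, which is Hoeffding's classical covariance identity. For the uniform measure on $[0, 1]$,
\[
\iint_{[0,1]^2} (x \wedge y - x y) \, dx \, dy \ =\ \frac{1}{3} - \frac{1}{4} \ =\ \frac{1}{12} \ =\ \operatorname{Var}(U), \qquad U \sim \mathcal{U}([0, 1]).
\]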

We will also need the following formulas, which are in fact special instances of the previous covariance representation formula.

Theorem 2.3 (Corollary 2.1, Saumard and Wellner [2017]).

For an absolutely continuous function $h \in L_1(\mu)$ and every $x \in \mathbb{R}$,
\[
\int_{\mathbb{R}} k_{\mu}(x, y) \, h'(y) \, dy \ =\ \int_x^{+\infty} h \, d\mu \ -\ (1 - F(x)) \, \mathbb{E}_{\mu}[h] \tag{2.6}
\]
and
\[
\int_{\mathbb{R}} k_{\mu}(x, y) \, h'(y) \, dy \ =\ F(x) \, \mathbb{E}_{\mu}[h] \ -\ \int_{-\infty}^x h \, d\mu. \tag{2.7}
\]
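As a quick sanity check of (2.6) as stated above, let $\mu = \gamma$ be the standard Gaussian measure and $h = \operatorname{id}$, so that $\mathbb{E}_{\gamma}[h] = 0$; then
\[
\int_{\mathbb{R}} k_{\gamma}(x, y) \, dy \ =\ \int_x^{+\infty} t \, \varphi(t) \, dt \ =\ \varphi(x),
\]
where $\varphi$ denotes the standard Gaussian density.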

We are now able to give a proof of Theorem 2.1.

Proof.

Using the notation of Theorem 2.1, Theorem 2.2 yields
\[
\operatorname{Cov}(g, h) \ =\ \int_{\mathbb{R}} g'(x) \left( \int_{\mathbb{R}} k_{\mu}(x, y) \, h'(y) \, dy \right) dx.
\]
Now, by using (2.6) and (2.7), we get, for every $x \in \mathbb{R}$,
\[
\left| \int_{\mathbb{R}} k_{\mu}(x, y) \, h'(y) \, dy \right| \ =\ \left| \mathbb{E}\left[ (h - \mathbb{E}_{\mu} h) \, \mathbf{1}_{\{X > x\}} \right] \right| \ \leq\ \min\{F(x),\, 1 - F(x)\} \left\| h - \mathbb{E}_{\mu} h \right\|_{\infty} \ \leq\ \frac{p_{\mu}(x)}{\operatorname{Is}(\mu)} \left\| h - \mathbb{E}_{\mu} h \right\|_{\infty},
\]
and so $\operatorname{Cov}(g, h) \leq \operatorname{Is}(\mu)^{-1} \left\| h - \mathbb{E}_{\mu} h \right\|_{\infty} \int |g'| \, d\mu$, which is (2.3). ∎

Remark 2.1.

The proof of Theorem 2.1 allows us to give other variants of covariance inequalities. Indeed, we have
\[
|\operatorname{Cov}(g, h)| \ \leq\ \int_{\mathbb{R}} |g'(x)| \left( \int_{\mathbb{R}} k_{\mu}(x, y) \, |h'(y)| \, dy \right) dx.
\]
Then, using Theorem 2.3, we get
\[
|\operatorname{Cov}(g, h)| \ \leq\ \operatorname*{ess\,sup}_{x \in \mathbb{R}} \left( \frac{1}{p_{\mu}(x)} \int_{\mathbb{R}} k_{\mu}(x, y) \, |h'(y)| \, dy \right) \int_{\mathbb{R}} |g'| \, d\mu. \tag{2.8}
\]
The latter covariance inequality generalizes Menz and Otto's covariance inequality (1.6) for strictly log-concave measures to any measure with positive density on the real line. Indeed, let us write $d\mu = e^{-\varphi} \, dx$, with $\varphi = -\ln p_{\mu}$. If $\mu$ is strictly log-concave and $\varphi'$ is absolutely continuous, so that $\varphi$ is convex and $\varphi'' > 0$ a.e. on $\mathbb{R}$, then by Proposition 2.3 and Corollary 2.3 of Saumard and Wellner [2017], we have
\[
\int_{\mathbb{R}} k_{\mu}(x, y) \, \varphi''(y) \, dy \ =\ p_{\mu}(x), \qquad x \in \mathbb{R},
\]
so that $\int k_{\mu}(x, y) \, |h'(y)| \, dy \leq \left\| h' / \varphi'' \right\|_{\infty} p_{\mu}(x)$. Applying the latter bound in inequality (2.8) indeed yields the asymmetric Brascamp-Lieb inequality presented in Menz and Otto [2013]. Thus the covariance inequality (2.8) may be viewed as a generalization of Menz and Otto's result in dimension one.
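For instance, for the standard Gaussian measure one has $\varphi'' \equiv 1$, and the above chain of bounds reduces (2.8) to
\[
|\operatorname{Cov}(g, h)| \ \leq\ \left\| h' \right\|_{\infty} \left\| g' \right\|_{L_1(\gamma)},
\]
which is the Gaussian case of (1.6).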


By setting, for $h \in L_{\infty}(\mu)$,
\[
N_{\infty}(h) \ :=\ \left\| h - \mathbb{E}_{\mu} h \right\|_{\infty},
\]
Inequality (2.3) of Theorem 2.1 yields,
\[
\operatorname{Cov}(g, h) \ \leq\ \frac{N_{\infty}(h)}{\operatorname{Is}(\mu)} \int |g'| \, d\mu.
\]
It is straightforward to see that $N_{\infty}(h) \leq 2 \|h\|_{\infty}$, so that we recover inequality (2.2) with the optimal constant $C = \operatorname{Is}(\mu)^{-1}$ in (2.1).

We will say that a measure $\mu$ satisfies an $L_1$-$L_{\infty}$ covariance inequality with constant $C > 0$ if, for every $g \in L_1(\mu)$ and $h \in L_{\infty}(\mu)$ such that $g$ and $h$ are absolutely continuous,
\[
\operatorname{Cov}(g, h) \ \leq\ C \left\| h - \mathbb{E}_{\mu} h \right\|_{\infty} \int |g'| \, d\mu. \tag{2.9}
\]
In this case, we denote by $C_{1, \infty}(\mu)$ the smallest constant achieving an $L_1$-$L_{\infty}$ covariance inequality for the measure $\mu$. In other words, if (2.9) is valid for every $g$ and $h$, with $g$ and $h$ absolutely continuous, then $C_{1, \infty}(\mu) \leq C$. We have the following optimality result.

Theorem 2.4.

Let $\mu$ be a probability measure with a positive density on $\mathbb{R}$ and cumulative distribution function $F$. If $\mu$ satisfies an $L_1$-$L_{\infty}$ covariance inequality with constant $C > 0$, then the inverse of its isoperimetric constant is finite and $\operatorname{Is}(\mu)^{-1} \leq C$. Furthermore, if the inverse of the isoperimetric constant of $\mu$ is finite, then $\mu$ satisfies an $L_1$-$L_{\infty}$ covariance inequality with constant $\operatorname{Is}(\mu)^{-1}$. In other words, the inverse of the isoperimetric constant is the optimal constant achieving inequality (2.9) when available,
\[
C_{1, \infty}(\mu) \ =\ \operatorname{Is}(\mu)^{-1}.
\]

Proof.

The second part simply corresponds to Theorem 2.1. For the first part, we will use Lemma 2.1 below, which is due to Bobkov and Houdré [1997a]. Indeed, consider an absolutely continuous function $g \in L_1(\mu)$. Applying Lemma 2.1 to the set $A = \{g > m(g)\}$, we deduce that there exists a sequence $(h_n)_{n \geq 1}$ of Lipschitz functions on $\mathbb{R}$ with values in $[-1, 1]$ such that $h_n \to \operatorname{sign}(g - m(g))$ $\mu$-almost everywhere as $n \to \infty$. Hence, the dominated convergence theorem and identity (1.5) give
\[
\operatorname{Cov}(g, h_n) \ \longrightarrow\ \operatorname{Cov}\big(g, \operatorname{sign}(g - m(g))\big) \ =\ \int |g - m(g)| \, d\mu.
\]
Furthermore, again by dominated convergence,
\[
\mathbb{E}_{\mu}[h_n] \ \longrightarrow\ \int \operatorname{sign}(g - m(g)) \, d\mu \ =\ 0, \qquad \text{so that} \quad \limsup_{n \to \infty} \left\| h_n - \mathbb{E}_{\mu} h_n \right\|_{\infty} \ \leq\ 1.
\]
Applying (2.9) to the pairs $(g, h_n)$ and letting $n \to \infty$ thus gives $\int |g - m(g)| \, d\mu \leq C \int |g'| \, d\mu$. Now, the conclusion simply follows from the optimality of the constant $\operatorname{Is}(\mu)$ in (1.3). ∎

Remark 2.2.

In dimension one, if a measure $\mu$ has a positive isoperimetric constant, Inequality (1.3) combined with Hölder's inequality implies that for any locally Lipschitz $f \in L_1(\mu)$ and any $p \geq 1$,
\[
\operatorname{Is}(\mu) \int |f - m(f)| \, d\mu \ \leq\ \left( \int |f'|^{p} \, d\mu \right)^{1/p}.
\]
Now, if a measure $\mu$ satisfies, for any locally Lipschitz $f$ and any $p \geq 1$,
\[
\int |f - m(f)| \, d\mu \ \leq\ C \left( \int |f'|^{p} \, d\mu \right)^{1/p}
\]
for some finite constant $C$, then the same arguments as those developed in the proof of Theorem 2.4, and in particular the use of Lemma 2.1, imply that $\mu$ has a positive isoperimetric constant.

Lemma 2.1 (Bobkov and Houdré [1997a], Lemma 3.5).

For any Borel set $A \subset \mathbb{R}$ with $\mu^{+}(A) < +\infty$, there exists a sequence $(\psi_n)_{n \geq 1}$ of Lipschitz functions on $\mathbb{R}$ with values in $[0, 1]$ such that $\psi_n \to \mathbf{1}_A$ $\mu$-almost everywhere as $n \to \infty$, and $\limsup_{n \to \infty} \int |\psi_n'| \, d\mu \leq \mu^{+}(A)$.
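A standard construction behind such approximations (one possible choice, added for illustration; Bobkov and Houdré's argument may differ in its details) is, for $A$ closed,
\[
\psi_n(x) \ =\ \big( 1 - n \operatorname{dist}(x, A) \big)_{+},
\]
which is $n$-Lipschitz with values in $[0, 1]$, equals $1$ on $A$ and vanishes outside the $1/n$-neighborhood $A^{1/n}$, so that $\int |\psi_n'| \, d\mu \leq n \, \mu\big( A^{1/n} \setminus A \big)$, in line with the definition of $\mu^{+}(A)$.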

3 $L_p$-$L_q$ covariance inequalities

Here we generalize some of the results obtained in the previous section. Indeed, we derive the following $L_p$-$L_q$ covariance inequalities.

Theorem 3.1.

Let $\mu$ be a probability measure with a positive density $p_{\mu}$ on $\mathbb{R}$ and cumulative distribution function $F$. Let $p, q \in (1, +\infty)$ with $1/p + 1/q = 1$, and let $g \in L_p(\mu)$ and $h \in L_q(\mu)$. Assume also that $g$ and $h$ are absolutely continuous. Then we have,
\[
\operatorname{Cov}(g, h) \ \leq\ q \left( \int_{\mathbb{R}} |g - \mathbb{E}_{\mu} g|^{p} \, d\mu \right)^{1/p} \left( \int_{\mathbb{R}} \left( \frac{\min\{F,\, 1 - F\}}{p_{\mu}} \right)^{q} |h'|^{q} \, d\mu \right)^{1/q}. \tag{3.1}
\]
Consequently, we also have
\[
\operatorname{Cov}(g, h) \ \leq\ \frac{p \, q}{\operatorname{Is}(\mu)^{2}} \left( \int_{\mathbb{R}} |g'|^{p} \, d\mu \right)^{1/p} \left( \int_{\mathbb{R}} |h'|^{q} \, d\mu \right)^{1/q}. \tag{3.2}
\]

Before proving Theorem 3.1, we note that Inequality (3.2) is a consequence of Inequality (3.1) applied together with the following weighted Hardy type inequality, which is of independent interest.

Theorem 3.2 (Generalized Hardy Inequality).

Let $F$ be a continuous distribution function on $\mathbb{R}$, with positive density $f$ and median $M$. For an absolutely continuous function $g$ and $p > 1$,
\[
\int_{\mathbb{R}} |g - g(M)|^{p} \, dF \ \leq\ p^{p} \int_{\mathbb{R}} \left( \frac{\min\{F,\, 1 - F\}}{f} \right)^{p} |g'|^{p} \, dF. \tag{3.3}
\]
In particular, if $\mu$ has distribution function $F$ and a positive isoperimetric constant, then
\[
\left( \int |g - g(M)|^{p} \, d\mu \right)^{1/p} \ \leq\ \frac{p}{\operatorname{Is}(\mu)} \left( \int |g'|^{p} \, d\mu \right)^{1/p}
\]
and
\[
\left( \int |g - \mathbb{E}_{\mu} g|^{p} \, d\mu \right)^{1/p} \ \leq\ \frac{p}{\operatorname{Is}(\mu)} \left( \int |g'|^{p} \, d\mu \right)^{1/p}.
\]
The proof of Theorem 3.2 can be found at the end of this section. The fact that Hardy-type inequalities naturally come into play here is appealing, since it is well-known from the work of Miclo [1999] and Bobkov and Götze [1999] - see also Ané et al. [2000], Chapter 6, and Bobkov and Zegarlinski [2005], Chapters 4 and 5 - that Hardy-type inequalities give access to sharp constants in Poincaré and log-Sobolev inequalities on the real line, and also in the discrete setting.
It is also worth noting that Inequality (3.2) of Theorem 3.1 yields the celebrated Cheeger inequality as a corollary, and thus in particular gives a new proof of it, avoiding the classical use of the co-area formula.

Corollary 3.1 (Cheeger's inequality).

Let $\mu$ be a probability measure with a positive density on $\mathbb{R}$, cumulative distribution function $F$ and $\operatorname{Is}(\mu) > 0$. Let $g \in L_2(\mu)$ be absolutely continuous. Then we have,
\[
\operatorname{Var}_{\mu}(g) \ \leq\ \frac{4}{\operatorname{Is}(\mu)^{2}} \int_{\mathbb{R}} (g')^{2} \, d\mu. \tag{3.4}
\]
Consequently, if $C_P(\mu)$ denotes the best constant in the Poincaré inequality (1.2), it follows that
\[
C_P(\mu) \ \leq\ \frac{4}{\operatorname{Is}(\mu)^{2}}.
\]

Proof.

Simply take $p = q = 2$ and $h = g$ in Inequality (3.2), and note that $\operatorname{Var}_{\mu}(g) = \operatorname{Cov}(g, g)$. ∎
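The bound (3.4) can be far from sharp for specific measures: for the standard Gaussian measure, $\operatorname{Is}(\gamma) = \sqrt{2/\pi}$ gives
\[
C_P(\gamma) \ =\ 1 \ <\ 2\pi \ =\ \frac{4}{\operatorname{Is}(\gamma)^{2}},
\]
while, as noted in Section 1, equality holds in Cheeger's bound for the two-sided exponential measure.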

Proof of Theorem 3.1.

We have, by Theorem 2.2 and Theorem 2.3,
\[
\operatorname{Cov}(g, h) \ =\ \int_{\mathbb{R}} h'(y) \left( \int_{\mathbb{R}} k_{\mu}(x, y) \, g'(x) \, dx \right) dy \ =\ \int_{\mathbb{R}} \left( \frac{\min\{F,\, 1 - F\}}{p_{\mu}} \, h' \right) \frac{\mathbb{E}\left[ (g - \mathbb{E}_{\mu} g) \, \mathbf{1}_{\{X > y\}} \right]}{\min\{F(y),\, 1 - F(y)\}} \, d\mu(y).
\]
By Hölder's inequality and a Hardy-type bound on the last ratio (its $L_p(\mu)$ norm is at most $q \, \|g - \mathbb{E}_{\mu} g\|_{L_p(\mu)}$), inequality (3.1) is proved. To prove inequality (3.2), simply combine inequality (3.1) with inequality (3.3) of Theorem 3.2. ∎

Proof of Theorem 3.2.

By replacing $g$ with $g - g(M)$, it suffices to prove the inequalities for $g$ satisfying $g(M) = 0$; moreover, by symmetry, it suffices to bound the integrals over the right half-line $[M, +\infty)$, where $\min\{F,\, 1 - F\} = 1 - F$. From now on, we assume that $g(M) = 0$ and we work on $[M, +\infty)$. To prove Inequality (3.3) we first define the functions $u$ and $v$: for $t \geq M$,
\[
u(t) \ =\ |g'(t)| \, \frac{1 - F(t)}{f(t)}
\]
and, for $\beta \in (0, 1)$, $t \geq M$,
\[
v(t) \ =\ (1 - F(t))^{\beta - 1} f(t).
\]
It follows that
\[
|g(x)| \ \leq\ \int_M^x |g'(t)| \, dt \ =\ \int_M^x u(t) \, \frac{f(t)}{1 - F(t)} \, dt, \qquad x \geq M.
\]
Now, by Hölder's inequality,
\[
|g(x)|^{p} \ \leq\ \left( \int_M^x u^{p} \, v \, dt \right) \left( \int_M^x \left( \frac{f}{1 - F} \right)^{q} v^{-q/p} \, dt \right)^{p/q} \ =\ \left( \int_M^x u^{p} \, v \, dt \right) \left( \int_M^x \frac{f \, dt}{(1 - F)^{1 + \beta q / p}} \right)^{p/q} \ \leq\ \left( \frac{p}{\beta q} \right)^{p/q} \frac{\int_M^x u^{p} \, v \, dt}{(1 - F(x))^{\beta}}.
\]
The use of Fubini's theorem then gives
\[
\int_M^{+\infty} |g|^{p} \, dF \ \leq\ \left( \frac{p}{\beta q} \right)^{p/q} \int_M^{+\infty} u^{p} \, v(t) \left( \int_t^{+\infty} (1 - F(x))^{-\beta} \, dF(x) \right) dt \ =\ \left( \frac{p}{\beta q} \right)^{p/q} \frac{1}{1 - \beta} \int_M^{+\infty} u^{p} \, dF.
\]
In order to conclude, it suffices to prove that for an appropriate choice of $\beta \in (0, 1)$,
\[
\left( \frac{p}{\beta q} \right)^{p/q} \frac{1}{1 - \beta} \ \leq\ p^{p}.
\]
But we find that the left-hand side is minimized at $\beta = 1/q$. Choosing
\[
\beta \ =\ \frac{1}{q},
\]
we get $\left( p / (\beta q) \right)^{p/q} (1 - \beta)^{-1} = p^{p/q} \cdot p = p^{p}$, and the conclusion for the right half-line follows. ∎

4 $L_p$-Poincaré inequalities

$L_p$-Poincaré inequalities are an essential tool of functional analysis, in relation with the concentration of measure phenomenon. In particular, the pathbreaking work of Milman [2009] shows that, under convexity assumptions that include the log-concave setting, the exponential concentration property is equivalent to any $L_p$-Poincaré inequality for $p \geq 1$. Theorem 3.1 induces the following sharp $L_p$-Poincaré inequalities.
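To see the link with exponential concentration concretely (a generic derivation, added for illustration), note that an $L_p$-Poincaré inequality of the form $\|g - \mathbb{E}_{\mu} g\|_{L_p(\mu)} \leq C p \, \|g'\|_{L_p(\mu)}$, valid for all $p \geq 1$, applied to a $1$-Lipschitz function $g$ gives, by Markov's inequality,
\[
\mathbb{P}\left( |g - \mathbb{E}_{\mu} g| > t \right) \ \leq\ \left( \frac{C p}{t} \right)^{p} \ \leq\ e^{-t / (C e)}, \qquad t \geq C e,
\]
the last bound following from the choice $p = t / (C e)$.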

Theorem 4.1.

Let $\mu$ be a probability measure with a positive isoperimetric constant, a positive density on $\mathbb{R}$ and cumulative distribution function $F$. Let $p \geq 1$ be an integer and let $g \in L_p(\mu)$ be absolutely continuous. Then we have,
\[
\mathbb{E}\left[ |g - m(g)|^{p} \right] \ \leq\ \left( \frac{p}{\operatorname{Is}(\mu)} \right)^{p} \mathbb{E}\left[ |g'|^{p} \right]. \tag{4.1}
\]
Consequently, if $\mathbb{E}\left[ (g - \mathbb{E} g)^{p} \right] \geq 0$ (for instance if $p$ is even and $g$ is arbitrary), then
\[
\mathbb{E}\left[ (g - \mathbb{E} g)^{p} \right] \ \leq\ \left( \frac{p}{\operatorname{Is}(\mu)} \right)^{p} \mathbb{E}\left[ |g'|^{p} \right]. \tag{4.2}
\]
In general,
\[
\mathbb{E}\left[ |g - \mathbb{E} g|^{p} \right] \ \leq\ \left( \frac{2 p}{\operatorname{Is}(\mu)} \right)^{p} \mathbb{E}\left[ |g'|^{p} \right]. \tag{4.3}
\]
If $p$ is odd and $\mathbb{E}\left[ (g - \mathbb{E} g)^{p} \right] \leq 0$, then