Uniform Hanson-Wright type concentration inequalities for unbounded entries via the entropy method
This paper is devoted to uniform versions of the Hanson-Wright inequality for a random vector with independent subgaussian components. The core technique of the paper is based on the entropy method combined with truncations of both the gradients of the functions of interest and the coordinates of the vector itself. Our results recover, in particular, the classic uniform bound of Talagrand (1996) for Rademacher chaoses and the more recent uniform result of Adamczak (2015), which holds under certain rather strong assumptions on the distribution of $X$. We provide several applications of our techniques: we establish a version of the standard Hanson-Wright inequality, which is tighter in some regimes. Extending our results, we prove a version of the dimension-free matrix Bernstein inequality that holds for random matrices with a subexponential spectral norm. We apply the derived inequality to the problem of covariance estimation with missing observations and prove an almost optimal high-probability version of the recent result of Lounici (2014). Finally, we show a uniform Hanson-Wright type inequality in the Ising model under Dobrushin's condition. A closely related question was posed by Marton (2003).
Keywords: concentration inequalities, modified logarithmic Sobolev inequalities, uniform Hanson-Wright inequalities, Rademacher chaos, matrix Bernstein inequality
The concentration properties of quadratic forms of random variables are a classic topic in probability. A well-known result is due to Hanson and Wright (we refer to the form of this inequality presented in Rudelson and Vershynin (2013)): it claims that if $A$ is an $n \times n$ real matrix and $X = (X_1, \ldots, X_n)$ is a random vector in $\mathbb{R}^n$ with independent centered coordinates satisfying $\max_i \|X_i\|_{\psi_2} \le K$ (we will recall the definition of the $\psi_2$-norm below), then for all $t \ge 0$,
$$\Pr\left(\left|X^{\top} A X - \mathbb{E}\, X^{\top} A X\right| \ge t\right) \le 2\exp\left(-c\min\left(\frac{t^2}{K^4\|A\|_{\mathrm{HS}}^2}, \frac{t}{K^2\|A\|}\right)\right), \tag{1.1}$$
for some absolute constant $c > 0$, where $\|A\|_{\mathrm{HS}}$ denotes the Hilbert-Schmidt norm and $\|A\|$ the operator norm of $A$. An important extension of these results arises when, instead of just one matrix, we have a family of matrices and want to understand the behaviour of the random quadratic forms simultaneously for all matrices in the family. As a concrete example we consider an order-2 Rademacher chaos: given a family $\mathcal{A}$ of $n \times n$ real symmetric matrices with zero diagonal, that is, $a_{ii} = 0$ for all $A = (a_{ij}) \in \mathcal{A}$ and all $i$, one wants to study the following random variable
$$Z = \sup_{A \in \mathcal{A}} \varepsilon^{\top} A \varepsilon = \sup_{A \in \mathcal{A}} \sum_{i \ne j} a_{ij}\varepsilon_i\varepsilon_j,$$
where $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n)$ is a sequence of independent Rademacher signs, taking values $\pm 1$ with equal probabilities. In the celebrated paper Talagrand (1996) it was shown, in particular, that there is an absolute constant $c > 0$ such that for any $t \ge 0$,
Apart from the new techniques, the significance of this result is that previous similar bounds (see, for example, Ledoux and Talagrand (2013)) were one-sided and had a multiplicative constant greater than one in front of the expectation. Such results are sometimes called deviation inequalities, in contrast to the concentration bounds of the form (1.2) that will be studied below. A simplified proof of the upper tail of (1.2) appeared later in Boucheron et al. (2003). Similar inequalities in the Gaussian case follow from the results in Borell (1984) and Arcones and Giné (1993).
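The random variable $Z$ above is easy to simulate. The following Monte Carlo sketch draws repeated sign vectors and evaluates the supremum of the chaos over a small family of matrices; the family itself is a hypothetical example generated at random, not one taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, trials = 30, 5, 2000   # dimension, size of the family, Monte Carlo repetitions

# A hypothetical family of symmetric matrices with zero diagonal.
family = []
for _ in range(m):
    b = rng.standard_normal((n, n))
    a = (b + b.T) / 2
    np.fill_diagonal(a, 0.0)
    family.append(a)

def chaos_sup(eps):
    """Z = sup over the family of eps^T A eps, for one vector of signs."""
    return max(eps @ a @ eps for a in family)

eps = rng.choice([-1.0, 1.0], size=(trials, n))
samples = np.array([chaos_sup(e) for e in eps])
print("estimated E Z:", samples.mean(), " fluctuation scale:", samples.std())
```

The empirical standard deviation of such samples is exactly the fluctuation scale that inequalities of type (1.2) control.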
Observe that when the diagonal elements are zero, for each $A \in \mathcal{A}$ the corresponding quadratic form is centered: $\mathbb{E}\,\varepsilon^{\top} A \varepsilon = 0$. In a general situation we will be interested in the analysis of
for a random vector $X$ taking its values in $\mathbb{R}^n$. As before, the analysis of both the expectation and the concentration properties of this random variable has appeared recently in many papers. Just to name a few: Kramer et al. (2014) study the expectation and deviations of $Z$ for classes of positive semidefinite matrices with applications to compressive sensing, and Dicker and Erdogdu (2017) prove deviation inequalities for $Z$ and subgaussian vectors under some extra assumptions. Additionally, a recent paper Adamczak et al. (2018b) studies deviation bounds for $Z$ with Banach space-valued matrices and Gaussian variables, providing upper and lower bounds for the moments. Finally, it was shown in Adamczak (2015) that if $X$ satisfies the so-called concentration property with constant $K$, that is, for every $1$-Lipschitz function $f$ it holds $\mathbb{E}|f(X)| < \infty$ and for any $t \ge 0$,
$$\Pr\left(|f(X) - \mathbb{E}f(X)| \ge t\right) \le 2\exp\left(-t^2/(2K^2)\right), \tag{1.4}$$
then the following bound (similar to (1.2)) holds for every $t \ge 0$:
This result has an application in covariance estimation and recovers another recent concentration result of Koltchinskii and Lounici (2017); we will discuss this in what follows. The drawback of (1.5) is that the concentration property is quite restrictive: it holds when $X$ has a standard Gaussian distribution and for some log-concave distributions (see Ledoux (2001)), but at the same time it fails for general subgaussian entries, even in the simplest case of a Rademacher random vector.
In this paper we extend the mentioned results in two directions. On the one hand, we revisit the result of Boucheron et al. (2003) for bounded variables, allowing non-zero diagonal values of the matrices; on the other hand, we allow unbounded subgaussian variables $X_i$. First, let us recall the following definition. For $\alpha \ge 1$, denote the $\psi_\alpha$-norm of a random variable $Y$ by
$$\|Y\|_{\psi_\alpha} = \inf\left\{t > 0 : \mathbb{E}\exp\left(|Y|^{\alpha}/t^{\alpha}\right) \le 2\right\},$$
which is a proper norm whenever $\alpha \ge 1$. A random variable with finite $\psi_1$-norm will be referred to as subexponential, and a random variable with finite $\psi_2$-norm will be referred to as subgaussian; the $\psi_2$-norm is usually named the subgaussian norm. We also use the $L_p$-norm: for $p \ge 1$ we set $\|Y\|_p = \left(\mathbb{E}|Y|^p\right)^{1/p}$. One of our main contributions is the following upper-tail bound.
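The Orlicz-type definition $\|Y\|_{\psi_\alpha} = \inf\{t > 0 : \mathbb{E}\exp(|Y|^{\alpha}/t^{\alpha}) \le 2\}$ can be evaluated numerically. The sketch below (the helper name is ours) computes it by geometric bisection for a discrete random variable and checks it against the closed form for a Rademacher sign, where the infimum solves $\exp(1/t^{\alpha}) = 2$, i.e. $t = (\log 2)^{-1/\alpha}$.

```python
import math

def psi_norm(alpha, values, probs):
    """||Y||_{psi_alpha} = inf{ t > 0 : E exp(|Y|^alpha / t^alpha) <= 2 }
    for a discrete random variable, computed by geometric bisection."""
    def fits(t):
        try:
            return sum(p * math.exp((abs(v) / t) ** alpha)
                       for v, p in zip(values, probs)) <= 2.0
        except OverflowError:
            return False  # exponent too large: t is far below the norm
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        lo, hi = (lo, mid) if fits(mid) else (mid, hi)
    return hi

# A Rademacher sign: compare the numerical value with (log 2)^{-1/alpha}.
for alpha in (1.0, 2.0):
    print(alpha, psi_norm(alpha, [-1.0, 1.0], [0.5, 0.5]),
          math.log(2) ** (-1 / alpha))
```

The same routine works for any discrete distribution, which is convenient for checking the comparisons between the $\psi_1$- and $\psi_2$-norms used later in the text.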
Suppose that the components $X_1, \ldots, X_n$ of $X$ are independent centered random variables and $\mathcal{A}$ is a finite family of real symmetric matrices. Denote $M = \|\max_i |X_i|\|_{\psi_2}$. Then, for any $t \ge 0$ it holds
where $c > 0$ is an absolute constant and $Z$ is defined by (1.3).
In Theorem 1.1 and below we assume that all $A \in \mathcal{A}$ are symmetric. This is done only for convenience of presentation; in fact, the analysis may be performed for general square matrices. The only difference is that in many places $A$ should be replaced by $(A + A^{\top})/2$.
In particular, Theorem 1.1 recovers the right tail of Talagrand's result (1.2) up to absolute constants, since in this case we obviously have $\max_i |\varepsilon_i| = 1$. Furthermore, the result of Theorem 1.1 works without the assumption, used in Talagrand (1996) and Boucheron et al. (2003), that the diagonals of all matrices in $\mathcal{A}$ are zero. Moreover, it is also applicable in some situations when the concentration property (1.4) holds: indeed, if $X$ is a standard normal vector in $\mathbb{R}^n$, then it is well known (see Ledoux and Talagrand (2013)) that $\|\max_i |X_i|\|_{\psi_2}$ is of order $\sqrt{\log n}$, and a similar comparison holds when $\mathcal{A}$ contains the identity matrix. Therefore, in this case the factor $\|\max_i |X_i|\|_{\psi_2}$ is only of at most logarithmic order when compared to the constant in (1.4).
In the special case when $\mathcal{A}$ consists of just one matrix, our bound recovers an inequality similar to the original Hanson-Wright inequality. On the one hand, our bound may have an extra logarithmic factor that depends on the dimension $n$. On the other hand, the original term is replaced by a smaller one. We will discuss this phenomenon below. The core of the proof of the Hanson-Wright inequality in Rudelson and Vershynin (2013) is based on the decoupling technique, which may be used (at least in a straightforward way) to prove the deviation inequality, but not the concentration inequality for $Z$, in the case when $\mathcal{A}$ consists of more than one matrix.
A natural question to ask is whether one may improve Theorem 1.1 and replace $\|\max_i |X_i|\|_{\psi_2}$ by $\max_i \|X_i\|_{\psi_2}$. In what follows we show that even in the deviation version of Theorem 1.1 this replacement is not possible in some cases. This is quite unexpected in light of the fact that $\|\max_i |X_i|\|_{\psi_2}$ does not appear in the original Hanson-Wright inequality. Therefore, we believe that the form of our result is close to optimal. We also provide the following extension of Theorem 1.1, which may be better in some cases.
Suppose that the components of $X$ are independent centered random variables. Suppose also that the variables have symmetric distributions (each $X_i$ has the same distribution as $-X_i$). Let $\mathcal{A}$ be a finite family of real symmetric matrices. Denote $M = \|\max_i |X_i|\|_{\psi_2}$, and let $g$ be a standard Gaussian vector in $\mathbb{R}^n$. Then, for any $t \ge 0$ it holds
where $c, C > 0$ are absolute constants and $Z$ is defined by (1.3).
We proceed with some notation that will be used below. For a non-negative random variable $Y$, define its entropy as
$$\operatorname{Ent}[Y] = \mathbb{E}[Y \log Y] - \mathbb{E}Y \log \mathbb{E}Y.$$
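The entropy functional $\operatorname{Ent}[Y] = \mathbb{E}[Y \log Y] - \mathbb{E}Y \log \mathbb{E}Y$ is straightforward to evaluate for discrete distributions; a small numerical sketch (the function name is ours):

```python
import numpy as np

def entropy(values, probs):
    """Ent[Y] = E[Y log Y] - E[Y] log E[Y] for a discrete random variable Y >= 0,
    with the convention 0 log 0 = 0."""
    v, p = np.asarray(values, float), np.asarray(probs, float)
    ey = p @ v
    ylogy = np.where(v > 0, v * np.log(np.where(v > 0, v, 1.0)), 0.0)
    return p @ ylogy - ey * np.log(ey)

print(entropy([1.0, np.e], [0.5, 0.5]))  # positive, by Jensen's inequality
print(entropy([2.0, 2.0], [0.5, 0.5]))   # zero for a constant variable
```

Since $y \mapsto y \log y$ is convex, $\operatorname{Ent}[Y] \ge 0$ with equality exactly for constant $Y$, which the two printed values illustrate.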
Instead of the concentration property (1.4) we also discuss the following property:
We say that a random vector $X$ taking its values in $\mathbb{R}^n$ satisfies the logarithmic Sobolev inequality with constant $\sigma^2$ if for any continuously differentiable function $f: \mathbb{R}^n \to \mathbb{R}$ it holds
$$\operatorname{Ent}\left[f^2(X)\right] \le 2\sigma^2\, \mathbb{E}\|\nabla f(X)\|^2,$$
whenever both sides of the inequality are finite.
To show that the logarithmic Sobolev property is closely related to the concentration property, we recall (Theorem 5.3 in Ledoux (2001)) that Assumption 1 implies the concentration property (1.4); the proof of this fact is essentially based on taking $f^2 = e^{\lambda g}$ for a Lipschitz function $g$, which implies
$$\operatorname{Ent}\left[e^{\lambda g(X)}\right] \le \frac{\sigma^2 \lambda^2}{2}\, \mathbb{E}\left[\|\nabla g(X)\|^2 e^{\lambda g(X)}\right].$$
This is known to imply (1.4) through the Herbst argument; see Boucheron et al. (2013). Moreover, the last inequality is equivalent to the concentration property. Indeed, from the concentration property we know that $f(X) - \mathbb{E}f(X)$ is subgaussian, and this implies (see van Handel (2016)) that for all $\lambda$,
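For completeness, the Herbst argument just mentioned can be sketched as follows. This is a standard derivation; the normalization of the entropy bound for a $1$-Lipschitz $f$ is assumed here and need not match the paper's constants.

```latex
\[
\text{Set } H(\lambda) := \log \mathbb{E}\, e^{\lambda f(X)}. \quad
\operatorname{Ent}\bigl(e^{\lambda f}\bigr)
   = \bigl(\lambda H'(\lambda) - H(\lambda)\bigr)\,\mathbb{E}\, e^{\lambda f},
\]
\[
\text{so the bound }
\operatorname{Ent}\bigl(e^{\lambda f}\bigr) \le \frac{\sigma^2\lambda^2}{2}\,\mathbb{E}\, e^{\lambda f}
\quad\text{reads}\quad
\Bigl(\frac{H(\lambda)}{\lambda}\Bigr)' \le \frac{\sigma^2}{2}.
\]
\[
\text{Since } \frac{H(\lambda)}{\lambda} \to \mathbb{E} f(X) \text{ as } \lambda \downarrow 0,
\text{ integrating gives } H(\lambda) \le \lambda\, \mathbb{E} f(X) + \frac{\sigma^2 \lambda^2}{2},
\]
\[
\text{and Chernoff's bound with } \lambda = t/\sigma^2 \text{ yields }
\mathbb{P}\bigl(f(X) \ge \mathbb{E} f(X) + t\bigr) \le e^{-t^2/(2\sigma^2)}.
\]
```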
One of the technical contributions of the paper is that we use a similar scheme to prove Theorem 1.1 and to recover (1.5) under the logarithmic Sobolev Assumption 1. The application of logarithmic Sobolev inequalities requires the computation of the gradient of the function of interest, in our case the gradient of $Z$. It appears that in the analysis we need to control the behaviour of the norm of this gradient (or its analogs) and, as in Boucheron et al. (2003) and Adamczak (2015), we will use a truncation argument to do so. However, in both cases our proofs will pass through the entropy variational formula of Boucheron et al. (2013), which states that for random variables $Y \ge 0$ and $U$ with $\mathbb{E} e^{U} \le 1$ it holds
$$\mathbb{E}[YU] \le \operatorname{Ent}[Y].$$
This will allow us to shorten the proofs and avoid some of the technicalities appearing in previous papers. Finally, to prove Theorem 1.1 we use a second truncation argument, based on the Hoffmann-Jørgensen inequality (see Ledoux and Talagrand (2013)). We also present two lemmas, which will be used several times in the text. Both results have short proofs and may be of independent interest.
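The entropy duality formula of Boucheron et al. (2013), $\operatorname{Ent}[Y] = \sup\{\mathbb{E}[YU] : \mathbb{E}e^{U} \le 1\}$, can be sanity-checked numerically. In the sketch below (the discrete space and the normalization trick are our illustration), any vector $V$ is shifted so that $\mathbb{E}e^{U} = 1$, and the inequality $\mathbb{E}[YU] \le \operatorname{Ent}[Y]$ is then verified for many random choices of $U$.

```python
import numpy as np

rng = np.random.default_rng(1)
probs = np.full(8, 1 / 8)              # uniform base space with 8 atoms
y = rng.uniform(0.1, 3.0, size=8)      # an arbitrary positive random variable Y

def ent(y, p):
    """Ent[Y] = E[Y log Y] - E[Y] log E[Y]."""
    ey = p @ y
    return p @ (y * np.log(y)) - ey * np.log(ey)

# Shift any V so that E exp(U) = 1, namely U = V - log E[exp(V)]; the duality
# formula then asserts E[Y U] <= Ent[Y] for every such U.
worst_gap = np.inf
for _ in range(200):
    v = rng.normal(size=8)
    u = v - np.log(probs @ np.exp(v))
    worst_gap = min(worst_gap, ent(y, probs) - probs @ (y * u))
print("smallest observed Ent[Y] - E[YU]:", worst_gap)
```

The supremum in the duality formula is attained at $U = \log(Y/\mathbb{E}Y)$, so the observed gap approaches zero only when a random $U$ happens to be close to that choice.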
Suppose that for random variables and any it holds
where are positive constants. Then, the following concentration result holds
where is an absolute constant. Moreover, if (1.8) holds as well for , we have
The second technical result is a version of the convex concentration inequality of Talagrand (1996) that does not require boundedness of the components of $X$.
Let $f$ be a convex $L$-Lipschitz function with respect to the Euclidean norm in $\mathbb{R}^n$, and let $X$ be a random vector with independent components. Then, for any $t \ge 0$ it holds
where $c, C > 0$ are absolute constants.
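Lemma 1.4 can be illustrated by simulation. The sketch below uses $f(x) = \|x\|_2$, which is convex and $1$-Lipschitz, with standard Gaussian components, and compares empirical two-sided tails with a Gaussian-type benchmark; the benchmark constants are ours and are not the constants of the lemma.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 50, 20000
x = rng.standard_normal((trials, n))
f = np.linalg.norm(x, axis=1)          # convex, 1-Lipschitz in the Euclidean norm
dev = np.abs(f - f.mean())
for t in [1.0, 2.0, 3.0]:
    frac = (dev >= t).mean()           # empirical two-sided tail
    bound = 2 * np.exp(-t * t / 2)     # Gaussian concentration benchmark
    print(f"t={t}: empirical {frac:.4f} <= benchmark {bound:.4f}")
```

The empirical tails are far below the benchmark here because the fluctuations of $\|x\|_2$ in high dimension are of constant order, regardless of $n$.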
We discuss the optimality of this result in what follows. Finally, we sum up the structure of the paper and outline the main contributions:
Section 2 is devoted to applications and discussions and consists of several parts. At first, we give a simple proof of the uniform bound of Adamczak (2015) under the logarithmic Sobolev assumption. The second part is devoted to improvements of the non-uniform Hanson-Wright inequality (1.1) in the subgaussian regime. Furthermore, we apply our techniques to obtain a uniform concentration result similar to Theorem 1.1 in a particular case of non-independent components: we consider the Ising model under Dobrushin's condition, which has attracted some attention recently (see Adamczak et al. (2018a) and Götze et al. (2018)). The question we study was raised by Marton (2003) in a closely related scenario. Finally, we show that it is not possible in general to replace $\|\max_i |X_i|\|_{\psi_2}$ with $\max_i \|X_i\|_{\psi_2}$ in Theorem 1.1 by providing an appropriate counterexample.
In Section 4 we prove a dimension-free matrix Bernstein inequality that holds for random matrices with a subexponential spectral norm. The proof is based on the same truncation approach as in the proof of Theorem 1.1. We demonstrate how our Bernstein inequality can be used in the context of covariance estimation for subgaussian observations, improving the state-of-the-art result of Lounici (2014) for covariance estimation with missing observations.
2 Some applications and discussions
We begin with some notation that will be used throughout the paper. For a random vector $X$ taking its values in $\mathbb{R}^n$, let $X_1, \ldots, X_n$ denote its components. In the case when all the components of $X$ are independent, let $X_i'$ denote an independent copy of the component $X_i$. The symbol $\sim$ denotes equivalence up to absolute constants, and $\lesssim$ denotes an inequality up to an absolute constant. Throughout the paper, $c, C$ denote absolute constants that may change from line to line.
A uniform Hanson-Wright inequality under the logarithmic Sobolev condition
Following Adamczak (2015), we assume without loss of generality that $\mathcal{A}$ is a finite set of matrices; then $Z$ is Lebesgue-a.e. differentiable and
$$\|\nabla Z(X)\| \le 2\sup_{A \in \mathcal{A}} \|AX\|,$$
which is bounded by a Lipschitz function of $X$ with good concentration properties.
In particular, since $X$ satisfies the log-Sobolev condition with constant $\sigma^2$, we have (Theorem 5.3 in Ledoux (2001))
Squaring and using the elementary inequality $(a + b)^2 \le 2a^2 + 2b^2$, we get
Furthermore, the logarithmic Sobolev condition implies for any
Therefore, by Lemma 1.3, for any $t \ge 0$ it holds
which coincides, up to absolute constant factors, with (1.5) for vectors satisfying the concentration property.
Improving the Hanson-Wright inequality in the subgaussian regime
Our analysis implies, in particular, an improved version of the Hanson-Wright inequality (1.1) in some cases. We consider a centered random vector $X$ with independent subgaussian components and set $K = \max_i \|X_i\|_{\psi_2}$ and $\sigma^2 = \max_i \operatorname{Var}(X_i)$. In this case, (1.1) implies that with probability at least $1 - 2e^{-t}$ it holds
At the same time, Theorem 1.1 for a single matrix implies with the same probability
Observe that when $|X_i| \le 1$ almost surely for each $i$, we have $\sigma \lesssim K \lesssim 1$. The following example illustrates the difference between these two bounds.
Assume that we are given a sequence of independent Bernoulli random variables with mean $p$, and let the $X_i$ be their centered versions. For the variance we easily get
On the other hand, for it holds
where the last line follows directly from Theorem 1.1 in Schlemm (2016). Therefore, the standard Hanson-Wright inequality implies that with probability at least it holds,
while (2.2) and imply that for and it holds with probability at least
It is easy to verify that $\sigma \ll K$ for small $p$; thus, inequality (2.3) is better than the Hanson-Wright inequality for this $X$ in the subgaussian regime (when the linear-in-$t$ term is dominated by the $\sqrt{t}$-term).
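The $\psi_2$-norm of a centered Bernoulli variable can be evaluated numerically from the Orlicz definition; the bisection sketch below is ours, while Schlemm (2016) gives the exact expression, which we do not reproduce here. The output illustrates the point of this example: for small $p$ the $\psi_2$-norm decays only logarithmically in $1/p$, while the standard deviation decays like $\sqrt{p}$.

```python
import math

def psi2_norm(values, probs):
    """inf{ t > 0 : E exp(Y^2 / t^2) <= 2 }, by geometric bisection."""
    def fits(t):
        try:
            return sum(q * math.exp((v / t) ** 2) for v, q in zip(values, probs)) <= 2.0
        except OverflowError:
            return False
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        lo, hi = (lo, mid) if fits(mid) else (mid, hi)
    return hi

for p in [0.5, 0.1, 0.01, 0.001]:
    norm = psi2_norm([1 - p, -p], [p, 1 - p])         # centered Bernoulli(p)
    print(f"p={p}: psi_2-norm {norm:.4f}, std {math.sqrt(p * (1 - p)):.4f}")
```

Already at $p = 0.01$ the $\psi_2$-norm exceeds the standard deviation several times over, which is exactly the regime where the $\sigma$-based bound improves on the $K$-based one.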
Uniform concentration results in the Ising model
Suppose we have a discrete random vector $\sigma \in \{-1, 1\}^n$ with the distribution defined by
$$p(\sigma) = \mathcal{Z}^{-1} \exp\Bigl(\tfrac{1}{2} \sum_{i \ne j} J_{ij} \sigma_i \sigma_j + \sum_{i} h_i \sigma_i\Bigr),$$
where $\mathcal{Z}$ is a normalizing factor. This distribution defines the Ising model with parameters $J = (J_{ij})$ and $h = (h_1, \ldots, h_n)$.
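The conditional distributions in the Ising model are explicit, so samples can be drawn by Glauber dynamics. The following is a minimal sketch; the interaction matrix, field, and scaling are hypothetical, chosen only to satisfy a Dobrushin-type smallness condition on the row sums.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20

# Hypothetical symmetric interaction matrix with zero diagonal, rescaled so that
# max_i sum_j |J_ij| < 1 (a Dobrushin-type condition), plus a small external field.
J = rng.uniform(-1.0, 1.0, (n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
J *= 0.5 / np.abs(J).sum(axis=1).max()
h = rng.uniform(-0.2, 0.2, n)

def glauber_sweep(sigma):
    """Resample every spin from its conditional law given the others:
    P(sigma_i = +1 | rest) = 1 / (1 + exp(-2 (sum_j J_ij sigma_j + h_i)))."""
    for i in range(n):
        field = J[i] @ sigma + h[i]
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        sigma[i] = 1.0 if rng.uniform() < p_plus else -1.0
    return sigma

sigma = rng.choice([-1.0, 1.0], size=n)
for _ in range(500):
    sigma = glauber_sweep(sigma)
print("spins after burn-in:", sigma.astype(int))
```

Under Dobrushin's condition the dynamics mixes rapidly, so a moderate burn-in already produces approximate samples from the model.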
For an arbitrary function $f$ on $\{-1, 1\}^n$, define the difference operator
where the corresponding operator flips the sign of the $i$-th coordinate, and the conditional distribution of the $i$-th coordinate is taken given the rest of the coordinates. The following recent result provides a log-Sobolev inequality for the vector $\sigma$ under Dobrushin-type conditions.
Theorem 2.1 (Proposition 1.1, Götze et al. (2018)).
Suppose that $J$ and $h$ satisfy $\max_i \sum_{j} |J_{ij}| \le 1 - \rho$ for some $\rho \in (0, 1)$ and $\max_i |h_i| \le \alpha$.
Then there is a constant $C > 0$ such that for an arbitrary function $f$ on $\{-1, 1\}^n$ it holds
Let $\mathcal{A}$ be a finite set of symmetric matrices with zero diagonal. Then, in the Ising model under Dobrushin's condition, it holds for any $t \ge 0$ that
where the constant depends only on the parameters from Theorem 2.1.
Given all but the $i$-th element, let the corresponding pair of variables be independent and distributed according to the conditional distribution of the $i$-th coordinate. Obviously, we may have all of them defined on the same discrete probability space, and thus we will use common notation for the distribution and the conditional distribution. Then, we have
where we switched the two variables due to the symmetry between them.
Observe that, denoting for short and using the independence of the two variables given the rest, we have, therefore, by the chain rule,
Finally, we get
Now we want to consider the function
where $\mathcal{A}$ is a given set of symmetric matrices with zero diagonal (the diagonal is not important here, since the spins satisfy $\sigma_i^2 = 1$). Applying Theorem 2.1 to this function, we have
where, for the maximizer of (2.6), we have
Note that concentration for is implied by the same result. Indeed, we have
where is such that . Thus, the expectation of the corresponding difference operator is bounded by , so that, due to the standard Herbst argument, Theorem 2.1 implies
To sum up, by Theorem 2.1 it holds,
It remains to apply Lemma 1.3, which brings us to a uniform Hanson-Wright type concentration bound for the Ising model
where the constant depends only on the parameters from Theorem 2.1. The claim follows. ∎
In the case when our result implies the upper tail of the recent concentration inequality proved in Adamczak et al. (2018a) (see Theorem 2.2 and Example 2.5). To show this fact (denoting ) we observe that
Now, it is well known that the logarithmic Sobolev inequality implies the Poincaré inequality, and therefore,
where we used that , which holds for any symmetric and nonnegative . Finally,
The term on the right-hand side appears instead of the corresponding term in Example 2.5 mentioned above.
Replacing $\|\max_i |X_i|\|_{\psi_2}$ with $\max_i \|X_i\|_{\psi_2}$ in Theorem 1.1
Here we show that it is in general not possible to substitute $\|\max_i |X_i|\|_{\psi_2}$ with $\max_i \|X_i\|_{\psi_2}$ in Theorem 1.1, by presenting a concrete counterexample, which was kindly suggested by Radosław Adamczak. Suppose, on the contrary, that there is an absolute constant $C$ such that for any set of matrices and any subgaussian random variables it holds with probability at least ,
which implies with some other constant
Notice that here we also allow a constant in front of the expectation.
Let us take with having only one nonzero element . For simplicity take i.i.d. with , so that
Then, assuming, say, we have
which since implies
Note that this inequality also holds if we rescale by an arbitrary factor. Therefore, if we have a moment equivalence, we can always rescale to have and , so that the above inequality holds.
Taking the latter into account, we conclude that there is a constant such that if a centered random variable satisfies , then for any the following holds:
It is known that such hypercontractivity of maxima implies a certain regularity of the tails of the distribution. In this case, by Theorem 4.6 in Hitczenko et al. (1998), for any there is another constant such that for all it holds
so that in our case of and and taking , there is such that for all it holds
The latter does not have to hold for every subgaussian random variable. For instance, take a symmetric random variable with and for ; we then have , which implies . Moreover, for we also have , thus , and the conditions of (2.9) are satisfied. But for large enough , we have
thereby breaking the tail regularity (2.10). Thus, it is impossible to establish an inequality of the form (2.8). We also note that it is possible to prove that (2.9) may fail for the random variable defined above via direct computations.
For the same reason, it is not possible to replace