Quantitative homogenization of degenerate random environments


Abstract.

We study discrete linear divergence-form operators with random coefficients, also known as random conductance models. We assume that the conductances are bounded, independent and stationary; the law of a conductance may depend on the orientation of the associated edge. We give a simple necessary and sufficient condition for the relaxation of the environment seen by the particle to be diffusive, in the sense of every polynomial moment. As a consequence, we derive polynomial moment estimates on the corrector.


MSC 2010: 35B27, 35K65, 60K37.

Keywords: Quantitative homogenization, environment viewed by the particle, mixing of Markov chains, corrector estimate.

1. Introduction

We study the homogenization of discrete divergence-form operators

(1.1)

where is a family of independent random variables indexed by the nearest-neighbor, unoriented edges of the graph , . We assume that the coefficients take values in , that the law of depends only on the orientation of the edge , and that none of these probability laws is a Dirac mass at . In this setting, we give a necessary and sufficient condition for the relaxation of the “environment viewed by the particle” to be diffusive in the sense of every polynomial moment. As the name implies, this process can be described in terms of the random walk with generator given by (1.1); a schematic form of this operator is recalled below. Rather than working with the process itself, we will define the flow of its semigroup directly by means of the PDE (1.6) below. Denoting by the solution to (1.6) with bounded, local and centered initial condition , we show that the property

(1.2)

holds as soon as

(1.3)

where is the canonical basis of , and denotes the expectation with respect to the law of . Moreover, if (1.2) holds for a single “generic” bounded and local initial datum , then (1.3) holds as well. Finally, we show that if the initial condition in (1.6) is the divergence of a local bounded function and if (1.3) holds, then (1.2) can be improved to

(1.4)
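For orientation, we recall here the schematic form of an operator as in (1.1); the normalization and sign convention below are only indicative and may differ from the ones fixed above. Such an operator acts on a function $u : \mathbb{Z}^d \to \mathbb{R}$ as
\[
\mathcal{L} u(x) \;=\; \sum_{y \sim x} a_{\{x,y\}}\,\big(u(y) - u(x)\big), \qquad x \in \mathbb{Z}^d,
\]
where $a_{\{x,y\}}$ is the conductance attached to the unoriented edge $\{x,y\}$; with this convention, $\mathcal{L}$ is the generator of a continuous-time random walk that jumps from $x$ to a neighbor $y$ at rate $a_{\{x,y\}}$.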

A rather degenerate example of an environment satisfying (1.3) can be constructed by letting be i.i.d. Bernoulli random variables in directions, and letting with i.i.d. exponential random variables in the remaining direction.
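To illustrate how degenerate such an environment is, consider the following indicative choice of parameters (not fixed in the construction above): Bernoulli conductances with values in $\{0,1\}$ in the first $d-1$ directions, and $a = e^{-Y}$ with $Y$ exponential of parameter $\lambda > 0$ in the remaining direction. Then, for every $\varepsilon \in (0,1)$,
\[
\mathbb{P}\big[a \le \varepsilon\big] \;=\; \mathbb{P}\big[Y \ge \log(1/\varepsilon)\big] \;=\; \varepsilon^{\lambda},
\qquad\text{so that}\qquad
\mathbb{E}\big[a^{-q}\big] \;=\; \frac{\lambda}{\lambda - q} \;<\; \infty \quad \text{precisely when } q < \lambda .
\]
In particular, the conductances vanish with positive probability in $d-1$ directions, and even in the remaining direction there is no uniform lower bound and only negative moments of order $q < \lambda$ are finite; on the other hand, every moment of $\log(1/a) = Y$ is finite.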

As shown in [20, 13] and recalled below, these estimates imply a range of other quantitative homogenization results, including bounds on the corrector. They can also be used to prove quenched central limit theorems for the associated random walk.

Under the condition that the coefficients are uniformly bounded away from $0$ and infinity, the estimate (1.2) and the stronger estimate (1.4) with were proved in [13]. We refer to [6, 14, 15, 5] for more recent developments under this assumption. Bounds on the corrector under assumptions similar to ours and for were obtained in [17]. The case of coefficients that are bounded away from $0$ but not from infinity was considered in [20, 10].

Our approach is inspired by the strategy of [13], which rests on quenched heat kernel bounds. In the context of degenerate environments, quenched diffusive bounds on the heat kernel are false in general. However, under the condition (1.3), we will be able to control the anomalous behavior of the heat kernel, in the sense of every polynomial moment, by exploiting the method presented in [21]. We then show that these weaker bounds are sufficient to imply (1.2) and (1.4).

Our long-term goal is to develop a comparable strategy in the context of interacting particle systems, in particular to study the relaxation of the environment viewed by a tagged particle in the symmetric exclusion process. We expect the results of this paper to be a first step in this direction. Loosely speaking, in the present work, we leverage the existence of one “good direction” in which the conductances are well-behaved. For the exclusion process, we hope to benefit from the good behavior of the model in the time direction, as illustrated for instance by [21, Lemma 5.3].

The results we present here also shed light on the random walk among random conductances, that is, the Markov process with infinitesimal generator given by (1.1). In this view, the corrector provides us with harmonic coordinates that turn the walk into a martingale. As is well-known, these coordinates allow one to prove an annealed invariance principle for the random walk, under very general conditions on the conductances [16, 11]. The qualifier “annealed” indicates that convergence in law is only known if one averages over the environment as well as over the trajectories of the walk. When the conductances are uniformly elliptic, it was quickly realized [22] that the statement can be improved to a quenched invariance principle: that is to say, one that holds for almost every realization of the environment. What needs to be shown is that the corrector, evaluated at the position of the random walk, is of lower order compared with the position of the walk itself, with probability one with respect to the environment. By general arguments, one only knows that the corrector is sublinear in an -averaged sense, and this is not sufficient in itself to guarantee a quenched result. One possibility to overcome this difficulty is to show that the walk is sufficiently “spread out” (in the sense that it satisfies heat kernel estimates), so that the averaged information on the corrector becomes sufficient to conclude. This was the route explored in the majority of papers on the subject [22, 23, 19, 8, 9, 18, 2]; see however [12, 7, 4, 3] for approaches more similar to ours. The results we derive here give much more precise information than these earlier works, since Corollary 1.2 below implies that the corrector is not only sublinear, but in fact grows more slowly than any power of the distance, with probability one.
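To make the role of the corrector explicit (with notation that is only indicative): writing $\phi = (\phi_1, \dots, \phi_d)$ for the corrector and $\mathcal{L}$ for the operator in (1.1), the harmonic coordinates are the maps $x \mapsto x_k + \phi_k(x)$, which satisfy
\[
\mathcal{L}\big(x_k + \phi_k\big)(x) \;=\; 0 \qquad (1 \le k \le d,\ x \in \mathbb{Z}^d).
\]
Consequently, if $X$ denotes the random walk with generator $\mathcal{L}$, the process $t \mapsto X_t + \phi(X_t)$ is a martingale under the quenched law, and an invariance principle for $X$ reduces to a martingale central limit theorem together with the sublinearity of $\phi$ along the trajectory of the walk.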

1.1. Notation and main result

We say that are neighbors, and write , when . This endows with a graph structure, so that we may introduce the associated set of unoriented edges . Throughout the paper, we will typically denote points of by , and edges in by . For a given edge , we write and to denote its two endpoints, with the convention that for some , where we recall that is the canonical basis of . We identify the vector with the edge , for each .
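In symbols, and restating the conventions just introduced (we write $\underline{e}$ and $\overline{e}$ for the two endpoints of an edge $e$ in this summary):
\[
x \sim y \;\Longleftrightarrow\; |x - y| = 1, \qquad e = \{\underline{e}, \overline{e}\} \ \text{ with } \ \overline{e} = \underline{e} + e_i \ \text{ for some } 1 \le i \le d, \qquad e_i \ \text{ identified with the edge } \ \{0, e_i\}.
\]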

The space of “environments” we consider is . The group naturally acts by translations on in the following way: for every and , we define

(1.5)

where for , we write . We consider a random whose law we denote by . We assume the family of random variables to be independent and stationary, i.e. for every , the random variables and have the same law. In other words, the random variables are independent, and the law of only depends on the orientation of the edge . We assume that for every , , since otherwise the model would truly be defined on a lower-dimensional space.
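For concreteness, the translation in (1.5) is the usual one (up to notational conventions): for $x \in \mathbb{Z}^d$ and an edge $e$,
\[
(\theta_x\, a)_e \;:=\; a_{x+e}, \qquad x + e \;:=\; \{x + \underline{e},\ x + \overline{e}\},
\]
and stationarity of the family of conductances amounts to requiring that, for every $x$ and $e$, the random variables $a_e$ and $a_{x+e}$ have the same law.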

For a random variable and a fixed edge , we define

and simply write for the -dimensional random vector defined as

We observe that for every the operator is bounded and that its adjoint in , which we denote by , is defined as
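For orientation, in the usual formalism of the environment viewed by the particle these operators take the following form (we use this notation for illustration; the precise definitions are the ones given above): for a random variable $f$ and $1 \le i \le d$,
\[
\mathrm{D}_i f(a) \;:=\; f(\theta_{e_i} a) - f(a), \qquad \mathrm{D} f \;:=\; (\mathrm{D}_1 f, \dots, \mathrm{D}_d f), \qquad \mathrm{D}_i^* f(a) \;=\; f(\theta_{-e_i} a) - f(a),
\]
the last identity expressing the adjoint of $\mathrm{D}_i$ in $L^2(\mathbb{P})$, as follows from the invariance of $\mathbb{P}$ under translations. With this notation, equation (1.6) below is a parabolic equation of the schematic form $\partial_t u + \mathrm{D}^*(\mathbf{a}\, \mathrm{D} u) = 0$ with initial condition $f$, where $\mathbf{a}$ denotes the diagonal matrix of the conductances of the edges $\{0, e_1\}, \dots, \{0, e_d\}$.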

Given a random variable , with , our goal is to understand the relaxation to equilibrium of , solution of

(1.6)

where

Whenever no confusion occurs, we write instead of . For , we say that a function is local with support of size if depends only on a finite number of conductances . Here is our main result.

Theorem 1.1.

Under the moment condition (1.3), the following statements hold.

  • For every , there exists a constant such that if is local with support of size , bounded and centered, then

    (1.7)
  • For every and , there exists a constant such that if is local with support of size and bounded, and if , then

    (1.8)

1.2. Consequences of the main result

As was shown in [20, Section 9] and [13, Section 6], Theorem 1.1 implies a host of other results of interest in stochastic homogenization. In particular, estimates on the corrector can be derived, by integration in time, from the relaxation to equilibrium of the solution to (1.6) with .

Corollary 1.2.

Assume that the moment condition (1.3) holds, and let . If , then there exists a solution to the equation

(1.9)

Moreover, is in for every .

We refer to [13, Proposition 4] for the proofs of these results. As another example, we can estimate the corrector with massive term , i.e. the solution of

By [13, Proposition 5], we obtain that for every and ,
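For orientation, the corrector with massive term $\mu > 0$ alluded to here is usually defined, in the notation sketched above, as the unique solution $\phi_\mu \in L^2(\mathbb{P})$ of an equation of the schematic form
\[
\mu\, \phi_\mu \;+\; \mathrm{D}^*\big(\mathbf{a}\,(\mathrm{D}\phi_\mu + \xi)\big) \;=\; 0,
\]
where $\xi \in \mathbb{R}^d$ is a fixed direction; the corrector of Corollary 1.2 corresponds, informally, to letting $\mu \downarrow 0$ in this regularized problem.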

1.3. On the necessity of the moment condition

We now explain why our assumption (1.3) on the law of is necessary in order to have the optimal relaxation decay (1.2). In this subsection we introduce the notation for where the constant only depends on the dimension of the lattice .

If (1.3) does not hold, then we can find and a sequence , such that for every ,

(1.10)

We show that the solution of

(1.11)

with for a fixed such that , does not satisfy the bound (1.2). We make this choice of initial datum for convenience, but as will be seen shortly, this is inessential. From (1.6), we may bound by stationarity

and hence, by the maximum principle,

Therefore, if , we get

(1.12)

and thus

(1.13)

where in the last equality we use that and are independent and have the same law. For with we may apply (1.10) and estimate for every

Thus, for any we contradict (1.2).
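Schematically, and with the indicative conventions for (1.6) used earlier (constants and normalizations are not meant to be sharp), the mechanism is as follows. By the maximum principle, $\|u(t,\cdot)\|_{L^\infty} \le \|f\|_{L^\infty}$, so integrating (1.6) in time gives
\[
|u(t,a) - f(a)| \;\le\; 2\, \|f\|_{L^\infty}\, t \sum_{y \sim 0} a_{\{0,y\}} .
\]
Hence, on the event that every conductance at the origin is of order $1/t$ or smaller, the solution has essentially not relaxed at time $t$. Choosing $f$ to depend only on conductances away from the origin, this event is independent of $f$, and $\mathbb{E}[u^2(t,\cdot)]$ is bounded from below, up to a small error, by $\mathbb{E}[f^2]$ times the probability of this event. If this probability decays more slowly than the polynomial rate required by (1.2) along a sequence of times, the bound (1.2) must fail.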

1.4. Organisation of the paper

In the rest of the paper, we assume that the moment condition (1.3) holds. We derive the necessary heat kernel bounds in Section 2, and proceed to prove Theorem 1.1 in Section 3.

2. Heat kernel bounds

We say that a random field is stationary if for every , we have . Conversely, given a random variable , we define its stationary extension as the random field given by . If the function solves (1.6), then its stationary extension is a solution in of the parabolic PDE

(2.1)

with the spatial discrete gradient defined, for an edge and a random field , as

and the adjoint of in .
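To fix ideas (again with indicative notation), the stationary extension of a random variable $f$ is the field $\bar{f}(x, a) := f(\theta_x a)$, and the correspondence between (1.6) and (2.1) takes the schematic form
\[
\bar{u}(t, x, a) \;:=\; u(t, \theta_x a), \qquad \partial_t \bar{u} + \nabla^*\big(\mathbf{a}\, \nabla \bar{u}\big) \;=\; 0, \qquad \nabla \bar{u}(x) \;:=\; \big(\bar{u}(x + e_i) - \bar{u}(x)\big)_{1 \le i \le d},
\]
where $\mathbf{a}(x)$ is the diagonal matrix whose $i$-th entry is the conductance of the edge $\{x, x + e_i\}$, and $\nabla^*$ is the adjoint of $\nabla$ in $\ell^2(\mathbb{Z}^d)$.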

Let be the parabolic Green function associated to the operator , i.e. for every the unique bounded solution in of

(2.2)

with being the indicator function defined for
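In the standard formulation (again indicative of the notation only), for each environment $a$ and each $y \in \mathbb{Z}^d$ the parabolic Green function is the unique bounded solution of
\[
\partial_t P(t, x; y) + \nabla^*\big(\mathbf{a}\, \nabla P\big)(t, x; y) \;=\; 0 \quad (t > 0,\ x \in \mathbb{Z}^d), \qquad P(0, x; y) \;=\; \mathbf{1}_{\{x = y\}},
\]
the gradient and its adjoint acting on the variable $x$.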

For every , and , we write

(2.3)

The goal of this section is to show the heat kernel upper bound summarized in the following Lemma 2.1, which we then lift to an estimate on the gradient of the heat kernel in Lemma 2.2.

Lemma 2.1.

Let be as in (2.2). There exists a random variable such that for all ,

(2.4)

and

(2.5)

or, equivalently

(2.6)
Lemma 2.2.

For every , there exists such that for every it holds

(2.7)
(2.8)

where is the stationary extension of the random variable defined in Lemma 2.1 and

(2.9)

Moreover, for every it holds

(2.10)

The proof of Lemma 2.1 consists in showing that the environments we consider are “-moderate” in the sense defined in [21].

Lemma 2.3.

There exists a family of non-negative random variables and a family of nearest-neighbor paths such that the following properties hold:

  • (i) For every and

    (2.11)
  • (ii) Let be a random variable and a random field; for every , the path connects the two endpoints of ; its length satisfies, for every ,

    (2.12)

    and it holds

    (2.13)
    (2.14)
  • (iii) Both and are stationary.

Proof.

In this proof the notation stands for with . For every edge , we define

(2.15)

Since is bounded from below, there exists a path that achieves the infimum above. We choose one according to a fixed, deterministic tie-breaking rule, and denote it by . With this definition of weights and paths, point (iii) immediately follows by stationarity of . We also have

i.e. inequality (2.13). Note that by definition of , an analogous calculation yields (2.14). Moreover, since , we have

and thanks to (2.15), the bound (2.12) is directly implied by (2.11). In order to show this last bound, we want to argue that for every and

(2.16)

We proceed in the following way: Thanks to assumption (1.3) and independence, it holds for to be fixed below that

and therefore there exists a (random) such that

(2.17)

The main idea is to explicitly construct a path , connecting the two endpoints of , for which we have some control on the quantity ; from this, thanks to definition (2.15), we also obtain the same bound for (2.16). Without loss of generality, let us assume that for some . Therefore, if , we just choose and get by stationarity that for every

(2.18)

i.e. the bound (2.11). If otherwise , then by stationarity and our assumption on the random variables to be non-degenerate, we may fix a (independent on and ) and consider

which satisfies

(2.19)

for a positive constant . Therefore, we estimate for any

(2.20)

We control the first term on the r.h.s. of (2.20) by

(2.21)

For the second term, the idea is to observe that, if , then we may consider as path

the one starting from z, moving k steps in direction , then moving in direction , and finally going back with another k steps to . Therefore,

and since by construction

we may control

Independence and then stationarity yield

Fixing now with , we get

so that if we choose in (2.17), this turns into

and (2) and (2.18) respectively into

Combining the previous three inequalities, we conclude (2.16) and hence (2.11). ∎

Lemma 2.4.

For every , let

(2.22)

Then, for every

(2.23)
Proof.

For a fixed edge and any , let us consider

We observe that if there are edges whose optimal path passes through b, then there must be an edge with . Therefore,

The path being connected allows us to estimate

Chebyshev’s inequality yields for every

We may now choose q large enough, e.g. , to conclude

which implies inequality (2.23) for every . ∎

We now show the following general result on stationary random fields.

Lemma 2.5.

Let be a stationary random field. Then for every , edges and there exists a such that

(2.24)
(2.25)

whenever the r.h.s. of (2.24)-(2.25) is finite.

Proof.

For the sake of simplicity, we omit the argument in and write instead of , with depending on , and . We start with (2.25): Let us fix and . Since by (2.23) we have -almost surely that , we may write

and by Hölder’s inequality in

We now decompose the second term on the r.h.s. of the previous inequality as

and thus rewrite

Therefore, an application of Hölder’s inequality with exponents in yields

(2.26)