Homogenization with large spatial random potential
We consider the homogenization of parabolic equations with large spatially-dependent potentials modeled as Gaussian random fields. We derive the homogenized equations in the limit of vanishing correlation length of the random potential. We characterize the leading effect of the random fluctuations and show that their spatial moments converge in law to Gaussian random variables. Both results hold for sufficiently small times and in sufficiently large spatial dimensions, with a threshold set by the order of the spatial pseudo-differential operator in the parabolic equation. Below that critical dimension, the solution to the parabolic equation is shown to converge to the (non-deterministic) solution of a stochastic equation in the companion paper. The results are then extended to cover the case of long-range random potentials, which generate larger, but still asymptotically Gaussian, random fluctuations.
Homogenization theory, partial differential equations with random coefficients, Gaussian fluctuations, large potential, long range correlations
35R60, 60H05, 35K15.
Let P(D) denote the pseudo-differential operator with symbol p(ξ). We consider the following evolution equation in spatial dimension d:
Here, q denotes a mean-zero stationary Gaussian process defined on a probability space. We assume that q has a bounded and integrable correlation function R(x) = E{q(y) q(y + x)}, where E denotes the mathematical expectation associated with the probability space, and that its power spectrum (the Fourier transform of R) is bounded, continuous in the vicinity of the origin, and integrable. The size of the potential is chosen so that the limiting solution, as the correlation length vanishes, differs from the unperturbed solution obtained by setting the potential to zero. The appropriate size of the potential is given by
The potential is bounded almost surely on bounded domains but is unbounded almost surely on the whole space. Using a method based on the Duhamel expansion, we nonetheless obtain that, for sufficiently small times, the above equation admits a weak solution, with bounds that are uniform in time and in the correlation length.
Moreover, as the correlation length vanishes, the solution converges strongly, uniformly in time, to its limit, the solution of the following homogenized evolution equation
where the effective (non-negative) potential is given by
Here, the constant appearing above is the volume of the unit sphere. We denote the propagator for the above equation as the operator that to each initial condition associates the corresponding solution of (3).
We assume that the non-negative (by Bochner's theorem) power spectrum is bounded by a positive, bounded, radially symmetric, and integrable function. Then we have the following result.
There exists a positive time such that, for all smaller times, the equation admits a solution with bounds uniform in the correlation length. Moreover, let us assume that the initial condition is sufficiently smooth and let the homogenized solution be the unique solution to (3). Then we have the convergence results
where the first notation denotes a bound by a constant multiple of the indicated rate, where the leading correction is a deterministic function bounded uniformly in time, and where we have defined
The Fourier transform of the deterministic function is determined explicitly in (58) below.
Depending on the spatial dimension, the error term is dominated either by its deterministic components or by its random fluctuations. In both situations, the random fluctuations may be estimated as follows. We show that
converges weakly in space and in distribution to a Gaussian random variable. More precisely, we have
Let the test function be such that its Fourier transform is integrable. Then we find that, for all sufficiently small times,
where the convergence holds in the sense of distributions, where the integrator is the standard multiparameter Wiener measure, and where the standard deviation is defined by
This shows that the fluctuations of the solution are asymptotically given by a Gaussian random variable, which is consistent with the central limit theorem.
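As a rough illustration of this central limit behavior (a minimal one-dimensional sketch, not the setting of the paper: the field, the test function, and all numerical parameters below are illustrative choices), one can check numerically that rescaled spatial averages of a short-range correlated Gaussian field against a smooth test function are asymptotically Gaussian with the predicted variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D stand-in for a short-range potential: a moving average of white
# noise, which is stationary, mean-zero, Gaussian, with unit variance and
# a compactly supported correlation function R.
width = 8                          # correlation length of the discrete field

def sample_field(n):
    w = rng.standard_normal(n + width - 1)
    kernel = np.ones(width) / np.sqrt(width)
    return np.convolve(w, kernel, mode="valid")

eps = 1.0 / 2**12                  # grid spacing
n = 2**12
x = np.arange(n) * eps
phi = np.exp(-((x - 0.5) ** 2) / 0.02)   # smooth test function on [0, 1]

# Rescaled spatial averages sqrt(eps) * sum_k q_k phi(x_k): the central
# limit theorem predicts asymptotic Gaussianity with variance
# (sum_j R(j)) * eps * sum_k phi(x_k)^2, and sum_j R(j) = width here.
samples = np.array(
    [np.sqrt(eps) * np.dot(sample_field(n), phi) for _ in range(4000)]
)
var_theory = width * eps * np.sum(phi**2)
```

The empirical variance of `samples` matches `var_theory` up to Monte Carlo error, and the empirical skewness is close to zero, as the Gaussian limit requires.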
We observe a sharp transition in the behavior of the solution at the critical dimension. Below that dimension, the following holds. The size of the potential that generates an order-one perturbation is now given by (see the last inequality in lemma 2.2)
Using the same methods as in the previous case, we may obtain that the solution is uniformly bounded and thus, for sufficiently small times, converges weakly to a limit. The problem is addressed in the companion paper, where it is shown that the limit is the solution to the following stochastic partial differential equation in Stratonovich form
with a d-parameter spatial white noise “density”. The above equation admits a unique solution locally uniformly in time. Stochastic equations of this form have also been analyzed in other settings; see [9, 12]. However, our results show that such solutions cannot be obtained as limits of solutions corresponding to vanishing correlation length, so that their physical justification is more delicate. In the case of a bounded potential, we refer the reader to the cited literature for more details on the above stochastic equation.
The above theorems 1 and 2 assume short-range correlations for the random potential. Mathematically, this is modeled by an integrable correlation function or, equivalently, by a power spectrum that is bounded at the origin. Longer-range correlations may be modeled by power spectra that are unbounded in the vicinity of the origin, for instance by assuming that the spectrum factors as a function bounded in the vicinity of the origin times a homogeneous function of negative degree. Provided that the degree is such that the effective potential defined in (4) remains bounded, the results of theorems 1 and 2 may be extended to the case of long-range fluctuations. We refer the reader to theorem 3 in section 3.3 below for the details. The salient features of the latter result are that the convergence properties stated in theorem 1 still hold with a modified rate and that the random fluctuations are now asymptotically Gaussian processes of larger amplitude. Moreover, they may conveniently be written as stochastic integrals with respect to a multiparameter fractional Brownian motion in place of the Wiener measure appearing in (8).
Let us also mention that all the results stated here extend to the Schrödinger equation, in which the time derivative in (1) is multiplied by the imaginary unit. We then verify that the effective potential in (3) becomes purely imaginary, so that the homogenized equation is given by
The main effect of the randomness is therefore a phase shift of the quantum waves as they propagate through the random medium. Because the semigroup associated with the free evolution of quantum waves does not damp high frequencies as efficiently as the semigroup for the parabolic equation (1), additional regularity assumptions on the initial condition are necessary to obtain the limiting behaviors described in theorems 1 and 2. We do not consider the case of the Schrödinger equation further here.
The rest of the paper is structured as follows. Section 2 recasts (1) as an infinite Duhamel series of integrals in the Fourier domain. The cross-correlations of the terms appearing in the series are analyzed by calculating moments of Gaussian variables and estimating the contributions of graphs similar to those introduced in [5, 11]. These estimates allow us to construct a solution to (1), with bounds uniform in time, for sufficiently small times. The maximal time of validity of the theory depends on the power spectrum of the random potential. The estimates on the graphs are then used in section 3 to characterize the limit and the leading random fluctuations of the solution. The extension of the results to long-range correlations is presented in section 3.3.
The analysis of (1) and of similar operators has been performed for smaller potentials than those given in (2) in, e.g., [1, 6], in which case the solution converges strongly to the solution of the unperturbed equation (with vanishing potential). The results presented in this paper may thus be seen as generalizations to the case of potentials sufficiently strong that the unperturbed solution is no longer a good approximation. The analysis presented below is based on simple estimates for the Feynman diagrams corresponding to Gaussian random potentials and does not extend to other potentials such as Poisson point potentials, let alone potentials satisfying only mild mixing conditions. Extension to other potentials would require more sophisticated estimates of the graphs than those presented here or a different functional setting. For related estimates on the graphs appearing in Duhamel expansions, we refer the reader to, e.g., [4, 5, 11].
2 Duhamel expansion and existence theory
Since the potential is a stationary mean-zero Gaussian random field, it admits the following spectral representation
where the associated complex spectral process is such that
for all suitable test arguments, with the power spectrum and the correlation function of the potential respectively defined by
In the sequel, we use the corresponding notation for the rescaled potential and its spectral representation.
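The spectral representation also suggests a standard way to simulate such a field: filter white noise so that its power spectrum matches a prescribed one. The following sketch is illustrative only (the target correlation function and the grid are arbitrary choices, and the circulant trick imposes periodicity); it draws stationary Gaussian samples and checks their empirical correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dx = 512, 0.1
x = np.arange(n) * dx
# Circular distance so the target correlation is periodic on the grid.
dist = np.minimum(x, n * dx - x)
R = np.exp(-dist**2)               # target correlation function (assumed)
s = np.fft.fft(R).real             # power spectrum; nonnegative by Bochner
s = np.clip(s, 0.0, None)          # clip tiny negative rounding errors

def sample():
    """One stationary Gaussian field with correlation R (circulant trick)."""
    z = rng.standard_normal(n)
    return np.fft.ifft(np.sqrt(s) * np.fft.fft(z)).real

fields = np.array([sample() for _ in range(5000)])
# Empirical correlation between site 0 and site j, averaged over samples.
emp = (fields * fields[:, [0]]).mean(axis=0)
```

Up to Monte Carlo error, `emp` reproduces the prescribed correlation `R`, which is the discrete analogue of the spectral representation above.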
2.1 Duhamel expansion
Let us introduce the Fourier transform of the solution. We may now recast the parabolic equation (1) as
with , where
Here and below, we use the notation . After integration in time, the above equation becomes
This allows us to write the formal Duhamel expansion
Here, we have introduced the following notation:
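To see how the Duhamel iteration works in the simplest possible setting, consider a single mode with a constant potential, for which the integral equation can be iterated numerically and compared against the exact exponential solution (a toy sketch; the constants p, q, and u0 below are arbitrary and unrelated to the paper's notation):

```python
import numpy as np

# Toy scalar model of the Duhamel iteration: u'(t) = -(p + q) u(t),
# recast as  u(t) = e^{-p t} u0 - q * int_0^t e^{-p(t-s)} u(s) ds,
# and solved by iterating the integral equation.
p, q, u0 = 1.0, 0.7, 1.0
t_grid = np.linspace(0.0, 1.0, 2001)
dt = t_grid[1] - t_grid[0]

u = np.exp(-p * t_grid) * u0           # zeroth Duhamel iterate
for _ in range(30):                    # Duhamel/Picard iterations
    integrand = np.exp(p * t_grid) * u
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * dt)))   # trapezoid rule
    u = np.exp(-p * t_grid) * (u0 - q * integral)

exact = np.exp(-(p + q) * t_grid) * u0
```

The iterates converge to the exact solution, with the nth term of the series of size (qt)^n/n!; the factorial decay of these terms is precisely what the time-ordered integrals below quantify in the full problem.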
We now show that, for sufficiently small times, the expansion (15) converges (uniformly for all sufficiently small correlation lengths) in the mean-square sense. Moreover, the norm of the solution is bounded by the norm of the initial condition, which gives us an a priori estimate for the solution. The convergence results are based on the analysis of the following moments
which, thanks to (16), are given by
Let us introduce compact notation for the collections of time and frequency variables appearing in the expansion. Since the potential is real-valued, we find that
where the domain of integration in the time and frequency variables is inherited from the previous expression. Note that no integration is performed in the outermost variables. The integral may be recast as
where the integrals in all the variables for are performed over . The functions ensure that the integration is equivalent to the one presented above. The latter form is used in the proof of lemma 2.1 below.
We need to introduce additional notation. The moments of are defined as
We also introduce the following covariance function
These terms allow us to analyze the convergence properties of the solution. Let the test function be smooth (integrable and square integrable is sufficient). We introduce the two random variables
2.2 Summation over graphs
We now need to estimate moments of the Gaussian spectral process. The expectation vanishes unless the total number of Gaussian factors is even. The expectation of a product of mean-zero Gaussian variables has an explicit structure: it is a sum, over all possible complete pairings of the indices, of products of expectations of pairs. The moments are thus given as a sum of products of the expectations of pairs of terms, where the sum runs over all possible pairings. We define a pair as its contribution to the product, given by
We have used here the Hermitian symmetry of the spectral process of the real-valued potential.
The number of pairings in a product of 2n terms (i.e., the number of allocations of a set of 2n indices into n unordered pairs) is equal to
There is consequently a very large number of terms appearing in the sum over pairings. In each instance of the pairings, terms from the two groups may be paired within or across the groups. We denote by simple pairs the pairs that involve a delta function of the indicated form.
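The pairing count above is easy to verify by brute force. The following sketch (illustrative) enumerates all allocations of 2n indices into unordered pairs and checks the count against (2n)!/(2^n n!) = (2n-1)!!:

```python
from itertools import count  # (stdlib only; itertools not strictly needed)
from math import factorial

def pairings(indices):
    """Enumerate all allocations of a tuple of indices into unordered pairs."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for k in range(len(rest)):
        pair = (first, rest[k])
        remaining = rest[:k] + rest[k + 1:]
        for tail in pairings(remaining):
            yield [pair] + tail

def double_factorial(m):
    return 1 if m <= 0 else m * double_factorial(m - 2)

for n in range(1, 6):
    n_pairings = sum(1 for _ in pairings(tuple(range(2 * n))))
    # (2n)! / (2^n n!) = (2n - 1)!!
    assert n_pairings == factorial(2 * n) // (2**n * factorial(n))
    assert n_pairings == double_factorial(2 * n - 1)
```

The factorial growth of this count is the combinatorial difficulty that the graph estimates below must overcome.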
The collection of pairs constitutes a graph constructed as follows; see Fig. 1. The upper part of the graph, drawn with bullets, represents the first factor, while the lower part, also drawn with bullets, represents the second. The two squares on the left of the graph represent the outer variables of the first factor, while the squares on the right represent those of the second. The dotted pairing lines represent the pairs of the graph. We denote the collection of all possible graphs that can be constructed for a given number of terms.
We collect the time variables and the frequency variables accordingly. We then find that
This provides us with an explicit expression for the moments as a summation over all possible graphs generated by the moments of Gaussian random variables. We need to introduce several classes of graphs.
We say that the graph has a crossing if at least one pairing line joins the upper and lower parts of the graph. We denote the set of graphs with at least one crossing as the crossing graphs and their complement as the non-crossing graphs. We observe that one contribution to the moments is the sum over the crossing graphs while the other is the sum over the non-crossing graphs.
The unique graph with only simple pairs is called the simple graph and we define . We denote by the crossing simple graphs with only simple pairs except for exactly one crossing. The complement of in the crossing graphs is denoted by .
As we shall see, only the simple graph contributes a leading-order term in the limit, and only the crossing simple graphs contribute to the leading order of the fluctuations.
The graphs are defined similarly in the calculation of the moments in (18), except that crossing graphs have no meaning in that context. A summation of all the arguments of the delta functions shows that the last delta function may be replaced without modifying the integral.
This allows us to summarize the above calculations as follows:
2.3 Analysis of crossing graphs
The covariance involves the summation over the crossing graphs. Let us consider a graph with a given number of crossing pairs, where a crossing pair joins the upper and lower parts of the graph. By summing the arguments inside the delta functions of the upper part, we observe that the last of these delta functions may be replaced by
Similarly, by summing over all pairs of the lower part, we obtain that the last of these delta functions may be replaced by
The product of the latter two delta functions is then equivalent to
Analysis of the crossing terms.
We evaluate the expression in (24) at equal times and integrate in the last frequency variable. For each crossing pair, we perform a change of variables. We then define
This allows us to obtain that
Here the bound also includes the integration in the remaining variable. The estimates here and in subsequent sections rely on integrating selected time variables. All estimates are performed as the following lemma indicates.
Let n be given and consider an integral of the form
where for and assume that . Then
Moreover, let a permutation of the indices be given and define the corresponding permuted integral. Then the two integrals are equal.
Using the above result with a permutation that exchanges only two indices allows us to estimate the integral by integrating in the corresponding variable.
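As an aside, the iterated time integrals over the ordered simplex that appear throughout the expansion can be checked numerically in the simplest case where all integrand factors are equal to one, in which case the integral reduces to the simplex volume t^n/n! (a generic sanity check on such time-ordered integrals, not a proof of the lemma):

```python
import numpy as np
from math import factorial

# Volume of the ordered simplex 0 < s_1 < ... < s_n < t is t^n / n!,
# recovered here by iterating a cumulative trapezoid-rule integration.
t = np.linspace(0.0, 2.0, 4001)
dt = t[1] - t[0]
I = np.ones_like(t)
for n in range(1, 6):
    # integrate the previous iterate once more in time
    I = np.concatenate(([0.0], np.cumsum(0.5 * (I[1:] + I[:-1]) * dt)))
    assert np.max(np.abs(I - t**n / factorial(n))) < 1e-5
```

This factorial decay in n is what makes the Duhamel series summable for small times.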
Proof. The derivation of (28) is immediate. We also calculate
Certain exponential factors are uniformly bounded. We now estimate the integrals in the relevant time variables in (26). Note that two indices are excluded from these integrations: one because it corresponds to the last crossing and one because it is the receiving end of a pairing line. Each integral is bounded by:
The remaining exponential terms are uniformly bounded. Using lemma 2.1, this allows us to obtain that
Here, the last factor corresponds to the integration in the remaining time variables. Note the square on the last line, which comes from integrating in both variables of the crossing pair.
The delta functions allow us to integrate in some of the frequency variables and to integrate the initial condition in another variable. Thanks to lemma 2.2 below, the power spectra allow us to integrate in the remaining variables in