Precise asymptotics for the parabolic Anderson model with a moving catalyst or trap
Abstract
We consider the solution $u$ to the parabolic Anderson model, where the potential is given by $\xi(t,x) = \gamma\,\delta_{Y_t}(x)$ with a simple symmetric random walk $Y$ on $\mathbb{Z}^d$. Depending on the sign of the parameter $\gamma$, the potential is interpreted as a randomly moving catalyst or trap.
In the trap case, i.e., $\gamma < 0$, we look at the annealed time asymptotics in terms of the first moment of $u$. Given a localized initial condition, we derive the asymptotic rate of decay to zero in dimensions 1 and 2 up to equivalence and characterize the limit in dimensions 3 and higher in terms of the Green's function of a random walk. For a homogeneous initial condition we give a characterization of the limit in dimension 1 and show that the moments remain constant for all time in dimensions 2 and higher.
In the case of a moving catalyst ($\gamma > 0$), we consider the solution from the perspective of the catalyst, i.e., the expression $u(t, Y_t + x)$. Focusing on the cases where the moments grow exponentially fast (that is, $\gamma$ sufficiently large), we describe the moment asymptotics of the expression above up to equivalence. Here, it is crucial to prove the existence of a principal eigenfunction of the corresponding Hamilton operator. While this is well established for the first moment, we provide an extension to higher moments.
AMS 2010 Subject Classification. Primary 60K37, 82C44; Secondary 60H25.
Key words and phrases. Parabolic Anderson model, annealed asymptotics, dynamic random medium.
0.1 Introduction
The parabolic Anderson model (PAM) is the heat equation on the lattice $\mathbb{Z}^d$ with a random potential, given by

(0.1) $\displaystyle \frac{\partial}{\partial t} u(t,x) = \kappa \Delta u(t,x) + \xi(t,x)\, u(t,x), \qquad u(0,x) = u_0(x), \quad (t,x) \in (0,\infty) \times \mathbb{Z}^d,$

where $\kappa > 0$ denotes a diffusion constant, $u_0$ a nonnegative function and $\Delta$ the discrete Laplacian, defined by

$\Delta f(x) = \sum_{y:\, \|y - x\| = 1} \bigl[ f(y) - f(x) \bigr].$

Furthermore,

$\xi = \bigl( \xi(t,x) : t \geq 0,\ x \in \mathbb{Z}^d \bigr)$

is a space- and time-dependent random potential.
We deal with the special case that the potential is given by

$\xi(t,x) = \gamma\, \delta_{Y_t}(x),$

with $Y$ a simple symmetric random walk with generator $\varrho \Delta$, $\varrho > 0$, that starts in the origin, and a parameter $\gamma \in \mathbb{R} \setminus \{0\}$ called the coupling constant. In this paper we analyse the large time asymptotics after averaging over the potential, which is usually referred to as annealed asymptotics. We denote expectation with respect to the potential by $\langle\,\cdot\,\rangle$.
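To make this setup concrete, the potential can be sampled in a few lines of code. The sketch below assumes the convention that the generator $\varrho\Delta$ gives the walk total jump rate $2d\varrho$ (rate $\varrho$ to each neighbour); all function names are illustrative, not taken from the paper.

```python
import random

def simulate_walk(rho, d, t_max, seed=0):
    """Sketch: sample a continuous-time simple symmetric random walk Y
    on Z^d with generator rho*Delta, i.e. the walk waits an
    Exp(2*d*rho)-distributed holding time and then jumps to a uniformly
    chosen nearest neighbour.  Returns the list of (jump time, position)."""
    rng = random.Random(seed)
    t, pos = 0.0, (0,) * d              # Y starts in the origin
    path = [(t, pos)]
    while t < t_max:
        t += rng.expovariate(2 * d * rho)   # total jump rate is 2*d*rho
        i = rng.randrange(d)                # coordinate to move
        step = rng.choice((-1, 1))
        pos = pos[:i] + (pos[i] + step,) + pos[i + 1:]
        path.append((t, pos))
    return path

def xi(path, gamma, t, x):
    """The potential xi(t, x) = gamma * delta_{Y_t}(x): it equals gamma
    at the current position of the walk and vanishes everywhere else."""
    pos = [p for (s, p) in path if s <= t][-1]
    return gamma if x == pos else 0.0
```

A trap corresponds to `gamma < 0`, a catalyst to `gamma > 0`; the potential is a single moving point mass in either case.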
One possible interpretation of this system arises from chemistry. Here, $u(t,x)$ describes the concentration of reactant particles in a point $x$ at time $t$ in the presence of a randomly moving particle $Y$. In the case $\gamma < 0$, the particle acts as a decatalyst (or trap) that kills reactant particles with rate $|\gamma|$ at its position. In the case of positive $\gamma$, we consider a catalyst particle that causes reactants to multiply with rate $\gamma$. In both cases $\langle u(t,x) \rangle$ is interpreted as the averaged concentration. For further interpretations and an overview of the PAM see for instance (GM90), (CM94), (M94) and (GK05).
Annealed asymptotics in the case of a positive coupling constant have already been investigated in (GH06). In the present work, we derive similar results with regard to the expression $u(t, Y_t + x)$, which can be interpreted as the particle concentration in a neighbourhood of the catalyst. In addition to logarithmic asymptotics in terms of Lyapunov exponents, we derive asymptotics up to equivalence for most of the parameter choices where exponential growth is observed.
The case of negative $\gamma$ has, to the best of our knowledge, not been investigated so far. Its analysis relies on techniques quite different from those in the catalyst case, as a functional analytic approach proves unfeasible here. We calculate moment limits depending on the model parameters and, in the case of moment convergence towards zero, specify the convergence speed up to equivalence.
Whereas the PAM with time-independent potential or white-noise potential is well understood, other time-dependent potentials have been examined only recently. In (GdH06), (GdHM11) and (KS03), for instance, the authors investigate the case of infinitely many randomly moving catalysts. In (CGM11) the authors deal with the case of finitely many catalysts, whereas the article (DGRS11) is dedicated to a model similar to the case of infinitely many moving traps. Further examples of time-dependent potentials can be found in (GdHM07), (GdHM09a), (GdHM10), (MMS11), and the recent survey (GdHM09b). Within these proceedings, (KS11), (LM11) and (MZ11) deal with the parabolic Anderson model with time-independent potential.
In Section 0.2.1 we analyze the PAM with localized initial condition $u_0 = \delta_0$ and $\gamma < 0$. Let

$U(t) := \Bigl\langle \sum_{x \in \mathbb{Z}^d} u(t,x) \Bigr\rangle$

denote the expected total mass of the system at time $t$ if the solution is initially localized in the origin and the trap starts in the origin. We find
Theorem 0.1.1
For $d \in \{1, 2\}$ and every $\gamma < 0$,
and
Theorem 0.1.2
For $d \geq 3$ and every $\gamma < 0$,
where $G$ denotes the Green's function of a random walk with generator $(\kappa + \varrho)\Delta$.
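Since the limit in Theorem 0.1.2 is phrased in terms of a Green's function, i.e., the expected total number of visits a random walk pays to a site, a quick Monte Carlo sketch may help fix intuition. The snippet below uses a discrete-time walk for simplicity; parameter values and names are illustrative, and the continuous-time Green's functions in this paper differ from the discrete-time quantity only by a time normalization (the total jump rate).

```python
import random

def green_at_origin(d=3, n_steps=1000, n_paths=1000, seed=1):
    """Monte Carlo sketch: estimate the Green's function G(0) of a
    discrete-time simple symmetric random walk on Z^d, i.e. the expected
    total number of visits to the origin.  G(0) is finite exactly in the
    transient dimensions d >= 3; the path length n_steps truncates the
    (almost surely finite) count."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_paths):
        pos = [0] * d
        visits = 1                      # the walk starts at the origin
        for _ in range(n_steps):
            i = rng.randrange(d)
            pos[i] += rng.choice((-1, 1))
            if not any(pos):            # back at the origin
                visits += 1
        total += visits
    return total / n_paths
```

For $d = 3$ the classical value is $G(0) \approx 1.516$, corresponding to a return probability of about $0.34$; the estimate above should land close to it.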
Remark 1
In Section 0.2.2 we analyze the case of a homogeneous initial condition $u_0 \equiv 1$. We find that in dimensions 2 and higher the average total mass in each point remains constant for all time. This seems surprising since a symmetric random walk is recurrent in dimensions 1 and 2, but it follows by a rescaling argument and the fact that a Brownian motion is point recurrent only in dimension 1. In dimension 1 we give a representation of the asymptotic mass that depends on $\kappa$ and $\varrho$ but not on the strength $\gamma$ of the potential. Let

$U(t,x) := \langle u(t,x) \rangle$

denote the expected mass at time $t$ in the lattice point $x$. The main results of this section are, for $d = 1$,
Theorem 0.1.3
For all $\gamma < 0$,
and for higher dimensions
Theorem 0.1.4
For $d \geq 2$ and all $\gamma < 0$,
Remark 2
Even though the formula in Theorem 0.1.3 looks quite clumsy we find that is decreasing in . It tends to as tends to zero and it tends to zero as tends to infinity.
The third section is dedicated to analysing the leading order asymptotics of moments of the PAM solution from the perspective of the catalyst, i.e., we consider $\gamma > 0$ and the expression $u(t, Y_t + x)$. For $p \in \mathbb{N}$ and $x_1, \ldots, x_p \in \mathbb{Z}^d$ we denote by

$m_p(t, x_1, \ldots, x_p) := \bigl\langle u(t, Y_t + x_1) \cdots u(t, Y_t + x_p) \bigr\rangle$

the $p$th mixed moment at $x_1, \ldots, x_p$. Moreover, introduce the $p$th Hamilton operator on $\ell^2\bigl((\mathbb{Z}^d)^p\bigr)$ by

$\mathcal{H}_p = \kappa \Delta_p + \varrho \Delta_{\mathrm{shift}} + V,$

where the potential is defined as $V(x_1, \ldots, x_p) = \gamma \sum_{i=1}^p \delta_0(x_i)$, $\Delta_p$ denotes the discrete Laplacian on $(\mathbb{Z}^d)^p$, and $\Delta_{\mathrm{shift}}$ acts on $f \in \ell^2\bigl((\mathbb{Z}^d)^p\bigr)$ as

$\Delta_{\mathrm{shift}} f(x_1, \ldots, x_p) = \sum_{e:\, \|e\| = 1} \bigl[ f(x_1 + e, \ldots, x_p + e) - f(x_1, \ldots, x_p) \bigr].$
Here, the first term represents the random movement of a collection of $p$ independent random walks accounting for particle diffusion, and the second term arises from the shift by the position of the catalyst. By application of the well-established Feynman–Kac formula and calculating the generator of the resulting semigroup, we obtain the operator representation
(0.2) 
This gives the connection between large time moment asymptotics and spectral analysis of the above Hamiltonian. Let us denote by $\lambda_p$ the supremum of the spectrum of $\mathcal{H}_p$. Gärtner and Heydenreich (GH06) have shown that, for all $p \in \mathbb{N}$ and independently of $x_1, \ldots, x_p$,
This limit is called the $p$th Lyapunov exponent. By similar methods one can show that, just as well,
However, this does not enable us to derive large time asymptotics up to equivalence. Assuming the existence of an eigenfunction corresponding to $\lambda_p$ with certain properties, we could on a heuristic level decompose the right-hand side of equation (0.2) as
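With hypothetical symbols $\lambda_p$ for the top of the spectrum, $v_p$ for a normalized eigenfunction and $\mathbb{1}$ for the homogeneous initial condition (this notation is ours, introduced only to display the heuristic), such a decomposition would read:

```latex
e^{t\mathcal{H}_p}\mathbb{1}
  \;=\; e^{t\lambda_p}\,\langle \mathbb{1}, v_p \rangle\, v_p
  \;+\; e^{t\mathcal{H}_p}\bigl(\mathbb{1} - \langle \mathbb{1}, v_p \rangle\, v_p\bigr),
```

If $\lambda_p$ is an isolated eigenvalue with one-dimensional eigenspace, the second summand is of strictly smaller exponential order, so the first term determines the asymptotics up to equivalence.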
Our next main result contains criteria under which this is indeed possible.
Theorem 0.1.5
Fix $p \in \mathbb{N}$, and let one of the following conditions be satisfied:

or , large enough to ensure ,

, .
Then, there exists a strictly positive and summable eigenfunction $v_p$ of $\mathcal{H}_p$ corresponding to $\lambda_p$. Assuming $v_p$ to be normed in $\ell^2\bigl((\mathbb{Z}^d)^p\bigr)$, the large time asymptotics of the $p$th moment are given by
(0.3) 
where $\|\cdot\|$ denotes the norm in $\ell^2\bigl((\mathbb{Z}^d)^p\bigr)$.
Remark 3
Remark 4
For these cases, the condition is sufficient to have positive exponential growth (i.e., $\lambda_p > 0$). The condition also implies exponential growth of the $p$th moment.
0.2 Moving trap
This section is devoted to the case $\gamma < 0$. Our main proof tool is the Feynman–Kac representation of the solution, given by
Here, $E^X$, $E^Y$ and $E^Z$ denote expectation with respect to random walks $X$, $Y$ and $Z$ with generators $\kappa\Delta$, $\varrho\Delta$ and $(\kappa + \varrho)\Delta$, respectively. The subscript indicates the starting point, and the corresponding probability measures will be denoted by $P_x$. By

$p_t(x,y)$

we denote the transition probability of a random walk with generator $\kappa\Delta$.
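For intuition about these transition probabilities, the one-dimensional case can be computed explicitly by uniformization: the number of jumps of the continuous-time walk by time $t$ is Poisson distributed, and given the number of jumps the position follows the binomial law of the embedded discrete-time walk. The sketch below assumes jump rate $\varrho$ to each of the two neighbours; the function name and truncation parameter are illustrative.

```python
import math

def p_t(rho, t, x, n_terms=120):
    """Sketch: p_t(0, x) for a continuous-time simple symmetric random
    walk on Z with generator rho*Delta, via uniformization.  The number
    of jumps by time t is Poisson(2*rho*t); given n jumps, the walk sits
    at x with the symmetric binomial probability C(n, (n+x)/2) / 2^n."""
    x = abs(x)
    q = 2.0 * rho * t                      # Poisson parameter
    total = 0.0
    for n in range(x, n_terms):
        if (n - x) % 2:                    # n jumps reach x only if n = x (mod 2)
            continue
        poisson = math.exp(-q) * q ** n / math.factorial(n)
        total += poisson * math.comb(n, (n + x) // 2) / 2 ** n
    return total
```

For example, `p_t(0.5, 1.0, 0)` is approximately `0.4658`, matching the closed-form expression $e^{-2\varrho t} I_0(2\varrho t)$ in terms of the modified Bessel function $I_0$.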
0.2.1 Localized initial condition
In this section we prove Theorems 0.1.1 and 0.1.2. With the help of the Feynman–Kac representation and a time reversal we find that, for all $t \geq 0$ and $\gamma < 0$,
Dimensions 1 and 2
We start with the dimensions where the random walk is recurrent.
Proof (Theorem 0.1.1)
Using the semigroup representation of the resolvent
we find that
This implies, for all $\lambda > 0$,
Now the claim for $d = 1$ follows by a standard Tauberian theorem. The case $d = 2$ follows due to the recurrence of the random walk. ∎
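For reference, the standard Tauberian theorem invoked here is Karamata's, stated in Laplace–Stieltjes form (this is the textbook statement, not a quotation from the present paper): for a non-decreasing right-continuous function $F$ with $F(0) = 0$, an exponent $\rho \geq 0$, and $L$ slowly varying at infinity,

```latex
\int_0^\infty e^{-\lambda t}\, dF(t) \;\sim\; \lambda^{-\rho}\, L(1/\lambda)
\quad (\lambda \downarrow 0)
\qquad \Longleftrightarrow \qquad
F(t) \;\sim\; \frac{t^{\rho}\, L(t)}{\Gamma(\rho + 1)}
\quad (t \to \infty).
```

It thus converts the small-$\lambda$ behaviour of the resolvent computed above into the large-$t$ behaviour of the moment.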
Dimensions 3 and higher
A Tauberian theorem is not applicable in transient dimensions because here the expected number of particles does not converge to zero.
Proof (Theorem 0.1.2)
Let
Notice that the Green’s function is finite in transient dimensions and admits the following probabilistic representation.
That implies for all . Furthermore, we find that is the unique solution to the following boundary problem
Hence, for all ,
∎
0.2.2 Homogeneous initial condition
In this section we prove Theorems 0.1.3 and 0.1.4. For the homogeneous initial condition $u_0 \equiv 1$, the Feynman–Kac representation yields, for all $t \geq 0$ and $x \in \mathbb{Z}^d$,
Dimension 1
Let be the first hitting time of and . The density of with respect to , , will be denoted by . To prove Theorem 0.1.3, we split into two parts
where and have not met up to time , and
where they have already met by time . The next proposition shows that the first part is asymptotically negligible. Notice that this implies that there is no difference between the hard trap ($\gamma = -\infty$) and the soft trap ($-\infty < \gamma < 0$) case, because the leading term does not depend on $\gamma$.
Proposition 1
For all ,
Proof
We can assume without loss of generality that since is recurrent. Let be the first jumping time of . Furthermore, for let
In a first step we give an upper bound for the rate of decay of . Let us abbreviate . Using the strong Markov property of we find
Here denotes the distribution function of an exponentially distributed random variable with parameter and denotes the corresponding density. By iteration we find that, for any ,
Since there exists such that for all and , we see that asymptotically
where is a positive constant. Let and . Then it follows by Hölder’s inequality that
For let
Obviously admits the same asymptotic behaviour as . Fix . The strong Markov property and the central limit theorem yield
Here is a positive constant. This proves the claim.∎
Now we show what asymptotically looks like. Recall that .
Proposition 2
For all ,
Proof
Because of the strong Markov property of and we find
It follows by Donsker’s invariance principle that
Here and denote two independent Brownian motions that start in the origin with variance and , respectively.
Their expectations are denoted by and , respectively. Moreover, and denotes a Gaussian density with variance .
Indeed, the application of Donsker's invariance principle is not entirely trivial here, because we have to sum over all lattice points, and the principle cannot be applied uniformly in them.
Let
, and .
Notice that and are independent. It follows that
Now the claim follows by substituting .∎
Dimensions 2 and higher
In dimensions 2 and higher, we find that asymptotically the expected mass remains constant because a Brownian motion is point recurrent only in dimension 1.
Proof (Theorem 0.1.4)
Let be the first time that the process hits the centered ball with radius , and let
Similarly to the case $d = 1$, we find with the help of Donsker's invariance principle that
However, for and ,
Hence, it follows by monotone convergence that which implies that for all .∎
0.3 Moving catalyst
In this section we stick to the homogeneous initial condition $u_0 \equiv 1$ and examine the case of a randomly moving catalyst, i.e., we consider $\gamma > 0$.
0.3.1 Spectral properties of higher-order Anderson Hamiltonians
Throughout this section, we write for all . Considering the first Hamilton operator given by
the existence of an eigenfunction corresponding to its largest spectral value, provided that this value is greater than zero, has been widely known for some time. The following theorem extends this to the case $p \geq 2$ and constitutes the main statement of this section:
Theorem 0.3.1
Assume $\lambda_p > 0$. Then, $\lambda_p$ is isolated in the point spectrum of $\mathcal{H}_p$ with a one-dimensional eigenspace. The corresponding eigenfunction may be chosen strictly positive.
For a start, we restrict the operator to the subspace of componentwise symmetric functions
which is obviously closed in $\ell^2\bigl((\mathbb{Z}^d)^p\bigr)$. Recall the definition of the operators and from Section 0.1 and define and as their restrictions to the set above. In the same manner, we denote by the restricted second-order Hamilton operator. The reader may easily verify that these operators are endomorphisms on this subspace. In particular, the restriction is a self-adjoint operator on this Hilbert space, and it is essential that the supremum of its spectrum coincides with $\lambda_p$, which can be shown in an elementary way. Each eigenfunction of the restriction corresponding to $\lambda_p$ is an eigenfunction of $\mathcal{H}_p$ as well. Moreover, we expect that an eigenfunction of $\mathcal{H}_p$ is, or at least could be chosen as, an element of this subspace. In view of that, passing over to the restriction is just a natural approach. In the next step, we write rather than in order to emphasize the dependence on the potential parameter $\gamma$, and we establish a further translation of the main task:
Lemma 1
Suppose . Then, the resolvent operator exists on , and for all , we have
Moreover, for and ,
Proof
A Fourier transform reveals that the spectrum of is concentrated on the negative half-axis; thus exists on for all . In particular, it exists on , and then it coincides with , as is an endomorphism on . Assertions (i) and (iii) follow by rearranging the equations considered and applying the resolvent operator. The second relation is shown using the Rayleigh–Ritz formula.∎
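The Fourier step can be checked numerically: with the convention $\Delta f(x) = \sum_{y \sim x} [f(y) - f(x)]$, the symbol of $\Delta$ on $\mathbb{Z}^d$ at frequency $\theta \in [-\pi,\pi]^d$ is $2\sum_{i=1}^d (\cos\theta_i - 1) \in [-4d, 0]$, which is why the spectrum of $\kappa\Delta$ lies on the negative half-axis. A small sketch (names are illustrative):

```python
import math

def laplacian_symbol(theta, kappa=1.0):
    """Fourier symbol of kappa*Delta on Z^d at frequency theta in
    [-pi, pi]^d, under the convention Delta f(x) = sum over neighbours
    y of (f(y) - f(x)): each coordinate contributes
    e^{i t} + e^{-i t} - 2 = 2 (cos t - 1), which is nonpositive."""
    return 2.0 * kappa * sum(math.cos(t) - 1.0 for t in theta)
```

The symbol vanishes only at $\theta = 0$ and is bounded below by $-4d\kappa$, so the spectrum of $\kappa\Delta$ is the interval $[-4d\kappa, 0]$, with no spectrum to the right of zero.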
As a next step, we introduce an operator on having the same spectrum and the same point spectrum as and that admits the decomposition . Here, is compact and the supremum of is strictly smaller than the supremum of . Then, we use Weyl’s theorem to obtain that the largest value in belongs to the point spectrum . The resolvent admits the representation
where the resolvent kernel is defined as
Here, is a random walk on with generator . Then we obtain
(0.4) 
If we assume , we get
for , and in particular
for . Let us therefore introduce the operator on with
Both operators are evidently self-adjoint. The lemma below identifies the spectra and point spectra of