Functional limit theorems for sums of independent geometric Lévy processes


Zakhar Kabluchko, Institute of Stochastics, Ulm University, Helmholtzstr. 18, 89069 Ulm, Germany
Received November 2009; revised June 2010.

Let \xi_{i}, i\in\mathbb{N}, be independent copies of a Lévy process \{\xi(t),t\geq 0\}. Motivated by the results obtained previously in the context of the random energy model, we prove functional limit theorems for the process

Z_{N}(t)=\sum_{i=1}^{N}\mathrm{e}^{\xi_{i}(s_{N}+t)}
as N\to\infty, where s_{N} is a non-negative sequence converging to +\infty. The limiting process depends heavily on the growth rate of the sequence s_{N}. If s_{N} grows slowly in the sense that \liminf_{N\to\infty}\log N/s_{N}>\lambda_{2} for some critical value \lambda_{2}>0, then the limit is an Ornstein–Uhlenbeck process. However, if \lambda:=\lim_{N\to\infty}\log N/s_{N}\in(0,\lambda_{2}), then the limit is a certain completely asymmetric \alpha-stable process \mathbb{Y}_{\alpha;\xi}.


Bernoulli 17(3), 2011, 942–968. DOI: 10.3150/10-BEJ299.




Keywords: \alpha-stable processes; functional limit theorem; geometric Brownian motion; random energy model

1 Introduction and statement of main results

1.1 Introduction

One of the simplest models in the physics of disordered systems is the random energy model (REM). The partition function of the random energy model at an inverse temperature \beta>0 is a random variable S_{n}(\beta) given by

S_{n}(\beta)=\sum_{i=1}^{2^{n}}\mathrm{e}^{\beta\sqrt{n}\zeta_{i}}, (1)

where \zeta_{i}, i\in\mathbb{N}, are i.i.d. standard Gaussian random variables. Bovier et al. [7] studied the limit laws of S_{n}(\beta) as n\to\infty, depending on the parameter \beta. They showed that for \beta<\sqrt{\log 2/2}, the random variable S_{n}(\beta) obeys a central limit theorem with a Gaussian limit law, whereas for \beta>\sqrt{\log 2/2}, the limit distribution is a completely asymmetric \alpha-stable law. The results of [7] have been extended by Ben Arous et al. [3] to the case when the random variables \zeta_{i} are non-Gaussian; see also [5, 6, 12]. Extending [7] in a different direction, Cranston and Molchanov [8] considered sums of the form

R_{n}(\beta)=\sum_{i=1}^{N(n)}\mathrm{e}^{\beta\sum_{j=1}^{n}\zeta_{i,j}}, (2)

where \zeta_{i,j}, (i,j)\in\mathbb{N}^{2}, is a two-dimensional array of i.i.d. random variables, N(n) is a certain exponentially growing function of n, \beta>0, and n\to\infty. The sum R_{n}(\beta) reduces to S_{n}(\beta) if the random variables \zeta_{i,j} are standard Gaussian and N(n)=2^{n}. Cranston and Molchanov [8] have shown that the behavior of the sum R_{n}(\beta) is rather similar to that of the sum S_{n}(\beta), with Gaussian and completely asymmetric \alpha-stable limit laws. Unaware of [8], the author proved essentially the same result in [14].

The aim of the present paper is to obtain functional limit theorems corresponding to the results of [7, 8, 14]. That is, we will consider sums of exponentials of stochastic processes (Lévy processes or random walks) rather than sums of exponentials of random variables. We prefer to work with Lévy processes, but it should be stressed that all our results have straightforward analogues for random walks. Let \xi_{i}, i\in\mathbb{N}, be independent copies of a Lévy process \{\xi(t),t\geq 0\}, and let \{s_{N}\}_{N\in\mathbb{N}} be a non-negative sequence. We are interested in the limiting properties, as N\to\infty, of the stochastic process Z_{N} defined by

Z_{N}(t)=\sum_{i=1}^{N}\mathrm{e}^{\xi_{i}(s_{N}+t)}. (3)

Since the random variable Z_{N}(0) reduces essentially to R_{s_{N}}(\beta), we will recover the results of [7, 8, 14] by restricting our processes to t=0. If s_{N}=\beta^{2}n, N=2^{n}, and \xi is a standard Brownian motion, then Z_{N}(0) has the same distribution as the partition function of the random energy model S_{n}(\beta) given in (1). The results of [7, 8, 14] suggest that the limiting process for Z_{N} as N\to\infty should be either Gaussian or completely asymmetric \alpha-stable depending on the rate of growth of the sequence s_{N}. We will show that this is indeed the case, obtaining in the limit an Ornstein–Uhlenbeck process in the “slow growth regime”, and a certain completely asymmetric \alpha-stable process \mathbb{Y}_{\alpha;\xi} in the “fast growth regime”. The family of processes \mathbb{Y}_{\alpha;\xi} has not been studied in the literature so far, although a similar class of max-stable processes has been considered in [23].

To motivate the study of the process Z_{N}, consider the following problem. Suppose that we are given a portfolio consisting of a large number N of financial assets whose prices are modeled by independent geometric Brownian motions (or, somewhat more generally, by independent geometric Lévy processes). Then the price of the whole portfolio after s_{N} units of time is given by the process Z_{N}. It will be shown below that if s_{N}, as a function of N, grows slowly (i.e., if we are looking at the price in the near future), then the price of the portfolio is approximated by an Ornstein–Uhlenbeck process, whereas if s_{N} grows rapidly (i.e., if we are interested in the remote future), then the price is approximated by the \alpha-stable process \mathbb{Y}_{\alpha;\xi}. For example, if we are summing standard geometric Brownian motions, then the boundary between the near future and the remote future lies at s_{N}\sim\frac{1}{2}\log N.
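As an illustration of this portfolio interpretation (not part of the paper's argument), the process Z_{N} of (3) is easy to simulate when the \xi_{i} are standard Brownian motions; all function and variable names in the following NumPy sketch are ours.

```python
import numpy as np

def simulate_Z_N(N, s_N, t_grid, rng):
    """Simulate Z_N(t) = sum_{i=1}^N exp(xi_i(s_N + t)) on an increasing
    grid t_grid > -s_N, with xi_i independent standard Brownian motions."""
    times = s_N + np.asarray(t_grid, dtype=float)   # absolute times s_N + t
    dt = np.diff(np.concatenate(([0.0], times)))    # spacings, all >= 0
    incr = rng.standard_normal((N, times.size)) * np.sqrt(dt)
    xi = np.cumsum(incr, axis=1)                    # xi_i(s_N + t) per path
    return np.exp(xi).sum(axis=0)                   # Z_N on the grid

rng = np.random.default_rng(0)
Z = simulate_Z_N(N=1000, s_N=2.0, t_grid=[0.0, 0.5, 1.0], rng=rng)
# For standard BM, psi(1) = 1/2, so E Z_N(t) = N * exp((s_N + t)/2)
```

Comparing the sample mean of many such runs against N\mathrm{e}^{(s_{N}+t)/2} is a quick consistency check on the normalizations used below.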

1.2 Notation

Before we can state our results, we need to recall some facts related to Cramér’s large deviations theorem; see, for instance, [9], Chapter 2.2. A Lévy process is a process with stationary, independent increments and cadlag sample paths. Let \{\xi(t),t\geq 0\} be a Lévy process such that

\psi(u):=\log\mathbb E\mathrm{e}^{u\xi(1)}\qquad\mbox{is finite for all }u\in\mathbb{R}. (4)

We always assume that \xi(1) is not a.s. constant. The function \psi is infinitely differentiable and strictly convex with \psi(0)=0. It follows that \psi^{\prime}:[0,\infty)\to[\beta_{0},\beta_{\infty}) is a monotone increasing bijection, where

\beta_{0}=\psi^{\prime}(0)=\mathbb E\xi(1),\qquad\beta_{\infty}=\lim_{u\to+\infty}\psi^{\prime}(u). (5)

Let I:[\beta_{0},\beta_{\infty})\to[0,+\infty) be the Legendre–Fenchel transform of \psi defined by

I(\psi^{\prime}(u))=u\psi^{\prime}(u)-\psi(u),\qquad u\geq 0. (6)

The function I is strictly increasing, strictly convex, infinitely differentiable with I(\beta_{0})=0. As in [8, 14], it will turn out that the limiting properties of the process Z_{N} undergo phase transitions at the “critical points” \lambda_{1},\lambda_{2} given by

\lambda_{1}=I(\psi^{\prime}(1))=\psi^{\prime}(1)-\psi(1),\qquad\lambda_{2}=I(\psi^{\prime}(2))=2\psi^{\prime}(2)-\psi(2). (7)

For example, if \xi is a standard Brownian motion, then \psi(u)=I(u)=u^{2}/2 and the critical points are given by \lambda_{1}=1/2, \lambda_{2}=2.
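As a small numerical sanity check of (6) and (7) (ours, not from the paper), one can evaluate \psi, I and the critical points for a standard Brownian motion, where \psi^{\prime} is invertible by hand:

```python
import math

# Standard Brownian motion: psi(u) = u^2/2, psi'(u) = u, so the
# Legendre-Fenchel transform (6) is explicit.
psi = lambda u: u * u / 2.0
psi_prime = lambda u: u

def I(beta):
    """I(psi'(u)) = u psi'(u) - psi(u); for standard BM, u = beta."""
    u = beta
    return u * psi_prime(u) - psi(u)

# Critical points (7): lambda_1 = psi'(1) - psi(1), lambda_2 = 2 psi'(2) - psi(2)
lambda_1 = I(psi_prime(1.0))
lambda_2 = I(psi_prime(2.0))
print(lambda_1, lambda_2)   # 0.5 and 2.0, as stated in the text
```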

1.3 Statement of main results

Our first result deals with the case s_{N}=0 (and it automatically covers the case s_{N}=\mathit{const} as well). It is a consequence of the central limit theorem in the Skorokhod space and is stated merely for completeness.

Theorem 1.1

If s_{N}=0 and condition (4) holds, then for every T>0, we have the following weak convergence of stochastic processes on the Skorokhod space D[0,T]:

\frac{Z_{N}(\cdot)-\mathbb EZ_{N}(\cdot)}{\sqrt{N}}\stackrel{w}{\rightarrow}\mathbb{G}(\cdot),\qquad N\to\infty, (8)

where \{\mathbb{G}(t),t\geq 0\} is a zero-mean Gaussian process with covariance function

\operatorname{Cov}(\mathbb{G}(t_{1}),\mathbb{G}(t_{2}))=\mathrm{e}^{\psi(2)t_{1}+\psi(1)(t_{2}-t_{1})}-\mathrm{e}^{\psi(1)(t_{1}+t_{2})},\qquad 0\leq t_{1}\leq t_{2}. (9)

Our next theorem deals with the case in which s_{N} grows slowly as a function of N. We will assume that the following slow growth condition is satisfied:

\lim_{N\to\infty}s_{N}=\infty,\qquad\liminf_{N\to\infty}\frac{\log N}{s_{N}}>\lambda_{2}. (10)

Theorem 1.2

If conditions (4) and (10) hold, then for every T>0, we have the following weak convergence of stochastic processes on the Skorokhod space D[-T,T]:

\frac{Z_{N}(\cdot)-\mathbb EZ_{N}(\cdot)}{\sqrt{\operatorname{Var}Z_{N}(\cdot)}}\stackrel{w}{\rightarrow}\mathbb{X}(\cdot),\qquad N\to\infty, (11)

where \{\mathbb{X}(t),t\in\mathbb{R}\} is a zero-mean Gaussian process with covariance function

\operatorname{Cov}(\mathbb{X}(t_{1}),\mathbb{X}(t_{2}))=\mathrm{e}^{(\psi(1)-\psi(2)/2)|t_{2}-t_{1}|},\qquad t_{1},t_{2}\in\mathbb{R}. (12)

Note that \mathbb{X} is an Ornstein–Uhlenbeck process and that the process on the left-hand side of (11) is well defined on [-T,T] if N is sufficiently large. In the next theorem, which deals with the “critical case”, we still obtain an Ornstein–Uhlenbeck process in the limit, but an additional factor appears. We will assume that the following critical growth condition holds: For some \vartheta\in\mathbb{R},

\log N=\lambda_{2}s_{N}+2\vartheta\sqrt{\psi^{\prime\prime}(2)s_{N}}+\mathrm{o}\bigl(\sqrt{s_{N}}\bigr),\qquad N\to\infty. (13)

Theorem 1.3

If conditions (4) and (13) are satisfied, then we have the following convergence of stochastic processes:

\frac{Z_{N}(\cdot)-\mathbb EZ_{N}(\cdot)}{\sqrt{\operatorname{Var}Z_{N}(\cdot)}}\stackrel{f.d.d.}{\rightarrow}\sqrt{\Phi(\vartheta)}\mathbb{X}(\cdot),\qquad N\to\infty, (14)

where \Phi is the standard normal distribution function, \mathbb{X} is as in Theorem 1.2, and \stackrel{f.d.d.}{\rightarrow} denotes the weak convergence of finite-dimensional distributions.

Let us stress that, even when restricted to t=0, the above theorem gives a smoother picture of the critical regime than the corresponding results of [7, 8, 14], where only the case \vartheta=0 was considered.

The next theorem shows that in the fast growth case, a non-Gaussian process \mathbb{Y}_{\alpha;\xi} appears in the limit. We need the following fast growth condition:

\lambda:=\lim_{N\to\infty}\frac{\log N}{s_{N}}\in(0,\lambda_{2}). (15)

Recall also that a random variable is called lattice if its values are of the form an+b, n\in\mathbb{Z}, for some a,b\in\mathbb{R}, and non-lattice if no such a and b exist.

Theorem 1.4

Suppose that (4) and (15) hold, and assume that the distribution of \xi(1) is non-lattice. Define \alpha\in(0,2) as the unique solution of the equation I(\psi^{\prime}(\alpha))=\lambda and let

A_{N}(t)=\begin{cases}0,&\mbox{if }\lambda\in(0,\lambda_{1}),\\ \mathrm{e}^{\psi(1)t}N\mathbb E\bigl[\mathrm{e}^{\xi(s_{N})}1_{\xi(s_{N})\leq\log B_{N}(0)}\bigr]+l(t)B_{N}(t),&\mbox{if }\lambda=\lambda_{1},\\ \mathrm{e}^{\psi(1)t}\mathbb EZ_{N}(0),&\mbox{if }\lambda\in(\lambda_{1},\lambda_{2}),\end{cases} (16)

where l(t)=(\psi^{\prime}(0)-\psi^{\prime}(1))t1_{t<0}, and

B_{N}(t)=\mathrm{e}^{(\psi(\alpha)/\alpha)t}\exp\biggl\{s_{N}I^{-1}\biggl(\frac{\log N-\log(\alpha\sqrt{2\uppi\psi^{\prime\prime}(\alpha)s_{N}})}{s_{N}}\biggr)\biggr\}. (17)

Then, for every T>0, we have the following convergence of stochastic processes on the Skorokhod space D[-T,T]:

\frac{Z_{N}(\cdot)-A_{N}(\cdot)}{B_{N}(\cdot)}\stackrel{w}{\rightarrow}\mathbb{Y}_{\alpha;\xi}(\cdot),\qquad N\to\infty. (18)

Here, \mathbb{Y}_{\alpha;\xi} is a completely asymmetric \alpha-stable process that will be defined below.


Our results have straightforward discrete-time analogues with geometric Lévy processes replaced by exponentials of independent random walks. If \xi is the standard Brownian motion, then in all our results the weak convergence in the Skorokhod space can be replaced by the weak convergence in the space of continuous functions. The non-lattice assumption in Theorem 1.4 cannot be dropped; see [15].

1.4 Definition of the process \mathbb{Y}_{\alpha;\xi}

We now define the \alpha-stable process \mathbb{Y}_{\alpha;\xi} which appeared in Theorem 1.4. Our main reference on \alpha-stable distributions and processes is [22]. First of all, fix some \alpha\in(0,2), and let \xi_{i}, i\in\mathbb{N}, be independent copies of a Lévy process \{\xi(t),t\geq 0\} satisfying condition (4). Independently, let \{\Gamma_{i},i\in\mathbb{N}\} be the arrivals of a unit intensity Poisson process on the positive half-line. In other words, \Gamma_{k}=\sum_{i=1}^{k}\varepsilon_{i}, where \varepsilon_{i}, i\in\mathbb{N}, are i.i.d. exponential random variables with mean 1. Define U_{i}=\Gamma_{i}^{-1/\alpha}, i\in\mathbb{N}, and note that \{U_{i},i\in\mathbb{N}\} are the points of a Poisson process on (0,\infty) with intensity \alpha u^{-(\alpha+1)}\,\mathrm{d}u, arranged in descending order. The restriction of the process \mathbb{Y}_{\alpha;\xi} to the positive half-line is then defined as follows: For t\geq 0, we set

\mathbb{Y}_{\alpha;\xi}(t)=\begin{cases}\displaystyle\sum_{i\in\mathbb{N}}U_{i}\mathrm{e}^{\xi_{i}(t)-(\psi(\alpha)/\alpha)t},&0<\alpha<1,\\ \displaystyle\lim_{\tau\downarrow 0}\biggl(\sum_{i\in\mathbb{N}:U_{i}>\tau}U_{i}\mathrm{e}^{\xi_{i}(t)-\psi(1)t}-\log\frac{1}{\tau}\biggr),&\alpha=1,\\ \displaystyle\lim_{\tau\downarrow 0}\biggl(\sum_{i\in\mathbb{N}:U_{i}>\tau}U_{i}\mathrm{e}^{\xi_{i}(t)-(\psi(\alpha)/\alpha)t}-\frac{\alpha\tau^{1-\alpha}}{\alpha-1}\mathrm{e}^{(\psi(1)-\psi(\alpha)/\alpha)t}\biggr),&1<\alpha<2.\end{cases} (19)

For the definition of the process \mathbb{Y}_{\alpha;\xi} on the negative half-line we refer to [13]. The Poisson representation of \alpha-stable random vectors – see [22], Theorem 3.12.2 – implies that for every t\geq 0, the expression defining \mathbb{Y}_{\alpha;\xi}(t) converges with probability 1. Further, the finite-dimensional distributions of the process \mathbb{Y}_{\alpha;\xi} are \alpha-stable with skewness parameter \beta=1. If \alpha\in(0,1), then the process \mathbb{Y}_{\alpha;\xi} takes only positive values; otherwise, it may take values of either sign. For the proof of the next proposition we refer to [13].

Proposition 1.1

The expression on the right-hand side of (19) defining \mathbb{Y}_{\alpha;\xi} converges uniformly on compact sets with probability 1.

As a consequence, the process \mathbb{Y}_{\alpha;\xi} has cadlag sample paths. Moreover, if \xi is a Brownian motion, then the sample paths of \mathbb{Y}_{\alpha;\xi} are even continuous. The process \mathbb{Y}_{\alpha;\xi} is stationary for \alpha\neq 1; see the preprint version of this paper [13] for this and other properties of \mathbb{Y}_{\alpha;\xi}. The rest of the paper is devoted to proofs.
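For readers who wish to experiment, the series (19) in the case 0<\alpha<1 can be truncated and simulated directly. The sketch below is ours: it takes \xi a standard Brownian motion (so \psi(\alpha)/\alpha=\alpha/2) and is only a finite-sum approximation of \mathbb{Y}_{\alpha;\xi}.

```python
import numpy as np

def simulate_Y(alpha, t_grid, n_points=2000, seed=1):
    """Finite-sum approximation of Y_{alpha;xi}(t) from (19), 0 < alpha < 1,
    with xi a standard Brownian motion and t_grid an increasing grid in [0, inf)."""
    rng = np.random.default_rng(seed)
    gamma = np.cumsum(rng.exponential(size=n_points))  # Poisson arrivals Gamma_i
    U = gamma ** (-1.0 / alpha)                        # points U_i = Gamma_i^{-1/alpha}
    t = np.asarray(t_grid, dtype=float)
    dt = np.diff(np.concatenate(([0.0], t)))
    xi = np.cumsum(rng.standard_normal((n_points, t.size)) * np.sqrt(dt), axis=1)
    drift = (alpha / 2.0) * t                          # (psi(alpha)/alpha) * t
    return (U[:, None] * np.exp(xi - drift)).sum(axis=0)

Y = simulate_Y(alpha=0.5, t_grid=[0.0, 0.5, 1.0])
# For alpha in (0,1) the full series converges a.s. and Y is positive
```

For \alpha\geq 1 one would additionally subtract the compensator terms of (19); the truncated sum alone diverges as n_points grows.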

2 Large deviations and truncated exponential moments

The next proposition on the asymptotic behavior of truncated exponential moments will play a crucial role in the sequel. Parts of it are scattered over [8, 14], but we will give a simple unified proof below.

Proposition 2.1

Let \{\xi(t),t\geq 0\} be a Lévy process satisfying (4) and suppose that the distribution of \xi(1) is non-lattice. Let \kappa\geq 0, and let b_{N}\to\infty and x_{N}\to\infty be two sequences. Let I be the large deviation function of \xi(1), as defined in (6).

(1)

    If for some \vartheta\in\mathbb{R}, b_{N}=\psi^{\prime}(\kappa)x_{N}+\vartheta\sqrt{\psi^{\prime\prime}(\kappa)x_{N}}+\mathrm{o}(\sqrt{x_{N}}) as N\to\infty, then

    \lim_{N\to\infty}\mathrm{e}^{-\psi(\kappa)x_{N}}\mathbb E\bigl[\mathrm{e}^{\kappa\xi(x_{N})}1_{\xi(x_{N})\leq b_{N}}\bigr]=\Phi(\vartheta), (20)

    where \Phi is the standard Gaussian distribution function.

(2)

    If \liminf_{N\to\infty}b_{N}/x_{N}>\psi^{\prime}(\kappa), then

    \lim_{N\to\infty}\mathrm{e}^{-\psi(\kappa)x_{N}}\mathbb E\bigl[\mathrm{e}^{\kappa\xi(x_{N})}1_{\xi(x_{N})>b_{N}}\bigr]=0. (21)

    If, moreover, \lim_{N\to\infty}b_{N}/x_{N}=\psi^{\prime}(\alpha) for some \alpha>\kappa, then

    \mathbb E\bigl[\mathrm{e}^{\kappa\xi(x_{N})}1_{\xi(x_{N})>b_{N}}\bigr]\sim\frac{\mathrm{e}^{\kappa b_{N}}}{(\alpha-\kappa)\sqrt{2\uppi\psi^{\prime\prime}(\alpha)x_{N}}}\mathrm{e}^{-I(b_{N}/x_{N})x_{N}},\qquad N\to\infty. (22)
(3)

    If \limsup_{N\to\infty}b_{N}/x_{N}<\psi^{\prime}(\kappa), then

    \lim_{N\to\infty}\mathrm{e}^{-\psi(\kappa)x_{N}}\mathbb E\bigl[\mathrm{e}^{\kappa\xi(x_{N})}1_{\xi(x_{N})\leq b_{N}}\bigr]=0. (23)

    If, moreover, \lim_{N\to\infty}b_{N}/x_{N}=\psi^{\prime}(\alpha) for some \alpha\in(0,\kappa), then

    \mathbb E\bigl[\mathrm{e}^{\kappa\xi(x_{N})}1_{\xi(x_{N})\leq b_{N}}\bigr]\sim\frac{\mathrm{e}^{\kappa b_{N}}}{(\kappa-\alpha)\sqrt{2\uppi\psi^{\prime\prime}(\alpha)x_{N}}}\mathrm{e}^{-I(b_{N}/x_{N})x_{N}},\qquad N\to\infty. (24)

The following precise form of Cramér’s large deviations theorem was stated and proved in [2, 18] for sums of i.i.d. random variables, but applies equally well to Lévy processes.

Theorem 2.1

Let \{\xi(t),t\geq 0\} be a Lévy process satisfying (4) and suppose that the distribution of \xi(1) is non-lattice. Let \beta=\psi^{\prime}(\alpha), where \alpha>0. Then,

\mathbb{P}[\xi(T)\geq\beta T]\sim\frac{1}{\alpha\sqrt{2\uppi\psi^{\prime\prime}(\alpha)T}}\mathrm{e}^{-I(\beta)T},\qquad T\to\infty. (25)

The statement holds uniformly in \beta\in K for any compact set K\subset(\beta_{0},\beta_{\infty}).
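For a standard Brownian motion, Theorem 2.1 reduces to the Gaussian Mills-ratio asymptotics and can be checked numerically; the snippet below is our own sanity check, with \alpha=\beta, \psi^{\prime\prime}\equiv 1 and I(\beta)=\beta^{2}/2.

```python
import math

def exact_tail(beta, T):
    """P[xi(T) >= beta*T] for a standard Brownian motion: xi(T) ~ N(0, T)."""
    return 0.5 * math.erfc(beta * math.sqrt(T) / math.sqrt(2.0))

def cramer_rhs(beta, T):
    """Right-hand side of (25) with alpha = beta, psi'' = 1, I(beta) = beta^2/2."""
    return math.exp(-beta * beta * T / 2.0) / (beta * math.sqrt(2.0 * math.pi * T))

ratio = exact_tail(1.0, 200.0) / cramer_rhs(1.0, 200.0)
# ratio -> 1 as T -> infinity (Gaussian Mills-ratio asymptotics)
```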


Proof of Proposition 2.1. We will use an exponential change-of-measure argument. Denote by F_{t} the distribution function of \xi(t). There exists a Lévy process \{\tilde{\xi}(t),t\geq 0\} (an exponential twist of \xi) such that \tilde{F}_{t}, the distribution function of \tilde{\xi}(t), is given by

\frac{\tilde{F}_{t}(\mathrm{d}x)}{F_{t}(\mathrm{d}x)}=\mathrm{e}^{\kappa x-\psi(\kappa)t},\qquad x\in\mathbb{R}. (26)

Recall from (4) that \psi(u)=\log\mathbb E\mathrm{e}^{u\xi(1)} and let \tilde{\psi}(u)=\log\mathbb E\mathrm{e}^{u\tilde{\xi}(1)}. By (26), we have

\tilde{\psi}(u)=\log\int_{\mathbb{R}}\mathrm{e}^{ux}\,\mathrm{d}\tilde{F}_{1}(x)=\log\int_{\mathbb{R}}\mathrm{e}^{ux}\mathrm{e}^{\kappa x-\psi(\kappa)}\,\mathrm{d}F_{1}(x)=\psi(u+\kappa)-\psi(\kappa). (27)


In particular,

\mathbb E\tilde{\xi}(T)=\tilde{\psi}^{\prime}(0)T=\psi^{\prime}(\kappa)T,\qquad\operatorname{Var}\tilde{\xi}(T)=\tilde{\psi}^{\prime\prime}(0)T=\psi^{\prime\prime}(\kappa)T. (28)

The study of the truncated exponential moment

M_{N}:=\mathrm{e}^{-\psi(\kappa)x_{N}}\mathbb E\bigl[\mathrm{e}^{\kappa\xi(x_{N})}1_{\xi(x_{N})\leq b_{N}}\bigr] (29)

can be reduced to the study of the probability \mathbb{P}[\tilde{\xi}(x_{N})\leq b_{N}] as follows:

M_{N}=\int_{-\infty}^{b_{N}}\mathrm{e}^{\kappa x-\psi(\kappa)x_{N}}\,\mathrm{d}F_{x_{N}}(x)=\int_{-\infty}^{b_{N}}\,\mathrm{d}\tilde{F}_{x_{N}}(x)=\mathbb{P}[\tilde{\xi}(x_{N})\leq b_{N}]. (30)

Having the central limit theorem in mind, we write

\mathbb{P}[\tilde{\xi}(x_{N})\leq b_{N}]=\mathbb{P}\biggl[\frac{\tilde{\xi}(x_{N})-\psi^{\prime}(\kappa)x_{N}}{\sqrt{\psi^{\prime\prime}(\kappa)x_{N}}}\leq r_{N}\biggr],\qquad\mbox{where }r_{N}=\frac{b_{N}-\psi^{\prime}(\kappa)x_{N}}{\sqrt{\psi^{\prime\prime}(\kappa)x_{N}}}. (31)

Let us prove part 1 of the proposition. By the assumption of part 1, we have \lim_{N\to\infty}r_{N}=\vartheta. Then, it follows from (30) and the central limit theorem that

\lim_{N\to\infty}M_{N}=\lim_{N\to\infty}\mathbb{P}[\tilde{\xi}(x_{N})\leq b_{N}]=\Phi(\vartheta),

which proves (20).
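When \xi is a standard Brownian motion, the reduction (30) can be verified in closed form: \tilde{\xi} is a Brownian motion with drift \kappa, so M_{N}=\Phi((b_{N}-\kappa x_{N})/\sqrt{x_{N}}). The following check (ours, not from the paper) compares a direct numerical integration of the truncated exponential moment with this formula.

```python
import math

def truncated_exp_moment(kappa, x, b, n=100000):
    """Trapezoid-rule approximation of E[e^{kappa xi(x)} 1_{xi(x) <= b}]
    for xi(x) ~ N(0, x), i.e. xi a standard Brownian motion."""
    sd = math.sqrt(x)
    lo, hi = -10.0 * sd, min(b, kappa * x + 10.0 * sd)  # integrand peaks at kappa*x
    h = (hi - lo) / n
    def f(y):
        return math.exp(kappa * y - y * y / (2.0 * x)) / (sd * math.sqrt(2.0 * math.pi))
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

kappa, x, b = 2.0, 4.0, 9.0
lhs = truncated_exp_moment(kappa, x, b)
# By (26)-(30): e^{-psi(kappa)x} * lhs = P[xitilde(x) <= b], where xitilde is
# a BM with drift kappa and psi(kappa) = kappa^2/2; hence the closed form:
rhs = math.exp(kappa * kappa * x / 2.0) * 0.5 * (1.0 + math.erf((b - kappa * x) / math.sqrt(2.0 * x)))
```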

Let us prove part 2 of the proposition. If \liminf_{N\to\infty}b_{N}/x_{N}>\psi^{\prime}(\kappa), then \lim_{N\to\infty}r_{N}=+\infty, and the central limit theorem implies that

\lim_{N\to\infty}M_{N}=\lim_{N\to\infty}\mathbb{P}[\tilde{\xi}(x_{N})\leq b_{N}]=1,

which proves (21). To prove (22), we will apply Theorem 2.1 to the process \tilde{\xi}. The large deviation function of the process \tilde{\xi} is defined by \tilde{I}(\tilde{\psi}^{\prime}(u))=u\tilde{\psi}^{\prime}(u)-\tilde{\psi}(u). Hence, setting \beta=\tilde{\psi}^{\prime}(u) and taking into account (27), we obtain

\tilde{I}(\beta)=\tilde{I}(\tilde{\psi}^{\prime}(u))=u\tilde{\psi}^{\prime}(u)-\tilde{\psi}(u)=u\psi^{\prime}(u+\kappa)-\psi(u+\kappa)+\psi(\kappa).

Note that \beta=\psi^{\prime}(u+\kappa) by (27). It follows that we have the following formula for the function \tilde{I}:

\tilde{I}(\beta)=I(\beta)+\psi(\kappa)-\kappa\beta. (32)

If \lim_{N\to\infty}b_{N}/x_{N}=\psi^{\prime}(\alpha)=\tilde{\psi}^{\prime}(\alpha-\kappa), then we apply Theorem 2.1 to obtain that

\mathbb{P}[\tilde{\xi}(x_{N})>b_{N}]\sim\frac{1}{(\alpha-\kappa)\sqrt{2\uppi\psi^{\prime\prime}(\alpha)x_{N}}}\mathrm{e}^{-\tilde{I}(b_{N}/x_{N})x_{N}},\qquad N\to\infty.

A straightforward calculation using (32) leads to (22). The proof of part 3 of the proposition is analogous to the proof of part 2.
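Formula (32) can be checked concretely in the Brownian case, where the \kappa-tilted process is a Brownian motion with drift \kappa and \tilde{I}(\beta)=(\beta-\kappa)^{2}/2; the small script below (ours) verifies the identity on a grid of parameter values.

```python
import math

# Brownian case: psi(u) = u^2/2, I(beta) = beta^2/2; the kappa-tilted process
# is a BM with drift kappa, whose rate function is (beta - kappa)^2/2.
psi = lambda u: u * u / 2.0
I = lambda beta: beta * beta / 2.0
I_tilde = lambda beta, kappa: (beta - kappa) ** 2 / 2.0

# Check formula (32): I_tilde(beta) = I(beta) + psi(kappa) - kappa * beta
for beta in (0.5, 1.0, 3.0):
    for kappa in (0.5, 2.0):
        assert math.isclose(I_tilde(beta, kappa), I(beta) + psi(kappa) - kappa * beta)
```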

We will need the following lemmas; see [14], Lemma 3, and [13], Lemma 8.1, for their proofs.

Lemma 2.1

For every u>0, I^{\prime}(\psi^{\prime}(u))=u.

Lemma 2.2

Let \xi be a Lévy process satisfying (4). Let p\in[1,2] and fix some T>0. Then, there is C>0 such that for all t\in[0,T],

\mathbb E\bigl|\mathrm{e}^{\xi(t)}-1\bigr|^{p}\leq Ct^{p/2},\qquad\mathbb E\bigl|\mathrm{e}^{2\xi(t)}-\mathrm{e}^{\xi(t)}\bigr|^{p}\leq Ct^{p/2}. (33)
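The first bound in (33) is easy to inspect for a standard Brownian motion with p=2, where the left-hand side is available in closed form through \psi; the check below (ours, with a crude, non-optimal constant C) confirms it on [0,1].

```python
import math

def second_moment_diff(t):
    """E|e^{xi(t)} - 1|^2 for a standard BM, expanded via psi(1) = 1/2,
    psi(2) = 2: E e^{2 xi(t)} - 2 E e^{xi(t)} + 1 = e^{2t} - 2 e^{t/2} + 1."""
    return math.exp(2.0 * t) - 2.0 * math.exp(t / 2.0) + 1.0

# With p = 2, (33) reads E|e^{xi(t)} - 1|^2 <= C t on [0, T]; on [0, 1] the
# (non-optimal) constant C = e^2 works, since t -> lhs/t is increasing
# (the lhs is convex and vanishes at t = 0) with value ~5.09 < e^2 at t = 1.
T, C = 1.0, math.exp(2.0)
assert all(second_moment_diff(k / 100.0) <= C * (k / 100.0) for k in range(1, 101))
```

As t\downarrow 0 the left-hand side behaves like t, which matches the order t^{p/2}=t claimed by the lemma.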

3 Proof of Theorem 1.1

The proof is a standard application of the central limit theorem in the Skorokhod space. First let us compute the covariance function of the process \mathrm{e}^{\xi}. We have, for 0\leq t_{1}\leq t_{2},

\mathbb E\bigl[\mathrm{e}^{\xi(t_{1})}\mathrm{e}^{\xi(t_{2})}\bigr]=\mathbb E\mathrm{e}^{2\xi(t_{1})}\cdot\mathbb E\mathrm{e}^{\xi(t_{2})-\xi(t_{1})}=\mathrm{e}^{\psi(2)t_{1}+\psi(1)(t_{2}-t_{1})}.

Since \mathbb E\mathrm{e}^{\xi(t)}=\mathrm{e}^{\psi(1)t}, we have

\operatorname{Cov}\bigl(\mathrm{e}^{\xi(t_{1})},\mathrm{e}^{\xi(t_{2})}\bigr)=\mathrm{e}^{\psi(2)t_{1}+\psi(1)(t_{2}-t_{1})}-\mathrm{e}^{\psi(1)(t_{1}+t_{2})}.

An application of the multidimensional central limit theorem proves that (8) holds in the sense of the weak convergence of finite-dimensional distributions. To prove the weak convergence in the space D[0,T], we will verify the conditions of [10], Theorem 2. For every 0\leq t_{1}\leq t_{2}\leq T, we have

\mathbb E\bigl(\mathrm{e}^{\xi(t_{2})}-\mathrm{e}^{\xi(t_{1})}\bigr)^{2}=\mathbb E\mathrm{e}^{2\xi(t_{1})}\cdot\mathbb E\bigl(\mathrm{e}^{\xi(t_{2})-\xi(t_{1})}-1\bigr)^{2}<C(t_{2}-t_{1}),

where the last inequality follows from Lemma 2.2. This verifies the first condition of [10], Theorem 2. The second condition can be proved in a similar way: for every 0\leq t_{1}\leq t_{2}\leq t_{3}\leq T, we have

\mathbb E\bigl[\bigl(\mathrm{e}^{\xi(t_{2})}-\mathrm{e}^{\xi(t_{1})}\bigr)^{2}\bigl(\mathrm{e}^{\xi(t_{3})}-\mathrm{e}^{\xi(t_{2})}\bigr)^{2}\bigr]
=\mathbb E\mathrm{e}^{2\xi(t_{1})}\cdot\mathbb E\bigl(\mathrm{e}^{2(\xi(t_{2})-\xi(t_{1}))}-\mathrm{e}^{\xi(t_{2})-\xi(t_{1})}\bigr)^{2}\cdot\mathbb E\bigl(\mathrm{e}^{\xi(t_{3})-\xi(t_{2})}-1\bigr)^{2}
=\mathbb E\mathrm{e}^{2\xi(t_{1})}\cdot\mathbb E\bigl(\mathrm{e}^{2\xi(t_{2}-t_{1})}-\mathrm{e}^{\xi(t_{2}-t_{1})}\bigr)^{2}\cdot\mathbb E\bigl(\mathrm{e}^{\xi(t_{3}-t_{2})}-1\bigr)^{2}
\leq C(t_{3}-t_{1})^{2},

where the last inequality follows from Lemma 2.2. This completes the proof.

4 Proof of Theorem 1.2

4.1 Weak convergence of finite-dimensional distributions

The first step in establishing Theorem 1.2 is to prove the weak convergence of finite-dimensional distributions in (11). It will be convenient to define a positive-valued stochastic process W_{N} by

W_{N}(t)=N^{-1/2}\mathrm{e}^{\xi(s_{N}+t)-(\psi(2)/2)(s_{N}+t)}. (34)

Let t_{1}\leq\cdots\leq t_{d} be fixed, and define a d-dimensional random vector \mathbf{W}_{N}=(W_{N}(t_{1}),\ldots,W_{N}(t_{d})). If \mathbf{W}_{1,N},\ldots,\mathbf{W}_{N,N} are independent copies of \mathbf{W}_{N}, then our aim is to prove that

\sum_{i=1}^{N}(\mathbf{W}_{i,N}-\mathbb E\mathbf{W}_{i,N})\stackrel{w}{\rightarrow}(\mathbb{X}(t_{k}))_{k=1}^{d},\qquad N\to\infty. (35)

To see that this implies the weak convergence of finite-dimensional distributions in Theorem 1.2, it suffices to show that \operatorname{Var}Z_{N}(t)\sim N\mathrm{e}^{\psi(2)(s_{N}+t)} as N\to\infty. This can be done as follows:

\operatorname{Var}Z_{N}(t)=N\bigl(\mathbb E\mathrm{e}^{2\xi(s_{N}+t)}-\bigl(\mathbb E\mathrm{e}^{\xi(s_{N}+t)}\bigr)^{2}\bigr)=N\bigl(\mathrm{e}^{\psi(2)(s_{N}+t)}-\mathrm{e}^{2\psi(1)(s_{N}+t)}\bigr)\sim N\mathrm{e}^{\psi(2)(s_{N}+t)},\qquad N\to\infty, (36)

where we have used that \lim_{N\to\infty}s_{N}=\infty by (10) and that \psi(2)>2\psi(1) by the strict convexity of \psi.
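In the Brownian case the relative error in (36) is explicit, since \psi(1)=1/2 and \psi(2)=2; the following check (ours, not from the paper) confirms that the correction term vanishes exponentially fast.

```python
import math

# By (36), Var Z_N(t) / (N e^{psi(2)(s_N+t)}) = 1 - e^{(2 psi(1) - psi(2))(s_N+t)};
# for a standard BM the exponent is 2*(1/2) - 2 = -1, so the ratio is
# 1 - e^{-(s_N + t)}.
def var_ratio(s, t=0.0, psi1=0.5, psi2=2.0):
    return 1.0 - math.exp((2.0 * psi1 - psi2) * (s + t))

errors = [abs(var_ratio(s) - 1.0) for s in (10.0, 20.0, 40.0)]
# errors = [e^{-10}, e^{-20}, e^{-40}]: exponentially small in s_N
```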

We start proving (35). First of all, let us compute the covariance matrix of the random vector \mathbf{W}_{N}. Using (34) and (4), as well as the fact that \xi is a Lévy process, we obtain that for every 1\leq k\leq l\leq d,

\mathbb E[W_{N}(t_{k})W_{N}(t_{l})]=N^{-1}\mathrm{e}^{-\psi(2)s_{N}}\mathrm{e}^{-(\psi(2)/2)(t_{k}+t_{l})}\mathbb E\mathrm{e}^{\xi(s_{N}+t_{k})+\xi(s_{N}+t_{l})}
=N^{-1}\mathrm{e}^{-\psi(2)s_{N}}\mathrm{e}^{-(\psi(2)/2)(t_{k}+t_{l})}\mathbb E\mathrm{e}^{2\xi(s_{N}+t_{k})}\cdot\mathbb E\mathrm{e}^{\xi(s_{N}+t_{l})-\xi(s_{N}+t_{k})}
=N^{-1}\mathrm{e}^{-\psi(2)s_{N}}\mathrm{e}^{-(\psi(2)/2)(t_{k}+t_{l})}\mathrm{e}^{\psi(2)(s_{N}+t_{k})}\mathrm{e}^{\psi(1)(t_{l}-t_{k})}
=N^{-1}\mathrm{e}^{(\psi(1)-\psi(2)/2)(t_{l}-t_{k})}. (37)

Since \psi(2)>2\psi(1) by the strict convexity of \psi, and \lim_{N\to\infty}s_{N}=\infty by (10), we have for every k=1,\ldots,d,

\sqrt{N}\mathbb EW_{N}(t_{k})=\mathrm{e}^{\psi(1)(s_{N}+t_{k})}\mathrm{e}^{-(\psi(2)/2)(s_{N}+t_{k})}\to 0,\qquad N\to\infty. (38)

It follows from (37) and (38) that

\lim_{N\to\infty}N\operatorname{Cov}(W_{N}(t_{k}),W_{N}(t_{l}))=\mathrm{e}^{(\psi(1)-\psi(2)/2)(t_{l}-t_{k})}=\operatorname{Cov}(\mathbb{X}(t_{k}),\mathbb{X}(t_{l})). (39)

In order to establish (35), we will verify the Lindeberg condition, that is, we will show that for every \varepsilon>0,

\lim_{N\to\infty}N\mathbb E\bigl[\|\mathbf{W}_{N}-\mathbb E\mathbf{W}_{N}\|^{2}1_{\|\mathbf{W}_{N}-\mathbb E\mathbf{W}_{N}\|>\varepsilon}\bigr]=0, (40)

where \|\cdot\| is the Euclidean norm on \mathbb{R}^{d}. The multivariate form of the Lindeberg condition we are using can be found, for example, in [1], Example 4 on page 41. Since \lim_{N\to\infty}\sqrt{N}\mathbb E\mathbf{W}_{N}=0 by (38), we have \|\mathbb E\mathbf{W}_{N}\|<\varepsilon/2 for N large enough. Thus, for N large enough,

\mathbb E\bigl[\|\mathbf{W}_{N}-\mathbb E\mathbf{W}_{N}\|^{2}1_{\|\mathbf{W}_{N}-\mathbb E\mathbf{W}_{N}\|>\varepsilon}\bigr]\leq\mathbb E\bigl[\|\mathbf{W}_{N}-\mathbb E\mathbf{W}_{N}\|^{2}1_{\|\mathbf{W}_{N}\|>\varepsilon/2}\bigr]. (41)

Applying the inequality \|w_{1}+w_{2}\|^{2}\leq 2\|w_{1}\|^{2}+2\|w_{2}\|^{2} to the right-hand side of (41), we get

N\mathbb E\bigl[\|\mathbf{W}_{N}-\mathbb E\mathbf{W}_{N}\|^{2}1_{\|\mathbf{W}_{N}-\mathbb E\mathbf{W}_{N}\|>\varepsilon}\bigr]\leq 2N\mathbb E\bigl[\|\mathbf{W}_{N}\|^{2}1_{\|\mathbf{W}_{N}\|>\varepsilon/2}\bigr]+2N\|\mathbb E\mathbf{W}_{N}\|^{2}.

Note that the second term on the right-hand side converges to 0 by (38). Hence, in order to prove (40), it suffices to show that for every \varepsilon>0,

\lim_{N\to\infty}N\mathbb E\bigl{[}\|\mathbf{W}_{N}\|^{2}1_{\|\mathbf{W}_{N}\|% >\varepsilon}\bigr{]}=0. (42)

Let \mathcal{A}_{N,k}, k=1,\ldots,d, be the random event \{W_{N}(t_{k})\geq W_{N}(t_{l}),l=1,\ldots,d\}. On \mathcal{A}_{N,k}, we have \|\mathbf{W}_{N}\|^{2}\leq dW_{N}^{2}(t_{k}). Hence,

\mathbb E\bigl[\|\mathbf{W}_{N}\|^{2}1_{\|\mathbf{W}_{N}\|>\varepsilon}\bigr]\leq\sum_{k=1}^{d}\mathbb E\bigl[\|\mathbf{W}_{N}\|^{2}1_{\|\mathbf{W}_{N}\|>\varepsilon}1_{\mathcal{A}_{N,k}}\bigr]\leq d\sum_{k=1}^{d}\mathbb E\bigl[W_{N}^{2}(t_{k})1_{W_{N}(t_{k})>\varepsilon/\sqrt{d}}\bigr].

Thus, in order to prove (40), it suffices to show that for every t\in\mathbb{R} and every \varepsilon>0,

\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}^{2}(t)1_{W_{N}(t)>\varepsilon}\bigr{]% }=0. (43)

Recalling (34) and setting x_{N}=s_{N}+t and b_{N}=\frac{1}{2}(\log N+\psi(2)x_{N})+\log\varepsilon, we may write

N\mathbb E\bigl[W_{N}^{2}(t)1_{W_{N}(t)>\varepsilon}\bigr]=\mathrm{e}^{-\psi(2)x_{N}}\mathbb E\bigl[\mathrm{e}^{2\xi(x_{N})}1_{\xi(x_{N})>b_{N}}\bigr]. (44)

Note that by the slow growth condition (10),

\liminf_{N\to\infty}\frac{b_{N}}{x_{N}}>\frac{1}{2}\bigl(\lambda_{2}+\psi(2)\bigr)=\psi^{\prime}(2).

Applying part 2 of Proposition 2.1 with \kappa=2 to the right-hand side of (44) we obtain (43). This verifies the Lindeberg condition (40) and, together with (39), completes the proof of the weak convergence of finite-dimensional distributions in Theorem 1.2.

4.2 Tightness

In the rest of the section we complete the proof of Theorem 1.2 by showing that the sequence

\biggl\{\frac{Z_{N}(t)-\mathbb EZ_{N}(t)}{\sqrt{\operatorname{Var}Z_{N}(t)}},t\in[-T,T]\biggr\}_{N\in\mathbb{N}} (45)

is a tight sequence of stochastic processes in the Skorokhod space D[-T,T], where T>0 is fixed. Since the sequence (45) does not change if we replace the Lévy process \xi by the Lévy process \tilde{\xi}(t):=\xi(t)-\psi(1)t, we may and will assume that

\mathbb E\mathrm{e}^{\xi(t)}=1,\qquad t\geq 0. (46)

Further, since by (36), \operatorname{Var}Z_{N}(t)\sim N\mathrm{e}^{\psi(2)(s_{N}+t)} as N\to\infty, showing the tightness of (45) is equivalent to showing the tightness of the sequence \{Z_{N}^{\prime}(t),t\in[-T,T]\}_{N\in\mathbb{N}}, where Z_{N}^{\prime} is a process defined by

Z_{N}^{\prime}(t)=\frac{Z_{N}(t)-N}{N^{1/2}\mathrm{e}^{\psi(2)s_{N}/2}}. (47)

By a standard tightness criterion in the Skorokhod space given in [4], page 128, it suffices to show that there are p>1 and C>0 such that for all sufficiently large N\in\mathbb{N} and all t_{1},t_{2},t_{3}\in[-T,T] with t_{1}<t_{2}<t_{3},

\mathbb E[|Z_{N}^{\prime}(t_{2})-Z_{N}^{\prime}(t_{1})|^{p}|Z_{N}^{\prime}(t_{3})-Z_{N}^{\prime}(t_{2})|^{p}]\leq C|t_{3}-t_{1}|^{p}. (48)

It will be convenient to define random variables X_{1},\ldots,X_{N} and Y_{1},\ldots,Y_{N} (which depend on N,t_{1},t_{2},t_{3}) by

X_{i}=\mathrm{e}^{\xi_{i}(s_{N}+t_{2})}-\mathrm{e}^{\xi_{i}(s_{N}+t_{1})},\qquad Y_{i}=\mathrm{e}^{\xi_{i}(s_{N}+t_{3})}-\mathrm{e}^{\xi_{i}(s_{N}+t_{2})}.

Then, we may rewrite (48) as follows:

\mathbb E\Biggl|\sum_{i=1}^{N}\sum_{j=1}^{N}X_{i}Y_{j}\Biggr|^{p}\leq CN^{p}\mathrm{e}^{p\psi(2)s_{N}}|t_{3}-t_{1}|^{p}. (49)

First of all, we would like to treat the terms of the form X_{i}Y_{i} on the left-hand side of (49) separately. Applying Jensen’s inequality |\sum_{i=1}^{k}x_{i}|^{p}\leq k^{p-1}\sum_{i=1}^{k}|x_{i}|^{p}, x_{i}\in\mathbb{R}, we obtain

\mathbb E\Biggl|\sum_{i=1}^{N}\sum_{j=1}^{N}X_{i}Y_{j}\Biggr|^{p}=\mathbb E\biggl|\sum_{1\leq i<j\leq N}X_{i}Y_{j}+\sum_{1\leq j<i\leq N}X_{i}Y_{j}+\sum_{i=1}^{N}X_{i}Y_{i}\biggr|^{p}\leq 2\cdot 3^{p-1}\mathbb E\biggl|\sum_{1\leq i<j\leq N}X_{i}Y_{j}\biggr|^{p}+3^{p-1}\mathbb E\Biggl|\sum_{i=1}^{N}X_{i}Y_{i}\Biggr|^{p}. (50)

In the rest of the proof we estimate the terms on the right-hand side. We start by showing that

\mathbb E\Biggl{|}\sum_{i=1}^{N}X_{i}Y_{i}\Biggr{|}^{p}\leq CN^{p}\mathrm{e}^{p\psi(2)s_{N}}|t_{3}-t_{1}|^{p}. (51)

By an inequality of Rosenthal [20], Lemma 1 (or see [11]),

\mathbb E\Biggl{|}\sum_{i=1}^{N}X_{i}Y_{i}\Biggr{|}^{p}\leq C\max\Biggl{\{}\sum_{i=1}^{N}\mathbb E|X_{i}Y_{i}|^{p},\Biggl{(}\sum_{i=1}^{N}\mathbb E|X_{i}Y_{i}|\Biggr{)}^{p}\Biggr{\}}. (52)

Thus, to establish (51), it suffices to show that

\displaystyle\mathbb E|X_{i}Y_{i}|^{p} \displaystyle\leq \displaystyle CN^{p-1}\mathrm{e}^{p\psi(2)s_{N}}|t_{3}-t_{1}|^{p}, (53)
\displaystyle\mathbb E|X_{i}Y_{i}| \displaystyle\leq \displaystyle C\mathrm{e}^{\psi(2)s_{N}}|t_{3}-t_{1}|. (54)

Since \xi is a process with stationary and independent increments, we have

\displaystyle\mathbb E|X_{i}Y_{i}|^{p} \displaystyle= \displaystyle\mathbb E\bigl{|}\bigl{(}\mathrm{e}^{\xi(s_{N}+t_{2})}-\mathrm{e}^{\xi(s_{N}+t_{1})}\bigr{)}\bigl{(}\mathrm{e}^{\xi(s_{N}+t_{3})}-\mathrm{e}^{\xi(s_{N}+t_{2})}\bigr{)}\bigr{|}^{p} (55)
\displaystyle= \displaystyle\mathbb E\bigl{[}\mathrm{e}^{2p\xi(s_{N}+t_{1})}\bigr{]}\cdot\mathbb E\bigl{|}\mathrm{e}^{\xi(t_{3}-t_{2})}-1\bigr{|}^{p}\cdot\mathbb E\bigl{|}\mathrm{e}^{2\xi(t_{2}-t_{1})}-\mathrm{e}^{\xi(t_{2}-t_{1})}\bigr{|}^{p}.

The first factor on the right-hand side of (55) equals \mathrm{e}^{\psi(2p)(s_{N}+t_{1})}. Applying Lemma 2.2 to the last two factors on the right-hand side of (55), we get

\mathbb E|X_{i}Y_{i}|^{p}\leq C\mathrm{e}^{\psi(2p)s_{N}}|t_{3}-t_{1}|^{p}.

To complete the proof of (53), we need to show that for some p>1,

\mathrm{e}^{(\psi(2p)-p\psi(2))s_{N}}\leq N^{p-1}. (56)

This is done as follows. Write for a moment p=1+\delta, where \delta>0. By Assumption (10), there is \varepsilon>0 such that for sufficiently large N we have N^{p-1}>\mathrm{e}^{(\lambda_{2}+\varepsilon)\delta s_{N}}. On the other hand, by Taylor’s expansion,

\psi(2p)-p\psi(2)=\delta\bigl{(}2\psi^{\prime}(2)-\psi(2)\bigr{)}+\mathrm{o}(\delta)=\lambda_{2}\delta+\mathrm{o}(\delta),\qquad\delta\to 0,

which is smaller than (\lambda_{2}+\varepsilon)\delta if \delta is sufficiently small. Taking \delta small enough, we obtain (56). This completes the proof of (53).
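As a concrete sanity check (an illustration, not part of the proof): for the illustrative choice \xi(t)=B(t)-t/2, a standard Brownian motion with drift -1/2 so that \mathbb E\mathrm{e}^{\xi(t)}=1, one has \psi(\lambda)=(\lambda^{2}-\lambda)/2, \lambda_{2}=2\psi^{\prime}(2)-\psi(2)=2, and the remainder in the expansion above is exactly 2\delta^{2}. A minimal numeric sketch:

```python
# Numeric check of the expansion psi(2p) - p*psi(2) = lambda_2*delta + o(delta)
# for the illustrative choice xi(t) = B(t) - t/2, whose cumulant is
# psi(lam) = (lam**2 - lam)/2 (so that E e^{xi(t)} = 1).

def psi(lam):
    return (lam ** 2 - lam) / 2.0

# lambda_2 = 2*psi'(2) - psi(2); here psi'(lam) = lam - 1/2, so psi'(2) = 3/2
lambda_2 = 2 * 1.5 - psi(2)

for delta in [0.1, 0.01, 0.001]:
    p = 1 + delta
    gap = psi(2 * p) - p * psi(2)
    # the remainder gap - lambda_2*delta is o(delta); here it equals 2*delta**2
    print(delta, gap, gap - lambda_2 * delta)
```

The remainder shrinks quadratically in \delta, so for \delta small enough the gap is indeed below (\lambda_{2}+\varepsilon)\delta.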

Let us prove (54). Arguing as in (55), we obtain

\mathbb E|X_{i}Y_{i}|=\mathbb E\bigl{[}\mathrm{e}^{2\xi(s_{N}+t_{1})}\bigr{]}\cdot\mathbb E\bigl{|}\mathrm{e}^{\xi(t_{3}-t_{2})}-1\bigr{|}\cdot\mathbb E\bigl{|}\mathrm{e}^{2\xi(t_{2}-t_{1})}-\mathrm{e}^{\xi(t_{2}-t_{1})}\bigr{|}. (57)

The first factor on the right-hand side of (57) equals \mathrm{e}^{\psi(2)(s_{N}+t_{1})}. An application of Lemma 2.2 to the last two factors on the right-hand side of (57) yields (54).

We now estimate the first term on the right-hand side of (50) by showing that

\mathbb E\biggl{|}\sum_{1\leq i<j\leq N}X_{i}Y_{j}\biggr{|}^{p}\leq CN^{p}\mathrm{e}^{p\psi(2)s_{N}}|t_{3}-t_{1}|^{p}. (58)

For k=1,\ldots,N, denote by \mathcal{F}_{k} the \sigma-algebra generated by the random variables X_{1},\ldots,X_{k} and Y_{1},\ldots,Y_{k}. Let S_{1}=0 and

S_{k}=\sum_{1\leq i<j\leq k}X_{i}Y_{j},\qquad k=2,\ldots,N. (59)

We introduce also the sequence of differences \Delta_{1}=0 and

\Delta_{k}=S_{k}-S_{k-1}=Y_{k}(X_{1}+\cdots+X_{k-1}),\qquad k=2,\ldots,N. (60)

We claim that the sequence \{S_{k}\}_{k=1}^{N} is a martingale with respect to the filtration \{\mathcal{F}_{k}\}_{k=1}^{N}. Indeed, the random variable S_{k} is by definition \mathcal{F}_{k}-measurable, and we have

\mathbb E[S_{k}|\mathcal{F}_{k-1}]=S_{k-1}+\mathbb E[\Delta_{k}|\mathcal{F}_{k% -1}]=S_{k-1}+(X_{1}+\cdots+X_{k-1})\mathbb EY_{k}=S_{k-1},

where the last equality follows from (46). Having shown that \{S_{k}\}_{k=1}^{N} is a martingale, we apply Burkholder’s inequality to obtain that for some constant C=C(p),

\mathbb E|S_{N}|^{p}\leq C\mathbb E\Biggl{(}\sum_{i=1}^{N}\Delta_{i}^{2}\Biggr{)}^{p/2}. (61)

The function x\mapsto x^{p/2}, x>0, is concave since we have chosen p\in(1,2). By Jensen’s inequality applied to the right-hand side of (61),

\mathbb E|S_{N}|^{p}\leq C\Biggl{(}\sum_{i=1}^{N}\mathbb E\Delta_{i}^{2}\Biggr{)}^{p/2}. (62)

The random variables Y_{k} and X_{1}+\cdots+X_{k-1} are independent, and \mathbb EX_{k}=0, k=1,\ldots,N, by (46). Hence, by (60), \mathbb E\Delta_{k}^{2}=(k-1)\mathbb EY_{1}^{2}\mathbb EX_{1}^{2}. It follows from (62) that

\mathbb E|S_{N}|^{p}\leq C(N^{2}\mathbb EY_{1}^{2}\mathbb EX_{1}^{2})^{p/2}. (63)

We have, by Lemma 2.2,

\mathbb EX_{1}^{2}=\mathbb E\bigl{[}\mathrm{e}^{2\xi(s_{N}+t_{1})}\bigr{]}\cdot\mathbb E\bigl{(}\mathrm{e}^{\xi(t_{2}-t_{1})}-1\bigr{)}^{2}\leq C\mathrm{e}^{\psi(2)s_{N}}(t_{2}-t_{1}).

Similarly, \mathbb EY_{1}^{2}\leq C\mathrm{e}^{\psi(2)s_{N}}(t_{3}-t_{2}). Inserting this into (63), we obtain

\mathbb E|S_{N}|^{p}\leq CN^{p}\mathrm{e}^{p\psi(2)s_{N}}|t_{3}-t_{1}|^{p}.

This proves (58) and completes the proof of tightness in Theorem 1.2.
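The key identity \mathbb E\Delta_{k}^{2}=(k-1)\mathbb EY_{1}^{2}\mathbb EX_{1}^{2} used in this proof can be checked by simulation. A minimal sketch, assuming the illustrative choice \xi(t)=B(t)-t/2 (so that condition (46) holds) and arbitrary illustrative values of s_{N}, t_{1}, t_{2}, t_{3}:

```python
import numpy as np

# Monte Carlo check of E[Delta_k^2] = (k-1) * E[Y_1^2] * E[X_1^2], where
# Delta_k = Y_k (X_1 + ... + X_{k-1}) and all copies are independent over i.
# Illustrative Levy process: xi(t) = B(t) - t/2, so E e^{xi(t)} = 1 (eq. (46)).
rng = np.random.default_rng(0)
s, t1, t2, t3 = 0.2, 0.05, 0.1, 0.2   # illustrative values, t1 < t2 < t3
n, k = 200_000, 6

def sample_XY(size):
    """One independent copy of (X_i, Y_i), built from independent increments."""
    xi1 = np.sqrt(s + t1) * rng.standard_normal(size) - (s + t1) / 2   # xi(s+t1)
    d12 = np.sqrt(t2 - t1) * rng.standard_normal(size) - (t2 - t1) / 2
    d23 = np.sqrt(t3 - t2) * rng.standard_normal(size) - (t3 - t2) / 2
    X = np.exp(xi1 + d12) - np.exp(xi1)
    Y = np.exp(xi1 + d12 + d23) - np.exp(xi1 + d12)
    return X, Y

X_sum = sum(sample_XY(n)[0] for _ in range(k - 1))   # X_1 + ... + X_{k-1}
Y_k = sample_XY(n)[1]                                # independent Y_k
lhs = np.mean((Y_k * X_sum) ** 2)

X, Y = sample_XY(n)
rhs = (k - 1) * np.mean(Y ** 2) * np.mean(X ** 2)
print(lhs, rhs)   # the two should agree up to Monte Carlo error
```

The agreement reflects exactly the two facts used above: \mathbb EX_{i}=0 by (46), and the independence of Y_{k} from X_{1},\ldots,X_{k-1}.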

5 Proof of Theorem 1.3

Let W_{N} be a positive-valued stochastic process defined as in (34), that is,

W_{N}(t)=N^{-1/2}\mathrm{e}^{\xi(s_{N}+t)-{(\psi(2)/2)}(s_{N}+t)}. (64)

Fix t_{1}\leq\cdots\leq t_{d} and let \mathbf{W}_{1,N},\ldots,\mathbf{W}_{N,N} be independent copies of the d-dimensional random vector \mathbf{W}_{N}=(W_{N}(t_{1}),\ldots,W_{N}(t_{d})). Our aim is to show that we have the following weak convergence of random vectors:

\sum_{i=1}^{N}(\mathbf{W}_{i,N}-\mathbb E\mathbf{W}_{i,N})\stackrel{w}{\rightarrow}\bigl{(}\sqrt{\Phi(\vartheta)}\mathbb{X}(t_{k})\bigr{)}_{k=1}^{d},\qquad N\to\infty. (65)

In the one-dimensional case, the papers [3, 8, 14] use the classical summation theory of triangular arrays of random variables. We will use a multidimensional version of this theory established in [21]; see [17] for a monograph treatment. According to [17], Theorem 3.2.2 on page 53, we have to verify that the following three conditions hold:

  1. (1)

    For every \varepsilon>0,

    \lim_{N\to\infty}N\mathbb{P}[\|\mathbf{W}_{N}\|_{\infty}>\varepsilon]=0. (66)
  2. (2)

    For every \varepsilon>0 and for every \mathbf{v}=(v_{1},\ldots,v_{d})\in\mathbb{R}^{d},

    \lim_{N\to\infty}N\operatorname{Var}\bigl{[}\langle\mathbf{W}_{N},\mathbf{v}\rangle 1_{\|\mathbf{W}_{N}\|_{\infty}\leq\varepsilon}\bigr{]}=\Phi(\vartheta)\sum_{k,l=1}^{d}\mathrm{e}^{(\psi(1)-{\psi(2)/2})|t_{l}-t_{k}|}v_{k}v_{l}. (67)
  3. (3)

    For every \varepsilon>0,

    \lim_{N\to\infty}N\mathbb E\bigl{[}\mathbf{W}_{N}1_{\|\mathbf{W}_{N}\|_{\infty}>\varepsilon}\bigr{]}=0. (68)

Here, \Phi is the standard normal distribution function and \|\cdot\|_{\infty} denotes the maximum norm on \mathbb{R}^{d}.

5.1 Proof of (66) and (68)

Let us first show that for every t\in\mathbb{R} and every \varepsilon>0, we have

\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(t)>\varepsilon}\bigr{]}=0. (69)

With x_{N}=s_{N}+t and b_{N}=\frac{1}{2}(\log N+\psi(2)x_{N})+\log\varepsilon, we may write

N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(t)>\varepsilon}\bigr{]}=N^{1/2}\mathrm{e}^{-{(\psi(2)/2)}x_{N}}\mathbb E\bigl{[}\mathrm{e}^{\xi(x_{N})}1_{\xi(x_{N})>b_{N}}\bigr{]}. (70)

Noting that by the critical growth condition (13), \lim_{N\to\infty}b_{N}/x_{N}=\psi^{\prime}(2) and applying part 2 of Proposition 2.1 with \kappa=1 to the right-hand side of (70), we obtain

\displaystyle N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(t)>\varepsilon}\bigr{]} \displaystyle\leq \displaystyle CN^{1/2}\mathrm{e}^{-{(\psi(2)/2)}x_{N}}\mathrm{e}^{b_{N}}x_{N}^{-1/2}\mathrm{e}^{-I(b_{N}/x_{N})x_{N}} (71)
\displaystyle\leq \displaystyle CNx_{N}^{-1/2}\mathrm{e}^{-I(b_{N}/x_{N})x_{N}}.

Using the convexity of the function I, as well as the fact that I(\psi^{\prime}(2))=\lambda_{2} (see (7)) and I^{\prime}(\psi^{\prime}(2))=2 (see Lemma 2.1), we obtain

\displaystyle I\biggl{(}\frac{b_{N}}{x_{N}}\biggr{)} \displaystyle= \displaystyle I\biggl{(}\psi^{\prime}(2)+\frac{1}{2}\biggl{(}\frac{\log N+2\log\varepsilon}{x_{N}}-\lambda_{2}\biggr{)}\biggr{)} (72)
\displaystyle\geq \displaystyle I(\psi^{\prime}(2))+I^{\prime}(\psi^{\prime}(2))\cdot\frac{1}{2}\biggl{(}\frac{\log N+2\log\varepsilon}{x_{N}}-\lambda_{2}\biggr{)}
\displaystyle= \displaystyle\frac{\log N+2\log\varepsilon}{x_{N}}.

It follows from (71) and (72) that

N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(t)>\varepsilon}\bigr{]}\leq CNx_{N}^{-1/2}\mathrm{e}^{-\log N-2\log\varepsilon}\to 0,\qquad N\to\infty.

This proves (69). To prove (66), note that

N\mathbb{P}[\|\mathbf{W}_{N}\|_{\infty}>\varepsilon]\leq N\sum_{k=1}^{d}\mathbb{P}[W_{N}(t_{k})>\varepsilon]\leq\varepsilon^{-1}N\sum_{k=1}^{d}\mathbb E\bigl{[}W_{N}(t_{k})1_{W_{N}(t_{k})>\varepsilon}\bigr{]}.

By (69), the right-hand side converges to 0 as N\to\infty. This proves (66).

We proceed to the proof of (68). Let \mathcal{A}_{N,m}, m=1,\ldots,d, be the random event \{W_{N}(t_{m})\geq W_{N}(t_{l}),l=1,\ldots,d\}. Then, for every k=1,\ldots,d, we have

\displaystyle\mathbb E\bigl{[}W_{N}(t_{k})1_{\|\mathbf{W}_{N}\|_{\infty}>\varepsilon}\bigr{]} \displaystyle\leq \displaystyle\sum_{m=1}^{d}\mathbb E\bigl{[}W_{N}(t_{k})1_{\|\mathbf{W}_{N}\|_{\infty}>\varepsilon}1_{\mathcal{A}_{N,m}}\bigr{]}
\displaystyle\leq \displaystyle\sum_{m=1}^{d}\mathbb E\bigl{[}W_{N}(t_{m})1_{W_{N}(t_{m})>\varepsilon}\bigr{]}.

An application of (69) to the right-hand side yields (68).

5.2 Proof of (67)

It suffices to show that for every 1\leq k\leq l\leq d and every \varepsilon>0,

\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}(t_{k})W_{N}(t_{l})1_{\|\mathbf{W}_{N}\|_{\infty}\leq\varepsilon}\bigr{]}=\Phi(\vartheta)\mathrm{e}^{(\psi(1)-{\psi(2)/2})(t_{l}-t_{k})}. (73)

Let us start by computing a closely related limit. We will show that

\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}(t_{k})W_{N}(t_{l})1_{W_{N}(t_{1})\leq\varepsilon}\bigr{]}=\Phi(\vartheta)\mathrm{e}^{(\psi(1)-{\psi(2)/2})(t_{l}-t_{k})}. (74)

It follows from (64) that

\mathbb E\bigl{[}W_{N}(t_{k})W_{N}(t_{l})1_{W_{N}(t_{1})\leq\varepsilon}\bigr{]}=\frac{\mathbb E[\mathrm{e}^{\xi(s_{N}+t_{k})+\xi(s_{N}+t_{l})}1_{W_{N}(t_{1})\leq\varepsilon}]}{N\mathrm{e}^{\psi(2)s_{N}}\mathrm{e}^{{(\psi(2)/2)}(t_{k}+t_{l})}}. (75)

Using the fact that \xi is a Lévy process, we obtain

\displaystyle\mathbb E\bigl{[}\mathrm{e}^{\xi(s_{N}+t_{k})+\xi(s_{N}+t_{l})}1_{W_{N}(t_{1})\leq\varepsilon}\bigr{]}
\displaystyle\quad=\mathbb E\bigl{[}\mathrm{e}^{2\xi(s_{N}+t_{1})}1_{W_{N}(t_{1})\leq\varepsilon}\bigr{]}\cdot\mathbb E\mathrm{e}^{\xi(s_{N}+t_{k})+\xi(s_{N}+t_{l})-2\xi(s_{N}+t_{1})}
\displaystyle\quad=\mathbb E\bigl{[}\mathrm{e}^{2\xi(s_{N}+t_{1})}1_{W_{N}(t_{1})\leq\varepsilon}\bigr{]}\cdot\mathbb E\mathrm{e}^{\xi(t_{k}-t_{1})+\xi(t_{l}-t_{1})}
\displaystyle\quad=\mathbb E\bigl{[}\mathrm{e}^{2\xi(x_{N})}1_{\xi(x_{N})\leq b_{N}}\bigr{]}\cdot\mathbb E\mathrm{e}^{\xi(t_{k}-t_{1})+\xi(t_{l}-t_{1})}, (76)

where we have used the notation

x_{N}=s_{N}+t_{1},\qquad b_{N}=\tfrac{1}{2}\bigl{(}\log N+\psi(2)x_{N}\bigr{)}+\log\varepsilon. (77)

The critical growth condition (13) implies that

b_{N}=\psi^{\prime}(2)x_{N}+\vartheta\sqrt{\psi^{\prime\prime}(2)x_{N}}+\mathrm{o}\bigl{(}\sqrt{x_{N}}\bigr{)},\qquad N\to\infty. (78)

Applying part 1 of Proposition 2.1 with \kappa=2, we obtain

\mathbb E\bigl{[}\mathrm{e}^{2\xi(x_{N})}1_{\xi(x_{N})\leq b_{N}}\bigr{]}\sim\Phi(\vartheta)\mathrm{e}^{\psi(2)(s_{N}+t_{1})},\qquad N\to\infty. (79)

Recalling that \xi is a Lévy process and taking into account that t_{k}\leq t_{l}, we obtain

\mathbb E\mathrm{e}^{\xi(t_{k}-t_{1})+\xi(t_{l}-t_{1})}=\mathrm{e}^{\psi(2)(t_{k}-t_{1})}\mathrm{e}^{\psi(1)(t_{l}-t_{k})}. (80)
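Identity (80) rests on splitting \xi(t_{k}-t_{1})+\xi(t_{l}-t_{1}) into 2\xi(t_{k}-t_{1}) plus an independent increment. It can be checked by simulation; a minimal sketch, assuming the illustrative Lévy process \xi(t)=B(t)-t/2 with cumulant \psi(\lambda)=(\lambda^{2}-\lambda)/2 (so \psi(1)=0, \psi(2)=1):

```python
import numpy as np

# Monte Carlo check of (80): E e^{xi(a)+xi(b)} = e^{psi(2) a} e^{psi(1) (b-a)}
# for 0 <= a <= b, with a = t_k - t_1 and b = t_l - t_1 (illustrative values).
rng = np.random.default_rng(1)
a, b = 0.3, 0.8
n = 1_000_000

xi_a = np.sqrt(a) * rng.standard_normal(n) - a / 2             # xi(a)
xi_b = xi_a + np.sqrt(b - a) * rng.standard_normal(n) - (b - a) / 2  # xi(b)
mc = np.mean(np.exp(xi_a + xi_b))

psi = lambda lam: (lam ** 2 - lam) / 2
exact = np.exp(psi(2) * a + psi(1) * (b - a))
print(mc, exact)   # should agree up to Monte Carlo error
```

The stationary independent increments of \xi are exactly what makes the splitting valid.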

Bringing equations (75), (76), (79) and (80) together, we obtain (74). Trivially, it follows from (74) that

\limsup_{N\to\infty}N\mathbb E\bigl{[}W_{N}(t_{k})W_{N}(t_{l})1_{\|\mathbf{W}_{N}\|_{\infty}\leq\varepsilon}\bigr{]}\leq\Phi(\vartheta)\mathrm{e}^{(\psi(1)-{\psi(2)/2})(t_{l}-t_{k})}. (81)

We are going to prove the converse inequality:

\liminf_{N\to\infty}N\mathbb E\bigl{[}W_{N}(t_{k})W_{N}(t_{l})1_{\|\mathbf{W}_{N}\|_{\infty}\leq\varepsilon}\bigr{]}\geq\Phi(\vartheta)\mathrm{e}^{(\psi(1)-{\psi(2)/2})(t_{l}-t_{k})}. (82)

Note that for every (small) \eta>0, the following inclusion of random events holds:

\{\|\mathbf{W}_{N}\|_{\infty}\leq\varepsilon\}\supset\{W_{N}(t_{1})\leq\eta\varepsilon\}\setminus\bigcup_{m=1}^{d}\mathcal{A}_{N,m},

where \mathcal{A}_{N,m} is the random event \{\xi(s_{N}+t_{m})-\xi(s_{N}+t_{1})>-\log\eta\}. Thus,

\displaystyle\mathbb E\bigl{[}W_{N}(t_{k})W_{N}(t_{l})1_{\|\mathbf{W}_{N}\|_{\infty}\leq\varepsilon}\bigr{]}
\displaystyle\quad\geq\mathbb E\bigl{[}W_{N}(t_{k})W_{N}(t_{l})1_{W_{N}(t_{1})\leq\eta\varepsilon}\bigr{]}-\sum_{m=1}^{d}\mathbb E[W_{N}(t_{k})W_{N}(t_{l})1_{\mathcal{A}_{N,m}}].

Since the asymptotic behavior of the first term on the right-hand side was computed in (74), we need to show that for every m=1,\ldots,d, and every 1\leq k\leq l\leq d,

\lim_{\eta\downarrow 0}\limsup_{N\to\infty}N\mathbb E[W_{N}(t_{k})W_{N}(t_{l})1_{\mathcal{A}_{N,m}}]=0. (83)

By (64), we have

\displaystyle\mathbb E[W_{N}(t_{k})W_{N}(t_{l})1_{\mathcal{A}_{N,m}}]
\displaystyle\quad\leq CN^{-1}\mathrm{e}^{-\psi(2)s_{N}}\mathbb E\bigl{[}\mathrm{e}^{\xi(s_{N}+t_{k})+\xi(s_{N}+t_{l})}1_{\mathcal{A}_{N,m}}\bigr{]}
\displaystyle\quad=CN^{-1}\mathrm{e}^{-\psi(2)s_{N}}\mathbb E\bigl{[}\mathrm{e}^{2\xi(s_{N}+t_{1})}\mathrm{e}^{\xi(s_{N}+t_{k})+\xi(s_{N}+t_{l})-2\xi(s_{N}+t_{1})}1_{\mathcal{A}_{N,m}}\bigr{]}
\displaystyle\quad\leq CN^{-1}\mathbb E\bigl{[}\mathrm{e}^{\xi(t_{k}-t_{1})+\xi(t_{l}-t_{1})}1_{\xi(t_{m}-t_{1})>-\log\eta}\bigr{]}. (84)

Note that by (4), \mathbb E\mathrm{e}^{\xi(t_{k}-t_{1})+\xi(t_{l}-t_{1})}<\infty. Hence, by the dominated convergence theorem,

\lim_{\eta\downarrow 0}\mathbb E\bigl{[}\mathrm{e}^{\xi(t_{k}-t_{1})+\xi(t_{l}-t_{1})}1_{\xi(t_{m}-t_{1})>-\log\eta}\bigr{]}=0. (85)

To complete the proof of (83), combine (84) and (85).

6 Proof of Theorem 1.4

6.1 Notation and preliminaries

We will concentrate on proving the convergence in the Skorokhod space D[0,T]. For the proof of the two-sided convergence on D[-T,T] we refer to [13].

We start by introducing some notation. Let W_{1,N},\ldots,W_{N,N} be independent copies of a positive-valued random process \{W_{N}(t),t\geq 0\} defined by

W_{N}(t)=\mathrm{e}^{\xi(s_{N}+t)-b_{N}(t)}, (86)

where b_{N}(t) is given by

b_{N}(t)=\log B_{N}(t)=\frac{\psi(\alpha)}{\alpha}t+s_{N}I^{-1}\biggl{(}\frac{\log N-\log(\alpha\sqrt{2\uppi\psi^{\prime\prime}(\alpha)s_{N}})}{s_{N}}\biggr{)}. (87)

Define a process Y_{N} by

Y_{N}(t)=\frac{Z_{N}(t)-A_{N}(t)}{B_{N}(t)}=\cases{\displaystyle\sum_{i=1}^{N}W_{i,N}(t),&\quad$0<\alpha<1$,\cr\displaystyle\sum_{i=1}^{N}W_{i,N}(t)-N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(0)\leq 1}\bigr{]},&\quad$\alpha=1$,\cr\displaystyle\sum_{i=1}^{N}W_{i,N}(t)-N\mathbb EW_{N}(t),&\quad$1<\alpha<2$.} (88)

Our aim is to show that we have the following weak convergence of stochastic processes on the Skorokhod space D[0,T]:

Y_{N}(\cdot)\stackrel{w}{\rightarrow}\mathbb{Y}_{\alpha;\xi}(\cdot),\qquad N\to\infty. (89)

We will use an approach based on considering the extremal order statistics. This method goes back to LePage et al. [16] and was used in the context of the random energy model by Bovier et al. [7] (note that the papers [3, 8, 14] use a different method). To describe the method of our proof of (89), let us consider the case \alpha\in(0,1) only. The first step is to prove that the upper order statistics of the sequence W_{1,N}(0),\ldots,W_{N,N}(0) can be approximated, as N\to\infty, by the Poisson process \{U_{i},i\in\mathbb{N}\} defined as in Section 1.4. In the second step we write, for t\geq 0,

\sum_{i=1}^{N}W_{i,N}(t)=\sum_{i=1}^{N}W_{i,N}(0)\mathrm{e}^{\eta_{i,N}(t)}, (90)

where \{\eta_{i,N}(t),t\geq 0\}, i=1,\ldots,N, are processes defined by

\eta_{i,N}(t)=\xi_{i}(s_{N}+t)-\xi_{i}(s_{N})-\frac{\psi(\alpha)}{\alpha}t. (91)

Note that the processes \eta_{1,N},\ldots,\eta_{N,N} are independent of each other, independent of W_{1,N}(0),\ldots,W_{N,N}(0), and have the same law as the process \eta defined by \eta(t)=\xi(t)-\frac{\psi(\alpha)}{\alpha}t. Bringing everything together, we may write

\sum_{i=1}^{N}W_{i,N}(t)\to\sum_{i=1}^{\infty}U_{i}\mathrm{e}^{\xi_{i}(t)-{(\psi(\alpha)/\alpha)}t}=\mathbb{Y}_{\alpha;\xi}(t),\qquad N\to\infty. (92)

The rest of the section is devoted to the justification of the above argument.
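The limiting series in (92) is straightforward to simulate. A minimal sketch for \alpha\in(0,1) (where no centering is needed), assuming the LePage representation U_{i}=\Gamma_{i}^{-1/\alpha} of the Poisson points with intensity \alpha u^{-(\alpha+1)}\,\mathrm{d}u, the illustrative choice \xi(t)=B(t)-t/2, and an illustrative truncation of the series:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, T, n_points, n_steps = 0.7, 1.0, 400, 50   # illustrative truncation

# Poisson points with intensity alpha * u^{-(alpha+1)} du, in descending order:
# U_i = Gamma_i^{-1/alpha}, where Gamma_i are unit-rate Poisson arrival times
gammas = np.cumsum(rng.exponential(size=n_points))
U = gammas ** (-1.0 / alpha)

# Illustrative Levy process xi(t) = B(t) - t/2; psi(lam) = (lam**2 - lam)/2
psi = lambda lam: (lam ** 2 - lam) / 2
t = np.linspace(0, T, n_steps + 1)
dt = t[1] - t[0]
incs = np.sqrt(dt) * rng.standard_normal((n_points, n_steps))
B = np.concatenate([np.zeros((n_points, 1)), np.cumsum(incs, axis=1)], axis=1)
xi = B - t / 2                                     # one path per Poisson point

# Truncated series for Y_{alpha;xi}(t), 0 < alpha < 1
drift = (psi(alpha) / alpha) * t
Y = (U[:, None] * np.exp(xi - drift)).sum(axis=0)
print(Y[0], Y[-1])    # one sample path of the (truncated) limit process
```

At t=0 all exponential factors equal 1, so Y(0) reduces to \sum_{i}U_{i}, mirroring the first step of the argument above.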

6.2 Asymptotics for truncated moments

The following corollary of Proposition 2.1 will play a crucial role in the sequel.

Proposition 6.1

Let the assumptions of Theorem 1.4 be satisfied. Let W_{N} be a process defined by (86). The following three statements hold true.

  1. (1)

    Let 0\leq\kappa<\alpha. Then, for every \tau>0,

    \lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}^{\kappa}(0)1_{W_{N}(0)>\tau}\bigr{]}=\frac{\alpha}{\alpha-\kappa}\tau^{\kappa-\alpha}. (93)
  2. (2)

    Let \kappa>\alpha. Then, for every \tau>0,

    \lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}^{\kappa}(0)1_{W_{N}(0)\leq\tau}\bigr{]}=\frac{\alpha}{\kappa-\alpha}\tau^{\kappa-\alpha}. (94)
  3. (3)

    Let \kappa=\alpha. Then, for every 0<\tau_{1}\leq\tau_{2},

    \lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}^{\kappa}(0)1_{W_{N}(0)\in(\tau_{1},\tau_{2})}\bigr{]}=\kappa(\log\tau_{2}-\log\tau_{1}). (95)

We prove part 1 of the proposition. Recall from (87) that

b_{N}(0)=s_{N}I^{-1}(c_{N}),\qquad\mbox{where }c_{N}=\frac{\log N-\log(\alpha\sqrt{2\uppi\psi^{\prime\prime}(\alpha)s_{N}})}{s_{N}}. (96)

We have \lim_{N\to\infty}I^{-1}(c_{N})=\psi^{\prime}(\alpha) by the fast growth condition (15). By part 2 of Proposition 2.1, we have as N\to\infty,

\displaystyle\mathbb E\bigl{[}W_{N}^{\kappa}(0)1_{W_{N}(0)>\tau}\bigr{]} \displaystyle= \displaystyle\mathrm{e}^{-\kappa b_{N}(0)}\mathbb E\bigl{[}\mathrm{e}^{\kappa\xi(s_{N})}1_{\xi(s_{N})>b_{N}(0)+\log\tau}\bigr{]} (97)
\displaystyle\sim \displaystyle\frac{\tau^{\kappa}}{(\alpha-\kappa)\sqrt{2\uppi\psi^{\prime\prime}(\alpha)s_{N}}}\mathrm{e}^{-I((b_{N}(0)+\log\tau)/s_{N})s_{N}}.

To compute the asymptotic behavior of the right-hand side of (97), we will prove that

s_{N}I\biggl{(}\frac{b_{N}(0)+\log\tau}{s_{N}}\biggr{)}=s_{N}c_{N}+\alpha\log\tau+\mathrm{o}(1),\qquad N\to\infty. (98)

We have \lim_{N\to\infty}I^{-1}(c_{N})=\psi^{\prime}(\alpha), hence \lim_{N\to\infty}I^{\prime}(I^{-1}(c_{N}))=\alpha by Lemma 2.1. Using Taylor’s expansion of I around the point I^{-1}(c_{N}), we obtain

I\biggl{(}\frac{b_{N}(0)+\log\tau}{s_{N}}\biggr{)}=I\biggl{(}I^{-1}(c_{N})+\frac{\log\tau}{s_{N}}\biggr{)}=c_{N}+\frac{\alpha\log\tau+\mathrm{o}(1)}{s_{N}},\qquad N\to\infty.

This proves (98). Inserting (98) into (97), we obtain part 1 of the proposition. Part 2 can be proved in a similar way.

Let us prove part 3 of the proposition. We write F_{N}(\tau)=\mathbb{P}[W_{N}(0)\leq\tau] for the distribution function of W_{N}(0), and \bar{F}_{N}(\tau)=1-F_{N}(\tau) for its tail. Taking \kappa=0 in (93), we obtain

\lim_{N\to\infty}N\bar{F}_{N}(\tau)=\tau^{-\alpha}. (99)

Note that this holds uniformly in \tau\in(\tau_{1},\tau_{2}), cf. Theorem 2.1. Trivially, we have

N\mathbb E\bigl{[}W_{N}^{\kappa}(0)1_{W_{N}(0)\in(\tau_{1},\tau_{2})}\bigr{]}=N\int_{\tau_{1}}^{\tau_{2}}w^{\kappa}\,\mathrm{d}F_{N}(w)=-N\int_{\tau_{1}}^{\tau_{2}}w^{\kappa}\,\mathrm{d}\bar{F}_{N}(w).

Integrating by parts, we obtain

N\mathbb E\bigl{[}W_{N}^{\kappa}(0)1_{W_{N}(0)\in(\tau_{1},\tau_{2})}\bigr{]}=-w^{\kappa}N\bar{F}_{N}(w)|_{\tau_{1}}^{\tau_{2}}+\kappa\int_{\tau_{1}}^{\tau_{2}}w^{\kappa-1}N\bar{F}_{N}(w)\,\mathrm{d}w.

Applying (99) to the right-hand side and recalling that \kappa=\alpha, we obtain

\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}^{\kappa}(0)1_{W_{N}(0)\in(\tau_{1},\tau_{2})}\bigr{]}=\kappa\int_{\tau_{1}}^{\tau_{2}}w^{-1}\,\mathrm{d}w=\kappa(\log\tau_{2}-\log\tau_{1}),

which completes the proof of part 3.
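The three limits in Proposition 6.1 are integrals of u^{\kappa} against the limiting Poisson intensity \alpha u^{-(\alpha+1)}\,\mathrm{d}u (cf. (99)). As a sanity check, the closed forms can be recovered by numerical integration; a sketch with illustrative values of \alpha, \kappa, \tau:

```python
import numpy as np

# Check the closed forms of Proposition 6.1 against numerical integrals of
# u^kappa * alpha * u^{-(alpha+1)} du (midpoint rule on a logarithmic grid).
alpha = 0.7
intensity = lambda u: alpha * u ** (-(alpha + 1))

def integrate(f, a, b, n=200_000):
    v = np.linspace(np.log(a), np.log(b), n + 1)     # log-spaced grid
    m = np.exp((v[:-1] + v[1:]) / 2)
    return float(np.sum(f(m) * m * np.diff(v)))      # du = u dv

tau = 0.5
kappa1 = 0.3                                         # part 1: kappa < alpha
p1 = integrate(lambda u: u ** kappa1 * intensity(u), tau, 1e6)
e1 = alpha / (alpha - kappa1) * tau ** (kappa1 - alpha)

kappa2 = 2.0                                         # part 2: kappa > alpha
p2 = integrate(lambda u: u ** kappa2 * intensity(u), 1e-8, tau)
e2 = alpha / (kappa2 - alpha) * tau ** (kappa2 - alpha)

tau1, tau2 = 0.5, 2.0                                # part 3: kappa = alpha
p3 = integrate(lambda u: u ** alpha * intensity(u), tau1, tau2)
e3 = alpha * np.log(tau2 / tau1)

print(p1, e1, p2, e2, p3, e3)
```

The small discrepancy in part 1 comes from truncating the upper limit of integration; the exact integral over (\tau,\infty) matches \frac{\alpha}{\alpha-\kappa}\tau^{\kappa-\alpha}.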

6.3 Convergence of the upper order statistics

For \tau>0, we define a process \mathbb{Y}_{\alpha;\xi}^{(\tau,\infty)}, which is a “truncated version” of the process \mathbb{Y}_{\alpha;\xi}, by

\mathbb{Y}_{\alpha;\xi}^{(\tau,\infty)}(t)=\cases{\displaystyle\mathop{\mathop{\sum}_{i\in\mathbb{N}}}_{U_{i}>\tau}U_{i}\mathrm{e}^{\xi_{i}(t)-{(\psi(\alpha)/\alpha)}t},&\quad$0<\alpha<1$,\cr\displaystyle\mathop{\mathop{\sum}_{i\in\mathbb{N}}}_{U_{i}>\tau}U_{i}\mathrm{e}^{\xi_{i}(t)-\psi(1)t}-\log\frac{1}{\tau},&\quad$\alpha=1$,\cr\displaystyle\mathop{\mathop{\sum}_{i\in\mathbb{N}}}_{U_{i}>\tau}U_{i}\mathrm{e}^{\xi_{i}(t)-{(\psi(\alpha)/\alpha)}t}-\frac{\alpha\tau^{1-\alpha}}{\alpha-1}\mathrm{e}^{(\psi(1)-{(\psi(\alpha)/\alpha)})t},&\quad$1<\alpha<2$.} (100)

Similarly, we define Y_{N}^{(\tau,\infty)}, a truncated version of the process Y_{N} given by (88), by

Y_{N}^{(\tau,\infty)}(t)=\cases{\displaystyle\mathop{\mathop{\sum}_{1\leq i\leq N}}_{W_{i,N}(0)>\tau}W_{i,N}(t),&\quad$0<\alpha<1$,\cr\displaystyle\mathop{\mathop{\sum}_{1\leq i\leq N}}_{W_{i,N}(0)>\tau}W_{i,N}(t)-N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(0)\in(\tau,1)}\bigr{]},&\quad$\alpha=1$,\cr\displaystyle\mathop{\mathop{\sum}_{1\leq i\leq N}}_{W_{i,N}(0)>\tau}W_{i,N}(t)-N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(0)>\tau}\bigr{]},&\quad$1<\alpha<2$.} (101)

The next lemma is the main result of this subsection.

Lemma 6.1

For every \tau>0, we have the following weak convergence of stochastic processes on the Skorokhod space D[0,T]:

Y^{(\tau,\infty)}_{N}(\cdot)\stackrel{w}{\rightarrow}\mathbb{Y}^{(\tau,\infty)}_{\alpha;\xi}(\cdot),\qquad N\to\infty.

First, we establish the convergence of regularizing terms in (101) to those in (100). If \alpha\in(1,2), then writing W_{N}(t)=W_{N}(0)\mathrm{e}^{\eta_{N}(t)} with \eta_{N}(t)=\xi(s_{N}+t)-\xi(s_{N})-\frac{\psi(\alpha)}{\alpha}t (see equations (90) and (91)) and applying part 1 of Proposition 6.1, we obtain

\displaystyle\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(0)>\tau}\bigr{]} \displaystyle= \displaystyle\mathrm{e}^{(\psi(1)-{\psi(\alpha)/\alpha})t}\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}(0)1_{W_{N}(0)>\tau}\bigr{]}
\displaystyle= \displaystyle\frac{\alpha\tau^{1-\alpha}}{\alpha-1}\mathrm{e}^{(\psi(1)-{\psi(\alpha)/\alpha})t}.

If \alpha=1, then part 3 of Proposition 6.1 yields

\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(0)\in(\tau,1)}\bigr{]}=\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}(0)1_{W_{N}(0)\in(\tau,1)}\bigr{]}=\log\frac{1}{\tau}.

Thus, in proving Lemma 6.1, we may drop the regularizing terms in (100) and (101). More precisely, we define stochastic processes \tilde{\mathbb{Y}}_{\alpha;\xi}^{(\tau,\infty)} and \tilde{Y}^{(\tau,\infty)}_{N} by

\displaystyle\tilde{\mathbb{Y}}_{\alpha;\xi}^{(\tau,\infty)}(t) \displaystyle= \displaystyle\mathop{\mathop{\sum}_{i\in\mathbb{N}}}_{U_{i}>\tau}U_{i}\mathrm{e}^{\xi_{i}(t)-{(\psi(\alpha)/\alpha)}t}, (102)
\displaystyle\tilde{Y}^{(\tau,\infty)}_{N}(t) \displaystyle= \displaystyle\mathop{\mathop{\sum}_{1\leq i\leq N}}_{W_{i,N}(0)>\tau}W_{i,N}(t)=\mathop{\mathop{\sum}_{1\leq i\leq N}}_{W_{i,N}(0)>\tau}W_{i,N}(0)\mathrm{e}^{\eta_{i,N}(t)}; (103)

see (90) and (91) for the last equality. With this notation, we may restate Lemma 6.1 as follows.

Lemma 6.2

For every \tau>0, we have the following weak convergence of stochastic processes on the Skorokhod space D[0,T]:

\tilde{Y}^{(\tau,\infty)}_{N}(\cdot)\stackrel{w}{\rightarrow}\tilde{\mathbb{Y}}^{(\tau,\infty)}_{\alpha;\xi}(\cdot),\qquad N\to\infty. (104)

We start by considering the upper order statistics of the summands on the right-hand side of (103) at t=0. More precisely, let \{W_{i:N}(0)\}_{i=1}^{N} be the rearrangement of the numbers \{W_{i,N}(0)\}_{i=1}^{N} in descending order, and also set W_{i:N}(0)=0 for i>N. Let \mathbb{S} be the space of all sequences w=(w_{i})_{i=1}^{\infty} with w_{1}\geq w_{2}\geq\cdots\geq 0. Then, \mathbb{S} is a closed subset of \mathbb{R}^{\infty} endowed with the product topology.

Lemma 6.3

Let \{U_{i},i\in\mathbb{N}\} be the points of a Poisson process on (0,\infty) with intensity \alpha u^{-(\alpha+1)}\,\mathrm{d}u, arranged in descending order. Then, we have the following weak convergence of random elements in \mathbb{S}:

\{W_{i:N}(0)\}_{i=1}^{\infty}\stackrel{w}{\rightarrow}\{U_{i}\}_{i=1}^{\infty},\qquad N\to\infty. (105)

By part 1 of Proposition 6.1 with \kappa=0, we have for every u>0,

\lim_{N\to\infty}N\mathbb{P}[W_{N}(0)>u]=u^{-\alpha}. (106)

To complete the proof, use [19], Proposition 3.21 on page 154.

Proof of Lemma 6.2. Let f\colon D[0,T]\to\mathbb{R} be a bounded continuous function. To prove (104), we need to verify that

\lim_{N\to\infty}\mathbb Ef\bigl{(}\tilde{Y}^{(\tau,\infty)}_{N}\bigr{)}=\mathbb Ef\bigl{(}\tilde{\mathbb{Y}}^{(\tau,\infty)}_{\alpha;\xi}\bigr{)}. (107)

Let \mathbb{S}_{\tau}\subset\mathbb{S} be the set of all sequences (w_{i})_{i\in\mathbb{N}}\in\mathbb{S} with \lim_{i\to\infty}w_{i}=0 and such that w_{i}\neq\tau for all i\in\mathbb{N}. Define a function \bar{f}\colon\mathbb{S}_{\tau}\to\mathbb{R} by

\bar{f}(w)=\mathbb Ef\biggl{(}\mathop{\mathop{\sum}_{i\in\mathbb{N}}}_{w_{i}>\tau}w_{i}\mathrm{e}^{\xi_{i}(\cdot)-{(\psi(\alpha)/\alpha)}\cdot}\biggr{)},\qquad w=(w_{i})_{i\in\mathbb{N}}\in\mathbb{S}_{\tau}.

Note that \bar{f} is bounded and continuous on \mathbb{S}_{\tau}, and \mathbb{S}_{\tau} has full measure with respect to the law of (U_{i})_{i=1}^{\infty}. By Fubini’s theorem,

\mathbb Ef\bigl{(}\tilde{Y}^{(\tau,\infty)}_{N}\bigr{)}=\mathbb E\bar{f}((W_{i:N}(0))_{i=1}^{\infty}),\qquad\mathbb Ef\bigl{(}\tilde{\mathbb{Y}}^{(\tau,\infty)}_{\alpha;\xi}\bigr{)}=\mathbb E\bar{f}((U_{i})_{i=1}^{\infty}). (108)

It follows from Lemma 6.3 and the properties of the weak convergence that

\lim_{N\to\infty}\mathbb E\bar{f}((W_{i:N}(0))_{i=1}^{\infty})=\mathbb E\bar{f}((U_{i})_{i=1}^{\infty}). (109)

Putting (108) and (109) together, we obtain (107). This completes the proof of the lemma.

6.4 Estimating the lower order statistics

In this section we estimate the difference between the processes \mathbb{Y}_{\alpha;\xi} and Y_{N} and their truncated versions \mathbb{Y}_{\alpha;\xi}^{(\tau,\infty)} and Y_{N}^{(\tau,\infty)}. Define a process \mathbb{Y}_{\alpha;\xi}^{(0,\tau)} by

\mathbb{Y}_{\alpha;\xi}^{(0,\tau)}(t)=\mathbb{Y}_{\alpha;\xi}(t)-\mathbb{Y}_{\alpha;\xi}^{(\tau,\infty)}(t). (110)
Lemma 6.4

For every \varepsilon>0, we have

\lim_{\tau\downarrow 0}\mathbb{P}\Bigl{[}\sup_{t\in[0,T]}\bigl{|}\mathbb{Y}_{\alpha;\xi}^{(0,\tau)}(t)\bigr{|}>\varepsilon\Bigr{]}=0. (111)

The proof follows immediately from Proposition 1.1.

Next we define a process Y^{(0,\tau)}_{N} representing the sum of the lower order statistics in (88) by Y^{(0,\tau)}_{N}(t)=Y_{N}(t)-Y_{N}^{(\tau,\infty)}(t). Equivalently,

Y^{(0,\tau)}_{N}(t)=\cases{\displaystyle\mathop{\mathop{\sum}_{1\leq i\leq N}}_{W_{i,N}(0)\leq\tau}W_{i,N}(t),&\quad$\alpha\in(0,1)$,\cr\displaystyle\mathop{\mathop{\sum}_{1\leq i\leq N}}_{W_{i,N}(0)\leq\tau}W_{i,N}(t)-N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(0)\leq\tau}\bigr{]},&\quad$\alpha\in[1,2)$.} (112)
Lemma 6.5

For every \varepsilon>0, we have

\lim_{\tau\downarrow 0}\limsup_{N\to\infty}\mathbb{P}\Bigl{[}\sup_{t\in[0,T]}\bigl{|}Y_{N}^{(0,\tau)}(t)\bigr{|}>\varepsilon\Bigr{]}=0. (113)

The proof will be carried out in the rest of the subsection. First we consider the regularizing term in (112). If \alpha\in(0,1), then applying part 2 of Proposition 6.1 with \kappa=1, we obtain

\lim_{\tau\downarrow 0}\limsup_{N\to\infty}N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(0)\leq\tau}\bigr{]}=0. (114)

Define a process \tilde{Y}^{(0,\tau)}_{N} coinciding with Y^{(0,\tau)}_{N} for \alpha\in[1,2) and containing an additional term for \alpha\in(0,1) by

\tilde{Y}^{(0,\tau)}_{N}(t)=\mathop{\mathop{\sum}_{1\leq i\leq N}}_{W_{i,N}(0)\leq\tau}W_{i,N}(t)-N\mathbb E\bigl{[}W_{N}(t)1_{W_{N}(0)\leq\tau}\bigr{]}. (115)

In view of (114), we may restate Lemma 6.5 as follows.

Lemma 6.6

For every \varepsilon>0, we have

\lim_{\tau\downarrow 0}\limsup_{N\to\infty}\mathbb{P}\Bigl{[}\sup_{t\in[0,T]}\bigl{|}\tilde{Y}_{N}^{(0,\tau)}(t)\bigr{|}>\varepsilon\Bigr{]}=0. (116)

For a function f\colon[0,T]\to\mathbb{R} we write \|f\|_{\infty}=\sup_{t\in[0,T]}|f(t)|. We have

\displaystyle\tilde{Y}_{N}^{(0,\tau)}(t) \displaystyle= \displaystyle\sum_{i=1}^{N}\bigl{(}W_{i,N}(0)1_{W_{i,N}(0)\leq\tau}-\mathbb E\bigl{[}W_{N}(0)1_{W_{N}(0)\leq\tau}\bigr{]}\bigr{)}\mathrm{e}^{\eta_{i,N}(t)} (117)
\displaystyle{}+\mathbb E\bigl{[}W_{N}(0)1_{W_{N}(0)\leq\tau}\bigr{]}\sum_{i=1}^{N}\bigl{(}\mathrm{e}^{\eta_{i,N}(t)}-\mathbb E\mathrm{e}^{\eta_{i,N}(t)}\bigr{)}.

It follows from (117) that \|\tilde{Y}_{N}^{(0,\tau)}\|_{\infty}\leq M_{N,\tau}^{\prime}+M_{N,\tau}^{\prime\prime}, where M_{N,\tau}^{\prime} and M_{N,\tau}^{\prime\prime} are random variables defined by

\displaystyle M_{N,\tau}^{\prime} \displaystyle= \displaystyle\sum_{i=1}^{N}\|\mathrm{e}^{\eta_{i,N}}\|_{\infty}\bigl{|}W_{i,N}(0)1_{W_{i,N}(0)\leq\tau}-\mathbb E\bigl{[}W_{N}(0)1_{W_{N}(0)\leq\tau}\bigr{]}\bigr{|},
\displaystyle M_{N,\tau}^{\prime\prime} \displaystyle= \displaystyle\mathbb E\bigl{[}W_{N}(0)1_{W_{N}(0)\leq\tau}\bigr{]}\cdot\Biggl{\|}\sum_{i=1}^{N}(\mathrm{e}^{\eta_{i,N}}-\mathbb E\mathrm{e}^{\eta_{i,N}})\Biggr{\|}_{\infty}.

Thus, to prove the lemma, it suffices to show that

\displaystyle\lim_{\tau\downarrow 0}\limsup_{N\to\infty}\mathbb{P}[M_{N,\tau}^{\prime}>\varepsilon/2] \displaystyle= \displaystyle 0, (118)
\displaystyle\lim_{\tau\downarrow 0}\limsup_{N\to\infty}\mathbb{P}[M_{N,\tau}^{\prime\prime}>\varepsilon/2] \displaystyle= \displaystyle 0. (119)

Let us prove (118). Note that the process \{\mathrm{e}^{\alpha\eta(t)},t\geq 0\} is a martingale. By Doob’s maximal L^{p}-inequality, \mathbb E\|\mathrm{e}^{2\eta}\|_{\infty}\leq C\mathbb E\mathrm{e}^{2\eta(T)}<\infty. Thus, \mathbb E\|\mathrm{e}^{\eta_{i,N}}\|_{\infty}^{2} is finite and

\limsup_{N\to\infty}\mathbb EM_{N,\tau}^{\prime 2}\leq C\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}^{2}(0)1_{W_{N}(0)\leq\tau}\bigr{]}=\frac{C\alpha}{2-\alpha}\tau^{2-\alpha},

where the last step follows from part 2 of Proposition 6.1 with \kappa=2. The right-hand side goes to 0 as \tau\downarrow 0. By Chebyshev’s inequality, this proves (118).
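The Doob step above can be illustrated numerically. A minimal sketch, assuming the illustrative choice \xi(t)=B(t)-t/2: then \mathrm{e}^{\alpha\eta(t)} with \eta(t)=\xi(t)-(\psi(\alpha)/\alpha)t is a martingale, and Doob's L^{q}-inequality with q=2/\alpha>1 gives \mathbb E\sup_{[0,T]}\mathrm{e}^{2\eta}\leq(q/(q-1))^{q}\,\mathbb E\mathrm{e}^{2\eta(T)}:

```python
import numpy as np

# Simulate eta(t) = B(t) - t/2 - (psi(alpha)/alpha) t on a grid and compare
# E sup e^{2 eta} with the Doob bound (q/(q-1))^q * E e^{2 eta(T)}, q = 2/alpha.
rng = np.random.default_rng(3)
alpha, T, n_paths, n_steps = 0.7, 1.0, 100_000, 200
psi = lambda lam: (lam ** 2 - lam) / 2

dt = T / n_steps
drift = -0.5 - psi(alpha) / alpha                     # drift of eta per unit time
incs = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)) + drift * dt
eta = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incs, axis=1)], axis=1)

lhs = np.mean(np.exp(2 * eta).max(axis=1))            # E sup_t e^{2 eta(t)}
q = 2 / alpha
rhs = (q / (q - 1)) ** q * np.mean(np.exp(2 * eta[:, -1]))
print(lhs, rhs)      # Doob: lhs <= rhs (up to discretization and MC error)
```

The finiteness of the left-hand side is exactly what makes \mathbb E\|\mathrm{e}^{\eta_{i,N}}\|_{\infty}^{2} finite in the argument above.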

Let us prove (119). By Theorem 1.1, the random variable N^{-1/2}\|\sum_{i=1}^{N}(\mathrm{e}^{\eta_{i,N}}-\mathbb E\mathrm{e}^{\eta_{i,N}})\|_{\infty} converges as N\to\infty to some limiting (a.s. finite) random variable. Thus, we need to prove that

\lim_{\tau\downarrow 0}\limsup_{N\to\infty}\sqrt{N}\mathbb E\bigl{[}W_{N}(0)1_{W_{N}(0)\leq\tau}\bigr{]}=0. (120)

We have, by part 2 of Proposition 6.1 with \kappa=2,

\limsup_{N\to\infty}N\mathbb E\bigl{[}W_{N}(0)1_{W_{N}(0)\leq\tau}\bigr{]}^{2}\leq\lim_{N\to\infty}N\mathbb E\bigl{[}W_{N}^{2}(0)1_{W_{N}(0)\leq\tau}\bigr{]}=\frac{\alpha}{2-\alpha}\tau^{2-\alpha}.

This proves (120) and completes the proof of the lemma.

6.5 Completing the proof of the one-sided convergence

In this section we complete the proof of the one-sided version of Theorem 1.4. We will need to introduce some notation. Let d be the Skorokhod metric on D[0,T]. Given a process X with sample paths in D[0,T], we denote by \mathcal{L}(X) the law of X considered as a probability measure on D[0,T]. Let further \pi be the Lévy–Prokhorov distance on the space of probability measures on D[0,T]. That is, given two probability measures \mu_{1} and \mu_{2} on D[0,T], we define

\pi(\mu_{1},\mu_{2})=\inf\{\varepsilon>0\colon\mu_{1}(B)\leq\mu_{2}(B^{\varepsilon})+\varepsilon\mbox{ for all Borel }B\subset D[0,T]\},

where B^{\varepsilon}=\{b\in D[0,T]\colon d(b,B)\leq\varepsilon\} is the \varepsilon-neighborhood of the set B. The next lemma is standard.

Lemma 6.7

Let \{X(t),t\in[0,T]\} and \{Y(t),t\in[0,T]\} be two (generally, dependent) stochastic processes with sample paths in D[0,T], and suppose that for some \varepsilon>0,

\mathbb{P}\Bigl{[}\sup_{t\in[0,T]}|Y(t)|>\varepsilon\Bigr{]}\leq\varepsilon.

Then, \pi(\mathcal{L}(X),\mathcal{L}(X+Y))\leq\varepsilon.


Proof. By the definition of the Skorokhod metric (take the identity time change), d(X,X+Y)\leq\sup_{t\in[0,T]}|Y(t)|. By assumption, it follows that \mathbb{P}[d(X,X+Y)>\varepsilon]\leq\varepsilon. For every Borel set B\subset D[0,T], we have

\mathbb{P}[X+Y\in B]\leq\mathbb{P}[X\in B^{\varepsilon}]+\mathbb{P}[d(X,X+Y)>\varepsilon]\leq\mathbb{P}[X\in B^{\varepsilon}]+\varepsilon,

whence the statement of the lemma.
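Lemma 6.7 is a D[0,T]-valued instance of the standard bound of the Lévy–Prokhorov distance by the Ky Fan metric of a coupling. The empirical version of this bound is simple to compute: given coupled samples of X and X+Y, it is the smallest \varepsilon such that the fraction of samples with \sup_{t\in[0,T]}|Y(t)|>\varepsilon is at most \varepsilon. The following Python sketch is our own illustration; the function name and sample data are hypothetical.

```python
import numpy as np

def ky_fan_bound(sup_dists):
    """Smallest eps with  #{i : d_i > eps}/n <= eps,  where d_i are observed
    values of sup_{t in [0,T]} |Y_i(t)| for coupled sample paths.  By the
    coupling bound of Lemma 6.7, this dominates the Levy-Prokhorov distance
    between the laws of X and X + Y."""
    d = np.sort(np.asarray(sup_dists, dtype=float))
    n = len(d)
    # On [d_(i), d_(i+1)) the exceedance fraction is constant and equals
    # (n - i)/n, so the infimum is attained at one of the candidate points
    # max(d_(i), (n - i)/n); each candidate is feasible, so take the minimum.
    padded = np.concatenate(([0.0], d))
    return min(max(di, (n - i) / n) for i, di in enumerate(padded))

# 95 sample paths with a tiny perturbation, 5 with a large one:
dists = [0.01] * 95 + [0.9] * 5
print(ky_fan_bound(dists))  # -> 0.05
```

Here 5% of the paths exceed 0.05 while \varepsilon=0.05 itself is admissible, so the bound is exactly 0.05.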

We are now in a position to complete the proof of the one-sided version of Theorem 1.4, as restated in (89). Let \varepsilon>0 be fixed. Our aim is to show that for sufficiently large N, we have

\pi(\mathcal{L}(Y_{N}),\mathcal{L}(\mathbb{Y}_{\alpha;\xi}))\leq 3\varepsilon. (121)

By Lemma 6.4, we can find a \delta>0 such that \mathbb{P}[\sup_{t\in[0,T]}|\mathbb{Y}_{\alpha;\xi}^{(0,\tau)}(t)|>\varepsilon]\leq\varepsilon for all \tau<\delta. By Lemma 6.7 and (110), this implies that for all \tau<\delta,

\pi\bigl{(}\mathcal{L}\bigl{(}\mathbb{Y}_{\alpha;\xi}^{(\tau,\infty)}\bigr{)},\mathcal{L}(\mathbb{Y}_{\alpha;\xi})\bigr{)}\leq\varepsilon. (122)

By Lemma 6.5, we can find \tau<\delta and N_{1}\in\mathbb{N} such that \mathbb{P}[\sup_{t\in[0,T]}|Y_{N}^{(0,\tau)}(t)|>\varepsilon]\leq\varepsilon for N>N_{1}. By Lemma 6.7, this implies that for all N>N_{1},

\pi\bigl{(}\mathcal{L}\bigl{(}Y_{N}^{(\tau,\infty)}\bigr{)},\mathcal{L}(Y_{N})\bigr{)}\leq\varepsilon. (123)

By Lemma 6.1, we can find N_{2}\in\mathbb{N} such that for all N>N_{2},

\pi\bigl{(}\mathcal{L}\bigl{(}Y^{(\tau,\infty)}_{N}\bigr{)},\mathcal{L}\bigl{(}\mathbb{Y}^{(\tau,\infty)}_{\alpha;\xi}\bigr{)}\bigr{)}\leq\varepsilon. (124)

Since \pi is a metric, the triangle inequality applied to (122)–(124) yields, for all N>\max(N_{1},N_{2}),

\pi(\mathcal{L}(Y_{N}),\mathcal{L}(\mathbb{Y}_{\alpha;\xi}))\leq\pi\bigl{(}\mathcal{L}(Y_{N}),\mathcal{L}\bigl{(}Y_{N}^{(\tau,\infty)}\bigr{)}\bigr{)}+\pi\bigl{(}\mathcal{L}\bigl{(}Y_{N}^{(\tau,\infty)}\bigr{)},\mathcal{L}\bigl{(}\mathbb{Y}_{\alpha;\xi}^{(\tau,\infty)}\bigr{)}\bigr{)}+\pi\bigl{(}\mathcal{L}\bigl{(}\mathbb{Y}_{\alpha;\xi}^{(\tau,\infty)}\bigr{)},\mathcal{L}(\mathbb{Y}_{\alpha;\xi})\bigr{)}\leq 3\varepsilon,

which proves (121).


Acknowledgements

The author is grateful to Leonid Bogachev for pointing out reference [8] after [14] was completed, and to Ilya Molchanov, Michael Schmutz and Martin Schlather for useful discussions.


References

  • 1 Araujo, A. and Giné, E. (1980). The Central Limit Theorem for Real and Banach Valued Random Variables. New York: Wiley. \MR0576407
  • 2 Bahadur, R. and Ranga Rao, R. (1960). On deviations of the sample mean. Ann. Math. Statist. 31 1015–1027. \MR0117775
  • 3 Ben Arous, G., Bogachev, L. and Molchanov, S. (2005). Limit theorems for sums of random exponentials. Probab. Theory Related Fields 132 579–612. \MR2198202
  • 4 Billingsley, P. (1999). Convergence of Probability Measures, 2nd ed. Chichester: Wiley. \MR1700749
  • 5 Bogachev, L. (2006). Limit laws for norms of IID samples with Weibull tails. J. Theoret. Probab. 19 849–873. \MR2279606
  • 6 Bogachev, L. (2007). Extreme value theory for random exponentials. In Probability and Mathematical Physics. A Volume in Honor of Stanislav Molchanov (D. Dawson et al., eds.). CRM Proceedings and Lecture Notes 42 41–64. Providence, RI: Amer. Math. Soc. \MR2352261
  • 7 Bovier, A., Kurkova, I. and Löwe, M. (2002). Fluctuations of the free energy in the REM and the p-spin SK models. Ann. Probab. 30 605–651. \MR1905853
  • 8 Cranston, M. and Molchanov, S. (2005). Limit laws for sums of products of exponentials of i.i.d. random variables. Israel J. Math. 148 115–136. \MR2191226
  • 9 Dembo, A. and Zeitouni, O. (1998). Large Deviations Techniques and Applications, 2nd ed. Applications of Mathematics 38. New York: Springer. \MR1619036
  • 10 Hahn, M.G. (1978). Central limit theorems in D[0,1]. Z. Wahrsch. Verw. Gebiete 44 89–101. \MR0501231
  • 11 Ibragimov, R. and Sharakhmetov, S. (2002). On extremal problems and best constants in moment inequalities. Sankhyā A 64 42–56. \MR1968374
  • 12 Janßen, A. (2010). Limit laws for power sums and norms of i.i.d. samples. Probab. Theory Related Fields 146 515–533. \MR2574737
  • 13 Kabluchko, Z. (2009). Functional limit theorems for sums of independent geometric Lévy processes. Preprint version of the present paper. Available at
  • 14 Kabluchko, Z. (2009). Limiting distributions for sums of independent random products. Not published. Available at
  • 15 Kabluchko, Z. (2010). Limit laws for sums of independent random products: The lattice case. J. Theoret. Probab. To appear. Available at
  • 16 LePage, R., Woodroofe, M. and Zinn, J. (1981). Convergence to a stable distribution via order statistics. Ann. Probab. 9 624–632. \MR0624688
  • 17 Meerschaert, M. and Scheffler, H.-P. (2001). Limit Distributions for Sums of Independent Random Vectors. Heavy Tails in Theory and Practice. New York: Wiley. \MR1840531
  • 18 Petrov, V. (1965). On the probabilities of large deviations for sums of independent random variables. Theor. Probab. Appl. 10 287–298. \MR0185645
  • 19 Resnick, S.I. (2008). Extreme Values, Regular Variation and Point Processes. New York: Springer. \MR2364939
  • 20 Rosenthal, H.P. (1970). On the subspaces of L_{p} spanned by sequences of independent random variables. Israel J. Math. 8 273–303. \MR0271721
  • 21 Rvačeva, E.L. (1962). On domains of attraction of multi-dimensional distributions. In Select. Transl. Math. Statist. Probab. 2 183–205. Providence, RI: Amer. Math. Soc. \MR0150795
  • 22 Samorodnitsky, G. and Taqqu, M. (1994). Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. New York: Chapman & Hall. \MR1280932
  • 23 Stoev, S. (2008). On the ergodicity and mixing of max-stable processes. Stochastic Process. Appl. 118 1679–1705. \MR2442375