
# Operator self-similar processes and functional central limit theorems

Vaidotas Characiejus  Alfredas Račkauskas
Faculty of Mathematics and Informatics, Vilnius University, Naugarduko 24, 03225 Vilnius, Lithuania
(e-mail: vaidotas.characiejus@gmail.com; alfredas.rackauskas@mif.vu.lt)
Corresponding author. Tel.: +37064034677.
March 6, 2014
###### Abstract

Let {X_k : k ∈ ℤ} be a linear process with values in the separable Hilbert space L²(μ) given by X_k = Σ_{j=0}^∞ (j+1)^{−D} ε_{k−j} for each k ∈ ℤ, where D is defined by Dx = {d(s)x(s) : s ∈ S} for each x ∈ L²(μ) with a measurable function d : S → ℝ, and {ε_k} are independent and identically distributed L²(μ)-valued random elements with Eε₀ = 0 and E‖ε₀‖² < ∞. We establish sufficient conditions for the functional central limit theorem for {X_k} when the series of operator norms Σ_{j=0}^∞ ‖u_j‖ diverges and show that the limit process generates an operator self-similar process.

Keywords: linear process; long memory; self-similar process; functional central limit theorem.

AMS MSC 2010: 60B12; 60F17; 60G18.

## 1 Introduction

Self-similar processes are stochastic processes that are invariant in distribution under suitable scaling of time and space. More precisely, let ξ = {ξ(t) : t ≥ 0} be an ℝ-valued stochastic process defined on some probability space (Ω, 𝓕, P). The process ξ is said to be self-similar if for any a > 0 there exists b > 0 such that

 {ξ(at):t≥0}fdd={bξ(t):t≥0},

where the symbol fdd over the equality sign denotes the equality of the finite-dimensional distributions.

Self-similar processes were first studied rigorously by Lamperti [12]. Well-known examples are the Brownian motion and the fractional Brownian motion with Hurst parameter H ∈ (0, 1) (in these cases b is equal to a^{1/2} and a^H respectively). We refer to Embrechts and Maejima [6] for the current state of knowledge about self-similar processes and their applications.

Laha and Rohatgi [11] introduced operator self-similar processes taking values in ℝ^d. They extended the notion of self-similarity to allow scaling by a class of matrices. Such processes were later studied by Hudson and Mason [9], Maejima and Mason [15], Lavancier, Philippe, and Surgailis [13], Didier and Pipiras [5] among others.

Matache and Matache [16] consider and study operator self-similar processes valued in (possibly infinite-dimensional) Banach spaces. Let 𝕏 denote a Banach space and let 𝓑(𝕏) be the algebra of all bounded linear operators on 𝕏. Matache and Matache [16] give the following definition.

###### Definition.

An operator self-similar process is a stochastic process ξ = {ξ(t) : t ≥ 0} on 𝕏 such that there is a family {T(a) : a > 0} in 𝓑(𝕏) with the property that for each a > 0,

 {ξ(at):t≥0}fdd={T(a)ξ(t):t≥0}.

The family {T(a) : a > 0} is called the scaling family of operators. If the operators have the particular form T(a) = b(a)I, where b(a) is some scalar and I is the identity operator, then the stochastic process is called self-similar instead of operator self-similar.

In this paper, we obtain an example of an operator self-similar process with values in the real separable Hilbert space L²(μ) of equivalence classes of μ-almost everywhere equal square-integrable functions, where (S, 𝓢, μ) is a σ-finite measure space. Our example arises from the functional central limit theorem for a sequence of L²(μ)-valued random elements.

Let {X_k : k ∈ ℤ} be random elements with values in the separable Banach space 𝕏 given by

 Xk=∞∑j=0ujεk−j (1)

for each k ∈ ℤ, where {u_j} ⊂ 𝓑(𝕏) and {ε_k} are independent and identically distributed 𝕏-valued random elements with Eε₀ = 0 and E‖ε₀‖² < ∞, where ‖·‖ is the norm of the Banach space 𝕏. Let {ζ_n} be random polygonal functions (piecewise linear functions) constructed from the partial sums S_k = X₁ + ⋯ + X_k. The asymptotic behaviour of {S_k} and that of {ζ_n} strongly depend on the convergence of the series Σ_{j=0}^∞ ‖u_j‖, where ‖u_j‖ is the operator norm. Roughly speaking, if the series converges, the asymptotic behaviour of {S_k} and {ζ_n} is inherited from that of {ε_k} (see Merlevède, Peligrad, and Utev [17], Račkauskas and Suquet [18] for more details). However, this is not the case when Σ_{j=0}^∞ ‖u_j‖ = ∞ (see Račkauskas and Suquet [19] and Characiejus and Račkauskas [2]).

Račkauskas and Suquet [19] consider {X_k} with values in an abstract separable Hilbert space for operators {u_j} that commute with the covariance operator of ε₀. We obtain an operator self-similar process with a covariance structure different from that of Račkauskas and Suquet [19], since in our case the operator D does not necessarily commute with the covariance operator of ε₀.

Specifically, we investigate {X_k} with values in L²(μ) and {u_j} given by

 uj=(j+1)−D (2)

for each j ≥ 0, where D is a multiplication operator defined by Dx = {d(s)x(s) : s ∈ S} for each x ∈ L²(μ) with a measurable function d : S → ℝ. Our main results (Theorem 1 and Theorem 2 in Section 4) establish sufficient conditions for the convergence in distribution of suitably normalized {ζ_n} in the following two cases: either 1/2 < d < 1 (shorthand for 1/2 < d(s) < 1 for all s ∈ S) or d = 1 (shorthand for d(s) = 1 for all s ∈ S). In the former case, we provide sufficient conditions for the convergence in distribution of {ζ_n} to a Gaussian stochastic process that generates an operator self-similar process with a scaling family of multiplication operators {T(a)} given by T(a)x = {a^{3/2−d(s)}x(s) : s ∈ S} for each x ∈ L²(μ) and a > 0. In the latter case, we establish convergence in distribution of {ζ_n} to an L²(μ)-valued Wiener process. The results of this paper generalize our previous results since in Characiejus and Račkauskas [2] only the central limit theorem is investigated.

The rest of the paper is organized as follows. In Section 2, we give two alternative ways to construct {X_k} and establish some properties of {X_k} and {ζ_n}. The existence of an operator self-similar process with values in L²(μ) is established in Section 3. In Section 4, we establish sufficient conditions for the functional central limit theorem.

## 2 Preliminaries

### 2.1 Construction of {Xk}

There are two approaches to construct {X_k} with values in L²(μ). The first approach is to define X_k as stochastic processes with space varying memory and square μ-integrable sample paths. The second approach is to define an L²(μ)-valued random variable X_k for each k ∈ ℤ as series (1) with {u_j} given by (2) and to investigate the convergence of such series. We present both of these approaches.

#### First approach

Let {ε_k : k ∈ ℤ} be independent and identically distributed measurable stochastic processes defined on the probability space (Ω, 𝓕, P), i.e. ε_k are 𝓢⊗𝓕-measurable functions S×Ω → ℝ. We require that Eε₀(s) = 0 and Eε₀²(s) < ∞ for each s ∈ S and denote

 σ(r,s)=E[ε0(r)ε0(s)],σ2(s)=Eε20(s),r,s∈S.

Define stochastic processes {X_k : k ∈ ℤ} by setting

 Xk(s)=∞∑j=0(j+1)−d(s)εk−j(s) (3)

for each s ∈ S and each k ∈ ℤ. Observe that d(s) > 1/2 is a necessary and sufficient condition for the almost sure convergence of series (3) (this fact follows from Kolmogorov’s three-series theorem). It is well-known that the growth rate of the partial sums Σ_{k=1}^n X_k(s) depends on d(s). Viewing S as the set of space indexes and ℤ as the set of time indexes, we thus have a functional process with space varying memory. We refer to Giraitis, Koul, and Surgailis [8] for an encyclopedic treatment of the long memory phenomenon of stochastic processes.
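The dichotomy in d(s) is easy to check numerically. The following short sketch (not part of the original argument; the truncation level and the sample values of d are illustrative choices) shows that for d > 1/2 the partial sums of Σ_{j≥0} (j+1)^{−2d} stay below the integral-test bound 1 + 1/(2d−1), while for d ≤ 1/2 they grow without bound.

```python
# Partial sums of sum_{j>=0} (j+1)^(-2d): finite iff d > 1/2.
def partial_sum(d, J):
    """Sum of (j+1)^(-2d) for j = 0, ..., J-1."""
    return sum((j + 1) ** (-2 * d) for j in range(J))

# d > 1/2: the series converges; the integral test gives the uniform bound
# sum_{j>=1} j^(-2d) <= 1 + 1/(2d - 1), whatever the truncation level J.
d = 0.7
print(partial_sum(d, 10**5), "<=", 1 + 1 / (2 * d - 1))

# d <= 1/2: the partial sums diverge (with d = 0.4 they grow like J^0.2 / 0.2).
print(partial_sum(0.4, 10**5))
```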

We denote

 γh(r,s)=E[X0(r)Xh(s)],γh(s)=E[X0(s)Xh(s)],r,s∈S,h∈N.

For fixed r, s ∈ S, the sequences {X_k(r) : k ∈ ℤ} and {X_k(s) : k ∈ ℤ} are stationary sequences of random variables with zero means and cross-covariance

 γh(r,s)=σ(r,s)∞∑j=0(j+1)−d(r)(j+h+1)−d(s). (4)

Throughout the paper

 d(r,s)=d(r)+d(s),r,s∈S, (5)

and

 c(r,s)=∫∞0x−d(r)(x+1)−d(s)dx,r,s∈S, (6)

provided that d(r) < 1 and d(r,s) > 1. Let us observe that c(r,s) = B(1−d(r), d(r,s)−1), where B is the beta function. If r = s, we denote c(s,s) by c(s). The constant c(s) can be estimated from above with the following inequality

 c(s)≤11−d(s)+12d(s)−1. (7)
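Both the beta-function expression for c(s) and inequality (7) can be verified numerically. The sketch below (illustrative only; the sample values of d are arbitrary) computes c(s) = B(1−d, 2d−1) via the gamma function, compares it with a direct quadrature of integral (6), and checks bound (7).

```python
import math

def c_beta(d):
    """c(s) = B(1 - d, 2d - 1) = Gamma(1-d) Gamma(2d-1) / Gamma(d), 1/2 < d < 1."""
    return math.gamma(1 - d) * math.gamma(2 * d - 1) / math.gamma(d)

def c_numeric(d, n=60000, lo=1e-30, hi=1e30):
    """Midpoint rule in log space for integral (6) with r = s:
    int_0^inf x^(-d) (x+1)^(-d) dx, using dx = x d(log x)."""
    a, b = math.log(lo), math.log(hi)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = math.exp(a + (i + 0.5) * h)
        total += x ** (1 - d) * (x + 1) ** (-d) * h
    return total

d = 0.7
print(c_beta(d))                      # closed form via the beta function
print(c_numeric(d))                   # quadrature of integral (6)
print(1 / (1 - d) + 1 / (2 * d - 1))  # right-hand side of bound (7)
```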

Proposition 1 gives the asymptotic behaviour of {γ_h(r,s)} as h → ∞ and Proposition 2 provides a necessary and sufficient condition for the summability of the series Σ_k γ_k(r,s) (for the proofs, see Characiejus and Račkauskas [2]). The notation a_h ∼ b_h indicates that the ratio of the two sequences tends to 1 as h → ∞.

###### Proposition 1.

If d(r) < 1, d(s) < 1 and d(r,s) > 1, then

 γh(r,s)∼c(r,s)σ(r,s)⋅h1−d(r,s).

If d(r) = d(s) = 1, then

 γh(r,s)∼σ(r,s)⋅h−1logh.
###### Proposition 2.

The series

 ∞∑k=0γk(r,s)

converges if and only if d(r,s) > 2 or σ(r,s) = 0.

###### Remark 1.

The series Σ_{k=0}^∞ γ_k(s,s) converges if and only if d(s) > 1 or σ²(s) = 0.

Let L²(μ) be the separable space of real valued square μ-integrable functions with the seminorm

 ∥f∥=[∫S|f(v)|2μ(dv)]1/2,f∈L2(μ),

and let the corresponding Hilbert space of equivalence classes of μ-almost everywhere equal functions, which we also denote by L²(μ), be endowed with the inner product

 ⟨f,g⟩=∫Sf(v)g(v)μ(dv),f,g∈L2(μ).

With an abuse of notation, we use the same symbol to denote both a function and its equivalence class in order to avoid cumbersome notation. The intended meaning should be clear from the context.

Proposition 3 establishes a necessary and sufficient condition for the sample paths of the stochastic process X_k to be almost surely square μ-integrable with E‖X_k‖² < ∞ for each k ∈ ℤ (see Characiejus and Račkauskas [2] for the proof).

###### Proposition 3.

The sample paths of the stochastic process X_k almost surely belong to the space L²(μ) and E‖X_k‖² < ∞ for each k ∈ ℤ if and only if both of the integrals

 E∥ε0∥2=∫Sσ2(v)μ(dv)and∫Sσ2(v)2d(v)−1μ(dv)

are finite.

A stochastic process X defined on a probability space (Ω, 𝓕, P) with sample paths in L²(μ) induces the 𝓕/𝓑-measurable function ω ↦ X(·,ω), where 𝓑 is the Borel σ-algebra of L²(μ) (for more details, see Cremers and Kadelka [3]). Therefore we shall frequently consider each stochastic process with sample paths in L²(μ) as a random element with values in L²(μ) and denote it by X(·) or simply by X.

#### Second approach

Now we establish a necessary and sufficient condition for the mean square convergence of series (1) with {u_j} given by (2). Recall that u_j x = {(j+1)^{−d(s)}x(s) : s ∈ S} for each x ∈ L²(μ) and j ≥ 0, since u_j = (j+1)^{−D} and D is a multiplication operator.

###### Proposition 4.

Series (1) with {u_j} given by (2) and L²(μ)-valued random elements {ε_k} such that Eε₀ = 0 and E‖ε₀‖² < ∞ converges in mean square if and only if there exists a measurable set S′ ⊂ S such that σ²(v) = 0 for μ-almost all v ∈ S∖S′, d(v) > 1/2 for all v ∈ S′ and the integral

 ∫Sσ2(v)2d(v)−1μ(dv)

is finite.

###### Proof.

Let k ∈ ℤ and M, N ∈ ℕ with M < N, and observe that

 E∥∥∥N∑j=M+1ujεj−k∥∥∥2=N∑j=M+1∫S(j+1)−2d(v)σ2(v)μ(dv).

Since

 ∞∑j=0∫S(j+1)−2d(r)σ2(r)μ(dr)=∫S∞∑j=1j−2d(r)σ2(r)μ(dr)

and

 12d(r)−1≤∞∑j=1j−2d(r)≤1+12d(r)−1

we have that

 ∫Sσ2(r)2d(r)−1μ(dr)≤∫Sσ2(r)∞∑j=1j−2d(r)μ(dr)≤E∥ε0∥2+∫Sσ2(r)2d(r)−1μ(dr)

and the proof is complete. ∎

###### Remark 2.

Since {ε_k} are independent, it follows from the Lévy–Itô–Nisio theorem (see Ledoux and Talagrand [14], Theorem 6.1, p. 151) and Proposition 4 that series (1) also converges almost surely. Hence X_k for each k ∈ ℤ is an L²(μ)-valued random element and Proposition 4 is consistent with Proposition 3.

###### Remark 3.

Since {u_j} given by (2) are multiplication operators from L²(μ) to L²(μ), we have that the operator norm ‖u_j‖ = ess sup_{s∈S} (j+1)^{−d(s)}. If ess inf_{s∈S} d(s) ≤ 1/2, then we have that Σ_{j=0}^∞ ‖u_j‖² = ∞, but series (1) might still converge. The square summability of the operator norms of {u_j} is not a necessary condition for the almost sure convergence of series (1).
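Remark 3 can be made concrete on a toy two-point measure space. In the sketch below (an illustrative assumption, not taken from the paper: S = {s₁, s₂} with counting measure, d(s₁) = 0.4, d(s₂) = 0.8 and σ²(s₁) = 0), the operator norms ‖u_j‖ = (j+1)^{−0.4} are not square summable, yet the mean-square criterion of Proposition 4 holds because the noise vanishes on the short-memory coordinate.

```python
# Two-point measure space: d(s1) = 0.4, d(s2) = 0.8, counting measure.
d1, d2 = 0.4, 0.8

def op_norm(j):
    """Operator norm of the multiplication operator u_j = (j+1)^(-D):
    the supremum of (j+1)^(-d(s)) over s, attained at the smaller
    exponent d(s1) = 0.4 since j + 1 >= 1."""
    return max((j + 1) ** (-d1), (j + 1) ** (-d2))

J = 10**5
# sum_j ||u_j||^2 = sum_j (j+1)^(-0.8) diverges (grows like J^0.2 / 0.2) ...
norms_sq = sum(op_norm(j) ** 2 for j in range(J))
# ... while with sigma^2(s1) = 0 the criterion of Proposition 4 only involves
# the coordinate s2, where sum_j (j+1)^(-1.6) <= 1 + 1/0.6 by the integral test.
var_s2 = sum((j + 1) ** (-2 * d2) for j in range(J))
print(norms_sq, var_s2)
```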

### 2.2 Random polygonal functions {ζn}

Let {ζ_n} be random polygonal functions (piecewise linear functions) constructed from the partial sums S_k = X₁ + ⋯ + X_k:

 ζn(t)=S⌊nt⌋+{nt}X⌊nt⌋+1

for each t ∈ [0,1] and each n ≥ 1, where ⌊·⌋ is the floor function defined by ⌊a⌋ = max{k ∈ ℤ : k ≤ a} for a ∈ ℝ and {a} = a − ⌊a⌋ is the fractional part of a. We adopt the usual convention that an empty sum equals 0. For each t ∈ [0,1] the random function ζ_n(t) can be expressed as a series

 ζn(t)=⌊nt⌋+1∑j=−∞anj(t)εj,

where

 anj(t)=⌊nt⌋∑k=1vk−j+{nt}v⌊nt⌋+1−j (8)

and

 vj={uj, if j≥0;0, if j<0. (9)

Denote ζ_n(s,t) = ζ_n(t)(s) for s ∈ S and t ∈ [0,1]. Each random variable ζ_n(s,t) can be expressed as a series ζ_n(s,t) = Σ_{j=−∞}^{⌊nt⌋+1} a_nj(s,t) ε_j(s), where

 anj(s,t)=⌊nt⌋∑k=1vk−j(s)+{nt}v⌊nt⌋+1−j(s) (10)

and

 vj(s)={(j+1)−d(s), if j≥0;0, if j<0. (11)

Observe that a_nj(s,t) = 0 if j > ⌊nt⌋ + 1, since v_j(s) = 0 if j < 0. Notice that the upper bounds of summation of the series in the expressions of ζ_n(t) and ζ_n(s,t) can be extended up to +∞, since a_nj(t) = 0 and a_nj(s,t) = 0 if j > ⌊nt⌋ + 1.
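The coefficient formulas (10)–(11) can be checked directly: for a noise sequence with only finitely many nonzero values, the series Σ_j a_nj(s,t) ε_j(s) reproduces S_⌊nt⌋(s) + {nt} X_{⌊nt⌋+1}(s) exactly, as the following sketch confirms (the values of n, t, d(s) and the deterministic ε’s are arbitrary illustrative choices).

```python
import math

d_s = 0.7                    # value of d at a fixed point s
J = 60                       # eps_j = 0 outside [-J, J), so every series is finite
eps = {j: math.sin(1.0 + j) for j in range(-J, J)}   # arbitrary deterministic noise

def v(j):
    """v_j(s) of (11): (j+1)^(-d(s)) for j >= 0, zero otherwise."""
    return (j + 1) ** (-d_s) if j >= 0 else 0.0

def X(k):
    """X_k(s) = sum_{j>=0} v_j(s) eps_{k-j}(s); finite here by construction."""
    return sum(v(j) * eps.get(k - j, 0.0) for j in range(k + J + 1))

def zeta_direct(n, t):
    """zeta_n(s,t) = S_floor(nt)(s) + {nt} X_(floor(nt)+1)(s)."""
    m, frac = int(n * t), n * t - int(n * t)
    return sum(X(k) for k in range(1, m + 1)) + frac * X(m + 1)

def zeta_coeffs(n, t):
    """The same quantity via the coefficients a_nj(s,t) of (10)."""
    m, frac = int(n * t), n * t - int(n * t)
    total = 0.0
    for j in range(-J, m + 2):          # a_nj = 0 for j > floor(nt) + 1
        a = sum(v(k - j) for k in range(1, m + 1)) + frac * v(m + 1 - j)
        total += a * eps.get(j, 0.0)
    return total

print(abs(zeta_direct(16, 0.55) - zeta_coeffs(16, 0.55)))   # zero up to rounding
```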

Set T = S × [0,1] and define the function V : T × T → ℝ by

 V((r,t),(s,u)) = σ(r,s)/{[2−d(r,s)][3−d(r,s)]} · [c(s,r) t^{3−d(r,s)} + c(r,s) u^{3−d(r,s)} − C(r,s; t−u) |t−u|^{3−d(r,s)}], (12)

where d(r,s) is given by (5), c(r,s) is given by (6) and

 C(r,s;t)={c(r,s) if t<0;c(s,r) if t>0.

Now we are prepared to derive the asymptotic behaviour of the sequence of cross-covariances of {ζ_n}.

###### Proposition 5.

Suppose either d(r) < 1, d(s) < 1 and d(r,s) > 1, or d(r) = d(s) = 1. In both cases, the following asymptotic relation holds

 E[ζn(r,t)ζn(s,u)]∼E[S⌊nt⌋(r)S⌊nu⌋(s)].
###### Proposition 6.

If d(r) < 1, d(s) < 1 and d(r,s) > 1, then

 E[S⌊nt⌋(r)S⌊nu⌋(s)]∼V((r,t),(s,u))⋅n3−d(r,s)

for t, u ∈ (0,1], where V is given by (12).

If d(r) = d(s) = 1, then

 E[S⌊nt⌋(r)S⌊nu⌋(s)]∼σ(r,s)⋅min(t,u)⋅nlog2n.
###### Remark 4.

Let us assume that r = s and 1/2 < d(s) < 1. By setting r = s in Proposition 6 and using Proposition 5, we obtain that

 E[ζn(s,t)ζn(s,u)]∼σ2(s)c(s)[1−d(s)][3−2d(s)]⋅E[B3/2−d(s)(t)B3/2−d(s)(u)]⋅n3−2d(s),

where

 E[B3/2−d(s)(t)B3/2−d(s)(u)]=12[t3−2d(s)+u3−2d(s)−|t−u|3−2d(s)]

is the covariance function of the fractional Brownian motion with the Hurst parameter 3/2 − d(s) and c(s) is given by (6).
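As a sanity check on Remark 4, the sketch below (an illustration with arbitrary grid and parameter values, not part of the paper) verifies two properties of the fractional Brownian covariance with H = 3/2 − d(s) ∈ (1/2, 1): the scaling identity cov(at, au) = a^{2H} cov(t, u), and positive definiteness of its Gram matrix on a small grid, checked through a hand-rolled Cholesky factorization.

```python
import math

def fbm_cov(t, u, H):
    """Covariance of fractional Brownian motion with Hurst parameter H."""
    return 0.5 * (t ** (2 * H) + u ** (2 * H) - abs(t - u) ** (2 * H))

def cholesky(a):
    """Cholesky factor of a symmetric matrix; math.sqrt raises ValueError
    if the matrix fails to be positive definite."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

H = 0.8                       # corresponds to d(s) = 3/2 - H = 0.7
grid = [0.2, 0.4, 0.6, 0.8, 1.0]
gram = [[fbm_cov(t, u, H) for u in grid] for t in grid]
L = cholesky(gram)            # succeeds, so the Gram matrix is positive definite
print([L[i][i] for i in range(len(grid))])

# Self-similarity of the covariance: cov(at, au) = a^(2H) cov(t, u).
a, t, u = 2.0, 0.3, 0.7
print(fbm_cov(a * t, a * u, H) - a ** (2 * H) * fbm_cov(t, u, H))
```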

###### Remark 5.

The asymptotic behaviour of the variance Eζ_n²(s,t) follows from Proposition 5 and Proposition 6 by setting r = s and u = t: if 1/2 < d(s) < 1, then

 Eζ2n(s,t)∼c(s)σ2(s)[1−d(s)][3−2d(s)]⋅t3−2d(s)⋅n3−2d(s);

if d(s) = 1, then

 Eζ2n(s,t)∼σ2(s)⋅t⋅nlog2n.
###### Proof of Proposition 6.

Suppose t < u and split the cross-covariance of the partial sums into two terms

 E[S⌊nt⌋(r)S⌊nu⌋(s)] = E[S⌊nt⌋(r)S⌊nt⌋(s)] + E[S⌊nt⌋(r)(S⌊nu⌋(s)−S⌊nt⌋(s))]. (13)

The following two asymptotic relations are proved in Characiejus and Račkauskas [2]: if d(r) < 1, d(s) < 1 and d(r,s) > 1, then

 E[Sn(r)Sn(s)]∼[c(r,s)+c(s,r)]σ(r,s)[2−d(r,s)][3−d(r,s)]⋅n3−d(r,s); (14)

if d(r) = d(s) = 1, then

 E[Sn(r)Sn(s)]∼σ(r,s)⋅nlog2n. (15)

The asymptotic behaviour of the first term of sum (13) is established using (14) and (15): if d(r) < 1, d(s) < 1 and d(r,s) > 1, then

 E[S⌊nt⌋(r)S⌊nt⌋(s)]∼[c(r,s)+c(s,r)]σ(r,s)[2−d(r,s)][3−d(r,s)]⋅t3−d(r,s)⋅n3−d(r,s); (16)

if d(r) = d(s) = 1, then

 E[S⌊nt⌋(r)S⌊nt⌋(s)]∼σ(r,s)⋅t⋅nlog2n. (17)

In order to establish the asymptotic behaviour of the second term of sum (13), we express it in the following way

 E[S⌊nt⌋(r)[S⌊nu⌋(s)−S⌊nt⌋(s)]]=mn−1∑k=1k[γk(r,s)+γ⌊nu⌋−k(r,s)]+mn|⌊nu⌋−2⌊nt⌋|∑k=0γmn+k(r,s), (18)

where m_n = min(⌊nt⌋, ⌊nu⌋−⌊nt⌋) (we also use the notation m = min(t, u−t)). For simplicity, denote

 κ(a,b)=b∑k=a+1γk(r,s)andν(a,b)=b∑k=a+1kγk(r,s).

Then we have that

 mn−1∑k=1kγ⌊nu⌋−k(r,s)=⌊nu⌋κ(⌊nu⌋−mn,⌊nu⌋−1)−ν(⌊nu⌋−mn,⌊nu⌋−1). (19)

Let us recall a few facts about sequences. We use these facts to establish the asymptotic behaviour of the sums in (18) and (19). Suppose {a_n} and {b_n} are sequences of positive real numbers such that a_n ∼ b_n. Then Σ_{k=1}^n a_k ∼ Σ_{k=1}^n b_k provided either of these partial sums diverges. Let f be a continuous strictly increasing or strictly decreasing positive function such that ∫_1^n f(x) dx → ∞ as n → ∞. Then Σ_{k=1}^n f(k) ∼ ∫_1^n f(x) dx.

Since γ_k(r,s) ∼ c(r,s)σ(r,s) k^{1−d(r,s)} if d(r) < 1, d(s) < 1 and d(r,s) > 1 (see Proposition 1), we obtain the following asymptotic relations using the facts about sequences mentioned above:

 ν(0, m_n−1) ∼ c(r,s)σ(r,s) m^{3−d(r,s)}/(3−d(r,s)) · n^{3−d(r,s)}; (20)
 ⌊nu⌋κ(⌊nu⌋−m_n, ⌊nu⌋−1) ∼ c(r,s)σ(r,s) u[u^{2−d(r,s)} − (u−m)^{2−d(r,s)}]/(2−d(r,s)) · n^{3−d(r,s)}; (21)
 ν(⌊nu⌋−m_n, ⌊nu⌋−1) ∼ c(r,s)σ(r,s) [u^{3−d(r,s)} − (u−m)^{3−d(r,s)}]/(3−d(r,s)) · n^{3−d(r,s)}; (22)
 m_n κ(m_n−1, m_n+|⌊nu⌋−2⌊nt⌋|) ∼ c(r,s)σ(r,s) m[(m+|u−2t|)^{2−d(r,s)} − m^{2−d(r,s)}]/(2−d(r,s)) · n^{3−d(r,s)}. (23)

We have that

 E[S⌊nt⌋(r)(S⌊nu⌋(s)−S⌊nt⌋(s))] ∼ c(r,s)σ(r,s)/{[2−d(r,s)][3−d(r,s)]} · [−t^{3−d(r,s)} + u^{3−d(r,s)} − (u−t)^{3−d(r,s)}] · n^{3−d(r,s)} (24)

using asymptotic relations (20)-(23). Combining (16) with (24), we obtain

 E[S⌊nt⌋(r)S⌊nu⌋(s)] ∼ σ(r,s)/{[2−d(r,s)][3−d(r,s)]} · [c(s,r) t^{3−d(r,s)} + c(r,s)(u^{3−d(r,s)} − (u−t)^{3−d(r,s)})] · n^{3−d(r,s)}.

Similarly, if d(r) = d(s) = 1, then γ_k(r,s) ∼ σ(r,s) k^{−1} log k (see Proposition 1) and the following asymptotic relations are true

 ν(0, m_n−1) ∼ σ(r,s) m · n log n; (25)
 ⌊nu⌋κ(⌊nu⌋−m_n, ⌊nu⌋−1) ∼ σ(r,s)[log u − log(u−m)] u · n log n; (26)
 ν(⌊nu⌋−m_n, ⌊nu⌋−1) ∼ σ(r,s) m · n log n; (27)
 m_n κ(m_n−1, m_n+|⌊nu⌋−2⌊nt⌋|) ∼ σ(r,s)[log(m+|u−2t|) − log m] m · n log n. (28)

Since sequences (25)-(28) grow slower than sequence (17), we conclude that

 E[S⌊nt⌋(r)S⌊nu⌋(s)] ∼ σ(r,s) · t · n log² n = σ(r,s) · min(t,u) · n log² n.

If t > u, the proof is exactly the same as in the case t < u. If t = u, then we just use asymptotic relations (16) and (17). The proof of Proposition 6 is complete. ∎

###### Proof of Proposition 5.

We have that

 E[ζn(r,t)ζn(s,u)] =E[S⌊nt⌋(r)S⌊nu⌋(s)] +{nu}E[S⌊nt⌋(r)X⌊nu⌋+1(s)] +{nt}E[S⌊nu⌋(s)X⌊nt⌋+1(r)] +{nt}{nu}E[X⌊nt⌋+1(r)X⌊nu⌋+1(s)]

and

 E[S⌊nt⌋(r)X⌊nu⌋+1(s)]≤⌊nt⌋γ0(r,s).

The result follows from Proposition 6 since E[S⌊nt⌋(r)S⌊nu⌋(s)] is the only term in the expression of E[ζ_n(r,t)ζ_n(s,u)] that grows faster than linearly in n. ∎

## 3 Operator self-similar process

In this section, we show that there exists a Gaussian stochastic process with zero mean and covariance function V given by (12). The stochastic process generates an operator self-similar process with values in L²(μ).

We begin by showing that the function V is a covariance function.

###### Proposition 7.

The function V, given by (12), with 1/2 < d(s) < 1 for all s ∈ S is a covariance function of a stochastic process indexed by the set T = S × [0,1].

###### Proof.

It follows from equation (12) that the function V is symmetric, i.e.

 V(τ,τ′)=V(τ′,τ),τ,τ′∈T.

So we need to prove that the function V is positive definite. Let N ∈ ℕ, w₁,…,w_N ∈ ℝ and τ_i = (s_i, t_i) ∈ T, where i = 1,…,N. Denote M = max_{1≤i≤N} t_i and ˜w_i = w_i M^{(3−2d(s_i))/2}, i = 1,…,N. Using equation (12) and Propositions 5 and 6, we obtain that

 N∑i=1N∑j=1wiwjV(τi,τj) =N∑i=1N∑j=1wiwjM3−[d(si)+d(sj)]V((si,ti/M),(sj,tj/M)) =N∑i=1N∑j=1˜wi˜wjlimn→∞1n3−[d(si)+d(sj)]E[ζn(si,ti/M)ζn(sj,tj/M)]≥0

since

 1n3−d(r,s)E[ζn(r,t)ζn(s,u)]

is a covariance function for all and for all