# Extremes of Locally-stationary Chi-square processes on discrete grids

Long Bai, Department of Actuarial Science, University of Lausanne, UNIL-Dorigny, 1015 Lausanne, Switzerland

July 17, 2019

Abstract: For centered Gaussian processes $X_i(t)$, $t\in[0,T]$, $i=1,\dots,n$, the chi-square process $\chi^2(t)=\sum_{i=1}^{n}X_i^2(t)$ appears naturally as a limiting process in various statistical models. In this paper we are concerned with the exact tail asymptotics of the supremum, taken over discrete grids, of a class of locally stationary chi-square processes whose local structure functions $a_i(\cdot)$ are not necessarily identical. An important tool for establishing our results is a generalisation of Pickands' lemma to the discrete scenario. An application related to the change-point problem is discussed.

Key Words: Chi-square processes; asymptotic methods; Pickands constant; change-point problem.

AMS Classification: Primary 60G15; secondary 60G70

## 1. Introduction and Main Result

Numerous applications, especially in statistics, concern the supremum of chi-square processes over a discrete grid whose span depends on the threshold.
The investigation of the extremes of chi-square processes was initiated by studies of high excursions of the envelope of a Gaussian process, see e.g. [1], and generalized in [2, 3, 4]. When the underlying processes are stationary, [5, 6] developed Berman's approach from [7] to obtain the asymptotic behavior of large deviation probabilities of stationary chi-square processes. When the underlying processes are locally stationary, [8] obtained the exact asymptotics of the supremum of a locally stationary Gaussian process. See [9, 10] for more literature on locally stationary Gaussian processes.
If the grid is very dense, the tail asymptotics of the supremum over $[0,T]$ and over the grid coincide. A difference, up to a constant, appears for so-called Pickands grids, while for sparse grids the asymptotics of the supremum are completely different and usually much simpler, see [11].
Before introducing the locally stationary chi-square process, we first recall a class of locally stationary Gaussian processes considered by Berman in [12], see also [13, 14, 15, 16, 17, 18]. Specifically, let $X_i(t)$, $t\in[0,T]$, $i=1,\dots,n$, be centered Gaussian processes with unit variance and correlation functions $r_i(\cdot,\cdot)$ satisfying

$$\lim_{\epsilon\to 0}\ \sup_{t,t+s\in[0,T],\,|s|<\epsilon}\left|\frac{1-r_i(t,t+s)}{|s|^{\alpha}}-a_i(t)\right|=0,\tag{1.1}$$

where each $a_i(t)$ is a continuous positive function on $[0,T]$ and $\alpha\in(0,2]$.
Then the locally stationary chi-square process is defined as

$$\chi^2(t)=\sum_{i=1}^{n}X_i^2(t),\qquad t\in[0,T].$$

Let $\mathcal{H}_\alpha^{\eta}(a)$ be the (generalized) Pickands constant defined for $a>0$ and $\eta\ge 0$ by

$$\mathcal{H}_\alpha^{\eta}(a)=\lim_{S\to\infty}\frac{1}{S}\,\mathbb{E}\Big\{\sup_{t\in[0,S]\cap\eta\mathbb{Z}}e^{\sqrt{2a}\,B_\alpha(t)-a|t|^{\alpha}}\Big\},$$

where $B_\alpha$ is a standard fractional Brownian motion with Hurst index $\alpha/2$ and $[0,S]\cap\eta\mathbb{Z}=[0,S]$ if $\eta=0$. See [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] for various properties of $\mathcal{H}_\alpha^{\eta}(a)$.
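To build intuition, the quantity inside this limit can be approximated by straightforward Monte Carlo. The sketch below is our own illustration, not part of the paper: it treats only $\alpha=1$, where $B_1$ is standard Brownian motion and can be simulated exactly on the grid, and estimates $\frac{1}{S}\mathbb{E}\{\sup_{t\in[0,S]\cap\eta\mathbb{Z}}e^{\sqrt{2a}B_1(t)-a t}\}$ for a moderate horizon $S$ (the function name and all parameters are hypothetical).

```python
import numpy as np

def pickands_mc(a=1.0, S=10.0, eta=0.01, n_rep=2000, seed=0):
    """Monte Carlo estimate of (1/S) E[ sup_{t in [0,S] ∩ ηZ} exp(sqrt(2a) B(t) - a t) ]
    for alpha = 1, where B is a standard Brownian motion simulated exactly on the grid."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, S + eta / 2, eta)            # grid [0, S] ∩ ηZ
    inc = rng.normal(0.0, np.sqrt(eta), size=(n_rep, t.size - 1))
    B = np.concatenate([np.zeros((n_rep, 1)), np.cumsum(inc, axis=1)], axis=1)
    M = np.exp(np.sqrt(2 * a) * B - a * t)          # the exponential functional
    return M.max(axis=1).mean() / S
```

Since $\mathcal{H}_1(a)=a$ in the continuous case ($\eta=0$), the estimate for $a=1$, small $\eta$ and large $S$ should be of order one, though the convergence in $S$ is slow.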

###### Theorem 1.1.

Let $X_i(t)$, $t\in[0,T]$, $i=1,\dots,n$, be centered, sample path continuous Gaussian processes with unit variance and correlation functions satisfying assumption (1.1). Suppose that the grid spans $\eta_u>0$ are such that

$$\lim_{u\to\infty}\eta_u u^{1/\alpha}=\eta\in[0,\infty).\tag{1.2}$$

Then we have, as $u\to\infty$,

$$\mathbb{P}\Big\{\sup_{t\in[0,T]\cap\eta_u\mathbb{Z}}\chi^2(t)>u\Big\}\sim\int_0^T\int_{v\in S^{n-1}}\mathcal{H}_\alpha^{\eta}\Big(\sum_{i=1}^{n}v_i^2a_i(t)\Big)dv\,dt\ u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\},\tag{1.3}$$

where $S^{n-1}=\{v\in\mathbb{R}^n:\sum_{i=1}^{n}v_i^2=1\}$ is the unit sphere in $\mathbb{R}^n$.
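For the continuous-time constant ($\eta=0$) the well-known scaling $\mathcal{H}_\alpha(a)=a^{1/\alpha}\mathcal{H}_\alpha$ holds, so the constant in (1.3) reduces to $\mathcal{H}_\alpha\int_0^T\int_{S^{n-1}}\big(\sum_i v_i^2a_i(t)\big)^{1/\alpha}dv\,dt$. The following quadrature sketch is our own illustration for $n=2$ (names and the midpoint parametrization are hypothetical); it evaluates the remaining double integral, leaving the universal factor $\mathcal{H}_\alpha$ aside.

```python
import numpy as np

def limit_constant(a1, a2, T, alpha, n_t=400, n_phi=400):
    """Midpoint quadrature of ∫_0^T ∫_{S^1} (a1(t) v1^2 + a2(t) v2^2)^{1/alpha} dv dt
    for n = 2, parametrizing the unit circle by the angle phi."""
    ts = (np.arange(n_t) + 0.5) * T / n_t
    phis = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    v1sq, v2sq = np.cos(phis) ** 2, np.sin(phis) ** 2
    total = 0.0
    for t in ts:
        total += np.sum((a1(t) * v1sq + a2(t) * v2sq) ** (1.0 / alpha)) * (2 * np.pi / n_phi)
    return total * (T / n_t)
```

As a sanity check, for constant $a_1=a_2=1$, $T=1$ and $\alpha=1$ the integral equals $2\pi$, the surface measure of $S^1$.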

In Section 2 we present an application related to the change-point problem. The proof of Theorem 1.1 and some auxiliary lemmas are relegated to Sections 3 and 4.

## 2. Applications

In [31], the authors study a change-point detection problem and derive the generalized edge-count scan statistic

$$S(t)=X_1^2(t)+X_2^2(t),$$

where $X_1$ and $X_2$ are two independent Gaussian processes which, respectively, have covariance functions

$$\mathrm{Cov}(X_1(s),X_1(t))=\frac{(s\wedge t)\,(1-(s\vee t))}{(s\vee t)\,(1-(s\wedge t))},\qquad \mathrm{Cov}(X_2(s),X_2(t))=\frac{(s\wedge t)\,(1-(s\vee t))}{\sqrt{(s\wedge t)(1-(s\wedge t))(s\vee t)(1-(s\vee t))}}.$$
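As a sanity check, these covariance functions can be verified numerically to satisfy (1.1) with $\alpha=1$. The sketch below is our own illustration: it computes the ratios $(1-r_i(t,t+s))/s$ for small $s$, which approach $a_1(t)=\frac{1}{t(1-t)}$ and $a_2(t)=\frac{1}{2t(1-t)}$, the values implicit in Theorem 2.1 below.

```python
import numpy as np

def r1(s, t):
    # Cov(X1(s), X1(t)) = (s∧t)(1-(s∨t)) / ((s∨t)(1-(s∧t)))
    lo, hi = min(s, t), max(s, t)
    return lo * (1 - hi) / (hi * (1 - lo))

def r2(s, t):
    # Cov(X2(s), X2(t)) = (s∧t)(1-(s∨t)) / sqrt((s∧t)(1-(s∧t))(s∨t)(1-(s∨t)))
    lo, hi = min(s, t), max(s, t)
    return lo * (1 - hi) / np.sqrt(lo * (1 - lo) * hi * (1 - hi))

t, s = 0.3, 1e-6
a1_hat = (1 - r1(t, t + s)) / s   # should approach 1/(t(1-t))
a2_hat = (1 - r2(t, t + s)) / s   # should approach 1/(2 t (1-t))
```

Here $r_2=\sqrt{r_1}$, which explains the extra factor $\frac12$ in the second limit.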

Then, in order to give asymptotic $p$-value approximations, we need to investigate, for $u$ large enough,

$$\mathbb{P}\Big\{\sup_{t\in[T_1,T_2]\cap u^{-1}\mathbb{Z}}S(t)>u\Big\}.$$
###### Theorem 2.1.

For $0<T_1<T_2<1$ we have, as $u\to\infty$,

$$\mathbb{P}\Big\{\sup_{t\in[T_1,T_2]\cap u^{-1}\mathbb{Z}}S(t)>u\Big\}\sim\frac{u\,e^{-u/2}}{2\pi}\int_{v_1^2+v_2^2=1}\int_{T_1}^{T_2}\mathcal{H}_1^{1}\left(\frac{v_1^2+\frac{1}{2}v_2^2}{t(1-t)}\right)dt\,dv_1\,dv_2.$$
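In practice the constant $\mathcal{H}_1^1(\cdot)$ has no closed form, so the right-hand side must be evaluated with a numerical approximation of it. The sketch below is our own hypothetical helper (the argument `H` is any user-supplied approximation of $\mathcal{H}_1^1$); it assembles the $p$-value approximation by midpoint quadrature.

```python
import numpy as np

def pvalue_approx(u, T1, T2, H, n_t=200, n_phi=200):
    """Numerical sketch of the approximation
       P ≈ u e^{-u/2}/(2π) ∫_{S^1} ∫_{T1}^{T2} H((v1^2 + v2^2/2)/(t(1-t))) dt dv."""
    ts = T1 + (np.arange(n_t) + 0.5) * (T2 - T1) / n_t
    phis = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    v1, v2 = np.cos(phis), np.sin(phis)
    integral = 0.0
    for t in ts:
        args = (v1 ** 2 + 0.5 * v2 ** 2) / (t * (1 - t))
        integral += np.sum(H(args)) * (2 * np.pi / n_phi)
    integral *= (T2 - T1) / n_t
    return u * np.exp(-u / 2) / (2 * np.pi) * integral
```

Passing, e.g., a Monte Carlo estimator of $\mathcal{H}_1^1$ for `H` yields an approximate tail probability for the scan statistic.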

## 3. Proofs

Below, $\lfloor x\rfloor$ stands for the integer part of $x$ and $\lceil x\rceil$ is the smallest integer not less than $x$. Further, $\Psi$ is the survival function of an $N(0,1)$ random variable. Throughout the proofs, $Q_i$, $i\in\mathbb{N}$, are positive constants which may differ from line to line, and for intervals $\Delta_1,\Delta_2\subset[0,T]$ we denote

$$K_u(\Delta_1):=\mathbb{P}\Big\{\sup_{t\in\Delta_1}\chi^2(t)>u\Big\},\qquad K_u(\Delta_1,\Delta_2):=\mathbb{P}\Big\{\sup_{t\in\Delta_1}\chi^2(t)>u,\ \sup_{t\in\Delta_2}\chi^2(t)>u\Big\}.$$

Proof of Theorem 1.1 For any $\theta>0$ and $\lambda>0$, set

$$I_k(\theta)=[k\theta,(k+1)\theta]\cap\eta_u\mathbb{Z},\ k\in\mathbb{N},\qquad N(\theta)=\Big\lfloor\frac{T}{\theta}\Big\rfloor,$$
$$J_{kl}(u)=[k\theta+lu^{-1/\alpha}\lambda,\ k\theta+(l+1)u^{-1/\alpha}\lambda]\cap\eta_u\mathbb{Z},\qquad M(u)=\Big\lfloor\frac{\theta u^{1/\alpha}}{\lambda}\Big\rfloor,$$
$$K_{kl}(u)=[k\theta+lu^{-1/\alpha}\lambda,\ k\theta+(l+1)u^{-1/\alpha}\lambda].$$

We have

$$\sum_{k=0}^{N(\theta)-1}\Big(\sum_{l=0}^{M(u)-1}K_u(J_{kl}(u))\Big)-\sum_{i=1}^{4}A_i(u)\le K_u([0,T]\cap\eta_u\mathbb{Z})\le\sum_{k=0}^{N(\theta)}K_u(I_k(\theta))\le\sum_{k=0}^{N(\theta)}\Big(\sum_{l=0}^{M(u)}K_u(J_{kl}(u))\Big),$$

where

$$A_i(u)=\sum_{(k_1,l_1,k_2,l_2)\in\mathcal{L}_i}K_u\big(K_{k_1l_1}(u),K_{k_2l_2}(u)\big),\qquad i=1,2,3,4,$$

with

$$\mathcal{L}_1=\{0\le k_1=k_2\le N(\theta)-1,\ 0\le l_1+1=l_2\le M(u)-1\},\qquad\mathcal{L}_2=\{0\le k_1+1=k_2\le N(\theta)-1,\ l_1=M(u),\ l_2=0\},$$
$$\mathcal{L}_3=\{0\le k_1+1<k_2\le N(\theta)\},\qquad\mathcal{L}_4=\{0\le k_1=k_2\le N(\theta)-1,\ 0\le l_1+1<l_2\le M(u)\}.$$

By Lemma 4.1, we have, as $u\to\infty$,

$$\begin{aligned}\sum_{k=0}^{N(\theta)}\Big(\sum_{l=0}^{M(u)}K_u(J_{kl}(u))\Big)&=\sum_{k=0}^{N(\theta)}\Big(\sum_{l=0}^{M(u)}\mathbb{P}\Big\{\sup_{t\in J_{kl}(u)}\chi^2(t)>u\Big\}\Big)\\ &\le\sum_{k=0}^{N(\theta)}\Big(\sum_{l=0}^{M(u)}\int_{v\in S^{n-1}}\mathcal{H}_\alpha^{\eta}\Big(\sum_{i=1}^{n}v_i^2(a_i(k\theta)+\varepsilon_\theta)\Big)\lambda\,dv\ \mathbb{P}\{\chi^2(0)>u\}\Big)\\ &\sim\sum_{k=0}^{N(\theta)}\theta\int_{v\in S^{n-1}}\mathcal{H}_\alpha^{\eta}\Big(\sum_{i=1}^{n}v_i^2(a_i(k\theta)+\varepsilon_\theta)\Big)dv\ u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}\\ &\sim\int_0^T\int_{v\in S^{n-1}}\mathcal{H}_\alpha^{\eta}\Big(\sum_{i=1}^{n}v_i^2a_i(t)\Big)dv\,dt\ u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\},\end{aligned}\tag{3.1}$$

where the last asymptotic equivalence holds as $\theta\to 0$.

Similarly, we have, as $u\to\infty$ followed by $\theta\to 0$,

$$\sum_{k=0}^{N(\theta)-1}\Big(\sum_{l=0}^{M(u)-1}K_u(J_{kl}(u))\Big)\ge\int_0^T\int_{v\in S^{n-1}}\mathcal{H}_\alpha^{\eta}\Big(\sum_{i=1}^{n}v_i^2a_i(t)\Big)dv\,dt\ u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}(1+o(1)).$$

Next we focus on the analysis of $A_i(u)$, $i=1,2,3,4$. For $A_1(u)$, without loss of generality we consider $(k_1,l_1,k_2,l_2)\in\mathcal{L}_1$, so that $k_1=k_2$ and $l_1+1=l_2$. Then set

$$(K_{k_1l_1}(u))_1=[k_1\theta+l_1u^{-1/\alpha}\lambda,\ k_1\theta+(l_1+1)u^{-1/\alpha}(\lambda-\sqrt{\lambda})],$$
$$(K_{k_1l_1}(u))_2=[k_1\theta+(l_1+1)u^{-1/\alpha}(\lambda-\sqrt{\lambda}),\ k_1\theta+(l_1+1)u^{-1/\alpha}\lambda].$$

Then we have

$$A_1(u)\le\sum_{(k_1,l_1,k_2,l_2)\in\mathcal{L}_1}\Big(K_u\big((K_{k_1l_1}(u))_1,K_{k_2l_2}(u)\big)+K_u\big((K_{k_1l_1}(u))_2\big)\Big).$$

Analogously to (3.1), we have, as $u\to\infty$ and then $\lambda\to\infty$,

$$\sum_{(k_1,l_1,k_2,l_2)\in\mathcal{L}_1}K_u\big((K_{k_1l_1}(u))_2\big)\le\sum_{k_1=0}^{N(\theta)-1}\sum_{l_1=0}^{M(u)-1}K_u\big((K_{k_1l_1}(u))_2\big)\sim\frac{1}{\sqrt{\lambda}}\int_0^T\int_{v\in S^{n-1}}\mathcal{H}_\alpha^{\eta}\Big(\sum_{i=1}^{n}v_i^2a_i(t)\Big)dv\,dt\ u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}=o\big(u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}\big).$$

Further, by Lemma 4.1,

$$\begin{aligned}A_1(u)\le{}&\sum_{k=0}^{N(\theta)-1}\Big(\sum_{l=0}^{M(u)-1}\big(K_u(K_{kl}(u))+K_u(K_{k\,l+1}(u))-K_u(K_{kl}(u)\cup K_{k\,l+1}(u))\big)\Big)\\ \sim{}&\sum_{k=0}^{N(\theta)-1}\Big(\big(\mathcal{H}_\alpha\big[0,(a_k+\varepsilon_\theta)^{\frac{1}{\alpha}}d^{-1/\alpha}\lambda\big]+\mathcal{H}_\alpha\big[0,(a_k+\varepsilon_\theta)^{\frac{1}{\alpha}}d^{-1/\alpha}\lambda\big]-\mathcal{H}_\alpha\big[0,2(a_k-\varepsilon_\theta)^{\frac{1}{\alpha}}d^{-1/\alpha}\lambda\big]\big)\sum_{l=0}^{M(u)-1}\mathbb{P}\{\chi^2(0)>u\}\Big)\\ \le{}&Q_1\Big(\sum_{k=0}^{N(\theta)-1}\big((a_k+\varepsilon_\theta)^{\frac{1}{\alpha}}-(a_k-\varepsilon_\theta)^{\frac{1}{\alpha}}\big)\theta\Big)u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}\\ ={}&o\big(u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}\big),\qquad u\to\infty,\ \lambda\to\infty,\ \theta\to 0.\end{aligned}$$

Similarly, by Lemma 4.1,

$$\begin{aligned}A_2(u)={}&\sum_{k=0}^{N(\theta)-1}K_u\big(J_{k\,M(u)-1}(u),J_{k+1\,0}(u)\big)\\ \le{}&\sum_{k=0}^{N(\theta)-1}\mathbb{P}\Big\{\sup_{t\in[0,2\lambda]}\chi^2\big((k+1)\theta-u^{-1/\alpha}t\big)>u,\ \sup_{t\in[0,2\lambda]}\chi^2\big((k+1)\theta+u^{-1/\alpha}t\big)>u\Big\}\\ ={}&\sum_{k=0}^{N(\theta)-1}\Big(\mathbb{P}\Big\{\sup_{t\in[0,2\lambda]}\chi^2\big((k+1)\theta-u^{-1/\alpha}t\big)>u\Big\}+\mathbb{P}\Big\{\sup_{t\in[0,2\lambda]}\chi^2\big((k+1)\theta+u^{-1/\alpha}t\big)>u\Big\}-\mathbb{P}\Big\{\sup_{t\in[-2\lambda,2\lambda]}\chi^2\big((k+1)\theta-u^{-1/\alpha}t\big)>u\Big\}\Big)\\ \sim{}&\sum_{k=0}^{N(\theta)-1}\Big(\big(2\mathcal{H}_\alpha\big[0,2(a_{k+1}+\varepsilon_\theta)^{\frac{1}{\alpha}}d^{-1/\alpha}\lambda\big]-\mathcal{H}_\alpha\big[-2(a_k-\varepsilon_\theta)^{\frac{1}{\alpha}}d^{-1/\alpha}\lambda,\ 2(a_k-\varepsilon_\theta)^{\frac{1}{\alpha}}d^{-1/\alpha}\lambda\big]\big)\sum_{l=0}^{M(u)-1}\mathbb{P}\{\chi^2(0)>u\}\Big)\\ \le{}&Q_2\Big(\sum_{k=0}^{N(\theta)-1}\big((a_k+\varepsilon_\theta)^{\frac{1}{\alpha}}-(a_k-\varepsilon_\theta)^{\frac{1}{\alpha}}\big)\theta\Big)u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}=o\big(u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}\big),\qquad u\to\infty,\ \lambda\to\infty,\ \theta\to 0.\end{aligned}$$

For any $s,t\in[0,T]$ with $|s-t|\ge\theta/2$, we have

$$\mathbb{E}\{X_i(t)X_i(s)\}=r_i(s,t)\le 1-\delta(\theta),$$

where $\delta(\theta)\in(0,1)$ is related to $\theta$. Then we have

$$A_3(u)\le N(\theta)M(u)^2\,\Psi\Big(\frac{2\sqrt{u}-Q_3d}{\sqrt{4-\delta(\theta)}}\Big)\le\frac{T\theta}{\lambda^2}u^{2/\alpha}\,\Psi\Big(\frac{2\sqrt{u}-Q_3d}{\sqrt{4-\delta(\theta)}}\Big)=o\big(u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}\big),\qquad u\to\infty,\ \lambda\to\infty,\ \theta\to 0,$$

where $Q_3$ is a large constant. Finally, by Lemma 4.2, for $u$ and $\lambda$ large enough and $\theta$ small enough,

$$\begin{aligned}A_4(u)&\le\sum_{k=0}^{N(\theta)-1}\Big(\sum_{l=0}^{2M(u)}\sum_{i=2}^{2M(u)}K_u\big(J_{kl}(u),J_{k\,l+i}(u)\big)\Big)\\ &\le Q_6T\lambda\,u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}\Big(\sum_{i=1}^{\infty}\exp\Big(-\frac{Q_5}{8}|i\lambda|^{\alpha}\Big)\Big)\\ &=o\big(u^{1/\alpha}\,\mathbb{P}\{\chi^2(0)>u\}\big),\qquad u\to\infty,\ \lambda\to\infty,\ \theta\to 0.\end{aligned}$$

Combining the above estimates and letting $u\to\infty$, then $\lambda\to\infty$ and $\theta\to 0$, the claim follows.

## 4. Appendix

Proof of Theorem 2.1 For the correlation functions $r_1,r_2$ of $X_1$ and $X_2$, a simple calculation shows that, for $0<t<t+s<1$,

$$1-r_1(t,t+s)=\frac{s}{(t+s)(1-t)}\sim\frac{|s|}{t(1-t)},\qquad 1-r_2(t,t+s)\sim\frac{|s|}{2t(1-t)},\qquad s\to 0,$$

i.e., assumption (1.1) holds with $\alpha=1$, $a_1(t)=\frac{1}{t(1-t)}$ and $a_2(t)=\frac{1}{2t(1-t)}$; moreover, $\eta_u=u^{-1}$ gives $\eta=1$ in (1.2).
Then by Theorem 1.1, the result follows.

###### Lemma 4.1.

Let $X_i(t)$, $t\in[0,T]$, $i=1,\dots,n$, be centered, sample path continuous Gaussian processes with unit variance and correlation functions satisfying assumption (1.1). Suppose that the grid spans $\eta_u>0$ are such that

$$\lim_{u\to\infty}\eta_u u^{1/\alpha}=\eta\in[0,\infty).\tag{4.1}$$

Set $S=S_2-S_1$, $t_0\in[0,T]$, and let $\mathcal{K}_u$, $u>0$, be a family of index sets. Suppose that, for some $\varepsilon_\theta>0$ small enough, there exist constants $a_i>0$ such that $\sup_{t\in[S_1,S_2]}|a_i(t_0+u^{-1/\alpha}kS+u^{-1/\alpha}t)-a_i|\le\varepsilon_\theta$ for all $k\in\mathcal{K}_u$ when $u$ is large enough. Then, for all $k\in\mathcal{K}_u$,

$$\int_{v\in S^{n-1}}\mathcal{H}_\alpha^{\eta}\Big(\sum_{i=1}^{n}v_i^2(a_i-\varepsilon_\theta)\Big)dv\le\lim_{u\to\infty}\frac{\mathbb{P}\Big\{\sup_{t\in[S_1,S_2]\cap\eta_u\mathbb{Z}}\chi^2(t_0+u^{-1/\alpha}kS+u^{-1/\alpha}t)>u\Big\}}{\mathbb{P}\{\chi^2(0)>u\}}\le\int_{v\in S^{n-1}}\mathcal{H}_\alpha^{\eta}\Big(\sum_{i=1}^{n}v_i^2(a_i+\varepsilon_\theta)\Big)dv.$$

Proof of Lemma 4.1 For $u$ large enough, using the duality property of the Euclidean norm ($\sqrt{\chi^2(t)}=\sup_{v\in S^{n-1}}\sum_{i=1}^{n}v_iX_i(t)$), we find

$$\mathbb{P}\Big\{\sup_{t\in[S_1,S_2]\cap\eta_u\mathbb{Z}}\chi^2(t_0+u^{-1/\alpha}kS+u^{-1/\alpha}t)>u\Big\}=\mathbb{P}\Big\{\sup_{(t,v)\in([S_1,S_2]\cap\eta_u\mathbb{Z})\times S^{n-1}}Y_u(t,v)>u^{1/2}\Big\},$$

where $Y_u(t,v)=\sum_{i=1}^{n}v_iX_i(t_0+u^{-1/\alpha}kS+u^{-1/\alpha}t)$. Note that the variance function of $Y_u$ is identically equal to $1$ over $([S_1,S_2]\cap\eta_u\mathbb{Z})\times S^{n-1}$. The field can be represented as

$$Y_u(t,\tilde v)=\sum_{i=2}^{n}v_iX_i(t_0+u^{-1/\alpha}kS+u^{-1/\alpha}t)+\Big(1-\sum_{i=2}^{n}v_i^2\Big)^{1/2}X_1(t_0+u^{-1/\alpha}kS+u^{-1/\alpha}t),\qquad\tilde v=(v_2,\dots,v_n),$$

which is defined on $([S_1,S_2]\cap\eta_u\mathbb{Z})\times\widetilde S^{n-1}$, where

$$\widetilde S^{n-1}=\Big\{\tilde v:\Big(\Big(1-\sum_{i=2}^{n}v_i^2\Big)^{1/2},v_2,\dots,v_n\Big)\in S^{n-1}\Big\}.$$

Furthermore, following the arguments in [32], we conclude that the correlation function $r_u$ of $Y_u$ satisfies, for $u$ large enough,

$$r_u(t,\tilde v,s,\tilde w)\ge 1-u^{-1}\Big(\sum_{i=1}^{n}v_i^2(a_i+\varepsilon_\theta)\Big)|t-s|^{\alpha}-\frac{1+\varepsilon_\theta}{2}\sum_{i=2}^{n}(v_i-w_i)^2,$$
$$r_u(t,\tilde v,s,\tilde w)\le 1-u^{-1}\Big(\sum_{i=1}^{n}v_i^2(a_i-\varepsilon_\theta)\Big)|t-s|^{\alpha}-\frac{1-\varepsilon_\theta}{2}\sum_{i=2}^{n}(v_i-w_i)^2.$$

Then the proof follows by similar arguments as in the proof of [33, Theorem 6.1]. Consequently, we get

$$\mathbb{P}\Big\{\sup_{(t,v)\in([S_1,S_2]\cap\eta_u\mathbb{Z})\times S^{n-1}}Y_u(t,v)>u^{1/2}\Big\}=\mathbb{P}\Big\{\sup_{(t,\tilde v)\in([S_1,S_2]\cap\eta_u\mathbb{Z})\times\widetilde S^{n-1}}Y_u(t,\tilde v)>u^{1/2}\Big\}\le\int_{v\in S^{n-1}}\mathcal{H}_\alpha^{\eta}\Big(\sum_{i=1}^{n}v_i^2(a_i+\varepsilon_\theta)\Big)dv\ \mathbb{P}\{\chi^2(0)>u\},$$

and

$$\mathbb{P}\Big\{\sup_{(t,v)\in([S_1,S_2]\cap\eta_u\mathbb{Z})\times S^{n-1}}Y_u(t,v)>u^{1/2}\Big\}\ge\int_{v\in S^{n-1}}\mathcal{H}_\alpha^{\eta}\Big(\sum_{i=1}^{n}v_i^2(a_i-\varepsilon_\theta)\Big)dv\ \mathbb{P}\{\chi^2(0)>u\}.$$

###### Lemma 4.2.

Assume the same assumptions as in Lemma 4.1. Further, let $a>0$ be such that, for all $s,t\in[0,T]$ and $1\le i\le n$,

$$\frac{a}{2}|t-s|^{\alpha}\le 1-r_i(s,t)\le 2a|t-s|^{\alpha}.$$

Then we can find a constant $Q_5>0$ such that, for all $u$ large enough and all $k\in\mathcal{K}_u$,

where $\varepsilon_0>0$, and

$$\lim_{u\to\infty}\sup_{k\in\mathcal{K}_u}\big|u^{-1/\alpha}kS\big|\le\varepsilon_0.$$

Proof of Lemma 4.2 Throughout this proof, $C_1,C_2,\dots$ are positive constants.
Set $Y_u(t,v)$ as in the proof of Lemma 4.1, which is a centered Gaussian field, and let $u_k$ denote the corresponding threshold.
Below, for sets $\Delta_1,\Delta_2$, denote

$$Y_u(\Delta_1,\Delta_2)=\mathbb{P}\Big\{\sup_{(t,v)\in\Delta_1}Y_u(t,v)>u_k,\ \sup_{(t,v)\in\Delta_2}Y_u(t,v)>u_k\Big\}.$$

We have

$$\begin{aligned}\mathbb{P}\{A_1(u_k),A_2(u_k)\}\ge{}&Y_u\big([T_1,T_1+S]\times S_q^{\delta},\,[T_2,T_2+S]\times S_q^{\delta}\big),\\ \mathbb{P}\{A_1(u_k),A_2(u_k)\}\le{}&Y_u\big([T_1,T_1+S]\times S_q^{\delta},\,[T_2,T_2+S]\times S_q^{\delta}\big)\\ &+Y_u\big([T_1,T_1+S]\times S_q^{\delta},\,[T_2,T_2+S]\times(S_q\setminus S_q^{\delta})\big)\\ &+Y_u\big([T_1,T_1+S]\times(S_q\setminus S_q^{\delta}),\,[T_2,T_2+S]\times S_q^{\delta}\big),\end{aligned}$$

and

$$\begin{aligned}Y_u\big([T_1,T_1+S]\times S_q^{\delta},\,[T_2,T_2+S]\times(S_q\setminus S_q^{\delta})\big)&\le\mathbb{P}\Big\{\sup_{(t,v)\in[T_2,T_2+S]\times(S_q\setminus S_q^{\delta})}Y_u(t,v)>u_k\Big\}\\ &\le\exp\Big(-\frac{(u_k-C_1)^2}{2(d^2-\delta)}\Big)=o\big(\mathbb{P}\{Z(t_0)>u_k\}\big),\end{aligned}$$

as $u\to\infty$, where the second inequality follows from the Borell inequality and the fact that

$$\sup_{(t,v)\in[T_2,T_2+S]\times(S_q\setminus S_q^{\delta})}\mathrm{Var}\big(Y_u(t,v)\big)=\sup_{v\in S_q\setminus S_q^{\delta}}\Big(\sum_{i=1}^{n}d_i^2v_i^2\Big)\le d^2-\delta.$$

Similarly, we have

$$Y_u\big([T_1,T_1+S]\times(S_q\setminus S_q^{\delta}),\,[T_2,T_2+S]\times S_q^{\delta}\big)=o\big(\mathbb{P}\{Z(t_0)>u_k\}\big),\qquad u\to\infty.$$

Then we just need to focus on

$$\Pi(u):=Y_u\big([T_1,T_1+S]\times S_q^{\delta},\,[T_2,T_2+S]\times S_q^{\delta}\big).$$

We split $S_q^{\delta}$ into sets of small diameters, where

 N∗=♯