Sliding Blocks Estimators

Inference for heavy tailed stationary time series based on sliding blocks

Axel Bücher Ruhr-Universität Bochum, Fakultät für Mathematik, Universitätsstr. 150, 44780 Bochum, Germany  and  Johan Segers Université catholique de Louvain, Institut de Statistique, Biostatistique et Sciences Actuarielles, Voie du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium
July 25, 2019
Abstract.

The block maxima method in extreme value theory consists of fitting an extreme value distribution to a sample of block maxima extracted from a time series. Traditionally, the maxima are taken over disjoint blocks of observations. Alternatively, the blocks can be chosen to slide through the observation period, yielding a larger number of overlapping blocks. Inference based on sliding blocks is found to be more efficient than inference based on disjoint blocks. The asymptotic variance of the maximum likelihood estimator of the Fréchet shape parameter is reduced by more than 18%. Interestingly, the amount of the efficiency gain is the same whatever the serial dependence of the underlying time series: as for disjoint blocks, the asymptotic distribution depends on the serial dependence only through the sequence of scaling constants. The findings are illustrated by simulation experiments and are applied to the estimation of high return levels of the daily log-returns of the Standard & Poor’s 500 stock market index.

Key words: Apéry’s constant; block maxima; Fréchet distribution; maximum likelihood estimator; Marshall–Olkin distribution; Pickands dependence function; return level.

1. Introduction

Two major paradigms in extreme value theory are the block maxima method and the peaks-over-threshold method. The former, more traditional one consists of fitting an extreme value distribution to a sample of block maxima extracted from a (perhaps latent) underlying sample. The latter method consists of fitting a generalized Pareto distribution to the excesses in a sample over a high threshold.

Although the peaks-over-threshold method has become the standard one, there has been renewed interest recently in the block maxima method, and more specifically in its asymptotic properties. The set-up is that of a triangular array of block maxima extracted from a stationary time series. The block size, r, tends to infinity as the sample size, n, tends to infinity, in such a way that the number of (disjoint) blocks, approximately n/r, tends to infinity as well. The underlying sequence of random variables can be independent (Ferreira and de Haan, 2015; Dombry, 2015; Dombry and Ferreira, 2017) or can exhibit serial dependence (Bücher and Segers, 2014, 2018).

Usually, maxima are taken over disjoint blocks of observations. For instance, for a sequence of daily observations, one may extract weekly, monthly, quarterly or yearly maxima, see, e.g., McNeil (1998) and Longin (2000) or Katz et al. (2002) for applications in finance or hydrology, respectively. Alternatively, one can slide a block or window of a given size through the sample and consider the corresponding maxima. Obviously, the blocks will be overlapping and thus dependent, even if the underlying sequence of random variables is independent. Still, as soon as the underlying sequence is stationary, then so are the sliding block maxima. Moreover, the sample of sliding block maxima carries more information than the sample of disjoint block maxima, which suggests the possibility of more accurate inference. Robert et al. (2009), Northrop (2015) and Berghaus and Bücher (2016) applied this idea to the estimation of the extremal index, a summary measure for the strength of serial dependence between extremes. They found that estimators based on sliding blocks were indeed more efficient than their counterparts based on disjoint blocks.

Here, we investigate the potential benefits of using maxima over sliding blocks rather than over disjoint blocks for fitting extreme value distributions. More precisely, we seek the asymptotic distribution of the maximum (quasi-)likelihood estimator for the shape and scale parameters of a Fréchet distribution. The likelihood is computed as if the sliding block maxima are independent, although they are not, as blocks may overlap.

The solution is based on Theorem 2.5 in Bücher and Segers (2018), which states high-level conditions for the consistency and asymptotic normality of the maximum likelihood estimator of the Fréchet parameter vector based on a general triangular array of dependent random variables. The biggest challenge is the computation of the estimator’s asymptotic covariance matrix. In the course of the computations, we find new formulas for moments of pairs of jointly max-stable random variables in terms of their Pickands dependence function. These formulas are then applied to the bivariate Marshall–Olkin distribution, which describes the joint asymptotic distribution of the two maxima over a pair of overlapping blocks. The Marshall–Olkin parameter is a function of the proportion of overlap between the two blocks.

We find that the maximum likelihood estimator based on sliding blocks is more efficient than the maximum likelihood estimator based on disjoint blocks. For the estimator of the Fréchet shape parameter, the reduction in asymptotic variance is more than 18%. Remarkably, this number does not depend on the serial dependence of the underlying stationary time series, in accordance with the findings for disjoint blocks in Bücher and Segers (2018). Moreover, the efficiency gain carries over to the estimation of high return levels. The asymptotic results are confirmed in numerical experiments. We illustrate the method by estimating high quantiles of quarterly maxima of daily log-returns of the S&P 500 index. The Monte Carlo simulations reveal another benefit of using sliding blocks: it makes the estimator more stable as a function of the block size.

The maximum likelihood estimator is defined and its asymptotic distribution is stated in Section 2. Estimation of high return levels is considered in Section 3, both theoretically and through a case study, followed by the results of a Monte Carlo simulation experiment in Section 4. The proofs are given in Section 5 while the covariance calculations are deferred to Appendix A.

2. Inference based on sliding blocks

2.1. The maximum likelihood estimator based on sliding block maxima

Let (X_t)_{t∈Z} be a strictly stationary time series: for any k ∈ N and for any h ∈ Z, the distribution of (X_1, …, X_k) is the same as the distribution of (X_{1+h}, …, X_{k+h}). Further, let P_{α,σ} denote the Fréchet distribution with shape parameter α > 0 and scale parameter σ > 0, given by its distribution function P_{α,σ}(x) = exp{−(x/σ)^{−α}} for x > 0. We assume the following maximum domain-of-attraction condition. The arrow ⇝ denotes convergence in distribution.

Condition 2.1 (Max-domain of attraction).

For some α0 ∈ (0, ∞), there exists a sequence (σ_r)_{r∈N}, regularly varying at infinity with index 1/α0, such that

  max(X_1, …, X_r)/σ_r ⇝ P_{α0,1},  r → ∞.

Regular variation of the sequence (σ_r)_r with index 1/α0 means that σ_{⌊rξ⌋}/σ_r → ξ^{1/α0} as r → ∞ for every ξ > 0, where ⌊·⌋ denotes the integer part. A sufficient condition is that the common univariate distribution of the variables X_t is in the max-domain of attraction of the Fréchet distribution (see Section 2.3) and that the extremal index of the time series exists and is positive (Leadbetter, 1983).

Suppose we observe a finite stretch X_1, …, X_n of the time series. For some integer r = r_n ∈ {1, …, n}, let

  M_{r,t} = M_{t:(t+r−1)} = max{X_t, …, X_{t+r−1}},  t = 1, …, n−r+1,

denote the maximum over the r successive observations starting at time point t. The sequence (M_{r,t})_{t=1,…,n−r+1} is referred to as the sequence of sliding block maxima. In contrast, the classical block maxima method in extreme value statistics is based on the sequence of disjoint block maxima (M_{r,(h−1)r+1})_{h=1,…,m_n}, where m_n = ⌊n/r⌋. The common big blocks/small blocks heuristics suggests that the latter sequence may be regarded as asymptotically independent and Fréchet distributed. Any sensible estimator within the statistical model {P_{α,σ} : α, σ ∈ (0, ∞)} is hence a sensible estimator when applied to the sample of disjoint block maxima as well.
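The two samples of maxima are straightforward to extract in code; the following is a minimal NumPy sketch (the function names are ours, not from the paper):

```python
import numpy as np

def sliding_block_maxima(x, r):
    """All n - r + 1 maxima over windows of length r sliding through x."""
    x = np.asarray(x, dtype=float)
    return np.array([x[t:t + r].max() for t in range(len(x) - r + 1)])

def disjoint_block_maxima(x, r):
    """Maxima over the floor(n/r) consecutive non-overlapping blocks of length r."""
    x = np.asarray(x, dtype=float)
    m = len(x) // r
    return x[:m * r].reshape(m, r).max(axis=1)
```

For a series of length n, the sliding variant returns n − r + 1 strongly overlapping maxima, the disjoint variant only ⌊n/r⌋.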

Unfortunately, this idea cannot be directly transferred to the sample of sliding block maxima, as that sequence is certainly not asymptotically independent, not even for an underlying iid time series. Still, the sample of sliding block maxima is stationary and the asymptotic distribution of a single such block maximum is Fréchet. We can therefore estimate the Fréchet parameters by moment matching, for instance. Being based on empirical moments only, the maximum likelihood estimator for independent sampling from the Fréchet distribution (model {P_{α,σ} : α, σ ∈ (0, ∞)}) is a case in point.

Existence and uniqueness of the maximum likelihood estimator in the model {P_{α,σ} : α, σ ∈ (0, ∞)} is studied in Section 2.1 of Bücher and Segers (2018). The estimator is defined as

  (α̂_n, σ̂_n) = argmax_{(α,σ)∈(0,∞)²} Σ_{t=1}^{n−r+1} ℓ_{α,σ}(X_{n,t}), (2.1)

where ℓ_{α,σ}(x) = log(α/σ) − (1+α) log(x/σ) − (x/σ)^{−α} is the contribution of an observation at x > 0 to the Fréchet log-likelihood of the parameter vector (α, σ) and where

  X_{n,t} = M_{r,t} ∨ c (2.2)

with an arbitrary truncation constant c > 0. The reason for the left-truncation is that otherwise some of the block maxima could be zero or negative. Asymptotically, the truncation constant does not matter thanks to Condition 2.2 below. In practice, it should be chosen as small as possible, e.g., equal to the square root of the machine precision. By Lemma 2.1 in Bücher and Segers (2018), the maximizer exists and is unique as soon as the values X_{n,1}, …, X_{n,n−r+1} are not all equal, which our conditions will guarantee to occur with probability tending to one. Note that the likelihood is constructed as if the sliding block maxima were independent, although they are not, not even if the underlying sequence is independent, since some blocks overlap. Therefore, the estimator may be more accurately referred to as a maximum quasi-likelihood estimator.
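The maximization can be sketched in code. For the Fréchet family the scale can be profiled out: a routine calculation shows that, for fixed α, the log-likelihood is maximized at σ(α) = (n_b/Σ_t z_t^{−α})^{1/α} (n_b the number of block maxima z_t), and substituting back reduces (2.1) to a one-dimensional root-finding problem for the profile score. The sketch below is our own implementation, not the authors' code:

```python
import numpy as np

def frechet_mle(maxima, trunc=1e-8):
    """Maximum quasi-likelihood fit of Frechet(alpha, sigma) to a sample of
    (sliding or disjoint) block maxima, left-truncated at `trunc` as in (2.2).
    For fixed alpha the likelihood is maximized at
    sigma(alpha) = (n_b / sum(z**-alpha))**(1/alpha); plugging this back in,
    alpha solves 1/alpha + sum(w*log z)/sum(w) - mean(log z) = 0, w = z**-alpha."""
    z = np.maximum(np.asarray(maxima, dtype=float), trunc)
    logz = np.log(z)

    def profile_score(a):
        w = z ** (-a)
        return 1.0 / a + (w * logz).sum() / w.sum() - logz.mean()

    lo, hi = 1e-4, 1.0
    while profile_score(hi) > 0.0:   # expand the bracket to the right
        hi *= 2.0
    for _ in range(200):             # bisection; the profile score is decreasing
        mid = 0.5 * (lo + hi)
        if profile_score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    alpha = 0.5 * (lo + hi)
    sigma = (len(z) / (z ** (-alpha)).sum()) ** (1.0 / alpha)
    return alpha, sigma
```

Applied to the output of a sliding-window maximum, this yields the estimator (α̂_n, σ̂_n) from (2.1).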

2.2. Asymptotic normality

For our main result, we need a couple of additional conditions. First of all, let (r_n)_n be an integer sequence tending to infinity such that r_n = o(n). The next condition is a slight adaptation of Condition 3.2 in Bücher and Segers (2018).

Condition 2.2 (All block maxima of size ⌊rn/2⌋ diverge).

For every c > 0, the probability of the event that all disjoint block maxima of size ⌊r_n/2⌋ are larger than c converges to 1.

The condition in fact implies that the probability of the event that all sliding block maxima of size r_n are larger than c converges to 1 as well. It therefore guarantees that the left-truncation in (2.2) does not matter asymptotically. In Section 2.3, the condition will be shown to hold for iid time series, provided the block sizes are not too small.

The following three conditions are Conditions 3.3, 3.4 and 3.5 in Bücher and Segers (2018). The alpha-mixing coefficients of the sequence (X_t)_{t∈Z} are defined as

  α(k) = sup{ |P(A∩B) − P(A)P(B)| : A ∈ σ(X_j, j ≤ i), B ∈ σ(X_{j+k}, j ≥ i), i ∈ Z },

for k ∈ N.

Condition 2.3 (α-Mixing with rate).

We have α(k) → 0 as k → ∞. Moreover, there exists ω > 0 such that

  lim_{n→∞} (n/r_n)^{1+ω} α(r_n) = 0.
Condition 2.4 (Moments).

There exists ν > 2/ω, with ω from Condition 2.3, such that

  limsup_{r→∞} E[ g_{ν,α0}( (M_{r,1} ∨ 1)/σ_r ) ] < ∞, (2.3)

where g_{ν,α0}(x) = { x^{−α0} 1(x ≤ e) + log(x) 1(x > e) }^{2+ν}.

An elementary argument shows that if Condition 2.4 holds, then (2.3) continues to hold with M_{r,1} ∨ 1 replaced by M_{r,1} ∨ c, for arbitrary c > 0.

Let P_{α0} = P_{α0,1} and write Pf = ∫ f dP_{α0} for a real-valued function f on (0, ∞). Further, let

  f1(x) = x^{−α0} log(x),  f2(x) = x^{−α0},  f3(x) = log(x) (2.4)

and note that, by Lemma B.1 in Bücher and Segers (2018),

  Pf1 = −α0^{−1} Γ′(2) = α0^{−1}(γ − 1),  Pf2 = Γ(2) = 1,  Pf3 = −α0^{−1} Γ′(1) = α0^{−1} γ,

where Γ and Γ′ denote the gamma function and its derivative, respectively, while γ = −Γ′(1) ≈ 0.5772 denotes the Euler–Mascheroni constant.
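These moment identities are easy to check by simulation; the sketch below does so for α0 = 1, in which case Z^{−α0} is standard exponential when Z follows P_{α0,1}:

```python
import numpy as np

# Monte Carlo check of Pf1, Pf2, Pf3 for alpha0 = 1: if Z follows the
# standard Frechet law P_{1,1}, then E = Z**-1 is standard exponential.
gamma_e = 0.5772156649015329              # Euler-Mascheroni constant
rng = np.random.default_rng(42)
z = (-np.log(rng.uniform(size=2_000_000))) ** -1.0   # Frechet(1, 1) sample

pf1 = np.mean(z ** -1.0 * np.log(z))      # target: (gamma - 1)/alpha0
pf2 = np.mean(z ** -1.0)                  # target: 1
pf3 = np.mean(np.log(z))                  # target: gamma/alpha0
```

With two million draws the empirical means agree with the displayed values to about three decimals.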

Condition 2.5 (Bias).

There exists B = (B1, B2, B3)′ ∈ R³ such that, for j ∈ {1, 2, 3},

  lim_{n→∞} √(n/r_n) ( E[ f_j(X_{n,1}/σ_{r_n}) ] − P f_j ) = B_j,

with f1, f2, f3 as defined in (2.4).

As an interesting consequence of Theorem 2.6 below, the limit B can be seen not to depend on the constant c under the conditions of the theorem.

Finally, with k_n = n − r_n + 1 the number of sliding blocks, we define the empirical process based on the left-truncated sliding block maxima X_{n,t} in (2.2) as

  G_n f = √(n/r_n) ( k_n^{−1} Σ_{t=1}^{k_n} f(X_{n,t}/σ_{r_n}) − Pf ).

Recall the maximum (quasi-)likelihood estimator in (2.1). The following theorem is the main result of this paper.

Theorem 2.6.

Suppose Conditions 2.1–2.5 are met. Then, for any c > 0 and with probability tending to one, there exists a unique maximizer (α̂_n, σ̂_n) of the Fréchet log-likelihood based on the left-truncated sliding block maxima X_{n,t} = M_{r_n,t} ∨ c, see (2.1), and we have, as n → ∞,

  √m_n ( α̂_n − α0 , σ̂_n/σ_{r_n} − 1 )′ = M(α0) ( G_n f1, G_n f2, G_n f3 )′ + o_P(1) ⇝ N₂( M(α0) B, Σ(α0) ).

Here, m_n = n/r_n, B is as in Condition 2.5 and

  M(α0) = (6/π²) ( α0²  α0(1−γ)  −α0² ; γ−1  −(Γ″(2)+1)/α0  1−γ ),
  Σ(α0) = M(α0) Σ_Y M(α0)′ ≈ ( 0.4946 α0²  −0.3236 ; −0.3236  0.9578 α0^{−2} ),

with Σ_Y as in Corollary 5.4 and equation (5.8).

It is interesting to note that the limiting covariance matrix is substantially smaller than for the estimator based on disjoint blocks, see Theorem 3.6 in Bücher and Segers (2018):

  I^{−1}(α0, 1) = (6/π²) ( α0²  γ−1 ; γ−1  α0^{−2}{(1−γ)² + π²/6} ) ≈ ( 0.6080 α0²  −0.2570 ; −0.2570  1.1087 α0^{−2} ).

The improvement is independent of the value of α0 and of the serial dependence of the time series (e.g., of any of the characteristics like the extremal index or the cluster distribution). In particular, the quotients of the asymptotic variances for the shape and scale parameters are 0.4946/0.6080 ≈ 0.81 and 0.9578/1.1087 ≈ 0.86, respectively.

More generally, the delta method implies that the asymptotic distribution of a tail quantity that can be written as a smooth function of (α0, σ_{r_n}) will be normal with asymptotic variance equal to β′ I^{−1}(α0, 1) β (disjoint blocks) or β′ Σ(α0) β (sliding blocks), where β is a vector of partial derivatives depending on the estimator. The ratio of asymptotic variances is thus equal to

  { β′ Σ(α0) β } / { β′ I^{−1}(α0, 1) β }. (2.5)

By the method of Lagrange multipliers, this ratio can be found to attain its minimum and maximum at the smallest and largest generalized eigenvalues of Σ(α0) relative to I^{−1}(α0, 1), that is, the eigenvalues of Σ(α0) I(α0, 1). Independently of α0, these eigenvalues are approximately equal to 0.64 and 0.94, respectively, and provide precise lower and upper bounds to the ratio in (2.5).
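The variance ratios and the eigenvalue bounds can be reproduced numerically from the rounded matrix entries displayed above; the sketch below does so at α0 = 1 (the ratios do not depend on α0):

```python
import numpy as np

# Approximate asymptotic covariance matrices at alpha0 = 1, rounded entries
# taken from the displayed formulas.
Sigma = np.array([[0.4946, -0.3236],      # sliding blocks
                  [-0.3236, 0.9578]])
Iinv = np.array([[0.6080, -0.2570],       # disjoint blocks (inverse Fisher info)
                 [-0.2570, 1.1087]])

# componentwise variance ratios for the shape and scale estimators
ratio_shape = Sigma[0, 0] / Iinv[0, 0]
ratio_scale = Sigma[1, 1] / Iinv[1, 1]

# extremes of the ratio (2.5) over all beta: the generalized eigenvalues of
# Sigma relative to Iinv, i.e. the eigenvalues of Iinv^{-1} Sigma
bounds = np.sort(np.linalg.eigvals(np.linalg.solve(Iinv, Sigma)).real)
```

Any delta-method variance ratio for a smooth tail functional falls between the two computed bounds.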

Should the underlying time series be positive, one might be tempted to omit the left-truncation introduced above (which amounts to setting c = 0). Working out the asymptotic theory is possible, but at the cost of more complicated conditions. Even if X_1 > 0 almost surely, the lower tail of the block maximum M_{r,1} in a neighbourhood of zero must not be too heavy for the moment and bias conditions to be true without truncation. In practice, left-truncation at a suitably small constant appears to be non-restrictive anyway, whence we do not pursue this issue any further.

2.3. Sliding block maxima extracted from an iid sequence

If the sequence (X_t)_t is iid, then all conditions can be expressed in terms of the univariate, marginal distribution function F. Condition 2.1 is equivalent to regular variation of the function −log F at infinity with index −α0, that is,

  lim_{u→∞} { −log F(ux) } / { −log F(u) } = x^{−α0},  x ∈ (0, ∞). (2.6)

The scaling sequence can be chosen as σ_r = a_r, where a_r = F^←(e^{−1/r}).

To control the bias (Condition 2.5), we need to reinforce the regular variation in (2.6) to second-order regular variation of the function −log F, together with a growth restriction on the block size sequence (r_n)_n. The following condition is identical to Condition 4.1 in Bücher and Segers (2018, Section 4); see also Remark 4.3 therein for additional context. For τ ∈ R, define h_τ on (0, ∞) by

  h_τ(x) = ∫₁ˣ y^{τ−1} dy = { (x^τ − 1)/τ  if τ ≠ 0;  log(x)  if τ = 0. }
Condition 2.7 (Second-Order Condition).

There exist α0 ∈ (0, ∞), ρ ≤ 0, and a real function A on (0, ∞) of constant, non-zero sign such that lim_{u→∞} A(u) = 0 and such that, for all x ∈ (0, ∞),

  lim_{u→∞} (1/A(u)) ( { −log F(ux) } / { −log F(u) } − x^{−α0} ) = x^{−α0} h_ρ(x). (2.7)

Let ψ = Γ′/Γ denote the digamma function. For ρ ≤ 0, define the bias function

  B(α0, ρ) = −(6/π²) ( b1(|ρ|/α0) , b2(|ρ|/α0)/α0² )′, (2.8)

where

  b1(x) = (1+x) Γ(x) { γ + ψ(1+x) }  if x > 0,  and  b1(0) = π²/6,

and

  b2(x) = −π²/(6x) + (1+x) Γ(x) { Γ″(2) + γ + (γ−1) ψ(1+x) }  if x > 0,  and  b2(0) = 0.

The graphs of these functions are depicted in Figure 1 in Bücher and Segers (2018).

Theorem 2.8.

Let X_1, X_2, … be independent random variables with common distribution function F satisfying Condition 2.7. Let the block sizes r_n be such that r_n → ∞ and r_n = o(n) as n → ∞ and assume that

  lim_{n→∞} √m_n A(a_{r_n}) = λ ∈ R. (2.9)

Then, for any c > 0 and with probability tending to one, there exists a unique maximizer (α̂_n, σ̂_n) of the Fréchet log-likelihood based on the left-truncated sliding block maxima X_{n,t} = M_{r_n,t} ∨ c, and we have

  √m_n ( α̂_n − α0 , σ̂_n/σ_{r_n} − 1 )′ ⇝ N₂( λ B(α0, ρ), Σ(α0) ),  n → ∞,

with Σ(α0) as in Theorem 2.6 and B(α0, ρ) as in (2.8).

Compared to the estimator based on disjoint block maxima (Bücher and Segers, 2018, Theorem 4.2), the asymptotic bias is the same, but the asymptotic variance is smaller, as explained after Theorem 2.6.

3. Application to return level estimation

3.1. Estimator

Let F_r denote the distribution function of the block maximum M_{r,1}. For T > 1, the T-return level of the sequence of disjoint block maxima is defined as the (1 − 1/T)-quantile of F_r, that is,

  RL(T, r) = F_r^←(1 − 1/T) = inf{ x ∈ R : F_r(x) ≥ 1 − 1/T }.

Since disjoint blocks are asymptotically independent, it will take on average T disjoint blocks of size r until the first such block whose maximum exceeds RL(T, r).

By Condition 2.1, for large r, we may approximate F_r by P_{α0,σ_r}, the cdf of the Fréchet distribution with shape parameter α0 and scale parameter σ_r. The quantile function of the Fréchet family is given by P^←_{α,σ}(p) = σ(−log p)^{−1/α} for p ∈ (0, 1). A reasonable estimator of RL(T, r_n) is therefore

  RL̂_n(T, r_n) = σ̂_n b_T^{−1/α̂_n},  b_T = −log(1 − 1/T).
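The plug-in estimator is a one-liner in code; a sketch taking arbitrary fitted parameters as inputs:

```python
import numpy as np

def return_level(alpha_hat, sigma_hat, T):
    """Plug-in T-block return level: the (1 - 1/T) quantile of the fitted
    Frechet(alpha_hat, sigma_hat) distribution."""
    b_T = -np.log1p(-1.0 / T)     # b_T = -log(1 - 1/T)
    return sigma_hat * b_T ** (-1.0 / alpha_hat)
```

For instance, with quarterly blocks, return_level(alpha, sigma, 40.0) estimates the level exceeded by a quarterly maximum on average once every 40 quarters.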

Also, let ˜RL(T, r) = σ_r b_T^{−1/α0} denote the return level obtained from the Fréchet approximation with the true parameters.

Corollary 3.1.

In addition to the conditions in Theorem 2.6, assume the bias condition

  Λ_n(T) = √m_n ( ˜RL(T, r_n)/RL(T, r_n) − 1 ) → Λ(T) ∈ R,  n → ∞.

Then, as n → ∞,

  √m_n ( RL̂_n(T, r_n)/RL(T, r_n) − 1 ) = β(T, α0)′ · √m_n ( α̂_n − α0 , σ̂_n/σ_{r_n} − 1 )′ + Λ_n(T) + o_P(1) ⇝ N( β(T, α0)′ M(α0) B + Λ(T), β(T, α0)′ Σ(α0) β(T, α0) ),

where β(T, α0) = ( α0^{−2} log(b_T) , 1 )′.

The same result holds true for the disjoint blocks estimator, but with Σ(α0) replaced by I^{−1}(α0, 1), the inverse of the Fisher information matrix for the Fréchet family. In Table 1, the asymptotic variances are given for various values of T and α0.

3.2. Case study

We consider daily log-returns on the S&P 500 index (data downloaded from Yahoo Finance) in the fifty-year period from 1967 to 2016. For such a long period, the log-returns can hardly be modelled by a stationary time series, and therefore we consider ten-year periods instead. We consider a block size corresponding to approximately one quarter, yielding 40 disjoint quarters in a ten-year period. In light of the results in McNeil (1998) and Longin (2000), such a block size guarantees that the block maxima are approximately Fréchet distributed, while at the same time the effective sample size of 40 disjoint blocks is large enough to guarantee a reasonably small variance of the estimators. We estimate the (1 − 1/T)-quantile of the distribution of the quarterly maximum using the sliding block maxima and we check whether this level is exceeded by the quarterly maximum immediately following the ten-year training sample. We then let the ten-year window roll through the whole fifty-year period in steps of one quarter, giving 160 estimated return levels in total, of which, on average, 160/T should be exceeded. We consider several values of T, for which one would thus expect 160/T exceedances, respectively. Doing the calculations separately for the positive and negative log-returns, we find exceedances for the wins and exceedances for the losses.

The results are shown in Figure 1. The black lines depict the quarters following each ten-year training period, from 1977:Q1 up to 2016:Q4. The estimated return levels are represented by colored lines; for instance, the value of the red line at 1977:Q1 corresponds to the estimated quantile of the cdf of the quarterly block maxima, estimated from the data in the period 1967–1976. The dots correspond to exceedances of the estimated return levels.

In Figure 2, we show estimated values and confidence intervals (based on the normal approximation and ignoring the bias) for the Fréchet shape and scale parameters. The black lines are the same as in Figure 1, up to an affine transformation. The impact of the occurrence of large block maxima is clearly visible (decrease in α̂_n, increase in σ̂_n).

4. Simulation Study

We compare the performance of the disjoint and sliding blocks variants of the maximum likelihood estimator of the Fréchet shape parameter α0. We also compare the results with the Hill estimator (Hill, 1975), which is the maximum likelihood estimator for the one-parameter Pareto distribution given by P(X > x) = x^{−α}, x ≥ 1, fitted to the relative excesses over a high threshold. To put the estimators on an equal footing, we set the threshold equal to the (m_n + 1)-th largest order statistic, of which there are m_n excesses, where m_n is the number of disjoint blocks of size r_n and acts as an effective sample size.
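The competing Hill estimator admits a compact implementation; a minimal sketch (our own code, not the authors'):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index alpha: reciprocal of the average log
    relative excess of the k largest observations over the (k+1)-th largest."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]   # decreasing order
    return 1.0 / np.mean(np.log(xs[:k] / xs[k]))
```

In the comparison above, k would be set to the number of disjoint blocks m_n.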

We consider two different dependence scenarios: iid sequences and a max-autoregressive (ARMAX) model

 Xt=max{βXt−1,(1−β)Zt},t∈Z,

with parameter β ∈ [0, 1) and where (Z_t)_t are nonnegative iid random variables whose distribution is in the max-domain of attraction of the Fréchet distribution with shape parameter α0. The process is strictly stationary and admits the causal representation X_t = max_{j≥0} β^j (1−β) Z_{t−j}. Rescaled block maxima of (X_t)_t converge weakly to the same Fréchet distribution as for the innovations, but with a scaling sequence depending on β. For the simulations, we fix the value of β and use a burn-in period to arrive at approximate stationarity.
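The ARMAX recursion is straightforward to simulate; a sketch with a burn-in period (the function name and the default burn-in length are our choices):

```python
import numpy as np

def simulate_armax(innovations, beta, burn_in=200):
    """Iterate X_t = max(beta * X_{t-1}, (1 - beta) * Z_t) from X_0 = 0 and
    drop the first `burn_in` values to reach approximate stationarity."""
    x = np.empty(len(innovations))
    prev = 0.0
    for t, z in enumerate(innovations):
        prev = max(beta * prev, (1.0 - beta) * z)
        x[t] = prev
    return x[burn_in:]
```

Feeding it iid Fréchet, Pareto or absolute-t innovations reproduces the dependence scenarios of the simulation study.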

We consider three different choices for the iid case and for the innovation distribution of the max-autoregressive model: the Fréchet distribution itself, the Pareto distribution with shape parameter α0, and the absolute value of the Student t-distribution with α0 degrees of freedom.

The mean squared error, squared bias and variance for the estimators of α0 are shown in Figure 3. We consider a fixed sample size n, a range of block sizes r for the blocks estimators (i.e., with effective sample sizes m ranging up to 500) and corresponding numbers of upper order statistics for the Hill estimator. The results are based on a large number of Monte Carlo repetitions.

The bias-variance trade-off is clearly visible: a small number of blocks m (large block size r) yields a large variance but a small bias, while increasing m (decreasing r) decreases the variance but potentially increases the bias (with some exceptions for the Hill estimator in the ARMAX model). The appearance of the bias curves in the iid scenarios may be explained as follows: the Hill estimator is the maximum likelihood estimator in the iid Pareto model, while the blocks estimators are maximum (quasi-)likelihood estimators in the iid Fréchet model. The absolute t-distribution is in between. The size of the bias is ordered accordingly.

In all cases, the sliding blocks estimator is more accurate than the disjoint blocks estimator due to its smaller variance.

Finally, another advantage of the sliding blocks estimator is that its trajectories fluctuate less as a function of the block size r, as illustrated in Figure 4.

5. Proofs and auxiliary results

The proof of Theorem 2.6 is based on a sequence of auxiliary lemmas. Let Z_{r,t} = M_{r,t}/σ_r, with the sequence r = r_n as specified in the subsequent lemmas. All convergences will be for n → ∞, if not stated otherwise. Throughout the proofs, we will write r = r_n, k = k_n, m = m_n, etc.

Lemma 5.1 (Joint weak convergence of sliding block maxima).

Suppose that Condition 2.1 is met and that there exists an integer sequence (ℓ_n)_n such that ℓ_n → ∞, ℓ_n = o(r_n) and α(ℓ_n) → 0 as n → ∞. Then, for any ξ ≥ 0 and any x, y ∈ (0, ∞), we have, as n → ∞,

  lim_{n→∞} P( Z_{r_n,1} ≤ x, Z_{r_n,1+⌊r_nξ⌋} ≤ y ) = G_{α0,ξ}(x, y) = { exp{ −ξ x^{−α0} − (1−ξ)(x∧y)^{−α0} − ξ y^{−α0} }  if 0 ≤ ξ ≤ 1;  exp( −x^{−α0} − y^{−α0} )  if ξ ≥ 1. } (5.1)

Surprisingly, the joint limiting law does not depend on any quantities related to the serial dependence of the time series (like the extremal index). The two margins of the limit distribution are always P_{α0,1}. For ξ ≥ 1, the two block maxima are asymptotically independent. For 0 ≤ ξ < 1, the limiting distribution is bivariate max-stable with margins P_{α0,1} and with Pickands dependence function (Pickands, 1981) equal to

  A_ξ(w) = ξ + (1−ξ){ w ∨ (1−w) },  w ∈ [0, 1]. (5.2)

If (Z1, Z2) ∼ G_{α0,ξ} with 0 ≤ ξ ≤ 1, then (Z1^{−α0}, Z2^{−α0}) is a pair of unit exponential random variables with joint survival function

  P( Z1^{−α0} ≥ s, Z2^{−α0} ≥ t ) = exp{ −ξs − (1−ξ)(s∨t) − ξt } = exp{ −(s+t) A_ξ( s/(s+t) ) }. (5.3)

This is the Marshall–Olkin distribution (Marshall and Olkin, 1967) with dependence parameter 1 − ξ. The distribution depends on ξ in such a way that the dependence increases as ξ decreases, i.e., as the overlap between the two blocks increases.
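For simulation purposes, pairs with survival function (5.3) can be generated by the classical three-shock construction of the Marshall–Olkin distribution; a sketch for 0 < ξ < 1 (function name ours):

```python
import numpy as np

def rmarshall_olkin(xi, size, rng):
    """Draw pairs (U, V) with Marshall-Olkin survival function
    P(U >= s, V >= t) = exp(-xi*s - xi*t - (1 - xi)*max(s, t)),
    the limit law (5.3) of overlapping block maxima, for 0 < xi < 1.
    Construction: two individual exponential shocks of rate xi and one
    common shock of rate 1 - xi; each component is killed by the earlier
    of its own shock and the common shock."""
    e1 = rng.exponential(scale=1.0 / xi, size=size)
    e2 = rng.exponential(scale=1.0 / xi, size=size)
    e3 = rng.exponential(scale=1.0 / (1.0 - xi), size=size)
    return np.minimum(e1, e3), np.minimum(e2, e3)
```

Since P(min(E1, E3) ≥ s, min(E2, E3) ≥ t) = exp(−ξs)exp(−ξt)exp(−(1−ξ)(s∨t)), the construction matches (5.3) exactly, and both margins are unit exponential.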

Proof of Lemma 5.1.

Write r = r_n, ℓ = ℓ_n and α = α0. Since r = r_n → ∞ as n → ∞, we may state the subsequent convergences in terms of r → ∞. By Condition 2.1,

  lim_{r→∞} σ_{⌊rξ⌋}/σ_r = ξ^{1/α}

for any ξ > 0. As a consequence,

  P( M_{1:⌊rξ⌋} ≤ σ_r x ) = P{ M_{1:⌊rξ⌋} ≤ σ_{⌊rξ⌋} (σ_r/σ_{⌊rξ⌋}) x } → exp( −ξ x^{−α} ),  r → ∞.

Consider the case 0 < ξ < 1. We will show below that

  P( Z_{r,1} ≤ x, Z_{r,1+⌊rξ⌋} ≤ y ) = P{ M_{1:⌊rξ⌋} ≤ σ_r x, M_{(⌊rξ⌋+1):r} ≤ σ_r (x∧y), M_{(r+1):(r+⌊rξ⌋)} ≤ σ_r y }
  = P{ M_{1:⌊rξ⌋} ≤ σ_r x } P{ M_{(⌊rξ⌋+1):r} ≤ σ_r (x∧y) } P{ M_{(r+1):(r+⌊rξ⌋)} ≤ σ_r y } + o(1) (5.4)

as n → ∞. As a consequence of the previous two displays and by strict stationarity, we obtain (5.1) for 0 < ξ < 1. The case ξ ≥ 1 can be treated similarly, while the case ξ = 0 is trivial. It remains to show (5.4).

As a consequence of Lemma A.8 in Bücher and Segers (2018), we have

  lim_{n→∞} [ P( M_{1:(r′−ℓ)} ≤ σ_r x ) − P( M_{1:r′} ≤ σ_r x ) ] = 0

for any sequence r′ = r′_n such that r′_n/r_n is bounded away from 0 and infinity; note that Condition 3.1 in that paper follows from our assumption that (σ_r)_r is regularly varying. Applying this result four times and using the fact that α(ℓ_n) → 0, we may write the expression in the middle line of (5.4) as

  P{ M_{1:(⌊rξ⌋−ℓ)} ≤ σ_r x, M_{(⌊rξ⌋+1):(r−ℓ)} ≤ σ_r (x∧y), M_{(r+1):(r+⌊rξ⌋)} ≤ σ_r y } + o(1)
  = P{ M_{1:(⌊rξ⌋−ℓ)} ≤ σ_r x } P{ M_{(⌊rξ⌋+1):(r−ℓ)} ≤ σ_r (x∧y) } P{ M_{(r+1):(r+⌊rξ⌋)} ≤ σ_r y } + o(1)
  = P{ M_{1:⌊rξ⌋} ≤ σ_r x } P{ M_{(⌊rξ⌋+1):r} ≤ σ_r (x∧y) } P{ M_{(r+1):(r+⌊rξ⌋)} ≤ σ_r y } + o(1)

as n → ∞, which proves (5.4). ∎

Lemma 5.2 (Asymptotic covariances of functions of sliding block maxima).

Suppose Conditions 2.1 and 2.4 are met and that there exists an integer sequence (ℓ_n)_n such that ℓ_n → ∞, ℓ_n = o(r_n) and α(ℓ_n) → 0 as n → ∞. Then, for any ξ ≥ 0 and any pair of measurable functions f, g on (0, ∞) which are continuous almost everywhere and satisfy

  ( |f(x)| ∨ |g(x)| )² ≤ g_{η,α0}(x) = { x^{−α0} 1(x ≤ e) + log(x) 1(x > e) }^{2+η}

for some η < ν, with ν from Condition 2.4, we have

  lim_{n→∞} Cov( f(Z_{r_n,1}), g(Z_{r_n,1+⌊r_nξ⌋}) ) = Cov_{α0,ξ}( f(Z1), g(Z2) ),

where the right-hand side denotes the covariance computed under the assumption that (Z1, Z2) ∼ G_{α0,ξ} as in (5.1).

Proof of Lemma 5.2.

The result is a simple consequence of Lemma 5.1, the Cauchy–Schwarz inequality and Example 2.21 in van der Vaart (1998). ∎

Lemma 5.3.

Suppose Conditions 2.1, 2.3 and 2.4 are met. Then, for any pair of measurable functions f, g on (0, ∞) which are continuous almost everywhere and satisfy

  ( |f(x)| ∨ |g(x)| )² ≤ g_{η,α0}(x) = { x^{−α0} 1(x ≤ e) + log(x) 1(x > e) }^{2+η}

for some η < ν, with ν from Condition 2.4, we have

  σ_{f,g} = lim_{n→∞} Cov( G_n f, G_n g ) = 2 ∫₀¹ Cov_{α0,ξ}( f(Z1), g(Z2) ) dξ.
Proof of Lemma 5.3.

Let (ℓ_n)_n be an integer sequence such that ℓ_n → ∞, ℓ_n = o(r_n) and α(ℓ_n) → 0 as n → ∞; such a sequence exists by Condition 2.3, so that the results of Lemmas 5.1 and 5.2 become available.

For h ∈ {1, …, m_n}, let I_h = {(h−1)r_n + 1, …, h r_n} denote the set of indices making up the h-th disjoint block of size r_n. For simplicity assume that m_n = n/r_n is an integer. Then, we may write

  (1/k_n) Σ_{t=1}^{k_n} f(Z_{r,t}) = (1/k_n) Σ_{h=1}^{m_n} A_h,   (1/k_n) Σ_{t=1}^{k_n} g(Z_{r,t}) = (1/k_n) Σ_{h=1}^{m_n} B_h,

where A_h = Σ_{t∈I_h} f(Z_{r,t}) and B_h = Σ_{t∈I_h} g(Z_{r,t}). As a consequence,

  Cov( G_n f, G_n g ) = (m_n/k_n²) ( m_n Cov(A1, B1) + Σ_{h=1}^{m_n−1} (m_n − h) { Cov(A1, B_{1+h}) + Cov(B1, A_{1+h}) } )
  = (m_n²/k_n²) Cov(A2, B1 + B2 + B3) − (m_n/k_n²) Cov(A2, B1 + B3)
   + (m_n²/k_n²) Σ_{h=2}^{m_n−1} (1 − h/m_n) { Cov(A1, B_{1+h}) + Cov(B1, A_{1+h}) }. (5.5)

Let us proceed by showing that

  lim_{n→∞} S_n = 2 ∫₀¹ Cov_{α0,ξ}( f(Z1), g(Z2) ) dξ,  where  S_n = (1/r_n²) Cov(A2, B1 + B2 + B3). (5.6)

For that purpose, define functions g_{n1} and g_{n2} on the positive half-line by

  g_{n1}(ξ) = Cov( f(Z_{r,1}), g(Z_{r,1+⌊rξ⌋}) ),   g_{n2}(ξ) = Cov( f(Z_{r,1+⌊rξ⌋}), g(Z_{r,1}) ).

We may then write

  (1/r_n²) Cov(A2, B2) = (1/r_n²) Σ_{s=1}^{r_n} Σ_{t=1}^{r_n} Cov( f(Z_{r,s}), g(Z_{r,t}) )
   = (1/r_n) Σ_{h=0}^{r_n−1} (1 − h/r_n) g_{n1}(h/r_n) + (1/r_n) Σ_{h=1}^{r_n−1} (1 − h/r_n) g_{n2}(h/r_n),
  (1/r_n²) Cov(A2, B3) = (1/r_n²) Σ_{s=1}^{r_n} Σ_{t=r_n+1}^{2r_n} Cov( f(Z_{r,s}), g(Z_{r,t}) )
   = (1/r_n) Σ_{h=1}^{r_n−1} (h/r_n) g_{n1}(h/r_n) + (1/r_n) Σ_{h=r_n}^{2r_n−1} (2 − h/r_n) g_{n1}(h/r_n),
  (1/r_n²) Cov(A2, B1) = (1/r_n²) Σ_{s=r_n+1}^{2r_n} Σ_{t=1}^{r_n} Cov( f(Z_{r,s}), g(Z_{r,t}) )
   = (1/r_n) Σ_{h=1}^{r_n−1} (h/r_n) g_{n2}(h/r_n) + (1/r_n) Σ_{h=r_n}^{2r_n−1} (2 − h/r_n) g_{n2}(h/r_n).

As a consequence of the three previous formulas,

  S_n = ∫₀¹ { g_{n1}(ξ) + g_{n2}(ξ) } dξ + R_n,

where the remainder satisfies

  |R_n| ≤ (1/r_n) |g_{n1}(0)| + 2 ∫₁² { |g_{n1}(ξ)| + |g_{n2}(ξ)| } dξ.

Note that by Lemma 5.2, g_{nj}(ξ) → Cov_{α0,ξ}( f(Z1), g(Z2) ) for j ∈ {1, 2} and every ξ ≥ 0. In particular, the limit is zero for ξ ≥ 1. Hence, we obtain both R_n → 0 and (5.6) by dominated convergence.

Let us finally show that the sum on the right-hand side of (5.5) is negligible. The lemma then follows from (5.5) and (5.6). For that purpose, start by considering the sum over those summands for which . In this case, the observations making up