# Maximum likelihood estimators based on the block maxima method

Clément Dombry
Univ. Bourgogne Franche-Comté,
Laboratoire de Mathématiques de Besançon,
UMR CNRS 6623, 16 route de Gray, 25030 Besançon cedex, France.
Email: clement.dombry@univ-fcomte.fr
and
Ana Ferreira
Instituto Superior Técnico,
Av. Rovisco Pais 1049-001 Lisboa, Portugal.
Email: anafh@tecnico.ulisboa.pt
###### Abstract

The extreme value index is a fundamental parameter in univariate Extreme Value Theory (EVT). It captures the tail behavior of a distribution and is central to extrapolation beyond the observed data. Among other semi-parametric methods (such as the popular Hill estimator), the Block Maxima (BM) and Peaks-Over-Threshold (POT) methods are widely used for assessing the extreme value index and related normalizing constants. We provide asymptotic theory for the maximum likelihood estimators (MLE) based on the BM method. Our main result is the asymptotic normality of the MLE with a non-trivial bias depending on the extreme value index and on the so-called second order parameter. Our approach combines asymptotic expansions of the likelihood process and of the empirical quantile process of block maxima. These results make it possible to complete the comparison of the most common semi-parametric estimators in EVT (MLE and probability weighted moment estimators based on the POT or BM methods) through their asymptotic variances, biases and optimal mean square errors.

Key words: asymptotic normality, block maxima method, extreme value index, maximum likelihood estimator, peaks-over-threshold method, probability weighted moment estimator.

AMS Subject classification: 62G32, 62G20, 62G30.

## 1 Introduction

The Block Maxima (BM) method, also known as the Annual Maxima method after Gumbel [11], is a fundamental and widely used method in Extreme Value Theory. The method is justified under the Maximum Domain of Attraction (MDA) condition: for an independent and identically distributed (i.i.d.) sample with distribution function $F$, if the linearly normalized partial maximum converges in distribution to a non-degenerate limit, then the limit must be a Generalized Extreme Value (GEV) distribution. In practice, one rarely knows $F$ exactly, but the MDA condition holds for most common continuous distributions.

In the BM method, the initial sample is divided into blocks of the same size and the MDA condition ensures that the block maxima are approximately GEV distributed. The method is commonly used in hydrology and other environmental applications or in insurance and finance when analysing extremes - see e.g. the monographs by Embrechts et al. [9], Coles [5], Beirlant et al. [2], de Haan and Ferreira [6] and references therein.

The GEV is a three-parameter distribution, with the usual location and scale parameters, and the extreme value index $\gamma$ being the main parameter as it characterizes the heaviness of the tail. Several estimation methods have been proposed, including the classical maximum likelihood (ML) and probability weighted moments (PWM) estimators (Hosking et al. [13]). The asymptotic study of these estimators has been carried out for a sample from the GEV distribution, and asymptotic normality holds with null bias and explicit variance (Prescott and Walden [15], Hosking et al. [13], Smith [17], Bücher and Segers [4]). The theory is made quite difficult and technical by the fact that the support of the GEV varies with its parameters. Regularity in quadratic mean of the GEV model has been proven only recently by Bücher and Segers [4], and we provide here a different and somewhat simpler proof (cf. Proposition 4.1).

However, in applications, the sample block maxima are only approximately GEV so that the classical parametric theory suffers from model misspecification. In this paper, we intend to fill this gap for ML estimators (MLE), by showing asymptotic normality under a flexible second order condition (a refinement of the MDA condition). Depending on the asymptotic block size, a non trivial bias may appear in the limit for which we provide an exact expression. Recently Ferreira and de Haan [10] showed asymptotic normality of the PWM estimators under the same conditions. They derived a uniform expansion for the empirical quantile of block maxima that is a crucial tool in our approach as well. Indeed, the MLE can be seen as a maximizer of the so-called likelihood process. Expressing the likelihood process in terms of this empirical quantile process, we are able to derive an expansion of the likelihood process that implies the asymptotic normality of the MLE. This derivation is again made quite technical by the fact that the support of the GEV is varying. Note that the asymptotic normality for the MLE of a Fréchet distribution based on the block maxima of a stationary heavy-tailed time series has been obtained by Bücher and Segers [3]. There the issue of parameter dependent supports is avoided but time dependence has to be dealt with. Besides, the ideas underlying their proof are quite different.

The asymptotic normality result in the present paper brings novel results to the theoretical comparison of the main semi-parametric estimation procedures in EVT. On the one hand it permits to compare BM and Peaks-over-Threshold (POT) methods (see e.g. Balkema and de Haan [1], Pickands [14]), the latter being another fundamental method in EVT and concurrent with BM. We discuss and compare the four different approaches – MLE/PWM estimators in the BM/POT approaches – based on exact theoretical formulas for asymptotic variances, biases and optimal mean square errors depending on the extreme value index and the second order parameter. It turns out that MLE under BM has minimal asymptotic variance among all combinations MLE/PWM and BM/POT but, on the other hand it has some significant asymptotic bias. When analysing the asymptotic optimal mean square error that balances variance and bias, the most efficient combination turns out to be MLE under POT (e.g. Drees, Ferreira and de Haan 2004). It turns out that the optimal sample size is larger for POT-MLE than for BM-MLE, giving a theoretical justification to the heuristic that POT allows for a better use of the data than BM.

The outline of the paper is as follows: In Section 2 we present the main theoretical conditions and results including Theorem 2.2 giving the asymptotic normality of the MLE. In Section 3 we present a comparative study of asymptotic variances and biases, optimal asymptotic mean square errors and optimal samples sizes among all combinations MLE/PWM and BM/POT. In Section 4 we state additional theoretical statements, including the local asymptotic normality of MLE under the fully parametric GEV model, and provide all the proofs. Finally, Appendix A gathers some formulas for the information matrix and for the bias of BM-MLE and Appendix B provides useful bounds for the derivatives of the likelihood function that are necessary for the main proofs.

## 2 Asymptotic behaviour of MLE

### 2.1 Framework and notations

The GEV distribution with index $\gamma\in\mathbb{R}$ is defined by

$$G_\gamma(x)=\exp\left(-(1+\gamma x)^{-1/\gamma}\right),\qquad 1+\gamma x>0,$$

and the corresponding log-likelihood by

$$g_\gamma(x)=\begin{cases}-(1+1/\gamma)\log(1+\gamma x)-(1+\gamma x)^{-1/\gamma}&\text{if }1+\gamma x>0,\\[2pt]-\infty&\text{otherwise.}\end{cases}\tag{2.1}$$

For $\gamma=0$, the formula is interpreted as its limit as $\gamma\to0$, namely $g_0(x)=-x-e^{-x}$. The three-parameter model with index $\gamma$, location $\mu$ and scale $\sigma>0$ is defined by the log-likelihood

$$\ell(\theta,x)=g_\gamma\!\left(\frac{x-\mu}{\sigma}\right)-\log\sigma,\qquad \theta=(\gamma,\mu,\sigma).\tag{2.2}$$
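As a quick sanity check of (2.1)-(2.2), here is a minimal implementation of the GEV log-likelihood; it can be compared against `scipy.stats.genextreme`, whose shape convention is $c=-\gamma$. The specific parameter values below are illustrative only.

```python
import numpy as np
from scipy.stats import genextreme

def gev_loglik(x, gamma, mu=0.0, sigma=1.0):
    """Log-likelihood l(theta, x) of (2.1)-(2.2); returns -inf outside the support."""
    z = (x - mu) / sigma
    if gamma == 0.0:
        # limit gamma -> 0: g_0(z) = -z - exp(-z)
        return -z - np.exp(-z) - np.log(sigma)
    t = 1.0 + gamma * z
    if t <= 0:
        return float("-inf")
    return -(1.0 + 1.0 / gamma) * np.log(t) - t ** (-1.0 / gamma) - np.log(sigma)

# scipy's genextreme uses the shape convention c = -gamma
x, gamma, mu, sigma = 0.7, 0.2, 0.1, 1.5
print(gev_loglik(x, gamma, mu, sigma))
print(genextreme.logpdf(x, c=-gamma, loc=mu, scale=sigma))  # same value
```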

A distribution function $F$ is said to belong to the max-domain of attraction of the extreme value distribution $G_{\gamma_0}$, denoted by $F\in\mathcal{D}(G_{\gamma_0})$, if there exist normalizing sequences $a_m>0$ and $b_m$ such that

$$\lim_{m\to+\infty}F^m(a_m x+b_m)=G_{\gamma_0}(x),\qquad\text{for all }x\in\mathbb{R}.$$

The main aim of the BM method is to estimate the extreme value index $\gamma_0$ as well as the normalizing constants $a_m$ and $b_m$. The set-up is the following. Consider i.i.d. random variables $X_1,\dots,X_n$ with common distribution function $F$. Divide the sequence into $k$ blocks of length $m$ and define the $i$-th block maximum by

$$M_{i,m}=\max_{(i-1)m<j\le im}X_j,\qquad i=1,\dots,k.\tag{2.3}$$

For each $m\ge1$, the block maxima $M_{1,m},\dots,M_{k,m}$ are i.i.d. with distribution function $F^m$, and by the max-domain of attraction condition

$$\frac{M_{i,m}-b_m}{a_m}\stackrel{d}{\longrightarrow}G_{\gamma_0}\qquad\text{as }m\to+\infty.\tag{2.4}$$

This suggests that the distribution of $M_{i,m}$ is approximately a GEV distribution with parameters $(\gamma_0,b_m,a_m)$. The method consists in pretending that the sample of block maxima follows exactly the GEV distribution and in maximizing the GEV log-likelihood so as to compute the MLE. A particular feature of the method is that the model is clearly misspecified: the GEV distribution appears as the limit distribution of the block maxima as the block size tends to infinity, while in practice we have to use a finite block size. As seen afterwards, we quantify the misspecification thanks to the so-called second order condition, which implies an asymptotic expansion of the empirical quantile process with a non-trivial bias term. When plugging this expansion into the ML equations, we obtain a bias term for the likelihood process as well as for the MLE.
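The BM recipe described above can be sketched in a few lines. The example below is a hypothetical illustration, not the paper's setup: it draws from a standard Pareto distribution with tail index $2$ (so $\gamma_0=1/2$), forms block maxima, and fits a GEV by maximum likelihood via `scipy.stats.genextreme.fit` (again with scipy's shape convention $c=-\gamma$). The sample and block sizes are arbitrary choices.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# i.i.d. sample from a distribution in the domain of attraction with gamma_0 = 1/2
# (standard Pareto, F(x) = 1 - x^{-2} for x >= 1)
n, m = 100_000, 500                  # illustrative sample size and block size
x = rng.pareto(2.0, size=n) + 1.0    # numpy's pareto is Lomax; +1 gives standard Pareto

k = n // m                           # number of blocks
block_maxima = x[: k * m].reshape(k, m).max(axis=1)

# Maximum likelihood fit of the (misspecified) GEV model to the block maxima.
c_hat, loc_hat, scale_hat = genextreme.fit(block_maxima)
gamma_hat = -c_hat                   # scipy shape convention: c = -gamma
print(gamma_hat)                     # close to the true value 0.5
```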

The (misspecified) log-likelihood of the $k$-sample $(M_{1,m},\dots,M_{k,m})$ is

$$L_{k,m}(\theta)=\sum_{i=1}^{k}\ell(\theta,M_{i,m}),\qquad\theta=(\gamma,\mu,\sigma)\in\Theta=\mathbb{R}\times\mathbb{R}\times(0,+\infty).\tag{2.5}$$

We say that an estimator $\hat\theta=(\hat\gamma,\hat\mu,\hat\sigma)$ is an MLE if it solves the score equations

$$\begin{cases}\dfrac{\partial}{\partial\gamma}L_{k,m}(\gamma,\mu,\sigma)=0\\[4pt]\dfrac{\partial}{\partial\mu}L_{k,m}(\gamma,\mu,\sigma)=0\\[4pt]\dfrac{\partial}{\partial\sigma}L_{k,m}(\gamma,\mu,\sigma)=0,\end{cases}$$

which we write shortly in vectorial notation

$$\frac{\partial L_{k,m}}{\partial\theta}(\theta)=0.\tag{2.6}$$

A main purpose of this paper is to study the existence and asymptotic normality of the MLE under the following conditions:

• First order condition:

$$F\in\mathcal{D}(G_{\gamma_0})\qquad\text{with }\gamma_0>-\tfrac12.$$

Note that the first order condition (2.4) is equivalent to

$$\lim_{m\to\infty}\frac{V(mx)-V(m)}{a_m}=\frac{x^{\gamma_0}-1}{\gamma_0},\qquad x>0,$$

with $V=\left(\frac{1}{1-F}\right)^{\leftarrow}$ the left-continuous inverse of $1/(1-F)$. Without loss of generality, we can take $b_m=V(m)$ in Equation (2.4), which we shall assume in the following.

• Second order condition: for some positive function $a$ and some positive or negative function $A$ with $\lim_{t\to\infty}A(t)=0$,

$$\lim_{t\to\infty}\frac{\dfrac{V(tx)-V(t)}{a(t)}-\dfrac{x^{\gamma_0}-1}{\gamma_0}}{A(t)}=\int_1^x s^{\gamma_0-1}\int_1^s u^{\rho-1}\,du\,ds=H_{\gamma_0,\rho}(x),\qquad x>0,\tag{2.7}$$

with $\rho\le0$. Note that necessarily $F\in\mathcal{D}(G_{\gamma_0})$ and $|A|$ is regularly varying with index $\rho$.
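The double integral defining $H_{\gamma_0,\rho}$ has a simple closed form when $\gamma_0\neq0$, $\rho<0$ and $\gamma_0+\rho\neq0$; the sketch below checks the closed form against direct numerical quadrature for one illustrative parameter choice.

```python
import numpy as np
from scipy.integrate import quad

def H(x, g, r):
    """Closed form of H_{gamma_0, rho}(x) in (2.7), assuming g != 0, r < 0, g + r != 0."""
    return ((x ** (g + r) - 1) / (g + r) - (x ** g - 1) / g) / r

def H_num(x, g, r):
    """Direct numerical evaluation of the double integral in (2.7)."""
    inner = lambda s: (s ** r - 1) / r          # = int_1^s u^{rho-1} du
    val, _ = quad(lambda s: s ** (g - 1) * inner(s), 1, x)
    return val

x, g, r = 3.0, 0.2, -1.0
print(H(x, g, r), H_num(x, g, r))   # the two values agree
```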

• Asymptotic growth condition for the number of blocks $k$ and the block size $m$:

$$k=k_n\to\infty,\quad m=m_n\to\infty\quad\text{and}\quad\sqrt{k}\,A(m)\to\lambda\in\mathbb{R},\qquad\text{as }n\to\infty.\tag{2.8}$$

### 2.2 Main results

Before considering the MLE, we focus on the asymptotic properties of the likelihood and score processes. For the purpose of the asymptotic analysis, we introduce the local parameter $h=(h_1,h_2,h_3)$:

$$\begin{cases}h_1=\sqrt{k}\,(\gamma-\gamma_0)\\ h_2=\sqrt{k}\,(\mu-b_m)/a_m\\ h_3=\sqrt{k}\,(\sigma/a_m-1)\end{cases}\quad\Longleftrightarrow\quad\begin{cases}\gamma=\gamma_0+h_1/\sqrt{k}\\ \mu=b_m+a_m h_2/\sqrt{k}\\ \sigma=a_m(1+h_3/\sqrt{k}).\end{cases}\tag{2.9}$$

Set $\theta_0=(\gamma_0,0,1)$. The local log-likelihood process at $h$ is given by

$$\tilde L_{k,m}(h)=L_{k,m}\!\left(\gamma_0+\frac{h_1}{\sqrt k},\,b_m+\frac{a_m h_2}{\sqrt k},\,a_m+\frac{a_m h_3}{\sqrt k}\right)=\sum_{i=1}^{k}\ell\!\left(\theta_0+\frac{h}{\sqrt k},\frac{M_{i,m}-b_m}{a_m}\right)-k\log(a_m),\tag{2.10}$$

and, the local score process by

$$\frac{\partial\tilde L_{k,m}}{\partial h}(h)=\frac{1}{\sqrt k}\sum_{i=1}^{k}\frac{\partial\ell}{\partial\theta}\!\left(\theta_0+\frac{h}{\sqrt k},\frac{M_{i,m}-b_m}{a_m}\right)=\frac{1}{\sqrt k}\,\frac{\partial L_{k,m}}{\partial\theta}(\theta).\tag{2.11}$$

Clearly, the score equation (2.6) rewrites in this new variable as $\frac{\partial\tilde L_{k,m}}{\partial h}(h)=0$.

In the following, $Q_{\gamma_0}$ denotes the quantile function of the extreme value distribution $G_{\gamma_0}$, i.e.

$$Q_{\gamma_0}(s)=\frac{(-\log s)^{-\gamma_0}-1}{\gamma_0},\qquad s\in(0,1).\tag{2.12}$$
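Formula (2.12) can be checked directly against `scipy.stats.genextreme.ppf` (shape convention $c=-\gamma_0$); the parameter values below are arbitrary.

```python
import numpy as np
from scipy.stats import genextreme

def Q(s, gamma):
    """GEV quantile function (2.12): Q_gamma(s) = ((-log s)^(-gamma) - 1) / gamma."""
    if gamma == 0.0:
        return -np.log(-np.log(s))          # gamma -> 0 limit (Gumbel quantile)
    return ((-np.log(s)) ** (-gamma) - 1.0) / gamma

s, gamma = 0.8, 0.2
print(Q(s, gamma), genextreme.ppf(s, c=-gamma))  # identical values
```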
###### Proposition 2.1.

Assume conditions (2.7) and (2.8). Let $(r_n)$ be a sequence of positive numbers verifying, as $n\to\infty$,

$$r_n=O(k_n^{\delta})\qquad\text{with }0<\delta<\cdots\tag{2.13}$$

Let $H_n$ be the ball of center $0$ and radius $r_n$. Then, uniformly for $h\in H_n$,

$$\frac{\partial^2\tilde L_{k,m}}{\partial h\,\partial h^T}(h)=-I_{\theta_0}+o_P(1)\tag{2.14}$$

with the Fisher information matrix

$$I_{\theta_0}=-\int_0^1\frac{\partial^2\ell}{\partial\theta\,\partial\theta^T}(\theta_0,Q_{\gamma_0}(s))\,ds.\tag{2.15}$$

As a consequence, the local log-likelihood process is strictly concave on $H_n$ with high probability.

###### Remark 2.1.

The conditions in Proposition 2.1 are sufficient for consistency of the MLE, see Dombry [7]; the growth conditions on $k$ and $m$ are required for consistency, and condition (2.13) then ensures that (2.14) holds uniformly on the balls $H_n$.

Our main result is the following theorem establishing the asymptotic behavior of the local likelihood process, from which the existence and asymptotic normality of the MLE will be deduced.

###### Theorem 2.1.

Assume conditions (2.7) and (2.8). Then the local likelihood process satisfies, uniformly for $h$ in compact sets,

$$\tilde L_{k,m}(h)=\tilde L_{k,m}(0)+h^T\tilde G_{k,m}-\tfrac12 h^T I_{\theta_0}h+o_P(1),\tag{2.16}$$
$$\frac{\partial\tilde L_{k,m}}{\partial h}(h)=\tilde G_{k,m}-I_{\theta_0}h+o_P(1),\tag{2.17}$$

where

$$\tilde G_{k,m}=\frac{1}{\sqrt k}\sum_{i=1}^{k}\frac{\partial\ell}{\partial\theta}\!\left(\theta_0,\frac{M_{i,m}-b_m}{a_m}\right)\stackrel{d}{\longrightarrow}\mathcal{N}(\lambda b,\,I_{\theta_0})\tag{2.18}$$

i.e., $\tilde G_{k,m}$ is asymptotically Gaussian with variance equal to the information matrix and mean $\lambda b$ depending on the second order condition (2.7) through

$$b=b(\gamma_0,\rho)=\int_0^1\frac{\partial^2\ell}{\partial x\,\partial\theta}(\theta_0,Q_{\gamma_0}(s))\,H_{\gamma_0,\rho}\!\left(\frac{-1}{\log s}\right)ds\tag{2.19}$$

and on the asymptotic block size through $\lambda$ from (2.8).

###### Remark 2.2.

Explicit formulas for the Fisher information matrix have been given by Prescott and Walden [15] (see also Beirlant et al. [2] page 169). The vector given by the integral representation (2.19) can also be computed explicitly. Formulas are provided in Appendix A.

###### Remark 2.3.

Equation (2.8) requires that both the number of blocks $k$ and the block size $m$ go to infinity, with a relative rate measured by the second order scaling function $A$ and a parameter $\lambda$. When $\lambda=0$, the bias term disappears in (2.18); this corresponds to the situation where $m$ grows to infinity very quickly with respect to $k$, so that the block size is large enough and the GEV approximation (2.4) is very good.

Existence and asymptotic normality of the MLE can be deduced from Theorem 2.1, mainly by the argmax theorem. The concavity property stated in Proposition 2.1 plays an important role in the proof of existence and uniqueness.

###### Theorem 2.2.

Assume conditions (2.7) and (2.8).

1. There exists a sequence of estimators $\hat\theta_n=(\hat\gamma_n,\hat\mu_n,\hat\sigma_n)$, $n\ge1$, such that

$$\lim_{n\to+\infty}P\left[\hat\theta_n\text{ is an MLE}\right]=1\tag{2.20}$$

and

$$\sqrt k\left(\hat\gamma_n-\gamma_0,\;\frac{\hat\mu_n-b_m}{a_m},\;\frac{\hat\sigma_n}{a_m}-1\right)\stackrel{d}{\longrightarrow}\mathcal{N}\!\left(\lambda I_{\theta_0}^{-1}b,\,I_{\theta_0}^{-1}\right).\tag{2.21}$$
2. If $\hat\theta^1_n$, $\hat\theta^2_n$, $n\ge1$, are two sequences of estimators satisfying

$$\lim_{n\to+\infty}P\left[\hat\theta^i_n\text{ is an MLE}\right]=1,\qquad i=1,2,$$

and

$$\lim_{n\to+\infty}P\left[\sqrt k\left(\hat\gamma^i_n-\gamma_0,\;\frac{\hat\mu^i_n-b_m}{a_m},\;\frac{\hat\sigma^i_n}{a_m}-1\right)\in H_n\right]=1,\qquad i=1,2,$$

then $\hat\theta^1_n$ and $\hat\theta^2_n$ are equal with high probability, i.e.

$$\lim_{n\to+\infty}P\left[\hat\theta^1_n=\hat\theta^2_n\right]=1.$$
###### Remark 2.4.

An interesting by-product of the strict concavity stated in Proposition 2.1 is the convergence of the numerical procedures for computing the MLE that are implemented in standard software. The Newton-Raphson algorithm is commonly used to solve the score equation (2.6) numerically. Strict concavity of the objective function on a large neighbourhood of the solution ensures convergence of the algorithm with high probability as soon as the initial value belongs to this neighbourhood.
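A minimal Newton-Raphson sketch, under assumed settings (exact GEV data with $\gamma_0=0.2$, finite-difference derivatives, a starting point inside the concavity neighbourhood): this is an illustration of the iteration discussed above, not the implementation used in standard packages.

```python
import numpy as np
from scipy.stats import genextreme

# Simulated "block maxima": exact GEV data, gamma_0 = 0.2, mu_0 = 0, sigma_0 = 1
data = genextreme.rvs(c=-0.2, size=2000, random_state=0)

def nll(theta):
    """Negative GEV log-likelihood; +inf outside the feasible region."""
    g, mu, sigma = theta
    if sigma <= 0 or g == 0:
        return np.inf
    z = 1.0 + g * (data - mu) / sigma
    if np.any(z <= 0):
        return np.inf
    return np.sum((1.0 + 1.0 / g) * np.log(z) + z ** (-1.0 / g)) + data.size * np.log(sigma)

def grad_hess(f, theta, eps=1e-5):
    """Crude finite-difference gradient and Hessian, enough for a sketch."""
    d = len(theta)
    grad, hess = np.zeros(d), np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d); e[i] = eps
        grad[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
        for j in range(d):
            ej = np.zeros(d); ej[j] = eps
            hess[i, j] = (f(theta + e + ej) - f(theta + e - ej)
                          - f(theta - e + ej) + f(theta - e - ej)) / (4 * eps ** 2)
    return grad, hess

theta = np.array([0.1, 0.1, 1.1])    # initial value near the solution
for _ in range(20):
    g, H = grad_hess(nll, theta)
    step = np.linalg.solve(H, g)     # Newton direction
    t = 1.0
    while nll(theta - t * step) > nll(theta) and t > 1e-8:
        t /= 2                       # backtracking keeps the iterates feasible
    theta = theta - t * step
print(theta)                         # approximately (0.2, 0, 1)
```

The backtracking line search is one simple way to keep the iterates inside the support constraint $1+\gamma(x-\mu)/\sigma>0$.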

## 3 Theoretical comparisons: BM vs POT and MLE vs PWM

The POT method uses observations above some high threshold or top order statistic and the underlying approximate model is the Generalized Pareto distribution (Balkema and de Haan [1], Pickands [14]). Estimators of the shape parameter , as well as location and scale parameters have been proposed and widely studied, including MLE and PWM (Hosking and Wallis [12]). For their asymptotic properties - under basically the same conditions as under BM in Theorem 2.2 - we refer to de Haan and Ferreira [6]. Asymptotic normality of PWM estimators under BM has been established only recently by Ferreira and de Haan [10] and a comparison of PWM estimators under BM and POT has been carried out. The aim of the present section is to include our new asymptotic results for MLE estimators under BM, completing the picture in the comparison of the four different cases BM/POT and MLE/PWM.

Recall that asymptotic normality of the MLE (resp. PWM estimator) holds for $\gamma_0>-1/2$ (resp. $\gamma_0<1/2$). The number of selected observations corresponds to the number of blocks in BM and to the number of selected top order statistics in POT. As in Ferreira and de Haan [10], our comparative study is restricted to the range where the second order conditions for BM and POT are comparable (cf. Drees et al. [8] or Ferreira and de Haan [10]). In the following we compare MLE/PWM under the BM/POT methods through their: (i) asymptotic variances (VAR), (ii) asymptotic biases (BIAS), (iii) optimal asymptotic mean square errors (AMSE) and the optimal number of observations minimizing the AMSE ($k_0$).

#### (i) Asymptotic variances

The asymptotic variance depends on $\gamma_0$ only and is plotted in Figure 1, where solid lines stand for MLE and dashed lines for PWM estimators. Among the four cases, BM-MLE is the one with the smallest variance within its range. Moreover, for both estimators, BM has the lower variance, indicating that BM is preferable to POT as far as variance is concerned.

#### (ii) Asymptotic biases

The asymptotic biases depend on $\gamma_0$ and $\rho$ and are shown in Figures 2-3: POT-MLE has the smaller bias in absolute value when compared to BM-MLE, contrary to what was observed for the variance. This is in agreement with what has been observed when comparing BM-PWM and POT-PWM, also shown in Figures 2-3 and already analysed in Ferreira and de Haan [10]. There is again an indication that the POT method is preferable to BM as far as bias is concerned.

#### (iii) Optimal asymptotic MSEs and optimal number of observations

Another way to compare the estimators, combining both variance and bias information, is through the mean square error. One can compare these at the optimal number of observations, i.e., the value $k_0$ for which the asymptotic mean square error (AMSE) is minimal. As in Ferreira and de Haan [10], under the conditions of Theorem 2.2, we have

$$k_0\sim\frac{n}{(1/s)^{\leftarrow}(n)}\left(\frac{\mathrm{VAR}(\gamma)}{\mathrm{BIAS}^2(\gamma,\rho)}\right)^{1/(1-2\rho)},\qquad n\to\infty,$$

with $s$ a decreasing and regularly varying function determined by the second order auxiliary function $A$. It follows in particular that the optimal $k_0$ is different but of the same order for both estimators and both methods. As for the optimal AMSE, we have

$$\mathrm{AMSE}\sim\frac{1-2\rho}{-2\rho}\,\frac{(1/s)^{\leftarrow}(n)}{n}\left(\mathrm{BIAS}^2(\gamma,\rho)\right)^{1/(1-2\rho)}\left(\mathrm{VAR}(\gamma)\right)^{-2\rho/(1-2\rho)},\qquad n\to\infty.$$

When considering ratios of optimal AMSE (or of optimal numbers of selected observations), the regularly varying function cancels out and the asymptotic ratio does not depend on $n$ but only on $\gamma_0$ and $\rho$.
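The bias-variance trade-off behind $k_0$ can be made concrete in a stylized setting. The sketch below assumes a pure power second order function $A(t)=t^{\rho}$ and placeholder constants VAR and BIAS (all assumptions, not values from the paper); it minimizes $\mathrm{AMSE}(k)=\mathrm{VAR}/k+\mathrm{BIAS}^2A(n/k)^2$ numerically and checks the resulting closed-form minimizer.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Stylized AMSE with A(t) = t^rho: variance decreases in k, squared bias increases.
VAR, BIAS, rho = 0.7, 1.0, -1.0     # illustrative placeholder values

def amse(k, n):
    return VAR / k + BIAS ** 2 * (n / k) ** (2 * rho)

def k0_closed(n):
    # d(AMSE)/dk = 0 gives k0 = (VAR / (-2 rho BIAS^2))^(1/(1-2rho)) * n^(-2rho/(1-2rho))
    return (VAR / (-2 * rho * BIAS ** 2)) ** (1 / (1 - 2 * rho)) * n ** (-2 * rho / (1 - 2 * rho))

n = 10_000
res = minimize_scalar(lambda k: amse(k, n), bounds=(2, n), method="bounded")
print(res.x, k0_closed(n))          # numerical and closed-form minimizers agree
```

In this stylized case $k_0$ grows like $n^{-2\rho/(1-2\rho)}$, i.e. slower than $n$, matching the displayed asymptotics.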

Figure 4 shows the contour plots of the BM/POT ratio of optimal AMSE for the MLE and for the PWM estimators. It is surprising to see a reversed behaviour in the two cases: in the range of parameters considered, POT is preferable when the MLE is used, while BM is mostly preferable for the PWM estimators.

Figure 5 compares the optimal AMSE among all combinations. The green surface corresponds to POT-MLE, which always has the minimal optimal AMSE in the range of parameters considered. Finally, Figure 6 reports, for the MLE, the asymptotic ratio of the optimal numbers of selected observations under POT and BM. We can see that the optimal number of observations is larger for POT, in agreement with the PWM case considered in previous studies.

## 4 Main proofs

We start by introducing some material that will be useful for the proofs. More technical material is still postponed to Appendices.

### 4.1 Local asymptotic normality of the GEV model

If the observations are exactly GEV distributed with parameters $(\gamma_0,\mu_0,\sigma_0)$, then the choice of constants

$$a_m=\sigma_0 m^{\gamma_0}\qquad\text{and}\qquad b_m=\mu_0+\sigma_0\frac{m^{\gamma_0}-1}{\gamma_0}\tag{4.1}$$

ensures that the normalised block maxima $(M_{i,m}-b_m)/a_m$ are i.i.d. with distribution $G_{\gamma_0}$. The issue of model misspecification is irrelevant in that particular case.

In this simple i.i.d. setting, a key property in the theory of ML estimation is differentiability in quadratic mean (see e.g. van der Vaart [18, Chapter 7]). A statistical model defined by a family of densities $(p_\theta)$ is called differentiable in quadratic mean at the point $\theta_0$ if there exists a measurable function $\dot\ell_{\theta_0}$, called the score function, such that

$$\int_{\mathbb{R}}\left[\sqrt{p_{\theta_0+h}(x)}-\sqrt{p_{\theta_0}(x)}-\tfrac12 h^T\dot\ell_{\theta_0}(x)\sqrt{p_{\theta_0}(x)}\right]^2 dx=o(\|h\|^2),\qquad\text{as }h\to0.$$

The following proposition corresponds to Proposition 3.2 in Bücher and Segers [4]. We provide a slightly different proof in the case $-1/2<\gamma_0\le-1/3$.

###### Proposition 4.1.

The three-parameter GEV model with log-likelihood $\ell$ defined in Equation (2.2) is differentiable in quadratic mean at $\theta_0=(\gamma_0,\mu_0,\sigma_0)$ if and only if $\gamma_0>-1/2$. The score function is then given by $\dot\ell_{\theta_0}=\frac{\partial\ell}{\partial\theta}(\theta_0,\cdot)$.

###### Proof of Proposition 4.1.

The density of the three-parameter GEV model is given by

$$p_\theta(x)=\frac1\sigma\left(1+\gamma\frac{x-\mu}{\sigma}\right)^{-1/\gamma-1}\exp\left(-\left(1+\gamma\frac{x-\mu}{\sigma}\right)^{-1/\gamma}\right)$$

if $1+\gamma(x-\mu)/\sigma>0$ and $p_\theta(x)=0$ otherwise. In the case $\gamma_0>-1/3$, the function

$$\theta\in(-1/3,+\infty)\times\mathbb{R}\times(0,+\infty)\mapsto\sqrt{p_\theta(x)}$$

is continuously differentiable for every $x$, and the information matrix is well defined and continuous (see Appendix A or Beirlant et al. [2], page 169). Differentiability in quadratic mean of the GEV model follows by a straightforward application of Lemma 7.6 in van der Vaart [18].

In the case $-1/2<\gamma_0\le-1/3$, the function $\theta\mapsto\sqrt{p_\theta(x)}$ is not differentiable at points $\theta$ such that $1+\gamma(x-\mu)/\sigma=0$. Going back to the definition of differentiability in quadratic mean, we need to show that

$$\lim_{h\to0}\int_{\mathbb{R}}\left[\frac{\sqrt{p_{\theta_0+h}(x)}-\sqrt{p_{\theta_0}(x)}-\tfrac12 h^T\frac{\partial\ell}{\partial\theta}(\theta_0,x)\sqrt{p_{\theta_0}(x)}}{\|h\|}\right]^2 dx=0.\tag{4.2}$$

This is credible because, for all $x$ in the interior of the support of $p_{\theta_0}$, the relation

$$\frac{\partial\sqrt{p_\theta(x)}}{\partial\theta}\Big|_{\theta=\theta_0}=\tfrac12\frac{\partial\ell}{\partial\theta}(\theta_0,x)\sqrt{p_{\theta_0}(x)}$$

entails

$$\lim_{h\to0}\frac{\sqrt{p_{\theta_0+h}(x)}-\sqrt{p_{\theta_0}(x)}-\tfrac12 h^T\frac{\partial\ell}{\partial\theta}(\theta_0,x)\sqrt{p_{\theta_0}(x)}}{\|h\|}=0.$$

For further reference, we note also that, for $x$ in the interior of the support,

$$\frac{\partial^2\sqrt{p_\theta(x)}}{\partial\theta\,\partial\theta^T}\Big|_{\theta=\theta_0}=\frac14\sqrt{p_{\theta_0}(x)}\,\frac{\partial\ell}{\partial\theta}(\theta_0,x)\frac{\partial\ell}{\partial\theta^T}(\theta_0,x)+\frac12\sqrt{p_{\theta_0}(x)}\,\frac{\partial^2\ell}{\partial\theta\,\partial\theta^T}(\theta_0,x).\tag{4.3}$$

A rigorous proof of (4.2) is given below. Since $\gamma_0<0$, we have $\gamma<0$ for $\theta=\theta_0+h$ in a neighbourhood of $\theta_0$, so that the density $p_{\theta_0+h}$ vanishes outside $(-\infty,x_h]$, with $x_h$ the right endpoint of the distribution $p_{\theta_0+h}$. We also introduce $\underline{x}_h=\min(x_0,x_h)$, with $x_0$ the right endpoint of $p_{\theta_0}$. For all $x<\underline{x}_h$, the function $\theta\mapsto\sqrt{p_\theta(x)}$ is twice continuously differentiable, whence the Taylor formula entails

$$\sqrt{p_{\theta_0+h}(x)}-\sqrt{p_{\theta_0}(x)}-\tfrac12 h^T\frac{\partial\ell}{\partial\theta}(\theta_0,x)\sqrt{p_{\theta_0}(x)}=\tfrac12 h^T\left(\frac{\partial^2\sqrt{p_\theta(x)}}{\partial\theta\,\partial\theta^T}\Big|_{\theta=\theta_0+vh}\right)h$$

for some $v\in(0,1)$. Together with Equation (4.3) and Proposition B.1, this yields the upper bound

$$\left[\frac{\sqrt{p_{\theta_0+h}(x)}-\sqrt{p_{\theta_0}(x)}-\tfrac12 h^T\frac{\partial\ell}{\partial\theta}(\theta_0,x)\sqrt{p_{\theta_0}(x)}}{\|h\|}\right]^2\le\frac{1}{32\|h\|^2}\left[p_{\theta_0+vh}(x)\left(h^T\frac{\partial\ell}{\partial\theta}(\theta_0+vh,x)\right)^4+4\,p_{\theta_0+vh}(x)\left(h^T\frac{\partial^2\ell}{\partial\theta\,\partial\theta^T}(\theta_0+vh,x)\,h\right)^2\right]$$
$$\le C\|h\|^2\left[p_{\theta_0+vh}(x)\max\left(z(\theta_0+vh,x)^{\gamma_0-\varepsilon},z(\theta_0+vh,x)^{1+\varepsilon}\right)^4+p_{\theta_0+vh}(x)\max\left(z(\theta_0+vh,x)^{2\gamma_0-\varepsilon},z(\theta_0+vh,x)^{1+\varepsilon}\right)^2\right]$$

for all $x<\underline{x}_h$ and $\|h\|$ small enough. This entails

$$\lim_{h\to0}\int_{-\infty}^{\underline{x}_h}\left[\frac{\sqrt{p_{\theta_0+h}(x)}-\sqrt{p_{\theta_0}(x)}-\tfrac12 h^T\frac{\partial\ell}{\partial\theta}(\theta_0,x)\sqrt{p_{\theta_0}(x)}}{\|h\|}\right]^2 dx=0.\tag{4.4}$$

It remains to estimate the contribution of the integral between $\underline{x}_h$ and $+\infty$. Recall that $p_{\theta_0}$ vanishes for $x>x_0$. We have

$$\frac{1}{\|h\|^2}\int_{\underline{x}_h}^{+\infty}\left[\sqrt{p_{\theta_0+h}(x)}\right]^2 dx=\frac{1}{\|h\|^2}\left[1-G_{\gamma_0+h_1}\!\left(\frac{\underline{x}_h-\mu_0-h_2}{\sigma_0+h_3}\right)\right],$$
$$\frac{1}{\|h\|^2}\int_{\underline{x}_h}^{+\infty}\left[h^T\frac{\partial\ell}{\partial\theta}(\theta_0,x)\sqrt{p_{\theta_0}(x)}\right]^2 dx\le\int_{\underline{x}_h}^{x_0}\left\|\frac{\partial\ell}{\partial\theta}(\theta_0,x)\right\|^2 p_{\theta_0}(x)\,dx.$$

The first and second integrals converge to $0$ as $h\to0$. The third integral converges to $0$ as well because $\underline{x}_h\to x_0$ and the score is square integrable (its covariance matrix is $I_{\theta_0}$). We deduce

$$\lim_{h\to0}\int_{\underline{x}_h}^{+\infty}\left[\frac{\sqrt{p_{\theta_0+h}(x)}-\sqrt{p_{\theta_0}(x)}-\tfrac12 h^T\frac{\partial\ell}{\partial\theta}(\theta_0,x)\sqrt{p_{\theta_0}(x)}}{\|h\|}\right]^2 dx=0.\tag{4.5}$$

Equations (4.4) and (4.5) imply (4.2).

The fact that differentiability in quadratic mean does not hold when $\gamma_0\le-1/2$ is proved in Bücher and Segers [4], Appendix C. They observe that, for $\gamma_0\le-1/2$,

$$\liminf_{h\to0}\|h\|^{-2}\int_{\mathbb{R}}\mathbf{1}\{p_{\theta_0}(x)=0\}\,p_{\theta_0+h}(x)\,dx>0,$$

which rules out differentiability in quadratic mean. We omit further details here. ∎

Differentiability in quadratic mean implies that the score function is centered and has finite variance equal to the information matrix, i.e.

$$\int_{\mathbb{R}}\dot\ell_{\theta_0}(x)\,p_{\theta_0}(x)\,dx=0\qquad\text{and}\qquad\int_{\mathbb{R}}\dot\ell_{\theta_0}(x)\dot\ell_{\theta_0}(x)^T\,p_{\theta_0}(x)\,dx=I_{\theta_0}.\tag{4.6}$$
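The first identity in (4.6) can be checked numerically. The sketch below uses finite-difference scores of `scipy.stats.genextreme.logpdf` at an assumed point $\theta_0=(0.2,0,1)$ and truncates the integration range where the density is numerically zero; it is a sanity check, not part of the proof.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import genextreme

g0 = 0.2   # assumed gamma_0 > -1/2; theta_0 = (0.2, 0, 1)

def score(x, eps=1e-6):
    """Finite-difference score: gradient of log p_theta(x) in (gamma, mu, sigma) at theta_0."""
    def lp(g, mu, s):
        return genextreme.logpdf(x, c=-g, loc=mu, scale=s)   # scipy convention c = -gamma
    return np.array([
        (lp(g0 + eps, 0.0, 1.0) - lp(g0 - eps, 0.0, 1.0)) / (2 * eps),
        (lp(g0, eps, 1.0) - lp(g0, -eps, 1.0)) / (2 * eps),
        (lp(g0, 0.0, 1.0 + eps) - lp(g0, 0.0, 1.0 - eps)) / (2 * eps),
    ])

# Each score component integrates to 0 against the density; the support is
# (-1/g0, +inf) and we truncate where the density is numerically negligible.
vals = []
for i in range(3):
    v, _ = quad(lambda x: score(x)[i] * genextreme.pdf(x, c=-g0), -4.9, 60, limit=200)
    vals.append(v)
print(vals)   # each value close to 0
```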

Another important consequence of differentiability in quadratic mean is the local asymptotic normality property of the local score process. The following Corollary follows from Proposition 4.1 by a direct application of Theorem 7.2 in Van der Vaart [18].

###### Corollary 4.1.

Assume that the observations are i.i.d. GEV distributed with parameters $(\gamma_0,\mu_0,\sigma_0)$, $\gamma_0>-1/2$, and that the constants $a_m$, $b_m$ are given by (4.1). Then the local log-likelihood process (2.10) satisfies

$$\tilde L_{k,m}(h)=\tilde L_{k,m}(0)+h^T\tilde{\tilde G}_{k,m}-\tfrac12 h^T I_{\theta_0}h+o_P(1)$$

where

$$\tilde{\tilde G}_{k,m}=\frac{1}{\sqrt k}\sum_{i=1}^{k}\frac{\partial\ell}{\partial\theta}\!\left(\theta_0,\frac{M_{i,m}-b_m}{a_m}\right)\stackrel{d}{\longrightarrow}\mathcal{N}(0,I_{\theta_0}).$$

Note the similarity between Theorem 2.1 and Corollary 4.1. In Theorem 2.1, however, the $o_P(1)$ term is uniform on compact sets, and the model misspecification results in a bias term $\lambda b$ in the asymptotic distribution of $\tilde G_{k,m}$.

### 4.2 The empirical quantile process associated to BM

The starting point of the proofs of Proposition 2.1 and Theorem 2.1 is to rewrite the local log-likelihood process (2.10) in terms of the (normalized) empirical quantile process

$$Q_{k,m}(s)=\frac{M_{\lceil ks\rceil:k,m}-b_m}{a_m},\qquad 0<s<1,\tag{4.7}$$

where $M_{1:k,m}\le\dots\le M_{k:k,m}$ are the order statistics of the block maxima sample defined by (2.3) and $\lceil x\rceil$ denotes the smallest integer larger than or equal to $x$. The local log-likelihood process (2.10) can be rewritten as

$$\tilde L_{k,m}(h)=k\int_0^1\ell\!\left(\theta_0+\frac{h}{\sqrt k},Q_{k,m}(s)\right)ds.\tag{4.8}$$

Convergence (2.4) ensures the convergence of the empirical quantile process to the "true" quantile function $Q_{\gamma_0}$ defined in (2.12). The following expansion of the empirical quantile process is taken from Ferreira and de Haan [10], Theorem 2.1.
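This convergence is easy to visualize by simulation. The sketch below, an illustration under assumed settings (exact GEV data with $\gamma_0=0.2$, so the normalization is trivial), compares the sorted sample with $Q_{\gamma_0}$ on the central range of $s$; near the endpoints the agreement degrades, which is what the weighting in (4.10) accounts for.

```python
import numpy as np
from scipy.stats import genextreme

# Sample of (already normalized) block maxima: exact GEV data with gamma_0 = 0.2
g0, k = 0.2, 5000
Mn = genextreme.rvs(c=-g0, size=k, random_state=3)   # scipy convention c = -gamma

Q_emp = np.sort(Mn)                          # empirical quantiles M_{ceil(ks):k}
s = (np.arange(1, k + 1) - 0.5) / k
Q_true = ((-np.log(s)) ** (-g0) - 1.0) / g0  # quantile function (2.12)

central = (s > 0.05) & (s < 0.95)
print(np.max(np.abs(Q_emp[central] - Q_true[central])))   # small, O(1/sqrt(k)) scale
```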

###### Proposition 4.2.

Assume conditions (2.7) and (2.8). For a specific choice of the second order auxiliary functions $a$ and $A$ in (2.7),

$$\sqrt k\left(Q_{k,m}(s)-Q_{\gamma_0}(s)\right)=\frac{B_k(s)}{s(-\log s)^{\gamma_0+1}}+\lambda H_{\gamma_0,\rho}\!\left(\frac{-1}{\log s}\right)+R_{k,m}(s)\tag{4.9}$$

where $B_k$, $k\ge1$, denotes an appropriate sequence of standard Brownian bridges and the remainder term satisfies, for every $\varepsilon>0$,

$$R_{k,m}(s)=s^{-1/2-\varepsilon}(1-s)^{-1/2-\gamma_0-\rho-\varepsilon}\,o_P(1)\tag{4.10}$$

uniformly for $s\in(0,1)$.

###### Remark 4.1.

For Proposition 4.2, the auxiliary functions $a$ and $A$ have to be specially chosen so as to establish uniform second order regular variation bounds refining (2.7), see Lemma 4.2 in [10]. However, this choice is useful for the proofs only and is irrelevant for the statements of the main results in Section 2.2.

The following Proposition provides useful technical bounds for the proof of the main results.

###### Proposition 4.3.

Assume conditions (2.7) and (2.8). Then, as $n\to\infty$,

$$(-\log s)^{\gamma_0}\left(1+\left(\gamma_0+\frac{h_1}{\sqrt k}\right)\frac{Q_{k,m}(s)-h_2/\sqrt k}{1+h_3/\sqrt k}\right)=e^{O_P(1)}$$

and

$$(-\log s)^{-1}\left(1+\left(\gamma_0+\frac{h_1}{\sqrt k}\right)\frac{Q_{k,m}(s)-h_2/\sqrt k}{1+h_3/\sqrt k}\right)^{-1/(\gamma_0+h_1/\sqrt k)}=e^{O_P(1)}$$

uniformly for $s\in(0,1)$ and $h\in H_n$ as in Proposition 2.1.

For the proof of Proposition 4.3, we need the following Lemma.

###### Lemma 4.1.

Let $Z_{1:k}\le\dots\le Z_{k:k}$ be the order statistics of $k$ i.i.d. random variables with standard Fréchet distribution. Then,

$$\log\left\{(-\log s)\,Z_{\lceil ks\rceil:k}\right\}=O_P(1)$$

where the $O_P(1)$ term is uniform for $s\in(0,1)$.

###### Proof of Lemma 4.1.

An equivalent statement is, with $U_{\lceil ks\rceil:k}$ the order statistics of $k$ i.i.d. standard uniform random variables,

$$\log\left\{\frac{-\log U_{\lceil ks\rceil:k}}{-\log s}\right\}=O_P(1).$$

We use Shorack and Wellner [16] (inequality 1 on p. 419): for each $\varepsilon>0$ there exists $M>1$ such that, with probability at least $1-\varepsilon$,

$$\frac1M\le\frac{U_{\lceil ks\rceil:k}}{s}\le M\qquad\text{for }\frac{1}{k+1}\le s<1,\tag{4.11}$$
$$\frac1M\le\frac{1-U_{\lceil ks\rceil:k}}{1-s}\le M\qquad\text{for }0<s\le\frac{k}{k+1}.\tag{4.12}$$

Relation (4.11) implies, for $\frac{1}{k+1}\le s<1$,

$$1-\frac{\log M}{-\log s}\le\frac{-\log U_{\lceil ks\rceil:k}}{-\log s}\le1+\frac{\log M}{-\log s}.$$

Both sides are bounded for $s$ bounded away from $1$. Relation (4.12) implies, for $0<s\le\frac{k}{k+1}$,

$$\frac{1-s}{-\log s}\cdot\frac{1-U_{\lceil ks\rceil:k}}{1-s}\le\frac{-\log U_{\lceil ks\rceil:k}}{-\log s}\le\frac{-\log\{1-(1-U_{\lceil ks\rceil:k})\}}{-\log s}\le\frac{1-U_{\lceil ks\rceil:k}}{1-s}\cdot\frac{1}{U_{\lceil ks\rceil:k}}\cdot\frac{1-s}{-\log s}.$$

Both sides are bounded for $s$ bounded away from $0$. ∎

###### Proof of Proposition 4.3.

Let $Z$ be a unit Fréchet random variable, i.e. with distribution function $e^{-1/z}$, $z>0$, and let $Z_{1:k}\le\dots\le Z_{k:k}$ be the order statistics of the associated i.i.d. sample of size $k$. Note that the block maxima are approximately distributed as $V(mZ)$. From Lemma 4.2 in [10],

$$1+\gamma\,\frac{V(mZ_{\lceil ks\rceil:k})-b_m}{\sigma}=1+\gamma\,\frac{a^0_m}{\sigma}\cdot\frac{V(mZ_{\lceil ks\rceil:k})-b_m}{a^0_m}$$

is bounded (above and below) by

$$Z_{\lceil ks\rceil:k}^{\gamma_0}+\left(\gamma\frac{a^0_m}{\sigma}-\gamma_0\right)\frac{Z_{\lceil ks\rceil:k}^{\gamma_0}-1}{\gamma_0}+\gamma\frac{a^0_m}{\sigma}A_0(m)\,H_{\gamma_0,\rho}(Z_{\lceil ks\rceil:k})\pm\varepsilon\,Z_{\lceil ks\rceil:k}^{\gamma_0+\rho\pm\delta}A_0(m)$$

for each $\varepsilon,\delta>0$, provided $m$ and $k$ are large enough. Hence,

$$(-\log s)^{\gamma_0}\left\{1+\gamma\frac{a^0_m}{\sigma}\left(\frac{V(mZ_{\lceil ks\rceil:k})-b_m}{a^0_m}-\frac{\mu-b_m}{a^0_m}\right)\right\}$$

is bounded (above and below) by,

 ((−logs)Z