
FUNCTIONAL NONPARAMETRIC ESTIMATION OF CONDITIONAL EXTREME QUANTILES

Laurent GARDES, Stéphane GIRARD and Alexandre LEKINA

Team Mistis, INRIA Rhône-Alpes and LJK,

655, avenue de l’Europe, Montbonnot

38334 Saint-Ismier cedex, France.

Laurent.Gardes@inrialpes.fr

Abstract We address the estimation of quantiles from heavy-tailed distributions when functional covariate information is available and in the case where the order of the quantile converges to one as the sample size increases. Such "extreme" quantiles can be located in the range of the data or near and even beyond the boundary of the sample, depending on the convergence rate of their order to one. Nonparametric estimators of these functional extreme quantiles are introduced, their asymptotic distributions are established and their finite sample behavior is investigated.

Keywords Conditional quantile, extreme-values, nonparametric estimation, functional data.

AMS Subject classifications 62G32, 62G05, 62E20.

## 1 Introduction

A large literature is dedicated to the estimation of extreme quantiles, i.e. quantiles of order $\alpha_n$ with $\alpha_n$ tending to zero as the sample size $n$ increases. The most popular estimator was proposed by Weissman [28], in the context of heavy-tailed distributions, and adapted to Weibull-tail distributions in [10, 19]. We also refer to [11] for the general case.
In a lot of applications, some covariate information is recorded simultaneously with the quantity of interest. For instance, in climatology one may be interested in the estimation of return periods associated to extreme rainfall as a function of the geographical location. The extreme quantile then depends on the covariate and is referred to in the sequel as the conditional extreme quantile. Parametric models for conditional extremes are proposed in [9, 27], whereas semi-parametric methods are considered in [1, 22]. Fully non-parametric estimators were first introduced in [8], where a local polynomial modelling of the extreme observations is used. Similarly, spline estimators are fitted in [7] through a penalized maximum likelihood method. In both cases, the authors focus on univariate covariates and on the finite sample properties of the estimators. These results are extended in [2], where local polynomial estimators are proposed for multivariate covariates and where their asymptotic properties are established.

Besides, covariates may be curves in many situations coming from applied sciences such as chemometrics (see Section 5 for an illustration) or astrophysics [3]. However, the estimation of conditional extreme quantiles with functional covariates has not been addressed yet. Two statistical fields are involved in this study. On the one hand, nonparametric smoothing techniques adapted to functional data are required in order to deal with the covariate. We refer to [6, 17, 24, 25] for overviews of this literature. We propose here to select the observations to be used in the conditional quantile estimator by a moving window approach. On the other hand, once this selection is achieved, extreme-value methods are used to estimate the conditional quantile; see [13] for a comprehensive treatment of extreme-value methodology in various frameworks.

Whereas no parametric assumption is made on the functional covariate, we assume that the conditional distribution is heavy-tailed. This semi-parametric assumption amounts to supposing that the conditional survival function decreases at a polynomial rate. To estimate the conditional quantile, we focus on three different situations. In the first one, the convergence of the order $\alpha$ of the quantile to zero is slow enough so that the quantile is located in the range of the data. In the second situation, the quantile is located near the boundary of the sample. Finally, in the third situation, the convergence of $\alpha$ to zero is sufficiently fast so that the quantile may be beyond the boundary of the sample. This situation is clearly the most difficult one since an extrapolation outside the range of the sample is needed to achieve the estimation.

Nonparametric estimators are defined in Section 2 for each situation. Their asymptotic distributions are derived in Section 3. Some examples are provided in Section 4 and an illustration on spectrometric data is given in Section 5. Proofs are postponed to Section 6.

## 2 Estimators of conditional extreme quantiles

Let $E$ be a (finite or infinite dimensional) metric space associated to a metric $d$. Let us denote by $F(\cdot,x)$ the conditional cumulative distribution function of a real random variable $Y$ given the covariate value $x\in E$ and by $q(\alpha,x)$ the associated conditional quantile of order $\alpha$ defined by

$$F(q(\alpha,x),x) = 1-\alpha,$$

for all $\alpha\in(0,1)$ and $x\in E$. In this paper, we focus on the case where, for all $x\in E$, $F(\cdot,x)$ is the cumulative distribution function of a heavy-tailed distribution. In such a situation, the conditional quantile satisfies, for all $\lambda>0$,

$$\lim_{\alpha\to 0}\frac{q(\lambda\alpha,x)}{q(\alpha,x)} = \lambda^{-\gamma(x)}, \qquad (1)$$

where $\gamma(x)$ is an unknown positive function of the covariate $x$, referred to as the conditional tail index. Loosely speaking, the conditional survival function decreases towards 0 at a polynomial rate driven by $\gamma(x)$. The function $\alpha\mapsto q(\alpha,x)$ is said to be regularly varying at 0 with index $-\gamma(x)$, and this property characterizes heavy-tailed distributions. We refer to [5] for a general account of regular variation theory and to paragraph 4.2 for some examples of distributions satisfying (1).

Given a sample $Y_1,\ldots,Y_n$ of independent observations recorded at design points $x_1,\ldots,x_n\in E$, our aim is to build point-wise estimators of conditional quantiles. More precisely, for a given $t\in E$, we want to estimate $q(\alpha,t)$, focusing on the case where the design points are non random. To this end, for all $r>0$, let us denote by $B(t,r)$ the ball centered at point $t$ and with radius $r$ defined by

$$B(t,r) = \{x\in E,\ d(x,t)\le r\}$$

and let $h_t$ be a positive sequence tending to zero as $n$ goes to infinity. The proposed estimator uses a moving window approach since it is based on the response variables $Y_i$ for which the associated covariates $x_i$ belong to the ball $B(t,h_t)$. The proportion of such design points is thus defined by

$$\varphi(h_t) = \frac{1}{n}\sum_{i=1}^{n}\mathbb{I}\{x_i\in B(t,h_t)\}$$

and plays an important role in this study. It describes how the design points concentrate in the neighborhood of $t$ when $h_t$ goes to zero, similarly to what the small ball probability does; see for instance the monograph on functional data analysis [17]. Thus, the nonrandom number of observations in the slice $B(t,h_t)$ is given by $m_t = n\varphi(h_t)$. Let $Z_1(t),\ldots,Z_{m_t}(t)$ be the response variables for which the associated covariates belong to the ball $B(t,h_t)$ and let $Z_{1,m_t}(t)\le\cdots\le Z_{m_t,m_t}(t)$ be the corresponding order statistics.
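For concreteness, the moving-window selection step can be sketched in Python as follows; the function name `select_slice`, the toy scalar covariates and the Pareto-type responses are our own illustrative assumptions, not part of the paper's setup:

```python
import numpy as np

def select_slice(X, Y, t, h, dist):
    """Keep the responses Y_i whose covariates x_i lie in the ball B(t, h),
    and return their order statistics together with the proportion phi(h)."""
    in_ball = np.array([dist(x, t) <= h for x in X])
    phi = in_ball.mean()                  # proportion of design points in B(t, h)
    Z = np.sort(np.asarray(Y)[in_ball])   # Z_{1,m} <= ... <= Z_{m,m}
    return Z, phi

# toy illustration: scalar covariates, heavy-tailed (Pareto) responses
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 200)
Y = rng.pareto(2.0, 200) + 1.0
Z, phi = select_slice(X, Y, t=0.5, h=0.1, dist=lambda a, b: abs(a - b))
m = len(Z)  # m_t = n * phi(h_t)
```

With functional covariates, `dist` would be replaced by a semi-metric, such as the second-derivative semi-metric used in Section 5.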

In this paper, we focus on the estimation of a conditional "extreme" quantile of order $\alpha_{m_t}$. Here, the word "extreme" means that $\alpha_{m_t}$ tends to zero as $n$ goes to infinity, making kernel based estimators [15] not adapted. In the sequel, three situations are considered:

• (S.1) $\alpha_{m_t}\to 0$ and $m_t\,\alpha_{m_t}\to\infty$,

• (S.2) $\alpha_{m_t}\to 0$, $m_t\,\alpha_{m_t}\to c\in(0,\infty)$ and $\lfloor m_t\alpha_{m_t}\rfloor\to c'\in\mathbb{N}\setminus\{0\}$,

• (S.3) $\alpha_{m_t}\to 0$ and $m_t\,\alpha_{m_t}\to c\in[0,1)$,

where $\lfloor u\rfloor$ denotes the largest integer not exceeding $u$. Let us highlight that, in the unconditional case, situations (S.1) and (S.3) have already been examined by Dekkers and de Haan [11], the extreme case being considered in [21], Theorem 5.1. A summary of their results can be found in [13], Theorem 6.4.14 and Theorem 6.4.15. In situation (S.1), $\alpha_{m_t}$ goes to 0 more slowly than $1/m_t$ and the point-wise estimation of the conditional extreme quantile relies on an interpolation inside the sample since, from Proposition 2 below, $q(\alpha_{m_t},t)$ is eventually almost surely smaller than the maximal observation in the slice $B(t,h_t)$. In such a situation, we propose to estimate $q(\alpha_{m_t},t)$ by:

$$\hat{q}_1(\alpha_{m_t},t) = Z_{m_t-\lfloor m_t\alpha_{m_t}\rfloor+1,\,m_t}(t). \qquad (2)$$

In the intermediate situation (S.2), estimator (2) can still be used since, for $n$ large enough, $\lfloor m_t\alpha_{m_t}\rfloor = c'$, and thus the estimation relies on a conditional extreme value of the sample. Let us note that, if $c$ is not an integer, then $m_t\alpha_{m_t}\to c$ implies $\lfloor m_t\alpha_{m_t}\rfloor\to\lfloor c\rfloor$. Otherwise, if $c$ is an integer, then the condition $\lfloor m_t\alpha_{m_t}\rfloor\to c'$ is necessary to prevent the sequence $\lfloor m_t\alpha_{m_t}\rfloor$ from having two adherence values and thus from oscillating. In situation (S.3), $\alpha_{m_t}$ goes to 0 at the same speed as or faster than $1/m_t$ and the conditional extreme quantile $q(\alpha_{m_t},t)$ is eventually larger than the maximal observation $Z_{m_t,m_t}(t)$ with positive probability. Thus, its estimation is more difficult since it requires an extrapolation outside the sample. We propose in this case to estimate $q(\alpha_{m_t},t)$ by:

$$\hat{q}_2(\alpha_{m_t},t) = \hat{q}_1(\beta_{m_t},t)\left(\beta_{m_t}/\alpha_{m_t}\right)^{\hat{\gamma}_n(t)} = Z_{m_t-\lfloor m_t\beta_{m_t}\rfloor+1,\,m_t}(t)\left(\beta_{m_t}/\alpha_{m_t}\right)^{\hat{\gamma}_n(t)}, \qquad (3)$$

where $\beta_{m_t}$ is a sequence satisfying (S.1) and $\hat{\gamma}_n(t)$ is a point-wise estimator of the conditional tail index $\gamma(t)$. Such estimators have been proposed both in the finite dimensional setting [2] and in the general case [20]; see also paragraph 4.1 for some examples. Note that (3) is an adaptation of the Weissman estimator [28] to the case where covariate information is available. The extrapolation is achieved thanks to the multiplicative term $(\beta_{m_t}/\alpha_{m_t})^{\hat{\gamma}_n(t)}$, whose magnitude is driven by the estimated tail index $\hat{\gamma}_n(t)$. As expected, the extrapolation is all the more important as the tail is heavy.
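Under the convention that the slice responses are sorted in ascending order, estimators (2) and (3) can be sketched as follows; this is a minimal illustration, and the guard forcing the use of at least one upper order statistic is our own assumption:

```python
import numpy as np

def q1_hat(Z, alpha):
    """Within-sample estimator (2): the order statistic Z_{m - floor(m*alpha) + 1, m}.
    Z is assumed sorted in ascending order."""
    m = len(Z)
    k = max(int(np.floor(m * alpha)), 1)  # guard: fall back to the maximum Z_{m,m}
    return Z[m - k]                       # 0-based index of Z_{m-k+1,m}

def q2_hat(Z, alpha, beta, gamma_hat):
    """Weissman-type extrapolation (3): anchor the estimate at the intermediate
    level beta (satisfying (S.1)) and extrapolate by (beta/alpha)^gamma_hat."""
    return q1_hat(Z, beta) * (beta / alpha) ** gamma_hat

# sanity check on a deterministic "sample" of size m = 100
Z = np.arange(1.0, 101.0)
inside = q1_hat(Z, 0.1)                          # Z_{91,100}
outside = q2_hat(Z, 0.001, 0.1, gamma_hat=1.0)   # extrapolation beyond the sample
```

The heavier the assumed tail (larger `gamma_hat`), the larger the extrapolation factor, in line with the remark above.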

## 3 Main results

We first give some notations and conditions useful to establish the asymptotic distributions of our estimators. In the sequel, we fix $t\in E$ and we assume:

• (A) The conditional quantile function

$$\alpha\in(0,1)\mapsto q(\alpha,t)\in(0,+\infty)$$

is differentiable, and the function $\Delta(\cdot,t)$ defined by

$$\alpha\in(0,1)\mapsto \Delta(\alpha,t) = \gamma(t) + \alpha\,\frac{\partial \log q}{\partial\alpha}(\alpha,t)$$

is continuous and such that $\Delta(\alpha,t)\to 0$ as $\alpha\to 0$.

Assumption (A) controls the behavior of the log-quantile function with respect to its first variable. It is a sufficient condition to obtain the heavy-tail property (1); see for instance [5], Chapter 1. For all $a\in(0,1)$, let us introduce

$$\bar{\Delta}(a,t) = \sup_{\alpha\in(0,a)}|\Delta(\alpha,t)|.$$

The largest oscillation of the log-quantile function with respect to its second variable is defined, for all $a\in(0,1/2)$, as

$$\omega_n(a) = \sup\left\{\left|\log\frac{q(\alpha,x)}{q(\alpha,x')}\right|,\ \alpha\in(a,1-a),\ (x,x')\in B(t,h_t)^2\right\}.$$

Finally, let $k_t$ be a sequence of integers such that $1\le k_t\le m_t$ and let $J_{k_t}=\{1,\ldots,k_t\}$. Our first result establishes a representation in distribution of the $k_t$ largest random variables of the sample $Z_1(t),\ldots,Z_{m_t}(t)$.

###### Proposition 1

If $k_t^2\,\omega_n(m_t^{-(1+\delta)})\to 0$ for some $\delta>0$, then there exists an event $A_n$ with $P(A_n)\to 1$ as $n\to\infty$ such that

$$\left\{\left(\log Z_{m_t-i+1,m_t},\ i\in J_{k_t}\right)\middle|\,A_n\right\} \overset{d}{=} \left\{\left(\log q(V_{i,m_t},T_i),\ i\in J_{k_t}\right)\middle|\,A_n\right\},$$

where $V_{1,m_t}\le\cdots\le V_{m_t,m_t}$ are the order statistics associated to a sample $V_1,\ldots,V_{m_t}$ of independent standard uniform variables and $T_1,\ldots,T_{k_t}$ are random variables taking values in the ball $B(t,h_t)$.

Note that this result is implicitly used in [20], proof of Theorem 1. We also refer to [14], Theorem 3.5.2, for the approximation of the nearest neighbors distribution using the Hellinger distance and to [18] for the study of their asymptotic distribution. Here, the condition $k_t^2\,\omega_n(m_t^{-(1+\delta)})\to 0$ shows that the smoother the quantile function is on the slice $B(t,h_t)$, i.e. the smaller its oscillation is, the easier the control of the largest observations is, i.e. the larger $k_t$ can be.

The next proposition is dedicated to the study of the position of the conditional extreme quantile with respect to the largest observation $Z_{m_t,m_t}(t)$ in the slice $B(t,h_t)$.

###### Proposition 2

If $\omega_n(m_t^{-(1+\delta)})\to 0$ for some $\delta>0$, then

• under (S.1), $P\left(q(\alpha_{m_t},t)\le Z_{m_t,m_t}(t)\right)\to 1$,

• under (S.2) or (S.3), $P\left(q(\alpha_{m_t},t)> Z_{m_t,m_t}(t)\right)\to e^{-c}>0$.

Let us first focus on situation (S.1), where the estimation of the conditional extreme quantile is addressed with $\hat{q}_1(\alpha_{m_t},t)$, an upper order statistic chosen in the considered slice.

###### Theorem 1

Let $\alpha_{m_t}$ be a sequence satisfying (S.1).
If $(m_t\alpha_{m_t})^{1/2}\max\{\bar{\Delta}(\alpha_{m_t},t),\ \omega_n(m_t^{-(1+\delta)})\}\to 0$ for some $\delta>0$, then

$$(m_t\alpha_{m_t})^{1/2}\left(\frac{\hat{q}_1(\alpha_{m_t},t)}{q(\alpha_{m_t},t)} - 1\right) \overset{d}{\longrightarrow} \mathcal{N}(0,\gamma^2(t)).$$

It appears that the estimator is asymptotically Gaussian, with asymptotic variance proportional to $\gamma^2(t)$. Thus, the heavier the tail is, the larger $\gamma(t)$ is, and the larger the variance is. Besides, the asymptotic variance being inversely proportional to $m_t\alpha_{m_t}$, the estimation is more stable when the extreme quantile is far from the boundary of the sample. Considering now situation (S.2), an asymptotically Gaussian behavior cannot be expected since, in this case, the estimator is based on one of the largest order statistics in the considered slice.

###### Theorem 2

Let $\alpha_{m_t}$ be a sequence satisfying (S.2).
If $\bar{\Delta}(\alpha_{m_t},t)\to 0$ and $\omega_n(m_t^{-(1+\delta)})\to 0$ for some $\delta>0$, then

$$\frac{\hat{q}_1(\alpha_{m_t},t)}{q(\alpha_{m_t},t)} - 1 \overset{d}{\longrightarrow} \mathcal{E}(c,\gamma(t)),$$

where $\mathcal{E}(c,\gamma(t))$ is a non-degenerate distribution.

The asymptotic distribution $\mathcal{E}(c,\gamma(t))$ could be explicitly deduced from the proof of the result. It is omitted here for the sake of simplicity. Situation (S.3) is more complex since the asymptotic distribution of $\hat{q}_2(\alpha_{m_t},t)$ may depend on the behavior of both $\hat{q}_1(\beta_{m_t},t)$ and $\hat{\gamma}_n(t)$. In the next theorem, two cases are investigated. In situation (i), the asymptotic distribution of $\hat{q}_2(\alpha_{m_t},t)$ is driven by $\hat{q}_1(\beta_{m_t},t)$. Conversely, in situation (ii), $\hat{q}_2(\alpha_{m_t},t)$ inherits its asymptotic distribution from $\hat{\gamma}_n(t)$.

###### Theorem 3

Let $\beta_{m_t}$ be a sequence satisfying (S.1) and let $\alpha_{m_t}$ be a sequence eventually smaller than $\beta_{m_t}$. Define $\zeta_{m_t} = (m_t\beta_{m_t})^{1/2}\log(\beta_{m_t}/\alpha_{m_t})$.
If $\omega_n(m_t^{-(1+\delta)})\to 0$ for some $\delta>0$ and there exist a positive sequence $\upsilon_n(t)\to\infty$ and a distribution $\mathcal{D}$ such that

$$\upsilon_n(t)\left(\hat{\gamma}_n(t) - \gamma(t)\right) \overset{d}{\longrightarrow} \mathcal{D}, \qquad (4)$$

then, two situations arise:

• If

$$\zeta_{m_t}\,\max\{\upsilon_n^{-1}(t),\ \bar{\Delta}(\beta_{m_t},t)\}\to 0, \qquad (5)$$

we have

$$(m_t\beta_{m_t})^{1/2}\left(\frac{\hat{q}_2(\alpha_{m_t},t)}{q(\alpha_{m_t},t)} - 1\right) \overset{d}{\longrightarrow} \mathcal{N}(0,\gamma^2(t)). \qquad (6)$$
• Otherwise, under the additional condition

$$\upsilon_n(t)\,\max\{\zeta_{m_t}^{-1},\ \bar{\Delta}(\beta_{m_t},t)\}\to 0, \qquad (7)$$

we have

$$\frac{\upsilon_n(t)}{\log(\beta_{m_t}/\alpha_{m_t})}\left(\frac{\hat{q}_2(\alpha_{m_t},t)}{q(\alpha_{m_t},t)} - 1\right) \overset{d}{\longrightarrow} \mathcal{D}. \qquad (8)$$

Note that, even though the main interest of this result is to tackle the case where $\alpha_{m_t}$ is a sequence satisfying (S.3), it can also be applied in the more general situation where $\alpha_{m_t}$ is eventually smaller than $\beta_{m_t}$. For instance, it appears that, in situation (S.2), $\hat{q}_2(\alpha_{m_t},t)$ is a consistent estimator of $q(\alpha_{m_t},t)$, in the sense that their ratio converges to one in probability, whereas, in view of Theorem 2, $\hat{q}_1(\alpha_{m_t},t)$ is not consistent. Some applications of Theorem 3 are provided in the next section.

## 4 Examples

In paragraph 4.1, the above theorem is illustrated with a particular family of conditional tail index estimators. The corresponding assumptions are simplified in paragraph 4.2 for some classical heavy-tailed distributions.

### 4.1 Some conditional tail-index estimators

In [20], a family of conditional tail index estimators is introduced. They are based on a weighted sum of the log-spacings between the $k_t$ largest order statistics $Z_{m_t-k_t+1,m_t}(t)\le\cdots\le Z_{m_t,m_t}(t)$. The family is defined by

$$\hat{\gamma}_n(t,W) = \sum_{i=1}^{k_t} i\,\log\!\left(\frac{Z_{m_t-i+1,m_t}(t)}{Z_{m_t-i,m_t}(t)}\right)W(i/k_t,t)\Big/\sum_{i=1}^{k_t}W(i/k_t,t), \qquad (9)$$

where $W(\cdot,t)$ is a weight function defined on $(0,1)$ and integrating to one. Based on (9) and considering $\beta_{m_t} = k_t/m_t$, the conditional extreme quantile estimator (3) can be written as

$$\hat{q}_2(\alpha_{m_t},t,W) = Z_{m_t-k_t+1,m_t}(t)\left(\frac{k_t}{m_t\alpha_{m_t}}\right)^{\hat{\gamma}_n(t,W)}.$$

From [20], Theorem 2, under some conditions on the weight function, $\hat{\gamma}_n(t,W)$ is asymptotically Gaussian:

$$k_t^{1/2}\left(\hat{\gamma}_n(t,W) - \gamma(t)\right) \overset{d}{\longrightarrow} \mathcal{N}\!\left(0,\gamma^2(t)\,AV(t,W)\right),$$

where $AV(t,W) = \int_0^1 W^2(s,t)\,ds$. Letting $\upsilon_n(t) = k_t^{1/2}$, we obtain

$$\zeta_{m_t}\,\upsilon_n^{-1}(t) = \log\!\left(\frac{k_t}{m_t\alpha_{m_t}}\right) \to \infty,$$

in situation (S.2) or (S.3), which means that condition (5) cannot be satisfied. Thus, only situation (ii) of Theorem 3 may arise leading to the following corollary.

###### Corollary 1

Suppose the assumptions of [20], Theorem 2 hold. Let $k_t\to\infty$ be such that

$$k_t^{1/2}\,\bar{\Delta}(k_t/m_t,t)\to 0 \qquad (10)$$

and

$$k_t^{2}\,\omega_n(m_t^{-(1+\delta)})\to 0 \ \text{ for some }\delta>0. \qquad (11)$$

Let $\alpha_{m_t}$ be a sequence satisfying (S.2) or (S.3). Then,

$$\frac{k_t^{1/2}}{\log(k_t/(m_t\alpha_{m_t}))}\left(\frac{\hat{q}_2(\alpha_{m_t},t,W)}{q(\alpha_{m_t},t)} - 1\right) \overset{d}{\longrightarrow} \mathcal{N}\!\left(0,\gamma^2(t)\,AV(t,W)\right).$$

As an example, one can use constant weights $W_{\mathrm{H}}(s,t)=1$ to obtain the so-called conditional Hill estimator, with $AV(t,W_{\mathrm{H}})=1$, or logarithmic weights $W_{\mathrm{Z}}(s,t)=-\log(s)$ leading to the conditional Zipf estimator, with $AV(t,W_{\mathrm{Z}})=2$. We refer to [20], Section 4, for further details.
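A direct transcription of the family (9) might look as follows; this is a sketch under the convention that `Z` holds the sorted responses of the slice, and the Pareto sample below is only a check that both weight choices recover the tail index:

```python
import numpy as np

def gamma_hat(Z, k, W=lambda s: np.ones_like(s)):
    """Weighted log-spacing estimator (9). Z is sorted ascending, 1 <= k < len(Z).
    W = 1 yields the conditional Hill estimator, W(s) = -log(s) the Zipf one."""
    m = len(Z)
    i = np.arange(1, k + 1)
    # log-spacings log(Z_{m-i+1,m} / Z_{m-i,m}) between the k+1 largest values
    spacings = np.log(Z[m - i] / Z[m - i - 1])
    w = W(i / k)
    return np.sum(i * spacings * w) / np.sum(w)

# Pareto sample with gamma = 0.5: both estimators should be close to 0.5
rng = np.random.default_rng(42)
Z = np.sort(rng.pareto(2.0, 2000) + 1.0)
g_hill = gamma_hat(Z, 200)
g_zipf = gamma_hat(Z, 200, W=lambda s: -np.log(s))
```

The larger asymptotic variance of the Zipf weights ($AV=2$ versus $AV=1$) is visible in simulation as a wider spread of `g_zipf` around the true value.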

### 4.2 Illustration on some heavy-tailed distributions

The standard Pareto distribution is the simplest example of a heavy-tailed distribution. Its conditional quantile of order $\alpha$ decreases as a power function of $\alpha$ since, in this case, $q(\alpha,t)=\alpha^{-\gamma(t)}$. Therefore $\Delta(\alpha,t)=0$ for all $\alpha\in(0,1)$ and condition (10) of Corollary 1 vanishes. Another example is the Fréchet distribution, for which

$$q(\alpha,t) = \alpha^{-\gamma(t)}\left\{\frac{1}{\alpha}\log\!\left(\frac{1}{1-\alpha}\right)\right\}^{-\gamma(t)}.$$

Here, the conditional quantile approximately decreases as a power function of $\alpha$ since, in this case, $q(\alpha,t)\sim\alpha^{-\gamma(t)}$ as $\alpha\to 0$, the quality of this approximation being controlled by

$$\Delta(\alpha,t) = -\frac{\gamma(t)}{2}\,\alpha\,(1+O(\alpha)) \ \text{ as } \alpha\to 0.$$

A similar example is given by Burr distributions for which

$$q(\alpha,t) = \alpha^{-\gamma(t)}\left(1-\alpha^{-\rho(t)}\right)^{-\gamma(t)/\rho(t)}$$

and

$$\Delta(\alpha,t) = -\gamma(t)\,\alpha^{-\rho(t)}\left(1+O(\alpha^{-\rho(t)})\right),$$

with $\rho(t)<0$. These results are collected in Table 1. In both the Fréchet and Burr cases, $\Delta(\alpha,t)$ is asymptotically proportional to $\alpha^{-\rho(t)}$ as $\alpha\to 0$, with the convention $\rho(t)=-1$ for the Fréchet distribution. Note that $\rho(t)$ is known as the second-order parameter in extreme-value theory. It drives the quality of the approximation of the conditional quantile by the power function $\alpha^{-\gamma(t)}$. Furthermore, it is easily seen that, for these two distributions, the function $\bar{\Delta}(\cdot,t)$ is increasing. Thus, condition (10) of Corollary 1 can be simplified as $k_t^{1/2}(k_t/m_t)^{-\rho(t)}\to 0$, which shows that the smaller $\rho(t)$ is, the larger $k_t$ can be. Finally, if $\gamma(\cdot)$ and $\rho(\cdot)$ are Lipschitzian, i.e. if there exist constants $c_\gamma>0$ and $c_\rho>0$ such that

$$|\gamma(x)-\gamma(x')| \le c_\gamma\,d(x,x') \ \text{ and } \ |\rho(x)-\rho(x')| \le c_\rho\,d(x,x')$$

for all $(x,x')\in B(t,h_t)^2$, then the oscillation can be bounded as $\omega_n(a)=O(h_t\log(1/a))$, and thus condition (11) of Corollary 1 can be simplified as $k_t^2\,h_t\log(m_t)\to 0$.
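The closed forms above are easy to check numerically; the following sketch encodes the Fréchet and Burr conditional quantiles exactly as written in the text (the function names are ours):

```python
import math

def q_frechet(alpha, gamma):
    # alpha^{-gamma} * ((1/alpha) log(1/(1-alpha)))^{-gamma} = (-log(1-alpha))^{-gamma}
    return alpha ** (-gamma) * (math.log(1.0 / (1.0 - alpha)) / alpha) ** (-gamma)

def q_burr(alpha, gamma, rho):
    # alpha^{-gamma} * (1 - alpha^{-rho})^{-gamma/rho}, with rho < 0
    return alpha ** (-gamma) * (1.0 - alpha ** (-rho)) ** (-gamma / rho)
```

In both cases the ratio $q(\alpha,t)/\alpha^{-\gamma(t)}$ tends to one as $\alpha\to 0$, which is precisely the first-order approximation whose error $\Delta(\alpha,t)$ quantifies.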

## 5 Finite sample behaviour

In this section, we propose to illustrate the behaviour of our conditional extreme quantile estimators on functional spectrometric data. A question of interest for the planetologist is the following: given a spectrum collected by the OMEGA instrument onboard the European spacecraft Mars Express in orbit around Mars, how can one estimate the associated physical properties of the ground (grain size of CO2 ice, proportions of water, dust and CO2, etc.)? To answer this question, a learning dataset can be constructed using radiative transfer models. Here, we focus on the CO2 proportion. Given 16 different values $y_1,\ldots,y_{16}$ of this proportion, a radiative transfer model provides us with the corresponding spectra $x_1,\ldots,x_{16}$ (see Figure 1). Clearly, the obtained spectra are non random. They are functions of the wavelength and we consider here their discretized version on 256 wavelengths. Using this learning dataset, a lot of methods can be found in the literature to estimate the CO2 proportion associated to an observed spectrum. One can mention Support Vector Machines, Sliced Inverse Regression, the nearest neighbor approach, etc. (see for instance [4] for an overview of these approaches). For all these methods, the estimation of the CO2 proportion is perturbed by a random error term. We propose to model this perturbation by:

$$Y_{i,j} = \log(1/y_i) + \sigma\left(\epsilon_j(x_i) - \Gamma(1-\gamma(x_i))\right), \quad j=1,\ldots,n_i,\ i=1,\ldots,16,$$

where

$$\gamma(x_i) = 0.3\,\frac{\|x_i\|_2^2 - \min_l\|x_l\|_2^2}{\max_l\|x_l\|_2^2 - \min_l\|x_l\|_2^2} + 0.2, \qquad \sigma = \min_i\frac{\log(1/y_i)}{\Gamma(1-\gamma(x_i))},$$

and $\epsilon_j(x_i)$, $j=1,\ldots,n_i$, are independent and identically distributed random variables from a Fréchet distribution with tail index $\gamma(x_i)$ (see Table 1). Note that $\|x_i\|_2^2$ is an approximation of the total energy of the spectrum $x_i$. The above definitions ensure that $\gamma(x_i)\in[0.2,0.5]$ and that $Y_{i,j}>0$ for all $i$ and $j$. Furthermore, since the expectation of $\epsilon_j(x_i)$ is given by $\Gamma(1-\gamma(x_i))$, the random variables $Y_{i,j}$ are centered on the value $\log(1/y_i)$. Our aim is to estimate the conditional quantile

$$q(\alpha,x_i) = \bar{F}^{\leftarrow}(\alpha,x_i), \ \text{ for } i=1,\ldots,16,$$

where $\bar{F}(\cdot,x_i)$ is the conditional survival function of $Y_{i,j}$ given $x_i$. To this end, the estimator $\hat{q}_2(\alpha,\cdot,W)$ defined in paragraph 4.1 is considered. The semi-metric based on the second derivative is adopted, as advised in [17], Chapter 9:

$$d^2(x_i,x_j) = \int\left(x_i^{(2)}(t) - x_j^{(2)}(t)\right)^2 dt,$$

where $x^{(2)}$ denotes the second derivative of $x$. To compute this semi-metric, one can use an approximation of the functions based on B-splines, as proposed in [17], Chapter 3. Here, we limit ourselves to a discretized version of $d^2$:

$$\tilde{d}^2(x_i,x_j) = \sum_{l=2}^{255}\left\{(x_{i,l+1}-x_{j,l+1}) + (x_{i,l-1}-x_{j,l-1}) - 2(x_{i,l}-x_{j,l})\right\}^2.$$
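The discretized semi-metric $\tilde{d}^2$ can be coded directly with second differences; this is a sketch, which implicitly assumes an equally spaced wavelength grid, as the formula above does:

```python
import numpy as np

def semi_metric2(xi, xj):
    """Discretized second-derivative semi-metric: sum of squared second
    differences of the curve xi - xj over the interior grid points."""
    diff = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    d2 = diff[2:] + diff[:-2] - 2.0 * diff[1:-1]
    return float(np.sum(d2 ** 2))
```

A useful property: two spectra differing by an affine baseline (a vertical shift plus a linear trend) are at semi-metric distance zero, which is precisely why derivative-based semi-metrics are recommended for spectrometric curves.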

The finite sample performance of the estimator is assessed on replications of the sample $\{Y_{i,j},\ j=1,\ldots,n_i,\ i=1,\ldots,16\}$. Two values of $\alpha$ are considered. In the following, we assume that the hyperparameters $h$ and $k$ do not depend on the spectrum (we thus omit the index $t$). These parameters are selected thanks to the heuristics proposed in [20], which consists in minimizing the distance between two different estimators of the conditional extreme quantile:

$$(\hat{h}_{\mathrm{select}},\hat{k}_{\mathrm{select}}) = \arg\min_{h,k}\Delta\left(\hat{q}_2(\alpha,\cdot,W_{\mathrm{H}}),\ \hat{q}_2(\alpha,\cdot,W_{\mathrm{Z}})\right),$$

where, for two functions $f$ and $g$,

$$\Delta(f,g) = \left\{\sum_{i=1}^{16}\left(f(x_i)-g(x_i)\right)^2\right\}^{1/2}.$$

The estimator associated to these parameters is denoted by $\hat{q}_{\mathrm{select}}$. We also compute $\hat{h}_{\mathrm{oracle}}$ and $\hat{k}_{\mathrm{oracle}}$ defined as:

$$(\hat{h}_{\mathrm{oracle}},\hat{k}_{\mathrm{oracle}}) = \arg\min_{h,k}\Delta\left(\hat{q}_2(\alpha,\cdot,W_{\mathrm{H}}),\ q(\alpha,\cdot)\right).$$

The conditional quantile estimator associated to these parameters is denoted by $\hat{q}_{\mathrm{oracle}}$. Note that $\hat{h}_{\mathrm{select}}$, $\hat{k}_{\mathrm{select}}$, $\hat{h}_{\mathrm{oracle}}$ and $\hat{k}_{\mathrm{oracle}}$ do not depend on $i$. Of course, the oracle method cannot be applied in practical situations where $q(\alpha,\cdot)$ is unknown. However, it provides us with a lower bound on the distance that can be reached with our estimator. In order to validate our choice of $\hat{h}_{\mathrm{select}}$ and $\hat{k}_{\mathrm{select}}$, the histograms of the errors $\Delta(\hat{q}_{\mathrm{select}},q(\alpha,\cdot))$ and $\Delta(\hat{q}_{\mathrm{oracle}},q(\alpha,\cdot))$ computed over the replications are superimposed in Figure 2. It appears that the mean errors are approximately equal. Let us also remark that the heuristics errors seem to have a heavier right tail than the oracle errors. For each spectrum, the empirical 90%-confidence interval of the estimator is represented in Figure 3 and Figure 4 for the two considered values of $\alpha$. The confidence intervals are ranked by ascending order of the tail index. The larger the tail index is, the larger the confidence intervals are. This is in accordance with the result presented in Corollary 1. Finally, in Figure 5 and Figure 6, we draw the estimators $\hat{q}_{\mathrm{select}}$ and $\hat{q}_{\mathrm{oracle}}$ on the replication giving rise to the median error. It appears that the oracle estimator is only slightly better than the heuristics one. As noticed previously, the estimation error increases with the tail index.
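The selection heuristics can be sketched end to end as follows; the pre-computed dictionary `slices_by_h` (one sorted response vector per spectrum and per bandwidth) and all function names are our own assumptions made for illustration:

```python
import numpy as np

def gamma_w(Z, k, W):
    """Weighted log-spacing tail-index estimator (9) on a sorted sample Z."""
    m, i = len(Z), np.arange(1, k + 1)
    w = W(i / k)
    return np.sum(i * np.log(Z[m - i] / Z[m - i - 1]) * w) / np.sum(w)

def q2(Z, k, alpha, W):
    """Extrapolated conditional quantile estimator of paragraph 4.1."""
    m = len(Z)
    return Z[m - k] * (k / (m * alpha)) ** gamma_w(Z, k, W)

W_hill = lambda s: np.ones_like(s)   # conditional Hill weights
W_zipf = lambda s: -np.log(s)        # conditional Zipf weights

def select_hk(slices_by_h, alpha, k_grid):
    """Pick (h, k) minimizing the distance between the Hill- and Zipf-based
    conditional quantile estimates, as in the selection criterion above."""
    best, best_hk = np.inf, None
    for h, slices in slices_by_h.items():
        for k in k_grid:
            qh = np.array([q2(Z, k, alpha, W_hill) for Z in slices])
            qz = np.array([q2(Z, k, alpha, W_zipf) for Z in slices])
            crit = np.sqrt(np.sum((qh - qz) ** 2))
            if crit < best:
                best, best_hk = crit, (h, k)
    return best_hk
```

The oracle variant is obtained by replacing the Zipf-based estimates with the true quantiles, which is only feasible in simulation.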

## 6 Proofs

### 6.1 Preliminary results

Our first auxiliary lemma is a simple unconditioning tool for determining the asymptotic distribution of a random variable.

###### Lemma 1

Let $(X_n)$ and $(Y_n)$ be two sequences of real random variables. Suppose there exists an event $A_n$ such that $\{X_n\,|\,A_n\} \overset{d}{=} \{Y_n\,|\,A_n\}$ with $P(A_n)\to 1$. Then, $Y_n \overset{d}{\longrightarrow} Y$ implies $X_n \overset{d}{\longrightarrow} Y$.

Proof of Lemma 1 For all $x\in\mathbb{R}$, the well-known expansion

$$P(X_n\le x) = P(\{X_n\le x\}\,|\,A_n)\,P(A_n) + P(\{X_n\le x\}\,|\,A_n^C)\,P(A_n^C),$$

where $A_n^C$ is the complementary event associated to $A_n$, leads to the following inequalities:

$$P(\{X_n\le x\}\,|\,A_n)\,P(A_n) \le P(X_n\le x) \le P(\{X_n\le x\}\,|\,A_n)\,P(A_n) + P(A_n^C).$$

Since $\{X_n\,|\,A_n\} \overset{d}{=} \{Y_n\,|\,A_n\}$, it follows that:

$$P(\{Y_n\le x\}\cap A_n) \le P(X_n\le x) \le P(\{Y_n\le x\}\cap A_n) + P(A_n^C).$$

Taking into account that

$$P(Y_n\le x) - P(A_n^C) \le P(\{Y_n\le x\}\cap A_n) \le P(Y_n\le x),$$

we obtain

$$P(Y_n\le x) - P(A_n^C) \le P(X_n\le x) \le P(Y_n\le x) + P(A_n^C).$$

The conclusion is then straightforward since $P(A_n^C)\to 0$ and since $P(Y_n\le x)\to P(Y\le x)$ at every continuity point $x$ of the distribution function of $Y$.

The next lemma provides the asymptotic distribution of extreme quantile estimators from a uniform distribution in a situation analogous to (S.1) in the unconditional case.

###### Lemma 2

Let $V_1,\ldots,V_M$ be independent standard uniform random variables. For any sequence $(\theta_M)$ such that $\theta_M\to 0$ and $M\theta_M\to\infty$,

$$\left(\frac{M}{\theta_M}\right)^{1/2}\left(V_{\lfloor M\theta_M\rfloor,M} - \theta_M\right) \overset{d}{\longrightarrow} \mathcal{N}(0,1).$$

Proof of Lemma 2 For the sake of simplicity, let us introduce $k_M = \lfloor M\theta_M\rfloor$. From Rényi's representation theorem,

$$V_{k_M,M} \overset{d}{=} \sum_{i=1}^{k_M}E_i\Big/\sum_{i=1}^{M+1}E_i,$$

where $E_1,E_2,\ldots$ are independent random variables from a standard exponential distribution. Thus,

$$\xi_M \overset{\mathrm{def}}{=} \left(\frac{M}{\theta_M}\right)^{1/2}\left(V_{k_M,M}-\theta_M\right) \overset{d}{=} \left(\frac{1}{M}\sum_{i=1}^{M+1}E_i\right)^{-1}\left(\frac{M}{\theta_M}\right)^{1/2}\left[\frac{1}{k_M}\sum_{i=1}^{k_M}E_i\left(\frac{k_M}{M}-\theta_M\right) + \theta_M\left(\frac{1}{k_M}\sum_{i=1}^{k_M}E_i - 1\right) - \theta_M\left(\frac{1}{M}\sum_{i=1}^{M+1}E_i - 1\right)\right],$$

and, in view of the law of large numbers, we have

$$\xi_M \overset{P}{\sim} \left(\frac{M}{\theta_M}\right)^{1/2}\left(\frac{k_M}{M}-\theta_M\right)(1+o_P(1)) + (M\theta_M)^{1/2}\left(\frac{1}{k_M}\sum_{i=1}^{k_M}E_i - 1\right) - (M\theta_M)^{1/2}\left(\frac{1}{M}\sum_{i=1}^{M+1}E_i - 1\right) \overset{\mathrm{def}}{=} \xi_{1,M} + \xi_{2,M} + \xi_{3,M}.$$

Let us consider the three terms separately. First, writing $k_M = M\theta_M + \tau_M$ with $\tau_M\in(-1,0]$, we have

$$\xi_{1,M} \overset{P}{\sim} \left(\frac{M}{\theta_M}\right)^{1/2}\frac{\tau_M}{M} = \frac{\tau_M}{(M\theta_M)^{1/2}} \to 0, \qquad (12)$$

since $M\theta_M\to\infty$. Second, since $k_M\to\infty$, the central limit theorem entails

$$\xi_{2,M} \sim k_M^{1/2}\left(\frac{1}{k_M}\sum_{i=1}^{k_M}E_i - 1\right) \overset{d}{\longrightarrow} \mathcal{N}(0,1). \qquad (13)$$

Similarly, it is easy to check that

$$\xi_{3,M} = O_P(\theta_M^{1/2}) = o_P(1), \qquad (14)$$

since $\theta_M\to 0$. Collecting (12), (13) and (14) concludes the proof.
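Lemma 2 and the Rényi representation used in its proof are easy to check by simulation; the sketch below draws $\xi_M$ directly from the exponential representation (the sample sizes and tolerances are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
M, theta, reps = 5000, 0.02, 2000
k = int(np.floor(M * theta))              # k_M = floor(M * theta_M)

xi = np.empty(reps)
for r in range(reps):
    E = rng.exponential(size=M + 1)       # standard exponentials
    V_k = E[:k].sum() / E.sum()           # Renyi: V_{k,M} in distribution
    xi[r] = np.sqrt(M / theta) * (V_k - theta)

# xi should be approximately standard Gaussian
print(round(xi.mean(), 3), round(xi.var(), 3))
```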

### 6.2 Proofs of main results

Proof of Proposition 1 Under (A) and since the random variables $Z_1(t),\ldots,Z_{m_t}(t)$ are independent, we have:

$$\{\log Z_i(t),\ i=1,\ldots,m_t\} \overset{d}{=} \{\log q(V_i,x_i),\ i=1,\ldots,m_t\},$$

where $V_1,\ldots,V_{m_t}$ are independent standard uniform variables and $x_i$ is the covariate associated to $Z_i(t)$. Denoting by $\psi(i)$ the random index such that $x_{\psi(i)}$ is the covariate associated to the order statistic $Z_{m_t-i+1,m_t}(t)$, we obtain

$$\{\log Z_{m_t-i+1,m_t}(t),\ i=1,\ldots,m_t\} \overset{d}{=} \{\log q(V_{\psi(i)},x_{\psi(i)}),\ i=1,\ldots,m_t\}.$$

Let us consider the event $A_n$ where

 A1,n = {mini=1,…,kt−1logq(