Modeling serial extremal dependence

Richard A. Davis
Columbia University
Department of Statistics
1255 Amsterdam Ave. New York, NY 10027, USA
rdavis@stat.columbia.edu
   Holger Drees
University of Hamburg
Department of Mathematics
Bundesstraße 55, 20146 Hamburg, Germany
holger.drees@math.uni-hamburg.de
   Johan Segers   Michał Warchoł
Université catholique de Louvain
Institut de Statistique, Biostatistique et Sciences Actuarielles
Voie du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium
johan.segers@uclouvain.be, michal.warchol@uclouvain.be
September 11, 2019
Abstract

To draw inference on serial extremal dependence within heavy-tailed Markov chains, Drees, Segers and Warchoł [Extremes (2015) 18, 369–402] proposed nonparametric estimators of the spectral tail process. The methodology can be extended to the more general setting of a stationary, regularly varying time series. The large-sample distribution of the estimators is derived via empirical process theory for cluster functionals. The finite-sample performance of these estimators is evaluated via Monte Carlo simulations. Moreover, two different bootstrap schemes are employed which yield confidence intervals for the pre-asymptotic spectral tail process: the stationary bootstrap and the multiplier block bootstrap. The estimators are applied to stock price data to study the persistence of positive and negative shocks.

Keywords:

Financial time series; Heavy tails; Multiplier block bootstrap; Regular variation; Shock persistence; Stationary time series; Tail process.

1 Introduction

The typical modeling paradigm for a time series often starts by choosing a flexible class of models that captures salient features present in the data. Of course, the relevant features depend on the type of characteristics one is looking for. For a financial time series consisting of, say, log-returns of some asset, the key features, often referred to as stylized facts, include heavy-tailed marginal distributions and serially uncorrelated but dependent data. These characteristics are readily detected using standard diagnostics such as qq-plots of the marginal distribution and plots of the sample autocorrelation function (ACF) of the data and of the squares of the data. The GARCH process (and its variants) as well as the stochastic volatility (SV) process driven by heavy-tailed noise exhibit these attributes and often serve as a starting point for building a model. More recently, considerable attention has been directed towards studying the extremal behavior of both financial and environmental time series, especially as it relates to estimating risk factors. Extremes for such time series can occur in clusters, and getting a handle on the nature of clusters, both in terms of size and frequency of occurrence, is important for evaluating various risk measures. Ultimately, one wants to choose models that adequately describe the extremal dependence features observed in the data. The theme of this paper is to provide additional tools that not only give measures of extremal dependence, but can also be used as a basis for assessing the quality of a model’s fit to the extremal properties present in the data.

The extremal index (Leadbetter, 1983) is one such measure of extremal dependence for a stationary time series. It is a measure of extremal clustering: the extremal index $\theta \in (0, 1]$ is the reciprocal of the mean cluster size of extremes, with $\theta < 1$ indicating clustering and $\theta = 1$ signifying no clustering in the limit. Unfortunately, $\theta$ is a rather crude measure and does not provide fine detail about extremal dependence. The extremogram, developed in Davis and Mikosch (2009), is an attempt to provide a measure of serial dependence among the extremes in a stationary time series. It was conceived to be used in much the same way as an ACF in traditional time series modeling, but applied only to extreme values.

In this paper, we will use the spectral tail process, as formulated by Basrak and Segers (2009) for heavy-tailed time series, to assess and measure extremal dependence. The spectral tail process provides a more in-depth description of the structure of extremal dependence than the extremogram. The first objective of this paper is to establish limit theory for nonparametric estimates of the distribution of the spectral tail process for a class of heavy-tailed stationary time series. This builds on earlier work of Drees et al. (2015) for heavy-tailed Markov chains. The nonparametric estimates provide quantitative information about extremal dependence within a time series and as such can be used in both the exploratory and the confirmatory phases of modeling. As an example, they provide estimates of the probability that an extreme observation will occur at time $t$, given that one has occurred at time $0$, and that its absolute value will be even larger. These estimates can also be used for model confirmation, in much the same way that the ACF is used for assessing quality of fit for second-order models of time series. For example, one can compute a pre-asymptotic version (to be defined later) of the distribution of the spectral tail process of a GARCH process, which in most cases can be easily calculated via simulation. The estimated distribution of the spectral tail process can then be compared with the pre-asymptotic version corresponding to the model for compatibility. A good fit would indicate the plausibility of using a GARCH model for capturing serial extremal dependence. The second main objective is then to provide a useful way of measuring compatibility, for which we propose resampling methods.

The key object of study in this paper is the tail process and in particular its normalized version, the spectral tail process. A strictly stationary univariate time series $(X_t)_{t \in \mathbb{Z}}$ is said to have a tail process $(Y_t)_{t \in \mathbb{Z}}$ if, for all integers $s \le t$, we have

$$P\bigl( x^{-1}(X_s, \ldots, X_t) \in \cdot \;\big|\; |X_0| > x \bigr) \;\xrightarrow{w}\; P\bigl( (Y_s, \ldots, Y_t) \in \cdot \bigr), \qquad x \to \infty, \qquad (1.1)$$

with the implicit understanding that the law of $Y_0$ is non-degenerate. The law of $|Y_0|$ is then necessarily Pareto($\alpha$) for some $\alpha \in (0, \infty)$, and the function $x \mapsto P(|X_0| > x)$ is regularly varying at infinity with index $-\alpha$:

$$\lim_{x \to \infty} \frac{P(|X_0| > \lambda x)}{P(|X_0| > x)} = \lambda^{-\alpha}, \qquad \lambda > 0. \qquad (1.2)$$

The existence of a tail process is equivalent to multivariate regular variation of the finite-dimensional distributions of $(X_t)_{t \in \mathbb{Z}}$ (Basrak and Segers, 2009, Theorem 2.1). In many respects, this condition can be viewed as the heavy-tailed analogue of the condition that a process is Gaussian, in the sense that all finite-dimensional distributions are specified to be of a certain type.

The spectral tail process $(\Theta_t)_{t \in \mathbb{Z}}$ is defined by $\Theta_t = Y_t / |Y_0|$, for $t \in \mathbb{Z}$. By (1.1), it follows that for all integers $s \le t$, we have

$$P\bigl( |X_0|^{-1}(X_s, \ldots, X_t) \in \cdot \;\big|\; |X_0| > x \bigr) \;\xrightarrow{w}\; P\bigl( (\Theta_s, \ldots, \Theta_t) \in \cdot \bigr), \qquad x \to \infty. \qquad (1.3)$$

The difference between (1.1) and (1.3) is that in the latter equation, the variables have been normalized by $|X_0|$ rather than by the threshold $x$. Such auto-normalization allows the tail process to be decomposed into two stochastically independent components, i.e.,

$$(Y_t)_{t \in \mathbb{Z}} = |Y_0| \, (\Theta_t)_{t \in \mathbb{Z}}.$$
Independence of $|Y_0|$ and $(\Theta_t)_{t \in \mathbb{Z}}$ is stated in Basrak and Segers (2009, Theorem 3.1). The random variable $|Y_0|$ characterizes the magnitudes of extremes, whereas $(\Theta_t)_{t \in \mathbb{Z}}$ captures serial dependence. The spectral tail process at time $0$ yields information on the relative weights of the upper and lower tails of $X_0$: since $|\Theta_0| = 1$ almost surely, we have

$$P(\Theta_0 = 1) \;=\; \lim_{x \to \infty} P(X_0 > x \mid |X_0| > x) \;=\; 1 - P(\Theta_0 = -1). \qquad (1.4)$$

The distributions of the forward tail process $(\Theta_t)_{t \ge 0}$ and the backward tail process $(\Theta_t)_{t \le 0}$ mutually determine each other (Basrak and Segers, 2009, Theorem 3.1). For all $s, t \in \mathbb{Z}$ with $s \le 0 \le t$ and for all measurable functions $f$ satisfying $f(y_s, \ldots, y_t) = 0$ whenever $y_0 = 0$, we have, provided the expectations exist,

(1.5)

The indicator variable can be omitted because of the presence of the accompanying power factor, but sometimes it is useful to mention it explicitly in order to avoid errors arising from division by zero. By exploiting the ‘time-change formula’ (1.5), we will be able to improve the efficiency of estimators of the spectral tail process.

Main interest in this paper is in the cumulative distribution function (cdf) of $\Theta_t$. If this cdf is continuous at a point $x$, then

$$\lim_{u \to \infty} P\bigl( X_t \le x \, |X_0| \;\big|\; |X_0| > u \bigr) \;=\; P(\Theta_t \le x). \qquad (1.6)$$
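The pre-asymptotic probability on the left-hand side of (1.6) is straightforward to approximate from a long simulated path. The sketch below does so for an ARCH(1)-type toy series; the model, the parameter values, the sample size, and the threshold quantile are all illustrative assumptions rather than choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heavy-tailed series (illustrative only): an ARCH(1)-type recursion.
n = 200_000
z = rng.standard_t(df=5, size=n)           # heavy-tailed innovations (assumption)
x = np.empty(n)
x[0] = z[0]
sigma2 = 1.0
for i in range(1, n):
    sigma2 = 0.1 + 0.4 * x[i - 1] ** 2      # ARCH(1) volatility recursion (assumption)
    x[i] = np.sqrt(sigma2) * z[i]

def preasymptotic_cdf(x, t, arg, u):
    """Empirical version of P(X_t <= arg * |X_0| | |X_0| > u) at lag t."""
    x0, xt = np.abs(x[:-t]), x[t:]
    exceed = x0 > u
    return np.mean(xt[exceed] <= arg * x0[exceed])

u = np.quantile(np.abs(x), 0.99)            # high threshold (illustrative choice)
print(preasymptotic_cdf(x, t=1, arg=1.0, u=u))
```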

The remainder of the paper is organized as follows: Two empirical estimates of the spectral tail process based on forward and backward representations for the tail process are described in Section 2. The companion limit theory for these estimators is formulated in Section 3. While the estimates are asymptotically normal, the expressions for the asymptotic variances are too complicated to be useful for constructing confidence regions. To overcome this limitation, inference procedures can be carried out using resampling methods. Two resampling methods for constructing confidence intervals, based on the stationary bootstrap as used in Davis et al. (2012), and the multiplier block bootstrap as described in Drees (2015), are also considered in Section 2. The validity of the proposed methodology is confirmed in Section 3, whereas the finite-sample performance is investigated through Monte Carlo simulations in Section 4. In terms of coverage probabilities, the multiplier block bootstrap generally performed better than the stationary bootstrap procedure. However, with both procedures one has to be careful when applying them for very high thresholds.

In Section 5, we apply the methodology to study serial extremal dependence of daily log-returns on the S&P500 index and the P&G stock price. We distinguish between two sources of such dependence, positive and negative shocks, and point out an asymmetric behavior. Moreover, we compare to what extent serial extremal dependence is captured by the GARCH(1,1) model and by an extension of it allowing for a leverage effect, the APARCH(1,1) model. These examples illustrate how our methodology can provide useful information on the behavior of extremes that follow positive and negative shocks, which, in turn, can be used in a model-building context.

All proofs of the main results are collected in Section 6.

2 Methodology

2.1 Estimators

The data consist of a stretch $X_1, \ldots, X_n$ drawn from a regularly varying, stationary univariate time series with spectral tail process $(\Theta_t)_{t \in \mathbb{Z}}$ and index $\alpha$; the maximal lag of interest, $t_0$ say, is fixed.

In order to estimate $P(\Theta_0 = 1)$, we simply take the empirical version of (1.4), yielding

$$\hat{p}_n \;=\; \frac{\sum_{i=1}^{n} \mathbb{1}\{X_i > u_n\}}{\sum_{i=1}^{n} \mathbb{1}\{|X_i| > u_n\}}.$$
For $\hat{p}_n$ to be consistent and asymptotically normal, the threshold sequence $(u_n)$ should tend to infinity at a certain rate described in the next section.

To estimate the cdf of $\Theta_t$, we propose the forward estimator

$$\frac{\sum_{i=1}^{n-t} \mathbb{1}\{ X_{i+t} \le x \, |X_i|, \; |X_i| > u_n \}}{\sum_{i=1}^{n-t} \mathbb{1}\{ |X_i| > u_n \}}. \qquad (2.1)$$

This is just the empirical version of the left-hand side of (1.6). In equations (1.1) and (1.3), the conditioning event is $\{|X_0| > x\}$, making no distinction between positive extremes, $X_0 > x$, and negative extremes, $X_0 < -x$. However, these two cases can be distinguished by conditioning on the sign of $X_0$. In particular, we define

The time-change formula (1.5) yields a different representation of the law of $\Theta_t$, motivating a different estimator than the one above, based on different data points.

Lemma 2.1.

Let $(X_t)_{t \in \mathbb{Z}}$ be a stationary univariate time series, regularly varying with index $\alpha$ and spectral tail process $(\Theta_t)_{t \in \mathbb{Z}}$. Then, for all integers $t \ge 1$,

(2.2)

Moreover

(2.3)

and

(2.4)

Assume for the moment that $\alpha$ is known. Lemma 2.1 suggests the following backward estimator of the cdf of $\Theta_t$:

(2.5)

Moreover

In most applications, however, the value of $\alpha$ is unknown. Therefore, we estimate $\alpha$ separately, for instance, by the Hill-type estimator

$$\hat{\alpha}_n \;=\; \left( \frac{\sum_{i=1}^{n} \log\bigl( |X_i| / u_n \bigr) \, \mathbb{1}\{ |X_i| > u_n \}}{\sum_{i=1}^{n} \mathbb{1}\{ |X_i| > u_n \}} \right)^{-1}. \qquad (2.6)$$

We plug the estimated value of $\alpha$ into equation (2.5) to obtain

(2.7)

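For concreteness, here is a minimal sketch of the forward estimator and of a Hill-type tail index estimator, assuming they take the standard forms of an empirical conditional probability as in (2.1) and of a reciprocal mean log-excess as in (2.6). The paper's exact notation, as well as the backward estimator (2.5)/(2.7) built on the time-change representation of Lemma 2.1, is not reproduced here.

```python
import numpy as np

def forward_estimator(x, t, arg, u):
    """Forward estimate of P(Theta_t <= arg): empirical conditional probability
    using pairs (X_i, X_{i+t}) with |X_i| > u, cf. (1.6) and (2.1) (assumed form)."""
    x = np.asarray(x, dtype=float)
    x0 = np.abs(x[: len(x) - t])
    xt = x[t:]
    exceed = x0 > u
    if not exceed.any():
        return np.nan
    return np.mean(xt[exceed] <= arg * x0[exceed])

def hill_estimator(x, u):
    """Hill-type estimate of alpha from exceedances of |X_i| over u:
    reciprocal of the mean log-excess, cf. (2.6) (assumed form)."""
    a = np.abs(np.asarray(x, dtype=float))
    exc = a[a > u]
    return 1.0 / np.mean(np.log(exc / u))

# Example usage on a series `x`, with a high empirical quantile as threshold:
# u = np.quantile(np.abs(x), 0.95)
# print(forward_estimator(x, t=1, arg=1.0, u=u), hill_estimator(x, u))
```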

2.2 Resampling

We explore two different bootstrap schemes that yield confidence intervals for $P(\Theta_t \le x)$, or rather, for its pre-asymptotic version $P(X_t \le x\,|X_0| \mid |X_0| > u_n)$: the stationary bootstrap and the multiplier block bootstrap. We apply each of the two resampling schemes to both the forward and the backward estimators at various levels $x$ and at different lags $t$.

The stationary bootstrap goes back to Politis and Romano (1994) and is an adaptation of the block bootstrap that allows for random block sizes. The resampling scheme was applied to the extremogram in Davis et al. (2012). It consists of generating pseudo-samples $X_1^*, \ldots, X_n^*$, drawn from the sample $X_1, \ldots, X_n$ by taking the first $n$ values in the sequence

where $(K_i)$ is an iid sequence of random variables uniformly distributed on $\{1, \ldots, n\}$ and $(L_i)$ is an iid sequence of geometrically distributed random variables (independent of $(K_i)$) with distribution $P(L_1 = m) = p_n (1 - p_n)^{m-1}$, $m \in \mathbb{N}$, for some $p_n \in (0, 1)$ such that $p_n \to 0$ and $n p_n \to \infty$. If an index thus obtained exceeds the sample size $n$, we subtract $n$ from it, i.e., we continue from the beginning of the sample. The estimators are then applied to $X_1^*, \ldots, X_n^*$.
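A minimal sketch of the resampling step just described: block start indices are drawn uniformly, block lengths are geometric, and indices are wrapped around the end of the sample. The expected block length (here 20) and the placeholder data are illustrative assumptions.

```python
import numpy as np

def stationary_bootstrap_sample(x, p, rng):
    """Generate one stationary-bootstrap pseudo-sample of the same length as x.

    Blocks start at uniform random positions, have Geometric(p) lengths,
    and indices exceeding the sample size wrap around to the beginning.
    """
    x = np.asarray(x)
    n = len(x)
    idx = []
    while len(idx) < n:
        start = rng.integers(n)              # uniform start index K_i on {0, ..., n-1}
        length = rng.geometric(p)            # block length L_i ~ Geometric(p)
        idx.extend((start + j) % n for j in range(length))
    return x[np.array(idx[:n])]

rng = np.random.default_rng(1)
x = rng.standard_t(df=4, size=1000)          # placeholder data (assumption)
x_star = stationary_bootstrap_sample(x, p=1 / 20, rng=rng)
```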

The multiplier block bootstrap method was applied to cluster functionals in Drees (2015). It consists of splitting the data set into $m_n := \lfloor n / r_n \rfloor$ blocks of length $r_n$ and multiplying the cluster functionals of each block by a random factor. (Here $\lfloor z \rfloor$ denotes the integer part of $z$.) Specifically, for iid random variables $\xi_1, \ldots, \xi_{m_n}$, independent of $(X_i)_{1 \le i \le n}$, with $E(\xi_1) = 0$ and $\operatorname{Var}(\xi_1) = 1$, the bootstrapped forward estimator can be written as

where $I_j$ denotes the set of indices belonging to the $j$th block. Similarly, the bootstrapped backward estimator for $P(\Theta_t \le x)$ with estimated index of regular variation is

(2.8)

with

(2.9)

If the threshold is high, it may be advisable to construct bootstrap confidence intervals based on lower thresholds and then scale accordingly; see the explanation after Theorem 3.3.
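The exact bootstrapped estimators are given in (2.8) and (2.9). As a rough illustration of the mechanism only, the sketch below perturbs the per-block numerator and denominator sums of the forward estimator by mean-zero multipliers; this is one generic way to implement a multiplier block bootstrap and is not claimed to coincide with the scheme of Drees (2015) in every detail.

```python
import numpy as np

def multiplier_block_bootstrap_forward(x, t, arg, u, block_len, n_boot, rng):
    """Multiplier-block-bootstrap replicates of the forward estimator.

    Per-block numerator and denominator sums are reweighted by 1 + xi_j with
    iid standard normal multipliers xi_j (a generic multiplier scheme; the
    estimators actually used in the paper are defined in (2.8)-(2.9))."""
    x = np.asarray(x, dtype=float)
    x0, xt = np.abs(x[: len(x) - t]), x[t:]
    num = (xt <= arg * x0) & (x0 > u)         # indicator entering the numerator
    den = x0 > u                               # indicator entering the denominator
    m = len(x0) // block_len                   # number of complete blocks
    num_blocks = num[: m * block_len].reshape(m, block_len).sum(axis=1)
    den_blocks = den[: m * block_len].reshape(m, block_len).sum(axis=1)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        w = 1.0 + rng.standard_normal(m)       # multiplier weight per block
        reps[b] = (w @ num_blocks) / (w @ den_blocks)
    return reps
```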

2.3 Testing for dependence of extreme observations

For iid random variables, the spectral tail process simplifies to $\Theta_t = 0$ a.s. for all nonzero $t$. If this occurs for a stationary, regularly varying time series, then we say that the series exhibits serial extremal independence. The opposite case is referred to as serial extremal dependence, i.e., at least one of the variables $\Theta_t$ for $t \neq 0$ is not degenerate at $0$. Since the convergence of the pre-asymptotic distribution can be arbitrarily slow, one cannot formally test for extremal dependence within the present framework.

However, if one wants to test whether the exceedances over a given high threshold are independent, then one may check whether the lower bound of a confidence interval for, say, the probability of observing an exceedance at lag $t$ given an exceedance at time $0$, constructed by one of the bootstrap methodologies, is larger than the value this probability takes under the assumption of exact independence of $X_0$ and $X_t$, which can be calculated explicitly.

If one prefers to work with exceedances of the original time series (instead of its absolute values), then the probability under the assumption of independence depends on the relative weights of the upper and lower tails, and can thus not be calculated analytically. In that case, it seems natural to calculate this probability by Monte Carlo simulation. To this end, one generates (conditionally) iid samples according to the empirical distribution of the original time series by drawing with replacement from the observations, which corresponds to the classical bootstrap procedure for iid data. The probability under independence can then be approximated by the pertaining relative frequency, which is compared with the lower confidence bound for the probability estimated from the original time series. If the latter is larger, this indicates that the exceedances in the time series exhibit non-negligible serial dependence (see Figure 3 in Section 5).

Alternatively, one may compare the pre-asymptotic probability estimated from the observed time series using either the forward or the backward estimator with quantiles of the distribution of this estimator under independence. In a similar way as described above, the latter can be approximated by an empirical quantile obtained in Monte Carlo simulations with (conditionally) iid samples (cf. Figure 5).
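A sketch of the Monte Carlo benchmark under independence described above: the observations are resampled with replacement (the classical iid bootstrap), the statistic of interest is recomputed on each resample, and its value on the original series is compared with the resulting reference distribution. The statistic used here, the relative frequency of an exceedance following an exceedance at lag $t$, and all tuning constants are illustrative assumptions.

```python
import numpy as np

def independence_benchmark(x, estimate_fn, n_rep, rng):
    """Distribution of `estimate_fn` under (conditionally) iid resampling:
    drawing with replacement destroys any serial dependence."""
    x = np.asarray(x)
    return np.array([estimate_fn(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_rep)])

def prob_consecutive_exceedances(y, q=0.95, t=1):
    """Relative frequency of |X_{i+t}| > u given |X_i| > u (illustrative statistic)."""
    u = np.quantile(np.abs(y), q)
    y0, yt = np.abs(y[:-t]), np.abs(y[t:])
    return np.mean(yt[y0 > u] > u)

rng = np.random.default_rng(2)
x = rng.standard_t(df=4, size=2000)                      # placeholder data (assumption)
ref = independence_benchmark(x, prob_consecutive_exceedances, n_rep=500, rng=rng)
print(prob_consecutive_exceedances(x), np.quantile(ref, 0.95))
```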

3 Large-sample theory

Under certain conditions, the standardized estimation errors of the forward and the backward estimators converge jointly to a centered Gaussian process (Section 3.1). Convergence of the multiplier block bootstrap follows under the same conditions (Section 3.2).

In order not to overload the presentation, we focus on nonnegative time series and briefly indicate how the conditions and results must be modified in the real-valued case. In addition, we distinguish between the cases where $\alpha$ is known and where it is unknown.

3.1 Asymptotic normality of the estimators

Suppose first that the index of regular variation, $\alpha$, is known. All estimators under consideration can then be expressed in terms of generalized tail array sums; these are statistics of the form

(3.1)

Drees and Rootzén (2010) give conditions under which, after standardization, such statistics converge to a centered Gaussian process, uniformly over appropriate families of functions . From these results we will deduce a functional central limit theorem for the processes of forward and backward estimators defined in (2.1) and (2.5), respectively.

To ensure consistency, the threshold $u_n$ must tend to infinity in such a way that the exceedance probability $v_n := P(|X_0| > u_n)$ tends to $0$, but the expected number, $n v_n$, of exceedances tends to infinity. Moreover, we have to ensure that observations which are sufficiently separated in time are almost independent. The strength of dependence will be assessed by the $\beta$-mixing coefficients

Here $\mathcal{B}_i^j$ denotes the $\sigma$-field generated by $X_i, \ldots, X_j$.

We assume that there exist sequences $r_n, \ell_n \to \infty$ and some $x_0 > 0$ such that the following conditions hold:

(A($x_0$))

The cdf of $\Theta_t$ is continuous on $[x_0, \infty)$, for all $t \in \{1, \ldots, t_0\}$.

(B)

As , we have , , , , and .

Condition (B) imposes restrictions on the rate at which $v_n$ tends to $0$ and thus on the rate at which $u_n$ tends to $\infty$. Often, the $\beta$-mixing coefficients decay geometrically, i.e., $\beta_k = O(\eta^k)$ for some $\eta \in (0,1)$. Then one may choose $\ell_n$ of logarithmic order, and Condition (B) is fulfilled for a suitably chosen $r_n$ under mild rate conditions on $v_n$.

(C)

For all , there exists

such that exists and .

Under these conditions, one can prove the asymptotic normality of the relevant generalized tail array sums (see Proposition 6.1 below) and thus the joint asymptotic normality of the forward and the backward estimators of $P(\Theta_t \le x)$, centered by their expectations. However, additional conditions are needed to ensure that their biases are asymptotically negligible:

(3.2)
(3.3)

for all $x \ge x_0$ and $t \in \{1, \ldots, t_0\}$. Here $\bar{F}$ denotes the survival function of $|X_0|$. These conditions are fulfilled if $u_n$ tends to infinity sufficiently slowly, because, by the definition of the spectral tail process and by (2.2), the left-hand sides in (3.2)–(3.3) tend to $0$ if the cdf of $\Theta_t$ is continuous on $[x_0, \infty)$.

Theorem 3.1.

Let $(X_t)_{t \in \mathbb{Z}}$ be a stationary, regularly varying process. If Conditions (A($x_0$)), (B), (C), (3.2) and (3.3) are fulfilled for some $x_0 > 0$, then

where the limit is a centered Gaussian process, indexed by the functions defined in (6.2), whose covariance function is given in (6.3).

If the conditions of Theorem 3.1 hold except for the bias conditions (3.2) and (3.3), then the convergence still holds true if one centers the estimators with the corresponding pre-asymptotic values instead.

Suppose next that the index of regular variation, $\alpha$, is unknown. In the definition of the backward estimator, it must then be replaced by an estimator, e.g., by the Hill-type estimator (2.6). To prove asymptotic normality of the resulting backward estimator, defined in (2.7), we have to sharpen Condition (C).

(C’)

For all , there exists

such that exists and .

Moreover, there exists such that

In addition, we need a condition to ensure that the bias of the Hill estimator is asymptotically negligible:

(3.4)
Theorem 3.2.

Let $(X_t)_{t \in \mathbb{Z}}$ be a stationary, regularly varying process. If Conditions (A($x_0$)), (B), (C') and (3.2)–(3.4) are fulfilled for some $x_0 > 0$, then

where the limit is the same centered Gaussian process as in Theorem 3.1.

3.2 Consistency of the multiplier block bootstrap

Here we discuss the asymptotic behavior of the multiplier block bootstrap versions of the forward and backward estimators. For the sake of brevity, we focus on estimators of $P(\Theta_t \le x)$ for a fixed lag $t$ in the case when $\alpha$ is unknown.

Drees (2015) has shown convergence of bootstrap versions of empirical processes of tail array sums under the same conditions needed for convergence of the original empirical processes. Let $P^*$ denote the probability w.r.t. the multipliers $(\xi_j)$, i.e., the conditional probability given $(X_i)_{1 \le i \le n}$.

Theorem 3.3.

Let $\xi_j$, $1 \le j \le m_n$, be iid random variables independent of $(X_i)_{1 \le i \le n}$ with $E(\xi_1) = 0$ and $\operatorname{Var}(\xi_1) = 1$. Then, under the conditions of Theorem 3.2, for all $x \ge x_0$,

in probability.

In particular, if the endpoints of the interval are chosen as suitable quantiles of the bootstrap distribution of the estimator, then the resulting interval is a confidence interval for $P(\Theta_t \le x)$ with approximate coverage level. However, if the number of exceedances over a given threshold is too small, one may prefer to construct confidence intervals based on bootstrap estimators corresponding to lower thresholds. Let $\tilde{u}_n$ denote another threshold sequence, let $\tilde{v}_n := P(|X_0| > \tilde{u}_n)$ denote the corresponding exceedance probabilities, and let the backward estimator and the bootstrap version thereof based on the exceedances over $\tilde{u}_n$ be defined accordingly. The conditional distribution of

given the data is approximately the same as the unconditional distribution of

Hence, if the bootstrap quantiles are chosen appropriately and if a suitable estimator of the ratio of the exceedance probabilities is available, then

(3.5)

is a confidence interval for $P(\Theta_t \le x)$ with approximate coverage probability. In practice, one will often use large order statistics as thresholds, say the $k$-th and the $\tilde{k}$-th largest observations, respectively. In that case, the ratio of exceedance probabilities can be estimated by $k/\tilde{k}$. A similar approach, namely to use a variance estimator which is based on a lower threshold, has successfully been employed in Drees (2003, Section 5).

Of course, confidence intervals based on the bootstrap version of the forward estimator can be constructed analogously.
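In practice, a two-sided interval is read off from the empirical quantiles of the bootstrap replicates. The sketch below shows the basic-bootstrap construction as one possible choice; it does not include the threshold rescaling of (3.5), which additionally involves an estimated ratio of exceedance probabilities.

```python
import numpy as np

def bootstrap_ci(estimate, boot_replicates, level=0.95):
    """Two-sided confidence interval from bootstrap replicates of an estimator.

    Uses the 'basic' bootstrap: the quantiles of (replicate - estimate) are
    reflected around the point estimate. A plain percentile interval would
    instead return the quantiles of the replicates directly."""
    boot_replicates = np.asarray(boot_replicates, dtype=float)
    alpha = 1.0 - level
    lo, hi = np.quantile(boot_replicates - estimate, [alpha / 2, 1 - alpha / 2])
    return estimate - hi, estimate - lo

# Example usage with replicates from either bootstrap scheme:
# ci = bootstrap_ci(point_estimate, replicates, level=0.95)
```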

Remark 3.4.

It is possible to generalize Theorem 3.3 to cover the joint limit distribution of the bootstrap estimators for all $x$ and $t$. Technically, this requires endowing the space of probability measures on spaces of bounded functions with a metric that induces weak convergence, e.g., the bounded Lipschitz metric. This is the approach taken in Drees (2015) to establish the consistency of a bootstrap method for estimating the extremogram, a close cousin of the tail process. For brevity, we omit the details.

Remark 3.5.

For time series which are not necessarily positive, the forward and backward estimators of $P(\Theta_t \le x)$ can be represented in terms of generalized tail array sums constructed from

When $x > 0$, for example, the backward estimator is equal to the ratio of the generalized tail array sums pertaining to the functions

Limit theorems can be obtained by the same methods as in the positive case under obvious analogues to Conditions (A($x_0$)), (B) and (C) or (C'), respectively.

4 Finite-sample performance

In Section 4.1, we show results from a numerical simulation study designed to test the performance of the forward (2.1) and the backward (2.7) estimators. We continue in Section 4.2 by evaluating the performance of two bootstrap schemes, the multiplier block bootstrap and the stationary bootstrap, described in Section 2.2.

The simulations are based on pseudo-random samples from two widely used models for financial time series. Both models are of the form $X_t = \sigma_t Z_t$, where the volatility $\sigma_t$ and the innovation $Z_t$ are independent. First, we consider the GARCH(1,1) model with $\sigma_t^2 = \alpha_0 + \alpha_1 X_{t-1}^2 + \beta_1 \sigma_{t-1}^2$, the innovations $Z_t$ being independent random variables, standardized to have unit variance. The second model is the stochastic volatility (SV) process, in which $\log \sigma_t$ is a Gaussian linear process with independent standard normal innovations and the $Z_t$ are independent heavy-tailed innovations with a common distribution. The parameters have been chosen to ensure that both time series are regularly varying with a common index of regular variation (Davis and Mikosch, 2001; Mikosch and Stărică, 2000).
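The parameter values used in the simulation study are not reproduced in this excerpt. The sketch below simulates the two model classes with placeholder parameters; the Student-t innovation distribution for the GARCH model and the AR(1) log-volatility for the SV model are assumptions made for illustration only.

```python
import numpy as np

def simulate_garch11(n, omega, alpha1, beta1, nu, rng, burn=1000):
    """GARCH(1,1): X_t = sigma_t Z_t with sigma_t^2 = omega + alpha1 X_{t-1}^2 + beta1 sigma_{t-1}^2
    and Student-t innovations standardized to unit variance (distribution is an assumption)."""
    z = rng.standard_t(nu, size=n + burn) / np.sqrt(nu / (nu - 2.0))
    x = np.zeros(n + burn)
    sigma2 = omega / (1.0 - alpha1 - beta1)        # start at the stationary variance
    for i in range(1, n + burn):
        sigma2 = omega + alpha1 * x[i - 1] ** 2 + beta1 * sigma2
        x[i] = np.sqrt(sigma2) * z[i]
    return x[burn:]

def simulate_sv(n, phi, sigma_eta, nu, rng, burn=1000):
    """Stochastic volatility: X_t = exp(h_t / 2) Z_t with a Gaussian AR(1) log-volatility h_t
    and heavy-tailed Student-t innovations Z_t (parametrization is an assumption)."""
    eta = rng.standard_normal(n + burn) * sigma_eta
    h = np.zeros(n + burn)
    for i in range(1, n + burn):
        h[i] = phi * h[i - 1] + eta[i]
    z = rng.standard_t(nu, size=n + burn)
    return (np.exp(h / 2.0) * z)[burn:]

rng = np.random.default_rng(3)
x_garch = simulate_garch11(2000, omega=1e-5, alpha1=0.1, beta1=0.85, nu=5, rng=rng)
x_sv = simulate_sv(2000, phi=0.9, sigma_eta=0.3, nu=4, rng=rng)
```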

4.1 Forward and backward estimators

We estimate $P(\Theta_t \le x)$ for both the GARCH(1,1) and the SV model, for various arguments $x$ and lags $t$, via the forward and the backward estimator, with estimated tail index $\hat{\alpha}_n$. The threshold is set at a high empirical quantile of the absolute values of the simulated time series. We perform a large number of Monte Carlo repetitions and calculate bias, standard deviation, and root mean squared error (RMSE) with respect to the pre-asymptotic values in the forward representation. The true quantile and the true pre-asymptotic values were calculated numerically via Monte Carlo simulations based on much longer time series.

It was already reported in the context of Markovian time series that, for large values of $|x|$, the backward estimators usually have a smaller variance than the forward estimators (Drees et al., 2015). Here, numerical simulations suggest that this is true for non-Markovian time series and for higher lags as well. The results are presented in Figure 1.


Figure 1: Performance of the forward and the backward estimators: bias (left), standard deviation (middle), and ratio of root mean squared errors (right), all with respect to the pre-asymptotic values in the forward representation, for the GARCH(1,1) model (top) and the SV model (bottom).

The right column, which shows the RMSE of the backward estimator divided by the RMSE of the forward estimator (both with respect to the pre-asymptotic values of the spectral tail process in the forward representation), shows that the backward estimator outperforms the forward estimator if $x$ is sufficiently large in absolute value. This phenomenon was also observed at other lags (not shown). For some other models, however, such as certain stochastic recurrence equation models or copula Markov models, the advantage of the backward estimator was observed only for smaller lags.

4.2 Bootstrapped spectral tail process

We assess the performance of the two bootstrap schemes, the stationary bootstrap and the multiplier block bootstrap. To do so, we estimate the coverage probability of the bootstrapped confidence intervals with respect to the true pre-asymptotic spectral tail process in the forward representation. We focus on probabilities of the form $P(|\Theta_t| > 1)$. This particular value can be of interest due to its interpretation as the probability of a shock being followed by an even larger aftershock, i.e., $|X_t|$ being larger than $|X_0|$, conditionally on $|X_0|$ already exceeding some threshold. The true pre-asymptotic values in the forward representation were calculated numerically via Monte Carlo simulations with long time series.
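The coverage probabilities reported in this subsection are obtained by a simulation loop of the following type; the simulator, the reference value, and the interval constructor are left as placeholders, so the sketch only fixes the bookkeeping.

```python
import numpy as np

def estimate_coverage(simulate, reference_value, build_ci, n_rep, rng):
    """Monte Carlo estimate of the coverage probability of a CI procedure.

    simulate(rng)      -> one simulated sample path
    reference_value    -> pre-asymptotic target value (computed beforehand from
                          a very long simulation of the same model)
    build_ci(x, rng)   -> (lower, upper) bootstrap confidence bounds for that target
    """
    hits = 0
    for _ in range(n_rep):
        x = simulate(rng)
        lo, hi = build_ci(x, rng)
        hits += (lo <= reference_value <= hi)
    return hits / n_rep
```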



Figure 2: Coverage probabilities of confidence intervals (left: forward estimator; right: backward estimator) based on the stationary bootstrap (gray) and the multiplier block bootstrap (black). The top and the middle plots correspond to the GARCH model with the threshold set at a lower and a higher empirical quantile, respectively. The bottom plots correspond to the SV simulation study with the threshold set at the higher empirical quantile. In the latter two cases, the dashed lines correspond to the coverage probabilities of the rescaled confidence intervals (3.5). The horizontal black line is the reference line.

In Figure 2, we plot the results for the GARCH model and for the SV model, for the forward and the backward estimators. The expected block size for the stationary bootstrap (represented by gray lines) and the fixed block size for the multiplier block bootstrap (black lines) were chosen in advance, and the multiplier variables were drawn independently from the standard normal distribution. Estimates of the coverage probabilities are based on a large number of simulated samples. In each such sample, we use a fixed number of bootstrap samples for calculating the confidence intervals at a given nominal coverage probability. We use two different thresholds, namely a lower and a higher empirical quantile of the absolute values of the simulated time series. For the higher threshold, the confidence intervals were calculated either directly (indicated by the solid lines) or using a rescaled bootstrap estimator based on the exceedances over the lower quantile as in (3.5) (dashed lines).

In all cases, the multiplier block bootstrap produces better coverage probabilities than the stationary bootstrap. Moreover, the backward estimator is more stable than the forward one, at least for large values of $x$, and this translates into higher stability of the bootstrapped confidence intervals. The effect is especially visible for higher thresholds, which leave insufficiently many pairs of exceedances for accurate inference. Finally, rescaled confidence intervals (3.5) based on lower thresholds can have much better coverage than confidence intervals based directly on higher thresholds.

5 Application

We first consider daily log-returns on the S&P500 stock market index between 1995-01-01 and 2005-01-01, taken from Yahoo Finance (http://finance.yahoo.com/). In Figure 3, we plot the sample spectral tail process based on the backward estimator, with a high empirical quantile taken as threshold, and the confidence intervals from the multiplier bootstrap scheme rescaled via a lower quantile threshold as in (3.5). The index of regular variation was estimated by the Hill-type estimator (2.6). The left-hand plot corresponds to conditioning on a positive extreme at the current time instant, whereas the right-hand plot corresponds to conditioning on a negative shock. The former plot indicates much weaker serial extremal dependence than the latter one: negative shocks are more persistent than positive ones. This characteristic is shared by the daily returns of other stocks which we tested but do not report here.

Figure 3: Sample spectral tail process (solid black line) for S&P500 daily log-returns based on the backward estimator. The gray area corresponds to the confidence intervals for the pre-asymptotic spectral tail probabilities based on the multiplier bootstrap. The horizontal dashed line corresponds to the pre-asymptotic spectral tail probabilities under independence.

Consider two widely used financial models of the type $X_t = \sigma_t Z_t$: first, the GARCH(1,1) process, where

$$\sigma_t^2 = \omega + \alpha_1 X_{t-1}^2 + \beta_1 \sigma_{t-1}^2,$$

and second, the APARCH(1,1) process (Ding et al., 1993) with

$$\sigma_t^\delta = \omega + \alpha_1 \bigl( |X_{t-1}| - \gamma X_{t-1} \bigr)^\delta + \beta_1 \sigma_{t-1}^\delta.$$
Both models allow for volatility clustering. Additionally, the APARCH model captures asymmetry in the volatility of returns: if $\gamma > 0$, volatility tends to increase more when returns are negative than when they are positive with the same magnitude. The asymmetric response of volatility to positive and negative shocks is well known in the finance literature as the leverage effect of stock market returns (Black, 1976).
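The asymmetry enters through the news-impact term $(|X_{t-1}| - \gamma X_{t-1})^\delta$: for $\gamma > 0$, a negative return increases next-period volatility more than a positive return of the same magnitude. The snippet below simply evaluates this term for illustrative (assumed) parameter values.

```python
import numpy as np

def aparch_news_impact(x_prev, alpha1, gamma, delta):
    """APARCH contribution of yesterday's return to sigma_t^delta."""
    return alpha1 * (np.abs(x_prev) - gamma * x_prev) ** delta

# With gamma > 0, a negative shock raises volatility more than a positive one:
print(aparch_news_impact(-0.02, alpha1=0.08, gamma=0.5, delta=1.5),
      aparch_news_impact(+0.02, alpha1=0.08, gamma=0.5, delta=1.5))
```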

We fit these two models to daily log-returns of the S&P500 index. We use the garchFit function from the fGarch library in R, which is based on maximum likelihood estimation (Wuertz et al., 2013). The innovations, $Z_t$, are assumed to be standard normally distributed. The fitted parameters are given in the top part of Table 1.

Table 1: Fitted parameters of the GARCH(1,1) and APARCH(1,1) models for the S&P500 and P&G daily log-returns.