Abstract
This paper considers quantile regression for a wide class of time series models including ARMA models with asymmetric GARCH (AGARCH) errors. The classical mean-variance models are reinterpreted as conditional location-scale models so that the quantile regression method can be naturally applied to the considered models. The consistency and asymptotic normality of the quantile regression estimator are established in location-scale time series models under mild conditions. In the application of this result to ARMA-AGARCH models, more primitive conditions are deduced to obtain the asymptotic properties. For illustration, a simulation study and a real data analysis are provided.
Abstract (supplementary material)
The aim of this supplementary material is to provide the proofs of Lemma A.3, Lemmas 1–2, and Theorems 3–4 used for obtaining the results stated in the main article.
Quantile Regression for Location-Scale Time Series Models with Conditional Heteroscedasticity
Jungsik Noh¹ and Sangyeol Lee²
¹Quantitative Biomedical Research Center, Department of Clinical Sciences, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA. Email: nohjssunny@gmail.com
²Department of Statistics, Seoul National University, Seoul 151-747, Korea. Email: sylee@stats.snu.ac.kr
University of Texas Southwestern Medical Center
Seoul National University
Revised February 28, 2015
MSC2010 subject classifications: Primary 62M10; secondary 62F12.
Key words and phrases: Quantile regression, conditional location-scale time series models, ARMA-AGARCH models, CAViaR models, consistency, asymptotic normality, identifiability condition.
Abbreviated title: Quantile regression for location-scale time series models
1 Introduction
Quantile regression, introduced by Koenker &amp; Bassett (1978), generalizes the notion of sample quantiles to linear and nonlinear regression models, including least absolute deviation estimation as a special case. The method provides an estimation of conditional quantile functions at any probability level, and it is well known that the family of estimated conditional quantiles sheds new light on the impact of covariates on the conditional location, scale, and shape of the response distribution: see Koenker (2000). Quantile regression has been widely used to analyze time series data as an alternative to the least squares method (see Fitzenberger et al. 2002; Koenker 2005), since it is not only robust to heavy tails but also allows a flexible analysis of the covariate effects. In risk management especially, it is a useful tool for calculating the value-at-risk (VaR). Quantile regression has been studied in linear and nonlinear autoregressive models by Bloomfield &amp; Steiger (1983), Weiss (1991), Koul &amp; Saleh (1995), and Davis &amp; Dunsmuir (1997): see also Koenker &amp; Zhao (1996) and Xiao &amp; Koenker (2009), who handled 'linear' autoregressive conditional heteroscedasticity (ARCH) and generalized ARCH (GARCH) models, and Lee &amp; Noh (2013), who considered ordinary GARCH models. Engle &amp; Manganelli (2004) considered the quantile regression method for a broad class of time series models and designated the conditional autoregressive VaR (CAViaR) model. Although the results of Engle &amp; Manganelli (2004) are applicable to a wide class of time series models, the CAViaR specification therein mainly focuses on the case of pure volatility models, as pointed out by Kuester et al. (2006) and Schaumburg (2012). Unlike the previous studies dealing with models having either conditional location or scale components, in this study we take an approach that simultaneously estimates the conditional mean and variance through the quantile regression method.
Koenker &amp; Bassett (1982) and Koenker &amp; Xiao (2002) explored the quantile regression for location-scale models without autoregressive structure and proposed a robust test for heteroscedasticity.
This paper focuses on the quantile regression for a wide class of conditional location-scale time series models, including ARMA models with asymmetric GARCH (AGARCH) errors, in which the dynamic relation between current and past observations is characterized in terms of a conditional mean and variance structure. Typically, the conditional mean is assumed to follow either an AR or ARMA type model and the conditional volatility is assumed to follow a GARCH type model (Bollerslev 2008). Here, we demonstrate that the quantile regression can be extended to conditional location-scale models rather than mean-variance models through a slight modification, and as such, the estimation of the conditional location and scale can be properly carried out. More precisely, to activate the proposed method, we remove the constraints imposed on the mean and variance of the model innovations and reformulate the mean-variance model as the conditional location-scale model described in Section 2.2. It is noteworthy that the reformulated models used for the quantile regression estimation are exactly the same as those in (1.3) of Newey &amp; Steigerwald (1997), who pointed out that non-Gaussian quasi-maximum likelihood (QML) estimators may be inconsistent in the usual conditional mean-variance models and instead proposed location-scale models to remedy an asymptotic bias effect. From this angle, it may be mentioned that our quantile regression method is comparable with other estimation methods like the Gaussian and non-Gaussian QML estimation methods.
In this study, we intend to verify the strong consistency and asymptotic normality of quantile regression estimators in general conditional location-scale time series models. Particularly, in the derivation of the consistency, one has to overcome the difficulty caused by the lack of smoothness of the quantile regression loss function. To resolve this problem, we adopt the idea of Huber (1967) and Pollard (1985) and extend Lemma 3 of Huber (1967) to stationary and ergodic time series cases; see Section 2.3 and Lemma A.1 in the Appendix for details. To apply the obtained results in general models to the ARMA-AGARCH model, we deduce certain primitive conditions leading to the desired asymptotic properties. Here, the task of checking the identifiability condition appears to be quite demanding and accordingly a newly designed technique is proposed: see Remark 3 below.
In comparison to Engle &amp; Manganelli (2004), our approach has merit in its own right. First, a weaker moment condition is used to obtain the asymptotic normality: for instance, in the ARMA-AGARCH model, only a finite second moment condition is required while a third moment condition is demanded in their paper. Second, more basic conditions such as strict stationarity and ergodicity of models are assumed in our case rather than the law of large numbers and central limit theorems assumed in their paper: however, more general data generating processes are considered therein. Third, our parametrization of conditional quantile functions exhibits a more explicit relationship with the parametrization of original models. Finally, a general identifiability condition is provided for the ARMA-AGARCH model and is rigorously verified.
The rest of this article is organized as follows. In Section 2, we introduce the general conditional location-scale time series models and establish the asymptotic properties of the quantile regression estimator. In Section 3, we verify the conditions for the strong consistency and asymptotic normality in the ARMA-AGARCH model. In Section 4, we report the finite sample performance of the estimator in comparison with the Gaussian QMLE. In Section 5, we demonstrate the validity of our method by analyzing the daily returns of the Hong Kong Hang Seng Index. All the proofs are provided in the Appendix and the supplementary material.
2 Quantile regression estimation of conditional heteroscedasticity models
2.1 An example: reparameterized AR($p$)-ARCH($q$) models
Before we proceed to general conditional location-scale models (see (4) below), we first illustrate conditional quantile estimation for the AR($p$)-ARCH($q$) model:
$$y_t = a_0 + \sum_{i=1}^{p} a_i y_{t-i} + \varepsilon_t, \qquad \varepsilon_t = \sigma_t \eta_t, \qquad \sigma_t^2 = b_0 + \sum_{j=1}^{q} b_j \varepsilon_{t-j}^2, \tag{1}$$
where $\{\eta_t\}$ are i.i.d. random variables with $E\eta_t = 0$ and $E\eta_t^2 = 1$. In what follows, we denote by $\mathcal{F}_t$ the $\sigma$-field generated by $\{y_s : s \le t\}$. Provided that $\eta_t$ is independent of $\mathcal{F}_{t-1}$, the $\tau$-th conditional quantile of $y_t$ given $\mathcal{F}_{t-1}$ can be expressed as
$$Q_\tau(y_t \mid \mathcal{F}_{t-1}) = a_0 + \sum_{i=1}^{p} a_i y_{t-i} + \sigma_t\, F_\eta^{-1}(\tau), \tag{2}$$
where $F_\eta^{-1}(\tau)$ denotes the $\tau$-th quantile of $\eta_t$. Since the $\tau$-th quantile of $\eta_t$ is unknown, it is apparent that the parameters in (2) are not identifiable. As in Lee &amp; Noh (2013), this problem can be overcome by reparameterizing the ARCH component as follows:
$$\sigma_t^2 = b_0\Big(1 + \sum_{j=1}^{q} \gamma_j \varepsilon_{t-j}^2\Big) = b_0\, h_t^2,$$
with $\gamma_j = b_j/b_0$, $h_t = (1 + \sum_{j=1}^{q}\gamma_j \varepsilon_{t-j}^2)^{1/2}$, and $\xi_t = \sqrt{b_0}\,\eta_t$. Here, $h_t$ is only proportional to the conditional standard deviation, and thus, can be interpreted as a conditional scale: this reparameterization procedure expresses the ARCH model as a conditional scale model with no scale constraints on the i.i.d. innovations. The conditional quantile in this case is then expressed as
$$Q_\tau(y_t \mid \mathcal{F}_{t-1}) = a_0 + \sum_{i=1}^{p} a_i y_{t-i} + q\, h_t, \qquad q = F_\xi^{-1}(\tau), \tag{3}$$
wherein the parameters can be shown to be identifiable: see Lemma 1, which deals with more general ARMA-AGARCH models.
In fact, the moment condition on $\eta_t$ in (1) is not necessarily required to deduce the conditional quantile function, since the conditional quantile specification in (3) is also valid for the AR($p$)-ARCH($q$) model without assuming this condition. As seen in Section 2.3, conditional quantile estimators and their asymptotic properties are irrelevant to the location constraint on the innovations, and thus, the zero-mean condition is not needed for estimating conditional quantiles. An analogous approach will be taken to handle the quantile regression for general location-scale models.
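As a concrete illustration of the specification in (3), the following sketch evaluates the conditional quantile of a reparameterized AR(1)-ARCH(1) model; the parameter names and values here are hypothetical, chosen only to show the location-plus-scaled-quantile form.

```python
import numpy as np

# Sketch of the conditional quantile in (3) for a reparameterized
# AR(1)-ARCH(1) model; parameter names and values are illustrative.
def cond_quantile_ar1_arch1(y_prev, eps_prev, a0, a1, gamma, q):
    # conditional scale, proportional to the conditional standard deviation
    scale = np.sqrt(1.0 + gamma * eps_prev**2)
    # Q_tau(y_t | past) = conditional location + q * conditional scale,
    # where q is the tau-th quantile of the unconstrained innovation
    return a0 + a1 * y_prev + q * scale
```

With `eps_prev = 0` the scale reduces to 1, so the quantile is simply the conditional location shifted by the innovation quantile `q`.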
2.2 Conditional location-scale models
Let us consider the general conditional location-scale model of the form:
$$y_t = g_t(\theta_0) + h_t(\theta_0)\, \eta_t, \tag{4}$$
where $g_t(\theta)$ and $h_t(\theta)$ respectively denote $g_\theta(y_{t-1}, y_{t-2}, \ldots)$ and $h_\theta(y_{t-1}, y_{t-2}, \ldots)$ for some measurable functions $g_\theta$ and $h_\theta$; $\theta_0$ denotes the true model parameter; $\Theta$ is a model parameter space; $\{\eta_t\}$ are i.i.d. random variables with an unknown common distribution function $F_\eta$.
Many conditionally heteroscedastic time series models can be described by the autoregressive representation addressed in (4). For example, the reparameterized AR($p$)-ARCH($q$) model in Section 2.1 can be expressed as a form of (4) with $g_t(\theta) = a_0 + \sum_{i=1}^{p} a_i y_{t-i}$ and $h_t(\theta) = (1 + \sum_{j=1}^{q}\gamma_j \varepsilon_{t-j}^2)^{1/2}$. Further, it can be readily seen that invertible ARMA models and stationary GARCH models also admit the form of (4): see Theorem 2.1 of Berkes et al. (2003) for the latter. In Section 3, the ARMA-AGARCH model will be expressed as a form of (4).
In order to facilitate the conditional quantile estimation, model (4) is assumed to be a reparameterized version of the time series models as discussed in Section 2.1, and as such, the innovation distribution is not assumed to have zero mean and unit variance, and $h_t(\theta_0)$ is interpreted as a relative conditional scale rather than a conditional standard deviation. However, for the ARMA-AGARCH models treated in Sections 3–5, we adopt the standard parameterization, considering its popularity in practice.
In what follows, the following conditions are presumed:

 (M1)
$\{y_t\}$ satisfying (4) is strictly stationary and ergodic.

 (M2)
$\eta_t$ is independent of $\mathcal{F}_{t-1}$ for each $t$.
Conditions (M1) and (M2) hold for a broad class of time series models. For example, Bougerol &amp; Picard (1992) verified that the GARCH model is strictly stationary if and only if its Lyapunov exponent is negative, which actually entails (M2). Straumann &amp; Mikosch (2006) provided sufficient conditions for the stationarity and ergodicity in general conditional variance models. Meitz &amp; Saikkonen (2008) provided such conditions in nonlinear AR models with GARCH errors. In Section 3, we specify some conditions for the ARMA-AGARCH model to admit the autoregressive representation in (4) and also to satisfy (M1) and (M2).
Under (M2), the $\tau$-th quantile of $y_t$ conditional on the past observations is given by $g_t(\theta_0) + q_0\, h_t(\theta_0)$ with $q_0 = F_\eta^{-1}(\tau)$, wherein the innovation quantile $q_0$ appears as a new parameter. We denote by $\beta_0 = (\theta_0', q_0)'$ the true parameter vector. Note that the conditional quantile can be expressed as a function of the infinite number of past observations and parameter $\beta_0$. Then, taking into consideration the form of the conditional quantile, given the stationary solution to model (4) and a parameter vector $\beta = (\theta', q)'$, we introduce conditional quantile functions:
$$q_t(\beta) = g_t(\theta) + q\, h_t(\theta), \tag{5}$$
where $\beta$ is a parameter within a domain that allows the above autoregressive representation. In practice, since the infinite past of $\{y_t\}$ is unobservable, we cannot obtain $q_t(\beta)$. Thus, we approximate it with an observable version $\tilde q_t(\beta)$. A typical example is the one where all past values $y_s$ with $s \le 0$ are put to be 0: see Pan et al. (2008). One can also use a model specific approximation as in Section 3. Then, the $\tau$-th quantile regression estimator of $\beta_0$ for model (4) is defined by
$$\hat\beta_n = \operatorname*{argmin}_{\beta \in B}\ \sum_{t=1}^{n} \rho_\tau\big(y_t - \tilde q_t(\beta)\big), \tag{6}$$
where $B$ is a parameter space, $\rho_\tau(u) = u\{\tau - I(u < 0)\}$ is the check function, and $I(\cdot)$ denotes the indicator function.
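The defining property of the check loss is that its expected value is minimized at the $\tau$-th quantile. A minimal sketch of this fact, fitting a constant (location-only) conditional quantile by brute-force minimization over the sample points rather than the paper's numerical optimizer:

```python
import numpy as np

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - I(u < 0)), the quantile regression loss
    return u * (tau - (u < 0.0))

def constant_quantile_fit(y, tau):
    # Minimize the empirical check loss over constant fits; the minimizer
    # is attained at one of the sample points (a sample tau-quantile).
    losses = [np.sum(check_loss(y - c, tau)) for c in y]
    return y[int(np.argmin(losses))]

rng = np.random.default_rng(0)
y = rng.standard_normal(500)
q_hat = constant_quantile_fit(y, 0.25)
```

At the minimizer, the fraction of observations strictly below `q_hat` is at most $\tau$ and the fraction at or below is at least $\tau$, which is exactly the sample-quantile property.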
2.3 Asymptotic properties of quantile regression estimators
In this subsection, we show the strong consistency and asymptotic normality of the quantile regression estimator defined in (6). The result is applicable to various mean-variance time series models including the ARMA-AGARCH model handled in Section 3. The asymptotic properties are proved by utilizing the affinity between the conditional quantile function in (5) and its approximation in (6), similarly to the case of the QML estimator in GARCH-type models: see Berkes et al. (2003), Francq &amp; Zakoïan (2004), Straumann &amp; Mikosch (2006), Lee &amp; Lee (2012), and the references therein. However, the asymptotic normality is derived in a nonstandard situation, as discussed below, owing to the nondifferentiability of the loss function $\rho_\tau$.
In what follows, $\|\cdot\|$ denotes the Euclidean norm for vectors and matrices. To verify the consistency of $\hat\beta_n$, we introduce the following assumptions:
 (C1)
The $\tau$-th quantile $q_0$ of $\eta_t$ is unique, that is, $F_\eta(x) < \tau < F_\eta(y)$ for all $x < q_0 < y$.

 (C2)
$\beta_0$ belongs to $B$, which is a compact subset of a Euclidean space.

 (C3)
(i) $q_t(\beta)$ is continuous in $\beta$ a.s.; (ii) $E\sup_{\beta \in B} |q_t(\beta)| < \infty$.

 (C4)
If $q_t(\beta) = q_t(\beta_0)$ a.s. for some $\beta \in B$ and some $t$, then $\beta = \beta_0$.

 (C5)
There exists a positive constant $\underline{h}$ such that $h_t(\theta) \ge \underline{h}$ a.s. for all $\theta$.

 (C6)
$\sup_{\beta \in B} |\tilde q_t(\beta) - q_t(\beta)| \to 0$ a.s. as $t \to \infty$.
Theorem 1.
Suppose that assumptions (M1), (M2), and (C1)–(C6) hold for model (4). Then, $\hat\beta_n \to \beta_0$ a.s. as $n \to \infty$.
It can be seen that the conditional quantile process in (5) is strictly stationary and ergodic for each $\beta$ (see Proposition 2.5 of Straumann &amp; Mikosch 2006), while its approximation in (6) is not so, since it is recursively defined with given initial values. Assumptions (C2), (C3) and (C6) are needed to show the uniform convergence of the objective function in (6), based on the ergodic theorem of Straumann &amp; Mikosch (2006). It can be shown from assumptions (C1), (C4) and (C5) that the a.s. limit of the objective function is uniquely minimized at $\beta_0$. Conventionally, all the assumptions except for the identifiability assumption (C4) are easy to check from the existing literature. The ARMA-AGARCH model is shown in Section 3 to satisfy these conditions, based on Brockwell &amp; Davis (1991) and Pan et al. (2008).
Below, we discuss two issues as to (C4). First, when $q_0 = 0$, (C4) is not satisfied for heteroscedastic models. It is because the parameters involved in the conditional scale are not identifiable, as seen in the GARCH model: see Remark 3 of Lee &amp; Noh (2013). If $q_0 = 0$, only the parameters in the conditional location are estimable, using the results of Weiss (1991), who proposed the conditional median estimation for some models similar to (4) in this case. This indicates that the $\tau$-th conditional quantile estimation for heteroscedastic models requires a different conditional quantile specification at some $\tau$, usually the one corresponding to a center of locations.
Secondly, the verification of (C4) is nonstandard even in the AR($p$)-ARCH($q$) model, see (3). It is mainly because the conditional quantile function (5) is a nonlinear function of parameters. We verify (C4) for the ARMA-AGARCH model in Lemma 1 below by using the method introduced in Noh &amp; Lee (2013), which may be applicable to models other than the ARMA-AGARCH model.
Turning to the asymptotic normality issue of the quantile regression estimator $\hat\beta_n$, notice that the objective function in (6) is not twice differentiable with respect to $\beta$ even if the conditional quantile function is smooth, and thus, a second order Taylor's expansion is not applicable. This lack of smoothness in the quantile regression is often overcome by using the empirical process techniques: see, for instance, Jurečková &amp; Procházka (1994) and Xiao &amp; Koenker (2009). Huber (1967) designed a method to derive the asymptotic normality under nonstandard conditions and Pollard (1985) recast this method using the empirical process techniques. Weiss (1991), Engle &amp; Manganelli (2004), and Komunjer (2005) applied the method of Huber (1967) to the nonlinear quantile regression for mixing observations. Zhu &amp; Ling (2011) and Lee &amp; Noh (2013) also employed the method of Pollard (1985) for analyzing stationary processes.
When the objective function is nonconvex and nondifferentiable, it is often complicated to verify the rate of convergence of the estimators. In this study, the root-$n$ consistency of $\hat\beta_n$ is proved through a local quadratic approximation of the objective function, similarly to the one in Pollard (1985). As a device to provide the quadratic approximation, we derive Lemma A.1 in the Appendix, which is an extension of Lemma 3 of Huber (1967) and Lemma 4 of Pollard (1985) to stationary and ergodic processes.
In what follows, we list some additional assumptions to ensure the asymptotic normality of $\hat\beta_n$:
 (N1)
$\eta_t$ has a bounded continuous density $f_\eta$ with $f_\eta(q_0) > 0$.

 (N2)
$\beta_0$ is an interior point of $B$.

 (N3)
(i) There exists a neighborhood $V$ of $\beta_0$ such that for all $t$, $q_t(\beta)$ is differentiable in $\beta \in V$ and its derivative $\dot q_t(\beta)$ is Lipschitz continuous a.s.;
(ii) $E \sup_{\beta \in V} \|\dot q_t(\beta)\|^2 < \infty$;
(iii) the a.s. Lipschitz coefficient of $\dot q_t(\cdot)$ on $V$ has a finite expectation.

 (N4)
(i) For all $t$, $\tilde q_t(\beta)$ is differentiable in $\beta \in V$ and its derivative $\dot{\tilde q}_t(\beta)$ is Lipschitz continuous a.s.;
(ii) $\sup_{\beta \in V} \|\dot{\tilde q}_t(\beta) - \dot q_t(\beta)\| \to 0$ a.s.;
(iii) the analogous a.s. convergence holds for the Lipschitz coefficients of $\dot{\tilde q}_t(\cdot)$ and $\dot q_t(\cdot)$.

 (N5)
The matrix $A_1$ is positive definite, where
$$A_1 = E\Big[\frac{f_\eta(q_0)}{h_t(\theta_0)}\,\dot q_t(\beta_0)\,\dot q_t(\beta_0)'\Big]. \tag{7}$$
Remark 1.
In the case of the ARMA-GARCH model, the conditional quantile function defined in (5) is twice continuously differentiable, whereas this smoothness fails in the case of the ARMA-AGARCH model: see Remark 4 in Section 3. The Lipschitz continuity in (N3) and (N4) is intended to cover such models. Recall that Lipschitz continuous functions have derivatives almost everywhere.
Theorem 2.
Suppose that the assumptions of Theorem 1 and (N1)–(N5) hold for model (4). Then, $\sqrt{n}\,(\hat\beta_n - \beta_0) \stackrel{d}{\to} N\big(0,\ \tau(1-\tau)\,A_1^{-1} A_0 A_1^{-1}\big)$ as $n \to \infty$, where $A_0 = E[\dot q_t(\beta_0)\,\dot q_t(\beta_0)']$ and $A_1 = E[f_\eta(q_0)\, h_t(\theta_0)^{-1}\,\dot q_t(\beta_0)\,\dot q_t(\beta_0)']$.
The obtained asymptotic covariance matrix coincides with those for the models with location/scale components in Jurečková &amp; Procházka (1994), Davis &amp; Dunsmuir (1997), Koenker &amp; Zhao (1996), and Lee &amp; Noh (2013). The models considered in Weiss (1991) and Engle &amp; Manganelli (2004) allow a time-varying conditional distribution of the innovations, unlike in our study. The asymptotic covariance matrices in their results involve the conditional density of $y_t$ at its $\tau$-th conditional quantile, which becomes $f_\eta(q_0)/h_t(\theta_0)$ in our setup. Thus, the covariance matrix in Theorem 2 can also be shown to coincide with that of Engle &amp; Manganelli (2004) under the stationarity assumption. For the estimation of the asymptotic covariance matrix, we can employ the following estimator as given in Powell (1991) and Engle &amp; Manganelli (2004):
$$\hat\Sigma_n = \tau(1-\tau)\,\hat A_{1n}^{-1}\,\hat A_{0n}\,\hat A_{1n}^{-1}, \tag{8}$$
where
$$\hat A_{0n} = \frac{1}{n}\sum_{t=1}^{n} \dot{\tilde q}_t(\hat\beta_n)\,\dot{\tilde q}_t(\hat\beta_n)', \qquad \hat A_{1n} = \frac{1}{2 n \hat c_n}\sum_{t=1}^{n} I\big(|y_t - \tilde q_t(\hat\beta_n)| < \hat c_n\big)\,\dot{\tilde q}_t(\hat\beta_n)\,\dot{\tilde q}_t(\hat\beta_n)',$$
and $\hat c_n$ is a bandwidth satisfying $\hat c_n \to 0$ and $\sqrt{n}\,\hat c_n \to \infty$ in probability. Theorem 3 of Engle &amp; Manganelli (2004) shows that the asymptotic covariance estimator in (8) is consistent under certain regularity conditions including a more stringent moment condition than those of Theorem 2.
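The kernel term of the Powell (1991)-type estimator above can be sketched as follows; the residuals and quantile-gradient matrix passed in are hypothetical placeholders for the fitted quantities $y_t - \tilde q_t(\hat\beta_n)$ and $\dot{\tilde q}_t(\hat\beta_n)$.

```python
import numpy as np

# Sketch of the kernel matrix in the covariance estimator (8):
# (2 n c_n)^{-1} * sum_t I(|residual_t| < c_n) * grad_t grad_t'.
# `grads` rows stand in for derivatives of the fitted conditional quantiles.
def powell_kernel_matrix(residuals, grads, c_n):
    n = len(residuals)
    w = (np.abs(residuals) < c_n).astype(float)   # indicator weights
    return (grads * w[:, None]).T @ grads / (2.0 * n * c_n)

# toy illustration with a one-dimensional "parameter"
res = np.array([-0.5, 0.1, -0.2, 0.9])
G = np.ones((4, 1))
D = powell_kernel_matrix(res, G, 0.3)
```

Only the two residuals inside the bandwidth window contribute, so `D[0, 0]` equals $2 / (2 \cdot 4 \cdot 0.3)$; shrinking `c_n` with the sample size makes this a density-weighted moment estimate.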
3 Quantile regression in ARMA-asymmetric GARCH models
In this section, we consider an application of the results in Section 2 to the ARMA-AGARCH model, taking into account its broad usage in practice. We verify that the assumptions in Section 2 hold in this model and deduce some more primitive conditions to ensure the asymptotic properties of the quantile regression estimator. The AGARCH model is well known to capture asymmetric properties of conditional volatilities (see Glosten et al. 1993 and Ding et al. 1993) and to reflect the phenomenon that past positive and negative returns impose a different impact on current volatilities.
Let $\{y_t\}$ be the observations from the ARMA($p$,$q$)-AGARCH($P$,$Q$) model defined by
$$y_t = \sum_{i=1}^{p}\phi_i y_{t-i} + \sum_{j=1}^{q}\theta_j \varepsilon_{t-j} + \varepsilon_t, \qquad \varepsilon_t = h_t\,\eta_t, \tag{9}$$
$$h_t^2 = \omega + \sum_{i=1}^{P}\big\{\alpha_i^{+}(\varepsilon_{t-i}^{+})^2 + \alpha_i^{-}(\varepsilon_{t-i}^{-})^2\big\} + \sum_{j=1}^{Q}\beta_j h_{t-j}^2, \tag{10}$$
where $x^{+} = \max(x, 0)$, $x^{-} = \min(x, 0)$, $\omega > 0$, $\alpha_i^{+}, \alpha_i^{-} \ge 0$ for $1 \le i \le P$, $\beta_j \ge 0$ for $1 \le j \le Q$, and $\{\eta_t\}$ are i.i.d. random variables with common distribution function $F_\eta$. Here, the AGARCH model in (10) is a reparameterized version as described in Section 2.1. We denote by $\varphi_0$ and $\lambda_0$ the true ARMA and AGARCH model parameters, respectively. Further, we denote characteristic polynomials by $\phi(z) = 1 - \sum_{i=1}^{p}\phi_i z^i$, $\theta(z) = 1 + \sum_{j=1}^{q}\theta_j z^j$, $\alpha^{\pm}(z) = \sum_{i=1}^{P}\alpha_i^{\pm} z^i$, and $\beta(z) = 1 - \sum_{j=1}^{Q}\beta_j z^j$.
The ARMA-AGARCH model (9)–(10) admits the autoregressive representation in (4) and satisfies (M1) and (M2) in Section 2.2 under some standard model assumptions: see (A1) and (A2) below. Pan et al. (2008) considered the QML and least absolute deviation estimation for the power-transformed and threshold GARCH models that include AGARCH models as a special case when the power equals 2. Theorem 5 of Pan et al. (2008) shows that equation (10) defines a unique strictly stationary and ergodic solution if and only if the Lyapunov exponent is negative: the specific formula of the exponent is given in Pan et al. (2008, p. 373). It can be seen that the Lyapunov exponent remains the same after the reparameterization. It also follows from the theorem that $h_t^2$ is a function of the past innovations and has the following ARCH($\infty$) representation
$$h_t^2 = c_0 + \sum_{i=1}^{\infty}\big\{c_i^{+}(\varepsilon_{t-i}^{+})^2 + c_i^{-}(\varepsilon_{t-i}^{-})^2\big\}, \tag{11}$$
where $c_i^{+}, c_i^{-} \ge 0$ for $i \ge 1$ and $c_0 = \omega / \beta(1)$. Given the stationary AGARCH process, assumption (A2) below implies that $\{y_t\}$ is stationary and ergodic, and has the AR($\infty$) representation:
$$y_t = \sum_{i=1}^{\infty}\pi_i y_{t-i} + \varepsilon_t, \tag{12}$$
where $\{\pi_i\}$ are the coefficients in the power series expansion of $1 - \phi(z)/\theta(z)$: see Brockwell &amp; Davis (1991). Combining (11) and (12), model (9)–(10) is shown to admit the autoregressive representation in (4). In addition, it follows from (A2) that $\varepsilon_t$ is a function of $\{y_s : s \le t\}$, so it is measurable with respect to the $\sigma$-field generated by the past observations. Therefore, (M2) is satisfied, and then, the $\tau$-th quantile of $y_t$ conditional on $\mathcal{F}_{t-1}$ is given by $\sum_{i=1}^{\infty}\pi_i y_{t-i} + q_0\, h_t$, where $q_0$ is the $\tau$-th quantile of $\eta_t$ and $h_t$ is given in (11).
To estimate the conditional quantiles of $y_t$, we now construct the $\tau$-th quantile regression estimator. Denote by $\beta = (\lambda', q)'$ a parameter vector which belongs to the parameter space $B$, where $\lambda$ collects the ARMA and AGARCH parameters. If the parameter space satisfies assumption (A4) below, given the stationary solution and $\beta$, we can define the stationary processes $\varepsilon_t(\lambda)$, $h_t(\lambda)$, and $q_t(\beta)$ consecutively as follows:
$$\varepsilon_t(\lambda) = y_t - \sum_{i=1}^{p}\phi_i y_{t-i} - \sum_{j=1}^{q}\theta_j \varepsilon_{t-j}(\lambda), \tag{13}$$
$$h_t^2(\lambda) = \omega + \sum_{i=1}^{P}\big\{\alpha_i^{+}(\varepsilon_{t-i}^{+}(\lambda))^2 + \alpha_i^{-}(\varepsilon_{t-i}^{-}(\lambda))^2\big\} + \sum_{j=1}^{Q}\beta_j h_{t-j}^2(\lambda), \tag{14}$$
$$q_t(\beta) = y_t - \varepsilon_t(\lambda) + q\, h_t(\lambda), \tag{15}$$
for $t \in \mathbb{Z}$. Then, it can be seen that $q_t(\beta_0)$ coincides with the $\tau$-th conditional quantile of $y_t$. In practice, $q_t(\beta)$ cannot be computed except in the AR($p$)-asymmetric ARCH($Q$) model case, as mentioned in Section 2.2. To compute an approximated conditional quantile function, we define $\tilde\varepsilon_t(\lambda)$, $\tilde h_t(\lambda)$, and $\tilde q_t(\beta)$ by using the same equations (13)–(15) for $t \ge 1$ and by setting suitable initial values for $t \le 0$. Then, the $\tau$-th quantile regression estimator of $\beta_0$ for the ARMA-AGARCH model (9)–(10) is defined by (6).
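The approximated recursions can be sketched as below for the simplest ARMA(1,1)-AGARCH(1,1) case with zero initial values; the parameter names are illustrative and the GJR-style asymmetric terms stand in for the AGARCH component.

```python
import numpy as np

# Sketch of the recursions (13)-(15) with zero initial values, under an
# assumed ARMA(1,1)-AGARCH(1,1)-type parameterization; names illustrative.
def approx_cond_quantiles(y, phi, theta, w, a_pos, a_neg, beta, q):
    n = len(y)
    eps = np.zeros(n)        # residual recursion, initialized at 0
    h = np.full(n, w)        # squared-scale recursion, initialized at w
    quant = np.zeros(n)      # approximated conditional quantiles
    for t in range(1, n):
        eps[t] = y[t] - phi * y[t-1] - theta * eps[t-1]
        h[t] = (w + a_pos * max(eps[t-1], 0.0)**2
                  + a_neg * min(eps[t-1], 0.0)**2 + beta * h[t-1])
        quant[t] = phi * y[t-1] + theta * eps[t-1] + q * np.sqrt(h[t])
    return quant
```

Setting the scale and MA parameters to zero reduces the recursion to the pure AR(1) conditional location, which gives a quick sanity check on the implementation.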
To show the identifiability of the conditional quantile functions, we introduce the following assumptions. Assumptions (A3)(i) and (ii) are the standard identifiability conditions for AGARCH and ARMA models, respectively. (A5) assumes that $\eta_t$ is a continuous random variable, which is common in real applications.
 (A1)
$E|\eta_t|^{s_0} < \infty$ for some $s_0 > 0$ and the Lyapunov exponent associated with model (10) is strictly negative.

 (A2)
All zeros of $\phi(z)$ and $\theta(z)$ lie outside the unit disc.

 (A3)
(i) $\beta(z)$, $\alpha^{+}(z)$, and $\alpha^{-}(z)$ have no common zeros and $\alpha_P^{+} + \alpha_P^{-} + \beta_Q \neq 0$;
(ii) $\phi(z)$ and $\theta(z)$ have no common zeros and $\phi_p \neq 0$ or $\theta_q \neq 0$.

 (A4)
The parameter space of $\lambda$ is compact, and each $\lambda$ therein satisfies $\omega > 0$, $\alpha_i^{+}, \alpha_i^{-} \ge 0$ for $1 \le i \le P$, $\beta_j \ge 0$ for $1 \le j \le Q$, and all zeros of the corresponding $\theta(z)$ and $\beta(z)$ lie outside the unit disc.

 (A5)
The support of the distribution of $\eta_t$ is $\mathbb{R}$.

 (A6)
$E|\varepsilon_t| < \infty$.
Lemma 1.
Lemma 1 ensures that the identifiability assumption (C4) for the ARMA-AGARCH model holds if $q_0 \neq 0$, and it shows that only the AR and MA coefficients are identifiable in the case of $q_0 = 0$. For the consistency of the quantile regression estimator, we added the finite first moment condition of the AGARCH process, which is equivalent to $E|y_t| < \infty$ under (A2). An application of Theorem 1 and Lemma 1 yields the strong consistency addressed below.
Remark 2.
In the GARCH case, Ling (2007) presented a necessary and sufficient condition for the stationarity and fractional moments including (A6). For the AGARCH(1,1) case, such a condition can be obtained by using Theorem 2.1 of Ling (2007) and Theorem 6 of Pan et al. (2008): for $s \in (0, 1]$, the AGARCH(1,1) process is strictly stationary with $E|\varepsilon_t|^{2s} < \infty$ if and only if $E\big\{\beta_1 + \alpha_1^{+}(\eta_t^{+})^2 + \alpha_1^{-}(\eta_t^{-})^2\big\}^{s} < 1$. As therein, one can use Minkowski's inequality for $s \ge 1$ and the elementary inequality $(a + b)^s \le a^s + b^s$ for $s \in (0, 1]$ and $a, b \ge 0$.
Theorem 3.
To ensure the asymptotic normality of the estimator, moment conditions (N3)(ii) and (iii) must be verified. It turns out that these conditions are implied by a finite second moment of the AGARCH process, or equivalently, $E y_t^2 < \infty$. For the asymptotic normality, we assume the following moment condition:
 (A1')
$E\eta_t^2 < \infty$ and $E\varepsilon_t^2 < \infty$.
By Theorem 6.(ii) of Pan et al. (2008), (A1') implies that the model (10) has a strictly stationary solution. Thus, (A1) becomes redundant. Lemma 2 below ensures assumption (N5), which is related to the nonsingularity of the asymptotic covariance matrix. The proof of Lemma 2 is deferred to the supplementary material.
Lemma 2.
Remark 3.
Theorem 4.
Remark 4.
It is notable that the quantile regression yields a consistent estimation of ARMA-AGARCH parameters under the mild moment condition of (A1'), which is a finite second moment condition on both the innovations and observations. It is well known in the GARCH model that the popular Gaussian QMLE is consistent under $E\eta_t^2 < \infty$ but converges at a slower rate if the innovation is heavy-tailed, that is, $E\eta_t^4 = \infty$: see Hall &amp; Yao (2003). This fact also holds in the reparameterized GARCH model as in Section 2.1: see Section 5 of Fan et al. (2014). In fact, the fourth moment condition on innovations is indispensable for obtaining the usual $\sqrt{n}$ rate in various GARCH-type models: see Straumann &amp; Mikosch (2006) and Pan et al. (2008). Further, for mean-variance models such as the ARMA-GARCH model, the Gaussian QML estimation additionally requires a finite fourth moment of observations, that is, $E y_t^4 < \infty$: see Francq &amp; Zakoïan (2004) and Bardet &amp; Wintenberger (2009).
In the estimation of GARCH-type models, researchers have paid considerable attention to relaxing moment conditions and seeking robust methods against heavy-tailed distributions of innovations or observations. For example, Berkes &amp; Horváth (2004) showed that the consistency of the two-sided exponential QMLE requires a much weaker moment condition in the GARCH model, and Zhu &amp; Ling (2011) verified an analogous result in the ARMA-GARCH model. These moment conditions can be additionally relaxed by using weighted likelihoods (Zhu &amp; Ling 2011) or other non-Gaussian likelihoods (Berkes &amp; Horváth 2004; Fan et al. 2014). In view of these results, it can be reasoned that the quantile regression approach in this study also makes a reasonably robust method in a broad class of time series models.
Remark 5.
As mentioned in Section 2.3, the quantile regression for the location-scale models in (4) requires a different conditional quantile specification when $q_0 = 0$. Thus, it is necessary to test whether $q_0$ is 0 or not, especially for the values of $\tau$ around 0.5: if $q_0 = 0$, the conditional quantile of $y_t$ is just the conditional location and the results of Weiss (1991) can be applied. Under the null hypothesis of this testing problem, we can see by Lemma 1 that the other parameters are not identified. Inference in a similar situation can be found in Francq et al. (2010) and references therein. We leave the development of such a test as a task of our future study.
4 Simulation results
In this simulation study, we examine the finite sample performance of the quantile regression estimation and illustrate its robustness against heavy-tailed innovation distributions. The samples are generated from an ARMA-AGARCH model of the form (9)–(10). As for the distribution of the innovation $\eta_t$, we consider the two cases:

(a) the standard normal distribution;

(b) a standardized skewed $t$ distribution (Fernández &amp; Steel 1998).
Distribution (b) is negatively skewed: see Fernández &amp; Steel (1998). By using Remark 2, we can check the stationarity and moment conditions of the AGARCH process for the two distributions. For case (a), the AGARCH process has a finite fourth moment. For case (b), only lower-order moments are finite, according to a Monte Carlo computation.
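The Monte Carlo check mentioned above can be sketched as follows for the Remark-2-type moment condition $E\{\beta_1 + \alpha_1^{+}(\eta^{+})^2 + \alpha_1^{-}(\eta^{-})^2\}^{s} < 1$ with standard normal innovations; the parameter values and the choice $s = 1$ are hypothetical, not the paper's simulation design.

```python
import numpy as np

# Monte Carlo check of a Remark-2-type moment condition for AGARCH(1,1)
# with standard normal eta; parameter values and s are illustrative.
rng = np.random.default_rng(1)
eta = rng.standard_normal(200_000)
beta1, a_pos, a_neg, s = 0.7, 0.05, 0.15, 1.0
term = beta1 + a_pos * np.maximum(eta, 0.0)**2 + a_neg * np.minimum(eta, 0.0)**2
m = np.mean(term**s)
# for s = 1 the exact value is beta1 + (a_pos + a_neg)/2 = 0.8 < 1,
# so this parameter set satisfies the stationarity/moment condition
```

For non-integer $s$ or skewed innovations no closed form is available, which is exactly when such a Monte Carlo computation is useful.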
The sample size is 2,000. In computing quantile regression estimates, the Nelder-Mead method in R is employed and the Gaussian-QML estimates are used as initial values for the optimization process.
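A toy version of this estimation step is sketched below using scipy's Nelder-Mead in place of the R optimizer: a pure-scale AGARCH(1,1)-type series is simulated with hypothetical parameter values, and only the innovation quantile is estimated by minimizing the check loss, keeping the scale parameters fixed at their true values for brevity.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate an AGARCH(1,1)-type pure-scale series with standard normal
# innovations; all parameter values are illustrative.
rng = np.random.default_rng(42)
n, w, a_pos, a_neg, beta = 2000, 0.1, 0.05, 0.15, 0.7
y, h = np.zeros(n), np.zeros(n)
h[0] = w / (1.0 - beta)
eta = rng.standard_normal(n)
for t in range(1, n):
    h[t] = (w + a_pos * max(y[t-1], 0.0)**2
              + a_neg * min(y[t-1], 0.0)**2 + beta * h[t-1])
    y[t] = np.sqrt(h[t]) * eta[t]

tau = 0.05
def objective(par):
    u = y - par[0] * np.sqrt(h)           # residuals at candidate quantile q
    return np.sum(u * (tau - (u < 0.0)))  # empirical check loss

res = minimize(objective, x0=[0.0], method="Nelder-Mead")
q_hat = res.x[0]   # should be near the 5% standard normal quantile
```

In the paper's actual procedure all model parameters are optimized jointly, with the Gaussian-QML fit supplying the starting point; this sketch isolates the one-dimensional innovation-quantile step.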
Table 1. Bias, SD, and ASD of the quantile regression estimates for case (a), with each three-row block corresponding to one quantile level and columns to the model parameters.

Bias  0.008  0.035  0.002  0.010  0.193  0.366  0.017
SD    0.230  0.457  0.238  0.199  0.583  1.543  0.097
ASD   0.262  0.494  0.227  0.197  0.325  1.095  0.078
Bias  0.037  0.086  0.000  0.005  0.402  0.398  0.042
SD    0.238  0.492  0.170  0.141  0.982  1.557  0.166
ASD   0.420  0.833  0.172  0.143  0.848  3.181  0.123
Bias  0.063  0.122  0.013  0.015  0.188  0.688  0.032
SD    0.275  0.478  0.160  0.134  0.899  1.438  0.146
ASD   0.350  0.609  0.175  0.145  1.098  1.702  0.123
Bias  0.003  0.008  0.002  0.012  0.172  0.407  0.015
SD    0.257  0.385  0.219  0.186  0.823  0.912  0.087
ASD   0.296  0.428  0.243  0.205  0.521  0.567  0.077
Tables 1 and 2 exhibit the empirical biases and standard deviations (SD) of the quantile regression estimates at the considered quantile levels for cases (a) and (b), respectively. We also report the asymptotic standard deviations (ASD) derived from Theorem 2 by using the true parameter values. It is remarkable that the AGARCH parameters are estimated more accurately at the tail parts than at the middle parts. Tables 1 and 2 suggest that the quantile regression method is robust against the heavy-tailed distribution.
Table 2. Bias, SD, and ASD of the quantile regression estimates for case (b), with each three-row block corresponding to one quantile level and columns to the model parameters.

Bias  0.024  0.075  0.010  0.023  0.572  0.514  0.047
SD    0.349  0.755  0.393  0.338  1.120  2.001  0.148
ASD   0.432  0.862  0.489  0.421  0.686  1.679  0.111
Bias  0.064  0.122  0.008  0.009  0.508  0.307  0.065
SD    0.244  0.494  0.195  0.174  1.159  1.529  0.201
ASD   0.701  1.381  0.213  0.176  2.211  7.147  0.159
Bias  0.031  0.073  0.004  0.006  0.110  0.546  0.008
SD    0.181  0.359  0.125  0.108  0.823  1.357  0.080
ASD   0.205  0.370  0.128  0.107  0.772  1.221  0.073
Bias  0.012  0.007  0.002  0.007  0.122  0.418  0.008
SD    0.210  0.358  0.212  0.188  0.838  0.976  0.073
ASD   0.255  0.425  0.249  0.209  0.632  0.664  0.070
Table 3. Relative efficiencies of the Gaussian-QMLE to the quantile regression estimates, by innovation distribution and quantile level (rows) and parameter (columns).

Normal  0.062  0.324  0.398  0.235  0.158  0.437
0.25    0.057  0.452  0.561  0.136  0.156  0.252
0.75    0.058  0.480  0.588  0.157  0.157  0.289
0.95    0.074  0.353  0.425  0.172  0.251  0.488
Skewed  0.040  0.297  0.347  0.370  0.506  0.556
0.25    0.060  0.597  0.677  0.367  0.671  0.409
0.75    0.083  0.935  1.091  0.560  0.715  1.078
0.95    0.085  0.550  0.628  0.549  0.986  1.182
We demonstrate this robust feature in comparison with the Gaussian-QMLE. To do so, we calculate the relative efficiency defined as the ratio of the root mean squared error (RMSE) of the Gaussian-QMLE to that of the quantile regression estimates. Table 3 shows that the relative efficiency increases in the skewed distribution case.
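The relative efficiency just defined can be sketched as a one-line computation per parameter across Monte Carlo replications; the input vectors here are toy numbers, not the study's estimates.

```python
import numpy as np

# Relative efficiency as defined in the text: RMSE of the Gaussian-QML
# estimates divided by the RMSE of the quantile regression estimates,
# for one parameter over replications (toy inputs).
def relative_efficiency(qml_estimates, qr_estimates, true_value):
    rmse = lambda e: np.sqrt(np.mean((np.asarray(e) - true_value) ** 2))
    return rmse(qml_estimates) / rmse(qr_estimates)
```

Values above 1 indicate that quantile regression is the more accurate estimator for that parameter and setting.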
It is noteworthy that the quantile regression method for pure volatility models is identical to the CAViaR method of Engle &amp; Manganelli (2004): see Remark 9 of Lee &amp; Noh (2013). The performance of the CAViaR method has been reported in many empirical studies. It would be interesting to examine the performance of our method for various location-scale models in VaR forecasting as well. We leave this as a task of our future study.
5 A real data analysis
In this section, we showcase a real example of the quantile regression for the AR-AGARCH model by using the daily log returns (computed as 100 times the difference of the log prices) of the Hong Kong Hang Seng Index series taken from Datastream from January 4, 1993 to December 31, 2012, consisting of 5216 observations.
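The return transformation described above is a one-liner; the price series passed in is a toy placeholder for the Datastream data.

```python
import numpy as np

# Daily log returns in percent: 100 times the difference of the log prices,
# as described for the Hang Seng Index series (toy input below).
def log_returns(prices):
    p = np.asarray(prices, dtype=float)
    return 100.0 * np.diff(np.log(p))
```

A series of 5217 prices would thus yield the 5216 return observations analyzed in this section.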
Table 4. Gaussian-QML estimates, standard errors, and p-values for the fitted model.

Estimates  0.0360  0.0476  1.2979  4.4150  0.9214  0.0242
S.E.       0.0180  0.0124  0.4659  0.8776  0.0102  0.0065
p-values   0.0454  0.0001  0.0053  0.0000  0.0000  0.0002
Table 4 reports the Gaussian-QML estimates of the parameters in model (9)–(10). The large value of the asymmetry coefficient indicates the asymmetry in volatility, that is, negative values of returns result in a bigger increase in future volatility than positive values. The significance of the AR coefficient indicates that the conditional location-scale model is better fitted to the data than pure volatility models. Meanwhile, using the parameter estimates and residuals, the relevant moment estimate seemingly indicates the validity of (A1').
Figure 1 illustrates the results of the quantile regression estimation at every considered probability level. The confidence intervals are obtained based on the asymptotic covariance estimator in (8). The test for $q_0 = 0$ is not available at present, but one can guess by a rule of thumb that $q_0$ would be 0 at some $\tau$ near the center. Then, owing to Theorem 3 and Lemma 1, it can be determined that the estimates at that level, except for the AR coefficients, are inconsistent. Overall, our findings show that the quantile regression estimates have values similar to the QMLEs, but some remarkable differences between the two sets of estimates exist for the lower values of $\tau$. For instance, it can be seen from panel (c) of Figure 1 that the estimates deviate more from the QML-based values in the lower conditional quantiles. Further, it can be reasoned from panel (f) of Figure 1 that the asymmetry of volatility still remains even after fitting the AGARCH model.