
# Portmanteau Tests for ARMA Models with Infinite Variance

By J.-W. Lin and A.I. McLeod
The University of Western Ontario

Jen-Wen Lin and A. Ian McLeod (2008). Portmanteau Tests for ARMA Models with Infinite Variance. Journal of Time Series Analysis, 29, 600-617

Abstract.

Autoregressive-moving average (ARMA) models with stable Paretian errors are among the most studied models for time series with infinite variance. Estimation methods for these models have been studied by many researchers, but the problem of diagnostic checking of fitted models has not been addressed. In this paper, we develop portmanteau tests for checking the randomness of a time series with infinite variance and as a diagnostic tool for checking the model adequacy of fitted ARMA models. It is assumed that least squares or an asymptotically equivalent estimation method, such as Gaussian maximum likelihood in the case of AR models, is used, and that the distribution of the innovations is IID stable Paretian. It is seen via simulation that the proposed portmanteau tests do not converge well to the corresponding limiting distributions for practical series lengths, so a Monte-Carlo test is suggested. Simulation experiments show that the proposed test procedure works effectively. Two illustrative applications to actual data are provided to demonstrate that an incorrect conclusion may result if the usual portmanteau test based on the finite-variance assumption is used.

Keywords. ARMA models, Infinite variance, Least squares method, Portmanteau test, Residual autocorrelation function, Stable Paretian distribution

1. INTRODUCTION

Time series models with stable Paretian errors have been studied by many researchers. Adler et al. (1998) discussed many aspects of how to apply standard Box-Jenkins techniques to stable ARMA processes, concluding that, in principle, the standard Box-Jenkins techniques do carry over to the stable setting but that a great deal of care needs to be exercised. In §2 we briefly review the stable Paretian distribution and in §3 we develop portmanteau tests for whiteness or randomness of an IID series. The whiteness test is illustrated with a brief application to exchange rate data. In §4 we develop portmanteau diagnostic checks for the residuals of an AR model fitted by least squares assuming the true innovations are IID stable Paretian distributed. This is extended to the ARMA model in Appendix C. An illustrative example shows the differences in inference that may result between the finite-variance and infinite-variance portmanteau tests.

2. THE STABLE PARETIAN DISTRIBUTION

A stable distribution is usually defined through its characteristic function. A random variable $X$, written $X\sim S_\alpha(\sigma,\beta,\mu)$, is said to have a stable distribution if its characteristic function has the following form:

$$E\exp(itX)=\exp\big\{i\mu t-\sigma^\alpha|t|^\alpha\big[1-i\beta\,\mathrm{sgn}(t)\,\omega(t,\alpha)\big]\big\},$$

where $t$ is the argument of the characteristic function, $\alpha$ is the index of stability, or the characteristic exponent, satisfying $0<\alpha\le 2$, $\sigma>0$ is the scale parameter, $\beta$ is the skewness satisfying $-1\le\beta\le 1$, $\mu$ is the location parameter,

$$\omega(t,\alpha)=\begin{cases}\tan(\pi\alpha/2)&\text{if }\alpha\ne 1,\\ -(2/\pi)\log|t|&\text{if }\alpha=1,\end{cases}$$

and

$$\mathrm{sgn}(t)=\begin{cases}1&\text{if }t>0,\\ 0&\text{if }t=0,\\ -1&\text{if }t<0.\end{cases}$$

In this paper, we restrict our attention to processes generated by application of a linear filter to an independently and identically distributed (IID) sequence, $\{Z_t\}$, of random variables whose distribution has Pareto-like tails, i.e.,

$$x^\alpha\big(1-F(x)\big)=x^\alpha P(Z_t>x)\to pC,\qquad x^\alpha F(-x)=x^\alpha P(Z_t<-x)\to qC, \tag{1}$$

as $x\to\infty$, where $p,q\ge 0$, $p+q=1$, and $C$ is a finite positive constant, or the dispersion of the random variable $Z_t$.
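For readers who wish to experiment numerically with such sequences, IID stable variates can be generated by the Chambers-Mallows-Stuck method. The sketch below is not part of the original paper; it is a minimal Python illustration restricted to the symmetric case ($\beta=0$, unit scale), and the function name `rstable` is ours:

```python
import math
import random

def rstable(alpha, n, seed=None):
    """Simulate n symmetric alpha-stable variates (beta = 0, unit scale)
    by the Chambers-Mallows-Stuck method."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.uniform(-math.pi / 2, math.pi / 2)   # uniform angle
        w = rng.expovariate(1.0)                     # unit exponential
        if alpha == 1.0:
            out.append(math.tan(u))                  # Cauchy special case
        else:
            x = (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
                 * (math.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))
            out.append(x)
    return out
```

For $\alpha=2$ this reduces (up to scale) to the Gaussian case; for $\alpha<2$ the simulated series exhibits the occasional very large observations characteristic of infinite-variance data.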

3. PORTMANTEAU TESTS FOR RANDOMNESS OF STABLE PARETIAN TIME SERIES

In this section, we shall derive the asymptotic distributions of portmanteau tests for checking randomness of a sequence of stable Paretian random variables. We consider the stable analogues of the portmanteau tests of Box and Pierce (1970) and of Peña and Rodriguez (2002), denoted by $\hat Q$ and $\hat D$, respectively. To do so, we require some important properties of the sample autocorrelation functions (ACF) and sample partial autocorrelation functions (PACF) of stable Paretian ARMA processes (Brockwell and Davis, 1991, Ch. 13; Samorodnitsky and Taqqu, 1994; Adler et al., 1998).

3.1 Asymptotic Distribution of Autocorrelation Function

Let $\{Z_t\}$ be an IID sequence of stable Paretian random variables and $\{X_t\}$ be the strictly stationary process defined by

$$X_t=\sum_{j=-\infty}^{\infty}\psi_jZ_{t-j},\qquad t=1,\dots,n, \tag{2}$$

where

$$\sum_{j=-\infty}^{\infty}|j|\,|\psi_j|^{\delta}<\infty,\qquad\text{for some }\delta\in(0,\alpha)\cap[0,1]. \tag{3}$$

The stable analogue of the autocorrelation function at lag $k$ is defined as

$$\rho_k=\sum_j\psi_j\psi_{j+k}\Big/\sum_j\psi_j^2,\qquad k=1,2,\dots. \tag{4}$$

Eqn (4) can be estimated by the sample autocorrelation function as follows:

$$r_k=\Big\{\sum_{t=1}^{n-k}X_tX_{t+k}\Big\}\Big/\sum_{t=1}^{n}X_t^2,\qquad k=1,2,\dots, \tag{5}$$

for $k<n$. According to Davis and Resnick (1986), for any positive integer $k$, the limiting distribution of the sample autocorrelation functions is given by

$$[n/\log(n)]^{1/\alpha}\big(r_1-\rho_1,\dots,r_k-\rho_k\big)^T\to(Y_1,\dots,Y_k)^T, \tag{6}$$

where $\to$ denotes convergence in distribution and

$$Y_h=\sum_{j=1}^{\infty}\big(\rho_{h+j}+\rho_{h-j}-2\rho_j\rho_h\big)\frac{S_j}{S_0},\qquad h=1,\dots,k, \tag{7}$$

where $S_0,S_1,S_2,\dots$ are independent stable variables; $S_0$ is positive with index $\alpha/2$ and the $S_j$, $j\ge 1$, are $\alpha$-stable with scale determined by the constant $C_\alpha$, where

$$C_\alpha=\frac{1-\alpha}{\Gamma(2-\alpha)\cos(\pi\alpha/2)}\qquad\text{if }\alpha\ne 1,$$

and

$$C_\alpha=\frac{2}{\pi}\qquad\text{if }\alpha=1.$$

Under the null hypothesis that $\{X_t\}$ is a sequence of IID stable Paretian random variables, we have $\rho_j=0$ for $j\ge 1$, so the limiting distribution of the sample ACFs can be further simplified as follows:

$$[n/\log(n)]^{1/\alpha}(r_1,\dots,r_k)^T\to(W_1,\dots,W_k)^T, \tag{8}$$

where the $W_h$ are given by

$$W_h=\frac{S_h}{S_0},\qquad h=1,\dots,k. \tag{9}$$

Note that we may also use the mean-corrected sample autocorrelation function at lag $k$, denoted $\tilde r_k$, which is given by

$$\tilde r_k=\sum_{t=1}^{n-k}(X_t-\bar X)(X_{t+k}-\bar X)\Big/\sum_{t=1}^{n}(X_t-\bar X)^2. \tag{10}$$

Davis and Resnick (1986) indicated that the limiting distribution of $\tilde r_k$ is the same as that of $r_k$.
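Eqns (5) and (10) are straightforward to compute. A minimal Python helper (our own naming, not from the paper) covering both the raw and mean-corrected versions:

```python
def sample_acf(x, max_lag, mean_correct=True):
    """Sample autocorrelations r_1..r_max_lag, as in eqn (5);
    with mean_correct=True this is the mean-corrected version of eqn (10)."""
    n = len(x)
    m = sum(x) / n if mean_correct else 0.0
    d = [v - m for v in x]                 # (possibly) centred series
    denom = sum(v * v for v in d)          # lag-0 sum of squares
    return [sum(d[t] * d[t + k] for t in range(n - k)) / denom
            for k in range(1, max_lag + 1)]
```

Note that, in keeping with eqns (5) and (10), no finite variance is assumed: the statistic is a ratio, which is why it remains well behaved under stable Paretian errors.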

3.2 Asymptotic Distribution of Partial Autocorrelation Function

Consider an AR($p$) process,

$$X_t-\phi_1X_{t-1}-\cdots-\phi_pX_{t-p}=Z_t,$$

where $\{Z_t\}$ is a sequence of IID stable Paretian errors. Let $\rho(p)=(\rho_1,\dots,\rho_p)^T$ be the vector of autocorrelation functions, $R(p)$ be the $p\times p$ autocorrelation matrix with $(i,j)$ entry $\rho_{i-j}$, and $\phi(p)=(\phi_1,\dots,\phi_p)^T$. The Yule-Walker equations are defined as

$$R(p)\phi(p)=\rho(p). \tag{11}$$

The PACF at lag $p$ is simply the $p$-th element of the solution of the Yule-Walker equations,

$$\phi_{YW}(p)=\Psi(\rho(p))=R^{-1}(p)\rho(p).$$

Likewise, the sample partial autocorrelation function at lag $p$ is defined as the $p$-th element of the sample estimate of the Yule-Walker solution,

$$\hat\phi_{YW}(p)=\Psi(r(p))=\hat R^{-1}(p)r(p),$$

where $\hat R(p)$ and $r(p)$ are the sample autocorrelation matrix and the vector of sample autocorrelation functions, respectively. It is apparent that the sample partial autocorrelations are functions of the sample autocorrelations. Their relationship is clearly described by the Durbin-Levinson algorithm.

Let $\pi_k$ be the sample PACF at lag $k$, and $\pi(m)=(\pi_1,\dots,\pi_m)^T$. By the Durbin-Levinson algorithm, the vector $\pi(m)$ can be expressed as a function of $r(m)=(r_1,\dots,r_m)^T$, with the $k$-th element given by

$$\pi_k=\psi(r(k))=\frac{r_k-r^T(k-1)\hat R^{-1}(k-1)r^*(k-1)}{1-r^T(k-1)\hat R^{-1}(k-1)r(k-1)}, \tag{12}$$

where $r(k-1)$ and $\hat R(k-1)$ are as defined above and $r^*(k-1)=(r_{k-1},\dots,r_1)^T$.
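In practice, eqn. (12) is evaluated with the Durbin-Levinson recursion rather than by explicit matrix inversion. A small sketch (Python; the helper name and the intermediate Levinson update are standard, but the implementation is ours, not the paper's):

```python
def pacf_from_acf(r):
    """Partial autocorrelations pi_1..pi_m from sample ACFs r_1..r_m
    via the Durbin-Levinson recursion (cf. eqn (12))."""
    m = len(r)
    pacf = []
    phi = []                         # phi_{k-1,1..k-1} from previous order
    for k in range(1, m + 1):
        if k == 1:
            pik = r[0]
        else:
            num = r[k - 1] - sum(phi[j] * r[k - 2 - j] for j in range(k - 1))
            den = 1.0 - sum(phi[j] * r[j] for j in range(k - 1))
            pik = num / den
        # Levinson update: phi_{k,j} = phi_{k-1,j} - pik * phi_{k-1,k-j}
        phi = [phi[j] - pik * phi[k - 2 - j] for j in range(k - 1)] + [pik]
        pacf.append(pik)
    return pacf
```

For an exact AR(1) autocorrelation sequence $\rho_k=\phi^k$ the recursion returns $\pi_1=\phi$ and $\pi_k=0$ for $k\ge 2$, as expected.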

Following the proof in Monti (1994), we can derive the asymptotic distribution of the sample partial autocorrelation functions. Under the null hypothesis that the $\{X_t\}$ are independent, the autocorrelation functions are all zero and, according to Brockwell and Davis (1991, Ch. 13),

$$r_h=O_p\big([n/\log(n)]^{-1/\alpha}\big),\qquad h=1,2,\dots.$$

Therefore,

$$\hat R(k)=1_k+O_p\big([n/\log(n)]^{-1/\alpha}\big),$$

where $1_k$ is a $k\times k$ identity matrix. By eqn. (12),

$$\pi(m)=r(m)+O_p\big([n/\log(n)]^{-2/\alpha}\big). \tag{13}$$

Using eqn. (8), we have

$$[n/\log(n)]^{1/\alpha}(\pi_1,\dots,\pi_m)^T\to(W_1,\dots,W_m)^T. \tag{14}$$

3.3 Asymptotic Distributions of the $\hat Q$ and $\hat D$ Tests

We can now derive the limiting distributions of the $\hat Q$ and $\hat D$ tests for checking randomness of a sequence of stable Paretian random variables. Under the assumption that $\alpha<2$, Runde (1997) derived the limiting distribution of the Box-Pierce-type statistic $\hat Q$ based on the mean-corrected sample autocorrelation functions. His result is given by

$$\hat Q=(n/\log(n))^{2/\alpha}\sum_{j=1}^{m}\tilde r_j^2\to W_1^2+\cdots+W_m^2, \tag{15}$$

where the $W_j$ are defined in eqn. (9). Note that the limiting distribution in eqn. (15) remains the same if the $\tilde r_j$ are replaced by the $r_j$.

Consider next the test of Peña and Rodriguez (2002). The test statistic may be given by

$$\hat D=(n/\log(n))^{2/\alpha}\big(1-|R(m)|^{1/m}\big). \tag{16}$$

Following the proof of Theorem 1 in Peña and Rodriguez (2002), we obtain the asymptotic distribution of eqn. (16) in the following theorem. The proof is given in Appendix A.

###### Theorem 1

$\hat D$ in eqn. (16) is asymptotically distributed as

$$\sum_{i=1}^{m}\frac{m+1-i}{m}W_i^2,$$

where the $W_i$ are as defined in eqn. (9).
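For concreteness, the statistic in eqn. (16) can be computed directly from the sample ACFs. The sketch below assumes, consistent with the determinant decomposition used in Appendix A, that $R(m)$ is the $(m+1)\times(m+1)$ Toeplitz sample autocorrelation matrix with $(i,j)$ entry $r_{|i-j|}$ and $r_0=1$; the helper names are ours:

```python
import math

def det(a):
    """Determinant by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in a]
    n = len(a)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(a[k][i]))
        if abs(a[p][i]) < 1e-15:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d                     # row swap flips the sign
        d *= a[i][i]
        for k in range(i + 1, n):
            f = a[k][i] / a[i][i]
            for j in range(i, n):
                a[k][j] -= f * a[i][j]
    return d

def d_stat(r, n, alpha):
    """D-hat of eqn (16) from sample ACFs r = [r_1, ..., r_m]."""
    m = len(r)
    rr = [1.0] + list(r)               # r_0 = 1
    R = [[rr[abs(i - j)] for j in range(m + 1)] for i in range(m + 1)]
    return (n / math.log(n)) ** (2 / alpha) * (1 - det(R) ** (1 / m))
```

For valid sample ACFs the matrix is positive semi-definite, so the determinant is non-negative and the $1/m$-th root is well defined.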

Remark 1: It is possible to compute the limiting distributions of the $\hat Q$ and $\hat D$ tests by making use of the change-of-variable technique and numerical algorithms for calculating the probability density function of stable random variables, such as Mittnik et al. (1999). This approach requires, however, intensive numerical computation.

Remark 2: Another approach to obtaining the asymptotic distributions of the $\hat Q$ and $\hat D$ tests is to simulate them from their asymptotic representations. For example, $\hat D$ may be simulated from the weighted sum defined in Theorem 1. This approach also requires large-scale computation but is much less intensive computationally than the approach mentioned in Remark 1, and it is the approach adopted in the subsequent simulation-based analysis.

3.4 Simulation Experiments

The finite-sample performance of the $\hat Q$ and $\hat D$ tests for randomness is investigated in this section. The 5%, 10%, 30%, 50%, 70%, 90%, 95%, 97.5% and 99% empirical quantiles of both tests were calculated by simulation and plotted against the corresponding asymptotic distributions. It is seen in Figures 1 and 2 that the empirical and asymptotic quantiles do not agree very well unless $n$ is very large; the speed of convergence of both tests to the corresponding asymptotic distributions is thus very slow. A solution to this problem is to use the Monte-Carlo test or parametric bootstrap (Appendix B).

[Figures 1 and 2 about here]

Consider the simulation experiments. IID sequences of stable Paretian random variables were simulated for several series lengths $n$ and values of $\alpha$. The empirical sizes of both tests were calculated over repeated simulations, with each Monte-Carlo test itself based on simulated replications. The results are tabulated in Table 1. It is seen that the empirical sizes of both tests are very close to the 5% nominal level even for short series.

3.5 Illustrative Example

Consider the daily Canada/U.S. exchange rates from September 6, 1996 to September 5, 2006. The data were retrieved from the website of the Federal Reserve Bank of St. Louis, and the returns were computed and tested for randomness. The consistent estimators of McCulloch (1986) were used to estimate $\alpha$ and $\beta$ for the returns. The estimate of $\beta$ is close to zero, so the series is not highly skewed. Since the estimate of $\alpha$ is much less than 2, the portmanteau tests of §3 are more appropriate for these data than the ordinary portmanteau tests. The P-values were determined using both the asymptotic distribution and the Monte-Carlo method in Appendix B. The results are compared in Table 2. Note that the finite-variance portmanteau test suggested possible evidence of non-randomness, but this is not the case when the infinite-variance Monte-Carlo test is used.

Remark 3: Portmanteau tests based on the nonparametric bootstrap procedure could also be used, but they would be expected to be less powerful since less information is used.

4. DIAGNOSTIC CHECK FOR MODEL ADEQUACY OF AR MODELS WITH STABLE PARETIAN ERRORS

4.1 Some Asymptotic Results

In this section, we shall derive the asymptotic distributions of the $\hat Q$ and $\hat D$ tests for diagnostic checking of the model adequacy of AR($p$) models with stable Paretian errors. Consider the general AR($p$) process:

$$\phi(B)X_t=Z_t, \tag{17}$$

where $\{Z_t\}$ is an IID sequence of stable Paretian random variables, $B$ denotes the backward operator, and $\phi(B)=1-\phi_1B-\cdots-\phi_pB^p$. Let $\hat\phi(p)=(\hat\phi_1,\dots,\hat\phi_p)^T$ denote the estimates of the autoregressive coefficients. The residuals of the fitted model are given by

$$\hat Z_t=Z_t(\hat\phi(p))=X_t-\hat\phi_1X_{t-1}-\cdots-\hat\phi_pX_{t-p}=\hat\phi(B)X_t, \tag{18}$$

and the corresponding residual autocorrelation at lag $k$ is given by

$$\hat r_k=\frac{\sum\hat Z_t\hat Z_{t-k}}{\sum\hat Z_t^2}.$$

Consider estimators of $\phi(p)$ satisfying

$$\hat\phi(p)=\phi(p)+O_p\big([n/\log(n)]^{-1/\alpha}\big).$$

From Appendix C, the residual autocorrelation at lag $k$, $\hat r_k$, can be approximated by a first-order Taylor expansion about the error autocorrelation functions, $r_k$. Specifically, the approximation is

$$\hat r_k=r_k+\sum_{j=1}^{p}(\phi_j-\hat\phi_j)\psi_{k-j}+O_p\big([n/\log(n)]^{-2/\alpha}\big), \tag{19}$$

where $\psi_{k-j}$ is the impulse response coefficient at lag $k-j$ and $r_k$ is the error autocorrelation at lag $k$. Eqn. (19) can also be written in matrix form, to order $O_p([n/\log(n)]^{-2/\alpha})$,

$$\hat r(m)=r(m)+X\big(\phi(p)-\hat\phi(p)\big), \tag{20}$$

where

$$X=\begin{bmatrix}1&0&\cdots&0\\ \psi_1&1&\ddots&\vdots\\ \vdots&\vdots&\ddots&0\\ \psi_{m-1}&\psi_{m-2}&\cdots&\psi_{m-p}\end{bmatrix}. \tag{21}$$

By making use of eqn. (19) or eqn. (20) and following the proof of Theorem 1, we may derive the asymptotic distributions of the aforementioned portmanteau tests for diagnostic checking of AR models. This distribution, however, is usually very complicated and may not be tractable unless the AR models of interest are fitted by least squares (LS). For simplicity, we only consider the case in which eqn. (17) is estimated using least squares in the subsequent analysis.

According to Davis (1996), if the ARMA parameters $\beta$ are estimated using least squares, then $[n/\log(n)]^{1/\alpha}(\hat\beta_{LS}-\beta)$ converges in distribution, where $\hat\beta_{LS}$ denotes the LS estimate of $\beta$. Hence, in terms of our notation, we have $\hat\phi(p)=\phi(p)+O_p([n/\log(n)]^{-1/\alpha})$. Then, by Box and Pierce (1970), the residuals in eqn. (18) satisfy the orthogonality conditions and, to order $O_p([n/\log(n)]^{-2/\alpha})$,

$$\hat r^T(m)X=0. \tag{22}$$

If we now multiply eqn. (20) on both sides by

$$Q=X(X^TX)^{-1}X^T,$$

then using eqn. (22) we have

$$\hat r(m)=(1_m-Q)r(m) \tag{23}$$

approximately, where $1_m$ is an $m\times m$ identity matrix. It was shown by Box and Pierce (1970) that $1_m-Q$ is idempotent of rank $m-p$. Hence, the asymptotic distribution of the $\hat Q$ test is given by

$$(n/\log n)^{2/\alpha}\sum_{k=1}^{m}\hat r_k^2\to W_m^T(1_m-Q)W_m, \tag{24}$$

where $W_m=(W_1,\dots,W_m)^T$ and the $W_k$ are defined in eqn. (9).
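The matrix $X$ in eqn. (21) is built from the impulse response coefficients of the fitted AR polynomial, which satisfy the usual recursion $\psi_j=\phi_1\psi_{j-1}+\cdots+\phi_p\psi_{j-p}$ with $\psi_0=1$. A small sketch of that construction (Python; helper names ours):

```python
def impulse_responses(phi, m):
    """psi_0..psi_{m-1} of the MA representation of an AR model:
    psi_j = sum_i phi_i * psi_{j-i}, with psi_0 = 1."""
    psi = [1.0]
    for j in range(1, m):
        psi.append(sum(phi[i] * psi[j - 1 - i]
                       for i in range(min(len(phi), j))))
    return psi

def x_matrix(phi, m):
    """The m x p matrix X of eqn (21): X[k][j] = psi_{k-j}, zero above
    the diagonal (psi with negative index is 0)."""
    p = len(phi)
    psi = impulse_responses(phi, m)
    return [[psi[k - j] if k >= j else 0.0 for j in range(p)]
            for k in range(m)]
```

For an AR(1) with coefficient 0.5 this gives $\psi_j=0.5^j$ and a single-column $X$, matching the first column of eqn. (21).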

Consider next the asymptotic distribution of the residual partial autocorrelations. Let $\hat\pi(m)$ be the vector of the first $m$ residual partial autocorrelations and $\pi(m)$ be the vector of error partial autocorrelations. The Taylor expansion of $\hat\pi(m)$ around $\pi(m)$ yields

$$\hat\pi(m)=\pi(m)+\frac{\partial\pi(m)}{\partial r(m)}\big(\hat r(m)-r(m)\big)+O_p\big([n/\log n]^{-2/\alpha}\big). \tag{25}$$

By eqns. (12) and (13), eqn. (25) becomes

$$\hat\pi(m)=(1_m-Q)r(m)+O_p\big([n/\log n]^{-2/\alpha}\big). \tag{26}$$

Consider the Peña-Rodriguez test in the form

$$\hat D=(n/\log n)^{2/\alpha}\big(1-|\hat R(m)|^{1/m}\big), \tag{27}$$

where $\hat R(m)$ is the residual autocorrelation matrix. By eqn. (26) and following the proof of Theorem 1, the limiting distribution of eqn. (27) is $W_m^T(1_m-Q)^TA(1_m-Q)W_m$, where $W_m=(W_1,\dots,W_m)^T$ and $A$ is a diagonal matrix with $i$-th diagonal element equal to $(m+1-i)/m$ for $i=1,\dots,m$.

Remark 4: It is shown in Appendix C.4 that the residuals in a fitted ARMA model are asymptotically equivalent to those in a particular AR model. Hence the asymptotic results for the AR case may be extended to the ARMA case.

4.2 Some Size and Power Calculations

As in §3.4, the slow convergence of the $\hat Q$ and $\hat D$ tests to their asymptotic distributions is also present for the residual autocorrelations. A first-order autoregressive process was simulated and AR(1) models were fitted to the data. The 5%, 10%, 30%, 50%, 70%, 90%, 95%, 97.5% and 99% empirical quantiles of the residual autocorrelation at lag one, $\hat r_1$, were then plotted against its theoretical asymptotic distribution. The asymptotic distribution of the error autocorrelation at lag one, $r_1$, was also plotted in Figure 3. It is seen that the empirical quantiles of $\hat r_1$ get closer to its asymptotic distribution as the series length increases. However, this is not the case when the empirical quantiles of $\hat r_1$ are compared with the asymptotic distribution of $r_1$. Therefore, serious size distortion may be present if one uses the asymptotic distribution of the error autocorrelations as a diagnostic tool for checking model adequacy. The slow convergence of the residual autocorrelations to their asymptotic distribution may cause difficulties in using portmanteau tests in practice. Therefore, as in §3.4, we suggest using the Monte-Carlo test to improve the effectiveness of the portmanteau tests.

[Figure 3]

We now investigate the effectiveness of the $\hat Q$ and $\hat D$ tests for diagnostic checking of fitted AR models with stable Paretian errors. The empirical sizes of the $\hat Q$ and $\hat D$ tests were first calculated via simulation. In this experiment, AR models were simulated for several parameter values and values of $\alpha$, and AR models were fitted to the simulated data by the Burg algorithm. The empirical size for each test was calculated over repeated simulations, with each Monte-Carlo test itself based on simulated replications. Several series lengths and lags were investigated. It is seen in Table 3 that the empirical sizes of both tests are very close to their nominal level.

[Table 3]

The empirical powers of the $\hat Q$ and $\hat D$ tests as diagnostic tools were also investigated via simulation. The twelve ARMA models in Table 4 of Peña and Rodriguez (2002) were simulated and AR models were fitted to the simulated data using the Burg algorithm. Both tests were calculated using the parametric bootstrap procedure, with the empirical powers calculated over repeated simulations. It is seen in Table 4 that the empirical powers of both tests are reasonably good for most models. Some are even better than the powers listed in Peña and Rodriguez (2002). In addition, increasing the series length also improves the effectiveness of the proposed test procedure. For example, with model 3 in Table 2, when the series length was increased, the empirical powers at the three lags considered increased significantly from 23.37%, 20.10% and 17.61% to 58.27%, 43.71% and 35.52%, respectively. Similar improvement was also found for the other test. Finally, as in Peña and Rodriguez (2002), our simulation experiments show that $\hat D$ is more powerful than $\hat Q$ as a diagnostic tool.

[Table 4]

Remark 5: It is well known that the Burg estimate of $\phi(p)$ is close to the LS estimate. The advantage of using the Burg estimate is that it always lies in the stationary region, which is needed for the Monte-Carlo test.
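The Burg recursion itself is short. A minimal sketch (Python; our own implementation, not the paper's) shows why the fitted model is always stationary: each reflection coefficient satisfies $|k|\le 1$ by the AM-GM inequality applied to its denominator.

```python
def burg(x, p):
    """Burg estimates of AR(p) coefficients phi_1..phi_p.
    Each reflection coefficient k satisfies |k| <= 1, so the fitted
    model lies in the stationary region."""
    f = list(x[1:])      # forward prediction errors, aligned so that
    b = list(x[:-1])     # f[i] pairs with the lagged backward error b[i]
    a = []
    for m in range(1, p + 1):
        den = sum(v * v for v in f) + sum(v * v for v in b)
        if den == 0.0:               # series perfectly predicted: stop
            break
        k = 2.0 * sum(fi * bi for fi, bi in zip(f, b)) / den
        # Levinson update of the AR coefficients
        a = [a[j] - k * a[m - 2 - j] for j in range(m - 1)] + [k]
        f, b = ([fi - k * bi for fi, bi in zip(f, b)][1:],
                [bi - k * fi for fi, bi in zip(f, b)][:-1])
    return a
```

On the noise-free exponential series $x_t=0.9^t$ the order-1 estimate is exactly $2(0.9)/(1+0.9^2)$, which lies inside $(-1,1)$ as the remark asserts.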

4.3 Illustrative Application

Tsay (2002, Ch. 2) tentatively identified an AR(3) or AR(5) model for the monthly simple returns of the CRSP value-weighted index from January 1926 to December 1997 using the partial autocorrelation function. The usual Box-Pierce portmanteau test does not suggest model inadequacy of either model at the 5% level. By applying our Monte-Carlo test procedure, however, both the $\hat Q$ and $\hat D$ tests of §4 reject both models. The P-values are displayed in Table 5. The infinite-variance hypothesis is plausible since the estimates of $\alpha$ for the residuals of the fitted AR(3) and AR(5) models are 1.696 and 1.635, respectively. We may conclude from this example that using the ordinary portmanteau tests may lead to a wrong decision if the innovations have infinite variance.

[Table 5]

5. CONCLUDING REMARK

An R package implementing the portmanteau tests described in this paper will be provided on CRAN.

APPENDIX A: PROOF OF THEOREM 1

First, by decomposing the determinant of the sample autocorrelation matrix $R(m)$, Peña and Rodriguez (2002) showed that $|R(m)|^{1/m}$ is a weighted function of the first $m$ partial autocorrelations. Specifically,

$$|R(m)|^{1/m}=\prod_{i=1}^{m}\big(1-\pi_i^2\big)^{(m+1-i)/m}. \tag{28}$$

By applying the delta-method, $\hat D$ is asymptotically equivalent to $-(n/\log(n))^{2/\alpha}\log\big(|R(m)|^{1/m}\big)$. From eqn. (28), we have

$$-(n/\log(n))^{2/\alpha}\log\big(|R(m)|^{1/m}\big)=-(n/\log(n))^{2/\alpha}\sum_{i=1}^{m}\frac{m-i+1}{m}\log\big(1-\pi_i^2\big). \tag{29}$$

Next suppose that

$$(n/\log(n))^{2/\alpha}\big(\pi_1^2,\pi_2^2,\dots,\pi_m^2\big)^T\to Y, \tag{30}$$

and apply the multivariate delta-method to

$$g\big(\pi_1^2,\pi_2^2,\dots,\pi_m^2\big)=-\sum_{i=1}^{m}\frac{m-i+1}{m}\log\big(1-\pi_i^2\big);$$

it follows that

$$-(n/\log(n))^{2/\alpha}\sum_{i=1}^{m}\frac{m-i+1}{m}\log\big(1-\pi_i^2\big)\to\Big(1,\frac{m-1}{m},\dots,\frac{1}{m}\Big)Y. \tag{31}$$

From the Cramér-Wold theorem, it follows that

$$\Big(1,\frac{m-1}{m},\dots,\frac{1}{m}\Big)\Big((n/\log(n))^{2/\alpha}\pi_1^2,\dots,(n/\log(n))^{2/\alpha}\pi_m^2\Big)^T\to\Big(1,\frac{m-1}{m},\dots,\frac{1}{m}\Big)Y. \tag{32}$$

By eqn. (14), it follows that

$$\Big(1,\frac{m-1}{m},\dots,\frac{1}{m}\Big)\Big((n/\log(n))^{2/\alpha}\pi_1^2,\dots,(n/\log(n))^{2/\alpha}\pi_m^2\Big)^T\to W_1^2+\frac{m-1}{m}W_2^2+\cdots+\frac{1}{m}W_m^2. \tag{33}$$

Finally, from eqn. (32) and eqn. (33),

$$\Big(1,\frac{m-1}{m},\dots,\frac{1}{m}\Big)Y\to\sum_{i=1}^{m}\frac{m+1-i}{m}W_i^2,$$

and from eqn. (30), we have

$$\hat D\to\sum_{i=1}^{m}\frac{m+1-i}{m}W_i^2.\qquad\Box$$

APPENDIX B: MONTE-CARLO TEST PROCEDURE

The Monte-Carlo test procedure for diagnostic checking of AR and ARMA models with stable Paretian errors can be summarized as follows. Note that, to check the randomness of a time series, we skip Step 1 and, in Step 4, we simulate data from an IID sequence of stable Paretian random variables rather than from the fitted model.

Step 1

Fit an AR model to the data using least squares or the Burg algorithm; for ARMA models, an approximate Gaussian maximum likelihood algorithm is used. Calculate the residuals and the portmanteau test statistic of interest, say $T$.

Step 2

Estimate $\alpha$ from the residuals in Step 1. The estimator given by McCulloch (1986) may be used.

Step 3

Select the number of Monte-Carlo simulations, $N$.

Step 4

Simulate the fitted model using the estimated AR or ARMA parameters from Step 1 and the estimate of $\alpha$ from Step 2. Obtain $T$ after estimating the parameters in the simulated series.

Step 5

Repeat Step 4 $N$ times, counting the number of times that a value of $T$ greater than or equal to that in Step 1 is obtained.

Step 6

The P-value for the test is $(c+1)/(N+1)$, where $c$ is the count obtained in Step 5.

Step 7

Reject the null hypothesis if the P-value is smaller than a predetermined significance level.
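For the randomness test (which skips Step 1), the whole procedure fits in a few lines. The sketch below is an illustration of the steps above, not the authors' implementation: helper names are ours, the stable variates are symmetric draws via Chambers-Mallows-Stuck, the statistic is the Box-Pierce-type quantity of eqn. (15), and the P-value is the usual Monte-Carlo form $(c+1)/(N+1)$.

```python
import math
import random

def rstable(alpha, n, rng):
    """Symmetric alpha-stable variates via Chambers-Mallows-Stuck."""
    out = []
    for _ in range(n):
        u = rng.uniform(-math.pi / 2, math.pi / 2)
        w = rng.expovariate(1.0)
        if alpha == 1.0:
            out.append(math.tan(u))
        else:
            out.append(math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
                       * (math.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))
    return out

def q_stat(x, m, alpha):
    """Box-Pierce-type statistic of eqn (15), mean-corrected ACFs."""
    n = len(x)
    mu = sum(x) / n
    d = [v - mu for v in x]
    denom = sum(v * v for v in d)
    s = sum((sum(d[t] * d[t + k] for t in range(n - k)) / denom) ** 2
            for k in range(1, m + 1))
    return (n / math.log(n)) ** (2 / alpha) * s

def mc_randomness_test(x, m, alpha, n_rep=100, seed=1):
    """Monte-Carlo P-value for randomness (Steps 3-6): reference
    statistics are recomputed on simulated IID stable sequences."""
    rng = random.Random(seed)
    obs = q_stat(x, m, alpha)
    c = sum(q_stat(rstable(alpha, len(x), rng), m, alpha) >= obs
            for _ in range(n_rep))
    return (c + 1) / (n_rep + 1)
```

For the model-adequacy version, Step 4 would instead simulate from the fitted AR model and refit before computing the statistic.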

APPENDIX C: THE GENERALIZATION OF LINEAR EXPANSION OF RESIDUAL AUTOCORRELATION

C.1 Introduction

Residual autocorrelations are an important tool for diagnostic checking of autoregressive-moving average (ARMA) models. Their asymptotic distributions for univariate ARMA models were first derived by Box and Pierce (1970). McLeod (1978) refined the derivation and extended it to multiplicative seasonal ARMA models. These results were established under the assumption that the error sequence has finite variance and the parameters are estimated using least squares or, equivalently, maximum likelihood estimation (MLE) for Gaussian ARMA processes. The results may not be valid if the parameters of interest are estimated using other estimation methods, or for linear processes with infinite variance. This section demonstrates how the linear expansion of residual autocorrelations in Box and Pierce (1970) also holds for other estimation methods and for AR models with stable Paretian errors. The expansion may be used to derive the limiting distribution of the residual autocorrelations.

C.2 The Autoregressive Process

Consider an AR($p$) process:

$$\phi(B)y_t=a_t, \tag{34}$$

where $B$ denotes the backward operator, $\phi(B)=1-\phi_1B-\cdots-\phi_pB^p$, and $\{a_t\}$ is a sequence of independent and identically distributed random variables with mean zero and finite variance $\sigma^2$. For given values of the parameters, we can define

$$\dot a_t=a_t(\dot\Phi)=y_t-\dot\phi_1y_{t-1}-\cdots-\dot\phi_py_{t-p}=\dot\Phi(B)y_t \tag{35}$$

and the corresponding autocorrelation function at lag $k$ as

$$\dot r_k=r_k(\dot\Phi)=\frac{\sum\dot a_t\dot a_{t-k}}{\sum\dot a_t^2}. \tag{36}$$

C.3 Linear Expansion of Residual Autocorrelation Function about Error Autocorrelation Functions

Consider approximating the residual autocorrelation $\hat r_k$ by a first-order Taylor expansion about the error autocorrelation $r_k$. Let $\dot c_k$ and $\dot c_0$ denote $\sum\dot a_t\dot a_{t-k}$ and $\sum\dot a_t^2$, respectively, where $k$ is a positive integer. Consider estimators of the $\phi_j$ satisfying

$$\hat\phi_j=\phi_j+O_p(1/\sqrt n),\qquad\forall\,j. \tag{37}$$

We have

$$\hat r_k=r_k+\sum_{j=1}^{p}(\phi_j-\hat\phi_j)\hat\delta_{jk}+O_p(1/n), \tag{38}$$

where

$$\hat\delta_{jk}=-\left.\frac{\partial\dot r_k}{\partial\dot\phi_j}\right|_{\dot\Phi=\hat\Phi}=-\left.\frac{\partial}{\partial\dot\phi_j}\Big(\frac{\dot c_k}{\dot c_0}\Big)\right|_{\dot\Phi=\hat\Phi}=\hat\delta^{(1)}_{jk}+\hat\delta^{(2)}_{jk}, \tag{39}$$

$$\hat\delta^{(1)}_{jk}=-\dot c_k\left.\frac{\partial}{\partial\dot\phi_j}\Big(\frac{1}{\dot c_0}\Big)\right|_{\dot\Phi=\hat\Phi}$$

and

$$\hat\delta^{(2)}_{jk}=-\frac{1}{\dot c_0}\left.\frac{\partial\dot c_k}{\partial\dot\phi_j}\right|_{\dot\Phi=\hat\Phi}.$$

For LS estimates, we have that

$$\left.\frac{\partial}{\partial\dot\phi_j}\Big[\sum\dot a_t^2\Big]\right|_{\dot\Phi=\hat\Phi}=\left.\frac{\partial\dot c_0}{\partial\dot\phi_j}\right|_{\dot\Phi=\hat\Phi}=0, \tag{40}$$

so it is straightforward that $\hat\delta^{(1)}_{jk}=0$. Using this result, Box and Pierce (1970) showed that $\hat\delta_{jk}=\psi_{k-j}$ to order $O_p(1/\sqrt n)$, where the $\psi$'s are the impulse response coefficients of the MA representation of eqn. (34). For other estimation methods, however, $\hat\delta^{(1)}_{jk}$ may not be zero since eqn. (40) does not hold. To obtain a general result for $\hat\delta_{jk}$, therefore, we will calculate $\hat\delta^{(1)}_{jk}$ explicitly.

Note that $\hat\delta^{(1)}_{jk}$ can be written as follows:

$$\hat\delta^{(1)}_{jk}=\dot c_k\Big[\sum\dot a_t^2\Big]^{-2}\left.\frac{\partial\dot c_0}{\partial\dot\phi_j}\right|_{\dot\Phi=\hat\Phi}. \tag{41}$$

By eqn. (2.15) of Box and Pierce (1970), eqn. (41) can be expressed as follows:

$$\hat\delta^{(1)}_{jk}=\frac{\sum y_t^2}{\sum\hat a_t^2}\cdot\sum_{i=0}^{p}\hat\phi_i\big[r^{(y)}_{-i+j}+r^{(y)}_{i-j}\big]\cdot\frac{\hat c_k}{\hat c_0}=\frac{\sum_{i=0}^{p}\hat\phi_i\big[r^{(y)}_{-i+j}+r^{(y)}_{i-j}\big]}{\sum_{i=0}^{p}\sum_{l=0}^{p}\hat\phi_i\hat\phi_l\,r^{(y)}_{i-l}}\cdot\hat r_k, \tag{42}$$

where

$$r^{(y)}_\nu=\frac{\sum y_ty_{t-\nu}}{\sum y_t^2}.$$

Let $\hat\zeta_j$ denote

$$\Big(\sum_{i=0}^{p}\hat\phi_i\big[r^{(y)}_{-i+j}+r^{(y)}_{i-j}\big]\Big)\Big/\Big(\sum_{i=0}^{p}\sum_{l=0}^{p}\hat\phi_i\hat\phi_l\,r^{(y)}_{i-l}\Big),$$

and approximate $\hat\zeta_j$ by $\zeta_j$, obtained by replacing the $\hat\phi$'s and $r^{(y)}$'s with the $\phi$'s and $\rho$'s, the theoretical parameters and autocorrelations of the autoregressive process $\{y_t\}$. By Bartlett's formula,

$$r^{(y)}_k=\rho_k+O_p(1/\sqrt n),$$

and by eqns. (37) and (42), we have

$$\hat\zeta_j=\zeta_j+O_p(1/\sqrt n). \tag{43}$$

Then by making use of the recursive relation satisfied by the autocorrelations of an autoregressive process, eqn. (2.19) of Box and Pierce (1970), or

$$\rho_\nu-\phi_1\rho_{\nu-1}-\cdots-\phi_p\rho_{\nu-p}=\phi(B)\rho_\nu=0,\qquad\nu\ge1, \tag{44}$$

$\zeta_j$ can be simplified to yield

$$\zeta_j=\frac{\sum_{i=0}^{p}\phi_i\rho_{-j+i}}{\sum_{i=0}^{p}\phi_i\rho_i}. \tag{45}$$

Note that eqn. (45) has the same form as eqn. (2.20) of Box and Pierce (1970). Moreover, by eqn. (44) and the symmetry $\rho_{-\nu}=\rho_\nu$, the numerator of eqn. (45) vanishes for $j\ge1$, so $\zeta_j=0$. Plugging this result into eqn. (42), we have $\hat\delta^{(1)}_{jk}=O_p(1/\sqrt n)$. Consequently, eqn. (2.20) of Box and Pierce (1970) for the linear expansion of the residual autocorrelations still holds for other estimators satisfying eqn. (37).

Remark 6: Many estimators of $\phi(p)$ for an AR model with stable Paretian errors have order $O_p([n/\log(n)]^{-1/\alpha})$, such as the Whittle, Yule-Walker and LS estimators. Using the result that $\zeta_j=0$, and following the proofs in this section as well as in Box and Pierce (1970), we may obtain the linear expansion of the residual autocorrelation functions for AR models with stable Paretian errors as in eqn. (19).

C.4 The Equality of Residuals in  AR  and  ARMA  Models

The result in §C.3 may be extended to ARMA models using the technique in §5.1 of Box and Pierce (1970). Suppose two time series, (a) an ARMA($p,q$) process

$$\phi(B)w_t=\theta(B)a_t, \tag{46}$$

and (b) an autoregressive series

$$\pi(B)x_t=\big(1-\pi_1B-\cdots-\pi_{p+q}B^{p+q}\big)x_t=a_t, \tag{47}$$

are both generated from the same set of errors $\{a_t\}$, where

$$\phi(B)=1-\phi_1B-\phi_2B^2-\cdots-\phi_pB^p$$

and

$$\theta(B)=1-\theta_1B-\theta_2B^2-\cdots-\theta_qB^q.$$

If

$$\pi(B)=\phi(B)\theta^{-1}(B), \tag{48}$$

then when the models are fitted by least squares, their residuals, and hence also their residual autocorrelations, will be very nearly the same. In this section, we consider whether this equality of residuals between AR and ARMA models is still valid when the parameters are estimated by other approaches.

As in eqn. (35), define

$$\dot a^{AR}_t=a^{AR}_t(\dot\pi)=\dot\pi(B)x_t=-\sum_{j=0}^{p+q}\dot\pi_jx_{t-j}, \tag{49}$$

where $\dot\pi_0=-1$, and now also

$$\dot a^\star_t=a^\star_t(\dot\phi,\dot\theta)=\dot\phi(B)\dot\theta^{-1}(B)w_t=\Big[\sum_{i=0}^{p}\dot\phi_iB^i\Big]\Big[\sum_{j=0}^{q}\dot\theta_jB^j\Big]^{-1}w_t, \tag{50}$$

where $\dot\phi_0=\dot\theta_0=-1$. Using eqn. (5.12) and eqn. (5.13) of Box and Pierce (1970), we can approximate $\dot a^{AR}$ and $\dot a^\star$ as follows:

$$\dot a^{AR}=a+X(\pi-\dot\pi) \tag{51}$$

and

$$\dot a^\star=a+X(\beta-\dot\beta). \tag{52}$$

Note that eqn. (51) and eqn. (52) can each be seen as a linear regression model. We can estimate the regression coefficients $\pi$ and $\beta$ using any suitable method; let $g(X,\cdot)$ denote the corresponding estimator. Since eqn. (51) and eqn. (52) have the same form, their estimators agree with each other. For example, the least squares estimates are given by

$$\hat\pi-\dot\pi=g(X,\dot a^{AR})=(X^TX)^{-1}X^T\dot a^{AR} \tag{53}$$

and

$$\hat\beta-\dot\beta=g(X,\dot a^\star)=(X^TX)^{-1}X^T\dot a^\star. \tag{54}$$

Then by setting $\dot\pi=\pi$ and $\dot\beta=\beta$ and estimating the regression coefficients of eqn. (51) and eqn. (52), we have

$$\hat\pi-\pi=g(X,a)=\hat\beta-\beta. \tag{55}$$

Finally, by setting $\dot\pi=\hat\pi$ and $\dot\beta=\hat\beta$ in eqn. (51) and eqn. (52), it follows from eqn. (55) that

$$\hat a^{AR}=a+X(\pi-\hat\pi)=a+X(\beta-\hat\beta)=\hat a^\star, \tag{56}$$

and thus (to the same order) their residual autocorrelations are equal.

REFERENCES

Adler, R.J., Feldman, R.E. and Gallagher, C. (1998), “Analysing Stable Time Series,” in A Practical Guide to Heavy Tails: Statistical Techniques and Applications, Birkhäuser, Boston.

Box, G.E.P. and Pierce, D.A. (1970), “Distribution of Residual Autocorrelations in Autoregressive-Integrated Moving Average Time Series Models,” Journal of the American Statistical Association 65, 1509–1526.

Brockwell, P.J. and Davis, R.A. (1991), Time Series: Theory and Methods, Springer, New York.

Davis, R.A. (1996), “Gauss-Newton and M-estimation for  ARMA  processes,” Stochastic Processes and their Applications 63, 75–95.

Davis, R.A. and Resnick, S. (1986), “Limit Theory for the Sample Covariance and Correlation Functions of Moving Averages,” The Annals of Statistics 14, 533–558.

McCulloch, J.H. (1986), “Simple Consistent Estimators of Stable Distribution Parameters,” Communications in Statistics – Simulation and Computation 15, 1109–1136.

Mittnik, S., Rachev, S.T., Doganoglu, T. and Chenyao, D. (1999), “Maximum Likelihood Estimation of Stable Paretian Models,” Mathematical and Computer Modelling, 29, 275–293.

Monti, A.C. (1994), “A Proposal for a Residual Autocorrelation Test in Linear Models,” Biometrika 81, 776–780.

Peña, D. and Rodriguez, J. (2002), “A Powerful Portmanteau Test of Lack of Fit for Time Series,” Journal of the American Statistical Association 97, 601–610.

Runde, R. (1997), “The Asymptotic Null Distribution of the Box-Pierce Q-Statistic for Random Variables with Infinite Variance: An Application to German Stock Returns,” Journal of Econometrics 78, 205–216.

Samorodnitsky, G. and Taqqu, M.S. (1994), Stable Non-Gaussian Random Processes, Chapman & Hall, New York.

Tsay, R.S. (2002), Analysis of Financial Time Series, New York: Wiley.