Causality tests with heteroscedastic errors

# Adaptive estimation of vector autoregressive models with time-varying variance: application to testing linear causality in mean

###### Abstract

Linear Vector AutoRegressive (VAR) models where the innovations could be unconditionally heteroscedastic and serially dependent are considered. The volatility structure is deterministic and quite general, including breaks or trending variances as special cases. In this framework we propose Ordinary Least Squares (OLS), Generalized Least Squares (GLS) and Adaptive Least Squares (ALS) procedures. The GLS estimator requires the knowledge of the time-varying variance structure while in the ALS approach the unknown variance is estimated by kernel smoothing with the outer product of the OLS residuals vectors. Different bandwidths for the different cells of the time-varying variance matrix are also allowed. We derive the asymptotic distribution of the proposed estimators for the VAR model coefficients and compare their properties. In particular we show that the ALS estimator is asymptotically equivalent to the infeasible GLS estimator. This asymptotic equivalence is obtained uniformly with respect to the bandwidth(s) in a given range and hence justifies data-driven bandwidth rules. Using these results we build Wald tests for the linear Granger causality in mean which are adapted to VAR processes driven by errors with a non stationary volatility. It is also shown that the commonly used standard Wald test for the linear Granger causality in mean is potentially unreliable in our framework. Monte Carlo experiments illustrate the use of the different estimation approaches for the analysis of VAR models with stable innovations.

VAR model; Heteroscedastic errors; Adaptive least squares; Ordinary least squares; Kernel smoothing; Linear causality in mean.
JEL Classification: C01; C32
Valentin Patilea (IRMAR-INSA & CREST-Ensai) and Hamdi Raïssi (IRMAR-INSA)

20, avenue des buttes de Coësmes, CS 70839, F-35708 Rennes Cedex 7, France. Email: valentin.patilea@insa-rennes.fr and hamdi.raissi@insa-rennes.fr

First version January 2010

This version July 2010

## 1 Introduction

In recent years the study of linear time series models with unconditionally heteroscedastic innovations has attracted increasing interest. This interest may be explained by the fact that numerous applied works have pointed out that non-constant unconditional volatility is a common feature of economic data. For instance Doyle and Faust (2005), Ramey and Vine (2006), McConnell and Perez-Quiros (2000) and Blanchard and Simon (2001), among other references, pointed out a declining volatility for many economic series since the 1980s. Sensier and van Dijk (2004) found that about 80% of the 214 U.S. macroeconomic time series they considered exhibit a break in volatility.

In the univariate time series case Busetti and Taylor (2003), Cavaliere (2004), Cavaliere and Taylor (2007) and Kim, Leybourne and Newbold (2002), among other references, considered testing for unit roots under non stationary volatility, while Sanso, Arago and Carrion (2004) proposed tests to detect volatility breaks in the residuals. Robinson (1987) and Hansen (1995) studied univariate linear models with non stationary volatility. Phillips and Xu (2005) investigated the Ordinary Least Squares (OLS) estimation of univariate stable autoregressive processes. Xu and Phillips (2008) considered the same model and proposed an Adaptive Least Squares (ALS) approach, which is based on nonparametric estimation of the volatility of the innovations using the OLS residuals. The main conclusion of Xu and Phillips (2008) is that the ALS approach can be much more efficient than the OLS estimation. They also found that the asymptotic behavior of the ALS estimator does not depend on the volatility structure. Multivariate processes are often used in econometric applications because they allow the study of cross-correlations between variables. In the multivariate framework Boswijk and Zu (2007) and Cavaliere, Rahbek and Taylor (2007) studied cointegrated systems in the presence of non stationary volatility.

In this paper we study inference in linear vector autoregressive (VAR) models with volatility changes and possibly serially dependent innovations. Three methods for estimating the VAR coefficients are investigated: OLS, infeasible Generalized Least Squares (GLS) based on knowledge of the time-varying volatility structure, and ALS, which is defined like the GLS but uses a kernel estimate of the volatility structure. The kernel smoothing can be carried out with a single bandwidth for the whole volatility matrix or with different bandwidths for different cells. In some sense, we extend the approach of Phillips and Xu (2005) and Xu and Phillips (2008) to the VAR framework. In particular, we show that in the multivariate case the asymptotic distribution of the GLS and ALS estimators is no longer free of the time-varying volatility structure. Moreover, our asymptotic results are uniform with respect to the bandwidth in a given range. This opens the door to data-driven choices of the smoothing parameter, for instance by cross-validation. Such uniformity results seem new even in the univariate case.

As an application of the new estimation methodology, we also consider the problem of testing linear causality in mean. Linear causality in mean, introduced by Granger (1969), is often used to investigate causal relations between subsets of variables. For instance Sims (1972), Feige and Pearce (1979) and Stock and Watson (1989) studied the money-income causality relation, and Bataa et al. (2009) studied the links between the inflation rates of different countries by testing linear causality relations. This popularity can be explained by the fact that linear causality in mean can easily be tested through zero restrictions on the parameters of VAR models. However, the existing test procedures for linear causality in mean are based on the iid innovation assumption, while several empirical analyses contradict this setting. For instance, Bataa et al. (2009) underlined the presence of volatility breaks in their data set. In this paper, we use our theoretical results on the OLS and ALS estimation to propose new Wald tests for linear causality in mean adapted to the framework of non stationary volatility. The asymptotic chi-square distribution of the new Wald type statistic obtained from the ALS approach is derived uniformly with respect to the bandwidth(s).

The structure of the paper is as follows. Section 2 outlines the heteroscedastic VAR model and introduces the assumptions and the definitions of the OLS and GLS estimators. Section 3 contains the results on the asymptotic behavior of the OLS and of the infeasible GLS estimators. We also propose an estimator for the asymptotic variance of the OLS estimator. The ALS estimator based on kernel smoothing of the OLS residuals is proposed in Section 4 as a feasible, asymptotically equivalent version of the GLS estimator. The asymptotic equivalence between the ALS and GLS estimators is proved uniformly in the bandwidths involved in the volatility estimation. To prove this equivalence we use, among other technical arguments, a recent version of a uniform CLT for martingale difference arrays obtained by Bae et al. (2010) and Bae and Choi (1999). A procedure for estimating the asymptotic variance of the ALS estimator is also provided. The application of the new inference methodologies to testing linear Granger causality in mean in the presence of time-varying volatility is presented in Section 5. The benefit of using our new Wald type test statistics, and the failure of the classical Wald test designed for iid innovations, is illustrated through an example. In Section 6 the finite sample properties of the different tests considered in this paper are studied by means of Monte Carlo experiments. The better accuracy of the ALS estimator compared to the OLS estimator is also highlighted. The proofs are relegated to the appendix.

The following notation will be used throughout the paper. We denote by A⊗B the Kronecker product of two matrices A and B, and write A⊗2=A⊗A. The vector obtained by stacking the columns of A is denoted vec(A). The symbol ⇒ denotes convergence in distribution and →p denotes convergence in probability. We denote by [a] the integer part of a real number a. The determinant of a square matrix A is denoted by det(A).

## 2 The model and least squares estimation of the parameters

Let us consider the observations generated by the following VAR model

 Xt=A1Xt−1+⋯+ApXt−p+ut, ut=Htϵt, (1)

where the Xt's are d-dimensional random vectors. The stability condition det(A(z))≠0 for all |z|≤1, with A(z)=Id−A1z−⋯−Apzp and Id the d-dimensional identity matrix, is assumed to hold. For a random vector x we define ∥x∥r=(E∥x∥r)1/r, where ∥⋅∥ denotes the Euclidean norm. We also define Ft as the σ-field generated by the ϵs, s≤t. The following assumption on the Ht's and the process (ϵt) gives the framework of our paper.
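The data generating process above is straightforward to simulate. The sketch below (all names hypothetical) draws Gaussian innovations and evaluates a user-supplied variance function at rescaled time t/T, in the spirit of Assumption A1 below:

```python
import numpy as np

def simulate_var(T, A, sigma_fn, burn=200, seed=0):
    """Simulate X_t = A_1 X_{t-1} + ... + A_p X_{t-p} + u_t with
    u_t = H_t eps_t, where H_t is a square root of Sigma(t/T)
    (deterministic, rescaled-time volatility) and eps_t is iid N(0, I_d).
    A is the list [A_1, ..., A_p] of d x d coefficient matrices."""
    rng = np.random.default_rng(seed)
    p, d = len(A), A[0].shape[0]
    X = np.zeros((T + burn, d))
    for t in range(p, T + burn):
        r = min(max(t - burn, 1), T) / T          # rescaled time in (0, 1]
        H = np.linalg.cholesky(sigma_fn(r))       # any square root of Sigma(r)
        u = H @ rng.standard_normal(d)
        X[t] = sum(A[i] @ X[t - 1 - i] for i in range(p)) + u
    return X[burn:]

# Example: bivariate VAR(1) with an abrupt variance shift at mid-sample
A1 = np.array([[0.4, 0.1], [0.0, 0.3]])
sigma = lambda r: (1.0 if r < 0.5 else 4.0) * np.eye(2)
X = simulate_var(500, [A1], sigma)
```

A burn-in period is discarded so that the starting values do not matter for the stable process.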

Assumption A1:  (i) The matrices Ht are invertible and satisfy Ht=G(t/T), where the components of the matrix G(⋅) are measurable deterministic functions on the interval (0,1], such that supr∈(0,1]∥G(r)∥<∞, and each component satisfies a Lipschitz condition piecewise on a finite number of sub-intervals that partition (0,1]. The matrix Σ(r)=G(r)G(r)′ is assumed positive definite for all r.
(ii) The process (ϵt) is α-mixing with E(ϵt∣Ft−1)=0 and E(ϵtϵ′t∣Ft−1)=Id, and the components ϵit of the process satisfy supt E|ϵit|4μ<∞ for some μ>1 and all i∈{1,…,d}.

Assumption A1 generalizes the assumptions of Xu and Phillips (2008) to the multivariate case. Under Assumption A1 (ii), the innovations are possibly serially dependent. However, since the volatility is deterministic, we do not allow the error process to follow a multivariate GARCH model. Cavaliere, Rahbek and Taylor (2007) considered a volatility structure similar to ours. Their assumption is slightly different from A1 in the sense that they do not require a Lipschitz condition and allow for a countable number of jumps. Boswijk and Zu (2007) allow the volatility matrix to be stochastic, but require the volatility process to be continuous, together with other additional assumptions, which in particular excludes important cases like abrupt shifts. Hafner and Herwartz (2009) assumed no structure on the volatility of the error process and allow for conditional heteroscedasticity. Nevertheless their framework excludes the use of information on the volatility structure and may result in a loss of efficiency in the statistical inference of the model. In addition Hafner and Herwartz (2009) also assumed

 limT→∞T−1T∑t=1Σt=˙Σ,andlimT→∞T−1T∑t=1E{(~Xt−1~X′t−1)⊗(utu′t)}=W,

where Σt=HtH′t, and ˙Σ and W are positive definite matrices; this could be viewed as too restrictive. If the volatility matrix is constant, we retrieve the standard homoscedastic case. However the assumption of homoscedastic errors is often considered too restrictive for macroeconomic or financial applications. Indeed many applied studies pointed out that such data may display unconditional non stationary volatility (see e.g. Kim and Nelson (1999), Warnock and Warnock (2000) or Batbekh et al. (2007)). Stărică and Granger (2005) found that, for large samples of stock returns, taking into account shifts in the unconditional volatility instead of assuming a stationary model such as a GARCH(1,1) improves volatility forecasts.

Let us denote by θ0=vec([A1,…,Ap]) the vector of the true parameters. Equation (1) becomes

 Xt=(~X′t−1⊗Id)θ0+ut, ut=Htϵt,

where we keep the notation ~Xt−1=(X′t−1,…,X′t−p)′. Using this expression we first define the OLS estimator

 ^θOLS=^Σ−1~Xvec(^ΣX),

where

 ^Σ~X=T−1T∑t=1~Xt−1~X′t−1⊗Idand^ΣX=T−1T∑t=1Xt~X′t−1.
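In practice the OLS estimator reduces to a multivariate least squares regression of Xt on the stacked lags ~Xt−1; a minimal numpy sketch (function name hypothetical) is:

```python
import numpy as np

def var_ols(X, p):
    """OLS for a VAR(p): regress X_t on Xtilde_{t-1} = (X'_{t-1}, ..., X'_{t-p})'.
    Returns the d x pd stacked coefficient matrix [A_1 ... A_p] and the residuals."""
    T, d = X.shape
    Y = X[p:]                                                  # X_t for t = p..T-1
    Z = np.hstack([X[p - 1 - i:T - 1 - i] for i in range(p)])  # lagged regressors
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)                  # solves min ||Y - Z B||
    return B.T, Y - Z @ B
```

The `lstsq` call solves the normal equations implicitly, which is numerically preferable to forming and inverting the moment matrix explicitly.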

Next, let us define the unconditional variance Σt=HtH′t and the Generalized Least Squares (GLS) estimator that takes into account a time-varying Σt, that is

 ^θGLS=^Σ−1~X––vec(^ΣX––), (2)

with

 ^Σ~X––=T−1T∑t=1~Xt−1~X′t−1⊗Σ−1tand^ΣX––=T−1T∑t=1Σ−1tXt~X′t−1.

Note that since Ht is assumed invertible, Σt is positive definite for all t. If we suppose that the volatility matrix is constant in time, it is easy to see that ^θGLS=^θOLS. However the GLS estimator is in general infeasible since the true volatility matrix appears in the expression (2). In the next section we compare the efficiency of the OLS and GLS estimators.
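When the Σt's are known, the GLS estimator (2) can be computed by accumulating the weighted normal equations observation by observation; the following sketch (names hypothetical) assumes the volatilities are supplied as a list of matrices:

```python
import numpy as np

def var_gls(X, p, sigma_list):
    """Infeasible GLS of eq. (2): solve the weighted normal equations
      [sum_t (z_t z_t') kron Sigma_t^{-1}] theta = sum_t (z_t kron Sigma_t^{-1}) X_t,
    where z_t = Xtilde_{t-1} and sigma_list[t] is the known covariance Sigma_t."""
    T, d = X.shape
    k = p * d * d
    M, v = np.zeros((k, k)), np.zeros(k)
    for t in range(p, T):
        z = np.concatenate([X[t - 1 - i] for i in range(p)])   # Xtilde_{t-1}
        Si = np.linalg.inv(sigma_list[t])
        M += np.kron(np.outer(z, z), Si)
        v += np.kron(z[:, None], Si) @ X[t]
    theta = np.linalg.solve(M, v)        # theta = vec([A_1 ... A_p])
    return theta.reshape(p * d, d).T     # back to the d x pd matrix [A_1 ... A_p]
```

With a constant Σt this reproduces the OLS solution, consistent with the remark above.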

## 3 Asymptotic behaviour of the estimators

In order to state the first result of the paper, we need to introduce the following notation. Since we assumed the stability condition, it is well known that

 Xt=∞∑i=0ψiut−i, (3)

where ψ0=Id and the components of the ψi's are absolutely summable (see e.g. Lütkepohl (2005, pp 14-16)). From the expression (3) we also write

 ~Xt=∞∑i=0~ψiupt−i,

where upt is given by upt=1p⊗ut, with 1p the vector of ones of dimension p, and

 ~ψi=⎛⎜ ⎜ ⎜ ⎜ ⎜⎝ψi0000ψi−10000⋱0000ψi−p+1⎞⎟ ⎟ ⎟ ⎟ ⎟⎠,

taking ψi=0 for i<0. Let us denote by 1p×p the p×p matrix with all components equal to one. The following proposition gives the asymptotic behavior of the OLS and GLS estimators. For the sake of brevity we only investigate the asymptotic normality; consistency is in some sense an easier matter and is hence omitted.
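The ψi's can be computed with the standard recursion for the MA(∞) representation of a stable VAR (cf. Lütkepohl (2005)); a sketch, with hypothetical names:

```python
import numpy as np

def ma_coefficients(A, n):
    """MA(infinity) weights of a stable VAR(p), as in eq. (3):
    psi_0 = I_d and psi_i = sum_{j=1}^{min(i,p)} A_j psi_{i-j}.
    A is the list [A_1, ..., A_p]; returns [psi_0, ..., psi_n]."""
    p, d = len(A), A[0].shape[0]
    psi = [np.eye(d)]
    for i in range(1, n + 1):
        psi.append(sum(A[j] @ psi[i - 1 - j] for j in range(min(i, p))))
    return psi
```

For a VAR(1) the recursion collapses to ψi = A1^i, which provides a quick sanity check.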

**Proposition 3.1**

If Assumption A1 holds true, then:

1.  T12(^θGLS−θ0)⇒N(0,Λ−11), (4)

where

 Λ1=∫10∞∑i=0{~ψi(1p×p⊗Σ(r))~ψ′i}⊗Σ(r)−1dr

is positive definite;

2.  T12(^θOLS−θ0)⇒N(0,Λ−13Λ2Λ−13), (5)

where

 Λ2=∫10∞∑i=0{~ψi(1p×p⊗Σ(r))~ψ′i}⊗Σ(r)dr

and

 Λ3=∫10∞∑i=0{~ψi(1p×p⊗Σ(r))~ψ′i}⊗Iddr

are positive definite;

3. The asymptotic variance of ^θGLS is smaller than the asymptotic variance of ^θOLS, that is, the matrix Λ−13Λ2Λ−13−Λ−11 is positive semidefinite.

If we suppose that the error process is homoscedastic, that is Σt=Σu for all t, we obtain

 Λ1=E[~Xt~X′t]⊗Σ−1u,Λ2=E[~Xt~X′t]⊗ΣuandΛ3=E[~Xt~X′t]⊗Id,

so that we retrieve the standard result of the iid case (see e.g. Lütkepohl (2005, p 74))

 Λ−11=Λ−13Λ2Λ−13={E[~Xt~X′t]}−1⊗Σu, (6)

although here the error process is allowed to be serially dependent. Note that in the homoscedastic case the OLS and ALS estimators have the same efficiency.

In the univariate case (d=1), Σ(r) is a positive scalar, so that Λ1 simplifies to

 Λ1=∞∑i=0~ψi1p×p~ψi, (7)

where the ~ψi's are diagonal matrices. This expression corresponds to the asymptotic covariance matrix obtained in equation (10) of Xu and Phillips (2008). Moreover,

 Λ2=∫10Σ(r)2dr∞∑i=0{~ψi1p×p~ψi},Λ3=∫10Σ(r)dr∞∑i=0{~ψi1p×p~ψi},

and then we retrieve equation (5) in Xu and Phillips (2008).

A nice feature of the GLS estimator in the univariate case is that the covariance matrix of its asymptotic distribution does not depend on the volatility function Σ(⋅). In the multivariate case the simplification (7) is still possible if Σ(r)=g(r)Σ, with g(⋅) a scalar function. Nevertheless, we show in Example 3.1 below that (7) does not hold in the general multivariate framework and that the asymptotic covariance matrix in (4) depends on the volatility function Σ(⋅). Moreover, our example shows that the covariance matrices in (4) and (5) can be equal in some particular cases of heteroscedasticity but in general they can be very different.

**Example 3.1**

Consider the bivariate model (1) with p=1 and

 A1=(a100a2),Σ(r)=(Σ1(r)00Σ2(r)).

In this simple case let us compare the asymptotic variances

 Varas(^θ2,GLS)=(1−a21)×(∫10Σ1(r)/Σ2(r)dr)−1

and

 Varas(^θ2,OLS)=(1−a21)×⎧⎪ ⎪ ⎪⎨⎪ ⎪ ⎪⎩∫10Σ1(r)Σ2(r)dr(∫10Σ1(r)dr)2⎫⎪ ⎪ ⎪⎬⎪ ⎪ ⎪⎭,

that is, the asymptotic variances of the GLS and OLS estimators of the second component of the vector θ0 (which corresponds to the (2,1) element of the matrix A1).

First we notice that Varas(^θ2,GLS) depends on the volatility structure when Σ1(r)/Σ2(r) is not constant. In order to illustrate the difference between the variances of ^θ2,OLS and ^θ2,GLS, we plot the ratio

 Varas(^θ2,OLS)/Varas(^θ2,GLS) (8)

in Figure 1 taking

 Σ1(r)=σ210+(σ211−σ210)×1[τ1,1](r)andΣ2(r)=σ220+(σ221−σ220)×1[τ2,1](r),

where and with . This specification of the volatility function is inspired by Example 1 of Xu and Phillips (2008) (see also Cavaliere (2004)). On the left graphic we take and but , so that only is heteroscedastic in general. When or , the process is homoscedastic. On the right graphic we take and but in general. When , we have and hence we retrieve the case studied in Example 1 of Xu and Phillips (2008).

As expected the ratio (8) is equal to one in the homoscedastic case in the left graphic. However, departure from this case clearly shows that the difference between the variances of the two estimators is increasing with . In the right graphic we can see that when or the ratio in (8) is equal to one although is heteroscedastic. The variances and are different when and the largest relative difference is attained when we set the volatility shifts in the middle of the sample.
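The ratio (8) only involves integrals of the volatility functions, so it is easy to evaluate numerically; a sketch (function name hypothetical) using a midpoint Riemann sum:

```python
import numpy as np

def variance_ratio(sigma1, sigma2, grid=10_000):
    """Ratio (8) of the asymptotic OLS and GLS variances in the example:
      [int Sigma1*Sigma2 dr / (int Sigma1 dr)^2] * int Sigma1/Sigma2 dr,
    evaluated on a midpoint grid over [0, 1]; the (1 - a_1^2) factor cancels."""
    r = (np.arange(grid) + 0.5) / grid
    S1, S2 = sigma1(r), sigma2(r)
    return np.mean(S1 * S2) * np.mean(S1 / S2) / np.mean(S1) ** 2
```

For constant Σ1 and Σ2 the ratio is exactly one, matching the homoscedastic benchmark discussed above.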

It appears that the GLS estimator is in general more efficient than the OLS estimator when the matrix Σt is time-varying. Nevertheless, the knowledge of the volatility structure needed to construct the GLS estimator could be unrealistic in practice. Moreover, the asymptotic distribution of the GLS estimator depends on the unknown volatility. In the OLS approach the unknown volatility only enters the asymptotic distribution of the coefficient estimator. In addition, we can provide simple consistent estimators of Λ2 and Λ3, which can be further used, for instance, to build confidence intervals for the OLS estimates. For the purpose of estimating Λ2 and Λ3, let us consider the matrices Ω2 and Ω3, the probability limits in (9) and (10) below, and denote the OLS residuals by ^ut=Xt−(~X′t−1⊗Id)^θOLS.

**Proposition 3.2**

Under Assumption A1 we have

 ^Ω2:=T−1T∑t=2^ut−1^u′t−1⊗^ut^u′t=Ω2+op(1), (9)
 ^Ω3:=T−1T∑t=1^ut^u′t=Ω3+op(1), (10)
 ^Λ2:=T−1T∑t=1~Xt−1~X′t−1⊗^ut^u′t=Λ2+op(1), (11)
 ^Λ3:=^Σ~X=Λ3+op(1). (12)

Using (9) and (10) and some additional algebra, we can define alternative consistent estimators of Λ2 and Λ3. Indeed, it is shown in the appendix that

 vec(Λ2)={I(pd2)2−(Δ⊗Id)⊗2}−1vec(Ω20d2×(p−1)d20(p−1)d2×d20(p−1)d2×(p−1)d2) (13)

and

 vec(Λ3)={I(pd2)2−(Δ⊗Id)⊗2}−1vec(Ω3⊗Id0d2×(p−1)d20(p−1)d2×d20(p−1)d2×(p−1)d2), (14)

where 0m×n denotes the null matrix of dimension m×n and

 Δ=⎛⎜ ⎜ ⎜ ⎜ ⎜⎝A1…Ap−1ApId0…0⋱⋱⋮0Id0⎞⎟ ⎟ ⎟ ⎟ ⎟⎠

is a matrix of dimension pd×pd. Therefore, replacing Ω2 and Ω3 by ^Ω2 and ^Ω3 respectively, and the Ai's by their OLS estimates in the expression of Δ in (13) and (14), we obtain consistent estimators of Λ2 and Λ3. These estimators will be denoted by ^Λ2δ and ^Λ3δ, where the subscript δ refers to the use of the OLS estimate of Δ.

## 4 Adaptive least squares estimation

In the previous section we pointed out that the GLS estimator is generally infeasible in applications. Therefore we consider a feasible weighted estimator obtained using nonparametric estimation of the volatility function. Our approach generalizes the work of Xu and Phillips (2008) to the multivariate case. Let us denote by A⊙B the Hadamard (entrywise) product of two matrices A and B of the same dimension. Define the symmetric matrix

 ˇΣ0t=T∑i=1wti⊙^ui^u′i,

where, as before, the ^ui's are the OLS residuals and the (k,l) element wti(bkl) of the matrix of weights wti is given by

 wti(bkl)=(T∑i=1Kti(bkl))−1Kti(bkl),

with the bandwidth and

 Kti(bkl)={K(t−iTbkl)ift≠i,0ift=i.

The kernel function K(⋅) is bounded, nonnegative and integrates to one. For all k,l the bandwidth bkl belongs to a range BT=[cminbT,cmaxbT] for some constants 0<cmin≤cmax<∞, where bT decreases to zero at a suitable rate that will be specified below.
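A minimal implementation of the kernel estimator above, assuming a Gaussian kernel and a single bandwidth for all the cells (names hypothetical):

```python
import numpy as np

def kernel_variance(resid, b):
    """Leave-one-out kernel smoothing of the residual outer products:
    Sigma0_t = sum_i w_ti u_i u_i' with w_ti proportional to K((t-i)/(T b))
    and K_tt = 0, using a Gaussian kernel and one bandwidth b for all cells."""
    T, d = resid.shape
    outer = resid[:, :, None] * resid[:, None, :]    # u_i u_i', shape (T, d, d)
    idx = np.arange(T)
    out = np.empty((T, d, d))
    for t in range(T):
        w = np.exp(-0.5 * ((t - idx) / (T * b)) ** 2)
        w[t] = 0.0                                   # K_ti = 0 when t = i
        out[t] = np.tensordot(w / w.sum(), outer, axes=1)
    return out
```

Setting the own-observation weight to zero reproduces the leave-one-out convention Kti(bkl)=0 for t=i used in the definition.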

When the same bandwidth is used for all the cells of ˇΣ0t, since the ^ui's are almost surely linearly independent of each other, ˇΣ0t is almost surely positive definite provided T is sufficiently large. A similar estimator is considered by Boswijk and Zu (2007). When several bandwidths are used it is no longer clear that the symmetric matrix ˇΣ0t is positive definite. We therefore propose to use a regularization of ˇΣ0t, that is, to replace it by the positive definite matrix

 ˇΣt={(ˇΣ0t)2+νTId}1/2

where νT, T≥1, is a sequence of positive real numbers decreasing to zero at a suitable rate that will be specified below. Our simulation experience indicates that in applications with moderate and large samples νT could even be set equal to 0.
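The regularized matrix can be computed from the eigendecomposition of the symmetric matrix ˇΣ0t; a sketch with hypothetical names:

```python
import numpy as np

def regularize(S0, nu):
    """Regularized weight matrix ((S0)^2 + nu * I)^{1/2}: from the
    eigendecomposition of the symmetric S0, each eigenvalue lam is mapped
    to sqrt(lam^2 + nu), so the result is positive definite whenever nu > 0."""
    lam, V = np.linalg.eigh(S0)
    return (V * np.sqrt(lam**2 + nu)) @ V.T
```

Since squaring before adding νT Id keeps the eigenvectors of ˇΣ0t, the regularization only lifts small or negative eigenvalues away from zero.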

In practice the bandwidths can be chosen by minimization of a cross-validation criterion like

 T∑t=1∥ˇΣt−^ut^u′t∥2,

with respect to all the bkl's, where ∥⋅∥ is some norm for a square matrix, for instance the Frobenius norm, that is, the square root of the sum of the squares of the matrix elements. Our theoretical results below are obtained uniformly with respect to the bandwidths, and this provides a justification for the common cross-validation bandwidth selection approach in the framework we consider. To the best of our knowledge this justification is new, and it hence complements the procedures of Xu and Phillips (2008) and Boswijk and Zu (2007).
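The cross-validation criterion can be minimized over a grid of candidate bandwidths; a sketch, assuming a Gaussian kernel and a single bandwidth for all cells (names hypothetical):

```python
import numpy as np

def cv_bandwidth(resid, grid):
    """Cross-validated bandwidth: minimize sum_t ||Sigma_t(b) - u_t u_t'||_F^2
    over a grid, where Sigma_t(b) is the leave-one-out Gaussian-kernel smooth
    of the residual outer products (single bandwidth for all cells)."""
    T, d = resid.shape
    outer = resid[:, :, None] * resid[:, None, :]
    idx = np.arange(T)
    best_b, best_cv = grid[0], np.inf
    for b in grid:
        cv = 0.0
        for t in range(T):
            w = np.exp(-0.5 * ((t - idx) / (T * b)) ** 2)
            w[t] = 0.0
            S = np.tensordot(w / w.sum(), outer, axes=1)
            cv += np.sum((S - outer[t]) ** 2)        # squared Frobenius norm
        if cv < best_cv:
            best_b, best_cv = b, cv
    return best_b
```

The leave-one-out weights make the criterion a genuine out-of-sample fit measure, which is what licenses comparing different bandwidths on the same residuals.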

Let us now introduce the following adaptive least squares (ALS) estimator

 ^θALS=ˇΣ−1~X––vec(ˇΣX––),

with

 ˇΣ~X––=T−1T∑t=1~Xt−1~X′t−1⊗ˇΣ−1t,andˇΣX––=T−1T∑t=1ˇΣ−1tXt~X′t−1.
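Putting the pieces together, the two-step ALS procedure can be sketched for a VAR(1) as follows (names hypothetical; a Gaussian kernel, a single bandwidth and no regularization are assumed):

```python
import numpy as np

def var1_als(X, b):
    """Two-step ALS for a VAR(1): (1) OLS residuals, (2) leave-one-out
    Gaussian-kernel estimate of Sigma_t from the residual outer products,
    (3) weighted least squares with the estimated Sigma_t^{-1} as weights.
    Returns the estimate of A_1."""
    T, d = X.shape
    Z, Y = X[:-1], X[1:]
    # Step 1: OLS coefficients and residuals
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    U = Y - Z @ B                                  # (T-1) x d residuals
    # Step 2 + 3: kernel smoothing of u_t u_t' and weighted normal equations
    n = T - 1
    outer = U[:, :, None] * U[:, None, :]
    idx = np.arange(n)
    k = d * d
    M, v = np.zeros((k, k)), np.zeros(k)
    for t in range(n):
        w = np.exp(-0.5 * ((t - idx) / (n * b)) ** 2)
        w[t] = 0.0
        Sinv = np.linalg.inv(np.tensordot(w / w.sum(), outer, axes=1))
        M += np.kron(np.outer(Z[t], Z[t]), Sinv)
        v += np.kron(Z[t][:, None], Sinv) @ Y[t]
    theta = np.linalg.solve(M, v)                  # theta = vec(A_1)
    return theta.reshape(d, d).T
```

The only difference from the infeasible GLS computation is that the kernel estimate of Σt replaces the true volatility matrix.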

Assumption A1’: Suppose that all the conditions in Assumption A1(i) hold true. In addition:

(i) where denotes the smallest eigenvalue of the symmetric matrix .

(ii) for all .

Assumption A2:   (i) The kernel K(⋅) is a bounded density function defined on the real line, nondecreasing on (−∞,0] and nonincreasing on [0,∞). The function K(⋅) is differentiable except at a finite number of points and its derivative is integrable. Moreover, the Fourier transform of K(⋅) is integrable.

(ii) The bandwidths , , are taken in the range with and as , for some .

Assumptions A1' and A2 (ii) are natural extensions to the multivariate framework of the assumptions used in Theorem 2 of Xu and Phillips (2008). The conditions on the kernel function are convenient assumptions satisfied by almost all commonly used kernels. These conditions allow for simpler technical arguments when investigating the rates of convergence uniformly with respect to the bandwidths. The condition on the sequence νT is slightly more restrictive than the one imposed by Xu and Phillips (2008) in the univariate case, and this is the price we pay for obtaining results that hold uniformly over the bandwidths in a given range.

In the sequel, we say that a sequence of random matrices indexed by the bandwidths is op(1) uniformly with respect to (w.r.t.) the bandwidths as T→∞ if the supremum of its norm over the bandwidth range converges to zero in probability. The following proposition gives the asymptotic behavior of the adaptive estimators uniformly w.r.t. the bandwidths.

**Proposition 4.1**

Under A1’ and A2 and provided , uniformly w.r.t. as

 ˇΛ1:=ˇΣ~X––=Λ1+op(1),
 ˇΩ1:=T−1T∑t=1ˇΣt⊗ˇΣ−1t=Ω1+op(1)

and

 √T(^θALS−^θGLS)=op(1).

Proposition 4.1 shows that the ALS and GLS estimators have the same asymptotic behavior, that is, the ALS estimator is consistent and asymptotically normal as soon as the GLS estimator has these properties. The results remain true even if the bandwidths are data dependent.

On the other hand, similarly to (13) and (14),

 vec(Λ1)={I(pd2)2−(Δ⊗Id)⊗2}−1vec(Ω10d2×(p−1)d20(p−1)d2×d20(p−1)d2×(p−1)d2). (15)

Then we also obtain an alternative consistent estimator (uniformly w.r.t. the bandwidths) of Λ1 by replacing Ω1 by ˇΩ1, and the Ai's by their ALS estimates, in the expression of Δ in (15). This estimator will be denoted by ˇΛ1δ.

## 5 Application to the test of the linear Granger causality in mean

In this section we propose tests for linear causality in mean in our framework using the OLS and the adaptive approaches. Let us consider the subvectors X1t and X2t such that Xt=(X′1t,X′2t)′, where X1t is of dimension d1, X2t is of dimension d2, and d1+d2=d. It is said that X2t does not cause X1t linearly in mean if we have

 EL(X1t∣X1t−1,…)=EL(X1t∣X1t−1,X2t−1,…),

where EL(⋅∣⋅) denotes the linear conditional expectation. In our framework, since we assumed that (ut) is a martingale difference, the linear predictor is optimal. Therefore we have EL(X1t∣X1t−1,X2t−1,…)=E(X1t∣X1t−1,X2t−1,…), where E(⋅∣⋅) is the conditional expectation, and we simply refer to linear Granger causality in mean as Granger causality in mean in the sequel. We test the null hypothesis that X2t does not Granger cause X1t in mean. It is well known that this amounts to testing the null hypothesis that A12,i=0 for all i∈{1,…,p} versus the alternative that A12,i≠0 for some i, where the A12,i's are the matrices given by the first d1 rows and the last d2 columns of the Ai's (see e.g. Lütkepohl (2005)). Define the block diagonal matrix R with p diagonal blocks equal to the matrix C given by

 C=⎛⎜ ⎜ ⎜⎝0d1×d1dId10d1×d200⋱⋱000Id10d1×d2⎞⎟ ⎟ ⎟⎠.

The matrix R is such that, under the null hypothesis, Rθ0=0 with 0 the null vector of dimension pd1d2. Therefore the tested hypotheses can be written as

 H0:Rθ0=0vs.H1:Rθ0≠0.

In this paper we focus on Wald type tests, because they are the tests most commonly used by practitioners. We first consider the ALS estimator to build tests for Granger causality in mean. Let us introduce the adaptive Wald test statistics

 QALS=T^θ′ALSR′(RˇΛ−11R′)−1R^θALSandQδALS=T^θ′ALSR′(RˇΛ−11δR′)−1R^θALS.

The following proposition gives the asymptotic distribution of the ALS test statistics as a simple consequence of Proposition 4.1. We say that a sequence of random variables, possibly depending on the bandwidths, converges in law to a chi-square distribution uniformly w.r.t. the bandwidths as T→∞ if there exists a sequence of random variables, independent of the bandwidths, that converges in law to the chi-square distribution and differs from the original sequence by an op(1) term uniformly w.r.t. the bandwidths.

**Proposition 5.1**

Under the assumptions of Proposition 4, uniformly w.r.t. as

 QALS⇒χ2pd1d2, (16)
 QδALS⇒χ2pd1d2 (17)

and

 QmaxALS=max{QALS,QδALS}⇒χ2pd1d2. (18)

Based on Proposition 5.1 we propose the following procedure for testing Granger causality in mean: for a fixed asymptotic level α, reject the null hypothesis if QmaxALS>χ2pd1d2(1−α), where χ2pd1d2(1−α) is the (1−α) quantile of the χ2pd1d2 law. Similar procedures can be defined using QALS or QδALS instead of QmaxALS, but the latter statistic is expected to yield a more powerful test. The corresponding tests based on the ALS estimation will be denoted with obvious notations.
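All the Wald statistics considered here share the generic form T(Rθ̂)′(R V̂ R′)⁻¹(Rθ̂), differing only in the plug-in variance estimate V̂; a sketch with hypothetical names and purely illustrative numbers:

```python
import numpy as np

def wald_statistic(theta, avar, R, T):
    """Generic Wald statistic  Q = T (R theta)' (R Avar R')^{-1} (R theta).
    avar estimates the asymptotic variance of the estimator (e.g. the inverse
    of check_Lambda_1 for ALS). Under H0: R theta_0 = 0, Q is asymptotically
    chi-square with rank(R) degrees of freedom."""
    r = R @ theta
    return T * r @ np.linalg.solve(R @ avar @ R.T, r)

# Illustration: one zero restriction, 5% critical value chi2_{1,0.95} = 3.841
theta = np.array([0.5, 0.02])          # hypothetical estimates
R = np.array([[0.0, 1.0]])             # tests theta_2 = 0
Q = wald_statistic(theta, np.eye(2), R, T=400)
reject = Q > 3.841
```

Here Q = 400 × 0.02² = 0.16, so the restriction is not rejected at the 5% level.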

Let us now consider the following Wald test statistics based on the OLS estimation

 QOLS=T^θ′OLSR′(R^Λ−13^Λ2^Λ−13R′)−1R^θOLS,
 QδOLS=T^θ′OLSR′(R^Λ−13δ^Λ2δ^Λ−13δR′)−1R^θOLS,

and the commonly used standard Wald test statistic

 QS=T^θ′OLSR′(R^J−1R′)−1R^θOLS,with^J={T−1T∑t=1~Xt−1~X′t−1}⊗^Ω−13.

The following proposition gives the asymptotic behavior of the OLS and standard test statistics.

**Proposition 5.2**

Under Assumption A1 we have, as T→∞,

 QOLS⇒χ2pd1d2, (19)
 QδOLS⇒χ2pd1d2, (20)
 QmaxOLS=max{QOLS,QδOLS}⇒χ2pd1d2, (21)

and

 QS⇒Z(δ):=pd1d2∑i=1κiZ2i, (22)

where the Zi's are independent N(0,1) variables and δ=(κ1,…,κpd1d2)′ is the vector of the eigenvalues of the matrix

 Ψ=(RJ−1R′)−12(RΛ−13Λ2Λ−13R′)(RJ−1R′)−12, (23)

with

 J=∫10∞∑i=0{~ψi(1p×p⊗Σ(r))~ψ′i}dr⊗Ω−13.

It is easy to see from (10) and (12) that ^J is a consistent estimator of J. The results (19), (20) and (22) are direct consequences of Propositions 3.1 and 3.2, so that their proofs are omitted. In the Appendix we only give the proof of (21). Similarly to the tests built using the ALS approach, tests using the results (19), (20) and (21) can be proposed.

When the errors are homoscedastic (Σt=Σu for all t), we obtain J−1=Λ−13Λ2Λ−13. Recall that in this case we also have Λ−11=Λ−13Λ2Λ−13, so that Ψ reduces to the identity matrix and hence we retrieve the standard result QS⇒χ2pd1d2. However the κi's in (22) can be quite different from 1 if the volatility of the errors is not constant, as illustrated in the following example.

**Example 5.1**

Consider the bivariate VAR(1) process with true parameter θ0=0. Such a model may be used to test Granger causality in mean between the components of an uncorrelated process. As in Example 3.1, let us take

 Σ(r)=(Σ1(r)00Σ2(r)).

Suppose that one is interested in testing whether X2t Granger causes X1t in mean. Then pd1d2=1 and the matrix Ψ is a scalar such that

 Ψ=(∫10Σ1(r)dr)−1×(∫10Σ2(r)dr)−1×∫10Σ