# Nonparametric estimation of the distribution of the autoregressive coefficient from panel random-coefficient AR(1) data

Vilnius University, Faculty of Mathematics and Informatics, Naugarduko 24, LT-03225 Vilnius, Lithuania

Vilnius University, Institute of Mathematics and Informatics, Akademijos 4, LT-08663 Vilnius, Lithuania

Université de Nantes, Laboratoire de Mathématiques Jean Leray, 44322 Nantes Cedex 3, France

ANJA INRIA Rennes Bretagne Atlantique

###### Abstract

We discuss nonparametric estimation of the distribution function $\Phi$ of the autoregressive coefficient from a panel of $N$ random-coefficient AR(1) series, each of length $n$, by the empirical distribution function of the lag 1 sample autocorrelations of the individual AR(1) processes. Consistency and asymptotic normality of the empirical distribution function and of a class of kernel density estimators are established under some regularity conditions on $\Phi$ as $N$ and $n$ increase to infinity. The Kolmogorov–Smirnov goodness-of-fit test for simple and composite hypotheses of a Beta-distributed autoregressive coefficient is discussed. A simulation study of goodness-of-fit testing compares the finite-sample performance of our nonparametric estimator to that of its parametric analogue discussed in [1].

Keywords: random-coefficient autoregression, empirical process, Kolmogorov-Smirnov statistic, goodness-of-fit testing, kernel density estimator, panel data

2010 MSC: 62G10, 62M10, 62G07.

## 1 Introduction

Panel data can describe a large population of heterogeneous units/agents which evolve over time, e.g., households, firms, industries, countries, stock market indices. In this paper we consider a panel where each individual unit evolves over time according to an order-one random-coefficient autoregressive model (RCAR(1)). It is well known that aggregation of specific RCAR(1) models can explain the long memory phenomenon, which is often empirically observed in economic time series (see [9] for instance). More precisely, consider a panel $\{X_i(t),\, t \in \mathbb{Z}\}$, $i = 1, \dots, N$, where each $X_i$ is an RCAR(1) process with noise $\{\varepsilon_i(t)\}$ and random coefficient $a_i$, whose autocovariance

$$\sigma_X(t) := \mathrm{E}\,X_i(0)X_i(t) = \sigma^2 \int_{-1}^{1} \frac{x^{|t|}}{1-x^2}\,\mathrm{d}\Phi(x), \qquad \sigma^2 := \mathrm{E}\,\varepsilon_i^2(0), \tag{1.1}$$

is determined by the distribution function $\Phi$ of the autoregressive coefficient. Granger [9] showed, for a specific Beta-type distribution $\Phi$, that the contemporaneous aggregation of independent processes $X_i$, $i = 1, \dots, N$, results in a stationary Gaussian long memory process $\mathfrak{X}$, i.e.,

$$N^{-1/2} \sum_{i=1}^{N} X_i(t) \ \xrightarrow{\mathrm{fdd}} \ \mathfrak{X}(t), \qquad N \to \infty, \tag{1.2}$$

where the autocovariance $\sigma_{\mathfrak{X}}(t)$ decays slowly as $t \to \infty$, so that $\sum_{t \in \mathbb{Z}} |\sigma_{\mathfrak{X}}(t)| = \infty$.

A natural statistical problem is recovering the distribution $\Phi$ (the frequency of $a_i$ across the population of individual AR(1) 'micro-agents') from the aggregated sample $\{\mathfrak{X}(t)\}$. This problem was treated in [5, 6, 12]. Some related results were obtained in [4, 10, 11]. Albeit nonparametric, the estimators in [5, 12] involve an expansion of the density $\phi = \Phi'$ in an orthogonal polynomial basis and are sensitive to the choice of the tuning parameter (the number of polynomials), being limited in practice to very smooth densities $\phi$. This difficulty in the estimation of $\Phi$ from aggregated data is not surprising, since aggregation per se inflicts a considerable loss of information about the evolution of the individual 'micro-agents'.

Clearly, if the available data comprise the evolutions $\{X_i(t),\, 1 \le t \le n\}$, $i = 1, \dots, N$, of all individual 'micro-agents' (the panel data), we may expect a much more accurate estimate of $\Phi$. Robinson [15] constructed an estimator for the moments of $\Phi$ using sample autocovariances of the $X_i$ and derived its asymptotic properties as $N \to \infty$, whereas the length $n$ of each sample remains fixed. Beran et al. [1] discussed estimation of two-parameter Beta densities from panel AR(1) data using maximum likelihood estimators, with the unobservable $a_i$ replaced by the sample lag 1 autocorrelation coefficient of $X_i$ (see Section 6), and derived the asymptotic normality together with some other properties of these estimators as $N$ and $n$ tend to infinity.

The present paper studies nonparametric estimation of $\Phi$ from panel random-coefficient AR(1) data using the empirical distribution function

$$\widehat{\Phi}_{N,n}(x) := \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}(\widehat{a}_{i,n} \le x), \qquad x \in [-1,1], \tag{1.3}$$

where $\widehat{a}_{i,n}$ is the lag 1 sample autocorrelation coefficient of $X_i$, $i = 1, \dots, N$ (see (3.3) below). We also discuss kernel estimation of the density $\phi = \Phi'$ based on a smoothed version of (1.3). We assume that the individual AR(1) processes are driven by identically distributed shocks containing both common and idiosyncratic (independent) components. Consistency and asymptotic normality of the above estimators as $N, n \to \infty$ are derived under some regularity conditions on $\Phi$. Our results can be applied to test the goodness-of-fit of the distribution $\Phi$ to a given hypothesized distribution (e.g., a Beta distribution) using the Kolmogorov–Smirnov statistic, and to construct confidence intervals for $\Phi$ or $\phi$.

The paper is organized as follows. Section 2 obtains the rate of convergence of the sample autocorrelation coefficient to $a$, in probability, a result of independent interest. Section 3 discusses the weak convergence of the empirical process in (1.3) to a generalized Brownian bridge. The Kolmogorov–Smirnov goodness-of-fit test for simple and composite hypotheses of a Beta-distributed autoregressive coefficient is discussed in Section 4. In Section 5 we study kernel density estimators of $\phi$. We show that these estimators are asymptotically normally distributed and that their mean integrated square error tends to zero. The simulation study of Section 6 compares the empirical performance of (1.3) and of the parametric estimator of [1] in goodness-of-fit testing for $\Phi$ under a Beta null distribution. The proofs of auxiliary statements can be found in the Appendix.

In what follows, $C$ stands for a positive constant whose precise value is unimportant and which may change from line to line. We write $\xrightarrow{p}$ and $\xrightarrow{\mathrm{fdd}}$ for convergence in probability and convergence of (finite-dimensional) distributions, respectively, whereas $\Rightarrow$ denotes weak convergence in the space $D[-1,1]$ with the supremum metric.

## 2 Estimation of random autoregressive coefficient

Consider an RCAR(1) process

$$X(t) = a X(t-1) + \varepsilon(t), \qquad t \in \mathbb{Z}, \tag{2.1}$$

where the innovations admit the following decomposition:

$$\varepsilon(t) = b\,\eta(t) + c\,\xi(t), \tag{2.2}$$

where the random sequences $\{\eta(t)\}$, $\{\xi(t)\}$ and the random coefficients $b$, $c$ satisfy the following conditions:

Assumption A1. $\{\eta(t),\, t \in \mathbb{Z}\}$ are independent identically distributed (i.i.d.) random variables (r.v.s) with $\mathrm{E}\eta(t) = 0$, $\mathrm{E}\eta^2(t) = 1$, $\mathrm{E}|\eta(t)|^{2p} < \infty$ for some $p > 1$.

Assumption A2. $\{\xi(t),\, t \in \mathbb{Z}\}$ are i.i.d. r.v.s with $\mathrm{E}\xi(t) = 0$, $\mathrm{E}\xi^2(t) = 1$, $\mathrm{E}|\xi(t)|^{2p} < \infty$ for the same $p$ as in A1.

Assumption A3. $b$ and $c$ are possibly dependent r.v.s such that $\mathrm{E}|b|^{2p} < \infty$ and $\mathrm{E}|c|^{2p} < \infty$, with $p$ as in A1.

Assumption A4. $a$ is a r.v. with a distribution function (d.f.) $\Phi$ supported on $[-1,1]$ and satisfying

$$\int_{-1}^{1} \frac{\mathrm{d}\Phi(x)}{1-x^2} < \infty. \tag{2.3}$$

Assumption A5. $\{\eta(t)\}$, $\{\xi(t)\}$, $a$ and the vector $(b, c)$ are mutually independent.

###### Remark 2.1

Under conditions A1–A5, a unique strictly stationary solution of (2.1) with finite variance exists and is written as

$$X(t) = \sum_{k=0}^{\infty} a^k\, \varepsilon(t-k). \tag{2.4}$$

Clearly, $\mathrm{E}X(t) = 0$ and $\mathrm{E}X^2(t) = (\mathrm{E}b^2 + \mathrm{E}c^2)\, \mathrm{E}\big[(1-a^2)^{-1}\big] < \infty$. Note that (2.3) is equivalent to

$$\int_{-1}^{1} \frac{\mathrm{d}\Phi(x)}{1-|x|} < \infty,$$

since $1-|x| \le 1-x^2 \le 2(1-|x|)$ for $|x| \le 1$.
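In practice, a path of the stationary solution (2.4) can be generated by running the recursion (2.1) from an arbitrary starting value and discarding a burn-in period. The following sketch is ours, not the paper's; the shocks are taken Gaussian purely for illustration, and the function name and burn-in length are arbitrary choices:

```python
import numpy as np

def simulate_rcar1(n, a, b=0.0, c=1.0, burn_in=1000, rng=None):
    """Simulate n observations of X(t) = a*X(t-1) + eps(t) with
    eps(t) = b*eta(t) + c*xi(t), cf. (2.1)-(2.2).  eta and xi are
    i.i.d. standard normal here; a burn-in is discarded so that the
    returned sample is approximately stationary."""
    rng = np.random.default_rng(rng)
    eps = b * rng.standard_normal(n + burn_in) + c * rng.standard_normal(n + burn_in)
    x = np.empty(n + burn_in)
    x[0] = eps[0]
    for t in range(1, n + burn_in):
        x[t] = a * x[t - 1] + eps[t]
    return x[burn_in:]
```

Note that this simulates a single unit; for a panel as in Section 3 the common shocks $\eta(t)$ would have to be drawn once and shared across all units.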

For an observed sample $\{X(t),\, 1 \le t \le n\}$ from the stationary process in (2.4), define the sample mean $\bar{X} := n^{-1} \sum_{t=1}^{n} X(t)$ and the sample lag 1 autocorrelation coefficient

$$\widehat{a}_n := \frac{\sum_{t=2}^{n} (X(t) - \bar{X})(X(t-1) - \bar{X})}{\sum_{t=1}^{n} (X(t) - \bar{X})^2}. \tag{2.5}$$

Note that the estimator in (2.5) does not exceed 1 in absolute value a.s. by the Cauchy–Schwarz inequality. Moreover, it is invariant to shift and scale transformations of $X(t)$ in (2.1), i.e., we can replace $X(t)$ by $\mu + \sigma X(t)$ with some (unknown) $\mu \in \mathbb{R}$ and $\sigma > 0$.
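In code, (2.5) is a one-line ratio of the centred lag-1 sample autocovariance to the full sample variance; the helper below is our own sketch and also makes the shift/scale invariance noted above easy to check numerically:

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag 1 autocorrelation coefficient (2.5).  Dividing the
    centred lag-1 cross product by the full sum of squares keeps the
    value in [-1, 1] by the Cauchy-Schwarz inequality."""
    u = np.asarray(x, dtype=float)
    u = u - u.mean()
    return float(np.dot(u[1:], u[:-1]) / np.dot(u, u))
```

Replacing `x` by `mu + sigma * x` leaves the value unchanged, since both numerator and denominator are computed from the centred series and scale by the same factor.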

###### Proposition 2.1

Under Assumptions A1–A5, for any $n \ge 1$ and $\gamma > 0$, it holds that

$$\mathrm{P}\big(|\widehat{a}_n - a| > \gamma\big) \le C\, \gamma^{-2} n^{-1/2},$$

with $C$ independent of $n$ and $\gamma$.

Proof. See Appendix.

Assume now that the d.f. $\Phi$ satisfies the following Hölder condition:

Assumption A6. There exist constants $L > 0$ and $0 < \lambda \le 1$ such that

$$|\Phi(x) - \Phi(y)| \le L\,|x-y|^{\lambda}, \qquad x, y \in [-1,1]. \tag{2.6}$$

Consider the d.f. of $\widehat{a}_n$:

$$G_n(x) := \mathrm{P}(\widehat{a}_n \le x), \qquad x \in [-1,1]. \tag{2.7}$$

###### Corollary 2.2

Let Assumptions A1–A6 hold. Then, as $n \to \infty$,

$$\sup_{x \in [-1,1]} |G_n(x) - \Phi(x)| \le C\, n^{-\lambda/(2(\lambda+2))} \to 0.$$

## 3 Asymptotics of the empirical distribution function

Consider random-coefficient AR(1) processes $X_i = \{X_i(t),\, t \in \mathbb{Z}\}$, $i = 1, \dots, N$, which are stationary solutions to

$$X_i(t) = a_i X_i(t-1) + \varepsilon_i(t), \qquad t \in \mathbb{Z}, \tag{3.1}$$

with innovations $\varepsilon_i(t)$ having the same structure as in (2.2):

$$\varepsilon_i(t) = b_i\,\eta(t) + c_i\,\xi_i(t), \tag{3.2}$$

with a common shock sequence $\{\eta(t)\}$ and idiosyncratic shock sequences $\{\xi_i(t)\}$.

More precisely, we make the following assumption:

Assumption B. $\{\eta(t)\}$ satisfies A1; $(a_i, b_i, c_i, \{\xi_i(t)\})$, $i = 1, 2, \dots$, are independent copies of $(a, b, c, \{\xi(t)\})$, which satisfy Assumptions A2–A5. (Note that we assume A5 for any $i$.)

###### Remark 3.1

The individual processes $X_i$ have covariance long memory, i.e. $\sum_{t \in \mathbb{Z}} |\sigma_X(t)| = \infty$, if condition (2.3) holds while $\int_{-1}^{1} (1-x)^{-1}(1-x^2)^{-1}\, \mathrm{d}\Phi(x) = \infty$, which is compatible with Assumption B. The same is true of the limit aggregated process in (1.2), which arises when the common component is absent. On the other hand, in the presence of the common component, long memory in the limit aggregated process arises when the individual processes have infinite variance and condition (2.3) fails, see [14].

Define the sample mean $\bar{X}_i := n^{-1} \sum_{t=1}^{n} X_i(t)$, the corresponding sample lag 1 autocorrelation coefficient

$$\widehat{a}_{i,n} := \frac{\sum_{t=2}^{n} (X_i(t) - \bar{X}_i)(X_i(t-1) - \bar{X}_i)}{\sum_{t=1}^{n} (X_i(t) - \bar{X}_i)^2}, \qquad i = 1, \dots, N, \tag{3.3}$$

and the empirical d.f.

$$\widehat{\Phi}_{N,n}(x) := \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}(\widehat{a}_{i,n} \le x), \qquad x \in [-1,1]. \tag{3.4}$$

Recall that (3.4) is a nonparametric estimate of the d.f. $\Phi(x) = \mathrm{P}(a \le x)$ from the observed panel data $\{X_i(t),\, 1 \le t \le n,\ i = 1, \dots, N\}$. In the following theorem we show that $\widehat{\Phi}_{N,n}$ is an asymptotically unbiased estimator of $\Phi$ as $N$ and $n$ both tend to infinity, and prove the weak convergence of the corresponding empirical process.
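Given the $N$ estimated coefficients, the estimator (3.4) is a plain empirical distribution function; a vectorised sketch (our own helper, not from the paper) is:

```python
import numpy as np

def empirical_df(a_hat, x):
    """Empirical d.f. (3.4): for each evaluation point in x, the
    fraction of estimated lag 1 autocorrelation coefficients a_hat
    that do not exceed it."""
    a_hat = np.asarray(a_hat, dtype=float)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # broadcast: rows index evaluation points, columns index units
    return (a_hat[None, :] <= x[:, None]).mean(axis=1)
```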

###### Theorem 3.1

Let Assumptions B and A6 hold. Then, as $N, n \to \infty$,

$$\sup_{x \in [-1,1]} \big|\mathrm{E}\widehat{\Phi}_{N,n}(x) - \Phi(x)\big| \le C\, n^{-\lambda/(2(\lambda+2))} \to 0. \tag{3.5}$$

If, moreover,

$$N\, n^{-\lambda/(\lambda+2)} \to 0, \tag{3.6}$$

then

$$\sqrt{N}\,\big(\widehat{\Phi}_{N,n}(x) - \Phi(x)\big) \ \Rightarrow\ B(\Phi(x)), \tag{3.7}$$

where $B$ is a Brownian bridge on $[0,1]$.

Proof. Note that $\widehat{a}_{i,n}$, $i = 1, \dots, N$, are identically distributed; in particular, $\mathrm{E}\widehat{\Phi}_{N,n}(x) = \mathrm{P}(\widehat{a}_{1,n} \le x) = G_n(x)$ with $G_n$ defined in (2.7). Hence, (3.5) follows immediately from Corollary 2.2.

To prove the second statement of the theorem, we approximate $\widehat{\Phi}_{N,n}$ by the empirical d.f.

$$\widetilde{\Phi}_{N}(x) := \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}(a_i \le x)$$

of the i.i.d. r.v.s $a_1, \dots, a_N$. We have $\sqrt{N}(\widehat{\Phi}_{N,n}(x) - \Phi(x)) = \sqrt{N}(\widetilde{\Phi}_{N}(x) - \Phi(x)) + R_{N,n}(x)$ with $R_{N,n}(x) := \sqrt{N}(\widehat{\Phi}_{N,n}(x) - \widetilde{\Phi}_{N}(x))$. Since A6 guarantees the continuity of $\Phi$, it holds that

$$\sqrt{N}\,\big(\widetilde{\Phi}_{N}(x) - \Phi(x)\big) \ \Rightarrow\ B(\Phi(x))$$

by the classical Donsker theorem. Then (3.7) follows once we prove $\sup_x |R_{N,n}(x)| \xrightarrow{p} 0$. By definition,

$$R_{N,n}(x) = R^{+}_{N,n}(x) - R^{-}_{N,n}(x),$$

where $R^{+}_{N,n}(x) := N^{-1/2} \sum_{i=1}^{N} \mathbf{1}(\widehat{a}_{i,n} \le x < a_i)$, $R^{-}_{N,n}(x) := N^{-1/2} \sum_{i=1}^{N} \mathbf{1}(a_i \le x < \widehat{a}_{i,n})$, and, for any $\gamma_n > 0$,

$$\mathbf{1}(\widehat{a}_{i,n} \le x < a_i) \le \mathbf{1}(|\widehat{a}_{i,n} - a_i| > \gamma_n) + \mathbf{1}(x < a_i \le x + \gamma_n).$$

For $R^{+}_{N,n}$ we have

$$0 \le R^{+}_{N,n}(x) \le R'_{N,n} + R''_{N,n}(x), \qquad R'_{N,n} := N^{-1/2} \sum_{i=1}^{N} \mathbf{1}(|\widehat{a}_{i,n} - a_i| > \gamma_n), \qquad R''_{N,n}(x) := N^{-1/2} \sum_{i=1}^{N} \mathbf{1}(x < a_i \le x + \gamma_n).$$

(Note that $R'_{N,n}$ does not depend on $x$.) By Proposition 2.1, we obtain

$$\mathrm{E} R'_{N,n} \le \sqrt{N}\, \mathrm{P}\big(|\widehat{a}_{1,n} - a_1| > \gamma_n\big) \le C \sqrt{N}\, \gamma_n^{-2} n^{-1/2},$$

which tends to $0$ under (3.6) when $\gamma_n$ is chosen as $\gamma_n = n^{-1/(2(\lambda+2))}$. Next, by Assumption A6,

$$\mathrm{E} R''_{N,n}(x) = \sqrt{N}\, \mathrm{P}(x < a_1 \le x + \gamma_n) \le C \sqrt{N}\, \gamma_n^{\lambda}.$$

The above choice of $\gamma_n$ implies $\sup_x \mathrm{E} R''_{N,n}(x) \to 0$, whereas $R''_{N,n}$ vanishes in the uniform metric in probability (see Lemma A.2 in the Appendix). Since $R^{-}_{N,n}$ is analogous to $R^{+}_{N,n}$, this proves the theorem.

###### Remark 3.2

(3.6) implies that asymptotically $n$ grows faster than $N^{(\lambda+2)/\lambda}$. Note that $(\lambda+2)/\lambda \ge 3$ for any $0 < \lambda \le 1$, and $(\lambda+2)/\lambda \to \infty$ as $\lambda \to 0$. We may conclude that Theorem 3.1, as well as the other results of this paper, applies to long panels with $n$ increasing much faster than $N$, except maybe for the limiting case $\lambda = 1$, for which (3.6) reads $N = o(n^{1/3})$. The main reason for this conclusion is that the $a_i$ need to be accurately estimated by (3.3) in order that $\widehat{\Phi}_{N,n}$ behave similarly to the empirical d.f. based on the unobserved autocorrelation coefficients $a_1, \dots, a_N$.

## 4 Goodness-of-fit testing

Theorem 3.1 can be used for testing goodness-of-fit. In the case of a simple hypothesis, we test the null $H_0: \Phi = \Phi_0$ vs. the alternative $H_1: \Phi \ne \Phi_0$, with $\Phi_0$ a given hypothetical distribution satisfying the Hölder condition in (2.6). Accordingly, the corresponding Kolmogorov–Smirnov (KS) test rejecting $H_0$ whenever

$$\sqrt{N}\, \sup_{x \in [-1,1]} \big|\widehat{\Phi}_{N,n}(x) - \Phi_0(x)\big| > c_{\gamma} \tag{4.1}$$

has asymptotic size $\gamma$ provided $N, n$ satisfy the assumptions for (3.7) in Theorem 3.1. (Here, $c_{\gamma}$ is the upper $\gamma$-quantile of the Kolmogorov distribution.) However, the goodness-of-fit test in (4.1) requires the knowledge of the parameters of the hypothesized model, which is typically not a very realistic situation. Below, we consider testing a composite hypothesis using the Kolmogorov–Smirnov statistic with estimated parameters. The parameters will be estimated by the method of moments.
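For illustration, the simple-hypothesis test (4.1) can be implemented with the limiting Kolmogorov distribution evaluated through its classical series expansion; the helper names below are ours, and the series truncation and bisection tolerance are pragmatic choices:

```python
import numpy as np

def kolmogorov_cdf(x, terms=100):
    """Limiting Kolmogorov d.f.: K(x) = 1 - 2 sum_{k>=1} (-1)^(k-1) exp(-2 k^2 x^2)."""
    if x <= 0:
        return 0.0
    k = np.arange(1, terms + 1)
    return float(1.0 - 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * k**2 * x**2)))

def kolmogorov_upper_quantile(alpha, lo=0.1, hi=5.0, tol=1e-10):
    """Upper alpha-quantile of the Kolmogorov distribution, by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kolmogorov_cdf(mid) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ks_simple_test(a_hat, Phi0, alpha=0.05):
    """Reject H0: Phi = Phi0 when sqrt(N) sup_x |hatPhi(x) - Phi0(x)|
    exceeds the upper alpha-quantile; the supremum over x is attained
    at the jump points of the empirical d.f."""
    a = np.sort(np.asarray(a_hat, dtype=float))
    N = a.size
    F0 = np.array([Phi0(v) for v in a])
    D = max(np.max(np.arange(1, N + 1) / N - F0), np.max(F0 - np.arange(N) / N))
    stat = np.sqrt(N) * D
    return stat, stat > kolmogorov_upper_quantile(alpha)
```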

Write $\theta := (\mu_1, \mu_2)$ and $\widehat{\theta}_{N,n} := (\widehat{\mu}_{1}, \widehat{\mu}_{2})$, where

$$\mu_k := \mathrm{E}\, a^k, \qquad \widehat{\mu}_{k} := \frac{1}{N} \sum_{i=1}^{N} \widehat{a}_{i,n}^{\,k}, \qquad k = 1, 2.$$

###### Proposition 4.1

Let Assumption B hold and $N, n \to \infty$ so that $N = o(n^{1/3})$. Then

$$\sqrt{N}\, \big(\widehat{\theta}_{N,n} - \theta\big) \ \xrightarrow{\mathrm{d}}\ N(0, \Sigma), \tag{4.2}$$

where $\Sigma$ is the covariance matrix of the vector $(a, a^2)$, i.e. $\Sigma_{k\ell} = \mu_{k+\ell} - \mu_k \mu_\ell$, $k, \ell = 1, 2$.

Proof. Write

$$\sqrt{N}\, \big(\widehat{\theta}_{N,n} - \theta\big) = \sqrt{N}\, \Big(\frac{1}{N} \sum_{i=1}^{N} (a_i, a_i^2) - \theta\Big) + r_{N,n},$$

where $r_{N,n} := N^{-1/2} \sum_{i=1}^{N} \big(\widehat{a}_{i,n} - a_i,\ \widehat{a}_{i,n}^2 - a_i^2\big)$. We have $\sqrt{N}\, \big(N^{-1} \sum_{i=1}^{N} (a_i, a_i^2) - \theta\big) \xrightarrow{\mathrm{d}} N(0, \Sigma)$ as $N \to \infty$ by the multivariate central limit theorem. On the other hand, $\|r_{N,n}\| \xrightarrow{p} 0$ follows from $\mathrm{E}\|r_{N,n}\| \le C \sqrt{N}\, \mathrm{E}|\widehat{a}_{1,n} - a_1| \le C \sqrt{N}\, (\gamma_n + \gamma_n^{-2} n^{-1/2})$ and Proposition 2.1 with $\gamma_n = n^{-1/6}$, proving the proposition.

###### Remark 4.1

Robinson [15, Theorem 7] discussed a different estimator of $\theta$, which was proved to be asymptotically normal for fixed $n$ as $N \to \infty$, in contrast to ours. However, his result holds in the case of purely idiosyncratic innovations only and under stronger assumptions on $\Phi$ than in Proposition 4.1, which do not allow for long memory.

Consider testing the composite null hypothesis that $\Phi$ belongs to the family of Beta d.f.s $\{\Phi_{(\alpha,\beta)},\ \alpha > 0,\ \beta > 0\}$ versus the alternative that it does not, where $\Phi_{(\alpha,\beta)}$ has density

$$\phi_{(\alpha,\beta)}(x) := \frac{1}{B(\alpha,\beta)}\, x^{\alpha-1} (1-x)^{\beta-1}, \qquad x \in (0,1), \tag{4.3}$$

and $B(\alpha,\beta) := \int_0^1 x^{\alpha-1}(1-x)^{\beta-1}\, \mathrm{d}x$ is the Beta function. The $k$th moment of $\phi_{(\alpha,\beta)}$ is given by

$$\mu_k = \frac{B(\alpha+k, \beta)}{B(\alpha,\beta)} = \frac{\alpha (\alpha+1) \cdots (\alpha+k-1)}{(\alpha+\beta)(\alpha+\beta+1) \cdots (\alpha+\beta+k-1)}.$$

The parameters $(\alpha, \beta)$ can be found from the first two moments as

$$\alpha = \frac{\mu_1 (\mu_1 - \mu_2)}{\mu_2 - \mu_1^2}, \qquad \beta = \frac{(1 - \mu_1)(\mu_1 - \mu_2)}{\mu_2 - \mu_1^2}. \tag{4.4}$$

The moment-based estimator $(\widehat{\alpha}, \widehat{\beta})$ of $(\alpha, \beta)$ is obtained by replacing $(\mu_1, \mu_2)$ in (4.4) by its estimator $\widehat{\theta}_{N,n}$. The consistency and asymptotic normality of this estimator follow by the Delta method from Proposition 4.1; see Corollary 4.2 below, where we need the condition $\beta > 1$ for $\Phi_{(\alpha,\beta)}$ to satisfy Assumptions A4 and A6.
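Expressed in code, the moment inversion (4.4) and its plug-in version are straightforward; the sketch below is our own and is valid only when the (plug-in) moments are compatible with a Beta law, i.e. $0 < \mu_1 < 1$ and $\mu_2 - \mu_1^2 > 0$:

```python
def beta_moments_to_params(m1, m2):
    """Invert the first two Beta moments as in (4.4):
    alpha = m1 (m1 - m2) / (m2 - m1^2),
    beta  = (1 - m1)(m1 - m2) / (m2 - m1^2)."""
    var = m2 - m1 * m1
    if var <= 0.0 or not 0.0 < m1 < 1.0:
        raise ValueError("moments incompatible with a Beta distribution")
    common = (m1 - m2) / var
    return m1 * common, (1.0 - m1) * common
```

For example, Beta(2, 3) has $\mu_1 = 0.4$ and $\mu_2 = 0.2$, and the inversion recovers $(\alpha, \beta) = (2, 3)$.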

###### Corollary 4.2

Let Assumption B hold with $\Phi = \Phi_{(\alpha,\beta)}$, $\beta > 1$, and let $N, n \to \infty$ so that (3.6) holds with $\lambda = \min(\alpha, 1)$. Then

$$(\widehat{\alpha}, \widehat{\beta}) \ \xrightarrow{p}\ (\alpha, \beta) \tag{4.5}$$

and

$$\sqrt{N}\, \big((\widehat{\alpha}, \widehat{\beta}) - (\alpha, \beta)\big) \ \xrightarrow{\mathrm{d}}\ N\big(0,\ \nabla g(\theta)\, \Sigma\, \nabla g(\theta)^{\top}\big), \tag{4.6}$$

where $g$ denotes the mapping $(\mu_1, \mu_2) \mapsto (\alpha, \beta)$ in (4.4).

###### Corollary 4.3

Let the assumptions of Corollary 4.2 hold. Then, under the null hypothesis $\Phi = \Phi_{(\alpha,\beta)}$ with $\beta > 1$,

$$\sqrt{N}\, \sup_{x} \big|\widehat{\Phi}_{N,n}(x) - \Phi_{(\widehat{\alpha},\widehat{\beta})}(x)\big| \ \xrightarrow{\mathrm{d}}\ \sup_{x} \big|\mathcal{B}_{(\alpha,\beta)}(x)\big|,$$

where $\mathcal{B}_{(\alpha,\beta)}$ is the Gaussian process obtained from the Brownian bridge in (3.7) by subtracting the correction term due to the estimation of $(\alpha, \beta)$, cf. [20, Theorem 19.23].

Proof. The d.f. $\Phi_{(\alpha,\beta)}$ with $\beta > 1$ satisfies Assumptions A4 and A6 with $\lambda = \min(\alpha, 1)$. Recall that $\widehat{\mu}_k = N^{-1} \sum_{i=1}^{N} \widehat{a}_{i,n}^{\,k}$, $k = 1, 2$. Since condition (3.6) is satisfied, $\sup_x |\widehat{\Phi}_{N,n}(x) - \Phi_{(\alpha,\beta)}(x)|$ vanishes in probability by Theorem 3.1, whereas the convergence $\sup_x |\Phi_{(\widehat{\alpha},\widehat{\beta})}(x) - \Phi_{(\alpha,\beta)}(x)| \xrightarrow{p} 0$ follows from (4.6) using the fact that $\Phi_{(\alpha,\beta)}(x)$ is continuous in $(\alpha, \beta)$, see [7] or [20, Theorem 19.23].

With Corollary 4.3 in mind, the Kolmogorov–Smirnov test for the composite hypothesis can be defined so as to reject the null whenever

$$\sqrt{N}\, \sup_{x} \big|\widehat{\Phi}_{N,n}(x) - \Phi_{(\widehat{\alpha},\widehat{\beta})}(x)\big| > c_{\gamma}(\widehat{\alpha}, \widehat{\beta}), \tag{4.7}$$

where $c_{\gamma}(\alpha, \beta)$ is the upper $\gamma$-quantile of the limit distribution in Corollary 4.3 corresponding to the parameters $(\alpha, \beta)$.

The test in (4.7) has correct asymptotic size $\gamma$, which follows from Corollary 4.3 and the continuity of the quantile function $c_{\gamma}(\alpha, \beta)$ in $(\alpha, \beta)$, see [19, p. 69], [20]. By writing

$$\sqrt{N}\, \sup_x \big|\widehat{\Phi}_{N,n}(x) - \Phi_{(\widehat{\alpha},\widehat{\beta})}(x)\big| \ \ge\ \sqrt{N}\, \Big(\sup_x \big|\Phi(x) - \Phi_{(\widehat{\alpha},\widehat{\beta})}(x)\big| - \sup_x \big|\widehat{\Phi}_{N,n}(x) - \Phi(x)\big|\Big),$$

it follows that the Kolmogorov–Smirnov statistic on the l.h.s. of (4.7) tends to infinity (in probability) under any fixed alternative $\Phi$ which cannot be approximated by a Beta d.f. in the uniform metric, i.e., such that $\inf_{\alpha, \beta > 0} \sup_x |\Phi(x) - \Phi_{(\alpha,\beta)}(x)| > 0$. Moreover, even under the alternative we preserve the consistency of $\widehat{\theta}_{N,n}$; hence $(\widehat{\alpha}, \widehat{\beta})$, being a continuous function of the sample moments, converges in probability to some finite limit. Therefore the test (4.7) is consistent.

In practice, the evaluation of the critical value in (4.7) requires Monte Carlo approximation, which is time-consuming. Alternatively, [18, 19] discussed parametric bootstrap procedures to produce asymptotically correct critical values. We note that the assumptions of [19, Theorem 1] are valid for the family of Beta d.f.s and the moment-based estimator of $(\alpha, \beta)$ in Corollary 4.3. The consistency of the test when using bootstrap critical values follows by an argument similar to that for (4.7).
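A generic parametric-bootstrap loop of the kind discussed in [18, 19] can be sketched as follows; the interface (`statistic`, `simulate_panel`, `refit`) is our own abstraction of the procedure, not the references' notation:

```python
import numpy as np

def bootstrap_critical_value(statistic, simulate_panel, refit, theta_hat,
                             B=500, alpha=0.05, rng=None):
    """Parametric bootstrap: simulate B panels from the model fitted
    under the null, re-estimate the parameters on each simulated panel,
    recompute the KS-type statistic with those re-estimated parameters,
    and return the empirical upper alpha-quantile of the B statistics."""
    rng = np.random.default_rng(rng)
    stats = np.empty(B)
    for b in range(B):
        panel = simulate_panel(theta_hat, rng)      # data from the fitted null
        stats[b] = statistic(panel, refit(panel))   # statistic with re-estimated parameters
    return float(np.quantile(stats, 1.0 - alpha))
```

The re-estimation step inside the loop is essential: it mimics the parameter-estimation effect that distinguishes the composite-hypothesis limit from the plain Kolmogorov distribution.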

## 5 Kernel density estimation

In this section we assume that $\Phi$ has a bounded probability density function $\phi$, implying Assumption A6 with Hölder exponent $\lambda = 1$ in (2.6). It is of interest to estimate $\phi$ in a nonparametric way from the sample autocorrelation coefficients in (3.3).

Consider the kernel density estimator

$$\widehat{\phi}_{N,n}(x) := \frac{1}{Nh} \sum_{i=1}^{N} K\Big(\frac{x - \widehat{a}_{i,n}}{h}\Big), \tag{5.1}$$

where $K$ is a kernel satisfying Assumption A7 below and $h = h_{N,n}$ is a bandwidth which tends to zero as $N$ and $n$ tend to infinity.

Assumption A7. $K$ is a continuous function of bounded variation that satisfies $\int_{\mathbb{R}} K(u)\, \mathrm{d}u = 1$. Set $\|K\|_2^2 := \int_{\mathbb{R}} K^2(u)\, \mathrm{d}u$ and $K_h(x) := h^{-1} K(x/h)$, $h > 0$.
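A direct implementation of (5.1) is given below, using the Epanechnikov kernel, which is continuous, of bounded variation and integrates to one; the choice of this particular kernel, and the helper names, are ours:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 (1 - u^2) on [-1, 1], zero elsewhere."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def kernel_density(a_hat, x, h, kernel=epanechnikov):
    """Kernel density estimator (5.1):
    (1 / (N h)) * sum_i K((x - a_hat_i) / h), evaluated on a grid x."""
    a_hat = np.asarray(a_hat, dtype=float)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    u = (x[:, None] - a_hat[None, :]) / h
    return kernel(u).mean(axis=1) / h
```

By construction the estimate is nonnegative and integrates to one (up to discretisation error on the evaluation grid).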

We consider two cases separately.

Case (i): $b_i \equiv 0$, meaning that the coefficient of the common shock in (3.2) is zero, so that the individual processes $X_i$, $i = 1, \dots, N$, are independent and satisfy $X_i(t) = a_i X_i(t-1) + c_i\, \xi_i(t)$.

Case (ii): $\mathrm{P}(b_i \ne 0) > 0$, meaning that $X_i$, $i = 1, \dots, N$, are mutually dependent processes.

###### Proposition 5.1

Let Assumptions B and A7 hold, and consider Case (i). If $h \to 0$ and $h^{-1} n^{-1/6} \to 0$, then

$$\mathrm{E}\, \widehat{\phi}_{N,n}(x) \to \phi(x) \tag{5.2}$$

at every continuity point $x$ of $\phi$. Moreover, if

$$N h \to \infty \quad \text{and} \quad \sqrt{Nh}\, h^{-1} n^{-1/6} \to 0, \tag{5.3}$$

then

$$N h\, \mathrm{Var}\big(\widehat{\phi}_{N,n}(x)\big) \to \phi(x)\, \|K\|_2^2 \tag{5.4}$$

at any continuity point $x$ of $\phi$. If $h \to 0$ holds in addition to (5.3), then the estimator is consistent at each continuity point $x$ of $\phi$:

$$\widehat{\phi}_{N,n}(x) \ \xrightarrow{p}\ \phi(x). \tag{5.5}$$