Empirical Likelihood for Change Point Detection in Autoregressive Models

Change point analysis has become an important research topic in many fields of application. A substantial body of work addresses the detection of changes and their locations in time series data. In this paper, a nonparametric method based on empirical likelihood is proposed to detect structural changes in the parameters of autoregressive (AR) models. Under certain conditions, the asymptotic null distribution of the empirical likelihood ratio test statistic is shown to be an extreme value distribution. Furthermore, the consistency of the test statistic is established. Simulations show that the proposed test has good power. The method is applied to a real-world data set to further illustrate the testing procedure.
Keywords: Autoregressive model; Change point analysis; Empirical Likelihood; Extreme value distribution; Consistency.

1 Introduction

Change point analysis, introduced by Page (1954, 1955), has become popular due to its use in a wide variety of fields, such as stock market analysis, quality control, traffic mortality rates, geology, and genetics. It concerns both detecting whether or not a change (or changes) has occurred and identifying the location(s) of any such change(s). Several methods to identify and estimate change points have been proposed. A Bayesian approach to detecting changes in the mean was discussed by Chernoff and Zacks (1964) and Sen and Srivastava (1975). Csörgő and Horváth (1997) and Chen and Gupta (2000) established asymptotic results for parametric change point models. Hawkins (1977), Worsley (1986), and Gombay and Horváth (1994) are a few among the many researchers who discussed the change point problem in parametric settings. However, parametric methods are no longer applicable if the underlying distribution is completely unknown. In such a case, a nonparametric approach should be considered as an alternative. One popular nonparametric approach is the cumulative sum (CUSUM) method. Most authors have assumed that the observations are independent and studied the case where the two distributions differ only in location. Combining nonparametric approaches with change point detection has been studied by many scholars over the past years. Aue and Horváth (2012) discussed how two methods, the CUSUM and the likelihood ratio test (LRT), can be modified for data exhibiting serial dependence, and also provided some insight into the sequential procedure. Lee et al. (2003) discussed the CUSUM test for changes of parameters in time series models, considering changes of the parameters in a random coefficient autoregressive AR(1) model and of the autocovariances of a linear process.

The change point problem may be viewed as a two-sample test adjusted for the unknown break location, thus leading to max-type procedures. Correspondingly, asymptotic results are derived to obtain critical values for the tests. In general, the change point problem can be described as follows. Let $x_1, x_2, \dots, x_n$ be a sequence of independent random vectors (variables) with probability distribution functions $F_1, F_2, \dots, F_n$, respectively. More specifically, suppose that the distributions belong to a common parametric family $F(\theta)$, where $\theta \in \Theta \subset \mathbb{R}^d$. Then the change point problem is to test the hypothesis about the population parameters

$$H_0: \theta_1 = \theta_2 = \dots = \theta_n = \theta \quad (\theta \text{ unknown})$$

versus the alternative

$$H_1: \theta_1 = \dots = \theta_{k_1} \neq \theta_{k_1+1} = \dots = \theta_{k_2} \neq \theta_{k_2+1} = \dots = \theta_{k_q} \neq \theta_{k_q+1} = \dots = \theta_n,$$

where $q$ (the number of change points) and $1 < k_1 < \dots < k_q < n$ (the change locations) are unknown and need to be estimated.

Empirical likelihood introduced by Owen (1988, 1990) is one of the popular and powerful nonparametric approaches. It has been widely used due to the robustness of its nonparametric nature and the efficiency of its likelihood construction. Kolaczyk (1994) used empirical likelihood with generalized linear models. Further, Qin and Lawless (1994) obtained estimating equations and derived asymptotic properties of the test statistic. Many scholars have discussed about the empirical likelihood ratio test for a change point in linear models, such as Zou et al. (2007) , Liu et al. (2008) , and Ning (2012). Since the empirical likelihood was originally proposed for independent data, it is difficult to apply it to dependent data such as time series data. Several approaches suggested to reduce the dependent data problem into an independent data problem. Owen (2001) suggested using the conditional likelihood to remove the dependence structure and generate the estimating equations. Kitamura (1997) used block-wise empirical likelihood method which preserves the dependence of data, and the resulting likelihood ratios have been used to construct asymptotically valid confidence intervals. Ogata (2005) and Nordman and Lahiri (2006) independently formulated a frequency domain empirical likelihood (FDEL) using spectral estimating equations which can be used for short- and long- range dependent data. Bai and Perron (1998) proposed CUSUM and F-based statistics for change point detection. Baragona et al. (2013) compared it with the test they proposed for change point detection based on the empirical likelihood approach for change point detection.

To deal with the situation of multiple changes, the binary segmentation method proposed by Vostrikova (1981) is traditionally used. The advantage of this method is that it detects the number of change points and estimates their locations simultaneously; its consistency has also been established. Hence, the general hypothesis of the change point problem can be simplified to testing no change point versus a single change point, i.e., the alternative hypothesis is

$$H_1: \theta_1 = \dots = \theta_k \neq \theta_{k+1} = \dots = \theta_n,$$

where $k$ is the location of the single change point at this stage. If $H_0$ is not rejected, the process stops and we conclude that there is no change. If $H_0$ is rejected, there is a change point, and the two subsequences before and after the detected change point are each tested for a change. This process is repeated until no subsequence contains a change point.
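The recursion described above can be sketched as follows. This is a minimal illustration, not the paper's procedure: `test` stands for any single-change-point test returning a max-type statistic and its location, and `crit` for the corresponding critical value (both are hypothetical placeholders).

```python
import numpy as np

def binary_segmentation(x, test, crit, min_len=10):
    """Recursively apply a single-change-point test: if the test rejects on a
    segment, record the change location and test the two subsequences before
    and after it; stop when no segment rejects or segments become too short."""
    changes = []

    def recurse(lo, hi):
        if hi - lo < 2 * min_len:
            return
        stat, k = test(x[lo:hi])          # k is relative to the segment
        if stat > crit:
            changes.append(lo + k)
            recurse(lo, lo + k)           # subsequence before the change
            recurse(lo + k, hi)           # subsequence after the change

    recurse(0, len(x))
    return sorted(changes)
```

In practice `crit` would come from the asymptotic null distribution of the chosen test; consistency of the recursion itself was established by Vostrikova (1981).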

In this paper, we propose a test statistic based on the empirical likelihood approach for detecting changes in a time series model. In Section 2, the change point problem is introduced for the AR(p) model. The empirical likelihood procedure for change point detection is described in Section 3. The asymptotic null distribution of the test statistic and the consistency of the test are presented in Section 4. Simulations are carried out in Section 5, and a real data application is given in Section 6. Section 7 provides some discussion, and proofs of the results are given in the Appendix.

2 Changepoint Problem in AR(p) Model

Consider the stationary AR(p) model with mean 0,

$$x_t = \begin{cases} \beta_1^{(1)} x_{t-1} + \dots + \beta_p^{(1)} x_{t-p} + \epsilon_t, & t = p+1, \dots, k, \\ \beta_1^{(2)} x_{t-1} + \dots + \beta_p^{(2)} x_{t-p} + \epsilon_t, & t = k+1, \dots, n, \end{cases}$$

where the $\epsilon_t$'s are independent random variables with mean zero and variance $\sigma^2$ (i.e., a white noise process), the coefficients $\beta_j^{(1)}, \beta_j^{(2)}$, $j = 1, \dots, p$, are all unknown parameters, and $k$ is the unknown change location which needs to be estimated. Denote $x_t = \beta' w_t + \epsilon_t$, where $w_t = (x_{t-1}, \dots, x_{t-p})'$ and $\beta = (\beta_1, \dots, \beta_p)'$. Therefore, the change point problem is to test the null hypothesis of no change in the autoregressive parameters versus the alternative hypothesis of one change at an unknown location, i.e.,

$$H_0: \beta^{(1)} = \beta^{(2)} \qquad \text{versus} \qquad H_1: \beta^{(1)} \neq \beta^{(2)}.$$

Hence, under the alternative hypothesis, there is a change in at least one of the parameters at an unknown location. We denote $\beta$ and $(\beta^{(1)}, \beta^{(2)})$ to be the parameter vectors under the null and the alternative hypothesis, respectively. Following Owen (1991), we derive the estimating functions to be

$$g(x_t, \beta^{(1)}) = w_t \big( x_t - \beta^{(1)\prime} w_t \big), \qquad t = p+1, \dots, k,$$

where $w_t = (x_{t-1}, \dots, x_{t-p})'$, and

$$g(x_t, \beta^{(2)}) = w_t \big( x_t - \beta^{(2)\prime} w_t \big), \qquad t = k+1, \dots, n.$$

It is easy to see that

$$E\big[ g(x_t, \beta) \big] = E\big[ w_t \epsilon_t \big] = 0$$

for every $t$ and the true parameter $\beta$, since $\epsilon_t$ is independent of $w_t$.
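The estimating functions can be evaluated numerically. The sketch below (function names are our own) builds the lagged design and verifies that the least-squares estimate solves the estimating equations, i.e., $\sum_t g(x_t, \hat\beta) = 0$:

```python
import numpy as np

def ar_design(x, p):
    """Rows w_t = (x_{t-1}, ..., x_{t-p}) and responses x_t, t = p, ..., n-1."""
    n = len(x)
    W = np.column_stack([x[p - 1 - j : n - 1 - j] for j in range(p)])
    return W, x[p:]

def estimating_functions(x, p, beta):
    """g(x_t, beta) = w_t (x_t - beta' w_t), one row per usable time point."""
    W, y = ar_design(x, p)
    return W * (y - W @ beta)[:, None]
```

At the least-squares estimate the columns of the returned matrix sum to zero, which is exactly the normal-equations form of the estimating equations.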

3 Empirical Likelihood for AR(p) Changepoint Model

Without loss of generality, we assume one change point at an unknown location $k$. Let

$$\Theta_0 = \Big\{ (\mathbf{p}, \mathbf{q}, \beta) : \sum_{t=p+1}^{k} p_t\, g(x_t, \beta) = 0,\ \sum_{t=k+1}^{n} q_t\, g(x_t, \beta) = 0 \Big\}$$

and

$$\Theta_1 = \Big\{ (\mathbf{p}, \mathbf{q}, \beta^{(1)}, \beta^{(2)}) : \sum_{t=p+1}^{k} p_t\, g(x_t, \beta^{(1)}) = 0,\ \sum_{t=k+1}^{n} q_t\, g(x_t, \beta^{(2)}) = 0 \Big\}$$

be the parameter spaces under $H_0$ and $H_1$, respectively, where $\mathbf{p} = (p_{p+1}, \dots, p_k)$ and $\mathbf{q} = (q_{k+1}, \dots, q_n)$ are the probability vectors such that $p_t \ge 0$, $\sum_t p_t = 1$, and $q_t \ge 0$, $\sum_t q_t = 1$. If a change occurs at $k$, then the empirical likelihood ratio test statistic is defined as

$$W(k) = \frac{\sup\big\{ \prod_{t \le k} p_t \prod_{t > k} q_t : (\mathbf{p}, \mathbf{q}, \beta) \in \Theta_0 \big\}}{\sup\big\{ \prod_{t \le k} p_t \prod_{t > k} q_t : (\mathbf{p}, \mathbf{q}, \beta^{(1)}, \beta^{(2)}) \in \Theta_1 \big\}}.$$

The null hypothesis is rejected for a sufficiently large value of $-2 \log W(k)$. Let $\beta^{(1)}$ and $\beta^{(2)}$ be fixed. A Lagrangian argument gives

$$p_t = \frac{1}{k - p} \cdot \frac{1}{1 + \lambda_1' g(x_t, \beta^{(1)})}, \qquad q_t = \frac{1}{n - k} \cdot \frac{1}{1 + \lambda_2' g(x_t, \beta^{(2)})},$$

where $\lambda_1$ and $\lambda_2$ are chosen such that $\sum_{t=p+1}^{k} p_t\, g(x_t, \beta^{(1)}) = 0$ and $\sum_{t=k+1}^{n} q_t\, g(x_t, \beta^{(2)}) = 0$. Therefore, under $H_1$ we obtain

$$\sup_{\Theta_1} \prod_{t \le k} p_t \prod_{t > k} q_t = \prod_{t=p+1}^{k} \frac{1}{(k-p)\{1 + \hat\lambda_1' g(x_t, \hat\beta^{(1)})\}} \prod_{t=k+1}^{n} \frac{1}{(n-k)\{1 + \hat\lambda_2' g(x_t, \hat\beta^{(2)})\}}.$$
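For a fixed parameter value, the Lagrangian weights and the corresponding log empirical likelihood ratio for one segment can be computed by solving the dual problem in the multiplier. This is a minimal Newton-iteration sketch of Owen's one-sample profiling step (our own helper, not the paper's code); `G` holds the estimating-function values $g(x_t, \beta)$ as rows:

```python
import numpy as np

def el_log_ratio(G, n_iter=100, tol=1e-10):
    """Return -2 * sum_t log(n p_t) where p_t = 1 / (n (1 + lam' g_t)) and
    lam solves sum_t g_t / (1 + lam' g_t) = 0 (the EL dual equations)."""
    n, d = G.shape
    lam = np.zeros(d)
    for _ in range(n_iter):
        denom = 1.0 + G @ lam
        grad = (G / denom[:, None]).sum(axis=0)      # dual score in lam
        if np.linalg.norm(grad) < tol:
            break
        hess = -((G / denom[:, None] ** 2).T @ G)    # dual Hessian
        lam = lam - np.linalg.solve(hess, grad)
    return 2.0 * np.log(1.0 + G @ lam).sum()
```

When the constraint already holds at the uniform weights, the solution is $\lambda = 0$ and the statistic is 0; otherwise the statistic is positive.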

Let $\lambda = (\lambda_1', \lambda_2')'$. The score functions are defined as:

$$Q_1(\beta^{(1)}, \lambda_1) = \sum_{t=p+1}^{k} \frac{g(x_t, \beta^{(1)})}{1 + \lambda_1' g(x_t, \beta^{(1)})}, \qquad Q_2(\beta^{(1)}, \lambda_1) = \sum_{t=p+1}^{k} \frac{1}{1 + \lambda_1' g(x_t, \beta^{(1)})} \left( \frac{\partial g(x_t, \beta^{(1)})}{\partial \beta} \right)' \lambda_1,$$

and analogously for $(\beta^{(2)}, \lambda_2)$ with the sums running over $t = k+1, \dots, n$. Under certain regularity conditions, Qin and Lawless (1994) showed that there exists a solution $(\hat\beta^{(1)}, \hat\beta^{(2)}, \hat\lambda_1, \hat\lambda_2)$ such that

$$Q_1(\hat\beta^{(j)}, \hat\lambda_j) = 0, \qquad Q_2(\hat\beta^{(j)}, \hat\lambda_j) = 0, \qquad j = 1, 2.$$

Hence, we obtain the maximizer of the denominator of $W(k)$.

Similarly, under $H_0$ we obtain a solution $(\hat\beta, \hat\lambda_{0,1}, \hat\lambda_{0,2})$ with a common parameter $\beta$ in both segments. Then the empirical likelihood ratio statistic can be rewritten as

$$-2 \log W(k) = 2 \sum_{t=p+1}^{k} \log\{1 + \hat\lambda_{0,1}' g(x_t, \hat\beta)\} + 2 \sum_{t=k+1}^{n} \log\{1 + \hat\lambda_{0,2}' g(x_t, \hat\beta)\} - 2 \sum_{t=p+1}^{k} \log\{1 + \hat\lambda_1' g(x_t, \hat\beta^{(1)})\} - 2 \sum_{t=k+1}^{n} \log\{1 + \hat\lambda_2' g(x_t, \hat\beta^{(2)})\}.$$

Since $k$ is unknown, $H_0$ is rejected when the maximally selected log-likelihood ratio statistic

$$\max_{p < k < n} \{ -2 \log W(k) \}$$

is sufficiently large.

When $k$ or $n - k$ is too small, the maximum empirical likelihood estimators may not exist. Hence we consider a trimmed likelihood ratio statistic in which the range of $k$ is restricted as follows. The trimmed likelihood ratio statistic is defined as

$$Z_n = \max_{n_0 \le k \le n - n_0} \{ -2 \log W(k) \},$$

where $n_0 = \lfloor u_n \rfloor$ for a slowly growing deterministic sequence $u_n$, and $\lfloor u \rfloor$ denotes the largest integer not larger than $u$. According to Perron and Vogelsang (1992), the selection of the trimming bounds can be fairly arbitrary. If $H_0$ is true, then $Z_n$ has an asymptotic extreme value limit distribution. The convergence to the extreme value limit can be slow, and the asymptotic test often tends to be conservative in finite samples.
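The trimmed maximization is then a simple search over the admissible change locations. In this sketch, `stat_at(k)` is a placeholder for the per-location statistic computed by whatever EL routine is used, and `n0` is the trimming bound; the maximizing location doubles as the change-point estimate:

```python
import numpy as np

def trimmed_max_statistic(stat_at, n, n0):
    """Maximize the per-location statistic over the trimmed range [n0, n - n0];
    return the maximum (the test statistic) and the maximizing k (the
    estimated change location)."""
    ks = range(n0, n - n0 + 1)
    vals = [stat_at(k) for k in ks]
    j = int(np.argmax(vals))
    return vals[j], n0 + j
```
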

4 Main Results

The results are similar to those of Csörgő and Horváth (1997). Under mild regularity conditions, the following theorems hold.

Theorem 1.

Let $\beta_0$ be the true parameter. Suppose that $E[g(x_t, \beta_0)] = 0$, $E\|g(x_t, \beta_0)\|^{2+\delta} < \infty$ for some $\delta > 0$, and $\Sigma = E[g(x_t, \beta_0)\, g'(x_t, \beta_0)]$ is positive definite. If $H_0$ is true, then we have

$$\lim_{n \to \infty} P\Big\{ a(\log n)\, Z_n^{1/2} \le t + b_d(\log n) \Big\} = \exp\big(-e^{-t}\big)$$

for all $t$, where $a(x) = (2 \log x)^{1/2}$, $b_d(x) = 2 \log x + \tfrac{d}{2} \log\log x - \log \Gamma(d/2)$, $\Gamma(\cdot)$ is the gamma function, and $d$ is the dimension of the parameter $\beta$.
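Since the limit in Theorem 1 is an extreme value (Gumbel) law, asymptotic critical values for the normalized statistic are simply Gumbel quantiles: the level-$\alpha$ test rejects when the normalized statistic exceeds $t_\alpha = -\log(-\log(1-\alpha))$. A minimal computation (these are exactly the theoretical critical values reported in Table 2):

```python
import math

def gumbel_quantile(alpha):
    """(1 - alpha)-quantile t_alpha of the Gumbel law exp(-exp(-t)):
    solve exp(-exp(-t)) = 1 - alpha for t."""
    return -math.log(-math.log(1.0 - alpha))
```

For $\alpha = 0.01, 0.05, 0.10$ this gives 4.600149, 2.970195, and 2.250367, matching Table 2.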

Theorem 2.

Under the conditions of Theorem 1 and the condition that for every fixed parameter $\beta$ there exists a positive constant $c$ such that the corresponding moment bound holds: if $H_1$ is true and the change location satisfies $k/n \to \tau \in (0, 1)$ as $n \to \infty$, then the ELR test statistic is consistent, i.e., there exists a constant $c_0 > 0$ such that

$$\lim_{n \to \infty} P\{ Z_n > c_0\, n \} = 1.$$

Theorem 3.

Under the conditions of Theorem 1 and the same additional condition as in Theorem 2: if $H_1$ is true and the change location satisfies $k/n \to \tau \in (0, 1)$ as $n \to \infty$, then $\hat{k}/n \to \tau$ in probability as $n \to \infty$, where $\hat{k}$ is the maximizer in the definition of the trimmed statistic.

Proofs are given in the Appendix.

5 Simulation Study

A Monte Carlo simulation has been conducted to illustrate the performance of the proposed method. Consider the following AR(1) model with mean 0:

$$x_t = \begin{cases} \beta^{(1)} x_{t-1} + \epsilon_t, & t \le k, \\ \beta^{(2)} x_{t-1} + \epsilon_t, & t > k, \end{cases}$$

where $\epsilon_t$ is white noise with mean zero and variance $\sigma^2$. Four different standardized distributions, denoted (i)–(iv), are considered for $\epsilon_t$. The power of the proposed test in detecting changes in the parameters of the AR(1) model has been calculated for three sample sizes, n = 100, 150, and 250, with several change locations considered under each sample size. Additional simulations were carried out to compute the empirical critical values at different significance levels; these turned out to be close to the corresponding theoretical critical values. Hence, we use the theoretical critical value 2.9702 (significance level 0.05) for the power calculations, based on 1000 replications. The results are listed in Table 1. The power of the test increases with the sample size. The power values for a given change location are similar across the four error distributions, which may be due to the fact that the distributions are standardized. When the change location is farther away from the start of the series, the power tends to decrease; intuitively, this may be due to the dependence in the data.

Change location k   (i)     (ii)    (iii)   (iv)
n = 100
k = 20              0.802   0.816   0.815   0.808
k = 30              0.765   0.775   0.774   0.779
k = 40              0.723   0.732   0.754   0.747
k = 50              0.656   0.669   0.627   0.674
k = 80              0.296   0.292   0.283   0.331
n = 150
k = 30              0.929   0.929   0.924   0.903
k = 45              0.913   0.901   0.893   0.872
k = 60              0.862   0.844   0.853   0.845
k = 75              0.806   0.799   0.813   0.804
k = 120             0.385   0.397   0.426   0.450
n = 250
k = 50              0.993   0.988   0.990   0.976
k = 80              0.983   0.981   0.976   0.965
k = 100             0.966   0.966   0.976   0.948
k = 125             0.941   0.927   0.928   0.926
k = 200             0.621   0.603   0.626   0.622
Table 1: Power of the hypothesis test for AR(1) model
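The shape of such a power experiment can be reproduced in outline. The sketch below is not the paper's EL statistic: for brevity it uses a standardized CUSUM of the AR(1) estimating functions $g_t = x_{t-1}(x_t - \hat\varphi x_{t-1})$ as the detector, and all names and the critical value are our own illustrative choices.

```python
import numpy as np

def simulate_ar1(n, k, phi1, phi2, rng):
    """AR(1) path whose coefficient changes from phi1 to phi2 after time k."""
    x = np.zeros(n)
    for t in range(1, n):
        phi = phi1 if t <= k else phi2
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def max_score_cusum(x):
    """Max absolute standardized CUSUM of g_t = x_{t-1} (x_t - phi_hat x_{t-1})."""
    w, y = x[:-1], x[1:]
    phi_hat = (w @ y) / (w @ w)          # pooled least-squares fit
    g = w * (y - phi_hat * w)
    s = np.cumsum(g) / (np.std(g) * np.sqrt(len(g)))
    return np.max(np.abs(s))

def power(n, k, phi1, phi2, crit, reps, seed=0):
    """Monte Carlo rejection frequency at critical value crit."""
    rng = np.random.default_rng(seed)
    hits = sum(max_score_cusum(simulate_ar1(n, k, phi1, phi2, rng)) > crit
               for _ in range(reps))
    return hits / reps
```

With a fixed critical value, a mid-sample change of reasonable magnitude is detected with high frequency, while under no change the rejection rate stays near the nominal level.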

6 Application

In this section, we study a data set consisting of monthly average soybean prices received by farmers in Illinois from January 1960 to November 2008, with sample size 587. The prices are given in dollars per bushel. This data set was analyzed by Balcombe et al. (2007), who considered threshold AR(1) models for the prices of agricultural products. Berkes et al. (2011) studied the same data set, proposing a likelihood ratio test to detect a structural change from an AR model to a threshold AR model. We apply the proposed EL method for the AR(1) change point model to detect a structural change in this data set. Figure 1 shows the time series plot of the data.

Figure 1: Time Series for the Monthly average soybean prices

In order to test whether there are significant changes, we use the trimmed test statistic defined in (6). Its observed value is 16.07426. Comparing this with the critical values derived from Theorem 1, given in Table 2, we have sufficient evidence to reject the null hypothesis that there is no change.

Significance level   0.01       0.05       0.10
Critical value       4.600149   2.970195   2.250367
Table 2: Theoretical Critical values

7 Discussion

In this paper, we develop an EL-based procedure for detecting structural changes in time series data, i.e., testing the null hypothesis of no change against the alternative of one change. A test statistic is derived for a fixed change location, and the max-type statistic over all admissible change locations is considered. The asymptotic null distribution of the test statistic is established as an extreme value distribution. Simulations of the power in the AR(1) model were carried out with different sample sizes and different error distributions to illustrate the performance of the proposed test statistic. The results indicate that the proposed method efficiently identifies changes in a given time series. We should point out that, due to the slow convergence of the test statistic in Theorem 1, a moderate or large sample size is recommended to achieve a good approximation (see Csörgő and Horváth, 1997). If the sample size is small, the bootstrap is suggested to obtain approximate p-values in practice.
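The residual bootstrap mentioned above can be sketched as follows: fit the null (no-change) AR(1) model, resample the centered residuals, rebuild series under $H_0$, and recompute the statistic. All names here, including the `statistic` argument, are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def bootstrap_pvalue(x, statistic, n_boot=500, seed=0):
    """Residual-bootstrap p-value for an AR(1) change-point statistic."""
    rng = np.random.default_rng(seed)
    w, y = x[:-1], x[1:]
    phi_hat = (w @ y) / (w @ w)          # null (no-change) fit
    resid = y - phi_hat * w
    resid = resid - resid.mean()         # center before resampling
    obs = statistic(x)
    hits = 0
    for _ in range(n_boot):
        e = rng.choice(resid, size=len(x), replace=True)
        xb = np.zeros(len(x))
        for t in range(1, len(x)):       # rebuild a series under H0
            xb[t] = phi_hat * xb[t - 1] + e[t]
        hits += statistic(xb) >= obs
    return (1 + hits) / (1 + n_boot)
```

A small p-value indicates that the observed statistic is extreme relative to its bootstrap null distribution.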

As for future work, we plan to extend the proposed method to other stationary time series models such as MA, ARMA, and GARCH models, along with the corresponding analytic results and simulations, and to compare it with other existing methods. Further, sequential change point detection based on the EL method is to be studied, where the sample size is a random variable and the null hypothesis of sequential structural stability is rejected as soon as a change is detected. The objective in sequential change point detection is to detect such a change with a minimum number of false alarms. A nonparametric testing procedure based on the EL method will be proposed and the related asymptotic results will be studied.


Appendix

In order to prove Theorem 1, we need the following lemmas.

Lemma 1.

Assume that $E[g(x_t, \beta_0)\, g'(x_t, \beta_0)]$ is positive definite, that $\partial g(x, \beta)/\partial \beta$ is continuous in a neighborhood of the true value $\beta_0$, and that $\|\partial g(x, \beta)/\partial \beta\|$ and $\|g(x, \beta)\|^3$ are bounded in the neighborhood of the true value $\beta_0$. Then, as $n \to \infty$, there exists, with probability 1, a root $\hat\beta$ of the score equations satisfying



First we will show

where and .
Let $\theta_l = 1 + \lambda' g(x_l, \beta)$ for each $l$, where $\lambda = \lambda(\beta)$. Let $\lambda$ be the solution of the equation given by the first score function defined in Section 3.


Let where and .

(where $e_j$ is the unit vector in the $j$th coordinate direction)
(where $g^* = \max_l g(x_l, \beta)$ and $S = \frac{1}{n} \sum_l \theta_l^{-2}\, g(x_l, \beta)\, g'(x_l, \beta)$)

Since $\lambda' S \lambda \ge \sigma_1 \|\lambda\|^2$, where $\sigma_1 > 0$ is the smallest eigenvalue of $S$, then

So, .
Let . Then, .
Expanding (A.1),


The last equality holds since
By substituting, we have the final term of (A.2):



Now, denote , , and . So (A.2) can be rewritten as

It follows that
Let be any constant sequence such that , and . Denote the ball and the surface of the ball . For any , we have


By the Taylor expansion, for any , we have


The first term of (A.4) is:


The second term of (A.4) is:



So we can rewrite (A.4) as