# Nonparametric test for detecting change in distribution with panel data

## Abstract

This paper considers the problem of comparing two processes with panel data. A nonparametric test is proposed for detecting a monotone change in the link between the two process distributions. The test statistic is of CUSUM type, based on the empirical distribution functions. The asymptotic distribution of the proposed statistic is derived, and its finite-sample properties are examined through a bootstrap procedure in Monte Carlo simulations.

**Keywords:** nonparametric estimation, panel data, process

## 1 Introduction

Many situations lead to the comparison of two random processes. In a parametric setting, the problem of change detection has been widely studied in the time series literature. A common problem is to test for a change in the mean or in the variance of the time series by using a parametric model (see for instance [8] or [7], and references therein). In the Gaussian case, comparisons of processes are considered through their covariance structures (see [9], [12]). These distributional assumptions can be relaxed when the study concerns processes observed through panel data. This situation is frequently encountered in medical follow-up studies, when two groups of patients are observed and compared. Each subject in the study gives rise to a random process consisting of the measurements taken on the patient over time (such data are referred to as panel data). In this context, [3, 2, 1] considered the problem of testing the equality of mean functions and proposed new multi-sample tests for panel count data.

In this paper we consider the general problem of comparing two processes $X$ and $Y$ which may differ by a transformation of their distributions. Our purpose is to test whether this transformation changes over time. For this, two panels are considered, $(X_{i,t})_{1\le i\le N_x}$ and $(Y_{j,t})_{1\le j\le N_y}$, $t=1,\dots,n$, not necessarily independent; that is, we can have i.i.d. paired observations with dependence between $X_{i,t}$ and $Y_{i,t}$. It is assumed that for each $t$, the $X_{i,t}$ (resp. $Y_{j,t}$) are i.i.d. random variables with common distribution function $F_t$ (resp. $G_t$) and with support $\mathcal{X}$ (resp. $\mathcal{Y}$). Also we assume that for all $t$ there exists a monotone transformation $h_t$ such that the following equality in distribution holds: $X_t \stackrel{d}{=} h_t(Y_t)$. Without loss of generality we consider that the functions $h_t$ are increasing. Note that if $F_t$ is invertible then there exists a trivial transformation given by $h_t = F_t^{-1}\circ G_t$. We are interested in testing whether this transformation is time independent; that is, whether for all $t$ the equality $h_t = h$ holds. A simple illustration is the case where $X$ and $Y$ are Gaussian processes with means $\mu_X(t)$, $\mu_Y(t)$ and variances $\sigma^2_X(t)$, $\sigma^2_Y(t)$, respectively. In that case the function $h_t$ is linear.

More generally, observing both processes $X$ and $Y$ with panel data, we want to test

$$H_0:\ \forall t,\ h_t=h \quad \text{against} \quad H_1:\ \exists\, t_1\neq t_2,\ h_{t_1}\neq h_{t_2}.$$

It is clear that $H_0$ coincides with the equality in distribution $X_t \stackrel{d}{=} h(Y_t)$, for all $t$. Following [8] (see also [7]), we construct a nonparametric test statistic based on the empirical estimator of $h_t$, denoted by $\hat h_t$. We show that the resulting CUSUM process converges to a Brownian bridge under $H_0$.

When $H_0$ is not rejected, it is of interest to estimate $h$ and to interpret its estimator $\hat h$. This test can thus be viewed as a first step legitimating the estimation and interpretation of a constant transformation between the distributions of two samples, possibly paired.

The paper is organized as follows. In Section 2 we construct the test statistic. In Section 3 we perform a simulation study using a bootstrap procedure to evaluate the finite-sample properties of the test. The power is evaluated against alternatives featuring smooth scale or position changes over time in the process distribution. Section 4 contains brief concluding remarks.

## 2 The test statistic

A natural nonparametric estimator of $h_t$ is given by

$$\hat h_t(\cdot) \;=\; X_{(N_x \hat G_t(\cdot)),\,t},$$

where $X_{(k),t}$ denotes the $k$th order statistic of the sample $X_{1,t},\dots,X_{N_x,t}$ and $\hat G_t$ is the empirical distribution function of $Y_{1,t},\dots,Y_{N_y,t}$, that is

$$\hat G_t(x) \;=\; \frac{1}{N_y}\sum_{i=1}^{N_y} \mathbf{1}\{Y_{i,t}\le x\}.$$
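In code, this estimator reduces to an empirical-cdf evaluation followed by an order-statistic lookup. The sketch below is a minimal illustration at a single time $t$; the rounding of $N_x \hat G_t(x)$ up to a valid index is our choice, not specified in the text:

```python
import numpy as np

def h_hat(x_sample, y_sample, x):
    """Estimate h_t(x) = F_t^{-1}(G_t(x)) at a fixed time t.

    x_sample : observations X_{1,t}, ..., X_{N_x,t}
    y_sample : observations Y_{1,t}, ..., Y_{N_y,t}
    """
    G_hat = np.mean(y_sample <= x)        # empirical cdf of Y_t at x
    x_sorted = np.sort(x_sample)          # order statistics X_(1),t <= ... <= X_(N_x),t
    n_x = len(x_sorted)
    # index N_x * G_hat, rounded up and clamped to {1, ..., N_x}
    k = min(max(int(np.ceil(n_x * G_hat)), 1), n_x)
    return x_sorted[k - 1]                # k-th order statistic (1-based)
```

For instance, if the $Y$ sample is the $X$ sample shifted down by one unit, $\hat h_t$ approximately recovers the shift $y \mapsto y+1$.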

A nonparametric test is considered to detect variation of $h_t$ over time. For $\tau\in[0,1]$ and $x\in\mathbb{R}$, write

$$B_n(\tau,x) \;=\; \frac{1}{\sqrt{n}\,\hat\sigma_n}\left(\sum_{t=1}^{[n\tau]}\hat h_t(x) \;-\; \frac{[n\tau]}{n}\sum_{t=1}^{n}\hat h_t(x)\right), \tag{1}$$

where

$$\hat\sigma_n^2 \;=\; \frac{1}{n}\sum_{t=1}^{n}\bigl(\hat h_t(x)-\bar h(x)\bigr)^2, \qquad \bar h(x)=\frac{1}{n}\sum_{t=1}^{n}\hat h_t(x).$$

For a given square integrable weight function $w$ we define the following test statistic:

$$S_n(w) \;=\; \int_{\mathbb{R}} w(x)\,\sup_{0\le\tau\le 1}\bigl|B_n(\tau,x)\bigr|\,dx.$$
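Given the estimates $\hat h_t(x_j)$ on a grid of $x$ values, the statistic can be computed directly from cumulative sums; a minimal sketch, where the grid-based approximation of the integral is our choice:

```python
import numpy as np

def sup_Bn(h_vals):
    """sup_{0<=tau<=1} |B_n(tau, x)| for a fixed x, from h_vals = (hat h_t(x))_{t=1..n}."""
    n = len(h_vals)
    sigma_hat = np.sqrt(np.mean((h_vals - np.mean(h_vals)) ** 2))
    partial = np.cumsum(h_vals)                       # sum_{t=1}^{k} hat h_t(x)
    k = np.arange(1, n + 1)
    bridge = (partial - (k / n) * partial[-1]) / (np.sqrt(n) * sigma_hat)
    return np.max(np.abs(bridge))

def S_n(h_matrix, x_grid, w):
    """Approximate S_n(w) = integral of w(x) * sup_tau |B_n(tau, x)| dx on a grid.

    h_matrix[t, j] = hat h_t(x_grid[j]).
    """
    sups = np.array([sup_Bn(h_matrix[:, j]) for j in range(h_matrix.shape[1])])
    vals = w(x_grid) * sups
    # trapezoidal rule over the x grid
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(x_grid))
```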

To establish the limiting distribution of the statistic under the null hypothesis $H_0$, we need the following assumptions:

• Assumption 1. There exists $a\in(0,1)$ such that $\dfrac{N_x}{N_x+N_y}\to a$ as $N_x\to\infty$ and $N_y\to\infty$.

• Assumption 2. There exist $\gamma_1>0$ and $\gamma_2<\infty$ such that $\gamma_1\le f_t(x)\le\gamma_2$ and $\gamma_1\le g_t(x)\le\gamma_2$ for all $t$ and $x$, where $f_t$ and $g_t$ are the density functions of $X_t$ and $Y_t$.

• Assumption 3. For all $x$, there exists $\bar\sigma_2^2(x)$ such that

$$\frac{1}{n}\sum_{t=1}^{n}\sigma_{1,t}^2(x) \;\to\; \bar\sigma_2^2(x), \quad \text{as } n\to\infty,$$

where

$$\sigma_{1,t}^2(x) \;=\; \sigma_t^2(x)\,\frac{N_x+N_y}{N_x N_y}, \quad \text{and} \quad \sigma_t^2(x)=\frac{G_t(x)\bigl(1-G_t(x)\bigr)}{f_t^2\bigl(h_t(x)\bigr)}. \tag{2}$$

• Assumption 4.

$$\frac{n(N_x+N_y)}{N_x N_y} \;\to\; 0.$$
###### Remark 2.1

Assumptions 1 and 2 are standard. Assumption 3 states that the second moments converge on average. If Assumption 1 is satisfied, Assumption 4 is equivalent to $n/N_x\to 0$ or, equivalently, $n/N_y\to 0$.

###### Theorem 2.1

Let Assumptions 1-4 hold. Then under the null hypothesis $H_0$ we have the following convergence in distribution:

$$S_n(w)\;\xrightarrow{d}\;S(w) \;=\; B_\infty \int_{\mathbb{R}} w(x)\,dx, \quad \text{as } n\to\infty,\ N_x\to\infty \text{ and } N_y\to\infty, \tag{3}$$

where $B_\infty=\sup_{0\le\tau\le 1}|B(\tau)|$ and $B$ is a Brownian bridge.

###### Remark 2.2

The cumulative distribution function of $B_\infty$ is given by (see [4])

$$F_{B_\infty}(z) \;=\; 1+2\sum_{k=1}^{\infty}(-1)^{k}\exp\{-2k^{2}z^{2}\}.$$
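This is the Kolmogorov distribution, so critical values for the test can be obtained numerically by truncating the series and inverting it by bisection; a small sketch:

```python
import numpy as np

def F_B_inf(z, terms=100):
    """P(B_inf <= z) via the truncated series 1 + 2 * sum_k (-1)^k exp(-2 k^2 z^2)."""
    k = np.arange(1, terms + 1)
    return 1.0 + 2.0 * np.sum((-1.0) ** k * np.exp(-2.0 * k**2 * z**2))

def critical_value(alpha, lo=0.0, hi=3.0, tol=1e-8):
    """Smallest z with F_B_inf(z) >= 1 - alpha, found by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F_B_inf(mid) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At level $\alpha=0.05$ this gives the familiar value $z\approx 1.358$.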

Before proving Theorem 2.1, we state three lemmas.

###### Lemma 2.1

Under Assumption 1 we have

$$\left(\frac{N_x N_y}{N_x+N_y}\right)^{1/2}\bigl(\hat h_t(x)-h_t(x)\bigr)\;\xrightarrow{d}\;N\bigl(0,\sigma_t^2(x)\bigr), \quad \text{as } N_x\to\infty,\ N_y\to\infty, \tag{4}$$

where $\sigma_t^2(x)$ is given by (2).

### Proof

$(\mathbf{1}\{Y_{i,t}\le x\})_{1\le i\le N_y}$ is an i.i.d. sequence with mean $G_t(x)$ and variance $G_t(x)(1-G_t(x))$; hence an immediate application of the central limit theorem yields

$$N_y^{1/2}\bigl(\hat G_t(x)-G_t(x)\bigr)\;\xrightarrow{d}\;N\bigl(0,\,G_t(x)(1-G_t(x))\bigr). \tag{5}$$

By the delta-method the last convergence implies that

$$N_y^{1/2}\bigl(F_t^{-1}(\hat G_t(x))-F_t^{-1}(G_t(x))\bigr)\;\xrightarrow{d}\;N\bigl(0,\sigma_t^2(x)\bigr). \tag{6}$$

For $p\in(0,1)$ fixed, denote by $\hat F_t^{-1}(p)$ the sample $p$-quantile of $X_{1,t},\dots,X_{N_x,t}$. By Theorem 3 of [13] we obtain

$$N_x^{1/2}\bigl(\hat F_t^{-1}(p)-F_t^{-1}(p)\bigr)\;\xrightarrow{d}\;N\left(0,\frac{p(1-p)}{f_t^2\bigl(F_t^{-1}(p)\bigr)}\right), \quad \forall p\in(0,1). \tag{7}$$

Let $\phi_Z$ denote the characteristic function of a random variable $Z$, and let $\phi_{Z\mid Y_t}$ denote the characteristic function of $Z$ conditional on $Y_t=(Y_{1,t},\dots,Y_{N_y,t})$. We have

$$\tilde H_t \;=\; \left(\frac{N_x N_y}{N_x+N_y}\right)^{1/2}\bigl(\hat h_t(x)-h_t(x)\bigr) \;=\; \left(\frac{N_x N_y}{N_x+N_y}\right)^{1/2}\bigl(\hat F_t^{-1}(\hat G_t(x))-F_t^{-1}(G_t(x))\bigr) \;=\; \tilde H_{1,t}+\tilde H_{2,t},$$

where

$$\tilde H_{1,t} \;=\; \left(\frac{N_y}{N_x+N_y}\right)^{1/2} N_x^{1/2}\bigl(\hat F_t^{-1}(\hat G_t(x))-F_t^{-1}(\hat G_t(x))\bigr),$$
$$\tilde H_{2,t} \;=\; \left(\frac{N_x}{N_x+N_y}\right)^{1/2} N_y^{1/2}\bigl(F_t^{-1}(\hat G_t(x))-F_t^{-1}(G_t(x))\bigr).$$

Then we get

$$\phi_{\tilde H_t}(u) \;=\; E\bigl(\exp(iu\tilde H_t)\bigr) \;=\; E\bigl(E[\exp(iu\tilde H_t)\mid Y_t]\bigr) \;=\; E\bigl(\exp(iu\tilde H_{2,t})\,E[\exp(iu\tilde H_{1,t})\mid Y_t]\bigr).$$

Moreover

$$E[\exp(iu\tilde H_{1,t})\mid Y_t] \;=\; \phi_{\tilde H_{1,t}\mid Y_t}(u) \;=\; \phi_{N_x^{1/2}(\hat F_t^{-1}(\hat G_t(x))-F_t^{-1}(\hat G_t(x)))\mid Y_t}\!\left(\Bigl(\tfrac{N_y}{N_x+N_y}\Bigr)^{1/2}u\right). \tag{8}$$

From (7) it follows that, for all $v\in\mathbb{R}$,

$$\phi_{N_x^{1/2}(\hat F_t^{-1}(\hat G_t(x))-F_t^{-1}(\hat G_t(x)))\mid Y_t}(v) \;\longrightarrow\; \exp\!\left(-\tfrac{1}{2}v^2\hat\sigma_t^2(x)\right), \tag{9}$$

as $N_x\to\infty$, where

$$\hat\sigma_t^2(x) \;=\; \frac{\hat G_t(x)\bigl(1-\hat G_t(x)\bigr)}{f_t^2\bigl(F_t^{-1}(\hat G_t(x))\bigr)}.$$

The convergence (5) yields $\hat\sigma_t^2(x)\xrightarrow{P}\sigma_t^2(x)$ as $N_y\to\infty$, which implies, combined with (8)-(9), Assumption 1 and $N_y/(N_x+N_y)\to 1-a$, that

$$E[\exp(iu\tilde H_{1,t})\mid Y_t] \;\xrightarrow{d}\; \exp\!\left(-\tfrac{1}{2}(1-a)u^2\sigma_t^2(x)\right), \tag{10}$$

as $N_x\to\infty$ and $N_y\to\infty$. Moreover we have

$$\exp(iu\tilde H_{2,t}) \;=\; \exp\!\left[iu\left(\frac{N_x}{N_x+N_y}\right)^{1/2} N_y^{1/2}\bigl(F_t^{-1}(\hat G_t(x))-F_t^{-1}(G_t(x))\bigr)\right].$$

Since the function $z\mapsto\exp(iuz)$ is continuous, the convergence (6) and Assumption 1 yield

$$\exp(iu\tilde H_{2,t}) \;\xrightarrow{d}\; \exp(iua^{1/2}H_{2,t}), \quad \text{as } N_x\to\infty,\ N_y\to\infty, \tag{11}$$

where $H_{2,t}$ is centered Gaussian with variance equal to $\sigma_t^2(x)$. From (10) and (11) it follows that, as $N_x\to\infty$ and $N_y\to\infty$,

$$\exp(iu\tilde H_{2,t})\,E[\exp(iu\tilde H_{1,t})\mid Y_t] \;\xrightarrow{d}\; \exp(iua^{1/2}H_{2,t})\,\exp\!\left(-\tfrac{1}{2}(1-a)u^2\sigma_t^2(x)\right). \tag{12}$$

Since $\exp(iu\tilde H_{2,t})$ and $E[\exp(iu\tilde H_{1,t})\mid Y_t]$ are bounded almost surely, it follows from (12) that

$$\phi_{\tilde H_t}(u) \;=\; E\bigl(\exp(iu\tilde H_{2,t})\,E[\exp(iu\tilde H_{1,t})\mid Y_t]\bigr) \;\to\; E\bigl(\exp(iua^{1/2}H_{2,t})\bigr)\exp\!\left(-\tfrac{1}{2}(1-a)u^2\sigma_t^2(x)\right) \;=\; \exp\!\left(-\tfrac{1}{2}au^2\sigma_t^2(x)\right)\exp\!\left(-\tfrac{1}{2}(1-a)u^2\sigma_t^2(x)\right) \;=\; \exp\!\left(-\tfrac{1}{2}u^2\sigma_t^2(x)\right),$$

as $N_x\to\infty$ and $N_y\to\infty$; therefore the desired conclusion (4) holds.

Lemma 2.1 implies that

$$\hat h_t(x) \;=\; h_t(x)+\sigma_{1,t}(x)\,\varepsilon_t+r_t, \tag{13}$$

where $\sigma_{1,t}(x)$ is given by (2), $(\varepsilon_t)$ is a standard Gaussian white noise and the remainder term $r_t$ is such that

$$r_t \;=\; O_P\bigl(\{(N_x+N_y)/N_x N_y\}^{1/2}\bigr). \tag{14}$$

Let $D[0,1]$ be the space of random functions that are right-continuous and have left limits, endowed with the Skorohod topology. The weak convergence of a sequence of random elements in $D[0,1]$ will be denoted by $\Longrightarrow$. Let

$$W_n(\tau) \;=\; \frac{1}{\sqrt n}\sum_{t=1}^{[n\tau]}\sigma_{1,t}(x)\,\varepsilon_t, \qquad \tau\in[0,1]. \tag{15}$$
###### Lemma 2.2

Under Assumptions 1-3 we have

$$W_n \;\Longrightarrow\; \bar\sigma_2(x)\,W, \tag{16}$$

where $W$ stands for the standard Brownian motion.

### Proof

Assumption 2 implies that

$$\sigma_{1,t}^2(x) \;\le\; \frac{1}{\gamma_1^2}\,\frac{N_x+N_y}{N_x N_y} \;\le\; C,$$

for some positive constant $C$ and $N_x$, $N_y$ large enough. Hence $(\sigma_{1,t}(x))_{1\le t\le n}$ is a bounded deterministic sequence; therefore the weak convergence (16) follows from Theorem A.1 of [5].

###### Lemma 2.3

Under the null hypothesis $H_0$, as $n\to\infty$, $N_x\to\infty$ and $N_y\to\infty$ we have

$$\hat\sigma_n^2=\frac{1}{n}\sum_{t=1}^{n}\bigl(\hat h_t(x)-\bar h(x)\bigr)^2 \;\xrightarrow{d}\; \bar\sigma_2^2(x). \tag{17}$$

### Proof

Under the null hypothesis $H_0$ ($h_t=h$ for all $t$), the equality (13) becomes

$$\hat h_t(x) \;=\; h(x)+\sigma_{1,t}(x)\,\varepsilon_t+r_t.$$

Let $y_t = h(x)+\sigma_{1,t}(x)\,\varepsilon_t$; then by using the same argument as in Theorem 1 of [5] we obtain

$$\frac{1}{n}\sum_{t=1}^{n}(y_t-\bar y)^2 \;\xrightarrow{d}\; \bar\sigma_2^2(x). \tag{18}$$

We have

$$\frac{1}{n}\sum_{t=1}^{n}\bigl(\hat h_t(x)-\bar h(x)\bigr)^2 \;=\;\frac{1}{n}\sum_{t=1}^{n}(y_t-\bar y)^2 \;+\; \frac{1}{n}\sum_{t=1}^{n}(r_t-\bar r)^2+\frac{2}{n}\sum_{t=1}^{n}(y_t-\bar y)(r_t-\bar r), \tag{19}$$

where $\bar y=\frac{1}{n}\sum_{t=1}^{n}y_t$ and $\bar r=\frac{1}{n}\sum_{t=1}^{n}r_t$. From (14) it follows that

$$\bar r \;=\; O_P\bigl(((N_x+N_y)/N_x N_y)^{1/2}\bigr) \;=\; o_P(1), \quad \text{as } N_x\to\infty,\ N_y\to\infty,$$

which implies that

$$\frac{1}{n}\sum_{t=1}^{n}(r_t-\bar r)^2=o_P(1), \quad \text{as } N_x\to\infty,\ N_y\to\infty. \tag{20}$$

By the Cauchy-Schwarz inequality, we have

$$\frac{1}{n}\sum_{t=1}^{n}(y_t-\bar y)(r_t-\bar r) \;\le\; \left(\frac{1}{n}\sum_{t=1}^{n}(y_t-\bar y)^2\right)^{1/2}\left(\frac{1}{n}\sum_{t=1}^{n}(r_t-\bar r)^2\right)^{1/2}.$$

Hence by using (18) and (20) we get

$$\frac{1}{n}\sum_{t=1}^{n}(y_t-\bar y)(r_t-\bar r)=o_P(1), \quad \text{as } N_x\to\infty,\ N_y\to\infty. \tag{21}$$

The desired conclusion (17) holds by combining (18)-(21).

### Proof of Theorem 2.1

Under the null, the process in (1) can be rewritten as

$$B_n(\tau,x) \;=\; \frac{1}{\sqrt n\,\hat\sigma_n}\left(\sum_{t=1}^{[n\tau]}\sigma_{1,t}(x)\,\varepsilon_t-\frac{[n\tau]}{n}\sum_{t=1}^{n}\sigma_{1,t}(x)\,\varepsilon_t\right)+R_n(\tau,x) \;=\; \frac{1}{\hat\sigma_n}\left(W_n(\tau)-\frac{[n\tau]}{n}W_n(1)\right)+R_n(\tau,x),$$

where the remainder term is given by

$$R_n(\tau,x) \;=\; \frac{1}{\sqrt n\,\hat\sigma_n}\left(\sum_{t=1}^{[n\tau]}r_t-\frac{[n\tau]}{n}\sum_{t=1}^{n}r_t\right).$$

Now observe that

$$\sum_{t=1}^{[n\tau]}r_t \;=\; O_P\bigl([n\tau]\,((N_x+N_y)/N_x N_y)^{1/2}\bigr),$$

which together with (17) implies that

$$R_n(\tau,x) \;=\; O_P\bigl(\{n(N_x+N_y)/N_x N_y\}^{1/2}\bigr) \;=\; o_P(1) \quad \text{under Assumption 4.}$$

Hence

$$B_n(\tau,x) \;=\; \frac{1}{\hat\sigma_n}\left(W_n(\tau)-\frac{[n\tau]}{n}W_n(1)\right)+o_P(1),$$

which combined with (16) and (17) yields

$$B_n(\cdot,x) \;\Longrightarrow\; B,$$

where $B$ is a Brownian bridge. Therefore

$$\sup_{0\le\tau\le 1}|B_n(\tau,x)| \;\xrightarrow{d}\; \sup_{0\le\tau\le 1}|B(\tau)|. \tag{22}$$

Let $E$ be the space of square integrable functions endowed with the uniform norm $\|\cdot\|_\infty$. For a given square integrable function $w$, the functional $G_w : E\to\mathbb{R}$ defined by

$$G_w(g) \;=\; \int_{\mathbb{R}} w(x)\,g(x)\,dx$$

is continuous. To obtain the convergence (3) it is then sufficient to apply (22) and the continuous mapping theorem.

## 3 Empirical study

For simplicity we consider $N_x=N_y=N$. Data are generated from three models. First, $X_t$ is Gaussian, and $Y_t$ is generated independently from another Gaussian process through the transformation $h_t$. Second, $X_t$ is an autoregressive process of order 1 (AR1) with correlation coefficient equal to 0.5, and $Y_t$ is generated independently from another AR1 process through the transformation $h_t$. For the last model the random variables are paired: the $X_{i,t}$ are independent Gaussian variables, and $Y_{i,t}=h_t^{-1}(X_{i,t})$; that is, the time transformation is applied to the random variables themselves. It is clear that this implies the same transformation for the corresponding distributions.
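As an illustration of the first (Gaussian) model, the following sketch generates the two panels; the particular transformation $h_t(y)=y+\mu+\delta\,t/n$ and all parameter values are our choices for illustration, since the paper's exact simulation parameters are not reproduced here ($\delta=0$ gives the null of a constant transformation, $\delta>0$ a smooth mean change):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gaussian_panels(n=20, N=50, mu=1.0, drift=0.0):
    """Two independent Gaussian panels of shape (N, n) linked by
    h_t(y) = y + mu + drift * t / n (hypothetical illustrative model)."""
    shift = mu + drift * np.arange(n) / n        # mean shift applied by h_t
    X = shift + rng.standard_normal((N, n))      # X_{i,t} ~ N(shift[t], 1)
    Y = rng.standard_normal((N, n))              # Y_{i,t} ~ N(0, 1), so h_t(Y_t) =d X_t
    return X, Y
```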

### Alternatives.

The following five alternatives are considered:

First alternative: A1
Change in the mean. .

Second alternative: A2
Change in the variance. .

Third alternative: A3
Jump. ,
where if and 0 otherwise.

Fourth alternative: A4
Smooth change in the mean.

Fifth alternative: A5
Smooth change in the mean.

All alternatives are smooth, and less rough than a classical rupture in the mean or in the variance, except A3, which coincides with a jump in the mean. The first two alternatives A1-A2 tend quickly to the null model when the time length increases. Figure 1 illustrates the proximity of $h_t$ to a constant for large time lengths in the case of alternative A1. In contrast, alternatives A4-A5 are very smooth and converge slowly to the null model. Figure 2 illustrates this smooth convergence under alternative A4.

### Bootstrap procedure.

To evaluate the power of our testing procedure we first consider a Monte Carlo statistic. Given points $x_1,\dots,x_M$ in $\mathbb{R}$ we consider

$$S_M(w)=\frac{1}{M}\sum_{i=1}^{M}w(x_i)\,A(x_i), \tag{23}$$

where

$$A(x_i) \;=\; \max_{1\le k\le n}\left|\frac{1}{\hat\sigma_n(x_i)\sqrt n}\left(\sum_{t=1}^{k}\hat h_t(x_i)-k\,\bar h(x_i)\right)\right|,$$

with

$$\hat\sigma_n^2(x)=\frac{1}{n}\sum_{t=1}^{n}\bigl(\hat h_t(x)-\bar h(x)\bigr)^2, \qquad \bar h(x)=\frac{1}{n}\sum_{t=1}^{n}\hat h_t(x).$$

The convergence of the statistic is not guaranteed since the $A(x_i)$ are dependent. To circumvent this problem, a bootstrap procedure is proposed. We construct a naive bootstrap statistic; that is, the test statistic given in (23) is compared to the empirical bootstrap distribution obtained from bootstrap replicates of $S_M(w)$, each computed from a bootstrapped sample drawn randomly with replacement from the data and of the same sizes $N_x$ and $N_y$. We fix the weight function $w$ as a constant. Note that if $X$ and $Y$ are paired, the bootstrap procedure consists in drawing randomly with replacement pairs from the data. A fixed number of bootstrap replications is used.
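The naive bootstrap described above can be sketched as follows; `stat` stands for the Monte Carlo statistic $S_M(w)$ computed from the two panels, and the function name and rejection rule are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pvalue(X, Y, stat, B=200, paired=False):
    """Naive bootstrap p-value for a statistic stat(X, Y) such as S_M(w).

    X : array of shape (N_x, n), rows = subjects, columns = times
    Y : array of shape (N_y, n)
    """
    s_obs = stat(X, Y)
    count = 0
    for _ in range(B):
        if paired:
            # resample subject pairs jointly, preserving the dependence
            idx = rng.integers(0, X.shape[0], X.shape[0])
            Xb, Yb = X[idx], Y[idx]
        else:
            # resample each panel independently, keeping N_x and N_y fixed
            Xb = X[rng.integers(0, X.shape[0], X.shape[0])]
            Yb = Y[rng.integers(0, Y.shape[0], Y.shape[0])]
        if stat(Xb, Yb) >= s_obs:
            count += 1
    return (count + 1) / (B + 1)
```

The null is rejected when the returned p-value falls below the chosen level.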

### Powers.

For each alternative, the test statistic is computed for a given theoretical level. Several lengths of the time interval are considered; that is, the function $h_t$ is observed at each integer time $t$ of the interval, with a step equal to one. The empirical power of the test is defined as the percentage of rejections of the null hypothesis over the replications of the test statistic under the alternative.

Figure 3 presents empirical powers of the bootstrap test for all alternatives, in the case where the $Y_{i,t}$ are independent standard Gaussian variables. Solid and dotted lines distinguish the two settings considered. It can be observed that the power decreases with the time length for alternatives A1 and A2. This is in accordance with the previous remark: $h_t$ is close to the null hypothesis for relatively large values of $t$. Then passing from a time length equal to 20 to a time length equal to 200 corresponds to adding variables with a nearly constant transformation in distribution (see Figure 1).

Alternatives A4-A5 show similar behavior, with a power increasing with the time length. This can be explained by the very slow convergence to the null model: here, passing from a time length equal to 20 to a time length equal to 200 corresponds to adding new observations with a time-dependent transformation (see Figure 2).

It is also observed that the power associated with alternative A3 increases with the time length.

In Figure 4, empirical powers are presented in the case where $Y$ follows an AR1 process with a correlation coefficient equal to 0.5. Here powers are slightly better and more stable with respect to the time length. This is due to the correlation, which induces more stability of the process and permits a better estimation of $h_t$.

Figure 5 presents results in the case of paired data, with the $X_{i,t}$ normally distributed. Powers are good, due to the fact that the transformation does not occur randomly, since the paired variables are linked deterministically. Then $h_t$ can be efficiently estimated and its variations are well detected.

## 4 Concluding remarks

The proposed method concerns the comparison of two processes when panel data are available. The test permits detection of a change in the relation between the two process distributions. It can therefore detect a change in higher moments, not only in the mean and/or the variance as most tests in this framework do. The asymptotic distribution of the proposed statistic was derived under the null of no change in the relation between the two process distributions.

The Monte Carlo simulations show that our test performs well in finite samples and has good power against both abrupt and smooth changes. It is also valid for paired processes, and can thus be used to detect a change in the relation between them (see the paired case in our simulations). The test can also be used as a first step legitimating the estimation and interpretation of a constant transformation between two panel data sets, for instance in a medical follow-up study.

A direction for future research is to consider a $k$-sample comparison of distributions, for $k>2$, in the spirit of [3, 2]. Another direction would be to consider multivariate distributions.

### References

1. Balakrishnan, N., Zhao, X. (2010). A nonparametric test for the equality of counting processes with panel count data. Computational Statistics & Data Analysis 54, 135–142.
2. Balakrishnan, N., Zhao, X. (2009). New multi-sample nonparametric tests for panel count data. The Annals of Statistics 37, 1112–1149.
3. Balakrishnan, N., Zhao, X. (2008). A class of multi-sample nonparametric tests for panel count data. Ann. Inst. Statist. Math. 60, 151–171.
4. Billingsley, P. (1968). Convergence of Probability Measures. Wiley, New York.
5. Boutahar, M. (2009). Testing for change in the mean of heteroskedastic time series. http://fr.arxiv.org/abs/1102.5431.
6. Davidson, J. (1994). Stochastic Limit Theory. Oxford University Press, Oxford.
7. Galeano, P., Peña, D. (2007). Covariance changes detection in multivariate time series. Journal of Statistical Planning and Inference 137, 194–211.
8. Gombay, E. (2008). Change detection in autoregressive time series. J. Multivariate Anal. 99, 451–464.
9. Gupta, A.K., Tang, J. (1984). Distribution of likelihood ratio statistic for testing equality of covariance matrices of multivariate Gaussian models. Biometrika 71, 555–559.
10. Hawkins, D.M. (1977). Testing a sequence of observations for a shift in location. J. Amer. Statist. Assoc. 72, 180–186.
11. James, B., James, K., Siegmund, D. (1987). Tests for a change point. Biometrika 74, 71–83.
12. Panaretos, V.M., Kraus, D., Maddocks, J.H. (2009). Second-order comparison of Gaussian random functions and the geometry of DNA minicircles. Journal of the American Statistical Association 490, 670–682.
13. Sen, A., Srivastava, M.S. (1975). On tests for detecting change in mean. Ann. Statist. 3, 98–108.
14. Srivastava, M.S., Worsley, K.J. (1986). Likelihood ratio tests for a change in the multivariate normal mean. J. Amer. Statist. Assoc. 81, 199–204.
15. Worsley, K.J. (1979). On the likelihood ratio test for a shift in location of normal populations. J. Amer. Statist. Assoc. 74, 365–367.