
Autoregressive functions estimation in nonlinear bifurcating autoregressive models

S. Valère Bitseki Penda and Adélaïde Olivier

S. Valère Bitseki Penda, IMB, CNRS-UMR 5584, Université Bourgogne Franche-Comté, 9 avenue Alain Savary, 21078 Dijon Cedex, France. simeon-valere.bitseki-penda@u-bourgogne.fr

Adélaïde Olivier, CEREMADE, CNRS-UMR 7534, Université Paris-Dauphine, Place du maréchal de Lattre de Tassigny, 75775 Paris Cedex 16, France. adelaide.olivier@ceremade.dauphine.fr
Abstract.

Bifurcating autoregressive processes, which can be seen as an adaptation of autoregressive processes to a binary tree structure, have been extensively studied during the last decade in a parametric context. In this work we do not specify any a priori form for the two autoregressive functions and we use nonparametric techniques. We investigate both the nonasymptotic and the asymptotic behaviour of Nadaraya-Watson type estimators of the autoregressive functions. We build our estimators observing the process on the finite subtree $\mathbb{T}_n$, up to depth $n$. The estimators achieve the classical nonparametric rate in quadratic loss over Hölder classes of smoothness. We prove almost sure convergence and asymptotic normality, giving the bias expression when choosing the optimal bandwidth. Finally, we address the question of asymmetry: we develop an asymptotic test for the equality of the two autoregressive functions, which we implement on both simulated and real data.

Keywords: Bifurcating Markov chains, binary trees, bifurcating autoregressive processes, nonparametric estimation, Nadaraya-Watson estimator, minimax rates of convergence, asymptotic normality, asymmetry test.

Mathematics Subject Classification (2010): 62G05, 62G10, 62G20, 60J80, 60F05, 60J20, 92D25.

1. Introduction

1.1. A generalisation of the bifurcating autoregressive model

BAR process

Roughly speaking, bifurcating autoregressive processes, BAR processes for short, are an adaptation of autoregressive processes when the index set has a binary tree structure. BAR processes were introduced by Cowan and Staudte [16] in 1986 in order to study cell division in Escherichia coli bacteria. For $m \geq 0$, let $\mathbb{G}_m = \{2^m, 2^m + 1, \ldots, 2^{m+1} - 1\}$ (with $\mathbb{G}_0 = \{1\}$) and introduce the infinite genealogical tree

$$\mathbb{T} = \bigcup_{m \geq 0} \mathbb{G}_m.$$

For $u \in \mathbb{G}_m$, set $|u| = m$ and define the concatenations $u0 = 2u$ and $u1 = 2u + 1$. In [16], the original BAR process is defined as follows. A cell $u$ with generation time $X_u$ gives rise to the offspring $(u0, u1)$ with generation times

$$X_{u0} = \alpha + \beta X_u + \varepsilon_{u0}, \qquad X_{u1} = \alpha + \beta X_u + \varepsilon_{u1},$$

where $\alpha$ and $\beta$ are unknown real parameters, with $|\beta| < 1$, the latter measuring heredity in the transmission of the biological feature. The noise sequence $\big((\varepsilon_{u0}, \varepsilon_{u1})\big)_{u \in \mathbb{T}}$ forms a sequence of independent and identically distributed bivariate centered Gaussian random variables and represents environmental effects; the initial value $X_1$ is drawn according to a Gaussian law.

Since then, several extensions of this model have been studied and various estimators for the unknown parameters have been proposed. First, one can mention [27], where Guyon introduces asymmetry to take into account the fact that the autoregressive parameters for type 0 and type 1 cells can differ. Introducing the theory of bifurcating Markov chains, Guyon studies the asymptotic behaviour of the least squares estimators of the unknown parameters. He also introduces asymptotic tests which allow one to decide whether the model is symmetric or not. Several extensions of this linear model have been proposed and studied from a parametric point of view; see for instance Basawa and Huggins [2, 3] and Basawa and Zhou [4, 5], where the BAR process is studied for non-Gaussian noise and long memory. Around 2010, Bercu, Blandin, Delmas, de Saporta, Gégout-Petit and Marsalle extended the study of the BAR process in different directions. Bercu et al. [7] use a martingale approach in order to study least squares estimators of the unknown parameters for processes with memory greater than 1 and without a normality assumption on the noise sequence. More recently, [20, 21] take into account missing data and [6, 14, 22] study the model with random coefficients. Among other extensions, one can cite Delmas and Marsalle [17] for a generalisation to Galton-Watson trees and Bitseki Penda and Djellout [10] for deviations inequalities and moderate deviations.

Nonlinear BAR process

Nonlinear bifurcating autoregressive (NBAR, for short) processes generalize BAR processes, avoiding any a priori linear specification of the two autoregressive functions. Let us introduce precisely an NBAR process, which is specified by 1) a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_m)_{m \geq 0}, \mathbb{P})$, together with a measurable state space $(S, \mathcal{S})$, 2) two measurable functions $f_0, f_1 : S \to S$ and 3) a probability density $G$ on $S \times S$ with a null first order moment. In this setting we have the following

Definition 1.

An NBAR process is a family $(X_u)_{u \in \mathbb{T}}$ of random variables with value in $S$ such that, for every $u \in \mathbb{T}$, $X_u$ is $\mathcal{F}_{|u|}$-measurable and

$$(X_{u0}, X_{u1}) = \big(f_0(X_u) + \varepsilon_{u0},\, f_1(X_u) + \varepsilon_{u1}\big),$$

where $\big((\varepsilon_{u0}, \varepsilon_{u1})\big)_{u \in \mathbb{T}}$ is a sequence of independent bivariate random variables with common density $G$.

The distribution of the process is thus entirely determined by the autoregressive functions $(f_0, f_1)$, the noise density $G$ and an initial distribution for $X_1$. Informally, we view $\mathbb{T}$ as a population of individuals, cells or particles whose evolution is governed by the following dynamics: to each $u \in \mathbb{T}$ we associate a trait $X_u$ (its size, lifetime, growth rate, DNA content and so on) with value in $S$. At its time of death, the particle $u$ gives rise to two children $u0$ and $u1$. Conditional on $X_u = x$, the traits $(X_{u0}, X_{u1})$ of the offspring of $u$ are a perturbed version of $(f_0(x), f_1(x))$.
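To fix ideas, here is a minimal simulation sketch (ours, not from the paper), using the standard binary labelling in which node u has children 2u and 2u + 1; the Gaussian noise, the parameter values and the function names are illustrative placeholders.

```python
import numpy as np

def simulate_nbar(n, f0, f1, noise_std=0.1, x_root=0.0, seed=0):
    """Simulate an NBAR process on the binary tree up to generation n.
    X[u] is the trait of individual u; node u has children 2u and 2u + 1.
    Returns an array indexed by u = 1, ..., 2**(n+1) - 1 (index 0 unused)."""
    rng = np.random.default_rng(seed)
    X = np.empty(2 ** (n + 1))
    X[1] = x_root
    for u in range(1, 2 ** n):          # every individual whose children are observed
        eps0, eps1 = rng.normal(0.0, noise_std, size=2)   # centered noise pair
        X[2 * u] = f0(X[u]) + eps0      # type-0 child
        X[2 * u + 1] = f1(X[u]) + eps1  # type-1 child
    return X

# A linear (BAR) special case with mild asymmetry:
X = simulate_nbar(10, f0=lambda x: 0.5 + 0.7 * x, f1=lambda x: 0.4 + 0.6 * x)
```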

The strength of this model lies in the fact that no constraint is imposed on the form of $f_0$ and $f_1$. This is particularly interesting in view of applications for which no a priori knowledge of the autoregressive links is available.

When $X_1$ is distributed according to a measure $\mu$ on $(S, \mathcal{S})$, we denote by $\mathbb{P}_\mu$ the law of the NBAR process and by $\mathbb{E}_\mu$ the expectation with respect to the probability $\mathbb{P}_\mu$.

1.2. Nonparametric estimators of the autoregressive functions

Estimation of the autoregressive functions

Our aim is to estimate the unknown autoregressive functions $f_0$ and $f_1$ of Definition 1 from the observation of a subpopulation. For that purpose, we propose to make use of a Nadaraya-Watson type estimator (introduced independently by Nadaraya [36] and Watson [44] in 1964). The Nadaraya-Watson estimator is a kernel estimator of the regression function $x \mapsto \mathbb{E}[Y \mid Z = x]$ when observing independent pairs of random variables distributed as $(Z, Y)$. The Nadaraya-Watson estimator was also used in the framework of autoregressive time series in order to reconstruct the autoregression function, assuming the series is stationary; see [39, 30]. We generalize here the use of the Nadaraya-Watson estimator to the case of an autoregressive process indexed by a binary tree, i.e. an NBAR process.

For $n \geq 0$, introduce the genealogical tree up to the $n$-th generation, $\mathbb{T}_n = \bigcup_{m=0}^{n} \mathbb{G}_m$. Assume we observe $(X_u)_{u \in \mathbb{T}_n}$, i.e. we have $|\mathbb{T}_n| = 2^{n+1} - 1$ random variables with value in $S$. Let $D \subset S$ be a compact interval. We propose to estimate the autoregressive functions at point $x \in D$ from the observations $(X_u)_{u \in \mathbb{T}_n}$ by

$$\widehat{f}_{\iota, n}(x) = \frac{|\mathbb{T}_{n-1}|^{-1} \sum_{u \in \mathbb{T}_{n-1}} K_{h_n}(x - X_u)\, X_{u\iota}}{\Big(|\mathbb{T}_{n-1}|^{-1} \sum_{u \in \mathbb{T}_{n-1}} K_{h_n}(x - X_u)\Big) \vee \varpi_n}, \qquad \iota \in \{0, 1\}, \tag{1}$$

where $\varpi_n > 0$ is a threshold and we set $K_h(\cdot) = h^{-1} K(h^{-1}\,\cdot)$ for a bandwidth $h > 0$ and a kernel function $K$ such that $\int_{\mathbb{R}} K(x)\, dx = 1$.
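A direct implementation of (1) might look as follows (a sketch under the same labelling as in the simulation above; the Gaussian kernel and the way the threshold enters the denominator are our illustrative choices):

```python
import numpy as np

def nw_estimate(X, x_grid, h, threshold=1e-12):
    """Nadaraya-Watson type estimators of f_0 and f_1 at the points of
    x_grid: kernel-weighted averages of the children's traits, normalised
    by a kernel density estimate of the parents' traits, with the
    denominator floored at `threshold` as in (1)."""
    parents = np.arange(1, len(X) // 2)   # all u whose two children are in X
    Xu = X[parents]
    kernel = lambda v: np.exp(-0.5 * v ** 2) / np.sqrt(2.0 * np.pi)
    f0_hat = np.empty(len(x_grid))
    f1_hat = np.empty(len(x_grid))
    for j, x in enumerate(x_grid):
        w = kernel((x - Xu) / h) / h      # K_h(x - X_u)
        denom = max(w.mean(), threshold)
        f0_hat[j] = np.mean(w * X[2 * parents]) / denom
        f1_hat[j] = np.mean(w * X[2 * parents + 1]) / denom
    return f0_hat, f1_hat
```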

Main theoretical results and outline

Our first objective in this work is to study the estimators (1) from both nonasymptotic and asymptotic points of view. To the best of our knowledge, there is no extensive nonparametric study of nonlinear bifurcating autoregressive processes. We can mention the applications in Bitseki Penda, Escobar-Bach and Guillin [12] (Section 4), where deviations inequalities are derived for Nadaraya-Watson type estimators of the autoregressive functions. We also refer to Bitseki Penda, Hoffmann and Olivier [13], where some characteristics of an NBAR process are estimated nonparametrically (the invariant measure, the mean transition and related transition quantities).

Our nonasymptotic study includes the control of the quadratic loss in a minimax sense (Theorems 5 and 6) and our asymptotic study includes almost sure convergence (Proposition 7) and asymptotic normality (Theorems 8 and 9). To this end, we shall make use of the nonasymptotic behaviour of bifurcating Markov chains (see [27, 11]) and of the asymptotic behaviour of martingales. We are also interested in comparing the two autoregressive functions $f_0$ and $f_1$, and in testing whether the phenomenon studied is symmetric or not. The test we build to do so (Theorem 11) relies on our asymptotic results (see Corollary 10).

The present work is organised as follows. Our results are obtained under the assumption of geometric ergodicity of the so-called tagged-branch Markov chain, which we define in Section 2 together with the nonasymptotic results. In Section 3, we state asymptotic results for our estimators, which enable us to address the question of asymmetry and to build a test comparing $f_0$ and $f_1$. We also give numerical results to illustrate the estimation and test strategies on both simulated and real data (Section 4). Section 5 contains a discussion. The last parts of the article, Section 6 and an appendix in Section 7, are devoted to the proofs of our results.

2. Nonasymptotic behaviour

2.1. Tagged-branch chain

In this section we build a chain corresponding to a lineage taken at random (uniformly at each branching event) in the population, a key object in our study. Let $(\iota_m)_{m \geq 1}$ be a sequence of independent and identically distributed random variables with the Bernoulli distribution of parameter $1/2$. Introduce

$$G_0(\cdot) = \int_S G(\cdot, y)\, dy, \qquad G_1(\cdot) = \int_S G(x, \cdot)\, dx, \tag{2}$$

the marginals of the noise density $G$, and let $(\varepsilon_m)_{m \geq 1}$ be a sequence of independent random variables such that $\varepsilon_m$ given $\{\iota_m = \iota\}$ has density $G_\iota$, for $\iota \in \{0, 1\}$. In addition, $(\iota_m)_{m \geq 1}$ and $(\varepsilon_m)_{m \geq 1}$ are independent of $X_1$. We now set

$$Y_0 = X_1, \qquad Y_{m+1} = f_{\iota_{m+1}}(Y_m) + \varepsilon_{m+1}, \quad m \geq 0. \tag{3}$$

Then the so-called tagged-branch chain $(Y_m)_{m \geq 0}$ has transition

$$Q(x, dy) = \tfrac{1}{2}\big(G_0(y - f_0(x)) + G_1(y - f_1(x))\big)\, dy, \tag{4}$$

which means that for all $m \geq 0$, the law of $Y_m$ given $Y_0 = x$ is $Q^m(x, \cdot)$, where we write $Q^m$ for the $m$-th iteration of $Q$.

Asymptotic and nonasymptotic studies have shown that the limit law of this Markov chain plays an important role; we refer to Bitseki Penda, Djellout and Guillin [11] and references therein for more details (in the general setting of bifurcating Markov chains). In the present work, the tagged-branch Markov chain will play a crucial role in the analysis of the autoregressive function estimators: more precisely, we will see that the denominator of (1) converges almost surely to the invariant density of the Markov chain $(Y_m)_{m \geq 0}$ (Proposition 16).
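A sketch of the tagged-branch chain makes the construction concrete (again ours; for simplicity the two noise marginals are taken Gaussian with a common standard deviation, whereas in general the noise given the branch choice has density $G_0$ or $G_1$):

```python
import numpy as np

def tagged_branch(n_steps, f0, f1, noise_std=0.1, y0=0.0, seed=0):
    """Simulate the tagged-branch chain (3): follow one lineage down the
    tree, choosing the type-0 or type-1 branch with probability 1/2 at
    each division and applying the corresponding autoregression."""
    rng = np.random.default_rng(seed)
    Y = np.empty(n_steps + 1)
    Y[0] = y0
    for m in range(n_steps):
        f = f0 if rng.random() < 0.5 else f1   # Bernoulli(1/2) branch choice
        Y[m + 1] = f(Y[m]) + rng.normal(0.0, noise_std)
    return Y
```

Under geometric ergodicity, a histogram of a long trajectory approximates the invariant density to which the denominator of (1) converges.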

2.2. Model constraints

The autoregressive functions $f_0$ and $f_1$ are assumed to belong to the following class. For and , we introduce the class of continuous functions such that

for any .

The two marginals $G_0$ and $G_1$ of the noise density are assumed to belong to the following class. For and , we introduce the class of nonnegative continuous functions such that

for any . When for some , we denote the covariance matrix of the noise, called the noise covariance matrix, by

(5)

with and .

It is crucial for our proofs to study the ergodicity of the tagged-branch Markov chain $(Y_m)_{m \geq 0}$. Geometric ergodicity of nonlinear autoregressive processes has been studied in Bhattacharya and Lee [8] (Theorem 1) and also in An and Huang [1] and Cline [15]. The main difference is that we need here a precise control of the ergodicity rate, which should be smaller than $1/2$ due to the binary tree structure (the number of individuals doubles at each generation). We also see, through (3), that the autoregressive function applied at each step is random in our case.

The following crucial assumption will guarantee geometric ergodicity of the tagged-branch Markov chain with an exponential decay rate smaller than $1/2$ (see Lemma 13). For any , we set

(6)
Assumption 2.

Set where has density . There exists such that

and there exists such that

(7)

The following assumption will guarantee that the invariant density is positive on some nonempty interval (see Lemma 17). For any , we set

where, for a function , stands for .

Assumption 3.

For such that , there exists such that .

2.3. Main results

We need the following property on the kernel $K$:

Assumption 4.

The kernel $K$ is bounded with compact support and, for some integer $n_0 \geq 1$, we have $\int_{\mathbb{R}} x^k K(x)\, dx = 0$ for all $1 \leq k \leq n_0$.

Assumption 4 will enable us to obtain nice approximation results over smooth functions $f_0$ and $f_1$, described in the following way: for $s > 0$ and a compact interval $D$, write $s = \lfloor s \rfloor + \{s\}$ with $\lfloor s \rfloor$ an integer and $\{s\} \in (0, 1]$, and let $\mathcal{H}^s$ denote the Hölder space of functions $f : D \to \mathbb{R}$ possessing a derivative of order $\lfloor s \rfloor$ that satisfies

$$\big|f^{(\lfloor s \rfloor)}(y) - f^{(\lfloor s \rfloor)}(x)\big| \leq c(f)\, |x - y|^{\{s\}}. \tag{8}$$

The minimal constant $c(f)$ such that (8) holds defines a semi-norm $|f|_{\mathcal{H}^s}$. We equip the space $\mathcal{H}^s$ with the norm $\|f\|_{\mathcal{H}^s} = \sup_{x \in D} |f(x)| + |f|_{\mathcal{H}^s}$ and the balls $\mathcal{H}^s(L) = \{f : \|f\|_{\mathcal{H}^s} \leq L\}$, $L > 0$.

The notation $\propto$ means proportional to, up to some positive constant independent of $n$, and the notation $\lesssim$ means bounded above, up to some constant independent of $n$.

Theorem 5 (Upper rate of convergence).

Let and , let and . Specify with a kernel satisfying Assumption 4 for some , with

and such that as . For every and , for every such that satisfy Assumptions 2 and 3, there exists such that for every compact interval with nonempty interior and for every in the interior of ,

where the supremum is taken among all functions , for any initial probability measure on for such that .

Some comments are in order:

1) The noise density should be at least as smooth as the autoregressive functions $f_0$ and $f_1$ in order to obtain the rate. Recall now that we build Nadaraya-Watson type estimators: the estimator $\widehat{f}_\iota$ is the quotient of an estimator of $f_\iota \nu$ and an estimator of the invariant density $\nu$. Thus the rate of convergence depends not only on the regularity of $f_\iota$ but also on the regularity of $\nu$. Note that the noise density determines the regularity of the mean transition (see (4)) and thus the regularity of the invariant measure $\nu$. A more general result would establish the rate associated with the minimal regularity between $f_\iota$ and $\nu$ (and thus between $f_\iota$ and the noise density).

2) The autoregressive functions $f_0$ and $f_1$ should be locally smooth, in a vicinity of the estimation point (and not necessarily globally smooth). Note that we could also state an upper bound uniform over the interval.

3) Up to a multiplicative factor, we obtain the classical rate $N^{-s/(2s+1)}$, where $N$ is the number of observed couples. This rate is known to be optimal in a minimax sense in a density estimation framework, and we can infer it is optimal in our framework too. Proving this is the purpose of Theorem 6 below.

4) We do not achieve adaptivity in the smoothness of the autoregressive functions since our choice of bandwidth still depends on the smoothness parameter. For classical (nonbifurcating) autoregressive models, adaptivity is achieved in a general framework in the early work of Hoffmann [31]; we also refer to Delouille and van Sachs [18].

5) For the sake of simplicity, we have picked a common bandwidth to define the two estimators, but one can immediately generalize our study to two different bandwidths, each driven by the Hölder smoothness of the corresponding $f_\iota$.

We complete the previous theorem with

Theorem 6 (Lower rate of convergence).

Assume the noise density is a bivariate Gaussian density. Let be a compact interval. For every and every positive , there exists such that, for every ,

where the supremum is taken among all functions and the infimum is taken among all estimators based on .

This result obviously implies a lower rate of convergence for the mean quadratic loss at point . We see that in a regular case, the Gaussian case, the lower and upper rates match.

3. Asymmetry test

Testing in the context of nonparametric regression is a crucial point, especially in applied contexts. The question of no effect in nonparametric regression was addressed early on by Eubank and LaRiccia [26]. One may also want to compare two regression curves nonparametrically; we refer to Munk [35] and references therein. Specific tools have been developed to compare time series (see for instance the recent work by Jin [32], among many others). The test we propose to study asymmetry in an NBAR process is based on the following asymptotic study.

3.1. Preliminaries: asymptotic behaviour

The almost sure convergence of the autoregressive function estimators is obtained in Proposition 7 for any bandwidth with . Choosing , the estimator recentered by and normalised by converges in distribution to a Gaussian law. Depending on the bandwidth choice, the limiting Gaussian law is centered or not, as we state in Theorems 8 and 9.

Proposition 7 (Almost sure convergence).

In the same setting as in Theorem 5 with for ,

as .

From now on we need to reinforce the assumption on the noise sequence: we require that the noise has finite moment of order , , which is guaranteed by for . We use the notation .

Theorem 8 (Asymptotic normality).

In the same setting as in Theorem 5 with and for ,

being the noise covariance matrix and . Moreover, for distinct points in , the sequence

is asymptotically independent.

The restriction in Theorem 8 prevents us from choosing , which is the optimal choice to achieve the minimax rate, as we have seen in Theorem 5. The following theorem remedies this flaw, but at the cost of an unknown bias. We obtain an explicit expression of this bias for integer smoothness, depending on the higher-order derivatives of the autoregressive functions and the invariant measure of the tagged-branch chain.

Theorem 9 (Asymptotic normality with bias expression).

In the same setting as in Theorem 5 with and an integer,

  • If with as , then

    and

  • If as , then

3.2. Construction of an asymmetry test

Let be distinct points in . We are going to build a statistical test that allows us to discriminate between the hypotheses

In the parametric studies on E. coli [27, 17], such tests are known as detection of cellular aging: they make it possible to decide whether cell division is symmetric or asymmetric.

Construction of an asymptotically unbiased estimator

Inspired by Bierens [9], we define new estimators in order to both achieve the rate in the asymptotic normality property and remove the asymptotic bias. Let be the estimators (1) with bandwidth and be the estimators (1) with bandwidth for some . We define

(9)
Corollary 10.

In the same setting as in Theorem 9,

for every in the definition (9) of .

As announced, the trick of Bierens [9] enables us to remove the unknown asymptotic bias while keeping the optimal rate of convergence.
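The exact weights used in definition (9) are not preserved in this extract, but the generalized-jackknife form of the trick can be sketched as follows: two estimates computed with bandwidths $h$ and $\rho h$ carry leading biases proportional to $h^s$ and $(\rho h)^s$, so a suitable linear combination cancels them ($\rho$ and the smoothness $s$ are taken as known here).

```python
def bierens_correction(f_hat_h, f_hat_rho_h, rho, s):
    """Bias-corrected combination of two kernel estimates computed with
    bandwidths h and rho * h (0 < rho < 1).  If both estimates have leading
    bias C * h**s up to the factor rho**s, it cancels in this combination
    while the sqrt(N * h) convergence rate is preserved."""
    return (f_hat_rho_h - rho ** s * f_hat_h) / (1.0 - rho ** s)
```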

Test statistic

We define a test statistic based on the new estimators by

(10)

with where .

Theorem 11 (Wald test for asymmetry).

In the same setting as in Theorem 9, let be distinct points in . Then the test statistic converges in distribution to the chi-squared distribution with $k$ degrees of freedom under the null hypothesis, and almost surely to $+\infty$ under the alternative.

Note that in (10) we could replace , and by

(11)

with the empirical residuals for . We claim that these estimators are consistent, so that Theorem 11 remains valid for this new test statistic. Proving the convergence in probability of these three quantities would involve some technical calculations and we do not give more details here.
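The precise normalization of (10) is likewise elided above; the sketch below shows a plausible plug-in form consistent with the asymptotic normality of Section 3.1, where sigma2_sum (the sum of the two noise variances), k_norm2 (the squared L2-norm of the kernel) and nu_hat (the estimated invariant density at the grid points) are our names for the plug-in quantities.

```python
import numpy as np
from scipy.stats import chi2

def asymmetry_test(f0_tilde, f1_tilde, nu_hat, n_pairs, h, sigma2_sum,
                   k_norm2, level=0.05):
    """Wald-type test of H0: f_0 = f_1 at k grid points.  Under H0 each
    standardized difference is asymptotically N(0, 1), independently
    across points, so the sum of squares is asymptotically chi-squared
    with k degrees of freedom."""
    k = len(f0_tilde)
    T = n_pairs * h * np.sum(nu_hat * (f0_tilde - f1_tilde) ** 2) \
        / (sigma2_sum * k_norm2)
    return T, T > chi2.ppf(1.0 - level, df=k)
```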

4. Numerical implementation

4.1. Simulated data

The goal of this section is to illustrate the theoretical results of the previous sections, in particular the results of Theorem 5 (Upper rate of convergence) and Theorem 11 (Wald test for asymmetry).

Quality of the estimation procedure.

We pick trial autoregressive functions defined analytically by

for . We take a Gaussian noise with and . We simulate, times, an NBAR process up to generation , with root . We take a Gaussian kernel and in order to implement the estimators given by (1). We evaluate and on a regular grid of with mesh . We did not meet any problem with the denominator in practice and actually set . For each sample we compute the empirical error

where denotes the discrete norm over the numerical sampling. Table 1 displays the mean empirical errors together with the empirical standard deviations, for increasing sample sizes. The larger the sample, the better the reconstruction of $f_0$ and $f_1$, as shown in Table 1.

N              511      1 023    2 047    4 095    8 191    16 383   32 767
error (f_0)    0.4442   0.3417   0.2633   0.2006   0.1517   0.1285   0.0891
sd.            0.1509   0.1063   0.0761   0.0558   0.0387   0.0295   0.0209
error (f_1)    0.6696   0.5141   0.4006   0.3027   0.2356   0.1776   0.1384
sd.            0.2482   0.1626   0.1227   0.0831   0.0622   0.0440   0.0326

Table 1. [Simulated data] Mean empirical relative error (rows "error") and its standard deviation (rows "sd."), computed over Monte-Carlo trees with respect to the sample size N, for the autoregressive functions $f_0$ and $f_1$ reconstructed over a compact interval by the estimators $\widehat{f}_0$ and $\widehat{f}_1$.

This is also true at a visual level, as shown in Figure 1, where confidence bands are built so that for each point of the grid, the lower and upper bounds include the corresponding proportion of the estimators. As one can see in Figure 1, the reconstruction is good in the central zone and deteriorates for large or small values of the trait. Indeed, the invariant measure estimator shows that its mass is concentrated around this zone, and thus more observations lie there, which ensures a better reconstruction. The same analysis holds for the reconstruction of $f_1$; see the thin blue lines.

Figure 1. [Simulated data] Reconstruction of over with -level confidence bands constructed over Monte Carlo trees. In bold red line: . In thin blue lines: reconstruction of with -level confidence bands. Left: generations. Right: generations.

The error is close to the expected rate for both $\widehat{f}_0$ and $\widehat{f}_1$: indeed, for a kernel of order , the bias term in density estimation is of order . For the smooth $f_0$ and $f_1$ we have here, we rather expect the rate attached to the order of the Gaussian kernel that we use, and this is consistent with what we observe in Figure 2.
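As a quick sanity check, one can regress the log-errors of Table 1 on the log sample sizes; the fitted slope gives the empirical convergence exponent, to be compared with the theoretical one:

```python
import numpy as np

# Sample sizes and mean empirical errors for the type-0 estimator (Table 1).
N = np.array([511, 1023, 2047, 4095, 8191, 16383, 32767])
err = np.array([0.4442, 0.3417, 0.2633, 0.2006, 0.1517, 0.1285, 0.0891])
slope, _ = np.polyfit(np.log(N), np.log(err), 1)
print(f"empirical exponent: {slope:.3f}")   # about -0.38 on these figures
```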

Figure 2. [Simulated data] The log-average relative empirical error over Monte Carlo trees vs. for (resp. ) reconstructed over by (solid blue line) (resp. (dashed blue line)) compared to the expected log-rate (solid red line).

Implementation of the asymmetry test.

We now implement the estimators (9), inspired by [9], in order to compute our test statistic (10). We keep a Gaussian kernel and we pick and (the choice of proves to have no influence on our numerical results). The numerical study of and leads to results similar to those of the previous study. For a given grid of points, we reject the null hypothesis if exceeds the critical value at the 5% level of the chi-squared distribution with the corresponding number of degrees of freedom, and we thus obtain a test with asymptotic level 5%. We measure the quality of our test procedure by computing the proportion of rejections of the null.
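The rejection proportions just described can be reproduced in principle by a loop of the following shape, chaining the sketches above (all names and parameters are placeholders):

```python
import numpy as np

def rejection_proportion(n_trees, n_gen, f0, f1, run_test):
    """Proportion of rejections of H0 over n_trees simulated NBAR trees;
    `run_test` maps a simulated tree to the boolean reject flag returned
    by asymmetry_test.  This estimates the power when f0 != f1 and the
    effective level when f0 = f1."""
    rejects = [run_test(simulate_nbar(n_gen, f0, f1, seed=s))
               for s in range(n_trees)]
    return float(np.mean(rejects))
```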

We first implement the following two cases:

The test should reject the null in the second case but not in the first one. The larger the sample, the better the test performs, as one can see in Table 2: the null is rejected more and more often in the asymmetric case and less and less often in the symmetric case as the number of generations increases, which is what we expect. We also observe the influence of the number of points of the grid used to build the test statistic. Three grids are tested, with up to 61 points. The larger the number of points, the larger the proportion of rejections of the null in both cases. However, for the symmetric case, more generations than considered here are needed to reach the theoretical asymptotic level of 5%. The choice of the number of points is delicate, but we would recommend using a small number of points to build the test.

n                        8       9       10      11      12      13      14
N                        511     1 023   2 047   4 095   8 191   16 383  32 767

Asymmetric case:
  smallest grid          46.8%   67.2%   87.6%   99.0%   100%    100%    100%
  intermediate grid      59.6%   77.8%   92.8%   99.8%   100%    100%    100%
  largest grid (61 pts)  67.8%   85.4%   95.6%   99.8%   100%    100%    100%

Symmetric case:
  smallest grid          19.6%   18.6%   18.2%   16.2%   13.4%   14.8%   12.4%
  intermediate grid      30.4%   30.0%   29.0%   24.8%   21.4%   19.4%   19.8%
  largest grid (61 pts)  42.6%   42.6%   40.4%   39.8%   35%     31.6%   32.2%

Table 2. [Simulated data] Proportions of rejections of the null hypothesis for 5% asymptotic level tests over Monte-Carlo trees. The test is based on the test statistic (10) with three grids of increasing size (the largest with 61 points). Asymmetric case: the proportions (power of the test) should be high. Symmetric case: the proportions (type I error) should be low.
Figure 3. [Simulated data] Proportions of rejections of the null hypothesis (power of the test) with respect to the interpolation parameter, for 5% asymptotic level tests over Monte-Carlo trees. The test is based on the test statistic (10) with the grid for (i.e. points). Left: generations. Right: generations.

The second numerical test aims at studying empirically the power of our test. We keep the same autoregressive function for cells of type 0 and parametrize the autoregressive function for cells of type 1 so that it progressively comes closer to the type-0 function:

for . This choice enables us to interpolate between the asymmetric case and the symmetric case.

As the type-1 function becomes closer to the type-0 function, i.e. as we approach the symmetric case, we see the proportion of rejections of the null decrease in Figure 3. The steeper the decrease, the better our test performs. The proportion of rejections of the null exceeds 40% only for values of the interpolation parameter up to 0.1875 for a reasonable number of observations (left panel of Figure 3). The right panel of Figure 3 shows the results for a larger number of observations: the performance is good for parameter values up to a larger threshold, i.e. closer to the equality case.

4.2. Real data

Quoting Stewart et al. [40], "The bacterium E. coli grows in the form of a rod, which reproduces by dividing in the middle. (...) one of the ends of each cell has just been created during division (termed the new pole), and one is pre-existing from a previous division (termed the old pole)." At each division, the cell inheriting the old pole of the progenitor cell is of type 1, say, while the cell inheriting the new pole is of type 0. The individual feature we focus on is the growth rate (E. coli is an exponentially growing cell). Stewart et al. [40] followed the growth of a large number of micro-colonies of E. coli, measuring the growth rate of each cell up to 9 generations (possibly with some missing data).

Recently, concerning the data set of Stewart et al., Delyon et al. [19] found that "There is no stationarity of the growth rate across generations. This means that the initial stress of the experiment has not the time to vanish during only the first 9 generations." As a consequence, we should not aggregate the data from the different micro-colonies and we shall only work on the largest samples. The largest genealogical tree counts 655 cells for which we know both the type and the growth rate, together with the type and the growth rate of the progenitor. The autoregressive function estimators are represented in Figure 4(a). (We keep using a Gaussian kernel; the bandwidths are picked as in the simulation study, up to a constant fixed using the rule of Silverman. We underline that we do not observe the full tree, and we compute our estimators accordingly. Point-wise confidence intervals of asymptotic level 95% built using Corollary 10 overlap the curves and are far too optimistic.) It is surprising that the relationship between the growth rates of the mother and its offspring may be decreasing. In this case, our nonparametric estimated curves are close to the linear estimated curves (computed using the estimators of Guyon [27]). We show a second example in Figure 4(b) where the relationship may not be linear.

Previously, Guyon et al. [28] and de Saporta et al. [21, 23] carried out asymmetry tests, and our conclusions seem to coincide with theirs. We implement our test statistic (10) for the largest tree, using 10 equidistant points of the zone where 80% of the growth rates lie, and using the covariance matrix estimator (11). For the largest tree of 655 cells, our test strongly rejects the null hypothesis (p-value ). In the same way, the null hypothesis is strongly rejected for the 10 largest trees available (from 443 to 655 cells). Thus, we may conclude that there is asymmetry in the transmission of the growth rate from one cell to its offspring. Admittedly, our test does not take into account the influence of missing data, and the level of our test for small trees is poor (recall Table 2, symmetric case). Thus one should take this conclusion with caution.

Figure 4. [Real data from Stewart et al. [40]] Points $(X_u, X_{u0})$ and $(X_u, X_{u1})$ for the observed cells. Bold green curve (resp. thin green line): reconstruction of $f_0$ with our estimator (resp. with a linear estimator). Bold red curve (resp. thin red line): reconstruction of $f_1$ with our estimator (resp. with a linear estimator).

5. Discussion

Recursive estimators

We could estimate