A new approach to optimal designs for correlated observations

A new approach to optimal designs for correlated observations

Holger Dette, Maria Konstantinou
Ruhr-Universität Bochum
Fakultät für Mathematik
44780 Bochum
Germany
   Anatoly Zhigljavsky
School of Mathematics
Cardiff University
Cardiff, CF24 4AG
UK
Abstract

This paper presents a new and efficient method for the construction of optimal designs for regression models with dependent error processes. In contrast to most of the work in this field, which starts with a model for a finite number of observations and considers the asymptotic properties of estimators and designs as the sample size converges to infinity, our approach is based on a continuous time model. We use results from stochastic analysis to identify the best linear unbiased estimator (BLUE) in this model. Based on the BLUE, we construct an efficient linear estimator and corresponding optimal designs in the model for finite sample size by minimizing the mean squared error between the optimal solution in the continuous time model and its discrete approximation with respect to the weights (of the linear estimator) and the optimal design points, in particular in the multi-parameter case.
In contrast to previous work on the subject the resulting estimators and corresponding optimal designs are very efficient and easy to implement. This means that they are practically not distinguishable from the weighted least squares estimator and the corresponding optimal designs, which have to be found numerically by non-convex discrete optimization. The advantages of the new approach are illustrated in several numerical examples.

Keywords and Phrases: linear regression, correlated observations, optimal design, Gaussian white mouse model, Doob representation, quadrature formulas

AMS Subject classification: Primary 62K05; Secondary: 62M05

1 Introduction

The construction of optimal designs for dependent observations is a very challenging problem in statistics, because - in contrast to the independent case - the dependency yields non-convex optimization problems. As a consequence, classical tools of convex optimization theory as described, for example, in Pukelsheim (2006) are not applicable. Most of the discussion is restricted to very simple models and we refer to Dette et al. (2008); Kiselak and Stehlík (2008); Harman and Štulajter (2010) for some exact optimal designs for linear regression models. Several authors have proposed to determine optimal designs using asymptotic arguments [see, for example, Sacks and Ylvisaker (1966, 1968), Bickel and Herzberg (1979), Näther (1985a), Zhigljavsky et al. (2010)], but the resulting approximate optimal design problems are still non-convex and extremely difficult to solve. As a consequence, approximate optimal designs have mainly been determined analytically for the location model (in this case the corresponding optimization problems are in fact convex) and for a few one-parameter linear models [see Boltze and Näther (1982), Näther (1985a), Ch. 4, Näther (1985b), Pázman and Müller (2001) and Müller and Pázman (2003) among others].
Recently, substantial progress has been made in the construction of optimal designs for regression models with a dependent error process. Dette et al. (2013) determined (asymptotic) optimal designs for least squares estimation, under the additional assumption that the regression functions are eigenfunctions of an integral operator associated with the covariance kernel of the error process. Although this approach is able to deal with the multi-parameter case, the class of models for which approximate optimal designs can be determined explicitly is still rather small, because it refers to specific kernels with corresponding eigenfunctions. For this reason Dette et al. (2015) proposed a different strategy to obtain optimal designs and efficient estimators. Instead of constructing an optimal design for a particular estimator (such as least squares or weighted least squares), these authors proposed to consider the problem of optimizing the estimator and the design of experiment simultaneously. They constructed a class of estimators and corresponding optimal designs with a variance converging (as the sample size increases) to the optimal variance in the continuous time model. In other words, asymptotically these estimators achieve the same precision as the best linear unbiased estimator computed from the whole trajectory of the process. While this approach yields a satisfactory solution for one-dimensional parametric models using signed least squares estimators, it is not transparent and in many cases not efficient in the multi-parameter model. In particular, it is based on matrix-weighted linear estimators and corresponding designs which are difficult to implement in practice and do not yield the same high efficiencies as in the one-dimensional case.
In this paper we present an alternative approach for the construction of estimators and corresponding optimal designs for regression models with dependent error processes, which has important advantages compared to the currently used methodology. First - in contrast to all other methods - the estimators with corresponding optimal designs proposed here are very easy to implement. Secondly, it is demonstrated that the new estimator and design yield a method which is practically not distinguishable from the best linear estimator (BLUE) with corresponding optimal design. Third, in many cases the new estimator and a uniform design are already very efficient.
Compared to most of the work in this field, which begins with a model for a finite number of observations and considers the asymptotic properties of estimators as the sample size converges to infinity, an essential difference of our approach is that it is directly based on the continuous time model. In Section 2 we derive the best linear unbiased estimate in this model using results about the absolute continuity of measures on the space . This yields a representation of the best linear estimator as a stochastic integral and provides an efficient tool for constructing estimators with corresponding optimal designs for finite samples which are practically not distinguishable from the optimal (weighted least squares) estimator and corresponding optimal design. We emphasize again that the latter design has to be determined by discrete non-convex optimization. To be more precise, in Section 3 we propose a weighted mean, say (here denotes the response at the point and is the sample size), where the weights (which are vectors in case of models with more than one parameter) and design points are determined by minimizing the mean squared error between the optimal solution in the continuous time model (represented by a stochastic integral with respect to the underlying process) and its discrete approximation with respect to the weights (of the linear estimator) and the optimal design points. In Section 4 we discuss several examples and demonstrate the superiority of the new approach to the method which was recently proposed in Dette et al. (2015), in particular for multi-parameter models. Some more details on best linear unbiased estimation in the continuous time model are given in Section 5, where we discuss degenerate cases, which appear - for example - by a constant term in the regression function. For a more transparent presentation of the ideas some technical details are additionally deferred to the Appendix.
We finally note that this paper is a first approach which uses results from stochastic analysis in the context of optimal design theory. The combination of these two fields yields a practically implementable and satisfactory solution of optimal design problems for a broad class of regression models with dependent observations.

2 Optimal estimation in continuous time models

Consider a linear regression model of the form

(2.1)

where is a Gaussian process, denotes the covariance between observations at the points and (), is a vector of unknown parameters, is a vector of continuously differentiable linearly independent functions, and the explanatory variables vary in a compact interval, say . If denotes the vector of observations the weighted least squares estimator of is defined by

where is the design matrix and is the matrix of variances/covariances. It is well known that is the BLUE in model (2.1). The corresponding minimal variance is given by

(2.2)

and an optimal design for the estimation of the parameter in model (2.1) minimizes an appropriate real-valued functional of this matrix. As pointed out before, the direct minimization of this type of criterion is an extremely challenging non-convex discrete optimization problem and explicit solutions are not available in nearly all cases of practical interest. For this reason many authors propose to consider asymptotic optimal designs as the sample size converges to infinity [see Sacks and Ylvisaker (1966, 1968), Bickel and Herzberg (1979), Näther (1985a), Zhigljavsky et al. (2010)].
In the following discussion we consider - parallel to model (2.1) - its continuous time version, that is

(2.3)

where the full trajectory of the process can be observed and is a centered Gaussian process with continuous covariance kernel , i.e. . We will focus on triangular kernels, which are of the form

(2.4)

for ), where and are some functions defined on the interval . An alternative representation of is given by

where . We assume that the process is non-degenerate on the open interval , which implies that the function is positive on the interval and strictly increasing and continuous on , see Mehr and McFadden (1965) for more details. Consequently, the functions and must have the same sign and can be assumed to be positive on the interval without loss of generality. Note that the majority of covariance kernels considered in the literature belong to this class, see, for example, Näther (1985a); Zhigljavsky et al. (2010) or Harman and Štulajter (2011). The simple triangular kernel

is obtained for the choice and and corresponds to the Brownian motion. As pointed out in Dette et al. (2015), the solutions of the optimal design problems with respect to different triangular kernels are closely related. In particular, if a best linear unbiased estimator (BLUE) for a particular triangular kernel has to be found for the continuous time model, it can be obtained by simple nonlinear transformation from the BLUE in a different continuous time model (on a possibly different interval) with a Brownian motion as error process (see Remark 2.1(b) below for more details). For this reason we will concentrate on the covariance kernel of the Brownian motion throughout this section. Our first result provides the optimal estimator in the continuous time model (2.3), where the error process is given by a Brownian motion on the interval , where (the case will be discussed in Section 5). We begin with a lemma which is crucial for the definition of the estimator. The proof can be found in the Appendix.

Lemma 2.1

Consider the continuous time linear regression model (2.3) on the interval , , with a continuously differentiable vector of regression functions and a Brownian motion as error process. Then the matrix

(2.5)

is non-singular.

Theorem 2.1

Consider the continuous time linear regression model (2.3) on the interval , , with a continuously differentiable vector of regression functions and a Brownian motion as error process. The best linear unbiased estimate is given by

(2.6)

Moreover, the minimum variance is given by

(2.7)

Proof of Theorem 2.1. Note that the continuous time model (2.3) can be written as a Gaussian white noise model

where the function is defined as

Let and denote the measure on associated with the process and , respectively. From Theorem 1 in Appendix II of Ibragimov and Has’minskii (1981) it follows that is absolute continuous with respect to with Radon-Nikodym derivative given by

The maximum likelihood estimator can be determined by solving the equation

The solution coincides with the linear estimate (2.6), and a straightforward calculation, using Ito’s formula and the fact that the random variables and are independent, gives

where the matrix is defined in (2.5). It has been shown in Dette et al. (2015) that this matrix is the variance/covariance matrix of the BLUE in the continuous time model, which proves Theorem 2.1.

Remark 2.1

  • Dette et al. (2015) determined the best linear estimator for the continuous time linear regression model (2.3) with a twice continuously differentiable vector of regression functions and Brownian motion as

    (2.8)

    Using integration by parts gives

    and it is easily seen that the expression (2.8) coincides with (2.6). This means that a BLUE in the continuous time model (2.3) is even available under the weaker assumption of a once continuously differentiable function .

  • The best linear estimator in the continuous time model (2.3) with a general triangular kernel of the form (2.4) can easily be obtained from Appendix in Dette et al. (2015). To be precise, consider a triangular kernel of the form (2.4), define

    and consider the stochastic process

    where is a Brownian motion on the interval and , . It follows from Doob (1949) that is a centered Gaussian process on the interval with covariance kernel (2.4). Moreover, if we consider the continuous time model

    (2.9)

    and use the transformations

    (2.10)

    then it follows from Dette et al. (2015) that the BLUE for the continuous time model (2.3) (with a general triangular covariance kernel) can be obtained from the BLUE in model (2.9) by the transformation . Therefore an application of Theorem 2.1 gives for the best linear estimator in the continuous time model (2.3) with triangular covariance kernel of the form (2.4) the representation

    where the matrix is given by

  • Using integration by parts it follows (provided that the functions , , and are twice continuously differentiable) that the BLUE in the continuous time model (2.3) can be represented as

    where is a vector of signed measures defined by , denotes the Dirac measure at the point and the “masses” , and the density are given by

    respectively. Now, if denotes an unbiased linear estimate in model (2.1) with vectors , we can represent this estimator as

    in the continuous time model (2.3), where is a discrete signed vector valued measure with “masses” at the points . Consequently, we obtain from Theorem 2.1 that

    (in the Loewner ordering). In other words, is a lower bound for any linear estimator in the linear regression model (2.1).

3 Optimal estimators and designs for finite sample size

We have determined the BLUE and corresponding minimal variance/covariance matrix in the continuous time model (2.3). In the present section we now explain how the particular representation of the BLUE as a stochastic integral can be used to derive efficient estimators and corresponding optimal designs in the original model (2.1), which are practically not distinguishable from the BLUE in model (2.1) based on an optimal design. Our approach is based on a comparison of the mean squared error of the difference between the best linear unbiased estimator derived in Theorem 2.1 and a discrete approximation of the stochastic integral in (2.6). For the sake of a clear representation, we discuss the one-dimensional case first.

3.1 One-parameter models

Consider the estimator defined by (2.6) for the continuous time model (2.3) with and define an estimator in the original regression model by an approximation of the stochastic integral, that is

(3.1)

Here are design points in the interval and are corresponding (not necessarily positive) weights. Obviously, the estimator depends on the weights only through the quantities and therefore we use the notation

(3.2)

in the following discussion. We will determine optimal weights and design points minimizing the mean squared error between the estimators and . Our first result provides an explicit expression for this quantity. The proof is omitted because we prove a more general result later in the multi-parameter case (see Section A.3).

Lemma 3.1

Consider the continuous time model (2.3) in the one-dimensional case. If the assumptions of Theorem 2.1 are satisfied, then

(3.3)

In order to find “good”weights for the linear estimator in (3.1) we propose to consider only estimators with weights such that the second term in (3.3) vanishes, that is

(3.4)

It is easy to see that this condition is equivalent to the property that the estimator in (3.1) is also unbiased, that is , or equivalently

(3.5)

The following result describes the weights minimizing under the constraint (3.4).

Lemma 3.2

Consider the continuous time model (2.3) in the one-dimensional case. If the assumptions of Theorem 2.1 are satisfied, then the optimal weights minimizing in the class of all unbiased linear estimators of the form (3.1) are given by

(3.6)

where

Proof of Lemma 3.2. Under the condition (3.4) the mean squared error simplifies to

Using Lagrangian multiplies to minimize this expression subject to the constraint (3.5) yields

where denotes the Lagrangian multiplier. Substituting this into (3.4) gives

Therefore, the optimal weights are given by (3.6). .


Inserting these weights in the mean squared error gives the function

which finally has to be minimized by the choice of the design points . Because we discuss the one-parameter case in this section and the matrix does not depend on , this optimization corresponds to the minimization of

(3.7)
Remark 3.1

Let

denote the efficiency of an estimator defined by (3.1) with optimal weights. Note that from the proof of Lemma 3.2 it follows that the function is non-negative for all . Consequently, minimizing with respect to the design points means that have to be determined such that

approximates the integral most precisely (this produces an efficiency close to ). Now, if is sufficiently smooth, we have for any

for all , where

This gives

As the function has the representation

it follows that (note that the expression on the right-hand side is increasing with )

(3.8)

where

This shows that for most models a substantial improvement of the approximation by the choice of can only be achieved if the sample size is small. For moderate or large sample sizes one could use the points , which gives already the estimate

(note that we consider worst case scenarios to obtain these estimates). Consequently, in many cases the design points can be chosen in an equidistant way, because the choice of the points is irrelevant from a practical point of view, provided that the weights of the estimator are already chosen in an optimal way.

Example 3.1

Consider the quadratic regression model , where . Then , and the function in (3.7) reduces to

It follows by a straightforward computation that the optimal points are given by

(3.9)

while the corresponding minimal value is

Note that this term is of order . Remark 3.1 gives the bound

which shows that (3.8) is not necessarily sharp. For the efficiency we obtain

which is of order . On the other hand, if the function is given by

and optimal points have to be found numerically. However, we can evaluate the efficiency of the uniform design in (3.9), which is given by

and also of order . Thus, although the uniform design is not optimal, its efficiency (with respect to the continuous case) is extremely high.

3.2 Multi-parameter models

In this section we derive corresponding results for the multi-parameter case. If we propose a linear estimator with matrix weights as an analogue of (3.1), that is

where is given in (2.7), are matrices and are -dimensional vectors, which have to be chosen in a reasonable way. For this purpose we derive a representation of the mean squared error between the best linear estimate in the continuous time model and its discrete approximation in the multi-parameter case first. The proof can be found in Appendix A.3.

Lemma 3.3

Consider the continuous time model (2.3). If the assumptions of Theorem 2.1 are satisfied, then

(3.11)

In the following we choose optimal vectors (or equivalently matrices ) and design points , such that the linear estimate (3.2) is unbiased and the mean squared error matrix in (3.11) “becomes small”. An alternative criterion is to replace the mean squared error by the mean squared error

between the estimate defined in (3.2) and the “true” vector of parameters. The following result shows that both optimization problems will yield the same solution in the class of all unbiased estimators. The proof can be found in Appendix A.4.

Theorem 3.1

The estimator defined in (3.1) is unbiased if and only if the identity

(3.12)

is satisfied. Moreover, for any linear unbiased estimator of the form we have


In order to describe a solution in terms of optimal “weights” and design points we recall that the condition of unbiasedness of the estimate in (3.2) is given by (3.12) and introduce the notation

(3.13)

It follows from Lemma 3.3 that for an unbiased estimate