Non-separable Dynamic Nearest-Neighbor Gaussian Process Models for Large spatio-temporal Data With an Application to Particulate Matter Analysis

Abstract

Particulate matter (PM) is a class of harmful environmental pollutants known to be detrimental to human health. Regulatory efforts aimed at curbing PM levels in different countries often require high-resolution space-time maps that can identify red-flag regions exceeding statutory concentration limits. Continuous spatio-temporal Gaussian Process (GP) models can deliver maps depicting predicted PM levels and quantify predictive uncertainty. However, GP-based approaches are usually thwarted by computational challenges posed by large datasets. We construct a novel class of scalable Dynamic Nearest-Neighbor Gaussian Process (DNNGP) models that can provide a sparse approximation to any spatio-temporal GP (e.g., with non-separable covariance structures). The DNNGP we develop here can be used as a sparsity-inducing prior for spatio-temporal random effects in any Bayesian hierarchical model to deliver full posterior inference. Storage and memory requirements for a DNNGP model are linear in the size of the dataset, thereby delivering massive scalability without sacrificing inferential richness. Extensive numerical studies reveal that the DNNGP provides substantially superior approximations to the underlying process than low-rank approximations. Finally, we use the DNNGP to analyze a massive air quality dataset to substantially improve predictions of PM levels across Europe in conjunction with the LOTOS-EUROS chemistry transport model (CTM).


Running title: Spatio-temporal NNGP

Keywords: Non-separable spatio-temporal models; Scalable Gaussian process; Nearest neighbors; Bayesian inference; Markov chain Monte Carlo; Environmental pollutants

1 Introduction

Recent years have witnessed considerable growth in statistical modeling of large spatio-temporal datasets; see, for example, the recent books by Gelfand et al. (2010), Cressie and Wikle (2011) and Banerjee, Carlin and Gelfand (2014) and the references therein for a variety of methods and applications. An especially important domain of application for such models is environmental public health, where analysts and researchers seek to map ambient air pollutants measured at monitoring stations and to understand the temporal variation in such maps. When inference is sought at the same scale as the observed data, one popular approach is to model the measurements as a time series of spatial processes. This approach encompasses standard time series models with spatial covariance structures (Pfeifer and Deutsch, 1980a, b; Stoffer, 1986) and dynamic models (Stroud, Müller and Sansó, 2001; Gelfand, Banerjee and Gamerman, 2005) among numerous other alternatives.

On the other hand, when inference is sought at arbitrary scales, possibly finer than the observed data (e.g., interpolation over the entire spatial and temporal domains), one constructs stochastic process models to capture dependence using spatio-temporal covariance functions (see, e.g., Cressie and Huang, 1999; Kyriakidis and Journel, 1999; Gneiting, 2002; Stein, 2005; Allcroft and Glasbey, 2003; Gneiting, Genton and Guttorp, 2007). In modeling ambient air pollution data, it is now customary to meld observed measurements with physical model outputs, where the latter can operate at much finer scales. Inference, therefore, is increasingly being sought at arbitrary resolutions using spatio-temporal process models (see, e.g., Gneiting and Guttorp, 2010). Henceforth, we focus upon this setting.

While the richness and flexibility of spatio-temporal process models are indisputable, their computational feasibility and implementation pose major challenges for large datasets. Model-based inference usually involves the inverse and determinant of an $n \times n$ spatio-temporal covariance matrix, where $n$ is the number of space-time coordinates at which the data have been observed. When this matrix has no exploitable structure, matrix computations typically require $O(n^3)$ floating point operations (flops) and storage in the order of $O(n^2)$, which becomes prohibitive if $n$ is large. Approaches for modeling large covariance matrices in purely spatial settings include low rank models (see, e.g., Higdon, 2001; Kammann and Wand, 2003; Stein, 2007, 2008; Banerjee et al., 2008; Cressie and Johannesson, 2008; Crainiceanu, Diggle and Rowlingson, 2008; Rasmussen and Williams, 2005; Finley, Banerjee and McRoberts, 2009; Katzfuss, 2016), covariance tapering (see, e.g., Furrer, Genton and Nychka, 2006; Kaufman, Schervish and Nychka, 2008; Du, Zhang and Mandrekar, 2009; Shaby and Ruppert, 2012; Bevilacqua et al., 2015), approximations using Gaussian Markov Random Fields (GMRF) (see, e.g., Rue and Held, 2005), products of lower dimensional conditional densities (Datta et al., 2016a; Vecchia, 1988, 1992; Stein, Chi and Welty, 2004), and composite likelihoods (e.g., Eidsvik et al., 2014). Extensions to spatio-temporal settings include Cressie, Shi and Kang (2010), Finley, Banerjee and Gelfand (2012) and Katzfuss and Cressie (2012), who extend low-rank spatial processes to dynamic spatio-temporal settings, while Xu, Liang and Genton (2014) opt for a GMRF approach. All these methods use dynamic models defined on fixed temporal lags and do not lend themselves easily to continuous spatio-temporal domains.

Spatio-temporal process models for continuous space-time modeling of large datasets have received relatively scant attention. Bai, Song and Raghunathan (2012) and Bevilacqua et al. (2012) used composite likelihoods for parameter estimation in a continuous space-time setup. Both these approaches, like their spatial analogues, have focused upon constructing computationally attractive likelihood approximations and have restricted inference to parameter estimation only. Uncertainty estimates are usually based on asymptotic results, which are often inappropriate for irregularly observed datasets. Moreover, prediction at arbitrary locations and time points proceeds by imputing estimates into an interpolator derived from a different process model. This remains expensive for large $n$ and may not reflect predictive uncertainty accurately.

Our current work offers a highly scalable spatio-temporal process for continuous space-time modeling. We expand upon the neighbor-based conditioning set approaches outlined in purely spatial contexts by Vecchia (1988), Stein, Chi and Welty (2004) and Datta et al. (2016a). We derive a scalable version of a spatio-temporal process, which we call the Dynamic Nearest-Neighbor Gaussian Process (DNNGP), using information from smaller sets of neighbors over space and time. This approach offers several benefits. The DNNGP is a well-defined spatio-temporal process whose realizations follow Gaussian distributions with sparse precision matrices. Thus, the DNNGP can act as a sparsity-inducing prior for spatio-temporal random effects in any Bayesian hierarchical model and enables full posterior inference, considerably enhancing its applicability. Moreover, it can be used with any spatio-temporal covariance function, thereby accommodating non-separability. Being a process, importantly, allows the DNNGP to provide inference at arbitrary resolutions and, in particular, enables predictions at new spatial locations and time points in posterior predictive fashion. The DNNGP also delivers a substantially superior approximation to the underlying process than, for example, low-rank approximations do (see, e.g., Stein, 2014, for problems with low-rank approximations). Finally, storage and memory requirements for a DNNGP model are linear in the number of observations, so it efficiently scales up to massive datasets without sacrificing richness and flexibility in modeling and inference.

The remainder of the article is organized as follows. In Section 2 we present the details of a massive environmental pollutants dataset and the need for a full Bayesian analysis. Section 3 elucidates a general framework for building scalable spatio-temporal processes and uses it to construct a sparsity-inducing DNNGP over a spatio-temporal domain. Section 4 describes efficient schemes for fixed as well as adaptive neighbor selection, which are used in the DNNGP. Section 5 details a Bayesian hierarchical model with a DNNGP prior and its implementation using Markov Chain Monte Carlo (MCMC) algorithms. Section 6 illustrates the performance of the DNNGP using simulated datasets. In Section 7 we present a detailed analysis of our environmental pollutants dataset. We conclude the manuscript in Section 8 with a brief review and pointers to future research.

2 PM pollution analysis

Exposure to airborne particulate matter (PM) is known to increase human morbidity and mortality (Brunekreef and Holgate, 2002; Loomis et al., 2013; Hoek et al., 2013). In response to these and other health impact studies, regulatory agencies have introduced policies to monitor and regulate PM concentrations. For example, the European Commission's air quality standards limit PM$_{10}$ (PM $\le 10\,\mu$m in diameter) concentrations to an average of 50 $\mu$g m$^{-3}$ over 24 hours and 40 $\mu$g m$^{-3}$ over a year (European Commission, 2015). Measurements made with standard instruments are considered authoritative, but these observations are sparse, and maps at finer scales are needed for monitoring progress with mitigation strategies and for monitoring compliance. Hence, accurately quantifying uncertainty in predicted PM concentrations is critical.

Substantial work has been aimed at developing regional scale chemistry transport models (CTMs) for use in generating such maps. CTMs, however, have been shown to systematically underestimate observed PM concentrations, due to lack of information and understanding about emissions and formation pathways (Stern et al., 2008). Empirical regression (Brauer et al., 2011) or geostatistical models (Lloyd and Atkinson, 2004) are an alternative to CTMs for predicting continuous surfaces of PM. Empirical models may give accurate results, but are restricted to the conditions under which they are developed (Manders, Schaap and Hoogerbrugge, 2009). Assimilating monitoring station observations and CTM output, with appropriate bias adjustments, has been shown to provide improvements over using either data source alone (van de Kassteele and Stein, 2006; Denby et al., 2008; Candiani et al., 2013; Hamm et al., 2015). In such settings, the CTM output enters as a model covariate and the measured station observations are the response. In addition to delivering more informed and realistic maps, analyses conducted using the models detailed in Section 5 can provide estimates of spatial and temporal dependence not accounted for by the CTM and hence provide insights useful for improving the transport models.

We focus on the development and illustration of continuous space-time process models capable of delivering predictive maps and forecasts for PM and similar pollutants using sparse monitoring networks and CTM output. We coupled observed PM measurements across central Europe with corresponding output from the LOTOS-EUROS (Schaap et al., 2008) CTM. Inferential objectives included (i) delivering continuous maps of PM with associated uncertainty, (ii) producing statistically valid forecast maps given CTM projections, and (iii) developing inference on space and time residual structure, i.e., space and time lags, that can help identify lurking processes missing in the CTM. The study area and dataset are the same as those used by Hamm et al. (2015) and the reader is referred to that paper for more background information. Note that the current paper works with a 2-year time series, whereas Hamm et al. (2015) focused on daily analysis of a limited number of pollution events.

Figure 1: Observed PM ($\mu$g m$^{-3}$) for two example dates: (a) April 3, 2009; (b) April 5, 2009.

2.1 Study area

The study domain comprises mainland European countries with a substantial number of available PM observations. The countries included were Portugal, Spain, Italy, France, Switzerland, Belgium, The Netherlands, Germany, Denmark, Austria, Poland, The Czech Republic, Slovakia and Slovenia. All data were projected to the European Terrestrial Reference System 1989 (ETRS) Lambert Azimuthal Equal-Area (LAEA) projection which gives a coordinate reference system for the whole of Europe.

2.2 Observed measurements

Air quality observations for the study area were drawn from Airbase (the European air quality database). Daily PM concentrations were extracted for January 1, 2008 through December 30, 2009, resulting in a maximum of 730 observations at each monitoring station. Airbase daily values are averaged over the within-day hourly values when at least 18 hourly measurements are available; otherwise no data are provided. Airbase monitors are classified by type of area (rural, urban, suburban) and by type (background, industrial, traffic or unknown). Only rural background monitors were used in our study. This is common for comparing measured observations to coarse resolution CTM simulations (Denby et al., 2008). Monitoring stations above 800 m altitude were also excluded. These tend to be located in areas of variable topography, and the accuracy of the CTM for locations that shift from inside to outside the atmospheric mixing layer is known to be poor. No further quality control was performed on the data. The locations of the 308 stations used in the subsequent analysis are shown in Figure 1 with associated observed and missing PM for two example dates. Of the 224,840 ($= 730 \times 308$) potential observations across the 730-day time series and 308 stations, 41,761 observations were missing due to sensor failure or removal, and post-processing removal by Airbase. These missing values were predicted using the proposed models.

2.3 LOTOS-EUROS CTM data

LOTOS-EUROS (v1.8) is a 3D CTM that simulates air pollution in the lower troposphere. The simulator's geographic projection is longitude-latitude, at a resolution of approximately 25 km $\times$ 25 km. LOTOS-EUROS simulates the evolution of the components of particulate matter separately. Hence, this CTM incorporates the dispersion, formation and removal of sulfate, nitrate, ammonium, sea salt, dust, primary organic and elemental carbon and non-specified primary material, although it does not incorporate secondary organic aerosol. Hendriks et al. (2013) provide a detailed description of LOTOS-EUROS.

The hour-by-hour calculations of European air quality in 2008-2009 were driven by meteorological data from the European Centre for Medium-Range Weather Forecasts (ECMWF). Emissions were taken from the MACC (Monitoring Atmospheric Composition and Climate) emissions database (Pouliot et al., 2012). Boundary conditions were taken from the global MACC service (Flemming et al., 2009). The LOTOS-EUROS hourly model output was averaged to daily mean PM concentrations. LOTOS-EUROS grid cells that were spatially coincident with the Airbase observations were extracted and used as the covariate in the subsequent model.

CTM grid cell values nearest to station locations were used for subsequent model development. No attempt was made to match the spatial support (resolution) of the CTM simulations and station observations. The support of the CTM is 25 km, but the support of the observations is vague. Rural background observations were deliberately chosen because they are distant from urban areas and pollution sources. They are, therefore, considered representative of background, ambient pollution conditions and appropriate for matching with moderate resolution CTM output (Denby et al., 2008; Hamm et al., 2015). This assumption is further supported by empirical studies indicating that PM concentrations are dominated by rural background values even in urban areas (Eeftens et al., 2012).

3 Scalable Dynamic Nearest-Neighbor Gaussian Processes

Let $w(\ell)$ be a zero-centered continuous spatio-temporal process (see, e.g., Gneiting and Guttorp, 2010, for details), where $\ell = (s, t) \in \mathcal{L} = \mathcal{S} \times \mathcal{T}$ is a space-time coordinate with spatial location $s$ and time point $t$, $\mathcal{S} \subseteq \Re^d$ (usually $d = 2$ or $3$) is the spatial region and $\mathcal{T} \subseteq [0, \infty)$ is the time domain. Such processes are specified with a spatio-temporal covariance function $C(\ell_1, \ell_2 \mid \theta)$, where $\theta$ denotes the covariance parameters. For any finite collection $U = \{\ell_1, \ell_2, \ldots, \ell_n\}$ in $\mathcal{L}$, let $w_U = (w(\ell_1), \ldots, w(\ell_n))^{\top}$ be the realizations of the process over $U$. Also, for two finite sets $U_1$ and $U_2$ containing $n_1$ and $n_2$ points in $\mathcal{L}$, respectively, we define the $n_1 \times n_2$ matrix $C_{U_1, U_2}(\theta)$, where the covariances are evaluated using $C(\cdot, \cdot \mid \theta)$. When $U_1$ or $U_2$ contains a single point, $C_{U_1, U_2}(\theta)$ is a row or column vector. A valid spatio-temporal covariance function ensures that $C_{U,U}(\theta)$ is positive definite for any finite set $U$. In particular, for spatio-temporal Gaussian processes, $w_U$ has a multivariate normal distribution and the $(i,j)$th element of $C_{U,U}(\theta)$ is $C(\ell_i, \ell_j \mid \theta)$.

Storage and computations involving $C_{U,U}(\theta)$ can become impractical when $n$ is large relative to the resources available. For full Bayesian inference on a continuous domain, we seek a scalable (in terms of flops and storage) spatio-temporal Gaussian process that will provide an excellent approximation to a full spatio-temporal process with any specified covariance function. We outline a general framework that first uses a fixed set of points in $\mathcal{L}$ to construct a computationally efficient approximation for the random field and then extends the finite dimensional distribution over this set to a process. To ease the notation, we will suppress the explicit dependence of matrices and vectors on $\theta$ whenever the context is clear.

Let $\mathcal{L}^{*} = \{\ell_1^{*}, \ell_2^{*}, \ldots, \ell_k^{*}\}$ be a fixed finite set of points in $\mathcal{L}$. We refer to $\mathcal{L}^{*}$ as a reference set. We construct a spatio-temporal process on $\mathcal{L}$ by first specifying $w^{*} = (w(\ell_1^{*}), \ldots, w(\ell_k^{*}))^{\top} \sim N(0, K)$, where $K$ is any positive definite matrix, and then defining

\[ w(\ell) \;=\; a(\ell)^{\top} w^{*} \;+\; \eta(\ell), \qquad (3.1) \]

where $\eta(\ell)$ is a zero-centered residual Gaussian process independent of $w^{*}$ and such that ${\rm cov}(\eta(\ell_1), \eta(\ell_2)) = 0$ for any two distinct points $\ell_1 \neq \ell_2$ in $\mathcal{L}$.

Observe that $w(\ell)$ in (3.1) is a well defined spatio-temporal Gaussian process on $\mathcal{L}$ for any choice of the $a(\ell)$'s, as long as $K$ is positive definite. For example, $w(\ell)$ is a Gaussian process with covariance function $C(\cdot, \cdot \mid \theta)$ if we set $K = C_{\mathcal{L}^{*}, \mathcal{L}^{*}}$, $a(\ell) = K^{-1} C_{\mathcal{L}^{*}, \ell}$ and ${\rm var}(\eta(\ell)) = C(\ell, \ell) - C_{\ell, \mathcal{L}^{*}} K^{-1} C_{\mathcal{L}^{*}, \ell}$. Now (3.1) represents the 'kriging' equation for a location $\ell$ based on observations over $\mathcal{L}^{*}$ (Cressie and Wikle, 2011). Dimension reduction can be achieved with suitable choices for $\mathcal{L}^{*}$ and $a(\ell)$. Low-rank spatio-temporal processes emerge when we choose $\mathcal{L}^{*}$ to be a smaller set of 'knots' (or 'centers'). Additionally, specifying $\eta(\ell)$ to be a residual process with a diagonal or sparse covariance structure yields a non-degenerate (or bias-adjusted) low rank Gaussian Process (Banerjee et al., 2008; Finley, Banerjee and McRoberts, 2009; Sang and Huang, 2012).

Because of demonstrably impaired inferential performance of low-rank models in purely spatial contexts at scales similar to ours (see, e.g., Stein, 2014; Datta et al., 2016a), we use the framework in (3.1) to construct a class of sparse spatio-temporal process models. To be specific, let the reference set $\mathcal{L}^{*}$ be an enumeration of points in $\mathcal{L}$, so that each $\ell^{*}$ in $\mathcal{L}^{*}$ corresponds to some space-time pair $(s_i, t_j)$. For any $\ell^{*} = (s_i, t_j)$ in $\mathcal{L}^{*}$ we define a history set $H(\ell^{*})$ as the collection of all reference points observed at times before $t_j$ and of all reference points at time $t_j$ whose spatial locations precede $s_i$ in the enumeration. For any location $\ell^{*}$ in $\mathcal{L}^{*}$, let $N(\ell^{*})$ be a subset of the history set $H(\ell^{*})$. Also, for any location $\ell$ outside $\mathcal{L}^{*}$, let $N(\ell)$ denote any finite subset of $\mathcal{L}^{*}$. We refer to the sets $N(\ell)$ as a 'neighbor set' for the location $\ell$ and describe their construction later.

We now turn to our choices for $K$ and $a(\ell)$ in (3.1). Let $w^{*} = w_{\mathcal{L}^{*}}$. We choose $K$ to effectuate a sparse approximation for the joint density of the realizations of the parent process over $\mathcal{L}^{*}$, i.e., $p(w^{*})$. Adapting the ideas underlying likelihood approximations in Vecchia (1988) and Datta et al. (2016a), we specify $K$ to be the matrix such that

\[ \tilde{p}(w^{*}) \;=\; \prod_{i=1}^{k} p\!\left( w(\ell_i^{*}) \,\middle|\, w_{N(\ell_i^{*})} \right). \qquad (3.2) \]

Here, $N(\ell_1^{*})$ is the empty set (hence, so is $w_{N(\ell_1^{*})}$) and $p(w(\ell_1^{*}) \mid w_{N(\ell_1^{*})}) = p(w(\ell_1^{*}))$. The underlying idea behind the approximation in Equation (3.2) is to compress the conditioning sets from $H(\ell_i^{*})$ to $N(\ell_i^{*})$ so that the resulting approximation is a multivariate normal distribution with a sparse precision matrix $K^{-1}$. This implies

\[ \tilde{p}(w^{*}) \;=\; N(w^{*} \mid 0, K), \qquad (3.3) \]

where $K$ is determined by $\theta$ because $K^{-1} = V F^{-1} V^{\top}$, where $F$ is a diagonal matrix with diagonal entries $f_i = {\rm var}(w(\ell_i^{*}) \mid w_{N(\ell_i^{*})})$ and $V$ is the $k \times k$ matrix with entries $v_{ji}$ such that $v_{ii} = 1$ and $v_{ji} = 0$ whenever $\ell_j^{*} \notin N(\ell_i^{*}) \cup \{\ell_i^{*}\}$. The remaining entries in column $i$ of $V$ are specified by setting the subvector $v_{N(\ell_i^{*}), i} = -b_i$, where $C_{N(\ell_i^{*}), N(\ell_i^{*})} b_i = C_{N(\ell_i^{*}), \ell_i^{*}}$. If $m$ denotes the limiting size of the neighbor sets $N(\ell_i^{*})$, then the columns of $V$ are sparse with at most $m + 1$ non-zero elements. Consequently, $K^{-1}$ has $O(km^2)$ non-zero elements (this is the spatio-temporal analogue of the result in Datta et al., 2016a). Hence, the approximation in (3.2) produces a sparsity-inducing proper prior distribution for the spatio-temporal random effects over $\mathcal{L}^{*}$ that closely approximates the realizations from the parent GP.

Turning to the vector of coefficients $a(\ell)$ in (3.1), we extend the idea in (3.3) to any point $\ell$ outside $\mathcal{L}^{*}$ by requiring that $E[w(\ell) \mid w^{*}] = E[w(\ell) \mid w_{N(\ell)}]$. This is achieved by setting $a_i(\ell) = 0$ in (3.1) whenever $\ell_i^{*} \notin N(\ell)$. Hence, if $N(\ell)$ contains $m$ points, then at most $m$ of the elements in the vector $a(\ell)$ can be nonzero. These nonzero entries are determined from the above conditional expectation given $w_{N(\ell)}$. To be precise, if $\tilde{a}(\ell)$ is the vector of these entries, then we solve $C_{N(\ell), N(\ell)}\, \tilde{a}(\ell) = C_{N(\ell), \ell}$ for $\tilde{a}(\ell)$. Also note that ${\rm var}(w(\ell) \mid w_{N(\ell)}) = C(\ell, \ell) - C_{\ell, N(\ell)} C_{N(\ell), N(\ell)}^{-1} C_{N(\ell), \ell}$. Finally, to complete the process specification in (3.1), we specify $\eta(\ell) \sim N(0, f(\ell))$, where $f(\ell) = {\rm var}(w(\ell) \mid w_{N(\ell)})$. The covariance function of the resulting Gaussian Process is given by:

\[
\tilde{C}(\ell_1, \ell_2 \mid \theta) \;=\;
\begin{cases}
K_{ij}, & \ell_1 = \ell_i^{*},\; \ell_2 = \ell_j^{*} \in \mathcal{L}^{*},\\[2pt]
a(\ell_1)^{\top} K_{\cdot j}, & \ell_1 \notin \mathcal{L}^{*},\; \ell_2 = \ell_j^{*} \in \mathcal{L}^{*},\\[2pt]
a(\ell_1)^{\top} K\, a(\ell_2) + \delta_{(\ell_1 = \ell_2)}\, f(\ell_1), & \ell_1, \ell_2 \notin \mathcal{L}^{*},
\end{cases}
\qquad (3.4)
\]

where $K_{ij}$ is element $(i,j)$ and $K_{\cdot j}$ is column $j$ in $K$, and $\delta$ denotes the Kronecker delta.

Owing to the sparsity of $K^{-1}$, the likelihood $N(w^{*} \mid 0, K)$ can be evaluated using $O(km^3)$ flops for any given $\theta$. Substantial computational savings accrue because $m$ is usually very small (also see later sections). Furthermore, as $\eta(\cdot)$ yields a diagonal covariance matrix and $a(\ell)$ has at most $m$ non-zero elements, for any finite set $U$ outside $\mathcal{L}^{*}$, the flop count for computing the density of $w_U$ given $w^{*}$ will be linear in the size of $U$. We have now constructed a scalable spatio-temporal Gaussian Process from a parent spatio-temporal GP using small neighbor sets $N(\ell)$. We denote this Dynamic Nearest Neighbor Gaussian Process (DNNGP) as $GP(0, \tilde{C}(\cdot, \cdot \mid \theta))$, where $\tilde{C}$ denotes the covariance function of this new GP.
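As a concrete sketch of the neighbor-conditional factorization and its per-point $m \times m$ solves, the following pure-Python code (not from the paper; the separable exponential covariance and all function names are illustrative assumptions) evaluates the log-density as a product of univariate kriging conditionals:

```python
import math

def chol(A):
    """Cholesky factor L (lower triangular) of a symmetric positive definite matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def solve_spd(A, b):
    """Solve A x = b for symmetric positive definite A via Cholesky."""
    L, n = chol(A), len(b)
    y = [0.0] * n
    for i in range(n):                      # forward substitution: L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):            # backward substitution: L^T x = y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

def C(p, q):
    """Hypothetical separable exponential covariance on 1-D space x time."""
    return math.exp(-abs(p[0] - q[0]) - abs(p[1] - q[1]))

def dnngp_loglik(points, w, neighbors):
    """Log-density of w under the neighbor-conditional factorization:
    each factor is a univariate normal whose mean and variance come from
    kriging on at most m neighbors, so each term costs only an m x m solve."""
    ll = 0.0
    for i, p in enumerate(points):
        N = neighbors[i]
        if N:
            Cnn = [[C(points[j], points[k]) for k in N] for j in N]
            cni = [C(points[j], p) for j in N]
            b = solve_spd(Cnn, cni)                       # kriging weights
            mu = sum(bj * w[j] for bj, j in zip(b, N))    # conditional mean
            f = C(p, p) - sum(bj * cj for bj, cj in zip(b, cni))
        else:
            mu, f = 0.0, C(p, p)
        ll += -0.5 * (math.log(2 * math.pi * f) + (w[i] - mu) ** 2 / f)
    return ll
```

With full-history conditioning sets the factorization recovers the exact joint Gaussian density by the chain rule; compressing each conditioning set to $m$ neighbors yields the sparse approximation described above.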

4 Constructing Neighbor-Sets

4.1 Simple Neighbor Selection

Spatial correlation functions usually decay with increasing inter-site distance, so the set of nearest neighbors based on inter-site distances represents the locations exhibiting the highest correlation with a given location. This has motivated the use of nearest neighbors to construct these small neighbor sets (Vecchia, 1988; Datta et al., 2016a). On the other hand, spatio-temporal covariances between two points typically depend on the spatial as well as the temporal lag between the points. To be specific, non-separable isotropic spatio-temporal covariance functions can be written as $C(\ell_1, \ell_2 \mid \theta) = C(h, u \mid \theta)$, where $h = \lVert s_1 - s_2 \rVert$ and $u = |t_1 - t_2|$. This often precludes defining any universal distance function $d(\ell_1, \ell_2)$ such that $C(h, u \mid \theta)$ will be monotonic with respect to $d$ for all choices of $\theta$.

In the light of the above discussion, we define "nearest neighbors" in a spatio-temporal domain using the spatio-temporal covariance function itself as a proxy for distance. To elucidate, for any three points $\ell_1$, $\ell_2$ and $\ell_3$, we say that $\ell_2$ is nearer to $\ell_1$ than $\ell_3$ is if $C(\ell_1, \ell_2 \mid \theta) > C(\ell_1, \ell_3 \mid \theta)$. Subsequently, this definition of "distance" is used to find nearest neighbors for any location.
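The covariance-as-distance rule can be sketched in a few lines. The exponential covariance family and the function names below are illustrative assumptions, not the paper's; the point is only that the selected neighbors change with the parameter vector:

```python
import math

def cov(l1, l2, theta):
    """Hypothetical space-time covariance on 1-D space; theta = (phi_s, phi_t)
    are assumed spatial and temporal decay parameters."""
    phi_s, phi_t = theta
    h = abs(l1[0] - l2[0])   # spatial lag
    u = abs(l1[1] - l2[1])   # temporal lag
    return math.exp(-phi_s * h - phi_t * u)

def m_nearest_by_cov(l, candidates, m, theta):
    """Indices of the m candidates with highest covariance with l,
    i.e. the 'nearest neighbors' under the covariance metric."""
    order = sorted(range(len(candidates)),
                   key=lambda j: -cov(l, candidates[j], theta))
    return order[:m]
```

For example, with strong temporal decay a point 5 spatial units away (same time) is "nearer" than a point 1 time unit away (same location), and the ordering reverses when spatial decay dominates, which is exactly why the neighbor sets depend on $\theta$.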

Of course, this choice of nearest neighbors depends on the choice of the covariance function and $\theta$. Since the purpose of the DNNGP is to provide a scalable approximation of the parent GP, we always choose the covariance function used for neighbor selection to be the same as that of the parent GP. However, for every location $\ell$, its neighbor set, denoted by $N_\theta(\ell)$, still depends on $\theta$. This is illustrated in Figures 2(a) and 2(b), which show how neighbor sets can differ drastically based on the choice of $\theta$.

Figure 2: True and simple neighbor sets for a spatio-temporal dataset with a one-dimensional spatial domain. All points below the red horizontal line constitute the history set for the red point. Green points denote the sets of true nearest neighbors under two different choices of $\theta$ (panels (a) and (b)). The blue points in panel (c) denote the simple neighbor set.

In most applications, $\theta$ is unknown, precluding the use of these newly defined neighbor sets to construct the DNNGP. We propose a simple, intuitive method to construct neighbor sets. We choose $m$ to be a perfect square and construct a simple neighbor set of size $m$ using $\sqrt{m}$ spatial nearest neighbors and $\sqrt{m}$ temporal nearest neighbors. Figure 2(c) illustrates the simple neighbor set for the red point. In order to formally define the simple neighbor sets, we denote the observed spatial locations by $\mathcal{S}$ and the ordered time points by $t_1 < t_2 < \cdots$. Furthermore, for any finite set of spatial locations $A$, let $N_{\sqrt{m}}(s, A)$ denote the set of $\sqrt{m}$ nearest neighbors in $A$ for the location $s$. For any point $\ell = (s, t_j)$ we define the simple neighbor sets

\[ N(\ell) \;=\; \left\{ (s', t_{j-r}) \;:\; s' \in N_{\sqrt{m}}(s, \mathcal{S}),\; r = 1, 2, \ldots, \sqrt{m} \right\}. \qquad (4.1) \]

The above construction implies that the neighbor set for any point in $\mathcal{L}^{*}$ consists of the $\sqrt{m}$ spatial nearest neighbors at each of the $\sqrt{m}$ preceding time points. For arbitrary $\ell = (s, t)$ outside $\mathcal{L}^{*}$, $N(\ell)$ is simply defined as the Cartesian product of the $\sqrt{m}$ nearest neighbors for $s$ in $\mathcal{S}$ with the $\sqrt{m}$ nearest neighbors of $t$ among the observed time points.

In many applications, one desirable property of spatio-temporal covariance functions is natural monotonicity, i.e., $C(h, u \mid \theta)$ is decreasing in $h$ for fixed $u$ and decreasing in $u$ for fixed $h$. All Matérn-based space-time separable covariances and many non-separable classes of covariance functions possess this property (Stein, 2013; Omidi and Mohammadzadeh, 2015). If $C$ possesses natural monotonicity, then the simple neighbor set defined in Equation 4.1 is guaranteed to contain at least the $\sqrt{m}$ nearest neighbors of $\ell$ in its history set. Thus, the neighbor sets defined above do not depend on any parameter and, for any value of $\theta$, will contain at least a few nearest neighbors.

4.2 Adaptive Neighbor Selection

The simple neighbor selection scheme described in Section 4.1 does not depend on $\theta$ and is undoubtedly useful for fast implementation of the DNNGP. However, for some values of $\theta$, the simple neighbor sets may contain only a few of the true nearest neighbors. This issue is illustrated in Figure 2, where the simple neighbor set (blue points) contained most of the true nearest neighbors (green points) for one choice of $\theta$ but only a small fraction of them for another. We see that for different choices of the covariance parameters the simple neighbor sets contain different proportions of the true nearest neighbors. The problem is exacerbated in extreme cases with variation only along the spatial or temporal direction. In such cases, the neighbor sets defined in (4.1) will contain only about $\sqrt{m}$ nearest neighbors, the remaining points being essentially uncorrelated with $\ell$.

Ideally, if $\theta$ were known, one could simply evaluate the pairwise correlations between any point $\ell$ in $\mathcal{L}^{*}$ and all points in its history set to obtain $N_\theta(\ell)$, the set of true nearest neighbors. In practice, however, we encounter a computational roadblock: $\theta$ is unknown, and for every new value of $\theta$ in an iterative optimizer or Markov Chain Monte Carlo sampler, we need to redo the search for the neighbor sets within the history sets. As the history sets are typically large, this is computationally challenging. For example, in Figure 2, the history set for the red point is composed of all points below the red horizontal line. So, evaluating the pairwise correlations required for updating the neighbor sets of all points in $\mathcal{L}^{*}$ and all datapoints outside $\mathcal{L}^{*}$ will use $O(n^2)$ flops at each iteration. The reference set is typically chosen to match the scale of the observed dataset to achieve a reasonable approximation of the parent GP by the DNNGP. Hence, for large datasets this updating becomes a deterrent. In fact, Vecchia (1988) and Stein, Chi and Welty (2004) admit that this challenge has inhibited the use of correlation based neighbor sets in a spatial setting. Jones and Zhang (1997) permitted only locations within a small prefixed temporal lag of a given location to be eligible as neighbors. However, this assumption will fail to capture any long term temporal dependence present in the data.

We now provide an algorithm that efficiently updates the neighbor sets after every update of $\theta$. The underlying idea is to restrict the search for the neighbor sets to carefully constructed small subsets $E(\ell)$ of the history sets. These small eligible sets are constructed in such a manner that, despite being much smaller than the history sets, they are guaranteed to contain the true nearest neighbor sets for all choices of the parameter $\theta$. So, for each $\ell$ we can evaluate the pairwise correlations between $\ell$ and only the points in $E(\ell)$ and still find the true set of $m$-nearest neighbors.

Figure 3: Construction of eligible sets for finding nearest neighbor sets of size $m$: (a) the black point is ineligible because the black rectangle contains more than $m$ points; (b) the blue point belongs to the eligible set because the blue rectangle contains fewer than $m$ points; (c) the full eligible set obtained by repeating this check for all points in the history set (below the red line).

Figures 3(a) and 3(b) illustrate how to determine which points belong to the eligible set. Let $h$ and $u$ denote the spatial and temporal lags between the black point and the red point in Figure 3(a). All other points in the black rectangle have spatial lag at most $h$ and temporal lag at most $u$ with the red point. So, if the covariance function possesses natural monotonicity, the black point has the lowest correlation with the red point among all the points in the black rectangle. For the black point to be in the set of nearest neighbors for any $\theta$, all other points in the black rectangle would also have to be included. Since this is not possible, as the black rectangle contains more than $m$ points, the black point is ineligible. By a similar logic, the blue rectangle in Figure 3(b) contains fewer than $m$ points and the blue point is included in the eligible set. Proceeding like this, we can easily determine the entire eligible set (Figure 3(c)) without any knowledge of the parameter $\theta$.
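The rectangle argument above amounts to a dominance check, which can be sketched directly (a brute-force illustration, not the paper's formal construction in the supplement; `lags` and the dominance threshold are assumptions):

```python
def eligible_set(l, history, m, lags):
    """Points in the history set that can possibly be among the m nearest
    neighbors of l under ANY covariance with natural monotonicity.
    lags(a, b) returns the (spatial, temporal) lag pair between a and b.
    A point p is ineligible when at least m other history points have both
    lags no larger than p's, since each such point dominates p in
    correlation for every naturally monotone covariance."""
    out = []
    for p in history:
        hp, up = lags(l, p)
        dominating = sum(
            1 for q in history
            if q is not p and lags(l, q)[0] <= hp and lags(l, q)[1] <= up)
        if dominating < m:
            out.append(p)
    return out
```

By construction the eligible set contains the true $m$ nearest neighbors for every naturally monotone covariance, so the per-iteration neighbor search can be confined to it.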

A formal construction of eligible sets is provided in Section S1 of the supplemental article Datta et al. (2016b). Proposition S1 proves that the eligible sets are guaranteed to contain the neighbor sets $N_\theta(\ell)$ for all choices of $\theta$. This result has substantial consequences because the size of the eligible sets is approximately proportional to $m$. The eligible sets need to be calculated only once, before the MCMC, as they are free of any parameter choices. Subsequently, for every new update of $\theta$ in an MCMC sampler or an iterative solver, one can search for a new set of $m$-nearest neighbors only within the eligible sets and use these as the conditioning sets to construct the DNNGP. We summarize the MCMC steps of the DNNGP with adaptive neighbor selection in Algorithm 1.

Algorithm 1: Adaptive neighbor selection in the dynamic NNGP
1: Compute the eligible sets $E(\ell)$ for all $\ell$ in $\mathcal{L}^{*}$ from Eqn. S1
2: At the current iteration of the MCMC, for the new value of $\theta$:
  (a) Calculate $C(\ell, \ell' \mid \theta)$ for all $\ell'$ in $E(\ell)$
  (b) Define $N_\theta(\ell)$ as the set of $m$ locations $\ell'$ in $E(\ell)$ which maximize $C(\ell, \ell' \mid \theta)$
  (c) Repeat steps (a) and (b) for all $\ell$ in $\mathcal{L}^{*}$
  (d) Update $\theta$ based on the new set of neighbor sets computed in step (c) using the Metropolis step specified in (5.5)
3: Repeat Step 2 for all MCMC iterations

As the size of the eligible sets is approximately proportional to $m$, for every $\ell$ we need to evaluate only $O(m)$ pairwise correlations. So the total computational complexity of the search is now reduced to $O(nm)$ from $O(n^2)$. This is on par with the cost of implementing the remainder of the algorithm. With this adaptive neighbor selection scheme we gain the advantage of selecting the set of $m$-nearest neighbors at every update of $\theta$ while retaining the scalability of the DNNGP. Parallel computing resources, if available, can further reduce computation time, as the neighbor search for each point (Algorithm 1, Step 2) can proceed independently of the others.

5 Bayesian DNNGP model

We consider a spatio-temporal dataset observed at $N_s$ spatial locations and $N_t$ time points. Note that there may not be data for all locations at all time points, i.e., we allow missing data. Let $\{\ell_1, \ell_2, \ldots, \ell_n\}$ be an enumeration of the space-time coordinates, where each $\ell_i$ is an ordered pair $(s_i, t_i)$. Let $y(\ell_i)$ be a univariate response corresponding to $\ell_i$ and let $x(\ell_i)$ be a corresponding $p \times 1$ vector of spatio-temporally referenced predictors. A spatio-temporal regression model relates the response and the predictors as

\[ y(\ell) \;=\; x(\ell)^{\top} \beta \;+\; w(\ell) \;+\; \epsilon(\ell), \qquad (5.1) \]

where $\beta$ denotes the coefficient vector for the predictors, $w(\ell)$ is the spatio-temporally varying intercept and $\epsilon(\ell)$ is the random noise, customarily assumed to consist of independent and identically distributed copies from $N(0, \tau^2)$.

Usually the w(ℓ_i)'s are modeled as realizations of a spatio-temporal GP. To ensure scalability, we will construct a DNNGP from a parent GP with a non-separable spatio-temporal isotropic covariance function introduced by Gneiting (2002),

C(h, u) = σ² / (a|u|^{2α} + 1) · 2^{1−ν}/Γ(ν) · (φ‖h‖ / (a|u|^{2α} + 1)^{κ/2})^ν K_ν(φ‖h‖ / (a|u|^{2α} + 1)^{κ/2}),    (5.2)

where ‖h‖ and |u| refer to the spatial and temporal lags between ℓ_i and ℓ_j, and K_ν denotes the modified Bessel function of the second kind. The spatial covariance function at temporal lag zero corresponds to the Whittle–Matérn class with marginal variance σ², smoothness parameter ν and decay parameter φ. The parameters α and a control smoothness and decay, respectively, for the temporal process, while κ ∈ [0, 1] captures non-separability between space and time.
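For intuition, an exponential member of the Gneiting class (a simplified variant with the Matérn spatial margin replaced by an exponential one) can be evaluated directly. This is an illustrative sketch, not the paper's code; the parameter names `a` (temporal decay), `c` (spatial decay) and `kappa` (interaction) are assumptions:

```python
import numpy as np

def gneiting_exp_cov(h, u, sigma2=1.0, a=1.0, c=1.0, kappa=0.5):
    """Non-separable space-time covariance (exponential member of the
    Gneiting class).  h: spatial lag (Euclidean norm), u: temporal lag.
    kappa in [0, 1] controls space-time interaction; kappa = 0 reduces
    to a separable product of a spatial and a temporal covariance."""
    h = np.abs(np.asarray(h, dtype=float))
    u = np.abs(np.asarray(u, dtype=float))
    denom = a * u + 1.0
    return sigma2 / denom * np.exp(-c * h / denom ** (kappa / 2.0))
```

Note how, for kappa > 0, the effective spatial decay weakens as the temporal lag grows, which is exactly the non-separable behavior a product covariance cannot express.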

A straightforward choice of the reference set is the set of observed space-time points itself. While this set will typically be large, its size does not adversely affect the computations. This choice has been shown to yield excellent approximations to the parent random field (Vecchia, 1988; Stein, Chi and Welty, 2004). Also, while several alternate choices of reference sets (like choosing points over a regular grid) are possible, it is unlikely they will provide any additional computational or inferential benefits; this has been demonstrated in purely spatial contexts by Datta et al. (2016a). Hence, we choose the observed space-time points as the reference set.

A full hierarchical model with a DNNGP prior on w is given by

p(θ) × IG(τ² | a_τ, b_τ) × N(w | 0, C̃_θ) × ∏_i N(y(ℓ_i) | x(ℓ_i)ᵀβ + w(ℓ_i), τ²),    (5.3)

where p(θ) is the prior on the covariance parameters θ, C̃_θ is the DNNGP covariance matrix, IG(τ² | a_τ, b_τ) denotes the Inverse-Gamma density with shape a_τ and rate b_τ, and the product runs over the observed responses. Below we describe an efficient MCMC algorithm using Gibbs and Metropolis steps only to carry out full inference from the posterior in (5.3).

5.1 Gibbs’ sampler steps

Let z_i denote the binary indicator for presence or absence of data at ℓ_i, let y be the vector formed by stacking the observed responses, X the corresponding design matrix, and w* the vector of random effects at the observed points. Under the flat prior, the full conditional distribution of β is N((XᵀX)⁻¹Xᵀ(y − w*), τ²(XᵀX)⁻¹). The full conditional distribution of τ² is Inverse-Gamma with shape a_τ plus half the number of observed responses and rate b_τ + ‖y − Xβ − w*‖²/2.

We update the elements of w sequentially. Owing to the sparse neighbor structure, the full conditional distribution of each w(ℓ_i) depends only on the data observed at ℓ_i, on the neighbor set of ℓ_i, and on the points whose neighbor sets contain ℓ_i. The full conditional distribution for w(ℓ_i) is Gaussian with mean and variance

(5.4)

If no neighbor set contains ℓ_i for some i, then the corresponding terms in (5.4) disappear for that i.
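The conjugate updates for β and τ² described above can be sketched as follows. This is a minimal illustration assuming a flat prior on β, fully observed responses, and hypothetical array names; it is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_beta(X, y, w, tau2):
    """Draw beta | rest ~ N((X'X)^{-1} X'(y - w), tau2 * (X'X)^{-1}),
    the full conditional under a flat prior on beta."""
    XtX = X.T @ X
    mean = np.linalg.solve(XtX, X.T @ (y - w))
    cov = tau2 * np.linalg.inv(XtX)
    return rng.multivariate_normal(mean, cov)

def gibbs_tau2(X, y, w, beta, a_tau, b_tau):
    """Draw tau2 | rest ~ IG(a_tau + n/2, b_tau + ||y - X beta - w||^2 / 2)."""
    resid = y - X @ beta - w
    shape = a_tau + 0.5 * len(y)
    rate = b_tau + 0.5 * resid @ resid
    # an Inverse-Gamma(shape, rate) draw is the reciprocal of a
    # Gamma(shape, scale = 1/rate) draw
    return 1.0 / rng.gamma(shape, 1.0 / rate)
```

Neither update touches the spatio-temporal covariance matrix, which is why the dominant cost of an iteration lies in the sequential w updates and the neighbor search.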

5.2 Metropolis step

We update the covariance parameters θ using a random walk Metropolis step. The full conditional for θ is proportional to

p(θ) × N(w | 0, C̃_θ).    (5.5)

Since none of the above updates involves expensive matrix decompositions, the likelihood can be evaluated very efficiently. The algorithm for updating the parameters of a hierarchical DNNGP model is analogous to the corresponding updates for a purely spatial NNGP model (see Datta et al. (2016a)). The only additional computational burden stems from updating the neighbor sets in the adaptive neighbor selection scheme, but even this can be handled efficiently using the eligible sets (Algorithm 1). Hence, the number of floating point operations per update is linear in the number of points in the reference set.

5.3 Prediction

Once we have computed the posterior samples of the model parameters and the spatio-temporal random effects over the reference set, we can execute, cheaply and efficiently, full posterior predictive inference at unobserved locations and time points. The Gibbs' sampler in Section 5.1 generates full posterior distributions of the w's at all points in the reference set. Let ℓ denote a point in the reference set where the response is unobserved. We already have posterior distributions of w(ℓ) and the parameters, so we can generate posterior samples of y(ℓ) from N(x(ℓ)ᵀβ + w(ℓ), τ²). Turning to prediction at a point ℓ₀ outside the reference set, we construct its neighbor set, as described in Equation S2, for every posterior sample of θ. We generate posterior samples of w(ℓ₀) from its conditional Gaussian distribution given the random effects at its neighbors and, subsequently, draw posterior samples of y(ℓ₀) from N(x(ℓ₀)ᵀβ + w(ℓ₀), τ²).
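The final composition-sampling step (drawing y given posterior draws of β, w and τ²) can be sketched as below; the array names are hypothetical and this assumes the posterior draws of w at the prediction point have already been obtained:

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_y(x0, beta_samples, w0_samples, tau2_samples):
    """Composition sampling: for each posterior draw (beta, w(l0), tau2),
    draw y(l0) ~ N(x0' beta + w(l0), tau2); one predictive draw per sample."""
    means = beta_samples @ x0 + w0_samples
    return rng.normal(means, np.sqrt(tau2_samples))
```

The returned draws form the posterior predictive distribution at the new point, from which medians and credible intervals are read off directly.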

6 Synthetic data analyses

In this section we compare the DNNGP, the full rank GP and the low rank Gaussian Predictive Process (Banerjee et al., 2008). Additional simulation experiments comparing the predictive performance of the DNNGP with the Local Approximation GP (Gramacy and Apley, 2015) are provided in Section S2 of the supplemental article Datta et al. (2016b). We generated observations over a grid within a unit cube domain. An additional 500 observations, used for out-of-sample prediction validation, were also located within the domain. All data were generated using model (5.1), with the design comprising an intercept and a covariate drawn from a standard normal distribution. The spatio-temporal covariance matrix was constructed using an exponential form of the non-separable spatio-temporal covariance function (5.2), viz.,

C(h, u) = σ² / (a|u| + 1) exp(−φ‖h‖ / (a|u| + 1)^{κ/2}),    (6.1)

where |u| and ‖h‖ are the time and space Euclidean norms, respectively. By specifying different values of the decay and interaction parameters in (6.1), we generated three datasets that exhibited different covariance structures. The first column in Table 1 provides the three specifications and Figure 4 shows the corresponding space-time correlation surface realizations. As illustrated in Figure 4, the three datasets exhibit: 1) short spatial range and long temporal range; 2) long spatial and temporal range; and 3) long spatial range and short temporal range.

Figure 4: Space-time correlation surface realizations for (a) Dataset 1, (b) Dataset 2 and (c) Dataset 3, given the true parameter values in Table 1. Correlation contours are provided, with the two outer white lines corresponding to 0.05 and 0.01.

For each dataset, model parameters were estimated using: i) the full Gaussian Process (GP); ii) the DNNGP with simple neighbor set selection (Simple DNNGP) described in Section 4.1; iii) the DNNGP with adaptive neighbor set selection (Adaptive DNNGP) described in Section 4.2; and iv) the bias-corrected Gaussian Predictive Process (GPP) detailed in Banerjee et al. (2008) and Finley, Banerjee and McRoberts (2009). DNNGP models were fit using m = 15, 25 and 36 neighbors, and the Gaussian Predictive Process model used a regularly spaced grid of 512 knots within the domain.

For all models, the intercept and slope regression parameters, β, were assigned flat prior distributions. The variance parameters σ² and τ² were assumed to follow inverse-Gamma prior distributions. The time and space decay parameters received uniform priors whose supports were dataset specific. The prior for the interaction parameter κ matched its theoretical support, κ ∼ U(0, 1).


True value | GP | GPP (knots=512) | Adaptive DNNGP (m=25) | Simple DNNGP (m=25)
Dataset 1
1 0.99 (0.80, 1.12) 1.02 (0.89, 1.16) 0.97 (0.86, 1.11) 0.97 (0.86, 1.11)
5 4.99 (4.97, 5.01) 4.98 (4.94, 5.02) 4.99 (4.97, 5.01) 4.99 (4.97, 5.01)
50 46.46 (38.02, 67.46) 16.93 (11.91, 29.17) 48.37 (37.85, 77.93) 53.18 (35.93, 83.78)
25 25.69 (22.00, 29.49) 22.73 (13.53, 34.20) 24.77 (20.55, 28.71) 25.16 (21.91, 29.52)
0.75 0.83 (0.61, 0.94) 0.78 (0.39, 0.91) 0.80 (0.57, 0.99) 0.75 (0.53, 0.98)
1 1.13 (1.03, 1.24) 0.70 (0.56, 0.92) 1.13 (1.03, 1.24) 1.14 (1.04, 1.25)
0.1 0.09 (0.07, 0.11) 0.95 (0.89, 1.02) 0.09 (0.06, 0.11) 0.09 (0.06, 0.11)
pD 2214.57 225.66 2236.81 2258.39
DIC 3700.68 9644.76 3650.55 3567.45
G 104.60 3008.41 100.06 94.50
P 512.29 3436.52 504.66 493.63
D=G+P 616.90 6444.93 604.72 588.13
RMSPE 0.84 0.95 0.84 0.84
95% CI cover % 95.6 94.6 95.6 96
Dataset 2
1 0.81 (0.48, 1.26) 0.79 (0.26, 1.16) 1.01 (0.57, 1.27) 0.92 (0.58, 1.26)
5 4.98 (4.96, 5.00) 4.99 (4.97, 5.02) 4.98 (4.96, 5.00) 4.98 (4.96, 5.00)
500 352.82 (301.69, 521.64) 583.59 (391.79, 661.36) 480.28 (331.63, 662.12) 410.84 (317.29, 602.21)
2.5 2.52 (1.93, 3.13) 1.67 (1.03, 2.31) 2.91 (2.49, 3.37) 2.59 (1.91, 3.37)
0.5 0.56 (0.44, 0.67) 0.39 (0.26, 0.53) 0.46 (0.36, 0.62) 0.53 (0.42, 0.67)
1 1.01 (0.85, 1.31) 1.14 (0.83, 1.77) 0.94 (0.81, 1.10) 1.03 (0.83, 1.32)
0.1 0.11 (0.09, 0.13) 0.44 (0.41, 0.47) 0.10 (0.08, 0.12) 0.10 (0.08, 0.12)
pD 1913.96 312.76 1999.98 2014.78
DIC 3988.36 7091.84 3866.38 3871.09
G 157.52 1336.94 139.28 137.33
P 576.02 1609.99 550.28 550.36
D=G+P 733.53 2946.94 689.56 687.68
RMSPE 0.53 0.71 0.53 0.53
95% CI cover % 96.4 93 94 93.8
Dataset 3
1 0.94 (0.66, 1.14) 0.55 (0.32, 0.84) 0.93 (0.74, 1.17) 0.87 (0.68, 1.12)
5 4.98 (4.96, 5.00) 4.98 (4.95, 5.02) 4.98 (4.96, 5.00) 4.98 (4.96, 5.00)
2000 1214.02 (1008.23, 2141.16) 1590.77 (1151.78, 2118.63) 1635.46 (1046.76, 2922.59) 1495.94 (1019.16, 2751.17)
2.5 2.38 (1.79, 2.95) 1.36 (0.73, 2.16) 2.25 (1.62, 2.81) 2.27 (1.60, 2.97)
0.95 0.91 (0.72, 0.98) 0.68 (0.40, 0.90) 0.71 (0.46, 0.98) 0.81 (0.58, 0.97)
1 1.03 (0.86, 1.35) 0.91 (0.67, 1.83) 1.09 (0.89, 1.44) 1.07 (0.87, 1.42)
0.1 0.11 (0.09, 0.13) 0.68 (0.62, 0.74) 0.11 (0.09, 0.14) 0.11 (0.09, 0.14)
pD 1990.41 210.11 1982.28 1994.77
DIC 4210.71 8463.33 4214.68 4223.88
G 155.87 2137.55 157.24 155.61
P 610.01 2424.66 611.89 612.98
D=G+P 765.89 4562.21 769.13 768.59
RMSPE 0.78 0.92 0.77 0.77
95% CI cover % 92.8 91.4 95.6 95.6
CPU (min) 7646.96 856.54 496.12 430.88
Table 1: Synthetic data analysis parameter estimates and computing time for the candidate models. Parameter posterior summaries are given as the 50th (2.5th, 97.5th) percentiles; the first entry in each row is the true parameter value. Bold indicates estimates whose 95% credible intervals do not include the true parameter value.

Candidate model comparison was based on parameter estimates, fit to the observed data, out-of-sample prediction accuracy, and posterior predictive distribution coverage. Goodness-of-fit was assessed using the DIC (Spiegelhalter et al., 2002) and the posterior predictive loss (Gelfand and Ghosh, 1998). The DIC is reported along with an estimate of effective model complexity, pD, while the posterior predictive loss is computed as D = G + P, where G is a goodness-of-fit measure and P is a penalty term. Predictive accuracy at the 500 holdout locations was measured using the root mean squared prediction error (RMSPE; Yeniay and Goktas, 2002). The percent of holdout observations that fell within the candidate models' 95% posterior predictive credible intervals (CI) was also computed. Inference was based on 15,000 MCMC samples, comprising the post burn-in samples from three chains of 25,000 iterations each (i.e., 5,000 samples from each chain).
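The two prediction metrics, RMSPE over the holdout set and empirical coverage of the 95% posterior predictive intervals, can be computed as in this short sketch (function names are illustrative):

```python
import numpy as np

def rmspe(y_true, y_pred):
    """Root mean squared prediction error over holdout observations."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def ci_coverage(y_true, pred_samples, level=0.95):
    """Percent of holdout points whose observed value falls inside the
    central `level` interval of the posterior predictive samples.
    pred_samples: array of shape (n_samples, n_holdout)."""
    alpha = (1.0 - level) / 2.0
    lo, hi = np.quantile(pred_samples, [alpha, 1.0 - alpha], axis=0)
    return 100.0 * np.mean((y_true >= lo) & (y_true <= hi))
```

A well-calibrated model should yield coverage close to the nominal 95%, as the DNNGP models do in Table 1.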

Table 1 presents parameter estimation and model assessment metrics. With a single exception for Dataset 1, the full GP model recovered the parameter values used to generate the datasets, i.e., the 95% CIs cover the true parameter values. For the DNNGP models, there was negligible difference among parameter estimates for the 15, 25 and 36 neighbor sets. Hence, we report only the m = 25 results. There was very little difference between the estimates produced by the Adaptive and Simple DNNGP models and, like the full GP model, they captured the true mean and process parameters, again with a single exception for Dataset 1. Given the extremes in the space and time decay in Datasets 1 and 3, we anticipated that the Simple DNNGP model (with at most 5 neighbors in any given time point) would not be able to estimate the covariance parameters. However, extensive analysis of simulated data, some of which is reported in Table 1, suggested the Simple DNNGP model performed as well as the Adaptive DNNGP and full GP models. Goodness-of-fit and out-of-sample prediction validation metrics in Table