Latent Gaussian modeling and INLA:
A review with focus on space-time applications

Thomas Opitz
BioSP, INRA, F-84914 Avignon, France. thomas.opitz@inra.fr

Abstract

Bayesian hierarchical models with latent Gaussian layers have proven very flexible in capturing complex stochastic behavior and hierarchical structures in high-dimensional spatial and spatio-temporal data. Whereas simulation-based Bayesian inference through Markov chain Monte Carlo may be hampered by slow convergence and numerical instabilities, the inferential framework of Integrated Nested Laplace Approximation (INLA) is capable of providing accurate and relatively fast analytical approximations to posterior quantities of interest. It heavily relies on the use of Gauss–Markov dependence structures to avoid the numerical bottleneck of high-dimensional nonsparse matrix computations. With a view towards space-time applications, we here review the principal theoretical concepts, model classes and inference tools within the INLA framework. Important elements for constructing space-time models are certain spatial Matérn-like Gauss–Markov random fields, obtained as approximate solutions to a stochastic partial differential equation. Efficient implementation of statistical inference tools for a large variety of models is available through the INLA package of the R software. To showcase the practical use of R-INLA and to illustrate its principal commands and syntax, a comprehensive simulation experiment is presented using simulated non-Gaussian space-time count data with a first-order autoregressive dependence structure in time.

Please cite this review paper as
T. Opitz. Latent Gaussian modeling and INLA: A review with focus on space-time applications. To appear in the Journal of the French Statistical Society, 2017.

Keywords: Integrated Nested Laplace Approximation; R-INLA; spatio-temporal statistics

1 Introduction

The rapidly increasing availability of massive sets of georeferenced data has spawned a strong demand for suitable statistical modeling approaches to handle large and complex data. Bayesian hierarchical models have become a key tool for capturing and explaining complex stochastic structures in spatial or spatio-temporal processes. Many of these models are based on latent Gaussian processes, typically embedded in a parameter characterizing the central tendency of the distribution assumed for the likelihood of the data, and they extend the Gaussian random field modeling brought forward by classical geostatistics. Assuming conditional independence of the data with respect to the latent Gaussian layer makes inference tractable in many cases. Typically, closed-form expressions for the likelihood are not available for these complex models, and simulation-based inference through Markov chain Monte Carlo (MCMC) has become a standard approach for many models. An important alternative, in certain respects superior to MCMC inference, has been developed through the idea of Integrated Nested Laplace Approximation, proposed in the JRSS discussion paper of Rue, Martino and Chopin (2009). Many case studies have since been conducted through INLA, with space-time applications to global climate data (Lindgren et al., 2011), epidemiology (Bisanzio et al., 2011), disease mapping and spread (Schroedle et al., 2011, 2012), forest fires (Serra et al., 2014; Gabriel et al., 2016), air pollution risk mapping (Cameletti et al., 2013), fishing practices (Cosandey-Godin et al., 2014) or econometrics (Gomez et al., 2015a). More generally, INLA has been successfully applied to generalized linear mixed models (Fong, Rue and Wakefield, 2009), log-Gaussian Cox processes (Illian et al., 2012; Gomez et al., 2015) and survival models (Martino et al., 2011), amongst many other application fields. The recent monograph of Blangiardo and Cameletti (2015) reviews INLA in detail and gives many practical examples. Instead of applying simulation techniques to produce a representative sample of the posterior distribution, INLA uses analytic Laplace approximation and efficient numerical integration schemes to achieve highly accurate analytical approximations of posterior quantities of interest with relatively small computing times. In particular, we obtain approximations of the univariate posterior marginals of model hyperparameters and of the latent Gaussian variables. By making use of latent Gauss–Markov dependence structures, models remain tractable even in scenarios that are very high-dimensional in terms of observed data and latent Gaussian variables.

The INLA-based inference procedures are implemented in the R package INLA (referred to as R-INLA in the following) for a large variety of models, defined through basic building blocks of three categories: the (univariate) likelihood specification of the data, the latent Gaussian model and the prior distributions for hyperparameters. The functionality of R-INLA is continuously extended (Martins et al., 2013; Lindgren et al., 2015; Rue et al., 2016). This review and the code examples refer to R-INLA version 0.0-1463562937. The R-INLA software project is hosted on http://www.r-inla.org/, where one can find many INLA-related resources, amongst them details on the specification of likelihoods, latent models and priors, a discussion forum with very active participation of the members of the INLA core team, tutorials and codes, an FAQ section, etc.
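To give a first impression of these three building blocks in code, here is a minimal sketch (with a simulated toy data frame df that is not part of the case study of Section 4): a Poisson likelihood, a latent model consisting only of fixed effects, and default priors.

library(INLA)
## toy data: a Poisson response driven by a single covariate z
df=data.frame(z=runif(100))
df$y=rpois(100,lambda=exp(-1+2*df$z))
## likelihood family, latent model (here fixed effects only), default priors
fit=inla(y~z,family="poisson",data=df)
summary(fit)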

2 Modeling and estimation with INLA

2.1 Latent Gaussian modeling

The structured latent Gaussian regression models amenable to INLA-based inference can be defined in terms of three layers: hyperparameters, latent Gaussian field, and likelihood model. The univariate likelihood captures the marginal distribution of the data and is often chosen as an exponential family (Gaussian, gamma, exponential, Weibull, Cox, binomial, Poisson, negative binomial, …) similar to the framework of generalized linear models, where models like the exponential, Weibull or Cox ones are available as survival models allowing for right- and left-side censoring. The mean (or some other parameter related to the central tendency) of the likelihood distribution is determined by the latent Gaussian predictor through a link function such that $g(\mathrm{E}(y_i \mid \eta_i)) = \eta_i$ in the case of the mean, where $y_i$ is the observation, $\eta_i$ is a Gaussian predictor and $g$ is an appropriately chosen link function. Hyperparameters can appear in the likelihood as dispersion parameters like the variance of the Gaussian distribution, the overdispersion parameter of the negative binomial one or the shape parameter of the gamma one, or they can characterize the structure of the latent Gaussian model, for instance through variances, spatial correlation parameters or autoregression coefficients. Formally, this hierarchical model can be written as

$\boldsymbol{\theta} \sim \pi(\boldsymbol{\theta})$, (hyperparameters) (1)
$\boldsymbol{x} \mid \boldsymbol{\theta} \sim \mathcal{N}\big(\boldsymbol{0},\, \boldsymbol{Q}^{-1}(\boldsymbol{\theta})\big)$, (latent Gaussian field) (2)
$y_i \mid \boldsymbol{x}, \boldsymbol{\theta} \sim \pi(y_i \mid \eta_i, \boldsymbol{\theta})$ independently, $i = 1, \ldots, n$, (observations) (3)

where $\boldsymbol{Q}(\boldsymbol{\theta})$ is the precision matrix (i.e., inverse covariance matrix) of the latent Gaussian vector $\boldsymbol{x}$ and $\boldsymbol{\eta} = \boldsymbol{A}\boldsymbol{x}$ with the so-called observation matrix $\boldsymbol{A}$ that maps the latent variable vector $\boldsymbol{x}$ to the predictors $\eta_i$ associated to the observations $y_i$. While $\boldsymbol{x}$ and $\boldsymbol{y}$ can be high-dimensional when using INLA, an important limitation concerns the hyperparameter vector $\boldsymbol{\theta}$, whose dimension should be moderate in practice, say not more than around 10 if these hyperparameters are estimated with the default settings of R-INLA (although R-INLA supports using a higher number of hyperparameters); this is due to the numerical integration that has to be carried out over the hyperparameter space. Notice that the Gaussian likelihood is particular since, conditional on the hyperparameters, the observations are still Gaussian. In practice, the precision hyperparameter of the Gaussian likelihood (i.e., the inverse of its variance) can correspond to a measurement error or a nugget effect, and we can fix a very high value for this precision hyperparameter if we want the model for the data $\boldsymbol{y}$ to correspond exactly to the latent Gaussian predictors $\boldsymbol{\eta}$. The dependence structure between observations is captured principally by the precision matrix $\boldsymbol{Q}(\boldsymbol{\theta})$ of the latent field $\boldsymbol{x}$. In practice, it is strongly recommended or even indispensable, from the point of view of computation time and memory requirements, to choose Gauss–Markov structures with sparse $\boldsymbol{Q}$ whenever the model dimension is high.
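In R-INLA, fixing the Gaussian observation precision at a very high value is done through the control.family argument; the following is a schematic sketch (the value 12 on the internal log-precision scale is an arbitrary illustrative choice, not a setting taken from this review):

## fix the Gaussian observation precision to exp(12), i.e., a negligible
## measurement error, such that y practically coincides with the predictor eta
cf=list(hyper=list(prec=list(initial=12,fixed=TRUE)))
## fit=inla(y~...,family="gaussian",data=df,control.family=cf)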

The resulting joint posterior density of latent variables $\boldsymbol{x}$ and hyperparameters $\boldsymbol{\theta}$ is

$\pi(\boldsymbol{x}, \boldsymbol{\theta} \mid \boldsymbol{y}) \propto \pi(\boldsymbol{\theta})\, \pi(\boldsymbol{x} \mid \boldsymbol{\theta}) \prod_{i=1}^{n} \pi(y_i \mid \eta_i, \boldsymbol{\theta}).$ (4)

This density over a high-dimensional space usually does not characterize one of the standard multivariate families and is therefore difficult to interpret and to manipulate. In practice, the main interest lies in the marginal posteriors $\pi(\theta_j \mid \boldsymbol{y})$ of hyperparameters, $\pi(x_i \mid \boldsymbol{y})$ of latent variables and of the resulting predictors $\eta_i$, where the latter can be included into $\boldsymbol{x}$ for notational convenience. Calculation of these univariate posterior densities requires integrating with respect to $\boldsymbol{x}$ and $\boldsymbol{\theta}$:

$\pi(\theta_j \mid \boldsymbol{y}) = \int \int \pi(\boldsymbol{x}, \boldsymbol{\theta} \mid \boldsymbol{y})\, \mathrm{d}\boldsymbol{x}\, \mathrm{d}\boldsymbol{\theta}_{-j},$ (5)
$\pi(x_i \mid \boldsymbol{y}) = \int \pi(x_i \mid \boldsymbol{\theta}, \boldsymbol{y})\, \pi(\boldsymbol{\theta} \mid \boldsymbol{y})\, \mathrm{d}\boldsymbol{\theta}.$ (6)

We notice that the use of astutely designed numerical integration schemes with respect to the moderately dimensioned hyperparameter space can yield a satisfactorily accurate approximation of the outer integral. On the other hand, calculating the inner integral with respect to $\boldsymbol{x}$, often of very high dimension (thousands to millions of latent variables), is intricate.

2.2 Gauss–Markov models

We say that a random vector $\boldsymbol{x} = (x_1, \ldots, x_n)^T$ is Gauss–Markov if the number of nonnull entries of its precision matrix $\boldsymbol{Q}$ is $O(n)$. Such sparse precision matrices allow efficient numerical computation of matrix operations like Cholesky-type $LL^T$-decomposition (with sparse factors $L$ and $L^T$), determinant calculation, matrix-vector products, etc. For instance, the complexity of matrix inversion decreases from $O(n^3)$ for matrices without any structural constraints to around $O(n^{3/2})$ for the sparse matrices typically arising in two-dimensional spatial problems. Using Gauss–Markov structures fundamentally shifts the dependence characterization from covariance matrices $\boldsymbol{\Sigma}$ to precision matrices $\boldsymbol{Q} = \boldsymbol{\Sigma}^{-1}$. Notice that the conditional expectation is easily expressed through the regression $\mathrm{E}(x_i \mid \boldsymbol{x}_{-i}) = \mu_i - Q_{ii}^{-1} \sum_{j \neq i} Q_{ij}(x_j - \mu_j)$, where only a small number of the sum terms, corresponding to the so-called neighborhood of $x_i$, are non-zero owing to the sparse structure of $\boldsymbol{Q}$. The conditional variance is $\mathrm{Var}(x_i \mid \boldsymbol{x}_{-i}) = 1/Q_{ii}$. R-INLA uses fast and efficient algorithms for sparse matrix calculations (Rue and Held, 2005), already implemented in the GMRFLib library. For efficient calculations, it is important to make the precision matrix "as diagonal as possible" by reordering variables to regroup nonzero elements as close as possible to the diagonal. R-INLA has implemented several of those reordering strategies; see Rue and Held (2005) for more details on reordering algorithms. Although certain Gauss–Markov models exist for spatially indexed graphs, useful covariance functions defined over $\mathbb{R}^d$ and leading to sparse precision matrices are difficult to establish. An exception is the very flexible approximate Gauss–Markov representation of Matérn-like covariances based on certain stochastic partial differential equations (often referred to as the SPDE approach in the literature), which is also implemented in R-INLA; see Section 3.1 for more details.
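The computational benefit of sparsity can be illustrated outside of R-INLA with the Matrix package; the following toy sketch (our own illustration, not part of the original code examples) builds the tridiagonal precision matrix of an AR(1) process and solves a linear system through a sparse Cholesky factorization:

library(Matrix)
n=1e4;a=0.8
## tridiagonal precision matrix of a stationary AR(1) process with unit
## innovation variance: only O(n) of the n^2 entries are nonzero
Q=bandSparse(n,k=c(0,1),diagonals=list(c(1,rep(1+a^2,n-2),1),rep(-a,n-1)),
   symmetric=TRUE)
L=Cholesky(Q)                   # sparse Cholesky factorization with reordering
x=solve(L,rnorm(n),system="A")  # solves Q x = b without ever forming Q^{-1}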

2.3 INLA

The fundamental idea of INLA consists in applying the device of Laplace approximation to integrate out high-dimensional latent components. This theoretical foundation is combined with efficient algorithms and numerical tricks and approximations to ensure a fast yet accurate approximation of the posterior marginal densities of interest like those of the latent field $\boldsymbol{x}$ (including the predictors $\eta_i$) in (6) or of the hyperparameters in (5). Since the details of the methods implemented in the INLA approximation are quite technical, we here content ourselves with a presentation of the main ideas and the related options available in R-INLA.

2.3.1 The principle of Laplace approximation

We first recall the principle of the Laplace approximation and its calculation in practice. Typically, one seeks to evaluate an integral $I_n = \int_{\mathbb{R}^d} \exp(n f(\boldsymbol{x}))\, \mathrm{d}\boldsymbol{x}$, where the positive integrand function, here written as $\exp(n f(\boldsymbol{x}))$ with a scale variable $n > 0$, is defined over a high-dimensional space $\mathbb{R}^d$ and is "well-behaved" in the sense that it satisfies some minimal regularity requirements, is unimodal and its shape is not too far from gaussianity; for instance, requiring strict log-concavity of the integrand (i.e., strict concavity of $f$) is useful, see Saumard and Wellner (2014). Since the integral value is mainly determined by the behavior around the mode of $f$, a second-order Taylor approximation can be substituted for $f$ to calculate an approximate value of the integral. Assuming that $\boldsymbol{x}^{\star} = \arg\max_{\boldsymbol{x}} f(\boldsymbol{x})$ is the unique global maximum of $f$, we get $f(\boldsymbol{x}) \approx f(\boldsymbol{x}^{\star}) + \tfrac{1}{2}(\boldsymbol{x} - \boldsymbol{x}^{\star})^T H(\boldsymbol{x}^{\star})(\boldsymbol{x} - \boldsymbol{x}^{\star})$ for values close to $\boldsymbol{x}^{\star}$, with the Hessian matrix $H(\boldsymbol{x}^{\star})$ of $f$. Notice that $-H(\boldsymbol{x}^{\star})$ is positive definite. An approximate value of the integral can be calculated using the fact that a multivariate Gaussian density integrates to $1$. The resulting following integral approximation in dimension $d$ is expected to become more and more accurate for higher values of $n$, i.e., when the area below the integrand becomes concentrated more and more closely around the mode:

$I_n = \int_{\mathbb{R}^d} \exp\big(n f(\boldsymbol{x})\big)\, \mathrm{d}\boldsymbol{x}$ (7)
$\;\approx\; \left(\frac{2\pi}{n}\right)^{d/2} \big|\det\big({-H}(\boldsymbol{x}^{\star})\big)\big|^{-1/2} \exp\big(n f(\boldsymbol{x}^{\star})\big) =: \tilde{I}_n;$ (8)

here, $\approx$ means that $I_n = \tilde{I}_n\big(1 + O(n^{-1})\big)$ (Tierney and Kadane, 1986). In statistical practice, $n$ may represent the number of i.i.d. replications, each of which contributes a factor $\exp(f(\boldsymbol{x}))$ to the integrand. Higher values of $n$ usually lead to better approximation, and more detailed formal results on the quality of approximation have been derived (Tierney and Kadane, 1986; Rue, Martino and Chopin, 2009). Many of the models commonly estimated with INLA have no structure of strictly i.i.d. replication, but the Laplace approximation remains sufficiently accurate in most cases since there usually still is a structure of internal replication; ideally, for each latent variable $x_i$ we have at least several observations which contain information about $x_i$ (and which are conditionally independent with respect to $x_i$ by construction of the model).
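To make the relative error rate concrete, consider the toy example (our own illustration, not from the original text) $f(x) = \log x - x$ with mode $x^{\star} = 1$, $f(x^{\star}) = -1$ and $-f''(x^{\star}) = 1$, for which the exact value of $I_n$ is $\Gamma(n+1)/n^{n+1}$:

## Laplace approximation sqrt(2*pi/n)*exp(n*f(x*)) against the exact integral
laplace=function(n) sqrt(2*pi/n)*exp(-n)
exact=function(n) exp(lgamma(n+1)-(n+1)*log(n))
sapply(c(1,10,100,1000),function(n) laplace(n)/exact(n))
## ratios approx. 0.922, 0.992, 0.9992, 0.99992 approach 1 at rate O(1/n)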

In the context of INLA, the following observation will be interesting and useful. Fix $n = 1$ in (7) and suppose that $\exp(f(\boldsymbol{x})) = \pi(\boldsymbol{x}, \boldsymbol{y})$ for fixed data $\boldsymbol{y}$, where $\pi$ is the joint probability density of a random vector $(\boldsymbol{X}, \boldsymbol{Y})$. Then, in (8), the term $\exp(f(\boldsymbol{x}^{\star}))$ is the value of $\pi(\boldsymbol{x}, \boldsymbol{y})$ at its mode $\boldsymbol{x}^{\star}$ for fixed $\boldsymbol{y}$, whereas $(2\pi)^{d/2}\,\big|\det\big({-H}(\boldsymbol{x}^{\star})\big)\big|^{-1/2}$ is $1/\tilde{\pi}_G(\boldsymbol{x}^{\star} \mid \boldsymbol{y})$ with $\tilde{\pi}_G$ a Gaussian approximation with mean vector $\boldsymbol{x}^{\star}$ to the conditional density of $\boldsymbol{X}$ given $\boldsymbol{Y} = \boldsymbol{y}$. In practice, we can determine the mean $\boldsymbol{x}^{\star}$ and the precision matrix $\boldsymbol{Q}^{\star} = -H(\boldsymbol{x}^{\star})$ of $\tilde{\pi}_G$ through an iterative Newton–Raphson optimization. Starting from the joint posterior (4) of our latent Gaussian model, we set $\exp(f(\boldsymbol{x})) \propto \pi(\boldsymbol{x} \mid \boldsymbol{\theta}, \boldsymbol{y}) \propto \exp\big({-\tfrac{1}{2}}\boldsymbol{x}^T\boldsymbol{Q}\boldsymbol{x} + \sum_i g_i(x_i)\big)$. We further write $g_i(x_i) = \log \pi(y_i \mid x_i, \boldsymbol{\theta})$ and calculate its second-order Taylor expansion $g_i(x_i) \approx a_i + b_i x_i - \tfrac{1}{2} c_i x_i^2$ around a current value $\mu_i^{(k)}$. Without loss of generality, we here assume that the linear predictor $\boldsymbol{\eta}$ corresponds to the latent Gaussian vector $\boldsymbol{x}$. We start the iterative optimization with initial values $\boldsymbol{\mu}^{(0)}$ and $\boldsymbol{Q}^{(0)} = \boldsymbol{Q} + \operatorname{diag}(\boldsymbol{c}^{(0)})$, where $\boldsymbol{c}^{(0)}$ collects the curvature coefficients $c_i$ of the expansions around $\boldsymbol{\mu}^{(0)}$; the updated mean $\boldsymbol{\mu}^{(k+1)}$ then solves $\boldsymbol{Q}^{(k)}\boldsymbol{\mu}^{(k+1)} = \boldsymbol{b}^{(k)}$ with $\boldsymbol{b}^{(k)}$ the vector of first-order coefficients. We then iterate this procedure until convergence such that $\boldsymbol{\mu}^{(k)} \rightarrow \boldsymbol{x}^{\star}$ and $\boldsymbol{Q}^{(k)} \rightarrow \boldsymbol{Q}^{\star}$, $k = 0, 1, 2, \ldots$, where an appropriate convergence criterion must be used. Notice that the conditional independence assumption of the observations with respect to $\boldsymbol{x}$ allows preserving the sparse structure of $\boldsymbol{Q}$ in $\boldsymbol{Q}^{(k)}$, since only the diagonal is modified. Moreover, a strictly log-concave likelihood function ensures $c_i > 0$ such that the $\boldsymbol{Q}^{(k)}$ are valid precision matrices and local curvature information around the mode can be used for constructing a useful Gaussian approximation. It is further possible to impose linear constraints $\boldsymbol{B}\boldsymbol{x} = \boldsymbol{e}$ with given matrix $\boldsymbol{B}$ and vector $\boldsymbol{e}$ by using the approach of conditioning through kriging (Rue, Martino and Chopin, 2009).

2.3.2 Posterior marginal densities of hyperparameters

To calculate

$\pi(\theta_j \mid \boldsymbol{y}) = \int \left( \int \pi(\boldsymbol{x}, \boldsymbol{\theta} \mid \boldsymbol{y})\, \mathrm{d}\boldsymbol{x} \right) \mathrm{d}\boldsymbol{\theta}_{-j},$ (9)

we use the Laplace approximation of the inner integral as described in Section 2.3.1, such that the approximated density $\tilde{\pi}(\boldsymbol{\theta} \mid \boldsymbol{y})$ satisfies

$\tilde{\pi}(\boldsymbol{\theta} \mid \boldsymbol{y}) \propto \frac{\pi(\boldsymbol{x}, \boldsymbol{\theta} \mid \boldsymbol{y})}{\tilde{\pi}_G(\boldsymbol{x} \mid \boldsymbol{\theta}, \boldsymbol{y})}\bigg|_{\boldsymbol{x} = \boldsymbol{x}^{\star}(\boldsymbol{\theta})},$ (10)

with $\boldsymbol{x}^{\star}(\boldsymbol{\theta})$ the mode of the joint density $\pi(\boldsymbol{x}, \boldsymbol{\theta} \mid \boldsymbol{y})$ for fixed $\boldsymbol{\theta}$, and $\tilde{\pi}_G$ a Gaussian density that approximates $\pi(\boldsymbol{x} \mid \boldsymbol{\theta}, \boldsymbol{y})$:

$\tilde{\pi}_G(\boldsymbol{x} \mid \boldsymbol{\theta}, \boldsymbol{y}) = \mathcal{N}\big(\boldsymbol{x};\; \boldsymbol{x}^{\star}(\boldsymbol{\theta}),\; (\boldsymbol{Q}^{\star}(\boldsymbol{\theta}))^{-1}\big) \approx \pi(\boldsymbol{x} \mid \boldsymbol{\theta}, \boldsymbol{y}).$ (11)

Notice that the Gaussian approximation is exact if the data likelihood itself is Gaussian. An approximation of the posterior marginal of $\theta_j$ in (9) is now obtained through numerical integration with a set of integration nodes $\boldsymbol{\theta}_k$, $k = 1, \ldots, K$, chosen from a numerical exploration of the surface of the density $\tilde{\pi}(\boldsymbol{\theta} \mid \boldsymbol{y})$ (with $\theta_j$ held fixed). This yields

$\tilde{\pi}(\theta_j \mid \boldsymbol{y}) = \sum_{k=1}^{K} \tilde{\pi}(\boldsymbol{\theta}_k \mid \boldsymbol{y})\, \Delta_k,$ (12)

with weights $\Delta_k$ (which are chosen to be equal in the approaches implemented in R-INLA). In R-INLA, the nodes $\boldsymbol{\theta}_k$ can either be chosen as a grid around the mode of $\tilde{\pi}(\boldsymbol{\theta} \mid \boldsymbol{y})$ (int.strategy="grid", the most costly variant), or through a simpler so-called central composite design which is less costly when the dimension of $\boldsymbol{\theta}$ is relatively large (int.strategy="ccd", the default approach), or we may use only one integration node given as the mode value (int.strategy="eb", corresponding to the idea of an empirical Bayes approach).

2.3.3 Posterior marginal densities of the latent Gaussian field

For calculating the marginal density of a latent variable $x_i$, we lean on representation (6). Numerical integration with respect to $\boldsymbol{\theta}$ can be done in analogy to the procedure described in Section 2.3.2, and the Laplace approximation (10) allows approximating $\pi(\boldsymbol{\theta} \mid \boldsymbol{y})$. It thus remains to (approximately) evaluate $\pi(x_i \mid \boldsymbol{\theta}, \boldsymbol{y})$. A simple and fast solution would be to use the univariate Gaussian approximation resulting from the multivariate Gaussian approximation (11), whose mean value is $x_i^{\star}(\boldsymbol{\theta})$ and whose variance can easily and quickly be calculated from a partial inversion of the precision matrix $\boldsymbol{Q}^{\star}(\boldsymbol{\theta})$ (Rue, 2005) (strategy="gaussian" in R-INLA). However, this Gaussian approximation often fails to capture skewness behavior and can generate nonnegligible bias in certain cases – an important exception to this issue being the case where the data likelihood is Gaussian. In the general case, using again a Laplace-like approximation

$\tilde{\pi}_{\mathrm{LA}}(x_i \mid \boldsymbol{\theta}, \boldsymbol{y}) \propto \frac{\pi(\boldsymbol{x}, \boldsymbol{\theta} \mid \boldsymbol{y})}{\tilde{\pi}_G(\boldsymbol{x}_{-i} \mid x_i, \boldsymbol{\theta}, \boldsymbol{y})}\bigg|_{\boldsymbol{x}_{-i} = \boldsymbol{x}_{-i}^{\star}(x_i, \boldsymbol{\theta})},$ (13)

with $\boldsymbol{x}_{-i}^{\star}(x_i, \boldsymbol{\theta})$ the mode of $\pi(\boldsymbol{x}_{-i} \mid x_i, \boldsymbol{\theta}, \boldsymbol{y})$ for fixed $x_i$ and $\boldsymbol{\theta}$, would be preferable, but it is relatively costly (strategy="laplace" in R-INLA). Instead, Rue, Martino and Chopin (2009) propose a so-called simplified Laplace approximation based on third-order Taylor expansions of the numerator and denominator in (13) that satisfactorily remedies the location and skewness inaccuracies of the Gaussian approximation (strategy="simplified.laplace" in R-INLA, the default). Notice that the "nested" in INLA refers to this second, nested Laplace-like approximation.
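Both approximation choices discussed in Sections 2.3.2 and 2.3.3 are selected through the control.inla argument of the inla(...) call; the strategy names below are actual R-INLA options, while the surrounding call is only schematic:

## approximation of latent marginals and integration over hyperparameters
ci=list(strategy="simplified.laplace",  # or "gaussian", "laplace"
   int.strategy="ccd")                  # or "grid", "eb"
## fit=inla(form,family="poisson",data=data,control.inla=ci)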

3 Space-time modeling approaches

Modeling trends over space and time and spatio-temporal dependence in repeated spatial observations is paramount to understanding the dynamics of processes observed over space and time. We here review approaches to integrating the time component into the latent Gaussian predictor that are suitable for high-dimensional space-time inference with INLA. In principle, any space-time Gaussian process could be used, but the requirement of a Gauss–Markov structure for fast matrix calculations and the current scope of models implemented in R-INLA impose some constraints. Flexible, Matérn-like spatial Gauss–Markov models with continuous sample paths are available in R-INLA and will be discussed in Section 3.1. A generalization of such purely spatial models to flexible nonseparable space-time dependence structures is still pending, but R-INLA allows extending such spatial models to capture temporal dependence through autoregressive structures, which will be discussed in the following and explored in the examples of Section 4. New approaches to generic nonseparable space-time Gauss–Markov model classes and their implementation in R-INLA are planned by members of the scientific community working on the SPDE approach. Any other nonseparable covariance model would in principle be amenable to INLA-based inference without difficulty, as long as the latent model keeps a Markovian structure when it is high-dimensional.

We start by formulating a generic latent Gaussian space-time model for the predictor $\eta(\boldsymbol{s}, t)$ that covers many of the models that can be fitted with R-INLA. We denote by $z(\boldsymbol{s}, t) = (z_1(\boldsymbol{s}, t), \ldots, z_J(\boldsymbol{s}, t))^T$ given covariate data, which may depend on space or time only. In some cases, one may obtain useful dynamical structures by constructing artificial covariates at time $t$ based on the observations at time $t - 1$. Notice that in cases where covariates are not available except at the observation points $(\boldsymbol{s}_i, t_i)$, $i = 1, \ldots, n$, of $\boldsymbol{y}$, one may have to include them into the observation vector and formulate a latent Gaussian model for their interpolation over space and time. This would give a model that achieves both interpolation of covariate values and prediction of $\boldsymbol{y}$ at the same time. Here, we consider

$\eta(\boldsymbol{s}, t) = \beta_0 + \sum_{j=1}^{J} \beta_j z_j(\boldsymbol{s}, t) + \sum_{k} f_k\big(z_k(\boldsymbol{s}, t)\big) + g_1(t) + g_2(\boldsymbol{s}) + w(\boldsymbol{s}, t).$ (14)

The linear coefficients $\beta_j$ are known as fixed effects, whereas the functions $f_k$ and the processes $g_1$, $g_2$ and $w$ are referred to as random effects. Notice that $g_1$ and $g_2$ are to be understood as marginal models that capture purely temporal or purely spatial marginal effects. We now shortly present some typical examples of Gaussian prior models that are commonly used and available in R-INLA for the intercept $\beta_0$, the linear covariate effects $\beta_j$, the nonlinear covariate effects $f_k$, the marginal temporal effect $g_1(t)$, the marginal spatial effect $g_2(\boldsymbol{s})$ and the space-time effect $w(\boldsymbol{s}, t)$.

A temporal effect $g_1(t)$ could be modeled as an autoregressive process or as a random walk, where the autoregression coefficients and the step variance of the random walk represent hyperparameters that can be estimated. A nonparametric spatial effect $g_2(\boldsymbol{s})$ could be modeled with a Gauss–Markov Matérn-like prior random field, whose construction is presented in detail in Section 3.1. A simple extension to a space-time effect $w(\boldsymbol{s}, t)$ is obtained by considering independent replicates of the spatial field for each time point. An important class of space-time models that allows for temporal dependence and preserves the Gauss–Markov structure are the stationary first-order autoregressive models

$w(\boldsymbol{s}, t) = a\, w(\boldsymbol{s}, t - 1) + \sqrt{1 - a^2}\; \varepsilon(\boldsymbol{s}, t), \quad t = 2, \ldots, T,$ (15)

where $|a| < 1$ and $\varepsilon(\boldsymbol{s}, t)$ is a stationary spatial innovation process, i.i.d. in time, typically chosen as the Gauss–Markov Matérn-like field. If the process starts at $t = 1$ with $w(\boldsymbol{s}, 1) = \varepsilon(\boldsymbol{s}, 1)$, then its marginal distributions are stationary. We here allow $T = 1$ to include the purely spatial case; temporal independence arises for $a = 0$. This AR model is a group model where the spatial groups $w(\cdot, t)$, $t = 1, \ldots, T$, are joined through an AR(1) group model. In R-INLA, it is possible to work with more general group models that define a type of dependence like "autoregressive", "exchangeable", "random walk" or "besag" between certain groups of latent variables; a sketch of the corresponding model formula is given below. When marginal variances tend to increase over time, an interesting alternative to the autoregressive model (15) may be to link spatial random fields through a random walk structure such that $w(\boldsymbol{s}, t) = w(\boldsymbol{s}, t - 1) + \varepsilon(\boldsymbol{s}, t)$; the variance of the innovation fields then determines the increase of the marginal variance from instant $t - 1$ to $t$. It is further possible to specify certain graph structures among latent variables; we here refer to www.r-inla.org for full details about the specification of the large variety of available latent models. Owing to issues of identifiability and model complexity, usually only a subset of the terms in (14) is used to construct the latent field in practical applications.
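As a schematic sketch of the group mechanism (the objects spde and the index names anticipate the conventions of the worked example in Section 4), an AR(1)-grouped spatial SPDE effect is specified as follows; replacing "ar1" by "rw1" or "exchangeable" yields the other group types:

## spatial SPDE effect whose time slices are linked by an AR(1) group model
form=y~-1+intercept+f(spatial,model=spde,group=spatial.group,
   control.group=list(model="ar1"))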

3.1 Spatial Gauss–Markov random fields based on the SPDE approach

The spatial SPDE model of Lindgren et al. (2011) defines a Gauss–Markov random field as the approximate solution to a certain stochastic partial differential equation. It is an important building block for latent Gaussian models with spatial and spatio-temporal effects. Contrary to classical covariance function models, this approach provides sparse precision matrices that make numerical procedures efficient even for very high-dimensional problems. Formally, a Gaussian process $x(\boldsymbol{s})$ on a domain $\mathcal{D} \subset \mathbb{R}^d$ is defined through

$(\kappa^2 - \Delta)^{\alpha/2}\, \big(\tau x(\boldsymbol{s})\big) = \mathcal{W}(\boldsymbol{s}), \quad \boldsymbol{s} \in \mathcal{D}, \quad \kappa > 0,\ \tau > 0,$ (16)

with the Laplace operator $\Delta = \sum_{i=1}^{d} \partial^2/\partial s_i^2$, a standard Gaussian white noise process $\mathcal{W}(\boldsymbol{s})$ and a nonempty spatial domain $\mathcal{D}$ with regular boundary. Depending on the values of $\alpha$ and $d$, the operator $(\kappa^2 - \Delta)^{\alpha/2}$ is fractionary with noninteger exponent $\alpha/2$, and it must be defined in an appropriate way (Lindgren et al., 2011). The only stationary solution to (16) for $\mathcal{D} = \mathbb{R}^d$ is a Gaussian random field with the Matérn covariance function whose shape parameter is $\nu = \alpha - d/2$ (with $\nu = 0.5$ yielding the exponential covariance model) and whose scale parameter is $\kappa$. The marginal variance is $\sigma^2 = \Gamma(\nu)/\big(\Gamma(\alpha)(4\pi)^{d/2}\kappa^{2\nu}\tau^2\big)$, and the "empirical range" where a correlation of approximately $0.1$ is attained between two points is around $\sqrt{8\nu}/\kappa$. The Matérn model is known to be very flexible through its scale and shape parametrization, with the regularity properties of sample paths governed by the shape parameter $\nu$.
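These relations translate directly into code; the following sketch (for $d = 2$, mirroring the parameter choices used in Section 4) converts a shape parameter, an effective range and a standard deviation into the SPDE parameters $\kappa$ and $\tau$:

## convert (nu, range0, sigma0) into (alpha, kappa, tau) for d=2
nu=1;range0=0.25;sigma0=1
alpha=nu+1                    # alpha = nu + d/2
kappa=sqrt(8*nu)/range0       # empirical range = sqrt(8*nu)/kappa
tau=sqrt(gamma(nu)/(gamma(alpha)*4*pi))/(kappa^nu*sigma0)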

In practice, when working on a finite domain $\mathcal{D}$, boundary effects come into play. One can assume a polygon-shaped boundary $\partial\mathcal{D}$, as implemented in R-INLA. An interesting choice of boundary condition is the Neumann condition with zero normal derivatives at the boundary, such that the Gaussian field is "reflected" at the boundary. Lindgren et al. (2011) show that Neumann conditions principally lead to an increase in variance close to the boundary, the factor being approximately $2$ when there is one close linear boundary segment, and $4$ when we are close to the $90$-degree angle of a rectangle where two linear segments meet. Whereas such boundary conditions may be interesting for some applications, we often prefer to extend the domain beyond the study region towards a larger domain, such that boundary effects become negligible within the study region. This requires that the boundary of the extended domain is separated from the study region by a distance larger than the empirical range.

Approximate solutions to (16) are obtained based on the finite element approach commonly used in the numerical solution of partial differential equations. Using a triangulation of the spatial domain $\mathcal{D}$ leads to a high-dimensional multivariate representation with Gaussian variables $x_1, \ldots, x_m$ located at the triangulation nodes $\boldsymbol{s}_1, \ldots, \boldsymbol{s}_m$. Spatial interpolation between nodes is achieved by considering these Gaussian variables as weights for piecewise linear basis functions $\psi_k(\boldsymbol{s})$, one for each node, such that $x(\boldsymbol{s}) = \sum_{k=1}^{m} x_k \psi_k(\boldsymbol{s})$. Basis functions are of compact support given by the triangles touching the node ("pyramid functions"); see Figure 1 for an example of a basis function and of the approximation of a spatial surface through a linear combination of such basis functions. By "projecting" the SPDE (16) onto the space spanned by these basis functions, one can calculate the precision matrix governing the dependence structure between the variables $x_1, \ldots, x_m$.

Figure 1: Illustration taken from Blangiardo et al. (2013). Example of a Gaussian surface (left) and its finite element approximation (right). On the right display, we can further see the triangulation and one of the finite element pyramid-shaped basis functions.

There are certain rules of thumb to be respected for a construction of the triangulation that does not strongly distort the actual dependence structure of the exact solution to (16) and that remains numerically stable with respect to certain matrix computations, mainly concerning maximum sizes of triangles and minimum sizes of interior angles. For numerical efficiency, overly fine triangulations can be avoided by requiring minimum edge lengths, for instance; see Lindgren et al. (2011) for further details on the finite element construction. The approximate Gauss–Markov solution has mean zero, and the entries of its precision matrix $\boldsymbol{Q}$ are determined by the SPDE. We refer to the appendix of Lindgren et al. (2011) for the calculation of $\boldsymbol{Q}$, which is explicit. Based on the approximate solutions for $\alpha = 1$ and $\alpha = 2$, an approximate solution of the SPDE for higher integer values of $\alpha$ can be obtained by injecting the solution for $\alpha - 2$ in the place of the white noise in (16). For non-integer values of $\alpha$, additional approximations are necessary to obtain a solution; see the authors' response in the discussion of the Lindgren et al. (2011) paper. We have here considered $\kappa$ and $\tau$ to be constant over space. It is possible to allow spatial variation of these parameters for nonstationary models, with $\kappa = \kappa(\boldsymbol{s})$ in (16) and a precision-related parameter $\tau = \tau(\boldsymbol{s})$ that varies over space. For instance, Ingebrigtsen et al. (2014) apply such second-order nonstationary modeling to precipitation in Norway, where altitude is used as a covariate that acts on the local covariance range in the dependence structure; the recent contribution of Bakka et al. (2016) uses second-order nonstationarity to account for physical barriers in species distribution modeling. The SPDE (16) is also well-defined over manifolds, for instance the sphere $\mathbb{S}^2$ embedded in $\mathbb{R}^3$. In general, Lindgren et al. (2011) show that the approximate solution converges to the true solution of the SPDE for an adequately chosen norm when the triangulation is refined in a way such that the maximum diameter of a circle inscribed into one of the triangles tends to $0$.

R-INLA currently implements calculation, estimation and simulation of the Gauss–Markov SPDE solution for $0 \leq \alpha \leq 2$, for $\mathcal{D}$ a subset of $\mathbb{R}^2$ or a two-dimensional manifold embedded into $\mathbb{R}^3$, and for spatially constant or varying values $\kappa(\boldsymbol{s})$ and $\tau(\boldsymbol{s})$. A large number of tools is available to construct numerically efficient and stable triangulations.

4 Using R-INLA on a complex simulated space-time data set

To execute the following code, the INLA package of R must be installed; see www.r-inla.org for information on the R command recommended for installing this package, which is not hosted on the site of the Comprehensive R Archive Network (CRAN) due to its use of external libraries. We here illustrate the powerful estimation and inference tools of R-INLA in a controlled simulation experiment with data simulated from a latent Gaussian space-time model with the Poisson likelihood. The full code for the simulation study below can be requested from the author.

4.1 Simulating the data

We simulate a space-time count model based on a latent first-order autoregressive Gaussian process $w(\boldsymbol{s}, t)$ defined on the unit square $[0, 1]^2$ for $t = 1, \ldots, 60$. We use two covariates, given as $z_1(t) = t/60$ and $z_2(t)$ simulated according to a moving-average time series model. The simulated model is
$y(\boldsymbol{s}, t) \mid \eta(\boldsymbol{s}, t) \sim \mathrm{Poisson}\big(\exp(\eta(\boldsymbol{s}, t))\big), \quad \eta(\boldsymbol{s}, t) = -1 + z_1(t) + 0.5\, z_2(t) + w(\boldsymbol{s}, t),$
where $w(\boldsymbol{s}, t)$ follows the autoregressive model (15) with coefficient $a = 0.5$ and standard Gaussian Matérn innovations.

The covariance function of the standard Gaussian field is chosen of Matérn type with shape parameter $\nu = 1$ and effective range $0.25$ (corresponding to a Matérn scale parameter $\kappa = \sqrt{8}/0.25 \approx 11.3$). We now fix $50 \times 60 = 3000$ observation points of $y$, determined as the Cartesian product of $50$ sites uniformly scattered in $[0, 1]^2$ and the instants $t = 1, \ldots, 60$.

To illustrate the simulation capacities of R-INLA, we here use the SPDE approach to achieve simulation based on the Gauss–Markov approximation of the Matérn correlation structure of the spatial innovations. After loading the INLA package and fixing a random seed for better reproducibility of results, we start by defining the $\nu$, $\kappa$ and $\tau$ parameters of the SPDE. To avoid boundary effects in the SPDE simulation, we will use the enlarged square $[-0.5, 1.5]^2$ as spatial support.

library(INLA)
seed=2;set.seed(seed)
n.repl=60;n.sites=50
nu=1;alpha=nu+1;range0=0.25;sigma0=1;a=.5
kappa=sqrt(8*nu)/range0
tau=1/(2*sqrt(pi)*kappa*sigma0)

Next, we create a fine 2D triangulation mesh for relatively accurate simulation, with maximum edge length $0.04$ within $[0, 1]^2$ and $0.2$ in the extension zone. The minimum angle between two edges is set to $21$ degrees, a value recommended to avoid ill-conditioned triangulations containing very elongated triangles. Polygon nodes should be given in counterclockwise order, whereas polygon-shaped holes in the support would be specified by clockwise order of nodes; see the left display of Figure 2 for the resulting triangulation.

nodes.bnd=matrix(c(0,0,1,0,1,1,0,1),ncol=2,byrow=T)
segm.bnd=inla.mesh.segment(nodes.bnd)
nodes.bnd.ext=matrix(c(-.5,-.5,1.5,-.5,1.5,1.5,-.5,1.5),ncol=2,byrow=T)
segm.bnd.ext=inla.mesh.segment(nodes.bnd.ext)
mesh.sim=inla.mesh.2d(boundary=list(segm.bnd,segm.bnd.ext),
   max.edge=c(.04,.2),min.angle=21)
plot(mesh.sim)

The list slot mesh.sim$n gives the number of triangulation nodes. The mesh object mesh.sim has a slot mesh.sim$loc which contains a three-column matrix. The first two columns indicate the 2D coordinates of the mesh nodes. In our case, the third coordinate, useful for specifying 2D manifolds in 3D space, is constant $0$. We continue by creating the SPDE model through the R-INLA function inla.spde2.matern, whose main arguments are used to pass the mesh object and to fix the parameters $\alpha$, $\tau$ and $\kappa$. Its arguments B.tau and B.kappa are matrices with one row for each mesh node. If only one row is given, it describes a model with stationary values of $\tau$ and $\kappa$, which will be duplicated internally for all nodes. For simulating a model with fixed parameters $\tau$ and $\kappa$, these matrices have only one column that must be specified as $\log\tau$ or $\log\kappa$, respectively. Then, we extract the precision matrix of the resulting SPDE model and use it to create $60$ independent samples of the spatial field through the function inla.qsample. We fix the random seed for simulation through its seed=... argument. Finally, we manually create the first-order AR model with coefficient $a = 0.5$.

B.kappa=matrix(log(kappa),1,1)
B.tau=matrix(log(tau),1,1)
model.sim=inla.spde2.matern(mesh.sim,alpha=alpha,
   B.tau=B.tau,B.kappa=B.kappa)
Q=inla.spde2.precision(model.sim)
x=inla.qsample(n=n.repl,Q,seed=seed)
a=.5
for(i in 2:n.repl){x[,i]=a*x[,i-1]+sqrt(1-a^2)*x[,i]}

It remains to fix the covariate values and to generate the time trend in the mean, which is added to the centered Gauss–Markov space-time field. We fix an intercept of $-1$ and coefficients $1$ and $0.5$ for the two covariates covar1 and covar2.

covar1=1:n.repl/n.repl
covar2=as.numeric(arima.sim(n=n.repl,model=list(ma=rep(1,5))))
xtrend=-1+covar1+0.5*covar2
x=t(t(x)+xtrend)
plot(xtrend,type="l",xlab="time",ylab="trend",lwd=2)

For the observed data to be used in estimation, we sample uniformly $50$ sites among the triangulation nodes contained in $[0, 1]^2$. By using R-INLA's methods inla.mesh.projector and inla.mesh.project to project a spatial field with known values at the triangulation nodes onto a regular grid (necessary for standard plotting methods), we further provide a plot of the latent field and the observation sites. At the instant t.pred, we observe the maximum value of the time trend, and we will later use R-INLA to do spatial prediction for this instant.

nodes=mesh.sim$loc[,1:2]
idx.inside=which(pmin(nodes[,1],nodes[,2])>=0&pmax(nodes[,1],nodes[,2])<=1)
idx.obs=sample(idx.inside,size=n.sites)
sites=nodes[idx.obs,]
eta.i=as.numeric(x[idx.obs,])
y=rpois(length(eta.i),lambda=exp(eta.i))
t.pred=which.max(xtrend)
n.grid=100
grid=inla.mesh.projector(mesh.sim,dims=c(n.grid,n.grid))
image(grid$x,grid$y,inla.mesh.project(grid,field=x[,t.pred]),xlab="x",
   ylab="y",asp=1)
points(sites,pch=19, cex=.5)

Figure 2 shows the trend component (middle display) and the spatial field at a fixed instant with observation sites indicated (right display).

Figure 2: Simulated latent Gaussian model. Left: triangulation used for the approximate SPDE simulation. Middle: time trend. Right: simulation of the linear predictor for t = t.pred, with observation sites marked by black dots.

4.2 Fitting a complex space-time model with INLA

We now define and fit different candidate models to the above data y and its covariates covar1 and covar2. In Section 4.4, we will then explore tools for goodness-of-fit checks and model selection within the R-INLA package. We will first consider a model with prior structure similar to the simulated one, but we will also compare it to models where covariates are missing, where the Matérn shape parameter takes a different value or where the likelihood is not Poisson, but of the negative binomial type (with an additional hyperparameter for overdispersion).

First, let us define the triangulation mesh and the corresponding prior spatial SPDE model for estimation. For estimation, we must be careful about the dimension of the latent model to limit memory requirements and high-dimensional matrix calculations. Therefore, we will use a mesh with a lower resolution than for simulation, which may slightly increase the approximation error with respect to the stationary Matérn correlation structure. It is often reasonable to use the observation sites as initial nodes of the triangulation and to refine it by adding nodes where necessary, or by removing nodes too close together, which could be a source of numerical instabilities. R-INLA implements standard methods from the finite element literature and offers a conveniently parametrized interface to produce numerically stable and moderately dimensioned triangulations. R-INLA's prior parametrization of $\tau$ and $\kappa$ is a bit technical; it essentially assumes that $\log\tau = b_0^{\tau} + b_1^{\tau}\theta_1 + b_2^{\tau}\theta_2 + \ldots$ and $\log\kappa = b_0^{\kappa} + b_1^{\kappa}\theta_1 + b_2^{\kappa}\theta_2 + \ldots$, where the coefficient values correspond to the columns of B.tau and B.kappa. The first column is a fixed offset that must always be specified (even if it is $0$), whereas the following columns correspond to hyperparameters $\theta_1, \theta_2, \ldots$ that are estimated when they appear with a nonzero coefficient for $\log\tau$ or $\log\kappa$. For instance, specifying B.tau=B.kappa=matrix(c(0,1),nrow=1) would lead to a model where $\log\tau = \log\kappa = \theta_1$. In our model, we fix the offsets to $0$ and we estimate two hyperparameters, $\theta_1$ corresponding to $\log\tau$ and $\theta_2$ to $\log\kappa$. This can be seen as the standard prior specification of the SPDE model in R-INLA. We fix the "correct" simulated value of $\alpha = 2$ in the SPDE model.

mesh=inla.mesh.2d(sites,offset=c(-0.125,-.25),cutoff=0.075,min.angle=21,
   max.edge=c(.1,.25))
plot(mesh)
points(sites,col="red",pch=19,cex=.5)
spde=inla.spde2.matern(mesh=mesh,alpha=alpha,B.tau=matrix(c(0,1,0),nrow=1),
   B.kappa=matrix(c(0,0,1),nrow=1))

The number of nodes of this mesh is available in the slot mesh$n. Notice that further arguments of the inla.spde2.matern(...) function can be set to modify default priors, impose "integrate-to-zero" constraints, etc. We now run a first estimation by considering a model with Poisson likelihood and a prior structure of the latent Gaussian field corresponding to the model that we simulated to generate the data. Naturally, this model should provide a good fit, and we will later compare it to a number of alternative models. The observation matrix $\boldsymbol{A}$ links observations to the latent variables through $\boldsymbol{\eta} = \boldsymbol{A}\boldsymbol{x}$ and must therefore be of dimension $3000 \times$ (number of latent variables) for our model. Since the construction of this matrix and certain preprocessing steps before estimation, like the removal of duplicate rows, are rather complicated for complex models involving the spatial SPDE solution, R-INLA has helper methods that allow constructing this matrix and keeping track of latent variable indices more easily. In the following, the inla.spde.make.index command creates an index named "spatial", i.e., a data frame with a vector spatial (an index to match latent variables and triangulation nodes), a vector spatial.group (an index that indicates the membership of a latent variable in a "group", here given as the instant $t$), and a vector spatial.repl (an index that indicates the group membership when groups are i.i.d. replicates). In our case, all values of spatial.repl are $1$ since there is no structure of i.i.d. blocks in our space-time model, due to the non-zero autoregression coefficient.

idx.spatial=inla.spde.make.index("spatial",n.spde=spde$n.spde,n.group=n.repl)
A.obs=inla.spde.make.A(mesh,loc=sites,index=rep(1:nrow(sites),n.repl),
   group=rep(1:n.repl,each=nrow(sites)))
stack.obs=inla.stack(data=list(y=y),A=list(A.obs,1),effects=list(idx.spatial,
   data.frame(intercept=1,covar1=rep(covar1,each=n.sites),
      covar2=rep(covar2,each=n.sites))),tag="obs")

In practice, we may want to use a fitted model for prediction at unobserved sites. A natural way to achieve prediction in the Bayesian framework of INLA is to add the prediction points to the data by considering the associated observations as missing data. To illustrate R-INLA's facilities for this approach, we here consider prediction at the instant t.pred over a regular spatial grid covering $[0, 1]^2$. Therefore, we first create a separate observation matrix and a stack for the prediction points with missing data, and we will then use R-INLA's join mechanism that allows regrouping several groups of predictors through their observation matrices. The inla.stack.join(stack1,stack2,...) function creates a single model corresponding to a stacked structure with predictor $\big(\boldsymbol{\eta}_1^T, \boldsymbol{\eta}_2^T, \ldots\big)^T$ and observation matrices $\boldsymbol{A}_1, \boldsymbol{A}_2, \ldots$, where the information relative to $\boldsymbol{\eta}_k$ and $\boldsymbol{A}_k$ is regrouped in stackk for $k = 1, 2, \ldots$

n.grid=51
xgrid=0:(n.grid-1)/(n.grid-1)
grid.pred=as.matrix(expand.grid(xgrid,xgrid))
A.pred=inla.spde.make.A(mesh,loc=grid.pred,index=1:nrow(grid.pred),
   group=rep(t.pred,nrow(grid.pred)),n.spde=spde$n.spde,n.group=n.repl)
stack.pred=inla.stack(data=list(y=rep(NA, n.grid^2)),A=list(A.pred,1),
   effects=list(idx.spatial,data.frame(intercept=1,
      covar1=rep(covar1[t.pred],n.grid^2),
         covar2=rep(covar2[t.pred],n.grid^2))),tag="pred")
stack=inla.stack(stack.obs,stack.pred)

We now run the estimation with the inla(...) function. Its syntax resembles that of R's glm(...) function for generalized linear models, although with a variety of extensions and additional arguments. We need a model formula given in the usual R notation. For better handling of the intercept term, it is often preferable to make it appear explicitly (form=y~-1+intercept+...), such that it can later be directly included into the latent space(-time) model. Fixed effects (i.e., the intercept and covariates whose linear regression coefficients are estimated) are added to the formula in the usual way, whereas random effects are added with the f(...) function. In our model, the approximate SPDE solution is a random effect. The first argument of f is the name of the "covariate" associated to the random effect. Having created an index with name spatial beforehand, we now have a covariate spatial in the data set that indicates the triangulation node index of the spatial SPDE model (repeated $60$ times since the spatial model is duplicated for each observation time). For prior SPDE models, we further specify the argument model=spde in f, where spde is the R object already created for the SPDE prior model. The SPDE model is of purely spatial nature whereas we have observations in space and time, such that we can use the group functionality of R-INLA to define the type of dependence between the groups of spatially indexed Gaussian variables. Corresponding to the simulated model, we here use an AR(1) group model that models site-wise first-order autoregression of the variables over time. Since we have created the index spatial, we can specify the argument group=spatial.group to indicate the group membership of the latent variables, and we have to set control.group=list(model="ar1") for the AR(1) model between groups.

Data must be passed to the fitting function inla(...) as a data.frame or list, and the inla.stack.data function allows convenient extraction of the preprocessed data object from the stack. Further control arguments to inla(...) can be specified explicitly through R's usual control.⋆=list(...) syntax, which allows overriding the default control arguments. Here, ⋆ should be replaced by one of inla (for controlling INLA-related details like the reordering scheme used for making the precision matrix as diagonal as possible), compute (for specifying which quantities should be calculated, e.g. goodness-of-fit and model selection criteria), predictor (for specifying the observation matrix $\boldsymbol{A}$ if there is one, and for indicating which posterior quantities should be calculated for the predictor vector $\boldsymbol{\eta}$), family (for modifying the default prior of likelihood hyperparameters), amongst others. The choice of prior distributions is often not a straightforward exercise in Bayesian statistics. R-INLA proposes default choices, for instance noninformative priors for fixed effects, but the user can override the default settings by using the hyper=list(...) syntax in the control.family list (for hyperparameters related to the likelihood family) or in the f(...) function when generating random effects; for fixed effects, the mean and precision elements of the control.fixed list allow modifying the prior. In the following, we fix the METIS reordering strategy in control.inla to avoid the higher memory requirements of the standard reordering scheme (which were too high for the machine used for fitting, leading to a "bus error"). In control.predictor, we pass the observation matrix that can be extracted from the stack via inla.stack.A(stack), and we advise the program to calculate (discretized) posterior densities for the predictor variables (compute=T). Moreover, for a correct prediction of the NA values, we must tell inla to use the link function from the first likelihood family in control.family (in our case, there is only one) by indicating link=1. By default, R-INLA would have assumed an identity link for NA values. In control.compute, we here demand the calculation of CPO values (cross-validated predictive measures, see Held et al. (2010) for a comparison of Markov chain Monte Carlo and INLA), the marginal likelihood $\pi(\boldsymbol{y})$, the Deviance Information Criterion (DIC) and the Watanabe–Akaike Information Criterion (WAIC, Watanabe, 2010; Gelman et al., 2014), where the latter can be considered as a Bayesian variant of the common AIC. The CPO-related values are the density and the cdf of the leave-one-out posterior predictive distribution of an observation given all the other observations, evaluated at the observed $y_i$. With INLA, these cross-validation quantities can be calculated quickly without explicitly reestimating the model; see Rue, Martino and Chopin (2009) for details. However, certain theoretical assumptions might be violated such that some or all of these CPO-related values are not trustworthy, which is then indicated in the inla output in fit$cpo$failure. In such a case, the inla.cpo(...) function can be used for a "manual" reestimation of the cross-validated model for the concerned data points. We now construct the inla(...) call. For later use, we here also store the data and parameters of this first model in an object mod1:

data=inla.stack.data(stack,spde=spde)
form=y~-1+intercept+covar1+covar2+f(spatial,model=spde,
   group=spatial.group,control.group=list(model="ar1"))
cc=list(cpo=T,dic=T,mlik=T,waic=T)
cp=list(A=inla.stack.A(stack),compute=T,link=1)
ci=list(reordering="metis")
mod1=list(stack=stack,data=data,A.pred=A.pred,A.obs=A.obs,
   idx.spatial=idx.spatial,spde=spde,form=form,cp=cp,ci=ci,cc=cc)
fit=inla(form,family="poisson",data=data,control.compute=cc,
   control.predictor=cp,control.inla=ci)

Here we have used the default prior for the autoregression coefficient $a$, which is a Gaussian prior on the internal parametrization $\theta = \log\big((1 + a)/(1 - a)\big)$. We could have modified it by specifying the hyper argument in control.group; for example, control.group=list(model="ar1", hyper=list(rho=list(prior="normal", initial=0, param=c(0,25)))) would keep the Gaussian prior and set a high precision of $25$, therefore leading to an informative prior concentrating strongly around the value $a = 0$, which corresponds to temporal independence. Since our likelihood is not Gaussian (but Poisson) and since the latent model is quite complex, the inla run takes some time (around $45$ minutes for the run reported below, on a state-of-the-art 4-core machine), and memory requirements are rather high. Notice that the standard reordering scheme for the precision matrix could lead to a reduced computation time. We remark that inla(...) has a num.threads argument which allows fixing the maximum number of computation threads used by R-INLA. By default, R-INLA takes control over all available cores of the machine for parallel execution, which can lead to problems in terms of too high memory requirements since each thread occupies a certain amount of memory. As usual in R, we can now call summary(fit) to obtain the principal results of the fit. Part of its output is as follows:

[…]
Time used:
 Pre-processing    Running inla Post-processing           Total
         1.0956       2766.6987          1.0865       2768.8808
Fixed effects:
             mean     sd 0.025quant 0.5quant 0.975quant    mode kld
intercept -0.9094 0.1451    -1.1959  -0.9091    -0.6251 -0.9084   0
covar1     0.7827 0.2352     0.3193   0.7829     1.2444  0.7833   0
covar2     0.5092 0.0262     0.4579   0.5091     0.5610  0.5089   0
Random effects:
Name      Model
 spatial   SPDE2 model
Model hyperparameters:
                        mean     sd 0.025quant 0.5quant 0.975quant    mode
Theta1 for spatial   -3.5143 0.0670    -3.6459  -3.5142    -3.3828 -3.5139
Theta2 for spatial    2.2750 0.0696     2.1385   2.2750     2.4119  2.2750
GroupRho for spatial  0.4892 0.0325     0.4256   0.4889     0.5532  0.4875

Expected number of effective parameters(std dev): 1020.18(24.28)
Number of equivalent replicates : 2.941
Deviance Information Criterion (DIC) …: 8544.98
Effective number of parameters ………: 979.41
Watanabe-Akaike information criterion (WAIC) …: 8522.71
Effective number of parameters ……………..: 745.45
Marginal log-Likelihood:  -4752.98
[…]

We see that the posterior means of the autoregression coefficient (GroupRho) and of the covariate coefficients of covar1 and covar2 are not far from the actually simulated values, and the true values of those parameters lie clearly inside the credible intervals. The object fit is of list type; its various slots contain a multitude of information. For our model, we could be interested in a better interpretable representation of the hyperparameter estimates of the spatial SPDE model in terms of effective range and variance. In the following, inla.spde.result(...) extracts the fitting result for the spatial index spatial associated to the SPDE. Then, for instance, inla.qmarginal allows extracting posterior marginal quantiles, and inla.emarginal(FUN, ...) calculates posterior marginal expectations $\mathrm{E}\big(g(\theta) \mid \boldsymbol{y}\big) = \int g(\theta)\,\pi(\theta \mid \boldsymbol{y})\,\mathrm{d}\theta$ for a function $g$ and a posterior marginal density $\pi(\theta \mid \boldsymbol{y})$ (note that expectations of transformed variables have to be calculated in this way since $\mathrm{E}\,g(\theta) \neq g(\mathrm{E}\,\theta)$ in general).

result.spatial=inla.spde.result(fit,"spatial",spde)
inla.emarginal(identity,result.spatial$marginals.range.nominal[[1]])
[1] 0.2914313
inla.qmarginal(c(0.025,0.25,0.5,0.75,0.975),
   result.spatial$marginals.range.nominal[[1]])
[1] 0.2539053 0.2773829 0.2906817 0.3046177 0.3327678
inla.emarginal(identity,result.spatial$marginals.variance.nominal[[1]])
[1] 0.9515285
inla.qmarginal(c(0.025,0.25,0.5,0.75,0.975),
   result.spatial$marginals.variance.nominal[[1]])
[1] 0.8368486 0.9083829 0.9490067 0.9918896 1.0788070

Summary statistics for any marginal distribution can further be obtained through the function inla.zmarginal(...). We also plot a histogram of the cross-validated cdf values (the PIT, or probability integral transform, values), and give a histogram of the fit$cpo$failure values (between $0$ and $1$), where values far from $0$ indicate a violation of internal assumptions in the calculation of the cdf values (see Figure 3 for the resulting displays):

hist(fit$cpo$pit,breaks=50,main="",xlab="PIT value",ylab="number")
hist(fit$cpo$failure,breaks=50,main="",xlab="failure indicator",ylab="number")
Figure 3: Left: histogram of internally cross-validated cdf (PIT) values. Right: histogram of the failure indicator of trustworthiness of the calculated cdf values.

The cdf values are not very far from being uniform, which indicates that there are no strong systematic biases in posterior predictions made with the model. A certain proportion of the failure indicator values are above zero and some are even $1$, meaning that the corresponding cdf values must be interpreted with caution.

Since we have done prediction for the instant t.pred, we now extract and visualize the marginal posterior mean and standard deviation over the prediction grid. To get the indices of the predicted variables, we can apply the inla.stack.index function to the stack object by indicating the prediction sub-stack through the argument tag="pred". The following code visualizes the predictions and the originally simulated Gaussian values on the prediction grid. The inla.mesh.projector function allows switching between the finite element representation and a regular grid, by calculating the value of the Gauss–Markov finite element approximation to the SPDE at the grid positions, using the "pyramid" basis functions to interpolate between triangulation nodes. We use the inla.emarginal function for calculating posterior expectations, where the standard deviation is calculated as $\sqrt{\mathrm{E}(\eta^2 \mid \boldsymbol{y}) - \mathrm{E}(\eta \mid \boldsymbol{y})^2}$ in the following code:

idx.pred=inla.stack.index(stack, tag="pred")$data
eta.marginals=fit$marginals.linear.predictor[idx.pred]
eta.mean=unlist(lapply(eta.marginals,function(x) inla.emarginal(identity,x)))
eta.mean=matrix(eta.mean,n.grid,n.grid)
image(x=xgrid,y=xgrid,eta.mean,asp=1,xlab="x",ylab="y",main="posterior mean")
proj=inla.mesh.projector(mesh.sim,xlim=c(0,1),ylim=c(0,1),dims=c(n.grid,n.grid))
image(proj$x,proj$y,inla.mesh.project(proj,field=x[,t.pred]),xlab="x",
   ylab="y",xlim=c(0,1),ylim=c(0,1), asp=1,main="original")
points(sites,pch=19,cex=1)
eta2.mean=unlist(lapply(eta.marginals,function(x) inla.emarginal("^",x,2)))
eta2.mean=matrix(eta2.mean,n.grid,n.grid)
eta.sd=sqrt(eta2.mean-eta.mean^2)
image(x=xgrid,y=xgrid,eta.sd,asp=1,xlab="x",ylab="y",main="posterior sd")
points(sites,pch=19,cex=1)

Figure 4 shows the resulting displays. As expected, uncertainty is lower close to the observation sites. The prediction captures the spatial variation of the actual data, and a closer look shows that the predicted surface is smoother than the original values: since the Gaussian prior on the predictor is centered at $0$, the spatial variation in posterior predictions is naturally less strong, owing to the shrinkage inherent in the Bayesian approach.

Figure 4: Predictions of the latent Gaussian field for . Left: original simulated data. Middle: posterior mean. Right: posterior standard deviation.

4.3 Other candidate models

In Section 4.2, we have used our knowledge about the simulated data model to construct the Bayesian model that should be the most appropriate. For comparison and to illustrate R-INLA’s functionality for other types of models, we here propose to fit some alternative candidate models.

For Model 2, we drop the temporal autoregression and consider the spatial blocks of SPDE variables as independent in the prior. This necessitates some (slight) adaptations in the code since there is no longer a group model with dependence between blocks; instead, we now have $60$ replicates, i.e., independent blocks.

idx.spatial=inla.spde.make.index("spatial",n.spde=spde$n.spde,n.repl=n.repl)
A.obs=inla.spde.make.A(mesh,loc=sites,index=rep(1:nrow(sites),n.repl),
   repl=rep(1:n.repl,each=nrow(sites)))
stack.obs=inla.stack(data=list(y=y),A=list(A.obs,1),effects=list(idx.spatial,
   data.frame(intercept=1,covar1=rep(covar1,each=n.sites),
      covar2=rep(covar2,each=n.sites))),tag="obs")
A.pred=inla.spde.make.A(mesh,loc=grid.pred,index=1:nrow(grid.pred),
   repl=rep(t.pred,nrow(grid.pred)),n.spde=spde$n.spde,n.repl=n.repl)
stack.pred=inla.stack(data=list(y=NA),A=list(A.pred,1),effects=
   list(idx.spatial,data.frame(intercept=1,covar1=rep(covar1[t.pred],n.grid^2),
      covar2=rep(covar2[t.pred],n.grid^2))),tag="pred")
stack=inla.stack(stack.obs,stack.pred)
data=inla.stack.data(stack,spde=spde)
form=y~-1+intercept+covar1+covar2+f(spatial,model=spde,replicate=spatial.repl)
cp=list(A=inla.stack.A(stack),compute=T)
fit=inla(form,family="poisson",data=data,control.compute=mod1$cc,
   control.predictor=cp,control.inla=mod1$ci)

Model 3 focuses on the temporal trend, neglects spatial variation and supposes that the covariates are not known. Here we use a temporal first-order random walk as prior model. We set a rather high initial prior value of $10^4$ for the precision of the random walk innovations (corresponding to a standard deviation of $0.01$). A sum-to-zero constraint is added (constr=T) for better identifiability (notice that in the case of the rw1 model it is already added by default).

data3=data.frame(y=y,covar1=rep(covar1,each=n.sites))
form=y~f(covar1,model="rw1",hyper=list(prec=list(initial=log(1/.01^2),
   fixed=F)),constr=T)
cp=list(compute=T)

Once the model is fitted, it would be relatively easy to extract information about the posterior distribution of the random trend from the lists fit$summary.linear.predictor or fit$summary.fitted.values, which contain copies of the same posterior information for each time step, owing to the $50$ sites with data. As an alternative, we here illustrate the powerful lincomb tool to directly calculate posterior distributions for certain linear combinations of the latent effects, which is very useful whenever we need precise information on the posterior distribution of some linear combinations of certain latent variables. In our case, the values of the random trend are represented as the sum of the intercept (fixed effect) and each of the latent rw1 variables (random effect). The command

lc=inla.make.lincombs("(Intercept)"=rep(1,n.repl),covar1=diag(n.repl))

defines linear combinations with the structure "intercept plus i-th component of the random walk" for i=1,...,n.repl, where "(Intercept)" refers to the intercept if it has been defined implicitly in the formula, without a variable name assigned to it. We now fit the model:

fit=inla(form,family="poisson",data=data3,lincomb=lc,control.predictor=cp)

The fit object will contain a list summary.lincomb.derived providing the requested posterior summaries for the explicitly defined linear combinations of the latent variables. Another interesting model could be obtained by combining Models 2 and 3, i.e., by using a random walk in time together with a spatial SPDE model. Such a model is relatively complex, if not overly complicated, and its estimation is computationally demanding, so we do not consider it here. In Models 4 and 5, we specify a shape parameter of the Gauss–Markov Matérn model in the prior that is different from the simulated model, using either $\nu=0.5$ (exponential model) or $\nu=0$ (not a proper Matérn model, but still a valid covariance model).

nu=0.5 #model 4
nu=0 #model 5
alpha=nu+1 #alpha=nu+d/2 in spatial dimension d=2
spde=inla.spde2.matern(mesh=mesh,alpha=alpha,B.tau=matrix(c(0,1,0),nrow=1),
   B.kappa=matrix(c(0,0,1),nrow=1))
form=y~-1+intercept+covar1+covar2+f(spatial,model=spde,group=spatial.group,
   control.group=list(model="ar1"))
fit=inla(form,family="poisson",data=mod1$data,control.compute=mod1$cc,
   control.predictor=mod1$cp,control.inla=mod1$ci)

Finally, Model 6 uses not the Poisson likelihood but the negative binomial one, which has an additional overdispersion parameter $\theta>0$: the variance of the negative binomial distribution is $\mu(1+\mu/\theta)$, where $\mu$ denotes its mean. Notice that the Poisson distribution arises in the limit $\theta\rightarrow\infty$. The internal parametrization considers $\log\theta$ as a hyperparameter. We here fix a relatively high initial value $\theta=10$ and use an informative log-gamma prior with shape $10$ and rate $1$, such that the prior expectation of $\theta$ is $10$ (with prior variance $10$); this yields a prior likelihood model relatively close to the Poisson one:

cf=list(hyper=list(theta=list(initial=log(10),prior="loggamma",
   param=c(10,1))))
fit=inla(mod1$form,family="nbinomial",data=mod1$data,control.compute=mod1$cc,
   control.family=cf,control.predictor=mod1$cp,control.inla=mod1$ci)

More generally, the use of relatively narrow informative priors may improve the stability of computations in complex models. Recent modifications of R-INLA move towards a more systematic use of so-called penalized complexity priors (Simpson et al., 2014), designed to shrink the model towards a relatively simple reference model in a natural way that is invariant to reparametrizations of the prior parameters; shrinkage towards the reference occurs when the data do not provide clear evidence to the contrary.
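As an illustration, a penalized complexity prior on the precision of the rw1 effect of Model 3 could be specified as follows. This is a minimal sketch with purely illustrative values: the pc.prec prior is parametrized through param=c(u,alpha) such that $P(\sigma>u)=\alpha$ for the standard deviation $\sigma$ of the innovations.

hyper.pc=list(prec=list(prior="pc.prec",param=c(1,.01))) #P(sigma>1)=0.01
form.pc=y~f(covar1,model="rw1",hyper=hyper.pc,constr=T)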

4.4 Analyzing fitted models

We now compare the fitted models. For the purely temporal Model 3, the following code plots the posterior mean estimate of the fitted temporal trend (using the lincomb feature explained in Section 4.3) together with the simulated trend, see Figure 5:

plot(1:n.repl,fit$summary.lincomb.derived$mean,type="l",xlab="t",
   ylab="time trend",lwd=2)
lines(1:n.repl,xtrend,col="blue", lwd=2)

We see that neglecting the spatial variation in the data and using a relatively simple, purely temporal model still allows us to reconstruct the simulated temporal trend quite accurately.

Figure 5: Purely temporal random walk Model 3. Posterior mean curve (black) and simulated curve (blue).

Finally, we can compare the information criteria DIC and WAIC, the marginal likelihoods, and the estimates of spatial range, variance and temporal autocorrelation across the candidate models, see Table 1. The marginal likelihood is relatively close for all space-time models with an explicit spatial structure and temporal autoregression (Models 1, 4, 5 and 6), and is by far lowest for the purely temporal Model 3. Model 1, whose prior structure is closest to the simulated model, turns out best in terms of DIC, but has slightly higher WAIC values than Models 4 and 5, which are characterized by a different fixed shape parameter in the SPDE solution, leading to less smooth sample paths of the spatial prior random field. Notice, however, that the differences in the estimated WAIC values between Models 1, 4 and 5 are relatively small, such that they should be interpreted with caution. Model 6, with the negative binomial likelihood but the same latent Gaussian prior model as Model 1, has rather high DIC and WAIC values, perhaps due to approximations and computations that are less accurate for this fitted model. Concerning the range and variance parameters of the fitted spatial models, we find that they are close to the simulated values in all cases except Model 5, where the different prior shape parameter in the SPDE seems to have perturbed the estimation of the dependence structure and the variance. Throughout, the posterior distribution of the autoregression coefficient (where estimated) is concentrated around its true value.
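The quantities reported in Table 1 can be read off the fitted objects. A minimal sketch, assuming a fitted object fit and the spde object from above, and provided that DIC and WAIC have been requested through control.compute:

fit$dic$dic   #deviance information criterion
fit$waic$waic #Watanabe-Akaike information criterion
fit$mlik      #log marginal likelihood (two estimators)
#posterior summaries of nominal range and variance of the SPDE model
res.spde=inla.spde2.result(fit,"spatial",spde,do.transf=TRUE)
inla.zmarginal(res.spde$marginals.range.nominal[[1]])
inla.zmarginal(res.spde$marginals.variance.nominal[[1]])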

Table 1: Comparison of fitted models: DIC, WAIC, log marginal likelihood, spatial range, variance of the spatial model, and autoregression coefficient. All values are rounded to three significant digits and to at most two decimals. Estimates are posterior means; credible intervals are given in parentheses.

5 Discussion

We have illustrated the theory and practice of Integrated Nested Laplace Approximation, implemented in the powerful R-INLA software library, which enables fast and accurate inference for complex Bayesian models. The dynamic behavior and dependence structure of the models covered by R-INLA are primarily governed by a latent Gaussian process for the mean of the univariate data distribution. In view of the nearly endless range of available models, one should perhaps also sound a note of caution, since users may be misled into constructing overly complicated models in practice.

R-INLA has mechanisms to manage several different likelihoods in the same model, to use the same latent variables in different parts of the latent model (“copying”) and to deal with non-informative missing data. A more detailed explanation of recently added features of R-INLA can be found in Martins et al. (2013) and Ferkingstad and Rue (2015), where the latter paper describes an improvement over the default Laplace approximation strategy for difficult modeling cases where the number of latent variables is of the order of the number of observations, or where the concavity of the log-likelihood of the data is not strong enough. Although the INLA approach does not directly provide the posterior dependence structure between predictors or between the data distributions of different observations, one can assume a Gaussian copula model with the Gaussian dependence given by the precision matrix of the Gaussian approximation (11) to obtain a practically useful approximation of the posterior dependence; this approach is implemented through the inla.posterior.sample(...) function in R-INLA (see the sketch below). New users should look around at www.r-inla.org, the main hub for staying informed about new INLA-related developments, for finding implementation details of R-INLA and for getting advice on specific problems via its discussion forum.
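A minimal sketch of such posterior sampling, assuming a model specification as in Section 4.2; storing the internal representation of the fitted model via config=T in control.compute is required before sampling:

fit=inla(form,family="poisson",data=data,control.predictor=cp,
   control.compute=list(config=T))
samples=inla.posterior.sample(100,fit) #100 joint posterior samples
str(samples[[1]],max.level=1) #latent field, hyperparameters, log-density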

The SPDE approach, providing flexible spatial Gauss–Markov models whose Markovian structure leads to fast high-dimensional matrix computations, is of interest in its own right, far beyond the INLA framework. Multivariate extensions (Hu et al., 2013) and certain nested versions of SPDEs (Bolin and Lindgren, 2011) have already been proposed in the literature, although they are not yet available in R-INLA. Further modeling extensions in terms of data likelihoods and latent models can be expected in the near future. At the current stage, certain types of new, user-defined models may be implemented through the inla.rgeneric.define(...) function, as sketched below. In particular, the construction of more complex and realistic Gauss–Markov space-time dependence structures based on the SPDE approach, capable of capturing effects like the nonseparability of space and time, would be another big step forward.
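To give an idea of the rgeneric interface, the following minimal sketch defines a simple user-specified latent model (independent Gaussian variables with one precision hyperparameter); the dispatch over cmd follows the documented rgeneric protocol, and the Gamma prior and initial value are purely illustrative:

rgeneric.iid=function(cmd=c("graph","Q","mu","initial","log.norm.const",
   "log.prior","quit"),theta=NULL){
   interpret.theta=function() exp(theta[1L]) #precision, internally on log scale
   graph=function() Diagonal(n,x=1) #nonzero pattern of the precision matrix
   Q=function() interpret.theta()*Diagonal(n,x=1) #precision matrix
   mu=function() numeric(0) #zero mean
   log.norm.const=function() numeric(0) #let INLA compute the constant
   log.prior=function() #Gamma(1,1) prior on the precision, with Jacobian
      dgamma(interpret.theta(),shape=1,rate=1,log=TRUE)+theta[1L]
   initial=function() 0 #initial value of the log-precision
   quit=function() invisible()
   do.call(match.arg(cmd),args=list())
}
model.iid=inla.rgeneric.define(rgeneric.iid,n=100) #n is stored in the environment
#use f(idx,model=model.iid) in a model formula as usual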

6 Acknowledgements

The author is grateful to the editors and two reviewers for many helpful comments and to Håvard Rue for a discussion on the Laplace approximation (and for developing R-INLA with the help of numerous other contributors, of course!).

References
