
An intuitive Bayesian spatial model for disease mapping that accounts for scaling

Andrea Riebler (to whom correspondence should be addressed), Department of Mathematical Sciences, Norwegian University of Science and Technology, Norway, andrea.riebler@math.ntnu.no; Sigrunn H. Sørbye, Department of Mathematics and Statistics, UiT The Arctic University of Norway, Norway; Daniel Simpson, Department of Mathematical Sciences, University of Bath, United Kingdom; Håvard Rue

Abstract
In recent years, disease mapping studies have become a routine application within geographical epidemiology and are typically analysed within a Bayesian hierarchical model formulation. A variety of model formulations for the latent level have been proposed, but all come with inherent issues. In the classical BYM (Besag, York and Mollié) model, the spatially structured component cannot be seen independently from the unstructured component. This makes prior definitions for the hyperparameters of the two random effects challenging. There are alternative model formulations that address this confounding; however, the issue of how to choose interpretable hyperpriors is still unsolved. Here, we discuss a recently proposed parameterisation of the BYM model that leads to improved parameter control, as the hyperparameters can be seen independently from each other. Furthermore, the need for a scaled spatial component is addressed, which facilitates assignment of interpretable hyperpriors and makes these transferable between spatial applications with different graph structures. The hyperparameters themselves are used to define flexible extensions of simple base models. Consequently, penalised complexity (PC) priors for these parameters can be derived based on the information-theoretic distance from the flexible model to the base model, giving priors with a clear interpretation. We provide implementation details for the new model formulation which preserve sparsity properties, and we investigate systematically the model performance and compare it to existing parameterisations. Through a simulation study, we show that the new model performs well, exhibiting both good learning ability and good shrinkage behaviour. In terms of model choice criteria, the proposed model performs at least as well as existing parameterisations, but only the new formulation offers parameters that are interpretable and hyperpriors that have a clear meaning.

KEY WORDS: disease mapping, Bayesian hierarchical model, INLA, Leroux model, penalised complexity prior, scaling

## 1 Introduction

In recent years, there has been much interest in spatial modelling and mapping of disease or mortality rates. Due to inherent sampling variability, it is not recommended to inspect crude rates directly, but rather to borrow strength over neighbouring regions to obtain more reliable region-specific estimates. The state of the art is to use Bayesian hierarchical models, where the risk surface is modelled using a set of spatial random effects, in addition to potentially available covariate information [1]. The random effects are intended to capture unobserved heterogeneity or spatial correlation that cannot be explained by the available covariates [2]. However, there has been much confusion about the design of the spatial random effects. Besag et al. [3] proposed an intrinsic autoregressive model, often referred to as the CAR prior or Besag model, where the spatial effect of a particular region depends on the effects of all neighbouring regions. They also proposed the commonly known BYM model, where an additional unstructured spatial random effect is included to account for independent region-specific noise. Subsequently, various model modifications and alternative approaches have been proposed, see for example Leroux et al. [4], Stern and Cressie [5] and Dean et al. [6]. The appropriateness and behaviour of different latent spatial models have been compared in a full Bayesian context [7, 8, 9, 2], in an empirical Bayes setting [10] and using penalised quasi-likelihood [4]. It is accepted that the Besag model may lead to misleading results when there is no spatial correlation in the data [4, 8]. For this reason, most alternative models propose to account for unstructured variability in space.

By including both structured and unstructured components in the model, potential confounding must be addressed [11], as otherwise it is not clear how to split the variability between the effects. This problem has motivated the development of reparameterised models, in which the precision parameters of the two components are replaced by a common precision parameter and a mixing parameter, which distributes the variability between the structured and unstructured components [4, 6].

However, existing approaches have two issues. First, the spatially structured component is not scaled. This implies that the precision parameter does not represent the marginal precision but is confounded with the mixing parameter. Thus, the effect of any prior assigned to the precision parameter depends on the graph structure of the application. This has the additional effect that a given prior is not transferable between applications if the underlying graph changes [12]. Second, the choice of hyperpriors for the random effects is not straightforward. Approaches to designing epidemiologically sensible hyperpriors include, among others, Bernardinelli et al. [11] and Wakefield [8].

Recently, Simpson et al. [13] proposed a new BYM parameterisation that accounts for scaling and provides an intuitive way to define priors by taking the model structure into account. This formulation offers a fresh perspective on the BYM model. The primary goal is not to optimise model choice criteria, such as DIC values, but to provide a sensible model formulation in which all parameters have a clear meaning. The model structure is similar to the Dean model [6], with the crucial modification that the precision parameter is mapped to the marginal standard deviation. This makes the parameters of the model interpretable and facilitates assignment of meaningful hyperpriors. The framework of penalised complexity (PC) priors is applied to formulate prior distributions for the hyperparameters. The spatial model is thereby seen as a flexible extension of two simpler base models, towards which it will shrink if not indicated otherwise by the data [13]. The upper-level base model assumes a constant risk surface, while the lower-level model assumes a risk surface that varies over space without spatial autocorrelation. In this paper we systematically investigate the model behaviour under different simulation scenarios. Furthermore, we compare this new model formulation to commonly used model formulations in disease mapping. We point out differences, focusing on parameter interpretability and prior distribution assignment. For completeness, we also compare commonly used model choice criteria.

We have chosen to implement all models using integrated nested Laplace approximations (INLA) [14], available to the applied user via the R-package INLA (see www.r-inla.org). INLA is straightforward to use for full Bayesian inference in disease mapping and avoids Markov chain Monte Carlo techniques [15, 16].

The plan for the paper is as follows. Section 2 gives an introduction to disease mapping and motivates the use of Bayesian hierarchical models. Commonly used spatial models for the latent level are reviewed in Section 3, before the need for scaling is discussed and the new modified spatial model is presented. Section 4 uses the PC-prior framework to design sensible hyperpriors for the precision and mixing parameters. In Section 5, we investigate the properties and behaviour of the new model in a simulation study, including comparisons to commonly used models. This section also contains an application to insulin-dependent diabetes mellitus data from Sardinia. Discussion and concluding remarks are given in Section 6.

## 2 Disease Mapping

Disease mapping has a long history in epidemiology, and one major goal is to investigate the geographical distribution of disease burden [8]. In the following, we assume that our region of interest is divided into $n$ non-overlapping areas. Let $E_i$ denote the expected number of cases in area $i$ ($i = 1, \ldots, n$). These expected numbers are commonly calculated based on the size and demographic characteristics of the population living in area $i$ [2]. Further, let $y_i$ denote the number of cases or deaths in region $i$. When the disease is non-contagious and rare, it is usually reasonable to assume that

$$y_i \mid \theta_i \sim \text{Poisson}(E_i \theta_i),$$

where $\theta_i$, $i = 1, \ldots, n$, denotes the underlying true area-specific relative risk [11].

The maximum likelihood estimator of $\theta_i$ is given by $\hat{\theta}_i = y_i / E_i$ and corresponds to the standardised mortality ratio (SMR) if the counts represent deaths. Mapping the SMRs directly can, however, be misleading. Areas with extreme SMR values often have small expected numbers, so that sampling variability is more pronounced. This is reflected in large values of the sampling variance $\mathrm{Var}(\hat{\theta}_i \mid \theta_i) = \theta_i / E_i$; see Bernardinelli et al. [17] or Wakefield [8] for more details.
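A minimal sketch (with illustrative values, not taken from the paper) makes this instability concrete: under $y_i \mid \theta_i \sim \text{Poisson}(E_i\theta_i)$, the SMR $y_i/E_i$ has sampling variance $\theta_i/E_i$, so shrinking the expected count by a factor of ten inflates the SMR noise by the same factor.

```python
# Illustrative sketch (values are hypothetical): sampling variance of the
# SMR y_i / E_i under y_i ~ Poisson(E_i * theta_i) equals theta_i / E_i,
# so areas with small expected counts produce very noisy raw SMRs.
theta = 1.0  # true relative risk
var_smr = {E: theta / E for E in (2.0, 20.0, 200.0)}
for E, v in var_smr.items():
    print(f"E = {E:6.1f}  Var(SMR) = {v:.4f}")
```

This is exactly why extreme SMR values tend to appear in areas with small expected counts.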

Due to these potential instabilities, methods have been proposed to smooth the raw estimates over space. Bayesian hierarchical models are commonly used in this context, where the latent model involves random effects that borrow strength over neighbouring regions and thus achieve local smoothness [17]. We refer to Lawson [1] and Banerjee et al. [18] for a detailed description of the class of Bayesian hierarchical models.

A general model formulation is to assume that the log risk $\eta_i = \log(\theta_i)$ has the decomposition:

$$\eta_i = \mu + \mathbf{z}_i^\top \boldsymbol{\beta} + b_i.$$

Here, $\mu$ denotes the overall risk level, $\mathbf{z}_i$ a set of covariates with corresponding regression parameters $\boldsymbol{\beta}$, and $b_i$ a random effect. For $\mu$ and $\boldsymbol{\beta}$ we assign weakly informative Gaussian distributions with mean zero and variance 100. The random effects are used to account for extra-Poisson variation or spatial correlation due to latent or unmeasured risk factors. In the following, we will review four models that are commonly used for $b_i$.

## 3 Modelling the spatial dependency structure

In general, it seems reasonable to assume that areas that are close in space show a more similar disease burden than areas that are far apart. To define what “close” means, we need to set up a neighbourhood structure. A common assumption is to regard areas $i$ and $j$ as neighbours if they share a common border, denoted here as $i \sim j$. This seems reasonable if the regions are equally sized and arranged in a regular pattern [8]. We further denote the set of neighbours of region $i$ by $\delta_i$ and its size by $n_{\delta_i}$.

### 3.1 Review of common models

#### 3.1.1 The Besag model

One of the most popular approaches to model spatial correlation is the intrinsic Gaussian Markov random field (GMRF) model [3], here referred to as the Besag model. The conditional distribution for $b_i$ is

$$b_i \mid \mathbf{b}_{-i}, \tau_b \sim \mathcal{N}\left(\frac{1}{n_{\delta_i}} \sum_{j \in \delta_i} b_j, \; \frac{1}{n_{\delta_i} \tau_b}\right), \tag{1}$$

where $\tau_b$ is a precision parameter and $\mathbf{b}_{-i}$ denotes the vector of all effects except $b_i$. The conditional mean of $b_i$ equals the mean of the effects over all neighbours, and the conditional precision is proportional to the number of neighbours $n_{\delta_i}$. The joint distribution of $\mathbf{b} = (b_1, \ldots, b_n)^\top$ is

$$\pi(\mathbf{b} \mid \tau_b) \propto \exp\left(-\frac{\tau_b}{2} \sum_{i \sim j} (b_i - b_j)^2\right) \propto \exp\left(-\frac{\tau_b}{2}\, \mathbf{b}^\top \mathbf{Q} \mathbf{b}\right),$$

where the precision matrix $\mathbf{Q}$ has entries

$$Q_{ij} = \begin{cases} n_{\delta_i} & i = j, \\ -1 & i \sim j, \\ 0 & \text{otherwise}. \end{cases} \tag{2}$$

The model is intrinsic in the sense that $\mathbf{Q}$ is singular, i.e. it has a non-empty null space; see Rue and Held (Section 3) for details [19]. For any map, the vector $\mathbf{1}$ of length $n$ containing only ones is an eigenvector of $\mathbf{Q}$ with eigenvalue $0$. Hence, the density is invariant to the addition of a constant. If the spatial map has islands, the definition of the null space is more complex and depends on how islands are defined, i.e. whether they form a new unconnected graph or are linked to the mainland; see Section 5.2.1 in Hodges [20]. The rank deficiency of the Besag model is equal to the number of connected subgraphs; it is equal to one if all regions are connected. Hence, the density for the Besag model is

$$\pi(\mathbf{b} \mid \tau_b) = K \tau_b^{(n-I)/2} \exp\left(-\frac{\tau_b}{2}\, \mathbf{b}^\top \mathbf{Q} \mathbf{b}\right),$$

where $I$ denotes the number of connected subgraphs and $K$ is a constant [20]. To prevent confounding with the intercept, sum-to-zero constraints are imposed on each connected subgraph.
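As a small illustration (not part of the original paper), the structure matrix in (2) can be built directly from a neighbourhood list; the row sums are zero, confirming that $\mathbf{1}$ lies in the null space. The 4-region path graph below is a toy example:

```python
# Toy sketch: build the Besag structure matrix Q of (2) from an adjacency
# list. The graph here (a path 0-1-2-3) is hypothetical; any symmetric
# neighbourhood structure works the same way.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
n = len(neighbours)
Q = [[0.0] * n for _ in range(n)]
for i, nbrs in neighbours.items():
    Q[i][i] = float(len(nbrs))   # diagonal: number of neighbours n_delta_i
    for j in nbrs:
        Q[i][j] = -1.0           # off-diagonal: -1 whenever i ~ j
# Q is singular: every row sums to zero, so Q @ 1 = 0
row_sums = [sum(row) for row in Q]
print(row_sums)
```

The zero row sums are exactly the statement that $\mathbf{1}$ is an eigenvector of $\mathbf{Q}$ with eigenvalue $0$.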

#### 3.1.2 The BYM model

The Besag model assumes only a spatially structured component and cannot take a limiting form that allows for no spatially structured variability. Hence, unstructured random error, or pure overdispersion within area $i$, will be modelled as spatial correlation, giving misleading parameter estimates [21].

To address this issue, the Besag-York-Mollié (BYM) model [3] decomposes the regional spatial effect into a sum of an unstructured and a structured spatial component, so that $b_i = u_i + v_i$. Here, $\mathbf{v} \sim \mathcal{N}(\mathbf{0}, \tau_v^{-1} \mathbf{I})$ accounts for pure overdispersion, while $\mathbf{u} \sim \mathcal{N}(\mathbf{0}, \tau_u^{-1} \mathbf{Q}^{-})$ is the Besag model (Section 3.1.1), whereby $\mathbf{Q}^{-}$ denotes the generalised inverse of $\mathbf{Q}$. The resulting covariance matrix of $\mathbf{b}$ is

$$\mathrm{Var}(\mathbf{b} \mid \tau_u, \tau_v) = \tau_v^{-1} \mathbf{I} + \tau_u^{-1} \mathbf{Q}^{-}.$$

#### 3.1.3 The Leroux model

In the BYM model, the structured and unstructured components cannot be seen independently from each other and are thus not identifiable [9]. An additional challenge, which makes the choice of hyperpriors more difficult, is that $\tau_v$ and $\tau_u$ do not represent variability on the same level. While $\tau_v^{-1}$ can be interpreted as the marginal variance of the unstructured random effects, $\tau_u$ controls the variability of the structured component $u_i$ conditional on the effects in its neighbouring areas [11, 8]. Leroux et al. [4] proposed an alternative model formulation to make the compromise between unstructured and structured variation more explicit. Here, $\mathbf{b}$ is assumed to follow a normal distribution with mean zero and covariance matrix

$$\mathrm{Var}(\mathbf{b} \mid \tau_b, \phi) = \tau_b^{-1} \left((1-\phi)\mathbf{I} + \phi \mathbf{Q}\right)^{-1}, \tag{3}$$

where $\phi \in [0, 1]$ denotes a mixing parameter. The model reduces to pure overdispersion if $\phi = 0$, and to the Besag model when $\phi = 1$ [10, 9]. The conditional expectation of $b_i$, given all other random effects, results as a weighted average of the zero mean of the unstructured model and the mean of the Besag model (1). The conditional precision is the corresponding weighted average of $\tau_b$ and $n_{\delta_i} \tau_b$.
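The two limits of (3) can be checked numerically. The sketch below (a toy 4-region path graph, not from the paper) verifies that $\phi = 0$ gives the pure overdispersion covariance $\tau_b^{-1}\mathbf{I}$, while $\phi \to 1$ approaches the singular Besag precision, so the marginal variances blow up:

```python
import numpy as np

# Toy check of the Leroux covariance (3): Q is the Besag structure matrix
# of a hypothetical 4-region path graph, tau_b = 1.
Q = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
tau_b = 1.0

def leroux_cov(phi):
    """Covariance tau_b^{-1} ((1 - phi) I + phi Q)^{-1} of the Leroux model."""
    return np.linalg.inv((1.0 - phi) * np.eye(4) + phi * Q) / tau_b

# phi = 0: independent noise with Var(b) = I / tau_b
assert np.allclose(leroux_cov(0.0), np.eye(4))
# phi close to 1: near-singular precision, marginal variances grow large
print(np.diag(leroux_cov(0.99)))
```

The growth of the diagonal as $\phi \to 1$ reflects the intrinsic (improper) limit of the Besag model.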

#### 3.1.4 The Dean model

Dean et al. [6] proposed a reparameterisation of the BYM model where

$$\mathbf{b} = \frac{1}{\sqrt{\tau_b}} \left(\sqrt{1-\phi}\, \mathbf{v} + \sqrt{\phi}\, \mathbf{u}\right), \tag{4}$$

having covariance matrix

$$\mathrm{Var}(\mathbf{b} \mid \tau_b, \phi) = \tau_b^{-1} \left((1-\phi)\mathbf{I} + \phi \mathbf{Q}^{-}\right). \tag{5}$$

Equation (4) is a reparameterisation of the original BYM model with $\tau_v = \tau_b / (1-\phi)$ and $\tau_u = \tau_b / \phi$ [9]. The additive decomposition of the variance is then on the log relative risk scale. This is in contrast to the Leroux model (3), where the precision matrix of $\mathbf{b}$ results as a weighted average of the precision matrices of the unstructured and structured spatial components. As a consequence, the additive decomposition of variance in the Leroux model happens on the log relative risk scale conditional on the neighbouring effects $b_j$, $j \in \delta_i$ [4].

### 3.2 Why do we need scaling?

One issue inherent to all the models described in Section 3.1 is that the spatially structured component is not scaled. However, as discussed in Sørbye and Rue [12], scaling is crucial to facilitate hyperprior assignment and to guarantee that hyperpriors used in one application have the same interpretation in another application.

The Besag model is an intrinsic GMRF and penalises local deviation from its null space, which is a constant level in the case of one connected component [19]. The hyperprior will control this local deviation and, as such, influence the smoothness of the estimated spatial effects. If the estimated field is too smooth, the precision is large and potential spatial variation might be blurred. On the other hand, if the precision is too small the model might overfit due to large local variability [12].

In the following we consider only the structured effect $\mathbf{u}$ as introduced in the Besag model in Section 3.1.1. In general, the marginal variances $\tau_b^{-1} [\mathbf{Q}^{-}]_{ii}$ depend on the graph structure reflected by the structure matrix $\mathbf{Q}$ [12]. This can be illustrated by calculating a generalised variance, computed as the geometric mean (or some other typical value) of the marginal variances:

$$\sigma^2_{\mathrm{GV}}(\mathbf{u}) = \exp\left(\frac{1}{n} \sum_{i=1}^{n} \log\left(\frac{1}{\tau_b} [\mathbf{Q}^{-}]_{ii}\right)\right) = \frac{1}{\tau_b} \exp\left(\frac{1}{n} \sum_{i=1}^{n} \log\left([\mathbf{Q}^{-}]_{ii}\right)\right). \tag{6}$$

The generalised variances for two different graph structures are typically not equal, even when the graphs have the same number of regions.

As an example, Figure 1 shows two administrative regions of Germany, Mittelfranken and Arnsberg, that are both fully connected. Assume that we fit models to these two regions separately, including a spatially structured effect in the models. As both regions have 12 districts, we may be tempted to use identical priors for the precision in these two cases. However, the prior will penalise the local deviation from a constant level differently: for $\tau_b = 1$ in (6), the resulting values of $\sigma^2_{\mathrm{GV}}(\mathbf{u})$ differ between Mittelfranken and Arnsberg, and change again if we fit a similar model to all the districts in Germany. These differences, reflecting a characteristic level for the marginal variances, might be even more striking for other applications. We find that the value of the generalised variance is larger for more complex neighbourhood structures, for example for the map of the 3111 counties in the continental United States. In order to unify the interpretation of a chosen prior for $\tau_b$ and make it transferable between applications, the structured effect needs to be scaled so that $\sigma^2_{\mathrm{GV}}(\mathbf{u}) = 1/\tau_b$. This implies that $\tau_b$ represents the precision of the (marginal) deviation from a constant level, independently of the underlying graph.

This issue of scaling applies to all intrinsic GMRFs (IGMRFs) [12]. In practice, the scaling might be costly, as it involves inversion of a singular matrix. The generalised inverse can be used, but the relative tolerance threshold used to detect zero singular values is then crucial; knowing the rank deficiency of the matrix, the smallest eigenvalues could instead be removed. IGMRFs have much in common with GMRFs conditional on linear constraints, see [19, Section 3.2]. Hence, if we know the null space of $\mathbf{Q}$, we can use sparse matrix algebra to compute the marginal variances. First, $\mathbf{Q}$ is made regular by adding a small term to the diagonal, and the marginal variances under the linear constraints are then computed according to [19, Section 3.2, Equation 3.17]. In this way, we take advantage of the sparse matrix structure of $\mathbf{Q}$. Extracting the variance components, we compute $\sigma^2_{\mathrm{GV}}(\mathbf{u})$ and use this as a factor to scale $\mathbf{Q}$. The R-package INLA, available from www.r-inla.org, offers a function inla.scale.model which takes as arguments any singular precision matrix $\mathbf{Q}$ and the linear constraints spanning the null space of $\mathbf{Q}$. The scaled precision matrix, for which the geometric mean of the marginal variances is equal to one, is returned. For the Besag model on a connected graph the null space is spanned by $\mathbf{1}$, so that we can scale the matrix in R using

```r
Q = inla.scale.model(Q, constr = list(A = matrix(1, nrow = 1, ncol = n), e = 0))
```
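The scaling step can also be sketched directly, here with a dense generalised inverse on a hypothetical 4-region path graph rather than the sparse constrained computation described above:

```python
import numpy as np

# Illustrative sketch of eq. (6) and the scaling of the Besag structure
# matrix, mimicking what inla.scale.model returns, on a toy path graph.
Q = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
Qg = np.linalg.pinv(Q)                               # generalised inverse Q^-
sigma2_gv = np.exp(np.mean(np.log(np.diag(Qg))))     # eq. (6) with tau_b = 1
Q_scaled = Q * sigma2_gv                             # scaled precision matrix
# after scaling, the geometric mean of the marginal variances equals one
gv_scaled = np.exp(np.mean(np.log(np.diag(np.linalg.pinv(Q_scaled)))))
assert abs(gv_scaled - 1.0) < 1e-10
print(sigma2_gv)
```

Multiplying $\mathbf{Q}$ by $\sigma^2_{\mathrm{GV}}$ divides every marginal variance by the same factor, which is why the scaled generalised variance is exactly one.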

We would like to note that scaling is also crucial if the goal is hypothesis testing involving an IGMRF, as for example in Dean et al. [6], since otherwise the critical region would naturally depend on the scale of the random effect. This also applies outside a Bayesian setting.

### 3.3 A modified BYM model accounting for scaling

Simpson et al. [13] propose a modification of the Dean model which addresses both the identifiability and the scaling issue of the BYM model. The model uses a scaled structured component $\mathbf{u}_\star$, whose precision matrix $\mathbf{Q}_\star$ is the precision matrix of the Besag model, scaled according to Section 3.2. This gives a modified version of the random effect

$$\mathbf{b} = \frac{1}{\sqrt{\tau_b}} \left(\sqrt{1-\phi}\, \mathbf{v} + \sqrt{\phi}\, \mathbf{u}_\star\right), \tag{7}$$

having covariance matrix

$$\mathrm{Var}(\mathbf{b} \mid \tau_b, \phi) = \tau_b^{-1} \left((1-\phi)\mathbf{I} + \phi \mathbf{Q}_\star^{-}\right). \tag{8}$$

Breslow et al. [21] criticised the BYM model for obscuring the conditional mean and conditional variance. Using the new parameterisation, $\mathbf{b}$ has a clear and intuitive interpretation, and interpretations in terms of the conditional distribution of $b_i$ given its neighbours $b_j$, $j \in \delta_i$, are avoided.

Similarly to the Leroux and the Dean model, (7) emphasises the compromise between pure overdispersion and spatially structured correlation, where $\phi$ measures the proportion of the marginal variance explained by the structured effect. The model reduces to pure overdispersion for $\phi = 0$ and to the Besag model, i.e. only spatially structured risk, when $\phi = 1$. More importantly, using the standardised $\mathbf{Q}_\star$, the marginal variances will be approximately equal to $\tau_b^{-1}$. The approximation arises because $\mathbf{Q}_\star^{-}$ by construction does not give the same marginal variance for all regions, so that the standardisation only holds at the overall level of the generalised variance $\sigma^2_{\mathrm{GV}}$. With the scaled structured effect $\mathbf{u}_\star$, the random effect $\mathbf{b}$ is also scaled, and the prior imposed on $\tau_b$ keeps the same interpretation when transferred between graphs. Furthermore, the hyperparameters $\tau_b$ and $\phi$ are now interpretable and no longer confounded. It should be noted that, in contrast to the Dean model, the Leroux model cannot be scaled, since by construction the scaling would depend on the value of $\phi$.

### 3.4 Parameterisation preserving sparsity

A challenge of the new model formulation in (7) and (8) is that the covariance matrix involves the generalised inverse $\mathbf{Q}_\star^{-}$. The precision matrix, defined as the inverse of (8), is then no longer sparse. Sparsity properties are important for computational efficiency, both when implementing the model in the INLA framework [14] and when using block updating in Markov chain Monte Carlo methods. To keep the sparsity of the precision matrix, we propose an augmented parameterisation.

Recall that $\mathbf{b} = \frac{1}{\sqrt{\tau_b}}(\sqrt{1-\phi}\, \mathbf{v} + \sqrt{\phi}\, \mathbf{u}_\star)$ in (7), where $\mathbf{u}_\star$ is scaled and $\mathbf{v} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. Let $\mathbf{w} = (\mathbf{w}_1^\top, \mathbf{w}_2^\top)^\top$, where $\mathbf{w}_1 = \mathbf{b}$ and $\mathbf{w}_2 = \mathbf{u}_\star$. Thus,

$$\mathbf{w}_1 \mid \mathbf{w}_2 \sim \mathcal{N}\left(\sqrt{\frac{\phi}{\tau_b}}\, \mathbf{w}_2, \; \frac{1-\phi}{\tau_b}\, \mathbf{I}\right).$$

As $\mathbf{w}_2 \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}_\star^{-})$, it follows that $\mathbf{w}$ is normally distributed with mean $\mathbf{0}$ and precision matrix

$$\begin{pmatrix} \dfrac{\tau_b}{1-\phi}\, \mathbf{I} & -\dfrac{\sqrt{\phi \tau_b}}{1-\phi}\, \mathbf{I} \\[2ex] -\dfrac{\sqrt{\phi \tau_b}}{1-\phi}\, \mathbf{I} & \mathbf{Q}_\star + \dfrac{\phi}{1-\phi}\, \mathbf{I} \end{pmatrix}.$$

The marginal distribution of $\mathbf{w}_1 = \mathbf{b}$ then has the correct distribution. Working with this parameterisation, sparsity is preserved and the structured component is given directly by $\mathbf{w}_2 = \mathbf{u}_\star$, i.e. the second half of the vector $\mathbf{w}$.
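The augmented parameterisation can be verified numerically on a toy graph. The sketch below (not from the paper) builds the joint precision matrix above, adding a small diagonal jitter to the singular $\mathbf{Q}_\star$ so it can be inverted, and checks that the marginal covariance of $\mathbf{w}_1$ matches (8):

```python
import numpy as np

# Toy check (hypothetical 4-region path graph) that the joint precision of
# w = (b, u_star) yields the marginal covariance (8) for b. A tiny jitter
# eps makes the singular scaled Besag matrix invertible for the comparison.
Q = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
Q_star = Q * np.exp(np.mean(np.log(np.diag(np.linalg.pinv(Q)))))  # scaled as in Sec. 3.2
n, tau_b, phi, eps = 4, 2.0, 0.4, 1e-6
I = np.eye(n)
Qj = Q_star + eps * I                    # jittered, invertible stand-in for Q_star
c = np.sqrt(phi * tau_b) / (1.0 - phi)   # off-diagonal block coefficient
P = np.block([[tau_b / (1.0 - phi) * I, -c * I],
              [-c * I,                   Qj + phi / (1.0 - phi) * I]])
cov_b = np.linalg.inv(P)[:n, :n]         # marginal covariance of w_1 = b
target = ((1.0 - phi) * I + phi * np.linalg.inv(Qj)) / tau_b   # eq. (8)
assert np.allclose(cov_b, target)
```

Only the sparse blocks of $\mathbf{P}$ (diagonal blocks plus $\mathbf{Q}_\star$) ever need to be stored, which is exactly the computational advantage claimed above.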

## 4 Hyperprior choice: Penalised-complexity priors

In applying the original BYM model formulation, priors are usually specified separately for the precision parameters $\tau_u$ and $\tau_v$, and the prior choices are frequently rather ad hoc [11]. A reasonable alternative is to express prior beliefs in terms of the total variability of the model [8], which also facilitates clear interpretation of the hyperparameters.

Here, we follow Simpson et al. [13] and apply the framework of penalised complexity (PC) priors, which allows us to distribute the total variance over the components of the modified BYM model. The construction of PC priors is based on four principles:

1. Occam's razor: simpler models should be preferred until there is enough support for a more complex model.
2. Measure of complexity: the Kullback-Leibler divergence (KLD) is used to measure increased complexity.
3. Constant-rate penalisation: the deviation from the simpler model is penalised using a constant decay rate.
4. User-defined scaling: the user has an idea of a sensible size of the parameter of interest (or of some transformation of it).

For ease of readability, we provide the main ideas and results of the PC framework, and refer to Simpson et al. [13], Section 6 and Appendices A.2 and A.3, for further mathematical details.

Hyperpriors are formulated on two levels. First, a prior for $\tau_b$ is defined, which controls the marginal variance contribution of the weighted sum of $\mathbf{v}$ and $\mathbf{u}_\star$. Second, the marginal variance is distributed to the components $\mathbf{v}$ and $\mathbf{u}_\star$ by defining a suitable prior for the mixing parameter $\phi$. Note that testing for the risk decomposition, as proposed in MacNab [9], is implicitly addressed by the PC-prior framework at this level. The PC prior will automatically shrink towards $\phi = 0$, which corresponds to no spatial smoothing. The modified BYM model is thus seen as a flexible extension of two dependent base models, towards which it will shrink unless the data indicate otherwise.

### 4.1 Controlling the spatial variance contribution

The simplest model, referred to as a first-level base model, is to have a constant relative risk over all areas, i.e. no spatial variation or equivalently infinite precision. All regions have the same disease burden. Following the principles of the PC prior framework, we penalise model complexity in terms of a measure of information-theoretic deviation from the flexible model to the base model, which is a function of the Kullback-Leibler divergence

$$\mathrm{KLD}(\tau_b) = \mathrm{KLD}\left(\mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_{\mathrm{flex}}(\tau_b)) \,\|\, \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_{\mathrm{base}})\right). \tag{9}$$

KLD measures the information lost using the base model to approximate the more flexible model. To facilitate interpretation, this divergence is transformed to a unidirectional measure of distance defined by

$$d(\tau_b) = \sqrt{2\, \mathrm{KLD}(\tau_b)}.$$

In our case, $\boldsymbol{\Sigma}_{\mathrm{flex}}(\tau_b)$ is the covariance matrix defined in (8), while the covariance matrix of the base model, $\boldsymbol{\Sigma}_{\mathrm{base}}$, is the zero matrix, reflecting infinite precision. Details regarding the computation of (9) are presented in Simpson et al. [13] and, for completeness, provided in Appendix C. On the distance scale, the meaning of a prior is much clearer. The prior should have its mode at zero, i.e. at the base model, and a density that decreases with increasing distance. Different priors fulfil these properties but differ in how they decay with increasing distance. Since we are not in a variable selection setting, we follow the recommendation of Simpson et al. [13] and use the exponential distribution, which supports constant-rate penalisation. Applying the ordinary change-of-variables transformation, a type-2 Gumbel distribution [13]

$$\pi(\tau_b) = \frac{\theta}{2}\, \tau_b^{-3/2} \exp\left(-\theta\, \tau_b^{-1/2}\right)$$

is obtained as the prior for $\tau_b$.

The given prior corresponds to an exponential prior with rate $\theta$ for the standard deviation $\sigma = \tau_b^{-1/2}$. Hence, to infer $\theta$ we can use the probability statement $\mathrm{Prob}(\sigma > U) = \alpha$, which gives $\theta = -\ln(\alpha)/U$. It is not obvious how to interpret $\sigma$ epidemiologically. However, in the BYM2 model, $\tau_b$ represents the marginal precision and, as such, can be related to the total residual relative risk, for which an intuitive interpretation is available [8]. For example, an assumed high probability of having residual relative risks smaller than 2 translates approximately into a corresponding probability statement for $\sigma$. Figure 2 (left panel) shows the resulting prior compared to two gamma distributions, which will be used in the simulation study in Section 5. Please note that the use of gamma priors is questionable, as the base model, i.e. distance equal to zero, has no density mass and can therefore never be obtained. We refer to Simpson et al. [13] for further discussion about overfitting of priors.
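As a small numerical sketch (the values $U = 1$ and $\alpha = 0.01$ are illustrative choices, not prescribed by the paper), the rate $\theta$ and the type-2 Gumbel density follow directly from the probability statement above:

```python
import math

# Sketch of the PC prior for tau_b: sigma = 1/sqrt(tau_b) is Exp(theta),
# and theta is fixed by Prob(sigma > U) = alpha. U and alpha are illustrative.
U, alpha = 1.0, 0.01
theta = -math.log(alpha) / U

def pc_prior_tau(tau):
    """Type-2 Gumbel density (theta/2) tau^{-3/2} exp(-theta tau^{-1/2})."""
    return theta / 2.0 * tau ** (-1.5) * math.exp(-theta / math.sqrt(tau))

# sanity check: under an Exp(theta) prior, Prob(sigma > U) = exp(-theta U)
assert abs(math.exp(-theta * U) - alpha) < 1e-12
print(f"theta = {theta:.3f}, density at tau_b = 1: {pc_prior_tau(1.0):.4f}")
```

Tightening the statement (smaller $\alpha$ or smaller $U$) increases $\theta$ and therefore pushes more prior mass towards the base model of infinite precision.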

### 4.2 Distributing the spatial variance contribution

Assume now that the relative risk has spatial variation, implying that the disease burden is not constant over the regions. Conditional on a fixed marginal variance, the second-level base model is defined as having only unstructured spatial noise and no spatial dependency, i.e. $\phi = 0$. By increasing the value of $\phi$, spatial dependency is gradually blended into the model, and the model component explaining most of the variance shifts from $\mathbf{v}$ to $\mathbf{u}_\star$ [13].

To derive the PC prior for $\phi$, we first compute the KLD of the base model from the flexible model; see Appendix C.2. The covariance matrices of the base and flexible models are $\tau_b^{-1}\mathbf{I}$ and (8), respectively. Analogously to $\tau_b$, we assign an exponential prior to the distance scale $d(\phi)$. For simplicity, this prior is transformed to a prior on $\phi$, as $\phi$ is defined on a bounded interval. Reasonable parameter values can be inferred in a similar way as in Section 4.1, using a probability statement $\mathrm{Prob}(\phi < U) = \alpha$. In this way it is straightforward to specify both strong and weak prior information. The parameter $\phi$ has a more direct interpretation than the precision parameter $\tau_b$, representing the fraction of the total variance that is attributed to the spatial dependency structure. A reasonable formulation might be $\mathrm{Prob}(\phi < 0.5) = 2/3$, which gives more density mass to values of $\phi$ smaller than $0.5$. This can be seen as a conservative choice, assuming that the unstructured random effect accounts for more of the variability than the spatially structured effect.

In contrast to the precision parameter $\tau_b$, the resulting PC prior for $\phi$ is not available in closed form. Within INLA, the function INLA:::inla.pc.bym.phi can be used to compute the prior for $\phi$ (on the logit scale) for a specific Besag neighbourhood matrix $\mathbf{Q}$. The scaling of the graph is implemented by default. Thus, the prior can be obtained in tabulated form and also incorporated in software packages other than INLA. Figure 2 (right panel) shows the prior for $\phi$ with $U = 0.5$ and $\alpha = 2/3$, generated in R using

```r
log.prior = INLA:::inla.pc.bym.phi(Q = Q, rankdef = 1, u = 0.5, alpha = 2/3)
phis = 1 / (1 + exp(-seq(-10, 10, len = 10000)))
plot(phis, exp(log.prior(phis)), type = "l", lwd = 2, bty = "l",
     ylim = c(0, 10), xlab = expression(phi), ylab = "Density")
```

For comparison, the figure also displays the commonly used uniform prior for $\phi$.

## 5 Applications

### 5.1 Does the modified BYM model shrink to the respective base models?

Various types of cancer data, observed in the districts of Germany (Figure 1), have been widely analysed in the literature, both to study the epidemiology [22] and to test methodological developments [23]. Simpson et al. [13] re-analyse a larynx cancer dataset, assigning the PC priors shown in Figure 2 to the hyperparameters. The modified BYM model (7), here referred to as the BYM2 model, is clearly seen to learn from the data, leading to a posterior marginal for the mixing parameter $\phi$ concentrated around 1.

#### 5.1.1 Results

Here, we perform a simulation study to investigate whether the BYM2 model shrinks towards the two different base models described in Section 4. Furthermore, we assess the behaviour under a perfectly structured risk surface. The simulations are based on the neighbourhood structure of the districts of Sardinia, for which a real case study will be analysed in Section 5.2. Two hundred datasets were simulated under nine different scenarios, using three types of risk surfaces and three disease prevalences. The model we simulate from assumes that the log relative risk is given as the sum of an intercept and a zero-mean normally distributed random effect controlled by a precision parameter. In the simulations, we assume either a constant risk over all regions, an independent area-specific risk setting, or a spatially structured area-specific risk setting. We alter the expected number of cases to analyse whether the model performance changes when applied to diseases with different underlying disease prevalence, setting all expected counts $E_i$ to one of three common levels, and analyse all datasets using the INLA package in R.

Table 1 shows the posterior mean estimates for the intercept and the two hyperparameters, also including the average standard deviations of the estimates in parentheses.

The constant risk surface scenario corresponds to the first-level base model described in Section 4. The resulting estimates are reasonable for all parameters. Specifically, the estimated marginal standard deviation is close to zero and decreases further with higher prevalence. As these datasets carry no information about the mixing parameter, the prior dominates the posterior estimates of $\phi$. The assumption of independent, area-specific spatial variation of the risk surface corresponds to the second-level base model in Section 4. Again, the posterior mean parameter estimates are close to the true values. In particular, the model shrinks towards the base model, as the estimated mixing parameters are very close to zero, attributing all of the spatial variation to the unstructured component. In the case of a spatially structured risk surface, the intercept and the precision parameter are well estimated. The mixing parameter is estimated somewhat too low for the smallest expected counts, with estimates moving closer to the true value as the expected counts increase. This illustrates the ability of the model to support a more complex structure when indicated by the data.

We repeated the simulation study using a uniform prior for the mixing parameter $\phi$. The results obtained are comparable; see Table 4 in Appendix A. Differences occur in the case of a simulated constant risk surface, where the estimates of the mixing parameter are now dominated by the new prior. In the case of a spatially unstructured or structured risk, the obtained parameter estimates are robust to the prior choice for the mixing parameter.

#### 5.1.2 Comparison to other spatial latent models

The main benefit of the BYM2 model is that it allows for an intuitive parameter interpretation and facilitates prior assignment. Performance in terms of model choice criteria is regarded as secondary and will be assessed in this section. To compare the BYM2 model with existing approaches, we fit the spatial latent models presented in Section 3.1 to the simulated data. We also include a model having only an unstructured spatial component, referred to as the iid-model. For the BYM2 model we use either a PC prior for the mixing parameter ϕ, denoted as BYM2 (PC), or a uniform prior, denoted as BYM2 (unif).

Table 2 shows the resulting posterior parameter estimates, assuming a constant risk surface and a fixed number of expected cases. The table also reports three model choice criteria for this scenario: the root-mean squared error (RMSE) between the posterior mean fitted risk and the true risk, averaged over all 200 simulations, the average deviance information criterion (DIC) [24], and the average proper logarithmic scoring rule (LS) [25]. For all three criteria, lower values imply better model properties. Note that the parameter estimates of the BYM model are not included here, as it is not parameterised in terms of one precision and one mixing parameter. However, we will consider it in terms of model choice, see Figures 5 and 6.
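As a minimal illustration of how the RMSE criterion is computed, the following sketch averages the root-mean squared error over simulated datasets; the fitted and true risks are made-up placeholder values.

```python
import numpy as np

def rmse(fitted_risk, true_risk):
    """Root-mean squared error between posterior mean fitted and true relative risks."""
    fitted_risk = np.asarray(fitted_risk, dtype=float)
    true_risk = np.asarray(true_risk, dtype=float)
    return np.sqrt(np.mean((fitted_risk - true_risk) ** 2))

# two hypothetical simulation replicates under a constant risk surface (truth = 1)
sims_fitted = [np.array([1.1, 0.9]), np.array([1.0, 1.0])]
truth = np.array([1.0, 1.0])

# the criterion reported in Table 2 is the average over all replicates
avg_rmse = np.mean([rmse(f, truth) for f in sims_fitted])   # -> 0.05 here
```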

The estimated parameter values in Table 2 have to be interpreted with great care. Even though the different models look comparable and the parameters have the same name, their interpretations are not equivalent. A first issue is that we use different priors for the hyperparameters. Specifically, we assume a gamma prior for the precision parameter of the Leroux and Dean models, as this represents a popular non-informative prior choice in the literature [8, 9, 26]. Further, we assume a gamma distribution for the precision parameter in the iid-model. For the spatially structured term, we took the graph structure of Sardinia into account and used the prior proposed by Bernardinelli et al. [11]. However, the different parameterisations of the Leroux and Dean models also imply that the parameters are not comparable. Only when the mixing parameter equals zero or one do these models reduce to the same model (assuming the same prior for the precision parameter).

The second issue is that none of the models is properly scaled, except the BYM2 model. Specifically, the precision parameter in the Leroux and Dean models does not represent the marginal precision. It is still confounded with the mixing parameter and hence not comparable to the BYM2 model. The only parameter estimates that are comparable are those from the iid-model and the BYM2 model. The Besag model can be scaled, and so can the Dean model, leading to the parameterisation of the BYM2 model. However, the Leroux model cannot be scaled, as any scaling would depend on the mixing parameter. Hence, its precision parameter cannot be interpreted as a marginal precision: it depends on the underlying graph structure and is in addition confounded with the mixing parameter.
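The scaling step itself can be sketched numerically. The following is an illustrative NumPy sketch of the procedure of Sørbye and Rue [12], which rescales a singular Besag precision matrix so that the geometric mean of the marginal variances (the diagonal of the generalised inverse, under the sum-to-zero constraint) equals one; a sparse implementation [30] would be used for large graphs, and the toy line graph is a placeholder.

```python
import numpy as np

def scale_precision(Q):
    """Rescale a singular Besag precision matrix Q (rank deficiency 1) so that
    the geometric mean of the marginal variances of the constrained model is 1.
    Dense sketch via the Moore-Penrose pseudo-inverse."""
    Sigma = np.linalg.pinv(Q)                       # generalised inverse of Q
    gmean_var = np.exp(np.mean(np.log(np.diag(Sigma))))
    return Q * gmean_var                            # scaled precision matrix

# toy graph: 4 regions on a line, adjacency A and Besag structure Q = D - A
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = np.diag(A.sum(axis=1)) - A
Qs = scale_precision(Q)   # marginal variances of Qs now have geometric mean 1
```

After this rescaling, a fixed prior on the precision implies the same amount of marginal variability regardless of the graph, which is what makes hyperpriors transferable between applications.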

Even though the parameters of the models have different interpretations and the priors are chosen differently, the model choice criteria give quite similar average values for all models; only the DIC values seem to slightly favour the two BYM2 models. This means that the BYM2 model performs at least as well as the other models, and that changing the prior for the mixing parameter leads to the same results. This is also confirmed by inspecting the logarithmic scores and DIC values over all simulations in terms of boxplots, see Figures 5 and 6 in Appendix A, respectively. Results for these two model choice criteria coincide well and indicate that the Besag model performs worse than the other models when simulating an independent area-specific risk surface, where the iid-model is naturally, but only marginally, favoured. In contrast, the iid-model performs worse when simulating a spatially structured surface, where the Besag model is marginally favoured for low numbers of expected cases. The other models perform almost identically in both of these simulation settings. When simulating a constant risk surface, all models behave similarly at low prevalence. For increasing numbers of expected cases, the BYM2 model seems to be slightly beneficial. The figures only include the boxplots obtained by the BYM2 model using a PC prior for ϕ, as results using a uniform prior are visually identical.

### 5.2 Analysis of insulin-dependent diabetes mellitus in Sardinia

We re-analyse insulin-dependent diabetes mellitus (IDDM) counts in Sardinia, see Bernardinelli et al. [17] and Knorr-Held and Rue [23] for previous analyses. This dataset has low counts, registered across the 366 districts of Sardinia, and there are no covariates. Spatial smoothing is essential to reduce the large variance in the estimates and to get a realistic picture of the underlying risk surface. Held [27] has demonstrated that a constant risk is well supported by the posterior distribution, and based on this we would expect shrinkage towards the first-level base model.

We use a PC prior for the mixing parameter ϕ, and a PC prior for the precision τ_b with parameters chosen to correspond to an interpretable marginal standard deviation. The R-code to fit the BYM2 model using the INLA package is explained in Appendix B. The posterior estimates are shown in Table 3 (first quarter). We see that the precision estimate is rather high, supporting a constant log relative risk, as a zero log relative risk is well within all posterior credible intervals. The posterior marginal of ϕ is mostly uniform, giving essentially no information on which component describes the variance. Figure 3 shows the posterior mean for the relative risk over all regions.

MacNab et al. [28] noted considerable sensitivity regarding the prior choice for the mixing parameter in the Leroux model. For the Sardinia data, we empirically check sensitivity by changing the parameters of the PC prior for ϕ, so that a priori we expect either a small or a large proportion of the variation to be explained by the spatially structured component. As a fourth alternative we consider a uniform prior for ϕ. The rest of Table 3 shows the obtained posterior quantities. The estimates for the intercept stay unchanged. We see a slight change in the posterior estimates for ϕ and τ_b, whereby the posterior marginal of ϕ still supports almost the whole range for all priors. The posterior precision slightly decreases, whereby the lower quantile and the median stay fairly constant. The upper quantile, and therefore the mean, change more clearly. As in MacNab et al. [28], the effect on the posterior risk estimates is negligible (Figure 4).

## 6 Discussion

Throughout this paper we have stressed the importance of scaling in Bayesian disease mapping. Without proper scaling, hyperparameters have no clear meaning and may be falsely interpreted. To be more specific, the precision parameter of the commonly used Leroux and Dean models tends to be falsely interpreted as a marginal precision controlling the deviation from a constant disease risk over space. However, the parameter depends on the underlying graph structure and is confounded with the mixing parameter if the structured spatial component is not appropriately scaled. In addition, it is not clear how to choose a prior distribution for this parameter. First, due to the lack of scaling, a fixed hyperprior for the precision parameter gives a different amount of smoothing if the graph on which the disease counts are observed is changed. Second, commonly used hyperprior distributions induce overfitting [13] and do not allow the model to reduce to simpler models, such as a constant risk surface or uncorrelated noise over space.

Simpson et al. [13] proposed a modification of the well-known BYM model, termed the BYM2 model, which addresses the aforementioned issues, and applied it to one case study. The BYM2 model consists of one precision parameter and one mixing parameter. The precision parameter represents the marginal precision and controls the variability explained by the spatial effect. The mixing parameter distributes the existing variability between an unstructured and a structured component. Importantly, the structured component is scaled [12], which facilitates prior assignment. Penalised complexity priors, i.e. principled priors, are used to favour simpler models unless a more complex model is supported by the data. Furthermore, these priors allow an epidemiologically intuitive specification of hyperparameters based on the relative risks.

In this paper we have systematically assessed the behaviour of the BYM2 model formulation. By means of a simulation study based on the neighbourhood structure of regions in Sardinia, we have shown that the model is able to shrink towards both a constant risk surface and a spatially unstructured risk, for different disease prevalences. This shows that the model does not overfit. When the disease risk is spatially structured, the model shows good learning abilities. In contrast to other model formulations, the posterior estimates of all hyperparameters can be directly and intuitively interpreted. In terms of model comparison, we have found that the BYM2 model is slightly favoured over the Leroux and Dean models in terms of DIC and logarithmic score in the case of a constant risk. In the case of an unstructured risk, the model choice criteria used are almost indistinguishable. We think that the practical benefits in terms of interpretability and prior assignment make the BYM2 model advantageous compared to existing models, and we recommend its use, also because its performance in terms of model choice criteria is at least as good as that of existing methods.

All models were implemented using INLA [14], which provides efficient Bayesian inference without the need of MCMC sampling. This facilitated the simulation study considerably and allowed us to investigate different simulation settings. The user-friendly implementation is illustrated by the R-code in Appendix B.

MacNab et al. [28] noted sensitivity in choosing a prior distribution for the mixing parameter. Here, we investigated this issue empirically in both the simulation study and an application to insulin-dependent diabetes mellitus in Sardinia. However, more effort needs to be put into a detailed prior sensitivity analysis, following the theoretical framework of Roos et al. [29].

The BYM2 model can be naturally combined with covariate information, see Simpson et al. [13] for an application, or integrated into a space-time context. However, it will require further work to distribute the variance not only within the spatial components but over all model parameters in the linear predictor. It should be noted that the BYM2 model is of interest not only within disease mapping but also within other fields, such as genetics, where genes can be regarded as regions and the neighbourhood structure represents which genes are linked.

## References

• [1] Lawson AB. Bayesian disease mapping: hierarchical modeling in spatial epidemiology. Chapman and Hall/CRC press, 2013.
• [2] Lee D. A comparison of conditional autoregressive models used in Bayesian disease mapping. Spatial and Spatio-temporal Epidemiology 2011; 2(2): 79–89.
• [3] Besag J, York J and Mollié A. Bayesian image restoration, with two applications in spatial statistics. Annals of the Institute of Statistical Mathematics 1991; 43(1): 1–20.
• [4] Leroux BG, Lei X and Breslow N. Estimation of disease rates in small areas: A new mixed model for spatial dependence. In Statistical Models in Epidemiology, the Environment, and Clinical Trials. Springer, 2000. pp. 179–191.
• [5] Stern HS and Cressie N. Posterior predictive model checks for disease mapping models. Statistics in Medicine 2000; 19(17-18): 2377–2397.
• [6] Dean C, Ugarte M and Militino A. Detecting interaction between random region and fixed age effects in disease mapping. Biometrics 2001; 57(1): 197–202.
• [7] Best N, Richardson S and Thomson A. A comparison of Bayesian spatial models for disease mapping. Statistical Methods in Medical Research 2005; 14(1): 35–59.
• [8] Wakefield J. Disease mapping and spatial regression with count data. Biostatistics 2007; 8(2): 158–183.
• [9] MacNab YC. On Gaussian Markov random fields and Bayesian disease mapping. Statistical Methods in Medical Research 2011; 20: 49–68.
• [10] Lee DJ and Durbán M. Smooth-CAR mixed models for spatial count data. Computational Statistics & Data Analysis 2009; 53(8): 2968–2979.
• [11] Bernardinelli L, Clayton D and Montomoli C. Bayesian estimates of disease maps: How important are priors? Statistics in Medicine 1995; 14(21-22): 2411–2431.
• [12] Sørbye SH and Rue H. Scaling intrinsic Gaussian Markov random field priors in spatial modelling. Spatial Statistics 2014; 8: 39–51.
• [13] Simpson DP, Rue H, Martins TG et al. Penalising model component complexity: A principled, practical approach to constructing priors. arXiv preprint arXiv:1403.4630, 2015.
• [14] Rue H, Martino S and Chopin N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2009; 71(2): 319–392.
• [15] Schrödle B and Held L. A primer on disease mapping and ecological regression using INLA. Computational Statistics 2011; 26(2): 241–258.
• [16] Schrödle B and Held L. Spatio-temporal disease mapping using INLA. Environmetrics 2011; 22(6): 725–734.
• [17] Bernardinelli L, Pascutto C, Best N et al. Disease mapping with errors in covariates. Statistics in Medicine 1997; 16(7): 741–752.
• [18] Banerjee S, Carlin BP and Gelfand AE. Hierarchical modeling and analysis for spatial data. CRC Press, 2014.
• [19] Rue H and Held L. Gaussian Markov random fields: theory and applications. CRC Press, 2005.
• [20] Hodges JS. Richly parameterized linear models: additive, time series, and spatial models using random effects. CRC Press, 2013.
• [21] Breslow N, Leroux B and Platt R. Approximate hierarchical modelling of discrete data in epidemiology. Statistical Methods in Medical Research 1998; 7(1): 49–62.
• [22] Natário I and Knorr-Held L. Non-parametric ecological regression and spatial variation. Biometrical Journal 2003; 45(6): 670–688.
• [23] Knorr-Held L and Rue H. On block updating in Markov random field models for disease mapping. Scandinavian Journal of Statistics 2002; 29(4): 597–614.
• [24] Spiegelhalter DJ, Best NG, Carlin BP et al. Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2002; 64(4): 583–639.
• [25] Gneiting T and Raftery AE. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association 2007; 102(477): 359–378.
• [26] Ugarte MD, Adin A, Goicoa T et al. On fitting spatio-temporal disease mapping models using approximate Bayesian inference. Statistical Methods in Medical Research 2014; 23(6): 507–530.
• [27] Held L. Simultaneous posterior probability statements from Monte Carlo output. Journal of Computational and Graphical Statistics 2004; 13(1).
• [28] MacNab YC, Farrell PJ, Gustafson P et al. Estimation in Bayesian disease mapping. Biometrics 2004; 60(4): 865–873.
• [29] Roos M, Martins TG, Held L et al. Sensitivity analysis for Bayesian hierarchical models. Bayesian Analysis 2014; 10(2): 321–349.
• [30] Erisman A and Tinney W. On computing certain elements of the inverse of a sparse matrix. Communications of the ACM 1975; 18(3): 177–179.

## Appendix B INLA R-code to implement the BYM2 model

In the following, we illustrate how the BYM2 model can be implemented within the R-package INLA.

First we have to load the INLA package, which can be installed by running the installation command given on www.r-inla.org in the R terminal. In this paper we used INLA version 0.0-1445883880. In order to define the model incorporating a spatially structured component, we need to provide the neighbourhood structure for the regions we analyse. In INLA this is done by providing the path to the file which stores this information. The graph file for Sardinia is named sardinia.graph and has the following structure:

366
0 5 13 61 69 73 81
1 5 8 16 40 46 94
2 4 47 59 63 77
3 3 11 17 43
4 5 24 41 51 56 67
...
363 5 292 307 324 344 346
364 5 291 310 317 329 345
365 3 289 303 308


The first line gives the number of regions, here 366. Each of the following lines corresponds to one region, where the first entry always specifies a unique region index. The second entry specifies the number of neighbouring regions, and then the region indices of those neighbours are listed. INLA transforms this information into the neighbourhood matrix with entries as specified in (2).
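A minimal parser for this file format can be sketched as follows. This is an illustrative Python sketch (INLA has its own reader); it assumes the 0-based region indices shown above and uses a three-region toy graph rather than the Sardinia file.

```python
import numpy as np

def read_inla_graph(lines):
    """Parse an INLA-style graph file given as a list of lines:
    first line n, then one line per region of the form
    'index  #neighbours  neighbour-index ...'.
    Returns the symmetric 0/1 adjacency matrix."""
    n = int(lines[0].split()[0])
    A = np.zeros((n, n), dtype=int)
    for line in lines[1:]:
        parts = line.split()
        if not parts:
            continue
        i, k = int(parts[0]), int(parts[1])
        for j in parts[2:2 + k]:
            A[i, int(j)] = 1
            A[int(j), i] = 1     # neighbourhood is symmetric
    return A

# toy file: 3 regions on a line (0 - 1 - 2)
toy = ["3", "0 1 1", "1 2 0 2", "2 1 1"]
A = read_inla_graph(toy)

# Besag structure matrix as in (2): Q = D - A, with D the diagonal of neighbour counts
Q = np.diag(A.sum(axis=1)) - A
```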

Lines 23–36 specify the model structure in terms of a formula object. Here, we have an intercept and a latent Gaussian model given by (7). The f(.) function is used to specify a random effect in INLA. The first argument is always the variable name to which it applies, here region. We specify the use of the BYM2 model in line 24. Then, we provide the graph and specify that we want to scale the model in line 26. This will scale the graph as described in Section 3.2. To avoid confounding with the intercept, a sum-to-zero constraint is specified in line 27. The next lines specify the hyperpriors for the two hyperparameters in the form of a list of length two. The first list entry specifies the prior for the mixing parameter ϕ, named phi in the INLA implementation, and the second the prior for the precision parameter τ_b, named prec. Each entry is itself a list. Using the argument prior we specify the prior we would like to use. The parameters are set in the argument param; they coincide with the values used in Section 5.2. Using the argument initial, a starting value for the numerical optimisation of the INLA algorithm can be provided. This has to be given on the log scale for the precision and on the logit scale for ϕ, as these are the scales INLA works on internally.

After defining the model, we call the inla function providing the formula, the data and the likelihood. Further, we specify the expected number of counts and request posterior estimates not only for the latent components but also for derived quantities of the linear predictor (line 40). Improved hyperparameter estimates can be obtained by calling inla.hyperpar afterwards. Here, dz and diff.logdens refer to options in the numerical integration algorithm, see the corresponding help page.

Results can be inspected using summary(result) or plot(result). Posterior summaries for the fixed effects and hyperparameters can then be accessed via result$summary.fixed and result$summary.hyperpar, respectively.

## Appendix C PC prior derivation

### C.1 Precision parameter τ_b

In deriving the PC prior for τ_b, we start by assuming a large fixed precision τ_0 for the base model, where τ_0 → ∞. The stochastic part of the model is (n−1)-dimensional, and the resulting Kullback–Leibler divergence is

 KLD(τ_b) = ((n−1)/2) (τ_0/τ_b) (1 + (τ_b/τ_0) log[τ_b/τ_0] − τ_b/τ_0) → ((n−1)/2) (τ_0/τ_b), (10)

which implies d(τ_b) = √(2 KLD(τ_b)) = √((n−1) τ_0/τ_b).

The next step is to assign a prior to the distance d(τ_b) and transform this by a change of variables. We use the exponential prior with decay rate λ, which supports constant-rate penalisation, and apply the ordinary change-of-variables transformation

 π(τ_b) = λ exp(−λ d(τ_b)) |∂d(τ_b)/∂τ_b|.

Writing this expression out and setting θ = λ √((n−1) τ_0), where λ is chosen so that θ is kept constant in the limit τ_0 → ∞, we obtain a type-2 Gumbel distribution [13],

 π(τ_b) = (θ/2) τ_b^(−3/2) exp(−θ τ_b^(−1/2)),

as prior for τ_b.
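As a numerical sanity check (illustrative only, with an arbitrary value of θ), the type-2 Gumbel density π(τ_b) = (θ/2) τ_b^(−3/2) exp(−θ τ_b^(−1/2)) integrates to one over (0, ∞), which can be verified by the substitution u = θ τ_b^(−1/2) or by quadrature:

```python
import numpy as np
from scipy.integrate import quad

theta = 1.5   # illustrative rate parameter

def pc_prior_tau(tau, theta=theta):
    """Type-2 Gumbel density: (theta/2) * tau^(-3/2) * exp(-theta * tau^(-1/2))."""
    return 0.5 * theta * tau ** -1.5 * np.exp(-theta * tau ** -0.5)

total, _ = quad(pc_prior_tau, 0.0, np.inf)   # should be 1 up to quadrature error
```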

### C.2 Mixing parameter ϕ

To derive the PC prior for ϕ, we assume τ_b = 1 and define the covariance matrices of the base and flexible models as Σ_base = I and Σ_flex(ϕ) = (1−ϕ)I + ϕQ_⋆^(−1), respectively. The resulting Kullback–Leibler divergence is

 KLD(N(0, Σ_flex(ϕ)) ∥ N(0, Σ_base)) = (1/2) (trace(Σ_flex(ϕ)) − n − log|Σ_flex(ϕ)|)
  = (1/2) ( nϕ [(1/n) trace(Q_⋆^(−1)) − 1] − log|(1−ϕ)I + ϕQ_⋆^(−1)| ).

The computation of the trace is quick when Q_⋆ is sparse [30]. The log-determinant can be computed as log|(1−ϕ)I + ϕQ_⋆^(−1)| = Σ_{i=1}^{n} log(1 − ϕ + ϕγ_i), where γ_1, …, γ_n denote the eigenvalues of Q_⋆^(−1) [13]. As the given precision matrix Q_⋆ is singular with rank deficiency 1, one of the eigenvalues is zero, so that the corresponding term reduces to log(1 − ϕ). We can avoid calculating eigenvalues if the dimension is high, see Appendix A3 of Simpson et al. [13] for details.
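The eigenvalue form of the divergence can be sketched as follows. This is an illustrative Python sketch, not the INLA implementation; the eigenvalues γ_i are placeholder values, one of them zero to mimic the rank-deficient Besag case.

```python
import numpy as np

def kld_phi(phi, gamma):
    """KLD between the flexible model N(0, (1-phi) I + phi Qstar^{-1}) and the
    base model N(0, I), evaluated via the eigenvalues gamma of Qstar^{-1}."""
    gamma = np.asarray(gamma, dtype=float)
    n = gamma.size
    trace_term = n * phi * (gamma.mean() - 1.0)          # n*phi*[(1/n)trace - 1]
    logdet = np.sum(np.log1p(phi * (gamma - 1.0)))       # sum log(1 - phi + phi*gamma_i)
    return 0.5 * (trace_term - logdet)

# placeholder eigenvalues; the zero entry mimics the rank deficiency of Qstar
gamma = np.array([0.0, 0.5, 1.5, 2.0])

# at phi = 0 the flexible model coincides with the base model, so the KLD is zero
kld0 = kld_phi(0.0, gamma)
```

The distance d(ϕ) = √(2 KLD(ϕ)) then feeds into the PC prior construction exactly as in Appendix C.1.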
