Global space–time models for climate ensembles


Stefano Castruccio and Michael L. Stein
University of Chicago, Department of Statistics
5734 S. University Avenue
Chicago, Illinois 60637
Received October 2012; revised May 2013.

Global climate models aim to reproduce physical processes on a global scale and predict quantities such as temperature given some forcing inputs. We consider climate ensembles made of collections of such runs with different initial conditions and forcing scenarios. The purpose of this work is to show how the simulated temperatures in the ensemble can be reproduced (emulated) with a global space/time statistical model that addresses the issue of capturing nonstationarities in latitude more effectively than current alternatives in the literature. The model we propose leads to a computationally efficient estimation procedure and, by exploiting the gridded geometry of the data, we can fit massive data sets with millions of simulated data within a few hours. Given a training set of runs, the model efficiently emulates temperature for very different scenarios and therefore is an appealing tool for impact assessment.


DOI: 10.1214/13-AOAS656. Annals of Applied Statistics, Volume 7, Issue 3 (2013), pages 1593–1611. In the Public Domain.


Running head: Global space–time models. S. Castruccio was supported in part by US NSF Grant 09-544 (RDCEP) and by STATMOS, an NSF-funded network (NSF-DMS awards 1106862, 1106974 and 1107046). M. L. Stein was supported by US NSF Grant 09-544 (RDCEP).

Keywords: GCM, climate ensembles, global space–time model, massive data set.

1 Introduction

There is wide consensus in the scientific community that climate is changing and that this will bring significant imbalance to the present state of the system [IPCC AR4; Meehl et al. (2007)]. In order to assess the potential impacts of climate change on both the environment and human life, the geophysical community is producing constantly growing ensembles of climate model runs that include different scenarios of changing greenhouse gases (e.g., the CMIP5 archive [Taylor, Stouffer and Meehl (2012)]). The advantages of a statistical analysis of climate data lie in a framework that not only can provide insights about the ability to reproduce the real climate, but also has crucial practical advantages. If the climate output can be reproduced efficiently with a simple statistical model under some scenarios, then it is possible to predict how the output will behave for a different scenario, both in terms of its mean and its covariance structure. In other words, a statistical model can be used to fit climate model output under some scenarios and to reproduce (emulate) the behavior of the climate model under a new forcing in much less time than the original computer run. This approach can provide policy makers with a powerful tool for impact assessment. This work focuses on temperature at surface for an initial condition/scenario ensemble of a single General Circulation Model (GCM), where the scenarios differ only in the trajectory of annual values of CO$_2$ concentrations. The ultimate goal is to provide a statistical model and to show how, given a small training set, it is possible to reproduce the computer output of other scenarios with only a small number of parameters for the mean and the covariance structure and a reasonable computational effort. Annual averages of temperature at the pixel level are well approximated by a Gaussian distribution, and we will use Gaussian process models throughout this work.
A further simplification that we will examine and make use of is that the covariance structure is independent of the scenario.

To date, most work on climate model emulation has been done on Regional Circulation Models (RCMs); see Sain, Furrer and Cressie (2011), Sain, Nychka and Mearns (2011) and especially Greasby and Sain (2011) for a statistical model of RCMs, and Berrocal, Craigmile and Guttorp (2012) for a model to adjust RCM output to real observations. Only a few studies have been conducted on statistical analysis of GCMs [see, e.g., Jun, Knutti and Nychka (2008) for a statistical model for a multi-model GCM ensemble], and this is likely due to the dearth of literature on modeling data on the sphere$\times$time domain. Recently, Lindgren, Rue and Lindström (2011) introduced a stochastic partial differential equation (SPDE) approach to fitting random fields. Jun and Stein (2007, 2008) proposed a model for processes on this domain based on taking derivatives of simpler models, and Jun (2011) extended it to the multivariate case. The latter approach relies on embedding the sphere in $\mathbb{R}^3$, selecting an isotropic model, and then applying partial derivatives to account for anisotropies, directional effects and nonstationarities. This procedure generates flexible models with explicit forms for the covariance function, but its coefficients are difficult to interpret, so that it can be a challenge to specify forms of the model that would be appropriate in any particular setting. The main contribution of our work is to introduce a spectral approach to modeling GCM output that results in more interpretable coefficients, improved fits and reduced computational cost for parameter estimation.

Since a single climate run can contain several million simulated values or even more for annual averages, care must be taken in fitting a model. Current statistical methods for massive space–time data sets often rely on some form of reduced rank approach, from fixed rank kriging [Cressie and Johannesson (2008)] to predictive processes [Banerjee et al. (2008)]. Such methods are effective in fitting models in a feasible amount of time, but can result in loss of information and misfit [Stein (2008)]. In this work, the particular geometry of the data set and the use of parallel computing achieve the goal of fitting the mean and covariance structure of a massive data set in a few hours, by using a two-stage procedure that estimates some latitude-specific parameters separately for each latitude and then estimates a few parameters describing dependence across latitudes. The fitted model, although not exactly the global maximizer of the likelihood under the model, has a much higher likelihood than the maximized likelihoods under current alternative models in the literature.

Section 2 presents the ensemble and explains how the data are preprocessed. Section 3 introduces a general framework for the statistical analysis of this ensemble. Section 4 reviews the approach in Jun and Stein (2007) for data on a spherical domain. Section 5 presents our spectral model, the spectral decomposition of a class of spatial processes on a regular grid and finally shows some results on the fit compared to current alternatives in the literature. In order to emulate, Section 6 gives a parametric model for the mean, extending the work in Castruccio et al. (2013), and shows an example of extrapolation for a different forcing scenario. Section 7 draws some conclusions.

2 The ensemble

We use the Climate Simulation Library of the Center for Robust Decision Making on Climate and Energy Policy (RDCEP), consisting of model runs made with the Community Climate System Model Version 3 [CCSM3; Yeager et al. (2006), Collins et al. (2006)] at T31 spatial resolution. The ensemble consists of multicentury model forecasts for a variety of CO$_2$ trajectories (see Figure 1 for some examples), with all other greenhouse gas concentrations held constant at preindustrial values. The relatively coarse spatial resolution allows generation of a rich library for statistical analysis.

Figure 1: Examples of CO$_2$ scenarios in the Climate Simulation Library. We refer to these as 1: slow, 2: moderate, 3: high and 4: drop scenario. This work mostly focuses on drop and slow (in red), while the moderate and high scenarios (in blue) will only be used in the introduction of Section 6.

Our ensemble consists of multiple realizations for each scenario: initial conditions are sampled from the restart files of well-separated years of the NCAR b30.048 preindustrial control run [Collins et al. (2006)]. For the purposes of this work these runs will be treated as statistically independent, an assumption consistent with a preliminary analysis of the data and physically realistic given the extreme sensitivity of the climate system to initial conditions. From the CCSM3 output files we considered only the yearly average temperature at surface. We also removed the three northernmost and southernmost latitude bands to avoid having to model the process in the narrow strips that form pixels near the poles. A typical single run is 500 years long, so the data set contains millions of model-simulated temperatures for each realization.

3 Statistical analysis of a climate ensemble

In this section we explore some consequences of having independent realizations of random fields in the ensemble. We consider runs with the same forcing and treat them as independent and identically distributed. The data are indexed by latitude, longitude and time; the latitude bands do not need to be equally spaced in this framework.

We will assume that each realization follows model (1): the vector of temperatures is the sum of a mean term and a zero-mean stochastic component, which is assumed to be normally distributed with a covariance matrix common to all realizations. If the data set consists of more than one realization, the contrasts obtained by subtracting the across-realization sample mean from each realization have a distribution that does not depend on the mean. Therefore, it is possible to estimate the covariance structure without specifying a model for the mean.

3.1 The restricted likelihood approach for the covariance structure

Suppose now that the field has a parametrized covariance structure that needs to be estimated. By stacking all the realizations, we can reformulate (1) as a linear model, (2), in which the stacked data vector equals a design matrix times a mean parameter vector plus a stacked stochastic component. The design matrix repeats, once per realization, an identity block, so that each column allows a different mean for every location in the grid and year.

A natural way to estimate is by restricted likelihood, and the following result gives an explicit formula.

Result 1

Write $\mathbf{T}_r$ for the $r$th of $R$ independent realizations, $\bar{\mathbf{T}} = R^{-1}\sum_{r=1}^{R}\mathbf{T}_r$, $\mathbf{D}_r = \mathbf{T}_r - \bar{\mathbf{T}}$ and $\Sigma$ for the common covariance matrix. The restricted loglikelihood for (2) is, up to an additive constant,
\[
-\frac{R-1}{2}\log\det\Sigma \;-\; \frac{1}{2}\sum_{r=1}^{R} \mathbf{D}_r^{\top}\Sigma^{-1}\mathbf{D}_r.
\]
Also, the corresponding estimator of the mean obtained by generalized least squares is the sample average $\bar{\mathbf{T}}$.

The proof can be found in the supplementary material [Castruccio and Stein (2013)], along with further theory on how the variogram can be estimated without bias in this context. Result 1 shows that adding independent realizations reduces to summing quadratic forms in the contrasts with the same inverse covariance matrix; therefore, it does not require storing matrices larger than in the case of a single realization. Moreover, the REML estimate of the mean vector does not depend on the covariance structure and is just the sample average, which is expected since we assume the realizations are independent and identically distributed.
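As a concrete illustration of Result 1, the following sketch (our own, not the paper's code) evaluates the restricted loglikelihood by reusing a single Cholesky factorization of the covariance matrix across all replicates; the function name and the small random inputs are hypothetical.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def restricted_loglik(T, Sigma):
    """Restricted loglikelihood (up to an additive constant) for R
    independent replicates with common covariance Sigma.  The contrasts
    are each replicate minus the sample mean, and a single Cholesky
    factorization of Sigma is shared across all replicates."""
    R = T.shape[0]
    D = T - T.mean(axis=0)                     # contrasts
    c, low = cho_factor(Sigma)                 # one factorization for all replicates
    logdet = 2.0 * np.sum(np.log(np.diag(c)))
    quad = np.sum(D * cho_solve((c, low), D.T).T)
    return -0.5 * (R - 1) * logdet - 0.5 * quad
```

The point of the factorization reuse is exactly the observation above: adding realizations only adds quadratic forms, not new matrix factorizations.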

4 Processes on a spherical domain

Throughout this section we only consider the spatial part of the process, so we drop the time index. As the data have global coverage, a specific statistical theory for random fields on a sphere is required. The theory of valid covariance functions on a sphere is different from that on the plane [see Gneiting (2013) for a complete discussion]. Furthermore, an isotropic process on a sphere is not the natural choice in our case, as we expect temperature fields to behave differently at different latitudes. A more natural starting point is the following:

Definition. A Gaussian process on a sphere is axially symmetric [Jones (1963)] if its mean depends only on latitude and its covariance between two locations depends on their longitudes only through the longitudinal difference. Furthermore, the process is longitudinally reversible [Stein (2007)] if its covariance depends on the longitudinal difference only through its absolute value.

In this work we only focus on axial symmetry although we believe that such models are not fully adequate to describe surface temperatures. In particular, accounting for land/sea differences may be one of the most promising avenues for improving what we present here.

Axially symmetric models have seen a noticeable development recently. Jun and Stein (2007) proposed a constructive approach for generating such processes, which results in an explicit form for the covariance function:

  • define independent isotropic random fields on $\mathbb{R}^3$,

  • consider their restrictions to the unit sphere,

  • define the process of interest by applying to these restricted fields a differential operator in latitude and longitude, whose coefficient functions are expanded in Legendre polynomials.

We will refer to this modeling framework as the partial derivative (PD) approach. Using the PD approach guarantees that the resulting process is axially symmetric; it can be extended to more general processes if the coefficient functions depend on longitude, and Jun (2011) extends the approach to the multivariate setting. Despite this flexibility, it has some disadvantages. First, by starting out with models that must be valid in $\mathbb{R}^3$, some possible models are lost, especially models with substantial negative spatial correlation at some lags, which could occur for quantities for which mass or energy is approximately conserved over time. More importantly, the interpretation of the coefficient functions is not straightforward, which limits the flexibility of the model.

5 Spectral modeling of axially symmetric processes

We propose to represent the process in the spectral domain, and we show how this results in a more flexible and interpretable model. We first present the temporal part of the model, then define the model for a single latitudinal band, a model for multiple latitudinal bands, and the structure of the spatial covariance matrix, and finally we compare this model with the PD approach. We work under the assumptions of model (1). Sections 5.1–5.3 describe our model and summarize information about the parameter estimates. Section 5.4 shows how the axial symmetry of the spatial part of the model can be exploited to speed up the calculations. Section 5.5 shows that our model yields much larger loglikelihoods than some PD models.

5.1 The temporal structure

Define $\mathbf{D}_t$ as the vector of variabilities (the deviations of the temperatures from their means) at time $t$. We assume that the vector-valued time series $\{\mathbf{D}_t\}$ has an AR(1) structure (5), in which each component at time $t$ equals an autoregressive coefficient times its value at time $t-1$ plus an innovation, with coefficient $\varphi_{\mathrm{land}}$ if the pixel is over land (a grid point on the boundary is treated as land if its percentage of land exceeds a given threshold) and $\varphi_{\mathrm{ocean}}$ otherwise. In other words, we are assuming a temporal AR(1) structure with different correlation parameters depending on whether the grid point is over land or ocean. Diagnostic plots (see the supplementary material [Castruccio and Stein (2013)]) show that this structure is sufficient to capture the temporal features of the data. We also assume a fixed initial state, so the model is not exactly stationary in time, but given the weak temporal correlation for annual temperatures, this simplification has negligible impact on the model fit.
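The land/ocean AR(1) structure can be sketched as follows. This is our own illustration: spatially white innovations stand in for the spatially correlated innovations of Section 5, and the variance normalization giving each pixel unit stationary variance is our choice, not the paper's specification.

```python
import numpy as np

def simulate_ar1_field(phi_land, phi_ocean, is_land, n_years, rng):
    """Temporal AR(1) at every grid point, with the autoregressive
    coefficient chosen per pixel according to a land mask.  Innovations
    here are white in space (a simplification); they are scaled so that
    the stationary variance at each pixel is 1."""
    phi = np.where(is_land, phi_land, phi_ocean)     # per-pixel coefficient
    D = np.zeros((n_years,) + is_land.shape)         # starts from a fixed state
    for t in range(1, n_years):
        eps = rng.standard_normal(is_land.shape)
        D[t] = phi * D[t - 1] + np.sqrt(1.0 - phi**2) * eps
    return D
```

Starting from a fixed state mirrors the non-stationarity noted above: the variance at a pixel approaches 1 only after a burn-in whose length depends on the autoregressive coefficient.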

5.2 A model for a single latitudinal band

Assume now that the process is axially symmetric, as described in Section 4. If we consider a single latitudinal band, the covariance is only a function of the longitudinal lag and is symmetric about zero lag. Therefore, we observe an evenly spaced stationary process on a circle$\times$time domain, and we define the spectral density of the process on the circle at each of the finitely many wavenumbers resolved by the grid. Since the grid we use here is the same grid on which a discretized version of the partial differential equations underlying the GCM is solved, it makes sense to work directly with this finite spectrum rather than to model a spectrum at all integer wavenumbers.

For observations on a line, a common form for the spectral density is the Matérn,
\[
f(\omega) \;=\; \frac{\phi}{(\alpha^2+\omega^2)^{\nu+1/2}}.
\]
We propose the following modification for a process observed at $N$ evenly spaced points on a circle, at wavenumbers $c=0,1,\ldots,N-1$:
\[
f(c) \;=\; \frac{\phi}{\{\alpha^2+4\sin^2(\pi c/N)\}^{\nu+1/2}}. \tag{7}
\]
The parameters have interpretations similar to those in the ordinary Matérn model, with $\alpha$ an inverse range parameter, $\nu$ controlling the rate of decrease of the spectrum at large wavenumbers and thus the “smoothness” of the process (even though one cannot talk about differentiability for a process on a discrete grid), and $\phi$ the overall level of variation.
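To make the shape of such a spectrum concrete, here is a sketch of a circle-adapted Matérn-type spectral density in which the squared wavenumber is replaced by the discrete analogue $4\sin^2(\pi c/N)$; this is one natural choice and may not match (7) exactly, so treat the functional form as an illustrative assumption.

```python
import numpy as np

def circle_matern_spectrum(N, alpha, nu, phi):
    """Matern-type spectral density at the N wavenumbers resolvable on
    a circle of N evenly spaced points.  The squared wavenumber of the
    usual Matern spectrum is replaced by a discrete, periodic analogue;
    alpha is an inverse range, nu controls high-wavenumber decay
    ("smoothness") and phi the overall level of variation."""
    c = np.arange(N)
    omega2 = 4.0 * np.sin(np.pi * c / N) ** 2
    return phi / (alpha**2 + omega2) ** (nu + 0.5)
```

By construction the spectrum is positive, symmetric under $c \mapsto N-c$, and decreasing up to the Nyquist wavenumber, matching the qualitative behavior described in the text.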

A first analysis can be done by considering each band separately from the others. Figure 2 shows the results for a training set of five drop scenarios in which the parameters are estimated separately for each latitude using REML.

Figure 2: (a)–(c): estimates of the three parameters in (7) plotted against latitude; the blue dots are the estimates and the solid red line is a fitted cubic smoothing spline. (d): empirical and fitted log periodogram for two different latitudinal bands, computed using normalized temperature differences.

In this way, one can visualize how the parameters in (7) change across latitude [Figures 2(a)–(c)]. All the parameters display complex patterns; it is especially noticeable how two of the parameters have very similar behaviors, both showing an increase at midlatitudes. The tropical behavior is very different from all the other latitudinal bands, as estimates of both parameters show a sharp drop in this region.

Figure 2(d) shows an example of the periodogram fit for two different bands: one near the equator and one at a northern midlatitude. The spectrum near the equator drops off faster at low wavenumbers, which is reflected in a smaller estimate of the inverse range parameter. However, at high wavenumbers, the spectrum near the equator is flatter, which is reflected in a smaller estimate of the smoothness parameter. The functional form chosen is flexible enough to capture the different behaviors across latitudes. The supplementary material [Castruccio and Stein (2013)] provides a table of all the estimates with their corresponding asymptotic standard deviations. The variability of these estimates is extremely small, as we would expect from an analysis of such a large data set; therefore, the larger differences in patterns across latitudes in Figures 2(a)–(c) are statistically significant. In the supplementary material [Castruccio and Stein (2013)], we further show that the estimated parameters do not substantially vary over time.

5.3 A model for multiple latitudinal bands

To define a global model, we need to describe the following:

  • how the three band-specific parameters in (7) change across latitude,

  • how different latitude bands are correlated.

The first point could be addressed with a parametric model for the three parameters as a function of latitude but, as shown in Figure 2, the pattern is complex and likely requires many parameters to be adequately captured. Instead, we use the estimates obtained from the analysis of single latitudinal bands. Computing estimates separately for each band allows them to be obtained in a parallel fashion on multiple processors. In our analysis, the entire procedure requires no more than 5 minutes on a small 4-node cluster. In contrast, the PD approach of Jun and Stein (2008) does not lead to any obvious algorithm for fitting parameters that describe variation within latitude separately for each latitude. With higher-resolution model output, it might become more important to parametrize how the band-specific parameters change with latitude.

As for the second point, we need to model the cross-spectrum between pairs of latitude bands at each wavenumber. More specifically, writing this complex-valued cross-spectrum in polar form, we need to specify its coherence (its modulus, normalized by the spectra of the two bands, each of the form (7)) and its phase (its argument). A null phase results in a symmetric cross-covariance between bands, which corresponds to a symmetric circulant covariance matrix and therefore in a longitudinally reversible process (see Section 5.4). The flexibility of spectral methods allows one to account for longitudinal reversibility independently of other features of the model. This is not possible in the PD approach: to obtain a longitudinally reversible process one needs to constrain the coefficient functions [Jun and Stein (2008)], and it is not straightforward to determine how such a constraint would impact other features of the model. Since diagnostic plots (see supplementary material [Castruccio and Stein (2013)]) show that the phase is small, we work under the assumption of longitudinal reversibility.

Parameter                                    Estimate    SE (x 10^-4)    95% CI
coherence: overall decay in latitude         0.9696      0.51            (0.9695, 0.9697)
coherence: wavenumber-dependent decay        0.2080      2.0             (0.2076, 0.2084)
AR(1) coefficient, land                      0.1010      3.4             (0.1004, 0.1017)
AR(1) coefficient, ocean                     0.1141      3.5             (0.1134, 0.1148)
Table 1: Parameter estimates for the coherence model (8) and the AR(1) temporal parameters in the spectral model, for five drop scenarios. All estimates treat the band-specific parameters of (7) as fixed at the values estimated in the first stage of fitting

We assume a model (8) for the coherence between latitude bands with only 2 parameters: one controls the overall rate of decay of coherence across all wavenumbers as the difference in latitude increases, and the other describes how much faster coherence decays at higher wavenumbers than at lower wavenumbers. Note that the second parameter could be allowed to take negative values as long as all absolute coherences remain bounded by 1, as they must be, but it would be very unusual for a natural process to have stronger coherence across latitudes at higher wavenumbers. A more flexible form for (8) was considered but did not result in a significant improvement of the fit (see supplementary material [Castruccio and Stein (2013)]).
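Since equation (8) is described only qualitatively above, the following is a purely illustrative two-parameter coherence with the stated properties: decay in the latitude difference at a rate that increases with wavenumber. Both the functional form and the parameter names are our assumptions, not the paper's equation.

```python
import numpy as np

def coherence(delta_lat, c, xi, tau):
    """Illustrative coherence between latitude bands delta_lat apart at
    wavenumber c: xi in (0,1) sets the overall decay with latitude
    difference, and tau >= 0 makes that decay faster at higher
    wavenumbers.  Not the paper's equation (8)."""
    return xi ** (np.abs(delta_lat) * (1.0 + tau * np.log1p(c)))
```

With `tau = 0` the decay rate is the same at all wavenumbers; any `tau > 0` reproduces the qualitative behavior the two-parameter model is designed to capture, while keeping all coherences in (0, 1].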

Table 1 shows the estimated coefficients for the five realizations of the drop scenario, together with their asymptotic standard deviations and confidence intervals, treating the previously estimated band-specific parameters as known. All the estimates have very small variability, which is expected since the data set is very large (more than 10 million temperatures). The temporal structure is slightly different for land and ocean, with the latter tending to show slightly stronger temporal dependence. In the supplementary material [Castruccio and Stein (2013)] we further show that these parameters do not depend on time and, therefore, the assumption of stationarity of the stochastic term is reasonable.

5.4 Spectral decomposition of the covariance matrix for axially symmetric processes

The evaluation of the likelihood in this setting is a challenging problem, as in general it requires evaluating a quadratic form with the inverse of a covariance matrix whose dimension is the full space–time size of the data, as well as its log determinant. In this particular setting, the gridded geometry of the data and the axial symmetry allow for some degree of sparsity of the covariance matrix in the spectral domain. Here we focus on describing the matrix calculations for the spatial covariance of the innovations, which, combined with the AR(1) model (5), allow for fast calculation of the restricted likelihood. For simplicity of notation, we drop the time index throughout this subsection.

In general, this problem requires a number of operations that grows as the cube of the number of spatial locations, and storage that grows as its square, using the Cholesky decomposition. In fact, the resolution of our model is sufficiently coarse that a general Cholesky decomposition algorithm could be used here, with some difficulty. However, by exploiting the structure of the covariance matrices, we can greatly speed up the computation and reduce the memory requirement, which would be essential when modeling higher-resolution GCM output.

The regular lattice geometry for GCM output over the sphere, together with the assumption that the model is axially symmetric, allows for some exact computations using spectral methods that drastically reduce the computational time and the memory needed. This approach was first introduced by Jun and Stein (2008) for the analysis of Total Ozone Mapping Spectrometer (TOMS) Level 3 data, which are post-processed data on a regular grid.

The key idea is that a stationary process observed at $N$ evenly spaced points on a circle has a (symmetric) circulant covariance matrix. Its (real) eigenvalues can be written in terms of the Fast Fourier Transform (FFT) of the first row, which requires only $O(N\log N)$ operations, and the eigenvector matrix is simply the Discrete Fourier Transform matrix [Davis (1979), page 72].
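This circulant identity is easy to check numerically. The small sketch below (our own) builds a symmetric circulant covariance from covariances on a circle and compares the FFT of its first row with a dense eigendecomposition.

```python
import numpy as np

# The eigenvalues of a symmetric circulant covariance matrix are the
# FFT of its first row, computable in O(N log N): a small check of the
# identity underlying the spectral likelihood evaluation.
def circulant(first_row):
    n = len(first_row)
    return np.array([[first_row[(j - i) % n] for j in range(n)] for i in range(n)])

N = 8
d = np.minimum(np.arange(N), N - np.arange(N))   # circular distance between points
r = np.exp(-0.7 * d)                             # symmetric covariances on the circle
C = circulant(r)
eig_fft = np.fft.fft(r).real                     # eigenvalues via FFT
eig_dense = np.linalg.eigvalsh(C)                # dense eigendecomposition, O(N^3)
```

The exponential covariance sequence here is only a convenient stand-in; the identity holds for any symmetric first row.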

Over the sphere, the restriction of the process to each latitude band is a stationary process on a circle, so its covariance matrix is (symmetric) circulant and can be diagonalized via FFT. The cross-covariance matrix between two different latitude bands is circulant but not necessarily symmetric, as the covariance at a positive longitudinal lag need not equal the covariance at the corresponding negative lag. Therefore, the diagonalization via FFT results in complex eigenvalues. It should be pointed out that the condition for the cross-covariance matrices to be symmetric is that the process is longitudinally reversible. Applying the FFT within each latitude band yields a covariance matrix whose blocks are diagonal; rearranging rows and columns by wavenumber produces a block diagonal matrix with one Hermitian block per wavenumber, each of dimension equal to the number of latitude bands $M$. Therefore, the evaluation of the likelihood requires $O(MN\log N)$ flops for the FFTs and $O(NM^3)$ flops for the Cholesky decompositions of the blocks, where $N$ is the number of longitudes. In terms of memory, a general axially symmetric process requires storing complex-valued cross-spectra, while a longitudinally reversible process requires only real values, roughly halving the storage.
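The block-diagonalization can also be verified end to end on a toy grid. The sketch below (our own, with illustrative band spectra and an AR(1)-in-latitude coherence, not the paper's fitted forms) assembles the dense covariance of a longitudinally reversible process from one real symmetric M x M spectral block per wavenumber; the log determinant of the big matrix then equals the sum of the log determinants of the blocks.

```python
import numpy as np

# Toy axially symmetric, longitudinally reversible process on an
# M x N latitude-longitude grid, built from one real symmetric
# spectral block F_c per wavenumber.
M, N = 3, 16
c = np.arange(N)
cfold = np.minimum(c, N - c)                   # fold wavenumbers so that F_c = F_{N-c}
w2 = 4.0 * np.sin(np.pi * c / N) ** 2          # discrete squared wavenumber
f = np.stack([1.0 / (0.3 + 0.2 * i + w2) ** 1.2 for i in range(M)])  # band spectra

def rho(d):                                    # AR(1)-type coherence in latitude
    return 0.95 ** (np.abs(d) * (1.0 + 0.3 * np.log1p(cfold)))

F = np.empty((N, M, M))                        # spectral blocks, one per wavenumber
for i in range(M):
    for j in range(M):
        F[:, i, j] = rho(i - j) * np.sqrt(f[i] * f[j])

# dense covariance: Cov(X_i(p), X_j(q)) = (1/N) sum_c F_c[i,j] exp(2*pi*1j*c*(p-q)/N)
lag = np.subtract.outer(np.arange(N), np.arange(N))
E = np.exp(2j * np.pi * np.einsum("c,pq->cpq", c, lag) / N)
Sigma = np.zeros((M * N, M * N))
for i in range(M):
    for j in range(M):
        Sigma[i*N:(i+1)*N, j*N:(j+1)*N] = np.einsum("c,cpq->pq", F[:, i, j], E).real / N
```

Because the unitary DFT decouples the wavenumbers exactly, the Cholesky work reduces from one factorization of the MN x MN matrix to N factorizations of M x M blocks, which is the source of the flop counts quoted above.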

Model                 ind    mat    h3    h10    h3,2    sp
# param                                                  130 (4)
time (hours)                                             1.8
loglik/(NMT(R-1))                                        0
Table 2: Comparison between different models in terms of number of parameters, computational time (hours) and normalized restricted loglikelihood differences. For the spectral model, the actual number of parameters in the global maximization is reported in parentheses

5.5 Comparisons to other models

We compare the spectral model (model sp) presented in the previous two sections to several other models. For all models, the temporal structure is given by the AR(1) model (5). To model the spatial structure of the residual term in (5), we consider the following possibilities: a model with independent and identically distributed components (model ind), an isotropic Matérn model (model mat) and the PD model with Matérn models for the underlying isotropic fields. Referring to equation (4), we consider the following settings for the PD models:

  • Model h3: coefficient functions expanded in Legendre polynomials up to order 3.

  • Model h10: coefficient functions expanded up to order 10.

  • Model h3,2: coefficient functions expanded up to orders 3 and 2, respectively.

We use 5 realizations of a drop scenario, for a total of approximately 10.7 million GCM temperatures, and we compare the results in terms of the restricted loglikelihood. For such massive data sets, we found it helpful to normalize differences in (restricted) loglikelihoods by the number of contrasts. We will write loglik to generically mean a difference in loglikelihoods. For the sp model, we first maximize the likelihood for the single-band parameters in parallel and then maximize the likelihood for the coherence and AR(1) parameters conditional on the single-band values. All the other models are maximized over the full parameter space. The results are reported in Table 2, where the number of parameters and the difference in loglikelihood (normalized with respect to the size of the data set) are shown and compared. Model ind is clearly not adequate, and the isotropic Matérn is a noticeable improvement. Model h3 gives better results, underscoring the need for an anisotropic model. Model h3,2 and especially h10 result in better likelihoods, but the number of parameters is very large and the estimation requires several weeks. Model sp outperforms all the previous models and, even though its total number of parameters is 130, the maximization is done only over the 4 parameters in Table 1 and requires only 1.8 hours. The precomputation of the parameters for the latitudinal bands (a procedure that, as mentioned in Section 5.2, requires a few minutes using multiple processors on a cluster) plays a crucial role in this model, as it adds flexibility and allows for a maximization of a conditional loglikelihood with respect to only a few parameters.

Although the model presented here already provides a substantially better fit than even the best PD model with much less computation, one might wonder whether maximizing the restricted loglikelihood over all 130 parameters would lead to a model with a much better fit. We ran a full parameter search over all 130 parameters using fminsearch in MATLAB, which resulted in only a modest improvement in loglik after approximately 1670 hours of computation. Our goal here is to find the best-fitting model to the data that we can for a given computational effort, and it is clear that the sp model with parameters estimated by our proposed two-stage procedure dominates the PD models with parameters estimated by REML in this application.

To better understand how the model captures the local spatial dependence of the data, it is useful to examine how variances of spatial contrasts are reproduced by the model [see, e.g., Stein (2005)]. Figure 3 shows a comparison between models mat, h3, h10 and sp in terms of their ability to reproduce the variances of some contrasts of the residuals. (The index for the realization is dropped, as the distribution of the contrasts does not depend on the realization; note that the contrasts depend on the estimates of the AR(1) parameters, but the values of these parameters vary so little across models that the visual impression of Figure 3 is unaffected by which estimates are used. The details about how the empirical estimates are computed are found in the supplementary material [Castruccio and Stein (2013)].) Figures 3(a)–(b) represent the east–west contrast and the north–south contrast. In both cases the spectral model is able to reproduce the patterns of the empirical contrast, showing an overall good fit of both the single-band spectral model (east–west) and the coherence (north–south). The isotropic model shows a pattern in the east–west contrast that is due only to the geometry of the sphere: points closer to the poles are physically closer for the same longitude spacing, resulting in smaller variances of the contrasts. Model h3 instead is able to capture some of the features of the data, especially for the north–south contrast, but is overall too smooth and would require more flexibility. Model h10 shows a decent fit in the north–south contrast, but the east–west contrast is significantly misfitted. Figures 3(c)–(d) represent the variance across latitudes and the Laplacian. The spectral model can reproduce most of the trend for the variance, proving to be flexible enough to capture the low-wavenumber behavior. The Laplacian is overestimated for the northern hemisphere, a sign of a lack of fit in the coherence between multiple bands that could be fixed by allowing nonstationarity across latitudes in (8), although at the cost of a substantially more complex model. Model h10 shows a somewhat better overall fit for this contrast.

Figure 3: estimated and fitted variances of (a) the east–west contrast, (b) the north–south contrast, (c) the variance across latitudes and (d) the Laplacian. The vertical axis is plotted on a log scale. The details about computing the empirical estimates are in the supplementary material [Castruccio and Stein (2013)].
Figure 4: differences of single-band parameter estimates across scenarios for panels (a)–(d), with the slow scenario as the reference. The dashed red lines represent the Bonferroni confidence bands around 0. The estimates are computed as in Figure 2.

6 Emulation

Throughout this section, we drop the index denoting the realization to simplify the notation. The model we have presented can efficiently reproduce the covariance structure for a single scenario, but in order to emulate temperature for a different forcing, we need to determine how the mean and the covariance structure change across scenarios. Figure 4 addresses the latter issue. The plots represent the change in the single-band parameter estimates across the four scenarios indicated in Figure 1 (the remaining parameter is not shown but displays similar patterns). For each scenario, an analysis of the different realizations is performed, and we show the differences with respect to the slow scenario, together with Bonferroni confidence bands around 0; since all the standard deviations are similar across scenarios, plotting these differences gives a fair comparison. The differences between the slow and drop scenarios are significant at higher latitudes for the parameter in Figure 4(a), but all the other parameters and scenarios lie largely within the confidence bands. Therefore, it seems reasonable to assume that differences in the covariance structure across scenarios are modest. In the supplementary material [Castruccio and Stein (2013)], a similar diagnostic is carried out for the coherence parameters. Thus, only a parametrization of the mean is necessary to describe the trend of the data, train the model on some scenarios, and then predict the temperature for a new scenario. Our approach to emulation of the mean is based on the method of Castruccio et al. (2013), where a simple model was proposed and the analysis was performed independently for 47 regions without accounting for spatial dependence.

6.1 A model for the mean

Before proceeding with this analysis, we standardize to account for the different variability across grid points. This simple adjustment was not done in the previous sections and accounts for some of the nonstationarity in longitude that our model is not able to capture. In this section we consider the output of a control run with constant CO concentration, compute its average and standard deviation over time for every grid point, and then use these to normalize every temperature vector in the training set.
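The standardization step can be sketched as follows, assuming the output is stored as (time × grid point) arrays; the function name and layout are illustrative.

```python
import numpy as np

def standardize(temp, control):
    """Normalize training-set temperatures by the control run.

    control : (T0, n_points) array, control run with constant CO2.
    temp    : (T, n_points) array, a training-set temperature field.
    """
    mu = control.mean(axis=0)           # pointwise control-run average
    sd = control.std(axis=0, ddof=1)    # pointwise control-run std dev
    return (temp - mu) / sd             # standardized anomalies
```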

With reference to (1), the goal of this section is to give a (scenario dependent) parametric model for in order to extrapolate the temperature value for another given scenario. The model we use is the following:


where is modeled as in Section 5 and indexes the 47 predefined regions shown in the supplementary materials [Castruccio and Stein (2013)].

The mean has three components:

  • an intercept , different for every grid point,

  • a short term effect , different for every grid point,

  • a long term effect , different for the 47 regions.

The last term accounts for the long term contribution of the forcing via the weights , which are expected to decrease as the time lag increases. We model this decrease exponentially, with a decay rate identical for every grid point. The linear parameters can be profiled, but the estimation of the parameters describing the behavior of and of requires the numerical maximization of the likelihood. The evaluation of this likelihood for a single run requires approximately 8 minutes, so to make the computation faster and reduce the optimization to , we plug in the estimated spatiotemporal structure of obtained via REML using the same procedure as in the previous section.
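The three-component mean can be sketched as below. The function name, the truncation of the CO2 history at a fixed lag, and the normalized exponential parametrization of the weights are all illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def mean_trend(co2, beta0, beta1, beta2, rho, lag=30):
    """Sketch of the mean: intercept + short-term CO2 effect +
    exponentially weighted long-term CO2 history.

    co2 : 1-D array of annual CO2 concentrations.
    rho : exponential decay rate of the lag weights (assumed form).
    """
    w = np.exp(-rho * np.arange(1, lag + 1))
    w /= w.sum()                          # decaying weights, summing to 1
    t = np.arange(lag, len(co2))          # years with a full lag window
    # weighted past concentrations: sum_k w_k * CO2_{t-k}, k = 1..lag
    hist = np.array([np.dot(w, co2[s - 1::-1][:lag]) for s in t])
    return beta0 + beta1 * co2[t] + beta2 * hist
```

Under this parametrization only the single decay rate enters the likelihood nonlinearly; the beta coefficients can be profiled out by generalized least squares, as described above.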

We estimate the stochastic structure with realizations of the drop scenario, obtain based on a single drop scenario (estimation of for all scenarios was not computationally feasible), and finally emulate for the slow scenario. We choose the drop scenario for the training set to show the results for a severe extrapolation, and because the trend is more evident under a forcing with an abrupt change, making the estimation of the coefficients more stable. In order to reproduce the mean climate for less drastic scenarios, such as the Representative Concentration Pathways in the CMIP5 archive [Van Vuuren et al. (2011)], a simpler form of the mean function is likely preferable.

Figure 5: Training set: one drop realization. Prediction set: a slow scenario. (a): an example of a conditional simulation. The solid black line represents the fitted (emulated) value, the gray lines represent the actual realizations from the GCM and the red lines are the conditional simulations. (b): global plot of the lack of fit index , which is a measure of goodness of fit for emulation of the mean. The upper bound at is only for visualization purposes as the fit index can be as large as 18.9. There are 74 out of 4032 points with .

The estimated value for is and, once this parameter is estimated, each conditional simulation takes only a few seconds. Given the very large number of temperatures and the small variability of the estimates, we choose not to account for parameter uncertainty in the simulations. The top part of Figure 5 gives an example of emulation of the mean and conditional simulation of the slow scenario for a grid point in the middle of the Pacific Ocean. To assess the fit, for every grid point we use, as in Castruccio et al. (2013), the following simple lack of fit index (the indication of latitude and longitude has been removed for simplicity):


where is the fitted value and is the average across realizations. This index measures how close the fitted value (emulated mean) is to the average of the realizations. The smaller the index, the better; but since the expected value of the numerator cannot be less than that of the denominator, values near 1 indicate an excellent fit.
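As a concrete illustration, the following sketch computes an index with the stated properties (its numerator cannot have smaller expectation than its denominator, and a value near 1 indicates an excellent fit). Its precise form is an assumption and may differ from the index defined above.

```python
import numpy as np

def lack_of_fit(fitted, runs):
    """Illustrative lack-of-fit index for emulation of the mean.

    fitted : (T,) emulated mean at one grid point.
    runs   : (R, T) ensemble realizations at the same grid point.
    """
    R = runs.shape[0]
    avg = runs.mean(axis=0)                    # average across realizations
    num = np.sum((fitted - avg) ** 2)
    # variance of the realization average, assuming independent runs
    den = np.sum(runs.var(axis=0, ddof=1) / R)
    return num / den                           # ~1 indicates excellent fit
```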

Figure 5(a) shows an example of conditional simulation for a point in the Pacific Ocean. The mean and variation are similar to the original simulations, but there is noticeable underestimation of extreme events, especially cold extremes. This inability of the statistical model to reproduce such features of the climate poses some problems and can limit the use of this model for impact assessment. A possible direction to address this issue is to fit the data with a model more general than an axially symmetric one, but this would require further advances in modeling and substantial computational resources. Another approach is to develop direct methods for fitting and emulating extremes along the lines of Mannshardt-Shamseldin et al. (2010).

Figure 5(b) shows for the slow scenario over all the regions. It is evident that the emulation does poorly in the area near the Southern Ocean stretching from the southern tip of South Africa to near Tasmania. We speculate that the sea ice albedo effect may create strong nonlinearities in this area, but further investigations are needed. Also, the fit is not fully adequate in the equatorial regions, even though the misfit is not as strong as in the Southern Ocean. The details about the algorithm are provided in the supplementary material [Castruccio and Stein (2013)], and the file called climate_movie shows a movie of a conditional simulation in terms of anomalies from preindustrial conditions.

7 Conclusions

Spectral modeling is a natural choice for gridded data on sphere × time, and we have shown how it outperforms the current alternatives in the literature in terms of simplicity, flexibility and computational requirements. Although this work has focused on temperature, a similar approach can be extended to other climate variables such as precipitation [see Castruccio et al. (2013) for an example of mean emulation] and possibly to different time scales, as long as the normality hypothesis is tenable. On a single latitudinal band, our model assumes independence of the spectral process across wavenumbers, but this assumption can in principle be relaxed to account for more complex nonstationarities. We have also shown an example of fit for climate model output of challenging size, and we have carried out the analysis without appealing to reduced rank representations of the process. The specific geometry of the climate output has played an important role, but distributing some parts of the algorithm across different processors has also contributed to reducing the computational burden, and further work is needed to understand how parallel computation can be helpful in fitting massive data sets.


The authors thank Elisabeth Moyer and David McInerney at the Department of Geophysical Sciences at the University of Chicago for providing the ensemble data and for the useful discussions. This work is part of the RDCEP effort to improve the understanding of computational models needed to evaluate climate and energy policies.


[id=suppA] \stitleSupplementary material \slink[doi]10.1214/13-AOAS656SUPP \sdatatype.pdf \sfilenameaoas656_supp.pdf \sdescriptionFurther technical details and theoretical results can be found in the online supplementary material.


  • Banerjee, S., Gelfand, A. E., Finley, A. O. and Sang, H. (2008). Gaussian predictive process models for large spatial data sets. J. R. Stat. Soc. Ser. B Stat. Methodol. 70 825–848.
  • Berrocal, V. J., Craigmile, P. F. and Guttorp, P. (2012). Regional climate model assessment using statistical upscaling and downscaling techniques. Environmetrics 23 482–492.
  • Castruccio, S. and Stein, M. L. (2013). Supplement to "Global space–time models for climate ensembles." DOI:10.1214/13-AOAS656SUPP.
  • Castruccio, S., McInerney, D. J., Stein, M. L., Liu, F., Jacob, R. L. and Moyer, E. J. (2013). Statistical emulation of climate model projections based on precomputed GCM runs. Unpublished manuscript.
  • Collins, W. D., Bitz, C. M., Blackmon, M. L., Bonan, G. B., Bretherton, C. S., Carton, J. A., Chang, P., Doney, S. C., Hack, J. J., Henderson, T. B., Kiehl, J. T., Large, W. G., McKenna, D. S., Santer, B. D. and Smith, R. D. (2006). The community climate system model: CCSM3. J. Climate 19 2122–2143.
  • Cressie, N. and Johannesson, G. (2008). Fixed rank kriging for very large spatial data sets. J. R. Stat. Soc. Ser. B Stat. Methodol. 70 209–226.
  • Davis, P. J. (1979). Circulant Matrices. Wiley, New York.
  • Gneiting, T. (2013). Strictly and non-strictly positive definite functions on spheres. Bernoulli 19 1327–1349.
  • Greasby, T. A. and Sain, S. R. (2011). Multivariate spatial analysis of climate change projections. J. Agric. Biol. Environ. Stat. 16 571–585.
  • Jones, R. H. (1963). Stochastic processes on a sphere. Ann. Math. Statist. 34 213–218.
  • Jun, M. (2011). Non-stationary cross-covariance models for multivariate processes on a globe. Scand. J. Stat. 38 726–747.
  • Jun, M., Knutti, R. and Nychka, D. W. (2008). Spatial analysis to quantify numerical model bias and dependence: How many climate models are there? J. Amer. Statist. Assoc. 103 934–947.
  • Jun, M. and Stein, M. L. (2007). An approach to producing space–time covariance functions on spheres. Technometrics 49 468–479.
  • Jun, M. and Stein, M. L. (2008). Nonstationary covariance models for global data. Ann. Appl. Stat. 2 1271–1289.
  • Lindgren, F., Rue, H. and Lindström, J. (2011). An explicit link between Gaussian fields and Gaussian Markov random fields: The stochastic partial differential equation approach. J. R. Stat. Soc. Ser. B Stat. Methodol. 73 423–498.
  • Mannshardt-Shamseldin, E. C., Smith, R. L., Sain, S. R., Mearns, L. O. and Cooley, D. (2010). Downscaling extremes: A comparison of extreme value distributions in point-source and gridded precipitation data. Ann. Appl. Stat. 4 484–502.
  • Meehl, G. A., Stocker, T. F., Collins, W. D., Friedlingstein, P., Gaye, A. T., Gregory, J. M., Kitoh, A., Knutti, R., Murphy, J. M., Noda, A., Raper, S. C. B., Watterson, I. G., Weaver, A. J. and Zhao, Z. C. (2007). Global climate projections. In Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, M. Tignor and H. L. Miller, eds.). Cambridge Univ. Press, Cambridge.
  • Sain, S. R., Furrer, R. and Cressie, N. (2011). A spatial analysis of multivariate output from regional climate models. Ann. Appl. Stat. 5 150–175.
  • Sain, S. R., Nychka, D. and Mearns, L. (2011). Functional ANOVA and regional climate experiments: A statistical analysis of dynamic downscaling. Environmetrics 22 700–711.
  • Stein, M. L. (2005). Statistical methods for regular monitoring data. J. R. Stat. Soc. Ser. B Stat. Methodol. 67 667–687.
  • Stein, M. L. (2007). Spatial variation of total column ozone on a global scale. Ann. Appl. Stat. 1 191–210.
  • Stein, M. L. (2008). A modeling approach for large spatial datasets. J. Korean Statist. Soc. 37 3–10.
  • Taylor, K. E., Stouffer, R. J. and Meehl, G. A. (2012). An overview of CMIP5 and the experiment design. Bull. Amer. Meteor. Soc. 93 485–498.
  • Van Vuuren, D. P., Edmonds, J., Kainuma, M., Riahi, K., Thomson, A., Hibbard, K., Hurtt, G. C., Kram, T., Krey, V., Lamarque, J.-F., Masui, T., Meinshausen, M., Nakicenovic, N., Smith, S. J. and Rose, S. K. (2011). The representative concentration pathways: An overview. Clim. Change 109 5–31.
  • Yeager, S. G., Shields, C. A., Large, W. G. and Hack, J. J. (2006). The low-resolution CCSM3. J. Climate 19 2545–2566.