An approximate fractional Gaussian noise model with $\mathcal{O}(n)$ computational cost
Abstract

Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties, applied in various fields like econometrics, hydrology and climatology. The computational cost of fitting an fGn model of length $n$ using a likelihood-based approach is $\mathcal{O}(n^2)$, exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to $\mathcal{O}(n^3)$. This paper presents an approximate fGn model of $\mathcal{O}(n)$ computational cost, both with direct and indirect Gaussian observations, with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive (AR(1)) processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary despite being Markov, and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.

Keywords: Autoregressive processes, Gaussian Markov random field, long memory, R-INLA, Toeplitz matrices

1 Introduction

Many natural processes observed in either time or space exhibit long memory dependency structures. One way to characterise long memory is in terms of the autocorrelation function having a slower than exponential decay, as a function of increasing temporal or geographical distance between observational points. In second-order stationary time series, long memory implies that the autocorrelations are not absolutely summable (McLeod and Hipel, 1978). Long memory behaviour has been observed within a vast variety of time series applications, like hydrology (Hurst, 1951; Hosking, 1984), geophysical time series (Mandelbrot and Wallis, 1969), network traffic modelling (Willinger et al., 1996), econometrics (Baillie, 1996; Cont, 2005) and climate data analysis (Franzke, 2012; Rypdal and Rypdal, 2014). For comprehensive introductions to long memory processes and their applications, see for example Doukhan et al. (2003) and Beran et al. (2013).

Fractional Gaussian noise is defined as the increment process of fractional Brownian motion. It is a stationary and parsimoniously parameterised model, with dependency structure characterised by the Hurst exponent $H \in (0, 1)$, which gives long memory when $1/2 < H < 1$. The computational cost of likelihood-based inference in fitting an fGn process of length $n$ is $\mathcal{O}(n^2)$, using the Durbin-Levinson or Trench algorithms (McLeod et al., 2007; Golub and Loan, 1996; Durbin, 1960; Levinson, 1947; Trench, 1964). These algorithms make use of the Toeplitz structure of the covariance matrix of fGn and are considered to be sufficiently fast when $n$ is not too large (McLeod et al., 2007). A main problem is that the required Toeplitz structure is fragile to modifications of the Gaussian observational model and to computations of conditional distributions. For example, the Toeplitz structure is destroyed if fGn is observed indirectly with inhomogeneous Gaussian noise, or has missing data. Without the Toeplitz structure, the computational cost in fitting fGn increases to $\mathcal{O}(n^3)$, which is far too expensive in many real data applications.

This paper presents an accurate and computationally efficient approximate fGn model of cost $\mathcal{O}(n)$, both with direct and indirect Gaussian observations, with or without additional conditioning. This represents a major computational improvement compared with existing approaches, and allows routine use of fGn models with large $n$, with negligible loss of accuracy. The new approximate model uses a weighted sum of independent first-order autoregressive (AR(1)) processes. The motivation is that aggregation of short-memory processes plays an important role in explaining long memory behaviour in time series (Beran et al., 2010), and an infinite weighted sum of AR(1) processes will give long memory (Granger, 1980). However, physical intuition about long memory processes suggests that this relationship can be leveraged much further than the asymptotic result states; see for example Haldrup and Valdés (2017). The new approximate model only needs a weighted sum of a few AR(1) processes to be accurate. We obtain this by fitting the weights and the coefficients of the approximation to mimic the autocorrelation function of the exact fGn model, as a continuous function of $H$.

A key feature of the approximate fGn model is a high degree of conditional independence within the model. Specifically, the approximate model will be represented as a Gaussian Markov random field (GMRF), whose computational properties are destroyed neither by indirect Gaussian observations nor by conditioning (Rue and Held, 2005). The approximate model is also stationary, a desirable property which is not common among GMRFs, as they typically have boundary effects. Since the approximate model is a local GMRF, it also fits well within the framework of latent Gaussian models, for which approximate Bayesian analysis is obtained with integrated nested Laplace approximations (INLA) (Rue et al., 2009) using the R package R-INLA (http://www.r-inla.org). A different aspect is that an aggregated model of a few AR(1) components could actually represent a more plausible and interpretable model than fGn in real data applications. Specifically, the approximate model can serve as a tool for automatic source separation in situations where the data at hand represent combined signals.

The plan of this paper is as follows. Section 2 reviews the computational cost in fitting the exact fGn model to direct and indirect Gaussian observations. Section 3 presents the new approximate fGn model and derives the computational cost and memory requirement for evaluating the log-likelihood. The performance of the approximate model is demonstrated by simulations in Section 4, also including a study of its predictive properties. In Section 5, we use the implicit source separation ability in decomposing the historical dataset of annual water level minima for the Nile river (Toussoun, 1925; Beran, 1994). Implementation within the class of latent Gaussian models is demonstrated in analysing a monthly mean surface air temperature series for Central England (Manley, 1953, 1974; Parker et al., 1992). Concluding remarks are given in Section 6.

2 The computational cost of fGn

Let $\boldsymbol{x} = (x_1, \ldots, x_n)^\top$ be a zero-mean fGn process with Hurst exponent $H$. The elements of the covariance matrix $\boldsymbol{\Sigma}$, $\Sigma_{ij} = \sigma^2 \rho(|i - j|)$, are defined by the autocorrelation function

$$ \rho(k) = \frac{1}{2}\left( |k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H} \right), \qquad k = 0, 1, 2, \ldots $$

This function has a hyperbolic decay, $\rho(k) \sim H(2H-1)k^{2H-2}$, as $k \to \infty$. The fGn process has long memory when $1/2 < H < 1$, while it reduces to white noise when $H = 1/2$. When $0 < H < 1/2$, the fGn model has anti-persistent properties, but we do not discuss this case here.
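As a small illustration, the following R snippet (illustrative only, using the standard fGn formulas above) compares the exact autocorrelations with the asymptotic hyperbolic form $\rho(k) \approx H(2H-1)k^{2H-2}$:

```r
## Exact fGn autocorrelation function versus its hyperbolic tail
## approximation; values chosen here are illustrative.
fgn_acf <- function(k, H) {
  0.5 * (abs(k + 1)^(2 * H) - 2 * abs(k)^(2 * H) + abs(k - 1)^(2 * H))
}
H <- 0.8
k <- c(10, 100, 1000)
cbind(exact = fgn_acf(k, H), asymptotic = H * (2 * H - 1) * k^(2 * H - 2))
```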

When fGn is observed directly, we estimate $H$ by maximizing the log-likelihood function

$$ \log \pi(\boldsymbol{x} \mid H) = \frac{1}{2} \log |\boldsymbol{Q}| - \frac{1}{2} \boldsymbol{x}^\top \boldsymbol{Q} \boldsymbol{x} - \frac{n}{2} \log(2\pi), $$

where $\boldsymbol{Q} = \boldsymbol{\Sigma}^{-1}$ is the precision matrix of $\boldsymbol{x}$. Making use of the Toeplitz structure of $\boldsymbol{\Sigma}$, the likelihood is evaluated in $\mathcal{O}(n^2)$ flops using the Durbin-Levinson algorithm (Golub and Loan, 1996, Algorithm 4.7.2). Also, the precision matrix can be calculated in $\mathcal{O}(n^2)$ flops by the Trench algorithm (Golub and Loan, 1996, Algorithm 4.7.3). In general, the Trench algorithm can be combined with the Durbin-Levinson recursions (Golub and Loan, 1996, Algorithm 5.7.1) to calculate the exact likelihood of general linear Gaussian time series models (McLeod et al., 2007).
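As a concrete illustration, here is a minimal sketch (not the authors' implementation) of exact fGn log-likelihood evaluation via the Durbin-Levinson recursions, costing $\mathcal{O}(n^2)$ flops; it computes the one-step prediction means and variances and accumulates the Gaussian log-density.

```r
## Durbin-Levinson evaluation of the exact fGn log-likelihood;
## assumes n >= 2 and a unit-variance process unless sigma2 is set.
fgn_acf <- function(k, H) {
  0.5 * (abs(k + 1)^(2 * H) - 2 * abs(k)^(2 * H) + abs(k - 1)^(2 * H))
}

fgn_loglik <- function(x, H, sigma2 = 1) {
  n <- length(x)
  g <- sigma2 * fgn_acf(0:(n - 1), H)   # autocovariances gamma(0..n-1)
  phi <- numeric(0)                     # partial regression coefficients
  v <- g[1]                             # one-step prediction variance
  ll <- dnorm(x[1], mean = 0, sd = sqrt(v), log = TRUE)
  for (t in 1:(n - 1)) {
    if (t == 1) {
      phi_tt <- g[2] / g[1]
      phi <- phi_tt
    } else {
      phi_tt <- (g[t + 1] - sum(phi * g[t:2])) / v
      phi <- c(phi - phi_tt * rev(phi), phi_tt)
    }
    v <- v * (1 - phi_tt^2)
    xhat <- sum(phi * x[t:1])           # best linear predictor of x[t+1]
    ll <- ll + dnorm(x[t + 1], mean = xhat, sd = sqrt(v), log = TRUE)
  }
  ll
}
```

The estimate of $H$ can then be obtained by passing this function to a one-dimensional optimiser such as optimise().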

A major drawback of relying on these algorithms for Toeplitz matrices is that the Toeplitz structure is easily destroyed if the time series is observed indirectly. Consider a simple regression setting in which an fGn process is observed with independent Gaussian random noise,

$$ \boldsymbol{y} = \boldsymbol{x} + \boldsymbol{\varepsilon}, \qquad \boldsymbol{\varepsilon} \sim \mathcal{N}\left(\boldsymbol{0}, \boldsymbol{Q}_\varepsilon^{-1}\right), \qquad (1) $$

where $\boldsymbol{\varepsilon}$ is independent of $\boldsymbol{x}$ and $\boldsymbol{Q}_\varepsilon$ is diagonal. The log-density of $\boldsymbol{x}$ given the observations $\boldsymbol{y}$ is

$$ \log \pi(\boldsymbol{x} \mid \boldsymbol{y}) = -\frac{1}{2} \boldsymbol{x}^\top (\boldsymbol{Q} + \boldsymbol{Q}_\varepsilon)\, \boldsymbol{x} + \boldsymbol{x}^\top \boldsymbol{Q}_\varepsilon \boldsymbol{y} + \text{const}. $$

The conditional mean of $\boldsymbol{x} \mid \boldsymbol{y}$ is found by solving $(\boldsymbol{Q} + \boldsymbol{Q}_\varepsilon)\boldsymbol{\mu} = \boldsymbol{Q}_\varepsilon \boldsymbol{y}$ with respect to $\boldsymbol{\mu}$, while the marginal variances equal the diagonal elements of $(\boldsymbol{Q} + \boldsymbol{Q}_\varepsilon)^{-1}$. The Toeplitz structure of the covariance matrix of $\boldsymbol{y}$, $\boldsymbol{\Sigma} + \boldsymbol{Q}_\varepsilon^{-1}$, is only retained when the noise term has homogeneous variance, i.e. $\boldsymbol{Q}_\varepsilon = \tau_\varepsilon \boldsymbol{I}$. With non-homogeneous observation variance or missing data, fitting (1) would require general algorithms of cost $\mathcal{O}(n^3)$. This makes analysis of many real data sets infeasible, or at best challenging.

The motivation for expressing the log-likelihood function in terms of the precision matrix $\boldsymbol{Q}$ is to prepare for an approximate GMRF representation of the fGn model. We have already noted that aggregation of an infinite number of short-memory processes can explain long memory behaviour in time series. This suggests that the relevant precision matrix is (or can be approximated with) a sparse band matrix, but with a larger dimension (still denoted by $n$) for a finite sum. Assume for a moment that such an approximation exists. We can then apply general numerical algorithms for sparse matrices which only depend on the non-zero structure of the matrix. This implies that the numerical cost in dealing with $\boldsymbol{Q}$ or $\boldsymbol{Q} + \boldsymbol{Q}_\varepsilon$ is the same. Conditioning on subsets of $\boldsymbol{x}$ implies nothing else than working with a submatrix of $\boldsymbol{Q}$ or $\boldsymbol{Q} + \boldsymbol{Q}_\varepsilon$, and does not add to the computational costs; see (Rue and Held, 2005, Ch. 2) for details. Specifically, we can make use of the Cholesky decomposition, in which the relevant precision matrix is factorised as $\boldsymbol{Q} = \boldsymbol{L}\boldsymbol{L}^\top$, where $\boldsymbol{L}$ is a lower triangular matrix. The log-likelihood is then evaluated with negligible cost, as the log-determinant is $\log|\boldsymbol{Q}| = 2 \sum_{i=1}^{n} \log L_{ii}$ (Rue, 2001). The conditional mean is found by solving $\boldsymbol{L}\boldsymbol{v} = \boldsymbol{b}$ and $\boldsymbol{L}^\top \boldsymbol{\mu} = \boldsymbol{v}$. The numerical cost in finding the Cholesky decomposition depends on the non-zero structure of the matrix. For time-series models (or long skinny graphs), the cost is $\mathcal{O}(n)$ (Rue and Held, 2005). The explicit construction of such an approximation is discussed next.
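To illustrate the $\mathcal{O}(n)$ computations available once a banded precision matrix is in place, here is a minimal sketch using the Matrix package in R, with the tridiagonal precision of a unit-variance AR(1) process standing in for the banded structure; all values are illustrative.

```r
## O(n) GMRF computations for a banded precision matrix; the AR(1)
## example and the parameter values are illustrative assumptions.
library(Matrix)

n   <- 1000
phi <- 0.9
## Tridiagonal precision of a unit-variance AR(1) process.
Q <- bandSparse(n, n, k = c(0, 1),
                diagonals = list(c(1, rep(1 + phi^2, n - 2), 1) / (1 - phi^2),
                                 rep(-phi / (1 - phi^2), n - 1)),
                symmetric = TRUE)

L <- chol(Q)                        # sparse Cholesky factor, O(n) here
logdetQ <- 2 * sum(log(diag(L)))    # log-determinant at negligible cost

## Conditional mean: solve Q mu = b via the sparse factorisation.
b  <- rnorm(n)
mu <- solve(Q, b)
```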

3 An approximate fGn model

This section presents an approximate fGn model which is a weighted sum of just a few independent AR(1) processes. We will fit the parameters of the approximation to mimic the autocorrelation structure of fGn up to a given finite lag. The resulting approximate model is a GMRF with a banded precision matrix of fixed bandwidth, which gives a computational cost of $\mathcal{O}(n)$.

3.1 Fitting the autocorrelation function

Define $m$ independent AR(1) processes by

$$ z_j(t) = \phi_j z_j(t-1) + \varepsilon_j(t), \qquad j = 1, \ldots, m, \qquad (2) $$

where $\phi_j$ denotes the first-lag autocorrelation coefficient of the $j$th AR(1) process. Further, let the innovations $\varepsilon_j(t)$ be independent zero-mean Gaussians, with variance $1 - \phi_j^2$, such that each $z_j(t)$ has unit marginal variance. Define the cross-sectional aggregation of the AR(1) processes,

$$ f(t) = \sum_{j=1}^{m} \sqrt{w_j}\, z_j(t), \qquad (3) $$

where $w_j \geq 0$ and $\sum_{j=1}^{m} w_j = 1$. This implies that $\operatorname{Var}(f(t)) = 1$. The finite-sample properties of a similar aggregation of AR(1) processes are studied in Haldrup and Valdés (2017), in which the coefficients are Beta distributed. They conclude that “one should be aware that cross-sectional aggregation leading to long memory is an asymptotic feature that applies for the cross-sectional dimension tending to infinity. In finite samples and for moderate cross-sectional dimensions the observed memory of the series can be rather different from the theoretical memory”.

The approximation presented here only needs a small value of the cross-sectional dimension $m$ to be accurate. The key idea of our approach is to fit the weights and the autocorrelation coefficients in (3) to match the autocorrelation function of fGn, as a function of the Hurst exponent $H$. The autocorrelation function of (3) follows directly as

$$ \tilde{\rho}(k) = \sum_{j=1}^{m} w_j \phi_j^{|k|}, \qquad k = 0, 1, 2, \ldots $$

Now, fix a value of $m$. We fit the weights and coefficients by minimizing the weighted squared error

$$ \sum_{k=1}^{k_{\max}} v(k) \left( \rho(k) - \tilde{\rho}(k) \right)^2, \qquad (4) $$

where $k_{\max}$ represents a user-specified upper limit. The squared error is weighted by a decreasing weight function $v(k)$ to ensure a good fit for the autocorrelation function close to lag 0, while less weight is given to tail behaviour as the autocorrelation function is decaying slowly.
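A minimal sketch of this least-squares fit for fixed $H$ is given below; the specific weight function $v(k) = 1/(k+1)$ and the unconstrained softmax/logit parameterisation are illustrative assumptions, not necessarily the authors' exact choices.

```r
## Fit weights w_j and coefficients phi_j of the AR(1) mixture to the
## fGn autocorrelation function, for fixed H; illustrative sketch.
fgn_acf <- function(k, H) {
  0.5 * (abs(k + 1)^(2 * H) - 2 * abs(k)^(2 * H) + abs(k - 1)^(2 * H))
}

fit_ar1_mixture <- function(H, m = 4, kmax = 1000) {
  k   <- 1:kmax
  rho <- fgn_acf(k, H)                              # target autocorrelations
  obj <- function(theta) {
    w   <- exp(theta[1:m]) / sum(exp(theta[1:m]))   # weights on the simplex
    phi <- plogis(theta[(m + 1):(2 * m)])           # coefficients in (0, 1)
    rho_approx <- colSums(w * outer(phi, k, `^`))   # sum_j w_j phi_j^k
    sum((rho - rho_approx)^2 / (k + 1))             # v(k) = 1/(k+1): assumed
  }
  est <- optim(rep(0, 2 * m), obj, method = "BFGS")
  list(w   = exp(est$par[1:m]) / sum(exp(est$par[1:m])),
       phi = plogis(est$par[(m + 1):(2 * m)]))
}

fit <- fit_ar1_mixture(H = 0.8)
```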

In an extensive computation done only once, we find the optimal weights and coefficients for a fine grid of $H$-values. Spline interpolation is used for values of $H$ in between, to represent the weights and coefficients as continuous functions of $H$. The interpolation and fitting are performed using reparameterised weights and coefficients, to ensure uniqueness and improved numerical behaviour. These reparameterisations are defined as

$$ w_j = \frac{\exp(\psi_j)}{\sum_{l=1}^{m} \exp(\psi_l)}, \qquad \phi_j = \frac{1}{1 + \exp(-\zeta_j)}, \qquad j = 1, \ldots, m, $$

where $\psi_m = 0$ and where the $\zeta_j$'s are ordered to ensure uniqueness of the representation. The Hurst exponent is transformed as $h = \operatorname{logit}(2H - 1)$. This ensures a stable and unconstrained parameter space for fixed $m$. Note that the error of the fit tends to zero when $H$ goes to 1 or $1/2$. The resulting coefficients and weights for $m = 3$ and $m = 4$ are displayed in Figure 1. The fitted weights and coefficients are also available in R using the function INLA::inla.fgn.

Figure 1: Upper panels: The fitted weights and coefficients obtained from (4), using $m = 3$ components in the approximation. Lower panels: Similar using $m = 4$.

3.2 A Gaussian Markov random field representation

We will now discuss the precision matrix for the approximate fGn model. We start with one AR(1) process (2) of length $n$, with unit marginal variance and the tridiagonal precision matrix

$$ \boldsymbol{Q}_\phi = \frac{1}{1 - \phi^2} \begin{pmatrix} 1 & -\phi & & & \\ -\phi & 1 + \phi^2 & -\phi & & \\ & \ddots & \ddots & \ddots & \\ & & -\phi & 1 + \phi^2 & -\phi \\ & & & -\phi & 1 \end{pmatrix}. $$

For the approximate fGn model, we have $m$ such processes and their sum. Hence we need the precision matrix of the vector

$$ \left( z_1(1), \ldots, z_1(n),\; z_2(1), \ldots, z_2(n),\; \ldots,\; z_m(1), \ldots, z_m(n),\; f(1), \ldots, f(n) \right). \qquad (5) $$

To ensure a non-singular joint distribution, we will add a small Gaussian noise term to the sum, i.e. we let

$$ f(t) = \sum_{j=1}^{m} \sqrt{w_j}\, z_j(t) + \varepsilon(t), \qquad (6) $$

where the precision $\tau_\varepsilon$ of the noise term $\varepsilon(t)$ is fixed at a high value. The (upper part of the symmetric) precision matrix of the vector (5) is found as

$$ \boldsymbol{Q} = \begin{pmatrix} \boldsymbol{Q}_{\phi_1} + \tau_\varepsilon w_1 \boldsymbol{I} & \tau_\varepsilon \sqrt{w_1 w_2}\, \boldsymbol{I} & \cdots & -\tau_\varepsilon \sqrt{w_1}\, \boldsymbol{I} \\ & \boldsymbol{Q}_{\phi_2} + \tau_\varepsilon w_2 \boldsymbol{I} & \cdots & -\tau_\varepsilon \sqrt{w_2}\, \boldsymbol{I} \\ & & \ddots & \vdots \\ & & & \tau_\varepsilon \boldsymbol{I} \end{pmatrix}, $$

where $\boldsymbol{Q}_{\phi_j}$ denotes the tridiagonal AR(1) precision matrix above with coefficient $\phi_j$. The non-zero structure is displayed in Figure 2 (left panel) for a small example. Even though the matrix is sparse, a more optimal structure can be achieved by grouping the variables associated with each of the $n$ time points,

$$ \left( z_1(1), \ldots, z_m(1), f(1),\; z_1(2), \ldots, z_m(2), f(2),\; \ldots,\; z_1(n), \ldots, z_m(n), f(n) \right). \qquad (7) $$

The benefit of this reordering is that the corresponding precision matrix $\widetilde{\boldsymbol{Q}}$ is a band matrix, see Figure 2 (middle panel). Doing the Cholesky decomposition, $\widetilde{\boldsymbol{Q}} = \boldsymbol{L}\boldsymbol{L}^\top$, the lower triangular matrix $\boldsymbol{L}$ will inherit the lower bandwidth of $\widetilde{\boldsymbol{Q}}$ (Rue, 2001; Golub and Loan, 1996, Thm. 4.3.1), see Figure 2 (right panel). This leads to the following key result concerning the computational cost of the approximate model, with a trivial proof.

Theorem 3.1.

The number of flops needed for the Cholesky decomposition of $\widetilde{\boldsymbol{Q}}$ is $\mathcal{O}\left(n(m+1)^3\right)$. The memory requirement for the Cholesky triangle is $n(m+1)(m+2)$ reals.

Proof.

$\widetilde{\boldsymbol{Q}}$ is a band matrix with dimension $n(m+1)$ and bandwidth $m+1$. The computational cost of the Cholesky factorisation of a band matrix with dimension $N$ and bandwidth $p$ is about $N(p^2 + 3p)$ flops, and the memory requirement needed is $N(p+1)$ reals (Golub and Loan, 1996, Section 4.3.5). ∎

Figure 2: Left panel: The structure of the precision matrix of the vector (5). Middle panel: The structure of the precision matrix of the reordered vector (7). Right panel: The resulting structure of the lower triangular matrix in the Cholesky decomposition. The matrices are illustrated for a small example.

The computational cost and memory requirement of the Cholesky decomposition do not change if the approximate fGn model is observed indirectly, as in the regression model (1). Notice that it is also possible to construct an approximation using the cumulative sums of the weighted AR(1) components to form a sparse precision matrix with the same bandwidth. Obviously, this approach gives computational savings, but it does not allow for automatic source separation in situations where the fGn can be seen to represent combined signals. This feature of the approximate model is demonstrated in Section 5.1; the construction above is sketched in code below.
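The following sketch (an assumed construction consistent with (5)-(7), not the authors' code) assembles the joint precision matrix in R and applies the interleaving permutation; the weights, coefficients and the value of the noise precision are illustrative.

```r
## Joint precision matrix of (z_1, ..., z_m, f) with the interleaved
## ordering (7); tau_eps = 1e8 is an illustrative high noise precision.
library(Matrix)

ar1_prec <- function(n, phi) {   # unit-variance AR(1), tridiagonal
  bandSparse(n, n, k = c(0, 1),
             diagonals = list(c(1, rep(1 + phi^2, n - 2), 1) / (1 - phi^2),
                              rep(-phi / (1 - phi^2), n - 1)),
             symmetric = TRUE)
}

fgn_joint_prec <- function(n, w, phi, tau_eps = 1e8) {
  m  <- length(w)
  ## Block-diagonal precision in the ordering (5); f gets a zero block
  ## before the noise-term contribution is added.
  Qz <- bdiag(c(lapply(phi, function(p) ar1_prec(n, p)),
                list(Diagonal(n, x = 0))))
  ## f(t) = sum_j sqrt(w_j) z_j(t) + eps(t) contributes tau_eps * B'B,
  ## with B = (-sqrt(w_1) I, ..., -sqrt(w_m) I, I).
  B  <- do.call(cbind, c(lapply(sqrt(w), function(s) Diagonal(n, x = -s)),
                         list(Diagonal(n))))
  Q  <- Qz + tau_eps * crossprod(B)
  ## Permute to the interleaved ordering (7): group (z_1(t),...,z_m(t),f(t)).
  perm <- as.vector(t(matrix(seq_len(n * (m + 1)), nrow = n)))
  Q[perm, perm]
}

Q <- fgn_joint_prec(n = 50, w = c(0.3, 0.3, 0.4), phi = c(0.99, 0.9, 0.5))
s <- summary(Q)          # triplet form: columns i, j, x
max(abs(s$i - s$j))      # equals m + 1, the bandwidth
```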

3.3 Choosing the number of AR(1) components in the approximation

The choice of the number of components $m$ reflects a trade-off between computational efficiency and approximation error. This implies that $m$ should be as small as possible, but still large enough to give a reasonably accurate approximation of the autocorrelation function of fGn. Figure 3 illustrates the autocorrelation function of fGn compared with the approximate model when $m = 3$ and $m = 4$. We only show results for a large value of $H$, as the differences between the curves are less visible using smaller values of $H$.

Figure 3: Left panel: The exact autocorrelation function (solid) versus the autocorrelation function of the approximate model (dashed), using $m = 3$. Right panel: Similar using $m = 4$.

We do notice that $m = 4$ gives an almost perfect match of the autocorrelation function up to lag $k_{\max}$. For larger lags, the autocorrelation function of the approximate fGn model will have an exponential decay; hence we cannot match the hyperbolic decay of the exact fGn. As a consequence, the choice of $k_{\max}$ can be seen as a trade-off between having a good fit for the first part of the autocorrelation function versus tail behaviour.

Figure 4: Left panel: The square root of the Kullback-Leibler divergence as a function of $H$, for time series of two different lengths (solid and dashed), using the approximate fGn model with $m = 3$. Right panel: Similar using $m = 4$.

A different way to illustrate the difference between the approximate and exact fGn models is in terms of the Kullback-Leibler divergence. This measures the information lost when the approximate fGn model is used instead of the exact fGn model. Figure 4 displays the square root of the Kullback-Leibler divergence for $m = 3$ and $m = 4$, as a function of $H$. We notice that $m = 4$ clearly gives an improvement over $m = 3$, in particular for larger values of $H$. The loss of information for $m = 4$ is small, despite the fact that the autocorrelation function is fitted only up to lag $k_{\max}$.
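For reference, the divergence between two zero-mean multivariate Gaussians has a closed form; with $\boldsymbol{\Sigma}$ the exact and $\widetilde{\boldsymbol{\Sigma}}$ the approximate covariance matrix, the conventional "information lost" direction (assumed here to match Figure 4) reads

$$ \mathrm{KLD}\left( \mathcal{N}(\boldsymbol{0}, \boldsymbol{\Sigma}) \,\big\|\, \mathcal{N}(\boldsymbol{0}, \widetilde{\boldsymbol{\Sigma}}) \right) = \frac{1}{2}\left( \operatorname{tr}\big( \widetilde{\boldsymbol{\Sigma}}^{-1} \boldsymbol{\Sigma} \big) - n + \log \frac{|\widetilde{\boldsymbol{\Sigma}}|}{|\boldsymbol{\Sigma}|} \right). $$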

4 Simulation results

To evaluate the properties of the approximate fGn model, we now study the loss of accuracy when using the approximate versus the exact fGn model, for estimation and prediction. The results will demonstrate an impressive performance for both the estimation and prediction exercises using the approximate fGn model with $m = 4$.

4.1 Maximum likelihood estimation of $H$

We first study the loss of accuracy using the approximate versus the exact fGn model in maximum likelihood estimation of $H$. We fit the approximate model using $m = 3$ and $m = 4$ to simulated fGn series of fixed length, over a large number of replications. The error is evaluated in terms of the root mean squared error (RMSE) and the mean absolute error (MAE) of the differences $\hat{H}_i^{(m)} - \hat{H}_i$, where $\hat{H}_i^{(m)}$ and $\hat{H}_i$ denote the estimates for the $i$th replication using the approximate and the exact fGn model, respectively.

The results are summarised in Table 1, in which the true Hurst exponent ranges from 0.60 to 0.95. Using $m = 3$, the Hurst exponent is underestimated, and the error is seen to increase with $H$, at least up to $H = 0.90$. The situation improves markedly for $m = 4$, in which case the error is small for all values of $H$. The standard deviation estimates found from the empirical Fisher information are more similar than the estimates themselves (results not shown).

          Average MLE of H            RMSE                MAE
H       Exact    m = 3    m = 4     m = 3    m = 4     m = 3    m = 4
0.60    0.5998   0.5998   0.5998    0.0019   0.0007    0.0015   0.0006
0.65    0.6481   0.6478   0.6480    0.0026   0.0008    0.0021   0.0006
0.70    0.7004   0.6997   0.7003    0.0033   0.0008    0.0026   0.0006
0.75    0.7488   0.7472   0.7487    0.0032   0.0007    0.0025   0.0006
0.80    0.7998   0.7974   0.7996    0.0031   0.0006    0.0026   0.0005
0.85    0.8503   0.8471   0.8500    0.0035   0.0004    0.0032   0.0004
0.90    0.8999   0.8965   0.8997    0.0035   0.0003    0.0034   0.0003
0.95    0.9500   0.9475   0.9499    0.0025   0.0002    0.0025   0.0001
Table 1: The average of the maximum likelihood estimates of $H$, together with the root mean squared error (RMSE) and the mean absolute error (MAE) of the differences between the estimates from the approximate models with $m = 3$ and $m = 4$ and the estimates from the exact model. All generated fGn processes have the same length, and the error measures are computed over repeated replications.

Figure 5 displays scatterplots of the maximum likelihood estimates for the approximate model with $m = 3$ and $m = 4$, versus the estimates using the exact model, for three values of $H$ in the long memory range. The inaccuracy for $m = 3$ is clearly visible and increases with increasing values of $H$, while $m = 4$ shows very good performance. We have noticed that the same general remarks also hold when we increase the length of the series. The series then contain more information about $H$, and the error due to using $m = 4$ is negligible. In conclusion, we do get a very low loss of accuracy using the approximate model with $m = 4$. This is impressive, especially as it applies for all reasonable values of $H$ in the long memory range.
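For one replication, the comparison can be sketched as follows, assuming the fgn_loglik() function from the sketch in Section 2 and the simulator of the FGN package referenced in Section 5; all values are illustrative.

```r
## A minimal sketch of one replication of the estimation study.
library(FGN)
set.seed(1)
x <- SimulateFGN(1000, H = 0.8)      # one simulated fGn series
## Maximise the exact likelihood over the long memory range; the
## approximate-model estimate would replace fgn_loglik() accordingly.
H.hat <- optimise(function(H) fgn_loglik(x, H),
                  interval = c(0.51, 0.99), maximum = TRUE)$maximum
```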

Figure 5: Upper panels: The maximum likelihood estimates of $H$ for all replications using the approximate fGn model with $m = 3$ versus the exact fGn model. The true $H$-values increase from the left to the right panel. Lower panels: Similar using $m = 4$.

4.2 Predictive properties

This section investigates the effect of the approximation error when we observe an fGn process of length $n$ with fixed $H$, and then want to predict future time points. The approximate model is implemented with $m = 4$. To evaluate the properties of the predictions, we consider the empirical mean of the standardised absolute prediction error,

$$ e_\mu(k) = \frac{1}{n_r} \sum_{i=1}^{n_r} \frac{\left| \tilde{\mu}_i(k) - \mu_i(k) \right|}{\sigma(k)}, \qquad (8) $$

where $n_r$ is the number of replications. $\tilde{\mu}_i(k)$ is the conditional expectation for $k$ time points ahead from the $i$th replication using the approximate fGn model. Correspondingly, $\mu_i(k)$ is the conditional expectation using the exact fGn model, while $\sigma(k)$ is the conditional standard deviation. To measure the error in the conditional standard deviation, we use

$$ e_\sigma(k) = \frac{\left| \tilde{\sigma}(k) - \sigma(k) \right|}{\sigma(k)}, \qquad (9) $$

which does not depend on the replication, since conditional variances in Gaussian models do not depend on the observed values.
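Both quantities follow from standard Gaussian conditioning: with $\boldsymbol{\sigma}_k = \operatorname{Cov}(x_{n+k}, \boldsymbol{x})$ denoting the covariance vector between the future value and the observed series (notation introduced here for illustration), we have

$$ \mu(k) = \boldsymbol{\sigma}_k^\top \boldsymbol{\Sigma}^{-1} \boldsymbol{x}, \qquad \sigma^2(k) = \operatorname{Var}(x_{n+k}) - \boldsymbol{\sigma}_k^\top \boldsymbol{\Sigma}^{-1} \boldsymbol{\sigma}_k, $$

and the variance expression is indeed free of the observed values $\boldsymbol{x}$.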

Figure 6: Left panel: The prediction error for the mean (8), predicting up to $k$ points ahead. The observed time series is of two different lengths (solid and dashed). Right panel: The similar error in the conditional standard deviation (9).

The left panel of Figure 6 illustrates the empirical prediction error (8) for $k$ time points ahead, for both observation lengths. The right panel shows the corresponding error in the prediction standard deviation (9). We only report results for one value of the Hurst exponent, as other values give similar results. We notice that the mean prediction error increases slightly with the prediction horizon, which is explained by the increased approximation error for lags larger than $k_{\max}$. Otherwise, both errors are relatively small and also quite stable with $k$.

5 Real data applications: Source separation and full Bayesian analysis

This section demonstrates two different aspects of the approximate fGn model in real data applications. First, the approximate model can be used as a tool for source separation of a combined signal, for example representing underlying cycles or variations on different time scales. This will be illustrated in analysing the Nile river dataset (available in R as FGN::NileMin). These data give annual water level minima for the period 622 - 1284, measured at the Roda Nilometer near Cairo. Second, the approximate model can easily be combined with other model components within the general framework of latent Gaussian models and fitted efficiently using R-INLA. This is demonstrated in analysing the Hadley Centre Central England Temperature series (HadCET), available at http://www.metoffice.gov.uk/hadobs. These data give mean monthly measurements of surface air temperatures for Central England in the period 1659 - 2016. The two datasets are illustrated in Figure 7.

Figure 7: Left panel: Mean-centered annual minimum water level of the Nile river. Right panel: Monthly mean surface air temperatures for Central England, also including the annual mean temperatures in white.

5.1 Signal separation for the Nile river annual minima

The Nile river dataset is a widely studied time series (Beran, 1994; Eltahir, 1996) often used as an example of a real fGn process (Koutsoyiannis, 2002; Benhmehdi et al., 2011). Analysis of this dataset led to the discovery of the Hurst phenomenon (Hurst, 1951). For hydrological time series, this phenomenon has been explained as the tendency of having irregular clusters of wet and dry periods and can be related to characteristics of the fluctuations of the series at different temporal scales (Koutsoyiannis, 2002).

We can easily fit the exact fGn model to this dataset, as the process is observed directly and the length of the series is only $n = 663$. The maximum likelihood estimates of the Hurst exponent using the exact model and using the approximate fGn model with $m = 4$ are essentially identical. The resulting four estimated weighted AR(1) components are illustrated in Figure 8. The fitted autocorrelation coefficients for these components decrease from a value close to 1 for the first component to a clearly smaller value for the fourth, which also receives the largest weight.

As illustrated in Figure 1, the first autocorrelation coefficient will always be quite close to 1. This gives a slowly varying trend, which in this case basically represents the mean. The second component also reflects a slowly varying signal, which can be interpreted to represent cycles of the water level fluctuations of about 200 - 250 years. The third component seems to reflect shorter cycles of length 30 - 100 years. These cycles are seen to appear more irregularly and we also notice the tendency of having clusters of years with high and low water levels, respectively. The fourth component, which has the smallest autocorrelation coefficient and the largest weight, can be interpreted as weakly correlated annual noise. These interpretations are in correspondence with the Hurst phenomenon, in which the components reflect signal fluctuations at different temporal scales.

Figure 8: The estimated weighted AR(1) components for mean-centered annual minimum water levels of the Nile river, using the approximate fGn model with $m = 4$.

5.2 Full Bayesian analysis of a temperature series

The HadCET series is the longest existing instrumental record of monthly temperatures in the world. The observations started in January 1659 and have been updated monthly since. The observed temperatures do have uncertainties (Parker and Horton, 2005), especially in the earliest years, and the series has been revised several times (Manley, 1953, 1974; Parker et al., 1992). We analyse temperatures up to December 2016, which gives a total of $n = 4296$ observations.

In fitting a model to the given temperatures, we assume

$$ y_t = \mu + \beta t + s_t + f_t + \varepsilon_t, \qquad t = 1, \ldots, n, $$

where $y_t$ is the temperature in month $t$ (measured in degrees Celsius) and $\varepsilon_t$ is independent Gaussian observation noise. The given linear predictor includes an intercept $\mu$, a linear trend $\beta t$, and a seasonal effect $s_t$ of periodicity 12 which captures monthly variations. This seasonal effect is modelled as an intrinsic GMRF of rank $n - 11$, having precision parameter $\tau_s$ (Rue and Held, 2005, p. 122) and scaled to have a generalized variance equal to 1 (Sørbye and Rue, 2014). The term $f_t$ denotes the approximate fGn model with $m = 4$, having precision parameter $\tau_f$.

The parameters $\mu$ and $\beta$ are assigned vague zero-mean Gaussian priors, while we use penalised complexity priors (PC priors) (Simpson et al., 2017) for all hyperparameters. This implies a type II Gumbel distribution for the precision parameters, scaled using a probability statement on the corresponding marginal standard deviation. The PC prior for $H$ (Sørbye and Rue, 2017) is scaled by a similar statement on the tail probability for large values of $H$. A sketch of this model specification in R-INLA is given below.
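The following is a hedged sketch of how such a model can be specified in R-INLA; the index names and data frame layout are illustrative assumptions, the prior specifications are omitted, and the "seasonal" and "fgn" latent models are those documented at http://www.r-inla.org.

```r
## Sketch of the HadCET model specification in R-INLA.
## y: numeric vector of monthly temperatures (assumed loaded beforehand);
## PC priors for the precisions and for H are omitted here for brevity.
library(INLA)

n <- length(y)
d <- data.frame(y = y, t = 1:n, t.seas = 1:n, t.fgn = 1:n)

formula <- y ~ 1 + t +
  f(t.seas, model = "seasonal", season.length = 12, scale.model = TRUE) +
  f(t.fgn, model = "fgn")          # approximate fGn latent model

r <- inla(formula, data = d, family = "gaussian")
summary(r)
```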

Figure 9: The marginal posterior of the Hurst exponent $H$, from the analysis of the HadCET data.

Analysis of the given model using an exact fGn term is infeasible in terms of computational cost and memory usage; a MacBook Pro with 16GB of RAM crashes due to memory shortage when analysing exact fGn processes of this length. Using the approximate fGn term with $m = 4$, the full Bayesian analysis takes about 14 seconds. The inference gives a significantly positive linear trend, corresponding to a clear overall increase in temperature over the observation period. The marginal posterior for $H$ is illustrated in Figure 9, with both the posterior mean and the credible interval lying well inside the long memory range.

6 Concluding remarks

In this paper, we obtain a remarkably accurate approximation of fGn using a weighted sum of only four AR(1) components. Our approximate fGn model has a small loss of accuracy for the whole long memory range $1/2 < H < 1$. The key idea to obtain this is to ensure that the approximate model captures the most essential part of the autocorrelation structure of the exact fGn model. This is achieved by appropriate weighting, matching the autocorrelation structure up to a specified maximum lag.

By construction, the autocorrelation function of the approximate model has an exponential decay for lags larger than the specified maximum lag. This implies that the approximate model does not satisfy formal definitions of long memory processes. However, this trade-off is needed to make analysis of realistically complex models computationally feasible. The great benefit of the resulting approximation is that it has a GMRF structure. This is crucial, especially as computations can be performed equally efficiently in unconditional and conditional scenarios.

An approximate model can never reflect the properties of the exact model perfectly, but neither does a theoretical model in explaining an observed data set. In theory, the fGn model corresponds to an aggregation of an infinite number of AR(1) components, which makes the model difficult to interpret in practice. The given decomposition into just a few AR(1) terms might provide a more realistic model. As an example, we have provided a decomposition of the Nile river data, which reflects fluctuations and cycles at different temporal scales. Such a decomposition could also be valuable in analysing climatic time series. For example, long memory in temperature series has been related to an aggregation of a few simple underlying geophysical processes (Fredriksen and Rypdal, 2017).

Implementation of the approximate fGn model in R-INLA provides an easy-to-use tool to analyse models with fGn structure. As demonstrated in the temperature example, we can easily combine the fGn model component with other terms in an additive linear predictor, for example covariates, other random effects or deterministic effects like climate forcing. Also, we do see a potential to incorporate fGn model components in analysis of spatial time series, for example by making use of the methodology in Lindgren et al. (2011).

Acknowledgement

The authors acknowledge the Met Office Hadley Centre for making the HadCET data freely available at http://www.metoffice.gov.uk/hadobs. We also want to thank Finn Lindgren, Hege-Beate Fredriksen and Martin Rypdal for helpful discussions.

References

  • Baillie (1996) Baillie, R. T. (1996). Long memory processes and fractional integration in econometrics. Journal of Econometrics, 73:5–59.
  • Benhmehdi et al. (2011) Benhmehdi, S., Makarava, N., Menhamidouche, N., and Holschneider, M. (2011). Bayesian estimation of the self-similarity exponent of the Nile River fluctuation. Nonlinear Processes in Geophysics, 18:441–446.
  • Beran (1994) Beran, J. (1994). Statistics for Long-Memory Processes. Chapman & Hall/CRC, New York, 1st edition.
  • Beran et al. (2013) Beran, J., Feng, Y., Ghosh, S., and Kulik, R. (2013). Long-Memory Processes - Probabilistic Properties and Statistical Methods. Springer, Heidelberg.
  • Beran et al. (2010) Beran, J., Schützner, M., and Ghosh, S. (2010). From short to long memory: Aggregation and estimation. Computational Statistics and Data Analysis, 54:2432–2442.
  • Cont (2005) Cont, R. (2005). Long range dependence in financial markets. In Fractals in Engineering, pages 159–180. Springer, London.
  • Doukhan et al. (2003) Doukhan, P., Oppenheim, G., and Taqqu, M. (2003). Theory and Applications of Long-Range Dependence. Birkhauser Boston, c/o Springer-Verlag, New York Inc.
  • Durbin (1960) Durbin, J. (1960). The fitting of time-series models. Review of the International Statistical Institute, 28:233–244.
  • Eltahir (1996) Eltahir, E. A. B. (1996). El Niño and the natural variability in the flow of the Nile River. Water Resources Research, 32:131–137.
  • Franzke (2012) Franzke, C. (2012). Nonlinear trends, long-range dependence, and climate noise properties of surface temperature. Journal of Climate, 25:4172–4182.
  • Fredriksen and Rypdal (2017) Fredriksen, H.-B. and Rypdal, M. (2017). Long-range persistence in global surface temperatures explained by linear multibox energy balance models. Journal of Climate, doi.org/10.1175/JCLI-D-16-0877.1.
  • Golub and Loan (1996) Golub, G. and Loan, C. V. (1996). Matrix Computations. Johns Hopkins University Press, Baltimore, 3rd edition.
  • Granger (1980) Granger, C. W. J. (1980). Long memory relationships and the aggregation of dynamic models. Journal of Econometrics, 14:227–238.
  • Haldrup and Valdés (2017) Haldrup, N. and Valdés, J. E. V. (2017). Long memory, fractional integration and cross-sectional aggregation. Journal of Econometrics, 199:1–11.
  • Hosking (1984) Hosking, J. R. M. (1984). Modeling persistence in hydrological time series using fractional differencing. Water Resources Research, 20:1898–1908.
  • Hurst (1951) Hurst, H. E. (1951). Long-term storage capacities of reservoirs. Transactions of the American Society for Civil Engineers, 116:770–799.
  • Koutsoyiannis (2002) Koutsoyiannis, D. (2002). The Hurst phenomenon and fractional Gaussian noise made easy. Hydrological Sciences, 47:573–595.
  • Levinson (1947) Levinson, N. (1947). The Wiener (root mean square) error criterion in filter design and prediction. Journal of Mathematics and Physics, 25:261–278.
  • Lindgren et al. (2011) Lindgren, F., Rue, H., and Lindström, J. (2011). An explicit link between Gaussian fields and Gaussian Markov random fields: The stochastic partial differential equation approach (with discussion). Journal of the Royal Statistical Society, Series B, 73:423–498.
  • Mandelbrot and Wallis (1969) Mandelbrot, B. B. and Wallis, J. R. (1969). Global dependence in geophysical records. Water Resources Research, 5:321–340.
  • Manley (1953) Manley, G. (1953). The mean temperature of central England, 1698 to 1952. Quarterly Journal of the Royal Meteorological Society, 79:242–261.
  • Manley (1974) Manley, G. (1974). Central England temperatures: Monthly means 1659 to 1973. Quarterly Journal of the Royal Meteorological Society, 100:389–405.
  • McLeod and Hipel (1978) McLeod, A. I. and Hipel, K. W. (1978). Preservation of the rescaled adjusted range: 1. A reassessment of the Hurst phenomenon. Water Resources Research, 14:491–508.
  • McLeod et al. (2007) McLeod, A. I., Yu, H., and Krougly, Z. L. (2007). Algorithms for linear time series analysis: With R Package. Journal of Statistical Software, 23:1–26.
  • Parker and Horton (2005) Parker, D. E. and Horton, E. B. (2005). Uncertainties in the central England temperature series since 1878 and some changes to the maximum and minimum series. International Journal of Climatology, 25:1173–1188.
  • Parker et al. (1992) Parker, D. E., Legg, T. P., and Folland, C. K. (1992). A new daily central England temperature series, 1772-1991. International Journal of Climatology, 12:317–342.
  • Rue (2001) Rue, H. (2001). Fast sampling of Gaussian Markov random fields. Journal of the Royal Statistical Society, Series B, 63:325–338.
  • Rue and Held (2005) Rue, H. and Held, L. (2005). Gaussian Markov Random Fields: Theory and Applications. Chapman & Hall/CRC, London.
  • Rue et al. (2009) Rue, H., Martino, S., and Chopin, N. (2009). Approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations (with discussion). Journal of the Royal Statistical Society, Series B, 71:319–392.
  • Rypdal and Rypdal (2014) Rypdal, M. and Rypdal, K. (2014). Long-memory effects in linear response models of Earth’s temperature and implications for future global warming. Journal of Climate, 27:5240–5258.
  • Simpson et al. (2017) Simpson, D., Rue, H., Riebler, A., Martins, T. G., and Sørbye, S. H. (2017). Penalising model component complexity: A principled, practical approach to constructing priors. Statistical Science, 32:1–28.
  • Sørbye and Rue (2014) Sørbye, S. H. and Rue, H. (2014). Scaling intrinsic Gaussian Markov random field priors in spatial modelling. Spatial Statistics, 8:39–51.
  • Sørbye and Rue (2017) Sørbye, S. H. and Rue, H. (2017). Fractional Gaussian noise: Prior specification and model comparison. Environmetrics, doi:10.1002/env.2457.
  • Toussoun (1925) Toussoun, O. (1925). Mémoire sur l'histoire du Nil. In Mémoires à l'Institut d'Égypte, volume 18, pages 366–404.
  • Trench (1964) Trench, W. F. (1964). An algorithm for the inversion of finite Toeplitz matrices. Journal of the Society for Industrial and Applied Mathematics, 12:515–522.
  • Willinger et al. (1996) Willinger, W., Paxson, V., and Taqqu, M. S. (1996). Self-similarity and heavy tails: Structural modeling of network traffic. In A Practical Guide to Heavy Tails: Statistical Techniques and Applications, pages 27–53. Birkhauser, Boston.