Environmental Time Series Interpolation Based on Spartan Random Processes
In many environmental applications, time series are either incomplete or irregularly spaced. We investigate the application of the Spartan random process to missing data prediction. We employ a novel modified method of moments (MMoM) for parameter inference. The MMoM is shown to require much less CPU time than maximum likelihood estimation, almost independently of the data size. We formulate an explicit Spartan interpolator for estimating missing data. The model validation is performed on both synthetic data and real time series of atmospheric aerosol concentrations. The prediction performance is shown to be comparable with that attained by the best linear unbiased (Kolmogorov-Wiener) predictor at reduced computational cost.
Keywords: inference, precision matrix, gappy data, atmospheric aerosol, fine particulate, PM2.5
1 Introduction
Time series have wide-ranging applications in environmental monitoring, such as air and water quality control. The series carry information about temporal autocorrelations (henceforth, correlations) in variables such as atmospheric pollutant concentrations, particulate matter, indicators of water clarity, salinity, etc. Knowledge of the correlation structure enables the prediction of time series, the estimation of prediction uncertainty, and the development of stochastic simulations that reconstruct (at least partially) the process of interest. In many applications, observations are irregularly spaced (due to difficulties and cost of data acquisition), and they include fully missing (due to measurement interruptions) or censored (lying below the equipment detection threshold) data. These gaps in the series prevent the use of standard analytical techniques that assume regular sampling. Recently, the European Union introduced a novel concept in environmental legislation, in an effort to establish common data quality objectives for air pollution (Ballesta, 2005). Hence, there is a need to harmonize data obtained by means of different measurement techniques and sampling conditions. Difficulties in quantifying uncertainty due to incomplete time coverage have led to simplistic approaches that assume a random distribution of the missing data. The associated uncertainty is then reduced to a simple function of the percentage of the missing data. However, neglecting the frequency distribution and temporal correlations of the missing data can strongly affect the estimates. Furthermore, the data may be sampled at different temporal scales. By predicting time series at unmeasured instants, it is possible to fill in the missing values or to down-scale (refine) existing measurements.
Several recent papers evaluate methods of time series prediction for environmental applications. In (Houseman, 2005), a first-order autoregressive model was applied to irregularly spaced water clarity data. Multivariate models have been applied to the time series of pollutant concentrations in the Arctic, using the multiple imputation approach (Hopke et al., 2001). Time series of aerosol particle number concentrations were modeled on the basis of traffic-related air pollution and meteorological variables (Paatero et al., 2005). In (Junninen et al., 2004), the authors evaluate and compare various univariate and multivariate methods for missing data imputation in air quality data sets. Generally, the simple univariate models that either utilize only local information or make linearity assumptions are fast, but their scope is limited. Multivariate models provide higher accuracy and reliability at increased computational cost. In the current study, we present a novel linear predictor, which is based on the use of “pseudo-energy” functionals inspired from statistical physics. We apply this Spartan predictor in conjunction with a computationally efficient parameter inference method. We show that the results obtained with the Spartan predictor are comparable with those obtained by the best linear unbiased estimator. However, the former is superior in terms of computational speed.
For Gaussian time series, the temporal structure is determined from the autocovariance (henceforth, covariance) matrix, which is estimated from the data. In the case of non-uniform sampling steps, the structure function (variogram) is typically estimated instead of the covariance for practical reasons. However, temporal correlations are also present (albeit on different physical scales) in models of statistical physics, e.g., in the Ising model and spin glass models. In these models, correlations are imposed by means of physical interactions embodied in the energy functional and thus do not need to be calculated from the data. Recently, the method of Spartan Spatial Random Fields (SSRF) (Hristopulos, 2003; Hristopulos and Elogne, 2007) was proposed as a general framework for geostatistical applications. SSRFs are parametrically flexible, do not rely on variogram estimation to determine the spatial structure, and allow incorporating physical constraints in the joint probability density function. In the present study, we define in the same spirit the Spartan Random Processes (SRP) and apply them to time series prediction.
The rest of the paper is organized as follows. Spartan random processes are introduced in Section 2. In Section 3, we discuss parameter inference using maximum likelihood estimation and the modified method of moments. In Section 4 we present the Spartan interpolator and compare it to the Kolmogorov Wiener predictor (simple kriging). In Sections 5 and 6, we compare the model inference and data prediction methods using both synthetic data as well as real time series of aerosol concentration. Finally, we summarize and present our conclusions in Section 7.
2 Spartan Random Processes
Herein we assume a Gaussian, second-order stationary (Yaglom, 1987), detrended time series, sampled at an intrinsic time scale related to the temporal resolution. The pdf can be expressed in terms of an energy functional, according to the Gibbs form familiar from statistical physics:
where the partition function is the normalization factor. In the classical geostatistical framework the energy functional corresponds to:
where the kernel of the quadratic form is the inverse of the covariance matrix (also known as the precision matrix), and the sum runs over the data points.
In analogy with Spartan spatial random fields (Hristopulos, 2003), we define the fluctuation-gradient-curvature (FGC) Spartan random process, which involves four parameters with a well-defined physical meaning: the scale coefficient, the shape coefficient, the characteristic time, and the cutoff circular frequency. A kernel function is used to implement the cutoff; below we use a boxcar kernel with a sharp spectral cutoff. If time is considered as a continuous variable, the FGC Spartan pdf is determined from the following energy functional:
where the dots denote the first- and second-order time derivatives. The covariance spectral density is then given by the following expression
where the indicator function equals one inside the spectral band and zero otherwise. The covariance function is then obtained from the inverse Fourier transform, given by the following integral:
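The displayed equations of this section did not survive the text extraction. As a tentative reconstruction based on the one-dimensional forms of the cited references (Hristopulos, 2003; Hristopulos and Elogne, 2007), with $\eta_0$ the scale coefficient, $\eta_1$ the shape coefficient, $\xi$ the characteristic time, and $\omega_c$ the cutoff frequency, the continuum FGC functional, spectral density, and covariance should read approximately:

```latex
H_{\mathrm{fgc}}[x] = \frac{1}{2\,\eta_0\,\xi}\int \mathrm{d}t\,
  \left[ x^2(t) + \eta_1\,\xi^2\,\dot{x}^2(t) + \xi^4\,\ddot{x}^2(t) \right],
\qquad
\tilde{Q}(\omega) = \frac{\eta_0\,\xi\;\mathbb{1}_{\{|\omega|\le\omega_c\}}}
  {1 + \eta_1\,(\omega\xi)^2 + (\omega\xi)^4},
\qquad
C(\tau) = \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c} \mathrm{d}\omega\,
  \tilde{Q}(\omega)\, e^{\mathrm{i}\omega\tau}.
```

These forms are consistent with the discrete functional of Eq. (6) below, but they are a reconstruction rather than a verbatim copy of the lost equations.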
For a discrete time series sampled at regular intervals, the derivatives are approximated by finite forward differences and the energy functional takes the following form:
2.1 The Precision Matrix
In Eq. (2), the values of the process are coupled even at distant times through the precision matrix. In contrast, in Eq. (6) only values at neighboring times are coupled. The energy functional is then expressed as follows:
where is the precision matrix. Based on Eq. (6), the precision matrix can be expressed in closed form as follows:
where the three terms involve, respectively, the identity matrix, the gradient precision sub-matrix, and the curvature precision sub-matrix.
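The precision matrix of Eqs. (7)-(10) can be assembled numerically from finite-difference operators. The following Python sketch illustrates this (the paper's computations were done in Matlab); the names `eta0`, `eta1`, `xi` for the scale, shape, and characteristic-time parameters and the overall `1/(eta0*xi)` scaling follow the SSRF parameterization of the cited references and are assumptions here:

```python
import numpy as np

def fgc_precision(n, eta0, eta1, xi, dt=1.0):
    """Pentadiagonal FGC precision matrix built from forward differences.

    Illustrative sketch of the structure of Eqs. (7)-(10); the overall
    1/(eta0*xi) scaling is an assumed convention.
    """
    I = np.eye(n)
    # First-difference operator D1: (n-1) x n, rows (x[k+1] - x[k]) / dt
    D1 = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / dt
    # Second-difference operator D2: (n-2) x n
    D2 = (np.eye(n - 2, n, k=2) - 2 * np.eye(n - 2, n, k=1)
          + np.eye(n - 2, n)) / dt**2
    J_grad = D1.T @ D1          # gradient precision sub-matrix
    J_curv = D2.T @ D2          # curvature precision sub-matrix
    return (I + eta1 * xi**2 * J_grad + xi**4 * J_curv) / (eta0 * xi)
```

Because `J_grad` is tridiagonal and `J_curv` pentadiagonal, the result is a sparse, symmetric, pentadiagonal matrix, in agreement with the sparsity exploited in Section 3.1.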
In the discrete representation, given by Eqs. (7)-(10), the Spartan random process is equivalent to a Markov random process, the only difference from the latter being the nonlinear dependence on the coefficients. However, in contrast with Markov processes, the Spartan random process can be generalized to non-uniform sampling patterns as shown in (Elogne et al., 2008).
3 Parameter Inference
Let there be a set of sampling times and a vector of sample measurements, and let the likelihood of the parameter vector given the data be defined accordingly. Let us also define the reduced parameter set and the scaled precision matrix, as well as the sample estimate of the energy functional and the scaled energy functional derived from it.
For a complete series (i.e., with no missing data), the sample energy is given from equation (7) by replacing the process values with the data.
In the case of series with missing data, it is preferable to use the definition
where the sample estimates of the respective constraint quantities are evaluated over all the time instants present in the series. The gaps do not influence the estimation of the mean-square term, which is performed over all the sample points. The gradient average involves all compact clusters of data that include at least two nearest neighbors; similarly, the curvature average involves all compact clusters of data that include at least three nearest neighbors.
3.1 Maximum Likelihood Estimation
The maximum likelihood estimates (MLE) are obtained by minimizing numerically the negative log-likelihood (NLL), e.g. (Stein, 1999, pp. 169-175), which requires the evaluation and inversion of the covariance matrix. The numerical operations are computationally intensive, especially for large sample sizes. As we show below, the Spartan random process has a significant advantage in terms of computational speed. The NLL becomes:
The estimate of the scale coefficient follows from requiring the derivative of the NLL with respect to it to vanish, leading to
and by substituting this estimate in equation (12), the NLL is given in terms of the scaled variables as follows:
The constant term is independent of the parameters and can be dropped from the minimization. The NLL is minimized using a numerical optimization method. In the case of the Spartan random process, computational efficiency is gained from the fact that the energy is estimated without the inversion of a full covariance matrix. Moreover, based on equations (9) and (10), the scaled precision matrix is a pentadiagonal symmetric matrix, so that fast and accurate approximations can be used for the evaluation of its determinant (Reusken, 2002).
3.2 Modified Method of Moments
The modified method of moments (MMoM) is based on matching stochastic constraints with their sample counterparts. The constraints capture short-range correlations and are motivated by the terms in the energy functional (6). The stochastic constraints are expressed as follows:
The stochastic constraints are functions of the model parameters, obtained from spectral integrals (Hristopulos, 2003). Herein we focus on time series with a well-defined temporal resolution and correlation times that exceed this resolution. Then, we can assume an infinite band limit and suppress the corresponding subscript. The respective Spartan covariance function has been evaluated in Hristopulos and Elogne (2007), and it is given by the following:
where the auxiliary coefficients are functions of the shape parameter. Hence, the stochastic constraints given by Eqs. (15)-(17) can be expressed analytically in terms of the covariance function. The optimal values of the model parameters are estimated by minimizing the following objective functional (Hristopulos, 2003):
The application of this distance metric (DM) is based on the following assumptions. First, the sample averages are accurate and precise estimators of the stochastic expectations of the underlying random process (expectations taken with respect to the unknown probability density). This assumes that ergodic conditions are satisfied, e.g. (Adler, 1981). Second, the stochastic expectations of the Gibbs random process should approximate the expectations of the underlying process.
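The constraint-matching idea can be sketched as follows. This is an illustrative Python version (the paper used Matlab); the squared-relative-difference form of the DM and the step-scaling of the differences are assumptions in the spirit of Hristopulos (2003), not the paper's verbatim definitions:

```python
import numpy as np

def sample_constraints(x, dt=1.0):
    """Sample estimates of the three FGC constraints: mean square,
    mean squared first difference, mean squared second difference.
    Scaling by the sampling step dt is an assumed convention."""
    s0 = np.mean(x**2)
    s1 = np.mean(np.diff(x)**2) / dt**2
    s2 = np.mean(np.diff(x, n=2)**2) / dt**4
    return s0, s1, s2

def model_constraints(cov, dt=1.0):
    """Stochastic constraints expressed through the covariance C(tau):
    E[x^2] = C(0), E[(dx)^2] = 2(C(0) - C(dt)),
    E[(d^2 x)^2] = 6C(0) - 8C(dt) + 2C(2 dt)."""
    c0, c1, c2 = cov(0.0), cov(dt), cov(2 * dt)
    return c0, 2 * (c0 - c1) / dt**2, (6 * c0 - 8 * c1 + 2 * c2) / dt**4

def distance_metric(x, cov, dt=1.0):
    """Objective matching sample to stochastic constraints (assumed
    squared-relative-difference form)."""
    return sum((1.0 - s / e)**2
               for s, e in zip(sample_constraints(x, dt),
                               model_constraints(cov, dt)))
```

For series with gaps, the difference-based averages would be restricted to the compact data clusters described above; the sketch assumes a complete series.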
4 Prediction of Missing Data (Temporal Interpolation)
Let us assume that there are points where data are missing in the sampling set. The missing points are included in the prediction set, which is assumed to be disjoint from the sampling set. We denote the vector of estimates (temporal predictions) accordingly. Finally, we consider the joint vector of measurements and prediction points.
Using the Kolmogorov-Wiener theory (Kitanidis, 1997; Wackernagel, 2003), the single-point prediction at a given time is expressed as a linear superposition of the data values. The coefficients of the superposition are selected so as to ensure zero bias and minimize the mean square error of the prediction, leading to the following Kolmogorov-Wiener prediction (KWP) equation:
In the above, the data covariance matrix and its inverse appear, together with the covariance vector between the estimated point and the data; the superscript denotes the matrix transpose.
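A minimal Python sketch of the KWP (simple-kriging) prediction step; the covariance function `cov` is supplied by the user, and vectorizing the prediction over several points is a convenience not stated in the text:

```python
import numpy as np

def kw_predict(t_data, x_data, t_pred, cov):
    """Kolmogorov-Wiener (simple-kriging) prediction:
    x_hat = c_p^T C^{-1} x, with C the data covariance matrix and
    c_p the covariance vector between a prediction point and the data."""
    C = cov(np.abs(t_data[:, None] - t_data[None, :]))
    w = np.linalg.solve(C, x_data)  # C^{-1} x, solved once for all points
    c_p = cov(np.abs(np.asarray(t_pred)[:, None] - t_data[None, :]))
    return c_p @ w
```

Since the superposition weights interpolate exactly, the prediction at a sampling time reproduces the datum there.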
The Spartan family of covariance functions, Eq. (18), can be used in KWP algorithms to provide new types of spatial dependence. Within the SRP framework it is also possible to define a new type of linear predictor, which allows multiple-point prediction to be performed simultaneously over all points in .
4.1 The Spartan predictor
The Spartan predictor (SP) defined below is based on the FGC model. It relies on maximizing the conditional probability density of the predictions given the data. By the definition of the conditional density, the problem reduces to maximizing the joint density. The latter involves the energy functional over the combined set of sampling and prediction points, i.e.,
Maximizing the conditional probability with respect to the prediction values leads to the following linear system of equations
The predictions do not depend on the scale coefficient, since the latter is an overall scaling factor of the energy. If there are no interactions between the prediction points, the linear predictor is expressed explicitly by
where the interaction neighborhood of a prediction point is the set of sampling points that interact with it. In the case of the FGC functional, the interaction neighborhood extends up to the second-nearest neighbor.
The Spartan predictor is linear and unbiased. Unlike the KWP, it does not require computationally expensive calculations involving the covariance matrix. Specifying an arbitrary search neighborhood for each prediction point is not necessary, since the SP uses only the data in the immediate interaction neighborhood. In KWP, the estimation is performed sequentially, point by point, by solving a linear system of equations. In contrast, SP is a multi-point estimator, and it can be formulated explicitly only if the interaction neighborhoods of the estimation points are disjoint. The numerical complexity of multipoint SP estimation is lower than that of KWP, which scales with the average number of points inside the local search neighborhoods of the estimation points. In some trivial cases (Fig. 1), the equivalence between the FGC SP and KWP can be shown analytically, using the symmetry of the precision and covariance matrices of the regularly spaced data. For longer time series, the analytical comparison becomes cumbersome, and hence we resort to comparison by numerical calculations. In the general case, however, we cannot expect equivalence of the two methods, since SP only considers the data from the short-range interaction neighborhood, while KWP considers all the available data (or those from the search neighborhood, which generally differs from the SP interaction neighborhood). Nevertheless, in multipoint Spartan prediction the local information propagates through the system via the interacting prediction points, i.e., the system of coupled equations (22), influencing predictions at distant locations; hence, its performance relative to KWP is not obvious a priori.
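Maximizing the conditional density of a Gaussian model with precision matrix J reduces to a sparse linear solve for the prediction block. The following Python sketch (an illustration, not the paper's Matlab code) shows the multipoint SP step, with the precision matrix supplied by the caller:

```python
import numpy as np

def spartan_predict(x, missing, J):
    """Multipoint Spartan prediction: maximize the conditional Gaussian
    density by solving J_pp x_p = -J_ps x_s, the stationarity condition
    of the quadratic energy. J is the full N x N precision matrix over
    sampling and prediction times; `missing` holds the prediction indices."""
    missing = np.asarray(missing)
    known = np.setdiff1d(np.arange(len(x)), missing)
    J_pp = J[np.ix_(missing, missing)]   # prediction-prediction block
    J_ps = J[np.ix_(missing, known)]     # prediction-data block
    return np.linalg.solve(J_pp, -J_ps @ x[known])
```

For any valid precision matrix this coincides with the Gaussian conditional mean, i.e., with KWP based on the covariance implied by J; the practical difference discussed above arises because the FGC precision matrix is sparse while generic covariance models are not.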
5 Case Study I: Simulated Data
All the computations were performed in the Matlab environment, on a desktop computer with a Pentium 4 CPU at 3 GHz and 1 GB of RAM. We evaluate the performance of the MLE and the MMoM methods for parameter inference. We also compare the performance of the SP and KWP missing-data estimators. First, we use control time series of various sizes based on the following covariance functions: (i) Gaussian, (ii) exponential, (iii) spherical, and (iv) Whittle-Matérn. The parameters of these models represent the variance, the correlation time, and the normalized time lag; the Whittle-Matérn model also involves the smoothness parameter, the inverse length, and the modified Bessel function of the corresponding index. The samples are generated using the multivariate normal simulation method (Johnson, 1987). The covariance parameter values are fixed for each model. For each covariance model and size, 100 time series are generated in order to calculate optimization and computer time statistics.
5.1 Results: Parameter Inference
The optimization employs the Nelder-Mead simplex search algorithm (Press et al., 1992) for both the MLE and the MMoM. This algorithm is fast because it does not require the computation of a Jacobian matrix. The optimization is terminated when both the model parameters and the objective function (NLL for the MLE or DM for the MMoM) change between consecutive steps by less than a specified tolerance. The initial guesses for the Spartan parameters are fixed (for the MLE, different initial guesses are needed due to multimodality).
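The stopping rule can be reproduced with any derivative-free simplex implementation. In this sketch, scipy's Nelder-Mead stands in for Matlab's simplex search, and the objective is a toy surrogate for the DM/NLL surface, not the paper's actual functional:

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogate over (shape, characteristic time); NOT the actual DM.
# It only illustrates the Nelder-Mead termination settings: both the
# parameters (xatol) and the objective (fatol) must change by less
# than the tolerance between consecutive steps.
def objective(theta):
    eta1, xi = theta
    return (eta1 - 2.0)**2 + 10.0 * (xi - 0.5)**2

res = minimize(objective, x0=[0.5, 1.0], method='Nelder-Mead',
               options={'xatol': 1e-6, 'fatol': 1e-6})
print(res.x)  # converges to the minimum at (2.0, 0.5)
```

No Jacobian evaluations are involved, which is the property the paper exploits.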
In Table 1, we compare the MLE and MMoM parameter inference methods using time series with Gaussian correlations. The mean values of the estimated Spartan parameters are listed, as well as their standard deviations. The number of optimization iterations, the optimization CPU time, and the mean value of the cost function (NLL for MLE and DM for MMoM) at termination are also tabulated. The large standard deviations of the estimates for small sample sizes (larger for the MMoM) decrease with increasing sample size. This dependence can be attributed to the shape of the objective functions.
In Fig. 2(a), the DM function of one realization is projected onto the parameter plane. The optimal values obtained from the realizations are also marked on this plane. The DM has a single minimum in the parameter plane. The estimates tend to spread away from the permissibility boundary, producing a skewed distribution with relatively large variance. A similar plot of the NLL function, Fig. 2(b), exhibits two local minima. The optimal values shown in green are obtained using one initial guess, while those in yellow (corresponding to the global minimum) are obtained with another.
For interpolation purposes, the differences in the parameter sets obtained by means of the two methods are practically negligible. In Fig. 3, the Spartan correlation functions obtained with the two sets of parameters are compared with the true (Gaussian) correlation. Near the origin and at intermediate distances, which are crucial for the interpolation, the correlation functions almost coincide. The negative value of the shape coefficient causes a negative hole in the Spartan correlation at larger distances. This hole is only observed in the Gaussian case, due to the short range of the constraints.
Considering the optimization statistics, the small value of the DM function indicates an excellent match between the sample and stochastic constraints. The MMoM needs more iterations than the MLE to converge. However, the MMoM CPU time is much lower and almost insensitive to the domain size, while the MLE CPU time grows rapidly with it. Since the number of iterations is insensitive to the domain size in both MLE and MMoM, the CPU time per iteration shows similar behavior with increasing size as the total CPU time. For the largest sizes considered, the MMoM is many times faster than the MLE. In the case of the Gaussian model, reasonable CPU times are attained for the domain sizes considered. However, the optimization slows down considerably for the other covariance models. This is due to the presence of a long flat valley in the DM and the NLL functions, which extends to large characteristic times. Convergence then requires a high number of iterations, which, for larger sample sizes, is easily manageable by the MMoM but computationally too expensive in the case of the MLE. Hence, in the following we will use the MMoM for parameter inference.
5.2 Results: Interpolation
In the following, we investigate the problem of interpolation (missing-data prediction). Let us define the degree of thinning as the percentage of the missing data. We select one time series per covariance model and generate different partitions into a validation set, containing randomly selected points, and a training set containing the remaining points. For comparison purposes, in Fig. 4 we show the parameter estimates for the full data and the distribution of the values obtained from the training sets, calculated by both the MLE and the MMoM.
We use both the KWP (with a search neighborhood extending over the entire series) and the Spartan predictor. The covariance function is calculated using the asymptotic formula given by Eq. (18). For the analysis of prediction performance, the validation points are segregated into different categories based on the number of nearest and next-nearest neighbors that belong to the training set. For example, one category includes those validation points whose nearest and second-nearest neighbors are all in the training set.
The following statistics are calculated for each category separately and over all the validation points: mean absolute error (MAE), mean relative error (MRE), mean absolute relative error (MARE), and root mean square error (RMSE). The calculations are performed for the four covariance models, and results for the Gaussian and the exponential cases are shown in Tables 2 and 3. Overall, validation points with more data in their interaction neighborhood display smaller errors than those with fewer data. In the Gaussian and Whittle-Matérn cases the KWP performs slightly better than the SP, while in the exponential and spherical cases there is no significant difference between the two methods.
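The four validation statistics follow their standard definitions; a minimal Python sketch (the grouping into neighbor categories is omitted):

```python
import numpy as np

def error_stats(x_true, x_pred):
    """Validation statistics: MAE, MRE, MARE, RMSE.
    The relative errors assume the true values are nonzero."""
    err = x_pred - x_true
    rel = err / x_true
    return {'MAE': np.mean(np.abs(err)),
            'MRE': np.mean(rel),
            'MARE': np.mean(np.abs(rel)),
            'RMSE': np.sqrt(np.mean(err**2))}
```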
6 Case Study II: Aerosol concentration data
Next, we consider an application of the Spartan predictor to a time series consisting of concentration measurements of atmospheric aerosol PM2.5 particles (i.e., particles of aerodynamic diameter smaller than 2.5 μm). These measurements were sampled on the grounds of the Technical University of Crete using the DustTrak aerosol monitor, over a period in June 2006. The original time series consists of mean concentration values measured at fixed intervals. The correlations in the observed series span many lags. In order to make the interpolation task more challenging, we reduced the length of the series by averaging the original data over non-overlapping intervals. This resulted in the coarse-grained time series used below. The data fail the Kolmogorov-Smirnov normality test. Nonetheless, in the following we model the series with a second-order stationary FGC Spartan random process. The motivation for this approach is that reasonable interpolation performance can be obtained by matching the short-range correlations.
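The coarse-graining step described above amounts to non-overlapping block averaging. An illustrative Python sketch (the block length `m` and the handling of a trailing remainder are assumptions; the text does not state them):

```python
import numpy as np

def coarse_grain(x, m):
    """Non-overlapping block averages of length m; any trailing
    remainder shorter than m is dropped (an assumed convention)."""
    n = (len(x) // m) * m
    return x[:n].reshape(-1, m).mean(axis=1)
```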
The Spartan parameters based on the complete time series are shown in Table 4. The number of iterations involved in the MMoM is more than twice as high as that for the MLE, but the MMoM CPU time is many times shorter. The final DM value indicates excellent matching between the sample and stochastic constraints. In Fig. 5, we compare the correlation functions based on the estimated Spartan parameters with the experimental correlation. Very good agreement of the experimental and estimated functions at short time lags is observed.
For the interpolation, the data are partitioned into a training set of randomly selected times and a validation set including the remaining values. The parameter inference on the training set is performed by both MMoM and MLE. There is no multimodality in either the NLL or the DM surface. However, as shown in Fig. 6, there is considerable scatter in the parameter estimates obtained from different training-set realizations.
We calculate the same interpolation error measures as for the synthetic data. The results, using MLE estimates of the Spartan parameters, are shown in Table 5. The differences between the SP and the KWP are minimal. The estimates obtained using the MLE-based Spartan parameters are slightly better than those based on the MMoM; the total mean absolute relative error of the MLE-based estimates is only marginally smaller than the MMoM-based one, for both the Spartan predictor and the KWP.
In Fig. 7, we illustrate the missing-data reconstruction based on the Spartan predictor. As the data in Table 5 already suggest, the largest deviations of the estimates from the actual data occur for “isolated” points with no training data in their interaction neighborhood, i.e., in the category (0,0). The contribution of such points to the total errors can be considerable if the missing data represent a large portion of the entire series. In Table 6, the total errors are shown versus the percentage of the missing data. The mean value of the correlation coefficient between the estimated and actual values at the missing points is also included. High correlation values are observed even for high percentages of missing data.
7 Conclusions
We present a framework for the analysis of Gaussian, stationary time series based on Spartan random processes. In this framework, the temporal dependence is determined from “pseudo-energy” functionals. The modified method of moments (MMoM) is proposed for Spartan parameter estimation. Its main advantage is low computational complexity (high speed), even for large sample sizes. Temporal interpolation is formulated by maximizing the conditional probability density function of the Spartan process, based on the available data. The method is tested with synthetic data and with a time series of atmospheric PM2.5 aerosol concentration. The Spartan interpolator is shown to perform similarly to the standard Kolmogorov-Wiener predictor at reduced computational cost.
In time series modeling, it is often necessary to treat the model parameters as time-dependent and to estimate them continuously in a “moving window” fashion. This approach accounts for the lack of stationarity, which is a common feature of meteorological data. The moving-window approach has recently been applied to the automatic mapping of rainfall data, using maximum likelihood for parameter inference and universal kriging as the interpolator (Pardo-Igúzquiza et al., 2005). Windows are typically overlapping and of varying size. In each window, parameter inference and model selection (based on the Akaike Information Criterion) are conducted, followed by interpolation. Spartan random processes have definite advantages for application in the moving-window framework. First, model selection can be bypassed thanks to the flexibility of the SRP (compared with two- or three-parameter covariance models). Second, the computational efficiency of Spartan parameter inference and interpolation would allow iterative methods of window-size adjustment that properly account for the physical conditions at the local scale.
Acknowledgments
This research project has been supported by a Marie Curie Transfer of Knowledge Fellowship of the European Community’s Sixth Framework Programme under contract number MTKD-CT-2004-014135.
The atmospheric aerosol data have been kindly provided by Dr. J. Ondráček and Dr. M. Lazaridis (Department of Environmental Engineering, Technical University of Crete, Chania, GR 73100).
References
- Adler (1981) Adler, R.J., 1981. The Geometry of Random Fields. Wiley, New York.
- Ballesta (2005) Ballesta, P.P., 2005. The uncertainty of averaging a time series of measurements and its use in environmental legislation. Atmospheric Environment 39, 2003-2009.
- Bochner (1959) Bochner, S., 1959. Lectures on Fourier Integrals. Princeton University Press. Princeton, NJ.
- Byron and Fuller (1992) Byron, F.W. Jr., Fuller, R.W., 1992. Mathematics of Classical and Quantum Physics. Dover, New York.
- Elogne et al. (2008) Elogne, S. N., Hristopulos, D.T., and Varouchakis, E., 2008. An Application of Spartan Spatial Random Fields in Environmental Mapping: Focus on Automatic Mapping Capabilities. Stochastic Environmental Research and Risk Assessment 22(5), in press. [Online] DOI:10.1007/s00477-007-0167-5.
- Gradshteyn and Ryzhik (1980) Gradshteyn, I.S., Ryzhik, I.M., 1980. Table of Integrals, Series and Products. Academic Press, San Diego.
- Hopke et al. (2001) Hopke, P.K., Liu, C., Rubin, D.B., 2001. Multiple Imputation for Multivariate Data with Missing and Below-Threshold Measurements: Time-Series Concentrations of Pollutants in the Arctic. Biometrics 57, 22-33.
- Houseman (2005) Houseman, E.A., 2005. A robust regression model for a first-order autoregressive time series with unequal spacing: application to water monitoring. Applied Statistics 54(4), 769-780.
- Hristopulos (2003) Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications. SIAM Journal on Scientific Computing 24, 2125-2162.
- Hristopulos and Elogne (2007) Hristopulos, D.T., Elogne, S.N., 2007. Analytic properties and covariance functions for a new class of generalized Gibbs random fields. IEEE Transactions on Information Theory 53(12), 4667-4679.
- Johnson (1987) Johnson, M.E., 1987. Multivariate Statistical Simulation. John Wiley, New York.
- Junninen et al. (2004) Junninen, H., Niska, H., Tuppurainen, K., Ruuskanen J., Kolehmainen, M., 2004. Methods for imputation of missing values in air quality data sets. Atmospheric Environment 38, 2895-2907.
- Kitanidis (1997) Kitanidis, P.K., 1997. Introduction to Geostatistics: Applications in Hydrogeology. Cambridge University Press, Cambridge.
- Paatero et al. (2005) Paatero, P., Aalto, P., Picciotto, S., Bellander, T., Castano-Vinyals, G., Cattani, G., Cyrys, J., Kulmala, M., Lanki, T., Nyberg, F., et al., 2005. Estimating time series of aerosol particle number concentrations in the five HEAPSS cities on the basis of measured air pollution and meteorological variables. Atmospheric Environment 39, 2261-2273.
- Pardo-Iguzqúiza et al. (2005) Pardo-Iguzqúiza, E., Dowd, P.A., Grimes, D.I.F., 2005. An automatic moving window approach for mapping meteorological data. International Journal of Climatology 25, 665-678.
- Press et al. (1992) Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 1992. Numerical Recipes in Fortran, Vol. 1. Cambridge University Press, New York.
- Reusken (2002) Reusken, A., 2002. Approximation of the determinant of large sparse symmetric positive definite matrices. SIAM J. Matrix Anal. Appl. 23, 799-818.
- Smelyanskiy et al. (2005) Smelyanskiy, V. N., Luchinsky, D. G., Timuçin, D. A., Bandrivskyy, A., 2005. Reconstruction of stochastic nonlinear dynamical models from trajectory measurements, Phys. Rev. E, 72(2), 026202.
- Stein (1999) Stein, M. L., 1999. Interpolation of Spatial Data. Some Theory for Kriging. Springer, New York.
- Wackernagel (2003) Wackernagel, H., 2003. Multivariate Geostatistics. Springer. Berlin.
- Yaglom (1987) Yaglom, A.M., 1987. Correlation Theory of Stationary and Related Random Functions I. Springer, New York.
List of figures
Fig 1: Some trivial time series, consisting of only 3 points, for which the equivalence between SP and KWP can be shown analytically. The circles denote the known data and the crosses the prediction points.
Fig 2: Top view of (a) the distance metric and (b) the negative log-likelihood function of one realization of the time series with Gaussian covariance dependence, projected onto the parameter plane. The yellow circles represent the locations of the optimal values obtained from 100 different realizations, calculated by (a) MMoM and (b) MLE. The green circles in (b) represent solutions stuck in local minima.
Fig 3: Plots of the true correlation function with Gaussian dependence (circles), and the Spartan correlation functions using the parameter sets estimated by MMoM (diamonds) and MLE (squares).
Fig 4: Optimal estimates from the complete series of 1000 points, calculated by MLE (filled star) and MMoM (filled circle), together with the distribution of the optimal estimates based on the training set of 340 points, calculated by MLE (empty stars) and MMoM (empty circles).
Fig 5: Plots of the empirical correlation function of the aerosol concentration data (circles), and the Spartan correlation functions using the parameters estimated by MMoM (diamonds) and MLE (squares). The lag is measured in units of the sampling step.
Fig 6: Top view of (a) the distance metric and (b) the negative log-likelihood function of the complete aerosol concentration time series, projected onto the parameter plane. The yellow circles represent the locations of the optimal values obtained from the complete data, and the yellow stars those obtained from 100 different realizations of the training set of 121 points, calculated by (a) MMoM and (b) MLE (a few points are out of the scale range).
Fig 7: Aerosol concentration time series reconstruction for one training-set realization of 121 points. The data in the training set are marked by empty circles. The data in the validation set are marked by crosses. The estimates at the validation set locations are marked by filled circles. The time unit is minutes.