Abstract

Data assimilation plays a key role in large-scale atmospheric weather forecasting, where the state of the physical system is estimated from model outputs and observations, and is then used as the initial condition to produce accurate future forecasts. The Ensemble Kalman Filter (EnKF) provides a practical implementation of the statistical solution of the data assimilation problem and has gained wide popularity; this success can be attributed to its simple formulation and ease of implementation. EnKF is a Monte-Carlo algorithm that solves the data assimilation problem by sampling the probability distributions involved in Bayes' theorem. Because of this, all flavors of EnKF are fundamentally prone to sampling errors when the ensemble size is small. In typical weather forecasting applications the model state space has extremely large dimension, while computational resources limit the ensemble to a small number of members. Sampling errors manifest themselves as long-range spurious correlations and have been shown to cause filter divergence. To alleviate this effect, covariance localization dampens spurious correlations between state variables located at a large distance in physical space, via an empirical distance-dependent function. The quality of the resulting analysis and forecast is greatly influenced by the choice of the localization function parameters, e.g., the radius of influence. The localization radius is generally tuned empirically to yield desirable results, and optimal tuning of the localization function parameters remains an open problem. This work proposes two adaptive algorithms for covariance localization in the EnKF framework, both based on a machine learning approach. The first algorithm adapts the localization radius in time, while the second algorithm tunes the localization radius in both time and space. Numerical experiments carried out with the Lorenz-96 model and a quasi-geostrophic model reveal the potential of the proposed machine learning approaches.

Computer Science Technical Report CSTR-1

Azam Moosavi, Ahmed Attia, Adrian Sandu

A Machine Learning Approach to Adaptive Covariance Localization

Computational Science Laboratory

Computer Science Department

Virginia Polytechnic Institute and State University

Blacksburg, VA 24060

Phone: (540)-231-2193

Fax: (540)-231-6075

Email: azmosavi@cs.vt.edu

Web: http://csl.cs.vt.edu

Innovative Computational Solutions
Keywords: Data assimilation, Covariance localization, Machine learning, EnKF

1 Introduction

Predicting the behavior of complex dynamical systems by computer simulation is crucial in numerous fields such as oceanography, glaciology, seismology, nuclear fusion, medicine, and atmospheric sciences, including weather forecasting and meteorology. Data assimilation (DA) is the set of methodologies that combine multiple sources of information about a physical system, with the goal of producing an accurate description of the state of that system. These sources of information include computer model outputs, observable measurements, and probabilistic representations of errors or noise. DA generates a best representation of the state of the system, called the analysis, together with the associated uncertainty. In numerical weather prediction the analysis state is used to initialize subsequent computer model runs that produce weather forecasts.

DA algorithms generally fall into one of two main categories, variational and statistical approaches. The variational approach for DA solves an optimization problem to generate an analysis state that minimizes the mismatch between model-based output and collected measurements, based on the level of uncertainty associated with each [11, 13, 38]. Algorithms in the statistical approach apply Bayes’ theorem to describe the system state using a probability distribution conditioned by all available sources of information. A typical starting point for most of the algorithms in this approach is the Kalman filter (KF) [36, 37], which assumes that the underlying sources of errors are normally distributed, with known means and covariances. Applying KF in large-scale settings is not feasible due to the intrinsic requirement of dealing with huge covariance matrices.

The EnKF  [20, 25, 26, 33] follows a Monte-Carlo approach to propagate covariance information, which makes it a practical approach for large-scale settings. Various flavors of EnKF have been developed [21, 24, 27, 35, 47, 15, 3], and have been successfully used for sequential DA in oceanographic and atmospheric applications.

EnKF carries out two main computational stages in every assimilation cycle, while operating on an ensemble of the system states to represent probability distributions. The “forecast” (prediction) stage involves a forward propagation of the system state from a previous time instance to generate an ensemble of forecast states at the current time. The “analysis” stage uses the covariance of the system states to assimilate observations and to generate an analysis ensemble, i.e., a Monte-Carlo representation of the posterior distribution of the model state conditioned by measurements.

In typical atmospheric applications the model state space has extremely large dimension, and a huge ensemble would be required to accurately approximate the corresponding covariance matrices. However, computational resources limit the number of ensemble members to a small value, leading to "under-sampling" [33] and its consequences: filter divergence, inbreeding, and long-range spurious correlations [1, 5, 6, 48]. A lot of effort has been dedicated to solving the issue of under-sampling. Inbreeding and filter divergence are alleviated by some form of inflation [7]. Long-range spurious correlations are removed by covariance localization [32]. Covariance localization is implemented by multiplying the regression coefficient in the Kalman gain with a decaying distance-dependent function such as a Gaussian [6] or the Gaspari-Cohn (GC) fifth-order piecewise polynomial [30, 31].

Different localization techniques have recently been considered for different observation types, different types of state variables, and for observations and state variables that are separated in time. For horizontal localization, typically a single GC function is used, and it is tuned by finding the best value of its parameter (half the distance at which the GC function goes to zero). Tuning the localization for large atmospheric problems is very expensive. Similar challenges appear in the vertical localization [5].

Previous efforts to build adaptive algorithms for covariance localization include the work of Anderson [4] based on a hierarchical ensemble filter. This approach adapts the localization radius by minimizing the root-mean-square difference of regression coefficients obtained by a group of ensemble filters. Adaptive localization by formulating the Kalman update step as a differential equation in terms of its ensemble members was proposed in [14], and computing localization as a power of the smoothed ensemble sample correlation was discussed in [16, 17]. The approach proposed in [6] formulates localization as a function of ensemble size and of the correlation between the observation and the state variable. Correlation factors are obtained and applied as traditional localization for each pair of observation and state variable during assimilation. An Observing System Simulation Experiment (OSSE) algorithm is developed in [1, 41]. OSSE computes the localization radius of influence from a set of observation-state pairs by minimizing the root-mean-square (RMS) difference between the posterior ensemble mean and the true model state. A probabilistic approach proposed in [58] defines the optimal radius of influence as the one that minimizes the distance between the Kalman gain using the localized sampling covariance and the Kalman gain using the true covariance. Furthermore, the authors generalize this method to the case where the true covariance is unknown but can be estimated probabilistically based on the ensemble sampling covariance. The relation between the localization length for domain localization and observation localization is investigated in [39]. This study concluded that the optimal localization length is linearly dependent on an effective local observation dimension given by the sum of the observation weights. In [41] two techniques for estimating the localization function are compared. The first approach is the Global Group Filter (GGF), which minimizes the RMS difference between regression coefficients estimated using a hierarchical ensemble filter. The second approach is the Empirical Localization Function (ELF), which minimizes the RMS difference between the true values of the state variables and the posterior ensemble mean. The ELF has smaller errors than the hand-tuned filter, while the GGF has larger errors than the hand-tuned counterpart.

In this study we propose to adapt covariance localization parameters using machine learning algorithms. Two approaches are proposed and discussed. In the localization-in-time method the radius of influence is held constant in space, but it changes adaptively from one assimilation cycle to the next. In the localization-in-space-and-time method the localization radius is space-dependent, and is also adapted for each assimilation time instance. The learning process is conducted off-line based on historical records such as reanalysis data, and the trained model is subsequently used to predict the proper values of localization radii in future assimilation windows.

The paper is organized as follows. Section 2 reviews the EnKF algorithm, the under-sampling issue and typical solutions, and relevant machine learning models. Section 3 presents the new adaptive localization algorithms in detail, and discusses their computational complexity and implementation details. Section 4 discusses the setup of the numerical experiments and the two test problems, the Lorenz-96 and the quasi-geostrophic (QG) models. Numerical results with the adaptive localization methodology are reported in Section 5. Conclusions and future directions are highlighted in Section 6.

2 Background

This section reviews the mathematical formulation of EnKF, and associated challenges such as under-sampling, filter divergence, and development of long-range spurious correlations. We discuss traditional covariance localization, a practical and successful ad-hoc solution to the problem of long-range spurious correlations, that requires an empirical tuning of the localization parameter, e.g., the radius of influence. The last subsection reviews the basic elements of a machine learning algorithm, with special attention being paid to random forests.

2.1 Ensemble Kalman filters

EnKF proceeds in a prediction-correction fashion and carries out two main steps in every assimilation cycle: forecast and analysis. Assume an analysis ensemble $\{\mathbf{x}^{\rm a}_i(t_{k-1})\}_{i=1,\ldots,N_{\rm ens}}$ is available at a time instance $t_{k-1}$. In the forecast step, an ensemble of forecasts is generated by running the numerical model forward to the next time instance $t_k$ where observations are available:

$\mathbf{x}^{\rm f}_i(t_k) = \mathcal{M}_{t_{k-1} \rightarrow t_k}\left(\mathbf{x}^{\rm a}_i(t_{k-1})\right) + \boldsymbol{\eta}_i(t_k), \quad i = 1, \ldots, N_{\rm ens},$    (1a)

where $\mathcal{M}$ is a discretization of the model dynamics. To simulate the fact that the model is an imperfect representation of reality, random model error realizations $\boldsymbol{\eta}_i(t_k)$ are added. A typical assumption is that the model error is a random variable distributed according to a Gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{Q}_k)$. In this paper we follow a perfect-model approach for simplicity, i.e., we set $\boldsymbol{\eta}_i(t_k) = \mathbf{0}$.
The generated forecast ensemble provides estimates of the ensemble mean $\bar{\mathbf{x}}^{\rm f}(t_k)$ and of the flow-dependent background error covariance matrix $\mathbf{B}_k$ at time instance $t_k$:

$\bar{\mathbf{x}}^{\rm f}(t_k) = \frac{1}{N_{\rm ens}} \sum_{i=1}^{N_{\rm ens}} \mathbf{x}^{\rm f}_i(t_k),$    (1b)

$\mathbf{B}_k = \frac{1}{N_{\rm ens}-1} \sum_{i=1}^{N_{\rm ens}} \left(\mathbf{x}^{\rm f}_i(t_k) - \bar{\mathbf{x}}^{\rm f}(t_k)\right) \left(\mathbf{x}^{\rm f}_i(t_k) - \bar{\mathbf{x}}^{\rm f}(t_k)\right)^{T}.$    (1c)
In the analysis step each member of the forecast ensemble is analyzed separately using the Kalman filter formulas [21, 24]:

$\mathbf{x}^{\rm a}_i(t_k) = \mathbf{x}^{\rm f}_i(t_k) + \mathbf{K}_k \left( \left[\mathbf{y}_k + \boldsymbol{\zeta}_i\right] - \mathbf{H}_k\, \mathbf{x}^{\rm f}_i(t_k) \right),$    (1d)

$\mathbf{K}_k = \mathbf{B}_k \mathbf{H}_k^{T} \left( \mathbf{H}_k \mathbf{B}_k \mathbf{H}_k^{T} + \mathbf{R}_k \right)^{-1},$    (1e)

where $\mathbf{H}_k$ is the linearized observation operator, e.g., the Jacobian, at time instance $t_k$, $\mathbf{y}_k$ is the observation vector, and $\mathbf{R}_k$ is the observation error covariance matrix. The stochastic ("perturbed") version of the EnKF [21] adds different realizations $\boldsymbol{\zeta}_i$ of the observation noise to each individual observation in the assimilation procedure. The same Kalman gain matrix $\mathbf{K}_k$ is used to assimilate the observation(s) to each member of the forecast ensemble.

Deterministic (“square root”) versions of EnKF [55] avoid adding random noise to observations, and thus avoid additional sampling errors. They also avoid the explicit construction of the full covariance matrices and work by updating only a matrix of state deviations from the mean. A detailed discussion of EnKF and variants can be found in [28, 8].
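For concreteness, the following is a minimal sketch of the stochastic (perturbed-observations) EnKF analysis step (1b)-(1e) using NumPy with dense matrices. The variable names (Xf, H, R, y) are illustrative assumptions, and the code is not the implementation used in this work.

```python
# A minimal sketch of the stochastic ("perturbed observations") EnKF analysis
# step (1b)-(1e), using NumPy only. All names are illustrative assumptions.
import numpy as np

def enkf_analysis(Xf, H, R, y, rng=np.random.default_rng(0)):
    """Update a forecast ensemble Xf (n x Nens) with observation y (m,)."""
    n, Nens = Xf.shape
    xf_bar = Xf.mean(axis=1, keepdims=True)          # ensemble mean, (1b)
    A = (Xf - xf_bar) / np.sqrt(Nens - 1)             # scaled anomalies
    B = A @ A.T                                       # sample covariance, (1c)
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)      # Kalman gain, (1e)
    Xa = np.empty_like(Xf)
    for i in range(Nens):
        y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R)  # perturbed obs
        Xa[:, i] = Xf[:, i] + K @ (y_pert - H @ Xf[:, i])          # update, (1d)
    return Xa
```

Note that the sketch forms the full covariance matrix B explicitly; this is exactly the object that localization later modifies, while square-root variants avoid constructing it.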

2.2 Inbreeding, filter divergence, and spurious correlations

EnKF is subject to sampling errors due to under-sampling whenever the number of ensemble members is too small to be statistically representative of the large-dimensional model state. In practical settings, under-sampling leads to filter divergence, inbreeding, and the development of long-range spurious correlations [33].

Inbreeding and filter divergence

In inbreeding the background error is under-estimated, which causes the filter to put more emphasis on the background state and less emphasis on the observations. This means that the forecast state is not adequately influenced by the observational data, and the filter fails to adjust an incorrectly estimated forecast state. Inbreeding and filter divergence can be resolved using covariance inflation [7]; this is not further considered in this work. However, the machine learning approach proposed here for covariance localization can be extended to inflation.

Long-range spurious correlations

The small number of ensemble members may result in a poor estimation of the true correlation between state components, or between state variables and observations. In particular, spurious correlations might develop between variables that are located at large physical distances, when the true correlation between these variables is negligible. As a result, state variables are artificially affected by observations that are physically remote  [3, 32]. This generally results in degradation of the quality of the analysis, and eventually leads to filter divergence.

2.3 Covariance localization

Covariance localization seeks to filter out the long-range spurious correlations and enhance the estimate of the forecast error covariance [32, 34, 56]. Standard covariance localization is typically carried out by applying a Schur (Hadamard) product [45, 52] between a correlation matrix $\boldsymbol{\rho}$ with distance-decreasing entries and the ensemble-estimated covariance matrix, resulting in the localized Kalman gain:

$\widetilde{\mathbf{K}}_k = \left(\boldsymbol{\rho} \circ \mathbf{B}_k\right) \mathbf{H}_k^{T} \left( \mathbf{H}_k \left(\boldsymbol{\rho} \circ \mathbf{B}_k\right) \mathbf{H}_k^{T} + \mathbf{R}_k \right)^{-1}.$    (2)

Localization can be applied to $\mathbf{B}_k \mathbf{H}_k^{T}$, and optionally to $\mathbf{H}_k \mathbf{B}_k \mathbf{H}_k^{T}$, the covariance projected into the observation space [48]. Since the correlation matrix $\boldsymbol{\rho}$ is a covariance matrix, the Schur product of the correlation function and the forecast background error covariance matrix is also a covariance matrix. Covariance localization has the virtue of increasing the rank of the flow-dependent background error covariance matrix, and therefore of increasing the effective sample size.

A popular choice of the correlation function is the Gaspari-Cohn (GC) fifth-order piecewise polynomial [30, 31], which is non-zero only in a small local region and zero everywhere else [48]:

$\rho(d) = \begin{cases} -\frac{1}{4}\left(\frac{d}{c}\right)^5 + \frac{1}{2}\left(\frac{d}{c}\right)^4 + \frac{5}{8}\left(\frac{d}{c}\right)^3 - \frac{5}{3}\left(\frac{d}{c}\right)^2 + 1, & 0 \le d \le c, \\ \frac{1}{12}\left(\frac{d}{c}\right)^5 - \frac{1}{2}\left(\frac{d}{c}\right)^4 + \frac{5}{8}\left(\frac{d}{c}\right)^3 + \frac{5}{3}\left(\frac{d}{c}\right)^2 - 5\left(\frac{d}{c}\right) + 4 - \frac{2}{3}\left(\frac{c}{d}\right), & c \le d \le 2c, \\ 0, & d \ge 2c. \end{cases}$    (3)

The correlation length scale is $c = \sqrt{10/3}\,l$ [43], where $l$ is a characteristic physical distance. The correlation decreases smoothly from one at $d = 0$ to zero at a distance more than twice the correlation length. Depending on the implementation, $d$ can be either the distance between an observation and a grid point, or the distance between grid points in physical space.
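For illustration, a minimal sketch of the GC correlation function (3) and of Schur-product localization of the sample covariance, cf. (2), is shown below; the distance matrix and the length scale c are assumed inputs, and the code is not the implementation used in our experiments.

```python
# A hedged sketch of the Gaspari-Cohn (GC) function (3) and of Schur-product
# localization of a sample covariance matrix, cf. (2).
import numpy as np

def gaspari_cohn(d, c):
    """GC fifth-order piecewise polynomial; zero beyond 2c."""
    r = np.abs(d) / c
    rho = np.zeros_like(r)
    near = r <= 1.0
    far = (r > 1.0) & (r <= 2.0)
    rn, rf = r[near], r[far]
    rho[near] = 1.0 - (5.0/3.0)*rn**2 + (5.0/8.0)*rn**3 + 0.5*rn**4 - 0.25*rn**5
    rho[far] = (4.0 - 5.0*rf + (5.0/3.0)*rf**2 + (5.0/8.0)*rf**3
                - 0.5*rf**4 + (1.0/12.0)*rf**5 - (2.0/3.0)/rf)
    return rho

def localize_covariance(B, dist, c):
    """Element-wise (Schur) product of B with the GC correlation matrix."""
    return gaspari_cohn(dist, c) * B
```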

2.4 Machine learning and random forests

Machine learning has found numerous applications in data science, data mining, and data analytics. However, the immense potential of applying machine learning to help solve computational science problems remains largely untapped to date. Recently, data-driven approaches to predict and model the approximation errors of low-fidelity and surrogate models have shown promising results [46, 57]. The multivariate predictions of local reduced-order-model method (MP-LROM) [46] proposes a multivariate model to compute the error of local reduced-order surrogates. In [9] a new filter in the DA framework, the cluster hybrid Monte Carlo sampling filter (CLHMC), is developed for non-Gaussian data assimilation; it relaxes the Gaussian assumption in the original HMC filter by employing a clustering step.

One of the fundamental algorithms in ML is regression analysis. Generally speaking, multivariate regression models [29] approximate the relationship between a set of dependent (target) variables and a set of independent variables (features). We review next a popular ML approach for multivariate regression that relies on random forests for model fitting, and then apply it to adaptive covariance localization.

2.4.1 Random forests

Random forests (RFs) are ensemble-based learning methods [19, 18] built on the idea that a group of weak learners can come together to form a strong learner. The final decision or model is built by some form of averaging over the group of weak learners [18, 23]. RFs can efficiently solve various ML problems, including classification and regression, compare favorably with other similar algorithms, and can run efficiently on large datasets [19]. Amongst the most popular special-purpose tree-based algorithms are the Iterative Dichotomiser 3 (ID3) [49], its successor C4.5 [50], and conditional inference trees [54].

Specifically, RFs work by constructing an ensemble of decision trees, such that each tree builds a classification or regression model in the form of a tree structure. Instead of using the whole set of features available for the learning algorithm at once, each subtree uses a subset of features. The ensemble of trees is constructed using a variant of the bagging [18, 23] (bootstrap aggregation) technique, thus yielding a small variance of the learning algorithm [23]. Furthermore, to ensure robustness of the ensemble-based learner, each subtree is assigned a subset of features selected randomly in a way that minimizes the correlation between individual learners.

While RFs can efficiently handle large data sets with high feature dimensionality, they can suffer from over-fitting, a general problem faced by almost all machine learning algorithms. To avoid over-fitting in an RF one has to optimize the tuning parameter that governs the feature split, and the number of features assigned to each tree from the bootstrapped data [53]. Consider a dataset and a set of features $F$ to be used by the RF. For each tree in the forest, a bootstrap sample of the data is randomly selected. Instead of examining all possible feature splits, a subset of the features $f \subseteq F$ with $|f| \ll |F|$ is randomly selected [42]. Each node then splits on the best feature in the subset $f$ rather than in $F$. This approach has the advantage that the RFs can be efficiently constructed in parallel, and that the correlation between trees in the ensemble is reduced. Random sampling and bootstrapping can be efficiently applied to RFs to generate a parallel, robust, and very fast learner for high-dimensional data and features.
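As a minimal illustration, the following sketch fits a random forest regressor to (feature, radius) pairs with scikit-learn and uses it to predict a localization radius and to rank feature importances; the array sizes and the synthetic training data are illustrative assumptions, not the configuration used in our experiments.

```python
# A minimal sketch of random-forest regression for (features -> radius)
# with scikit-learn. Training data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X_train = rng.normal(size=(80, 35))        # 80 cycles x 35 features (synthetic)
y_train = rng.integers(1, 10, size=80)     # "winner" localization radii (synthetic)

rf = RandomForestRegressor(
    n_estimators=200,      # number of trees in the forest
    max_features="sqrt",   # size of the random feature subset per split
    oob_score=True,        # out-of-bag estimate of generalization error
    random_state=0,
)
rf.fit(X_train, y_train)

X_new = rng.normal(size=(1, 35))           # features of the current cycle
predicted_radius = rf.predict(X_new)[0]
importances = rf.feature_importances_      # used to rank features (cf. Figures 3 and 6)
```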

3 Machine Learning Approach for Adaptive Localization

This section develops two machine learning approaches for adaptive covariance localization. In the first approach the localization radius changes in time: the same localization radius is used at all spatial points, but the radius differs at each assimilation cycle. In the second approach the localization radius changes in both time and space, and is different for each assimilation cycle and for each state variable. In both approaches the localization radius is adjusted depending on the model behavior and the overall state of the system. Here we study which features of the solution most affect the optimal value of the localization radius, i.e., the radius for which the difference between the analysis and the true state is minimized. The random forest approach is used to construct a learning model that takes the selected features as input and outputs the localization radius. We now describe in detail the features and the objective function of the proposed learning model.

3.1 Features and decision criteria

ML algorithms learn a function that maps input variables, the features of the underlying problem, onto output target variables. The input to the learning model is a set of features which describes the underlying problem. During the learning phase the algorithm finds the proper mapping using a known data set; this function is then used to predict target outputs for new instances of the input variables. In this work the target variables are the localization radii at each assimilation cycle. We consider atmospheric models that have numerous variables and parameters, and select a feature set that captures the characteristics of the important behavioral patterns of the dynamical system at each assimilation cycle. Specifically, the idea is to focus on the set of features that best reflects the changes in the analysis state with respect to changes in the localization radii.

Selection of the feature set

We now consider the selection of important features of the model results and data sets to be fed into the ML algorithms. Relying on the Gaussianity assumption of the prior distribution, natural features to consider are the first- and second-order moments of the prior distribution of the model state at each assimilation cycle. However, the large dimensionality of practical models can make it prohibitive to include the entire ensemble average vector (the forecast state) as a feature for ML. One idea to reduce the size of the model state information is to select only model states with negligible correlations among them, e.g., states that are physically located at distances larger than the radius of influence. Another useful strategy to reduce model features is to select descriptive summaries such as the minimum and the maximum magnitude of state components in the ensemble.

The correlations between different variables of the model are descriptive of the behavior of the system at each assimilation cycle, and therefore are desirable features. Of course, it is impractical to include the entire state error covariance matrix among the features. Following the same reasoning as for state variables, we suggest including blocks of the correlation matrix for variables located nearby in physical space, i.e., for subsets of variables that are highly correlated.
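As an illustration of this feature-selection strategy, the following is a minimal sketch of assembling a reduced feature vector from the forecast ensemble: summary statistics of the ensemble mean, a sub-sampled set of state components, and a block of correlations between nearby variables. The block size and sub-sampling stride are illustrative assumptions, not the choices used in our experiments.

```python
# A hedged sketch of building a reduced feature vector from the forecast
# ensemble, in the spirit of Section 3.1.
import numpy as np

def build_features(Xf, block=5, stride=4):
    """Xf is the forecast ensemble, shape (n_state, n_ens)."""
    xf_bar = Xf.mean(axis=1)
    # summary statistics of the forecast state
    summaries = [xf_bar.min(), xf_bar.max(), xf_bar.mean(), xf_bar.std()]
    # sub-sampled state components (weakly correlated, far apart in space)
    subsampled = xf_bar[::stride]
    # correlations of a block of nearby (highly correlated) variables
    corr_block = np.corrcoef(Xf[:block, :])
    corr_feats = corr_block[np.triu_indices(block, k=1)]
    return np.concatenate([summaries, subsampled, corr_feats])
```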

Decision criteria

Under the Gaussianity assumption the quality of the DA solution is given by the quality of its first two statistical moments. Each of these aspects is discussed below.

A first important metric for the quality of ensemble-based DA algorithms is how well the analysis ensemble mean (analysis state) represents the true state of the system. To quantify the accuracy of the ensemble mean we use the root mean squared error (RMSE), defined as follows:

$\mathrm{RMSE} = \frac{1}{\sqrt{N_{\rm state}}} \left\| \mathbf{x} - \mathbf{x}^{\rm true} \right\|_{2},$    (4)

where $\mathbf{x}^{\rm true}$ is the true system state, $\mathbf{x}$ is the state estimate, $N_{\rm state}$ is the dimension of the state space, and $\|\cdot\|_{2}$ is the Euclidean norm. Since the true state is not known in practical applications, we also consider the deviation of the state from collected measurements as a useful indication of filter performance. The observation-state error is defined as follows:

$\mathrm{RMSE}^{\rm obs} = \frac{1}{\sqrt{N_{\rm obs}}} \left\| \mathcal{H}(\mathbf{x}) - \mathbf{y} \right\|_{2},$    (5)

where $\mathbf{y}$ is the observation vector, $\mathcal{H}$ is the observation operator, and $N_{\rm obs}$ is the number of observations.

Replacing $\mathbf{x}$ in (4) and (5) with the forecast state $\bar{\mathbf{x}}^{\rm f}$ or with the analysis state $\bar{\mathbf{x}}^{\rm a}$ provides the formulas for the forecast or the analysis error magnitudes, respectively. The quality of the DA results is measured by either (4), in the case of perfect (synthetic) problem settings, or by (5), in the case of real applications.

In this work we use the observation-analysis error metric (5), denoted by $\mathcal{E}_{\rm oa}$, as the first decision criterion.

A second important aspect that defines a good analysis ensemble is its spread around the true state. The spread can be visually inspected via the Talagrand diagram (rank histogram) [2, 22]. A quality analysis ensemble leads to a rank histogram that is close to a uniform distribution. Conversely, U-shaped and bell-shaped rank histograms correspond to under-dispersion and over-dispersion of the ensemble, respectively. Ensemble-based methods, especially with small ensemble sizes, are generally expected to yield U-shaped rank histograms unless they are well-designed and well-tuned. The calculation of the rank statistics in model space requires ordering the true state entries with respect to the generated ensemble members, which is not feasible in practical applications. A practical rank histogram can alternatively be constructed by ordering the entries of the observation vector with respect to the ensemble member entries projected into the observation space [2, 22].

In this work we use the uniformity of the analysis rank histogram, in observation space, as the second decision criterion.

We now discuss practical ways to quantify the level of uniformity of rank histograms. The level of uniformity of forecast rank histograms is used as a learning feature, and that of the analysis histogram as a decision criterion.

A reasonable approach is to quantify the level of uniformity by the similarity between a distribution fitted to the rank histogram and a uniform distribution. A practical measure of similarity between two probability distributions $P$ and $Q$ is the Kullback-Leibler (KL) divergence [40]:

$D_{\rm KL}(P \,\|\, Q) = \mathbb{E}_{P}\left[ \log \frac{P}{Q} \right].$    (6)

We first fit a beta distribution $\mathrm{Beta}(\alpha, \beta)$ to the rank histogram (where the histogram domain is mapped to the interval $[0, 1]$ by a linear transformation). Considering that $Q = \mathcal{U}(0, 1)$ is a uniform distribution over the interval $[0, 1]$, we use the following measure of uniformity of the rank histogram:

$\mathcal{D}_{\rm KL} = D_{\rm KL}\bigl(\mathrm{Beta}(\alpha, \beta) \,\|\, \mathcal{U}(0, 1)\bigr).$    (7)
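For illustration, a minimal sketch of this uniformity measure follows: the rank histogram is built in observation space, a beta distribution is fitted to the rescaled ranks with SciPy, and its KL divergence from the uniform distribution is computed by numerical quadrature, cf. (6)-(7). The helper names and the quadrature choice are assumptions made for this sketch, not the exact implementation used in our experiments.

```python
# A hedged sketch of the spread criterion: rank histogram in observation
# space, beta fit, and KL divergence from the uniform distribution.
import numpy as np
from scipy import stats, integrate

def kl_uniformity(Xa, H, y):
    """Xa: analysis ensemble (n x Nens); H: obs operator (m x n); y: obs (m,)."""
    HX = H @ Xa                                   # ensemble projected to obs space
    ranks = (HX < y[:, None]).sum(axis=1)         # rank of each obs among members
    Nens = Xa.shape[1]
    u = (ranks + 0.5) / (Nens + 1)                # map ranks to (0, 1)
    a, b, loc, scale = stats.beta.fit(u, floc=0.0, fscale=1.0)
    # KL divergence between Beta(a, b) and U(0, 1): integral of p(x) log p(x)
    kl, _ = integrate.quad(
        lambda x: stats.beta.pdf(x, a, b)
        * np.log(np.maximum(stats.beta.pdf(x, a, b), 1e-300)),
        0.0, 1.0)
    return kl
```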

3.2 ML-based adaptive localization algorithm

We have identified two useful, complementary decision criteria: one measures the quality of the ensemble mean, and the other measures the quality of the ensemble spread. For the practical implementation we combine them into a single criterion, as follows:

$\mathcal{C} = w_{1}\, \mathcal{E}_{\rm oa} + w_{2}\, \mathcal{D}_{\rm KL},$    (8)

where the weighting parameters $w_{1}$ and $w_{2}$ realize an appropriate scaling of the two metrics. The weights can be predefined, or can be learned from the data as part of the ML procedure.

The best set of localization radii is the one that minimizes the combined objective function (8).

Algorithm 1 summarizes the proposed adaptive localization methodology.

1: Input: pool of localization radii $\mathcal{R}$, number of training cycles $N_{\rm train}$, total number of cycles $N$
2: For each assimilation cycle $k = 1, \ldots, N$
3:   If $k \le N_{\rm train}$ (training phase)
4:     $\mathcal{C}_{\min} \leftarrow \infty$
5:     Collect the feature vector of the current cycle (Section 3.1)
6:     For each radius $r$ in the pool $\mathcal{R}$
7:       Perform the EnKF analysis with localization radius $r$
8:       Obtain the analysis ensemble
9:       Evaluate the criterion $\mathcal{C}(r)$ in (8)
10:      If $\mathcal{C}(r) < \mathcal{C}_{\min}$ Then
11:        $\mathcal{C}_{\min} \leftarrow \mathcal{C}(r)$; winner radius $r^{*} \leftarrow r$
12:      End If
13:    End For
14:    Obtain the analysis of the current cycle using the winner radius $r^{*}$
15:    Add the sample (features, $r^{*}$) to the training set
16:  End If
17:  Train the learning model with the collected samples
18:  If $k > N_{\rm train}$ (testing phase)
19:    The learning model predicts the localization radius $\hat{r}$ from the current features
20:    Perform localization with $\hat{r}$
21:    Obtain the analysis ensemble
22:  End If
23: End For
Algorithm 1: Adaptive localization algorithm

3.3 Adaptive localization in time

The first proposed learning algorithm uses the same (scalar) localization radius for all variables of the model. The value of this radius changes adaptively from one assimilation cycle to the next. Specifically, at the current cycle we perform the assimilation using all localization radii from the pool and for each case compute the cost function (8). After trying all possible radii from the pool, the radius associated with the minimum cost function is selected as winner. The analysis of the current assimilation cycle is the one computed using the winner radius.

At each assimilation cycle we collect a sample consisting of the features described in Section 3.1 as inputs, and the winner localization radius as output (target variable) of the learning model. During the training phase, at each assimilation cycle, the ML algorithm learns the best localization radius corresponding to the system state and behavior. During the test phase, the learning model uses the current system information to estimate the proper value of the localization radius without trying all possible values. Algorithm 1 summarizes the adaptive localization procedure.
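A schematic sketch of one training cycle of the adaptive-in-time procedure is shown below. The helpers run_enkf_analysis and observation_rmse are hypothetical stand-ins for the actual assimilation code (e.g., a DATeS experiment driver), while build_features and kl_uniformity refer to the earlier sketches; the code illustrates the logic of Algorithm 1, not the implementation used in our experiments.

```python
# A schematic sketch of one training cycle of Algorithm 1 (adaptive in time).
# run_enkf_analysis and observation_rmse are assumed helper functions.
import numpy as np

def training_cycle(Xf, H, R, y, radii_pool, w1, w2, samples):
    best_cost, best_radius, best_Xa = np.inf, None, None
    for r in radii_pool:
        Xa = run_enkf_analysis(Xf, H, R, y, localization_radius=r)
        cost = w1 * observation_rmse(Xa, H, y) + w2 * kl_uniformity(Xa, H, y)  # (8)
        if cost < best_cost:
            best_cost, best_radius, best_Xa = cost, r, Xa
    samples.append((build_features(Xf), best_radius))   # training sample for the RF
    return best_Xa
```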

3.4 Adaptive localization in time and space

The second proposed learning algorithm adapts the localization radii in both time and space. A different localization radius is used for each of the state variables, and these radii change at each assimilation cycle. Here the localization radius is a vector containing a scalar localization parameter for each state variable of the system. The pool of radii in this methodology contains multiple possible vectors. The pool can be large since it can include permutations of all possible individual scalar radii values. Similar to the previous learning algorithm, at each time point we perform the assimilation with one of the vectors from the pool of radii, and select the one corresponding to the minimum cost function as the winner.

At each assimilation cycle we collect a sample consisting of the model features as inputs and the winner vector of localization radii as output of the learning model. In the training phase, the model learns the relation between the system state and the localization radii, and during the test phase it estimates the proper value of the localization radius for each state variable individually. The number of target variables of the learning model can be as large as the number of state variables. This situation can be improved by imposing that the same scalar radius be used for multiple components, e.g., for entire areas of the physical domain.
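A minimal sketch of this time-and-space variant with scikit-learn is shown below: the learning target is a vector of radii, one per state variable (or per region), and a multi-output random forest maps the features to that vector. The array shapes and the synthetic data are illustrative assumptions.

```python
# A hedged sketch of multi-output random-forest regression: features of one
# cycle -> a vector of localization radii, one per state variable.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_state, n_cycles, n_feat = 40, 800, 35
X_train = rng.normal(size=(n_cycles, n_feat))            # features per training cycle (synthetic)
Y_train = rng.integers(1, 9, size=(n_cycles, n_state))    # winner radius vectors (synthetic)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, Y_train)                                  # multi-output regression

radii_vector = rf.predict(rng.normal(size=(1, n_feat)))[0]  # one radius per state variable
```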

3.5 Computational considerations

During the training phase, the proposed algorithm needs to try all possible radii from the pool and to repeat the assimilation with each candidate localization radius. This is computationally demanding, but the model can be trained off-line using historical data. In the testing phase the learning model predicts a good value of the localization radius, which is then used in the assimilation; no additional costs are incurred except for the (relatively inexpensive) prediction made by the trained model.

4 Setup of the Numerical Experiments

In order to study the performance of the proposed adaptive localization algorithm we employ two test models, namely the Lorenz-96 model [44], and the QG-1.5 model [51]. All experiments are implemented in Python using the DATeS framework [12]. The performance of the proposed methodology is compared against the deterministic implementation of EnKF (DEnKF) with parameters empirically tuned as reported in [51].

4.1 Lorenz-96 model

The Lorenz-96 model is given by [44]:

$\frac{d x_{j}}{dt} = \left(x_{j+1} - x_{j-2}\right) x_{j-1} - x_{j} + F, \quad j = 1, \ldots, K,$    (9)

with $K$ variables, periodic boundary conditions ($x_{-1} = x_{K-1}$, $x_{0} = x_{K}$, $x_{K+1} = x_{1}$), and a constant forcing term $F$. A vector of equidistant component values was integrated forward in time, and the final state was taken as the reference initial condition for the experiments. The background uncertainty is set to a small fraction of the average magnitude of the reference solution. All state vector components are observed, i.e., the observation operator is the identity. To avoid filter collapse, the analysis ensemble is inflated at the end of each assimilation cycle with a fixed inflation factor.
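For reference, a minimal sketch of the Lorenz-96 right-hand side (9) and of a forward integration with SciPy is shown below; the number of variables, the forcing value, and the initial range are illustrative assumptions rather than the exact experimental configuration.

```python
# A minimal sketch of the Lorenz-96 right-hand side (9) with periodic
# boundary conditions; K = 40 and F = 8 are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz96_rhs(t, x, F=8.0):
    """dx_j/dt = (x_{j+1} - x_{j-2}) x_{j-1} - x_j + F, indices periodic."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

x0 = np.linspace(-2.0, 2.0, 40)                 # equidistant initial components (assumed range)
sol = solve_ivp(lorenz96_rhs, (0.0, 10.0), x0, method="RK45", max_step=0.005)
x_ref = sol.y[:, -1]                            # reference initial condition for the experiments
```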

4.2 Quasi-geostrophic (QG-1.5) model

The QG-1.5 model described by Sakov and Oke [51] is a numerical approximation of the following equations:

$q_{t} = -\psi_{x} - \varepsilon\, J(\psi, q) - A\, \Delta^{3}\psi + 2\pi \sin(2\pi y), \qquad q = \Delta\psi - F\psi, \qquad J(\psi, q) \equiv \psi_{x} q_{y} - \psi_{y} q_{x},$    (10)

where $\psi$ is the surface elevation (the stream function) and $q$ is the potential vorticity. The values of the model coefficients $F$, $\varepsilon$, and $A$ in (10), as well as the boundary conditions, are taken from [51]. The domain of the model is a square (in model space units) discretized by a uniform grid (including boundaries). A standard linear operator that observes components of $\psi$ is used [10]. Synthetic observations are obtained by adding white noise of prescribed variance to measurements of the sea surface height (SSH) extracted from a model run with lower viscosity [10]. Here, the inflation factor is held fixed, and the localization function is GC (3) with an empirically tuned optimal radius.
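As an illustration of the observation setup, the following sketch builds a simple linear observation operator that selects a subset of state components (e.g., stream-function grid points); the state dimension and the number of observations are illustrative assumptions, not the configuration of [10].

```python
# A hedged sketch of a linear observation operator selecting a random subset
# of state components; sizes are illustrative assumptions.
import numpy as np

def uniform_observation_operator(n_state, n_obs, rng=np.random.default_rng(3)):
    """Return a 0/1 matrix H that selects n_obs of the n_state components."""
    obs_idx = np.sort(rng.choice(n_state, size=n_obs, replace=False))
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), obs_idx] = 1.0
    return H, obs_idx

H, obs_idx = uniform_observation_operator(n_state=10000, n_obs=300)
# synthetic observations would then be y = H @ psi_flat + observation_noise
```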

5 Numerical Results

5.1 Lorenz model with adaptive localization in time

The adaptive localization in time approach uses the same localization radius for all variables, and adapts the localization radius value at each assimilation cycle. This experiment has 100 assimilation cycles, where the first 80 are dedicated to the training phase and the last 20 to the testing phase. The pool of radii for this experiment covers all possible values of the radius for the Lorenz model.

We compare the performance of the adaptive localization algorithms against the best hand-tuned fixed localization radius, obtained by testing all possible localization radii. Figure 1 shows the logarithm of the RMSE between the analysis ensemble mean and the true (reference) state. The performance of the adaptive localization methodology is evaluated for different weights $w_{1}$, $w_{2}$. The results indicate that increasing the weight of the KL distance measure improves the performance. For the best choices of the weights the overall performance of the adaptive localization is slightly better than that of the fixed, hand-tuned radius.

(a) Training phase
(b) Testing phase
Figure 1: Data assimilation with the Lorenz-96 model (9). EnKF is applied with a Gaussian covariance localization function. EnKF results with adaptive covariance localization are shown for different choices of the weighting factors $w_{1}$, $w_{2}$. The localization is adaptive in time, and is compared to results with a fixed localization radius. The training phase consists of 80 assimilation cycles, followed by the testing phase with 20 assimilation cycles. The overall performance of the adaptive localization is better than that of the hand-tuned radius.

Figure 2 shows the variability in the tuned localization radius over time for both the training and test phases. The weights $w_{1}$ and $w_{2}$ of the adaptive localization criterion are held fixed for this experiment. The adaptive algorithm changes the radius considerably over the simulation.

Figure 2: Data assimilation with the Lorenz-96 model (9). Shown is the evolution of the localization radius in time over all 100 assimilation cycles. The weights $w_{1}$ and $w_{2}$ of the adaptive localization criterion are held fixed.

Using the RF methodology we were able to recognize and select the most important features affecting the target variable prediction. Figure 3 shows the 35 most important features of the Lorenz model which we included in our experiments.

Figure 3: The 35 most important features of the Lorenz-96 model (9)

5.2 QG model with adaptive localization in time

We use 100 assimilation cycles of the QG model, with 80 dedicated to the training phase and 20 to the test phase. The pool of radii for this experiment is a discrete set of candidate values. EnKF is used with a fixed number of ensemble members, a constant inflation factor, and the GC localization function. An empirically optimal localization radius for this configuration was found by hand-tuning; we use it as a comparison benchmark for the performance of the adaptive localization.

Figure 4 shows the logarithm of the RMSE between the analysis obtained at each assimilation cycle and the reference state. The performance of adaptive localization with different weights $w_{1}$, $w_{2}$ is evaluated against the fixed, hand-tuned localization radius. With higher weights for the KL distance measure, the performance of adaptive localization also increases. The analysis results with adaptive localization outperform those obtained with the hand-tuned radius.

(a) Training phase
(b) Testing phase
Figure 4: Data assimilation with the QG model (10). EnKF is applied with a GC covariance localization function. The (log) RMSE is shown on the vertical axis, and the time (assimilation cycles) is shown on the horizontal axis. EnKF results with adaptive covariance localization are shown for different choices of the weighting factors $w_{1}$, $w_{2}$. The localization is adaptive in time, and is compared to results with a fixed localization radius. The training phase consists of 80 assimilation cycles, followed by the testing phase with 20 assimilation cycles. The analysis results with adaptive localization outperform those obtained with the hand-tuned radius.

Figure 5 shows the variability of the localization radius in time for fixed weights $w_{1}$ and $w_{2}$. The adaptive algorithm changes the radius considerably over the course of the simulation.

Figure 5: Data assimilation with the QG model (10). Shown is the evolution of the localization radius in time over all 100 assimilation cycles. The weights $w_{1}$ and $w_{2}$ of the adaptive localization criterion are held fixed.

Figure 6 shows the 35 most important features of the QG model which we included in our experiments.

Figure 6: The 35 most important features of the QG model (10)

5.3 Lorenz model with adaptive localization in time and space

Here the localization radius changes at each assimilation cycle, and it also changes for each individual state variable of the system. The pool of radii for this experiment consists of random vectors whose length equals the number of state variables, where each component can take any value in the range of possible radii for the Lorenz model. Each component of the vectors in the pool can thus take different permutations of values in this range. The total number of all possible vectors is huge, and testing all of them in the training phase is infeasible. One way to limit the number of trials is to test randomly selected vectors of radii from the pool. For this experiment we fix the number of trials, and at each trial we randomly pick a vector of radii from the pool. The localization radius of each state variable is the corresponding component of the vector, and the cost function associated with each trial is evaluated at each assimilation cycle. The number of target variables to estimate at each assimilation cycle in the test phase equals the number of state variables, and hence we need more samples for the training phase. The number of assimilation cycles for this experiment is 1000, of which 800 are dedicated to the training phase and 200 to the testing phase. The EnKF uses a fixed number of ensemble members, a constant inflation factor, and a Gaussian localization function.

Figure 7 shows the logarithm of the RMSE between the analysis and the reference state. The performance of adaptive localization with different weights $w_{1}$, $w_{2}$ is evaluated against the best fixed localization radius. In the testing phase the results with the adaptive radii are slightly better than those with the optimal fixed radius.

(a) Training phase
(b) Testing phase
Figure 7: Data assimilation with the Lorenz-96 model (9). EnKF is applied with a Gaussian covariance localization function. EnKF results with adaptive covariance localization are shown for different choices of the weighting factors $w_{1}$, $w_{2}$. The localization is adaptive in time and space, and is compared to results with a fixed localization radius. The training phase consists of 800 assimilation cycles, followed by the testing phase with 200 assimilation cycles.

Figure 8 shows the statistical variability in localization radii for the Lorenz model over time, for fixed weights $w_{1}$ and $w_{2}$. Figure 8(a) shows the average over time of the localization radius for each state variable of the Lorenz model, and Figure 8(b) shows the corresponding standard deviation over time for each state variable. We see that the adaptive values chosen by the algorithm can vary considerably.

(a) Average of localization radii
(b) Standard deviation of localization radii
Figure 8: Statistical variability in localization radii for different state variables of the Lorenz-96 model (9). The total of 1000 assimilation cycles includes both training and testing phases. The weights $w_{1}$ and $w_{2}$ of the adaptive localization criterion are held fixed.

This variability can be further seen in Figure 9, which shows the evolution of localization radii in both time and space for the Lorenz model. The first 100 cycles of the training phase and the last 100 cycles of the testing phase are selected. The weights $w_{1}$ and $w_{2}$ of the adaptive localization criterion are held fixed.

(a) First 100 assimilation cycles
(b) Last 100 assimilation cycles
Figure 9: Evolution of localization radii in both time and space for the Lorenz-96 model (9). The weights $w_{1}$ and $w_{2}$ of the adaptive localization criterion are held fixed.

5.4 QG model with adaptive localization in time and space

The pool of localization radii for this experiment consists of random vectors whose length equals the number of state variables, where each component can take values in the range of proper radii for the QG model. One practical restriction is that the localization radii used for neighboring grid points should not be too different. We noticed that too much variability in the choice of localization radii for grid points located nearby in space may lead to physical imbalances and filter divergence. One remedy is to narrow down the set of possible radii to a limited range; here, for example, we restricted the localization radii to a small set of three nearby values. EnKF uses a fixed number of ensemble members, a constant inflation factor, and the GC localization function.

Figure 10 shows the RMSE of the analysis error at each assimilation cycle. The time and space adaptive radii results are not as good as those obtained with the fixed, hand-tuned radius. This is likely due to the very limited range of radii that the algorithm was allowed to test in each experiment.

(a) Training phase
(b) Testing phase
Figure 10: Data assimilation with the QG model (10). EnKF is applied with a GC covariance localization function. The (log) RMSE is shown on the vertical axis, and the time (assimilation cycles) is shown on the horizontal axis. EnKF results with adaptive covariance localization are shown for different choices of the weighting factors $w_{1}$, $w_{2}$. The localization is adaptive in time and space, and is compared to results with a fixed localization radius. The training phase consists of 80 assimilation cycles, and the testing phase follows with 20 assimilation cycles. The time and space adaptive radii results are not as good as those obtained with the fixed, hand-tuned radius.

Figure 11 shows the statistical variability in localization radii for the QG model, with fixed adaptive criterion weights $w_{1}$ and $w_{2}$. The variability is computed across all state vector components. The limited range of values from which the radius selection is made leads to a small variability of the radii.

(a) Average of localization radius
(b) Standard deviation of localization radius
Figure 11: Statistical variability in the localization radius for each state variable of the QG model (10). The total of 100 assimilation cycles includes both training and testing phases. The weights $w_{1}$ and $w_{2}$ of the adaptive localization criterion are held fixed.

The changes made by the adaptive algorithm are shown in Figure 12 for the first 10 cycles of the training phase and the last 10 cycles of the test phase. The weights $w_{1}$ and $w_{2}$ of the adaptive localization criterion are held fixed. We notice that the radii chosen for different state variables seem to be uncorrelated in space and time.

(a) First 10 assimilation cycles
(b) Last 10 assimilation cycles
Figure 12: Variability of the localization radius in both time and space for the QG model (10). The pool of radii is restricted as described in the text. The weights $w_{1}$ and $w_{2}$ of the adaptive localization criterion are held fixed.

5.5 Discussion of numerical results

Several points can be concluded from the experimental results. Firstly, one needs to consider different decision criterion weights for different problems; here the best weights are not the same for the two models. For the Lorenz model, a combination with a larger weight on the KL distance measure has a positive effect, while a more balanced set of weights works better for the QG model. Secondly, adaptivity leads to a considerable variability of the localization radii in both time and space. As the feature importance plots show, the values of the state variables have a significant bearing on the radius predictions; moreover, not all state variables are equally important, and some variables have a higher impact on the prediction of the localization radii. Finally, training the localization algorithms in both time and space with the current methodology is computationally expensive. Future research will focus on making the methodology truly practical for very large models.

6 Concluding Remarks and Future Work

This study proposes an adaptive covariance localization approach for the EnKF family of data assimilation methods. Two methodologies are presented and discussed, namely adaptivity in time and adaptivity in space and time. The adaptive localization approach is based on random forests, a machine learning regression technique. The learning model can be trained off-line using historical records, e.g., reanalysis data. Once it is successfully trained, the regression model is used to estimate the values of localization radii in future assimilation cycles. Numerical experiments carried out with two standard models suggest that the proposed automatic approach performs at least as well as the traditional EnKF with empirically hand-tuned localization parameters.

In order to extend the use of machine learning techniques to support data assimilation, an important question that will be addressed in future research concerns the optimal choice of features in large-scale numerical models. Specifically, one has to select sufficient aspects of the model state to carry the information needed to train a machine learning algorithm. At the same time, the size of the feature vector needs to be relatively small, even when the model state is extremely large. Next, the computational expense of the training phase is due to the fact that the analysis needs to be repeated with multiple localization radii. Future work will seek to considerably reduce the computational effort by intelligently narrowing the pool of possible radii to test, and by devising assimilation algorithms that reuse the bulk of the calculations when computing multiple analyses with multiple localization radii.

Acknowledgments

This work was supported in part by the projects AFOSR DDDAS 15RT1037 and AFOSR Computational Mathematics FA9550-17-1-0205.

References


  • [1] Jeffrey Anderson and Lili Lei. Empirical localization of observation impact in ensemble Kalman filters. Monthly Weather Review, 141(11):4140–4153, 2013.
  • [2] Jeffrey L Anderson. A method for producing and evaluating probabilistic forecasts from ensemble model integrations. Journal of Climate, 9(7):1518–1530, 1996.
  • [3] Jeffrey L Anderson. An ensemble adjustment Kalman filter for data assimilation. Monthly weather review, 129(12):2884–2903, 2001.
  • [4] Jeffrey L Anderson. An adaptive covariance inflation error correction algorithm for ensemble filters. Tellus A, 59(2):210–224, 2007.
  • [5] Jeffrey L Anderson. Exploring the need for localization in ensemble data assimilation using a hierarchical ensemble filter. Physica D: Nonlinear Phenomena, 230(1):99–111, 2007.
  • [6] Jeffrey L Anderson. Localization and sampling error correction in ensemble Kalman filter data assimilation. Monthly Weather Review, 140(7):2359–2371, 2012.
  • [7] Jeffrey L Anderson and Stephen L Anderson. A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. Monthly Weather Review, 127(12):2741–2758, 1999.
  • [8] Mark Asch, Marc Bocquet, and Maëlle Nodet. Data assimilation: methods, algorithms, and applications. SIAM, 2016.
  • [9] Ahmed Attia, Azam Moosavi, and Adrian Sandu. Cluster sampling filters for non-gaussian data assimilation. arXiv preprint arXiv:1607.03592, 2016.
  • [10] Ahmed Attia, Azam Moosavi, and Adrian Sandu. Cluster sampling filters for non-Gaussian data assimilation. arXiv preprint arXiv:1607.03592, 2016.
  • [11] Ahmed Attia, Vishwas Rao, and Adrian Sandu. A Hybrid Monte Carlo sampling smoother for four dimensional data assimilation. International Journal for Numerical Methods in Fluids, 2016. fld.4259.
  • [12] Ahmed Attia and Adrian Sandu. Dates: A highly-extensible data assimilation testing suite. arXiv preprint arXiv:1704.05594, 2017.
  • [13] Ahmed Attia, Razvan Stefanescu, and Adrian Sandu. The reduced-order Hybrid Monte Carlo sampling smoother. International Journal for Numerical Methods in Fluids, 2016. fld.4255.
  • [14] Kay Bergemann and Sebastian Reich. A mollified ensemble Kalman filter. Quarterly Journal of the Royal Meteorological Society, 136(651):1636–1643, 2010.
  • [15] Craig H Bishop, Brian J Etherton, and Sharanya J Majumdar. Adaptive sampling with the ensemble transform Kalman filter. part I: Theoretical aspects. Monthly weather review, 129(3):420–436, 2001.
  • [16] Craig H Bishop and Daniel Hodyss. Flow-adaptive moderation of spurious ensemble correlations and its use in ensemble-based data assimilation. Quarterly Journal of the Royal Meteorological Society, 133(629):2029–2044, 2007.
  • [17] Craig H Bishop and Daniel Hodyss. Ensemble covariances adaptively localized with ECO-RAP. Part 2: A strategy for the atmosphere. Tellus A, 61(1):97–111, 2009.
  • [18] Leo Breiman. Bagging predictors. Machine learning, 24(2):123–140, 1996.
  • [19] Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
  • [20] Gerrit Burgers, Peter Jan van Leeuwen, and Geir Evensen. Analysis scheme in the ensemble Kalman filter. Monthly weather review, 126(6):1719–1724, 1998.
  • [21] Gerrit Burgers, Peter J van Leeuwen, and Geir Evensen. Analysis scheme in the Ensemble Kalman Filter. Monthly Weather Review, 126:1719–1724, 1998.
  • [22] Guillem Candille and Olivier Talagrand. Evaluation of probabilistic prediction systems for a scalar variable. Quarterly Journal of the Royal Meteorological Society, 131(609):2131–2150, 2005.
  • [23] Thomas G Dietterich et al. Ensemble methods in machine learning. Multiple classifier systems, 1857:1–15, 2000.
  • [24] Geir Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research, 99(C5):10143–10162, 1994.
  • [25] Geir Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using monte carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(C5):10143–10162, 1994.
  • [26] Geir Evensen. The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean dynamics, 53(4):343–367, 2003.
  • [27] Geir Evensen. The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dynamics, 53, 2003.
  • [28] Geir Evensen. Data assimilation: the ensemble Kalman filter. Springer Science & Business Media, 2009.
  • [29] David A Freedman. Statistical models: theory and practice. cambridge university press, 2009.
  • [30] Gregory Gaspari and Stephen E Cohn. Construction of correlation functions in two and three dimensions. Quarterly Journal of the Royal Meteorological Society, 125(554):723–757, 1999.
  • [31] Gregory Gaspari and Stephen E Cohn. Construction of correlation functions in two and three dimensions. Quarterly Journal of the Royal Meteorological Society, 125:723–757, 1999.
  • [32] Thomas M Hamill, Jeffrey S Whitaker, and Chris Snyder. Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. Monthly Weather Review, 129(11):2776–2790, 2001.
  • [33] Peter L Houtekamer and Herschel L Mitchell. Data assimilation using an ensemble Kalman filter technique. Monthly Weather Review, 126(3):796–811, 1998.
  • [34] Peter L Houtekamer and Herschel L Mitchell. A sequential ensemble Kalman filter for atmospheric data assimilation. Monthly Weather Review, 129(1):123–137, 2001.
  • [35] Pieter L Houtekamer and Herschel L Mitchell. Data assimilation using an ensemble Kalman filter technique . Monthly Weather Review, 126:796–811, 1998.
  • [36] Rudolph E Kalman and Richard S Bucy. New results in linear filtering and prediction theory. Journal of basic engineering, 83(1):95–108, 1961.
  • [37] Rudolph Emil Kalman et al. A new approach to linear filtering and prediction problems. Journal of basic Engineering, 82(1):35–45, 1960.
  • [38] Eugenia Kalnay. Atmospheric modeling, data assimilation and predictability. Cambridge University Press, 2002.
  • [39] Paul Kirchgessner, Lars Nerger, and Angelika Bunse-Gerstner. On the choice of an optimal localization radius in ensemble Kalman filter methods. Monthly Weather Review, 142(6):2165–2175, 2014.
  • [40] Solomon Kullback and Richard A Leibler. On information and sufficiency. The annals of mathematical statistics, 22(1):79–86, 1951.
  • [41] Lili Lei and Jeffrey L Anderson. Comparisons of empirical localization techniques for serial ensemble Kalman filters in a simple atmospheric general circulation model. Monthly Weather Review, 142(2):739–754, 2014.
  • [42] Andy Liaw, Matthew Wiener, et al. Classification and regression by randomForest. R News, 2(3):18–22, 2002.
  • [43] Andrew C Lorenc. The potential of the ensemble Kalman filter for NWP - a comparison with 4D-Var. Quarterly Journal of the Royal Meteorological Society, 129(595):3183–3203, 2003.
  • [44] Edward N Lorenz. Predictability: A problem partly solved. In Proc. Seminar on predictability, volume 1, 1996.
  • [45] Elizabeth Million. The Hadamard product. Course Notes, 3:6, 2007.
  • [46] Azam Moosavi, Razvan Stefanescu, and Adrian Sandu. Multivariate predictions of local reduced-order-model errors and dimensions. International Journal for Numerical Methods in Engineering, pages n/a–n/a, 2017. nme.5624.
  • [47] Edward Ott, Brian R Hunt, Istvan Szunyogh, Aleksey V Zimin, Eric J Kostelich, Matteo Corazza, Eugenia Kalnay, DJ Patil, and James A Yorke. A local ensemble Kalman filter for atmospheric data assimilation. Tellus A, 56(5):415–428, 2004.
  • [48] R Petrie. Localization in the ensemble Kalman filter. MSc Atmosphere, Ocean and Climate University of Reading, 2008.
  • [49] J. Ross Quinlan. Induction of decision trees. Machine learning, 1(1):81–106, 1986.
  • [50] J Ross Quinlan. C4. 5: programs for machine learning. Elsevier, 2014.
  • [51] Pavel Sakov and Peter R Oke. A deterministic formulation of the ensemble Kalman filter: an alternative to ensemble square root filters. Tellus A, 60(2):361–371, 2008.
  • [52] Jssai Schur. Bemerkungen zur theorie der beschränkten bilinearformen mit unendlich vielen veränderlichen. Journal für die reine und Angewandte Mathematik, 140:1–28, 1911.
  • [53] Mark R Segal. Machine learning benchmarks and random forest regression. Center for Bioinformatics & Molecular Biostatistics, 2004.
  • [54] Carolin Strobl, Anne-Laure Boulesteix, Thomas Kneib, Thomas Augustin, and Achim Zeileis. Conditional variable importance for random forests. BMC bioinformatics, 9(1):307, 2008.
  • [55] M.K. Tippett, J.J. Anderson, and C.H. Bishop. Ensemble square root filters. Monthly Weather Review, 131:1485–1490, 2003.
  • [56] Jeffrey S Whitaker and Thomas M Hamill. Ensemble data assimilation without perturbed observations. Monthly Weather Review, 130(7):1913–1924, 2002.
  • [57] Jin-Long Wu, Jian-Xun Wang, Heng Xiao, and Julia Ling. Physics-informed machine learning for predictive turbulence modeling: A priori assessment of prediction confidence. arXiv preprint arXiv:1607.04563, 2016.
  • [58] Yicun Zhen and Fuqing Zhang. A probabilistic approach to adaptive covariance localization for serial ensemble square root filters. Monthly Weather Review, 142(12):4499–4518, 2014.