
# Nonlinear Continuous Data Assimilation

Adam Larios and Yuan Pei
Department of Mathematics, University of Nebraska–Lincoln, Lincoln, NE 68588-0130, USA
July 24, 2019
###### Abstract.

We introduce three new nonlinear continuous data assimilation algorithms. These models are compared with the linear continuous data assimilation algorithm introduced by Azouani, Olson, and Titi (AOT). As a proof-of-concept for these models, we computationally investigate these algorithms in the context of the 1D Kuramoto-Sivashinsky equation. We observe that the nonlinear models experience super-exponential convergence in time, and converge to machine precision significantly faster than the linear AOT algorithm in our tests.


## 1. Introduction

Recently, a promising new approach to data assimilation was pioneered by Azouani, Olson, and Titi [3, 4] (see also [9, 30, 51] for early ideas in this direction). This new approach, which we call AOT data assimilation or the linear AOT algorithm, is based on feedback control at the partial differential equation (PDE) level, described below. In the present work, we propose several nonlinear data assimilation algorithms based on the AOT algorithm that exhibit significantly faster convergence in our simulations; indeed, the convergence rate appears to be super-exponential.

Let us describe the general idea of the AOT algorithm. Consider a dynamical system in the form,

 (1.1) $\frac{d}{dt}u = F(u), \qquad u(0) = u_0.$

For example, this could represent a system of partial differential equations modeling fluid flow in the atmosphere or the ocean. A central difficulty is that, even if one were able to solve the system exactly, the initial data is largely unknown. For example, in a weather or climate simulation, the initial data may be measured at certain locations by weather stations, but the data at locations in between these stations may be unknown. Therefore, one might not have access to the complete initial data $u_0$, but only to the observational measurements, which we denote by $I_h(u)$. (Here, $I_h$ is assumed to be a linear operator that can be taken, for example, to be an interpolation operator between grid points of maximal spacing $h$, or as an orthogonal projection onto Fourier modes no larger than $\mathcal{O}(1/h)$.) Moreover, the data from measurements may be streaming in moment by moment, so in fact, one often has the information $I_h(u(t))$ for a range of times $t$. Data assimilation is an approach that eliminates the need for complete initial data and also incorporates incoming data into simulations. Classical approaches to data assimilation are typically based on the Kalman filter. See, e.g., [13, 38, 43] and the references therein for more information about the Kalman filter. In 2014, an entirely new approach to data assimilation, the AOT algorithm, was introduced in [3, 4]. This new approach overcomes some of the drawbacks of the Kalman filter approach (see, e.g.,  for further discussion). Moreover, it is implemented directly at the PDE level. The approach has been the subject of much recent study in various contexts, see, e.g., [1, 2, 5, 7, 14, 15, 16, 17, 18, 22, 26, 36, 48, 50].

The following system was proposed and studied in [3, 4]:

 (1.2) $\frac{d}{dt}v = F(v) + \mu\big(I_h(u) - I_h(v)\big), \qquad v(0) = v_0.$

This system, used in conjunction with (1.1), is the AOT algorithm for data assimilation of system (1.1). In the case where the dynamical system (1.1) is the 2D Navier-Stokes equations, it was proven in [3, 4] that, for any divergence-free initial data $v_0$, $\|u(t) - v(t)\|_{L^2} \rightarrow 0$ exponentially in time. In particular, even without knowing the initial data $u_0$, the solution $u$ can be approximately reconstructed for large times. We emphasize that, as noted in [3, 4], the initial data $v_0$ for (1.2) can be any function, even $v_0 \equiv 0$. Thus, no information about the initial data is required to reconstruct the solution asymptotically in time.

The principal aim of this article is to develop a new class of nonlinear algorithms for data assimilation. The main idea is to use a nonlinear modification of the AOT algorithm to try to drive the approximating solution toward the true solution at a faster rate. In particular, for a given, possibly nonlinear, function $N$, we consider a modification of (1.2) in the form:

 (1.3) $\frac{d}{dt}v = F(v) + \mu N\big(I_h(u) - I_h(v)\big), \qquad v(0) = v_0.$

To begin, we first focus on the following form of the nonlinearity:

 (1.4) $N(x) = N_1(x) := x|x|^{-\gamma}, \quad x \neq 0, \quad 0 < \gamma < 1,$

with $N_1(0) := 0$.

###### Remark 1.1.

Note that by formally setting $\gamma = 0$, one recovers the linear AOT algorithm (1.2). The main idea behind using such a nonlinearity is that, when $I_h(v)$ is close to $I_h(u)$, the solution is driven toward the true solution more strongly than in the linear AOT algorithm. In particular, for any $\mu > 0$, if $|x|$ is small enough, then $|x|^{1-\gamma} > \mu|x|$, so no matter how large $\mu$ is chosen in the linear AOT algorithm, the nonlinear method with $\gamma > 0$ will always penalize small errors more strongly.
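The remark above can be checked with a short computation. This is an illustrative sketch with placeholder values of $\mu$ and $\gamma$ (not taken from the paper): however large the linear gain is, the nonlinear penalty $|x|^{1-\gamma}$ exceeds $\mu|x|$ once the error $x$ is small enough, since $|x|^{1-\gamma}/|x| = |x|^{-\gamma} \to \infty$ as $x \to 0$.

```python
# Toy check: the nonlinear penalty beats the linear one for small errors.
mu = 1.0e6       # a very large linear AOT gain (placeholder value)
gamma = 0.5      # exponent in (0, 1) (placeholder value)

# Any |x| below (1/mu)^(1/gamma) satisfies |x|^(1-gamma) > mu*|x|.
x = (1.0 / mu) ** (1.0 / gamma) / 2.0
assert abs(x) ** (1.0 - gamma) > mu * abs(x)
```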

As a preliminary test of the effectiveness of this approach, in this work we demonstrate the nonlinear data assimilation algorithm (1.3) on a one-dimensional PDE; namely, the Kuramoto-Sivashinsky equation (KSE), given in dimensionless units by:

 (1.5) $u_t + uu_x + \lambda u_{xx} + u_{xxxx} = 0, \qquad u(x,0) = u_0(x),$

in a periodic domain of length $L$. Here, $\lambda > 0$ is a dimensionless parameter. For simplicity, we assume that the initial data is sufficiently smooth (made more precise below) and mean-free, i.e., $\int u_0(x)\,dx = 0$, which implies $\int u(x,t)\,dx = 0$ for all $t \geq 0$. This equation has many similarities with the 2D Navier-Stokes equations: it is globally well-posed; it has chaotic large-time behavior; and it has a finite-dimensional global attractor, making it an excellent candidate for studying large-time behavior. It governs various physical phenomena, such as the evolution of flame fronts, the flow of viscous fluids down inclined planes, and certain types of crystal growth (see, e.g., [41, 55, 56]). Much of the theory of the 1D Kuramoto-Sivashinsky equation was developed in the periodic case in [11, 12, 29, 33, 57, 58] (see also [3, 8, 10, 21, 23, 24, 27, 28, 31, 32, 34, 35, 37, 41, 44, 45, 49, 52, 55, 56]). For a discussion of other boundary conditions for (1.5), see, e.g., [41, 42, 53, 54, 55]. Discussions of numerical simulations of the KSE can be found in, e.g., [19, 20, 25, 39, 46]. Data assimilation in several different contexts for the 1D Kuramoto-Sivashinsky equation was investigated in [34, 47], which also recognized its potential as an excellent test-bed for data assimilation.

Using the nonlinear data assimilation algorithm (1.3) in the setting of the Kuramoto-Sivashinsky equation, with the nonlinearity chosen according to (1.4) for some $\gamma \in (0,1)$, we arrive at:

 (1.6) $v_t + vv_x + \lambda v_{xx} + v_{xxxx} = \mu\,\mathrm{sign}\big(I_h(u) - I_h(v)\big)\big|I_h(u) - I_h(v)\big|^{1-\gamma}, \qquad v(x,0) = v_0(x).$

We take $I_h$ to be the orthogonal projection onto the first $N$ Fourier modes, for some constant $N$. Other physically relevant choices of $I_h$, such as a nodal interpolation operator, have been considered in the case of the linear AOT algorithm (see, e.g., ).

###### Remark 1.2.

In view of the ODE example $y' = |y|^{1-\gamma}$, $y(0) = 0$, which has multiple solutions, one might wonder about the well-posedness of equation (1.6). While lack of well-posedness is a possibility, our simulations do not appear to be strongly affected by such a hindrance. In any case, we believe the greatly reduced convergence time we observe makes the equations worth studying. A similar remark can be made for an equation we examine in a later section with power larger than one, in view of the ODE example $y' = y^{1+\gamma}$, $y(0) > 0$, which develops a singularity in finite time. A study of the well-posedness of (1.3), in various settings, will be the subject of a forthcoming work.

## 2. Preliminaries

In this work, we compute norms of the error given by the difference between the data assimilation solution and the reference solution. We focus on the $L^2$ and $H^1$ norms, defined by

 $\|u\|_{L^2}^2 = \frac{1}{L}\int_{-L/2}^{L/2} |u(x)|^2\,dx, \qquad \|u\|_{H^1} = \|\nabla u\|_{L^2}.$

(Note that, by Poincaré’s inequality $\|u\|_{L^2} \leq C\|\nabla u\|_{L^2}$, which holds for mean-free functions on any bounded domain, $\|\cdot\|_{H^1}$ is indeed a norm.)
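The two norms above can be computed on a periodic grid as follows. This is our own sketch (not the authors' code): the normalized $L^2$ norm uses the rectangle rule, and the $H^1$ seminorm uses spectral differentiation.

```python
import numpy as np

def norms(u, L):
    """Normalized L^2 norm and H^1 seminorm of a periodic field u on [0, L)."""
    n = u.size
    dx = L / n
    l2 = np.sqrt((1.0 / L) * np.sum(np.abs(u) ** 2) * dx)
    k = 2.0 * np.pi / L * np.fft.fftfreq(n, d=1.0 / n)  # wavenumbers
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))   # spectral derivative
    h1 = np.sqrt((1.0 / L) * np.sum(np.abs(ux) ** 2) * dx)
    return l2, h1
```

For example, for $u = \sin x$ on $[0, 2\pi)$, both quantities equal $1/\sqrt{2}$, matching the normalization by $1/L$ in the definition above.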

We briefly mention the scaling arguments used to justify the form (1.5). For $a > 0$, $b > 0$, and $L > 0$, consider an equation in the form

 (2.1) $u_t + uu_x + au_{xx} + bu_{xxxx} = 0, \qquad x \in [-L/2, L/2].$

Choose the time scale $T = L/U$, the characteristic velocity $U = b/L^3$, and the dimensionless number $\lambda = aL^2/b$. Write $x = Lx'$, $t = Tt'$, $u = Uu'$, where the prime denotes a dimensionless variable. Then

 (2.2) $\frac{U}{T}u'_{t'} + \frac{U^2}{L}u'u'_{x'} + \frac{aU}{L^2}u'_{x'x'} + \frac{bU}{L^4}u'_{x'x'x'x'} = 0, \qquad x' \in [0,1].$

Multiply by $L/U^2$. The equation in dimensionless form then becomes

 (2.3) $u'_{t'} + u'u'_{x'} + \lambda u'_{x'x'} + u'_{x'x'x'x'} = 0, \qquad x' \in [0,1].$

Thus, $\lambda$ acts as a parameter which influences the dynamics, in the same way that the Reynolds number influences dynamics in turbulent flows.

Another approach is to set $\ell = \sqrt{b/a}$, $U = a/\ell$, and $T = \ell/U$. Then define dimensionless quantities (denoted again by primes) by $x = \ell x'$, $t = Tt'$, $u = Uu'$. The equation now becomes

 (2.4) $\frac{U}{T}u'_{t'} + \frac{U^2}{\ell}u'u'_{x'} + \frac{aU}{\ell^2}u'_{x'x'} + \frac{bU}{\ell^4}u'_{x'x'x'x'} = 0, \qquad x' \in [0, L/\ell].$

Multiplying by $\ell/U^2$ yields

 (2.5) $u'_{t'} + u'u'_{x'} + u'_{x'x'} + u'_{x'x'x'x'} = 0, \qquad x' \in [0, L/\ell].$

Thus, equation (2.4) is similar to equation (2.2), with $\ell$ in place of $L$, except that after rescaling the dynamics are influenced by the dimensionless parameter $L/\ell$. In particular, the dynamics can be thought of as influenced by the parameter $\lambda$ with the domain length fixed, or equivalently as influenced by the length $L/\ell$ of the domain with the coefficients fixed, where $\ell = \sqrt{b/a}$. In this work, for the sake of matching the initial data used in , we choose the domain to be $[0, 32\pi]$, so $L$ is fixed, and we let $\lambda$ be the parameter affecting the dynamics.
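The two nondimensionalizations above can be cross-checked numerically. The sketch below (our own, with arbitrary sample coefficients) computes $\lambda = aL^2/b$ and $\ell = \sqrt{b/a}$ and verifies the relation linking them, namely $L/\ell = \sqrt{\lambda}$.

```python
import math

def scalings(a, b, L):
    """Return (lam, L/ell) for the two scalings of u_t + u u_x + a u_xx + b u_xxxx = 0."""
    lam = a * L**2 / b          # parameter in the unit-domain scaling (2.3)
    ell = math.sqrt(b / a)      # length scale in the unit-coefficient scaling (2.5)
    return lam, L / ell

# Sample coefficients (arbitrary): the two descriptions agree via L/ell = sqrt(lam).
lam, length = scalings(2.0, 0.5, 3.0)
assert abs(length - math.sqrt(lam)) < 1e-12
```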

## 3. Computational Results

In this section, we demonstrate some computational results for the nonlinear data assimilation algorithm given by (1.6).

### 3.1. Numerical Methods

It was observed in  that no higher-order multi-stage Runge-Kutta-type method exists for solving (1.6), due to the need to evaluate the observations $I_h(u)$ at fractional time steps, for which the data is not available. Therefore, we use a semi-implicit spectral method with Euler time stepping. The linear terms are treated via a first-order exponential time differencing (ETD) method (see, e.g.,  for a detailed description of this method). The nonlinear term is computed explicitly in the standard way, i.e., by computing the derivatives in spectral space and the products in physical space, respecting the usual 2/3 dealiasing rule. We use spatial grid points on the interval $[0, 32\pi]$, so . We use a fixed time-step respecting the advective CFL; in fact, we choose . For simplicity, we choose , however, the results reported here are qualitatively similar for a wide range of values. For example, when , convergence times are shorter for all methods, but the error plots are qualitatively similar. In , the case is examined. However, to examine a slightly more chaotic setting, we take , which is still well-resolved with . Our results are qualitatively similar for smaller values of $\lambda$.
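One first-order ETD/Euler step of the kind described above can be sketched as follows. This is our own minimal illustration, not the authors' code: the linear part of the KSE $u_t + uu_x + \lambda u_{xx} + u_{xxxx} = 0$ is advanced by its exact exponential factor, and the nonlinear term $-(u^2/2)_x$ is treated explicitly with 2/3 dealiasing; the grid size, time step, and $\lambda$ in any usage are placeholder choices, not the paper's exact parameters.

```python
import numpy as np

def etd_setup(n, L, lam, dt):
    """Precompute wavenumbers, ETD factors, and the 2/3 dealiasing mask."""
    k = 2.0 * np.pi / L * np.fft.fftfreq(n, d=1.0 / n)   # wavenumbers
    Lsym = lam * k**2 - k**4                              # linear symbol
    Lsafe = np.where(np.abs(Lsym) < 1e-12, 1.0, Lsym)     # avoid 0/0 below
    E = np.exp(dt * Lsym)                                 # exact linear factor
    # phi(z) = (e^{z dt} - 1)/z, with the z -> 0 limit (= dt) handled explicitly
    phi = np.where(np.abs(Lsym) < 1e-12, dt, (np.exp(dt * Lsafe) - 1.0) / Lsafe)
    dealias = np.abs(k) < (2.0 / 3.0) * np.abs(k).max()   # 2/3 rule
    return k, E, phi, dealias

def etd_euler_step(uhat, k, E, phi, dealias):
    """Advance the Fourier coefficients uhat by one first-order ETD/Euler step."""
    u = np.real(np.fft.ifft(uhat))
    Nhat = -0.5j * k * np.fft.fft(u * u) * dealias        # -(u^2/2)_x in Fourier
    return E * uhat + phi * Nhat
```

The feedback term of (1.6) would be added to `Nhat` in the same explicit fashion; we omit it here to keep the sketch short.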

Here, we let $I_h = P_N$ be the projection onto the lowest $N$ Fourier modes. In this work, we set $N = 32$; so only the lowest 32 modes of $u$ are passed to the assimilation equation via $I_h(u)$. One can consider a more general interpolation operator as well, such as nodal interpolation, but we focus on projection onto low Fourier modes.
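The observation operator just described can be sketched via the FFT. This is an illustrative implementation of orthogonal projection onto the lowest $N$ Fourier modes (the cutoff convention, $|m| \leq N$, is our assumption).

```python
import numpy as np

def project_low_modes(u, N=32):
    """Orthogonal projection of a periodic field onto Fourier modes |m| <= N."""
    uhat = np.fft.fft(u)
    m = np.fft.fftfreq(u.size, d=1.0 / u.size)  # integer mode numbers
    uhat[np.abs(m) > N] = 0.0                   # discard all higher modes
    return np.real(np.fft.ifft(uhat))
```

Because the projection is diagonal in Fourier space, it is idempotent and leaves any field supported on the low modes unchanged.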

To fix ideas, in this paper we mainly use the initial data used in  to simulate (1.5); namely

 (3.1) $u_0(x) = \cos(x/16)\big(1 + \sin(x/16)\big);$

on the interval $[0, 32\pi]$. However, we also investigated several other choices of initial data. In all cases, the results were qualitatively similar to the ones reported here. We present one such test near the end of this paper.

Note that explicit treatment of the feedback term imposes a constraint on the time step, namely $\Delta t \leq 2/\mu$ (which follows from a standard stability analysis for Euler’s method). This is not a serious restriction in this work, since we choose .

All the simulations in the present work are well-resolved. In Figure 3.1, we show plots of the time-averaged spectra of all the PDEs simulated in the present work. One can see that all relevant wave modes are captured to within machine precision.

### 3.2. Simple Power Nonlinearity

We compare the error in the nonlinear data assimilation algorithm (1.3) with the error in the linear AOT algorithm. We first focus on nonlinearity given by a power according to (1.4); i.e., we consider equation (1.6) together with equation (1.5). In Figure 3.2(a), the solution to (1.5) (which we call the “reference” solution) evolves from the smooth, low-mode initial condition (3.1) to a chaotic state after about time . In Figure 3.2(b), the difference between this solution and the AOT data assimilation solution is plotted. It rapidly decays to zero in a short time.

We observe in Figure 3.3 that errors in the linear AOT algorithm (1.2) and the nonlinear algorithm (1.3) solutions both decay. The error in the nonlinear algorithm has oscillations for roughly which are not present in the error for the AOT algorithm. However, by tracking norms of the difference of the solutions, one can see in Figure 3.3 that the nonlinear algorithm reaches machine precision significantly faster than the linear AOT algorithm, for a range of values.

When $\gamma$ is too large, our simulations appear to no longer converge (not shown here). The error in the linear AOT algorithm (i.e., $\gamma = 0$) reaches machine precision at roughly time . For $\gamma > 0$, there seems to be an optimal choice in our simulations around , reaching machine precision around , a speedup factor of roughly . Moreover, the shape of the curves with $\gamma > 0$ indicates super-exponential convergence, as seen in the concave curves on the log-linear plot in Figure 3.3, while for the linear AOT algorithm, the convergence is only exponential, as seen in the linear shape on the log-linear plot. Currently, the super-exponential convergence is only an observation in simulations. An analytical investigation of the convergence rate will be the subject of a forthcoming work.

### 3.3. Hybrid Linear/Nonlinear Methods

In this subsection, we investigate a family of hybrid linear/nonlinear data assimilation algorithms. One can see from Figure 3.3 in the previous subsection that, although the nonlinear methods converge to machine precision at earlier times than the linear method, the nonlinear method suffers from larger errors than the linear method for short times. This motivates the possibility of using a hybrid linear/nonlinear method. For example, one could look for an optimal time to switch between the models, say, perhaps around time , according to Figure 3.3, but this seems highly situationally dependent and difficult to implement in general. Instead, the approach we consider here is to let $N(x)$ be given by (1.4) for $0 < |x| < 1$, but let it be linear for $|x| \geq 1$. The idea is that, when the error is small, deviations are strongly penalized, as in Remark 1.1. However, when the error is large, the linear AOT algorithm should give the greater penalization (i.e., when $|x| \geq 1$). Therefore, we consider algorithm (1.3) with the following choice of nonlinearity, for some choice of $\gamma$ (we take  in all following simulations).

 (3.2) $N(x) = N_2(x) := \begin{cases} x, & |x| \geq 1, \\ x|x|^{-\gamma}, & 0 < |x| < 1, \\ 0, & x = 0. \end{cases}$

In Figure 3.4, we compare the linear algorithm (1.2) with the nonlinear algorithm (1.3) with pure-power nonlinearity $N_1$, given by (1.4), and also with hybrid nonlinearity $N_2$, given by (3.2). The convergence to machine precision happens approximately at (for AOT), (for $N_1$), and (for $N_2$), respectively. In addition, one can see that the hybrid algorithm remains close to the linear AOT algorithm for short times. Moreover, after a short time, the hybrid algorithm undergoes super-exponential convergence, converging faster than every algorithm analyzed so far. The benefits of this splitting of the nonlinearity between the small-error and large-error regimes seem clear. Moreover, this approach can be exploited further, which is the topic of the next subsection.

### 3.4. Concave-Convex Nonlinearity

Inspired by the success of the hybrid method, in this subsection, we further exploit the effect of the feedback control term by accentuating the nonlinearity for $|x| \geq 1$. We consider the following nonlinearity in conjunction with (1.3) for the Kuramoto-Sivashinsky equation.

 (3.3) $N(x) = N_3(x) := \begin{cases} x|x|^{\gamma}, & |x| \geq 1, \\ x|x|^{-\gamma}, & 0 < |x| < 1, \\ 0, & x = 0. \end{cases}$

Note that this choice of $N_3$ is concave for $0 < |x| < 1$, and convex for $|x| \geq 1$. The convexity for $|x| \geq 1$ serves to more strongly penalize large deviations from the reference solution. In Figure 3.5, we see that at every positive time this method has significantly smaller error than the linear AOT method and the methods involving $N_1$ and $N_2$. Convergence to machine precision happens at roughly , a speedup factor of roughly compared to the linear AOT algorithm.
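The three nonlinearities compared in this paper can be sketched in vectorized form as follows (our own illustration; the default $\gamma = 0.5$ is a placeholder, any fixed value in $(0,1)$ works).

```python
import numpy as np

def N1(x, gamma=0.5):
    """Pure power nonlinearity (1.4): x|x|^(-gamma), with N1(0) = 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)                  # handles x = 0 by convention
    nz = x != 0
    out[nz] = x[nz] * np.abs(x[nz]) ** (-gamma)
    return out

def N2(x, gamma=0.5):
    """Hybrid nonlinearity (3.2): linear for |x| >= 1, pure power otherwise."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) >= 1, x, N1(x, gamma))

def N3(x, gamma=0.5):
    """Concave-convex nonlinearity (3.3): x|x|^gamma for |x| >= 1, pure power otherwise."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) >= 1, x * np.abs(x) ** gamma, N1(x, gamma))
```

All three are odd functions that agree on $0 < |x| < 1$; they differ only in how large errors $|x| \geq 1$ are penalized.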

### 3.5. Comparison of All Methods

Let us also consider the error at every Fourier mode. In Figure 3.6, one can see these errors at various times. We examine a time before the transition to fully-developed chaos (), at a time during the transition (), a time after the solution has settled down to an approximately statistically steady state (), and a later time (). At each mode, and at each positive time, the error in the solution with nonlinearity (3.3) is the smallest.

Next, we point out that our results hold qualitatively with different choices of initial data for the reference equation (1.5). Therefore, we wait until the solution to (1.5) with initial data (3.1) has reached an approximately statistically steady state (this happens roughly at ). Then, we use this data to re-initialize the solution to (1.5) (in fact, we use the solution at to be well within the time interval of fully developed chaos). We still initialize (1.6) with . Norms of the errors are shown in Figure 3.7. We observe that, although convergence time is increased for all methods, the qualitative observations discussed above still hold.

## 4. Conclusions

Our results indicate that advantages might be gained by looking at nonlinear data assimilation. We used the Kuramoto-Sivashinsky equation as a proof-of-concept for this method; however, in a forthcoming work, we will extend the method to more challenging equations, including the Navier-Stokes equations of fluid flow. Mathematical analysis of these methods will also be the subject of future work.

We note that other choices of nonlinearity may very well be useful to consider. Indeed, one may imagine a functional given by

 (4.1) $\mathcal{F}(N) = t^*$

where $t^*$ is the time of convergence to within a certain error tolerance, such as to within machine precision. (One would need to show that admissible functions $N$ are in some sense independent of the parameters and initial data, say, after some normalization.) One could also consider a functional whose value at $N$ is given by a particular norm of the error. By minimizing such functionals, one might discover even better data-assimilation methods.

## References

•  D. A. Albanez, H. J. Nussenzveig Lopes, and E. S. Titi. Continuous data assimilation for the three-dimensional Navier–Stokes-α model. Asymptotic Anal., 97(1-2):139–164, 2016.
•  M. U. Altaf, E. S. Titi, T. Gebrael, O. M. Knio, L. Zhao, M. F. McCabe, and I. Hoteit. Downscaling the 2D Bénard convection equations using continuous data assimilation. Computational Geosciences, pages 1–18, 2017.
•  A. Azouani, E. Olson, and E. S. Titi. Continuous data assimilation using general interpolant observables. J. Nonlinear Sci., 24(2):277–304, 2014.
•  A. Azouani and E. S. Titi. Feedback control of nonlinear dissipative systems by finite determining parameters—a reaction-diffusion paradigm. Evol. Equ. Control Theory, 3(4):579–594, 2014.
•  H. Bessaih, E. Olson, and E. S. Titi. Continuous data assimilation with stochastically noisy data. Nonlinearity, 28(3):729–753, 2015.
•  A. Biswas, J. Hudson, A. Larios, and Y. Pei. Continuous data assimilation for the magnetohydrodynamic equations in 2D using one component of the velocity and magnetic fields. (submitted).
•  A. Biswas and V. R. Martinez. Higher-order synchronization for a data assimilation algorithm for the 2D Navier–Stokes equations. Nonlinear Anal. Real World Appl., 35:132–157, 2017.
•  J. C. Bronski and T. N. Gambill. Uncertainty estimates and bounds for the Kuramoto-Sivashinsky equation. Nonlinearity, 19(9):2023–2039, 2006.
•  C. Cao, I. G. Kevrekidis, and E. S. Titi. Numerical criterion for the stabilization of steady states of the Navier–Stokes equations. Indiana Univ. Math. J., 50(Special Issue):37–96, 2001. Dedicated to Professors Ciprian Foias and Roger Temam (Bloomington, IN, 2000).
•  A. Cheskidov and C. Foias. On the non-homogeneous stationary Kuramoto-Sivashinsky equation. Phys. D, 154(1-2):1–14, 2001.
•  P. Collet, J.-P. Eckmann, H. Epstein, and J. Stubbe. A global attracting set for the Kuramoto-Sivashinsky equation. Comm. Math. Phys., 152(1):203–214, 1993.
•  P. Constantin, C. Foias, B. Nicolaenko, and R. Temam. Integral manifolds and inertial manifolds for dissipative partial differential equations, volume 70 of Applied Mathematical Sciences. Springer-Verlag, New York, 1989.
•  R. Daley. Atmospheric Data Analysis. Cambridge Atmospheric and Space Science Series. Cambridge University Press, 1993.
•  A. Farhat, M. S. Jolly, and E. S. Titi. Continuous data assimilation for the 2D Bénard convection through velocity measurements alone. Phys. D, 303:59–66, 2015.
•  A. Farhat, E. Lunasin, and E. S. Titi. Abridged continuous data assimilation for the 2D Navier–Stokes equations utilizing measurements of only one component of the velocity field. J. Math. Fluid Mech., 18(1):1–23, 2016.
•  A. Farhat, E. Lunasin, and E. S. Titi. Data assimilation algorithm for 3D Bénard convection in porous media employing only temperature measurements. J. Math. Anal. Appl., 438(1):492–506, 2016.
•  A. Farhat, E. Lunasin, and E. S. Titi. On the Charney conjecture of data assimilation employing temperature measurements alone: The paradigm of 3D planetary geostrophic model. arXiv:1608.04770, 2016.
•  A. Farhat, E. Lunasin, and E. S. Titi. Continuous data assimilation for a 2d Bénard convection system through horizontal velocity measurements alone. Journal of Nonlinear Science, pages 1–23, 2017.
•  C. Foias, M. S. Jolly, I. G. Kevrekidis, and E. S. Titi. Dissipativity of numerical schemes. Nonlinearity, 4(3):591–613, 1991.
•  C. Foias, M. S. Jolly, I. G. Kevrekidis, and E. S. Titi. On some dissipative fully discrete nonlinear Galerkin schemes for the Kuramoto-Sivashinsky equation. Phys. Lett. A, 186(1-2):87–96, 1994.
•  C. Foias and I. Kukavica. Determining nodes for the Kuramoto-Sivashinsky equation. J. Dynam. Differential Equations, 7(2):365–373, 1995.
•  C. Foias, C. F. Mondaini, and E. S. Titi. A discrete data assimilation scheme for the solutions of the two-dimensional Navier-Stokes equations and their statistics. SIAM J. Appl. Dyn. Syst., 15(4):2109–2142, 2016.
•  C. Foias, B. Nicolaenko, G. R. Sell, and R. Temam. Variétés inertielles pour l’équation de Kuramoto-Sivashinski. C. R. Acad. Sci. Paris Sér. I Math., 301(6):285–288, 1985.
•  C. Foias, B. Nicolaenko, G. R. Sell, and R. Temam. Inertial manifolds for the Kuramoto-Sivashinsky equation and an estimate of their lowest dimension. J. Math. Pures Appl. (9), 67(3):197–226, 1988.
•  C. Foias and E. S. Titi. Determining nodes, finite difference schemes and inertial manifolds. Nonlinearity, 4(1):135–153, 1991.
•  M. Gesho, E. Olson, and E. S. Titi. A computational study of a data assimilation algorithm for the two-dimensional Navier-Stokes equations. Commun. Comput. Phys., 19(4):1094–1110, 2016.
•  M. Goldman, M. Josien, and F. Otto. New bounds for the inhomogenous Burgers and the Kuramoto-Sivashinsky equations. Comm. Partial Differential Equations, 40(12):2237–2265, 2015.
•  A. A. Golovin, S. H. Davis, A. A. Nepomnyashchy, and M. A. Zaks. Convective Cahn-Hilliard models for kinetically controlled crystal growth. In International Conference on Differential Equations, Vol. 1, 2 (Berlin, 1999), pages 1281–1283. World Sci. Publ., River Edge, NJ, 2000.
•  J. Goodman. Stability of the Kuramoto-Sivashinsky and related systems. Comm. Pure Appl. Math., 47(3):293–306, 1994.
•  K. Hayden, E. Olson, and E. S. Titi. Discrete data assimilation in the Lorenz and 2D Navier-Stokes equations. Phys. D, 240(18):1416–1425, 2011.
•  J. M. Hyman and B. Nicolaenko. The Kuramoto-Sivashinsky equation: a bridge between PDEs and dynamical systems. Phys. D, 18(1-3):113–126, 1986. Solitons and coherent structures (Santa Barbara, Calif., 1985).
•  J. M. Hyman, B. Nicolaenko, and S. Zaleski. Order and complexity in the Kuramoto-Sivashinsky model of weakly turbulent interfaces. Phys. D, 23(1-3):265–292, 1986. Spatio-temporal coherence and chaos in physical systems (Los Alamos, N.M., 1986).
•  J. S. Il’yashenko. Global analysis of the phase portrait for the Kuramoto-Sivashinsky equation. J. Dynam. Differential Equations, 4(4):585–615, 1992.
•  M. Jardak, I. M. Navon, and M. Zupanski. Comparison of sequential data assimilation methods for the Kuramoto-Sivashinsky equation. Internat. J. Numer. Methods Fluids, 62(4):374–402, 2010.
•  M. S. Jolly, I. G. Kevrekidis, and E. S. Titi. Approximate inertial manifolds for the Kuramoto-Sivashinsky equation: analysis and computations. Phys. D, 44(1-2):38–60, 1990.
•  M. S. Jolly, V. R. Martinez, and E. S. Titi. A data assimilation algorithm for the subcritical surface quasi-geostrophic equation. Adv. Nonlinear Stud., 17(1):167–192, 2017.
•  M. S. Jolly, R. Rosa, and R. Temam. Evaluating the dimension of an inertial manifold for the Kuramoto-Sivashinsky equation. Adv. Differential Equations, 5(1-3):31–66, 2000.
•  E. Kalnay. Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press, 2003.
•  A. Kalogirou, E. E. Keaveny, and D. T. Papageorgiou. An in-depth numerical study of the two-dimensional Kuramoto-Sivashinsky equation. Proc. A., 471(2179):20140932, 20, 2015.
•  A.-K. Kassam and L. N. Trefethen. Fourth-order time-stepping for stiff PDEs. SIAM J. Sci. Comput., 26(4):1214–1233, 2005.
•  Y. Kuramoto and T. Tsuzuki. Persistent propagation of concentration waves in dissipative media far from equilibrium. Prog. Theor. Phys, 55(2):365–369, 1976.
•  A. Larios and E. S. Titi. Global regularity versus finite-time singularities: some paradigms on the effect of boundary conditions and certain perturbations. 430:96–125, 2016.
•  K. Law, A. Stuart, and K. Zygalakis. A Mathematical Introduction to Data Assimilation, volume 62 of Texts in Applied Mathematics. Springer, Cham, 2015.
•  P.-L. Lions, B. Perthame, and E. Tadmor. A kinetic formulation of multidimensional scalar conservation laws and related equations. J. Amer. Math. Soc., 7(1):169–191, 1994.
•  X. Liu. Gevrey class regularity and approximate inertial manifolds for the Kuramoto-Sivashinsky equation. Phys. D, 50(1):135–151, 1991.
•  M. Á. López Marcos. Numerical analysis of pseudospectral methods for the Kuramoto-Sivashinsky equation. IMA J. Numer. Anal., 14(2):233–242, 1994.
•  E. Lunasin and E. S. Titi. Finite determining parameters feedback control for distributed nonlinear dissipative systems - a computational study. arXiv:1506.03709 [math.AP], (submitted).
•  P. A. Markowich, E. S. Titi, and S. Trabelsi. Continuous data assimilation for the three-dimensional Brinkman-Forchheimer-extended Darcy model. Nonlinearity, 29(4):1292–1328, 2016.
•  L. Molinet. Local dissipativity in for the Kuramoto-Sivashinsky equation in spatial dimension 2. J. Dynam. Differential Equations, 12(3):533–556, 2000.
•  C. F. Mondaini and E. S. Titi. Postprocessing Galerkin method applied to a data assimilation algorithm: a uniform in time error estimate. 2016. (arXiv 1612.06998).
•  E. Olson and E. S. Titi. Determining modes for continuous data assimilation in 2D turbulence. J. Statist. Phys., 113(5-6):799–840, 2003. Progress in statistical hydrodynamics (Santa Fe, NM, 2002).
•  F. Otto. Optimal bounds on the Kuramoto-Sivashinsky equation. J. Funct. Anal., 257(7):2188–2245, 2009.
•  S. I. Pokhozhaev. On the blow-up of solutions of the Kuramoto-Sivashinsky equation. Mat. Sb., 199(9):97–106, 2008.
•  J. C. Robinson. Infinite-Dimensional Dynamical Systems. Cambridge Texts in Applied Mathematics. Cambridge University Press, Cambridge, 2001. An Introduction to Dissipative Parabolic PDEs and the Theory of Global Attractors.
•  G. I. Sivashinsky. Nonlinear analysis of hydrodynamic instability in laminar flames. I. Derivation of basic equations. Acta Astronaut., 4(11-12):1177–1206, 1977.
•  G. I. Sivashinsky. On flame propagation under conditions of stoichiometry. SIAM J. Appl. Math., 39(1):67–82, 1980.
•  E. Tadmor. The well-posedness of the Kuramoto-Sivashinsky equation. SIAM J. Math. Anal., 17(4):884–893, 1986.
•  R. Temam. Infinite-Dimensional Dynamical Systems In Mechanics and Physics, volume 68 of Applied Mathematical Sciences. Springer-Verlag, New York, second edition, 1997.