Nonlinear Continuous Data Assimilation

Adam Larios, Department of Mathematics, University of Nebraska–Lincoln, Lincoln, NE 68588-0130, USA (alarios@unl.edu)
Yuan Pei, Department of Mathematics, University of Nebraska–Lincoln, Lincoln, NE 68588-0130, USA (ypei4@unl.edu)
July 24, 2019
Abstract.

We introduce three new nonlinear continuous data assimilation algorithms. These models are compared with the linear continuous data assimilation algorithm introduced by Azouani, Olson, and Titi (AOT). As a proof-of-concept for these models, we computationally investigate these algorithms in the context of the 1D Kuramoto-Sivashinsky equation. We observe that the nonlinear models experience super-exponential convergence in time, and converge to machine precision significantly faster than the linear AOT algorithm in our tests.



1. Introduction

Recently, a promising new approach to data assimilation was pioneered by Azouani, Olson, and Titi [3, 4] (see also [9, 30, 51] for early ideas in this direction). This new approach, which we call AOT data assimilation or the linear AOT algorithm, is based on feedback control at the partial differential equation (PDE) level, as described below. In the present work, we propose several nonlinear data assimilation algorithms based on the AOT algorithm that exhibit significantly faster convergence in our simulations; indeed, the convergence rate appears to be super-exponential.

Let us describe the general idea of the AOT algorithm. Consider a dynamical system in the form,

(1.1)    \frac{du}{dt} = F(u), \qquad u(0) = u_0.

For example, this could represent a system of partial differential equations modeling fluid flow in the atmosphere or the ocean. A central difficulty is that, even if one were able to solve the system exactly, the initial data is largely unknown. For example, in a weather or climate simulation, the initial data may be measured at certain locations by weather stations, but the data at locations in between these stations may be unknown. Therefore, one might not have access to the complete initial data u_0, but only to the observational measurements, which we denote by I_h(u). (Here, I_h is assumed to be a linear operator that can be taken, for example, to be an interpolation operator between grid points of maximal spacing h, or an orthogonal projection onto Fourier modes with wave numbers no larger than 1/h.) Moreover, the data from measurements may be streaming in moment by moment, so in fact, one often has the information I_h(u(t)) for a range of times t. Data assimilation is an approach that eliminates the need for complete initial data and also incorporates incoming data into simulations. Classical approaches to data assimilation are typically based on the Kalman filter; see, e.g., [13, 38, 43] and the references therein for more information about the Kalman filter. In 2014, an entirely new approach to data assimilation, the AOT algorithm, was introduced in [3, 4]. This new approach overcomes some of the drawbacks of the Kalman filter approach (see, e.g., [6] for further discussion). Moreover, it is implemented directly at the PDE level. The approach has been the subject of much recent study in various contexts; see, e.g., [1, 2, 5, 7, 14, 15, 16, 17, 18, 22, 26, 36, 48, 50].

The following system was proposed and studied in [3, 4]:

(1.2)    \frac{dv}{dt} = F(v) + \mu\big(I_h(u) - I_h(v)\big), \qquad v(0) = v_0,
where μ > 0 is a relaxation (nudging) parameter.

This system, used in conjunction with (1.1), is the AOT algorithm for data assimilation of system (1.1). In the case where the dynamical system (1.1) is the 2D Navier-Stokes equations, it was proven in [3, 4] that, for any divergence-free initial data v_0, the difference v(t) − u(t) converges to zero exponentially in time. In particular, even without knowing the initial data u_0, the solution u can be approximately reconstructed for large times. We emphasize that, as noted in [3], the initial data v_0 for (1.2) can be any function, even v_0 ≡ 0. Thus, no information about the initial data u_0 is required to reconstruct the solution asymptotically in time.
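To illustrate the structure of the feedback term at the discrete level, the following Python sketch advances the assimilated solution by one step of the linear AOT algorithm (1.2) using a simple forward-Euler update. The function names, the abstract right-hand side F, and the observation operator I_h are placeholders of our own choosing; the simulations in this paper use the spectral scheme described in Section 3.1.

    import numpy as np

    def aot_euler_step(v, Ih_u_obs, F, I_h, mu, dt):
        # One forward-Euler step of dv/dt = F(v) + mu*(I_h(u) - I_h(v)).
        # Ih_u_obs is the observed data I_h(u(t)) available at the current time.
        return v + dt * (F(v) + mu * (Ih_u_obs - I_h(v)))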

The principal aim of this article is to develop a new class of nonlinear algorithms for data assimilation. The main idea is to use a nonlinear modification of the AOT algorithm to try to drive the algorithm toward the true solution at a faster rate. In particular, for a given, possibly nonlinear, function N, we consider a modification of (1.2) in the form:

(1.3)    \frac{dv}{dt} = F(v) + N\big(I_h(u) - I_h(v)\big), \qquad v(0) = v_0.

To begin, we first focus on the following form of the nonlinearity:

(1.4)    N(w) = \mu\,|w|^{\gamma - 1}\,w,

with μ > 0 and 0 < γ < 1.

Remark 1.1.

Note that by formally setting γ = 1, one recovers the linear AOT algorithm (1.2). The main idea behind using such a nonlinearity is that, when I_h(v) is close to I_h(u), the solution is driven toward the true solution more strongly than in the linear AOT algorithm. In particular, writing w = I_h(u) − I_h(v), for any μ > 0, if |w| is small enough, then μ|w| < |w|^γ; so no matter how large μ is chosen in the linear AOT algorithm, the nonlinear method with 0 < γ < 1 will always penalize small errors more strongly.
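To give a rough sense of scale (with an illustrative exponent γ = 1/2, which is not necessarily the value used in the experiments below): if the observed error has size |w| = 10^{-8}, then the nonlinear feedback in (1.4) has magnitude μ|w|^{1/2} = μ · 10^{-4}, while the linear feedback with the same μ has magnitude μ · 10^{-8}. The linear algorithm would thus need a gain roughly 10^4 times larger to match the nonlinear feedback at this error level, and the gap only widens as the error decreases further.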

As a preliminary test of the effectiveness of this approach, in this work we demonstrate the nonlinear data assimilation algorithm (1.3) on a one-dimensional PDE; namely, the Kuramoto-Sivashinsky equation (KSE), given in dimensionless units by:

(1.5)

in a periodic domain of length L. Here, λ is a dimensionless parameter. For simplicity, we assume that the initial data is sufficiently smooth (made more precise below) and mean-free, i.e., \int_0^L u_0(x)\,dx = 0, which implies \int_0^L u(x,t)\,dx = 0 for all t ≥ 0. This equation has many similarities with the 2D Navier-Stokes equations. It is globally well-posed; it has chaotic large-time behavior; and it has a finite-dimensional global attractor, making it an excellent candidate for studying large-time behavior. It governs various physical phenomena, such as the evolution of flame fronts, the flow of viscous fluids down inclined planes, and certain types of crystal growth (see, e.g., [41, 55, 56]). Much of the theory of the 1D Kuramoto-Sivashinsky equation was developed in the periodic case in [11, 12, 29, 57, 58, 33] (see also [3, 8, 10, 21, 23, 24, 27, 28, 32, 34, 35, 37, 41, 44, 45, 49, 52, 55, 56, 31]). For a discussion of other boundary conditions for (1.5), see, e.g., [41, 55, 54, 53, 42]. Discussions of numerical simulations of the KSE can be found in, e.g., [19, 25, 20, 46, 39]. Data assimilation in several different contexts for the 1D Kuramoto-Sivashinsky equation was investigated in [34, 47], where its potential as an excellent test-bed for data assimilation was also recognized.

Using the nonlinear data assimilation algorithm (1.3) in the setting of the Kuramoto-Sivashinsky equation, with the choice (1.4) for the nonlinearity for some 0 < γ < 1, we arrive at:

(1.6)

We take I_h to be the orthogonal projection onto the first m Fourier modes, for some constant m ∈ ℕ. Other physically relevant choices of I_h, such as a nodal interpolation operator, have been considered in the case of the linear AOT algorithm (see, e.g., [26]).
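As a concrete illustration of this choice of I_h, the following Python sketch projects a real, periodic grid function onto its lowest m Fourier modes via the FFT. The function name and the use of NumPy's real FFT are our own choices and are not taken from the original implementation.

    import numpy as np

    def project_low_modes(u, m):
        # Orthogonal projection I_h onto Fourier modes with wave index <= m.
        u_hat = np.fft.rfft(u)
        u_hat[m + 1:] = 0.0   # discard all higher modes
        return np.fft.irfft(u_hat, n=len(u))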

Remark 1.2.

In view of the ODE example y' = |y|^{1/2}, y(0) = 0, which has multiple solutions, one might wonder about the well-posedness of equation (1.6). While lack of well-posedness is a possibility, our simulations do not appear to be strongly affected by such a hindrance. In any case, we believe the greatly reduced convergence time we observe makes the equations worth studying. A similar remark can be made for an equation we examine in a later section with power larger than one, in view of the ODE example y' = y^2, y(0) = 1, which develops a singularity in finite time. A study of the well-posedness of (1.3), in various settings, will be the subject of a forthcoming work.

2. Preliminaries

In this work, we compute norms of the error given by the difference between the data assimilation solution and the reference solution. We focus on the L^2 and H^1 norms, defined by

\|w\|_{L^2} := \Big(\int_0^L |w(x)|^2\,dx\Big)^{1/2}, \qquad \|w\|_{H^1} := \|\partial_x w\|_{L^2}.

(Note that, by Poincaré's inequality, which holds for mean-free functions on any bounded domain, \|\partial_x w\|_{L^2} is indeed a norm.)
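For reference, these error norms can be computed directly from grid values of the error w = v − u. Below is a minimal Python sketch, assuming a uniform periodic grid and spectral differentiation; the function and variable names are ours.

    import numpy as np

    def error_norms(w, L):
        # L^2 norm and H^1 (semi)norm of a periodic, mean-free grid function w
        # on a domain of length L; the derivative is computed spectrally.
        n = len(w)
        dx = L / n
        l2 = np.sqrt(np.sum(w**2) * dx)
        k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)       # wave numbers
        w_x = np.fft.irfft(1j * k * np.fft.rfft(w), n=n)  # spectral derivative
        h1 = np.sqrt(np.sum(w_x**2) * dx)
        return l2, h1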

We briefly mention the scaling arguments used to justify the form (1.5). For given positive physical parameters, consider an equation in the form

(2.1)

Choose a time scale, a characteristic velocity, and a dimensionless number λ, and write the variables in the corresponding dimensionless form, where a prime denotes a dimensionless variable. Then

(2.2)

Multiplying by the appropriate scale factor, the equation in dimensionless form becomes

(2.3)

Thus, λ acts as a parameter which influences the dynamics, in the same way that the Reynolds number influences the dynamics of turbulent flows.

Another approach is to choose the scales differently, and then define the corresponding dimensionless quantities (denoted again by primes). The equation now becomes

(2.4)

Multiplying by the appropriate factor yields

(2.5)

Thus, equation (2.4) is similar to equation (2.2), except that the dynamics are influenced by a different dimensionless parameter. In particular, the dynamics can be thought of as influenced by the parameter λ with the domain length fixed, or equivalently as influenced by the length of the domain with λ fixed. In this work, for the sake of matching the initial data used in [40], we choose the domain to be [0, 32π], so the length is fixed, and we let λ be the parameter affecting the dynamics.

3. Computational Results

In this section, we demonstrate some computational results for the nonlinear data assimilation algorithm given by (1.6).

3.1. Numerical Methods

It was observed in [26] that no higher-order multi-stage Runge-Kutta-type method exists for solving (1.6), due to the need to evaluate the observations at fractional time steps, for which the data are not available. Therefore, we use a semi-implicit spectral method with Euler time stepping. The linear terms are treated via a first-order exponential time differencing (ETD) method (see, e.g., [40] for a detailed description of this method). The nonlinear term is computed explicitly in the standard way, i.e., by computing derivatives in spectral space and products in physical space, respecting the usual 2/3 dealiasing rule. We use 8192 spatial grid points on the interval [0, 32π], so Δx = 32π/8192 ≈ 0.0123. We use a fixed time step chosen to respect the advective CFL condition. For simplicity, we fix the values of the remaining parameters; the results reported here are qualitatively similar for a wide range of values, with convergence times changing but the error plots remaining qualitatively similar. The case examined in [40] corresponds to a particular choice of λ; to examine a slightly more chaotic setting, we take a different value of λ, which is still well resolved at this resolution. Our results are qualitatively similar for smaller values of λ.

Here, we let I_h be the projection onto the lowest m Fourier modes. In this work, we set m = 32, so only the lowest 32 modes of u are passed to the assimilation equation via I_h(u). One can consider a more general interpolation operator as well, such as nodal interpolation, but here we focus on projection onto low Fourier modes.
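The following Python sketch summarizes the time-stepping loop described above: first-order ETD for the linear terms, explicit treatment of the advective and feedback terms, spectral differentiation with 2/3 dealiasing, and observation of only the lowest m = 32 Fourier modes. The specific form of the KSE used here (u_t = -u u_x - u_xx - u_xxxx, as in [40]), the parameter values, and all function names are illustrative assumptions rather than the exact configuration behind the reported results.

    import numpy as np

    n, Lx = 8192, 32.0 * np.pi              # grid points and domain length
    dt, mu, gamma, m = 0.01, 1.0, 0.5, 32   # illustrative values only (assumed)

    x = Lx * np.arange(n) / n
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=Lx / n)   # wave numbers
    Lsym = k**2 - k**4                               # symbol of -d_xx - d_xxxx
    E = np.exp(dt * Lsym)                            # ETD1 propagator
    phi = np.where(Lsym != 0.0, (E - 1.0) / np.where(Lsym != 0.0, Lsym, 1.0), dt)
    dealias = np.arange(len(k)) < (2 * len(k)) // 3  # 2/3-rule mask

    def I_h(w_hat):                      # projection onto the lowest m modes
        out = np.zeros_like(w_hat)
        out[:m + 1] = w_hat[:m + 1]
        return out

    def nonlinear(w_hat):                # -(1/2) d_x (w^2), dealiased
        w = np.fft.irfft(w_hat, n=n)
        return -0.5j * k * np.fft.rfft(w * w) * dealias

    def step_reference(u_hat):           # one ETD1 step of the reference KSE
        return E * u_hat + phi * nonlinear(u_hat)

    def step_assimilated(v_hat, u_hat):  # one step of the nudged equation, feedback explicit
        w = np.fft.irfft(I_h(u_hat) - I_h(v_hat), n=n)         # observed error
        feedback = mu * np.fft.rfft(np.sign(w) * np.abs(w)**gamma) * dealias
        return E * v_hat + phi * (nonlinear(v_hat) + feedback)

    # Usage: initialize u_hat from the data used in [40] and v_hat = 0, then
    # advance both solutions in lockstep:
    u_hat = np.fft.rfft(np.cos(x / 16.0) * (1.0 + np.sin(x / 16.0)))
    v_hat = np.zeros_like(u_hat)
    for _ in range(1000):
        u_hat, v_hat = step_reference(u_hat), step_assimilated(v_hat, u_hat)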

To fix ideas, in this paper we mainly use the initial data used in [40] to simulate (1.5); namely

(3.1)    u_0(x) = \cos(x/16)\,\big(1 + \sin(x/16)\big),

on the interval [0, 32π]. However, we also investigated several other choices of initial data. In all cases, the results were qualitatively similar to those reported here. We present one such test near the end of this paper.

Note that explicit treatment of the feedback term imposes a constraint on the time step, which follows from a standard stability analysis for Euler's method. This is not a serious restriction in this work, given our choice of time step.
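For instance, consider the scalar model problem w' = -μw obtained by freezing the feedback term; the explicit Euler update is w^{n+1} = (1 - μΔt)w^n, which is stable provided |1 - μΔt| ≤ 1, i.e., Δt ≤ 2/μ. A time step small compared to 1/μ therefore suffices for the explicitly treated feedback term.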

All the simulations in the present work are well-resolved. In Figure 3.1, we show plots of time-averaged spectra of all the PDEs simulated in the present work. One can see that all relevant wave modes are captured to within machine precision.

(a) Time-averaged spectrum of the reference solution to the 1D Kuramoto-Sivashinsky equation.
(b) Time-averaged spectrum of the data assimilation solution with nonlinear pure-power () algorithm.
(c) Time-averaged spectrum of the data assimilation solution with nonlinear hybrid () algorithm.
(d) Time-averaged spectrum of the data assimilation solution with nonlinear concave-convex () algorithm.
Figure 3.1. Log-log plots of the spectra for the above scenarios, averaged over all time steps in the simulated time interval.

3.2. Simple Power Nonlinearity

We compare the error in the nonlinear data assimilation algorithm (1.3) with the error in the linear AOT algorithm. We first focus on the nonlinearity given by a power according to (1.4); i.e., we consider equation (1.6) together with equation (1.5). In Figure 3.2(a), the solution to (1.5) (which we call the “reference” solution) evolves from the smooth, low-mode initial condition (3.1) to a chaotic state after a relatively short time. In Figure 3.2(b), the difference between this solution and the AOT data assimilation solution is plotted; it rapidly decays to zero in a short time.

(a) A chaotic solution to the Kuramoto-Sivashinsky equation evolving in time.
(b) Error in data assimilation solution using linear AOT algorithm ().
(c) Error in data assimilation solution using nonlinear algorithm (1.3) ().
Figure 3.2. Data assimilation for the Kuramoto-Sivashinsky equation using linear and nonlinear algorithms. The difference rapidly decays to zero in time, and visually the errors look similar. The assimilation equations were initialized with v_0 ≡ 0, and I_h is the orthogonal projection onto the lowest 32 Fourier modes. Similar results appear in tests with a wide variety of initial data and parameter choices.

We observe in Figure 3.3 that the errors in the linear AOT algorithm (1.2) and the nonlinear algorithm (1.3) both decay. The error in the nonlinear algorithm exhibits early-time oscillations which are not present in the error for the AOT algorithm. However, by tracking norms of the difference of the solutions, one can see in Figure 3.3 that the nonlinear algorithm reaches machine precision significantly faster than the linear AOT algorithm, for a range of values of γ.

(a) Error in the L²-norm vs. time.
(b) Error in the H¹-norm vs. time.
Figure 3.3. Error for the linear AOT solution (1.2) and the nonlinear solution (1.3) for various values of γ. Resolution 8192. (Log-linear scale.)

For certain values of γ, our simulations appear to no longer converge (not shown here). The error in the linear AOT algorithm (i.e., γ = 1) reaches machine precision at a certain time; among the values of γ we tested, there appears to be an optimal choice, which reaches machine precision significantly earlier, a substantial speedup over the linear algorithm. Moreover, the shape of the curves with γ < 1 indicates super-exponential convergence, as seen in the concave curves on the log-linear plot in Figure 3.3, while for the linear AOT algorithm the convergence is only exponential, as indicated by the linear shape on the log-linear plot. Currently, the super-exponential convergence is only an observation in simulations. An analytical investigation of the convergence rate will be the subject of a forthcoming work.

3.3. Hybrid Linear/Nonlinear Methods

In this subsection, we investigate a family of hybrid linear/nonlinear data assimilation algorithms. One can see from Figure 3.3 in the previous subsection that, although the nonlinear methods converge to machine precision at earlier times than the linear method, the nonlinear method suffers from larger errors than the linear method for short times. This motivates the possibility of using a hybrid linear/nonlinear method. For example, one could look for an optimal time to switch between the models, say, around the time when the nonlinear error overtakes the linear error in Figure 3.3, but this seems highly situationally dependent and difficult to implement in general. Instead, the approach we consider here is to let N be given by (1.4) when the observed error is small, but let it be linear when the observed error is large. The idea is that, when the error is small, deviations are strongly penalized, as in Remark 1.1. However, where the error is large, the linear AOT algorithm should give the greater penalization. Therefore, we consider algorithm (1.3) with the following choice of nonlinearity, depending on a switching threshold that is held fixed in all of the following simulations.

(3.2)
(a) Error in the L²-norm vs. time.
(b) Error in the H¹-norm vs. time.
Figure 3.4. Error in the linear (1.2), pure-power (1.4), and hybrid (3.2) algorithms. Resolution 8192. (Log-linear scale.)

In Figure 3.4, we compare the linear algorithm (1.2) with the nonlinear algorithm (1.3) with the pure-power nonlinearity given by (1.4), and also with the hybrid nonlinearity given by (3.2). All three algorithms converge to machine precision, the linear AOT algorithm last, the pure-power algorithm sooner, and the hybrid algorithm sooner still. In addition, one can see that the hybrid algorithm remains close to the linear AOT algorithm for short times. Moreover, after a short time, the hybrid algorithm undergoes super-exponential convergence, converging faster than every algorithm analyzed so far. The benefits of this splitting of the nonlinearity between the small-error and large-error regimes seem clear. Moreover, this approach can be exploited further, which is the topic of the next subsection.
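A minimal Python sketch of one possible realization of the hybrid nonlinearity (3.2) is given below; the exponent γ = 1/2 and the switching threshold M = 1 are assumed values chosen only for illustration, and the function name is ours.

    import numpy as np

    def N_hybrid(w, mu, gamma=0.5, M=1.0):
        # Pure-power feedback (1.4) where the observed error is small (|w| <= M),
        # linear AOT feedback where it is large; continuous at |w| = M when M = 1.
        power = mu * np.sign(w) * np.abs(w)**gamma
        linear = mu * w
        return np.where(np.abs(w) <= M, power, linear)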

3.4. Concave-Convex Nonlinearity

Inspired by the success of the hybrid method, in this subsection we further exploit the effect of the feedback control term by accentuating the nonlinearity for large errors. We consider the following nonlinearity in conjunction with (1.3) for the Kuramoto-Sivashinsky equation.

(3.3)

Note that this choice of N is concave for small errors and convex for large errors. The convexity for large errors serves to more strongly penalize large deviations from the reference solution. In Figure 3.5, we see that at every positive time this method has significantly smaller error than the linear AOT method and than the methods using the pure-power and hybrid nonlinearities. Convergence to machine precision happens earliest among all the methods considered, a substantial speedup compared to the linear AOT algorithm.
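In the same spirit, here is a sketch of a concave-convex nonlinearity of the type described by (3.3); the exponents (γ = 1/2 below the switch, β = 2 above it) and the switching point M = 1 are again assumptions made only for illustration.

    import numpy as np

    def N_concave_convex(w, mu, gamma=0.5, beta=2.0, M=1.0):
        # Concave power (gamma < 1) for small observed errors, convex power
        # (beta > 1) for large ones; continuous at |w| = M when M = 1.
        small = mu * np.sign(w) * np.abs(w)**gamma
        large = mu * np.sign(w) * np.abs(w)**beta
        return np.where(np.abs(w) <= M, small, large)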

(a) Error in the L²-norm vs. time.
(b) Error in the H¹-norm vs. time.
Figure 3.5. Error in the linear AOT algorithm (1.2) and in the nonlinear algorithm (1.3) with the pure-power (1.4), hybrid (3.2), and concave-convex (3.3) nonlinearities. Resolution 8192. (Log-linear scale.)

3.5. Comparison of All Methods

Let us also consider the error at every Fourier mode. In Figure 3.6, one can see these errors at various times. We examine a time before the transition to fully-developed chaos, a time during the transition, a time after the solution has settled down to an approximately statistically steady state, and a later time. At each mode, and at each positive time, the error in the solution with nonlinearity (3.3) is the smallest.

Figure 3.6. Error in spectrum (mode amplitude vs. wave number) at four representative times, for all methods; panels (a)–(d) correspond to the four times described above.

Next, we point out that our results hold qualitatively for different choices of initial data for the reference equation (1.5). To demonstrate this, we wait until the solution to (1.5) with initial data (3.1) has reached an approximately statistically steady state, and we use a solution profile from well within the time interval of fully-developed chaos to re-initialize (1.5). We still initialize (1.6) with v_0 ≡ 0. Norms of the errors are shown in Figure 3.7. We observe that, although the convergence time is increased for all methods, the qualitative observations discussed above still hold.

(a) Error in the L²-norm vs. time.
(b) Error in the H¹-norm vs. time.
Figure 3.7. Error in all algorithms with chaotic initialization of the reference solution. Resolution 8192. (Log-linear scale.)

4. Conclusions

Our results indicate that advantages might be gained by looking at nonlinear data assimilation. We used the Kuramoto-Sivashinsky equation as a proof-of-concept for this method; however, in a forthcoming work, we will extend the method to more challenging equations, including the Navier-Stokes equations of fluid flow. Mathematical analysis of these methods will also be the subject of future work.

We note that other choices of nonlinearity may very well be useful to consider. Indeed, one may imagine a functional given by

(4.1)

where T(N) is the time of convergence to within a certain error tolerance, such as to within machine precision. (One would need to show that admissible functions are in some sense independent of the parameters and initial data, say, after some normalization.) One could also consider a functional whose value at N is given by a particular norm of the error. By minimizing such functionals, one might discover even better data-assimilation methods.

References

  • [1] D. A. Albanez, H. J. Nussenzveig Lopes, and E. S. Titi. Continuous data assimilation for the three-dimensional Navier–Stokes-α model. Asymptotic Anal., 97(1-2):139–164, 2016.
  • [2] M. U. Altaf, E. S. Titi, T. Gebrael, O. M. Knio, L. Zhao, M. F. McCabe, and I. Hoteit. Downscaling the 2D Bénard convection equations using continuous data assimilation. Computational Geosciences, pages 1–18, 2017.
  • [3] A. Azouani, E. Olson, and E. S. Titi. Continuous data assimilation using general interpolant observables. J. Nonlinear Sci., 24(2):277–304, 2014.
  • [4] A. Azouani and E. S. Titi. Feedback control of nonlinear dissipative systems by finite determining parameters—a reaction-diffusion paradigm. Evol. Equ. Control Theory, 3(4):579–594, 2014.
  • [5] H. Bessaih, E. Olson, and E. S. Titi. Continuous data assimilation with stochastically noisy data. Nonlinearity, 28(3):729–753, 2015.
  • [6] A. Biswas, J. Hudson, A. Larios, and Y. Pei. Continuous data assimilation for the magneto-hydrodynamic equations in 2d using one component of the velocity and magnetic fields. (submitted).
  • [7] A. Biswas and V. R. Martinez. Higher-order synchronization for a data assimilation algorithm for the 2D Navier–Stokes equations. Nonlinear Anal. Real World Appl., 35:132–157, 2017.
  • [8] J. C. Bronski and T. N. Gambill. Uncertainty estimates and bounds for the Kuramoto-Sivashinsky equation. Nonlinearity, 19(9):2023–2039, 2006.
  • [9] C. Cao, I. G. Kevrekidis, and E. S. Titi. Numerical criterion for the stabilization of steady states of the Navier–Stokes equations. Indiana Univ. Math. J., 50(Special Issue):37–96, 2001. Dedicated to Professors Ciprian Foias and Roger Temam (Bloomington, IN, 2000).
  • [10] A. Cheskidov and C. Foias. On the non-homogeneous stationary Kuramoto-Sivashinsky equation. Phys. D, 154(1-2):1–14, 2001.
  • [11] P. Collet, J.-P. Eckmann, H. Epstein, and J. Stubbe. A global attracting set for the Kuramoto-Sivashinsky equation. Comm. Math. Phys., 152(1):203–214, 1993.
  • [12] P. Constantin, C. Foias, B. Nicolaenko, and R. Temam. Integral manifolds and inertial manifolds for dissipative partial differential equations, volume 70 of Applied Mathematical Sciences. Springer-Verlag, New York, 1989.
  • [13] R. Daley. Atmospheric Data Analysis. Cambridge Atmospheric and Space Science Series. Cambridge University Press, 1993.
  • [14] A. Farhat, M. S. Jolly, and E. S. Titi. Continuous data assimilation for the 2D Bénard convection through velocity measurements alone. Phys. D, 303:59–66, 2015.
  • [15] A. Farhat, E. Lunasin, and E. S. Titi. Abridged continuous data assimilation for the 2D Navier–Stokes equations utilizing measurements of only one component of the velocity field. J. Math. Fluid Mech., 18(1):1–23, 2016.
  • [16] A. Farhat, E. Lunasin, and E. S. Titi. Data assimilation algorithm for 3D Bénard convection in porous media employing only temperature measurements. J. Math. Anal. Appl., 438(1):492–506, 2016.
  • [17] A. Farhat, E. Lunasin, and E. S. Titi. On the Charney conjecture of data assimilation employing temperature measurements alone: The paradigm of 3D planetary geostrophic model. arXiv:1608.04770, 2016.
  • [18] A. Farhat, E. Lunasin, and E. S. Titi. Continuous data assimilation for a 2d Bénard convection system through horizontal velocity measurements alone. Journal of Nonlinear Science, pages 1–23, 2017.
  • [19] C. Foias, M. S. Jolly, I. G. Kevrekidis, and E. S. Titi. Dissipativity of numerical schemes. Nonlinearity, 4(3):591–613, 1991.
  • [20] C. Foias, M. S. Jolly, I. G. Kevrekidis, and E. S. Titi. On some dissipative fully discrete nonlinear Galerkin schemes for the Kuramoto-Sivashinsky equation. Phys. Lett. A, 186(1-2):87–96, 1994.
  • [21] C. Foias and I. Kukavica. Determining nodes for the Kuramoto-Sivashinsky equation. J. Dynam. Differential Equations, 7(2):365–373, 1995.
  • [22] C. Foias, C. F. Mondaini, and E. S. Titi. A discrete data assimilation scheme for the solutions of the two-dimensional Navier-Stokes equations and their statistics. SIAM J. Appl. Dyn. Syst., 15(4):2109–2142, 2016.
  • [23] C. Foias, B. Nicolaenko, G. R. Sell, and R. Temam. Variétés inertielles pour l’équation de Kuramoto-Sivashinski. C. R. Acad. Sci. Paris Sér. I Math., 301(6):285–288, 1985.
  • [24] C. Foias, B. Nicolaenko, G. R. Sell, and R. Temam. Inertial manifolds for the Kuramoto-Sivashinsky equation and an estimate of their lowest dimension. J. Math. Pures Appl. (9), 67(3):197–226, 1988.
  • [25] C. Foias and E. S. Titi. Determining nodes, finite difference schemes and inertial manifolds. Nonlinearity, 4(1):135–153, 1991.
  • [26] M. Gesho, E. Olson, and E. S. Titi. A computational study of a data assimilation algorithm for the two-dimensional Navier-Stokes equations. Commun. Comput. Phys., 19(4):1094–1110, 2016.
  • [27] M. Goldman, M. Josien, and F. Otto. New bounds for the inhomogenous Burgers and the Kuramoto-Sivashinsky equations. Comm. Partial Differential Equations, 40(12):2237–2265, 2015.
  • [28] A. A. Golovin, S. H. Davis, A. A. Nepomnyashchy, and M. A. Zaks. Convective Cahn-Hilliard models for kinetically controlled crystal growth. In International Conference on Differential Equations, Vol. 1, 2 (Berlin, 1999), pages 1281–1283. World Sci. Publ., River Edge, NJ, 2000.
  • [29] J. Goodman. Stability of the Kuramoto-Sivashinsky and related systems. Comm. Pure Appl. Math., 47(3):293–306, 1994.
  • [30] K. Hayden, E. Olson, and E. S. Titi. Discrete data assimilation in the Lorenz and 2D Navier-Stokes equations. Phys. D, 240(18):1416–1425, 2011.
  • [31] J. M. Hyman and B. Nicolaenko. The Kuramoto-Sivashinsky equation: a bridge between PDEs and dynamical systems. Phys. D, 18(1-3):113–126, 1986. Solitons and coherent structures (Santa Barbara, Calif., 1985).
  • [32] J. M. Hyman, B. Nicolaenko, and S. Zaleski. Order and complexity in the Kuramoto-Sivashinsky model of weakly turbulent interfaces. Phys. D, 23(1-3):265–292, 1986. Spatio-temporal coherence and chaos in physical systems (Los Alamos, N.M., 1986).
  • [33] J. S. Il’yashenko. Global analysis of the phase portrait for the Kuramoto-Sivashinsky equation. J. Dynam. Differential Equations, 4(4):585–615, 1992.
  • [34] M. Jardak, I. M. Navon, and M. Zupanski. Comparison of sequential data assimilation methods for the Kuramoto-Sivashinsky equation. Internat. J. Numer. Methods Fluids, 62(4):374–402, 2010.
  • [35] M. S. Jolly, I. G. Kevrekidis, and E. S. Titi. Approximate inertial manifolds for the Kuramoto-Sivashinsky equation: analysis and computations. Phys. D, 44(1-2):38–60, 1990.
  • [36] M. S. Jolly, V. R. Martinez, and E. S. Titi. A data assimilation algorithm for the subcritical surface quasi-geostrophic equation. Adv. Nonlinear Stud., 17(1):167–192, 2017.
  • [37] M. S. Jolly, R. Rosa, and R. Temam. Evaluating the dimension of an inertial manifold for the Kuramoto-Sivashinsky equation. Adv. Differential Equations, 5(1-3):31–66, 2000.
  • [38] E. Kalnay. Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press, 2003.
  • [39] A. Kalogirou, E. E. Keaveny, and D. T. Papageorgiou. An in-depth numerical study of the two-dimensional Kuramoto-Sivashinsky equation. Proc. A., 471(2179):20140932, 20, 2015.
  • [40] A.-K. Kassam and L. N. Trefethen. Fourth-order time-stepping for stiff PDEs. SIAM J. Sci. Comput., 26(4):1214–1233, 2005.
  • [41] Y. Kuramoto and T. Tsuzuki. Persistent propagation of concentration waves in dissipative media far from equilibrium. Prog. Theor. Phys, 55(2):365–369, 1976.
  • [42] A. Larios and E. S. Titi. Global regularity versus finite-time singularities: some paradigms on the effect of boundary conditions and certain perturbations. 430:96–125, 2016.
  • [43] K. Law, A. Stuart, and K. Zygalakis. A Mathematical Introduction to Data Assimilation, volume 62 of Texts in Applied Mathematics. Springer, Cham, 2015.
  • [44] P.-L. Lions, B. Perthame, and E. Tadmor. A kinetic formulation of multidimensional scalar conservation laws and related equations. J. Amer. Math. Soc., 7(1):169–191, 1994.
  • [45] X. Liu. Gevrey class regularity and approximate inertial manifolds for the Kuramoto-Sivashinsky equation. Phys. D, 50(1):135–151, 1991.
  • [46] M. Á. López Marcos. Numerical analysis of pseudospectral methods for the Kuramoto-Sivashinsky equation. IMA J. Numer. Anal., 14(2):233–242, 1994.
  • [47] E. Lunasin and E. S. Titi. Finite determining parameters feedback control for distributed nonlinear dissipative systems - a computational study. arXiv:1506.03709 [math.AP], (submitted).
  • [48] P. A. Markowich, E. S. Titi, and S. Trabelsi. Continuous data assimilation for the three-dimensional Brinkman-Forchheimer-extended Darcy model. Nonlinearity, 29(4):1292–1328, 2016.
  • [49] L. Molinet. Local dissipativity in L² for the Kuramoto-Sivashinsky equation in spatial dimension 2. J. Dynam. Differential Equations, 12(3):533–556, 2000.
  • [50] C. F. Mondaini and E. S. Titi. Postprocessing Galerkin method applied to a data assimilation algorithm: a uniform in time error estimate. 2016. (arXiv 1612.06998).
  • [51] E. Olson and E. S. Titi. Determining modes for continuous data assimilation in 2D turbulence. J. Statist. Phys., 113(5-6):799–840, 2003. Progress in statistical hydrodynamics (Santa Fe, NM, 2002).
  • [52] F. Otto. Optimal bounds on the Kuramoto-Sivashinsky equation. J. Funct. Anal., 257(7):2188–2245, 2009.
  • [53] S. I. Pokhozhaev. On the blow-up of solutions of the Kuramoto-Sivashinsky equation. Mat. Sb., 199(9):97–106, 2008.
  • [54] J. C. Robinson. Infinite-Dimensional Dynamical Systems. Cambridge Texts in Applied Mathematics. Cambridge University Press, Cambridge, 2001. An Introduction to Dissipative Parabolic PDEs and the Theory of Global Attractors.
  • [55] G. I. Sivashinsky. Nonlinear analysis of hydrodynamic instability in laminar flames. I. Derivation of basic equations. Acta Astronaut., 4(11-12):1177–1206, 1977.
  • [56] G. I. Sivashinsky. On flame propagation under conditions of stoichiometry. SIAM J. Appl. Math., 39(1):67–82, 1980.
  • [57] E. Tadmor. The well-posedness of the Kuramoto-Sivashinsky equation. SIAM J. Math. Anal., 17(4):884–893, 1986.
  • [58] R. Temam. Infinite-Dimensional Dynamical Systems In Mechanics and Physics, volume 68 of Applied Mathematical Sciences. Springer-Verlag, New York, second edition, 1997.