Non-Markovian Closure Models for Large Eddy Simulations using the Mori-Zwanzig Formalism
Abstract
This work uses the Mori-Zwanzig (MZ) formalism, a concept originating from non-equilibrium statistical mechanics, as a basis for the development of coarse-grained models of turbulence. The mechanics of the generalized Langevin equation (GLE) are considered, and insight gained from the orthogonal dynamics equation is used as a starting point for model development. A class of subgrid models is considered which represents non-local behavior via a finite memory approximation (Stinis, P., "Mori-Zwanzig reduced models for uncertainty quantification I: Parametric uncertainty," arXiv:1211.4285, 2012), the length of which is determined using a heuristic related to the spectral radius of the Jacobian of the resolved variables. The resulting models are intimately tied to the underlying numerical resolution and are capable of approximating non-Markovian effects. Numerical experiments on the Burgers equation demonstrate that the MZ-based models can accurately predict the temporal evolution of the total kinetic energy and the total dissipation rate at varying mesh resolutions. The trajectory of each resolved mode in phase space is accurately predicted for cases where the coarse-graining is moderate. LES of homogeneous isotropic turbulence and the Taylor-Green vortex show that the MZ-based models provide excellent predictions, accurately capturing the subgrid contribution to energy transfer. Lastly, LES of fully developed channel flow demonstrates the applicability of MZ-based models to non-decaying problems. It is notable that the form of the closure is not imposed by the modeler, but is rather derived from the mathematics of the coarse-graining, highlighting the potential of MZ-based techniques to define LES closures.
I Introduction
The pursuit of efficient and accurate simulation of turbulent flows continues to present great challenges to the scientific computing community. The continuous cascade of scales makes Direct Numerical Simulation (DNS) methodologies prohibitively expensive for most practical flows. Generally, the solution of high Reynolds number turbulent flows is made tractable by developing a set of surrogate equations that display a reduced range of scales. The reduction of scales in Large Eddy Simulation (LES) can be viewed as a type of coarse-graining (CG) technique for the Navier-Stokes equations. Typically associated with atomistic simulations, the coarse-graining approach removes the degrees of freedom associated with the microscopic scales and attempts to compute only the macroscopic dynamics CGMethods (). The effects of the microscopic scales on the macroscopic scales are expressed through a constitutive relation, which is central to a coarse-grained model PrinciplesOfMultiscaleModeling (). In a continuum solution of a turbulent flow, the macroscopic processes are the large-scale energy-containing flow structures, while the microscopic processes can be taken to be the small-scale turbulent fluctuations that occur at the Kolmogorov scales.
The prediction of macroscopic quantities in the absence of explicit microscopic information constitutes a classical problem of multiscale modeling. For systems that display some degree of separation between the large and small scales, elegant mathematical treatments have been established and significant progress has been made OrtizHierarchical (). Systems that exhibit a continuous cascade of scales present a greater challenge, as the coarse-graining process leads to non-Markovian effects that are difficult to understand and model. In turbulent flows, the subgrid effects are generally accounted for through a subgrid model. The majority of subgrid-scale models for LES are Markovian and are based on assumptions of scale invariance, homogeneity, and rapid equilibration of the small scales. These models are inadequate in many problems and warrant improvement.
The Mori-Zwanzig (MZ) formalism provides a mathematical procedure for the development of coarse-grained models of systems that lack scale separation. Originating from irreversible statistical mechanics, the MZ formalism provides a method for recasting a dynamical system into an equivalent, lower-dimensional system. In this reduced system, which is commonly referred to as the generalized Langevin equation (GLE), the effect of the microscopic scales on the macroscopic scales appears as a convolution integral (which is sometimes referred to as memory) and a noise term. The appearance of the memory term in the GLE demonstrates that, in a general setting, the coarse-graining procedure leads to non-local memory effects. The MZ formalism alone does not lead to a reduction in computational complexity, as it requires the solution of the orthogonal (unresolved) dynamics equation. In the past decade, however, the MZ formalism has gained attention in the setting of model order reduction. The optimal prediction framework developed by the Chorin group ProblemReduction (); Chorin_predictwithmem (); StatisticalMechanics (); ChorinOptimalPrediction (); BarberThesis () uses the GLE to obtain a set of equations that describe the evolution of the macroscopic scales conditioned on knowledge of the microscopic scales. The framework additionally begins to address how the memory convolution integral can be approximated without solving the orthogonal dynamics.
Constructing an appropriate surrogate to the memory integral requires an understanding of the structure of the orthogonal dynamics and its impact on the convolution integral. Obtaining insight into this structure is challenging, as it requires the solution of the orthogonal dynamics equation, which is a high-dimensional partial differential equation. Givon et al. GivonOrthogonal () prove the existence of classical solutions to the orthogonal dynamics equation for a simple set of projection operators and show the existence of weak solutions in the general case. Hald et al. HaldPredictFromData () demonstrate that the memory consists of convolving a sum of temporal covariances of functions in the noise subspace. Chorin et al. Chorin_predictwithmem () make use of the fact that the Hermite polynomials are orthogonal with respect to the conditional expectation inner product and develop a set of Volterra integral equations that can be used to approximate the memory integrand. This finite-rank projection is shown to provide a reasonably accurate representation of the memory kernel for a system of two oscillators with a nonlinear coupling, but the process is intractable for high-dimensional problems unless a low-dimensional basis is used. Bernstein bernsteinBurgers () applies this methodology with a first-order basis to gain information about the memory kernel for Burgers equation. The simplicity of the basis functions, however, limits the insight gained from this process.
Despite the challenges in understanding the structure of the memory, various surrogate models exist. The most common approximation to the memory term is the t-model, so named because time appears explicitly in the closure. The t-model results from both a long-memory and a short-time approximation bernsteinBurgers () and has been applied with varying success to numerous systems. Bernstein bernsteinBurgers () applied the t-model to Fourier-Galerkin solutions of Burgers equation; numerical experiments showed that the t-model accurately characterized the decay of energy resulting from the formation of shocks. Hald and Stinis (2007) applied the t-model to two- and three-dimensional simulations of the Euler equations. The t-model correctly preserved energy in the two-dimensional simulations and provided results consistent with theory for three-dimensional simulations of the Taylor-Green vortex. Chandy and Frankel ChandyFrankelLES () used the t-model in Large Eddy Simulations of homogeneous turbulence and the Taylor-Green vortex. The model was found to be in good agreement with Direct Numerical Simulation and experimental data for low Reynolds number cases, but discrepancies were seen at higher Reynolds numbers. Additionally, those Large Eddy Simulations of the Taylor-Green vortex were performed at high enough resolution that simulations not utilizing any subgrid model were more accurate than Smagorinsky-type models. Stinis StinisrMZ (); RenormalizedMZ () introduced a set of renormalized models based on expansions of the memory integrand. These models were applied to Burgers equation and were found to be in good agreement with the full order model. For all of these problems, the reason for the relative success of the t-model has remained a mystery and is an outstanding question in the literature.
Stinis stinisHighOrderEuler (); stinis_finitememory () proposed a class of approximations to the memory integral that utilize the concept of a finite memory. The models involve the integration of additional differential equations that describe the evolution of the memory integral. The models were applied to Burgers equation and the 3D Euler equations. In these numerical experiments it was found that a finite memory length was required for stability, but the validity of the finite memory assumption was not addressed. This class of models appears to be more capable and robust than the t-model and will be a main consideration of this work.
While we have restricted our discussion to fluid-dynamics applications, a large body of research regarding the Mori-Zwanzig formalism exists in the molecular dynamics community; see, for instance, Refs. 18 and 19. Additionally, the Mori-Zwanzig approach can be used in contexts outside of coarse-graining. Uncertainty quantification, for example, is one such field where the Mori-Zwanzig formalism has attracted recent attention MZ_for_UQ (); stinis_UQ2 ().
The objective of this work is to extend the applicability of Mori-Zwanzig-based closures to Fourier-Galerkin simulations of turbulent flows and to rigorously investigate their performance on a number of problems. Emphasis will be placed on Stinis' finite memory models stinisHighOrderEuler (); stinis_finitememory (), a class of closures that has not gained substantial exposure in the fluids community. The organization of this paper is as follows: Section 2 presents an introduction to the Mori-Zwanzig formalism, including the mechanics of the convolution memory integral and the construction of surrogate models. In Section 3, the models are applied to the viscous Burgers equation. In Sections 4 and 5, the models are applied to the incompressible Navier-Stokes equations, where homogeneous isotropic turbulence, the Taylor-Green vortex, and fully developed channel flow are considered. In Section 6, conclusions and perspectives are provided.
II Mori-Zwanzig Formalism
A brief description of the Mori-Zwanzig formalism is provided in this section. A demonstrative example is first given to introduce the unfamiliar reader to the formalism. Consider a two-state linear system given by
(1) \(\frac{dx}{dt} = A_{11}x + A_{12}y,\)
(2) \(\frac{dy}{dt} = A_{21}x + A_{22}y.\)
Suppose that one wants to create a 'reduced-order' model of the system given in Eqns. 1 and 2 by creating a surrogate system that depends only on \(x\). For example,
(3) \(\frac{dx}{dt} = A_{11}x + f(x, t).\)
The challenge of the numerical modeler is then to construct a function \(f\) that accurately represents the effect of the unresolved variable \(y\) on the resolved variable \(x\). For this simple linear system, \(f\) can be exactly determined by solving Eq. 2 for \(y\) in terms of a general \(x\). Through this process, the two-component Markovian system can be cast as a one-component non-Markovian system that has the form
(4) \(\frac{dx}{dt}(t) = A_{11}x(t) + A_{12}e^{A_{22}t}y(0) + A_{12}\int_0^t e^{A_{22}(t-s)}A_{21}x(s)\,ds.\)
Equation 4 has no dependence on \(y(t)\) and hence is closed. This reduction of a Markovian set of equations to a lower-dimensional, non-Markovian set of equations is the essence of the Mori-Zwanzig formalism. The formalism provides a framework to perform this reduction for systems of nonlinear differential equations.
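This reduction can be verified numerically. The sketch below uses illustrative coefficients (not from the paper) and compares a forward-Euler solution of the full two-state system against the memory-closed equation, with the convolution integral evaluated by trapezoidal quadrature over the stored history of the resolved variable:

```python
import numpy as np

# Illustrative coefficients for the two-state linear system (assumptions,
# not values from the paper)
a11, a12, a21, a22 = -1.0, 0.5, 0.3, -2.0
x0, y0 = 1.0, 0.5
dt, n = 1e-3, 5000

# Full Markovian system, forward Euler
x_f, y_f = x0, y0
for _ in range(n):
    x_f, y_f = (x_f + dt * (a11 * x_f + a12 * y_f),
                y_f + dt * (a21 * x_f + a22 * y_f))

# Reduced non-Markovian system: the memory convolution is evaluated over
# the stored history of x using trapezoidal quadrature
xs = [x0]
for i in range(n):
    t = i * dt
    s = np.arange(i + 1) * dt
    integrand = np.exp(a22 * (t - s)) * a21 * np.array(xs)
    mem = dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    rhs = a11 * xs[-1] + a12 * np.exp(a22 * t) * y0 + a12 * mem
    xs.append(xs[-1] + dt * rhs)
```

The two trajectories agree to within the discretization error, since the reduction itself is exact for a linear system.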
A formal presentation of the Mori-Zwanzig formalism is now provided. The discussion here is adapted from Refs. 5 and 6. Consider the semi-discrete nonlinear ODE
(5) \(\frac{d\phi}{dt} = R(\phi).\)
where \(\phi = \{\hat{\phi}, \tilde{\phi}\} \in \mathbb{R}^N\), with \(\hat{\phi}\) being the relevant or resolved modes, and \(\tilde{\phi}\) being the unresolved modes. The initial condition is \(\phi(0) = \phi_0\). The nonlinear ODE can be posed as a linear partial differential equation by casting it in the Liouville form,
(6) \(\frac{\partial u}{\partial t} = \mathcal{L}u, \qquad u(\phi_0, 0) = g(\phi_0),\)
with \(u(\phi_0, t) = g(\phi(\phi_0, t))\). The Liouville operator is defined as
\(\mathcal{L} = \sum_{i=1}^{N} R_i(\phi_0)\frac{\partial}{\partial \phi_{0i}},\)
where \(\phi_{0i} = \phi_i(0)\). It can be shown that the solution to Eq. 6 is given by
(7) \(u(\phi_0, t) = e^{t\mathcal{L}}g(\phi_0).\)
The semigroup notation is now used, i.e. \(g(\phi(\phi_0, t)) = e^{t\mathcal{L}}g(\phi_0)\). One can show that the evolution operator commutes with the Liouville operator, i.e. \(e^{t\mathcal{L}}\mathcal{L} = \mathcal{L}e^{t\mathcal{L}}\). Consider the initial conditions to be random variables drawn from some probability distribution. Given this distribution, the expected value of a function \(g\) is given by
\(E[g] = \int g(\phi_0)\,\rho(\phi_0)\,d\phi_0,\)
where \(\rho\) is the probability density. Assume \(g \in L^2\), where \(L^2\) is a Hilbert space endowed with the inner product \((f, g) = E[fg]\). Consider now the coarse-grained simulation of Eq. 5, where the variables in \(\hat{\phi}\) are resolved and the variables in \(\tilde{\phi}\) are unresolved. By taking \(g(\phi_0) = \phi_{0j}\), with \(j\) belonging to the resolved set, an equation for the trajectory of a resolved variable can be written as
(8) \(\frac{\partial}{\partial t}e^{t\mathcal{L}}\phi_{0j} = e^{t\mathcal{L}}\mathcal{L}\phi_{0j}.\)
The term on the right hand side is a function of both the resolved and unresolved variables. To proceed, define the space of the resolved variables by \(\hat{L}^2\). Further, define the projection operator \(P : L^2 \to \hat{L}^2\), as well as \(Q = I - P\). An example of a possible projection operator would be, for a function \(f(\hat{\phi}_0, \tilde{\phi}_0)\), the conditional expectation \(Pf = E[f \mid \hat{\phi}_0]\). Using the identity \(I = P + Q\), the right hand side of Eq. 8 can be split into a component that depends on the resolved variables and one that depends on the unresolved variables,
(9) \(\frac{\partial}{\partial t}e^{t\mathcal{L}}\phi_{0j} = e^{t\mathcal{L}}P\mathcal{L}\phi_{0j} + e^{t\mathcal{L}}Q\mathcal{L}\phi_{0j}.\)
At this point the Duhamel formula is utilized,
\(e^{t\mathcal{L}} = e^{tQ\mathcal{L}} + \int_0^t e^{(t-s)\mathcal{L}}P\mathcal{L}e^{sQ\mathcal{L}}\,ds.\)
Inserting the Duhamel formula into Eq. 8, the generalized Langevin equation is obtained,
(10) \(\frac{\partial}{\partial t}e^{t\mathcal{L}}\phi_{0j} = e^{t\mathcal{L}}P\mathcal{L}\phi_{0j} + e^{tQ\mathcal{L}}Q\mathcal{L}\phi_{0j} + \int_0^t e^{(t-s)\mathcal{L}}P\mathcal{L}e^{sQ\mathcal{L}}Q\mathcal{L}\phi_{0j}\,ds.\)
Equation 10 is the Mori-Zwanzig identity. The system described in Eq. 10 is exact and is an alternative way of expressing the original system. Equation 10 makes a profound statement: coarse-graining leads to memory effects. Note that the convolution integral represents numerical memory, as opposed to physical memory. One can expect the timescales of this numerical memory to depend both on the physics of the problem at hand and on the level of coarse-graining.
For notational purposes, define
(11) \(F_j(\phi_0, t) = e^{tQ\mathcal{L}}Q\mathcal{L}\phi_{0j}.\)
By definition, \(F_j(\phi_0, t)\) satisfies
(12) \(\frac{\partial}{\partial t}F_j(\phi_0, t) = Q\mathcal{L}F_j(\phi_0, t),\)
where \(F_j(\phi_0, 0) = Q\mathcal{L}\phi_{0j}\). Equation 12 is referred to as the orthogonal dynamics equation. It can be shown that solutions to the orthogonal dynamics equation live in the null space of \(P\) for all time, meaning \(PF_j(\phi_0, t) = 0\). Using the notation in Eq. 11, Eq. 10 can be written as
(13) \(\frac{\partial}{\partial t}e^{t\mathcal{L}}\phi_{0j} = e^{t\mathcal{L}}P\mathcal{L}\phi_{0j} + F_j(\phi_0, t) + \int_0^t e^{(t-s)\mathcal{L}}P\mathcal{L}F_j(\phi_0, s)\,ds.\)
A simplification comes from projecting Eq. 13 to eliminate the dependence on the noise term,
(14) \(\frac{\partial}{\partial t}Pe^{t\mathcal{L}}\phi_{0j} = Pe^{t\mathcal{L}}P\mathcal{L}\phi_{0j} + P\int_0^t e^{(t-s)\mathcal{L}}P\mathcal{L}F_j(\phi_0, s)\,ds.\)
Eq. 14 provides a set of equations for the projected resolved variables, the best possible approximation for \(\phi_j\) on \(\hat{L}^2\) given knowledge of the initial density of the unresolved variables. The evaluation of the memory kernel is, in general, not tractable, as it requires the solution of the orthogonal dynamics equation. It does, however, provide a starting point to derive closure models. Additionally, note that the final projection does not necessarily imply a reduction in computational complexity. Although Eq. 14 has no dependence on the noise term, one is left with the challenge of projecting the potentially nonlinear Markovian term, since for general nonlinear functions the projection operator does not commute with the semigroup.
A The Orthogonal Dynamics Equation and the Memory Kernel
The challenge of the MZ procedure for model order reduction is to construct an accurate, computationally tractable approximation to the memory integral. The construction of such an approximation requires an understanding of the form of the integrand and the underlying mechanics of the memory kernel. In its primitive form, the orthogonal dynamics equation (Eq. 12) is a partial differential equation of dimension \(N\) (with \(n\) being the number of resolved variables and \(N - n\) being the number of unresolved variables). Pursuing solutions to the orthogonal dynamics in this form is not tractable. Recall, however, that the generalized Langevin equation itself is a PDE of dimension \(N\), but its solution may be obtained by solving a set of ordinary differential equations. It can be shown that, when the original dynamical system is linear, solutions to the orthogonal dynamics equation can be obtained by solving a corresponding set of auxiliary ordinary differential equations. This methodology was suggested in ParishAIAA2016 () as a procedure to approximate the orthogonal dynamics in nonlinear systems and was further developed and investigated in detail in GouasmiMZ1 (). To gain insight into the orthogonal dynamics, we consider a simple linear system.
Consider a linear system governed by
(15) \(\frac{d\phi}{dt} = A\phi,\)
with \(\phi = \{\hat{\phi}, \tilde{\phi}\}\). The linear system can be split into resolved and unresolved components,
(16) \(\frac{d}{dt}\begin{bmatrix}\hat{\phi}\\ \tilde{\phi}\end{bmatrix} = \begin{bmatrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{bmatrix}\begin{bmatrix}\hat{\phi}\\ \tilde{\phi}\end{bmatrix}.\)
It can be shown that solutions to the orthogonal dynamics equation can be obtained by solving the auxiliary system
(17) 
with initial conditions consistent with the orthogonal dynamics. Assuming the auxiliary system matrix to be diagonalizable, the linear system has the solution
(18) \(\psi(t) = Ve^{\Lambda t}V^{-1}\psi(0),\)
where \(\Lambda\) and \(V\) contain the eigenvalues and eigenvectors of the auxiliary matrix. The memory is then given by
(19) 
For cases where the eigenvectors and eigenvalues can be obtained analytically, or are independent of the initial conditions, the memory kernel can be directly evaluated.
Significant insight is gained from Eq. 18. It is seen that the convolution memory integrand contains exponential terms of the form \(e^{\lambda_i (t-s)}\). When the eigenvalues are negative (a general characteristic of stable systems), the exponential operator leads to the integrand having finite support. The timescale of this support is proportional to the inverse of the eigenvalues. To explain this, a pictorial representation of the evaluation of a simple convolution integral is given in Figure 1. The figure shows the graphical evaluation of the convolution of an exponentially decaying kernel with a smooth function. It is seen that the exponential operator limits the support of the integrand. The time scale of this support is related to the argument of the exponential operator, which is related to the eigenvalues of the auxiliary system.
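The finite support can be demonstrated directly. In the sketch below (the kernel rate and the integrand \(f(s) = \cos s\) are illustrative choices), the convolution over \([0, t]\) is compared with its truncation to a window whose width is a few multiples of the inverse decay rate:

```python
import numpy as np

# Convolution with an exponentially decaying kernel:
#   I(t) = integral_0^t exp(-alpha*(t-s)) * f(s) ds,  f(s) = cos(s)
alpha, t = 5.0, 10.0
s = np.linspace(0.0, t, 20001)
ds = s[1] - s[0]
g = np.exp(-alpha * (t - s)) * np.cos(s)

def trap(y, h):
    # simple trapezoidal quadrature
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

full = trap(g, ds)

# Truncate the memory to the window [t - tau, t], tau ~ 4 / |eigenvalue|
tau = 4.0 / alpha
win = s >= t - tau
trunc = trap(g[win], ds)

# the neglected tail is O(exp(-alpha*tau)) relative to the full integral
rel_err = abs(full - trunc) / abs(full)
```

Because the kernel decays like \(e^{-\alpha(t-s)}\), the relative truncation error scales as \(e^{-\alpha\tau}\), which motivates tying the memory length to the eigenvalues of the auxiliary system.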
B Modeling of the Memory Kernel
To obtain a reduction in complexity, a surrogate to the memory must be devised. The t-model is perhaps the simplest approximation to the memory term and can be obtained by expanding the memory integrand in a Taylor series about \(s = 0\) and retaining only the leading term,
(20) \(\int_0^t e^{(t-s)\mathcal{L}}P\mathcal{L}F_j(\phi_0, s)\,ds \approx t\,e^{t\mathcal{L}}P\mathcal{L}Q\mathcal{L}\phi_{0j}.\)
Various interpretations of the t-model exist, the most common of which is that the memory is infinitely long. The t-model can be expected to be accurate when minimal flow is invoked by the orthogonal dynamics, i.e. \(e^{tQ\mathcal{L}} \approx e^{t\mathcal{L}}\). Its accuracy additionally requires that the flow invoked by the true dynamics is slow to evolve in time. Despite its simplicity, the t-model has been successfully applied to Burgers equation and as a subgrid model in Large Eddy Simulations ChandyFrankelLES ().
The insight gained from Section A suggests that the memory integrand has finite support. Based on this, a more general class of models that assume a finite support of the memory integrand is now considered. The models derived here were first considered by Stinis stinisHighOrderEuler (); stinis_finitememory (). For notational purposes, define
(21) \(w_j(t) = \int_{t-\tau}^{t} e^{s\mathcal{L}}P\mathcal{L}e^{(t-s)Q\mathcal{L}}Q\mathcal{L}\phi_{0j}\,ds,\)
where \(\tau\) is the memory length. Note the change of variables \(s \to t - s\) relative to Eq. 10. For clarity of presentation, the dependence of \(w_j\) on the initial conditions and time will be implicitly assumed throughout the rest of this section. Setting \(\tau = t\) (in which case Eq. 21 is simply the memory term) and differentiating Eq. 21 with respect to time yields
(22) \(\frac{dw_j}{dt} = e^{t\mathcal{L}}P\mathcal{L}Q\mathcal{L}\phi_{0j} + \int_0^t e^{s\mathcal{L}}P\mathcal{L}\,Q\mathcal{L}\,e^{(t-s)Q\mathcal{L}}Q\mathcal{L}\phi_{0j}\,ds.\)
Note that the first term on the right hand side does not require the solution of the orthogonal dynamics equation. The second term on the right hand side is dependent on the orthogonal dynamics. This dependence can be eliminated by using a discrete integration scheme to express the memory integral. In the case of the trapezoidal rule
In order to handle the case where the memory length is not necessarily small, the memory integral in Eq. 21 is partitioned into subintervals,
(23) 
where \(0 = \tau_0 < \tau_1 < \cdots < \tau_M = \tau\). Define the corresponding sub-integrals \(w_{j,i}\) over each subinterval,
for \(i = 1, \ldots, M\). Applying the trapezoidal integration scheme to each subinterval yields the general form
(24) 
The right hand side of Eq. 24 is now closed with the exception of the new memory term. Assume that this memory term likewise has finite support. A differential equation for it can be developed by again differentiating the convolution integral in Eq. 24 with respect to time and using the trapezoidal rule. The differentiation process can be continued to build an infinite hierarchy of Markovian equations. The general form obtained is
(25) 
The infinite hierarchy of equations must be truncated at some point. This can be done by modeling the highest-order memory term or, more simply, by neglecting it. Neglecting this term can be justified if the support or magnitude of the integrand decreases with the repeated application of \(Q\mathcal{L}\). The derivation above can be carried out using higher-order quadrature stinis_finitememory (). In this work, models with a constant memory length and one subinterval are considered. In this case the models simplify to
(26) 
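The mechanics of evolving a windowed memory integral with auxiliary Markovian equations can be illustrated on the linear example of Section II (coefficients are illustrative). For an exponential kernel, the windowed integral obeys an exact delay ODE; this plays the role of the finite memory hierarchy here, in place of the trapezoidal closure used in the actual models:

```python
import numpy as np
from collections import deque

# Windowed memory integral for the linear example:
#   w(t) = integral_{t-tau}^{t} exp(a22*(t-s)) * a21 * x(s) ds
# satisfies dw/dt = a22*w + a21*x(t) - exp(a22*tau)*a21*x(t-tau),
# so the memory is evolved by an auxiliary ODE plus a short history buffer.
a11, a12, a21, a22 = -1.0, 0.5, 0.3, -2.0   # illustrative coefficients
x0 = 1.0
dt, n, tau = 1e-3, 5000, 3.0
lag = int(round(tau / dt))

# Reference: full two-state system with y(0) = 0, so y equals the
# untruncated memory integral
xf, yf = x0, 0.0
for _ in range(n):
    xf, yf = (xf + dt * (a11 * xf + a12 * yf),
              yf + dt * (a21 * xf + a22 * yf))

# Reduced model: Markovian evolution of x and the windowed memory w
hist = deque([x0], maxlen=lag + 1)   # rolling history of x over the window
x, w = x0, 0.0
for _ in range(n):
    x_lag = hist[0] if len(hist) == lag + 1 else 0.0  # x(t - tau); zero before t = tau
    dw = a22 * w + a21 * x - np.exp(a22 * tau) * a21 * x_lag
    x, w = x + dt * (a11 * x + a12 * w), w + dt * dw
    hist.append(x)
```

Only a history of length \(\tau\) is stored, rather than the full trajectory, and the truncation error is \(O(e^{a_{22}\tau})\), negligible here.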
III Application to Burgers Equation
The viscous Burgers equation (VBE) is a one-dimensional equation that serves as a toy model for turbulence. The VBE has been well studied and is a canonical problem on which to test the performance of subgrid models. It should be noted that solutions of the VBE are not chaotic, a property that is one of the defining features of turbulence. The VBE in Fourier space is given by
(27) \(\frac{\partial u_k}{\partial t} = -\frac{ik}{2}\sum_{p+q=k} u_p u_q - \nu k^2 u_k,\)
with \(k \in F \cup G\). The Fourier modes are contained within the union of two sets, \(F\) and \(G\). In the construction of the reduced-order model, the resolved modes are \(\{u_k : k \in F\}\) and the unresolved modes are \(\{u_k : k \in G\}\). Partitioning Eq. 27 into the resolved and unresolved sets, the evolution equation for the resolved variables is written as
(28) \(\frac{\partial u_k}{\partial t} = -\frac{ik}{2}\sum_{\substack{p+q=k\\ p, q \in F}} u_p u_q - \nu k^2 u_k - \frac{ik}{2}\sum_{\substack{p+q=k\\ p \notin F \ \mathrm{or}\ q \notin F}} u_p u_q, \qquad k \in F.\)
Eq. 28 is equivalent to the LES form of the VBE with a sharp spectral cutoff filter. Note that the subgrid stress in Fourier space is written as a truncated convolution involving the unresolved modes.
The last term on the RHS of Eq. 28 represents the effect of the unresolved scales on the resolved scales and must be modeled.
Traditional subgrid models make an isotropic eddy viscosity approximation, where the subgrid stress is assumed to be linearly proportional to the strain rate of the resolved scales. In such models the subgrid stress is modeled in the physical domain by
\(\tau_{ij} = -2\nu_t \bar{S}_{ij},\)
with \(\bar{S}_{ij}\) being the resolved strain-rate tensor. The eddy viscosity \(\nu_t\) is determined from the filtered flow field. The Smagorinsky Smagorinsky () model is perhaps the most notable subgrid model and uses
\(\nu_t = (C_s \Delta)^2 |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\bar{S}_{ij}\bar{S}_{ij}}.\)
The static and dynamic Smagorinsky models will be used as a reference to compare the Mori-Zwanzig-based closures.
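As a concrete reference point, the following is a minimal sketch of the static Smagorinsky eddy viscosity on a periodic grid with spectral derivatives; the grid size, random test field, and the value \(C_s = 0.17\) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Static Smagorinsky eddy viscosity: nu_t = (Cs * Delta)^2 * |S|,
# |S| = sqrt(2 S_ij S_ij), on a triply periodic grid.
N, Cs = 16, 0.17                      # illustrative resolution and constant
Delta = 2.0 * np.pi / N               # filter width tied to the grid
k = np.fft.fftfreq(N, 1.0 / N)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K = [KX, KY, KZ]

rng = np.random.default_rng(1)
u = [rng.standard_normal((N, N, N)) for _ in range(3)]  # placeholder field

def ddx(f, j):
    # spectral derivative of a real field along direction j
    return np.fft.ifftn(1j * K[j] * np.fft.fftn(f)).real

# resolved strain rate S_ij = 0.5 * (du_i/dx_j + du_j/dx_i)
S2 = np.zeros((N, N, N))
for i in range(3):
    for j in range(3):
        Sij = 0.5 * (ddx(u[i], j) + ddx(u[j], i))
        S2 += 2.0 * Sij ** 2
nu_t = (Cs * Delta) ** 2 * np.sqrt(S2)
```

The dynamic variant replaces the fixed \(C_s\) with a coefficient computed from a test filter, which is omitted in this sketch.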
A Construction of Mori-Zwanzig Models
Mori-Zwanzig closures based on the expectation projection are considered. Let \(g \in L^2\). The projection of \(g\) onto \(\hat{L}^2\) is given by
(29) \(Pg = E[g \mid \hat{\phi}_0].\)
The density of the initial conditions is assumed to consist of independent Gaussian distributions in the zero-variance limit, as in Ref. 11. For this density, the expectation projection sets all unresolved modes to zero, i.e.
\(Pg(\hat{\phi}_0, \tilde{\phi}_0) = g(\hat{\phi}_0, 0).\)
The high-fidelity model is taken to have support \(k \in F \cup G\), while the reduced-order model has support only for \(k \in F\). Evaluating Eq. 14 for the VBE and casually commuting the nonlinear Markovian term yields
(30) 
Evaluation of the convolution integral in Eq. 30 is not tractable, and it is estimated with the models previously discussed. For the VBE, finite memory models up to third order are considered, as well as the t-model. One subinterval is used in the trapezoidal approximation. For the VBE the first-order term is found to be
(31) 
The form of the required terms in the higher-order models is given in the Appendix. Throughout the remainder of this manuscript, the finite memory model based on the first-order expansion (Eq. 31) will be referred to as the first-order finite memory model, FM1. Similarly, the second- and third-order expansions will be referred to as FM2 and FM3.
B Numerical Implementation
The VBE is solved numerically using a Fourier-Galerkin spectral method. The FFT calculations are padded by the 3/2 rule. An explicit low-storage fourth-order Runge-Kutta method is used for time integration.
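To make the 3/2-rule dealiasing concrete, the following is a minimal sketch of a dealiased evaluation of the Burgers nonlinear term \(-\partial_x(u^2/2)\) in Fourier space; the grid size and the test field \(u = \sin x\) are illustrative:

```python
import numpy as np

# Dealiased (3/2-rule) evaluation of the Burgers nonlinear term
N = 32
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(x)                         # illustrative test field
uhat = np.fft.fft(u)
k = np.fft.fftfreq(N, 1.0 / N)        # integer wavenumbers

# pad the spectrum to M = 3N/2 before going to physical space
M = 3 * N // 2
upad = np.zeros(M, dtype=complex)
upad[:N // 2] = uhat[:N // 2]
upad[-N // 2:] = uhat[-N // 2:]
u_phys = np.fft.ifft(upad) * (M / N)  # rescale for the longer grid

# form the product on the fine grid, transform back, truncate and rescale
sq = np.fft.fft(u_phys ** 2) * (N / M)
sqhat = np.concatenate([sq[:N // 2], sq[-N // 2:]])
nonlinear = -0.5j * k * sqhat         # -d/dx(u^2/2) in Fourier space
```

For \(u = \sin x\), the result transforms back to \(-\tfrac{1}{2}\sin 2x\) exactly, and the padding removes the aliasing errors that would otherwise contaminate the highest resolved wavenumbers for broadband fields.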
Numerical simulations of the VBE are performed with the initial condition ZJWangBurgers ()
(32) 
where \(E(k)\) follows the prescribed spectrum for \(k \le k_c\) and vanishes for \(k > k_c\). Eq. 32 initializes an energy spectrum with a \(-5/3\) slope at the low wave numbers. The phase angle is a random number in the domain \([-\pi, \pi]\). A constant seed value is used in all of the simulations. No energy is added to the flow above the cutoff frequency \(k_c\), such that the LES simulations are initially fully resolved.
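A minimal sketch of such an initialization follows; the normalization, resolution, cutoff, and seed are illustrative, not the paper's exact definition:

```python
import numpy as np

# Initial condition sketch: E(k) ~ k^(-5/3) for k <= kc, zero above,
# with uniformly random phases and a fixed seed
rng = np.random.default_rng(0)        # constant seed
N, kc = 128, 16                       # illustrative resolution and cutoff
k = np.arange(1, N // 2)
E = np.where(k <= kc, k ** (-5.0 / 3.0), 0.0)
phase = rng.uniform(-np.pi, np.pi, k.size)

uhat = np.zeros(N, dtype=complex)
uhat[1:N // 2] = np.sqrt(2.0 * E) * np.exp(1j * phase)
uhat[-1:-N // 2:-1] = np.conj(uhat[1:N // 2])  # Hermitian symmetry -> real u
u = np.fft.ifft(uhat * N).real
```

Enforcing Hermitian symmetry on the negative wavenumbers guarantees a real-valued physical field, and zeroing the spectrum above \(k_c\) leaves the coarse simulations fully resolved at \(t = 0\).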
1 Selection of the Memory Length
To determine the memory length, one would ideally like to directly compute the memory kernel. Computing the memory kernel involves solving the orthogonal dynamics equation and is not directly tractable. In this work, an intuitive approach is considered. In Section A it was demonstrated that, in the linear case, the memory length could be related to the eigenvalues of the auxiliary dynamical system that solves the orthogonal dynamics. Since Burgers equation does not exhibit scale separation, a logical hypothesis is that a mean time scale can be related to the spectral radius of the Jacobian of the resolved variables
To provide evidence for this argument, a parametric study was performed involving 60 cases. The simulations were initialized with Eq. 32 and operated over a range of Reynolds numbers and resolutions; the cases considered were permutations of several viscosities, resolutions, and initial-condition scalings. The DNS simulations were carried out using 4096 resolved modes. For each case, the time constant \(\tau\) in the first-order model was found by solving an inverse problem. For simplicity, \(\tau\) is taken to be constant in time. The optimal time constant in the least squares sense was found by minimizing the difference of the total kinetic energy dissipation rate between the reduced-order model solution and a high-resolution DNS solution. The minimization used data at discrete time steps spaced by fixed intervals. The discrete penalty function is given by
where the sum is taken over the sampled time instances. The penalty function was minimized using SciPy's optimization suite; a downhill simplex algorithm was used. It is noted that the inferred results were similar to those obtained with a penalty function that minimized the difference in total kinetic energy. Figure 2 shows the inferred time constants plotted against the temporal mean of the spectral radius of the Jacobian of the resolved variables. Several groups of cases are seen to collapse to the same line; the remaining cases also collapse to a line, but with a slightly greater slope. Given the wide range of cases considered, the collapse of the data is quite good. This result suggests that the properties of the Jacobian of the resolved variables can be used as a good indicator of the memory length. A fit of the above data yields the scaling used in the remainder of this work. A more rigorous fitting process and the effect of uncertainty in the time constant are provided in Appendix B.
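The inference procedure can be illustrated on a toy problem: recovering a decay constant by minimizing a discrete least-squares penalty with SciPy's downhill simplex (`scipy.optimize.fmin`, the Nelder-Mead routine). The exponential "truth" signal here is illustrative, standing in for the DNS dissipation-rate data:

```python
import numpy as np
from scipy.optimize import fmin

# Toy inverse problem: recover a time constant from sampled "truth" data
t = np.linspace(0.0, 5.0, 50)     # discrete sampling times
tau_true = 0.8                    # illustrative target value
data = np.exp(-t / tau_true)      # stand-in for DNS dissipation-rate samples

def penalty(p):
    # discrete least-squares penalty between model and truth
    model = np.exp(-t / p[0])
    return np.sum((model - data) ** 2)

# downhill simplex (Nelder-Mead) minimization, as in the paper's study
tau_opt = fmin(penalty, [0.3], disp=False)[0]
```

The simplex method needs no gradients, which suits penalties evaluated through a full reduced-order simulation.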
Before proceeding further, we make several comments. First, the results of this section should be viewed as evidence, rather than proof, that a finite memory approximation is appropriate for the Burgers equation. The first-order model has errors that are due to the trapezoidal integration as well as to neglecting the higher-order memory term. We note that the inferred memory length should not be viewed as a precisely defined physical parameter. Nonetheless, the memory length has a physical meaning, and the inferred results are consistent with that assertion: an increased Reynolds number leads to a decreased time scale, while a decrease in numerical resolution leads to an increase in time scale. This makes the time constant different from a heuristic tuning constant.
C Numerical Results
Direct numerical simulations of the VBE were performed on a spatially periodic domain. The initial condition given in Eq. 32 is used. LES is performed with the SGS models described above. The LES simulations are initialized with the DNS solution such that the initial condition of the LES is fully resolved. The simulations were performed with 32 resolved modes. The memory length was selected by the procedure described in the previous section. A formal estimation procedure was not used for the memory lengths of the higher-order models, which were simply chosen heuristically. A summary of the relevant computational details is given in Table 1.
DNS  Smagorinsky  t-model  FM1  FM2  FM3
1e-4  1e-3  1e-3  1e-3  1e-3  1e-3
Constants  NA
Figures 3 and 4 compare the Mori-Zwanzig-based models to the filtered DNS data, the Smagorinsky model, and a simulation run on a 32-point mesh without any subgrid model. Figure 3(a) shows the temporal evolution of the total kinetic energy and the rate of kinetic energy decay. Figure 3(b) shows the temporal evolution of the mean magnitude of the subgrid term and the energy spectrum at the final time. Figure 4 shows the trajectories of the 8th (Fig. 4(a)) and 15th (Fig. 4(b)) resolved modes in the complex plane. A brief discussion of the results of each simulation will now be provided.
The simulation performed without an SGS model severely underpredicts the rate of energy decay at early time, leading to an overprediction of the total kinetic energy. As expected, the simulation underpredicts the dissipation rate. With no subgrid mechanism present to remove energy from high wave numbers, a pile-up of energy is seen at high \(k\), as evidenced in Figure 3(b). This phenomenon indicates that the simulation is under-resolved and a subgrid model is indeed required. The evolution of the individual modes contains significant error, particularly around the cutoff frequency. The evolution of the 15th mode, as shown in Figure 4(b), is an excellent example of the error present at high wave numbers.
The Smagorinsky model offers improvements. The simulation utilizing the basic SGS model provides decent predictions for both the total kinetic energy and the dissipation of kinetic energy. The energy spectrum at the final time and the trajectories of the individual modes are additionally much improved. However, the Smagorinsky model is unable to differentiate the resolved scales from the unresolved scales. Despite the field being completely resolved at \(t = 0\), the Smagorinsky model predicts that the subgrid content is maximum at \(t = 0\). The resulting predictions for the subgrid term are both quantitatively and qualitatively incorrect. In particular, the individual trajectories of the modeled subgrid term show no similarity to those of the DNS. It is recognized that the Smagorinsky model was developed for homogeneous turbulent flows in the physical domain, so a critical evaluation of the model on the Burgers equation is not completely appropriate.
Simulations performed using the t-model provide improved predictions. The largest error is present in the prediction of the subgrid term, where it is seen that the t-model slightly overpredicts the subgrid content, especially at late times, but the predictions are still qualitatively correct. The model correctly predicts the initial peak in the subgrid content and qualitatively shows the presence of the second peak. The trajectories of the individual modes are improved, but become less accurate at late time. The prediction of the energy spectrum at the final time is not noticeably better than that of the Smagorinsky model. The explicit presence of \(t\) in the model leads to substantial error at large time. The performance of the t-model for the VBE nonetheless shows the merit of the MZ-based models: to reiterate, the t-model contains no heuristics or coefficients. Work by Stinis RenormalizedMZ () and our own numerical experiments show that renormalization of the t-model can lead to more accurate results.
The finite memory models provide relatively accurate predictions for all quantities. The evolution of the total kinetic energy, the dissipation of kinetic energy, and the mean subgrid predictions are in good agreement with the DNS. The first-order finite memory model accurately predicts the instantaneous energy spectrum at low wave numbers, while the second- and third-order models provide accurate predictions at all wave numbers. The trajectories of the individual modes are close to those of the DNS, as are the trajectories of the subgrid terms.




Results of the first-order finite memory model are shown in Figure 5 for two additional cases, run at differing resolutions, viscosities, and initial-condition scalings. The time constants were again selected by the scaling of the spectral radius of the Jacobian. Both of these cases are significantly under-resolved. The low-viscosity case in particular contains numerous shocks and required 4096 resolved modes for the DNS calculation.
IV Application to the Triply Periodic Navier-Stokes Equations
Coarse-grained simulations of the Navier-Stokes equations are now considered. The incompressible Navier-Stokes equations in Fourier space are given by
(33) $\left(\frac{\partial}{\partial t} + \nu |\mathbf{k}|^{2}\right) u_{i}(\mathbf{k},t) = -\frac{i}{2} P_{imn}(\mathbf{k}) \sum_{\mathbf{p}+\mathbf{q}=\mathbf{k}} u_{m}(\mathbf{p},t)\, u_{n}(\mathbf{q},t), \qquad P_{imn}(\mathbf{k}) = k_{m}\left(\delta_{in} - \frac{k_{i}k_{n}}{|\mathbf{k}|^{2}}\right) + k_{n}\left(\delta_{im} - \frac{k_{i}k_{m}}{|\mathbf{k}|^{2}}\right),$
where the Fourier modes again belong to the union of two sets. Separating the modes into the resolved and unresolved sets yields,
(34) 
where are the resolved modes and are the unresolved modes. Note that Eq. 34 comprises the LES equations one obtains by applying a sharp spectral cutoff filter to the Navier-Stokes equations. The modes in are the unresolved modes that are filtered out, while the modes in are retained. The objective is to solve for the modes in as accurately as possible. The subgrid stress is written as
Note that, in Fourier space, the pressure term appears as a projection. This projection leads to additional nonlinear interactions between the resolved and unresolved scales.
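The sharp spectral cutoff filtering described above can be sketched in one dimension as follows; the sample field and cutoff wave number are arbitrary illustrative choices.

```python
import numpy as np

def sharp_cutoff(u, k_c):
    """Apply a sharp spectral cutoff filter: zero all modes with |k| > k_c."""
    u_hat = np.fft.fft(u)
    k = np.fft.fftfreq(u.size, d=1.0 / u.size)  # integer wavenumbers
    u_hat[np.abs(k) > k_c] = 0.0
    return np.real(np.fft.ifft(u_hat))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(10 * x)   # resolved content + "subgrid" content
u_bar = sharp_cutoff(u, k_c=4)         # retains only modes with |k| <= 4
```

With this filter the retained modes are unchanged and the discarded modes are removed exactly, which is what makes the split in Eq. 34 exact rather than approximate.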
A Construction of the Mori-Zwanzig Models
For the incompressible Navier-Stokes equations, the t-model and the first order finite memory model are considered. The expectation projection and Gaussian density in the zero variance limit are again used. Formally commuting the nonlinear Markovian term, the projected Mori-Zwanzig identity reads
(35) 
The evaluation of Eq. 35 is made tractable by approximating the memory integral. Here, only first order models are considered, which require the evaluation of . After considerable algebra, it can be shown that
(36) 
where
Eq. 36, along with Eqns. 20 and 22, can be used to write the equations for the t-model and the first order finite memory model. Note that the terms are simply the right hand side of the filtered equations without any subgrid model, and so are already computed. Additionally, when expanded, several terms in can be combined for a faster evaluation.
B Numerical Implementation
The Navier-Stokes equations are solved using a Galerkin spectral method with an explicit low storage RK4 time integration scheme. The main solver is written in Python and interfaces with the FFTW discrete Fourier transform library FFTW () via the pyFFTW wrapper. FFT calculations are padded by the 3/2 rule. For the Mori-Zwanzig models, a 2x padding is used such that and . Convolutions of the form
which have a support of , are dealiased by construction stinisHighOrderEuler ().
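The alias-free evaluation of such quadratic convolutions by zero padding can be sketched in a 1D analogue with 2x padding; `conv_fft` and `conv_direct` are hypothetical helper names, and the brute-force version is included only to check the padded result.

```python
import numpy as np

def conv_fft(a_hat, b_hat, pad=2):
    """sum_{p+q=k} a_p b_q for N Fourier modes via zero-padded FFTs.
    With pad=2 the retained modes are alias-free by construction."""
    N = a_hat.size
    M = pad * N
    A = np.zeros(M, dtype=complex)
    B = np.zeros(M, dtype=complex)
    A[:N // 2] = a_hat[:N // 2]; A[-(N // 2):] = a_hat[-(N // 2):]
    B[:N // 2] = b_hat[:N // 2]; B[-(N // 2):] = b_hat[-(N // 2):]
    C = M * np.fft.fft(np.fft.ifft(A) * np.fft.ifft(B))
    c_hat = np.empty(N, dtype=complex)
    c_hat[:N // 2] = C[:N // 2]; c_hat[-(N // 2):] = C[-(N // 2):]
    return c_hat

def conv_direct(a_hat, b_hat):
    """Brute-force convolution over integer wavenumbers (reference)."""
    N = a_hat.size
    ks = np.fft.fftfreq(N, 1.0 / N).astype(int)   # fft ordering of modes
    c = {int(k): 0.0 + 0.0j for k in ks}
    for i, p in enumerate(ks):
        for j, q in enumerate(ks):
            if int(p + q) in c:
                c[int(p + q)] += a_hat[i] * b_hat[j]
    return np.array([c[int(k)] for k in ks])

rng = np.random.default_rng(0)
N = 8
a_hat = rng.standard_normal(N) + 1j * rng.standard_normal(N)
b_hat = rng.standard_normal(N) + 1j * rng.standard_normal(N)
```

Since the products of retained modes reach wavenumbers of magnitude up to twice the cutoff, doubling the transform size prevents any aliased contribution from folding back onto a retained mode.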
C Homogeneous Isotropic Turbulence
The simulation of decaying Homogeneous Isotropic Turbulence (HIT) is considered. HIT has long served as a baseline case for the development of subgrid models. In this study, the spectrum used by Rogallo Rogallo () is used for initialization. The velocity field is given by
(37) $u_{i}(\mathbf{k}) = \alpha(\mathbf{k})\, e_{i}^{1}(\mathbf{k}) + \beta(\mathbf{k})\, e_{i}^{2}(\mathbf{k}),$
where and are mutually orthogonal unit vectors in the plane orthogonal to the wave vector. The initial spectrum is taken to be
where is the wave number at which the energy spectrum is maximum, is a parameter set to 4, and . DNS data from a simulation initialized with Eq. 37 are used as the initial condition for the LES simulations. The Taylor microscale-based Reynolds number of the filtered field is . The LES simulations are performed using resolved modes with a time step of . The MZ-based models are compared to filtered DNS data, the dynamic and static Smagorinsky models, and an LES with no subgrid model. An alternate heuristic for selecting the memory length is to scale the time step in the LES by the ratio of the grid size to the estimated Kolmogorov scale. The relevant computational details are given in Table 2.
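A Rogallo-type initial spectrum of the general shape described above, peaked at a prescribed wave number, can be sketched as follows; the specific expression, peak wave number, and amplitude are assumptions for illustration and may differ from the exact form used in this work.

```python
import numpy as np

def initial_spectrum(k, k_p=5.0, sigma=4.0, a=1.0):
    """One common Rogallo-type form (assumed, not necessarily the paper's
    exact expression): E(k) = a (k/k_p)^sigma exp(-sigma/2 (k/k_p)^2).
    The spectrum attains its maximum at k = k_p."""
    return a * (k / k_p)**sigma * np.exp(-0.5 * sigma * (k / k_p)**2)

k = np.linspace(0.1, 30.0, 600)
E = initial_spectrum(k)          # peaked at k_p = 5 by construction
```

Setting the exponent parameter to 4, as in the text, gives the low-wave-number scaling E(k) ~ k^4 while the exponential confines the energy near the peak.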
HIT  DNS  Smagorinsky  t-model  Finite Memory
Δt  0.005  0.005  0.005  0.005
Constants  NA  NA
Figure 6 shows the energy, dissipation, resolved transfer, and subgrid transfer spectra at . Results are compared to filtered DNS data. The resolved transfer spectrum is computed by
The subgrid energy transfer is extracted from the DNS data by
For the large eddy simulations, the subgrid energy transfer is computed by
The simulation run with no subgrid model exhibits a pile-up of energy at high wave numbers, indicating that the simulation is under-resolved. Both the static and dynamic Smagorinsky models provide good predictions for the energy, dissipation, and resolved transfer spectra. The subgrid contribution to the energy transfer is qualitatively correct for both models, but error is present. The t-model performs well at low wave numbers, but underpredicts the energy content at high wave numbers. The performance of the finite memory model is comparable to that of the dynamic Smagorinsky model, providing good predictions at all wave numbers. In particular, the finite memory model provides excellent predictions for the subgrid contribution to the transfer term.
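The shell-averaged spectra discussed above can be illustrated with a minimal binning sketch; the helper name and normalization are illustrative choices, and the exact transfer-spectrum formulas are not reproduced here.

```python
import numpy as np

def energy_spectrum(u_hat, v_hat, w_hat):
    """Shell-averaged energy spectrum from normalized Fourier modes:
    bin 0.5 |u_hat|^2 into integer-radius wavenumber shells."""
    N = u_hat.shape[0]
    k1 = np.fft.fftfreq(N, 1.0 / N)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    shells = np.rint(kmag).astype(int)              # nearest-integer shells
    e_density = 0.5 * (np.abs(u_hat)**2 + np.abs(v_hat)**2 + np.abs(w_hat)**2)
    return np.bincount(shells.ravel(), weights=e_density.ravel())

N = 16
rng = np.random.default_rng(1)
u = rng.standard_normal((N, N, N))
u_hat = np.fft.fftn(u) / N**3     # normalize so Parseval gives mean-square
E = energy_spectrum(u_hat, u_hat, u_hat)
```

By Parseval's theorem the spectrum integrates (sums) to the total resolved kinetic energy, which provides a simple consistency check on the binning.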
For the HIT case, it is perhaps not prudent to conclude that the MZ-based models perform significantly better than the Smagorinsky models. The derivation of the Smagorinsky models, however, proceeds from equilibrium assumptions that are most relevant precisely in homogeneous isotropic turbulence, along with the implicit assumption that the simulation resolves into the inertial subrange. The MZ models, on the other hand, do not rely on any assumptions about the state of the flow. This generality allows the models to be used in a variety of flow regimes, including those where the use of the Smagorinsky model is inappropriate. This robustness is evident in the next example, the Taylor-Green vortex.
D Taylor-Green Vortex
The Taylor-Green Vortex (TGV) is a canonical problem with deterministic initial conditions that is often used to study the accuracy of computational methods. The flow is characterized by the breakdown of organized coherent structures into fine-scale features. The initial conditions are
in a periodic domain . The Reynolds number of the flow is given by the inverse of the viscosity. Traditional LES subgrid models do not perform well on the TGV since they are generally designed for fully developed turbulent flows. Two cases are considered, one at and the other at . All comparisons are made to filtered DNS quantities. Tables 3 and 4 summarize the relevant simulation details. Note that the t-model required a smaller time step for stability.
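The standard Taylor-Green initial condition can be sketched as follows; unit amplitude and a [0, 2π)³ domain are assumed conventions, and the spectral divergence check is included only to verify that the field is solenoidal.

```python
import numpy as np

N = 32
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Standard Taylor-Green vortex initial condition (unit amplitude assumed)
u = np.cos(X) * np.sin(Y) * np.sin(Z)
v = -np.sin(X) * np.cos(Y) * np.sin(Z)
w = np.zeros_like(u)

# The field is divergence-free; check spectrally via i k . u_hat = 0
k = np.fft.fftfreq(N, 1.0 / N)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
div_hat = 1j * (KX * np.fft.fftn(u) + KY * np.fft.fftn(v) + KZ * np.fft.fftn(w))
```

The u and v contributions to the divergence cancel analytically, so the spectral residual is at round-off level, and the mean resolved kinetic energy of this field is exactly 1/8.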
Re=800  DNS  Smagorinsky  t-model  Finite Memory
Δt  0.005  0.02  0.005  0.02
Constants  NA  NA
Re=1600  DNS  Smagorinsky  t-model  Finite Memory
Δt  0.005  0.02  0.005  0.02
Constants  NA  NA
The results of the case are shown in Figure 7. The large eddy simulations are performed with resolved modes, while the DNS uses resolved modes. At this Reynolds number and resolution, the system is only slightly under-resolved, as evidenced by the reasonable performance of the simulation with no subgrid model. Figure 6(a) shows the temporal evolution of the kinetic energy integrated over the whole domain, as well as the temporal evolution of the kinetic energy dissipation rate (computed as ). The performance of the Smagorinsky model is poor; as previously discussed, this is because the model is designed for fully turbulent flows. The Smagorinsky model is unable to account for the fact that the solution is fully resolved at early time, and incorrectly removes energy from the resolved scales at . The Mori-Zwanzig models, particularly the finite memory model, perform notably better. The t-model recognizes that the simulation is fully resolved at early times, but is seen to remove too much energy from the resolved modes soon after. The finite memory model provides good predictions for the total energy and dissipation; in particular, it captures the double-peaked structure of the kinetic energy dissipation rate around . The energy and dissipation spectra of the simulations at and are shown in Figures 6(b) and 6(c). The finite memory model is again in good agreement with the DNS. The spectra predicted by the t-model are in good agreement with the DNS at early time, but the dissipative nature of the model is evident at .
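The dissipation-rate diagnostic used above, the negative time derivative of the resolved kinetic energy, can be illustrated on an analytic decay; the exponential energy history here is purely illustrative, not data from the simulations.

```python
import numpy as np

# Dissipation rate from a kinetic energy history: eps(t) = -dE/dt.
# Illustrated on the analytic decay E(t) = E0 exp(-2 nu t), for which
# the exact answer is eps = 2 nu E.
nu, E0 = 0.01, 1.0
t = np.linspace(0.0, 10.0, 1001)
E = E0 * np.exp(-2.0 * nu * t)
eps = -np.gradient(E, t)          # central differences in the interior
```

In practice E(t) comes from the LES or filtered DNS at discrete times, and the same finite-difference derivative is applied to the sampled history.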
The results of the case are shown in Figure 8. The LES is again performed with resolved modes, while the DNS used modes. The subgrid models in the coarse-grained simulations are more active at this higher Reynolds number. The finite memory model performs well: the temporal evolution of the total kinetic energy and the dissipation of kinetic energy are well characterized, showing that the model removes energy from the resolved scales at an appropriate rate. The predicted spectra are also in good agreement. The t-model and Smagorinsky models are again highly dissipative.
V Application to Fully Developed Channel Flow
An MZ-based finite memory model is now applied to turbulent channel flow. Unlike the previously considered cases, the channel flow is a statistically steady, non-decaying problem. For such problems, a finite memory assumption is critical for the application of MZ-based models. The construction of a first order finite memory MZ-based model for fully developed turbulent channel flow is now outlined. The flow is taken to be streamwise (x) and spanwise (z) periodic, and the model is constructed by coarse-graining in the periodic directions.
A Construction of the Finite Memory Mori-Zwanzig Model
Fourier transforming the incompressible Navier-Stokes equations in the and directions yields
(38) 
(39) 
where
Unlike in the triply periodic case, the continuity equation cannot be implicitly satisfied by a simple solution of the pressure Poisson equation in Fourier space. The solution of the pressure Poisson equation is complicated by the inhomogeneity in the direction and by the boundary conditions. This makes the derivation of MZ models difficult, as the framework is formulated for time-varying dynamical systems. However, the effect of the pressure projection on the subgrid scale models for triply periodic problems has been observed to be minimal. As such, the MZ models are formulated by neglecting the effects induced by coarse-graining the pressure. In this case one finds
(40) 
B Numerical Implementation
The Navier-Stokes equations are solved in skew-symmetric form via a Fourier-Chebyshev pseudospectral method. A coupled semi-implicit Adams-Bashforth scheme is used for time integration, as in MoinChannel1 (). The continuity equation is directly enforced at each time step, bypassing the need for pressure boundary conditions. The main solvers are written in Python and utilize mpi4py for parallelization. All FFT calculations (including the Chebyshev transforms) are dealiased by the 3/2 rule.
C Numerical Results
Large eddy simulations of channel flow are now discussed. The solutions are compared to those of the dynamic Smagorinsky model. Simulations at are considered. The LES simulations are evolved using resolved modes in the and directions, respectively. Simulation parameters are given in Table 5.
4  2  180  32  64  32  0.01 
Statistical properties of the LES solutions are compared to the DNS data of MoserChannel () in Figure 9. The MZ-based model is seen to offer improved predictions, similar to those produced by the dynamic Smagorinsky model. In particular, the mean velocity profiles are much improved, and the model correctly reduces the Reynolds stresses.
VI Conclusions and Perspectives
The Mori-Zwanzig formalism provides a mathematically consistent framework for model order reduction. Recasting a high-order dynamical system into the generalized Langevin equation (GLE) provides both a starting point for the development of coarse-grained models and insight into the effects of coarse-graining. Utilizing insight gained from solutions of the orthogonal dynamics equation for linear dynamical systems, a class of models based on the assumption that the memory convolution integral in the GLE has a finite, time-dependent support (Stinis, 2012) was presented. The appeal of these models is that they are formally derived from the governing equations and require minimal heuristic arguments.
Coarse-grained simulations of the viscous Burgers equation and the incompressible Navier-Stokes equations were considered. The closures derived under the assumption of a finite memory proved to be robust and accurate for the cases considered. For varying mesh resolutions, the MZ-based models were shown to accurately predict the subgrid contribution to the energy transfer. The trajectory of each resolved mode in phase space was accurately predicted for cases where the coarse-graining was moderate. The models provided accurate results for the Burgers equation, transitional and fully turbulent periodic incompressible flows, and fully developed channel flow. This accuracy is believed to be due to the close link between the mathematics of the coarse-graining process and the derivation of the closure model. An analysis of the memory term for the Burgers equation demonstrated the need for a finite memory length and provided further insight into the performance of the t-model and finite memory models. The models used in this work should