Estimating Uncertainties in Statistics Computed from DNS

Todd A. Oliver oliver@ices.utexas.edu Center for Predictive Engineering and Computational Sciences,
Institute for Computational Engineering and Sciences,
The University of Texas at Austin,
Austin, TX 78712, USA
   Nicholas Malaya nick@ices.utexas.edu Center for Predictive Engineering and Computational Sciences,
Institute for Computational Engineering and Sciences,
The University of Texas at Austin,
Austin, TX 78712, USA
   Rhys Ulerich rhys@ices.utexas.edu Center for Predictive Engineering and Computational Sciences,
Institute for Computational Engineering and Sciences,
The University of Texas at Austin,
Austin, TX 78712, USA
   Robert D. Moser rmoser@ices.utexas.edu Center for Predictive Engineering and Computational Sciences,
Institute for Computational Engineering and Sciences,
The University of Texas at Austin,
Austin, TX 78712, USA
Department of Mechanical Engineering,
The University of Texas at Austin,
Austin, TX 78712, USA
Abstract

Rigorous assessment of uncertainty is crucial to the utility of DNS results. Uncertainties in the computed statistics arise from two sources: finite statistical sampling and the discretization of the Navier–Stokes equations. Due to the presence of non-trivial sampling error, standard techniques for estimating discretization error (such as Richardson extrapolation) fail or are unreliable. This work provides a systematic and unified approach for estimating these errors. First, a sampling error estimator that accounts for correlation in the input data is developed. Then, this sampling error estimate is used as part of a Bayesian extension of Richardson extrapolation in order to characterize the discretization error. These methods are tested using the Lorenz equations and are shown to perform well. These techniques are then used to investigate the sampling and discretization errors in the DNS of a wall-bounded turbulent flow at $Re_\tau \approx 180$. Both small ($L_x \times L_z = 4\pi\delta \times 2\pi\delta$) and large ($12\pi\delta \times 4\pi\delta$) domain sizes are investigated. For each case, a sequence of meshes was generated by first designing a “nominal” mesh using standard heuristics for wall-bounded simulations. These nominal meshes were then coarsened to generate a sequence of grid resolutions appropriate for the Bayesian Richardson extrapolation method. In addition, the small box case is computationally inexpensive enough to allow simulation on a finer mesh, enabling the results of the extrapolation to be validated in a weak sense. For both cases, it is found that while the sampling uncertainty is large enough to make the order of accuracy difficult to determine, the estimated discretization errors are quite small. This indicates that the commonly used heuristics provide adequate resolution for this class of problems. However, it is also found that, for some quantities, the discretization error is not small relative to sampling error, indicating that the conventional wisdom that sampling error dominates discretization error for this class of simulations needs to be reevaluated.

I Introduction

Direct numerical simulation (DNS) of turbulence is a valuable tool for the study of turbulent flows. Statistical quantities computed from DNS results are commonly used both to further understanding of flow physics and to test hypotheses regarding turbulence [Moin and Mahesh, 1998, Jiménez and Moser, 2007, Alfonsi, 2011], as well as to calibrate and validate engineering turbulence models [Pope, 2000, Wilcox, 2006, Durbin and Reif, 2011, Cheung et al., 2011, Oliver and Moser, 2011]. DNS data are thus commonly used like experimental data. Therefore, as with experimental data, to have confidence in the interpretation of a DNS or in the meaning of any comparison with DNS data, one must understand the uncertainty in that data. However, it is not common in the DNS literature to report these uncertainties because uncertainties in the data are generally not systematically evaluated. Instead, it is common for expert practitioners to determine grid spacing requirements, required simulation time, etc. based on a combination of knowledge gained from previous experience and observations of simulation outputs. The goal of this work is to improve upon this practice by providing a systematic method for estimating uncertainty in the statistics computed from DNS data.

Uncertainty estimation for DNS is an example of solution verification for the DNS statistical quantities. The goal of solution verification is to ensure that numerical solutions of a mathematical model are sufficiently accurate approximations to the exact solution of the model [AIAA Committee on Standards, 1998, ASME V&V 10, 2006]. Solution verification techniques for computational fluid dynamics (CFD) have been the topic of a large body of research [Roache, 1998, Roy, 2005, Oberkampf and Roy, 2010]. The simplest techniques in this domain are based on Richardson extrapolation for estimating discretization error given a sequence of simulations on successively finer meshes. However, these developments have had little impact on the practice of DNS due to the fact that the outputs are generally statistical quantities that are contaminated not only by discretization error but also by sampling error. Since the goal of DNS is to resolve all relevant physical scales, it is generally expected that errors due to finite sampling are significant relative to discretization errors. Thus, simple methods for estimating discretization error that are common for other CFD calculations, like Richardson extrapolation, are not directly applicable to DNS results, because the estimated discretization error is greatly affected by sampling error. The result is that, while systematic mesh resolution studies have been performed [Donzis et al., 2008], it is not common to actually estimate discretization error.

To address this issue, it is of primary importance to estimate sampling errors. Of course, if the data used to compute the statistics were samples from independent, identically distributed random variables, the central limit theorem allows easy estimation of the sampling error. However, the samples used to generate DNS statistics are drawn from a time history and/or spatial field and are generally not independent. To reduce the correlation, the samples used to compute statistics are sometimes taken “far” apart in time and then treated as independent [Donzis et al., 2008]. While this procedure has intuitive appeal, it can lead to underestimated uncertainty if the snapshots are not sufficiently separated. Alternatively, if the snapshots are taken too far apart, it leads to fewer samples and overestimates of sampling error.

Instead of restricting the samples in this way, it is preferable to use all the available data and account for correlations. One approach to accounting for the correlations in DNS statistics, which was proposed by Hoyas and Jiménez [Hoyas and Jiménez, 2008a], uses a sequence of “coarse grainings” of the data. However, our experience has been that it is difficult to automate this procedure because the presence of noise often requires both user intervention and interpretation.

A more promising approach based on direct estimation of the correlations in the data has been used to estimate sampling errors in many fields, including the weather and climate communities [Trenberth, 1984, Zwiers and von Storch, 1995]. In this approach, the autocorrelation of the data, which is not known a priori, must be estimated from the data, which presents its own challenges. Here, we follow the work of Broersen [2002, 2006] and fit autoregressive models from which the autocorrelation function is then computed.

Given an estimate of the sampling error, discretization errors are estimated using data from simulations with different resolution levels. As noted earlier, because sampling uncertainty is generally expected to be of the same magnitude as the discretization error, at least for grid spacing and time steps used for production DNS, standard Richardson extrapolation generally fails to correctly estimate the discretization error. Here, a Bayesian extension of the standard Richardson extrapolation that accounts for both statistical uncertainty and prior information (e.g., the expected asymptotic order of accuracy) is formulated. This Bayesian statistical formulation effectively regularizes the Richardson extrapolation problem to decrease the sensitivity of the estimated discretization error to finite sampling effects.

The performance of these estimators is tested using the Lorenz equations. They are then applied to the problem of assessing uncertainties in statistics from the DNS of incompressible, turbulent channel flow at $Re_\tau \approx 180$. The resulting discretization error estimates are assessed using a small domain case where it is feasible to run a simulation with twice the resolution of the nominal simulation, which is designed according to typical DNS heuristics.

For many quantities, including the mean velocity, Reynolds shear stress, and skin friction coefficient, the discretization error model is validated, meaning that its predictions agree with the observations at higher resolution. For these quantities, the model is then used to predict the discretization error present in a large domain simulation with resolution again set by the usual heuristics. The results demonstrate that, for these quantities, the discretization errors are small, generally much less than one percent. Thus, the usual mesh heuristics appear to be adequate. It should be pointed out, however, that the estimated discretization error is often similar to or larger than the estimated sampling error. This result violates the conventional wisdom that sampling error dominates, indicating that it is important to systematically estimate discretization error effects as well.

Unfortunately, for other quantities, including the streamwise velocity variance and the vorticity variances, our simple discretization error model is invalidated by the high resolution small domain simulation results. While the observed changes between the nominal and high resolution simulation are small, and so there is no indication that the nominal resolution is inadequate, this invalidation precludes the use of the model to predict the discretization error with any confidence. Thus, no discretization error estimates are presented for these quantities.

The remainder of the paper is organized as follows. The full error estimation methodology is presented in §II, including the sampling error estimation (§II.1), the Bayesian Richardson extrapolation procedure (§II.2), and the illustrative Lorenz example (§II.3). Results for DNS of channel flow are given in §III, and §IV provides conclusions.

II Methodology

This work addresses two major sources of uncertainty in statistics computed from DNS: finite sampling error and discretization error. The sampling error estimator is described briefly in §II.1. This estimate is then used in a Bayesian extension of Richardson extrapolation to determine probabilistic estimates of the discretization error and the exact value of the statistic of interest, as described in §II.2. To assess the characteristics of these procedures in a simple setting where different regimes can easily be explored, both estimators are used to evaluate simulations of the Lorenz equations in §II.3.

II.1 Sampling Error

This section outlines a method for estimating the variance of a sample average computed from correlated data. To fix notation, let $u$ denote a scalar flow quantity (e.g., a velocity component). Assume that the DNS produces a sample from a statistically stationary sequence of random variables $u_i$ for $i = 1, 2, \ldots$. Of course, the simulation can only run for finite time, so only the first $N$ components of this sequence are known. The average of the available samples,

\bar{u}_N = \frac{1}{N} \sum_{i=1}^{N} u_i,

is then an approximation of the true mean $\mu = E[u]$, where $E$ here is the expected value. Then, the sampling error $\varepsilon$ is simply the difference between the sample average and the true mean:

\varepsilon = \bar{u}_N - \mu.

Extensions of the central limit theorem (CLT) valid for sequences in which independence is approached for large separations, as is expected for turbulence time series, imply that for large $N$, $\varepsilon$ converges in distribution to a normal random variable with zero mean (see Appendix A). The variance of $\varepsilon$ is thus all that is required to completely characterize the sampling error. The estimator for the variance used here is motivated by this same generalization of the CLT, as described in Appendix A.

Following Trenberth [1984], the sampling error variance is estimated as

\sigma_\varepsilon^2 \approx \frac{\sigma_u^2}{N_\text{eff}},   (1)

where $\sigma_u^2$ is the variance of $u$,

N_\text{eff} = \frac{N}{T_0},   (2)

and $T_0$ is the decorrelation separation distance. Specifically,

T_0 = 1 + 2 \sum_{t=1}^{N-1} \left( 1 - \frac{t}{N} \right) \hat{\rho}(t),   (3)

where $\hat{\rho}(t)$ is an estimate of the unknown true autocorrelation function $\rho(t)$. The possibly unexpected factor $(1 - t/N)$ is a common artifact [Thiébaux and Zwiers, 1984, Zwiers and von Storch, 1995] of choosing a biased estimator, which is used here because it possesses more desirable properties than the “unbiased” version in this context [Trenberth, 1984, Percival, 1993]. The expression (1) for $\sigma_\varepsilon^2$ is the same as the estimate that would be obtained if $N_\text{eff}$ independent samples were used, making $N_\text{eff}$ a measure of the effective size of a correlated sample.
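
For concreteness, the estimator (1)–(3) is straightforward to implement once an autocorrelation estimate is in hand. The following minimal Python sketch (function and variable names are ours for illustration, not from the reference implementation discussed below) computes $T_0$, $N_\text{eff}$, and the sampling error standard deviation from equispaced samples and an autocorrelation estimate $\hat{\rho}$:

```python
import numpy as np

def sampling_error_std(u, rho):
    """Estimate the standard deviation of the sample average of the
    correlated series u per Eqs. (1)-(3), given autocorrelation
    estimates rho[t] for separations t = 0, 1, ..., len(u)-1
    (with rho[0] == 1)."""
    N = len(u)
    t = np.arange(1, N)
    T0 = 1.0 + 2.0 * np.sum((1.0 - t / N) * rho[1:N])  # Eq. (3)
    N_eff = N / T0                                     # Eq. (2)
    var_u = np.var(u)  # biased (1/N) sample variance of u
    return np.sqrt(var_u / N_eff)                      # Eq. (1)
```

In practice, $\hat{\rho}$ comes from the autoregressive model fit described next, not from the naive empirical autocorrelation.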

The fundamental challenge in estimating the variance of the sample average is the approximation of the autocorrelation $\rho$. While $\rho$ can be approximated directly from its definition, such a naive approximation tends to be noisy, which can lead to bad estimates of $T_0$ [Percival, 1993]. Obtaining a useful estimate of $\rho$ requires more sophisticated techniques, especially for modest sample sizes. Here, we follow Storch and Zwiers [2001, §17.1.3] and fit an autoregressive (AR) time series model [see, e.g., Box et al., 2008, Priestley, 1981] to the observed sequence $u_1, \ldots, u_N$. An AR process of order $p$ takes the following form:

u_i = \sum_{k=1}^{p} \phi_k u_{i-k} + \epsilon_i, \qquad \epsilon_i \sim N(0, \sigma_\epsilon^2),   (4)

where $\epsilon_i \sim N(0, \sigma_\epsilon^2)$ indicates that $\epsilon_i$ is a Gaussian random variable with mean $0$ and variance $\sigma_\epsilon^2$. The process parameters $\phi_1, \ldots, \phi_p$ and noise variance $\sigma_\epsilon^2$ completely define the process, and thus, given these parameters, the exact autocorrelation function of the AR process may be computed. This autocorrelation function is then used as $\hat{\rho}$ to compute $T_0$ according to (3).

Thus, estimating the autocorrelation reduces to estimating the parameters of an AR model. However, because the “true” process order is unknown, a hierarchy of models of increasing order is simultaneously estimated [Broersen, 2002, 2006]. From these candidates, the best model is chosen using an information-theoretic, finite sampling model selection criterion [Broersen, 2000].

Fitting such models to observed data has been studied extensively Burg [1975], Andersen [1978], Faber [1986], Andersen [1974], Broersen [2002, 2006], and there are a number of available algorithms. Here, classical Burg recursion [Andersen, 1974] is used to compute the parameters because it is less susceptible to round-off error accumulation than the more efficient recursive denominator variant [Andersen, 1978, Faber, 1986]. An open source, header-only C++ reference implementation is available at http://rhysu.github.com/ar/. Convenient wrappers for GNU Octave [Eaton et al., 2008] and Python [Drake and Rossum, 2011] are also provided. While this implementation is sufficient for the results shown in §II.3 and §III, it can fail in some circumstances. The authors have observed stability-related problems when processing large data sets that have extremely low noise or very long decorrelation times relative to the sampling rate. These issues are related to accumulation of round-off error [Andersen, 1974].
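
To make the procedure concrete, the following sketch illustrates the two steps under simplifying assumptions (a fixed model order rather than the order-selection criterion above, and mean-subtracted data): a classical Burg fit for the AR coefficients, followed by the Yule–Walker relations to recover the autocorrelation implied by the fitted model. It is a toy stand-in for the reference implementation linked above, not a transcription of it.

```python
import numpy as np

def burg_ar(x, p):
    """Fit AR(p) coefficients phi to the mean-subtracted series x via
    classical Burg recursion: x[n] ~ sum_k phi[k] * x[n-1-k] + noise."""
    x = np.asarray(x, float) - np.mean(x)
    f, b = x.copy(), x.copy()  # forward/backward prediction errors
    phi = np.zeros(p)
    for m in range(p):
        fm, bm = f[m+1:], b[m:-1]
        k = 2.0 * np.dot(fm, bm) / (np.dot(fm, fm) + np.dot(bm, bm))
        f[m+1:], b[m+1:] = fm - k * bm, bm - k * fm  # update errors
        old = phi[:m].copy()
        phi[m] = k
        phi[:m] = old - k * old[::-1]                # Levinson update
    return phi

def ar_acf(phi, nlags):
    """Exact autocorrelation rho(0..nlags) of the fitted AR process,
    obtained from the Yule-Walker equations (assumes nlags >= len(phi))."""
    p = len(phi)
    M, c = np.eye(p), np.array(phi, float)
    for j in range(1, p + 1):        # rho(j) = sum_k phi_k rho(|j-k|)
        for k in range(1, p + 1):
            if j != k:
                M[j-1, abs(j-k)-1] -= phi[k-1]
    rho = np.ones(nlags + 1)
    rho[1:p+1] = np.linalg.solve(M, c)
    for t in range(p + 1, nlags + 1):
        rho[t] = np.dot(phi, rho[t-p:t][::-1])
    return rho
```

Chaining the pieces, `rho = ar_acf(burg_ar(u, p), len(u) - 1)` followed by `sampling_error_std(u, rho)` yields the sampling error estimate, with the model order chosen in practice by the selection criterion discussed above.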

II.2 Discretization Error

In addition to the sampling error, discretization error contributes to the error in statistics computed from DNS. As part of a typical calculation, statistics computed from multiple levels of mesh resolution are available because coarse meshes are often used to speed convergence to a statistically stationary state. In principle, this information can be used to estimate discretization error. However, the standard procedure, Richardson extrapolation, does not account for sampling error, which can lead to misleading results. This procedure and issues introduced by sampling error are described in §II.2.1. An extension of this method that accounts for the sampling error through a Bayesian calibration procedure is described in §II.2.2.

II.2.1 Assessing Order of Accuracy without Sampling Error

Given simulations using at least three distinct resolutions, the convergence rate of a discrete approximation to an unknown continuum value may be assessed [Roache, 1998, Roy, 2005], assuming that all three resolutions are in the asymptotic convergence range. Let $u$ denote the exact value of some output quantity and $u_h$ denote the discrete approximation of $u$ at resolution level $h$. Assuming that

u_h = u + C h^p + O(h^q), \qquad q > p,   (5)

gives rise to the classical Richardson extrapolation procedure. The input data are a sequence of outputs $u_{h_1}$, $u_{h_2}$, and $u_{h_3}$ resulting from computations for successively finer discrete approximations $h_1 > h_2 > h_3$. Given this data and neglecting $O(h^q)$ contributions, one can estimate the leading error order by solving

\frac{\Delta_{12}}{\Delta_{23}} = \frac{h_1^p - h_2^p}{h_2^p - h_3^p}   (6)

for $p$, where $\Delta_{12} = u_{h_1} - u_{h_2}$ and $\Delta_{23} = u_{h_2} - u_{h_3}$.

Unfortunately, when the computed discrete approximation is a statistical quantity that is contaminated by sampling error, this procedure can give misleading results. For instance, when the sampling error is large, the computed order may be very far from the true $p$ that would be obtained if sampling error were eliminated, making it appear that the discretization error is either much larger or much smaller than the true error. If the sampling error is large enough, it can make the implied $p$ negative, making it appear that the solution is diverging. Or, (6) may have no solution at all, making it impossible to assess $p$ or the discretization error. Thus, this procedure is insufficient when significant sampling error is expected.
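
To make the failure mode concrete, the following sketch (our own illustration, with an assumed bracketing interval) solves (6) for $p$ numerically; when sampling noise pushes the left side of (6) outside the range attainable by the right side, the root find simply fails:

```python
import numpy as np
from scipy.optimize import brentq

def richardson_order(u1, u2, u3, h1, h2, h3, p_max=20.0):
    """Solve Eq. (6) for the observed order p, given outputs u1, u2, u3
    on successively finer resolutions h1 > h2 > h3. Raises ValueError
    when no root exists in (0, p_max], e.g., due to sampling error."""
    resid = lambda p: (u1 - u2) / (u2 - u3) \
                      - (h1**p - h2**p) / (h2**p - h3**p)
    return brentq(resid, 1e-6, p_max)

# With a constant refinement ratio r = h1/h2 = h2/h3, Eq. (6) reduces to
# p = log((u1 - u2) / (u2 - u3)) / log(r).
```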

II.2.2 Accounting for Sampling Error

To account for sampling error, a probabilistic model of the true mean that includes both the discretization error described in §II.2.1 and the sampling error estimate described in §II.1 is needed. Using this model, the parameters of the discretization error model (e.g., the constants $C$ and $p$) are then estimated using Bayesian inference. This formulation is advantageous relative to a deterministic procedure (e.g., least-squares or maximum likelihood estimation) in the current context because it naturally assesses the uncertainty in the discretization error estimate, eliminating the flaw in the standard procedure described in §II.2.1. Since the Bayesian approach to inverse problems is described in more detail by many authors [Cox, 1961, Christian, 2001, Jaynes, 2003, Kaipio and Somersalo, 2005, Calvetti and Somersalo, 2007], additional background information is omitted here.

To develop a probabilistic model for the true mean $\mu$, let $\varepsilon_i$ denote the sampling error for the sample average computed from $N_i$ correlated samples at resolution $h_i$. That is,

\bar{u}_i = \mu_i + \varepsilon_i,

where $\mu_i$ is the true mean at resolution $h_i$ and $\bar{u}_i$ is the sample average computed from the $N_i$ samples. Further, letting $\delta_i = \mu_i - \mu$ denote the discretization error, we have

\bar{u}_i = \mu + \delta_i + \varepsilon_i.   (7)

Using the sampling error estimator from §II.1 for $\varepsilon_i$ and the form of $\delta_i$ from (5), one has a complete probabilistic model of the true mean $\mu$. Specifically,

\bar{u}_i = \mu + C h_i^p + O(h_i^q) + \varepsilon_i,

where $\varepsilon_i \sim N(0, \sigma_i^2)$ and $\sigma_i$ is the estimate from (1) computed at resolution $h_i$. Note that, while $\mu$ is a deterministic quantity, our knowledge of $\mu$ is incomplete. Since Bayesian probability is a representation of incomplete knowledge, it is appropriate that $\mu$ is represented by a probabilistic model. Neglecting the $O(h_i^q)$ terms gives

\bar{u}_i \sim N(\mu + C h_i^p, \sigma_i^2).   (8)

This model forms the basis of the Bayesian inverse problem formulated later in this section, and we use it exclusively in this work. However, with appropriate modifications of the likelihood function defined below, any discretization error model may be used here in place of $C h^p$.

For brevity, let $\boldsymbol{d} = (\bar{u}_1, \ldots, \bar{u}_M)$ from now forward. Then, given sample averages computed using $M$ distinct mesh sizes $h_1, \ldots, h_M$, Bayes’ theorem implies that

\pi(\mu, C, p \,|\, \boldsymbol{d}) \propto \pi(\boldsymbol{d} \,|\, \mu, C, p)\, \pi(\mu, C, p),   (9)

where $\pi(\mu, C, p \,|\, \boldsymbol{d})$ denotes the probability density function (PDF) for $(\mu, C, p)$ conditioned on $\boldsymbol{d}$. The right-hand side of (9) is composed of two factors: the prior PDF $\pi(\mu, C, p)$ and the likelihood function $\pi(\boldsymbol{d} \,|\, \mu, C, p)$. The prior PDF encodes any available information about the parameters $\mu$, $C$, and $p$ that is independent of the observations $\boldsymbol{d}$. For instance, one may have strong prior information regarding $p$ because the formal order of accuracy of the numerical scheme is known. The likelihood function assesses the consistency of the model with particular values of the parameters $\mu$, $C$, and $p$ and the computed values $\bar{u}_i$. It is derived from the probabilistic model (7). Assuming that sampling errors for different resolutions are independent,

\pi(\boldsymbol{d} \,|\, \mu, C, p) = \prod_{i=1}^{M} \pi(\bar{u}_i \,|\, \mu, C, p).

Then, from (8), it is clear that

\pi(\bar{u}_i \,|\, \mu, C, p) = \frac{1}{\sigma_i}\, \phi\!\left( \frac{\bar{u}_i - \mu - C h_i^p}{\sigma_i} \right),

where $\phi$ is the standard normal density, $\phi(x) = \exp(-x^2/2)/\sqrt{2\pi}$.

Note that, as $\sigma_i \to 0$, the likelihood PDF approaches the $\delta$ distribution centered at the observed values, and thus, this Bayesian procedure reduces to the deterministic Richardson extrapolation approach described in §II.2.1.

To complete the specification of the Bayesian inverse problem, one must set priors on $\mu$, $C$, and $p$. For simplicity, we take $\mu$, $C$, and $p$ to be independent in the prior. Further, we choose $\mu \sim N(\bar{u}_\text{fine}, \sigma_\mu^2)$, where $\bar{u}_\text{fine}$ is the result at the finest resolution and $\sigma_\mu = \alpha |\bar{u}_\text{fine}|$ for some moderate $\alpha$. Then,

\pi(\mu) = \frac{1}{\sigma_\mu}\, \phi\!\left( \frac{\mu - \bar{u}_\text{fine}}{\sigma_\mu} \right).   (10)

In principle, $C$ may take any real value, but it is algorithmically convenient to limit the probable range of $C$ by choosing $C \sim N(0, \sigma_C^2)$ for some large $\sigma_C$, from which

\pi(C) = \frac{1}{\sigma_C}\, \phi\!\left( \frac{C}{\sigma_C} \right).   (11)

Because $p > 0$ is expected for most convergent numerical schemes but detecting pathologically-slow convergence when $0 < p < 1$ is desirable, we select a prior distribution that goes to zero at $p = 0$, is maximum near the expected convergence order (if known), and has a broad range of plausible $p$. The Gamma distribution with shape parameter $k > 1$ meets these requirements for suitable values of the parameters $k$ and $\theta$, so the prior on $p$ is given by

\pi(p) = \frac{p^{k-1} e^{-p/\theta}}{\Gamma(k)\, \theta^k}.   (12)

Substituting these priors into (9) gives

\pi(\mu, C, p \,|\, \boldsymbol{d};\, \alpha, \sigma_C, k, \theta) \propto \pi(\mu)\, \pi(C)\, \pi(p) \prod_{i=1}^{M} \frac{1}{\sigma_i}\, \phi\!\left( \frac{\bar{u}_i - \mu - C h_i^p}{\sigma_i} \right),   (13)

where the dependence on the prior parameters $\alpha$, $\sigma_C$, $k$, and $\theta$ has been noted. While the posterior PDF is simple to write down according to Bayes’ theorem, working with this PDF can be difficult. In general, it is not possible to compute statistics of the posterior analytically because the necessary integrals cannot be evaluated in closed form. Instead, it is common to use Markov chain Monte Carlo (MCMC) algorithms to sample the posterior [Robert and Casella, 2010]. In this work, we use a Python [Drake and Rossum, 2011] implementation relying on the emcee implementation [Foreman-Mackey et al., 2012] of Goodman and Weare’s affine invariant MCMC sampling technique [Goodman and Weare, 2010].
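
As an illustration of how (13) can be sampled, the sketch below codes the log-posterior and hands it to emcee. The data are the large-uncertainty Lorenz values from Table 2 in §II.3.2; the prior hyperparameters are assumed values for illustration only, not those used to generate the results of this paper.

```python
import numpy as np
import emcee
from scipy.stats import gamma, norm

# Data: time steps h, sample averages u_bar, and estimated sampling
# standard deviations sigma (large-uncertainty case of Table 2).
h     = np.array([0.075, 0.05, 0.025])
u_bar = np.array([23.6081, 23.3747, 23.3432])
sigma = np.array([0.457, 0.546, 0.724])

# Prior hyperparameters (alpha, sigma_C, k, theta): assumed values.
alpha, sigma_C, k, theta = 0.02, 100.0, 3.0, 1.5
u_fine = u_bar[h.argmin()]  # finest-resolution result anchors the mu prior

def log_posterior(x):
    mu, C, p = x
    if p <= 0.0:
        return -np.inf  # outside the support of the Gamma prior
    lp  = norm.logpdf(mu, u_fine, alpha * abs(u_fine))    # Eq. (10)
    lp += norm.logpdf(C, 0.0, sigma_C)                    # Eq. (11)
    lp += gamma.logpdf(p, k, scale=theta)                 # Eq. (12)
    lp += norm.logpdf(u_bar, mu + C * h**p, sigma).sum()  # likelihood
    return lp

ndim, nwalkers = 3, 32
x0 = np.array([u_fine, 0.0, k * theta]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(x0, 5000)
samples = sampler.get_chain(discard=1000, flat=True)  # draws of (mu, C, p)
```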

II.3 Illustrative Example: The Lorenz Equations

To illustrate the application of the sampling and discretization error estimation techniques discussed here, they are applied to estimates of means computed from solutions of the Lorenz equations. The Lorenz equations are a system of three ordinary differential equations:

\dot{x} = \sigma (y - x),   (14a)
\dot{y} = x (\rho - z) - y,   (14b)
\dot{z} = x y - \beta z.   (14c)

Depending on the values of the parameters $\sigma$, $\rho$, and $\beta$, the system exhibits chaotic behavior. The methods described in §II.1 and §II.2 are therefore applicable to estimating errors in statistical quantities, such as the mean of $z$, computed from discrete approximations. In the results presented here, the parameters are set to their typical values: $\sigma = 10$, $\rho = 28$, $\beta = 8/3$. Further, (14) is discretized using the classical fourth-order Runge–Kutta time discretization (RK4).
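
A minimal sketch of this setup (our own illustration): RK4 integration of (14) at the standard parameter values, accumulating samples of $z$ for averaging after a spin-up period is discarded.

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # standard Lorenz parameters

def lorenz_rhs(u):
    x, y, z = u
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def rk4_step(u, dt):
    k1 = lorenz_rhs(u)
    k2 = lorenz_rhs(u + 0.5 * dt * k1)
    k3 = lorenz_rhs(u + 0.5 * dt * k2)
    k4 = lorenz_rhs(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def sample_z(u0, dt, n_steps, n_spinup, stride):
    """Integrate with RK4 and return samples of z taken every `stride`
    steps after discarding `n_spinup` spin-up steps."""
    u, out = np.asarray(u0, float), []
    for i in range(n_steps):
        u = rk4_step(u, dt)
        if i >= n_spinup and (i - n_spinup) % stride == 0:
            out.append(u[2])
    return np.array(out)

# e.g., z = sample_z([1.0, 1.0, 1.0], dt=0.025, n_steps=500_000,
#                    n_spinup=4_000, stride=3); z.mean() approximates E[z].
```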

II.3.1 Sampling Error Estimator Performance

First, we examine the performance of the sampling error estimator. Estimates $\hat{\sigma}_\varepsilon$ of the standard deviation of $\bar{z}_T$, the average of $z$ over a time period $T$, were determined using the techniques in §II.1 for several averaging periods $T$. To assess the reliability of these estimates, they were repeated for each of a set of 10,085 different Lorenz simulations, which were started with randomly selected initial conditions so that the variability in these estimates could be assessed. In each simulation, $z$ was sampled every 0.075 time units, which was every third RK4 time step ($\Delta t = 0.025$).

The decorrelation separation distance computed as in §II.1 varied somewhat but was approximately 7.76 time units (103.5 samples), making the effective sample size $N_\text{eff} \approx N / 103.5$. The distributions of $\hat{\sigma}_\varepsilon$ obtained from the ensembles of Lorenz simulations are shown in Fig. 5 for four different averaging periods $T$. An estimate of the “true” value of $\sigma_\varepsilon$ is also shown in the figure. It is determined directly from the sample variance of the ensemble of estimates $\bar{z}_T^{(j)}$:

\sigma_\varepsilon^2 \approx \frac{1}{M - 1} \sum_{j=1}^{M} \left( \bar{z}_T^{(j)} - \langle \bar{z}_T \rangle \right)^2,   (15)

where

\langle \bar{z}_T \rangle = \frac{1}{M} \sum_{j=1}^{M} \bar{z}_T^{(j)}

and $\bar{z}_T^{(j)}$ denotes the sample average of $z$ for the $j$th simulation, with $M = 10{,}085$.

Figure 5: PDFs of the estimate $\hat{\sigma}_\varepsilon$ of the standard deviation of the sample average of $z$, computed according to (1)–(3) for varying averaging periods $T$ from an ensemble of 10,085 Lorenz simulations. For comparison, the vertical dashed lines indicate the standard deviation of the sample average computed directly from the ensemble according to (15).

In all cases, the estimated true value is well within the support of the distribution of $\hat{\sigma}_\varepsilon$, indicating that the estimate is consistent with the true value. Additional details for comparison are provided in Table 1.

σ_ε from (15)   mean σ̂_ε    |difference|   std σ̂_ε     min σ̂_ε     max σ̂_ε
7.393e-02       7.389e-02   4.232e-05      1.491e-03   6.565e-02   8.686e-02
5.207e-02       5.236e-02   2.926e-04      7.079e-04   4.843e-02   5.550e-02
3.660e-02       3.705e-02   4.503e-04      3.491e-04   3.454e-02   3.877e-02
2.595e-02       2.620e-02   2.453e-04      1.813e-04   2.515e-02   2.695e-02
Table 1: Statistics of the estimator $\hat{\sigma}_\varepsilon$ for the four averaging periods of Figure 5, with $T$ increasing from top to bottom: the ensemble value $\sigma_\varepsilon$ from (15), the ensemble mean of $\hat{\sigma}_\varepsilon$, the absolute difference between the two, and the standard deviation, minimum, and maximum of $\hat{\sigma}_\varepsilon$ over the ensemble.

The table shows that the error estimate is quite consistent with the standard deviation of the sample average, indicating that it is a good estimator. For instance, the difference between $\sigma_\varepsilon$ and the ensemble mean of $\hat{\sigma}_\varepsilon$ is never more than about 1.2% of $\sigma_\varepsilon$. Further, even the maximum and minimum values of $\hat{\sigma}_\varepsilon$ generally deviate by around 10% or less, with the largest deviation being about 18% of $\sigma_\varepsilon$. Finally, note that the results show the correct $1/\sqrt{T}$ scaling, as expected.

All of the results shown in Figure 5 and Table 1 were computed using a very high sampling frequency (a sample every third time step). However, it is common in DNS to sample much less frequently. To examine the impact of coarse sampling as well as the behavior as the sampling period $\Delta t_s \to 0$, the sampling period was varied while the RK4 time step remained constant. Results of this study are shown in Figure 6.

When the sampling period is large, the samples are less correlated. However, information is still discarded by neglecting even highly correlated samples, leading to larger uncertainty in the sample average of $z$. Beginning with large $\Delta t_s$, as $\Delta t_s$ is decreased, the estimated decorrelation time and $\hat{\sigma}_\varepsilon$ both decrease. However, this trend reverses for small enough $\Delta t_s$: while the estimated decorrelation time appears to converge, the estimated $\hat{\sigma}_\varepsilon$ begins to grow when $\Delta t_s$ becomes smaller than about 0.20. When $\Delta t_s$ is less than about 0.01, the algorithm used to compute the AR parameters breaks down due to the effects of round-off error. We hypothesize that the increase in $\hat{\sigma}_\varepsilon$ with decreasing $\Delta t_s$ below 0.20 is also due to accumulation of double precision round-off error. Repeating this study using both lower and higher floating point precision (not shown) produced behavior consistent with that hypothesis.

Figure 6: Convergence behavior of the autoregressive procedure on Lorenz data gathered from many simulations. Each simulation was of fixed duration, with samples taken every $\Delta t_s$ time units. Kernel density estimates of the PDF of $\hat{\sigma}_\varepsilon$ appear as curves. Values obtained empirically from the many realizations are marked with vertical lines.

II.3.2 Bayesian Richardson Extrapolation Results

Here we explore the performance of the Bayesian Richardson extrapolation procedure described in §II.2 in three regimes: small sampling error, medium sampling error, and large sampling error relative to the discretization error. In DNS, it is expected that sampling errors will generally be larger than or comparable to the discretization error. The small sampling error regime is also considered here for completeness.

The data input to the Bayesian Richardson extrapolation algorithm are listed in Table 2, where $\Delta t$ is the time step used, $T$ is the total simulation time, $\bar{z}$ is the observed sample average, and $\hat{\sigma}_\varepsilon$ is the estimated standard deviation of the sample average. In all cases, the sampling period was held fixed.

                Large                   Medium                  Small
Δt      T       z̄         σ̂_ε         z̄         σ̂_ε         z̄         σ̂_ε
0.075   10      23.6081    0.457       23.1873    0.0290      23.1911    0.000661
0.05    10      23.3747    0.546       23.4942    0.0325      23.4874    0.000813
0.025   10      23.3432    0.724       23.5718    0.0382      23.5486    0.000884
Table 2: Observed sample averages $\bar{z}$ and estimated sampling standard deviations $\hat{\sigma}_\varepsilon$ for the large, medium, and small sampling-uncertainty regimes at varying time step. The tabulated averaging time $T$ applies to the large-uncertainty case; the medium and small cases use successively longer averaging times, consistent with their smaller sampling errors.

Marginal prior and posterior PDFs for both the order of accuracy $p$ and the true value of the mean of $z$ (denoted $\mu$) were obtained using the Bayesian Richardson extrapolation procedure and are shown in Figure 10.

(a) Large statistical uncertainty.
(b) Medium statistical uncertainty.
(c) Small statistical uncertainty.
Figure 10: Bayesian Richardson extrapolation results from three different regimes: large, medium and small statistical uncertainty in comparison to discretization error.

In addition, the plots for $\mu$ also show the PDF implied by the sample average at the finest time step (i.e., a Gaussian with mean equal to the observed sample average and standard deviation equal to the estimated sampling error $\hat{\sigma}_\varepsilon$).

When the averaging time duration is sufficiently small that the sampling error is large, as shown in Figure 10(a), the sampling error effectively masks the discretization error. In this case, the data contain little information about the true discretization error. Thus, the marginal posterior PDF for $p$ is essentially the same as the prior PDF. However, because the prior PDF for $\mu$ is so broad, the prior for $p$ constrains the results. That is, because we indicate a priori that the scheme is convergent, the data are inconsistent with values of $\mu$ in the tails of the prior. For this reason, the posterior for $\mu$ is somewhat more peaked than its prior even though the marginal posterior for $p$ is the same as the prior.

Figure 10(b) shows the “medium” sampling uncertainty level. For this case, the sampling uncertainty dominates discretization error at the smallest $\Delta t$, but discretization error dominates at the largest $\Delta t$. Some information regarding the order of accuracy can be learned from the data in this case, leading to a posterior PDF for $p$ that is significantly different from the prior, unlike the large sampling uncertainty case. Note that the peak of the marginal posterior for $p$ is nearly the formal order of accuracy ($p = 4$ for RK4), but that there is significant uncertainty associated with this estimate. Since the discretization error can be estimated with more confidence and the sampling error is smaller, the posterior PDF for $\mu$ is much narrower than in the large sampling uncertainty case. Further, it is slightly shifted from the fine resolution result. This shift is a correction for discretization error at the fine resolution.

The final case is the small sampling uncertainty case shown in Figure 10(c). In this case, the sampling error is many times smaller than the discretization error, and the Bayesian procedure should reduce to standard Richardson extrapolation. This fact is confirmed by the observation that the PDFs for both $p$ and $\mu$ are very narrow. The peak of the posterior distribution for $p$ occurs slightly above the expected order of accuracy but agrees with the estimate that would be obtained from standard Richardson extrapolation. For comparison, the prior and posterior mean and standard deviation values for the estimated true mean are given in Table 3.

Case Prior Mean Post Mean Prior Std Dev Post Std Dev
High Noise 23.3432 23.3672 0.4 0.29
Med Noise 23.5718 23.5669 0.4 0.04
Low Noise 23.5486 23.5520 0.4 0.0010
Table 3: Prior and posterior mean and standard deviations for the true expectation of as determined from Bayesian inference.

III DNS of Channel Flow

The techniques described in §II have been used to investigate sampling and discretization errors in DNS of a wall-bounded turbulent flow. Specifically, DNS of fully-developed incompressible turbulent channel flow at bulk Reynolds number $Re_b = U_b \delta / \nu = 2800$ has been analyzed, where $U_b$ is the bulk velocity, $\nu$ is the kinematic viscosity, and $\delta$ is the channel half-height. For this case, the friction Reynolds number is $Re_\tau = u_\tau \delta / \nu \approx 180$, where $u_\tau$ is the friction velocity, and it has been previously simulated by many authors [Kim et al., 1987, Hoyas and Jiménez, 2008b]. In the following, quantities are normalized by $U_b$ and $\delta$, unless otherwise indicated. As is customary, a superscript $+$ will indicate normalization in wall units; that is, normalization by $u_\tau$ and $\nu$.

This relatively low Reynolds number case has been chosen to enable testing of the methods developed here because it is computationally tractable to simulate for times longer than usual, using higher resolution than usual. This allows the model predictions to be tested against observed results, as shown in the small domain case. Though the physical results are not scientifically new, the characterization of the two error sources in DNS is novel. This characterization will provide insights relevant to DNS of wall-bounded flows in general. All of the data used in the Bayesian Richardson extrapolation, including computed statistics and estimated sampling error, are available from http://turbulence.ices.utexas.edu.

III.1 Discretization and Sampling Details

The incompressible 3D Navier–Stokes equations are solved using the formulation of Kim, Moin, and Moser [1987] (KMM), as implemented in the code developed by Lee et al. [2013]. This formulation involves integrating evolution equations for the wall-normal vorticity, $\omega_y$, and the Laplacian of the wall-normal velocity, $\nabla^2 v$. Periodic boundary conditions are imposed in the streamwise ($x$) and spanwise ($z$) directions, while in the wall-normal direction ($y$), no-slip conditions are imposed at the walls. A semi-implicit, third-order Runge–Kutta/Crank–Nicolson scheme is used for the time discretization [Spalart et al., 1991]. The flow is driven by a uniform pressure gradient, which is adjusted continuously to maintain a constant mass flux. In space, a Fourier/Galerkin method is used in the streamwise and spanwise directions. Unlike KMM, here a B-spline/collocation representation is used in the wall-normal direction because it allows for flexible non-uniform grids while retaining spectral-like resolution [Kwok et al., 2001]. The B-spline breakpoints are set in the interval $[-1, 1]$ according to

(16)

where $N_b$ is the number of breakpoints and the stretching parameter is set to 0.985 for this study. The Greville abscissae, also called the Marsden–Schoenberg points, implied by these breakpoints [Johnson, 2005, Botella and Shariff, 2003] are used as the collocation points, of which there are $N_y = N_b + k - 2$, where $k$ is the B-spline order (7 in the simulations reported here). To develop Richardson extrapolation estimates, a nominal mesh resolution was defined, along with two uniform de-refinements (by factors of approximately $\sqrt{2}$ and $2$), labeled “coarse” and “coarsest.” The nominal mesh was designed to conform to resolution heuristics typically used in DNS of wall-bounded turbulence. That is, in the $x$- and $z$-directions, the targets are the values of $\Delta x^+$ and $\Delta z^+$ listed for the nominal meshes in Table 4, where $\Delta x = L_x / N_x$ and $\Delta z = L_z / N_z$ with $N_x$ and $N_z$ being the number of Fourier modes in the representation in these directions. In the $y$-direction, the nominal mesh is required to satisfy the near-wall and channel-center values of $\Delta y^+$ listed in Table 4, where $\Delta y$ is the spacing between the breakpoints.
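
For illustration, the count $N_y = N_b + k - 2$ and the collocation points follow directly from the breakpoints. The following is a minimal sketch assuming a clamped (open) knot vector, with the stretching function of (16) left abstract:

```python
import numpy as np

def greville_points(breakpoints, k):
    """Greville abscissae for order-k B-splines on the given breakpoints,
    assuming a clamped knot vector (endpoint knots with multiplicity k).
    Returns len(breakpoints) + k - 2 collocation points."""
    t = np.concatenate([np.repeat(breakpoints[0], k - 1),
                        breakpoints,
                        np.repeat(breakpoints[-1], k - 1)])
    n = len(breakpoints) + k - 2  # dimension of the spline space
    return np.array([t[i + 1 : i + k].mean() for i in range(n)])

# e.g., uniform breakpoints stand in for the stretched ones of Eq. (16):
# y_c = greville_points(np.linspace(-1.0, 1.0, 129), k=7)  # 134 points
```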

A constant timestep was used in the simulations reported here. This is slightly different from typical DNS practice, in which variable timesteps based on a Courant–Friedrichs–Lewy (CFL) condition [Courant et al., 1928] are used. Constant timesteps are used here to ensure equidistant temporal samples, which simplifies the temporal analysis required to estimate the sampling uncertainty. The timestep size for the nominal mesh was selected by monitoring the timestep in a CFL-based variable time-step calculation and choosing a step smaller than the smallest observed timestep. Accordingly, these simulations are somewhat better resolved in time than is common practice.

Two sets of DNS simulations were conducted: one using a relatively small domain with $L_x = 4\pi\delta$ and $L_z = 2\pi\delta$; the other in a larger domain with $L_x = 12\pi\delta$ and $L_z = 4\pi\delta$. The domain size of the latter is identical to that of the simulation reported by Hoyas & Jiménez [Hoyas and Jiménez, 2008b]. The small domain case was studied because the simulations are less expensive, so that it was practical to perform a simulation with a finer grid than nominal (by a factor of 2, called “finest”), to allow validation of the Bayesian Richardson extrapolations. The large domain was simulated because the error estimates for this case will be relevant to the interpretation of the reference simulation results of Hoyas & Jiménez. For each case, the simulation was run until a statistically stationary state was reached, and then statistics were collected over an evolution time $T$, with a sampling period of $L_x / (10\, U_b)$, or 10 samples per flow-through. The numerical parameters for each simulation are given in Table 4.

Name       N_x   N_z   N_y   Δx+    Δz+    Δy+ (wall)   Δy+ (center)   T (flow-throughs)   Δt
Small Domain
Coarsest   96    96    64    24.3   12.2   0.44         9.14           2651.0              0.02
Coarse     136   136   90    17.2   8.6    0.26         6.46           273.5               0.01414
Nominal    192   192   128   12.2   6.1    0.16         4.53           2145.3              0.01
Finest     384   384   256   6.1    3.0    0.07         2.26           709.3               0.005
Large Domain
Coarsest   256   192   64    27.4   12.2   0.44         9.14           40.0                0.02
Coarse     362   270   90    19.4   8.7    0.26         6.46           30.0                0.01414
Nominal    512   384   128   13.7   6.1    0.16         4.53           20.0                0.01
Table 4: Numerical and sampling parameters for the turbulent channel flow simulations conducted at $Re_b = 2800$, $Re_\tau \approx 180$. $N_x$, $N_z$, and $N_y$ are the numbers of streamwise Fourier modes, spanwise Fourier modes, and wall-normal collocation points; $\Delta x^+$, $\Delta z^+$, and $\Delta y^+$ are grid spacings in wall units (the latter at the wall and at the channel center); $T$ is the statistical averaging time in flow-through units; and $\Delta t$ is the timestep. Variables are as defined in section III.1.

III.2 Small Domain Results

The Bayesian Richardson extrapolation procedure has been applied to a variety of statistical quantities of particular interest in the channel flow. For brevity, we show full results, including details of the joint posterior PDF for the true value $\mu$, the discretization error constant $C$, and the order of accuracy $p$, for only two scalars: the centerline mean velocity and the skin friction coefficient. A summary of results for single-point statistics, including the mean velocity, the Reynolds stresses, and the vorticity correlations at multiple points across the channel, is also given. In all cases, the inverse problem is formulated using data from the coarsest, coarse, and nominal mesh resolutions. Data from the finest mesh is reserved to provide a validation test of the procedure.

III.2.1 Centerline Mean Velocity

The results of Bayesian Richardson extrapolation applied to the centerline mean velocity, normalized by the bulk velocity, are presented here. First we examine the results of the inverse problem for the discretization error model. Then, we test the calibrated model by using it to predict the value that should be observed on the finest mesh. Finally, we use the model to examine the discretization error on the nominal mesh.

Figure 11 shows the posterior PDF for the calibration parameters.

Figure 11: Results of the inverse problem for $\mu$, $C$, and $p$ for the centerline mean velocity. The input data are from the coarsest, coarse, and nominal meshes. The diagonal shows the marginal PDFs for each of the parameters, while the off-diagonal entries show samples of the joint posterior PDF projected onto planes in parameter space.

The posterior PDF for $p$ has its maximum at a value that does not correspond directly to the formal order of any of the schemes used here, and there is large uncertainty about the order, as indicated by the wide interquartile range of the marginal distribution for $p$. Despite the large uncertainty about the order of accuracy, the uncertainty regarding the true value $\mu$ is quite small: the difference between the 5th and 95th percentiles is a small fraction of a percent of the mean value.

Given the samples from the posterior PDF represented in Figure 11, one can use the calibrated model to make predictions of the value of the average centerline velocity that should be observed for any value of the resolution parameter $h$ by evaluating

\bar{u}_\text{pred}(h) = \mu + C h^p + \varepsilon, \qquad \varepsilon \sim N(0, \hat{\sigma}^2(h)).   (17)

Here $\hat{\sigma}(h)$ is the sampling uncertainty for the simulation from which the observed average velocity is obtained. Thus, the distribution for $\bar{u}_\text{pred}$ obtained from (17) includes uncertainty from two sources. First, the calibrated discretization error model parameters ($\mu$, $C$, and $p$) are uncertain. Second, the observation is contaminated by sampling error. The consistency of the model with the actual observation can be assessed by simply examining whether the observed value is a plausible draw from the prediction distribution generated according to (17). If the observed value is highly unlikely according to the prediction, the model is declared invalid.
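
Given MCMC samples of $(\mu, C, p)$ (e.g., the `samples` array from the sketch in §II.2.2), the prediction distribution (17) and the associated validation check take only a few more lines; `h_fine`, `sigma_fine`, and `u_obs_fine` are assumed inputs here:

```python
import numpy as np

# Posterior-predictive draws per Eq. (17): propagate each posterior
# sample of (mu, C, p) to the finest resolution and add sampling noise.
mu_s, C_s, p_s = samples[:, 0], samples[:, 1], samples[:, 2]
pred = mu_s + C_s * h_fine**p_s \
       + sigma_fine * np.random.randn(len(samples))

# Validation check: the CDF of the prediction evaluated at the observed
# fine-mesh average; values near 0 or 1 put the observation in a tail
# of the prediction distribution and flag the model as invalid.
cdf_at_obs = np.mean(pred <= u_obs_fine)
```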

Results of this validation check for the centerline velocity are shown in Figure 12.

Figure 12: PDF of the mean centerline velocity for the finest mesh predicted according to (17) (blue) and the observed mean centerline velocity on the finest mesh (green).

Clearly, the observed value is not near the tail of the prediction distribution, indicating that there is no reason to believe the model is invalid.

Finally, Figure 13 shows the estimated discretization error on the nominal mesh, normalized by the observed mean value.

Figure 13: Discretization error, as computed by the calibrated model, for the centerline mean velocity on the nominal mesh.

Note that the discretization error is very small: essentially all of the posterior probability is assigned to error magnitudes that are a tiny fraction of the observed mean value, with half assigned to values smaller still. For comparison, the estimated standard deviation of the sampling error for this mesh is of comparable size. Thus, even after more than 2000 flow-throughs, sampling uncertainty is still significant for this quantity.

Note that the discretization error distribution in Figure 13 has an odd shape, with high probability assigned to negative values very close to zero, but essentially zero probability assigned to positive values. Similar distributions are observed in the results shown in subsequent sections for other quantities as well. This feature can be understood by examining the posterior distribution shown in Figure 11. Specifically, the marginal posterior for $C$ is bounded away from zero. Since $h^p > 0$, $C$ is the only parameter that can change the sign of the discretization error $C h^p$. Thus, since $C$ is bounded away from zero, the model is essentially certain of the sign of the discretization error. Also, this result for $C$ is entirely consistent with monotonic data with sampling uncertainty that is small relative to the changes observed between different resolution simulations. In this case, one should be able to determine the sign of the discretization error with very high confidence.

This explains why the discretization error tends to have all its probability on one side of zero, but we also observe that the probability density is highest near zero. This feature results from the fact that $p$ is not well informed. In particular, large values of $p$, which lead to small discretization error, are not ruled out by the data. Since increasingly larger values of $p$ lead to increasingly smaller values of $C h^p$, the probability clusters near zero.

III.2.2 Skin Friction

Results for the skin friction coefficient are analyzed here in a series of figures analogous to those shown for the centerline mean velocity. To begin, Figure 14 shows the joint posterior PDF for the parameters of the discretization error model.

Figure 14: Results of the inverse problem for $\mu$, $C$, and $p$ for the skin friction coefficient. The input data are from the coarsest, coarse, and nominal meshes. The diagonal shows the marginal PDFs for each of the parameters, while the off-diagonal entries show samples of the joint posterior PDF projected onto planes in parameter space.

While the order of accuracy appears somewhat better informed than for the centerline velocity, there is still significant uncertainty, with the 5th and 95th percentiles at 3.06 and 5.28, respectively. However, the marginal posterior for the true value of the skin friction coefficient is again quite narrow, with the difference between the 5th and 95th percentiles being only a small fraction of a percent of the mean value.

Figure 15 compares the model prediction of the skin friction on the finest mesh, computed from the calibrated results and estimated sampling uncertainty in the finest mesh result using (17), with the observed results.

Figure 15: PDF of the mean skin friction coefficient for the finest mesh predicted according to (17) (blue) and the observed mean skin friction coefficient on the finest mesh (green).

Clearly there is good agreement between the prediction PDF and the observation. As with the centerline velocity, there is no reason to question the discretization error model in this case.

Figure 16: Discretization error, as computed by the calibrated model, for the skin friction on the nominal mesh.

Finally, the estimated discretization error on the nominal mesh is shown in Figure 16. As with the centerline velocity, the discretization error is quite small, with a mean that is a tiny fraction of the mean value. Unlike the centerline velocity, however, the discretization error is large relative to the estimated sampling error standard deviation.

III.2.3 Summary of Results for Single-Point Statistics

Uncertainties in a number of single-point statistics that are generally of interest in DNS are presented here, including the mean velocity, Reynolds stresses, and vorticity variances, as functions of the wall-normal location. As shown for the skin friction and centerline velocity, the first step after performing the Bayesian update to calibrate the discretization error model is to assess the predictions of the model relative to the finest mesh results. Here, this assessment is performed by evaluating the cumulative distribution function (CDF) corresponding to the prediction for the finest mesh value, as given by (17), at the observed result for the finest mesh. This value is important because, if it is close to zero or close to one, then the observed value corresponds to a draw from one of the tails of the prediction distribution.

The results are presented in Figure 26. In each figure, the solid line is the computed value of the CDF at the observed value. The grey shows the region between the 5th and 95th percentiles, which is the 90% credibility interval. When the observed results give a CDF value that falls outside of this region, the model and the observation are in poor agreement. In this case, we cannot have confidence in the model, and it is declared invalid for our purposes. When the values are in the grey region, the model passes this validation check.

Figure 26: Comparison of median predicted change between the nominal and finest mesh results (dashed line) and the observed change (solid line), both with credibility intervals.

For the mean velocity, viscous stress, Reynolds shear stress, wall-normal velocity variance, and spanwise velocity variance, there is reasonable to excellent agreement between the model predictions and the observations. For these quantities, the model is not invalidated by this assessment. Alternatively, for the streamwise velocity variance and the vorticity variances, there are large regions of the channel where the observed value falls outside of the credibility interval. For example, examining the streamwise velocity variance, there is a range of wall-normal locations where the percentile of the observed value is very small, meaning that the model assigns essentially all of its probability to values larger than the observed value. This level of disagreement means that the model cannot be used with confidence. The model for the vorticity variances also appears to be invalid based on this assessment.

A closer examination of the data at these locations shows the problem: the results are not converging monotonically with increasing mesh resolution. For example, at one such location, the value of the statistic on the coarsest, coarse, and nominal meshes was 0.010343, 0.010042, and 0.00997, respectively, while the value observed on the finest mesh was 0.010010, an increase in magnitude compared to the nominal mesh. This non-monotonic behavior cannot be captured by the simple model used here. Further, even if a model capable of producing non-monotonic convergence were used, it would be unlikely to produce an accurate prediction given that the calibration data (i.e., the three coarser mesh results) are monotonic. The vorticity variance data also show non-monotonic behavior with increasing resolution.

Regardless, it is clear that the simple model used here is insufficient for some quantities. Given that the complete discretization is a mix of spectral, high-order B-spline, and 2nd- and 3rd-order time marching schemes, it is not necessarily surprising that the convergence behavior is complex, and it is clear that none of the results are consistent with the final asymptotic behavior of the scheme, which must be 2nd order due to the temporal discretization of the viscous terms. More importantly, the invalidity of the model does not imply that the errors are large. For example, the change between the nominal and finest mesh results for the streamwise velocity variance is quite small. However, for quantities where the model is invalid, we clearly cannot use it to make reliable statements about the discretization error. For this reason, no additional results are shown for the streamwise velocity variance or vorticity variances.

For the quantities where the model is not invalidated, we use it to predict the discretization error for the nominal mesh result. Figures 29 and 33 show these predictions. The median prediction is shown as a solid line with error bars indicating the credibility interval. For comparison, the estimated sampling error is indicated by the dashed lines, which correspond to the credibility interval of the sampling error model. For all quantities, the errors are presented as percentage values.

Figure 29: Estimated discretization error (solid) and sampling uncertainty (dashed) and their credibility intervals.

For the mean velocity and viscous shear stress, both the estimated discretization error and sampling errors are less than 1% in magnitude everywhere across the channel. In fact, for most points, the median error in the mean velocity is less than one quarter of a percent, with nearly all the 90% confidence intervals at less than one half of a percent.

Very near the wall, the discretization errors are estimated to be larger than the sampling error. Away from the wall, the median of the discretization error lies within the credibility interval for the sampling error, but generally there is some probability that the discretization error is larger. On the whole, it appears that neither error is dominant.

Figure 33: Estimated discretization error (solid) and sampling uncertainty (dashed) and their credibility intervals.

Qualitatively similar conclusions can be drawn for the Reynolds shear stress and the wall-normal and spanwise velocity variances, but the errors are somewhat larger. The median discretization error remains small everywhere and is largest near the wall. Near the wall, the discretization error is larger than the sampling error; near the center of the channel, the situation is reversed.

III.3 Large Domain Results

Turbulent channel flow simulations at $Re_\tau \approx 180$ have been performed many times. Currently, one of the most useful simulations is that of Hoyas & Jiménez [Hoyas and Jiménez, 2008b], because of its large spatial domain, because the statistical data are easily accessible online, and because it is part of a series of simulations with Reynolds numbers ranging over an order of magnitude. The large domain simulations reported in this subsection were performed in the same domain size ($L_x = 12\pi\delta$, $L_z = 4\pi\delta$) as Hoyas & Jiménez so that a direct comparison can be made to those results, and so that the uncertainty estimates developed here will be indicative of the uncertainties in this commonly referenced work.

Unlike the smaller box case, only three meshes were used, so it is not possible to test the validity of the calibrated discretization error model against a higher resolution result. However, since the Reynolds number and mesh resolution are the same or similar to the small domain case, we expect that the model is valid for the same quantities. Further, consistent with typical DNS practice, statistics were gathered over only a modest simulation time (tens of flow-throughs), although each flow-through in the large box represents significantly more data than in the small box case. Full details of the simulation are given in Table 4.

III.3.1 Centerline Mean Velocity and Skin Friction

As in sections III.2.1 and III.2.2, we present detailed results for the centerline mean velocity and the skin friction. The posterior PDFs for the calibration parameters are qualitatively similar to the results for the small domain (see Figures 11 and 14) and are therefore not shown. The posterior PDFs are marginally less well informed due to the somewhat larger sampling uncertainty in the large domain results, but do not differ materially. For example, for the centerline velocity, the true value shifts slightly to the left, to a mean of approximately 1.162769, and there is still significant uncertainty about the value of $p$: the mean is 4.84, but the 5th and 95th percentiles lie at 3.19 and 7.10, respectively. As in the small domain case, the order of accuracy for the skin friction is somewhat better informed than that for the centerline velocity, but there is still significant uncertainty, with the 5th and 95th percentiles at 3.06 and 5.28, respectively. However, the marginal posterior for the true value of the skin friction is again quite narrow, with the difference between the 5th and 95th percentiles being only a small fraction of a percent of the mean value.

Figure 34 shows the estimated discretization error for the centerline velocity on the nominal mesh, normalized by the observed mean value.

Figure 34: Discretization error, as computed by the calibrated model, for the centerline mean velocity on the nominal mesh.

As in the small box case, it is almost certain that the discretization error is a small fraction of a percent of the mean value, and the extreme percentiles of its distribution are very close in magnitude to the estimated standard deviation of the sampling error.

The estimated discretization error in the skin friction on the nominal mesh is shown in Figure 35.

Figure 35: Discretization error, as computed by the calibrated model, for the skin friction on the nominal mesh.

As in the small box case, the discretization error is small, with a mean that is a tiny fraction of the observed value. The estimated standard deviation of the sampling error is similarly small.

As mentioned earlier, this domain size is identical to that of a previous simulation reported by Hoyas & Jiménez [Hoyas and Jiménez, 2008b]. For the purposes of verification, a direct comparison between that simulation and the nominal mesh of this study is performed. The centerline velocity on the nominal mesh for our study is within 0.031% of the value of 1.16267 quoted by Hoyas & Jiménez. While small, these values differ by more than a standard deviation of the estimated sampling error. However, these two simulations, while run with similar resolution, did differ in both their choice of wall-normal numerics (B-splines vs. Chebyshev polynomials) and the number of points in $y$ (128 vs. 97). It is therefore plausible that the discrepancy between the values of the centerline velocity in these simulations is a combined result of discretization error and sampling error. Indeed, the observed difference of 0.00036 is in the range of plausible discretization errors, as shown in Figure 34. Similarly, our value of the skin friction coefficient from the nominal mesh differs from the value quoted by Hoyas & Jiménez by an amount that is very small in absolute terms but significantly larger than the estimated sampling error. Recalling the aforementioned differences between the present wall-normal numerics and those of Hoyas & Jiménez, it is plausible that this discrepancy is also due to discretization error; indeed, it is plausible as a value of the discretization error shown in Figure 35.

III.3.2 Summary of Results for Single-Point Statistics

This section shows the estimated discretization and sampling errors for the mean velocity, viscous shear stress, Reynolds shear stress, wall-normal velocity variance, and spanwise velocity variance. Recall that the discretization error model for these quantities passed the validation assessment for the small domain case, as shown in §III.2.3.

The results are shown in Figures 38 and 42, which are analogous to Figures 29 and 33 for the small domain results.

Figure 38: Estimated discretization error (solid) and sampling uncertainty (dashed) and their credibility intervals.
Figure 42: Estimated discretization error (solid) and sampling uncertainty (dashed) and their credibility intervals.

As in the small domain case, the estimated discretization errors in the mean velocity and viscous stress are less than one percent nearly everywhere. The larger percentage errors in the viscous stress near the centerline are an artifact of the local viscous stress going to zero at the center of the channel. The largest percent error observed anywhere aside from the centerline is roughly four percent, in one of the Reynolds stress components very near the wall. Of course, the Reynolds stresses vanish at the wall, meaning that this error is still very small in absolute terms. Finally, in general, the discretization errors observed are largest near the wall. In this region, they tend to be larger than the sampling error. In the center of the channel, the sampling error is generally larger.

In addition to providing an assessment of the discretization error on the nominal mesh, the Bayesian procedure provides an estimate of the true value in the limit of infinite resolution. This estimate is provided by the posterior distribution for the true value that is obtained from the Bayesian update performed to calibrate the discretization error model. These posterior estimates for the true profiles are plotted in Figure 48 for the quantities for which the discretization error model was found valid for the small domain results.

Figure 48: Estimated true values obtained from the posterior distribution (five panels). The grey region shows the credibility interval between the th and th percentiles of the posterior. All quantities are normalized using the nominal value of $u_\tau$.

All quantities are plotted using wall normalization but, because the friction velocity $u_\tau$ is itself an uncertain quantity, the nominal value of $u_\tau$ is used in order to avoid introducing additional uncertainty. The resulting intervals are small. For the mean velocity and viscous shear stress, the uncertainty is small enough that the credibility interval appears as just a thick line. For the Reynolds stress and the wall-normal and spanwise variances, the effect of the uncertainty is more visible, particularly near the peak values, but is still quite small.
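To make the calibration behind these posterior estimates concrete, the sketch below is a minimal Python illustration, assuming a single statistic observed on three meshes; the numbers, flat priors, and random-walk Metropolis sampler are our illustrative choices, not the authors' code or data. The statistic on each mesh is modeled as the true value $f_0$ plus a power-law discretization error $C h^p$ and Gaussian sampling noise with known standard deviation.

```python
import numpy as np

# Illustrative placeholder inputs: a statistic computed on three meshes,
# its estimated sampling standard deviations, and relative mesh spacings.
h = np.array([1.0, 1.5, 2.0])           # spacing relative to the nominal mesh
y = np.array([1.1623, 1.1631, 1.1648])  # statistic observed on each mesh
s = np.array([4e-4, 4e-4, 4e-4])        # sampling std dev on each mesh

def log_post(theta):
    """Log-posterior for the model y_i = f0 + C*h_i**p + N(0, s_i**2)."""
    f0, C, p = theta
    if not 0.0 < p < 8.0:               # weakly informative prior on the order
        return -np.inf
    resid = y - (f0 + C * h**p)
    return -0.5 * np.sum((resid / s)**2)

# Random-walk Metropolis over (f0, C, p).
rng = np.random.default_rng(0)
theta = np.array([y[0], 0.0, 2.0])      # start at the nominal value, no error
step = np.array([2e-4, 2e-4, 0.5])      # proposal scales
lp = log_post(theta)
chain = []
for _ in range(200000):
    prop = theta + step * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
f0_post = np.array(chain[40000:])[:, 0]      # discard burn-in, keep f0

lo, hi = np.percentile(f0_post, [2.5, 97.5])
print(f"f0: mean {f0_post.mean():.5f}, 95% credible interval ({lo:.5f}, {hi:.5f})")
print(f"discretization error on the nominal mesh: {y[0] - f0_post.mean():.2e}")
```

The posterior for $f_0$ then yields both the credibility intervals plotted above and, through $y_1 - f_0$, the discretization error on the nominal mesh; with real data, the sampling standard deviations $s$ would come from the correlation-aware estimator of §II.1 (see also Appendix A).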

IV Conclusions

DNS data are crucial to advancing understanding of turbulent flow physics and to calibration of engineering models of turbulent flow. Given these uses, it is important to fully understand and characterize the errors and uncertainties in computed statistical outputs. However, because of complications due to sampling error, systematic studies of discretization error are not standard for DNS. In this work, two enabling utilities have been developed and applied: a sampling error estimator that accounts for correlation in the data used to compute statistics and a Bayesian extension of Richardson extrapolation that can be used to estimate discretization error in the presence of uncertainty due to finite sampling. These tools enable systematic estimation of both sampling and discretization errors in statistical quantities computed from simulations of chaotic systems.

The results for the Lorenz equations demonstrate that these tools perform well in a simple, well-understood setting. However, the results for DNS of channel flow indicate that their use in a complex setting is more difficult. One obvious complication is that discretization errors resulting from practical simulations may not be in the asymptotic regime. The simple discretization error representation used here was found to be adequate for many important quantities, including the mean velocity and Reynolds shear stress. Further, the estimated errors in these quantities are small, indicating that the usual heuristics used to design meshes for DNS of wall-bounded turbulence are reasonable. Thus, we conclude that simulations of channel flow based on these resolution heuristics with similar sampling time, such as those reported by Jiménez and co-authors Del Álamo and Jiménez [2003], Del Álamo et al. [2004], Hoyas and Jiménez [2006, 2008], can be expected to have errors of the same magnitude as those reported here.

However, for other quantities, most notably the streamwise velocity variance, the discretization error model is invalidated by comparison against higher-resolution simulations than those used to calibrate the model. Due to this failure, we are unable to quantify the discretization error in these quantities with confidence. Nonetheless, the errors appear to be small because the observed change from the nominal to finest resolution results is quite small.

It may be possible to solve this problem by posing a more complex discretization error model, which could be based on retaining additional terms in a Taylor series expansion of the discretization error, as sketched below. Future work should focus on investigating such models as well as applying this technique to investigate resolution heuristics used for other numerical schemes and classes of flow. While it will likely not be practical to apply the full Bayesian Richardson extrapolation technique for each new simulation, by assessing the relevant resolution heuristics in a computationally tractable setting, as done for low-Reynolds-number channel flow here, one can develop estimates of the expected numerical accuracy of the results of more demanding simulations. This can then be combined with sampling error estimates, which are tractable for even expensive DNS, to obtain a complete characterization of DNS uncertainties.
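As an illustration of such an extension (the notation here is ours, not a formulation taken from the literature), retaining one additional term in the expansion would generalize the single-term error model to

$$f_h = f_0 + C_1 h^{p} + C_2 h^{q} + \epsilon_h, \qquad q > p,$$

where $f_h$ is the statistic computed with mesh spacing $h$, $f_0$ is the true value, and $\epsilon_h$ is the sampling error; the coefficients $C_1, C_2$ and exponents $p, q$ would be calibrated jointly with $f_0$. The added flexibility comes at a price: additional mesh resolutions are needed for the posterior to constrain the extra parameters.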

Acknowledgments

The work presented here was supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].

The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported here. Finally, the authors wish to thank Mr. Myoungkyu Lee for the use of his simulation code as well as his assistance in generating several of the DNS runs.

Appendix A Motivation for the Sampling Error Estimator

If the samples $u_1, u_2, \ldots, u_N$ were independent, the classical central limit theorem (CLT) states that, as $N \to \infty$,

$$\sqrt{N}\left(\bar{u}_N - \mu\right) \xrightarrow{d} \mathcal{N}\left(0, \sigma^2\right),$$

where $\bar{u}_N = \frac{1}{N}\sum_{i=1}^{N} u_i$ is the sample mean, $\mu = \mathbb{E}[u_i]$, and $\sigma^2 = \mathrm{Var}(u_i)$. However, the samples resulting from a DNS calculation have a priori unknown correlation structure and, at least for small temporal or spatial separation, are certainly not independent.

To avoid the complications of correlated samples, many authors downsample instantaneous measurements until the retained samples are arguably uncorrelated and then use an estimate based on the classical CLT. However, optimal downsampling requires coarsening the data “just enough” to decorrelate the signal but not “too much,” which would discard useful data [Zwiers and von Storch, 1995]. As increasing the number of independent samples is computationally expensive in DNS, it is imperative to extract all possible information from the data. Thus, we seek a method to estimate the uncertainty in statistics computed from correlated samples.

The method developed in §II.1 is motivated by an extension of the CLT from a sequence of independent, identically distributed random variables to an α-mixing sequence. For the precise statement of the theorem see Billingsley [2012, Theorem 27.4] or Zhengyan and Chuanrong [1996, Theorem 3.2.1]. Loosely speaking, the theorem states that, if random variables “far” apart in the sequence are nearly independent, which is expected for data resulting from DNS, then as $N \to \infty$,

$$\sqrt{N}\left(\bar{u}_N - \mu\right) \xrightarrow{d} \mathcal{N}\left(0, \sigma_\infty^2\right),$$

where

$$\sigma_\infty^2 = \sigma^2\left(1 + 2\sum_{k=1}^{\infty}\rho_k\right).$$

Thus,

$$\mathrm{Var}\left(\bar{u}_N\right) \approx \frac{\sigma^2}{N}\left(1 + 2\sum_{k=1}^{\infty}\rho_k\right),$$

where

$$\rho_k = \frac{\mathbb{E}\left[(u_i - \mu)(u_{i+k} - \mu)\right]}{\sigma^2}$$

is the autocorrelation at separation $k$. Thus, the extension for weak dependence simply modifies the effective number of samples from the classical CLT. That is,

$$\mathrm{Var}\left(\bar{u}_N\right) \approx \frac{\sigma^2}{N_{\mathrm{eff}}},$$

where

$$N_{\mathrm{eff}} = \frac{N}{1 + 2\sum_{k=1}^{\infty}\rho_k}.$$
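As a concrete illustration of this correction, the following Python sketch computes the sample mean, the effective sample size, and the resulting standard error for a correlated series. Note that this is a minimal truncated-sum estimator for illustration only, not the autoregressive-model-based estimator developed in §II.1.

```python
import numpy as np

def mean_and_sampling_error(u, max_lag=None):
    """Sample mean and its standard error for a correlated series,
    using the effective sample size N_eff = N / (1 + 2*sum(rho_k))."""
    u = np.asarray(u, dtype=float)
    N = u.size
    d = u - u.mean()
    var = np.mean(d * d)
    if max_lag is None:
        max_lag = N // 4                      # crude cap on the lag sum
    rho = np.array([np.mean(d[: N - k] * d[k:]) / var
                    for k in range(1, max_lag + 1)])
    # Truncate at the first non-positive autocorrelation: summing the
    # noisy tail of the estimated rho_k can badly corrupt N_eff.
    nonpos = np.flatnonzero(rho <= 0.0)
    cut = nonpos[0] if nonpos.size else max_lag
    N_eff = N / (1.0 + 2.0 * np.sum(rho[:cut]))
    return u.mean(), np.sqrt(var / N_eff), N_eff

# Check on an AR(1) process, u_i = a*u_{i-1} + xi_i, whose correlation
# structure is known: rho_k = a**k, so N_eff should be near N*(1-a)/(1+a).
rng = np.random.default_rng(1)
a, N = 0.9, 200000
u = np.empty(N)
u[0] = rng.standard_normal()
for i in range(1, N):
    u[i] = a * u[i - 1] + rng.standard_normal()
m, se, N_eff = mean_and_sampling_error(u)
print(f"mean = {m:.4f} +/- {se:.4f}, "
      f"N_eff = {N_eff:.0f} (theory ~ {N * (1 - a) / (1 + a):.0f})")
```

For the AR(1) test case, $1 + 2\sum_k \rho_k = (1+a)/(1-a)$, so $N_{\mathrm{eff}} = N(1-a)/(1+a)$; the printed value should fall near this theoretical result, illustrating how strongly correlation inflates the uncertainty relative to the naive $\sigma/\sqrt{N}$ estimate.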
References

  • ASME V&V 10 [2006] ASME Committee V&V 10. Standard for Verification and Validation in Computational Solid Mechanics. ASME, 2006.
  • Alfonsi [2011] Giancarlo Alfonsi. On direct numerical simulation of turbulent flows. Applied Mechanics Reviews, 64(2):020802, 2011. doi: 10.1115/1.4005282.
  • Andersen [1974] N. Andersen. On the calculation of filter coefficients for maximum entropy spectral analysis. Geophysics, 39(1):69–72, February 1974. doi: 10.1190/1.1440413.
  • Andersen [1978] N. Andersen. Comments on the performance of maximum entropy algorithms. Proceedings of the IEEE, 66(11):1581–1582, November 1978. doi: 10.1109/PROC.1978.11160.
  • Billingsley [2012] Patrick Billingsley. Probability and Measure. Wiley, 4th edition, 2012.
  • Botella and Shariff [2003] Olivier Botella and Karim Shariff. B-spline methods in fluid dynamics. International Journal of Computational Fluid Dynamics, 17(2):133–149, March 2003. doi: 10.1080/1061856031000104879.
  • Box et al. [2008] George E. P. Box, Gwilym M. Jenkins, and Gregory C. Reinsel. Time Series Analysis: Forecasting and Control. John Wiley, fourth edition, June 2008. ISBN 9780470272848.
  • Broersen [2000] Piet. M. T. Broersen. Finite sample criteria for autoregressive order selection. IEEE Transactions on Signal Processing, 48(12):3550–3558, December 2000. doi: 10.1109/78.887047.
  • Broersen [2002] Piet. M. T. Broersen. Automatic spectral analysis with time series models. IEEE Transactions on Instrumentation and Measurement, 51(2):211–216, April 2002. doi: 10.1109/19.997814.
  • Broersen [2006] Piet M. T. Broersen. Automatic autocorrelation and spectral analysis. Springer–Verlag, 2006. doi: 10.1007/1-84628-329-9.
  • Burg [1975] John P. Burg. Maximum Entropy Spectral Analysis. PhD thesis, Stanford University, 1975.
  • Calvetti and Somersalo [2007] Daniela Calvetti and Erkki Somersalo. Introduction to Bayesian Scientific Computing. Springer, 2007.
  • Cheung et al. [2011] S. H. Cheung, T. A. Oliver, E. E. Prudencio, S. Prudhomme, and R. D. Moser. Bayesian uncertainty analysis with applications to turbulence modeling. Reliability Engineering & System Safety, 96:1137–1149, 2011. doi: 10.1016/j.ress.2010.09.013.
  • Robert [2001] Christian P. Robert. The Bayesian Choice. Springer, 2001.
  • Courant et al. [1928] R. Courant, K. Friedrichs, and H. Lewy. Über die partiellen Differenzengleichungen der mathematischen Physik. Math. Ann., 100(1):32–74, 1928. ISSN 0025-5831. doi: 10.1007/BF01448839. URL http://dx.doi.org/10.1007/BF01448839.
  • Cox [1961] R. T. Cox. The Algebra of Probable Inference. Johns Hopkins University Press, 1961.
  • Del Álamo and Jiménez [2003] Juan C. Del Álamo and Javier Jiménez. Spectra of the very large anisotropic scales in turbulent channels. Physics of Fluids, 15(6):L41–L44, June 2003. doi: 10.1063/1.1570830.
  • Del Álamo et al. [2004] Juan C. Del Álamo, Javier Jiménez, Paulo Zandonade, and Robert D. Moser. Scaling of the energy spectra of turbulent channels. Journal of Fluid Mechanics, 500:135–144, February 2004. doi: 10.1017/S002211200300733X.
  • Donzis et al. [2008] D. A. Donzis, P. K. Yeung, and K. R. Sreenivasan. Dissipation and enstrophy in isotropic turbulence: Resolution effects and scaling in direct numerical simulations. Physics of Fluids, 20:045108, 2008. doi: 10.1063/1.2907227.
  • Drake and Rossum [2011] Fred L. Drake and Guido Rossum. The Python Language Reference Manual. Network theory Ltd., 2011. ISBN 9781906966140.
  • Durbin and Reif [2011] Paul A. Durbin and B. A. Pettersson Reif. Statistical theory and modeling for turbulent flows. Wiley, 2011. ISBN 9780470689318.
  • Eaton et al. [2008] John W. Eaton, David Bateman, and Søren Hauberg. GNU Octave Manual Version 3. Network Theory Limited, 2008. URL http://www.octave.org.
  • Faber [1986] L. J. Faber. Commentary on the denominator recursion for burg’s block algorithm. Proceedings of the IEEE, 74(7):1046–1047, July 1986. doi: 10.1109/PROC.1986.13584.
  • Foreman-Mackey et al. [2012] Daniel Foreman-Mackey, David W. Hogg, Dustin Lang, and Jonathan Goodman. emcee: The MCMC hammer, February 2012. URL http://arxiv.org/abs/1202.3665.
  • Goodman and Weare [2010] Jonathan Goodman and Jonathan Weare. Ensemble samplers with affine invariance. Communications in Applied Mathematics and Computational Science, 5(1):65–80, January 2010. doi: 10.2140/camcos.2010.5.65.
  • Hoyas and Jiménez [2006] Sergio Hoyas and Javier Jiménez. Scaling of the velocity fluctuations in turbulent channels up to Reτ = 2003. Physics of Fluids, 18(1):011702, January 2006. doi: 10.1063/1.2162185.
  • Hoyas and Jiménez [2008] Sergio Hoyas and Javier Jiménez. Reynolds number effects on the Reynolds-stress budgets in turbulent channels. Physics of Fluids, 20(10):101511, 2008. doi: 10.1063/1.3005862.
  • Jaynes [2003] E. T. Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, 2003.
  • Jiménez and Moser [2007] J. Jiménez and R. D. Moser. What are we learning from simulating wall turbulence? Phil. Trans. R. Soc. A, 365:715–732, 2007. doi: 10.1098/rsta.2006.1943.
  • Johnson [2005] R. Johnson. Higher order B-spline collocation at the Greville abscissae. Applied Numerical Mathematics, 52(1):63–75, January 2005. doi: 10.1016/j.apnum.2004.04.002.
  • Kaipio and Somersalo [2005] Jari Kaipio and Erkki Somersalo. Statistical and Computational Inverse Problems. Springer, 2005.
  • Kim et al. [1987] J. Kim, P. Moin, and R. D. Moser. Turbulence statistics in fully developed channel flow at low Reynolds number. J. Fluid Mech., 177:133–166, 1987.
  • Kwok et al. [2001] Wai Y. Kwok, Robert D. Moser, and Javier Jiménez. A critical evaluation of the resolution properties of B-Spline and compact finite difference methods. Journal of Computational Physics, 174(2):510–551, December 2001. doi: 10.1006/jcph.2001.6919.
  • Lee et al. [2013] Myoungkyu Lee, Nicholas Malaya, and Robert D. Moser. Petascale direct numerical simulation of turbulent channel flow on up to 768k cores. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC13), Denver, CO, April 2013.
  • Moin and Mahesh [1998] Parviz Moin and Krishnan Mahesh. Direct numerical simulation: A tool in turbulence research. Annual Review of Fluid Mechanics, 30(1):539–578, 1998. doi: 10.1146/annurev.fluid.30.1.539.
  • Oberkampf and Roy [2010] W. L. Oberkampf and Christopher J. Roy. Verification and Validation in Scientific Computing. Cambridge University Press, October 2010.
  • Oliver and Moser [2011] T. A. Oliver and R. D. Moser. Bayesian uncertainty quantification applied to RANS turbulence models. Journal of Physics: Conference Series, 318(4):042032, 2011. doi: 10.1088/1742-6596/318/4/042032.
  • AIAA [1998] AIAA Computational Fluid Dynamics Committee on Standards. AIAA Guide for Verification and Validation of Computational Fluid Dynamics Simulations. AIAA, 1998. G-077-1998.
  • Percival [1993] Donald B. Percival. Three curious properties of the sample variance and autocovariance for stationary processes with unknown mean. The American Statistician, 47(4):274–276, 1993. doi: 10.1080/00031305.1993.10475997.
  • Pope [2000] Stephen B. Pope. Turbulent Flows. Cambridge University Press, October 2000. ISBN 0521598869.
  • Priestley [1981] M. B. Priestley. Spectral analysis and time series: Univariate series, volume 1. Academic Press, 1981. ISBN 0125649010.
  • Spalart et al. [1991] P. R. Spalart, R. D. Moser, and M. M. Rogers. Spectral methods for the Navier–Stokes equations with one infinite and two periodic directions. J. Comput. Phys., 96:297–324, 1991.
  • Roache [1998] Patrick J. Roache. Verification and validation in computational science and engineering. Hermosa Publishers, 1998. ISBN 0913478083.
  • Robert and Casella [2010] Christian Robert and George Casella. Monte Carlo Statistical Methods. Springer, 2010.
  • Roy [2005] C. J. Roy. Review of code and solution verification procedures for computational simulation. Journal of Computational Physics, 205(1):131–156, May 2005. doi: 10.1016/j.jcp.2004.10.036.
  • von Storch and Zwiers [2001] Hans von Storch and Francis W. Zwiers. Statistical analysis in climate research. Cambridge University Press, March 2001. ISBN 0521012309.
  • Thiébaux and Zwiers [1984] H. J. Thiébaux and F. W. Zwiers. The interpretation and estimation of effective sample size. J. Climate Appl. Meteor., 23(5):800–811, May 1984. doi: 10.1175/1520-0450(1984)023<0800:TIAEOE>2.0.CO;2.
  • Trenberth [1984] K. E. Trenberth. Some effects of finite sample size and persistence on meteorological statistics. Part I: Autocorrelations. Monthly Weather Review, 112, 1984. doi: 10.1175/1520-0493(1984)112%3C2359:SEOFSS%3E2.0.CO;2.
  • Wilcox [2006] David C. Wilcox. Turbulence modeling for CFD. DCW Industries, third edition, 2006. ISBN 9781928729082.
  • Zhengyan and Chuanrong [1996] Lin Zhengyan and Lu Chuanrong. Limit Theory for Mixing Dependent Random Variables, volume 378 of Mathematics and Its Applications. Science Press/Kluwer Academic Publishers, 1996.
  • Zwiers and von Storch [1995] Francis W. Zwiers and Hans von Storch. Taking serial correlation into account in tests of the mean. J. Climate, 8(2):336–351, February 1995. doi: 10.1175/1520-0442(1995)008%3C0336:TSCIAI%3E2.0.CO;2.