Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs
A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description of the evolution of waves. The model is analyzed using random sampling techniques and non-intrusive methods based on generalized Polynomial Chaos (PC). These methods allow us to estimate the probability distribution of the solution accurately and efficiently, and require only the computation of the solution at different points in the parameter space, allowing existing deterministic solvers to be reused. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental benchmarks often used for validation of deterministic water wave models. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies between deterministic simulations and experimental measurements could be partially explained by the variability in the model input. We finally present a synthetic experiment studying the sensitivity of the wave load on an off-shore structure to a number of input uncertainties. In the numerical examples presented, the PC methods have exhibited fast convergence, suggesting that the problem is amenable to analysis with such methods.
In coastal and off-shore engineering it is important to design maritime structures that can withstand critical failures due to wave-induced loadings. The most extreme wave-induced loadings can be estimated from direct measurements, laboratory experiments and simulation-based tools which can account for the wave kinematics sufficiently accurately. It is still common to predict wave kinematics using numerical tools that have been validated by a single or a few deterministic simulations and compared to idealized physical experiments, e.g., in wave tanks. These validation procedures, and at a greater scale real field simulations, are subject to a number of uncertainties that could lead to unexpected results. The study of the influence of these uncertainties on the resulting engineering analyses and decisions ultimately requires a shift from a deterministic approach to a probabilistic approach. Such engineering analyses are useful for risk management aimed at reducing risk in design and operations.
The research field of Uncertainty Quantification (UQ) includes all the techniques used to rigorously study the uncertainties entering a system and their influence on its dynamics. These techniques deliver tolerance intervals and probability distributions of system outputs, denoted Quantities of Interest (QoIs). Given a deterministic model describing the dynamical system, denoted the forward model, uncertainty quantification can be split into four steps:
identification of sources of uncertainty and QoIs,
quantification of uncertainty sources by means of probability distributions,
uncertainty propagation through the system,
sensitivity analysis of the QoIs with respect to the individual sources of uncertainty.
Uncertainties in coastal and off-shore engineering are either related to weather conditions or to structural/bathymetry characteristics. In the first case, the weather conditions are commonly grouped into a number of sea states characterized by a number of parameters, which determine particular probability distributions associated with waves, wind, currents, sea level and ice characteristics. Transitions between different sea states are modeled by stationary processes governing these parameters. The characterization of these uncertainties – steps (a-b) – requires extensive measurements, and the current practice is presented in .
A significant part of existing works on the propagation of uncertainty and sensitivity analysis of water waves use the Shallow Water Equations (SWE) as forward model, thus addressing mostly tidal waves, where vertical velocities are negligible. The focus is usually on the characterization of extreme responses or, in other words, the probability of failure of certain safety conditions. In  such analyses are performed using the Monte Carlo (MC) method. In  the MC and the Polynomial Chaos (PC) method in the Galerkin and collocation forms are compared when applied to the SWE modeling the propagation of a wave over a submerged hump. In  the PC method in its collocation form is used for the study of uncertainties in flood-hazard mapping. In  random sampling methods, sparse grids, PC in Galerkin and collocation form, and a novel quadrature technique called Compound Uncorrelated Dimension (CUD) quadrature are compared when applied to the SWE modeling flood prediction under an uncertain river bed topography and characteristics. In  a combination of non-intrusive (collocation) PC and ANOVA decomposition is used for the propagation and sensitivity analysis of the uncertain parameters entering the SWE modeling the runup of waves. Unlike the preceding works,  studies the influence of uncertainties on the phase-averaged equation, which is suitable for slowly varying wave fields, e.g. ocean waves in deep water.  considers uncertainties entering the source term, the boundary conditions and the current field, adopting PC and ANOVA decomposition approaches.
We consider a fully nonlinear and dispersive potential flow model, which allows the study of the phase-resolved propagation of water waves over varying bathymetry. The model has traditionally been considered computationally very costly; however, recent progress in the design of scalable linear solvers has made the model much more tractable for improved analysis of predictions via uncertainty quantification methods. UQ analyses require an overall computational cost which, due to the curse of dimensionality, often scales badly with the computational cost of the forward model. Thus, efficient numerical solvers for the forward model and efficient UQ techniques must be used. We employ state-of-the-art software  to obtain efficient and accurate simulations of the forward model, while we turn to recently developed techniques based on polynomial chaos  to perform uncertainty quantification.
These techniques will be applied to classical benchmarks, such as , where different sources of uncertainty and QoIs will be investigated. Due to the lack of data, some assumptions will be made about the probability distributions of the sources of uncertainty – step (b) – which in hydrodynamics simulations are commonly inlet/outlet conditions (boundary conditions), bathymetry data and structural positions (geometry). In experimental settings, all these uncertainties can be classified as epistemic , because they can in principle be reduced by better measurements and/or more precise experimental designs.
We will propagate the uncertainties through the dynamical system – step (c) – using traditional sampling techniques, such as Monte Carlo methods, and modern techniques based on generalized Polynomial Chaos (gPC) . Non-intrusive approaches such as stochastic collocation and sparse grids will be preferred to intrusive approaches, due to the ability of the former to re-use existing code, avoiding the need for re-engineering existing software.
For one of the analyzed test cases we will identify the sources of uncertainty to which the QoI is most sensitive – step (d). Techniques addressing this problem exist and we will focus on the variance-based ones , where the sensitivity of a QoI to a certain input uncertainty is expressed by the amount of variance of the QoI which is due to that input uncertainty. The method will be used to quantify the sensitivity of the wave load on an off-shore structure to a number of input uncertainties. The sensitivities can then be used to refine the forward model, disregarding uncertainties on the least influential parameters, i.e., those that do not significantly influence the QoIs.
The paper is organized as follows. In Section 2 we introduce the governing equations for the deterministic description of nonlinear and dispersive water waves based on potential theory. In Section 3 we describe how a model with random inputs can be formulated and parametrized. Section 4 presents the theory of random sampling methods and of polynomial chaos methods for the forward propagation of uncertainty, and the description of the method of Sobol’ for sensitivity analysis. In Section 5, the effects of parametric uncertainty in bathymetry and wave input are studied and numerical experiments are compared for different approaches. Section 6 contains concluding remarks.
2 Mathematical formulation of the forward model
We consider unsteady water waves described by a potential model for three-dimensional fully nonlinear and dispersive free surface flows under the influence of gravity. The flow is assumed inviscid and irrotational. The model can, without simplifications, be used for short and long wave propagation in both shallow and deep water where viscous and rotational effects are negligible. The sea bed is assumed variable and impermeable.
We introduce a Cartesian coordinate system with the horizontal and the vertical dimensions, where the coordinate points upwards. The functions and describe respectively the depth of the sea bed and the free surface. The still water level is given by .
2.1 The deterministic potential flow model
The evolution of water waves over an arbitrary sea bed are described by the kinematic and dynamic free surface boundary conditions
where . We will consider waves in a spatial domain (fluid volume), and a time domain with final time . For the fluid volume, a Laplace problem defines the scalar velocity potential
Using a classical -transformation
the Laplace problem can be written as
The relation between the scalar velocity potential function and velocity field is
The governing equations can be solved in the setting of a numerical wave tank and are then subject to initial and boundary conditions
where wave generation and absorption are done using a line relaxation method . A complete derivation of the equations is given in . These model equations can be solved numerically using flexible-order finite differences  and the massively parallel implementation  enables fast hydrodynamics computations . A fast solver is a prerequisite for enabling UQ within acceptable time frames for realistic engineering applications.
2.2 Calculation of loads on structures
The estimation of the pressure in the water column is needed for load calculations. We start from the momentum equation
where is the local pressure and is the fluid density. By integrating in the vertical direction and assuming that the flow is inviscid and irrotational, it is possible to derive the pressure equation
The remaining integral term can be estimated numerically, e.g. by using a sufficiently accurate numerical quadrature rule.
For the estimation of structural forces, we can integrate the expression for the pressure given in Eq. (2) in the vertical direction, to obtain the force as
which for a vertical structure takes the form
The integral terms can be estimated numerically.
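Such vertical integrals are readily approximated with standard quadrature. The sketch below is purely illustrative and is not the model's pressure equation: the depth, surface elevation and the hydrostatic pressure profile are hypothetical stand-ins, used only to show the trapezoidal integration of a sampled pressure profile over the water column.

```python
import numpy as np

# Illustrative only: integrate a sampled pressure profile over the water
# column with the trapezoidal rule. The hydrostatic profile
# p(z) = rho * g * (eta - z) is a hypothetical stand-in for the model's
# pressure equation.
rho, g = 1000.0, 9.81      # density [kg/m^3], gravity [m/s^2]
h, eta = 2.0, 0.3          # hypothetical depth and surface elevation [m]

z = np.linspace(-h, eta, 101)                    # vertical quadrature points
p = rho * g * (eta - z)                          # pressure in the column [Pa]
F = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(z))  # trapezoidal rule [N/m]

# for this linear profile the trapezoidal rule is exact:
# F = rho * g * (eta + h)^2 / 2
print(F)
```

For smooth but nonlinear pressure profiles the trapezoidal rule converges at second order in the grid spacing; a higher-order rule can be substituted if the pressure is resolved with few points.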
3 The model with random inputs
The reformulation with random inputs of and leads to a system of PDEs for the unknowns and , which are now random fields:
3.1 Uncertain bathymetries, random fields and Karhunen-Loève Expansion
The bathymetry function describing still-water depth in the considered domain can be uncertain. In laboratory experiments, this can be due to manufacturing errors, whereas in the real settings this is often due to the lack of precise knowledge of the bottom topography or the unknown action of sedimentation. Since this uncertainty is spatially varying, it must be treated as a random field. We will treat this random field using the Karhunen-Loève (KL) expansion  which is a useful parametrization technique when the field is characterized by a certain amount of spatial correlation.
Let be a spatially varying random field over a spatial domain with mean and covariance function . Then the bathymetry function can be parametrized as an infinite series
where the convergence is in , are random variables, , and are the solutions of the generalized eigenvalue problem
If is a Gaussian random field, then .
For practical computations is truncated at a desired order . It is easy to check how much of the variance of the original random field is retained by such an approximation, using that
where the orthogonality of is exploited. There are several options regarding the correlation kernel . All of these are problem-dependent and an appropriate characterization of the random field has to be performed prior to the construction of the KL-expansion. In this work, we will use the exponential covariance kernel
where is the correlation length. Figure 1 shows realizations of the KL-expansions of a 1D random field for the submerged bar experiment considered in Section 5.1 with exponential covariance kernel and zero mean for different correlation lengths . The total variance represented by the KL-expansions is fixed to (the total variance of with exponential covariance kernel is ). In Figures ? and ?, fields with different correlation lengths are illustrated. Shorter correlation lengths determine a slower decay of the expansion coefficients in and thus an expansion retaining a higher number of terms is required to capture the higher local variability.
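A minimal numerical sketch of such a truncated KL-expansion can be obtained with a Nyström-type discretization of the generalized eigenvalue problem. The code below is illustrative only: the grid size, domain length, correlation length and the 95% variance target are assumptions made for the example, not the settings used in the paper.

```python
import numpy as np

def kl_expansion(n=200, L=25.0, corr_len=2.0, sigma2=1.0, frac=0.95):
    """Truncated KL-expansion of a 1D Gaussian field with exponential
    covariance, via a simple Nystrom discretisation (illustrative)."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    # covariance matrix of the field on the grid
    K = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # K * dx approximates the covariance integral operator
    lam, phi = np.linalg.eigh(K * dx)
    idx = np.argsort(lam)[::-1]                       # largest modes first
    lam, phi = lam[idx], phi[:, idx] / np.sqrt(dx)    # L2-normalised modes
    # keep enough terms to retain `frac` of the total variance sigma2 * L
    m = int(np.searchsorted(np.cumsum(lam), frac * sigma2 * L)) + 1
    return x, lam[:m], phi[:, :m]

def sample_field(lam, phi, rng):
    """One realisation: sum_k sqrt(lam_k) xi_k phi_k with xi_k ~ N(0,1)."""
    xi = rng.standard_normal(len(lam))
    return phi @ (np.sqrt(lam) * xi)

x, lam, phi = kl_expansion()
field = sample_field(lam, phi, np.random.default_rng(0))
print(len(lam), "KL terms retain at least 95% of the total variance")
```

The number of retained terms grows as the correlation length shrinks, which is the behaviour described above for the short-correlation-length fields.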
We are interested in studying the propagation of uncertainties through the dynamical system . To reduce the notation used, let . We describe the results in terms of the probability distribution and/or the first moments, e.g., mean and variance, of :
In this work we will focus exclusively on non-intrusive methods, which require a minimal development effort. In particular, the existing solvers are treated as black boxes and the non-intrusive methods need only be wrapped around them. In contrast, intrusive methods require the development of new solvers based on a mixed discretization of the stochastic and the spatial models. These methods are usually better at dynamically adapting to time-dependent problems , but their implementation is often very demanding – sometimes prohibitive – for existing customized solvers.
4.1 Pseudo-random sampling methods
Among the existing UQ techniques, pseudo-random sampling methods are the most widely used. The most notable of these techniques is the Monte Carlo (MC) method. It is based on the law of large numbers, which states that given the random vector and the functional ,
for . In the definition above, is an ensemble of samples drawn from the probability distribution of and a.s. stands for almost surely, i.e., the convergence holds with probability one. The probabilistic error of these approximations is reduced asymptotically as . In spite of this slow convergence rate, the MC method is widely used due to its robustness, its ease of use and the fact that it does not suffer from the curse of dimensionality, i.e., its convergence rate is independent of . Improvements of the MC method have been proposed in order to cover the parameter space more uniformly, obtaining improved convergence rates.
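The slow, dimension-independent error decay is easy to observe numerically. The following self-contained sketch uses a toy integrand with a known expectation (our own choice, not a quantity from the wave model) to illustrate the Monte Carlo estimator and its error behaviour.

```python
import numpy as np

# Plain Monte Carlo estimate of E[g(Z)] for g(z) = z^2 with Z ~ N(0,1),
# whose exact value is 1; the probabilistic error decays like M^(-1/2).
rng = np.random.default_rng(42)

def mc_mean(g, M):
    """MC estimator: average of g over M i.i.d. standard normal samples."""
    z = rng.standard_normal(M)
    return g(z).mean()

g = lambda z: z**2
errors = {M: abs(mc_mean(g, M) - 1.0) for M in (10**2, 10**4, 10**6)}
for M, err in errors.items():
    print(f"M = {M:>7d}: |error| = {err:.2e}")
```

Increasing the sample size by a factor of 100 reduces the error by roughly a factor of 10, consistent with the asymptotic rate quoted above.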
The Latin Hypercube Sampling (LHS) method  divides the parameter space into equiprobable bins along each dimension, and samples are taken such that each bin contains only one sample. This provides a better convergence rate in the average case, even if the worst-case convergence rate remains .
The Quasi Monte Carlo (QMC) method  generates low-discrepancy sequences of points such that the domain is uniformly covered. The convergence rate of QMC is improved to , but it deteriorates as the dimension of the parameter space grows.
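Both sampling strategies are straightforward to implement. The sketch below (illustrative; the sample size is arbitrary) builds a Latin hypercube design and the first points of a Halton low-discrepancy sequence on the unit hypercube.

```python
import numpy as np

def latin_hypercube(M, d, rng):
    """Latin hypercube sample of M points in [0,1]^d: each of the M
    equiprobable bins along every dimension contains exactly one point."""
    u = (rng.random((M, d)) + np.arange(M)[:, None]) / M  # one point per bin
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])                # decorrelate dims
    return u

def halton(M, d):
    """First M points of the Halton low-discrepancy sequence in [0,1]^d,
    built from the radical-inverse function in the first d prime bases."""
    primes = [2, 3, 5, 7, 11, 13][:d]
    pts = np.empty((M, d))
    for j, b in enumerate(primes):
        for i in range(M):
            f, r, n = 1.0, 0.0, i + 1
            while n > 0:
                f /= b
                r += f * (n % b)
                n //= b
            pts[i, j] = r
    return pts

lhs = latin_hypercube(1000, 2, np.random.default_rng(0))
qmc = halton(1000, 2)
```

In the LHS design, the floor of each coordinate times M recovers exactly one point per bin; the Halton sequence fills the square progressively more uniformly as points are added.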
4.2 Deterministic sampling methods
In the following we will handle functions with finite variance, i.e. belonging to the weighted space defined as
with inner product and norm defined as
For many standard one-dimensional distributions with density , we can find the set of uni-variate polynomials that form an orthonormal basis for . For instance, orthogonal bases for the spaces of random variables with Beta, Gaussian or Gamma distributions are the Jacobi, the Hermite and the Laguerre polynomials, respectively .
For random vectors of mutually independent random variables with densities and basis , it holds that . Thus, a possible set of basis functions for is given by the set of multivariate polynomials where is a multi-index and
This construction includes basis functions and is more accurate than the required order . An alternative set of basis functions is given by , where . For this set of basis functions
Once we have identified these basis functions, we can define the projection operator as
This provides an approximation of the target function that is known as the generalized Polynomial Chaos (gPC) expansion of . Statistics of can be computed easily, e.g.
where the orthonormality of the basis is exploited.
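As a one-dimensional illustration of these projection formulas, the sketch below expands the toy functional f(Z) = exp(Z) of a standard Gaussian input in the orthonormal probabilists' Hermite basis (the functional and the quadrature order are our own choices, made because the exact coefficients are known in closed form).

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

P = 10                                   # gPC order of the expansion
x, w = hermegauss(30)                    # Gauss-Hermite(e) nodes and weights
w = w / np.sqrt(2.0 * np.pi)             # normalise to N(0,1) expectations
f = np.exp(x)                            # toy functional f(Z) = exp(Z)

coeffs = np.empty(P + 1)
for k in range(P + 1):
    ek = np.zeros(k + 1)
    ek[k] = 1.0
    # orthonormal probabilists' Hermite basis: He_k / sqrt(k!)
    psi_k = hermeval(x, ek) / math.sqrt(math.factorial(k))
    coeffs[k] = np.sum(w * f * psi_k)    # c_k = E[f(Z) psi_k(Z)]

mean = coeffs[0]                         # exact value: exp(1/2)
var = np.sum(coeffs[1:] ** 2)            # exact value: exp(2) - exp(1)
print(mean, var)
```

The mean is the zeroth coefficient and the variance is the sum of squares of the remaining coefficients, exactly as stated above for an orthonormal basis.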
The convergence of the polynomial approximation is spectral (super-linear) if is a smooth function and algebraic otherwise, cf. . The coefficients in can be obtained by means of two methods: the Galerkin method, where a reformulation of in terms of stochastic modes is required, or the collocation method, where approximations of ’s are obtained by solving at carefully selected points in the parameter space. The Galerkin method is intrusive, i.e. the problem needs to be reformulated (see  for an introduction to Galerkin methods in this context), whereas the collocation method is non-intrusive and thus any existing deterministic solver for can be used without modification.
Stochastic Collocation method
The idea of the Stochastic Collocation (SC) method is to produce an ensemble of solutions , obtained by solving the governing equations at carefully selected points in the parameter space, in order to enable high accuracy in the evaluation of the coefficients of the gPC-expansion . An alternative approach is the interpolation method, but this is outside the scope of this work (see e.g. ).
Let be Gauss-type quadrature points and weights associated to , which can be obtained using the Golub-Welsch method . These rules are exact when the integrand has a polynomial order up to . Multidimensional Gauss-type quadrature rules defined by can be derived as and . Then the coefficients in can be approximated by
This procedure differs from the classical Monte Carlo method only by the sampling technique used to select collocation points in the associated parameter space. The construction of the quadrature rule is based on the tensor product of 1-dimensional rules, leading to exponential growth of the total number of collocation points, i.e. it suffers from the curse of dimensionality.
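The Golub-Welsch construction mentioned above reduces to an eigenproblem for a symmetric tridiagonal Jacobi matrix. A sketch for the Legendre case (uniform density on [-1,1], recurrence coefficients hard-coded for the example) follows; it is illustrative, not the solver's implementation.

```python
import numpy as np

def gauss_legendre(n):
    """Golub-Welsch: Gauss-Legendre nodes/weights from the eigen-
    decomposition of the symmetric Jacobi matrix of the three-term
    recurrence (beta_k = k / sqrt(4 k^2 - 1), zero diagonal)."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)       # recurrence coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)   # Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)            # eigenvalues are the nodes
    weights = 2.0 * vecs[0, :] ** 2            # mu_0 = int_{-1}^{1} dx = 2
    return nodes, weights

x, w = gauss_legendre(5)
# a 5-point Gauss rule is exact for polynomials up to degree 2*5 - 1 = 9;
# the exact integral of x^8 over [-1,1] is 2/9
print(np.sum(w * x**8))
```

The same construction applies to any density whose orthogonal-polynomial recurrence coefficients are known, which is how the Gauss rules associated to the gPC bases are obtained.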
Before addressing the curse of dimensionality, we first observe that quadrature rules based on the zeros of orthogonal polynomials are in general not nested, meaning that the quadrature points based on the polynomial of order are not in , with . This property is important in practical calculations in case one would like to increase the accuracy without wasting results already computed. Common choices of nested quadrature rules are the Clenshaw-Curtis and Fejér rules , which use the extrema of the Chebyshev polynomials as quadrature points, and the Kronrod-Patterson rules . With appropriate scaling, these quadrature rules work on the bounded interval with a probability density function , corresponding to the uniform distribution. In general we will have to compute integrals as in , where need not be the uniform density function. However, using the fact that the CDF is bijective, we can use a variable transformation s.t.
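Both the nesting property and the CDF mapping can be checked directly. In this sketch (the levels and the Gaussian target are chosen for illustration) the level-2 Clenshaw-Curtis points are verified to be a subset of the level-3 points, and the interior nodes are mapped to a standard normal parameter via the inverse CDF from the standard library.

```python
import numpy as np
from statistics import NormalDist

def clenshaw_curtis(level):
    """Nested Clenshaw-Curtis points on [0,1]: extrema of the Chebyshev
    polynomial, 2^level + 1 points; level l points are a subset of l+1."""
    n = 2 ** level
    x = np.cos(np.pi * np.arange(n + 1) / n)   # points on [-1,1]
    return np.sort((x + 1.0) / 2.0)            # rescale to [0,1]

lo, hi = clenshaw_curtis(2), clenshaw_curtis(3)
nested = all(any(abs(a - b) < 1e-14 for b in hi) for a in lo)
print("level-2 points contained in level-3 points:", nested)

# map interior points to an N(0,1) parameter through the inverse CDF;
# the endpoints u = 0, 1 would map to +-infinity and are excluded
gauss_nodes = [NormalDist().inv_cdf(u) for u in hi[1:-1]]
```

The boundary nodes illustrate why, for unbounded densities, one either excludes the endpoints or uses the Gauss rule of the target density directly.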
Using these nested rules, we can attempt to alleviate the curse of dimensionality. One particularly successful approach is given by sparse grids (see  for details). The idea is not to take the complete tensor product of the 1-dimensional grids, but only products up to the desired order for each stochastic dimension, much like the construction of the set of basis functions . This procedure assumes a certain level of separability of the function, meaning that the cross-contribution of the parameters is lower than the contribution of the parameters considered separately. Figure 2 shows a comparison of tensor grids and sparse grids. From figure ? we can see that the gain of sparse grids over tensor grids increases with the stochastic dimension .
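The saving can be quantified by counting nodes. The sketch below (the dimension and level are arbitrary choices for the example) builds a Smolyak sparse grid as a union of small tensor grids of nested Clenshaw-Curtis point sets and compares its size with that of the full tensor grid at the same level.

```python
import itertools
import numpy as np

def cc_points(level):
    """Nested 1D Clenshaw-Curtis point sets on [0,1] (midpoint at level 0);
    rounding makes points from different levels compare equal exactly."""
    if level == 0:
        return (0.5,)
    n = 2 ** level
    return tuple(round((np.cos(np.pi * k / n) + 1.0) / 2.0, 14)
                 for k in range(n + 1))

def sparse_grid(d, L):
    """Smolyak sparse grid: union of the tensor grids whose 1D levels
    satisfy |level|_1 <= L (valid because the 1D sets are nested)."""
    pts = set()
    for levels in itertools.product(range(L + 1), repeat=d):
        if sum(levels) <= L:
            pts.update(itertools.product(*(cc_points(l) for l in levels)))
    return pts

d, L = 4, 3
n_sparse = len(sparse_grid(d, L))
n_tensor = (2 ** L + 1) ** d          # full tensor grid at the same level
print(f"sparse: {n_sparse} points, tensor: {n_tensor} points")
```

For this example the sparse grid uses 137 points against 6561 for the tensor grid, and the gap widens rapidly with the dimension.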
4.3 Sensitivity analysis using the method of Sobol’
Sensitivity analysis is aimed at the quantification and ranking of the influence of input uncertainties on QoIs. This analysis then allows one to refine the model, disregarding input uncertainties that have negligible effects on the QoIs.
For the sake of simplicity, we let the QoI be described by the scalar function of the inputs. The influence of an input uncertainty on a QoI is quantified by the variance of the QoI that is due to the input uncertainty and its combination with any other group of uncertainties. This definition is based on the concept of ANOVA decomposition
where are the contributions to the variance due to input alone, are the contributions to the variance due to the interaction of inputs and , and so on. The first-order and total sensitivities of the QoI to input are then defined as
where is the sum of all for which . Note that the total sensitivity indices do not sum to one, because for there are many multi-indices that contain both of them. Even for mild dimensions , the number of terms in the decomposition grows exponentially, thus it needs to be truncated, considering a limited number of cross-interactions between inputs. A good heuristic for this truncation is to use the fact that by the components must sum up to the total variance. Then, for one seeks such that
where is the number of indices contained in the multi-index . The number is called the effective dimension of .
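First-order and total indices can also be estimated without building the full decomposition, e.g. with pick-freeze Monte Carlo estimators of Saltelli/Jansen type. The sketch below is illustrative: the two-input test function and the sample size are our own choices, picked so that the exact indices are known analytically.

```python
import numpy as np

rng = np.random.default_rng(1)

def sobol_indices(f, d, N=200_000):
    """Pick-freeze (Saltelli first-order, Jansen total) estimators of the
    Sobol' indices for f with independent U(0,1) inputs (illustrative)."""
    A, B = rng.random((N, d)), rng.random((N, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # freeze all inputs but x_i
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var          # first-order index
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var   # total index
    return S, ST

# test function with a known ANOVA decomposition:
# S = (0.325, 0.636), ST = (0.364, 0.675) up to rounding
f = lambda x: x[:, 0] + 2.0 * x[:, 1] + 3.0 * x[:, 0] * x[:, 1]
S, ST = sobol_indices(f, 2)
print("first-order:", S, "total:", ST)
```

The gap between a total index and the corresponding first-order index measures the interaction contribution, which is exactly the quantity the truncation heuristic above tries to bound.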
The calculation of the factors is based on the construction of projection operators onto mutually orthogonal spaces – see . These projection operators involve high-dimensional integrals which can be evaluated through random sampling or through the introduction of a surrogate function
called the cut (anchored) High Dimensional Model Representation (cut-HDMR), where is a chosen anchor point in the parameter space and is commonly chosen to be the effective dimension of . The functions are approximated through PC approximation, which result in approximations along lines, planes and hyper-cubes . Figure 3 shows the PC based Gauss quadrature points of order for the approximation of the functions including interactions of order .
5 Uncertainty Quantification in Water Wave Simulations
We now use the formulation with random inputs introduced in to describe the uncertainty of the evolution of water waves, in terms of the free surface position and the velocity potential . For both the random sampling approaches and the non-intrusive polynomial chaos approaches, we need to solve at a set of points , producing the ensembles of solutions
The sampling strategy depends on the particular method chosen. Furthermore, the PC method constructs the surrogate functions
that provide an easy way to compute statistics and to reproduce the PDFs of the solution variables.
We revisit two classical benchmarks to illustrate how the forward propagation of uncertainties can be done efficiently, and we construct a synthetic experiment where we analyze the sensitivity of the load on an off-shore structure to a number of input uncertainties. The PC methods have been preferred over the random sampling methods whenever the dimension of the problem allowed their application, i.e. when this resulted in reduced CPU time. However, random sampling methods have been used in all the presented cases in order to obtain reference solutions.
5.1 Harmonic generation over a submerged bar
We now consider an experimental setting originally proposed by Beji and Battjes (1994) . In the experiment a wave propagates across a submerged bar. In the process the bound wave harmonics are decomposed into free harmonics which are released on the lee side of the bar and cause a transformation of the initial wave profile as described in . The measurements of the experiment are commonly used to validate dispersive and nonlinear wave models such as . However, calibration details and measurement errors are not included in the public report by Beji and Battjes. Therefore, in the following, we will assume uncertainties and detail how these can be accounted for.
To analyze the wave evolution we use the bottom topography of the experiments shown in Figure 1. We consider the setup corresponding to Case A in the original experiments , where the input wave signal is defined by a wave period s and a wave height cm. In the numerical solver the input waves are generated using Stokes second order theory.
The shape of the bar and the shape of the incoming wave influence the spectrum of the waves that reach the right end of the domain as analyzed in . In the following different sources of uncertainties are considered and the results are compared with deterministic results often presented in existing literature as well as to the experimental measurements due to Luth et al. .
As a conventional validation of the numerical wave model, we compare with the experimental measurements at eight gauges positioned in the wave tank. The results of this comparison are presented in Figure 4, where the bathymetry used is the exact bathymetry illustrated in Figure 1. The results have been computed with the parameters of the experiment given in Table 1.
|Bar height from bottom|
|Entering wave period|
These parameters will be changed in the following to reflect single realizations of uncertain parameters. Clearly, the computation and the experiments match qualitatively very well; however, there are noticeable discrepancies between the numerical calculations and the experimental data. For example, the wave heights and phases are not exactly reproduced at the first and second measurement stations. Discrepancies in the wave signal are observed at the high peaks in measurements from stations 5, 7 and 9, and in the low and intermediate peaks of station 6.
The numerical calculations are done using a high-order accurate numerical method  with sufficient resolution to accurately resolve the dispersive and nonlinear wave effects, and are therefore assumed to be converged to a grid-independent solution. The absorption zone introduced behind the bar has been defined so that wave reflections are minimal.
The solution up to time is obtained in approximately 13s on an Intel Core i7-2640M CPU @ 2.80GHz. In the following it can be assumed that the influence of the choice of the parameters on the computational cost of solving the system is negligible, and thus the total computational cost scales proportionally with the number of solutions computed.
Uncertainty on wave period and water height
Here we use the truncated normal distribution
to represent the fact that large defects in the water height can be detected and corrected. An accurate representation of the wave signal requires that the wave maker displacement and the wave amplitudes are matched. This can be difficult to achieve in practice, especially for nonlinear wave signals, and may lead to unintentional harmonic generation. To illustrate how such uncertainty in the signal can be accounted for, we use
to represent the uncertainty due to the generation of the input waves.
The SC method with order 10 was used to produce Figure 5, which shows the time-dependent distribution function of the solution, its mean and the 95% tolerance interval. For the expansion of the uncertainty in the wave signal, the Hermite polynomials were used because they form an orthogonal basis with respect to the Gaussian distribution. The obtained uncertainty is not merely the superposition of the uncertainties that one would obtain in the one-dimensional cases where the inputs are considered separately, but is the result of the interaction between the two. This effect will be evident from the observation of the coefficients in the gPC expansion .
In order to check the convergence of the method, the SC method with order 25 was used to generate a reference solution, to which lower-order SC methods and the MC method were compared. Figure ? shows the error in the approximation of the mean after 10s of simulation as one uses an increasing number of simulations, both for the MC method and the SC method. The plateau appearing above order 10 marks the maximum accuracy achievable with the selected tolerance set for the approximate solution of the Laplace problem (). Using gPC approximations of higher order will result in over-fitting the under-resolved part of the solution. Figure ? shows the convergence of the PDF of the solution at gauge 7 at time , as an increasing number of simulations is used in the SC method, and the approximation to the PDF obtained using the MC method.
Figure 7 shows the decay of the projection coefficients with respect to both input uncertainties. If the two parameters influenced the system independently, all the non-zero coefficients of the expansion would be the ones with or , . This corresponds to decays similar to the one shown in the upper-left plot in Figure 7. The next plots, however, show that the two input uncertainties act on the solution in non-trivial ways when combined. This means that the results of UQ analyses on the two separate sources cannot be trivially superposed; the sources need to be considered together in a single UQ analysis.
Uncertain bottom topography
The topography of the basin is often known precisely in experimental settings, but rarely for real sites. Discrepancies with respect to the ideal/real design can be present. We will model these discrepancies using Gaussian random fields added on top of the deterministic basin, as shown in Figure 1.
We consider three Gaussian random fields with exponential covariance and with correlation lengths . The mean of the fields is set to be the nominal bottom topography and the total variance of the fields is set to . Two realizations of such random fields are shown in Figure 1. The random fields are expanded using the KL-expansion , capturing of the total variance of the fields. This results in truncated KL-expansions with , and terms respectively.
The LHS method is used in the three cases, checking that the number of samples () is sufficient to estimate the mean up to the second digit of accuracy.
5.2 Load of shoaling waves on off-shore structures
A relevant part of the research on water waves is devoted to the estimation of loads – see Section 2.2 – on off-shore and coastal structures. In this context engineers are often interested in estimating the effect of extreme conditions or accumulated fatigue damage. To this end we will investigate the influence of uncertainties on the maximum load experienced by a structure positioned on the top of a shoaling bathymetry, after the system has reached its periodic solution.
The experimental setting is shown in figure ?, where waves are propagated from left to right and shoal over the sloping bathymetry. It is assumed that a structure is to be built in the shallower part of the bathymetry at a fixed position . Wave loads are then calculated at and their maximum is taken once the system has reached its periodic solution. The experiment does not account for the wave-structure interactions that would occur if a real structure were put in place. In such cases wave loads are often estimated using Morison’s equation. Instead, we estimate loads directly from the estimated pressure distribution in the water column at an assumed position of a cylinder. Figure ? shows a possible realization of the solution when an uncertain bathymetry is considered (see sec. ?).
Uncertainty on wave height and wave period
We will first focus on the uncertainties affecting the input wave characteristics. The input waves are regular stream function waves due to Dean  and are defined by their height and period , from which the wave speed is derived such that it satisfies the dispersion relation at the inlet of the domain. We describe the assumed uncertainties by Gaussian distributions with standard deviations of 10% of their nominal values:
Figure ? shows the PDF obtained using samples of the method and the PDF obtained using the gPC approximation constructed through the SC method with orders , using function evaluations. The results agree very closely even for low orders, suggesting that the underlying dependency of the load on the parameters is not particularly complex.
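The SC procedure can be sketched for a single Gaussian parameter: evaluate the model at Gauss-Hermite nodes, project onto the probabilists' Hermite basis, and then sample the cheap polynomial surrogate to estimate the PDF. The response function below is a hypothetical smooth stand-in for the load, not the model used in this work.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def sc_surrogate_1d(f, order):
    """Pseudo-spectral stochastic collocation for one standard normal input:
    project f onto probabilists' Hermite polynomials He_0..He_order using a
    Gauss-Hermite quadrature rule with order + 1 nodes."""
    x, w = hermegauss(order + 1)     # nodes/weights for weight exp(-x^2/2)
    w = w / w.sum()                  # normalize to the standard normal measure
    fx = f(x)
    coeffs = np.empty(order + 1)
    for k in range(order + 1):
        basis_k = hermeval(x, np.eye(order + 1)[k])   # He_k at the nodes
        # <f, He_k> / <He_k, He_k>, with <He_k, He_k> = k! under N(0, 1)
        coeffs[k] = np.sum(w * fx * basis_k) / factorial(k)
    return lambda xi: hermeval(xi, coeffs)

# Hypothetical smooth "load" response to a normalized wave-height parameter
f = lambda xi: np.exp(0.3 * xi)
surrogate = sc_surrogate_1d(f, order=6)
xi = np.random.default_rng(1).standard_normal(10000)
load_samples = surrogate(xi)         # cheap samples for a KDE of the PDF
```

Only order + 1 expensive model evaluations are needed; the 10000 surrogate evaluations used for the PDF estimate are essentially free, which is the practical appeal of the method.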
Uncertain bottom topography
We now study the influence of the uncertainty in the bottom topography on the predicted maximum load. The uncertainty is modeled through a Gaussian random field with Ornstein-Uhlenbeck  covariance and correlation lengths . The mean of the field is set to the nominal bathymetry while the total variance is set to . A realization of such a field is shown in figure ?. The fields are parametrized by a KL-expansion preserving 95% of the total variance, resulting in truncations with , and terms, respectively.
Figure ? shows the PDFs of the load with respect to the three uncertain bathymetries. The results are obtained using the method with samples, which produces estimates of the mean accurate to the third digit. The PDFs obtained are very similar for the three cases, suggesting that the load is mostly sensitive to slowly varying perturbations of the nominal bathymetry.
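The accuracy check on the sampling mean (see also the footnote on the error of the MC estimator) can be sketched as follows; the sample values below are synthetic stand-ins for model outputs, and the tolerance is illustrative.

```python
import numpy as np

def mc_mean_with_error(samples):
    """Monte Carlo mean estimate and its standard error sigma_hat / sqrt(N)."""
    s = np.asarray(samples, dtype=float)
    return s.mean(), s.std(ddof=1) / np.sqrt(s.size)

rng = np.random.default_rng(7)
samples = rng.standard_normal(10000) * 2.0 + 5.0   # stand-in model outputs
mean, stderr = mc_mean_with_error(samples)
converged = stderr < 0.05                          # example tolerance
```

For LHS the same estimator applies, and the actual error is typically smaller than this Monte Carlo bound, so the check is conservative.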
Sensitivity of loads to input uncertainties
We now consider all the uncertainties studied in sections ? and ? at the same time and quantify the influence of each of these inputs through the variance-based sensitivity analysis provided by the method of Sobol’ (see Section 4.3). Drawing from the experience acquired in section ?, where the three correlation lengths considered gave very similar results, we consider an uncertain bottom topography modeled by a Gaussian random field with correlation length , resulting in a KL-expansion with terms. The total number of uncertain parameters in the system is thus .
| Std. dev. () | 2.03 | 0.0 | 1.90 | 1.92 | 1.97 | 2.03 |
A necessary prerequisite to the computation of the sensitivities is the calculation of the effective dimension of the load function. This is done in terms of the amount of total variance captured by the ANOVA decomposition used in the method of Sobol’, which means the total variance must be computed first. We do this using the method and the sparse grid method with increasing levels. Table 2 shows the convergence of the sparse grid method in terms of total variance as the number of evaluations is increased.
The sensitivity analysis is then performed for increasing truncation orders in the number of interactions – see – and increasing PC orders. Table 3 shows that increasing both the number of interactions (cut order) and the PC order increases the percentage of total variance represented by the ANOVA decomposition. If one sets in the effective-dimension criterion , then the analysis shows that the load function can be represented by a function including up to second-order interactions. However, a sufficiently high PC order is required for the criterion to be fulfilled. Table 3 also shows the total sensitivities obtained, where the sensitivities due to all the parameters characterizing the bottom topography are grouped into a single entry. Under the assumed distributions for the input uncertainties, the height of the generated wave appears to be the most influential input on the load. Nevertheless, the other two inputs are not negligible, meaning that none of them can be disregarded in the UQ analysis.
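Once a PC expansion is available, total Sobol' indices follow directly from the coefficients: the index for a variable collects the variance contribution of every expansion term in which that variable appears. A small self-contained sketch for Gaussian inputs and a probabilists' Hermite basis, on a toy function with known indices (the function and coefficients are illustrative, not from the load analysis):

```python
import numpy as np
from math import factorial

def total_sobol_indices(multi_indices, coeffs):
    """Total Sobol' sensitivity indices from a PC expansion in the
    probabilists' Hermite basis (independent Gaussian inputs). Each row of
    multi_indices is a polynomial multi-index; the constant term has
    multi-index (0, ..., 0)."""
    mi = np.asarray(multi_indices)
    c = np.asarray(coeffs, dtype=float)
    # Squared norm of the Hermite multi-index (a1, ..., ad) is a1! * ... * ad!
    norms = np.array([np.prod([factorial(a) for a in row]) for row in mi])
    var_terms = c**2 * norms
    total_var = var_terms[mi.sum(axis=1) > 0].sum()   # drop the mean term
    # S_T,i: share of variance from every term in which variable i appears
    return np.array([var_terms[mi[:, i] > 0].sum()
                     for i in range(mi.shape[1])]) / total_var

# f(x1, x2) = x1 + 2*x2 + x1*x2 for independent standard normals:
# Var = 1 + 4 + 1 = 6, so S_T1 = 2/6 and S_T2 = 5/6 analytically.
mi = [(0, 0), (1, 0), (0, 1), (1, 1)]
c = [0.0, 1.0, 2.0, 1.0]
st = total_sobol_indices(mi, c)
```

Because the interaction term is counted in both indices, the total indices sum to more than one whenever interactions are present, as in the grouped bathymetry entry of Table 3.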
5.3 Harmonic generation over a semi-circular shoal
Extending the analysis to the full three-dimensional problem, we proceed to the experiments of Whalin . The experiments consist of a regular wave propagating over a semi-circular shoal, see figure ?. The shoaling process transfers energy between the bound harmonics but, in contrast to the submerged bar case, the harmonics remain bound, and refraction adds complexity to the solution. The Whalin experiments have become standard benchmarks for dispersive wave models despite a rather substantial scatter present in the experimental data. We will look into the case of incoming waves with height m and period s. For this case most numerical models tend to over-predict the amplitude of the second harmonic.
The deterministic numerical solution is computed for , to ensure that the periodic solution is reached, and is then compared with the experimental measurements of the magnitude of the first three harmonics at different measurement locations along the center line of the domain. Due to the symmetry of the domain along its center line, the solution is computed only on half of the domain. Figure ? shows the fit of the deterministic numerical solution to the measurement data.
For the deterministic case, the solution is obtained in approximately 4 min on an Intel Core i7-2640M CPU @ 2.80 GHz. The aim of the next sections is to study how the uncertainty in some experimental parameters can influence the results. Without presuming causality, this analysis can highlight parameters that influence the results more strongly than others. The computational cost of solving the full three-dimensional problem calls for efficient UQ methodologies requiring the minimum number of simulations, in order to make the analysis practically feasible.
One dimensional uncertainties
Building on the experience acquired in the two-dimensional case and on experimental knowledge, we will focus our attention on the two parameters that are most difficult to match in real experiments, namely the input wave period and height. Due to the lack of information about how accurate experiments can be, we will assume that the input parameters are described by Gaussian distributions and evaluate how sensitive the system is to single uncertainties and, in the next section, to the combination of the two. We model the wave height and the wave period with Gaussian distributions centered on their nominal values and with standard deviation .
The SC method is adopted for the estimation of the gPC expansion, with the order dictated by the required accuracy. Since the input uncertainties follow Gaussian distributions, the polynomial basis used for their expansion is the Hermite family. Figure 13 shows the mean and the tolerance interval as well as the space-dependent distribution of the harmonics and the fit with the experimental data for the two parameters considered separately.
Two dimensional uncertainty
The same problem setting is now investigated with uncertainty on the wave height and period at the same time. The same distributions used in the one-dimensional setting are used here for the uncertainty sources. The SC method of order is used to compute the space-dependent probability distribution of the first three harmonics of the propagated wave. A total of only deterministic simulations is required to obtain the desired approximation.
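The number of deterministic simulations required by the two-dimensional SC method comes from tensorizing one-dimensional Gauss rules: for two Gaussian parameters and polynomial order p, a full tensor grid has (p + 1)² nodes. A sketch with an illustrative order:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def tensor_grid(order, dim):
    """Full tensor product of one-dimensional Gauss-Hermite rules of
    polynomial order `order`: order + 1 nodes per axis, hence
    (order + 1)**dim evaluation points in `dim` dimensions."""
    x1, w1 = hermegauss(order + 1)
    axes = np.meshgrid(*([x1] * dim), indexing="ij")
    nodes = np.stack([a.ravel() for a in axes], axis=1)
    w_axes = np.meshgrid(*([w1] * dim), indexing="ij")
    weights = np.prod(np.stack(w_axes), axis=0).ravel()
    return nodes, weights / weights.sum()   # normalized probability weights

nodes, weights = tensor_grid(order=4, dim=2)   # (4 + 1)**2 = 25 points
```

The exponential growth of this count with the dimension is what motivates the sparse grid constructions discussed later for the higher-dimensional cases.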
Figure 14 shows the space-dependent mean and tolerance interval of the first three harmonics, as well as their space-dependent probability distributions. Again we notice that the resulting uncertainty – measured as the variance of the solution – is not the mere superposition of the variances obtained with one-dimensional uncertainties (see Figure 13). The probability distributions of the first three harmonics now seem to include the experimental measurements within a high-probability region.
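The space-dependent PDFs of the harmonics can be estimated from sampled (or surrogate) solution values by kernel density estimation with a Gaussian kernel, as noted in the footnotes; a minimal sketch with synthetic samples, using Silverman's rule of thumb as an assumed bandwidth choice:

```python
import numpy as np

def gaussian_kde_pdf(samples, grid, bandwidth=None):
    """Kernel density estimate with a Gaussian kernel, evaluated on `grid`;
    Silverman's rule of thumb sets the bandwidth when none is given."""
    s = np.asarray(samples, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * s.std() * s.size ** (-1 / 5)
    # Sum one normalized Gaussian bump per sample at every grid point
    z = (grid[:, None] - s[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (s.size * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
samples = rng.standard_normal(2000)        # stand-in harmonic amplitudes
grid = np.linspace(-4, 4, 200)
pdf = gaussian_kde_pdf(samples, grid)
```

Applied independently at each spatial location, this yields the space-dependent distributions plotted in the figures.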
Uncertain bottom topography
We model the uncertainty on the bottom topography through the superposition of a Gaussian random field with Ornstein-Uhlenbeck  covariance on top of the deterministic bathymetry shown in figure ?. The correlation lengths and the total variance of the field are chosen as illustrative examples to characterize the random field. The random fields are set to be isotropic, meaning that the correlation and variability are the same in both directions of the two-dimensional bottom topography. The random fields are expanded using the KL-expansion, capturing of the total variance. This results in truncated KL-expansions with , and terms, respectively. Figure ? shows a realization of such a topography. Unlike the previous experiments, where the domain was symmetric along its center line, in these experiments the symmetry is lost. Thus, the solution of the system must be computed over the whole domain, resulting in an increased computational cost. The deterministic solution over the whole domain is obtained in approximately 8 min on an Intel Core i7-2640M CPU @ 2.80 GHz.
The Latin hypercube method is used for the three settings, with a number of samples sufficient to estimate the mean up to the second digit of accuracy.
The presented numerical experiments have shown that the potential flow model describing water waves is sensitive to a number of uncertainties. This is not surprising, but the analysis showed how relevant their study is for improving the prediction capabilities in coastal and off-shore engineering. The types of uncertainties accounted for are those that are likely to appear in experimental settings, such as the input wave characteristics, the water height, and the topography of manufactured basins. The distributions of such uncertainties were assumed here; they would otherwise need to be characterized by extensive measurements or by a better description of the believed distributions.
The study of the influence of these uncertainties on problem-dependent quantities of interest benefits greatly from the application of non-intrusive polynomial chaos techniques, such as the stochastic collocation method and the sparse grid method. This is due to three main reasons: i) the convergence properties of these methods allow for super-linear convergence, which was attained in the studied experiments; ii) the calculation of the deterministic solution of the problem is a computationally demanding task, restraining the number of function evaluations obtainable in reasonable time; iii) the non-intrusive property of the methods allows for the reuse of existing solvers, which have been tuned for High Performance Computing .
On the downside, polynomial chaos methods are only suitable for problems with a relatively low number of input uncertainties. While each problem has its own peculiarities (e.g., stochastic dimension, deterministic computational complexity), as the number of random inputs increases, the number of required solutions of the deterministic model grows more than polynomially, making the method ineffective. Methods such as sparse grids alleviate this effect, leading to linear growth with respect to the number of random inputs. At the current state of research, pseudo-random sampling techniques are the only ones that exhibit a (slow) convergence rate independent of the number of input uncertainties.
The analysis performed on the experimental settings shows that the uncertainties on the input wave characteristics and the bottom topography have a relevant effect on the free surface solutions and on the loads these waves exert on off-shore structures. These effects are amplified when the uncertainties are considered simultaneously, leading to transformations of the input probability distribution. The results of such analyses could be considered when explaining some of the discrepancy between numerical solutions and experimental results. In ongoing work we are considering problems with higher stochastic dimensions, resorting to novel techniques for deterministic sampling, such as adaptive sparse grids  and the spectral tensor train decomposition .
The frameworks for random sampling
- The gravitational acceleration constant, , is set to 9.81 m/s².
- The method of Kernel Density Estimation  with a Gaussian kernel was used here in order to obtain the approximations of the PDFs from samples of the solution.
- The theoretical error of the MC method can be expressed by the standard deviation of the MC estimator , which is . This can be approximated by inserting the estimated variance and then used to test a convergence criterion for the method. Note that the LHS estimate is often better than the MC estimate. In the example at hand both mean and variance are time and space dependent, i.e. . For the gauge locations , we defined the convergence criterion by .
- The number of quadrature points for a one-dimensional Gauss rule of polynomial order is . The tensorization of this quadrature rule in two dimensions then leads to evaluation points.
- The accuracy estimator is defined similarly to the one used in section ?. For the means and variances of the harmonic functions , defined along the center line of the domain , the convergence criterion is given by .
- S. J. Wojtkiewicz, M. Eldred, R. J. Field, A. Urbina, and J. Red-Horse, “A toolkit for uncertainty quantification in large computational engineering models,” in Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 2001.
- E. M. Bitner-Gregersen, S. K. Bhattacharya, I. K. Chatjigeorgiou, I. Eames, K. Ellermann, K. Ewans, G. Hermanski, M. C. Johnson, N. Ma, C. Maisondieu, A. Nilva, I. Rychlik, and T. Waseda, “Recent developments of ocean environmental description with focus on uncertainties,” Ocean Engineering, vol. 86, pp. 26–46, Aug. 2014.
- E. M. Bitner-Gregersen, K. C. Ewans, and M. C. Johnson, “Some uncertainties associated with wind and wave description and their importance for engineering applications,” Ocean Engineering, vol. 86, pp. 11–25, Aug. 2014.
- A. Naess and T. Moan, Stochastic Dynamics of Marine Structures. Cambridge: Cambridge University Press, 2012.
- L. Ge, K. F. Cheung, and M. H. Kobayashi, “Stochastic Solution for Uncertainty Propagation in Nonlinear Shallow-Water Equations,” Journal of Hydraulic Engineering, vol. 134, pp. 1732–1743, Dec. 2008.
- L. Ge and K. F. Cheung, “Spectral Sampling Method for Uncertainty Propagation in Long-Wave Runup Modeling,” Journal of Hydraulic Engineering, vol. 137, pp. 277–288, Mar. 2011.
- D. Liu, Uncertainty Quantification with Shallow Water Equations. PhD thesis, University of Braunschweig – Institute of Technology, 2009.
- M. Ricchiuto, P. M. Congedo, and A. Delis, “Runup and uncertainty quantification: sensitivity analysis via ANOVA decomposition,” Tech. Rep. April, INRIA, Bordeaux, 2014.
- B. Yildirim and G. E. Karniadakis, “Stochastic simulations of ocean waves: An uncertainty quantification study,” Ocean Modelling, vol. 86, pp. 15–35, Feb. 2015.
- A. P. Engsig-Karup, H. B. Bingham, and O. Lindberg, “An efficient flexible-order model for 3d nonlinear water waves,” Journal of Computational Physics, vol. 228, pp. 2100–2118, 2008.
- A. P. Engsig-Karup, M. G. Madsen, and S. L. Glimberg, “A massively parallel gpu-accelerated model for analysis of fully nonlinear free surface waves,” International Journal for Numerical Methods in Fluids, vol. 70, no. 1, pp. 20–36, 2011.
- S. L. Glimberg, A. P. Engsig-Karup, and M. G. Madsen, “A fast gpu-accelerated mixed-precision strategy for fully nonlinear water wave computations,” in Numerical Mathematics and Advanced Applications 2011, Proceedings of ENUMATH 2011, the 9th European Conference on Numerical Mathematics and Advanced Applications, Leicester, September 2011 (A. C. et al., ed.), Springer, 2012.
- A. P. Engsig-Karup, L. S. Glimberg, A. S. Nielsen, and O. Lindberg, “Fast hydrodynamics on heterogenous many-core hardware,” in Designing Scientific Applications on GPUs (R. Couturier, ed.), Lecture notes in computational science and engineering, CRC Press / Taylor & Francis Group, 2013.
- L. S. Glimberg, A. P. Engsig-Karup, B. Dammann, and A. S. Nielsen, “Development of high-performance software components for emerging architectures,” in Designing Scientific Applications on GPUs (R. Couturier, ed.), Lecture notes in computational science and engineering, CRC Press / Taylor & Francis Group, 2013.
- D. Xiu, “Fast numerical methods for stochastic computations: a review,” Communications in computational physics, vol. 5, no. 2, pp. 242–272, 2009.
- O. P. Le Maître and O. M. Knio, Spectral Methods for Uncertainty Quantification: With Applications to Computational Fluid Dynamics. Springer, June 2010.
- S. Beji and J. A. Battjes, “Numerical simulation of nonlinear-wave propagation over a bar,” Coastal Engineering, vol. 23, pp. 1–16, 1994.
- D. Dutykh and D. Clamond, “Efficient computation of steady solitary gravity waves,” 2013.
- A. Kiureghian and O. Ditlevsen, “Aleatory or epistemic? Does it matter?,” Structural Safety, 2009.
- I. Sobol’, “Sensitivity analysis for non linear mathematical models,” Math. Model. Comput. Exp., no. 1, pp. 407–414, 1993.
- H. Rabitz and O. Alis, “Managing the Tyranny of Parameters in Mathematical Modelling of Physical Systems,” in Sensitivity Analysis (A. Saltelli, K. Chan, and E. M. Scott, eds.), Chichester: John Wiley & Sons, Ltd., 2000.
- K. Chan, S. Tarantola, A. Saltelli, and I. Sobol’, “Variance-based methods,” in Sensitivity Analysis (A. Saltelli, K. Chan, and E. M. Scott, eds.), Chichester: John Wiley & Sons, Ltd., 2000.
- B. Sudret, “Global sensitivity analysis using polynomial chaos expansions,” Reliability Engineering & System Safety, vol. 93, no. 7, pp. 964–979, 2008.
- T. Crestaux, O. Le Maître, and J.-M. Martinez, “Polynomial chaos expansion for sensitivity analysis,” Reliability Engineering & System Safety, vol. 94, pp. 1161–1172, jul 2009.
- A. Alexanderian, J. Winokur, I. Sraj, A. Srinivasan, M. Iskandarani, W. C. Thacker, and O. M. Knio, “Global sensitivity analysis in an ocean general circulation model: A sparse spectral projection approach,” Computational Geosciences, vol. 16, pp. 757–778, 2012.
- J. Larsen and H. Dancy, “Open boundaries in short wave simulations - a new approach,” Coastal Engineering, vol. 7, pp. 285–297, 1983.
- P. Billingsley, Probability and Measure. New York: John Wiley & Sons, 3rd ed., 1995.
- M. Loève, Probability Theory, vols. I–II. New York: Springer, 4th ed., 1978.
- C. Schwab and R. A. Todor, “Karhunen–Loève approximation of random fields by generalized fast multipole methods,” Journal of Computational Physics, vol. 217, pp. 100–122, Sept. 2006.
- G. Uhlenbeck and L. Ornstein, “On the theory of the Brownian motion,” Physical review, vol. 36, no. 1905, 1930.
- M. Cheng, T. Y. Hou, and Z. Zhang, “A dynamically bi-orthogonal method for time-dependent stochastic partial differential equations II: Adaptivity and generalizations,” Journal of Computational Physics, vol. 242, pp. 753–776, June 2013.
- M. Cheng, T. Y. Hou, and Z. Zhang, “A dynamically bi-orthogonal method for time-dependent stochastic partial differential equations I: Derivation and algorithms,” Journal of Computational Physics, vol. 242, pp. 843–868, June 2013.
- S. Boyaval, C. Le Bris, T. Lelièvre, Y. Maday, N. C. Nguyen, and a. T. Patera, “Reduced Basis Techniques for Stochastic Problems,” Archives of Computational Methods in Engineering, vol. 17, pp. 435–454, Oct. 2010.
- D. Venturi, “On proper orthogonal decomposition of randomly perturbed fields with applications to flow past a cylinder and natural convection over a horizontal plate,” Journal of Fluid Mechanics, vol. 559, p. 215, July 2006.
- T. P. Sapsis and P. F. Lermusiaux, “Dynamically orthogonal field equations for continuous stochastic dynamical systems,” Physica D: Nonlinear Phenomena, vol. 238, pp. 2347–2360, Dec. 2009.
- M. McKay, R. Beckman, and W. Conover, “A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output From a Computer Code,” Technometrics, vol. 41, no. 1, pp. 55–61, 2000.
- W. J. Morokoff and R. E. Caflisch, “Quasi-Monte Carlo Integration,” Journal of Computational Physics, vol. 122, pp. 218–230, Dec. 1995.
- D. Xiu and G. E. Karniadakis, “The Wiener–Askey polynomial chaos for stochastic differential equations,” SIAM J. Sci. Comput., vol. 24, pp. 619–644, Feb. 2002.
- D. Xiu, Numerical Methods for Stochastic Computations: A Spectral Method Approach. Princeton University Press, July 2010.
- W. Gautschi, Orthogonal Polynomials: Computation and Approximation. Numerical Mathematics and Scientific Computation, Oxford University Press, 2004.
- W. Gautschi, “Algorithm 726: ORTHPOL – a package of routines for generating orthogonal polynomials and Gauss-type quadrature rules,” ACM Trans. Math. Softw., vol. 20, pp. 21–62, Mar. 1994.
- C. Canuto, M. Hussaini, A. Quarteroni, and T. Zang, Spectral Methods – Fundamentals in Single Domains. Scientific Computation, Berlin, Heidelberg: Springer, 2006.
- G. H. Golub and J. H. Welsch, “Calculation of Gauss Quadrature Rules,” Mathematics of Computation, vol. 23, no. 106, pp. 221–230, 1969.
- L. Fejér, “Mechanische Quadraturen mit positiven Cotesschen Zahlen,” Math. Z., vol. 37, pp. 287–309, 1933.
- J. Waldvogel, “Fast Construction of the Fejér and Clenshaw–Curtis Quadrature Rules,” Bit Numerical Mathematics, vol. 46, no. 1, pp. 195–202, 2006.
- C. W. Clenshaw and A. R. Curtis, “A method for numerical integration on an automatic computer,” Numerische Mathematik, vol. 2, no. 1, pp. 197–205, 1960.
- A. S. Kronrod, “Nodes and Weights of Quadrature Formulas,” English transl. from Russian, Consultants Bureau, vol. 35, no. 597, 1965.
- K. Petras, “Smolyak cubature of given polynomial degree with few nodes for increasing dimension,” Numerische Mathematik, vol. 93, pp. 729–753, Feb. 2003.
- P. Conrad and Y. Marzouk, “Adaptive Smolyak pseudospectral approximations,” SIAM Journal on Scientific Computing, vol. 35, no. 6, pp. 2643–2670, 2013.
- T. Maly and L. R. Petzold, “Numerical methods and software for sensitivity analysis of differential-algebraic systems,” Applied Numerical Mathematics, vol. 20, no. 60, pp. 57–79, 1996.
- R. M. Errico, “What Is an Adjoint Model?,” Bulletin of the American Meteorological Society, pp. 2577–2591, 1997.
- Y. Cao, S. Li, L. Petzold, and R. Serban, “Adjoint Sensitivity Analysis for Differential-Algebraic Equations: The Adjoint DAE System and Its Numerical Solution,” SIAM Journal on Scientific Computing, vol. 24, pp. 1076–1089, jan 2003.
- M. Ulbrich and S. Ulbrich, Primal-dual interior-point methods for PDE-constrained optimization, vol. 117, July 2007.
- R. Herzog and K. Kunisch, “Algorithms for PDE-constrained optimization,” GAMM-Mitteilungen, vol. 33, pp. 163–176, oct 2010.
- Z. Gao and J. Hesthaven, “Efficient solution of ordinary differential equations with high-dimensional parametrized uncertainty,” Communications in Computational Physics, vol. 10, no. 2, pp. 253–286, 2011.
- L. Benxia and Y. Xiping, “Wave decomposition phenomenon and spectrum evolution over submerged bars,” Acta Oceanologica Sinica, vol. 28, no. 3, pp. 82–92, 2009.
- H. R. Luth, B. Klopman, and N. Kitou, “Projects 13G: Kinematics of waves breaking partially on an offshore bar: LDV measurements for waves with and without a net onshore current,” Technical report H1573, Delft Hydraulics, 1994.
- T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, vol. 1. Springer Series in Statistics, 2001.
- R. G. Dean, “Stream function representation of nonlinear ocean waves,” Journal of Geophysical Research, vol. 70, pp. 4561–4572, Sept. 1965.
- R. W. Whalin, “The limit of applicability of linear wave refraction theory in a convergence zone,” Tech. Rep. H-71-3, US Army Corps of Engineers, 1971.
- D. Bigoni, A. P. Engsig-Karup, and Y. M. Marzouk, “Spectral tensor-train decomposition,” 2014.