
# An Efficient Intrusive Uncertainty Propagation Method For Multi-Physics System With Random Inputs

A. Mittal (Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA 94305; corresponding author, email: mittal0@stanford.edu) and G. Iaccarino (Mechanical Engineering, Stanford University, Stanford, CA 94305)
###### Abstract

Coupled partial differential equation (PDE) systems, which often represent multi-physics models, are naturally suited for modular numerical solution methods. However, several challenges remain in extending the benefits of modularization practices to the task of uncertainty propagation. Since each deterministic PDE solve is usually quite costly, statistical sampling based methods such as Monte-Carlo (MC) are inefficient because they do not take advantage of the mathematical structure of the problem and suffer from poor convergence properties. On the other hand, even if each module contains a moderate number of uncertain parameters, implementing spectral methods on the combined high-dimensional parameter space can be prohibitively expensive due to the curse of dimensionality. In this work, we present a module-based and efficient intrusive spectral projection (ISP) method for uncertainty propagation. In our proposed method, each subproblem is separated and modularized via block Gauss-Seidel (BGS) techniques, such that each module only needs to tackle its local stochastic parameter space. Moreover, the computational costs are significantly mitigated by constructing reduced chaos approximations of the input data that enter each module. We demonstrate implementations of our proposed method and its computational gains over the standard ISP method using numerical examples.

Key words. Multi-physics Systems, Stochastic Modeling, Polynomial Chaos, Intrusive Spectral Projection, Stochastic Galerkin Method.

AMS subject classifications. 60H15, 60H30, 60H35, 65C30, 65C50

## 1 Introduction

In recent years, Modeling and Simulation (M&S) tools have become ubiquitous due to the rapid growth in computing power and significant developments in numerical techniques. Predictive science, which involves the utilization of verified and validated computer simulations in studying complex physical phenomena, has become especially useful in multi-physics applications involving heat transfer, fluid flow, structural dynamics, chemical kinetics and nuclear processes, due to the difficulty and costs associated with experimental testing. However, to validate numerical predictions against experimental observations, uncertainties (characterized as random quantities) must be included within the associated physical model. This is primarily because mathematical descriptions of physical processes are often unphysical idealizations of their target scenarios (in terms of material and geometric properties), and also because knowledge of the model parameters is usually limited. Uncertainty Quantification (UQ) has therefore emerged as a vital discipline in the certification of predictive simulations. Once the uncertain parameters of a physical system have been identified, a UQ study entails the following sequence of steps: (1) uncertainty characterization, (2) uncertainty propagation and (3) uncertainty analysis.

In most cases where physical systems are formulated as partial differential equations (PDEs) with random inputs, uncertainty propagation becomes a computationally demanding task. Monte-Carlo (MC) based sampling [1, 2], wherein a set of input samples is generated and the corresponding output samples are collected via repeated simulation runs, is the simplest strategy to tackle this problem. However, obtaining a large enough sample size to perform accurate uncertainty analysis can be prohibitively expensive due to the slow convergence rate of MC. Alternative approaches, involving the construction of response surfaces of the quantities of interest using sparse regression methods [3, 4] and subsequently sampling the surrogate model, have been proposed. However, in the context of multi-physics models, governed by coupled PDE systems, these methods ignore the coupling structures, which can potentially be exploited to reduce computational costs. Although spectral uncertainty propagation methods, for example, generalized polynomial chaos (gPC) [5] based methods, exhibit geometric convergence properties, their monolithic implementations would succumb to the curse of dimensionality.

Multi-physics systems are naturally suited for modular (partitioned) solution strategies [6, 7], wherein specialized numerical methods and solvers for each constituent single-physics component can be reused and integrated with minimal effort to construct solvers for the full system. These methods can therefore facilitate a natural division of M&S expertise, and consequently a drastic reduction in the developmental costs and maintenance overheads associated with multi-physics software. However, extending standard modularization practices for efficient uncertainty propagation remains challenging.

In this work, we propose a module-based reduced intrusive spectral projection (ISP) method for gPC based uncertainty propagation in coupled PDE solvers. In addition to providing a practical framework for solver modularization, our proposed method mitigates the curse of dimensionality by constructing reduced approximations of the input data before they are communicated to their respective computational modules. The reduced approximations are constructed via simple linear algebraic transformations, and corresponding optimal quadrature rules are constructed to significantly reduce the required number of module executions. Previous work has exploited relatively trivial coupling structures, for example, unidirectional coupling [8] and linear coupling [9, 10, 11], to reduce the costs of uncertainty propagation, while here, we consider a more general setup with bidirectional and nonlinear coupling structures in the model. Moreover, Constantine et al. [12] successfully demonstrated a reduced approximation method for network (weakly) coupled systems, and our proposed method extends the approach to field (strongly) coupled models.

Several components of our proposed approach have been motivated by the recent works of Arnst et al. [13, 14, 15], wherein reduced chaos representations [16] are proposed as the reduced approximations. A limitation of their approach, however, is that computational gains can only be achieved in some of the modules, since the dimension and order reduction is implemented unidirectionally. We address this limitation in our proposed method by facilitating the construction of reduced approximations across all the modules. Moreover, the respective tolerance values, which control the approximation errors, can be individually prescribed for each module.

The remainder of this article is organized as follows. In §2, the preliminary definitions and concepts concerning ISP based uncertainty propagation for a stochastic multi-physics system are introduced along with a description of the standard ISP method. In §3, we describe the proposed reduced ISP method by addressing each of its components individually. In §4, we report and compare the performance and accuracy of the standard and reduced ISP method implementations on two numerical examples.

## 2 Preliminary definitions and concepts

### 2.1 Stochastic multi-physics model

Without loss of generality, we consider a multi-physics setup wherein a two-component system of stochastic, steady-state PDEs is bidirectionally coupled. The analysis that will be presented can be trivially extended to unsteady systems as well. A spatial discretization of each component of the system yields the coupled stochastic algebraic system: \forall\boldsymbol{\xi}=\left[\boldsymbol{\xi}_{1};\boldsymbol{\xi}_{2}\right]\in\Xi,

\boldsymbol{f}_{1}\left(\boldsymbol{u}_{1}\left(\boldsymbol{\xi}\right);\boldsymbol{u}_{2}\left(\boldsymbol{\xi}\right),\boldsymbol{\xi}_{1}\right)=\boldsymbol{0},\qquad\boldsymbol{f}_{2}\left(\boldsymbol{u}_{2}\left(\boldsymbol{\xi}\right);\boldsymbol{u}_{1}\left(\boldsymbol{\xi}\right),\boldsymbol{\xi}_{2}\right)=\boldsymbol{0}, \tag{2.1}

where \boldsymbol{f}_{1}\in\mathbb{R}^{n_{1}} and \boldsymbol{f}_{2}\in\mathbb{R}^{n_{2}} denote the component residuals, and \boldsymbol{u}_{1}\in\mathbb{R}^{n_{1}} and \boldsymbol{u}_{2}\in\mathbb{R}^{n_{2}} denote the component solution vectors, which represent the respective finite-dimensional discretizations of the spatially varying solution fields in each component PDE system. Moreover, \boldsymbol{\xi}_{1}\in\Xi_{1}\subseteq\mathbb{R}^{s_{1}} and \boldsymbol{\xi}_{2}\in\Xi_{2}\subseteq\mathbb{R}^{s_{2}} denote the component random input parameters, with probability density functions \mu_{1}:\Xi_{1}\rightarrow\mathbb{R}^{+} and \mu_{2}:\Xi_{2}\rightarrow\mathbb{R}^{+} respectively. Furthermore, \Xi\equiv\Xi_{1}\times\Xi_{2} denotes the combined parameter space, with dimension s=s_{1}+s_{2} and joint probability density function \mu:\Xi\rightarrow\mathbb{R}^{+} given by \mu\left(\boldsymbol{\xi}\right)=\mu_{1}\left(\boldsymbol{\xi}_{1}\right)\mu_{2}\left(\boldsymbol{\xi}_{2}\right), which reflects the assumption that \boldsymbol{\xi}_{1} and \boldsymbol{\xi}_{2} are statistically independent.

### 2.2 Generalized Polynomial Chaos

Let \mathcal{L}^{2}\left(\Xi\right) denote the space of all \mu-weighted, square-integrable scalar functions on \Xi. Moreover, \forall i\in\left\{1,2\right\}, let \boldsymbol{G}_{i}\in\mathbb{R}^{n_{i}\times n_{i}} denote the symmetric positive-definite Gramian matrix [17] corresponding to \boldsymbol{u}_{i}, and let

\mathcal{L}_{i}^{2}\left(\Xi\right)=\left\{\boldsymbol{u}:\Xi\rightarrow\mathbb{R}^{n_{i}}:\int_{\Xi}\boldsymbol{u}\left(\boldsymbol{\xi}\right)^{\mathbf{T}}\boldsymbol{G}_{i}\boldsymbol{u}\left(\boldsymbol{\xi}\right)\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}<\infty\right\} \tag{2.2}

denote the space of \mu-weighted, \boldsymbol{G}_{i}-square integrable functions that map from \Xi to \mathbb{R}^{n_{i}}. If \boldsymbol{u}_{i}\in\mathcal{L}_{i}^{2}\left(\Xi\right), then it can be formulated as an infinite polynomial series as follows. \forall\boldsymbol{\xi}\in\Xi,

\boldsymbol{u}_{i}\left(\boldsymbol{\xi}\right)=\sum_{j\geq 0}\hat{\boldsymbol{u}}_{i,j}\psi_{j}\left(\boldsymbol{\xi}\right), \tag{2.3}

where \left\{\psi_{j}:\Xi\rightarrow\mathbb{R}\right\}_{j\geq 0} denotes the set of distinct \mu-orthonormal polynomials, which constitutes a basis for \mathcal{L}^{2}\left(\Xi\right). The coordinate directions in \Xi are assumed to be statistically independent and therefore, each multivariate basis polynomial can be constructed as a product of s univariate orthonormal polynomials [18], which in turn can be precomputed using the Golub-Welsch algorithm [19]. Moreover, the indexing in the polynomial series is assumed to follow a total degree ordering, such that

j\geq k\geq 0\;\Rightarrow\;\mathrm{deg}\left(\psi_{j}\right)\geq\mathrm{deg}\left(\psi_{k}\right), \tag{2.4}

where \forall j\geq 0, \mathrm{deg}\left(\psi_{j}\right) denotes the total degree of \psi_{j}.
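As an illustration, the univariate building blocks can be generated numerically. The following Python sketch is a minimal example under the assumption of a uniform input density \mu(x)=1/2 on [-1,1], so that the \mu-orthonormal polynomials are rescaled Legendre polynomials (the function names are ours, not part of the method); it constructs a Gauss quadrature rule via the Golub-Welsch eigenvalue method and verifies the orthonormality of the basis:

```python
import numpy as np

def golub_welsch_legendre(n):
    """Gauss quadrature rule for the uniform density mu(x) = 1/2 on [-1, 1],
    computed from the symmetric tridiagonal Jacobi matrix via Golub-Welsch."""
    k = np.arange(1.0, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)      # off-diagonal recurrence terms
    J = np.diag(beta, -1) + np.diag(beta, 1)  # Jacobi (recurrence) matrix
    nodes, V = np.linalg.eigh(J)              # eigenvalues are the nodes
    weights = V[0, :]**2                      # mu has total mass 1
    return nodes, weights

def orthonormal_legendre(j, x):
    """j-th mu-orthonormal polynomial psi_j: a rescaled Legendre polynomial."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return np.sqrt(2.0 * j + 1.0) * np.polynomial.legendre.legval(x, c)

# verify orthonormality: int psi_j psi_k mu dx = delta_jk (exact for n = 8)
x, w = golub_welsch_legendre(8)
G = np.array([[np.sum(w * orthonormal_legendre(j, x) * orthonormal_legendre(k, x))
               for k in range(4)] for j in range(4)])
```

The same construction applies to any input density whose three-term recurrence coefficients are available.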

Consequently, a gPC approximation [5] \boldsymbol{u}_{i}^{p}\approx\boldsymbol{u}_{i} of order p\geq 0 can be formulated by truncating Eq. 2.3 as follows. \forall\boldsymbol{\xi}\in\Xi,

\boldsymbol{u}_{i}^{p}\left(\boldsymbol{\xi}\right)=\sum_{j=0}^{P}\hat{\boldsymbol{u}}_{i,j}\psi_{j}\left(\boldsymbol{\xi}\right)=\hat{\boldsymbol{U}}_{i}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right), \tag{2.5}

where \hat{\boldsymbol{U}}_{i}\equiv\hat{\boldsymbol{U}}_{i}^{p}=\left[\hat{\boldsymbol{u}}_{i,0}\;\cdots\;\hat{\boldsymbol{u}}_{i,P}\right]\in\mathbb{R}^{n_{i}\times\left(P+1\right)} denotes the gPC coefficient matrix and \boldsymbol{\psi}\equiv\boldsymbol{\psi}^{p}=\left[\psi_{0}\;\cdots\;\psi_{P}\right]^{\mathbf{T}}:\Xi\rightarrow\mathbb{R}^{P+1} denotes the basis vector. If the truncation is isotropic and based on the total degree, then the number of coefficients is P+1={\displaystyle\frac{\left(p+s\right)!}{p!\,s!}}.
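For reference, the total-degree index set underlying this truncation can be enumerated directly; the sketch below (illustrative only, using a stars-and-bars enumeration) lists all multi-indices with |\alpha|\leq p and confirms the count P+1=\left(p+s\right)!/\left(p!\,s!\right):

```python
import itertools
import math

def total_degree_indices(s, p):
    """All multi-indices alpha in N^s with total degree |alpha| <= p,
    listed degree by degree (a total-degree ordering of the basis)."""
    idx = []
    for deg in range(p + 1):
        # stars-and-bars: choose s-1 "bar" positions among deg + s - 1 slots
        for bars in itertools.combinations(range(deg + s - 1), s - 1):
            ext = (-1,) + bars + (deg + s - 1,)
            idx.append(tuple(b - a - 1 for a, b in zip(ext[:-1], ext[1:])))
    return idx

indices = total_degree_indices(s=3, p=4)
count = math.comb(4 + 3, 3)   # P + 1 = (p + s)! / (p! s!)
```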

The Cameron-Martin theorem [20] states that if \boldsymbol{u}_{i} is infinitely regular in \Xi, then as p\rightarrow\infty, the gPC approximation \boldsymbol{u}_{i}^{p} converges exponentially to \boldsymbol{u}_{i}, in the mean-square sense. To state this formally, \exists\chi^{*}>0,\rho^{*}>1, such that

\sqrt{\int_{\Xi}\left\|\boldsymbol{u}_{i}^{p}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}_{i}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}_{i}}^{2}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}}\leq\chi^{*}\rho^{-p} \tag{2.6}

for some \rho>\rho^{*}. Moreover, a corollary to the theorem states that if the regularity of \boldsymbol{u}_{i} is k, then the asymptotic rate of convergence is polynomial, and the upper bound on the mean-square error would be \mathcal{O}\left(p^{-k}\right).

For computing statistical quantities of interest such as the moments and related probability density functions of the solutions via exhaustive sampling, the gPC approximation can be used as a significantly cheaper surrogate model when compared to repeated executions of a costly numerical PDE solver. In particular, approximations of the first two moments can be computed directly from the gPC coefficients as follows.

\mathbb{E}\left(\boldsymbol{u}_{i}\right)\approx\mathbb{E}\left(\boldsymbol{u}_{i}^{p}\right)=\hat{\boldsymbol{u}}_{i,0},\qquad\mathrm{Cov}\left(\boldsymbol{u}_{i},\boldsymbol{u}_{i}\right)\approx\mathrm{Cov}\left(\boldsymbol{u}_{i}^{p},\boldsymbol{u}_{i}^{p}\right)=\hat{\boldsymbol{U}}_{i}\hat{\boldsymbol{U}}_{i}^{\mathbf{T}}-\hat{\boldsymbol{u}}_{i,0}\hat{\boldsymbol{u}}_{i,0}^{\mathbf{T}}. \tag{2.7}

Moreover, the probability density function of any related quantity of interest can be estimated with the kernel density estimation (KDE) method [21], using a large number of surrogate solution samples. Therefore, prior to any uncertainty analysis, the coefficient matrices of the solutions must be computed via intrusive spectral projection (ISP) or non-intrusive spectral projection (NISP) based uncertainty propagation methods. The focus of this work is the stochastic Galerkin method [22], which is a commonly used ISP based uncertainty propagation method.
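As a small illustration of Eq. 2.7, the following sketch (with a hypothetical 2\times 2 coefficient matrix) recovers the mean and covariance directly from the gPC coefficients:

```python
import numpy as np

def gpc_moments(U_hat):
    """Mean and covariance of a gPC expansion from its coefficient matrix
    U_hat of shape n x (P+1), per Eq. 2.7 (psi_0 = 1, orthonormal basis)."""
    mean = U_hat[:, 0]                              # coefficient of psi_0
    cov = U_hat @ U_hat.T - np.outer(mean, mean)    # Eq. 2.7, second moment
    return mean, cov

# hypothetical coefficient matrix for a 2-component solution with P = 1
U_hat = np.array([[1.0, 0.5],
                  [2.0, -0.3]])
mean, cov = gpc_moments(U_hat)
```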

### 2.3 Intrusive spectral projection

Let \mathcal{P}\left(\Xi\right)\equiv\mathcal{P}^{p}\left(\Xi\right)\equiv\left\{\boldsymbol{v}^{\mathbf{T}}\boldsymbol{\psi}:\boldsymbol{v}\in\mathbb{R}^{P+1}\right\} denote the space of polynomials in \Xi, with total degree \leq p. Subsequently, the Galerkin form of the coupled stochastic system (Eq. 2.1) can be expressed as follows. Find \hat{\boldsymbol{U}}_{1}\in\mathbb{R}^{n_{1}\times\left(P+1\right)} and \hat{\boldsymbol{U}}_{2}\in\mathbb{R}^{n_{2}\times\left(P+1\right)}, such that \forall\omega_{1},\omega_{2}\in\mathcal{P}\left(\Xi\right),

\int_{\Xi}\boldsymbol{f}_{1}\left(\hat{\boldsymbol{U}}_{1}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right);\hat{\boldsymbol{U}}_{2}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right),\hat{\boldsymbol{\Xi}}_{1}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)\right)\omega_{1}\left(\boldsymbol{\xi}\right)\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}+\int_{\Xi}\boldsymbol{f}_{2}\left(\hat{\boldsymbol{U}}_{2}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right);\hat{\boldsymbol{U}}_{1}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right),\hat{\boldsymbol{\Xi}}_{2}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)\right)\omega_{2}\left(\boldsymbol{\xi}\right)\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}=\boldsymbol{0}. \tag{2.8}

\Rightarrow\quad\hat{\boldsymbol{F}}_{1}\left(\hat{\boldsymbol{U}}_{1};\hat{\boldsymbol{U}}_{2}\right)=\int_{\Xi}\boldsymbol{f}_{1}\left(\hat{\boldsymbol{U}}_{1}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right);\hat{\boldsymbol{U}}_{2}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right),\hat{\boldsymbol{\Xi}}_{1}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)\right)\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)^{\mathbf{T}}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}=\boldsymbol{0},

\hat{\boldsymbol{F}}_{2}\left(\hat{\boldsymbol{U}}_{2};\hat{\boldsymbol{U}}_{1}\right)=\int_{\Xi}\boldsymbol{f}_{2}\left(\hat{\boldsymbol{U}}_{2}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right);\hat{\boldsymbol{U}}_{1}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right),\hat{\boldsymbol{\Xi}}_{2}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)\right)\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)^{\mathbf{T}}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}=\boldsymbol{0}. \tag{2.9}

where \forall i\in\left\{1,2\right\}, \hat{\boldsymbol{F}}_{i}\equiv\hat{\boldsymbol{F}}_{i}^{p} and \hat{\boldsymbol{\Xi}}_{i}\equiv\hat{\boldsymbol{\Xi}}_{i}^{p}\in\mathbb{R}^{s_{i}\times\left(P+1\right)}:

\hat{\boldsymbol{\Xi}}_{i}=\int_{\Xi}\boldsymbol{\xi}_{i}\,\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)^{\mathbf{T}}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi} \tag{2.10}

denote the gPC coefficient matrices of the component residual \boldsymbol{f}_{i} and the random input vector \boldsymbol{\xi}_{i}, respectively. Alternatively, the transposed Galerkin form [23] can be derived by projecting the coupled PDE system onto \mathcal{P}\left(\Xi\right), and subsequently applying the respective spatial discretization schemes to the resulting deterministic PDE system.

### 2.4 Numerical solution methods

Eq. 2.9 can be numerically solved using either monolithic (fully-coupled) or modular (partitioned) methods. A monolithic approach, for example, Newton’s method, would require the solution of the fully-coupled linear system

\begin{bmatrix}\dfrac{\partial\underline{\hat{\boldsymbol{F}}_{1}}}{\partial\underline{\hat{\boldsymbol{U}}_{1}}}\left(\hat{\boldsymbol{U}}_{1}^{\ell};\hat{\boldsymbol{U}}_{2}^{\ell}\right)&\dfrac{\partial\underline{\hat{\boldsymbol{F}}_{1}}}{\partial\underline{\hat{\boldsymbol{U}}_{2}}}\left(\hat{\boldsymbol{U}}_{1}^{\ell};\hat{\boldsymbol{U}}_{2}^{\ell}\right)\\[2ex]\dfrac{\partial\underline{\hat{\boldsymbol{F}}_{2}}}{\partial\underline{\hat{\boldsymbol{U}}_{1}}}\left(\hat{\boldsymbol{U}}_{2}^{\ell};\hat{\boldsymbol{U}}_{1}^{\ell}\right)&\dfrac{\partial\underline{\hat{\boldsymbol{F}}_{2}}}{\partial\underline{\hat{\boldsymbol{U}}_{2}}}\left(\hat{\boldsymbol{U}}_{2}^{\ell};\hat{\boldsymbol{U}}_{1}^{\ell}\right)\end{bmatrix}\begin{bmatrix}\underline{\hat{\boldsymbol{U}}_{1}^{\ell+1}}-\underline{\hat{\boldsymbol{U}}_{1}^{\ell}}\\[2ex]\underline{\hat{\boldsymbol{U}}_{2}^{\ell+1}}-\underline{\hat{\boldsymbol{U}}_{2}^{\ell}}\end{bmatrix}=-\begin{bmatrix}\underline{\hat{\boldsymbol{F}}_{1}}\left(\hat{\boldsymbol{U}}_{1}^{\ell};\hat{\boldsymbol{U}}_{2}^{\ell}\right)\\[2ex]\underline{\hat{\boldsymbol{F}}_{2}}\left(\hat{\boldsymbol{U}}_{2}^{\ell};\hat{\boldsymbol{U}}_{1}^{\ell}\right)\end{bmatrix}, \tag{2.11}

to obtain the solution updates and converge to the solution of the coupled Galerkin system (Eq. 2.9). Here, \underline{\ \cdot\ } simply denotes the vectorization operator. In general, developing linear solvers (and associated preconditioners) for Eq. 2.11 is quite challenging, since each component of the coupled system may require disparate numerical treatment. For example, fluid-structure interaction models [24] contain elliptic and hyperbolic PDEs, which are spatially discretized using Finite Element and Finite Volume methods respectively. Moreover, the quadratic convergence rate may not be guaranteed in practice. Furthermore, variants of Newton’s method, for instance, Gauss-Newton [25] and Levenberg-Marquardt [26], when implemented in this monolithic fashion, would also be affected by these limitations.

A modular solution method, for example, the block Gauss-Seidel (BGS) method [6], can address some of these limitations. Primarily, separate solvers for each subproblem in Eq. 2.9, possibly employing disparate solution methods and numerical treatment, can be developed with some degree of independence and coupled together as modules with minimal modifications. In what follows, Algorithm 1 describes the standard ISP based uncertainty propagation method, which is based on the BGS iterative method. Here, \hat{\boldsymbol{M}}_{1}\equiv\hat{\boldsymbol{M}}_{1}^{p} and \hat{\boldsymbol{M}}_{2}\equiv\hat{\boldsymbol{M}}_{2}^{p} denote the module operators used in the respective solvers for \hat{\boldsymbol{F}}_{1}=\boldsymbol{0} and \hat{\boldsymbol{F}}_{2}=\boldsymbol{0}.
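To make the structure of the iteration concrete, the following Python sketch runs a BGS coupling loop on a hypothetical pair of linear modules, where solve1 and solve2 play the roles of the module operators \hat{\boldsymbol{M}}_{1} and \hat{\boldsymbol{M}}_{2}; it is a toy stand-in for the actual Galerkin subproblem solvers, not the method itself:

```python
import numpy as np

# Toy pair of linear "modules": f1(u1; u2) = A1 u1 + B1 u2 - b1 = 0 and
# f2(u2; u1) = A2 u2 + B2 u1 - b2 = 0, with weak coupling so BGS contracts.
rng = np.random.default_rng(0)
A1 = 4.0 * np.eye(3); B1 = 0.1 * rng.standard_normal((3, 3)); b1 = rng.standard_normal(3)
A2 = 4.0 * np.eye(3); B2 = 0.1 * rng.standard_normal((3, 3)); b2 = rng.standard_normal(3)

solve1 = lambda u2: np.linalg.solve(A1, b1 - B1 @ u2)   # stand-in for module operator M1
solve2 = lambda u1: np.linalg.solve(A2, b2 - B2 @ u1)   # stand-in for module operator M2

u1, u2 = np.zeros(3), np.zeros(3)
for it in range(200):
    u1_new = solve1(u2)        # module 1 solve, using the latest u2
    u2_new = solve2(u1_new)    # module 2 solve, using the freshly updated u1
    delta = max(np.linalg.norm(u1_new - u1), np.linalg.norm(u2_new - u2))
    u1, u2 = u1_new, u2_new
    if delta < 1e-13:          # stop once the iterates stagnate
        break
```

The sweep above reuses the freshly updated u1 in module 2 (Gauss-Seidel style); a Jacobi-style variant would reuse the previous iterate instead. Interface evaluations, as in Eq. 2.13 below, are omitted for brevity.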

A more general formulation of a two-component stochastic multi-physics model can be defined as follows. \forall\boldsymbol{\xi}=\left[\boldsymbol{\xi}_{1};\boldsymbol{\xi}_{2}\right]% \in\Xi,

\boldsymbol{f}_{1}\left(\boldsymbol{u}_{1}\left(\boldsymbol{\xi}\right);\boldsymbol{v}_{2}\left(\boldsymbol{\xi}\right),\boldsymbol{\xi}_{1}\right)=\boldsymbol{0},\ \boldsymbol{v}_{1}\left(\boldsymbol{\xi}\right)=\boldsymbol{g}_{1}\left(\boldsymbol{u}_{1}\left(\boldsymbol{\xi}\right)\right),\qquad\boldsymbol{f}_{2}\left(\boldsymbol{u}_{2}\left(\boldsymbol{\xi}\right);\boldsymbol{v}_{1}\left(\boldsymbol{\xi}\right),\boldsymbol{\xi}_{2}\right)=\boldsymbol{0},\ \boldsymbol{v}_{2}\left(\boldsymbol{\xi}\right)=\boldsymbol{g}_{2}\left(\boldsymbol{u}_{2}\left(\boldsymbol{\xi}\right)\right), \tag{2.12}

where \boldsymbol{g}_{1}\in\mathbb{R}^{m_{1}} and \boldsymbol{g}_{2}\in\mathbb{R}^{m_{2}} denote the interface (coupling) functions. For this system, the BGS method for ISP based uncertainty propagation can be characterized as a slight modification of Algorithm 1, wherein the iterations are formulated as follows.

\hat{\boldsymbol{U}}_{1}^{\ell+1}=\hat{\boldsymbol{M}}_{1}\left(\hat{\boldsymbol{U}}_{1}^{\ell};\hat{\boldsymbol{V}}_{2}^{\ell}\right),\ \hat{\boldsymbol{V}}_{1}^{\ell+1}=\hat{\boldsymbol{G}}_{1}\left(\hat{\boldsymbol{U}}_{1}^{\ell+1}\right),\qquad\hat{\boldsymbol{U}}_{2}^{\ell+1}=\hat{\boldsymbol{M}}_{2}\left(\hat{\boldsymbol{U}}_{2}^{\ell};\hat{\boldsymbol{V}}_{1}^{\ell}\right),\ \hat{\boldsymbol{V}}_{2}^{\ell+1}=\hat{\boldsymbol{G}}_{2}\left(\hat{\boldsymbol{U}}_{2}^{\ell+1}\right), \tag{2.13}

where \forall i\in\left\{1,2\right\}, \hat{\boldsymbol{G}}_{i}\equiv\hat{\boldsymbol{G}}_{i}^{p} denotes the interface operator, such that \forall\hat{\boldsymbol{U}}\in\mathbb{R}^{n_{i}\times\left(P+1\right)},

\hat{\boldsymbol{G}}_{i}\left(\hat{\boldsymbol{U}}\right)=\int_{\Xi}\boldsymbol{g}_{i}\left(\hat{\boldsymbol{U}}\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)\right)\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)^{\mathbf{T}}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}. \tag{2.14}

While modular solution methods are well suited for coupling disparate numerical strategies for various components of the multi-physics model, it is worth noting that the convergence rate is, in general, linear [27].
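In practice, projection integrals such as Eq. 2.14 are evaluated by quadrature. The sketch below (for a single uniform input on [-1,1] and an assumed nonlinearity g; a pseudo-spectral stand-in for illustration, not this paper's implementation) evaluates such an interface projection by Gauss-Legendre quadrature:

```python
import numpy as np

def interface_project(U_hat, g, p, n_quad=20):
    """Quadrature evaluation of an interface projection like Eq. 2.14 for one
    uniform input on [-1, 1]: sample g(U_hat psi(xi)) at the Gauss nodes and
    project back onto the orthonormal (rescaled Legendre) basis."""
    x, w = np.polynomial.legendre.leggauss(n_quad)
    w = w / 2.0                                   # density mu(x) = 1/2
    psi = np.array([np.sqrt(2.0 * j + 1.0) *
                    np.polynomial.legendre.legval(x, np.eye(p + 1)[j])
                    for j in range(p + 1)])       # (P+1) x n_quad basis samples
    u_vals = U_hat @ psi                          # realizations of u at the nodes
    return (g(u_vals) * w) @ psi.T                # quadrature form of Eq. 2.14

# the identity interface g(u) = u must return the input coefficients
U_hat = np.array([[0.0, 1.0, 0.0]])
G_id = interface_project(U_hat, lambda u: u, p=2)
G_sq = interface_project(U_hat, lambda u: u**2, p=2)  # quadratic interface
```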

#### 2.4.1 Computational cost and limitations

Let \bar{\mathcal{C}}_{1}\equiv\bar{\mathcal{C}}_{1}\left(n_{1}\right) and \bar{\mathcal{C}}_{2}\equiv\bar{\mathcal{C}}_{2}\left(n_{2}\right) denote the respective average costs of solving \boldsymbol{f}_{1}=\boldsymbol{0} and \boldsymbol{f}_{2}=\boldsymbol{0} deterministically, given a set of input realizations in \Xi. Therefore, the overall computational cost of the standard ISP method

\mathcal{C}_{s}\approx\mathcal{O}\left(\bar{\mathcal{C}}_{1}P^{\alpha_{1}}+\bar{\mathcal{C}}_{2}P^{\alpha_{2}}\right), \tag{2.15}

where, in general, \alpha_{1},\alpha_{2}>1. Therefore, \mathcal{C}_{s} would grow exponentially with respect to the combined stochastic dimension s and order p. This undesirable complexity, known as the curse of dimensionality, would persist even if a monolithic solution strategy were used instead.
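The growth of the basis size driving Eq. 2.15 is easy to tabulate; for instance, at fixed order p=3:

```python
from math import comb

# Basis size P + 1 = (p + s)! / (p! s!) at fixed order p = 3, as the
# combined stochastic dimension s grows (cf. Eq. 2.15).
p = 3
sizes = {s: comb(p + s, s) for s in (2, 4, 8, 16)}
# {2: 10, 4: 35, 8: 165, 16: 969}
```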

Moreover, since the Galerkin form of each subproblem is formulated using polynomials in the global (combined) stochastic parameter space \Xi, modifying the characteristics of any particular module’s stochastic inputs would have to be reflected in all other modules. This requirement limits the developmental independence of modules and therefore, limits the practical benefits of modularization.

In the next section, we describe the proposed reduced ISP based uncertainty propagation method, which mitigates the curse of dimensionality via reduced approximation methods, and facilitates a desirable module-based independence by eliminating the influence of external stochastic inputs in formulating the Galerkin form of each subproblem.

## 3 Reduced ISP based uncertainty propagation

We will now individually describe the components of the proposed algorithm.

### 3.1 Modular gPC approximation

Since the component inputs are assumed to be statistically independent, \forall i\in\left\{1,2\right\}, the gPC approximation of component solution \boldsymbol{u}_{i} can be rewritten as follows. \forall\boldsymbol{\xi}_{1}\in\Xi_{1},\boldsymbol{\xi}_{2}\in\Xi_{2},

\boldsymbol{u}_{i}^{p}\left(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2}\right)=\sum_{j=0}^{P}\hat{\boldsymbol{u}}_{i,j}\psi_{j}\left(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2}\right)=\sum_{j=0}^{P}\hat{\boldsymbol{u}}_{i,\jmath_{1}\left(j\right),\jmath_{2}\left(j\right)}\psi_{1,\jmath_{1}\left(j\right)}\left(\boldsymbol{\xi}_{1}\right)\psi_{2,\jmath_{2}\left(j\right)}\left(\boldsymbol{\xi}_{2}\right), \tag{3.1}

where \forall i\in\left\{1,2\right\}, \left\{\psi_{i,j}:\Xi_{i}\rightarrow\mathbb{R}\right\}_{j\geq 0} denotes the set of distinct \mu_{i}-orthonormal polynomials, ordered according to their total degree, and \jmath_{i}\equiv\jmath_{i}^{p}:\left\{0\leq j\leq P\right\}\rightarrow\left\{0\leq j\leq P_{i}\right\} denotes the map from the global index set to the modular index set. Since the total degree of the polynomial expansion must be \leq p, in the isotropic case, the number of distinct polynomials needed is P_{i}+1={\displaystyle\frac{\left(p+s_{i}\right)!}{p!\,s_{i}!}}.
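The index maps \jmath_{1} and \jmath_{2} can be built by splitting each global multi-index into its \boldsymbol{\xi}_{1}- and \boldsymbol{\xi}_{2}-components; the following sketch (illustrative only, for small s_{1}, s_{2}) does so for s_{1}=s_{2}=1, p=2:

```python
import itertools
import math

def td_indices(s, p):
    """Total-degree multi-indices |alpha| <= p in N^s, ordered by degree
    (brute-force enumeration; fine for small s and p)."""
    out = []
    for deg in range(p + 1):
        out.extend(a for a in itertools.product(range(deg + 1), repeat=s)
                   if sum(a) == deg)
    return out

def modular_maps(s1, s2, p):
    """Index maps j_1(j), j_2(j) from the global basis index j to the
    modular basis indices of psi_{1,.} and psi_{2,.} (cf. Eq. 3.1)."""
    glob = td_indices(s1 + s2, p)
    pos1 = {a: k for k, a in enumerate(td_indices(s1, p))}
    pos2 = {a: k for k, a in enumerate(td_indices(s2, p))}
    j1 = [pos1[a[:s1]] for a in glob]   # xi_1-part of each global index
    j2 = [pos2[a[s1:]] for a in glob]   # xi_2-part of each global index
    return j1, j2

j1, j2 = modular_maps(s1=1, s2=1, p=2)
```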

\forall i\in\left\{1,2\right\}, let \boldsymbol{\psi}_{i}\equiv\boldsymbol{\psi}_{i}^{p}=\left[\psi_{i,0}\;\cdots\;\psi_{i,P_{i}}\right]^{\mathbf{T}}:\Xi_{i}\rightarrow\mathbb{R}^{P_{i}+1} denote the modular basis vector. Therefore, from Eq. 3.1, the global basis vector \boldsymbol{\psi} can be formulated in terms of \boldsymbol{\psi}_{i} as follows. \forall\boldsymbol{\xi}_{1}\in\Xi_{1},\boldsymbol{\xi}_{2}\in\Xi_{2},

\boldsymbol{\psi}\left(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2}\right)=\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right)\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right)=\boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}\right)\boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_{2}\right), \tag{3.2}

where \boldsymbol{\Pi}_{1}:\Xi_{2}\rightarrow\mathbb{R}^{\left(P+1\right)\times\left(P_{1}+1\right)} and \boldsymbol{\Pi}_{2}:\Xi_{1}\rightarrow\mathbb{R}^{\left(P+1\right)\times\left(P_{2}+1\right)} denote sparse polynomial matrices, each with at most P+1 non-zero elements. \forall i\in\left\{1,2\right\}, 0\leq j\leq P, 0\leq k\leq P_{i}, let \Pi_{i,j,k} denote the \left(j+1,k+1\right)-th element of \boldsymbol{\Pi}_{i}. Therefore, \forall\boldsymbol{\xi}_{1}\in\Xi_{1},\boldsymbol{\xi}_{2}\in\Xi_{2},

\Pi_{1,j,k}\left(\boldsymbol{\xi}_{2}\right)=\begin{cases}\psi_{2,\jmath_{2}\left(j\right)}\left(\boldsymbol{\xi}_{2}\right)&k=\jmath_{1}\left(j\right)\\ 0&k\neq\jmath_{1}\left(j\right)\end{cases},\qquad\Pi_{2,j,k}\left(\boldsymbol{\xi}_{1}\right)=\begin{cases}\psi_{1,\jmath_{1}\left(j\right)}\left(\boldsymbol{\xi}_{1}\right)&k=\jmath_{2}\left(j\right)\\ 0&k\neq\jmath_{2}\left(j\right)\end{cases}. \tag{3.3}

Subsequently, the gPC approximations of \boldsymbol{u}_{1} and \boldsymbol{u}_{2} can be formulated as follows. \forall\boldsymbol{\xi}_{1}\in\Xi_{1},\boldsymbol{\xi}_{2}\in\Xi_{2},

\boldsymbol{u}_{1}^{p}\left(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2}\right)=\tilde{\boldsymbol{U}}_{1}\left(\boldsymbol{\xi}_{2}\right)\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right),\qquad\boldsymbol{u}_{2}^{p}\left(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2}\right)=\tilde{\boldsymbol{U}}_{2}\left(\boldsymbol{\xi}_{1}\right)\boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_{2}\right), \tag{3.4}

where \tilde{\boldsymbol{U}}_{1}\equiv\tilde{\boldsymbol{U}}_{1}^{p}:\Xi_{2}\rightarrow\mathbb{R}^{n_{1}\times\left(P_{1}+1\right)} and \tilde{\boldsymbol{U}}_{2}\equiv\tilde{\boldsymbol{U}}_{2}^{p}:\Xi_{1}\rightarrow\mathbb{R}^{n_{2}\times\left(P_{2}+1\right)} denote the modular gPC coefficient matrices of \boldsymbol{u}_{1} and \boldsymbol{u}_{2} respectively. From Eq. 3.2, these matrices are related to the respective global gPC coefficient matrices \hat{\boldsymbol{U}}_{1} and \hat{\boldsymbol{U}}_{2} as follows. \forall\boldsymbol{\xi}_{1}\in\Xi_{1},\boldsymbol{\xi}_{2}\in\Xi_{2},

\tilde{\boldsymbol{U}}_{1}\left(\boldsymbol{\xi}_{2}\right)=\hat{\boldsymbol{U}}_{1}\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right),\qquad\tilde{\boldsymbol{U}}_{2}\left(\boldsymbol{\xi}_{1}\right)=\hat{\boldsymbol{U}}_{2}\boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}\right). \tag{3.5}

Moreover, from their definition in Eq. 3.3, \boldsymbol{\Pi}_{1} and \boldsymbol{\Pi}_{2} satisfy

\int_{\Xi_{2}}\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right)\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right)^{\mathbf{T}}\mu_{2}\left(\boldsymbol{\xi}_{2}\right)d\boldsymbol{\xi}_{2}=\int_{\Xi_{1}}\boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}\right)\boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}\right)^{\mathbf{T}}\mu_{1}\left(\boldsymbol{\xi}_{1}\right)d\boldsymbol{\xi}_{1}=\boldsymbol{I}_{P+1}. \tag{3.6}

Therefore, from Eq. 3.5 and Eq. 3.6, we have

\hat{\boldsymbol{U}}_{1}=\int_{\Xi_{2}}\tilde{\boldsymbol{U}}_{1}\left(\boldsymbol{\xi}_{2}\right)\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right)^{\mathbf{T}}\mu_{2}\left(\boldsymbol{\xi}_{2}\right)d\boldsymbol{\xi}_{2},\qquad\hat{\boldsymbol{U}}_{2}=\int_{\Xi_{1}}\tilde{\boldsymbol{U}}_{2}\left(\boldsymbol{\xi}_{1}\right)\boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}\right)^{\mathbf{T}}\mu_{1}\left(\boldsymbol{\xi}_{1}\right)d\boldsymbol{\xi}_{1}. \tag{3.7}
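The identity in Eq. 3.6 can be checked numerically; the sketch below (for s_{1}=s_{2}=1, p=2, uniform inputs on [-1,1], and one admissible total-degree ordering of the global basis) assembles \boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right) per Eq. 3.3 and integrates \boldsymbol{\Pi}_{1}\boldsymbol{\Pi}_{1}^{\mathbf{T}} by quadrature:

```python
import numpy as np

# One admissible total-degree ordering of the global basis for
# s1 = s2 = 1, p = 2: global index j -> (j_1(j), j_2(j)).
pairs = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]

def psi(j, x):
    """mu-orthonormal polynomial for the uniform density 1/2 on [-1, 1]."""
    return np.sqrt(2.0 * j + 1.0) * np.polynomial.legendre.legval(x, np.eye(3)[j])

def Pi1(xi2):
    """Pi_1(xi2) per Eq. 3.3: row j has psi_{2, j_2(j)}(xi2) in column j_1(j)."""
    M = np.zeros((len(pairs), 3))
    for j, (k1, k2) in enumerate(pairs):
        M[j, k1] = psi(k2, xi2)
    return M

# integrate Pi_1 Pi_1^T against mu_2 by Gauss quadrature (cf. Eq. 3.6)
x, w = np.polynomial.legendre.leggauss(6)
w = w / 2.0
I = sum(wq * Pi1(xq) @ Pi1(xq).T for xq, wq in zip(x, w))
```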

### 3.2 Modular Galerkin form

\forall i\in\left\{1,2\right\}, let \mathcal{P}_{i}\left(\Xi_{i}\right)\equiv\mathcal{P}_{i}^{p}\left(\Xi_{i}\right)\equiv\left\{\boldsymbol{v}^{\mathbf{T}}\boldsymbol{\psi}_{i}:\boldsymbol{v}\in\mathbb{R}^{P_{i}+1}\right\} denote the space of polynomials in \Xi_{i} with total degree \leq p. Subsequently, the modular Galerkin form of each component of the coupled stochastic system (Eq. 2.1) can be expressed as follows. In module 1, find \tilde{\boldsymbol{U}}_{1}:\Xi_{2}\rightarrow\mathbb{R}^{n_{1}\times\left(P_{1}+1\right)}, such that \forall\hat{\boldsymbol{U}}_{2}\in\mathbb{R}^{n_{2}\times\left(P+1\right)}, \boldsymbol{\xi}_{2}\in\Xi_{2}, \omega_{1}\in\mathcal{P}_{1}\left(\Xi_{1}\right),

\int_{\Xi_{1}}\boldsymbol{f}_{1}\left(\tilde{\boldsymbol{U}}_{1}\left(\boldsymbol{\xi}_{2}\right)\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right);\hat{\boldsymbol{U}}_{2}\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right)\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right),\tilde{\boldsymbol{\Xi}}_{1}\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right)\right)\omega_{1}\left(\boldsymbol{\xi}_{1}\right)\mu_{1}\left(\boldsymbol{\xi}_{1}\right)d\boldsymbol{\xi}_{1}=\boldsymbol{0} \tag{3.8}

\Rightarrow\quad\tilde{\boldsymbol{F}}_{1}\left(\tilde{\boldsymbol{U}}_{1};\hat{\boldsymbol{U}}_{2},\boldsymbol{\xi}_{2}\right)=\int_{\Xi_{1}}\boldsymbol{f}_{1}\left(\tilde{\boldsymbol{U}}_{1}\left(\boldsymbol{\xi}_{2}\right)\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right);\hat{\boldsymbol{U}}_{2}\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right)\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right),\tilde{\boldsymbol{\Xi}}_{1}\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right)\right)\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right)^{\mathbf{T}}\mu_{1}\left(\boldsymbol{\xi}_{1}\right)d\boldsymbol{\xi}_{1}=\boldsymbol{0}. \tag{3.9}

Similarly, in module 2, find \tilde{\boldsymbol{U}}{}_{2}:\Xi_{1}\rightarrow\mathbb{R}^{n_{2}\times\left(P_% {2}+1\right)}, such that \forall\hat{\boldsymbol{U}}_{1}\in\mathbb{R}^{n_{1}\times\left(P+1\right)}, \boldsymbol{\xi}_{1}\in\Xi_{1}, \omega_{2}\in\mathcal{P}_{2}\left(\Xi_{2}\right),

 \displaystyle\int_{\Xi_{2}}\boldsymbol{f}_{2}\left(\tilde{\boldsymbol{U}}{}_{2% }\left(\boldsymbol{\xi}_{1}\right)\boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_% {2}\right);\hat{\boldsymbol{U}}_{1}\boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_% {1}\right)\boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_{2}\right),\tilde{% \boldsymbol{\Xi}}{}_{2}\boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_{2}\right)% \right)\omega_{2}\left(\boldsymbol{\xi}_{2}\right)\mu_{2}\left(\boldsymbol{\xi% }_{2}\right)d\boldsymbol{\xi}_{2} \displaystyle=\boldsymbol{0} \hb@xt@.01(3.10) \displaystyle\Rightarrow \displaystyle\tilde{\boldsymbol{F}}_{2}\left(\tilde{\boldsymbol{U}}{}_{2};\hat% {\boldsymbol{U}}{}_{1},\boldsymbol{\xi}_{1}\right)=\int_{\Xi_{2}}\left(% \boldsymbol{f}_{2}\left(\tilde{\boldsymbol{U}}{}_{2}\left(\boldsymbol{\xi}_{1}% \right)\boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_{2}\right);\hat{\boldsymbol% {U}}_{1}\boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}\right)\boldsymbol{\psi}% _{2}\left(\boldsymbol{\xi}_{2}\right),\tilde{\boldsymbol{\Xi}}{}_{2}% \boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_{2}\right)\right)\right. \displaystyle\times\left.\boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_{2}\right% )^{\mathbf{T}}\right)\mu_{2}\left(\boldsymbol{\xi}_{2}\right)d\boldsymbol{\xi}% _{2}=\boldsymbol{0}. \hb@xt@.01(3.11)

Here, \tilde{\boldsymbol{F}}_{i}\equiv\tilde{\boldsymbol{F}}_{i}^{p} and \tilde{\boldsymbol{\Xi}}_{i}\equiv\tilde{\boldsymbol{\Xi}}_{i}^{p}:

 \displaystyle\tilde{\boldsymbol{\Xi}}_{i} \displaystyle=\int_{\Xi_{i}}\boldsymbol{\xi}_{i}\boldsymbol{\psi}_{i}\left(\boldsymbol{\xi}_{i}\right)^{\mathbf{T}}\mu_{i}\left(\boldsymbol{\xi}_{i}\right)d\boldsymbol{\xi}_{i} \hb@xt@.01(3.12)

denote the modular gPC coefficient matrices of the component residual \boldsymbol{f}_{i} and the random input vector \boldsymbol{\xi}_{i}, respectively. Alternatively, a transposed modular Galerkin form of component i in the coupled PDE system can be derived by projecting it onto \mathcal{P}_{i}\left(\Xi_{i}\right) and subsequently applying the respective spatial discretization scheme.

Let \tilde{\boldsymbol{M}}_{1}\equiv\tilde{\boldsymbol{M}}_{1}^{p} and \tilde{\boldsymbol{M}}_{2}\equiv\tilde{\boldsymbol{M}}_{2}^{p} denote the respective module operators used in the solvers for \tilde{\boldsymbol{F}}_{1}=\boldsymbol{0} and \tilde{\boldsymbol{F}}_{2}=\boldsymbol{0}. Therefore, \forall\hat{\boldsymbol{U}}_{i}\in\mathbb{R}^{n_{i}\times\left(P+1\right)},% \boldsymbol{\xi}_{i}\in\Xi_{i}, the iterations in the respective solvers can be formulated as follows.

 \displaystyle\tilde{\boldsymbol{U}}{}_{1}^{\ell+1}\left(\boldsymbol{\xi}{}_{2}\right) \displaystyle=\tilde{\boldsymbol{M}}_{1}\left(\tilde{\boldsymbol{U}}{}_{1}^{% \ell}\left(\boldsymbol{\xi}{}_{2}\right);\hat{\boldsymbol{U}}_{2}^{\ell}% \boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right)\right), \displaystyle\tilde{\boldsymbol{U}}{}_{2}^{\ell+1}\left(\boldsymbol{\xi}{}_{1}\right) \displaystyle=\tilde{\boldsymbol{M}}_{2}\left(\tilde{\boldsymbol{U}}_{2}^{\ell% }\left(\boldsymbol{\xi}{}_{1}\right);\hat{\boldsymbol{U}}_{1}^{\ell+1}% \boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}\right)\right). \hb@xt@.01(3.13)

From Eq. 3.6, the global gPC coefficient matrices \hat{\boldsymbol{U}}_{1} and \hat{\boldsymbol{U}}_{2} can be computed iteratively using a BGS method wrapped around the module operators, as follows.

 \displaystyle\hat{\boldsymbol{U}}_{1}^{\ell+1} \displaystyle=\int_{\Xi_{2}}\tilde{\boldsymbol{M}}_{1}\left(\hat{\boldsymbol{U% }}_{1}^{\ell}\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right);\hat{% \boldsymbol{U}}_{2}^{\ell}\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right% )\right)\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}\right)^{\mathbf{T}}\mu_% {2}\left(\boldsymbol{\xi}_{2}\right)d\boldsymbol{\xi}_{2}, \displaystyle\hat{\boldsymbol{U}}_{2}^{\ell+1} \displaystyle=\int_{\Xi_{1}}\tilde{\boldsymbol{M}}_{2}\left(\hat{\boldsymbol{U% }}_{2}^{\ell}\mathbf{\boldsymbol{\Pi}}_{2}\left(\boldsymbol{\xi}_{1}\right);% \hat{\boldsymbol{U}}_{1}^{\ell+1}\boldsymbol{\Pi}{}_{2}\left(\boldsymbol{\xi}_% {1}\right)\right)\boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}\right)^{% \mathbf{T}}\mu_{1}\left(\boldsymbol{\xi}_{1}\right)d\boldsymbol{\xi}_{1}. \hb@xt@.01(3.14)
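For illustration only, the block Gauss-Seidel structure of the updates above can be mimicked on a toy deterministic fixed-point problem; the maps M1 and M2 below are hypothetical stand-ins for the module operators \tilde{\boldsymbol{M}}_{1} and \tilde{\boldsymbol{M}}_{2}, not the actual stochastic solvers.

```python
import numpy as np

# Toy deterministic analogue of the BGS sweep: two "modules" exchange states,
# and module 2 always sees the freshly updated output of module 1.
def M1(x1, x2):
    return 0.5 - 0.1 * x2          # hypothetical module-1 solve

def M2(x2, x1):
    return 1.0 - 0.15 * x1         # hypothetical module-2 solve

x1, x2 = 0.0, 0.0
for _ in range(50):
    x1 = M1(x1, x2)                # module 1 update
    x2 = M2(x2, x1)                # module 2 uses the *updated* x1 (Gauss-Seidel)

# At convergence, both module equations are satisfied simultaneously.
assert np.isclose(x1, M1(x1, x2)) and np.isclose(x2, M2(x2, x1))
```

The key feature carried over to Eq. 3.14 is the lagged/updated data pattern: module 2 consumes \hat{\boldsymbol{U}}_{1}^{\ell+1}, not \hat{\boldsymbol{U}}_{1}^{\ell}.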

Let \left\{\left(\boldsymbol{\xi}_{i}^{\left(j\right)},w_{i}^{\left(j\right)}\right)\right\}_{j=1}^{Q_{i}} denote a quadrature rule of level q\geq p in \Xi_{i}, such that all polynomials of total degree \leq 2q+1 can be numerically integrated up to machine precision. Subsequently, the integrals in Eq. 3.14 can be approximated as follows.

 \displaystyle\hat{\boldsymbol{U}}_{1}^{\ell+1} \displaystyle\approx\sum_{j=1}^{Q_{2}}w_{2}^{\left(j\right)}\tilde{\boldsymbol% {M}}_{1}\left(\hat{\boldsymbol{U}}_{1}^{\ell}\boldsymbol{\Pi}{}_{1}\left(% \boldsymbol{\xi}_{2}^{\left(j\right)}\right);\hat{\boldsymbol{U}}_{2}^{\ell}% \boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}^{\left(j\right)}\right)\right)% \boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}^{\left(j\right)}\right){}^{% \mathbf{T}}, \displaystyle\hat{\boldsymbol{U}}_{2}^{\ell+1} \displaystyle\approx\sum_{j=1}^{Q_{1}}w_{1}^{\left(j\right)}\tilde{\boldsymbol% {M}}_{2}\left(\hat{\boldsymbol{U}}_{2}^{\ell}\boldsymbol{\Pi}_{2}\left(% \boldsymbol{\xi}_{1}^{\left(j\right)}\right);\hat{\boldsymbol{U}}_{1}^{\ell+1}% \boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}^{\left(j\right)}\right)\right)% \boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}^{\left(j\right)}\right)^{% \mathbf{T}}. \hb@xt@.01(3.15)

As opposed to the standard ISP method, which is based on solving the global Galerkin form, these module operators can be developed independently and modified without affecting each other. Moreover, since the coordinate directions in \Xi_{i} are assumed to be independent, the quadrature rules can be constructed by tensorization of univariate quadrature rules in each direction, such that Q_{i}=\left(q+1\right)^{s_{i}}\geq\left(p+1\right)^{s_{i}}. Alternatively, sparse grid tensorization methods [28] exhibit significantly slower, albeit still exponential, growth in the size of the quadrature rules with respect to the dimension s_{i} and level q.
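A minimal sketch of the tensorized construction, assuming uniform random inputs on [-1,1] so that Gauss-Legendre rules apply (the sizes s and q below are placeholders):

```python
import itertools
import numpy as np

def tensor_gauss_legendre(s, q):
    """Tensorized Gauss-Legendre rule with q+1 points per direction,
    exact to machine precision for polynomials of degree <= 2q+1
    in each coordinate; the rule has Q = (q+1)**s points."""
    nodes_1d, weights_1d = np.polynomial.legendre.leggauss(q + 1)
    weights_1d = weights_1d / 2.0     # normalize for the uniform density 1/2 on [-1,1]
    nodes = np.array(list(itertools.product(nodes_1d, repeat=s)))
    weights = np.prod(list(itertools.product(weights_1d, repeat=s)), axis=1)
    return nodes, weights

nodes, weights = tensor_gauss_legendre(s=3, q=2)
assert nodes.shape == (27, 3)                  # Q = (q+1)**s = 3**3
assert np.isclose(weights.sum(), 1.0)          # weights integrate the density
```

The exponential growth of Q with s is visible directly in the `(q+1)**s` point count, which is the cost driver the dimension reduction below is designed to mitigate.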

Assuming that the repeated execution of the module operators \tilde{\boldsymbol{M}}_{1} and \tilde{\boldsymbol{M}}_{2} dominates the computational costs, the overall cost of this method would be \approx\mathcal{O}\left(\bar{\mathcal{C}}_{1}P_{1}^{\alpha_{1}}Q_{2}+\bar{\mathcal{C}}_{2}P_{2}^{\alpha_{2}}Q_{1}\right). Since \alpha_{1},\alpha_{2}>1 and P_{1},P_{2}<P, the overall cost would indeed be less than that of the standard ISP method. However, the curse of dimensionality persists, since the overall cost still grows exponentially with respect to the combined dimension s and order p. To address this limitation, we propose that reduced approximations of the data entering each module be constructed at every iteration. We now describe the two main components of this construction: (1) the dimension reduction routine and (2) the order reduction routine.

### 3.3 Dimension reduction

At each iteration \ell, let \boldsymbol{y}_{1}^{\ell}\equiv\left[\boldsymbol{u}_{1}^{\ell};\boldsymbol{u}_{2}^{\ell}\right]:\Xi\rightarrow\mathbb{R}^{n} and \boldsymbol{y}_{2}^{\ell}\equiv\left[\boldsymbol{u}_{2}^{\ell};\boldsymbol{u}_{1}^{\ell+1}\right]:\Xi\rightarrow\mathbb{R}^{n} denote the input data for modules 1 and 2 respectively, where n=n_{1}+n_{2}. Consequently, the gPC coefficient matrices \hat{\boldsymbol{Y}}_{1}^{\ell}\equiv\left[\hat{\boldsymbol{U}}_{1}^{\ell};\hat{\boldsymbol{U}}_{2}^{\ell}\right]=\left[\begin{array}[]{ccc}\hat{\boldsymbol{y}}_{1,\jmath_{1}\left(0\right),\jmath_{2}\left(0\right)}^{\ell}&\cdots&\hat{\boldsymbol{y}}_{1,\jmath_{1}\left(P\right),\jmath_{2}\left(P\right)}^{\ell}\end{array}\right]\in\mathbb{R}^{n\times\left(P+1\right)} and \hat{\boldsymbol{Y}}_{2}^{\ell}\equiv\left[\hat{\boldsymbol{U}}_{2}^{\ell};\hat{\boldsymbol{U}}_{1}^{\ell+1}\right]=\left[\begin{array}[]{ccc}\hat{\boldsymbol{y}}_{2,\jmath_{1}\left(0\right),\jmath_{2}\left(0\right)}^{\ell}&\cdots&\hat{\boldsymbol{y}}_{2,\jmath_{1}\left(P\right),\jmath_{2}\left(P\right)}^{\ell}\end{array}\right]\in\mathbb{R}^{n\times\left(P+1\right)} define the respective input data for the module operators \tilde{\boldsymbol{M}}_{1} and \tilde{\boldsymbol{M}}_{2}. Moreover, let \boldsymbol{\Gamma}_{1},\boldsymbol{\Gamma}_{2}:

 \boldsymbol{\Gamma}_{1}=\left[\begin{array}[]{cc}\boldsymbol{G}_{1}&% \boldsymbol{0}\\ \boldsymbol{0}&\boldsymbol{G}_{2}\end{array}\right]\in\mathbb{R}^{n\times n},% \ \boldsymbol{\Gamma}_{2}=\left[\begin{array}[]{cc}\boldsymbol{G}_{2}&% \boldsymbol{0}\\ \boldsymbol{0}&\boldsymbol{G}_{1}\end{array}\right]\in\mathbb{R}^{n\times n} \hb@xt@.01(3.16)

denote the Gramian matrices corresponding to \boldsymbol{y}_{1}^{\ell} and \boldsymbol{y}_{2}^{\ell} respectively. Moreover, \forall i\in\left\{1,2\right\}, let P_{i}^{\prime}+1={\displaystyle\frac{\left(p-1+s_{i}\right)!}{\left(p-1\right)!s_{i}!}} denote the number of basis polynomials in \Xi_{i} with total degree \leq p-1.

Therefore, using \hat{\boldsymbol{Y}}_{1}^{\ell} and \hat{\boldsymbol{Y}}_{2}^{\ell}, we construct \boldsymbol{Z}_{1}^{\ell} and \boldsymbol{Z}_{2}^{\ell}:

 \displaystyle\boldsymbol{Z}_{1}^{\ell} \displaystyle=\left(\boldsymbol{I}_{P_{1}^{\prime}+1}\otimes\boldsymbol{\Gamma% }_{1}^{\frac{1}{2}}\right)\left[\begin{array}[]{ccc}\boldsymbol{\zeta}_{1,0,1}% ^{\ell}&\cdots&\boldsymbol{\zeta}_{1,0,P_{2}}^{\ell}\\ \vdots&&\vdots\\ \boldsymbol{\zeta}_{1,P_{1}^{\prime},1}^{\ell}&\cdots&\boldsymbol{\zeta}_{1,P_% {1}^{\prime},P_{2}}^{\ell}\end{array}\right]\in\mathbb{R}^{\left(P_{1}^{\prime% }+1\right)n\times P_{2}}, \displaystyle\boldsymbol{Z}_{2}^{\ell} \displaystyle=\left(\boldsymbol{I}_{P_{2}^{\prime}+1}\otimes\boldsymbol{\Gamma% }_{2}^{\frac{1}{2}}\right)\left[\begin{array}[]{ccc}\boldsymbol{\zeta}_{2,0,1}% ^{\ell}&\cdots&\boldsymbol{\zeta}_{2,0,P_{1}}^{\ell}\\ \vdots&&\vdots\\ \boldsymbol{\zeta}_{2,P_{2}^{\prime},1}^{\ell}&\cdots&\boldsymbol{\zeta}_{2,P_% {2}^{\prime},P_{1}}^{\ell}\end{array}\right]\in\mathbb{R}^{\left(P_{2}^{\prime% }+1\right)n\times P_{1}}, \hb@xt@.01(3.17)

where \forall 0\leq j\leq P_{1}^{\prime}, 1\leq k\leq P_{2}, \boldsymbol{\zeta}_{1,j,k}^{\ell}\in\mathbb{R}^{n}:

 \boldsymbol{\zeta}_{1,j,k}^{\ell}=\begin{cases}\hat{\boldsymbol{y}}_{1,j,k}^{% \ell}&\mathrm{deg}\left(\psi_{1,j}\right)+\mathrm{deg}\left(\psi_{2,k}\right)% \leq p\\ \boldsymbol{0}&\mathrm{deg}\left(\psi_{1,j}\right)+\mathrm{deg}\left(\psi_{2,k% }\right)>p\end{cases}, \hb@xt@.01(3.18)

and \forall 0\leq j\leq P_{2}^{\prime}, 1\leq k\leq P_{1}, \boldsymbol{\zeta}_{2,j,k}^{\ell}\in\mathbb{R}^{n}:

 \boldsymbol{\zeta}_{2,j,k}^{\ell}=\begin{cases}\hat{\boldsymbol{y}}_{2,j,k}^{% \ell}&\mathrm{deg}\left(\psi_{2,j}\right)+\mathrm{deg}\left(\psi_{1,k}\right)% \leq p\\ \boldsymbol{0}&\mathrm{deg}\left(\psi_{2,j}\right)+\mathrm{deg}\left(\psi_{1,k% }\right)>p\end{cases}. \hb@xt@.01(3.19)
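In code, the degree-based zeroing of Eqs. 3.18-3.19 amounts to masking coefficient blocks whose combined total degree exceeds p; a minimal sketch, where the degree arrays `deg1` and `deg2` (degrees of \psi_{1,j} and \psi_{2,k}) are assumed to be supplied by the basis construction:

```python
import numpy as np
from itertools import product

def mask_by_total_degree(Yhat, deg1, deg2, p):
    """Zero every (j, k) coefficient block with deg(psi_{1,j}) + deg(psi_{2,k}) > p."""
    Z = Yhat.copy()
    for j, k in product(range(len(deg1)), range(len(deg2))):
        if deg1[j] + deg2[k] > p:
            Z[j, k] = 0.0
    return Z

# 1-D toy: three basis functions per space with degrees 0, 1, 2, and p = 2.
deg1, deg2, p = [0, 1, 2], [0, 1, 2], 2
Z = mask_by_total_degree(np.ones((3, 3)), deg1, deg2, p)
assert Z[2, 2] == 0.0   # combined degree 4 > p: zeroed
assert Z[1, 1] == 1.0   # combined degree 2 <= p: retained
```

Here each entry stands in for an n-vector block \boldsymbol{\zeta}_{i,j,k}^{\ell}; the mask simply reproduces the total-degree truncation of the global basis inside the rectangular (j, k) index grid.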

Subsequently, we compute the singular value decomposition (SVD) of \boldsymbol{Z}_{1}^{\ell} and \boldsymbol{Z}_{2}^{\ell}:

 \boldsymbol{Z}_{1}^{\ell}=\tilde{\boldsymbol{\Upsilon}}_{1}^{\ell}\boldsymbol{% \Sigma}_{1}^{\ell}\left(\tilde{\boldsymbol{\Theta}}_{1}^{\ell}\right)^{\mathbf% {T}},\ \boldsymbol{Z}_{2}^{\ell}=\tilde{\boldsymbol{\Upsilon}}_{2}^{\ell}% \boldsymbol{\Sigma}_{2}^{\ell}\left(\tilde{\boldsymbol{\Theta}}_{2}^{\ell}% \right)^{\mathbf{T}}, \hb@xt@.01(3.20)

where the decomposition matrices have the following structure and dimensions.

 \displaystyle\tilde{\boldsymbol{\Upsilon}}_{1}^{\ell} \displaystyle=\left[\begin{array}[]{ccc}\tilde{\boldsymbol{\upsilon}}_{1,0,1}^% {\ell}&\cdots&\tilde{\boldsymbol{\upsilon}}_{1,0,P_{2}}^{\ell}\\ \vdots&&\vdots\\ \tilde{\boldsymbol{\upsilon}}_{1,P_{1}^{\prime},1}^{\ell}&\cdots&\tilde{% \boldsymbol{\upsilon}}_{1,P_{1}^{\prime},P_{2}}^{\ell}\end{array}\right]\in% \mathbb{R}^{\left(P_{1}^{\prime}+1\right)n\times P_{2}}, \displaystyle\tilde{\boldsymbol{\Upsilon}}_{2}^{\ell} \displaystyle=\left[\begin{array}[]{ccc}\tilde{\boldsymbol{\upsilon}}_{2,0,1}^% {\ell}&\cdots&\tilde{\boldsymbol{\upsilon}}_{2,0,P_{1}}^{\ell}\\ \vdots&&\vdots\\ \tilde{\boldsymbol{\upsilon}}_{2,P_{2}^{\prime},1}^{\ell}&\cdots&\tilde{% \boldsymbol{\upsilon}}_{2,P_{2}^{\prime},P_{1}}^{\ell}\end{array}\right]\in% \mathbb{R}^{\left(P_{2}^{\prime}+1\right)n\times P_{1}}, \hb@xt@.01(3.21)
 \boldsymbol{\Sigma}_{1}^{\ell}=\left[\begin{array}[]{ccc}\sigma_{1,1}^{\ell}\\ &\ddots\\ &&\sigma_{1,P_{2}}^{\ell}\end{array}\right]\in\mathbb{R}^{P_{2}\times P_{2}},% \ \boldsymbol{\Sigma}_{2}^{\ell}=\left[\begin{array}[]{ccc}\sigma_{2,1}^{\ell}% \\ &\ddots\\ &&\sigma_{2,P_{1}}^{\ell}\end{array}\right]\in\mathbb{R}^{P_{1}\times P_{1}}, \hb@xt@.01(3.22)
 \displaystyle\tilde{\boldsymbol{\Theta}}_{1}^{\ell} \displaystyle=\left[\begin{array}[]{ccc}\hat{\theta}_{1,1,1}&\cdots&\hat{% \theta}_{1,1,P_{2}}\\ \vdots&&\vdots\\ \hat{\theta}_{1,P_{2},1}&\cdots&\hat{\theta}_{1,P_{2},P_{2}}\end{array}\right]% \in\mathbb{R}^{P_{2}\times P_{2}}, \displaystyle\tilde{\boldsymbol{\Theta}}_{2}^{\ell} \displaystyle=\left[\begin{array}[]{ccc}\hat{\theta}_{2,1,1}&\cdots&\hat{% \theta}_{2,1,P_{1}}\\ \vdots&&\vdots\\ \hat{\theta}_{2,P_{1},1}&\cdots&\hat{\theta}_{2,P_{1},P_{1}}\end{array}\right]% \in\mathbb{R}^{P_{1}\times P_{1}}. \hb@xt@.01(3.23)
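The decomposition in Eq. 3.20 is a standard thin SVD; a minimal numerical sketch with hypothetical sizes, taking \boldsymbol{\Gamma}_{i}=\boldsymbol{I} for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, P1p, P2 = 4, 6, 10                 # hypothetical sizes: (P1' + 1) = 6, P2 = 10

# Z1 stacks the (Gramian-weighted) gPC coefficient blocks, as in Eq. 3.17.
Z1 = rng.standard_normal((P1p * n, P2))

# Thin SVD: Upsilon ~ left factors, sigma ~ singular values, ThetaT ~ right factors.
Upsilon, sigma, ThetaT = np.linalg.svd(Z1, full_matrices=False)

assert Upsilon.shape == (P1p * n, P2)     # matches Eq. 3.21
assert sigma.shape == (P2,)               # diagonal of Sigma_1 in Eq. 3.22
assert ThetaT.shape == (P2, P2)           # matches Eq. 3.23
assert np.allclose(Z1, Upsilon @ np.diag(sigma) @ ThetaT)
```

The singular values returned by `numpy.linalg.svd` are sorted in decreasing order, which is what makes the tail-energy truncation of §3.3.2 a simple prefix selection.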

We then construct \hat{\boldsymbol{\Upsilon}}_{1}^{\ell}\in\mathbb{R}^{\left(P_{1}+1\right)n% \times P_{2}}, \hat{\boldsymbol{\Upsilon}}_{2}^{\ell}\in\mathbb{R}^{\left(P_{2}+1\right)n% \times P_{1}}:

 \displaystyle\hat{\boldsymbol{\Upsilon}}_{1}^{\ell} \displaystyle=\left(\boldsymbol{I}_{P_{1}+1}\otimes\boldsymbol{\Gamma}_{1}^{-\frac{1}{2}}\right)\left[\begin{array}[]{c}\tilde{\boldsymbol{\Upsilon}}_{1}^{\ell}\\ \boldsymbol{0}\end{array}\right]=\left[\begin{array}[]{ccc}\hat{\boldsymbol{\upsilon}}_{1,0,1}^{\ell}&\cdots&\hat{\boldsymbol{\upsilon}}_{1,0,P_{2}}^{\ell}\\ \vdots&&\vdots\\ \hat{\boldsymbol{\upsilon}}_{1,P_{1},1}^{\ell}&\cdots&\hat{\boldsymbol{\upsilon}}_{1,P_{1},P_{2}}^{\ell}\end{array}\right], \displaystyle\hat{\boldsymbol{\Upsilon}}_{2}^{\ell} \displaystyle=\left(\boldsymbol{I}_{P_{2}+1}\otimes\boldsymbol{\Gamma}_{2}^{-\frac{1}{2}}\right)\left[\begin{array}[]{c}\tilde{\boldsymbol{\Upsilon}}_{2}^{\ell}\\ \boldsymbol{0}\end{array}\right]=\left[\begin{array}[]{ccc}\hat{\boldsymbol{\upsilon}}_{2,0,1}^{\ell}&\cdots&\hat{\boldsymbol{\upsilon}}_{2,0,P_{1}}^{\ell}\\ \vdots&&\vdots\\ \hat{\boldsymbol{\upsilon}}_{2,P_{2},1}^{\ell}&\cdots&\hat{\boldsymbol{\upsilon}}_{2,P_{2},P_{1}}^{\ell}\end{array}\right], \hb@xt@.01(3.24)

and \hat{\boldsymbol{\Theta}}_{1}^{\ell}\in\mathbb{R}^{\left(P_{2}+1\right)\times P% _{2}}, \hat{\boldsymbol{\Theta}}_{2}^{\ell}\in\mathbb{R}^{\left(P_{1}+1\right)\times P% _{1}}:

 \displaystyle\hat{\boldsymbol{\Theta}}_{1}^{\ell} \displaystyle=\left[\begin{array}[]{ccc}0&\cdots&0\\ &\tilde{\boldsymbol{\Theta}}_{1}^{\ell}\end{array}\right]=\left[\begin{array}[% ]{ccc}\hat{\theta}_{1,0,1}^{\ell}&\cdots&\hat{\theta}_{1,0,P_{2}}^{\ell}\\ \vdots&&\vdots\\ \hat{\theta}_{1,P_{2},1}^{\ell}&\cdots&\hat{\theta}_{1,P_{2},P_{2}}^{\ell}\end% {array}\right]=\left[\begin{array}[]{c}\hat{\boldsymbol{\theta}}_{1,1}\\ \vdots\\ \hat{\boldsymbol{\theta}}_{1,P_{2}}\end{array}\right]^{\mathbf{T}}, \displaystyle\hat{\boldsymbol{\Theta}}_{2}^{\ell} \displaystyle=\left[\begin{array}[]{ccc}0&\cdots&0\\ &\tilde{\boldsymbol{\Theta}}_{2}^{\ell}\end{array}\right]=\left[\begin{array}[% ]{ccc}\hat{\theta}_{2,0,1}^{\ell}&\cdots&\hat{\theta}_{2,0,P_{1}}^{\ell}\\ \vdots&&\vdots\\ \hat{\theta}_{2,P_{1},1}^{\ell}&\cdots&\hat{\theta}_{2,P_{1},P_{1}}^{\ell}\end% {array}\right]=\left[\begin{array}[]{c}\hat{\boldsymbol{\theta}}_{2,1}\\ \vdots\\ \hat{\boldsymbol{\theta}}_{2,P_{1}}\end{array}\right]^{\mathbf{T}}. \hb@xt@.01(3.25)

Subsequently, the gPC approximations \boldsymbol{y}_{1}^{\ell,p} and \boldsymbol{y}_{2}^{\ell,p} can be rewritten as follows. \forall\boldsymbol{\xi}_{1}\in\Xi_{1}, \boldsymbol{\xi}_{2}\in\Xi_{2},

 \displaystyle\boldsymbol{y}_{1}^{\ell,p}\left(\boldsymbol{\xi}_{1},\boldsymbol% {\xi}_{2}\right) \displaystyle=\bar{\boldsymbol{y}}_{1}^{\ell}\left(\boldsymbol{\xi}_{1}\right)% +\sum_{j=1}^{P_{2}}\sigma_{1,j}^{\ell}\boldsymbol{\upsilon}_{1,j}^{\ell}\left(% \boldsymbol{\xi}_{1}\right)\theta_{1,j}^{\ell}\left(\boldsymbol{\xi}_{2}\right), \displaystyle\boldsymbol{y}_{2}^{\ell,p}\left(\boldsymbol{\xi}_{1},\boldsymbol% {\xi}_{2}\right) \displaystyle=\bar{\boldsymbol{y}}_{2}^{\ell}\left(\boldsymbol{\xi}_{2}\right)% +\sum_{j=1}^{P_{1}}\sigma_{2,j}^{\ell}\boldsymbol{\upsilon}_{2,j}^{\ell}\left(% \boldsymbol{\xi}_{2}\right)\theta_{2,j}^{\ell}\left(\boldsymbol{\xi}_{1}\right), \hb@xt@.01(3.26)

where

 \bar{\boldsymbol{y}}_{1}^{\ell}\left(\boldsymbol{\xi}_{1}\right)=\sum_{j=0}^{P% _{1}}\hat{\boldsymbol{y}}_{1,j,0}^{\ell}\psi_{1,j}\left(\boldsymbol{\xi}_{1}% \right),\ \bar{\boldsymbol{y}}_{2}^{\ell}\left(\boldsymbol{\xi}_{2}\right)=% \sum_{j=0}^{P_{2}}\hat{\boldsymbol{y}}_{2,0,j}^{\ell}\psi_{2,j}\left(% \boldsymbol{\xi}_{2}\right). \hb@xt@.01(3.27)

Moreover, \forall 1\leq j\leq P_{2},

 \boldsymbol{\upsilon}_{1,j}^{\ell}\left(\boldsymbol{\xi}_{1}\right)=\sum_{k=0}% ^{P_{1}}\hat{\boldsymbol{\upsilon}}_{1,k,j}^{\ell}\psi_{1,k}\left(\boldsymbol{% \xi}_{1}\right),\ \theta_{1,j}^{\ell}\left(\boldsymbol{\xi}_{2}\right)=\sum_{k% =0}^{P_{2}}\hat{\theta}_{1,k,j}^{\ell}\psi_{2,k}\left(\boldsymbol{\xi}_{2}% \right)=\hat{\boldsymbol{\theta}}_{1,j}\boldsymbol{\psi}_{2}\left(\boldsymbol{% \xi}_{2}\right), \hb@xt@.01(3.28)

and \forall 1\leq j\leq P_{1},

 \boldsymbol{\upsilon}_{2,j}^{\ell}\left(\boldsymbol{\xi}_{2}\right)=\sum_{k=0}^{P_{2}}\hat{\boldsymbol{\upsilon}}_{2,k,j}^{\ell}\psi_{2,k}\left(\boldsymbol{\xi}_{2}\right),\ \theta_{2,j}^{\ell}\left(\boldsymbol{\xi}_{1}\right)=\sum_{k=0}^{P_{1}}\hat{\theta}_{2,k,j}^{\ell}\psi_{1,k}\left(\boldsymbol{\xi}_{1}\right)=\hat{\boldsymbol{\theta}}_{2,j}\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right). \hb@xt@.01(3.29)

Each functional representation in Eq. 3.26 is a finite-dimensional, separable variant of the widely used Karhunen-Loève (KL) expansion [29]. While the coefficients in a standard KL expansion are deterministic, the coefficients in this separable KL expansion are random.

Moreover, the modular gPC coefficient matrices of \boldsymbol{y}_{1}^{\ell} and \boldsymbol{y}_{2}^{\ell} can be formulated as follows. \forall\boldsymbol{\xi}_{1}\in\Xi_{1}, \boldsymbol{\xi}_{2}\in\Xi_{2},

 \displaystyle\tilde{\boldsymbol{Y}}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}\right) \displaystyle=\tilde{\boldsymbol{Y}}_{1,0}^{\ell}+\sum_{j=1}^{P_{2}}\sigma_{1,% j}^{\ell}\tilde{\boldsymbol{Y}}_{1,j}^{\ell}\theta_{1,j}^{\ell}\left(% \boldsymbol{\xi}_{2}\right), \displaystyle\tilde{\boldsymbol{Y}}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}\right) \displaystyle=\tilde{\boldsymbol{Y}}_{2,0}^{\ell}+\sum_{j=1}^{P_{1}}\sigma_{2,% j}^{\ell}\tilde{\boldsymbol{Y}}_{2,j}^{\ell}\theta_{2,j}^{\ell}\left(% \boldsymbol{\xi}_{1}\right), \hb@xt@.01(3.30)

where \tilde{\boldsymbol{Y}}_{1,0}^{\ell}=\left[\begin{array}[]{ccc}\hat{\boldsymbol{y}}_{1,0,0}^{\ell}&\cdots&\hat{\boldsymbol{y}}_{1,P_{1},0}^{\ell}\end{array}\right], \tilde{\boldsymbol{Y}}_{2,0}^{\ell}=\left[\begin{array}[]{ccc}\hat{\boldsymbol{y}}_{2,0,0}^{\ell}&\cdots&\hat{\boldsymbol{y}}_{2,0,P_{2}}^{\ell}\end{array}\right], \forall 1\leq j\leq P_{2}, \tilde{\boldsymbol{Y}}_{1,j}^{\ell}=\left[\begin{array}[]{ccc}\hat{\boldsymbol{\upsilon}}_{1,0,j}^{\ell}&\cdots&\hat{\boldsymbol{\upsilon}}_{1,P_{1},j}^{\ell}\end{array}\right], and \forall 1\leq j\leq P_{1}, \tilde{\boldsymbol{Y}}_{2,j}^{\ell}=\left[\begin{array}[]{ccc}\hat{\boldsymbol{\upsilon}}_{2,0,j}^{\ell}&\cdots&\hat{\boldsymbol{\upsilon}}_{2,P_{2},j}^{\ell}\end{array}\right].
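To illustrate the affine structure of Eq. 3.30, a sketch evaluating \tilde{\boldsymbol{Y}}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}\right) from hypothetical coefficient blocks; the basis `psi2` below is a monomial placeholder, not the actual orthonormal family, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, P1, P2, d = 3, 4, 5, 2                       # hypothetical sizes

Y0 = rng.standard_normal((n, P1 + 1))           # mean block Y_{1,0}
Yj = [rng.standard_normal((n, P1 + 1)) for _ in range(d)]   # blocks Y_{1,j}
sigma = np.array([2.0, 0.3])                    # singular values
theta_hat = rng.standard_normal((d, P2 + 1))    # gPC coefficients of theta_j

def psi2(xi2):
    # Placeholder basis in xi_2 (assumption: monomials, not orthonormal).
    return np.array([xi2 ** k for k in range(P2 + 1)])

def Y1_modular(xi2):
    # Eq. 3.30: Y1(xi2) = Y1_0 + sum_j sigma_j * Y1_j * theta_j(xi2)
    theta = theta_hat @ psi2(xi2)
    return Y0 + sum(sigma[j] * theta[j] * Yj[j] for j in range(d))

assert Y1_modular(0.5).shape == (n, P1 + 1)
```

The point of this structure is that the \boldsymbol{\xi}_{2}-dependence enters only through the d scalar functions \theta_{1,j}^{\ell}, which is what enables the dimension reduction below.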

#### 3.3.1 Reduced chaos expansions

At each iteration \ell, the expansions in Eq. 3.26 can be truncated by retaining d_{1}\equiv d_{1}^{\ell} and d_{2}\equiv d_{2}^{\ell} terms, to define the respective reduced chaos expansions [16] \boldsymbol{y}_{1}^{\ell,d_{1}}\equiv\boldsymbol{y}_{1}^{\ell,p,d_{1}} and \boldsymbol{y}_{2}^{\ell,d_{2}}\equiv\boldsymbol{y}_{2}^{\ell,p,d_{2}}: \forall\boldsymbol{\xi}_{1}\in\Xi_{1}, \boldsymbol{\xi}_{2}\in\Xi_{2},

 \displaystyle\boldsymbol{y}_{1}^{\ell,p}\left(\boldsymbol{\xi}_{1},\boldsymbol% {\xi}_{2}\right)\approx\boldsymbol{y}_{1}^{\ell,d_{1}}\left(\boldsymbol{\xi}_{% 1},\boldsymbol{\xi}_{2}\right) \displaystyle=\bar{\boldsymbol{y}}_{1}^{\ell}\left(\boldsymbol{\xi}_{1}\right)% +\sum_{j=1}^{d_{1}}\sigma_{1,j}^{\ell}\boldsymbol{\upsilon}_{1,j}^{\ell}\left(% \boldsymbol{\xi}_{1}\right)\theta_{1,j}^{\ell}\left(\boldsymbol{\xi}_{2}\right), \displaystyle\boldsymbol{y}_{2}^{\ell,p}\left(\boldsymbol{\xi}_{1},\boldsymbol% {\xi}_{2}\right)\approx\boldsymbol{y}_{2}^{\ell,d_{2}}\left(\boldsymbol{\xi}_{% 1},\boldsymbol{\xi}_{2}\right) \displaystyle=\bar{\boldsymbol{y}}_{2}^{\ell}\left(\boldsymbol{\xi}_{2}\right)% +\sum_{j=1}^{d_{2}}\sigma_{2,j}^{\ell}\boldsymbol{\upsilon}_{2,j}^{\ell}\left(% \boldsymbol{\xi}_{2}\right)\theta_{2,j}^{\ell}\left(\boldsymbol{\xi}_{1}\right). \hb@xt@.01(3.31)

Similarly, reduced dimensional approximations of the modular gPC coefficient matrices \tilde{\boldsymbol{Y}}_{1}^{\ell} and \tilde{\boldsymbol{Y}}_{2}^{\ell} can be defined as follows. \forall\boldsymbol{\xi}_{1}\in\Xi_{1}, \boldsymbol{\xi}_{2}\in\Xi_{2},

 \displaystyle\tilde{\boldsymbol{Y}}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}\right)\approx\tilde{\boldsymbol{Y}}_{1}^{\ell,d_{1}}\left(\boldsymbol{\xi}_{2}\right) \displaystyle=\tilde{\boldsymbol{Y}}_{1,0}^{\ell}+\sum_{j=1}^{d_{1}}\sigma_{1,j}^{\ell}\tilde{\boldsymbol{Y}}_{1,j}^{\ell}\theta_{1,j}^{\ell}\left(\boldsymbol{\xi}_{2}\right), \displaystyle\tilde{\boldsymbol{Y}}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}\right)\approx\tilde{\boldsymbol{Y}}_{2}^{\ell,d_{2}}\left(\boldsymbol{\xi}_{1}\right) \displaystyle=\tilde{\boldsymbol{Y}}_{2,0}^{\ell}+\sum_{j=1}^{d_{2}}\sigma_{2,j}^{\ell}\tilde{\boldsymbol{Y}}_{2,j}^{\ell}\theta_{2,j}^{\ell}\left(\boldsymbol{\xi}_{1}\right). \hb@xt@.01(3.32)

Let \boldsymbol{\theta}_{1}^{\ell}\equiv\left[\begin{array}[]{ccc}\theta_{1,1}^{\ell}&\cdots&\theta_{1,d_{1}}^{\ell}\end{array}\right]^{\mathbf{T}}:\Xi_{2}\rightarrow\Theta_{1}^{\ell}\subseteq\mathbb{R}^{d_{1}} and \boldsymbol{\theta}_{2}^{\ell}\equiv\left[\begin{array}[]{ccc}\theta_{2,1}^{\ell}&\cdots&\theta_{2,d_{2}}^{\ell}\end{array}\right]^{\mathbf{T}}:\Xi_{1}\rightarrow\Theta_{2}^{\ell}\subseteq\mathbb{R}^{d_{2}} denote the reduced dimensional random vectors that define \boldsymbol{y}_{1}^{\ell,d_{1}} and \boldsymbol{y}_{2}^{\ell,d_{2}} respectively, with probability density functions \nu_{1}^{\ell}:\Theta_{1}^{\ell}\rightarrow\mathbb{R}^{+} and \nu_{2}^{\ell}:\Theta_{2}^{\ell}\rightarrow\mathbb{R}^{+}. Therefore, \forall i\in\left\{1,2\right\}, there exists an affine map \tilde{\boldsymbol{Z}}_{i}^{\ell}:\Theta_{i}^{\ell}\rightarrow\mathbb{R}^{n\times\left(P_{i}+1\right)} which defines the composition \tilde{\boldsymbol{Y}}_{i}^{\ell}=\tilde{\boldsymbol{Z}}_{i}^{\ell}\circ\boldsymbol{\theta}_{i}^{\ell}, such that \forall\boldsymbol{\theta}=\left[\begin{array}[]{ccc}\theta_{1}&\cdots&\theta_{d_{i}}\end{array}\right]^{\mathbf{T}}\in\Theta_{i}^{\ell},

 \tilde{\boldsymbol{Z}}_{i}^{\ell}\left(\boldsymbol{\theta}\right)=\tilde{% \boldsymbol{Z}}_{i}^{\ell}\left(\theta_{1},\ldots,\theta_{d_{i}}\right)=\tilde% {\boldsymbol{Z}}_{i,0}^{\ell}+\sum_{j=1}^{d_{i}}\tilde{\boldsymbol{Z}}_{i,j}^{% \ell}\theta_{j},

where \tilde{\boldsymbol{Z}}_{i,0}^{\ell}=\tilde{\boldsymbol{Y}}_{i,0}^{\ell} and \forall 1\leq j\leq d_{i}, \tilde{\boldsymbol{Z}}_{i,j}^{\ell}=\sigma_{i,j}^{\ell}\tilde{\boldsymbol{Y}}_% {i,j}^{\ell}.

Subsequently, we extract the following submatrix blocks from \tilde{\boldsymbol{Z}}_{1,0}^{\ell},\ldots,\tilde{\boldsymbol{Z}}_{1,d_{1}}^{\ell} and \tilde{\boldsymbol{Z}}_{2,0}^{\ell},\ldots,\tilde{\boldsymbol{Z}}_{2,d_{2}}^{\ell}. \forall 0\leq j\leq d_{1}

 \tilde{\boldsymbol{Z}}_{1,j}^{\ell}=\left[\begin{array}[]{c}\tilde{\boldsymbol% {U}}_{1,1,j}^{\ell}\\ \tilde{\boldsymbol{U}}_{2,1,j}^{\ell}\end{array}\right], \hb@xt@.01(3.33)

and \forall 0\leq j\leq d_{2}

 \tilde{\boldsymbol{Z}}_{2,j}^{\ell}=\left[\begin{array}[]{c}\tilde{\boldsymbol% {U}}_{2,2,j}^{\ell}\\ \tilde{\boldsymbol{U}}_{1,2,j}^{\ell}\end{array}\right], \hb@xt@.01(3.34)

where \forall i,j\in\left\{1,2\right\}, 0\leq k\leq d_{i}, \tilde{\boldsymbol{U}}_{i,j,k}^{\ell}\in\mathbb{R}^{n_{i}\times\left(P_{j}+1% \right)}.

#### 3.3.2 Selecting the reduced dimensions

We prescribe tolerances \epsilon_{1,\mathrm{dim}}>0 and \epsilon_{2,\mathrm{dim}}>0, such that at each iteration \ell, d_{1} and d_{2} are respectively selected as the minimum k_{1}\in\mathbb{N} and k_{2}\in\mathbb{N}, which satisfy the following inequalities.

 \displaystyle\sqrt{\sum_{j=k_{1}+1}^{P_{2}}\left(\sigma_{1,j}^{\ell}\right)^{2}} \displaystyle\leq\epsilon_{1,\mathrm{dim}}\sqrt{\sum_{j=1}^{P_{2}}\left(\sigma% _{1,j}^{\ell}\right)^{2}}, \displaystyle\sqrt{\sum_{j=k_{2}+1}^{P_{1}}\left(\sigma_{2,j}^{\ell}\right)^{2}} \displaystyle\leq\epsilon_{2,\mathrm{dim}}\sqrt{\sum_{j=1}^{P_{1}}\left(\sigma% _{2,j}^{\ell}\right)^{2}}. \hb@xt@.01(3.35)

If d_{1}<s_{2}, then a reduced dimensional approximation of the input data that enters module operator \tilde{\boldsymbol{M}}_{1} exists, with an approximation error \mathcal{O}\left(\epsilon_{1,\mathrm{dim}}\right). Similarly, if d_{2}<s_{1}, then a reduced dimensional approximation of the input data that enters module operator \tilde{\boldsymbol{M}}_{2} exists, with an approximation error \mathcal{O}\left(\epsilon_{2,\mathrm{dim}}\right). Theorem 1 proves these error bounds.
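The selection rule in Eq. 3.35 is a relative tail-energy criterion on the singular values; a minimal sketch of the search for the smallest admissible k:

```python
import numpy as np

def select_reduced_dimension(sigma, eps_dim):
    """Smallest k such that the tail energy sqrt(sum_{j>k} sigma_j^2)
    is at most eps_dim times the total energy sqrt(sum_j sigma_j^2)."""
    total = np.sqrt(np.sum(sigma ** 2))
    for k in range(len(sigma) + 1):
        tail = np.sqrt(np.sum(sigma[k:] ** 2))
        if tail <= eps_dim * total:
            return k
    return len(sigma)

# A fast singular-value decay yields a small reduced dimension d:
sigma = np.array([1.0, 0.5, 1e-3, 1e-6])
assert select_reduced_dimension(sigma, eps_dim=1e-2) == 2
```

Since the singular values are sorted in decreasing order, this prefix search is equivalent to retaining the dominant modes of the separable KL expansion.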

#### Theorem 1

\forall i\in\left\{1,2\right\} and iteration \ell, the approximation \boldsymbol{y}_{i}^{\ell,d_{i}} satisfies the inequality

 \sqrt{\int_{\Xi}\left\|\boldsymbol{y}_{i}^{\ell}\left(\boldsymbol{\xi}\right)-% \boldsymbol{y}_{i}^{\ell,d_{i}}\left(\boldsymbol{\xi}\right)\right\|_{% \boldsymbol{\Gamma}_{i}}^{2}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}}% \leq\epsilon_{i,\mathrm{dim}}\sqrt{\int_{\Xi}\left\|\boldsymbol{y}_{i}^{\ell}% \left(\boldsymbol{\xi}\right)-\tilde{\boldsymbol{y}}_{i}^{\ell}\left(% \boldsymbol{\xi}_{i}\left(\boldsymbol{\xi}\right)\right)\right\|_{\boldsymbol{% \Gamma}_{i}}^{2}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}} \hb@xt@.01(3.36)

#### 3.3.3 Proof

For i=1, the square of the left hand side expression in Eq. 3.36 can be written as follows.

 \displaystyle\int_{\Xi_{1}\times\Xi_{2}}\left\|\boldsymbol{y}_{1}^{\ell}\left(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2}\right)-\boldsymbol{y}_{1}^{\ell,d_{1}}\left(\boldsymbol{\xi}_{1},\boldsymbol{\xi}_{2}\right)\right\|_{\boldsymbol{\Gamma}_{1}}^{2}\mu_{1}\left(\boldsymbol{\xi}_{1}\right)\mu_{2}\left(\boldsymbol{\xi}_{2}\right)d\boldsymbol{\xi}_{1}d\boldsymbol{\xi}_{2} \displaystyle=\int_{\Xi_{1}\times\Xi_{2}}\left\|\sum_{j=d_{1}+1}^{P_{2}}\sigma_{1,j}^{\ell}\boldsymbol{\upsilon}_{1,j}^{\ell}\left(\boldsymbol{\xi}_{1}\right)\theta_{1,j}^{\ell}\left(\boldsymbol{\xi}_{2}\right)\right\|_{\boldsymbol{\Gamma}_{1}}^{2}\mu_{1}\left(\boldsymbol{\xi}_{1}\right)\mu_{2}\left(\boldsymbol{\xi}_{2}\right)d\boldsymbol{\xi}_{1}d\boldsymbol{\xi}_{2} \displaystyle=\int_{\Xi_{1}\times\Xi_{2}}\left\|\sum_{j=d_{1}+1}^{P_{2}}\sigma_{1,j}^{\ell}\tilde{\boldsymbol{Y}}_{1,j}^{\ell}\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}\right)\hat{\boldsymbol{\theta}}_{1,j}^{\ell}\boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_{2}\right)\right\|_{\boldsymbol{\Gamma}_{1}}^{2}\mu_{1}\left(\boldsymbol{\xi}_{1}\right)\mu_{2}\left(\boldsymbol{\xi}_{2}\right)d\boldsymbol{\xi}_{1}d\boldsymbol{\xi}_{2}
 \displaystyle=\int_{\Xi_{1}\times\Xi_{2}}\left(\boldsymbol{\psi}_{2}\left(% \boldsymbol{\xi}_{2}\right)^{\mathbf{T}}\left[\begin{array}[]{c}\hat{% \boldsymbol{\theta}}_{1,d_{1}+1}\\ \vdots\\ \hat{\boldsymbol{\theta}}_{1,P_{2}}\end{array}\right]^{\mathbf{T}}\left[\begin% {array}[]{ccc}\sigma_{1,d_{1}+1}^{\ell}\\ &\ddots\\ &&\sigma_{1,P_{2}}^{\ell}\end{array}\right]\right. \displaystyle\left.\times\left[\begin{array}[]{ccc}\tilde{\boldsymbol{\upsilon% }}_{1,0,1}^{\ell}&\cdots&\tilde{\boldsymbol{\upsilon}}_{1,0,P_{2}}^{\ell}\\ \vdots&&\vdots\\ \tilde{\boldsymbol{\upsilon}}_{1,P_{1}^{\prime},1}^{\ell}&\cdots&\tilde{% \boldsymbol{\upsilon}}_{1,P_{1}^{\prime},P_{2}}^{\ell}\\ &\boldsymbol{0}\end{array}\right]^{\mathbf{T}}\left(\boldsymbol{\psi}_{1}\left% (\boldsymbol{\xi}_{1}\right)\otimes\boldsymbol{\Gamma}_{1}^{-\frac{1}{2}}% \right)\right)\boldsymbol{\Gamma}_{1}\left(\left(\boldsymbol{\psi}_{1}\left(% \boldsymbol{\xi}_{1}\right)^{\mathbf{T}}\otimes\boldsymbol{\Gamma}_{1}^{-\frac% {1}{2}}\right)\right. \displaystyle\left.\times\left[\begin{array}[]{ccc}\tilde{\boldsymbol{\upsilon% }}_{1,0,1}^{\ell}&\cdots&\tilde{\boldsymbol{\upsilon}}_{1,0,P_{2}}^{\ell}\\ \vdots&&\vdots\\ \tilde{\boldsymbol{\upsilon}}_{1,P_{1}^{\prime},1}^{\ell}&\cdots&\tilde{% \boldsymbol{\upsilon}}_{1,P_{1}^{\prime},P_{2}}^{\ell}\\ &\boldsymbol{0}\end{array}\right]\left[\begin{array}[]{ccc}\sigma_{1,d_{1}+1}^% {\ell}\\ &\ddots\\ &&\sigma_{1,P_{2}}^{\ell}\end{array}\right]\left[\begin{array}[]{c}\hat{% \boldsymbol{\theta}}_{1,d_{1}+1}\\ \vdots\\ \hat{\boldsymbol{\theta}}_{1,P_{2}}\end{array}\right]\right. 
\displaystyle\left.\times\boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_{2}\right% )\right)\mu_{1}\left(\boldsymbol{\xi}_{1}\right)\mu_{2}\left(\boldsymbol{\xi}_% {2}\right)d\boldsymbol{\xi}_{1}d\boldsymbol{\xi}_{2} \displaystyle=\mathrm{trace}\left(\left(\int_{\Xi_{2}}\boldsymbol{\psi}_{2}% \left(\boldsymbol{\xi}_{2}\right)\boldsymbol{\psi}_{2}\left(\boldsymbol{\xi}_{% 2}\right)^{\mathbf{T}}\mu_{2}\left(\boldsymbol{\xi}_{2}\right)d\boldsymbol{\xi% }_{2}\right)\left[\begin{array}[]{c}\hat{\boldsymbol{\theta}}_{1,d_{1}+1}\\ \vdots\\ \hat{\boldsymbol{\theta}}_{1,P_{2}}\end{array}\right]^{\mathbf{T}}\right. \displaystyle\left.\times\left[\begin{array}[]{ccc}\sigma_{1,d_{1}+1}^{\ell}\\ &\ddots\\ &&\sigma_{1,P_{2}}^{\ell}\end{array}\right]\left[\begin{array}[]{ccc}\tilde{% \boldsymbol{\upsilon}}_{1,0,1}^{\ell}&\cdots&\tilde{\boldsymbol{\upsilon}}_{1,% 0,P_{2}}^{\ell}\\ \vdots&&\vdots\\ \tilde{\boldsymbol{\upsilon}}_{1,P_{1}^{\prime},1}^{\ell}&\cdots&\tilde{% \boldsymbol{\upsilon}}_{1,P_{1}^{\prime},P_{2}}^{\ell}\\ &\boldsymbol{0}\end{array}\right]^{\mathbf{T}}\right. \displaystyle\left.\times\left(\left(\int_{\Xi_{1}}\boldsymbol{\psi}_{1}\left(% \boldsymbol{\xi}_{1}\right)\boldsymbol{\psi}_{1}\left(\boldsymbol{\xi}_{1}% \right)^{\mathbf{T}}\mu_{1}\left(\boldsymbol{\xi}_{1}\right)d\boldsymbol{\xi}_% {1}\right)\otimes\boldsymbol{I}_{n}\right)\left[\begin{array}[]{ccc}\tilde{% \boldsymbol{\upsilon}}_{1,0,1}^{\ell}&\cdots&\tilde{\boldsymbol{\upsilon}}_{1,% 0,P_{2}}^{\ell}\\ \vdots&&\vdots\\ \tilde{\boldsymbol{\upsilon}}_{1,P_{1}^{\prime},1}^{\ell}&\cdots&\tilde{% \boldsymbol{\upsilon}}_{1,P_{1}^{\prime},P_{2}}^{\ell}\\ &\boldsymbol{0}\end{array}\right]\right. 
\displaystyle\left.\times\left[\begin{array}[]{ccc}\sigma_{1,d_{1}+1}^{\ell}\\ &\ddots\\ &&\sigma_{1,P_{2}}^{\ell}\end{array}\right]\left[\begin{array}[]{c}\hat{% \boldsymbol{\theta}}_{1,d_{1}+1}\\ \vdots\\ \hat{\boldsymbol{\theta}}_{1,P_{2}}\end{array}\right]\right)=\sum_{j=d_{1}+1}^% {P_{2}}\left(\sigma_{1,j}^{\ell}\right)^{2}. \hb@xt@.01(3.37)

Similarly, for the right hand side expression, we can show that

 \int_{\Xi_{1}\times\Xi_{2}}\left\|\boldsymbol{y}_{1}^{\ell}\left(\boldsymbol{% \xi}_{1},\boldsymbol{\xi}_{2}\right)-\tilde{\boldsymbol{y}}_{1}^{\ell}\left(% \boldsymbol{\xi}_{1}\right)\right\|_{\boldsymbol{\Gamma}_{1}}^{2}\mu_{1}\left(% \boldsymbol{\xi}_{1}\right)\mu_{2}\left(\boldsymbol{\xi}_{2}\right)d% \boldsymbol{\xi}_{1}d\boldsymbol{\xi}_{2}=\sum_{j=1}^{P_{2}}\left(\sigma_{1,j}% ^{\ell}\right)^{2} \hb@xt@.01(3.38)

By substituting Eq. 3.37 and Eq. 3.38 into Eq. 3.35, we arrive at Eq. 3.36. The inequality for i=2 can be proven analogously. \square

#### Remarks

If the alternative (general) formulation of the coupled stochastic system given in Eq. 2.12 is considered, the dimension reduction routine remains exactly as described here, except that the input data must be formulated as \boldsymbol{y}_{1}^{\ell}\equiv\left[\boldsymbol{u}_{1}^{\ell};\boldsymbol{v}_{2}^{\ell}\right] and \boldsymbol{y}_{2}^{\ell}\equiv\left[\boldsymbol{u}_{2}^{\ell};\boldsymbol{v}_{1}^{\ell+1}\right], respectively. Moreover, if any of the right hand side summations in Eq. 3.35 is zero, then a corresponding zero-dimensional reduced approximation of the input data is constructed. We will now describe the order reduction procedure.

### 3.4 Order reduction

\forall i\in\left\{1,2\right\} and iteration \ell, an approximation of the input data in module operator \tilde{\boldsymbol{M}}_{i} can be constructed in the reduced dimensional stochastic space \Theta_{i}^{\ell}, as described in §3.3.

Consequently, for any square-integrable, vector-valued function \boldsymbol{u}:\Theta_{i}^{\ell}\rightarrow\mathbb{R}^{n}, a reduced gPC approximation \boldsymbol{u}^{\tilde{p}_{i}}\equiv\boldsymbol{u}^{p,d_{i},\tilde{p}_{i}} of order \tilde{p}_{i}\equiv\tilde{p}_{i}^{\ell}\geq 0 can be formulated as follows. \forall\boldsymbol{\theta}\in\Theta_{i}^{\ell},

 \boldsymbol{u}\left(\boldsymbol{\theta}\right)\approx\boldsymbol{u}^{\tilde{p}_{i}}\left(\boldsymbol{\theta}\right)=\sum_{j=0}^{\tilde{P}_{i}}\tilde{\boldsymbol{u}}_{j}\phi_{i,j}^{\ell}\left(\boldsymbol{\theta}\right)=\tilde{\boldsymbol{U}}^{\tilde{p}_{i}}\boldsymbol{\phi}_{i}^{\ell,\tilde{p}_{i}}\left(\boldsymbol{\theta}\right), (3.39)

where \left\{\phi_{i,j}^{\ell}\equiv\phi_{i,j}^{\ell,p}:\Theta_{i}^{\ell}\rightarrow\mathbb{R}\right\}_{j\geq 0} denotes the set of distinct \nu_{i}^{\ell}-orthonormal polynomials, \tilde{\boldsymbol{U}}^{\tilde{p}_{i}}\equiv\tilde{\boldsymbol{U}}^{p,\tilde{p}_{i}}=\left[\begin{array}{ccc}\tilde{\boldsymbol{u}}_{0}&\cdots&\tilde{\boldsymbol{u}}_{\tilde{P}_{i}}\end{array}\right]\in\mathbb{R}^{n\times\left(\tilde{P}_{i}+1\right)} denotes the reduced order gPC coefficient matrix and \boldsymbol{\phi}_{i}^{\ell,\tilde{p}_{i}}\equiv\boldsymbol{\phi}_{i}^{\ell,p,\tilde{p}_{i}}=\left[\begin{array}{ccc}\phi_{i,0}^{\ell}&\cdots&\phi_{i,\tilde{P}_{i}}^{\ell}\end{array}\right]^{\mathbf{T}}:\Theta_{i}^{\ell}\rightarrow\mathbb{R}^{\tilde{P}_{i}+1} denotes the reduced order basis vector. If the formulation of the gPC approximation \boldsymbol{u}^{\tilde{p}_{i}} is based on an isotropic, total-degree truncation of the infinite polynomial series, then \tilde{P}_{i}+1={\displaystyle\frac{\left(\tilde{p}_{i}+d_{i}\right)!}{\tilde{p}_{i}!d_{i}!}}. Moreover, we assume that \tilde{p}_{1},\tilde{p}_{2}\leq p, which implies that \tilde{P}_{1}\leq P_{2} and \tilde{P}_{2}\leq P_{1}.

Since the coordinate directions in \Theta_{i}^{\ell} are not necessarily statistically independent, we cannot simply compute the elements of \boldsymbol{\phi}_{i}^{\ell,\tilde{p}_{i}} as products of univariate polynomials. Instead, we propose another SVD based numerical construction method.

#### 3.4.1 Reduced order basis construction

\forall i\in\left\{1,2\right\}, iteration \ell and degree index \boldsymbol{\alpha}=\left[\begin{array}{ccc}\alpha_{1}&\cdots&\alpha_{d_{i}}\end{array}\right]^{\mathbf{T}}\in\mathbb{N}_{0}^{d_{i}}, let m_{i,\boldsymbol{\alpha}}^{\ell}:\Theta_{i}^{\ell}\rightarrow\mathbb{R} denote the monomial function with \mathrm{deg}\left(m_{i,\boldsymbol{\alpha}}^{\ell}\right)=\left|\boldsymbol{\alpha}\right|=\alpha_{1}+\cdots+\alpha_{d_{i}}, such that \forall\boldsymbol{\theta}=\left[\begin{array}{ccc}\theta_{1}&\cdots&\theta_{d_{i}}\end{array}\right]^{\mathbf{T}}\in\Theta_{i}^{\ell},

 m_{i,\boldsymbol{\alpha}}^{\ell}\left(\boldsymbol{\theta}\right)=\prod_{j=1}^{d_{i}}\theta_{j}^{\alpha_{j}}. (3.40)

The number of such monomials with total degree \leq\tilde{p}_{i} is equal to \tilde{P}_{i}+1.
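This count can be verified numerically. The short sketch below is our own illustration (the function name `total_degree_indices` is ours): it enumerates the multi-indices of Eq. 3.40 and checks them against the closed-form binomial count.

```python
from itertools import product
from math import comb

def total_degree_indices(d, p):
    """All multi-indices alpha in N_0^d with |alpha| = alpha_1 + ... + alpha_d <= p."""
    return [a for a in product(range(p + 1), repeat=d) if sum(a) <= p]

# the number of monomials of total degree <= p in d variables is (p + d)! / (p! d!)
indices = total_degree_indices(3, 2)
assert len(indices) == comb(2 + 3, 3)   # 10 monomials for d = 3, p = 2
```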

Let \left\{\boldsymbol{\alpha}_{j}:\left|\boldsymbol{\alpha}_{j}\right|\leq\tilde{p}_{i}\right\}_{j=0}^{\tilde{P}_{i}} denote the corresponding set of indices and \boldsymbol{m}_{i}^{\ell,\tilde{p}_{i}}\equiv\boldsymbol{m}_{i}^{\ell,p,\tilde{p}_{i}}=\left[\begin{array}{ccc}m_{i,\boldsymbol{\alpha}_{0}}^{\ell}&\cdots&m_{i,\boldsymbol{\alpha}_{\tilde{P}_{i}}}^{\ell}\end{array}\right]^{\mathbf{T}}:\Theta_{i}^{\ell}\rightarrow\mathbb{R}^{\tilde{P}_{i}+1} denote the monomial vector. In general, the respective basis vector \boldsymbol{\phi}_{i}^{\ell,\tilde{p}_{i}} can be computed as follows. \forall\boldsymbol{\theta}\in\Theta_{i}^{\ell},

 \boldsymbol{\phi}_{i}^{\ell,\tilde{p}_{i}}\left(\boldsymbol{\theta}\right)=\left(\boldsymbol{L}_{i}^{\ell,\tilde{p}_{i}}\right)^{-1}\boldsymbol{m}_{i}^{\ell,\tilde{p}_{i}}\left(\boldsymbol{\theta}\right),

where \boldsymbol{L}_{i}^{\ell,\tilde{p}_{i}}\in\mathbb{R}^{\left(\tilde{P}_{i}+1\right)\times\left(\tilde{P}_{i}+1\right)} is the lower triangular matrix which defines the Cholesky factorization of the Hankel matrix \boldsymbol{H}_{i}^{\ell,\tilde{p}_{i}}\equiv\boldsymbol{H}_{i}^{\ell,p,\tilde{p}_{i}}=\boldsymbol{L}_{i}^{\ell,\tilde{p}_{i}}\left(\boldsymbol{L}_{i}^{\ell,\tilde{p}_{i}}\right)^{\mathbf{T}}:

 \displaystyle\boldsymbol{H}_{i}^{\ell,\tilde{p}_{i}}=\int_{\Theta_{i}^{\ell}}\boldsymbol{m}_{i}^{\ell,\tilde{p}_{i}}\left(\boldsymbol{\theta}\right)\boldsymbol{m}_{i}^{\ell,\tilde{p}_{i}}\left(\boldsymbol{\theta}\right)^{\mathbf{T}}\nu_{i}^{\ell}\left(\boldsymbol{\theta}\right)d\boldsymbol{\theta}=\begin{cases}\int_{\Xi_{2}}\boldsymbol{m}_{1}^{\ell,\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}\right)\right)\boldsymbol{m}_{1}^{\ell,\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}\right)\right)^{\mathbf{T}}\mu_{2}\left(\boldsymbol{\xi}_{2}\right)d\boldsymbol{\xi}_{2}&i=1\\ \int_{\Xi_{1}}\boldsymbol{m}_{2}^{\ell,\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}\right)\right)\boldsymbol{m}_{2}^{\ell,\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}\right)\right)^{\mathbf{T}}\mu_{1}\left(\boldsymbol{\xi}_{1}\right)d\boldsymbol{\xi}_{1}&i=2\end{cases}, (3.41)

which in turn can be approximated using the respective quadrature rule as follows. \tilde{\boldsymbol{H}}_{i}^{\ell,\tilde{p}_{i}}\approx\boldsymbol{H}_{i}^{\ell,\tilde{p}_{i}}:

 \tilde{\boldsymbol{H}}_{i}^{\ell,\tilde{p}_{i}}=\begin{cases}{\displaystyle\sum_{j=1}^{Q_{2}}}w_{2}^{\left(j\right)}\boldsymbol{m}_{1}^{\ell,\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}^{\left(j\right)}\right)\right)\boldsymbol{m}_{1}^{\ell,\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}^{\left(j\right)}\right)\right)^{\mathbf{T}}&i=1\\ {\displaystyle\sum_{j=1}^{Q_{1}}}w_{1}^{\left(j\right)}\boldsymbol{m}_{2}^{\ell,\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}^{\left(j\right)}\right)\right)\boldsymbol{m}_{2}^{\ell,\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}^{\left(j\right)}\right)\right)^{\mathbf{T}}&i=2\end{cases}. (3.42)

However, the possibility of negative weights in the quadrature rules can lead to negative or zero eigenvalues in the approximation \tilde{\boldsymbol{H}}_{i}^{\ell,\tilde{p}_{i}}, and consequently preclude the Cholesky factorization. Therefore, we instead compute the rank-reduced SVD of \tilde{\boldsymbol{H}}_{i}^{\ell,\tilde{p}_{i}}:

 \tilde{\boldsymbol{H}}_{i}^{\ell,\tilde{p}_{i}}=\tilde{\boldsymbol{V}}_{i}^{\ell,\tilde{p}_{i}}\tilde{\boldsymbol{S}}_{i}^{\ell,\tilde{p}_{i}}\tilde{\boldsymbol{\Sigma}}_{i}^{\ell,\tilde{p}_{i}}\left(\tilde{\boldsymbol{V}}_{i}^{\ell,\tilde{p}_{i}}\right)^{\mathbf{T}}, (3.43)

where \tilde{\boldsymbol{V}}_{i}^{\ell,\tilde{p}_{i}},\tilde{\boldsymbol{\Sigma}}_{i}^{\ell,\tilde{p}_{i}} denote the usual decomposition matrices and \tilde{\boldsymbol{S}}_{i}^{\ell,\tilde{p}_{i}} denotes a diagonal matrix with \pm 1 as its diagonal elements. Since the approximation \tilde{\boldsymbol{H}}_{i}^{\ell,\tilde{p}_{i}} is symmetric, such a decomposition will always exist [30].

Subsequently, the basis vector \boldsymbol{\phi}_{i}^{\ell,\tilde{p}_{i}} is formulated as follows. \forall\boldsymbol{\theta}\in\Theta_{i}^{\ell},

 \boldsymbol{\phi}_{i}^{\ell,\tilde{p}_{i}}\left(\boldsymbol{\theta}\right)=\left(\tilde{\boldsymbol{\Sigma}}_{i}^{\ell,\tilde{p}_{i}}\right)^{-\frac{1}{2}}\left(\tilde{\boldsymbol{V}}_{i}^{\ell,\tilde{p}_{i}}\right)^{\mathbf{T}}\boldsymbol{m}_{i}^{\ell,\tilde{p}_{i}}\left(\boldsymbol{\theta}\right). (3.44)

Here, \boldsymbol{\phi}_{1}^{\ell,\tilde{p}_{1}} and \boldsymbol{\phi}_{2}^{\ell,\tilde{p}_{2}} satisfy the discrete orthogonality condition

 \displaystyle{\displaystyle\sum_{j=1}^{Q_{2}}}w_{2}^{\left(j\right)}\boldsymbol{\phi}_{1}^{\ell,\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}^{\left(j\right)}\right)\right)\boldsymbol{\phi}_{1}^{\ell,\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}^{\left(j\right)}\right)\right)^{\mathbf{T}}\displaystyle=\boldsymbol{S}_{1}^{\ell,\tilde{p}_{1}}, \displaystyle{\displaystyle\sum_{j=1}^{Q_{1}}}w_{1}^{\left(j\right)}\boldsymbol{\phi}_{2}^{\ell,\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}^{\left(j\right)}\right)\right)\boldsymbol{\phi}_{2}^{\ell,\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}^{\left(j\right)}\right)\right)^{\mathbf{T}}\displaystyle=\boldsymbol{S}_{2}^{\ell,\tilde{p}_{2}}.

Therefore, the reduced order gPC coefficient matrix can be approximated as follows.

 \displaystyle\tilde{\boldsymbol{U}}^{\tilde{p}_{i}}\displaystyle=\int_{\Theta_{i}^{\ell}}\boldsymbol{u}\left(\boldsymbol{\theta}\right)\boldsymbol{\phi}_{i}^{\ell,\tilde{p}_{i}}\left(\boldsymbol{\theta}\right)^{\mathbf{T}}\boldsymbol{S}_{i}^{\ell,\tilde{p}_{i}}\nu_{i}^{\ell}\left(\boldsymbol{\theta}\right)d\boldsymbol{\theta}\displaystyle=\begin{cases}\int_{\Xi_{2}}\boldsymbol{u}\left(\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}\right)\right)\boldsymbol{\phi}_{1}^{\ell,\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}\right)\right)^{\mathbf{T}}\boldsymbol{S}_{1}^{\ell,\tilde{p}_{1}}\mu_{2}\left(\boldsymbol{\xi}_{2}\right)d\boldsymbol{\xi}_{2}&i=1\\ \int_{\Xi_{1}}\boldsymbol{u}\left(\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}\right)\right)\boldsymbol{\phi}_{2}^{\ell,\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}\right)\right)^{\mathbf{T}}\boldsymbol{S}_{2}^{\ell,\tilde{p}_{2}}\mu_{1}\left(\boldsymbol{\xi}_{1}\right)d\boldsymbol{\xi}_{1}&i=2\end{cases}\displaystyle\approx\begin{cases}{\displaystyle\sum_{j=1}^{Q_{2}}}w_{2}^{\left(j\right)}\boldsymbol{u}\left(\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}^{\left(j\right)}\right)\right)\boldsymbol{\phi}_{1}^{\ell,\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}^{\left(j\right)}\right)\right)^{\mathbf{T}}\boldsymbol{S}_{1}^{\ell,\tilde{p}_{1}}&i=1\\ {\displaystyle\sum_{j=1}^{Q_{1}}}w_{1}^{\left(j\right)}\boldsymbol{u}\left(\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}^{\left(j\right)}\right)\right)\boldsymbol{\phi}_{2}^{\ell,\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}^{\left(j\right)}\right)\right)^{\mathbf{T}}\boldsymbol{S}_{2}^{\ell,\tilde{p}_{2}}&i=2\end{cases}. (3.45)

Eq. 3.45 implies that \left\{\left(\boldsymbol{\theta}_{1}^{\left(j\right)}=\boldsymbol{\theta}_{1}^{\ell}\left(\boldsymbol{\xi}_{2}^{\left(j\right)}\right),w_{2}^{\left(j\right)}\right)\right\}_{j=1}^{Q_{2}} is a quadrature rule in \Theta_{1}^{\ell} with level \geq\tilde{p}_{1} and \left\{\left(\boldsymbol{\theta}_{2}^{\left(j\right)}=\boldsymbol{\theta}_{2}^{\ell}\left(\boldsymbol{\xi}_{1}^{\left(j\right)}\right),w_{1}^{\left(j\right)}\right)\right\}_{j=1}^{Q_{1}} is a quadrature rule in \Theta_{2}^{\ell} with level \geq\tilde{p}_{2}. However, since Q_{1} grows exponentially with respect to s_{1} and p, and Q_{2} grows exponentially with respect to s_{2} and p, these quadrature rules are not optimal with respect to the reduced dimensions d_{1},d_{2} and reduced orders \tilde{p}_{1},\tilde{p}_{2}. Therefore, we propose a QR factorization based method to construct the computationally optimal quadrature rules.
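Before turning to the optimal rules, the constructions of Eqs. 3.42-3.45 can be sketched in a few lines of NumPy. This is our own illustration under stated assumptions (a generic node array `theta` with weights `w`, a full-rank moment matrix, and helper names of our choosing): a symmetric eigendecomposition plays the role of the signed SVD of Eq. 3.43, and the returned basis satisfies the discrete orthogonality condition stated below Eq. 3.44.

```python
import numpy as np
from itertools import product

def reduced_basis(theta, w, p_tilde):
    """Construct discretely orthonormal polynomials on nodes theta with weights w.

    theta : (Q, d) array of quadrature nodes in the reduced space Theta
    w     : (Q,) array of quadrature weights (negative weights are allowed)
    Returns (phi, S): phi maps nodes to a (P+1, Q') array of basis evaluations,
    and S is the diagonal +/-1 signature matrix of Eq. 3.43."""
    Q, d = theta.shape
    # multi-indices of total degree <= p_tilde (Eq. 3.40)
    alphas = [a for a in product(range(p_tilde + 1), repeat=d) if sum(a) <= p_tilde]

    def monomials(x):
        x = np.atleast_2d(x)
        return np.stack([np.prod(x ** np.array(a, dtype=float), axis=1) for a in alphas])

    M = monomials(theta)          # (P+1, Q) monomial evaluations
    H = (M * w) @ M.T             # quadrature moment matrix, Eq. 3.42
    lam, V = np.linalg.eigh(H)    # symmetric eigendecomposition (full rank assumed)
    S = np.diag(np.sign(lam))     # signature matrix of Eq. 3.43
    scale = np.diag(1.0 / np.sqrt(np.abs(lam)))

    def phi(x):
        return scale @ V.T @ monomials(x)   # Eq. 3.44

    return phi, S
```

For strictly positive weights, S is the identity and the weighted sum of outer products of `phi` evaluations recovers it, mirroring the discrete orthogonality condition; a rank check on the eigenvalues would be needed for degenerate node sets.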

#### 3.4.2 Optimal quadrature rule construction

At each iteration \ell, the respective optimally sparse quadrature rules in \Theta_{1}^{\ell} and \Theta_{2}^{\ell}, denoted as \left\{\left(\boldsymbol{\theta}_{1}^{\left(j\right)},\tilde{w}_{2}^{\left(j\right)}\right)\right\}_{j=1}^{Q_{2}} and \left\{\left(\boldsymbol{\theta}_{2}^{\left(j\right)},\tilde{w}_{1}^{\left(j\right)}\right)\right\}_{j=1}^{Q_{1}}, with levels \tilde{p}_{1} and \tilde{p}_{2}, would contain the minimum possible number of non-zero weights, while still numerically integrating all polynomials of total degree \leq 2\tilde{p}_{1} and \leq 2\tilde{p}_{2}, respectively, up to machine precision. This requirement is dictated by the projection formula in Eq. 3.45.

\forall i\in\left\{1,2\right\}, let \boldsymbol{w}_{i}=\left[\begin{array}{ccc}w_{i}^{\left(1\right)}&\cdots&w_{i}^{\left(Q_{i}\right)}\end{array}\right]^{\mathbf{T}}\in\mathbb{R}^{Q_{i}} denote the dense weight vector and \tilde{\boldsymbol{w}}_{i}=\left[\begin{array}{ccc}\tilde{w}_{i}^{\left(1\right)}&\cdots&\tilde{w}_{i}^{\left(Q_{i}\right)}\end{array}\right]^{\mathbf{T}}\in\mathbb{R}^{Q_{i}} denote the optimally sparse weight vector. Therefore, \tilde{\boldsymbol{w}}_{1} and \tilde{\boldsymbol{w}}_{2} each solve an \ell_{0}-minimization problem as follows.

 \displaystyle\tilde{\boldsymbol{w}}_{1}\displaystyle=\arg\min_{\boldsymbol{\omega}\in\mathbb{R}^{Q_{1}}}\left\|\boldsymbol{\omega}\right\|_{0}:\boldsymbol{M}_{2}^{\ell,2\tilde{p}_{2}}\boldsymbol{\omega}=\boldsymbol{M}_{2}^{\ell,2\tilde{p}_{2}}\boldsymbol{w}_{1}, \displaystyle\tilde{\boldsymbol{w}}_{2}\displaystyle=\arg\min_{\boldsymbol{\omega}\in\mathbb{R}^{Q_{2}}}\left\|\boldsymbol{\omega}\right\|_{0}:\boldsymbol{M}_{1}^{\ell,2\tilde{p}_{1}}\boldsymbol{\omega}=\boldsymbol{M}_{1}^{\ell,2\tilde{p}_{1}}\boldsymbol{w}_{2}, (3.46)

where

 \displaystyle\boldsymbol{M}_{1}^{\ell,2\tilde{p}_{1}}\equiv\boldsymbol{M}_{1}^{\ell,p,2\tilde{p}_{1}}\displaystyle=\left[\begin{array}{ccc}\boldsymbol{m}_{1}^{\ell,2\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\left(1\right)}\right)&\cdots&\boldsymbol{m}_{1}^{\ell,2\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\left(Q_{2}\right)}\right)\end{array}\right]\in\mathbb{R}^{\left(\tilde{N}_{1}+1\right)\times Q_{2}}, \displaystyle\boldsymbol{M}_{2}^{\ell,2\tilde{p}_{2}}\equiv\boldsymbol{M}_{2}^{\ell,p,2\tilde{p}_{2}}\displaystyle=\left[\begin{array}{ccc}\boldsymbol{m}_{2}^{\ell,2\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\left(1\right)}\right)&\cdots&\boldsymbol{m}_{2}^{\ell,2\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\left(Q_{1}\right)}\right)\end{array}\right]\in\mathbb{R}^{\left(\tilde{N}_{2}+1\right)\times Q_{1}} (3.47)

denote the corresponding respective Vandermonde matrices with \tilde{N}_{1}+1={\displaystyle\frac{\left(2\tilde{p}_{1}+d_{1}\right)!}{\left(2\tilde{p}_{1}\right)!d_{1}!}} and \tilde{N}_{2}+1={\displaystyle\frac{\left(2\tilde{p}_{2}+d_{2}\right)!}{\left(2\tilde{p}_{2}\right)!d_{2}!}} rows respectively. In general, these matrices may not be full rank; therefore, we can set up numerically stable and equivalent \ell_{0}-minimization problems for \tilde{\boldsymbol{w}}_{1} and \tilde{\boldsymbol{w}}_{2} as follows.

 \displaystyle\tilde{\boldsymbol{w}}_{1}\displaystyle=\arg\min_{\boldsymbol{\omega}\in\mathbb{R}^{Q_{1}}}\left\|\boldsymbol{\omega}\right\|_{0}:\left(\boldsymbol{Q}_{2,r_{2}}^{\ell,2\tilde{p}_{2}}\right)^{\mathbf{T}}\boldsymbol{\omega}=\left(\boldsymbol{Q}_{2,r_{2}}^{\ell,2\tilde{p}_{2}}\right)^{\mathbf{T}}\boldsymbol{w}_{1}, \displaystyle\tilde{\boldsymbol{w}}_{2}\displaystyle=\arg\min_{\boldsymbol{\omega}\in\mathbb{R}^{Q_{2}}}\left\|\boldsymbol{\omega}\right\|_{0}:\left(\boldsymbol{Q}_{1,r_{1}}^{\ell,2\tilde{p}_{1}}\right)^{\mathbf{T}}\boldsymbol{\omega}=\left(\boldsymbol{Q}_{1,r_{1}}^{\ell,2\tilde{p}_{1}}\right)^{\mathbf{T}}\boldsymbol{w}_{2}, (3.48)

where \forall i\in\left\{1,2\right\}, r_{i} denotes the rank of \boldsymbol{M}_{i}^{\ell,2\tilde{p}_{i}} and \boldsymbol{Q}_{i,r_{i}}^{\ell,2\tilde{p}_{i}} defines the pivoted-QR factorization of \left(\boldsymbol{M}_{i}^{\ell,2\tilde{p}_{i}}\right)^{\mathbf{T}} :

 \left(\boldsymbol{M}_{i}^{\ell,2\tilde{p}_{i}}\right)^{\mathbf{T}}\boldsymbol{\Pi}_{i}^{\ell,2\tilde{p}_{i}}=\boldsymbol{Q}_{i,r_{i}}^{\ell,2\tilde{p}_{i}}\boldsymbol{R}_{i}^{\ell,2\tilde{p}_{i}}. (3.49)

Instead of solving the NP-hard \ell_{0}-minimization problems, we employ a direct approach to compute 'weakly' optimal quadrature rules, with at most r_{2} and r_{1} non-zero weights respectively. Therefore, \forall i\in\left\{1,2\right\}, we compute the pivoted-QR factorization of \left(\boldsymbol{Q}_{i,r_{i}}^{\ell,2\tilde{p}_{i}}\right)^{\mathbf{T}}:

 \left(\boldsymbol{Q}_{i,r_{i}}^{\ell,2\tilde{p}_{i}}\right)^{\mathbf{T}}\tilde{\boldsymbol{\Pi}}_{i}^{\ell,2\tilde{p}_{i}}=\tilde{\boldsymbol{Q}}_{i}^{\ell,2\tilde{p}_{i}}\tilde{\boldsymbol{R}}_{i}^{\ell,2\tilde{p}_{i}}, (3.50)

construct the upper triangular square matrix \tilde{\boldsymbol{R}}_{i,r_{i}}^{\ell,2\tilde{p}_{i}} using the first r_{i} columns of \tilde{\boldsymbol{R}}_{i}^{\ell,2\tilde{p}_{i}}, and compute the sparse weight vectors as follows.

 \displaystyle\tilde{\boldsymbol{w}}_{1}\displaystyle=\tilde{\boldsymbol{\Pi}}_{2}^{\ell,2\tilde{p}_{2}}\left[\begin{array}{c}\left(\tilde{\boldsymbol{R}}_{2,r_{2}}^{\ell,2\tilde{p}_{2}}\right)^{-1}\tilde{\boldsymbol{R}}_{2}^{\ell,2\tilde{p}_{2}}\left(\tilde{\boldsymbol{\Pi}}_{2}^{\ell,2\tilde{p}_{2}}\right)^{\mathbf{T}}\boldsymbol{w}_{1}\\ \boldsymbol{0}\end{array}\right], \displaystyle\tilde{\boldsymbol{w}}_{2}\displaystyle=\tilde{\boldsymbol{\Pi}}_{1}^{\ell,2\tilde{p}_{1}}\left[\begin{array}{c}\left(\tilde{\boldsymbol{R}}_{1,r_{1}}^{\ell,2\tilde{p}_{1}}\right)^{-1}\tilde{\boldsymbol{R}}_{1}^{\ell,2\tilde{p}_{1}}\left(\tilde{\boldsymbol{\Pi}}_{1}^{\ell,2\tilde{p}_{1}}\right)^{\mathbf{T}}\boldsymbol{w}_{2}\\ \boldsymbol{0}\end{array}\right]. (3.51)

\forall i\in\left\{1,2\right\}, if \mathcal{Z}_{i}=\left\{1\leq j\leq Q_{i}:\tilde{w}_{i}^{\left(j\right)}\neq 0\right\} denotes the index set corresponding to the non-zero elements in \tilde{\boldsymbol{w}}_{i}, then an efficient reduced projection formula for computing the reduced order gPC coefficient matrix \tilde{\boldsymbol{U}}^{\tilde{p}_{i}} can be formulated as follows.

 \tilde{\boldsymbol{U}}^{\tilde{p}_{i}}\approx\begin{cases}{\displaystyle\sum_{j\in\mathcal{Z}_{2}}}\tilde{w}_{2}^{\left(j\right)}\boldsymbol{u}\left(\boldsymbol{\theta}_{1}^{\left(j\right)}\right)\boldsymbol{\phi}_{1}^{\ell,\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\left(j\right)}\right)^{\mathbf{T}}\boldsymbol{S}_{1}^{\ell,\tilde{p}_{1}}&i=1\\ {\displaystyle\sum_{j\in\mathcal{Z}_{1}}}\tilde{w}_{1}^{\left(j\right)}\boldsymbol{u}\left(\boldsymbol{\theta}_{2}^{\left(j\right)}\right)\boldsymbol{\phi}_{2}^{\ell,\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\left(j\right)}\right)^{\mathbf{T}}\boldsymbol{S}_{2}^{\ell,\tilde{p}_{2}}&i=2\end{cases}. (3.52)
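The two pivoted-QR factorizations of Eqs. 3.49-3.51 map directly onto `scipy.linalg.qr`. The following sketch is our own illustration (generic names `M` and `w`; the rank tolerance is an assumption): it returns a weight vector with at most r = rank(M) non-zero entries that reproduces all moments of the dense rule, which is the 'weakly' optimal construction described above.

```python
import numpy as np
from scipy.linalg import qr

def sparse_weights(M, w, tol=1e-12):
    """Weakly optimal sparse quadrature weights in the sense of Eq. 3.51.

    M : (N+1, Q) Vandermonde-type matrix of monomial evaluations at the nodes
    w : (Q,) dense quadrature weights
    Returns w_tilde with at most r = rank(M) non-zero entries such that
    M @ w_tilde = M @ w up to round-off."""
    # rank-revealing pivoted QR of M^T (Eq. 3.49)
    Q1, R1, _ = qr(M.T, mode='economic', pivoting=True)
    r = int(np.sum(np.abs(np.diag(R1)) > tol * np.abs(R1[0, 0])))
    Qr = Q1[:, :r]                       # orthonormal basis of range(M^T)
    # pivoted QR of Qr^T selects r well-conditioned nodes (Eq. 3.50)
    _, R2, piv = qr(Qr.T, pivoting=True)
    # solve the r x r triangular system of Eq. 3.51 and scatter back
    x = np.linalg.solve(R2[:, :r], R2 @ w[piv])
    w_tilde = np.zeros_like(w)
    w_tilde[piv[:r]] = x
    return w_tilde
```

For example, compressing a nine-point trapezoidal rule so that it still integrates all monomials up to degree two leaves at most three active nodes, while every moment of the dense rule is preserved.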

#### 3.4.3 Approximating modular and global gPC coefficients

\forall i\in\left\{1,2\right\} and iteration \ell, let \tilde{\boldsymbol{U}}^{d_{i}}\equiv\tilde{\boldsymbol{U}}^{p,d_{i}}:\Theta_{i}^{\ell}\rightarrow\mathbb{R}^{n\times\left(P_{i}+1\right)} denote the reduced dimensional modular gPC coefficient matrix of \boldsymbol{u}, corresponding to module i. Consequently, for the original modular gPC coefficient matrix \tilde{\boldsymbol{U}}^{p}, defined in §3.1, a composition \tilde{\boldsymbol{U}}^{p}=\tilde{\boldsymbol{U}}^{d_{i}}\circ\boldsymbol{\theta}_{i}^{\ell} exists. Therefore, using the reduced projection formula (Eq. 3.52), the reduced dimensional global gPC coefficient matrix \hat{\boldsymbol{U}}^{d_{i}}\equiv\hat{\boldsymbol{U}}^{p,d_{i}} can be computed as follows.

 \hat{\boldsymbol{U}}^{d_{i}}\approx\begin{cases}{\displaystyle\sum_{j\in\mathcal{Z}_{2}}}\tilde{w}_{2}^{\left(j\right)}\tilde{\boldsymbol{U}}^{d_{1}}\left(\boldsymbol{\theta}_{1}^{\left(j\right)}\right)\left(\left(\boldsymbol{\phi}_{1}^{\ell,\tilde{p}_{1}}\left(\boldsymbol{\theta}_{1}^{\left(j\right)}\right)^{\mathbf{T}}\boldsymbol{S}_{1}^{\ell,\tilde{p}_{1}}\right)\otimes\boldsymbol{I}_{P_{1}+1}\right)&i=1\\ {\displaystyle\sum_{j\in\mathcal{Z}_{1}}}\tilde{w}_{1}^{\left(j\right)}\tilde{\boldsymbol{U}}^{d_{2}}\left(\boldsymbol{\theta}_{2}^{\left(j\right)}\right)\left(\left(\boldsymbol{\phi}_{2}^{\ell,\tilde{p}_{2}}\left(\boldsymbol{\theta}_{2}^{\left(j\right)}\right)^{\mathbf{T}}\boldsymbol{S}_{2}^{\ell,\tilde{p}_{2}}\right)\otimes\boldsymbol{I}_{P_{2}+1}\right)&i=2\end{cases}. (3.53)

Subsequently, using Eq. 3.53, the global gPC coefficient matrix \hat{\boldsymbol{U}}^{p,d_{i},\tilde{p}_{i}} can be computed as follows.

 \hat{\boldsymbol{U}}^{p,d_{i},\tilde{p}_{i}}\approx\begin{cases}{\displaystyle\sum_{j=1}^{Q_{2}}}w_{2}^{\left(j\right)}\hat{\boldsymbol{U}}^{d_{1}}\left(\boldsymbol{\phi}_{1}^{\ell,\tilde{p}_{1}}\otimes\boldsymbol{I}_{P_{1}+1}\right)\boldsymbol{\Pi}_{1}\left(\boldsymbol{\xi}_{2}^{\left(j\right)}\right)^{\mathbf{T}}&i=1\\ {\displaystyle\sum_{j=1}^{Q_{1}}}w_{1}^{\left(j\right)}\hat{\boldsymbol{U}}^{d_{2}}\left(\boldsymbol{\phi}_{2}^{\ell,\tilde{p}_{2}}\otimes\boldsymbol{I}_{P_{2}+1}\right)\boldsymbol{\Pi}_{2}\left(\boldsymbol{\xi}_{1}^{\left(j\right)}\right)^{\mathbf{T}}&i=2\end{cases}. (3.54)

In general, the level and characteristics of the quadrature rule used in approximating \hat{\boldsymbol{U}}^{p,d_{i},\tilde{p}_{i}} can differ from the level and characteristics of the quadrature rule used in constructing the optimally sparse quadrature rules. Moreover, a strictly positive weight vector with r_{i}\leq\tilde{N}_{i}+1 non-zeros can also be computed using the Nelder-Mead method [31], in which case, the upper bound of \tilde{N}_{i}+1 on the number of non-zero weights would coincide with the upper bound proved by Tchakaloff [32]. Furthermore, the sparsity can also be explicitly controlled by prescribing a threshold on the diagonal elements of \boldsymbol{R}_{i}^{\ell,2\tilde{p}_{i}}.

#### 3.4.4 Selecting the reduced representation

\forall i\in\left\{1,2\right\}, we prescribe a tolerance \epsilon_{i,\mathrm{ord}}>0, such that at each iteration \ell, the reduced order \tilde{p}_{i} is selected as the smallest k\in\mathbb{N}_{0} which satisfies

 \left\|\boldsymbol{U}_{i}^{\ell,p,d_{i},k+1}-\boldsymbol{U}_{i}^{\ell,p,d_{i},k}\right\|_{\boldsymbol{G}_{i}}\leq\epsilon_{i,\mathrm{ord}}\left\|\boldsymbol{U}_{i}^{\ell,p,d_{i},k+1}\right\|_{\boldsymbol{G}_{i}}, (3.55)

where \left\|\cdot\right\|_{\boldsymbol{G}_{i}} denotes the \boldsymbol{G}_{i}-weighted Frobenius norm, such that \forall\hat{\boldsymbol{U}}\in\mathbb{R}^{n\times\left(P+1\right)}, \left\|\hat{\boldsymbol{U}}\right\|_{\boldsymbol{G}_{i}}=\sqrt{\mathrm{trace}\left(\hat{\boldsymbol{U}}^{\mathbf{T}}\boldsymbol{G}_{i}\hat{\boldsymbol{U}}\right)}.

We propose the following heuristic to select \tilde{p}_{i}: it is initialized to 0 at \ell=0, and subsequently incremented by 1 at any \ell>0 if \left\|\boldsymbol{U}_{i}^{\ell,p,d_{i},\tilde{p}_{i}+1}-\boldsymbol{U}_{i}^{\ell,p,d_{i},\tilde{p}_{i}}\right\|_{\boldsymbol{G}_{i}}>\epsilon_{i,\mathrm{ord}}\left\|\boldsymbol{U}_{i}^{\ell,p,d_{i},\tilde{p}_{i}+1}\right\|_{\boldsymbol{G}_{i}}. Therefore, by choosing an appropriate value for the tolerance \epsilon_{i,\mathrm{ord}}, we can guarantee that \tilde{p}_{i}<p and, therefore, a reduction of the gPC approximation order. Moreover, the requirement of computing \boldsymbol{U}_{i}^{\ell,p,d_{i},\tilde{p}_{i}+1} implies that the level of the constructed optimal quadrature rule must be \geq\tilde{p}_{i}+1.
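The selection criterion of Eq. 3.55 can be sketched as a short loop. This is our own paraphrase, not the paper's implementation; it assumes the candidate coefficient matrices have been zero-padded to a common number of columns so that successive orders can be differenced.

```python
import numpy as np

def g_norm(U, G):
    """G-weighted Frobenius norm of Eq. 3.55: sqrt(trace(U^T G U))."""
    return np.sqrt(np.trace(U.T @ G @ U))

def select_order(coeffs, G, eps_ord):
    """Smallest order k whose increment to order k + 1 is relatively small.

    coeffs : list where coeffs[k] is the order-k gPC coefficient matrix,
             zero-padded to a common number of columns
    G      : positive-definite Gramian defining the weighted norm
    """
    for k in range(len(coeffs) - 1):
        incr = g_norm(coeffs[k + 1] - coeffs[k], G)
        if incr <= eps_ord * g_norm(coeffs[k + 1], G):
            return k
    return len(coeffs) - 1   # tolerance never met within the available orders
```

A looser tolerance returns a lower order; a very tight tolerance forces the loop to exhaust all available orders, which is consistent with the requirement that the quadrature level be at least the selected order plus one.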

Theorem 2 proves an important relationship between the tolerance \epsilon_{i,\mathrm{ord}} and the upper bound on the error incurred by the reduced order approximation defined in Eq. 3.39.

#### Theorem 2

\forall i\in\left\{1,2\right\} and iteration \ell, let \boldsymbol{u},\boldsymbol{u}^{\tilde{p}_{i}}:\Theta_{i}^{\ell}\rightarrow\mathbb{R}^{n} denote an infinitely regular vector-valued function in \Theta_{i}^{\ell} and its reduced gPC approximation of order \tilde{p}_{i}\geq 0, respectively. Given a positive-definite Gramian \boldsymbol{G}\in\mathbb{R}^{n\times n}, if \boldsymbol{u}^{d_{i}}\equiv\boldsymbol{u}^{p,d_{i}} and \boldsymbol{u}^{d_{i},\tilde{p}_{i}}\equiv\boldsymbol{u}^{p,d_{i},\tilde{p}_{i}} denote the respective global gPC approximations of the composite functions \boldsymbol{u}\circ\boldsymbol{\theta}_{i}^{\ell} and \boldsymbol{u}^{\tilde{p}_{i}}\circ\boldsymbol{\theta}_{i}^{\ell}, then \exists\chi,\chi^{*}>0,\rho^{*}>1, such that \forall p,\tilde{p}_{i}\geq 0,

 \sqrt{\int_{\Xi}\left\|\boldsymbol{u}^{d_{i}}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}^{d_{i},\tilde{p}_{i}}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}}^{2}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}}\leq\chi\epsilon_{i,\mathrm{ord}}\sqrt{\int_{\Xi}\left\|\boldsymbol{u}^{d_{i}}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}}^{2}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}}+\chi^{*}\rho^{-p} (3.56)

for some \rho\geq\rho^{*}.

#### Proof

Since \boldsymbol{u} is infinitely regular in \Theta_{i}^{\ell}, \boldsymbol{u}\circ\boldsymbol{\theta}_{i}^{\ell} is infinitely regular in \Xi. Therefore, from the Cameron-Martin theorem, \exists\chi^{*},\tilde{\chi}>0,\rho^{*}>1, such that for any positive-definite \boldsymbol{G}\in\mathbb{R}^{n\times n}, the approximation error between \boldsymbol{u}^{d_{i}} and \boldsymbol{u}^{d_{i},\tilde{p}_{i}} has the following upper bound.

 \displaystyle\sqrt{\int_{\Xi}\left\|\boldsymbol{u}^{d_{i}}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}^{d_{i},\tilde{p}_{i}}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}}^{2}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}}\displaystyle\leq\tilde{\chi}\sqrt{\int_{\Xi}\left\|\boldsymbol{u}^{d_{i},\tilde{p}_{i}+1}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}^{d_{i},\tilde{p}_{i}}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}}^{2}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}}+\chi^{*}\rho^{-p}\displaystyle=\tilde{\chi}\sqrt{\int_{\Xi}\left\|\left(\hat{\boldsymbol{U}}^{d_{i},\tilde{p}_{i}+1}-\hat{\boldsymbol{U}}^{d_{i},\tilde{p}_{i}}\right)\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}}^{2}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}}+\chi^{*}\rho^{-p}\displaystyle=\tilde{\chi}\sqrt{\int_{\Xi}\left\|\boldsymbol{G}^{\frac{1}{2}}\left(\hat{\boldsymbol{U}}^{d_{i},\tilde{p}_{i}+1}-\hat{\boldsymbol{U}}^{d_{i},\tilde{p}_{i}}\right)\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)\right\|_{2}^{2}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}}+\chi^{*}\rho^{-p}\displaystyle\leq\tilde{\chi}\left\|\boldsymbol{G}^{\frac{1}{2}}\left(\hat{\boldsymbol{U}}^{d_{i},\tilde{p}_{i}+1}-\hat{\boldsymbol{U}}^{d_{i},\tilde{p}_{i}}\right)\right\|_{2}\sqrt{\int_{\Xi}\left\|\boldsymbol{\psi}\left(\boldsymbol{\xi}\right)\right\|_{2}^{2}\mu\left(\boldsymbol{\xi}\right)d\boldsymbol{\xi}}+\chi^{*}\rho^{-p}\displaystyle\leq\chi\left\|\hat{\boldsymbol{U}}^{d_{i},\tilde{p}_{i}+1}-\hat{\boldsymbol{U}}^{d_{i},\tilde{p}_{i}}\right\|_{\boldsymbol{G}}+\chi^{*}\rho^{-p} (3.57)

for some \chi>0,\rho\geq\rho^{*}. By substituting Eq. 3.55 into Eq. 3.57, we arrive at Eq. 3.56. \square

### 3.5 Algorithm and computational cost

The proposed reduced ISP based uncertainty propagation method is described in Algorithm 2. Let \tilde{Q}_{2} denote the size of the optimal quadrature rule corresponding to the reduced dimension d_{1} and reduced order \tilde{p}_{1}, and let \tilde{Q}_{1} denote the size of the optimal quadrature rule corresponding to the reduced dimension d_{2} and reduced order \tilde{p}_{2}. Therefore, still assuming that the computational costs are dominated by the repeated execution of module operators \tilde{\boldsymbol{M}}_{1} and \tilde{\boldsymbol{M}}_{2}, the computational cost of the reduced ISP method

 \mathcal{C}_{r}\approx\mathcal{O}\left(\bar{\mathcal{C}}_{1}P_{1}^{\alpha_{1}}\tilde{Q}_{2}+\bar{\mathcal{C}}_{2}P_{2}^{\alpha_{2}}\tilde{Q}_{1}\right) (3.58)

would grow exponentially with respect to the reduced dimensions s_{1}+d_{1}, s_{2}+d_{2} and reduced orders \tilde{p}_{1},\tilde{p}_{2}. Therefore, the proposed reduced ISP method indeed mitigates the curse of dimensionality associated with the standard ISP method.
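To make the scaling contrast concrete, a quick count with illustrative numbers (our own, assuming full tensor-product quadrature with p + 1 points per stochastic direction; the weakly optimal rules of §3.4.2 would be smaller still) compares the number of module solves before and after reduction.

```python
# Tensor-product quadrature sizes: illustrative comparison of the
# full coupled parameter space vs. the reduced space of Sections 3.3-3.4.
def tensor_grid_size(points_per_dim, dim):
    """Number of nodes in a full tensor-product grid."""
    return points_per_dim ** dim

p, s2 = 4, 6            # gPC order and other-module stochastic dimension (illustrative)
d1, p1_tilde = 2, 2     # reduced dimension and reduced order (illustrative)

full = tensor_grid_size(p + 1, s2)              # 5**6 = 15625 module solves
reduced = tensor_grid_size(p1_tilde + 1, d1)    # 3**2 = 9 module solves
assert reduced < full
```

Even for these modest parameter values the reduction is three orders of magnitude, which is the practical content of the statement that the reduced ISP cost grows only with d_i and p̃_i rather than with s_i and p.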

### 3.6 Error analysis

\forall i\in\left\{1,2\right\} and \forall\boldsymbol{\xi}\in\Xi, let

 \varepsilon_{i}\left(\boldsymbol{\xi}\right)=\left\|\boldsymbol{u}_{i}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}_{i}^{\ell,p,d_{i},\tilde{p}_{i}}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}_{i}} (3.59)

denote the \boldsymbol{G}_{i}-weighted error between the component solution \boldsymbol{u}_{i} and its corresponding reduced gPC approximation \boldsymbol{u}_{i}^{\ell,p,d_{i},\tilde{p}_{i}}.

Subsequently, \varepsilon_{i} can be decomposed as a sum of individual error terms as follows.

 \varepsilon_{i}=\varepsilon_{i,\mathrm{BGS}}+\varepsilon_{i,\mathrm{gPC}}+\varepsilon_{i,\mathrm{dim}}+\varepsilon_{i,\mathrm{ord}}, (3.60)

where \varepsilon_{i,\mathrm{BGS}} denotes the convergence error, \varepsilon_{i,\mathrm{gPC}} denotes the gPC truncation error, \varepsilon_{i,\mathrm{dim}} denotes the reduced dimension approximation error and \varepsilon_{i,\mathrm{ord}} denotes the reduced order approximation error.

Using the triangle inequality property of norms, an asymptotic upper bound for each constituent error term can be formulated as follows. \forall\boldsymbol{\xi}\in\Xi,

 \displaystyle\varepsilon_{i}\left(\boldsymbol{\xi}\right)=\displaystyle\left\|\boldsymbol{u}_{i}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}_{i}^{\ell}\left(\boldsymbol{\xi}\right)+\boldsymbol{u}_{i}^{\ell}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}_{i}^{\ell,p}\left(\boldsymbol{\xi}\right)+\boldsymbol{u}_{i}^{\ell,p}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}_{i}^{\ell,p,d_{i}}\left(\boldsymbol{\xi}\right)+\boldsymbol{u}_{i}^{\ell,p,d_{i}}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}_{i}^{\ell,p,d_{i},\tilde{p}_{i}}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}_{i}}\displaystyle\leq\underset{\varepsilon_{i,\mathrm{BGS}}\left(\boldsymbol{\xi}\right)\leq\mathcal{O}\left(\eta^{-\ell}\right)}{\underbrace{\left\|\boldsymbol{u}_{i}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}_{i}^{\ell}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}_{i}}}}+\underset{\varepsilon_{i,\mathrm{gPC}}\left(\boldsymbol{\xi}\right)\leq\mathcal{O}\left(\rho^{-p}\right)}{\underbrace{\left\|\boldsymbol{u}_{i}^{\ell}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}_{i}^{\ell,p}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}_{i}}}}+\underset{\varepsilon_{i,\mathrm{dim}}\left(\boldsymbol{\xi}\right)\leq\mathcal{O}\left(\epsilon_{i,\mathrm{dim}}\right)}{\underbrace{\left\|\boldsymbol{u}_{i}^{\ell,p}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}_{i}^{\ell,p,d_{i}}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}_{i}}}}+\underset{\varepsilon_{i,\mathrm{ord}}\left(\boldsymbol{\xi}\right)\leq\mathcal{O}\left(\epsilon_{i,\mathrm{ord}}\right)+\mathcal{O}\left(\tilde{\rho}^{-p}\right)}{\underbrace{\left\|\boldsymbol{u}_{i}^{\ell,p,d_{i}}\left(\boldsymbol{\xi}\right)-\boldsymbol{u}_{i}^{\ell,p,d_{i},\tilde{p}_{i}}\left(\boldsymbol{\xi}\right)\right\|_{\boldsymbol{G}_{i}}}}. (3.61)

In the standard ISP method, the asymptotic upper bound on the approximation error would simply be the sum of the first two terms on the right hand side of Eq. 3.61, implying that \boldsymbol{u}_{i}^{\ell,p} would converge to \boldsymbol{u}_{i} as \ell,p\rightarrow\infty. However, in the reduced ISP method, \varepsilon_{i} would converge to a non-zero quantity, with an asymptotic upper bound of \mathcal{O}\left(\epsilon_{i,\mathrm{dim}}\right)+\mathcal{O}\left(\epsilon_{i,\mathrm{ord}}\right).

This analysis suggests that the approximation error in \boldsymbol{u}_{i}^{\ell,p,d_{i},\tilde{p}_{i}} can be explicitly controlled by choosing the tolerances \epsilon_{i,\mathrm{dim}} and \epsilon_{i,\mathrm{ord}} appropriately.

### 3.7 Selecting the tolerance values

In practice, since exact bounds on the approximation error are not known a priori, the tolerance values are selected based on the results of a preliminary study, wherein a lower fidelity multi-physics model is used and various tolerance values can be tested. In our case, lower fidelity translates to a coarser spatial discretization of the original steady state coupled PDE system used to formulate the coupled algebraic system. An illustration of this is provided in the numerical experiments in §4.

## 4 Numerical examples

We will now demonstrate and compare the performance of the standard and reduced ISP methods using two numerical examples.

### 4.1 Thermal-Neutronics problem

The first numerical example we consider is inspired by recent benchmark studies using nuclear reactor models [14, 15]. Here, we study the steady-state neutron transport and heat transfer in a two-dimensional reactor [33], with uncertain thermal and reactive properties.

#### 4.1.1 Model setup

Let \Omega\equiv\left(0,1\right)_{x_{1}}\times\left(0,1\right)_{x_{2}} denote the spatial domain and T,\varphi denote the non-dimensional temperature and neutron flux in the reactor, respectively. The governing equations are as follows. \forall\boldsymbol{\xi}\in\Xi,

\begin{aligned}
\boldsymbol{\nabla}^{\mathbf{T}}\boldsymbol{\nabla}T(\boldsymbol{x},\boldsymbol{\xi})-H_{T}\left(\boldsymbol{x},\boldsymbol{\xi}_{1}(\boldsymbol{\xi})\right)T(\boldsymbol{x},\boldsymbol{\xi})+\frac{E_{T}\,\varphi(\boldsymbol{x},\boldsymbol{\xi})}{\sqrt{T(\boldsymbol{x},\boldsymbol{\xi})+1}} &= 0, \\
\boldsymbol{\nabla}^{\mathbf{T}}\left(\sqrt{T(\boldsymbol{x},\boldsymbol{\xi})+1}\,\boldsymbol{\nabla}\varphi(\boldsymbol{x},\boldsymbol{\xi})\right)-\frac{\varSigma_{\varphi}\left(\boldsymbol{x},\boldsymbol{\xi}_{2}(\boldsymbol{\xi})\right)\varphi(\boldsymbol{x},\boldsymbol{\xi})}{\sqrt{T(\boldsymbol{x},\boldsymbol{\xi})+1}}+S_{\varphi} &= 0,
\end{aligned}
\qquad \boldsymbol{x}\in\Omega, \qquad (4.1)

with homogeneous Neumann boundary conditions for T and \varphi at all boundaries. Figure 1 illustrates the multi-physics setup and computational domain.

Here, H_{T}, E_{T}, \varSigma_{\varphi} and S_{\varphi} denote the (non-dimensional) thermal transmittance, fission energy, reaction cross section and neutron source strength respectively. In this study, H_{T} and \varSigma_{\varphi} are assumed to be independent random fields, and modeled using the following KL expansions. \forall\boldsymbol{x}\in\Omega,\boldsymbol{\xi}_{1}\in\Xi_{1},

H_{T}(\boldsymbol{x},\boldsymbol{\xi}_{1})=\bar{H}_{T}+\sqrt{3}\,\delta_{H_{T}}\sum_{j=1}^{s_{1}}\gamma_{H_{T},j}(\boldsymbol{x})\,\xi_{1j}, \qquad (4.2)

where \bar{H}_{T} denotes the mean of H_{T} and \left\{\xi_{1j}\sim U\left[-1,1\right]\right\}_{j=1}^{s_{1}} are i.i.d. random variables. Similarly, \forall\boldsymbol{x}\in\Omega,\boldsymbol{\xi}_{2}\in\Xi_{2},

\varSigma_{\varphi}(\boldsymbol{x},\boldsymbol{\xi}_{2})=\bar{\varSigma}_{\varphi}+\sqrt{3}\,\delta_{\varSigma_{\varphi}}\sum_{j=1}^{s_{2}}\gamma_{\varSigma_{\varphi},j}(\boldsymbol{x})\,\xi_{2j}, \qquad (4.3)

where \bar{\varSigma}_{\varphi} denotes the mean of \varSigma_{\varphi} and \left\{\xi_{2j}\sim U\left[-1,1\right]\right\}_{j=1}^{s_{2}} are i.i.d. random variables. Moreover, we assume that both H_{T} and \varSigma_{\varphi} have exponential covariance kernels

\begin{aligned}
C_{H_{T}}(\boldsymbol{x},\boldsymbol{y}) &= \delta_{H_{T}}^{2}\exp\left(-\frac{\left\|\boldsymbol{x}-\boldsymbol{y}\right\|_{1}}{l_{H_{T}}}\right), \\
C_{\varSigma_{\varphi}}(\boldsymbol{x},\boldsymbol{y}) &= \delta_{\varSigma_{\varphi}}^{2}\exp\left(-\frac{\left\|\boldsymbol{x}-\boldsymbol{y}\right\|_{1}}{l_{\varSigma_{\varphi}}}\right),
\end{aligned}
\qquad \boldsymbol{x},\boldsymbol{y}\in\Omega, \qquad (4.4)

where \delta_{H_{T}},\delta_{\varSigma_{\varphi}} denote the respective coefficients of variation, and l_{H_{T}},l_{\varSigma_{\varphi}} denote the respective correlation lengths. The analytical expressions for \left\{\gamma_{H_{T},j}\right\}_{j>0} and \left\{\gamma_{\varSigma_{\varphi},j}\right\}_{j>0} are provided in Appendix A.
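To make the KL construction concrete, the following Python sketch computes a discrete KL expansion by eigendecomposition of the quadrature-weighted covariance matrix (the Nyström approach). For brevity the sketch is one-dimensional, the grid size and parameter values are illustrative rather than those of the paper, and the coefficient of variation \delta is folded into the modes \gamma_{j}.

```python
import numpy as np

def kl_expansion_1d(n_pts=200, s=3, delta=0.1, corr_len=0.5):
    """Discrete KL expansion of a 1D random field on [0, 1] with exponential
    covariance C(x, y) = delta^2 exp(-|x - y| / corr_len) (Nystrom method)."""
    x = np.linspace(0.0, 1.0, n_pts)
    w = 1.0 / n_pts                                  # uniform quadrature weight
    C = delta**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    lam, phi = np.linalg.eigh(C * w)                 # spectrum of the weighted kernel
    idx = np.argsort(lam)[::-1][:s]                  # keep the s largest eigenvalues
    lam, phi = lam[idx], phi[:, idx] / np.sqrt(w)    # L2([0,1])-normalized eigenfunctions
    gamma = phi * np.sqrt(lam)                       # modes gamma_j(x) = sqrt(lam_j) phi_j(x)
    return x, gamma, lam

# Sample the field: H(x, xi) = H_bar + sqrt(3) * sum_j gamma_j(x) xi_j, xi_j ~ U[-1, 1]
x, gamma, lam = kl_expansion_1d()
rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, size=3)
H = 1.0 + np.sqrt(3.0) * gamma @ xi
```

The truncation level s plays the role of s_{1} (or s_{2}) above; the decay of the retained eigenvalues indicates how much variance the truncated expansion captures.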

The governing PDE system is spatially discretized using bilinear finite elements [34] and m\times m equispaced nodes in \Omega. Let \boldsymbol{u}_{1},\boldsymbol{u}_{2}\in\mathbb{R}^{m^{2}} denote the respective vectors of nodal temperature and neutron flux values, which solve the nonlinear system

\begin{aligned}
\left(\boldsymbol{K}_{T}+\boldsymbol{H}_{T}(\boldsymbol{\xi}_{1})\right)\boldsymbol{u}_{1}-\boldsymbol{E}_{T}(\boldsymbol{u}_{1})\,\boldsymbol{u}_{2} &= \boldsymbol{0}, \\
\left(\boldsymbol{K}_{\varphi}(\boldsymbol{u}_{1})+\boldsymbol{\varSigma}_{\varphi}(\boldsymbol{u}_{1},\boldsymbol{\xi}_{2})\right)\boldsymbol{u}_{2}-\boldsymbol{s}_{\varphi} &= \boldsymbol{0}, \qquad (4.5)
\end{aligned}

where each term in Eq. 4.5 denotes its respective discretized operator in the coupled PDE system. Subsequently, a multi-physics setup is formulated by separating the thermal and neutronic components of the coupled algebraic system, wherein the component residuals, as per Eq. 2.1, are formulated as follows.

\begin{aligned}
\boldsymbol{f}_{1}(\boldsymbol{u}_{1};\boldsymbol{u}_{2},\boldsymbol{\xi}_{1}) &= \left(\boldsymbol{K}_{T}+\boldsymbol{H}_{T}(\boldsymbol{\xi}_{1})\right)\boldsymbol{u}_{1}-\boldsymbol{E}_{T}(\boldsymbol{u}_{1})\,\boldsymbol{u}_{2}, \\
\boldsymbol{f}_{2}(\boldsymbol{u}_{2};\boldsymbol{u}_{1},\boldsymbol{\xi}_{2}) &= \left(\boldsymbol{K}_{\varphi}(\boldsymbol{u}_{1})+\boldsymbol{\varSigma}_{\varphi}(\boldsymbol{u}_{1},\boldsymbol{\xi}_{2})\right)\boldsymbol{u}_{2}-\boldsymbol{s}_{\varphi}. \qquad (4.6)
\end{aligned}

The quantities of interest in this study are the first two moments and sensitivity indices of the temperature and neutron flux fields. Table 1 lists the numerical values of the deterministic parameters used in this study.

#### 4.1.2 Modular deterministic solver: Setup and verification

To compare the results of the gPC solution approximations with Monte-Carlo solution samples, we implemented a modular deterministic solver for the coupled algebraic system, where a BGS method is used to converge to the solution. The corresponding iterations are computed as follows.

\begin{aligned}
\boldsymbol{u}_{1}^{\ell+1} &= \left(\boldsymbol{K}_{T}+\boldsymbol{H}_{T}(\boldsymbol{\xi}_{1})\right)^{-1}\boldsymbol{E}_{T}(\boldsymbol{u}_{1}^{\ell})\,\boldsymbol{u}_{2}^{\ell}, \\
\boldsymbol{u}_{2}^{\ell+1} &= \left(\boldsymbol{K}_{\varphi}(\boldsymbol{u}_{1}^{\ell+1})+\boldsymbol{\varSigma}_{\varphi}(\boldsymbol{u}_{1}^{\ell+1},\boldsymbol{\xi}_{2})\right)^{-1}\boldsymbol{s}_{\varphi}. \qquad (4.7)
\end{aligned}
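The fixed-point structure of Eq. 4.7 can be sketched on a small stand-in system: each module inverts its own operator against the latest iterate of the other module's state. All matrices and coupling terms below are illustrative stand-ins for the finite element operators, not the paper's discretization.

```python
import numpy as np

def bgs_solve(xi1=0.0, xi2=0.0, tol=1e-12, max_iter=200):
    """Block Gauss-Seidel sweeps mimicking Eq. 4.7 on a toy 2x2 coupled system."""
    K_T = np.array([[4.0, -1.0], [-1.0, 4.0]])       # stand-in diffusion operator
    H_T = (1.0 + 0.1 * xi1) * np.eye(2)              # stand-in uncertain transmittance
    s_phi = np.array([1.0, 1.0])                     # stand-in neutron source

    def E_T(u1):                                     # stand-in nonlinear coupling term
        return np.diag(1.0 / np.sqrt(u1 + 1.0))

    def K_sig(u1):                                   # stand-in diffusion + cross section
        return np.diag(np.sqrt(u1 + 1.0)) @ K_T + (2.0 + 0.1 * xi2) * np.eye(2)

    u1, u2 = np.zeros(2), np.ones(2)
    for it in range(max_iter):
        u1 = np.linalg.solve(K_T + H_T, E_T(u1) @ u2)        # thermal module
        u2 = np.linalg.solve(K_sig(u1), s_phi)               # neutronics module
        res = max(np.linalg.norm((K_T + H_T) @ u1 - E_T(u1) @ u2),
                  np.linalg.norm(K_sig(u1) @ u2 - s_phi))    # coupled residual
        if res < tol:
            break
    return u1, u2, res, it
```

Each sweep only requires module-local solves, which is what makes the BGS partitioning naturally modular.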

Subsequently, a verification study was carried out on the modular deterministic solver, using the method of manufactured solutions (MMS) [35]. Further details are provided in Appendix B. Moreover, similar formulations define the BGS iterations in the standard and reduced ISP method implementations, which respectively solve the global and modular Galerkin forms of the coupled algebraic system formulated in Eq. 4.6. In each implementation, the components of the solver were developed as \mathtt{MATLAB}^{\text{\texttrademark}} function modules. Furthermore, the corresponding linear systems are solved using LSQR [36], with the mean of the stochastic left hand side matrices used to define the preconditioners [37].
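The mean-based preconditioning idea can be sketched on a toy stochastic Galerkin system \left(\sum_{k}\boldsymbol{G}_{k}\otimes\boldsymbol{A}_{k}\right)\boldsymbol{U}=\boldsymbol{F}: the mean block \boldsymbol{A}_{0} is factorized once and applied blockwise. All matrices below are illustrative stand-ins, and GMRES is used in place of LSQR only because SciPy's lsqr does not expose a preconditioner argument.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stochastic Galerkin system (G_0 kron A_0 + G_1 kron A_1) U = F,
# preconditioned by the mean block: M^{-1} = I kron A_0^{-1}.
n, P = 20, 6
rng = np.random.default_rng(1)
A0 = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
              [0, -1, 1]).tocsc()                    # stand-in mean operator
A1 = sp.diags(0.1 * rng.uniform(size=n)).tocsc()     # stand-in fluctuation operator
G0, G1 = sp.eye(P), sp.diags(0.3 * rng.uniform(size=P))
A = (sp.kron(G0, A0) + sp.kron(G1, A1)).tocsc()
F = np.ones(n * P)

solve_A0 = spla.factorized(A0)                       # factorize the mean block once
apply_Minv = lambda v: np.concatenate([solve_A0(v[i*n:(i+1)*n]) for i in range(P)])
M = spla.LinearOperator((n * P, n * P), matvec=apply_Minv)
U, info = spla.gmres(A, F, M=M)                      # converged if info == 0
```

Since the stochastic fluctuations are typically a perturbation of the mean operator, the preconditioned system is well clustered and the Krylov solver converges in few iterations.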

#### 4.1.3 ISP based uncertainty propagation

Here, we compare the results and performance of the standard and reduced ISP method implementations. For m=21, s_{1}=s_{2}=3 and p=4, Figure 2 and Figure 3 respectively compare the mean and standard deviation of the temperature and neutron flux fields computed from their converged gPC coefficients. Moreover, Table 2 compares the sensitivity indices computed using the ANOVA decomposition technique [38]. Here, \forall i\in\left\{1,2\right\}, \hat{\mathcal{V}}_{i,1} and \hat{\mathcal{V}}_{i,2} denote the main effects of \boldsymbol{\xi}_{1} and \boldsymbol{\xi}_{2}, while \hat{\mathcal{V}}_{i,12} denotes the interaction effect on the variance of \boldsymbol{u}_{i}. Therefore, \hat{\mathcal{V}}_{i,1}+\hat{\mathcal{V}}_{i,2}+\hat{\mathcal{V}}_{i,12}=1:

\hat{\mathcal{V}}_{i,1}=\frac{\displaystyle\sum_{j=1}^{P_{1}}\left\|\hat{\boldsymbol{u}}_{i,j,0}\right\|_{\boldsymbol{G}}^{2}}{\displaystyle\sum_{j=1}^{P}\left\|\hat{\boldsymbol{u}}_{i,\jmath_{1}(j),\jmath_{2}(j)}\right\|_{\boldsymbol{G}}^{2}}, \qquad \hat{\mathcal{V}}_{i,2}=\frac{\displaystyle\sum_{j=1}^{P_{2}}\left\|\hat{\boldsymbol{u}}_{i,0,j}\right\|_{\boldsymbol{G}}^{2}}{\displaystyle\sum_{j=1}^{P}\left\|\hat{\boldsymbol{u}}_{i,\jmath_{1}(j),\jmath_{2}(j)}\right\|_{\boldsymbol{G}}^{2}}. \qquad (4.8)
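A sketch of this decomposition, assuming the gPC coefficients are supplied together with the total polynomial orders of each multi-index in the \boldsymbol{\xi}_{1} and \boldsymbol{\xi}_{2} groups; the Euclidean norm stands in for the \boldsymbol{G}-weighted norm of Eq. 4.8:

```python
import numpy as np

def sensitivity_indices(coeffs, orders1, orders2):
    """Variance fractions from gPC coefficients. orders1[j] and orders2[j] are
    the total polynomial orders of multi-index j in the xi_1 and xi_2 groups;
    the zeroth (mean) coefficient carries (0, 0) and is excluded from the variance."""
    sq = [float(np.dot(c, c)) for c in coeffs]       # ||u_hat_j||^2
    terms = list(zip(sq, orders1, orders2))
    var = sum(s for s, o1, o2 in terms if (o1, o2) != (0, 0))
    V1 = sum(s for s, o1, o2 in terms if o1 > 0 and o2 == 0) / var   # main effect of xi_1
    V2 = sum(s for s, o1, o2 in terms if o1 == 0 and o2 > 0) / var   # main effect of xi_2
    V12 = sum(s for s, o1, o2 in terms if o1 > 0 and o2 > 0) / var   # interaction effect
    return V1, V2, V12

# Toy coefficients: mean, one xi_1-only mode, one xi_2-only mode, one mixed mode
coeffs = [np.array([2.0]), np.array([1.0]), np.array([0.5]), np.array([0.5])]
V1, V2, V12 = sensitivity_indices(coeffs, [0, 1, 0, 1], [0, 0, 1, 1])
```

By construction the three fractions sum to one, matching the normalization stated above.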

Furthermore, for various values of s_{1}, s_{2} and p, a comparison of the respective errors \varepsilon_{s},\varepsilon_{r} and overall computational costs \mathcal{C}_{s}, \mathcal{C}_{r} (wall-times) is provided in Table 3. The tolerance values used in the reduced ISP method implementation are \epsilon_{1,\mathrm{dim}}=0.02, \epsilon_{2,\mathrm{dim}}=0.05, and \epsilon_{1,\mathrm{ord}}=\epsilon_{2,\mathrm{ord}}=10^{-4}. While the choice of these tolerances seems arbitrary, they were selected based on preliminary experiments using a coarser mesh, with m=11, s_{1}=s_{2}=3 and p=4, to study the effects of changing \epsilon_{1,\mathrm{dim}} and \epsilon_{2,\mathrm{dim}} on the computed gPC coefficients of the solutions. Figure 4 illustrates a comparison of the standard deviations of the random neutron flux.

In each case, both the standard and reduced ISP algorithms converged in 6 iterations for the chosen tolerance \epsilon_{\mathrm{BGS}}=10^{-6}. This observation indicates that the convergence rate of the BGS iterations in both implementations is independent of the stochastic dimensions s_{1},s_{2} and the gPC order p.

The highest observed speedup factor is \approx 14.3. Moreover, the approximation errors in the standard ISP method implementation are observed to decay exponentially, which indicates a high degree of regularity in the stochastic solutions. Furthermore, for the reduced ISP method implementation, the asymptotic upper bound predicted in §3.6 is observed in its approximation errors.

### 4.2 Boussinesq flow problem

The Boussinesq model describes thermally driven, incompressible flows and is widely used in oceanic and atmospheric modeling [39, 40]. Here, we consider a multi-physics setup with uncertain fluid properties and boundary conditions.

#### 4.2.1 Model setup

Let \Omega\equiv\left(0,1\right)_{x_{1}}\times\left(0,1\right)_{x_{2}} denote the spatial domain and \boldsymbol{u}=\left[\begin{array}{cc}u_{1}&u_{2}\end{array}\right]^{\mathbf{T}}, p, T denote the non-dimensional [40] fluid velocity, pressure and temperature respectively. The governing equations for the fluid variables are as follows. \forall\boldsymbol{\xi}\in\Xi,

\begin{aligned}
\boldsymbol{\nabla}^{\mathbf{T}}\boldsymbol{u}(\boldsymbol{x},\boldsymbol{\xi}) &= 0, \\
\left(\boldsymbol{u}(\boldsymbol{x},\boldsymbol{\xi})^{\mathbf{T}}\boldsymbol{\nabla}\right)\boldsymbol{u}(\boldsymbol{x},\boldsymbol{\xi})+\boldsymbol{\nabla}p(\boldsymbol{x},\boldsymbol{\xi})-\mathrm{Pr}\,\boldsymbol{\nabla}^{\mathbf{T}}\boldsymbol{\nabla}\boldsymbol{u}(\boldsymbol{x},\boldsymbol{\xi})-\mathrm{Pr}\,\mathrm{Ra}\left(\boldsymbol{x},\boldsymbol{\xi}_{1}(\boldsymbol{\xi})\right)T(\boldsymbol{x},\boldsymbol{\xi})\,\boldsymbol{e}_{2} &= \boldsymbol{0}, \\
\left(\boldsymbol{u}(\boldsymbol{x},\boldsymbol{\xi})^{\mathbf{T}}\boldsymbol{\nabla}\right)T(\boldsymbol{x},\boldsymbol{\xi})-\boldsymbol{\nabla}^{\mathbf{T}}\boldsymbol{\nabla}T(\boldsymbol{x},\boldsymbol{\xi}) &= 0,
\end{aligned}
\qquad \boldsymbol{x}\in\Omega, \qquad (4.9)

with homogeneous Dirichlet boundary conditions for \boldsymbol{u} and Neumann boundary conditions for p at all boundaries. The boundary conditions for temperature, as shown in Figure 5, are as follows.

\begin{aligned}
\frac{\partial T}{\partial x_{2}}(x_{1},0,\boldsymbol{\xi})=\frac{\partial T}{\partial x_{2}}(x_{1},1,\boldsymbol{\xi}) &= 0, \qquad x_{1}\in\left[0,1\right], \\
T(0,x_{2},\boldsymbol{\xi})-T_{h}(x_{2},\boldsymbol{\xi}_{2})=T(1,x_{2},\boldsymbol{\xi}) &= 0, \qquad x_{2}\in\left[0,1\right]. \qquad (4.10)
\end{aligned}

Here, \boldsymbol{e}_{2} denotes \left[\begin{array}{cc}0&1\end{array}\right]^{\mathbf{T}}, while \mathrm{Pr} and \mathrm{Ra} denote the Prandtl and Rayleigh numbers respectively. T_{h} denotes the hot-wall temperature such that \forall x_{2}\in\left[0,1\right],\boldsymbol{\xi}_{2}\in\Xi_{2},

T_{h}(x_{2},\boldsymbol{\xi}_{2})=\bar{T}_{h}+h(x_{2},\boldsymbol{\xi}_{2})\sin^{2}\left(\pi x_{2}\right), \qquad (4.11)

where \bar{T}_{h} is the mean hot-wall temperature and h denotes the perturbation amplitude.

In this study, \mathrm{Ra} and h are assumed to be independent random fields, and modeled using the following KL expansions. \forall\boldsymbol{x}\in\Omega,\boldsymbol{\xi}_{1}\in\Xi_{1},

\mathrm{Ra}(\boldsymbol{x},\boldsymbol{\xi}_{1})=\bar{\mathrm{Ra}}+\sqrt{3}\,\delta_{\mathrm{Ra}}\sum_{j=1}^{s_{1}}\gamma_{\mathrm{Ra},j}(\boldsymbol{x})\,\xi_{1j}, \qquad (4.12)

where \bar{\mathrm{Ra}} denotes the mean of \mathrm{Ra} and \left\{\xi_{1j}\sim U\left[-1,1\right]\right\}_{j=1}^{s_{1}} are i.i.d. random variables. Similarly, \forall x_{2}\in\left(0,1\right),\boldsymbol{\xi}_{2}\in\Xi_{2},

h(x_{2},\boldsymbol{\xi}_{2})=\sqrt{3}\,\delta_{h}\sum_{j=1}^{s_{2}}\gamma_{h,j}(x_{2})\,\xi_{2j}, \qquad (4.13)

where \left\{\xi_{2j}\sim U\left[-1,1\right]\right\}_{j=1}^{s_{2}} are i.i.d. random variables. Moreover, we assume that both \mathrm{Ra} and h have exponential covariance kernels

\begin{aligned}
C_{\mathrm{Ra}}(\boldsymbol{x},\boldsymbol{y}) &= \delta_{\mathrm{Ra}}^{2}\exp\left(-\frac{\left\|\boldsymbol{x}-\boldsymbol{y}\right\|_{1}}{l_{\mathrm{Ra}}}\right), \qquad \boldsymbol{x},\boldsymbol{y}\in\Omega, \\
C_{h}(x_{2},y_{2}) &= \delta_{h}^{2}\exp\left(-\frac{\left|x_{2}-y_{2}\right|}{l_{h}}\right), \qquad x_{2},y_{2}\in\left[0,1\right], \qquad (4.14)
\end{aligned}

where \delta_{\mathrm{Ra}}, \delta_{h} denote the respective coefficients of variation, and l_{\mathrm{Ra}}, l_{h} denote the respective correlation lengths. The analytical expressions for \left\{\gamma_{\mathrm{Ra},j}\right\}_{j>0} and \left\{\gamma_{h,j}\right\}_{j>0} are provided in Appendix A.

The pressure Poisson equation

\boldsymbol{\nabla}^{\mathbf{T}}\boldsymbol{\nabla}p(\boldsymbol{x},\boldsymbol{\xi})+\boldsymbol{\nabla}^{\mathbf{T}}\left(\left(\boldsymbol{u}(\boldsymbol{x},\boldsymbol{\xi})^{\mathbf{T}}\boldsymbol{\nabla}\right)\boldsymbol{u}(\boldsymbol{x},\boldsymbol{\xi})-\mathrm{Pr}\,\mathrm{Ra}(\boldsymbol{x},\boldsymbol{\xi}_{1})\,T(\boldsymbol{x},\boldsymbol{\xi})\,\boldsymbol{e}_{2}\right)=0 \qquad (4.15)

is solved in place of the continuity equation, to close the momentum component of the PDE system.

Each component of the PDE system is spatially discretized using a finite volume method, with linear central-differencing schemes [42], on a uniform grid with m\times m cells. Let \boldsymbol{u}_{1}^{\prime},\boldsymbol{u}_{2}^{\prime},\boldsymbol{p}^{\prime},\boldsymbol{t}^{\prime}\in\mathbb{R}^{m^{2}} denote the respective vectors of cell-centroidal horizontal velocity, vertical velocity, pressure and temperature, which solve the nonlinear system

\begin{aligned}
\left(\boldsymbol{K}_{u}+\boldsymbol{A}(\boldsymbol{u}_{1}^{\prime},\boldsymbol{u}_{2}^{\prime})\right)\boldsymbol{u}_{1}^{\prime}+\boldsymbol{B}_{1}\boldsymbol{p}^{\prime} &= \boldsymbol{0}, \\
\left(\boldsymbol{K}_{u}+\boldsymbol{A}(\boldsymbol{u}_{1}^{\prime},\boldsymbol{u}_{2}^{\prime})\right)\boldsymbol{u}_{2}^{\prime}+\boldsymbol{B}_{2}\boldsymbol{p}^{\prime}-\boldsymbol{R}(\boldsymbol{\xi}_{1})\,\boldsymbol{t}^{\prime} &= \boldsymbol{0}, \\
\boldsymbol{K}_{p}\boldsymbol{p}^{\prime}+\boldsymbol{C}_{1}(\boldsymbol{u}_{1}^{\prime},\boldsymbol{u}_{2}^{\prime})\,\boldsymbol{u}_{1}^{\prime}+\boldsymbol{C}_{2}(\boldsymbol{u}_{1}^{\prime},\boldsymbol{u}_{2}^{\prime})\,\boldsymbol{u}_{2}^{\prime}-\boldsymbol{S}(\boldsymbol{\xi}_{1})\,\boldsymbol{t}^{\prime} &= \boldsymbol{0}, \\
\left(\boldsymbol{K}_{T}+\boldsymbol{A}(\boldsymbol{u}_{1}^{\prime},\boldsymbol{u}_{2}^{\prime})\right)\boldsymbol{t}^{\prime}-\boldsymbol{h}(\boldsymbol{\xi}_{2}) &= \boldsymbol{0}, \qquad (4.16)
\end{aligned}

where each term in Eq. 4.16 denotes its respective discretized operator in the coupled PDE system. Subsequently, we formulate a modular multi-physics setup, as per Eq. 2.1, by separating the momentum and energy components of the coupled algebraic system. Let \boldsymbol{u}_{1}=\left[\boldsymbol{u}_{1}^{\prime};\boldsymbol{u}_{2}^{\prime};\boldsymbol{p}^{\prime}\right]\in\mathbb{R}^{n_{1}}\equiv\mathbb{R}^{3m^{2}}, \boldsymbol{u}_{2}=\boldsymbol{t}^{\prime}\in\mathbb{R}^{n_{2}}\equiv\mathbb{R}^{m^{2}} denote the respective solution variables in the modular algebraic system. Therefore, the component residuals are formulated as follows.

\begin{aligned}
\boldsymbol{f}_{1}(\boldsymbol{u}_{1};\boldsymbol{u}_{2},\boldsymbol{\xi}_{1}) &= \left[\begin{array}{ccc}\boldsymbol{K}_{u}+\boldsymbol{A}\left(\boldsymbol{u}_{1}^{\prime}(\boldsymbol{u}_{1}),\boldsymbol{u}_{2}^{\prime}(\boldsymbol{u}_{1})\right) & \boldsymbol{0} & \boldsymbol{B}_{1} \\ \boldsymbol{0} & \boldsymbol{K}_{u}+\boldsymbol{A}\left(\boldsymbol{u}_{1}^{\prime}(\boldsymbol{u}_{1}),\boldsymbol{u}_{2}^{\prime}(\boldsymbol{u}_{1})\right) & \boldsymbol{B}_{2} \\ \boldsymbol{C}_{1}\left(\boldsymbol{u}_{1}^{\prime}(\boldsymbol{u}_{1}),\boldsymbol{u}_{2}^{\prime}(\boldsymbol{u}_{1})\right) & \boldsymbol{C}_{2}\left(\boldsymbol{u}_{1}^{\prime}(\boldsymbol{u}_{1}),\boldsymbol{u}_{2}^{\prime}(\boldsymbol{u}_{1})\right) & \boldsymbol{K}_{p}\end{array}\right]\boldsymbol{u}_{1}-\left[\begin{array}{c}\boldsymbol{0} \\ \boldsymbol{R}(\boldsymbol{\xi}_{1}) \\ \boldsymbol{S}(\boldsymbol{\xi}_{1})\end{array}\right]\boldsymbol{u}_{2}, \\
\boldsymbol{f}_{2}(\boldsymbol{u}_{2};\boldsymbol{u}_{1},\boldsymbol{\xi}_{2}) &= \left(\boldsymbol{K}_{T}+\boldsymbol{A}\left(\boldsymbol{u}_{1}^{\prime}(\boldsymbol{u}_{1}),\boldsymbol{u}_{2}^{\prime}(\boldsymbol{u}_{1})\right)\right)\boldsymbol{u}_{2}-\boldsymbol{h}(\boldsymbol{\xi}_{2}). \qquad (4.17)
\end{aligned}

Table 4 lists the numerical values of the deterministic parameters used in this study. The quantities of interest are the probability density functions of the (scaled) kinetic energy K and thermal energy E, which are defined as follows. \forall\boldsymbol{\xi}\in\Xi,

K(\boldsymbol{\xi})=\frac{1}{2}\left(\int_{\Omega}u_{1}(\boldsymbol{x},\boldsymbol{\xi})^{2}\,d\boldsymbol{x}+\int_{\Omega}u_{2}(\boldsymbol{x},\boldsymbol{\xi})^{2}\,d\boldsymbol{x}\right), \qquad E(\boldsymbol{\xi})=\int_{\Omega}T(\boldsymbol{x},\boldsymbol{\xi})\,d\boldsymbol{x}, \qquad (4.18)

and the statistics of the fluid velocity and temperature.
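For cell-centered data on a uniform grid, the integrals in Eq. 4.18 reduce to midpoint-rule sums over the cells; a minimal sketch, where h denotes the cell width (an assumption of this sketch, not a symbol from the discretization above):

```python
import numpy as np

def energies(u1, u2, T, h):
    """Midpoint-rule evaluation of Eq. 4.18 on an m x m cell-centered grid
    with uniform cell area h**2."""
    K = 0.5 * h**2 * (np.sum(u1**2) + np.sum(u2**2))   # scaled kinetic energy
    E = h**2 * np.sum(T)                               # scaled thermal energy
    return K, E

# Constant unit fields on a 10 x 10 grid of cell width 0.1 give K = E = 1
K, E = energies(np.ones((10, 10)), np.ones((10, 10)), np.ones((10, 10)), 0.1)
```

Evaluating these functionals on samples of the gPC surrogate is what yields the probability density functions of K and E.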

#### 4.2.2 Modular deterministic solver: Setup and verification

Following the same procedure as in the previous numerical example, a modular deterministic solver was implemented, wherein the BGS iterations are formulated as follows.

\begin{aligned}
\boldsymbol{u}_{1}^{\ell+1} &= \boldsymbol{u}_{1}^{\ell}-\left(\frac{\partial\boldsymbol{f}_{1}}{\partial\boldsymbol{u}_{1}}\left(\boldsymbol{u}_{1}^{\ell};\boldsymbol{u}_{2}^{\ell},\boldsymbol{\xi}_{1}\right)\right)^{-1}\boldsymbol{f}_{1}\left(\boldsymbol{u}_{1}^{\ell};\boldsymbol{u}_{2}^{\ell},\boldsymbol{\xi}_{1}\right), \\
\boldsymbol{u}_{2}^{\ell+1} &= \boldsymbol{u}_{2}^{\ell}-\left(\frac{\partial\boldsymbol{f}_{2}}{\partial\boldsymbol{u}_{2}}\left(\boldsymbol{u}_{2}^{\ell};\boldsymbol{u}_{1}^{\ell+1},\boldsymbol{\xi}_{2}\right)\right)^{-1}\boldsymbol{f}_{2}\left(\boldsymbol{u}_{2}^{\ell};\boldsymbol{u}_{1}^{\ell+1},\boldsymbol{\xi}_{2}\right). \qquad (4.19)
\end{aligned}
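The structure of Eq. 4.19, one Newton step per module per BGS sweep, can be sketched on a scalar two-component system; the residuals below are illustrative stand-ins for \boldsymbol{f}_{1} and \boldsymbol{f}_{2}, not the discretized Boussinesq operators.

```python
import numpy as np

def bgs_newton(tol=1e-12, max_iter=200):
    """One Newton step per module per BGS sweep, mirroring Eq. 4.19."""
    def f1(u1, u2):  return u1**3 + u1 - 0.5 * np.cos(u2) - 1.0
    def df1(u1, u2): return 3.0 * u1**2 + 1.0            # df1 / du1
    def f2(u2, u1):  return 2.0 * u2 - np.sin(u1)
    def df2(u2, u1): return 2.0                          # df2 / du2

    u1 = u2 = 0.0
    for it in range(max_iter):
        u1 -= f1(u1, u2) / df1(u1, u2)                   # module 1 Newton step
        u2 -= f2(u2, u1) / df2(u2, u1)                   # module 2 sees updated u1
        res = max(abs(f1(u1, u2)), abs(f2(u2, u1)))      # coupled residual
        if res < tol:
            break
    return u1, u2, res, it
```

Because module 2 always uses the freshly updated module 1 state, the sweep is Gauss-Seidel rather than Jacobi, which typically improves the contraction rate of the coupled iteration.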

Similar formulations are used in the standard and reduced ISP method implementations, wherein the corresponding linear systems are solved using LSQR with mean-based preconditioners. Once again, in each implementation, the components of the solver were developed as \mathtt{MATLAB}^{\text{\texttrademark}} function modules.

#### 4.2.3 ISP based uncertainty propagation

We implemented both ISP based uncertainty propagation methods to compute the quantities of interest. Subsequently, for m=25, s_{1}=s_{2}=3 and p=4, the probability density functions of K and E, and the first two moments of u_{1},u_{2},T, were computed and compared. The results are shown in Figure 6, Figure 7 and Figure 8 respectively. Moreover, in the reduced ISP method implementation, the joint probability density functions of the reduced random variables were computed at the last iteration. The results are illustrated in Figure 9. The tolerance values used in this study are \epsilon_{1,\mathrm{dim}}=0.01,\epsilon_{2,\mathrm{dim}}=0.02 and \epsilon_{1,\mathrm{ord}}=\epsilon_{2,\mathrm{ord}}=10^{-4}.

For various values of s_{1}, s_{2} and p, the errors \varepsilon_{s},\varepsilon_{r} and overall costs \mathcal{C}_{s}, \mathcal{C}_{r} (wall-times) of both methods are compared in Table 5. In both ISP method implementations, the rate of convergence of the BGS iterations is observed to be invariant with respect to the stochastic dimensions s_{1},s_{2} and gPC order p. For \epsilon_{\mathrm{BGS}}=10^{-6}, convergence is achieved in 9 iterations.

The highest observed speedup factor is \approx 12.1. The approximation error in the standard ISP method decays exponentially, indicating a high degree of regularity in the solutions. Moreover, the asymptotic upper bound predicted for the approximation error in the reduced ISP method is observed as well.

## 5 Conclusions and future work

A reduced ISP based uncertainty propagation method for stochastic multi-physics models is presented, and compared against the standard ISP method using two numerical examples. At the expense of relatively small approximation errors, the reduced ISP method exhibited significant speedup over the standard ISP method. This can be primarily attributed to the dimension and order reduction steps for constructing an approximation of the input data in each module, which mitigates the rate of exponential growth of overall computational costs.

The two most significant overheads observed while implementing the reduced ISP algorithm were the intermediate SVD computation used to construct the reduced dimensional approximation space, and the computation of the global gPC coefficient matrices from the reduced and modular gPC coefficient matrices. Both overheads can be substantially reduced by exploiting the embarrassingly parallel nature of the computations involved. Moreover, alternatives to the BGS partitioning approach, for example, Schur complement based elimination, can be explored for reducing the overall computational costs.

Since the global gPC coefficients still need to be computed, manipulated and stored, the applicability of our proposed method is limited to models in which the global stochastic dimension is manageably low (<20). If a particular module contributes a large number of uncertainties to the global set, a Monte-Carlo based sampling approach can be employed in that module alone, while other modules could still afford the use of spectral methods. This method has been recently demonstrated in [43]. Moreover, for tackling models in which the solution regularity is low, the proposed method can also be easily adapted towards multi-element gPC [44] and discontinuous wavelet [45] based spectral representations.

Other areas currently under investigation include derivative and active-subspace based dimension reduction methods [46] as alternative reduction strategies. Moreover, studying the effects of uncertainties on the approximation errors in a multi-physics simulation context is a subject of future research.

## Acknowledgement

This research was funded by the US Department of Energy, Office of Advanced Computing Research and Applied Mathematics Program and partially funded by the US Department of Energy NNSA ASC Program.

## References

• [1] G. Casella and C. Robert, Monte Carlo statistical methods, Springer, New York, 1999.
• [2] P. Glasserman, Quasi-Monte Carlo, Monte Carlo Methods in Financial Engineering, Springer, New York, 2003, pp. 281-337.
• [3] H. Rauhut and R. Ward, Sparse Legendre expansions via \ell_{1}-minimization, J. Approx. Theory, 164.5 (2012), pp. 517-533.
• [4] J. Peng, J. Hampton, and A. Doostan, A weighted \ell_{1}-minimization approach for sparse polynomial chaos expansions, J. Comput. Phys., 267 (2014), pp. 92-111.
• [5] D. Xiu and G. Karniadakis, Modeling uncertainty in flow simulations via generalized polynomial chaos, J. Comput. Phys., 187.1 (2003), pp. 137-167.
• [6] H. Matthies, R. Niekamp and J. Steindorf, Algorithms for strong coupling procedures. Computer Methods Appl. Mech. Engrg., 195 (2006), pp. 2028-2049.
• [7] C. Felippa, K. Park and C. Farhat, Partitioned analysis of coupled mechanical systems. Computer Methods Appl. Mech. Engrg., 190 (2006), pp. 3247-3270.
• [8] P. Constantine, A. Doostan and G. Iaccarino, A hybrid collocation/Galerkin scheme for convective heat transfer problems with stochastic boundary conditions, International Journal for Numerical Methods in Engineering, 80.7 (2009), pp. 868-880.
• [9] X. Chen, B. Ng, Y. Sun and C. Tong, A flexible uncertainty quantification method for linearly coupled multi-physics systems, J. Comput. Phys., 248 (2013), pp. 383-401.
• [10] X. Chen, B. Ng, Y. Sun and C. Tong, A computational method for simulating subsurface flow and reactive transport in heterogeneous porous media embedded with flexible uncertainty quantification, Water Resources Research, 49 (2013), pp. 5740-5755.
• [11] M. Hadigol, A. Doostan, H. Matthies and R. Niekamp, Partitioned treatment of uncertainty in coupled domain problems: A separated representation approach, Computer Methods Appl. Mech. Engrg., 274 (2014), pp. 103-124.
• [12] P. Constantine, E. Phipps and T. Wildey, Efficient uncertainty propagation for network multi-physics systems, International Journal for Numerical Methods in Engineering, (2014).
• [13] M. Arnst, R. Ghanem, E. Phipps and J. Red-Horse, Dimension reduction in stochastic modeling of coupled problems, International Journal for Numerical Methods in Engineering, 92 (2012), pp. 940-968.
• [14] M. Arnst, R. Ghanem, E. Phipps and J. Red-Horse, Measure transformation and efficient quadrature in reduced-dimensional stochastic modeling of coupled problems, International Journal for Numerical Methods in Engineering, 92 (2012), pp. 1044-1080.
• [15] M. Arnst, R. Ghanem, E. Phipps and J. Red-Horse, Reduced chaos expansions with random coefficients in reduced-dimensional stochastic modeling of coupled problems, International Journal for Numerical Methods in Engineering, 97 (2014), pp. 352-376.
• [16] C. Soize and R. Ghanem, Reduced chaos decomposition with random coefficients of vector-valued random variables and random fields, Computer Methods Appl. Mech. Engrg., 198 (2009), pp. 1926-1934.
• [17] P. Davis, Interpolation and approximation, Dover Publications, 1975.
• [18] W. Gautschi, Orthogonal polynomials: applications and computation, Acta Numer., 5 (1996), pp. 45-119.
• [19] W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Comp., 3 (1982), pp. 289-317.
• [20] R. Cameron and W. Martin, The orthogonal development of non-linear functionals in series of Fourier-Hermite functionals, Ann. of Math., (1947), pp. 385-392.
• [21] B. Silverman, Density estimation for statistics and data analysis, CRC press, 1986.
• [22] I. Babuška, R. Tempone and G. Zouraris, Galerkin finite element approximations of stochastic elliptic partial differential equations, SIAM J. Numer. Anal., 42 (2004), pp. 800-825.
• [23] O. Ernst and U. Ellmann, Stochastic Galerkin matrices, SIAM J. Matrix Anal. App., 31 (2010), pp. 1848-1872.
• [24] K. Park, C. Felippa and R. Ohayon, Partitioned formulation of internal fluid–structure interaction problems by localized Lagrange multipliers, Computer Methods Appl. Mech. Engrg., 190.24 (2001), pp. 2989-3007.
• [25] H. Hartley, The modified Gauss-Newton method for the fitting of non-linear regression functions by least squares, Technometrics, 3 (1961), pp. 269-280.
• [26] J. More, The Levenberg-Marquardt algorithm: implementation and theory, in Numerical analysis, Springer, Heidelberg, 1978, pp. 105-116.
• [27] M. Joosten, W. Dettmer and D. Peric, Analysis of the block Gauss-Seidel solution procedure for a strongly coupled model problem with reference to fluid-structure interaction, International Journal for Numerical Methods in Engineering, 78.7 (2009), pp. 757-778.
• [28] F. Nobile, R. Tempone and C. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data, SIAM J. Numer. Anal., 46 (2008), pp. 2309-2345.
• [29] M. Loève, Probability Theory. Foundations. Random Sequences, D. Van Nostrand Company, New York, 1955.
• [30] K. Petersen and M. Pedersen, The matrix cookbook, Technical University of Denmark, (2008), pp. 7-15.
• [31] J. Nelder and R. Mead, A simplex method for function minimization, The Computer Journal, 7 (1965), pp. 308-313.
• [32] V. Tchakaloff, Formules de cubatures mécaniques à coefficients non négatifs, Bull. Sci. Math., 81 (1957), pp. 123-134.
• [33] J. Lamarsh, Introduction to nuclear reactor theory, Addison-Wesley, Reading, 1966.
• [34] T. Hughes, The finite element method: linear static and dynamic finite element analysis, Dover Publications, 2012.
• [35] P. Roache, Code verification by the method of manufactured solutions, Journal of Fluids Engineering, 124 (2002), pp. 4-10.
• [36] C. Paige and M. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8.1 (1982), pp. 43-71.
• [37] P. Constantine, D. Gleich, and G. Iaccarino, A Factorization of the Spectral Galerkin System for Parameterized Matrix Equations: Derivation and Applications, SIAM J. Matrix Anal. App., 33.5 (2011), pp. 2995-3009.
• [38] A. Saltelli, K. Chan and E. Scott, Sensitivity analysis, Wiley, New York, 2000.
• [39] L. Kantha and C. Clayson, Numerical models of oceans and oceanic processes, Academic press, 2000.
• [40] J. Massaguer and J. Zahn, Cellular convection in a stratified atmosphere, Astronom. and Astrophys. Lib., 87 (1980), pp. 315-327.
• [41] D. de Vahl, Laminar natural convection in an enclosed rectangular cavity, International Journal of Heat and Mass Transfer, 11 (1968), pp. 1675-1693.
• [42] R. LeVeque, Finite volume methods for hyperbolic problems, Cambridge University Press, 2002.
• [43] M. Arnst, C. Soize and R. Ghanem, A Hybrid Sampling/Spectral Method for Solving Stochastic Coupled Problems, SIAM Journal on Uncertainty Quantification, (2013), pp. 218-243.
• [44] X. Wan and G. Karniadakis, Multi-element generalized polynomial chaos for arbitrary probability measures, SIAM J. Sci. Comp., 28 (2006), pp. 901-928.
• [45] O Le Maitre, O. Knio, H. Najm, and R. Ghanem, Uncertainty propagation using Wiener-Haar expansions, J. Comput. Phys., 197 (2004), pp. 28-57.
• [46] T. Lukaczyk, F. Palacios, J. Alonso and P. Constantine, Active Subspaces for Shape Optimization, 10th AIAA Multidisciplinary Design Optimization Conference, Maryland, 2014.

## Appendix A - Karhunen-Loeve expansion

### Exponential covariance kernel

Given an n-dimensional spatial domain \Omega\subseteq\mathbb{R}^{n}, let C_{u}:\Omega\times\Omega\rightarrow\mathbb{R}^{+} denote the exponential covariance kernel of a spatially varying random field u:\Omega\rightarrow\mathbb{R}. Therefore, \forall\boldsymbol{x}=\left[x_{1}\ \cdots\ x_{n}\right]^{\mathbf{T}},\boldsymbol{y}=\left[y_{1}\ \cdots\ y_{n}\right]^{\mathbf{T}}\in\Omega,

 \displaystyle C_{u}\left(\boldsymbol{x},\boldsymbol{y}\right)=\exp\left(-\frac{\left\|\boldsymbol{x}-\boldsymbol{y}\right\|_{1}}{l}\right)=\prod_{j=1}^{n}\exp\left(-\frac{\left|x_{j}-y_{j}\right|}{l}\right), \qquad(A.1)

where l denotes the correlation length. Subsequently, we can define the KL expansion of u using an infinite set of random variables \left\{\xi_{\boldsymbol{j}}:\boldsymbol{j}\in\mathbb{N}^{n}\right\}, each having zero mean and unit variance, as follows. \forall\boldsymbol{x}\in\Omega,
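The product form in Eq. A.1 is what makes the multi-dimensional eigenproblem separable: the L1-norm kernel factorizes into one-dimensional exponential kernels along each coordinate. The following minimal sketch (pure Python; the sample points, correlation length, and tolerance are illustrative assumptions, not values from the paper) checks this factorization numerically:

```python
import math

def cov_1d(x, y, l):
    # one-dimensional exponential covariance factor exp(-|x - y| / l)
    return math.exp(-abs(x - y) / l)

def cov_nd(x, y, l):
    # n-dimensional kernel exp(-||x - y||_1 / l), as in Eq. A.1
    return math.exp(-sum(abs(a - b) for a, b in zip(x, y)) / l)

l = 0.5                      # correlation length (illustrative)
x = (0.2, 0.7, 0.1)          # sample points in a 3-dimensional domain
y = (0.9, 0.3, 0.4)

lhs = cov_nd(x, y, l)
rhs = 1.0
for a, b in zip(x, y):
    rhs *= cov_1d(a, b, l)   # product of per-coordinate factors

assert abs(lhs - rhs) < 1e-12   # the kernel separates coordinate-wise
```

Because of this separability, the n-dimensional eigenfunctions are products of one-dimensional ones, which is exactly the structure exploited in Eq. A.2.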

 \displaystyle u\left(\boldsymbol{x}\right)-\bar{u}\left(\boldsymbol{x}\right)=\sum_{\boldsymbol{j}\in\mathbb{N}^{n}}\gamma_{\boldsymbol{j}}\left(\boldsymbol{x}\right)\xi_{\boldsymbol{j}}=\sum_{j_{1}\in\mathbb{N}}\cdots\sum_{j_{n}\in\mathbb{N}}\gamma_{j_{1}\ldots j_{n}}\left(\boldsymbol{x}\right)\xi_{j_{1}\ldots j_{n}}=\sum_{j_{1}\in\mathbb{N}}\cdots\sum_{j_{n}\in\mathbb{N}}\prod_{k=1}^{n}g_{j_{k}}\left(x_{k}\right)\xi_{j_{1}\ldots j_{n}}, \qquad(A.2)

where \forall j>0, if \zeta_{j} solves

 l\zeta_{j}+\tan\left(\frac{\zeta_{j}}{2}\right)=0,

and \zeta_{j+1}>\zeta_{j}>0, then \forall x\in\mathbb{R},

 g_{j}\left(x\right)=\begin{cases}{\displaystyle 2\sqrt{\frac{l\zeta_{j}}{1+l^{2}\zeta_{j}^{2}}}\,\frac{\cos\left(\zeta_{j}x\right)}{\sqrt{\zeta_{j}+\sin\left(\zeta_{j}\right)}}}&j\ \mathrm{is\ odd},\\[2ex]{\displaystyle 2\sqrt{\frac{l\zeta_{j}}{1+l^{2}\zeta_{j}^{2}}}\,\frac{\sin\left(\zeta_{j}x\right)}{\sqrt{\zeta_{j}-\sin\left(\zeta_{j}\right)}}}&j\ \mathrm{is\ even}.\end{cases}

Therefore, as required in §4.1 and §4.2, a truncated KL expansion can easily be obtained from the single-index form of the expansion in Eq. A.2.
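The roots \zeta_{j} of the transcendental equation above must be computed numerically. Since l\zeta+\tan(\zeta/2) sweeps monotonically from -\infty to +\infty on each branch \zeta\in((2j-1)\pi,(2j+1)\pi) of the tangent, plain bisection on each bracket suffices. A sketch of this procedure (the bracketing intervals and tolerances are our assumptions; the paper does not specify a root-finding method):

```python
import math

def kl_root(j, l, tol=1e-12):
    """Find the j-th positive root zeta_j of l*zeta + tan(zeta/2) = 0.

    On the branch zeta in ((2j-1)*pi, (2j+1)*pi), tan(zeta/2) sweeps
    from -inf to +inf, so f(zeta) = l*zeta + tan(zeta/2) crosses zero
    exactly once there and bisection converges.
    """
    f = lambda z: l * z + math.tan(z / 2.0)
    lo = (2 * j - 1) * math.pi + 1e-9   # step off the tan singularities
    hi = (2 * j + 1) * math.pi - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

l = 1.0  # illustrative correlation length
zetas = [kl_root(j, l) for j in range(1, 5)]
# roots are positive, increasing, and satisfy the transcendental equation
assert all(z2 > z1 > 0 for z1, z2 in zip(zetas, zetas[1:]))
```

With the \zeta_{j} in hand, the eigenfunctions g_{j} follow directly from the closed form above, and the truncated expansion keeps only the leading terms.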

## Appendix B - Verification of modular deterministic solvers

### B1 - Thermal-Neutronics problem

Given the stochastic transmittance H_{T} and cross section \varSigma_{\varphi}, we choose the manufactured solutions T^{*},\varphi^{*}:\forall\boldsymbol{x}\in\Omega,\boldsymbol{\xi}\in\Xi,

 \displaystyle T^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=\left(2+\cos\left(\pi x_{1}\right)\right)^{2}-1,\qquad\varphi^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=3+\cos\left(2\pi x_{2}\right), \qquad(B.1)

which solve the modified system of equations: \forall\boldsymbol{x}\in\Omega,\boldsymbol{\xi}\in\Xi,

 \displaystyle\boldsymbol{\nabla}^{\mathbf{T}}\boldsymbol{\nabla}T\left(\boldsymbol{x},\boldsymbol{\xi}\right)-H_{T}\left(\boldsymbol{x},\boldsymbol{\xi}_{1}\left(\boldsymbol{\xi}\right)\right)T\left(\boldsymbol{x},\boldsymbol{\xi}\right)+\frac{E_{T}\varphi\left(\boldsymbol{x},\boldsymbol{\xi}\right)}{\sqrt{T\left(\boldsymbol{x},\boldsymbol{\xi}\right)+1}}+f_{T}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}_{1}\left(\boldsymbol{\xi}\right)\right)=0,
 \displaystyle\boldsymbol{\nabla}^{\mathbf{T}}\left(\sqrt{T\left(\boldsymbol{x},\boldsymbol{\xi}\right)+1}\,\boldsymbol{\nabla}\varphi\left(\boldsymbol{x},\boldsymbol{\xi}\right)\right)-\frac{\varSigma_{\varphi}\left(\boldsymbol{x},\boldsymbol{\xi}_{2}\left(\boldsymbol{\xi}\right)\right)\varphi\left(\boldsymbol{x},\boldsymbol{\xi}\right)}{\sqrt{T\left(\boldsymbol{x},\boldsymbol{\xi}\right)+1}}+f_{\varphi}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}_{2}\left(\boldsymbol{\xi}\right)\right)=0. \qquad(B.2)

Here, \forall\boldsymbol{x}\in\Omega,\boldsymbol{\xi}_{1}\in\Xi_{1},\boldsymbol{\xi}% _{2}\in\Xi_{2},

 \displaystyle f_{T}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}_{1}\right)=2\pi^{2}\left(2\cos\left(\pi x_{1}\right)+\cos\left(2\pi x_{1}\right)\right)+H_{T}\left(\boldsymbol{x},\boldsymbol{\xi}_{1}\right)\left(\left(2+\cos\left(\pi x_{1}\right)\right)^{2}-1\right)-E_{T}\frac{3+\cos\left(2\pi x_{2}\right)}{2+\cos\left(\pi x_{1}\right)},
 \displaystyle f_{\varphi}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}_{2}\right)=4\pi^{2}\left(2+\cos\left(\pi x_{1}\right)\right)\cos\left(2\pi x_{2}\right)+\varSigma_{\varphi}\left(\boldsymbol{x},\boldsymbol{\xi}_{2}\right)\frac{3+\cos\left(2\pi x_{2}\right)}{2+\cos\left(\pi x_{1}\right)}. \qquad(B.3)

Let \varepsilon^{m}\left(\boldsymbol{\xi}\right):\forall\boldsymbol{\xi}\in\Xi,

 \varepsilon^{m}\left(\boldsymbol{\xi}\right)=\sqrt{\frac{\int_{\Omega}\left\|\left[\begin{array}{c}T^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)-T^{m}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\\\varphi^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)-\varphi^{m}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\end{array}\right]\right\|_{2}^{2}d\boldsymbol{x}}{\int_{\Omega}\left\|\left[\begin{array}{c}T^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\\\varphi^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\end{array}\right]\right\|_{2}^{2}d\boldsymbol{x}}} \qquad(B.4)

denote the mean-square error between the exact and approximate solutions T^{m} and \varphi^{m} computed using m\times m nodes. Subsequently, with a convergence tolerance \epsilon_{\mathrm{BGS}}=10^{-6} and stochastic dimensions s_{1}=s_{2}=4, the sample average of \varepsilon^{m}, denoted here as \bar{\varepsilon}^{m}, was computed using 100 Monte-Carlo solution samples for various values of m. Figure B1 illustrates the expected second-order rate of decay in \bar{\varepsilon}^{m}.
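The second-order decay can be confirmed directly from such an error table: for a scheme of order p, doubling m (halving the mesh spacing) divides the error by 2^{p}, so p\approx\log_{2}(\bar{\varepsilon}^{m}/\bar{\varepsilon}^{2m}). A small sketch of this observed-order estimate (the error values below are synthetic illustrations, not the paper's data):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    # p such that err scales like C * h^p when h shrinks by `refinement`
    return math.log(err_coarse / err_fine) / math.log(refinement)

# synthetic errors behaving like C / m^2 on grids m = 16, 32, 64 (illustrative)
errors = {16: 4.1e-3, 32: 1.025e-3, 64: 2.5625e-4}

orders = [observed_order(errors[m], errors[2 * m]) for m in (16, 32)]
assert all(abs(p - 2.0) < 1e-9 for p in orders)   # recovers second order
```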

### B2 - Boussinesq flow problem

Following the same procedure in Appendix B1, we choose analytical functions u_{1}^{*},u_{2}^{*},p^{*},t^{*}:\forall\boldsymbol{x}\in\Omega,\boldsymbol{\xi% }\in\Xi,

 \displaystyle u_{1}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=-\sin^{2}\left(\pi x_{1}\right)\sin\left(2\pi x_{2}\right),\qquad u_{2}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=\sin\left(2\pi x_{1}\right)\sin^{2}\left(\pi x_{2}\right),
 \displaystyle p^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=\cos\left(\pi x_{1}\right)\cos\left(\pi x_{2}\right),\qquad t^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=\cos\left(\frac{\pi}{2}x_{1}\right)T_{h}\left(x_{2},\boldsymbol{\xi}_{2}\left(\boldsymbol{\xi}\right)\right), \qquad(B.5)

which solve the modified Boussinesq equations: \forall\boldsymbol{x}\in\Omega,\boldsymbol{\xi}\in\Xi,

 \displaystyle\boldsymbol{\nabla}^{\mathbf{T}}\boldsymbol{u}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=0,
 \displaystyle\left(\boldsymbol{u}\left(\boldsymbol{x},\boldsymbol{\xi}\right)^{\mathbf{T}}\boldsymbol{\nabla}\right)\boldsymbol{u}\left(\boldsymbol{x},\boldsymbol{\xi}\right)+\boldsymbol{\nabla}p\left(\boldsymbol{x},\boldsymbol{\xi}\right)-\mathrm{Pr}\,\boldsymbol{\nabla}^{\mathbf{T}}\boldsymbol{\nabla}\boldsymbol{u}\left(\boldsymbol{x},\boldsymbol{\xi}\right)-\mathrm{Pr}\,\mathrm{Ra}\left(\boldsymbol{x},\boldsymbol{\xi}_{1}\left(\boldsymbol{\xi}\right)\right)T\left(\boldsymbol{x},\boldsymbol{\xi}\right)\boldsymbol{e}_{2}+\boldsymbol{f}_{u}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=\boldsymbol{0},
 \displaystyle\left(\boldsymbol{u}\left(\boldsymbol{x},\boldsymbol{\xi}\right)^{\mathbf{T}}\boldsymbol{\nabla}\right)T\left(\boldsymbol{x},\boldsymbol{\xi}\right)-\boldsymbol{\nabla}^{\mathbf{T}}\boldsymbol{\nabla}T\left(\boldsymbol{x},\boldsymbol{\xi}\right)+f_{T}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=0. \qquad(B.6)

Here, \forall\boldsymbol{x}\in\Omega,\boldsymbol{\xi}\in\Xi,

 \displaystyle\boldsymbol{f}_{u}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=2\pi\left[\begin{array}{c}\sin\left(2\pi x_{1}\right)\sin^{2}\left(\pi x_{1}\right)\sin^{2}\left(2\pi x_{2}\right)\\\sin\left(2\pi x_{2}\right)\sin^{2}\left(\pi x_{2}\right)\sin^{2}\left(2\pi x_{1}\right)\end{array}\right]-\pi\left[\begin{array}{c}\sin\left(2\pi x_{1}\right)\sin^{2}\left(\pi x_{1}\right)\sin^{2}\left(2\pi x_{2}\right)-\sin\left(\pi x_{1}\right)\cos\left(\pi x_{2}\right)\\\sin\left(2\pi x_{2}\right)\sin^{2}\left(\pi x_{2}\right)\sin^{2}\left(2\pi x_{1}\right)-\sin\left(\pi x_{2}\right)\cos\left(\pi x_{1}\right)\end{array}\right]
 \displaystyle\qquad+2\pi^{2}\mathrm{Pr}\left[\begin{array}{c}\cos\left(2\pi x_{1}\right)\sin\left(2\pi x_{2}\right)-2\sin^{2}\left(\pi x_{1}\right)\sin\left(2\pi x_{2}\right)\\-\cos\left(2\pi x_{2}\right)\sin\left(2\pi x_{1}\right)+2\sin^{2}\left(\pi x_{2}\right)\sin\left(2\pi x_{1}\right)\end{array}\right]+\mathrm{Pr}\,\mathrm{Ra}\left(\boldsymbol{x},\boldsymbol{\xi}_{1}\left(\boldsymbol{\xi}\right)\right)\cos\left(\frac{\pi}{2}x_{1}\right)T_{h}\left(x_{2},\boldsymbol{\xi}_{2}\left(\boldsymbol{\xi}\right)\right)\boldsymbol{e}_{2},
 \displaystyle f_{T}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)=-\frac{\pi}{2}\left(\sin^{2}\left(\pi x_{1}\right)\sin\left(2\pi x_{2}\right)\sin\left(\frac{\pi}{2}x_{1}\right)+\frac{\pi}{2}\cos\left(\frac{\pi}{2}x_{1}\right)\right)T_{h}\left(x_{2},\boldsymbol{\xi}_{2}\left(\boldsymbol{\xi}\right)\right)
 \displaystyle\qquad-\sin\left(2\pi x_{1}\right)\cos\left(\frac{\pi}{2}x_{1}\right)\sin^{2}\left(\pi x_{2}\right)\frac{\partial T_{h}}{\partial x_{2}}\left(x_{2},\boldsymbol{\xi}_{2}\left(\boldsymbol{\xi}\right)\right)+\cos\left(\frac{\pi}{2}x_{1}\right)\frac{\partial^{2}T_{h}}{\partial x_{2}^{2}}\left(x_{2},\boldsymbol{\xi}_{2}\left(\boldsymbol{\xi}\right)\right). \qquad(B.7)

Let \varepsilon^{m}\left(\boldsymbol{\xi}\right):\forall\boldsymbol{\xi}\in\Xi,

 \varepsilon^{m}\left(\boldsymbol{\xi}\right)=\sqrt{\frac{\int_{\Omega}\left\|\left[\begin{array}{c}u_{1}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)-u_{1}^{m}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\\u_{2}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)-u_{2}^{m}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\\p^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)-p^{m}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\\t^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)-t^{m}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\end{array}\right]\right\|_{2}^{2}d\boldsymbol{x}}{\int_{\Omega}\left\|\left[\begin{array}{c}u_{1}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\\u_{2}^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\\p^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\\t^{*}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\end{array}\right]\right\|_{2}^{2}d\boldsymbol{x}}} \qquad(B.8)

denote the mean-square error between the exact and approximate solutions u_{1}^{m}, u_{2}^{m}, p^{m}, t^{m}, computed using m\times m cells.
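On a finite grid, the integrals in Eq. B.8 reduce to quadrature sums over the cells; with cell-centered values on uniform cells the relative error is simply a ratio of weighted sums of squares. A pure-Python sketch under those assumptions (midpoint quadrature and the field layout are our illustrative choices, not the paper's implementation):

```python
import math

def relative_l2_error(exact_fields, approx_fields, cell_area):
    """Discrete analogue of the error ratio in Eq. B.8.

    Each entry of `exact_fields` / `approx_fields` is a flat list of
    cell-centered values for one solution component (e.g. u1, u2, p, t).
    With uniform cells the cell area cancels in the ratio, but it is
    kept explicit for clarity.
    """
    num = den = 0.0
    for exact, approx in zip(exact_fields, approx_fields):
        for e, a in zip(exact, approx):
            num += cell_area * (e - a) ** 2
            den += cell_area * e ** 2
    return math.sqrt(num / den)

# tiny illustrative check: a 5% uniform perturbation of every component
exact = [[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]]
approx = [[1.05 * v for v in row] for row in exact]
err = relative_l2_error(exact, approx, cell_area=1.0 / 9)
assert abs(err - 0.05) < 1e-10   # uniform 5% perturbation -> 5% error
```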

Subsequently, the average error \bar{\varepsilon}^{m} was computed using 100 Monte-Carlo samples for various values of m, keeping the tolerance at \epsilon_{\mathrm{BGS}}=10^{-6} and stochastic dimensions s_{1}=s_{2}=4; the results are shown in Figure B2. As expected, a second-order rate of decay is observed in \bar{\varepsilon}^{m}.
