
# Block preconditioning of stochastic Galerkin problems: New two-sided guaranteed spectral bounds

*This version dated: April 30, 2019. Funding: The work of M. K. was supported by the Czech Academy of Sciences through the project L100861901 (Programme for promising human resources – postdocs) and by the Ministry of Education, Youth and Sports of the Czech Republic through the project LQ1602 (IT4Innovations excellence in science). The work of I. P. was supported by the Grant Agency of the Czech Republic under the contract No. 17-04150J.*

Marie Kubínová, Institute of Geonics of the CAS, Ostrava, Czech Republic (http://www.ugn.cas.cz/~kubinova/)    Ivana Pultarová, Faculty of Civil Engineering, Czech Technical University, Prague, Czech Republic (http://mat.fsv.cvut.cz/ivana/)
###### Abstract

The paper focuses on the numerical solution of parametrized second-order partial differential equations with a scalar parameter-dependent coefficient function by the stochastic (spectral) Galerkin method. We study preconditioning of the related discretized problems using preconditioners obtained by modifying the stochastic part of the partial differential equation. We present a simple but general approach for obtaining two-sided bounds to the spectrum of the resulting matrices, based on a particular splitting of the discretized operator. Using this tool and considering the stochastic approximation space formed by classical orthogonal polynomials, we obtain new spectral bounds depending solely on the properties of the coefficient function and the type of the approximation polynomials for several classes of block-diagonal preconditioners. These bounds are guaranteed and applicable to various distributions of parameters. Moreover, the conditions on the parameter-dependent coefficient function are only local, and therefore less restrictive than those usually assumed in the literature. The technique introduced in the paper can be employed also in a posteriori error estimation and in adaptive algorithms.

**Keywords:** stochastic Galerkin method, preconditioning, block-diagonal preconditioning, spectral bounds

**AMS subject classifications:** 65F08, 65N22

## 1 Introduction

Growing interest in uncertainty quantification of numerical solutions of partial differential equations stimulates new modifications of standard numerical methods. A popular choice for partial differential equations with parametrized or uncertain data is the stochastic Galerkin method [4, 35]. Similarly to deterministic problems, approximate solutions, which depend on physical and stochastic variables (parameters), are searched for in finite-dimensional subspaces of the original Hilbert space. More precisely, the approximate solutions are orthogonal projections of the exact solution to the finite-dimensional subspaces with respect to the energy inner product defined by the operator of the equation; see, e.g., [5, 9, 10, 23]. The approximation subspaces are considered in the form of a tensor product of a physical variable space (finite-element functions) and a stochastic variable space (polynomials); see, e.g., [4, 14]. The form and qualities of the system matrix $A$ of the discretized problem are determined by the structure of the uncertain data and the type of the finite-dimensional solution spaces. For special classes of parameters, it was shown, see, e.g., [23, 31], that certain block-diagonal matrices are spectrally equivalent to $A$ independently of the degree of polynomials and the number of random parameters, and thus they can be used for preconditioning. Having a good preconditioning method or, in other words, a good and feasible approximation of $A$, we may also efficiently estimate a posteriori the energy norm of the error during iterative solution processes [1, 5, 6, 9, 17]. This estimate can be used in adaptive algorithms [5, 7, 8]. In practice, the matrix $A$ is never built explicitly; only matrix-vector products are evaluated ([24]).

In this paper, we focus on matrices arising in the discretized stochastic Galerkin method and present new guaranteed two-sided bounds to the spectra of the preconditioned matrices for several types of preconditioners. We consider only preconditioning with respect to the stochastic parts of problems, and thus we assume that a suitable preconditioning method or an efficient solver for the underlying deterministic problem is available; see, e.g., [12, 22, 32]. We formulate an idea of obtaining bounds to the spectra of the preconditioned matrix from the spectra of small Gram matrices depending solely on the stochastic part of the approximation space. The motivation, however, comes from techniques and tools of the algebraic multilevel preconditioning introduced in [11, 2]. A similar idea was used, in a simpler form, already in [27, 28]. In the current paper, it is applied in a more general setting, and we believe that the derived technique may lead to an improvement of some other recently introduced estimates, such as [17, 22]. The derived technique is also applicable to systems in the form of a multi-term matrix equation (see [24, eq. (1.8)]).

The paper is organized as follows. In Section 2, we briefly recall the stochastic Galerkin method and the structure of the matrices of the resulting systems of linear equations for the tensor product polynomials and complete polynomials. Since the structure of $A$ plays a crucial role in the analysis, theoretical considerations will be accompanied by illustrative examples throughout the paper. Section 3 formulates a general concept of proving spectral equivalence for a broad class of (not only) stochastic Galerkin preconditioners. In Section 4, we apply this idea to preconditioners which are represented by a special type of block-diagonal (or Schur complement) approximations of $A$, and show how to obtain the spectral bounds of the preconditioned problems from the spectral bounds of small Gram matrices of the corresponding polynomial chaos. We also evaluate those bounds explicitly for the considered polynomial chaoses. Simple numerical examples demonstrating the obtained theoretical outcomes are presented at the end of the section.

Throughout the paper, we denote by $\kappa(M^{-1}A)$, where $A$ and $M$ are symmetric positive definite, the spectral condition number of the pair $(A,M)$, i.e., the standard condition number of $M^{-1}A$ or, in other words, the ratio $\lambda_{\max}(M^{-1}A)/\lambda_{\min}(M^{-1}A)$. By $e_i$ we denote the $i$-th column of the identity matrix, where its size follows from the context.

## 2 Stochastic Galerkin matrices

Consider the variational problem of finding $u\in\tilde V$ such that

$$\int_\Gamma\int_D a(x,\xi)\,\nabla u(x,\xi)\cdot\nabla v(x,\xi)\,\rho(\xi)\,\mathrm{d}x\,\mathrm{d}\xi=\int_\Gamma\int_D f(x)\,v(x,\xi)\,\rho(\xi)\,\mathrm{d}x\,\mathrm{d}\xi\quad\text{for all } v\in\tilde V, \tag{1}$$

where $D\subset\mathbb{R}^d$, $d=2$ or $3$, is a bounded polygonal domain and $\Gamma\subset\mathbb{R}^K$ is a parametric measure space. The gradient is applied only with respect to the (physical) variable $x$. Let $\xi=(\xi_1,\dots,\xi_K)\in\Gamma$, where $\xi_k$ are outcomes of independent random variables with probability densities $\rho_k$, $k=1,\dots,K$. The joint probability density is then $\rho(\xi)=\prod_{k=1}^K\rho_k(\xi_k)$. In the following, we consider $\rho_k$ defined on $\mathbb{R}$ such that $\rho_k=0$ outside $\Gamma_k$. Thus, instead of $\Gamma$ and $\Gamma_k$, we further write $\mathbb{R}^K$ and $\mathbb{R}$, respectively. For the convenience of notation, the probability densities are not normalized, see also Table 1, and we further refer to them as weights.

We assume that $a$ has the affine form

$$a(x,\xi)=a_0(x)+\sum_{k=1}^{K}a_k(x)\,\xi_k, \tag{2}$$

where $a_k\in L^\infty(D)$, $k=0,\dots,K$. While it is usually assumed that there exist constants $a_{\min}$ and $a_{\max}$ such that

$$0<a_{\min}\le a(x,\xi)\le a_{\max}<\infty\quad\text{for a.a. }x\in D,\ \xi\in\Gamma, \tag{3}$$

in this paper we consider more general functions $a$. We will only require that the left-hand side of Eq. 1 defines an inner product on a finite-dimensional approximation space $V$; see Section 2.3. This will allow us to use random variables with unbounded images and still obtain positive definite system matrices. In other words, we can avoid truncation of supports of distribution functions or any other modification of them. Of course, under such a (weaker) condition on $a$, Eq. 1 may not be well-defined. In this paper, however, we focus only on the discretized problem obtained from Eq. 1; see also the discussion in [23].

We consider discretization using the tensor product space [3, 4, 14] of the form $V=V_{FE}\otimes V_P$, where $V_{FE}$ is an $N_{FE}$-dimensional space spanned by the finite-element (FE) functions $\phi_1,\dots,\phi_{N_{FE}}$, and $V_P$ is an $N_P$-dimensional space spanned by $K$-variate polynomials $\Psi_1$, …, $\Psi_{N_P}$ of variables $\xi_1$, …, $\xi_K$. Denoting the basis functions of $V$ by a couple of coordinates $\phi_r$ and $\Psi_i$, we obtain the matrix $A$ of the system of linear equations of the discretized Galerkin problem Eq. 1 with elements

$$\begin{aligned}
A_{ri,sj} &= \int_{\mathbb{R}^K}\int_D a(x,\xi)\,\nabla\phi_s(x)\cdot\nabla\phi_r(x)\,\Psi_j(\xi)\,\Psi_i(\xi)\,\rho(\xi)\,\mathrm{d}x\,\mathrm{d}\xi\\
&= \int_D a_0(x)\,\nabla\phi_s(x)\cdot\nabla\phi_r(x)\,\mathrm{d}x\int_{\mathbb{R}^K}\Psi_j(\xi)\,\Psi_i(\xi)\,\rho(\xi)\,\mathrm{d}\xi\\
&\quad+\sum_{k=1}^{K}\int_D a_k(x)\,\nabla\phi_s(x)\cdot\nabla\phi_r(x)\,\mathrm{d}x\int_{\mathbb{R}^K}\xi_k\,\Psi_j(\xi)\,\Psi_i(\xi)\,\rho(\xi)\,\mathrm{d}\xi\\
&=: (F_0)_{rs}(G_0)_{ij}+\sum_{k=1}^{K}(F_k)_{rs}(G_k)_{ij},
\end{aligned}$$

where, for $k=1,\dots,K$,

$$(F_k)_{rs}=\int_D a_k(x)\,\nabla\phi_s(x)\cdot\nabla\phi_r(x)\,\mathrm{d}x\quad\text{and}\quad(G_k)_{ij}=\int_{\mathbb{R}^K}\xi_k\,\Psi_j(\xi)\,\Psi_i(\xi)\,\rho(\xi)\,\mathrm{d}\xi, \tag{4}$$

where we formally set $\xi_0\equiv 1$, so that Eq. 4 also covers $k=0$. If the numbering of the basis functions is anti-lexicographical, the structure of $A$ is

$$A=\sum_{k=0}^{K}G_k\otimes F_k. \tag{5}$$

In other words, the matrix $A$ is composed of $N_P\times N_P$ blocks, each of size $N_{FE}\times N_{FE}$.
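As a sanity check of this Kronecker structure, the following sketch (not from the paper; all matrices are small synthetic stand-ins, and `matvec_kron_sum` is our illustrative helper) applies $A=\sum_k G_k\otimes F_k$ to a vector without forming $A$, using the identity $(G\otimes F)\,\mathrm{vec}(V)=\mathrm{vec}(F V G^T)$:

```python
import numpy as np

def matvec_kron_sum(Gs, Fs, v):
    # Apply A = sum_k G_k (x) F_k to v without assembling A,
    # using (G (x) F) vec(V) = vec(F V G^T).
    n_fe = Fs[0].shape[0]
    n_p = Gs[0].shape[0]
    V = v.reshape(n_p, n_fe).T          # column j of V is the j-th FE block of v
    out = np.zeros_like(V)
    for G, F in zip(Gs, Fs):
        out += F @ V @ G.T
    return out.T.reshape(-1)

# Synthetic data: K = 1, two small symmetric FE matrices, G_0 = I,
# and G_1 the tridiagonal matrix from Example 1 truncated to size 2.
rng = np.random.default_rng(0)
F0 = 2.0 * np.eye(3)
F1 = 0.1 * np.eye(3)
G0 = np.eye(2)
G1 = np.array([[0.0, 1 / np.sqrt(3)], [1 / np.sqrt(3), 0.0]])
A = np.kron(G0, F0) + np.kron(G1, F1)
v = rng.standard_normal(6)
assert np.allclose(A @ v, matvec_kron_sum([G0, G1], [F0, F1], v))
```

This is how the matrix-vector products mentioned in the introduction can be evaluated at the cost of a few small dense multiplications per term instead of forming the full $N_{FE}N_P\times N_{FE}N_P$ matrix.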

###### Example 1

Assume $K=1$, the uniform distribution on $(-1,1)$, and let $\Psi_1$, $\Psi_2$, and $\Psi_3$ be the normalized Legendre orthogonal polynomials of degrees 0, 1, and 2; see Table 1. Then $G_0=I$ and

$$A=\begin{pmatrix}F_0 & \tfrac{1}{\sqrt{3}}F_1 & 0\\[2pt] \tfrac{1}{\sqrt{3}}F_1 & F_0 & \tfrac{2}{\sqrt{15}}F_1\\[2pt] 0 & \tfrac{2}{\sqrt{15}}F_1 & F_0\end{pmatrix}=I\otimes F_0+G_1\otimes F_1. \tag{6}$$

### 2.1 Approximation spaces and their bases

For the approximation of the physical part of the solution, we use an $N_{FE}$-dimensional space $V_{FE}$. To approximate the stochastic part of the solution, we use the $N_P$-dimensional space $V_P$ of $K$-variate polynomials $\Psi_i$, $i=1,\dots,N_P$. To simplify the notation, we assume that the parameters $\xi_1$, …, $\xi_K$ are identically distributed, i.e., $\rho_1=\dots=\rho_K$. Thus, we omit the superscripts and subscripts distinguishing the variables in the weights and in the univariate polynomials, respectively. The extension of the results to polynomial bases with different weights is straightforward.

In practice, sets of complete polynomials (C) or tensor product polynomials (TP) are usually used; see, e.g., [14, 23]. The set of tensor product polynomials of degree at most $s_k-1$ in variable $\xi_k$, $k=1,\dots,K$, is defined as

$$\mathcal{P}^{TP}_{s_1,\dots,s_K}=\Big\{p(\xi)=\prod_{k=1}^{K}p_k(\xi_k);\ \deg(p_k)\le s_k-1,\ k=1,\dots,K\Big\}\quad\text{and}\quad N_P=\prod_{k=1}^{K}s_k.$$

Let us denote by $V^{TP}$ the corresponding approximation space of Eq. 1. The set of complete polynomials of maximum total degree $s-1$ is defined as

$$\mathcal{P}^{C}_{s}=\Big\{p(\xi)=\prod_{k=1}^{K}p_k(\xi_k);\ \sum_{k=1}^{K}\deg(p_k)\le s-1\Big\}\quad\text{and}\quad N_P=\binom{K+s-1}{K}.$$

Let us denote by $V^{C}$ the corresponding approximation space of Eq. 1.
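The two dimension formulas can be checked with a few lines of Python (the function names are ours, for illustration only); the values below match Examples 2 and 3:

```python
from math import comb, prod

def dim_tensor_product(degrees):
    # N_P = prod_k s_k for tensor product polynomials with s_k basis
    # polynomials (degrees 0, ..., s_k - 1) in variable xi_k.
    return prod(degrees)

def dim_complete(K, s):
    # N_P = binom(K + s - 1, K) for complete polynomials of total
    # degree at most s - 1 in K variables.
    return comb(K + s - 1, K)

print(dim_tensor_product([3, 3]))  # 9: K = 2, degrees up to 2 in each variable
print(dim_complete(2, 3))          # 6: K = 2, total degree up to 2
```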

For both $V^{TP}$ and $V^{C}$, the bases are usually constructed as products of classical orthogonal polynomials. More precisely, $\Psi_j(\xi)=\psi_{j_1}(\xi_1)\cdots\psi_{j_K}(\xi_K)$, $j=1,\dots,N_P$, where $\psi_i$ are normalized orthogonal polynomials of degrees $i$ with respect to the weight function $\rho$, i.e.,

$$\int_{\mathbb{R}}\psi_i(\xi)\,\psi_j(\xi)\,\rho(\xi)\,\mathrm{d}\xi=\delta_{ij}. \tag{7}$$

The basis functions of the discretization space $V$ are then of the form

$$\phi_n(x)\,\Psi_j(\xi)=\phi_n(x)\,\psi_{j_1}(\xi_1)\cdots\psi_{j_K}(\xi_K). \tag{8}$$

For the tensor product polynomials, we consider the anti-lexicographical ordering of the basis functions, i.e., the leftmost index ($j_1$) in Eq. 8 is changing the fastest, while the rightmost index ($j_K$) is changing the slowest. For the complete polynomials, we consider ordering by the total degree of the polynomials, going from the smallest to the largest.

Another popular choice of the basis functions of $V_P$ is a set of double orthogonal polynomials. Polynomials $\nu_j$, $j=1,\dots,s$, are called double orthogonal [3, 4, 14] with respect to the weight function $\rho$ if, for $j,k=1,\dots,s$, $j\neq k$,

$$\int_{\mathbb{R}}\nu_j(\xi)\,\nu_k(\xi)\,\rho(\xi)\,\mathrm{d}\xi=0\quad\text{and}\quad\int_{\mathbb{R}}\xi\,\nu_j(\xi)\,\nu_k(\xi)\,\rho(\xi)\,\mathrm{d}\xi=0. \tag{9}$$

If $\psi_i$ are orthogonal polynomials of degrees $i$ with respect to $\rho$, then the double orthogonal polynomials are obtained as the Lagrange polynomials corresponding to the zeros of $\psi_s$. If we use the double orthogonal polynomials as a basis of $V_P$, the matrix $A$ becomes block-diagonal with diagonal blocks of size $N_{FE}\times N_{FE}$. Such a block-diagonal matrix can also be obtained by simultaneous diagonalization of all matrices $G_k$; see [14]. This diagonal structure of the resulting matrices seems favourable for practical computations. However, the double orthogonal polynomials cannot be used as a basis for complete polynomials [14]. Moreover, for this basis, we cannot obtain methods for a posteriori error estimation or adaptivity control in a straightforward way. In addition, to refine the space $V_P$, all diagonal blocks of the matrix must be recomputed, because a new set of zeros of $\psi_{s+1}$ (instead of the zeros of $\psi_s$), and thus a new set of basis polynomials, must be used. Therefore, in this paper, we only consider the classical orthogonal polynomials to construct the bases of $V^{TP}$ or $V^{C}$.

### 2.2 Matrices for classical orthogonal polynomials

The form of the matrices $G_k$, $k=0,\dots,K$, in Eq. 4 depends on the choice of the basis of $V^{TP}$ or $V^{C}$ and will be important for our future analysis. As will be described later, the matrices $G_k$ can be constructed from (the elements of) a sequence of smaller matrices

$$(G_{s,j})_{l+1,m+1}\equiv\int_{\mathbb{R}}\xi^j\,\psi_l(\xi)\,\psi_m(\xi)\,\rho(\xi)\,\mathrm{d}\xi,\qquad j=0,1,\quad l,m=0,1,\dots,s-1. \tag{10}$$

Let the normalized orthogonal polynomials $\psi_n$ satisfy the well-known three-term recurrence

$$\sqrt{\beta_{n+1}}\,\psi_{n+1}(\xi)=(\xi-\alpha_n)\,\psi_n(\xi)-\sqrt{\beta_n}\,\psi_{n-1}(\xi),\qquad n=0,1,2,\dots,\quad \psi_{-1}\equiv 0; \tag{11}$$

then $G_{s,0}=I_s$, where $I_s$ is the $s\times s$ identity matrix, and $G_{s,1}$ has the form of the Jacobi matrix

$$G_{s,1}=\begin{pmatrix}\alpha_0 & \sqrt{\beta_1} & & \\ \sqrt{\beta_1} & \alpha_1 & \ddots & \\ & \ddots & \ddots & \sqrt{\beta_{s-1}}\\ & & \sqrt{\beta_{s-1}} & \alpha_{s-1}\end{pmatrix}. \tag{12}$$

The eigenvalues of this matrix are given by the roots of the polynomial $\psi_s$, which are distinct and lie in the support of $\rho$; see, e.g., [15]. In Table 1, we list the classical orthogonal polynomials with symmetric statistical distribution considered here together with the weight function corresponding to the non-normalized probability density. Note that due to the symmetry, the diagonal entries $\alpha_n$ of $G_{s,1}$ in Eq. 12 become trivially zero. These matrices will play a crucial role in deriving spectral bounds; see Section 4.
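For illustration, the Jacobi matrix Eq. 12 can be assembled for the normalized Legendre polynomials from the standard recurrence coefficients $\alpha_n=0$, $\beta_n=n^2/(4n^2-1)$ (a known fact about Legendre polynomials, not stated in the paper); a quick check recovers the off-diagonal entries $1/\sqrt{3}$ and $2/\sqrt{15}$ of Example 1, and the eigenvalues coincide with the roots of the degree-3 Legendre polynomial, $0$ and $\pm\sqrt{3/5}$:

```python
import numpy as np

def jacobi_matrix_legendre(s):
    # G_{s,1} for normalized Legendre polynomials on (-1, 1):
    # alpha_n = 0 (symmetric weight), beta_n = n^2 / (4 n^2 - 1).
    n = np.arange(1, s)
    off = np.sqrt(n**2 / (4.0 * n**2 - 1.0))
    return np.diag(off, 1) + np.diag(off, -1)

G31 = jacobi_matrix_legendre(3)
print(G31)                               # off-diagonals 1/sqrt(3), 2/sqrt(15)
print(np.sort(np.linalg.eigvalsh(G31)))  # approx [-0.7746, 0, 0.7746]
```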

For the tensor product polynomials, the matrices $G_k$, $k=0,\dots,K$, are obtained as

$$\begin{aligned}
G_0 &= G_{s_K,0}\otimes G_{s_{K-1},0}\otimes\cdots\otimes G_{s_2,0}\otimes G_{s_1,0}, &&(13)\\
G_1 &= G_{s_K,0}\otimes G_{s_{K-1},0}\otimes\cdots\otimes G_{s_2,0}\otimes G_{s_1,1}, &&(14)\\
&\ \ \vdots &&(15)\\
G_K &= G_{s_K,1}\otimes G_{s_{K-1},0}\otimes\cdots\otimes G_{s_2,0}\otimes G_{s_1,0}, &&(16)
\end{aligned}$$

see, e.g., [14, 25].
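A small sketch of the Kronecker assembly Eqs. 13–16 (`stochastic_gram_matrices` is our illustrative helper; the rightmost factor corresponds to $\xi_1$, consistently with the anti-lexicographical ordering):

```python
import numpy as np
from functools import reduce

def stochastic_gram_matrices(G0_list, G1_list):
    # G0_list[k-1] = G_{s_k,0} (identity), G1_list[k-1] = G_{s_k,1}.
    # G_k replaces the k-th factor (rightmost factor = variable xi_1)
    # by the Jacobi matrix G_{s_k,1}; G_0 keeps all identities.
    K = len(G0_list)
    Gs = []
    for k in range(K + 1):
        factors = list(G0_list)
        if k >= 1:
            factors[k - 1] = G1_list[k - 1]
        # Kronecker product ordered G_{s_K} (x) ... (x) G_{s_1}.
        Gs.append(reduce(np.kron, reversed(factors)))
    return Gs

# K = 2 with s_1 = s_2 = 2 (Legendre polynomials up to degree 1).
I2 = np.eye(2)
J2 = np.array([[0.0, 1 / np.sqrt(3)], [1 / np.sqrt(3), 0.0]])
G0, G1, G2 = stochastic_gram_matrices([I2, I2], [J2, J2])
assert np.allclose(G0, np.eye(4))
assert np.allclose(G1, np.kron(I2, J2))
assert np.allclose(G2, np.kron(J2, I2))
```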

###### Example 2

Consider the tensor product Legendre polynomials of two variables $\xi_1$ and $\xi_2$ with $s_1=s_2=3$; then $N_P=9$ and the matrix $A$ has the form

$$A=\left(\begin{array}{ccc|ccc|ccc}
F_0 & \tfrac{1}{\sqrt3}F_1 & 0 & \tfrac{1}{\sqrt3}F_2 & 0 & 0 & 0 & 0 & 0\\
\tfrac{1}{\sqrt3}F_1 & F_0 & \tfrac{2}{\sqrt{15}}F_1 & 0 & \tfrac{1}{\sqrt3}F_2 & 0 & 0 & 0 & 0\\
0 & \tfrac{2}{\sqrt{15}}F_1 & F_0 & 0 & 0 & \tfrac{1}{\sqrt3}F_2 & 0 & 0 & 0\\\hline
\tfrac{1}{\sqrt3}F_2 & 0 & 0 & F_0 & \tfrac{1}{\sqrt3}F_1 & 0 & \tfrac{2}{\sqrt{15}}F_2 & 0 & 0\\
0 & \tfrac{1}{\sqrt3}F_2 & 0 & \tfrac{1}{\sqrt3}F_1 & F_0 & \tfrac{2}{\sqrt{15}}F_1 & 0 & \tfrac{2}{\sqrt{15}}F_2 & 0\\
0 & 0 & \tfrac{1}{\sqrt3}F_2 & 0 & \tfrac{2}{\sqrt{15}}F_1 & F_0 & 0 & 0 & \tfrac{2}{\sqrt{15}}F_2\\\hline
0 & 0 & 0 & \tfrac{2}{\sqrt{15}}F_2 & 0 & 0 & F_0 & \tfrac{1}{\sqrt3}F_1 & 0\\
0 & 0 & 0 & 0 & \tfrac{2}{\sqrt{15}}F_2 & 0 & \tfrac{1}{\sqrt3}F_1 & F_0 & \tfrac{2}{\sqrt{15}}F_1\\
0 & 0 & 0 & 0 & 0 & \tfrac{2}{\sqrt{15}}F_2 & 0 & \tfrac{2}{\sqrt{15}}F_1 & F_0
\end{array}\right)=\sum_{k=0}^{2}G_k\otimes F_k, \tag{17}$$

where the blocks corresponding to the changing degree of the approximation polynomials of the variable $\xi_2$ are separated graphically.

For complete polynomials, the matrices $G_k$ lose the Kronecker product structure, since $V^{C}$ is not a tensor product space. However, since $\mathcal{P}^{C}_{s}\subset\mathcal{P}^{TP}_{s,\dots,s}$, each matrix $G_k$ is permutation-similar to a submatrix of the matrices in Eq. 13, [14, Lemma 3].

###### Example 3

Consider the complete Legendre polynomials of two variables $\xi_1$ and $\xi_2$ and $s=3$; then $N_P=6$ and the relevant submatrix of the tensor-product matrix Eq. 17 is

$$A=\begin{pmatrix}
F_0 & \tfrac{1}{\sqrt3}F_1 & 0 & \tfrac{1}{\sqrt3}F_2 & 0 & 0\\
\tfrac{1}{\sqrt3}F_1 & F_0 & \tfrac{2}{\sqrt{15}}F_1 & 0 & \tfrac{1}{\sqrt3}F_2 & 0\\
0 & \tfrac{2}{\sqrt{15}}F_1 & F_0 & 0 & 0 & 0\\
\tfrac{1}{\sqrt3}F_2 & 0 & 0 & F_0 & \tfrac{1}{\sqrt3}F_1 & \tfrac{2}{\sqrt{15}}F_2\\
0 & \tfrac{1}{\sqrt3}F_2 & 0 & \tfrac{1}{\sqrt3}F_1 & F_0 & 0\\
0 & 0 & 0 & \tfrac{2}{\sqrt{15}}F_2 & 0 & F_0
\end{pmatrix}. \tag{18}$$

Reordering the entries by the total degree of the corresponding polynomial, we obtain

$$A=\left(\begin{array}{c|cc|ccc}
F_0 & \tfrac{1}{\sqrt3}F_1 & \tfrac{1}{\sqrt3}F_2 & 0 & 0 & 0\\\hline
\tfrac{1}{\sqrt3}F_1 & F_0 & 0 & \tfrac{2}{\sqrt{15}}F_1 & \tfrac{1}{\sqrt3}F_2 & 0\\
\tfrac{1}{\sqrt3}F_2 & 0 & F_0 & 0 & \tfrac{1}{\sqrt3}F_1 & \tfrac{2}{\sqrt{15}}F_2\\\hline
0 & \tfrac{2}{\sqrt{15}}F_1 & 0 & F_0 & 0 & 0\\
0 & \tfrac{1}{\sqrt3}F_2 & \tfrac{1}{\sqrt3}F_1 & 0 & F_0 & 0\\
0 & 0 & \tfrac{2}{\sqrt{15}}F_2 & 0 & 0 & F_0
\end{array}\right)=\sum_{k=0}^{2}G_k\otimes F_k, \tag{19}$$

where the blocks corresponding to the total degrees 0, 1, and 2 are separated graphically.

### 2.3 Positive definiteness

The left-hand side of the equation Eq. 1 defines a bilinear form on $\tilde V$. We present sufficient conditions on the function $a$, under which this bilinear form becomes an inner product (called energy inner product; see, e.g., [5, 9, 10]) on the finite-dimensional space $V$. To achieve positive definiteness of the bilinear form, we need to assume some dominance of the deterministic part $a_0$ over the stochastic parts $a_k$, $k=1,\dots,K$. In this paper, we will assume that there exists a constant $\bar\mu$ such that

$$\sum_{k=1}^{K}|a_k(x)|\le \bar\mu\, a_0(x)\quad\text{for a.a. }x\in D, \tag{20}$$

where the particular choice of $\bar\mu$ depends on the weight $\rho$. For the Beta distribution on $[-1,1]$, it suffices to take $\bar\mu=1$, while for the Gauss distribution, $\bar\mu$ is determined by the extreme zeros of the Hermite polynomials, with (in general different) values for tensor product and for complete polynomials. (Since the eigenvalues of the matrix $G_{s,1}$ are the zeros of the Hermite polynomial of degree $s$ and thus lie in a bounded interval [33, p. 120], the resulting matrices remain strictly positive definite.) Note that this choice of $\bar\mu$ also trivially implies that $A$ is positive definite. For further discussion on bounds of Hermite and Lagrange polynomials see, e.g., [23].

We emphasize that the assumption Eq. 20 is weaker than the classical assumption widely used to obtain spectral estimates, e.g.,

$$\sum_{k=1}^{K}\|a_k\|_{L^\infty(D)}\le \mu_{\mathrm{class}}\,\operatorname*{ess\,inf}_{x\in D}a_0(x) \tag{21}$$

for the uniform distribution; see [13, 17, 22, 23, 34]. The main difference between Eq. 20 and Eq. 21 is that the former is considered point-wise, while the latter uses the norms of $a_k$ over $D$. The condition Eq. 20 allows us to obtain not only more accurate two-sided guaranteed bounds to the spectra, but these bounds also apply to parameter distributions and functions $a_k$ for which no estimate could be obtained using the standard approach; see Section 4.4. Assumption Eq. 20 is sufficient to achieve positive definiteness of $A$. In some applications, we can assume a stronger dominance of $a_0$, i.e.,

$$\sum_{k=1}^{K}|a_k(x)|\le \mu\, a_0(x)\quad\text{for a.a. }x\in D,\qquad 0\le\mu\le\bar\mu. \tag{22}$$

The smaller the $\mu$, the more favourable spectral bounds of the matrix $A$ and of the preconditioned matrices are generally achieved. We will further assume that $\mu$ is the smallest number for which Eq. 22 is satisfied.

## 3 Proving spectral equivalence of inner products on V

We consider preconditioning methods based on inner products that are spectrally equivalent to the energy inner product on $V$, but are represented by matrices with more favourable non-zero structures such as, for example, block-diagonal matrices. We base our approach on a splitting of the inner products to subdomains (Lemma 1) and on a preconditioning of a tensor product matrix (Lemma 2).

Let $D$ be partitioned into $N_{\mathrm{elem}}$ arbitrary non-overlapping elements (subdomains) $\tau_j$, $j=1,\dots,N_{\mathrm{elem}}$. Consider the following decomposition of $A$ from Eq. 5

$$A=\sum_{k=0}^{K}G_k\otimes F_k=\sum_{j=1}^{N_{\mathrm{elem}}}\sum_{k=0}^{K}G_k\otimes F_k^{(j)}=:\sum_{j=1}^{N_{\mathrm{elem}}}A^{(j)}, \tag{23}$$

where

$$(F_k^{(j)})_{rs}=\int_{\tau_j}a_k(x)\,\nabla\phi_s(x)\cdot\nabla\phi_r(x)\,\mathrm{d}x\quad\text{and}\quad A^{(j)}=\sum_{k=0}^{K}G_k\otimes F_k^{(j)}. \tag{24}$$
###### Assumption 1

We further assume that the functions $a_k$, $k=0,\dots,K$ (and thus the function $a$), are constant on every element (subdomain) $\tau_j$, $j=1,\dots,N_{\mathrm{elem}}$. We define

$$a_k^{(j)}\equiv a_k(x),\quad x\in\tau_j,\quad j=1,\dots,N_{\mathrm{elem}}. \tag{25}$$

If $a_k$ are not constant on the elements, we would assume a stronger, element-wise dominance of $a_0$ over $a_k$, i.e.,

$$\sum_{k=1}^{K}\operatorname*{ess\,sup}_{x\in\tau_j}|a_k(x)|\le\mu\,\operatorname*{ess\,inf}_{x\in\tau_j}a_0(x),\qquad j=1,\dots,N_{\mathrm{elem}}, \tag{26}$$

instead of Eq. 22, which would result in a slight modification of the spectral estimates derived in subsequent sections. To simplify the presentation, we do not describe these modifications in more detail.

Using Eq. 25, we obtain

$$(F_k^{(j)})_{rs}=\int_{\tau_j}a_k(x)\,\nabla\phi_s(x)\cdot\nabla\phi_r(x)\,\mathrm{d}x=a_k^{(j)}\int_{\tau_j}\nabla\phi_s(x)\cdot\nabla\phi_r(x)\,\mathrm{d}x=:a_k^{(j)}(F^{(j)})_{rs}. \tag{27}$$

Therefore, we can write

$$A^{(j)}=\sum_{k=0}^{K}G_k\otimes F_k^{(j)}=\sum_{k=0}^{K}G_k\otimes a_k^{(j)}F^{(j)}=\Big(\sum_{k=0}^{K}a_k^{(j)}G_k\Big)\otimes F^{(j)}, \tag{28}$$

which gives

$$A=\sum_{j=1}^{N_{\mathrm{elem}}}A^{(j)}=\sum_{j=1}^{N_{\mathrm{elem}}}\Big(\sum_{k=0}^{K}a_k^{(j)}G_k\Big)\otimes F^{(j)}. \tag{29}$$

In other words, we obtained a decomposition of $A$ in which the dependence of the FE matrices on $a_k$ is compensated by the splitting of the operator to elements.
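The element-wise splitting Eq. 29 is a purely algebraic identity (bilinearity of the Kronecker product), which can be verified numerically with synthetic matrices standing in for $G_k$, $F^{(j)}$, and the element values $a_k^{(j)}$:

```python
import numpy as np

rng = np.random.default_rng(1)
K, n_p, n_fe, n_elem = 2, 3, 4, 5
Gs = [np.eye(n_p)] + [rng.standard_normal((n_p, n_p)) for _ in range(K)]
Gs = [0.5 * (G + G.T) for G in Gs]                 # symmetric stochastic matrices
Fj = [X @ X.T for X in (rng.standard_normal((n_fe, n_fe)) for _ in range(n_elem))]
a = rng.standard_normal((n_elem, K + 1))           # a_k^(j), constant per element

# Global form: A = sum_k G_k (x) F_k with F_k = sum_j a_k^(j) F^(j), cf. Eqs. (5), (27).
A = sum(np.kron(Gs[k], sum(a[j, k] * Fj[j] for j in range(n_elem)))
        for k in range(K + 1))
# Element-wise form: A = sum_j (sum_k a_k^(j) G_k) (x) F^(j), cf. Eq. (29).
A_split = sum(np.kron(sum(a[j, k] * Gs[k] for k in range(K + 1)), Fj[j])
              for j in range(n_elem))
assert np.allclose(A, A_split)
```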

In this paper, we consider preconditioners corresponding to an inner product defined on $V$ whose matrix representation (with respect to the same basis) is of the form analogous to Eq. 29, in particular

$$\begin{aligned}
M=\sum_{k=0}^{K}\tilde G_k\otimes\tilde F_k
&=\sum_{j=1}^{N_{\mathrm{elem}}}\sum_{k=0}^{K}\tilde G_k\otimes\tilde F_k^{(j)}
=\sum_{j=1}^{N_{\mathrm{elem}}}\sum_{k=0}^{K}\tilde G_k\otimes\tilde a_k^{(j)}F^{(j)} &&(30)\\
&=\sum_{j=1}^{N_{\mathrm{elem}}}\Big(\sum_{k=0}^{K}\tilde a_k^{(j)}\tilde G_k\Big)\otimes F^{(j)}
=:\sum_{j=1}^{N_{\mathrm{elem}}}M^{(j)}, &&(31)
\end{aligned}$$

where $\tilde G_k$ and $\tilde a_k^{(j)}$ are such that the matrices $M^{(j)}$ are positive semidefinite for all $j=1,\dots,N_{\mathrm{elem}}$. We will see later that many of the preconditioners that are used in practice are indeed of the form Eq. 30.

The following theorem shows that the spectral equivalence between $A$ and $M$ can be obtained from the spectral equivalence between $A^{(j)}$ and $M^{(j)}$ on each element $\tau_j$, $j=1,\dots,N_{\mathrm{elem}}$. The obtained spectral bounds do not depend on the type and the number of the FE basis functions.

###### Theorem 1

Let the matrices $A$ and $M$ be defined by Eq. 29 and Eq. 30, respectively, and let $\underline{c},\bar c>0$ be such that

$$\underline{c}\,v^T\Big(\sum_{k=0}^{K}\tilde a_k^{(j)}\tilde G_k\Big)v\le v^T\Big(\sum_{k=0}^{K}a_k^{(j)}G_k\Big)v\le \bar c\,v^T\Big(\sum_{k=0}^{K}\tilde a_k^{(j)}\tilde G_k\Big)v\quad\text{for all }v\in\mathbb{R}^{N_P}, \tag{32}$$

$j=1,\dots,N_{\mathrm{elem}}$. Then also

$$\underline{c}\,v^T M v\le v^T A v\le \bar c\,v^T M v\quad\text{for all }v\in\mathbb{R}^{N_{FE}N_P}. \tag{33}$$

The proof of Theorem 1 is based on the two following lemmas.

###### Lemma 1

Let $(\cdot,\cdot)_A$ and $(\cdot,\cdot)_M$ be two inner products on a Hilbert space $V$. Let the inner products be composed as

$$(u,v)_A=\sum_{j=1}^{N}(u,v)_{A,j},\qquad (u,v)_M=\sum_{j=1}^{N}(u,v)_{M,j},\qquad u,v\in V, \tag{34}$$

where $(\cdot,\cdot)_{A,j}$ and $(\cdot,\cdot)_{M,j}$, $j=1,\dots,N$, are positive semidefinite bilinear forms on $V$. Let there exist two positive real constants $\underline{c}$ and $\bar c$ such that the induced seminorms are uniformly equivalent in the following sense

$$\underline{c}\,(u,u)_{M,j}\le (u,u)_{A,j}\le \bar c\,(u,u)_{M,j}\quad\text{for all }u\in V,\ j=1,\dots,N. \tag{35}$$

Then the induced (cumulative) norms are also equivalent with the same constants, i.e.,

$$0<\underline{c}\,(u,u)_{M}\le (u,u)_{A}\le \bar c\,(u,u)_{M}\quad\text{for all }u\in V,\ u\neq 0. \tag{36}$$

###### Proof

The proof follows trivially from:

$$\underline{c}\,(u,u)_M\overset{(34)}{=}\underline{c}\sum_{j=1}^{N}(u,u)_{M,j}\overset{(35)}{\le}\sum_{j=1}^{N}(u,u)_{A,j}\overset{(34)}{=}(u,u)_A\overset{(35)}{\le}\bar c\sum_{j=1}^{N}(u,u)_{M,j}\overset{(34)}{=}\bar c\,(u,u)_M. \tag{37}$$

###### Remark 1

Lemma 1 can also be formulated in terms of matrices: If $A=\sum_{j=1}^{N}A_j$, $M=\sum_{j=1}^{N}M_j$, and $\underline{c}\,v^T M_j v\le v^T A_j v\le \bar c\,v^T M_j v$ for all $v$ and $j$, then $\underline{c}\,v^T M v\le v^T A v\le \bar c\,v^T M v$ for all $v$.
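A quick numeric illustration of this matrix form (synthetic data, not from the paper): choosing $A_j=c_j M_j$, the equivalence in Eq. 35 holds with $\underline{c}=\min_j c_j$ and $\bar c=\max_j c_j$, so the generalized eigenvalues of $(A,M)$ must lie in that interval:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 4, 6
Ms = [X @ X.T + 1e-3 * np.eye(n)
      for X in (rng.standard_normal((n, n)) for _ in range(N))]
cs = rng.uniform(0.5, 2.0, N)      # per-term ratios: A_j = c_j M_j
A = sum(c * M for c, M in zip(cs, Ms))
M = sum(Ms)
# Generalized eigenvalues of (A, M) via the symmetric reduction L^{-1} A L^{-T}.
L = np.linalg.cholesky(M)
B = np.linalg.solve(L, np.linalg.solve(L, A).T).T
eigs = np.linalg.eigvalsh(0.5 * (B + B.T))
print(cs.min() - 1e-8 <= eigs.min(), eigs.max() <= cs.max() + 1e-8)  # True True
```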

###### Lemma 2

Let $X,Y\in\mathbb{R}^{N_1\times N_1}$ be symmetric positive definite and $Z\in\mathbb{R}^{N_2\times N_2}$ be symmetric positive semidefinite. Let

$$\underline{c}\,v^T Y v\le v^T X v\le \bar c\,v^T Y v\quad\text{for all }v\in\mathbb{R}^{N_1} \tag{38}$$

hold for some positive real constants $\underline{c}$ and $\bar c$. Then also

$$\underline{c}\,u^T(Y\otimes Z)u\le u^T(X\otimes Z)u\le \bar c\,u^T(Y\otimes Z)u\quad\text{for all }u\in\mathbb{R}^{N_1 N_2}. \tag{39}$$

###### Proof

If $Z$ is invertible, then the proof follows trivially from

$$(Y\otimes Z)^{-1}(X\otimes Z)=(Y^{-1}\otimes Z^{-1})$$