# Low-rank computation of posterior covariance matrices in Bayesian inverse problems

Peter Benner Computational Methods in Systems and Control Theory Group, Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany (benner@mpi-magdeburg.mpg.de)    Yue Qiu Computational Methods in Systems and Control Theory Group, Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany (qiu@mpi-magdeburg.mpg.de)    Martin Stoll Numerical Linear Algebra for Dynamical Systems Group, Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany (stollm@mpi-magdeburg.mpg.de)
###### Abstract

We consider the problem of estimating the uncertainty in statistical inverse problems using Bayesian inference. When the probability densities of both the noise and the prior are Gaussian, the solution of such a statistical inverse problem is also Gaussian. Therefore, the underlying solution is characterized by the mean and covariance matrix of the posterior probability density. However, the covariance matrix of the posterior probability density is dense and large. Hence, its explicit computation is impossible for high-dimensional parameter spaces. It is shown that for many ill-posed problems, the Hessian matrix of the data misfit part has low numerical rank, and it is therefore possible to perform a low-rank approximation of the posterior covariance matrix. For such a low-rank approximation, one needs to solve a forward partial differential equation (PDE) and the adjoint PDE in both space and time. This in turn gives a complexity of $\mathcal{O}(n_x n_t)$ for both computation and storage, where $n_x$ is the dimension of the spatial discretization and $n_t$ the number of time steps. Such computation and storage demands are infeasible for large problems. To overcome this obstacle, we develop a new approach that utilizes a recently developed low-rank in time algorithm together with the low-rank Hessian method. We reduce both the computational complexity and the storage requirement from $\mathcal{O}(n_x n_t)$ to $\mathcal{O}(n_x + n_t)$. We use numerical experiments to illustrate the advantages of our approach.

Keywords: Bayesian inverse problems, PDE-constrained optimization, low-rank methods, space-time methods, preconditioning, matrix equations.

AMS subject classifications. 65F15, 65F10, 65F50, 93C20, 62F15

## 1 Introduction

Computational mathematicians dealing with simulations of large-scale discretizations describing physical phenomena have had tremendous success over the last decades. This has enabled scientists from various areas of engineering, chemistry, geophysics, etc. to ask more relevant and complex questions. One area that has seen a dramatic increase in the number of published results is the field of statistical inverse problems [36, 20, 8]. In particular, the consideration of partial differential equations (PDEs) as models in statistical inverse problems dramatically increases the problem complexity, as a refinement of the model in space and time results in an exponential increase in the problem degrees of freedom. By this we mean that a discretized problem is typically represented by a spatial system matrix where the number of degrees of freedom is typically $\mathcal{O}(h^{-d})$, with $d$ being the spatial dimension and $h$ the mesh size. It is easily seen that halving the mesh parameter $h$ means the matrix size will grow by a factor of $2^d$, depending on the spatial dimension. This complexity is further increased when the temporal dimension is incorporated.

While numerical analysis has provided many techniques that allow the efficient handling of such problems, e.g., Krylov methods [28] and multigrid techniques [15], we are faced with an even steeper challenge when uncertainty in the parameters of the model is incorporated. For this we consider the approach of Bayesian inverse problems, where the goal is to use prior knowledge to obtain information about the conditional mean or the posterior covariance given a set of measured data. While computing the conditional mean typically corresponds to a problem formulation frequently encountered in PDE-constrained optimization [37, 17, 6], the problem of computing the posterior covariance matrix is much more challenging, as this matrix is dense and involves the inverse of high-dimensional discretized PDE problems. In [12] and subsequent works, the authors proposed a low-rank approximation of the posterior covariance matrix. While this already reduces the complexity dramatically, the storage requirements for the resulting vectors still suffer from high dimensionality with respect to the spatial and temporal degrees of freedom $n_x$ and $n_t$. Our aim in this paper is to introduce a method based on low-rank techniques [35] that allows us to reduce the complexity from $\mathcal{O}(n_x n_t)$ to $\mathcal{O}(n_x + n_t)$.

In order to do this, we will first derive the basic problem following [12]. This will be followed by the presentation of a low-rank technique that we previously introduced for PDE-constrained optimization. We then use this to establish a low-rank eigenvalue method based on the classical Lanczos procedure [24] or Arnoldi procedure [29]. After introducing different choices of covariance matrices, we show that our approach can be theoretically justified. We then illustrate the applicability of our proposed methodology to a diffusion problem and a convection diffusion problem, and present numerical results illustrating the performance of the scheme.

## 2 Statistical/Bayesian Inverse Problems

We refer to [20, 36] for excellent introductions to the subject of statistical inverse problems. We follow [12, 7] in the derivation of our model setup and start with the parameter-to-observable map defined as

 Y=g(U,E), (1)

where $Y$, $U$, and $E$ are vectors of random variables. Note that here, $u$, our vector of model parameters to be recovered, is a realization of $U$; $e$ is a realization of the error vector $E$; and $y$ is a realization of the vector of observables $Y$. The vector $y_{\mathrm{obs}}$ contains the observed values. As discussed in [7], even when using the ‘true’ model parameters, the observables will differ from the measurements due to measurement noise and the inadequacy of the underlying PDE model.

In a typical application such as the one discussed later, evaluating $g$ requires the solution of a PDE, potentially coupled to an observation operator representing a domain of interest.

Bayes’ theorem, which plays a key role in Bayesian inference, is written as

 $\pi_{\mathrm{post}} := \pi(u\,|\,y_{\mathrm{obs}}) = \dfrac{\pi_{\mathrm{prior}}(u)\,\pi(y_{\mathrm{obs}}\,|\,u)}{\pi(y_{\mathrm{obs}})} \propto \pi_{\mathrm{prior}}(u)\,\pi(y_{\mathrm{obs}}\,|\,u),$ (2)

where we used the prior probability density function (PDF) $\pi_{\mathrm{prior}}(u)$, the likelihood function $\pi(y_{\mathrm{obs}}\,|\,u)$, and the data $y_{\mathrm{obs}}$. The function $\pi_{\mathrm{post}}$ is the posterior probability density function. The likelihood is derived under the assumption of additive noise

 Y=f(U)+E (3)

where $E$ is the additive noise with PDF $\pi_{\mathrm{noise}}(e)$. We once more follow [12], assuming that $U$ and $E$ are statistically independent, so that we can use

 πnoise(e)=πnoise(yobs−f(u)).

Therefore, Bayes’ theorem can be written as

 πpost∝πprior(u)πnoise(yobs−f(u)). (4)

Assuming that both probability density functions for $U$ and $E$ are Gaussian, we can rewrite the PDFs in the form

 $\pi_{\mathrm{prior}}(u)\propto\exp\left(-\tfrac{1}{2}(u-\bar{u}_{\mathrm{prior}})^T\Gamma_{\mathrm{prior}}^{-1}(u-\bar{u}_{\mathrm{prior}})\right),\qquad \pi_{\mathrm{noise}}(e)\propto\exp\left(-\tfrac{1}{2}(e-\bar{e})^T\Gamma_{\mathrm{noise}}^{-1}(e-\bar{e})\right),$ (5)

where $\bar{u}_{\mathrm{prior}}$ is the mean of the model parameter prior PDF and $\bar{e}$ is the mean of the noise PDF. We further have the two covariance matrices $\Gamma_{\mathrm{prior}}$ for the prior and $\Gamma_{\mathrm{noise}}$ for the noise. The Gaussian assumption allows us to rewrite Bayes’ theorem further to get

 $\pi_{\mathrm{post}}\propto\exp\left(-\tfrac{1}{2}(u-\bar{u}_{\mathrm{prior}})^T\Gamma_{\mathrm{prior}}^{-1}(u-\bar{u}_{\mathrm{prior}})-\tfrac{1}{2}(e-\bar{e})^T\Gamma_{\mathrm{noise}}^{-1}(e-\bar{e})\right)=\exp\left(-\tfrac{1}{2}\|u-\bar{u}_{\mathrm{prior}}\|^2_{\Gamma_{\mathrm{prior}}^{-1}}-\tfrac{1}{2}\|e-\bar{e}\|^2_{\Gamma_{\mathrm{noise}}^{-1}}\right).$ (6)

Let us further assume that the parameter-to-observable map is given as in (3) with $f(U)=AU$. The matrix $A$ represents a linear map from the parameters $u$ to the observables $y$. We will later see that this matrix often involves the inverse of a discretized representation of a PDE operator. In [33], the authors incorporate a quantity of interest operator into $A$. Therefore, it will typically be dense and very large. We now arrive at a restated version of Bayes’ theorem (4)

 $\pi_{\mathrm{post}}\propto\exp\left(-\tfrac{1}{2}\|u-\bar{u}_{\mathrm{prior}}\|^2_{\Gamma_{\mathrm{prior}}^{-1}}-\tfrac{1}{2}\|y_{\mathrm{obs}}-Au-\bar{e}\|^2_{\Gamma_{\mathrm{noise}}^{-1}}\right).$ (7)

From this relation we can express several relevant statistical quantities. For example, we can compute the maximum a posteriori (MAP) point, which is defined via

 ¯upost=argmaxuπpost(u) (8)

and to compute it, one can solve the following optimization problem

 ¯upost=argminu(12∥u−¯uprior∥2Γ−1prior+12∥yobs−Au−¯e∥2Γ−1noise).

Note that this problem is a deterministic inverse problem and resembles the structure one finds in PDE-constrained optimization problems [37, 19]. Many strategies are known for solving it efficiently, and in particular, underlying function space considerations help in the development of efficient numerical algorithms. An infinite-dimensional discussion of the above problem is given in [7, 36], and we only refer to the infinite-dimensional setup when needed. Our goal in this paper will not be the solution of the MAP problem. Low-rank methods for this case have recently been established in [35], and the techniques there are likely to be applicable, as the only difference is the use of the weighting matrices, which for the classical PDE-constrained optimization problem are mass matrices or matrices involving mass matrices. The more challenging question lies in the approximation of the posterior covariance matrix

 Γpost=(ATΓ−1noiseA+Γ−1prior)−1. (9)

The approximation of $\Gamma_{\mathrm{post}}$ is in general very costly and, without further approximation, intractable. The approach presented in [12, 7] computes a low-rank approximation to this matrix using the following relation

 $\Gamma_{\mathrm{post}}=\Gamma_{\mathrm{prior}}^{1/2}\left(\Gamma_{\mathrm{prior}}^{1/2}A^T\Gamma_{\mathrm{noise}}^{-1}A\,\Gamma_{\mathrm{prior}}^{1/2}+I\right)^{-1}\Gamma_{\mathrm{prior}}^{1/2}.$ (10)

The authors in [12, 7] then compute a low-rank approximation to the so-called prior-preconditioned Hessian of the data misfit

 ~Hmis=Γ1/2priorATΓ−1noiseAΓ1/2prior (11)

with the approximation

 ~Hmis≈VΛVT,

where $V$ and $\Lambda$ contain the dominant eigenvectors and eigenvalues, respectively. Using this approximation and the Sherman-Morrison-Woodbury formula [13], one obtains for the prior-preconditioned system

 (Γ1/2priorATΓ−1noiseAΓ1/2prior+I)−1≈(VΛVT+I)−1=I−V~ΛVT

with $\tilde{\Lambda}=\mathrm{diag}\left(\frac{\lambda_i}{\lambda_i+1}\right)$, where $\lambda_i$ is the $i$-th diagonal entry of $\Lambda$. A similar approach for

 $\left(A^T\Gamma_{\mathrm{noise}}^{-1}A+\Gamma_{\mathrm{prior}}^{-1}\right)^{-1}\approx\left(V\Lambda V^T+\Gamma_{\mathrm{prior}}^{-1}\right)^{-1}=\Gamma_{\mathrm{prior}}-\Gamma_{\mathrm{prior}}V\left(\Lambda^{-1}+V^T\Gamma_{\mathrm{prior}}V\right)^{-1}V^T\Gamma_{\mathrm{prior}},$ (12)

becomes a feasible alternative if $\Gamma_{\mathrm{prior}}$ can be applied in reasonable time.
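To make the low-rank update concrete, the following small NumPy sketch compares the directly inverted posterior covariance with its Sherman-Morrison-Woodbury form for an exactly rank-$k$ misfit Hessian. All matrices here are random stand-ins chosen for illustration, not quantities arising from a PDE.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 10

# Hypothetical rank-k misfit Hessian V diag(lam) V^T and a diagonal prior
V, _ = np.linalg.qr(rng.standard_normal((n, k)))   # dominant eigenvectors
lam = np.logspace(2, -1, k)                        # decaying eigenvalues
Gamma_prior = np.diag(rng.uniform(0.5, 1.5, n))

# Posterior covariance via direct inversion, cf. (9)
H = V @ np.diag(lam) @ V.T
Gamma_post_full = np.linalg.inv(H + np.linalg.inv(Gamma_prior))

# Same quantity via the Sherman-Morrison-Woodbury update
inner = np.linalg.inv(np.diag(1.0 / lam) + V.T @ Gamma_prior @ V)
Gamma_post_lr = Gamma_prior - Gamma_prior @ V @ inner @ V.T @ Gamma_prior

print(np.allclose(Gamma_post_full, Gamma_post_lr))  # True
```

Only the $k$-dimensional inner matrix is inverted in the low-rank form, while the direct variant inverts a dense $n \times n$ matrix.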

The main goal of this paper will be the derivation of an efficient scheme to approximate the matrix $\Gamma_{\mathrm{post}}$ from the low-rank approximation to the misfit Hessian. Before discussing this problem, we introduce an idea that becomes instrumental in realizing this and is motivated by a PDE-constrained optimization problem.

## 3 A Low-Rank Technique for PDE-Constrained Optimization

In order to better understand the statistical inverse problem, we investigate it in relation to a PDE-constrained optimization problem. We start the derivation of the low-rank in time method by considering an often-used model problem in PDE-constrained optimization (see [18, 19, 37]), the minimization of

 miny,u 12∥y−yobs∥2Q+β2∥u∥2P, (13)

with $Q$ and $P$ space-time cylinders. The constraint linking state $y$ and control $u$ of this problem is given by the heat equation with a distributed control term

 $y_t-\nabla^2 y=u \quad\text{in }\Omega,\qquad y=f \quad\text{on }\partial\Omega.$

Here $\Omega$ denotes the domain and $\partial\Omega$ corresponds to the boundary of the domain. For a more detailed discussion on well-posedness, existence of solutions, discretization, etc., we refer the interested reader to [18, 19, 37]. The solution of such an optimization problem is obtained using a Lagrangian approach and considering the first-order conditions, which for our problem results in a linear system of the form

 $\begin{bmatrix} D_1\otimes\tau M_1 & 0 & -(I_{n_t}\otimes L+C^T\otimes M) \\ 0 & D_2\otimes\beta\tau M_2 & D_3\otimes\tau N^T \\ -(I_{n_t}\otimes L+C\otimes M) & D_3\otimes\tau N & 0 \end{bmatrix} \begin{bmatrix} \mathbf{y}\\ \mathbf{u}\\ \mathbf{p} \end{bmatrix} = \begin{bmatrix} D_1\otimes\tau M_1\,\mathbf{y}_{\mathrm{obs}} \\ 0 \\ \mathbf{d} \end{bmatrix},$ (14)

where $D_1$, $D_2$, and $D_3$ come from the discretization of the temporal parts of the objective function and the right-hand side of the PDE constraint (cf. [5, 27, 34]). The matrices $M_1$ and $M_2$ are mass matrices corresponding to the observation and control domains. The matrix $N$ essentially represents the incorporation of the control into the constraint, i.e., $N$ is a mass matrix in the above example. The matrix $I_{n_t}\otimes L+C\otimes M$ represents the all-at-once discretization of the time derivative in the PDE and the discretized Laplacian. Here, the state, control, and adjoint state are represented by the following space-time vectors

 y=⎡⎢ ⎢⎣y1⋮ynt⎤⎥ ⎥⎦,u=⎡⎢ ⎢⎣u1⋮unt⎤⎥ ⎥⎦, and p=⎡⎢ ⎢⎣p1⋮pnt⎤⎥ ⎥⎦.

We point out again that the Kronecker product is defined as

 $W\otimes V=\begin{bmatrix} w_{11}V & \dots & w_{1m}V \\ \vdots & \ddots & \vdots \\ w_{n1}V & \dots & w_{nm}V \end{bmatrix}$

and remind the reader of the definition of the $\mathrm{vec}(\cdot)$ operator via

 $\mathrm{vec}(W)=\left[w_{11},\,\dots,\,w_{n1},\,\dots,\,w_{nm}\right]^T$

as well as the relation $(W\otimes V)\,\mathrm{vec}(X)=\mathrm{vec}(VXW^T)$.
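As a sanity check, the standard Kronecker-vec identity $(W\otimes V)\,\mathrm{vec}(X)=\mathrm{vec}(VXW^T)$, with the column-major vec convention, can be verified numerically; the matrices below are arbitrary random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4))
V = rng.standard_normal((5, 6))
X = rng.standard_normal((6, 4))

vec = lambda M: M.flatten(order="F")   # column-major vec, matching the definition above

lhs = np.kron(W, V) @ vec(X)           # (15 x 24) times length-24 vector
rhs = vec(V @ X @ W.T)                 # vec of a 5 x 3 matrix
print(np.allclose(lhs, rhs))  # True
```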

In [35], it was shown that the solution to the PDE-constrained optimization problem can be computed in low-rank form

 $Y=[y_1,y_2,\dots,y_{n_t}]\approx W_Y V_Y^T$ with $W_Y\in\mathbb{R}^{n_1\times k_1}$, $V_Y\in\mathbb{R}^{n_t\times k_1}$, $\quad U=[u_1,u_2,\dots,u_{n_t}]\approx W_U V_U^T$ with $W_U\in\mathbb{R}^{n_2\times k_2}$, $V_U\in\mathbb{R}^{n_t\times k_2}$, $\quad P=[p_1,p_2,\dots,p_{n_t}]\approx W_P V_P^T$ with $W_P\in\mathbb{R}^{n_1\times k_3}$, $V_P\in\mathbb{R}^{n_t\times k_3}$, (15)

where the ranks $k_1$, $k_2$, $k_3$ are small in comparison to the spatial and temporal dimensions. The authors in [35] illustrated that the low-rank structure of a right-hand side is maintained throughout a Krylov subspace iteration using the above-described representation. Low-rank techniques for Krylov subspace methods have recently received much attention, and we refer the reader to [21, 22, 1] and, for tensor-structured equations, to [14, 26, 9, 10].

We obtain a significant storage reduction if we can base our approximation of the solution on such low-rank factors. It is easily seen that, due to the low-rank nature of the factors, we have to perform fewer multiplications with the submatrices while also maintaining smaller storage requirements.

There are several similarities between the problem (14) and the statistical inverse problem presented earlier. It is clear that with the choice

 Γprior=(D2⊗βτM2)−1 and Γnoise=(D1⊗τM1)−1, (16)

the PDE-constrained optimization problem can be interpreted as a statistical inverse problem and the posterior covariance matrix is given by eliminating both state and adjoint state from the system matrix (14) to obtain a reduced Hessian system. Furthermore, it is clear that for many choices of prior and noise covariance, we can utilize the tensor structure to compute low-rank solutions. For this we state the posterior covariance matrix of the PDE optimization problem

 Γpost=[(D2⊗βτM2)+(D3⊗τNT)(Int⊗L+CT⊗M)−1(D1⊗τM1)(Int⊗L+C⊗M)−1(D3⊗τN)]−1 (17)

and the misfit Hessian

 $\tilde{H}_{\mathrm{mis}}=\Gamma_{\mathrm{prior}}^{1/2}\,(D_3\otimes\tau N^T)(I_{n_t}\otimes L+C^T\otimes M)^{-1}(D_1\otimes\tau M_1)(I_{n_t}\otimes L+C\otimes M)^{-1}(D_3\otimes\tau N)\,\Gamma_{\mathrm{prior}}^{1/2}.$ (18)

We keep this example in mind as we now discuss a low-rank technique to approximate the eigenvectors of the posterior covariance matrix in low-rank form. For this we propose a low-rank Krylov subspace method to compute the dominant eigenvectors and eigenvalues.

Before discussing the eigenvalue approximation strategy, we want to comment on the scaling of the PDE-constrained optimization problem in relation to the statistical inverse problem discussed in [12]. The authors there consider

 miny,u βnoise2∥y−yobs∥2Q+βprior2∥u∥2P, (19)

which in the simple case of full observations and control leads to the following rescaling of (16)

 Γprior=(D2⊗βpriorτM2)−1 and Γnoise=(D1⊗τβnoiseM1)−1. (20)

Assuming that $D_1=D_2=I_{n_t}$ and that the mass matrices are approximated by $M_1\approx h^d I$ and $M_2\approx h^d I$, we get

 $\Gamma_{\mathrm{prior}}^{-1}=\beta_{\mathrm{prior}}\,\tau h^d\,I \quad\text{and}\quad \Gamma_{\mathrm{noise}}^{-1}=\beta_{\mathrm{noise}}\,\tau h^d\,I.$ (21)

In PDE-constrained optimization, one typically reduces $\beta$ in (13) to allow for a more expensive control that drives the state closer to the desired state. In the statistical inverse setting, this corresponds to decreasing the ratio $\beta_{\mathrm{prior}}/\beta_{\mathrm{noise}}$, which implies that in (9) the role of the prior covariance gets diminished and most contributions come from the noise. We make similar observations for the statistical inverse problems in this manuscript; these will be shown by the analysis in Section 5 and the numerical experiments in Section 6. The right choice of the parameters $\beta_{\mathrm{prior}}$ and $\beta_{\mathrm{noise}}$ depends on the underlying application, and we refer to [8] for a discussion of their roles as regularization parameters.

## 4 Low-rank Lanczos/Arnoldi Method

We recall that our goal is to find a low-rank approximation of the eigenvectors of the posterior covariance matrix. That is, we want to compute an approximation $\tilde{H}_{\mathrm{mis}}\approx V\Lambda V^T$ with $V\in\mathbb{R}^{n\times k}$ and $k$ much smaller than the dimension $n$ of $\tilde{H}_{\mathrm{mis}}$. For the PDE-constrained optimization problem, $n=n_x n_t$.

Our main assumption at this point is that storing each Arnoldi vector, and especially a number of such space-time vectors, can pose serious problems. Additionally, in order to perform the matrix-vector multiplication with $\tilde{H}_{\mathrm{mis}}$, a large number of PDE solutions need to be computed. For this we point out that in order to apply the matrix in an Arnoldi procedure, we need to solve the spatial system over the whole time domain. A major advantage of our approach is motivated by the fact that

 $v_j=\mathrm{vec}(V_j)\quad\forall j=1,\dots,k \quad\text{with } V_j\in\mathbb{R}^{n_x\times n_t},$

which we assume is well approximated via

 Vj≈Wj,1WTj,2 (22)

with $W_{j,1}\in\mathbb{R}^{n_x\times r_j}$, $W_{j,2}\in\mathbb{R}^{n_t\times r_j}$, and $r_j$ small. If the Arnoldi or Lanczos vectors are of this form, then the application of the matrix to such vectors requires fewer PDE solves than in the full case.

Note that we need to compute the dominant eigenvectors of the prior-preconditioned data misfit Hessian $\tilde{H}_{\mathrm{mis}}$ (11); therefore, the Lanczos method could be used. At the $j$-th Lanczos iteration, we need to perform the matrix-vector product $\tilde{H}_{\mathrm{mis}}v_j$ using the low-rank approach to get a form like (22), where $v_j$ is the $j$-th Lanczos vector. Due to the low-rank approximation, the orthogonality of the Lanczos vectors is lost, and reorthogonalization has to be used. Meanwhile, the symmetry of $\tilde{H}_{\mathrm{mis}}$ cannot be preserved for the low-rank form of the matrix-vector product. Therefore, we make use of the more general Arnoldi method to compute the dominant eigenvectors of $\tilde{H}_{\mathrm{mis}}$. We also observe that when applying the Arnoldi method with the truncation error appropriately chosen, we still get real eigenvalues. This will be shown in Section 6.

We now briefly recall the Arnoldi method, which is the more general procedure, and refer to [13] for details. The Arnoldi process for a matrix $B$ can be written as

 BVk=Vk+1Hk+1,k,

where $V_{k+1}$ consists of orthonormal columns and $H_{k+1,k}$ is a Hessenberg matrix. The iterative build-up of the columns of $V_{k+1}$ is captured by the recursion

 ~vk+1=Bvk−k∑i=1hi,kvi, (23)

where $h_{i,k}=v_i^T B v_k$. The vector $\tilde{v}_{k+1}$ is then normalized using the scalar $h_{k+1,k}=\|\tilde{v}_{k+1}\|$. While this is well known, our goal here is to illustrate how this method is amenable to use within a low-rank framework. For the Arnoldi process considered in this manuscript, $B=\tilde{H}_{\mathrm{mis}}$, and the application of $B$ to $v_k$ results in a low-rank matrix, i.e.,

 Bvk=vec(W1,BWT2,B)

with small rank. This is because $B$ involves the inverse of a PDE operator in space and time, and it was shown in [35] that applying such an operator to a low-rank vector again gives a low-rank vector. We can then write the right-hand side of (23) as

 $Bv_k-\sum_{i=1}^{k}h_{i,k}v_i=\mathrm{vec}(W_{1,B}W_{2,B}^T)-\sum_{i=1}^{k}h_{i,k}\,\mathrm{vec}(W_{1,i}W_{2,i}^T)$ (24)

and, for a three-term (Lanczos-type) recurrence with $\alpha_k=h_{k,k}$ and $\beta_k=h_{k-1,k}$, write the last expression as

 $\mathrm{vec}\left([W_{1,B},\,-\alpha_k W_{1,k},\,-\beta_k W_{1,k-1}]\,[W_{2,B},\,W_{2,k},\,W_{2,k-1}]^T\right).$

The size of the matrix

 $[W_{1,B},\,-\alpha_k W_{1,k},\,-\beta_k W_{1,k-1}]\in\mathbb{R}^{n_x\times(r_B+r_k+r_{k-1})}$

shows that the rank is increased to $r_B+r_k+r_{k-1}$. Using truncation techniques, this growth can typically be controlled; for example, one can truncate by utilizing skinny QR factorizations [21] or truncated singular value decompositions [35].
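As an illustration of this truncation step, the following sketch recompresses concatenated low-rank factors with skinny QR factorizations followed by a truncated SVD of the small core. The factor sizes and tolerance are arbitrary illustrative choices, not the values used in the paper.

```python
import numpy as np

def truncate(W1, W2, tol=1e-8):
    """Recompress the product W1 @ W2.T given stacked low-rank factors."""
    Q1, R1 = np.linalg.qr(W1)              # skinny QR of each factor
    Q2, R2 = np.linalg.qr(W2)
    U, s, Vt = np.linalg.svd(R1 @ R2.T)    # SVD of the small core matrix
    r = max(1, int(np.sum(s > tol * s[0])))  # numerical rank w.r.t. tol
    return Q1 @ U[:, :r] * s[:r], Q2 @ Vt[:r].T

rng = np.random.default_rng(3)
# Concatenated factors after an update step: nominal rank 5, true rank 3
A1, A2 = rng.standard_normal((40, 3)), rng.standard_normal((30, 3))
W1 = np.hstack([A1, A1[:, :2]])
W2 = np.hstack([A2, A2[:, :2]])

T1, T2 = truncate(W1, W2)
print(T1.shape[1])                        # 3
print(np.allclose(T1 @ T2.T, W1 @ W2.T))  # True
```

The recompressed factors reproduce the same product while shrinking the stored rank back to its numerical value.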

In our numerical experiments, we show that maintaining the orthogonality of the basis vectors is crucial. Since we use low-rank approximations of the Lanczos vectors, the orthogonality and the symmetry of the Lanczos process are lost. We hence opt for the more expensive but stable Arnoldi process, where we orthogonalize with respect to all previous vectors. The full reorthogonalization also demands more storage than in the Lanczos case, which is another point in favor of our low-rank approach: with full reorthogonalization, the storage costs increase for both the full and the low-rank scheme, but in the low-rank framework they stay significantly below those of the full scheme. We give the low-rank Arnoldi method in Algorithm 1.
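For reference, a dense textbook Arnoldi process with full modified Gram-Schmidt orthogonalization, of the kind that Algorithm 1 carries over to low-rank factors, can be sketched as follows; the matrix $B$ here is a random stand-in.

```python
import numpy as np

def arnoldi(B, v0, k):
    """Basic Arnoldi process: B V_k = V_{k+1} H_{k+1,k} (modified Gram-Schmidt)."""
    n = v0.size
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = B @ V[:, j]
        for i in range(j + 1):           # orthogonalize against ALL previous vectors
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(2)
B = rng.standard_normal((50, 50))
V, H = arnoldi(B, rng.standard_normal(50), 20)
print(np.allclose(B @ V[:, :20], V @ H))  # Arnoldi relation holds
print(np.allclose(V.T @ V, np.eye(21)))   # basis is orthonormal
```

In the low-rank variant, each column of `V` is additionally kept in factored form and truncated after every step.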

We note that the biggest challenge for the low-rank Arnoldi method is to perform the low-rank matrix-vector product with $\tilde{H}_{\mathrm{mis}}$ in Algorithm 1, since $\tilde{H}_{\mathrm{mis}}$ is large and dense. We propose the tensor-train (TT) format in Section 6 to perform such computations efficiently. The full orthogonalization procedure in Algorithm 1 is also performed in the TT format.

We use the standard Arnoldi method for the low-rank eigenvector computations in Algorithm 1. This is practical for the problems studied in this manuscript, since we only need to compute up to a few hundred Arnoldi vectors. Since the computational complexity of full orthogonalization increases with the number of Arnoldi vectors, a low-rank version of the restarted Arnoldi method [13] can be implemented if more Arnoldi vectors are needed.

## 5 Analysis of the Eigenfunctions

The eigenfunction analysis for the general case presented above is not straightforward. Our goal in this section is to give a theoretical justification for simple cases. We start with a steady-state problem involving the two-dimensional Poisson equation. For this we consider the misfit Hessian

 ~Hmis=Γ1/2priorATΓ−1noiseAΓ1/2prior.

Assuming for now that $\Gamma_{\mathrm{prior}}=\beta_{\mathrm{prior}}^{-1}I$ and $\Gamma_{\mathrm{noise}}=\beta_{\mathrm{noise}}^{-1}I$, we are left with

 $\tilde{H}_{\mathrm{mis}}=\frac{\beta_{\mathrm{noise}}}{\beta_{\mathrm{prior}}}\,A^TA.$

Note that we have assumed the data misfit Hessian to be defined in function space, with the prior and noise covariance operators being multiples of the identity operator. Our goal is to find eigenfunctions of $A^TA$, with $A$ the inverse of the PDE operator and $A^T$ the inverse of the adjoint PDE operator. For this we use the eigenfunctions of the two-dimensional Laplacian operator satisfying

 −Δy=λy

with zero Dirichlet boundary conditions (see [23]). Note that for this problem, $A=A^T$. We here consider the eigenvalue problem on a rectangular domain $[0,a]\times[0,b]$ and state the eigenfunctions as

 ym,n=sin(mπx1a)sin(nπx2b)

with $0\le x_1\le a$ and $0\le x_2\le b$. The associated eigenvalues are given by

 λm,n=π2[(ma)2+(nb)2]

with $m,n\in\mathbb{N}$ (see [23]). Coming back to the misfit Hessian, we can write it as

 $\tilde{H}_{\mathrm{mis}}=\frac{\beta_{\mathrm{noise}}}{\beta_{\mathrm{prior}}}\,A^2$

with $A$ the inverse Laplacian. Assuming zero Dirichlet boundary conditions, it holds that

 ~Hmisym,n=μm,nym,n

is satisfied for the eigenfunctions of the Laplacian

 ym,n(x1,x2)=sin(mπx1a)sin(nπx2b)

with the eigenvalues given by

 $\mu_{m,n}=\frac{\beta_{\mathrm{noise}}}{\beta_{\mathrm{prior}}}\,\lambda_{m,n}^{-2},$

where the decay of $\mu_{m,n}$ is quite rapid. This justifies the approximation of $\tilde{H}_{\mathrm{mis}}$ by a small number of eigenfunctions. In [12], the authors give a similar justification for a one-dimensional convection-diffusion equation.
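The rapid decay can be checked directly from the formula for $\mu_{m,n}$; the sketch below takes the unit square ($a=b=1$) and, purely for illustration, sets the ratio $\beta_{\mathrm{noise}}/\beta_{\mathrm{prior}}$ to one.

```python
import numpy as np

# Eigenvalues mu_{m,n} = lambda_{m,n}^{-2} with lambda_{m,n} = pi^2 (m^2 + n^2)
M = 30
m, n = np.meshgrid(np.arange(1, M + 1), np.arange(1, M + 1))
lam = np.pi**2 * (m**2 + n**2)
mu = np.sort((1.0 / lam**2).ravel())[::-1]   # eigenvalues, sorted descending

# Ratio of the largest to the 10th eigenvalue: (17/2)^2 = 72.25
print(mu[0] / mu[9])
# A handful of modes already carries most of the spectrum's mass
print(mu[:20].sum() / mu.sum() > 0.9)  # True
```

The quartic decay in $m$ and $n$ is what makes a low-rank approximation of $\tilde{H}_{\mathrm{mis}}$ with few eigenfunctions effective.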

The goal of our approach is to add even more low-rankness to the computation. We want to illustrate that the eigenfunctions used to represent the misfit Hessian are of small rank. When studying the eigenfunctions above it is clear that the eigenfunction

 ym,n(x1,x2)=sin(mπx1a)sin(nπx2b)

is already separated into its $x_1$ and $x_2$ components with separation rank $1$ (cf. [16]), since it is the product of two functions, one depending on the first and the other on the second variable. For this case we have established the following lemma.

###### Lemma

The eigenfunctions of the misfit Hessian

 ~Hmis=βpriorβnoiseA2

with $A$ the inverse Laplacian with zero Dirichlet conditions defined on the rectangle $[0,a]\times[0,b]$, are separable and given by

 ym,n(x1,x2)=sin(mπx1a)sin(nπx2b)

and hence are of separation rank $1$.

It is not so straightforward to establish similar results for more complicated equations.

For the space-time PDE-constrained optimization problem discussed earlier, we note that

 VDVT≈α(h,τ,β)(Int⊗L+CT⊗I)−1(Int⊗L+C⊗I)−1,

where we used the mass lumping $M\approx h^d I$ and collected all scalars in $\alpha(h,\tau,\beta)$. Our aim is to establish eigenvalue and eigenvector results for

 (Int⊗L+C⊗I)(Int⊗L+CT⊗I).

We note that this fits the well-known relation that the singular values of a matrix are the square roots of the eigenvalues of $A^TA$, which, assuming full rank of $A$, is a symmetric positive definite matrix. Now, assuming the SVDs

 C=UCΣCVTC and L=ULΣLVTL,

we obtain

 $\underbrace{(I_{n_t}\otimes U_L\Sigma_L V_L^T+V_C\Sigma_C U_C^T\otimes I)}_{A}=\underbrace{(U_L\otimes V_C)}_{U}\,\underbrace{(I_{n_t}\otimes\Sigma_L+\Sigma_C\otimes I)}_{\Sigma}\,\underbrace{(V_L^T\otimes U_C^T)}_{V^T}.$

From this it follows that

 ATA=VΣUTUΣVT=VΣ2VT

is the eigendecomposition of $A^TA$, which has the same eigenvectors as $\tilde{H}_{\mathrm{mis}}$. As the eigenvalues of $A^TA$ quickly decay to zero because of the compactness of the underlying operator, we only need a small number of columns of $V$. Our aim in this paper is to express each column of $V$ further in a low-rank fashion. For this we note that $V=V_L\otimes U_C$ and, ignoring superindices, each column of $V$ is of the form $v_L\otimes u_C=\mathrm{vec}(u_C v_L^T)$, and hence a vector of rank one if the eigenvectors are all real. Complex eigenvectors would introduce a further small rank increase, and the consideration of $AA^T$ instead of $A^TA$ can, with a simultaneous diagonalization of the pencil, lead to small eigenvector ranks of the overall system. This justifies our choice of approximating the eigenvectors in low-rank form.
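The rank-one structure of such Kronecker-product eigenvector columns is easy to verify numerically; the vectors $v_L$ and $u_C$ below are random stand-ins for singular vectors.

```python
import numpy as np

rng = np.random.default_rng(4)
vL = rng.standard_normal(7)    # stand-in for a right singular vector of L
uC = rng.standard_normal(5)    # stand-in for a left singular vector of C

# A column of V_L (x) U_C is a Kronecker product of two vectors, i.e. the
# column-major vec of the rank-one matrix uC vL^T:
v = np.kron(vL, uC)
print(np.allclose(v, np.outer(uC, vL).flatten(order="F")))  # True
```

Storing the two short factors instead of the long Kronecker vector is exactly the saving the low-rank Arnoldi method exploits.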

## 6 Numerical Results

In this section, we study the performance of the low-rank Arnoldi algorithm presented in Section 4. The results presented in this section are based on an implementation of the above-described algorithms within MATLAB®. We perform the discretization of the PDE operators within the IFISS [30] framework, using finite elements for the heat equation and the streamline upwind Petrov–Galerkin (SUPG) method for the convection-diffusion equation. Our experiments are performed for a fixed final time with a varying number of time steps. As the domain we consider the unit square, but other domains are of course possible. We specify the boundary conditions for each problem separately. Throughout the results section, we fixed the truncation tolerance at a value for which we observed good results; additionally, we performed experiments, not listed here, with a larger tolerance for which we also observed fast convergence. Larger tolerances should be combined with a deeper analysis of the algorithms and a combination with flexible outer solvers. All results are computed on a standard Ubuntu desktop Linux machine with an Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz and 8GB of RAM.

The mathematical model we consider in this section is given by the following instationary PDE,

 $\frac{\partial y}{\partial t}+\mathcal{L}y=0 \quad\text{in }\Omega\times(0,T),\qquad y=u \quad\text{on }\Omega\times\{t=0\},\qquad y=0 \quad\text{on }\partial\Omega_D\times(0,T),\qquad \nabla y\cdot n=0 \quad\text{on }\partial\Omega_N\times(0,T),$ (25)

where $\mathcal{L}$ is a PDE operator; for the numerical experiments we consider the cases of the heat equation and the convection-diffusion equation. For all the numerical tests, the initial concentration represents the unknown parameter $u$, and the observation data are collected by sensors distributed in part of the domain $\Omega$.

As stated in Section 2, the statistical inverse problem with Gaussian noise and prior using a Bayesian formulation is related to a weighted least-squares problem. Here we use the same functional as in [12], given by

 minu(βnoise2∫T0∫Ω(y−yobs)2b(x,t)dxdt+βprior2∫Ωu2dx), (26)

in which $y$ satisfies the PDE model Eq. 25, $b(x,t)$ is the observation operator, and $u$ is the uncertain quantity, which is the initial condition for this numerical example. Here, we study the sparse observation case, where $b(x,t)$ is defined by

 b(x,t)=∑jδj.

Here $\delta_j$ is the indicator function of the $j$-th sensor: $\delta_j=1$ on the region covered by the $j$-th sensor and $\delta_j=0$ elsewhere.

Discretizing the functional Eq. 26 in turn gives

 $\min_u\left(\tfrac{1}{2}\|By-y_{\mathrm{obs}}\|^2_{\Gamma_{\mathrm{noise}}^{-1}}+\tfrac{1}{2}\|u\|^2_{\Gamma_{\mathrm{prior}}^{-1}}\right),$ (27)

where $B$ is the discretization of $b(x,t)$, the variable $y$ is the discrete variant of $y$ in Eq. 25 stacked in time, and it satisfies $Ky=Cu$. Here $K$ and $C$ come from the discretization of the PDE model Eq. 25.
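To fix ideas, an all-at-once matrix of the form $K=I_{n_t}\otimes L+C_t\otimes M$ for an implicit Euler time discretization can be assembled with sparse Kronecker products. The 1D finite-difference Laplacian and the sizes below are illustrative stand-ins, not the IFISS finite element matrices used in the experiments.

```python
import scipy.sparse as sp

nx, nt, tau = 8, 5, 0.1
L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(nx, nx))  # stand-in stiffness matrix
M = sp.identity(nx) / tau                                    # stand-in (scaled) mass matrix
C_t = sp.diags([1.0, -1.0], [0, -1], shape=(nt, nt))         # implicit Euler time stencil

# All-at-once space-time operator acting on y stacked over all time steps
K = sp.kron(sp.identity(nt), L) + sp.kron(C_t, M)
print(K.shape)  # (40, 40)
```

The Kronecker structure is never formed densely in practice; it is exactly what the low-rank solves below exploit.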

Discretization of the objective function gives $\Gamma_{\mathrm{noise}}^{-1}=\beta_{\mathrm{noise}}\tau\,(I_{n_t}\otimes M)$, where $M$ is the mass matrix and $I_{n_t}$ is an identity matrix; here $n_t$ is the number of time variables. Since we need to compute $\Gamma_{\mathrm{prior}}^{1/2}$, we use $\Gamma_{\mathrm{prior}}^{-1}=\beta_{\mathrm{prior}}h^d I_{n_x}$, where $h$ is the mesh size, $d$ is the spatial dimension, $I_{n_x}$ is an identity matrix, and $n_x$ is the number of spatial variables. Other settings, such as smoothing operators used for $\Gamma_{\mathrm{prior}}$, have been studied in [7] and can be used with our approach as well.

As analyzed in Section 2, the posterior covariance matrix is given by the inverse of the Hessian of Eq. 27. Therefore,

 $\Gamma_{\mathrm{post}}=\left(C^TK^{-T}B^T\Gamma_{\mathrm{noise}}^{-1}BK^{-1}C+\Gamma_{\mathrm{prior}}^{-1}\right)^{-1}.$ (28)

Note that for different PDE models, $K$ differs after discretization. In this section, we use two types of PDE models, i.e., the heat equation and the convection-diffusion equation, to show the performance of our low-rank algorithm for the approximation of $\Gamma_{\mathrm{post}}$. We argue that our low-rank algorithm also applies to other time-dependent PDE operators; here we apply our method to both symmetric and unsymmetric systems.

We also point out that, due to the uncertainty being given as the initial condition, the posterior Hessian is only of spatial dimension. Nevertheless, the Arnoldi process applied to this matrix requires the solution of space-time problems, and the low-rank form of our approach results in a much-reduced number of spatial solves. The complexity reduction is even more pronounced when the uncertainty enters the system as a space-time variable, such as the right-hand side of the PDE.

### 6.1 Implementation Details

According to (28), the prior-preconditioned data misfit part after discretization of (26) is given by

 $\tilde{H}_{\mathrm{mis}}=\Gamma_{\mathrm{prior}}^{1/2}\,C^TK^{-T}B^T\Gamma_{\mathrm{noise}}^{-1}BK^{-1}C\,\Gamma_{\mathrm{prior}}^{1/2}.$ (29)

To apply the Arnoldi iteration to (29), we need to solve the space-time discretized PDE $Kx=f$ and the adjoint PDE $K^Tx=f$. Here we take $K$ as an example. This requires solving a linear system of the following type,

 (Int⊗L+C⊗M)x=f,

or

 (Int⊗L+C⊗M)vec(X)=vec(F). (30)

Here $X$ and $F$ are matrices/tensors of appropriate sizes. Numerical solution of (30) has been studied intensively from the matrix equation point of view; cf. [31, 2, 4] for an overview.
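For moderate sizes and $M=I$, (30) can also be solved as a Sylvester matrix equation, since $(I_{n_t}\otimes L+C\otimes I)\,\mathrm{vec}(X)=\mathrm{vec}(LX+XC^T)$ in the column-major vec convention. The matrices below are well-conditioned random stand-ins, not discretized PDE operators.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(5)
nx, nt = 20, 12
L = np.diag(rng.uniform(1.0, 2.0, nx))                        # stand-in SPD spatial operator
C = np.eye(nt) + np.tril(rng.standard_normal((nt, nt)), -1)   # lower-triangular time stencil
F = rng.standard_normal((nx, nt))

# Solve L X + X C^T = F, which is (I (x) L + C (x) I) vec(X) = vec(F)
X = solve_sylvester(L, C.T, F)

A = np.kron(np.eye(nt), L) + np.kron(C, np.eye(nx))
print(np.allclose(A @ X.flatten(order="F"), F.flatten(order="F")))  # True
```

Working on the $n_x\times n_t$ matrix $X$ rather than the length-$n_x n_t$ vector is the starting point for the low-rank solvers used here.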

In this manuscript, we solve (30) based on its tensor structure and use the alternating minimal energy (AMEn) approach [10], implemented in the tensor-train (TT) toolbox [25], to solve the tensor equation (30). At each AMEn iteration, either a left or a right Galerkin projection is applied to the system (30). Therefore, after the Galerkin projection, we need to solve a linear system either of the form

 (^In⊗L+^C⊗M)x=~b, (31)

or

 (In⊗~L+C⊗~M)~x=^b. (32)

Here $\hat{I}_n$, $\hat{C}$, $\tilde{L}$, $\tilde{M}$ are matrices of appropriate dimensions after the Galerkin projection (cf. [10] for details).

After the Galerkin projection, the size of the system (31) is still relatively large, while the size of (32) is quite moderate. Therefore, Krylov solvers such as the generalized minimal residual (GMRES) method [28] or the induced dimension reduction method IDR(s) [32] can be used to solve (31), while a direct method can be used to solve (32).

To accelerate the convergence of the Krylov solver, we use the following preconditioner

 P=diag(^In)⊗L+diag(C)⊗M, (33)

to solve (31). Here $\mathrm{diag}(\cdot)$ is an operation that extracts the diagonal entries of a matrix and forms a diagonal matrix. One can use standard techniques such as multigrid methods [38] or incomplete LU factorization (ILU) [13] to approximate the preconditioner (33); here we use the backslash operator implemented in MATLAB®.
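A small SciPy sketch of this kind of preconditioned Krylov solve is given below. All matrices are illustrative stand-ins, not the Galerkin-projected AMEn blocks; the preconditioner drops the off-diagonal time coupling, leaving a block-diagonal matrix that is cheap to invert.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spsolve

nx, nt = 16, 6
L = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(nx, nx)).tocsc()  # stand-in stiffness
M = sp.identity(nx, format="csc")                                    # stand-in mass
C = sp.diags([1.0, -1.0], [0, -1], shape=(nt, nt)).tocsc()           # time stencil

A = (sp.kron(sp.identity(nt), L) + sp.kron(C, M)).tocsc()
# Diagonal-in-time preconditioner: only diag(C) is kept, so P is block diagonal
P = (sp.kron(sp.identity(nt), L) + sp.kron(sp.diags(C.diagonal()), M)).tocsc()

prec = LinearOperator(A.shape, matvec=lambda x: spsolve(P, x))
b = np.ones(nx * nt)
x, info = gmres(A, b, M=prec)
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

In practice, the block solves with `P` would themselves be replaced by multigrid or ILU approximations rather than a sparse direct solve.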

We also want to point out that there are many other methods to solve (30) efficiently, such as the low-rank factored alternating direction implicit (ADI) method (cf. [3]).

### 6.2 The Heat Equation

In this part, we use the 2D time-dependent heat equation on the unit square as an example to study the performance of our low-rank algorithm. Discretizing the equation in space using finite elements and in time using the implicit Euler method gives us an $n_x n_t\times n_x n_t$ linear system, where $n_x$ is the number of spatial variables and $n_t$ is the number of time steps. First, we study spectral properties of the prior-preconditioned data misfit part $\tilde{H}_{\mathrm{mis}}$ and the posterior covariance matrix. Using a uniform grid to discretize the heat equation and varying the number of time steps $n_t$, we plot the largest eigenvalues of $\tilde{H}_{\mathrm{mis}}$ in Fig. 1.

As shown in Fig. 1, there are only a few dominant eigenvalues of $\tilde{H}_{\mathrm{mis}}$. For most cases, a small truncation threshold is acceptable to approximate $\tilde{H}_{\mathrm{mis}}$ and to reduce the uncertainty of the system, which will be shown later. Meanwhile, the number of time steps does not influence the decay rate of the eigenvalues of $\tilde{H}_{\mathrm{mis}}$. This makes it possible to compute a fixed number of Arnoldi vectors even for larger numbers of time steps.

We have seen from Fig. 1 that the eigenvalues of $\tilde{H}_{\mathrm{mis}}$ decay sufficiently fast. Meanwhile, each eigenvector also exhibits a low-rank property. We plot the maximum rank used to compute the low-rank approximation of the eigenvectors corresponding to the largest eigenvalues for varying numbers of time steps. Fig. 2 shows that increasing the number of time steps keeps the rank of the low-rank approximation of the eigenvectors of $\tilde{H}_{\mathrm{mis}}$ bounded. The truncation threshold for the low-rank approximation is kept fixed.

As illustrated in Section 5, the eigenvectors also admit a low-rank property. We therefore perform a low-rank approximation of each Arnoldi vector throughout the Arnoldi iteration. Since this low-rank approximation is employed, the orthogonality of the Arnoldi basis is lost; in practice, however, we only need to compute a few Arnoldi vectors. Here, we use the modified Gram-Schmidt method to perform a full reorthogonalization. Other types of reorthogonalization, such as selective orthogonalization [13] or periodic orthogonalization [12], are also possible.
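A toy version of this low-rank Arnoldi iteration, with each basis vector stored as a (space × time) matrix, truncated by an SVD, and fully reorthogonalized by modified Gram-Schmidt, might look as follows. The diagonal test operator and all sizes are hypothetical, and for readability the truncated factors are multiplied back out, which a real implementation would never do.

```python
import numpy as np

def truncate(W, tol=1e-8):
    """Compress an n-by-nt 'Arnoldi vector' via a truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return (U[:, :r] * s[:r]) @ Vt[:r]       # rank-r approximation of W

def lr_arnoldi(apply_H, V0, k, tol=1e-8):
    """Arnoldi with SVD-truncated basis vectors and full
    reorthogonalization by modified Gram-Schmidt."""
    V = [V0 / np.linalg.norm(V0)]
    H = np.zeros((k + 1, k))
    for j in range(k):
        W = apply_H(V[j])
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = np.sum(V[i] * W)       # trace inner product
            W = W - H[i, j] * V[i]
        W = truncate(W, tol)                 # low-rank compression
        H[j + 1, j] = np.linalg.norm(W)
        V.append(W / H[j + 1, j])
    return np.sort(np.linalg.eigvals(H[:k, :k]).real)[::-1]

# hypothetical operator with Kronecker structure (I (x) A), A diagonal
# with eigenvalues 1, 1/2, 1/3, ...; a rank-1 start keeps every basis
# vector exactly rank one, so the truncation is lossless here
n, nt = 40, 30
A = np.diag(1.0 / np.arange(1, n + 1))
rng = np.random.default_rng(1)
V0 = np.outer(rng.standard_normal(n), rng.standard_normal(nt))
ritz = lr_arnoldi(lambda V: A @ V, V0, k=12)
print(ritz[:3])  # approximates the dominant eigenvalues 1, 1/2, 1/3
```

The inner products and norms act on the matricized vectors, which coincides with the Euclidean inner product on the vectorized space-time unknowns.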

Here we use examples discretized on a uniform grid with two different numbers of time steps to illustrate the effectiveness of the low-rank Arnoldi method. First, we plot the largest eigenvalues of the prior-preconditioned data misfit Hessian computed by the low-rank Arnoldi method. Next, we assemble this Hessian explicitly and use ‘eigs’ in MATLAB to compute its largest eigenvalues. The results are shown in Fig. 3. They clearly show that the low-rank Arnoldi method recovers the eigenvalues exactly when reorthogonalization is used.

As shown in Fig. 1, increasing the number of time steps does not influence the decay rate of the eigenvalues of the prior-preconditioned data misfit Hessian. Next, we show that an increase in the number of spatial parameters does not influence the decay rate of the eigenvalues either.

We fix the number of time steps and compute the largest eigenvalues of the prior-preconditioned data misfit Hessian for different numbers of spatial parameters; the results are shown in Fig. 4(a). The maximum rank used for the low-rank Arnoldi method with different numbers of spatial parameters is shown in Fig. 4(b).

Figure 4(a) shows that an increasing number of parameters does not influence the decay of the eigenvalues. It is expected that, for different numbers of parameters, the eigenvalues converge to those of the prior-preconditioned operator as long as the discretization of the parameter field is fine enough. This is illustrated by the eigenvalues shown in Fig. 4(a). Meanwhile, the maximum ranks used in the low-rank Arnoldi method also remain bounded by a constant as the number of parameters increases, as shown in Fig. 4(b).

As stated before, a moderate truncation threshold for the eigenvalue computations is enough to reduce the uncertainty and to approximate the posterior covariance matrix. Next, we plot the diagonal entries of the approximated posterior covariance matrix, i.e., the variance at the grid points, for different truncation thresholds. We use a sparse observation setting, where the sensors are uniformly distributed inside the domain and each sensor covers only a small fraction of the domain. The prior covariance matrix is a scaled identity matrix of appropriate size.

For the sparse observation case, the covariance updates are mostly clustered around the areas where data are observed, while the remaining areas are dominated by the prior. The uncertainty can only be reduced in the vicinity of the sensor locations. This is clearly shown in Fig. 5(a)-Fig. 5(d), where the dark colored areas coincide with the sensor locations and have the lowest variance. Decreasing the truncation threshold, we observe that the variance is further reduced around the sensor locations. For even smaller thresholds, no further reduction in the variance is achieved, as all essential information is already captured. Figure 5 shows that, as long as the eigenvalue computation has converged, a moderate truncation threshold is enough to approximate the posterior covariance matrix and to reduce the uncertainty.
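Such variance plots rest on the standard low-rank posterior update (cf. [7, 12]): with (λ_i, v_i) the dominant eigenpairs of the prior-preconditioned data misfit Hessian, the posterior covariance is the prior minus a rank-r correction with weights λ_i/(1+λ_i), and only its diagonal is ever formed. Below is a small dense sketch of this update; the random observation operator, sizes, noise level, and prior precision are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 10                  # hypothetical parameter / observation sizes
delta, sigma = 1.0, 1e-1        # hypothetical prior precision, noise level
J = rng.standard_normal((m, n)) # hypothetical observation operator

# prior-preconditioned data misfit Hessian, prior covariance delta^{-1} I
Htilde = (J.T @ J) / (sigma**2 * delta)
lam, V = np.linalg.eigh(Htilde)
lam, V = lam[::-1], V[:, ::-1]              # descending order

tau = 1e-2                                  # truncation threshold
r = int(np.sum(lam > tau * lam[0]))         # retained eigenpairs
D = lam[:r] / (1.0 + lam[:r])
# diagonal of Gamma_pr^{1/2} (I - V D V^T) Gamma_pr^{1/2}, cost O(n r)
var_post = 1.0 / delta - np.einsum("ij,j,ij->i", V[:, :r], D / delta, V[:, :r])

# reference: exact posterior covariance for this small dense problem
Gamma_post = np.linalg.inv(J.T @ J / sigma**2 + delta * np.eye(n))
print(np.max(np.abs(var_post - np.diag(Gamma_post))))
```

Since the misfit Hessian here has rank m, retaining all r = m significant eigenpairs reproduces the exact posterior variance; with a larger threshold the update simply becomes a slightly cruder, but much cheaper, approximation.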

As analyzed in Section 5, the eigenvalues of the prior-preconditioned data misfit Hessian depend on the ratio between the weights of the data misfit and the prior, and this ratio leads to different updates of the posterior covariance matrix. Next, we plot the variance of the parameters for different ratios. These results are shown in Figure 6 and Figure 7. The prior covariance matrix is again a scaled identity matrix of appropriate size.

For both choices of this ratio, the smaller truncation threshold requires more Arnoldi iterations than the larger one, and the larger ratio requires more iterations than the smaller one.

Fig. 6 and Fig. 7 show that, as the ratio between the data misfit and prior weights increases, the diagonal entries of the posterior covariance matrix become smaller. This implies that, to further reduce the posterior variance, we need a bigger ratio. This can be explained as follows: the weight of the data misfit part in the optimization problem (26) grows with this ratio, so the data misfit is optimized more strictly than for a smaller ratio. Therefore, the error between the estimate and the observed data becomes smaller. However, this yields a more ill-conditioned problem, and more Arnoldi iterations are needed. A balance between covariance reduction and computational effort is therefore required, and our approach enables the storage of many Arnoldi vectors due to the complexity reduction of the low-rank approximation.

### 6.3 Convection-Diffusion Equation

In this section, we study our low-rank approach for a stochastic inverse convection-diffusion problem. Here, the convection-diffusion operator is given by

 Lu = −νΔu + ω⃗ ⋅ ∇u.

The computational domain is a square; the inflow condition is posed on the lower boundary and the outflow condition on the upper boundary. Boundary conditions are prescribed according to the analytic solution of the convection-diffusion equation, which is described as Example 3.3.1 in [11]. We use the streamline upwind Petrov-Galerkin (SUPG) finite element method to discretize the convection-diffusion equation.
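For intuition about how the viscosity enters, a simple 1D upwind finite-difference stand-in for the convection-diffusion operator can be written down directly; this is not the SUPG finite element discretization used in the experiments, but it shows the same qualitative effect: shrinking ν makes the discrete operator increasingly nonsymmetric and convection dominated.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def convection_diffusion_1d(n, nu, omega=1.0):
    """Upwind finite differences for L u = -nu u'' + omega u' on (0, 1)
    with homogeneous Dirichlet conditions; a hypothetical stand-in for
    the SUPG finite element discretization used in the experiments."""
    h = 1.0 / (n + 1)
    diffusion = nu * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
    # first-order upwinding (for omega > 0) keeps the scheme stable
    convection = omega * sp.diags([-1.0, 1.0], [-1, 0], shape=(n, n)) / h
    return (diffusion + convection).tocsr()

# smaller nu -> the skew part of the operator dominates the symmetric part
asym = []
for nu in (1.0, 1e-1, 1e-2):
    A = convection_diffusion_1d(100, nu)
    asym.append(spla.norm(A - A.T) / spla.norm(A))
print(asym)  # relative asymmetry grows as nu decreases
```

The growing asymmetry is one reason why the (nonsymmetric) Arnoldi process, rather than a symmetric Lanczos process, is the natural workhorse here.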

First, we show the eigenvalue decay of the prior-preconditioned data misfit Hessian for different settings of the viscosity parameter ν. Here we set the number of time steps to 30. We plot the largest eigenvalues for different ν in Fig. 8(a).

As shown in Fig. 8(a), the eigenvalues decay rapidly for large ν, while the decay slows down as ν gets smaller. Therefore, more Arnoldi iterations are needed to obtain a satisfactory approximation of the prior-preconditioned data misfit Hessian. For smaller ν, the largest eigenvalue is also bigger than for larger ν, as shown in Fig. 8(a), so the first few eigenvectors form a more dominant subspace. It is therefore possible to choose a larger truncation threshold for smaller ν, as will be shown later.

The maximum rank for the low-rank Arnoldi iteration with different ν is shown in Fig. 8(b). It shows that the maximum rank does not increase as ν decreases and is bounded by a small constant. Therefore, the complexity for both computation and storage of the low-rank Arnoldi approach remains linear in the sum of the number of spatial unknowns and the number of time steps.

As shown in Fig. 8(a), the eigenvalues decay more slowly for smaller ν and more Arnoldi iterations are needed, which is due to the nature of the problem. For such problems with small ν, our low-rank approach is far superior to the standard Arnoldi method introduced in [12], since many more Arnoldi vectors have to be computed and stored. Note that this is feasible with the approach presented here.

Next, we fix ν and compute the largest eigenvalues of the prior-preconditioned data misfit Hessian for different numbers of time steps; the results are shown in Fig. 9(a). We are also interested in the relation between the maximum rank at each Arnoldi iteration and the number of time steps. This is shown in Fig. 9(b).

As shown in Fig. 9(a), the eigenvalue decay behaves similarly as the number of time steps increases. The maximum ranks at each low-rank Arnoldi iteration are bounded by a moderately small constant that is independent of the number of time steps.

Fig. 8(b) and Fig. 9(b) show that the maximum rank for each low-rank Arnoldi iteration is almost invariant w.r.t. the number of time steps and the viscosity parameter ν. This makes our low-rank Arnoldi method appealing even for complicated, convection-dominated stochastic inverse problems over a long time horizon.

Next, we show the diagonal entries of the approximated posterior covariance matrix for different settings of ν and of the eigenvalue truncation threshold. Here we fix the number of time steps and use a uniform grid to discretize the convection-diffusion equation. The results are given in Fig. 10 and Fig. 11.

For both choices of ν, the smaller truncation threshold requires more low-rank Arnoldi iterations than the larger one. Note that a further reduction of the truncation parameter often does not yield better results, as all the essential information is already captured.

Fig. 10(b) and Fig. 11(b) illustrate that the uncertainty (the variance of the unknowns) is already reduced dramatically even if we choose a relatively large threshold.

As shown in Fig. 8(a), the smaller ν becomes, the bigger the largest eigenvalue of the prior-preconditioned data misfit Hessian. Therefore, the first few eigenvectors for smaller ν form a more dominant subspace. This in turn implies that the uncertainty is reduced much more for smaller ν when the same truncation threshold is used. We observe this in Fig. 10 and Fig. 11.

We conclude that, for the stochastic convection-diffusion inverse problem, our low-rank Arnoldi approach is flexible and efficient across different time horizon lengths and viscosity parameters. It is particularly attractive for convection-dominated stochastic inverse problems with a long time horizon.

## 7 Conclusions

In this manuscript, we propose a low-rank Arnoldi method to approximate the posterior covariance matrix that appears in stochastic inverse problems. Compared with the standard Arnoldi approach, our method exploits the low-rank structure of each Arnoldi vector and computes a low-rank approximation of it. This reduces the complexity for both computation and storage from the product to the sum of the numbers of degrees of freedom in space and in time. This makes solving large-scale stochastic inverse problems possible.

The low-rank approach introduced in this manuscript solves linear stochastic inverse problems that can be put into the Bayesian framework. The next step of our work is to extend this approach to nonlinear stochastic inverse problems, which remains a big challenge.

## Acknowledgements

This work was supported by the European Regional Development Fund (ERDF/ EFRE: ZS/2016/04/78156) within the research center dynamic systems (CDS).

## References

• [1] R. Andreev and C. Tobler, Multilevel preconditioning and low-rank tensor iteration for space-time simultaneous discretizations of parabolic PDEs, Numer. Linear Algebra Appl., 22 (2015), pp. 317–337.
• [2] P. Benner, M. Köhler, and J. Saak, M.E.S.S.–The Matrix Equations Sparse Solvers library.
• [3] P. Benner and P. Kürschner, Computing real low-rank solutions of Sylvester equations by the factored ADI method, Computers and Mathematics with Applications, 67 (2014), pp. 1656–1672.
• [4] P. Benner and J. Saak, Numerical solution of large and sparse continuous time algebraic matrix Riccati and Lyapunov equations: a state of the art survey, GAMM-Mitt., 36 (2013), pp. 32–52.
• [5] M. Benzi, E. Haber, and L. Taralli, A preconditioning technique for a class of PDE-constrained optimization problems, Advances in Computational Mathematics, 35 (2011), pp. 149–173.
• [6] A. Borzì, Multigrid methods for parabolic distributed optimal control problems, Journal of Computational and Applied Mathematics, 157 (2003), pp. 365–382.
• [7] T. Bui-Thanh, O. Ghattas, J. Martin, and G. Stadler, A computational framework for infinite-dimensional Bayesian inverse problems Part I: The linearized case, with application to global seismic inversion, SIAM J. Sci. Comput., 35 (2013), pp. A2494–A2523.
• [8] D. Calvetti and E. Somersalo, Introduction to Bayesian scientific computing, vol. 2 of Surveys and Tutorials in the Applied Mathematical Sciences, Springer-Verlag, New York, 2007.
• [9] S. V. Dolgov, TT-GMRES: Solution to a linear system in the structured tensor format, Russian Journal of Numerical Analysis and Mathematical Modelling, 28 (2013), pp. 149–172.
• [10] S. V. Dolgov and D. V. Savostyanov, Alternating minimal energy methods for linear systems in higher dimensions, SIAM J. Sci. Comput., 36 (2014), pp. A2248–A2271.
• [11] H. C. Elman, D. J. Silvester, and A. J. Wathen, Finite Elements and Fast Iterative Solvers with Applications in Incompressible Fluid Dynamics, Numerical Mathematics and Scientific Computation, Oxford University Press, New York, 2005.
• [12] H. P. Flath, L. C. Wilcox, V. Akçelik, J. Hill, B. van Bloemen Waanders, and O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput., 33 (2011), pp. 407–432.
• [13] G. H. Golub and C. F. van Loan, Matrix computations, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, MD, third ed., 1996.
• [14] L. Grasedyck, D. Kressner, and C. Tobler, A literature survey of low-rank tensor approximation techniques, GAMM-Mitt., 36 (2013), pp. 53–78.
• [15] W. Hackbusch, Multigrid methods and applications, vol. 4 of Springer Series in Computational Mathematics, Springer-Verlag, Berlin, 1985.
• [16] W. Hackbusch, Tensor spaces and numerical tensor calculus, vol. 42 of Springer Series in Computational Mathematics, Springer, Heidelberg, 2012.
• [17] R. Herzog and K. Kunisch, Algorithms for PDE-constrained optimization, GAMM-Mitt., 33 (2010), pp. 163–176.
• [18] M. Hinze, R. Pinnau, M. Ulbrich, and S. Ulbrich, Optimization with PDE Constraints, Springer-Verlag, New York, 2009.
• [19] K. Ito and K. Kunisch, Lagrange multiplier approach to variational problems and applications, vol. 15 of Advances in Design and Control, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2008.
• [20] J. Kaipio and E. Somersalo, Statistical and computational inverse problems, vol. 160 of Applied Mathematical Sciences, Springer-Verlag, New York, 2005.
• [21] D. Kressner and C. Tobler, Krylov subspace methods for linear systems with tensor product structure, SIAM J. Matrix Anal. Appl, 31 (2010), pp. 1688–1714.
• [22] D. Kressner and C. Tobler, Low-rank tensor Krylov subspace methods for parametrized linear systems, SIAM J. Matrix Anal. Appl, 32 (2011), pp. 1288–1316.
• [23] J. R. Kuttler and V. G. Sigillito, Eigenvalues of the Laplacian in two dimensions, SIAM Rev., 26 (1984), pp. 163–193.
• [24] C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, Journal of Research of the National Bureau of Standards, 45 (1950), pp. 255–282.
• [25] I. V. Oseledets, S. Dolgov, V. Kazeev, D. Savostyanov, O. Lebedeva, P. Zhlobich, T. Mach, and L. Song, TT-Toolbox.
• [26] I. V. Oseledets and S. V. Dolgov, Solution of linear systems and matrix inversion in the TT-format, SIAM J. Sci. Comput., 34 (2012), pp. A2718–A2739.
• [27] J. W. Pearson, M. Stoll, and A. J. Wathen, Regularization-robust preconditioners for time-dependent PDE-constrained optimization problems, SIAM J. Matrix Anal. Appl, 33 (2012), pp. 1126–1152.
• [28] Y. Saad, Iterative methods for sparse linear systems, Society for Industrial and Applied Mathematics, Philadelphia, PA, second ed., 2003.
• [29] Y. Saad, Numerical methods for large eigenvalue problems, vol. 66 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2011.
• [30]