Low-rank computation of posterior covariance matrices in Bayesian inverse problems


Abstract

We consider the problem of estimating the uncertainty in statistical inverse problems using Bayesian inference. When the probability densities of the noise and of the prior are Gaussian, the solution of such a statistical inverse problem is also Gaussian. Therefore, the underlying solution is characterized by the mean and the covariance matrix of the posterior probability density. However, the covariance matrix of the posterior probability density is full and large. Hence, the computation of such a matrix is infeasible for high-dimensional parameter spaces. It is shown that for many ill-posed problems the Hessian matrix of the data misfit part has low numerical rank, and it is therefore possible to perform a low-rank approximation of the posterior covariance matrix. For such a low-rank approximation, one needs to solve a forward partial differential equation (PDE) and the adjoint PDE in both space and time. This in turn gives a complexity of \mathcal{O}(n_x n_t) for both computation and storage, where n_x is the dimension of the spatial discretization and n_t the dimension of the temporal discretization. Such computation and storage demands are infeasible for large problems. To overcome this obstacle, we develop a new approach that combines a recently developed low-rank in time algorithm with the low-rank Hessian method. We reduce both the computational complexity and the storage requirement from \mathcal{O}(n_x n_t) to \mathcal{O}(n_x + n_t). We use numerical experiments to illustrate the advantages of our approach.

1 Introduction

Computational mathematicians dealing with simulations of large-scale discretizations describing physical phenomena have made tremendous progress over the last decades. This has enabled scientists from various areas such as engineering, chemistry, and geophysics to ask more relevant and complex questions. One area that has seen a dramatic increase in the number of published results is the field of statistical inverse problems [36]. In particular, the consideration of partial differential equations (PDEs) as models in statistical inverse problems dramatically increases the problem complexity, as a refinement of the model in space and time results in an exponential increase in the problem degrees of freedom. By this we mean that a discretized problem is typically represented by a spatial system matrix whose number of degrees of freedom scales like h^{-d}, with d the spatial dimension and h the mesh size. It is easily seen that halving the mesh size h means the matrix dimension grows by a factor of 2^d, depending on the spatial dimension. This complexity is further increased when the temporal dimension is incorporated.

While numerical analysis has provided many techniques that allow the efficient handling of such problems, e.g., Krylov methods [28] and multigrid techniques [15], we are faced with an even steeper challenge when uncertainty in the parameters of the model is incorporated. For this we consider the approach of Bayesian inverse problems, where the goal is to use prior knowledge to obtain information about the conditional mean or the posterior covariance given a set of measured data. While computing the conditional mean typically corresponds to a problem formulation frequently encountered in PDE-constrained optimization [37], computing the posterior covariance matrix is much more challenging, as this matrix is dense and involves the inverse of high-dimensional discretized PDE problems. In [12] and subsequent works, the authors proposed a low-rank approximation of the posterior covariance matrix. While this already reduces the complexity dramatically, the storage requirements for the resulting vectors still suffer from high dimensionality with respect to the spatial dimension n_x and the temporal dimension n_t. Our aim in this paper is to introduce a method based on low-rank techniques [35] that allows us to reduce the complexity from \mathcal{O}(n_x n_t) to \mathcal{O}(n_x + n_t).

In order to do this, we first derive the basic problem following [12]. This is followed by the presentation of a low-rank technique that we previously introduced for PDE-constrained optimization. We then use this to establish a low-rank eigenvalue method based on the classical Lanczos procedure [24] or the Arnoldi procedure [29]. After introducing different choices of covariance matrices, we show that our approach can be theoretically justified. We then illustrate the applicability of the proposed methodology to a diffusion problem and a convection-diffusion problem, and present numerical results illustrating the performance of the scheme.

2 Statistical/Bayesian Inverse Problems

We refer to [20] for an excellent introduction to the subject of statistical inverse problems. We follow [12] in the derivation of our model setup and start with the parameter-to-observable map f, which maps the model parameters and the noise to the observables.

Here m, our vector of model parameters to be recovered, the error vector e, and the vector of observables y are realizations of random vectors. The vector d contains the observed values. As discussed in [7], even when using the 'true' model parameters, the observables will differ from the measurements due to measurement noise and the inadequacy of the underlying PDE model.

In a typical application, such as the one discussed later, evaluating the map f requires the solution of a PDE, potentially coupled to an observation operator representing a domain of interest.

The Bayes’ theorem, which plays a key role in the Bayesian inference, is written as

where we used the prior probability density function (PDF) , the likelihood function , and the data with . The function is the posterior probability density function. The likelihood is derived under the assumption of additive noise

where and is the additive noise and given as We once more follow [12], assuming that and are statistically independent and we can use

Therefore, Bayes’ theorem can be written as

Assuming that both probability density functions, for the parameters and for the noise, are Gaussian, we can rewrite the PDFs in the form

    \pi_{\text{prior}}(m) \propto \exp\big( -\tfrac{1}{2} (m - \bar{m})^T \Gamma_{\text{prior}}^{-1} (m - \bar{m}) \big),
    \pi_{\text{noise}}(e) \propto \exp\big( -\tfrac{1}{2} (e - \bar{e})^T \Gamma_{\text{noise}}^{-1} (e - \bar{e}) \big),

where \bar{m} is the mean of the model parameter prior PDF and \bar{e} is the mean of the noise PDF. We further have the two covariance matrices \Gamma_{\text{prior}} for the prior and \Gamma_{\text{noise}} for the noise. The Gaussian assumption allows us to rewrite Bayes' theorem further to get

    \pi_{\text{post}}(m \mid d) \propto \exp\big( -\tfrac{1}{2} (d - f(m) - \bar{e})^T \Gamma_{\text{noise}}^{-1} (d - f(m) - \bar{e}) - \tfrac{1}{2} (m - \bar{m})^T \Gamma_{\text{prior}}^{-1} (m - \bar{m}) \big).
Let us further assume that the parameter-to-observable map is linear, i.e., f(m) = F m. The matrix F represents a linear map from the parameters m to the observables y. We will later see that this matrix often involves the inverse of a discretized representation of a PDE operator; in [33], the authors additionally incorporate a quantity-of-interest operator into F. Therefore, F will typically be dense and very large. Substituting f(m) = F m into the posterior density above, we arrive at a restated version of Bayes' theorem. From this relation we can express several relevant statistical quantities. For example, we can compute the maximum a posteriori (MAP) point, which is defined as the maximizer of the posterior density; to compute it, one can solve the optimization problem

    \min_m \tfrac{1}{2} \| F m - d + \bar{e} \|_{\Gamma_{\text{noise}}^{-1}}^2 + \tfrac{1}{2} \| m - \bar{m} \|_{\Gamma_{\text{prior}}^{-1}}^2.

Note that this is a deterministic inverse problem and resembles the structure one finds in PDE-constrained optimization problems [37]. Many strategies are known for solving it efficiently, and in particular the underlying function-space considerations help in the development of efficient numerical algorithms. An infinite-dimensional discussion of the above problem is given in [7], and we only refer to the infinite-dimensional setup when needed. Our goal in this paper is not the solution of the MAP problem; low-rank methods for this case have recently been devised in [35], and the techniques there are likely to be applicable, as the only difference lies in the weighting matrices, which for the classical PDE-constrained optimization problem are mass matrices or matrices involving mass matrices. The more challenging question lies in the approximation of the posterior covariance matrix

    \Gamma_{\text{post}} = \big( F^T \Gamma_{\text{noise}}^{-1} F + \Gamma_{\text{prior}}^{-1} \big)^{-1}.
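To fix ideas, the following small MATLAB example (purely illustrative sizes and randomly generated data; the variable names are ours, not the paper's) computes the MAP point and the posterior covariance of such a linear-Gaussian model directly. The rest of the paper is concerned with the regime where forming and inverting these matrices explicitly is impossible.

    % Toy linear-Gaussian Bayesian inverse problem (illustrative, small, dense)
    n = 100; k = 40;                       % parameter and observation dimensions
    F      = randn(k, n);                  % linear parameter-to-observable map
    Gnoise = 1e-2 * eye(k);                % noise covariance (zero noise mean assumed)
    Gprior = eye(n);                       % prior covariance
    mbar   = zeros(n, 1);                  % prior mean
    d      = randn(k, 1);                  % observed data
    Hmis   = F' * (Gnoise \ F);            % data misfit Hessian
    Gpost  = inv(Hmis + inv(Gprior));      % posterior covariance
    m_map  = Gpost * (F' * (Gnoise \ d) + Gprior \ mbar);   % MAP point / posterior mean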

The approximation of \Gamma_{\text{post}} is in general very costly and, without further approximation, intractable. The approach presented in [12] computes a low-rank approximation to this matrix using the relation

    \Gamma_{\text{post}} = \Gamma_{\text{prior}}^{1/2} \big( \tilde{\mathcal{H}}_{\text{mis}} + I \big)^{-1} \Gamma_{\text{prior}}^{1/2}.

The authors in [12] then compute a low-rank approximation to the so-called prior-preconditioned Hessian of the data misfit,

    \tilde{\mathcal{H}}_{\text{mis}} = \Gamma_{\text{prior}}^{1/2} F^T \Gamma_{\text{noise}}^{-1} F\, \Gamma_{\text{prior}}^{1/2},
with the approximation

    \tilde{\mathcal{H}}_{\text{mis}} \approx V_r \Lambda_r V_r^T,

where V_r and \Lambda_r contain the r dominant eigenvectors and eigenvalues, respectively. Using this approximation and the Sherman-Morrison-Woodbury formula [13], one obtains for the prior-preconditioned system

    \big( V_r \Lambda_r V_r^T + I \big)^{-1} \approx I - V_r D_r V_r^T,

with D_r = \operatorname{diag}\big( \lambda_1/(\lambda_1+1), \ldots, \lambda_r/(\lambda_r+1) \big), where \lambda_i is the i-th diagonal entry of \Lambda_r. A similar approach without the prior preconditioning is a feasible alternative if the inverse of the prior covariance can be evaluated in reasonable time.
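Combining the two relations above gives \Gamma_{\text{post}} \approx \Gamma_{\text{prior}} - \Gamma_{\text{prior}}^{1/2} V_r D_r V_r^T \Gamma_{\text{prior}}^{1/2}. The following MATLAB lines verify this update on a toy problem with an identity prior, so that the prior square root is trivial (sizes and names are again illustrative): once all nonzero eigenvalues of the prior-preconditioned misfit Hessian are captured, the rank-r update reproduces the posterior covariance.

    % Low-rank posterior covariance update from the r dominant eigenpairs
    n = 200; k = 30; r = 30;                       % with Gprior = I, Ht has rank <= k
    F      = randn(k, n);
    Gnoise = 1e-2 * eye(k);
    Ht     = F' * (Gnoise \ F);                    % prior-preconditioned misfit Hessian
    [Vf, Lf]   = eig((Ht + Ht')/2);                % dense symmetric eigendecomposition
    [lam, idx] = sort(diag(Lf), 'descend');
    Vr  = Vf(:, idx(1:r));  lam = lam(1:r);        % r dominant eigenpairs
    Dr  = diag(lam ./ (1 + lam));
    Gpost_lr  = eye(n) - Vr * Dr * Vr';            % rank-r update of the posterior covariance
    Gpost_ref = inv(Ht + eye(n));                  % direct inversion for comparison
    norm(Gpost_lr - Gpost_ref) / norm(Gpost_ref)   % close to machine precision here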

The main goal of this paper is the derivation of an efficient scheme to approximate \Gamma_{\text{post}} from the low-rank approximation to the misfit Hessian. Before discussing this problem, we introduce an idea that is instrumental in realizing it and that is motivated by a PDE-constrained optimization problem.

3 A Low-Rank Technique for PDE-Constrained Optimization

In order to better understand the statistical inverse problem, we investigate it in relation to a PDE-constrained optimization problem. We start the derivation of the low-rank in time method by considering an often used model problem in PDE-constrained optimization (see [18]): the minimization of

    J(y, u) = \tfrac{1}{2} \| y - \bar{y} \|_{L^2(Q)}^2 + \tfrac{\beta}{2} \| u \|_{L^2(Q)}^2,

with Q = \Omega \times [0, T] and \Sigma = \partial\Omega \times [0, T] space-time cylinders. The constraint linking state y and control u is given by the heat equation with a distributed control term,

    y_t - \Delta y = u \ \text{in } Q, \qquad y = 0 \ \text{on } \Sigma, \qquad y(\cdot, 0) = y_0 \ \text{in } \Omega.

Here \Omega denotes the domain and \partial\Omega corresponds to the boundary of the domain. For a more detailed discussion of well-posedness, existence of solutions, discretization, and so forth, we refer the interested reader to [18]. The solution of such an optimization problem is obtained using a Lagrangian approach and considering the first-order optimality conditions, which for our problem result in a saddle-point linear system whose blocks are built from Kronecker products of temporal and spatial matrices.

In this system, the matrices D_1 and D_2 come from the discretization of the temporal parts of the objective function and of the right-hand side of the PDE constraint (cf. [5]). The matrices M_1 and M_2 are mass matrices corresponding to the observation and control domains. The matrix N essentially represents the incorporation of the control into the constraint, i.e., N is a mass matrix in the above example. The matrix K represents the all-at-once discretization of the time derivative in the PDE together with the discretized Laplacian. The state, control, and adjoint state are represented by the space-time vectors y = [y_1^T, \ldots, y_{n_t}^T]^T, u = [u_1^T, \ldots, u_{n_t}^T]^T, and p = [p_1^T, \ldots, p_{n_t}^T]^T, where n_t denotes the number of time steps.
We point out again that, for A \in \mathbb{R}^{m \times n}, the Kronecker product is defined as

    A \otimes B = \big[ a_{ij} B \big]_{i = 1, \ldots, m;\ j = 1, \ldots, n},

and remind the reader of the definition of the vec operator via

    \operatorname{vec}(X) = [ x_{11}, \ldots, x_{m1}, x_{12}, \ldots, x_{mn} ]^T,

i.e., the columns of X are stacked into one long vector, as well as the relation

    ( B^T \otimes A ) \operatorname{vec}(X) = \operatorname{vec}( A X B ).
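As a quick numerical sanity check of this relation (purely illustrative), the following MATLAB lines confirm it for random matrices.

    % Check (B^T kron A) vec(X) = vec(A X B) for random matrices
    A = randn(4, 3);  B = randn(5, 6);  X = randn(3, 5);
    lhs = kron(B.', A) * X(:);                % X(:) stacks the columns of X, i.e. vec(X)
    rhs = reshape(A * X * B, [], 1);          % vec(A X B)
    norm(lhs - rhs)                           % of the order of machine precision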
In [35], it was shown that the solution to the PDE-constrained optimization problem can be computed in low-rank form,

    y \approx \operatorname{vec}(W_y Z_y^T), \qquad u \approx \operatorname{vec}(W_u Z_u^T), \qquad p \approx \operatorname{vec}(W_p Z_p^T),

where the ranks of the factors W_{(\cdot)} and Z_{(\cdot)} are small in comparison to the spatial and temporal dimensions. The authors in [35] illustrated that the low-rank structure of the right-hand side is maintained throughout a Krylov subspace iteration in the above-described representation. Low-rank techniques for Krylov subspace methods have recently received much attention, and we refer the reader to [21] and, for tensor-structured equations, to [14].

We obtain a significant storage reduction if we can base our approximation of the solution on such low-rank factors. Due to the low-rank nature of the factors, we have to perform fewer multiplications with the spatial and temporal submatrices, while also maintaining smaller storage requirements.
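The following MATLAB sketch illustrates this mechanism for a generic space-time operator in Kronecker form, C \otimes M + E \otimes K_s (an illustrative stand-in, not the exact blocks of the optimality system above): applying it to a vector stored as low-rank factors returns the result again in factored form, so the full space-time vector is never formed.

    % Apply (C kron M + E kron Ks) to x = vec(W*Z') while keeping the factored form.
    % By the vec relation above, the result is vec( (M*W)*(C*Z)' + (Ks*W)*(E*Z)' ),
    % i.e. it has the factors [M*W, Ks*W] and [C*Z, E*Z] of rank at most 2r.
    nx = 200; nt = 50; r = 3;                                  % illustrative sizes
    M  = speye(nx);  Ks = gallery('tridiag', nx);              % spatial mass and stiffness matrices
    C  = spdiags([-ones(nt,1) ones(nt,1)], [-1 0], nt, nt);    % temporal difference matrix
    E  = speye(nt);
    W  = randn(nx, r);  Z = randn(nt, r);                      % low-rank factors of x
    Wnew = [M*W, Ks*W];                                        % new spatial factor
    Znew = [C*Z, E*Z];                                         % new temporal factor
    % Comparison with the full matrix-vector product (only affordable at these sizes)
    x = reshape(W*Z', [], 1);
    norm((kron(C, M) + kron(E, Ks)) * x - reshape(Wnew*Znew', [], 1))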

There are several similarities between this problem and the statistical inverse problem presented earlier. It is clear that with the choice

    \Gamma_{\text{noise}}^{-1} = D_1 \otimes M_1, \qquad \Gamma_{\text{prior}}^{-1} = \beta\, D_2 \otimes M_2, \qquad F = K^{-1} N,

the PDE-constrained optimization problem can be interpreted as a statistical inverse problem, and the posterior covariance matrix is obtained by eliminating both state and adjoint state from the system matrix to arrive at a reduced Hessian system. Furthermore, it is clear that for many choices of prior and noise covariance we can utilize the tensor structure to compute low-rank solutions. For this we state the posterior covariance matrix of the PDE optimization problem,

    \Gamma_{\text{post}} = \big( N^T K^{-T} ( D_1 \otimes M_1 ) K^{-1} N + \beta\, D_2 \otimes M_2 \big)^{-1},

and the misfit Hessian

    \mathcal{H}_{\text{mis}} = N^T K^{-T} ( D_1 \otimes M_1 ) K^{-1} N.
We keep this example in mind when we now discuss a low-rank technique to approximate the eigenvectors of the posterior covariance matrix in low-rank form. For this we propose a low-rank Krylov subspace method to compute the dominant eigenvectors and eigenvalues.

Before discussing the eigenvalue approximation strategy, we want to comment on the scaling of the PDE-constrained optimization problem in relation to the statistical inverse problem discussed in [12]. The weighting of the data misfit and the prior term used there corresponds, in the simple case of full observations and control, to a rescaling of the objective function of the PDE-constrained optimization problem by the ratio of the noise and prior parameters. In PDE-constrained optimization one typically reduces the regularization parameter \beta to allow for a more expensive control that drives the state closer to the desired state. In the statistical inverse setting, decreasing \beta means that the role of the prior covariance is diminished and most contributions come from the noise. We make similar observations for the statistical inverse problems considered in this manuscript; these are supported by the analysis in Section 5 and the numerical experiments in Section 6. The right choice of the parameters \beta_{\text{prior}} and \beta_{\text{noise}} depends on the underlying application, and we refer to [8] for a discussion of their roles as regularization parameters.

4 Low-rank Lanczos/Arnoldi Method

We recall that our goal is to find a low-rank approximation of the eigenvectors of the posterior covariance matrix. More precisely, we want to compute an approximation \tilde{\mathcal{H}}_{\text{mis}} \approx V_r \Lambda_r V_r^T with the number r of retained eigenpairs much smaller than the dimension of \tilde{\mathcal{H}}_{\text{mis}}, which for the PDE-constrained optimization problem is the space-time dimension n_x n_t.

Our main assumption at this point is that storing each eigenvector, and especially a number of such space-time vectors, can pose serious problems. Additionally, in order to perform the matrix-vector multiplication with \tilde{\mathcal{H}}_{\text{mis}}, a large number of PDE solutions needs to be computed. We point out that in order to apply this matrix within an Arnoldi procedure, we need to solve the spatial system over the whole time domain. A major advantage of our approach is motivated by the fact that every space-time vector can be written as

    v = \operatorname{vec}(V), \qquad V \in \mathbb{R}^{n_x \times n_t},

which we assume is well approximated via

    V \approx W Z^T

with W \in \mathbb{R}^{n_x \times r} and Z \in \mathbb{R}^{n_t \times r}, r \ll \min(n_x, n_t). If the Arnoldi or Lanczos vectors are of this form, then the application of the matrix to such vectors requires fewer PDE solves than in the full case.

Note that we need to compute the dominant eigenvectors of the prior-preconditioned data misfit Hessian \tilde{\mathcal{H}}_{\text{mis}}, which is symmetric, and therefore the Lanczos method could be used. At the j-th Lanczos iteration, we need to apply \tilde{\mathcal{H}}_{\text{mis}} to the j-th Lanczos vector using the low-rank approach to obtain a result of the low-rank form above. Due to the low-rank truncation, the orthogonality of the Lanczos vectors is lost, and reorthogonalization has to be used to restore it. Moreover, the symmetry of the Lanczos process cannot be preserved for the low-rank form of the matrix-vector product. Therefore, we make use of the more general Arnoldi method to compute the dominant eigenvectors of \tilde{\mathcal{H}}_{\text{mis}}. We also observe that, when the truncation error is chosen appropriately, the Arnoldi method still yields real eigenvalues; this is shown in the numerical experiments in Section 6.

We now briefly recall the Arnoldi method, which is the more general procedure; we refer to [13] for details. The Arnoldi process for a matrix A can be written as

    A V_k = V_k H_k + h_{k+1,k}\, v_{k+1} e_k^T,

where V_k = [v_1, \ldots, v_k] consists of orthonormal columns and H_k is an upper Hessenberg matrix. The iterative build-up of the columns of V_k is captured by the recursion

    w = A v_k - \sum_{j=1}^{k} h_{j,k} v_j, \qquad h_{j,k} = v_j^T A v_k,

where the vector w is then normalized using the scalar h_{k+1,k} = \| w \| to give v_{k+1} = w / h_{k+1,k}. While this is well known, our goal here is to illustrate how this method is amenable to use within a low-rank framework. For the Arnoldi process considered in this manuscript, A = \tilde{\mathcal{H}}_{\text{mis}}, and the application of A to a vector v_k = \operatorname{vec}(W_k Z_k^T) results again in a low-rank matrix, i.e.,

    A v_k = \operatorname{vec}( \tilde{W}_k \tilde{Z}_k^T )

with factors \tilde{W}_k and \tilde{Z}_k of small rank. This is because \tilde{\mathcal{H}}_{\text{mis}} involves the inverse of a PDE operator in space and time, and it was shown in [35] that applying such an operator to a low-rank vector again gives a low-rank vector. We can then write the new vector w, before normalization, again in factored form by concatenating the factors appearing in the orthogonalization step; the rank of the factors grows accordingly. Using truncation techniques this rank growth can typically be controlled; for example, one can achieve the truncation by utilizing skinny QR factorizations [21] or truncated singular value decompositions [35].
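A minimal MATLAB sketch of such a truncation step (the function name and interface are ours) recompresses a factored vector \operatorname{vec}(W Z^T) via skinny QR factorizations of both factors and an SVD of the small core matrix.

    function [W, Z] = lr_truncate(W, Z, tol)
    % Recompress vec(W*Z') so that the discarded singular values are below
    % tol relative to the largest singular value.
    [Qw, Rw] = qr(W, 0);                 % skinny QR of the spatial factor
    [Qz, Rz] = qr(Z, 0);                 % skinny QR of the temporal factor
    [U, S, V] = svd(Rw * Rz');           % SVD of the small core matrix
    s = diag(S);
    r = max(1, sum(s > tol * s(1)));     % number of singular values to keep
    W = Qw * U(:, 1:r) * S(1:r, 1:r);    % absorb the singular values into W
    Z = Qz * V(:, 1:r);
    end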

In our numerical experiments, we show that maintaining the orthogonality of the basis vectors is crucial. Since we use low-rank approximations of the Lanczos vectors, the orthogonality and the symmetry of the Lanczos process are lost. We hence opt for the more expensive but stable Arnoldi process, where we orthogonalize against all previous basis vectors. The full reorthogonalization also demands more storage than in the Lanczos case, which makes the storage reduction another advantage of our approach: with full reorthogonalization, the storage costs increase for both the full and the low-rank scheme, but in the low-rank framework they stay significantly below those of the full scheme. The resulting low-rank Arnoldi method combines the standard Arnoldi recursion with the low-rank truncation described above.

We note that the biggest challenge for the low-rank Arnoldi method is the low-rank matrix-vector product with \tilde{\mathcal{H}}_{\text{mis}}, since this matrix is large and dense. We propose to use the tensor-train (TT) format, described in Section 6.1, to perform such computations efficiently. The full orthogonalization procedure is also performed in the TT format.

We use the standard (non-restarted) Arnoldi method for the low-rank eigenvector computations in Section 6. This is practical for the problems studied in this manuscript, since we only need to compute up to a few hundred Arnoldi vectors. Since the computational complexity of the full orthogonalization increases with the number of Arnoldi vectors, a restarted Arnoldi method [13] can be implemented in a low-rank version if more Arnoldi vectors are needed.
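The overall procedure can be sketched in MATLAB as follows; this is an illustrative rendering of the steps described above, not the exact listing used in our implementation. The handle apply_Hmis (an assumption) maps the low-rank factors of a vector to the low-rank factors of \tilde{\mathcal{H}}_{\text{mis}} times that vector, for instance via the forward and adjoint solves discussed in Section 6.1, and lr_truncate is the recompression routine sketched above. Inner products between factored vectors are evaluated via trace identities, so no full space-time vector is ever formed.

    function [V, H] = lr_arnoldi(apply_Hmis, W1, Z1, k, tol)
    % Low-rank Arnoldi sketch with modified Gram-Schmidt reorthogonalization.
    % Each basis vector v_j = vec(V{j}.W * V{j}.Z') is kept in factored form.
    lrdot = @(Wa, Za, Wb, Zb) sum(sum((Wa'*Wb) .* (Za'*Zb)));  % <vec(Wa*Za'), vec(Wb*Zb')>
    nrm = sqrt(lrdot(W1, Z1, W1, Z1));
    V{1}.W = W1 / nrm;  V{1}.Z = Z1;                 % normalized starting vector
    H = zeros(k+1, k);
    for j = 1:k
        [W, Z] = apply_Hmis(V{j}.W, V{j}.Z);         % low-rank matrix-vector product
        for i = 1:j                                  % full (re)orthogonalization
            H(i, j) = lrdot(V{i}.W, V{i}.Z, W, Z);
            W = [W, -H(i, j) * V{i}.W];              % w <- w - H(i,j) * v_i, in factored form
            Z = [Z, V{i}.Z];
        end
        [W, Z] = lr_truncate(W, Z, tol);             % keep the factor ranks small
        H(j+1, j) = sqrt(lrdot(W, Z, W, Z));
        V{j+1}.W = W / H(j+1, j);  V{j+1}.Z = Z;
    end
    % The eigenvalues of H(1:k,1:k) approximate the dominant eigenvalues of
    % the prior-preconditioned misfit Hessian.
    end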

5 Analysis of the Eigenfunctions

The eigenfunction analysis for the general case presented above is not straightforward. Our goal in this section is to give a theoretical justification for simple cases. We start with the case of a steady-state problem involving the two-dimensional Poisson equation. For this we consider the prior-preconditioned misfit Hessian

    \tilde{\mathcal{H}}_{\text{mis}} = \Gamma_{\text{prior}}^{1/2} A^{-*} \Gamma_{\text{noise}}^{-1} A^{-1} \Gamma_{\text{prior}}^{1/2}.

Assuming for now that \Gamma_{\text{noise}} and \Gamma_{\text{prior}} are (multiples of) the identity and absorbing the scalar factors, we are left with

    \mathcal{H}_{\text{mis}} = A^{-*} A^{-1}.

Note that we have assumed the data misfit Hessian to be defined in function space, with the prior and noise covariance operators being multiples of the identity operator. Our goal is to find eigenfunctions of \mathcal{H}_{\text{mis}}, with A^{-1} the inverse of the PDE operator and A^{-*} the inverse of the adjoint PDE operator. For this we use the eigenfunctions of the two-dimensional Laplacian operator satisfying -\Delta u = \lambda u with zero Dirichlet boundary conditions (see [23]). Note that for this problem A = A^* = -\Delta. We here consider the eigenvalue problem on a rectangular domain [0, a] \times [0, b] and state the eigenfunctions as

    u_{k,l}(x, y) = \sin(k \pi x / a) \sin(l \pi y / b), \qquad k, l = 1, 2, \ldots,

with the associated eigenvalues given by

    \lambda_{k,l} = (k \pi / a)^2 + (l \pi / b)^2, \qquad k, l = 1, 2, \ldots

(see [23]). Coming back to the misfit Hessian, we can write it as \mathcal{H}_{\text{mis}} = \Delta^{-1} \Delta^{-1} with \Delta^{-1} the inverse Laplacian. Assuming zero Dirichlet boundary conditions, it holds that

    \mathcal{H}_{\text{mis}}\, u_{k,l} = \lambda_{k,l}^{-2}\, u_{k,l}

for the eigenfunctions of the Laplacian, so the eigenvalues of the misfit Hessian are \lambda_{k,l}^{-2}, whose decay is quite rapid. This justifies the approximation of \mathcal{H}_{\text{mis}} by a small number of eigenfunctions. In [12], the authors give a similar justification for a one-dimensional convection-diffusion equation.
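As a small numerical illustration of this decay (our own check with a standard finite-difference Laplacian, not a computation from the paper), one can look at the eigenvalues of A^{-*} A^{-1} = A^{-2} for a discrete 2D Laplacian A:

    % Eigenvalue decay of the misfit Hessian for a discrete 2D Laplacian
    n  = 20;                                        % grid points per direction (illustrative)
    T  = gallery('tridiag', n);                     % 1D second-difference matrix
    A  = kron(speye(n), T) + kron(T, speye(n));     % 2D five-point Laplacian
    ev = sort(eig(full(A)));                        % eigenvalues of A (all positive)
    mis = sort(1 ./ ev.^2, 'descend');              % eigenvalues of A^{-2}
    mis(1:10) / mis(1)                              % rapid relative decay of the leading eigenvalues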

The goal of our approach is to add even more low-rankness to the computation. We want to illustrate that the eigenfunctions used to represent the misfit Hessian are themselves of small rank. When studying the eigenfunctions above, it is clear that the eigenfunction

    u_{k,l}(x, y) = \sin(k \pi x / a) \sin(l \pi y / b)

is already separated into its x and y components with small separation rank (cf. [16]), being the product of one function depending on the first variable and one depending on the second. For this simple case the low-rank structure of the eigenfunctions can thus be stated precisely; it is not so straightforward to establish similar results for more complicated equations.

For the space-time PDE-constrained optimization problem discussed earlier, we note that

where we collected all scalar factors into a single constant. Our aim is to establish eigenvalue and eigenvector results for

We note that this fits the well-known relation that the singular values of a matrix are the square roots of the eigenvalues of the product of the matrix with its own transpose, which, assuming full rank, is a symmetric and positive definite matrix. Now assuming the SVDs

we obtain

From this it follows that

is the eigendecomposition we are after, which has the same eigenvectors as the misfit Hessian. As the eigenvalues quickly decay to zero because of the compactness of the underlying operator, we only need a small number of columns of the eigenvector matrix. Our aim in this paper is to express each such column further in a low-rank fashion. For this we note that, ignoring superindices, we get

and hence a vector of rank one if the eigenvectors are all real. Complex eigenvectors would introduce only a small rank increase, and a simultaneous diagonalization of the corresponding matrix pencil can lead to small eigenvector ranks of the overall system. This justifies our choice of approximating the eigenvectors in low-rank form.

6 Numerical Results

In this section, we study the performance of the low-rank Arnoldi algorithm presented in Section 4. The results presented in this section are based on an implementation of the above-described algorithms within MATLAB. We perform the discretization of the PDE operators within the IFISS [30] framework, using finite elements for the heat equation and the streamline upwind Petrov–Galerkin (SUPG) method for the convection-diffusion equation. Our experiments are performed for a fixed final time with a varying number of time steps. As the domain we consider the unit square, but other domains are of course possible. We specify the boundary conditions for each problem separately. Throughout the results section we fixed the truncation tolerance at a value for which we observed good results; additionally, we performed experiments, not listed here, with a different tolerance, for which we also observed fast convergence. Larger tolerances should be combined with a deeper analysis of the algorithms and a combination with flexible outer solvers. All results are computed on a standard Ubuntu desktop Linux machine with an Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz and 8GB of RAM.

The mathematical model we consider in this section is given by the following instationary PDE,

    \partial_t y + \mathcal{L}(y) = 0 \quad \text{in } \Omega \times [0, T],

where \mathcal{L} is a spatial PDE operator; for the numerical experiments we consider the case of the heat equation, \mathcal{L}(y) = -\Delta y, and the convection-diffusion equation, \mathcal{L}(y) = -\nu \Delta y + w \cdot \nabla y with a given wind w. For all the numerical tests, the initial concentration y(\cdot, 0) represents the unknown parameter m, and the observation data are collected by sensors, which are distributed in part of the domain \Omega.

As stated in Section 2, the statistical inverse problem with Gaussian noise and prior in a Bayesian formulation is related to a weighted least-squares problem. Here we use the same type of functional as in [12]: a weighted least-squares misfit between the observed state and the data plus a weighted regularization term for the unknown parameter. In this functional, the state y satisfies the PDE model above, \mathcal{C} is the observation operator, and the uncertain parameter is the initial condition for this numerical example. Here, we study the sparse observation case, where \mathcal{C} is built from the regional functions of the sensors: \chi_i equals one in the region of the i-th sensor and zero elsewhere.

Discretizing the functional in turn gives

where C is the discretization of the observation operator \mathcal{C}, the variable y is the discrete variant of the state stacked in time, and it satisfies the all-at-once discretized PDE model, whose system matrices come from the discretization of the PDE model in space and time.

Discretization of the objective function yields the weighting matrices of the misfit and of the regularization term: the misfit weight consists of the mass matrix on the observation region, repeated for each of the n_t time steps and scaled by the noise parameter, while the prior term is represented by a scaled identity matrix. Since we need to compute the square root of the prior covariance, we use a scaled identity whose scaling involves the mesh size h and the spatial dimension, mimicking a (lumped) mass matrix, with the identity of dimension equal to the number n_x of spatial variables. Other settings, such as smoothing operators used for the prior, have been studied in [7] and can be used with our approach as well.

As analyzed in Section 2, the posterior covariance matrix is given by the inverse of the Hessian of the discretized functional. Therefore,

    \Gamma_{\text{post}} = \big( \mathcal{H}_{\text{mis}} + \Gamma_{\text{prior}}^{-1} \big)^{-1}.

Note that for different PDE models, \mathcal{H}_{\text{mis}} is also different after discretization. In this section, we use two types of PDE models, the heat equation and the convection-diffusion equation, to show the performance of our low-rank algorithm for the approximation of \Gamma_{\text{post}}. We argue that our low-rank algorithm also applies to other time-dependent PDE operators; here we apply our method to both symmetric and unsymmetric systems.

We also point out that due to the uncertainty being given as the initial condition, the posterior Hessian is only of spatial dimension. Nevertheless, the Arnoldi process applied to this matrix requires the solution of space-time problems and the low-rank form of our approach results in a much reduced number of spatial solves. The complexity reduction is even more pronounced when the uncertainty is part of the system as a space-time variable such as the right hand side of the PDE.
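To make the cost structure explicit, the following MATLAB sketch applies the misfit Hessian to an initial-condition vector by one forward and one adjoint sweep over all time steps. It is a simplified stand-alone illustration under our own assumptions (implicit Euler time stepping, a diagonal 0/1 observation weighting on the sensor nodes, no mass weighting of the misfit), not the TT-based implementation described below.

    % Action of the misfit Hessian on an initial-condition vector m.
    % M, K: spatial mass and stiffness matrices; obs: 0/1 vector marking the
    % sensor nodes; tau: time-step size; nt: number of time steps (all illustrative).
    function Hm = apply_misfit_hessian(m, M, K, obs, tau, nt, beta_noise)
    E = M + tau * K;                         % implicit Euler system matrix
    Y = zeros(length(m), nt);
    y = m;                                   % the parameter is the initial condition
    for j = 1:nt                             % forward sweep in time
        y = E \ (M * y);
        Y(:, j) = y;
    end
    p = zeros(length(m), 1);
    for j = nt:-1:1                          % adjoint (backward) sweep in time
        p = E' \ (M' * p + beta_noise * (obs .* Y(:, j)));
    end
    Hm = M' * p;                             % misfit Hessian applied to m
    end

Each application thus requires 2 n_t spatial solves; in the low-rank approach the corresponding space-time solves are instead performed in factored form, as described in the following subsection.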

6.1 Implementation Details

According to the previous sections, the prior-preconditioned data misfit Hessian after discretization is built from the square root of the prior covariance, the discretized forward and adjoint space-time solution operators, and the observation and noise weighting. To apply the Arnoldi iteration to \tilde{\mathcal{H}}_{\text{mis}}, we therefore need to solve the space-time discretized PDE and the adjoint PDE. Taking the forward PDE as an example, this requires solving a linear system of Kronecker-product form,

    ( A_1 \otimes B_1 + A_2 \otimes B_2 )\, x = b,

or, equivalently, the matrix equation

    B_1 X A_1^T + B_2 X A_2^T = B, \qquad x = \operatorname{vec}(X), \quad b = \operatorname{vec}(B).

Here A_1, A_2 are temporal and B_1, B_2 are spatial matrices of appropriate sizes. The numerical solution of such matrix equations has been studied intensively, cf. [31] for an overview.

In this manuscript, we solve this system based on its tensor structure and use the alternating minimal energy (AMEn) approach [10], implemented in the tensor-train (TT) toolbox [25], to solve the tensor equation. At each AMEn iteration, either a left or a right Galerkin projection is applied to the system. Therefore, after the Galerkin projection, we need to solve a linear system either of the format

    ( \hat{A}_1 \otimes B_1 + \hat{A}_2 \otimes B_2 )\, \hat{x} = \hat{b}

or

    ( A_1 \otimes \hat{B}_1 + A_2 \otimes \hat{B}_2 )\, \hat{x} = \hat{b}.

Here \hat{A}_1, \hat{A}_2, \hat{B}_1, \hat{B}_2 are matrices of appropriate (reduced) dimensions after the Galerkin projection (cf. [10] for details).

After the Galerkin projection, the size of the first system, which retains the full spatial dimension, is still relatively large, while the size of the second one is quite moderate. Therefore, Krylov solvers such as the generalized minimal residual (GMRES) method [28] or the induced dimension reduction method IDR(s) [32] can be used to solve the former, while a direct method can be used to solve the latter.

To accelerate the convergence of the Krylov solver, we use a preconditioner of the form

    P = \operatorname{diag}(\hat{A}_1) \otimes B_1 + \operatorname{diag}(\hat{A}_2) \otimes B_2

for the larger projected system. Here \operatorname{diag}(\cdot) extracts the diagonal entries of a matrix and forms a diagonal matrix from them, so that P is block diagonal with one spatial block per reduced temporal index. One can use standard techniques such as multigrid methods [38] or an incomplete LU factorization (ILU) [13] to approximate the application of P; here we use the backslash operator implemented in MATLAB.

We also want to point out that there are many other methods to efficiently solve such matrix equations, for example the low-rank factored alternating direction implicit (ADI) method (cf. [3]).

6.2 The Heat Equation

In this part, we use the 2D time-dependent heat equation on the unit square as an example to study the performance of our low-rank algorithm. Discretizing the equation in space using finite elements and in time using the implicit Euler method gives a linear system of dimension n_x n_t, where n_x is the number of spatial variables and n_t is the number of time steps. First we study the spectral properties of the prior-preconditioned data misfit part and the posterior covariance matrix. Discretizing the heat equation on a uniform grid for several values of n_t, we plot the largest eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} in Figure 1.

As shown in Figure 1, there are only a few dominant eigenvalues of \tilde{\mathcal{H}}_{\text{mis}}. In most cases, a relatively large truncation threshold is acceptable to approximate \Gamma_{\text{post}} and to reduce the uncertainty of the system, as will be shown later. Meanwhile, the number of time steps does not influence the decay rate of the eigenvalues. This makes it possible to compute a fixed number of Arnoldi vectors even for larger numbers of time steps.

Figure 1: Largest 50 eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} computed using the low-rank Arnoldi method.

We have seen from Figure 1 that the eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} decay sufficiently fast. Meanwhile, each eigenvector also exhibits a low-rank property. In Figure 2 we plot the maximum rank used to compute the low-rank approximation of the eigenvectors corresponding to the largest eigenvalues for different numbers of time steps. It shows that increasing the number of time steps keeps the rank of the low-rank approximation of the eigenvectors of \tilde{\mathcal{H}}_{\text{mis}} bounded. The truncation threshold for the low-rank approximation is kept fixed throughout.

Figure 2: Maximum rank for eigenvectors of \tilde{\mathcal{H}}_{\text{mis}}.

As illustrated in Figure 2, the eigenvectors admit a low-rank property. We therefore perform a low-rank approximation of each Arnoldi vector throughout the Arnoldi iteration. Since this low-rank approximation is employed, the orthogonality of the Arnoldi basis is lost. As we only need to compute a few Arnoldi vectors in practice, we use a modified Gram-Schmidt method to perform full reorthogonalization. Other types of reorthogonalization, such as selective orthogonalization [13] or periodic reorthogonalization [12], are also possible.

Here we use examples discretized on a uniform grid with two different numbers of time steps to illustrate the effectiveness of the low-rank Arnoldi method. First, we plot the largest eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} computed using the low-rank Arnoldi method. Next, we form \tilde{\mathcal{H}}_{\text{mis}} explicitly and use 'eigs' in MATLAB to compute its largest eigenvalues. These results are shown in Figure 3. It is clearly visible that the low-rank Arnoldi method with reorthogonalization recovers the eigenvalues exactly.

As shown above, increasing the number of time steps does not influence the decay rate of the eigenvalues of \tilde{\mathcal{H}}_{\text{mis}}. Next we show that increasing the number of spatial parameters does not influence the decay rate of the eigenvalues either.

We fix the number of time steps to n_t = 30 and compute the largest eigenvalues of \tilde{\mathcal{H}}_{\text{mis}}; the results are shown in Figure 4. The maximum rank used in the low-rank Arnoldi method for different numbers of spatial parameters is also shown in Figure 4.

Figure 3: Eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} computed using 'eigs' and the low-rank Arnoldi method.

Figure 4: Eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} and maximum rank for the low-rank Arnoldi method, n_t = 30.

Figure 4 shows that increasing the number of parameters does not influence the decay of the eigenvalues of \tilde{\mathcal{H}}_{\text{mis}}. It is expected that, for different numbers of parameters, the eigenvalues converge to the eigenvalues of the prior-preconditioned operator as long as the discretization of the parameter field is fine enough. This is illustrated by the eigenvalues shown in Figure 4. Meanwhile, the maximum ranks used in the low-rank Arnoldi method are also bounded by a constant as the number of parameters increases, which is likewise shown in Figure 4.

As stated before, a moderate threshold for the eigenvalue computation of \tilde{\mathcal{H}}_{\text{mis}} is enough to reduce the uncertainty and to approximate the posterior covariance matrix. Next, we plot the diagonal entries of the approximated posterior covariance matrix, i.e., the variance at the grid points, for different truncation thresholds in the eigenvalue computation. We use a sparse observation setting where the sensors are uniformly distributed inside the domain and each sensor covers only a small fraction of the domain. Here we set \beta_{\text{noise}} = 10^4 \beta_{\text{prior}} and choose the prior covariance matrix as a scaled identity matrix of appropriate size.

Figure 5: Diagonal entries of \Gamma_{\text{post}}, n_t = 30, \beta_{\text{noise}} = 10^4 \beta_{\text{prior}}.

For the sparse observation case, the covariance updates are mostly clustered around the areas where data are observed, while the rest of the domain is dominated by the prior. Uncertainty can only be reduced in areas around the locations of the sensors. This is clearly shown in Figure 5, where the dark colored areas coincide with the locations of the sensors and have the lowest variance. Decreasing the truncation threshold, we observe that the variance is further reduced around the sensor locations. For even smaller threshold values, no further reduction in the variance is achieved, as all essential information is already captured. Figure 5 thus shows that, as long as the eigenvalue computation has converged, a moderate threshold is enough to approximate the posterior covariance matrix and to reduce the uncertainty.

As analyzed in Section 3, the eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} are related to the ratio between \beta_{\text{noise}} and \beta_{\text{prior}}, and this ratio gives different updates of the posterior covariance matrix. Next, we use different ratios and plot the variance of the parameters. These results are shown in Figures 6 and 7. The prior covariance matrix is again set to a scaled identity matrix of appropriate size.

For the case \beta_{\text{noise}} = 10^6 \beta_{\text{prior}}, the number of required Arnoldi iterations grows as the truncation threshold is decreased, and for \beta_{\text{noise}} = 10^8 \beta_{\text{prior}} even more Arnoldi iterations are needed.

Figure 6: Diagonal entries of \Gamma_{\text{post}}, n_t = 30, \beta_{\text{noise}} = 10^6 \beta_{\text{prior}}.

Figure 7: Diagonal entries of \Gamma_{\text{post}}, n_t = 30, \beta_{\text{noise}} = 10^8 \beta_{\text{prior}}.

Figures 6 and 7 show that with an increasing ratio between \beta_{\text{noise}} and \beta_{\text{prior}}, the diagonal entries of \Gamma_{\text{post}} become smaller. This implies that, to further reduce the posterior variance, we need a bigger ratio between \beta_{\text{noise}} and \beta_{\text{prior}}. This can be explained as follows: the weight of the data misfit part in the optimization problem grows with this ratio, which means that the data misfit is optimized more strictly, and the error between the estimate and the observed data therefore becomes smaller. However, this yields a more ill-conditioned problem, and more Arnoldi iterations are needed. Therefore, a balance between covariance reduction and computational effort is required, with our approach enabling the storage of many Arnoldi vectors due to the complexity reduction of the low-rank approach.

6.3 Convection-Diffusion Equation

In this section, we study our low-rank approach for a stochastic inverse convection-diffusion problem. Here, the convection-diffusion operator is given by

    \mathcal{L}(y) = -\nu \Delta y + w \cdot \nabla y,

with viscosity parameter \nu and a given wind w. The computational domain is a square, the inflow is posed on the lower boundary, and the outflow is posed on the upper boundary. Boundary conditions are prescribed according to the analytic solution of the convection-diffusion equation, which is described as Example 3.3.1 in [11]. We use the streamline upwind Petrov–Galerkin (SUPG) finite element method to discretize the convection-diffusion equation.

First, we show the eigenvalue decay of \tilde{\mathcal{H}}_{\text{mis}} for different settings of the viscosity parameter \nu. Here we set the number of time steps to 30. We plot the 50 largest eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} for different \nu in Figure 8.

As shown in Figure 8, the eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} decay rapidly for large \nu, while the decay slows down as \nu gets smaller. Therefore, more Arnoldi iterations are needed to obtain a satisfactory approximation of \Gamma_{\text{post}}. For smaller \nu, the largest eigenvalue is also bigger than for larger \nu, as shown in Figure 8, and the first few eigenvectors form a more dominant subspace than for bigger \nu. It is therefore possible to choose a larger truncation threshold for smaller \nu, as will be shown later.

Figure 8: 50 largest eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} and maximum ranks at each low-rank Arnoldi iteration for different \nu.

The maximum rank at each low-rank Arnoldi iteration for different \nu is also shown in Figure 8. It shows that the maximum rank does not increase as \nu decreases and is bounded by a small constant. Therefore, the complexity for both computation and storage is \mathcal{O}(n_x + n_t) for the low-rank Arnoldi approach.

As shown in Figure 8, the eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} decay more slowly for smaller \nu, and more Arnoldi iterations are needed, which is due to the nature of the problem. For such problems with smaller \nu, our low-rank approach is far superior to the standard Arnoldi method used in [12], since many more Arnoldi vectors need to be computed and stored; this is still feasible with the approach presented here.

Next, we fix \nu and compute the 50 largest eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} for different numbers of time steps n_t, which are shown in Figure 9. We are also interested in the relation between the maximum rank at each Arnoldi iteration and the number of time steps n_t; this is also shown in Figure 9.

Figure 9: 50 largest eigenvalues of \tilde{\mathcal{H}}_{\text{mis}} and maximum ranks at each low-rank Arnoldi iteration for different n_t.

As shown in Figure 9, the eigenvalue decay of \tilde{\mathcal{H}}_{\text{mis}} behaves similarly as the number of time steps increases. The maximum ranks at each low-rank Arnoldi iteration for different n_t are bounded by a moderately small constant, which is independent of n_t.

Figures 8 and 9 show that the maximum rank at each low-rank Arnoldi iteration is almost invariant with respect to the number of time steps n_t and the viscosity parameter \nu. This makes our low-rank Arnoldi method appealing even for complicated convection-dominated stochastic inverse problems over a long time horizon.

Next, we show the diagonal entries of \Gamma_{\text{post}} for different settings of \nu and of the eigenvalue truncation threshold. Here we fix the number of time steps and use a uniform grid to discretize the convection-diffusion equation. The results are given in Figures 10 and 11.

For both \nu = 10^{-2} and \nu = 10^{-3}, the number of required low-rank Arnoldi iterations grows when the truncation threshold for the eigenvalues is decreased. Note that often a further reduction in the truncation parameter does not yield better results, as all the essential information is already captured.

Figures 10 and 11 illustrate that the uncertainty (the variance of the unknowns) is already reduced dramatically even if we choose a relatively large threshold.

As shown in Figure 8, the smaller \nu becomes, the bigger the largest eigenvalue of \tilde{\mathcal{H}}_{\text{mis}}. Therefore, the first few eigenvectors for smaller \nu form a more dominant subspace. This in turn implies that the uncertainty is reduced much more for smaller \nu when we use the same eigenvalue truncation threshold. We observe this in Figures 10 and 11.

Figure 10: Diagonal entries of \Gamma_{\text{post}}, \nu = 10^{-2}.

Figure 11: Diagonal entries of \Gamma_{\text{post}}, \nu = 10^{-3}.

We can conclude that, for the stochastic convection-diffusion inverse problem, our low-rank Arnoldi approach is flexible and efficient for different time horizon lengths and viscosity parameters. It is particularly attractive for convection-dominated stochastic inverse problems with a long time horizon.

7 Conclusions

In this manuscript, we proposed a low-rank Arnoldi method to approximate the posterior covariance matrix that appears in stochastic inverse problems. Compared with the standard Arnoldi approach, our approach exploits the low-rank property of each Arnoldi vector and computes a low-rank approximation of these vectors. This reduces the complexity for both computation and storage from \mathcal{O}(n_x n_t) to \mathcal{O}(n_x + n_t), where n_x is the number of degrees of freedom in space and n_t the number of degrees of freedom in time. This makes solving large-scale stochastic inverse problems possible.

The low-rank approach introduced in this manuscript addresses linear stochastic inverse problems that can be put into the Bayesian framework. The next step of our work is to extend it to nonlinear stochastic inverse problems, which remains a big challenge.

Acknowledgements

This work was supported by the European Regional Development Fund (ERDF/ EFRE: ZS/2016/04/78156) within the research center dynamic systems (CDS).

References

  1. R. Andreev and C. Tobler, Multilevel preconditioning and low-rank tensor iteration for space-time simultaneous discretizations of parabolic PDEs, Numer. Linear Algebra Appl., 22 (2015), pp. 317–337.
  2. P. Benner, M. Köhler, and J. Saak, M.E.S.S.–The Matrix Equations Sparse Solvers library, http://svncsc.mpi-magdeburg.mpg.de/trac/messtrac.
  3. P. Benner and P. Kürschner, Computing real low-rank solutions of Sylvester equations by the factored ADI method, Computers and Mathematics with Applications, 67 (2014), pp. 1656–1672.
  4. P. Benner and J. Saak, Numerical solution of large and sparse continuous time algebraic matrix Riccati and Lyapunov equations: a state of the art survey, GAMM-Mitt., 36 (2013), pp. 32–52.
  5. M. Benzi, E. Haber, and L. Taralli, A preconditioning technique for a class of PDE-constrained optimization problems, Advances in Computational Mathematics, 35 (2011), pp. 149–173.
  6. A. Borzi, Multigrid methods for parabolic distributed optimal control problems, Journal of Computational and Applied Mathematics, 157 (2003), pp. 365–382.
  7. T. Bui-Thanh, O. Ghattas, J. Martin, and G. Stadler, A computational framework for infinite-dimensional Bayesian inverse problems Part I: The linearized case, with application to global seismic inversion, SIAM J. Sci. Comput., 35 (2013), pp. A2494–A2523.
  8. D. Calvetti and E. Somersalo, Introduction to Bayesian scientific computing, vol. 2 of Surveys and Tutorials in the Applied Mathematical Sciences, Springer-Verlag, New York, 2007.
  9. S. V. Dolgov, TT-GMRES: Solution to a linear system in the structured tensor format, Russian Journal of Numerical Analysis and Mathematical Modelling, 28 (2013), pp. 149–172.
  10. S. V. Dolgov and D. V. Savostyanov, Alternating minimal energy methods for linear systems in higher dimensions, SIAM J. Sci. Comput., 36 (2014), pp. A2248–A2271.
  11. H. C. Elman, D. J. Silvester, and A. J. Wathen, Finite Elements and Fast Iterative Solvers with Applications in Incompressible Fluid Dynamics, Numerical Mathematics and Scientific Computation, Oxford University Press, New York, 2005.
  12. H. P. Flath, L. C. Wilcox, V. Akçelik, J. Hill, B. van Bloemen Waanders, and O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput., 33 (2011), pp. 407–432.
  13. G. H. Golub and C. F. van Loan, Matrix computations, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, MD, third ed., 1996.
  14. L. Grasedyck, D. Kressner, and C. Tobler, A literature survey of low-rank tensor approximation techniques, GAMM-Mitt., 36 (2013), pp. 53–78.
  15. W. Hackbusch, Multigrid methods and applications, vol. 4 of Springer Series in Computational Mathematics, Springer-Verlag, Berlin, 1985.
  16. W. Hackbusch, Tensor spaces and numerical tensor calculus, vol. 42 of Springer Series in Computational Mathematics, Springer, Heidelberg, 2012.
  17. R. Herzog and K. Kunisch, Algorithms for PDE-constrained optimization, GAMM-Mitt., 33 (2010), pp. 163–176.
  18. M. Hinze, R. Pinnau, M. Ulbrich, and S. Ulbrich, Optimization with PDE Constraints, Springer-Verlag, New York, 2009.
  19. K. Ito and K. Kunisch, Lagrange multiplier approach to variational problems and applications, vol. 15 of Advances in Design and Control, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2008.
  20. J. Kaipio and E. Somersalo, Statistical and computational inverse problems, vol. 160 of Applied Mathematical Sciences, Springer-Verlag, New York, 2005.
  21. D. Kressner and C. Tobler, Krylov subspace methods for linear systems with tensor product structure, SIAM J. Matrix Anal. Appl, 31 (2010), pp. 1688–1714.
  22. D. Kressner and C. Tobler, Low-rank tensor Krylov subspace methods for parametrized linear systems, SIAM J. Matrix Anal. Appl, 32 (2011), pp. 1288–1316.
  23. J. R. Kuttler and V. G. Sigillito, Eigenvalues of the Laplacian in two dimensions, SIAM Rev., 26 (1984), pp. 163–193.
  24. C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, Journal of Research of the National Bureau of Standards, 45 (1950), pp. 255–282.
  25. I. V. Oseledets, S. Dolgov, V. Kazeev, D. Savostyanov, O. Lebedeva, P. Zhlobich, T. Mach, and L. Song, TT-Toolbox, https://github.com/oseledets/TT-Toolbox.
  26. I. V. Oseledets and S. V. Dolgov, Solution of linear systems and matrix inversion in the TT-format, SIAM J. Sci. Comput., 34 (2012), pp. A2718–A2739.
  27. J. W. Pearson, M. Stoll, and A. J. Wathen, Regularization-robust preconditioners for time-dependent PDE-constrained optimization problems, SIAM J. Matrix Anal. Appl, 33 (2012), pp. 1126–1152.
  28. Y. Saad, Iterative methods for sparse linear systems, Society for Industrial and Applied Mathematics, Philadelphia, PA, second ed., 2003.
  29. Y. Saad, Numerical methods for large eigenvalue problems, vol. 66 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2011.
  30. D. Silvester, H. Elman, and A. Ramage, Incompressible Flow and Iterative Solver Software (IFISS), version 3.4, August 2015, http://www.manchester.ac.uk/ifiss/.
  31. V. Simoncini, Computational methods for linear matrix equations, SIAM Rev., 58 (2016), pp. 377–441.
  32. P. Sonneveld and M. B. van Gijzen, IDR(s): A family of simple and fast algorithms for solving large nonsymmetric systems of linear equations, SIAM J. Sci. Comput., 31 (2008), pp. 1035–1062.
  33. A. Spantini, T. Cui, K. Willcox, L. Tenorio, and Y. Marzouk, Goal-oriented optimal approximations of Bayesian linear inverse problems, arXiv preprint arXiv:1607.01881, (2016).
  34. M. Stoll, All-at-once solution of a time-dependent time-periodic PDE-constrained optimization problems, IMA J Numer Anal, 34 (2014), pp. 1554–1577.
  35. M. Stoll and T. Breiten, A low-rank in time approach to PDE-constrained optimization, SIAM J. Sci. Comput., 37 (2015), pp. B1–B29.
  36. A. M. Stuart, Inverse problems: a Bayesian perspective, Acta Numerica, 19 (2010), pp. 451–559.
  37. F. Tröltzsch, Optimal Control of Partial Differential Equations: Theory, Methods and Applications, American Mathematical Society, Providence, RI, 2010.
  38. P. Wesseling, An introduction to multigrid methods, Pure and Applied Mathematics (New York), John Wiley & Sons Ltd., Chichester, 1992.