Neural network augmented inverse problems for PDEs

Jens Berg  jens.berg@math.uu.se Department of Mathematics, Uppsala University
S-751 06 Uppsala, Sweden
Kaj Nyström kaj.nystrom@math.uu.se Department of Mathematics, Uppsala University
S-751 06 Uppsala, Sweden
Abstract

In this paper we show how to augment classical methods for inverse problems with artificial neural networks. The neural network acts as a prior for the coefficient to be estimated from noisy data. Neural networks are global, smooth function approximators and as such they do not require explicit regularization of the error functional to recover smooth solutions and coefficients. We give detailed examples using the Poisson equation in 1, 2, and 3 space dimensions and show that the neural network augmentation is robust with respect to noisy and incomplete data, mesh, and geometry.

1 Introduction

In this paper we study the classical coefficient approximation problem for partial differential equations (PDEs). The inverse problem consists of determining the coefficient(s) of a PDE given more or less noisy measurements of its solution. A typical example is the heat distribution in a material with unknown thermal conductivity. Given measurements of the temperature at certain locations, we are to estimate the thermal conductivity of the material by solving the inverse problem for the stationary heat equation.

The problem is of both practical and theoretical interest. From a practical point of view, the governing equation for some physical process is often known, but the material, electrical, or other properties are not. From a theoretical point of view, the inverse problem is a challenging, often ill-posed, problem in the sense of Hadamard. Inverse problems have been studied for a long time, starting with [24] in the 1940s and [29] and [20] in the 1960s, and popularized by [36] in the 1970s through their work on regularization techniques. Today there is a vast amount of literature available on inverse problems, and on PDE constrained optimization problems in general, and we refer the reader to the many survey papers and textbooks, and the references therein, for an overview. See for example [19, 6, 16, 2].

As the PDE cannot in general be solved analytically, a numerical method is required. A common choice is the finite element method (FEM), which has been used frequently and with success for a large class of inverse problems. See for example [21, 37, 11, 30]. Recently, there has been a lot of activity in a branch of numerical analysis known as probabilistic numerics, with successful applications to inverse problems and PDE constrained optimization in general. Probabilistic numerical methods are similar to, yet different from, the approach taken in this paper. In the probabilistic setting, one places a statistical prior, typically a Gaussian prior, on the unknown coefficient and uses Bayesian inference to compute statistics of the unknown coefficient. As more data is used in the inference, the variance of the coefficient is reduced and one ends up with a confidence interval for the true coefficient. See for example [8, 14, 12, 13, 31, 32, 7, 4].

In this work, we use the finite element method as supplied by the software package FEniCS [1, 28] to solve the underlying PDE. To compute the gradient needed for the PDE constrained optimization, we use the software dolfin-adjoint [9], which works together with FEniCS to automatically derive and solve the adjoint PDE. Finally, we augment FEniCS and dolfin-adjoint with an artificial neural network to represent the unknown coefficient in the PDE. The neural networks we consider in this paper are simple feedforward neural networks with sigmoid activation functions in the hidden layers and linear activations in the output layer. Such a neural network defines a smooth mapping which can approximate any continuous function to arbitrary accuracy [17]. The similarity with the probabilistic method is that a model for the coefficient is selected a priori, and as more data is analyzed, the model improves. The probabilistic method uses a statistical non-parametric model, whereas here we use a neural network, which is a deterministic parametric prior.

The model problem we are working with in this paper is the scalar Poisson equation in a subset $\Omega \subset \mathbb{R}^d$, $d = 1, 2, 3$, given by

$$-\nabla \cdot (q \nabla u) = f \quad \text{in } \Omega, \qquad (1.1)$$

subject to Dirichlet boundary conditions. The inverse problem is to compute the coefficient $q$, given a series of more or less noisy measurements of the solution $u$. It is clear that the inverse problem is ill-posed whenever $\nabla u = 0$ on an open set, as then there is no information available about the coefficient $q$ there. Moreover, well-posedness of the inverse problem depends on the norm under consideration. For example, consider the Poisson equation in one space dimension and a sequence of coefficients $q_n$ and observations $u_n$, with

(1.2)

In the $L^2$-norm given by

$$\|v\|_{L^2(\Omega)}^2 = \int_\Omega |v|^2 \, dx \qquad (1.3)$$

we have

(1.4)

and we can clearly see that the coefficient is not a continuous function of the data in the sense of Hadamard [22]. One can, however, show that this problem is well-posed in the Sobolev $H^1$-norm. As PDE constrained optimization in the $H^1$-norm is in general not practically applicable, since one does not in general have measurements of the derivatives of $u$, we will consistently use the standard $L^2$-norm.

2 The general inverse problem

We consider stationary partial differential equations of the general form

$$F(u, m) = 0 \quad \text{in } \Omega, \qquad Bu = g \quad \text{on } \partial\Omega, \qquad (2.1)$$

where $\Omega \subset \mathbb{R}^d$ is a bounded domain and $\partial\Omega$ its boundary. Moreover, $u$ is the solution variable, $B$ is a boundary operator, $g$ is the boundary data, and $m$ is an a priori unknown parameter.

The deviation between the computed solution $u$ and the measured data $\hat{u}$ is defined by the error functional

$$J(u, m) = \frac{1}{2}\int_\Omega |u - \hat{u}|^2 \, dx, \qquad (2.2)$$

and the goal is to compute $m^{*}$ such that

$$m^{*} = \operatorname*{arg\,min}_{m} J(u, m), \qquad (2.3)$$

subject to $u$ satisfying (2.1). To efficiently solve the minimization problem (2.3), we require the gradient of the error functional (2.2) with respect to $m$. To compute this gradient, we will use the adjoint approach.

2.1 The adjoint equations

Note that the parameter $m$ defines the mapping

$$m \mapsto u(m), \qquad (2.4)$$

which can be computed by solving (2.1) by analytical or numerical means. This turns (2.1) and (2.2) into pure functions of $m$,

$$\hat{F}(m) = F(u(m), m) = 0, \qquad \hat{J}(m) = J(u(m), m), \qquad (2.5)$$

and the PDE constrained problem (2.3) becomes an unconstrained problem, as the constraint is built into the solution of (2.1). The total derivative of the error functional can be computed as

$$\frac{d\hat{J}}{dm} = \frac{\partial J}{\partial u}\frac{du}{dm} + \frac{\partial J}{\partial m}, \qquad (2.6)$$

where $\partial J/\partial u$ and $\partial J/\partial m$ are in general easily computed. To compute the troublesome solution Jacobian $du/dm$, we formally differentiate the PDE itself to obtain

$$\frac{d\hat{F}}{dm} = \frac{\partial F}{\partial u}\frac{du}{dm} + \frac{\partial F}{\partial m} = 0, \qquad (2.7)$$

or equivalently

$$\frac{\partial F}{\partial u}\frac{du}{dm} = -\frac{\partial F}{\partial m}. \qquad (2.8)$$

The relation (2.8) is known as the tangent linear system related to the functional (2.2). In the rest of the derivations, we will drop the hats and write $F = \hat{F}$ and $J = \hat{J}$ to ease the notation. The tangent linear operator $\partial F/\partial u$ is the linearization of the differential operator around the solution $u$, and it acts on the solution Jacobian $du/dm$. In the case of a linear PDE, the differential operator in (2.1) and the tangent linear operator coincide. Assuming that the tangent linear system is invertible, we can use (2.1) and (2.8) to solve for

$$\frac{du}{dm} = -\left(\frac{\partial F}{\partial u}\right)^{-1}\frac{\partial F}{\partial m}. \qquad (2.9)$$

Substituting (2.9) into (2.6) and taking the Hermitian transpose gives

$$\left(\frac{dJ}{dm}\right)^{*} = -\left(\frac{\partial F}{\partial m}\right)^{*}\left(\frac{\partial F}{\partial u}\right)^{-*}\left(\frac{\partial J}{\partial u}\right)^{*} + \left(\frac{\partial J}{\partial m}\right)^{*}. \qquad (2.10)$$

We define the adjoint variable $\lambda$ by

$$\lambda = \left(\frac{\partial F}{\partial u}\right)^{-*}\left(\frac{\partial J}{\partial u}\right)^{*}, \qquad (2.11)$$

or equivalently

$$\left(\frac{\partial F}{\partial u}\right)^{*}\lambda = \left(\frac{\partial J}{\partial u}\right)^{*}. \qquad (2.12)$$

The above equation (2.12) is the adjoint, or dual, equation associated with the forward problem (2.1) and the error functional (2.2). By solving the adjoint equation (2.12) and substituting into (2.6), we can compute the total derivative of the error functional as

$$\frac{dJ}{dm} = -\lambda^{*}\frac{\partial F}{\partial m} + \frac{\partial J}{\partial m}. \qquad (2.13)$$

With the total derivative of the error functional given by (2.13), any gradient based optimization method can be used to solve the minimization problem (2.3).

3 The numerical method

In general, the PDE (2.1) cannot be solved analytically, and hence neither the mapping (2.4) nor the gradient (2.13) can be computed. In practice, a numerical method is used to solve both the PDE (2.1) and the adjoint equation (2.12). A common choice is FEM. To use FEM effectively, one selects a finite element space for the solution and another space for the coefficient. Typically, the coefficient space is the space of piecewise constants, to reduce the number of degrees of freedom. As this space consists of local basis functions, one needs regularization of the error functional to reconstruct a smooth coefficient. Typically, one adds Tikhonov regularization to the error functional of the form

$$J_{\alpha}(u, q) = J(u, q) + \frac{\alpha}{2}\|q\|_{L^2(\Omega)}^2, \qquad (3.1)$$

where $\alpha$ is a regularization parameter. If, for some reason, an estimate of the coefficient and of the noise level is known a priori, one can further improve the results by adding generalized Tikhonov regularization of the form

$$J_{\beta}(u, q) = J(u, q) + \frac{\beta}{2}\|q - q_0\|_{L^2(\Omega)}^2, \qquad (3.2)$$

where $q_0$ is the a priori estimated coefficient and the regularization parameter $\beta$ depends on the noise level. The actual values of the regularization parameters $\alpha$ and $\beta$ depend on the problem at hand, and finding optimal values is a non-trivial task. The optimal generalized Tikhonov regularization parameter can be computed by using, for example, the Morozov discrepancy principle or heuristic L-curves. These methods are, however, of limited practical use in a PDE constrained optimization context as they become extremely expensive. We do, however, discuss optimal regularization of the 1D example in Appendix A.
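As a concrete illustration, the following is a minimal sketch, in FEniCS/UFL notation, of how such regularization terms could be added to the assembled error functional. The $L^2$ penalty form and the parameter values are assumptions made for this example only; they are not used in the experiments of this section.

```python
from fenics import *

mesh = UnitIntervalMesh(100)
V = FunctionSpace(mesh, "CG", 1)

u = Function(V)        # forward solution (computed by solving the PDE)
u_obs = Function(V)    # measured data
q = Function(V)        # coefficient to be estimated
q0 = Function(V)       # a priori estimate of the coefficient, if available

alpha = 1e-3           # Tikhonov parameter (problem dependent)
beta = 1e-1            # generalized Tikhonov parameter (depends on the noise)

J = assemble(0.5 * (u - u_obs)**2 * dx)                      # misfit, cf. (2.2)
J_tikhonov = J + assemble(0.5 * alpha * q**2 * dx)           # cf. (3.1)
J_generalized = J + assemble(0.5 * beta * (q - q0)**2 * dx)  # cf. (3.2)
```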

In this section, we do not use any regularization in the error functional (2.2). We use only the most basic settings and avoid any fine tuning of hyperparameters, or any other parameters, to get as close to a black box solution strategy as possible. This means that the comparisons we make with pure FEM are not entirely fair, as finding the optimal function spaces and regularizations is beyond the scope of this paper. The main purpose of this paper is to present neural networks as possible augmentations to classical methods for inverse problems. Adding explicit regularization improves the neural network augmentation as well, as can be seen in Appendix A.

3.1 The finite element method and automatic differentiation

In this work, we have used the FEM software FEniCS to solve the PDE (2.1), together with dolfin-adjoint to automatically derive and solve the adjoint equation (2.12). The automatic differentiation in dolfin-adjoint works by overloading the solver functions in FEniCS and recording a graph of solver operations. Each node in the graph has an associated gradient operation, and the total derivative can be computed by reverse mode differentiation, i.e. backpropagation. Reverse mode differentiation has the benefit that the gradient computation is exact with respect to the FEM discretization, as long as the discretization is smooth enough. The only source of error in the computation of (2.13) is thus the numerical solution of the forward problem, at least as long as the adjoint equation can be solved by a direct method. For large enough systems, an iterative method must be used, which introduces additional errors; these are, however, usually small compared to the discretization errors of the forward problem.
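To make the workflow concrete, the following is a minimal sketch of a forward Poisson solve and a functional gradient computation with FEniCS and dolfin-adjoint. The mesh, function spaces, data, and initial coefficient are illustrative, and the exact import and functional API differ slightly between dolfin-adjoint versions.

```python
from fenics import *
from fenics_adjoint import *  # dolfin-adjoint overloads the FEniCS solvers

# Illustrative mesh and spaces; not the ones used in the experiments.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)   # solution space
Q = FunctionSpace(mesh, "CG", 1)   # space in which the coefficient is sought

q = interpolate(Constant(1.0), Q)  # initial guess for the coefficient
u = Function(V)
v = TestFunction(V)
f = Constant(1.0)
bc = DirichletBC(V, Constant(0.0), "on_boundary")

# Forward problem: -div(q grad u) = f with homogeneous Dirichlet data.
F = inner(q * grad(u), grad(v)) * dx - f * v * dx
solve(F == 0, u, bc)

# Error functional J = 1/2 ||u - u_obs||^2 with some (possibly noisy) data.
u_obs = interpolate(Expression("sin(pi*x[0])*sin(pi*x[1])", degree=2), V)
J = assemble(0.5 * (u - u_obs)**2 * dx)

# dolfin-adjoint has recorded the solve; it derives and solves the adjoint
# equation automatically and returns dJ/dq.
dJdq = compute_gradient(J, Control(q))
```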

3.2 Neural network representation of the coefficient

Rather than representing the unknown coefficient in a finite element space, we can represent the coefficient by a feedforward artificial neural network. That is, we let the coefficient be parameterized by the weights and biases of a neural network, here denoted by $W$ and $b$. This approach yields some immediate benefits. A neural network is a global, smooth function approximator which can approximate any continuous function to arbitrary accuracy by having a sufficient number of parameters [17, 25]. Since it is a global function, it can be cheaply evaluated anywhere in the domain without first searching for the correct mesh element and performing interpolation, which is an expensive operation.

The coefficient is computed by feeding forward

$$y^1 = x, \qquad y^{l+1} = \sigma^l\!\left(W^l y^l + b^l\right), \quad l = 1, \ldots, L-1, \qquad q(x) = y^L, \qquad (3.3)$$

where the $W^l$ are the weight matrices connecting layers $l$ and $l+1$, the $b^l$ are the vectors of biases for each layer, and $\sigma^l$ is the activation function of layer $l$. To determine the weights and biases, we seek $W^{*}$, $b^{*}$ such that

$$W^{*}, b^{*} = \operatorname*{arg\,min}_{W, b} J\big(u, q(x; W, b)\big). \qquad (3.4)$$

As there is one extra layer of parameters, we need to apply the chain rule once more to compute the total derivative of the error functional. Let $p$ denote any of the weight or bias parameters; then

$$\frac{dJ}{dp} = \frac{dJ}{dq}\frac{\partial q}{\partial p}. \qquad (3.5)$$

The first factor on the right hand side of (3.5) is computed as before by using FEniCS and dolfin-adjoint. The last factor is the gradient of the network output with respect to the network parameters, which can be computed exactly by using backpropagation or automatic differentiation.
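For concreteness, the following is a minimal NumPy sketch of the kind of small network used in the 1D examples below (one input, three sigmoid hidden neurons, one linear output), together with the gradient of its output with respect to its 10 parameters computed by hand-written backpropagation. The parameter packing and the evaluation points are our own choices for the illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(p):
    # Parameter layout (our choice): W1 (3x1), b1 (3,), W2 (1x3), b2 (1,).
    W1 = p[0:3].reshape(3, 1)
    b1 = p[3:6]
    W2 = p[6:9].reshape(1, 3)
    b2 = p[9:10]
    return W1, b1, W2, b2

def q_network(x, p):
    """Coefficient q(x) = W2 sigmoid(W1 x + b1) + b2, cf. (3.3)."""
    W1, b1, W2, b2 = unpack(p)
    h = sigmoid(W1 * x + b1[:, None])      # hidden activations, shape (3, N)
    return (W2 @ h + b2).ravel()           # linear output, shape (N,)

def dq_dp(x, p):
    """Gradient of the network output with respect to all 10 parameters."""
    W1, b1, W2, b2 = unpack(p)
    z = W1 * x + b1[:, None]
    h = sigmoid(z)
    dh = h * (1.0 - h)                     # derivative of the sigmoid
    grads = np.empty((10, x.size))
    grads[0:3] = W2.T * dh * x             # d q / d W1
    grads[3:6] = W2.T * dh                 # d q / d b1
    grads[6:9] = h                         # d q / d W2
    grads[9] = 1.0                         # d q / d b2
    return grads

# Evaluate the network and its parameter gradient on a 1D mesh; the factor
# dJ/dq in (3.5) would come from FEniCS/dolfin-adjoint.
x = np.linspace(0.0, 1.0, 101)
p = np.random.uniform(0.0, 1.0, 10)
q_values = q_network(x, p)
G = dq_dp(x, p)
```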

Note that the neural network augmentation is not restricted to FEM. Any numerical method which can efficiently solve the forward problem (2.1) and the adjoint problem (2.12) can be used in combination with the network. The neural network is simply a prior for the coefficient with good function approximation properties and efficient means for computing its gradient. The application of the chain rule in (3.5) separates the discretization of the forward and backward problems from the computation of the gradient of the network. As a side effect, the discretization of the PDE can be refined without increasing the number of parameters in the neural network. In the simplest FEM case, the coefficient is represented as a constant on each mesh element. The number of optimization parameters is then equal to the number of mesh elements, which quickly becomes infeasible if the mesh is refined.

In the numerical examples which follow, we have consistently used a neural network design with as few parameters as possible. A low number of parameters is beneficial both for the numerical optimization and for the implicit regularization. The designs are still, however, based on heuristics and experience. We found during this work that surprisingly small networks perform better than larger networks, as long as the number of parameters is sufficient to approximate the coefficient. Once a sufficient design has been found, the results start to deteriorate with increasing network complexity, due to overfitting and less efficient implicit regularization.

4 Moderately ill-posed examples

In the numerical examples, we have used an exact solution and coefficient to compute a forcing function in (2.1). The exact solution is used to generate the data in the error functional. In the case of added noise, we add a normally distributed value, scaled by a prescribed noise level, to each of the interior data points. The data on the boundary is always exact.

The results differ somewhat depending on the noise and random initialization of the network’s weights. In the figures and tables, we use Numpy’s random number generators with numpy.random.seed(2), for reproducibility, to add noise and initialize the weights. We use the same initial guess for the coefficient and settings for the optimizer in both the FEM and network cases.

The equation we use in the examples is the Poisson equation, which requires that the coefficient to be estimated is positive. To ensure that the coefficient is initially positive, we initialize the network's weights from a uniform distribution and initialize all biases to zero. Note that the right endpoint is excluded by default in NumPy's random number generators; this is of no practical importance.
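A short sketch of what the reproducible noise and initialization described above might look like is given below. The exact noise scaling is not spelled out in the text, so scaling the perturbation by the maximum of the data is an assumption, as are the array names and the 5% noise level used here.

```python
import numpy as np

np.random.seed(2)  # reproducibility, as used for the figures and tables

# Initial network parameters for the tiny 1-3-1 network of section 4.1:
# weights from a uniform distribution, biases zero, so that the initial
# coefficient is positive.
weights = np.random.uniform(0.0, 1.0, 6)
biases = np.zeros(4)

# Noisy interior data: add a normally distributed perturbation, scaled by the
# noise level, to each interior data point; the boundary data is kept exact.
noise_level = 0.05                                      # the 5% case
x = np.linspace(0.0, 1.0, 101)
u_exact = np.sin(np.pi * x)                             # illustrative data
u_data = u_exact.copy()
u_data[1:-1] += noise_level * np.abs(u_exact).max() * \
    np.random.randn(u_exact.size - 2)
```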

The examples which follow are called moderately ill-posed since the coefficients to be estimated are smooth and the measurement data is available in the whole domain. The purpose of the examples is to show the applicability and ease of use of the method due to the implicit regularization by the neural network prior.

4.1 One-dimensional Poisson equation

The one-dimensional Poisson equation with Dirichlet boundary conditions is given by

$$-\left(q(x)\, u_x(x)\right)_x = f(x) \quad \text{in } \Omega, \qquad u = g \quad \text{on } \partial\Omega, \qquad (4.1)$$

where $f$, $g$ are known and $q$ is the coefficient to be estimated. We discretize the domain into continuous piecewise linear elements in which the solutions are represented. We solve the minimization problem both with standard FEM, with the coefficient represented in the solution space, and with the coefficient represented by a neural network. Here, we use the BFGS [10] optimizer from SciPy [18], as the number of optimization parameters is rather small. We iterate until the norm of the gradient of the error functional is below the given tolerance.
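The outer optimization loop can then look roughly as follows. This is a sketch only: the functional and its gradient are stood in for by a simple quadratic, whereas in the actual computation they would come from the FEniCS/dolfin-adjoint solve combined with the chain rule (3.5), and the gradient tolerance shown is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def J_and_grad(p):
    # Placeholder: in the real computation, solve the forward PDE with the
    # network coefficient q(x; p), evaluate the error functional (2.2), and
    # assemble its gradient via (3.5). A quadratic stands in here so that the
    # optimizer interface can be shown end to end.
    J = 0.5 * np.dot(p - 1.0, p - 1.0)
    dJdp = p - 1.0
    return J, dJdp

p0 = np.random.uniform(0.0, 1.0, 10)       # initial network parameters
result = minimize(J_and_grad, p0, jac=True, method="BFGS",
                  options={"gtol": 1e-6})  # stop when ||grad J|| < gtol
p_opt = result.x
```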

The neural network is a tiny feedforward network with one input neuron, one linear output neuron, and one hidden sigmoid layer with 3 neurons. We choose the solution to be

(4.2)

and we compute the inverse problem for a few different exact coefficients and noise levels. The neural network has 10 parameters to be optimized, compared to 101 for the FEM problem (the same as the number of elements). We consider constant, linear, quadratic, and sinusoidal exact coefficients, without noise and with a 5% noise level. The results can be seen in Figures 1–4 and Table 1. It is clear that the network representation outperforms FEM in this case. The network representation is insensitive to noise and always produces smooth solutions and coefficients, while FEM does not converge to smooth solutions and coefficients due to the lack of regularization in the error functional. We can also see that the number of iterations increases in the network case as the coefficient becomes more oscillatory. A deeper network with higher capacity might help to mitigate this increase, as was seen in [3].

Remark 4.1.

For one-dimensional problems the number of mesh elements is usually rather limited. This means that the network is at a high risk of overfitting. In particular with noisy data, a network with too high capacity will fit the noise, which causes a severe increase in the number of optimization iterations and, of course, also a lack of generalization. This is the reason we have used a tiny network with only three neurons in the hidden layer. A larger network would require weight regularization, dropout, and/or any other method which reduces overfitting. See for example [23, 15, 35].

(a) Solutions with the optimized coefficients and no noise.
(b) The optimized coefficients without noise.
(c) Network solution with the optimized coefficient and 5% noise level.
(d) The optimized network coefficient with some noise.
(e) FEM solution with the optimized coefficient and 5% noise level.
(f) The optimized FEM coefficient with some noise.
Figure 1: Comparison between FEM and a neural network with constant coefficient .
(a) Solutions with the optimized coefficients and no noise.
(b) The optimized coefficients without noise.
(c) Network solution with the optimized coefficient and 5% noise level.
(d) The optimized network coefficient with some noise.
(e) FEM solution with the optimized coefficient and 5% noise level.
(f) The optimized FEM coefficient with some noise.
Figure 2: Comparison between FEM and a neural network with linear coefficient .
(a) Solutions with the optimized coefficients and no noise.
(b) The optimized coefficients without noise.
(c) Network solution with the optimized coefficient and 5% noise level.
(d) The optimized network coefficient with some noise.
(e) FEM solution with the optimized coefficient and 5% noise level.
(f) The optimized FEM coefficient with some noise.
Figure 3: Comparison between FEM and a neural network with quadratic coefficient .
(a) Solutions with the optimized coefficients and no noise.
(b) The optimized coefficients without noise.
(c) Network solution with the optimized coefficient and 5% noise level.
(d) The optimized network coefficient with some noise.
(e) FEM solution with the optimized coefficient and 5% noise level.
(f) The optimized FEM coefficient with some noise.
Figure 4: Comparison between FEM and a neural network with coefficient .
                            No noise                              5% noise
              #it   Time (s)   ||u-u*||   ||q-q*||    #it   Time (s)   ||u-u*||   ||q-q*||
Constant coefficient
  Network      19       1      3.65e-5    1.38e-3      19       1      6.67e-3    9.72e-3
  FEM         184       9      1.11e-3    1.30e-1    1748     103      3.25e-2    5.38e+00
Linear coefficient
  Network      29       2      2.26e-4    3.43e-3      37       2      7.47e-3    2.55e-2
  FEM         195      10      1.44e-3    2.50e-1    1228      75      3.07e-2    5.70e+00
Quadratic coefficient
  Network      61       3      1.20e-3    1.10e-2      56       3      7.45e-3    2.28e-2
  FEM         218      13      1.05e-3    2.05e-1    1856     105      3.14e-2    8.84e+00
Sinusoidal coefficient
  Network     324      14      2.56e-3    1.78e-2     240      11      1.07e-2    6.11e-2
  FEM         218      11      9.55e-4    1.42e-1    2116     117      3.14e-2    1.79e+01
Table 1: Performance comparison between FEM and the neural network representation of the coefficient for the 1D Poisson equation. #it is the number of iterations until the norm of the gradient of the error functional is below the given tolerance. The errors ||u-u*|| and ||q-q*|| are measured as the integral of a high-order interpolation onto the finite element space, as provided by the errornorm function in FEniCS, where u* denotes the unperturbed exact solution in the case of added noise.

4.2 Two-dimensional Poisson

The higher-dimensional Poisson equation with Dirichlet boundary conditions is given by

$$-\nabla \cdot (q \nabla u) = f \quad \text{in } \Omega, \qquad u = g \quad \text{on } \partial\Omega, \qquad (4.3)$$

where $\Omega$ is the computational domain and $\partial\Omega$ its boundary. The functions $f$, $g$ are known, and $q$ is the a priori unknown coefficient to be estimated. Here, we choose the solution to be

(4.4)

for four different exact coefficients. The network is a single hidden layer feedforward network with 10 neurons in the hidden layer. The optimization is performed using BFGS, and we iterate until the norm of the gradient of the error functional is below the given tolerance.

4.2.1 Unit square with uniform mesh

Here, we let $\Omega$ be the unit square, discretized by continuous piecewise linear elements. The coefficient represented by the network has 41 parameters, while it has 10201 parameters when represented in the finite element space. Due to the very different nature of the optimization problems, a direct comparison is no longer meaningful. The BFGS method scales quadratically in complexity with the number of optimization parameters and is only useful for small-scale optimization problems, where it is very efficient. The FEM minimization problems can instead be solved by, for example, the limited-memory BFGS method [26, 5]. By doing so, we see the same pattern as in the 1D case: FEM does not converge to smooth solutions or coefficients with added noise, due to the lack of regularization.

We plot the difference between the exact and computed coefficients in Figures 5–8 and summarize the results in Table 2. We can see the same pattern as in the 1D case: the neural network representation is insensitive to noise and we always recover a smooth coefficient, even without regularization. Note that the errors are larger at the corners of the domain. This is probably due to the fact that the optimization is performed in the $L^2$-norm, which allows larger errors at isolated points. The errors can probably be reduced by performing the optimization in a different norm instead. See for example [34] and the references therein.

(a) The optimized network coefficient without noise.
(b) The optimized network coefficient with some noise.
Figure 5: Difference between the optimized network and exact coefficient when .
(a) The optimized network coefficient without noise.
(b) The optimized network coefficient with some noise.
Figure 6: Difference between the optimized network and exact coefficient when .
(a) The optimized network coefficient without noise.
(b) The optimized network coefficient with some noise.
Figure 7: Difference between the optimized network and exact coefficient when .
(a) The optimized network coefficient without noise.
(b) The optimized network coefficient with some noise.
Figure 8: Difference between the optimized network and exact coefficient when .
                            No noise                              5% noise
              #it   Time (s)   ||u-u*||   ||q-q*||    #it   Time (s)   ||u-u*||   ||q-q*||
Network        41      13      1.11e-5    2.51e-4      40      12      2.35e-4    9.28e-4
Network        90      24      1.46e-4    2.99e-3     332      93      1.40e-3    1.74e-2
Network       402     111      2.07e-4    4.22e-3     517     138      1.76e-3    2.04e-2
Network      2112     560      6.32e-4    1.86e-2    1553     406      3.09e-3    4.48e-2
Table 2: Summary for the neural network representation of the coefficient for the 2D Poisson equation, with one row for each of the four exact coefficients (in the order of Figures 5–8). #it is the number of iterations. The errors ||u-u*|| and ||q-q*|| are measured as the integral of a high-order interpolation onto the finite element space, as provided by the errornorm function in FEniCS, where u* denotes the unperturbed exact solution in the case of added noise.

4.2.2 Mesh independent convergence

It was shown in [33] that the number of optimization iterations grows polynomially with the ratio between the volumes of the smallest and largest mesh elements. The reason is that most gradient based optimization libraries do not take the underlying function space inner product into account when computing the step size and direction. As the neural network augmentation is mesh independent, we expect the number of iterations to remain constant when locally refining the mesh, as long as the mesh is fine enough to accurately compute the solution of the forward problem.

To test the mesh independent convergence, we consider a simple test case. We solve the inverse problem for the Poisson equation (4.3) on the unit square, with the exact solution given by (4.4), without any added noise. We locally refine the mesh in concentric circles of decreasing radius around the center of the domain, as can be seen in Figure 9. The results are summarized in Table 3. We can clearly see that the number of iterations required until convergence becomes constant as the mesh is refined, and a stable minimum is found.
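A sketch of how such a concentric local refinement can be performed in FEniCS is shown below. The initial mesh size, the number of refinement levels, and the rule that the radius is halved at each level are assumptions made for the illustration.

```python
from fenics import *

mesh = UnitSquareMesh(20, 20)          # illustrative initial uniform mesh
center = Point(0.5, 0.5)

for level in range(7):
    radius = 0.5 * 0.5**level          # assumed: radius halved at each level
    markers = MeshFunction("bool", mesh, mesh.topology().dim(), False)
    for cell in cells(mesh):
        if cell.midpoint().distance(center) < radius:
            markers[cell] = True       # mark cells inside the circle
    mesh = refine(mesh, markers)       # only the marked cells are refined
    # ... re-solve the inverse problem on the refined mesh; the number of
    # optimization iterations should remain (approximately) constant.
```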

(a) Initial uniform mesh
(b) Final refined mesh.
Figure 9: Concentric mesh refinement.
Refinement level          0        1        2        3        4        5        6        7
#dofs                   441      725     1017     1479     2482     5040    11922    32000
Element volume ratio      1        2        4        8       16       32       64      128
#it                      32       35       40       40       40       40       40       40
Error               6.14e-3  4.91e-3  4.85e-3  4.84e-3  4.84e-3  4.84e-3  4.84e-3  4.84e-3
Table 3: Number of iterations for different levels of local refinement.

4.3 Three-dimensional Poisson

As a final example, we consider the three-dimensional Poisson equation (4.3) on a complex geometry from [27]. We take the exact coefficient and the analytical solution

(4.5)

with no noise, and compute the inverse problem. We use a network with 10 neurons in the single hidden layer, which gives a total of 51 optimization parameters. Convergence was achieved after 41 BFGS iterations with the same gradient norm tolerance, which is consistent with the results in Table 3. This indicates that the neural network augmentation method is also independent of the geometry. The domain and the difference between the exact and computed network coefficients can be seen in Figure 10.

(a) Domain and surface mesh.
(b) Coefficient difference.
Figure 10: The 3D Poisson equation on a complex geometry with 362172 degrees of freedom.
Remark 4.2.

The extra BFGS iteration required in the 3D case is because the forward problem could not be solved by a direct method due to memory requirements. We used the GMRES iterative method with an incomplete LU preconditioner to solve the forward problem. The solution to the forward problem is thus slightly less accurate due to the tolerance settings of the iterative method, which in turn means that the computed error functional gradients are also less accurate and more BFGS iterations are required.
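For reference, switching the forward solve to an iterative Krylov method in FEniCS can be done through the solver parameters, roughly as in the sketch below. The mesh (a unit cube standing in for the complex geometry), the constant coefficient, and the right-hand side are placeholders.

```python
from fenics import *

mesh = UnitCubeMesh(16, 16, 16)        # stand-in for the complex geometry
V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)
q = Constant(1.0)                      # in practice the network coefficient
f = Constant(1.0)
bc = DirichletBC(V, Constant(0.0), "on_boundary")

a = inner(q * grad(u), grad(v)) * dx
L = f * v * dx

u_h = Function(V)
# GMRES with an incomplete LU preconditioner instead of a direct solver.
solve(a == L, u_h, bc,
      solver_parameters={"linear_solver": "gmres",
                         "preconditioner": "ilu"})
```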

5 Severely ill-posed examples

In the previous section we discussed moderately ill-posed problems, where the coefficients are smooth and measurement data is available in the whole domain. Here we will show some examples of severely ill-posed problems, where the coefficients are discontinuous or the measurements incomplete.

5.1 1D discontinuous coefficient

Here we repeat the examples in section 4.1 for the Poisson problem (4.1) with

(5.1)

In this case, the analytical solution is not known and we compute the reference solution and data using FEM. We used the same small network with a single hidden layer with 3 neurons and the sigmoid activation function. The results can be seen in Figure 11 and Table 4. The network captures the discontinuity perfectly for noiseless data, and the smoothing effect of the implicit regularization is clearly seen when noise is added.

(a) Solutions with the optimized coefficients and no noise.
(b) The optimized coefficients without noise.
(c) Network solution with the optimized coefficient and 5% noise level.
(d) The optimized network coefficient and 5% noise level.
(e) FEM solution with the optimized coefficient and 5% noise level.
(f) The optimized FEM coefficient and 5% noise level.
Figure 11: Comparison between FEM and a neural network with a discontinuous coefficient.
Discontinuous coefficient
                     No noise                              5% noise
          #it   Time (s)   ||u-u*||    ||q-q*||   #it   Time (s)   ||u-u*||    ||q-q*||
Network   310      37      1.19e-05    9.33e-04   346     118      1.25e-02    1.64e-01
FEM       332      76      1.58e-03    2.07e-01   788     408      2.90e-02    5.34e+00
Table 4: Comparison between FEM and a neural network with a discontinuous coefficient.

5.2 1D incomplete data

We repeat the examples in section 4.1 for the constant coefficient case, under the assumption that we only have access to measurement data within a distance $\delta$ of the boundaries. In this case, the error functional (2.2) we want to minimize is reduced to

$$J(u, q) = \frac{1}{2}\int_{(0,\,\delta)\,\cup\,(1-\delta,\,1)} |u - \hat{u}|^2 \, dx. \qquad (5.2)$$

We let, as before, the neural network representing the unknown coefficient have a single hidden layer with 3 neurons and the sigmoid activation function. The results can be seen in Figures 12–14 for three different values of $\delta$. We can clearly see that the neural network is able to reconstruct the coefficient and the solution in the whole domain, despite the data being known only close to the boundaries.
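One way to realize such a restricted error functional in FEniCS is to integrate only over the cells near the boundary, using a subdomain measure, as in the following sketch; the value of delta and the mesh are illustrative.

```python
from fenics import *

delta = 0.2
mesh = UnitIntervalMesh(100)
V = FunctionSpace(mesh, "CG", 1)

# Mark the cells whose midpoint lies within delta of either boundary.
markers = MeshFunction("size_t", mesh, mesh.topology().dim(), 0)
for cell in cells(mesh):
    x = cell.midpoint().x()
    if x < delta or x > 1.0 - delta:
        markers[cell] = 1
dx_obs = Measure("dx", domain=mesh, subdomain_data=markers)

u = Function(V)      # forward solution (computed elsewhere)
u_obs = Function(V)  # measured data, available only near the boundaries

# Error functional integrated only over the marked cells, cf. (5.2).
J = assemble(0.5 * (u - u_obs)**2 * dx_obs(1))
```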

(a) Solutions with the optimized coefficients and no noise.
(b) The optimized coefficients without noise.
(c) Network solution with the optimized coefficient and 5% noise level.
(d) The optimized network coefficient and 5% noise level.
(e) FEM solution with the optimized coefficient and 5% noise level.
(f) The optimized FEM coefficient and 5% noise level.
Figure 12: Comparison between FEM and a neural network with a constant coefficient and .
(a) Solutions with the optimized coefficients and no noise.
(b) The optimized coefficients without noise.
(c) Network solution with the optimized coefficient and 5% noise level.
(d) The optimized network coefficient and 5% noise level.
(e) FEM solution with the optimized coefficient and 5% noise level.
(f) The optimized FEM coefficient and 5% noise level.
Figure 13: Comparison between FEM and a neural network with a constant coefficient and .
(a) Solutions with the optimized coefficients and no noise.
(b) The optimized coefficients without noise.
(c) Network solution with the optimized coefficient and 5% noise level.
(d) The optimized network coefficient and 5% noise level.
(e) FEM solution with the optimized coefficient and 5% noise level.
(f) The optimized FEM coefficient and 5% noise level.
Figure 14: Comparison between FEM and a neural network with a constant coefficient and .

5.3 Remark on multiple discontinuities and multiscale coefficients

The neural network augmentation without explicit regularization works well when there are only a few discontinuities of the same size. As the number of discontinuities grows, or the scales of the coefficient start to differ too much, the low capacity networks are no longer sufficient to approximate the coefficient. To approximate more complicated coefficients, the number of network parameters needs to be increased. However, without explicit regularization, we have been unable to obtain good results in these cases. Multiple discontinuities and highly oscillatory or multiscale coefficients require more advanced networks and/or explicit regularization, and will be the topic of future research.

6 Summary and conclusions

We have shown how to augment classical methods for inverse problems with artificial neural networks. We used the finite element method to discretize and solve the forward problem, and automatic differentiation to derive and solve the adjoint problem related to the error functional. The total gradient of the error functional could then be computed exactly by combining the automatic adjoint for the finite element part with backpropagation/automatic differentiation for the neural network part. By the chain rule, the discretization of the forward and backward problems is factored from the neural network prior in the computation of the gradient of the error functional. With the gradient of the error functional given, any gradient based optimization method can be used.

The neural network acts as a prior for the coefficient to be estimated from noisy data. As neural networks are global, smooth, universal approximators, they do not require explicit regularization of the error functional to recover both smooth solutions and coefficients. The convergence of classical optimization methods, with the neural network augmentation, is independent of both the mesh and geometry in contrast to standard finite elements. As the number of optimization parameters is independent of the mesh, large scale inverse problems can be computed with efficient optimizers, such as BFGS, as long as the number of network parameters is rather small.

The implicit regularization by low capacity network priors allows a smooth reconstruction of discontinuous coefficients as long as there are not too many discontinuities. Since neural networks are globally defined, they allow the reconstruction of the coefficient in the whole domain even when the measurement data is available only close to the boundaries.

For multiple discontinuities and multiscale coefficients, more research is needed on explicit regularization and the choice of neural network design.

7 Acknowledgments

We would like to thank the anonymous reviewers for their thorough reviews and constructive suggestions on how to improve the quality of the paper.

The authors were partially supported by a grant from the Göran Gustafsson Foundation for Research in Natural Sciences and Medicine.

Appendix A Optimal regularization of the 1D Poisson equation

We return to the problem in section 4.1 to discuss optimal regularization for a fair comparison with FEM. We are to minimize the generalized Tikhonov functional (3.2), which is restated here for convenience,

$$J_{\beta}(u, q) = J(u, q) + \frac{\beta}{2}\|q - q_0\|_{L^2(\Omega)}^2. \qquad (A.1)$$

In this case we have a complete description of the noise, which is uniformly distributed and added to each measurement with a known noise level. Moreover, since the problem is artificially constructed, we know the exact value of the a priori coefficient $q_0$ in (A.1). This allows us to compute the optimal value of $\beta$ using the Morozov discrepancy principle for the various coefficients to be estimated. The Morozov discrepancy principle amounts to finding the unique zero of the monotone function

$$\phi(\beta) = \|u_{\beta} - \hat{u}\|_{L^2(\Omega)} - \delta, \qquad (A.2)$$

where $u_{\beta}$ denotes the solution obtained with the coefficient that minimizes (A.1) for the given $\beta$, and $\delta$ denotes the noise level. The computation of $\phi(\beta)$ requires that the inverse coefficient problem is solved by minimizing (A.1). Each evaluation of (A.2) thus requires solving the forward and backward problems until convergence in the coefficient for the given $\beta$. If the zero of (A.2) is computed numerically by an iterative method, this procedure is repeated in each iteration, making the computation of an optimal regularization extremely expensive. For the 1D Poisson problem, however, we can perform the computations to obtain the optimal regularization parameters, as shown in Table 5.

Coefficient    Optimal parameter (FEM)    Optimal parameter (Network)
constant               10.3                         0.29
linear                 18.6                         0.29
quadratic              17.3                         0.29
sinusoidal             12.0                         0.29
Table 5: Optimal regularization parameters computed by the Morozov discrepancy principle for a 5% noise level and the known exact coefficients.
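As a rough illustration of the procedure, the sketch below finds the zero of the discrepancy function with a standard root finder. The inner solve is stood in for by a mock reconstruction; in practice each evaluation is the full PDE constrained optimization of (A.1) run to convergence, which is what makes the approach so expensive. The bracket, data, and noise level are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def solve_regularized_inverse_problem(beta, data):
    # Placeholder standing in for the full FEM/network optimization of (A.1);
    # it returns a mock reconstruction so that the root finding can be shown.
    return data * (1.0 - np.exp(-1.0 / beta))

def discrepancy(beta, data, noise_level):
    # Morozov discrepancy: data misfit of the regularized reconstruction
    # minus the noise level, cf. (A.2).
    u_beta = solve_regularized_inverse_problem(beta, data)  # expensive in practice
    return np.linalg.norm(u_beta - data) - noise_level

data = np.sin(np.pi * np.linspace(0.0, 1.0, 101))
noise_level = 0.05 * np.linalg.norm(data)

# Locate the unique zero of the monotone discrepancy function; the bracket
# [1e-6, 1e2] is chosen for the illustration only.
beta_opt = brentq(discrepancy, 1e-6, 1e2, args=(data, noise_level))
```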

The experiments in section 4.1 are repeated here with optimal regularization. It can clearly be seen that FEM outperforms the neural networks this time. However, since optimal regularization is rarely available, we propose that neural networks be used to compute the estimated coefficient $q_0$ in (A.1), which can then be used in a generalized Tikhonov regularization.

(a) Solutions with the optimized coefficients.
(b) The optimized coefficients.
Figure 15: Comparison between optimal FEM and a neural network with constant coefficient and 5% noise level.
(a) Solutions with the optimized coefficients.
(b) The optimized coefficients.
Figure 16: Comparison between optimal FEM and a neural network with linear coefficient and 5% noise level.
(a) Solutions with the optimized coefficients.
(b) The optimized coefficients.
Figure 17: Comparison between optimal FEM and a neural network with quadratic coefficient and 5% noise level.
(a) Solutions with the optimized coefficients.
(b) The optimized coefficients.
Figure 18: Comparison between optimal FEM and a neural network with sine coefficient and 5% noise level.
                 #it   Time (s)   ||u-u*||    ||q-q*||
Constant coefficient
  Network         12       1      4.06e-03    5.34e-03
  FEM             40       4      1.26e-03    5.00e-04
Linear coefficient
  Network        354      46      2.90e-03    5.03e-03
  FEM             36       8      9.30e-04    2.06e-04
Quadratic coefficient
  Network        281      78      3.26e-03    5.43e-03
  FEM             37      12      9.99e-04    2.44e-04
Sinusoidal coefficient
  Network        473     190      4.48e-03    8.09e-03
  FEM             38      19      1.38e-03    4.82e-04
Table 6: Performance comparison between neural networks and FEM with optimal regularization and a 5% noise level.

References

  • [1] M. S. Alnæs, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. E. Rognes, and G. N. Wells. The FEniCS project version 1.5. Archive of Numerical Software, 3(100), 2015.
  • [2] L. Beilina and M. V. Klibanov. Approximate Global Convergence and Adaptivity for Coefficient Inverse Problems. Springer-Verlag New York, 2012.
  • [3] J. Berg and K. Nyström. A unified deep artificial neural network approach to partial differential equations in complex geometries. ArXiv e-prints, Nov. 2017.
  • [4] T. Bui-Thanh and M. Girolami. Solving large-scale PDE-constrained Bayesian inverse problems with Riemann manifold Hamiltonian Monte Carlo. arXiv pre-print 1407.1517, July 2014.
  • [5] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208, 1995.
  • [6] G. Chavent. Nonlinear Least Squares for Inverse Problems: Theoretical Foundations and Step-by-Step Guide for Applications. Scientific Computation. Springer Netherlands, 2010.
  • [7] J. Cockayne, C. Oates, T. Sullivan, and M. Girolami. Probabilistic meshless methods for partial differential equations and Bayesian inverse problems. ArXiv, May 2016.
  • [8] J. Cockayne, C. Oates, T. Sullivan, and M. Girolami. Bayesian probabilistic numerical methods. ArXiv e-prints, stat.ME 1702.03673, Feb. 2017.
  • [9] P. E. Farrell, D. A. Ham, S. W. Funke, and M. E. Rognes. Automated derivation of the adjoint of high-level transient finite element programs. SIAM Journal on Scientific Computing, 35(4):C369–C393, 2013.
  • [10] R. Fletcher. Practical methods of optimization. Wiley, 2nd edition, 1987.
  • [11] D. N. Háo and T. N. T. Quyen. Finite element methods for coefficient identification in an elliptic equation. Applicable Analysis, 93(7):1533–1566, 2014.
  • [12] P. Hennig. Fast probabilistic optimization from noisy gradients. In International Conference on Machine Learning (ICML), 2013.
  • [13] P. Hennig and M. Kiefel. Quasi-Newton methods – a new direction. Journal of Machine Learning Research, 14:834–865, Mar. 2013.
  • [14] P. Hennig, M. A. Osborne, and M. Girolami. Probabilistic numerics and uncertainty in computations. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 471(2179), 2015.
  • [15] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. ArXiv e-prints, July 2012.
  • [16] M. Hinze, R. Pinnau, M. Ulbrich, and S. Ulbrich. Optimization with PDE Constraints. Mathematical Modelling: Theory and Applications. Springer Netherlands, 2009.
  • [17] K. Hornik, M. Stinchcombe, and H. White. Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks, 3(5):551–560, 1990.
  • [18] E. Jones, T. Oliphant, P. Peterson, et al. SciPy: Open source scientific tools for Python. http://www.scipy.org, 2001–. Accessed 2017-12-12.
  • [19] S. I. Kabanikhin. Definitions and examples of inverse and ill-posed problems. Journal of Inverse and Ill-posed Problems, 16(4):317–357, 2008.
  • [20] M. Kac. Can one hear the shape of a drum? The American Mathematical Monthly, 73(4):1–23, 1966.
  • [21] T. Kärkkäinen, P. Neittaanmäki, and A. Niemistö. Numerical methods for nonlinear inverse problems. Journal of Computational and Applied Mathematics, 74(1):231–244, 1996.
  • [22] R. V. Kohn and B. D. Lowe. A variational method for parameter identification. ESAIM: Mathematical Modelling and Numerical Analysis, 22(1):119–158, 1988.
  • [23] Y. LeCun, L. Bottou, G. B. Orr, and K. R. Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–50. Springer, 1998.
  • [24] K. Levenberg. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2(2):164–168, 1944.
  • [25] X. Li. Simultaneous approximations of multivariate functions and their derivatives by neural networks with one hidden layer. Neurocomputing, 12(4):327–343, 1996.
  • [26] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503–528, Aug. 1989.
  • [27] A. Logg. Mesh generation in FEniCS. http://www.logg.org/anders/2016/06/02/mesh-generation-in-fenics, 2016. Accessed 2017-12-12.
  • [28] A. Logg, K.-A. Mardal, G. N. Wells, et al. Automated Solution of Differential Equations by the Finite Element Method. Springer, 2012.
  • [29] D. W. Marquardt. An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics, 11(2):431–441, 1963.
  • [30] N. Petra and G. Stadler. Model variational inverse problems governed by partial differential equations. Technical report, The Institute for Computational Engineering and Sciences, The University of Texas at Austin, 2011.
  • [31] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Machine learning of linear differential equations using Gaussian processes. Journal of Computational Physics, 348:683–693, 2017.
  • [32] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Numerical Gaussian processes for time-dependent and nonlinear partial differential equations. SIAM Journal on Scientific Computing, 40(1):A172–A198, 2018.
  • [33] T. Schwedes, S. W. Funke, and D. A. Ham. An iteration count estimate for a mesh-dependent steepest descent method based on finite elements and Riesz inner product representation. ArXiv e-prints, June 2016.
  • [34] T. Schwedes, D. A. Ham, S. W. Funke, and M. D. Piggott. Mesh dependence in PDE-constrained optimisation, pages 53–78. Springer International Publishing, 2017.
  • [35] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
  • [36] A. Tikhonov and V. Arsenin. Solutions of ill-posed problems. Scripta series in mathematics. Winston, 1977.
  • [37] J. Zou. Numerical methods for elliptic inverse problems. International Journal of Computer Mathematics, 70(2):211–232, 1998.