Numerical studies of the Lagrangian approach for reconstruction of the conductivity in a waveguide
Abstract
We consider an inverse problem of reconstructing the conductivity function in a hyperbolic equation from single, noisy space-time observations of the solution on the backscattering boundary of the computational domain. We formulate the inverse problem as an optimization problem and use a Lagrangian approach to minimize the corresponding Tikhonov functional. We present a theorem on the local strong convexity of this functional and derive error estimates between the computed solution and the regularized as well as the exact solutions of this functional, respectively. In the numerical simulations we apply a domain decomposition finite element/finite difference method for the minimization of the Lagrangian. Our computational study shows the efficiency of the proposed method in the reconstruction of the conductivity function in three dimensions.
1 Introduction
In this work, we consider the coefficient inverse problem (CIP) of reconstructing the conductivity function in a hyperbolic equation from a single observation of the solution of this equation in space and time on the backscattering boundary of the computational domain. In our simulations, the backscattered boundary measurements are generated by a single direction of propagation of a plane wave. We solve our CIP via minimization of the corresponding Tikhonov functional and use a Lagrangian approach to minimize it. Applying results of [8, 9], we formulate a theorem on the local strong convexity of this functional in our case and show that the gradient method for minimizing this functional converges. We also present estimates of the norm of the difference between the computed and regularized solutions of the Tikhonov functional via the norm of the Fréchet derivative of this functional and via the corresponding Lagrangian. In the minimization procedure for the Lagrangian, we apply the conjugate gradient method and the domain decomposition finite element/finite difference method of [4]. The method of [4] is convenient for our simulations since it is efficiently implemented in the software package WavES [25] in C++ using PETSc [23] and the message passing interface (MPI).
We tested our iterative algorithm by reconstructing a conductivity function that represents small scatterers as well as a smooth function inside the domain of interest. In all numerical simulations of this work we impose one nonzero initial condition in the hyperbolic equation, in accordance with the theory of the recent work [15]. In [15] it was shown that one nonzero initial condition, together with the observation of the solution of the hyperbolic equation, implies uniqueness and stability of the reconstruction of the conductivity function in cylindrical domains. Our three-dimensional numerical simulations show that we can accurately reconstruct a large contrast of the conductivity function as well as its location. In future work, similarly to [6, 7], we plan to use an adaptive finite element method in order to improve the reconstruction of the shapes obtained in this work.
Another method for the reconstruction of the conductivity function, a layer-stripping algorithm with respect to pseudo-frequency, was presented in [14]. In addition, the mathematical model governed by the hyperbolic equation studied in this work can also be considered as a special case of a time-dependent transverse magnetic polarized wave scattering problem, or as a simplified acoustic wave model for fluids with variable density and a constant bulk modulus. In recent years, some rapid identification techniques have been developed for solving the elastodynamic inverse problem: for instance, crack/fault identification techniques have been developed for cracks with free boundary conditions using a reciprocity gap function [12, 13], and linear sampling techniques have been designed to locate inclusions in isotropic elastic media [10, 19]. Comparing the performance of the algorithm of this paper with the algorithms of [10, 12, 13, 14, 19] can be the subject of future work.
The paper is organized as follows. In section 2 we formulate the forward and inverse problems. In section 3 we present the Tikhonov functional to be minimized and formulate the theorem on its local strong convexity. Section 4 is devoted to the Lagrangian approach for solving the inverse problem. In section 5 we present the finite element method for the solution of our optimization problem and formulate the conjugate gradient algorithm used in the computations. Finally, in section 6 we present results of reconstructing the conductivity function in three dimensions.
2 Statement of the forward and inverse problems
Let be a convex bounded domain with boundary , and let be the Hölder space, where is an integer and . We use the notation . In our theoretical and numerical investigations we use a decomposition of the domain into two subregions and such that , ; see figure 1. The communication between the two subdomains is carried out through two layers of structured nodes as described in [4]. The boundary of the domain is such that , where and are, respectively, the front and back sides of the domain . The boundary is the union of the left, right, top and bottom sides of the domain .
We denote by the space-time boundary on which we have time-dependent observations of the backscattered field. We use the notation , , , .
Our model problem is as follows
(1) 
for which the stability and uniqueness results of [15] hold.
Here, is the incident plane wave generated at the plane and propagating along the axis. We assume that
(2) 
We assume that in the function is known and is defined as a constant coefficient . For the numerical solution of the problem (1) in we can use either the finite difference or the finite element method. Further, in our theoretical considerations we use the finite element method in both and , with the function known in and in the overlapping layer of structured nodes between and . This layer is constructed in a similar way as in [4]. We note that in the numerical simulations of section 6 we use the domain decomposition method of [4], since this method is efficiently implemented in the software package WavES [25] and is convenient for our purposes. We also note that both the finite element and the finite difference techniques provide the same explicit schemes in the case of a structured mesh in ; see [11] for details.
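On a uniform one-dimensional mesh this coincidence is easy to verify directly: piecewise linear finite elements with a lumped (row-sum) mass matrix reproduce the classical three-point finite difference stencil. The sketch below (illustrative Python; all names are ours) checks this:

```python
import numpy as np

def fem_1d_matrices(n, h):
    """Assemble the 1D linear-FEM stiffness matrix K and the lumped
    (row-sum) mass vector M on a uniform mesh with n nodes and spacing h."""
    K = np.zeros((n, n))
    M = np.zeros(n)
    for e in range(n - 1):  # element between nodes e and e+1
        K[e, e] += 1.0 / h; K[e + 1, e + 1] += 1.0 / h
        K[e, e + 1] -= 1.0 / h; K[e + 1, e] -= 1.0 / h
        M[e] += h / 2.0; M[e + 1] += h / 2.0  # lumped mass contributions
    return K, M

n, h = 6, 0.1
K, M = fem_1d_matrices(n, h)
A = K / M[:, None]  # M_lumped^{-1} K applied to nodal values
# Interior rows of A reproduce the finite difference stencil (-1, 2, -1)/h^2.
```

The interior rows of `A` are exactly the second-order finite difference Laplacian, which is the one-dimensional analogue of the equivalence used in [4, 11].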
We make the following assumptions on the coefficient in the problem (1):
(3) 
We consider the following
Inverse Problem (IP) Suppose that the coefficient of (1) satisfies conditions (3). Assume that the function is unknown in the domain . Determine the function for , assuming that the following space- and time-dependent function is known
(4) 
From the assumptions (3) it follows that we should know a priori upper and lower bounds of the function . This corresponds to the theory of inverse problems regarding the availability of a priori information for an ill-posed problem [18, 24]. In applications, the assumption for means that the function corresponds to the homogeneous domain in .
Figure 1. a)-d): views of the computational domain.
3 Tikhonov functional
We reformulate our inverse problem as an optimization problem in which we seek the function . This function should fit the space-time observations measured at . Thus, we minimize the Tikhonov functional
(5) 
where is the observed field in (4), satisfies (1), is the initial guess for , and is the regularization parameter. Here, is a cut-off function, defined as in [3], which imposes compatibility conditions at for the adjoint problem (32).
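Once the forward solver has produced the trace of the solution on the observation boundary, a discrete analogue of the functional (5) can be evaluated directly. The following sketch assumes a simple rectangle-rule quadrature on a uniform space-time grid of the observation surface; the names and the discretization are ours, not the paper's:

```python
import numpy as np

def tikhonov_value(u_obs, u_sim, c, c0, gamma, dt, dx):
    """Discrete analogue of the Tikhonov functional (5):
    0.5 * || u_sim - u_obs ||^2 over the observation boundary in space-time
    plus 0.5 * gamma * || c - c0 ||^2 over the domain.
    u_obs, u_sim: arrays of boundary traces (time levels x boundary nodes);
    c, c0: current coefficient and initial guess on a coefficient grid."""
    misfit = 0.5 * np.sum((u_sim - u_obs) ** 2) * dt * dx
    regularization = 0.5 * gamma * np.sum((c - c0) ** 2) * dx
    return misfit + regularization
```

For exact data and `c == c0` the value is zero; the regularization term penalizes departures from the initial guess, as in (5).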
Let us define the inner product and the norm in and , respectively, as
We also introduce the following spaces of real valued vector functions
(6) 
In our theoretical investigations below we need to reformulate the results of [8, 9] for the case of our IP. Below in this section, denotes norm.
We introduce a noise level in the function in the Tikhonov functional (5), which corresponds to the theory of ill-posed problems [1, 2, 24]:
(7) 
where is the exact data corresponding to the exact function in (1), and the function represents the error in these data. In other words, we can write
(8) 
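In synthetic experiments the noise model (7)-(8) is typically realized by adding a random perturbation of prescribed relative size to the exact data. One possible realization (our construction, not necessarily the one used in the numerical section) is:

```python
import numpy as np

def add_relative_noise(u_exact, delta, rng):
    """Return noisy data with || u_delta - u_exact || = delta * || u_exact ||,
    realizing the noise level delta of (7)-(8) for synthetic tests.
    rng is a numpy random generator, e.g. np.random.default_rng(seed)."""
    noise = rng.standard_normal(u_exact.shape)
    # rescale the perturbation so its norm is exactly delta * ||u_exact||
    noise *= delta * np.linalg.norm(u_exact) / np.linalg.norm(noise)
    return u_exact + noise
```

Rescaling the perturbation makes the achieved noise level exactly `delta`, so reconstructions at different noise levels are directly comparable.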
Let and be the finite dimensional linear space such that
(9) 
and
(10) 
where is the finite element mesh defined in section 5.
Let be a closed bounded convex set satisfying conditions (3). We introduce the operator corresponding to the Tikhonov functional (5) as
(11) 
where is the weak solution of the problem (1) and thus, depends on the function .
We impose the assumption that the operator is one-to-one. Next, we assume that there exists the exact solution of the equation
(12) 
Since the operator is one-to-one, for a given function this solution is unique.
We denote by
(13) 
We also assume that the operator has a Lipschitz continuous Fréchet derivative for , such that there exist constants
(14) 
Similarly to [9], we choose the constant such that
(15) 
Throughout the paper, similarly to [9], we assume that
(16)  
(17) 
where is the regularization parameter in (5). Condition (16) means that the initial guess in (5) is located in a sufficiently small neighborhood of the exact solution . From Lemmas 2.1 and 3.2 of [9] it follows that conditions (16)-(17) ensure that belongs to an appropriate neighborhood of the regularized solution of the functional (5).
Below we reformulate Theorem 1.9.1.2 of [8] for the Tikhonov functional (5). Different proofs of this theorem can be found in [8], [9] and [20], and they apply directly to our case.
The question of stability and uniqueness of our IP is addressed in [15] for the case of the unbounded domain.
Theorem 3.1
Let and be two Hilbert spaces such that , let be a closed bounded convex set satisfying conditions (3), and let be a continuous one-to-one operator.
Assume that conditions (7)-(8) and (14)-(15) hold. Assume that there exists the exact solution of the equation for the case of exact data in (7). Let the regularization parameter in (5) be such that
(18) 
Let satisfy condition (16). Then the Tikhonov functional (5) is strongly convex in the neighborhood with the strong convexity constant such that
(19) 
Moreover, there exists a unique regularized solution of the functional (5), and this solution . The gradient method for the minimization of the functional (5) which starts at converges to the regularized solution of this functional, and
(20) 
The property (20) means that the regularized solution of the Tikhonov functional (5) provides better accuracy than the initial guess if the latter satisfies condition (16).
The next theorem presents the estimate of the norm via the norm of the Fréchet derivative of the Tikhonov functional (5).
Theorem 3.2
Assume that the conditions of Theorem 3.1 hold. Then for any function the following error estimate is valid
(21) 
where is the minimizer of the Tikhonov functional (5) computed with the regularization parameter and is the operator of orthogonal projection of the space on its subspace , is the Fréchet derivative of the Lagrangian (26) given by (34).
Proof.
Similarly to Theorem 4.11.2 of [8], since , then
Hence, using (22) and the strong convexity property (19) we can write that
Thus, from the expression above we get
(23) 
Using the fact
together with (34) and (35) and dividing the expression (23) by , we obtain the inequality (21).
In our final theorem we present the error between the computed and exact solutions of the functional (5).
Theorem 3.3
Assume that the conditions of Theorem 3.1 hold. Then for any function the following error estimate holds
(24) 
4 Lagrangian approach
In this section, we will present the Lagrangian approach to solve the inverse problem IP. To minimize the Tikhonov functional (5) we introduce the Lagrangian
(26) 
where . We search for a stationary point of (26) with respect to satisfying
(27) 
where is the Jacobian of at . We can rewrite the equation (27) as
(28) 
To find the Fréchet derivative (27) of the Lagrangian (26) we consider and then single out the part of the resulting expression that is linear with respect to . When deriving the Fréchet derivative we assume that the functions in the Lagrangian (26) can be varied independently of each other. We assume that and seek to impose conditions on the function such that . Next, we use the facts that and , as well as on , together with the boundary conditions and on . The equation (27) expresses that, for all ,
(29) 
(30) 
Finally, we obtain the equation which expresses the stationarity of the Lagrangian with respect to :
(31) 
The equation (29) is the weak formulation of the state equation (1) and the equation (30) is the weak formulation of the following adjoint problem
(32) 
We note that the absorbing boundary conditions here have a positive sign. However, after discretization of these conditions in time we obtain the same schemes for the computation of as for the computation of in the forward problem, since the adjoint problem is solved backward in time.
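This remark can be made concrete: solving the adjoint problem backward in time amounts to running the same three-level explicit update, starting from homogeneous conditions at the final time and swapping the roles of the earlier and later time levels. A minimal sketch (the signature and names are illustrative, not the paper's):

```python
def backward_in_time(step, p_T, p_Tm1, n_steps):
    """Solve a discrete adjoint problem backward in time.
    'step' is the same three-level explicit update used forward in time:
    given the two most recent levels it returns the next one. Here we
    iterate it from the terminal data p(T), p(T - dt) toward t = 0,
    so the roles of the +dt and -dt levels are swapped."""
    p_next, p_curr = p_T, p_Tm1
    history = [p_next, p_curr]
    for _ in range(n_steps):
        p_prev = step(p_curr, p_next)  # compute the earlier time level
        history.append(p_prev)
        p_next, p_curr = p_curr, p_prev
    return history[::-1]  # reorder so index 0 corresponds to t = 0
```

With the trivial free-wave update `p_prev = 2*p_curr - p_next` (zero spatial term), the backward iteration reproduces the forward scheme applied to time-reversed data.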
Now let the functions be the exact solutions of the forward and adjoint problems, respectively, for a known function satisfying condition (20). Then, with and using the fact that for the exact solutions, from (26) we have
(33) 
and, assuming that the solutions are sufficiently stable (see Chapter 5 of [21] for details), we can write the Fréchet derivative of the Tikhonov functional as
(34) 
Inserting (31) into (34) we get
(35) 
5 Finite element method for the solution of an optimization problem
In this section, we formulate the finite element method for the solution of the forward problem (1) and the adjoint problem (32). We also present a conjugate gradient method for the solution of our IP.
5.1 Finite element discretization
We discretize by denoting by the partition of the domain into tetrahedra ( being a mesh function, defined as , representing the local diameter of the elements), and we let be a partition of the time interval into time subintervals of uniform length . We also assume a minimal angle condition on the [11].
To formulate the finite element method, we define the finite element spaces , and . First we introduce the finite element trial space for defined by
where and denote the set of piecewiselinear functions on and , respectively. We also introduce the finite element test space defined by
To approximate the function we use the space of piecewise constant functions ,
(36) 
where is the piecewise constant function on .
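A simple way to obtain an element of the space of piecewise constants (36) from nodal data is to average the nodal values over each element. The following sketch illustrates this (the paper's exact projection may differ; the cell representation is ours):

```python
import numpy as np

def project_piecewise_constant(c_nodal, cells):
    """Project nodal coefficient values onto the space of piecewise constant
    functions by averaging over each element.
    c_nodal: array of nodal values; cells: list of tuples of node indices."""
    return np.array([np.mean(c_nodal[list(cell)]) for cell in cells])
```

The result carries one value per element, matching the degrees of freedom in which the coefficient is reconstructed.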
Next, we define . Usually , and as a set we have , and we consider as a discrete analogue of the space . We introduce in the same norm as the one in ; since in finite dimensional spaces all norms are equivalent, in our computations we compute the coefficients in the space . The finite element method now reads: find such that
(37) 
5.2 Fully discrete scheme
We expand the functions and in terms of the standard continuous piecewise linear functions in space and in time, substitute them into (38) and (39), and compute explicitly all time integrals appearing in the resulting system of discrete equations. Finally, we obtain the following systems of linear equations for the forward and adjoint problems (1) and (32), respectively (for convenience we consider here ):
(40) 
with initial conditions :
(41)  
(42) 
Here, and are the block mass matrix in space and the mass matrix at the boundary , respectively, is the block stiffness matrix, and are the load vectors at time level , and denote the nodal values of and , respectively, and is the time step. For the details of deriving this system of discrete equations and computing the time integrals in it, as well as of then obtaining the system (40), we refer to [4].
Let us define the mapping for the reference element such that and let be the piecewise linear local basis function on the reference element such that . Then the explicit formulas for the entries in system (40) at each element can be given as:
(43) 
where denotes the scalar product and is the part of the boundary of the element which lies on . Here, are the computed solutions of the forward problem (1), and are the discrete measured values of at , at the point and time moment .
To obtain an explicit scheme we approximate with the lumped mass matrix (for further details, see [16]). Next, we multiply (40) by and get the following explicit method inside :
(44) 
In the formulas above, the terms with vanish since the schemes (44) are used only inside .
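The structure of the explicit, lumped-mass time stepping (44) can be illustrated in one space dimension. The sketch below assumes a model equation of the form c(x) u_tt = u_xx with homogeneous Dirichlet conditions on a uniform mesh; this is a stand-in for (1), whose exact form and boundary conditions are not reproduced here:

```python
import numpy as np

def leapfrog_wave_1d(c, u0, u1, h, dt, n_steps):
    """Three-level explicit (leapfrog) scheme for a model equation
    c(x) u_tt = u_xx on a uniform 1D mesh with spacing h and time step dt,
    with homogeneous Dirichlet boundary conditions. u0, u1 are the first
    two time levels; returns the solution after n_steps further steps."""
    u_prev, u_curr = u0.copy(), u1.copy()
    for _ in range(n_steps):
        lap = np.zeros_like(u_curr)
        # standard three-point approximation of u_xx at interior nodes
        lap[1:-1] = (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]) / h**2
        u_next = 2 * u_curr - u_prev + dt**2 * lap / c
        u_next[0] = u_next[-1] = 0.0  # Dirichlet ends
        u_prev, u_curr = u_curr, u_next
    return u_curr
```

As in (44), the scheme is explicit because the lumped mass matrix is diagonal, so each time step requires only a sparse matrix-vector product; stability requires a CFL restriction on dt/h.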
Finally, to reconstruct we can use a gradient-based method with an appropriate initial guess which satisfies condition (16). We have the following expression for the discrete version of the gradient (31) with respect to the coefficient :
(45) 
Here, and are the values of the adjoint and forward solutions, respectively, computed using the explicit schemes (44), and is the approximate value of the computed coefficient.
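In practice the discrete gradient (45) is assembled from the stored time histories of the forward and adjoint solutions. The sketch below assumes a gradient of the form g(x) = -int_0^T u_t lambda_t dt + gamma (c - c0), a common shape for this class of coefficient inverse problems, evaluated with a simple quadrature; the exact form of (45) may differ:

```python
import numpy as np

def discrete_gradient(u, lam, c, c0, gamma, dt):
    """Assemble an assumed discrete gradient
    g = -sum_k u_t(t_k) * lambda_t(t_k) * dt + gamma * (c - c0).
    u, lam: nodal time histories of the forward and adjoint solutions
    (rows = time levels, columns = coefficient nodes)."""
    u_t = np.diff(u, axis=0) / dt      # finite-difference time derivatives
    lam_t = np.diff(lam, axis=0) / dt
    return -np.sum(u_t * lam_t, axis=0) * dt + gamma * (c - c0)
```

Storing the full time history of the forward solution is what makes this gradient computable from a single forward and a single adjoint solve per iteration.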
5.3 The algorithm
We use the conjugate gradient method for the iterative update of the approximations of the function , where is the iteration number in our optimization procedure. We denote
(46) 
where the functions are computed by solving the state and adjoint problems with .
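One step of such a conjugate gradient update of the coefficient can be sketched as follows. We use the Polak-Ribiere direction with a fixed step size; the particular CG variant, names, and step-size rule are our choice and may differ from the algorithm below:

```python
import numpy as np

def cg_update(c, g, g_old, d_old, alpha):
    """One conjugate gradient step for the coefficient vector c:
    Polak-Ribiere direction d = -g + beta * d_old, step size alpha.
    g, g_old: current and previous gradients; d_old: previous direction."""
    beta = max(0.0, np.dot(g, g - g_old) / np.dot(g_old, g_old))
    d = -g + beta * d_old
    return c + alpha * d, d
```

On the first iteration (with `g_old = g` and `d_old = -g`) the update reduces to a plain steepest descent step, which matches the usual initialization of CG iterations.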
Algorithm

Choose a mesh in and a time partition of the time interval . Start with the initial approximation and compute the sequences of