Numerical studies of the Lagrangian approach for reconstruction of the conductivity in a waveguide
We consider an inverse problem of reconstructing the conductivity function in a hyperbolic equation from single space-time noisy observations of the solution on the backscattering boundary of the computational domain. We formulate our inverse problem as an optimization problem and use the Lagrangian approach to minimize the corresponding Tikhonov functional. We present a theorem on the local strong convexity of this functional and derive error estimates between the computed solution and the regularized and exact solutions of this functional, respectively. In the numerical simulations we apply a domain-decomposition finite element/finite difference method for the minimization of the Lagrangian. Our computational study shows the efficiency of the proposed method in the reconstruction of the conductivity function in three dimensions.
In this work, we consider the coefficient inverse problem (CIP) of reconstructing the conductivity function in a hyperbolic equation using a single observation of the solution of this equation in space and time on the backscattering boundary of the computational domain. In our simulations, the backscattered boundary measurements are generated by a single direction of propagation of a plane wave. We solve our CIP via minimization of the corresponding Tikhonov functional and use the Lagrangian approach to minimize it. Applying the results of [8, 9], we formulate a theorem on the local strong convexity of this functional in our case and show that the gradient method for minimizing this functional converges. We also present estimates of the norm of the difference between the computed and regularized solutions of the Tikhonov functional in terms of the norm of the Fréchet derivative of this functional and of the corresponding Lagrangian. In the minimization procedure for the Lagrangian, we apply the conjugate gradient method and the domain-decomposition finite element/finite difference method of . The method of  is convenient for our simulations since it is efficiently implemented in the software package WavES  in C++ using PETSc  and the Message Passing Interface (MPI).
We tested our iterative algorithm by reconstructing a conductivity function that represents small scatterers as well as a smooth function inside the domain of interest. In all numerical simulations of this work we imposed one non-zero initial condition in the hyperbolic equation, according to the theory of the recent work . In  it was shown that one non-zero initial condition, combined with the observation of the solution of the hyperbolic equation, yields uniqueness and stability results for the reconstruction of the conductivity function in cylindrical domains. Our three-dimensional numerical simulations show that we can accurately reconstruct a large contrast in the conductivity function as well as its location. In future work, similarly to [6, 7], we plan to use an adaptive finite element method in order to improve the reconstruction of the shapes obtained in this work.
Another method for the reconstruction of the conductivity function, a layer-stripping algorithm with respect to pseudo-frequency, was presented in . In addition, the mathematical model governed by the hyperbolic equation studied in this work can also be considered as a special case of a time-dependent transverse-magnetic polarized wave scattering problem, or as a simplified acoustic wave model for fluids with variable density and a constant bulk modulus. In recent years, rapid identification techniques have been developed for solving the elastodynamic inverse problem: for instance, crack/fault identification techniques have been developed for cracks with a free boundary condition using a reciprocity gap function [12, 13], and linear sampling techniques have been designed to locate inclusions in an isotropic elastic medium [10, 19]. Comparing the performance of the algorithm of this paper with the algorithms of [10, 12, 13, 14, 19] can be the subject of future work.
The paper is organized as follows. In section 2 we formulate the forward and inverse problems. In section 3 we present the Tikhonov functional to be minimized and formulate the theorem on the local strong convexity of this functional. Section 4 is devoted to the Lagrangian approach for solving the inverse problem. In section 5 we present the finite element method for the solution of our optimization problem and formulate the conjugate gradient algorithm used in the computations. Finally, in section 6 we present results of reconstructing the conductivity function in three dimensions.
2 Statement of the forward and inverse problems
Let be a convex bounded domain with boundary , where is the Hölder space, is an integer and . We use the notation . Next, in our theoretical and numerical investigations we use a decomposition of the domain into two subregions and such that , , see figure 1. The communication between the two subdomains is done through two layers of structured nodes, as described in . The boundary of the domain is such that , where and are, respectively, the front and back sides of the domain . The boundary is the union of the left, right, top and bottom sides of the domain .
We denote by the space-time boundary on which we have time-dependent observations of the backscattered field. We use the notation , , , .
Our model problem is as follows
which satisfies stability and uniqueness results of .
Here, is the incident plane wave generated at the plane and propagating along the -axis. We assume that
We assume that in the function is known and is defined as a constant coefficient . For the numerical solution of the problem (1) in we can use either the finite difference or the finite element method. Further, in our theoretical considerations we use the finite element method in both and , with the function known in and in the overlapping layer of structured nodes between and . This layer is constructed in a similar way as in . We note that in the numerical simulations of section 6 we use the domain decomposition method of , since this method is efficiently implemented in the software package WavES  and is convenient for our purposes. We also note that the finite element and finite difference techniques provide the same explicit schemes in in the case of a structured mesh in , see  for details.
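As a 1D toy illustration of the overlapping-layer communication (a sketch under our own simplifications: both subdomains use the same explicit finite-difference update, whereas the method of the paper couples finite elements with finite differences; all names are illustrative), two subdomains that share a two-node overlap and exchange ghost values each time step reproduce the single-domain explicit solve exactly:

```python
import numpy as np

def step(u, u_prev, r2):
    """Explicit FD update of interior nodes for u_tt = u_xx, r2 = (tau/h)^2."""
    u_new = u.copy()
    u_new[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                   + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    return u_new

# Global problem on N+1 nodes, split into two overlapping subdomains:
# A owns nodes 0..m+1, B owns nodes m..N; nodes m, m+1 form the
# two-layer overlap through which the subdomains communicate.
N, m = 100, 50
h = 1.0 / N
tau = 0.5 * h                          # CFL-stable time step
r2 = (tau / h) ** 2
x = np.linspace(0.0, 1.0, N + 1)
u0 = np.sin(np.pi * x)                 # initial displacement, zero velocity

g_prev, g = u0.copy(), u0.copy()       # reference: single-domain solve
a_prev, a = u0[:m + 2].copy(), u0[:m + 2].copy()
b_prev, b = u0[m:].copy(), u0[m:].copy()

for _ in range(100):
    g_prev, g = g, step(g, g_prev, r2)
    a_new = step(a, a_prev, r2)
    b_new = step(b, b_prev, r2)
    # Exchange: each subdomain receives its ghost node from the other.
    a_new[-1] = b_new[1]               # node m+1, computed by B
    b_new[0] = a_new[-2]               # node m, computed by A
    a_prev, a = a, a_new
    b_prev, b = b, b_new
```

Because both subdomains apply the same update to the same previous-level data, the pieced-together solution coincides with the global one, which is the point of the overlapping-node communication.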
We make the following assumptions on the coefficient in the problem (1):
We consider the following
Inverse Problem (IP) Suppose that the coefficient of (1) satisfies conditions (3). Assume that the function is unknown in the domain . Determine the function for , assuming that the following space- and time-dependent function is known
From the assumptions (3) it follows that we should know a priori upper and lower bounds of the function . This corresponds to the theory of inverse problems about the availability of a priori information for an ill-posed problem [18, 24]. In applications, the assumption for means that the function corresponds to the homogeneous domain in .
3 Tikhonov functional
We reformulate our inverse problem as an optimization problem in which we seek the function . This function should fit the space-time observations measured on . Thus, we minimize the Tikhonov functional
where is the observed field in (4), satisfies (1), is the initial guess for , and is the regularization parameter. Here, is a cut-off function, defined as in , which imposes compatibility conditions at for the adjoint problem (32).
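With the missing symbols filled in purely illustrative notation ($u$ the solution of (1), $\tilde u$ the data (4), $c$ the sought conductivity, $c_0$ the initial guess, $\gamma$ the regularization parameter, $z_\delta$ the cut-off function, $S_T$ the space-time observation boundary), a Tikhonov functional of this type typically takes the least-squares form

```latex
J_\gamma(c) = \frac{1}{2}\int_{S_T}\bigl(u - \tilde{u}\bigr)^2 z_\delta \, d\sigma \, dt
            + \frac{\gamma}{2}\int_{\Omega}\bigl(c - c_0\bigr)^2 \, dx .
```

The first term measures the misfit to the boundary observations; the second is the regularization term penalizing the distance to the initial guess.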
Let us define the inner product and the norm in and , respectively, as
We also introduce the following spaces of real valued vector functions
where is the exact data corresponding to the exact function in (1), and the function represents the error in these data. In other words, we can write
Let and be the finite dimensional linear space such that
where is the finite-element mesh defined in section 5.
where is the weak solution of the problem (1) and thus, depends on the function .
We impose the assumption that the operator is one-to-one. Next, we assume that there exists the exact solution of the equation
Since the operator is one-to-one, for a given function this solution is unique.
We denote by
We also assume that the operator has a Lipschitz continuous Fréchet derivative for , such that there exist constants
Similarly to , we choose the constant such that
Throughout the paper, similarly to , we assume that
where is the regularization parameter in (5). Equation (16) means that we assume that the initial guess in (5) is located in a sufficiently small neighborhood of the exact solution . From Lemmas 2.1 and 3.2 of  it follows that conditions (16)-(17) ensure that belongs to an appropriate neighborhood of the regularized solution of the functional (5).
The question of stability and uniqueness of our IP is addressed in  for the case of the unbounded domain.
Let and be two Hilbert spaces such that , let be a closed bounded convex set satisfying conditions (3), and let be a continuous one-to-one operator.
Assume that the conditions (7)- (8), (14)-(15) hold. Assume that there exists the exact solution of the equation for the case of the exact data in (7). Let the regularization parameter in (5) be such that
Then there exists a unique regularized solution of the functional (5), and this solution . The gradient method for the minimization of the functional (5), which starts at , converges to the regularized solution of this functional, and
The next theorem presents the estimate of the norm via the norm of the Fréchet derivative of the Tikhonov functional (5).
Assume that the conditions of Theorem 3.1 hold. Then for any function the following error estimate is valid
where is the minimizer of the Tikhonov functional (5) computed with the regularization parameter , is the operator of orthogonal projection of the space onto its subspace , and is the Fréchet derivative of the Lagrangian (26) given by (34).
Similarly to Theorem 4.11.2 of , since , then
Thus, from the expression above we get
Using the fact
In our final theorem we present the error between the computed and exact solutions of the functional (5).
Assume that the conditions of Theorem 3.1 hold. Then for any function the following error estimate holds
4 Lagrangian approach
In this section, we will present the Lagrangian approach to solve the inverse problem IP. To minimize the Tikhonov functional (5) we introduce the Lagrangian
where . We search for a stationary point of (26) with respect to satisfying
where is the Jacobian of at . We can rewrite the equation (27) as
To find the Fréchet derivative (27) of the Lagrangian (26) we consider . Then we single out the linear part of the resulting expression with respect to . When deriving the Fréchet derivative we assume that the functions in the Lagrangian (26) can be varied independently of each other. We assume that and seek to impose conditions on the function such that . Next, we use the facts that and , as well as on , together with the boundary conditions and on . The equation (27) expresses that for all ,
Finally, we obtain the equation which expresses stationarity of the gradient with respect to :
We note that we have a positive sign here in the absorbing boundary conditions. However, after discretization in time of these conditions we obtain the same schemes for the computation of as for the computation of in the forward problem, since we solve the adjoint problem backward in time.
Let now the functions be the exact solutions of the forward and adjoint problems, respectively, for the known function satisfying condition (20). Then with and using the fact that for exact solutions from (26) we have
and assuming that the solutions are sufficiently stable (see Chapter 5 of the book  for details), we can write the Fréchet derivative of the Tikhonov functional as
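In this framework the gradient typically couples the time derivatives of the forward and adjoint solutions; in the same illustrative notation ($u$ the forward solution, $\lambda$ the adjoint solution, $c$ the conductivity, $c_0$ the initial guess, $\gamma$ the regularization parameter) a hedged sketch of the resulting expression is

```latex
J'_\gamma(c)(x) = -\int_0^T \partial_t u(x,t)\, \partial_t \lambda(x,t)\, dt
                + \gamma \bigl(c - c_0\bigr)(x) .
```

The first term comes from varying the Lagrangian with respect to the coefficient; the second is the derivative of the regularization term.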
5 Finite element method for the solution of an optimization problem
In this section, we formulate the finite element method for the solution of the forward problem (1) and the adjoint problem (32). We also present a conjugate gradient method for the solution of our IP.
5.1 Finite element discretization
We discretize , denoting by the partition of the domain into tetrahedra ( being a mesh function, defined as , representing the local diameter of the elements), and we let be a partition of the time interval into time sub-intervals of uniform length . We also assume a minimal angle condition on .
To formulate the finite element method, we define the finite element spaces , and . First we introduce the finite element trial space for defined by
where and denote the set of piecewise-linear functions on and , respectively. We also introduce the finite element test space defined by
To approximate the function we use the space of piecewise-constant functions ,
where is the piecewise constant function on .
Next, we define . Usually , and as a set we consider as a discrete analogue of the space . We introduce in the same norm as the one in ; note that in finite-dimensional spaces all norms are equivalent, and in our computations we compute the coefficients in the space . The finite element method now reads: find such that
5.2 Fully discrete scheme
We expand the functions and in terms of the standard continuous piecewise-linear basis functions in space and in time, substitute them into (38) and (39), and compute explicitly all time integrals appearing in the system of discrete equations. Finally, we obtain the following system of linear equations for the forward and adjoint problems (1) and (32), respectively (for convenience we consider here ):
with initial conditions :
Here, and are the block mass matrix in space and the mass matrix at the boundary , respectively, is the block stiffness matrix, and are the load vectors at time level , and denote the nodal values of and , respectively, and is the time step. For details of obtaining this system of discrete equations and computing the time integrals in it, as well as of then obtaining the system (40), we refer to .
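As a toy illustration of how such an explicit system is advanced in time (a minimal 1D sketch, assuming a lumped, i.e. diagonal, mass matrix and a simplified zero-velocity first step; the names are ours, not the scheme of the paper):

```python
import numpy as np

def explicit_wave_step(u_curr, u_prev, M_lumped, K, F, tau):
    """One explicit time step for M u_tt + K u = F with a lumped
    (diagonal) mass matrix:
        u^{k+1} = 2 u^k - u^{k-1} + tau^2 * M^{-1} (F^k - K u^k).
    """
    return 2.0 * u_curr - u_prev + tau**2 * (F - K @ u_curr) / M_lumped

# Toy 1D problem: u_tt = u_xx on [0, 1], homogeneous Dirichlet BCs.
n = 50                        # interior nodes
h = 1.0 / (n + 1)
tau = 0.5 * h                 # CFL-stable time step
x = np.linspace(h, 1.0 - h, n)

# Stiffness and lumped mass matrices of piecewise-linear elements.
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
M = h * np.ones(n)            # lumped mass: row sums of the consistent matrix

u_prev = np.sin(np.pi * x)    # u(x, 0) = sin(pi x), u_t(x, 0) = 0
u_curr = u_prev.copy()        # simplified first step (zero initial velocity)
for _ in range(200):
    u_prev, u_curr = u_curr, explicit_wave_step(u_curr, u_prev, M, K,
                                                np.zeros(n), tau)
```

With a lumped mass matrix the update needs no linear solve, which is why such explicit schemes combine naturally with finite differences on structured meshes.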
Let us define the mapping for the reference element such that and let be the piecewise linear local basis function on the reference element such that . Then the explicit formulas for the entries in system (40) at each element can be given as:
where denotes the scalar product and is the part of the boundary of element which lies at . Here, are computed solutions of the forward problem (1), and are discrete measured values of at at the point and time moment .
In the formulas above the terms with disappear since we use the schemes (44) only inside .
Finally, to reconstruct we can use a gradient-based method with appropriate initial guess values of satisfying the condition (16). We have the following expression for the discrete version of the gradient with respect to the coefficient in (31):
Here, and are the computed values of the adjoint and forward problems, respectively, obtained using the explicit schemes (44), and is the approximate value of the computed coefficient.
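The assembly of such a discrete gradient from the stored forward and adjoint time histories can be sketched as follows (a simplified nodal version with illustrative names; quadrature weights and the projection onto piecewise-constant cells are omitted):

```python
import numpy as np

def discrete_gradient(u, lam, c, c0, tau, gamma):
    """Nodal sketch of the discrete gradient
        -int_0^T u_t * lambda_t dt + gamma * (c - c0),
    with u[k, :] and lam[k, :] the forward and adjoint solutions at
    time level k on a shared mesh."""
    u_t = np.diff(u, axis=0) / tau        # forward differences in time
    lam_t = np.diff(lam, axis=0) / tau
    return -tau * np.sum(u_t * lam_t, axis=0) + gamma * (c - c0)

# Synthetic check: u = t and lam = 2t at every node give u_t * lam_t = 2,
# so the time-integral term equals -2*T with T = 1.
nt, nx, tau = 101, 5, 0.01
t = np.arange(nt) * tau
u = np.outer(t, np.ones(nx))
lam = np.outer(2.0 * t, np.ones(nx))
g = discrete_gradient(u, lam, np.zeros(nx), np.zeros(nx), tau, 0.0)
```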
5.3 The algorithm
We use the conjugate gradient method for the iterative update of the approximations of the function , where is the iteration number in our optimization procedure. We denote
where the functions are computed by solving the state and adjoint problems.
Choose a mesh in and a time partition of the time interval . Start with the initial approximation and compute the sequences of
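The iterative update above can be sketched as follows (a minimal sketch assuming a Fletcher-Reeves update with a fixed step length and a simple bound projection; the names are illustrative, not those of WavES):

```python
import numpy as np

def conjugate_gradient(c0, grad, alpha=0.1, tol=1e-6, max_iter=200,
                       bounds=None):
    """Fletcher-Reeves conjugate gradient loop for the coefficient:
        c^{m+1} = c^m + alpha * d^m,
        d^m = -g^m + beta^m * d^{m-1},  beta^m = ||g^m||^2 / ||g^{m-1}||^2.
    `grad` returns the discrete gradient at the current iterate;
    `bounds = (lo, hi)` projects the iterate onto the admissible set,
    cf. condition (3)."""
    c = c0.copy()
    g = grad(c)
    d = -g
    for _ in range(max_iter):
        c = c + alpha * d
        if bounds is not None:
            c = np.clip(c, bounds[0], bounds[1])
        g_new = grad(c)
        if np.linalg.norm(g_new) < tol:   # gradient-norm stopping rule
            break
        d = -g_new + (g_new @ g_new) / (g @ g) * d
        g = g_new
    return c

# Toy usage: minimize 0.5*||c - c_true||^2, whose gradient is c - c_true.
c_true = np.array([2.0, 1.5, 0.5])
c_rec = conjugate_gradient(np.ones(3), lambda c: c - c_true)
```

In the actual algorithm `grad` would solve the state and adjoint problems at the current iterate and assemble the discrete gradient (47); here a quadratic model stands in for that expensive evaluation.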