Numerical studies of the Lagrangian approach for reconstruction of the conductivity in a waveguide

Larisa Beilina Department of Mathematical Sciences, Chalmers University of Technology and Gothenburg University, SE-412 96 Gothenburg Sweden, e-mail larisa.beilina@chalmers.se    K. Niinimäki IR4M UMR8081, CNRS, University of Paris-Sud, University of Paris-Saclay, SHFJ, 4 place du Général Leclerc 91401 Orsay France, e-mail kati.niinimaki@u-psud.fr
Abstract

We consider an inverse problem of reconstructing the conductivity function in a hyperbolic equation using single space-time noisy observations of the solution on the backscattering boundary of the computational domain. We formulate our inverse problem as an optimization problem and use the Lagrangian approach to minimize the corresponding Tikhonov functional. We present a theorem on the local strong convexity of our functional and derive error estimates between the computed solution and the regularized as well as the exact solutions of this functional. In numerical simulations we apply a domain decomposition finite element/finite difference method for the minimization of the Lagrangian. Our computational study shows the efficiency of the proposed method in the reconstruction of the conductivity function in three dimensions.

1 Introduction

In this work, we consider the coefficient inverse problem (CIP) of reconstructing the conductivity function in a hyperbolic equation using a single observation of the solution of this equation in space and time on the backscattering boundary of the computational domain. In our simulations, backscattered boundary measurements are generated by a single direction of propagation of a plane wave. We solve our CIP via minimization of the corresponding Tikhonov functional and use the Lagrangian approach to minimize it. Applying results of [8, 9], we formulate a theorem on the local strong convexity of this functional in our case and show that the gradient method for minimizing this functional converges. We also present estimates of the norms of the differences between the computed and regularized solutions of the Tikhonov functional via the norm of the Fréchet derivative of this functional and via the corresponding Lagrangian. In the minimization procedure for the Lagrangian, we apply the conjugate gradient method and the domain decomposition finite element/finite difference method of [4]. The method of [4] is convenient for our simulations since it is efficiently implemented in the software package WavES [25] in C++ using PETSc [23] and the message passing interface (MPI).

We tested our iterative algorithm by reconstructing a conductivity function that represents some small scatterers as well as a smooth function inside the domain of interest. In all numerical simulations of this work we impose one non-zero initial condition in the hyperbolic equation, according to the theory of the recent work [15]. In [15] it was shown that one non-zero initial condition, combined with the observation of the solution of the hyperbolic equation, yields uniqueness and stability results for the reconstruction of the conductivity function in cylindrical domains. Our three-dimensional numerical simulations show that we can accurately reconstruct a large contrast of the conductivity function as well as its location. In our future work, similarly to [6, 7], we plan to use an adaptive finite element method in order to improve the reconstruction of the shapes obtained in this work.

Another method for the reconstruction of the conductivity function, a layer-stripping algorithm with respect to pseudo-frequency, was presented in [14]. In addition, the mathematical model governed by the hyperbolic equation studied in this work can also be considered as a special case of a time-dependent transverse magnetic polarized wave scattering problem, or as a simplified acoustic wave model for fluids with variable density and a constant bulk modulus. In recent years, some rapid identification techniques have been developed for solving the elastodynamic inverse problem; for instance, crack/fault identification techniques have been developed for cracks having a free boundary condition using a reciprocity gap function [12, 13], and linear sampling techniques have been designed to locate inclusions in an isotropic elastic medium [10, 19]. Comparing the performance of the algorithm of this paper with the algorithms of [10, 12, 13, 14, 19] can be the subject of future work.

The paper is organized as follows. In section 2 we formulate the forward and inverse problems. In section 3 we present the Tikhonov functional to be minimized and formulate the theorem on the local strong convexity of this functional. Section 4 is devoted to a Lagrangian approach for the solution of the inverse problem. In section 5 we present the finite element method for the solution of our optimization problem and formulate the conjugate gradient algorithm used in the computations. Finally, in section 6 we present results of reconstructing the conductivity function in three dimensions.

2 Statement of the forward and inverse problems

Let Ω ⊂ ℝ³ be a convex bounded domain with boundary ∂Ω ∈ C^l, where C^l is the Hölder space, l > 0 is an integer, and T > 0. We use the notation ΩT := Ω × (0, T). Next, in our theoretical and numerical investigations we use a decomposition of the domain Ω into two subregions, Ω_FEM and Ω_FDM, such that Ω = Ω_FEM ∪ Ω_FDM, Ω_FEM ∩ Ω_FDM = ∅, see figure 1. The communication between the two domains is done through two layers of structured nodes as described in [4]. The boundary of the domain Ω is such that ∂Ω = ∂1Ω ∪ ∂2Ω ∪ ∂3Ω, where ∂1Ω and ∂2Ω are, respectively, the front and back sides of the domain Ω. The boundary ∂3Ω is the union of the left, right, top and bottom sides of the domain Ω.

We denote by ST := ∂1Ω × (0, T) the space-time boundary where we will have time-dependent observations of the backscattered field. We use the notation S1,1 := ∂1Ω × (0, t1], S1,2 := ∂1Ω × (t1, T), S2 := ∂2Ω × (0, T), S3 := ∂3Ω × (0, T).

Our model problem is as follows

 ∂²u/∂t² − ∇⋅(c∇u) = 0  in ΩT,
 u(x,0) = f0(x),  ∂tu(x,0) = f1(x)  in Ω,
 ∂nu = p(t)  on S1,1,
 ∂nu = −∂tu  on S1,2,
 ∂nu = −∂tu  on S2,
 ∂nu = 0  on S3, (1)

which satisfies stability and uniqueness results of [15].

Here, p(t) is the incident plane wave generated at the front boundary ∂1Ω of the domain. We assume that

 f0∈H1(Ω),f1∈L2(Ω). (2)

We assume that in Ω_FDM the function c(x) is known and is defined as a constant coefficient, c = 1. For the numerical solution of the problem (1) in Ω_FDM we can use either the finite difference or the finite element method. Further, in our theoretical considerations we will use the finite element method in both Ω_FEM and Ω_FDM, with the known function c = 1 in Ω_FDM and in the overlapping layer of structured nodes between Ω_FEM and Ω_FDM. This layer is constructed in a similar way as in [4]. We note that in the numerical simulations of section 6 we use the domain decomposition method of [4], since this method is efficiently implemented in the software package WavES [25] and is convenient for our purposes. We also note that both the finite element and the finite difference techniques provide the same explicit schemes in Ω_FDM in the case of a structured mesh there, see [11] for details.

We make the following assumptions on the coefficient c(x) in the problem (1):

 c(x) ∈ [1, d],  d = const. > 1,  for x ∈ Ω,  c(x) = 1 for x ∈ Ω∖Ω_FEM. (3)

We consider the following

Inverse Problem (IP). Suppose that the coefficient c(x) of (1) satisfies conditions (3). Assume that the function c(x) is unknown in the domain Ω_FEM. Determine the function c(x) for x ∈ Ω_FEM, assuming that the following space- and time-dependent function ~u(x,t) is known:

 u(x,t)=~u(x,t),∀(x,t)∈ST. (4)

From the assumptions (3) it follows that we should know a priori upper and lower bounds of the function c(x). This corresponds to the theory of inverse problems on the availability of a priori information for an ill-posed problem [18, 24]. In applications, the assumption c = 1 in Ω∖Ω_FEM means that the function c corresponds to a homogeneous medium there.

3 Tikhonov functional

We reformulate our inverse problem as an optimization problem in which we seek the function c(x). This function should fit the space-time observations ~u measured at ST. Thus, we minimize the Tikhonov functional

 J(c):=J(u,c)=12∫ST(u−~u)2zδ(t)dσdt+12γ∫Ω(c−c0)2  dx, (5)

where ~u is the observed field in (4), u satisfies (1), c0 is the initial guess for c, and γ > 0 is the regularization parameter. Here, zδ(t) is a cut-off function, defined as in [3], used to impose compatibility conditions at t = T for the adjoint problem (32).
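For orientation, the following is a minimal numerical sketch of evaluating a discrete analogue of the functional (5) on uniform space-time and spatial grids. The array shapes, the function name and the simple rectangle-rule quadrature are illustrative assumptions, not part of the paper's implementation (which uses WavES):

```python
import numpy as np

def tikhonov_functional(u_obs, u_comp, c, c0, gamma, z, dsigma, dt, dx):
    """Discrete analogue of the Tikhonov functional (5):
    0.5 * int_{S_T} (u - u~)^2 z_delta dsigma dt + 0.5 * gamma * int_Omega (c - c0)^2 dx.
    u_obs, u_comp: boundary traces of shape (n_t, n_boundary_nodes);
    z: cut-off weights z_delta(t_k) of shape (n_t,); c, c0: values over Omega."""
    misfit = 0.5 * np.sum((u_comp - u_obs) ** 2 * z[:, None]) * dsigma * dt
    regularization = 0.5 * gamma * np.sum((c - c0) ** 2) * dx
    return misfit + regularization
```

The cut-off weights z multiply the misfit in time only, mirroring the role of zδ(t) in (5).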

Let us define the inner products and the norms in L2(ΩT) and L2(Ω), respectively, as

 ((u,v))ΩT = ∫Ω∫0T uv dxdt,  ||u||² = ((u,u))ΩT,
 (u,v)Ω = ∫Ω uv dx,  |u|² = (u,u)Ω.

We also introduce the following spaces of real valued vector functions

 H1u := {w ∈ H1(ΩT) : w(⋅,0) = f0(x), ∂tw(⋅,0) = f1(x)},
 H1λ := {w ∈ H1(ΩT) : w(⋅,T) = ∂tw(⋅,T) = 0},
 U1 = H1u(ΩT) × H1λ(ΩT) × C(¯Ω),
 U0 = L2(ΩT) × L2(ΩT) × L2(Ω). (6)

In our theoretical investigations below we need to reformulate the results of [8, 9] for the case of our IP. Below in this section, ∥⋅∥ denotes the L2(Ω) norm.

We introduce a noise level δ in the function ~u in the Tikhonov functional (5), which corresponds to the theory of ill-posed problems [1, 2, 24]:

 ~u(x,t)=u(x,t,c∗)+~uδ(x,t); u(x,t,c∗),~uδ∈L2(ST), (7)

where u(x,t,c∗) is the exact data corresponding to the exact function c∗ in (1), and the function ~uδ represents the error in these data. In other words, we can write

 ∥~uδ∥L2(ST)≤δ. (8)
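The noise model (7)-(8) can be mimicked numerically by perturbing exact synthetic data so that the norm of the added perturbation equals a prescribed δ. The sketch below is an illustrative assumption about how such noisy data may be generated, not the procedure of the paper:

```python
import numpy as np

def add_noise(u_exact, delta, seed=None):
    """Perturb exact boundary data u(x, t, c*) with noise u~_delta whose
    discrete L2 norm equals delta, mimicking (7)-(8)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(u_exact.shape)
    noise *= delta / np.linalg.norm(noise)  # rescale so ||noise|| = delta
    return u_exact + noise
```

In practice one often prescribes a relative noise level instead; the absolute scaling above matches the form of the bound (8) directly.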

Let Q1 be the finite-dimensional linear space such that

 Q1=⋃Khspan(V(Kh)), (9)

and

 V(Kh)={v(x):v(x)∈H1(Ω)}, (10)

where Kh is the finite element mesh defined in section 5.

Let the set of admissible coefficients be a closed bounded convex set in Q1 satisfying conditions (3). We introduce the operator F corresponding to the Tikhonov functional (5) as

 F(c)(x,t):=u∣ST∈L2(ST), (11)

where u(x,t,c) is the weak solution of the problem (1) and thus depends on the function c.

We impose the assumption that the operator F is one-to-one. Next, we assume that there exists the exact solution c∗ of the equation

 F(c∗)=u(x,t,c∗)∣ST. (12)

Since by our assumption the operator F is one-to-one, for a given function ~u this solution is unique.

We denote by

 Vd(c) = {c′ ∈ Q1 : ∥c′ − c∥ < d},  ∀c ∈ Q1. (13)

We also assume that the operator F has the Lipschitz continuous Fréchet derivative F′(c) for c ∈ V1(c∗), such that there exist constants N1, N2 > 0 with

 ∥∥F′(c)∥∥≤N1,∥∥F′(c1)−F′(c2)∥∥≤N2∥c1−c2∥,∀c1,c2∈V1(c∗). (14)

Similarly to [9], we choose the constant D > 0 such that

 ∥∥J′(c1)−J′(c2)∥∥≤D∥c1−c2∥,∀c1,c2∈V1(c∗). (15)

Throughout the paper, similarly to [9], we assume that

 ∥c0 − c∗∥ ≤ δ^ξ,  ξ = const. ∈ (0,1), (16)
 γ = δ^ζ,  ζ = const. ∈ (0, min(ξ, 2(1−ξ))), (17)

where γ is the regularization parameter in (5). Equation (16) means that we assume that the initial guess c0 in (5) is located in a sufficiently small neighborhood of the exact solution c∗. From Lemmata 2.1 and 3.2 of [9] it follows that conditions (16)-(17) ensure that c0 belongs to an appropriate neighborhood of the regularized solution cγ of the functional (5).

Below we reformulate Theorem 1.9.1.2 of [8] for the Tikhonov functional (5). Different proofs of this theorem can be found in [8], [20] and [9], and they apply directly to our case.

The question of stability and uniqueness of our IP is addressed in [15] for the case of the unbounded domain.

Theorem 3.1

Let Q1 ⊂ L2(Ω) and L2(ST) be the Hilbert spaces introduced above, let the set of admissible coefficients be a closed bounded convex set in Q1 satisfying conditions (3), and let F be a continuous one-to-one operator.

Assume that the conditions (7)-(8), (14)-(15) hold. Assume that there exists the exact solution c∗ of the equation (12) for the case of the exact data in (7). Let the regularization parameter γ in (5) be such that

 γ = γ(δ) = δ^{2ν},  ν = const. ∈ (0, 1/4),  ∀δ ∈ (0,1). (18)

Let c0 satisfy the condition (16). Then the Tikhonov functional (5) is strongly convex in a neighborhood of c∗ with a strong convexity constant determined by γ = δ^{2ν}, such that

 ∥c1 − c2∥² ≤ (2/δ^{2ν}) (J′(c1) − J′(c2), c1 − c2),  ∀c1, c2 ∈ Q1. (19)

Next, there exists the unique regularized solution cγ of the functional (5), and this solution lies in a small neighborhood of the exact solution c∗. The gradient method for the minimization of the functional (5) which starts at c0 converges to the regularized solution cγ of this functional, and

 ∥∥cγ−c∗∥∥≤ξ∥c0−c∗∥,  ξ∈(0,1). (20)

The property (20) means that the regularized solution cγ of the Tikhonov functional (5) provides better accuracy than the initial guess c0 if c0 satisfies the condition (16).

The next theorem presents the estimate of the norm ∥c − cγ(δ)∥ via the norm of the Fréchet derivative of the Tikhonov functional (5).

Theorem 3.2

Assume that the conditions of Theorem 3.1 hold. Then for any function c ∈ Q1 the following error estimate is valid

 ∥c − cγ(δ)∥ ≤ (2/δ^{2ν})∥PhJ′(c)∥ ≤ (2/δ^{2ν})∥J′(c)∥ = (2/δ^{2ν})∥L′c(v(c))∥ = (2/δ^{2ν})∥∫0T(∇u(c))(∇λ(c)) dt + γ(c − c0)∥, (21)

where cγ(δ) is the minimizer of the Tikhonov functional (5) computed with the regularization parameter γ(δ), Ph is the operator of orthogonal projection of the space L2(Ω) onto its subspace Q1, and L′c is the Fréchet derivative of the Lagrangian (26) which appears in (34).

Proof.

Since cγ is the minimizer of the functional (5) on the set Q1, using (34) we can write

 PhJ′c(cγ)=0. (22)

Similarly to Theorem 4.11.2 of [8], since c − cγ ∈ Q1, then

 (J′(c)−J′(cγ),c−cγ)=(PhJ′(c)−PhJ′(cγ),c−cγ).

Hence, using (22) and the strong convexity property (19) we can write that

 ∥c − cγ∥² ≤ (2/δ^{2ν})(J′(c) − J′(cγ), c − cγ) = (2/δ^{2ν})(PhJ′(c) − PhJ′(cγ), c − cγ) = (2/δ^{2ν})(PhJ′(c), c − cγ) ≤ (2/δ^{2ν})∥PhJ′(c)∥⋅∥c − cγ∥.

Thus, from the expression above we get

 ∥c − cγ∥² ≤ (2/δ^{2ν})∥PhJ′(c)∥⋅∥c − cγ∥. (23)

Using the fact

 ∥∥PhJ′(c)∥∥L2(Ω)≤∥∥J′(c)∥∥L2(Ω)

together with (34) and (35), and dividing the expression (23) by ∥c − cγ∥, we obtain the inequality (21).

In our final theorem we present the error between the computed and exact solutions of the functional (5).

Theorem 3.3

Assume that the conditions of Theorem 3.1 hold. Then for any function c ∈ Q1 the following error estimate holds

 ∥c − c∗∥ ≤ (2/δ^{2ν})∥∫0T(∇u(c))(∇λ(c)) dt + γ(c − c0)∥ + ξ∥c0 − c∗∥. (24)

Proof. Applying Theorem 3.2 and the inequality (20), we get the inequality (24):

 ∥c − c∗∥ = ∥c − cγ(δ) + cγ(δ) − c∗∥ ≤ ∥c − cγ(δ)∥ + ∥cγ(δ) − c∗∥ ≤ (2/δ^{2ν})∥∫0T(∇u(c))(∇λ(c)) dt + γ(c − c0)∥ + ξ∥c0 − c∗∥. (25)

4 Lagrangian approach

In this section, we will present the Lagrangian approach to solve the inverse problem IP. To minimize the Tikhonov functional (5) we introduce the Lagrangian

 L(v) = J(u,c) − ∫ΩT (∂λ/∂t)(∂u/∂t) dxdt + ∫ΩT (c∇u)(∇λ) dxdt − ∫Ω λ(x,0) f1(x) dx − ∫S1,1 λ p(t) dσdt + ∫S1,2 λ ∂tu dσdt + ∫S2 λ ∂tu dσdt, (26)

where v = (u, λ, c) ∈ U1. We search for a stationary point of (26) with respect to v satisfying

 L′(v;¯v)=0, (27)

where L′(v; ¯v) is the Jacobian of L at v, and ¯v = (¯u, ¯λ, ¯c) ∈ U1. We can rewrite the equation (27) as

 L′(v;¯v)=∂L∂λ(v)(¯λ)+∂L∂u(v)(¯u)+∂L∂c(v)(¯c)=0. (28)

To find the Fréchet derivative (27) of the Lagrangian (26) we consider the difference L(v + ¯v) − L(v), ∀¯v ∈ U1. Then we single out the linear part of the obtained expression with respect to ¯v. When we derive the Fréchet derivative we assume that in the Lagrangian (26) the functions u, λ, c can be varied independently of each other. We assume that λ ∈ H1λ(ΩT) and seek to impose such conditions on the function λ that L(v) = J(u, c). Next, we use the facts that λ(⋅,T) = ∂tλ(⋅,T) = 0 and u(⋅,0) = f0, ∂tu(⋅,0) = f1, as well as ∂nu = ∂nλ = 0 on S3, together with the absorbing boundary conditions on S1,2 and S2. The equation (27) expresses that for all ¯v ∈ U1,

 0 = ∂L/∂λ(v)(¯λ) = −∫ΩT (∂¯λ/∂t)(∂u/∂t) dxdt + ∫ΩT (c∇u)(∇¯λ) dxdt − ∫Ω ¯λ(x,0) f1(x) dx − ∫S1,1 ¯λ p(t) dσdt + ∫S1,2 ¯λ ∂tu dσdt + ∫S2 ¯λ ∂tu dσdt,  ∀¯λ ∈ H1λ(ΩT), (29)
 0 = ∂L/∂u(v)(¯u) = ∫ST (u − ~u) ¯u zδ dσdt − ∫Ω (∂λ/∂t)(x,0) ¯u(x,0) dx − ∫S1,2∪S2 (∂λ/∂t) ¯u dσdt − ∫ΩT (∂λ/∂t)(∂¯u/∂t) dxdt + ∫ΩT (c∇λ)(∇¯u) dxdt,  ∀¯u ∈ H1u(ΩT). (30)

Finally, we obtain the equation which expresses stationarity of the gradient with respect to c:

 0 = ∂L/∂c(v)(¯c) = ∫ΩT (∇u)(∇λ) ¯c dxdt + γ∫Ω (c − c0) ¯c dx,  x ∈ Ω. (31)

The equation (29) is the weak formulation of the state equation (1) and the equation (30) is the weak formulation of the following adjoint problem

 ∂²λ/∂t² − ∇⋅(c∇λ) = −(u − ~u)|ST zδ  in ΩT,
 λ(⋅,T) = ∂λ/∂t(⋅,T) = 0,
 ∂nλ = ∂tλ  on S1,2,
 ∂nλ = ∂tλ  on S2,
 ∂nλ = 0  on S3. (32)

We note that the absorbing boundary conditions here have a positive sign. However, after discretization in time of these conditions we will obtain the same schemes for the computation of λ as for the computation of u in the forward problem, since we solve the adjoint problem backward in time.

Let now the functions u(c), λ(c) be the exact solutions of the forward and adjoint problems, respectively, for the known function c satisfying the condition (20). Then, with v(c) = (u(c), λ(c), c) and using the fact that for the exact solutions ∂L/∂u(v(c)) = ∂L/∂λ(v(c)) = 0, from (26) we have

 J(u(c),c)=L(v(c)) (33)

and, assuming that the solutions u(c), λ(c) are sufficiently stable (see Chapter 5 of the book [21] for details), we can write that the Fréchet derivative of the Tikhonov functional is given by

 J′(c)(x) := J′(u(c), c)(x) = L′c(v(c))(x). (34)

Inserting (31) into (34) we get

 J′(c)(x):=J′(u(c),c)(x)=∫T0(∇u(c))(∇λ(c))(x,t) dt+γ(c−c0)(x). (35)

We note that the Lagrangian (26) and the optimality conditions (29), (30) remain the same when homogeneous initial conditions are used in the model problem (1); only the terms containing the initial conditions disappear.

5 Finite element method for the solution of an optimization problem

In this section, we formulate the finite element method for the solution of the forward problem (1) and the adjoint problem (32). We also present a conjugate gradient method for the solution of our IP.

5.1 Finite element discretization

We discretize ΩT, denoting by Kh the partition of the domain Ω into tetrahedra K (h being a mesh function, defined as h|K = hK, representing the local diameter of the elements), and we let Jτ be a partition of the time interval (0, T) into time sub-intervals J of uniform length τ. We also assume a minimal angle condition on the elements of Kh [11].

To formulate the finite element method, we define the finite element spaces Wuh, Wλh and Ch. First we introduce the finite element trial space Wuh for u defined by

 Wuh:={w∈H1u:w|K×J∈P1(K)×P1(J),∀K∈Kh,∀J∈Jτ},

where P1(K) and P1(J) denote the sets of piecewise linear functions on K and J, respectively. We also introduce the finite element test space Wλh defined by

 Wλh:={w∈H1λ:w|K×J∈P1(K)×P1(J),∀K∈Kh,∀J∈Jτ}.

To approximate the function c(x) we will use the space of piecewise constant functions Ch ⊂ L2(Ω),

 Ch:={u∈L2(Ω):u|K∈P0(K),∀K∈Kh}, (36)

where P0(K) is the piecewise constant function on K.

Next, we define Vh = Wuh × Wλh × Ch. Usually dim Vh < ∞ and Vh ⊂ U1 as a set, and we consider Vh as a discrete analogue of the space U1. We introduce the same norm in Vh as the one in U0; in finite-dimensional spaces all norms are equivalent, and in our computations we compute the coefficients in the space Ch. The finite element method now reads: Find vh ∈ Vh, such that

 L′(vh)(¯v)=0 ∀¯v∈Vh. (37)

Using (37) we can write the finite element method for the forward problem (1) (for convenience, here and in section 5.2 we use homogeneous initial conditions f0 = f1 = 0 in Ω): Find uh ∈ Wuh such that, for the known ch ∈ Ch and for all ¯λ ∈ Wλh,

 −∫ΩT (∂¯λ/∂t)(∂uh/∂t) dxdt − ∫S1,1 p(t) ¯λ dσdt + ∫S1,2∪S2 ∂tuh ¯λ dσdt + ∫ΩT (ch∇uh)(∇¯λ) dxdt = 0. (38)

Similarly, the finite element method for the adjoint problem (32) reads: Find λh ∈ Wλh such that, for the known ch ∈ Ch, uh ∈ Wuh and for all ¯u ∈ Wuh,

 −∫ΩT (∂λh/∂t)(∂¯u/∂t) dxdt + ∫ST (uh − ~u) zδ ¯u dσdt − ∫S1,2∪S2 ∂tλh ¯u dσdt + ∫ΩT (ch∇λh)(∇¯u) dxdt = 0. (39)

5.2 Fully discrete scheme

We expand the functions uh and λh in terms of the standard continuous piecewise linear functions in space and in time, substitute them into (38) and (39), and compute explicitly all the time integrals appearing in the system of discrete equations. Finally, we obtain the following system of linear equations for the forward and adjoint problems (1), (32), respectively (for convenience we consider here homogeneous initial conditions f0 = f1 = 0):

 M(u^{k+1} − 2u^k + u^{k−1}) = τ²G^k − τ²Ku^k − (τ/2)M∂Ω(u^{k+1} − u^{k−1}),
 M(λ^{k+1} − 2λ^k + λ^{k−1}) = −τ²S^k − τ²Kλ^k + (τ/2)M∂Ω(λ^{k+1} − λ^{k−1}), (40)

with initial conditions for u and final conditions for λ:

 u(⋅,0) = ∂u/∂t(⋅,0) = 0, (41)
 λ(⋅,T) = ∂λ/∂t(⋅,T) = 0. (42)

Here, M and M∂Ω are the block mass matrix in space and the mass matrix at the boundary ∂Ω, respectively, K is the block stiffness matrix, G^k and S^k are load vectors at time level k, u^k and λ^k denote the nodal values of uh and λh, respectively, and τ is the time step. For details of obtaining this system of discrete equations and computing the time integrals in it, as well as for obtaining the system (40), we refer to [4].

Let us define the mapping FK for the reference element ^K such that FK(^K) = K, and let φ be the piecewise linear local basis function on the reference element ^K. Then the explicit formulas for the entries in the system (40) at each element K can be given as:

 M^K_{i,j} = (φi∘FK, φj∘FK)_K,
 K^K_{i,j} = (ci∇φi∘FK, ∇φj∘FK)_K,
 G^K_j = (p_k, φj∘FK)_{K∈S1,1},
 S^K_j = ((u^h_{i,k} − ~u_{i,k})|∂1Ω zδ, φj∘FK)_K, (43)

where (⋅,⋅)_K denotes the scalar product over the element K, and the index K ∈ S1,1 indicates the part of the boundary of the element K which lies at ∂1Ω. Here, u^h_{i,k} are the computed solutions of the forward problem (1), and ~u_{i,k} are the discrete measured values of ~u at ∂1Ω at the point xi and the time moment tk.

To obtain an explicit scheme we approximate M with the lumped mass matrix ML (for further details, see [16]). Next, we multiply (40) by (ML)^{−1} and get the following explicit method inside Ω:

 u^{k+1} = −τ²(ML)^{−1}G^k + (2 − τ²(ML)^{−1}K)u^k − u^{k−1},
 λ^{k−1} = −τ²(ML)^{−1}S^k + (2 − τ²(ML)^{−1}K)λ^k − λ^{k+1}. (44)

In the formulas above the terms with M∂Ω have disappeared, since we use the schemes (44) only inside Ω.
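As an illustration of one interior leapfrog update of the type used in (40) and (44), the following sketch advances a single explicit time step with a lumped (diagonal) mass matrix. The function name is hypothetical, and the plus sign in front of the load term follows the first equation of (40); this is a sketch, not the WavES implementation:

```python
import numpy as np

def leapfrog_step(u_k, u_km1, K, mass_lumped, f_k, tau):
    """One explicit step of M_L (u^{k+1} - 2 u^k + u^{k-1}) = tau^2 (f^k - K u^k),
    i.e. the interior part of the scheme with a lumped mass matrix.
    mass_lumped: 1-D array holding the diagonal of M_L."""
    return 2.0 * u_k - u_km1 + tau**2 * (f_k - K @ u_k) / mass_lumped
```

Because M_L is diagonal, no linear system is solved per step; the stability of this explicit scheme requires the usual CFL-type restriction on τ.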

Finally, for reconstructing c we can use a gradient-based method with an appropriate initial guess c0 which satisfies the condition (16). We have the following expression for the discrete version of the gradient with respect to the coefficient in (31):

 gh(x)=∫T0∇uh∇λhdt+γ(ch−c0). (45)

Here, λh and uh are the computed solutions of the adjoint and forward problems, respectively, obtained using the explicit schemes (44), and ch is the approximated value of the computed coefficient.
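The discrete gradient (45) can be assembled from stored element-wise gradients of uh and λh. The sketch below uses trapezoidal quadrature in time, which is an assumption, since the paper does not specify the quadrature; array shapes and the function name are illustrative:

```python
import numpy as np

def discrete_gradient(grad_u, grad_lam, c_h, c0, gamma, tau):
    """Discrete gradient (45): g_h = int_0^T grad(u_h) . grad(lambda_h) dt + gamma (c_h - c0).
    grad_u, grad_lam: arrays of shape (n_t, n_cells, dim) with per-cell spatial gradients
    at each time level; the time integral uses trapezoidal weights."""
    integrand = np.sum(grad_u * grad_lam, axis=2)        # pointwise dot product, (n_t, n_cells)
    w = np.full(integrand.shape[0], tau)                 # trapezoidal rule weights
    w[0] = w[-1] = tau / 2.0
    time_integral = w @ integrand                        # (n_cells,)
    return time_integral + gamma * (c_h - c0)
```

Since c is piecewise constant (36), one gradient value per tetrahedron suffices, which is why the spatial index runs over cells rather than nodes.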

5.3 The algorithm

We use the conjugate gradient method for the iterative update of the approximations c^m_h of the function ch, where m is the iteration number in our optimization procedure. We denote

 gm(x)=∫T0∇umh∇λmhdt+γ(cmh−c0), (46)

where the functions u^m_h, λ^m_h are computed by solving the state and adjoint problems with c := c^m_h.
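A minimal sketch of one conjugate gradient update of the coefficient follows. The Fletcher-Reeves formula for the conjugation parameter and the fixed step size α are assumptions, since the text above does not state which variant is used:

```python
import numpy as np

def cg_update(c_m, g_m, g_prev, d_prev, alpha):
    """One conjugate gradient update c^{m+1} = c^m + alpha * d^m for the coefficient.
    g_m: current gradient (46); d_prev, g_prev: previous direction and gradient.
    Returns the updated coefficient and the new search direction."""
    if d_prev is None:
        d_m = -g_m                                        # first iteration: steepest descent
    else:
        beta = np.dot(g_m, g_m) / np.dot(g_prev, g_prev)  # Fletcher-Reeves (assumed variant)
        d_m = -g_m + beta * d_prev
    return c_m + alpha * d_m, d_m
```

In practice α may also be chosen by a line search, and the iteration is stopped when the norm of the gradient (46) falls below a tolerance.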

Algorithm

• Choose a mesh Kh in Ω and a time partition Jτ of the time interval (0, T). Start with the initial approximation c^0_h = c0 and compute the sequences of