# Recovering of dielectric constants of explosives via a globally strictly convex cost functional

## Abstract

The inverse problem of estimating dielectric constants of explosives using boundary measurements of one component of the scattered electric field is addressed. It is formulated as a coefficient inverse problem for a hyperbolic differential equation. After applying the Laplace transform, a new cost functional is constructed and a variational problem is formulated. The key feature of this functional is the presence of the Carleman Weight Function for the Laplacian. The strict convexity of this functional on a bounded set of an arbitrary size in a Hilbert space is proven. This allows for establishing the global convergence of the gradient descent method. Some results of numerical experiments are presented.

Keywords: Coefficient inverse problem, Laplace transform, Carleman weight function, strictly convex cost functional, global convergence, Laguerre functions, numerical experiments.

AMS classification codes: 35R30, 35L05, 78A46

## 1 Introduction

The detection and identification of explosive devices has always been of particular interest in security and mine-countermeasure operations. One of the most popular techniques used for the purpose of detection and identification is the Ground Penetrating Radar (GPR). Exploiting the energy of backscattered electromagnetic pulses measured on a boundary, the GPR allows for mapping internal structures containing explosives. Although this concept is not new, combining the GPR with quantitative imaging may significantly enhance the detection and especially the identification of explosive devices. This idea was recently proposed and developed in a series of publications. For example, we mention [3, 5, 9, 10], where three-dimensional (3-d) quantitative imaging of dielectric constants of targets mimicking explosives was performed using experimental backscattering measurements; we also refer to [7] for 1-d imaging using real data collected in the field by a Forward Looking Radar.

In the above works, the imaging problem was formulated as a Coefficient
Inverse Problem (CIP) for a time dependent wave-like PDE. The main question
in performing quantitative imaging is as follows: *How to find a good
approximation of the solution to the corresponding CIP without any advanced
knowledge of a small neighborhood of this solution?* We call a numerical
method providing such an approximation *globally convergent*. As soon
as this approximation is found, a number of conventional locally convergent
methods may be used to refine that approximation, see, e.g., Chapters 4 and
5 of [3] for a two-stage numerical procedure.

It is well known that the objective functionals resulting from applying the traditional least-squares method are usually nonconvex. This fact explains the existence of multiple local minima and ravines of those functionals. In turn, the latter leads to the local convergence of gradient and Newton-like methods. That is, the convergence of these methods is guaranteed only if an initial approximation lies in a small neighborhood of the solution. However, from our experience working with experimental data (see the above citations), we have observed that such a requirement is not normally satisfied in practical situations.
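As a toy illustration of such multiple local minima (not the paper's functional), one can consider the least-squares misfit for recovering the frequency of a sinusoid; the model and all parameters below are assumptions made purely for illustration:

```python
import numpy as np

# Illustrative only: least-squares misfit for recovering the frequency a* = 2.0
# of a signal sin(a* t) observed on [0, 10]; not the functional of this paper.
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
data = np.sin(2.0 * t)

def misfit(a):
    """F(a) = ||sin(a t) - data||^2, approximated by a Riemann sum."""
    return float(np.sum((np.sin(a * t) - data) ** 2) * dt)

a_grid = np.linspace(0.0, 5.0, 501)
F = np.array([misfit(a) for a in a_grid])

# Count strict interior local minima of the sampled objective: the global
# minimum sits at a = 2, but several spurious minima surround it.
local_min = [i for i in range(1, len(F) - 1) if F[i] < F[i - 1] and F[i] < F[i + 1]]
print(len(local_min))
```

A gradient method started in the basin of any of the spurious minima would converge to it rather than to the true frequency, which is exactly the local-convergence phenomenon described above.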

In this paper we propose a new numerical method which provides the global convergence. Currently there are two approaches to constructing globally convergent algorithms. They were developed in a number of publications summarized in the books [3, 8]. In particular, the above cited works treating experimental data use the approach of [3]. The common part of both approaches is the temporal Laplace transform. Let $s$ be the parameter of the Laplace transform. We assume that $s\geq \underline{s}$, where $\underline{s}$ is a large enough positive constant. Then, applying the Laplace transform to the time-dependent wave-like PDE, one obtains a boundary value problem for a nonlinear integral differential equation. However, the main difficulty then is that the integration with respect to $s$ is required over the infinite interval $(\underline{s},\infty)$. In the convexification method [8] that integral was truncated, and the residual was set to zero. Unlike this, in [3] the residual, i.e., the so-called “tail function”, was approximated in an iterative process.
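A minimal numerical sketch of the temporal Laplace transform (with an illustrative signal $u(t)=e^{-t}$, whose transform is $1/(s+1)$) shows both the quadrature and why the kernel $e^{-st}$ suppresses the contribution of large times, so that a finite recording window suffices:

```python
import numpy as np

# Sketch of the temporal Laplace transform w(s) = \int_0^infty u(t) e^{-st} dt,
# approximated by the trapezoid rule on a finite window [0, T].  The factor
# e^{-st} makes the tail beyond T negligible for s bounded away from zero.
def laplace_transform(u, s, T=50.0, n=200001):
    t = np.linspace(0.0, T, n)
    f = u(t) * np.exp(-s * t)
    dt = t[1] - t[0]
    return float((np.sum(f) - 0.5 * (f[0] + f[-1])) * dt)

# Illustrative signal u(t) = exp(-t); its exact transform is 1/(s + 1).
w = laplace_transform(lambda t: np.exp(-t), s=2.0)
print(abs(w - 1.0 / 3.0))  # quadrature error only; the truncated tail is tiny
```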

The open question is whether it is possible to avoid such a truncation, i.e., whether it is possible to avoid the procedure of approximating the tail function. This question is addressed in the current paper.

The *main novelty* here is that the truncation of the infinite integral
is replaced by the truncation of a series with respect to certain $s$-dependent
functions forming an orthonormal basis in $L_{2}(\underline{s},\infty)$. In this way, we obtain a coupled system of nonlinear elliptic
equations with respect to the $x$-dependent coefficients of the truncated
series, where $N$ is the number of retained terms. In addition, we construct a least
squares objective functional with a Carleman Weight Function (CWF) in it.
The main result is formulated in Theorem 4.2. Given a bounded set of
an arbitrary size in a certain Hilbert space, one can choose a parameter of
the CWF such that this functional becomes strictly convex on this set. This
implies convergence of gradient methods in finding the unique minimizer of
the functional on that set, if it exists, starting from any point of that
set. Since restrictions on the diameter of that bounded set are not imposed,
we call the latter *global convergence* and we call that functional
*globally* strictly convex. This idea also works in the case when the
Fourier transform is applied to the wave-like equation instead of the
Laplace transform.

To demonstrate the computational feasibility of the proposed method, we perform a limited numerical study in the 1-d case. In particular, we show numerically that if a locally convergent algorithm starts from an approximation obtained by the proposed globally convergent algorithm, then the accuracy of approximations can be significantly improved. On the other hand, if that locally convergent algorithm starts from the background medium, then it may fail to perform well. In section 2 we formulate our inverse problem and theorems in 3-d. In sections 3–5 we prove these theorems. In section 6 we briefly summarize the special case of the 1-d problem. Section 7 describes some details of the numerical implementation in the 1-d case. Finally, in section 8 we demonstrate the numerical performance of the proposed algorithm.
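The effect of the two-stage procedure described above can be mimicked on a one-variable toy objective (purely illustrative, not the paper's functional): gradient descent refines a good first approximation, but started from the "background" guess it stalls in a spurious local minimum.

```python
import numpy as np

# Toy nonconvex objective with a spurious local minimum near a = 0.93 and the
# global minimum near a = -1.06 (illustrative; not the paper's functional).
F = lambda a: (a ** 2 - 1.0) ** 2 + 0.5 * a
dF = lambda a: 4.0 * a * (a ** 2 - 1.0) + 0.5

def descend(a, lr=0.02, iters=2000):
    """Plain gradient descent: a locally convergent refinement method."""
    for _ in range(iters):
        a -= lr * dF(a)
    return a

a_refined = descend(-0.8)  # started from a rough global approximation
a_stuck = descend(1.5)     # started from a poor "background" guess
print(a_refined, a_stuck)  # two different limits: global vs spurious minimum
```

The run started near the global basin reaches the global minimizer, while the other run converges to the local minimum with a strictly larger objective value, which mirrors the numerical observation stated above.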

## 2 The 3-d coefficient inverse problem

Below, let $\Omega\subset\mathbb{R}^{3}$ be a convex bounded domain with a piecewise-smooth boundary $\partial\Omega$. Let the function $c(x)$ satisfy the following conditions:

$$c(x)\in C^{\alpha}\left(\mathbb{R}^{3}\right),\quad c(x)\in\left[1,d\right],\quad c(x)=1\ \text{for}\ x\in\mathbb{R}^{3}\setminus\Omega,\tag{1}$$

where the numbers $d>1$ and $\alpha\in(0,1)$ are given. In this paper, $C^{k+\alpha}$ denotes Hölder spaces, where $k\geq 0$ is an integer and $\alpha\in(0,1)$. Consider the following Cauchy problem:

$$c(x)u_{tt}=\Delta u+\delta\left(z-z_{0}\right)f(t),\quad(x,t)\in\mathbb{R}^{3}\times(0,\infty),\tag{2}$$

$$u(x,0)=0,\quad u_{t}(x,0)=0.\tag{3}$$

The coefficient $c(x)$ represents the spatially distributed dielectric constant. Here $c(x)$ is normalized so that its value in the background medium, i.e., in $\mathbb{R}^{3}\setminus\Omega$, equals $1$. The function $u(x,t)$ represents one of the components of the electric field generated by an incident plane wave propagating along the $z$-axis and excited at the plane $\left\{z=z_{0}\right\}$. The function $f(t)$, which represents the time-dependent waveform of the incident plane wave, is continuous and bounded. Of course, the propagation of electromagnetic waves should be modeled by Maxwell's equations. However, there are two reasons for us to use the scalar equation (2). The first reason is that most GPR systems provide only scalar data instead of the three components of the electric field. For example, in the experiments used in [3, 5, 7, 9, 10] only one component of the electric field was generated by the source, and the detector measured only that component of the backscattered electric field. The second reason is that it was demonstrated numerically in [4] that if the incident wave has only one non-zero component of the electric field, then this component dominates the other two. Moreover, equation (2) approximates well the propagation of that component even in 3-d [4].
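A minimal 1-d sketch of a wave model of this type can be set up as follows; here $c\equiv 1$, an initial displacement replaces the plane-wave source of (2)–(3), and d'Alembert's formula serves as the reference solution (all of these are assumptions made for illustration):

```python
import numpy as np

# 1-d sketch of the wave model: u_tt = u_xx with c = 1.  With dt = dx (CFL
# number 1) the leapfrog scheme reproduces d'Alembert's solution
# u(x, t) = (g(x - t) + g(x + t)) / 2 for initial displacement g and zero
# initial velocity, which gives a cheap correctness check.
nx, L = 2001, 20.0
x = np.linspace(-L / 2, L / 2, nx)
dx = x[1] - x[0]
dt = dx  # CFL number 1

g = lambda y: np.exp(-10.0 * y ** 2)  # illustrative initial pulse
u_prev = g(x)
# First step consistent with u_t(x, 0) = 0:
u = u_prev + 0.5 * (dt / dx) ** 2 * (np.roll(u_prev, -1) - 2 * u_prev + np.roll(u_prev, 1))
nt = 400
for _ in range(nt - 1):
    u_next = 2 * u - u_prev + (dt / dx) ** 2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
    u_prev, u = u, u_next

t = nt * dt  # the pulse has split and traveled, staying inside the domain
exact = 0.5 * (g(x - t) + g(x + t))
print(np.max(np.abs(u - exact)))  # tiny: the scheme is exact at CFL = 1
```

For a variable $c(x)$ the same scheme applies with the local CFL restriction, at the cost of losing this exactness.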

CIP 1: *Suppose that conditions (1)–(3) are
satisfied and that the plane $\left\{z=z_{0}\right\}$ lies outside of the domain $\overline{\Omega}$. Determine
the coefficient $c(x)$ for $x\in\Omega$, assuming that the
following function $g_{0}(x,t)$ is known:*

$$u(x,t)=g_{0}(x,t),\quad(x,t)\in\partial\Omega\times(0,\infty).\tag{4}$$

The function $g_{0}$ in (4) models a boundary measurement. Having $g_{0}$, one can uniquely solve the initial value problem (2)–(3) outside of the domain $\Omega$. Hence, the normal derivative $g_{1}$ is also known:

$$\partial_{n}u(x,t)=g_{1}(x,t),\quad(x,t)\in\partial\Omega\times(0,\infty).\tag{5}$$

We note that the knowledge of the functions $g_{0}$ and $g_{1}$ on an infinite rather than a finite time interval is not a serious restriction of our method, since the Laplace transform, which we use, effectively cuts off the values of these functions for large $t$. In addition, if the incident plane wave is excited on a finite time interval, the scattered wave will eventually vanish, as we observed in our experiments in [3, 5, 7, 9, 10]. In practice, incident waves are usually excited for a short period of time.

## 3 The coupled system of nonlinear elliptic equations

Consider the Laplace transform
$$w(x,s)=\int_{0}^{\infty}u(x,t)e^{-st}\,dt,$$
where $s>0$ is referred to as the pseudofrequency. We also denote by $\tilde{f}(s)$ the Laplace transform of $f(t)$. Consider $s\geq\underline{s}>0$, where the number $\underline{s}$ is large enough, so that the Laplace transforms of $u$ and its derivatives converge absolutely. The number $d$ is defined in (1). We assume that $\tilde{f}(s)\neq 0$ for all $s\geq\underline{s}$. Then the function $w$ satisfies the equation:

$$\Delta w-s^{2}c(x)w=-\delta\left(z-z_{0}\right)\tilde{f}(s),\quad x\in\mathbb{R}^{3}.\tag{6}$$

Let $w_{0}(x,s)$ denote the solution of equation (6) in the case $c(x)\equiv 1$ which tends to zero as $\left|x\right|\to\infty$; in that case such a solution is unique. It is shown in Theorem 3.1 of [10] that in the case $f(t)=\delta(t)$

(7) |

and that the function $w$ can be represented in the form

(8) |

Furthermore, the same theorem claims that these properties hold for all $x\in\overline{\Omega}$. Thus, we assume these properties in our algorithm even if $f(t)\neq\delta(t)$. Next, define $v(x,s)=s^{-2}\ln w(x,s)$. Substituting $w=e^{s^{2}v}$ into (6) and keeping in mind that the plane $\left\{z=z_{0}\right\}$ lies outside of $\overline{\Omega}$, we obtain

$$\Delta v+s^{2}\left|\nabla v\right|^{2}=c(x),\quad x\in\Omega.\tag{9}$$

Hence, if the function $v$ is known, then the coefficient $c(x)$ can be computed directly using (9). We define $q(x,s)=\partial_{s}v(x,s)$. Thus, $v(x,s)=-\int_{s}^{\infty}q(x,\tau)\,d\tau$. Note that this integral converges absolutely together with its derivatives with respect to $x$ up to the second order. The latter is true if certain non-restrictive conditions are imposed on the function $f$ (see Lemma 6.5.2 in [8]), and we assume that these conditions are in place. Hence, differentiating (9) with respect to $s$ leads to the following nonlinear integral differential equation mentioned in the Introduction:

$$\Delta q-2s^{2}\nabla q\cdot\int_{s}^{\infty}\nabla q(x,\tau)\,d\tau+2s\left[\int_{s}^{\infty}\nabla q(x,\tau)\,d\tau\right]^{2}=0,\quad x\in\Omega.\tag{10}$$

In addition, the boundary conditions for $q$ and $\partial_{n}q$ on $\partial\Omega$ can be derived from the functions $g_{0}$ and $g_{1}$ in (4) and (5):

(11) |

We have obtained the nonlinear boundary value problem (10)–(11) for the function $q$. If this function is found, then the coefficient $c(x)$ can be easily found via backwards calculations. Therefore, the central focus should be on the solution of the problem (10)–(11).
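The backwards calculation of the coefficient from (9) can be sketched in 1-d; the 1-d analogue $c(x)=v''(x)+s^{2}\,(v'(x))^{2}$ and the test function used below are assumptions made for illustration:

```python
import numpy as np

# 1-d sanity check of the direct formula (9) (sketch, assuming the 1-d
# analogue c(x) = v''(x) + s^2 (v'(x))^2): pick a smooth v, generate c from it
# exactly, then recover c from samples of v alone by central finite differences.
s = 2.0
v = lambda x: np.sin(x)                               # illustrative v
c_exact = lambda x: -np.sin(x) + s ** 2 * np.cos(x) ** 2  # exact c from (9)

h = 1e-3
x = np.linspace(0.0, np.pi, 101)
v1 = (v(x + h) - v(x - h)) / (2 * h)                  # central difference for v'
v2 = (v(x + h) - 2 * v(x) + v(x - h)) / h ** 2        # central difference for v''
c_rec = v2 + s ** 2 * v1 ** 2

print(np.max(np.abs(c_rec - c_exact(x))))  # O(h^2) discretization error
```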

We remark that the functions $g_{0},g_{1}$ are results of measurements. Hence, they contain noise. Although one needs to calculate the first derivative with respect to $s$ of these functions in order to find the boundary data in (11), it was observed in [3] that this can be done in a stable way, since the Laplace transform smooths out that noise. In addition, in our numerical computation we also remove high-frequency noise by truncating high-order Fourier coefficients of the Fourier-transformed data.
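The smoothing effect of the Laplace transform on measurement noise can be checked on a toy signal (all parameters here are illustrative): white noise that perturbs the time trace substantially largely averages out under the integral.

```python
import numpy as np

# Sketch: white noise of amplitude 0.1 added to u(t) = exp(-t) perturbs the
# time trace strongly, but its contribution to w(s) = \int u e^{-st} dt
# largely cancels, so the transformed data are much cleaner.
rng = np.random.default_rng(0)
T, n, s = 30.0, 30001, 2.0
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
u_clean = np.exp(-t)
u_noisy = u_clean + 0.1 * rng.standard_normal(n)

transform = lambda u: float(np.sum(u * np.exp(-s * t)) * dt)
w_clean, w_noisy = transform(u_clean), transform(u_noisy)

time_err = np.max(np.abs(u_noisy - u_clean)) / np.max(np.abs(u_clean))
freq_err = abs(w_noisy - w_clean) / abs(w_clean)
print(time_err, freq_err)  # the relative error after the transform is far smaller
```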

There have been two globally convergent methods proposed by our group so far [3, 8]. The common point of both methods is that the integral in (10) is written as

The residual is called the “tail function”. In the method of [8] this tail function was set to zero, whereas in [3] it was approximated in an iterative process.

The *key novelty* of the method of this paper is that it does not
truncate the integral over $(s,\infty)$ in (10) as in the above methods.
Instead, we represent the function $q$ as a series with respect to an
orthonormal basis of $L_{2}(\underline{s},\infty)$. Using this
representation, the integral over the infinite interval in (10) can be easily computed. Let $\left\{f_{n}(s)\right\}_{n=0}^{\infty}$ be an orthonormal basis in $L_{2}(\underline{s},\infty)$. As an example, one can consider the Laguerre functions [1]
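The orthonormality of the Laguerre functions $f_{n}(t)=e^{-t/2}L_{n}(t)$ on $(0,\infty)$ (the basis on $(\underline{s},\infty)$ is obtained by a shift of the argument) can be verified with Gauss–Laguerre quadrature:

```python
import numpy as np
from numpy.polynomial import laguerre

# Check: f_n(t) = e^{-t/2} L_n(t) are orthonormal in L2(0, inf), since
# \int_0^inf e^{-t} L_n(t) L_m(t) dt = delta_{nm}.  Gauss-Laguerre quadrature
# with 50 nodes integrates these polynomial products exactly.
nodes, weights = laguerre.laggauss(50)

N = 6
G = np.empty((N, N))  # Gram matrix of the first N Laguerre functions
for n in range(N):
    Ln = laguerre.lagval(nodes, np.eye(N)[n])  # values of L_n at the nodes
    for m in range(N):
        Lm = laguerre.lagval(nodes, np.eye(N)[m])
        G[n, m] = np.sum(weights * Ln * Lm)

print(np.max(np.abs(G - np.eye(N))))  # essentially machine precision
```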

Hence, one can represent the function $q(x,s)$ as

$$q(x,s)=\sum_{n=0}^{N-1}q_{n}(x)f_{n}(s),\tag{12}$$

where $N$ is a sufficiently large integer which should be chosen in numerical experiments. Consider the vector $\left(q_{0},\dots,q_{N-1}\right)$ of the coefficients in the truncated series (12). Substituting the truncated series (12) into (10), we obtain

(13) |

To be precise, one should have “$\approx$” instead of “$=$” in (13) due to the truncation (12). Multiplying both sides of (13) by $f_{m}(s)$, integrating with respect to $s$ over $(\underline{s},\infty)$ and keeping in mind the fact that $\left\{f_{n}\right\}$ is an orthonormal basis in $L_{2}(\underline{s},\infty)$, we obtain the following system of coupled nonlinear elliptic equations:

(14) |

where the numbers , , are given by

The boundary conditions for $q_{0},\dots,q_{N-1}$ are obtained by substituting the truncated series (12) into (11) again. For the convenience of the following analysis, we rewrite system (14) together with the boundary conditions as the following boundary value problem with over-determined boundary conditions; note that we have both Dirichlet and Neumann boundary conditions:

(15) | |||

(16) |

where the boundary vector functions are computed from the functions in (11), with

If we can find an approximate solution of the problem (15)–(16), then we can find an approximation for the function $q$ via the truncated series (12), and hence for the coefficient $c(x)$. Therefore, we focus below on the method of approximating the vector function $\left(q_{0},\dots,q_{N-1}\right)$.
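The accuracy of a truncation like (12) can be probed numerically. In this sketch we expand the illustrative function $g(t)=e^{-t}$ over the Laguerre functions on $(0,\infty)$ (the shift to $(\underline{s},\infty)$ is a change of variable); for this $g$ the coefficients are $c_{n}=\tfrac{2}{3}\left(\tfrac{1}{3}\right)^{n}$ analytically, so the truncation error should fall off quickly with $N$:

```python
import numpy as np
from numpy.polynomial import laguerre

# Sketch of the truncation (12): expand g(t) = e^{-t} over the orthonormal
# Laguerre functions f_n(t) = e^{-t/2} L_n(t) and measure the discrete L2
# error of the N-term partial sum on a fine grid.
t = np.linspace(0.0, 60.0, 60001)
dt = t[1] - t[0]
g = np.exp(-t)

def f(n):
    """n-th Laguerre function sampled on the grid."""
    return np.exp(-t / 2) * laguerre.lagval(t, np.eye(n + 1)[n])

def trunc_error(N):
    approx = np.zeros_like(t)
    for n in range(N):
        fn = f(n)
        approx += (np.sum(g * fn) * dt) * fn  # coefficient c_n = <g, f_n>
    return float(np.sqrt(np.sum((g - approx) ** 2) * dt))

print(trunc_error(3), trunc_error(8))  # the error decreases with N
```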

Below, for a Hilbert space $H$, any statement that a vector function belongs to $H$ means that every component of the vector function belongs to $H$; the norm of the vector function in $H$ is defined accordingly.

## 4 Globally Strictly Convex Cost Functional

Our ultimate goal is to apply this method to the inversion of the experimental data of [5, 9, 10]. Thus, just as in these references, the domain $\Omega$ below is chosen to be a rectangular parallelepiped. Without loss of generality, it is convenient to assume that

where is a number. Thus, where

As in [5, 9, 10], one side of the parallelepiped is considered as the backscattering side, where the data are measured. Although measurements were not performed on the rest of the boundary, it was demonstrated in these references that assigning

(17) |

does not affect the accuracy of the reconstruction via the technique of [3]. This is probably because of the condition (7). Thus, we now relax conditions (16), assuming that the normal derivative is given only on the backscattering side, the Dirichlet condition is given on a part of the boundary, and no boundary condition is given on the remaining part:

(18) |

Let us introduce a CWF for the Laplace operator which is suitable for this domain and for the boundary conditions (18). Let $\lambda$ and $\nu$ be two large parameters which we will choose later. Then the CWF has the form

(19) |

Hence,

(20) |

Lemma 4.1 establishes a Carleman estimate for the Laplace operator in the domain $\Omega$ with the weight function (19). The proof of this lemma is almost identical to the proof of Lemma 6.5.1 of [3] and is therefore omitted.
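Since the exact form of the CWF in (19) is tied to the chosen domain and is not reproduced here, the following sketch uses a generic exponential weight as a stand-in to show the mechanism: as the large parameter grows, the weight concentrates near the side of the domain where the data are prescribed, so the weighted residual is dominated by that region.

```python
import numpy as np

# Illustration only: a stand-in Carleman-type weight phi_lam(z) = exp(-2*lam*z)
# on (0, 1).  As lam grows, the normalized weight concentrates near z = 0,
# the side carrying the (over-determined) boundary data.
z = np.linspace(0.0, 1.0, 1001)

def mass_near_zero(lam, width=0.1):
    """Fraction of the total weight carried by the strip 0 <= z <= width."""
    phi = np.exp(-2.0 * lam * z)
    return float(np.sum(phi[z <= width]) / np.sum(phi))

print(mass_near_zero(1.0), mass_near_zero(30.0))  # fraction tends to 1 with lam
```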

###### Lemma 4.1.

There exist sufficiently large numbers depending only on the listed parameters such that for an arbitrary function satisfying the following Carleman estimate holds for all and with a constant depending only on the domain

(21) |

Below, the same letter denotes different constants depending only on the domain. Let $R>0$ be an arbitrary number. Define the set of vector functions as

(22) |

This set is open, and the Embedding Theorem implies that

(23) |

Let the parameters be as in Lemma 4.1. We seek the solution of the problem (15), (18) on this set via minimizing the following Tikhonov-like cost functional with the CWF in it and with a regularization parameter:

(24) |

###### Theorem 4.2.

There exists a sufficiently large number, depending only on the listed parameters, such that if the parameters of the CWF exceed it, then the functional is strictly convex on the above set; i.e., there exists a positive constant, depending only on the same parameters, such that for all elements of that set

(25) |

where is the Fréchet derivative of the functional at the point

Proof. The existence of the Fréchet derivative of the functional is established in the course of the proof. Everywhere below, the same letter denotes different positive constants depending only on the listed parameters. By (22),

(26) |

Denote the subspace of the space consisting of vector functions satisfying conditions (26). Let be the matrix,

Hence, (23) implies that

(27) |

Next, by Taylor’s formula

where

(28) |

Hence, for all

(29) |

By (28) and the Cauchy–Schwarz inequality, and then using (27)–(29), we obtain

(30) |

On the other hand,

where the scalar product is taken in the corresponding space. It follows from (29) that

(31) |

The right-hand side of (31) is a bounded linear functional acting on the test function. Hence, the Riesz representation theorem and (31) imply that there exists an element representing it. Thus, the Fréchet derivative of the functional exists. By (29)–(31),

(32) |

Hence, the assertion follows from Lemma 4.1 for sufficiently large parameters.

###### Corollary 4.3 (Uniqueness).

Proof. Suppose that there exist two vector functions satisfying conditions (15), (18). Then both deliver the zero value to the weighted residual part of the functional, and hence both are points of minimum of the functional. Repeating the proof of Theorem 4.2, we obtain the following analog of (33):

(34) |

Letting the large parameter tend to infinity in (34) and using (20), we obtain that the two solutions coincide on the corresponding subdomain. Since the number defining that subdomain is arbitrary, they coincide in the whole domain.

## 5 Global convergence of the gradient descent method

It is well known that the gradient descent method is globally
convergent for functionals which are strictly convex on the entire space.
However, the functional (24) is strictly convex only on a bounded
set. Therefore, we need to prove the global convergence
of this method on this set. Suppose that a minimizer
of (24) exists on this set. In the
regularization theory, such a minimizer is called a *regularized solution* of
the problem (15), (18) [3]. Theorem 4.2
guarantees that such a minimizer is unique. First, we estimate in this
section the distance between the regularized and exact solutions, depending on
the level of error in the data. Next, we establish that Theorem 4.2
implies that the gradient descent method for the minimization of the
functional (24) converges to that minimizer starting from any point of
this set, i.e., it converges globally. In addition, we estimate the distance
between the points of the minimizing sequence of the gradient descent method and
the exact solution of the problem. In principle, the global convergence of other
gradient-type methods for the functional (24) can also be proved;
we do not do so for brevity.
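The mechanism can be seen on a toy strongly convex functional (a positive definite quadratic, purely illustrative): gradient descent converges linearly to the unique minimizer from an arbitrary, far-away starting point.

```python
import numpy as np

# Sketch of the global convergence mechanism: for J(x) = 0.5 x^T A x - b^T x
# with A symmetric positive definite, J is strongly convex everywhere, and
# gradient descent with a small enough step converges from ANY starting point.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)   # symmetric positive definite
b = rng.standard_normal(5)
x_star = np.linalg.solve(A, b)  # the unique minimizer of J

lr = 1.0 / np.linalg.norm(A, 2)       # step below 2/L guarantees convergence
x = 100.0 * rng.standard_normal(5)    # arbitrary, far-away starting point
for _ in range(2000):
    x = x - lr * (A @ x - b)          # gradient step: grad J(x) = A x - b

print(np.linalg.norm(x - x_star))  # negligible: converged from a far start
```

For a functional strictly convex only on a bounded set, as in Theorem 4.2, the same argument applies as long as the iterates remain in that set, which is what the analysis of this section establishes.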

### 5.1 The distance between regularized and exact solutions

Following one of the concepts of Tikhonov for ill-posed problems (see, e.g., section 1.4 in [3]), we assume that there exist noiseless boundary data which correspond to the exact solution of the problem (15), (18). Also, we assume that the data at the corresponding part of the boundary contain an error of the level $\delta>0$:

(35) |

On the other hand, we do not assume any error in the data on the rest of the boundary; see the heuristic condition (17), which was justified numerically in [5, 9, 10]. Theorem 5.1 estimates the distance between the regularized and exact solutions in a norm which might be sufficient for computations. Note that while in Theorem 4.2 we have compared functions satisfying the same boundary conditions, the functions in Theorem 5.1 satisfy different boundary conditions because of the error in the data.

###### Theorem 5.1.

Assume that the conditions of Theorem 4.2 hold. In addition, assume that conditions (35) are satisfied. Suppose that there exists an exact solution of the problem (15), (18), and assume that there exists a minimizer of the functional. Let the number $\delta$ be so small that

(36) |

Let Choose the regularization parameter in (24) as where

Then (as in Theorem 4.2) and

(37) |

Proof. In the proof of Theorem 4.2 the difference of two functions satisfies the zero boundary conditions (26). Now, however, only one of those boundary conditions is zero. Still, it is clear that one can slightly modify the proof of Theorem 4.2 for the case of non-zero Dirichlet and Neumann boundary conditions on the corresponding part of the boundary. To do so, we take into account the second term on the left-hand side of (21). Thus,

(38) |

By (35)

(39) |

It follows from (36) that, since $\delta$ is small, one can choose the parameters accordingly; it can be easily verified that the above choice guarantees the required inclusion. Hence, (38) and (39) imply that for such a choice

(40) |