On Steepest-Descent-Kaczmarz methods for regularizing systems of nonlinear ill-posed equations

A. De Cezaro IMPA, Estr. D. Castorina 110, 22460-320 Rio de Janeiro, Brazil decezaro@impa.br.    M. Haltmeier Department of Computer Science, University of Innsbruck, Technikerstrasse 21a, A-6020 Innsbruck, Austria {markus.haltmeier,otmar.scherzer}@uibk.ac.at.    A. Leitão Department of Mathematics, Federal University of Santa Catarina, P.O. Box 476, 88040-900 Florianópolis, Brazil aleitao@mtm.ufsc.br.    O. Scherzer Department of Computer Science, University of Innsbruck, Technikerstrasse 21a, A-6020 Innsbruck, Austria.
July 18, 2019
Abstract

We investigate modified steepest descent methods coupled with a loping Kaczmarz strategy for obtaining stable solutions of nonlinear systems of ill-posed operator equations. We show that the proposed method is a convergent regularization method. Numerical tests are presented for a linear problem related to photoacoustic tomography and a nonlinear problem related to the testing of semiconductor devices.

Keywords. Nonlinear systems; Ill-posed equations; Regularization; Steepest descent method; Kaczmarz method.

AMS Classification: 65J20, 47J06.

1 Introduction

In this paper we propose a new method for obtaining regularized approximations of systems of nonlinear ill-posed operator equations.

The inverse problem we are interested in consists of determining an unknown physical quantity $x \in X$ from the set of data $(y_0, \dots, y_{N-1})$, where $X$ and $Y_i$ are Hilbert spaces and $y_i \in Y_i$ for $i = 0, \dots, N-1$. In practical situations, we do not know the data exactly. Instead, we have only approximate measured data $y_i^\delta \in Y_i$ satisfying

$\| y_i^\delta - y_i \| \le \delta_i \,, \qquad i = 0, \dots, N-1 \,, \qquad (1)$

with $\delta_i > 0$ (noise level). We use the notation $\delta := (\delta_0, \dots, \delta_{N-1})$. The finite set of data above is obtained by indirect measurements of the parameter, this process being described by the model

$F_i(x) = y_i \,, \qquad i = 0, \dots, N-1 \,, \qquad (2)$

where $F_i : D_i \subseteq X \to Y_i$, and $D_i$ are the corresponding domains of definition.

Standard methods for the solution of system (2) are based on the use of iterative type regularization methods [1, 7, 13, 16, 19] or Tikhonov type regularization methods [7, 23, 30, 32, 33] after rewriting (2) as a single equation $F(x) = y$, where

$F := (F_0, \dots, F_{N-1}) : D(F) := \bigcap_{i=0}^{N-1} D_i \to Y_0 \times \cdots \times Y_{N-1} \qquad (3)$

and $y := (y_0, \dots, y_{N-1})$. However, these methods become inefficient if $N$ is large or the evaluations of $F_i$ and $F_i'$ are expensive. In such a situation, Kaczmarz type methods [6, 15, 22, 26], which cyclically consider each equation in (2) separately, are much faster [24] and are often the method of choice in practice.

For recent analyses of Kaczmarz type methods for systems of ill-posed equations, we refer the reader to [4, 10, 11, 17].

The starting point of our approach is the steepest descent method [7, 29] for solving ill-posed problems. Motivated by the ideas in [10, 11], we propose in this article a loping Steepest-Descent-Kaczmarz method (l-SDK method) for the solution of (2). This iterative method is defined by

$x_{k+1}^\delta = x_k^\delta - \omega_k \, \alpha_k \, s_k^\delta \,, \qquad (4)$

where

$s_k^\delta := F_{[k]}'(x_k^\delta)^* \big( F_{[k]}(x_k^\delta) - y_{[k]}^\delta \big) \,, \qquad (5)$
$\omega_k := \begin{cases} 1 & \text{if } \| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \| > \tau \, \delta_{[k]} \,, \\ 0 & \text{otherwise} \,, \end{cases} \qquad (6)$
$\alpha_k := \begin{cases} \varphi\big( \| s_k^\delta \|^2 / \| F_{[k]}'(x_k^\delta) \, s_k^\delta \|^2 \big) & \text{if } \omega_k = 1 \,, \\ \varphi(\Lambda^{-1}) & \text{if } \omega_k = 0 \,. \end{cases} \qquad (7)$

Here $\tau$ and $\varphi_{\max}$ are appropriately chosen positive numbers (see (13), (14) below), $\Lambda$ is an upper bound for the squared norms of the operator derivatives (cf. (10) below), $[k] := (k \bmod N)$, and $x_0^\delta = x_0 \in X$ is an initial guess, possibly incorporating some a priori knowledge about the exact solution. The function $\varphi : (0, \infty) \to (0, \infty)$ defines the sequence of relaxation parameters $\alpha_k$ and is assumed to be continuous, monotonically increasing, bounded by the constant $\varphi_{\max}$, and to satisfy $\varphi(\lambda) \le \lambda$ for $\lambda > 0$ (see Figure 1).

Figure 1: Typical examples for the relaxation function $\varphi$.

If $\Lambda$ is an upper bound for $\| F_i'(x) \|^2$, then $\| s_k^\delta \|^2 / \| F_{[k]}'(x_k^\delta) \, s_k^\delta \|^2 \ge \Lambda^{-1}$ (cf. Lemma 3.2). Hence the relaxation function $\varphi$ needs only be defined on $[\Lambda^{-1}, \infty)$. In particular, if one chooses $\varphi$ to be constant on that interval, then $\alpha_k = \varphi(\Lambda^{-1})$ for all $k$, and the l-SDK method reduces to the loping Landweber-Kaczmarz (l-LK) method considered in [10, 11]. The convergence analysis of the l-LK method requires $\| F_i'(x) \| \le 1$, whereas the adaptive choice of the relaxation parameters $\alpha_k$ in the present paper allows $\| F_i'(x) \|$ to be much larger than one.
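Two concrete relaxation functions of the type sketched in Figure 1 are, for instance, $\varphi_1(\lambda) := \min\{ \lambda, \varphi_{\max} \}$ and $\varphi_2(\lambda) := \varphi_{\max} \, \lambda / (\varphi_{\max} + \lambda)$. Both choices (given here only as illustrative examples) are continuous, monotonically increasing, bounded by $\varphi_{\max}$, and satisfy $\varphi(\lambda) \le \lambda$ for $\lambda > 0$.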

The l-SDK method consists in incorporating the Kaczmarz strategy (with the loping parameters $\omega_k$) into the steepest descent method. This strategy is analogous to the one introduced in [11] for the Landweber-Kaczmarz iteration. As usual in Kaczmarz-type algorithms, a group of $N$ subsequent steps (starting at some multiple of $N$) shall be called a cycle. The iteration should be terminated when, for the first time, all $x_k^\delta$ are equal within a cycle. That is, we stop the iteration at

$k_*^\delta := \inf \big\{ lN : l \in \mathbb{N} \ \text{and} \ x_{lN}^\delta = x_{lN+1}^\delta = \cdots = x_{lN+N}^\delta \big\} \,. \qquad (8)$

Notice that $k_*^\delta$ is the smallest multiple of $N$ such that

$\| F_i( x_{k_*^\delta}^\delta ) - y_i^\delta \| \le \tau \, \delta_i \,, \qquad i = 0, \dots, N-1 \,. \qquad (9)$

In the case of noise free data, i.e., $\delta_i = 0$ in (1), we choose $\omega_k = 1$ for all $k$, and the iteration (4) - (7) reduces to the Steepest-Descent-Kaczmarz (SDK) method, which is closely related to the Landweber-Kaczmarz (LK) method considered in [17].

It is worth noticing that, for noisy data, the l-SDK method is fundamentally different from the SDK method: the bang-bang relaxation parameters $\omega_k$ have the effect that the iterates defined in (4) become stationary once all components of the residual vector fall below a pre-specified threshold. This characteristic renders (4) - (7) a regularization method (see Section 3). Another consequence of using these relaxation parameters is the fact that, after a large number of iterations, $\omega_k$ will vanish for some $k$ within each iteration cycle. Therefore, the computationally expensive evaluation of $s_k^\delta$ might be loped (skipped), making the l-SDK method in (4) - (7) a fast alternative to the LK method in [17]. Since in practice the steepest descent method performs better than the Landweber method, the l-SDK method is expected to be more efficient than the l-LK method [10, 11]. Our numerical experiments (mainly for the nonlinear problem considered in Section 5) corroborate this conjecture.
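To make the interplay of the loping parameter (6), the adaptive step size (7), and the stopping rule (8) concrete, the following Python sketch shows one possible realization of the l-SDK iteration. The operator interface (each equation represented by callables for $F_i$, its derivative, and the adjoint of the derivative), the relaxation function $\varphi(\lambda) = \min\{\lambda, \varphi_{\max}\}$, and the linear toy problem in the usage example are illustrative choices of this sketch, not prescriptions of the method above.

```python
import numpy as np

def l_sdk(ops, y_delta, delta, x0, tau, phi, max_cycles=500):
    """Sketch of the loping Steepest-Descent-Kaczmarz (l-SDK) iteration (4)-(7).

    ops      -- list of triples (F, dF, dFT): F(x) evaluates F_i(x),
                dF(x, v) applies F_i'(x) to v, dFT(x, r) applies F_i'(x)^*.
    y_delta  -- list of noisy data y_i^delta;  delta -- list of noise levels.
    tau      -- discrepancy constant, cf. (14);  phi -- relaxation function.
    """
    x = x0.copy()
    N = len(ops)
    for cycle in range(max_cycles):
        loped = 0
        for i in range(N):                   # one cycle = N subsequent steps
            F, dF, dFT = ops[i]
            residual = F(x) - y_delta[i]
            if np.linalg.norm(residual) > tau * delta[i]:    # omega_k = 1, cf. (6)
                s = dFT(x, residual)         # steepest descent direction, cf. (5)
                Fs = dF(x, s)                # nonzero for nonzero residual, cf. Lemma 2.1
                alpha = phi(np.dot(s, s) / np.dot(Fs, Fs))   # step size, cf. (7)
                x = x - alpha * s            # update, cf. (4)
            else:
                loped += 1                   # omega_k = 0: the expensive step is loped
        if loped == N:    # all iterates equal within a cycle: stopping rule (8)
            return x, cycle * N
    return x, max_cycles * N

# Usage on a toy problem with two linear equations F_i(x) = A_i x.
rng = np.random.default_rng(0)
A = [rng.standard_normal((40, 25)) for _ in range(2)]
x_true = rng.standard_normal(25)
delta = [1e-2, 1e-2]
y_delta = []
for A_i, d_i in zip(A, delta):
    noise = rng.standard_normal(40)
    y_delta.append(A_i @ x_true + d_i * noise / np.linalg.norm(noise))  # (1) holds
ops = [(lambda x, A_i=A_i: A_i @ x,
        lambda x, v, A_i=A_i: A_i @ v,
        lambda x, r, A_i=A_i: A_i.T @ r) for A_i in A]
phi = lambda lam: min(lam, 1.0)              # phi(lambda) = min(lambda, phi_max)
x_rec, k_stop = l_sdk(ops, y_delta, delta, np.zeros(25), tau=2.0, phi=phi)
```

Note that a loped step costs only the residual evaluation $F_i(x_k^\delta)$, whereas an active step additionally requires one application of the adjoint $F_i'(x_k^\delta)^*$ and one of the derivative $F_i'(x_k^\delta)$; this is the source of the speed-up discussed above.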

The article is outlined as follows. In Section 2 we formulate basic assumptions and derive some auxiliary estimates required for the analysis. In Section 3 we provide a convergence analysis for the l-SDK method. In Sections 4 and 5 we compare the numerical performance of the l-SDK method with other standard methods for inverse problems in photoacoustic tomography and in semiconductor device testing, respectively.

2 Assumptions and Basic Results

We begin this section by introducing some assumptions that are necessary for the convergence analysis presented in the next section. These assumptions derive from the classical conditions used in the analysis of iterative regularization methods [7, 16, 29].

First, we assume that the operators $F_i$ are continuously Fréchet differentiable, and also that there exist $x_0 \in X$, $\rho > 0$, and $\Lambda > 0$ such that

$\| F_i'(x) \|^2 \le \Lambda \,, \qquad x \in B_\rho(x_0) \subseteq \bigcap_{i=0}^{N-1} D_i \,, \quad i = 0, \dots, N-1 \,. \qquad (10)$

Notice that $x_0$ is used as the starting value of the l-SDK iteration. Next we make a uniform assumption on the nonlinearity of the operators $F_i$. Namely, we assume that the local tangential cone condition [7, 16]

$\| F_i(x) - F_i(\bar{x}) - F_i'(x)(x - \bar{x}) \| \le \eta \, \| F_i(x) - F_i(\bar{x}) \| \,, \qquad x, \bar{x} \in B_\rho(x_0) \,, \quad i = 0, \dots, N-1 \,, \qquad (11)$

holds for some $\eta < 1/2$. Moreover, we assume the existence of an element $x^* \in B_{\rho/2}(x_0)$ such that

$F_i(x^*) = y_i \,, \qquad i = 0, \dots, N-1 \,, \qquad (12)$

where $y_i$ are the exact data satisfying (1).

We are now in a position to choose the positive constants $\varphi_{\max}$ and $\tau$ in (7), (6). For the rest of this article we shall assume

$\varphi(\lambda) \le \min \{ \lambda, \varphi_{\max} \} \,, \qquad \lambda \in [\Lambda^{-1}, \infty) \,, \qquad (13)$
$\tau > 2 \, \frac{1 + \eta}{1 - 2\eta} \,. \qquad (14)$

In particular, for linear problems ($\eta = 0$ in (11)) we can choose $\tau$ equal to 2.
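To illustrate (14) numerically: if the tangential cone condition (11) holds with $\eta = 1/4$, then (14) requires $\tau > 2 \cdot (1 + 1/4)/(1 - 1/2) = 5$, while for $\eta = 0.1$ it requires $\tau > 2 \cdot 1.1 / 0.8 = 2.75$; as $\eta \to 0$, the bound approaches the value 2 mentioned above for linear problems.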

In the sequel we verify some basic results that are necessary for the convergence analysis derived in the next section. The first result concerns the well-definedness and positivity of the relaxation parameters $\alpha_k$.

Lemma 2.1.

Let assumptions (10) - (12) be satisfied. Then the coefficients $\alpha_k$ in (7) are well-defined and positive.

Proof.

If $\omega_k = 0$, the assertion follows from (7). If $\omega_k = 1$, then $\| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \| > \tau \, \delta_{[k]} \ge 0$, and the assertion is a consequence of [29, Lemma 3.1], applied to $F_{[k]}$ instead of $F$. ∎

In the next lemma we prove an estimate for the step size of the l-SDK iteration.

Lemma 2.2.

Let $s_k^\delta$ and $\alpha_k$ be defined by (5) and (7). Then

$\omega_k \, \alpha_k \, \| s_k^\delta \|^2 \le \omega_k \, \| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \|^2 \,. \qquad (15)$
Proof.

It is enough to consider the case $\omega_k = 1$. It follows from (7) and (13) that

$\alpha_k \le \frac{\| s_k^\delta \|^2}{\| F_{[k]}'(x_k^\delta) \, s_k^\delta \|^2} \,. \qquad (16)$

Moreover, from the definition of $s_k^\delta$ we obtain

$\| s_k^\delta \|^2 = \big\langle F_{[k]}(x_k^\delta) - y_{[k]}^\delta , \; F_{[k]}'(x_k^\delta) \, s_k^\delta \big\rangle \le \| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \| \, \| F_{[k]}'(x_k^\delta) \, s_k^\delta \| \,.$

Now, substituting the last two expressions in (16) shows (15). ∎
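The estimate (15) can be checked numerically in the linear case $F_{[k]}(x) = A x$, where the adjoint of the derivative is simply $A^T$. In the following snippet, the dimensions and the concrete choice $\varphi(\lambda) = \min\{ \lambda, 1 \}$ are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 25))   # linear operator: F(x) = A x
x = rng.standard_normal(25)
y = rng.standard_normal(40)

residual = A @ x - y
s = A.T @ residual                           # descent direction, cf. (5)
lam = np.dot(s, s) / np.dot(A @ s, A @ s)    # argument of phi in (7)
alpha = min(lam, 1.0)                        # phi(lambda) = min(lambda, 1)

# step size estimate (15): alpha * ||s||^2 <= ||residual||^2
assert alpha * np.dot(s, s) <= np.dot(residual, residual) + 1e-9
```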

The following lemma is an important auxiliary result, which will be used at several places throughout this article.

Lemma 2.3.

Let $x_k^\delta$, $s_k^\delta$, $\omega_k$, and $\alpha_k$ be defined by (4) - (7) and assume that (10) - (12) hold true. If $x_k^\delta \in B_{\rho/2}(x^*)$ for some $k \in \mathbb{N}$, then

$\| x_{k+1}^\delta - x^* \|^2 - \| x_k^\delta - x^* \|^2 \le - \, \omega_k \, \alpha_k \, \| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \| \Big( (1 - 2\eta) \, \| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \| - 2 (1 + \eta) \, \delta_{[k]} \Big) \,. \qquad (17)$
Proof.

If $\omega_k = 0$, then $x_{k+1}^\delta = x_k^\delta$ and (17) follows with equality. If $\omega_k = 1$, it follows from (4) and (5) and Lemma 2.2 that

$\| x_{k+1}^\delta - x^* \|^2 - \| x_k^\delta - x^* \|^2 = -2 \alpha_k \big\langle F_{[k]}(x_k^\delta) - y_{[k]}^\delta , \; F_{[k]}'(x_k^\delta)(x_k^\delta - x^*) \big\rangle + \alpha_k^2 \| s_k^\delta \|^2 \le \alpha_k \Big( {-2} \big\langle F_{[k]}(x_k^\delta) - y_{[k]}^\delta , \; F_{[k]}'(x_k^\delta)(x_k^\delta - x^*) \big\rangle + \| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \|^2 \Big) \,.$

Now, applying (11) with $x = x_k^\delta$ and $\bar{x} = x^*$, leads to

$- \big\langle F_{[k]}(x_k^\delta) - y_{[k]}^\delta , \; F_{[k]}'(x_k^\delta)(x_k^\delta - x^*) \big\rangle \le \| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \| \Big( (1 + \eta) \, \delta_{[k]} - (1 - \eta) \, \| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \| \Big) \,.$

The last inequality and (1) show (17). ∎

Our next goal is to prove a monotonicity property, known to be satisfied by other iterative regularization methods, e.g., by the Landweber [7], the steepest descent [29], the LK [17], and the l-LK [11] methods.

Proposition 2.4 (Monotonicity).

Under the assumptions of Lemma 2.3,

$\| x_{k+1}^\delta - x^* \| \le \| x_k^\delta - x^* \| \,, \qquad k \in \mathbb{N} \,. \qquad (18)$

Moreover, all iterates $x_k^\delta$ remain in $B_{\rho/2}(x^*) \subseteq B_\rho(x_0)$ and satisfy (17).

Proof.

From (12) it follows that $x_0 \in B_{\rho/2}(x^*)$. If $\omega_0 = 0$, then $x_1^\delta = x_0$ satisfies (18) with equality and $x_1^\delta \in B_{\rho/2}(x^*)$. If $\omega_0 = 1$, then Lemma 2.3 implies

$\| x_1^\delta - x^* \|^2 - \| x_0 - x^* \|^2 \le - \, \alpha_0 \, \| F_{[0]}(x_0) - y_{[0]}^\delta \|^2 \Big( (1 - 2\eta) - \frac{2 (1 + \eta)}{\tau} \Big) \,.$

Therefore (18), for $k = 0$, follows from (14). In particular, $x_1^\delta \in B_{\rho/2}(x^*)$. An inductive argument implies (18) and that $x_k^\delta \in B_{\rho/2}(x^*)$ for all $k \in \mathbb{N}$. The assertions therefore follow from Lemma 2.3. ∎

3 Convergence Analysis of the Loping Steepest Descent Kaczmarz Method

In this section we provide a complete convergence analysis for the l-SDK iteration, showing that it is a convergent regularization method in the sense of [7] (see Theorems 3.3 and 3.6 below). Throughout this section, we assume that (10) - (14) hold, and that $x_k^\delta$, $s_k^\delta$, $\omega_k$, and $\alpha_k$ are defined by (4) - (7).

Our first goal is to prove convergence of the l-SDK iteration for $\delta = 0$. For exact data $y_i^\delta = y_i$, the iterates in (4) are denoted by $x_k$ (this is a standard notation used in the literature).

Lemma 3.1.

There exists an $x_0$-minimal norm solution $x^\dagger$ of (2) in $B_{\rho/2}(x_0)$, i.e., a solution $x^\dagger$ of (2) such that

$\| x^\dagger - x_0 \| = \min \big\{ \| x - x_0 \| : x \in B_{\rho/2}(x_0) \ \text{and} \ F_i(x) = y_i \ \text{for} \ i = 0, \dots, N-1 \big\} \,.$

Moreover, $x^\dagger$ is the only solution of (2) in $B_{\rho/2}(x_0) \cap \big( x_0 + \mathcal{N}(F'(x^\dagger))^\perp \big)$, where $\mathcal{N}(F'(x^\dagger)) := \bigcap_{i=0}^{N-1} \mathcal{N}\big( F_i'(x^\dagger) \big)$.

Proof.

Lemma 3.1 is a consequence of [13, Proposition 2.1]. A detailed proof can be found in [16]. ∎

Lemma 3.2.

For all $k \in \mathbb{N}$, we have $\alpha_k \ge \varphi(\Lambda^{-1})$.

Proof.

For $\omega_k = 0$ the claimed estimate holds with equality, by the definition (7). If $\omega_k = 1$, it follows from (10) that

$\frac{\| s_k^\delta \|^2}{\| F_{[k]}'(x_k^\delta) \, s_k^\delta \|^2} \ge \frac{\| s_k^\delta \|^2}{\Lambda \, \| s_k^\delta \|^2} = \Lambda^{-1} \,.$

Now the monotonicity of $\varphi$ implies $\alpha_k \ge \varphi(\Lambda^{-1})$. ∎

Throughout the rest of this article, $x^\dagger$ denotes the $x_0$-minimal norm solution of (2). We define $e_k := x_k - x^\dagger$. From Proposition 2.4 (applied with $x^*$ replaced by $x^\dagger$) it follows that (17) holds for all $k \in \mathbb{N}$. By summing over all $k \in \mathbb{N}$, this leads to

$(1 - 2\eta) \sum_{k=0}^{\infty} \omega_k \, \alpha_k \, \| F_{[k]}(x_k) - y_{[k]} \|^2 \le \| x_0 - x^\dagger \|^2 \,. \qquad (19)$

Equation (19) and the monotonicity of $\| e_k \|$ shown in Proposition 2.4 are the main ingredients in the following proof of the convergence of the SDK iteration.

Theorem 3.3 (Convergence for Exact Data).

For exact data, the iteration $(x_k)_{k \in \mathbb{N}}$ converges to a solution of (2), as $k \to \infty$. Moreover, if

$\mathcal{N}\big( F_i'(x^\dagger) \big) \subseteq \mathcal{N}\big( F_i'(x) \big) \,, \qquad x \in B_\rho(x_0) \,, \quad i = 0, \dots, N-1 \,, \qquad (20)$

then $x_k \to x^\dagger$.

Proof.

From (18) it follows that $\| e_k \|$ decreases monotonically and therefore that $\| e_k \|$ converges to some $\varepsilon \ge 0$. In the following we show that $(x_k)_{k \in \mathbb{N}}$ is in fact a Cauchy sequence.

For $j \ge k \ge 0$, with $j, k \in \mathbb{N}$, let $l \in \{ k, \dots, j \}$ be such that

$\| F_{[l]}(x_l) - y_{[l]} \| \le \| F_{[i]}(x_i) - y_{[i]} \| \,, \qquad i = k, \dots, j \,. \qquad (21)$

Then, with $e_k = x_k - x^\dagger$, we have

$\| x_j - x_k \| \le \| x_l - x_j \| + \| x_l - x_k \| \qquad (22)$

and

$\| x_l - x_j \|^2 = \| e_j \|^2 - \| e_l \|^2 + 2 \, \langle e_l , \, x_l - x_j \rangle \,, \qquad (23)$

and analogously for $\| x_l - x_k \|^2$. For $k \to \infty$, the first two terms on the right hand side of (23) converge to $\varepsilon^2 - \varepsilon^2 = 0$. Therefore, in order to show that $(x_k)_{k \in \mathbb{N}}$ is a Cauchy sequence, it is sufficient to prove that $\langle e_l , \, x_l - x_j \rangle$ and $\langle e_l , \, x_l - x_k \rangle$ converge to zero as $k \to \infty$.

To that end, we write $\langle e_l , \, x_l - x_j \rangle = \sum_{i=l}^{j-1} \langle e_l , \, x_i - x_{i+1} \rangle$, and set $r_i := F_{[i]}(x_i) - y_{[i]}$. Then, using the definition of the steepest descent Kaczmarz iteration, it follows that

$| \langle e_l , \, x_l - x_j \rangle | \le \sum_{i=l}^{j-1} \omega_i \, \alpha_i \, \big| \big\langle r_i , \; F_{[i]}'(x_i) \, ( x_l - x^\dagger ) \big\rangle \big| \,. \qquad (24)$

From (11) it follows that

$\big| \big\langle r_i , \; F_{[i]}'(x_i) \, ( x_i - x^\dagger ) \big\rangle \big| \le (1 + \eta) \, \| r_i \|^2 \,. \qquad (25)$

Again using the definition of the steepest descent Kaczmarz iteration and equations (7), (10), it follows that

$\big| \big\langle r_i , \; F_{[i]}'(x_i) \, ( x_l - x_i ) \big\rangle \big| \le \Lambda^{1/2} \, \varphi_{\max}^{1/2} \, \| r_i \| \sum_{m=l}^{i-1} \omega_m \, \| r_m \| \,. \qquad (26)$

Substituting (25), (26) in (24), after splitting $x_l - x^\dagger = ( x_l - x_i ) + ( x_i - x^\dagger )$, leads to

$| \langle e_l , \, x_l - x_j \rangle | \le (1 + \eta) \sum_{i=l}^{j-1} \omega_i \, \alpha_i \, \| r_i \|^2 + \Lambda^{1/2} \varphi_{\max}^{1/2} \sum_{i=l}^{j-1} \omega_i \, \alpha_i \, \| r_i \| \sum_{m=l}^{i-1} \omega_m \, \| r_m \| \,,$

with $e_l = x_l - x^\dagger$. Here we made use of (21). So, we finally obtain the estimate

$| \langle e_l , \, x_l - x_j \rangle | \le C \sum_{i=k}^{j-1} \omega_i \, \alpha_i \, \| r_i \|^2$

with a constant $C > 0$ independent of $j$, $k$, and $l$. Because of (19), the last sum tends to zero for $k \to \infty$, and therefore $\langle e_l , \, x_l - x_j \rangle \to 0$. Analogously one shows that $\langle e_l , \, x_l - x_k \rangle \to 0$. Therefore $(x_k)_{k \in \mathbb{N}}$ is a Cauchy sequence and converges to an element $\bar{x} \in X$. Because all residuals $\| F_{[k]}(x_k) - y_{[k]} \|$ tend to zero, $\bar{x}$ is a solution of (2).

Now assume that (20) holds, for $x \in B_\rho(x_0)$. Then from the definition of $s_k$ it follows that

$x_{k+1} - x_k \in \mathcal{R}\big( F_{[k]}'(x_k)^* \big) \subseteq \mathcal{N}\big( F_{[k]}'(x_k) \big)^\perp \subseteq \mathcal{N}\big( F'(x^\dagger) \big)^\perp \,.$

An inductive argument shows that all iterates $x_k$ are elements of $x_0 + \mathcal{N}( F'(x^\dagger) )^\perp$. Together with the continuity of the inner product this implies that $\bar{x} \in x_0 + \mathcal{N}( F'(x^\dagger) )^\perp$. By Lemma 3.1, $x^\dagger$ is the only solution of (2) in $B_{\rho/2}(x_0) \cap \big( x_0 + \mathcal{N}( F'(x^\dagger) )^\perp \big)$, and so the second assertion follows. ∎

The second goal in this section is to prove that $x_{k_*^\delta}^\delta$ converges to a solution of (2), as $\delta \to 0$. First we verify that, for noisy data, the stopping index $k_*^\delta$ defined in (8) is finite.

Proposition 3.4 (Stopping Index).

Assume $\delta_i > 0$ for $i = 0, \dots, N-1$. Then $k_*^\delta$ defined in (8) is finite, and

$\| F_i\big( x_{k_*^\delta}^\delta \big) - y_i^\delta \| \le \tau \, \delta_i \,, \qquad i = 0, \dots, N-1 \,. \qquad (27)$
Proof.

Assume, to the contrary, that for every $l \in \mathbb{N}$ there exists some $k \in \{ lN, \dots, (l+1)N - 1 \}$ such that $\omega_k = 1$. From Proposition 2.4 it follows that we can apply (17) recursively for every $k \in \mathbb{N}$ and obtain

$\| x_{k+1}^\delta - x^* \|^2 \le \| x_0 - x^* \|^2 - \sum_{i=0}^{k} \omega_i \, \alpha_i \, \| F_{[i]}(x_i^\delta) - y_{[i]}^\delta \| \Big( (1 - 2\eta) \, \| F_{[i]}(x_i^\delta) - y_{[i]}^\delta \| - 2 (1 + \eta) \, \delta_{[i]} \Big) \,.$

Using the fact that either $\omega_i = 0$ or $\| F_{[i]}(x_i^\delta) - y_{[i]}^\delta \| > \tau \, \delta_{[i]}$, we obtain

$\| x_{k+1}^\delta - x^* \|^2 \le \| x_0 - x^* \|^2 - \tau \big( \tau (1 - 2\eta) - 2 (1 + \eta) \big) \sum_{i=0}^{k} \omega_i \, \alpha_i \, \delta_{[i]}^2 \,. \qquad (28)$

Equation (28), Lemma 3.2 and the fact that $\delta_{[i]} \ge \min_j \delta_j > 0$ for all $i$, imply

$\| x_0 - x^* \|^2 \ge \tau \big( \tau (1 - 2\eta) - 2 (1 + \eta) \big) \, \varphi(\Lambda^{-1}) \, \big( \min_j \delta_j \big)^2 \, l \,, \qquad l \in \mathbb{N} \,. \qquad (29)$

The right hand side of (29) tends to infinity as $l \to \infty$, which gives a contradiction. Consequently, there exists a cycle within which $\omega_k = 0$ for every step, and the infimum in (8) takes a finite value.

To prove (27), assume to the contrary that $\| F_i\big( x_{k_*^\delta}^\delta \big) - y_i^\delta \| > \tau \, \delta_i$ for some $i \in \{ 0, \dots, N-1 \}$. From (6) and (8) it follows that $\omega_{k_*^\delta + i} = 1$ and $x_{k_*^\delta + i + 1}^\delta = x_{k_*^\delta + i}^\delta$, respectively. Thus, Proposition 2.4 and Lemma 2.1 imply

$0 = \| x_{k_*^\delta + i + 1}^\delta - x^* \|^2 - \| x_{k_*^\delta + i}^\delta - x^* \|^2 \le - \, \alpha_{k_*^\delta + i} \, \tau \, \delta_i^2 \, \big( \tau (1 - 2\eta) - 2 (1 + \eta) \big) \,.$

This contradicts (14), concluding the proof of (27). ∎

The last auxiliary result concerns the continuity of the l-SDK iterates at $\delta = 0$. For $k \in \mathbb{N}$, exact data $y_i$, and noisy data $y_i^\delta$ satisfying (1), we define

$s_k := F_{[k]}'(x_k)^* \big( F_{[k]}(x_k) - y_{[k]} \big) \,, \qquad s_k^\delta := F_{[k]}'(x_k^\delta)^* \big( F_{[k]}(x_k^\delta) - y_{[k]}^\delta \big) \,,$

where $x_k$ and $x_k^\delta$ denote the iterates (4) with exact and noisy data, respectively.

Lemma 3.5.

For all $k \in \mathbb{N}$,

$\lim_{\delta \to 0} x_k^\delta = x_k \,. \qquad (30)$

Moreover, $s_k^\delta \to s_k$, as $\delta \to 0$.

Proof.

We prove Lemma 3.5 by induction. The case $k = 1$ is similar to the general case and is omitted.

Now, assume $k \ge 1$ and that (30) holds for all $k' \le k$. First we note that (30) and the continuity of $F_{[k]}$ and $F_{[k]}'$ obviously imply $s_k^\delta \to s_k$, as $\delta \to 0$. For the proof of (30) with $k$ replaced by $k + 1$ we consider two cases. In the first case, $\| F_{[k]}(x_k) - y_{[k]} \| > 0$, we have $\omega_k = 1$ as well as $\omega_k^\delta = 1$ for all sufficiently small $\delta$, and

$x_{k+1}^\delta = x_k^\delta - \alpha_k^\delta \, s_k^\delta \longrightarrow x_k - \alpha_k \, s_k = x_{k+1} \,, \qquad \text{as } \delta \to 0 \,.$

In the second case, $\| F_{[k]}(x_k) - y_{[k]} \| = 0$, we have $s_k = 0$ and consequently

$\| x_{k+1}^\delta - x_{k+1} \| \le \| x_k^\delta - x_k \| + \omega_k^\delta \, \alpha_k^\delta \, \| s_k^\delta \| \,.$

Now (30) follows from (10), the continuity of $F_{[k]}$ and $\varphi$, and the induction hypothesis (which implies $s_k^\delta \to s_k = 0$). ∎

Theorem 3.6 (Convergence for Noisy Data).

Assume that $(\delta^n)_{n \in \mathbb{N}}$, with $\delta^n = (\delta_0^n, \dots, \delta_{N-1}^n)$, is a sequence in $(0, \infty)^N$ with $\delta^n \to 0$. Let $(y_i^{\delta^n})_{n \in \mathbb{N}}$ be a sequence of noisy data satisfying

$\| y_i^{\delta^n} - y_i \| \le \delta_i^n \,, \qquad i = 0, \dots, N-1 \,,$

and let $k_*^n := k_*^{\delta^n}$ denote the corresponding stopping index defined in (8). Then $x_{k_*^n}^{\delta^n}$ converges to a solution of (2), as $n \to \infty$. Moreover, if (20) holds, then $x_{k_*^n}^{\delta^n} \to x^\dagger$.

Proof.

Let $\bar{x}$ denote the limit of the iterates $x_k$ with exact data, which is a solution of (2), cf. Theorem 3.3. From Lemma 3.5 and the continuity of the operators $F_i$ we know that, for any fixed $k \in \mathbb{N}$,

$\lim_{n \to \infty} x_k^{\delta^n} = x_k \qquad \text{and} \qquad \lim_{n \to \infty} \| F_{[k]}( x_k^{\delta^n} ) - y_{[k]}^{\delta^n} \| = \| F_{[k]}(x_k) - y_{[k]} \| \,. \qquad (31)$

To show that $x_{k_*^n}^{\delta^n} \to \bar{x}$, we first assume that $(k_*^n)_{n \in \mathbb{N}}$ has a finite accumulation point $k$. Without loss of generality we may assume that $k_*^n = k$ for all $n \in \mathbb{N}$. From Proposition 3.4 we know that $\| F_i( x_k^{\delta^n} ) - y_i^{\delta^n} \| \le \tau \, \delta_i^n$ and, by taking the limit $n \to \infty$, that $F_i(x_k) = y_i$ for $i = 0, \dots, N-1$. Consequently, $x_k$ is a solution of (2) and, as $n \to \infty$, $x_{k_*^n}^{\delta^n} = x_k^{\delta^n} \to x_k$.