Convergence rates and structure of solutions of inverse problems with imperfect forward models

Martin Burger (Institute for Analysis and Numerics, University of Münster, Einsteinstr. 62, 48149 Münster, Germany, martin.burger@wwu.de) · Yury Korolev (Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK, y.korolev@damtp.cam.ac.uk) · Julian Rasch (Institute for Analysis and Numerics, University of Münster, Einsteinstr. 62, 48149 Münster, Germany, julian.rasch@wwu.de)
Abstract

The goal of this paper is to further develop an approach to inverse problems with imperfect forward operators that is based on partially ordered spaces. Studying the dual problem yields useful insights into the convergence of the regularised solutions and allows us to obtain convergence rates in terms of Bregman distances – as usual in inverse problems, under an additional assumption on the exact solution called the source condition. These results are obtained for general absolutely one-homogeneous functionals. In the special case of $\mathrm{TV}$-based regularisation we also study the structure of regularised solutions and prove convergence of their level sets to those of an exact solution. Finally, using the developed theory, we adapt the concept of debiasing to inverse problems with imperfect operators and propose an approach to pointwise error estimation in $\mathrm{TV}$-based regularisation.

Keywords: inverse problems, imperfect forward models, total variation, extended support, Bregman distances, convergence rates, error estimation, debiasing

1 Introduction

Inverse problems are typically concerned with the interpretation of indirect measurements. The measured data $f$ are connected to the quantity of interest $u$ through a forward operator (or forward model) $A$ that models the data acquisition process. To obtain the quantity of interest $u$ from the data $f$, we need to invert this forward model. Since the inverse of $A$ is typically not continuous, the inversion is ill-posed and one needs to employ regularisation to obtain a stable approximation to $u$. Variational regularisation is a common approach to solving ill-posed problems and consists in minimising a weighted sum of a data fidelity term enforcing closeness to the measured data and a regularisation term enforcing some regularity of the reconstructed solution.

In this paper we consider inverse problems in the form of an ill-posed operator equation

$Au = f,$  (1.1)

where $A \colon \mathcal{U} \to \mathcal{F}$ is a linear operator between Banach lattices (we will mainly work with $\mathcal{U} = L^1(\Omega)$ and $\mathcal{F} = L^\infty(\Sigma)$) and $\Omega \subset \mathbb{R}^d$ is a bounded domain. We assume that there exists a non-negative solution of (1.1).

For an appropriate functional $\mathcal{J} \colon \mathcal{U} \to \mathbb{R} \cup \{+\infty\}$ we consider non-negative $\mathcal{J}$-minimising solutions, which solve the following problem:

$\min_u\ \mathcal{J}(u) \quad \text{s.t.} \quad Au = f,\ u \ge 0.$  (1.2)

We assume that the feasible set in (1.2) has at least one point with a finite value of $\mathcal{J}$ and denote a (possibly non-unique) solution of (1.2) by $u^\dagger$. Throughout this paper it is assumed that the regularisation functional $\mathcal{J}$ is convex, proper and absolutely one-homogeneous.

In practice the data are not known precisely and only a perturbed version $f^\delta$ is available. In this case, we cannot simply replace the constraint $Au = f$ in (1.2) with $Au = f^\delta$, since the solutions of the original problem (1.1) would no longer be feasible. Therefore, we need to relax the equality constraint in (1.2) to guarantee the feasibility of solutions of the original problem (1.1). This is the idea of the residual method [GrasmairHalmeierScherzer2011, IvanovVasinTanana]. If the error in the data is bounded by a known constant $\delta$, i.e. $\|f - f^\delta\| \le \delta$, the residual method amounts to solving the following constrained problem:

$\min_u\ \mathcal{J}(u) \quad \text{s.t.} \quad \|Au - f^\delta\| \le \delta,\ u \ge 0.$  (1.3)

The fidelity function becomes in this case the characteristic function of the convex set $\{u : \|Au - f^\delta\| \le \delta\}$. In the linear case, the residual method is equivalent to Tikhonov regularisation

$\min_u\ \frac{1}{2}\|Au - f^\delta\|^2 + \alpha\,\mathcal{J}(u)$  (1.4)

with the regularisation parameter $\alpha$ chosen according to Morozov's discrepancy principle, i.e. such that $\|Au_\alpha - f^\delta\| = \delta$ [IvanovVasinTanana].

In many practical situations not only the data contain errors, but also the forward operator that generated the data is not perfectly known. In order to guarantee the feasibility of solutions of the original problem (1.1) in the constrained problem (1.3), one needs to account for the errors in the operator in the feasible set. If the error in the operator is bounded by a known constant $h$ in the operator norm, i.e. $\|A - A^h\| \le h$ for the available approximate operator $A^h$, the feasible set can be amended as follows in order to guarantee feasibility of the solutions of the original problem (1.1):

$\min_u\ \mathcal{J}(u) \quad \text{s.t.} \quad \|A^h u - f^\delta\| \le \delta + h\,\|u\|,\ u \ge 0.$  (1.5)

This optimisation problem is non-convex and therefore presents considerably more computational challenges than its counterpart with the exact operator (1.3). Thus, in the context of the residual method, uncertainty in the operator results in a qualitative change in the optimisation problem to be solved, which, in general, requires numerical approaches different from those used for (1.3). The reason for the non-convexity is the fact that we used the operator norm to quantify the error in the operator: the constraint function $u \mapsto \|A^h u - f^\delta\| - h\|u\|$ is a difference of two convex functions.
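The non-convexity can be seen in a minimal illustration (the degenerate choice $A^h = 0$ is ours, purely for demonstration): if $A^h = 0$ and $\|f^\delta\| > \delta$, the feasible set of (1.5) becomes
$\{u \ge 0 : \|f^\delta\| \le \delta + h\|u\|\} = \{u \ge 0 : \|u\| \ge (\|f^\delta\| - \delta)/h\},$
the intersection of the positive cone with the complement of a ball, which is not convex.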

An alternative approach was proposed in [Kor_IP:2014]. Instead of the operator norm, it uses intervals in an appropriate partial order to quantify the error in the operator. It assumes that, instead of only one instance of approximate data $f^\delta$ and approximate operator $A^h$, lower and upper bounds for them are available, i.e. $f^{lo}, f^{up} \in \mathcal{F}$ and linear operators $A^{lo}, A^{up}$ such that

$f^{lo} \le f \le f^{up}, \qquad A^{lo} \le A \le A^{up}.$  (1.6)

The first two inequalities are understood in the sense of the partial order in $\mathcal{F}$ and the last two in the sense of the partial order for linear operators (more details on how the partial order is defined for linear operators will be given in Section 2.1).

Using the bounds (1.6), the residual method can be reformulated as the following optimisation problem:

$\min_u\ \mathcal{J}(u) \quad \text{s.t.} \quad A^{lo} u \le f^{up},\ A^{up} u \ge f^{lo},\ u \ge 0.$  (1.7)

This optimisation problem is convex and has the same structure in the case of errors in the operator as in the error-free case. The fidelity term in this case is the characteristic function of a convex polyhedron. It can be easily verified that any non-negative solution of the original problem (1.1) is a feasible point of (1.7).
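For completeness, the verification is a one-liner: if $u^\dagger \ge 0$ solves (1.1), then $A^{lo} \le A \le A^{up}$ and $u^\dagger \ge 0$ give $(A - A^{lo})u^\dagger \ge 0$ and $(A^{up} - A)u^\dagger \ge 0$, hence
$A^{lo}u^\dagger \le Au^\dagger = f \le f^{up}, \qquad A^{up}u^\dagger \ge Au^\dagger = f \ge f^{lo},$
so $u^\dagger$ satisfies all constraints in (1.7).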

In this paper, we study the dual problem of (1.7), which can be written as follows

$\sup_{\lambda,\,\mu,\,\nu \ge 0}\ \langle f^{lo}, \lambda\rangle - \langle f^{up}, \mu\rangle \quad \text{s.t.} \quad \nu + (A^{up})^*\lambda - (A^{lo})^*\mu \in \partial\mathcal{J}(0),$  (1.8)

where $\lambda, \mu \in \mathcal{F}^*$, $\nu \in \mathcal{U}^*$ and $\partial\mathcal{J}(0)$ is the subdifferential of the regularisation functional at zero. We shall see that, under certain assumptions, $\nu$ and $(\mu, \lambda)$ in (1.8) are Lagrange multipliers corresponding to the positivity constraint and the constraints $A^{lo}u \le f^{up}$ and $A^{up}u \ge f^{lo}$ in (1.7), respectively, and $q := \nu + (A^{up})^*\lambda - (A^{lo})^*\mu$ is a subgradient of the regulariser at the optimal solution of (1.7).

To study the convergence of the minimisers of (1.7) to a solution of (1.1), we need some notion of convergence of the bounds $f^{lo}, f^{up}$ and $A^{lo}, A^{up}$ to the exact data $f$ and operator $A$, respectively. For this purpose, we consider sequences of lower and upper bounds $\{f_n^{lo}\}, \{f_n^{up}\}$ and $\{A_n^{lo}\}, \{A_n^{up}\}$ such that

$f_n^{lo} \le f \le f_n^{up},$  (1.9)
$A_n^{lo} \le A \le A_n^{up},$  (1.10)

and

$\|f_n^{up} - f_n^{lo}\|_{\mathcal{F}} \le \delta_n \to 0, \qquad \|A_n^{up} - A_n^{lo}\|_{\mathcal{U}\to\mathcal{F}} \le h_n \to 0.$  (1.11)

With these sequences of bounds, we obtain a sequence of optimisation problems

$\min_u\ \mathcal{J}(u) \quad \text{s.t.} \quad A_n^{lo} u \le f_n^{up},\ A_n^{up} u \ge f_n^{lo},\ u \ge 0.$  (1.12)

It was shown in [Kor_IP:2014] (see also Theorem 5 in Section 3) that the minimisers $u_n$ of (1.12) converge to a $\mathcal{J}$-minimising solution $u^\dagger$ as $n \to \infty$. In this paper we study this convergence in more detail, ultimately aiming at obtaining convergence rates.

It is well known [Benning_Burger_general_fid_2011] that solutions of the dual problem play an important role in establishing convergence rates; therefore, we study the behaviour of the dual problem as $n \to \infty$ in more detail. Uncertainty in the operator results in a perturbation of the feasible set of the dual problem (1.8). In order to ensure the convergence of its solutions, we would like to know that the solution of the dual problem is stable with respect to such perturbations. Stability theory for optimisation problems with perturbations [Bonnans_Shapiro:1998] emphasises the role of the so-called Robinson regularity [Robinson_lin, Robinson_nonlin] in the stability of the solution under perturbations of the feasible set. In our particular case a condition on the interior of $\partial\mathcal{J}(0)$ (see Assumption 4) plays a crucial role in the stability of the dual problem.

Establishing the stability of the dual problem allows us to relate its solutions to solutions of the dual problem in the limit case of exact data and operator, which has a very similar form to (1.8):

$\sup_{w \in \mathcal{F}^*,\ \nu \ge 0}\ \langle f, w\rangle \quad \text{s.t.} \quad A^*w + \nu \in \partial\mathcal{J}(0),$  (1.13)

where $w$ corresponds to $\lambda - \mu$ in (1.8) and $q := A^*w + \nu$ plays the role of the subgradient. If the original problem (1.1) is ill-posed, existence of such limit solutions of the dual problem (1.13) cannot be guaranteed, unless additional assumptions on the exact solution are made, known as the source condition [Burger_Osher:2004], which in our case takes the form (3.10). Under the source condition we are able to prove uniform boundedness of the Lagrange multipliers and convergence of the subgradients, which allows us to establish convergence rates (Section 3.8). For the symmetric Bregman distance between the minimisers $u_n$ of (1.12) and any $\mathcal{J}$-minimising solution $u^\dagger$ we obtain the following estimate

$D_{\mathcal{J}}^{\mathrm{symm}}(u_n, u^\dagger) \le C_1\,\delta_n + C_2\,h_n,$  (1.14)

where $\delta_n$ and $h_n$ are the error bounds from (1.11) and the constants $C_1, C_2$ do not depend on $n$. These convergence rates coincide with those from [Benning_Burger_general_fid_2011] for problems with exact operators, providing an interface with the existing theory.

We further investigate the solutions of problem (1.7) by studying their geometric properties in the spirit of [Peyre:2017]. In particular, we prove Hausdorff convergence of the level sets of $\mathrm{TV}$-regularised solutions to those of the exact solution. However, unlike the original paper [Peyre:2017], we cannot use $\mathcal{J} = \mathrm{TV}$, since it does not guarantee stability of the dual problem and convergence of the subgradients. Instead, we use the full (weighted) $BV$ norm, choosing $\mathcal{J}(u) = \|u\|_{L^1(\Omega)} + \beta\,\mathrm{TV}(u)$, $\beta > 0$.

Our numerical experiments with deblurring demonstrate that reconstructions obtained with $\mathcal{J} = \|\cdot\|_{L^1} + \beta\,\mathrm{TV}$, $\beta > 0$, are, indeed, piecewise constant (if so is the ground truth), while $\mathcal{J} = \mathrm{TV}$ misses some jumps and results in smoother reconstructions. This is surprising, since it contradicts the typical behaviour of $\mathrm{TV}$ in ROF-type models [ROF], which is known as staircasing [Ring:2000, Jalalzai:2016]. The reason for this is the additional freedom provided by our constraint-based approach. While in classical ROF denoising zero is a subgradient of $\mathrm{TV}$ at the minimiser only if the minimiser is equal to the data (which rarely happens for noisy data), the constraint-based approach allows the subgradient to be zero whenever the noise is small enough and contained within the prescribed corridor around the true data. However, with $\mathcal{J} = \|\cdot\|_{L^1} + \beta\,\mathrm{TV}$, whenever the subgradient of $\mathcal{J}$ is zero, the corresponding subgradient of $\mathrm{TV}$ is equal to $-\frac{1}{\beta}\operatorname{sign}(u)$, forcing the reconstruction to be piecewise constant.

Finally, we use the developed theory to adapt to our framework the concept of two-step debiasing [Burger_Rasch_debiasing, Deladelle1, Deladelle2], which allows one to reduce the systematic bias in the reconstruction, such as loss of contrast. We also propose a method for obtaining asymptotic pointwise lower and upper bounds for $\mathrm{TV}$-regularised solutions in areas where the exact solution is piecewise constant.

The paper is organised as follows. In Section 2 we introduce the primal and the dual problems for fixed bounds $f^{lo}$, $f^{up}$, $A^{lo}$, $A^{up}$ and study their properties. In Section 3 we present the convergence analysis and establish convergence rates. In Section 4 we study geometric properties of $\mathrm{TV}$-regularised solutions and prove Hausdorff convergence of their level sets. In Section 5 we describe our approach to debiasing and pointwise error estimation, and in Section 6 we present the results of our numerical experiments. The Appendices contain some results of a more technical nature that we need for the proofs.

2 Primal and Dual Problems

In order to accurately formulate the primal problem (1.7), we briefly recall some definitions from the theory of function spaces with partial order.

2.1 Banach lattices

$L^p$ spaces, endowed with the partial order relation
$u \le v \quad \Longleftrightarrow \quad u(x) \le v(x) \ \text{a.e.},$
become Banach lattices, i.e. partially ordered Banach spaces with well-defined suprema and infima of each pair of elements and a monotone norm ($|u| \le |v|$ implies $\|u\| \le \|v\|$) [Schaefer]. The set $L^p_+ := \{u \in L^p : u \ge 0\}$ is called the positive cone. It can be shown that the interior of the positive cone in an $L^p$ space is empty, unless $p = \infty$ [Schaefer, Schaefer:1958].

Partial orders in $\mathcal{U}$ and $\mathcal{F}$ induce a partial order in a subspace of the space of linear operators acting from $\mathcal{U}$ to $\mathcal{F}$, namely in the space of regular operators. A linear operator is called regular if it can be represented as a difference of two positive operators. An operator $A$ is called positive, and we write $A \ge 0$, iff $u \ge 0$ implies $Au \ge 0$. The partial order in the space of regular operators is introduced as follows: $B \le A$ iff $A - B$ is a positive operator. Every regular operator acting between two Banach lattices is continuous [Schaefer].
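For integral operators these notions take a concrete form; the following standard example is ours, added for illustration. If $(Au)(y) = \int_\Omega k(y, x)\,u(x)\,dx$ with a bounded kernel $k$, then $A \ge 0$ iff $k \ge 0$ a.e., and the operators induced by kernels $k^{lo} \le k \le k^{up}$ satisfy $A^{lo} \le A \le A^{up}$; pointwise bounds on a convolution kernel thus translate directly into operator bounds in the sense of (1.6).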

2.2 Primal and dual problems

In this section we study in more detail the optimisation problem (1.7) and its dual (1.8). For convenience, we repeat problem (1.7) here:
$\min_u\ \mathcal{J}(u) \quad \text{s.t.} \quad A^{lo} u \le f^{up},\ A^{up} u \ge f^{lo},\ u \ge 0.$

In order to simplify notation, we introduce
$\bar A := \begin{pmatrix} A^{lo} \\ -A^{up} \end{pmatrix} \colon \mathcal{U} \to \mathcal{F} \times \mathcal{F},$
as well as
$\bar f := \begin{pmatrix} f^{up} \\ -f^{lo} \end{pmatrix} \in \mathcal{F} \times \mathcal{F}.$
Obviously, $\bar f$ is an element of $\mathcal{F} \times \mathcal{F}$ rather than $\mathcal{F}$; however, for the sake of compact notation, we will write $\bar f \in \mathcal{F}$ where it will cause no confusion. The same holds for the Lagrange multiplier $(\mu, \lambda) \in \mathcal{F}^* \times \mathcal{F}^*$ corresponding to the constraint $\bar A u \le \bar f$, to be introduced later, of which we will simply write $\mu \in \mathcal{F}^*$ most of the time.

With this notation, problem (1.7) can be written as follows

$\min_u\ \mathcal{J}(u) \quad \text{s.t.} \quad \bar A u \le \bar f,\ u \ge 0.$  (2.1)

Denote a (possibly non-unique) minimiser of (2.1) by $u^*$.
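For readers who prefer a computational view, the following is a minimal numerical sketch of (2.1) in a discretised 1D deblurring setting. It is our illustration, not the implementation used in Section 6, and all particulars (the moving-average blur, the bounds $(1\pm\varepsilon)A$, the parameter values, CVXPY as solver) are illustrative assumptions.

```python
# A hedged sketch of problem (2.1) for a discretised 1D deblurring example.
# The blur model, the interval bounds and all parameters are illustrative.
import numpy as np
import cvxpy as cp

n = 100
x = np.linspace(0.0, 1.0, n)
u_true = (x > 0.3).astype(float) + (x > 0.6)       # piecewise-constant ground truth

def blur_matrix(width):
    """Row-stochastic moving-average blur, standing in for the operator A."""
    A = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - width), min(n, i + width + 1)
        A[i, lo:hi] = 1.0 / (hi - lo)
    return A

A = blur_matrix(3)
# Entrywise interval bounds: since A has non-negative entries, scaling it
# gives A_lo <= A <= A_up in the partial order induced by the positive cone.
eps_A, delta = 0.05, 0.05
A_lo, A_up = (1.0 - eps_A) * A, (1.0 + eps_A) * A
f = A @ u_true
f_lo, f_up = f - delta, f + delta                   # data corridor as in (1.6)

beta = 0.1
u = cp.Variable(n, nonneg=True)                     # positivity constraint u >= 0
objective = cp.Minimize(cp.norm1(u) + beta * cp.tv(u))  # J(u) = ||u||_1 + beta*TV(u)
constraints = [A_lo @ u <= f_up, A_up @ u >= f_lo]      # interval constraints (1.7)
cp.Problem(objective, constraints).solve()
print("optimal value J(u*) =", objective.value)
```

The exact solution is feasible by construction, so the problem is always solvable, in contrast to the non-convex formulation (1.5).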

Now let us turn to the dual problem of (2.1). The Lagrange function is given by the following expression:
$\mathcal{L}(u; \mu, \nu) = \mathcal{J}(u) + \langle \mu, \bar A u - \bar f \rangle - \langle \nu, u \rangle,$
where $\mu \in \mathcal{F}^* \times \mathcal{F}^*$, $\nu \in \mathcal{U}^*$, $\mu, \nu \ge 0$. Taking the minimum in $u$, we obtain the following expression for the dual objective:
$\inf_u\ \mathcal{L}(u; \mu, \nu) = -\langle \bar f, \mu \rangle - \mathcal{J}^*(\nu - \bar A^*\mu),$

where $\mathcal{J}^*$ is the convex conjugate of $\mathcal{J}$. Since we assumed that $\mathcal{J}$ is absolutely one-homogeneous, $\mathcal{J}^*$ is the characteristic function of $\partial\mathcal{J}(0)$. We discuss the properties of absolutely one-homogeneous regularisation functionals in more detail in Appendix A. Hence we obtain the following formulation of the dual problem:

$\sup_{\mu \ge 0,\ \nu \ge 0}\ -\langle \bar f, \mu \rangle \quad \text{s.t.} \quad \nu - \bar A^*\mu \in \partial\mathcal{J}(0).$  (2.2)
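The computation behind (2.2), spelled out under our sign conventions: using the definition of the convex conjugate and the absolute one-homogeneity of $\mathcal{J}$,
$\inf_u\ \big\{\mathcal{J}(u) + \langle \bar A^*\mu - \nu, u\rangle\big\} = -\mathcal{J}^*(\nu - \bar A^*\mu) = \begin{cases} 0, & \nu - \bar A^*\mu \in \partial\mathcal{J}(0), \\ -\infty, & \text{otherwise}, \end{cases}$
so the dual objective equals $-\langle \bar f, \mu\rangle$ precisely on the feasible set of (2.2) and $-\infty$ elsewhere. Written componentwise with $\mu = (\mu_1, \mu_2)$, the objective $-\langle \bar f, \mu\rangle = \langle f^{lo}, \mu_2\rangle - \langle f^{up}, \mu_1\rangle$ recovers (1.8).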

We will mainly consider regularisation functionals $\mathcal{J}$ as functionals in $L^1(\Omega)$ (and not, for example, in $BV(\Omega)$); $\partial\mathcal{J}(0)$ will therefore be understood as a subset of $L^\infty(\Omega)$ (and not of $BV(\Omega)^*$), with exceptions denoted by a subscript $X$ in $\partial_X\mathcal{J}(0)$, where $X$ is the corresponding subspace. Properties of $\partial\mathcal{J}(0)$ for regularisation functionals of the type $\mathrm{TV}$ and $\|\cdot\|_{L^1(\Omega)} + \beta\,\mathrm{TV}$, $\beta > 0$ (where $\mathrm{TV}$ may be replaced with a similar regularisation functional, such as $\mathrm{TGV}$), will be discussed in Appendix B.

The following characterisation [Burger_Gilboa_Moeller_Eckard_spectral] of the subdifferential of an absolutely one-homogeneous functional will be useful for us later:

$\partial\mathcal{J}(u) = \{q \in \partial\mathcal{J}(0) : \langle q, u\rangle = \mathcal{J}(u)\}.$  (2.3)

In particular, for $u = 0$ we get

$\partial\mathcal{J}(0) = \{q \in \mathcal{U}^* : \langle q, v\rangle \le \mathcal{J}(v)\ \ \forall v \in \mathcal{U}\}.$  (2.4)

Clearly, the set $\partial\mathcal{J}(0)$ is nonempty, convex and closed, although it may be unbounded.
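As a concrete example (the weighted case is treated in Appendix B): for $\mathcal{J}(u) = \|u\|_{L^1(\Omega)}$, (2.4) gives
$\partial\mathcal{J}(0) = \{q \in L^\infty(\Omega) : \|q\|_{L^\infty} \le 1\},$
which is bounded, while (2.3) yields $\partial\mathcal{J}(u) = \{q : \|q\|_{L^\infty} \le 1,\ q = \operatorname{sign}(u) \text{ a.e. on } \{u \ne 0\}\}$. For $\mathcal{J} = \mathrm{TV}$, testing (2.4) with the constant functions $v \equiv \pm 1$ (which have zero total variation) shows that every $q \in \partial\mathcal{J}(0)$ has zero mean; this set is unbounded in $L^\infty(\Omega)$.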

2.3 Robinson regularity

We would like to establish strong duality between (2.1) and (2.2). To do this, we need to recall a concept from optimisation theory called Robinson regularity.

Consider an optimisation problem

$\min_{x \in C}\ \varphi(x) \quad \text{s.t.} \quad G(x) \in K,$  (2.5)

where $C \subset \mathcal{X}$ is a closed and convex set, $\varphi \colon \mathcal{X} \to \mathbb{R} \cup \{+\infty\}$ is the objective, $G \colon \mathcal{X} \to \mathcal{Y}$ is continuously Fréchet differentiable, $\mathcal{X}$ and $\mathcal{Y}$ are Banach spaces and $K$ is a closed convex subset of $\mathcal{Y}$. We say that the Robinson regularity condition [Hinze_Pinnau_Ulbrich] is satisfied at $x_0$ in problem (2.5) if

$0 \in \operatorname{int}\{G(x_0) + G'(x_0)(C - x_0) - K\}.$  (2.6)

The next result [Bonnans_Shapiro:1998, Thm. 4.2] demonstrates the role that Robinson regularity plays in the existence of the Lagrange multipliers associated with the constraint $G(x) \in K$.

Proposition 1.

Suppose that

  • problem (2.5) is convex;

  • its optimal value is finite;

  • $G$ is continuously differentiable; and

  • Robinson regularity condition is satisfied in (2.5).

Then

  • strong duality holds between problem (2.5) and its dual;

  • the set of optimal solutions of the dual problem is non-empty and bounded;

  • if the set of Lagrange multipliers is nonempty for an optimal solution of (2.5), then it is the same for any other optimal solution of (2.5) and coincides with the set of optimal solutions of the dual problem.

Robinson regularity also plays an important role in the stability of problem (2.5) under small perturbations of the constraint. Consider a perturbation of the form $G(x) + y \in K$, $y \in \mathcal{Y}$. Denote by $\Phi(y)$ the feasible set of the perturbed problem. The following result holds [Bonnans_Shapiro:1998, Prop. 3.3].

Proposition 2.

Suppose that the Robinson condition (2.6) holds in (2.5) at $x_0$. Then for every $y$ in a neighbourhood of the origin one has
$\operatorname{dist}(x_0, \Phi(y)) \le c\,\|y\|_{\mathcal{Y}}$
with a constant $c$ independent of $y$.

2.4 Relationship between the primal and the dual problems

Our aim in this section is to show that the Robinson condition (2.6) holds for the primal problem (2.1). This will ensure the existence of the Lagrange multipliers and strong duality between (2.1) and (2.2). Since the constraint in our case is affine, the Robinson condition (2.6) can be written as follows:

$0 \in \operatorname{int}\{\bar A u - \bar f + v : u \in \mathcal{U}_+,\ v \in \mathcal{K}_+\},$  (2.7)

where $\mathcal{U}_+$ and $\mathcal{K}_+ := \mathcal{F}_+ \times \mathcal{F}_+$ denote the corresponding positive cones.

To prove Robinson regularity in problem (2.1), we need to assume that $f^{up}$ and $f^{lo}$ are uniformly bounded away from the true data:

$f^{up} - f \ge \bar\delta\,\mathbf 1, \qquad f - f^{lo} \ge \bar\delta\,\mathbf 1 \quad \text{for some } \bar\delta > 0,$  (2.8)
where $\mathbf 1$ denotes the constant function equal to one.

This assumption will be extended in Assumption 1 to cover the case of sequences $\{f_n^{lo}\}$ and $\{f_n^{up}\}$.

Now we can proceed with the Robinson condition.

Lemma 3.

If (2.8) holds, then the Robinson condition (2.7) is fulfilled at any minimiser of (2.1).

Proof.

Fix $\varepsilon \le \bar\delta$ and take an arbitrary $y \in \mathcal{F} \times \mathcal{F}$ with $\|y\|_{L^\infty} \le \varepsilon$. Our aim is to find $u \in \mathcal{U}_+$ and $v \in \mathcal{K}_+$ such that $\bar A u - \bar f + v = y$.

Choose $u = u^\dagger$. Then we have that
$\bar A u^\dagger - \bar f = \begin{pmatrix} A^{lo}u^\dagger - f^{up} \\ f^{lo} - A^{up}u^\dagger \end{pmatrix} \le \begin{pmatrix} f - f^{up} \\ f^{lo} - f \end{pmatrix} \le -\bar\delta\,\mathbf 1.$
We see that $v := y - \bar A u^\dagger + \bar f \ge (\bar\delta - \varepsilon)\,\mathbf 1 \ge 0$ holds if $\|y\|_{L^\infty}$ is small enough (note that since the interior of the positive cone is empty in all $L^p$ spaces except $L^\infty$, a bound on any other norm of $y$ would be insufficient); therefore, the Robinson condition (2.7) holds at $u^*$ in the primal problem (2.1). ∎

Now we are ready to study the relationship between the primal problem (2.1) and its dual (2.2).

Proposition 4.

Under the assumptions of Lemma 3, strong duality between (2.1) and (2.2) holds, the complementarity conditions

$\langle \mu^*, \bar f - \bar A u^*\rangle = 0, \qquad \langle \nu^*, u^*\rangle = 0$  (2.9)

are satisfied and $q^* := \nu^* - \bar A^*\mu^* \in \partial\mathcal{J}(u^*)$, where $(\mu^*, \nu^*)$ denotes the solution of the dual problem (2.2).

Proof.

Strong duality between the primal problem (2.1) and its dual (2.2) follows from Proposition 1, since the primal problem (2.1) is convex, its optimal value is bounded (by $\mathcal{J}(u^\dagger)$, which is finite) and the Robinson condition (2.7) is satisfied. Therefore, we have that

$\mathcal{J}(u^*) = -\langle \bar f, \mu^*\rangle.$  (2.10)

Consider the element $q^* := \nu^* - \bar A^*\mu^*$. Since $q^* \in \partial\mathcal{J}(0)$ by dual feasibility, we get that
$\langle q^*, u^*\rangle \le \mathcal{J}(u^*),$
and, since (2.10) holds, also that
$\langle q^*, u^*\rangle = \langle \nu^*, u^*\rangle - \langle \mu^*, \bar A u^*\rangle \ge -\langle \mu^*, \bar f\rangle = \mathcal{J}(u^*)$
(the latter inequality holds since $\langle \nu^*, u^*\rangle \ge 0$ and $\langle \mu^*, \bar f - \bar A u^*\rangle \ge 0$). Therefore, $\langle q^*, u^*\rangle = \mathcal{J}(u^*)$, both intermediate inequalities are equalities, and the complementarity conditions (2.9) are satisfied.

Since $q^* \in \partial\mathcal{J}(0)$ and $\langle q^*, u^*\rangle = \mathcal{J}(u^*)$, we conclude that $q^* \in \partial\mathcal{J}(u^*)$ by Proposition 31. ∎
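For later reference, Proposition 4 can be summarised as the following optimality system for the pair $(u^*, (\mu^*, \nu^*))$ (our compact restatement):
$\bar A u^* \le \bar f, \quad u^* \ge 0, \quad \mu^* \ge 0, \quad \nu^* \ge 0, \quad q^* := \nu^* - \bar A^*\mu^* \in \partial\mathcal{J}(0),$
$\langle \mu^*, \bar f - \bar A u^*\rangle = 0, \quad \langle \nu^*, u^*\rangle = 0, \quad \langle q^*, u^*\rangle = \mathcal{J}(u^*).$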

3 Convergence analysis

In this section we turn our attention to sequences of primal and dual problems defined using the sequences of bounds (1.9)–(1.11):

$\min_u\ \mathcal{J}(u) \quad \text{s.t.} \quad \bar A_n u \le \bar f_n,\ u \ge 0,$  (3.1)

and

$\sup_{\lambda,\,\mu,\,\nu \ge 0}\ \langle f_n^{lo}, \lambda\rangle - \langle f_n^{up}, \mu\rangle \quad \text{s.t.} \quad \nu + (A_n^{up})^*\lambda - (A_n^{lo})^*\mu \in \partial\mathcal{J}(0),$  (3.2)

We will be particularly interested in the convergence of their solutions to those of the limit problems with exact data and operator (note that (3.3) is just another way of writing (1.2)):

$\min_u\ \mathcal{J}(u) \quad \text{s.t.} \quad Au = f,\ u \ge 0,$  (3.3)

and

$\sup_{w \in \mathcal{F}^*,\ \nu \ge 0}\ \langle f, w\rangle \quad \text{s.t.} \quad A^*w + \nu \in \partial\mathcal{J}(0).$  (3.4)

We start with the convergence of primal variables – solutions of (3.1).

3.1 Convergence of primal solutions

It can be easily verified that any $\mathcal{J}$-minimising solution $u^\dagger$ satisfies $\bar A_n u^\dagger \le \bar f_n$ for all $n$, which implies $\mathcal{J}(u_n) \le \mathcal{J}(u^\dagger)$ for the minimisers $u_n$ of (3.1). It has been shown in [Kor_IP:2014] that under standard assumptions on the regulariser $\mathcal{J}$ the minimisers of (3.1) converge to a $\mathcal{J}$-minimising solution strongly in $L^1(\Omega)$:

Theorem 5.

If the regulariser $\mathcal{J}$

  • is lower semicontinuous with respect to strong convergence in $L^1(\Omega)$,

  • its non-empty sublevel sets are sequentially compact with respect to strong convergence in $L^1(\Omega)$,

then there exists a minimiser $u_n$ of each problem (3.1), $u_n \to u^\dagger$ strongly in $L^1(\Omega)$ (possibly, along a subsequence) and $\mathcal{J}(u_n) \to \mathcal{J}(u^\dagger)$.

The proof is similar to that in [Kor_IP:2014, Thm 2].

The assumptions of Theorem 5 are satisfied, for example, for the (weighted) $BV$ norm $\mathcal{J}(u) = \|u\|_{L^1(\Omega)} + \beta\,\mathrm{TV}(u)$, $\beta > 0$, or its topological equivalents with $\mathrm{TV}$ replaced with, e.g., $\mathrm{TGV}$ [bredies2009tgv] or $\mathrm{TV}$-$L^p$ [Burger_TVLp_2016]. The term $\|u\|_{L^1(\Omega)}$ can be dropped if its boundedness is implied by the constraints (we will see an example of this in Section 3.2).

In order to make sure the Robinson condition is satisfied in (3.1) for all $n$, we need to extend the assumption that we already made in (2.8) to the sequences of bounds $\{f_n^{lo}\}$ and $\{f_n^{up}\}$. In order to have all assumptions on convergence in one place, we also include our assumptions on the convergence of the operator bounds, which we will need later, in the following

Assumption 1.

Suppose that there exists a sequence $\delta_n \to 0$ and a constant $c \in (0, 1)$, as well as a sequence $h_n \to 0$ and a constant $C > 0$, such that

$c\,\delta_n\,\mathbf 1 \le f_n^{up} - f \le \delta_n\,\mathbf 1, \qquad c\,\delta_n\,\mathbf 1 \le f - f_n^{lo} \le \delta_n\,\mathbf 1,$  (3.5)
$A_n^{lo} \le A \le A_n^{up}, \qquad \|A_n^{up} - A_n^{lo}\|_{\mathcal{U}\to\mathcal{F}} \le h_n,$  (3.6)
$h_n \le C\,\delta_n.$  (3.7)

The meaning of (3.5) is that $f_n^{up}$ and $f_n^{lo}$ converge to $f$ uniformly, but not too fast; the difference between the bounds and the exact data is always uniformly bounded away from zero at the rate $c\,\delta_n$. The second inequality in (3.5) obviously implies that $\|f_n^{up} - f\|_{L^\infty} \le \delta_n$ and $\|f - f_n^{lo}\|_{L^\infty} \le \delta_n$. The meaning of (3.7) is that the data do not converge faster than the operator.
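A simple construction satisfying (3.5), assuming the standard deterministic noise model: if $f^{\delta_n}$ is a noisy observation with $\|f^{\delta_n} - f\|_{L^\infty} \le \delta_n/2$, set $f_n^{up} := f^{\delta_n} + \delta_n$ and $f_n^{lo} := f^{\delta_n} - \delta_n$. Then
$\tfrac{1}{2}\delta_n\,\mathbf 1 \le f_n^{up} - f \le \tfrac{3}{2}\delta_n\,\mathbf 1, \qquad \tfrac{1}{2}\delta_n\,\mathbf 1 \le f - f_n^{lo} \le \tfrac{3}{2}\delta_n\,\mathbf 1,$
i.e. (3.5) holds with $\delta_n$ replaced by $\frac{3}{2}\delta_n$ and $c = \frac{1}{3}$.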

3.2 Boundedness of feasible solutions of the primal problem

In this section we will show that under some assumptions on the exact forward operator $A$ all elements of the feasible sets in (3.1) are uniformly bounded in $L^1(\Omega)$. The assumptions made in this section will not be used in the rest of the paper, unless specifically stated, and the results of the other sections remain valid for more general forward operators.

Since all elements of the feasible set are positive, we have that $\|u\|_{L^1(\Omega)} = \langle u, \mathbf 1\rangle$. Consider the following optimisation problem:

$\max_u\ \langle u, \mathbf 1\rangle \quad \text{s.t.} \quad \bar A_n u \le \bar f_n,\ u \ge 0.$  (3.8)

It is a linear programming problem and its dual is as follows [Anderson_Nash_LP_inf_dim]:

$\min_{\mu \ge 0}\ \langle \bar f_n, \mu\rangle \quad \text{s.t.} \quad \bar A_n^*\mu \ge \mathbf 1.$  (3.9)

We make the following assumption about the exact forward operator :

Assumption 2.

Assume that the adjoint operator $A^*$ satisfies the following condition:

$A^*\mathbf 1 \ge \gamma\,\mathbf 1$

for some constant $\gamma > 0$.

This assumption is satisfied in many imaging inverse problems, such as deconvolution [Burger_Osher_TV_Zoo] and PET [Sawatzky:2013]. It is also trivially satisfied for denoising and inpainting.
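For instance, in the deconvolution case (a sketch under assumptions we add for illustration: a non-negative kernel and periodic boundary conditions), if $(Au)(y) = \int_\Omega k(y - x)\,u(x)\,dx$ with $k \ge 0$, then
$(A^*\mathbf 1)(x) = \int_\Sigma k(y - x)\,dy = \int k,$
so Assumption 2 holds with $\gamma = \int k > 0$. For denoising ($A = \mathrm{Id}$) we get $A^*\mathbf 1 = \mathbf 1$ and $\gamma = 1$.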

The case $A_n^{lo} = A_n^{up} = A$.

In order to get some intuition, let us first consider the case of an exactly known operator, $A_n^{lo} = A_n^{up} = A$.

Theorem 6.

Suppose that $A_n^{lo} = A_n^{up} = A$ for all $n$ and Assumption 2 is satisfied. Then all elements of the feasible sets in (3.1) are uniformly bounded in $L^1(\Omega)$.

Proof.

It is easy to verify that $\mu = \frac{1}{\gamma}(\mathbf 1, 0)$ is a feasible solution of (3.9) (here $\gamma$ is the constant from Assumption 2). Indeed, we have that $\bar A^*\mu = \frac{1}{\gamma} A^*\mathbf 1 \ge \mathbf 1$. By weak duality we have that problem (3.8) is bounded and $\|u\|_{L^1(\Omega)} \le \frac{1}{\gamma}\langle f_n^{up}, \mathbf 1\rangle \le C$, since $f_n^{up} \to f$ strongly in $L^\infty(\Sigma)$. ∎

The general case.

In the general case we obtain a similar result using Assumption 1.

Theorem 7.

Suppose that (3.6) holds and Assumption 2 is satisfied. Then, for sufficiently large $n$, all elements of the feasible sets in (3.1) are uniformly bounded in $L^1(\Omega)$.

Proof.

(3.6) implies that $0 \le A - A_n^{lo} \le A_n^{up} - A_n^{lo}$ and $\|A - A_n^{lo}\|_{\mathcal{U}\to\mathcal{F}} \le h_n$ for any $n$. Therefore, we have that, for any $u \ge 0$,
$\langle (A - A_n^{lo})^*\mathbf 1, u\rangle = \langle \mathbf 1, (A - A_n^{lo})u\rangle \le |\Sigma|\,\|(A - A_n^{lo})u\|_{L^\infty(\Sigma)} \le |\Sigma|\,h_n\,\|u\|_{L^1(\Omega)}$
and
$(A - A_n^{lo})^*\mathbf 1 \le |\Sigma|\,h_n\,\mathbf 1.$
Taking $\gamma_n := \gamma - |\Sigma|\,h_n$, we get that
$(A_n^{lo})^*\mathbf 1 = A^*\mathbf 1 - (A - A_n^{lo})^*\mathbf 1 \ge \gamma_n\,\mathbf 1.$
Now consider $\mu_n := \frac{1}{\gamma_n}(\mathbf 1, 0)$. It is a feasible solution of problem (3.9), since
$\bar A_n^*\mu_n = \frac{1}{\gamma_n}(A_n^{lo})^*\mathbf 1 \ge \mathbf 1$
if $n$ is large enough, since $\gamma_n \to \gamma > 0$. Therefore, problem (3.8) is bounded and $\|u\|_{L^1(\Omega)} \le \frac{1}{\gamma_n}\langle f_n^{up}, \mathbf 1\rangle \le C$. ∎

3.3 Strong duality in the limit case

Since the inversion of the exact operator $A$ is ill-posed, we cannot expect Robinson regularity to hold in the primal limit problem (3.3) and, therefore, we cannot guarantee strong duality between (3.3) and (3.4) or even the existence of solutions of the dual limit problem (3.4), let alone its stability and the convergence of the solutions of (3.2). As usual in ill-posed problems, we will need to make an additional assumption about the exact solution, called the source condition [Burger_Osher:2004], which in our case can be written as follows:

Assumption 3 (Source condition).

Assume that there exist $\lambda^\dagger, \mu^\dagger \in \mathcal{F}^*$ and $\nu^\dagger \in \mathcal{U}^*$ with $\lambda^\dagger, \mu^\dagger, \nu^\dagger \ge 0$ and $\langle \nu^\dagger, u^\dagger\rangle = 0$ such that

$q^\dagger := A^*(\lambda^\dagger - \mu^\dagger) + \nu^\dagger \in \partial\mathcal{J}(u^\dagger).$  (3.10)

Let us note that $A^*(\lambda^\dagger - \mu^\dagger) = A^*w^\dagger$, where $w^\dagger := \lambda^\dagger - \mu^\dagger$, with $\lambda^\dagger, \mu^\dagger \ge 0$. Therefore, (3.10) implies the source condition from [Burger_Osher:2004]. On the other hand, since every element $w \in \mathcal{F}^*$ can be represented as a difference of two positive elements, $w = w^+ - w^-$ [Schaefer], the source condition from [Burger_Osher:2004] also implies (3.10).

Proposition 8.

Under Assumption 3, the pair $(w^\dagger, \nu^\dagger)$, where $w^\dagger = \lambda^\dagger - \mu^\dagger$, solves the dual problem (3.4) and strong duality between (3.3) and (3.4) holds.

Proof.

The source condition (3.10), together with the characterisation (2.3), implies that $\langle f, w^\dagger\rangle = \langle u^\dagger, A^*w^\dagger\rangle = \langle u^\dagger, q^\dagger\rangle - \langle u^\dagger, \nu^\dagger\rangle = \mathcal{J}(u^\dagger)$. On the other hand, for any feasible pair $(w, \nu)$, $\langle f, w\rangle \le \mathcal{J}(u^\dagger)$ by weak duality. Therefore, the pair $(w^\dagger, \nu^\dagger)$ solves (3.4) and strong duality holds. ∎

3.4 Stability of the dual problem

The goal of this section is to show that the feasible set in the dual limit problem (3.4) is stable under perturbations of the operator of the following form:
$(A + E)^*w + \nu \in \partial\mathcal{J}(0), \qquad \|E\|_{\mathcal{U}\to\mathcal{F}} \le \varepsilon,$

for some small $\varepsilon > 0$. Denote by $\Phi(E)$ the feasible set of the perturbed problem. From Proposition 2 we know that if the Robinson condition holds at some point $(w_0, \nu_0) \in \Phi(0)$, then $\operatorname{dist}\big((w_0, \nu_0), \Phi(E)\big) \le c\,\|E^*w_0\|_{L^\infty} \le c\,\varepsilon\,\|w_0\|$.

In order to show that the Robinson condition (2.6) is satisfied in (3.4) at $(w^\dagger, \nu^\dagger)$, we need to make the following

Assumption 4.

Assume that
$\operatorname{int}\,\partial\mathcal{J}(0) \ne \emptyset,$
where the interior is taken with respect to the norm of $L^\infty(\Omega)$.

We emphasise that, since we consider the regularisation functional in $L^1(\Omega)$, its subdifferential at zero should be considered in $L^\infty(\Omega)$, rather than, for instance, in $BV(\Omega)^*$. Assumption 4 holds, for example, for the (weighted) $BV$ norm $\|u\|_{L^1(\Omega)} + \beta\,\mathrm{TV}(u)$, $\beta > 0$, also in the case when it is considered as a functional from $L^1(\Omega)$ to $\mathbb{R} \cup \{+\infty\}$ and not from $BV(\Omega)$ to $\mathbb{R}$, as shown in Appendix B. Assumption 4 fails, however, for $\mathcal{J} = \mathrm{TV}$ (see Appendix B as well).
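Both claims can be checked directly from (2.4). For $\mathcal{J}(u) = \|u\|_{L^1(\Omega)} + \beta\,\mathrm{TV}(u)$, any $q$ with $\|q\|_{L^\infty} \le 1$ satisfies $\langle q, u\rangle \le \|u\|_{L^1(\Omega)} \le \mathcal{J}(u)$, so the unit ball of $L^\infty(\Omega)$ is contained in $\partial\mathcal{J}(0)$ and the interior is non-empty. For $\mathcal{J} = \mathrm{TV}$, testing with the constants $u \equiv \pm 1$ (which have zero total variation) forces $\int_\Omega q\,dx = 0$ for every $q \in \partial\mathcal{J}(0)$; a set contained in a closed hyperplane has empty interior.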

Lemma 9.

Under Assumption 4, the Robinson condition (2.6) holds in (3.4).

Proof.

We need to show that
$0 \in \operatorname{int}\{A^*w + \nu - q : w \in \mathcal{F}^*,\ \nu \in \mathcal{U}^*_+,\ q \in \partial\mathcal{J}(0)\}.$
For this, we need to show that for an arbitrary $r \in L^\infty(\Omega)$, $\|r\|_{L^\infty} \le \varepsilon$, we have that $r \in \{A^*w + \nu - q\}$, if $\varepsilon$ is small enough. Fix some $r$, $\|r\|_{L^\infty} \le \varepsilon$. We need to find $w$, $\nu \ge 0$ and $q \in \partial\mathcal{J}(0)$ such that
$A^*w + \nu - q = r.$
The required condition is satisfied if we take $w = 0$, $\nu = r^+$ and $q = r^-$. Since $\partial\mathcal{J}(0)$ is convex and symmetric, Assumption 4 implies that it contains an $L^\infty$-ball of some radius $\rho > 0$ around zero. Since $\nu = r^+ \ge 0$ and $\|r^-\|_{L^\infty} \le \varepsilon \le \rho$, we see that $q \in \partial\mathcal{J}(0)$ and the Robinson condition is satisfied. ∎

3.5 Boundedness of the Lagrange multipliers $\lambda_n$ and $\mu_n$

Now we want to investigate the convergence of the Lagrange multipliers $\lambda_n$, $\mu_n$ and $\nu_n$ (the solutions of the dual problems (3.2)), which by the results of Section 2.4 are related to the subgradient of $\mathcal{J}$ at $u_n$ via $q_n = \nu_n + (A_n^{up})^*\lambda_n - (A_n^{lo})^*\mu_n \in \partial\mathcal{J}(u_n)$. As noted earlier, convergence of the subgradients plays an important role in establishing convergence rates.

The case $A_n^{lo} = A_n^{up} = A$.

Again, we will first consider the case $A_n^{lo} = A_n^{up} = A$. We will see that it significantly differs from the general case, because it does not require Assumption 4.

Theorem 10.

Suppose that $A_n^{lo} = A_n^{up} = A$ for all $n$ and Assumptions 1 (convergence) and 3 (source condition) hold. Then

$\|\lambda_n\|_{\mathcal{F}^*} + \|\mu_n\|_{\mathcal{F}^*} \le \frac{1}{c}\big(\|\lambda^\dagger\|_{\mathcal{F}^*} + \|\mu^\dagger\|_{\mathcal{F}^*}\big).$  (3.11)
Proof.

Consider two problems (3.2) and (3.4). Since $A_n^{lo} = A_n^{up} = A$, their feasible sets coincide and $(\lambda^\dagger, \mu^\dagger, \nu^\dagger)$ is a feasible solution of (3.2). Therefore, $\langle f_n^{lo}, \lambda_n\rangle - \langle f_n^{up}, \mu_n\rangle \ge \langle f_n^{lo}, \lambda^\dagger\rangle - \langle f_n^{up}, \mu^\dagger\rangle$. Similarly, $(\lambda_n, \mu_n, \nu_n)$ is a feasible solution of (3.4) and $\langle f, \lambda_n - \mu_n\rangle \le \langle f, \lambda^\dagger - \mu^\dagger\rangle$. Combining these two estimates, we conclude that $\langle f_n^{lo} - f, \lambda_n\rangle - \langle f_n^{up} - f, \mu_n\rangle \ge \langle f_n^{lo} - f, \lambda^\dagger\rangle - \langle f_n^{up} - f, \mu^\dagger\rangle$. Assumption 1 implies that
$\langle f_n^{lo} - f, \lambda_n\rangle - \langle f_n^{up} - f, \mu_n\rangle \le -c\,\delta_n\big(\|\lambda_n\| + \|\mu_n\|\big), \qquad \langle f_n^{lo} - f, \lambda^\dagger\rangle - \langle f_n^{up} - f, \mu^\dagger\rangle \ge -\delta_n\big(\|\lambda^\dagger\| + \|\mu^\dagger\|\big).$

Since $\|\lambda_n\|_{\mathcal{F}^*} = \langle \lambda_n, \mathbf 1\rangle$ and $\|\mu_n\|_{\mathcal{F}^*} = \langle \mu_n, \mathbf 1\rangle$ for positive functionals, estimate (3.11) follows. ∎

The general case.

In the general case the optimal solution of one problem is no longer a feasible solution of the other one, but due to the stability of the feasible set, there are feasible points “not too far away”.

Theorem 11.

Suppose that Assumptions 1 (convergence), 3 (source condition) and 4 (non-empty interior) hold. Then there exists a constant $C_\mu > 0$ such that

$\|\lambda_n\|_{\mathcal{F}^*} + \|\mu_n\|_{\mathcal{F}^*} \le C_\mu \quad \text{for all } n.$  (3.12)
Proof.

Consider first the limit problem (3.4). Since the Robinson condition holds in the dual problem (Lemma 9), by Proposition 2 we have that
$\operatorname{dist}\big((\lambda^\dagger, \mu^\dagger, \nu^\dagger),\, \Phi_n\big) \le c_R\,\big\|(A_n^{up} - A)^*\lambda^\dagger - (A_n^{lo} - A)^*\mu^\dagger\big\| \le c_R\,h_n\big(\|\lambda^\dagger\| + \|\mu^\dagger\|\big),$
where $\Phi_n$ denotes the feasible set of (3.2) and the last inequality holds because $A_n^{lo} \le A \le A_n^{up}$ and $\|A_n^{up} - A_n^{lo}\| \le h_n$.

Therefore, there exist $(\tilde\lambda_n, \tilde\mu_n, \tilde\nu_n) \in \Phi_n$ such that $\|\tilde\lambda_n - \lambda^\dagger\| + \|\tilde\mu_n - \mu^\dagger\| \le c_R\,h_n(\|\lambda^\dagger\| + \|\mu^\dagger\|)$ (and, therefore, the corresponding dual objectives differ by at most $O(h_n)$). Since $(\tilde\lambda_n, \tilde\mu_n, \tilde\nu_n)$ is feasible, we get that $\langle f_n^{lo}, \lambda_n\rangle - \langle f_n^{up}, \mu_n\rangle \ge \langle f_n^{lo}, \tilde\lambda_n\rangle - \langle f_n^{up}, \tilde\mu_n\rangle$. Furthermore, since $\|f_n^{up} - f\|_{L^\infty} \le \delta_n$ and $\|f_n^{lo} - f\|_{L^\infty} \le \delta_n$, we get that

$\langle f_n^{lo}, \lambda_n\rangle - \langle f_n^{up}, \mu_n\rangle \ge \langle f, \lambda^\dagger - \mu^\dagger\rangle - C_1(\delta_n + h_n),$  (3.13)

with a constant $C_1$ depending only on $\|\lambda^\dagger\|$, $\|\mu^\dagger\|$ and $\|f\|$. Similarly, in the dual problem for finite $n$ (3.2) we get that there exist $(\hat\lambda_n, \hat\mu_n, \hat\nu_n)$ feasible for (3.4) such that $\|\hat\lambda_n - \lambda_n\| + \|\hat\mu_n - \mu_n\| \le c_R\,h_n(\|\lambda_n\| + \|\mu_n\|)$. Since $(\hat\lambda_n, \hat\mu_n, \hat\nu_n)$ is feasible, we get that $\langle f, \hat\lambda_n - \hat\mu_n\rangle \le \langle f, \lambda^\dagger - \mu^\dagger\rangle$ and

$\langle f_n^{lo}, \lambda_n\rangle - \langle f_n^{up}, \mu_n\rangle \le \langle f, \lambda^\dagger - \mu^\dagger\rangle - c\,\delta_n\big(\|\lambda_n\| + \|\mu_n\|\big) + C_2\,h_n\big(\|\lambda_n\| + \|\mu_n\|\big),$  (3.14)

with a constant $C_2$ depending only on $\|f\|$. Combining (3.13) and (3.14), we get the following estimate
$\big(c\,\delta_n - C_2\,h_n\big)\big(\|\lambda_n\| + \|\mu_n\|\big) \le C_1(\delta_n + h_n),$
or, equivalently,
$\|\lambda_n\| + \|\mu_n\| \le \frac{C_1(\delta_n + h_n)}{c\,\delta_n - C_2\,h_n}.$
If the constant $C$ in (3.7) is small enough ($C < c/C_2$), this implies (3.12) with

$C_\mu = \frac{C_1(1 + C)}{c - C_2\,C}.$  (3.15)
∎

Corollary 12.

Since the sequence $\{(\lambda_n, \mu_n)\}$ is bounded in $\mathcal{F}^* \times \mathcal{F}^*$, the sequence $\{(A_n^{up})^*\lambda_n - (A_n^{lo})^*\mu_n\}$ is bounded in $L^\infty(\Omega)$ if the operators $(A_n^{up})^*$ and $(A_n^{lo})^*$ are bounded from $\mathcal{F}^*$ to $L^\infty(\Omega)$ uniformly in $n$ (which is guaranteed by (3.6)).

Remark 13.

Note that in the case $A_n^{lo} = A_n^{up} = A$ we did not use the stability of the dual problem and, therefore, did not need Assumption 4 to show that the Lagrange multipliers are bounded. This demonstrates that Assumption 4 plays an important role specifically in problems with an imperfect operator.

3.6 Boundedness of the Lagrange multipliers $\nu_n$

Proposition 14.

Suppose that $\mathcal{J}(\mathbf 1) < \infty$. Then under the assumptions of Theorem 11 we have that $\|\nu_n\|_{L^1(\Omega)} \le C$ for all $n$.

Proof.

Since $q_n = \nu_n + (A_n^{up})^*\lambda_n - (A_n^{lo})^*\mu_n \in \partial\mathcal{J}(0)$, we have that $\langle q_n, u\rangle \le \mathcal{J}(u)$ for all $u \in \mathcal{U}$, or, equivalently,
$\langle \nu_n, u\rangle \le \mathcal{J}(u) + \langle (A_n^{lo})^*\mu_n - (A_n^{up})^*\lambda_n, u\rangle.$
Choosing $u = \mathbf 1$, we obtain an estimate of the norm of $\nu_n$ (since $\nu_n \ge 0$):

$\|\nu_n\|_{L^1(\Omega)} = \langle \nu_n, \mathbf 1\rangle \le \mathcal{J}(\mathbf 1) + \langle \mu_n, A_n^{lo}\mathbf 1\rangle - \langle \lambda_n, A_n^{up}\mathbf 1\rangle \le C,$  (3.16)

where the right-hand side is bounded uniformly in $n$ by Theorem 11 and (3.6). ∎

It is worth noting that, although $\nu_n \in L^\infty(\Omega)$, we only get a bound on the $L^1$ norm of $\nu_n$ here.

3.7 Convergence of the subgradient

Now we are ready to study the convergence of the subgradients $q_n \in \partial\mathcal{J}(u_n)$ at the optimal solutions $u_n$ of the primal problems, starting with their operator part.

Proposition 15.

Under the assumptions of Theorem 11, the sequence $\{(\lambda_n, \mu_n)\}$ has a weakly-∗ convergent subsequence (in $\mathcal{F}^* \times \mathcal{F}^*$), which we still denote by $\{(\lambda_n, \mu_n)\}$, $(\lambda_n, \mu_n) \rightharpoonup^* (\hat\lambda, \hat\mu)$, and $(A_n^{up})^*\lambda_n - (A_n^{lo})^*\mu_n \rightharpoonup^* A^*(\hat\lambda - \hat\mu)$ in $L^\infty(\Omega)$.

Proof.

Since the $(\lambda_n, \mu_n)$’s are bounded in $\mathcal{F}^* \times \mathcal{F}^*$, $(\lambda_n, \mu_n) \rightharpoonup^* (\hat\lambda, \hat\mu)$ (along a subsequence) in $\mathcal{F}^* \times \mathcal{F}^*$ by the Banach-Alaoglu theorem [Burger_Osher_TV_Zoo], i.e. for any $g \in \mathcal{F}$ we have that $\langle \lambda_n, g\rangle \to \langle \hat\lambda, g\rangle$ and $\langle \mu_n, g\rangle \to \langle \hat\mu, g\rangle$. Considering, for an arbitrary $u \in L^1(\Omega)$, the scalar product $\langle (A_n^{up})^*\lambda_n, u\rangle = \langle \lambda_n, A_n^{up}u\rangle$, we note that $\langle \lambda_n, A_n^{up}u\rangle \to \langle \hat\lambda, Au\rangle$, since $A_n^{up}u \to Au$ in $\mathcal{F}$ and $\lambda_n \rightharpoonup^* \hat\lambda$ in $\mathcal{F}^*$. Therefore, $(A_n^{up})^*\lambda_n \rightharpoonup^* A^*\hat\lambda$ in $L^\infty(\Omega)$; the same argument applies to $(A_n^{lo})^*\mu_n$. ∎

Remark 16.

Since (3.12) holds for any triple delivering the source condition, i.e. every $\lambda^\dagger, \mu^\dagger, \nu^\dagger \ge 0$ such that $A^*(\lambda^\dagger - \mu^\dagger) + \nu^\dagger \in \partial\mathcal{J}(u^\dagger)$,