On $\ell^1$-regularization under continuity of the forward operator in weaker topologies
Abstract
Our focus is on the stable approximate solution of linear operator equations based on noisy data by using $\ell^1$-regularization as a sparsity-enforcing version of Tikhonov regularization. We summarize recent results on situations where the sparsity of the solution slightly fails. In particular, we show how the recently established theory for weak*-to-weak continuous linear forward operators can be extended to the case of weak*-to-weak* continuity. This might be of interest when the image space is non-reflexive. We discuss existence, stability and convergence of regularized solutions. For injective operators, we will formulate convergence rates by exploiting variational source conditions. The typical rate function obtained under an ill-posed operator is strictly concave, and the degree of failure of the solution sparsity has an impact on its behavior. Linear convergence rates occur just in the two borderline cases of proper sparsity, where the solution $x^\dagger$ belongs to $\ell^0$, and of well-posedness. For an exemplary operator, we demonstrate that the technical properties used in our theory can be verified in practice. In the last section, we briefly mention the difficult case of oversmoothing regularization, where $x^\dagger$ does not belong to $\ell^1$.
1 Introduction
We are going to deal with the stable solution of linear operator equations
(1) $Ax = y$
with a bounded linear operator $A: \ell^1 \to Y$, mapping from the non-reflexive infinite dimensional space $\ell^1$ of absolutely summable infinite real or complex sequences to an infinite dimensional Banach space $Y$. Instead of the exact right-hand side $y = Ax^\dagger$ from the range $\mathcal{R}(A)$ of $A$ we assume to have only noisy data $y^\delta \in Y$ available which satisfy the deterministic noise model
(2) $\|y - y^\delta\|_Y \le \delta$
with prescribed noise level $\delta > 0$. Our focus for solving equation (1) is on the method of $\ell^1$-regularization, where for regularization parameters $\alpha > 0$ the minimizers $x_\alpha^\delta$ of the extremal problem
(3) $T_\alpha^\delta(x) := \frac{1}{p}\,\|Ax - y^\delta\|_Y^p + \alpha\,\|x\|_{\ell^1} \to \min, \quad \text{subject to} \quad x \in \ell^1,$
are used as approximate solutions. This method is a sparsity-enforcing version of Tikhonov regularization, possessing applications in different branches of imaging, natural sciences, engineering and mathematical finance. It was comprehensively analyzed with all its facets and varieties in the last fifteen years (cf., e.g., the corresponding chapters in the books [31, 32, 33] and the papers [2, 4, 8, 12, 20, 21, 22, 25, 28, 29]). We restrict our considerations to injective operators $A$ such that the element $x^\dagger \in \ell^1$ denotes the uniquely determined solution to (1). For assertions concerning the case of non-injective operators in the context of $\ell^1$-regularization, we refer to [10]. In the non-injective case, even the $\ell^1$-norm minimizing solutions need not be uniquely determined. As a consequence, very technical conditions must be introduced in order to formulate convergence assertions and rates. In our framework, Propositions 4.3 and 5.4 below would have to be adapted, which however is out of the scope of this paper.
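For readers who want to experiment numerically, the extremal problem above can be treated in a finite-dimensional surrogate setting by iterative soft-thresholding (ISTA). The following sketch is not part of the paper's theory; the matrix, data and parameter values are hypothetical toy choices.

```python
# Minimal numerical sketch (hypothetical example, not from the paper):
# iterative soft-thresholding (ISTA) for a finite-dimensional analogue of (3)
# with p = 2, i.e. minimizing (1/2)*||A x - y||_2^2 + alpha*||x||_1.

def soft_threshold(v, t):
    """Componentwise soft-thresholding, the proximal map of t*||.||_1."""
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0 for c in v]

def ista(A, y, alpha, steps=5000):
    m, n = len(A), len(A[0])
    # step size 1/L with L = squared Frobenius norm, an upper bound of ||A^T A||
    L = sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([x[j] - g[j] / L for j in range(n)], alpha / L)
    return x

A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 0.2]]
y_delta = [1.0, 0.0, 0.0]
x_reg = ista(A, y_delta, alpha=0.1)   # sparse minimizer, here (0.9, 0, 0)
```

The soft-thresholding step is what enforces the sparsity of the regularized solutions mentioned below in Proposition 2.1: small components are mapped to exactly zero.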
With the paper [5] as starting point and preferably based on variational source conditions first introduced in [23], convergence rates for $\ell^1$-regularization of operator equations (1) and variants like elastic-net regularization
(4) $\frac{1}{p}\,\|Ax - y^\delta\|_Y^p + \alpha\,\|x\|_{\ell^1} + \eta\,\|x\|_{\ell^2}^2 \to \min, \quad \text{subject to} \quad x \in \ell^1,$
have been verified under the condition that the sparsity assumption slightly fails (cf. [6, 13, 14]). This means that the solution $x^\dagger$ is not sparse, abbreviated as $x^\dagger \notin \ell^0$. Most recently in [11], the first author and Jens Flemming have shown that complicated conditions on $A$, usually supposed for proving convergence rates in $\ell^1$-regularization (cf. [5, Assumption 2.2 (c)] and condition (9) below), can be simplified to the requirement of weak*-to-weak continuity of the injective operator $A$. This seems to be convincing if $Y$ is a reflexive Banach space. The present paper, however, makes assertions also in the case that $A$ is only weak*-to-weak* continuous, which is of interest for non-reflexive Banach spaces $Y$. Moreover, we complement results from [11], for example with respect to the well-posed situation.
The paper is organized as follows. In Section 2 we recall basic properties of $\ell^1$-regularization. We proceed in Section 3 by discussing the ill-posedness of equation (1). We mention that in particular variational source conditions allow us to deal with the ill-posedness and yield convergence rates. For our convergence analysis a particular property of the operator $A$ is necessary. In Section 4 we show that weak*-to-weak continuity and injectivity imply this property. Interestingly, the same property holds under weak*-to-weak* continuity and injectivity, as shown in Section 5. There we also derive the convergence rates which hold for both continuity assumptions. Finally, we demonstrate in Section 6 that even the case of a well-posed operator is reflected in our property. There we also hint at the case of oversmoothing regularization, which occurs when one employs $\ell^1$-regularization although the true solution $x^\dagger$ does not belong to $\ell^1$.
2 Preliminaries and basic propositions
In this paper, we consider the variant (3) of $\ell^1$-regularization with some exponent $p > 1$ and with a regularization parameter $\alpha > 0$. Let $y \in \mathcal{R}(A)$. Then, due to the injectivity of $A$, there exists a uniquely determined solution $x^\dagger \in \ell^1$ to (1). With the following Proposition 2.1 we recall the assertions of Proposition 2.8 in [5] with respect to existence, stability, convergence and sparsity of the regularized solutions $x_\alpha^\delta$. The proof ibidem emphasizes the fact that most of these properties follow directly from the general theory of Tikhonov regularization in Banach spaces (cf., e.g., [23, Section 3] and [33, Section 4.1]). Since for injective $A$ and exponents $p > 1$ the Tikhonov functional $T_\alpha^\delta$ to be minimized in (3) is strictly convex, the regularized solutions $x_\alpha^\delta$, whenever they exist, are uniquely determined for all $\alpha > 0$.
Proposition 2.1.
Let $A$ be weak*-to-weak continuous, i.e., $x_n \rightharpoonup^* x$ in $\ell^1$ implies that $Ax_n \rightharpoonup Ax$ in $Y$. Then for all $\alpha > 0$ and all $y^\delta \in Y$ there exist uniquely determined minimizers $x_\alpha^\delta$ of the Tikhonov functional from (3). These regularized solutions are sparse, i.e., $x_\alpha^\delta \in \ell^0$, and they are stable with respect to the data, i.e., small perturbations in $y^\delta$ in the norm topology of $Y$ lead only to small changes in $x_\alpha^\delta$ with respect to the weak*-topology in $\ell^1$. If $\delta_n \to 0$ and if the regularization parameters $\alpha_n = \alpha(\delta_n, y^{\delta_n})$ are chosen such that $\alpha_n \to 0$ and $\delta_n^p/\alpha_n \to 0$ as $n \to \infty$, then the regularized solutions $x_{\alpha_n}^{\delta_n}$ converge in the weak*-topology of $\ell^1$ to the uniquely determined solution $x^\dagger$ of the operator equation (1). Moreover we have $\lim_{n \to \infty} \|x_{\alpha_n}^{\delta_n}\|_{\ell^1} = \|x^\dagger\|_{\ell^1}$, which, as a consequence of the weak* Kadec-Klee property in $\ell^1$ (see, e.g., [3, Lemma 2.2]), implies norm convergence $\lim_{n \to \infty} \|x_{\alpha_n}^{\delta_n} - x^\dagger\|_{\ell^1} = 0$.
The weak*-to-weak continuity of $A$, in combination with the stabilizing property of the $\ell^1$-norm as penalty functional and together with an appropriate choice of the regularization parameter, represents the basic assumptions of Proposition 2.1. In contrast to regularization in reflexive Banach spaces, where the level sets of the norm functional are weakly compact, we have in $\ell^1$ weak* compactness of the corresponding level sets according to the sequential Banach-Alaoglu theorem (cf., e.g., [30, Theorems 3.15 and 3.17]), which we present in form of the following lemma.
Lemma 2.2.
The closed unit ball of a Banach space $X$ is compact in the weak*-topology if there is a separable Banach space $Z$ (predual space) with dual $Z^* = X$. Then any bounded sequence $\{x_n\}_{n \in \mathbb{N}}$ in $X$ has a weak*-convergent subsequence $\{x_{n_k}\}_{k \in \mathbb{N}}$ such that $x_{n_k} \rightharpoonup^* x \in X$ as $k \to \infty$.
The occurring kind of compactness of the level sets of the $\ell^1$-norm, with $X = \ell^1$ and predual space $Z = c_0$, ensures the existence of minimizers of the functional (3).
Throughout this paper, we use the terms 'continuous', 'compact' or 'lower semicontinuous' for an operator, a set or a functional always in the sense of 'sequentially continuous', 'sequentially compact' or 'sequentially lower semicontinuous'. As Lemmas 6.3 and 6.5 from [9] show, there is no reason for a distinction in case of using weak topologies. From Lemma 2.7 and Proposition 2.4 in [5] one can take assertions concerning sufficient conditions for the weak*-to-weak continuity of $A$, which we summarize in Proposition 2.3 below. As also indicated in Proposition 2.1, for the choice of $\alpha = \alpha(\delta, y^\delta)$, the so-called regularization property
(5) $\lim_{\delta \to 0} \alpha(\delta, y^\delta) = 0 \quad \text{and} \quad \lim_{\delta \to 0} \frac{\delta^p}{\alpha(\delta, y^\delta)} = 0,$
where $\alpha(\delta, y^\delta)$ tends to zero, but sufficiently slowly, plays an important role. In our studies, we consider on the one hand a priori parameter choices defined as
(6) $\alpha(\delta) := \frac{\delta^p}{\varphi(\delta)}$
with concave index functions $\varphi$. We call $\varphi: (0, \infty) \to (0, \infty)$ an index function if $\varphi$ is continuous and strictly increasing with $\lim_{t \to +0} \varphi(t) = 0$. Obviously, an a priori parameter choice from (6) with an arbitrary concave index function $\varphi$ satisfies (5), as $\delta^p/\alpha(\delta) = \varphi(\delta) \to 0$ as $\delta \to 0$ is valid for each index function, and we have $\alpha(\delta) \to 0$ as $\delta \to 0$, because $\delta^{p-1}$ is an index function for all exponents $p > 1$ in (3) and the factor $\delta/\varphi(\delta)$ is bounded as $\delta \to 0$ whenever $\varphi$ is concave.
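As a toy illustration (with hypothetical numbers, and assuming the a priori choice has the form $\alpha(\delta) = \delta^p/\varphi(\delta)$ stated in (6)), the regularization property (5) can be checked numerically:

```python
# Toy check (hypothetical numbers): for the concave index function
# phi(t) = sqrt(t) and exponent p = 2, the a priori choice
# alpha(delta) = delta**p / phi(delta) satisfies the regularization
# property (5): both alpha(delta) and delta**p / alpha(delta) vanish
# as delta -> 0.

import math

p = 2.0
phi = math.sqrt                                  # concave index function
alpha = lambda delta: delta ** p / phi(delta)    # here: delta**(3/2)

deltas = [10.0 ** (-k) for k in range(1, 7)]
alphas = [alpha(d) for d in deltas]
ratios = [d ** p / alpha(d) for d in deltas]     # equals phi(delta)
```

Both sequences decrease monotonically to zero, with the ratio $\delta^p/\alpha(\delta)$ decaying like $\varphi(\delta) = \sqrt{\delta}$, i.e. slower than $\alpha(\delta)$ itself.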
On the other hand, we consider the sequential discrepancy principle, comprehensively analyzed in [1] (see also [24]), as a specific a posteriori choice of the regularization parameter. For prescribed $\tau > 1$, some constant $0 < q < 1$ and a sufficiently large value $\alpha_0 > 0$, we let
$\Delta_q := \{\alpha_j > 0:\ \alpha_j = q^j \alpha_0,\ j \in \mathbb{N}\}.$
Given $\delta > 0$ and $y^\delta \in Y$, we choose $\alpha = \alpha(\delta, y^\delta) \in \Delta_q$ according to the sequential discrepancy principle such that
(7) $\|Ax_\alpha^\delta - y^\delta\|_Y \le \tau\delta < \|Ax_{\alpha/q}^\delta - y^\delta\|_Y.$
In Theorem 1 of [1] it has been shown that there is some $\bar\delta > 0$ such that $\alpha(\delta, y^\delta)$ is well-defined for $0 < \delta \le \bar\delta$ and satisfies (5) whenever data compatibility in the sense of [1, Assumption 3] takes place.
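A schematic implementation of this parameter search might look as follows. The setting is an assumed toy model (identity forward map with explicit soft-thresholding solutions); the data, $\tau$, $q$ and $\alpha_0$ are hypothetical choices, not taken from [1].

```python
# Sketch of the sequential discrepancy principle (7) on the geometric grid
# alpha_j = q**j * alpha_0 (all concrete values are hypothetical). As a cheap
# stand-in for the forward operator we use A = identity with p = 2, where the
# regularized solution is componentwise soft-thresholding of the data.

import math

def soft(v, t):
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in v]

def residual(y_delta, alpha):
    x = soft(y_delta, alpha)                       # regularized solution
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y_delta)))

def sequential_discrepancy(y_delta, delta, tau=1.5, alpha0=10.0, q=0.5):
    """Walk down the grid until the residual drops to tau*delta or below;
    return the chosen alpha and its predecessor (for checking (7))."""
    alpha, alpha_prev = alpha0, None
    while residual(y_delta, alpha) > tau * delta:
        alpha_prev = alpha
        alpha *= q
    return alpha, alpha_prev

y_delta = [1.0, -0.5, 0.05, 0.0]
delta = 0.1
alpha, alpha_prev = sequential_discrepancy(y_delta, delta)
```

Since the residual is monotonically non-decreasing in $\alpha$ here, the loop terminates at the first grid point satisfying the left inequality of (7), while the predecessor still violates it.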
Consequently, both regularization parameter choices (6) and (7) are applicable for the $\ell^1$-regularization in order to get existence, stability and convergence of regularized solutions in the sense of Proposition 2.1. Now we are going to discuss conditions under which weak*-to-weak continuity of $A$ can be obtained. The occurring cross connections are relevant in order to ensure existence, stability and convergence of regularized solutions, but they also have an essential impact on the convergence rates which will be discussed in Section 4.
Proposition 2.3.
Let $A: \ell^1 \to Y$ with adjoint operator $A^*: Y^* \to \ell^\infty$ satisfy the condition
(8) $\mathcal{R}(A^*) \subseteq c_0,$
where $c_0$ is the Banach space of real-valued sequences converging to zero equipped with the supremum norm. Then $A$ is weak*-to-weak continuous. In particular, (8) is fulfilled whenever there exist, for all $k \in \mathbb{N}$, source elements $f^{(k)} \in Y^*$ such that the system of source conditions
(9) $e^{(k)} = A^* f^{(k)} \quad (k \in \mathbb{N})$
holds true, where $\{e^{(k)}\}_{k \in \mathbb{N}}$ is the sequence of $k$-th standard unit vectors, which forms a Schauder basis in $c_0$. Under the condition (9) we even have the equality
(10) $\overline{\mathcal{R}(A^*)} = c_0.$
The paper [2] shows that the condition (9), originally introduced by Grasmair in [19], can be verified for a wide class of applied linear inverse problems. But as also the counterexamples in [12] indicate, it may fail if the underlying basis smoothness is insufficient. However, weak*-to-weak continuity of $A$ can be reformulated in several ways, as the following proposition, proven in [10, Lemma 2.1], shows. This proposition brings more order into the system of conditions.
Proposition 2.4.
The three assertions
(i) the sequence $\{Ae^{(k)}\}_{k \in \mathbb{N}}$ converges in $Y$ weakly to zero, i.e., $Ae^{(k)} \rightharpoonup 0$ in $Y$ as $k \to \infty$,
(ii) $\mathcal{R}(A^*) \subseteq c_0$,
(iii) $A$ is weak*-to-weak continuous,
are equivalent.
As outlined in [5], the operator equation (1) with operator $A: \ell^1 \to Y$ is often motivated by a background operator equation with an injective and bounded linear operator $\tilde{A}: X \to Y$, mapping from an infinite dimensional Banach space $X$ with uniformly bounded Schauder basis $\{u^{(k)}\}_{k \in \mathbb{N}}$, i.e. $\sup_{k \in \mathbb{N}} \|u^{(k)}\|_X < \infty$, to the Banach space $Y$. Here, following the setting in [19], we take into account a synthesis operator $L: \ell^1 \to X$ defined as $Lx := \sum_{k=1}^\infty x_k u^{(k)}$ for $x = (x_1, x_2, \ldots) \in \ell^1$, which is a well-defined, injective and bounded linear operator, and so is the composite operator $A := \tilde{A} \circ L: \ell^1 \to Y$. In particular, $A$ is always weak*-to-weak continuous if $A$ has a bounded extension $\bar{A}: c_0 \to Y$, as this yields (i) in Proposition 2.4. Even more specifically, $A$ is weak*-to-weak continuous if $X$ is a Hilbert space. Since this case appears rather often in practice, the continuity property comes "for free" in this situation.
3 Illposedness and conditional stability
In this section, we discuss ill-posedness phenomena of the operator equation (1) based on Nashed's definition from [27], which we formulate in the following as Definition 3.1 for the simplified case of an injective bounded linear operator. Moreover, we draw a connecting line to the phenomenon of conditional well-posedness characterized by conditional stability estimates, which, for appropriate choices of the regularization parameter, yield convergence rates in Tikhonov-type regularization.
Definition 3.1.
The operator equation $Ax = y$ with an injective bounded linear operator $A: X \to Y$ mapping between infinite dimensional Banach spaces $X$ and $Y$ is called well-posed if the range $\mathcal{R}(A)$ of $A$ is a closed subset of $Y$, otherwise the equation is called ill-posed. In the ill-posed case, we call the equation ill-posed of type I if $\mathcal{R}(A)$ contains an infinite dimensional closed subspace, and otherwise ill-posed of type II.
The following proposition, taken from [13, Propositions 4.2 and 4.4], and the associated Figure 1 give some more insight into the different situations distinguished in Definition 3.1.
Proposition 3.2.
Consider the operator equation from Definition 3.1. If this equation is well-posed, i.e., there is some constant $c > 0$ such that $\|Ax\|_Y \ge c\,\|x\|_X$ for all $x \in X$, or if the equation is ill-posed of type I, then the operator $A$ is non-compact. Consequently, compactness of $A$ implies ill-posedness of type II. More precisely, for an ill-posed equation with injective $A$ and infinite dimensional Banach spaces $X$ and $Y$, ill-posedness of type II occurs if and only if $A$ is strictly singular. This means that the restriction of $A$ to an infinite dimensional subspace of $X$ is never an isomorphism (linear homeomorphism). If $X$ and $Y$ are both Hilbert spaces and the equation is ill-posed, then ill-posedness of type II occurs if and only if $A$ is compact.
Now we apply the case distinction of Definition 3.1, verified in detail in Proposition 3.2, to our situation of equation (1) with $X = \ell^1$ and an infinite dimensional Banach space $Y$. We start with a general observation in Proposition 3.3, which motivates the use of regularization for the stable approximate solution of (1), because the equation is mostly ill-posed. Below we illuminate the cross connections a bit more by the discussion of some example situations.
Proposition 3.3.
If $Y$ is a reflexive Banach space, then the operator equation (1) is always ill-posed of type II.
Proof.
As a consequence of the theorem from [18] we have that every bounded linear operator $A: \ell^1 \to Y$ is strictly singular if $Y$ is a reflexive Banach space. Hence well-posedness and ill-posedness of type I cannot occur in such a case. ∎
Example 3.4.
Consider that, as mentioned before in Section 2, we have a composition $A = \tilde{A} \circ L$ with forward operator $\tilde{A}: X \to Y$ for a reflexive Banach space $Y$ and synthesis operator $L: \ell^1 \to X$. Then (1) is ill-posed of type II even if $\tilde{A}$ is continuously invertible and hence the background equation well-posed. This may occur, for example, for Fredholm or Volterra integral equations of the second kind. Similarly, if $\tilde{A}$ as a mapping between Hilbert spaces is non-compact with non-closed range, and hence the background equation is ill-posed of type I (which occurs, e.g., for multiplication operators mapping in $L^2(0,1)$), (1) is still ill-posed of type II. In the frequent case that $X$ is a separable Hilbert space and $\{u^{(k)}\}_{k \in \mathbb{N}}$ an orthonormal basis, $A$ is compact whenever $\tilde{A}$ is compact (occurring for example for Fredholm or Volterra integral equations of the first kind).
Example 3.5.
If $Y = \ell^q$ with $1 < q < \infty$ and $A$ is the embedding operator from $\ell^1$ into $\ell^q$, then solving equation (1) based on noisy data fulfilling (2) is a denoising problem (see also [13, Sect. 5] and [14, Example 6.1]). For $1 < q < \infty$ the embedding operator is strictly singular with non-closed range, but non-compact. Due to Proposition 3.3 the equation is ill-posed of type II. Moreover, we have $Ae^{(k)} = e^{(k)} \rightharpoonup 0$ in $\ell^q$, which due to Proposition 2.4 implies that $A$ is weak*-to-weak continuous and hence $\mathcal{R}(A^*) \subseteq c_0$. The latter is obvious, because the adjoint $A^*$ is the embedding operator from $\ell^{q^*}$ to $\ell^\infty$ with $1/q + 1/q^* = 1$ and $\ell^{q^*} \subset c_0$. In particular, the source condition (9) applies with $f^{(k)} = e^{(k)}$ for all $k \in \mathbb{N}$.
Example 3.6.
For $q = 1$ in the previous example we have the continuously invertible identity operator $A = I: \ell^1 \to \ell^1$ with closed range $\mathcal{R}(A) = \ell^1$. Then equation (1) is well-posed, but we have $Ae^{(k)} = e^{(k)} \not\rightharpoonup 0$ in $\ell^1$ as $k \to \infty$, which due to Proposition 2.4 indicates that the range of the adjoint of $A$ does not belong to $c_0$, and in particular that $A$ is not weak*-to-weak continuous. This is evident, because the adjoint of $A$ is the identity mapping in $\ell^\infty$ and hence $\mathcal{R}(A^*) = \ell^\infty \not\subseteq c_0$. We will come back to this example later.
For obtaining error estimates in $\ell^1$-regularization on which convergence rates are based, we need some kind of conditional well-posedness in order to overcome the ill-posedness of equation (1). Well-posed varieties of equation (1) yield stability estimates of the form $\|x\|_{\ell^1} \le C\,\|Ax\|_Y$ for all $x \in \ell^1$ with some constant $C > 0$, which under (2) and for appropriate choices of the regularization parameter imply the best possible rate
(11) $\|x_\alpha^\delta - x^\dagger\|_{\ell^1} = \mathcal{O}(\delta) \quad \text{as} \quad \delta \to 0,$
which is typical for well-posed situations. We will come back to this in Section 6. We say that a conditional stability estimate holds true if there is a subset $M \subset \ell^1$ such that
(12) $\|x - x^\dagger\|_{\ell^1} \le C\,\|A(x - x^\dagger)\|_Y \quad \text{for all} \quad x \in M.$
Because it is not known a priori whether the approximate solutions belong to $M$, such kind of stability requires the additional use of regularization for bringing the approximate solutions to $M$ such that a rate (11) can be verified. This idea was first published in [7] by Cheng and Yamamoto. In the context of $\ell^1$-regularization for our equation (1), we have estimates of the form (12) if the solution $x^\dagger$ is sparse, i.e. only a finite number of nonzero components occur in the infinite sequence $x^\dagger$. Then $\ell^0$ can be considered as a subset $M$ of $\ell^1$ with specific properties, and the sparsity of the regularized solutions verified in Proposition 2.1 ensures that the corresponding approximate solutions belong to $M$. This implies the rate (11) for sparse solutions $x^\dagger \in \ell^0$, although equation (1) is not well-posed.
A similar but different kind of conditional well-posedness estimate is represented by variational source conditions, which attain in our setting the form
(13) $\beta\,\|x - x^\dagger\|_{\ell^1} \le \|x\|_{\ell^1} - \|x^\dagger\|_{\ell^1} + \varphi(\|A(x - x^\dagger)\|_Y) \quad \text{for all} \quad x \in \ell^1,$
satisfied for a constant $0 < \beta \le 1$ and some concave index function $\varphi$. From [24, Theorems 1 and 2] we directly obtain the convergence rate results of the subsequent proposition.
Proposition 3.7.
Let the solution $x^\dagger$ of (1) satisfy a variational source condition (13) with some constant $0 < \beta \le 1$ and some concave index function $\varphi$. Then we have the convergence rate
(14) $\|x_\alpha^\delta - x^\dagger\|_{\ell^1} = \mathcal{O}(\varphi(\delta)) \quad \text{as} \quad \delta \to 0,$
both for the a priori parameter choice (6) and for the choice (7) according to the sequential discrepancy principle.
Consequently, for the manifestation of convergence rates results in the next section it remains to find constants $\beta$, concave index functions $\varphi$ and sufficient conditions for the verification of corresponding variational inequalities (13).
4 Convergence rates for $\ell^1$-regularization
The first step to derive a variational source condition (13) at the solution point $x^\dagger$ was taken by Lemma 5.1 in [5], where the inequality
(15) $\|x - x^\dagger\|_{\ell^1} \le \|x\|_{\ell^1} - \|x^\dagger\|_{\ell^1} + 2\sum_{k=1}^{n} |x_k - x_k^\dagger| + 2\sum_{k=n+1}^{\infty} |x_k^\dagger|$
was proven for all $x \in \ell^1$ and all $n \in \mathbb{N}$. Then under the source condition (9), valid for all $k \in \mathbb{N}$, one directly finds
(16) $|x_k - x_k^\dagger| = |\langle A^* f^{(k)}, x - x^\dagger \rangle| = |\langle f^{(k)}, A(x - x^\dagger) \rangle| \le \|f^{(k)}\|_{Y^*}\,\|A(x - x^\dagger)\|_Y$
and hence from (15) that a function $\varphi$ of type
(17) $\varphi(t) := 2\,\inf_{n \in \mathbb{N}} \left( \sum_{k=n+1}^{\infty} |x_k^\dagger| + \gamma_n\, t \right)$
with $\beta = 1$ and
(18) $\gamma_n := \sum_{k=1}^{n} \|f^{(k)}\|_{Y^*}$
provides us with a variational inequality (13).
provides us with a variational inequality (13). Along the lines of the proof of [5, Theorem 5.2] one can show the assertion of the following lemma.
Lemma 4.1.
If $\{\gamma_n\}_{n \in \mathbb{N}}$ is a non-decreasing sequence, then $\varphi$ from (17) is a well-defined and concave index function for all $x^\dagger \in \ell^1$.
Both the decay rate of $|x_k^\dagger| \to 0$ as $k \to \infty$ and the behaviour of $\gamma_n$ as $n \to \infty$ in (17) have an impact on the resulting rate function $\varphi$. A power-type decay of the tail of $x^\dagger$ leads to Hölder convergence rates (see [5, Example 5.3] and [13, Example 3.4]), whereas exponential decay leads to near-to-linear rates slowed down by a logarithmic factor (see [3, Example 3.5] and [13, Example 3.5]). In the case that $x^\dagger$ is sparse with $x_k^\dagger = 0$ for all $k > n_0$, the best possible rate (11) is seen. This becomes clear from formula (17), because then $\varphi$ fulfills the inequality $\varphi(t) \le 2\,\gamma_{n_0}\, t$.
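To see the Hölder-type behaviour concretely, one can evaluate a discretized version of the infimum in (17). The tail and the constants $\gamma_n$ below are illustrative assumptions, not taken from the cited examples:

```python
# Numerical sketch (illustrative assumptions): evaluate
#     phi(t) = 2 * min_n ( sum_{k>n} |x_k| + gamma_n * t )
# for the power-type tail sum_{k>n} |x_k| = 1/n and gamma_n = n.
# Balancing both terms at n ~ 1/sqrt(t) suggests phi(t) ~ 4*sqrt(t),
# i.e. a Hoelder rate with exponent 1/2.

import math

def phi(t, tail, gamma, n_max=5000):
    return 2.0 * min(tail(n) + gamma(n) * t for n in range(1, n_max))

tail = lambda n: 1.0 / n        # assumed power-type decay of the tail
gamma = lambda n: float(n)      # assumed growth of the constants gamma_n

ts = [10.0 ** (-k) for k in range(1, 5)]
vals = [phi(t, tail, gamma) for t in ts]
# empirical Hoelder exponent from the two extreme sample points
exponent = math.log(vals[0] / vals[-1]) / math.log(ts[0] / ts[-1])
```

The computed exponent is close to $1/2$, matching the heuristic balancing of the two terms; faster tail decay shifts the exponent toward one.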
From Proposition 3.7 we have that for all concave index functions $\varphi$ from (17) a convergence rate (14) for the $\ell^1$-regularization takes place in the case of appropriate choices of the regularization parameter whenever a constant $\beta$ exists such that (13) is valid with $\varphi$ from (17). When the condition (9) is valid, this is the case with $\beta = 1$ and $\gamma_n$ from (18). Under the same condition the rate was slightly improved in [13] (see also [14]) by showing that $\gamma_n$ from (18) can be replaced with
(19) 
However, the condition (9) may fail, as was noticed first in [12] for a bidiagonal operator. Therefore, assumption (9) was replaced by a weaker (but not particularly eye-pleasing) one in [12]. Ibidem the authors assume, in principle, that for each $k \in \mathbb{N}$ there are elements $f^{(k)} \in Y^*$ such that for all
and
This means that each basis vector $e^{(k)}$ can be approximated exactly up to an arbitrary position, but with a nonzero tail consisting of sufficiently small elements. Later, in [14], a more clearly formulated property was assumed which implies the one from [12]. We give a slightly reformulated version of this property in the following. In this context, we notice that $P_n$ denotes the projection operator applied to elements $x = (x_1, x_2, \ldots) \in \ell^1$ such that $P_n x = (x_1, \ldots, x_n, 0, 0, \ldots)$.
Property 4.2.
For arbitrary $\varepsilon > 0$, we have a real sequence $\{\gamma_n\}_{n \in \mathbb{N}}$ such that for each $n \in \mathbb{N}$ and each $\xi = (\xi_1, \ldots, \xi_n)$, with
(20) $|\xi_k| \le 1 \quad \text{for} \quad k = 1, \ldots, n,$
there exists some $f \in Y^*$ satisfying
(a) $\|f\|_{Y^*} \le \gamma_n$,
(b) $[A^* f]_k = \xi_k$ for all $k = 1, \ldots, n$,
(c) $\|A^* f\|_{\ell^\infty} \le 1 + \varepsilon$.
It is important to note that it was a substantial breakthrough in the recent paper [11] to show that Property 4.2 follows directly from injectivity and weak*-to-weak continuity of the operator $A$. Namely, the following proposition was proven there. Note that we changed the definition in (20) slightly. By checking the proofs in the original paper one sees that the amendments we made are not relevant.
Proposition 4.3.
Let $A: \ell^1 \to Y$ be bounded, linear and weak*-to-weak continuous. Then the following assertions are equivalent.
(i) $A$ is injective,
(ii) $e^{(k)} \in \overline{\mathcal{R}(A^*)}$ for all $k \in \mathbb{N}$,
(iii) $\overline{\mathcal{R}(A^*)} = c_0$,
(iv) Property 4.2 holds.
In other words, for such operators there exist appropriate sequences $\{\gamma_n\}_{n \in \mathbb{N}}$ occurring in (17) such that a variational source condition (13) holds for an index function $\varphi$ from (17) and a constant $\beta$ (see Proposition 5.5 below). Item (b) in Property 4.2 is a generalization of (9). Namely, the canonical basis vectors $e^{(k)}$ do not necessarily belong to the range of $A^*$, but to its closure. For the proof of Proposition 4.3 we refer to [11]. Most of the steps are identical or at least similar to the proof of Proposition 5.4, which we will give later.
5 Non-reflexive image spaces
If the injective bounded linear operator $A: \ell^1 \to Y$ fails to be weak*-to-weak continuous, then the results of the preceding section do not apply. In case that $Y$ is a non-reflexive Banach space, it makes sense to consider the weaker property of weak*-to-weak* continuity of $A$. An already mentioned example is the identity mapping $A = I: \ell^1 \to \ell^1$ for $Y = \ell^1$. In $\ell^1$, weak convergence and norm convergence coincide (Schur property), but there is no coincidence with weak* convergence. Thus, the identity mapping cannot be weak*-to-weak continuous, but it is weak*-to-weak* continuous as the following Proposition 5.1 shows. It is a modified extension of Proposition 2.4. Following [9, Lemma 6.5] we formulate this extension and repeat below the relevant proof details.
Proposition 5.1.
Let $Z$ be a separable Banach space which acts as a predual space for the Banach space $Y$, i.e., $Z^* = Y$. Then the following four assertions are equivalent.
(i) The sequence $\{Ae^{(k)}\}_{k \in \mathbb{N}}$ converges in $Y$ weakly* to zero, i.e., $Ae^{(k)} \rightharpoonup^* 0$ as $k \to \infty$,
(ii) $A^* z \in c_0$ for all $z \in Z$,
(iii) $A$ is weak*-to-weak* continuous,
(iv) there is a bounded linear operator $B: Z \to c_0$ such that $A = B^*$.
Proof.
Let (i) be satisfied. Then for each $z$ from $Z$ we have
$[A^* z]_k = \langle z, A e^{(k)} \rangle \to 0$
as $k \to \infty$. This yields $A^* z \in c_0$ and hence (ii) is valid.
Now let (ii) be true and take a weak*-convergent sequence $x_n \rightharpoonup^* x$ in $\ell^1$ as $n \to \infty$. Then $\langle u, x_n - x \rangle \to 0$ for all $u$ in $c_0$. Because moreover $A^* z$ belongs to $c_0$ for each $z \in Z$ and $Y$ is the dual of $Z$, we may write this as $\langle A^* z, x_n - x \rangle \to 0$. Thus,
$\langle z, A x_n - A x \rangle = \langle A^* z, x_n - x \rangle \to 0 \quad \text{for all} \quad z \in Z,$
which proves condition (iii). From (iii) and the fact that $e^{(k)} \rightharpoonup^* 0$ in $\ell^1$ as $k \to \infty$ we immediately obtain (i). Finally, the equivalence between (iii) and (iv) can be found, e.g., in [26, Theorem 3.1.11]. ∎
As a consequence of item (iv) in Proposition 5.1, each weak*-to-weak* continuous linear operator $A: \ell^1 \to Y$ is automatically bounded. Figure 2 illustrates the connection between the different spaces and operators we juggle with in this section.
For the identity mapping $A = I: \ell^1 \to \ell^1$ with $Y = \ell^1$ and predual $Z = c_0$, property (i) of Proposition 5.1 is trivially satisfied, which yields the weak*-to-weak* continuity of this operator. Note that the case $A = I$, $Y = \ell^1$, is only of theoretical interest. Precisely, it is a tool for exploring the frontiers of the theoretic framework we have chosen for investigating $\ell^1$-regularization. For practical applications it is irrelevant, because one easily verifies that with the choice $p = 1$ in (3), where we have $\|Ax - y^\delta\|_Y = \|x - y^\delta\|_{\ell^1}$, the regularized solutions coincide with the data, i.e. $x_\alpha^\delta = y^\delta$, if $0 < \alpha < 1$, and we have the best possible rate (11).
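This borderline behaviour can be checked in a one-component toy model by brute force (the values of the data and of $\alpha$ are hypothetical, and the exponent $p = 1$ is assumed):

```python
# Toy check (hypothetical numbers): for A = I and p = 1 the one-dimensional
# Tikhonov functional |x - y| + alpha*|x| is minimized by x = y when
# 0 < alpha < 1, and by x = 0 when alpha > 1.

def minimizer(y, alpha, lo=-3.0, hi=3.0, steps=60001):
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda x: abs(x - y) + alpha * abs(x))

y_delta = 1.5
x_small_alpha = minimizer(y_delta, alpha=0.5)   # expect the data value 1.5
x_large_alpha = minimizer(y_delta, alpha=2.0)   # expect 0.0
```

For small $\alpha$ the data term dominates and the data themselves minimize the functional, so the reconstruction error is bounded by the noise level; for large $\alpha$ the penalty wins and the minimizer collapses to zero.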
Main parts of the above mentioned Proposition 2.1 on existence, stability and convergence of regularized solutions remain true if the operator $A$ is only weak*-to-weak* continuous. The sparsity property $x_\alpha^\delta \in \ell^0$, however, will fail in general (consider the example of the identity as mentioned above). Existence, stability and convergence assertions remain valid, because their proofs basically rely on the fact that the mapping $x \mapsto \|Ax - y^\delta\|_Y^p$ is a weakly* lower semicontinuous functional. This is the case in both variants, with weak*-to-weak or with weak*-to-weak* continuity of $A$, since the norm functional in $Y$ is weakly and also weakly* lower semicontinuous. For the existence of regularized solutions (minimizers of the Tikhonov functional (3)) again the Banach-Alaoglu theorem (Lemma 2.2) is required and yields weak*-compact level sets of the $\ell^1$-norm functional.
Our goal is to prove an analogue of Proposition 4.3 for weak*-to-weak* continuous operators. We start with a first observation.
Proposition 5.2.
Let $A: \ell^1 \to Y$ be injective and weak*-to-weak* continuous and let $Y = Z^*$ for some Banach space $Z$. Then $c_0 \subseteq \overline{\mathcal{R}(A^*)}$.
Proof.
From item (iv) of Proposition 5.1 we take the operator $B: Z \to c_0$ with $A = B^*$. As $A$ is injective, i.e., $\mathcal{N}(A) = \{0\}$, it follows
$\{0\} = \mathcal{N}(A) = \mathcal{N}(B^*) = (\mathcal{R}(B))_\perp \quad \text{and hence} \quad \overline{\mathcal{R}(B)} = c_0.$
There, the subscript $\perp$ denotes the preannihilator for a set $M$, in our situation with $M = \mathcal{R}(B) \subseteq c_0 \subseteq \ell^\infty$ and defined as
$M_\perp := \{x \in \ell^1:\ \langle \eta, x \rangle = 0 \ \text{for all} \ \eta \in M\}.$
Let $z \in Z$ and recall $Z \subseteq Z^{**} = Y^*$ (cf. Figure 2). Then for each $x \in \ell^1$
$\langle A^* z, x \rangle = \langle z, A x \rangle = \langle B z, x \rangle,$
i.e., $A^* z = B z$. Thus $c_0 = \overline{\mathcal{R}(B)} \subseteq \overline{\mathcal{R}(A^*)}$. At this point we emphasize that in both Banach spaces $c_0$ and $\ell^\infty$ the same supremum norm applies. ∎
We will show in Proposition 5.4 that conversely $c_0 \subseteq \overline{\mathcal{R}(A^*)}$ implies injectivity for weak*-to-weak* continuous operators. Before doing so we need the following proposition, which coincides in principle with [11, Proposition 9].
Proposition 5.3.
Let be injective and weaktoweak continuous. Moreover, let and . Then for each there exists such that
Proof.
We prove the proposition by induction with respect to $n$. For $n = 1$ we set
By Proposition 5.2 we have that . Hence we find elements and with
Consequently, and as well as for . Thus we find a convex combination of and such that . This obviously also satisfies for , which proves the proposition for .
Now let the proposition be true for . We prove it for . Let and set
By the induction hypothesis we find and with
and
Consequently, we have and as well as for . Thus we find a convex combination of and such that . This obviously also satisfies for and for , which proves the proposition for . ∎
Now we come to the main result of this section. The proof is similar and in part identical to the one of Proposition 12 in [11].
Proposition 5.4.
Let $A: \ell^1 \to Y$ be bounded, linear and weak*-to-weak* continuous. Then the following assertions are equivalent.
(i) $A$ is injective,
(ii) $c_0 \subseteq \overline{\mathcal{R}(A^*)}$,
(iii) $e^{(k)} \in \overline{\mathcal{R}(A^*)}$ for all $k \in \mathbb{N}$,
(iv) Property 4.2 holds.
Proof.
We show (i) $\Rightarrow$ (iv) $\Rightarrow$ (iii) $\Rightarrow$ (ii) $\Rightarrow$ (i).
(i)(iv): Fix , and take some as described in Property 4.2. By Proposition 5.3 with there exists some such that ( in the proposition) satisfies items (a) and (b) in Property 4.2. In particular we have such that and for all . Since it is
for coefficients , i.e., with as an upper bound for . By construction this also fulfills