
# On ℓ1-regularization under continuity of the forward operator in weaker topologies

Daniel Gerth and Bernd Hofmann (Chemnitz University of Technology, Faculty of Mathematics, 09107 Chemnitz, Germany; {daniel.gerth,bernd.hofmann}@mathematik.tu-chemnitz.de)
###### Abstract

Our focus is on the stable approximate solution of linear operator equations based on noisy data by using ℓ¹-regularization as a sparsity-enforcing version of Tikhonov regularization. We summarize recent results on situations where the sparsity of the solution slightly fails. In particular, we show how the recently established theory for weak*-to-weak continuous linear forward operators can be extended to the case of weak*-to-weak* continuity. This might be of interest when the image space is non-reflexive. We discuss existence, stability and convergence of regularized solutions. For injective operators, we formulate convergence rates by exploiting variational source conditions. The typical rate function obtained for an ill-posed operator is strictly concave, and the degree of failure of the solution sparsity has an impact on its behaviour. Linear convergence rates occur just in the two borderline cases of proper sparsity, where the solution x† belongs to ℓ⁰, and of well-posedness. For an exemplary operator, we demonstrate that the technical properties used in our theory can be verified in practice. In the last section, we briefly mention the difficult case of oversmoothing regularization, where x† does not belong to ℓ¹.

## 1 Introduction

We are going to deal with the stable solution of linear operator equations

 Ax=y (1)

with a bounded linear operator A : ℓ¹ → Y, mapping from the non-reflexive infinite dimensional space ℓ¹ of absolutely summable infinite real or complex sequences to an infinite dimensional Banach space Y. Instead of the exact right-hand side y ∈ R(A) from the range of A, we assume to have only noisy data y^δ ∈ Y available which satisfy the deterministic noise model

 ∥y − y^δ∥_Y ≤ δ (2)

with prescribed noise level δ > 0. Our focus for solving equation (1) is on the method of ℓ¹-regularization, where for regularization parameters α > 0 the minimizers x_α^δ of the extremal problem

 (1/p)∥Ax − y^δ∥_Y^p + α∥x∥_{ℓ¹} → min, subject to x ∈ ℓ¹, (3)

are used as approximate solutions. This method is a sparsity-enforcing version of Tikhonov regularization, with applications in different branches of imaging, the natural sciences, engineering and mathematical finance. It was comprehensively analyzed with all its facets and varieties in the last fifteen years (cf., e.g., the corresponding chapters in the books [31, 32, 33] and the papers [2, 4, 8, 12, 20, 21, 22, 25, 28, 29]). We restrict our considerations to injective operators A such that the element x† ∈ ℓ¹ denotes the uniquely determined solution to (1). For assertions concerning the case of non-injective operators in the context of ℓ¹-regularization, we refer to [10]. In the non-injective case, even the ℓ¹-norm minimizing solutions need not be uniquely determined. As a consequence, very technical conditions must be introduced in order to formulate convergence assertions and rates. In our framework, Propositions 4.3 and 5.4 below would have to be adapted, which however is out of the scope of this paper.
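For orientation, a minimizer of problems of the form (3) with p = 2 can be approximated by iterative soft-thresholding (ISTA, a proximal gradient method). The following finite-dimensional Python sketch is purely illustrative and not part of the paper; the matrix, the sparse ground truth and all parameter values are hypothetical stand-ins:

```python
import numpy as np

def ista(A, y_delta, alpha, iters=5000):
    """Approximate the minimizer of (1/2)*||A x - y_delta||_2^2 + alpha*||x||_1,
    a finite-dimensional stand-in for problem (3) with p = 2, by iterative
    soft-thresholding (proximal gradient descent)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/||A||^2 ensures descent
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - y_delta)                      # gradient step on the misfit
        x = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)  # soft threshold
    return x

# hypothetical test problem with a sparse ground truth (not from the paper)
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -2.0]
y_delta = A @ x_true + 1e-3 * rng.standard_normal(30)   # noise level delta ~ 1e-3
x_reg = ista(A, y_delta, alpha=0.05)                    # sparsity-enforcing solution
```

Here α plays the role of the regularization parameter whose choice, in dependence on the noise level δ, is exactly the subject of the analysis below.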

With the paper [5] as starting point, and preferably based on variational source conditions first introduced in [23], convergence rates for ℓ¹-regularization of operator equations (1) and variants like the elastic net

 (1/p)∥Ax − y^δ∥_Y^p + α((1/2)∥x∥²_{ℓ²} + η∥x∥_{ℓ¹}) → min, subject to x ∈ ℓ¹ (4)

have been verified under the condition that the sparsity assumption slightly fails (cf. [6, 13, 14]). This means that the solution x† is not sparse, abbreviated as x† ∉ ℓ⁰. Most recently in [11], the first author and Jens Flemming have shown that complicated conditions on A, usually supposed for proving convergence rates in ℓ¹-regularization (cf. [5, Assumption 2.2 (c)] and condition (9) below), can be simplified to the requirement of weak*-to-weak continuity of the injective operator A. This seems to be convincing if Y is a reflexive Banach space. The present paper, however, makes assertions also in the case that A is only weak*-to-weak* continuous, which is of interest for non-reflexive Banach spaces Y. Moreover, we complement results from [11], for example with respect to the well-posed situation.

The paper is organized as follows. In Section 2 we recall basic properties of ℓ¹-regularization. We proceed in Section 3 by discussing the ill-posedness of equation (1). We mention that in particular variational source conditions allow us to deal with the ill-posedness and yield convergence rates. For our convergence analysis a particular property of the operator A is necessary. In Section 4 we show that weak*-to-weak continuity and injectivity imply this property. Interestingly, the same property holds under weak*-to-weak* continuity and injectivity, as shown in Section 5. There we also derive the convergence rates, which hold under both continuity assumptions. Finally, we demonstrate in Section 6 that even the case of a well-posed operator is reflected in our property. There we also hint at the case of oversmoothing regularization, which occurs when one employs ℓ¹-regularization although the true solution does not belong to ℓ¹.

## 2 Preliminaries and basic propositions

In this paper, we consider the variant (3) of ℓ¹-regularization with some exponent p > 1 and with a regularization parameter α > 0. Let y ∈ R(A). Then, due to the injectivity of A, there exists a uniquely determined solution x† ∈ ℓ¹ to (1). With the following Proposition 2.1 we recall the assertions of Proposition 2.8 in [5] with respect to existence, stability, convergence and sparsity of the ℓ¹-regularized solutions x_α^δ. The proof ibidem emphasizes the fact that most of these properties follow directly from the general theory of Tikhonov regularization in Banach spaces (cf., e.g., [23, Section 3] and [33, Section 4.1]). Since for p > 1 the Tikhonov functional to be minimized in (3) is strictly convex, the regularized solutions x_α^δ, whenever they exist, are uniquely determined for all α > 0.

###### Proposition 2.1.

Let A be weak*-to-weak continuous, i.e., x_n ⇀* x₀ in ℓ¹ implies Ax_n ⇀ Ax₀ in Y. Then for all α > 0 and all y^δ ∈ Y there exist uniquely determined minimizers x_α^δ of the Tikhonov functional from (3). These regularized solutions are sparse, i.e., x_α^δ ∈ ℓ⁰, and they are stable with respect to the data, i.e., small perturbations in y^δ in the norm topology of Y lead only to small changes in x_α^δ with respect to the weak*-topology in ℓ¹. If δ_n → 0 and if the regularization parameters α_n = α(δ_n, y^{δ_n}) are chosen such that α_n → 0 and δ_n^p/α_n → 0 as n → ∞, then x^{δ_n}_{α_n} converges in the weak*-topology of ℓ¹ to the uniquely determined solution x† of the operator equation (1). Moreover we have lim_{n→∞} ∥x^{δ_n}_{α_n}∥_{ℓ¹} = ∥x†∥_{ℓ¹}, which, as a consequence of the weak* Kadec–Klee property in ℓ¹ (see, e.g., [3, Lemma 2.2]), implies norm convergence

 lim_{n→∞} ∥x^{δ_n}_{α_n} − x†∥_{ℓ¹} = 0.

The weak*-to-weak continuity of A, in combination with the stabilizing property of the penalty functional ∥·∥_{ℓ¹} together with an appropriate choice of the regularization parameter, represent basic assumptions of Proposition 2.1. In contrast to regularization in reflexive Banach spaces, where the level sets of the norm functional are weakly compact, in ℓ¹ we have weak* compactness of the corresponding level sets according to the sequential Banach–Alaoglu theorem (cf., e.g., [30, Theorems 3.15 and 3.17]), which we present in form of the following lemma.

###### Lemma 2.2.

The closed unit ball of a Banach space X is compact in the weak*-topology if there is a separable Banach space Z (predual space) with dual Z* = X. Then any bounded sequence {x_n} in X has a weak*-convergent subsequence {x_{n_k}} such that x_{n_k} ⇀* x₀ ∈ X as k → ∞.

This kind of compactness of the level sets {x ∈ ℓ¹ : ∥x∥_{ℓ¹} ≤ c}, with X = ℓ¹ and predual space Z = c₀, ensures the existence of minimizers of the functional (3).

Throughout this paper, we use the terms ‘continuous’, ‘compact’ or ‘lower semicontinuous’ for an operator, a set or a functional always in the sense of ‘sequentially continuous’, ‘sequentially compact’ or ‘sequentially lower semicontinuous’. As Lemmas 6.3 and 6.5 from [9] show, there is no reason for a distinction in the case of weak topologies. From Lemma 2.7 and Proposition 2.4 in [5] one can take assertions concerning sufficient conditions for the weak*-to-weak continuity of A, which we summarize in Proposition 2.3 below. As also indicated in Proposition 2.1, for the choice of α = α(δ, y^δ), the so-called regularization property

 α(δ, y^δ) → 0 and δ^p/α(δ, y^δ) → 0 as δ → 0, (5)

where α tends to zero, but sufficiently slowly, plays an important role. In our studies, we consider on the one hand a priori parameter choices α = α(δ) defined as

 α(δ) := δ^p/φ(δ), 0 < δ ≤ δ̄, (6)

with concave index functions φ. We call φ an index function if φ : [0, ∞) → [0, ∞) with φ(0) = 0 is continuous and strictly increasing. Obviously, an a priori parameter choice from (6) with an arbitrary concave index function φ satisfies (5): on the one hand δ^p/α(δ) = φ(δ) → 0 as δ → 0 is valid for each index function, and on the other hand we have α(δ) = δ^{p−1} · δ/φ(δ) → 0 as δ → 0, because δ^{p−1} is an index function for all exponents p > 1 in (3) and the factor δ/φ(δ) is bounded whenever φ is concave.
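As a quick numerical illustration (not from the paper; the concrete choice φ(t) = √t is one hypothetical concave index function), one can check that the a priori choice (6) indeed satisfies both limit conditions in (5) for p = 2:

```python
# a priori parameter choice (6): alpha(delta) = delta**p / phi(delta),
# here with the hypothetical concave index function phi(t) = sqrt(t) and p = 2
p = 2.0
phi = lambda t: t ** 0.5
alpha = lambda d: d ** p / phi(d)

deltas = [10.0 ** (-k) for k in range(1, 8)]
alphas = [alpha(d) for d in deltas]            # must tend to 0 ...
ratios = [d ** p / alpha(d) for d in deltas]   # ... and so must delta^p/alpha = phi(delta)
```

Both sequences decrease monotonically to zero as δ does, in line with the regularization property (5).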

On the other hand, we consider the sequential discrepancy principle, comprehensively analyzed in [1] (see also [24]), as a specific a posteriori choice of the regularization parameter. For prescribed τ > 1, 0 < q < 1 and a sufficiently large value α₀ > 0, we let

 Δ_q := {α_j > 0 : α_j = q^j α₀, j = 1, 2, ...}.

Given δ > 0 and y^δ, we choose α = α(δ, y^δ) ∈ Δ_q according to the sequential discrepancy principle such that

 ∥Ax_α^δ − y^δ∥_Y ≤ τδ < ∥Ax_{α/q}^δ − y^δ∥_Y. (7)

Theorem 1 in [1] shows that there is some δ̄ > 0 such that α(δ, y^δ) is well-defined for 0 < δ ≤ δ̄ and satisfies (5) whenever data compatibility in the sense of [1, Assumption 3] takes place.
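The grid search behind (7) is easy to sketch. The following illustration is not from the paper: it uses a hypothetical diagonal operator, for which the minimizer of (3) with p = 2 is available in closed form by componentwise soft-thresholding, and walks down the grid Δ_q until the discrepancy drops below τδ:

```python
import numpy as np

def l1_tikhonov_diag(a, y_delta, alpha):
    """Closed-form minimizer of (1/2)*||A x - y||_2^2 + alpha*||x||_1 for the
    hypothetical diagonal operator A = diag(a): componentwise soft-thresholding."""
    z = a * y_delta
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0) / a ** 2

def sequential_discrepancy(a, y_delta, delta, tau=1.2, q=0.5, alpha0=10.0):
    """Walk down the geometric grid alpha_j = q**j * alpha0 and return the first
    alpha whose residual satisfies the left inequality in (7)."""
    alpha = alpha0
    while True:
        x = l1_tikhonov_diag(a, y_delta, alpha)
        if np.linalg.norm(a * x - y_delta) <= tau * delta:
            return alpha, x   # the previous grid point alpha/q still violated the bound
        alpha *= q

rng = np.random.default_rng(1)
a = np.linspace(1.0, 0.1, 20)                 # decaying "singular values" (assumption)
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 0.5]
delta = 1e-2
noise = rng.standard_normal(20)
noise *= delta / np.linalg.norm(noise)        # noise scaled so that ||y - y_delta|| = delta
y_delta = a * x_true + noise
alpha_star, x_reg = sequential_discrepancy(a, y_delta, delta)
```

The loop terminates because the residual tends to zero as α → 0, and the returned α satisfies the discrepancy bound by construction.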

Consequently, both regularization parameter choices α(δ) and α(δ, y^δ) are applicable for ℓ¹-regularization in order to get existence, stability and convergence of regularized solutions in the sense of Proposition 2.1. Now we are going to discuss conditions under which weak*-to-weak continuity of A can be obtained. The occurring cross connections are relevant in order to ensure existence, stability and convergence of regularized solutions, but they also have an essential impact on the convergence rates which will be discussed in Section 4.

###### Proposition 2.3.

Let A : ℓ¹ → Y with adjoint operator A* : Y* → ℓ∞ satisfy the condition

 R(A∗)⊆c0, (8)

where c₀ is the Banach space of real-valued sequences converging to zero, equipped with the supremum norm. Then A is weak*-to-weak continuous. In particular, (8) is fulfilled whenever there exist, for all k ∈ ℕ, source elements f^(k) ∈ Y* such that the system of source conditions

 e^(k) = A*f^(k) (9)

holds true, where e^(k) = (0, …, 0, 1, 0, …) denotes the k-th standard unit vector; the system {e^(k)}_{k∈ℕ} forms a Schauder basis in ℓ¹. Under the condition (9) we even have the equality

 \overline{R(A*)}^{ℓ∞} = c₀. (10)

The paper [2] shows that the condition (9), originally introduced by Grasmair in [19], can be verified for a wide class of applied linear inverse problems. But as the counterexamples in [12] indicate, it may fail if the underlying basis smoothness is insufficient. However, weak*-to-weak continuity of A can be reformulated in several ways, as the following proposition, proven in [10, Lemma 2.1], shows. This proposition brings more order into the system of conditions.

###### Proposition 2.4.

The three assertions

• Ae^(k) converges in Y weakly to zero, i.e., Ae^(k) ⇀ 0 in Y as k → ∞,

• R(A*) ⊆ c₀,

• A is weak*-to-weak continuous,

are equivalent.

As outlined in [5], the operator equation (1) with operator A is often motivated by a background operator equation Ãx̃ = y with an injective and bounded linear operator à mapping from an infinite dimensional Banach space X̃ with uniformly bounded Schauder basis {u^(k)}_{k∈ℕ}, i.e. sup_{k∈ℕ} ∥u^(k)∥_{X̃} < ∞, to the Banach space Y. Here, following the setting in [19], we take into account a synthesis operator L : ℓ¹ → X̃ defined as Lx := ∑_{k=1}^∞ x_k u^(k) for x = (x₁, x₂, …) ∈ ℓ¹, which is a well-defined, injective and bounded linear operator, and so is the composite operator A = Ã ∘ L. In particular, A is always weak*-to-weak continuous if u^(k) ⇀ 0 in X̃ as k → ∞, as this yields (i) in Proposition 2.4. Even more specifically, this is the case if X̃ is a Hilbert space and {u^(k)} an orthonormal basis. Since this case appears rather often in practice, the continuity property comes “for free” in this situation.

## 3 Ill-posedness and conditional stability

In this section, we discuss ill-posedness phenomena of the operator equation (1) based on Nashed’s definition from [27], which we formulate in the following as Definition 3.1 for the simplified case of an injective bounded linear operator. Moreover, we draw a connecting line to the phenomenon of conditional well-posedness, characterized by conditional stability estimates, which for appropriate choices of the regularization parameter yield convergence rates in Tikhonov-type regularization.

###### Definition 3.1.

The operator equation Ax = y with an injective bounded linear operator A : X → Y mapping between infinite dimensional Banach spaces X and Y is called well-posed if the range R(A) of A is a closed subset of Y; otherwise the equation is called ill-posed. In the ill-posed case, we call the equation ill-posed of type I if R(A) contains an infinite dimensional closed subspace, and otherwise ill-posed of type II.

The following proposition, taken from [13, Propositions 4.2 and 4.4], and the associated Figure 1 give some more insight into the different situations distinguished in Definition 3.1.

###### Proposition 3.2.

Consider the operator equation from Definition 3.1. If this equation is well-posed, i.e., R(A) = \overline{R(A)} and there is some constant C > 0 such that ∥x∥_X ≤ C∥Ax∥_Y for all x ∈ X, or if the equation is ill-posed of type I, then the operator A is non-compact. Consequently, compactness of A implies ill-posedness of type II. More precisely, for an ill-posed equation with injective A and infinite dimensional Banach spaces X and Y, ill-posedness of type II occurs if and only if A is strictly singular. This means that the restriction of A to an infinite dimensional subspace of X is never an isomorphism (linear homeomorphism). If X and Y are both Hilbert spaces and the equation is ill-posed, then ill-posedness of type II occurs if and only if A is compact.

Now we apply the case distinction of Definition 3.1, verified in detail in Proposition 3.2, to our situation of equation (1) with X = ℓ¹ and A : ℓ¹ → Y. We start with a general observation in Proposition 3.3, which motivates the use of ℓ¹-regularization for the stable approximate solution of (1), because the equation is mostly ill-posed. Below we illuminate the cross connections a bit more by discussing some example situations.

###### Proposition 3.3.

If Y is a reflexive Banach space, then the operator equation (1) is always ill-posed of type II.

###### Proof.

As a consequence of the theorem from [18] we have that every bounded linear operator A : ℓ¹ → Y is strictly singular if Y is a reflexive Banach space. Hence well-posedness and ill-posedness of type I cannot occur in this case. ∎

###### Example 3.4.

Consider that, as mentioned before in Section 2, we have a composition A = Ã ∘ L with forward operator Ã : X̃ → Y for reflexive X̃ and synthesis operator L. Then (1) is ill-posed of type II even if Ã is continuously invertible and hence the equation Ãx̃ = y well-posed. This may occur, for example, for Fredholm or Volterra integral equations of the second kind. Similarly, if Ã, as a mapping between Hilbert spaces, is non-compact with non-closed range, and hence Ãx̃ = y is ill-posed of type I (which occurs, e.g., for multiplication operators mapping in L²(0,1)), then (1) is still ill-posed of type II. In the frequent case that X̃ is a separable Hilbert space and {u^(k)} an orthonormal basis, A is compact whenever Ã is compact (occurring for example for Fredholm or Volterra integral equations of the first kind).

###### Example 3.5.

If Y = ℓ^r with 1 < r < ∞ and A : ℓ¹ → ℓ^r is the embedding operator, then solving equation (1) based on noisy data fulfilling (2) is a denoising problem (see also [13, Sect. 5] and [14, Example 6.1]). For 1 < r < ∞ the embedding operator is strictly singular with non-closed range, but non-compact. Due to Proposition 3.3 the equation is ill-posed of type II. Moreover, we have e^(k) ⇀ 0 in ℓ^r, which due to Proposition 2.4 implies that A is weak*-to-weak continuous and hence R(A*) ⊆ c₀. The latter is obvious, because the adjoint A* is the embedding operator from ℓ^{r′} to ℓ∞ with 1/r + 1/r′ = 1, and R(A*) = ℓ^{r′} ⊂ c₀. In particular, the source condition (9) applies with f^(k) = e^(k) for all k ∈ ℕ.

###### Example 3.6.

For Y = ℓ¹ in the previous example we have the continuously invertible identity operator A = I : ℓ¹ → ℓ¹ with closed range R(A) = ℓ¹. Then equation (1) is well-posed, but we have e^(k) ⇀̸ 0 in ℓ¹ as k → ∞, which due to Proposition 2.4 indicates that the range of the adjoint of A does not belong to c₀ and that in particular A is not weak*-to-weak continuous. This is evident, because the adjoint of A is the identity in ℓ∞ and ℓ∞ ⊄ c₀. We will come back to this example later.

For obtaining error estimates in ℓ¹-regularization on which convergence rates are based, we need some kind of conditional well-posedness in order to overcome the ill-posedness of equation (1). Well-posed varieties of equation (1) yield stability estimates ∥x − x†∥_{ℓ¹} ≤ C∥Ax − Ax†∥_Y for all x ∈ ℓ¹, which under (2) and for the choice α = α(δ, y^δ) imply the best possible rate

 ∥x_α^δ − x†∥_{ℓ¹} = O(δ) as δ → 0, (11)

which is typical for well-posed situations. We will come back to this in Section 6. We say that a conditional stability estimate holds true if there is a subset M ⊂ ℓ¹ such that

 ∥x − x†∥_{ℓ¹} ≤ K(M)∥Ax − Ax†∥_Y for all x ∈ M. (12)

Because x† is not known a priori, this kind of stability requires the additional use of regularization for bringing the approximate solutions to M, such that a rate (11) can be verified. This idea was first published in [7] by Cheng and Yamamoto. In the context of ℓ¹-regularization for our equation (1), we have estimates of the form (12) if the solution x† is sparse, i.e., only a finite number of non-zero components occur in the infinite sequence x†. Then M can be considered as a subset of ℓ⁰ with specific properties, and the sparsity of ℓ¹-regularized solutions verified in Proposition 2.1 ensures that the corresponding approximate solutions belong to M. This implies the rate (11) for sparse x†, although equation (1) is not well-posed.

A similar but different kind of conditional well-posedness estimate is given by variational source conditions, which in our setting attain the form

 β∥x − x†∥_{ℓ¹} ≤ ∥x∥_{ℓ¹} − ∥x†∥_{ℓ¹} + φ(∥Ax − Ax†∥_Y) for all x ∈ ℓ¹, (13)

satisfied for a constant 0 < β ≤ 1 and some concave index function φ. From [24, Theorems 1 and 2] we directly obtain the convergence rates results of the subsequent proposition.

###### Proposition 3.7.

If the variational source condition (13) holds true for a constant 0 < β ≤ 1 and some concave index function φ, then we have for ℓ¹-regularized solutions the convergence rate

 ∥x_α^δ − x†∥_{ℓ¹} = O(φ(δ)) as δ → 0 (14)

whenever the regularization parameter is chosen either a priori as α(δ) according to (6) or a posteriori as α(δ, y^δ) according to (7).

Consequently, for establishing convergence rates results in the next section, it remains to find constants β, concave index functions φ and sufficient conditions for the verification of corresponding variational inequalities (13).

## 4 Convergence rates for ℓ1-regularization

The first step to derive a variational source condition (13) at the solution point x† was taken by Lemma 5.1 in [5], where the inequality

 ∥x − x†∥_{ℓ¹} ≤ ∥x∥_{ℓ¹} − ∥x†∥_{ℓ¹} + 2(∑_{k=n+1}^∞ |x†_k| + ∑_{k=1}^n |x_k − x†_k|) (15)

was proven for all x ∈ ℓ¹ and all n ∈ ℕ. Then under the source condition (9), valid for all k ∈ ℕ, one directly finds

 ∑_{k=1}^n |x_k − x†_k| = ∑_{k=1}^n ⟨e^(k), x − x†⟩_{ℓ∞×ℓ¹} = ∑_{k=1}^n ⟨f^(k), A(x − x†)⟩_{Y*×Y} (16)

and hence from (15) that a function of type

 φ(t) = 2 inf_{n∈ℕ} (∑_{k=n+1}^∞ |x†_k| + γ_n t) (17)

with t > 0 and

 γ_n = ∑_{k=1}^n ∥f^(k)∥_{Y*} (18)

provides us with a variational inequality (13). Along the lines of the proof of [5, Theorem 5.2] one can show the assertion of the following lemma.

###### Lemma 4.1.

If {γ_n}_{n∈ℕ} is a non-decreasing sequence, then φ from (17) is a well-defined and concave index function for all x† ∈ ℓ¹.

Both the decay rate of the solution components x†_k → 0 as k → ∞ and the growth of γ_n as n → ∞ in (17) have an impact on the resulting rate function φ. A power-type decay of x†_k leads to Hölder convergence rates (see [5, Example 5.3] and [13, Example 3.4]), whereas an exponential decay of x†_k leads to near-to-O(δ) rates slowed down by a logarithmic factor (see [3, Example 3.5] and [13, Example 3.5]). In the case that x† is sparse, with x†_k = 0 for all k > n₀, the best possible rate (11) is seen. This becomes clear from formula (17), because then φ fulfills the inequality φ(t) ≤ 2γ_{n₀} t.
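This interplay can be made tangible with a few lines of code. The following sketch is not from the paper; the decay x†_k = k⁻² and the growth γ_n = n are hypothetical choices. It evaluates the rate function (17), exhibits the Hölder-type behaviour φ(t) ~ √t obtained by balancing the tail against γ_n t for this pairing, and confirms the linear bound in the sparse case:

```python
def rate_phi(t, xdag, gammas):
    """Evaluate phi(t) = 2 * inf_n ( sum_{k>n} |x_k| + gamma_n * t ) from (17),
    with the infimum taken over n = 1, ..., len(xdag)."""
    tail = sum(abs(v) for v in xdag)      # running tail sum_{k>n} |x_k|
    best = float("inf")
    for n, g in enumerate(gammas, start=1):
        tail -= abs(xdag[n - 1])
        best = min(best, tail + g * t)
    return 2.0 * best

N = 10_000
xdag = [k ** -2.0 for k in range(1, N + 1)]    # non-sparse solution, power-type decay
gammas = [float(n) for n in range(1, N + 1)]   # hypothetical growth gamma_n = n

# balancing tail_n ~ 1/n against n*t gives n ~ t**(-1/2), hence phi(t) ~ sqrt(t):
vals = [rate_phi(t, xdag, gammas) for t in (1e-2, 1e-4, 1e-6)]

# sparse case: the tail vanishes for n >= 3, so phi(t) = 2 * gamma_3 * t is linear
sparse = [1.0, 0.5, 0.25] + [0.0] * (N - 3)
lin = rate_phi(1e-6, sparse, gammas)
```

Each reduction of t by a factor 100 reduces φ(t) by roughly a factor 10, i.e., the Hölder exponent 1/2 shows up numerically.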

From Proposition 3.7 we have that for all concave index functions φ from (17) a convergence rate (14) for ℓ¹-regularization takes place under appropriate choices of the regularization parameter, whenever a constant β exists such that (13) is valid with φ from (17). When the condition (9) is valid, this is the case with γ_n from (18). Under the same condition the rate was slightly improved in [13] (see also [14]) by showing that γ_n from (18) can be replaced with

 γ_n = sup_{a_k ∈ {−1,0,1}, k=1,…,n} ∥∑_{k=1}^n a_k f^(k)∥_{Y*}. (19)

However, the condition (9) may fail, as was first noticed in [12] for a bidiagonal operator. Therefore, assumption (9) was replaced by a weaker (but not particularly eye-pleasing) one in [12]. Ibidem the authors assume, in principle, that for each n ∈ ℕ there are elements f^{(n,k)} ∈ Y*, k = 1, …, n, such that for all i ≤ n

 [A*f^{(n,k)}]_i = [e^(k)]_i

and

 |∑_{k=1}^n [A*f^{(n,k)}]_i| ≤ c for all i > n and some constant c < 1.

This means that each basis vector can be reproduced exactly up to an arbitrary position, but with a non-zero tail consisting of sufficiently small elements. Later, in [14], a more clearly formulated property was assumed which implies the one from [12]. We give a slightly reformulated version of this property in the following. In this context, we notice that P_n denotes the projection operator applied to sequences x such that P_n x = (x₁, …, x_n, 0, 0, …).

###### Property 4.2.

There is a real sequence {μ_n}_{n∈ℕ} such that for each n ∈ ℕ and each ξ ∈ ℓ∞, with

 ξ_k ∈ [−1, 1] if k ≤ n, ξ_k = 0 if k > n, (20)

there exists some η ∈ Y* satisfying

• [A*η]_k = ξ_k for all k ≤ n,

• |[A*η]_k| ≤ 1 for all k > n,

• ∥η∥_{Y*} ≤ μ_n.

It is important to note that it was a substantial breakthrough in the recent paper [11] to show that Property 4.2 follows directly from injectivity and weak*-to-weak continuity of the operator A. Namely, the following proposition was proven there. Note that we changed the definition of the ξ_k in (20) slightly. By checking the proofs in the original paper one sees that the amendments we made are not relevant.

###### Proposition 4.3.

Let A : ℓ¹ → Y be bounded, linear and weak*-to-weak continuous. Then the following assertions are equivalent.

• A is injective,

• e^(k) ∈ \overline{R(A*)}^{ℓ∞} for all k ∈ ℕ,

• \overline{R(A*)}^{ℓ∞} = c₀,

• Property 4.2 holds.

In other words, for such operators there exist appropriate sequences {γ_n} occurring in (17) such that a variational source condition (13) holds for an index function φ from (17) and a constant β (see Proposition 5.5 below). Item (b) in Property 4.2 is a generalization of (9). Namely, the canonical basis vectors do not necessarily belong to the range of A*, but to its closure. For the proof of Proposition 4.3 we refer to [11]. Most of the steps are identical or at least similar to the proof of Proposition 5.4, which we will give later.

## 5 Non-reflexive image spaces

If the injective bounded linear operator A : ℓ¹ → Y fails to be weak*-to-weak continuous, then the results of the preceding section do not apply. In case that Y is a non-reflexive Banach space, it makes sense to consider the weaker property of weak*-to-weak* continuity of A. An already mentioned example is the identity mapping A = I : ℓ¹ → ℓ¹ for Y = ℓ¹. In ℓ¹, weak convergence and norm convergence coincide (Schur property), but there is no coincidence with weak* convergence. Thus, the identity mapping cannot be weak*-to-weak continuous, but it is weak*-to-weak* continuous, as the following Proposition 5.1 shows. It is a modified extension of Proposition 2.4. Following [9, Lemma 6.5] we formulate this extension and repeat below the relevant proof details.

###### Proposition 5.1.

Let Z be a separable Banach space which acts as a predual space for the Banach space Y, i.e., Z* = Y. Then the following four assertions are equivalent.

• Ae^(k) converges in Y weakly* to zero as k → ∞,

• A*z ∈ c₀ for all z ∈ Z, i.e., R(A*|_Z) ⊆ c₀,

• A is weak*-to-weak* continuous,

• There is a bounded linear operator S : Z → c₀ such that S* = A.

###### Proof.

Let (i) be satisfied. Then for each z from Z we have

 [A*z]_k = ⟨A*z, e^(k)⟩_{ℓ∞×ℓ¹} = ⟨z, Ae^(k)⟩_{Y*×Y} = ⟨Ae^(k), z⟩_{Z*×Z} → 0

as k → ∞. This yields A*z ∈ c₀ and hence (ii) is valid.

Now let (ii) be true and take a weak*-convergent sequence x_n ⇀* x₀ in ℓ¹ as n → ∞. Then ⟨x_n, ζ⟩_{ℓ¹×c₀} → ⟨x₀, ζ⟩_{ℓ¹×c₀} for all ζ in c₀. Because moreover A*z belongs to c₀ for every z ∈ Z and ℓ¹ is the dual of c₀, we may write this as ⟨x_n, A*z⟩_{ℓ¹×c₀} → ⟨x₀, A*z⟩_{ℓ¹×c₀}. Thus,

 lim_{n→∞} ⟨Ax_n, z⟩_{Z*×Z} = lim_{n→∞} ⟨x_n, A*z⟩_{ℓ¹×c₀} = ⟨x₀, A*z⟩_{ℓ¹×c₀} = ⟨Ax₀, z⟩_{Z*×Z} for all z ∈ Z,

which proves condition (iii). From (iii) and the fact that e^(k) ⇀* 0 in ℓ¹ as k → ∞ we immediately obtain (i). Finally, the equivalence between (iii) and (iv) can be found, e.g., in [26, Theorem 3.1.11]. ∎

As a consequence of item (iv) in Proposition 5.1, each weak*-to-weak* continuous linear operator A is automatically bounded. Figure 2 illustrates the connection between the different spaces and operators we juggle with in this section.

For the identity mapping A = I : ℓ¹ → ℓ¹ with Y = ℓ¹ and predual Z = c₀, property (i) of Proposition 5.1 is trivially satisfied, which yields the weak*-to-weak* continuity of this operator. Note that the case A = I, Y = ℓ¹, is only of theoretical interest. Precisely, it is a tool for exploring the frontiers of the theoretical framework we have chosen for investigating ℓ¹-regularization. For practical applications it is irrelevant, because one easily verifies that with the choice p = 1 in (3), where we have ∥x − y^δ∥_{ℓ¹} + α∥x∥_{ℓ¹} → min, the ℓ¹-regularized solutions coincide with the data y^δ if α < 1, and we have the best possible rate (11).
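This claim about the identity operator can be checked componentwise, since for A = I and p = 1 the Tikhonov functional decouples into terms of the form |x_k − y_k^δ| + α|x_k|. The following brute-force sketch (hypothetical numbers, not from the paper) confirms that the minimizer sits at the data component for α < 1 and jumps to 0 once α > 1:

```python
# componentwise objective for A = I and p = 1 in (3): |x - y| + alpha*|x|
def objective(x, y, alpha):
    return abs(x - y) + alpha * abs(x)

y = 0.7                                          # a single (hypothetical) data component
grid = [k / 1000.0 - 2.0 for k in range(4001)]   # brute-force grid on [-2, 2]

best_small = min(grid, key=lambda x: objective(x, y, 0.9))  # alpha < 1: expect x = y
best_large = min(grid, key=lambda x: objective(x, y, 1.5))  # alpha > 1: expect x = 0
```

The reason is elementary: for 0 ≤ x ≤ y the objective equals y − (1 − α)x, which is decreasing in x precisely when α < 1.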

Main parts of the above mentioned Proposition 2.1 on existence, stability and convergence of ℓ¹-regularized solutions remain true if the operator A is only weak*-to-weak* continuous. The sparsity property x_α^δ ∈ ℓ⁰, however, will fail in general (consider the example of the identity as mentioned above). Existence, stability and convergence assertions remain valid, because their proofs basically rely on the fact that the mapping x ↦ (1/p)∥Ax − y^δ∥_Y^p + α∥x∥_{ℓ¹} is a weakly* lower semicontinuous functional. This is the case in both variants, with weak or weak* topology on Y, since the norm functional is weakly and also weakly* lower semicontinuous. For the existence of regularized solutions (minimizers of the Tikhonov functional (3)), again the Banach–Alaoglu theorem (Lemma 2.2) is required and yields weakly* compact level sets of the ℓ¹-norm functional.

Our goal is to prove an analogue of Proposition 4.3 for weak*-to-weak* continuous operators. We start with a first observation.

###### Proposition 5.2.

Let A be injective and weak*-to-weak* continuous, and let Y = Z* for some separable Banach space Z. Then

 \overline{R(A*|_Z)}^{ℓ∞} = c₀.
###### Proof.

From item (iv) of Proposition 5.1 we take the operator S : Z → c₀ with S* = A. As A is injective, i.e., N(A) = {0}, it follows that

 \overline{R(S)}^{c₀} = N(S*)_⊥ = N(A)_⊥ = c₀.

Here, the subscript ⊥ denotes the pre-annihilator of a set V, in our situation with V ⊆ ℓ¹ = (c₀)* and defined as

 V_⊥ := {x ∈ c₀ : ⟨ζ, x⟩_{ℓ¹×c₀} = 0 ∀ζ ∈ V}.

Let η ∈ Z ⊆ Z** = Y* and recall Y = Z* (cf. Figure 2). Then for each x ∈ ℓ¹

 ⟨A*η, x⟩_{ℓ∞×ℓ¹} = ⟨η, Ax⟩_{Y*×Y} = ⟨η, Ax⟩_{Z×Y} = ⟨Sη, x⟩_{c₀×ℓ¹} = ⟨Sη, x⟩_{ℓ∞×ℓ¹},

i.e., A*η = Sη. Thus R(A*|_Z) = R(S). At this point we emphasize that in both Banach spaces c₀ and ℓ∞ the same supremum norm applies. ∎

We will show in Proposition 5.4 that, conversely, \overline{R(A*|_Z)}^{ℓ∞} = c₀ implies injectivity for weak*-to-weak* continuous operators. Before doing so we need the following proposition, which coincides in principle with [11, Proposition 9].

###### Proposition 5.3.

Let A be injective and weak*-to-weak* continuous. Moreover, let ξ ∈ c₀ and n ∈ ℕ. Then for each ε > 0 there exists ξ̃ ∈ R(A*|_Z) such that

 ξ̃_k = ξ_k for k ≤ n and |ξ̃_k − ξ_k| ≤ ε for k > n.
###### Proof.

We prove the proposition by induction with respect to n. For n = 1 set

 ξ⁺ := (ξ₁ + ε, ξ₂, ξ₃, …) and ξ⁻ := (ξ₁ − ε, ξ₂, ξ₃, …).

By Proposition 5.2 we have that ξ⁺, ξ⁻ ∈ \overline{R(A*|_Z)}^{ℓ∞}. Hence we find elements ξ̃⁺ and ξ̃⁻ in R(A*|_Z) with

 ∥ξ̃⁺ − ξ⁺∥_{ℓ∞} ≤ ε and ∥ξ̃⁻ − ξ⁻∥_{ℓ∞} ≤ ε.

Consequently, ξ̃⁺₁ ≥ ξ₁ and ξ̃⁻₁ ≤ ξ₁, as well as |ξ̃⁺_k − ξ_k| ≤ ε and |ξ̃⁻_k − ξ_k| ≤ ε for k > 1. Thus we find a convex combination ξ̃ of ξ̃⁺ and ξ̃⁻ such that ξ̃₁ = ξ₁. This ξ̃ obviously also satisfies |ξ̃_k − ξ_k| ≤ ε for k > 1, which proves the proposition for n = 1.

Now let the proposition be true for n = m. We prove it for n = m + 1. Let ξ ∈ c₀ and set

 ξ⁺ := (ξ₁, …, ξ_m, ξ_{m+1} + ε, ξ_{m+2}, ξ_{m+3}, …), ξ⁻ := (ξ₁, …, ξ_m, ξ_{m+1} − ε, ξ_{m+2}, ξ_{m+3}, …).

By the induction hypothesis we find ξ̃⁺ and ξ̃⁻ in R(A*|_Z) with

 ξ̃⁺_k = ξ_k = ξ̃⁻_k for k ≤ m

and

 |ξ̃⁺_k − ξ⁺_k| ≤ ε and |ξ̃⁻_k − ξ⁻_k| ≤ ε for k > m.

Consequently, we have ξ̃⁺_{m+1} ≥ ξ_{m+1} and ξ̃⁻_{m+1} ≤ ξ_{m+1}, as well as |ξ̃⁺_k − ξ_k| ≤ ε and |ξ̃⁻_k − ξ_k| ≤ ε for k > m + 1. Thus we find a convex combination ξ̃ of ξ̃⁺ and ξ̃⁻ such that ξ̃_{m+1} = ξ_{m+1}. This ξ̃ obviously also satisfies ξ̃_k = ξ_k for k ≤ m and |ξ̃_k − ξ_k| ≤ ε for k > m + 1, which proves the proposition for n = m + 1. ∎
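The convex-combination step used twice in this proof is elementary and can be sanity-checked numerically; the following sketch uses hypothetical vectors and is not from the paper:

```python
# given two approximants whose entries at a given index bracket the target value,
# pick the convex combination that matches the target entry exactly
def bracket_combination(xi_plus, xi_minus, target, index):
    hi, lo = xi_plus[index], xi_minus[index]          # hi >= target >= lo assumed
    lam = 0.5 if hi == lo else (target - lo) / (hi - lo)   # lam lies in [0, 1]
    return [lam * p + (1.0 - lam) * m for p, m in zip(xi_plus, xi_minus)]

xi = [0.3, -0.1, 0.05]            # hypothetical target sequence (first entries)
eps = 0.02
xi_plus = [0.315, -0.09, 0.06]    # within eps of (xi_1 + eps, xi_2, xi_3)
xi_minus = [0.29, -0.11, 0.04]    # within eps of (xi_1 - eps, xi_2, xi_3)
combined = bracket_combination(xi_plus, xi_minus, xi[0], 0)
```

The combined sequence reproduces the first entry exactly while the remaining entries stay within the ε-tube, exactly as in the induction step above.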

Now we come to the main result of this section. The proof is similar and in part identical to the one of Proposition 12 in [11].

###### Proposition 5.4.

Let A : ℓ¹ → Y = Z* be bounded, linear and weak*-to-weak* continuous. Then the following assertions are equivalent.

• A is injective,

• \overline{R(A*|_Z)}^{ℓ∞} = c₀,

• e^(k) ∈ \overline{R(A*|_Z)}^{ℓ∞} for all k ∈ ℕ,

• Property 4.2 holds.

###### Proof.

We show (i) ⇒ (iv) ⇒ (iii) ⇒ (ii) ⇒ (i).

(i) ⇒ (iv): Fix n ∈ ℕ and take some ξ as described in Property 4.2. By Proposition 5.3, applied to each e^(k), k = 1, …, n, there exist elements ẽ^(k) ∈ R(A*|_Z) (the ~ξ in the proposition) satisfying items (a) and (b) in Property 4.2. In particular we have η_k ∈ Z such that ẽ^(k) = A*η_k and [A*η_k]_j = [e^(k)]_j for all j ≤ n. Since P_n ξ = ξ it is

 ξ̃ = ∑_{k=1}^n c_k ẽ^(k) = ∑_{k=1}^n c_k A*η_k = A*(∑_{k=1}^n c_k η_k),

for coefficients c_k ∈ [−1, 1], i.e., ξ̃ = A*η with η := ∑_{k=1}^n c_k η_k and with μ_n := ∑_{k=1}^n ∥η_k∥_{Y*} as an upper bound for ∥η∥_{Y*}. By construction this η also fulfills

 |[(I − P_n)A*η]_i| ≤ ∑_{k=1}^n |[(I − P_n)A*η_k]_i| ≤ μ_n.

(iv) ⇒ (iii): Fix k ∈ ℕ, fix ε > 0, take a sequence {n_m} in ℕ with n_m → ∞ and choose ξ = e^(k) in Property 4.2. Then for a corresponding sequence {η_m} from Property 4.2 we obtain

 ∥e(k)−A∗ηm∥ℓ∞≤∥e(k)−