Iterated fractional Tikhonov regularization
Abstract
Fractional Tikhonov regularization methods have recently been proposed to reduce the oversmoothing property of Tikhonov regularization in standard form, in order to preserve the details of the approximated solution. Their regularization and convergence properties have been previously investigated, showing that they are of optimal order. This paper provides saturation and converse results on their convergence rates. Using the same iterative refinement strategy as iterated Tikhonov regularization, new iterated fractional Tikhonov regularization methods are introduced. We show that these iterated methods are of optimal order and overcome the previous saturation results. Furthermore, nonstationary iterated fractional Tikhonov regularization methods are investigated, establishing their convergence rate under general conditions on the iteration parameters. Numerical results confirm the effectiveness of the proposed regularization iterations.
1 Introduction
We consider linear operator equations of the form
(1.1) 
where is a compact linear operator between Hilbert spaces and . We assume to be attainable, i.e., that problem (1.1) has a solution of minimal norm. Here denotes the (Moore-Penrose) generalized inverse operator of , which is unbounded when is compact, with infinite dimensional range. Hence problem (1.1) is ill-posed and has to be regularized in order to compute a numerical solution; see [4].
We want to approximate the solution of the equation (1.1), when only an approximation of is available with
(1.2) 
where is called the noise level. Since is not a good approximation of , we approximate with where is a family of continuous operators depending on a parameter that will be defined later. A classical example is the Tikhonov regularization defined by , where denotes the identity and the adjoint of , cf. [6].
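As a concrete illustration of the preceding paragraph, the following sketch applies standard Tikhonov regularization, in the familiar closed form where the regularized solution solves the shifted normal equations, to a small discrete ill-posed problem. The matrix, noise level, and all names below are illustrative stand-ins, not taken from this paper.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Standard-form Tikhonov solution (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# A severely ill-conditioned test matrix (the 8x8 Hilbert matrix),
# with exact data perturbed by a small noise of level about 1e-6.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
y_delta = A @ x_true + 1e-6 * rng.standard_normal(n)

x_alpha = tikhonov(A, y_delta, alpha=1e-6)
x_naive = np.linalg.lstsq(A, y_delta, rcond=None)[0]
# Even this tiny noise ruins the unregularized least-squares solution,
# while the regularized one stays close to x_true.
```

The comparison between `x_alpha` and `x_naive` is exactly the phenomenon the paragraph describes: the generalized inverse is unbounded, so the naive solution does not depend continuously on the data.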
Using the singular value expansion of , filter based regularization methods are defined in terms of filters of the singular values, cf. Proposition 3. This is a useful tool for the analysis of regularization techniques [10], both for direct and iterative regularization methods [8, 11]. Furthermore, new regularization methods can be defined by investigating new classes of filters. For instance, one of the contributions in [13] is the proposal and the analysis of the fractional Tikhonov method. The authors obtain a new class of filtering regularization methods by adding an exponent, depending on a parameter, to the filter of the standard Tikhonov method. They provide a detailed analysis of the filtering properties and the optimality order of the method in terms of this additional parameter. A different generalization of the Tikhonov method has been recently proposed in [12] with a detailed filtering analysis. Both generalizations are called “fractional Tikhonov regularization” in the literature and they are compared in [5], where the optimality order of the method in [12] is provided as well. To distinguish the two proposals in [13] and [12], we will refer to them in the following as “fractional Tikhonov regularization” and “weighted Tikhonov regularization”, respectively. These variants of the Tikhonov method have been introduced to compute good approximations of nonsmooth solutions, since it is well known that the Tikhonov method provides oversmoothed solutions.
In this paper, we first provide a saturation result similar to the well-known saturation result for Tikhonov regularization [4]: let be the range of and let be the orthogonal projector onto ; if
then , as long as is not closed. This result motivated us to introduce iterated versions of fractional and weighted Tikhonov in the same spirit as the iterated Tikhonov method. We prove that these iterated methods can overcome the previous saturation results. Afterwards, inspired by the works [1, 7], we introduce nonstationary variants of our iterated methods. Unlike nonstationary iterated Tikhonov, they involve two nonstationary sequences of parameters. In the noise-free case, we give sufficient conditions on these sequences that guarantee convergence, providing also the corresponding convergence rates. In the noisy case, we show the stability of the proposed iterative schemes, proving that they are regularization methods. Finally, a few selected examples confirm the preceding theoretical analysis, showing that a proper choice of the nonstationary sequences of parameters can provide better restorations than classical iterated Tikhonov with a geometric sequence of regularization parameters chosen according to [7].
The paper is organized as follows. Section 2 recalls the basic definitions of filter based regularization methods and of the optimal order of a regularization method. Fractional Tikhonov regularization with optimal order and converse results are studied in Section 3. Section 4 is devoted to saturation results for both variants of fractional Tikhonov regularization. New iterated fractional Tikhonov regularization methods are introduced in Section 5, where the analysis of their convergence rates shows that they are able to overcome the previous saturation results. A nonstationary iterated weighted Tikhonov regularization is investigated in detail in Section 6, while a similar nonstationary iterated fractional Tikhonov regularization is discussed in Section 7. Finally, some numerical examples are reported in Section 8.
2 Preliminaries
As described in the Introduction, we consider a compact linear operator between Hilbert spaces and (over the field or ) with given inner products and , respectively. Hereafter we will omit the subscript for the inner product as it will be clear from the context. If denotes the adjoint of (i.e., ), then we indicate with the singular value expansion (s.v.e.) of , where and are complete orthonormal systems of eigenvectors for and , respectively, and are written in decreasing order, with being the only accumulation point for the sequence . If is not finite dimensional, then , the spectrum of , namely . Finally, denotes the closure of , i.e., .
Let now be the spectral decomposition of the self-adjoint operator . Then, by well-known facts from functional analysis [16], we can write , where is a bounded Borel measurable function and is a regular complex Borel measure for every . The following equalities hold
(2.1)  
(2.2)  
(2.3)  
(2.4)  
(2.5) 
where the series (2.1) and (2.2) converge in the norms induced by the scalar products of and , respectively. If is a continuous function on then equality holds in (2.5).
Definition 1
We define the generalized inverse of a compact linear operator as
(2.6) 
where
With respect to problem (1.1), we consider the case where only an approximation of satisfying the condition (1.2) is available. Therefore , , cannot be approximated by , due to the unboundedness of , and hence in practice the problem (1.1) is approximated by a family of neighbouring well-posed problems [4].
Definition 2
By a regularization method for we mean any family of operators
with the following properties:

is a bounded operator for every .

For every there exists a mapping (parameter choice rule) , , such that
and
Throughout this paper, denotes a constant that can change from one instance to the next. For the sake of clarity, if more than one constant appears in the same line or equation, we distinguish them by means of subscripts.
Proposition 3
Let be a compact linear operator and its generalized inverse. Let be a family of operators defined for every as
(2.7) 
where is a Borel function such that
(2.8a)  
(2.8b)  
(2.8c) 
Then is a regularization method, with , and it is called a filter based regularization method.
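In matrix terms, Proposition 3 says that a filter based method is completely specified by its filter function acting on the singular values. The following sketch makes this concrete; the notation and test data are illustrative, not the paper's.

```python
import numpy as np

def filtered_solution(A, y, filt):
    """Regularized solution sum_n filt(s_n)/s_n <y, u_n> v_n via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = filt(s) * (U.T @ y) / s
    return Vt.T @ coeffs

# Standard Tikhonov corresponds to the filter s^2 / (s^2 + alpha);
# plugging it in reproduces the closed form (A^T A + alpha I)^{-1} A^T y.
alpha = 1e-4
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 4))
y = rng.standard_normal(5)
x_filt = filtered_solution(A, y, lambda s: s**2 / (s**2 + alpha))
x_closed = np.linalg.solve(A.T @ A + alpha * np.eye(4), A.T @ y)
assert np.allclose(x_filt, x_closed)
```

Any of the filters discussed later (weighted, fractional, iterated) can be passed as `filt`, which is why the whole analysis of the paper can be carried out at the level of filter functions.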
For the sake of notational brevity, we fix the following notation
(2.9)  
(2.10) 
We report hereafter the definition of optimal order, under the same apriori assumption given in [4].
Definition 4
For every given , let
A regularization method is called of optimal order under the apriori assumption if
(2.11) 
where for any general set , and for a regularization method , we define
If is not known, as is usually the case, then we relax the definition by introducing the set
and we say that a regularization method is of optimal order under the apriori assumption if
(2.12) 
Remark 5
Since we are concerned with the rate at which converges to zero as , the apriori assumption is usually sufficient for the optimal order analysis, requiring that (2.12) be satisfied.
Hereafter we cite a theorem which states sufficient conditions for order optimality when filtering methods are employed; see [14, Proposition 3.4.3, p. 58].
Theorem 6
[14] Let be a compact linear operator, and , and let be a filter based regularization method. If there exists a fixed such that
(2.13a)  
(2.13b) 
then is of optimal order, under the apriori assumption , with the choice rule
If we are concerned only with the rate of convergence with respect to , the preceding theorem can be applied under the apriori assumption , adapting the proof to the latter case without effort. In the opposite direction, we present below a converse result.
Theorem 7
Let be a compact linear operator with infinite dimensional range and let be a filter based regularization method with filter function . If there exist and such that
(2.14) 
and
(2.15) 
then .
3 Fractional variants of Tikhonov regularization
In this section we discuss two recent types of regularization methods that generalize the classical Tikhonov method and that were first introduced and studied in [12] and [13].
3.1 Weighted Tikhonov regularization
Definition 8 ([12])
We call Weighted Tikhonov method the filter based method
where the filter function is
(3.1) 
for and .
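As a numerical sketch of Definition 8: one common way to write the Hochstenbach-Reichel filter is s^(r+1) / (s^(r+1) + alpha), which reduces to the classical Tikhonov filter at r = 1. The exponent convention here is an assumption to be checked against (3.1), and all symbol names are illustrative.

```python
import numpy as np

# Assumed form of the weighted Tikhonov filter (check against (3.1)):
#   F(s) = s^(r+1) / (s^(r+1) + alpha),   alpha > 0, r > 0.
def weighted_filter(s, alpha, r):
    return s**(r + 1) / (s**(r + 1) + alpha)

s = np.logspace(-6, 0, 200)      # sampled spectrum of singular values
alpha = 1e-4

# r = 1 recovers the classical Tikhonov filter s^2 / (s^2 + alpha) ...
assert np.allclose(weighted_filter(s, alpha, 1.0), s**2 / (s**2 + alpha))

# ... while larger r sharpens the transition around the effective cutoff
# alpha^(1/(r+1)): components above it are kept nearly untouched, and
# components below it are damped more aggressively.
```

The sharper spectral transition for large r is consistent with the later observation (Section 4) that weighted Tikhonov can reach convergence rates beyond the classical Tikhonov saturation as r grows.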
Remark 9
The Weighted Tikhonov method can also be defined as the unique minimizer of the following functional,
(3.4) 
where the seminorm is induced by the operator . For , is to be understood as the Moore-Penrose (pseudo)inverse. Carrying out the computations, it follows that
(3.5) 
This motivated us to rename the original method of Hochstenbach and Reichel, which appeared in [12], as the weighted Tikhonov method, making it easier to distinguish from the fractional Tikhonov method introduced by Klann and Ramlau in [13].
The optimal order of the weighted Tikhonov regularization was proved in [5]. The following proposition restates that result, making explicit the dependence of on , and provides a converse result.
Proposition 10
Let be a compact linear operator with infinite dimensional range. For every given the weighted Tikhonov method, , is a regularization method of optimal order, under the apriori assumption , with . The best possible rate of convergence with respect to is , which is obtained for with . On the other hand, if then .
Proof. For weighted Tikhonov the left-hand side of condition (2.13a) becomes
By differentiation, if then it is straightforward to see that the quantity above is bounded by , with . Similarly, the left-hand side of condition (2.13b) takes the form
and it is easy to check that it is bounded by if and only if . From Theorem 6, as long as , with , if then we obtain order optimality (2.11), and the best possible rate of convergence with respect to is , for .
3.2 Fractional Tikhonov regularization
Here we introduce the fractional Tikhonov method defined and discussed in [13].
Definition 11 ([13])
We call Fractional Tikhonov method the filter based method
where the filter function is
(3.6) 
for and .
Note that is well-defined also for , but condition (2.8a) requires in order to guarantee that is a filter function.
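A companion sketch for Definition 11, assuming the Klann-Ramlau filter takes the familiar form (s^2 / (s^2 + alpha))^gamma; the lower bound gamma >= 1/2 is the one forced by condition (2.8a). The form of the filter and the names below are assumptions to be checked against (3.6).

```python
import numpy as np

# Assumed form of the fractional Tikhonov filter (check against (3.6)):
#   F(s) = (s^2 / (s^2 + alpha))^gamma,   gamma >= 1/2.
def fractional_filter(s, alpha, gamma):
    return (s**2 / (s**2 + alpha))**gamma

s = np.logspace(-6, 0, 200)
alpha = 1e-4
f_half = fractional_filter(s, alpha, 0.5)
f_one = fractional_filter(s, alpha, 1.0)   # classical Tikhonov filter

# The base lies in (0, 1), so a smaller exponent yields larger filter
# factors: for 1/2 <= gamma < 1 the method damps every spectral
# component less than standard Tikhonov does, i.e. it smooths less.
assert np.all(f_half >= f_one)
```

This elementwise comparison is the filter-level explanation of why the fractional variant was introduced to counteract the oversmoothing of standard Tikhonov.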
We use the notation for and as in equations (3.2) and (3.3), respectively. The optimal order of the fractional Tikhonov regularization was proved in [13, Proposition 3.2]. The following proposition restates that result, including also , and provides a converse result.
Proposition 12
The extended fractional Tikhonov filter method is a regularization method of optimal order, under the apriori assumption , for every and . The best possible rate of convergence with respect to is , which is obtained for with . On the other hand, if then .
Proof. Condition (2.8a) is verified for and the same holds for conditions (2.8b) and (2.8c). Differentiating the filter function, it is immediate to see that equation (2.13a) is verified for , with . It remains to check equation (2.13b):
where is monotone, for every , and . Namely for and for . Therefore we deduce that
(3.7)  
(3.8) 
from which we infer that
(3.9) 
since is standard Tikhonov, which is of optimal order, with and for every ; see [4]. Conversely, with and , and by equations (3.7) and (3.8), we deduce that
(3.10) 
Therefore, if then by Theorem 7.
4 Saturation results
The following proposition deals with a saturation result similar to a well-known result for classic Tikhonov, cf. [4, Proposition 5.3].
Proposition 13 (Saturation for weighted Tikhonov regularization)
Let be a compact linear operator with infinite dimensional range and be the corresponding family of weighted Tikhonov regularization operators in Definition 8. Let be any parameter choice rule. If
(4.1) 
then , where we denote by the orthogonal projector onto .
Proof. Define
Since, by assumption, does not have finite dimensional range, for every and . According to Remark 9, from equation (3.5) we have
and hence by (3.1)
From the choice of it follows that
(4.2) 
By (3.5),
(4.3) 
so that
(4.4) 
Since, by assumption, , it follows from (4.4) that if , then
(4.5) 
Now, by (4.1) and (4.5) applied to inequality (4) it follows that , which is a contradiction. Hence .
Note that for (classical Tikhonov) the previous proposition gives exactly Proposition 5.3 in [4]. On the other hand, taking a large , it is possible to overcome the saturation result of classical Tikhonov, obtaining a convergence rate arbitrarily close to .
A similar saturation result can be proved also for the fractional Tikhonov regularization in Definition 11.
Proposition 14 (Saturation for fractional Tikhonov regularization)
Let be a compact linear operator with infinite dimensional range and let be the corresponding family of fractional Tikhonov regularization operators in Definition 11, with fixed . Let be any parameter choice rule. If
(4.6) 
then , where we denote by the orthogonal projector onto .
Proof. If , the claim follows from the saturation result for standard Tikhonov [4, Proposition 5.3]. For , recalling that
by equations (3.7) and (3.8), we obtain
(4.7) 
where and is standard Tikhonov. Let us define
Then, by the continuity of , there exists such that, for every , we find
with being the closure of the ball of center and radius . Passing to the we obtain that
(4.8) 
Therefore, using relation (4.6), we deduce
(4.9) 
and the claim follows again from the saturation result for standard Tikhonov, cf. [4, Proposition 5.3].
Unlike the weighted Tikhonov regularization, the fractional Tikhonov method cannot overcome the saturation result of classical Tikhonov, even for a large .
5 Stationary iterated regularization
We define new iterated regularization methods based on weighted and fractional Tikhonov regularization, using the same iterative refinement strategy as iterated Tikhonov regularization [1, 4]. We will show that the iterated methods go beyond the saturation results proved in the previous section. In this section the regularization parameter will still be with the iteration step, , assumed to be fixed. On the contrary, in Section 6, we will analyze the nonstationary counterpart of this iterative method, in which will be replaced by a prescribed sequence and we will be concerned with the rate of convergence with respect to the index .
5.1 Iterated weighted Tikhonov regularization
We now propose an iterated regularization method based on weighted Tikhonov.
Definition 15 (Stationary iterated weighted Tikhonov)
We define the stationary iterated weighted Tikhonov method (SIWT) as
(5.1) 
with and , or equivalently
(5.2) 
where is the seminorm introduced in (3.4). We define as the th iteration of weighted Tikhonov if .
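The refinement mechanism in (5.1) is the same as for classical iterated Tikhonov. The sketch below implements that classical special case (the weighted variant would replace the Tikhonov step accordingly) and verifies the well-known closed-form filter 1 - (alpha/(s^2 + alpha))^m after m steps, which mirrors the structure of the SIWT filter in (5.3). All names and test data are illustrative.

```python
import numpy as np

def iterated_tikhonov(A, y, alpha, m):
    """m steps of iterated Tikhonov refinement, starting from x = 0."""
    n = A.shape[1]
    M = A.T @ A + alpha * np.eye(n)
    x = np.zeros(n)
    for _ in range(m):
        # solve the Tikhonov problem for the current residual, then correct
        x = x + np.linalg.solve(M, A.T @ (y - A @ x))
    return x

# Check the iteration against its spectral filter 1 - (alpha/(s^2+alpha))^m.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
y = rng.standard_normal(6)
alpha, m = 1e-2, 5
U, s, Vt = np.linalg.svd(A, full_matrices=False)
filt = 1.0 - (alpha / (s**2 + alpha))**m
x_filter = Vt.T @ (filt * (U.T @ y) / s)
assert np.allclose(iterated_tikhonov(A, y, alpha, m), x_filter)
```

Each additional refinement step raises the exponent m in the filter, which is the mechanism by which the iterated methods of this section go beyond the saturation results of Section 4.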
Proposition 16
For any given and , the SIWT in (5.1) is a filter based regularization method, with filter function
(5.3) 
Moreover, the method is of optimal order, under the apriori assumption , for and , with best convergence rate , which is obtained for , with . On the other hand, if , then .
Proof. Multiplying both sides of (5.1) by and iterating the process, we get
Therefore, the filter function in (2.7) is equal to
as we stated. Condition (2.8c) is straightforward to verify. Moreover, note that
from which it follows that
(5.4) 
Therefore, conditions (2.8a), (2.8b) and (2.13a) follow immediately from the regularity of the weighted Tikhonov filter method for and from its order optimality for . Finally, condition (2.13b) becomes
and differentiating one checks that it is bounded by , with , if and only if . Applying now Theorem 6, the rest of the claim follows.
On the contrary, if we define and , then we deduce that
Therefore, if