Iterated fractional Tikhonov regularization
Fractional Tikhonov regularization methods have recently been proposed to reduce the oversmoothing property of Tikhonov regularization in standard form, in order to preserve the details of the approximated solution. Their regularization and convergence properties have been previously investigated, showing that they are of optimal order. This paper provides saturation and converse results on their convergence rates. Using the same iterative refinement strategy as iterated Tikhonov regularization, new iterated fractional Tikhonov regularization methods are introduced. We show that these iterated methods are of optimal order and overcome the previous saturation results. Furthermore, nonstationary iterated fractional Tikhonov regularization methods are investigated, establishing their convergence rate under general conditions on the iteration parameters. Numerical results confirm the effectiveness of the proposed regularization iterations.
We consider linear operator equations of the form
where is a compact linear operator between Hilbert spaces and . We assume to be attainable, i.e., that problem (1.1) has a solution of minimal norm. Here denotes the (Moore-Penrose) generalized inverse operator of , which is unbounded when is compact, with infinite dimensional range. Hence problem (1.1) is ill-posed and has to be regularized in order to compute a numerical solution; see .
We want to approximate the solution of the equation (1.1), when only an approximation of is available with
where is called the noise level. Since is not a good approximation of , we approximate with where is a family of continuous operators depending on a parameter that will be defined later. A classical example is the Tikhonov regularization defined by , where denotes the identity and the adjoint of , cf. .
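Since the displayed formulas in this section were lost in extraction, the following minimal sketch writes classical Tikhonov regularization in its standard normal-equations form; the test matrix, the exact solution, and the parameter values are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Minimal sketch of classical Tikhonov regularization,
# x_alpha = (K^T K + alpha I)^{-1} K^T y,
# on a small discretized problem.  The operator K below is an
# illustrative ill-conditioned example (a Hilbert-type matrix).
def tikhonov(K, y, alpha):
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

n = 8
K = np.array([[1.0 / (i + j + 1.0) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
y = K @ x_true                   # exact, attainable data

x_alpha = tikhonov(K, y, 1e-10)  # small alpha: residual nearly zero
```

Decreasing alpha drives the residual toward zero for attainable data, at the price of greater sensitivity to data noise; this trade-off is what the filter analysis of the next sections quantifies.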
Using the singular value expansion of , filter based regularization methods are defined in terms of filters of the singular values, cf. Proposition 3. This is a useful tool for the analysis of regularization techniques , both for direct and iterative regularization methods [8, 11]. Furthermore, new regularization methods can be defined by investigating new classes of filters. For instance, one of the contributions in  is the proposal and analysis of the fractional Tikhonov method. The authors obtain a new class of filtering regularization methods by adding an exponent, depending on a parameter, to the filter of the standard Tikhonov method. They provide a detailed analysis of the filtering properties and the optimality order of the method in terms of this additional parameter. A different generalization of the Tikhonov method has recently been proposed in  with a detailed filtering analysis. Both generalizations are called “fractional Tikhonov regularization” in the literature and they are compared in , where the optimality order of the method in  is provided as well. To distinguish the two proposals in  and , we will refer to them in the following as “fractional Tikhonov regularization” and “weighted Tikhonov regularization”, respectively. These variants of the Tikhonov method have been introduced to compute good approximations of non-smooth solutions, since it is well known that the Tikhonov method provides over-smoothed solutions.
In this paper, we first provide a saturation result similar to the well-known saturation result for Tikhonov regularization : let be the range of and let be the orthogonal projector onto , if
then , as long as is not closed. This result motivated us to introduce the iterated versions of fractional and weighted Tikhonov, in the same spirit as the iterated Tikhonov method. We prove that these iterated methods can overcome the previous saturation results. Afterwards, inspired by the works [1, 7], we introduce the nonstationary variants of our iterated methods. Unlike nonstationary iterated Tikhonov, we have two nonstationary sequences of parameters. In the noise-free case, we give sufficient conditions on these sequences to guarantee convergence, providing also the corresponding convergence rates. In the noisy case, we show the stability of the proposed iterative schemes, proving that they are regularization methods. Finally, a few selected examples confirm the previous theoretical analysis, showing that a proper choice of the nonstationary sequences of parameters can provide better restorations compared to classical iterated Tikhonov with a geometric sequence of regularization parameters according to .
The paper is organized as follows. Section 2 recalls the basic definition of filter based regularization methods and of the optimal order of a regularization method. Fractional Tikhonov regularization, with optimal order and converse results, is studied in Section 3. Section 4 is devoted to saturation results for both variants of fractional Tikhonov regularization. New iterated fractional Tikhonov regularization methods are introduced in Section 5, where the analysis of their convergence rates shows that they are able to overcome the previous saturation results. A nonstationary iterated weighted Tikhonov regularization is investigated in detail in Section 6, while a similar nonstationary iterated fractional Tikhonov regularization is discussed in Section 7. Finally, some numerical examples are reported in Section 8.
As described in the Introduction, we consider a compact linear operator between Hilbert spaces and (over the field or ) with given inner products and , respectively. Hereafter we will omit the subscript for the inner product, as it will be clear from the context. If denotes the adjoint of (i.e., ), then we denote by the singular value expansion (s.v.e.) of , where and are complete orthonormal systems of eigenvectors for and , respectively, and are written in decreasing order, with being the only accumulation point of the sequence . If is not finite dimensional, then , the spectrum of , namely . Finally, denotes the closure of , i.e., .
Let now be the spectral decomposition of the self-adjoint operator . Then, by well-known facts from functional analysis , we can write , where is a bounded Borel measurable function and is a regular complex Borel measure for every . The following equalities hold
We define the generalized inverse of a compact linear operator as
With respect to problem (1.1), we consider the case where only an approximation of satisfying the condition (1.2) is available. Therefore , , cannot be approximated by , due to the unboundedness of , and hence in practice the problem (1.1) is approximated by a family of neighbouring well-posed problems .
By a regularization method for we mean any family of operators
with the following properties:
is a bounded operator for every .
For every there exists a mapping (parameter choice rule) , , such that
Throughout this paper is a constant which can change from one instance to the next. For the sake of clarity, if more than one constant appears in the same line or equation, we will distinguish them by means of subscripts.
Let be a compact linear operator and its generalized inverse. Let be a family of operators defined for every as
where is a Borel function such that
Then is a regularization method, with , and it is called filter based regularization method.
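As a concrete illustration of filter based regularization, the sketch below builds a regularized solution directly from the singular value decomposition and checks it against the normal-equations form of classical Tikhonov; the test matrix is an arbitrary illustrative choice.

```python
import numpy as np

# Sketch of a filter based method: with the SVD K = U diag(s) V^T,
# the regularized solution is  x = sum_i f_alpha(s_i) <y, u_i> v_i.
# The filter f_alpha(s) = s / (s^2 + alpha) reproduces classical
# Tikhonov regularization.
def filter_method(K, y, f):
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return Vt.T @ (f(s) * (U.T @ y))

def tikhonov_filter(alpha):
    return lambda s: s / (s**2 + alpha)

m = 6
K = np.vander(np.linspace(0.2, 1.0, m), m)  # ill-conditioned example
y = K @ np.ones(m)
alpha = 1e-6

x_svd = filter_method(K, y, tikhonov_filter(alpha))
x_direct = np.linalg.solve(K.T @ K + alpha * np.eye(m), K.T @ y)
```

Other choices of the filter function in `filter_method` yield the fractional and weighted variants discussed in Section 3.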
For the sake of notational brevity, we fix the following notation
We report hereafter the definition of optimal order, under the same a-priori assumption given in .
For every given , let
A regularization method is called of optimal order under the a-priori assumption if
where for any general set , and for a regularization method , we define
If is not known, as will usually be the case, then we relax the definition by introducing the set
and saying that a regularization method is called of optimal order under the a-priori assumption if
Since we are concerned with the rate at which converges to zero as , the a-priori assumption is usually sufficient for the optimal order analysis, requiring that (2.12) be satisfied.
Hereafter we cite a theorem which states sufficient conditions for order optimality when filtering methods are employed; see [14, Proposition 3.4.3, p. 58].
 Let be a compact linear operator, and , and let be a filter based regularization method. If there exists a fixed such that
then is of optimal order, under the a-priori assumption , with the choice rule
If we are concerned only with the rate of convergence with respect to , the preceding theorem can be applied under the a-priori assumption , adapting the proof to the latter case without any effort. On the contrary, below we present a converse result.
Let be a compact linear operator with infinite dimensional range and let be a filter based regularization method with filter function . If there exist and such that
3 Fractional variants of Tikhonov regularization
3.1 Weighted Tikhonov regularization
Definition 8 ()
We call Weighted Tikhonov method the filter based method
where the filter function is
for and .
The Weighted Tikhonov method can also be defined as the unique minimizer of the following functional,
where the semi-norm is induced by the operator . For , is to be understood as the Moore-Penrose (pseudo) inverse. Carrying out the computations, it follows that
This is what motivated us to rename the original method of Hochstenbach and Reichel, which appeared in , as the weighted Tikhonov method. In this way it is easier to distinguish it from the fractional Tikhonov method introduced by Klann and Ramlau in .
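The displayed filter formulas above did not survive extraction; the sketch below therefore uses the weighted Tikhonov filter in the form commonly reported in the literature, f(s) = s^r / (s^(r+1) + alpha), which for r = 1 reduces to the classical Tikhonov filter.

```python
import numpy as np

# Weighted Tikhonov filter as recalled from the literature
# (the paper's own display is missing):
#   f_{alpha,r}(s) = s^r / (s^(r+1) + alpha),  alpha > 0, r > 0.
# For r = 1 this is the classical Tikhonov filter s / (s^2 + alpha).
def weighted_filter(s, alpha, r):
    return s**r / (s**(r + 1) + alpha)

s = np.linspace(1e-4, 1.0, 500)   # illustrative singular value range
alpha = 1e-2

classical = s / (s**2 + alpha)               # the r = 1 special case
factor_r3 = s * weighted_filter(s, alpha, 3)  # filter factors in [0, 1]
```

The filter factor s * f(s) = s^(r+1) / (s^(r+1) + alpha) always lies between 0 and 1, consistent with the boundedness conditions on filter functions recalled in Section 2.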
The optimal order of the weighted Tikhonov regularization was proved in . The following proposition restates that result, highlighting the dependence on of , and provides a converse result.
Let be a compact linear operator with infinite dimensional range. For every given the weighted Tikhonov method, , is a regularization method of optimal order, under the a-priori assumption , with . The best possible rate of convergence with respect to is , that is obtained for with . On the other hand, if then .
Proof. For weighted Tikhonov the left-hand side of condition (2.13a) becomes
By differentiation, if then it is straightforward to see that the quantity above is bounded by , with . Similarly, the left-hand side of condition (2.13b) takes the form
and it is easy to check that it is bounded by if and only if . From Theorem 6, as long as , with , if then we find order optimality (2.11) and the best possible rate of convergence obtainable with respect to is , for .
On the contrary, with and , we deduce that
Therefore, if then by Theorem 7.
3.2 Fractional Tikhonov regularization
Here we introduce the fractional Tikhonov method defined and discussed in .
Definition 11 ()
We call Fractional Tikhonov method the filter based method
where the filter function is
for and .
Note that is well-defined also for , but the condition (2.8a) requires to guarantee that is a filter function.
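For concreteness, the following sketch writes the fractional Tikhonov filter in the form usually attributed to Klann and Ramlau, f(s) = s^(2*gamma - 1) / (s^2 + alpha)^gamma; this formula is recalled from the literature, since the display above is missing. In line with the remark on condition (2.8a), the filter factor s * f(s) stays in [0, 1] when gamma >= 1/2.

```python
import numpy as np

# Fractional Tikhonov filter as recalled from the literature
# (the paper's own display is missing):
#   f_{alpha,gamma}(s) = s^(2*gamma - 1) / (s^2 + alpha)^gamma.
# gamma = 1 gives the classical Tikhonov filter.
def fractional_filter(s, alpha, gamma):
    return s**(2 * gamma - 1) / (s**2 + alpha)**gamma

s = np.linspace(1e-4, 1.0, 500)   # illustrative singular value range
alpha = 1e-2

classical = s / (s**2 + alpha)                         # gamma = 1
factor_half = s * fractional_filter(s, alpha, 0.5)     # boundary case
```

At the boundary value gamma = 1/2 the filter factor becomes (s^2 / (s^2 + alpha))^(1/2), still bounded by 1; for gamma < 1/2 this uniform bound is lost, matching the restriction noted above.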
We use the notation for and as in equations (3.2) and (3.3), respectively. The optimal order of the fractional Tikhonov regularization was proved in [13, Proposition 3.2]. The following proposition restates that result, including also , and provides a converse result.
The extended fractional Tikhonov filter method is a regularization method of optimal order, under the a-priori assumption , for every and . The best possible rate of convergence with respect to is , that is obtained for with . On the other hand, if then .
Proof. Condition (2.8a) is verified for , and the same holds for conditions (2.8b) and (2.8c). Differentiating the filter function, it is immediate to see that equation (2.13a) is verified for , with . It remains to check equation (2.13b):
where is monotone, for every , and . Namely for and for . Therefore we deduce that
from which we infer that
Therefore, if then by Theorem 7.
4 Saturation results
The following proposition deals with a saturation result similar to a well-known result for classical Tikhonov, cf. [4, Proposition 5.3].
Proposition 13 (Saturation for weighted Tikhonov regularization)
Let be a compact linear operator with infinite dimensional range and be the corresponding family of weighted Tikhonov regularization operators in Definition 8. Let be any parameter choice rule. If
then , where we denote by the orthogonal projector onto .
and hence by (3.1)
From the choice of it follows that
Since, by assumption, , it follows from (4.4) that if , then
Note that for (classical Tikhonov) the previous proposition gives exactly Proposition 5.3 in . On the other hand, taking a large , it is possible to overcome the saturation result of classical Tikhonov, obtaining a convergence rate arbitrarily close to .
A similar saturation result can be proved also for the fractional Tikhonov regularization in Definition 11.
Proposition 14 (Saturation for fractional Tikhonov regularization)
Let be a compact linear operator with infinite dimensional range and let be the corresponding family of fractional Tikhonov regularization operators in Definition 11, with fixed . Let be any parameter choice rule. If
then , where we denote by the orthogonal projector onto .
Proof. If , the claim follows from the saturation result for standard Tikhonov [4, Proposition 5.3]. For , recall that
where and is standard Tikhonov. Let us define
Then, by the continuity of , there exists such that, for every , we find
with being the closure of the ball of center and radius . Passing to the we obtain that
Therefore, using relation (4.6), we deduce
and the claim follows again from the saturation result for standard Tikhonov, cf. [4, Proposition 5.3].
Unlike the weighted Tikhonov regularization, the fractional Tikhonov method cannot overcome the saturation result of classical Tikhonov, even for a large .
5 Stationary iterated regularization
We define new iterated regularization methods based on weighted and fractional Tikhonov regularization, using the same iterative refinement strategy as iterated Tikhonov regularization [1, 4]. We will show that the iterated methods go beyond the saturation results proved in the previous section. In this section the regularization parameter will still be , with the iteration step, , assumed to be fixed. On the contrary, in Section 6, we will analyze the nonstationary counterpart of this iterative method, in which will be replaced by a pre-fixed sequence, and we will be concerned with the rate of convergence with respect to the index .
5.1 Iterated weighted Tikhonov regularization
We now propose an iterated regularization method based on weighted Tikhonov.
Definition 15 (Stationary iterated weighted Tikhonov)
We define the stationary iterated weighted Tikhonov method (SIWT) as
with and , or equivalently
where is the semi-norm introduced in (3.4). We define as the -th iteration of weighted Tikhonov if .
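A small sketch of the SIWT iteration follows, written in SVD coordinates; the one-step filter g(s) = s^r / (s^(r+1) + alpha) is recalled from the literature rather than from the (missing) displays above, and r = 1 reduces the scheme to classical iterated Tikhonov.

```python
import numpy as np

# Stationary iterated weighted Tikhonov (SIWT) in SVD coordinates:
# each sweep applies the one-step weighted Tikhonov filter
#   g(s) = s^r / (s^(r+1) + alpha)
# to the current residual (formula recalled from the literature).
def siwt(K, y, alpha, r, n_iter):
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    g = s**r / (s**(r + 1) + alpha)   # one-step filter
    c = U.T @ y                        # spectral coefficients of the data
    xc = np.zeros_like(c)              # spectral coefficients of the iterate
    for _ in range(n_iter):
        xc = xc + g * (c - s * xc)     # iterative refinement step
    return Vt.T @ xc

m = 6
K = np.vander(np.linspace(0.2, 1.0, m), m)  # illustrative test operator
y = K @ np.ones(m)
alpha, r, n = 1e-2, 2.0, 5

x_n = siwt(K, y, alpha, r, n)
```

After n sweeps the accumulated filter factor is 1 - (alpha / (s^(r+1) + alpha))^n, which increases monotonically toward 1 with n; this is the mechanism by which iteration lifts the saturation limit discussed in Section 4.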
For any given and , the SIWT in (5.1) is a filter based regularization method, with filter function
Moreover, the method is of optimal order, under the a-priori assumption , for and , with best convergence rate , that is obtained for , with . On the other hand, if , then .
Proof. Multiplying both sides of (5.1) by and iterating the process, we get
Therefore, the filter function in (2.7) is equal to
as we stated. Condition (2.8c) is straightforward to verify. Moreover, note that
from which it follows that
and by differentiation one checks that it is bounded by , with , if and only if . Applying now Theorem 6, the rest of the claim follows.
On the contrary, if we define and , then we deduce that