The Projected GSURE for Automatic Parameter Tuning in Iterative Shrinkage Methods
Abstract
Linear inverse problems are very common in signal and image processing. Many algorithms that aim at solving such problems include unknown parameters that need tuning. In this work we focus on optimally selecting such parameters in iterative shrinkage methods for image deblurring and image zooming. Our work uses the projected Generalized Stein Unbiased Risk Estimator (GSURE) for determining the threshold value and the number of iterations in these algorithms. The proposed parameter selection is shown to handle any degradation operator, including ill-posed and even rectangular ones. This is achieved by using GSURE on the projected expected error. We further propose an efficient greedy parameter-setting scheme that tunes the parameter while iterating, without impairing the resulting deblurring performance. Finally, we provide extensive comparisons to conventional methods for parameter selection, showing the superiority of the use of the projected GSURE.
Keywords: Iterated Shrinkage, Stein Unbiased Risk Estimator, Separable Surrogate Function, Inverse problems.

1 Introduction
In many applications in signal and image processing there is a need for solving a linear inverse problem [1, 2]. Here we consider the scenario in which an original image $x$ is degraded by a linear operator $H$ followed by an additive white Gaussian noise $w$. Our observation $y$ can be written as
(1)  $y = Hx + w,$
where the goal is to reconstruct the original image $x$ from $y$. In our setting, there is no Bayesian prior on $x$; it is therefore treated as a deterministic unknown.
There are many techniques in the literature for this task that focus on different objectives [1-11]. Most of these methods include parameters that require tuning. Some of the algorithms are iterative, and thus the number of iterations needs to be set as well. Tuning such parameters is generally not a trivial task. Very often, the objective in these problems is recovery of the signal with a minimal Mean-Squared Error (MSE) between the original image and the estimated one [12]. In this case, ideally, the parameters should be chosen such that this MSE is minimized. However, since the original image is deterministic, the MSE depends in general on the unknown $x$. Consequently, we cannot know which choice of parameters minimizes the true MSE. In practice, therefore, a common approach to parameter tuning is still manual, by looking at the reconstructed result and judging its quality subjectively.
The literature offers several automatic ways for choosing the parameters of the reconstruction algorithm [1]. One such method is the Unbiased Predictive Risk Estimator (UPRE) [13], which is valid only when the reconstruction operator is linear. Other popular tools are the generalized cross validation (GCV) technique with its many variants [14], and the L-curve method [15], both available for general reconstruction algorithms. The number of iterations can be chosen using the discrepancy principle [16], such that the norm of the difference between the degraded image and the re-degraded reconstructed image is close to the noise energy.
An alternative technique for parameter tuning is the Stein Unbiased Risk Estimator (SURE) [17, 18]. SURE provides an unbiased estimate of the MSE for a candidate solver, including nonlinear ones. Minimizing this estimated MSE leads to the desired automatic tuning. The original SURE by Stein is limited to the case of denoising under additive white Gaussian noise. One of the best-known algorithms that uses SURE is Donoho's SureShrink denoising algorithm [19]. There are also other contributions along these lines, as in [20, 21, 22].
Recently, a generalization of SURE (GSURE) [23] has been developed for more general models that can be described in the form of an exponential family distribution. This provides a new opportunity for choosing the parameters automatically in a much more diverse set of inverse problems. In the case where $H$ is rank-deficient, or even rectangular (with more columns than rows), the inverse problem becomes ill-posed. In such a scenario, GSURE can at best estimate the MSE projected onto the range of $H^T$. We refer hereafter to a tuning of the algorithm's parameters with this estimation as the projected GSURE.
The reconstruction algorithms used in our work belong to the iterated shrinkage family of methods [7, 8, 24, 25]. In these algorithms, the image is assumed to have a sparse representation over a dictionary. Reconstruction is obtained by an iterative process composed of applying the dictionary and the degradation operator with their adjoints, in addition to a pointwise shrinkage operation. In these algorithms the threshold value $\lambda$ used in the shrinkage needs to be tuned, along with the number of iterations $k$ to be performed.
The work reported in [26] develops a special case of GSURE for the case where the blur operator is assumed to be full-rank. That work also addresses automatic tuning of the parameters $\lambda$ and $k$ in an iterated shrinkage algorithm, used for the deblurring problem. For a pre-chosen number of iterations, the GSURE is used to select a fixed value of $\lambda$ that optimizes the overall estimated MSE. Similarly, for a given $\lambda$, the GSURE is used to determine the number of iterations. We refer to this method of setting $\lambda$ and $k$ as the global method.
Similar to the work in [26], this paper proposes to tune $\lambda$ and $k$ for iterated shrinkage algorithms, which are geared towards solving inverse problems in image processing. Our work extends the applicability of GSURE to ill-posed inverse problems, using the projected GSURE. We show that in turning from the plain GSURE to its projected version, the performance of the parameter tuning improves substantially.
In addition, we propose an alternative to the global method mentioned above, where the value of $\lambda$ is chosen by minimizing the estimated MSE at each iteration, thereby obtaining a different value of $\lambda$ per iteration. We refer to this technique as the greedy method. The main benefit of such a strategy is the natural way in which the number of iterations is set simultaneously: the iterations can be stopped when the estimated MSE improvement falls under a certain threshold. In order to further improve the overall estimated MSE in this greedy approach, $\lambda$ in each iteration can be chosen with a look-ahead on the estimated MSE in the next few iterations.
This paper is organized as follows: In Section 2 we present the deblurring and image scaling (zooming) problems, and the iterative shrinkage algorithms used for solving them. Section 3 describes several conventional techniques for parameter setting (GCV, L-curve, and the discrepancy principle). Section 4 presents the SURE with its generalization, as developed in [23]. In Section 5 we use GSURE for the task of parameter selection in the iterative shrinkage algorithms using the two approaches: global and greedy. The results of these methods are presented in Section 6. We conclude in Section 7 with a summary of this work and a discussion on future research directions.
2 Iterative Shrinkage Methods
To define a linear inverse problem, we need to specify the model of the original image, the noise characteristics, and the degradation operator. We consider two types of operators $H$. The first is a blur operator, and the second is a scale-down operator that defines a scale-up problem. In the blur case, $H$ is a square matrix that blurs each pixel by replacing it with a weighted average of its neighbors. In the scale-down scenario, $H$ is composed of two operators applied sequentially: the first blurs the image and the second down-samples it. In both settings we assume that the operators are known, and the additive noise is white Gaussian with known variance $\sigma^2$. Our goal in both settings is to reconstruct the original image without assuming any Bayesian prior.
In order to evaluate the quality of the reconstruction, we measure the MSE between the reconstructed image $\hat{x}$ and the original one,
(2)  $\mathrm{MSE} = E\|\hat{x} - x\|_2^2.$
This criterion is very commonly used in image processing despite its known shortcomings [27]. Two related measures that are also common are the Peak Signal to Noise Ratio (PSNR) and the Improvement in Signal to Noise Ratio (ISNR), defined as $\mathrm{PSNR} = 10\log_{10}(x_{\max}^2/\mathrm{MSE})$ and $\mathrm{ISNR} = 10\log_{10}(\mathrm{MSE}_0/\mathrm{MSE})$, where $x_{\max}$ is the peak intensity and $\mathrm{MSE}_0$ is a reference MSE to compare against. Both are inversely proportional to the MSE; thus, all of these criteria favor low MSE.
Since the goodness of our fit is measured by the MSE, a natural approach would be to choose an estimate that directly minimizes the MSE. Unfortunately, however, the MSE depends in general on $x$ and therefore cannot be minimized directly. This difficulty is explained in more detail in [12]. To avoid direct MSE minimization, in the context of imaging it is often assumed that $x$ has a sparse representation $\alpha$ under a given dictionary $D$, i.e., $x = D\alpha$ with few non-zeros in $\alpha$ [28]. This knowledge is then used together with a standard least-squares criterion to find an estimate $\hat{x}$. More specifically, a popular approach to recover $x$ is to first seek the value of $\alpha$ that minimizes an objective of the form:
(3)  $f(\alpha) = \frac{1}{2}\|HD\alpha - y\|_2^2 + \lambda\rho(\alpha).$
The first term requires that the reconstructed image, after degradation, should be close to the measured image $y$. The second term forces sparsity using the penalty function $\rho$, weighted by $\lambda$. The minimizer $\hat{\alpha}$ of this function is the representation of the desired image, and its multiplication by $D$ gives the reconstructed image. Although intuitive, this strategy does not in general minimize the MSE.
The minimization of $f$ in (3) is typically performed using an iterative procedure. An appealing choice is the iterative shrinkage methods [7, 8]. These algorithms are iterative, applying the matrices $H$ and $D$ and their adjoints, together with an element-wise thresholding operation, in each iteration. Consequently, these methods are well suited for high-dimensional problems with sparse unknowns. This family of techniques includes the Separable Surrogate Functionals (SSF) method [7], the Parallel Coordinate Descent (PCD) method [29, 24], and some others. In this work we focus on the SSF and PCD techniques. The SSF algorithm proposes the iteration formula
(4)  $\alpha_{i+1} = \mathcal{S}_{\lambda/c}\left(\frac{1}{c}(HD)^T(y - HD\alpha_i) + \alpha_i\right),$
which is proven to converge [7] to a local minimum of (3). In the case where $\rho$ is convex, this is also the global minimum. The term $\mathcal{S}_{\lambda/c}$ is a shrinkage operator, dependent on $\rho$, with threshold $\lambda/c$. The constant $c$ depends on the combined operator $HD$; it should be chosen larger than the squared spectral norm of $HD$. The resulting estimator is
(5)  $\hat{x} = D\alpha_k,$
where $k$ is the number of SSF iterations performed. When our task is minimizing the objective in (3), $k$ is chosen as the point where the rate of decrease of the objective falls under a certain threshold. When aiming at minimization of the MSE, both $\lambda$ and $k$ need to be tuned carefully: in many cases, from a certain iteration on, the objective value continues to descend while the MSE value increases.
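To make the recursion concrete, the following is a minimal NumPy sketch of the SSF iteration (4)-(5), assuming a soft-thresholding shrinkage (the operator induced by an $\ell_1$ penalty; the smoothed penalty introduced below yields a similar, smoother operator). The dense matrices and all names here are illustrative stand-ins; in practice $H$ and $D$ are applied as fast operators.

```python
import numpy as np

def soft_threshold(z, t):
    # Element-wise soft shrinkage, the operator induced by an l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ssf(y, H, D, lam, k, c=None):
    """SSF iteration (4)-(5).  H (degradation) and D (dictionary) are
    dense matrices here for clarity; real implementations apply them as
    fast convolution / undecimated-wavelet operators."""
    A = H @ D                                 # the combined operator HD
    if c is None:
        c = 1.01 * np.linalg.norm(A, 2) ** 2  # c must exceed ||HD||_2^2
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        alpha = soft_threshold(alpha + A.T @ (y - A @ alpha) / c, lam / c)
    return D @ alpha                          # x_hat = D alpha_k, eq. (5)
```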
In [29, 24] an alternative iterated shrinkage approach is presented, which leads to faster convergence. In this technique, termed PCD, the iteration formula is given by
(6)  $\alpha_{i+1} = \alpha_i + \mu(v_i - \alpha_i),$
where
(7)  $v_i = \mathcal{S}_{\lambda W^{-1}}\left(W^{-1}(HD)^T(y - HD\alpha_i) + \alpha_i\right).$
The diagonal matrix $W$ contains the squared norms of the columns of $HD$, which are computed off-line. Once $v_i$ is computed, a line search is performed along the trajectory $\alpha_i + \mu(v_i - \alpha_i)$, using the parameter $\mu$. This method is also proven to converge [24]. Our estimator in this case is computed by (5), as in the SSF.
In order to use the SSF and the PCD methods for the inverse problems mentioned, the penalty function $\rho$ and the dictionary $D$ need to be chosen, as do $\lambda$ and the number of iterations. Following Elad et al. [24], we use the function
(8)  $\rho(\alpha) = \sum_j \left(|\alpha_j| - s\log\left(1 + \frac{|\alpha_j|}{s}\right)\right).$
This is a smoothed version of the $\ell_1$ norm, with the smoothing confined to the range $[-s, s]$. The parameter $s$ thus defines how closely the $\ell_1$ norm is approximated. For small values of $s$ this function serves as a smoothed version of the $\ell_1$ norm, and we use it as the penalty function throughout this work. This choice of $\rho$ dictates the shape of the shrinkage operator $\mathcal{S}$ [24].
There are various possibilities for choosing $D$. In our simulations, and following the work reported in [9, 25, 8], we use the undecimated Haar wavelet with three resolution layers. The motivation for this choice is that when looking for sparse representations, redundant dictionaries are preferable.
The parameter $\lambda$, which determines the weight of the penalty function, also needs setting, as does the number of iterations $k$. Ideally, we would like to choose the parameters that minimize the MSE between the original image and its estimate. However, as we have seen already, the dependence of the MSE on $x$ precludes its direct minimization. We now turn to present several methods that aim to bypass this difficulty.
3 Conventional Parameter Tuning Methods
The main approaches the literature offers for automatic parameter setting are the GCV [14], the L-curve [15] and the discrepancy principle [16]. For linear deconvolution methods, the parameter selection can also be based on the Unbiased Predictive Risk Estimator (UPRE) [13]; however, the algorithms we treat here are nonlinear.
In this section we accompany the description of these methods with an image deblurring experiment along the following lines: working on the image Cameraman, we use the blur kernel
(9)  $h(i,j) = \frac{1}{1 + i^2 + j^2}, \qquad i,j = -7,\dots,7,$
and an additive white Gaussian noise with variance $\sigma^2$. This experiment and its chosen parameters are adopted from Figueiredo et al. [9].
3.1 The Generalized Cross Validation (GCV) Method
For a reconstruction algorithm $\hat{x}(y)$ and a degraded image $y$, we define the residual of the algorithm to be
(10)  $r(y) = y - H\hat{x}(y).$
In the case where $\hat{x}(y)$ is linear, i.e., $\hat{x}(y) = Ay$ for some matrix $A$, the GCV method selects the parameters to be those that minimize the GCV functional given by
(11)  $\mathrm{GCV} = \frac{\frac{1}{m}\|r(y)\|^2}{\left(\frac{1}{m}\,\mathrm{trace}(I - HA)\right)^2},$
where $I$ is the identity matrix, and $m$ is the length of $y$. In high dimensions, the trace in the denominator of (11) is costly to compute directly. Stochastic estimation of this expression can be used instead [30]. Specifically, noting that for any matrix $B$ we have $\mathrm{trace}(B) = E\{n^TBn\}$, where $n$ is a random vector with zero mean and identity covariance, we can approximate the trace by one such instance of the random vector. More than one instance of the random variable can be used for better estimation of the trace by averaging, but in this work we use just one such realization of $n$. Replacing the trace in (11) yields
(12)  $\widehat{\mathrm{GCV}} = \frac{\frac{1}{m}\|r(y)\|^2}{\left(\frac{1}{m}\left(n^Tn - n^TH\frac{\partial\hat{x}(y)}{\partial y}n\right)\right)^2}.$
Some extensions of the GCV to nonlinear cases were treated in [31, 32, 33, 1], but as far as we know the literature does not offer extensions of the GCV to nonlinear deconvolution algorithms like the ones we address in this work. We suggest using (12) as the GCV objective to be minimized for the parameter selection, despite the nonlinearity of $\hat{x}(y)$.
Figure 1 presents an experiment using GCV. For a fixed number of iterations $k$, the GCV as a function of $\lambda$ is presented in this figure. The value of $\lambda$ is chosen to be the minimizer of the GCV function. This minimal point can be found by a line search, but such a procedure may get trapped in a local minimum of the GCV curve, since the obtained curve is not necessarily unimodal. Here and in later figures, the value chosen based on the true MSE is marked by an asterisk. In this case, as can be seen, the GCV performs well.
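The following sketch illustrates how (12) can be evaluated for a black-box, possibly nonlinear, reconstruction routine. The randomized trace term is the one described above; for simplicity the Jacobian-vector product is approximated here by a finite difference, whereas Section 5 propagates this derivative analytically. All names and signatures are illustrative.

```python
import numpy as np

def gcv_value(y, H, reconstruct, eps=1e-3, rng=np.random.default_rng(0)):
    """Randomized evaluation of the GCV objective (12) for a possibly
    nonlinear reconstruction routine `reconstruct(y)`.  The trace in the
    denominator is replaced by n^T (I - H dx/dy) n with a single random
    probe n; the Jacobian-vector product is approximated by a finite
    difference."""
    m = y.size
    x_hat = reconstruct(y)
    n = rng.standard_normal(m)
    jvp = H @ (reconstruct(y + eps * n) - x_hat) / eps   # ~ H (dx/dy) n
    denom = (n @ n - n @ jvp) / m
    num = np.sum((y - H @ x_hat) ** 2) / m
    return num / denom ** 2
```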
3.2 The L-curve Method
For setting a parameter with the L-curve method, the log of the squared norm of the result of the algorithm is plotted against the log of the squared norm of its residual over a range of parameter values. The result is typically a curve with an L shape. The L-curve criterion is to choose the parameter value that corresponds to the "corner" of the curve, the point with maximum curvature [15]. For a parameter of interest $\lambda$, the curve is defined by the x-axis $\bar{\rho}(\lambda) = \log\|r(y)\|^2$ and the y-axis $\bar{\eta}(\lambda) = \log\|\hat{x}\|^2$. The curvature is then given by
(13)  $\kappa(\lambda) = \frac{\bar{\rho}'\bar{\eta}'' - \bar{\rho}''\bar{\eta}'}{\left((\bar{\rho}')^2 + (\bar{\eta}')^2\right)^{3/2}}.$
The value of $\lambda$ is selected to maximize (13). Since the L-curve can only be sampled at specific discrete points, the calculation of the second-order derivatives is very sensitive and thus should be avoided. Following Hansen and O'Leary [15], we apply a local smoothing on the L-curve points and use the new smoothed points as control points for a cubic spline. We then calculate the derivatives of the spline at those points and get the curvature at each of them. The smoothing is done by fitting five points, with the point to be smoothed at their center, to a straight line in the least-squares sense.
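A possible implementation of this smoothing-plus-spline curvature computation, under the assumptions just described (five-point least-squares smoothing, cubic-spline derivatives), is sketched below; the parameter grid and the sampled norms are its inputs, and all names are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def lcurve_corner(lams, res_norms, sol_norms):
    """Select the parameter at the point of maximum curvature (13) of the
    L-curve.  Inputs: a parameter grid `lams` and the sampled squared
    residual norms ||y - H x_hat||^2 and solution norms ||x_hat||^2."""
    x, y = np.log(res_norms), np.log(sol_norms)

    def smooth(v):
        # Fit each point and its 4 neighbors to a line (least squares)
        # and keep the value of that line at the center point; the two
        # points at each boundary are left untouched.
        s = v.copy()
        for i in range(2, len(v) - 2):
            t = np.arange(i - 2, i + 3)
            slope, intercept = np.polyfit(t, v[i - 2:i + 3], 1)
            s[i] = slope * i + intercept
        return s

    t = np.arange(len(lams), dtype=float)
    cx, cy = CubicSpline(t, smooth(x)), CubicSpline(t, smooth(y))
    x1, x2 = cx(t, 1), cx(t, 2)   # spline first/second derivatives
    y1, y2 = cy(t, 1), cy(t, 2)
    kappa = (x1 * y2 - x2 * y1) / (x1 ** 2 + y1 ** 2) ** 1.5
    return lams[np.argmax(kappa)]
```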
As in the case of the GCV, to the best of our knowledge the L-curve has not been applied before to nonlinear deconvolution methods. Limitations of this approach for linear deconvolution are presented in [1, 34]. In addition, the curvature is very sensitive, and thus the selection can easily be diverted from the appropriate value. Beyond this shortcoming, each calculation of the curvature at a point needs the values of the L-curve at its neighbors; therefore, it is hard to tune the parameters in an efficient way.
Figure 2 presents the experiment results using the L-curve method. This figure shows both the L-curve (top) and its curvature (bottom) for a fixed $k$. The location of the chosen parameter is marked by a diamond. As can be seen, the value chosen as the curvature maximizer is, in this case, very close to the ideal value.
3.3 The Discrepancy Principle
A very intuitive parameter tuning idea is to use the discrepancy principle. Based on the noise properties, the original image approximately satisfies $\|y - Hx\|^2 \approx m\sigma^2$. The discrepancy principle assumes that the same should hold for the reconstructed image, and thus selects the parameters based on the assumption that
(14)  $\|y - H\hat{x}\|^2 = m\sigma^2.$
The iteration number, $k$, is chosen to be the first iteration in which $\|y - H\hat{x}\|^2 \le m\sigma^2$.
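A minimal sketch of this stopping rule is given below, with an abstract per-iteration update `iterate` serving as a hypothetical stand-in for one SSF or PCD step.

```python
import numpy as np

def discrepancy_stop(y, H, iterate, sigma2, k_max=200):
    """Run an iterative scheme and stop at the first iteration where the
    residual power reaches the noise level, ||y - H x_hat||^2 <= m sigma^2
    (eq. (14)).  `iterate(x_hat)` is a stand-in for one SSF/PCD step."""
    m = y.size
    x_hat = np.zeros(H.shape[1])
    for k in range(1, k_max + 1):
        x_hat = iterate(x_hat)
        if np.sum((y - H @ x_hat) ** 2) <= m * sigma2:
            return x_hat, k
    return x_hat, k_max
```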
Experimenting with the discrepancy principle, Fig. 3 presents the value of $\left|\|y - H\hat{x}\|^2 - m\sigma^2\right|$ as a function of $\lambda$, where $k$ is fixed. The discrepancy principle selects the parameter value that achieves the minimal value on the graph. As can be seen, although this method is very appealing, in many instances like this one it tends to select a value of $\lambda$ far from the optimal one.
In this section we have described three parameter tuning methods: the GCV, the L-curve and the discrepancy principle. The discrepancy principle, which is commonly used as a stopping criterion, tends to fail in many cases, as seen in this section. As for the GCV and the L-curve methods, to the best of our knowledge they have not been applied before to nonlinear deconvolution algorithms of the kind we use here. While their deployment is reasonable, their performance tends to be overly sensitive, possibly leading to poor results, as will be shown in the results section. That these three methods give good parameter tuning only in part of the cases may be explained by the fact that they do not aim at minimizing the MSE, but use different considerations for the tuning. This leads us naturally to consider ways to select the parameters directly based on an estimated value of the MSE.
4 GSURE and Projected GSURE
In this section we discuss an unbiased estimator for the MSE obtained by any reconstruction algorithm. This MSE estimate will later be used for parameter selection. The Stein unbiased risk estimator (SURE) was proposed in [17, 18] as an estimate of the MSE in the Gaussian denoising problem, namely, when $H = I$ and the noise is i.i.d. Gaussian. This idea was then extended in [23] to handle more general inverse problems of the form discussed here, and a wider class of noise distributions. We refer to the latter MSE estimate as the generalized SURE (GSURE).
The essential concept behind the GSURE principle is to seek an unbiased estimate for the MSE that is only a function of the estimate $\hat{x}$ and the measurements (up to a constant which may depend on $x$ but is independent of $\hat{x}$). Denote such an MSE estimate by $\eta(\hat{x})$. Then,
(15)  $E\{\eta(\hat{x})\} = E\|\hat{x} - x\|^2.$
If we have access to such a function that estimates the MSE, while also depending on a set of parameters (that control the reconstruction performance), then we may choose the values of these parameters so as to minimize this function, and in that way aim at minimizing the true MSE as a consequence.
For the case where $H$ has full column rank, the GSURE is given by Eldar [23] as
(16)  $\eta(\hat{x}) = \|\hat{x}(u)\|^2 - 2\,\hat{x}_{ML}^T\hat{x}(u) + 2\,\mathrm{div}_u\!\left(\hat{x}(u)\right) + c.$
Here, $u = \frac{1}{\sigma^2}H^Ty$ is the sufficient statistic for the model (1), $\mathrm{div}_u$ is the divergence operator
(17)  $\mathrm{div}_u\!\left(\hat{x}(u)\right) = \sum_i \frac{\partial\hat{x}_i(u)}{\partial u_i} = \mathrm{trace}\left(\frac{\partial\hat{x}(u)}{\partial u}\right),$
and $c$ is a constant independent of $\hat{x}$. Given that our goal is to minimize the MSE by setting the parameters, this constant is irrelevant for the minimization process. The term $\hat{x}_{ML}$ stands for the Maximum Likelihood (ML) estimate, the minimizer of
(18)  $\|Hx - y\|^2.$
If $H^TH$ is invertible, then this estimate is given by
(19)  $\hat{x}_{ML} = (H^TH)^{-1}H^Ty = \sigma^2(H^TH)^{-1}u.$
The MSE estimate presented in (16) is valid only if $H^TH$ is invertible. When this condition is not satisfied, $H$ has a non-trivial null space $\mathcal{N}(H)$, and we have no information about the components of $x$ that lie in it. Thus we can estimate only the MSE projected onto the range of $H^T$ (denoted by $\mathcal{R}(H^T)$), which is the orthogonal complement of the null space. Therefore, instead of attempting to minimize an unbiased estimate of the MSE, we consider an unbiased estimate of the projected MSE, $E\|P\hat{x} - Px\|^2$, where
(20)  $P = H^{\dagger}H$
is the orthogonal projection onto $\mathcal{R}(H^T)$. Here $H^{\dagger}$ denotes the Moore-Penrose pseudo-inverse.
As shown in [23], an unbiased estimator for the projected MSE is given by the same formula as (16), but applied on the projected estimate $P\hat{x}$:
(21)  $\eta_P(\hat{x}) = \|P\hat{x}(u)\|^2 - 2\,\hat{x}_{ML}^TP\hat{x}(u) + 2\,\mathrm{div}_u\!\left(P\hat{x}(u)\right) + c_P.$
Again, $c_P$ is a constant independent of $\hat{x}$. Here the ML estimate is chosen as the minimum-norm minimizer of (18), given by
(22)  $\hat{x}_{ML} = H^{\dagger}y = \sigma^2(H^TH)^{\dagger}u.$
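To summarize this section, the following sketch evaluates (21), up to its additive constant, for a generic estimator written as a function of $u$. It uses dense pseudo-inverses for clarity (for block-circulant $H$ these reduce to FFT-domain divisions) and a single-probe, finite-difference divergence in place of the analytic derivative recursion derived in the next section; all names are illustrative.

```python
import numpy as np

def projected_gsure(u, H, reconstruct, sigma2, rng=np.random.default_rng(0)):
    """Evaluate the projected GSURE (21), up to its constant c_P, for an
    estimator x_hat = reconstruct(u) written as a function of the
    sufficient statistic u = H^T y / sigma^2."""
    P = np.linalg.pinv(H) @ H                     # projection onto R(H^T)
    x_ml = sigma2 * np.linalg.pinv(H.T @ H) @ u   # minimum-norm ML, eq. (22)
    x_hat = reconstruct(u)
    n = rng.standard_normal(u.size)
    eps = 1e-3
    # Monte-Carlo divergence of P x_hat(u):  n^T P (dx/du) n
    div = n @ (P @ (reconstruct(u + eps * n) - x_hat)) / eps
    Px = P @ x_hat
    # x_ml lies in R(H^T), so x_ml^T P x_hat = x_ml^T x_hat.
    return Px @ Px - 2 * x_ml @ x_hat + 2 * div
```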
5 Parameter Tuning with GSURE
The generalization of SURE provides the possibility of using this estimator for the case of a general $H$, and specifically, for the deblurring and image scale-up problems. The tuning will be demonstrated for the SSF and PCD algorithms. As seen in Section 2, these methods have mainly two unknown parameters: the number of iterations $k$ and the thresholding value $\lambda$. Vonesch, Ramani and Unser [26] have already addressed this problem for the case where the operator has full column rank. In their method, referred to as the global method, one parameter is set at a time, given the other. We extend this to a general $H$ using the projected GSURE, showing the advantage of its usage over the regular GSURE, and over the conventional parameter setting methods that were presented in Section 3.
Though the global method for parameter setting provides a good estimation of the parameters, as demonstrated in the results section, it sets only one parameter given the other. We present an additional approach that sets both together. The parameter $\lambda$ takes on a different value in each iteration of the algorithm, in a greedy fashion. This way, we choose $\lambda$ together with $k$, setting the latter to be the point where the estimated MSE stops decreasing. This method has the same computational cost for choosing the two parameters as the global method has for selecting only one, without degrading the MSE performance. We also suggest a greedy look-ahead method that aims at better reconstruction results at the expense of higher complexity. We compare these algorithms with the global method, demonstrating the advantage of using these approaches for parameter tuning when both $\lambda$ and $k$ are unknown.
5.1 The Global Method using GSURE
In order to use GSURE with the iterated shrinkage algorithms described in Section 2, we need to calculate the ML estimate, the derivative of the reconstruction algorithm with respect to $u$, and, in the rank-deficient case, the projection operator $P$. In our experiments, the ML estimate and the projection matrix are easily calculated by exploiting the fact that the blur operators considered are block circulant. As such, they are diagonalized by the 2D discrete Fourier transform (DFT). This helps both for the deblurring and the scale-up problems we handle.
To calculate the derivatives with respect to $u$, we first reformulate the iterative equation as a function of $u$ instead of $y$. Using $H^Ty = \sigma^2u$, rewriting (4) leads to
(23)  $\alpha_{i+1} = \mathcal{S}_{\lambda/c}\left(\frac{1}{c}D^T\left(\sigma^2u - H^THD\alpha_i\right) + \alpha_i\right),$
where $\hat{x} = D\alpha_k$ as in (5). The derivatives of $\alpha_{i+1}$ with respect to $u$ are calculated recursively by first noticing that
(24)  $\frac{\partial\alpha_{i+1}}{\partial u} = M_i\left[\frac{\sigma^2}{c}D^T + \left(I - \frac{1}{c}D^TH^THD\right)\frac{\partial\alpha_i}{\partial u}\right],$
where $M_i$ is the element-wise derivative of the shrinkage function, evaluated at the argument of $\mathcal{S}_{\lambda/c}$ in (23) and organized as a diagonal matrix. From here, the divergence of the estimator can be directly obtained by multiplying the above by $D$ from the left, gathering the diagonal of the resulting matrix and summing it up:
(25)  $\mathrm{div}_u\!\left(\hat{x}(u)\right) = \mathrm{trace}\left(D\frac{\partial\alpha_k}{\partial u}\right).$
It is impractical to calculate the Jacobian in (24) in high dimensions. Instead, we use the same trace estimator that was used for the GCV in Section 3.1. Following Vonesch et al. [26], we choose a random vector $n$ with zero mean and identity covariance. Using this estimator we iterate over the vector $b_i = \frac{\partial\alpha_i}{\partial u}n$, which can be calculated instead of the full Jacobian, leading to the following iterative formula:
(26)  $b_{i+1} = M_i\left[\frac{\sigma^2}{c}D^Tn + \left(I - \frac{1}{c}D^TH^THD\right)b_i\right], \qquad \mathrm{div}_u\!\left(\hat{x}(u)\right) \approx n^TDb_k.$
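The following sketch puts (23) and (26) together for a soft-thresholding shrinkage, whose element-wise derivative is 1 wherever the input survives the threshold (the diagonal of $M_i$): the probe vector $n$ is drawn once, the directional derivative $b_i$ is propagated alongside $\alpha_i$, and the divergence estimate $n^TDb_k$ is returned with the reconstruction. The dense matrices are stand-ins for the fast operators used in practice.

```python
import numpy as np

def ssf_with_divergence(u, H, D, lam, k, sigma2, c,
                        rng=np.random.default_rng(0)):
    """SSF as a function of u (eq. (23)), propagating the directional
    derivative b_i = (d alpha_i / d u) n alongside the iterations
    (eq. (26)), so that div(x_hat) ~ n^T D b_k."""
    A = H @ D
    n = rng.standard_normal(u.size)               # single random probe
    alpha = np.zeros(D.shape[1])
    b = np.zeros_like(alpha)                      # b_0 = 0, eq. (31)
    for _ in range(k):
        z = alpha + D.T @ (sigma2 * u - H.T @ (A @ alpha)) / c
        deriv = (np.abs(z) > lam / c).astype(float)          # diag of M_i
        b = deriv * (b + D.T @ (sigma2 * n - H.T @ (A @ b)) / c)
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam / c, 0.0)
    return D @ alpha, n @ (D @ b)                 # x_hat, divergence estimate
```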
For the estimation of the MSE for the PCD algorithm we need to calculate the trace as was done for the SSF. We rewrite the PCD iteration in terms of $u$:
(27)  $\alpha_{i+1} = \alpha_i + \mu\left(\mathcal{S}_{\lambda W^{-1}}\left(W^{-1}D^T\left(\sigma^2u - H^THD\alpha_i\right) + \alpha_i\right) - \alpha_i\right).$
The derivative with respect to $u$ is given by
(28)  $\frac{\partial\alpha_{i+1}}{\partial u} = \frac{\partial\alpha_i}{\partial u} + \mu\left(M_i\left[\sigma^2W^{-1}D^T + \left(I - W^{-1}D^TH^THD\right)\frac{\partial\alpha_i}{\partial u}\right] - \frac{\partial\alpha_i}{\partial u}\right).$
Using randomization we get the following iterative formula for estimating the trace value,
(29)  $b_{i+1} = b_i + \mu\left(M_i\left[\sigma^2W^{-1}D^Tn + \left(I - W^{-1}D^TH^THD\right)b_i\right] - b_i\right).$
Now that we hold a complete expression for the GSURE MSE estimate, $\lambda$ and $k$ can be chosen by minimizing it. This strategy was developed in [26] and is referred to as the global method. In the full-rank case, the projection matrix is the identity and the projected GSURE coincides with the regular GSURE; thus we can use the projected GSURE equation in (21) for both cases. For a pre-chosen number of iterations $k$, the $\lambda$ which minimizes the GSURE expression is chosen. Repeating this process for various values of $k$, one can minimize the overall global MSE with respect to both $\lambda$ and $k$. As an analytical minimization of the GSURE is hard to achieve, we use a golden-section search [35]. We initialize these algorithms by
(30)  $\alpha_0 = 0$
and
(31)  $b_0 = 0.$
Denoting by $T_G$ the GSURE calculation time per iteration, by $k_{gs}$ the number of golden-section iterations, and by $k$ the number of iterations of the iterated shrinkage algorithm, the time complexity of the global method for setting $\lambda$, when $k$ is fixed, is $O(k_{gs}kT_G)$. For setting $k$, another golden-section search can be performed over the number of iterations.
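For reference, a generic golden-section search [35] of the kind used here for minimizing the GSURE estimate with respect to $\lambda$ is sketched below; the objective $f$ would wrap a full run of the iterated shrinkage algorithm followed by the GSURE evaluation.

```python
import numpy as np

def golden_section(f, lo, hi, tol=1e-3):
    """Golden-section search [35] for the minimizer of a unimodal scalar
    function f on [lo, hi]; here f(lam) would run the iterated shrinkage
    algorithm with threshold lam and return the GSURE MSE estimate."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0       # inverse golden ratio
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                        # minimizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = f(c)
        else:                              # minimizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = f(d)
    return (a + b) / 2.0
```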
5.2 The Greedy Method using GSURE
In the greedy method, instead of choosing one global $\lambda$ for all the iterations, we take a different route. Turning to a local alternative, the value of $\lambda_i$ is determined as the minimizer of the estimated MSE of the current iteration. In this strategy, $k$ is set together with $\lambda$ as the point where the estimated MSE (almost) stops decreasing. This allows us to avoid setting each of the parameters separately, as done in the global method. The proposed algorithm is the following:

1. Initialize $\alpha_0$ and calculate its derivative $b_0$ w.r.t. $u$.

2. In each iteration $i$, select $\lambda_i$ by a golden-section search, minimizing the GSURE-estimated MSE of the coming iterate; apply the shrinkage iteration with $\lambda_i$ and update the derivative by (26) (or (29) for PCD). Stop, setting $k = i$, once the estimated MSE improvement falls below a given threshold.
The complexity of the greedy algorithm is also $O(k_{gs}kT_G)$. When one of the parameters is fixed in the global method and we need to set only the other, the two approaches have the same complexity. In the case where both parameters are to be set, the greedy technique has an advantage, since it chooses the number of iterations along with the parameter $\lambda$. In contrast, in the global method the number of iterations is either pre-chosen, and thus suboptimal, or searched using yet another golden section, increasing the overall complexity.
To further decrease the MSE, we can modify the greedy algorithm by introducing a look-ahead option. One of the problems of the greedy strategy is that it minimizes the MSE of the current iteration, which can harm the overall MSE. Thus, instead of choosing $\lambda_i$ to minimize the estimated MSE of the current iteration, we select it to minimize the estimated MSE $q$ iterations ahead, assuming that these iterations are performed using the greedy approach described above. This change provides a look-ahead of $q$ iterations, formulated as the following algorithm:

1. Initialize $\alpha_0$ and calculate its derivative $b_0$ w.r.t. $u$.

2. In each iteration $i$, select $\lambda_i$ by a golden-section search, where each candidate value is evaluated by performing $q$ greedy iterations ahead and measuring the GSURE-estimated MSE at the last of them; apply the shrinkage iteration with the selected $\lambda_i$ and update the derivative. Stop, setting $k = i$, once the estimated MSE improvement falls below a given threshold.
In the selection of $\lambda_i$, for each value tested in the golden section, $q$ iterations of the greedy method are performed, as described for the greedy algorithm. Finally, $\lambda_i$ of the current iteration is chosen such that the estimated MSE of the last ($q$-th) greedy iteration is minimized. The time complexity of the look-ahead greedy algorithm is $O(qk_{gs}kT_G)$, which is $q$ times slower than the other two methods. However, this is partially compensated for in some cases by a smaller $k$ due to faster convergence. Furthermore, the resulting MSE is often lower, as we illustrate in the results section.
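The sketch below captures the structure of the greedy scheme (the look-ahead variant would replace the inner objective by $q$ chained greedy steps). The routines `step` (one shrinkage iteration with a given $\lambda$) and `gsure_of` (the GSURE estimate of the current iterate, including its propagated derivative) are hypothetical stand-ins.

```python
import numpy as np

def golden_min(f, a, b, n_iter=20):
    # Golden-section search for the minimizer of a unimodal f on [a, b].
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = f(c), f(d)
    for _ in range(n_iter):
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = f(d)
    return (a + b) / 2.0

def greedy_tuning(step, gsure_of, state0, lam_range=(1e-3, 1.0),
                  k_max=100, tol=1e-4):
    """Greedy tuning of lambda and k.  `step(state, lam)` performs one
    shrinkage iteration (the state bundles alpha_i with its propagated
    derivative b_i); `gsure_of(state)` evaluates the GSURE MSE estimate.
    In each iteration, lambda_i minimizes the estimated MSE of the coming
    iterate, and the loop stops once the estimated improvement drops
    below `tol`, which sets k automatically."""
    state, prev = state0, np.inf
    for k in range(1, k_max + 1):
        lam = golden_min(lambda t: gsure_of(step(state, t)), *lam_range)
        state = step(state, lam)
        cur = gsure_of(state)
        if prev - cur < tol:   # estimated MSE (almost) stopped decreasing
            break
        prev = cur
    return state, k
```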
6 Results
The greedy methods were previously presented, with preliminary results on low-dimensional signals, in [36]. In this work we extend their use to images. In this section we present a set of experiments that demonstrate parameter tuning for the deblurring and the scale-up problems, using the PCD and the SSF algorithms. The representative experiments chosen correspond to different scenarios, which enable a demonstration of the various algorithms and their behavior. We should note that our study included many more tests, which are not brought here because of space limitations, and which lead to the same final conclusions. Table 1 lists the five tests performed. The tests we present divide into two groups: (i) preliminary ones that demonstrate the superiority of the projected GSURE over all the other alternatives, and (ii) advanced tests that compare the global and the greedy methods.
Table 1: The five tests performed.

Problem          Blur         Decimation      Algorithm
1. Deblurring                 none            PCD
2. Deblurring                 none            PCD
3. Deblurring    separable    none            PCD
4. Scale-Up      separable    in each axis    SSF
5. Scale-Up      separable    in each axis    SSF
6.1 Preliminary Tests
We start by demonstrating the core ability of the GSURE and the classical techniques to choose the required parameters. In Fig. 4 we present the GSURE ISNR estimation (the reference MSE is obtained by considering $y$ as the estimate) for Problem 1 applied on Cameraman, as a function of the iteration number. Similarly, Fig. 5 presents the GSURE ISNR as a function of $\lambda$ for the same problem. In both cases, the other parameter is kept fixed, choosing it to be the optimal one based on the true MSE. It can be observed in both graphs that the GSURE gives a very good estimation of the MSE, and thus also a high-quality estimate of both $\lambda$ and the number of iterations $k$. It can also be seen that it selects the parameters to be much closer to the optimal selection, compared to all the conventional methods. Among these conventional methods, the L-curve gives the best approximation of the parameters; however, it is hard to compute it in an efficient way. The other methods approximate $\lambda$ relatively well but fail in choosing $k$.
Repeating the experiment reported in Fig. 4 on the image Boat with $\lambda$ fixed, Table 2 summarizes the ISNR reconstruction results for setting $k$.
Parameter Tuning Method    ISNR

Optimal Selection          6.461 dB
GSURE                      6.458 dB
L-curve                    6.43 dB
GCV                        5.21 dB
Discrepancy Principle      5.47 dB
In Figs. 6 and 7 a comparison is made between the ISNR estimation using the regular and the projected GSURE. The problem considered in the first graph is Problem 2 and in the second is Problem 3, both applied on Cameraman. The parameters selected by the various methods are marked. The first graph is plotted as a function of $\lambda$ and the second as a function of the iteration number $k$. It can be seen that for both operators the estimation of $\lambda$ and of the number of iterations using the projected GSURE is reasonable, and always better than that of the regular GSURE. Moreover, in the case where the blur has a genuine null space, we get a totally unreliable estimation of the ISNR with the regular GSURE. We also see that the L-curve fails this time in estimating $\lambda$.
Table 3 compares the various parameter setting methods for a deblurring experiment based on Problem 2, reconstructing the image Peppers when $\lambda$ is being tuned. The visual effect of the various parameter selections can be observed in Fig. 8. We see the same behavior as before, indicating that the projected GSURE is the best choice, leading to the best reconstruction results. The results again show that the regular GSURE (and the conventional methods) tend to fail, while the projected version performs very well.
Parameter Tuning Method    ISNR

Optimal Selection          8.86 dB
Projected GSURE            8.86 dB
Discrepancy Principle      8.28 dB
GSURE                      8.17 dB
GCV                        8.1 dB
L-curve                    7.51 dB
In Figs. 9 and 10 a comparison is made between the estimation of the PSNR using the regular and the projected GSURE for Problem 4 (scale-up). As before, a comparison is made between the parameter values selected by the various methods. In addition, we show the reconstruction quality achieved by the bicubic, the Lanczos, and the bilinear interpolation methods (scale-up techniques that do not require parameter tuning), as a reference performance to compare against.
The first graph is plotted as a function of the iteration number and the second as a function of $\lambda$. It can be seen that the reconstruction algorithm with correctly tuned parameters achieves better reconstruction results than the conventional scale-up techniques, demonstrating the importance of the parameter tuning. Here as well, it can be observed that the projected GSURE provides the best estimation for the selection of $\lambda$ and $k$.
As in the case of deblurring, the scale-up results show that the projected GSURE gives the best reconstruction results. Because of its superiority, we use it hereafter in the experiments that follow. Since in the case of a full-rank operator the GSURE and the projected GSURE coincide, wherever GSURE is referenced below, the projected GSURE is intended.
6.2 Advanced Tests
We now turn to look at the greedy strategies and compare them to the global method, all aiming to jointly estimate $\lambda$ and $k$. In Fig. 11 we present the results obtained for Problem 1 applied on Cameraman. The graph shows the true ISNR after each iteration of the greedy and the greedy look-ahead methods (with a fixed look-ahead $q$), together with their ISNR estimations. As can be seen, the estimation has the same behavior as the true ISNR; in particular, in both graphs the maxima (marked by a circle as before) are very close, in the greedy as well as in the greedy look-ahead case. Choosing the number of iterations based on GSURE thus yields an ISNR very close to the one based on the true ISNR. This justifies our claim about using the greedy algorithm as an automatic stopping rule, by detecting the point where the estimated ISNR stops increasing. Figure 12 presents the actual result obtained for a similar experiment applied on the image House with Problem 2.
A comparison of the MSE of the global, the greedy, and the look-ahead techniques for the deblurring problem is presented in Fig. 13 for Problems 1, 2, and 3 on the image Cameraman. In all cases, no real advantage in terms of ISNR is seen for any of the techniques. Since the greedy algorithms set the number of iterations together with $\lambda$, this means that the greedy methods are more efficient in the overall performance, at a possible slight cost in terms of the final ISNR.
Turning to the scale-up Problems 4 and 5, the same comparison is made in Fig. 14. Here again we get similar results in terms of PSNR for the three methods. This implies that the two approaches (global and greedy) are equivalent in terms of the output quality, and the differences are mostly in their complexities, with a clear advantage to the greedy techniques.
To summarize, for the various deblurring and scale-up tests, there is no real advantage to one of the methods over the others in terms of the quality of the reconstruction results. Thus, when we need to set both $\lambda$ and $k$, the greedy methods are to be favored.
7 Conclusion and Discussion
In this work we have considered automatic tuning of parameters for inverse problems. Our focus has been iterated shrinkage algorithms for deblurring and image scale-up, targeting an automatic way of choosing the threshold $\lambda$ in each iteration of these methods, and of choosing the number of iterations $k$. We extended the global tuning method developed in [26] to general ill-posed problems, by exploiting a projected version of the GSURE. We also applied the GCV, the L-curve, and the discrepancy-principle methods to this task of parameter tuning, and compared their selections with those of the (projected) GSURE. The GSURE was shown to give a better approximation of the true values of the tuned parameters, leading to better reconstruction results.
Two greedy methods for parameter tuning, a simple and a look-ahead version, were presented. These two methods were shown to perform as well as the global method, with the advantage of providing an automatic stopping rule together with the parameter, whereas the previous methods set only one of the parameters, given the other.
We are aware of the fact that there are better algorithms today for image deblurring and scale-up. However, the scope of this paper is to demonstrate the applicability of the GSURE for parameter tuning in such methods, and not to compete with current state-of-the-art reconstruction results. Using the concepts presented in this paper, automatic parameter tuning can be performed for other techniques as well.
Acknowledgment
The authors are grateful to Mario A.T. Figueiredo for sharing his image restoration software with them.
This research was partly supported by the European Community's FP7-FET program, SMALL project, under grant agreement no. 225913, by the Israel Science Foundation under grants no. 599/08 and 1081/07, and by the European Commission in the framework of the FP7 Network of Excellence in Wireless COMmunications NEWCOM++ (contract no. 216715).
References
 (1) C. Vogel, Computational methods for inverse problems, Society for Industrial and Applied Mathematics, 2002.
 (2) M. Bertero, P. Boccacci, Introduction to inverse problems in imaging, Institute of Physics Publishing, 1998.
 (3) H. Takeda, S. Farsiu, P. Milanfar, Deblurring using regularized locally adaptive kernel regression, IEEE Trans. Image Process. 17 (4) (2008) 550–563.
 (4) A. Foi, K. Dabov, V. Katkovnik, K. Egiazarian, Shape-adaptive DCT for denoising and image reconstruction, in: Proc. SPIE El. Imaging 2006, Image Process.: Algorithms and Systems V, 6064A-18, 2006.
 (5) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE Trans. Image Process. 16 (8) (2007) 2080–2095.
 (6) M. Fadili, J.-L. Starck, Sparse representation-based image deconvolution by iterative thresholding, 2006.
 (7) I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Communications on Pure and Applied Mathematics 57 (11) (2004) 1413–1457.
 (8) M. Elad, B. Matalon, J. Shtok, M. Zibulevsky, A wide-angle view at iterated shrinkage algorithms, in: Proc. SPIE, Vol. 6701, 2007, p. 670102.
 (9) M. Figueiredo, R. Nowak, A bound optimization approach to waveletbased image deconvolution, in: Proceedings of the 2005 IEEE International Conference on Image Processing (ICIP’05).
 (10) Z. BenHaim, Y. C. Eldar, Blind minimax estimation, IEEE Trans. Inform. Theory 53 (2007) 3145–3157.
 (11) S. Kay, Y. C. Eldar, Rethinking biased estimation, IEEE Signal Processing Magazine 25 (3) (2008) 133–136.
 (12) Y. C. Eldar, Rethinking biased estimation: Improving maximum likelihood and the Cramér-Rao bound, Found. Trends Signal Process. 1 (4) (2008) 305–449.
 (13) C. Mallows, Some comments on $C_p$, Technometrics 15 (4) (1973) 661–675.
 (14) G. Golub, M. Heath, G. Wahba, Generalized crossvalidation as a method for choosing a good ridge parameter, Technometrics 21 (2) (1979) 215–223.
 (15) P. Hansen, D. O'Leary, The use of the L-curve in the regularization of discrete ill-posed problems, SIAM J. Sci. Comput. 14 (6) (1993) 1487–1503. doi:10.1137/0914086.
 (16) V. Morozov, On the solution of functional equations by the method of regularization, Soviet Math. Doklady 7 (1966) 414–417.
 (17) C. M. Stein, Estimation of the mean of a multivariate distribution, Proc. Prague Symp. Asymptotic Statist. (1973) 345–381.
 (18) C. Stein, Estimation of the mean of a multivariate normal distribution, Ann. Stat. 9 (6) (1981) 1135–1151.
 (19) D. Donoho, I. Johnstone, Adapting to unknown smoothness via wavelet shrinkage, J. Amer. Statist. Assoc. 90 (432) (1995) 1200–1224.
 (20) T. Blu, F. Luisier, The SURE-LET approach to image denoising, IEEE Trans. Image Process. 16 (11) (2007) 2778–2786.
 (21) J.-C. Pesquet, D. Leporini, A new wavelet estimator for image denoising, in: Sixth International Conference on Image Processing and Its Applications, Vol. 1, 1997, pp. 249–253.
 (22) A. Benazza-Benyahia, J.-C. Pesquet, Building robust wavelet estimators for multicomponent images using Stein's principle, IEEE Trans. Image Process. 14 (11) (2005) 1814–1830. doi:10.1109/TIP.2005.857247.
 (23) Y. C. Eldar, Generalized SURE for exponential families: Applications to regularization, IEEE Trans. Signal Process. 57 (2) (2009) 471–481.
 (24) M. Elad, B. Matalon, M. Zibulevsky, Coordinate and subspace optimization methods for linear least squares with nonquadratic regularization, Applied and Computational Harmonic Analysis 23 (2007) 346–367.
 (25) M. Figueiredo, R. Nowak, An EM algorithm for wavelet-based image restoration, IEEE Trans. Image Process. 12 (8) (2003) 906–916.
 (26) C. Vonesch, S. Ramani, M. Unser, Recursive risk estimation for nonlinear image deconvolution with a waveletdomain sparsity constraint, in: Proceedings of the 2008 IEEE International Conference on Image Processing (ICIP’08), San Diego CA, USA, 2008, pp. 665–668.
 (27) Z. Wang, A. C. Bovik, Mean squared error: Love it or leave it? A new look at signal fidelity measures, IEEE Signal Processing Magazine 26 (1) (2009) 98–117.
 (28) A. Bruckstein, D. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images, SIAM Review 51 (1) (2009) 34–81.
 (29) M. Elad, Why simple shrinkage is still relevant for redundant representations?, IEEE Trans. Inform. Theory 52 (12) (2006) 5559–5569.
 (30) M. Hutchinson, A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines, Communications in Statistics - Simulation and Computation 19 (2) (1990) 433–450.
 (31) W. Fu, Nonlinear GCV and quasi-GCV for shrinkage models, Journal of Statistical Planning and Inference 131 (2) (2005) 333–347.
 (32) E. Haber, D. Oldenburg, A GCV based method for nonlinear illposed problems, Computational Geosciences 4 (1) (2000) 41–63. doi:10.1023/A:1011599530422.
 (33) F. O'Sullivan, G. Wahba, A cross validated Bayesian retrieval algorithm for nonlinear remote sensing experiments, J. Comput. Physics 59 (3) (1985) 441–455.
 (34) M. Hanke, Limitations of the Lcurve method in illposed problems, BIT Numerical Mathematics 36 (2) (1996) 287–301. doi:10.1007/BF01731984.
 (35) J. Kiefer, Sequential minimax search for a maximum, Proceedings of the American Mathematical Society 4 (1953) 502–506.
 (36) R. Giryes, M. Elad, Y. Eldar, Automatic parameter setting for iterative shrinkage methods, in: Proceedings of the IEEE 25th Convention of Electrical and Electronics Engineers in Israel, 2008 (IEEEI’08), 2008, pp. 820–824. doi:10.1109/EEEI.2008.4736653.