Dictionary Learning for Deblurring and Digital Zoom
This paper proposes a novel approach to image deblurring and digital zooming using sparse local models of image appearance. These models, where small image patches are represented as linear combinations of a few elements drawn from some large set (dictionary) of candidates, have proven well adapted to several image restoration tasks. A key to their success has been to learn dictionaries adapted to the reconstruction of small image patches. In contrast, recent works have proposed instead to learn dictionaries which are not only adapted to data reconstruction, but also tuned for a specific task. We introduce here such an approach to deblurring and digital zoom, using pairs of blurry/sharp (or low-/high-resolution) images for training, as well as an effective stochastic gradient algorithm for solving the corresponding optimization task. Although this learning problem is not convex, once the dictionaries have been learned, the sharp/high-resolution image can be recovered via convex optimization at test time. Experiments with synthetic and real data demonstrate the effectiveness of the proposed approach, leading to state-of-the-art performance for non-blind image deblurring and digital zoom.
Keywords: deblurring, super-resolution, dictionary learning, sparse coding, digital zoom
1 Introduction

With recent advances in sensor design, the quality of the signal output by digital reflex and hybrid/bridge cameras is remarkably high. Point-and-shoot cameras, however, remain susceptible to noise at high sensitivity settings and/or low-light conditions, and this problem is exacerbated for mobile phone cameras with their small lenses and sensor areas. Photographs taken with a long exposure time are less noisy but may be blurry due to movements in the scene or camera shake. Likewise, although the image resolution of modern cameras keeps on increasing, there is a clear demand for high-quality digital zooming from amateur and professional photographers, whether they crop their family vacation pictures or use footage from camera phones in newscasts. Thus, the classical image restoration problems of denoising, deblurring, multi-frame super-resolution and digital zooming (also called single-image super-resolution) are still of acute and in fact growing importance, and they have received renewed attention lately with the emergence of computational photography (e.g., fergus (); glasdner (); levin ()).
The image deblurring problem is naturally ill-posed: Indeed, perfect low-pass filters remove all high-frequency information from images. They are non-invertible operators, and different sharp images can give rise to the same blurry one. Thus, an appropriate image model is required to regularize the deblurring process. Several explicit priors for natural images have been proposed in the past for different tasks in image restoration. Early work relied on various smoothness assumptions, or image decompositions on fixed bases such as wavelets mallat (). More recent approaches include non-local means filtering buades (), learned sparse models elad (); YiMa (); mairalNonLocal (), piecewise linear estimators yu2010 (), Gaussian scale mixtures portilla (), fields of experts roth (), kernel regression takeda (), and block matching with 3D filtering (BM3D) dabov (). Pairs of low-/high-resolution images have also been used as an implicit image prior in digital zooming tasks freeman (), and combining the exemplar-based approach with image self-similarities at different scales has recently led to impressive results glasdner ().
We propose in this paper to build on several of these ideas with a new approach to non-blind image deblurring (the blur kernel is assumed to be fixed and known) and digital zooming. Like Freeman et al. freeman (), we use training pairs of blurry/sharp or low-/high-resolution image patches readily available for these tasks to learn our model parameters. We also exploit learned sparse local models of image appearance, as in elad (); YiMa (), which are known to be very effective for several image reconstruction tasks. Our method shares some ideas with the work of Yang et al. YiMa (), but our formulation combines several novelties that improve the results:
- Whereas the approach of YiMa () is purely generative (this model learns how to simultaneously reconstruct pairs of low- and high-resolution patches), our approach learns how to reconstruct a high-resolution patch given a low-resolution one. In essence, the difference is the same as between generative and discriminative models in machine learning.
- We present a novel formulation for non-blind image deblurring and digital zooming, combining a linear predictor with dictionary learning, and show with extensive experiments on both synthetic and real data that our approach is competitive with the state of the art for these two tasks.
- We adapt the stochastic gradient descent algorithm of mairalPAMI () to solve the corresponding learning problem, allowing the use of large databases of training patches (typically several million).
Notation. For $q \geq 1$, we define the $\ell_q$ norm of a vector $\mathbf{x}$ in $\mathbb{R}^m$ as $\|\mathbf{x}\|_q \triangleq (\sum_{i=1}^m |\mathbf{x}[i]|^q)^{1/q}$, where $\mathbf{x}[i]$ denotes the $i$-th coordinate of $\mathbf{x}$. We denote the Frobenius norm of a matrix $\mathbf{X}$ in $\mathbb{R}^{m \times n}$ by $\|\mathbf{X}\|_F \triangleq (\sum_{i=1}^m \sum_{j=1}^n \mathbf{X}[i,j]^2)^{1/2}$.
2 Related Work
2.1 Deblurring and Digital Zoom
Blur is a common image degradation, and the literature on the subject is quite large (see, e.g., GEM (); fergus (); BOA (); foi2006 (); levin (); takeda ()). Most existing methods assume a shift-invariant blur operator, such that a blurry image $\mathbf{y}$ can be modelled as the convolution of a sharp image $\mathbf{x}$ with a fixed blur kernel $\mathbf{k}$:

$$\mathbf{y} = \mathbf{k} \ast \mathbf{x} + \mathbf{w}, \qquad (1)$$

where $\mathbf{w}$ is an additive noise, usually i.i.d. Gaussian with zero mean. This model, while often satisfactory, does not take into account the fact that blur due to defocus or rotational camera motion is not uniform levin (). However, at least locally, it is sufficient to describe many types of blur.
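For illustration, the degradation model above can be simulated in a few lines of NumPy/SciPy. This is a didactic sketch (not the code used in the paper); the 9 × 9 uniform kernel is just one example:

```python
import numpy as np
from scipy.signal import convolve2d

def blur_and_noise(x, kernel, sigma, seed=None):
    """Simulate the degradation model: convolve the sharp image x with
    the blur kernel, then add white Gaussian noise of std deviation sigma."""
    rng = np.random.default_rng(seed)
    y = convolve2d(x, kernel, mode="same", boundary="symm")
    return y + sigma * rng.normal(size=x.shape)

# Example: a 9 x 9 uniform blur applied to a bright point source.
kernel = np.full((9, 9), 1.0 / 81)
x = np.zeros((32, 32))
x[16, 16] = 255.0
y = blur_and_noise(x, kernel, sigma=2.0, seed=0)
```

Note that the convolution conserves the total intensity of the point source while spreading it over the kernel support, which is exactly why different sharp images can map to near-identical blurry ones once noise is added.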
In the noiseless case, when the filter is a known imperfect low-pass filter (that is, its Fourier transform has no zeros), the blurring operator is invertible and deblurring amounts to inverting it in the Fourier domain. However, noise is always present in natural images, and even a small amount dominates the signal at high frequencies, leading to numerous artefacts. Regularization methods have been extensively studied to tackle this problem Hansen (). They usually impose smoothness constraints on the reconstructed images. The most recent and effective algorithms in this line of work usually adopt a two-step approach BM3D (); foi2006 (); SVGSM (): first, a simple regularized inversion of the blur is performed; then, the resulting image is processed with classical denoising algorithms to remove artefacts. Various denoising methods have been used for this task: for instance, a Gaussian scale mixture model (GSM) SVGSM (), the shape-adaptive discrete cosine transform foi2006 (), or block matching with 3D filtering BM3D ().
The digital zooming literature has seen in recent years the development of another line of research, following the exemplar-based method introduced by Freeman et al. freeman (). Correspondences between high-resolution patches and low-resolution ones are learned by building a large database of such pairs. This idea has been successfully exploited by Glasner et al. glasdner (), leading to state-of-the-art results. Along the same line, but using sparse image representations instead, pairs of corresponding patches are used by Yang et al. YiMa () to jointly learn high and low-resolution dictionaries. As shown in Section 3, the method we propose exploits these exemplar-based ideas as well, but in a significantly different way.
2.2 Learned Sparse Representations
Like several recent approaches to image restoration elad (); YiMa (), our method is based on the sparse decomposition of image patches. Using a dictionary matrix $\mathbf{D}$ in $\mathbb{R}^{m \times k}$, a signal $\mathbf{x}$ in $\mathbb{R}^m$ is reconstructed as a linear combination of a few columns of $\mathbf{D}$, called atoms or dictionary elements. In typical image processing applications, $m$ is relatively small (the dimension of a small image patch), and $k$ can be larger than $m$, in which case the dictionary is said to be overcomplete. We say that the dictionary $\mathbf{D}$ is well adapted to a vector $\mathbf{x}$ when there exists a sparse vector $\boldsymbol{\alpha}$ in $\mathbb{R}^k$ such that $\mathbf{x}$ can be approximated by the product $\mathbf{D}\boldsymbol{\alpha}$.
Exploiting these types of models usually requires a “good” dictionary. It can either be prespecified or designed by adapting its content to fit a given set of signal examples. Choosing prespecified atoms is appealing: The theoretical properties of the corresponding dictionaries can often be analysed, and, in many cases, it leads to fast algorithms for computing sparse representations. This is indeed the case for wavelets mallat (), curvelets, steerable wavelet filters, short-time Fourier transforms, etc. The success of the corresponding dictionaries in applications depends on how suitable they are to sparsely describe the relevant signals.
Another approach consists of learning the dictionary on a set of signal examples. The sparse decomposition of a patch $\mathbf{x}$ on a fixed dictionary $\mathbf{D}$ can be achieved by solving an optimization problem called the Lasso in statistics tibshirani () or basis pursuit in signal processing chen ():

$$\min_{\boldsymbol{\alpha} \in \mathbb{R}^k} \frac{1}{2}\|\mathbf{x} - \mathbf{D}\boldsymbol{\alpha}\|_2^2 + \lambda\|\boldsymbol{\alpha}\|_1, \qquad (2)$$

where the code $\boldsymbol{\alpha}$ in $\mathbb{R}^k$ is the representation of $\mathbf{x}$ over the dictionary $\mathbf{D}$, and $\lambda$ is a parameter controlling the sparsity of the solution. (It is well known that $\ell_1$ regularization yields a sparse solution for $\boldsymbol{\alpha}$, but there is no direct analytic link between the value of $\lambda$ and the corresponding effective sparsity that it yields.) Following an idea originally introduced in the neuroscience community by Olshausen and Field olshausen (), Aharon et al. elad () have empirically shown that learning a dictionary adapted to natural images can lead to better performance for image denoising than using off-the-shelf ones. For a database of $n$ training patches, a dictionary is learned by solving the following optimization problem:

$$\min_{\mathbf{D} \in \mathcal{D},\,(\boldsymbol{\alpha}_i)_{i=1}^n} \sum_{i=1}^n \frac{1}{2}\|\mathbf{y}_i - \mathbf{D}\boldsymbol{\alpha}_i\|_2^2 + \lambda\|\boldsymbol{\alpha}_i\|_1, \qquad (3)$$

where $\mathbf{y}_i$ is the $i$-th patch of the training set, and $\boldsymbol{\alpha}_i$ is its associated sparse code. To prevent the columns of $\mathbf{D}$ from being arbitrarily large (which would lead to arbitrarily small values of the $\boldsymbol{\alpha}_i$), the dictionary is constrained to belong to the set $\mathcal{D}$ of matrices in $\mathbb{R}^{m \times k}$ whose columns have an $\ell_2$ norm less than or equal to one.
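For intuition, the Lasso problem of Eq. (2) can be solved with a few lines of NumPy using the iterative shrinkage-thresholding algorithm (ISTA). This is a didactic sketch, not the optimized solver used in practice:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(D, x, lam, n_iter=500):
    """Solve min_a 0.5 * ||x - D a||_2^2 + lam * ||a||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + D.T @ (x - D @ a) / L, lam / L)
    return a
```

Larger values of `lam` yield sparser codes; for `lam` large enough, the solution is exactly zero.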
2.3 Deblurring with Dictionaries
Several methods using dictionaries for deblurring have been presented in recent years YiMa (); yu2010 (). Yu et al. yu2010 (), while not learning a dictionary as presented in the previous section, use orthogonal bases obtained with principal component analysis (PCA). By "learning" several such dictionaries (one for each edge direction), and by choosing the best dictionary for each patch, the sharp patch can be reconstructed.
In the pioneering work by Yang et al. YiMa (), a pair of dictionaries $(\mathbf{D}_l, \mathbf{D}_h)$ is used: one dictionary for preprocessed blurred patches and the other for sharp patches. The preprocessing consists in the concatenation of oriented high-pass filters (gradients and Laplacian filters). During training, $\mathbf{D}_l$ and $\mathbf{D}_h$ are learned to represent simultaneously (with the same sparse code) the sharp patches with $\mathbf{D}_h$ and the preprocessed blurred patches with $\mathbf{D}_l$. At test time, given a new preprocessed blurry patch $\mathbf{y}$, a sparse code $\boldsymbol{\alpha}$ is obtained by decomposing $\mathbf{y}$ on $\mathbf{D}_l$, and one hopes $\mathbf{D}_h\boldsymbol{\alpha}$ to be a good estimate of the unknown sharp patch.
This method, while appealing in its simplicity, suffers from an asymmetry between training and testing: Whereas in the learning phase both blurred and sharp patches are used to obtain the sparse codes, at test time the codes are computed from the blurry patches only. Our method addresses this problem with a different training formulation. Moreover, preprocessing the data has not empirically proven necessary in our setting.
3 Proposed Approach
We show in this section how to learn dictionaries adapted to the deblurring and digital zoom tasks. As in exemplar-based methods freeman (); glasdner (); YiMa (), we are given a training set of pairs of patches (obtained from pairs of blurry/sharp images), which is used to estimate the model parameters. Unlike the classical dictionary learning problem of Eq. (3), which is unsupervised, our deblurring and digital zoom formulation is supervised: it tries to predict the sharp patches from the blurry ones.
To predict a sharp pixel value, it is necessary to observe neighbouring blurry pixels. Sharp patches and blurry patches may therefore have different sizes, which we denote respectively by $m_s$ and $m_b$, with $m_b$ larger than $m_s$. During the test phase, we observe a blurry test image $\mathbf{y}$ and try to estimate the underlying sharp image $\mathbf{x}$ according to Eq. (1), assuming of course that its blur is of the same nature as the one used during the training phase. The following sections present different formulations to recover an estimate $\hat{\mathbf{x}}$ of $\mathbf{x}$.
3.1 Linear Model
Blurring is, at least locally, a linear operation resulting from the convolution of a sharp image with a filter. When the support of the blur kernel is small compared to the patch sizes $m_s$ and $m_b$, one can assume a linear relation between the blurry and sharp patches. Thus, a simple approach to the deblurring problem consists of learning how to invert this linear transform with a simple ridge regression model.
Training Step: A training set $(\mathbf{y}_i, \mathbf{x}_i)_{i=1}^n$ of pairs of blurry/sharp patches is given. The training step amounts to finding the matrix $\mathbf{W}$ in $\mathbb{R}^{m_s \times m_b}$ that solves the following optimization problem:

$$\min_{\mathbf{W} \in \mathbb{R}^{m_s \times m_b}} \frac{1}{n}\sum_{i=1}^n \frac{1}{2}\|\mathbf{x}_i - \mathbf{W}\mathbf{y}_i\|_2^2 + \frac{\mu}{2}\|\mathbf{W}\|_F^2, \qquad (4)$$

where $\|\mathbf{W}\|_F$ denotes the Frobenius norm of the matrix $\mathbf{W}$, $n$ is the number of training pairs of patches, and $\mu$ is a regularization parameter, which prevents overfitting on the training set and ensures that the learning problem is well posed. When $n$ is very large (several million), overfitting is unlikely to occur, and setting $\mu$ to a small value (as in our experiments) leads to acceptable results in practice. For this reason, and to simplify the notation, we drop the regularization term in the rest of the paper.
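The ridge regression problem above admits a closed-form solution. The following is a minimal NumPy illustration (the function and variable names are ours, not the paper's):

```python
import numpy as np

def learn_linear_predictor(Y, X, mu=1e-8):
    """Closed-form minimizer of (1/n) * sum_i 0.5*||x_i - W y_i||^2
    + (mu/2)*||W||_F^2, with patches stored as columns:
    Y is (m_b, n) blurry patches, X is (m_s, n) sharp patches."""
    n = Y.shape[1]
    A = Y @ Y.T / n + mu * np.eye(Y.shape[0])   # (YY^T)/n + mu*I
    B = X @ Y.T / n                             # (XY^T)/n
    return np.linalg.solve(A, B.T).T            # solves W A = B, i.e. W = B A^{-1}
```

Setting the gradient of the objective to zero gives $\mathbf{W}(\mathbf{Y}\mathbf{Y}^\top/n + \mu\mathbf{I}) = \mathbf{X}\mathbf{Y}^\top/n$, which is what the last line solves.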
Testing Step: The parameters $\mathbf{W}$ are now fixed, and we are given a blurry, noisy test image $\mathbf{y}$, the goal being to recover a sharp estimate $\hat{\mathbf{x}}$. However, as mentioned in Section 2, the noise dominates the signal at high frequencies, and in practice the linear model, which essentially tries to invert the blur operator, leads to poor results despite the large amount of training data. Improvements can be achieved using recent denoising algorithms, either by pre-processing $\mathbf{y}$ to remove some of its noise, and/or by post-processing the sharp estimate to remove artefacts.
We now pre-process $\mathbf{y}$ and call $\tilde{\mathbf{y}}$ its denoised version, which is obtained with a denoising algorithm mairalNonLocal (), and respectively denote by $\mathbf{x}_i$ and $\tilde{\mathbf{y}}_i$ the patches of $\mathbf{x}$ and $\tilde{\mathbf{y}}$ centered at the pixel $i$, using any indexing of the image pixels. Note that these patches are different from the ones in the training set, even though we use the same notation for simplicity. We assume with our learned linear model that the relation $\mathbf{x}_i \approx \mathbf{W}\tilde{\mathbf{y}}_i$ holds for the patch indexed by $i$. According to this model, the problem of reconstructing the sharp image can be written as:

$$\hat{\mathbf{x}} = \operatorname*{arg\,min}_{\mathbf{x}} \sum_{i=1}^{p} \|\mathbf{x}_i - \mathbf{W}\tilde{\mathbf{y}}_i\|_2^2, \qquad (5)$$

where $p$ is the number of patches in the image $\mathbf{y}$. By using such a local linear model, and since the patches overlap, each pixel of the image admits as many predictions as the number of patches it belongs to. The solution of Eq. (5) is the average of the different predictions at each pixel, which is a classical way of aggregating estimates in patch-based methods elad ().
This model is easy to optimize and to understand but has several limitations. First, small mistakes made during the denoising process can be amplified by the deblurring step.
Second, when the blur kernel completely suppresses some of the high frequencies of the image, setting the corresponding Fourier coefficients to zero, a local linear model cannot recover them: in the Fourier domain, such a model can only multiply the nullified coefficients by finite values. This is one of the motivations for introducing a nonlinear model based on sparse representations to overcome these limitations.
3.2 Dictionary Learning Formulation
In a recent paper, Yang et al. YiMa () have shown that learning multiple dictionaries to establish correspondences between low- and high-resolution image patches is an effective approach to digital zoom. Following this idea, we propose to learn a pair of dictionaries $\mathbf{D}_b$ in $\mathbb{R}^{m_b \times k}$ and $\mathbf{D}_s$ in $\mathbb{R}^{m_s \times k}$ to reconstruct patterns that the linear model presented in the previous section cannot recover.
Training step: Given again a training set $(\mathbf{y}_i, \mathbf{x}_i)_{i=1}^n$ of pairs of blurry-noisy/sharp patches, we address

$$\min_{\mathbf{W},\,\mathbf{D}_s,\,\mathbf{D}_b \in \mathcal{D}} \frac{1}{n}\sum_{i=1}^n \frac{1}{2}\|\mathbf{x}_i - \mathbf{W}\tilde{\mathbf{y}}_i - \mathbf{D}_s\boldsymbol{\alpha}_i^\star\|_2^2, \qquad (6)$$

where $\boldsymbol{\alpha}_i^\star$ is the solution of the following sparse coding problem:

$$\boldsymbol{\alpha}_i^\star = \operatorname*{arg\,min}_{\boldsymbol{\alpha} \in \mathbb{R}^k} \frac{1}{2}\|\tilde{\mathbf{y}}_i - \mathbf{D}_b\boldsymbol{\alpha}\|_2^2 + \lambda\|\boldsymbol{\alpha}\|_1, \qquad (7)$$

which is unique and well defined under a few reasonable assumptions on the dictionary $\mathbf{D}_b$ (see mairal2 () and references therein for more details). (We have empirically found, for our deblurring and super-resolution tasks on natural image patches and our dictionaries, that the solution of Eq. (7) was always unique. For different tasks or data, the possible non-uniqueness of the Lasso solution could be an issue; see mairalPAMI ().) The patch $\tilde{\mathbf{y}}_i$ is a denoised version of $\mathbf{y}_i$. The matrices $\mathbf{D}_s$ and $\mathbf{D}_b$ are two dictionaries jointly learned such that for all $i$, $\mathbf{W}\tilde{\mathbf{y}}_i + \mathbf{D}_s\boldsymbol{\alpha}_i^\star$ is a good estimator of the sharp patch $\mathbf{x}_i$. Summing two different predictors is a classical way of combining two models. In this case, we hope that adding the dictionary term to the linear term will permit a better recovery of the high frequencies. The two models are optimized jointly; the result is not just an averaging of two independent predictors.
Note that $\mathbf{W}$ does not need to be regularized in our formulation: we indeed assume that a large amount of training data is available, and as a consequence our model does not suffer from overfitting.
Testing step: According to our model, and using the same notations as in Eq. (5), our estimate at test time is obtained by solving the following optimization problem:

$$\hat{\mathbf{x}} = \operatorname*{arg\,min}_{\mathbf{x}} \sum_{i=1}^{p} \|\mathbf{x}_i - \mathbf{W}\tilde{\mathbf{y}}_i - \mathbf{D}_s\boldsymbol{\alpha}_i^\star\|_2^2, \qquad (8)$$

where $\mathbf{x}_i$, $\mathbf{y}_i$ and $\tilde{\mathbf{y}}_i$ are respectively the patches centered at pixel $i$ of the sharp image $\mathbf{x}$, the blurry, noisy image $\mathbf{y}$, and the blurry, denoised image $\tilde{\mathbf{y}}$.
In the work of Yang et al. YiMa (), the sparse coefficients are obtained during the training phase by jointly decomposing blurry patches and sharp patches onto two learned dictionaries. Such a model aims to ensure that there always exists a sparse code that fits both the blurry and the sharp patches. However, at test time, since the sharp patches are not available, the sparse codes can only be computed from blurry patches, and there is no guarantee that the resulting codes are good for the corresponding sharp patches.
Our approach does not suffer from this issue, since the sparse coefficients are always obtained from blurry patches only, both during the training and the testing phase. We learn the dictionaries $\mathbf{D}_b$ and $\mathbf{D}_s$ and the linear predictor $\mathbf{W}$ such that the sharp patch $\mathbf{x}_i$ is well predicted given a blurry patch $\mathbf{y}_i$. Whereas this solves the issue mentioned above, it leads to more challenging optimization problems than that of YiMa (). The optimization method we propose builds upon mairalPAMI (), which provides a general framework for solving such dictionary learning problems. The method is presented briefly in Section 4.
We have presented so far a framework adapted to the deblurring task, where we want to obtain a sharp image from a blurry one. The problem of digital zoom consists of increasing the resolution of an image, but it can be formulated as a deblurring problem in a simple way: A low-resolution image can indeed be turned into a blurry high-resolution image with any interpolation technique, the task of digital zoom then being to deblur this new image. The training pairs of images can be generated by downsampling high-resolution images. Note that the antialiasing filter applied during downsampling and the choice of the interpolation method are important. We work with the antialiasing filter from the Matlab function imresize.
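This reduction can be illustrated with SciPy. This is a sketch only; note that, unlike Matlab's imresize, `scipy.ndimage.zoom` applies no antialiasing filter before downsampling, so a faithful training-set generator would need to add one:

```python
import numpy as np
from scipy.ndimage import zoom

def make_training_pair(x_hr, factor=2):
    """Turn a high-resolution image into a training pair for digital zoom:
    downsample it, then interpolate back to the original size, yielding a
    blurry high-resolution image paired with the sharp original."""
    x_lr = zoom(x_hr, 1.0 / factor, order=3, mode="nearest")  # low resolution
    y_blurry = zoom(x_lr, factor, order=3, mode="nearest")    # back to the HR grid
    return y_blurry, x_hr
```

The pair `(y_blurry, x_hr)` then plays exactly the role of the blurry/sharp patch pairs in the deblurring formulation.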
4 Optimization

The formulation of Eq. (6) for learning a pair of dictionaries $\mathbf{D}_b$ and $\mathbf{D}_s$ and a linear predictor $\mathbf{W}$ for the deblurring task is a large-scale learning problem, for which many training samples are easily available. The main difficulty in the optimization comes from the terms $\boldsymbol{\alpha}_i^\star$, which are defined as solutions of the sparse coding problem of Eq. (7). The vectors $\boldsymbol{\alpha}_i^\star$ therefore depend on the dictionary $\mathbf{D}_b$ and are not differentiable with respect to it, preventing us from using a direct gradient descent method.
However, despite these difficulties, it has been shown in mairalPAMI () that such problems enjoy a few asymptotic properties that make it possible to use stochastic gradient descent when the number of training samples is large. Assuming an infinite training set whose elements are i.i.d. samples drawn from some probability distribution, and under mild assumptions, we define the asymptotic cost function

$$f(\mathbf{W}, \mathbf{D}_s, \mathbf{D}_b) \triangleq \mathbb{E}_{(\mathbf{y},\mathbf{x})}\left[\frac{1}{2}\|\mathbf{x} - \mathbf{W}\tilde{\mathbf{y}} - \mathbf{D}_s\boldsymbol{\alpha}^\star\|_2^2\right], \qquad (9)$$

where $(\mathbf{y},\mathbf{x})$ are random variables distributed according to the joint probability distribution of low-/high-resolution patches.
The optimization of cost functions that take the form of an expectation over a supposedly infinite training set is usually tackled with stochastic gradient techniques (see mairalPAMI (); mairal2 () and references therein), which are iterative procedures drawing one element of the training set at random at each iteration. Of course, training sets are finite in practice, but we have empirically obtained good results by optimizing over a large training set of millions of patches. This is indeed the approach proposed in mairalPAMI () for such problems, from which the following proposition can be derived.
[Differentiability of $f$] Assume that the training data admits a continuous probability density, and make the same hypotheses on the dictionary $\mathbf{D}_b$ as in mairalPAMI (). Then, $f$ is differentiable, and

$$
\begin{aligned}
\nabla_{\mathbf{W}} f &= \mathbb{E}\left[-(\mathbf{x} - \mathbf{W}\tilde{\mathbf{y}} - \mathbf{D}_s\boldsymbol{\alpha}^\star)\,\tilde{\mathbf{y}}^\top\right],\\
\nabla_{\mathbf{D}_s} f &= \mathbb{E}\left[-(\mathbf{x} - \mathbf{W}\tilde{\mathbf{y}} - \mathbf{D}_s\boldsymbol{\alpha}^\star)\,\boldsymbol{\alpha}^{\star\top}\right],\\
\nabla_{\mathbf{D}_b} f &= \mathbb{E}\left[-\mathbf{D}_b\boldsymbol{\beta}^\star\boldsymbol{\alpha}^{\star\top} + (\tilde{\mathbf{y}} - \mathbf{D}_b\boldsymbol{\alpha}^\star)\boldsymbol{\beta}^{\star\top}\right],
\end{aligned} \qquad (10)
$$

where $\boldsymbol{\alpha}^\star$ denotes $\boldsymbol{\alpha}^\star(\tilde{\mathbf{y}}, \mathbf{D}_b)$, and the vector $\boldsymbol{\beta}^\star$ in $\mathbb{R}^k$ is defined by $\boldsymbol{\beta}^\star_{\Lambda^c} = \mathbf{0}$ and

$$\boldsymbol{\beta}^\star_\Lambda = (\mathbf{D}_{b\Lambda}^\top\mathbf{D}_{b\Lambda})^{-1}\mathbf{D}_{s\Lambda}^\top(\mathbf{W}\tilde{\mathbf{y}} + \mathbf{D}_s\boldsymbol{\alpha}^\star - \mathbf{x}), \qquad (11)$$

where $\Lambda$ denotes the indices of the nonzero coefficients of $\boldsymbol{\alpha}^\star$; for any vector $\mathbf{v}$, the vector $\mathbf{v}_\Lambda$ contains the values of $\mathbf{v}$ corresponding to the indices $\Lambda$, and for any matrix $\mathbf{M}$, the matrix $\mathbf{M}_\Lambda$ contains the columns of $\mathbf{M}$ corresponding to the indices $\Lambda$.
Algorithm 1 presents our method for learning $\mathbf{W}$, $\mathbf{D}_s$ and $\mathbf{D}_b$. It is a stochastic gradient descent algorithm, which adapts mairalPAMI () to our formulation. At each iteration, it draws one element of the training set at random, computes the terms inside the expectations of Eq. (10), and moves the parameters one step in these directions. Since $\mathbf{D}_b$ is constrained to belong to the set $\mathcal{D}$ defined in Eq. (3), an orthogonal projection onto this set is required at each iteration of the algorithm; it is denoted by $\Pi_{\mathcal{D}}$.
To improve the efficiency of the algorithm, we use a classical heuristic often referred to as mini-batching: Instead of drawing a single pair of the training set at a time, we draw several of them, compute the directions given by Eq. (10) for each pair, and move the model parameters in the average direction. This improves the stability of the stochastic gradient descent algorithm and experimentally gives faster convergence. Since our optimization problem is not convex, it requires a good initialization. We proceed as follows: (i) We learn a dictionary $\mathbf{D}_b$ using the unsupervised formulation of Eq. (3) with the SPAMS software (an open-source toolbox available at http://www.di.ens.fr/willow/SPAMS/) accompanying mairal2 () on the set of denoised patches $\tilde{\mathbf{y}}_i$. (ii) We fix $\mathbf{D}_b$ and optimize Eq. (6) with respect to $\mathbf{W}$ and $\mathbf{D}_s$, which is a convex optimization problem. In our experiments, this procedure provides a good initialization.
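For concreteness, one iteration of the resulting update, for a single training pair and with our reading of the gradients of Eq. (10) and the projection onto $\mathcal{D}$, might look as follows in NumPy. This is a hedged sketch with illustrative names, not the authors' C++/Matlab implementation:

```python
import numpy as np

def ista(D, x, lam, n_iter=200):
    """Small Lasso solver (ISTA) computing the sparse code alpha*."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-12
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a + D.T @ (x - D @ a) / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

def project_columns(D):
    """Orthogonal projection onto matrices whose columns have l2 norm <= 1."""
    return D / np.maximum(np.linalg.norm(D, axis=0), 1.0)

def sgd_step(W, Ds, Db, y_tilde, x, lam, lr):
    """One stochastic gradient step on (W, Ds, Db) for one training pair."""
    a = ista(Db, y_tilde, lam)                    # sparse code alpha*
    r = x - W @ y_tilde - Ds @ a                  # residual of the prediction
    idx = np.flatnonzero(a)                       # active set Lambda
    beta = np.zeros_like(a)
    if idx.size:
        DbL = Db[:, idx]
        g = Ds[:, idx].T @ (-r)                   # gradient of the loss w.r.t. alpha_Lambda
        beta[idx] = np.linalg.solve(DbL.T @ DbL, g)
    W = W + lr * np.outer(r, y_tilde)             # descent step on W
    Ds = Ds + lr * np.outer(r, a)                 # descent step on Ds
    grad_Db = -Db @ np.outer(beta, a) + np.outer(y_tilde - Db @ a, beta)
    Db = project_columns(Db - lr * grad_Db)       # step on Db, then project
    return W, Ds, Db
```

A mini-batch version simply averages the three update directions over several pairs before moving the parameters.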
5 Experiments

We present here experimental results obtained with our method, along with comparisons with state-of-the-art methods. In all our experiments, after the initialization step described in the previous section, we use the stochastic gradient descent algorithm with one pass over a database of several million training patches extracted from a set of natural images. All the images from this dataset are unrelated to the images used for testing our method. Our implementation is coded in C++ and Matlab. Learning a dictionary usually takes a few hours on a recent computer, while processing a test image is faster (less than one minute for most of our test images).
5.1 Non-Blind Deblurring with Isotropic Kernels
To compare our method on the non-blind deblurring task, we have chosen a classical set of images and types of blur, which has been used in several recent image processing papers (see yu2010 () and references therein). Even though such a synthetic non-blind deblurring task of course deviates slightly from real restoration problems with digital cameras, it is still an active topic in the image processing community and has proven useful in the past, leading to high-impact applications in astronomical imaging starck (), for example (see Section 5.2).
The different combinations of blur and noise are detailed in Table 1, with the shape of the blur kernel and the variance of the (white Gaussian) noise. They are used in other papers and range from strong-blur/weak-noise to weak-blur/strong-noise cases.
Table 1 (only partially recovered): Exp. 1 uses a 9 × 9 uniform blur; Exps. 5 and 6 use Gaussian blurs of different variances; the remaining kernel shapes and noise variances did not survive extraction.
For each blur level, we have generated pairs of blurry/sharp images from our training database and learned dictionaries with a fixed number of elements. We have observed that result quality usually improves with the dictionary size, the chosen size being a good compromise between quality and computational cost. Since our database is large, the parameter $\mu$ is always set to a negligible value. The patch sizes $m_s$ and $m_b$ are kept fixed for all experiments. The only parameter that must be carefully tuned to obtain good results is the regularization parameter $\lambda$. Following BM3D (); foi2006 (); SVGSM (), we have manually chosen a value of $\lambda$ via a rough grid search for each type of blur and used it for every image. We report quantitative results in Table 2 in terms of improvement in signal-to-noise ratio (ISNR). (Denoting by MSE the mean-squared error for images whose intensities are between 0 and 255, the PSNR is defined as $\mathrm{PSNR} = 10\log_{10}(255^2/\mathrm{MSE})$ and is measured in dB; the ISNR is the PSNR gain of the restored image over the degraded input. A gain of 1 dB reduces the MSE by approximately 20%.) We compare our method to the classical Richardson-Lucy algorithm richardson () and to recent state-of-the-art methods BM3D (); foi2006 (); levin (). A few values are missing in the table: these experiments were not carried out by the authors of the corresponding papers. We observe that our method is competitive with or better than the state of the art in the experiments where the supports of the blur kernels are relatively small. On the contrary, our algorithm falls significantly behind other approaches in Experiment 1, probably because our patches are too small compared to the kernel size. The simple linear model, while not state of the art, gives surprisingly good results for most of the blurs. Its combination with the dictionaries brings a significant improvement, leading to state-of-the-art performance. Qualitative results are presented in Figures 1, 2 and 3.
Table 2: ISNR results (in dB); each half of each table corresponds to a different test image.

| Method | Exp. 1 | Exp. 2 | Exp. 3 | Exp. 4 | Exp. 5 | Exp. 6 | Exp. 1 | Exp. 2 | Exp. 3 | Exp. 4 | Exp. 5 | Exp. 6 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PSNR input image | 20.76 | 22.35 | 22.29 | 24.7 | 25.53 | 23.44 | 25.84 | 27.57 | 27.35 | 29.00 | 30.74 | 28.97 |
| Richardson-Lucy richardson () | 4.47 | 5.53 | 3.58 | 0.49 | 1.21 | 1.04 | 4.80 | 5.29 | 2.71 | 0.02 | 0.26 | 0.53 |
| Sparse gradient sparseGradient () | 7.73 | 6.89 | 4.78 | 2.24 | 2.64 | 2.70 | 7.02 | 2.83 | 5.44 | 4.06 | 3.30 | 3.33 |
| SA-DCT foi2006 () | 8.55 | 8.11 | 6.33 | 3.37 | - | - | 7.79 | 7.55 | 6.10 | 4.49 | 3.56 | 3.46 |
| BM3D BM3D () | 8.34 | 8.19 | 6.40 | 3.34 | 3.73 | 3.83 | 7.97 | 7.95 | 6.53 | 4.81 | 4.18 | 4.12 |
| Linear + Dictionary | 4.76 | 8.35 | 6.47 | 3.57 | 3.94 | 3.35 | 4.83 | 7.79 | 6.13 | 5.16 | 4.34 | 4.17 |

| Method | Exp. 1 | Exp. 2 | Exp. 3 | Exp. 4 | Exp. 5 | Exp. 6 | Exp. 1 | Exp. 2 | Exp. 3 | Exp. 4 | Exp. 5 | Exp. 6 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PSNR input image | 24.11 | 26.28 | 26.10 | 28.51 | 30.16 | 28.18 | 22.49 | 23.49 | 23.35 | 24.28 | 25.02 | 23.46 |
| Richardson-Lucy richardson () | 6.46 | 5.86 | 3.68 | 0.04 | 0.25 | 0.59 | 2.26 | 2.70 | 1.13 | -0.06 | 0.12 | 0.02 |
| Sparse gradient sparseGradient () | 10.16 | 8.03 | 6.43 | 4.09 | 3.47 | 3.92 | 2.88 | 6.87 | 1.51 | 0.57 | 0.66 | 1.11 |
| SA-DCT foi2006 () | 10.5 | 9.02 | 7.74 | 4.99 | 4.14 | 4.21 | 4.79 | 5.45 | 2.54 | 1.31 | - | - |
| BM3D BM3D () | 10.85 | 9.32 | 8.14 | 5.13 | 4.79 | 5.30 | 5.86 | 7.80 | 3.94 | 1.90 | 3.17 | 1.94 |
| Linear + Dictionary | 6.99 | 9.32 | 7.71 | 5.74 | 4.98 | 5.09 | 2.65 | 7.64 | 4.59 | 2.00 | 3.11 | 1.70 |
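The ISNR criterion used in the tables above can be computed with a small helper (assuming image intensities in [0, 255]):

```python
import numpy as np

def psnr(img, ref):
    """PSNR in dB for images with intensities in [0, 255]."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def isnr(degraded, restored, ref):
    """Improvement in SNR: PSNR gain of the restored image over the input."""
    return psnr(restored, ref) - psnr(degraded, ref)
```

Since the reference cancels out of the ratio, the ISNR equals $10\log_{10}(\mathrm{MSE}_{\text{degraded}}/\mathrm{MSE}_{\text{restored}})$.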
5.2 Astronomical Images
Our method is not designed specifically for the restoration of natural images: it adapts itself to the training set and can thus be applied to various kinds of data. This versatility is illustrated here on astronomical imaging, a field where non-blind deblurring has had a major industrial impact. The experimental setting is based on a classical astronomical case: a star image has to be recovered from a blurred and noisy version of it. The blur kernel is the Hubble Space Telescope kernel as given in starck (), and the additive noise is Gaussian. The training set is constructed from several other star images.

Figure 4 presents the results of several deblurring algorithms. Our result is quantitatively better than those of the other algorithms: While the two algorithms adapted to natural images BM3D (); sparseGradient () give PSNRs of 30.8 and 31.3 dB, our method reaches 33.5 dB. In particular, our algorithm manages to recover very high intensity values on the brightest stars. This is not surprising, since several of the other algorithms use priors that do not fit astronomical images well, but it validates the capability of our method to adapt to various kinds of data.
5.3 Non-Blind Deblurring with Anisotropic Kernels
While deblurring isotropic blurs is sufficient in many applications, anisotropic blurs appear in practical cases, e.g., camera-shake blur. To test our algorithm in this setting, we used the kernels from the database of Levin et al. levin (). The local nature of our algorithm makes the treatment of large blurs computationally challenging, so we only worked with versions of the proposed kernels downsampled by a factor 2. The 8 kernels used are shown in Figure 5. White Gaussian noise of variance 2 is added to the blurry images before deblurring. We compare in Table 3 with the sparse-gradient-based algorithm of Levin et al. sparseGradient (), which is, to the best of our knowledge, the state of the art for this type of kernels.

Our method does significantly worse than sparseGradient () on three of these kernels: these are the ones where the kernel is large, and we believe this is due to the locality of our predictor. For these experiments, we worked with a fixed patch size, which may not be sufficient for the largest kernels.
Table 3 (only partially recovered; in the source, both rows carry the sparseGradient () label, so the rows for our method did not survive extraction):

| Method | | | | |
|---|---|---|---|---|
| Sparse gradient sparseGradient () | 9.04 | 6.91 | 7.49 | 10.67 |
| Sparse gradient sparseGradient () | 8.64 | 9.18 | 11.15 | 10.24 |
5.4 Digital Zoom
Following the same experimental protocol as for the deblurring experiments, we have evaluated our method on the digital zooming task. The dictionary size is set to 512 elements, and the patch sizes $m_s$ and $m_b$ are kept fixed. Digital zooming is usually performed on good-quality images with very little noise: for this reason, we use a small regularization parameter $\lambda$.
It is always difficult to evaluate the results of digital zoom algorithms quantitatively. Indeed, upsampling and downsampling methods are often subject to sub-pixel misalignments, which are visually imperceptible but cause large mean-squared-error differences. Moreover, the antialiasing filter applied during downsampling is rarely detailed, making comparisons difficult. For this experiment, we used the Matlab function imresize with bicubic interpolation to create the low-resolution images. The choice of the antialiasing filter, which is used to create the training set, is very important: with too strong an antialiasing filter, our method may over-sharpen the images, while with a weak one it may not deblur them enough.
We compare quantitatively with the method of Yang et al. YiMa (), which also uses dictionaries, thus assessing the efficiency of the discriminative approach. Their dictionary sizes are the same as ours (512), and the parameter $\lambda$ is chosen on a validation set of images. This method works in two steps: first, it predicts a high-resolution image from a filtered version of the low-resolution one using pairs of dictionaries; then, the image is cleaned using a backprojection step. We compare the results of both steps with our method in Table 4.
Our method outperforms the full method of Yang et al. YiMa () by a small margin, and their results obtained with dictionaries only (before backprojection) are significantly worse than ours. The discriminative learning of the dictionaries and the addition of the linear predictor greatly improve the results.
|Image||Cubic spline||Yang et al. YiMa () (without / with backprojection)||Ours|
|Lena||31.91||32.13 / 33.06||33.31|
|Girl||31.44||31.48 / 31.93||32.00|
|Flower||38.48||38.69 / 39.59||39.92|
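The table entries follow the standard practice in this literature of reporting peak signal-to-noise ratio (PSNR) in decibels, which can be computed as in the sketch below (assuming 8-bit intensity images):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference. Being a monotone function of the mean squared error,
    it is sensitive to the sub-pixel misalignments discussed above."""
    err = reference.astype(float) - estimate.astype(float)
    mse = np.mean(err ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```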
Figure 6 compares our results with those of Yang et al. using one image from YiMa (). We have observed that both methods improve significantly upon bicubic interpolation and give similar results (with the backprojection step for the method of Yang et al. YiMa ()).
We have also compared our method qualitatively with other works: in Figure 7, we present digital zooming results (by a factor of 4) obtained on one image from fattal (); glasdner (). Our results are in general slightly better visually than fattal () (see the texture of the baby’s hat, for instance), but slightly behind glasdner () in terms of edge sharpness (e.g., the baby’s mouth). On the other hand, the algorithm of Glasner et al. glasdner () sometimes reconstructs structures not present in the original image (e.g., square edges in the baby’s eye). In textured areas, we perform as well as glasdner ().
6 Conclusion
In this paper, we have presented a new formulation for image deblurring and digital zooming, using a supervised formulation of dictionary learning combined with a linear predictor. Thanks to a stochastic gradient descent algorithm, our approach is efficient and allows the use of millions of training samples. Experiments on natural images show that our method is competitive with the state of the art on the non-blind deblurring and digital zooming tasks. Future work will consist of extending the approach to the blind deblurring problem, where a blur kernel has to be learned at the same time as the dictionaries, and of exploiting self-similarities in images, which have proven very successful for digital zooming glasdner () and image denoising mairalNonLocal ().
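As an illustration only, the supervised training loop can be sketched as below, with hypothetical dimensions and function names. This simplified version takes the exact gradient for the linear predictor W but only a reconstructive gradient step for the dictionary D, whereas the actual task-driven formulation differentiates through the sparse coding step itself:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(D, x, lam, iters=50):
    """ISTA for min_a 0.5 * ||x - D a||^2 + lam * ||a||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = soft(a + D.T @ (x - D @ a) / L, lam / L)
    return a

def train(X, Y, k=64, lam=0.1, lr=1e-2, epochs=5, seed=0):
    """One SGD pass per (degraded patch x, sharp patch y) pair.
    Simplification: exact gradient for W, reconstructive-only
    gradient for D; columns of D are kept unit-norm."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], k))
    D /= np.linalg.norm(D, axis=0)
    W = rng.standard_normal((Y.shape[0], k)) * 0.01
    for _ in range(epochs):
        for i in rng.permutation(X.shape[1]):
            x, y = X[:, i], Y[:, i]
            a = sparse_code(D, x, lam)
            W += lr * np.outer(y - W @ a, a)   # predictor step
            D += lr * np.outer(x - D @ a, a)   # reconstruction step
            D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)
    return D, W
```

At test time, each degraded patch is sparse-coded on the learned D and the sharp patch is predicted as W @ a, which matches the convex sparse coding step described in the paper.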
Acknowledgements. This research was partially supported by the Agence Nationale de la Recherche (MGA Project) and the European Research Council (SIERRA and VideoWorld projects). In addition, Julien Mairal has been supported in part by NSF grant SES-0835531 and NSF award CCF-0939370. The authors would like to thank Jean-Luc Starck for sharing the astronomical data used in Subsection 5.2 and Jianchao Yang for providing us with his digital zooming code.
- (1) Buades, A., Coll, B., Morel, J.: A non-local algorithm for image denoising. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2005)
- (2) Chen, S., Donoho, D., Saunders, M.: Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing 20, 33–61 (1999)
- (3) Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Transactions on Image Processing 16(8), 2080–2095 (2007)
- (4) Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image restoration by sparse 3D transform-domain collaborative filtering. In: SPIE Electronic Imaging, vol. 6812 (2008)
- (5) Dias, J.: Fast GEM wavelet-based image deconvolution algorithm. IEEE International Conference on Image Processing (2003)
- (6) Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing 54(12), 3736–3745 (2006)
- (7) Fattal, R.: Image upsampling via imposed edge statistics. ACM Transactions on Graphics 26(3) (2007)
- (8) Fergus, R., Singh, B., Hertzmann, A., Roweis, S.T., Freeman, W.T.: Removing camera shake from a single photograph. ACM Trans. Graph. 25(3), 787–794 (2006)
- (9) Figueiredo, M., Nowak, R.: A bound optimization approach to wavelet-based image deconvolution. IEEE International Conference on Image Processing (2005)
- (10) Foi, A., Dabov, K., Katkovnik, V., Egiazarian, K.: Shape-adaptive DCT for denoising and image reconstruction. In: Proceedings of SPIE, vol. 6064, pp. 203–214 (2006)
- (11) Freeman, W., Jones, T., Pasztor, E.: Example-based super-resolution. IEEE Computer Graphics and Applications pp. 56–65 (2002)
- (12) Glasner, D., Bagon, S., Irani, M.: Super-resolution from a single image. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2009)
- (13) Guerrero-Colon, J., Mancera, L., Portilla, J.: Image restoration using space-variant gaussian scale mixtures in overcomplete pyramids. IEEE Transactions on Image Processing 17(1), 27–41 (2008)
- (14) Hansen, P.: Rank-deficient and discrete ill-posed problems: numerical aspects of linear inversion. Society for Industrial Mathematics (1998)
- (15) Levin, A., Fergus, R., Durand, F., Freeman, W.: Deconvolution using natural image priors. ACM Transactions on Graphics 26
- (16) Levin, A., Weiss, Y., Durand, F., Freeman, W.: Understanding and evaluating blind deconvolution algorithms. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2009)
- (17) Mairal, J., Bach, F., Ponce, J.: Task-Driven Dictionary Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence (2011). To appear.
- (18) Mairal, J., Bach, F., Ponce, J., Sapiro, G.: Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research 11, 19–60 (2010)
- (19) Mairal, J., Bach, F., Ponce, J., Sapiro, G., Zisserman, A.: Non-local sparse models for image restoration. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2009)
- (20) Mallat, S.: A Wavelet Tour of Signal Processing, Second Edition. Academic Press, New York (1999)
- (21) Olshausen, B., Field, D.: Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research 37(23), 3311–3325 (1997)
- (22) Portilla, J., Strela, V., Wainwright, M., Simoncelli, E.: Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Transactions on Image Processing 12(11), 1338–1351 (2003)
- (23) Richardson, W.: Bayesian-based iterative method of image restoration. Journal of the Optical Society of America 62(1), 55–59 (1972)
- (24) Roth, S., Black, M.J.: Fields of experts: A framework for learning image priors. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2005)
- (25) Starck, J., Murtagh, F.: Astronomical image and data analysis. Springer-Verlag (2006)
- (26) Takeda, H., Farsiu, S., Milanfar, P.: Deblurring using regularized locally adaptive kernel regression. IEEE Transactions on Image Processing 17(4), 550–563 (2008)
- (27) Tibshirani, R.: Regression shrinkage and selection via the lasso. J. Royal. Statist. Soc B. 58(1), 267–288 (1996)
- (28) Yang, J., Wright, J., Huang, T., Ma, Y.: Image super-resolution via sparse representation. IEEE Transactions on Image Processing 19(11), 2861–2873 (2010)
- (29) Yu, G., Sapiro, G., Mallat, S.: Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity. Preprint arXiv:1006.3056 (2010)