Efficient blind deblurring under high noise levels
The goal of blind image deblurring is to recover a sharp image from a motion-blurred one without knowing the camera motion. Current state-of-the-art methods perform remarkably well on images with no noise or very low noise levels. However, the noiseless assumption is not realistic, since low light conditions, which require longer exposure times, are the main cause of motion blur. In fact, motion blur and moderate to high noise often appear together. Most works approach this problem by first estimating the blur kernel and then deconvolving the noisy blurred image. In this work, we first show that current state-of-the-art kernel estimation methods based on the L0 gradient prior can be adapted to handle high noise levels while keeping their efficiency. Then, we show that a fast non-blind deconvolution method can be significantly improved by first denoising the blurry image. The proposed approach yields results equivalent to those obtained with much more computationally demanding methods.
Jérémy Anger, Mauricio Delbracio, and Gabriele Facciolo
(Work partly financed by Office of Naval Research grant N00014-17-1-2552; Agencia Nacional de Investigación e Innovación (ANII, Uruguay) grant FCE_1_2017_135458; Programme ECOS Sud UdelaR - Paris Descartes U17E04; DGA Astrid project "filmer la Terre" no. ANR-17-ASTR-0013-01; MENRT; and a DGA PhD scholarship jointly supported with FMJH.)
CMLA, ENS Cachan, CNRS, Université Paris-Saclay, 94235 Cachan, France
IIE, Universidad de la República, Uruguay
Index Terms— Image deblurring, blur kernel estimation, deconvolution, high noise
Blind image deblurring is an ill-posed image restoration problem that aims to restore a sharp image given a blurry one. Motion blur occurs when there is relative motion between the camera and the scene during the exposure time. This phenomenon is most visible in low light conditions, when the integration time has to be longer to compensate for the lack of photons. The formation of a blurry image v is frequently modeled as the convolution between the sharp image u and a latent blur kernel k, leading to

v = k ∗ u + n,    (1)

where ∗ denotes the convolution and n models the acquisition noise (usually white Gaussian noise). The goal of blind image deblurring is to recover the image u without knowing k. Most methods propose a two-step process: first estimating the blur kernel and then applying a non-blind deconvolution algorithm [1, 2, 3, 4]. The above stationary kernel model can be generally extended to a non-uniform model [5, 3]. However, this comes at the price of a non-negligible computational cost with, in general, only a minor quality improvement [6, 7].
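The formation model (1) is easy to simulate for generating test data. The following sketch uses NumPy/SciPy; the image, kernel, and noise level are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_blurry(u, k, sigma, seed=None):
    """Simulate v = k * u + n: convolve the sharp image u with the
    blur kernel k and add white Gaussian noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    v = fftconvolve(u, k, mode="same")           # stationary (uniform) blur
    return v + rng.normal(0.0, sigma, u.shape)   # acquisition noise n

# Toy example: smooth ramp image, 5x5 box kernel, 3% noise.
u = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
k = np.full((5, 5), 1.0 / 25.0)
v = simulate_blurry(u, k, sigma=0.03, seed=0)
```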
Current state-of-the-art methods, either variational [4, 11, 12] or learning based [8, 13], work very well on images with no noise or very low noise levels. However, the noiseless assumption is not realistic considering the low light conditions that lead to the motion blur in the first place.
Kernel estimation. Only a handful of blind deblurring algorithms from the literature consider the realistic case of moderate or high noise. Tai et al. show that denoising the image before estimating the kernel oversmooths details in the blurry image and thus causes errors in the estimated kernel. Instead, they propose to iteratively denoise the image and estimate the kernel; the ad-hoc denoising step uses the motion information from the kernel. Xu et al. propose a two-step kernel estimation: the first step estimates only a coarse kernel, and the second applies an iterative support refinement of the kernel that enforces sparsity without an explicit prior. Zhong et al. also observe that denoising before kernel estimation results in poor performance. To circumvent this, they design directional filters which reduce the noise level while preserving blur information in the orthogonal direction; the blur kernel is then reconstructed from projections using the inverse Radon transform. Pan et al. propose a kernel estimation method based on the L0 image gradient prior which allows high quality estimations at low noise levels. However, the authors indicate that the method under-performs in medium and high noise conditions. In this paper, we propose an adaptation of the L0-based kernel estimation method which is both efficient and robust to noise.
Non-blind deconvolution. Once the blurring kernel is estimated, most methods apply a non-blind deconvolution algorithm to restore the sharp image. The fastest deconvolution methods usually rely on image priors that do not perform well under high noise conditions (e.g., Total Variation). In the past decade, better image priors have been introduced to offer higher quality non-blind deconvolution. For example, EPLL learns a Gaussian mixture model to encode representative patches from natural images, and proposes an iterative algorithm to restore the image in the presence of Gaussian blur. Generic frameworks such as Plug-and-Play priors, and more recently Regularization by Denoising, allow using any image denoiser as a prior for restoration problems. Similarly, Zhong et al. propose to use NL-means at each step of an iterative non-blind deconvolution, and Tai and Lin incorporate a motion-aware denoiser for blind deblurring. While these methods significantly outperform basic priors such as TV, they are usually prohibitively slow due to the complex optimizations involved. Other methods propose to first invert the blur with little regularization and then denoise the result [20, 21]. While computationally efficient, these methods require solving the difficult problem of removing correlated noise.
Contributions. We study the robustness to noise of the kernel estimation method introduced by Pan et al. and improve it to handle high noise levels while maintaining good performance in terms of quality and speed. These adaptations are not specific to this particular method and can be included in most methods that alternate between sharp image prediction and kernel estimation. We then propose a non-blind deconvolution method capable of handling moderate to high noise, which uses denoising as a preprocessing step. While conceptually simple, the proposed method is competitive with state-of-the-art approaches that iterate denoising inside the algorithm and are much more computationally demanding.
2 Proposed method
The proposed method first estimates the kernel by iterating between two steps: (i) sharp image prediction and (ii) kernel estimation. Then, once the kernel has been estimated, the final image is restored using a non-blind deconvolution algorithm.
2.1 Sharp image prediction
The goal of this step is to recover the main structures of the latent sharp image using the previously estimated blur kernel and imposing additional prior information about sharp images. One very effective prior is the L0 gradient prior, introduced for image deblurring by Pan et al., leading to the following optimization problem

min_u ||u ∗ k − v||² + λ ||∇u||_0.    (2)
The energy (2) is minimized using a half-quadratic splitting formulation: introducing an auxiliary variable g standing for ∇u leads to iteratively solving two sub-problems

min_g λ ||g||_0 + β ||g − ∇u||²,    (3)

min_u ||u ∗ k − v||² + β ||∇u − g||².    (4)

The closed-form solution of sub-problem (3) is a hard thresholding operator on the gradients of u, whereas sub-problem (4) corresponds to a deconvolution of v with an attachment term between ∇u and the vector field g. Unless specified otherwise, the parameters are set as in the reference implementation. The weight λ controls the amount of details (and noise) that should be contained in u. After a complete sharp prediction step, the parameter λ is decreased until it reaches the threshold λmin.
We observed that when the blurry image is contaminated with noise and λ is small, the solution u contains spikes fitting the noise. In order to obtain a clean estimation of u, albeit a coarser one, the regularization weight λ must be increased until noise is no longer included in the solution. Since the L0 minimization acts as a hard thresholding, a larger threshold clearly results in a more conservative removal of noise artifacts. However, as the regularization increases, details that would otherwise have been restored are removed from the solution.
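The closed form of sub-problem (3) is small enough to sketch. The NumPy version below is illustrative (forward differences with replicated boundary, and a joint threshold on the gradient magnitude); it makes the noise/detail trade-off controlled by λ explicit:

```python
import numpy as np

def predict_gradients(u, lam, beta):
    """Closed-form solution of the l0 sub-problem (3): keep the gradient
    of u where its squared magnitude is at least lam/beta, zero it
    elsewhere. Raising lam removes noise spikes but also fine detail."""
    gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
    gy = np.diff(u, axis=0, append=u[-1:, :])   # replicated boundary
    keep = gx**2 + gy**2 >= lam / beta          # hard-threshold mask
    return gx * keep, gy * keep
```

On an image with one strong edge and a small noise spike, the threshold keeps the edge gradient and zeroes the spike, which is exactly the behavior exploited above.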
To summarize, the sharp image prediction step based on the L0 prior can be made robust to noise by adapting the regularization limit λmin so that noise artifacts are filtered out. This tuning should be performed per noise level.
2.2 Kernel estimation
This step uses the current sharp image prediction u and the blurry image v to estimate a blur kernel. Since the support of the blur kernel is significantly smaller than the image, this problem is usually well posed if both images are noiseless. In such conditions, simple priors for the kernel can be employed, leading to efficient computations. For example, a well-known minimization problem for the kernel estimation step is

min_k ||u ∗ k − v||² + γ ||k||².    (5)
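Under circular boundary conditions, the quadratic energy (5) has a one-line closed form in the Fourier domain. A sketch (NumPy; the value of γ is illustrative):

```python
import numpy as np

def kernel_tikhonov(u, v, gamma=1e-3):
    """Closed-form minimizer of Equation (5) assuming circular boundaries:
    k = F^-1[ conj(F(u)) F(v) / (|F(u)|^2 + gamma) ].
    Fast, but the estimate degrades quickly as noise in v grows."""
    Fu, Fv = np.fft.fft2(u), np.fft.fft2(v)
    return np.real(np.fft.ifft2(np.conj(Fu) * Fv / (np.abs(Fu) ** 2 + gamma)))
```

On noiseless data with a circular blur this recovers the kernel almost exactly; its sensitivity to noise is what motivates the richer energy proposed below.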
Variants of this energy have been proposed. For example, Cho et al. showed that formulating the data term in a filtered domain (e.g., using image gradients) improves the conditioning of the problem. This speeds up convergence when using a conjugate gradient algorithm, but it also increases the weight of the frequencies most affected by noise: as the blurry image gets noisier, noise in the estimation also increases, with little control. A trick often found in kernel estimation implementations [4, 12, 23] consists in filtering the kernel values after estimation using both a hard thresholding and a connected component filtering, which removes low-amplitude noise but also biases the estimation.
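The post-filtering trick mentioned above can be sketched as follows. This is an illustrative reconstruction using SciPy's ndimage; the 5% threshold is an assumption, not a value from any particular implementation:

```python
import numpy as np
from scipy import ndimage

def clean_kernel(k, thresh_ratio=0.05):
    """Heuristic cleanup applied after kernel estimation in several
    implementations: hard-threshold values below a fraction of the
    maximum, keep only the connected component with the largest mass,
    then renormalize the kernel to sum to one."""
    k = np.where(k >= thresh_ratio * k.max(), k, 0.0)
    labels, n = ndimage.label(k > 0)
    if n > 1:
        masses = ndimage.sum(k, labels, index=range(1, n + 1))
        k = np.where(labels == 1 + int(np.argmax(masses)), k, 0.0)
    return k / k.sum()
```

Isolated spurious spikes are removed, but mass belonging to genuinely disconnected parts of the true kernel is removed too, which is the bias discussed above.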
Instead, we propose to use better suited priors and kernel constraints by minimizing

min_k ||u ∗ k − v||² + α ||k||_1 + β ||∇k||²  subject to  k ≥ 0 and supp(k) ⊆ Ω,    (6)
where Ω is a rectangular domain covering the support of k, and α and β are regularization parameters. The regularizers ||k||_1 and ||∇k||² were motivated by Xiong et al. for their effectiveness for kernel estimation, and the spatial constraints were studied in Almeida et al. To highlight the importance of each constraint and prior, we evaluate their contribution by successively removing them and running the full blind kernel estimation method for two noise levels. Results are shown in Figure 2. The kernel (2b) was estimated using Equation (6) with both regularizers active. We then successively remove each regularizer (2c, 2d), and finally use gradients in the data term (2e). Notice how each prior helps removing noise and reduces the difference to the ground truth (2a). Using a filtered domain to estimate the kernel introduces errors that can otherwise be easily avoided.
A splitting of Equation (6), with an auxiliary variable z standing for k and a coupling weight μ, leads to two subproblems:

min_k ||u ∗ k − v||² + β ||∇k||² + μ ||k − z||²,    (7)

min_z α ||z||_1 + μ ||z − k||²  subject to  z ≥ 0 and supp(z) ⊆ Ω.    (8)

Assuming circular boundary conditions for the convolution, the subproblem (7) can be solved efficiently using two discrete Fourier transforms:

k = F⁻¹[ (conj(F(u)) F(v) + μ F(z)) / (|F(u)|² + β |F(∇)|² + μ) ].    (9)
The subproblem (8) enforces non-negativity and a given spatial support for the kernel, and its solution corresponds to a soft thresholding: z = max(k − α/(2μ), 0) inside Ω, and z = 0 outside.
Similarly to continuation methods, μ starts with a low value and is multiplied by a constant factor at each iteration. The method stops when μ reaches a maximum value, which implies that only a small, fixed number of iterations is required, with two FFTs per iteration. In comparison, conjugate gradient methods usually require more iterations, each involving several FFTs even in ideal conditions, and are unstable in the presence of noise. Finally, even though unrealistic circular boundary conditions are assumed in Equation (9), we observed that the regularization terms in conjunction with an edge-tapering procedure are sufficient to avoid boundary artifacts.
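The whole kernel estimation step can be sketched compactly. The NumPy version below is illustrative only: circular boundaries, forward-difference gradients, and parameter values chosen for the example rather than taken from the paper:

```python
import numpy as np

def estimate_kernel(u, v, alpha=1e-3, beta=1e-2,
                    mu0=1e-2, rho=2.0, mu_max=1e2, support=None):
    """Sketch of the splitting scheme of Section 2.2: alternate an FFT
    solve of the quadratic sub-problem in k with a soft-thresholding and
    projection step enforcing sparsity, non-negativity and the support
    constraint, while the coupling weight mu follows a continuation
    schedule (multiplied by rho at each iteration)."""
    H, W = u.shape
    Fu, Fv = np.fft.fft2(u), np.fft.fft2(v)
    # |F(grad)|^2 for the smoothness regularizer beta * ||grad k||^2
    dx = np.zeros((H, W)); dx[0, 0], dx[0, -1] = -1.0, 1.0
    dy = np.zeros((H, W)); dy[0, 0], dy[-1, 0] = -1.0, 1.0
    Fg2 = np.abs(np.fft.fft2(dx)) ** 2 + np.abs(np.fft.fft2(dy)) ** 2
    if support is None:                      # rectangular domain Omega
        support = np.ones((H, W), dtype=bool)
    z, mu = np.zeros((H, W)), mu0
    while mu <= mu_max:
        # quadratic sub-problem: closed form in Fourier, two FFTs
        k = np.real(np.fft.ifft2((np.conj(Fu) * Fv + mu * np.fft.fft2(z))
                                 / (np.abs(Fu) ** 2 + beta * Fg2 + mu)))
        # constrained sub-problem: soft threshold, non-negativity, support
        z = np.maximum(k - alpha / (2.0 * mu), 0.0) * support
        mu *= rho
    s = z.sum()
    return z / s if s > 0 else z
```

On noiseless synthetic data with a circular blur, this recovers a sparse, non-negative kernel close to the ground truth in roughly a dozen iterations.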
Coarse-to-fine kernel estimation. Alternating between kernel estimation and sharp image prediction allows to successfully retrieve small kernels. A coarse-to-fine scheme is generally employed to efficiently recover large kernels. Our implementation upscales the predicted sharp image by a factor of two between scales using bicubic interpolation. However, instead of running many iterations per scale as in the original method, ours requires only two iterations per scale by warm-starting the second one with the previous estimate of k. This also allows reducing the number of inner iterations required for the sharp prediction step in (3) and (4). These modifications constitute a significant speed-up with no loss of performance, as we show in the experimental section.
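The coarse-to-fine loop itself is a thin wrapper around the per-scale alternation. A skeleton sketch follows, using SciPy's `zoom` for the bicubic rescaling; `estimate_scale` is a hypothetical stand-in for the warm-started prediction/estimation iterations described above:

```python
import numpy as np
from scipy.ndimage import zoom

def coarse_to_fine(v, estimate_scale, n_scales=4):
    """Coarse-to-fine skeleton: process the blurry image v from the
    coarsest scale up to full resolution, upscaling the sharp prediction
    by a factor of two (bicubic) to warm-start the next scale.
    `estimate_scale(v_s, u_init)` returns (sharp prediction, kernel)."""
    u = zoom(v, 2.0 ** (1 - n_scales), order=3)        # coarsest init
    k = None
    for s in range(n_scales - 1, -1, -1):              # coarse -> fine
        v_s = v if s == 0 else zoom(v, 2.0 ** (-s), order=3)
        factors = (v_s.shape[0] / u.shape[0], v_s.shape[1] / u.shape[1])
        u = zoom(u, factors, order=3)                  # warm start
        u, k = estimate_scale(v_s, u)
    return u, k
```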
2.3 Non-blind deconvolution
Non-blind deconvolution algorithms generally reach high quality results in noiseless settings. The main difficulties come from errors in the estimated kernel or from frequency components canceled by the blurring kernel. Priors such as total variation (TV) are effective at reducing the ringing artifacts arising from these errors, and fast solvers exist. However, in the presence of noise, the weight associated with the regularization has to be increased, and in the case of total variation, artifacts such as staircasing start to appear, hence the need for more natural image priors.
Given recent progress in the denoising field [20, 31, 32, 33], we argue that denoising the image before non-blind deconvolution is now a viable, and very efficient, way to handle noise. While a direct inversion of the blur on a denoised image can still produce ringing artifacts, a TV prior with a low regularization weight is sufficient to remove the ringing while keeping the image free of staircasing, giving it a more natural aspect than a strong TV regularization without the denoising preprocessing. A similar approach was studied in Badri et al.
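The pipeline is simple enough to sketch end to end. In the illustrative version below, Gaussian smoothing stands in for the FFDNet denoiser and a quadratic gradient penalty for the low-weight TV prior; both are substitutions for the paper's actual components, kept only to show the structure of the method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_then_deconvolve(v, k, denoise_sigma=1.0, reg=1e-3):
    """Denoise first, then invert the blur with only a light regularization.
    The deconvolution is a closed-form Fourier solve assuming circular
    boundaries, with the kernel centered at the origin."""
    d = gaussian_filter(v, denoise_sigma)              # denoising step
    H, W = v.shape
    kh, kw = k.shape
    kpad = np.zeros((H, W))
    kpad[:kh, :kw] = k
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    Fk, Fd = np.fft.fft2(kpad), np.fft.fft2(d)
    dx = np.zeros((H, W)); dx[0, 0], dx[0, -1] = -1.0, 1.0
    dy = np.zeros((H, W)); dy[0, 0], dy[-1, 0] = -1.0, 1.0
    Fg2 = np.abs(np.fft.fft2(dx)) ** 2 + np.abs(np.fft.fft2(dy)) ** 2
    return np.real(np.fft.ifft2(np.conj(Fk) * Fd / (np.abs(Fk) ** 2 + reg * Fg2)))
```

Because the noise is removed up front, the deconvolution regularization can stay small, which is the key point of Section 2.3.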
In what follows we present several deblurring results on synthetic and real images. We compare our results against Zhong et al., which is robust to noise, Pan et al., which uses the L0 gradient prior, and more recent blind methods [10, 8]. We first assess the performance of our kernel estimation method under challenging noise levels, then show qualitative results from our non-blind deconvolution procedure before evaluating blind results. Finally, we compare blind deblurring results on a real-world image.
Noise-robust kernel estimation. In order to assess the performance of our kernel estimation, we extend the dataset of Levin et al. by adding three increasing levels of Gaussian noise to the blurry images. As a measure of the quality of the estimated kernels, we compute the root mean square error (RMSE), minimized over integer translations of the kernel. Table 1 shows the results for Pan et al., Zhong et al., and our kernel estimation on this dataset. As expected, in the noiseless case all kernels are well estimated. However, as the noise increases, the results of Pan et al. degrade quickly while Zhong's and ours remain robust.
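The evaluation metric is small enough to state precisely. A sketch of the shift-minimized RMSE over circular integer shifts; the search range is an illustrative choice:

```python
import numpy as np

def kernel_rmse(k_est, k_true, max_shift=3):
    """RMSE between an estimated and a ground-truth kernel, minimized
    over integer translations, so that an estimate that is correct up
    to a small shift is not unfairly penalized."""
    best = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(k_est, (dy, dx), axis=(0, 1))
            best = min(best, float(np.sqrt(np.mean((shifted - k_true) ** 2))))
    return best
```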
In addition to this quantitative study, Figure 3 shows a sample of the kernels estimated by the three methods at the highest noise level. Visual inspection of the kernels is in accordance with the quantitative measure: Pan et al. show no robustness to noise, the kernels of Zhong et al. exhibit a correct recovery of the kernel's shape, while our method estimates sharper kernels.
Table 1: kernel estimation RMSE for the three increasing noise levels.
| Pan et al. | 0.132 | 0.163 | 0.171 |
| Zhong et al. | 0.137 | 0.143 | 0.158 |
Non-blind deconvolution under high noise. We proposed a non-blind deblurring method based on denoising the image before deconvolution. We compare three non-blind deconvolution methods: Zhong et al., Krishnan et al., and our method, composed of an FFDNet denoising followed by a deconvolution with a low TV regularization. Figure 4 compares non-blind deconvolution results using the ground-truth kernel under a high noise level. The regularization weights of all three methods are tuned for the best average PSNR over five images from the dataset, including the image in Figure 4a. We observe that our method recovers more details than Zhong et al. while having a smoother aspect than Krishnan et al., thanks to the denoising preprocessing.
Blind deblurring comparison. The previous experiments indicated good performance for the kernel estimation and the non-blind deconvolution. We now validate the complete blind deblurring method against competitive methods on three levels of noise. Table 2 shows the PSNR (computed after registering the images with the ground truth and cropping to avoid boundary effects) over 5 images from the dataset. Running times are also reported in Table 2 for single-thread CPU execution on an Intel Xeon E5-2650. For this experiment, the same parameter setting was kept for all noise levels. A visual comparison of the results at the highest noise level is shown in Figure 1. In such challenging situations, most methods fail to estimate the kernel, and the deconvolution introduces ringing or regularization artifacts that are much less present in our result. More visual results and source code are available online at the project webpage: https://goo.gl/p5Rndy.
Table 2: PSNR for the three increasing noise levels, and running time.
| Pan et al. | 26.60 | 24.29 | 23.81 | 165s |
| Zhou et al. | 27.35 | 25.31 | 24.01 | 72s |
| Tao et al. | 24.99 | 22.76 | 20.28 | 123s |
| Zhong et al. | 24.39 | 23.84 | 23.38 | 154s |
Real world images. Figure 5 shows the results on a real-world image from Zhong et al. We estimated the noise standard deviation and applied our blind deblurring method. Even though the deblurring results are close, the method of Zhong et al. took 250s for kernel estimation and 370s for non-blind deconvolution (MATLAB implementation), while our method took 10s to estimate the kernel, 6s to denoise, and 10s to deconvolve the image (C++ implementation).
We showed that even though kernel estimation is often understood as being very unstable in the presence of noise, robust estimations are possible. First, we showed that the L0 gradient prior can actually be very robust to noise if the regularization weight is set sufficiently high, leading to a noiseless sharp image prediction. Then, the kernel estimation step should also take the noise into account, and we proposed a splitting strategy exploiting spatial and non-negativity constraints as well as two regularization terms on the kernel. Finally, for the final non-blind deconvolution, a simple and efficient way to handle high noise is to denoise the blurry image before deconvolution. Qualitative and quantitative results highlighted the strength of our method when compared to other noise-handling methods.
As future work, we would like to improve the non-blind deconvolution part by using a network trained on blurry images, and to use other restoration methods to remove, for example, JPEG compression artifacts.
-  R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, “Removing camera shake from a single photograph,” ACM Trans. Graph., vol. 25, no. 3, pp. 787, 2006.
-  S. Cho and S. Lee, “Fast motion deblurring,” ACM Trans. Graph., vol. 28, no. 5, pp. 1, 2009.
-  M. Hirsch, C. J. Schuler, S. Harmeling, and B. Schölkopf, “Fast removal of non-uniform camera shake,” in ICCV, 2011.
-  J. Pan, Z. Hu, Z. Su, and M.-h. Yang, “Deblurring Text Images via L0-Regularized Intensity and Gradient Prior,” in CVPR, 2014.
-  O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, “Non-uniform deblurring for shaken images.,” Int. J. Comput. Vis., vol. 98, no. 2, pp. 168–186, 2012.
-  W.-S. Lai, Z. Hu, N. Ahuja, and M.-H. Yang, “A Comparative Study for Single Image Blind Deblurring,” CVPR, 2016.
-  R. Köhler, M. Hirsch, B. Mohler, B. Schölkopf, and S. Harmeling, “Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database,” LNCS, vol. 7578, no. 7, pp. 27–40, 2012.
-  X. Tao, H. Gao, Y. Wang, X. Shen, J. Wang, and J. Jia, “Scale-recurrent Network for Deep Image Deblurring,” in CVPR, 2018.
-  L. Zhong, S. Cho, D. Metaxas, S. Paris, and J. Wang, “Handling noise in single image deblurring using directional filters,” in CVPR, 2013.
-  X. Zhou, M. Vega, F. Zhou, R. Molina, and A. K. Katsaggelos, “Fast Bayesian blind deconvolution with Huber Super Gaussian priors,” Digital Signal Processing: A Review Journal, vol. 60, pp. 122–133, 2017.
-  L. Xu, S. Zheng, and J. Jia, “Unnatural L0 sparse representation for natural image deblurring,” in CVPR, 2013.
-  J. Pan, D. Sun, H. Pfister, and M.-H. Yang, “Blind Image Deblurring Using Dark Channel Prior,” in CVPR, 2016.
-  J. Zhang, J. Pan, J. Ren, Y. Song, L. Bao, R. W. H. Lau, and M.-H. Yang, “Dynamic Scene Deblurring Using Spatially Variant Recurrent Neural Networks,” in CVPR, 2018.
-  Y. W. Tai and S. Lin, “Motion-aware noise filtering for deblurring of noisy and blurry images,” in CVPR, 2012.
-  L. Xu and J. Jia, “Two-phase kernel estimation for robust motion deblurring,” LNCS, vol. 6311, no. 1, pp. 157–170, 2010.
-  J. Pan, Z. Hu, Z. Su, and M. H. Yang, “L0-Regularized Intensity and Gradient Prior for Deblurring Text Images and beyond,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 2, pp. 342–355, 2015.
-  D. Zoran and Y. Weiss, “From learning models of natural image patches to whole image restoration,” in ICCV, 2011.
-  S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in GlobalSIP. IEEE, 2013, pp. 945–948.
-  Y. Romano, M. Elad, and P. Milanfar, “The little engine that could: Regularization by denoising (red),” SIAM J Imaging Sci, vol. 10, no. 4, pp. 1804–1844, 2017.
-  K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Process., vol. 16, no. 8, pp. 2080–2095, 2007.
-  C. J. Schuler, H. Christopher Burger, S. Harmeling, and B. Scholkopf, “A machine learning approach for non-blind image deconvolution,” in CVPR, 2013.
-  J. Pan and Z. Su, “Fast l0-Regularized Kernel Estimation for Robust Motion Deblurring,” IEEE Signal Processing Letters, vol. 20, no. 9, pp. 841–844, 2013.
-  A. Chakrabarti, “A neural approach to blind motion deblurring,” in LNCS, 2016, vol. 9907 LNCS, pp. 221–235.
-  N. Xiong, R. W. Liu, M. Liang, D. Wu, Z. Liu, and H. Wu, “Effective alternating direction optimization methods for sparsity-constrained blind image deblurring,” Sensors (Switzerland), vol. 17, no. 1, pp. 1–27, 2017.
-  M. S. C. Almeida and M. Figueiredo, “Blind Image Deblurring With Unknown Boundaries Using the Alternating Direction Method of Multipliers,” in IEEE ICIP, 2013.
-  D. Geman and C. Yang, “Nonlinear image recovery with half-quadratic regularization,” IEEE Trans. Image Process., vol. 4, no. 7, pp. 932–946, 1995.
-  S. Reeves, “Fast image restoration without boundary artifacts,” IEEE Trans. Image Process., vol. 14, no. 10, pp. 1448–1453, oct 2005.
-  J. Anger, G. Facciolo, and M. Delbracio, “Blind image deblurring using the l0 gradient prior,” Image Proc. OnLine, 2019.
-  L. I. Rudin and S. Osher, “Total variation based image restoration with free local constraints,” in IEEE ICIP, 1994.
-  D. Krishnan and R. Fergus, “Fast Image Deconvolution using Hyper-Laplacian Priors,” in Advances in Neural Information Processing Systems, 2009, pp. 1033–1041.
-  M. Lebrun, A. Buades, and J.-M. Morel, “Implementation of the "Non-Local Bayes" (NL-Bayes) Image Denoising Algorithm,” Image Proc. OnLine, vol. 3, pp. 1–42, 2013.
-  G. Facciolo, N. Pierazzo, and J.-M. Morel, “Conservative Scale Recomposition for Multiscale Denoising (The Devil is in the High Frequency Detail),” SIAM J Imaging Sci, vol. 10, no. 3, pp. 1603–1626, jan 2017.
-  K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn based image denoising,” IEEE Trans. Image Process., 2018.
-  H. Badri and H. Yahia, “Handling noise in image deconvolution with local/non-local priors,” in IEEE ICIP, 2014.
-  M. Tassano, J. Delon, and T. Veit, “An Analysis and Implementation of the FFDNet Image Denoising Method,” Image Proc. OnLine, vol. 9, pp. 1–25, 2019.
-  A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in CVPR, 2009.
-  J. Anger, G. Facciolo, and M. Delbracio, “Estimating an image’s blur kernel using natural image statistics, and deblurring it: An analysis of the goldstein-fattal method,” Image Proc. OnLine, vol. 8, pp. 282–304, 2018.