Semi-Blind Spatially-Variant Deconvolution in Optical Microscopy with Local Point Spread Function Estimation by Use of Convolutional Neural Networks
Abstract
We present a semi-blind, spatially-variant deconvolution technique aimed at optical microscopy that combines a local estimation step of the point spread function (PSF) with deconvolution by a spatially-variant, regularized Richardson-Lucy algorithm [1]. To find the local PSF map in a computationally tractable way, we train a convolutional neural network to perform regression of an optical parametric model on synthetically blurred image patches. We deconvolved both synthetic and experimentally-acquired data, and achieved an average image SNR improvement of 1.00 dB over other deconvolution algorithms.
Adrian Shajkofci, Michael Liebling
Idiap Research Institute, CH-1920 Martigny, Switzerland
École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
Electrical & Computer Engineering, University of California, Santa Barbara, CA 93106, USA
Microscopy, blind deconvolution, point spread function, convolutional neural networks
1 Introduction
Optical microscopy is a powerful tool to comprehend biological systems, enabling researchers and physicians to acquire qualitative and quantitative data about cellular function, organ development, or diseases. However, light traveling through any imaging system undergoes diffraction, which leads to image blur [2]. This represents an intrinsic limit and the determining factor for the resolution of an optical instrument, and thus limits visual access to details. Indeed, the optical system only collects a fraction of the light emitted by any one point on the object, and cannot focus the light into a perfect image. Instead, the light spreads into a three-dimensional diffraction pattern. Image formation can be modeled as the convolution of the original object with a PSF, which sums up the optical aberrations [3]. For thin, yet not flat, samples, the PSF remains shift-invariant within small areas of the 2D image, but the three-dimensional depth of the imaged object produces a local blur. Supplying a PSF that corresponds to this blur to a deconvolution algorithm can restore details in the image [4].
Deconvolution techniques can be categorized into three classes: (1) non-blind methods, (2) entirely blind methods, and (3) parametric semi-blind algorithms. Non-blind methods require knowledge of the PSF [5]. One of the main practical difficulties is determining the PSF that characterizes the actual optical system without measuring it empirically by acquiring a 3D image of a fluorescent bead, which is a tedious and time-consuming calibration step. The two latter classes fall into blind deconvolution (BD) techniques, which improve the image without prior knowledge of the PSF, the object, or other optical parameters. Entirely blind algorithms, such as [6], are based on the optimization of a penalty function or a maximum a posteriori (MAP) estimation of the latent image or kernel [7]. However, these methods typically rely on strong constraints, such as sharpness along object edges, and do not always generalize to unexpected or noisy types of data [8], which are common in microscopy images. Also, many BD techniques are computationally expensive, especially for larger convolution kernels, and assume spatially invariant PSFs.
Finally, parametric or semi-blind algorithms are blind methods that are constrained by knowledge about the transfer function distribution, such as a diffraction model or a prior on the shape of the PSF ([9], [10]). Parametric models allow reducing the complexity of the optimization problem, increasing the overall robustness, and avoiding issues such as overfitting. However, it remains hard to estimate the parameters from experimental data. We will focus on this third class of deconvolution methods, by addressing the central problem of how to best infer the parameters without measuring any of them experimentally.
Machine learning recently improved the ability to classify images [11], detect objects, or describe content [12]. Convolutional Neural Networks (CNNs) [13], in particular, are built for learning new optimal representations of image data and perform self-regulating feature extraction [14]. Because of their ability to learn correlations between high- and low-resolution training samples, CNNs appear well adapted to our problem of determining the blur kernel. A similar reasoning led to recent results in [15] and [16], where the direction and amplitude of motion blur were determined by a CNN classifier from blurred images.
Here we present a spatially-variant BD technique aimed at microscopy of thin, yet non-flat objects. Our method combines local determination of the PSF and spatially-variant deconvolution using a regularized Richardson-Lucy (RL) algorithm [1]. To find the PSF in a computationally tractable way, we train a CNN to perform regression of model parameters on synthetically blurred image patches. The novel aspects of our approach are: (1) Our method does not require the experimental measurement of a PSF; only synthetic training data is necessary. (2) Compared to non-parametric BD, the problem complexity remains low and therefore is more easily amenable to optimization. (3) Parameters with a physical meaning are inferred from the image itself. (4) The algorithm is computationally efficient, resulting in a near real-time kernel regression and mapping at the expense of a much longer, yet straightforward, training phase.
In Section 2, we describe each step of the method. In Section 3, we present experiments with synthetic and acquired data. In Section 4, we conclude.
2 Method
We describe a deconvolution method for a 2D image of an object in 3D space, such as a thin sample suspended in a gel. Blur in the acquired 2D image $g$ can be characterized by an impulse response $h$, which we assume varies along the two lateral coordinates $\mathbf{s} = (x, y)$ of the image. Thus, the optical system can be represented by the following image formation equation:

$g(\mathbf{s}) = \int h(\mathbf{s}, \mathbf{u})\, f(\mathbf{u})\, \mathrm{d}\mathbf{u}, \qquad (1)$

where $f$ denotes the original object.
In a nutshell, we propose the following steps to infer the parameters of the spatially-dependent function $h$:

1. We develop a parametric PSF model that allows the generation of PSF/parameter pairs, and gather a training library of microscopy images degraded by space-invariant convolution with synthetic PSFs.

2. We train a CNN on the above dataset to recognize the PSF model parameters given a single blurred patch (Figure 1).

3. Using a sliding window, we extract small patches from a large input image. For each patch, we regress the PSF parameters using the CNN. We combine the results and generate a map of PSFs over the input image.

4. We recover a deblurred image from the estimated map of PSFs using a Total Variation (TV) regularized space-variant RL algorithm.
In the following subsections, we provide details on each of these four steps.
2.1 Training library generation
Optical aberrations, such as defocus, astigmatism, or spherical aberration, can be modeled with superpositions of Zernike polynomials in the expansion of the pupil function [17]:
$P(\boldsymbol{\rho}) = \sum_{j=0}^{N} a_j Z_j(\boldsymbol{\rho}), \qquad (2)$

where $\boldsymbol{\rho}$ denotes the two-vector of pupil-plane coordinates, $N$ the maximal order of considered aberrations, and $a_j$ the parameters corresponding to each Zernike polynomial $Z_j$. The pupil function $P$ can ultimately be converted into a PSF as $h = |\mathcal{F}\{P\}|^2$, with $\mathcal{F}$ the Fourier transform. A training dataset is created by convolving images $f_k$, $k = 1, \ldots, K$, with generated PSFs $h_{\mathbf{a}_k}$ whose parameters $\mathbf{a}_k$ are drawn from a uniform distribution. The degraded training images are obtained as follows:
$g_k(\mathbf{s}) = (f_k * h_{\mathbf{a}_k})(\mathbf{s}). \qquad (3)$
We crop the images $g_k$ to a fixed patch size and pair them with their respective parameter vectors $\mathbf{a}_k$ in order to form the training set (see Figure 1, part 2.1).
Implementation: We gathered a library of microscopy images from publicly available sources, augmented with synthetic cell images, and blurred them with PSFs generated from parameters drawn from a uniform distribution. Depending on the number of Zernike orders used, the covered aberrations were defocus alone, or defocus plus astigmatism. The images were cropped to the training patch size. We added Poisson noise and subtracted the pixel-wise mean from every patch. Images that did not meet a minimal variance and white-pixel ratio were rejected from the library. We applied image rotations to further augment the dataset to 2 million pairs.
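As a minimal sketch of this generation step, the pair of functions below builds a PSF from the pupil-function Fourier transform of Eq. (2) and degrades an image as in Eq. (3). The function names, the defocus-only Zernike model, the 64-pixel pupil grid, the aperture size, and the noise scaling are our own illustrative assumptions, not the paper's exact implementation:

```python
# Sketch of training-pair generation (Sec. 2.1), assuming numpy/scipy and
# a defocus-only Zernike model; all names and constants are illustrative.
import numpy as np
from scipy.signal import fftconvolve

def psf_from_zernike(a_defocus, size=64, aperture=0.4):
    """Generate a PSF as |FFT(pupil)|^2 with a single defocus Zernike term."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    rho = np.hypot(x, y) / aperture                  # normalized pupil radius
    inside = rho <= 1.0                              # circular aperture support
    defocus = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)    # Zernike Z_2^0 (defocus)
    pupil = inside * np.exp(2j * np.pi * a_defocus * defocus)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return psf / psf.sum()                           # energy-normalized kernel

def make_training_pair(image, rng):
    """Blur a source image with a random-parameter PSF and add Poisson noise."""
    a = rng.uniform(0.0, 2.0)                        # parameter drawn uniformly
    blurred = fftconvolve(image, psf_from_zernike(a), mode='same')
    noisy = rng.poisson(np.maximum(blurred, 0.0) * 100.0) / 100.0
    return noisy - noisy.mean(), a                   # mean-subtracted patch, label
```

A patch/label pair is then obtained with, e.g., `make_training_pair(img, np.random.default_rng(0))`.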
2.2 CNN training for PSF parameter regression
Given an image patch as input, we wish to estimate the parameters of the PSF that was used to degrade the patch. For that purpose, we train a CNN for regression of these parameters on the established training set (Figure 1). We train the neural network to learn the Zernike parameters that specify the PSF by minimizing the Euclidean loss function:
$E = \sum_{j=0}^{N} \left( \hat{a}_j^{(k)} - a_j^{(k)} \right)^2, \qquad (4)$

where $\hat{a}_j^{(k)}$ is the current estimate of the $j$th Zernike parameter on the $k$th training image and $a_j^{(k)}$ its ground truth as stored in the training set.
Implementation: We trained two CNN models that we selected based on their good performance in the ImageNet competition: AlexNet [11], adapted to grayscale input images, and a 34-layer ResNet [18]. We modified the last fully-connected layer to output the regressed Zernike parameters. We trained both networks with the Adam optimizer in Caffe [19] for 10 epochs.
2.3 Blind spatially-variant estimation of the local PSF parameter map
Given the trained model, which recovers the degradation parameters from a single image patch, we now turn to the problem of locally estimating the parameters of the PSF that degraded a larger input image. To achieve this, we slide an overlapping window with a fixed stride over the input image and feed each patch into the locally-invariant regression CNN trained in 2.2 (see Figure 1, part 2.3). We store the resulting parameters in a map $\{\mathbf{a}_p\}_{p=1}^{P}$, where $\mathbf{a}_p$ parameterizes the PSF $h_p$ corresponding to the $p$th patch and $P$ is the total number of patches, and thus of local PSF kernels.
Implementation: The window size was the same as that of the training patches; the number of spatially-dependent PSF kernels then follows from the image size and the stride. In order to avoid disparities and artifacts, we smoothed the parameter map with median filtering.
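The sliding-window mapping above can be sketched as follows, assuming numpy/scipy; `regress_params` is a hypothetical stand-in for the trained CNN, and the patch size, stride, and 3x3 median filter are illustrative choices:

```python
# Sketch of the local PSF-parameter mapping (Sec. 2.3); `regress_params`
# stands in for the CNN regressor and returns one scalar parameter per patch.
import numpy as np
from scipy.ndimage import median_filter

def psf_parameter_map(image, regress_params, patch=64, stride=32):
    """Slide an overlapping window over `image`, regress a PSF parameter for
    each patch, and median-filter the map to suppress outliers."""
    rows = range(0, image.shape[0] - patch + 1, stride)
    cols = range(0, image.shape[1] - patch + 1, stride)
    amap = np.array([[regress_params(image[r:r + patch, c:c + patch])
                      for c in cols] for r in rows])
    return median_filter(amap, size=3)   # smoothed map of local parameters

# Usage with a stand-in regressor that simply returns the patch variance:
img = np.random.default_rng(1).random((256, 256))
amap = psf_parameter_map(img, lambda p: p.var())
```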
2.4 Spatially variant deconvolution
Given the degraded image and a local map of PSF parameters, we can now restore the input using TV-regularized RL deconvolution. RL is an iterative maximum-likelihood approach derived from Bayes' theorem that assumes the noise follows a Poisson distribution [20], which is well adapted to microscopy. The method is subject to noise amplification, which can, however, be counterbalanced by a regularization term that penalizes the norm of the gradient of the signal [1]. Here, we assume that the PSF is spatially invariant in small regions of the image. We detail these steps in the subsections below.
Spatially variant filtering
Spatially-variant convolution techniques have been extensively reviewed by Denis et al. (2015) [21]. Hirsch et al. have shown that the local invariance assumption can be improved upon by convolving the input with every local PSF and then reconstructing the image using interpolation [6]. We extend this method by using it as the basic convolution block in the TV-RL algorithm. Rather than interpolating deconvolved images, the method interpolates the PSF for each point in the image space [22]. The idea of this pseudo space-variant filtering is: (i) to cover the image with overlapping patches using smooth interpolation, (ii) to apply a different PSF to each patch, and (iii) to sum the patches to obtain a single large image. The equivalent convolution can be written as:
$g(\mathbf{s}) = \sum_{p=1}^{P} \left( h_p * (w_p \cdot f) \right)(\mathbf{s}), \qquad (5)$

where $w_p$ is the masking operator of the $p$th patch. We illustrate the masking and deconvolution steps in Figure 2.
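A minimal numerical sketch of Eq. (5), assuming numpy/scipy: overlapping tent-shaped masks that sum to one stand in for the paper's smooth interpolation weights, each masked copy of the image is convolved with its local PSF, and the results are summed. The function names and the 2x2 patch grid are our own:

```python
# Sketch of pseudo space-variant convolution, Eq. (5): sum_p h_p * (w_p . f).
import numpy as np
from scipy.signal import fftconvolve

def space_variant_convolve(image, psfs, masks):
    """Mask the image with each w_p, convolve with the local PSF h_p,
    and accumulate the patches into one output image."""
    out = np.zeros_like(image)
    for h_p, w_p in zip(psfs, masks):
        out += fftconvolve(image * w_p, h_p, mode='same')
    return out

def partition_of_unity(shape, grid=(2, 2)):
    """Smooth overlapping tent masks that sum to one everywhere."""
    masks = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            wy = np.clip(1 - np.abs(np.linspace(0, grid[0] - 1, shape[0]) - i), 0, 1)
            wx = np.clip(1 - np.abs(np.linspace(0, grid[1] - 1, shape[1]) - j), 0, 1)
            masks.append(np.outer(wy, wx))   # separable bilinear-style weight
    return masks
```

With identical delta kernels for every patch, the operator reduces to the identity, which is a quick sanity check of the mask normalization.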
[Table 1: parameter regression accuracy and SNR comparison of the deconvolution methods; the numerical values could not be recovered from the extraction.]
Regularization of the RL algorithm
In noisy images, the RL algorithm tends to amplify small variations. TV regularization is used to obtain a smooth solution while preserving edges [1]. The resulting image at each RL iteration becomes:
$f^{(t+1)} = \sum_{p=1}^{P} w_p \cdot f_p^{(t+1)}, \quad f_p^{(t+1)} = \left[ \left( \frac{g}{h_p * f^{(t)}} \right) * \check{h}_p \right] \cdot \frac{f^{(t)}}{1 - \lambda \operatorname{div}\!\left( \frac{\nabla f^{(t)}}{|\nabla f^{(t)}|} \right)}, \qquad (6)$

with $g$ the blurry image, $f^{(t)}$ the deconvolved image at iteration $t$, $f_p^{(t)}$ the deconvolved patch at iteration $t$, $P$ the number of patches (and different PSFs) in one image, $h_p$ the PSF for patch $p$ (with $\check{h}_p$ its mirrored version), and $\lambda$ the regularization factor. $\nabla$ is the finite difference operator, which approximates the spatial gradient.
Implementation: Here, $w_p$ was a bilinear interpolating function. If $\lambda$ is small, RL is dominated by the data, and if $\lambda$ is large, RL is dominated by the regularization [1]. Therefore, we used a balanced value of $\lambda$.
Since using FFT-based calculations implies that the convolution operator is circulant, we had to take the field extension into account to prevent spatial aliasing. The number of RL iterations was fixed for all experiments.
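One update of the TV-regularized RL iteration can be sketched as below in its spatially-invariant form (the space-variant version would replace the convolutions by the masked sums of Eq. (5)); numpy/scipy are assumed, and the function name, `lam`, and `eps` are illustrative, not the paper's values:

```python
# Sketch of one TV-regularized Richardson-Lucy update (Eq. 6),
# spatially-invariant form; lam and eps are illustrative constants.
import numpy as np
from scipy.signal import fftconvolve

def tv_rl_step(f, g, h, lam=0.002, eps=1e-8):
    """One RL iteration with a total-variation regularization term."""
    # Multiplicative RL correction: (g / (h * f)) correlated with h
    ratio = g / (fftconvolve(f, h, mode='same') + eps)
    correction = fftconvolve(ratio, h[::-1, ::-1], mode='same')
    # TV term: divergence of the normalized spatial gradient of f
    gy, gx = np.gradient(f)
    norm = np.sqrt(gx**2 + gy**2) + eps
    div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
    return np.maximum(f * correction / (1.0 - lam * div), 0.0)
```

Iterating this step on an image blurred with a known Gaussian kernel progressively sharpens it while the TV term damps noise amplification.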
3 Experimental results
We validated the method using synthetic and live microscopy experiments. The results were compared, using the SNR and SSIM [26], with three BD methods: 1. MATLAB's deconvblind [23]; 2. the non-CNN deconvolution method of [24]; and 3. the space-variant BD described in [25]. To account for gain differences between methods and boundary effects, we computed the maximum SNR obtainable after linear scaling and offset of the gray values.
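Our reading of this scale-invariant comparison metric can be sketched as follows, assuming numpy: the affine gray-value transform that best maps the estimate onto the reference is found by least squares before the SNR is computed. The function name and the exact SNR normalization are our assumptions:

```python
# Sketch of the maximum SNR obtainable after linear scaling and offset
# of the gray values (Sec. 3); normalization convention is illustrative.
import numpy as np

def max_snr_db(reference, estimate):
    """SNR (dB) of `estimate` vs `reference` after the least-squares
    affine gray-value fit a*estimate + b."""
    x, y = estimate.ravel(), reference.ravel()
    a, b = np.polyfit(x, y, 1)           # best scaling a and offset b
    residual = y - (a * x + b)           # remaining reconstruction error
    return 10.0 * np.log10(np.sum(y**2) / np.sum(residual**2))
```

An estimate that differs from the reference only by gain and offset (plus small noise) then scores a high SNR, as intended.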
3.1 Synthetic data
We generated a grayscale grid pattern, as shown in Figure 2. Using the technique in Equation (5), we degraded each quadrant of the input image with a specific, randomly-generated PSF whose parameters were drawn from a uniform distribution. We subsequently inferred a PSF map from the blurry image via the CNN, as described in 2.3. Finally, we deconvolved the image and determined the match between the Zernike parameters used for PSF generation and the recovered parameters, as well as the reconstruction quality. The results in Table 1 indicate an improvement of the spatially-variant BD over invariant BD.
3.2 Acquired data
We used a widefield microscope to acquire an image of a housefly wing. The sample was tilted 20 degrees so that only the central part of the image was in focus. The goal was to recover the in- and out-of-focus PSFs in order to restore the whole image. Our method gives visual results similar to [24] (Figure 3). We generated the PSF map in 18 milliseconds with a consumer-grade GPU; the additional time needed for determining the PSF map is negligible compared to the time needed for deconvolution.
4 Discussion
Space-variant BD is a more complex process than space-invariant deconvolution. However, our experiments showed that it is possible to add parametric constraints to the problem and thereby map patches of spatially-invariant PSFs onto the input image.
With our CNN-based method, we were able to detect the original blur kernels of synthetically degraded images with an average accuracy of 91.1% (Table 1), given only synthetic images as training input (no experimental measurement of the PSF was necessary). From these results, we were able to deconvolve these images with an SNR on average 1.00 dB higher than with other blind deconvolution techniques. We applied the method to experimentally acquired data and observed a visual improvement similar to, or in some regions better than, that of other methods. The main limitation of our approach is that the number of regressed Zernike polynomials has an important influence on the performance. For example, the defocus parameter is well recovered, but other parameters, such as coma, have a very low regression accuracy. We will focus future efforts on CNN architectures that are more specific to this problem. In contrast to end-to-end deblurring methods such as [27], users can easily customize our algorithm with other parametric models, CNN networks, or deconvolution techniques for specific applications. Our method is therefore also suitable for applications other than image enhancement, such as autofocus or depth determination from focus [28]. Most importantly, it could simplify procedures that require physical measurements of PSFs, such as [29], where deconvolution improved PALM/STORM microscopy using PSFs from a set of calibration images.
Source code, documentation and data will be made available upon acceptance.
5 References
 N. Dey, L. Blanc-Féraud, C. Zimmer, P. Roux, Z. Kam, J.-C. Olivo-Marin, and J. Zerubia, “Richardson–Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution,” Microsc. Res. Tech., vol. 69, no. 4, pp. 260–266, Apr. 2006.
 S. F. Gibson and F. Lanni, “Diffraction by a circular aperture as a model for three-dimensional optical microscopy,” J. Opt. Soc. Am. A, vol. 6, no. 9, pp. 1357–1367, Sept. 1989.
 D. Sage, L. Donati, F. Soulez, D. Fortun, G. Schmit, A. Seitz, R. Guiet, C. Vonesch, and M. Unser, “DeconvolutionLab2: An open-source software for deconvolution microscopy,” Methods, vol. 115, pp. 28–41, Feb. 2017.
 A. Griffa, N. Garin, and D. Sage, “Comparison of deconvolution software in 3D microscopy: A user point of view—part 1,” GIT Imaging Microsc., vol. 12, no. 1, pp. 43–45, 2010.
 F. Soulez, “A “Learn 2D, Apply 3D” method for 3D deconvolution microscopy,” in IEEE ISBI 2014, Apr. 2014, pp. 1075–1078.
 M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling, “Efficient filter flow for space-variant multiframe blind deconvolution,” in IEEE CVPR 2010, San Francisco, CA, June 2010, pp. 607–614.
 A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding blind deconvolution algorithms,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2354–2367, Dec. 2011.
 D. A. Fish, A. M. Brinicombe, E. R. Pike, and J. G. Walker, “Blind deconvolution by means of the Richardson–Lucy algorithm,” J. Opt. Soc. Am. A, vol. 12, no. 1, pp. 58–65, Jan. 1995.
 F. Aguet, D. Van De Ville, and M. Unser, “Model-based 2.5-D deconvolution for extended depth of field in brightfield microscopy,” IEEE Trans. Image Process., vol. 17, no. 7, pp. 1144–1153, July 2008.
 R. Morin, S. Bidon, A. Basarab, and D. Kouamé, “Semi-blind deconvolution for resolution enhancement in ultrasound imaging,” in IEEE ICIP 2013, Sept. 2013, pp. 1413–1417.
 A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems 25, pp. 1097–1105. Curran Associates, Inc., 2012.
 R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in IEEE CVPR 2014, 2014.
 Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradientbased learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
 Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015.
 J. Sun, W. Cao, Z. Xu, and J. Ponce, “Learning a Convolutional Neural Network for nonuniform motion blur removal,” in IEEE CVPR 2015, 2015.
 S. Nah, T. H. Kim, and K. M. Lee, “Deep multiscale convolutional neural network for dynamic scene deblurring,” in IEEE CVPR 2017, 2017.
 F. Zernike, “Beugungstheorie des Schneidenverfahrens und seiner verbesserten Form, der Phasenkontrastmethode,” Physica, vol. 1, no. 7, pp. 689–704, May 1934.
 K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE CVPR 2016, 2016, pp. 770–778.
 Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” arXiv:1408.5093 [cs], June 2014.
 W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am., vol. 62, no. 1, pp. 55–59, Jan. 1972.
 L. Denis, E. Thiébaut, F. Soulez, J.-M. Becker, and R. Mourya, “Fast approximations of shift-variant blur,” Int. J. Comput. Vis., vol. 115, no. 3, pp. 253–278, Dec. 2015.
 M. Temerinac-Ott, O. Ronneberger, R. Nitschke, W. Driever, and H. Burkhardt, “Spatially variant Lucy-Richardson deconvolution for multiview fusion of microscopical 3D images,” in IEEE ISBI 2011, Mar. 2011, pp. 899–904.
 T. J. Holmes, S. Bhattacharyya, J. A. Cooper, D. Hanzel, V. Krishnamurthi, W.-C. Lin, B. Roysam, D. H. Szarowski, and J. N. Turner, “Light microscopic images reconstructed by maximum likelihood deconvolution,” in Handbook of Biological Confocal Microscopy, pp. 389–402. Springer, Boston, MA, 1995.
 J. Kotera, F. Šroubek, and P. Milanfar, “Blind deconvolution using alternating maximum a posteriori estimation with heavy-tailed priors,” in International Conference on Computer Analysis of Images and Patterns, 2013, pp. 59–66, Springer.
 O. Whyte, J. Sivic, and A. Zisserman, “Deblurring shaken and partially saturated images,” Int. J. Comput. Vis., vol. 110, no. 2, pp. 185–201, Nov. 2014.
 Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
 P. Wieschollek, M. Hirsch, B. Schölkopf, and H. P. Lensch, “Learning blind motion deblurring,” in IEEE ICCV 2017, 2017, pp. 231–240.
 V. M. Bove, “Entropy-based depth from focus,” J. Opt. Soc. Am. A, vol. 10, no. 4, pp. 561–566, 1993.
 S. Quirin, S. R. P. Pavani, and R. Piestun, “Optimal 3D single-molecule localization for superresolution microscopy with aberrations and engineered point spread functions,” Proc. Natl. Acad. Sci., vol. 109, no. 3, pp. 675–679, 2012.