Simultaneous compressive image recovery and deep denoiser learning from undersampled measurements


Magauiya Zhussip
Ulsan National Institute of Science and Technology (UNIST)
mzhussip@unist.ac.kr
Se Young Chun
UNIST
sychun@unist.ac.kr
Abstract

Compressive image recovery utilizes sparse image priors such as the wavelet norm, the total-variation (TV) norm, or self-similarity to reconstruct good quality images from highly compressive samples. Recently, there have been attempts to exploit data-driven image priors learned from massive amounts of clean images in compressive image recovery, such as the LDAMP algorithm. By utilizing large-scale noiseless images to train deep neural network denoisers, LDAMP outperformed other conventional compressive image reconstruction methods. However, one drawback of LDAMP is that large-scale noiseless images must be acquired to train the deep learning based denoisers. In this article, we propose a method for simultaneous compressive image recovery and deep denoiser learning from undersampled measurements, which enables compressive image recovery methods to use data-driven image priors when only large-scale compressive samples are available, without ground truth images. By utilizing the structure of LDAMP and a Stein's Unbiased Risk Estimator (SURE) based deep neural network denoiser, our proposed methods achieved better performance than conventional BM3D-AMP and LDAMP trained on the results of BM3D-AMP (for training and/or testing data), for all cases with i.i.d. Gaussian random and coded diffraction measurement matrices at various compression ratios. We also investigated accurate noise level estimation in LDAMP for the coded diffraction measurement matrix to train deep denoiser networks for high performance.

Preprint. Work in progress.

1 Introduction

Compressive sensing has provided ways to sample and to compress signals simultaneously at the price of relatively long signal reconstruction time [6, 10]. Since the advent of compressive sampling, there have been many attempts to apply theories to application areas such as sparse MRI [21, 30], sparse view CT [7], hyperspectral imaging [20], coded aperture imaging [22, 2], radar imaging [28], transmission electron microscopy [32], and radio astronomy [35]. For example, sparse MRI has been investigated intensively and recently has become commercialized for practical usage.

Typically, compressive sampling is modeled as a linear equation:

(1)  y = Ax + ε,

where y ∈ R^m is a measurement vector, A ∈ R^{m×n} is a sensing matrix with m < n, x ∈ R^n is an unknown signal to reconstruct, and ε ∈ R^m is a noise vector.
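The measurement model (1) is easy to simulate. Below is a minimal NumPy sketch with illustrative sizes (the dimensions, sparsity level, and noise scale are our choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                                 # signal length n, measurements m < n
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)  # a sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. Gaussian sensing matrix, unit-norm columns
eps = 0.01 * rng.standard_normal(m)            # measurement noise
y = A @ x + eps                                # compressive measurements per (1)
```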

1.1 Priors in compressive image recovery

Signal recovery from compressive samples requires promoting the sparsity of the reconstructed signal (or utilizing prior knowledge about the signal) while enforcing data consistency between the compressed measurements and the reconstructed signal. The goal of compressive signal recovery is to find the original x from y. Since this is an underdetermined inverse problem, promoting the sparsity of the recovered signal is essential for high quality signal recovery. In compressive image recovery, x itself is usually not sparse; rather, the wavelet coefficients of x are sparse. In high resolution imaging, the total variation (TV) norm promotes sparsity of the image in its edges. In sparse MRI, both wavelet and TV priors have been used for image recovery [21]. Dictionary learning often promotes better sparsity than wavelets or TV. In [30], both dictionary and image recovery were performed simultaneously from highly undersampled measurements.
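The TV prior mentioned above rewards piecewise-constant images; a minimal sketch of the anisotropic TV norm (the function name is ours):

```python
import numpy as np

def tv_norm(img):
    """Anisotropic total-variation norm: the l1 norm of horizontal and
    vertical finite differences. Small for piecewise-constant images."""
    dh = np.abs(np.diff(img, axis=1)).sum()  # horizontal edges
    dv = np.abs(np.diff(img, axis=0)).sum()  # vertical edges
    return dh + dv
```

For example, a 4x4 image split into a flat left half (0) and flat right half (1) has one unit jump per row, so its TV norm is 4.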

Minimizing the ℓ0 norm strongly encourages the reconstructed signal to be sparse. However, minimizing the ℓ0 norm while maintaining the linear relationship in (1) is NP-hard. Many algorithms have tackled the ℓ0 norm minimization problem, such as orthogonal matching pursuit (OMP) or CoSaMP [27], to name a few. Even though these algorithms are able to find very sparse solutions to (1), their computational cost for large-scale inverse problems (e.g., compressive image recovery) is too high for practical usage.

Compressive sensing theories allow the ℓ1 norm to be used for good signal recovery instead of the ℓ0 norm [6, 10], at the cost of more samples (still significantly fewer than n). Minimizing the ℓ1 norm is advantageous for large-scale inverse problems since the ℓ1 norm is convex, so conventional convex optimization algorithms can be used for signal recovery. Many convex optimization algorithms have been developed for compressive image recovery with the non-differentiable ℓ1 norm, such as the iterative shrinkage-thresholding algorithm (ISTA), fast ISTA (FISTA) [3], alternating direction minimization (ADM) [37], and approximate message passing (AMP) [11]. Even though these advanced algorithms achieved remarkable speed-ups over conventional convex optimization algorithms, compressive image recovery is still slow in some application areas.
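ISTA, the simplest of the algorithms above, alternates a gradient step on the data-fit term with soft-thresholding (the proximal operator of the ℓ1 norm). A minimal sketch (parameter values are illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(y, A, lam=0.05, n_iter=200):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding with step size 1/L."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```

FISTA [3] accelerates the same iteration with a momentum term; AMP [11] replaces the plain gradient step with an Onsager-corrected residual, as described in Section 2.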

In image processing, there have been numerous efforts to develop powerful denoisers, and some attempts to use generic denoisers for compressive image recovery. The plug-and-play approach based on variable splitting allows model-based image reconstruction using any denoiser as a regularizer [33]. Recently, denoiser-based AMP (D-AMP) [25] was proposed to utilize powerful modern denoisers such as BM3D [8] for compressive image recovery. These works achieved improved results over conventional wavelet-based or TV-based sparse priors.

1.2 Deep learning in compressive image recovery

Deep learning with massive amounts of training data has revolutionized image processing and computer vision tasks such as image classification, object detection, and image denoising [19]. Deep learning based denoisers have been investigated extensively [16, 34, 36, 5, 39], and some of them (e.g., DnCNN [39]) outperformed state-of-the-art denoisers such as BM3D [8]. Recently, many deep learning based methods for image recovery have also been proposed with impressive performance. One approach is to train deep neural networks to map from FBP (filtered back projection) images with artifacts to ground truth images [17, 15]. These methods outperformed conventional methods in sparse-view CT, but a new network must be trained for new imaging parameters (e.g., changing the number of views) since they do not use imaging system information explicitly.

Another approach for deep learning based image recovery is to use deep neural network structures that unfold optimization algorithms and learn image priors, inspired by learned ISTA (LISTA) [12]. Note that this approach utilizes imaging system information explicitly. In compressive sensing MRI, ADMM-Net [38] and the variational network [14] were proposed with excellent performance. Both methods learned parametrized shrinkage functions as well as transformation operators for sparse representation from the given training data. CNN-based projected gradient descent in sparse-view CT [13] and learned D-AMP (LDAMP) in compressive sensing recovery [26] were proposed with impressive performance. These methods did not use explicit parametrization of the shrinkage operator, but utilized deep learning based denoisers in iterative recovery algorithms. All of these methods share one important requirement: 'clean' ground truth images must be available for training.

1.3 Limitation in deep learning based compressive image recovery

Deep learning based compressive sensing recovery has outperformed other conventional state-of-the-art methods, mainly due to the power of deep learning based denoisers. However, training these denoisers requires ground truth data. Unfortunately, there are application areas and cases where clean data is expensive or infeasible to acquire. For example, acquiring clean data for high resolution tomographic imaging is challenging and usually requires very long acquisition times, which may lead to high radiation dose to subjects in CT, or to several hours of acquisition for a single image volume at micrometer-level resolution in MR. Another example is hyperspectral imaging with high spectral resolution, where obtaining noiseless images is challenging.

One may argue that it is possible to obtain images from compressive measurements using a favorite recovery algorithm, or to use images from other domains such as natural images. However, we argue that the former is sub-optimal and the latter may be challenging for new compressive sensing modalities where image features are not known. In this article, for the first time, we propose simultaneous compressive sensing recovery and deep denoiser learning from undersampled measurements. Compressive sensing recovery is done with LDAMP [26], but we train the deep learning based denoiser (DnCNN [39]) using Gaussian-noise-contaminated images that are a by-product of the D-AMP algorithm. We propose to train the denoiser network by modifying the SURE based method of [31]. We also propose an accurate noise level estimation method in D-AMP, to be used both for training the denoiser network and for reconstruction.

2 Background

2.1 Denoiser-based AMP (D-AMP) and learned D-AMP (LDAMP)

D-AMP is an algorithm designed to solve compressive sensing (CS) problems, where one needs to recover an image vector x from the set of measurements y using prior information about x. Based on the model (1), the problem can be formulated as:

(2)  x̂ = argmin_x ‖y − Ax‖₂²  subject to  x ∈ C,

where C is the set of natural images. D-AMP solves (2) relying on approximate message passing (AMP) theory. More precisely, it employs an appropriate Onsager correction term at each iteration, so that the pseudo-data x^t + Aᴴz^t in Algorithm 1 becomes close to the ground truth image plus Gaussian noise. D-AMP can utilize any state-of-the-art denoiser as the mapping operator D_σ in compressive image recovery (Algorithm 1) for reducing Gaussian noise, as long as the divergence of the denoiser can be obtained.

input : y, A, number of iterations T, denoiser family D_σ (x⁰ = 0, z⁰ = y)
1 for t ← 0 to T − 1 do
2       z^t = y − A x^t + z^{t−1} div D_{σ̂^{t−1}}(x^{t−1} + Aᴴz^{t−1}) / m
3       σ̂^t = ‖z^t‖₂ / √m
4       x^{t+1} = D_{σ̂^t}(x^t + Aᴴz^t)
5 end for
output : x^T
Algorithm 1 (Learned) D-AMP algorithm [25, 26]

D-AMP [24] first utilized conventional state-of-the-art denoisers such as BM3D [8] for D_σ in Algorithm 1. Given the standard deviation of the noise σ̂^t at iteration t, BM3D was applied to the noisy image x^t + Aᴴz^t to yield the estimated image x^{t+1}. Since BM3D cannot be represented as a linear function, an analytical form for the divergence of this denoiser is not available for obtaining the Onsager term. The authors utilized the Monte-Carlo (MC) approximation for the divergence term of generic denoisers that was proposed in [29]: for a small ε > 0 and a standard normal random vector b,

(3)  div D_σ(x) ≈ (1/ε) bᵀ ( D_σ(x + εb) − D_σ(x) ).
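Estimator (3) is straightforward to implement for any black-box denoiser; a minimal sketch (the function name is ours):

```python
import numpy as np

def mc_divergence(denoiser, x, eps=1e-3, rng=None):
    """Monte-Carlo divergence estimate as in (3):
    div D(x) ~= b' (D(x + eps*b) - D(x)) / eps, with b standard normal."""
    rng = rng or np.random.default_rng()
    b = rng.standard_normal(x.shape)
    return b @ (denoiser(x + eps * b) - denoiser(x)) / eps
```

For a linear denoiser D(x) = c·x the true divergence is c·n, and the estimate concentrates around that value for large n.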

Recently, learned D-AMP (LDAMP) was proposed by the same authors to use deep learning based denoisers for D_σ in Algorithm 1. Several deep neural network denoisers were trained with noiseless ground truth data for different noise levels. LDAMP consists of 10 D-AMP layers (iterations), and in each layer the state-of-the-art DnCNN denoiser [39], known to outperform conventional BM3D, was used as the denoising operator. Unlike other unrolled neural network versions of iterative algorithms, such as L-AMP [4] for AMP and LISTA [12] for ISTA, LDAMP exploits the imaging system model and fixes the A and Aᴴ operators, while the parameters of the DnCNN denoisers are trained with ground truth data in the image domain.
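The D-AMP loop of Algorithm 1 can be sketched as below. This is an illustrative sketch, not the authors' implementation: a scaled soft-thresholding rule stands in for BM3D/DnCNN, and the Onsager term uses a single-probe Monte-Carlo divergence as in (3).

```python
import numpy as np

def damp(y, A, denoiser, n_iter=10, seed=0):
    """Sketch of the D-AMP loop of Algorithm 1. `denoiser(v, sigma)` is any
    Gaussian denoiser; the Onsager term uses a Monte-Carlo divergence."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(m)   # effective noise level
        v = x + A.T @ z                          # pseudo-data: signal + Gaussian noise
        # Onsager correction via Monte-Carlo divergence of the denoiser at v
        eps = max(sigma, 1e-6) / 1000.0
        b = rng.standard_normal(n)
        div = b @ (denoiser(v + eps * b, sigma) - denoiser(v, sigma)) / eps
        x = denoiser(v, sigma)
        z = y - A @ x + (z / m) * div            # residual with Onsager term
    return x

# Illustration only: soft-thresholding as a crude stand-in "denoiser".
soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - 1.5 * s, 0.0)
```

With a sparse signal and a column-normalized Gaussian A, this sketch recovers x accurately from m < n noiseless measurements, mirroring the behavior of AMP with a soft-thresholding denoiser.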

2.2 Stein’s unbiased risk estimator (SURE) based deep neural network denoisers

Over the past years, deep neural network based denoisers have been well investigated [16, 34, 36, 5, 39] and often outperform conventional state-of-the-art denoisers such as BM3D. Deep neural network denoisers such as DnCNN [39] yield state-of-the-art denoising performance at multiple noise levels and are typically trained by minimizing the mean square error (MSE) between the denoiser output and the noiseless ground truth image:

(4)  θ̂ = argmin_θ (1/N) Σ_{i=1}^{N} ‖x_i − D_θ(y_i)‖₂²,

where y_i = x_i + n_i is a noisy version of the ground truth image x_i contaminated with Gaussian noise n_i with zero mean and fixed variance σ², D_θ is a deep learning based denoiser with large-scale parameters θ to train, and {(x_i, y_i)}_{i=1}^{N} is a training dataset with N samples in the image domain.

Recently, a method to train deep learning based denoisers only with noisy training images was proposed [31], and its performance was demonstrated for deep neural network denoisers such as the stacked denoising autoencoder and DnCNN. Instead of the MSE, the following Monte-Carlo Stein's unbiased risk estimator (MC-SURE), which approximates the MSE, is minimized with respect to the large-scale weights θ of the deep neural network without noiseless ground truth images:

(5)  θ̂ = argmin_θ Σ_i [ (1/n)‖y_i − D_θ(y_i)‖₂² − σ² + (2σ²/n) div_{y_i} D_θ(y_i) ],

where n is the number of pixels and the divergence is approximated with (3).
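A direct transcription of the MC-SURE objective (5) for a single image might look like the following sketch (the function name is ours; in practice this quantity would be minimized over the denoiser's weights):

```python
import numpy as np

def mc_sure(denoiser, y, sigma, eps=1e-3, rng=None):
    """Monte-Carlo SURE: an unbiased estimate of the per-pixel MSE using only
    the noisy image y and the noise level sigma (no ground truth), as in (5)."""
    rng = rng or np.random.default_rng()
    n = y.size
    b = rng.standard_normal(y.shape)
    div = b.ravel() @ (denoiser(y + eps * b) - denoiser(y)).ravel() / eps
    fidelity = np.sum((y - denoiser(y)) ** 2) / n
    return fidelity - sigma ** 2 + (2.0 * sigma ** 2 / n) * div
```

As a sanity check, for the identity "denoiser" D(y) = y the true MSE is σ², and MC-SURE concentrates around that value.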

In compressive image recovery applications, there are often cases where neither ground truth data nor Gaussian-noise-contaminated images are available, and only compressive samples in the measurement domain are available for training. Thus, neither MSE based nor MC-SURE based deep denoiser networks can be trained for compressive image recovery with iterative algorithms such as LDAMP that use denoisers. The goal of this article is to provide a method to simultaneously recover images and train deep neural network denoisers directly from compressive samples.

3 Method

3.1 Simultaneous compressive image recovery and deep denoiser learning

There have been some works on simultaneous image reconstruction and image prior learning from highly undersampled data in medical imaging. Ravishankar and Bresler [30] proposed joint MR image reconstruction and dictionary learning from highly undersampled measurements by alternating minimization of an image reconstruction cost function and a dictionary learning cost function. Unlike this work, which requires a long reconstruction time for every new measurement, our proposed method utilizes large-scale compressive measurements to recover images and train deep denoising networks simultaneously, so that once trained, fast image recovery for a new measurement is feasible.

Our proposed method exploits D-AMP (LDAMP) [25, 26] to yield Gaussian-noise-contaminated images during compressive image recovery from large-scale undersampled measurements, and trains deep neural network denoisers with these noisy images at different noise levels using MC-SURE based denoiser learning [31]. Since the Onsager correction term in D-AMP keeps the pseudo-data x^t + Aᴴz^t close to the ground truth image plus Gaussian noise, we conjecture that these images can be utilized for MC-SURE based denoiser training; we investigate this further in the next subsection. Our joint algorithm is detailed in Algorithm 2. Note that from large-scale compressive measurements y, both recovered images and trained deep denoising networks are obtained. After training, fast and high performance compressive image recovery is possible without further training of the deep denoising networks.

As in LDAMP [26], 9 deep neural network denoisers (DnCNN [39]) were trained for different noise levels. These networks were pre-trained with images reconstructed by D-AMP with BM3D, plus various levels of added Gaussian noise. Then, the set of training data can be generated using LDAMP with the pre-trained deep denoisers. Note that the training images x^t + Aᴴz^t have different noise levels, which can be estimated with σ̂^t, and higher noise level images can be generated from them by adding additional Gaussian noise. Unfortunately, some training images have relatively high noise levels, so generating the same images at lower noise levels is infeasible. In that case, we used images from BM3D-AMP and added Gaussian noise to them.
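Raising an image from an estimated noise level to a higher target level relies on independent Gaussian variances adding; a small sketch (the function name is ours):

```python
import numpy as np

def raise_noise_level(img, sigma_current, sigma_target, rng=None):
    """Bring an image with estimated noise std sigma_current up to sigma_target
    by adding independent Gaussian noise. Variances add, so the extra noise
    needs std sqrt(sigma_target^2 - sigma_current^2)."""
    assert sigma_target >= sigma_current
    rng = rng or np.random.default_rng()
    extra = np.sqrt(sigma_target ** 2 - sigma_current ** 2)
    return img + extra * rng.standard_normal(img.shape)
```

The reverse direction is impossible (noise cannot be subtracted this way), which is why missing lower noise levels are instead filled with Gaussian-noise-added BM3D-AMP reconstructions.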

input : measurements {y_j}, A, number of training rounds K, LDAMP iterations T, pre-trained denoisers D_θ
1 for k ← 1 to K do
2       for each measurement y_j do
3             for t ← 0 to T − 1 do
4                   run one LDAMP iteration (Algorithm 1) with the current denoisers
5             end for
6             collect the noisy image x^T_j + Aᴴz^T_j and its estimated noise level σ̂^T_j
7       end for
8       Train D_θ with MC-SURE on the collected images at different noise levels
9 end for
output : recovered images {x^T_j}, trained denoisers D_θ
Algorithm 2 Simultaneous LDAMP and MC-SURE deep denoiser learning algorithm

3.2 Accuracy of standard deviation estimation for MC-SURE based denoiser learning

In D-AMP and LDAMP [25, 26], the noise level at iteration t was estimated in the measurement domain using

(6)  σ̂^t = ‖z^t‖₂ / √m.

The accuracy of this estimate was not critical for D-AMP or LDAMP, since the denoisers in both methods were not sensitive to different noise levels. However, accurate noise level estimation is critical for MC-SURE based deep denoiser network learning. We investigated the accuracy of (6) and found that it depends on the measurement matrix.
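Estimator (6) is simply the residual norm scaled by √m; a quick numerical check that it recovers the standard deviation of a Gaussian residual:

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma_true = 50000, 2.0
z = sigma_true * rng.standard_normal(m)      # residual ~ N(0, sigma^2 I)
sigma_hat = np.linalg.norm(z) / np.sqrt(m)   # estimator (6)
```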

With a Gaussian measurement matrix A, (6) was very accurate and comparable to the ground truth standard deviation obtained from the true residual x^t + Aᴴz^t − x. However, with the coded diffraction measurement matrix, which yields complex measurements, (6) over-estimated the noise level on multiple examples. Since the image is real, we propose a new standard deviation estimation method for D-AMP as follows:

(7)  σ̂^t = ‖Re(Aᴴz^t)‖₂ / √n.

We performed comparison studies between (6), (7), and the ground truth from the true residual. For the Gaussian measurement matrix all three are very similar, but for the coded diffraction matrix our proposed method (7) yields more accurate estimates of the standard deviation than the previous method (6). Figure 1 illustrates the accuracy of our estimator compared to the previous one. When the true residual is normalized with an accurately estimated σ, its histogram fits the standard normal density (red line) well. Normalizing with the ground truth or with our proposed estimate yields a good fit, whereas the previous estimator yields a sharper histogram, indicating that it overestimates the noise level. In the simulations, we show that the proposed estimator is critical for the high performance of our method with the coded diffraction measurement matrix.

Figure 1: Normalized residual histograms of the 'Boat' image after 10 iterations using LDAMP-BM3D with pre-trained denoisers for the coded diffraction matrix. Normalization was done with σ estimated from (a) the true residual, (b) (6) (from D-AMP), and (c) (7) (proposed).

4 Simulation Results

4.1 Setup

Datasets

We used images from the DIV2K [1] and Berkeley BSD-500 [23] datasets, plus standard test images, for training and testing our proposed method. The training dataset comprised all 500 images from BSD-500, while the test set of 100 images included 75 randomly chosen images from the DIV2K dataset and 25 standard test images. Since the proposed method uses measurement data and a fixed linear operator A for image reconstruction, all test and training images must have the same size. Thus, all images were subsampled and cropped to a size of 180×180 and compressively sampled to generate measurement data.

Pre-training DnCNN

The core idea of this stage is, given measurement data and the linear operator A, to decouple the DnCNN denoiser from LDAMP SURE and pre-train it. To do so, the BSD-500 images were first reconstructed using BM3D-AMP. The recovered images were rescaled, cropped, flipped, and rotated to generate 204,800 50×50 patches. These patches were used as ground truth to pre-train the DnCNN denoiser with the MSE loss. Since our approach does not require a dataset with ground truth, it is also possible to use measurement data from the test set. Thus, we generated 304,500 50×50 patches from the reconstructed test and training images. The LDAMP network using the DnCNN denoiser pre-trained on those patches is labeled "LDAMP MSE-T w/ BM3D-AMP" in the tables. For better performance, we trained 9 DnCNN denoisers on AWGN in different noise level ranges. Consequently, during inference, LDAMP SURE estimates the standard deviation of the noise to select the appropriate DnCNN denoiser.

Training LDAMP SURE

First, LDAMP SURE was run using the pre-trained DnCNN denoisers, and at the last iteration we collected the images x^T + Aᴴz^T and the noise standard deviations estimated with (7). Then, all images with noise levels in the same range were grouped into one set. However, there may be cases where no image has a noise level corresponding to a particular range. Those holes were filled as follows: higher noise level images were generated by adding additional Gaussian noise to lower noise level images, while missing lower noise level images were compensated with Gaussian-noise-added BM3D-AMP recovered images. Overall, we obtain a dataset of images for each noise level range, which is then split into patches to train the DnCNN denoisers with MC-SURE. These steps were repeated several times to further improve the performance of our proposed method. Although training DnCNN with MC-SURE involves estimating a single noise standard deviation for an entire image, we assume that a patch from an image has the same noise level as the image itself. Thus, to avoid distorting the noise, we generated patches without rescaling to train SURE based LDAMP: 84,500 50×50 patches for training LDAMP SURE and 135,000 50×50 patches for LDAMP SURE-T (test and training sets).

Training parameters

In the pre-training stage, the set of DnCNN denoisers was trained for different noise levels using the Adam optimizer [18] with a learning rate of 10⁻³, which was dropped to 10⁻⁴ after 40 epochs, followed by 10 more epochs of training. The batch size was set to 128, and training a single DnCNN denoiser took approximately 10-15 hours on an NVIDIA Pascal Titan X. In the SURE training stage, we decreased the learning rate to 10⁻⁵ and the number of epochs to 25, while the batch size was kept the same. The training process lasted about 4-6 hours depending on whether LDAMP SURE or LDAMP SURE-T was trained. We empirically found that after a few rounds of training LDAMP SURE, the results converge for both the coded diffraction and Gaussian measurement cases (see supplement Table 2). The accuracy of the MC-SURE approximation, and hence LDAMP SURE performance, depends on the selection of the constant ε [31]. In [9], it was noted that ε should be directly proportional to the noise level. Therefore, the ε value was fine-tuned for each denoiser in the 9-denoiser set; more details can be found in supplement Table 1.

4.2 Results

Gaussian measurement matrix

We compare SURE based LDAMP with the state-of-the-art image reconstruction algorithm BM3D-AMP. NLR-CS, TVAL3, and AMP are excluded from the comparison since their performance is substantially worse than that of BM3D-AMP. BM3D-AMP is used with default parameters and run for 30 iterations to reduce the high variation in its results, although its PSNR (peak signal-to-noise ratio, computed as 10 log₁₀(255²/MSE) for the pixel range [0, 255]) approaches its maximum after 10 iterations [25]. The proposed SURE based LDAMP algorithm is also run for 30 iterations, but likewise shows convergence after 8-10 iterations.
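The PSNR metric used throughout the tables can be computed as follows (a standard definition; the function name is ours):

```python
import numpy as np

def psnr(x, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with pixel range [0, peak]:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(x, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, an image that is uniformly off by 10 gray levels from its reference has MSE = 100 and PSNR = 10 log₁₀(65025/100) ≈ 28.13 dB.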

The results reveal that with noise-free Gaussian measurements, BM3D-AMP reconstruction quality is slightly better than pre-trained LDAMP and on the same level as LDAMP SURE. In contrast, LDAMP SURE-T outperforms BM3D-AMP by up to 0.5 dB (see Table 1 and Figure 2). In terms of run time, SURE based LDAMP is more than 20 times faster than BM3D-AMP.

Method                    | Run time (s) | PSNR (dB) at five sampling rates (low to high)
BM3D-AMP                  | 24.6         | 21.4  24.6  26.7  28.5  30.1
LDAMP MSE w/ BM3D-AMP     | 1.1          | 21.2  24.3  26.1  28.3  30.0
LDAMP MSE-T w/ BM3D-AMP   | 1.1          | 21.5  24.5  26.2  28.4  30.2
LDAMP SURE                | 1.1          | 21.4  24.6  26.5  28.7  30.3
LDAMP SURE-T              | 1.1          | 21.7  24.9  26.9  28.9  30.6
Table 1: Average PSNRs and run times of 100 180×180 image reconstructions for the i.i.d. Gaussian measurement case (no measurement noise) at various sampling rates.
Figure 2: Reconstructions of the 180×180 test image 'Barbara' with an i.i.d. Gaussian measurement matrix at a fixed sampling rate (noise-free). (a) Ground truth, (b) BM3D-AMP 32.14 dB, (c) LDAMP SURE 32.53 dB, (d) LDAMP SURE-T 32.70 dB.
Coded diffraction measurements

Initially, LDAMP SURE was trained with noiseless coded diffraction measurements using the noise standard deviation estimated by the D-AMP estimator (6). The results were far worse than pre-trained LDAMP and are reported in supplement Table 3. The primary reason is that the D-AMP noise estimator over-estimates the noise standard deviation, while MC-SURE requires an accurate estimate of σ. Therefore, we retrained the network with the new σ estimated using (7). Consequently, MC-SURE trained LDAMP with the proposed noise estimator showed the best performance, both quantitatively and qualitatively, for all sampling rates (see Table 2 and Figure 3).

Method                    | Run time (s) | PSNR (dB) at three sampling rates (low to high)
BM3D-AMP                  | 20.9         | 21.7  27.3  31.4
LDAMP MSE w/ BM3D-AMP     | 1.1          | 22.3  27.4  30.9
LDAMP MSE-T w/ BM3D-AMP   | 1.1          | 22.3  28.1  31.7
LDAMP SURE                | 1.1          | 22.4  28.7  32.4
LDAMP SURE-T              | 1.1          | 22.4  29.0  32.7
Table 2: Average PSNRs and run times of 100 180×180 image reconstructions for the coded diffraction measurement case (no measurement noise) at various sampling rates.
Figure 3: Reconstructions of a 180×180 test image with the coded diffraction (CDP) measurement matrix at a fixed sampling rate (noise-free). (a) Ground truth, (b) BM3D-AMP 24.08 dB, (c) LDAMP SURE 27.82 dB, (d) LDAMP SURE-T 27.94 dB.

5 Discussion and Conclusion

We proposed a simultaneous compressive image recovery and deep denoiser learning method from undersampled measurements. Our proposed method yielded better image quality than conventional BM3D-AMP or LDAMP trained on images reconstructed by BM3D-AMP. Thus, this work may be helpful in areas where obtaining noiseless ground truth images is challenging, such as hyperspectral imaging or medical imaging.

For example, in high resolution MR imaging, it is challenging to obtain high quality images without noise due to long acquisition times (several hours). In this case, acquiring large-scale compressive measurements is more practical than obtaining noiseless images, and our proposed method is a potentially desirable solution. Further investigation will be necessary for this method to work with compressive sensing MR measurement matrices. Our proposed standard deviation estimation method was found empirically; a more theoretical analysis for various types of compressive sensing measurement matrices, including MR, would be interesting.

Acknowledgments

This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B05035810).

References

  • [1] Eirikur Agustsson and Radu Timofte. NTIRE 2017 challenge on single image super-resolution: Dataset and study. In IEEE Conference on Computer Vision and Pattern Recognition Workshop (CVPRW), 2017.
  • [2] Gonzalo R Arce, David J Brady, Lawrence Carin, Henry Arguello, and David S Kittle. Compressive Coded Aperture Spectral Imaging: An Introduction. IEEE Signal Processing Magazine, 31(1):105–115, November 2013.
  • [3] Amir Beck and Marc Teboulle. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2(1):183–202, January 2009.
  • [4] Mark Borgerding, Philip Schniter, and Sundeep Rangan. AMP-inspired deep networks for sparse linear inverse problems. IEEE Transactions on Signal Processing, 65(16):4293–4308, 2017.
  • [5] Harold C Burger, Christian J Schuler, and Stefan Harmeling. Image denoising: Can plain neural networks compete with BM3D? In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2392–2399, 2012.
  • [6] E J Candes, J Romberg, and T Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, January 2006.
  • [7] Kihwan Choi, Jing Wang, Lei Zhu, Tae-Suk Suh, Stephen Boyd, and Lei Xing. Compressed sensing based cone-beam computed tomography reconstruction with a first-order method. Medical Physics, 37(9):5113–5125, August 2010.
  • [8] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, 2007.
  • [9] Charles-Alban Deledalle, Samuel Vaiter, Jalal Fadili, and Gabriel Peyré. Stein unbiased gradient estimator of the risk (sugar) for multiple parameter selection. SIAM Journal on Imaging Sciences, 7(4):2448–2487, 2014.
  • [10] D L Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, March 2006.
  • [11] D L Donoho, A Maleki, and A Montanari. Message-passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences (PNAS), 106(45):18914–18919, November 2009.
  • [12] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In International Conference on International Conference on Machine Learning (ICML), pages 399–406, 2010.
  • [13] Harshit Gupta, Kyong Hwan Jin, Ha Q Nguyen, Michael T McCann, and Michael Unser. CNN-Based Projected Gradient Descent for Consistent CT Image Reconstruction. IEEE transactions on medical imaging, pages 1–1, May 2018.
  • [14] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P Recht, Daniel K Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine, 79(6):3055–3071, November 2017.
  • [15] Yoseob Han and Jong Chul Ye. Framing U-Net via Deep Convolutional Framelets: Application to Sparse-view CT. IEEE transactions on medical imaging, pages 1–1, April 2018.
  • [16] Viren Jain and Sebastian Seung. Natural image denoising with convolutional networks. In Advances in Neural Information Processing Systems (NIPS), pages 769–776, 2009.
  • [17] Kyong Hwan Jin, Michael T McCann, Emmanuel Froustey, and Michael Unser. Deep Convolutional Neural Network for Inverse Problems in Imaging. IEEE Transactions on Image Processing, 26(9):4509–4522, September 2017.
  • [18] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
  • [19] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, May 2015.
  • [20] Chengbo Li, Ting Sun, K F Kelly, and Yin Zhang. A Compressive Sensing and Unmixing Scheme for Hyperspectral Data Processing. IEEE Transactions on Image Processing, 21(3):1200–1210, February 2012.
  • [21] Michael Lustig, David Donoho, and John M Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182–1195, 2007.
  • [22] Roummel F Marcia, Zachary T Harmany, and Rebecca M Willett. Compressive coded aperture imaging. In Charles A Bouman, Eric L Miller, and Ilya Pollak, editors, IS&T/SPIE Electronic Imaging, pages 72460G–13. SPIE, February 2009.
  • [23] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In IEEE International Conference on Computer Vision (ICCV), volume 2, pages 416–423, July 2001.
  • [24] Christopher A Metzler, Arian Maleki, and Richard G Baraniuk. BM3D-AMP: A new image recovery algorithm based on BM3D denoising. In IEEE International Conference on Image Processing (ICIP), pages 3116–3120, 2015.
  • [25] Christopher A Metzler, Arian Maleki, and Richard G Baraniuk. From denoising to compressed sensing. IEEE Transactions on Information Theory, 62(9):5117–5144, 2016.
  • [26] Christopher A Metzler, Arian Maleki, and Richard G Baraniuk. Learned D-AMP: Principled Neural Network based Compressive Image Recovery. In Advances in Neural Information Processing Systems (NIPS), pages 1770–1781, 2017.
  • [27] D Needell and J A Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3):301–321, May 2009.
  • [28] Lee C Potter, Emre Ertin, Jason T Parker, and Müjdat Cetin. Sparsity and Compressed Sensing in Radar Imaging. Proceedings of the IEEE, 98(6):1006–1020, May 2010.
  • [29] Sathish Ramani, Thierry Blu, and Michael Unser. Monte-Carlo SURE: A black-box optimization of regularization parameters for general denoising algorithms. IEEE Transactions on Image Processing, 17(9):1540–1554, 2008.
  • [30] Saiprasad Ravishankar and Yoram Bresler. MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Transactions on Medical Imaging, 30(5):1028–1041, 2011.
  • [31] S. Soltanayev and S. Y. Chun. Training Deep Learning based Denoisers without Ground Truth Data. ArXiv e-prints, March 2018.
  • [32] Andrew Stevens, Libor Kovarik, Patricia Abellan, Xin Yuan, Lawrence Carin, and Nigel D Browning. Applying compressive sensing to TEM video: a substantial frame rate increase on any camera. Advanced Structural and Chemical Imaging, pages 1–20, August 2015.
  • [33] Singanallur V Venkatakrishnan, Charles A Bouman, and Brendt Wohlberg. Plug-and-Play priors for model based reconstruction. In IEEE Global Conference on Signal and Information Processing (GlobalSIP), pages 945–948, December 2013.
  • [34] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre Antoine Manzagol. Stacked denoising autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. Journal of Machine Learning Research, 11:3371–3408, December 2010.
  • [35] Y Wiaux, L Jacques, G Puy, A M M Scaife, and P Vandergheynst. Compressed sensing imaging techniques for radio interferometry. Monthly Notices of the Royal Astronomical Society, 395(3):1733–1742, May 2009.
  • [36] Junyuan Xie, Linli Xu, and Enhong Chen. Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 341–349, 2012.
  • [37] Junfeng Yang and Yin Zhang. Alternating Direction Algorithms for ℓ1-Problems in Compressive Sensing. SIAM Journal on Scientific Computing, 33(1):250–278, January 2011.
  • [38] Yan Yang, Jian Sun, Huibin Li, and Zongben Xu. Deep ADMM-Net for compressive sensing MRI. In Advances in Neural Information Processing Systems (NIPS), pages 10–18, 2016.
  • [39] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017.