# Nonlocal Low-Rank Tensor Factor Analysis for Image Restoration

## Abstract

Low-rank signal modeling has been widely leveraged to capture non-local correlation in image processing applications. We propose a new method that employs low-rank tensor factor analysis for tensors generated by grouped image patches. The low-rank tensors are fed into the alternating direction method of multipliers (ADMM) to further improve image reconstruction. The motivating application is compressive sensing (CS), and a deep convolutional architecture is adopted to approximate the expensive matrix inversion in CS applications. An iterative algorithm based on this low-rank tensor factorization strategy, called NLR-TFA, is presented in detail. Experimental results on noiseless and noisy CS measurements demonstrate the superiority of the proposed approach, especially at low CS sampling rates. 1 2


## 1 Introduction

Inspired by the nonlocal self-similarity of image patches [3], various image processing algorithms have been proposed to investigate the low-rank property of image patch groups [9, 11, 17, 31, 44]. In general, these methods first select a reference patch and then search for similar patches across the image to form a group. Following this, the patches in this group are vectorized and stacked into a matrix. Since these patches are similar, the constructed matrix has a low-rank property. By applying this low-rank model to every (overlapping) patch in the image, state-of-the-art image restoration results have been achieved.

One common issue in the above algorithms is that the original two-dimensional (2D) patches are vectorized to construct the group matrix, which loses the spatial structure within the image patch. We propose a tensor-based algorithm to retain this structure while still leveraging the advantages of low-rank patch models. Tensor factorization methods [19] offer a useful way to learn latent structure from complex multiway data, and have been used in image processing tasks [43]. These methods decompose the tensor data into a set of factor matrices (one for each mode or “way” of the tensor) that can be used as a latent feature representation for the objects in each of the tensor modes [33]. Recently, tensor approaches have been applied in computer vision tasks such as image denoising [32] and video denoising [42]. We consider the general image restoration problem with specific applications to compressive sensing [12] using the tensor factor analysis (TFA) approach. In particular, instead of vectorizing the image patches, we impose low-rank TFA on the 3D image patch groups.

To be concrete, the image restoration problem aims to estimate the latent clean image x given the observation y (the corrupted/compressive measurement) and the measurement matrix Φ, which can be formulated as

 y = Φx + e, (1)

where e denotes the measurement noise and is usually modeled as zero-mean Gaussian. The problem has different variants for different Φ: when Φ is the identity matrix, this is a denoising problem [16]; when Φ is a diagonal matrix whose diagonal elements are either 1 or 0, keeping or removing corresponding pixels, this becomes image inpainting [63]; and when Φ ∈ ℝ^{M×N} with M < N, this is a compressive sensing (CS) problem.

We focus on the CS problem [14, 28], and consider image reconstruction with a very limited number of compressive measurements [50, 51, 52, 58]. A framework of the proposed NLR-TFA method is shown in Fig. 1. There has been recent interest in using deep learning techniques for CS problems [30]. A reconstruction network is developed in [20, 58], and an ADMM-net is proposed in [47]; while the former work focused on block-wise CS, the latter considered only CS-MRI [28]. By contrast, we develop a tensor-based framework for general CS applications. Further, to overcome the large-scale matrix inversion in CS, we pre-train a deep convolutional neural network to approximate the expensive matrix inversion operator [41].

The remainder of this paper is organized as follows. Section 2 presents background knowledge on tensors and CS. Section 3 derives our algorithm. Extensive results are provided in Section 4 to demonstrate the superiority of the proposed algorithm relative to other leading approaches, and Section 5 concludes the paper.

## 2 Background

### 2.1 Tensor Decomposition

Tensors (of order higher than two) are arrays indexed by three or more indices. Specifically, matrices are two-way arrays, and tensors are three- or higher-way arrays. In the sequel, we mainly focus on three-way tensors; everything naturally generalizes to higher-order tensors.

A rank-one three-way tensor T is the outer product of unit vectors a, b, and c, scaled by λ:

 T = λ a ⊗ b ⊗ c, T(i,j,k) = λ a(i) b(j) c(k), (2)

where ⊗ is the vector outer product and λ is a scalar.

The rank of a tensor T is the minimum number of rank-one tensors that sum to T. Therefore, a rank-R three-way tensor can be written as

 T = Σ_{r=1}^{R} λ_r a_r ⊗ b_r ⊗ c_r, T(i,j,k) = Σ_{r=1}^{R} λ_r a_r(i) b_r(j) c_r(k), (3)

where the unit vectors a_r, b_r, and c_r are tensor factors and λ_r is a scalar that evaluates the significance of the r-th factor. For brevity, T ∈ ℝ^{I×J×K} can be denoted as T = ⟦λ; A, B, C⟧, where A = [a_1, …, a_R], B = [b_1, …, b_R], C = [c_1, …, c_R], and λ = [λ_1, …, λ_R]. When the rank R is low, the number of elements needed to represent T is dramatically decreased after tensor decomposition, i.e., from IJK to R(I+J+K+1).
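The CP form in (3) and its storage saving can be sketched numerically as follows; the shapes and rank used here are illustrative choices, not values from the paper:

```python
import numpy as np

# A rank-R three-way tensor as a sum of R outer products (CP form).
I, J, K, R = 6, 5, 4, 2
rng = np.random.default_rng(0)

lam = rng.standard_normal(R)        # weights lambda_r
A = rng.standard_normal((I, R))     # columns a_r
B = rng.standard_normal((J, R))     # columns b_r
C = rng.standard_normal((K, R))     # columns c_r

# T(i,j,k) = sum_r lambda_r * a_r(i) * b_r(j) * c_r(k)
T = np.einsum('r,ir,jr,kr->ijk', lam, A, B, C)

dense_size = I * J * K              # elements in the full tensor
cp_size = R * (I + J + K + 1)       # elements in the CP representation
print(T.shape, dense_size, cp_size) # (6, 5, 4) 120 32
```

For these small illustrative shapes the CP form already stores 32 numbers instead of 120; the saving grows with the tensor dimensions when R stays small.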

Practically, most tensor decomposition problems are NP-hard. However, in most real applications, as long as the tensor does not have too many components, and the components are not adversarially chosen, tensor decomposition can be computed in polynomial time [34]. The tensor-decomposition problem seeks to estimate the factors {a_r, b_r, c_r} and coefficients {λ_r} from a tensor T. Adopting a least-squares fitness criterion, the problem is

 min_{λ_r, a_r, b_r, c_r} ‖T − Σ_{r=1}^{R} λ_r a_r ⊗ b_r ⊗ c_r‖²_F. (4)

In this work, we employ Jennrich’s algorithm [21] to solve this problem. In Algorithm 1, “(·)⁺” denotes the pseudo-inverse of a matrix.
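Since Algorithm 1 is not reproduced in this excerpt, the sketch below illustrates the idea behind Jennrich’s algorithm: two random mixtures of the tensor’s frontal slices share eigenstructure that reveals the CP factors. The sizes and the eigenvalue-matching heuristic are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, R = 8, 7, 6, 3

# Ground-truth CP factors (generic, so Jennrich's conditions hold w.h.p.);
# the weights lambda_r are absorbed into the columns of C.
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Two random mixtures of the frontal slices: T_x = sum_k x(k) T[:,:,k].
x, y = rng.standard_normal(K), rng.standard_normal(K)
Tx = np.einsum('ijk,k->ij', T, x)
Ty = np.einsum('ijk,k->ij', T, y)

# Eigenvectors of Tx Ty^+ recover the a_r; those of (Ty^+ Tx)^T the b_r.
evalA, vecA = np.linalg.eig(Tx @ np.linalg.pinv(Ty))
evalB, vecB = np.linalg.eig((np.linalg.pinv(Ty) @ Tx).T)

ia = np.argsort(-np.abs(evalA))[:R]            # R dominant eigenpairs
ib = np.argsort(-np.abs(evalB))[:R]
lamA, lamB = np.real(evalA[ia]), np.real(evalB[ib])
A_hat = np.real(vecA[:, ia])
# Pair each a_r with the b_r sharing (numerically) the same eigenvalue.
order = [int(np.argmin(np.abs(lamB - lamA[r]))) for r in range(R)]
B_hat = np.real(vecB[:, ib])[:, order]

# With A and B fixed, the third-mode factors follow from least squares.
M = np.stack([np.outer(A_hat[:, r], B_hat[:, r]).ravel() for r in range(R)], axis=1)
W, *_ = np.linalg.lstsq(M, T.reshape(I * J, K), rcond=None)
T_hat = np.einsum('ir,jr,rk->ijk', A_hat, B_hat, W)
err = np.linalg.norm(T_hat - T) / np.linalg.norm(T)
print(err < 1e-6)
```

The recovered factors are only determined up to scaling and permutation, which the least-squares step and the eigenvalue pairing absorb; the reconstruction error confirms the decomposition.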

### 2.2 Compressive Sensing

Compressive sensing is a signal acquisition technique that enables sampling a signal at sub-Nyquist rates [12]. In CS, a reconstruction algorithm is used to recover the original high-dimensional signal from a small number of random linear measurements. Taking compressive measurements can be viewed as a process of linearly mapping an N-dimensional signal vector x to an M-dimensional measurement vector y, M < N, using a measurement matrix Φ ∈ ℝ^{M×N}, i.e., y = Φx. Since the matrix Φ is rank-deficient, there exists more than one x that yields the same measurement y. In this paper, the compressive sensing rate (CSr) is defined as CSr = M/N.
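A minimal sketch of this measurement model, with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 64                                  # signal and measurement dimensions, M < N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random Gaussian sensing matrix

x = np.zeros(N)                                 # a k-sparse test signal, k = 8
x[rng.choice(N, size=8, replace=False)] = rng.standard_normal(8)

y = Phi @ x                                     # compressive measurements
CSr = M / N
print(y.shape, CSr)                             # (64,) 0.25
```

Here the 256-dimensional signal is observed through only 64 linear measurements, i.e., a sensing rate of 0.25.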

To recover x, one searches for the vector that possesses a certain structure among all the vectors that satisfy y = Φx. In the case of a sparse x, a popular method is to solve the optimization problem

 x̂ = argmin_x ‖x‖₁, s.t. ‖y − Φx‖₂ ≤ ε. (5)

This problem is convex and known formally as basis pursuit denoising (BPDN). It has been shown in [6, 13] that if x is sufficiently sparse and Φ satisfies certain properties, then the sparse signal can be accurately recovered from random linear measurements [5]. Equation (5) can be equivalently translated to the following unconstrained optimization problem

 x̂ = argmin_x ‖y − Φx‖₂² + λ‖x‖₁, (6)

where λ is a regularization parameter. Various methods can be used to solve the above minimization problem [49, 59, 57], and in this work we adopt the alternating direction method of multipliers (ADMM) framework [2, 23]. Specifically, we consider the application of image CS [14, 51, 52] and, beyond sparsity, we propose a new tensor factorization approach to exploit the high-order structure in the image patches, seeking high reconstruction performance at extremely low CSr, e.g., CSr < 0.05. Refer to Fig. 3 for an example of an image reconstructed by our proposed algorithm, compared with other leading algorithms at low CSr.
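The paper solves its recovery problem within ADMM; as a self-contained illustration of the unconstrained problem (6) alone, the sketch below uses plain iterative soft-thresholding (ISTA), a standard proximal-gradient solver, with illustrative sizes:

```python
import numpy as np

def ista(y, Phi, lam=0.02, n_iter=1000):
    """Iterative soft-thresholding for min_x ||y - Phi x||_2^2 + lam ||x||_1."""
    L = 2 * np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x - 2 * Phi.T @ (Phi @ x - y) / L  # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
N, M = 100, 50
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, size=5, replace=False)] = rng.standard_normal(5)
y = Phi @ x0                                   # noiseless measurements, CSr = 0.5
x_hat = ista(y, Phi)
err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
print(round(err, 3))
```

With enough measurements relative to the sparsity level, the ℓ₁-regularized solution closely matches the true sparse signal up to a small shrinkage bias.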

## 3 Method

We propose a new model that recovers compressively sensed images using low-rank tensor factor analysis and ADMM. First, we generate a tensor from the estimated image based on patch grouping. Then we impose low-rankness on the tensor after tensor decomposition. This new low-rank tensor is fed into a global objective function, which is solved by ADMM. These two steps are performed iteratively until a stopping criterion is satisfied. The complete algorithm is shown in Algorithm 2.

### 3.1 Patch-Based Low-Rank Tensor Factorization

Given the observation y, we first obtain an estimated image x̂ using some fast algorithm, e.g., a DCT- or wavelet-based algorithm [18, 59]. In denoising, x̂ can simply be set equal to the noisy observation. The estimated image is then divided into overlapping patches. The basic assumption underlying the proposed approach is that a sufficient number of similar patches can be found for any reference patch, a.k.a. the nonlocal self-similarity (NSS) prior [3]. For each reference patch x_p, we perform a k-nearest-neighbor search for nonlocal similar patches across the image to form a group {x_{p,1}, …, x_{p,k}}, where k is the number of similar patches (including the reference patch itself). Here, the Euclidean distance of pixel intensities is used as the metric to group patches. By concatenating the grouped patches along the third dimension in ascending order of Euclidean distance, we generate a tensor for reference patch x_p

 T_p = [x_{p,1}, x_{p,2}, …, x_{p,k}]. (7)

As x_p has zero distance to itself, it is always the leading patch in T_p, i.e., x_{p,1} = x_p. Eventually we have one tensor per reference patch. The coordinates of the grouped patches are also recorded for later image aggregation. If the size of each patch is n × n, then the size of the generated tensor is n × n × k. It has been shown that grouped patches can be denoised by low-rank approximation [4]. In this work, low-rankness is imposed on the tensor by keeping only the most significant tensor factors after tensor decomposition

 L_p = Σ_{r∈S_ℓ} λ_r a_r ⊗ b_r ⊗ c_r, (8)

where S_ℓ selects the ℓ most significant tensor factors of T_p. Therefore, L_p has rank ℓ. The significance of the tensor factors is evaluated by the λ_r’s in (3).

Under the assumption that the grouped patches have similar structure, T_p has a low-rank property, which ensures that T_p can be represented by a relatively low-rank tensor L_p. This low-rank imposition shares the same spirit as the hard thresholding algorithm [1]. In Fig. 2, we impose low-rankness on tensors generated from a clean image using the above approach, and then aggregate the tensors back into images. As can be seen, images reconstructed from low-rank tensors accurately approximate the original image. The images aggregated from moderately low-rank tensors onward are highly similar (PSNR increases only marginally with additional factors), which indicates that low-rank tensors are adequate to represent the original image.
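The patch grouping of (7) can be sketched as follows; the patch size, group size, search window, and stride below are illustrative assumptions, not the paper’s settings:

```python
import numpy as np

def group_patches(img, ref_ij, patch=6, k=10, stride=2, window=20):
    """Form the tensor T_p of eq. (7): the k patches most similar to the
    reference patch (Euclidean distance), stacked along the third mode."""
    H, W = img.shape
    ri, rj = ref_ij
    ref = img[ri:ri + patch, rj:rj + patch]
    cands = []
    for i in range(max(0, ri - window), min(H - patch, ri + window) + 1, stride):
        for j in range(max(0, rj - window), min(W - patch, rj + window) + 1, stride):
            p = img[i:i + patch, j:j + patch]
            cands.append((float(np.sum((p - ref) ** 2)), i, j, p))
    cands.sort(key=lambda t: t[0])     # ascending distance; the reference comes first
    T_p = np.stack([p for _, _, _, p in cands[:k]], axis=2)   # patch x patch x k
    coords = [(i, j) for _, i, j, _ in cands[:k]]             # recorded for aggregation
    return T_p, coords

img = np.random.default_rng(0).random((64, 64))
T_p, coords = group_patches(img, (20, 20))
print(T_p.shape, coords[0])            # (6, 6, 10) (20, 20)
```

The reference patch has zero distance to itself and therefore always leads the group, matching the x_{p,1} = x_p property stated above; the low-rank step would then factorize T_p as in Sec. 2.1 and keep the most significant factors.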

### 3.2 Image Recovery via ADMM

With the reconstructed low-rank tensors {L_p}, the following optimization problem is proposed for CS recovery:

 x̂ = argmin_x ‖y − Φx‖₂² + η Σ_p ‖T̃_p x − L_p‖²_F, (9)

where T̃_p denotes the operator that extracts the tensor for reference patch p from the image x, and η is a regularization parameter. The closed-form solution for this quadratic optimization problem is

 x̂ = (Φ^H Φ + η Σ_p T̃_p^⊤ T̃_p)^{−1} (Φ^H y + η Σ_p T̃_p^⊤ L_p), (10)

where (·)^H is the Hermitian transpose, Σ_p T̃_p^⊤ L_p denotes the result of averaging all of the similar patches for each reference patch, and each entry of Σ_p T̃_p^⊤ T̃_p corresponds to an image pixel location whose value is the number of overlapping patches that cover that pixel. In (10), the matrix to be inverted can be large, for which conjugate gradient (CG) descent is usually applied [8]. We instead adopt ADMM to solve this problem, introducing an auxiliary variable z. Applying ADMM to (9), we obtain the global objective function

 (x̂, ẑ) = argmin_{x,z} ‖y − Φx‖₂² + β ‖x − z + μ/(2β)‖₂² + η Σ_p ‖T̃_p z − L_p‖²_F, (11)

where μ is the Lagrange multiplier and β is the penalty parameter. Instead of minimizing over x and z simultaneously, ADMM decomposes the problem into two subproblems that minimize w.r.t. x and z, respectively. More specifically, the optimization problem in (11) consists of the following iterations:

 z_{t+1} = argmin_z β ‖x_t − z + μ_t/(2β)‖₂² + η Σ_p ‖T̃_p z − L_p‖²_F, (12)
 x_{t+1} = argmin_x ‖y − Φx‖₂² + β ‖x − z_{t+1} + μ_t/(2β)‖₂², (13)
 μ_{t+1} = μ_t + 2β (x_{t+1} − z_{t+1}). (14)

Both subproblems admit closed-form solutions. For fixed x_t and μ_t,

 z_{t+1} = (η Σ_p T̃_p^⊤ T̃_p + βI)^{−1} (β x_t + μ_t/2 + η Σ_p T̃_p^⊤ L_p). (15)

Then we use the updated z_{t+1} to update x,

 x_{t+1} = U^{−1} (Φ^H y + β z_{t+1} − μ_t/2), (16)

where U = Φ^H Φ + βI.
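The ADMM iterations (12)–(16) can be illustrated on a toy problem. To keep the sketch self-contained, the patch-based prior term Σ_p‖T̃_p z − L_p‖²_F is simplified to a quadratic pull toward a fixed surrogate image L; all dimensions and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 60, 30
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = rng.standard_normal(N)
y = Phi @ x_true
L = x_true + 0.01 * rng.standard_normal(N)  # surrogate prior (stands in for the L_p)

beta, eta = 1.0, 1.0
x, z, mu = np.zeros(N), np.zeros(N), np.zeros(N)
U_inv = np.linalg.inv(Phi.T @ Phi + beta * np.eye(N))   # U^{-1} of eq. (16)

for _ in range(100):
    z = (beta * x + mu / 2 + eta * L) / (beta + eta)    # eq. (12), simplified prior
    x = U_inv @ (Phi.T @ y + beta * z - mu / 2)         # eq. (13) via eq. (16)
    mu = mu + 2 * beta * (x - z)                        # eq. (14)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(round(err, 4))
```

The iterates converge to the consensus solution balancing data fidelity and the prior; in the full method the z-update instead applies (15) with the patch operators T̃_p.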

#### Pretrained Deep Convolutional Architectures

In (16), the term U^{−1} involves an expensive matrix inversion, which makes direct computation of x_{t+1} impractical. An inversion-free approach proposed in [41] addresses this problem by learning a convolutional neural network to approximate the matrix inversion. Note that training this neural network is data-independent, since U depends only on Φ and β. Applying the Sherman–Morrison–Woodbury formula, we reduce this matrix inversion to a smaller scale,

 U^{−1} = β^{−1} (I − Φ^H V^{−1} Φ), (17)

where V = βI + ΦΦ^H has dimension M × M. To approximate V^{−1}, a trainable deep convolutional neural network parameterized by θ is employed, i.e., C_θ ≈ V^{−1}. The parameters θ are learned by minimizing the sum of the reconstruction losses of two auto-encoders with shared weights

 argmin_θ E_ε [ ‖ε − C_θ V ε‖₂² + ‖ε − V C_θ ε‖₂² ], (18)

where V can be computed directly from Φ and β, and ε is sampled from publicly available image datasets [10]. Note that C_θ must be pretrained separately for each Φ and β. By plugging the learned C_θ into (17), we obtain a reusable operator that replaces the cumbersome matrix inversion. Hence x is updated by

 x_{t+1} = β^{−1} (I − Φ^H C_θ Φ) (Φ^H y + β z_{t+1} − μ_t/2). (19)
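Equation (17) is the standard Woodbury identity with V = βI + ΦΦ^H (the M × M matrix the identity requires), and can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, beta = 40, 10, 0.5
Phi = rng.standard_normal((M, N))

U = Phi.T @ Phi + beta * np.eye(N)      # the N x N matrix U of eq. (16)
V = beta * np.eye(M) + Phi @ Phi.T      # the smaller M x M matrix of eq. (17)
U_inv_woodbury = (np.eye(N) - Phi.T @ np.linalg.inv(V) @ Phi) / beta

print(np.allclose(U_inv_woodbury, np.linalg.inv(U)))   # True
```

This is why learning an approximation of V^{−1} (an M × M operator) is much cheaper than inverting U directly when M ≪ N.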

#### Fourier space solution

Equation (16) can also be solved by transforming the problem from the image space into the Fourier space when Φ is a partial Fourier transform matrix [11]. For a down-sampling matrix D and a Fourier transform matrix F, substituting Φ = DF into (16) yields

 x_{t+1} = (F^H (D^H D + βI) F)^{−1} (F^H D^H y + (β z_{t+1} − μ_t/2)), (20)

where D^H D + βI is a diagonal matrix whose inverse can be computed easily. Equation (20) is equivalent to

 x_{t+1} = F^H { (D^H D + βI)^{−1} (D^H y + F (β z_{t+1} − μ_t/2)) }. (21)

Therefore, x_{t+1} can be obtained by applying the inverse Fourier transform to the term in braces on the right-hand side of (21).
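The equivalence of the direct solve (16) and the FFT-based update (21) for partial Fourier sensing can be checked numerically in 1D; the dimensions and sampling pattern below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta = 32, 0.7
F = np.fft.fft(np.eye(N), norm='ortho')         # unitary DFT matrix (symmetric)
keep = rng.choice(N, size=12, replace=False)    # sampled Fourier rows
D = np.eye(N)[keep]                             # down-sampling matrix
Phi = D @ F                                     # partial Fourier sensing matrix

x_true = rng.standard_normal(N)
y = Phi @ x_true
z = x_true + 0.05 * rng.standard_normal(N)
mu = np.zeros(N)

# Direct solution of eq. (16): U^{-1} (Phi^H y + beta z - mu/2)
U = Phi.conj().T @ Phi + beta * np.eye(N)
x_direct = np.linalg.solve(U, Phi.conj().T @ y + beta * z - mu / 2)

# Eq. (21): the same update using only FFTs and a diagonal inverse
mask = np.zeros(N)
mask[keep] = 1.0                                # diagonal of D^H D
rhs = D.T @ y + np.fft.fft(beta * z - mu / 2, norm='ortho')
x_fft = np.fft.ifft(rhs / (mask + beta), norm='ortho')

print(np.allclose(x_direct, x_fft))             # True
```

The FFT route replaces an N × N solve with two FFTs and an element-wise division, which is what makes the Fourier-space update attractive for partial Fourier sampling.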

## 4 Experiments

We conduct experiments on both noisy and noiseless CS measurements of eight test images: Barbara, Boats, Cameraman, Foreman, House, Lena, Monarch, and Parrots. All images are resized to the same resolution. Since excellent results have already been obtained when the CS rate is large [11, 29], we here focus on test cases with a limited number of measurements. The CS measurements are generated by pseudo-radial sampling of the test images in the Fourier domain. Unlike traditional random sampling schemes, pseudo-radial sampling produces streaking artifacts that are more difficult to remove. We also perform experiments using a random sensing matrix generated from a standard Gaussian distribution. The deep convolutional architecture is pretrained for this sensing matrix and then used in ADMM to avoid the expensive matrix inversion. A DCT-based CS recovery algorithm [52] is used for the initial image estimate. The main parameters of the proposed method are the patch size, the number of similar patches for each reference patch, and the imposed low rank ℓ. The regularization parameter η and the ADMM penalty β are tuned separately for each sensing rate. In practice, we have found that ADMM converges after only a few iterations and the performance gain comes mainly from our low-rank TFA, so we use a small number of outer- and inner-loop iterations. Experimental results for noiseless and noisy CS measurements are evaluated by peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [40].

### 4.1 Baselines

We compare our results with four competitive CS image restoration algorithms: BM3D-CS [15], TVAL3 [22], NLR-CS [11] and D-AMP [29]. BM3D-CS is a nonparametric method that applies the block-matching and 3D filtering algorithm [9] to compressive sensing. TVAL3 restores images by combining the classic augmented Lagrangian multiplier method with total-variation regularization. NLR-CS is currently a state-of-the-art method that uses nonlocal low-rank regularization along with ADMM to solve image CS problems. D-AMP employs a denoiser in an approximate message passing framework, achieving state-of-the-art performance on noisy measurements, especially at high CS rates. The source codes of these baseline methods are downloaded from the respective authors’ websites, and the parameters for these algorithms are set to their default values.

### 4.2 Experiments with Noiseless CS Measurements

We first perform CS image restoration from noiseless measurements using the pseudo-radial sampling scheme at five different CS rates. Table 1 summarizes the results of our proposed algorithm compared with various CS inversion algorithms at different CSr values. BM3D-CS and the state-of-the-art methods NLR-CS and D-AMP all suffer at extremely low CSr. As CSr increases, NLR-CS yields significant performance improvements, allowing it to surpass other methods at the highest tested CSr. Our proposed method achieves the best performance in all other cases. On average, our NLR-TFA algorithm outperforms all competing methods across the tested CS rates, with PSNR gains over BM3D-CS, TVAL3, NLR-CS and D-AMP of up to several dB. Furthermore, our average reconstruction PSNR decreases only slightly as CSr is reduced to its lowest tested value, whereas NLR-CS and D-AMP degrade much more quickly, indicating that the proposed method is very stable at low CSr, i.e., with a limited number of measurements.

To evaluate the reconstructions visually, two examples of restored Foreman and Monarch images at low CSr are shown in Figs. 3 and 4. It is evident that our method recovers the best visual quality among all competing methods. Large-scale sharp edges and small-scale fine structures are both reconstructed in the two images. In particular, at extremely low CSr, our NLR-TFA method (Fig. 3(d)) can effectively approximate the original image, while some other methods, such as NLR-CS, recover little more than streak-like artifacts (Fig. 3(c)).

We next generate CS measurements by sampling the test images with a random Gaussian sensing matrix. A deep convolutional neural network is pretrained for this sensing matrix to solve the large-scale matrix inversion problem in ADMM. We compare our results with D-AMP, which provides a random Gaussian sensing implementation in its source code. The reconstruction results at three sensing rates are shown in Table 2. Our proposed method outperforms D-AMP on all test images and sensing rates. In addition, the runtime of the algorithm using this inversion-free approach is much lower than that of approaches with inner-loop updates, such as conjugate gradient (CG). This demonstrates the efficiency and accuracy of the deep convolutional architecture in approximating the inversion matrix.

Patch Size Selection. We have conducted experiments using different patch sizes. As can be seen from Table 3 (pseudo-radial sampling scheme), smaller patch sizes lead to better reconstruction results. When we adopt the smallest tested patch size on the Foreman image, the PSNR increases significantly compared to results with larger patch sizes. Similar conclusions hold for the other images. In addition, the running time of our model is largely influenced by the tensor size; for the tensor size used in our main experiments, the average running time at low CSr is about 10 minutes on a desktop with an i7 CPU @ 3.4 GHz and 24 GB RAM.

### 4.3 Experiments with Noisy CS Measurements

Similar experiments are conducted with noisy CS measurements to show the robustness of our algorithm to noise. The noisy CS measurements are obtained by adding zero-mean Gaussian noise to the noiseless CS measurements, and we test the algorithms at three noise levels. The PSNR comparison of all methods for the Boats and Monarch images is shown in Fig. 6. Our proposed NLR-TFA outperforms the competing methods at all noise levels. Furthermore, D-AMP shows great robustness to noise, while BM3D-CS and NLR-CS are relatively sensitive to high-level noise. As the noise level increases, the reconstruction performance of our algorithm decreases slowly, showing that the proposed method is robust to noise. In Fig. 5, we show reconstructions of the Boats image using various algorithms; the proposed NLR-TFA clearly yields the best reconstruction and is robust to noise.

## 5 Conclusion

We have presented a low-rank tensor-factor-analysis-based approach to image restoration. A tensor is generated for each exemplar patch by concatenating similar patches from the estimated image. Low-rankness is imposed on the tensors to exploit non-local correlation and high-order structure via tensor factorization. ADMM is employed on the low-rank tensors, where we use either a pretrained convolutional architecture or a Fourier-space approach to avoid an expensive matrix inversion. Experimental results demonstrate that the proposed NLR-TFA method outperforms state-of-the-art algorithms on CS image reconstruction at low CS sampling rates.

We have demonstrated the superiority of our model in image CS with three-way tensors, which can also be used in depth CS [25, 26, 53], polarization CS [39] and dynamic range CS [48]. Future work includes extending the model to four-way tensors for video CS [24, 27, 35, 36, 45, 46, 54, 55, 56, 60, 61] and hyperspectral image CS [7, 62], and to five-way tensors for joint spectral-temporal CS [37, 38].

### Footnotes

1. This work was performed when Xinyuan Zhang was a summer intern at Nokia Bell Labs in 2017.
2. Corresponding author.

### References

1. T. Blumensath and M. E. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265 – 274, 2009.
2. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122, 2011.
3. A. Buades, B. Coll, and J.-M. Morel. A non-local algorithm for image denoising. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 2, pages 60–65. IEEE, 2005.
4. A. Buades, J.-L. Lisani, and M. Miladinović. Patch-based video denoising with optical flow estimation. IEEE Transactions on Image Processing, 25(6):2573–2586, 2016.
5. E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on information theory, 52(2):489–509, 2006.
6. E. J. Candes and T. Tao. Decoding by linear programming. IEEE transactions on information theory, 51(12):4203–4215, 2005.
7. X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady. Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world. IEEE Signal Processing Magazine, 33(5):95–108, Sept 2016.
8. J. Chang, C.-L. Li, B. Póczos, B. Kumar, and A. C. Sankaranarayanan. One network to solve them all—solving linear inverse problems using deep projection models. arXiv preprint arXiv:1703.09912, 2017.
9. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on image processing, 16(8):2080–2095, 2007.
10. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
11. W. Dong, G. Shi, X. Li, Y. Ma, and F. Huang. Compressive sensing via nonlocal low-rank regularization. IEEE Transactions on Image Processing, 23(8):3618–3632, 2014.
12. D. L. Donoho. Compressed sensing. IEEE Trans. Inf. Theor., 52(4):1289–1306, Apr. 2006.
13. D. L. Donoho. Compressed sensing. IEEE Transactions on information theory, 52(4):1289–1306, 2006.
14. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk. Single-pixel imaging via compressive sampling. IEEE Signal Processing Magazine, 25(2):83–91, March 2008.
15. K. Egiazarian, A. Foi, and V. Katkovnik. Compressed sensing image reconstruction via recursive spatially adaptive filtering. In Image Processing, 2007. ICIP 2007. IEEE International Conference on, volume 1, pages I–549. IEEE, 2007.
16. M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image processing, 15(12):3736–3745, 2006.
17. S. Gu, L. Zhang, W. Zuo, and X. Feng. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2862–2869, 2014.
18. L. He and L. Carin. Exploiting structure in wavelet-based bayesian compressive sensing. IEEE Transactions on Signal Processing, 57(9):3488–3497, September 2009.
19. T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Rev., 51(3):455–500, Aug. 2009.
20. K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok. Reconnet: Non-iterative reconstruction of images from compressively sensed measurements. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
21. S. Leurgans, R. Ross, and R. Abel. A decomposition for three-way arrays. SIAM Journal on Matrix Analysis and Applications, 14(4):1064–1083, 1993.
22. C. Li, W. Yin, H. Jiang, and Y. Zhang. An efficient augmented lagrangian method with applications to total variation minimization. Computational Optimization and Applications, 56(3):507–530, 2013.
23. Z. Lin, M. Chen, and Y. Ma. The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010.
24. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady. Coded aperture compressive temporal imaging. Optics Express, 21(9):10526–10545, 2013.
25. P. Llull, X. Yuan, L. Carin, and D. Brady. Image translation for single-shot focal tomography. Optica, 2(9):822–825, 2015.
26. P. Llull, X. Yuan, X. Liao, J. Yang, L. Carin, G. Sapiro, and D. Brady. Compressive extended depth of field using image space coding. In Computational Optical Sensing and Imaging (COSI), pages 1–3, 2014.
27. P. Llull, X. Yuan, X. Liao, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady. Compressed Sensing and its Applications: MATHEON Workshop 2013, chapter Temporal Compressive Sensing for Video, pages 41–74. Springer International Publishing, 2015.
28. M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly. Compressed sensing MRI. IEEE Signal Processing Magazine.
29. C. A. Metzler, A. Maleki, and R. G. Baraniuk. From denoising to compressed sensing. IEEE Transactions on Information Theory, 62(9):5117–5144, Sept 2016.
30. A. Mousavi, A. B. Patel, and R. G. Baraniuk. A deep learning approach to structured signal recovery. In Communication, Control, and Computing (Allerton), 2015 53rd Annual Allerton Conference on, pages 1336–1343. IEEE, 2015.
31. F. Nie, H. Huang, and C. H. Ding. Low-rank matrix recovery via efficient Schatten p-norm minimization. In AAAI, 2012.
32. Y. Peng, D. Meng, Z. Xu, C. Gao, Y. Yang, and B. Zhang. Decomposable nonlocal tensor dictionary learning for multispectral image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2949–2956, 2014.
33. P. Rai, Y. Wang, S. Guo, G. Chen, D. Dunson, and L. Carin. Scalable bayesian low-rank decomposition of incomplete multiway tensors. In Proceedings of the 31st International Conference on Machine Learning, volume 32, pages 1800–1808, 2014.
34. N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis, and C. Faloutsos. Tensor decomposition for signal processing and machine learning. IEEE Transactions on Signal Processing, 65(13):3551–3582, 2017.
35. Y. Sun, X. Yuan, and S. Pang. High-speed compressive range imaging based on active illumination. Opt. Express, 24(20):22836–22846, Oct 2016.
36. Y. Sun, X. Yuan, and S. Pang. Compressive high-speed stereo imaging. Opt. Express, 25(15):18182–18190, July 2017.
37. T. Tsai, P. Llull, X. Yuan, L. Carin, and D. Brady. Coded aperture compressive spectral-temporal imaging. In Computational Optical Sensing and Imaging (COSI), pages 1–3, 2015.
38. T.-H. Tsai, P. Llull, X. Yuan, D. J. Brady, and L. Carin. Spectral-temporal compressive imaging. Optics Letters, 40(17):4054–4057, Sep 2015.
39. T.-H. Tsai, X. Yuan, and D. J. Brady. Spatial light modulator based color polarization imaging. Optics Express, 23(9):11912–11926, May 2015.
40. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. Trans. Img. Proc., 13(4):600–612, Apr. 2004.
41. Q. Wei, K. Fan, L. Carin, and K. A. Heller. An inner-loop free solution to inverse problems using deep neural networks. arXiv preprint arXiv:1709.01841, 2017.
42. B. Wen, Y. Li, L. Pfister, and Y. Bresler. Joint adaptive sparsity and low-rankness on the fly: an online tensor reconstruction scheme for video denoising. In IEEE International Conference on Computer Vision (ICCV), 2017.
43. Q. Xie, Q. Zhao, D. Meng, Z. Xu, S. Gu, W. Zuo, and L. Zhang. Multispectral images denoising by intrinsic tensor sparsity regularization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
44. Y. Xie, S. Gu, Y. Liu, W. Zuo, W. Zhang, and L. Zhang. Weighted Schatten p-norm minimization for image denoising and background subtraction. IEEE Transactions on Image Processing, 25(10):4842–4857, 2016.
45. J. Yang, X. Liao, X. Yuan, P. Llull, D. J. Brady, G. Sapiro, and L. Carin. Compressive sensing by learning a Gaussian mixture model from measurements. IEEE Transaction on Image Processing, 24(1):106–119, January 2015.
46. J. Yang, X. Yuan, X. Liao, P. Llull, G. Sapiro, D. J. Brady, and L. Carin. Video compressive sensing using Gaussian mixture models. IEEE Transaction on Image Processing, 23(11):4863–4878, November 2014.
47. Y. Yang, J. Sun, H. Li, and Z. Xu. Deep admm-net for compressive sensing mri. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 10–18. Curran Associates, Inc., 2016.
48. X. Yuan. Compressive dynamic range imaging via Bayesian shrinkage dictionary learning. Optical Engineering, 55(12):123110, 2016.
49. X. Yuan. Generalized alternating projection based total variation minimization for compressive sensing. In 2016 IEEE International Conference on Image Processing (ICIP), pages 2539–2543, Sept 2016.
50. X. Yuan, G. Huang, H. Jiang, and P. A. Wilford. Block-wise lensless compressive camera. In 2017 IEEE International Conference on Image Processing (ICIP), pages 31–35, Sept 2017.
51. X. Yuan, H. Jiang, G. Huang, and P. Wilford. Lensless compressive imaging. arXiv:1508.03498, 2015.
52. X. Yuan, H. Jiang, G. Huang, and P. Wilford. SLOPE: Shrinkage of local overlapping patches estimator for lensless compressive imaging. IEEE Sensors Journal, 16(22):8091–8102, November 2016.
53. X. Yuan, X. Liao, P. Llull, D. Brady, and L. Carin. Efficient patch-based approach for compressive depth imaging. Applied Optics, 55(27):7556–7564, Sep 2016.
54. X. Yuan, P. Llull, X. Liao, J. Yang, D. J. Brady, G. Sapiro, and L. Carin. Low-cost compressive sensing for color video and depth. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3318–3325, 2014.
55. X. Yuan and S. Pang. Compressive video microscope via structured illumination. In 2016 IEEE International Conference on Image Processing (ICIP), pages 1589–1593, Sept 2016.
56. X. Yuan and S. Pang. Structured illumination temporal compressive microscopy. Biomedical Optics Express, 7:746–758, 2016.
57. X. Yuan and Y. Pu. Convolutional factor analysis inspired compressive sensing. In 2017 IEEE International Conference on Image Processing (ICIP), pages 550–554, Sept 2017.
58. X. Yuan and Y. Pu. Parallel lensless compressive imaging via deep convolutional neural networks. Optics Express, 26(2):1962–1977, Jan 2018.
59. X. Yuan, V. Rao, S. Han, and L. Carin. Hierarchical infinite divisibility for multiscale shrinkage. IEEE Transactions on Signal Processing, 62(17):4363–4374, Sep. 1 2014.
60. X. Yuan, Y. Sun, and S. Pang. Compressive video microscope via structured illumination. In Computational Optical Sensing and Imaging (COSI), 2016.
61. X. Yuan, Y. Sun, and S. Pang. Compressive temporal rgb-d imaging. In Imaging and Applied Optics 2017 (3D, AIO, COSI, IS, MATH, pcAOP), page CTh1B.3. Optical Society of America, 2017.
62. X. Yuan, T.-H. Tsai, R. Zhu, P. Llull, D. J. Brady, and L. Carin. Compressive hyperspectral imaging with side information. IEEE Journal of Selected Topics in Signal Processing, 9(6):964–976, September 2015.
63. M. Zhou, H. Chen, L. Ren, G. Sapiro, L. Carin, and J. W. Paisley. Non-parametric bayesian dictionary learning for sparse image representations. In Advances in neural information processing systems, pages 2295–2303, 2009.