Experimental comparison of single-pixel imaging algorithms
Abstract
Single-pixel imaging (SPI) is a novel technique that captures 2D images using a photodiode instead of conventional 2D array sensors. SPI offers a high signal-to-noise ratio, a wide spectral range, low cost, and robustness to light scattering. Various algorithms have been proposed for SPI reconstruction, including linear correlation methods, the alternating projection (AP) method, and compressive sensing based methods. However, there has been no comprehensive review discussing their respective advantages, which is important for SPI's further applications and development. In this paper, we reviewed and compared these algorithms in a unified reconstruction framework. Besides, we proposed two other SPI algorithms: a conjugate gradient descent based method (CGD) and a Poisson maximum likelihood based method. Both simulations and experiments validate the following conclusions: to obtain comparable reconstruction accuracy, the compressive sensing based total variation regularization method (TV) requires the fewest measurements and consumes the least running time for small-scale reconstruction; the CGD and AP methods run fastest in large-scale cases; and the TV and AP methods are the most robust to measurement noise. In a word, there are tradeoffs among capture efficiency, computational complexity, and robustness to noise for the different SPI algorithms. We have released our source code for non-commercial use.
Journal of the Optical Society of America A. Compiled July 5, 2019. OCIS codes: (170.3010) Image reconstruction techniques; (110.1758) Computational imaging; (150.1135) Algorithm.
1 Introduction
Single-pixel imaging (SPI) [1] is a novel imaging technique that produces 2D images with a photodiode instead of conventional 2D array sensors. Specifically, SPI uses a light modulator such as a diffuser [2] or a programmable spatial light modulator (SLM) to modulate light patterns. The correlated light from the target scene is finally collected by a photodiode, as shown in Fig. 1. The 2D target scene can be reconstructed from the modulation patterns and the corresponding 1D measurements using various algorithms, including linear correlation methods [3, 4, 5, 6], the alternating projection method [2], and compressive sensing (CS) based techniques [7, 8]. Due to its high signal-to-noise ratio (SNR), wide spectral range, low cost, flexible system configuration, and robustness to light scattering, SPI has drawn increasing attention in the last decade, and has been widely applied in multispectral imaging [9, 10, 11], 3D modeling [12, 13], optical encryption [14, 15], remote sensing [16, 17], object tracking [18, 19, 20], imaging through atmospheric turbulence [21, 22], etc.
SPI shares a similar imaging scheme with ghost imaging (GI) [23, 24]. Both optically multiplex scene information into 1D measurements, and computationally demultiplex the 2D target image. GI originates from quantum optics, and produces scene images using the spatial correlation of entangled photon pairs (one interacts with the scene and the other does not). Later, GI was demonstrated to also work with classical thermal light [25, 26]. To avoid having to record the modulation patterns, Shapiro [27] proposed using an SLM to modulate light, which coincides with the development of SPI.
Although SPI and GI have attracted increasing attention from various fields, their studies have been conducted separately in computer science and optics. It is necessary to compare the reconstruction algorithms under the same framework. Such a unified perspective offers researchers an easy way to choose the appropriate algorithm for their experiments, and could further push forward the development and applications of SPI.
In this paper, we reviewed SPI and GI algorithms in a unified reconstruction framework. Also, we proposed and tested two other methods for SPI reconstruction: a conjugate gradient descent based method and a Poisson maximum likelihood based method. These algorithms are applied to both simulated data and real captured data, under different experimental settings including sampling ratio, image size, and noise level. Given comparable reconstruction accuracy, we investigate and compare their capture efficiency, computational complexity, and robustness to measurement noise.
2 SPI algorithms
The single-pixel imaging scheme is a linear system. Specifically, the measurement formation can be described as
b = Ax,   (1)
where A ∈ R^{M×N} denotes the light modulation matrix (M modulation patterns, each consisting of N pixels), x ∈ R^N stands for the target scene (aligned as a vector) to be reconstructed, and b ∈ R^M is the measurement vector. The SPI reconstruction is to calculate x from the modulation patterns A and the corresponding measurements b. In the following, we present the derivations of the different algorithms, which are classified into three categories: non-iterative methods, linear iterative methods, and nonlinear iterative methods.
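As an illustrative sketch, the measurement formation of Eq. (1) can be simulated in a few lines; the pattern count, size, and binary values below are assumptions for demonstration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16 * 16            # pixels per pattern
M = 256                # number of modulation patterns
A = rng.integers(0, 2, size=(M, N)).astype(float)  # binary patterns as rows of A
x = rng.random(N)      # target scene, aligned as a vector

b = A @ x              # 1D photodiode measurements, one per pattern
```

Every reconstruction algorithm below starts from exactly this pair (A, b).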
2.1 Non-iterative methods
The non-iterative methods perform direct reconstruction of the target scene without iteration. One common and intuitive approach is to invert the linear model. However, there are usually M measurements and N unknowns with M ≠ N, which means that the light modulation matrix A is not square and cannot be inverted directly. To make the problem solvable, A^T is multiplied on both sides of Eq. (1), and the model becomes A^T b = A^T A x. Then x is calculated as x = (A^T A)^{-1} A^T b. According to ref. [28], this is essentially equivalent to minimizing ||Ax − b||_2^2 when M ≥ N. However, A^T A is rank-deficient when M < N, and the method then fails.
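The least-squares inversion above can be sketched with simulated data (sizes are illustrative; M ≥ N is required):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 128, 64                      # M >= N so that A^T A is invertible
A = rng.random((M, N))
x_true = rng.random(N)
b = A @ x_true

# x = (A^T A)^{-1} A^T b, solved via the normal equations
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
```

With noiseless measurements and a full-rank A, this recovers the scene exactly (up to numerical precision).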
Based on the fact that the SPI measurements represent the correlation between the modulation patterns and the target scene, x can be reconstructed by correlating the modulation patterns with the corresponding measurements as
x = ⟨(b_i − ⟨b⟩) a_i⟩,   (2)
where b_i is the i-th measurement in b, a_i is the i-th modulation pattern (row) in A, and ⟨·⟩ is the ensemble average over i, defined as ⟨b⟩ = (1/M) Σ_{i=1}^{M} b_i and ⟨a⟩ = (1/M) Σ_{i=1}^{M} a_i. Basically, the reconstruction is a weighted summation of the modulation patterns, with the weights being the corresponding mean-subtracted measurements. A larger measurement means the modulation pattern is more similar to the target scene, and thus leads to a stronger correlation and a larger weight of that pattern in the reconstruction. Therefore, this kind of method is also referred to as a linear correlation method. Statistically, if the intensity of each pixel over different patterns is independently and identically distributed (i.i.d.), the average of all the patterns approaches a uniform image as their number goes to infinity, which produces a high-quality reconstruction.
Some variant algorithms have been proposed to improve the correlation based reconstruction, among which the differential ghost imaging (DGI) method [5, 4] is widely used. DGI takes illumination fluctuations into account, and introduces an additional detector to record the total intensity of each illumination pattern. Specifically, if we denote the pattern intensity vector as r (with r_i the total intensity of the i-th pattern), DGI is performed as
x = ⟨(b_i − (⟨b⟩/⟨r⟩) r_i) a_i⟩.   (3)
Intuitively, DGI uses the light's total intensity to normalize the illumination patterns, which improves the signal-to-noise ratio (SNR) of the final reconstruction. Due to its robustness, we use DGI to represent the non-iterative algorithms in the following simulations and experiments.
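A minimal sketch of the DGI weighted pattern sum, assuming the pattern intensity r_i is simply the pixel sum of each pattern (correlation methods need heavy oversampling, hence the large M):

```python
import numpy as np

def dgi(A, b):
    """Differential ghost imaging: pattern sum weighted by measurements
    normalized with each pattern's total intensity r_i."""
    r = A.sum(axis=1)                       # total intensity of each pattern
    weights = b - (b.mean() / r.mean()) * r
    return (weights @ A) / len(b)           # <(b_i - <b>/<r> r_i) a_i>

rng = np.random.default_rng(2)
N, M = 256, 20000                           # M >> N for the correlation to average out
A = rng.integers(0, 2, size=(M, N)).astype(float)
x_true = rng.random(N)
x_hat = dgi(A, A @ x_true)                  # recovers x up to an offset and scale
```

The output correlates with the scene rather than matching it exactly, which is the expected behavior of linear correlation methods.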
2.2 Linear iterative methods
2.2.1 Gradient descent (GD)
The SPI reconstruction can be treated as an error reduction process (between the real measurements and their estimates). Mathematically, it can be formulated as a quadratic minimization problem:
min_x f(x) = (1/2) ||Ax − b||_2^2,   (4)
where the ℓ2 norm is defined as ||v||_2 = (Σ_i v_i^2)^{1/2}. The gradient of the above objective function is derived as
g = ∇f(x) = A^T (Ax − b).   (5)
Using the gradient, one can reconstruct the target scene by iteratively updating x as
x^{(k+1)} = x^{(k)} − α g^{(k)},   (6)
where α denotes the step size and k the iteration index. Usually, the updating process is stopped when the objective function becomes smaller than a given threshold.
To converge fast, the optimal step size α* is needed, which can be calculated by solving the following problem:
min_α f(x − α g).
The derivative of the above function with respect to α is
d f(x − α g)/dα = −(Ag)^T (e − α Ag),
where e = Ax − b is the residual error vector. Setting the derivative to zero yields the closed-form solution of the optimal α as
α* = (g^T g)/(g^T A^T A g) = ||g||_2^2 / ||Ag||_2^2.   (7)
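The gradient descent update with this closed-form optimal step can be sketched as follows (random Gaussian patterns and sizes are assumptions for demonstration):

```python
import numpy as np

def gd(A, b, iters=500):
    """Gradient descent for 0.5*||Ax - b||^2 with the exact line-search
    step alpha = ||g||^2 / ||A g||^2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)       # gradient of the quadratic objective
        Ag = A @ g
        denom = Ag @ Ag
        if denom == 0:              # zero gradient: already at the minimum
            break
        x -= (g @ g) / denom * g    # optimal step size
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 64))
x_true = rng.random(64)
x_hat = gd(A, A @ x_true)
```

With an overdetermined, well-conditioned A, this converges to the least-squares solution.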
2.2.2 Conjugate gradient descent (CGD)
The conjugate gradient descent (CGD) method is also designed to solve the quadratic minimization problem in Eq. (4), but it requires the system matrix to be symmetric and positive-definite [29]. Similar to the matrix inversion method, one multiplies A^T on both sides of Eq. (1) and gets
Â x = b̂,   (8)
where Â = A^T A and b̂ = A^T b. The residual error vector is r^{(k)} = b̂ − Â x^{(k)}, and the conjugate search direction is defined as
p^{(k)} = r^{(k)} + ((r^{(k)T} r^{(k)}) / (r^{(k−1)T} r^{(k−1)})) p^{(k−1)},   (9)
where k in the superscript denotes the k-th iteration. The step size is set to be
α^{(k)} = (r^{(k)T} r^{(k)}) / (p^{(k)T} Â p^{(k)}).   (10)
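The standard conjugate gradient loop on the normal equations can be sketched as follows (a textbook Hestenes–Stiefel implementation under assumed simulated data, not the paper's exact code):

```python
import numpy as np

def cgd(A, b, tol=1e-10):
    """Conjugate gradient on the normal equations A^T A x = A^T b."""
    x = np.zeros(A.shape[1])
    r = A.T @ b - A.T @ (A @ x)     # residual of the normal equations
    p = r.copy()                    # first search direction
    for _ in range(A.shape[1]):
        Ap = A.T @ (A @ p)          # apply A^T A without forming it
        alpha = (r @ r) / (p @ Ap)  # step size
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + (r_new @ r_new) / (r @ r) * p   # conjugate direction update
        r = r_new
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((200, 64))
x_true = rng.random(64)
x_hat = cgd(A, A @ x_true)
```

In exact arithmetic CG terminates within N iterations, which is why it runs fast in large-scale SPI reconstruction.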
2.2.3 Poisson maximum likelihood
Utilizing the photon statistics that photons arrive at a sensor following the Poisson distribution [31], we propose to apply the maximum likelihood estimation method [32] for SPI reconstruction. This method estimates x by maximizing the likelihood of producing the measurements b. The objective function of the Poisson maximum likelihood algorithm is
max_x Π_{i=1}^{M} P(b_i; a_i x),  with  P(b_i; λ_i) = λ_i^{b_i} e^{−λ_i} / b_i!,   (11)
which is equivalent to
min_x Σ_{i=1}^{M} (a_i x − b_i log(a_i x)).   (12)
The gradient can be derived as
g = Σ_{i=1}^{M} a_i^T (1 − b_i/(a_i x)) = A^T (1 − b ⊘ (Ax)),
where ⊘ denotes element-wise division.
The target scene is reconstructed following the same gradient descent update as in Eq. (6). Since the gradient above is nonlinear in x, it is hard to derive a closed-form solution for the optimal step size α. Hence, we use the standard backtracking line search method [30, 33] to calculate the step size in each iteration. The search process is summarized in Alg. (1).
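A sketch of the Poisson maximum-likelihood descent with backtracking line search, assuming nonnegative patterns and a positive initialization so that Ax stays positive (the Armijo constant and shrink factor are conventional choices, not the paper's):

```python
import numpy as np

def poisson_objective(A, b, x):
    # sum_i (a_i x - b_i log(a_i x)), the negative Poisson log-likelihood
    Ax = A @ x
    return float(np.sum(Ax - b * np.log(Ax)))

def poisson_ml(A, b, iters=200):
    x = np.full(A.shape[1], 0.5)                # positive start keeps Ax > 0
    for _ in range(iters):
        g = A.T @ (1.0 - b / (A @ x))           # nonlinear gradient
        t, fx = 1.0, poisson_objective(A, b, x)
        # backtracking: shrink t until Ax stays positive and the objective
        # satisfies a sufficient-decrease (Armijo) condition
        while (np.any(A @ (x - t * g) <= 0)
               or poisson_objective(A, b, x - t * g) > fx - 0.5 * t * (g @ g)):
            t *= 0.5
            if t < 1e-12:
                return x
        x = x - t * g
    return x

rng = np.random.default_rng(5)
A = rng.random((300, 36)) + 0.1                 # nonnegative patterns
x_true = rng.random(36) + 0.1
b = A @ x_true
x_hat = poisson_ml(A, b)
```

Each accepted step strictly decreases the negative log-likelihood while preserving the positivity of the estimated photon rates.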
2.2.4 Alternating projection
The alternating projection (AP) method for SPI reconstruction was proposed by Guo et al. [2], and is adapted from the alternating projection methods for phase retrieval [34, 35]. This method considers the reconstruction from the view of the spatial spectrum. It treats the SPI measurement b_i as the zero-spatial-frequency coefficient of the light field a_i ∘ x arriving at the photodiode, i.e., b_i = sum(a_i ∘ x), with ∘ standing for the element-wise (dot) product. In each iteration, AP switches between the Fourier and spatial domains to apply support constraints. In Fourier space, b_i is the support constraint for the zero spatial frequency, since F(a_i ∘ x)(0) stands for the total light intensity of a_i ∘ x. Therefore, the field estimate is updated as
x_i = F^{-1}{ F(a_i ∘ x) with its zero-frequency coefficient replaced by b_i }.   (13)
In the spatial domain, the modulation patterns are the support constraints, and the target image x is updated as
x ← x + (a_i / max(a_i ∘ a_i)) ∘ (x_i − a_i ∘ x).   (14)
Incorporating the above calculations produces the updating principle of AP. In each iteration, all the measurements are sequentially used in the above update to refine the target image.
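An illustrative sketch of the updates described above (an assumption in the spirit of ref. [2], not its exact implementation). Since replacing only the zero-frequency coefficient spreads the measurement mismatch uniformly over the field, each inner update behaves like a Kaczmarz-style row projection, which is why the sweep converges:

```python
import numpy as np

def ap(A, b, sweeps=100):
    """Alternating-projection sketch: enforce each measurement b_i as the
    zero-frequency Fourier coefficient of the masked field a_i * x, then
    write the correction back through the pattern support."""
    x = np.zeros(A.shape[1])
    n = A.shape[1]
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            y = a_i * x                          # field reaching the photodiode
            Y = np.fft.fft(y)
            Y[0] = b_i                           # zero-frequency (sum) constraint
            y_new = np.real(np.fft.ifft(Y))
            # spatial support constraint: update x through the pattern a_i
            x = x + a_i * (y_new - y) * n / (a_i @ a_i)
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((200, 64))
x_true = rng.random(64)
x_hat = ap(A, A @ x_true)
```

Each inner update only touches pixels where the pattern is active, and cycling through all measurements per sweep drives the residual toward zero.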
2.3 Nonlinear iterative methods
Compressive sensing [36, 37, 38] aims to reconstruct signals from underdetermined linear systems by introducing signal priors, and it can be utilized for SPI reconstruction to reduce the number of measurements. There are two widely used priors for natural images: the sparse representation prior and the total variation (TV) regularization prior. The first states that natural images are sparse when represented in some overcomplete or orthogonal bases, such as the discrete cosine transform (DCT) basis used in the JPEG compression standard [1]. The TV regularization prior states that the integral of a natural image's gradient is statistically small. There also exist other compressive sensing studies for SPI with different extensions [39, 40, 41, 42]. However, most of them fall into these two kinds of priors, which will therefore be the focus of this article.
2.3.1 Sparse representation prior
Mathematically, the basis transform matrix and the corresponding coefficient vector are denoted as D and c, respectively, with x = Dc, and the optimization model becomes
min_c ||c||_0  s.t.  ADc = b,   (15)
where the ℓ0 norm ||c||_0 counts the number of nonzero entries in c and describes its sparsity.
Usually, one uses the ℓ1 norm to approximate the ℓ0 norm for easier reconstruction, and the above optimization can be conducted under a gradient descent framework using the augmented Lagrange multiplier (ALM) method [43, 44]. ALM has been proved to be a robust and fast algorithm for ℓ1 minimization [45]. By introducing Lagrange multipliers y1 and y2 to incorporate the equality constraints x = Dc and Ax = b into the objective function, our goal is to minimize the Lagrangian function of Eq. (15) as
L(c, x, y1, y2, μ) = ||c||_1 + ⟨y1, x − Dc⟩ + ⟨y2, b − Ax⟩ + (μ/2)(||x − Dc||_2^2 + ||b − Ax||_2^2),   (16)
where ⟨·, ·⟩ stands for the inner product, and μ is the parameter balancing the different optimization terms.
The variables in the above objective function include c, x, y1, y2, and μ. Following the iterative scheme of ALM, each variable is updated by minimizing the Lagrangian function while keeping the other variables fixed. The detailed derivations are as follows.
Optimize c. Removing all the terms irrelevant to c, the objective function becomes
min_c ||c||_1 + (μ/2) ||x − Dc + y1/μ||_2^2.   (17)
Following the ALM algorithm, and using the orthogonality of D, the updating rule of c is
c = S_{1/μ}(D^T (x + y1/μ)),   (18)
where S_ε(·) is a soft-thresholding operator defined as
S_ε(v) = sign(v) max(|v| − ε, 0).
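The element-wise thresholding step above is the standard soft-thresholding (shrinkage) operator, sketched here for reference:

```python
import numpy as np

def soft_threshold(v, eps):
    # sign(v) * max(|v| - eps, 0), applied element-wise
    return np.sign(v) * np.maximum(np.abs(v) - eps, 0.0)
```

Entries with magnitude below the threshold are zeroed out, which is what enforces sparsity in the coefficient update.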
Optimize x. Removing all the terms irrelevant to x yields
min_x (μ/2)(||x − Dc + y1/μ||_2^2 + ||b − Ax + y2/μ||_2^2),   (19)
and the gradient is
∇_x = μ(x − Dc + y1/μ) − μ A^T (b − Ax + y2/μ).
By setting ∇_x = 0, we can easily obtain the closed-form solution of x as
x = (I + A^T A)^{-1} (Dc − y1/μ + A^T (b + y2/μ)).   (20)
Optimize y1, y2 and μ. In the ALM algorithm, the Lagrange multipliers y1, y2 and the balancing parameter μ are updated as
y1 ← y1 + μ(x − Dc),   (21)
y2 ← y2 + μ(b − Ax),   (22)
μ ← min(ρμ, μ_max),
where ρ and μ_max are parameters set by users to adjust the growing speed and the maximum of μ.
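As a compact illustration of the same sparse-representation prior, the sketch below solves the relaxed problem min 0.5||ADc − b||² + λ||c||₁ with a proximal-gradient (ISTA) loop rather than the full interleaved ALM updates; the orthonormal DCT basis D, λ, and all sizes are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; rows are basis vectors (D @ D.T = I)."""
    k = np.arange(n)[:, None]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (np.arange(n) + 0.5) * k / n)
    D[0] /= np.sqrt(2.0)
    return D

def ista(B, b, lam, iters=3000):
    """min_c 0.5*||B c - b||^2 + lam*||c||_1 via proximal gradient."""
    L = np.linalg.norm(B, 2) ** 2             # Lipschitz constant of the gradient
    c = np.zeros(B.shape[1])
    for _ in range(iters):
        c = c - (B.T @ (B @ c - b)) / L       # gradient step on the data term
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)   # shrinkage
    return c

rng = np.random.default_rng(7)
N, M = 64, 56                                 # underdetermined: fewer measurements
D = dct_matrix(N)
c_true = np.zeros(N)
c_true[rng.choice(N, 3, replace=False)] = np.array([3.0, -2.0, 1.5])  # 3-sparse
A = rng.standard_normal((M, N))
B = A @ D.T                                   # x = D^T c, so b = (A D^T) c
b = B @ c_true

c_hat = ista(B, b, lam=0.5)
```

Even with fewer measurements than unknowns, the sparsity prior pins down the solution, which is the core benefit of the compressive sensing methods.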
2.3.2 Total variation regularization prior
As described in ref. [46], the gradient of an image x can be denoted as Gx, where G is the gradient calculation (finite difference) matrix. Here we use the ℓ1 norm to calculate the image's total variation, namely the integral of its gradient, and the optimization model becomes
min_x ||Gx||_1  s.t.  Ax = b.   (23)
We can see that this optimization shares the same form as the sparse representation based reconstruction in Eq. (15), so we can utilize the same algorithm introduced in Sec. 2.3.1 to solve the TV problem.
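For concreteness, a 1D finite-difference instance of the operator G might look as follows (the 2D image case stacks horizontal and vertical differences in the same way):

```python
import numpy as np

def grad_matrix(n):
    """Forward-difference operator: (G x)_i = x_{i+1} - x_i."""
    return -np.eye(n - 1, n) + np.eye(n - 1, n, k=1)

x = np.array([1.0, 3.0, 6.0, 6.0, 2.0])
G = grad_matrix(5)
tv = np.abs(G @ x).sum()        # ||Gx||_1, the total variation of x
```

Piecewise-constant signals have few nonzero differences, so minimizing ||Gx||_1 favors exactly the structure natural images tend to have.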
Above, we introduced various SPI reconstruction algorithms. A summary of their optimization models and reconstruction principles is listed in Tab. 1.
3 Quantitative comparison
In this section, we compare the performance of the above SPI reconstruction algorithms on both simulated and experimental data, to show their pros and cons.
3.1 Settings
We first clarify several experimental settings. (1) For quantitative comparison, we use the normalized root-mean-square error (RMSE) as the metric. The normalized RMSE is defined as RMSE = sqrt(⟨(x̂ − x)²⟩) / ⟨x⟩, with ⟨·⟩ being the ensemble average operation as defined before. It measures the relative difference between the ground-truth image x and the reconstructed image x̂. (2) For the initialization of x in the above iterative algorithms, one can theoretically use any value. Here we use the same initialization for all the algorithms for a fair comparison. (3) An iterative algorithm usually stops either when it reaches a preset iteration number, or when the objective function becomes smaller than a given threshold. Here, we use the relative change of the residual error between two subsequent iterations as the criterion, namely ε = |e^{(k)} − e^{(k−1)}| / e^{(k−1)}, where e^{(k)} = ||A x^{(k)} − b||_2. If ε falls below a preset threshold, the iteration stops. Also, we set the maximum iteration number as three times the pixel number. To ensure convergence, we set the minimum iteration number as 30.
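The error metric described above can be sketched directly (the normalization by the mean of the ground truth is the assumed convention):

```python
import numpy as np

def normalized_rmse(x_true, x_hat):
    """Relative RMSE between ground truth and reconstruction."""
    return np.sqrt(np.mean((x_hat - x_true) ** 2)) / np.mean(x_true)
```

A value of 0 means a perfect reconstruction; the thresholds quoted below (e.g. 0.03, 0.05) are on this relative scale.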
3.2 Simulations
In the following simulations, we use 10 widely used "standard" test images [47, 48, 49] as target scenes (pixel values normalized to [0, 1]), as shown in Fig. 2. For each algorithm, the reconstruction RMSEs on these images are averaged to quantitatively characterize its performance. Note that we only present the reconstructed images of 'cameraman' in the following figures to save space. The widely used random modulation is applied to synthesize measurements based on the formation model in Eq. (1). There are two parameters we would like to clarify: sampling ratio and image size. The sampling ratio is defined as the ratio between the number of measurements and the number of signals (pixels) to be reconstructed. It determines the capture efficiency. Except for the simulation specifically discussing the sampling ratio, it is set constant as 1. The image size is the pixel number of the final reconstructed images, which is also that of the illumination patterns. Similarly, the image size is kept constant in the following simulations except for the subsection specifically discussing it. Besides, we repeat each simulation 20 times and average the evaluations to produce the final quantitative results. All the algorithms are implemented in Matlab on a computer with an Intel Core i7-4790 3.6 GHz CPU, 16 GB RAM, and a 64-bit Windows 7 system.
3.2.1 Sampling ratio
We test the algorithms on simulated data with sampling ratios ranging from 0.2 to 5. The reconstruction results are shown in Fig. 3, from which we have the following observations:
(1) The reconstruction error decreases sharply as the sampling ratio increases to 1. When the sampling ratio continues to increase, the error decreases more slowly.
(2) The iteration number and running time do not increase much as the sampling ratio increases. One should note, though, that the capture time grows as the sampling ratio becomes larger.
(3) The conventional linear correlation method and the two linear iterative methods, namely the Poisson maximum likelihood based method and the GD method, produce large reconstruction errors even at a sampling ratio of 5. They have low capture efficiency.
(4) The nonlinear iterative methods (especially the TV method) need far fewer measurements than the other methods for comparable reconstruction quality, and converge faster than the linear iterative methods except for the CGD method. As a demonstration, the TV method needs measurements of only 50% of the signals to obtain a normalized RMSE smaller than 0.03. This benefits from the introduced image priors that provide extra scene information.
3.2.2 Image size
Image size is another important factor affecting both reconstruction quality and computational complexity. Here we keep the sampling ratio constant as 1, and run the algorithms on simulated data of different image sizes from 32×32 pixels to 160×160 pixels. The results are presented in Fig. 4, from which we can see that a larger image size leads to smaller reconstruction error. However, a larger image size also requires more iterations and running time. Besides, we have the following observations:
(1) The running time of the nonlinear iterative methods increases much faster than that of the other methods as the image size grows. This is due to the nonlinear calculations and the updating of the multiple introduced variables.
(2) As the image size increases, the iteration number of the nonlinear iterative methods does not increase. This means that if the sampling ratio is high enough, a small number of iterations suffices for the nonlinear methods to converge, which might be attributed to the introduced image priors.
(3) DGI, CGD, and AP take the least running time when the image size exceeds 160×160 pixels. However, DGI produces large errors in the final reconstruction. Therefore, CGD and AP outperform the other methods in large-scale SPI reconstruction considering both reconstruction quality and computational complexity.
3.2.3 Noise
In the above simulations, we assumed no measurement noise. However, measurements in reality are always corrupted by noise arising from various sources such as ambient light and circuit current. In this subsection, we study the influence of measurement noise on the final reconstruction and test the algorithms' robustness. Here we assume Gaussian white noise following the probability distribution
P(n) = (1/(sqrt(2π) σ)) exp(−n²/(2σ²)),   (24)
where n stands for the noise, and σ is its standard deviation (std). The noise level is characterized by the ratio between σ and the pixel number. We vary the noise level by changing this ratio from 0 to 3e-3. The sampling ratio is 1, and the image size is the same as in the previous simulations. The corresponding reconstruction results are shown in Fig. 5, from which we can draw the following conclusions:
(1) As the measurement noise increases, all the reconstructions degrade.
(2) The CGD method and the compressive sensing based sparse representation method degrade faster than the other methods.
(3) The TV and AP methods are the two methods most robust to measurement noise.
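The additive Gaussian noise model of Eq. (24) can be sketched on the measurement vector as follows (the σ value and measurement values are placeholders for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
M = 1000
b = rng.random(M)                          # clean measurements (placeholder values)
sigma = 0.01                               # std of the additive Gaussian noise
b_noisy = b + rng.normal(0.0, sigma, size=M)
```

Feeding b_noisy instead of b into any of the reconstruction routines above reproduces the robustness comparison.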
3.3 Experiment
To further compare the performance of the different SPI algorithms, we built an SPI setup to capture real data. Similar to the SPI setup reported in ref. [50], we used a commercial projector's illumination module (the numerical aperture of the projector lens is 0.27) and a digital micromirror device (DMD, Texas Instruments DLP Discovery 4100 Development Kit, .7XGA) for light modulation. Illumination patterns of 64×64 pixels are sequentially projected onto a printed transmissive film (34 mm × 34 mm) as the target scene (the 'pirate' image in Fig. 2). The correlated light is recorded by a high-speed photodiode (Thorlabs DET100 silicon photodiode, 340–1100 nm) together with a 14-bit acquisition board (ART PCI8514). The sampling rate is set as 10 kHz. We utilize the self-synchronization technique [51] to synchronize the DMD and the detector. Based on the observation in Sec. 3.2.1 that the reconstruction error of most algorithms is smaller than 0.05 when the sampling ratio reaches 1, we vary the sampling ratio from 0.1 to 1, and use the reconstruction of the TV method at a sampling ratio of 3 as the ground truth. The reconstructed results are presented in Fig. 6, from which we can see that the experimental results lead to the same conclusions as the above simulations.
4 Conclusions
In this article, we reviewed various single-pixel imaging algorithms in a unified reconstruction framework, including the differential ghost imaging method (DGI), the gradient descent method (GD), the alternating projection method (AP), the sparse representation method, and the total variation regularization method (TV). Also, we proposed two other methods for SPI reconstruction: the conjugate gradient descent method (CGD) and the Poisson maximum likelihood method. These algorithms perform SPI reconstruction from different perspectives. The DGI method considers the measurements as the correlation between the target scene and the illumination patterns, which indicates their similarity. Both the GD and CGD methods treat SPI reconstruction as a model-fitting process. The Poisson maximum likelihood method utilizes the statistics that individual photons arrive at sensors following the Poisson distribution. The AP method considers the reconstruction from the view of the spatial spectrum, treating the SPI measurements as the zero-spatial-frequency coefficients of the light fields arriving at the detector. The two compressive sensing based methods introduce natural-image priors into the optimization, and focus on underdetermined SPI reconstruction with fewer measurements than signals.
These algorithms can be classified into three categories according to their iteration type: the non-iterative methods (DGI), the linear iterative methods (GD, CGD, the Poisson maximum likelihood method, AP), and the nonlinear iterative methods (the sparse representation method, TV). Generally, the non-iterative methods need the least running time but the most measurements, while the nonlinear iterative methods need the fewest measurements. A visual comparison of the algorithms is presented in Fig. 7.
These algorithms can also be classified into global and non-global optimization methods, in terms of whether all the measurements are used simultaneously in each update [52]. The DGI and AP algorithms are non-global methods, while the other methods are global. Because non-global methods do not need to store all the patterns and measurements for each update, the DGI and AP methods offer high storage efficiency.
We tested and compared these algorithms on both simulated data and real captured data under different parameter settings, including sampling ratio, image size, and measurement noise. Based on these studies, we conclude that (1) considering capture efficiency, the TV method needs the fewest measurements for comparable reconstruction quality; (2) in terms of computational complexity, the TV and CGD methods take the least time for small-scale reconstruction, while in large-scale cases the CGD and AP methods run fastest; and (3) regarding robustness to measurement noise, the TV and AP methods are the most robust. In a word, there are tradeoffs among capture efficiency, computational complexity, and noise robustness for the different SPI reconstruction algorithms. One can choose the appropriate algorithm according to the specific SPI configuration and application. Our open source code of all the algorithms can be downloaded at http://www.sites.google.com/site/lihengbian.
This work was funded by the National Natural Science Foundation of China (61327902, 61671266), the National Hightech Research and Development Plan (2015AA042306), and the Research Project of Tsinghua University (20161080084).
References
 [1] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. E. Kelly, and R. G. Baraniuk, “Singlepixel imaging via compressive sampling,” IEEE Signal Proc. Mag. 25, 83 (2008).
 [2] K. Guo, S. Jiang, and G. Zheng, “Multilayer fluorescence imaging on a singlepixel detector,” Biomed. Opt. Express 7, 2425–2431 (2016).
 [3] Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79, 053840 (2009).
 [4] W. Gong and S. Han, “A method to improve the visibility of ghost images obtained by thermal light,” Phys. Lett. A 374, 1005 – 1008 (2010).
 [5] F. Ferri, D. Magatti, L. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104, 253603 (2010).
 [6] B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20, 16892–16901 (2012).
 [7] O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95, 131110 (2009).
 [8] M. Amann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. 3 (2013).
 [9] L. Bian, J. Suo, G. Situ, Z. Li, J. Fan, F. Chen, and Q. Dai, “Multispectral imaging using a single bucket detector,” Sci. Rep. 6, 24752 (2016).
 [10] Y. Wang, J. Suo, J. Fan, and Q. Dai, “Hyperspectral computational ghost imaging via temporal multiplexing,” IEEE Photonic. Tech. L. 28, 288–291 (2016).
 [11] Z. Li, J. Suo, X. Hu, C. Deng, J. Fan, and Q. Dai, “Efficient singlepixel multispectral imaging via nonmechanical spatiospectral modulation,” Sci. Rep. 7, 41435 (2017).
 [12] B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3D computational imaging with singlepixel detectors,” Science 340, 844–847 (2013).
 [13] M.J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Singlepixel threedimensional imaging with timebased depth resolution,” Nat. Commun. 7 (2016).
 [14] P. Clemente, V. Durán, E. Tajahuerce, and J. Lancis, “Optical encryption based on computational ghost imaging,” Opt. Lett. 35, 2391–2393 (2010).
 [15] W. Chen and X. Chen, “Ghost imaging for threedimensional optical security,” Appl. Phys. Lett. 103, 221106 (2013).
 [16] C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. 101, 141123 (2012).
 [17] W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, ‘‘Threedimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. 6, 26133 (2016).
 [18] O. S. MagañaLoaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102, 231104 (2013).
 [19] E. Li, Z. Bo, M. Chen, W. Gong, and S. Han, “Ghost imaging of a moving target with an unknown constant speed,” Appl. Phys. Lett. 104, 251120 (2014).
 [20] G. M. Gibson, B. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. Malcolm, and M. J. Padgett, “Realtime imaging of methane gas leaks using a singlepixel camera,” Opt. Express 25, 2998–3005 (2017).
 [21] J. Cheng, “Ghost imaging through turbulent atmosphere,” Opt. Express 17, 7916–7921 (2009).
 [22] P. Zhang, W. Gong, X. Shen, and S. Han, ‘‘Correlated imaging through atmospheric turbulence,” Phys. Rev. A 82, 033817 (2010).
 [23] T. Pittman, Y. Shih, D. Strekalov, and A. Sergienko, “Optical imaging by means of twophoton quantum entanglement,” Phys. Rev. A 52, R3429 (1995).
 [24] D. Strekalov, A. Sergienko, D. Klyshko, and Y. Shih, “Observation of twophoton "ghost" interference and diffraction,” Phys. Rev. Lett. 74, 3600 (1995).
 [25] R. S. Bennink, S. J. Bentley, and R. W. Boyd, “"twophoton" coincidence imaging with a classical source,” Phys. Rev. Lett. 89, 113601 (2002).
 [26] F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, “Highresolution ghost image and ghost diffraction experiments with thermal light,” Phys. Rev. Lett. 94, 183602 (2005).
 [27] J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78, 061802 (2008).
 [28] D. Shin, J. H. Shapiro, and V. K. Goyal, “Performance analysis of lowflux leastsquares singlepixel imaging,” IEEE Signal Proc. Let. 23, 1756–1760 (2016).
 [29] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems (NBS, 1952).
 [30] D. G. Luenberger, Introduction to linear and nonlinear programming (AddisonWesley Reading, MA, 1973).
 [31] L. Bian, J. Suo, J. Chung, X. Ou, C. Yang, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient,” Sci. Rep. 6 (2016).
 [32] H. Akaike, “Information theory and an extension of the maximum likelihood principle,” in “Selected Papers of Hirotugu Akaike,” (Springer, 1998), pp. 199–213.
 [33] S. Boyd and L. Vandenberghe, Convex optimization (Cambridge university press, 2004).
 [34] J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).
 [35] G. Zheng, R. Horstmeyer, and C. Yang, “Widefield, highresolution fourier ptychographic microscopy,” Nat. Photon. 7, 739–745 (2013).
 [36] D. L. Donoho, “Compressed sensing,” IEEE T. Inform. Theory 52, 1289–1306 (2006).
 [37] E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pur. Appl. Math. 59, 1207–1223 (2006).
 [38] E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Proc. Mag. 25, 21–30 (2008).
 [39] A. M and B. M., “Compressive adaptive computational ghost imaging,” Sci. Rep. 3, 1545 (2013).
 [40] W.K. Yu, M.F. Li, X.R. Yao, X.F. Liu, L.A. Wu, and G.J. Zhai, “Adaptive compressive ghost imaging based on wavelet trees and sparse representation,” Opt. Express 22, 7133–7144 (2014).
 [41] W. Gong and S. Han, “Highresolution farfield ghost imaging via sparsity constraint,” Sci. Rep. 5, 9280 (2015).
 [42] X. Hu, J. Suo, T. Yue, L. Bian, and Q. Dai, “Patchprimitive driven compressive ghost imaging,” Opt. Express 23, 11092–11104 (2015).
 [43] Z. Lin, M. Chen, L. Wu, and Y. Ma, “The augmented lagrange multiplier method for exact recovery of corrupted lowrank matrices,” UIUC Technical Report UILUENG092215 (2009).
 [44] Z. Lin, R. Liu, and Z. Su, “Linearized alternating direction method with adaptive penalty for lowrank representation,” in “Adv. Neural Inform. Proc. Syst.”, (Curran Associates, Inc., 2011), pp. 612–620.
 [45] A. Y. Yang, S. S. Sastry, A. Ganesh, and Y. Ma, “Fast minimization algorithms and an application in robust face recognition: A review,” in “IEEE Int. Conf. Image Proc.”, (IEEE, 2010), pp. 1849–1852.
 [46] J. Suo, L. Bian, F. Chen, and Q. Dai, “Signaldependent noise removal for color videos using temporal and crosschannel priors,” J. Vis. Commun. Image R. 36, 130 – 141 (2016).
 [47] R. C. Gonzalez and R. E. Woods, Digital Image Processing (3rd Edition) (PrenticeHall, Inc., Upper Saddle River, NJ, USA, 2006).
 [48] R. C. Gonzalez and R. E. Woods, “Image processing place,” http://www.imageprocessingplace.com/index.htm.
 [49] U. of Southern California, “Sipi image database,” http://sipi.usc.edu/database/.
 [50] L. Bian, J. Suo, X. Hu, F. Chen, and Q. Dai, “Efficient single pixel imaging in fourier space,” J. Opt. 18, 085704 (2016).
 [51] J. Suo, L. Bian, Y. Xiao, Y. Wang, L. Zhang, and Q. Dai, “A selfsynchronized high speed computational ghost imaging system: A leap towards dynamic capturing,” Opt. Laser Technol. 74, 65 – 71 (2015).
 [52] L.H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33214–33240 (2015).