Spectral Compressed Sensing via Structured Matrix Completion
Yuxin Chen yxchen@stanford.edu
Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
Yuejie Chi chi@ece.osu.edu
Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA
Abstract
This paper studies the problem of recovering a spectrally sparse object from a small number of time-domain samples. Specifically, the object of interest, of ambient dimension $n$, is assumed to be a mixture of $r$ complex multidimensional sinusoids, while the underlying frequencies can assume any value in the unit disk. Conventional compressed sensing paradigms suffer from the basis mismatch issue when imposing a discrete dictionary on the Fourier representation. To address this problem, we develop a novel nonparametric algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion. The algorithm starts by arranging the data into a low-rank enhanced form with multi-fold Hankel structure, then attempts recovery via nuclear-norm minimization. Under mild incoherence conditions, EMaC allows perfect recovery as soon as the number of samples exceeds the order of the spectral sparsity $r$ times polylogarithmic factors in $n$. We also show that, in many instances, accurate completion of a low-rank multi-fold Hankel matrix is possible when the number of observed entries is proportional to the information-theoretic limit (up to a logarithmic gap). The robustness of EMaC against bounded noise and its applicability to super-resolution are further demonstrated by numerical experiments.
A large class of practical applications features high-dimensional objects that can be modeled or approximated by a superposition of spikes in the spectral (resp. time) domain, and involves estimation of the object from its time (resp. frequency) domain samples. A partial list includes medical imaging (Lustig et al., 2007), radar systems (Potter et al., 2010), seismic imaging (Borcea et al., 2002), microscopy (Schermelleh et al., 2010), etc. The data acquisition devices, however, are often limited by physical and hardware constraints, precluding sampling with the desired resolution. It is thus of paramount interest to reduce the sensing complexity while retaining the recovery resolution.
Fortunately, in many instances, it is possible to recover an object even when the number of samples is far below the ambient dimension, provided that the object has a parsimonious representation in the transform domain. In particular, recent advances in compressed sensing (CS) (Candes et al., 2006) have popularized nonparametric methods based on convex surrogates. Such tractable methods do not require prior information on the model order, and are often robust against noise.
Nevertheless, the success of CS relies on a sparse representation or approximation of the object of interest in a finite discrete dictionary, while the true parameters in many applications are actually specified in a continuous dictionary. For concreteness, consider an object that is a weighted sum of multidimensional sinusoids at $r$ distinct frequencies. Conventional CS paradigms operate under the assumption that these frequencies lie on a predetermined grid on the unit disk. However, caution needs to be exercised when imposing a discrete dictionary on continuous frequencies, since nature never places the frequencies exactly on the predetermined grid, no matter how fine the grid is (Chi et al., 2011; Duarte & Baraniuk, 2012). This issue, known as basis mismatch between the true frequencies and the discretized grid, results in loss of sparsity due to spectral leakage along the Dirichlet kernel, and hence degradation in the performance of CS algorithms. While one might impose finer gridding to mitigate this weakness, doing so often leads to numerical instability and high correlation between dictionary elements, which significantly weakens the advantage of these CS approaches (Tang et al., 2012).
In this paper, we explore the above spectral compressed sensing problem, which aims to recover a spectrally sparse object from a small set of time-domain samples. The underlying (possibly multidimensional) frequencies can assume any value in the unit disk, and need to be recovered with infinite precision. To address this problem, we develop a nonparametric algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion. Specifically, EMaC starts by converting the data samples into an enhanced matrix with multi-fold Hankel structure, and then solves a nuclear-norm minimization program to complete the enhanced matrix. We show that, under mild incoherence conditions, EMaC admits exact recovery from a number of random samples on the order of $r$ up to polylogarithmic factors in $n$, where $r$ and $n$ denote, respectively, the spectral sparsity and the ambient dimension. Additionally, we provide theoretical guarantees for the low-rank Hankel matrix completion problem, which is of great importance in control, natural language processing, computer vision, etc. To the best of our knowledge, our results provide the first theoretical bounds that are close to the information-theoretic limit. Furthermore, numerical experiments demonstrate that our algorithm is robust against noise and is applicable to the problem of super-resolution.
The spectral compressed sensing problem is closely related to harmonic retrieval, which seeks to extract the underlying frequencies of an object from a collection of its time-domain samples. This spans many signal processing applications, including radar localization systems (Nion & Sidiropoulos, 2010), array imaging systems (Borcea et al., 2002), wireless channel sensing (Sayeed & Aazhang, 1999; Gedalyahui et al., 2011), etc. In fact, if the time-domain representation of an object can be estimated accurately, then its underlying frequencies can be identified using harmonic super-resolution methods.
Conventional approaches for these problems, such as ESPRIT (Roy & Kailath, 1989) and the matrix pencil method (Hua, 1992), are based on the eigenvalue decomposition of covariance matrices constructed from equispaced samples, and can accommodate infinite frequency precision. One weakness of these techniques is that they require prior information on the model order, that is, the number of underlying frequency spikes, or at least an estimate of it. Besides, their performance largely depends on knowledge of the noise spectra; some of them are unstable in the presence of noise and outliers (Dragotti et al., 2007).
Nonparametric algorithms based on convex optimization differ from the above parametric techniques in that the model order does not need to be specified a priori. Recently, Candès and Fernandez-Granda (2013) proposed a total-variation minimization algorithm to super-resolve a sparse object from frequency samples at the low end of its spectrum. This algorithm allows accurate super-resolution when the point sources are appropriately separated, and is stable against noise (Candes & Fernandez-Granda, 2012). Inspired by this approach, Tang et al. (2012) developed an atomic norm minimization algorithm for line spectral estimation from time-domain samples. This work is limited to 1-D frequency models and assumes randomness in the data model.
In contrast, our approach can accommodate multidimensional frequencies, and only assumes randomness in the observation basis. The algorithm is inspired by recent advances in the matrix completion (MC) problem, which aims at recovering a low-rank matrix from partial entries. It has been shown (Candes & Recht, 2009; Gross, 2011) that exact recovery is possible via nuclear norm minimization as soon as the number of observed entries is on the order of the information-theoretic limit. Encouragingly, this line of algorithms is also robust against noise and outliers (Negahban & Wainwright, 2012). Nevertheless, the theoretical guarantees of these algorithms do not apply to the more structured observation models associated with the Hankel structure. Consequently, direct application of existing MC results yields pessimistic bounds on the number of samples, far beyond the degrees of freedom underlying the sparse object.
The rest of this paper is organized as follows. We first describe the data model and the EMaC algorithm, and then present the theoretical guarantees of EMaC, with a proof outlined in the Appendix. Numerical validation of EMaC follows. We then discuss the extension to low-rank Hankel matrix completion, and conclude the paper.
Assume that the object of interest can be modeled as a weighted sum of multidimensional sinusoids at $r$ distinct frequencies $\bm{f}_i$ ($1 \le i \le r$), i.e.

$$x(\bm{t}) = \sum_{i=1}^{r} d_i\, e^{j2\pi \langle \bm{t}, \bm{f}_i \rangle}, \qquad (1)$$

where the $d_i$'s denote the complex amplitudes. For concreteness, our discussion is mainly devoted to a 2-dimensional (2-D) frequency model. This subsumes 1-D line spectral estimation as a special case, and indicates how to address multidimensional models.
Consider a data matrix $\bm{X}$ of size $n_1 \times n_2$. Suppose each entry can be expressed as

$$X_{k,l} = \sum_{i=1}^{r} d_i\, y_i^{k}\, z_i^{l}, \qquad 0 \le k < n_1,\ 0 \le l < n_2,$$

where $y_i := e^{j2\pi f_{1i}}$ and $z_i := e^{j2\pi f_{2i}}$ for some set of frequency pairs $\{(f_{1i}, f_{2i})\}_{i=1}^{r}$ (normalized by the Nyquist rate). We can then write $\bm{X}$ in matrix form as follows:

$$\bm{X} = \bm{Y} \bm{D} \bm{Z}^{\top}, \qquad (2)$$

where $\bm{D} := \mathrm{diag}(d_1, \ldots, d_r)$, and the Vandermonde matrices $\bm{Y}$ and $\bm{Z}$ are defined as

$$\bm{Y} := \begin{bmatrix} 1 & 1 & \cdots & 1 \\ y_1 & y_2 & \cdots & y_r \\ \vdots & \vdots & \vdots & \vdots \\ y_1^{n_1-1} & y_2^{n_1-1} & \cdots & y_r^{n_1-1} \end{bmatrix}, \qquad (3)$$

$$\bm{Z} := \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_r \\ \vdots & \vdots & \vdots & \vdots \\ z_1^{n_2-1} & z_2^{n_2-1} & \cdots & z_r^{n_2-1} \end{bmatrix}. \qquad (4)$$
Suppose that there exists a location set $\Omega$ of size $m$ such that $X_{k,l}$ is observed if and only if $(k,l) \in \Omega$, and assume that $\Omega$ is sampled uniformly at random. We are interested in recovering $\bm{X}$ from its partial observation on the location set $\Omega$.
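As an illustrative sketch (not code from the paper), the 2-D data model and the uniform random observation set can be simulated in a few lines of numpy; the sizes $n_1 = n_2 = 16$, $r = 3$ and $m = 100$ below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 16, 16, 3   # hypothetical sizes: data matrix is n1 x n2, spectral sparsity r

# Draw r frequency pairs anywhere in [0, 1)^2 (no grid assumption) and complex amplitudes.
f1, f2 = rng.random(r), rng.random(r)
d = rng.standard_normal(r) + 1j * rng.standard_normal(r)

# X[k, l] = sum_i d_i * exp(j 2 pi f1_i k) * exp(j 2 pi f2_i l)
k = np.arange(n1)[:, None]
l = np.arange(n2)[None, :]
X = sum(d[i] * np.exp(2j * np.pi * (f1[i] * k + f2[i] * l)) for i in range(r))

# Observe m = 100 entries uniformly at random.
m = 100
omega = rng.choice(n1 * n2, size=m, replace=False)
mask = np.zeros(n1 * n2, dtype=bool)
mask[omega] = True
mask = mask.reshape(n1, n2)
X_obs = np.where(mask, X, 0.0)   # observed entries; zeros elsewhere
```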
Before continuing, we introduce a few notations that will be used throughout. The spectral norm (operator norm), the Frobenius norm, and the nuclear norm (sum of singular values) of a matrix $\bm{M}$ are denoted by $\|\bm{M}\|$, $\|\bm{M}\|_{\mathrm{F}}$ and $\|\bm{M}\|_*$, respectively. The inner product between two matrices is defined by $\langle \bm{Y}, \bm{Z} \rangle := \mathrm{Tr}(\bm{Y}^* \bm{Z})$. We let $\mathcal{P}_{\Omega}$ be the orthogonal projection onto the subspace of matrices that vanish outside $\Omega$.
One might naturally attempt recovery by applying matrix completion algorithms (Candes & Recht, 2009) to the original data matrix, arguing that when $r$ is small, perfect recovery of $\bm{X}$ is possible from partial measurements. Specifically, this corresponds to the following optimization program:

$$\underset{\bm{M}}{\text{minimize}} \quad \|\bm{M}\|_* \qquad (5)$$
$$\text{subject to} \quad M_{k,l} = X_{k,l}, \qquad \forall (k,l) \in \Omega,$$

which is the convex relaxation of the rank minimization problem. However, generic matrix completion algorithms (Gross, 2011) require a number of samples far exceeding the degrees of freedom underlying our problem (part of the gap is due to a coupon collector's effect). What is worse, when $r > \min(n_1, n_2)$ (which is possible, since $r$ can be as large as $n_1 n_2$), $\bm{X}$ is no longer low-rank. This motivates us to construct other forms (e.g. Hua (1992)) that better capture the harmonic structure.
Specifically, we adopt an effective enhanced form of $\bm{X}$ based on a two-fold Hankel structure. The enhanced matrix $\bm{X}_{\mathrm{e}}$ with respect to $\bm{X}$ is defined as a $k_1 \times (n_1 - k_1 + 1)$ block Hankel matrix

$$\bm{X}_{\mathrm{e}} := \begin{bmatrix} \bm{X}_0 & \bm{X}_1 & \cdots & \bm{X}_{n_1 - k_1} \\ \bm{X}_1 & \bm{X}_2 & \cdots & \bm{X}_{n_1 - k_1 + 1} \\ \vdots & \vdots & \vdots & \vdots \\ \bm{X}_{k_1 - 1} & \bm{X}_{k_1} & \cdots & \bm{X}_{n_1 - 1} \end{bmatrix}, \qquad (6)$$

where $k_1$ is a pencil parameter, and each block $\bm{X}_l$ is a $k_2 \times (n_2 - k_2 + 1)$ Hankel matrix defined such that $(\bm{X}_l)_{i,j} = X_{l,\, i+j}$ for all $0 \le i < k_2$ and $0 \le j \le n_2 - k_2$.
This form allows us to derive, through algebraic manipulation or a tensor product approach, the Vandermonde decomposition of each block of $\bm{X}_{\mathrm{e}}$ as follows:

$$\bm{X}_l = \bm{Z}_{\mathrm{L}}\, \bm{Y}_{\mathrm{d}}^{\,l}\, \bm{D}\, \bm{Z}_{\mathrm{R}}, \qquad (7)$$

where $\bm{Y}_{\mathrm{d}} := \mathrm{diag}(y_1, \ldots, y_r)$, and $\bm{Z}_{\mathrm{L}}$ and $\bm{Z}_{\mathrm{R}}$ are Vandermonde matrices in the $z_i$'s of dimensions $k_2 \times r$ and $r \times (n_2 - k_2 + 1)$, respectively. Plugging (7) into (6) yields the decomposition $\bm{X}_{\mathrm{e}} = \bm{E}_{\mathrm{L}} \bm{D} \bm{E}_{\mathrm{R}}$, where $\bm{E}_{\mathrm{L}}$ and $\bm{E}_{\mathrm{R}}$ characterize the column and row space of $\bm{X}_{\mathrm{e}}$, respectively. The effectiveness of the enhanced form relies on the shift invariance of harmonic structures (consecutive samples of a sum of $r$ sinusoids lie in a common $r$-dimensional subspace), which motivates the use of Hankel matrices.
One can now see that $\bm{X}_{\mathrm{e}}$ is low-rank, i.e. $\mathrm{rank}(\bm{X}_{\mathrm{e}}) \le r$. We then attempt recovery via the following Enhanced Matrix Completion (EMaC) algorithm:
$$\underset{\bm{M}}{\text{minimize}} \quad \|\bm{M}_{\mathrm{e}}\|_* \qquad (8)$$
$$\text{subject to} \quad M_{k,l} = X_{k,l}, \qquad \forall (k,l) \in \Omega,$$

which minimizes the nuclear norm of the enhanced form over the constraint set. This convex program can be solved by off-the-shelf semidefinite programming solvers in a tractable manner.
The EMaC method extends to higher-dimensional frequency models without difficulty. For $K$-dimensional frequency models, one can convert the original data into a $K$-fold Hankel matrix of rank at most $r$. For instance, consider a 3-dimensional (3-D) model. An enhanced form can be defined as a 3-fold Hankel matrix whose blocks are the 2-D enhanced forms of the matrices consisting of all entries sharing the same third index. One can verify that this 3-fold Hankel matrix is of rank at most $r$, and can thus apply EMaC to the 3-D enhanced form. To summarize, for $K$-dimensional frequency models, EMaC minimizes the nuclear norm over all $K$-fold Hankel matrices consistent with the observed entries.
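The recursive structure of the $K$-fold enhancement can be sketched as follows. This is an illustrative implementation under our own naming and parameter choices, not the paper's code; it also checks numerically that a toy 3-D signal with $r = 2$ sinusoids yields a 3-fold Hankel matrix of rank 2:

```python
import numpy as np

def enhance_kfold(X, ks):
    """Recursive K-fold Hankel enhancement of a K-dimensional array X.
    The outer structure is block Hankel in the first index; each block is
    the (K-1)-fold enhancement of a slice."""
    n = X.shape[0]
    k = ks[0]
    if X.ndim == 1:  # base case: an ordinary Hankel matrix
        return np.array([[X[i + j] for j in range(n - k + 1)] for i in range(k)])
    return np.block([[enhance_kfold(X[a + b], ks[1:])
                      for b in range(n - k + 1)] for a in range(k)])

# Toy 3-D model: r = 2 sinusoids on a 6 x 6 x 6 sampling grid.
rng = np.random.default_rng(3)
r, n = 2, 6
freqs = rng.random((3, r))
d = rng.standard_normal(r) + 1j * rng.standard_normal(r)
t = np.arange(n)
X = sum(d[i] * np.exp(2j * np.pi * freqs[0, i] * t)[:, None, None]
              * np.exp(2j * np.pi * freqs[1, i] * t)[None, :, None]
              * np.exp(2j * np.pi * freqs[2, i] * t)[None, None, :]
        for i in range(r))

Xe = enhance_kfold(X, (3, 3, 3))   # a 27 x 64 three-fold Hankel matrix
s = np.linalg.svd(Xe, compute_uv=False)
```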
In practice, measurements are always corrupted by a certain amount of noise. To make our model and algorithm more practically applicable, we replace the noiseless measurements with noisy observations $X^{\mathrm{o}}_{k,l} = X_{k,l} + N_{k,l}$, $(k,l) \in \Omega$, where $X^{\mathrm{o}}_{k,l}$ is the observed $(k,l)$-th entry and $N_{k,l}$ denotes the noise. Suppose that the noise satisfies $\|\mathcal{P}_{\Omega}(\bm{N})\|_{\mathrm{F}} \le \delta$; then EMaC can be modified as follows (EMaC-Noisy):

$$\underset{\bm{M}}{\text{minimize}} \quad \|\bm{M}_{\mathrm{e}}\|_* \qquad (9)$$
$$\text{subject to} \quad \|\mathcal{P}_{\Omega}(\bm{M} - \bm{X}^{\mathrm{o}})\|_{\mathrm{F}} \le \delta.$$
Encouragingly, under certain incoherence conditions, EMaC enables exact recovery of the true data matrix from a small number of noiseless time-domain samples, and is stable against bounded noise.
For convenience of presentation, we denote by $\Omega_{\mathrm{e}}(k,l)$ the set of locations in the enhanced matrix containing copies of $X_{k,l}$, and let $\omega_{k,l} := |\Omega_{\mathrm{e}}(k,l)|$. For each $(k,l)$, we use $\bm{A}_{(k,l)}$ to denote a basis matrix that extracts the average of all entries in $\Omega_{\mathrm{e}}(k,l)$.
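Each data entry appears in several locations of the enhanced matrix, and the basis matrix above averages over these copies. As a small illustrative sketch (the helper name and sizes are ours), the multiplicity of each entry can be counted directly:

```python
import numpy as np

def multiplicity(k, l, n1, n2, k1, k2):
    """Number of copies of X[k, l] in the two-fold Hankel enhanced matrix:
    count pairs (a, b) with a + b = k, 0 <= a < k1, 0 <= b <= n1 - k1,
    times the analogous count for the second index."""
    c1 = sum(1 for a in range(k1) if 0 <= k - a <= n1 - k1)
    c2 = sum(1 for i in range(k2) if 0 <= l - i <= n2 - k2)
    return c1 * c2

n1 = n2 = 8
k1 = k2 = 4
counts = np.array([[multiplicity(k, l, n1, n2, k1, k2)
                    for l in range(n2)] for k in range(n1)])
# Corner entries appear once; central entries appear in many skew-diagonal copies,
# and the multiplicities sum to the total number of enhanced-matrix entries.
```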
In general, matrix completion from a few entries is hopeless unless the underlying structure is uncorrelated with the observation basis, which inspires us to define certain incoherence measures. Let $\bm{G}_{\mathrm{L}}$ and $\bm{G}_{\mathrm{R}}$ be $r \times r$ correlation matrices whose entries capture the correlation between pairs of frequencies, with unit diagonal by convention. Note that $\bm{G}_{\mathrm{L}}$ and $\bm{G}_{\mathrm{R}}$ can be obtained by sampling the 2-D Dirichlet kernel, which arises frequently in Fourier analysis. Our incoherence measure is defined as follows.
Definition 1 (Incoherence)
Let $\bm{X}_{\mathrm{e}}$ denote the enhanced matrix associated with $\bm{X}$, and suppose the SVD of $\bm{X}_{\mathrm{e}}$ is given by $\bm{X}_{\mathrm{e}} = \bm{U} \bm{\Sigma} \bm{V}^*$. Then $\bm{X}$ is said to have incoherence $(\mu_1, \mu_2, \mu_3)$ if these are, respectively, the smallest quantities such that
(10) 
(11) 
and
(12) 
hold for all $(k,l)$.
Some brief interpretations of the above incoherence conditions are in order:
Condition (10) specifies certain incoherence among the locations of the frequency pairs; it neither coincides with nor is subsumed by the separation condition required in (Candes & Fernandez-Granda, 2013; Tang et al., 2012). The frequency pairs can be spread out (e.g. when their locations are generated in some random fashion) or minimally separated (e.g. when they are small perturbations of a fine grid).
Condition (11) is satisfied when the total energy of each skew diagonal of $\bm{X}$ is proportional to the dimension of that skew diagonal. This is a weaker condition than the one introduced in (Candes & Recht, 2009) for matrix completion, which requires uniform energy distribution over all entries of $\bm{X}$. For instance, a desirable $\mu_2$ can often be obtained when the complex phases of all frequencies are generated in some random fashion.
Condition (12) is an incoherence measure based on the multi-fold Hankel structure. For example, one can reason that a desirable $\mu_3$ is obtained if the entry magnitudes of $\bm{U}\bm{V}^*$ are roughly uniform. Conditions (10) and (12) depend only on the locations of the frequency pairs. Condition (11), however, might also rely on the amplitudes $d_i$.
Finally, the incoherence measures are mutually related, as formalized in the following lemma.
Lemma 1
The incoherence measures of $\bm{X}$ satisfy
(13) 
With the above incoherence measures, the main theoretical guarantee is supplied in the following theorem.
Theorem 1
Note that (15) is an immediate consequence of (14) by Lemma 1. Theorem 1 states the following: (1) under the strong incoherence condition (i.e. given that $\mu_1$, $\mu_2$ and $\mu_3$ are all constants), perfect recovery is possible as soon as the number of measurements exceeds the order of $r$ times polylogarithmic factors in $n$; (2) under the weak incoherence condition (i.e. given only that one of the incoherence measures is a constant), perfect recovery is possible from a slightly larger number of samples. Since there are at least $r$ degrees of freedom in total, this establishes the near optimality of EMaC under the strong incoherence condition.
We would like to note that while we assume a random observation model, the conditions imposed on the data model are deterministic. This differs from (Tang et al., 2012), where randomness is assumed for both the observation model and the data model.
On the other hand, our method enables stable recovery even when the time-domain samples are noisy copies of the true data. Here, we say the recovery is stable if the solution of EMaC-Noisy is "close" to the ground truth. To this end, we establish the following theorem, which is a counterpart of Theorem 1 in the noisy setting.
Theorem 2
Theorem 2 basically implies that the recovered enhanced matrix is close to the true enhanced matrix at high signal-to-noise ratio; in particular, the average entry-wise inaccuracy is well controlled. We note that in practice, EMaC-Noisy usually yields a better estimate, possibly by a polynomial factor. The practical applicability is illustrated below through numerical examples.
We examine the phase transition of the EMaC algorithm to evaluate its practical ability. A square enhanced form was adopted by choosing the pencil parameters so that the enhanced matrix is as close to square as possible. For each sparsity level $r$ and sample size $m$, we generated a spectrally sparse data matrix by randomly drawing $r$ frequency spikes, and sampled a subset $\Omega$ of $m$ entries uniformly at random. The EMaC algorithm was conducted using CVX with SDPT3. Each trial is declared successful if the normalized mean squared error $\|\hat{\bm{X}} - \bm{X}\|_{\mathrm{F}} / \|\bm{X}\|_{\mathrm{F}}$ falls below a small threshold, where $\hat{\bm{X}}$ denotes the estimate obtained through EMaC. The empirical success rate is calculated by averaging over 100 Monte Carlo trials.
Fig. 1 illustrates the results of these Monte Carlo experiments for a fixed data-matrix size. The empirical success rate is reflected by the color of each cell. It can be seen that the number of samples $m$ grows approximately linearly with the spectral sparsity $r$, in line with the theoretical guarantee in Theorem 1. This phase transition diagram validates the practical applicability of our algorithm in the noiseless setting.
The above experiments were conducted using the advanced semidefinite programming solver SDPT3. This and many other popular solvers (such as SeDuMi) are based on interior-point methods, which are typically inapplicable to large-scale data. In fact, SDPT3 fails to handle even moderately sized data matrices, since the dimension of the corresponding enhanced matrix grows rapidly.
One solution for large-scale data is the first-order algorithms tailored to MC problems, e.g. the singular value thresholding (SVT) algorithm developed in (Cai et al., 2010). We propose a modified SVT algorithm, presented as Algorithm 1, that exploits the Hankel structure.
In particular, $\mathcal{D}_{\tau}$ in Algorithm 1 denotes the singular value shrinkage operator. Specifically, if the SVD of $\bm{M}$ is given by $\bm{M} = \bm{U} \bm{\Sigma} \bm{V}^*$ with $\bm{\Sigma} = \mathrm{diag}(\{\sigma_i\})$, then

$$\mathcal{D}_{\tau}(\bm{M}) := \bm{U}\, \mathrm{diag}\big(\{(\sigma_i - \tau)_+\}\big)\, \bm{V}^*,$$

where $\tau$ is the soft-thresholding level. (A good soft-thresholding level is hard to pick and needs to be selected by cross validation; hence, the phase transitions of SVT and EMaC do not necessarily coincide.) Besides, in the $K$-dimensional frequency model, the projection in Algorithm 1 maps its argument onto the subspace of enhanced matrices (i.e. $K$-fold Hankel matrices) that are consistent with the observed entries. Consequently, at each iteration, a pair of iterates is produced by first performing singular value shrinkage and then projecting the outcome onto the space of $K$-fold Hankel matrices consistent with the observed entries.
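A minimal numpy sketch of this shrink-then-project iteration follows, under simplifying assumptions: a fixed threshold $\tau$, a toy 2-D instance, and helper names of our own. It illustrates the structure of the modified SVT loop rather than reproducing Algorithm 1 exactly:

```python
import numpy as np

def svt_shrink(M, tau):
    """Singular value shrinkage: soft-threshold the singular values of M."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def enhance(X, k1, k2):
    """Two-fold Hankel enhancement (repeated here so the sketch is self-contained)."""
    n1, n2 = X.shape
    return np.block([[np.array([[X[a + b, i + j] for j in range(n2 - k2 + 1)]
                                for i in range(k2)])
                      for b in range(n1 - k1 + 1)] for a in range(k1)])

def de_enhance(Me, n1, n2, k1, k2):
    """Average all copies of each data entry in the enhanced matrix, mapping it
    back to an n1 x n2 data matrix (the Hankel-averaging step)."""
    acc = np.zeros((n1, n2), dtype=complex)
    cnt = np.zeros((n1, n2))
    for a in range(k1):
        for b in range(n1 - k1 + 1):
            for i in range(k2):
                for j in range(n2 - k2 + 1):
                    acc[a + b, i + j] += Me[a * k2 + i, b * (n2 - k2 + 1) + j]
                    cnt[a + b, i + j] += 1
    return acc / cnt

# Toy problem: r = 2 sinusoids, roughly 60% of entries observed.
rng = np.random.default_rng(2)
n1 = n2 = 8
k1 = k2 = 4
r = 2
f1, f2 = rng.random(r), rng.random(r)
d = rng.standard_normal(r) + 1j * rng.standard_normal(r)
grid_k = np.arange(n1)[:, None]
grid_l = np.arange(n2)[None, :]
X = sum(d[i] * np.exp(2j * np.pi * (f1[i] * grid_k + f2[i] * grid_l)) for i in range(r))
mask = rng.random((n1, n2)) < 0.6

M = np.where(mask, X, 0.0)
tau = 0.5                                      # threshold: needs tuning in practice
for _ in range(100):
    Me = svt_shrink(enhance(M, k1, k2), tau)   # shrink singular values of the enhanced form
    M = de_enhance(Me, n1, n2, k1, k2)         # project back toward Hankel structure
    M[mask] = X[mask]                          # enforce consistency with observed entries
```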
Fig. 2 illustrates the performance of Algorithm 1. We generated a data matrix as a superposition of random complex sinusoids, and revealed 5.8% of the total entries uniformly at random. The noise was i.i.d. Gaussian, at a moderate signal-to-noise amplitude ratio. The reconstructed signal is superimposed on the ground truth in Fig. 2, and the normalized reconstruction error was small, validating the stability of EMaC in the presence of noise.
The proposed EMaC algorithm works beyond the random observation model in Theorem 1. Fig. 3 considers a synthetic super-resolution example, where the ground truth in Fig. 3(a) contains a number of point sources with constant amplitude. The low-resolution observation in Fig. 3(b) is obtained by measuring the low-frequency components of the ground truth. Due to the large width of the associated point-spread function, both the locations and amplitudes of the point sources are distorted in the low-resolution image.
We apply EMaC to extrapolate the high-frequency components up to a higher cutoff. The reconstruction in Fig. 3(c) is obtained by directly applying the inverse Fourier transform to the extrapolated spectrum, so as to avoid parameter estimation such as the number of modes. The resolution is greatly enhanced compared with Fig. 3(b), suggesting that EMaC is a promising approach for super-resolution tasks.
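As a simplified 1-D illustration of why spectral extrapolation enhances resolution (this is not the EMaC pipeline; it only contrasts a low-pass observation with a fully extrapolated spectrum, with sizes of our choosing):

```python
import numpy as np

n = 64
x = np.zeros(n)
x[20] = 1.0
x[23] = 1.0                      # two nearby point sources
F = np.fft.fft(x)

# Low-resolution observation: keep only a symmetric band of low frequencies.
f_lo = 8
idx = np.r_[0:f_lo, n - f_lo + 1:n]
F_lo = np.zeros(n, dtype=complex)
F_lo[idx] = F[idx]
x_lo = np.fft.ifft(F_lo).real    # blurred by the Dirichlet point-spread function

# If the high frequencies are successfully extrapolated (as EMaC aims to do),
# the inverse transform recovers the sources exactly.
x_hi = np.fft.ifft(F).real
```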
Figure 3. (a) Ground truth; (b) low-resolution observation; (c) high-resolution reconstruction.
One problem closely related to our method is the completion of multi-fold Hankel matrices from a small number of entries. Consider, for instance, the 2-D model. While each spectrally sparse signal can be mapped to a low-rank two-fold Hankel matrix, it is not clear whether all two-fold Hankel matrices of rank $r$ can be written as the enhanced form of an object with spectral sparsity $r$. Therefore, one can think of the recovery of multi-fold Hankel matrices as a more general problem than spectral compressed sensing. Indeed, Hankel matrix completion has found numerous applications in system identification (Fazel et al., 2011), natural language processing (Balle & Mohri, 2012), computer vision (Sankaranarayanan et al., 2010), medical imaging (Shin et al., 2012), etc.
There have been several works concerning algorithms and numerical experiments for Hankel matrix completion (Fazel et al., 2003; 2011; Markovsky, 2008). However, to the best of our knowledge, there has been little theoretical guarantee that directly addresses Hankel matrix completion. Our analysis framework in Theorem 1 can be straightforwardly adapted to general multi-fold Hankel matrix completion. Notice that $\mu_2$ and $\mu_3$ are defined using the SVD of the enhanced matrix in (11) and (12); we only need to modify the definition of $\mu_1$, as stated in the following theorem.
Theorem 3
Condition (16) requires that the left and right singular vectors are sufficiently uncorrelated with the observation basis. In fact, condition (16) is a much weaker assumption than (10).
It is worth mentioning that low-rank Hankel matrices can often be converted to low-rank Toeplitz counterparts. Both Hankel and Toeplitz matrices are important forms that capture underlying harmonic structures. Our results and analysis framework extend easily to the Toeplitz matrix completion problem.
We have presented an efficient nonparametric algorithm that estimates a spectrally sparse object from partial time-domain samples by posing spectral compressed sensing as a low-rank Hankel structured matrix completion problem. Under mild conditions, our algorithm enables recovery of the multidimensional unknown frequencies with infinite precision, which remedies the basis mismatch issue that arises in conventional CS paradigms. To the best of our knowledge, our result on Hankel matrix completion is also the first theoretical guarantee that comes close to the information-theoretic limit (up to a logarithmic factor).
Appendix: Proof Outline of Theorem 1
The EMaC method is similar in spirit to well-known matrix completion algorithms (Candes & Recht, 2009; Gross, 2011); however, the additional Hankel or block-Hankel structure on the matrices makes existing theoretical results inapplicable to our framework. Nevertheless, the golfing scheme introduced in (Gross, 2011) lays the foundation of our analysis. We provide here a sketch of the proof, with detailed derivations deferred to (Chen & Chi, 2013) and the supplemental material.
Denote by $T$ the tangent space with respect to the column and row spaces of the enhanced matrix, and let $\mathcal{P}_T$ be the orthogonal projection onto $T$, with $\mathcal{P}_{T^\perp}$ denoting the projection onto its orthogonal complement. We also require the orthogonal projections onto the subspaces spanned by the basis matrices introduced above, both for the observed locations and for all locations. (17)
Suppose that $\Omega$ is obtained by sampling with replacement, i.e. it contains $m$ indices generated i.i.d. uniformly at random. We define the associated sampling operator as the sum of rank-one projections over these samples, and introduce another projection operator defined similarly but with the summation restricted to distinct samples.
To prove exact recovery via EMaC, it suffices to produce a dual certificate, as follows.
Lemma 2
For a location set of size , suppose that the sampling operator obeys
(18) 
If there exists a matrix that obeys
(19) 
then $\bm{X}$ is the unique optimizer of EMaC.
The dual certificate is constructed via the golfing scheme introduced in (Gross, 2011). Specifically, we generate a number of independent random location sets, each sampled with replacement, such that their union has the same distribution as $\Omega$ (up to a Taylor-expansion argument for small sampling fractions). Starting from a zero initial matrix, each iteration of the golfing scheme adds a correction term obtained by applying the sampling operator of a fresh location set to the current residual on the tangent space; the certificate is the final iterate.
We then verify that the final iterate is a valid dual certificate satisfying conditions (19) in Lemma 2. By construction, the residual on the tangent space contracts geometrically across iterations. To proceed, we present Lemma 3, which shows that the tangent space is sufficiently incoherent with respect to the sampling basis.
Lemma 3
There exists a constant such that if , then
(20) 
with probability exceeding .
Lemma 4
There exist constants such that if , then
with probability at least .
Under the assumptions of Lemma 4, we have
So far, we have successfully shown that the constructed matrix is a valid dual certificate with high probability, and hence EMaC achieves exact recovery with high probability.
References
 Balle & Mohri (2012) Balle, B. and Mohri, M. Spectral learning of general weighted automata via constrained matrix completion. Advances in Neural Information Processing Systems (NIPS), pp. 2168–2176, 2012.
 Borcea et al. (2002) Borcea, L., Papanicolaou, G., Tsogka, C., and Berryman, J. Imaging and time reversal in random media. Inverse Problems, 18(5):1247, 2002.
 Cai et al. (2010) Cai, J. F., Candes, E. J., and Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
 Candes & Fernandez-Granda (2012) Candes, E. J. and Fernandez-Granda, C. Super-resolution from noisy data. arXiv:1211.0290, November 2012.
 Candes & Fernandez-Granda (2013) Candes, E. J. and Fernandez-Granda, C. Towards a mathematical theory of super-resolution. To appear in Communications on Pure and Applied Mathematics, 2013.
 Candes & Recht (2009) Candes, E. J. and Recht, B. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, April 2009.
 Candes et al. (2006) Candes, E. J., Romberg, J., and Tao, T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, Feb. 2006.
 Chen & Chi (2013) Chen, Y. and Chi, Y. Robust spectral compressed sensing via structured matrix completion. arXiv:1304.8126, May 2013.
 Chi et al. (2011) Chi, Y., Scharf, L.L., Pezeshki, A., and Calderbank, A.R. Sensitivity to basis mismatch in compressed sensing. IEEE Transactions on Signal Processing, 59(5):2182–2195, May 2011.
 Dragotti et al. (2007) Dragotti, P. L., Vetterli, M., and Blu, T. Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix. IEEE Transactions on Signal Processing, 55(5):1741–1757, May 2007.
 Duarte & Baraniuk (2012) Duarte, M.F. and Baraniuk, R.G. Spectral compressive sensing. Applied and Computational Harmonic Analysis, 2012.
 Fazel et al. (2003) Fazel, M., Hindi, H., and Boyd, S. P. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. American Control Conference, 3:2156–2162, June 2003.
 Fazel et al. (2011) Fazel, M., Pong, T. K., Sun, D., and Tseng, P. Hankel matrix rank minimization with applications in system identification and realization, 2011.
 Gedalyahui et al. (2011) Gedalyahui, K., Tur, R., and Eldar, Y. C. Multichannel sampling of pulse streams at the rate of innovation. IEEE Transactions on Signal Processing, 59:1491–1504, 2011.
 Gross (2011) Gross, D. Recovering lowrank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548–1566, March 2011.
 Hua (1992) Hua, Y. Estimating two-dimensional frequencies by matrix enhancement and matrix pencil. IEEE Transactions on Signal Processing, 40(9):2267–2280, Sep 1992.
 Lustig et al. (2007) Lustig, M., Donoho, D., and Pauly, J. M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182–1195, 2007.
 Markovsky (2008) Markovsky, I. Structured lowrank approximation and its applications. Automatica, 44(4):891–909, 2008.
 Negahban & Wainwright (2012) Negahban, S. and Wainwright, M.J. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. The Journal of Machine Learning Research, 98888:1665–1697, May 2012.
 Nion & Sidiropoulos (2010) Nion, D. and Sidiropoulos, N. D. Tensor algebra and multidimensional harmonic retrieval in signal processing for MIMO radar. IEEE Transactions on Signal Processing, 58(11):5693–5705, Nov. 2010.
 Potter et al. (2010) Potter, L.C., Ertin, E., Parker, J.T., and Cetin, M. Sparsity and compressed sensing in radar imaging. Proceedings of the IEEE, 98(6):1006–1020, 2010.
 Roy & Kailath (1989) Roy, R. and Kailath, T. ESPRIT: estimation of signal parameters via rotational invariance techniques. IEEE Transactions on Acoustics, Speech and Signal Processing, 37(7):984–995, Jul 1989.
 Sankaranarayanan et al. (2010) Sankaranarayanan, A., Turaga, P., Baraniuk, R., and Chellappa, R. Compressive acquisition of dynamic scenes. ECCV 2010, pp. 129–142, 2010.
 Sayeed & Aazhang (1999) Sayeed, A. M. and Aazhang, B. Joint multipath-Doppler diversity in mobile wireless communications. IEEE Transactions on Communications, 47(1):123–132, Jan 1999.
 Schermelleh et al. (2010) Schermelleh, L., Heintzmann, R., and Leonhardt, H. A guide to superresolution fluorescence microscopy. The Journal of cell biology, 190(2):165–175, 2010.
 Shin et al. (2012) Shin, P., Larson, P., Ohliger, M., Elad, M., Pauly, J., Vigneron, D., and Lustig, M. Calibrationless parallel imaging reconstruction based on structured lowrank matrix completion. submitted to Magnetic Resonance in Medicine, 2012.
 Tang et al. (2012) Tang, G., Bhaskar, B. N., Shah, P., and Recht, B. Compressed sensing off the grid. arXiv:1207.6053, July 2012.