###### Abstract

The paper studies the problem of recovering a spectrally sparse object from a small number of time-domain samples. Specifically, the object of interest, with ambient dimension $n$, is assumed to be a mixture of $r$ complex multi-dimensional sinusoids, while the underlying frequencies can assume any value in the unit disk. Conventional compressed sensing paradigms suffer from the basis mismatch issue when imposing a discrete dictionary on the Fourier representation. To address this problem, we develop a novel nonparametric algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion. The algorithm starts by arranging the data into a low-rank enhanced form with multi-fold Hankel structure, and then attempts recovery via nuclear norm minimization. Under mild incoherence conditions, EMaC allows perfect recovery as soon as the number of samples exceeds the order of $r\log^2 n$. We also show that, in many instances, accurate completion of a low-rank multi-fold Hankel matrix is possible when the number of observed entries is proportional to the information-theoretic limit (up to a logarithmic gap). The robustness of EMaC against bounded noise and its applicability to super resolution are further demonstrated by numerical experiments.

Spectral Compressed Sensing via Structured Matrix Completion

Yuxin Chen yxchen@stanford.edu

Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA

Yuejie Chi chi@ece.osu.edu

Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA

## Introduction

A large class of practical applications features high-dimensional objects that can be modeled or approximated by a superposition of spikes in the spectral (resp. time) domain, and involves estimation of the object from its time (resp. frequency) domain samples. A partial list includes medical imaging (Lustig et al., 2007), radar systems (Potter et al., 2010), seismic imaging (Borcea et al., 2002), microscopy (Schermelleh et al., 2010), etc. The data acquisition devices, however, are often limited by physical and hardware constraints, precluding sampling with the desired resolution. It is thus of paramount interest to reduce the sensing complexity while retaining recovery resolution.

Fortunately, in many instances, it is possible to recover an object even when the number of samples is far below the ambient dimension, provided that the object has a parsimonious representation in the transform domain. In particular, recent advances in compressed sensing (CS) (Candes et al., 2006) have popularized nonparametric methods based on convex surrogates. Such tractable methods do not require prior information on the model order, and are often robust against noise.

Nevertheless, the success of CS relies on sparse representation or approximation of the object of interest in a finite discrete dictionary, while the true parameters in many applications are actually specified in a continuous dictionary. For concreteness, consider an object that is a weighted sum of $K$-dimensional sinusoids at $r$ distinct frequencies. Conventional CS paradigms operate under the assumption that these frequencies lie on a pre-determined grid on the unit disk. However, caution needs to be taken when imposing a discrete dictionary on continuous frequencies, since nature never places the frequencies on the pre-determined grid, no matter how fine the grid is (Chi et al., 2011; Duarte & Baraniuk, 2012). This issue, known as basis mismatch between the true frequencies and the discretized grid, results in loss of sparsity due to spectral leakage along the Dirichlet kernel, and hence degeneration in the performance of CS algorithms. While one might impose finer gridding to mitigate this weakness, this approach often leads to numerical instability and high correlation between dictionary elements, which significantly weakens the advantage of these CS approaches (Tang et al., 2012).

In this paper, we explore the above spectral compressed sensing problem, which aims to recover a spectrally sparse object from a small set of time-domain samples. The underlying (possibly multi-dimensional) frequencies can assume any value in the unit disk, and need to be recovered with infinite precision. To address this problem, we develop a nonparametric algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion. Specifically, EMaC starts by converting the data samples into an enhanced matrix with (multi-fold) Hankel structure, and then solves a nuclear-norm minimization program to complete the enhanced matrix. We show that, under mild incoherence conditions, EMaC admits exact recovery from $O(r\log^2 n)$ random samples, where $r$ and $n$ denote respectively the spectral sparsity and the ambient dimension. Additionally, we provide theoretical guarantees for the low-rank Hankel matrix completion problem, which is of great importance in control, natural language processing, computer vision, etc. To the best of our knowledge, our results provide the first theoretical bounds that are close to the information-theoretic limit. Furthermore, numerical experiments demonstrate that our algorithm is robust against noise and is applicable to the problem of super resolution.

### Related Work

The spectral compressed sensing problem is closely related to harmonic retrieval, which seeks to extract the underlying frequencies of an object from a collection of its time-domain samples. This spans many signal processing applications including radar localization systems (Nion & Sidiropoulos, 2010), array imaging systems (Borcea et al., 2002), wireless channel sensing (Sayeed & Aazhang, 1999; Gedalyahui et al., 2011), etc. In fact, if the time-domain representation of an object can be estimated accurately, then its underlying frequencies can be identified using harmonic super-resolution methods.

Conventional approaches for these problems, such as ESPRIT (Roy & Kailath, 1989) and the matrix pencil method (Hua, 1992), are based on the eigenvalue decomposition of covariance matrices constructed from equi-spaced samples, which can accommodate infinite frequency precision. One weakness of these techniques is that they require prior information on the model order, that is, the number of underlying frequency spikes, or at least an estimate of it. Besides, their performance largely depends on knowledge of the noise spectra; some of them are unstable in the presence of noise and outliers (Dragotti et al., 2007).

Nonparametric algorithms based on convex optimization differ from the above parametric techniques in that the model order does not need to be specified a priori. Recently, Candès and Fernandez-Granda (2013) proposed a total-variation minimization algorithm to super-resolve a sparse object from frequency samples at the low end of its spectrum. This algorithm allows accurate super-resolution when the point sources are appropriately separated, and is stable against noise (Candes & Fernandez-Granda, 2012). Inspired by this approach, Tang et al. (2012) developed an atomic norm minimization algorithm for line spectral estimation from time-domain samples. This work is limited to 1-D frequency models and assumes randomness in the data model.

In contrast, our approach can accommodate multi-dimensional frequencies, and only assumes randomness in the observation basis. The algorithm is inspired by recent advances in the matrix completion (MC) problem, which aims at recovering a low-rank matrix from partial entries. It has been shown (Candes & Recht, 2009; Gross, 2011) that exact recovery is possible via nuclear norm minimization, as soon as the number of observed entries is on the order of the information-theoretic limit. Encouragingly, this line of algorithms is also robust against noise and outliers (Negahban & Wainwright, 2012). Nevertheless, the theoretical guarantees of these algorithms do not apply to the more structured observation models associated with Hankel structure. Consequently, direct application of existing MC results yields pessimistic bounds on the number of samples, which is far beyond the degrees of freedom underlying the sparse object.

The rest of this paper is organized as follows. We first describe the data model and the EMaC algorithm, and then present the theoretical guarantees of EMaC, with a proof outlined in the Appendix. Numerical validation of EMaC follows, after which we discuss the extension to low-rank Hankel matrix completion and conclude the paper.

## Model and Algorithm

Assume that the object of interest can be modeled as a weighted sum of $K$-dimensional sinusoids at $r$ distinct frequencies $f_i$ ($1 \le i \le r$), i.e.

$$x(t) = \sum_{i=1}^{r} d_i\, e^{j2\pi \langle t, f_i \rangle} \tag{1}$$

where the $d_i$'s denote the complex amplitudes. For concreteness, our discussion is mainly devoted to a 2-dimensional (2-D) frequency model. This subsumes 1-D line spectral estimation as a special case, and indicates how to address multi-dimensional models.

### 2-D Frequency Model

Consider a data matrix $X = (x_{k,l})$ of size $n_1 \times n_2$. Suppose each entry can be expressed as

$$x_{k,l} = \sum_{i=1}^{r} d_i\, y_i^k z_i^l,$$

where $y_i := e^{j2\pi f_{1i}}$ and $z_i := e^{j2\pi f_{2i}}$ for some set of frequency pairs $\{(f_{1i}, f_{2i})\}_{1 \le i \le r}$ (normalized by the Nyquist rate). We can then write $X$ in matrix form as follows:

$$X = Y D Z^T, \tag{2}$$

where $D := \mathrm{diag}\left(d_1, \ldots, d_r\right)$, and $Y$ and $Z$ are defined as

$$Y := \begin{bmatrix} 1 & 1 & \cdots & 1 \\ y_1 & y_2 & \cdots & y_r \\ \vdots & \vdots & \vdots & \vdots \\ y_1^{n_1-1} & y_2^{n_1-1} & \cdots & y_r^{n_1-1} \end{bmatrix}, \tag{3}$$

$$Z := \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_r \\ \vdots & \vdots & \vdots & \vdots \\ z_1^{n_2-1} & z_2^{n_2-1} & \cdots & z_r^{n_2-1} \end{bmatrix}. \tag{4}$$

Suppose that there exists a location set $\Omega$ of size $m$ such that $x_{k,l}$ is observed iff $(k,l) \in \Omega$, and assume that $\Omega$ is sampled uniformly at random. We are interested in recovering $X$ from its partial observation on the location set $\Omega$.
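As a sanity check on the model (2), the following numpy sketch builds $X = YDZ^T$ from randomly drawn, purely illustrative frequency pairs and amplitudes, and its rank is at most $r$ by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 16, 16, 3

# Hypothetical frequency pairs (f_{1i}, f_{2i}) in [0, 1)^2 and amplitudes d_i.
f1, f2 = rng.random(r), rng.random(r)
d = rng.standard_normal(r) + 1j * rng.standard_normal(r)

y = np.exp(2j * np.pi * f1)  # y_i = e^{j 2 pi f_{1i}}
z = np.exp(2j * np.pi * f2)  # z_i = e^{j 2 pi f_{2i}}

# Vandermonde factors Y (n1 x r) and Z (n2 x r), as in (3)-(4).
Y = np.vander(y, n1, increasing=True).T
Z = np.vander(z, n2, increasing=True).T

# Data matrix X = Y D Z^T, as in (2); entry (k, l) equals sum_i d_i y_i^k z_i^l.
X = Y @ np.diag(d) @ Z.T
```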

Before continuing, we introduce a few notations that will be used throughout. The spectral norm (operator norm), the Frobenius norm, and the nuclear norm (sum of singular values) of a matrix $M$ are denoted by $\|M\|$, $\|M\|_{\mathrm{F}}$, and $\|M\|_*$, respectively. The inner product between two matrices $M_1$ and $M_2$ is defined as $\langle M_1, M_2 \rangle := \mathrm{Tr}\left(M_1^* M_2\right)$. We let $\mathcal{P}_\Omega$ denote the orthogonal projection onto the subspace of matrices that vanish outside $\Omega$.

### The EMaC Algorithm

One might naturally attempt recovery by applying matrix completion algorithms (Candes & Recht, 2009) on the original data matrix, arguing that when $r$ is small, perfect recovery of $X$ is possible from partial measurements. Specifically, this corresponds to the following optimization program

$$\begin{aligned} \underset{M \in \mathbb{C}^{n_1 \times n_2}}{\text{minimize}} \quad & \|M\|_* \qquad\qquad (5)\\ \text{subject to} \quad & \mathcal{P}_\Omega(M) = \mathcal{P}_\Omega(X), \end{aligned}$$

which is the convex relaxation of the rank minimization problem. However, generic matrix completion algorithms (Gross, 2011) require at least the order of $r\max(n_1, n_2)$ samples (with an extra logarithmic factor due to a coupon collector's effect), which far exceeds the degrees of freedom in our problem. What is worse, when $r > \min(n_1, n_2)$ (which is possible since $r$ can be as large as $n_1 n_2$), $X$ is no longer low-rank. This motivates us to construct other forms (e.g. Hua (1992)) that better capture the harmonic structure.

Specifically, we adopt an effective enhanced form of $X$ based on two-fold Hankel structure as follows. The enhanced matrix $X_e$ with respect to $X$ is defined as a block Hankel matrix

$$X_e := \begin{bmatrix} X_0 & X_1 & \cdots & X_{n_1-k_1} \\ X_1 & X_2 & \cdots & X_{n_1-k_1+1} \\ \vdots & \vdots & \vdots & \vdots \\ X_{k_1-1} & X_{k_1} & \cdots & X_{n_1-1} \end{bmatrix}, \tag{6}$$

where each block is a $k_2 \times (n_2 - k_2 + 1)$ Hankel matrix defined such that for all $l$ ($0 \le l \le n_1 - 1$):

$$X_l := \begin{bmatrix} x_{l,0} & x_{l,1} & \cdots & x_{l,n_2-k_2} \\ x_{l,1} & x_{l,2} & \cdots & x_{l,n_2-k_2+1} \\ \vdots & \vdots & \vdots & \vdots \\ x_{l,k_2-1} & x_{l,k_2} & \cdots & x_{l,n_2-1} \end{bmatrix}.$$

This form allows us to derive, through algebraic manipulation or a tensor product approach, the Vandermonde decomposition of each block of $X_e$ as follows:

$$\forall l\ (0 \le l \le n_1 - 1): \quad X_l = Z_L\, Y_d^l\, D\, Z_R, \tag{7}$$

where $Z_L$ and $Z_R$ are defined as

$$Z_L := \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_r \\ \vdots & \vdots & \vdots & \vdots \\ z_1^{k_2-1} & z_2^{k_2-1} & \cdots & z_r^{k_2-1} \end{bmatrix}, \qquad Z_R := \begin{bmatrix} 1 & z_1 & \cdots & z_1^{n_2-k_2} \\ 1 & z_2 & \cdots & z_2^{n_2-k_2} \\ \vdots & \vdots & \vdots & \vdots \\ 1 & z_r & \cdots & z_r^{n_2-k_2} \end{bmatrix},$$

and $Y_d := \mathrm{diag}\left(y_1, y_2, \ldots, y_r\right)$. Plugging (7) into (6) yields the following:

$$X_e = \underbrace{\begin{bmatrix} Z_L \\ Z_L Y_d \\ \vdots \\ Z_L Y_d^{k_1-1} \end{bmatrix}}_{\sqrt{k_1 k_2}\, E_L}\ D\ \underbrace{\left[ Z_R,\ Y_d Z_R,\ \cdots,\ Y_d^{n_1-k_1} Z_R \right]}_{\sqrt{(n_1-k_1+1)(n_2-k_2+1)}\, E_R},$$

where $E_L$ and $E_R$ characterize the column and row space of $X_e$, respectively. The effectiveness of the enhanced form relies on the shift invariance of harmonic structures (all windows of consecutive samples lie in the same $r$-dimensional subspace), which promotes the use of Hankel matrices.

One can now see that $X_e$ is low-rank, i.e. $\mathrm{rank}(X_e) \le r$. We then attempt recovery via the following Enhanced Matrix Completion (EMaC) algorithm:

$$\text{(EMaC):}\quad \begin{aligned} \underset{M \in \mathbb{C}^{n_1 \times n_2}}{\text{minimize}} \quad & \left\|M_e\right\|_* \qquad\qquad (8)\\ \text{subject to} \quad & \mathcal{P}_\Omega(M) = \mathcal{P}_\Omega(X), \end{aligned}$$

which minimizes the nuclear norm of the enhanced form over the constraint set. This convex program can be solved using off-the-shelf semidefinite program solvers in a tractable manner.

### Extension to Higher-Dimensional Frequency Models

The EMaC method extends to higher-dimensional frequency models without difficulty. For $K$-dimensional frequency models, one can convert the original data to a $K$-fold Hankel matrix of rank at most $r$. For instance, consider a 3-dimensional (3-D) model with data entries $x_{i,j,l}$ ($0 \le i < n_1$, $0 \le j < n_2$, $0 \le l < n_3$). An enhanced form can be defined as a 3-fold Hankel matrix such that

$$X_e := \begin{bmatrix} X_{0,e} & X_{1,e} & \cdots & X_{n_3-k_3,e} \\ X_{1,e} & X_{2,e} & \cdots & X_{n_3-k_3+1,e} \\ \vdots & \vdots & \vdots & \vdots \\ X_{k_3-1,e} & X_{k_3,e} & \cdots & X_{n_3-1,e} \end{bmatrix},$$

where $X_{l,e}$ denotes the 2-D enhanced form of the matrix consisting of all entries whose third index equals $l$. One can verify that $X_e$ is of rank at most $r$, and can thus apply EMaC on the 3-D enhanced form $X_e$. To summarize, for $K$-dimensional frequency models, EMaC minimizes the nuclear norm over all $K$-fold Hankel matrices consistent with the observed entries.
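A recursive sketch of the $K$-fold construction is given below; the axis ordering differs superficially from the 3-D display above (the outer Hankel structure runs over the first axis rather than the last), which only permutes the roles of the dimensions, and all sizes are illustrative:

```python
import numpy as np

def enhance(X, ks):
    """K-fold Hankel enhancement of a K-dimensional array X.

    ks = (k_1, ..., k_K) are the pencil parameters: enhance the
    (K-1)-dimensional slices, then arrange the results into a
    block Hankel matrix."""
    if X.ndim == 1:
        (k,), n = ks, X.shape[0]
        return np.array([[X[a + b] for b in range(n - k + 1)]
                         for a in range(k)])
    k, n = ks[0], X.shape[0]
    blocks = [enhance(X[l], ks[1:]) for l in range(n)]
    return np.block([[blocks[a + b] for b in range(n - k + 1)]
                     for a in range(k)])

# A 3-D mixture of r = 2 sinusoids; its 3-fold enhanced form has rank <= r.
rng = np.random.default_rng(2)
n, r = 6, 2
d = rng.standard_normal(r) + 1j * rng.standard_normal(r)
V = [np.vander(np.exp(2j * np.pi * rng.random(r)), n, True).T
     for _ in range(3)]
X3 = np.einsum('i,ai,bi,ci->abc', d, V[0], V[1], V[2])
Xe3 = enhance(X3, (3, 3, 3))
```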

### The Noisy Case

In practice, measurements are always corrupted by a certain amount of noise. To make our model and algorithm more practically applicable, we replace our measurements by noisy samples $X^o$ through the following noisy model

$$\forall (i,l) \in \Omega: \quad X^o_{i,l} = X_{i,l} + N_{i,l},$$

where $X^o_{i,l}$ is the observed $(i,l)$-th entry, and $N_{i,l}$ denotes the noise. Suppose that the noise satisfies $\left\|\mathcal{P}_\Omega(N)\right\|_{\mathrm{F}} \le \delta$; then EMaC can be modified as:

$$\text{(EMaC-Noisy):}\quad \begin{aligned} \underset{M \in \mathbb{C}^{n_1 \times n_2}}{\text{minimize}} \quad & \left\|M_e\right\|_* \qquad\qquad (9)\\ \text{subject to} \quad & \left\|\mathcal{P}_\Omega\left(M - X^o\right)\right\|_{\mathrm{F}} \le \delta. \end{aligned}$$
## Main Results

Encouragingly, under certain incoherence conditions, the simple EMaC enables accurate recovery of the true data matrix from a small number of noiseless time-domain samples, and is stable against bounded noise.

For convenience of presentation, we denote by $\Omega_e(i,l)$ the set of locations in the enhanced matrix containing copies of $x_{i,l}$, and let $\omega_{i,l} := \left|\Omega_e(i,l)\right|$. For each $(i,l)$, we use $A_{(i,l)}$ to denote a basis matrix that extracts the average of all entries in $\Omega_e(i,l)$, i.e.

$$\left(A_{(i,l)}\right)_{\alpha,\beta} := \begin{cases} \dfrac{1}{\sqrt{\left|\Omega_e(i,l)\right|}}, & \text{if } (\alpha,\beta) \in \Omega_e(i,l), \\ 0, & \text{else.} \end{cases}$$
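The weights $\omega_{i,l} = |\Omega_e(i,l)|$ admit a simple counting description: a copy of $x_{i,l}$ appears at block offset $(a,b)$ with $a+b=i$ and inner offset $(s,t)$ with $s+t=l$. A small numpy check (dimensions arbitrary) confirms that the multiplicities sum to the size of the enhanced matrix:

```python
import numpy as np

def multiplicity(i, k, n):
    """Number of pairs (a, b) with a + b = i, 0 <= a < k, 0 <= b <= n - k,
    i.e. how many copies of index i appear along one Hankel direction."""
    return sum(1 for a in range(k) if 0 <= i - a <= n - k)

# Hypothetical dimensions; omega[i, l] = |Omega_e(i, l)|.
n1, n2, k1, k2 = 8, 9, 4, 5
omega = np.array([[multiplicity(i, k1, n1) * multiplicity(l, k2, n2)
                   for l in range(n2)] for i in range(n1)])
```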
### Incoherence Measures

In general, matrix completion from a few entries is hopeless unless the underlying structure is uncorrelated with the observation basis. This inspires us to define certain incoherence measures. Let $G_L$ and $G_R$ be $r \times r$ correlation matrices such that for any $i, j$ ($1 \le i, j \le r$),

$$\left(G_L\right)_{ij} := \frac{1}{k_1 k_2}\cdot\frac{1-\left(y_i^* y_j\right)^{k_1}}{1-y_i^* y_j}\cdot\frac{1-\left(z_i^* z_j\right)^{k_2}}{1-z_i^* z_j},$$

$$\left(G_R\right)_{ij} := \frac{1-\left(y_i^* y_j\right)^{n_1-k_1+1}}{(n_1-k_1+1)\left(1-y_i^* y_j\right)}\cdot\frac{1-\left(z_i^* z_j\right)^{n_2-k_2+1}}{(n_2-k_2+1)\left(1-z_i^* z_j\right)},$$

with the convention $\frac{1-x^k}{1-x} = k$ when $x = 1$. Note that $G_L$ and $G_R$ can be obtained by sampling the 2-D Dirichlet kernel, which is frequently used in Fourier analysis. Our incoherence measure is defined as follows.
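The entries of $G_L$ are normalized geometric sums, which can be evaluated directly (sidestepping the $0/0$ convention). A short sketch with random unit-modulus poles checks the expected properties (unit diagonal, Hermitian symmetry, positive semidefiniteness, since $G_L$ is a Gram matrix of the columns of $E_L$):

```python
import numpy as np

def dirichlet_gram(y, z, k1, k2):
    """G_L from sampled 2-D Dirichlet kernels: entry (i, j) is the
    product of normalized geometric sums in y_i^* y_j and z_i^* z_j,
    computed as explicit sums so the i = j convention is automatic."""
    r = len(y)
    gy = np.array([[sum((np.conj(y[i]) * y[j]) ** l for l in range(k1))
                    for j in range(r)] for i in range(r)])
    gz = np.array([[sum((np.conj(z[i]) * z[j]) ** m for m in range(k2))
                    for j in range(r)] for i in range(r)])
    return gy * gz / (k1 * k2)

rng = np.random.default_rng(3)
r, k1, k2 = 3, 4, 4
y = np.exp(2j * np.pi * rng.random(r))
z = np.exp(2j * np.pi * rng.random(r))
GL = dirichlet_gram(y, z, k1, k2)
```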

###### Definition 1 (Incoherence)

Let $X_e$ denote the enhanced matrix associated with $X$, and suppose the SVD of $X_e$ is given by $X_e = U \Sigma V^*$. Then $X$ is said to have incoherence $(\mu_1, \mu_2, \mu_3)$ if these are respectively the smallest quantities such that

$$\sigma_{\min}\left(G_L\right) \ge 1/\mu_1, \qquad \sigma_{\min}\left(G_R\right) \ge 1/\mu_1; \tag{10}$$
 (11)

and

$$\sum_{a \in [n_1] \times [n_2]} \left|\left\langle U U^* A_b V V^*,\ \sqrt{\omega_a}\, A_a \right\rangle\right|^2 \le \frac{\mu_3 r}{n_1 n_2}\,\omega_b \tag{12}$$

holds for all $b \in [n_1] \times [n_2]$.

Some brief interpretations of the above incoherence conditions are in order:

Condition (10) specifies certain incoherence among the locations of the frequency pairs, which does not coincide with, and is not subsumed by, the separation condition required in (Candes & Fernandez-Granda, 2013; Tang et al., 2012). The frequency pairs can be spread out (e.g. when their locations are generated in some random fashion), or minimally separated (e.g. when they are small perturbations of a fine grid).

Condition (11) can be satisfied when the total energy of each skew diagonal of $UV^*$ is proportional to the dimension of this skew diagonal. This is indeed a weaker condition than the one introduced in (Candes & Recht, 2009) for matrix completion, which requires uniform energy distribution over all entries of $UV^*$. For instance, an ideal $\mu_2$ can often be obtained when the complex phases of all frequencies are generated in some random fashion.

Condition (12) is an incoherence measure based on the ($K$-fold) Hankel structures. For example, one can reason that a desired $\mu_3$ can be obtained if the magnitudes of all entries of $UV^*$ are mostly even. Conditions (10) and (12) depend only on the locations of the frequency pairs. Condition (11), however, might also rely on the amplitudes $d_i$.

Finally, the incoherence measures are mutually correlated, as shown in the following lemma.

###### Lemma 1

The incoherence measures of $X$ satisfy

$$\mu_2 \le \mu_1^2 c_s^2 r, \qquad \text{and} \qquad \mu_3 \le \mu_1^2 c_s^2 r. \tag{13}$$
### Theoretical Guarantees

With the above incoherence measures, the main theoretical guarantee is supplied in the following theorem.

###### Theorem 1

Let $X$ be a data matrix with matrix form (2), and $\Omega$ the random location set of size $m$. Define $c_s := \max\left\{\frac{n_1 n_2}{k_1 k_2},\ \frac{n_1 n_2}{(n_1 - k_1 + 1)(n_2 - k_2 + 1)}\right\}$. If all measurements are noiseless, then there exists a constant $c_1 > 0$ such that under either of the following conditions:

• Conditions (10), (11) and (12) hold and

$$m > c_1 \max\left\{\mu_1 c_s,\ \mu_2,\ \mu_3 c_s\right\}\, r \log^2(n_1 n_2); \tag{14}$$
• Condition (10) holds and

$$m > c_1\, \mu_1^2 c_s^2\, r^2 \log^2(n_1 n_2); \tag{15}$$

$X$ is the unique solution of EMaC with probability exceeding $1 - (n_1 n_2)^{-2}$.

Note that (15) is an immediate consequence of (14) by Lemma 1. Theorem 1 states the following: (1) under the strong incoherence condition (i.e. given that $\mu_1$, $\mu_2$ and $\mu_3$ are all constants), perfect recovery is possible as soon as the number of measurements exceeds the order of $r\log^2(n_1 n_2)$; (2) under the weak incoherence condition (i.e. given only that $\mu_1$ is a constant), perfect recovery is possible from $O\!\left(r^2 \log^2(n_1 n_2)\right)$ samples. Since there are at least $r$ degrees of freedom in total, this establishes the near optimality of EMaC under the strong incoherence condition.

We would like to note that while we assume a random observation model, the conditions imposed on the data model are deterministic. This is different from (Tang et al., 2012), where randomness is assumed for both the observation model and the data model.

On the other hand, our method enables stable recovery even when the time-domain samples are noisy copies of the true data. Here, we say the recovery is stable if the solution of EMaC-Noisy is “close” to the ground truth. To this end, we establish the following theorem, which is a counterpart of Theorem 1 in the noisy setting.

###### Theorem 2

Suppose $X^o$ is a noisy copy of $X$ that satisfies $\left\|\mathcal{P}_\Omega\left(X^o - X\right)\right\|_{\mathrm{F}} \le \delta$. Under the conditions of Theorem 1, the solution $\hat{X}$ to EMaC-Noisy in (9) satisfies

$$\left\|\hat{X}_e - X_e\right\|_{\mathrm{F}} \le \left\{ 2\sqrt{n_1 n_2} + 8 n_1 n_2 + \frac{8\sqrt{2}\, n_1^2 n_2^2}{m} \right\}\delta$$

with probability exceeding $1 - (n_1 n_2)^{-2}$.

Theorem 2 basically implies that the recovered enhanced matrix (which contains $\Theta(n_1 n_2)$ entries) is close to the true enhanced matrix at high signal-to-noise ratio. In particular, the average entry inaccuracy scales gracefully with the noise level $\delta$. We note that in practice, EMaC-Noisy usually yields a better estimate, possibly by a polynomial factor. The practical applicability will be illustrated through the numerical examples in the next section.

## Numerical Experiments

### Phase Transition

We examine the phase transition of the EMaC algorithm to evaluate its practical ability. A square enhanced form was adopted, with $k_1$ and $k_2$ set to about half of $n_1$ and $n_2$, which corresponds to the smallest $c_s$. For each pair $(r, m)$, we generated a spectrally sparse data matrix by randomly generating $r$ frequency spikes in $[0,1)^2$, and sampled a subset $\Omega$ of $m$ entries uniformly at random. The EMaC algorithm was conducted using CVX with SDPT3. Each trial is declared successful if the normalized mean squared error $\|\hat{X} - X\|_{\mathrm{F}}^2 / \|X\|_{\mathrm{F}}^2$ falls below a small threshold, where $\hat{X}$ denotes the estimate obtained through EMaC. The empirical success rate is calculated by averaging over 100 Monte Carlo trials.

Fig. 1 illustrates the results of these Monte Carlo experiments, where the empirical success rate is reflected by the color of each cell. It can be seen that the number of samples $m$ grows approximately linearly with respect to the spectral sparsity $r$, in line with our theoretical guarantee in Theorem 1. This phase transition diagram validates the practical applicability of our algorithm in the noiseless setting.

### A Modified Singular Value Thresholding Algorithm

The above experiments were conducted using the advanced semidefinite programming solver SDPT3. This and many other popular solvers (like SeDuMi) are based on interior point methods, which are typically inapplicable to large-scale data. In fact, SDPT3 fails to handle data matrices of even moderate size, since the dimension of the corresponding enhanced matrix grows quickly.

One solution for large-scale data is to use first-order algorithms tailored to MC problems, e.g. the singular value thresholding (SVT) algorithm developed in (Cai et al., 2010). We propose a modified SVT algorithm in Algorithm 1 to exploit the Hankel structure.

In particular, $\mathcal{D}_\tau(\cdot)$ in Algorithm 1 denotes the singular value shrinkage operator. Specifically, if the SVD of $M$ is given by $M = U \Sigma V^*$ with $\Sigma = \mathrm{diag}\left(\{\sigma_i\}\right)$, then

$$\mathcal{D}_\tau(M) := U\, \mathrm{diag}\left(\left\{(\sigma_i - \tau)_+\right\}\right) V^*,$$

where $\tau$ is the soft-thresholding level and $(x)_+ := \max(x, 0)$. (A good soft-thresholding level is hard to pick, and needs to be selected by cross validation; hence, the phase transitions of SVT and EMaC do not necessarily coincide.) Besides, in the $K$-dimensional frequency model, the projection step in Algorithm 1 maps its argument onto the subspace of enhanced matrices (i.e. $K$-fold Hankel matrices) that are consistent with the observed entries. Consequently, at each iteration, a new iterate is produced by first performing singular value shrinkage and then projecting the outcome onto the space of $K$-fold Hankel matrices that are consistent with the observed entries.
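Algorithm 1 itself is not reproduced here, but its two ingredients, singular value shrinkage of the enhanced matrix and restoration of Hankel structure plus observed entries, can be sketched as follows. This is a simplified fixed-threshold variant, not the exact update schedule of Algorithm 1; the threshold, sampling rate, and dimensions are illustrative:

```python
import numpy as np

def enhance_2d(x, k1, k2):
    """Two-fold Hankel enhanced form (6) of an n1 x n2 data matrix."""
    n1, n2 = x.shape
    blk = lambda l: np.array([[x[l, s + t] for t in range(n2 - k2 + 1)]
                              for s in range(k2)])
    return np.block([[blk(a + b) for b in range(n1 - k1 + 1)]
                     for a in range(k1)])

def average_copies(M, n1, n2, k1, k2):
    """Map an enhanced-size matrix back to n1 x n2 data by averaging
    all copies of each x_{i,l} (least-squares Hankel projection)."""
    w1, w2 = n1 - k1 + 1, n2 - k2 + 1
    acc = np.zeros((n1, n2), complex)
    cnt = np.zeros((n1, n2))
    for a in range(k1):
        for b in range(w1):
            B = M[a * k2:(a + 1) * k2, b * w2:(b + 1) * w2]
            for s in range(k2):
                for t in range(w2):
                    acc[a + b, s + t] += B[s, t]
                    cnt[a + b, s + t] += 1
    return acc / cnt

def svt_emac(X_obs, mask, k1, k2, tau, n_iter=30):
    """Shrink singular values of the enhanced matrix, average the
    Hankel copies back to a data matrix, then re-impose observations."""
    n1, n2 = X_obs.shape
    x = np.where(mask, X_obs, 0).astype(complex)
    for _ in range(n_iter):
        U, s, Vh = np.linalg.svd(enhance_2d(x, k1, k2), full_matrices=False)
        M = (U * np.maximum(s - tau, 0)) @ Vh  # shrinkage operator D_tau
        x = average_copies(M, n1, n2, k1, k2)
        x[mask] = X_obs[mask]                  # data consistency
    return x

# Small demo: one complex sinusoid, 60% of entries observed.
rng = np.random.default_rng(5)
n1 = n2 = 8
y0, z0 = np.exp(2j * np.pi * 0.13), np.exp(2j * np.pi * 0.37)
X = np.outer(y0 ** np.arange(n1), z0 ** np.arange(n2))
mask = rng.random((n1, n2)) < 0.6
x_hat = svt_emac(np.where(mask, X, 0), mask, 4, 4, tau=0.1)
```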

Fig. 2 illustrates the performance of Algorithm 1. We generated a data matrix as a superposition of random complex sinusoids, and revealed 5.8% of the total entries uniformly at random. The noise was i.i.d. Gaussian-distributed. The reconstructed signal is superimposed on the ground truth in Fig. 2, and the normalized reconstruction error was small, validating the stability of EMaC in the presence of noise.

### Synthetic Super Resolution

The proposed EMaC algorithm works beyond the random observation model in Theorem 1. Fig. 3 considers a synthetic super resolution example, where the ground truth in Fig. 3 (a) contains point sources with constant amplitude. The low-resolution observation in Fig. 3 (b) is obtained by measuring low-frequency components of the ground truth. Due to the large width of the associated point-spread function, both the locations and amplitudes of the point sources are distorted in the low-resolution image.

We apply EMaC to extrapolate the high-frequency components of the spectrum. The reconstruction in Fig. 3 (c) is obtained by directly applying the inverse Fourier transform to the extrapolated spectrum, so as to avoid parameter estimation such as the number of modes. The resolution is greatly enhanced compared with Fig. 3 (b), suggesting that EMaC is a promising approach for super resolution tasks.

## Low-Rank Hankel Matrix Completion

One problem closely related to our method is the completion of multi-fold Hankel matrices from a small number of entries. Consider for instance the 2-D model. While each spectrally sparse signal can be mapped to a low-rank 2-fold Hankel matrix, it is not clear whether all 2-fold Hankel matrices of rank $r$ can be written as the enhanced form of an object with spectral sparsity $r$. Therefore, one can think of the recovery of multi-fold Hankel matrices as a more general problem than the spectral compressed sensing problem. Indeed, Hankel matrix completion has found numerous applications in system identification (Fazel et al., 2011), natural language processing (Balle & Mohri, 2012), computer vision (Sankaranarayanan et al., 2010), medical imaging (Shin et al., 2012), etc.

There have been several works concerning algorithms and numerical experiments for Hankel matrix completion (Fazel et al., 2003; 2011; Markovsky, 2008). However, to the best of our knowledge, there has been little theoretical guarantee that directly addresses Hankel matrix completion. Our analysis framework in Theorem 1 can be straightforwardly adapted to general $K$-fold Hankel matrix completion. Notice that $\mu_2$ and $\mu_3$ are defined using the SVD of $X_e$ in (11) and (12), and we only need to modify the definition of $\mu_1$, as stated in the following theorem.

###### Theorem 3

Consider a $K$-fold Hankel matrix $X_e$ of rank $r$. The bounds in Theorem 1 and Theorem 2 continue to hold, if the incoherence $\mu_1$ is defined as the smallest quantity satisfying:

$$\max\left\{ \left\|U U^* A_{(i,l)}\right\|_{\mathrm{F}}^2,\ \left\|A_{(i,l)} V V^*\right\|_{\mathrm{F}}^2 \right\} \le \frac{\mu_1 c_s r}{n_1 n_2}. \tag{16}$$

Condition (16) requires that the left and right singular vectors are sufficiently uncorrelated with the observation basis. In fact, condition (16) is a much weaker assumption than (10).

It is worth mentioning that low-rank Hankel matrices can often be converted to low-rank Toeplitz counterparts. Both Hankel and Toeplitz matrices are important forms that capture the underlying harmonic structures. Our results and analysis framework easily extend to the Toeplitz matrix completion problem.
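The Hankel-to-Toeplitz conversion mentioned above is just a row reversal, which is a permutation and hence rank-preserving; a quick numpy check (with an illustrative rank-$r$ Hankel matrix built from a 1-D mixture of sinusoids):

```python
import numpy as np

# h_t = sum_i d_i y_i^t, so the Hankel matrix H[a, b] = h[a + b]
# factors through the r poles y_i and has rank at most r.
rng = np.random.default_rng(4)
r, n = 2, 9
y = np.exp(2j * np.pi * rng.random(r))
d = rng.standard_normal(r) + 1j * rng.standard_normal(r)
h = np.vander(y, n, True).T @ d
H = np.array([[h[a + b] for b in range(5)] for a in range(5)])

# Reversing the row order multiplies by the exchange matrix J, a
# permutation: T = J H is Toeplitz and has the same rank as H.
T = H[::-1, :]
```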

## Conclusions and Future Work

We present an efficient nonparametric algorithm to estimate a spectrally sparse object from partial time-domain samples, which poses spectral compressed sensing as a low-rank Hankel structured matrix completion problem. Under mild conditions, our algorithm enables recovery of the multi-dimensional unknown frequencies with infinite precision, which remedies the basis mismatch issue that arises in conventional CS paradigms. To the best of our knowledge, our result on Hankel matrix completion is also the first theoretical guarantee that is close to the information-theoretic limit (up to a logarithmic factor).

## Appendix: Proof Outline of Theorem 1

The EMaC method is similar in spirit to the well-known matrix completion algorithms (Candes & Recht, 2009; Gross, 2011); however, the additional Hankel or block-Hankel structures on the matrices make existing theoretical results inapplicable in our framework. Nevertheless, the golfing scheme introduced in (Gross, 2011) lays the foundation of our analysis. We provide here a sketch of the proof, with detailed derivation deferred to (Chen & Chi, 2013) and the supplemental material.

Denote by $T$ the tangent space with respect to $U$ and $V$. Let $\mathcal{P}_T$ be the orthogonal projection onto $T$, with $\mathcal{P}_{T^\perp}$ denoting the projection onto its orthogonal complement. Denote by $\mathcal{A}_{(i,l)}$ the orthogonal projection onto the subspace spanned by $A_{(i,l)}$. The projection onto the space spanned by all $A_{(i,l)}$'s and its complement are defined as

$$\mathcal{A} := \sum_{(i,l) \in [n_1] \times [n_2]} \mathcal{A}_{(i,l)}, \qquad \text{and} \qquad \mathcal{A}^\perp = \mathcal{I} - \mathcal{A}. \tag{17}$$

Suppose that $\Omega$ is obtained by sampling with replacement, i.e. $\Omega$ contains $m$ indices that are i.i.d. generated from $[n_1] \times [n_2]$. We define the associated projection operator as $\mathcal{A}_\Omega := \sum_{(i,l) \in \Omega} \mathcal{A}_{(i,l)}$, and introduce another projection operator $\mathcal{A}'_\Omega$ similar to $\mathcal{A}_\Omega$ but with the summation only over distinct samples.

To prove exact recovery of EMaC, it is sufficient to produce a dual certificate $W$ as follows.

###### Lemma 2

For a location set $\Omega$ of size $m$, suppose that the sampling operator $\mathcal{A}_\Omega$ obeys

$$\left\| \mathcal{P}_T \mathcal{A} \mathcal{P}_T - \frac{n_1 n_2}{m} \mathcal{P}_T \mathcal{A}_\Omega \mathcal{P}_T \right\| \le \frac{1}{2}. \tag{18}$$

If there exists a matrix $W$ that obeys

$$\begin{cases} \left(\mathcal{A} - \mathcal{A}'_\Omega\right)\left(U V^* + W\right) = 0, \\[4pt] \left\|\mathcal{P}_T(W)\right\|_{\mathrm{F}} \le \dfrac{1}{2 n_1^2 n_2^2}, \\[4pt] \left\|\mathcal{P}_{T^\perp}(W)\right\| \le \dfrac{1}{2}, \end{cases} \tag{19}$$

then $X$ is the unique optimizer of EMaC.

The dual certificate is constructed via the golfing scheme introduced in (Gross, 2011). Specifically, we generate $j_0$ independent random location sets $\Omega_i$ ($1 \le i \le j_0$), each sampled with replacement, such that the distribution of $\Omega_1 \cup \cdots \cup \Omega_{j_0}$ coincides with that of $\Omega$. Let $q$ denote the per-set sampling rate; for small $q$, a Taylor expansion gives $q \approx \frac{m}{j_0 n_1 n_2}$. Consider a small constant $\epsilon < 1$. The construction of a dual certificate proceeds as follows:

1. Set $B_0 = 0$, and $F_0 = UV^*$.

2. For all $i$ ($1 \le i \le j_0$), let

$$B_i = B_{i-1} + \left(q^{-1}\mathcal{A}_{\Omega_i} + \mathcal{A}^\perp\right)\mathcal{P}_T\left(U V^* - B_{i-1}\right).$$
3. Set $W = B_{j_0} - UV^*$.

We will verify that $W$ is a valid dual certificate satisfying the conditions (19) in Lemma 2. Our construction immediately yields $\left(\mathcal{A} - \mathcal{A}'_\Omega\right)(B_i) = 0$ for all $i$, which gives

$$\left(\mathcal{A} - \mathcal{A}'_\Omega\right)\left(U V^* + W\right) = 0.$$

Define $F_i := \mathcal{P}_T\left(UV^* - B_i\right)$, and hence $F_0 = UV^*$; one has

$$\mathcal{P}_T(F_i) = \left(\mathcal{P}_T \mathcal{A} \mathcal{P}_T - q^{-1}\mathcal{P}_T \mathcal{A}_{\Omega_i} \mathcal{P}_T\right)(F_{i-1}).$$

To proceed, we present Lemma 3, which shows that $\mathcal{A}_\Omega$ is sufficiently incoherent with respect to $T$.

###### Lemma 3

There exists a constant $c_2 > 0$ such that if $m > c_2\, \mu_1 c_s r \log(n_1 n_2) / \epsilon^2$, then

$$\left\| \frac{n_1 n_2}{m} \mathcal{P}_T \mathcal{A}_\Omega \mathcal{P}_T - \mathcal{P}_T \mathcal{A} \mathcal{P}_T \right\| \le \epsilon \tag{20}$$

with probability exceeding $1 - (n_1 n_2)^{-2}$.

Under the assumptions of Lemma 3, we have

$$\left\|\mathcal{P}_T(W)\right\|_{\mathrm{F}} = \left\|\mathcal{P}_T\left(F_{j_0}\right)\right\|_{\mathrm{F}} \le \epsilon^{j_0} \sqrt{r}.$$

It remains to show that $\left\|\mathcal{P}_{T^\perp}(W)\right\| \le \frac{1}{2}$, as follows.

###### Lemma 4

There exist constants $c_3, c_4 > 0$ such that if $m > c_3 \max\left\{\mu_1 c_s, \mu_2, \mu_3 c_s\right\} r \log^2(n_1 n_2)$, then each term $\left\|\mathcal{P}_{T^\perp}\left(q^{-1}\mathcal{A}_{\Omega_{i+1}} + \mathcal{A}^\perp\right)\mathcal{P}_T(F_i)\right\|$ is sufficiently small

with probability at least $1 - (n_1 n_2)^{-2}$.

Under the assumptions of Lemma 4, we have

$$\left\|\mathcal{P}_{T^\perp}(W)\right\| \le \sum_{i=0}^{j_0 - 1} \left\|\mathcal{P}_{T^\perp}\left(q^{-1}\mathcal{A}_{\Omega_{i+1}} + \mathcal{A}^\perp\right)\mathcal{P}_T(F_i)\right\| < \frac{1}{2}.$$

So far, we have successfully shown that $W$ is a valid dual certificate with high probability, and hence EMaC allows exact recovery with high probability.

## References

• Balle & Mohri (2012) Balle, B. and Mohri, M. Spectral learning of general weighted automata via constrained matrix completion. Advances in Neural Information Processing Systems (NIPS), pp. 2168–2176, 2012.
• Borcea et al. (2002) Borcea, L., Papanicolaou, G., Tsogka, C., and Berryman, J. Imaging and time reversal in random media. Inverse Problems, 18(5):1247, 2002.
• Cai et al. (2010) Cai, J. F., Candes, E. J., and Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
• Candes & Fernandez-Granda (2012) Candes, E. J. and Fernandez-Granda, C. Super-resolution from noisy data. Arxiv 1211.0290, November 2012.
• Candes & Fernandez-Granda (2013) Candes, E. J. and Fernandez-Granda, C. Towards a mathematical theory of super-resolution. to appear in Communications on Pure and Applied Mathematics, 2013.
• Candes & Recht (2009) Candes, E. J. and Recht, B. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, April 2009.
• Candes et al. (2006) Candes, E. J., Romberg, J., and Tao, T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, Feb. 2006.
• Chen & Chi (2013) Chen, Y. and Chi, Y. Robust spectral compressed sensing via structured matrix completion. Arxiv: 1304.8126, May 2013.
• Chi et al. (2011) Chi, Y., Scharf, L.L., Pezeshki, A., and Calderbank, A.R. Sensitivity to basis mismatch in compressed sensing. IEEE Transactions on Signal Processing, 59(5):2182–2195, May 2011.
• Dragotti et al. (2007) Dragotti, P. L., Vetterli, M., and Blu, T. Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets strang-fix. IEEE Trans on Signal Processing, 55(5):1741 –1757, May 2007.
• Duarte & Baraniuk (2012) Duarte, M.F. and Baraniuk, R.G. Spectral compressive sensing. Applied and Computational Harmonic Analysis, 2012.
• Fazel et al. (2003) Fazel, M., Hindi, H., and Boyd, S. P. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. American Control Conference, 3:2156 – 2162 vol.3, June 2003.
• Fazel et al. (2011) Fazel, M., Pong, T. K., Sun, D., and Tseng, P. Hankel matrix rank minimization with applications in system identification and realization, 2011.
• Gedalyahui et al. (2011) Gedalyahui, K., Tur, R., and Eldar, Y. C. Multichannel sampling of pulse streams at the rate of innovation. IEEE Trans on Sig. Proc, 59:1491–1504, 2011.
• Gross (2011) Gross, D. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548–1566, March 2011.
• Hua (1992) Hua, Y. Estimating two-dimensional frequencies by matrix enhancement and matrix pencil. IEEE Trans on Sig. Proc., 40(9):2267 –2280, Sep 1992.
• Lustig et al. (2007) Lustig, M., Donoho, D., and Pauly, J. M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182–1195, 2007.
• Markovsky (2008) Markovsky, I. Structured low-rank approximation and its applications. Automatica, 44(4):891–909, 2008.
• Negahban & Wainwright (2012) Negahban, S. and Wainwright, M.J. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. The Journal of Machine Learning Research, 98888:1665–1697, May 2012.
• Nion & Sidiropoulos (2010) Nion, D. and Sidiropoulos, N. D. Tensor algebra and multidimensional harmonic retrieval in signal processing for MIMO Radar. IEEE Transactions on Signal Processing, 58(11):5693 –5705, Nov. 2010.
• Potter et al. (2010) Potter, L.C., Ertin, E., Parker, J.T., and Cetin, M. Sparsity and compressed sensing in radar imaging. Proceedings of the IEEE, 98(6):1006–1020, 2010.
• Roy & Kailath (1989) Roy, R. and Kailath, T. ESPRIT: estimation of signal parameters via rotational invariance techniques. IEEE Transactions on Acoustics, Speech and Signal Processing, 37(7):984–995, Jul 1989.
• Sankaranarayanan et al. (2010) Sankaranarayanan, A., Turaga, P., Baraniuk, R., and Chellappa, R. Compressive acquisition of dynamic scenes. ECCV 2010, pp. 129–142, 2010.
• Sayeed & Aazhang (1999) Sayeed, A. M. and Aazhang, B. Joint multipath-Doppler diversity in mobile wireless communications. IEEE Transactions on Communications, 47(1):123 –132, Jan 1999.
• Schermelleh et al. (2010) Schermelleh, L., Heintzmann, R., and Leonhardt, H. A guide to super-resolution fluorescence microscopy. The Journal of cell biology, 190(2):165–175, 2010.
• Shin et al. (2012) Shin, P., Larson, P., Ohliger, M., Elad, M., Pauly, J., Vigneron, D., and Lustig, M. Calibrationless parallel imaging reconstruction based on structured low-rank matrix completion. submitted to Magnetic Resonance in Medicine, 2012.
• Tang et al. (2012) Tang, G., Bhaskar, B. N., Shah, P., and Recht, B. Compressed sensing off the grid. Arxiv 1207.6053, July 2012.