Reconstruction of signals from their autocorrelation and cross-correlation vectors, with applications to phase retrieval and blind channel estimation

Kishore Jaganathan Babak Hassibi
Department of Electrical Engineering, California Institute of Technology, Pasadena.
K. Jaganathan and B. Hassibi were supported in part by the National Science Foundation under grants CCF-0729203, CNS-0932428 and CIF-1018927, by the Office of Naval Research under the MURI grant N00014-08-1-0747, and by a grant from Qualcomm Inc.
Abstract

We consider the problem of reconstructing two signals from the autocorrelation and cross-correlation measurements. This inverse problem is a fundamental one in signal processing, and arises in many applications, including phase retrieval and blind channel estimation. In a typical phase retrieval setup, only the autocorrelation measurements are obtainable. We show that, when the measurements are obtained using three simple “masks”, phase retrieval reduces to the aforementioned reconstruction problem.

The classic solution to this problem is based on finding common factors between the z-transforms of the autocorrelation and cross-correlation vectors. This solution has enjoyed limited practical success, mainly because it is not sufficiently stable in the noisy setting. In this work, inspired by the success of convex programming in provably and stably solving various quadratically constrained problems, we develop a semidefinite programming-based algorithm and provide theoretical guarantees. In particular, we show that almost all signals can be uniquely recovered by this algorithm (up to a global phase). Comparative numerical studies demonstrate that the proposed method significantly outperforms the classic method in the noisy setting.

Keywords: Autocorrelation, cross-correlation, phase retrieval, blind channel estimation, convex programming.

I Introduction

I-A Problem Setup

For the sake of exposition, we begin by considering the one-dimensional (1D) setting; the results developed in this work also extend to 2D signals (we refer the readers to Section III-A for details). Suppose and are the two complex signals of interest. Let and denote the autocorrelation vectors of and respectively, defined as

(1)

where, for notational convenience, and have a value of zero outside the intervals and respectively. Similarly, let and denote the cross-correlation vectors of and , defined as

(2)

Our goal is to uniquely, stably and efficiently reconstruct and from , , and .
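For concreteness, with labels we introduce here (x of length n and y of length m, both zero outside their supports; these symbol names are our own), the standard definitions of these four vectors read:

```latex
% Autocorrelations, as in eq. (1):
a_x[k] = \sum_i x[i]\,\overline{x[i-k]}, \qquad
a_y[k] = \sum_i y[i]\,\overline{y[i-k]},
% Cross-correlations, as in eq. (2):
c_{xy}[k] = \sum_i x[i]\,\overline{y[i-k]}, \qquad
c_{yx}[k] = \sum_i y[i]\,\overline{x[i-k]}.
```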

I-B Trivial Ambiguities

Observe that the operations of global phase-change and time-shift on and do not affect their autocorrelation and cross-correlation vectors. In particular, the autocorrelation vectors of the signals and are and respectively, and their cross-correlation vectors are and . Similarly, the autocorrelation vectors of the signals and time-shifted by units are and respectively, and their cross-correlation vectors are and . Indeed, the assumption that and have non-zero values only within the indices and respectively resolves the time-shift ambiguity when or or or .

Consequently, from the autocorrelation and cross-correlation vectors, recovery is in general possible only up to a global phase and a time-shift. These ambiguities are commonly referred to as trivial ambiguities in the literature. Throughout this work, when we refer to successful recovery, it is assumed to be up to these trivial ambiguities.

I-C Classic Method

The classic approach to this reconstruction problem is based on finding common factors between the z-transforms of the autocorrelation and cross-correlation vectors. Let , , , , and denote the z-transforms of , , , , and respectively. The objective is equivalent to reconstructing the polynomials and from the polynomials , , and .

The aforementioned polynomials are related as follows:

(3)

The key idea is the following: Suppose the polynomials and are co-prime, i.e., they do not have any common roots. Then, can be reconstructed by identifying the common factors between the polynomials and . Similarly, can be reconstructed by identifying the common factors between the polynomials and . (The multiplying terms and ensure that the polynomials consist of only non-negative powers of z.)
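These polynomial relations can be verified numerically. A minimal sketch under assumed names (x and y stand for the two signals; np.correlate conjugates its second argument, and multiplying a polynomial by the conjugate-reversed polynomial of a signal is a convolution of coefficient vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# Autocorrelation of x at lags -(n-1)..(n-1): a_x[k] = sum_i x[i] conj(x[i-k])
a_x = np.correlate(x, x, mode="full")
# Cross-correlation of x and y over all lags
c_xy = np.correlate(x, y, mode="full")

# Coefficient-wise, these equal polynomial products: the autocorrelation
# polynomial factors into X(z) times the conjugate-reversed polynomial of x,
# and similarly for the cross-correlation, which is what the common-factor
# argument exploits.
assert np.allclose(a_x, np.convolve(x, np.conj(x[::-1])))
assert np.allclose(c_xy, np.convolve(x, np.conj(y[::-1])))
```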

In fact, in the classic paper [1], the authors show that the co-prime condition is necessary and sufficient for successful recovery. They also provide an algorithm based on finding the greatest common divisor and residuals of two polynomials using Sylvester matrices [2]. Numerical simulations show that the algorithm is somewhat stable in the noisy setting.

For a brief discussion on Sylvester matrices and their use in finding the greatest common divisor and residuals of two polynomials, we refer the readers to Appendix B.
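As a small illustration of why Sylvester matrices detect common factors (sketched here with the classical resultant construction; the variant used later in the proof of Theorem III.1 differs in its shift structure), the determinant of the Sylvester matrix of two polynomials vanishes exactly when they share a root:

```python
import numpy as np

def sylvester(p, q):
    """Classical Sylvester matrix of polynomials p, q, given as coefficient
    lists in descending powers; the matrix is (deg p + deg q) square."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):            # n shifted copies of p's coefficients
        S[i, i:i + m + 1] = p
    for i in range(m):            # m shifted copies of q's coefficients
        S[n + i, i:i + n + 1] = q
    return S

p = np.array([1.0, -3.0, 2.0])    # (z - 1)(z - 2)
q1 = np.array([1.0, -3.0])        # z - 3: co-prime with p
q2 = np.array([1.0, -2.0])        # z - 2: shares the root 2 with p

assert abs(np.linalg.det(sylvester(p, q1))) > 1e-9   # non-zero resultant
assert abs(np.linalg.det(sylvester(p, q2))) < 1e-9   # resultant vanishes
```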

I-D Contributions

In this work, we develop a semidefinite programming (SDP)-based algorithm. We show that almost all signals can be successfully recovered by this algorithm, subject to the aforementioned co-prime condition (Theorem III.1). In the noisy setting, we conduct extensive numerical simulations and verify the efficacy of the proposed algorithm.

The rest of the paper is organized as follows: In Section II, we discuss the practical applications of the reconstruction problem. In Section III, we present our algorithm and provide theoretical guarantees. The results of the various numerical studies are provided in Section IV, and Section V concludes the paper.

II Motivation

In this section, we describe two major applications of the reconstruction problem: phase retrieval and blind channel estimation.

II-A Phase Retrieval

Fig. 1: A pictorial representation of a typical 1D phase retrieval setting using the proposed set of masks (panels (a)-(c): Mask #1, Mask #2, Mask #3). A monochromatic beam is incident on the masked sample, and the detector measures the autocorrelation vector of the part of the sample that is not blocked by the mask.

Fig. 2: A pictorial representation of a typical 2D phase retrieval setting using the proposed set of masks (panels (a)-(c): Mask #1, Mask #2, Mask #3).

In many practical measurement systems, the measurable quantity is the autocorrelation vector of the signal. Recovering the underlying signal from the autocorrelation measurements is known as phase retrieval. Phase retrieval arises in many areas of engineering and applied physics, including X-ray crystallography [3], optics [4, 5], astronomical imaging [6], bioinformatics [7] and more.

Despite an enormous amount of research spanning nearly a hundred years, there are no known efficient and stable algorithms with theoretical guarantees. It is widely accepted that phase retrieval is a computationally difficult problem. We refer the interested readers to [8, 9] for classic surveys and to [10, 11] for contemporary reviews.

In order to overcome the computational issues of phase retrieval, a common approach in practice is to obtain additional information on the signal by introducing simple modifications to the measurement process. To this end, masking is a popular technique, in which parts of the signal are physically blocked using a mask and the autocorrelation vector of the rest of the signal is measured [12, 13, 14, 15]. The premise, in a nutshell, is to introduce redundancy in the reconstruction problem by collecting multiple autocorrelation measurements. In the following, we describe three simple masks and show that, when autocorrelation measurements are obtained using them, phase retrieval is equivalent to the problem of recovering two signals from the autocorrelation and cross-correlation measurements.

Let be the underlying signal which we wish to determine, and let be its z-transform. We use the notation and , where is an integer in the interval . In other words, , where is the signal constructed using the first entries of and is the signal constructed using the remaining entries of .

Suppose autocorrelation measurements are collected using the following three masks:

  • The first mask does not block any part of the signal.

  • The second mask blocks the signal in the interval .

  • The third mask blocks the signal in the interval .

A pictorial representation is provided in Fig. 1. Note that the measurements provide the knowledge of the autocorrelation vectors of , and . Since we have the relationship

the polynomials , and are provided by the measurements. Hence, we can infer the polynomial from the measurements. Since has terms consisting of only negative powers of z and has terms consisting of only positive powers of z, we can infer the polynomials and from the measurements.

Therefore, by collecting autocorrelation measurements using the aforementioned three masks, the autocorrelation and cross-correlation vectors of and can be inferred. Consequently, phase retrieval reduces to the problem of reconstruction of and from their autocorrelation and cross-correlation vectors.
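This separation step can be sketched numerically; a minimal sketch under assumed notation (w is the full signal, x and y its two parts zero-padded to full length; np.correlate conjugates its second argument):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 3
x_part = rng.normal(size=n) + 1j * rng.normal(size=n)
y_part = rng.normal(size=m) + 1j * rng.normal(size=m)

w = np.concatenate([x_part, y_part])        # mask 1: nothing blocked
x = np.concatenate([x_part, np.zeros(m)])   # mask 3: second part blocked
y = np.concatenate([np.zeros(n), y_part])   # mask 2: first part blocked

acf = lambda u, v: np.correlate(u, v, mode="full")  # lags -(N-1)..(N-1)

# By bilinearity, the three masked autocorrelations determine the cross terms:
cross = acf(w, w) - acf(x, x) - acf(y, y)
assert np.allclose(cross, acf(x, y) + acf(y, x))

# The two cross terms occupy disjoint lag ranges (negative vs. positive
# powers of z), so each can be read off from `cross` separately.
L = len(w) - 1                               # index of lag 0
assert np.allclose(cross[:L], acf(x, y)[:L])
assert np.allclose(cross[L:], acf(y, x)[L:])
```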

Remarks: (i) The total number of phaseless Fourier measurements provided by these masks is : In order to obtain the autocorrelation vector of a signal of length , it is well-known that phaseless Fourier measurements are necessary and sufficient (see Appendix of [16] for example). The three masks obtain the autocorrelation vectors of signals of lengths , and . The quantity has been of significant interest to the phase retrieval community [17, 18, 19, 20].

(ii) In [21, 22], the authors propose a framework called vectorial phase retrieval (VPR). Mathematically, the framework proposed in this section is equivalent to VPR. Indeed, VPR is another framework where the reconstruction problem arises. We refer the interested readers to [21, 22] for details.

II-B Blind Channel Estimation

In many communication systems, channel estimation is required in order to be able to achieve reliable communication. A common way of doing this is by periodically sending training sequences known both to the transmitter and receiver [23]. In scenarios where this is not possible, blind channel estimation is a popular technique, in which the transmitted signal is inferred from the received signal using only the statistical properties of the transmitted signal [24, 25, 26].

Let be a zero-mean and unit-variance i.i.d. random process. Suppose it is transmitted through two linear time-invariant FIR channels and , or equivalently and in the -transform domain, to obtain random processes and respectively. The power spectral densities of and , denoted by and , are given by

(4)

and their cross-spectral densities, denoted by and , are given by

(5)

Therefore, the aforementioned measurements provide the knowledge of the autocorrelation and cross-correlation vectors of and . Consequently, blind channel estimation reduces to the problem of reconstruction of two signals from their autocorrelation and cross-correlation vectors.
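In standard notation (transfer functions H1(z) and H2(z) for the two channels; the symbol names are assumed here), a unit-variance white input yields the familiar spectral factorization identities, which is why these spectra carry exactly the autocorrelation and cross-correlation information:

```latex
S_{y_1}(z) = H_1(z)\,\overline{H_1(1/\bar z)}, \qquad
S_{y_2}(z) = H_2(z)\,\overline{H_2(1/\bar z)},
S_{y_1 y_2}(z) = H_1(z)\,\overline{H_2(1/\bar z)}, \qquad
S_{y_2 y_1}(z) = H_2(z)\,\overline{H_1(1/\bar z)}.
```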

Fig. 3: The goal of blind channel estimation is to identify and using , and only the statistical properties of .

Remark: In [27], the authors show that, if the sampling rate at the receiver is twice the transmission rate (also known as the baud rate), then a single linear time-invariant FIR channel mathematically decomposes into two linear time-invariant FIR channels. The key idea is the following: The channel is expressed as

where and are the channels involving only the taps corresponding to the even and odd time-slots respectively. Since transmission happens only at even time-slots, the received vector corresponding to the even time-slots is as if the transmitted signal were passed through , and the received vector corresponding to the odd time-slots is as if it were passed through , thereby converting a single linear time-invariant FIR channel into two linear time-invariant FIR channels. This extends the applicability of the reconstruction problem to scenarios where multiple channels are not available.
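This decomposition can be sketched numerically (assumed names: s is the baud-rate symbol stream, h the oversampled FIR channel; transmission at even time-slots is modeled by upsampling by two):

```python
import numpy as np

rng = np.random.default_rng(3)
s = rng.normal(size=5)            # symbols, sent at even time-slots
h = rng.normal(size=6)            # oversampled FIR channel

s_up = np.zeros(2 * len(s))
s_up[::2] = s                     # upsample by 2: zeros at odd time-slots
r = np.convolve(s_up, h)          # received sequence

# Subsampling the output at even/odd time-slots is equivalent to passing the
# baud-rate symbols through the two sub-channels h[0::2] and h[1::2].
r_even = np.convolve(s, h[0::2])
r_odd = np.convolve(s, h[1::2])
assert np.allclose(r[0::2][:len(r_even)], r_even)
assert np.allclose(r[1::2][:len(r_odd)], r_odd)
```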

III SDP-based reconstruction

In this section, we first develop the SDP-based algorithm for 1D signals and provide theoretical guarantees. Then, we extend the algorithm and theory to 2D signals.

Note that the autocorrelation and cross-correlation measurements are quadratic in nature. SDP-based algorithms have been shown to yield robust solutions, with theoretical guarantees, for various quadratically constrained optimization problems (see [28, 29, 30, 31, 33, 34, 35, 36, 16, 32, 37, 38, 40, 39] and references therein). Therefore, it is natural to try SDP techniques on this problem. An SDP formulation of the reconstruction problem can be obtained by a procedure popularly known as lifting:

Let be the vector obtained by stacking and . We embed in a higher-dimensional space using the transformation . Since the autocorrelation and cross-correlation measurements are linear in the matrix , the reconstruction problem reduces to finding a rank-one positive semidefinite matrix which satisfies particular affine constraints. In other words, the reconstruction problem can be equivalently written as

(6)

for appropriate choices of sensing matrices and measurements and , for , respectively. For example, consider the setup with and . We have , as there are autocorrelation terms and cross-correlation terms. The sensing matrices are

and the corresponding measurements are , , , and .
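The lifting step can be made concrete with a small sketch (names assumed here: z is the stacked vector and W = zz^H its lifted matrix; for the autocorrelation part, the sensing matrix for lag k is the shift matrix with ones on the k-th superdiagonal):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
z = rng.normal(size=n) + 1j * rng.normal(size=n)
W = np.outer(z, np.conj(z))                  # lifted rank-one matrix

for k in range(n):
    F_k = np.eye(n, k=k)                     # sensing matrix for lag k
    measurement = np.trace(F_k @ W)          # linear in the lifted matrix W
    a_k = np.sum(np.conj(z[:n - k]) * z[k:]) # autocorrelation at lag k
    assert np.isclose(measurement, a_k)
```

Cross-correlation lags work the same way, with the superdiagonal block of the sensing matrix straddling the two sub-vectors of z.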

To obtain an SDP formulation, one possibility is to relax the rank constraint, resulting in the following convex algorithm:

Inputs: The autocorrelation and cross-correlation measurements for , the signal lengths and .
Outputs: Signal estimates and .

  • Obtain the matrix by solving

    (7)
  • Calculate the best rank-one approximation of through SVD, and get .

  • Return and .

Algorithm 1 SDP-based reconstruction algorithm
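The last two steps of Algorithm 1 amount to a truncated SVD; a minimal sketch (hypothetical names; here W stands in for the solver output and is exactly rank one, as in the noiseless case):

```python
import numpy as np

rng = np.random.default_rng(5)
z = rng.normal(size=7) + 1j * rng.normal(size=7)
W = np.outer(z, np.conj(z))                 # stand-in for the SDP solution

# Best rank-one approximation via SVD: keep the top singular pair.
U, s, Vh = np.linalg.svd(W)
z_hat = np.sqrt(s[0]) * U[:, 0]

# z_hat matches z up to a unimodular global phase (the trivial ambiguity).
phase = np.vdot(z_hat, z) / abs(np.vdot(z_hat, z))
assert np.allclose(z_hat * phase, z)
```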

We provide the following theoretical guarantee for recovery using Algorithm 1:

Theorem III.1.

Suppose the signals and , of lengths and respectively, are such that the polynomials and are co-prime, and . For almost all such and , the convex program (7) has a unique feasible point, namely, , and thus the outputs of Algorithm 1 are and .

Proof.

The proof of this theorem involves dual certificates and Sylvester matrices. An overview of the method of dual certificates is provided in Appendix A, and relevant properties of Sylvester matrices are described in Appendix B.

As before, we use the notations , and for the sake of simplicity. Let denote the set of Hermitian matrices of the form

and be its orthogonal complement. We use and to denote the projections of a matrix onto the subspaces and respectively.

By construction, the matrix is a feasible point of (7). Standard duality arguments in semidefinite programming (see Section A for details) show that the following conditions are sufficient for to be the unique optimizer, i.e., the unique feasible point, of (7):

  1. Condition 1: There exists a dual certificate matrix , where are scalar complex numbers, with the following properties:

    • ,

    • ,

    • .

  2. Condition 2: If and for , then is the only solution.

In words, the matrix is parametrized by scalar variables through the aforementioned relationship. The process of dual certificate construction deals with assigning values to in such a way that the resulting satisfies the properties specified in Condition 1. Condition 2 typically deals with well-known properties of polynomials, and is in general straightforward to show.

The range space of , parametrized by , is the set of all Hermitian matrices which are such that the submatrices corresponding to the rows and columns, rows and columns, rows and columns, and rows and columns are Toeplitz matrices.

Let be the Sylvester matrix constructed using the two polynomials and , i.e., is the following matrix:

The columns of are such that the th column is shifted by units, and the columns are such that the th column is shifted by units. We refer the readers to Section B for a description of the intuition behind defining such a matrix.

To show that Condition 1 is satisfied for , we propose the following dual certificate:

(8)

The matrix is clearly in the range space of : Since the first columns of are shifted copies of the th column, their inner products have a Toeplitz structure. The same applies to the inner products between the remaining columns, and the inner products between the first columns and the remaining columns.

(a) is positive semidefinite by construction.

(b) Since , we have . This follows from a property of Sylvester matrices described in (15) and (16). Alternatively, it can be verified by simply multiplying out the quantities. Therefore, we have .

(c) The condition ensures that the degrees of the polynomials and are and respectively. The polynomial is the greatest common divisor of and , due to the fact that and are co-prime. Therefore, the rank of is equal to . This is due to a property of Sylvester matrices described in (14), which states that the rank of the Sylvester matrix is equal to the sum of the degrees of the two associated polynomials minus the degree of their greatest common divisor. Consequently, we have .

Next, we show that Condition 2 is satisfied for almost all . Since , we can write for some . Instead of working with the length complex vector , we work with the length real vector , where the operations and obtain the element-wise real and imaginary parts of respectively. In other words, instead of working with the complex variables, we work with the real variables that form their real and imaginary parts.

The equation , for any , is linear with respect to . For example, the equation in complex variables

can be equivalently written as two equations in real variables:

Let denote the constraints corresponding to the equations for . Note that is an matrix, where , whose entries are either the entries of with a plus or minus sign, or . Instead of focusing on the precise structure of , we complete the proof using the following property of : The determinant of each submatrix of is a finite-degree polynomial function of the entries of .

Finite-degree polynomial functions have the following well-known property: they are either zero everywhere, or non-zero almost everywhere. Therefore, the determinant of any particular submatrix of is either zero for all , or non-zero for almost all . Consequently, one of the following is true: the determinant of every submatrix of is zero for all , or there exists at least one submatrix which has a non-zero determinant for almost all . By substituting , we eliminate the possibility of every determinant being zero for all . As a result, the rank of is at least for almost all .

Furthermore, the vector corresponding to is in the null space of for any real constant , due to the fact that the corresponding is . Therefore, for almost all , the rank of is equal to , and for any real constant is the only feasible solution. In other words, is the only matrix that satisfies both and for . ∎

III-A Extension to 2D Signals

The results developed in this section for 1D signals can be extended to 2D signals using the following trick:

Suppose and are two signals of size and respectively. Let and be their autocorrelation and cross-correlation matrices respectively. Also, let denote the vector constructed by stacking the columns of . The autocorrelation vector of , denoted by , can be inferred from . This can be seen as follows:

For , we have

where, for notational convenience, has a value of zero outside the interval and has a value of zero outside the interval . Since the values of for are the conjugates of the values of for , is completely characterized by . Similarly, the autocorrelation and cross-correlation vectors , and can be inferred from the autocorrelation and cross-correlation matrices , and respectively.
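This inference step can be checked numerically; a sketch under assumed notation (X is an n1-by-n2 signal vectorized column-wise; every 1D lag of the vectorized signal is the sum of exactly two 2D autocorrelation entries):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 4
X = rng.normal(size=(n1, n2)) + 1j * rng.normal(size=(n1, n2))
x = X.flatten(order="F")                     # column-wise vectorization

# 1D autocorrelation of the vectorized signal: a[t] = sum_m x[m] conj(x[m-t])
full = np.correlate(x, x, mode="full")
a_vec = lambda t: full[t + len(x) - 1]

# 2D autocorrelation: A2[k, l] = sum_{i,j} X[i, j] conj(X[i-k, j-l])
def A2(k, l):
    s = 0j
    for i in range(n1):
        for j in range(n2):
            if 0 <= i - k < n1 and 0 <= j - l < n2:
                s += X[i, j] * np.conj(X[i - k, j - l])
    return s

# Each 1D lag t = l*n1 + k (0 <= k < n1) combines exactly two 2D lags, so
# the vectorized autocorrelation is inferable from the 2D autocorrelation.
for t in range(n1 * n2):
    k, l = t % n1, t // n1
    assert np.isclose(a_vec(t), A2(k, l) + A2(k - n1, l + 1))
```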

In other words, the autocorrelation and cross-correlation vectors of and can be inferred from the measurements. Using Theorem III.1, we conclude that almost all signals and , which are such that the polynomials and are co-prime, and , can be uniquely reconstructed by Algorithm 1. Finally, the desired signals and can be recovered from and respectively by appropriate reshaping.

Consequently, the three masks proposed for phase retrieval in Section II-A generalize to the 2D setting as follows: Let be a signal of size , and be an integer in the interval :

  • The first mask does not block any part of the signal.

  • The second mask blocks the signal in the columns .

  • The third mask blocks the signal in the columns .

A pictorial representation of the setup is provided in Fig. 2.

Remarks: (i) One could also perform the vectorization by stacking rows instead of columns.

(ii) The autocorrelation and cross-correlation measurements correspond to affine constraints in the lifted domain. As a result, there is no need to calculate the autocorrelation and cross-correlation measurements of the vectorized signals while implementing the algorithm in practice.

(iii) In [41], the authors explore the general connection between 2D and 1D phase retrieval using similar tricks.

III-B Noisy setting

In practice, the measurements are contaminated by additive noise. One way of implementing Algorithm 1 in the noisy setting is:

(9)

where , for , are the noisy autocorrelation and cross-correlation measurements. We choose the ℓ2-norm in the objective function keeping in mind that measurement noise is typically AWGN. In settings where the noise vector is known to be sparse, one could choose the ℓ1-norm instead [42]. Since the desired solution is a rank-one matrix, one could also augment the objective function with an appropriate rank-promoting regularization term [43].
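With the lifted notation of the noiseless formulation (sensing matrices F_i and noisy measurements m̃_i; these symbol names are assumed here), one natural least-squares version of such a noisy program reads:

```latex
\underset{W \succeq 0}{\text{minimize}} \;\;
\sum_{i} \bigl( \operatorname{trace}(F_i W) - \tilde{m}_i \bigr)^2
```

Replacing the squared deviations with absolute deviations gives the corresponding ℓ1-type variant for sparse noise.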

IV Numerical Simulations

In this section, we demonstrate the performance of Algorithm 1 using numerical simulations.

First, we perform a comparative study of the Sylvester matrix-based and SDP-based algorithms in the noisy setting. The Sylvester matrix-based algorithm proposed in [1] is implemented as described in the remark at the end of Appendix B, and the SDP-based algorithm is implemented as described in (9).

We perform a total of trials for and setups. In each trial, the two signals and are sampled uniformly at random from spheres of radius and respectively. If the signals do not satisfy , they are sampled again. Their autocorrelation and cross-correlation vectors are computed and corrupted with additive zero-mean Gaussian noise of appropriate variance (determined by the SNR).

The normalized mean-squared error (NMSE), defined as

(10)

where , is plotted as a function of SNR in Fig. 4. The approximately linear relationship between NMSE and SNR on the logarithmic scale indicates that reconstruction using both methods is stable in the noisy setting. Further, the superior performance of the SDP-based method is clearly visible. Convex methods are known to be very robust to noise in general, so this observation is along expected lines.
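One standard way to define such a metric, modulo the trivial ambiguities (symbols assumed here; T trials indexed by t, with the minimization taken over the global phase φ):

```latex
\mathrm{NMSE} \;=\; \frac{1}{T} \sum_{t=1}^{T} \min_{\phi}
\frac{\|\hat{x}_t - e^{\mathrm{i}\phi} x_t\|_2^2 + \|\hat{y}_t - e^{\mathrm{i}\phi} y_t\|_2^2}
     {\|x_t\|_2^2 + \|y_t\|_2^2}
```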

Fig. 4: A comparative study of the NMSE vs SNR for the Sylvester matrix-based and SDP-based algorithms.

Next, we demonstrate another important feature of the SDP-based framework. In applications like phase retrieval, one could potentially collect additional measurements using more masks. In such setups, the Sylvester matrix-based framework cannot make use of the additional measurements. In contrast, the additional measurements can be added as extra affine constraints in the SDP-based framework.

Consider the setup with and . While the setup is similar to , there is a small difference in the way the noise is modeled: as described in Section II-A, the cross-correlation vectors are not directly measured and are instead calculated from three autocorrelation measurements, because of which their noise variance is three times higher.

The signal is sampled as before. Fig. 5 compares the stability of the SDP-based method in the following two setups: (1) no additional measurements are considered and (2) additional measurements using masks defined by are considered. As expected, the plot suggests that the additional measurements lead to a further improvement in stability.

Fig. 5: NMSE vs SNR for the SDP-based algorithm when additional measurements are available.

V Conclusions

In this work, we considered the problem of reconstruction of signals from their autocorrelation and cross-correlation measurements. We first described two applications where this reconstruction problem naturally arises: phase retrieval and blind channel estimation. In the phase retrieval setup, where only the autocorrelation vectors can be measured, we proposed three simple masks and showed that phase retrieval is equivalent to the aforementioned reconstruction problem when measurements are obtained using them.

Then, we formulated this problem as a convex program using the standard lifting method and provided theoretical guarantees. In particular, we showed that the convex program uniquely identifies almost all signals in the noiseless setting. In the noisy setting, we demonstrated the superior stability of this approach over the standard Sylvester matrix-based approach through numerical simulations.

References

  • [1] L. Tong, G. Xu, B. Hassibi, and T. Kailath, “Blind channel identification based on second-order statistics: A frequency-domain approach,” IEEE Transactions on Information Theory 41, no. 1 (1995): 329-334.
  • [2] R. Bitmead, S-Y. Kung, B. Anderson, and T. Kailath, “Greatest common divisor via generalized Sylvester and Bezout matrices,” IEEE Transactions on Automatic Control 23, no. 6 (1978): 1043-1047.
  • [3] A. L. Patterson, “Ambiguities in the X-ray analysis of crystal structures,” Physical Review 65, no. 5-6 (1944): 195.
  • [4] A. Walther, “The question of phase retrieval in optics,” Journal of Modern Optics 10, no. 1 (1963): 41-49.
  • [5] R. P. Millane, “Phase retrieval in crystallography and optics,” JOSA A 7, no. 3 (1990): 394-411.
  • [6] J. C. Dainty and J. R. Fienup, “Phase retrieval and image reconstruction for astronomy,” Image Recovery: Theory and Application (1987): 231-275.
  • [7] M. Stefik, “Inferring DNA structures from segmentation data,” Artificial Intelligence 11, no. 1 (1978): 85-114.
  • [8] J. R. Fienup, “Phase retrieval algorithms: A comparison,” Applied Optics 21, no. 15 (1982): 2758-2769.
  • [9] H. H. Bauschke, P. L. Combettes and D. R. Luke, “Phase retrieval, error reduction algorithm, and Fienup variants: A view from convex optimization,” JOSA A 19, no. 7 (2002): 1334-1345.
  • [10] Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao and M. Segev, “Phase retrieval with application to optical imaging,” IEEE Signal Processing Magazine 32, no. 3 (2015): 87-109.
  • [11] K. Jaganathan, Y. C. Eldar and B. Hassibi, “Phase retrieval: An overview of recent developments,” arXiv:1510.07713 (2015).
  • [12] G. Zheng, R. Horstmeyer and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nature photonics 7, no. 9 (2013): 739-745.
  • [13] R. Horstmeyer and C. Yang, “A phase space model of Fourier ptychographic microscopy,” Optics express 22, no. 1 (2014): 338-358.
  • [14] L. Tian, X. Li, K. Ramchandran and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomedical optics express 5, no. 7 (2014): 2376-2389.
  • [15] L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, no. 2 (2015): 104-111.
  • [16] K. Jaganathan, Y. C. Eldar and B. Hassibi, “STFT phase retrieval: Uniqueness guarantees and recovery algorithms,” IEEE Journal of Selected Topics in Signal Processing 10, no. 4 (2016): 770-781.
  • [17] R. Balan, P. Casazza and D. Edidin, “On signal reconstruction without phase,” Applied and Computational Harmonic Analysis 20, no. 3 (2006): 345-356.
  • [18] R. Balan, B. G. Bodmann, P. G. Casazza and D. Edidin, “Painless reconstruction from magnitudes of frame coefficients,” Journal of Fourier Analysis and Applications 15, no. 4 (2009): 488-501.
  • [19] A. S. Bandeira, J. Cahill, D. G. Mixon and A. A. Nelson, “Saving phase: Injectivity and stability for phase retrieval,” Applied and Computational Harmonic Analysis 37, no. 1 (2014): 106-125.
  • [20] H. Ohlsson and Y. C. Eldar, “On conditions for uniqueness in sparse phase retrieval,” IEEE International Conference on Acoustics, Speech and Signal Processing (2014): 1841-1845.
  • [21] O. Raz et al., “Vectorial phase retrieval for linear characterization of attosecond pulses,” Physical Review Letters 107, no. 13 (2011): 133902.
  • [22] O. Raz, N. Dudovich, and B. Nadler, “Vectorial phase retrieval of 1-D signals,” IEEE Transactions on Signal Processing 61, no. 7 (2013): 1632-1643.
  • [23] B. Hassibi and B. M. Hochwald, “How much training is needed in multiple-antenna wireless links?,” IEEE Transactions on Information Theory 49, no. 4 (2003): 951-963.
  • [24] Y. Sato, “A method of self-recovering equalization for multilevel amplitude-modulation systems,” IEEE Transactions on communications 23, no. 6 (1975): 679-682.
  • [25] D. Godard, “Self-recovering equalization and carrier tracking in two-dimensional data communication systems,” IEEE transactions on communications 28, no. 11 (1980): 1867-1875.
  • [26] G. Xu, H. Liu, L. Tong and T. Kailath, “A least-squares approach to blind channel identification,” IEEE Transactions on signal processing 43, no. 12 (1995): 2982-2993.
  • [27] L. Tong, G. Xu and T. Kailath, “Blind identification and equalization based on second-order statistics: A time domain approach,” IEEE Transactions on information Theory 40, no. 2 (1994): 340-349.
  • [28] L. Lovasz, “On the Shannon capacity of a graph,” IEEE Transactions on Information theory 25, no. 1 (1979): 1-7.
  • [29] M. X. Goemans and D. P. Williamson, “Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming,” Journal of the ACM (JACM) 42, no. 6 (1995): 1115-1145.
  • [30] E. J. Candes, T. Strohmer, and V. Voroninski, “Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming,” Communications on Pure and Applied Mathematics 66, no. 8 (2013): 1241-1274.
  • [31] E. J. Candes, Y. C. Eldar, T. Strohmer and V. Voroninski, “Phase retrieval via matrix completion,” SIAM Journal on Imaging Sciences 6, no.1 (2013): 199-225.
  • [32] K. Jaganathan, “Convex programming-based phase retrieval: Theory and applications,” PhD dissertation, California Institute of Technology (2016).
  • [33] E. J. Candes, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns”, Applied and Computational Harmonic Analysis 39, no. 2 (2015): 277-299.
  • [34] X. Li and V. Voroninski, “Sparse signal recovery from quadratic measurements via convex programming,” SIAM Journal on Mathematical Analysis 45, no. 5 (2013): 3019-3033.
  • [35] S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar and B. Hassibi, “Simultaneously structured models with application to sparse and low-rank matrices,” IEEE Transactions on Information Theory 61, no. 5 (2015): 2886-2908.
  • [36] K. Jaganathan, S. Oymak and B. Hassibi, “Recovery of sparse 1-D signals from the magnitudes of their Fourier transform,” IEEE International Symposium on Information Theory Proceedings (2012): 1473-1477.
  • [37] Y. Shechtman, Y. C. Eldar, A. Szameit and M. Segev, “Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing,” Optics express 19, no. 16 (2011): 14807-14822.
  • [38] K. Jaganathan, S. Oymak and B. Hassibi, “Sparse phase retrieval: Convex algorithms and limitations,” IEEE International Symposium on Information Theory Proceedings (2013): 1022-1026.
  • [39] A. Ahmed, B. Recht and J. Romberg, “Blind deconvolution using convex programming,” IEEE Transactions on Information Theory 60, no. 3 (2014): 1711-1732.
  • [40] J. A. Tropp, “Convex recovery of a structured signal from independent random linear measurements,” In Sampling Theory, a Renaissance, Springer International Publishing (2015): 67-101.
  • [41] D. Kogan, Y. C. Eldar and D. Oron, “On The 2D Phase Retrieval Problem,” arXiv:1605.08487 (2016).
  • [42] E. J. Candes and M B. Wakin, “An introduction to compressive sampling,” IEEE signal processing magazine 25, no. 2 (2008): 21-30.
  • [43] B. Recht, M. Fazel and P. A. Parrilo, “Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization,” SIAM review 52, no. 3 (2010): 471-501.
  • [44] P. Dreesen, “Back to the roots: polynomial system solving using linear algebra,” Ph. D. Dissertation, KU Leuven (2013): Chapter 4.

Appendix A Method of Dual Certificates

In this section, we provide an overview of the method of dual certificates. This technique is applicable to a wide class of optimization problems. Here, we focus our attention on using it as a theoretical tool to analyze feasibility-type SDPs.

Consider the following primal optimization problem:

(11)