# Phase Retrieval from 1D Fourier Measurements: Convexity, Uniqueness, and Algorithms

A conference version of part of this work appears in ICASSP 2016 [1].

## Abstract

This paper considers phase retrieval from the magnitude of 1D over-sampled Fourier measurements, a classical problem that has challenged researchers in various fields of science and engineering. We show that an optimal vector in a least-squares sense can be found by solving a convex problem, thus establishing a hidden convexity in Fourier phase retrieval. We then show that the standard semidefinite relaxation approach yields the optimal cost function value (albeit not necessarily an optimal solution). A method is then derived to retrieve an optimal minimum phase solution in polynomial time. Using these results, a new measuring technique is proposed which guarantees uniqueness of the solution, along with an efficient algorithm that can solve large-scale Fourier phase retrieval problems with uniqueness and optimality guarantees.

Keywords:

phase retrieval, over-sampled Fourier measurements, minimum phase, auto-correlation retrieval, semi-definite programming, alternating direction method of multipliers, holography.

## 1 Introduction

Phase retrieval seeks to recover a signal from the magnitudes of linear measurements [2]. This problem arises in various fields, including crystallography [3], microscopy, and optical imaging [4], due to the limitations of the detectors used in these applications. Different types of measurement systems have been proposed, e.g., over-sampled Fourier measurements, short-time Fourier measurements, and random Gaussian measurements, to name just a few (see [5] for contemporary reviews), but over-sampled Fourier measurements are the most common in practice. Two fundamental questions in phase retrieval are: i) is the signal uniquely determined by the noiseless magnitude measurements (up to inherent ambiguities such as global phase); and ii) is there an efficient algorithm that can provably compute an optimal estimate of the signal according to a suitable criterion?

This paper considers the 1D phase retrieval problem from over-sampled Fourier measurements. It is well known that there is no uniqueness in 1D Fourier phase retrieval, i.e., there are multiple 1D signals with the same Fourier magnitude. This is true even when we ignore trivial ambiguities, which include a global phase shift, conjugate inversion, and spatial shift. Nonuniqueness also holds when the support of the signal is bounded within a known range [7]. In addition, since the phase retrieval problem is nonconvex, there are no known algorithms that provably minimize the least-squares error in recovering the underlying signal [8].

Existing methods that attempt to resolve the identifiability issue include introducing sparsity assumptions on the input signal [9], taking multiple masked Fourier measurements [13], or using the short-time Fourier transform [14]. Algorithmically, the most popular techniques for phase retrieval are based on alternating projections [16], pioneered by Gerchberg and Saxton [16] and extended by Fienup [17]. More recently, phase retrieval has been treated using semidefinite programming (SDP) and low-rank matrix recovery ideas [18]. Several greedy approaches to phase retrieval have also been introduced such as GESPAR [20]. Semidefinite relaxation for phase retrieval, referred to as PhaseLift [18], is known to recover the true underlying signal when the measurement vectors are Gaussian. However, in the case of Fourier measurements, as we are considering here, there are no known optimality results for this approach.

Despite the apparent difficulty in 1D Fourier phase retrieval, we establish in this paper that under certain conditions this problem can be solved optimally in polynomial time. In particular, we first show that the least-squares formulation of this problem can always be optimally solved using semidefinite relaxation. Namely, methods such as PhaseLift do in fact optimize the nonconvex cost in the case of Fourier measurements. By slightly modifying the basic PhaseLift program we propose a new semidefinite relaxation whose solution is proven to be rank-one and to minimize the error. However, due to the nonuniqueness in the problem, the solution is not in general equal to the true underlying vector. To resolve this ambiguity, we propose a semidefinite relaxation that will always return the minimum phase solution which optimizes the least-squares cost.

Based on this result, we suggest a new approach to measure 1D signals which transforms an arbitrary signal into a minimum phase counterpart by simply adding an impulse, so that identifiability is restored. We then recover the input using the proposed SDP. This measurement strategy resembles classic holography used commonly in optics [22]. Finally, we propose an efficient iterative algorithm to recover the underlying signal without the need to resort to lifting techniques as in the SDP approach. Our method, referred to as auto-correlation retrieval – Kolmogorov factorization (CoRK), is able to solve very large-scale problems efficiently. Compared with existing methods that provide identifiability, the proposed approach is easy to implement (both conceptually and practically), requires a minimal number of measurements, and can always be solved to global optimality using efficient computational methods.

The rest of the paper is organized as follows. We introduce the problem in more detail in Section 2. In Section 3 we reformulate phase retrieval by changing the variable from the signal itself to the correlation of the signal, and discuss two ways to characterize a correlation sequence, leading to convex optimization problems. Extracting signals from a given correlation sequence, known as *spectral factorization*, is briefly reviewed in Section 4, where we also review the notion of *minimum phase*. We then propose a new measurement system based on transforming an arbitrary signal into a minimum phase one, via a very simple impulse adding technique, in Section 5. The highly scalable CoRK algorithm is developed in Section 6 to recover the signal, providing the ability to optimally solve large-scale 1D Fourier phase retrieval problems very efficiently. We summarize the proposed method in Section 7, with some discussion on how to modify the approach if we know that the sought signal is real. Computer simulations are provided in Section 8 and show superiority of our proposed techniques both in terms of accuracy and efficiency. We finally conclude the paper in Section 9.

Throughout the paper, indices for vectors and matrices start at $0$, so that the first entry of a vector $x$ is $x_0$, and the upper-left-most entry of a matrix $X$ is $X_{00}$. The superscript $(\cdot)^*$ denotes the element-wise conjugate (without transpose), and $(\cdot)^H$ denotes the Hermitian transpose of a vector or a matrix.

## 2 The Phase Retrieval Problem

In the phase retrieval problem, we are interested in estimating a signal from the squared magnitude of its Fourier transform. The *discrete-time Fourier transform* (DTFT) of a vector $x \in \mathbb{C}^N$ is a trigonometric polynomial in $\omega$ defined as

$$ X(\omega) = \sum_{n=0}^{N-1} x_n\, e^{-j\omega n}, $$

and is periodic with period $2\pi$. In practice it is easier to obtain samples of the continuous function $X(\omega)$, which leads to the $M$-point *discrete Fourier transform* (DFT) of $x$, where $M \ge N$, if we sample at the points

$$ \omega_m = \frac{2\pi m}{M}, \qquad m = 0, 1, \dots, M-1. $$

This operation is equivalent to the matrix-vector multiplication $\hat{x} = Fx$, where $F$ consists of the first $N$ columns of the $M$-point DFT matrix, i.e.,

$$ F_{mn} = e^{-j 2\pi m n / M}, \qquad m = 0, \dots, M-1,\ \ n = 0, \dots, N-1, $$

with $j = \sqrt{-1}$. The matrix-vector multiplication $Fx$ can be carried out efficiently via the fast Fourier transform (FFT) algorithm with complexity $O(M \log M)$, unlike the general case, which takes $O(MN)$ flops to compute.
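This relation is easy to check numerically. The sketch below (our own, with a naive DFT written out explicitly rather than an actual FFT) verifies that $Fx$ equals the $M$-point DFT of the zero-padded signal:

```python
import cmath

# Check: F x (first N columns of the M-point DFT matrix) equals the
# M-point DFT of x zero-padded from length N to M. Sizes are arbitrary.
N, M = 4, 11
x = [1.0 + 0.5j, -0.2j, 0.7, 0.3 - 0.1j]

# F: M x N slice of the DFT matrix, F[m][n] = exp(-j 2 pi m n / M)
F = [[cmath.exp(-2j * cmath.pi * m * n / M) for n in range(N)] for m in range(M)]
Fx = [sum(F[m][n] * x[n] for n in range(N)) for m in range(M)]

# Direct M-point DFT of the zero-padded signal
xpad = x + [0.0] * (M - N)
Xpad = [sum(xpad[n] * cmath.exp(-2j * cmath.pi * m * n / M) for n in range(M))
        for m in range(M)]

err = max(abs(a - b) for a, b in zip(Fx, Xpad))
```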

With this notation, our measurements are given by

$$ b = \left| F x \right|^2 + w, $$

where $w$ is a noise vector and $|\cdot|^2$ is taken element-wise. To recover $x$ from $b$ we consider a least-squares cost (which coincides with the maximum likelihood criterion assuming Gaussian noise):

$$ \min_{x \in \mathbb{C}^N}\ \left\| b - |Fx|^2 \right\|_2^2. $$

The most popular methods for solving (Equation 2) are the Gerchberg–Saxton (GS) and Fienup algorithms. Both techniques start with the noiseless scenario $b = |Fx|^2$, and reformulate it as the following feasibility problem, by increasing the dimension of $x$ from $N$ to $M$ and then imposing an additional *compact support* constraint (the appended entries are zero):

$$ \text{find } u \in \mathbb{C}^M \quad \text{s.t.} \quad |\,\mathrm{DFT}_M(u)\,|^2 = b, \qquad u_n = 0,\ \ n \ge N. $$

It is easy to derive projections onto each of the two sets of equality constraints individually, but not onto both at once. Therefore, one can apply alternating projections, which leads to the GS algorithm, or Dykstra's alternating projections, which results in Fienup's algorithm when the step size is set to one [23].

Due to the non-convexity of the quadratic equations, neither algorithm is guaranteed to find a solution; nonetheless, Fienup's approach has been observed to work successfully, converging to a point that satisfies both sets of constraints. When the measurements are corrupted by noise, Fienup's algorithm does not in general converge. An alternative interpretation of the GS algorithm shows that it monotonically decreases the cost function of the following optimization problem (which is different from (Equation 2)):

$$ \min_{x \in \mathbb{C}^N}\ \left\| \sqrt{b} - |Fx| \right\|_2^2. $$
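A toy sketch of the GS alternating-projection iteration described above (our own minimal implementation, with a naive $O(M^2)$ DFT in place of the FFT; all names are ours). It alternates between imposing the measured magnitudes in the Fourier domain and the compact support in the signal domain, and checks that the magnitude-fitting error does not increase:

```python
import cmath, math, random

def dft(x, M):
    # naive M-point DFT of a length-<=M sequence
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / M) for n in range(len(x)))
            for m in range(M)]

def idft(X):
    L = len(X)
    return [sum(X[l] * cmath.exp(2j * cmath.pi * l * n / L) for l in range(L)) / L
            for n in range(L)]

random.seed(0)
N, M = 4, 16
x_true = [complex(random.random(), random.random()) for _ in range(N)]
b = [abs(v) ** 2 for v in dft(x_true, M)]      # noiseless intensity measurements
mag = [math.sqrt(v) for v in b]

def fit(x):
    # GS error: distance to the measured Fourier magnitudes
    return sum((abs(v) - m) ** 2 for v, m in zip(dft(x, M), mag))

# GS iteration from a random start
x = [complex(random.random(), random.random()) for _ in range(N)]
err0 = fit(x)
for _ in range(200):
    X = dft(x, M)
    # projection 1: impose the measured magnitudes, keep the phases
    X = [m * (v / abs(v)) if abs(v) > 0 else m for v, m in zip(X, mag)]
    # projection 2: impose compact support (keep the first N samples)
    x = idft(X)[:N]
err1 = fit(x)
```

The classical error-reduction property guarantees `err1 <= err0`, though the iteration need not reach a global solution.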

More recently, the general phase retrieval problem has been recognized as a non-convex *quadratically constrained quadratic program* (QCQP), for which the prevailing approach is to use semidefinite relaxation [24] to obtain a lower bound on the optimal value of (Equation 2). In the field of phase retrieval, this procedure is known as *PhaseLift* [18]. Specifically, under a Gaussian noise setting, PhaseLift solves

$$ \min_{X \in \mathbb{H}_+^N}\ \sum_{m=0}^{M-1} \left( b_m - f_m^H X f_m \right)^2 + \lambda\, \mathrm{tr}(X), $$

where $\mathbb{H}_+^N$ denotes the set of $N \times N$ Hermitian positive semidefinite matrices, $f_m^H$ is the $m$th row of $F$, and the term $\lambda\,\mathrm{tr}(X)$ is used to encourage the solution to be low-rank. Problem (Equation 4) can be cast as an SDP and solved in polynomial time. If the solution of (Equation 4), denoted as $X_\star$, turns out to be rank one, then we also obtain the optimal solution of the original problem (Equation 2) by extracting the rank one component of $X_\star$. However, for general measurement vectors PhaseLift is not guaranteed to yield a rank one solution, especially when the measurements are noisy. In that case PhaseLift resorts to sub-optimal solutions, for example by taking the first principal component of $X_\star$, possibly refined by a traditional method like the GS algorithm. An SDP relaxation for the alternative formulation (Equation 3) is proposed in [25], and referred to as *PhaseCut*.

In the next section we show that despite the nonconvexity of (Equation 2), we can find an optimal solution in polynomial time using semidefinite relaxation. Since there is no uniqueness in 1D phase retrieval, there are many possible solutions even in the noise free setting. Among all solutions, we extract the minimum phase vector that minimizes the least-squares error. We will also suggest an alternative to the SDP formulation, that avoids squaring the number of variables like PhaseLift. Next we will show how any vector can be modified to be minimum phase by adding a sufficiently large impulse to it. This then paves the way to recovery of arbitrary 1D signals efficiently and provably optimally from their Fourier magnitude.

## 3 Convex reformulation

In this section we show how (Equation 2) can be optimally solved in polynomial time, despite its non-convex formulation. Similar results have been shown in the application of multicast beamforming under far-field line-of-sight propagation conditions [26], and more generally for non-convex QCQPs with Toeplitz quadratics [27].

### 3.1 Phase retrieval using auto-correlation

We begin by reformulating the Fourier phase retrieval problem in terms of the auto-correlation function. Consider the $m$th entry of $b$:

$$ b_m = \left| f_m^H x \right|^2 = \sum_{k=-(N-1)}^{N-1} r_k\, e^{-j 2\pi m k / M}, $$

where

$$ r_k = \sum_{n=k}^{N-1} x_n x_{n-k}^* $$

is the $k$-lag auto-correlation of $x$. Let us define

$$ r = \left[\, r_{-(N-1)}\ \cdots\ r_{-1}\ \ r_0\ \ r_1\ \cdots\ r_{N-1} \,\right]^T. $$

We first observe that $b_m$, originally quadratic with respect to $x$, is now linear in $r$. Moreover, by definition $r_{-k} = r_k^*$. Removing the redundancy, we let

$$ \tilde r = \left[\, r_0\ \ r_1\ \cdots\ r_{N-1} \,\right]^T, $$

and write

$$ |Fx|^2 = \mathrm{Re}\{ G \tilde r \}, $$

where $G$ is the $M \times N$ matrix with entries $G_{m0} = 1$ and $G_{mk} = 2\, e^{-j 2\pi m k / M}$ for $k \ge 1$, and $\mathrm{Re}\{\cdot\}$ takes the real part of its argument. We can then rewrite (Equation 2) as

$$ \min_{\tilde r}\ \left\| b - \mathrm{Re}\{ G \tilde r \} \right\|_2^2 \quad \text{s.t.} \quad \tilde r \ \text{is a finite auto-correlation sequence.} $$

The abstract constraint imposed on $\tilde r$ in (Equation 6) is to ensure that there exists a vector $x$ such that (Equation 5) holds.

The representation in (Equation 5) of the auto-correlation sequence is nonconvex (in $x$ and $\tilde r$ jointly). In the next subsection we consider convex reformulations of this constraint based on [29]. In Section 4 we discuss how to obtain $x$ from $\tilde r$ in a computationally efficient way.
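To illustrate the change of variables, the following sketch (ours) confirms numerically that the intensity measurements depend on $x$ only through its auto-correlation lags, and are linear in them:

```python
import cmath

# b_m computed directly as |DFT(x)|^2 versus b_m computed linearly from
# the auto-correlation lags r_k. Sizes and the signal are arbitrary.
N, M = 5, 12
x = [0.9, -0.4 + 0.2j, 0.1j, 0.3, -0.25 - 0.15j]

# k-lag auto-correlation r_k = sum_n x_{n+k} x_n^*, k = 0..N-1
r = [sum(x[n + k] * x[n].conjugate() for n in range(N - k)) for k in range(N)]

b_direct = [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / M) for n in range(N))) ** 2
            for m in range(M)]

# b_m = r_0 + 2 Re{ sum_{k>=1} r_k e^{-j 2 pi m k / M} }  -- linear in r
b_from_r = [(r[0] + 2 * sum(r[k] * cmath.exp(-2j * cmath.pi * m * k / M)
                            for k in range(1, N))).real for m in range(M)]

err = max(abs(a - c) for a, c in zip(b_direct, b_from_r))
```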

### 3.2 Approximate characterization of a finite auto-correlation sequence

Denote the DTFT of $r$ by

$$ R(\omega) = \sum_{k=-(N-1)}^{N-1} r_k\, e^{-j\omega k}. $$

Since $r$ is conjugate symmetric, $R(\omega)$ is real for all $\omega$. The sequence $r$ (or equivalently $\tilde r$) is a finite auto-correlation sequence if and only if [30]

$$ R(\omega) \ge 0, \qquad \forall\, \omega \in [0, 2\pi), $$

which is a collection of infinitely many linear inequalities in $\tilde r$. A natural way to approximately satisfy (Equation 7) is to sample this infinite set of inequalities, and replace (Equation 7) by a finite (but large) set of linear inequalities

$$ R\!\left( \frac{2\pi l}{L} \right) \ge 0, \qquad l = 0, 1, \dots, L-1. $$

In matrix form, these constraints can be written as (similar to the expression for $b$ that we derived before)

$$ \mathrm{Re}\{ \tilde G \tilde r \} \ge 0, $$

where $\tilde G$ is the $L \times N$ matrix defined like $G$, but built from the first $N$ columns of the $L$-point DFT matrix.

Using this result, we can approximately formulate problem (Equation 6) as

$$ \min_{\tilde r}\ \left\| b - \mathrm{Re}\{ G \tilde r \} \right\|_2^2 \quad \text{s.t.} \quad \mathrm{Re}\{ \tilde G \tilde r \} \ge 0, $$

with a sufficiently large $L$. In practice, the approximate formulation works very well for $L$ a modest constant times $N$, i.e., a constant-factor increase in the signal dimension. However, it is not exact—one can satisfy the constraints in (Equation 8), yet still have $R(\omega) < 0$ for some $\omega$ off the sampling grid.

### 3.3 Exact parameterization of a finite auto-correlation sequence

It turns out that it is possible to characterize the infinite set of inequalities (Equation 7) through a finite representation, via an auxiliary positive semidefinite matrix. One way is to use a positive semidefinite matrix $Z \in \mathbb{H}_+^N$ to parameterize $\tilde r$ [28]:

$$ r_k = \mathrm{tr}\left( \Theta_k Z \right), \qquad k = 0, 1, \dots, N-1, $$

where $\Theta_k$ is the $k$th elementary Toeplitz matrix of appropriate size, with ones on the $k$th sub-diagonal and zeros elsewhere ($\Theta_0 = I$). Another way, which can be derived from the Kalman-Yakubovich-Popov (KYP) lemma [31], uses an auxiliary positive semidefinite matrix to characterize $\tilde r$ as a finite correlation sequence through a linear matrix inequality. It can be shown that the two formulations are equivalent [32].

Adopting the first characterization, we may rewrite (Equation 6) as

$$ \min_{\tilde r,\ Z \in \mathbb{H}_+^N}\ \left\| b - \mathrm{Re}\{ G \tilde r \} \right\|_2^2 \quad \text{s.t.} \quad r_k = \mathrm{tr}\left( \Theta_k Z \right),\ \ k = 0, \dots, N-1. $$

It is easy to see that we can actually eliminate the variable $\tilde r$ from (Equation 9), ending up with the PhaseLift formulation (Equation 4) without the trace regularization. This result is proven in Appendix A.

The implication of the above analysis is that, even though PhaseLift does not in general produce a rank one solution, its optimal value coincides with the optimal value of the nonconvex least-squares problem (Equation 2); the relaxation is tight in terms of the cost.

The trace parameterization based formulation (Equation 9) exactly characterizes (Equation 6), at the price that the problem dimension grows from $O(N)$ to $O(N^2)$, a significant increase compared to that of (Equation 8). In Section 6 we will derive an efficient alternative based on ADMM. In the next section we show how to extract a vector $x$ from the auto-correlation $\tilde r$ that solves (Equation 9).

## 4 Spectral factorization

After we solve (Equation 6), either approximately via (Equation 8) or exactly by (Equation 9), we obtain the auto-correlation of the optimal solution of the original problem (Equation 2). The remaining question is how to find a vector that generates such an auto-correlation, as defined in (Equation 5). This is a classical problem known as *spectral factorization* (SF) in signal processing and control. There exist extensive surveys on methods and algorithms that solve this problem, e.g., [33], [34], and [28]. In this section, we first derive an algebraic solution for SF, which also explains the non-uniqueness of 1D Fourier phase retrieval and the notion of *minimum phase*. We then survey two practical methods for SF, on which we will rely when developing CoRK, a highly scalable phase retrieval algorithm.

### 4.1 Algebraic solution

The DFT of an arbitrary signal can be obtained by sampling its $z$-transform on the unit circle $z = e^{j\omega}$. Let

$$ X(z) = \sum_{n=0}^{N-1} x_n z^{-n} $$

be the $z$-transform of $x$, which is a polynomial of degree $N-1$ in $z^{-1}$. It can be written in factored form as

$$ X(z) = x_0 \prod_{n=1}^{N-1} \left( 1 - \xi_n z^{-1} \right), $$

where $\xi_1, \dots, \xi_{N-1}$ are the zeros (roots) of the polynomial $X(z)$. The quadratic measurements $b$ can similarly be interpreted as samples of the $z$-transform of the auto-correlation $r$, defined as

$$ R(z) = \sum_{k=-(N-1)}^{N-1} r_k z^{-k} = X(z)\, X^*(1/z^*) = |x_0|^2 \prod_{n=1}^{N-1} \left( 1 - \xi_n z^{-1} \right) \left( 1 - \xi_n^* z \right). $$

As we can see, the zeros of $R(z)$ always come in conjugate reciprocal pairs $(\xi_n,\ 1/\xi_n^*)$. Therefore, given $R(z)$, we cannot determine whether a zero or its conjugate reciprocal is a root of $X(z)$, which is the reason that $x$ cannot be reconstructed from $b$. In other words, for a given signal $x$, we can find the zeros of its $z$-transform as in (Equation 10), take the conjugate reciprocal of some of them, and then take the inverse $z$-transform to obtain another signal $\tilde x$ [35]. If we re-scale $\tilde x$ to have the same $\ell_2$ norm as $x$, then it is easy to verify that

$$ |F \tilde x|^2 = |F x|^2, $$

no matter how large $M$ is, even though clearly $x$ and $\tilde x$ are not equal.

Traditionally, this problem is often seen in design problems where uniqueness is not important, e.g., FIR filter design [34] and far-field multicast beamforming [26]. There, it is natural (from the maximal energy dissipation point of view) to pick the zeros to lie within the unit circle; the resulting signal is called a *minimum phase* signal.

The above analysis provides a direct method for SF. For a given auto-correlation sequence $\tilde r$, we first calculate the roots of the polynomial

$$ z^{N-1} R(z) = r_{N-1}^*\, z^{2N-2} + \cdots + r_0\, z^{N-1} + \cdots + r_{N-1}. $$

Because $\tilde r$ is a valid correlation sequence, the roots come in conjugate reciprocal pairs. Therefore, we can pick the roots that are inside the unit circle, expand the factored form, and then scale the resulting vector to have $\ell_2$ norm equal to $\sqrt{r_0}$.

Numerically, the roots of a monic polynomial $p(z) = z^n + c_{n-1} z^{n-1} + \cdots + c_1 z + c_0$ can be found by calculating the eigenvalues of its companion matrix:

$$ C = \begin{bmatrix} 0 & 0 & \cdots & 0 & -c_0 \\ 1 & 0 & \cdots & 0 & -c_1 \\ 0 & 1 & \cdots & 0 & -c_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -c_{n-1} \end{bmatrix}. $$

However, when expanding the factored form to get the coefficients of the polynomial, the procedure is very sensitive to roundoff error, which quickly becomes significant as the degree (and hence the signal dimension) grows.
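The conjugate-reciprocal pairing of the zeros of $R(z)$ can be checked numerically without any root finder. The sketch below (ours) builds a signal from chosen zeros inside the unit circle, forms its auto-correlation, and verifies that $R$ vanishes both at each zero and at its conjugate reciprocal:

```python
import cmath

# Chosen zeros of X(z), all strictly inside the unit circle
zeros = [0.5 + 0.3j, -0.4 + 0.1j, 0.2 - 0.6j]

# Expand X(z) = prod (1 - xi z^{-1}) into coefficients x_0..x_{N-1}
x = [1.0 + 0j]
for xi in zeros:
    x = [a - xi * b for a, b in zip(x + [0], [0] + x)]

N = len(x)
r = [sum(x[n + k] * x[n].conjugate() for n in range(N - k)) for k in range(N)]

def R(z):
    # R(z) = sum_{k=-(N-1)}^{N-1} r_k z^{-k}, with r_{-k} = r_k^*
    return (r[0]
            + sum(r[k] * z ** (-k) for k in range(1, N))
            + sum(r[k].conjugate() * z ** k for k in range(1, N)))

# R vanishes at each zero xi AND at its conjugate reciprocal 1/xi^*
vals = [abs(R(xi)) for xi in zeros] + [abs(R(1 / xi.conjugate())) for xi in zeros]
worst = max(vals)
```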

### 4.2 SDP-based method for SF

To obtain a more numerically stable method, we can use an SDP approach to SF.

For a valid correlation $\tilde r$, it is shown in [28] that the solution of the following SDP

$$ \min_{Z \in \mathbb{H}_+^N}\ \mathrm{tr}(Z) \quad \text{s.t.} \quad r_k = \mathrm{tr}\left( \Theta_k Z \right),\ \ k = 0, \dots, N-1, $$

is always rank one. In addition, its rank one component generates the given correlation and is minimum phase. Algorithms for SDP are numerically stable, although the complexity could be high if we use a general-purpose SDP solver.

The constraints in (Equation 11) are the same as those in the convex reformulation (Equation 9), which according to Appendix A is equivalent to PhaseLift without the trace regularization. Therefore, we may consider instead solving the problem

$$ \min_{X \in \mathbb{H}_+^N}\ \sum_{m=0}^{M-1} \left( b_m - f_m^H X f_m \right)^2 + \lambda\, \mathrm{tr}(X). $$

In Appendix B we show that for a suitable value of $\lambda > 0$, the solution of (Equation 12) is guaranteed to be rank one, and it attains the optimal least-squares error.

The formulation (Equation 12) allows us to use customized SDP solvers to obtain a solution efficiently. For example, PhaseLift was originally solved using TFOCS [36], which applies various modern first-order methods to minimize convex functions composed of a smooth part and a non-smooth part that has a simple proximity operator. For problem (Equation 12), the cost function is the smooth part, and the non-smooth part is the indicator function for the cone of semidefinite matrices.

It is now widely accepted that the trace of a semidefinite matrix, or the nuclear norm for a general matrix, encourages the solution to be low rank [37]. Interestingly, in our setting, the correct regularization is in fact the added term $\lambda\,\mathrm{tr}(X)$, which guarantees the solution to be rank one for an appropriate choice of $\lambda$.

### 4.3 Kolmogorov’s method for SF

Kolmogorov proposed the following method for SF that is both efficient and numerically stable [31]. In what follows, we assume that $R(z)$ has no zeros on the unit circle.

Consider taking the logarithm of the $z$-transform of $x$. Every zero (and pole, which we do not have since $x$ is finite length) of $X(z)$ then becomes a pole of $\log X(z)$. Assuming the roots of $X(z)$ lie strictly inside the unit circle, this implies that there exists a region of convergence (ROC) for $\log X(z)$ that contains the unit circle and infinity. This in turn means that the series expansion of $\log X(z)$ is unilateral,

$$ \log X(z) = \sum_{n=0}^{\infty} c_n z^{-n}. $$

Evaluating at $z = e^{j\omega}$ yields

$$ \log X(e^{j\omega}) = \log\left| X(e^{j\omega}) \right| + j \arg X(e^{j\omega}), $$

implying that $\log|X(e^{j\omega})|$ and $\arg X(e^{j\omega})$ are Hilbert transform pairs. Moreover, since we are given the intensities, we have that

$$ \log\left| X(e^{j\omega}) \right| = \tfrac{1}{2} \log R(e^{j\omega}). $$

We can therefore calculate $\arg X(e^{j\omega})$ from the Hilbert transform of $\tfrac{1}{2}\log R(e^{j\omega})$, and from the magnitude and phase reconstruct a signal that generates the given correlation and is minimum phase.

In practice, all of the aforementioned transforms may be well approximated by a DFT with sufficiently large length $L$. The detailed procedure of Kolmogorov’s method then becomes:

1. compute the real part $\alpha_l = \tfrac{1}{2} \log R\!\left( e^{j 2\pi l / L} \right)$, $l = 0, \dots, L-1$;

2. compute the imaginary part $\beta$ by taking the Hilbert transform of $\alpha$, approximated by a DFT;

3. compute the inverse DFT of $\exp(\alpha + j\beta)$, and take the first $N$ elements as the output.

Kolmogorov’s method generates a valid output as long as in the first step we have $R(e^{j2\pi l/L}) > 0$ for all $l$, so that $\alpha$ is real. This fits well with the approximate formulation (Equation 8), since if we choose the same $L$ for both (Equation 8) and Kolmogorov’s method, then the method is guaranteed to run without a numerical error. Obviously, to obtain a solution with high accuracy, $L$ needs to be sufficiently large; empirically, a moderate multiple of $N$ is good enough in practice. Computationally, this approach requires computing four $L$-point (inverse-) FFTs, with complexity $O(L \log L)$, a very light computational burden even for large $N$.
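The steps above can be sketched in a self-contained way. The implementation below is ours (a cepstral form of Kolmogorov's method, with naive $O(L^2)$ DFTs in place of FFTs and the function name our own); it recovers a real minimum phase signal from its auto-correlation:

```python
import cmath, math

def dft(x, L):
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * l * n / L) for n in range(len(x)))
            for l in range(L)]

def idft(X):
    L = len(X)
    return [sum(X[l] * cmath.exp(2j * cmath.pi * l * n / L) for l in range(L)) / L
            for n in range(L)]

def kolmogorov_sf(r, N, L=256):
    """Minimum phase signal generating the (real) correlation r (length N)."""
    # alpha_l = (1/2) log R(e^{j 2 pi l / L}); requires R > 0 on the grid
    Rw = [(r[0] + 2 * sum(r[k] * cmath.exp(-2j * cmath.pi * l * k / L)
                          for k in range(1, N))).real for l in range(L)]
    alpha = [0.5 * math.log(v) for v in Rw]
    # Hilbert transform via the cepstrum: fold the IDFT of alpha onto n >= 0
    c = idft(alpha)
    fold = [c[0]] + [2 * c[n] for n in range(1, L // 2)] + [c[L // 2]] + [0] * (L // 2 - 1)
    Xhat = [cmath.exp(v) for v in dft(fold, L)]   # exp(log X) on the grid
    return [v.real for v in idft(Xhat)[:N]]

# Minimum phase test signal: zeros of X(z) at 0.5 and -0.3
x = [1.0, -0.2, -0.15]          # (1 - 0.5 z^{-1})(1 + 0.3 z^{-1})
N = len(x)
r = [sum(x[n + k] * x[n] for n in range(N - k)) for k in range(N)]
xhat = kolmogorov_sf(r, N)
err = max(abs(a - b) for a, b in zip(xhat, x))
```

Since the test signal is minimum phase with a positive leading coefficient, the method should reproduce it (up to numerical precision).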

## 5 A new measurement system

So far we have seen that an arbitrary signal cannot be uniquely determined from the magnitude of its over-sampled 1D Fourier measurements, because there always exists a minimum phase signal that yields the same measurements. Nonetheless, a least-squares estimate can be efficiently calculated by solving either (Equation 8) or (Equation 9) followed by spectral factorization, leading to an optimal solution that is minimum phase. If the true signal is indeed minimum phase, then we can optimally estimate it in polynomial-time. However, the minimum phase property is not a natural assumption to impose on a signal in general.

We propose to resolve this ambiguity by deliberately making the signal minimum phase before taking the quadratic measurements, so that the augmented signal is uniquely identified in polynomial time. The true signal is then recovered easily by reversing this operation.

For an arbitrary complex signal $x \in \mathbb{C}^N$, we suggest adding an impulse $\delta$ in front of $x$ before taking measurements, where $\delta$ satisfies $|\delta| \ge \|x\|_1$. Denote the augmented signal as $x_{\min} = [\,\delta\ \ x^T\,]^T$. The following proposition shows that $x_{\min}$ is minimum phase.

To establish the result we need to show that the zeros of the $z$-transform of $x_{\min}$,

$$ X_{\min}(z) = \delta + \sum_{n=0}^{N-1} x_n z^{-(n+1)}, $$

or equivalently the roots of the polynomial

$$ p(z) = \delta z^{N} + x_0 z^{N-1} + \cdots + x_{N-2}\, z + x_{N-1}, $$

all lie inside the unit circle. To this end, we rely on the following classical root bound: if the leading coefficient of a polynomial dominates the sum of the magnitudes of the remaining coefficients, then all of its roots lie inside the unit circle.

Substituting the coefficients of $p(z)$ into this inequality, with $|\delta| \ge \|x\|_1 = \sum_{n=0}^{N-1} |x_n|$, establishes the result.

Conceptually, the approach we propose is very simple: all we need is a way to over-estimate the norm of the target signal, and a mechanism to insert an impulse in front of the signal before taking quadratic measurements. For example, if we assume each element in $x$ comes from a complex Gaussian distribution with variance $\sigma^2$, then the probability that the magnitude of one element exceeds a few standard deviations is negligible; therefore, we can simply construct $\delta$ as a small constant multiple of $\sigma N$, resulting in $x_{\min}$ being minimum phase with very high probability.
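The minimum phase claim can be verified numerically without an eigenvalue-based root finder, via the argument principle: the winding number of $p(e^{j\theta})$ around the origin counts the zeros of $p$ inside the unit circle. A sketch (ours; the signal and constants are arbitrary):

```python
import cmath, math

# Augmented polynomial p(z) = delta z^N + x_0 z^{N-1} + ... + x_{N-1}
x = [0.8 - 0.3j, -0.5j, 0.4 + 0.4j, -0.2]
delta = sum(abs(v) for v in x) * 1.01      # slightly above ||x||_1
coeffs = [delta] + x                       # highest power first
N = len(x)                                 # degree of p

def winding(coeffs, K=4096):
    """Winding number of p(e^{j theta}) around 0 = # zeros inside |z| < 1."""
    total, prev = 0.0, None
    for k in range(K + 1):
        z = cmath.exp(2j * cmath.pi * k / K)
        p = 0j
        for c in coeffs:                   # Horner evaluation
            p = p * z + c
        ang = cmath.phase(p)
        if prev is not None:
            d = ang - prev
            while d > math.pi:             # unwrap the phase jump
                d -= 2 * math.pi
            while d <= -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    return round(total / (2 * math.pi))

n_inside = winding(coeffs)                 # should equal N: all zeros inside
```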

Our approach can be used with a number of measurements $M$ as small as $2N+1$. Indeed, consider the equivalent reformulation (Equation 6), in which the measurements are linear with respect to the auto-correlation. The auto-correlation of the augmented signal has $2N+1$ real parameters (one real lag $r_0$ and $N$ complex lags), and from elementary linear algebra these can be uniquely determined by as few as $2N+1$ real linearly independent measurements, even without the specification that $\tilde r$ is a finite correlation sequence. From a unique $\tilde r$, a unique minimum phase signal can be determined using SF.

The impulse may also be appended at the end of the signal, i.e., $x_{\max} = [\,x^T\ \ \delta\,]^T$, resulting in a *maximum phase* signal, meaning that all the zeros of its $z$-transform are outside the unit circle.

The $z$-transform of $x_{\max}$ is

$$ X_{\max}(z) = \sum_{n=0}^{N-1} x_n z^{-n} + \delta z^{-N}. $$

Let $u = z^{-1}$. Then

$$ X_{\max}(z) = \delta u^{N} + x_{N-1} u^{N-1} + \cdots + x_1 u + x_0, $$

which is a polynomial in $u$ and, by the same root bound used above, has all its roots (in $u$) inside the unit circle. Taking the reciprocal $z = 1/u$, this means that all the zeros of the $z$-transform of $x_{\max}$ lie outside the unit circle.

It is easy to show that for a maximum phase signal

$$ x_{\max} = \left[\, x_0\ \cdots\ x_{N-1}\ \ \delta \,\right]^T, $$

its equivalent minimum phase signal is its conjugate reversal

$$ x_{\min} = \left[\, \delta^*\ \ x_{N-1}^*\ \cdots\ x_0^* \,\right]^T. $$

This means that if we measure the intensity of the Fourier transform of $x_{\max}$ instead, we can still uniquely recover $x$: we solve (Equation 6) followed by SF, take the conjugate reversal of the solution, and then delete the impulse entry to obtain an estimate.

Furthermore, from the analysis above, we can extend our measuring technique so that the impulse is added away from the signal:

$$ \left[\, \delta\ \ 0\ \cdots\ 0\ \ x^T \,\right]^T $$

or

$$ \left[\, x^T\ \ 0\ \cdots\ 0\ \ \delta \,\right]^T, $$

with an arbitrary number of zeros between $\delta$ and $x$. The resulting augmented signal is still minimum/maximum phase, and thus can be uniquely identified from the intensity of its Fourier transform. However, in this case more measurements are required for identifiability.

It was briefly mentioned in [39] without proof that if the signal has a strong first component, then it is minimum phase. Here we propose *adding* an impulse as a means of restoring identifiability for *arbitrary* signals. This concept is very similar to *holography*, which was invented by Gabor in 1948 [22], and was awarded the Nobel Prize in Physics in 1971. One form of holography, spectral interferometry, inserts an impulse exactly in the form of (Equation 13) or (Equation 14), and then measures the squared magnitude of the Fourier transform of the combined signal [40]. Despite the similarities in obtaining the measurements, the theory behind our approach and holography is very different, in several ways:

Holography constrains the number of zeros between $\delta$ and $x$ to be at least $N$, the length of the signal, which is not required in our method; in fact, in terms of the number of measurements needed, the shorter the better.

We require $|\delta| \ge \|x\|_1$, whereas holography does not place any restriction on the energy of the impulse.

We provide an optimization based framework to recover the underlying signal directly in a robust fashion by exploiting the minimum phase property which is not part of the framework in holography.

## 6 Scalable algorithms

In this section, we briefly describe how to directly use existing PhaseLift solvers to obtain guaranteed rank one solutions efficiently. Due to the extra memory required using SDP-relaxation methods for large-scale problems, we then design a new algorithm called CoRK that solves the approximate formulation (Equation 8) with high efficiency, in terms of both time and memory.

### 6.1 SDP based algorithms

In the original PhaseLift paper [18], the authors used TFOCS [36] to solve the SDP (Equation 4), which is a relatively efficient way to solve large scale SDPs. As we discussed in Section 4.2, we can ensure a rank one minimum phase solution by solving (Equation 12) with an appropriate choice of $\lambda$. We find such a $\lambda$ via bisection: first we solve (Equation 12) with $\lambda = 0$ to get the optimal value of the fitting term; next we solve it with $\lambda$ large enough that the solution is guaranteed to be rank one, but with a possibly larger fitting error; finally we bisect $\lambda$ until both criteria are met. Since TFOCS allows us to specify initializations, after we solve (Equation 12) for the first time, subsequent evaluations can be performed much more rapidly via warm start. Moreover, there is a wide range of $\lambda$ values that result in both a rank one solution and an optimal fitting error, so the overall increase in time required is small compared to that of solving a single PhaseLift problem.

The disadvantage of SDP based algorithms is that one cannot avoid lifting the problem dimension from $N$ to $N^2$. If $N$ is moderately large, on the order of a few thousand, then it is very difficult to apply SDP based methods in practice.

### 6.2 CoRK: An efficient ADMM method

We now propose a new algorithm for solving the approximate formulation (Equation 8), which avoids the dimension lifting, and easily handles problem sizes up to millions. Since (Equation 8) is a convex quadratic program, there are plenty of algorithms that solve it reliably. For example, a general-purpose interior-point method solves it without taking any problem structure into account, at a worst-case complexity that is cubic or worse in the problem dimension. Nevertheless, noticing that all of the linear operations in (Equation 8) involve DFT matrices, we aim to further reduce the complexity by exploiting the FFT operator.

We propose solving (Equation 8) using the alternating direction method of multipliers (ADMM) [42]. To apply ADMM, we first rewrite (Equation 8) by introducing an auxiliary variable $z$:

$$ \min_{\tilde r,\, z}\ \left\| b - \mathrm{Re}\{ G \tilde r \} \right\|_2^2 + \mathbb{I}_+(z) \quad \text{s.t.} \quad z = \mathrm{Re}\{ \tilde G \tilde r \}, $$

where

$$ \mathbb{I}_+(z) = \begin{cases} 0, & z \ge 0, \\ +\infty, & \text{otherwise}, \end{cases} $$

is the indicator function of the non-negative orthant in $\mathbb{R}^L$. Treating $\tilde r$ as the first block of variables, $z$ as the second, and $z = \mathrm{Re}\{\tilde G \tilde r\}$ as the coupling linear equality constraint with scaled Lagrange multiplier $u$, we arrive at the following iterative updates:

$$ \begin{aligned} \tilde r &\leftarrow \arg\min_{\tilde r}\ \left\| b - \mathrm{Re}\{ G \tilde r \} \right\|_2^2 + \frac{\rho}{2} \left\| \mathrm{Re}\{ \tilde G \tilde r \} - z + u \right\|_2^2, \\ z &\leftarrow \max\!\left( \mathrm{Re}\{ \tilde G \tilde r \} + u,\ 0 \right), \\ u &\leftarrow u + \mathrm{Re}\{ \tilde G \tilde r \} - z, \end{aligned} $$

where $\rho > 0$ is a constant described below. The derivation of the algorithm is given in Appendix C.

All the operations in (Equation 16) are taken element-wise or involve FFT computations of length $L$. This leads to a per-iteration complexity of $O(L \log L)$, and effectively $O(N \log N)$ if $L$ is chosen as a constant multiple of $N$. In contrast, a naive implementation of ADMM for quadratic programming requires solving a least-squares problem in every iteration [43], leading to a significantly higher per-iteration complexity.

In terms of convergence rate, it is shown in [43] that our ADMM approach converges linearly for all values of $\rho$. The same reference provides an optimal choice of $\rho$ that results in the fastest convergence rate, for the case when there are fewer inequality constraints than variables in the first block. Unfortunately, this requirement is not fulfilled for problem (Equation 8). Inspired by the choice of $\rho$ in [43], we found empirically that with an appropriate fixed choice of $\rho$ the proposed iterates (Equation 16) converge very fast, at a rate effectively independent of the problem dimension. For practical purposes, the convergence rate we have achieved is good enough (typically fewer than 100 iterations), but there exist preconditioning methods to further accelerate it [44].
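For illustration, here is a self-contained generic ADMM sketch for a tiny real-valued instance of (Equation 8). It is our own: it uses a dense solve in the $\tilde r$-update in place of the paper's FFT-based closed form, and all names, sizes, and the choice $\rho = 1$ are ours:

```python
import math

# Tiny real instance: signal length N, M intensity samples, L >= M grid
# points for the sampled nonnegativity constraint R(2 pi l / L) >= 0.
N, M, L = 3, 9, 16
x = [1.0, 0.6, -0.3]
r_true = [sum(x[n + k] * x[n] for n in range(N - k)) for k in range(N)]

# b_m = r_0 + 2 sum_{k>=1} r_k cos(2 pi m k / M): the linear map A r
A = [[1.0] + [2 * math.cos(2 * math.pi * m * k / M) for k in range(1, N)]
     for m in range(M)]
b = [sum(A[m][k] * r_true[k] for k in range(N)) for m in range(M)]  # noiseless

# Constraint rows: R(2 pi l / L) = (At r)_l >= 0
At = [[1.0] + [2 * math.cos(2 * math.pi * l * k / L) for k in range(1, N)]
      for l in range(L)]

def matvec(Mx, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in Mx]

def solve(H, y):
    # tiny Gaussian elimination with partial pivoting
    n = len(H)
    aug = [H[i][:] + [y[i]] for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(aug[k][i]))
        aug[i], aug[p] = aug[p], aug[i]
        for k in range(i + 1, n):
            f = aug[k][i] / aug[i][i]
            for j in range(i, n + 1):
                aug[k][j] -= f * aug[i][j]
    out = [0.0] * n
    for i in reversed(range(n)):
        out[i] = (aug[i][n] - sum(aug[i][j] * out[j] for j in range(i + 1, n))) / aug[i][i]
    return out

rho = 1.0
# Normal-equation matrix of the r-update: H = 2 A^T A + rho At^T At (constant)
H = [[2 * sum(A[m][i] * A[m][j] for m in range(M))
      + rho * sum(At[l][i] * At[l][j] for l in range(L))
      for j in range(N)] for i in range(N)]

z = [0.0] * L
u = [0.0] * L
for _ in range(500):
    rhs = [2 * sum(A[m][i] * b[m] for m in range(M))
           + rho * sum(At[l][i] * (z[l] - u[l]) for l in range(L))
           for i in range(N)]
    r = solve(H, rhs)                       # r-update (closed form)
    w = matvec(At, r)
    z = [max(w[l] + u[l], 0.0) for l in range(L)]   # project onto z >= 0
    u = [u[l] + w[l] - z[l] for l in range(L)]      # scaled dual update

err = max(abs(a - c) for a, c in zip(r, r_true))
```

In the noiseless case the least-squares optimum is the true auto-correlation, so the iterates should converge to `r_true`.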

As explained in Section 4.3, this formulation blends well with Kolmogorov’s method for spectral factorization. We thus refer to our approach for solving the 1D Fourier phase retrieval problem (Equation 2) as the *auto-correlation retrieval – Kolmogorov factorization* (**CoRK**) algorithm.

## 7 Summary of the proposed method

Throughout the paper, we divided 1D Fourier phase retrieval into several steps (measurement, formulation, algorithms, etc.), and for each step discussed several solution methods. For clarity and practical purposes, we summarize our approach below based on specific choices we found to be efficient.

Given an arbitrary vector $x \in \mathbb{C}^N$, we propose the following steps if it is possible to insert an impulse into the signal before measuring its Fourier intensity; otherwise, only step 2 will be invoked, in which case we are guaranteed to solve the maximum likelihood formulation optimally, but the estimation error may still be large.

1. Construct $x_{\min}$ by inserting $\delta$ in front of $x$, i.e.,

$$ x_{\min} = \left[\, \delta\ \ x^T \,\right]^T, $$

such that $|\delta| \ge \|x\|_1$. Take the $M$-point DFT of $x_{\min}$, where $M \ge 2N+1$, and measure its squared magnitude $b = |F x_{\min}|^2$.

2. Apply CoRK as follows:

   - Formulate the least-squares problem with respect to the auto-correlation of $x_{\min}$ as in (Equation 8), picking $L$ as the smallest power of $2$ that exceeds a suitable multiple of $N$ (a power of $2$ makes the FFTs most efficient). Solve this problem using the iterative ADMM algorithm (Equation 16).

   - Extract the minimum phase signal that generates the resulting correlation by Kolmogorov’s method, using the same $L$ (see Section 4.3).

   - Obtain an estimate of $x$ by deleting the leading impulse entry of the recovered minimum phase signal.
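The full pipeline can be sketched end-to-end in the noiseless case, where the auto-correlation can be read off directly from the inverse DFT of $b$ (standing in for the ADMM solve, since there is no noise and $M \ge 2N+1$). All names and sizes below are ours, and the spectral factorization is the cepstral form of Kolmogorov's method:

```python
import cmath, math

def dft(x, L):
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * l * n / L) for n in range(len(x)))
            for l in range(L)]

def idft(X):
    L = len(X)
    return [sum(X[l] * cmath.exp(2j * cmath.pi * l * n / L) for l in range(L)) / L
            for n in range(L)]

# --- step 1: augment and measure ---
x = [0.7, -0.4, 0.2, 0.5]                    # real signal, length N
N = len(x)
delta = 2 * sum(abs(v) for v in x)           # |delta| >= ||x||_1 with margin
xmin = [delta] + x                           # augmented, length N + 1
M = 2 * N + 3                                # >= 2(N+1) - 1 = 2N + 1
b = [abs(v) ** 2 for v in dft(xmin, M)]      # intensity measurements

# --- step 2a: auto-correlation from b (noiseless shortcut: no aliasing) ---
P = N + 1
r = [v.real for v in idft(b)[:P]]

# --- step 2b: Kolmogorov spectral factorization (cepstral form) ---
L = 256
Rw = [r[0] + 2 * sum(r[k] * math.cos(2 * math.pi * l * k / L) for k in range(1, P))
      for l in range(L)]
alpha = [0.5 * math.log(v) for v in Rw]
c = idft(alpha)
fold = [c[0]] + [2 * c[n] for n in range(1, L // 2)] + [c[L // 2]] + [0] * (L // 2 - 1)
xmin_hat = [v.real for v in idft([cmath.exp(v) for v in dft(fold, L)])[:P]]

# --- step 2c: drop the impulse to estimate x ---
x_hat = xmin_hat[1:]
err = max(abs(a - v) for a, v in zip(x_hat, x))
```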

Often, we have prior information that the signal of interest is real. This implies that the auto-correlation is also real, so we need to restrict the domain of $\tilde r$ (and/or $Z$) in problem (Equation 8) or (Equation 12) to be real. For the $z$-transform of a real signal, the zeros are either real or come in conjugate pairs, which does not help resolve the non-uniqueness of the solution if we directly measure the 1D Fourier magnitude. In Kolmogorov's SF method, if the input is real and a valid correlation, then the computed log-spectrum is symmetric, so the method is guaranteed to output a real valued signal. As for the new measurement system, all the claims made for constructing minimum/maximum phase signals still hold when the signal is restricted to be real. In Appendix C we show how to modify the ADMM method (Equation 16) to accommodate real signals, by adding an additional projection onto the real domain in the update.

When $x$ is real, the DFT has conjugate symmetry,

$$ \hat{x}_m = \hat{x}_{M-m}^*, $$

meaning that for the squared magnitude, $b_m = b_{M-m}$. Therefore, if we take $M$ samples of $\omega$ in $[0, 2\pi)$, then only roughly the first half of the measurements provide useful information. Consequently, we still need on the order of twice as many samples in $[0, 2\pi)$ to ensure identifiability.
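The conjugate symmetry and the resulting duplication in $b$ are easy to verify (sketch ours):

```python
import cmath

# For a real signal, X_m = conj(X_{M-m}), hence b_m = b_{M-m}.
x = [0.9, -0.3, 0.45, 0.1]
M = 10
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / M) for n in range(len(x)))
     for m in range(M)]
sym = max(abs(X[m] - X[(M - m) % M].conjugate()) for m in range(M))
b = [abs(v) ** 2 for v in X]
dup = max(abs(b[m] - b[(M - m) % M]) for m in range(M))
```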

## 8Simulations

We now demonstrate the algorithms and the new measurement system we proposed via simulations. We first show that both algorithms (iteratively solving (Equation 12) while bisecting , and solving (Equation 8) followed by Kolmogorov’s method) are able to solve the original problem (Equation 2) optimally, meaning the cost attains the lower bound given by PhaseLift. Then, we show that with the new measurement system proposed in Section 5 we can uniquely identify an arbitrary 1D signal, whereas directly measuring the over-sampled Fourier intensity does not recover the original input, regardless of the algorithm used. All simulations are performed in MATLAB on a Linux machine.

### 8.1Minimizing the least-squares error

We first test the effectiveness of the proposed algorithms on random problem instances. Fixing , we randomly set as an integer between , and generate from an i.i.d. uniform distribution between . We compare the following algorithms:

- **PhaseLift.** Using TFOCS [36] to solve problem (Equation 4) with . This in general does not give a rank-one solution, and therefore only serves as a theoretical lower bound on the minimal least-squares error (Equation 2).
- **PhaseLift-PC.** The leading principal component of the plain PhaseLift solution.
- **PhaseLift-SF.** Iteratively solving (Equation 12) while bisecting until an equivalent rank-one solution for (Equation 4) is found, again using TFOCS. Each time, TFOCS is initialized with the solution from the previous iteration for faster convergence. As we proved, this method is guaranteed to achieve the lower bound given by PhaseLift.
- **CoRK.** The proposed method, summarized in the second step of Section 8. For both steps, is set to be the smallest power of that is greater than .
- **Fienup-GS.** Fienup’s algorithm [45] with 1000 iterations, then refined by the GS algorithm until the error defined in (Equation 3) converges.

The optimality gaps between the minimal error in (Equation 2) obtained by the aforementioned methods and the theoretical lower bound given by PhaseLift are shown in Figure 1; the running times of the different methods are provided in Figure 2, for the 100 Monte-Carlo trials we tested. As we can see, PhaseLift-SF provides an optimal rank-one solution, although it takes more time than solving a single PhaseLift problem. Since PhaseLift-SF only provides a solution that is “numerically” rank one, when we take its rank-one component and evaluate the actual fitting error (Equation 2), it remains slightly above the theoretical PhaseLift lower bound, as shown by the red circles in Figure 1. On the other hand, the proposed CoRK method approximately solves the problem with high accuracy (often even better than PhaseLift-SF) in a very small amount of time (shorter than the standard Fienup-GS algorithm). The conventional Fienup-GS method and the leading principal component of the PhaseLift solution (in general not close to rank one) do not come close to the PhaseLift lower bound.

### 8.2Estimation performance

We now verify that our proposed measurement technique, described in Section 5, is able to recover a signal up to global phase ambiguity. As shown in the previous simulation, the CoRK method performs similarly to those based on PhaseLift, but much more efficiently. Therefore, we only compare CoRK and Fienup’s algorithm as baselines. In each Monte-Carlo trial, we first set as a random integer between , and then choose as the smallest power of that is larger than . A random signal is then generated with elements drawn i.i.d. from . Two kinds of measurements are collected for performance comparison:

- **Direct:** directly measure ;
- **Minimum Phase:** construct a minimum phase signal as in ( ?) with , and then measure .

For the minimum phase measurements, when Fienup’s algorithm is used, we add an additional step in which we use the solution to generate an auto-correlation sequence and then apply spectral factorization (Kolmogorov’s method) to obtain a minimum phase signal with the same fitting error. We denote this approach by **Fienup-SF**. To resolve the global phase ambiguity, we employ the simple prior that the added impulse is real and positive: after obtaining the minimum phase solution, the result is first rotated so that its first entry is real and positive, and then this entry is deleted to obtain an estimate . For direct measurements, the estimation error is defined as
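The disambiguation step can be sketched as follows (the function name is ours; the input is the recovered minimum phase vector, whose first entry corresponds to the inserted impulse):

```python
import numpy as np

def resolve_global_phase(s_hat):
    # Rotate so the first entry (the inserted impulse, known a priori to be
    # real and positive) becomes real and positive, then delete it to get
    # the estimate of the underlying signal.
    phase = s_hat[0] / np.abs(s_hat[0])
    return (s_hat / phase)[1:]
```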

The estimation error in each of the 100 Monte-Carlo trials is shown in Figure 3. We obtain perfect signal recovery when the new measurement system is used together with the CoRK recovery algorithm. This is never the case for direct measurements, even though the fitting error is always close to zero. For the new measuring system, CoRK obtains a solution with much higher accuracy and lower computation time (shown in Figure 4) than the widely used Fienup’s algorithm.

The new measuring system, together with the proposed CoRK method or Fienup’s algorithm followed by spectral factorization, is also robust to noise, in the sense that the mean squared error (MSE) can essentially attain the Cramér-Rao bound (CRB). The CRB for the phase-retrieval problem is derived in [46], and is used here to benchmark the performance of our new technique, which is the only method that can guarantee perfect signal reconstruction in the noiseless case. The CRB results in [46] are with respect to the real and imaginary parts of the signal being measured, in our case . We therefore sum over the diagonal entries of the pseudo-inverse of the Fisher information matrix, except for the st and st entries, which correspond to the real and imaginary parts of , and define that as the CRB for . Furthermore, since the CRB depends on the true value of , we show results for a fixed signal , so that the CRB curve remains consistent as we vary one parameter of the simulation at a time.

We set and generate a fixed signal from i.i.d. . Similar to the previous simulation, is constructed according to ( ?) with , so that it is minimum phase with very high probability. White Gaussian noise with variance is added to the squared magnitude of the Fourier transform of . The signal-to-noise ratio (SNR) is defined as . The performance is plotted in Figure 4, where we show the normalized error versus the normalized CRB (original CRB divided by ), averaged over 100 Monte-Carlo trials. On the left, we fix SNR dB and increase the number of measurements from to . On the right, we fix and increase the SNR from dB to dB. The SNR may seem high here, but notice that most of the signal power is concentrated in the artificially added impulse , so the effective noise power is much higher compared to that of *per se*. In all cases the MSE obtained by our proposed method attains the CRB, even for as few as measurements.

## 9Conclusion

We studied the phase retrieval problem with 1D Fourier measurements, a classical estimation problem that has challenged researchers for decades. Our contributions to this challenging problem are as follows:

**Convexity.** We showed that the 1D Fourier phase retrieval problem can be solved by a convex SDP.

**Uniqueness.** We proposed a simple measurement technique that adds an impulse to the signal before measuring the Fourier intensities, so that any signal can be uniquely identified.

**Algorithm.** We developed CoRK, a highly scalable algorithm to solve this problem, which was shown in our simulations to outperform all prior art.

In terms of future research directions, it is interesting to investigate how to incorporate constraints such as sparsity, non-negativity, or bandlimitedness into the auto-correlation parametrization. It is also tempting to consider extending our hidden convexity result and fast algorithm from 1D to other Fourier-based phase retrieval problems, for example the 2D case, which is identifiable, but difficult to solve.

## AEquivalence of PhaseLift and Problem ()

We show that the exact convex reformulation (Equation 9) is in fact equivalent to PhaseLift (Equation 4) without the trace regularization if we eliminate the variable in (Equation 9). A similar claim is given in [27] for the more general Toeplitz QCQP, but we show this again here in our context for completeness.

Consider the th entry in the vector , and replace by its trace parameterization

We then have that

where the last step is a result of the fact that

We now eliminate in (Equation 9) as follows:

which is exactly the cost function for PhaseLift (Equation 4) without the trace regularization.

## BCombining PhaseLift with spectral factorization

As we explained in the paper, 1D Fourier phase retrieval can be solved exactly using SDP in the following two steps. First, solve problem (Equation 9) (repeated here),

Denote as the optimal solution of (Equation 17), which is the unique solution since the cost of (Equation 17) is strongly convex with respect to . Next, perform an SDP-based spectral factorization by computing the solution to

and let the optimal solution of (Equation 18) be . Note that is the unique solution for (Equation 18), and it is rank one [28]. Here we want to show that these two steps can actually be combined into one, as shown in the following proposition.

Notice that is a feasible solution for (Equation 17), and in fact is the unique solution. On the other hand, is only one solution among a set of solutions for (Equation 17), but is the unique solution for problem (Equation 18). Now consider the following problem

It is easy to see that is its unique solution. From Lagrange duality, the above problem has the same solution as

where is the optimal Lagrangian multiplier with respect to the equality constraint in (Equation 19).

Finally, by dropping the constant in the cost of (Equation 20) and eliminating the variable as we described in Appendix A, we end up with the formulation ( ?), or (Equation 12), leading to our conclusion that is the unique solution to ( ?).

Proposition ? allows us to solve 1D Fourier phase retrieval by solving (Equation 12), to which all methods provided by TFOCS can still be applied; by tuning we obtain an optimal solution that is also rank one.

## CDerivation of Algorithm ()

We first review the alternating direction method of multipliers (ADMM), which is the tool used to derive algorithm (Equation 16).

Consider the following optimization problem:

ADMM solves this problem using the following updates

where is the step size. ADMM converges to an optimal solution as long as the objective functions are closed, proper, and convex. The flexible two-block structure of the algorithm often leads to very efficient and/or parallel algorithms that work well in practice.
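These generic two-block updates can be illustrated on a toy splitting, nonnegative least squares, where the first update minimizes the quadratic term and the second is a projection onto the nonnegative orthant (all names and the step size below are ours, not from the paper):

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, iters=300):
    # Two-block ADMM sketch for  min 0.5*||Ax - b||^2  s.t.  x >= 0,
    # split as f(x) = 0.5*||Ax - b||^2, g(z) = indicator(z >= 0), x = z.
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)                              # scaled dual variable
    M = A.T @ A + rho * np.eye(n)                # x-update system matrix
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # minimize f + quadratic penalty
        z = np.maximum(x + u, 0.0)                   # prox of g: projection
        u = u + x - z                                # dual ascent on x = z
    return z
```

The same structure appears in (Equation 16): one block solved in closed form, the other a simple projection, coupled by a scaled dual update.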

We now apply ADMM to problem (Equation 15), which leads to the following steps:

The explicit update for is straightforward: it is simply a projection of the point onto the non-negative orthant, leading to the update of as shown in (Equation 16).

We next derive the update for in detail. First note that is in general complex, so that the derivative needs to be taken with care. To this end we treat and as independent variables and take derivatives with respect to them separately. This approach is referred to as the *Wirtinger derivative* [47]. Focusing on the first term of the sub-problem, we can rewrite it as

Taking the derivative with respect to while treating as an independent variable, we obtain

Recall that represents the first columns of the DFT matrix, which are orthogonal to each other, so that . On the other hand, the columns of , except for the first one, come from the last columns of the same DFT matrix; if we assume , then we have that all columns of are orthogonal to columns of , except for the first one, which is equal to . Therefore, , where has only one entry in the upper-left corner that is equal to one, and zeros elsewhere. Since , (Equation 22) simplifies to

where we used the fact that is real. Similar expressions apply to the second term of the sub-problem. By setting the gradient of the sub-problem equal to zero, we have that

which leads to the update for given in (Equation 16). The gradient with respect to can be shown to be exactly the conjugate of the gradient of . Therefore, equating it to zero gives the same result.
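The Wirtinger convention used above can be sanity-checked numerically on a generic least-squares cost (all names below are ours; this is not the paper's sub-problem): the derivatives with respect to the real and imaginary coordinates equal twice the real and imaginary parts of the conjugate Wirtinger gradient.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)

f = lambda v: np.linalg.norm(A @ v - b) ** 2   # real-valued cost of complex v
g_wirt = A.conj().T @ (A @ x - b)              # conjugate Wirtinger gradient

eps = 1e-6
for i in range(3):
    e = np.zeros(3, dtype=complex)
    e[i] = eps
    d_re = (f(x + e) - f(x - e)) / (2 * eps)             # d f / d Re(x_i)
    d_im = (f(x + 1j * e) - f(x - 1j * e)) / (2 * eps)   # d f / d Im(x_i)
    assert np.isclose(d_re, 2 * g_wirt[i].real, atol=1e-4)
    assert np.isclose(d_im, 2 * g_wirt[i].imag, atol=1e-4)
```

This is why setting the conjugate Wirtinger gradient to zero is equivalent to setting the full real gradient to zero, as used in the derivation above.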

Finally, if we restrict to be real, then the term (Equation 21) can be written as

Its gradient with respect to is then