Convergence of the randomized Kaczmarz method for phase retrieval
The classical Kaczmarz iteration and its randomized variants are popular tools for the fast inversion of overdetermined linear systems. This method extends naturally to the setting of the phase retrieval problem by substituting, at each iteration, the phase of the corresponding measurement of the available approximate solution for the unknown phase of the measurement of the true solution. Despite the simplicity of the method, the rigorous convergence guarantees that are available for the classical linear setting have not so far been established for the phase retrieval setting. In this short note, we provide a convergence result for the randomized Kaczmarz method for phase retrieval in $\mathbb{R}^n$. We show that with high probability a random measurement system of size $m = O(n)$ will be admissible for this method, in the sense that convergence in the mean square sense is guaranteed with any prescribed probability. The convergence is exponential and comparable to that of the linear setting.
The classical Kaczmarz iteration is a popular and convenient method for the recovery of a real or complex $n$-dimensional vector $x$ from a collection of sufficiently many linear measurements $b_i = \langle x, a_i \rangle$, $i = 1, \dots, m$, where $\langle x, a_i \rangle$ denotes the Euclidean inner product of $x$ and $a_i$. Starting with any initial point $x_0$, the algorithm produces a succession of iterates $x_t$ defined by
$$x_{t+1} = x_t + \frac{b_{i(t)} - \langle x_t, a_{i(t)} \rangle}{\|a_{i(t)}\|^2}\, a_{i(t)}, \qquad (1)$$
where $i(t)$ is the index of the selected vector $a_{i(t)}$ (and the corresponding measurement $b_{i(t)}$) at time $t$. This equation has a simple interpretation: $x_{t+1}$ is the orthogonal projection of $x_t$ onto the solution hyperplane $\{z : \langle z, a_{i(t)} \rangle = b_{i(t)}\}$. In other words, the update $x_{t+1} - x_t$ is the orthogonal projection of the error $x - x_t$ on the chosen direction $a_{i(t)}$. Kaczmarz's original scheme cycles through the indices periodically, but it has been shown that random selection generally yields faster convergence. For this and other results, see [11, 8, 7, 2].
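As an illustration of the projection step just described, the randomized iteration can be sketched in a few lines of Python; the implementation and all parameter choices below are ours, not part of the paper.

```python
import numpy as np

def randomized_kaczmarz(A, b, x0, iters, rng):
    """Randomized Kaczmarz for the overdetermined linear system A x = b.

    Each step draws a row index i uniformly at random and projects the
    current iterate orthogonally onto the hyperplane {z : <a_i, z> = b_i}.
    """
    x = np.array(x0, dtype=float)
    m = A.shape[0]
    for _ in range(iters):
        i = rng.integers(m)
        a = A[i]
        # orthogonal projection onto the solution hyperplane of row i
        x += (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(0)
n, m = 20, 100
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
x_hat = randomized_kaczmarz(A, A @ x_true, np.zeros(n), 5000, rng)
```

With a well-conditioned random system as above, the iterate `x_hat` is driven to `x_true` up to numerical precision, reflecting the exponential mean-square convergence of the randomized scheme.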
This method can be adapted in a straightforward manner to the phase retrieval problem, where we only have access to the intensities $b_i = |\langle x, a_i \rangle|$: by simply using the sign (phase) of the approximate measurement $\langle x_t, a_{i(t)} \rangle$ in place of that of $\langle x, a_{i(t)} \rangle$, we get the phase-adapting Kaczmarz iteration
$$x_{t+1} = x_t + \frac{\mathrm{sign}(\langle x_t, a_{i(t)} \rangle)\, b_{i(t)} - \langle x_t, a_{i(t)} \rangle}{\|a_{i(t)}\|^2}\, a_{i(t)}, \qquad (2)$$
where $\mathrm{sign}(z)$ is the sign (or phase) of the scalar $z$, defined by the relation $z = \mathrm{sign}(z)\,|z|$. We will assume the convention that $\mathrm{sign}(0) = 1$. This method has been proposed by various authors (e.g. [14, 6]) and has been observed to perform well in practice. For general theory and some other main approaches to the phase retrieval problem, such as PhaseLift and PhaseCut, see [4, 1, 13].
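A minimal sketch of the phase-adapting iteration in the real case follows; this is our own illustrative implementation (function names, parameters, and the choice of initialization are ours), not code from the paper.

```python
import numpy as np

def phase_kaczmarz(A, b_abs, x0, iters, rng):
    """Phase-adapting randomized Kaczmarz, real case (illustrative sketch).

    Only the magnitudes b_abs[i] = |<a_i, x>| are available; the sign of
    the current approximate measurement <a_i, x_t> stands in for the
    unknown sign of <a_i, x>.  Convention: sign(0) = 1.
    """
    x = np.array(x0, dtype=float)
    m = A.shape[0]
    for _ in range(iters):
        i = rng.integers(m)
        a = A[i]
        s = 1.0 if a @ x >= 0 else -1.0
        x += (s * b_abs[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(0)
n, m = 20, 200
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)
b_abs = np.abs(A @ x_true)
# start close to the solution, mimicking a good initialization
x0 = x_true + 0.1 * rng.standard_normal(n) / np.sqrt(n)
x_hat = phase_kaczmarz(A, b_abs, x0, 8000, rng)
# the solution is recoverable only up to a global sign
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
```

Started from a point with small relative error, the iterate converges to `x_true` up to a global sign, consistent with the discussion of initialization below.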
Intuitively, this scheme has the best chance of success if the iterates can be guaranteed to stay reasonably close to one of the solutions of the phaseless equations, so that the approximate signs frequently match (or approximate, in the complex case) the true signs and progress is made. Each time there is a phase mismatch, the iterate gets an update in the wrong direction, so it is important that this event does not happen too frequently. Hence, unlike the linear classical Kaczmarz scheme (1), which is not sensitive to the initial condition, a good initialization is needed for the nonlinear phase-adapting version (2). There are now good methods for this, such as the truncated spectral initialization.
This paper will be about the real case, i.e. both $x$ and the $a_i$ are in $\mathbb{R}^n$. Without loss of generality, we assume that the $a_i$ are of unit norm, since we can always run the iteration (2) with the normalized vectors $a_i / \|a_i\|$ and intensity measurements $b_i / \|a_i\|$. Hence we will work with the iteration
$$x_{t+1} = x_t + \big(\mathrm{sign}(\langle x_t, a_{i(t)} \rangle)\, b_{i(t)} - \langle x_t, a_{i(t)} \rangle\big)\, a_{i(t)}. \qquad (3)$$
There will be two sources of randomness in this paper. The first and primary source of randomness is the following: given any measurement system $\{a_1, \dots, a_m\}$, we will assume that the indices $i(t)$ are chosen uniformly and independently from $\{1, \dots, m\}$. We will call the resulting method the phase-adapting randomized Kaczmarz iteration, irrespective of how the $a_i$ may have been chosen. In Section 3, we present a certain deterministic condition on $\{a_i\}$ called “-admissibility” (which consists of four individual properties), and show that with a -admissible system (and for a sufficiently small admissibility parameter), if the starting relative error is sufficiently small, then after one iteration the error shrinks in conditional expectation (with respect to the random choice of $i(t)$). We then carry out a probabilistic analysis of convergence in Section 4 via “drift analysis” and “hitting-time” bounds.
The secondary source of randomness will come into play when we want to show that most measurement systems are -admissible in the $m = O(n)$ regime. To achieve this, we will assume that the $a_i$ are chosen independently from the uniform distribution on the unit sphere $S^{n-1}$ in $\mathbb{R}^n$. The standard Gaussian distribution on $\mathbb{R}^n$ can also be used.
We will use $d(x, y) := \min\{\|x - y\|, \|x + y\|\}$ to denote the distance between $x$ and $y$ up to a global phase.
Let $a_1, \dots, a_m$ be chosen independently and uniformly on $S^{n-1}$. There exist absolute positive constants such that if , then with probability the system satisfies the following property:
For any $x \neq 0$, if the phase-adapting randomized Kaczmarz method with respect to $\{a_i\}$ is applied to any initial point $x_0$ satisfying the relative error bound
then the stability event
holds with probability at least , and conditioned on this event the expected squared error decays exponentially. More precisely, we have
for all .
We prove this theorem at the end of Section 4. Some remarks are in order:
As is the case for the randomized Kaczmarz method for linear inverse problems, the exponential convergence of $x_t$ to $x$ is achieved in the mean-squared sense. However, an important distinction is that here the convergence is conditional on a stability event. (In the linear case, this event is automatic, because the error decreases deterministically.) We handle this problem using methods that are known as “drift analysis” (see ).
The above-stated probability lower bound for the stability event is not tight. Furthermore, our preliminary calculations suggest that the methods of this paper can be extended to achieve an improved probabilistic guarantee for any fixed accuracy level. For the sake of exposition we do not pursue this extension in this manuscript.
We have left out performance guarantees regarding the initialization procedure from the above theorem because we have no new results to offer here. One may simply use the truncated spectral method, which is capable of providing the kind of guarantee that is compatible with the above theorem: for any prescribed accuracy, it can operate in the $m = O(n)$ regime and succeed with high probability.
Note for the revision: We would like to note here that, simultaneously with the initial posting of this paper, Y. Shuo Tan and R. Vershynin posted a manuscript on the randomized Kaczmarz method for phase retrieval, with results that are somewhat similar to ours, but established using different methods. Subsequently, we were also informed that Zhang et al. had previously established a conditional error contractivity result for the Gaussian measurement model using the so-called “reshaped Wirtinger flow” method.
2 Basic relations
Let $e_t := x_t - x$. Then (3) can be rewritten as
$$e_{t+1} = e_t - \langle e_t, a_{i(t)} \rangle\, a_{i(t)} + \big(\mathrm{sign}(\langle x_t, a_{i(t)} \rangle) - \mathrm{sign}(\langle x, a_{i(t)} \rangle)\big)\, |\langle x, a_{i(t)} \rangle|\, a_{i(t)}. \qquad (4)$$
Since $e_t - \langle e_t, a_{i(t)} \rangle\, a_{i(t)}$ and $a_{i(t)}$ are orthogonal, we obtain
$$\|e_{t+1}\|^2 = \|e_t\|^2 - \langle e_t, a_{i(t)} \rangle^2 + \big(\mathrm{sign}(\langle x_t, a_{i(t)} \rangle) - \mathrm{sign}(\langle x, a_{i(t)} \rangle)\big)^2\, \langle x, a_{i(t)} \rangle^2. \qquad (5)$$
When $\langle x_t, a_{i(t)} \rangle$ and $\langle x, a_{i(t)} \rangle$ have opposite signs we have $|\langle x, a_{i(t)} \rangle| \leq |\langle e_t, a_{i(t)} \rangle|$, so that the bound
$$\big(\mathrm{sign}(\langle x_t, a_{i(t)} \rangle) - \mathrm{sign}(\langle x, a_{i(t)} \rangle)\big)^2\, \langle x, a_{i(t)} \rangle^2 \leq 4\, |\langle x, a_{i(t)} \rangle|\, |\langle e_t, a_{i(t)} \rangle|\, \mathbf{1}_{\{\mathrm{sign}(\langle x_t, a_{i(t)} \rangle) \neq \mathrm{sign}(\langle x, a_{i(t)} \rangle)\}}$$
is always valid. Hence (5) implies
$$\|e_{t+1}\|^2 \leq \|e_t\|^2 - \langle e_t, a_{i(t)} \rangle^2 + 4\, |\langle x, a_{i(t)} \rangle|\, |\langle e_t, a_{i(t)} \rangle|\, \mathbf{1}_{\{\mathrm{sign}(\langle x_t, a_{i(t)} \rangle) \neq \mathrm{sign}(\langle x, a_{i(t)} \rangle)\}}. \qquad (6)$$
Note that (3) is invariant under the sign change $x \mapsto -x$, since the intensities $b_i$ are unchanged. Hence the relations above actually hold with $x$ replaced by $-x$ as well, i.e. the analysis is identical for $x$ and $-x$. For convenience of notation and without loss of generality we will work to analyze $\|x_t - x\|$ and make our initial condition assumption on $\|x_0 - x\|$.
2.1 Heuristic for convergence
Let $a$ be uniformly distributed on $S^{n-1}$. It is a standard fact that $\mathbb{E}\, \langle a, u \rangle^2 = \|u\|^2 / n$ for any $u \in \mathbb{R}^n$,
and an easy calculation (see Appendix) yields
It can also be checked easily that for any two nonzero $u$ and $v$ we have
$$\mathbb{P}\{\mathrm{sign}\langle a, u \rangle \neq \mathrm{sign}\langle a, v \rangle\} = \frac{\theta(u, v)}{\pi},$$
where $\theta(u, v) \in [0, \pi]$ is the angle between $u$ and $v$, and therefore $\theta(u, v)/\pi$ is the normalized geodesic distance on $S^{n-1}$ between $u/\|u\|$ and $v/\|v\|$. Hence, by the Cauchy-Schwarz inequality, we obtain
Hence, if the relative error $\|e_t\| / \|x\|$ is sufficiently small, then
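The spherical averages entering this heuristic are easy to verify by Monte Carlo; the following sketch (ours, not from the paper) checks the standard facts $\mathbb{E}\,\langle a, u \rangle^2 = 1/n$ and $\mathbb{E}\,|\langle a, u \rangle| \approx \sqrt{2/(\pi n)}$ for $a$ uniform on $S^{n-1}$ and a unit vector $u$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 50, 200_000

# draw N points uniformly on S^{n-1} by normalizing standard Gaussians
a = rng.standard_normal((N, n))
a /= np.linalg.norm(a, axis=1, keepdims=True)

u = np.zeros(n)
u[0] = 1.0  # any fixed unit vector; the distribution is rotation invariant
proj = a @ u

second_moment = np.mean(proj ** 2)        # should be close to 1/n
first_abs_moment = np.mean(np.abs(proj))  # close to sqrt(2/(pi*n)) for large n
```

Both empirical averages match the stated values up to the expected Monte Carlo fluctuation.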
Guided by these calculations, we turn to the error bound (6). We see that if the angle $\theta(x_t, x)$ is sufficiently small (which, for a fixed $x$, would be guaranteed by a sufficiently small $\|e_t\|$) and if we were to choose each $a_{i(t)}$ uniformly and independently on the unit sphere, then we would have
where $\mathcal{F}_t$ is the sigma-algebra generated by the indices $i(0), \dots, i(t-1)$, and for any event $E$, $\mathcal{F}_t \cap E$ is the sigma-algebra in $E$ formed by intersecting the elements of $\mathcal{F}_t$ with $E$.
Hence the stochastic process $(\|e_t\|)_{t \geq 0}$ is contractive in conditional expectation, where the conditioning is also on the size of $\|e_t\|$. Without the size condition on $\|e_t\|$, the analysis would have been fairly straightforward, similar to the situation of the randomized Kaczmarz iteration for linear systems. As we will see, this condition makes the task non-trivial.
However, we must also establish a similar contractivity result (conditional and in expectation) for the actual random model used in this paper, i.e., when $a_{i(t)}$ is chosen uniformly from a fixed collection $\{a_1, \dots, a_m\}$. This collection itself may also have been chosen randomly, though with the above observation we can now define certain deterministic properties of $\{a_i\}$ that are needed for the algorithm to work.
3 Admissible measurement systems
Let $\delta > 0$ and let $\{a_1, \dots, a_m\}$ be a given collection of nonzero vectors in $\mathbb{R}^n$. Following , we say that the $a_i$, or more appropriately, the linear hyperspaces $a_i^{\perp}$, produce a $\delta$-uniform tessellation of $S^{n-1}$ if for all $u$ and $v$ in $S^{n-1}$ we have
$$\left| \frac{1}{m}\, \#\{i : \mathrm{sign}\langle a_i, u \rangle \neq \mathrm{sign}\langle a_i, v \rangle\} - \frac{\theta(u, v)}{\pi} \right| \leq \delta.$$
Then by Theorem 1.2 of , there exist two positive absolute constants such that if $m$ is large enough in terms of $n$ and $\delta$, and the $a_i$ are chosen independently from the uniform distribution on $S^{n-1}$, then with high probability we get a $\delta$-uniform tessellation of $S^{n-1}$.
If $a$ is chosen from the collection $\{a_1, \dots, a_m\}$ uniformly at random and $f$ is any function on $S^{n-1}$, then we define the empirical mean
$$\mathbb{E}_m f(a) := \frac{1}{m} \sum_{i=1}^m f(a_i).$$
With a collection that yields a $\delta$-uniform tessellation, we have that the empirical mean of the sign-mismatch indicator is within $\delta$ of the ensemble mean $\theta(u, v)/\pi$. The upper part of this bound obviously yields
The above result provides a pathway for mimicking the argument in Section 2.1 with the uniform distribution on the sphere replaced by the empirical distribution on $\{a_1, \dots, a_m\}$. Under the same random model for the $a_i$, a useful concentration result (i.e. for the regime $m = O(n)$) holds for the empirical mean of $\langle a, u \rangle^2$. Indeed, as it follows from [12, Theorem 5.39], there exist absolute positive constants such that for $m$ a sufficiently large multiple of $n$, with high probability we have
(If desired, the constants can be chosen closer to $1$ without changing the form of this statement.)
In order to continue on the same path, one would wish to have a uniform bound of order $1/\sqrt{n}$ on the quantities $|\langle a_i, u \rangle|$ with high probability. As it turns out, this is impossible in the regime $m = O(n)$. (We thank Y. Shuo Tan and R. Vershynin for bringing this fact to our attention.) We will circumvent this obstacle by tightening the Cauchy-Schwarz argument of Section 2.1: in order to do this, we will invoke (10) coupled with the Cauchy-Schwarz inequality only on the event that the relevant inner product does not exceed a fixed multiple of its mean value, and show that the desirable upper bound above is then achievable with high probability. At the same time, we will show that the second moment contribution from the large values is in fact small, so on that event we will only invoke the trivial bound.
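The obstruction just mentioned can be seen numerically: for $m$ proportional to $n$, the largest of the $m$ projections $|\langle a_i, u \rangle|$ behaves like $\sqrt{2 \log m}/\sqrt{n}$ rather than $O(1/\sqrt{n})$. A quick Monte Carlo check (our own sketch, with arbitrary parameter choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
m = 10 * n  # linear regime, m proportional to n

# m independent uniform points on S^{n-1}
a = rng.standard_normal((m, n))
a /= np.linalg.norm(a, axis=1, keepdims=True)
u = np.zeros(n)
u[0] = 1.0

proj = np.abs(a @ u)
typical = proj.mean()  # of order 1/sqrt(n)
largest = proj.max()   # of order sqrt(log m)/sqrt(n): noticeably larger
```

The maximum projection exceeds the typical one by a logarithmic factor, which is why the argument truncates at a fixed multiple of the mean instead of relying on a uniform bound.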
To this end, given , consider the alternative weaker conditions
We will say that $\{a_i\}$ is -admissible if all of the four conditions (10), (11), (12), and (13) hold. Note that all of these are deterministic conditions on the measurement system. We will show in Lemma 3.2 that a random measurement system is -admissible with high probability when $m = O(n)$, but first let us show how these two alternative conditions are used instead of a uniform bound. Suppose $\{a_i\}$ is -admissible. Noting that , we have
provided and is sufficiently small (e.g. ). Hence, together with the lower bound of (11), we have
where again is the sigma-algebra generated by .
At this point, it will be helpful to replace the condition by a size condition on . Note that for any two nonzero vectors and , we have
so that the condition implies . Therefore we have
With the above discussion we have established the following result:
There exists such that, if and is -admissible, then
where and .
We now show that a random measurement system is -admissible with high probability in the regime $m = O(n)$.
For every , there exist positive constants depending only on such that if , then a random measurement system chosen independently from the uniform distribution on $S^{n-1}$ is -admissible with probability at least .
We start with (12). As is standard in this type of question, we would like to establish the stated inequality for a fixed vector first (with high probability) and then use approximation over an $\epsilon$-net of the sphere to achieve uniformity. However, the quantity in question is a discontinuous function of the underlying random vector, which presents a difficulty for the approximation argument. The solution will follow by incorporating a suitable Lipschitz extension, as also done in .
For this purpose, let be defined by
Then is a Lipschitz function with Lipschitz constant . Furthermore,
so that for any we have
Now, let $a$ denote the random vector uniformly distributed on $S^{n-1}$, so that $\sqrt{n}\, a$ is a spherical random vector in $\mathbb{R}^n$ (see [12, Section 5.2.5]). Let $\| \cdot \|_{\psi_1}$ stand for the sub-exponential norm and $\| \cdot \|_{\psi_2}$ for the sub-Gaussian norm (see [12, Sections 5.2.3 and 5.2.4]). Noting that , we have
where in the second step we have used [12, Lemma 5.14] and in the last step the fact that the sub-Gaussian norm of a spherical random vector is bounded by an absolute constant (see [12, Section 5.2.5]; a direct computation is also possible).
Hence, by the Bernstein-type inequality [12, Proposition 5.16], there is an absolute constant such that for any we have, with probability at least ,
where in the second step we have used instead.
Now pick an $\epsilon$-net of the unit sphere of cardinality at most $(1 + 2/\epsilon)^n$. For each $u$ on the sphere and $v$ in the net such that $\|u - v\| \leq \epsilon$, we have
where in the first step we have utilized the Lipschitz continuity of the extension, and in the last step the Cauchy-Schwarz inequality coupled with the upper bound of (11). Combining (19), (20), and (21), we find that with probability at least , we have
for every . We may choose and so that and therefore (12) holds with probability at least provided .
We continue with (13). We will use the same method, but with a different Lipschitz function. Let be defined by
Then is a Lipschitz function that fixes with Lipschitz constant . We have
so that for any fixed we have
Noting that , we now have
so that by the Bernstein-type inequality (and reducing the value of if necessary), for any we have, with probability at least ,
where in the second step we have used instead. We again pick an $\epsilon$-net of the unit sphere of cardinality at most $(1 + 2/\epsilon)^n$. For each $u$ on the sphere and $v$ in the net such that $\|u - v\| \leq \epsilon$, this time we have
Hence by the union bound, with probability at least we have
for every . We may choose and so that and therefore (13) holds with probability at least provided . ∎
4 Probabilistic analysis of the error sequence
Our goal in this section will be to bound the probability that the error $\|e_t\|$ exceeds the stability threshold at some point, and to obtain probabilistic guarantees on the exponential decay of $\|e_t\|^2$. Lemma 3.1 uses the randomness present in the selection of the index $i(t)$ only. To be able to iterate this result recursively we need to condition on the event that the error has remained small up to time $t$. We define the “hitting time”
Hence is the same as and the event means for all .
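The hitting time can also be probed empirically. In the sketch below (ours; the function name and all parameter choices are illustrative), we run the normalized iteration from a good initial point and record the first time, if any, at which the error up to a global sign leaves a prescribed ball.

```python
import numpy as np

def first_exit_time(A, b_abs, x_true, x0, radius, iters, rng):
    """First t at which min(||x_t - x||, ||x_t + x||) exceeds `radius`,
    or None if the iterate stays inside the ball for all `iters` steps."""
    x = np.array(x0, dtype=float)
    m = A.shape[0]
    for t in range(1, iters + 1):
        i = rng.integers(m)
        a = A[i]
        s = 1.0 if a @ x >= 0 else -1.0
        x += (s * b_abs[i] - a @ x) / (a @ a) * a
        if min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)) > radius:
            return t
    return None

rng = np.random.default_rng(3)
n, m = 20, 300
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)
b_abs = np.abs(A @ x_true)
# small initial error, well inside the stability ball
x0 = x_true + 0.05 * rng.standard_normal(n) / np.sqrt(n)
tau = first_exit_time(A, b_abs, x_true, x0, radius=0.5, iters=5000, rng=rng)
```

With a good initialization the stability event typically holds, i.e. the exit time is never reached; this is the empirical counterpart of the hitting-time bounds developed below.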
Suppose (17) holds. Then
for all .
We can now use Lemma 4.1 to control (i) the probability of the event that the error exceeds at some point (i.e. ), and (ii) the expected decay of squared error conditional on the event that the error remains bounded by (i.e. ).
Suppose (17) holds. Then for any we have
and for any
For the first claim it suffices to observe that so that
The result follows by bounding the right hand side of this inequality using (27).
For the second claim, note that
The result follows by setting and using (27) again. ∎
Next we give a bound on .
Suppose (17) holds. Then
in particular we have