Orthogonal Matching Pursuit with Replacement


Abstract

In this paper, we consider the problem of compressed sensing where the goal is to recover almost all sparse vectors using a small number of fixed linear measurements. For this problem, we propose a novel partial hard-thresholding operator that leads to a general family of iterative algorithms. While one extreme of the family yields well-known hard thresholding algorithms such as ITI and HTP (MalekiD10; Foucart10), the other end of the spectrum leads to a novel algorithm that we call Orthogonal Matching Pursuit with Replacement (OMPR). OMPR, like the classic greedy algorithm OMP, adds exactly one coordinate to the support at each iteration, based on the correlation with the current residual. However, unlike OMP, OMPR also removes one coordinate from the support. This simple change allows us to prove that OMPR has the best known guarantees for sparse recovery in terms of the Restricted Isometry Property (a condition on the measurement matrix). In contrast, OMP is known to have very weak performance guarantees under RIP. Given its simple structure, we are able to extend OMPR using locality sensitive hashing to get OMPR-Hash, the first provably sub-linear (in dimensionality) algorithm for sparse recovery. Our proof techniques are novel and flexible enough to also permit the tightest known analysis of popular iterative algorithms such as CoSaMP and Subspace Pursuit. We provide experimental results on large problems, demonstrating recovery of vectors with up to a million dimensions, and show that for large-scale problems our proposed methods are more robust and faster than existing methods.

1 Introduction

We nowadays routinely face high-dimensional datasets in diverse application areas such as biology, astronomy, finance and the web. The associated curse of dimensionality is often alleviated by prior knowledge that the object being estimated has some structure. One of the most natural and well-studied structural assumptions for vectors is sparsity. Accordingly, a huge amount of recent work in machine learning, statistics and signal processing has been devoted to finding better ways to leverage sparse structures. Compressed sensing, a new and active branch of modern signal processing, deals with the problem of designing measurement matrices and recovery algorithms such that almost all sparse signals can be recovered from a small number of measurements. It has important applications in imaging, computer vision and machine learning (see, for example, DuarteDTLSKB08; WrightMMSHY10; HsuKLZ10).

In this paper, we focus on the compressed sensing setting (CandesT05; Donoho06) where we want to design a measurement matrix Φ ∈ R^{m×n} such that a sparse vector x* ∈ R^n with ‖x*‖_0 ≤ k can be efficiently recovered from the measurements y = Φx* ∈ R^m. Initial work focused on various random ensembles of matrices such that, if Φ was chosen randomly from that ensemble, one would be able to recover all or almost all sparse vectors x* from y. CandesT05 isolated a key property called the restricted isometry property (RIP) and proved that, as long as the measurement matrix satisfies RIP, the true sparse vector can be obtained by solving the ℓ_1-optimization problem

    min_x ‖x‖_1   s.t.   y = Φx.

The above problem can be easily formulated as a linear program and is hence efficiently solvable. We recall for the reader that a matrix Φ is said to satisfy RIP of order k if there is some δ_k ∈ [0, 1) such that, for all x with ‖x‖_0 ≤ k, we have

    (1 − δ_k) ‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ_k) ‖x‖_2^2.

Several random matrix ensembles are known to satisfy RIP at sparsity level k with high probability provided one chooses m = O(k log(n/k)) measurements. Candes08 showed that ℓ_1-minimization recovers all k-sparse vectors provided Φ satisfies δ_2k < √2 − 1, although this condition has recently been improved (Foucart10b). Note that, in compressed sensing, the goal is to recover all, or most, k-sparse signals using the same measurement matrix Φ. Hence, weaker conditions such as restricted convexity (NRWY09) studied in the statistical literature (where the aim is to recover a single sparse vector from noisy linear measurements) typically do not suffice. In fact, if RIP is not satisfied then multiple sparse vectors can lead to the same observation y, making recovery of the true sparse vector impossible.
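
To make the scaling concrete, the following toy NumPy check (the sizes and the constant 4 are arbitrary choices of ours, and probing random sparse vectors is of course much weaker than verifying RIP over all of them) samples a Gaussian matrix with on the order of k log(n/k) rows and inspects how far ‖Φx‖_2/‖x‖_2 strays from 1 on random k-sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 10
m = int(4 * k * np.log(n / k))                  # m on the order of k log(n/k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # scaling gives E[||Phi x||^2] = ||x||^2

ratios = []
for _ in range(500):
    x = np.zeros(n)
    idx = rng.choice(n, size=k, replace=False)
    x[idx] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

# A small spread around 1 is consistent with a small RIP constant at level k
# (this probes random sparse vectors only, not all of them).
print(min(ratios), max(ratios))
```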

Based on its RIP guarantees, ℓ_1-minimization can guarantee recovery using just O(k log(n/k)) measurements, but it has been observed in practice that ℓ_1-minimization is too expensive in large-scale applications (DonohoMaMo09), for example, when the dimensionality is in the millions. This has sparked a huge interest in iterative methods for sparse recovery. An early classic iterative method is Orthogonal Matching Pursuit (OMP) (PatiRK93; DavidMA97), which greedily chooses elements to add to the support. It is a natural, easy-to-implement and fast method but unfortunately lacks strong theoretical guarantees. Indeed, it is known that, if run for k iterations, OMP cannot uniformly recover all k-sparse vectors under a constant RIP condition (Rauhut08; MoS11). However, Zhang10 showed that OMP, if run for more iterations, recovers the optimal solution under an RIP condition of correspondingly higher order; a significantly more restrictive condition than the ones required by other methods like ℓ_1-minimization.

Several other iterative approaches have been proposed, including Iterative Soft Thresholding (IST) (MalekiD10), Iterative Hard Thresholding (IHT) (BlumensathD09), Compressive Sampling Matching Pursuit (CoSaMP) (NeedellT09), Subspace Pursuit (SP) (DaiM09), Iterative Thresholding with Inversion (ITI) (Maleki09), Hard Thresholding Pursuit (HTP) (Foucart10) and many others. Within the family of iterative hard thresholding algorithms, following MalekiD10, we can identify two major subfamilies: one- and two-stage algorithms. As their names suggest, the distinction is based on the number of stages in each iteration of the algorithm. One-stage algorithms such as IHT, ITI and HTP decide on the choice of the next support set and then usually solve a least squares problem on the updated support. The one-stage methods always set the support set to have size k, where k is the target sparsity level. On the other hand, two-stage algorithms, notable examples being CoSaMP and SP, first enlarge the support set, solve a least squares problem on it, and then reduce the support set back again to the desired size. A second least squares problem is then solved on the reduced support. These algorithms typically enlarge and reduce the support set by k or 2k elements. An exception is the two-stage algorithm FoBa (Zhang08) that adds and removes single elements from the support. However, it differs from our proposed methods as its analysis requires very restrictive RIP conditions (as quoted in HsuKLZ10) and the connection to locality sensitive hashing (see below) is not made. Another algorithm with replacement steps appears in ShwartzSZ10. However, the algorithm and the setting under which it is analyzed are different from ours.

In this paper, we present, and provide a unified analysis for, a family of one-stage iterative hard thresholding algorithms. The family is parameterized by a positive integer l ≤ k. At the extreme value l = k, we recover the algorithm ITI/HTP. At the other extreme, l = 1, we get a novel algorithm that we call Orthogonal Matching Pursuit with Replacement (OMPR). OMPR can be thought of as a simple modification of the classic greedy algorithm OMP: instead of simply adding an element to the existing support, it replaces an existing support element with a new one. Surprisingly, this change allows us to prove sparse recovery under the condition δ_2k ≤ 0.499. This is the best δ_2k-based RIP condition under which any method, including ℓ_1-minimization, can be shown to provably perform sparse recovery.

OMPR also lends itself to a faster implementation using locality sensitive hashing (LSH). This allows us to provide recovery guarantees using an algorithm whose run-time is provably sub-linear in n, the number of dimensions. An added advantage of OMPR, unlike many iterative methods, is that no careful tuning of the step-size parameter is required, even under noisy settings or when RIP does not hold. The default step size of η = 1 is always guaranteed to converge to at least a local optimum.

Finally, we show that the proof techniques used in our analysis of the OMPR family are also useful in tightening the analysis of two-stage algorithms such as CoSaMP and SP. As a result, we are able to prove better recovery guarantees for these algorithms: a weaker condition on δ_4k for CoSaMP and a weaker condition on δ_3k for SP. We hope that this unified analysis sheds more light on the interrelationships between the various kinds of iterative hard thresholding algorithms.

In summary, the contributions of this paper are as follows.

  • We present a family of iterative hard thresholding algorithms that on one end of the spectrum includes existing algorithms such as ITI/HTP, while on the other end it gives OMPR. OMPR is an improvement over the classical OMP method as it enjoys better theoretical guarantees and also performs better in practice, as shown in our experiments.

  • Unlike other improvements over OMP, such as CoSaMP or SP, OMPR changes only one element of the support at a time. This allows us to use Locality Sensitive Hashing (LSH) to speed it up, resulting in the first provably sub-linear (in the ambient dimensionality n) time sparse recovery algorithm.

  • We provide a general proof for all the algorithms in our partial hard thresholding based family. In particular, we can guarantee recovery using OMPR, under both noiseless and noisy settings, provided δ_2k ≤ 0.499. This is the least restrictive condition under which any efficient sparse recovery method is known to work. Furthermore, our proof technique can be used to provide a general theorem that yields the least restrictive known guarantees for all the two-stage algorithms such as CoSaMP and SP (see Appendix D).

All proofs omitted from the main body of the paper can be found in the appendix.

2 Orthogonal Matching Pursuit with Replacement

Orthogonal matching pursuit (OMP) is a classic iterative algorithm for sparse recovery. At every stage, it selects a coordinate to include in the current support set by maximizing the inner product between the columns of the measurement matrix Φ and the current residual y − Φx^t. Once the new coordinate has been added, it solves a least squares problem to fully minimize the error on the current support set. As a result, the residual becomes orthogonal to the columns of Φ that correspond to the current support set. Thus, the least squares step is also referred to as orthogonalization by some authors (DavenportW10).
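
For reference, here is a minimal NumPy sketch of OMP as just described (greedy selection followed by the least squares / orthogonalization step); the function name, argument conventions and fixed iteration count are our own illustrative choices, not code from the paper.

```python
import numpy as np

def omp(Phi, y, k):
    """Greedy OMP sketch: add the column most correlated with the residual, then re-fit."""
    n = Phi.shape[1]
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        correlations = Phi.T @ residual
        j = int(np.argmax(np.abs(correlations)))                   # column most correlated with residual
        if j not in support:
            support.append(j)
        coef = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]  # least squares on the current support
        residual = y - Phi[:, support] @ coef                      # residual is now orthogonal to chosen columns
    x = np.zeros(n)
    x[support] = coef
    return x
```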

Let us briefly explain some of our notation. We use the MATLAB notation:

The hard thresholding operator H_k(·) sorts its argument vector in decreasing order of absolute value and retains only the top k entries. It is defined formally in the next section. Also, we use subscripts to denote sub-vectors and sub-matrices; e.g., if I is a set of cardinality k and z ∈ R^n, then z_I ∈ R^k denotes the sub-vector of z indexed by I. Similarly, for a matrix Φ ∈ R^{m×n}, Φ_I denotes the m×k sub-matrix with columns indexed by I. The complement of a set I is denoted by I^c, and z_{I^c} denotes the sub-vector of z not indexed by I. The support (indices of non-zero entries) of a vector z is denoted by supp(z).

Our new algorithm, called Orthogonal Matching Pursuit with Replacement (OMPR) and shown as Algorithm 1, differs from OMP in two respects. First, the selection of the coordinate to include is based not just on the magnitude of the entries of Φ^⊤(y − Φx^t) but instead on the weighted combination x^t + η Φ^⊤(y − Φx^t), with the step size η controlling the relative importance of the two addends. Second, the selected coordinate replaces one of the existing elements in the support, namely the one corresponding to the minimum magnitude entry in the weighted combination mentioned above.

Once the support I^{t+1} of the next iterate has been determined, the actual iterate x^{t+1} is obtained by solving the least squares problem:

    x^{t+1} = argmin_{x : supp(x) ⊆ I^{t+1}} ‖y − Φx‖_2^2.

Note that if the matrix Φ satisfies RIP of order k or larger, the above problem will be well conditioned and can be solved quickly and reliably using an iterative least squares solver.

1:  Input: matrix Φ, vector y, sparsity level k
2:  Parameter: step size η
3:  Initialize x^1 s.t. |supp(x^1)| ≤ k
4:  for t = 1 to T do
5:     z^t ← x^t + η Φ^⊤(y − Φx^t)
6:     j ← argmax_{i ∉ supp(x^t)} |z^t_i|
7:     ℓ ← argmin_{i ∈ supp(x^t)} |z^t_i|
8:     I^{t+1} ← (supp(x^t) ∪ {j}) \ {ℓ} if |z^t_j| > |z^t_ℓ|, else I^{t+1} ← supp(x^t)
9:     x^{t+1}_{I^{t+1}} ← argmin_w ‖y − Φ_{I^{t+1}} w‖_2
10:     x^{t+1}_{(I^{t+1})^c} ← 0
11:  end for
Algorithm 1 OMPR
1:  Input: matrix Φ, vector y, sparsity level k
2:  Parameter: step size η
3:  Initialize x^1 s.t. |supp(x^1)| ≤ k
4:  for t = 1 to T do
5:     z^t ← x^t + η Φ^⊤(y − Φx^t)
6:     top ← indices of top l elements of |z^t_{supp(x^t)^c}|
7:     J^t ← supp(x^t) ∪ top
8:     I^{t+1} ← indices of top k elements of |z^t_{J^t}|
9:     x^{t+1}_{I^{t+1}} ← argmin_w ‖y − Φ_{I^{t+1}} w‖_2
10:     x^{t+1}_{(I^{t+1})^c} ← 0
11:  end for
Algorithm 2 OMPR(l)
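
For readers who want to experiment, here is a minimal NumPy sketch of OMPR(l) following the structure of the listings above (gradient step, partial hard thresholding, least squares on the new support). The function name ompr, the initialization by the k largest correlations, and the default parameter values are illustrative choices of ours, not the authors' reference implementation.

```python
import numpy as np

def ompr(Phi, y, k, l=1, eta=1.0, iters=100):
    """Illustrative sketch of OMPR(l): replace at most l support elements per iteration."""
    n = Phi.shape[1]
    # Initialize with the k columns most correlated with y (one reasonable choice).
    z = Phi.T @ y
    support = np.argsort(-np.abs(z))[:k]
    x = np.zeros(n)
    x[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
    for _ in range(iters):
        z = x + eta * (Phi.T @ (y - Phi @ x))                    # gradient step
        outside = np.setdiff1d(np.arange(n), support)
        top = outside[np.argsort(-np.abs(z[outside]))[:l]]       # top-l candidates outside the support
        merged = np.union1d(support, top)
        support = merged[np.argsort(-np.abs(z[merged]))[:k]]     # keep k largest entries of z (partial hard thresholding)
        x = np.zeros(n)
        x[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]  # least squares on the support
    return x
```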

We will show that OMPR, unlike OMP, recovers any k-sparse vector under the RIP-based condition δ_2k ≤ 0.499. This appears to be the least restrictive recovery condition (i.e., the best known condition) under which any method, be it basis pursuit (ℓ_1-minimization) or some iterative algorithm, is guaranteed to recover all k-sparse vectors.

In the literature on sparse recovery, RIP-based conditions of an order other than 2k are often provided. It is seldom possible to directly compare two conditions, say, one based on δ_2k and the other based on δ_3k. Foucart (Foucart10) has given a heuristic to compare such RIP conditions based on the number of samples it takes in the Gaussian ensemble to satisfy a given RIP condition. This heuristic says that an RIP condition of the form δ_ck < θ is less restrictive if the ratio c/θ^2 is smaller. Under this heuristic, the OMPR condition δ_2k ≤ 0.499 is the least restrictive RIP condition for sparse recovery.

Theorem 1 (Noiseless Case).

Suppose the vector x* is k-sparse and the matrix Φ satisfies δ_2k ≤ 0.499 together with a mild bound on its upper RIP constant. Then OMPR recovers an approximation to x* from the measurements y = Φx* in a bounded number of iterations.

Theorem 2 (Noisy Case).

Suppose the vector x* is k-sparse and the matrix Φ satisfies δ_2k ≤ 0.499 together with a mild bound on its upper RIP constant. Then, in a bounded number of iterations, OMPR converges to an approximate solution whose error is bounded by a constant multiple of the noise level, given the noisy measurements y = Φx* + e, where the constant depends only on the RIP constants of Φ.

The above theorems are actually special cases of our convergence results for a family of algorithms that contains OMPR as a special case. We now turn our attention to this family. We note that the condition on the upper RIP constant is very mild and will typically hold for standard random matrix ensembles as soon as the number of rows sampled is larger than a fixed universal constant.

3 A New Family of Iterative Algorithms

In this section we show that OMPR is one particular member of a family of algorithms parameterized by a single integer l ∈ {1, …, k}. The l-th member of this family, OMPR(l), shown in Algorithm 2, replaces at most l elements of the current support with new elements. OMPR corresponds to the choice l = 1. Hence, OMPR and OMPR(1) refer to the same algorithm.

Our first result in this section connects the OMPR family to hard thresholding. Given a set I of cardinality k, define the partial hard thresholding operator

    H_{k,l}(z; I) := argmin_w ‖w − z‖_2   s.t.   ‖w‖_0 ≤ k,  |supp(w) \ I| ≤ l.        (1)

As is clear from the definition, the operator tries to find a vector w close to a given vector z under two constraints: (i) the vector should have bounded support (‖w‖_0 ≤ k), and (ii) its support should not include more than l new elements outside the given support I.

The name partial hard thresholding operator is justified by the following reasoning. When l = k, the constraint |supp(w) \ I| ≤ l is trivially implied by ‖w‖_0 ≤ k and hence the operator becomes independent of I. In fact, it becomes identical to the standard hard thresholding operator

    H_k(z) := argmin_w ‖w − z‖_2   s.t.   ‖w‖_0 ≤ k.        (2)

Even though the definition of H_k seems to involve a search over subsets, it can in fact be computed efficiently by simply sorting the vector z by decreasing absolute value and retaining the top k entries.
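
In code, H_k is a single sort-and-keep operation; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z and zero out the rest."""
    w = np.zeros_like(z)
    top_k = np.argsort(-np.abs(z))[:k]
    w[top_k] = z[top_k]
    return w
```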

The following result shows that even the partial hard thresholding operator is easy to compute. In fact, lines 6–8 in Algorithm 2 precisely compute H_{k,l}(z^t; supp(x^t)).

Proposition 3.

Let z ∈ R^n and a set I of cardinality k be given. Then H_{k,l}(z; I) can be computed using the sequence of operations: top ← indices of the top l elements of |z_{I^c}|; J ← I ∪ top; I′ ← indices of the top k elements of |z_J|; output the vector that equals z on I′ and is zero elsewhere.
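
The following NumPy sketch mirrors this sequence of operations (and hence lines 6–8 of Algorithm 2); the function name and argument order are our own illustrative choices:

```python
import numpy as np

def partial_hard_threshold(z, I, k, l):
    """Compute H_{k,l}(z; I): closest vector to z with at most k non-zeros,
    at most l of which lie outside the index set I."""
    n = len(z)
    outside = np.setdiff1d(np.arange(n), I)
    top = outside[np.argsort(-np.abs(z[outside]))[:l]]   # top-l entries outside I
    J = np.union1d(I, top)                               # merged candidate support
    new_support = J[np.argsort(-np.abs(z[J]))[:k]]       # keep the k largest entries of z
    w = np.zeros_like(z)
    w[new_support] = z[new_support]
    return w
```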

The proof of this proposition is straightforward and elementary. However, using it, we can now see that the OMPR(l) algorithm has a simple conceptual structure. In each iteration (with current iterate x^t having support I^t), we do the following:

  1. (Gradient Descent) Form z^t = x^t + η Φ^⊤(y − Φx^t). Note that −Φ^⊤(y − Φx^t) is the gradient of the objective function ½‖y − Φx‖_2^2 at x^t, so this is a gradient-descent step with step size η.

  2. (Partial Hard Thresholding) Form the intermediate vector ẑ^t = H_{k,l}(z^t; I^t) by partially hard thresholding z^t using the operator H_{k,l}(·; I^t).

  3. (Least Squares) Form the next iterate x^{t+1} by solving a least squares problem restricted to the support of ẑ^t.

A nice property enjoyed by the entire OMPR family is guaranteed sparse recovery under RIP-based conditions. Note that the condition under which OMPR(l) recovers sparse vectors becomes more restrictive as l increases. This could be an artifact of our analysis, as in experiments we do not see any degradation in recovery ability as l is increased.

Theorem 4 (Noiseless Case).

Suppose the vector x* is k-sparse. Then OMPR(l) recovers an approximation to x* from the measurements y = Φx* in a bounded number of iterations, provided we choose a step size η that satisfies suitable conditions coupling η with the RIP constants of Φ.

Theorem 5 (Noisy Case).

Suppose the vector x* is k-sparse. Then OMPR(l) converges to an approximate solution, with error proportional to the noise level, from the noisy measurements y = Φx* + e in a bounded number of iterations, provided we choose a step size η that satisfies suitable conditions coupling η with the RIP constants of Φ.

Proof.

Here we provide a rough sketch of the proof of Theorem 4; the complete proof is given in Appendix A.

Our proof uses the following crucial observation regarding the structure of the vector z^t = x^t + η Φ^⊤(y − Φx^t). Due to the least squares step of the previous iteration, the current residual y − Φx^t is orthogonal to the columns of Φ_{I^t}, where I^t = supp(x^t). This means that

    z^t_{I^t} = x^t_{I^t} + η Φ_{I^t}^⊤(y − Φx^t) = x^t_{I^t}.        (3)

As the algorithm proceeds, elements move in and out of the current set I^t. Let us give names to the sets of found and lost elements as we move from I^t to I^{t+1}:

    F^t := I^{t+1} \ I^t  (found),    L^t := I^t \ I^{t+1}  (lost).

Using (3), the update rules for x^{t+1}, the upper RIP bound, and the fact that |F^t| = |L^t| ≤ l, we can show that (details are in Appendix A):

(4)

Furthermore, since I^{t+1} is chosen based on the largest-magnitude entries of z^t, the entries of z^t on the found set dominate those on the lost set. Plugging this into (4), we get:

(5)

The above expression shows that if the step size η is small enough relative to the upper RIP bound, then our method monotonically decreases the objective function and converges to a local optimum even if RIP is not satisfied (note that the upper RIP bound is independent of the lower RIP bound, and can always be satisfied by normalizing the matrix appropriately).

However, to prove convergence to the global optimum, we need to show that at least one new element is added at each step, i.e., F^t ≠ ∅. Furthermore, we need to show sufficient decrease of the objective. We establish both these conditions for global convergence in Lemma 6, whose proof is given in Appendix A.

Assuming Lemma 6, (5) shows that at each iteration OMPR(l) reduces the objective function value by at least a constant fraction. Hence, after a number of iterations that is logarithmic in the desired accuracy, the optimal solution is obtained within the prescribed error. ∎
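
To spell out the final counting step (a sketch under the assumption, implicit in (5), that each iteration contracts the objective by some fixed factor ρ < 1):

\[
f(x^{T+1}) \le \rho\, f(x^{T}) \le \cdots \le \rho^{T} f(x^{1}),
\qquad\text{so}\qquad
f(x^{T+1}) \le \epsilon \ \text{ as soon as } \ T \ge \frac{\log\bigl(f(x^{1})/\epsilon\bigr)}{\log(1/\rho)}.
\]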

Lemma 6.

Let F^t and L^t be the found and lost sets defined above. Then, under the conditions of Theorem 4, at least one new element is found, i.e., F^t ≠ ∅. Furthermore, the objective function decreases by a sufficiently large amount at each iteration.

Special Cases: We have already observed that the OMPR algorithm of the previous section is simply OMPR(1). Also note that Theorem 1 immediately follows from Theorem 4.

The algorithm at the other extreme, l = k, has appeared at least three times in the recent literature: as Iterative (hard) Thresholding with Inversion (ITI) in Maleki09, as SVP-Newton (in its matrix avatar) in JainMD10, and as Hard Thresholding Pursuit (HTP) in Foucart10. Let us call it IHT-Newton, as the least squares step can be viewed as a Newton step for the quadratic objective. The above general result for the OMPR family immediately implies that it recovers sparse vectors as soon as the measurement matrix satisfies a suitable RIP condition.
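
In terms of the illustrative ompr sketch given after Algorithm 2, IHT-Newton simply corresponds to calling that hypothetical routine with l = k, e.g. ompr(Phi, y, k, l=k), so that the entire support may be replaced in each iteration.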

Corollary 7.

Suppose the vector x* is k-sparse and the matrix Φ satisfies the RIP condition of Theorem 4 for l = k. Then IHT-Newton recovers x* from the measurements y = Φx* in a bounded number of iterations.

4 Tighter Analysis of Two Stage Hard Thresholding Algorithms

Recently, MalekiD10 proposed a novel family of algorithms, namely two-stage hard thresholding algorithms. During each iteration, these algorithms add a fixed number (say l) of elements to the current iterate's support set. A least squares problem is solved over the larger support set, and then the l elements of smallest magnitude are dropped to form the next iterate's support set. The next iterate is then obtained by again solving a least squares problem over the next iterate's support set. See Appendix D for a more detailed description of the algorithm.

Using the proof techniques developed for Theorem 4, we can obtain a simple proof for the entire spectrum of algorithms in the two-stage hard thresholding family.

Theorem 8.

Suppose the vector x* is k-sparse. Then the two-stage hard thresholding algorithm with replacement size l recovers x* from the measurements y = Φx* in a bounded number of iterations provided Φ satisfies a suitable RIP condition of order 2k + l.

Note that CoSaMP (NeedellT09) and Subspace Pursuit (SP) (DaiM09) are popular special cases of the two-stage family. Using our general analysis, we are able to provide significantly less restrictive RIP conditions for recovery.

Corollary 9.

CoSaMP (NeedellT09) recovers a k-sparse x* from the measurements y = Φx* provided Φ satisfies a suitable condition on δ_4k.

Corollary 10.

Subspace Pursuit (DaiM09) recovers a k-sparse x* from the measurements y = Φx* provided Φ satisfies a suitable condition on δ_3k.

Note that CoSaMP's analysis given by NeedellT09 requires δ_4k ≤ 0.1, while Subspace Pursuit's analysis given by DaiM09 requires δ_3k ≤ 0.205. See Appendix D in the supplementary material for proofs of the above theorem and corollaries.

5 Fast Implementation Using Hashing

In this section, we discuss a fast implementation of the OMPR method using locality-sensitive hashing. The main intuition behind our approach is that the OMPR method selects at most one new element at each step; hence, selecting the top element is equivalent to finding the column of Φ that is most “similar” (in magnitude of inner product) to the current residual, i.e., this may be viewed as a similarity search task with the residual and its negation as queries.

To this end, we use locality sensitive hashing (LSH) (GionisIM99), a well-known data structure for approximate nearest-neighbor retrieval. Note that while LSH is designed for nearest neighbor search (in terms of Euclidean distances) and in general might not have any guarantees for the similar-neighbor search task, we are still able to apply it to our task because we can lower-bound the similarity of the most similar neighbor.

We first briefly describe the LSH scheme that we use. LSH generates hash bits for a vector using randomized hash functions that have the property that the probability of collision between two vectors is proportional to the similarity between them. For our problem, we use the hash function h_a(u) = sign(a^⊤u), where a is a random hyperplane generated from the standard multivariate Gaussian distribution. It can be shown (GW94) that Pr[h_a(u) = h_a(v)] = 1 − θ(u, v)/π, where θ(u, v) is the angle between u and v.

Now, a multi-bit hash key is created by concatenating several such hash functions, each defined by an independently sampled standard Gaussian hyperplane. Next, several hash tables are constructed during the pre-processing stage using independently constructed hash-key functions. During the query stage, a query is indexed into each hash table using these hash-key functions and the nearest neighbors are retrieved by doing an exhaustive search over the indexed elements.
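
A minimal NumPy sketch of this indexing scheme follows; the table count, key length, and all function names are illustrative assumptions of ours, not the parameters used in the paper's experiments.

```python
import numpy as np
from collections import defaultdict

def build_lsh_tables(Phi, num_tables=10, num_bits=16, seed=0):
    """Index the columns of Phi into hash tables keyed by sign-random-projection bits."""
    rng = np.random.default_rng(seed)
    m, n = Phi.shape
    hyperplanes = [rng.standard_normal((num_bits, m)) for _ in range(num_tables)]
    tables = []
    for A in hyperplanes:
        bits = (A @ Phi) > 0                           # num_bits x n sign pattern
        table = defaultdict(list)
        for j in range(n):
            table[bits[:, j].tobytes()].append(j)      # bucket column j by its hash key
        tables.append(table)
    return hyperplanes, tables

def query_lsh(r, hyperplanes, tables):
    """Return candidate columns whose hash key matches that of the residual r (or -r)."""
    candidates = set()
    for A, table in zip(hyperplanes, tables):
        for q in (r, -r):                              # both signs, since we want large |<phi_i, r>|
            key = ((A @ q) > 0).tobytes()
            candidates.update(table.get(key, []))
    return candidates
```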

Below we state the following theorem from GionisIM99, which guarantees sub-linear time nearest-neighbor retrieval for LSH.

Theorem 11.

Suppose the hash-key length and the number of hash tables are chosen appropriately as functions of the approximation parameter. Then, with constant probability, LSH recovers approximate nearest neighbors, i.e., points whose distance to the query is within a constant factor of the distance between the query and its true nearest neighbor.

However, we cannot directly use the above theorem to guarantee convergence of our hashing based OMPR algorithm as our algorithm requires finding the most similar point in terms of magnitude of the inner product. Below, we provide appropriate settings of the LSH parameters to guarantee sub-linear time convergence of our method under a slightly weaker condition on the RIP constant. A detailed proof of the theorem below can be found in Appendix B.

Theorem 12.

Suppose Φ satisfies the RIP condition and the LSH parameter settings specified in Appendix B. Then, with high probability, OMPR with hashing converges to the optimal solution in a number of computational steps that is sub-linear in n.

The above theorem shows that the time complexity is sub-linear in n. However, currently our guarantees are not particularly strong, as the exponent of n in the running time can be close to one. We believe that the exponent can be improved by a more careful analysis, and our empirical results indicate that LSH does speed up the OMPR method significantly.

6 Experimental Results

In this section we present empirical results to demonstrate accurate and fast recovery by our OMPR method. In the first set of experiments, we present the phase transition diagram for OMPR and compare it to the phase transition diagrams of OMP and IHT-Newton with step size 1. In the second set of experiments, we demonstrate the robustness of OMPR compared to many existing methods when measurements are noisy or fewer in number than what is required for exact recovery. In the third set of experiments, we demonstrate the efficiency of our LSH-based implementation by comparing the recovery error and time required by our method with OMP and IHT-Newton (with step sizes 1 and 1/2). We do not present results for ℓ_1/basis pursuit methods, as it has already been shown in several recent papers (Foucart10; MalekiD10) that the relaxation-based methods are relatively inefficient for very large scale recovery problems.

In all the experiments we generate the measurement matrix by sampling each entry independently from the standard normal distribution and then normalizing each column to have unit norm. The underlying k-sparse vectors are generated by randomly selecting a support set of size k and then sampling each entry in the support set uniformly at random from a fixed interval. We use our own optimized implementations of OMP and IHT-Newton. All the methods are implemented in MATLAB and our hashing routine uses mex files.
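
As an illustration of this setup (the problem sizes and the sampling interval for the non-zero entries are placeholders, and ompr is the hypothetical sketch given after Algorithm 2, not the authors' MATLAB code):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 2000, 20

# Gaussian measurement matrix with unit-norm columns.
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)

# k-sparse ground truth with a random support and uniform non-zero entries.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.uniform(-1.0, 1.0, size=k)

y = Phi @ x_true
x_hat = ompr(Phi, y, k, l=1, eta=1.0, iters=200)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```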

Figure 1 panels: (a) OMPR, (b) OMP, (c) IHT-Newton.
Figure 1: Phase transition diagrams for different methods. Red represents high probability of success while blue represents low probability of success. Clearly, OMPR recovers the correct solution for a much larger region of the plot than OMP and is comparable to IHT-Newton. (Best viewed in color.)

6.1 Phase Transition Diagrams

We first compare different methods using phase transition diagrams, which are commonly used in the compressed sensing literature to compare methods (MalekiD10). We first fix the number of measurements m and generate different problem sizes by varying n and k. For each problem size (n, k), we generate random Gaussian measurement matrices and k-sparse random vectors. We then estimate the probability of success of each method by applying it to 100 randomly generated instances. A method is considered successful for a particular instance if it recovers the underlying k-sparse vector up to a small fixed relative error.
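
A sketch of this estimation loop (the relative-error threshold below is a placeholder, since the exact tolerance is not reproduced here, and ompr is the hypothetical sketch given after Algorithm 2):

```python
import numpy as np

def success_probability(m, n, k, trials=100, tol=1e-2, seed=0):
    """Fraction of random instances on which OMPR attains small relative recovery error."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(trials):
        Phi = rng.standard_normal((m, n))
        Phi /= np.linalg.norm(Phi, axis=0)              # unit-norm columns
        x_true = np.zeros(n)
        supp = rng.choice(n, size=k, replace=False)
        x_true[supp] = rng.uniform(-1.0, 1.0, size=k)
        y = Phi @ x_true
        x_hat = ompr(Phi, y, k)                          # hypothetical sketch from earlier
        rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
        successes += rel_err <= tol
    return successes / trials
```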

In Figure 1, we show the phase transition diagram of our OMPR method as well as those of OMP and IHT-Newton (with step size 1). The plots show the probability of successful recovery as a function of the problem parameters n and k (with m fixed). Figure 1 (a) shows the color coding of the different success probabilities; red represents high probability of success while blue represents low probability of success. Note that for Gaussian measurement matrices, the RIP constant at sparsity level k is smaller than a fixed constant when m ≥ C k log(n/k), where C is a universal constant. Hence, a method that provably recovers under a weaker RIP condition should exhibit a larger region of the phase transition diagram where the probability of successful recovery is high. We observe this phenomenon for both OMPR and IHT-Newton, which is consistent with their respective theoretical guarantees (see Theorem 4). On the other hand, as expected, the phase transition diagram of OMP has a negligible fraction of the plot that shows high recovery probability.

6.2 Performance for Noisy or Under-sampled Observations

Next, we empirically compare the performance of OMPR to various existing compressed sensing methods. As shown in the phase transition diagrams in Figure 1, OMPR provides recovery comparable to the IHT-Newton method in the noiseless case. Here, we show that OMPR is fairly robust in the noisy setting as well as in the case of under-sampled observations, where the number of observations is much smaller than what is required for exact recovery.

For this experiment, we generate a random Gaussian measurement matrix. We then generate a random binary vector of sparsity k and add Gaussian noise to it. Figure 2 (a) shows the recovery error incurred by various methods for increasing k at a fixed noise level. Clearly, our method outperforms the existing methods, perhaps a consequence of its guaranteed convergence to a local minimum for a fixed step size. Similarly, Figure 2 (b) shows the recovery error incurred by various methods for fixed k and varying noise levels. Here again, our method outperforms the existing methods and is more robust to noise. Finally, in Figure 2 (c) we show the difference in error incurred by IHT-Newton and OMPR, along with confidence intervals (at the 95% level), for varying noise levels and values of k. Our method is better than IHT-Newton (at the 95% level) in terms of recovery error in around 30 cells of the table, and is not worse in any of the cells but one.

Noise \ k      10            30            50
0.00           0.00 (0.0)   -0.21 (0.6)    0.25 (0.3)
0.05           0.00 (0.0)    0.13 (0.3)    0.37 (0.3)
0.10           0.00 (0.0)    0.28 (0.3)    0.63 (0.4)
0.20           0.03 (0.0)    0.62 (0.2)    0.58 (0.5)
0.30           0.18 (0.1)    0.92 (0.3)    0.92 (0.6)
0.40           0.31 (0.1)    1.19 (0.3)    0.84 (0.5)
0.50           0.37 (0.1)    1.48 (0.3)    1.24 (0.6)
Figure 2: Error in recovery of sparse vectors from a fixed number of noisy measurements. (a): Error incurred by various methods as the sparsity level k increases. Note that OMPR incurs the least error, as it provably converges to at least a local minimum for a fixed step size. (b): Error incurred by various methods as the noise level increases. Here again OMPR performs significantly better than the existing methods. (c): Difference in error incurred by IHT-Newton and OMPR, i.e., Error(IHT-Newton) − Error(OMPR), shown in the table above. Numbers in brackets denote confidence intervals at the 95% level.
Figure 3: (a): Error incurred by various methods as the sparsity level k increases. The measurements are computed by generating vectors with a fixed support size. (b), (c): Error incurred and time required by various methods to recover vectors of a given support size as the dimensionality n increases. IHT-Newton(1/2) refers to the IHT-Newton method with step size 1/2.

6.3 Performance of LSH based implementation

Next, we empirically study the recovery properties of our LSH-based implementation of OMPR (OMPR-Hash) in the following real-time setup: generate a random measurement matrix from the Gaussian ensemble and construct hash tables offline using the hash functions specified in Section 5. Then, during the reconstruction stage, measurements arrive one at a time and the goal is to recover the underlying signal accurately in real time. For our experiments, we generate measurements using random sparse vectors and then report the recovery error and computational time required by each method, averaged over multiple runs.

In our first set of experiments, we empirically study the performance of the different methods as the input sparsity level k increases. Here, we fix the number of measurements and the dimensionality, and generate measurements using random vectors of fixed support size. We then run the different methods to estimate vectors of support size k that minimize the residual error. For our OMPR-Hash method, we fix the hash-key length and the number of hash tables. Figure 3 (a) shows the error incurred by OMPR, OMPR-Hash, and IHT-Newton for different k (recall that k is an input to both OMPR and IHT-Newton). Note that although OMPR-Hash performs an approximation at each step, it is still able to achieve error similar to OMPR and IHT-Newton. Also note that, since the number of measurements is not enough for exact recovery by the IHT-Newton method, it typically diverges after a few steps. As a result, we use IHT-Newton with step size 1/2, which is always guaranteed to monotonically converge to at least a local minimum (see Theorem 4). In contrast, OMPR and OMPR-Hash can always set the step size aggressively to 1.

Next, we evaluate OMPR-Hash as the dimensionality of the data increases. For OMPR-Hash, we again fix the hash-key length and the number of hash tables. Figures 3 (b) and (c) compare the error incurred and time required by OMPR-Hash with OMPR and IHT-Newton. Here again we use step size 1/2 for IHT-Newton, as it does not converge with step size 1. Note that OMPR-Hash is an order of magnitude faster than OMPR while incurring a slightly higher error. OMPR-Hash is also substantially faster than IHT-Newton.

Appendix A Proofs related to OMPR: Exact Recovery Case

Let us denote the objective function ‖y − Φx‖_2^2 by f(x). Let I^t denote the support set of x^t and I* denote the support set of x*. Define the sets

FA^t := I^t \ I*   (false alarms)
MD^t := I* \ I^t   (missed detections)

As the algorithms proceed, elements move in and out of the current set I^t. Let us give names to the sets of found and lost elements as we move from I^t to I^{t+1}:

F^t := I^{t+1} \ I^t   (found),    L^t := I^t \ I^{t+1}   (lost)

We first state two technical lemmas that we will need. These can be found in [19].

Lemma 13.

For any , we have,

Lemma 14.

For any such that , we have,

Proof of Theorem 4

Lemma 15.

Let . Then, in OMPR (),

Proof.

Since is the solution to the least squares problem ,

(6)

Now, note that

(7)

Hence,

(8)

That is,

where the third line follows from the fact that , , and are disjoint sets.

As and , the above inequality implies

Next, we provide a lemma that bounds the function value in terms of missed detections and also .

Lemma 16.

Let , , and . Then, at each step,

(9)
Proof.

Now, using Lemma 2 of [4] with , , we get

(10)
(11)

The assumption that and implies that . The function is decreasing on and hence (11) implies

(12)

Next, using (7) and the Cauchy-Schwarz inequality:

(13)

The result now follows using the above equation with (12). ∎

Lemma 17.

Let F^t and L^t be the found and lost sets defined above. Then, under the conditions of Theorem 4, at least one new element is found, i.e., F^t ≠ ∅. Furthermore, the objective function decreases by a sufficiently large amount at each iteration.

Proof.

We consider the following three exhaustive cases:

  1. and : Let , s.t., . Now,

    Now, as consists of top elements of :

    (14)

    Furthermore, since , hence every element of is smaller in magnitude than every element of , otherwise that element should have been included in . Furthermore, . Hence,

    (15)

    Adding (14) and (15), we get:

    (16)

    Using above equation along with Lemma 15, we get:

    (17)

    Now, note that if , then implying that . Hence, at least one new element is added, i.e., .

  2. : By definition of :

    Hence, using Lemma 16 and the fact that :