An Efficient Greedy Algorithm for Sparse Recovery in Noisy Environment


Abstract

Greedy algorithms are in widespread use for sparse recovery because of their efficiency. However, the most popular greedy algorithms, such as CoSaMP, have evident flaws, including unreasonable demands on prior knowledge of the target signal and excessive sensitivity to random noise. A new greedy algorithm called AMOP is proposed in this paper to overcome these obstacles. Unlike CoSaMP, AMOP extracts the necessary information about the target signal from the sample data adaptively and operates normally with little prior knowledge. The recovery error of AMOP is well controlled when random noise is present and fades away as the SNR increases. Moreover, AMOP is robust to the detailed setting of the target signal and depends less on the structure of the measurement matrix. The validity of AMOP is verified by theoretical derivation, and extensive simulation experiments illustrate its advantages over CoSaMP in many respects. AMOP is a good candidate for a practical greedy algorithm in various applications of Compressed Sensing.

Index Terms: Compressed Sensing, Greedy Algorithm, Sparse Recovery, Noisy Environment.

1 Introduction


The sparse signal recovery problem is the reconstruction of signals characterized by "sparsity" from a set of nonadaptive linear measurements. It has great potential for application in various engineering fields such as coding and information theory, signal processing, machine learning and others (see papers on the website [1] and references therein). Sparse signals contain much less information than their ambient dimension suggests: most of the entries in their vector representation are zero (or negligible). So it is possible to reconstruct the original signal approximately, or even exactly, using only a small number of linear measurements. The measurements are of the form y = Φx, where Φ is a measurement matrix and x is the original signal. Clearly, the process of sparse signal recovery can be formulated as solving the underdetermined linear equation Φx = y, where y is the measurement data. It is well known that an underdetermined equation has infinitely many solutions in general, but if we focus on "sparse" solutions only, the situation is different. Although finding the sparsest solution of an underdetermined linear equation is NP-hard in general [2], theoretical work in compressed sensing has shown that, for certain kinds of measurement matrices, recovery is possible when the number of measurements m is nearly linear in the sparsity of the original signal [3][4].

The two major algorithmic approaches to sparse signal recovery are based on ℓ1-minimization and on greedy methods (Matching Pursuit). Finding the sparsest solution of the underdetermined linear equation is the ℓ0-optimization problem

min ‖x‖_0  subject to  Φx = y.    (1)

It can be solved by ℓ1 relaxation for some measurement matrices [5]. That is, solving (1) is equivalent to solving the following optimization problem

min ‖x‖_1  subject to  Φx = y,    (2)

where ‖x‖_1 = Σ_i |x_i|. Recently, a stronger sufficient condition on the measurement matrix, called the Restricted Isometry Property (RIP), was proposed to guarantee the equivalence of (1) and (2) [6]. It is widely accepted that ℓ1-minimization (2) is the standard path to sparse signal recovery. Problem (2) is essentially a linear program, and techniques of convex optimization can be used to solve it effectively [7]. The ℓ1-minimization method provides uniform guarantees for sparse recovery: once the measurement matrix satisfies the restricted isometry condition, the method works correctly for all sparse signals x. The method is also stable, so it works for non-sparse signals, such as compressible signals, as well as noisy signals. However, the method is based on linear programming, and there is no strongly polynomial time algorithm for linear programming [8]. Its efficiency is therefore questionable, and the most popular convex programming packages, such as cvx [9], are hard to use in practical applications because of their low rate of convergence, especially when the dimension of the target signal is large.
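
For illustration, the following Python sketch recasts the ℓ1 problem (2) as a linear program and solves it with a generic LP solver; the variable splitting, the SciPy routine and the toy dimensions are our own assumptions rather than part of the original text.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 subject to Phi @ x = y by recasting it as a linear program.

    Split x = p - q with p, q >= 0, so that ||x||_1 = sum(p + q) and the
    equality constraint becomes Phi @ (p - q) = y.
    """
    m, N = Phi.shape
    c = np.ones(2 * N)                     # objective: sum(p) + sum(q)
    A_eq = np.hstack([Phi, -Phi])          # Phi @ p - Phi @ q = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    p, q = res.x[:N], res.x[N:]
    return p - q

# Toy usage: a 5-sparse signal in R^256 measured with a Gaussian matrix.
rng = np.random.default_rng(0)
N, m, s = 256, 80, 5
x = np.zeros(N); x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
x_hat = basis_pursuit(Phi, Phi @ x)
print(np.linalg.norm(x - x_hat))           # should be close to zero
```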

On the other hand, greedy algorithms are quite fast, both theoretically and experimentally. They generally run by iterating: on each iteration, the inner products of the residue vector with the columns of the measurement matrix are computed, and a least squares problem is solved to obtain the estimate of the original signal for that iteration. It is hoped that the convergence of the iteration can be ensured and that the estimator tends to the original signal in few steps [10].

Typical greedy algorithms for sparse recovery include Orthogonal Matching Pursuit (OMP) [11], Regularized Orthogonal Matching Pursuit (ROMP) [8] and Compressive Sampling Matching Pursuit (CoSaMP) [12]. It was shown that OMP recovers a sparse signal with high probability and has great speed, but it fails for some sparse signals and matrices [13]. ROMP provides a greedy algorithm with uniform guarantees for sparse recovery, just as the ℓ1-minimization method does, and CoSaMP improves upon these results and provides rigorous runtime guarantees. However, these two algorithms share a disadvantage: the sparsity level must be supplied as a prior parameter. It is unknown in most practical scenarios and must be guessed in advance. Once the estimated sparsity level differs significantly from the actual one, the error of the algorithm increases evidently (although this error can be analyzed theoretically [12]). The problem becomes more severe in a noisy environment. ROMP and CoSaMP cannot adapt their running process to the noise condition when noise is present. Noise is inevitable in engineering problems, and the target signal of our recovery algorithms is always buried in it; there exist few "purely sparse" signals in the real world. Because the sparsity level of the target signal is unknown, it is always guessed with some margin to avoid possible misses, so it is hard to extract the target signal without including a certain amount of noise. This not only affects the accuracy of the algorithm but also reduces its speed of convergence: some computation is carried out to estimate components that are in fact noise, which is useless.

A new greedy algorithm for sparse recovery is presented in this paper. Compared with ROMP and CoSaMP, our new algorithm needs no prior information on the sparsity level of the target signal. Furthermore, it is an "adaptive" algorithm that can detect the existence of noise and adjust the halting condition automatically based on the detailed state of the noise. Besides that, it has uniform guarantees and good efficiency, just like ROMP or CoSaMP. Hence our new algorithm is a better choice when a signal with unknown sparsity level, such as a compressible signal, is to be extracted against a noisy background. This paper is organized as follows. Section 2 introduces the new algorithm. Section 3 describes some consequences of the restricted isometry property that pervade our analysis; the convergence result for sparse signals is also established there. Practical considerations for algorithm implementation are provided in Section 4. Empirical performance and numerical experiments are described in Section 5. Finally, Section 6 presents the overall conclusion.

2 Description of new algorithm

2.1 Motivation

The most difficult and important part of signal reconstruction is to identify the locations of the components of the target signal x. The common approach adopted by most greedy algorithms is "local approximating", that is, computing the inner products between the measurement vector y and the columns of the measurement matrix Φ. We obtain the observation vector u = Φ*y and use it as a "local approximation" (or "proxy") of the target signal x. Note that Φ is a dictionary and x is sparse, so y has a sparse representation with respect to the dictionary. It is reasonable that only a few entries of u are remarkable, which imply the locations of the components of x, and most of its entries are comparatively small. Of course, the precondition for the argument above is that Φ must satisfy some condition such as the RIP. Intuitively, given the sparsity level s of x, every s columns of Φ form approximately an orthonormal system. Therefore, the coordinates of the observation vector u look like correlations of the measurement vector y with an orthonormal basis and are therefore close in the Euclidean norm to the corresponding coefficients of x.

Popular greedy algorithms, including OMP, ROMP and CoSaMP, pay much attention to the observation vector u and build their estimate of the locations of the components of x from it. OMP uses the single biggest coordinate of u. It has been argued that using only the biggest coordinate cannot provide uniform guarantees, so ROMP makes use of the s biggest coordinates of u, rather than just the biggest one, and takes a further regularization step to improve performance. CoSaMP employs even more coordinates of u, the 2s biggest, to avoid possibly missing components of x. It should be noted that the sparsity level s is usually unknown, yet it must be supplied to ROMP or CoSaMP as an important input parameter. If the guessed s is smaller than its true value, the correct result cannot be found; on the other hand, if the guessed s is set to a very large value (maybe much larger than the true value) to ensure that all components of x enter the view of the algorithms, a certain amount of noise inevitably enters the calculation. How can we make a good guess for the sparsity level without any knowledge of its true value?
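
For concreteness, a minimal sketch of this shared proxy step is given below, assuming the standard notation Φ for the measurement matrix and r for the residue; OMP, ROMP and CoSaMP differ only in how many of the largest proxy coordinates they keep.

```python
import numpy as np

def proxy_support(Phi, residue, k):
    """Return the indices of the k largest-magnitude entries of the proxy u = Phi* r."""
    u = Phi.conj().T @ residue          # local approximation ("proxy") of the target signal
    return np.argsort(np.abs(u))[::-1][:k]

# OMP keeps k = 1 index per iteration, ROMP keeps k = s (plus a regularization step),
# and CoSaMP keeps k = 2s, where s is the assumed sparsity level.
```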

2.2 AMOP Algorithm

Our new approach, named Adaptive Orthogonal Matching Pursuit (AMOP), chooses an appropriate number of the biggest entries of the observation vector u by studying its fine structure. At each step, u = Φ*r is computed, where r is the residue vector of the previous step. Unlike other algorithms, AMOP determines how many coordinates to select by analyzing the trend of the entries of u arranged in descending order of magnitude. That is, the relative amplitude difference of adjacent elements in this ordered queue is calculated and compared with a threshold. Once the relative amplitude difference between the k-th and (k+1)-th elements of the ordered queue exceeds the threshold, k is chosen as the number of coordinates selected in this step.

A detailed description of AMOP is as follows:

Algorithm 1 (Adaptive Orthogonal Matching Pursuit)

  • Input: measurement matrix Φ, measurement vector y, threshold, and halting tolerance.

  • Step 1: Let r = y and Λ = ∅.

  • Step 2: Calculate the proxy u = Φ*r.

  • Step 3: Arrange the entries of u in descending order of magnitude to obtain the queue

    |u_{i_1}| ≥ |u_{i_2}| ≥ … ≥ |u_{i_N}|.    (3)

  • Step 4: Determine the index k as the smallest position in the queue (3) at which the relative amplitude difference between the k-th and (k+1)-th elements exceeds the threshold.

  • Step 5: If k exceeds the prescribed upper bound on the number of coordinates selected per iteration, decrease the threshold and go to step 4; otherwise accept k.

  • Step 6: Update the set of indices by Λ = Λ ∪ {i_1, …, i_k}.

  • Step 7: Solve the least squares problem a = argmin_z ‖y − Φ_Λ z‖_2,

    where Φ_Λ is the submatrix of Φ composed of its columns with indices in Λ.

  • Step 8: Calculate the residue vector r = y − Φ_Λ a.

  • Step 9: If ‖r‖_2 falls below the halting tolerance, output Λ and a and stop; otherwise go to step 2.

As input, the AMOP algorithm requires two adjustable parameters, the threshold and the halting tolerance, besides the matrix Φ and the measurement vector y. But it no longer needs the sparsity level of the target signal, unlike ROMP and CoSaMP; the sparsity level can be extracted incidentally as the algorithm runs. Furthermore, the number of components selected in step 4 is determined by the algorithm itself automatically. It is easy to understand that this number is critical for the performance of the algorithm; any manual setting introduces extra error when there is a mismatch between the prescribed value and the actual situation of the data. So it is very necessary for a greedy sparse recovery algorithm to be adaptive, as AMOP is.

Step 5 deserves attention because it gives AMOP more flexibility and stability. If the threshold is set too large, so that too many coordinates are selected in one iteration, the algorithm is prone to degrade or crash. For this reason an upper bound is enforced in step 5 to prevent uncontrolled growth of the number of chosen components. If this bound is exceeded, the threshold is adjusted to a smaller value to increase the possibility that components in the queue (3) satisfy the condition in step 4. The importance of step 5 is also illustrated in the following section on the analysis of convergence.
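
The following Python sketch illustrates one possible realization of the loop above. The parameter names (threshold, max_select, tol) and the exact form of the gap test are illustrative assumptions reconstructed from the description, not the authors' reference implementation, and the least squares step uses a plain solver instead of the recursive orthogonalization of Section 4.

```python
import numpy as np

def amop(Phi, y, threshold=0.5, max_select=10, tol=1e-6, max_iter=100):
    """Sketch of Adaptive Orthogonal Matching Pursuit (AMOP).

    At each iteration the proxy u = Phi* r is sorted by magnitude and the number
    of newly selected coordinates is chosen adaptively: selection stops at the
    first large relative gap between adjacent sorted magnitudes.
    """
    N = Phi.shape[1]
    r = y.copy()                                   # step 1: residue and empty support
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        u = Phi.conj().T @ r                       # step 2: proxy of the target signal
        order = np.argsort(np.abs(u))[::-1]        # step 3: descending magnitudes
        mags = np.abs(u[order])
        tau = threshold
        while True:                                # steps 4-5: adaptive choice of k
            gaps = (mags[:-1] - mags[1:]) / (mags[:-1] + 1e-12)
            hits = np.nonzero(gaps > tau)[0]
            k = hits[0] + 1 if hits.size else max_select
            if k <= max_select:
                break
            tau *= 0.5                             # bound exceeded: shrink the threshold
        support = np.union1d(support, order[:k])   # step 6: enlarge the index set
        a, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)   # step 7
        r = y - Phi[:, support] @ a                # step 8: new residue
        if np.linalg.norm(r) <= tol:               # step 9: halting condition
            break
    x_hat = np.zeros(N, dtype=a.dtype)
    x_hat[support] = a
    return x_hat, support
```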

3 Theoretical Analysis of Algorithmic Performance

There are two kinds of iteration invariants of greedy algorithms for sparse recovery, deduced in the convergence analyses of ROMP and CoSaMP respectively. For CoSaMP, assume the sparsity level s is known in advance and

y = Φx + e,

where x_s is the s-sparse approximation of x and e is additive noise; then the following assertion can be proved [12]:

‖x − a^(k+1)‖_2 ≤ (1/2) ‖x − a^k‖_2 + C (‖x − x_s‖_2 + ‖x − x_s‖_1/√s + ‖e‖_2),    (4)

where a^(k+1) is the result of the pruning step in the (k+1)-th iteration of CoSaMP and is therefore forced to be s-sparse. This kind of iteration invariant is not suitable for the analysis of AMOP, because the sparsity level of the intermediate result at each step of AMOP is not fixed. However, the iteration invariant in ROMP simply concerns the percentage of coordinates in the newly selected set that belong to the support of the target signal x. It is argued that this ratio is not lower than 50%, thanks to the regularization step [8]. We will follow the idea in [8] to derive our result on the convergence of AMOP.

3.1 Localization of Energy

We study the gain in each iteration by induction on the iterations of AMOP. Without loss of generality, suppose the sparsity level of the target signal is s and that k coordinates are selected in the current iteration. Then the percentage of energy captured by the first k components of the queue (3) is

(5)

Due to the descending order of the queue (3), we have

(6)

It is easily seen that (6) achieves its minimum at k = 1, that is

(7)

Here k = 1 means that only the single largest coordinate is chosen, which is just the choice of OMP; so OMP is a special case of AMOP. According to [8], Lemma 3.6, a large portion of the energy of the unidentified part of the target signal is locked by the queue (3), and a certain amount of that energy is retained by the "regularization step" of ROMP by [8], Lemma 3.8. In AMOP, the regularization step is replaced by choosing the k largest coordinates, so more energy is generally identified by AMOP than by OMP, because more than one coordinate is usually chosen. The ability of AMOP and ROMP to lock the uncovered energy of the target signal is compared in Fig. 1.

Figure 1: Ability of Locking Uncovered Energy

It is shown that the ability of AMOP is superior to that of ROMP when the sparsity of the target signal is small. Although the advantage becomes less distinct as the sparsity grows, AMOP still has relatively good performance, considering that more coordinates are chosen and the percentage of actually identified energy is larger than (7).

3.2 Getting the ”Correct” Support

In [14] and [15], the correctness of the support of the solution of the OGA algorithm under different noise scenarios was analyzed. The mutual coherence of the matrix Φ, viewed as an overcomplete dictionary, was the key parameter for the performance of the greedy algorithm in those discussions. Here we give analogous results for AMOP under noisy conditions based on the RIC (Restricted Isometry Constant) [3] of Φ.

In practice noise is unavoidable, and it is always assumed that the ideal noiseless signal has a sparse representation b = Φx, where the support of x is very small. What we can observe is the noisy version y = b + n, where ‖n‖_2 ≤ ε. Suppose x^0 is the solution of

(8)

x̂ is the final result of AMOP, and

(9)

We argue that, under certain conditions on the value distribution of the target signal, every coordinate selected by AMOP belongs to the true support. That is, the correctness of the support of the solution of AMOP can be guaranteed even in a noisy environment.

Proposition 1

If the target signal x satisfies

(10)

where the matrix Φ is supposed to have restricted isometry constant δ_{2s} of order 2s, with 2s being twice the sparsity level of the target signal x, then every coordinate chosen by AMOP belongs to the support of x throughout the iteration process, unless all the coordinates in the support have already been chosen.

Proof.

We proceed by induction. The claim holds trivially at initialization, since the chosen index set is empty. Assume it is true at the beginning of a given iteration; we prove it remains true at the beginning of the next iteration. Considering the case in which not all support coordinates have been chosen yet, we have

(11)

It is trivial that

(12)

because a is the solution of the least squares optimization in step 7 of AMOP,

(13)

Multiplying both sides of (12),

(14)

we have

(15)

where I is the identity matrix.

The norms of the terms above are now estimated and bounded. Firstly, because the projection matrix onto the orthogonal complement of the subspace spanned by the chosen columns has 2-norm equal to 1,

(16)

Secondly,

(17)

According to [12], Proposition 3.2, we have

(18)

and

(19)

On the other hand, according to the definition of the RIC, we obtain

(20)

Hence

(21)

Thirdly, by an analogous deduction,

(22)

Summarizing the results above, we have

(23)

A corresponding lower bound is considered similarly. For the support coordinates not yet chosen,

(24)

Hence

(25)

If the target signal satisfies (10), we have

(26)

That is to say, the chosen coordinates remain inside the support of the target signal throughout the iteration process of AMOP, unless all the coordinates in the support have been chosen.

It is argued that the value distribution of the target signal, the power of the noise and the RIC of the matrix Φ are all critical to the performance of greedy algorithms. The proposition above gives a general condition for the correctness of the support of the solutions of a large class of greedy algorithms (not just AMOP) that use inner products between the residue and the dictionary vectors (columns of Φ) to obtain information about the support of the target signal. Condition (10) may seem too restrictive, but it is easy to see from (10) that the "dynamic scope" of the target signal (that is, the gap between the elements with maximal and minimal magnitude) depends on the RIC of the matrix Φ and the noise power ε. Considering the requirement on the RIC in ROMP, which depends on the sparsity level s of the target signal according to [8], Theorem 3.1, we can write (10) as

(27)

with the maximum normalized to 1. It is depicted in Fig. 2 for a noise level of 0.1 (SNR of 20 dB). When the sparsity level is small, target signals with a considerable "dynamic scope" are guaranteed to perform well in greedy algorithms. The restriction on the "dynamic scope" of the target signal tightens gently as the sparsity level increases. The actual number of coordinates chosen in an iteration of AMOP in practical scenarios is generally smaller than the sparsity level, so AMOP chooses correct coordinates in most cases. This assertion will be illustrated further in the numerical experiments.

Figure 2: Ratio Between Maximum and Minimum of Element Norm of Target Signal

4 Practical Consideration For Algorithm Implementation

4.1 Least Square via Orthogonalization

For an efficient implementation of AMOP, it is important to design an appropriate computational scheme, with low complexity and good numerical stability, to solve the least squares problem in step 7 of AMOP. It should be noticed that AMOP is incremental: in each iteration some new coordinates are selected and none of the previously chosen coordinates is excluded, unlike CoSaMP. So it is possible to construct a recursive algorithm to solve the least squares problem.

Assume that on the first iteration several coordinates are chosen, and denote the submatrix of Φ formed by the corresponding columns as Φ_1. Then the observation vector y is linearly approximated with the columns of Φ_1, and the coefficient vector a is computed by solving the normal equations

Φ_1* Φ_1 a = Φ_1* y.    (28)

An ordinary solver with good numerical behavior, such as the QR decomposition or the singular value decomposition, can be used to compute a. From a geometric point of view, computing Φ_1 a is equivalent to projecting y onto the subspace spanned by the columns of Φ_1. We write this intuitively as

(29)

On the next iteration, the set of columns of Φ corresponding to the newly chosen coordinates is added to the least squares regression. The projection becomes

(30)

It is well known that orthogonalization can simplify this calculation, since the projection onto the span of two mutually orthogonal subspaces can be regarded as the sum of the projections onto each of them. So the projection is written as

(31)

where the first part is the projection of the newly added columns onto the previously spanned subspace and the second part is orthogonal to it. Hence

(32)

This can be accomplished with the Gram-Schmidt orthogonalization procedure. Without loss of generality, suppose

then

(33)

using the following procedure

(34)

where ⟨·,·⟩ denotes the inner product of vectors in Euclidean space. We obtain

The projection in (32) can then be written as

(35)

The numerical stability of the Gram-Schmidt orthogonalization procedure can be improved further [16]. Instead of computing the vectors as in (34), they are computed as

(36)

This approach gives the same result as the original formula in exact arithmetic, but it introduces smaller errors in finite-precision arithmetic.

There is one point worth mentioning. Although other orthogonalization algorithms, such as Householder transformations or Givens rotations, are more stable than the stabilized Gram-Schmidt process, they produce all the orthogonalized vectors only at the end. In contrast, the Gram-Schmidt procedure produces the k-th orthogonalized vector after the k-th iteration, and this makes it the natural choice for an iterative algorithm like AMOP.
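
As a concrete illustration, the following Python sketch performs the incremental least squares update with the stabilized (modified) Gram-Schmidt procedure described above; the function names and the real-valued setting are illustrative assumptions, not code from the paper.

```python
import numpy as np

def mgs_extend(Q, new_cols, eps=1e-12):
    """Extend an orthonormal basis Q with new columns using modified Gram-Schmidt.

    Each new column is orthogonalized against the already accepted basis vectors
    one at a time (the stabilized variant), then normalized and appended.
    """
    for col in new_cols.T:
        v = col.astype(float)
        for q in Q.T:                       # subtract projections sequentially
            v = v - (q @ v) * q
        norm = np.linalg.norm(v)
        if norm > eps:                      # skip (numerically) dependent columns
            Q = np.column_stack([Q, v / norm])
    return Q

def project(Q, y):
    """Projection of y onto the subspace spanned by the columns of Q."""
    return Q @ (Q.T @ y)

# Usage: grow the basis as new dictionary columns are selected, then update the residue.
rng = np.random.default_rng(1)
Phi, y = rng.standard_normal((50, 200)), rng.standard_normal(50)
Q = np.empty((50, 0))
Q = mgs_extend(Q, Phi[:, [3, 17]])          # columns chosen in the first iteration
Q = mgs_extend(Q, Phi[:, [42]])             # column added in the next iteration
residue = y - project(Q, y)                 # residue with respect to the enlarged subspace
```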

4.2 Resource Requirements

AMOP was designed to be a practical method for sparse signal recovery. The main barrier to algorithmic efficiency is the least squares problem in step 7 of AMOP, and the recursive orthogonalization procedure above mitigates the computational burden dramatically. Furthermore, the orthogonalization technique has the additional advantage that it interacts with the matrix Φ only through its action on vectors; in fact, it only involves inner products and additions of columns of Φ. It follows that the algorithm performs better when the sampling matrix has a fast matrix-vector multiply, for example on parallel computational platforms. On the other hand, lower memory consumption is another advantage of recursive orthogonalization based least squares. Direct methods such as the SVD and the QR decomposition have a storage cost proportional to the product of the number of chosen coordinates and the number of rows of Φ, which is too large for large scale problems. For AMOP, only one working vector needs to be kept in memory during the recursive orthogonalization calculation, so the additional storage cost is a single column's worth of memory. This makes AMOP more suitable for implementation in VLSI circuits.

We estimate the time complexity of main steps in AMOP as follows:

  • Step 2: In this step, the inner products of the residue vector with the columns of the matrix Φ are computed as a proxy for the support of the target signal. The cost is bounded by that of the matrix-vector multiply Φ*r, which is O(mN) with the standard multiply operation, or O(L) when a fast matrix-vector multiply with cost L is available.

  • Step 3: According to a standard textbook on algorithms [17], the expected time for selecting the k largest entries of a vector of dimension N is O(N). Using efficient schemes such as QuickSort or HeapSort, a full sorting of the vector can be completed in expected time O(N log N), after which the largest entries can be read off directly, which is faster in some situations.

  • Steps 4 & 5: A certain number of support coordinates of the target signal are identified in these two steps. Although the threshold sometimes needs to be adjusted according to step 5 and several cycles of operations may be necessary, the total cost is still O(N).

  • Step 7: The main advantage of AMOP is the recursive orthogonalization based implementation of the least squares problem. The inner products of vectors involved in the orthogonalization occupy most of the computational resources and can be implemented efficiently by matrix-vector multiplies. The cost is O(mN) with the standard multiply operation or O(L) with a fast matrix-vector multiply.

Table 1 summarizes the discussion above for the standard multiply operation and for a fast matrix-vector multiply with cost L, respectively.

Step    Standard    Fast
2       O(mN)       O(L)
3       O(N)        O(N)
4 & 5   O(N)        O(N)
7       O(mN)       O(L)
Total   O(mN)       O(L + N)
Table 1: Time Complexity of AMOP

The storage cost of AMOP is also considered to show its practicability. Aside from the storage required by the sampling matrix, the AMOP algorithm constructs only one vector of length N as the signal proxy. The sample vectors have length m, so they require O(m) storage. The signal approximations require at most O(N) storage, and the index sets that appear require only O(N) storage. The total storage is therefore at worst O(N).

5 Numerical Experiment

In this section, numerical experiments are conducted to illustrate the signal recovery performance of AMOP. Three factors are considered in the numerical testing of AMOP: the construction of the measurement matrix, the value distribution of the target signal, and the SNR condition; each is examined in our experiments.

5.1 Construction of Matrix

The properties of the measurement matrix are critical to the performance of any greedy algorithm for sparse recovery. As indicated in the section on theoretical analysis, its RIC has a direct influence on the recovery probability of the algorithms. Here, several kinds of matrices were built and used to test the performance of AMOP, including the well-known Gaussian random matrix, the Bernoulli random matrix and the random Fourier matrix.

The target signal was set to be flat and no noise was added; 500 independent trials were then performed. Figures 3-5 depict the percentage (over the 500 trials) of sparse flat signals that were reconstructed exactly when a Gaussian random matrix was chosen as the measurement matrix Φ. The plots were generated for various levels of sparsity s. The horizontal axis represents the number of measurements m, and the vertical axis represents the exact recovery percentage.

For comparison, the recovery percentage of the CoSaMP algorithm is also given under the same settings. Standard CoSaMP needs the sparsity of the target signal as important prior knowledge, which is widely regarded as one of its main drawbacks. In our experiments, the true sparsity of the target signal was given to CoSaMP as an input parameter to guarantee that the power of CoSaMP is fully exploited.
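
A sketch of the kind of trial loop that produces such curves is given below; the dimensions, trial count and success criterion are assumptions for illustration only.

```python
import numpy as np

def recovery_percentage(recover, N=256, s=20, m_list=range(40, 241, 20),
                        trials=500, tol=1e-4, seed=0):
    """Fraction of flat s-sparse signals exactly recovered, for each number of measurements m.

    `recover` is any routine mapping (Phi, y) to an estimated signal of length N,
    e.g. recover = lambda Phi, y: amop(Phi, y)[0].
    """
    rng = np.random.default_rng(seed)
    results = {}
    for m in m_list:
        hits = 0
        for _ in range(trials):
            x = np.zeros(N)
            x[rng.choice(N, s, replace=False)] = 1.0          # flat sparse signal
            Phi = rng.standard_normal((m, N)) / np.sqrt(m)    # Gaussian measurement matrix
            x_hat = recover(Phi, Phi @ x)
            hits += np.linalg.norm(x - x_hat) <= tol * np.linalg.norm(x)
        results[m] = 100.0 * hits / trials
    return results
```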

Figure 3: Recovery Percentage of Signal with Sparsity 4 with Gaussian Matrix

Figure 4: Recovery Percentage of Signal with Sparsity 20 with Gaussian Matrix

Figure 5: Recovery Percentage of Signal with Sparsity 36 with Gaussian Matrix

It should be noted that even when the sparsity of the target signal is known beforehand (which is rarely possible in practice), the recovery percentage of CoSaMP is lower than that of AMOP. Especially when the sparsity is relatively large, the performance of CoSaMP degenerates very rapidly; on the contrary, the behavior of AMOP is very stable. According to the well-known theoretical result of Compressed Sensing, for a Gaussian random measurement matrix Φ with m rows and N columns, if the sparsity s satisfies

m ≥ C · s · log(N/s),    (37)

where C is a constant independent of s, m and N, then the probability of recovery failure is exponentially small [3]. Our experimental results provide an empirical estimate of the constant C for AMOP with the Gaussian measurement matrix.

Figure 6: Recovery Percentage of Signal with Sparsity 4 with Bernoulli Matrix

Figure 7: Recovery Percentage of Signal with Sparsity 20 with Bernoulli Matrix

Figure 8: Recovery Percentage of Signal with Sparsity 36 with Bernoulli Matrix

Figures 6-8 depict the corresponding results for the Bernoulli random measurement matrix, which are analogous to the Gaussian case. It has been proved that condition (37) is also sufficient for an overwhelming probability of successful recovery with a binary Bernoulli measurement matrix [18]. It is observed that the constant C for AMOP in the Bernoulli case is probably the same as that in the Gaussian case.

Figure 9: Recovery Percentage of Signal with Sparsity 4 with Fourier Matrix

Figure 10: Recovery Percentage of Signal with Sparsity 20 with Fourier Matrix

Figure 11: Recovery Percentage of Signal with Sparsity 36 with Fourier Matrix

Figures 9-11 depict the corresponding results for the random Fourier measurement matrix. Somewhat surprisingly, they are similar to those of the Gaussian and Bernoulli cases. To our knowledge, the best known bound on the number of measurements in the Fourier case is given by [19]

m ≥ C · s · log(N) · log²(s) · log(s log N),    (38)

which is conjectured to reduce to the same order as (37) [3], although no rigorous theoretical proof exists so far. Our experimental results indirectly verify this conjecture to some extent.

5.2 Value Distribution of Target Signal

A purely flat signal is rarely seen in practical engineering applications, so it is necessary to investigate the performance of sparse recovery algorithms on non-flat target signals. Two cases are studied. One is the piecewise flat signal, which is common in various fields of imaging, such as optical, microwave and magnetic resonance imaging. The results are depicted in Figures 12-14. Here the measurement matrix is fixed to a Gaussian random matrix.

Figure 12: Recovery Percentage of Piecewise Flat Signal with Sparsity 4

Figure 13: Recovery Percentage of Piecewise Flat Signal with Sparsity 20

Figure 14: Recovery Percentage of Piecewise Flat Signal with Sparsity 36

The recovery percentage of CoSaMP in our experiments is not ideal. Especially when the number of non-zero elements of the target signal is large, the successful recovery probability of CoSaMP becomes more and more negligible, and its capability in this setting is doubtful. On the contrary, the behavior of AMOP is very robust when the value distribution of the target signal is changed. Furthermore, the constant C in (37) is approximately equal to that in the flat signal setting. Although it is predicted theoretically that the constant C depends only on the construction of the measurement matrix Φ and not on the nature of the target signal, in practice the detailed value distribution of the target signal usually has some influence on the performance of recovery algorithms. Our experimental results indicate that the actual behavior of AMOP coincides with the theoretical conclusion very well. This argument is confirmed by the results for the other case of non-flat target signals. Here the target signal is chosen as a compressible signal, which frequently appears in orthogonal representations of signals, such as the Discrete Cosine Transform and the wavelet transform, and in data compression. Two kinds of compressible signals are analyzed in our experiments: one whose sorted coefficients decay exponentially, and one whose sorted coefficients decay polynomially.

For brevity, only the result for the exponential signal is depicted in Figure 15, and the corresponding curve for CoSaMP is omitted. We also performed the same experiment for polynomially compressible signals and found the results very similar to those in Figure 14.
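
One plausible way to generate such compressible test signals is sketched below; the decay constants are arbitrary illustrative choices, not the values used in the experiments.

```python
import numpy as np

def compressible_signal(N, kind="exponential", rate=0.5, power=2.0, seed=0):
    """Signal whose sorted magnitudes decay exponentially (rate**i) or polynomially (i**-power)."""
    rng = np.random.default_rng(seed)
    i = np.arange(1, N + 1)
    mags = rate ** i if kind == "exponential" else i ** (-power)
    signs = rng.choice([-1.0, 1.0], size=N)
    return rng.permutation(mags * signs)       # random positions for the large entries
```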

Figure 15: Recovery Percentage of Polynomial Flat Signal

5.3 Target Signal With Noise

Random noise was added to the target signal to test the performance of the sparse recovery algorithms in a noisy environment. The measurement matrix is fixed to a random Fourier matrix and the target signal is set to be flat. The sparsity of the target signal is fixed to 20 and the number of measurements is fixed to 200. The relative error of AMOP and CoSaMP under Gaussian white noise of various levels is depicted in Figure 16.
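
The following small helpers show how noise at a prescribed SNR could be added and the relative recovery error measured; the definitions of SNR and relative error are the usual ones and are assumed rather than quoted from the paper.

```python
import numpy as np

def add_noise(y, snr_db, rng):
    """Add white Gaussian noise so that 10*log10(||y||^2 / ||n||^2) equals snr_db."""
    noise = rng.standard_normal(y.shape)
    noise *= np.linalg.norm(y) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return y + noise

def relative_error(x, x_hat):
    """Relative recovery error ||x - x_hat|| / ||x||."""
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```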

Figure 16: Relative Recovery Error Under Gaussian Noise along with SNR

The capacity of AMOP in a noisy environment is satisfactory. The relative recovery error is well controlled when the SNR is about 10 dB, and even when the SNR is as low as 5 dB the relative error of AMOP can still be kept moderate. Though the error curve rises sharply in the very low SNR region, it is shown that AMOP works normally in most noisy environments. On the other hand, the relative error of CoSaMP stays at a high level when noise is present; as the SNR increases, it does not exhibit an obvious decreasing trend. In the language of statistics, CoSaMP is not a consistent estimator.

Figure 17: Relative Recovery Error Under Gaussian Noise along with Measurement

Figure 17 illustrates the advantage of AMOP in a noisy environment from another viewpoint. Here the SNR is fixed to 10 dB and the error is plotted against the number of measurements m. It is observed that if the number of measurements is small, the performance of AMOP degrades severely; in fact, the relative error of AMOP is even higher than that of CoSaMP when m is lower than 100. But it falls abruptly as m increases, while that of CoSaMP remains within a narrow range. When m is larger than 100, the relative error of AMOP becomes stable and well controlled, similar to Figure 16.

5.4 Measurement Matrix in STAP

To investigate the potential of sparse recovery algorithms for application in modern radar and communication engineering, we build a measurement matrix from spatial-temporal basis vectors, which are a key component in the theory and computation of STAP (Space-Time Adaptive Processing). Here Φ is set to

(39)

and

where f_d and f_s denote the Doppler and spatial frequency, respectively. Unlike the Fourier matrix, the construction of the spatial-temporal matrix is more complex. The phase of each element of Φ is composed of two parts, one contributed by the Doppler frequency and the other by the spatial frequency. This leads to the following consequence: different pairs of f_d and f_s can combine to form the same (or approximately equal) phase. In other words, strong correlation exists between different column vectors of Φ that correspond to points far apart from each other on the spatial-temporal plane. So the Restricted Isometry Constant of Φ is conjectured to be relatively large, and it is a challenge for a sparse recovery algorithm to remain feasible when the measurement matrix is chosen as a spatial-temporal matrix.
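
A sketch of how such a spatial-temporal dictionary could be assembled is given below; the Kronecker-product steering-vector form and the grid sizes are standard STAP conventions assumed here, and the paper's exact definition (39) may differ.

```python
import numpy as np

def spacetime_dictionary(M=8, K=8, n_doppler=32, n_spatial=32):
    """Columns are space-time steering vectors on a grid of (Doppler, spatial) frequencies."""
    fd = np.linspace(-0.5, 0.5, n_doppler, endpoint=False)    # normalized Doppler frequencies
    fs = np.linspace(-0.5, 0.5, n_spatial, endpoint=False)    # normalized spatial frequencies
    cols = []
    for d in fd:
        temporal = np.exp(2j * np.pi * d * np.arange(M))      # M pulses
        for s in fs:
            spatial = np.exp(2j * np.pi * s * np.arange(K))   # K array elements
            cols.append(np.kron(temporal, spatial))           # space-time steering vector
    Phi = np.column_stack(cols) / np.sqrt(M * K)
    return Phi                                                # shape (M*K, n_doppler*n_spatial)
```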

Generally speaking, the support (the "positions" of the non-zero elements) of the target signal is more important than its detailed values. This is indeed true in practical engineering disciplines. For example, in radar STAP processing, sparse recovery is utilized to estimate the energy distribution of clutter and interference on the spatial-temporal plane directly from sample data. Because the echo of clutter and interference is much stronger than the radar target, the detailed amplitude and phase of the clutter and interference are not crucial; as long as the accurate support ("positions") of the clutter and interference on the spatial-temporal plane is found, an efficient filter can be designed to suppress them effectively. Hence the most important feature of a sparse recovery algorithm applied to STAP processing is detecting the support of the target signal with great precision.

An experiment was performed to test the ability of AMOP and CoSaMP to detect the signal support. The measurement matrix was set to the spatial-temporal matrix of (39). The target signal is a noise-free flat signal and its sparsity is fixed to 20. Figure 18 depicts the average result over 100 trials.

Figure 18: Size of Detected Target Support With No Error

It is clear that AMOP is much superior to CoSaMP when applied to STAP processing, because it detects the majority of the target support with no error while CoSaMP finds very few correct positions. Even so, the support accuracy of either algorithm can hardly satisfy the requirements of practical STAP processors. Due to the ultra-low Signal-to-Clutter Ratio (generally lower than -50 dB), missing four or five frequency points on the spatial-temporal plane would lead to a very high false alarm rate, and the performance of the radar would degrade heavily. So we should detect as much of the target support as possible to minimize the false alarm rate. If a tiny error is allowed, the behavior of the sparse recovery algorithms becomes better, as depicted in Figure 19.

Figure 19: Size of Detected Target Support With Error 1

It is easily seen that if a detected "position" differing by 1 from the true "position" of the target signal is allowed to be counted, the average size of the detected target support for AMOP increases by about 2, while that of CoSaMP is still negligible. This indicates that some support coordinates of the target signal missed by AMOP were not really missed: their neighbors, which correspond to adjacent points on the spatial-temporal plane, were discovered instead. This is good news, for a high performance clutter suppression filter can still be designed with AMOP and a carefully chosen notch, without losing much resolution or SNR. Figure 20 depicts the case of error 2. The behavior of AMOP continues to improve and approaches its best; it seems that enlarging the error tolerance further would make little sense.
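
A small helper of the kind presumably used to score these support-detection experiments is sketched below; the tolerance-based matching rule is our reading of the text, not code from the paper.

```python
import numpy as np

def detected_support_size(true_support, found_support, tol=0):
    """Count how many true support positions have a detected position within distance tol."""
    true_support = np.asarray(true_support)
    found_support = np.asarray(found_support)
    return int(sum(np.any(np.abs(found_support - t) <= tol) for t in true_support))

# tol=0 counts exact hits (Figure 18); tol=1 and tol=2 correspond to the relaxed
# matching of Figures 19 and 20.
```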

Figure 20: Size of Detected Target Support With Error 2

6 Conclusion

In this paper, a novel greedy algorithm for sparse recovery, called AMOP, was presented and examined. Its performance was studied by theoretical analysis and simulation experiments.

The motivation for this algorithm is two obvious drawbacks of popular methods such as CoSaMP: the need for the sparsity of the target signal as prior knowledge, and the weak ability to work in noisy environments. With well-designed algorithmic steps, AMOP can extract the information about the sparsity of the target signal adaptively and sense the nature of the target signal automatically. It can recover the detailed values of the target signal with high precision using little prior knowledge, and its validity is supported by rigorous derivation. Good stability of performance under random noise is another advantage of AMOP. Furthermore, its robustness to various settings of the target signal, flat or compressible, and to the construction of the measurement matrix, such as the spatial-temporal matrix, was also shown by thorough numerical experiments. It is argued that AMOP is an excellent greedy algorithm for sparse recovery and has great potential for wide utilization in signal processing.

References

  1. Compressed Sensing webpage: http://www.dsp.ece.rice.edu/cs.
  2. M.R. Garey and D.S. Johnson, Computers and Intractability, A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, New York, 1979
  3. E. Candes. Compressive sampling. In Proc. International Congress of Mathematics, volume 3, pages 1433-1452, Madrid, Spain, 2006.
  4. E. Candes and M. Wakin, An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2), pp. 21-30, March 2008.
  5. D. L. Donoho and P. B. Stark. Uncertainty principles and signal recovery. SIAM J. Appl. Math., 49(3):906-931, June 1989.
  6. E. Candes and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory, 51(12), pp. 4203-4215, 2005.
  7. S. Boyd, and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004
  8. D. Needell, R. Vershynin, Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit, Foundations of Computational Mathematics, DOI: 10.1007/s10208-008-9031-3
  9. CVX official website: http://www.stanford.edu/ boyd/cvx/
  10. D. Needell, J. A. Tropp, and R. Vershynin, Greedy signal recovery review, arXiv:0812.2202, Dec. 2008.
  11. J. A. Tropp and A. C. Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Info. Theory, 53(12): 4655-4666, 2007.
  12. D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. ACM Technical Report 2008-01, California Institute of Technology, Pasadena, July 2008.
  13. H. Rauhut. On the impossibility of uniform sparse reconstruction using greedy methods. To appear, Sampl. Theory Signal Image Process., 2008.
  14. D. L. Donoho, M. Elad, and V. N. Temlyakov, ”Stable recovery of sparse overcomplete representations in the presence of noise,” IEEE Trans. Inf. Theory, vol. 52, no. 1, pp. 6-18, Jan. 2006.
  15. P. Tseng, "Further Results on Stable Recovery of Sparse Overcomplete Representations", IEEE Transactions on Information Theory, vol. 55, no. 2, February 2009.
  16. L. N. Trefethen and D. Bau III, Numerical Linear Algebra, Society for Industrial and Applied Mathematics, Philadelphia, 1997.
  17. T. Cormen, C. Leiserson, R. Rivest, and C. Stein, Introduction to Algorithms, MIT Press, Cambridge, MA, 2nd edition, 2001.
  18. E. Candes and T. Tao, Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. on Information Theory, 52(12), pp. 5406-5425, December 2006.
  19. Rudelson, M., Vershynin, R., Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements. Preprint, 2006.
  20. Hao Zhang, Gang Li, Huadong Meng, A Class of Novel STAP Algorithms Using Sparse Recovery Technique, arXiv:0904.1313, Apr, 2009