BEYOND $\ell_1$-NORM MINIMIZATION FOR SPARSE SIGNAL RECOVERY

Abstract

Sparse signal recovery has been dominated by the basis pursuit denoise (BPDN) problem formulation for over a decade. In this paper, we propose an algorithm that outperforms BPDN in finding sparse solutions to underdetermined linear systems of equations at no additional computational cost. Our algorithm, called WSPGL1, is a modification of the spectral projected gradient for $\ell_1$ minimization (SPGL1) algorithm in which the sequence of LASSO subproblems is replaced by a sequence of weighted LASSO subproblems with constant weights applied to a support estimate. The support estimate is derived from the data and is updated at every iteration. The algorithm also modifies the Pareto curve at every iteration to reflect the new weighted $\ell_1$ minimization problem that is being solved. We demonstrate through extensive simulations that the sparse recovery performance of our algorithm is superior to that of $\ell_1$ minimization and approaches the recovery performance of the iterative reweighted $\ell_1$ (IRWL1) minimization of Candès, Wakin, and Boyd, although it does not match it in general. Moreover, our algorithm has the computational cost of a single BPDN problem.

Hassan Mansour (hassanm@cs.ubc.ca). The author was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) Collaborative Research and Development Grant DNOISE II (375142-08).
University of British Columbia, Vancouver - BC, Canada


Index Terms—  Sparse recovery, compressed sensing, iterative algorithms, weighted $\ell_1$ minimization, partial support recovery

1 Introduction

The problem of recovering a sparse signal from an underdetermined system of linear equations is prevalent in many engineering applications. In fact, this problem has given rise to the field of compressed sensing which presents a new paradigm for acquiring signals that admit sparse or nearly sparse representations using fewer linear measurements than their ambient dimension [1, 2].

Consider an arbitrary signal $x \in \mathbb{R}^N$ and let $b \in \mathbb{R}^n$ be a set of measurements given by $b = Ax + e$, where $A$ is a known $n \times N$ measurement matrix and $e$ denotes additive noise that satisfies $\|e\|_2 \le \epsilon$ for some known $\epsilon \ge 0$. Compressed sensing theory states that it is possible to recover $x$ from $b$ (given $A$) even when $n \ll N$, that is, using very few measurements. When $x$ is strictly sparse—i.e., when there are only $k < n$ nonzero entries in $x$—one may recover an estimate $\hat{x}$ of the signal by solving the constrained $\ell_0$ minimization problem

$$\min_{x} \|x\|_0 \quad \text{subject to} \quad \|Ax - b\|_2 \le \epsilon. \tag{1}$$

However, $\ell_0$ minimization is a combinatorial problem and quickly becomes intractable as the dimensions increase. Instead, the convex relaxation given by the $\ell_1$ minimization problem

$$\min_{x} \|x\|_1 \quad \text{subject to} \quad \|Ax - b\|_2 \le \epsilon, \tag{BPDN}$$

also known as basis pursuit denoise (BPDN) [3], can be used to recover an estimate $\hat{x}$. Candès, Romberg and Tao [2] and Donoho [1] show that it is possible to recover a stable and robust approximation of $x$ by solving (BPDN) instead of (1), at the cost of increasing the number of measurements taken.

Several works in the literature have proposed alternate algorithms that attempt to bridge the gap between $\ell_0$ and $\ell_1$ minimization. These include $\ell_p$ minimization with $0 < p < 1$, which has been shown to be stable and robust under weaker conditions than those of $\ell_1$ minimization; see [4, 5, 6]. Weighted $\ell_1$ minimization is another alternative when there is prior information regarding the support of the signal to be recovered, as it incorporates such information into the recovery through the weighted basis pursuit denoise (wBPDN) problem

$$\min_{x} \|x\|_{1,\mathrm{w}} \quad \text{subject to} \quad \|Ax - b\|_2 \le \epsilon, \tag{wBPDN}$$

where $\mathrm{w} \in [0,1]^N$ and $\|x\|_{1,\mathrm{w}} := \sum_{i} \mathrm{w}_i |x_i|$ is the weighted $\ell_1$ norm (see [7, 8, 9]).

When no prior information is available, the iterative reweighted $\ell_1$ minimization (IRWL1) algorithm, proposed by Candès, Wakin, and Boyd [10] and studied by Needell [11], solves a sequence of weighted $\ell_1$ minimization problems with the weights $\mathrm{w}_i^{(t)} = \frac{1}{|x_i^{(t-1)}| + \epsilon}$, where $x^{(t-1)}$ is the solution of the $(t-1)$th iteration and $\epsilon > 0$. More recently, Mansour and Yilmaz [12] proposed a support driven iterative reweighted $\ell_1$ minimization (SDRL1) algorithm that also solves a sequence of weighted $\ell_1$ minimization problems, with constant weights applied to entries that belong to support estimates updated in every iteration. The performance of SDRL1 is shown to match that of IRWL1.

Motivated by the performance of constant weighting in the SDRL1 algorithm, we present in this paper an iterative algorithm, called WSPGL1, that converges to the solution of a weighted $\ell_1$ problem (wBPDN) with a two-set weight vector: $\mathrm{w}_i = \omega$ for $i \in T$ and $\mathrm{w}_i = 1$ otherwise, where $0 \le \omega < 1$ and $T$ is a support estimate. The set $T$ to which the algorithm converges is not known a priori but is derived and updated at every iteration. Our algorithm is a modification of the spectral projected gradient for $\ell_1$ minimization (SPGL1) algorithm [13], which solves a sequence of LASSO [14] subproblems to arrive at the solution of the BPDN problem. We give an overview of the SPGL1 algorithm in section 2. In contrast, our algorithm solves a sequence of weighted LASSO subproblems that converge to the solution of the wBPDN problem with weights $\omega$ applied to a support estimate $T$. We discuss the details of this algorithm in section 3 and present preliminary recovery results in section 4, demonstrating its superior performance in recovering sparse signals from incomplete measurements compared with $\ell_1$ minimization. We limit the scope of this paper to describing the algorithm and presenting sparse recovery results, and leave the analysis of the algorithm for future work.

Notation: For a vector $x$, an index set $T$ and its complement $T^c$, let $x_k$ refer to the vector of the largest $k$ entries of $x$ in magnitude, $x(j)$ is the $j$th largest entry of $x$, $x_T$ refers to the entries of $x$ indexed by $T$, and $x^{(t)}$ is the vector $x$ at iteration $t$.

2 The SPGL1 algorithm

In this section, we give an overview of the SPGL1 algorithm, developed by van den Berg and Friedlander [13], that finds the solution to the BPDN problem.

2.1 General overview

The SPGL1 algorithm finds the solution of the BPDN problem by efficiently solving a sequence of LASSO subproblems

$$\min_{x} \|Ax - b\|_2 \quad \text{subject to} \quad \|x\|_1 \le \tau, \tag{LS$_\tau$}$$

using a spectral projected-gradient algorithm. The single parameter $\tau$ determines a Pareto curve $\phi(\tau) = \|r_\tau\|_2$, where $r_\tau = b - A x_\tau$ and $x_\tau$ is the solution of (LS$_\tau$). The Pareto curve traces the optimal trade-off between the least-squares fit and the one-norm of the solution.

The SPGL1 algorithm is initialized at a point $x^{(0)}$, which gives an initial $\tau_0$. The parameter $\tau$ is then updated according to the following rule:

$$\tau_{k+1} = \tau_k + \frac{\sigma - \phi(\tau_k)}{\phi'(\tau_k)}, \qquad \phi'(\tau_k) = -\lambda_{\tau_k} = -\frac{\|A^H r_{\tau_k}\|_\infty}{\|r_{\tau_k}\|_2}, \tag{2}$$

where the superscript $H$ indicates the Hermitian transpose and $r_{\tau_k} = b - A x_{\tau_k}$. Consequently, the next iterate $x_{\tau_{k+1}}$ is given by the solution of (LS$_{\tau_{k+1}}$), and the algorithm proceeds until convergence.
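The update rule (2) can be sketched in a few lines of code. The following is a minimal illustration, not the SPGL1 implementation itself: it assumes `x_tau` already solves the LASSO subproblem for the current `tau`, and uses $\phi(\tau)=\|r_\tau\|_2$ and $\phi'(\tau)=-\|A^H r_\tau\|_\infty/\|r_\tau\|_2$ as given above. The function name is a placeholder.

```python
import numpy as np

def newton_tau_update(A, b, x_tau, tau, sigma):
    """One Newton step of the SPGL1 root-finding update for tau (a sketch).

    Assumes x_tau solves the LASSO subproblem LS(tau), so that
    phi(tau) = ||b - A x_tau||_2 and phi'(tau) = -||A^H r||_inf / ||r||_2.
    """
    r = b - A @ x_tau                                   # residual of the LASSO solution
    phi = np.linalg.norm(r)                             # phi(tau)
    lam = np.linalg.norm(A.conj().T @ r, np.inf) / phi  # dual variable lambda_tau
    return tau + (sigma - phi) / (-lam)                 # Newton step on phi(tau) = sigma
```

For instance, when $A$ is the identity the Pareto curve is linear in $\tau$, so a single Newton step lands exactly on $\phi(\tau) = \sigma$.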

2.2 Probing the Pareto curve

One of the main contributions of [13] lies in recognizing and proving that the Pareto curve $\phi(\tau)$ is convex and continuously differentiable over all solutions of (LS$_\tau$). This gives rise to the update rule for $\tau$ shown in (2) and guarantees the convergence of SPGL1 to the solution of BPDN.

The update rule (2) is in fact a Newton-based root-finding method that solves $\phi(\tau) = \sigma$. It generates a sequence of parameters $\tau_k$ according to the Newton iteration

$$\tau_{k+1} = \tau_k + \frac{\sigma - \phi(\tau_k)}{\phi'(\tau_k)},$$

where $\phi'(\tau_k)$ is the derivative of $\phi$ at $\tau_k$. It is then shown that $\phi'(\tau_k)$ is equal to the negative of the dual variable $\lambda_{\tau_k}$ of (LS$_{\tau_k}$), resulting in the expression $\phi'(\tau_k) = -\lambda_{\tau_k} = -\|A^H r_{\tau_k}\|_\infty / \|r_{\tau_k}\|_2$. Figure 1 illustrates an example of a Pareto curve and the root-finding method used in SPGL1.

Fig. 1: Example of a typical Pareto curve showing the root finding iterations used in SPGL1 [13].

3 The proposed WSPGL1 algorithm

In this section, we describe the proposed WSPGL1 algorithm for sparse signal recovery as a variation of the SPGL1 algorithm. The WSPGL1 algorithm solves a sequence of weighted LASSO subproblems to arrive at the solution of a weighted BPDN problem with weights $\omega$ applied to a support set $T$. The set $T$ is derived and updated from the solutions of the weighted LASSO subproblems (wLS$_\tau$).

3.1 Algorithm description

The two algorithms SPGL1 and WSPGL1 follow exactly the same initial steps until the solution $x^{(1)}$ of the first LASSO subproblem (LS$_{\tau_1}$) is found. At this point, WSPGL1 generates a support set $T$ containing the indices of the largest-in-magnitude entries of $x^{(1)}$. A weight vector $\mathrm{w}$ is then generated such that

$$\mathrm{w}_i = \begin{cases} \omega, & i \in T, \\ 1, & i \in T^c. \end{cases}$$

The value of $\omega \in [0,1)$ and the size of $T$ are chosen heuristically.
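As a concrete sketch, the two-valued weight vector can be built as follows. The support size `k` and the default `omega = 0.3` are illustrative assumptions (the text only states that both are chosen heuristically), and the function name is a placeholder.

```python
import numpy as np

def make_weights(x, k, omega=0.3):
    """Two-valued WSPGL1-style weight vector (a sketch).

    T is the support of the k largest-magnitude entries of the current
    solution x; weights are omega on T and 1 on its complement. Both k
    and omega = 0.3 are illustrative choices, not values from the paper.
    """
    T = np.argsort(np.abs(x))[-k:]  # indices of the k largest |x_i|
    w = np.ones(len(x))
    w[T] = omega                    # down-weight the support estimate
    return w, T
```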

The weight vector $\mathrm{w}$ is then used to define the weighted LASSO subproblem

$$\min_{x} \|Ax - b\|_2 \quad \text{subject to} \quad \|x\|_{1,\mathrm{w}} \le \tau, \tag{wLS$_\tau$}$$

with the corresponding dual variable

$$\lambda_\tau = \frac{\|W^{-1} A^H r_\tau\|_\infty}{\|r_\tau\|_2},$$

where $W = \operatorname{diag}(\mathrm{w})$. The weighted LASSO subproblem and its dual constitute a subproblem of (wBPDN) with support estimate $T$. The BPDN and wBPDN problems have different Pareto curves. Therefore, the iterate $x^{(1)}$, which lies on the Pareto curve of BPDN, must be adjusted to lie on the Pareto curve of the wBPDN problem. This can easily be achieved by replacing $\tau$ with the weighted one-norm $\|x^{(1)}\|_{1,\mathrm{w}}$. The WSPGL1 algorithm then proceeds according to the following pseudocode.

1:  Input $b$, $A$, $\sigma$, $\omega$
2:  Output $x$
3:  Initialize $\mathrm{w}_i = 1$ for all $i$, $\tau_0 = 0$, $k = 0$
4:  loop
5:     $x^{(k)} \leftarrow$ solution of (wLS$_{\tau_k}$) with weights $\mathrm{w}$
6:     $r_{\tau_k} \leftarrow b - A x^{(k)}$, $\phi(\tau_k) \leftarrow \|r_{\tau_k}\|_2$
7:     if $\phi(\tau_k) \le \sigma$ then return $x \leftarrow x^{(k)}$
8:     $\lambda_{\tau_k} \leftarrow \|W^{-1} A^H r_{\tau_k}\|_\infty / \|r_{\tau_k}\|_2$
9:     $T \leftarrow$ support of the largest entries of $x^{(k)}$; $\mathrm{w}_i \leftarrow \omega$ for $i \in T$, $\mathrm{w}_i \leftarrow 1$ for $i \in T^c$
10:     $\tau_{k+1} \leftarrow \|x^{(k)}\|_{1,\mathrm{w}} + \big(\phi(\tau_k) - \sigma\big)/\lambda_{\tau_k}$, $k \leftarrow k + 1$
11:  end loop
Algorithm 1 The WSPGL1 algorithm
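Algorithm 1 can be sketched end to end under stated assumptions: the weighted LASSO subproblem is solved by the change of variables $z = Wx$ (so it becomes a standard LASSO in $z$ with matrix $AW^{-1}$) plus plain projected gradient rather than the spectral steps of SPGL1, and the support size `k` and weight `omega = 0.3` are illustrative heuristics. All three function names are placeholders.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto the l1 ball of radius tau."""
    if tau <= 0:
        return np.zeros_like(v)
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def solve_lasso(A, b, tau, iters=500):
    """Projected gradient for min ||Ax - b||_2 s.t. ||x||_1 <= tau
    (a plain sketch; SPGL1 uses spectral step lengths instead)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step for the smooth term
    for _ in range(iters):
        x = project_l1_ball(x + step * (A.T @ (b - A @ x)), tau)
    return x

def wspgl1(A, b, sigma, k, omega=0.3, outer=20):
    """High-level sketch of Algorithm 1: weighted LASSO subproblems
    (via z = W x) with Newton updates of tau toward phi(tau) = sigma."""
    n = A.shape[1]
    w = np.ones(n)
    tau = 0.0
    x = np.zeros(n)
    for _ in range(outer):
        Aw = A / w                        # A W^{-1}: weighted LASSO in z = W x
        x = solve_lasso(Aw, b, tau) / w
        r = b - A @ x
        phi = np.linalg.norm(r)
        if phi <= sigma + 1e-9:
            break
        # update the support estimate and the two-valued weights
        T = np.argsort(np.abs(x))[-k:]
        w = np.ones(n)
        w[T] = omega
        # place the iterate on the new weighted Pareto curve, then Newton step
        lam = np.linalg.norm((A / w).T @ r, np.inf) / phi
        tau = np.abs(w * x).sum() + (phi - sigma) / lam
    return x
```

On a trivial system such as $A = [e_1\; e_2\; 0]$, $b = A x_0$ with a 1-sparse $x_0$, the sketch recovers $x_0$ to within the residual tolerance after the first Newton step.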

3.2 Discussion

The WSPGL1 algorithm converges to the solution of a weighted BPDN problem with weights $\omega$ applied to a support set $T$. When the sparse signal is recovered exactly, the set $T$ coincides with the true support of the sparse signal $x$. Figure 2 (a) illustrates the solution path of WSPGL1, which follows the Pareto curve of the BPDN problem until the first (LS$_\tau$) subproblem is solved. The algorithm then uses the support information from $x^{(1)}$ to switch to the Pareto curve of the wBPDN problem. Figure 2 (b) compares the solution paths of WSPGL1, SPGL1, and oracle weighted SPGL1 with weight $\omega$ applied to the true signal support. It can be seen that WSPGL1 converges to the solution of the oracle weighted problem. Moreover, the solution paths of these algorithms merge after only the first (LS$_\tau$) subproblem. Note here that the x-axis is the parameter $\tau$, which is equal to the one-norm of the iterate for SPGL1 and to the weighted one-norm for WSPGL1 and the oracle weighted SPGL1.


Fig. 2: (a) The solution path for WSPGL1 follows the BPDN Pareto curve until the first (LS$_\tau$) subproblem is solved, after which WSPGL1 switches to the wBPDN Pareto curve. (b) Solution paths of WSPGL1, SPGL1, and weighted SPGL1 with oracle support information. Both WSPGL1 and the oracle weighted SPGL1 use the same weight $\omega$.
Fig. 3: Comparison of the percentage of exact recovery of sparse signals between the proposed WSPGL1, SDRL1 [12], IRWL1 [10], and standard $\ell_1$ minimization using SPGL1 [13]. The signals have an ambient dimension $N$, and the sparsity and number of measurements are varied. The results are averaged over 100 experiments.

It is still not clear under what conditions the WSPGL1 algorithm achieves exact recovery. What is clear is that WSPGL1 can exactly recover signals with far more nonzero coefficients than BPDN can recover. The WSPGL1 algorithm is motivated by the work in [9] and [12], which shows that weighted $\ell_1$ minimization can recover less sparse signals than BPDN when the weights are applied to a support estimate that is at least 50% accurate. Moreover, it is possible to draw a support estimate from the solution of BPDN and to improve that support estimate by solving wBPDN using the initial support estimate. Based on these results, we conjecture that the solution of every LASSO subproblem in SPGL1 allows us to find a support estimate that is accurate enough to improve the recovery conditions of the corresponding wBPDN problem. A full analysis of this algorithm will be the subject of future work.

4 Numerical results

We tested the WSPGL1 algorithm by comparing its performance with SDRL1 [12], IRWL1 [10], and standard $\ell_1$ minimization using the SPGL1 [13] algorithm in recovering synthetic signals of ambient dimension $N$. We first recover sparse signals $x$ from compressed measurements $b = Ax$ using matrices $A$ with i.i.d. Gaussian random entries and dimensions $n \times N$, where $n < N$. The sparsity $k$ of the signal is varied relative to the number of measurements $n$. To quantify the reconstruction performance, we plot in Figure 3 the percentage of successful recovery averaged over 100 realizations of the same experimental conditions. The figure shows that, in all cases, the WSPGL1 algorithm outperforms standard $\ell_1$ minimization in recovering sparse signals. Moreover, the recovery performance approaches that of the iterative reweighted algorithms SDRL1 and IRWL1 while requiring only a fraction of their computational cost.

References

  • [1] D. Donoho, “Compressed sensing.,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  • [2] E. J. Candès, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, pp. 1207–1223, 2006.
  • [3] S. Chen, D. Donoho, and M.A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1999.
  • [4] R. Gribonval and M. Nielsen, “Highly sparse representations from dictionaries are unique and independent of the sparseness measure,” Applied and Computational Harmonic Analysis, vol. 22, no. 3, pp. 335–355, May 2007.
  • [5] R. Chartrand and V. Staneva, “Restricted isometry properties and nonconvex compressive sensing,” Inverse Problems, vol. 24, no. 035020, 2008.
  • [6] R. Saab and O. Yilmaz, “Sparse recovery by non-convex optimization – instance optimality,” Applied and Computational Harmonic Analysis, vol. 29, no. 1, pp. 30–48, July 2010.
  • [7] R. von Borries, C.J. Miosso, and C. Potes, “Compressed sensing using prior information,” in 2nd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMPSAP 2007., 12-14 2007, pp. 121 – 124.
  • [8] N. Vaswani and Wei Lu, “Modified-CS: Modifying compressive sensing for problems with partially known support,” arXiv:0903.5066v4, 2009.
  • [9] M. P. Friedlander, H. Mansour, R. Saab, and Ö. Yılmaz, “Recovering compressively sampled signals using partial support information,” to appear in the IEEE Trans. on Inf. Theory.
  • [10] E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted $\ell_1$ minimization,” The Journal of Fourier Analysis and Applications, vol. 14, no. 5, pp. 877–905, 2008.
  • [11] D. Needell, “Noisy signal recovery via iterative reweighted l1-minimization,” in Proceedings of the 43rd Asilomar conference on Signals, systems and computers, 2009, Asilomar’09, pp. 113–117.
  • [12] H. Mansour and O. Yilmaz, “Support driven reweighted $\ell_1$ minimization,” in Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2012.
  • [13] E. van den Berg and M. P. Friedlander, “Probing the pareto frontier for basis pursuit solutions,” SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 890–912, 2008.
  • [14] R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. Roy. Statist. Soc. Ser. B, vol. 58, no. 1, pp. 267–288, 1996.