Efficient and Robust Compressed Sensing using High-Quality Expander Graphs

Sina Jafarpour Computer Science
Princeton University
sina@cs.princeton.edu
   Weiyu Xu Electrical Engineering
California Institute of Technology
weiyu@caltech.edu
Babak Hassibi Electrical Engineering
California Institute of Technology
hassibi@caltech.edu
(The work of B. Hassibi was supported in part by the National Science Foundation under grant CCF 0729203, by the David and Lucille Packard Foundation, and by Caltech's Lee Center for Advanced Networking.)
   Robert Calderbank Electrical Engineering
Princeton University
calderbk@math.princeton.edu
Abstract

Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any $n$-dimensional vector that is $k$-sparse (with $k \ll n$) can be fully recovered using $O(k \log n)$ measurements and only $O(k \log n)$ simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond $\frac{3}{4}$ and show that, with the same number of measurements, only $O(k)$ recovery iterations are required, which is a significant improvement when $n$ is large. In fact, full recovery can be accomplished by at most $2k$ very simple iterations. The number of iterations can be made arbitrarily close to $k$, and the recovery algorithm can be implemented very efficiently using a simple binary search tree. We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the recovery time complexity. Finally, we show how our analysis extends to give a robust algorithm that finds the position and sign of the significant elements of an almost $k$-sparse signal and then, using very simple optimization techniques, finds in sublinear time a $k$-sparse signal which approximates the original signal with very high precision.

I Introduction

The goal of compressive sampling or compressed sensing is to replace the conventional sampling and reconstruction operations with a more general combination of linear measurement and optimization, in order to acquire certain kinds of signals at a rate significantly below Nyquist. Formally, suppose we have a signal $x$ which is sparse. We can model $x$ as an $n$-dimensional vector that has at most $k$ non-zero components ($k \ll n$). We desire to find an $m \times n$ measurement matrix $A$ such that $m$, the number of measurements, becomes as small as possible ($m \ll n$, and such that $A$ can be efficiently stored), and such that $x$ can be recovered efficiently from $y = Ax$.

The original approach was through the use of dense random matrices and random projections. It has been shown that if the matrix $A$ satisfies a restricted isometry property (RIP-2), that is, it almost preserves the Euclidean norm of all sufficiently sparse vectors, then $A$ can be used in compressed sensing and the decoding can be accomplished using linear programming and convex programming methods [17]. This is a geometric approach based on linear and quadratic optimization, and [19] showed that the property is a direct consequence of the Johnson-Lindenstrauss lemma [20], so that many dense random matrices will satisfy this property. However, the problem in practice is that the linear and quadratic programming algorithms have cubic complexity in $n$ and become quite inefficient as $n$ becomes very large; furthermore, in order to store the whole matrix in memory we still need $O(mn)$ space, which is inefficient too.

Following [1, 2, 3, 22, 23], we will show how random dense matrices can be replaced by the adjacency matrix of a high-quality family of expander graphs, thereby reducing the space complexity of matrix storage and, more importantly, the recovery time complexity to a few very simple iterations. The main idea is that we study expander graphs with expansion coefficient beyond the $\frac{3}{4}$ that was considered in [1, 2].

The remainder of the paper is organized as follows. In Section II we review the previous results from [1, 2]. In Section III, following the geometric approach of [3], we establish that the adjacency matrix of an expander graph satisfies a certain restricted isometry property for the Manhattan distance between sparse signals. Using this property, or via a more direct alternative approach, we show how the recovery task becomes much easier. In Section IV we generalize the algorithm of [1, 2] to expander graphs with expansion coefficient beyond $\frac{3}{4}$. The key difference is that now the progress in each iteration is proportional to the left degree $d$, as opposed to a constant in [1, 2], and so the number of iterations is reduced from $O(kd)$ to $O(k)$. We then describe how the algorithm can be implemented very efficiently using simple data structures, and show that explicit constructions of the expander graphs impose only a small penalty in terms of the number of measurements, and not the number of iterations, that the recovery algorithm requires. We also compare our result to previous results based on random projections and to other approaches using the adjacency matrices of expander graphs. In Section V we generalize the analysis to a family of almost $k$-sparse signals; after a few very simple iterations, the robust recovery algorithm proposed in [1], empowered with high-quality expander graphs, finds the position and the sign of the significant elements of an almost $k$-sparse signal. Given this information, we then show how the restricted isometry property of the expander graphs lets us use very efficient sub-linear optimization methods to find a $k$-sparse signal that approximates the original signal very closely. Section VI concludes the paper.

II Previous Result: $O(k \log n)$ Recovery

II-A Basic Definitions

Xu and Hassibi [1] proposed a new scheme for compressed sensing with deterministic recovery guarantees based on combinatorial structures called unbalanced expander graphs:

Definition 1 (Bipartite Expander Graph, Informally)

An expander graph [5] is a $d$-regular graph $G = (V, E)$ with $n$ vertices, such that:

  1. $G$ is sparse (ideally $d$ is much smaller than $n$).

  2. $G$ is "well connected".

Various formal definitions of the second property define the various types of expander graphs. The expander graph used in [1], [2], which has suitable properties for compressed sensing, is the "vertex expander" or "unbalanced expander" bipartite graph:

Definition 2 (Unbalanced Bipartite $(k, \alpha)$-Expander Graph)

A bipartite left-regular graph with $n$ variable nodes, $m$ parity check nodes, and left degree $d$ is a $(k, \alpha)$ expander graph, for $\alpha \le d$, if for every subset $S$ of variable nodes with cardinality $|S| \le k$, the set of neighbors connected to $S$, denoted by $N(S)$, is strictly larger than $\alpha|S|$, i.e., $|N(S)| > \alpha|S|$.
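Verifying the expansion property over all subsets is exponential in general, but the quantity appearing in the definition is easy to compute for any one subset. A minimal sketch (the function name and numpy representation are our own illustration, not from the paper):

```python
import numpy as np

def neighbor_count(A, S):
    """Compute |N(S)| for a set S of variable nodes, where A is the
    m x n 0/1 adjacency matrix (rows = parity checks, cols = variables)."""
    return int(np.any(A[:, sorted(S)] == 1, axis=1).sum())
```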

Using the probabilistic method, Bassalygo and Pinsker [7] showed the existence of expander graphs, and they showed that any random left-regular bipartite graph will, with very high probability, be an expander graph. Later, Capalbo et al. [6] gave an explicit construction for these expander graphs.

Theorem 1

Let $\alpha < 1$ be a fixed constant. Then for $n$ large enough there exists a $(\alpha n, \frac{3d}{4})$ expander graph with $n$ variable nodes and $m = O(n)$ parity check nodes, with constant left degree $d$ (not growing with $n$). Furthermore, the explicit zig-zag construction of [6] can deterministically construct such expander graphs.

Using Hoeffding's inequality and Chernoff bounds, Xu and Hassibi [2] showed the following theorem.

Theorem 2

For any $k$, if $n$ is large enough, there exists a left-regular bipartite graph with left degree $d = O(\log n)$, which is a $(2k, \frac{3d}{4})$ expander graph with $m = O(k \log n)$ parity check nodes.

II-B Recovery Algorithm

Suppose $x$ is the original $n$-dimensional $k$-sparse signal, and the adjacency matrix $A$ of a $(2k, \frac{3d}{4})$ expander graph is used as the measurement matrix for compressed sensing. We are given $y = Ax$ and we want to recover $x$. Xu and Hassibi [1] proposed the following algorithm:

1:   Initialize $\hat{x} = 0$.
2:   IF $y = A\hat{x}$, output $\hat{x}$ and exit.
3:   ELSE find a variable node, say $\hat{x}_j$, such that more than half of the measurements it participates in have identical gap $g$.
4:   Set $\hat{x}_j \leftarrow \hat{x}_j + g$, and go to 2.
Algorithm 1 Left Degree Dependent Signal Recovery Algorithm

In the above algorithm the gap is defined as follows.

Definition 3 (gap)

Let $x$ be the original signal and $y = Ax$. Furthermore, let $\hat{x}$ be our estimate for $x$. For each measurement value $y_i$ we define a gap $g_i$ as:

$$g_i = y_i - \sum_{j=1}^{n} A_{ij}\,\hat{x}_j .$$
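As a minimal illustration (the names `A`, `y`, `x_hat` are ours, not the paper's), the entire gap vector can be computed in one line with numpy:

```python
import numpy as np

def gaps(A, y, x_hat):
    """Gap vector g = y - A x_hat; g[i] is the gap of measurement i."""
    return y - A @ x_hat
```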
Xu and Hassibi [1] proved the following theorem that bounds the number of steps required by the algorithm to recover $x$.

Theorem 3

Suppose $A$ is the adjacency matrix of an expander graph satisfying Definition 2, $x$ is an $n$-dimensional $k$-sparse signal (with $k \ll n$), and $y = Ax$. Then Algorithm 1 will always find a signal $\hat{x}$ which is $k$-sparse and for which $A\hat{x} = y$. Furthermore, the algorithm requires at most $O(kd)$ iterations, where $k$ is the sparsity level of the signal and $d$ is the left-side degree of the expander graph.

Let us now consider the consequences of the above theorem for the expander graphs in Theorems 1 and 2. In Theorem 1 the sparsity can grow proportionally to $n$ (since $k$ can be a constant fraction of $n$), and the algorithm will be extremely fast; the algorithm requires $O(kd)$ iterations, and since $d$ is a constant independent of $n$, the number of iterations will be $O(k)$. We also clearly need $m = O(n)$ measurements.

In Theorem 2 the sparsity level $k$ is fixed (it does not grow with $n$) and the number of measurements needs to be only $m = O(k \log n)$, which is desirable. Once more the number of required iterations is $O(kd) = O(k \log n)$. However, in this case Xu and Hassibi showed the following negative result for $(2k, \frac{3d}{4})$ expander graphs.

Theorem 4

Consider a bipartite graph with $n$ variable nodes and $m$ measurement nodes, and assume that the graph is a $(2k, \frac{3d}{4})$ expander graph with regular left degree $d$. Then if $m \le \frac{n}{2}$ we have $d = \Omega(\log n)$.

This theorem implies that for a $(2k, \frac{3d}{4})$ expander graph, the recovery algorithm needs $O(k \log n)$ iterations. The main contribution of the current paper is that the number of iterations can be reduced to $O(k)$. The key idea is to use expanders with expansion coefficient beyond $\frac{3}{4}$.

Remark: Theorem 3 does not imply the full recovery of the sparse signal. It only states that the output of the recovery algorithm will be a $k$-sparse signal $\hat{x}$ such that $A\hat{x} = Ax$, where $x$ is the original signal. However, in the next section we show how an interesting property of the expander graphs, called the RIP-1 property, implies full recovery. We also give a direct proof by showing that the null space of the adjacency matrix of an expander graph cannot be "too sparse".

III Expander Codes, RIP-1 Property, and Full-Recovery Principle

III-A Expander Codes

Compressed sensing has many properties in common with coding theory. The recovery algorithm is similar to the decoding algorithms of error-correcting codes, but over $\mathbb{R}$ instead of a finite field. As a result, several methods from coding theory have been generalized to derive compressed sensing algorithms. Among these methods are the generalization of Reed-Solomon codes by Tarokh [9], very recent results by Calderbank et al. [8] based on second-order Reed-Muller codes, and the results of Parvaresh and Hassibi [10] based on list decoding.

In 1996, Sipser and Spielman [11] used expander graphs to build a family of linear error-correcting codes with linear decoding time complexity. These codes belong to the class of error-correcting codes called Low-Density Parity-Check (LDPC) codes. The work done by Xu and Hassibi is a generalization of these expander codes to compressed sensing. Feldman et al. [12] suggested a way of decoding expander codes using linear programming, and linear programming is the usual recovery algorithm in compressed sensing. This leads to a better understanding of compressed sensing using expander graphs and a very different geometric perspective on the problem.

III-B Norm-One Restricted Isometry Property (RIP-1)

The standard restricted isometry property is an important condition that enables compressed sensing using random projections. Intuitively, it says that the measurement matrix almost preserves the Euclidean distance between any two sufficiently sparse vectors. This property implies that recovery using $\ell_1$ minimization is possible if a random projection is used for measurement. Berinde and Indyk [3] showed that expander graphs satisfy a very similar property called RIP-1, which states that if the adjacency matrix of an expander graph is used for measurement, then the Manhattan ($\ell_1$) distance between two sufficiently sparse signals will be preserved by the measurement. They used this property to prove that $\ell_1$-minimization is still possible in this case. However, we will show in this section how RIP-1 can guarantee that the algorithm described will achieve full recovery.

Following [3], we will show that the RIP-1 property can be derived from the expansion property, and that it guarantees that if $x$ is the original $k$-sparse signal, then no recovery algorithm can output a different $k$-sparse signal $\hat{x}$ such that $A\hat{x} = Ax$ but $\hat{x} \neq x$.

We begin with the definition of expander graphs with expansion coefficient $(1-\epsilon)d$, bearing in mind that we will be interested in $\epsilon < \frac{1}{4}$.

Definition 4 (Unbalanced $(k, d, \epsilon)$ Expander Graph)

A $(k, d, \epsilon)$-unbalanced bipartite expander graph is a bipartite graph $G = (U, V, E)$, where $U$ is the set of variable nodes and $V$ is the set of parity nodes, with regular left degree $d$, such that for any $S \subseteq U$, if $|S| \le k$ then the set of neighbors $N(S)$ of $S$ has size $|N(S)| \ge (1-\epsilon)\, d\, |S|$.

The following claim can be derived using the Chernoff bounds [3] (this claim is also used in the expander codes construction):

Fig. 1: An unbalanced $(k, d, \epsilon)$ vertex expander graph
Claim 1

For any $n$, $k \le \frac{n}{2}$, and $\epsilon > 0$, there exists a $(k, d, \epsilon)$ expander with left degree:

$$d = O\!\left(\frac{\log(n/k)}{\epsilon}\right)$$

and right set size:

$$m = O\!\left(\frac{k \log(n/k)}{\epsilon^2}\right).$$
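Since Claim 1 is probabilistic, in practice one can simply sample such a graph. The sketch below is our own illustration (function name and parameters ours; the claim fixes $d$ and $m$ only up to constants): it draws a random left-$d$-regular bipartite graph and returns its $m \times n$ 0/1 adjacency matrix.

```python
import numpy as np

def random_left_regular(n, m, d, rng=None):
    """Sample the m x n adjacency matrix of a random left-d-regular
    bipartite graph: each of the n variable nodes picks d distinct
    parity (check) nodes uniformly at random."""
    rng = np.random.default_rng(rng)
    A = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1
    return A
```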

Lemma 1 (RIP-1 property of the expander graphs)

Let $A_{m \times n}$ be the adjacency matrix of a $(k, d, \epsilon)$ expander graph $G$. Then for any $k$-sparse vector $x$ we have:

$$(1 - 2\epsilon)\, d\, \|x\|_1 \;\le\; \|Ax\|_1 \;\le\; d\, \|x\|_1 . \qquad (1)$$
Proof:

The upper bound is trivial using the triangle inequality (each column of $A$ has exactly $d$ ones), so we only prove the lower bound.

The left-side inequality is not influenced by changing the position of the coordinates of $x$, so we can assume that they are in non-increasing order of magnitude: $|x_1| \ge |x_2| \ge \cdots \ge |x_n|$. Let $e_{ij}$ be the edge that connects $x_i$ to the check node $y_j$. Define $E_2 = \{\, e_{ij} : \exists\, i' < i \text{ such that } e_{i'j} \in E \,\}$; intuitively, $E_2$ is the set of the collision edges. By the expansion property of the graph, every prefix $\{1, \ldots, i\}$ with $i \le k$ induces at most $\epsilon d i$ collision edges, and since $x$ is $k$-sparse, summing over the sorted coordinates gives:

$$\sum_{e_{ij} \in E_2} |x_i| \;\le\; \epsilon\, d\, \|x\|_1 .$$

Now the triangle inequality implies:

$$\|Ax\|_1 \;\ge\; \sum_{e_{ij} \in E \setminus E_2} |x_i| \;-\; \sum_{e_{ij} \in E_2} |x_i| \;=\; d\,\|x\|_1 - 2\sum_{e_{ij} \in E_2} |x_i| \;\ge\; (1 - 2\epsilon)\, d\, \|x\|_1 .$$
\qed
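As a sanity check, both sides of (1) can be verified numerically on random sparse vectors. This small test is our own illustration (it reuses `random_left_regular` from above and assumes the sampled graph really is a $(k, d, \epsilon)$ expander, which Claim 1 guarantees only with high probability):

```python
import numpy as np

def check_rip1(A, d, k, eps, trials=1000, rng=None):
    """Empirically check (1-2*eps)*d*||x||_1 <= ||Ax||_1 <= d*||x||_1
    for random k-sparse vectors x."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    for _ in range(trials):
        x = np.zeros(n)
        supp = rng.choice(n, size=k, replace=False)
        x[supp] = rng.standard_normal(k)
        norm1 = abs(x).sum()
        sketch1 = abs(A @ x).sum()
        assert (1 - 2 * eps) * d * norm1 <= sketch1 + 1e-9
        assert sketch1 <= d * norm1 + 1e-9
    return True
```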

III-C Full Recovery

The full recovery property now follows immediately from Lemma 1.

Theorem 5 (Full recovery)

Suppose $A$ is the adjacency matrix of a $(2k, d, \epsilon)$ expander graph with $\epsilon < \frac{1}{2}$, and suppose $x$ is a $k$-sparse and $\hat{x}$ is a $k$-sparse vector such that $A\hat{x} = Ax$. Then $\hat{x} = x$.

Proof:

Let $z = \hat{x} - x$. Since $\hat{x}$ is $k$-sparse and $x$ is $k$-sparse, $z$ is $2k$-sparse. By Lemma 1 we have:

$$(1 - 2\epsilon)\, d\, \|z\|_1 \;\le\; \|Az\|_1 = \|A\hat{x} - Ax\|_1 = 0,$$

hence $\|z\|_1 = 0$, i.e., $\hat{x} = x$. \qed

Note that the proof of the above theorem essentially says that the adjacency matrix of a $(2k, d, \epsilon)$ expander graph does not have a null vector that is $2k$-sparse. We will also give a direct proof of this result (which does not appeal to RIP-1), since it gives a flavor of the arguments to come.

Lemma 2 (Null space of $A$)

Suppose $A$ is the adjacency matrix of a $(2k, d, \epsilon)$ expander graph with $\epsilon < \frac{1}{2}$. Then any nonzero vector in the null space of $A$, i.e., any $w \neq 0$ such that $Aw = 0$, has more than $2k$ nonzero entries.

Proof:

Define $S$ to be the support set of $w$. Suppose that $w$ has at most $2k$ nonzero entries, i.e., that $|S| \le 2k$. Then from the expansion property we have that $|N(S)| \ge (1-\epsilon)\, d\, |S|$. Partitioning the set $N(S)$ into the two disjoint sets $N_1(S)$, consisting of those nodes in $N(S)$ that are connected to a single node in $S$, and $N_{>1}(S)$, consisting of those nodes in $N(S)$ that are connected to more than a single node in $S$, we may write $|N_1(S)| + |N_{>1}(S)| \ge (1-\epsilon)\, d\, |S|$. Furthermore, counting the edges connecting $S$ and $N(S)$, we have $|N_1(S)| + 2\,|N_{>1}(S)| \le d\,|S|$. Combining these latter two inequalities yields $|N_1(S)| \ge (1 - 2\epsilon)\, d\, |S| > 0$. This implies that there is at least one nonzero element of $w$ that participates in only one equation of $Aw = 0$, and that equation forces it to be zero. However, this contradicts the fact that $w$ is nonzero on $S$, and so $w$ must have more than $2k$ nonzero entries. \qed

IV Our Results: Efficient Full Recovery

IV-A Efficient measurement with $O(k)$-iteration recovery

In this section, we show that the general unbalanced bipartite expander graphs introduced in Definition 4 work much better than $(2k, \frac{3d}{4})$-expanders, in the sense that they give a measurement size $m = O\!\left(k \log \frac{n}{k}\right)$, which is up to a constant factor the optimum measurement size, and simultaneously yield a recovery algorithm which needs only $O(k)$ simple iterations.

Before proving the result, we introduce some notation used in the recovery algorithm and in the proof.

Definition 5 (gap support)

Recall the definition of the gap from Definition 3. At each iteration $t$, let $S_t$ be the support (the set of nonzero elements) of the gaps vector at iteration $t$:

$$S_t = \{\, i : g_i^t \neq 0 \,\}.$$

Definition 6

At each iteration $t$, we define an indicator of the difference between the estimate $\hat{x}^t$ and $x$:

$$D_t = \{\, j : \hat{x}^t_j \neq x_j \,\}.$$

Now we are ready to state the main result:

Theorem 6 (Efficient and Certain Compressive Sampling)

Let $\epsilon < \frac{1}{4}$ and let $A$ be the adjacency matrix of a $(2k, d, \epsilon)$ expander graph, as defined in Definition 4. If we use $A$ as the measurement matrix in compressed sensing of $k$-sparse signals, the following Algorithm 2 will recover the original $k$-sparse signal $x$ from its measured sketch $y = Ax$ with certainty, using at most $\frac{k}{1-4\epsilon} = O(k)$ simple iterations.

1:   Initialize $\hat{x} = 0$.
2:   IF $y = A\hat{x}$, output $\hat{x}$ and exit.
3:   ELSE find a variable node, say $\hat{x}_j$, such that at least $(1-2\epsilon)d$ of the measurements it participates in have identical gap $g$.
4:   Set $\hat{x}_j \leftarrow \hat{x}_j + g$, and go to 2.
Algorithm 2 Our $O(k)$-iteration, full-recovery algorithm
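To make the procedure concrete, here is a minimal Python sketch of Algorithm 2. It is our own illustration: the function and variable names are not from the paper, each iteration is a plain $O(nd)$ scan rather than the tree-based bookkeeping discussed in Section IV-C, and grouping gaps by exact equality assumes exact arithmetic (e.g., integer-valued signals).

```python
import numpy as np
from collections import Counter

def expander_recover(A, y, d, eps, max_iter=None):
    """Algorithm 2: recover a k-sparse x from y = A x, where A is the
    m x n 0/1 adjacency matrix of a (2k, d, eps) expander, eps < 1/4."""
    m, n = A.shape
    x_hat = np.zeros(n)
    neighbors = [np.flatnonzero(A[:, j]) for j in range(n)]  # checks of node j
    threshold = int(np.ceil((1 - 2 * eps) * d))              # required agreement
    t = 0
    while max_iter is None or t < max_iter:
        g = y - A @ x_hat                      # gap vector (Definition 3)
        if not np.any(g):                      # y == A x_hat: full recovery
            return x_hat
        for j in range(n):
            nonzero = [g[i] for i in neighbors[j] if g[i] != 0]
            if not nonzero:
                continue
            gap, count = Counter(nonzero).most_common(1)[0]
            if count >= threshold:             # >= (1-2*eps)*d identical gaps
                x_hat[j] += gap                # fix this coordinate
                break
        else:
            raise RuntimeError("no progress; A may not be a good expander")
        t += 1
    raise RuntimeError("iteration budget exceeded")
```

For a true $(2k, d, \epsilon)$ expander with $\epsilon = \frac{1}{8}$, the outer loop terminates within $2k$ iterations by Theorem 6.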

The proof is virtually identical to that of [1], except that we consider a general $(2k, d, \epsilon)$ expander rather than a $(2k, \frac{3d}{4})$-expander, and it consists of the following lemmas.

  • The algorithm never gets stuck, and one can always find a coordinate $j$ such that $\hat{x}_j$ is connected to at least $(1-2\epsilon)d$ parity nodes with identical gaps.

  • With certainty the algorithm will stop after at most $\frac{k}{1-4\epsilon}$ rounds. Furthermore, by choosing $\epsilon$ small enough, the number of iterations can be made arbitrarily close to $k$.

Lemma 3 (progress)

Suppose that at iteration $t$ we have $|D_t| \le 2k$. If $|S_t| > 0$, then there always exists a variable node $\hat{x}_j$ such that at least $(1-2\epsilon)d$ of its neighboring check nodes have the same gap $g$.

Proof:

We will prove that there exists a coordinate $j \in D_t$ such that $\hat{x}_j$ is connected to at least $(1-2\epsilon)d$ check nodes uniquely; in other words, no other variable node in $D_t$ is connected to these check nodes. This immediately implies the lemma, since the gaps on these uniquely connected check nodes are determined by $\hat{x}_j$ alone and hence are identical.
Since $|D_t| \le 2k$, by the expansion property of the graph $|N(D_t)| \ge (1-\epsilon)\, d\, |D_t|$. Now we are going to count the neighbors of $D_t$ in two ways; Figure 2 illustrates the progress lemma.
We partition the set $N(D_t)$ into two disjoint sets:

Fig. 2: Progress Lemma
  • $N_1(D_t)$: the vertices in $N(D_t)$ that are connected to only one vertex in $D_t$.

  • $N_{>1}(D_t)$: the other vertices (those that are connected to more than one vertex in $D_t$).

By double counting the number of edges between the variable nodes in $D_t$ and their check nodes we have:

$$|N_1(D_t)| + 2\,|N_{>1}(D_t)| \;\le\; d\,|D_t|.$$

This gives, combined with $|N_1(D_t)| + |N_{>1}(D_t)| \ge (1-\epsilon)\, d\, |D_t|$,

$$|N_{>1}(D_t)| \;\le\; \epsilon\, d\, |D_t|,$$

hence

$$|N_1(D_t)| \;\ge\; (1 - 2\epsilon)\, d\, |D_t| ,$$

so by the pigeonhole principle, at least one of the variable nodes in $D_t$ must be connected uniquely to at least $(1-2\epsilon)d$ check nodes. \qed

Lemma 4 (gap elimination)

At each step $t$, if $|S_t| > 0$ then $|S_{t+1}| \le |S_t| - (1 - 4\epsilon)\, d$.

Proof:

By the previous lemma, if $|S_t| > 0$ there always exists a node $\hat{x}_j$ that is connected to at least $(1-2\epsilon)d$ check nodes with identical nonzero gap $g$, and hence to at most $2\epsilon d$ other check nodes, possibly with zero gaps. Adding $g$ to this variable node sets the gaps on its at least $(1-2\epsilon)d$ uniquely connected neighbors to zero, but it may make some zero gaps on its remaining neighbors non-zero. So at least $(1-2\epsilon)d$ coordinates of the gaps vector become zero, and at most $2\epsilon d$ of its zero coordinates may become non-zero. Hence

$$|S_{t+1}| \;\le\; |S_t| - (1-2\epsilon)\,d + 2\epsilon\, d \;=\; |S_t| - (1-4\epsilon)\, d. \qquad (2)$$
\qed

Remark: The key to accelerating the algorithm is the above lemma. For a $(2k, \frac{3d}{4})$ expander we have $\epsilon = \frac{1}{4}$, and so $(1-4\epsilon)\,d = 0$, which only guarantees that $|S_t|$ is reduced by a constant number per iteration. However, when $\epsilon < \frac{1}{4}$, we have $(1-4\epsilon)\,d > 0$, which means that $|S_t|$ is guaranteed to decrease proportionally to $d$. Since $d = \Omega(\log n)$ for such graphs, we save a factor of $\log n$ in the number of iterations.
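For concreteness, the iteration count implied by the lemma can be worked out in one line (a LaTeX rendering of the arithmetic, using the choice $\epsilon = \frac{1}{8}$ that the paper adopts later):

```latex
|S_0| \le dk, \qquad |S_{t+1}| \le |S_t| - (1-4\epsilon)\,d
\;\Longrightarrow\;
T \;\le\; \frac{dk}{(1-4\epsilon)\,d} \;=\; \frac{k}{1-4\epsilon}
\;\overset{\epsilon = 1/8}{=}\; 2k .
```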

Lemma 5 (preservation)

At each step $t$, if $|D_t| \le 2k$, then after running one more iteration of the algorithm we have $|D_{t+1}| \le 2k$.

Proof:

Since at each step we are only changing one coordinate of $\hat{x}$, we have $|D_{t+1}| \le |D_t| + 1 \le 2k+1$, so we only need to prove that $|D_{t+1}| \neq 2k+1$.

Suppose for a contradiction that $|D_{t+1}| = 2k+1$, and partition $N(D_{t+1})$ into two disjoint sets:

  1. $N_1(D_{t+1})$: the vertices in $N(D_{t+1})$ that are connected to only one vertex in $D_{t+1}$.

  2. $N_{>1}(D_{t+1})$: the other vertices (those that are connected to more than one vertex in $D_{t+1}$).

The argument is similar to that given above; by double counting the edges between $D_{t+1}$ and $N(D_{t+1})$ one can show that

$$|N_1(D_{t+1})| \;\ge\; (1-2\epsilon)\, d\, |D_{t+1}| \;=\; (1-2\epsilon)\, d\, (2k+1).$$

Now we have the following facts:

  • $|N_1(D_{t+1})| \le |S_{t+1}|$: check nodes in $N_1(D_{t+1})$ are connected uniquely to coordinates in $D_{t+1}$, hence each such check node has a non-zero gap.

  • $|S_{t+1}| \le |S_0|$: gap elimination from Lemma 4 (the size of $S_t$ never increases).

  • $|S_0| \le dk$: $\hat{x}^0 = 0$ and $x$ differ in at most $k$ coordinates, so $A\hat{x}^0$ and $Ax$ can differ in at most $dk$ coordinates.

As a result we have:

$$(1-2\epsilon)\, d\, (2k+1) \;\le\; |N_1(D_{t+1})| \;\le\; |S_{t+1}| \;\le\; |S_0| \;\le\; dk. \qquad (3)$$

This implies $(1-2\epsilon)(2k+1) \le k$, which contradicts the assumption $\epsilon < \frac{1}{4}$, since in that case $(1-2\epsilon)(2k+1) > \frac{2k+1}{2} > k$.

Proof:

Preservation (Lemma 5) and progress (Lemma 3) together immediately imply that the algorithm will never get stuck. Also, by Lemma 4 we have $|S_0| \le dk$ and $|S_{t+1}| \le |S_t| - (1-4\epsilon)d$. Hence after at most $\frac{dk}{(1-4\epsilon)d} = \frac{k}{1-4\epsilon}$ steps we will have $S_t = \emptyset$, and this together with the preservation lemma implies that we have discovered a signal $\hat{x}$ such that $\hat{x} - x$ is $2k$-sparse and $A\hat{x} = y$. Now, since we have used a $(2k, d, \epsilon)$ expander, the full-recovery property (Theorem 5, via Lemma 2 applied to $\hat{x} - x$) guarantees the recovery of the original signal.

Note that we have to choose $\epsilon < \frac{1}{4}$; as an example, by setting $\epsilon = \frac{1}{8}$ the recovery needs at most $2k$ iterations. \qed

Remark: The condition $\epsilon < \frac{1}{4}$ in the theorem is necessary. Even $\epsilon = \frac{1}{4}$ leads to a $(2k, \frac{3d}{4})$ expander graph (Definition 2), which needs $O(kd)$ iterations.

IV-B Explicit Construction of Expander Graphs

In the definition of the expander graphs (Definition 4), we noted that probabilistic methods prove that such expander graphs exist and, furthermore, that any random graph will, with high probability, be an expander graph. Hence, in practice it may be sufficient to use random graphs instead of expander graphs.

Though there is no efficient explicit construction for the expander graphs of Definition 4, there exists an explicit construction for a class of expander graphs which are very close to the optimum expanders of Definition 4. Recently, Guruswami et al. [13], building on Parvaresh-Vardy codes [14], proved the following theorem:

Theorem 7 (Explicit Construction of expander graphs)

For any constant $\alpha > 0$, and any $n$, $k$, $\epsilon > 0$, there exists a $(k, d, \epsilon)$ expander graph with left degree:

$$d = O\!\left(\left(\frac{\log n \cdot \log k}{\epsilon}\right)^{1 + \frac{1}{\alpha}}\right)$$

and number of right-side vertices:

$$m = O\!\left(d^{\,2} \cdot k^{1+\alpha}\right),$$

which has an efficient deterministic explicit construction.

Since our previous analysis was based only on the expansion property, which does not change in this case, a similar result holds if we use these expanders. For instance, by setting $\epsilon = \frac{1}{8}$ and choosing a small constant $\alpha$, we obtain an explicit expander construction with $d = \mathrm{polylog}(n)$, so we need only $m = O\!\left(k^{1+\alpha}\,\mathrm{polylog}(n)\right)$ measurements and at most $2k$ iterations in the recovery algorithm.

IV-C Comparison with the recent unified geometric-combinatorial approach

We will compare our result with a very recent result by Indyk et al. [22]. Their result unifies Indyk's previous work based on randomness extractors [23], which gave a combinatorial algorithm, with the RIP-1 approach of Indyk and Berinde [3], which is based on geometric convex optimization methods; the unified approach suggests a recursive recovery algorithm which takes $O\!\left(k \log \frac{n}{k}\right)$ sketch measurements and needs a recovery time of $O\!\left(n \log \frac{n}{k}\right)$.

By comparison, our recovery algorithm is a simple iterative algorithm that needs $O\!\left(k \log \frac{n}{k}\right)$ sketch measurements. Our decoding algorithm consists of at most $2k$ very simple iterations. Each iteration can be implemented very efficiently (see [1]), since the adjacency matrix of the expander graph is sparse with all entries 0 or 1. One naive way to do this is by using balanced binary search trees (such as red-black trees). As shown before, initially $|S_0| \le dk$, so we can build the tree efficiently in $O(dk \log(dk))$ time. Now, by gap elimination (Lemma 4), although at each iteration some nodes are deleted from the tree and some new nodes are added, the size of the tree does not grow, so all the updates of one iteration can be done in $O(d \log(dk))$ time. As a result, we have an $O(dk \log(dk))$ preprocessing step and at most $2k$ iterations, each taking $O(d \log(dk))$. So this naive approach has overall time complexity $O(dk \log(dk))$. This can be improved even further by using better data structures.
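Python has no built-in red-black tree, so the following sketch (our own illustration) uses a hash map for the nonzero-gap set instead; the update pattern is the same as in the analysis above — per iteration, only the $d$ checks of the changed node are touched, at least $(1-2\epsilon)d$ entries are deleted and at most $2\epsilon d$ are inserted, so the structure never grows:

```python
def update_gaps(nonzero_gaps, A, j, g):
    """Update the nonzero-gap map after setting x_hat[j] += g.

    nonzero_gaps : dict mapping check-node index -> current nonzero gap
    A            : m x n 0/1 numpy adjacency matrix
    j, g         : the variable node changed and the common gap applied

    Only the d checks of node j are touched, so one iteration costs O(d)
    dict operations (O(d log(dk)) with a balanced search tree instead).
    """
    for i in A[:, j].nonzero()[0]:
        new = nonzero_gaps.get(i, 0.0) - g    # gap_i = y_i - (A x_hat)_i
        if new == 0.0:
            nonzero_gaps.pop(i, None)         # gap eliminated
        else:
            nonzero_gaps[i] = new             # gap created or changed
    return nonzero_gaps
```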

V Approximately sparse signals and robust recovery

In this section we will show how the analysis using high-quality expander graphs that we proposed in the previous section can be used to make the robust recovery algorithm in [1] more efficient in terms of the sketch size and recovery time for a family of almost $k$-sparse signals. With this analysis we will show that the algorithm needs only $m = O\!\left(k \log \frac{n}{k}\right)$ measurements. Explicit constructions for the sketch matrix exist, and the recovery consists of two simple sub-linear steps. First, the combinatorial iterative algorithm in [1], which is now empowered with the high-quality expander sketches, can be used to find the position and the sign of the $k$ largest elements of the signal $x$. Using an analysis similar to the one in Section IV, we will show that the algorithm needs only $O(k)$ iterations and, similar to the previous section, each iteration can be performed efficiently using a variation of red-black trees with time complexity $O(d \log(dk))$. Then, restricting to the positions of the $k$ largest elements, we will use a robust theorem for expander graphs to show that simple optimization methods, now restricted to $k$-dimensional vectors, can be used to recover a $k$-sparse signal that approximates the original signal with very high precision. In summary, both the combinatorial part and the optimization part require sub-linear time complexity, so the overall algorithm needs sub-linear recovery time and will output a $k$-sparse signal very close to the original signal.

Before presenting the algorithm, we will define precisely what we mean for a signal to be almost $k$-sparse.

Definition 7 (almost $k$-sparse signal)

A signal $x$ is said to be almost $k$-sparse iff it has at most $k$ large elements and the remaining elements are very close to zero and have very low magnitude. In other words, the entries of the 'near-zero' level in the signal vector take values from the set $\{\zeta : |\zeta| \le \lambda\}$, while the 'significant' entries take values from the set $\{\zeta : |\zeta| \ge L\}$. By the definition of almost sparsity we have $\lambda \ll L$. The general assumption of almost sparsity is, intuitively, that the total magnitude of the near-zero terms should be small enough that it does not disturb the overall structure of the signal, which could make the recovery impossible or very erroneous. Since $\lambda \ll L$ and the total contribution of the near-zero elements is small, we can assume that $(n-k)\lambda$ is small enough. We will use this assumption throughout this section.

In order to make the analysis for almost $k$-sparse signals simpler, we will use a high-quality expander graph which is right-regular as well (the right-regularity assumption is made just for the simplicity of the analysis and, as we will discuss, it is not mandatory). The following lemma, which is proved in [4], gives us a way to construct right-regular expanders from any expander graph without disturbing its characteristics (Lemma 2.3 in [4]).

Lemma 6 (right-regular expanders)

From any left-regular unbalanced expander graph with left size $n$, right size $m$, and left degree $d$, it is possible to efficiently construct a left-right-regular unbalanced expander graph with essentially the same expansion, left size $n$, right size $O(m)$, left degree $O(d)$, and right degree $r = O\!\left(\frac{nd}{m}\right)$.

Corollary 1

There exists a left-right-regular unbalanced $(2k, d, \epsilon)$ expander graph with left size $n$, right size $m = O\!\left(\frac{k \log(n/k)}{\epsilon^2}\right)$, left degree $d = O\!\left(\frac{\log(n/k)}{\epsilon}\right)$, and right degree $r = O\!\left(\frac{nd}{m}\right)$. Also, based on the explicit constructions of expander graphs, explicit constructions for right-regular expander graphs exist as well.

We will use the above right-regular high-quality expander graphs in order to perform robust signal recovery efficiently. The following algorithm generalizes the recovery algorithm and can be used to find the position and sign of the $k$ largest elements of an almost $k$-sparse signal $x$ from $y = Ax$. Throughout the algorithm, $r$ denotes the right-side degree of the expander graph; since each measurement involves at most $r$ coordinates, the near-zero elements can contribute at most $r\lambda$ to any single measurement. Throughout the algorithm we will assume that $L$ is sufficiently large compared to $r\lambda$; hence the algorithm is appropriate for a family of almost $k$-sparse signals for which the magnitude of the significant elements is large enough. We will also assume that $r\lambda$ is small relative to $L$; when $L$ is large with respect to $r\lambda$ ($L \gg r\lambda$), the constant-degree expander sketch proposed in [1] also works pretty well.

1:   Initialize $\hat{x} = 0$.
2:   At each iteration $t$: IF $S_t = \emptyset$, determine the positions and signs of the significant components of $x$ as the positions and signs of the non-zero components of $\hat{x}$; go to 5.
3:   ELSE find a variable node, say $\hat{x}_j$, such that at least $(1-2\epsilon)d$ of the measurements it participates in are in either of the following categories:
  1. They have gaps which are of the same sign and have absolute values in the 'significant' range. Moreover, there exists a number $g$ such that the gaps over these measurements all fall into the 'near-zero' range if we change $\hat{x}_j$ to $\hat{x}_j + g$.

  2. They have gaps which are of the same sign and have absolute values roughly twice the 'significant' level, indicating a sign error in $\hat{x}_j$. Moreover, there exists a number $g$ such that the gaps over these measurements all fall into the 'near-zero' range if we change $\hat{x}_j$ to $\hat{x}_j + g$.

4:   Set $\hat{x}_j \leftarrow \hat{x}_j + g$, and go to 2 for the next iteration.
5:   Pick the set $T$ of significant elements of the candidate signal $\hat{x}$. Let $A_T$ be the sketch matrix restricted to the columns indexed by $T$; output the solution of the optimization problem: find a vector $\hat{z}$ supported on $T$ that minimizes $\|A_T \hat{z}_T - y\|_1$.
Algorithm 3 The $O(k)$-iteration, robust recovery algorithm to find the position and sign of the $k$ largest elements of an almost-$k$-sparse signal and then a close $k$-sparse approximation to it.

In order to prove the correctness of the algorithm, we need the following definitions, which are generalizations of the corresponding definitions in the exactly $k$-sparse case.

Definition 8

At each iteration $t$, we define $D_t$ as the set of coordinates where the estimate $\hat{x}^t$ differs from $x$ at the significance level or in sign.

Definition 9 (gap support)

At each iteration $t$, let $S_t$ be the set of measurements to which at least one 'significant' discrepancy from $D_t$ contributes:

$$S_t = \{\, i : \exists\, j \in D_t \text{ with } A_{ij} = 1 \,\}.$$

Theorem 8 (Validity of Algorithm 3)

The first part of Algorithm 3 will find the position and sign of the significant elements of the signal $x$ (for more discussion see [1]).

Proof:

This is very similar to the proof of the validity of the exactly $k$-sparse recovery algorithm. We will exploit the following facts.

  • $x$ is almost $k$-sparse, so it has at most $k$ significant elements. Initially $\hat{x}^0 = 0$, so $|D_0| \le k$ and $|S_0| \le dk$.

  • Since at each iteration only one element is selected and modified, at each iteration $|D_{t+1}| \le |D_t| + 1$.

  • If $|D_t| \le 2k$ then $|D_{t+1}| \le 2k$ (preservation lemma), and by the neighborhood (progress) theorem at each round $|S_{t+1}| \le |S_t| - (1-4\epsilon)d$.

  • If $|S_t| > 0$, by the neighborhood theorem there exists a node $\hat{x}_j$ which is the unique node in $D_t$ connected to at least $(1-2\epsilon)d$ parity check nodes. This node differs from its actual value in the significance level or in sign. In the first case, category 1) of the recovery algorithm will detect and fix it, and in the second case category 2) of the algorithm will detect and fix it. For further discussion please refer to [1].

  • As a direct result: $|S_{t+1}| \le |S_t| - (1-4\epsilon)d$. So after $t$ iterations we will have $|S_t| \le dk - t\,(1-4\epsilon)d$. Consequently $S_t = \emptyset$ after at most $\frac{k}{1-4\epsilon}$ iterations.

This means that after at most $\frac{k}{1-4\epsilon}$ iterations the set $S_t$ will be empty, and hence the positions of the $k$ largest elements in $x$ will be the positions of the non-zero elements in $\hat{x}$. \qed

Remark: This algorithm, like its exactly $k$-sparse counterpart, needs at most $\frac{k}{1-4\epsilon} = O(k)$ iterations. Now, by exploiting the simple structure of the adjacency matrix of the expander graph (again in a manner similar to the exactly $k$-sparse case), since initially $|S_0| \le dk$ we only need to construct a binary search tree of $dk$ elements. Moreover, at each iteration at least $(1-2\epsilon)d$ nodes are deleted from the tree and at most $2\epsilon d$ possibly new nodes are added, so the size of the tree never increases. Hence each iteration can be implemented efficiently with $O(d \log(dk))$ time complexity, and the algorithm will find the positions of the $k$ largest elements of $x$ with a very small overhead. Note that the right-regularity assumption was only used to make the analysis simpler and is not necessary here.

Knowing the positions of the $k$ largest elements of $x$, it will be easier to recover a good $k$-sparse approximation. Based on the RIP-1 property of the expander graph, we propose a way to recover a good approximation for $x$ in time sub-linear in $n$. We need the following lemma, which is a direct result of the RIP-1 property of expander graphs and is proved in [3].

Lemma 7

Consider any $w \in \mathbb{R}^n$, and let $T$ be any set of $k$ coordinates of $w$. Then, writing $w_T$ for the restriction of $w$ to $T$, we have:

$$\|Aw\|_1 \;\ge\; (1-2\epsilon)\, d\, \|w_T\|_1 \;-\; d\, \|w - w_T\|_1$$

and:

$$\|Aw\|_1 \;\le\; d\, \|w\|_1 .$$

Using Lemma 7 we will prove that the following minimization recovers a $k$-sparse signal very close to the original signal:

Theorem 9 (Final recovery)

Suppose $x$ is an almost $k$-sparse signal and $y = Ax$ is given, where $A$ is the adjacency matrix of a $(2k, d, \epsilon)$ right-regular expander graph with right degree $r$. Also suppose $T$ is the set of the $k$ largest elements of $x$, found by the first part of Algorithm 3. Now let $A_T$ be the submatrix of $A$ containing the columns at the positions of the $k$ largest elements of $x$, so $A_T$ is an $m \times k$ dimensional matrix. Then the following minimization problem can be solved with a time complexity depending only on $k$ (not $n$), and it recovers a $k$-sparse signal $\hat{x}$ with very small norm-1 distance to the original $x$:

$$\hat{x} = \arg\min_{\operatorname{supp}(z) \subseteq T} \; \|A_T z_T - y\|_1 .$$

Proof:

Suppose $\hat{x}$ is the recovered signal. Since $x_T$ (the restriction of $x$ to $T$) is $k$-sparse and feasible for the minimization, we have $\|A\hat{x} - y\|_1 \le \|Ax_T - y\|_1$, and hence:

$$\|Ax_T - y\|_1 \;=\; \|A(x_T - x)\|_1 \;\le\; d\, \|x - x_T\|_1 \;=\; d \sum_{i \notin T} |x_i| \;\le\; d\,(n-k)\,\lambda . \qquad (4)$$

The first equality is just the definition $y = Ax$, the inequality is the upper bound of Lemma 7, and the last step is due to the almost-$k$-sparsity of $x$. Now, by setting $w = \hat{x} - x_T$ in Lemma 7, and since $\hat{x}$ is $k$-sparse and $x_T$ is $k$-sparse with the same positions, we will have:

$$(1-2\epsilon)\, d\, \|\hat{x} - x_T\|_1 \;\le\; \|A(\hat{x} - x_T)\|_1 \;\le\; \|A\hat{x} - y\|_1 + \|y - Ax_T\|_1 \;\le\; 2\, d\,(n-k)\,\lambda ,$$

so $\|\hat{x} - x_T\|_1 \le \frac{2(n-k)\lambda}{1-2\epsilon}$.

As a result, since the signal is almost $k$-sparse, the value of $(n-k)\lambda$ will be negligible, and hence the recovered $k$-sparse signal and the $k$ largest elements of the original signal will be very close to each other. So the result will be a $k$-sparse signal approximating the original almost $k$-sparse signal with very high precision. \qed
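The restricted minimization in Theorem 9 is a small linear program over only $k$ variables. A sketch using scipy is given below (our own illustration; `scipy.optimize.linprog` is a standard LP solver, and rewriting an $\ell_1$ objective with slack variables $t$ such that $-t \le A_T z - y \le t$ is a textbook reformulation):

```python
import numpy as np
from scipy.optimize import linprog

def restricted_l1_fit(A_T, y):
    """Solve min_z ||A_T z - y||_1 as an LP over variables (z, t):
    minimize sum(t) subject to -t <= A_T z - y <= t."""
    m, k = A_T.shape
    c = np.concatenate([np.zeros(k), np.ones(m)])    # objective: sum(t)
    A_ub = np.block([[ A_T, -np.eye(m)],             #  A_T z - t <= y
                     [-A_T, -np.eye(m)]])            # -A_T z - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * (k + m)                # all variables free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:k]                                 # the fitted z
```

The returned vector, re-embedded at the positions $T$ (and zero elsewhere), is the $k$-sparse output of Algorithm 3.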

Remark: Recall that the right-regularity assumption is made just to simplify the analysis. As mentioned before, it is not necessary for the first part of the algorithm. For the second part, it is used in the inequality bounding the contribution of the near-zero elements to a single measurement, $|A^{(i)}(x - x_T)| \le r\lambda$, where $A^{(i)}$ denotes the $i$-th row of $A$.

However, without right-regularity, denoting by $r_i$ the number of ones in the $i$-th row of $A$, we have

$$|A^{(i)}(x - x_T)| \;\le\; r_i\, \lambda$$

(in the right-regular case, $r_i = r$ for all $i$). Therefore:

$$\|A(x - x_T)\|_1 \;\le\; \lambda \sum_{i=1}^{m} r_i \;=\; \lambda\, n\, d .$$

The only difference with the right-regular case is the extra factor $\max_i r_i$ in the per-measurement bound, but this does not affect the end result.

VI Conclusion

In this paper we used a combinatorial structure called an expander graph in order to perform deterministic, efficient compressed sensing and recovery. We showed that using $(2k, d, \epsilon)$ expander graphs with $\epsilon < \frac{1}{4}$, one needs only $O\!\left(k \log \frac{n}{k}\right)$ measurements, and the recovery needs only $O(k)$ iterations. Also, we showed how the expansion property of the expander graphs guarantees the full recovery of the original signal. Since random graphs are, with high probability, expander graphs, and it is very easy to generate random graphs, in many cases we might use random graphs instead. However, we showed that in cases where recovery guarantees are needed, with just a small penalty on the number of measurements and without affecting the number of iterations needed for recovery, one can use another family of expander graphs for which explicit constructions exist. We also compared our result with a very recent result by Indyk et al. [22], and showed that our algorithm has advantages in terms of the number of required sketch measurements, the recovery complexity, and the simplicity of the algorithm in terms of practical implementation. Finally, we showed how the algorithm can be modified to be robust. In order to do this we slightly modified the algorithm, using right-regular high-quality expander graphs, to find the position of the $k$ largest elements of an almost $k$-sparse signal. Then, exploiting the robustness of the RIP-1 property of the expander graphs, we showed how this information can be combined with efficient optimization methods to find a $k$-sparse approximation for $x$ very efficiently. However, in the almost-$k$-sparsity model that we used, the non-sparse components should have "almost equal" magnitudes. This is because of the assumption that the significant elements dominate the accumulated near-zero contributions, which restricts the degree of deviation of the significant components. As a result, one important future work will be finding a robust algorithm based on more general assumptions. Table I compares our results with the previous papers.

References

  • [1] W. Xu and B. Hassibi, "Efficient compressive sensing with deterministic guarantees using expander graphs," Proceedings of the IEEE Information Theory Workshop, Lake Tahoe, 2007.
  • [2] W. Xu and B. Hassibi, "Further results on performance analysis for compressive sensing using expander graphs," Conference Record of the Forty-First Asilomar Conference on Signals, Systems and Computers (ACSSC 2007), Nov. 2007, pp. 621-625.
  • [3] R. Berinde and P. Indyk, "Sparse recovery using sparse random matrices," http://people.csail.mit.edu/indyk/report.pdf, 2008.
  • [4] V. Guruswami, J. Lee, and A. Razborov, "Almost Euclidean subspaces of $\ell_1^N$ via expander codes," Electronic Colloquium on Computational Complexity, Report TR07-089, September 2007.
  • [5] S. Vadhan, "Expander graphs," Lecture 8, Pseudorandomness course, Harvard University, Fall 2004.
  • [6] M. Capalbo, O. Reingold, S. Vadhan, and A. Wigderson, "Randomness conductors and constant-degree lossless expanders," Proceedings of the 34th STOC, 2002.
  • [7] L. Bassalygo and M. Pinsker, "Complexity of an optimum nonblocking switching network without reconnections," Problems of Information Transmission, vol. 9, no. 1, pp. 289-313, 1973.
  • [8] L. Applebaum, S. Howard, S. Searle, and R. Calderbank, "Chirp sensing codes: deterministic compressed sensing measurements for fast recovery," preprint, 2008.
  • [9] V. Tarokh, "Reed-Solomon frames from Vandermonde matrices."
  • [10] F. Parvaresh and B. Hassibi, "List decoding for compressed sensing," in preparation.
  • [11] M. Sipser and D. Spielman, "Expander codes," IEEE Transactions on Information Theory, vol. 42, no. 6, pp. 1710-1722, 1996.
  • [12] J. Feldman, T. Malkin, R. Servedio, C. Stein, and M. Wainwright, "LP decoding corrects a constant fraction of errors," ISIT 2004.
  • [13] V. Guruswami, C. Umans, and S. Vadhan, "Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes," Proceedings of the 22nd Annual IEEE Conference on Computational Complexity, 2007.
  • [14] F. Parvaresh and A. Vardy, "Correcting errors beyond the Guruswami-Sudan radius in polynomial time," Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, pp. 285-294, 2005.
  • [15] E. Candes, J. Romberg, and T. Tao, "Exact signal reconstruction from highly incomplete and inaccurate measurements," submitted for publication, June 2004.
  • [16] R. Baraniuk, "Compressive sensing," lecture notes, IEEE Signal Processing Magazine, 2007.
  • [17] E. Candes and M. Wakin, "'People hearing without listening': an introduction to compressive sampling," California Institute of Technology, 2007.
  • [18] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
  • [19] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, 2007.
  • [20] W. B. Johnson and J. Lindenstrauss, "Extensions of Lipschitz mappings into a Hilbert space," Conference in Modern Analysis and Probability, pp. 189-206, 1984.
  • [21] M. Wakin, J. Laska, M. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. Kelly, and R. Baraniuk, "An architecture for compressive imaging."
  • [22] R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss, "Combining geometry and combinatorics: a unified approach to sparse signal recovery," preprint, 2008.
  • [23] P. Indyk, "Explicit constructions for compressed sensing of sparse signals," SODA, 2008.
Paper      | R/D | Explicit | Sketch ($m$)                              | # Iterations   | Full Recovery
[15]       | R   | No       | $O(k \log(n/k))$                          | LP             | Yes: RIP-2
[3]        | D   | No       | $O(k \log(n/k))$                          | LP             | Yes: RIP-1
[1]        | D   | Yes      | $O(n)$                                    | $O(k)$         | Yes: RIP-1
[2]        | D   | No       | $O(k \log n)$                             | $O(k \log n)$  | Yes: RIP-1
Theorem 6  | D   | No       | $O(k \log(n/k))$                          | $O(k)$         | Yes: RIP-1
Theorem 7  | D   | Yes      | $O(k^{1+\alpha}\,\mathrm{polylog}(n))$    | $O(k)$         | Yes: RIP-1
TABLE I: Different expander-based recovery algorithms