Joint Sparsity Pattern Recovery with 1-bit Compressive Sensing in Sensor Networks

Vipul Gupta, Bhavya Kailkhura, Thakshila Wimalajeewa, and Pramod K. Varshney
Indian Institute of Technology Kanpur, Kanpur 208016, India
Department of EECS, Syracuse University, Syracuse, NY 13244, USA
Abstract

We study the problem of jointly sparse support recovery with 1-bit compressive measurements in a sensor network. Sensors are assumed to observe sparse signals having the same but unknown sparse support. Each sensor quantizes its measurement vector element-wise to 1-bit and transmits the quantized observations to a fusion center. We develop a computationally tractable support recovery algorithm which minimizes a cost function defined in terms of the likelihood function and the $\ell_{1,\infty}$ norm. We observe that, even with noisy 1-bit measurements, the jointly sparse support can be recovered accurately with multiple sensors each collecting only a small number of measurements.

Footnote: This work is supported in part by the National Science Foundation (NSF) under Grant No. 1307775.

Keywords:

Compressed sensing, maximum-likelihood estimation, quantization, support recovery.

I Introduction

Support recovery of a sparse signal deals with the problem of finding the locations of the non-zero elements of the sparse signal. This problem occurs in a wide variety of areas including source localization [1, 2], sparse approximation [3], subset selection in linear regression [4, 5], estimation of frequency band locations in cognitive radio networks [6], and signal denoising [7]. In these applications, finding the support of the sparse signal is more important than recovering the complete signal itself. The problem of sparsity pattern recovery has been addressed by many authors in the last decade in different contexts. Compressive sensing (CS) has recently been introduced as a sparse signal acquisition scheme based on random projections. A substantial amount of work has already been done on support recovery with real valued measurements [8, 9, 10, 11]. In practice, however, measurements are quantized before transmission or storage, so it is important to consider quantization of compressive measurements. Further, coarse quantization of measurements is desirable, and even necessary, in resource constrained communication networks. Some recent works have addressed the problem of recovering sparse signals or the sparsity pattern from quantized compressive measurements in different contexts, be it deriving performance bounds [12, 13] or devising recovery algorithms [14, 15, 16, 17].

However, most of the work on 1-bit CS has focused only on recovery for the single sensor case. Reliable recovery of a sparse signal based on 1-bit CS is very difficult with only one sensor, especially when the signal-to-noise ratio (SNR) is low. On the other hand, simultaneous recovery of multiple sparse signals arises naturally in a number of applications including distributed sensor and cognitive radio networks. To the best of our knowledge, the problem of jointly sparse support recovery with multiple sensors based on 1-bit CS has not been explored in the literature. In this work, we exploit the benefits of using multiple nodes for jointly sparse recovery with 1-bit CS measurements. We assume that the multiple nodes observe sparse signals with the same but unknown sparsity pattern. The measurement vectors at each node are quantized to 1-bit element-wise and transmitted to a fusion center.

To recover the jointly sparse support, we propose to solve an optimization problem which minimizes an objective function expressed in terms of the likelihood function and the $\ell_{1,\infty}$ norm of a matrix. We use a computationally tractable algorithm to recover the common sparsity pattern. We show that, by employing multiple sensors, the common sparse support can be estimated reliably with a relatively small number of 1-bit CS measurements per node. In particular, we investigate the trade-off between the number of sensor nodes deployed and the number of measurements collected per node.

II Observation Model

We consider a distributed network with multiple nodes that observe sparse signals having the same sparse support. Let the number of sensors be $P$. At a given node $p$, consider the following real valued observation vector collected via random projections:

$$\mathbf{y}_p = \mathbf{A}_p \mathbf{x}_p + \mathbf{w}_p, \qquad (1)$$

where $\mathbf{A}_p$ is the ($M \times N$) measurement matrix at the $p$-th node for $p = 1, \dots, P$, and $N$ is the signal dimension. For each $p$, the entries of $\mathbf{A}_p$ are assumed to be drawn from a Gaussian ensemble with mean zero. The sparse signal vector of interest, $\mathbf{x}_p \in \mathbb{R}^N$ for $p = 1, \dots, P$, has only $K$ nonzero elements, with the same support at every node. The measurement noise vector $\mathbf{w}_p$ at the $p$-th node is assumed to be independent and identically distributed (i.i.d.) Gaussian with mean vector $\mathbf{0}_M$ and covariance matrix $\sigma^2 \mathbf{I}_M$, where $\mathbf{0}_M$ is a vector of all zeros and $\mathbf{I}_M$ is the identity matrix.

Let each element of $\mathbf{y}_p$ be quantized to 1-bit, so that the $m$-th quantized measurement at the $p$-th node is given by,

$$z_{m,p} = \mathrm{sign}\!\left(y_{m,p}\right), \qquad (2)$$

where $y_{m,p}$ is the $m$-th element of $\mathbf{y}_p$, for $m = 1, \dots, M$ and $p = 1, \dots, P$. Let $\mathbf{Y}$ and $\mathbf{Z}$ be $M \times P$ matrices in which the $(m,p)$-th elements are $y_{m,p}$ and $z_{m,p}$, respectively. Further, let $\mathbf{X}$ be the $N \times P$ matrix which contains $\mathbf{x}_p$ as its columns for $p = 1, \dots, P$. In matrix notation, (2) can be written as,

$$\mathbf{Z} = \mathrm{sign}(\mathbf{Y}), \qquad (3)$$

where $\mathrm{sign}(\cdot)$ denotes the element-wise sign of its argument.
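The measurement and quantization model in (1)-(3) can be sketched numerically as follows; all dimensions, variances, and variable names below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, P, K = 32, 16, 4, 3   # signal dimension, measurements per node, nodes, sparsity (illustrative)

# Jointly sparse signal matrix X: the same K nonzero rows across all P columns
support = rng.choice(N, size=K, replace=False)
X = np.zeros((N, P))
X[support, :] = rng.choice([-1.0, 1.0], size=(K, P))

A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))  # common Gaussian measurement matrix
W = 0.1 * rng.normal(size=(M, P))                   # i.i.d. Gaussian measurement noise

Y = A @ X + W        # real-valued measurements as in (1), one column per node
Z = np.sign(Y)       # element-wise 1-bit quantization as in (2)-(3)
```

Only the sign matrix `Z` (one bit per measurement) would be transmitted to the fusion center in this setting.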

III Common Support Recovery with 1-bit CS Measurements via $\ell_{1,\infty}$-Regularized Maximum Likelihood

In this section, we first formulate an optimization problem for joint sparsity pattern recovery with 1-bit CS. We use a norm-regularized minimization approach with the likelihood function as the cost function instead of the widely used least squares cost. With quantized measurements, the likelihood-based approach has been shown to provide better results than least squares methods with a single sensor [15].

For the sake of tractability, we assume that the measurement matrix is the same for all nodes, i.e., $\mathbf{A}_p = \mathbf{A}$ for all $p$ (the work can easily be extended to the scenario having different measurement matrices). We have from (1),

$$y_{m,p} = \mathbf{a}_m^T \mathbf{x}_p + w_{m,p}, \qquad (4)$$

for $m = 1, \dots, M$ and $p = 1, \dots, P$. In the rest of the paper, $\mathbf{a}_m^T$ denotes the $m$-th row of $\mathbf{A}$.

Next, we calculate the probabilities $\Pr(z_{m,p}=1 \mid \mathbf{x}_p)$ and $\Pr(z_{m,p}=-1 \mid \mathbf{x}_p)$, which will later be used to write the expression for the likelihood of $\mathbf{Z}$ given $\mathbf{S}$. We have,

$$\Pr(z_{m,p}=1 \mid \mathbf{x}_p) = \Pr\!\left(\mathbf{a}_m^T \mathbf{x}_p + w_{m,p} \geq 0\right) = \Phi\!\left(\frac{\mathbf{a}_m^T \mathbf{x}_p}{\sigma}\right).$$

Similarly,

$$\Pr(z_{m,p}=-1 \mid \mathbf{x}_p) = \Phi\!\left(-\frac{\mathbf{a}_m^T \mathbf{x}_p}{\sigma}\right),$$

where $\Phi(\cdot)$ denotes the standard Gaussian CDF. Since the quantized measurements are conditionally independent, the conditional probability of $\mathbf{Z}$ given $\mathbf{X}$ is given by,

$$\Pr(\mathbf{Z} \mid \mathbf{X}) = \prod_{p=1}^{P}\prod_{m=1}^{M} \Phi\!\left(\frac{z_{m,p}\,\mathbf{a}_m^T \mathbf{x}_p}{\sigma}\right).$$

The negative log-likelihood of $\mathbf{Z}$ given $\mathbf{S}$, $f(\mathbf{S})$, is given by

$$f(\mathbf{S}) = -\log \Pr(\mathbf{Z} \mid \mathbf{S}),$$

which can be rewritten as

$$f(\mathbf{S}) = -\sum_{p=1}^{P}\sum_{m=1}^{M} \log \Phi\!\left(\frac{z_{m,p}\,\mathbf{a}_m^T \mathbf{s}_p}{\sigma}\right), \qquad (5)$$

where $\mathbf{s}_p$ denotes the $p$-th column of $\mathbf{S}$.
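A direct numerical evaluation of the negative log-likelihood in (5) can be sketched as below; the function name and the use of SciPy's Gaussian CDF are illustrative assumptions, and the sum is taken in log-space for numerical stability:

```python
import numpy as np
from scipy.stats import norm

def neg_log_likelihood(S, A, Z, sigma):
    """f(S) = -sum over (m, p) of log CDF(z_{m,p} * a_m^T s_p / sigma),
    as in (5), evaluated via logcdf to avoid underflow."""
    return -norm.logcdf(Z * (A @ S) / sigma).sum()
```

Signals whose signs agree with the observed 1-bit measurements yield a smaller value of this cost than sign-inconsistent signals.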

In the following, we use $\mathbf{S}$ and $\mathbf{X}$ interchangeably. We need to minimize this expression, $f(\mathbf{S})$, as well as incorporate the sparsity condition of the signal matrix, to obtain an estimated signal matrix or the support of $\mathbf{X}$. As the signals observed at all the nodes have the same support, the row-$\ell_0$ norm (as defined in [18] for real valued measurements) is appropriate to incorporate the joint sparsity constraint. The row-$\ell_0$ norm of $\mathbf{S}$ is given by,

$$\|\mathbf{S}\|_{\mathrm{row}\text{-}0} = \left|\mathrm{rowsupp}(\mathbf{S})\right|,$$

which is also referred to as the $\ell_{0,\infty}$ norm, where the row support of the coefficient matrix is defined as [18]

$$\mathrm{rowsupp}(\mathbf{S}) = \left\{ i \in \{1, \dots, N\} : s_{i,j} \neq 0 \text{ for some } j \right\}.$$

Now to compute the support estimate, one can solve the following optimization problem:

$$\hat{\mathbf{S}} = \arg\min_{\mathbf{S}} \; f(\mathbf{S}) + \lambda\, \|\mathbf{S}\|_{\mathrm{row}\text{-}0}, \qquad (6)$$

where $\lambda > 0$ is the penalty parameter. However, the problem (6) is not tractable in its current form and can be relaxed as

$$\hat{\mathbf{S}} = \arg\min_{\mathbf{S}} \; f(\mathbf{S}) + \lambda\, \|\mathbf{S}\|_{1,\infty}, \qquad (7)$$

where $\|\mathbf{S}\|_{1,\infty} = \sum_{i=1}^{N} \max_{1 \leq j \leq P} |s_{i,j}|$, i.e., $\|\mathbf{S}\|_{1,\infty}$ is the sum of the maximum absolute values of the elements in each row, also known as the $\ell_{1,\infty}$ norm of a matrix.
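The $\ell_{1,\infty}$ penalty in (7) is straightforward to compute; a minimal sketch (function name assumed):

```python
import numpy as np

def l1_inf_norm(S):
    """Sum over rows of the maximum absolute entry in each row: the l_{1,inf} norm.
    A row contributes 0 only if it is entirely zero, which is what promotes
    joint (row) sparsity."""
    return np.abs(S).max(axis=1).sum()

S = np.array([[1.0, -3.0],
              [0.0,  0.0],
              [2.0,  0.5]])
# rows contribute max|.| = 3, 0, 2, so the norm is 5
```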

The goal is to develop a computationally tractable algorithm to solve a problem of the form

$$\min_{\mathbf{S}} \; f(\mathbf{S}) + \lambda\, g(\mathbf{S}), \qquad (8)$$

where $f(\mathbf{S})$ is as defined in (5) and $g(\mathbf{S})$ is the $\ell_{1,\infty}$ norm of $\mathbf{S}$.

We use iterative shrinkage-thresholding algorithms (ISTA) to solve the problem defined in (8). In ISTA, each iteration involves solving a simplified optimization problem, which in most cases can be solved easily using the proximal gradient method, followed by a shrinkage/soft-thresholding step; see, e.g., [19, 20, 21]. From [21], at the $k$-th iteration we have

$$\mathbf{S}^k = p_L\!\left(\mathbf{S}^{k-1}\right), \qquad (9)$$

where

$$p_L(\mathbf{S}) = \arg\min_{\mathbf{V}} \left\{ \frac{L}{2}\left\|\mathbf{V} - \left(\mathbf{S} - \frac{1}{L}\nabla f(\mathbf{S})\right)\right\|_F^2 + \lambda\, g(\mathbf{V}) \right\}. \qquad (10)$$

Inputs to the algorithm are $L$ (a Lipschitz constant of $\nabla f$) and $\mathbf{S}^0$, the initialization for the iterative method, which can be the null matrix or $\mathbf{A}^{\dagger}\mathbf{Z}$, where $\mathbf{A}^{\dagger}$ is the pseudoinverse of $\mathbf{A}$ and $\mathbf{Z}$ is the quantized received signal matrix as defined before. For our case, the gradient of $f$ w.r.t. the matrix $\mathbf{S}$ can be easily calculated as $\nabla f(\mathbf{S}) = \mathbf{A}^T \mathbf{G}$, where the $(m,p)$-th element of $\mathbf{G}$ is given by

$$g_{m,p} = -\frac{z_{m,p}}{\sigma} \cdot \frac{\phi\!\left(u_{m,p}\right)}{\Phi\!\left(u_{m,p}\right)}, \qquad (11)$$

where

$$u_{m,p} = \frac{z_{m,p}\,\mathbf{a}_m^T \mathbf{s}_p}{\sigma},$$

and $\phi(\cdot)$ and $\Phi(\cdot)$ denote the standard Gaussian pdf and CDF, respectively.
The problem defined in (10) is row separable at each iteration. Therefore, to solve for $\mathbf{S}^k$, i.e., to find $p_L(\mathbf{S}^{k-1})$, we divide the problem into $N$ subproblems, where $N$ is the number of rows in $\mathbf{S}$. Next, we solve the following subproblem for each row $i$ of $\mathbf{S}$:

$$\min_{\mathbf{s}^i} \;\; \frac{L}{2}\left\|\mathbf{s}^i - \mathbf{b}^i\right\|_2^2 + \lambda \left\|\mathbf{s}^i\right\|_{\infty}, \qquad (12)$$

where $\mathbf{s}^i$ and $\mathbf{b}^i$ are the $i$-th rows of $\mathbf{S}$ and $\mathbf{B} = \mathbf{S}^{k-1} - \frac{1}{L}\nabla f(\mathbf{S}^{k-1})$, respectively. Equation (12) is of the form:

$$\min_{\mathbf{s}} \;\; \frac{1}{2}\left\|\mathbf{s} - \mathbf{b}\right\|_2^2 + \beta \left\|\mathbf{s}\right\|_{\infty}, \qquad (13)$$

where $\beta = \lambda / L$, $\|\mathbf{s}\|_{\infty}$ is the $\ell_\infty$ norm of the row, and the constant vector $\mathbf{b}$ is given by the corresponding row of $\mathbf{B}$ (we do not use the superscript on $\mathbf{s}$ and $\mathbf{b}$ for brevity).

For (13), we have the following equivalent problem in the epigraph form

$$\min_{\mathbf{s},\,t} \;\; \frac{1}{2}\sum_{j=1}^{P}(s_j - b_j)^2 + \beta t \quad \text{subject to} \quad |s_j| \leq t, \;\; j = 1, \dots, P, \qquad (14)$$

where $s_j$ and $b_j$ are the $j$-th elements of $\mathbf{s}$ and $\mathbf{b}$, respectively, for $j = 1, \dots, P$. The problem in (14) can be solved using Lagrangian based methods. The Lagrangian for (14) is

$$\mathcal{L}(\mathbf{s}, t, \boldsymbol{\mu}) = \frac{1}{2}\sum_{j=1}^{P}(s_j - b_j)^2 + \beta t + \sum_{j=1}^{P} \mu_j \left(|s_j| - t\right),$$

with dual variables $\mu_j \geq 0$, $j = 1, \dots, P$.

Hence, for strong duality to hold, the following Karush-Kuhn-Tucker (KKT) conditions must be satisfied by the optimal primal variables $(\mathbf{s}^\star, t^\star)$ and dual variables $\boldsymbol{\mu}^\star$:

$$s_j^\star - b_j + \mu_j^\star\, \partial|s_j^\star| \ni 0, \quad j = 1, \dots, P, \qquad (15)$$
$$\beta - \sum_{j=1}^{P} \mu_j^\star = 0, \qquad (16)$$
$$|s_j^\star| \leq t^\star, \quad j = 1, \dots, P, \qquad (17)$$
$$\mu_j^\star \geq 0, \quad j = 1, \dots, P, \qquad (18)$$
$$\mu_j^\star \left(|s_j^\star| - t^\star\right) = 0, \quad j = 1, \dots, P. \qquad (19)$$

Note that $t^\star \geq 0$. To find the optimal $t^\star$, consider three simple cases.

Case (i): $|b_j| < t^\star$. From (15), $s_j^\star = b_j$ and $\mu_j^\star = 0$. Therefore, from (19), $\mu_j^\star = 0$ if and only if $|s_j^\star| < t^\star$.

Case (ii): $b_j \geq t^\star$. Here the constraint is active, $s_j^\star = t^\star$, and (15) gives $\mu_j^\star = b_j - t^\star \geq 0$.

Case (iii): $b_j \leq -t^\star$. Similarly, $s_j^\star = -t^\star$ and $\mu_j^\star = -b_j - t^\star = |b_j| - t^\star \geq 0$.

Also, since $\mu_j^\star \geq 0$, in all three cases we have $\mu_j^\star = \left(|b_j| - t^\star\right)_+$, where $(a)_+$ is defined as $\max(a, 0)$.
Using (16) in the above equation, we have

$$\sum_{j=1}^{P} \left(|b_j| - t^\star\right)_+ = \beta, \qquad (20)$$

as $\sum_{j=1}^{P} \mu_j^\star = \beta$. This can be easily solved for $t^\star$ by applying a bisection based method using the initial interval $\left[0, \max_j |b_j|\right]$. Define

$$h(t) = \sum_{j=1}^{P} \left(|b_j| - t\right)_+ - \beta.$$

Therefore, $h(t^\star) = 0$. If there exists no solution in the interval, i.e., $h(0) = \|\mathbf{b}\|_1 - \beta \leq 0$, the trivial solution is given by $t^\star = 0$ (and hence $\mathbf{s}^\star = \mathbf{0}$). Once we have the optimal $t^\star$, the optimal $\mathbf{s}^\star$ is given by

$$s_j^\star = \mathrm{sign}(b_j)\, \min\!\left(|b_j|,\, t^\star\right), \quad j = 1, \dots, P. \qquad (21)$$

Each subproblem given by (12) can be solved in this way, and the row-wise solutions can be combined to find $\mathbf{S}^k$ using (9) and (10). A summary of all the steps is provided in Algorithm 1, where $\|\cdot\|_F$ denotes the Frobenius norm. Algorithm 1 produces the matrix $\hat{\mathbf{S}}$, and the locations of the non-zero rows of $\hat{\mathbf{S}}$ give the support of the original signal matrix $\mathbf{X}$.
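The clipping solution (21), with the threshold obtained from (20), is the proximal operator of the $\ell_\infty$ norm. An equivalent way to compute it, as an alternative to bisection, is via the Moreau decomposition and a Euclidean projection onto the $\ell_1$ ball; the sketch below (function names assumed) uses the standard sort-based projection:

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of v onto the l1-ball of radius r (sort-based method)."""
    u = np.sort(np.abs(v))[::-1]
    if u.sum() <= r:
        return v.copy()
    css = np.cumsum(u)
    # largest index where the running threshold is still positive
    rho = np.nonzero(u - (css - r) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - r) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(b, beta):
    """prox of beta*||.||_inf at b, via Moreau: b minus its l1-ball projection.
    Matches (21): entries of b are clipped at the optimal threshold t*."""
    return b - project_l1_ball(b, beta)
```

When $\|\mathbf{b}\|_1 \leq \beta$, the projection returns $\mathbf{b}$ itself and the prox is zero, matching the trivial solution $t^\star = 0$.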

  1. Given tolerance $\epsilon > 0$ and parameters $\lambda$, $L$, $\sigma$

  2. Initialize $k = 0$ and $\mathbf{S}^0$ (the null matrix or $\mathbf{A}^{\dagger}\mathbf{Z}$)

  3. Repeat

  4. $k \leftarrow k + 1$

  5. Define the matrix $\mathbf{B} = \mathbf{S}^{k-1} - \frac{1}{L}\mathbf{A}^T\mathbf{G}$, where $\mathbf{G}$ is computed as in (11)

  6. For each row $i$ of $\mathbf{B}$

  7. Update the $i$-th row of $\mathbf{S}^k$ using (21), with $\mathbf{b}$ set to the $i$-th row of $\mathbf{B}$

  8. End For

  9. Until $\|\mathbf{S}^k - \mathbf{S}^{k-1}\|_F \leq \epsilon$

Algorithm 1 Estimation of the common support of the sparse signals
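Algorithm 1 can be sketched end-to-end as below; the step size, iteration count, and function names are illustrative assumptions, and the row-wise prox step computes (21) via its $\ell_1$-ball projection equivalent rather than bisection:

```python
import numpy as np
from scipy.stats import norm

def recover_support(A, Z, sigma, lam, L, iters=200):
    """ISTA sketch for (8): probit negative log-likelihood plus lam * l_{1,inf}.
    L is an assumed step-size constant (Lipschitz bound on the gradient)."""
    M, N = A.shape
    P = Z.shape[1]
    S = np.zeros((N, P))            # null-matrix initialization
    for _ in range(iters):
        U = Z * (A @ S) / sigma
        # gradient of the NLL, as in (11), computed stably in log-space
        grad = A.T @ (-(Z / sigma) * np.exp(norm.logpdf(U) - norm.logcdf(U)))
        B = S - grad / L            # gradient step, as in (10)
        for i in range(N):          # row-separable prox of (lam/L)*||row||_inf
            S[i] = B[i] - _proj_l1(B[i], lam / L)
    return S

def _proj_l1(v, r):
    """Euclidean projection onto the l1-ball of radius r."""
    u = np.sort(np.abs(v))[::-1]
    if u.sum() <= r:
        return v.copy()
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - r) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - r) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

The estimated support is then read off as the indices of the rows of the returned matrix with non-negligible norm.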
(a) Percentage of Support Recovered (b) Probability of Recovering Exact Support
Fig. 1: Performance of common sparsity pattern recovery in the low SNR regime
(a) Percentage of Support Recovered (b) Probability of Recovering Exact Support
Fig. 2: Performance of common sparsity pattern recovery in the high SNR regime
(a) Percentage of Support Recovered (b) Probability of Recovering Exact Support
Fig. 3: Comparison of our results with the approach presented in [15], in terms of the percentage of support recovered correctly and the probability of recovering the exact support

IV Numerical Results

In this section, we present simulation results to demonstrate the performance of jointly sparse support recovery with 1-bit CS based on our proposed algorithm. For every Monte Carlo run, we generate the elements of the measurement matrix from a normal distribution with mean zero and variance 0.004. In all our simulations, we used a randomly generated sparse signal matrix $\mathbf{X}$ with the column size fixed as $N$. We choose $K$ rows of $\mathbf{X}$ at random out of $N$ to be the nonzero rows. For each column of $\mathbf{X}$, each element in those $K$ rows is assigned one of two values of equal magnitude and opposite sign with equal probability. The observation noise is assumed to be Gaussian with mean zero, with its variance set to give SNR = 2.96 dB (low SNR case) and SNR = 23.01 dB (high SNR case). We measure the percentage of support recovered correctly and the probability of recovering the exact support as $M$, $P$, and the SNR vary, using Monte Carlo runs.

Our results for the low SNR regime are plotted in Fig. 1. The y-axis shows the number of sensors ($P$) and the x-axis shows the number of measurements per node ($M$). In Fig. 1(a), the numbers on the contours represent the percentage of the support that is recovered. Similarly, in Fig. 1(b), the numbers on the contours represent the probability of recovering the exact support. We can deduce that, for a particular value of $M$, the performance improves with the number of sensors, and vice versa. Similarly, Fig. 2 shows the contour plots in the high SNR regime. We see that, even with a very small number of sensors, the algorithm performs very well for reasonable values of $M$.

In Fig. 3, we compare our results with one of the most closely related algorithms for sparse recovery with quantized measurements, provided in [15], which uses the ML method for only one node (one measurement vector). To compute the common support with multiple measurement vectors, the individual support set was computed for each signal using the algorithm in [15], and the estimated support sets were then fused using the majority rule. For comparison, the parameter values were chosen as 50, 100 and 5. As seen from Fig. 3, our proposed approach for joint sparsity pattern recovery with quantized measurements outperforms the case where the support is estimated individually as in [15] and then fused; in particular, the proposed algorithm exploits the jointly sparse nature of the multiple measurement vectors, which the individual-estimation-and-fusion approach cannot.
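The majority-rule fusion used for the baseline comparison can be sketched as follows; this is a hypothetical helper for illustration, not code from [15]:

```python
import numpy as np

def majority_fuse(support_sets, N, K):
    """Fuse individually estimated supports (one set of indices per node) by
    majority vote: keep the K indices of {0, ..., N-1} with the most votes."""
    votes = np.zeros(N, dtype=int)
    for s in support_sets:
        votes[list(s)] += 1
    return set(np.argsort(votes)[::-1][:K])
```

Unlike the joint formulation in (7), this fusion step only sees the hard support decisions of each node, discarding the per-node likelihood information.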

V Conclusion

In this paper, we exploited the use of multiple sensors for the recovery of the common sparsity pattern of sparse signals with 1-bit CS. A computationally tractable algorithm was developed to optimize an objective function defined in terms of the likelihood function and the $\ell_{1,\infty}$ norm of a matrix. Numerical results show that, with very coarsely quantized measurements (only the sign information), the common sparsity pattern of sparse signals can be recovered reliably even in the low SNR region, and that the performance improves monotonically with the number of sensors.

References

  • [1] D. Malioutov, M. Cetin, and A. Willsky, “A Sparse Signal Reconstruction Perspective for Source Localization with Sensor Arrays,” IEEE Trans. Signal Process., vol. 53, no. 8, pp. 3010–3022, Aug. 2005.
  • [2] V. Cevher, P. Indyk, C. Hegde, and R. G. Baraniuk, “Recovery of Clustered Sparse Signals from Compressive Measurements,” in Int. Conf. Sampling Theory and Applications (SAMPTA 2009), Marseille, France, May 2009, pp. 18–22.
  • [3] B. K. Natarajan, “Sparse Approximate Solutions to Linear Systems,” SIAM J. Computing, vol. 24, no. 2, pp. 227–234, 1995.
  • [4] A. J. Miller, Subset Selection in Regression. New York, NY: Chapman-Hall, 1990.
  • [5] E. G. Larsson and Y. Selen, “Linear Regression With a Sparse Parameter Vector,” IEEE Trans. Signal Process., vol. 55, no. 2, pp. 451–460, Feb. 2007.
  • [6] Z. Tian and G. Giannakis, “Compressed Sensing for Wideband Cognitive Radios,” in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Honolulu, HI, Apr. 2007, pp. IV-1357–IV-1360.
  • [7] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic Decomposition by Basis Pursuit,” SIAM J. Sci. Computing, vol. 20, no. 1, pp. 33–61, 1998.
  • [8] M. J. Wainwright, “Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting,” IEEE Trans. Inf. Theory, vol. 55, no. 12, pp. 5728–5741, Dec. 2009.
  • [9] A. K. Fletcher, S. Rangan, and V. K. Goyal, “Necessary and sufficient conditions for sparsity pattern recovery,” IEEE Trans. Inf. Theory, vol. 55, no. 12, pp. 5758–5772, Dec. 2009.
  • [10] M. M. Akcakaya and V. Tarokh, “Shannon-theoretic limits on noisy compressive sampling,” IEEE Trans. Inf. Theory, vol. 56, no. 1, pp. 492–504, Jan. 2010.
  • [11] G. Tang and A. Nehorai, “Performance analysis for sparse support recovery,” IEEE Trans. Inf. Theory, vol. 56, no. 3, pp. 1383–1399, Mar. 2010.
  • [12] T. Wimalajeewa and P. K. Varshney, “Performance Bounds for Sparsity Pattern Recovery With Quantized Noisy Random Projections,” IEEE J. Sel. Topics Signal Process., vol. 6, no. 1, pp. 43–57, Feb. 2012.
  • [13] G. Reeves and M. Gastpar, “A Note on Optimal Support Recovery in Compressed Sensing,” in Proc. 43rd Asilomar Conf. Signals, Systems and Computers, Nov. 2009, pp. 1576–1580.
  • [14] H. Wang and Q. Wan, “One Bit Support Recovery,” in Proc. 6th Int. Conf. Wireless Communications, Networking and Mobile Computing (WiCOM), Sept. 2010, pp. 1–4.
  • [15] A. Zymnis, S. Boyd, and E. Candes, “Compressed Sensing With Quantized Measurements,” IEEE Signal Process. Lett., vol. 17, no. 2, pp. 149–152, Feb. 2010.
  • [16] Y. Plan and R. Vershynin, “One-Bit Compressed Sensing by Linear Programming,” Communications on Pure and Applied Mathematics, vol. 66, no. 8, pp. 1275–1297, 2013. [Online]. Available: http://dx.doi.org/10.1002/cpa.21442
  • [17] P. Boufounos and R. Baraniuk, “1-Bit compressive sensing,” in Proc. 42nd Annual Conf. Information Sciences and Systems (CISS), Mar. 2008, pp. 16–21.
  • [18] J. A. Tropp, “Algorithms for simultaneous sparse approximation. Part II: Convex relaxation,” Signal Processing, vol. 86, no. 3, pp. 589–602, 2006.
  • [19] I. Daubechies, M. Defrise, and C. De Mol, “An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint,” Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004. [Online]. Available: http://dx.doi.org/10.1002/cpa.20042
  • [20] S. Wright, R. Nowak, and M. Figueiredo, “Sparse Reconstruction by Separable Approximation,” IEEE Trans. Signal Process., vol. 57, no. 7, pp. 2479–2493, July 2009.
  • [21] A. Beck and M. Teboulle, “A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems,” SIAM J. Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009. [Online]. Available: http://dx.doi.org/10.1137/080716542