Coded Compressive Sensing: A Compute-and-Recover Approach


Abstract

In this paper, we propose coded compressive sensing, which recovers an $n$-dimensional integer sparse signal vector from a noisy and quantized measurement vector whose dimension $m$ is far fewer than $n$. The core idea of coded compressive sensing is to construct a linear sensing matrix whose columns consist of lattice codewords. We present a two-stage decoding method named compute-and-recover to detect the sparse signal from the noisy and quantized measurements. In the first stage, we transform such measurements into noiseless finite-field measurements using the linearity of lattice codewords. In the second stage, syndrome decoding is applied over the finite field to reconstruct the sparse signal vector. A sufficient condition for perfect recovery is derived. Our theoretical result demonstrates an interplay among the quantization level $P$, the sparsity level $K$, the signal dimension $n$, and the number of measurements $m$ for perfect recovery. Considering one-bit compressive sensing as a special case, we show that the proposed algorithm empirically outperforms an existing greedy recovery algorithm.

1 Introduction

Compressive sensing (CS) [?] is a promising technique that recovers a high-dimensional signal represented by a few non-zero elements from far fewer measurements than the signal dimension. This technique has immense applications, ranging from image compression to sensing systems requiring low power consumption. The mathematical heart of CS is to solve an under-determined linear system of equations by harnessing the inherent sparse structure of the signal.

Let ${\bf x} \in \mathbb{R}^n$ and ${\bf A} \in \mathbb{R}^{m \times n}$ be a real-valued sparse signal vector and a compressive sensing matrix that linearly projects a high-dimensional signal in $\mathbb{R}^n$ to a low-dimensional signal in $\mathbb{R}^m$ with $m < n$, respectively. Formally, the noiseless CS problem is to reconstruct the sparse signal vector ${\bf x}$ from ${\bf y} = {\bf A}{\bf x}$ by solving the following $\ell_0$-minimization problem:

$$\hat{\bf x} = \arg\min_{{\bf x}} \|{\bf x}\|_0 \quad \text{subject to} \quad {\bf y} = {\bf A}{\bf x},$$
where the collection of the positions of the non-zero elements of ${\bf x}$, i.e., its support, is defined as ${\rm supp}({\bf x}) = \{i : x_i \neq 0\}$, with cardinality $|{\rm supp}({\bf x})| = K$. Unfortunately, solving this problem is NP-hard, implying that, in practice, it is computationally infeasible to obtain the optimal solution when $n$ is very large. There exist many practical algorithms that perfectly reconstruct the sparse signal with polynomial-time computational complexity, provided that the measurement matrix has a good incoherence property. Greedy sparse signal recovery algorithms [3] have become popular due to their computational efficiency; a representative example is sketched below.
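To make the greedy approach concrete, here is a minimal sketch of orthogonal matching pursuit (OMP) in the spirit of [3]; the function name, parameters, and toy problem sizes are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def omp(A, y, K):
    """Greedy recovery of a K-sparse x from y = A x via orthogonal matching pursuit."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x_on_support = np.zeros(0)
    for _ in range(K):
        # Pick the column most correlated with the current residual.
        correlations = np.abs(A.T @ residual)
        correlations[support] = -np.inf            # never re-select an index
        support.append(int(np.argmax(correlations)))
        # Re-fit the signal on the enlarged support by least squares.
        x_on_support, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_on_support
    x_hat = np.zeros(n)
    x_hat[support] = x_on_support
    return x_hat

# Toy usage: a 5-sparse integer signal of dimension 256 from 64 Gaussian measurements.
rng = np.random.default_rng(0)
n, m, K = 256, 64, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.integers(1, 4, K)
print(np.allclose(omp(A, A @ x, K), x, atol=1e-6))   # exact recovery w.h.p.
```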

In practice, obtaining the measurement vector with infinite precision is infeasible. This is because, in many image sensors and communication systems, signal acquisition is performed by analog-to-digital converters (ADCs) that quantize each measurement to a predefined value with a finite number of bits. This quantization process complicates sparse signal recovery, as it can introduce significant measurement errors, especially when the number of quantization bits is small. Numerous sparse signal recovery algorithms with quantized measurements were proposed in [10] to mitigate the impact of quantization errors. In particular, under the premise that each measurement is quantized with just one bit (i.e., an extreme case of quantization errors), the one-bit compressive sensing problem was introduced in [12]. For a given sensing matrix ${\bf A}$ and signal ${\bf x}$, the measurements are obtained from the signs of the linear projections as

$${\bf y} = {\rm sign}({\bf A}{\bf x}),$$

where the measurement vector ${\bf y}$ lies in the Boolean cube, i.e., ${\bf y} \in \{-1,+1\}^m$. It was shown that, with the one-bit measurements, sparse signal vectors with unit norm can be recovered with high probability by convex optimization techniques [13] or iterative greedy algorithms [14].
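As a quick illustration of this one-bit measurement model, the short sketch below generates sign measurements from noisy linear projections of a unit-norm sparse vector; the dimensions and noise level are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, K, sigma = 128, 256, 4, 0.1

A = rng.standard_normal((m, n))
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = 1.0
x /= np.linalg.norm(x)          # one-bit measurements only carry directional information

y = np.sign(A @ x + sigma * rng.standard_normal(m))   # each measurement is +1 or -1
print(y[:10])
```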

In this paper, we study a generalized compressive sensing problem in which each measurement is quantized with $P$ levels, where $P$ is a prime number. We also consider a quantized source signal, i.e., the non-zero elements of the sparse signal are chosen from a finite set of integer values. Such a setting can be found in many applications. For instance, in a random access wireless system, a few active users among all users in the system send quadrature amplitude modulated symbols (i.e., finite-alphabet signals) to a receiver, which detects the active users' signals using a $P$-level ADC.

A fundamental question we ask in this paper is: what is a sufficient condition for the perfect recovery of the integer sparse signal from $P$-level quantized measurements in the presence of Gaussian noise? To shed light on the answer to this question, we develop a new sparse signal recovery framework, which is referred to as "coded compressive sensing." The core idea of coded compressive sensing is to exploit both source and channel coding techniques from information theory. The proposed scheme consists of two cascaded encoding and decoding phases. The first phase of encoding is the compression phase, in which a high-dimensional sparse signal vector is compressed to a low-dimensional signal vector using a parity check matrix of a maximum distance separable (MDS) linear code. The second phase is the dictionary coding phase. In this phase, each dictionary vector (each column vector of the parity check matrix) is encoded to a coded dictionary vector by exploiting a (near) capacity-achieving lattice code for a Gaussian channel. We propose a two-stage decoding method called "compute-and-recover." In the first stage of decoding, a linear combination of the encoded dictionary vectors corresponding to the non-zero elements of the sparse signal is decoded. We call this the dictionary equation decoding stage; it produces a noise-free measurement vector. Once the dictionary equation is perfectly decoded, in the second stage of decoding, we apply syndrome decoding to the equivalent finite-field representation of the dictionary equation for the sparse signal recovery. Using the proposed scheme, we derive a lower bound on the number of measurements $m$ for perfect recovery as a function of the other important system parameters: the quantization level $P$, the sparsity level $K$, and the signal dimension $n$. Considering $P = 2$ as a special case, we compare the proposed scheme with existing algorithms developed for the one-bit compressive sensing problem [12]. Numerical results show that the proposed scheme outperforms binary iterative hard thresholding (BIHT) [14] in the low signal-to-noise ratio (SNR) regime.

2 Coded Compressive Sensing Problem

In this section, we present a coded compressive sensing framework for integer sparse signal recovery in the presence of Gaussian noise.

2.1 Signal Model

We are interested in a sparse signal detection problem from compressed measurements in the presence of noise. Let ${\bf x}$ be an unknown sparse signal vector whose sparsity level equals $K$, i.e., $\|{\bf x}\|_0 = K$. The measurement equation of quantized compressed sensing is given by

$${\bf y} = Q_P\left({\bf A}{\bf x} + {\bf z}\right),$$

where $Q_P(\cdot)$ denotes the $P$-level scalar quantizer applied component-wise, and ${\bf y}$ and ${\bf z}$ denote the measurement and noise vectors, respectively. All entries of the noise vector are assumed to be independent and identically distributed (IID) Gaussian random variables with zero mean and variance $\sigma^2$.

Our objective is to reliably estimate the unknown sparse signal vector ${\bf x}$ from ${\bf y}$ in the presence of the Gaussian noise ${\bf z}$, by appropriately constructing the linear measurement matrix ${\bf A}$ and the $P$-level scalar quantizer $Q_P(\cdot)$. We define a sparse signal recovery decoder that maps the measurement vector ${\bf y}$ to an estimate $\hat{\bf x}$ of the original sparse signal vector ${\bf x}$. It is said that the average probability of error is at most $\epsilon$ if $\mathbb{P}\left[\hat{\bf x} \neq {\bf x}\right] \leq \epsilon$.
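For concreteness, the following short sketch simulates this measurement model end to end; the integer alphabet $\{1,\ldots,P-1\}$ for the non-zero entries, the Gaussian sensing matrix, and the placeholder uniform quantizer are illustrative assumptions of ours (the paper's actual sensing matrix and quantizer are constructed in the remainder of this section).

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, K, P, sigma = 64, 32, 3, 5, 0.05

# Placeholder sensing matrix and integer K-sparse signal.
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n, dtype=int)
x[rng.choice(n, K, replace=False)] = rng.integers(1, P, K)

def quantize_P_levels(v, P, vmax=3.0):
    """Placeholder P-level uniform quantizer on [-vmax, vmax], applied component-wise."""
    levels = np.linspace(-vmax, vmax, P)
    return levels[np.argmin(np.abs(v[:, None] - levels[None, :]), axis=1)]

y = quantize_P_levels(A @ x + sigma * rng.standard_normal(m), P)   # y = Q_P(Ax + z)
print(y[:8])
```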

2.2 Sensing Matrix Construction

A linear encoding function is represented by a sensing matrix ${\bf A}$, which linearly maps the $n$-dimensional sparse vector to an $m$-dimensional output vector, where $m \ll n$. We construct the sensing matrix using the proposed idea, which is referred to as dictionary coding.

Dictionary Basis Vector Selection

Let ${\bf H}$ be a parity check matrix of a $q$-ary MDS code, where $q = P^r$ is a prime power for a positive integer $r$. In this paper we focus on a $q$-ary Reed-Solomon (RS) code whose field size is large enough to support the block length $n$. Thus, the parity check matrix of the RS code has full rank. The $j$th column vector of ${\bf H}$ is denoted by ${\bf h}_j$, where $j \in \{1, \ldots, n\}$. We define a one-to-one mapping that maps each element of $\mathbb{F}_q$ into a length-$r$ word in $\mathbb{F}_P^r$. For instance, when $P = 2$, it is possible to express an element of $\mathbb{F}_{2^r}$ as a binary vector of length $r$. Applying this mapping component-wise, we transform each dictionary vector ${\bf h}_j$ over $\mathbb{F}_q$ into a vector over $\mathbb{F}_P$. The transformed column vector is referred to as the $j$th dictionary basis vector; the sketch below illustrates the symbol-to-word mapping.
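The symbol-to-word mapping can be illustrated, up to the choice of basis for $\mathbb{F}_q$ over $\mathbb{F}_P$, by a base-$P$ digit expansion of the integer label of each symbol; the helper names below are ours.

```python
import numpy as np

def to_base_p_word(symbol: int, P: int, r: int) -> np.ndarray:
    """Map an integer label in {0, ..., P**r - 1} to a length-r word over F_P
    (base-P digit expansion, least-significant digit first)."""
    digits = np.zeros(r, dtype=int)
    for i in range(r):
        symbol, digits[i] = divmod(symbol, P)
    return digits

def expand_column(col, P, r):
    """Apply the symbol-to-word mapping to every entry of a column over F_q."""
    return np.concatenate([to_base_p_word(int(s), P, r) for s in col])

# Example with q = 2**3: each F_8 symbol becomes a binary word of length 3.
print(to_base_p_word(6, P=2, r=3))           # -> [0 1 1]
print(expand_column([1, 6, 7], P=2, r=3))    # a length-9 binary dictionary basis vector
```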

Dictionary Coding via a Lattice Code

Dictionary coding maps each dictionary basis vector, which lies in a vector space over $\mathbb{F}_P$, into a lattice point by means of a lattice encoding function, as described next.

We commence by providing a brief background on the lattice construction. Let $\mathbb{Z}[i]$ be the ring of Gaussian integers and let $P$ be a Gaussian prime. Let us denote the addition over $\mathbb{F}_P$ by $\oplus$, and let $\phi$ be the natural mapping of $\mathbb{Z}[i]$ onto $\mathbb{F}_P$. We recall the nested lattice code construction given in [14]. Let $\Lambda$ be a (shaping) lattice with a full-rank generator matrix, and let $\mathcal{C}$ denote a linear code over $\mathbb{F}_P$ with block length $m$ and generator matrix ${\bf G}$. The fine lattice $\Lambda_1$ is defined through "construction A" (see [8] and references therein), in its unscaled form, as

$$\Lambda_1 = \bar{\mathcal{C}} + P\,\mathbb{Z}[i]^m,$$

where $\bar{\mathcal{C}}$ is the image of $\mathcal{C}$ under the mapping function that embeds $\mathbb{F}_P$ into $\mathbb{Z}[i]$ component-wise. It follows that $\Lambda \subseteq \Lambda_1$, i.e., the shaping lattice and the fine lattice form a nested pair.

For a lattice $\Lambda$ and a point ${\bf v}$, we define the lattice quantizer $Q_{\Lambda}({\bf v}) = \arg\min_{\lambda \in \Lambda} \|{\bf v} - \lambda\|$, the Voronoi region $\mathcal{V}(\Lambda) = \{{\bf v} : Q_{\Lambda}({\bf v}) = {\bf 0}\}$, and the modulo operation $[{\bf v}] \bmod \Lambda = {\bf v} - Q_{\Lambda}({\bf v})$. For the $\Lambda$ and $\Lambda_1$ given above, we define the lattice code $\mathcal{L} = \Lambda_1 \cap \mathcal{V}(\Lambda)$ with rate $R = \frac{1}{m}\log_2|\mathcal{L}|$.
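Because the shaping lattice is later chosen to be cubic, the lattice quantizer and the modulo operation reduce to simple element-wise rounding; a minimal real-valued sketch (the scaling factor `gamma` is an illustrative parameter) is:

```python
import numpy as np

def lattice_quantize(v, gamma=1.0):
    """Nearest-point quantizer onto the scaled cubic lattice gamma * Z^m."""
    return gamma * np.round(v / gamma)

def mod_lattice(v, gamma=1.0):
    """Modulo-lattice operation: fold v into the Voronoi region of gamma * Z^m."""
    return v - lattice_quantize(v, gamma)

v = np.array([0.3, 1.7, -2.2])
print(lattice_quantize(v))   # [ 0.  2. -2.]
print(mod_lattice(v))        # [ 0.3 -0.3 -0.2] (up to floating-point rounding)
```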

Construction A provides an encoding function that maps a dictionary basis vector into a codeword in $\mathcal{L}$. Notice that the set $\Lambda_1 \cap \mathcal{V}(\Lambda)$ is a system of coset representatives of the cosets of $\Lambda$ in $\Lambda_1$. Hence, the encoding function maps each dictionary basis vector over $\mathbb{F}_P$ first to a codeword of $\mathcal{C}$, then to the corresponding lattice point, and finally reduces it modulo the shaping lattice $\Lambda$ so that the result lies in $\mathcal{L}$. Consequently, the $j$th codeword vector ${\bf a}_j$ is produced by applying this encoding function to the $j$th dictionary basis vector, so that each coded dictionary vector is a lattice codeword in the nested lattice codebook, i.e., ${\bf a}_j \in \mathcal{L}$. Using this construction method, we obtain a linear sensing matrix consisting of the column vectors ${\bf a}_1, \ldots, {\bf a}_n$, i.e., ${\bf A} = [{\bf a}_1, \ldots, {\bf a}_n]$.

The average power of each codeword ${\bf a}_j$ is assumed to satisfy a fixed per-component power constraint. Finally, in this paper, we choose the shaping lattice $\Lambda$ as a (scaled) cubic lattice, which enables lattice decoding to be implemented by a scalar quantizer (see [16] for more details). The scaling factor of the cubic lattice is chosen to satisfy the power constraint above. Then, the element-wise SNR is defined as the average per-component codeword power divided by the noise variance $\sigma^2$.

2.3 Proposed Scalar Quantizer

We propose a $P$-level scalar quantizer, called the sawtooth transform, as depicted in Figure 1; it can be implemented by a modulo operation followed by a uniform scalar quantization, as sketched below.
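A minimal sketch of such a sawtooth-style $P$-level quantizer is given below: the input is first folded by a modulo operation and then rounded to one of $P$ uniformly spaced cells. The period and the output labeling are illustrative assumptions of ours, not the paper's exact parameters.

```python
import numpy as np

def sawtooth_quantizer(y, P, period=1.0):
    """P-level sawtooth quantizer: fold y into one period (modulo operation),
    then apply a uniform P-level scalar quantizer; outputs indices in {0,...,P-1}."""
    folded = np.mod(y, period)                        # sawtooth (modulo) front end
    idx = np.floor(folded / period * P).astype(int)   # uniform P-level quantization
    return np.clip(idx, 0, P - 1)

y = np.array([0.02, 0.49, 0.51, 1.26, -0.24])
print(sawtooth_quantizer(y, P=4))   # -> [0 1 2 1 3]
```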

3 Main Result

In this section, we characterize a sufficient condition for the exact recovery of an integer sparse signal vector. The following theorem, which relates the number of measurements $m$ to the signal dimension $n$, the sparsity level $K$, the quantization level $P$, and the SNR, is the main result of this paper.

The proof of this theorem is based on the proposed two-stage decoding method called "compute-and-recover." In the first stage, we decode an integer-linear combination of the coded dictionary vectors by removing the noise, which essentially yields a finite-field sparse signal recovery problem. In the second stage, we apply syndrome decoding over the finite field to reconstruct the sparse signal vector.

3.1 Step 1: Computation of Dictionary Equation

In this stage, we decode a noise-free measurement vector from ${\bf y}$ using the key property of a lattice code. Recall that each coded dictionary vector ${\bf a}_j$ is a lattice codeword; thereby, any integer-linear combination of lattice codewords is again a lattice point [8]. Thus, since the non-zero entries of ${\bf x}$ are integers, the noiseless projection ${\bf A}{\bf x}$ is itself a lattice point. We first exploit this fact to decode a noise-free measurement vector.

Letting $\mathcal{I} = {\rm supp}({\bf x})$ be the support set of ${\bf x}$, the noisy measurement vector with the $P$-level quantizer is given by

$${\bf y} = Q_P\left({\bf A}{\bf x} + {\bf z}\right) = Q_P\Big(\sum_{j \in \mathcal{I}} x_j {\bf a}_j + {\bf z}\Big),$$

where the second equality holds because only the columns of ${\bf A}$ indexed by the support $\mathcal{I}$ contribute to ${\bf A}{\bf x}$.

We transform this noisy and quantized measurement into a noiseless finite-field measurement as follows. From the quantized sequence ${\bf y}$, we produce a sequence over $\mathbb{F}_P$ by applying the natural mapping component-wise to each quantized measurement. Since the dictionary equation $\sum_{j \in \mathcal{I}} x_j {\bf a}_j$ is a lattice point by construction, the resulting finite-field sequence equals the finite-field image of the dictionary equation corrupted by a discrete additive noise vector whose entries are obtained by mapping the folded Gaussian noise onto $\mathbb{F}_P$. Since, by linearity, the finite-field image of the dictionary equation is a codeword of $\mathcal{C}$, the above channel can be considered as a point-to-point channel with discrete additive noise over $\mathbb{F}_P$. Then, we can reliably decode the dictionary equation provided that the rate of $\mathcal{C}$ is below the capacity of this discrete additive-noise channel, namely $\log_2 P$ minus the entropy of the discrete noise. This is an immediate consequence of the well-known fact that linear codes achieve the capacity of a symmetric discrete memoryless channel [17]. From this result, we obtain a sufficient condition on the code rate, and hence on the number of measurements $m$, for the perfect recovery of the noise-free measurement vector.
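To make this decodability condition concrete, the sketch below computes the capacity of a point-to-point channel with discrete additive noise over $\mathbb{F}_P$ (which equals $\log_2 P$ minus the noise entropy) and checks whether a given code rate is below it; the noise distribution used here is an illustrative placeholder rather than the distribution induced by the actual quantized Gaussian noise.

```python
import numpy as np

def capacity_additive_noise_Fp(noise_pmf):
    """Capacity (bits/symbol) of y = c + w over F_P with additive noise pmf `noise_pmf`."""
    p = np.asarray(noise_pmf, dtype=float)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.log2(len(p)) - entropy

# Illustrative noise pmf over F_5: no symbol flip with prob. 0.9, uniform over the rest.
P = 5
noise_pmf = np.full(P, 0.1 / (P - 1))
noise_pmf[0] = 0.9

C = capacity_additive_noise_Fp(noise_pmf)
rate = 0.5 * np.log2(P)                     # an example code rate in bits per symbol
print(f"capacity = {C:.3f} bits/symbol, reliably decodable: {rate < C}")
```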

3.2 Step 2: Recovery via Syndrome Decoding

Recall that, in the first stage, the decoder has recovered the dictionary equation, i.e., the integer-linear combination of the coded dictionary vectors over the support of ${\bf x}$. Using the linearity of the code $\mathcal{C}$, the finite-field image of the dictionary equation yields an effective measurement vector over $\mathbb{F}_P$. As a result, the measurement equation can be equivalently rewritten in matrix form over $\mathbb{F}_P$, where the effective sensing matrix is the matrix whose column vectors are the dictionary basis vectors.

We would like to recover ${\bf x}$ from this effective measurement vector in a noiseless setting, using the one-to-one symbol-to-word mapping. Unlike the sparse recovery algorithm over a finite field in [9], we apply a syndrome decoding method [18]. Syndrome decoding harnesses the fact that there is a bijection between a sparse signal (error) vector and the effective measurement (syndrome) vector, provided that the error vector contains sufficiently few non-zero entries. Recall that, in our construction, the $j$th dictionary basis vector was generated from the $j$th column ${\bf h}_j$ of ${\bf H}$ by the symbol-to-word mapping. Since this mapping is a bijection, applying its inverse component-wise, we obtain the resultant measurement equation over $\mathbb{F}_q$, in which the syndrome equals ${\bf H}$ applied to the sparse vector. Since ${\bf H}$ is the parity-check matrix of a $q$-ary RS code, its minimum distance $d_{\min}$ achieves the Singleton bound. As a result, the syndrome decoding method allows us to recover the sparse signal perfectly, provided that the sparsity level satisfies $K \leq \lfloor (d_{\min}-1)/2 \rfloor$.

Putting the two sufficient conditions from the two decoding stages together, and using the relations between the code parameters and the system parameters $n$, $K$, $P$, and $m$, we obtain the required number of measurements for sparse signal recovery in the presence of Gaussian noise, which completes the proof.
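To illustrate the syndrome-to-sparse-vector bijection exploited in this second stage, the toy sketch below builds a Vandermonde-type parity-check matrix over a small prime field (mimicking the MDS structure of an RS parity-check matrix, so that any $2K$ columns are linearly independent) and recovers a $K$-sparse vector from its syndrome by brute-force search. A practical implementation would instead use an algebraic decoder such as Berlekamp-Massey; all names and sizes here are illustrative.

```python
import numpy as np
from itertools import combinations

def mds_parity_check(n, num_rows, p):
    """Vandermonde-type parity-check matrix over F_p: rows are successive powers of
    n distinct non-zero evaluation points (requires n < p), so that any `num_rows`
    columns are linearly independent."""
    points = np.arange(1, n + 1)
    return np.array([[pow(int(a), i, p) for a in points] for i in range(num_rows)])

def syndrome_decode(H, s, K, p):
    """Find the unique vector with at most K non-zeros satisfying H e = s over F_p
    by exhaustive search over supports and values (illustration only)."""
    _, n = H.shape
    for weight in range(K + 1):
        for support in combinations(range(n), weight):
            for values in np.ndindex(*([p - 1] * weight)):
                e = np.zeros(n, dtype=int)
                e[list(support)] = np.array(values, dtype=int) + 1
                if np.array_equal(H @ e % p, s):
                    return e
    return None

p, n, K = 11, 10, 2                       # tiny example: F_11, length 10, sparsity 2
H = mds_parity_check(n, 2 * K, p)         # 2K rows suffice for unique K-sparse recovery
x = np.zeros(n, dtype=int)
x[3], x[7] = 4, 9                         # the K-sparse integer signal
s = H @ x % p                             # syndrome = compressed finite-field measurement
print(np.array_equal(syndrome_decode(H, s, K, p), x))   # True
```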

Remark 1 (Decoding complexity)

The proposed two-stage decoding method can be implemented with polynomial-time computational complexity. In the first stage, the lattice equation can be efficiently decoded with the $P$-level scalar quantizer in [16] followed by the successive cancellation decoding algorithm of the polar code [20], whose complexity is quasi-linear in the number of measurements $m$. Syndrome decoding in the second stage can be implemented with polynomial-time algorithms such as the Berlekamp-Massey algorithm, whose operation count over $\mathbb{F}_q$ is polynomial in the code length; since $\mathbb{F}_q$ is an $r$-dimensional vector space over $\mathbb{F}_P$, each operation over $\mathbb{F}_q$ corresponds to a bounded number of operations over $\mathbb{F}_P$. Overall, the computational complexity of the proposed method is polynomial in the signal dimension $n$.

Remark 2 (Universality of the measurement matrix)

The proposed coded compressive sensing method is universal, in that all sparse signals can be recovered using a single fixed sensing matrix ${\bf A}$. This universality is practically important, because otherwise one may need to construct a new random measurement matrix for each signal. Some existing one-bit compressive sensing algorithms [14] do not enjoy this universality property.

Remark 3 (Non-integer sparse signal case)

One potential concern with our integer sparse signal setting is that a sparse signal can have real-valued components in some applications. This concern can be resolved by exploiting an integer-forcing technique, in which ${\bf x}$ is quantized into an integer vector and the residual is treated as additional noise. Then, the effective measurements are obtained as

$${\bf y} = Q_P\big({\bf A}\lfloor{\bf x}\rceil + \tilde{\bf z}\big), \qquad \tilde{\bf z} = {\bf A}({\bf x} - \lfloor{\bf x}\rceil) + {\bf z},$$

where $\lfloor{\bf x}\rceil$ denotes the element-wise rounding of ${\bf x}$ to the nearest integers and $\tilde{\bf z}$ denotes the effective noise. Utilizing this modified equation, we are able to apply the proposed coded compressive sensing method to estimate the integer approximation $\lfloor{\bf x}\rceil$. Assuming the non-zero values in ${\bf x}$ are bounded, we conjecture that the proposed scheme guarantees recovery of the sparse signal with a bounded estimation error at the cost of an increased number of measurements compared to Theorem 1; a sketch of the rounding step is given below. The rigorous proof of this conjecture will be provided in our journal version [21].
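A minimal sketch of this rounding step (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, K, sigma = 64, 32, 3, 0.05

A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = np.array([1.2, 2.8, 0.9])   # real-valued non-zeros

x_int = np.round(x)                        # integer approximation of the sparse signal
residual = x - x_int                       # each entry bounded by 1/2 in magnitude
z = sigma * rng.standard_normal(m)
effective_noise = A @ residual + z         # the residual is absorbed into the noise term

# The coded compressive sensing decoder is then applied to measurements of A @ x_int
# corrupted by `effective_noise` in order to estimate the integer approximation.
print(np.linalg.norm(effective_noise))
```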

Remark 4 (Noiseless one-bit compressive sensing)

One interesting scenario is when a one-bit quantizer and a binary signal are used. In the noise-free case, specializing the conditions above to $P = 2$ yields a lower bound on the number of measurements required for perfect recovery.

Figure 1: The proposed coded compressive sensing framework for the binary sparse signal vector ${\bf x} \in \{0,1\}^n$ with one-bit and noisy measurements.

4 Numerical Example

In this section, we evaluate the signal recovery performance of the proposed coded compressive sensing method for $P = 2$, i.e., one-bit compressive sensing, by numerical experiments.

To test the proposed algorithm, a sparse binary vector ${\bf x}$ is generated whose non-zero positions are uniformly distributed between 1 and 511. A fixed binary sensing matrix is designed by the concatenation of the compression matrix and the generator matrix of a polar code (which is completely determined by the rate-one Arikan kernel matrix and the information set [20]), as illustrated in Figure 1. In particular, the binary compression matrix is obtained from the parity check matrix of a $2^9$-ary RS code, whose minimum distance is large enough to perform syndrome decoding up to a sparsity level of 5 in the noiseless case. In addition, we pick a binary polar generator matrix of a fixed code rate. We evaluate the perfect recovery probability, i.e., $\mathbb{P}[\hat{\bf x} = {\bf x}]$, of the sparse signal in the presence of noise with variance $\sigma^2$ when the proposed algorithm is applied.

We compare our coded compressive sensing algorithm with the following two well-known one-bit compressive sensing algorithms, each with some modifications for a binary signal.

  • Convex optimization: a variant of the convex-programming recovery method proposed in [13] for a binary sparse signal, which is summarized in the table below;

  • Binary iterative hard thresholding (BIHT): a heuristic algorithm in [14] with some modifications for binary signal recovery, as in steps 3) and 4) of the table below.

For the two modified reference algorithms, we use a Gaussian sensing matrix whose elements are drawn from an IID Gaussian distribution. For each setting of the signal dimension, the number of measurements, the sparsity level, and the SNR, we perform the recovery experiment over 500 independent trials and compute the average perfect recovery rate.

A Convex Optimization Algorithm for Binary Sparse Signal
1) Initialization: given the measurements, the sensing matrix, and the sparsity level.
2) Find a candidate solution by solving the convex relaxation of the recovery problem.
3) Select the indices of the largest entries of the candidate solution.
4) Binary signal assignment: set the selected entries of $\hat{\bf x}$ to one and the remaining entries to zero.

Figure 2: Coded one-bit compressive sensing for the binary sparse signal vector ${\bf x} \in \{0,1\}^n$.

Figure 2 plots the perfect recovery probability versus SNR for each algorithm under the setting described above. As can be seen in Figure 2, the proposed method significantly outperforms BIHT in terms of perfect signal recovery. Specifically, BIHT is not capable of recovering the signal with high probability until SNR = 12 dB, because noise causes many sign flips in the measurements. The proposed algorithm, in contrast, is robust to noise; it recovers the signal with probability one when the SNR is above 6 dB. The convex optimization approach provides better performance than the other algorithms; yet, its computational complexity is of much higher order than that of the proposed method.

5 Conclusion

In this paper, we proposed a novel compressive sensing framework with noisy and quantized measurements for integer sparse signals. Within this framework, we derived a sufficient condition for perfect recovery as a function of the important system parameters. Considering one-bit compressive sensing as a special case, we demonstrated that the proposed algorithm empirically outperforms an existing greedy recovery algorithm.

References

  1. “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,”
    E. J. Candès, J. Romberg, and T. Tao, IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.
  2. “Stable recovery of sparse overcomplete representations in the presence of noise,”
    D. L. Donoho, M. Elad, and V. M. Temlyakov, IEEE Trans. Inf. Theory, vol. 52, no. 1, pp. 6-18, Jan. 2006.
  3. “Signal recovery from random measurements via orthogonal matching pursuit,”
    J. A. Tropp and A. C. Gilbert, IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.
  4. “CoSaMP: iterative signal recovery from incomplete and inaccurate samples,”
    D. Needell and J. A. Tropp, Commun. ACM, vol. 53, no. 12, pp. 93-100, Dec. 2010.
  5. “Subspace pursuit for compressive sensing signal reconstruction,”
    W. Dai and O. Milenkovic, IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230-2249, May 2009.
  6. “MAP support detection for greedy sparse signal recovery algorithms in compressive sensing,”
    N. Lee, Submitted to IEEE Trans. Signal Processing, Aug. 2015 (Available at:http://arxiv.org/abs/1508.00964).
  7. “Stable signal recovery for incomplete and inaccurate measurements,”
    E. J. Candès, J. Romberg, and T. Tao, Commun. Pure Appl. Math., vol. 59, pp. 1207-1223, Aug. 2006.
  8. “Compute-and-forward: Harnessing interference through structured codes,”
    B. Nazer and M. Gastpar, IEEE Trans. Inform. Theory, vol. 57, pp. 6463-6486, Oct. 2011.
  9. “Compressed sensing over finite fields,”
    S. C. Draper and S. Malekpour, in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Jul. 2009.
  10. “Quantization for compressed sensing reconstruction,”
    Z. Sun and V. K. Goyal, in SAMPTA’09, International Conference on Sampling Theory and Applications, 2009.
  11. “Distortion-rate functions for quantized compressive sensing,”
    W. Dai, H. V. Pham, and O. Milenkovic, IEEE Information Theory Workshop on Networking and Information Theory, pp. 171-175, June 2009.
  12. “1-bit compressive sensing,”
    P. T. Boufounos and R. G. Baraniuk, in Proc. of Conference on Information Science and Systems (CISS), Mar. 2008.
  13. “1-bit compressed sensing and sparse logistic regression: A convex programming,”
    Y. Plan and R. Vershynin, IEEE Trans. Inform. Theory, vol. 59, no.1, pp. 482-494, Jan. 2013.
  14. “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,”
    L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, IEEE Trans. Inform. Theory, vol. 59, no. 4, pp. 2082-2102, Apr. 2013.
  15. “Robust support recovery using sparse compressive sensing matrices,”
    J. Haupt and R. Baraniuk, in Proc. 45th Annual Conf. on Information Sciences and Systems, pp. 1-6, Mar. 2011.
  16. “Compute-and-forward strategies for cooperative distributed antenna systems,”
    S-N. Hong and G. Caire, IEEE Trans. Inform. Theory, vol. 59, pp. 5227-5243, Aug. 2013.
  17. “Asymptotic optimality of group and systematic codes for some channels,”
    R. L. Dobrushin, Theory of Probability and its Applications, vol. 8, pp. 47-59, 1963.
  18. “Syndrome source coding and its universal generalization,”
    T. Ancheta, IEEE Trans. Inform. Theory, vol.22, no.4, pp.432-436, Jul. 1976.
  19. “On finite alphabet compressive sensing,”
    A. K. Das and S. Vishwanath, in Proc. IEEE Int. Conf. Acoust. Speech Signal Process (ICASSP), pp. 5890-5894, Mar. 2013.
  20. “Channel polarization: A method for constructing capacity achieving codes for symmetric binary-input memoryless channels,”
    E. Arikan, IEEE Trans. Inform. Theory, vol. 55, pp. 3051-3073, Jul. 2009.
  21. “Coded compressive sensing,”
    N. Lee and S-N. Hong, in preparation, 2016.