Iterative Detection for Compressive Sensing: Turbo CS
We consider compressive sensing as a source coding method for signal transmission. We concatenate a convolutional coding system with 1-bit compressive sensing to obtain a serial concatenated system model for sparse signal transmission over an AWGN channel. The proposed source/channel decoder, which we refer to as turbo CS, is robust against channel noise, and its signal reconstruction performance at the receiver improves considerably through the iterations. We show a 12 dB improvement with six turbo CS iterations compared to a non-iterative concatenated source/channel decoder.
In real transmission systems, source coding is used to minimize the number of transmitted bits. Moreover, channel coding is nearly always applied to minimize bit errors due to the channel noise. Therefore, source coding concatenated with channel coding is a recognized approach for reliable transmission of data [1].
Compressive sensing (CS) is a new source coding approach in which signal measurement and compression are performed in a single step. The basic idea of CS is that any $N$-dimensional signal which is $K$-sparse (i.e., there are only $K$ non-zero elements in the signal, where $K \ll N$) can be measured through a few random linear projections. The sufficient number of projections, $M$, guaranteeing signal reconstruction is often much less than $N$. Thus, CS can be considered as a method of data compression with rate $M/N$. However, CS deals only with sparse or approximately sparse signals [3]. In practice, many types of signals are sparse or can be represented by a sparse vector in a proper basis. Moreover, in some signal processing applications, e.g., magnetic resonance imaging, the processes of measurement and compression are not separable, and acquiring the signal through linear projections is an intrinsic part of the measuring process [4].
In this paper, we use the principle of concatenated codes and turbo coding. Turbo codes are powerful channel coding techniques, first introduced by Berrou et al. in 1993 [5], whose decoding performance achieves results close to the channel capacity. The encoding structure of a turbo encoder consists of a serial or parallel concatenation of convolutional encoders separated by random interleaving.
In particular, we utilize the serial concatenated code approach, in which the signal is decoded in an iterative process between two a posteriori probability (APP) soft-input/soft-output decoders [6].
The aim of this work is to apply a source encoder as the outer encoder concatenated with an inner channel encoder. In [8], the authors introduced a turbo decoding approach by concatenating fixed-length codes with convolutional codes for audio/video transmission. In this paper, we apply CS as a generic source coder for any kind of sparse signal. In order to do so, there are two main challenges:
1) how to input the a posteriori belief provided by the APP decoder to the CS decoder;
2) how to calculate a priori information from the CS decoder as input to the APP decoder for the next iteration.
As an approach, Bayesian CS [9], which is a CS decoding method that considers CS inversion from a Bayesian perspective, could be applied. Bayesian CS provides a density function for each element of the reconstructed signal, which can be applied as a priori information. However, the output of a CS encoder consists of zero-mean Gaussian distributed values, while the inputs of convolutional encoders are $0$ and $1$. Thus, a special quantization is needed after a CS encoder.
In this work, we use 1-bit CS [10] as the outer encoder. 1-bit CS is a quantized version of CS representing each measurement by only a two-state value.
There are several methods introduced in the literature to solve the 1-bit CS decoding problem. Some of these methods are based on linear and convex programming, e.g., [11, 12], while others are based on greedy methods [10, 13, 14, 15, 16, 17, 18]. However, all the above-mentioned methods accept only binary values as input to estimate the signal. In addition, none of these methods generates soft-valued a priori information.
The key contribution of this paper is to propose a new reconstruction method for 1-bit CS which accepts soft input and generates soft output and, hence, is able to work iteratively together with an APP decoder to reconstruct the signal at the receiver in the same fashion as a classic serial concatenated turbo code.
We refer to the proposed coding approach as turbo CS coding. The turbo CS encoder consists of the concatenation of a 1-bit CS encoder and a convolutional encoder at the transmitter. In the receiver, the turbo CS decoder iterates between an APP decoder and a 1-bit CS decoder. Numerical experiments show a significant improvement in the quality of the reconstructed signal through turbo CS iterations.
II. System Model
In this section, we describe the serial concatenated transmission and channel model. In the first part, we discuss the 1-bit CS configuration, and in the second part we combine 1-bit CS with a convolutional encoder.
II-A. 1-bit Compressive Sensing
In classic compressive sensing, each measurement is obtained through a projection of the $K$-sparse signal, $\mathbf{x} \in \mathbb{R}^N$, onto a random vector $\boldsymbol{\varphi}_i \in \mathbb{R}^N$. Therefore, for $M$ measurements ($M < N$) we have
$$ \mathbf{y} = \boldsymbol{\Phi}\mathbf{x}, $$
where $\boldsymbol{\varphi}_i^T$ is the $i$th row of $\boldsymbol{\Phi} \in \mathbb{R}^{M \times N}$ and $\mathbf{y} \in \mathbb{R}^M$. It is shown that exact signal reconstruction is guaranteed when $\boldsymbol{\Phi}$ satisfies the restricted isometry property [2].
In most practical cases, the obtained measurements need to be quantized before reconstruction. In the extreme case, which is referred to as 1-bit CS, measurements are represented by only one bit [10]. The 1-bit CS output is essentially a sign function over the CS measurements. Hence, binary measurements, $\mathbf{b} \in \{-1,+1\}^M$, are obtained from
$$ \mathbf{b} = \operatorname{sign}(\boldsymbol{\Phi}\mathbf{x}), $$
where $\operatorname{sign}(\cdot)$ denotes the element-wise sign function.
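The 1-bit measurement step above can be sketched in a few lines of numpy. The dimensions $N$, $K$, $M$ below are illustrative placeholders, not the paper's simulation values.

```python
import numpy as np

# Minimal sketch of 1-bit CS measurement: b = sign(Phi @ x).
# N, K, M are illustrative placeholder values, not the paper's settings.
rng = np.random.default_rng(0)
N, K, M = 100, 5, 40

x = np.zeros(N)                                  # K-sparse signal
support = rng.choice(N, size=K, replace=False)   # random support positions
x[support] = rng.standard_normal(K)              # Gaussian non-zero entries

Phi = rng.standard_normal((M, N))                # random measurement matrix
b = np.sign(Phi @ x)                             # 1-bit measurements in {-1, +1}
```

Note that the amplitude of $\mathbf{x}$ is lost under the sign operation; only sign-consistent information survives in $\mathbf{b}$.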
II-B. Serially Concatenated Encoders
At the transmitter, the interleaved binary output of the 1-bit CS encoder is encoded by a convolutional encoder. We denote the coded bits by $\mathbf{c}$, where $R_c$ is the rate of the convolutional encoder. In the turbo coding context, the 1-bit CS encoder and the convolutional encoder are referred to as the outer and inner encoders, respectively. The coded bits are transmitted through an AWGN channel with a known noise variance, $\sigma_n^2$. The channel output is then
$$ \mathbf{r} = \mathbf{c} + \mathbf{n}, $$
where $n_i \sim \mathcal{N}(0, \sigma_n^2)$ and $(\cdot)_i$ denotes the $i$th element of the argument. The system model is illustrated in Fig. 1.
In the next section, we propose an iterative method to reconstruct $\mathbf{x}$ at the receiver from the noisy coded measurements $\mathbf{r}$.
III. Iterative 1-bit Compressive Sensing: Turbo CS
III-A. A Posteriori Probability Decoder
The a posteriori probability (APP) decoder is a soft-input/soft-output decoder [7]. The APP decoder takes two inputs: the received signal $\mathbf{r}$ and the a priori probabilities of the elements of $\mathbf{b}$, denoted by $P_a(b_i)$. At the output, the APP decoder gives the a posteriori probabilities of the elements of $\mathbf{b}$, denoted by $P_p(b_i \mid \mathbf{r})$.
Typically, in a maximum a posteriori probability decoder, a decision is made on $P_p(b_i \mid \mathbf{r})$, yielding hard bits.
In iterative decoders, however, bit probabilities are exchanged between the decoders, since they contain information about the reliability of the data. A vector containing soft bits is denoted by $\tilde{\mathbf{b}}$. Each element of $\tilde{\mathbf{b}}$ is defined as the expected value of the corresponding element of $\mathbf{b}$. Hence, for the a priori soft bits we have
$$ \tilde{b}_i = \mathrm{E}[b_i] = P_a(b_i = +1) - P_a(b_i = -1). $$
In the same way, the a posteriori soft bits are obtained from
$$ \tilde{b}_i^{\,p} = P_p(b_i = +1 \mid \mathbf{r}) - P_p(b_i = -1 \mid \mathbf{r}). $$
Furthermore, hard bits are denoted by $\hat{\mathbf{b}}$, and we have
$$ \hat{b}_i = \operatorname{sign}(\tilde{b}_i). $$
Intuitively, when $P(b_i = +1)$ is $1$, the $i$th soft bit is $+1$, and when $P(b_i = +1)$ is $0$, the $i$th soft bit is $-1$.
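The soft-bit and hard-bit relations can be made concrete with two small helpers; `soft_bit` and `hard_bit` are my own illustrative names, not from the paper.

```python
import numpy as np

def soft_bit(p_plus):
    """Soft bit as the expected value of a {-1,+1} bit:
    E[b] = (+1) * P(b=+1) + (-1) * (1 - P(b=+1)) = 2 * P(b=+1) - 1."""
    return 2.0 * np.asarray(p_plus, dtype=float) - 1.0

def hard_bit(b_soft):
    """Hard decision: the sign of the soft bit."""
    return np.sign(b_soft)

# certain +1, certain -1, completely unknown, likely +1
s = soft_bit([1.0, 0.0, 0.5, 0.9])
```

A soft bit of $0$ thus carries no information, while $\pm 1$ expresses full confidence.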
In iterative decoding, the constituent decoders need to exchange these soft bits: the 1-bit CS decoder receives the a posteriori soft bits $\tilde{\mathbf{b}}^p$ and must, in turn, produce a priori soft bits $\tilde{\mathbf{b}}$ for the APP decoder. In the next two sections, we give a brief review of 1-bit CS reconstruction and then introduce a 1-bit CS algorithm that can be used in an iterative turbo CS decoder, where the CS constituent decoder accepts soft bits in and generates soft bits out.
III-B. 1-bit CS Reconstruction Algorithm
The aim of a 1-bit CS reconstruction algorithm is to estimate the values of the vector $\mathbf{x}$ based on an observation vector $\hat{\mathbf{b}}$ and knowledge of the measurement matrix $\boldsymbol{\Phi}$. In many practical cases, there may be some random bit flips in $\hat{\mathbf{b}}$ due to quantization error or noise in the transmission process. The number of these bit flips is a measure of the noise level. Some reconstruction algorithms take the number of bit flips into account to reconstruct the signal efficiently and are robust against random bit flips in the binary measurements [17, 18].
Among all 1-bit CS reconstruction algorithms, adaptive outlier pursuit with bit flips (AOP-f) [17] has the best reconstruction performance in the presence of random bit flips when the sparsity level of the signal and the number of bit flips are known. There are two variants of AOP-f, based on $\ell_2$-norm minimization (AOP-$\ell_2$-f) and $\ell_1$-norm minimization (AOP-$\ell_1$-f). Since AOP-$\ell_2$-f outperforms AOP-$\ell_1$-f in terms of signal reconstruction performance, we focus on AOP-$\ell_2$-f in this paper. Henceforth, we refer to AOP-$\ell_2$-f simply as AOP-f.
AOP-f is an iterative algorithm that estimates $\mathbf{x}$ and the positions of the bit flips in $\hat{\mathbf{b}}$, where $\hat{\mathbf{b}}$ denotes the noisy binary measurement vector and $L$ denotes the number of bit flips in $\hat{\mathbf{b}}$. The positions of the random bit flips in $\hat{\mathbf{b}}$ are represented by the vector $\boldsymbol{\Lambda} \in \{0,1\}^M$, where $\odot$ denotes the element-wise product; that is, $\Lambda_i = 0$ means that there is a bit flip in $\hat{b}_i$. AOP-f solves the following optimization problem:
$$ \min_{\mathbf{x},\,\boldsymbol{\Lambda}} \left\| \boldsymbol{\Lambda} \odot \left[ \hat{\mathbf{b}} \odot (\boldsymbol{\Phi}\mathbf{x}) \right]_- \right\|_2^2 \quad \text{s.t.} \quad \|\mathbf{x}\|_2 = 1,\; \|\mathbf{x}\|_0 \le K,\; \sum_{i=1}^{M} (1 - \Lambda_i) \le L, $$
where $\|\cdot\|_2$ denotes the $\ell_2$-norm of the argument and $[\cdot]_-$ is the negative function defined as
$$ [z]_- = \begin{cases} z, & z < 0, \\ 0, & z \ge 0. \end{cases} $$
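For concreteness, the negative function and the resulting $\ell_2$-type consistency term can be sketched as follows, following the adaptive outlier pursuit idea of Yan et al.: sign disagreements are penalized except on measurements flagged as flips. The function names (`neg_part`, `aop_l2_term`) are mine, and this is only the objective evaluation, not the full alternating minimization.

```python
import numpy as np

def neg_part(z):
    """Negative function [z]_-: equals z where z < 0, and 0 elsewhere."""
    return np.minimum(z, 0.0)

def aop_l2_term(x_est, Phi, b_hat, Lam):
    """ell_2-type AOP consistency term: a measurement contributes only when
    sign(Phi @ x_est) disagrees with b_hat (negative product) AND it is not
    flagged as a flip (Lam == 0 marks a detected flip and excludes it)."""
    residual = neg_part(b_hat * (Phi @ x_est))
    return float(np.sum(Lam * residual ** 2))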
In the next section, we propose modifications to the input of AOP-f so that it can utilize soft bits as input. In addition, we apply a mapping method to the reconstructed signal to produce a priori soft bits to be used as input to the APP decoder.
III-C. Soft-in/Soft-out 1-bit CS Decoder
As mentioned in Section III-B, AOP-f accepts binary values as input to reconstruct the signal. Therefore, a trivial way to apply AOP-f as a decoder after the APP decoder is to use the hard bits $\hat{\mathbf{b}}$ as the binary input of the AOP-f optimization problem. However, by solely using hard bits, we lose the information about the reliability of the data. In addition, AOP-f needs an estimate of the number of bit flips in $\hat{\mathbf{b}}$ to reconstruct the signal efficiently.
Here, we develop a method to use the soft bits as input for reconstructing the signal via AOP-f. The hard bits $\hat{\mathbf{b}} = \operatorname{sign}(\tilde{\mathbf{b}}^p)$ replace the binary measurements in the AOP-f problem. In addition, we define a vector $\mathbf{p}$ whose elements represent the probability of a bit flip in the corresponding element of $\hat{\mathbf{b}}$. Thus, $\mathbf{p}$ is derived from
$$ p_i = \frac{1 - |\tilde{b}_i^{\,p}|}{2}. $$
The estimated number of bit flips is denoted by $\hat{L}$ and is obtained from
$$ \hat{L} = \operatorname{round}\left( \sum_{i=1}^{M} p_i \right). $$
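Under the soft-bit definition $\tilde{b} = P(+1) - P(-1)$, the probability that the corresponding hard decision is wrong is $\min(P(+1), P(-1)) = (1 - |\tilde{b}|)/2$, which motivates the flip-probability step above. A small sketch (function names are mine):

```python
import numpy as np

def flip_probabilities(b_soft):
    """P(hard decision sign(b_soft_i) is wrong) = (1 - |b_soft_i|) / 2,
    since min(P(+1), P(-1)) = (1 - |P(+1) - P(-1)|) / 2."""
    return (1.0 - np.abs(np.asarray(b_soft, dtype=float))) / 2.0

def estimated_flips(b_soft):
    """Estimated number of bit flips: rounded sum of per-bit flip probabilities."""
    return int(round(float(np.sum(flip_probabilities(b_soft)))))

p = flip_probabilities([1.0, -1.0, 0.0, 0.5])   # -> [0, 0, 0.5, 0.25]
```

Fully confident bits contribute nothing to $\hat{L}$, while an uninformative bit ($\tilde{b}_i = 0$) contributes a flip probability of $1/2$.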
The next step of the decoder generates the soft bits, $\tilde{\mathbf{b}}$, at the output. We apply a CS encoder to the estimated signal $\hat{\mathbf{x}}$. Thus, we obtain
$$ \hat{\mathbf{y}} = \boldsymbol{\Phi}\hat{\mathbf{x}}. $$
The elements of $\hat{\mathbf{y}}$ can be approximated by a zero-mean Gaussian distribution. In this case, unlike in a binary phase shift keying (BPSK) system, most of the values to be mapped are concentrated around $0$. The challenge is to map these values to the interval between $-1$ and $+1$ based on their reliabilities. The elements with values around $0$ are the least reliable for generating a priori soft values, while the most reliable elements are the ones that are furthest from $0$. Therefore, we first utilize the elements of $\hat{\mathbf{y}}$ that are further from $0$ and, over the iterations, we consider the influence of the elements of $\hat{\mathbf{y}}$ with values closer and closer to zero.
In the case that either there is no noise in the received binary measurements or the estimate of the number of bit flips is exact, $\hat{\mathbf{x}}$ is very close to $\mathbf{x}$, and the sign of each element of $\hat{\mathbf{y}}$ matches the sign of the corresponding element of $\hat{\mathbf{b}}$. In the noisy case, however, there are some sign mismatches between the elements of $\hat{\mathbf{y}}$ and $\hat{\mathbf{b}}$. To consider the effect of the random bit flips on the soft values, we multiply $\hat{\mathbf{y}}$ element-wise with $\hat{\mathbf{b}}$, and the result is denoted by $\mathbf{z}$:
$$ \mathbf{z} = \hat{\mathbf{b}} \odot \hat{\mathbf{y}}. $$
In fact, the element-wise multiplication removes the sign of the elements of $\hat{\mathbf{y}}$. In the case that there are no bit flips in $\hat{\mathbf{b}}$, $\operatorname{sign}(\hat{\mathbf{y}}) = \hat{\mathbf{b}}$ and all the elements of $\mathbf{z}$ are positive. However, in the presence of random bit flips, the negative elements of $\mathbf{z}$ indicate the sign flips in $\hat{\mathbf{b}}$, and the elements with large amplitudes are more reliable than the ones with small or negative amplitudes. Based on the above facts, a mapping function $f(\cdot)$ is introduced which maps each element of $\mathbf{z}$ to a real value between $-1$ and $+1$. The mapping is parameterized by $d$, the normalized Euclidean distance between $\hat{\mathbf{b}}$ and $\operatorname{sign}(\hat{\mathbf{y}})$. In fact, $d$ determines how much information is lost by applying the sign function to $\hat{\mathbf{y}}$.
The mapping method is depicted in Fig. 2. In words, $f(\cdot)$ is a mapping function that categorizes the elements of $\mathbf{z}$ by their signs:
The negative elements of $\mathbf{z}$ are mapped to values in the interval between $-1$ and $0$ based on their amplitudes. As mentioned above, the negative elements of $\mathbf{z}$ indicate the bit flips in $\hat{\mathbf{b}}$. In addition, the negative elements with small values are more likely to be flipped and are mapped to values close to $-1$.
The positive elements of $\mathbf{z}$ are mapped, based on their amplitudes between $0$ and a threshold $\delta$, to values between $0$ and $1$. Elements of $\mathbf{z}$ exceeding $\delta$ are clipped and mapped to $1$.
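A hypothetical mapping consistent with the qualitative description above can be sketched as follows. This is NOT the authors' exact definition: the linear ramps and the single fixed threshold `delta` are assumptions of mine, introduced only to make the shape of the mapping tangible.

```python
import numpy as np

def soft_map(z, delta=1.0):
    """Hypothetical mapping f(.), consistent only with the qualitative
    description in the text (not the authors' exact formula):
      z < 0 (suspected flips): small amplitudes map close to -1,
        via f(z) = -1 + min(|z| / delta, 1);
      z >= 0: linearly mapped into [0, 1] and clipped at 1,
        via f(z) = min(z / delta, 1).
    """
    z = np.asarray(z, dtype=float)
    return np.where(z < 0.0,
                    -1.0 + np.minimum(np.abs(z) / delta, 1.0),
                    np.minimum(z / delta, 1.0))

m = soft_map([-0.1, -2.0, 0.5, 3.0])
```

With `delta = 1.0`, a small negative value like $-0.1$ maps near $-1$ (likely flip), while large positive values clip to $1$ (fully reliable).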
We refer to the proposed decoding method as soft-in/soft-out 1-bit CS decoder.
Example: To justify the performance of the soft-in/soft-out 1-bit CS decoder, we consider the best case, where there is no noise in the binary measurements. Hence, $\hat{\mathbf{b}} = \mathbf{b}$ and $|\tilde{b}_i^{\,p}| = 1$, so $p_i = 0$ for all $i$ and $\hat{L} = 0$. The elements of $\mathbf{z} = \hat{\mathbf{b}} \odot \hat{\mathbf{y}}$ are then all positive. Furthermore, $\operatorname{sign}(\hat{\mathbf{y}}) = \hat{\mathbf{b}}$, so $d = 0$ and the mapping assigns full reliability, $|f(z_i)| = 1$, to every element. Thus, all the elements of $\tilde{\mathbf{b}}$ have magnitude $1$, and the generated a priori information is identical to $\mathbf{b}$.
III-D. Combination of Soft-in/Soft-out 1-bit CS and APP Decoding
In Section III-C, the soft-in/soft-out 1-bit CS reconstruction method was introduced, which receives soft bits and generates improved soft bits as output. In this section, we combine the soft-in/soft-out 1-bit CS decoder with an APP decoder to obtain the turbo CS decoder for the transmission system of Section II.
As discussed in Section II, the transmission system consists of a 1-bit CS encoder serially concatenated with a convolutional encoder at the transmitter. Hence, the 1-bit CS encoder works as a source encoder that receives real values and compresses the data with rate $M/N$. The binary output of the 1-bit CS encoder is given to the convolutional encoder. At the receiver, as illustrated in Fig. 3, the received noisy signal is input to an APP decoder. The a priori soft bits are zero for the first iteration. The soft output of the decoder, namely the a posteriori probability, is given to the soft-in/soft-out 1-bit CS decoder to estimate the transmitted signal. The soft output of the soft-in/soft-out 1-bit CS decoder is provided to the APP decoder as a priori information for the next iteration. These steps are repeated for each iteration. Through the iterations, as the estimated number of bit flips approaches the true number, the reconstruction $\hat{\mathbf{x}}$ approaches $\mathbf{x}$ and the output of the turbo CS decoder converges.
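The decoding schedule above can be sketched as a simple loop. The two stages below, `app_decode` and `cs_decode_soft`, are deliberately trivial stand-ins (a tanh soft combiner and a damping step) used only to show the message flow between the constituent decoders; they are not the actual APP or AOP-f decoders.

```python
import numpy as np

def app_decode(r, b_soft_apriori):
    # placeholder APP stage: combine channel values with a priori soft bits
    return np.tanh(r + b_soft_apriori)

def cs_decode_soft(b_soft_apost):
    # placeholder soft-in/soft-out 1-bit CS stage: a real decoder would
    # reconstruct x via AOP-f and re-map Phi @ x_hat to soft bits
    return 0.9 * b_soft_apost

def turbo_cs_decode(r, iterations=6):
    b_soft = np.zeros_like(r)              # a priori soft bits start at zero
    for _ in range(iterations):
        b_apost = app_decode(r, b_soft)    # APP decoder (soft-in/soft-out)
        b_soft = cs_decode_soft(b_apost)   # 1-bit CS decoder (soft-in/soft-out)
    return b_soft

out = turbo_cs_decode(np.array([2.0, -2.0]))
```

The key structural point is that only soft values cross the interface in both directions, exactly as in a classic serial turbo decoder.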
IV. Numerical Results
In this section, we verify the reconstruction performance of turbo CS through numerical simulation. We choose a $K$-sparse signal vector $\mathbf{x}$ randomly in each realization, setting the dimension of the signal, $N$, and its sparsity level, $K$. The non-zero elements of $\mathbf{x}$ follow a zero-mean Gaussian distribution and are distributed uniformly over the signal vector $\mathbf{x}$. The elements of the measurement matrix $\boldsymbol{\Phi}$ are generated from a zero-mean Gaussian distribution. The number of encoded bits is set to $M$; thus, the rate of the 1-bit CS encoder is $M/N$. The signal is encoded by the 1-bit CS encoder, and its binary output is interleaved by a random interleaver with block length $M$. Simulation results show, however, that the reconstruction performance of the turbo CS decoding system is not sensitive to the interleaver block length.
The interleaved bits are passed to a G[5,7] convolutional encoder with memory $2$, four states, and rate $1/2$. Then, the output of the convolutional encoder is passed through an AWGN channel with noise variance $\sigma_n^2$. We express the power of the channel noise by the signal-to-noise ratio (SNR), which is defined as
$$ \mathrm{SNR} = 10 \log_{10}\!\left( \frac{E_b}{2 R_c\, \sigma_n^2} \right) \, \mathrm{dB}, $$
where $E_b$ denotes the average power of a bit at the input of the channel encoder and $R_c$ denotes the encoder rate, which is $1/2$ for G[5,7].
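As a concrete reference for the channel code, a feedforward rate-1/2 encoder with octal generators (5, 7), i.e., memory 2 and four states, can be sketched as:

```python
def conv_encode_g57(bits):
    """Rate-1/2 non-recursive convolutional encoder with octal generators
    (5, 7): g1 = 101 -> c1 = u[t] ^ u[t-2], g2 = 111 -> c2 = u[t] ^ u[t-1] ^ u[t-2].
    Memory 2 (four states), zero initial state, no termination bits."""
    s1 = s2 = 0                      # shift register contents: u[t-1], u[t-2]
    out = []
    for u in bits:
        out.append(u ^ s2)           # generator 5 (octal) = 101 (binary)
        out.append(u ^ s1 ^ s2)      # generator 7 (octal) = 111 (binary)
        s1, s2 = u, s1
    return out

coded = conv_encode_g57([1, 0, 1, 1])   # -> [1, 1, 0, 1, 0, 0, 1, 0]
```

Each input bit produces two coded bits, so the output length is twice the input length, consistent with $R_c = 1/2$.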
The channel output is decoded by our proposed turbo CS decoder. To show the reconstruction performance, the received signal-to-noise ratio (RSNR) is defined as
$$ \mathrm{RSNR} = 10 \log_{10}\!\left( \frac{\|\mathbf{x}\|_2^2}{\|\mathbf{x} - \hat{\mathbf{x}}\|_2^2} \right) \, \mathrm{dB}. $$
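With the RSNR defined as a signal-to-reconstruction-error ratio in dB, its computation is a one-liner (the helper name `rsnr_db` is mine):

```python
import numpy as np

def rsnr_db(x, x_hat):
    """Reconstruction quality in dB: 10 * log10(||x||^2 / ||x - x_hat||^2)."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))
```

For example, an estimate with 10% relative error gives 20 dB: `rsnr_db([3.0], [2.7])` evaluates to `20.0`.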
We verify the reconstruction performance of turbo CS through the iterations in different channel noise scenarios. The signal-to-noise ratio is varied over a range of channel conditions, and the calculated RSNR is averaged over the realizations. Simulated results are shown in Fig. 4 for one to six iterations of the turbo CS decoder.
As can be seen in Fig. 4, there is a huge improvement in the reconstruction performance of turbo CS through the iterations. The reconstruction performance converges after around six iterations, at which point we achieve 12 dB of improvement. This is a massive performance gain over concatenated coding with no iterations (iteration 1 in Fig. 4). Note that we observe the turbo-like property that most of the gain comes in the second iteration. After convergence, the difference between the reconstruction accuracy of turbo CS when the channel is very noisy and when the channel is almost noiseless is small.
In another simulation, the convolutional encoder is removed. In this case, the channel noise is calculated from the SNR definition with $R_c = 1$. Since there is no information at the receiver about the number of random bit flips in the received signal, we set $\hat{L} = 0$. The performance of uncoded 1-bit CS is depicted by the dashed line in Fig. 4. It can be seen that the RSNR of 1-bit CS decoding is significantly worse when no channel encoding/decoding is used.
Note that at very low SNR, uncoded 1-bit CS outperforms turbo CS. This behaviour is not unexpected since, in general, when the AWGN channel is very noisy, convolutional decoders have poor performance in terms of bit error rate in comparison to an uncoded BPSK system [19].
In this work, we applied 1-bit CS as a generic source encoding method for a signal transmission problem over an AWGN channel. We combined 1-bit CS with a convolutional encoder and formed a serial concatenated source/channel encoding method. The key contribution of this paper is the turbo CS decoding method for the above transmission system. In turbo CS, we benefit from the a posteriori soft bits generated by the APP decoder to estimate the reliability (the number of sign flips) of the bits given to the 1-bit CS decoder. In addition, a mapping method was introduced to modify the given soft bits based on the current estimate of the signal.
Here, we used a non-recursive convolutional code, G[5,7], as the channel encoder and the appropriate APP decoder within our turbo CS decoder. However, we expect that most convolutional encoder/decoder pairs could be applied to this system model to reconstruct the signal jointly with the soft-in/soft-out 1-bit CS decoder. In addition, unlike classic turbo coding, turbo CS performance is not sensitive to the length of the interleaver.
Simulation results show that the reconstruction performance of turbo CS improves considerably through the iterations. When the channel is very noisy, a 12 dB gain is achievable after six iterations. In addition, the performance of the converged turbo CS decoder is robust against the channel noise.
-  J. Hagenauer, “Source-controlled channel decoding,” IEEE Trans. Commun., vol. 43, no. 9, pp. 2449–2457, Sep. 1995.
-  E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
-  E. J. Candès and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 21–30, Mar. 2008.
-  M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, “Compressed sensing MRI,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 72–82, Mar. 2008.
-  C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1,” in Proc. IEEE Int. Conf. Commun. (ICC), vol. 2, Geneva, Switzerland, May 1993, pp. 1064–1070.
-  S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, “Serial concatenation of interleaved codes: Performance analysis, design, and iterative decoding,” IEEE Trans. Inf. Theory, vol. 44, no. 3, pp. 909–926, May 1998.
-  L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, “Optimal decoding of linear codes for minimizing symbol error rate,” IEEE Trans. Inf. Theory, vol. 20, no. 2, pp. 284–287, Mar. 1974.
-  L. Schmalen, M. Adrat, T. Clevorn, and P. Vary, “EXIT chart based system design for iterative source-channel decoding with fixed-length codes,” IEEE Trans. Commun., vol. 59, no. 9, pp. 2406–2413, Sep. 2011.
-  S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2346–2356, Jun. 2008.
-  P. T. Boufounos and R. G. Baraniuk, “1-bit compressive sensing,” in Proc. Annual Conf. Inf. Sciences Syst. (CISS), Princeton, NJ, Mar. 2008, pp. 16–21.
-  Y. Plan and R. Vershynin, “One-bit compressed sensing by linear programming,” Commun. Pure and Appl. Math., vol. 66, no. 8, pp. 1275–1297, 2013. [Online]. Available: http://dx.doi.org/10.1002/cpa.21442
-  ——, “Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach,” IEEE Trans. Inf. Theory, vol. 59, no. 1, pp. 482–494, Dec. 2012.
-  P. T. Boufounos, “Greedy sparse signal reconstruction from sign measurements,” in Proc. Asilomar Conf. Signals, Syst., Comput., CA, Nov. 2009, pp. 1305–1309.
-  J. N. Laska, Z. Wen, W. Yin, and R. G. Baraniuk, “Trust, but verify: Fast and accurate signal recovery from 1-bit compressive measurements,” IEEE Trans. Signal Process., vol. 59, no. 11, pp. 5289–5301, Nov. 2011.
-  L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,” IEEE Trans. Inf. Theory, vol. 59, no. 4, pp. 2082–2102, Apr. 2013.
-  U. S. Kamilov, A. Bourquard, A. Amini, and M. Unser, “One-bit measurements with adaptive thresholds,” IEEE Signal Process. Lett., vol. 19, no. 10, pp. 607–610, 2012.
-  M. Yan, Y. Yang, and S. Osher, “Robust 1-bit compressive sensing using adaptive outlier pursuit,” IEEE Trans. Signal Process., vol. 60, no. 7, pp. 3868–3875, 2012.
-  A. Movahed, A. Panahi, and G. Durisi, “A robust RFPI-based 1-bit compressive sensing reconstruction algorithm,” in Proc. IEEE Inf. Theory Workshop (ITW), Lausanne, Switzerland, Sep. 2012, pp. 567–571.
-  J. G. Proakis, Digital communications. McGraw-Hill, New York, 1995.