Iterative Detection for Compressive Sensing:
Turbo CS
Abstract
We consider compressive sensing as a source coding method for signal transmission. We concatenate a convolutional coding system with 1-bit compressive sensing to obtain a serially concatenated system model for sparse signal transmission over an AWGN channel. The proposed source/channel decoder, which we refer to as turbo CS, is robust against channel noise, and its signal reconstruction performance at the receiver improves considerably through iterations. We show a 12 dB improvement with six turbo CS iterations compared to a non-iterative concatenated source/channel decoder.
I. Introduction
In real transmission systems, source coding is used to minimize the number of transmitted bits. Moreover, channel coding is nearly always applied to minimize bit errors due to channel noise. Therefore, source coding concatenated with channel coding is a recognized approach for the reliable transmission of data [1].
Compressive sensing (CS) is a new source coding approach in which signal measurement and compression are performed in a single step. The basic idea of CS is that any $N$-dimensional signal which is $K$-sparse (i.e., there are only $K$ nonzero elements in the signal, where $K \ll N$) can be measured through a few random linear projections. The sufficient number of projections, $M$, guaranteeing signal reconstruction is often much less than $N$ [2]. Thus, CS can be considered as a method of data compression with rate $M/N$. However, CS deals only with sparse or approximately sparse signals [3]. In practice, many types of signals are sparse or can be represented by a sparse vector in a proper basis. Moreover, in some signal processing applications, e.g., magnetic resonance imaging, the processes of measurement and compression are not separable, and acquiring the signal through linear projections is an intrinsic part of the measuring process [4].
In this paper, we use the principle of concatenated codes and turbo coding. Turbo codes are powerful channel coding techniques, first introduced by Berrou et al. in 1993 [5], whose decoding performance comes close to the channel capacity. The encoding structure of a turbo encoder consists of a serial or parallel concatenation of convolutional encoders separated by random interleaving.
In particular, we utilize the serial concatenated code approach [6]. In a serial turbo decoder, the signal is decoded in an iterative process between two a posteriori probability (APP) soft-input/soft-output decoders [7].
The aim of this work is to apply a source encoder as the outer encoder concatenated with an inner channel encoder. In [8], the authors introduced a turbo decoding approach by concatenating fixed-length codes with convolutional codes for audio/video transmission. In this paper, we apply CS as a generic source coder for any kind of sparse signal. In order to do so, there are two main challenges:

to input a posteriori belief provided by the APP decoder to the CS decoder.

to calculate a priori information from the CS decoder as input to the APP decoder for the next iteration.
As an approach, Bayesian CS [9], which is a CS decoding method considering CS inversion from a Bayesian perspective, could be applied. Bayesian CS provides a density function for each element of the reconstructed signal, which can be applied as a priori information. However, the output of a CS encoder consists of zero-mean Gaussian distributed values, while the inputs of a convolutional encoder are $-1$ and $+1$. Thus, a special quantization is needed after a CS encoder.
In this work, we use 1-bit CS as the outer encoder. 1-bit CS is a quantized version of CS representing each measurement by only a two-state value [10].
Several methods have been introduced in the literature to solve the 1-bit CS decoding problem. Some of these methods are based on linear and convex programming, e.g., [11, 12], while others are based on greedy methods [10, 13, 14, 15, 16, 17, 18]. However, all the above-mentioned methods only accept binary values as input to estimate the signal. In addition, none of these methods generates soft-valued a priori information.
The key contribution of this paper is a new reconstruction method for 1-bit CS which accepts soft input and generates soft output and, hence, is able to work iteratively together with an APP decoder to reconstruct the signal at the receiver in the same fashion as a classic serial concatenated turbo code.
We refer to the proposed coding approach as turbo CS coding. The turbo CS encoder consists of the concatenation of a 1-bit CS encoder and a convolutional encoder at the transmitter. At the receiver, the turbo CS decoder iterates between an APP decoder and a 1-bit CS decoder. Numerical experiments show a significant improvement in the quality of the reconstructed signal through turbo CS iterations.
II. System Model
In this section, we describe the serially concatenated transmission and channel model. In the first part, we discuss the 1-bit CS configuration, and in the second part we combine 1-bit CS with a convolutional encoder.
II-A. 1-bit Compressive Sensing
In classic compressive sensing, each measurement $y_i$ is obtained through a projection of the sparse signal $x \in \mathbb{R}^N$ onto a random vector $\phi_i \in \mathbb{R}^N$. Therefore, for $M$ measurements ($i = 1, \dots, M$) we have
$y_i = \phi_i x, \qquad i = 1, \dots, M,$  (1)
where $\phi_i$ is the $i$th row of the measurement matrix $\Phi \in \mathbb{R}^{M \times N}$ and $y = \Phi x$. It is shown that exact signal reconstruction is guaranteed when $\Phi$ satisfies the restricted isometry property [3].
In most practical cases, the obtained measurements need to be quantized before reconstruction. In the extreme case, which is referred to as 1-bit CS, measurements are represented by only one bit [10]. The 1-bit CS output is essentially a sign function over the CS measurements. Hence, the binary measurements, $b$, are obtained from
$b = \mathrm{sign}(\Phi x),$  (2)
where $\mathrm{sign}(\cdot)$ denotes the sign function.
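As a concrete illustration of (1)-(2), the following sketch generates a $K$-sparse signal and its 1-bit measurements. The dimensions are toy values of our own choosing, not those used in the experiments later in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only): signal length, sparsity, measurements.
N, K, M = 256, 8, 128

# K-sparse signal with Gaussian nonzero entries on a random support.
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)
x /= np.linalg.norm(x)  # 1-bit measurements discard amplitude, so normalize

# Random Gaussian measurement matrix and 1-bit measurements b = sign(Phi x).
Phi = rng.standard_normal((M, N))
b = np.sign(Phi @ x)
```

Note that `b` carries only the signs of the projections; the amplitude of `x` is irrecoverable from `b` alone, which is why the signal is normalized before measurement.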
II-B. Serially Concatenated Encoders
At the transmitter, the interleaved binary output of the 1-bit CS encoder is encoded by a convolutional encoder. We denote the coded bits by $c$; $R_c$ is the rate of the convolutional encoder. In the turbo coding context, the 1-bit CS encoder and the convolutional encoder are referred to as the outer and inner encoders, respectively. The coded bits are transmitted through an AWGN channel with a known noise variance, $\sigma_n^2$. The channel output is then
$r = c + n,$  (3)
where $n \sim \mathcal{N}(0, \sigma_n^2 I)$ and $[\cdot]_i$ denotes the $i$th element of the argument. The system model is illustrated in Fig. 1.
In the next section, we propose an iterative method to reconstruct $x$ at the receiver from the noisy coded measurements $r$.
III. Iterative 1-bit Compressive Sensing: Turbo CS
III-A. A Posteriori Probability Decoder
The a posteriori probability (APP) decoder is a soft-input/soft-output decoder [7]. The APP decoder takes two inputs: the received signal $r$ and the a priori probabilities of the elements of $b$, denoted by $P_a$. Hence, we have
$P_a(b_j) = \Pr\{b_j = \beta\}, \qquad \beta \in \{-1, +1\}.$  (4)
At the output, the APP decoder gives the a posteriori probabilities of the elements of $b$, denoted by $P_p$. Therefore,
$P_p(b_j) = \Pr\{b_j = \beta \,|\, r\}, \qquad \beta \in \{-1, +1\}.$  (5)
Typically, in a maximum a posteriori probability decoder, a decision is made on $P_p$, yielding hard bits.
In iterative decoders, however, bit probabilities are exchanged between the decoders, since they contain information about the reliability of the data. A vector containing soft bits is denoted by $\tilde{b}$. Each element in $\tilde{b}$ is defined as the expected value of the corresponding element in $b$. Hence, for the a priori soft bits we have
$\tilde{b}^{a}_j = \mathrm{E}[b_j] = P_a(b_j = +1) - P_a(b_j = -1).$  (6)
In the same way, the a posteriori soft bits are obtained from
$\tilde{b}_j = \mathrm{E}[b_j \,|\, r] = P_p(b_j = +1) - P_p(b_j = -1).$  (7)
Furthermore, hard bits are denoted by $\hat{b}$ and we have
$\hat{b}_j = \mathrm{sign}(\tilde{b}_j).$  (8)
Intuitively, when $P_p(b_j = +1)$ is $1$, the $j$th soft bit is $+1$, and when $P_p(b_j = +1)$ is $0$, the $j$th soft bit is $-1$.
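To make the soft-bit and hard-bit computations of (7)-(8) concrete, here is a small numeric sketch; the probability values are illustrative only.

```python
import numpy as np

# A posteriori probabilities Pr(b_j = +1 | r) for four bits (illustrative).
p_plus = np.array([0.9, 0.6, 0.1, 0.99])

# Soft bit (7): E[b_j | r] = Pr(+1) - Pr(-1) = 2*Pr(+1) - 1.
b_soft = 2.0 * p_plus - 1.0

# Hard bit (8): sign of the soft bit.
b_hard = np.sign(b_soft)
```

A probability of $1$ yields a soft bit of $+1$ and a probability of $0$ yields $-1$, matching the intuition above; intermediate probabilities yield soft bits whose magnitude reflects the reliability of the decision.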
In iterative decoding, the 1-bit CS decoder needs to receive the a posteriori soft bits $\tilde{b}$ and to estimate both $x$ and the a priori information for the next iteration. In the next two sections, we give a brief review of 1-bit CS reconstruction and then introduce a 1-bit CS algorithm that can be used in an iterative turbo CS decoder, where the CS constituent decoder accepts soft bits in and generates soft bits out.
III-B. 1-bit CS Reconstruction Algorithm
The aim of a 1-bit CS reconstruction algorithm is to estimate the values in a vector $x$ based on an observation vector $b$ and knowledge of the measuring matrix $\Phi$. In many practical cases, there might be some random bit flips in $b$ due to quantization error or noise in the transmission process. The number of these bit flips is a measure of the noise level. Some reconstruction algorithms take the number of bit flips into account to reconstruct the signal efficiently and are robust against random bit flips in the binary measurements [17, 18].
Among all 1-bit CS reconstruction algorithms, adaptive outlier pursuit with bit flips (AOPf) [17] has the best reconstruction performance in the presence of random bit flips when the sparsity level of the signal and the number of bit flips are known. There are two types of AOPf, one based on $\ell_1$-norm minimization and one based on $\ell_2$-norm minimization. Since the $\ell_2$-based variant outperforms the $\ell_1$-based one in terms of signal reconstruction performance, we focus on it in this paper. Henceforth, we refer to the $\ell_2$-based variant simply as AOPf.
AOPf is an iterative algorithm that estimates $x$ and the positions of the bit flips in $\hat{b}$, where $\hat{b}$ denotes the noisy binary measurement vector and $L$ denotes the number of bit flips in $\hat{b}$. The positions of the random bit flips in $\hat{b}$ are represented by a vector $\Lambda$, where $\Lambda_i \in \{-1, +1\}$ and $\odot$ denotes the elementwise product. That is, $\Lambda_i = -1$ means that there is a bit flip in $\hat{b}_i$. AOPf solves the following optimization problem
$\min_{x, \Lambda} \big\| \big[ (\Lambda \odot \hat{b}) \odot (\Phi x) \big]_- \big\|_2^2 \quad \text{s.t.} \quad \|x\|_2 = 1, \ \sum_{i=1}^{M} \tfrac{1 - \Lambda_i}{2} \le L,$  (9)
where $\|\cdot\|_2$ denotes the $\ell_2$ norm of the argument and $[\cdot]_-$ is the negative function defined as $[z]_- = \min(z, 0)$, applied elementwise.
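The negative function and the cost in (9) can be sketched as follows. Note that `aop_objective` only evaluates the cost for a given candidate pair $(x, \Lambda)$; it does not perform the AOPf minimization itself, which is the iterative algorithm of [17].

```python
import numpy as np

def neg(z):
    """Elementwise negative function: [z]_- = min(z, 0)."""
    return np.minimum(z, 0.0)

def aop_objective(x, Phi, b, Lam):
    """||[(Lam * b) * (Phi x)]_-||_2^2: a measurement contributes to the
    cost only when its sign disagrees with the flip-corrected observation."""
    return float(np.sum(neg((Lam * b) * (Phi @ x)) ** 2))
```

For example, with $\Phi = I$, $x = (1, -1)$ and observations $b = (1, 1)$, declaring a flip in the second bit ($\Lambda = (1, -1)$) drives the cost to zero, while declaring no flips leaves a residual cost.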
In the next section, we propose some changes to the input of AOPf so that it can utilize soft bits as input. In addition, we apply a mapping method to the reconstructed signal to produce a priori soft bits to be used as input to the APP decoder.
III-C. Soft-in/Soft-out 1-bit CS Decoder
As mentioned in Section III-B, AOPf accepts binary values as input to reconstruct the signal. Therefore, a trivial way to apply AOPf as a decoder after the APP decoder is to use $\hat{b}$ from (8) in (9). However, by solely using hard bits, we lose the information about the reliability of the data. In addition, AOPf needs to know an estimate of the number of bit flips in $\hat{b}$ to reconstruct the signal efficiently.
Here, we develop a method to use soft bits as input to reconstruct the signal via AOPf. First, $\hat{b}$ is replaced with the soft bits $\tilde{b}$ in (9). In addition, we define $p$, whose elements represent the probability of a bit flip in the corresponding element of $\hat{b}$. Thus, $p_j$ is derived from
$p_j = \Pr\{b_j = -\hat{b}_j \,|\, r\}.$  (10)
Substituting (5), (7) and (8) in (10) gives
$p_j = \frac{1 - |\tilde{b}_j|}{2}.$  (11)
The estimated number of bit flips is denoted by $\hat{L}$ and is obtained by accumulating the flip probabilities,
$\hat{L} = \Big\lfloor \sum_{j=1}^{M} p_j \Big\rceil,$  (12)
where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer.
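A numeric sketch of (11)-(12) follows; the soft-bit values are illustrative, and rounding the accumulated probabilities to the nearest integer is our reading of the flip-count estimate.

```python
import numpy as np

# A posteriori soft bits (illustrative values).
b_soft = np.array([0.95, -0.2, 0.6, -0.99])

# Flip probability (11): a bit with soft value near +/-1 is unlikely flipped.
p_flip = (1.0 - np.abs(b_soft)) / 2.0

# Estimated number of bit flips (12): accumulate and round.
L_hat = int(round(float(p_flip.sum())))
```

Here the unreliable second bit (soft value $-0.2$) dominates the flip budget, and the estimate is one expected flip across the four bits.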
Now, with $\tilde{b}$ from (7) and $\hat{L}$ from (12), $x$ can be estimated through AOPf, and the following optimization can be solved via the algorithm in [17]:
$\hat{x} = \arg\min_{x, \Lambda} \big\| \big[ (\Lambda \odot \tilde{b}) \odot (\Phi x) \big]_- \big\|_2^2 \quad \text{s.t.} \quad \|x\|_2 = 1, \ \sum_{i=1}^{M} \tfrac{1 - \Lambda_i}{2} \le \hat{L}.$  (13)
The next step of the decoder generates the soft bits, $\tilde{b}^{a}$, at the output. We apply a CS encoder to the estimated signal. Thus, we obtain
$z = \Phi \hat{x}.$  (14)
The elements of $z$ can be approximated by a Gaussian distribution with zero mean. In this case, unlike a binary phase shift keying (BPSK) system, most of the values to be mapped are concentrated around $0$. The challenge is to map these values to the interval between $-1$ and $+1$ based on their reliabilities. The elements with values around $0$ are the least reliable for generating a priori soft values. The most reliable elements are the ones furthest from $0$. Therefore, we first utilize the elements of $z$ that are further from $0$ and, over the iterations, we consider the influence of the elements of $z$ with values closer and closer to zero.
In the case that either there is no noise in the received binary measurements or the estimate of the number of bit flips is exact, $\mathrm{sign}(z)$ is very close to $\hat{b}$, and the sign of each element of $z$ describes the sign of the corresponding element in $\hat{b}$. In the noisy case, however, there are some sign mismatches between the elements of $\mathrm{sign}(z)$ and $\hat{b}$. To account for the effect of the random bit flips on the soft values, we multiply $z$ with $\hat{b}$ and denote the result by $v$,
$v = \hat{b} \odot z.$  (15)
In fact, the elementwise multiplication in (15) removes the signs of the elements of $z$. In the case that there are no bit flips in $\hat{b}$, then $\mathrm{sign}(z) = \hat{b}$ and all the elements of (15) are positive. However, in the presence of random bit flips, the negative elements of $v$ indicate the sign flips in $\hat{b}$, and the elements with large amplitudes are more reliable than the ones with small or negative amplitudes. Based on the above facts, a mapping function $f(\cdot)$ is introduced which maps each element of $v$ to a real value between $-1$ and $+1$. The mapping function is defined as follows
$f(v_i) = \max\!\big(-1, \min(1, v_i / d)\big),$  (16)
where $d$ is the normalized Euclidean distance between $\hat{b}$ and $\mathrm{sign}(z)$. We have
$d = \frac{\|\hat{b} - \mathrm{sign}(z)\|_2}{2\sqrt{M}}.$  (17)
In fact, $d$ determines how much information is lost by applying the sign function over $z$.
Since the signs of the elements in $v$ were removed in (15), the values obtained from (16) need to be multiplied again by $\hat{b}$ in order to bring the signs back. Hence, the soft output is obtained by
$\tilde{b}^{a} = \hat{b} \odot f(v).$  (18)
In Fig. 2, the mapping method is depicted. In words, $f(\cdot)$ is a mapping function that categorizes the elements of $v$ by their signs:

The negative elements of $v$ are mapped to values in the interval between $-1$ and $0$ based on their amplitudes. As mentioned above, the negative elements in $v$ specify the bit flips in $\hat{b}$. In addition, the negative elements with the smallest values are the most likely to be flipped and are mapped to values close to $-1$.

The positive elements of $v$ are mapped, based on their amplitudes between $0$ and $d$, to values between $0$ and $1$. Elements of $v$ exceeding $d$ are clipped and mapped to $1$.
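Under the description above, the mapping (16)-(18) amounts to scaling $v$ by $d$ and clipping to $[-1, 1]$. The sketch below is our interpretation of that description with illustrative values for $v$, $\hat{b}$ and $d$, not a verbatim transcription of the paper's function.

```python
import numpy as np

def soft_map(v, d):
    """Map v = b_hat * (Phi x_hat) into [-1, 1]: entries beyond d (reliable)
    saturate at +/-1, entries near zero (unreliable) stay near zero."""
    return np.clip(v / d, -1.0, 1.0)

v = np.array([2.0, 0.25, -0.1, -3.0])      # illustrative values of (15)
b_hat = np.array([1.0, -1.0, 1.0, -1.0])   # illustrative hard bits
d = 0.5                                    # illustrative distance (17)

f_v = soft_map(v, d)          # (16): clipped, sign-free reliabilities
b_soft_prior = b_hat * f_v    # (18): restore the signs removed in (15)
```

The strongly negative last entry (a confident flip) saturates at $-1$, while the weakly negative third entry stays near zero, reflecting its low reliability.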
We refer to the proposed decoding method as the soft-in/soft-out 1-bit CS decoder.
Example: To justify the performance of the soft-in/soft-out 1-bit CS decoder, we consider the best case, where there is no noise in the binary measurements. Hence, $\hat{b} = b$. We have $\hat{L} = 0$ from (12). $\hat{x}$ is estimated by (13). The elements of $v$ obtained from (15) are all positive. Therefore, $\mathrm{sign}(z) = \hat{b}$. Furthermore, (17) gives $d = 0$, which yields $f(v_i) = 1$ for every element. Thus, all the elements of $f(v)$ are $1$. In this case, $\tilde{b}^{a}$, given by (18), is identical to $\hat{b}$.
III-D. Combination of Soft-in/Soft-out 1-bit CS and APP Decoding
In Section III-C, the soft-in/soft-out 1-bit CS reconstruction method was introduced, which receives soft bits and generates improved soft bits as output. In this section, we combine the soft-in/soft-out 1-bit CS decoder with an APP decoder to obtain the turbo CS decoder for the transmission system of Section II.
As discussed in Section II, the transmission system consists of a 1-bit CS encoder serially concatenated with a convolutional encoder at the transmitter. Hence, the 1-bit CS encoder works as a source encoder that receives real values and compresses the data with rate $M/N$. The binary output of the 1-bit CS encoder is given to the convolutional encoder. At the receiver, as illustrated in Fig. 3, the received noisy signal is input to an APP decoder. The a priori soft bits are zero for the first iteration. The soft output of the decoder, namely the a posteriori probabilities, is given to the soft-in/soft-out 1-bit CS decoder to estimate the transmitted signal. The soft output of the soft-in/soft-out 1-bit CS decoder is provided to the APP decoder as a priori information for the next iteration. These steps are repeated for each iteration. Through the iterations, as the soft bits $\tilde{b}$ tend to $\pm 1$, $\hat{L}$ goes to $0$ and the output of the turbo CS decoder converges.
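The decoding schedule described above can be sketched as a plain loop. Here `app_decode` and `cs_soft_decode` are hypothetical stand-ins for the two constituent decoders, which are not implemented in this sketch; only the message flow between them is shown.

```python
import numpy as np

def turbo_cs_decode(r, app_decode, cs_soft_decode, M, n_iter=6):
    """Turbo CS schedule: the APP decoder and the soft-in/soft-out 1-bit CS
    decoder exchange soft bits; a priori soft bits are zero at iteration 1."""
    b_soft_prior = np.zeros(M)  # no a priori knowledge before the first pass
    x_hat = None
    for _ in range(n_iter):
        # Inner (APP) decoder: channel observation + prior -> a posteriori soft bits.
        b_soft_post = app_decode(r, b_soft_prior)
        # Outer (CS) decoder: signal estimate + improved prior for next iteration.
        x_hat, b_soft_prior = cs_soft_decode(b_soft_post)
    return x_hat
```

With trivial stand-in decoders, the loop simply threads the soft-bit vector back and forth six times, which is the iteration count after which the paper reports convergence.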
IV. Numerical Results
In this section, we verify the reconstruction performance of turbo CS through numerical simulation. In each realization, we choose the $K$-sparse signal vector $x$ of dimension $N$ randomly: the $K$ nonzero elements of $x$ follow a zero-mean Gaussian distribution, and their positions are distributed uniformly over the signal vector. The elements of the measuring matrix $\Phi$ are generated from a zero-mean Gaussian distribution. The signal is encoded through the 1-bit CS encoder, with rate $M/N$, and its binary output is interleaved by a random interleaver with block length $M$. Simulation results show, however, that the reconstruction performance of the turbo CS decoding system is not sensitive to the interleaver block length.
The interleaved bits are passed to a G[5,7] convolutional encoder with memory $2$, four states, and rate $1/2$. Then, the output of the convolutional encoder is passed through an AWGN channel with noise variance $\sigma_n^2$. We express the power of the channel noise by the signal-to-noise ratio (SNR), which is defined as
$\mathrm{SNR} = \frac{P_b}{R_c\, \sigma_n^2},$  (19)
where $P_b$ denotes the averaged power of a bit at the input of the channel encoder and $R_c$ denotes the encoder rate, which is $1/2$ for G[5,7].
The channel output is decoded by our proposed turbo CS decoder. To show the reconstruction performance, the received signal-to-noise ratio (RSNR) is defined as follows
$\mathrm{RSNR} = \frac{\|x\|_2^2}{\|x - \hat{x}\|_2^2}.$  (20)
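The RSNR in (20) is straightforward to compute; a minimal helper (in dB, matching how the results are reported) might look like this:

```python
import numpy as np

def rsnr_db(x, x_hat):
    """Reconstruction quality (20) in dB: ||x||^2 / ||x - x_hat||^2."""
    return 10.0 * np.log10(np.sum(x**2) / np.sum((x - x_hat)**2))
```

For instance, an estimate with a uniform 10% per-sample error corresponds to an RSNR of 20 dB.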
We verify the reconstruction performance of turbo CS through the iterations in different channel noise scenarios. The signal-to-noise ratio is varied over a wide range, and the calculated RSNR is averaged over the realizations. The simulated results are shown in Fig. 4 for successive iterations of the turbo CS decoder.
As can be seen in Fig. 4, there is a huge improvement in the reconstruction performance of turbo CS through the iterations. The reconstruction performance converges after around six iterations, where we achieve 12 dB of improvement. This is a massive performance gain over concatenated coding with no iterations (iteration 1 in Fig. 4). Note that we see the turbo-like property that most of the gain comes in the second iteration. After convergence, the difference between the reconstruction accuracy of turbo CS when the channel is very noisy and when the channel is almost noiseless is small.
In another simulation, the convolutional encoder is removed. In this case, the channel noise is calculated by (19) with $R_c = 1$. Since there is no information at the receiver about the number of random bit flips in the received signal, we set $\hat{L} = 0$ in (13). The performance of uncoded 1-bit CS is depicted by the dashed line in Fig. 4. It can be seen that the RSNR of 1-bit CS decoding is significantly worse when no channel encoding/decoding is used.
Note that at very low SNR, uncoded 1-bit CS outperforms turbo CS. This behaviour is not unexpected since, in general, when the AWGN channel is very noisy, convolutional decoders have poor performance in terms of bit error rate in comparison to an uncoded BPSK system [19].
V. Conclusion
In this work, we applied 1-bit CS as a generic source encoding method in a signal transmission problem over an AWGN channel. We combined 1-bit CS with a convolutional encoder and formed a serially concatenated source/channel encoding method. The key contribution of this paper is the turbo CS decoding method for the above transmission system. In turbo CS, we benefit from the a posteriori soft bits generated by the APP decoder to estimate the reliability (number of sign flips) of the bits given to the 1-bit CS decoder. In addition, a mapping method was introduced to modify the given soft bits based on the current estimate of the signal.
Here, we used the non-recursive convolutional code G[5,7] as the channel encoder and the corresponding APP decoder within our turbo CS decoder. However, we expect that most convolutional encoder/decoder pairs could be applied in this system model to reconstruct the signal jointly with the soft-in/soft-out 1-bit CS decoder. In addition, unlike classic turbo coding, turbo CS performance is not sensitive to the length of the interleaver.
Simulation results show that the reconstruction performance of turbo CS improves considerably through the iterations. When the channel is very noisy, a 12 dB gain is achievable after six iterations. In addition, the performance of the converged turbo CS is robust against the channel noise.
References
 [1] J. Hagenauer, “Source-controlled channel decoding,” IEEE Trans. Commun., vol. 43, no. 9, pp. 2449–2457, Sep. 1995.
 [2] E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
 [3] E. J. Candès and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 21–30, Mar. 2008.
 [4] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, “Compressed sensing MRI,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 72–82, Mar. 2008.
 [5] C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo-codes (1),” in Proc. IEEE Int. Conf. Commun. (ICC), vol. 2, Geneva, Switzerland, May 1993, pp. 1064–1070.
 [6] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, “Serial concatenation of interleaved codes: Performance analysis, design, and iterative decoding,” IEEE Trans. Inf. Theory, vol. 44, no. 3, pp. 909–926, May 1998.
 [7] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, “Optimal decoding of linear codes for minimizing symbol error rate,” IEEE Trans. Inf. Theory, vol. 20, no. 2, pp. 284–287, Mar. 1974.
 [8] L. Schmalen, M. Adrat, T. Clevorn, and P. Vary, “EXIT chart based system design for iterative source-channel decoding with fixed-length codes,” IEEE Trans. Commun., vol. 59, no. 9, pp. 2406–2413, Sep. 2011.
 [9] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2346–2356, Jun. 2008.
 [10] P. T. Boufounos and R. G. Baraniuk, “1-bit compressive sensing,” in Proc. Annual Conf. Inf. Sciences Syst. (CISS), Princeton, NJ, Mar. 2008, pp. 16–21.
 [11] Y. Plan and R. Vershynin, “One-bit compressed sensing by linear programming,” Commun. Pure and Appl. Math., vol. 66, no. 8, pp. 1275–1297, 2013. [Online]. Available: http://dx.doi.org/10.1002/cpa.21442
 [12] ——, “Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach,” IEEE Trans. Inf. Theory, vol. 59, no. 1, pp. 482–494, Dec. 2012.
 [13] P. T. Boufounos, “Greedy sparse signal reconstruction from sign measurements,” in Proc. Asilomar Conf. Signals, Syst., Comput., CA, Nov. 2009, pp. 1305–1309.
 [14] J. N. Laska, Z. Wen, W. Yin, and R. G. Baraniuk, “Trust, but verify: Fast and accurate signal recovery from 1-bit compressive measurements,” IEEE Trans. Signal Process., vol. 59, no. 11, pp. 5289–5301, Nov. 2011.
 [15] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,” IEEE Trans. Inf. Theory, vol. 59, no. 4, pp. 2082–2102, Apr. 2013.
 [16] U. S. Kamilov, A. Bourquard, A. Amini, and M. Unser, “One-bit measurements with adaptive thresholds,” IEEE Signal Process. Lett., vol. 19, no. 10, pp. 607–610, 2012.
 [17] M. Yan, Y. Yang, and S. Osher, “Robust 1-bit compressive sensing using adaptive outlier pursuit,” IEEE Trans. Signal Process., vol. 60, no. 7, pp. 3868–3875, 2012.
 [18] A. Movahed, A. Panahi, and G. Durisi, “A robust RFPI-based 1-bit compressive sensing reconstruction algorithm,” in Proc. IEEE Inf. Theory Workshop (ITW), Lausanne, Switzerland, Sep. 2012, pp. 567–571.
 [19] J. G. Proakis, Digital Communications. New York: McGraw-Hill, 1995.