Repeat-Accumulate Signal Codes
We propose a new state-constrained signal code, namely, the repeat-accumulate signal code (RASC). The original state-constrained signal codes directly encode modulation signals by signal processing filters, the coefficients of which are constrained over Eisenstein rings. Although the performance of signal codes is determined by the signal filters, optimum filters have been found in the literature by brute-force search in terms of symbol error rate (SER) because the asymptotic behavior with different filters had not been investigated. We introduce Monte Carlo density evolution (MC-DE) to analyze the asymptotic behavior of RASCs. Based on the MC-DE analysis, the optimum filters can be efficiently found for given encoder parameters. Numerical results show that the difference between the noise threshold and the Shannon limit is within 0.8 dB. We also introduce a low-complexity decoding algorithm. The computational complexities of the BCJR decoder and fast Fourier transform-based belief propagation (FFT-BP) increase exponentially as the number of output constellation points increases. To overcome this problem, an extended min-sum (EMS) decoder under the Eisenstein ring constraint is established. Simulation results show that the EMS decoder reduces the computational complexity to less than 25% of that of BCJR and FFT-BP with a performance loss of no more than approximately 1 dB.
Capacity-approaching codes, such as low-density parity check (LDPC) codes and turbo codes, provide significant coding gains [1, 2, 3]. Based on theoretical analysis, the noise thresholds of these codes, which are the maximum decodable noise variances, are close to the Shannon limit. Moreover, simulation results show that these codes exhibit high coding gain over the additive white Gaussian noise (AWGN) channel when binary phase shift keying (BPSK) is used [4, 5]. However, in practice, it is difficult to approach the Shannon capacity because the correlation among unreliable coded bits degrades the decoding performance when high-order modulation is used. Coded modulation is an effective technique for enhancing the coding gain with high-order modulation because it enables the Hamming distances of the linear codes and the Euclidean distances of the modulation to be designed in an integrated manner [6, 7]. However, in order to achieve a high transmission rate, jointly designing these two distances incurs impractical computational complexity because of the enormous number of constellation points involved.
Lattice codes are structured codes for AWGN channels that have gained a great deal of attention since Erez and Zamir proved that lattice codes achieve the Shannon capacity. The advantage of lattice codes is that linear codes over a Hamming space can easily be transformed to a Euclidean space by algebraic construction. The use of a lifting scheme, such as construction A or construction D, combined with powerful linear codes has shown excellent performance. For instance, LDPC codes have been used to design construction D lattices, and the resulting performance was within 3 dB of the Shannon limit. A lattice construction using turbo codes, together with a decoding algorithm, has also been proposed. Turbo lattices achieve a performance within approximately 1.25 dB of the Shannon capacity. Although these codes can directly integrate the coded sequence with modulation signals, it is impractical to implement lattice codes because the constellation of lattice codes expands infinitely over the signal space. Thus, the transmitted power could be unbounded. Furthermore, due to the enormous number of constellation points, both encoding and decoding involve significant computational complexity.
Recently, signal codes were proposed as a feasible lattice-coded modulation technique. The entire encoding process consists of simple signal convolution via infinite impulse response (IIR) and/or finite impulse response (FIR) filters. In signal codes, the filter coefficients are the most important parameters for coding gain because the coefficients define the Euclidean distances among the generated codewords. It has been shown that the absolute values of the filter coefficients must be close to one in order to obtain high coding gain. This implies that the transmission power increases exponentially with the codeword length, and shaping techniques such as Tomlinson-Harashima precoding (THP) are necessary to meet the average power constraint in practice. Although THP can tailor the transmission power, the number of possible constellation points becomes infinite. This results in exponential decoding complexity, and well-known powerful decoding algorithms such as the BCJR algorithm cannot be utilized. Therefore, a list decoder has been used, and the resulting decoding performance is within 2 dB of the Shannon limit at the target frame error rate (FER). However, list decoding is sub-optimal and is inferior to the BCJR algorithm.
Mitran and Ochiai proposed state-constrained signal codes, called turbo signal codes, to overcome the explosion of the decoding complexity. In state-constrained signal codes, the shaping coefficients are chosen over Eisenstein rings. Since rings are closed under addition and multiplication, the output signals are also constrained over Eisenstein rings, and the number of possible constellation points becomes finite. This additional constraint enables the decoder to use the BCJR algorithm, and the resulting performance is within 0.8 dB of the Shannon limit at the target symbol error rate (SER). Although the decoding complexity has been relaxed by Eisenstein rings, turbo signal codes still suffer from a remarkable increase in the number of possible filter states when higher-order modulation is used as input, because the number of states increases exponentially with the order of the modulation. This prevents the decoder from using the BCJR algorithm. Moreover, the fundamental code properties of signal codes, such as the decoding threshold, have not yet been investigated. Therefore, previous studies resorted to a brute-force search of the filter coefficients in order to obtain higher coding gain.
In this paper, in order to address the inherent problems of conventional turbo signal codes, we propose novel state-constrained signal codes, called repeat-accumulate signal codes (RASCs). RASCs are based on repeat-accumulate (RA) codes. The proposed RASCs are composed of a repeater, an interleaver, and a one-tap IIR filter, i.e., an accumulator. Here, the codewords are restricted not only by Eisenstein rings but also by parity check constraints. Although the parity check constraints require density evolution (DE) in order to analyze the noise threshold of the RASCs, regular DE cannot be used because tracking the true densities is impractically complex. Hence, we introduce MC-DE for revealing the asymptotic behavior of the RASCs. The condition for the optimum filter is also studied via MC-DE. Moreover, we introduce a low-complexity decoding algorithm based on the EMS algorithm. The proposed algorithm is modified to decode codes over Eisenstein rings. The simulation results reveal that the modified EMS algorithm can reduce the complexity to a quarter of that of the BCJR algorithm without significant performance loss. In summary, the contributions of the present study are as follows:
Thresholds with several important parameters of RASCs, such as the filter, the number of basis of rings, the column weight, and the input constellation size, are determined via MC-DE. By carefully choosing the parameters, the Shannon capacity is approached to within approximately 0.8 dB.
A new input constraint is introduced in order to increase the transmission rate. In conventional turbo signal codes, the input signals were constrained by quadrature amplitude modulation (QAM), which is a subset of Eisenstein rings, because providing redundancy in terms of constellation size can improve the error performance. However, our analysis via MC-DE indicates that the thresholds when the input signals are constrained over Eisenstein rings are slightly better than those when the input signals are constrained by QAM.
We introduce a modified EMS algorithm for state-constrained signal codes in order to reduce the decoding complexity. The key concept of EMS is to extract only the highly reliable elements of the message vector in order to reduce the size of the exchanged message vector. As a result, the decoding complexity dramatically decreases because it depends not on the number of output constellation points but rather on the size of the truncated message vectors. Simulation results show that the performance loss due to message truncation is only approximately 1 dB, with a computational complexity of less than a quarter of that of the BCJR or SP algorithm.
The remainder of this paper is organized as follows. The basics of the signal codes, such as the system model, channel model, and the definition of Eisenstein rings, are explained in Section 2. In Section 3, we propose the encoding and decoding structure of RASCs. Section 4 shows the optimum filters and their decoding thresholds as calculated by MC-DE. In Section 5, numerical results of RASCs obtained using FFT-BP and EMS decoders are presented, and the decoding performances of RASCs and turbo signal codes are compared. Finally, Section 6 concludes the paper.
2 Basics of Signal Codes
2.1 System Model
Figure 1 shows a system model of general signal codes. Input complex signals are directly encoded by a signal encoder, where is the number of input signals. We now assume -QAM for the input signals, and its constellation is defined as follows:
This implies that the input signals are defined over the residue class of a Gaussian integer, .
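As a concrete illustration of this residue-class view, the following sketch (our own, not taken from the paper; the odd-integer grid is a common convention for square QAM) enumerates an M-QAM constellation as Gaussian-integer representatives:

```python
# Illustrative sketch: an M-QAM constellation viewed as representatives of a
# residue class of the Gaussian integers Z[i]. Here M = m^2 and the points
# are a + b*i with a, b drawn from an odd-integer grid (an assumed layout).

def qam_constellation(M):
    """Return the M-QAM points as complex numbers on the odd-integer grid."""
    m = int(M ** 0.5)
    assert m * m == M, "M must be a perfect square"
    # odd-integer coordinates, e.g. {-3, -1, 1, 3} for 16-QAM
    coords = [2 * k - (m - 1) for k in range(m)]
    return [complex(a, b) for a in coords for b in coords]

points = qam_constellation(4)
# 4-QAM: the four points -1-1j, -1+1j, 1-1j, 1+1j
```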
The encoder maps input signals to coded signals via the generator matrix, whose elements correspond to filter coefficients, where is the number of output signals. To meet the average power constraint, a shaping operation should be performed in the encoding process. A shaping vector restricts the amplitude of the encoded signals into a desired shaping region. In this subsection, THP is assumed as the shaping operation. THP aims to restrict the real and imaginary parts of the output constellation into ; the shaping operation is then
where is the number of filter taps, and are chosen as follows:
Thus, the received signals over the AWGN channel can be written as
where the elements of the noise vector follow an i.i.d. (independent and identically distributed) circularly symmetric zero-mean Gaussian distribution with variance .
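The shaping step described above can be sketched as a modulo-fold of the real and imaginary parts of each encoded signal; the shaping region [-m, m) and the helper name `thp_fold` are illustrative assumptions, not notation from the paper.

```python
# Hedged sketch of Tomlinson-Harashima-style shaping: fold the real and
# imaginary parts of an encoded signal back into [-m, m) by a modulo-2m
# reduction, so the average transmission power stays bounded.

def thp_fold(x, m):
    """Reduce the real and imaginary parts of complex x into [-m, m)."""
    def fold(v):
        return ((v + m) % (2 * m)) - m
    return complex(fold(x.real), fold(x.imag))

# A large filter output is folded back into the shaping region [-2, 2):
y = thp_fold(complex(7.5, -9.0), m=2)   # -> (-0.5 - 1.0j)
# Signals already inside the region are left unchanged:
z = thp_fold(complex(1.5, -2.0), m=2)   # -> (1.5 - 2.0j)
```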
2.2 Turbo signal codes over Eisenstein rings
As described above, turbo signal codes introduce the constraint based on Eisenstein rings to control the number of output constellation points. Thus, the encoded signals of turbo signal codes are constrained by
where represents the number of basis of the Eisenstein rings as a positive integer, and and are also chosen as integers. Then, the th coded signal of the turbo signal codes is given by
where and indicate the feedback and feedforward filter coefficients, respectively, constrained over the Eisenstein rings , and represents the signal held in the -th memory of the encoder. All , , and are chosen over so that the encoded signals are constrained over , since rings are closed under addition and multiplication. The set of the elements of is defined as , and the cardinality of is obtained by
For shorthand notation, the index of the feedback filters FB is defined as follows:
The index of feedforward filters FF is expressed in the same manner as the FB. Hence,
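The closure of Eisenstein rings under addition and multiplication, which is what keeps the output constellation finite, can be checked with a few lines of arithmetic. The coefficient representation (a, b) for a + b·w with w = exp(2πi/3) is a standard convention; modular reduction by the basis size is not shown here.

```python
# Minimal sketch of Eisenstein-integer arithmetic in Z[w], w = exp(2*pi*i/3),
# using the identity w^2 = -1 - w, so the ring is closed under + and *.
# Coefficients are plain ints; reduction modulo a basis size is omitted.

def eis_add(x, y):
    """(a1 + b1*w) + (a2 + b2*w)"""
    return (x[0] + y[0], x[1] + y[1])

def eis_mul(x, y):
    """(a1 + b1*w) * (a2 + b2*w), substituting w^2 = -1 - w."""
    a1, b1 = x
    a2, b2 = y
    # a1*a2 + (a1*b2 + a2*b1)*w + b1*b2*w^2
    return (a1 * a2 - b1 * b2, a1 * b2 + a2 * b1 - b1 * b2)

p = eis_mul((1, 1), (1, 1))   # (1 + w)^2 = 1 + 2w + w^2 = w, i.e. (0, 1)
```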
3 Repeat-Accumulate Signal Codes
As shown in the previous section, at least three filter tap coefficients are required to construct turbo signal codes, namely, the tap coefficients of the feedforward filters, and , and a tap coefficient of the feedback filter, . For turbo signal codes, it is difficult to analyze the decoding performance and to design the filter coefficients because the search space of the filter coefficients has a minimum of candidates. Therefore, RASCs are proposed in order to reduce the search space to , because our encoder has only one filter tap coefficient.
3.1 Encoding Structure
The encoder of the RASC is shown in Fig. 2. In this paper, a non-systematic RASC is assumed. The figure shows that the encoder consists of the same components as RA codes. The output signal from the -times repetition encoder is , and the output signal from the interleaver is , both of which are constrained over the same rings as the input signals . For instance, when the input signals are -QAM, and become , where , since a non-systematic RASC is assumed. The accumulator of the RASC is slightly different from that of the binary RA code because the accumulator consists of a filter and a shaping operation. The th output signal from the encoder is given by
The resulting codes satisfy the following parity check equation:
Then, the RASC can be decoded by the SP algorithm, since the parity check matrix can be defined based on (12). However, turbo signal codes with useful filter coefficients do not satisfy the parity check equations. The proof for the filters of the turbo signal codes is presented in Appendix 7.
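The encoder structure described above can be sketched as repetition, interleaving, and one-tap accumulation with shaping. The filter tap g, the modulo base standing in for the shaping operation, and the seeded random interleaver below are placeholder choices for illustration, not parameters from the paper.

```python
# Hedged sketch of an RASC-style encoder: q-fold repetition, a seeded random
# interleaver, and a one-tap accumulator followed by a modulo-style shaping
# step. Integer symbols and the modulo base are illustrative assumptions.
import random

def rasc_encode(info, q, g, base, seed=0):
    """Repeat, interleave, then accumulate x[i] = shape(g * x[i-1] + u[i])."""
    repeated = [s for s in info for _ in range(q)]      # q-times repetition
    rng = random.Random(seed)
    perm = list(range(len(repeated)))
    rng.shuffle(perm)                                   # random interleaver
    interleaved = [repeated[p] for p in perm]
    out, state = [], 0
    for u in interleaved:
        state = (g * state + u) % base                  # one-tap IIR + shaping
        out.append(state)
    return out

cw = rasc_encode([1, 2, 0, 3], q=3, g=1, base=4)
# codeword length is q times the information length; symbols stay in [0, base)
```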
Mapping Space of Filters
As shown in Fig. 3, the number of signal points appears to increase as or increases. Moreover, as shown in Figs. 3-\subref{const_L2N3} and 3-\subref{const_L2N4}, the number of signal points, defined as , is less than the cardinality . The reason for this is that some points over degenerate into one point over for because of the ring constraint.
Notably, the decoding performance of state-constrained signal codes depends on the mapping space of the filters. Figure 4 shows four examples of mapping spaces of various filters at and . The mapping space of (Fig. 4-\subref{const_FB11}) has the same constellation as Fig. 3-\subref{const_L2N2}. However, the mapping spaces of (Fig. 4-\subref{const_FB3}), (Fig. 4-\subref{const_FB5}), and (Fig. 4-\subref{const_FB15}) are subsets of . In these cases, the filtered signal points degenerate into a few points, which implies that these filters lead to catastrophic codes because degenerated signals cannot be correctly recovered from the received signals. Therefore, bijectivity is a necessary condition when choosing filters.
Variable Rate with Variable Input Constraint
In the turbo signal code, the achievable rate is because the input signals are constrained over (i.e., -QAM). Therefore, must be increased in order to achieve a higher rate, but a large causes high decoding complexity because the number of states is . In this paper, we propose high-rate signal codes while keeping the decoding complexity low. As described above, the cardinality of is identical to that of if and only if . Then, the codewords are one-to-one mapped into the complex plane while the number of output states is maintained. We exploit this property, whereby the input signals are chosen over , to obtain higher rates with the same complexity.
Then, the transmission rate of the non-systematic RASC is given by
3.2 Decoding Structure
In this subsection, two decoding algorithms, namely FFT-BP and EMS, are introduced for the RASC. These algorithms were originally proposed for decoding non-binary LDPC codes defined over a Galois field with low computational complexity. In this paper, we modify these decoding algorithms for use with the RASC defined over Eisenstein rings.
The decoding complexity of the BCJR algorithm for the state-constrained signal codes is , because the number of states in the trellis diagram is given by . Therefore, a low-complexity decoding algorithm is required because decoding is difficult when and become quite large. Thus, FFT-BP is introduced for the RASC. The complexity of FFT-BP is reduced to .
The exchanged message is defined as a vector of log-likelihood ratios (LLRs), the length of which is , similar to non-binary LDPC codes. First, the notation of the FFT-BP algorithm is introduced.
Notations: In a parity matrix , the set of non-zero elements of the row indexed by and the set of non-zero elements of the column indexed by are defined as and , respectively. represents the number of decoding iterations. The message vectors of the variable node and the check node at are denoted by and , respectively. The channel LLR vector is denoted by . As an example, the notation of the LLR vector is introduced as follows.
with the probability that the random variable takes on the values , and , . and are expressed in the same manner. The four steps of the FFT-BP algorithm are described below.
Initialization: As initialization, all elements of , , and are given by
Variable Node Updates: the elements of the variable node messages are updated via the following rule:
In the non-systematic RASC, the variable nodes at become hidden nodes, since the information signals are not transmitted. If the signals of a hidden node are chosen over , the message is updated as
This procedure enables the decoder to ignore impossible candidates, since the input signals are defined over .
Check Node Updates: the processing at both the hidden nodes and the observation nodes is given by
where indicates the permutation function, which depends on the mapping space of the filter, represents the multidimensional FFT, and and are the inverse functions corresponding to and , respectively. As described in , a Fourier transform exists if the codes are constrained over Abelian groups. Therefore, if is a power of two, the RASC can be decoded via FFT-BP, since the codewords are constrained by Eisenstein rings, which also satisfy the properties of commutative additive groups. Note that in log-domain FFT-BP, every LLR must be split into its amplitude and sign; the details are omitted for brevity.
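The role of the Fourier transform in the check node update can be illustrated over the additive group Z_q: the circular convolution of incoming probability vectors becomes a pointwise product in the transform domain. This toy works with probabilities rather than log-domain LLRs and uses a direct O(q^2) DFT for clarity; both are simplifying assumptions.

```python
# Sketch of FFT-based check node processing over Z_q: a check node combines
# incoming pmfs by circular convolution, which the DFT turns into a
# pointwise product. Pure-Python DFT, chosen for readability over speed.
import cmath

def dft(v, inverse=False):
    n, sgn = len(v), (1 if inverse else -1)
    out = [sum(v[k] * cmath.exp(sgn * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [x / n for x in out] if inverse else out

def check_node_conv(p, q):
    """Circular convolution of two pmfs over Z_q via the DFT."""
    P, Q = dft(p), dft(q)
    return [abs(x.real) for x in dft([a * b for a, b in zip(P, Q)], inverse=True)]

# X uniform on {0, 1}, Y uniform on {0, 2}: (X + Y) mod 4 is uniform on Z_4.
r = check_node_conv([0.5, 0.5, 0.0, 0.0], [0.5, 0.0, 0.5, 0.0])
```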
Tentative Decision: the information signal at is estimated after iterations according to the following criterion:
Modified EMS Algorithm
The FFT-BP algorithm can reduce the decoding complexity to if and only if the number of output constellation points is a power of two. However, the decoding complexity still increases exponentially with and . The modified EMS algorithm proposed in is introduced herein in order to alleviate this complexity issue. In the elementary step of the variable node update, the decoder replaces the elements of the message vector whose corresponding signals are not defined over with the elements whose corresponding signals are defined over . The decoding complexities of both the variable node and check node updates can be reduced by the elementary steps described below.
First, we truncate the message vectors to the largest LLRs, denoted by and , . The values in these message vectors are sorted in decreasing order. The decoding complexity can be significantly reduced when , because only the truncated messages are exchanged.
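The truncation step can be sketched as follows; the symbol labels and LLR values are made up for the example, and n_m plays the role of the truncated message length.

```python
# Illustrative truncation step of the EMS decoder: keep only the n_m largest
# LLR entries of a message vector, remembering which symbols they belong to.

def truncate_message(llrs, n_m):
    """Return the n_m (symbol, llr) pairs with the largest LLRs, sorted
    in decreasing order of reliability."""
    ranked = sorted(llrs.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:n_m]

msg = {0: -1.2, 1: 3.4, 2: 0.7, 3: -5.0, 4: 2.1}
top = truncate_message(msg, n_m=3)   # [(1, 3.4), (4, 2.1), (2, 0.7)]
```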
Elementary Step of Variable Node Update:
For , the output of this elementary step is held as an intermediate message, and the step is processed recursively. At the first step of the update, we compute the messages of as
where , and is given by
where is a scalar value that compensates for truncated LLRs and is given by
where is an offset value optimized via density evolution.
The variable node message consists of the LLRs from . If the input signals are defined over , then the hidden node messages are updated by
where represents all of the symbols defined over , and is given by
This algorithm is slightly different from the EMS algorithm for non-binary LDPC codes because the decoder has a priori information that the input signals are constrained over . The decoder then replaces the LLRs even if cannot be calculated from and . Otherwise, if the input signals are constrained over Eisenstein rings, the decoder simply updates the variable messages via (23).
Elementary Step of Check Node Update:
Figure 5 shows the elementary step of the check node update. Similar to the variable node update, the output of this elementary step is stored as an intermediate message and is recursively updated when the row degree is greater than three.
In the check node update, we search the set , which is constrained by .
If we were to naively search the candidates, the complexity would be . Therefore, we search using the virtual matrix illustrated in Fig. 5. The algorithm is as follows:
Introduce the values of the first column of the virtual matrix to the sorter.
Compute the largest value.
Does the symbol associated with the largest output value exist in the output vector?
Yes: Remove the largest output value from the sorter.
No: Append the largest output value to the last element of the output vector.
Move the right-hand neighbor of the virtual matrix to the sorter. If there is no right-hand neighbor, then increase the row number.
The above check node update may not stop after steps, because the number of elements of the output vector may not reach when identical symbols are found at Step 3 within steps. However, steps are sufficient for a negligible degradation in decoding performance.
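The virtual-matrix search in the steps above can be sketched with a heap playing the role of the sorter; since the entries of each row are sums of two LLR lists sorted in decreasing order, only the current frontier of the matrix needs to sit in the sorter. Addition modulo q stands in for the actual group operation over the ring, and the inputs are made up for illustration.

```python
# Hedged sketch of the sorter-based EMS check node elementary step: the
# virtual matrix entry (i, j) is a[i].llr + b[j].llr, and a max-sorter pops
# the largest entries while skipping symbols that were already emitted.
import heapq

def ems_elementary_step(a, b, n_m, q):
    """a, b: lists of (symbol, llr) sorted by decreasing llr.
    Returns up to n_m (symbol, llr) pairs with distinct symbols."""
    # Step 1: load the first column of the virtual matrix into the sorter.
    heap = [(-(a[i][1] + b[0][1]), i, 0) for i in range(len(a))]
    heapq.heapify(heap)
    out, seen = [], set()
    while heap and len(out) < n_m:
        neg, i, j = heapq.heappop(heap)        # Step 2: largest value
        sym = (a[i][0] + b[j][0]) % q
        if sym not in seen:                    # Step 3: new symbol -> emit
            seen.add(sym)
            out.append((sym, -neg))
        if j + 1 < len(b):                     # Step 4: right-hand neighbor
            heapq.heappush(heap, (-(a[i][1] + b[j + 1][1]), i, j + 1))
    return out

res = ems_elementary_step([(1, 3.0), (0, 1.0)], [(2, 2.0), (3, 0.5)], n_m=2, q=4)
# -> [(3, 5.0), (0, 3.5)]
```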
4 Threshold Analysis and Filter Design
4.1 Monte Carlo Density Evolution
Density evolution is a powerful tool for finding the decoding threshold, which is an important indicator of code performance characteristics. In binary codes, the mathematical formulation of DE is easily obtained because the BP messages are scalars [1, 2]. In the non-binary case, tracking the true BP message distribution is impractically complex because the BP messages are vectors. Gaussian approximation (GA) is a feasible way to track the density of BP messages for non-binary LDPC codes because it reduces the number of parameters to only two, namely, the mean and the variance of the density. However, the message distribution of the check node diverges increasingly from the true distribution as the degree of the check node increases. Furthermore, GA can be used if and only if channel symmetry and permutation invariance can be assumed. Channel symmetry leads to an uncorrelated message distribution, so that the all-zero codeword assumption may be valid. Permutation invariance means eliminating the effect of the weight of the parity matrix, which is obtained by a random weight coefficient.
In the state-constrained signal codes, as in non-binary LDPC codes, the BP messages also consist of multiple parameters, and, because of the asymmetric property caused by non-uniform modulation, the decoding performance depends on the codewords. Therefore, we introduce the technique of adding a random coset vector. The random-coset vector is added at the end of the encoder. The random-coset elements are randomly chosen and uniformly distributed over . Thus, the resulting output codeword from the AWGN channel is symmetric. The proof of this symmetric property resulting from the random-coset vector is omitted herein. Although channel symmetry can be assumed, permutation invariance cannot be assumed because the weight coefficients are defined by the IIR filter coefficient. MC-DE has been introduced as an alternative approach for tracking densities with multiple parameters [16, 23, 24]. One advantage of MC-DE is that the estimated threshold is more accurate than with the Gaussian approximation method because the analysis is non-parametric. Thanks to the random-coset setting for the RASC, as described above, we can straightforwardly introduce MC-DE to approximate the noise threshold, as shown in Algorithm 1.
In this paper, we set several parameters for MC-DE as follows: the number of threshold calculations, ; the number of message samples, ; the maximum number of iterations, ; the decoding error threshold, , which means that decoding is regarded as successful when all symbols are correctly decoded; and the threshold precision, . In the subsequent sections, we present the optimum parameters and the corresponding noise thresholds.
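The outer loop of such a threshold search can be sketched as a bisection on the noise level, run to the stated precision. The success predicate below is a deliberately trivial stand-in so the driver is runnable; in Algorithm 1, decodability would instead be estimated by propagating sampled BP messages and checking the residual error against the decoding error threshold.

```python
# Sketch of the MC-DE outer loop: bisect the noise standard deviation to the
# desired precision, declaring a value decodable when a Monte Carlo estimate
# of the residual error falls below the target. The predicate here is a
# placeholder (it simply thresholds sigma); a real test would run the BP
# message-update recursion on sampled messages.

def mc_de_threshold(decodes, lo, hi, prec):
    """Largest sigma in [lo, hi] with decodes(sigma) True, to precision prec.
    Assumes decodes(lo) is True and decodes(hi) is False."""
    while hi - lo > prec:
        mid = 0.5 * (lo + hi)
        if decodes(mid):
            lo = mid          # still decodable: push the estimate up
        else:
            hi = mid          # decoding fails: pull the estimate down
    return lo

toy_decodes = lambda sigma: sigma < 0.8          # placeholder success test
th = mc_de_threshold(toy_decodes, 0.1, 2.0, 1e-3)
# th converges to within 1e-3 of the true cutoff 0.8
```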
4.2 Searching for the Best Filter
As described in the Introduction, an efficient search algorithm for the filter coefficients of state-constrained signal codes has not been reported. We therefore introduce an efficient search method via the proposed MC-DE, as shown in Algorithm 2.
The best four filters for several parameter sets are shown in Tables 1 and 2. Several filters appear to have the same or similar thresholds. For instance, in Table 1, and have the same threshold (1.14 dB). Similarly, and have the same threshold (3.10 dB). These filters are related to each other by affine transformations, e.g., rotation by 90 degrees, expansion, and contraction. This property can be exploited in order to further truncate the candidate filters.
4.3 Thresholds for Different Numbers of Basis
Figure 6 shows the difference between the noise threshold of the RASC and the Shannon capacity. In this figure, the impact of the number of basis on the threshold is analyzed. The indices of the best filters for each are shown in this figure because different values of have different optimum filters. Note that some affine-transformed filters have the same threshold, as described above. Thus, the filters must be searched for every target . Furthermore, the number of repetitions must be optimized for each . Based on our results, the optimal value of is three for , whereas that for is two. In non-binary LDPC codes, the optimum column weight is two for a large alphabet size, e.g., greater than or equal to GF(64).
Unfortunately, the threshold does not improve monotonically as increases. We believe that the reason for this is the dense constellation of the output signals. As shown in Figs. 3-\subref{const_L2N2}, 3-\subref{const_L2N3}, and 3-\subref{const_L2N4}, the distance between the signal points decreases as increases. Thus, increasing improves the coding gain but results in a short Euclidean distance of the constellation.
4.4 Thresholds for Different Numbers of Repetitions
The effect of the number of repetitions is shown in Fig. 7. Similar to the previous results, the optimum filter varies depending on . The difference between the noise threshold of the RASC and the Shannon limit for and is the smallest among all of the parameters considered (0.79 dB). However, for , the thresholds for are inferior to those for . This observation is consistent with the literature on non-binary LDPC codes, where, for a column weight greater than three, the performance of codes over higher-order fields is worse than that of codes over lower-order fields.
4.5 Thresholds for Different Input Constellation Sizes
The impact of the input constellation size is shown in Fig. 8. Since the computational complexity required for with is enormous, only the case of is illustrated. In contrast to the impact of , the difference between the noise threshold of the RASC and the Shannon capacity decreases as increases. The reason for this behavior, which differs from that for increasing , is that the distances between the output signals are greater than for fixed because the signal constellation space is expanded by . Based on this result, the RASC appears to have the excellent property whereby increasing the transmission rate improves the threshold.
4.6 Threshold for Different Input Constraints
Finally, the effect of the input signal constraint is shown in Fig. 9. In this case, the optimum filters for the same and differ depending on the constraint. Interestingly, , which is an identity mapping function, is the best filter for and . We believe that the reason for this is that when the input signals are chosen over , the transition of the state due to the summation of signals is chosen over . Then, the codeword distances increase, even if the signals are mapped into the same mapping space by .
5 Numerical Results
In this section, the finite-length performance of the RASC is shown. We assume that the information signal length is , that the interleaver of the encoder is a random interleaver, that the number of iterations of the SP algorithm is 100, and that the filters are chosen as described in the previous section.
5.1 Performance of the RASC with FFT-BP
Figure 10 shows the SER of the finite-codeword-length RASC with FFT-BP decoding. The input signals are 4-QAM (), and the filters are chosen as and for and , respectively. As shown in the figure, the relationship between and is the same as that determined by the threshold analysis in the previous section. Namely, for , the performance for is superior to that for in both the waterfall and error floor regions. In contrast, for , the threshold for is inferior to that for . Interestingly, the performance in the error floor region depends on the filter but not on the number of states. Therefore, the best filter can only be chosen by finding the best threshold via MC-DE, because the overall performance is defined by the filter.
Next, we compare the performance of the RASC with that of the turbo signal codes for the same codeword length and input constellation . Due to termination, the transmission rates of the RASC and the turbo signal codes are bits per channel use (bits/c.u.) and bits/c.u., respectively. For the turbo signal codes, we assume that the indices of the feedback and feedforward filters are FB = 11 and FF = 81, respectively. This optimum filter setting was reported in the literature. The decoding algorithm is the BCJR algorithm, and the number of iterations is 25. As shown in Fig. 10, the performance of the RASC with is close to that of the turbo signal codes (within 0.2 dB, which is a negligible performance loss) in the waterfall region. Furthermore, in the error floor region, the error probability of the RASC is approximately better than that of the turbo signal codes. These results indicate that the RASC can provide performance superior to that of turbo signal codes with only a one-tap feedback filter.
5.2 Performance of the RASC with EMS
Figure 11 shows the SER performance for the given and obtained using the proposed modified EMS decoder. In this figure, full BP indicates the naive sum-product algorithm, whereby all sets of signals that satisfy the check equation are searched and used for the message updates at the check nodes. The decoder does not use the FFT-BP algorithm here, since the FFT over Eisenstein rings with cannot be represented in the multidimensional FFT form available for . The numbers of output constellation points for and are 81 and 256, respectively. The decoding complexities of BCJR, FFT-BP, and EMS are , , and , respectively. Therefore, the decoding complexity of EMS can be reduced if is chosen to be much smaller than . For example, when , the decoding complexity of EMS becomes a quarter of that of the BCJR, and when , the decoding complexity of the EMS becomes one-sixteenth of that of the BCJR and a quarter of that of FFT-BP. Therefore, in this section, and are set for , and and are set for .
Figure 11 indicates that the performance loss is only approximately 0.5 dB when and is only approximately 1.0 dB when . Furthermore, our algorithm does not degrade the error floor performance because the curves with the EMS decoder decrease until reaching the error floor of full BP. As a result, the EMS decoder can dramatically reduce the decoding complexity without a significant performance loss.
In this paper, we proposed a new state-constrained signal code based on RA codes. The proposed code has several advantages over turbo signal codes, including a simpler encoder and decoder, with approximately the same performance in the waterfall region and slightly better performance in the error floor region. Furthermore, we have found a design criterion for the filter coefficient via MC-DE. We summarize the properties of the threshold in terms of the following three parameters:
The number of basis can improve the threshold, but an excessive value of leads to threshold degradation.
An optimum repetition number exists for a given . In particular, for a large number of states, , is optimum in terms of the threshold.
The number of states can improve both the transmission rate and the threshold.
Simulation results have shown that the optimum filters for given values of , , and perform well in both the waterfall and error floor regions at short codeword lengths. The overall performance optimization of the regular RASC can thus be performed by MC-DE. Furthermore, we proposed a modified EMS decoder. Simulation results indicate that the modified EMS decoder incurs a performance loss of no more than approximately 1 dB relative to the full BP decoder, while reducing the decoding complexity to less than 25% of that of the BCJR and FFT-BP decoders. Finally, the proposed code can be easily generalized to irregular and other turbo-like codes, such as accumulate-repeat-accumulate codes and braided convolutional codes, in order to more closely approach the Shannon limit.
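The MC-DE principle used for this optimization tracks message densities empirically by sampling, rather than in closed form. As a minimal, self-contained illustration of the idea, the sketch below applies Monte Carlo density evolution to a standard regular (3,6) LDPC ensemble on the BIAWGN channel, not to the RASC itself; the function and its parameters are our own illustrative choices:

```python
import numpy as np

def mc_de_biawgn(sigma, dv=3, dc=6, n_samples=20000, iters=50, rng=None):
    """Monte Carlo density evolution for a regular (dv, dc) LDPC ensemble
    on the BIAWGN channel (all-zero codeword assumption, LLR domain).

    Message densities are represented by samples. Returns the empirical
    error probability of the variable-to-check messages after `iters`
    iterations; the noise threshold is estimated as the largest sigma
    that drives this probability to (approximately) zero.
    """
    rng = rng or np.random.default_rng(0)
    # Channel LLRs under BPSK: mean 2/sigma^2, standard deviation 2/sigma.
    ch = rng.normal(2 / sigma**2, 2 / sigma, size=n_samples)
    v = ch.copy()
    for _ in range(iters):
        # Check-node update: combine dc-1 independently resampled messages.
        t = np.tanh(np.clip(v / 2, -15, 15))
        prod = np.ones(n_samples)
        for _ in range(dc - 1):
            prod *= rng.choice(t, size=n_samples)
        c = 2 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # Variable-node update: channel LLR plus dv-1 check messages.
        v = ch + sum(rng.choice(c, size=n_samples) for _ in range(dv - 1))
    return np.mean(v < 0)

# Well below the (3,6) threshold, the error probability should be ~0.
p = mc_de_biawgn(0.7)
```

Sweeping `sigma` and locating where the returned error probability stops vanishing yields a threshold estimate; the same sampled-density machinery, with the RASC message-passing rules in place of the LDPC updates, underlies the filter design criterion described above.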
The authors are grateful to Hideki Ochiai and Tadashi Wadayama for helpful discussions and for sharing their insights. The authors also thank Brian M. Kurkoski for an introduction to lattice coding theory. This work was supported in part by JSPS Grant-in-Aid for Scientific Research (A) Grant Number 16H02345.
7 Necessary condition to satisfy the parity check equation in turbo signal codes
For simplicity, we consider the minimum encoder of turbo signal codes, which is given by
This encoder has three filter coefficients, , , and . In this setting, the parity check matrix of (29) is given by
where the left-hand side of this matrix corresponds to information signals, and the right-hand side corresponds to parity signals. Based on this matrix, the parity check equation at is given by
Therefore, the necessary conditions for , , and are given by
For , the coded signal at is given by
Thus, the mapping applied to the filters of turbo signal codes must be the identity mapping in order to satisfy the parity check equation.
- T. J. Richardson and R. L. Urbanke, “The Capacity of Low-Density Parity-Check Codes under Message-Passing Decoding,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599–618, 2001.
- S.-Y. Chung, G. D. Forney, T. J. Richardson, and R. Urbanke, “On the Design of Low-Density Parity-Check Codes Within 0.0045 dB of the Shannon Limit,” IEEE Commun. Lett., vol. 5, no. 2, pp. 58–60, 2001.
- S. ten Brink, “Convergence Behavior of Iteratively Decoded Parallel Concatenated Codes,” IEEE Trans. Commun., vol. 49, no. 10, pp. 1727–1737, 2001.
- T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, “Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 619–637, 2001.
- C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes,” in Proc. of IEEE ICC 1993, vol. 2, 1993, pp. 1064–1070.
- G. Ungerboeck, “Channel coding with multilevel/phase signals,” IEEE Trans. Inf. Theory, vol. 28, no. 1, pp. 55–67, 1982.
- G. Caire, G. Taricco, and E. Biglieri, “Bit-Interleaved Coded Modulation,” IEEE Trans. Inf. Theory, vol. 44, no. 3, pp. 927–946, 1998.
- U. Erez and R. Zamir, “Achieving 1/2 log (1+SNR) on the AWGN Channel with Lattice Encoding and Decoding,” IEEE Trans. Inf. Theory, vol. 50, no. 10, pp. 2293–2314, 2004.
- J. H. Conway and N. J. A. Sloane, Sphere packings, lattices and groups. Springer Science & Business Media, 2013, vol. 290.
- M. R. Sadeghi, A. H. Banihashemi, and D. Panario, “Low-density parity-check lattices: Construction and decoding analysis,” IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4481–4495, 2006.
- A. Sakzad, M. Sadeghi, and D. Panario, “Turbo lattices: Construction and performance analysis,” CoRR, vol. abs/1108.1873, 2011. [Online]. Available: http://arxiv.org/abs/1108.1873
- O. Shalvi, N. Sommer, and M. Feder, “Signal codes: Convolutional lattice codes,” IEEE Trans. Inf. Theory, vol. 57, no. 8, pp. 5203–5226, Aug 2011.
- F. R. Kschischang, B. J. Frey, and H. A. Loeliger, “Factor Graphs and the Sum-Product Algorithm,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 498–519, 2001.
- P. Mitran and H. Ochiai, “Parallel concatenated convolutional lattice codes with constrained states,” IEEE Trans. Commun., vol. 63, no. 4, pp. 1081–1090, 2015.
- D. Divsalar, D. Jin, and R. McEliece, “Coding theorems for turbo-like codes,” in Allerton Conference, 1998, pp. 201–210.
- M. Gorgolione, “Analysis and Design of Non-Binary LDPC Codes over Fading Channels,” Ph.D. dissertation, Cergy-Pontoise Univ., 2012.
- D. Declercq and M. Fossorier, “Decoding Algorithms for Nonbinary LDPC Codes Over GF(q),” IEEE Trans. Commun., vol. 55, no. 4, pp. 633–643, 2007.
- A. Goupil, M. Colas, G. Gelle, and D. Declercq, “FFT-Based BP Decoding of General LDPC Codes Over Abelian Groups,” IEEE Trans. Commun., vol. 55, no. 4, pp. 644–649, 2007.
- G. J. Byers and F. Takawira, “Fourier Transform Decoding of Non-Binary LDPC Codes,” in Proc. of SATNAC, Spier Wine Estate, Western Cape, South Africa, 2004.
- A. Voicila, D. Declercq, F. Verdier, M. Fossorier, and P. Urard, “Low-Complexity Decoding for Non-Binary LDPC Codes in High Order Fields,” IEEE Trans. Commun., vol. 58, no. 5, pp. 1365–1375, 2010.
- A. Bennatan and D. Burshtein, “Design and Analysis of Nonbinary LDPC Codes for Arbitrary Discrete-Memoryless Channels,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 549–583, 2006.
- G. Li, I. J. Fair, and W. A. Krzymien, “Density Evolution for Nonbinary LDPC Codes Under Gaussian Approximation,” IEEE Trans. Inf. Theory, vol. 55, no. 3, pp. 997–1015, 2009.
- B. M. Kurkoski, K. Yamaguchi, and K. Kobayashi, “Single-Gaussian Messages and Noise Thresholds for Decoding Low-Density Lattice Codes,” in Proc. of IEEE ISIT 2009, 2009, pp. 734–738.
- H. Uchikawa, B. M. Kurkoski, K. Kasai, and K. Sakaniwa, “Threshold Improvement of Low-Density Lattice Codes via Spatial Coupling,” in Proc. of ICNC 2012, 2012, pp. 1036–1040.
- M. C. Davey, “Error-Correction using Low-Density Parity-Check Codes,” Ph.D. dissertation, University of Cambridge, 2000.