Turbo Lattices: Construction and Error Decoding Performance
Abstract
In this paper a new class of lattices called turbo lattices is introduced and established. We use lattice Construction D to produce turbo lattices. This method requires a set of nested linear codes as its underlying structure, and we use turbo codes as these component codes. Therefore, a set of nested turbo codes based on nested interleavers (block interleavers) and nested convolutional codes is built. To this end, we employ both tail-biting and zero-tail convolutional codes. Using these codes along with Construction D, turbo lattices are created. Several properties of Construction D lattices and fundamental characteristics of turbo lattices, including the minimum distance, coding gain and kissing number, are investigated. Furthermore, a multi-stage turbo lattice decoding algorithm based on the iterative turbo decoding algorithm is given. We show, by simulation, that turbo lattices attain good error performance within from capacity at block length of . Also an excellent performance of only away from capacity at an SER of is achieved for size .
I Introduction
Turbo codes were first introduced by Berrou et al. [5] in 1993 and have been studied extensively since then. It has been shown [19] that these codes, with an iterative turbo decoding algorithm, can achieve very good error performance close to the Shannon capacity. There has also been interest in constructing lattices with high coding gain, low kissing number and low decoding complexity [1, 21, 28]. The lattice version of the channel coding problem is to find an n-dimensional lattice which attains good error performance for a given value of the volume-to-noise ratio (VNR) [10, 12, 31]. Poltyrev [20] suggested employing coding without restriction for lattices on the AWGN channel, that is, communicating with no power constraints. The existence of ensembles of lattices which can achieve the generalized capacity of the AWGN channel without restriction is also proved in [20]. Forney et al. [12] restate these concepts using coset codes and multilevel coset codes. At the receiver of a communication system without restriction, the main problem is to find the lattice point closest to a given received point; this is called lattice decoding. Efficient lattice decoders are known for low dimensions [14, 32].
There is a wide range of lattices applicable in communications, including the well-known root lattices [10], the recently introduced low-density parity-check lattices [21] (LDPC lattices) and the low-density lattice codes [28] (LDLC lattices). The root lattices were treated extensively in the 1980s and 1990s [10]. After the year 2000, two classes of lattices based on the primary idea of LDPC codes were established, and these types of lattices have attracted a lot of attention in recent years [1, 9, 22, 16]. Hence, constructing lattices based on turbo codes is a promising research topic.
In the present work, we borrow the idea of turbo codes and construct a new class of lattices that we call turbo lattices. In fact, the results of Forney et al. in [12] motivated us to apply Construction D to the design of turbo lattices: they proved the existence of sphere-bound-achieving lattices by means of Construction D. This leads one to use the Construction D method along with well-known turbo codes to produce turbo lattices. This is the first use of turbo codes in constructing lattices. We exploit structural properties of lattices and turbo codes to investigate and evaluate the basic parameters of turbo lattices, such as minimum distance, volume, coding gain and kissing number.
Various types of turbo codes have been constructed in terms of the properties of their constituent encoders and interleavers [19]. For example, encoders can be either block or convolutional codes, and interleavers can be deterministic, pseudo-random or random [33]. Since Construction D deals with block codes, we treat turbo codes as block codes. Therefore, it seems more reasonable to use terminated convolutional codes. Since we use both recursive and non-recursive convolutional codes, different termination methods can be applied to the component convolutional codes. Hence, we are interested in terminating trellises for both feedback [27, 34] and feedforward [19] convolutional codes. To avoid rate loss, we employ tail-biting convolutional codes for short-length turbo lattices. Zero-tail convolutional codes [27, 34] serve as building blocks of turbo codes for constructing lattices of larger size.
There are algorithms such as the generalized min-sum algorithm [21], iterative decoding algorithms [9] and the algorithm in [28] for decoding the recently introduced lattices. The basic idea behind these algorithms is to implement min-sum and sum-product algorithms and their generalizations. Since we use turbo codes to construct turbo lattices, it is more reasonable to benefit from the underlying turbo structure of these lattices. In this case, we have to relate the decoding of turbo lattices to the iterative turbo decoders [5] for turbo codes. This results in a multi-stage decoding algorithm based on iterative turbo decoders, similar to the one given in [12].
We summarize our contributions as follows.

We generalize the minimum distance formula for every Construction D lattice by removing a restricting condition on the minimum distance of its underlying codes. An upper bound for the kissing number of these lattices is also derived.

We construct nested turbo codes and establish the concept of turbo lattices. Various crucial parameters of these lattices, such as minimum distance, coding gain and kissing number, are investigated.

A multi-stage turbo lattice decoder is introduced. The error performance of turbo lattices is given and compared with that of the well-known LDPC lattices and LDLC lattices.
The present work is organized as follows. Two methods of constructing lattices, Constructions A and D, are reviewed in Section II, together with the crucial parameters used to measure the efficiency of lattices. In Section III we introduce nested interleavers in a manner that can be used to build nested turbo codes. Section IV is devoted to the construction of nested turbo codes and consequently the construction of turbo lattices. Section V is dedicated to the evaluation of the critical parameters of turbo lattices based on the properties of their underlying turbo codes. In Section VI a multi-stage turbo lattice decoding algorithm is explained. In Section VII we present simulation results. We conclude with final remarks on turbo lattices and further research topics in Section VIII.
II Background on Lattices
In order to make this work self-contained, a background on lattices is essential. The required general information about Construction A and Construction D, as well as the parameters for measuring the efficiency of lattices, is provided below.
II-A General Notation for Lattices
A discrete additive subgroup of R^n is called a lattice. Since a lattice is discrete, it can be generated by a set of linearly independent vectors in R^n; such a set is called a basis for the lattice. In the rest of this paper, we assume an n-dimensional full-rank lattice. The Euclidean norm defines a metric on R^n, and the minimum distance of a lattice is the smallest Euclidean distance between two distinct lattice points.
Let the basis vectors be the rows of a matrix; then the lattice is the set of all integer linear combinations of these rows, and the matrix is called a generator matrix for the lattice. The volume of a lattice can be defined as the square root of the determinant of the generator matrix times its transpose. The coding gain of a lattice is defined by
(1) 
where the denominator is the normalized volume of the lattice, i.e., its volume per two dimensions. The coding gain can be used as a crude measure of the performance of a lattice; for the integer lattice it equals 1 in every dimension. An uncoded system may be regarded as one that uses a constellation based on the integer lattice. Thus the coding gain of an arbitrary lattice may be considered as the gain of using a constellation based on that lattice over an uncoded system using a constellation based on the integer lattice [12]. In other words, coding gain is the saving in average energy obtained by using the lattice for transmission instead of the integer lattice [13]. Geometrically, coding gain measures the increase in density of the lattice over the integer lattice [10].
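As a numeric illustration of this definition (our own example, not from the paper), the coding gain can be computed directly from a generator matrix. Here we use the well-known checkerboard lattice D4, whose squared minimum distance is 2 and whose volume is 2:

```python
import numpy as np

# Generator matrix of the checkerboard lattice D4 (integer points with even
# coordinate sum), a standard example from the lattice literature.
B = np.array([[1, -1, 0, 0],
              [0, 1, -1, 0],
              [0, 0, 1, -1],
              [0, 0, 1, 1]], dtype=float)

n = B.shape[0]
volume = abs(np.linalg.det(B))             # V = |det B| = 2
d_min_sq = 2.0                             # shortest vectors, e.g. (1, 1, 0, 0)
coding_gain = d_min_sq / volume ** (2.0 / n)  # gamma = d_min^2 / V^(2/n)
print(volume, coding_gain)                 # 2.0, sqrt(2) ~ 1.414 (~1.51 dB)
```

The result, sqrt(2) (about 1.51 dB), matches the tabulated coding gain of D4 and is strictly larger than the coding gain 1 of the integer lattice.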
If one places an n-dimensional sphere of equal radius centered at every lattice point, then the kissing number of the lattice is the maximum number of spheres that touch a fixed sphere. The normalized kissing number of an n-dimensional lattice is defined as
(2) 
Sending points of a lattice in the absence of power constraints has been studied; this is called coding without restriction [20]. Suppose that the points of an n-dimensional lattice are sent over an AWGN channel with a given noise variance. The volume-to-noise ratio (VNR) of the lattice is defined as
(3) 
For large n, the VNR is the ratio of the normalized volume of the lattice to the normalized volume of a noise sphere, a quantity defined as the SNR in [21] and in [12].
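Under the normalization commonly used in this literature, VNR = V^(2/n) / (2·pi·e·sigma^2), where V is the lattice volume and sigma^2 the noise variance (the helper name below is ours; this is a sketch of the standard definition, not code from the paper):

```python
import math

def vnr(volume, n, sigma2):
    """Volume-to-noise ratio of an n-dimensional lattice of the given volume,
    with the common normalization VNR = V^(2/n) / (2*pi*e*sigma^2)."""
    return volume ** (2.0 / n) / (2 * math.pi * math.e * sigma2)

# At VNR = 1 (0 dB) the noise variance matches the normalized volume:
sigma2_at_0dB = 1.0 / (2 * math.pi * math.e)
print(vnr(1.0, 8, sigma2_at_0dB))   # -> 1.0
```

VNR = 1 corresponds to the generalized capacity point: reliable decoding at lower VNR is impossible for any lattice.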
Since lattices have a uniform structure, we can assume that the zero vector is transmitted and r is the received vector. Then r is a vector whose components are independently Gaussian distributed with zero mean and the channel noise variance. Hence the construction of lattices with higher coding gain and lower normalized kissing number is of interest.
II-B Lattice Constructions
There exist many ways to construct a lattice [10]. In the following we give two algebraic constructions of lattices based on linear block codes [10]. The first one is Construction A which translates a block code to a lattice. Then a review of Construction D is given. These two constructions are the main building blocks of this work.
Let a group code, i.e., a linear block code, with a given minimum distance be given. Define the Construction A lattice [10] derived from this code by:
(4) 
For a lattice obtained from Construction A, the minimum distance is
(5) 
Its coding gain is
(6) 
and its kissing number is
(7) 
where the last quantity denotes the number of codewords of minimum weight in the underlying code. These definitions and results can be generalized to a more practical and elegant lattice construction. We use a set of nested linear block codes to give a more general lattice structure named Construction D; this construction plays a key role in this work.
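Before moving to Construction D, the Construction A formulas can be checked by brute force on a toy code (our own example, not from the paper). For a binary code with minimum distance d, the lattice C + 2Z^n has squared minimum distance min(d, 4), and each of the A_d minimum-weight codewords contributes 2^d sign patterns when d < 4. The [3,2] single-parity-check code (d = 2, A_2 = 3) therefore gives squared minimum distance 2 and kissing number 4 * 3 = 12:

```python
import itertools

# [3,2] single-parity-check code: all even-weight words, minimum distance 2.
C = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]

# Enumerate Construction A points x = c + 2z in a small box around the origin.
points = set()
for c in C:
    for z in itertools.product(range(-2, 3), repeat=3):
        points.add(tuple(ci + 2 * zi for ci, zi in zip(c, z)))

norms = sorted(sum(x * x for x in p) for p in points if any(p))
d_min_sq = norms[0]
kissing = norms.count(d_min_sq)
print(d_min_sq, kissing)   # 2 12
```

The brute-force values agree with the closed-form expressions: squared minimum distance min(2, 4) = 2 and kissing number 2^2 * A_2 = 12.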
Let be a family of linear codes where for and is the trivial code such that
where denotes the subgroup generated by . For any element and for consider the vector in of the form:
Define as all vectors of the form
(8) 
where and or . An integral basis for is given by the vectors
(9) 
for and plus vectors of the form . Let us consider vectors as integral in , with components or . To be specific, this lattice can be represented by the following code formula
(10) 
It is useful to bound the coding gain of a Construction D lattice. The next theorem is cited from [4].
Theorem 1.
Let be a lattice constructed using Construction D, then the volume of is . Furthermore, if , for and or , then the squared minimum distance of is at least , and its coding gain satisfies
The above theorem gives an exact formula for the determinant of every lattice constructed using Construction D. It also provides bounds on the other important parameters of these lattices, including minimum distance and coding gain, under an extra condition on the minimum distances of the underlying nested codes [10].
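The volume statement can be checked on a hypothetical two-level example (our own; level-ordering conventions for Construction D vary across papers, and we use the common code formula x = c0 + 2*c1 + 4*z with nested binary codes C0 = [4,1,4] repetition inside C1 = [4,3,2] single-parity-check):

```python
import itertools
import numpy as np

# Integral basis: the C0 generator at scale 1, two additional C1 generators
# at scale 2, and one row of 4*I to reach full rank, as in the basis (9).
B = np.array([[1, 1, 1, 1],
              [2, 2, 0, 0],
              [0, 2, 2, 0],
              [0, 0, 0, 4]], dtype=float)

volume = abs(np.linalg.det(B))   # 2^(2n - k0 - k1) = 2^(8 - 1 - 3) = 16

# Brute-force the squared minimum distance over a small coefficient box.
d_min_sq = min(
    float(v @ v)
    for u in itertools.product(range(-2, 3), repeat=4)
    if any(u)
    for v in [np.array(u) @ B]
)
print(volume, d_min_sq)          # 16.0 4.0
```

The computed volume 16 matches 2^(2n - k0 - k1), and the squared minimum distance 4 matches min(d0, 4*d1, 16) = min(4, 8, 16) for these component codes.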
We remove this restrictive condition on the minimum distances of the underlying nested block codes and generalize those bounds to a more useful form. The resulting expressions for minimum distance and coding gain are related to the underlying codes, as we will see shortly. In addition, an upper bound on the kissing number of every lattice generated using Construction D is derived.
Theorem 2.
Let be a lattice constructed based on Construction D. Then

for the minimum distance of we have
(11) where is the minimum distance of for ;

the kissing number of has the following property
(12) where denotes the number of codewords in with minimum weight . Furthermore, if for every , then .
The proof is given in Appendix A.
This theorem provides a relationship between the performance of the lattice and the performance of its underlying codes. The kissing number of a Construction D lattice can be bounded from above in terms of the minimum distance and the number of minimum-weight codewords of each underlying nested code.
III Convolutional and Turbo Codes
Since recursive convolutional codes produce better turbo codes, we focus on tail-biting feedback convolutional codes.
III-A Terminated and Tail-Biting Convolutional Codes
Let a systematic convolutional code with given rate, constraint length and memory order be given. The termination technique for convolutional codes can be found in [19]. In this transformation from a convolutional code to a block code there is a rate loss and a change in codeword length, whereas in Construction D all codes of the nested family must have equal length. Since the termination method modifies the code length at each level, this prevents the use of terminated convolutional codes in our derivation of lattices based on Construction D. To avoid this situation, an alternative method referred to as tail-biting [27] can be used. Thus, terminated convolutional codes can only be employed to construct turbo codes that are appropriate for use with Construction A.
The tail-biting technique for feedforward convolutional codes is reported in [25, 27, 34], and an algorithm for tail-biting a feedback convolutional encoder is introduced in [15, 34]. However, tail-biting is not possible for all lengths: tail-biting of a feedback convolutional encoder is only possible for certain tail-biting lengths.
Let be a generator matrix of a systematic feedback convolutional code defined as follows
(13) 
where the entries are ratios of coprime polynomials. By means of tail-biting [25], we can associate with a systematic feedback convolutional encoder a linear block code (whose block length parameter is called the tail-biting length) with generator matrix
(14) 
where and are circulant matrices with top row of length made from and respectively for and .
Theorem 3.
Let and be as above for and . Then the block code generated by in (14) can also be generated by , where is a circulant matrix if and only if for all . In this case, we get
The proof is given in Appendix A.
We observe that it is a circulant matrix consisting of blocks of circulant submatrices, each placed in the corresponding block position and obtained by using the associated polynomial coefficients as its top row. Also, the identity matrix can be written as a block identity matrix with each of its nonzero entries replaced by an identity matrix.
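As a minimal illustration of this circulant structure (a feedforward toy encoder G(D) = [1, 1 + D] chosen by us; the paper's feedback case additionally requires the invertibility condition of Theorem 3), the tail-biting block code is generated by [I | P] with P circulant:

```python
import numpy as np

N = 5  # tail-biting length (number of information bits)

def circulant(top, N):
    # N x N binary circulant matrix whose i-th row is the top row shifted by i.
    return np.array([np.roll(top, i) for i in range(N)]) % 2

# Feedforward encoder G(D) = [1, 1 + D]: the parity polynomial is 1 + D,
# so the parity bit at time t is u_t + u_{t-1 mod N} (indices wrap around).
parity_top = np.zeros(N, dtype=int)
parity_top[0] = 1
parity_top[1] = 1
G = np.hstack([np.eye(N, dtype=int), circulant(parity_top, N)])  # [I | P]

u = np.array([1, 0, 1, 1, 0])
codeword = u @ G % 2
print(codeword)   # [1 0 1 1 0 1 1 1 0 1]
```

Unlike termination, no tail bits are appended: the code length is exactly twice the information length, so there is no rate loss.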
We close this subsection with a proposition that relates the above theorem to well-known results [29, 34] on the eligible tail-biting lengths that can be used to construct tail-biting feedback convolutional codes. For the sake of brevity, we consider only feedback convolutional codes of a fixed rate. Let a generator matrix of a systematic feedback convolutional code be defined as follows
(15) 
where for coprime polynomials and for . Without loss of generality, we assume that . If we realize this code in observer canonical form [34], then the state matrix is
(16) 
In order to encode a tail-biting code with the method described in [34], the state matrix has to be invertible. It should be noted [34] that realizing the encoder in controller canonical form or in observer canonical form leads to the same set of possible tail-biting lengths.
Proposition 4.
The proof is given in Appendix A.
III-B Parallel Concatenated Codes: The Structure of Turbo Codes
Turbo codes can be viewed as block codes by fixing their interleaver length, but they have seldom been analyzed from this point of view, except in [33]. We follow the construction of turbo codes from [5, 19] and then use them to produce a new type of lattice called the turbo lattice. We assume that an interleaver and a recursive convolutional encoder are used to construct the turbo code.
The information block length (interleaver size) has to be selected large enough to achieve performance close to the Shannon limit. Improving the minimum free distance of turbo codes is possible by designing good interleavers: interleavers shift low-weight codewords to higher-weight codewords, an effect called spectral thinning [19]. Such interleaving matches the low-weight codewords of the first encoder to high-weight parity sequences of the second encoder. More precisely, for large interleaver sizes the multiplicities of the low-weight codewords in the turbo code weight spectrum are reduced by a factor depending on the interleaver size; this reduction is called the interleaver gain. Hence, interleavers play a key role at the heart of turbo codes, and it is important for interleavers to have random-like properties [19, 33]. Boutros et al. provide almost optimal interleavers in [7].
IV Nested Turbo Codes and Turbo Lattices
We exploit a set of nested tail-biting convolutional codes and a nested interleaver along with Construction D to form turbo lattices. Terminated convolutional codes and Construction A are also employed for the same purpose. An explicit explanation of these two approaches is given next.
IV-A Constructing Nested Turbo Codes
Consider a turbo code with two component codes generated by the generator matrix of a convolutional code and a random interleaver. Assume that both encoders are systematic feedback convolutional encoders. Every interleaver can be represented by a permutation matrix which has exactly one 1 in each row and each column. It is easy to see that the generator matrix of the turbo code can be written as follows
(17) 
where the parity block is a submatrix of the tail-biting generator matrix, consisting only of its parity columns, and the systematic block is the identity matrix of the information size.
The representation (17) can be extended to construct a generator matrix for a parallel concatenated code with several branches. Each branch has its own interleaver, with a matrix representation, and a recursive encoder. Assume that all encoders are the same convolutional encoder and that the block of information bits has a fixed length. Then the corresponding generator matrix of this turbo code is
(18) 
where the blocks are as defined above.
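A small sketch of the systematic generator matrix in (17) for two branches (toy sizes; the circulant parity block P below is a hypothetical stand-in of our choosing, not a matrix from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4                                  # information length (toy size)

# Hypothetical parity part P of a tail-biting component encoder (circulant
# for the parity polynomial 1 + D, as in the earlier sketch).
P = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]])

perm = rng.permutation(k)
Pi = np.eye(k, dtype=int)[perm]        # interleaver as a permutation matrix

# Systematic parallel concatenation, as in (17): G = [ I | P | Pi @ P ].
G = np.hstack([np.eye(k, dtype=int), P, (Pi @ P) % 2])
u = np.array([1, 0, 1, 1])
codeword = u @ G % 2
print(G.shape)                          # (4, 12): rate k / 3k = 1/3
```

The first k columns reproduce the information bits unchanged (systematic form); the second encoder sees the interleaved information sequence, which is exactly what multiplying the parity block by the permutation matrix expresses.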
In order to design a nested set of turbo codes, a nested interleaver is essential. Hence, the concept of a nested interleaver has to be introduced.
Definition 5.
The interleaver of size is a nested interleaver if the following conditions hold

,

for every , if , then
Such an interleaver is called a nested interleaver.
Example 6.
Let . The permutation
is a nested interleaver because and .
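The nested property of Definition 5 is easy to test programmatically: a permutation is nested with respect to a prefix size if it maps that prefix onto itself. A small checker (our own helper, with 0-based indexing):

```python
def is_nested(pi, sizes):
    """Check the nested-interleaver property: pi is a permutation of
    range(len(pi)), and for each prefix size n in sizes, pi must map
    {0, ..., n-1} onto itself."""
    return all(set(pi[:n]) == set(range(n)) for n in sizes)

pi = [2, 0, 1, 5, 3, 4]                        # hypothetical size-6 permutation
print(is_nested(pi, [3, 6]))                   # True: pi[:3] permutes {0,1,2}
print(is_nested([3, 0, 1, 5, 2, 4], [3, 6]))   # False: 3 escapes the prefix
```

Because each prefix is mapped onto itself, restricting the turbo encoder to the first rows of the generator matrix yields a well-defined subcode, which is exactly what Definition 7 exploits.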
The following nested turbo codes are appropriate to use in both Construction A and Construction D for producing turbo lattices.
Definition 7.
Let be a parallel concatenated convolutional code with two equivalent systematic convolutional codes generated by . Let be the generator matrix of tailbiting of , and be the interleaver of size with the nested property that is used to construct a turbo code . Then is as of (17). Define a set of turbo codes
(19) 
In fact, a generator matrix of size is a submatrix of consisting of the first rows of for every .
Example 8.
Consider a systematic convolutional code with the following generator matrix
The matrix is equivalent to given by
where and also , and . Let , then and , for . One can use the Euclidean algorithm to find . Therefore, we get
Hence,
where is a circulant matrix of size defined by top row , . For instance
Assume that is the last columns of , thus
Also let us suppose that is a nested interleaver constructed by means of the permutation matrix
where is another permutation matrix of size . Then , a generator matrix for our nested turbo code is
Now we have such that a generator matrix for is consisting of the first rows and a generator matrix for is consisting of the first rows of .
We are now prepared to formulate the basic characteristics of nested turbo codes, studying the structural properties of a set of nested turbo codes in terms of the properties of its subcodes. Consider a nested interleaver and let
be a set of nested turbo codes constructed as above. Then the minimum distances of the codes are ordered according to the nesting, and the rate of each code follows from its number of generator rows. The rate of each code can be increased because there are all-zero columns in the generator matrix; in fact, these columns can be punctured out to avoid generating zero bits, but we may also keep them. Since our aim is to produce turbo lattices and measure their performance, the nominal rate is more useful than the actual rate of the turbo lattices.
The upcoming theorem reveals that the rates of nested turbo codes remain unchanged when the interleaver sizes are increased. The only impact is on the minimum distance (via interleaver gain and spectral thinning), on the coding gain (via a change in the numerator, not the denominator, of the formula) and on the kissing number of the turbo lattices. These results are presented more explicitly in Section V.
Theorem 9.
Let be an nested interleaver and
be a set of nested turbo codes constructed as above. If we increase by scaling the tailbiting length and parameters ’s in the construction of the generator matrix of the turbo codes and induced set of nested turbo codes by a scale factor of , then the rates of the resulting nested turbo codes remain intact.
The proof is given in Appendix A.
IV-B Interleaver Design
Interleavers play an important role in turbo codes [7, 24, 33]. Consequently, interleavers are also a key ingredient of turbo lattices. They should have random-like properties and avoid certain patterns, so as to induce a good minimum distance. For a turbo code ensemble using the uniform interleaver technique, one can show that turbo codes are good in the following sense [17]: the minimum distance of parallel concatenated codes with several parallel branches and recursive component codes grows with the number of branches [18], and the average maximum-likelihood block error probability approaches zero [17]. Since an increase in coding gain and a decrease in normalized kissing number are directly related to an increase in minimum distance, it is reasonable to use more than two branches.
We observe that to produce nested turbo codes, an interleaver which satisfies the nested property is necessary. In other words, we put two conditions in Definition 5 in a manner that, along with Definition 7, each determines a turbo code.
A method called appending has been introduced to construct nested interleavers, and a detailed example is provided in [25]. The append operation preserves deterministic and pseudo-random properties [25]. Indeed, if we append deterministic interleavers, the result is a deterministic interleaver that can be defined by a piecewise function with a correspondingly bounded number of cases.
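The appending operation can be sketched as follows (a minimal version of the idea in [25]; the function name is ours): the first interleaver acts on the prefix and the second on the shifted remainder, so every prefix is mapped onto itself and the result is nested by construction.

```python
def append_interleavers(p1, p2):
    """Append two interleavers (0-based permutations): the first len(p1)
    positions are permuted by p1 and the remaining positions by p2 shifted
    past the prefix. The result is a nested interleaver by construction."""
    n1 = len(p1)
    return list(p1) + [n1 + i for i in p2]

pi = append_interleavers([2, 0, 1], [1, 0, 2])
print(pi)   # [2, 0, 1, 4, 3, 5] -- maps {0, 1, 2} onto itself, hence nested
```

Appending deterministic interleavers clearly yields a deterministic interleaver, matching the preservation property stated above.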
The general picture of a nested interleaver can be viewed as a block interleaver with permutation matrix
where is an matrix, and . The –th turbo code is constructed by the first rows of .
IV-C Turbo Lattices
Next, turbo codes and their nested versions are used to derive lattices using Construction D or Construction A. We use the codes of (19) and their corresponding generator matrices as the nested codes needed to produce a lattice based on Construction D. A generator matrix for such a lattice can be derived from a set of generator vectors of the largest underlying code as in (9): the blocks of rows corresponding to successive levels are multiplied by the corresponding powers of 2. The resulting matrix, together with suitably scaled integer rows, forms an integral basis for the lattice.
Definition 10.
A lattice constructed using Construction D is called a turbo lattice if its largest underlying code is a turbo code .
It is easy to verify that we can form a turbo lattice using a turbo code with generator matrix as in (17). If the number of levels of our construction is larger than one, then we have to use turbo codes that come from tail-biting convolutional codes; otherwise we have the freedom to use a turbo code built from either terminated or tail-biting convolutional codes.
Example 11.
Let the codes be as in the previous example. To obtain a generator matrix of the lattice, we multiply the rows with the indicated indices by the corresponding powers of 2. The resulting matrix, together with additional scaled integer rows, produces a generator matrix for the lattice. Hence, a generator matrix for the produced turbo lattice is
where
is the matrix of an nested interleaver. Each is another permutation, , coming from an interleaver of size . In other words, the interleaver corresponding to can be constructed by the appending method with underlying interleavers , .
The above example used a set of nested turbo codes whose component codes are tail-biting recursive convolutional codes, combined with nested interleavers. We can also simply build a turbo code from an interleaver and two terminated convolutional codes; in this case, Construction A may be used to obtain a turbo lattice. An example of a lattice constructed using Construction A from a turbo code with terminated convolutional constituent codes is given next.
Example 12.
Let
be the generator matrix of a recursive convolutional code. With this choice of parameters we get a turbo code of a fixed rate. If we use the terminated version of these recursive convolutional codes along with an interleaver of suitable size, a linear block code is obtained. Now consider this turbo code as the base code of Construction A to induce a turbo lattice. The minimum distance, coding gain and kissing number of this turbo lattice are closely related to the minimum distance of its underlying turbo code. Since the minimum distance of the turbo code can be increased or decreased by the choice of interleaver, the performance analysis of this turbo lattice relies on the choice of its interleaver.
V Parameter Analysis and Design Criteria of Turbo Lattices
In this section some fundamental properties of turbo lattices, such as minimum distance, coding gain and kissing number, are studied. These properties make it possible to use information about the underlying turbo codes to theoretically assess the efficiency of the constructed turbo lattices.
V-A Minimum Distance, Coding Gain and Kissing Number of Turbo Lattices
We now examine turbo lattices more closely. The next theorem provides formulas and an inequality for the performance measures of a turbo lattice constructed via Construction D.
Theorem 13.
Let be a turbo lattice constructed following Construction D with nested turbo codes
as its underlying linear block codes with parameters and rate , for . Then the minimum distance of satisfies
(20) 
The coding gain is
(21) 
and for the normalized kissing number of we have
(22) 
where denotes the number of codewords in with minimum weight .
The proof is given in Appendix A.
Remark 14.
If the interleaver size and its related parameters are increased by a given factor, then the dimension of the constructed lattice increases by the same factor. As mentioned before, by this modification, and due to interleaver gain and spectral thinning, the minimum distances of the nested turbo codes increase slightly or remain unchanged. This increase cannot be faster than logarithmic in the code length [8]. Thus the corresponding term in (22) decreases. Also, the number of minimum-weight codewords in these turbo codes decreases. Hence the normalized kissing number in (22) decreases.
Now let us put the above discussion together. We can control the (increase in) coding gain of the constructed turbo lattice only by setting up a good interleaver and adjusting its size. Furthermore, if one produces a set of nested turbo codes
where such that or , then we get the following bounds
and
This setting clearly results in a (possibly) larger, or in the worst case equal, minimum distance, a better coding gain, and a (possibly) lower, or in the worst case equal, kissing number when compared with turbo lattices that come from the parallel concatenation of terminated recursive convolutional codes and Construction A. However, the geometric and layered properties of a multi-level Construction D turbo lattice make its decoding algorithm more complex.
According to the discussion above, these lattices offer advantages from several points of view. To be more specific, turbo lattices are generated by Construction D using a nested set of block turbo codes whose constituent codes are tail-biting recursive convolutional codes. Thus, this class provides a natural link between the block-code and convolutional-code approaches. The tail-biting method gives us the opportunity to combine the benefits of recursive convolutional codes (such as memory) with the advantages of block codes. It is worth pointing out that the nested property of the turbo codes induces a higher coding gain; see (21). Also, the excellent performance of the parallel concatenation of systematic feedback convolutional codes implies efficient turbo lattices with good fundamental parameters.
V-B Guidelines for Choosing Suitable Parameters
Since our first priority in designing turbo lattices is to obtain lattices with high coding gain, selecting an appropriate code length for the underlying turbo codes is crucial. Guidelines for choosing tail-biting convolutional codes that are especially suited to parallel concatenated schemes are given in [34]. The authors of [34] also tabulate tail-biting convolutional codes of various rates and lengths, together with the minimum distances of the associated turbo codes. We illustrate the importance of the constituent code parameters and the code length of the underlying turbo codes via a detailed example below.
Assume that a tailbiting version of a systematic recursive convolutional code of rate with memory and generator matrix
is used to form a nested turbo code. The resulting turbo code has the stated rate and, based on [34], the stated minimum distance for the given information block length. Now consider only the first row of the generator matrix. The component encoders then have generator matrices (after puncturing out the zero bits)
A block turbo code which uses as its constituent codes has rate and according to the information in [34], the minimum distance of this code is for information block length of . For instance suppose that a block of information bits of size is used. Since is a rate block turbo code, the lattice points are in . Therefore, a square generator matrix of size for this turbo lattice can be formed following the approach in Example 11. Hence, is
where
of size is a nested interleaver. In other words is an interleaver for and is an interleaver of size for . Now the fundamental parameters of this turbo lattice constructed with levels of Construction D can be found. Since and , Theorem 13 implies that
and the coding gain of satisfies
that is, in decibels, . Also the kissing number of is bounded above by
Since and , the summation in the above inequality disappears and we get or equivalently .
V-C Other Possible Design Criteria
The results in [12] provide a general guideline for the choice of the code rates, which is critical in the construction of any capacity-achieving lattice using Construction D. Hence a completely different line of study could be pursued, using information-theoretic tools, to modify the above design criteria for turbo lattices.
VI Decoding Algorithm
There exist many decoding algorithms for finding the closest point of a lattice [10, 32]. Similar expressions, algorithms and theorems can be found in [11]; in fact, in [11] Forney uses a code formula along with a multi-stage decoding algorithm to solve the closest vector problem (CVP) for a lattice based on Construction D.
VI-A A Multi-Stage Turbo Lattice Decoder
In the previous sections we used a set of nested turbo codes to produce a turbo lattice. Now our aim is to solve the closest lattice point problem for it. Assume that a lattice vector is sent over an unconstrained AWGN channel with a given noise variance and a vector is received. A closest point search algorithm attempts to compute the lattice vector minimizing the Euclidean distance to the received vector.
The excellent performance of turbo codes is due to the well-known iterative turbo decoder [5]. One can generalize the multi-stage soft-decision decoding algorithm of [12] to decode lattices based on Construction D. A simple extension to turbo lattices is presented next.
As shown in Section II, every lattice constructed using Construction D benefits from a layered code structure built from a set of nested linear block codes; for turbo lattices these are nested turbo codes. The goal is to use one iterative turbo decoding algorithm per level of the construction, matched serially. The idea is borrowed from the multi-stage decoding algorithm presented in [11].
One can restate (10) as
(23) 
The above representation of the lattice states that every lattice point can be represented by
(24) 
where and , .
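The peeling structure implied by this representation can be sketched as follows (a hypothetical toy version of our own: the stage decoder here is a trivial repetition-code decoder standing in for the iterative turbo decoders, and the code formula Lambda = C_0 + 2*C_1 + ... + 2^a Z^n is assumed):

```python
import numpy as np

def multistage_decode(r, decoders, a):
    """Sketch of a multistage decoder for Lambda = C_0 + 2 C_1 + ... + 2^a Z^n.
    decoders[i](y) must return a 0/1 codeword of C_i estimated from y; in a
    turbo lattice this stage would be an iterative turbo decoder."""
    r = np.asarray(r, dtype=float)
    x = np.zeros_like(r)
    for i in range(a):
        c = decoders[i](r)           # stage-i codeword decision
        x = x + (2 ** i) * c
        r = (r - c) / 2.0            # peel off this level and rescale
    z = np.round(r)                  # remaining integer (2^a Z^n) component
    return x + (2 ** a) * z

# Toy stage decoder: length-2 repetition code {00, 11}, decided from the
# distances of each coordinate to the nearest even and odd integers.
def rep_decoder(y):
    d_even = np.abs(y - 2 * np.round(y / 2))
    d_odd = np.abs(y - (2 * np.round((y - 1) / 2) + 1))
    return np.ones_like(y) if d_odd.sum() < d_even.sum() else np.zeros_like(y)

print(multistage_decode([0.9, 1.2], [rep_decoder], a=1))   # -> [1. 1.]
```

Each stage only needs a decoder for one component code; the divisions by 2 expose the next level's code on the residual, which is what makes the serial matching of turbo decoders possible.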
Any soft-input soft-output (SISO) or soft-input hard-output (SIHO) decoding algorithm for the turbo code may be used as a decoding algorithm for the lattice, as follows. Given any received vector, let us denote the closest even and odd integers to each coordinate accordingly. Then one can compute (where the upper signs are taken if