# Decoding by Sampling — Part II: Derandomization and Soft-Output Decoding

###### Abstract

In this paper, a derandomized algorithm for sampling decoding is proposed to achieve near-optimal performance in lattice decoding. By setting a probability threshold to sample candidates, the whole sampling procedure becomes deterministic, which brings considerable performance improvement and complexity reduction over randomized sampling. Moreover, the upper bound on the sample size that corresponds to near-maximum-likelihood (ML) performance is derived. We also find that the proposed algorithm can be used as an efficient tool to implement soft-output decoding in multiple-input multiple-output (MIMO) systems. An upper bound on the sphere radius in list sphere decoding (LSD) is derived; based on it, we demonstrate that the derandomized sampling algorithm is capable of achieving near-maximum a posteriori (MAP) performance. Simulation results show that near-optimum performance can be achieved with a moderate sample size in both lattice decoding and soft-output decoding.

## I Introduction

As one of the core problems of lattices, the closest vector problem (CVP) has wide applications in number theory, cryptography, and communications. In [1], the lattice reduction technique was introduced to solve the CVP approximately. Its key idea is to replace the original lattice basis by an equivalent one with shorter vectors, which greatly improves the performance of suboptimal decoding schemes like successive interference cancellation (SIC). Since then, a number of improved decoding schemes based on lattice reduction have been proposed [2, 3, 4, 5]. In multiple-input multiple-output (MIMO) communications, it has been shown in [6] that minimum mean-square error (MMSE) decoding based on lattice reduction achieves the optimal diversity-multiplexing trade-off. However, the performance gap between maximum-likelihood (ML) decoding and lattice-reduction-aided decoding is still substantial, especially in high-dimensional systems [7, 8].

On the other hand, in order to achieve near-capacity performance over MIMO channels, bit-interleaved coded modulation (BICM) and iterative detection and decoding (IDD) are well accepted, where the extrinsic information calculated by the a posteriori probability (APP) detector is taken into account to produce the soft decisions [9]. As the key ingredient of IDD receivers, the APP calculation is usually performed by computing a log-likelihood ratio (LLR) value via the maximum a posteriori (MAP) algorithm, whose complexity increases exponentially with the number of transmit antennas and the constellation size. In [9], a modified sphere decoding (SD) algorithm referred to as list sphere decoding (LSD) was given. By resorting to a list of lattice points within a certain sphere radius, it achieves an approximation of the MAP performance while maintaining affordable complexity. However, the exponentially increasing complexity remains a serious problem in LSD, especially for high-dimensional systems. Based on LSD, a number of approaches resorting to lattice reduction were proposed to further reduce the complexity burden or improve the performance [10, 11, 12, 13, 14]. Unfortunately, none of them gives an explicit sphere radius for which the decoder approaches near-MAP performance, leaving this an open question. In [15], an LSD-based probabilistic tree pruning algorithm was proposed with a lower bound constraint on the sphere radius. However, to fix that initial sphere radius, sphere decoding is still required as a preprocessing stage, making it impractical in high dimensions.

Recently, randomized sampling decoding has been proposed in [16] to narrow the gap between lattice-reduction-aided decoding and sphere decoding. As a randomized version of SIC, it applies Klein’s sampling technique [17] to randomly sample lattice points from a Gaussian-like distribution and chooses the closest one among all the samples. However, because of randomization, there are two inherent issues in random sampling. One is inevitable repetitions in the sampling process leading to unnecessary complexity, while the other one is inevitable performance loss since some lattice points can be missed during the sampling. Although Klein mentioned a derandomized algorithm very briefly in [17], it does not seem to allow for an efficient implementation. In [16], the randomized sampling algorithm was also extended to soft-output decoding in MIMO systems. Although it could achieve remarkable performance gain with polynomial complexity, it still suffers from these two issues.

In this paper, to overcome these two problems caused by randomization, we propose a new kind of sampling algorithm referred to as derandomized sampling decoding. With a sample size set initially, candidate points are sampled deterministically according to a threshold we define. As randomization is removed, derandomized sampling decoding shows great potential in both performance and complexity. To further exploit it, we derive its optimum decoding radius, defined in the sense of bounded distance decoding (BDD) as the sphere radius within which a lattice point is guaranteed to be decoded correctly. Furthermore, the upper bound on the sample size with respect to near-ML performance is given; by varying the sample size, the decoder enjoys a flexible trade-off between performance and complexity in lattice decoding.

We then extend the derandomized sampling algorithm to soft-output decoding in MIMO systems. Since the randomization in the sampling is removed, it operates as an approximation scheme like LSD but generates the candidate list by sampling, which is more efficient and easier to implement. Although sampling is performed over the entire lattice, lattice points with large sampling probabilities are quite likely to be sampled, which means the final candidate list tends to be comprised of lattice points around the closest lattice point. The upper bound on the sphere radius in LSD is also derived. Then, based on the proposed derandomized sampling algorithm, the trade-off between performance and complexity in soft-output decoding is established by adjusting the sample size.

The rest of this paper is organized as follows. Section II presents the system model and briefly reviews the randomized sampling algorithm in lattice decoding. In Section III, the derandomized sampling algorithm is proposed, followed by performance analysis and optimization. In Section IV, the proposed algorithm is extended to soft-output decoding. Simulation results are presented and evaluated in Section V. Finally, Section VI concludes the paper.

Notation: Matrices and column vectors are denoted by upper and lowercase boldface letters, and the transpose, inverse, pseudoinverse of a matrix by and , respectively. We use for the th column of the matrix , for the entry in the th row and th column of the matrix . denotes rounding to the integer closest to . If is a complex number, rounds the real and imaginary parts separately. Finally, in this paper, the computational complexity is measured by the number of arithmetic operations (additions, multiplications, comparisons, etc.).

## II Preliminaries

### II-A Sampling Decoding

Consider the decoding of a real-valued system. The extension to the complex-valued system is straightforward [16]. Let denote the transmitted signal taken from a constellation . The corresponding received signal is given by

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n} \qquad (1)$$

where $\mathbf{H}$ is a full column-rank matrix of channel coefficients and $\mathbf{n}$ is the noise vector with zero mean and variance $\sigma^2$.

Given the model in (1), ML decoding is shown as follows:

$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x} \in \mathcal{X}} \|\mathbf{y} - \mathbf{H}\mathbf{x}\|^{2} \qquad (2)$$

where $\|\cdot\|$ denotes the Euclidean norm. The vector $\mathbf{H}\mathbf{x}$ can be viewed as a point of the lattice generated by $\mathbf{H}$, and ML decoding corresponds to solving the CVP in that lattice. In practice, ML decoding is performed by sphere decoding; due to its exponential complexity, lattice-reduction-aided decoding is often preferred for its acceptable complexity.

In SIC decoding (also known as Babai’s nearest plane algorithm), after QR-decomposition of the channel matrix , the system model in (1) becomes

$$\mathbf{y}' = \mathbf{Q}^{T}\mathbf{y} = \mathbf{R}\mathbf{x} + \mathbf{Q}^{T}\mathbf{n} \qquad (3)$$

where $\mathbf{Q}$ is an orthogonal matrix and $\mathbf{R}$ is an upper triangular matrix. At each decoding level $i$, the pre-detection signal $\tilde{x}_i$ is calculated as

$$\tilde{x}_i = \frac{y'_i - \sum_{j=i+1}^{n} r_{i,j}\,\hat{x}_j}{r_{i,i}} \qquad (4)$$

where the decision $\hat{x}_i$ is obtained by rounding $\tilde{x}_i$ to the nearest integer as

$$\hat{x}_i = \lfloor \tilde{x}_i \rceil. \qquad (5)$$
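As a concrete illustration, the SIC procedure of (3)-(5) can be sketched in a few lines of numpy; the function name and the test values below are illustrative, not taken from the paper:

```python
import numpy as np

def sic_decode(H, y):
    """Babai's nearest-plane (SIC) decoding: QR-decompose H, then
    round level by level from the last entry to the first,
    cancelling already-decided interference as in (4)-(5)."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    yp = Q.T @ y                       # y' = Q^T y = R x + Q^T n
    x_hat = np.zeros(n)
    for i in range(n - 1, -1, -1):     # decoding levels n, ..., 1
        z = (yp[i] - R[i, i + 1:] @ x_hat[i + 1:]) / R[i, i]
        x_hat[i] = np.round(z)         # hard decision: nearest integer
    return x_hat.astype(int)
```

For a noiseless observation of an integer vector, the routine recovers it exactly; with noise it returns Babai's nearest-plane point, which may differ from the ML solution.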

Different from SIC decoding, in randomized sampling decoding [16], the decision is generated randomly from the 2-integer set centered at the pre-detection signal:

(6) |

where function denotes the random rounding.

Based on Klein’s sampling algorithm [17], the probability of returning a given integer from the 2-integer set is calculated from the following discrete Gaussian distribution

(7) |

where

(8) |

and . Note that this is a conditional probability, because the selections of the previous entries are also taken into account in its calculation. As for the parameter which affects the variance of the sampling probabilities, Klein chose one value, while [16] gave a better choice for the randomized sampling algorithm, in which the parameter related to the sample size follows

(9) |

The decisions are generated level by level, and a candidate lattice point is obtained once all the entries have been generated. It has been demonstrated in [16] that the probability of a vector being sampled (also known as its sampling probability) is lower bounded by

(10) |

By repeating this sampling procedure a number of times equal to the sample size, a candidate list of lattice points is obtained, as shown in Fig. 1, and the closest one in Euclidean norm is chosen as the decoding output.
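The per-level random rounding can be sketched as follows; the constant `c`, standing in for the level-dependent coefficient of the discrete Gaussian in (7)-(8), and the injectable `rng` argument are assumptions of this sketch:

```python
import math
import random

def randomized_round(z, c, rng=random.random):
    """Randomly round z to one of its two nearest integers, with
    probabilities following the discrete-Gaussian shape of (7).
    A larger c concentrates the distribution, so the rounding
    behaves more and more like deterministic rounding (SIC)."""
    lo = math.floor(z)
    p_lo = math.exp(-c * (z - lo) ** 2)
    p_hi = math.exp(-c * (z - lo - 1) ** 2)
    p_lo /= p_lo + p_hi                # normalize over the 2-integer set
    return lo if rng() < p_lo else lo + 1
```

Calling this per level in place of plain rounding, and repeating the whole pass, yields the candidate list of the randomized sampler.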

However, because the samplings are random and independent of each other, lattice points are sampled according to the probability distribution alone, which results in two inherent problems in random sampling. On one hand, inevitable sample repetitions in the final candidate list incur unnecessary complexity. The performance also degrades because of these repetitions, since most of the samplings are spent on lattice points with large sampling probabilities. On the other hand, lattice points face the risk of being missed during the sampling, especially those assigned small probabilities on the early decoding levels, leading to inevitable performance loss. In fact, to ensure lattice points are sampled with reasonable probability, one has to increase the sample size, which in turn produces more sampling repetitions. Therefore, both the efficiency and the performance of randomized sampling suffer greatly from the randomization.

### II-B Soft-Output Decoding

In order to achieve near-capacity performance with low complexity in MIMO-BICM systems, iterative detection and decoding (IDD), proposed in [9], has attracted much attention recently; it improves performance by iteratively exchanging extrinsic information between the MIMO detector and the soft-input soft-output (SISO) decoder.

As shown in Fig. 2, the extrinsic information is calculated by the MIMO detector based on the channel observation and the a priori information (API) of the transmitted bits provided by the SISO decoder. It is then passed through the deinterleaver to become the API of the SISO decoder, which computes new extrinsic information to feed back to the MIMO detector. Specifically, the extrinsic information in soft-output decoding is calculated through the computation of the posterior LLR for each information bit associated with the transmitted signal, which is given as

$$L(b_k \mid \mathbf{y}) = \ln \frac{P[b_k = 1 \mid \mathbf{y}]}{P[b_k = 0 \mid \mathbf{y}]} \qquad (11)$$

where is the -th information bit in , . Here, represents the number of bits per constellation symbol and contains information bits in all. Through the exchange of extrinsic information in each iteration, the performance of soft-output decoding improves gradually and we have

(12) |

where denotes API of each transmitted bit in

(13) |

and is the set of indices with

(14) |

In the absence of API, we suppose all the bits have the same probability of being 0 or 1 before the channel output is observed. Then, for simplicity, the L-value in (11) becomes [9, 18]

$$L(b_k \mid \mathbf{y}) = \ln \frac{\sum_{\mathbf{x}:\, b_k = 1} \exp\left(-\frac{1}{2\sigma^2}\|\mathbf{y} - \mathbf{H}\mathbf{x}\|^2\right)}{\sum_{\mathbf{x}:\, b_k = 0} \exp\left(-\frac{1}{2\sigma^2}\|\mathbf{y} - \mathbf{H}\mathbf{x}\|^2\right)} \qquad (15)$$

The straightforward way to calculate the L-value in (15) is the MAP algorithm, which computes sums containing an exponential number of terms. Due to the exponentially increasing complexity of MAP, one has to resort to approximations to reduce the complexity.

As one of the approximation schemes, the Max-Log approximation replaces each sum in (15) by its largest term [19, 20]:

$$L(b_k \mid \mathbf{y}) \approx \frac{1}{2\sigma^2}\left(\min_{\mathbf{x}:\, b_k = 0} \|\mathbf{y} - \mathbf{H}\mathbf{x}\|^2 - \min_{\mathbf{x}:\, b_k = 1} \|\mathbf{y} - \mathbf{H}\mathbf{x}\|^2\right) \qquad (16)$$

However, to obtain the largest terms in (16), sphere decoding is applied, which incurs exponentially increasing complexity. To achieve polynomial complexity, suboptimal hard decoding schemes like SIC are used to solve those maximization problems approximately. Unfortunately, the decoder performance is poor even with the help of the lattice reduction technique.

As another approximation scheme, list sphere decoding (LSD), proposed in [9], restricts the sums in (15) to a much smaller candidate set, using sphere decoding with a certain radius to perform the candidate admission as follows

$$\mathcal{L} = \left\{\, \mathbf{x} : \|\mathbf{y} - \mathbf{H}\mathbf{x}\|^2 \leq d^2 \,\right\} \qquad (17)$$

Here, $d$ denotes the sphere radius; a larger $d$ means a better approximation but also a higher computational complexity. Then, the calculation of the L-value in (15) can be written as

$$L(b_k \mid \mathbf{y}) \approx \ln \frac{\sum_{\mathbf{x} \in \mathcal{L}:\, b_k = 1} \exp\left(-\frac{1}{2\sigma^2}\|\mathbf{y} - \mathbf{H}\mathbf{x}\|^2\right)}{\sum_{\mathbf{x} \in \mathcal{L}:\, b_k = 0} \exp\left(-\frac{1}{2\sigma^2}\|\mathbf{y} - \mathbf{H}\mathbf{x}\|^2\right)} \qquad (18)$$

Although the lattice reduction technique can be applied to reduce its complexity, LSD schemes still suffer from a high complexity cost due to the application of sphere decoding. In particular, unlike finding the closest lattice point in lattice decoding, sphere decoding in LSD must enumerate all the lattice points within a constant sphere radius. Additionally, since the selection of the sphere radius affects the performance of LSD, the upper bound on the radius at which LSD achieves near-MAP performance remains an open question.
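Once a candidate list is available, the list-restricted L-value computation of (18) is straightforward; the following sketch assumes both bit hypotheses occur in the list (a practical LSD must clip or default the LLR when one hypothesis is missing), and the array layout is ours, not the paper's:

```python
import numpy as np

def list_llr(H, y, cand_x, cand_bits, sigma2):
    """LLR computation restricted to a candidate list, as in (18).

    cand_x:    (L, n) array, candidate transmitted vectors.
    cand_bits: (L, B) 0/1 array, bit labels of the candidates.
    """
    # log-domain metric  -||y - Hx||^2 / (2 sigma^2)  per candidate
    metrics = -np.sum((y - cand_x @ H.T) ** 2, axis=1) / (2.0 * sigma2)
    llrs = np.empty(cand_bits.shape[1])
    for k in range(cand_bits.shape[1]):
        on = metrics[cand_bits[:, k] == 1]
        off = metrics[cand_bits[:, k] == 0]
        # log-sum-exp of each hypothesis, then their difference
        llrs[k] = (np.log(np.sum(np.exp(on - on.max()))) + on.max()
                   - np.log(np.sum(np.exp(off - off.max()))) - off.max())
    return llrs
```

The log-sum-exp shift keeps the computation numerically stable when the metrics are strongly negative, which is common at low noise variance.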

## III Derandomized Sampling Decoding

In this section, we propose a derandomized sampling algorithm to solve the aforementioned problems in randomized sampling decoding, namely, the repetition and the missing of certain lattice points. Specifically, the sampling procedure of the derandomized sampling algorithm is performed level by level as follows:

At each decoding level, the sample size is allocated among the candidate integers in proportion to their sampling probabilities, and all the integers whose allocated size meets the threshold are deterministically sampled. Note that the allocated size is not necessarily an integer any more. For each sampled integer, after updating the size, sampling continues from the next level in the same way. Note that when the size equals 1, derandomized sampling decoding performs the same as SIC decoding by always selecting the integer with the largest probability. Hence, for branches whose size falls to 1, SIC is applied directly to obtain a candidate lattice point. Finally, among all the candidate lattice points, the closest one is selected as the solution. By performing the sampling based on the threshold at each decoding level, the whole sampling process becomes deterministic. The risk of lattice points being missed during the sampling is greatly reduced, which means the probability of sampling the closest lattice point is improved.
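The level-by-level procedure above can be sketched as a small recursion. This is a sketch under assumptions: the discrete-Gaussian constant `c` per level and the exact fallback rule for branches that miss the threshold are our simplifications, not the authors' specification:

```python
import math
import numpy as np

def derand_sample(R, yp, K, c=1.0):
    """Derandomized sampling over an upper-triangular system yp = R x.

    K is the nominal sample size. On each level the remaining budget
    is split between the two nearest integers in proportion to their
    discrete-Gaussian probabilities; a branch is expanded iff its
    share is at least 1, otherwise the level falls back to SIC
    (round to the most probable integer)."""
    n = R.shape[0]
    cands = []

    def expand(level, x, budget):
        if level < 0:                              # a full candidate is formed
            cands.append(x.copy())
            return
        z = (yp[level] - R[level, level + 1:] @ x[level + 1:]) / R[level, level]
        lo = math.floor(z)
        w = {q: math.exp(-c * (z - q) ** 2) for q in (lo, lo + 1)}
        tot = w[lo] + w[lo + 1]
        kept = [(q, budget * w[q] / tot) for q in w if budget * w[q] / tot >= 1.0]
        if not kept:                               # threshold missed: plain SIC
            kept = [(max(w, key=w.get), 1.0)]
        for q, sub in kept:
            x[level] = q
            expand(level - 1, x, sub)

    expand(n - 1, np.zeros(n), float(K))
    best = min(cands, key=lambda v: np.linalg.norm(yp - R @ v))
    return best.astype(int), len(cands)
```

With `K = 1` the recursion visits exactly one path and reproduces SIC; larger `K` grows the candidate tree and the returned list size, without ever producing a repeated candidate.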

Unlike the parallel structure of random sampling, derandomized sampling decoding admits a tree structure, as shown in Fig. 3. The final candidate list is generated by traversing the tree from the top level down to level 1 rather than by independent paths. From this perspective, derandomized sampling functions like a pruning algorithm in sphere decoding [21, 22, 23], which prunes unpromising branches. Thanks to the tree structure, there are no sampling repetitions during the whole sampling process, while the necessary calculations of sampling probabilities on shared branches are performed only once, saving considerable complexity. Therefore, different from randomized sampling decoding and other decoding schemes that establish a candidate list with a constant size around the SIC output [12, 24], the size of the final candidate list is variable, which means the sample size set initially in the derandomized sampling algorithm is actually a nominal sample size of the candidate list.

As a nominal sample size, it is essentially a parameter of the threshold that governs the sampling. As it increases, the performance improves gradually since more lattice points will be sampled. Since it is not the real size of the final sampling list, the complexity of derandomized sampling decoding in fact grows slowly with its increment. Because the problems caused by randomization are overcome, the derandomized algorithm achieves desirable improvements in both performance and complexity.

### III-A Algorithm Analysis

The operation of the derandomized sampling algorithm relies on the notion of the sampling probability, which is calculated by (7). According to the defined threshold, at each decoding level, an integer candidate for the entry will be sampled if and only if

$$K_i \cdot P\left(x_i \mid x_{i+1}, \ldots, x_n\right) \geq 1, \qquad (19)$$

where $K_i$ denotes the sample size allocated to level $i$.

Note that the sampling probability calculated by (7) is a conditional probability based on the entries of the previous levels. As sampling is performed from the top level down to level 1, the sampling probability of a lattice point is essentially the product of its entries' conditional sampling probabilities, which is lower bounded as in (10):

$$P(\mathbf{x}) = \prod_{i=1}^{n} P\left(x_i \mid x_{i+1}, \ldots, x_n\right) \qquad (20)$$

###### Proposition 1.

Given the nominal sample size $K$, lattice points with sampling probability

$$P(\mathbf{x}) \geq \frac{1}{K} \qquad (21)$$

will be deterministically sampled by the derandomized sampling algorithm.

Proof: Consider sampling an $n$-dimensional lattice point $\mathbf{x}$ by the derandomized sampling algorithm. Obviously, with the initial sample size $K$, its first entry on level $n$ will be sampled if

$$K \cdot P(x_n) \geq 1. \qquad (22)$$

Based on the selection of $x_n$, the updated sample size on the next level is calculated as

$$K_{n-1} = K \cdot P(x_n). \qquad (23)$$

Then, on level $n-1$, the first two entries of $\mathbf{x}$ will be obtained when

$$K \cdot P(x_n) \cdot P(x_{n-1} \mid x_n) \geq 1. \qquad (24)$$

By induction, $\mathbf{x}$ will be deterministically sampled if the following condition holds:

$$K \cdot \prod_{i=1}^{n} P\left(x_i \mid x_{i+1}, \ldots, x_n\right) = K \cdot P(\mathbf{x}) \geq 1. \qquad (25)$$

Thus, the conclusion follows, completing the proof.

As for the randomized sampling in [16], because the samplings are independent of each other, the probability of missing a lattice point $\mathbf{x}$ is $\left(1 - P(\mathbf{x})\right)^K$, which means one has to increase the sample size to ensure a high probability of $\mathbf{x}$ being sampled. In particular, given the sample size, lattice points with sampling probability

(26) |

will be found by the randomized sampling algorithm with the prescribed probability. Through the comparison between (21) and (26), to sample the same lattice point, the required sample size of the derandomized sampling algorithm is less than half of that of the randomized sampling algorithm:

(27) |
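As a rough numeric illustration of this comparison: the deterministic condition (21) requires a sample size of at least $1/P(\mathbf{x})$, while the randomized sampler needs $(1-P)^K \leq \epsilon$ to miss the point with probability at most $\epsilon$. The specific values of $P$ and $\epsilon$ below are illustrative, not from the paper:

```python
import math

def k_derandomized(p):
    """Smallest integer K with K * p >= 1 (deterministic sampling, (21))."""
    return math.ceil(1.0 / p)

def k_randomized(p, eps):
    """Smallest integer K with (1 - p)^K <= eps, i.e. the size the
    randomized sampler needs to miss the point with prob <= eps."""
    return math.ceil(math.log(eps) / math.log(1.0 - p))

p, eps = 0.05, 0.1
kd, kr = k_derandomized(p), k_randomized(p, eps)
print(kd, kr)   # kd = 20, kr = 45: less than half the randomized size
```

Since $(1-p)^K \approx e^{-Kp}$, the randomized size scales like $\ln(1/\epsilon)/p$, so the factor-of-two advantage holds whenever $\epsilon \leq e^{-2}$, consistent with the comparison above.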

On the other hand, with the same sample size, the derandomized sampling algorithm is able to obtain more lattice points than randomized sampling, which brings further performance improvement. Moreover, since the initial size is only a nominal sample size, for the same sample size the derandomized sampling algorithm still achieves much lower complexity than randomized sampling. More precisely, randomized sampling must invoke the calculation of the sampling probability in (7) once for every sample, whereas in derandomized sampling the computations are reduced by removing all the repetitions, so the number of calls to (7) is much smaller. Due to the uncertainty in this procedure, it is preferable to describe the complexity of the derandomized sampling algorithm as polynomial with respect to the dimension. Obviously, without suffering from the effect of the randomization, the derandomized sampling algorithm shows great potential in both performance and complexity.

### III-B Optimization of the Parameter

As the parameter which controls the variance of the sampling probabilities, it has a significant impact on the final decoding performance. For complexity reasons, the initial sample size of sampling algorithms is always limited, so finding the optimum parameter that exploits the sampling potential for a given sample size is the key. In order to determine the optimum choice in the derandomized sampling algorithm, we rewrite the parameter in terms of a single scalar, which then becomes the quantity to be optimized.

It has been demonstrated in [17] that

(28) |

Because of the exponential decay, the term in (28) will be negligible if the relevant quantity is sufficiently large. Assuming this weak condition is satisfied, the sampling probability shown in (20), which is calculated based on the discrete Gaussian distribution, can be further derived as follows

(29) |

Since lattice points with sampling probability at least $1/K$ will be deterministically sampled by the derandomized sampling algorithm, motivated by (29), let

(30) |

and we have

(31) |

which means lattice points whose distance is less than the right-hand side (RHS) of (31) must be obtained.

In order to exploit the potential of the derandomized sampling algorithm for the best decoding performance, the parameter is selected to maximize the upper bound shown in (31). Therefore, setting the corresponding derivative to zero, the optimum parameter for a given sample size in the derandomized sampling algorithm can finally be determined as follows

(32) |

Obviously, the optimum parameter for the randomized sampling algorithm shown in (9) is not the optimum solution for the derandomized sampling algorithm. According to (32), it is easy to check that the parameter monotonically decreases as the sample size increases.

From the viewpoint of lattice decoding, the derandomized sampling algorithm will return the closest lattice point if the distance between the received signal and the lattice is less than the RHS of (31). Therefore, the RHS of (31) can be regarded as the decoding radius in the sense of bounded distance decoding (BDD). By substituting (32) into (31), the optimum decoding radius of the derandomized sampling algorithm is derived as

(33) |

### III-C Upper Bound on the Sample Size

We now give an explicit value of the sample size for which derandomized sampling decoding achieves near-ML performance. To do this, the total probability of the samples in the final candidate list is derived based on a truncation of the discrete Gaussian distribution (7).

As shown in [16], the probability that the integer generated by random rounding is located within the 2N integers around the pre-detection signal is bounded by

(34) |

Since the term decays exponentially, a finite truncation with moderate $N$ achieves an accurate approximation. Normally, a 3-integer approximation is sufficient:

(35) |

Since these probabilities follow the discrete Gaussian distribution, they decrease monotonically with the distance from the pre-detection signal. Let us order them as follows

(36) |

As shown in (4), the pre-detection signal is subject to the effect of noise. Intuitively, it tends to be close to an integer for small noise, while it tends to lie halfway between two integers for large noise. Since it is the peak of the continuous Gaussian distribution associated with the discrete one in (7), we define the worst case in sampling as the one where the pre-detection signal is centered between two integers.

Because the random noise makes an exact analysis difficult, we only consider the worst-case scenario in sampling. Then, under the 3-integer approximation in (35), the following holds in the worst case:

(37) |

where the remaining term is much smaller due to the exponential decay of the probability with distance.

Now, let us calculate the total probability of the lattice points sampled by derandomized sampling in the worst case. Consider the top level first. Obviously, according to (19), the first two integers will be sampled if the threshold is met. If the threshold is met by all three, all 3 integers around the pre-detection signal are deterministically sampled. On the other hand, if the third integer misses the threshold, it is discarded, while the sum of the probabilities of the other two integers is larger than the bound following from (35). Therefore, given the nominal sample size, the total probability of the samples on the top level is bounded by

(38) |

To further derive the lower bound on the total probability of the samples, we assume the third sample at each sampling level is always discarded. Then, still in the worst case, the total probability of the samples on the next level is given by

(39) |

Similarly, on the lower levels, the total probability of the samples in the worst case can be lower bounded by

(40) |

Therefore, the total probability of sampled lattice points in the derandomized sampling algorithm is lower bounded by a function of . We define a parameter to evaluate the decoding performance as

(41) |

Obviously, the lower bound increases with the sample size, and a larger bound means a higher probability of the closest lattice point being sampled. Thus, derandomized sampling decoding can be used to approximate ML decoding as the bound approaches 1.

The lower bound (41) is loose because it quantifies the probability in the worst case. For a bound close to 1, the required sample size can be huge (in fact, exponential). A lower bound for the average case is an open question; because the noise is random, the average-case probability would be more useful.

In order to obtain a better estimate, the idea of fixed-complexity sphere decoding (FSD), which also follows a tree structure in decoding, is exploited. Different from standard sphere decoding, it only performs a full search in the upper levels, known as the full-expansion stage, while SIC is applied on the remaining levels. It has been proved in [25] that, by applying a channel matrix ordering that ensures the signals with the largest post-processing noise amplification are detected in the full-expansion stage, the FSD algorithm yields near-ML performance at high SNR if it satisfies:

(42) |

where is the number of levels in the full-expansion stage.

We propose to use sampling in the full-expansion stage of FSD. With a suitable channel matrix ordering, the modified sampling decoder also consists of two stages: candidate values on the upper levels are sampled based on the lower bound, while decoding on the remaining levels is processed by SIC (i.e., derandomized sampling decoding with unit sample size).

According to (40) and (42), if we set to a value near 1 on the upper decoding levels, then the decoder will achieve near-ML performance:

(43) |

Compared with (41), the lower bound (43) is better because the number of full-expansion levels is much smaller than the total number of levels, meaning the sample size achieving the same bound is greatly reduced. Here, we define a target value of the bound representing near-ML performance. Then, according to (43), the corresponding sample size, which is the upper bound on the sample size in derandomized sampling decoding, can be easily calculated.

Note that the derandomized sampling algorithm with unit sample size performs the same as SIC in lattice decoding. Thus, the decoder enjoys flexible performance between SIC and near-ML by adjusting the sample size. Although a larger sample size brings further performance improvement, the decoder approaches ML performance only asymptotically at an exponential cost in sample size, which is not worthwhile from a complexity standpoint.

## IV Derandomized Sampling Algorithm in Soft-Output Decoding

In this section, we show that the proposed derandomized sampling algorithm can also be used as an efficient tool to implement soft-output decoding in MIMO systems. By generating a list of lattice points around the ML estimate, the derandomized sampling algorithm in soft-output decoding actually functions as an approximation scheme like list sphere decoding (LSD) in [9]. To establish the trade-off with respect to the sample size, we first give an upper bound on the sphere radius in LSD.

### IV-A Upper Bound on the Sphere Radius in LSD

Given , we define a function over the -dimensional lattice as

(44) |

where denotes the lattice point of . Thus the LLR in soft-output decoding shown in (15) can be expressed by -function as

(45) | |||||

Accordingly, the L-value computation is converted into the calculation of this function. Here, a lattice point is expressed as the basis matrix times an integer vector. To further exploit the function, we invoke the following lemma from [26].

###### Lemma 1 ([26]).

For any $n$-dimensional lattice , and , one has

(46) | |||||

According to Lemma 1, we obtain

(47) |

As for the second term on the RHS of (47), it decays exponentially with the dimension. Assuming the dimension is sufficiently large, the function is dominated by the set of lattice points within the radius centered at the received signal. Returning to (45), the lattice points in the corresponding set should correspondingly satisfy the following condition:

(48) |

and we have

(49) |

Based on the fact shown in (45) that

(50) |

the L-value computation depends essentially on the lattice points within this radius. In other words, with this sphere radius, LSD achieves MAP performance to within a negligible loss by exploiting only the lattice points in the set shown in (17). Thus, the sphere radius in LSD is upper bounded by

(51) |

Note that with increasing SNR, the radius shrinks gradually, saving considerable complexity.

### IV-B Derandomized Sampling in Soft-Output Decoding

Given the sphere radius, LSD performs sphere decoding to obtain all the lattice points within it. However, sphere decoding is known to be impractical due to its exponentially increasing complexity. Instead of enumerating the lattice points within the radius by exhaustive search, the derandomized sampling algorithm generates lattice points by sampling from a Gaussian-like distribution, which is more efficient than LSD due to its polynomial complexity.

As shown in (10), the lower bound on the sampling probability resembles a Gaussian distribution over the lattice: the closer a lattice point is to the received signal, the larger its lower bound. Therefore, lattice points closer to the received signal are more likely to be sampled. In this way, the derandomized sampling algorithm can find a number of lattice points with small metrics around the ML estimate. By restricting the original set of sums in (15) to a much smaller one, the LLR calculation by the derandomized sampling algorithm can be written as

(52) |

It is noteworthy that lattice points with sufficiently large sampling probabilities will be deterministically sampled by the derandomized sampling algorithm. As shown in (31), this can be interpreted as obtaining all the lattice points inside a sphere of radius

(53) |

To achieve a better upper bound of , the optimum choice of the parameter in (32) is applied and we have

(54) |

Let the set formed by the lattice points within this sphere radius be

(55) |

then set in (52) can be rewritten as

(56) |

where the second component represents the set of lattice points outside the radius but nevertheless sampled by derandomized sampling decoding, and is normally small. Although the lattice points within the radius constitute only a small part of the final candidate list of derandomized sampling, they capture the key aspect of the decoding performance and offer a way to investigate the effect of the sample size in soft-output decoding.

Based on the upper bound on the sphere radius in LSD, the derandomized sampling algorithm can also be applied to implement soft-output decoding by sampling all the lattice points within that radius. Hence, according to (51), by matching the radii, the derandomized sampling algorithm will achieve near-MAP performance even with the lattice points in the inner set only. Therefore, we have

(57) |

and

(58) |

Obviously, as the sample size increases, more lattice points will be sampled and the corresponding sphere radius also increases gradually, leading to further performance improvement. Note that the total sampling probability shown in (43) can also be used to reveal this flexible trade-off. As for achieving near-MAP performance, we emphasize that the required sample size of the derandomized sampling algorithm is significantly less than the value derived above. The reasons are two-fold: the derivation is based on the loose upper bound shown in (53), and the contribution of the set of additionally sampled lattice points outside the radius is not considered. Nevertheless, it provides a straightforward way of showing the trade-off in soft-output decoding with respect to the sample size. In fact, with a moderate sample size, a desirable performance gain at a low complexity cost can be achieved, as will be shown in the simulation results.

## V Simulation Results

In this section, the performance and complexity of the derandomized sampling algorithm in MIMO systems are studied. Here, each entry of the transmitted signal **x** is a modulation symbol taken independently from a QAM constellation with Gray mapping. We assume a flat-fading environment: the channel matrix contains uncorrelated complex Gaussian fading gains with unit variance and remains constant over each frame duration. The average power per bit at the receiver is related to the noise power through the modulation level.

Fig. 4 shows the bit error rate (BER) of derandomized sampling decoding compared with other decoding schemes in an uncoded MIMO system with 64-QAM. Clearly, the sampling decoding schemes enjoy considerable gains over lattice-reduction-aided SIC. Compared to the fixed candidates algorithm (FCA) in [12] and the iterative list decoding in [24] with 30 samples, the sampling decoding algorithms offer not only improved BER performance but also the promise of a smaller sample size. As expected, derandomized sampling decoding achieves better BER performance than randomized sampling decoding with the same sample size. Specifically, the gain of the MMSE schemes is approximately 1 dB at the target BER. As the sample size increases, the BER performance improves gradually. Observe that with a moderate sample size (=73), the performance of the derandomized sampling algorithm suffers negligible loss compared with ML. Therefore, with a moderate sample size, derandomized sampling decoding achieves near-ML performance.

Fig. 5 compares the complexity of the derandomized sampling algorithm with that of other schemes in different dimensions. It is clearly seen that, in a 64-QAM MIMO system at a fixed SNR, the derandomized sampling algorithm requires far fewer average flops than the other decoding schemes with the same sample size. This can be interpreted as reducing the computation of the sampling procedure by removing all the unnecessary repetitions. Even for a large sample size, the complexity is still lower than that of the randomized sampling algorithm. Consequently, better BER performance and a lower complexity requirement make the derandomized sampling algorithm very promising for lattice decoding.

Fig. 6 shows the frame error rate (FER) for a coded MIMO BICM-IDD system with 4-QAM, using a rate-1/2, irregular (256,128,3) low-density parity-check (LDPC) code of codeword length 256 (i.e., 128 information bits). Each codeword spans one channel realization and a random bit interleaver is used. The parity-check matrix is randomly constructed, but cycles of length 4 are eliminated. The maximum number of LDPC decoding iterations is set to 50. Clearly, after three iterations between the MIMO detector and the SISO decoder in IDD, the proposed sampling algorithm performs better than FCA, MMSE suppression [27], and embedding list decoding [28]. For a fuller comparison, the performance of both sampling algorithms after one iteration is also given. As expected, the derandomized sampling algorithm always achieves better FER performance than the randomized sampling algorithm under the same number of iterations. Note that there is no significant performance gain after more than three iterations in IDD receivers. It is also observed that the LSD in [9] and the shifted sphere list decoding (SSLD) in [14] achieve near-MAP performance. However, due to the application of sphere decoding, their complexity is high and increases exponentially with the sphere radius and the system dimension, rendering them impractical. It is also shown that the performance gap between the proposed algorithm and MAP decreases as the sample size increases, and near-MAP performance is achieved by the derandomized sampling algorithm with a moderate sample size. By adjusting the sample size, the whole system enjoys a flexible trade-off between performance and complexity.

## VI Conclusions

In this paper, we proposed a derandomized algorithm to address the issues in sampling algorithms caused by randomization, which holds great potential in both lattice decoding and soft-output decoding. By setting a probability threshold to perform the sampling, the whole sampling procedure becomes deterministic. We demonstrated that the proposed derandomized sampling algorithm outperforms the randomized sampling algorithm with much lower complexity, and we derived the optimal parameter that maximizes the decoding radius for the best decoding performance. To characterize the trade-off in lattice decoding, the upper bound on the sample size corresponding to near-ML performance was also given. Furthermore, we found that the proposed derandomized sampling algorithm is well suited to soft-output decoding through sampling a list of lattice points around the ML estimate. According to the analysis, the derandomized sampling algorithm is capable of achieving near-MAP performance with a moderate sample size. Therefore, by varying the sample size, the decoder enjoys a flexible trade-off between performance and complexity in both lattice decoding and soft-output decoding.

## References

- [1] L. Babai, “On Lovász’ lattice reduction and the nearest lattice point problem,” Combinatorica, vol. 6, no. 1, pp. 1–13, 1986.
- [2] L. Luzzi, G. Othman, and J. Belfiore, “Augmented lattice reduction for low-complexity MIMO decoding,” IEEE Trans. Wireless Commun., vol. 9, pp. 2853–2859, Sep. 2010.
- [3] D. Wubben, R. Bohnke, V. Kuhn, and K. D. Kammeyer, “Near-maximum-likelihood detection of MIMO systems using MMSE-based lattice reduction,” in Proc. IEEE Int. Conf. Commun. (ICC ’04), Paris, France, Jun. 2004, pp. 798–802.
- [4] E. Agrell, T. Eriksson, A. Vardy, and K. Zeger, “Closest point search in lattices,” IEEE Trans. Inform. Theory, vol. 48, no. 8, pp. 2201–2214, Aug. 2002.
- [5] Y. H. Gan, C. Ling, and W. H. Mow, “Complex lattice reduction algorithm for low-complexity full-diversity MIMO detection,” IEEE Trans. Signal Process., vol. 57, no. 7, pp. 2701–2710, Jul. 2009.
- [6] J. Jalden and P. Elia, “DMT optimality of LR-aided linear decoders for a general class of channels, lattice designs, and system models,” IEEE Trans. Inform. Theory, vol. 56, no. 10, pp. 4765–4780, Oct. 2010.
- [7] C. Ling, “On the proximity factors of lattice reduction-aided decoding,” IEEE Trans. Signal Process., vol. 59, no. 6, pp. 2795–2808, Jun. 2011.
- [8] M. O. Damen, H. Gamal, and G. Caire, “On maximum-likelihood detection and the search for the closest lattice point,” IEEE Trans. Inform. Theory, vol. 49, pp. 2389–2401, Oct. 2003.
- [9] B. M. Hochwald and S. ten Brink, “Achieving near-capacity on a multiple-antenna channel,” IEEE Trans. Commun., vol. 51, no. 3, pp. 389–399, Mar. 2003.
- [10] H. Vikalo, B. Hassibi, and T. Kailath, “Iterative decoding for MIMO channels via modified sphere decoding,” IEEE Trans. Wireless Commun., vol. 3, no. 6, pp. 2299–2311, Nov. 2004.
- [11] P. Silvola, K. Hooli, and M. Juntti, “Suboptimal soft-output MAP detector with lattice reduction,” IEEE Signal Process. Lett., vol. 13, no. 6, pp. 321–324, Jun. 2006.
- [12] W. Zhang and X. Ma, “Low-complexity soft-output decoding with lattice-reduction-aided detectors,” IEEE Trans. Commun., vol. 58, no. 9, pp. 2621–2629, Sep. 2010.
- [13] D. Milliner and J. Barry, “A lattice-reduction-aided soft detector for multiple-input multiple-output channels,” in Proc. IEEE Globecom’06, Nov. 2006, pp. 1–5.
- [14] J. Boutros, N. Gresset, L. Brunel, and M. Fossorier, “Soft-input soft-output lattice sphere decoder for linear channels,” in Proc. IEEE GLOBECOM, San Francisco, Dec. 2003, pp. 1583–1587.
- [15] J. Lee, B. Shim, and I. Kang, “Soft-input soft-output list sphere detection with a probabilistic radius tightening,” IEEE Trans. Wireless Commun., vol. 11, no. 8, pp. 2848–2857, Aug. 2012.
- [16] S. Liu, C. Ling, and D. Stehle, “Decoding by sampling: a randomized lattice algorithm for bounded distance decoding,” IEEE Trans. Inform. Theory, vol. 57, pp. 5933–5945, Sep. 2011.
- [17] P. Klein, “Finding the closest lattice vector when it is unusually close,” in Proc. ACM-SIAM Symposium on Discrete Algorithms (SODA), 2000, pp. 937–941.
- [18] E. Larsson and J. Jalden, “Fixed-complexity soft MIMO detection via partial marginalization,” IEEE Trans. Signal Process., vol. 56, pp. 3397–3407, Aug. 2008.
- [19] R. Wang and G. B. Giannakis, “Approaching MIMO channel capacity with soft detection based on hard sphere decoding,” IEEE Trans. Commun., vol. 54, no. 4, pp. 587–590, Apr. 2006.
- [20] C. Studer and H. Bolcskei, “Soft-input soft-output single tree-search sphere decoding,” IEEE Trans. Inform. Theory, vol. 56, no. 10, pp. 4827–4842, Oct. 2010.
- [21] R. Gowaikar and B. Hassibi, “Statistical pruning for near-maximum likelihood decoding,” IEEE Trans. Signal Process., vol. 55, no. 6, pp. 2661–2675, Jun. 2007.
- [22] W. Zhao and G. Giannakis, “Sphere decoding algorithms with improved radius search,” IEEE Trans. Commun., vol. 53, no. 7, pp. 1104–1109, Jul. 2005.
- [23] B. Shim and I. Kang, “Sphere decoding with a probabilistic tree pruning,” IEEE Trans. Signal Process., vol. 56, no. 10, pp. 4867–4878, Oct. 2008.
- [24] T. Shimokawa and T. Fujino, “Iterative lattice reduction aided MMSE list detection in MIMO system,” in Proc. IEEE International Conference on Advanced Technologies for Communications, Oct. 2008, pp. 50–54.
- [25] J. Jalden, L. Barbero, B. Ottersten, and J. Thompson, “The error probability of the fixed-complexity sphere decoder,” IEEE Trans. Signal Process., vol. 57, pp. 2711–2720, Jul. 2009.
- [26] W. Banaszczyk, “New bounds in some transference theorems in the geometry of numbers,” Math. Ann., vol. 296, pp. 625–635, 1993.
- [27] A. Matache, C. Jones, and R. D. Wesel, “Reduced complexity MIMO detectors for LDPC coded systems,” in Proc. IEEE Military Commun. Conf., Monterey, USA, 2004, pp. 1073–1079.
- [28] L. Luzzi, D. Stehle, and C. Ling, “Decoding by embedding: correct decoding radius and DMT optimality,” IEEE Trans. Inform. Theory, vol. 59, no. 5, pp. 2960–2973, 2013.