# An Improved SCFlip Decoder for Polar Codes

###### Abstract

This paper focuses on the recently introduced Successive Cancellation Flip (SCFlip) decoder of polar codes. Our contribution is twofold. First, we propose the use of an optimized metric to determine the flipping positions within the SCFlip decoder, which improves its ability to find the first error that occurred during the initial SC decoding attempt. We also show that the proposed metric allows closely approaching the performance of an ideal SCFlip decoder. Second, we introduce a generalisation of the SCFlip decoder to a number $\omega$ of nested flips, denoted by SCFlip-$\omega$, using a similar optimized metric to determine the positions of the nested flips. We show that the SCFlip-2 decoder yields significant gains in terms of decoding performance and competes with the performance of the CRC-aided SC-List decoder with list size $L = 4$, while having an average decoding complexity similar to that of standard SC decoding at medium to high signal-to-noise ratio.

## I Introduction

Polar codes are a new class of error-correcting codes, proposed by Arikan in [1], which provably achieve the capacity of any symmetric binary-input memoryless channel under successive cancellation (SC) decoding. However, for short to moderate blocklengths, the frame error rate (FER) performance of polar codes under SC decoding does not compete with that of other families of codes, such as LDPC or turbo codes.

In [2], a Successive Cancellation List (SCL) decoder has been proposed, which significantly outperforms simple SC decoding and approaches the Maximum-Likelihood (ML) performance at high signal-to-noise ratio (SNR). Moreover, when applied to polar codes concatenated with an outer cyclic redundancy check (CRC) code, used to identify the correct message within the decoded list, the SCL decoder has been shown to successfully compete with other families of capacity-approaching codes, such as Low-Density Parity-Check (LDPC) codes. However, the SCL decoder suffers from high storage and computational complexity, which grows linearly with the list size $L$. Several improvements have been proposed to reduce its computational complexity, such as Stack decoding (SCS) in [3], but at the cost of an increased storage complexity.

The Successive Cancellation Flip (SCFlip) decoder has been introduced in [4] for the binary erasure channel (BEC) and generalised to the binary-input additive white Gaussian noise (BI-AWGN) channel by using a CRC in [5]. It is close to the ordered statistics decoding proposed in [6], which has been specifically applied to polar codes in [7]. The idea is to allow a given number of new decoding attempts, in case a failure of the initial SC decoding attempt is detected by the CRC. Each new attempt consists in flipping one single decision of the initial SC attempt, starting with the least reliable one according to the absolute value of the log-likelihood ratio (LLR), then decoding the subsequent positions by standard SC decoding. The above procedure is iterated until the CRC is verified or a predetermined maximum number of flips is reached. The SCFlip decoder provides an interesting trade-off between decoding performance and decoding complexity, since each new decoding attempt is only performed if the previous one failed. Consequently, the average computational complexity of the SCFlip decoder approaches that of the SC decoder at medium to high SNR, while competing with the CRC-aided SCL with $L = 2$ in terms of error-correction performance.

In this work we propose two improvements to the SCFlip decoder, aimed at both increasing the error-correction performance and reducing the computational complexity. First, we propose the use of a new metric to determine the flipping positions within the SCFlip decoder. The proposed metric takes into account the sequential nature of the SC decoder, and we show that it yields an improved FER performance and a reduced computational complexity compared to the LLR-based metric used in [5]. Second, we introduce a generalization of the SCFlip decoder to a number $\omega$ of nested flips, denoted by SCFlip-$\omega$. We show that the SCFlip-2 decoder with the proposed metric to select the two flipping positions competes with the CRC-aided SCL decoder with $L = 4$ in terms of decoding performance, while having an average decoding complexity similar to that of standard SC decoding at medium to high SNR. Furthermore, we also use an Oracle-assisted decoder, as in [5], to determine the lower bounds of these SCFlip decoders, and show that the proposed algorithms for both $\omega = 1$ and $\omega = 2$ closely approach the optimal performance.

## II Preliminaries

### II-A Polar Codes

A polar code is characterized by the three-tuple $(N, K, \mathcal{A})$, where $N$ is the blocklength, $K$ the number of information bits, and $\mathcal{A} \subset \{1, \ldots, N\}$ is the set of indices indicating the positions of the information bits inside the block of size $N$. Bits corresponding to positions $i \notin \mathcal{A}$ are referred to as frozen bits and are fixed to pre-determined values known at both the encoder and the decoder.

We denote by $u = (u_1, \ldots, u_N)$ the data vector of length $N$, containing information bits at the positions $i \in \mathcal{A}$, and frozen bits that are set to zero. The encoded vector, denoted by $x = (x_1, \ldots, x_N)$, is obtained by:

$$x = u\, G_N,$$

where $G_N$ is the generator matrix defined as in [1]. We further denote by $y = (y_1, \ldots, y_N)$ the data received from the channel and used at the decoder input. $\hat{u} = (\hat{u}_1, \ldots, \hat{u}_N)$ denotes the decoder's output, with $\hat{u}_i$ being the estimation of the bit $u_i$.

### II-B Successive Cancellation Decoder

The SC decoder is the standard low-complexity decoder of polar codes given in [1]. The decoding process consists in taking a decision on bit $u_i$, denoted $\hat{u}_i$, according to the sign of the LLR:

$$L_i = \ln \frac{\Pr\left(y, \hat{u}_1^{\,i-1} \mid u_i = 0\right)}{\Pr\left(y, \hat{u}_1^{\,i-1} \mid u_i = 1\right)}, \quad (1)$$

using the decision function $h$:

$$\hat{u}_i = h(L_i) = \begin{cases} 0, & \text{if } L_i > 0 \text{ or } i \notin \mathcal{A}, \\ 1, & \text{if } L_i < 0, \end{cases}$$

where by convention $h(0)$ takes the values 0 and 1 with equal probability. Note that the computations and decisions are performed sequentially in the SC decoder, as the estimation of the current bit $\hat{u}_i$ depends on the previously decoded bits $\hat{u}_1^{\,i-1}$.
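
As a minimal illustration, the decision rule above (denoted `h` here) can be applied sequentially as in the following Python sketch, assuming the per-position LLRs are already available; a real SC decoder recomputes each $L_i$ recursively from the previous decisions, which is omitted here:

```python
import random

def h(llr: float) -> int:
    """Hard decision from an LLR: 0 if positive, 1 if negative,
    and a fair coin on a tie (the convention mentioned above)."""
    if llr > 0:
        return 0
    if llr < 0:
        return 1
    return random.randint(0, 1)

def sc_decisions(llrs, frozen):
    """Sequential hard decisions; frozen positions are forced to 0.
    NOTE: this is only a sketch -- the LLRs are taken as given,
    whereas SC derives each one from the previously decoded bits."""
    return [0 if i in frozen else h(llr) for i, llr in enumerate(llrs)]
```
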

## III Definition and Analysis of SCFlip-ω Decoders

### III-A Definition of SCFlip-ω Decoders

We consider the serial concatenation of an outer CRC code and an inner polar code. Note that the number of unfrozen positions of the polar code is $K + r$, where $K$ is the number of information bits and $r$ is the size of the CRC.

The SCFlip decoder [5] consists of a standard SC decoding, possibly followed by a maximum number $T$ of new decoding attempts, until no errors are detected by the CRC check. Each new decoding attempt consists of (i) flipping only one decision of the initial SC attempt, then (ii) decoding the subsequent positions by standard SC decoding. The flipping positions are those corresponding to the lowest absolute values of the LLRs computed during the initial SC attempt.

However, the success of the SCFlip decoding depends on (i) the ability to find the very first error that occurred during the initial SC attempt, and (ii) the ability of SC to successfully decode the subsequent positions once the first position in error has been flipped. In this work we introduce two new enhancements to the SCFlip decoder, aimed at improving the two above-mentioned characteristics. We propose the use of an optimized metric to determine the flipping positions, building upon the probability of a given position being the first error that occurred in the initial SC attempt (see Section IV). We note that the global structure of the SCFlip decoding stays the same, the only difference being the generation of the ordered list of flipping positions. We also introduce a generalisation of the SCFlip decoder to a number $\omega$ of nested flips: the first flip is performed on one decision of the initial SC attempt, while the $k$-th flip ($2 \le k \le \omega$) is performed on a bit position belonging to the new decoding trajectory determined by the previous flips ($1$ to $k-1$). Such a sequence of nested flips will also be referred to as an order-$\omega$ flip. The SCFlip-$\omega$ decoding for $\omega = 1$ and $\omega = 2$ is presented in Algorithm 1 and Algorithm 2, respectively.
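
The overall control flow just described (one initial SC attempt, then one extra attempt per candidate flip set until the CRC checks) can be sketched as follows. This is not the paper's Algorithm 1; `sc_decode` and `crc_ok` are hypothetical callbacks standing in for the SC decoder and the CRC verification, and `flip_sets` abstracts the ordered lists of flipping positions (singletons for order-1 attempts, nested pairs for order-2 attempts):

```python
def scflip(sc_decode, crc_ok, flip_sets):
    """SCFlip control flow (sketch).

    sc_decode(flips) -> decision vector with the decisions at the
    positions in `flips` inverted; crc_ok(u) -> bool; flip_sets is
    the ordered list of candidate flip sets to try in turn."""
    u_hat = sc_decode(frozenset())          # initial SC attempt
    attempts = 1
    if crc_ok(u_hat):
        return u_hat, attempts
    for flips in flip_sets:                 # extra attempts, in order
        u_hat = sc_decode(frozenset(flips))
        attempts += 1
        if crc_ok(u_hat):
            return u_hat, attempts
    return u_hat, attempts                  # decoding failure
```

Note that, as in the text, the extra attempts are only executed on CRC failure, which is what keeps the average complexity close to one SC decoding at high SNR.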

A simple and efficient implementation of the SCFlip-$\omega$ decoder consists in incrementing the flip order progressively. As long as errors are detected by the CRC code, we proceed step by step from the SC decoder, to SCFlip with 1 flip for a given number of attempts, then to SCFlip with 2 flips for a given number of attempts. New decoding attempts in the SCFlip decoder are similar to standard SC decoding, the only difference being the hard decision function. Thus, we use the notation SC($\mathcal{E}$), where $\mathcal{E}$ is the set of indices corresponding to the flipping positions. The hard decision function can be defined as follows:

$$\hat{u}_i = h_{\mathcal{E}}(L_i) = \begin{cases} h(L_i), & \text{if } i \notin \mathcal{E}, \\ 1 \oplus h(L_i), & \text{if } i \in \mathcal{E}. \end{cases}$$

Note that SC($\emptyset$) is the standard SC decoder. In Algorithm 1 and Algorithm 2, candidate positions for the flips are stored in an ordered list of size $T$, generated by the function FlipDetermine described in Algorithm 3, which relies on a metric computed for each candidate position. The calculation of this metric is discussed in Section IV.

Algorithm 3 is used in SCFlip-1 as well as in SCFlip-2 to determine the indexes, strictly larger than a given position $i_0$, of the least reliable decisions according to the proposed metric. To do so, we first calculate the metric for each candidate position, then generate an index vector such that the corresponding metric values are sorted in descending order (this operation is performed by the function denoted sort_index). For SCFlip-1, the list is generated by calling this function once, with $i_0 = 0$ and the list size equal to $T$. For SCFlip-2, we have two degrees of freedom concerning the choice of the first and second flipping positions. Therefore, we use two parameters to characterize the flips of order 2: the number of positions from the order-1 list for which flips of order 2 will be explored, and the number of order-2 flips explored for each of these positions. The ordered list is obtained by concatenating the vectors returned by successive calls to this function, where for each call $i_0$ corresponds to the position of the first flipped bit. The maximum number of order-2 attempts is the product of these two parameters.
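
A possible Python rendering of the FlipDetermine step, under the assumption that the metric is given in the probability domain (higher value = more likely to hold the first error); `flip_list_order2` and its `metric_for` callback are illustrative names, not the paper's notation:

```python
def flip_determine(metric, A, i0, T):
    """Return the T unfrozen indices j in A with j > i0 that have the
    highest metric values, sorted in descending metric order -- the
    sort_index step of Algorithm 3.  `metric` maps index -> value."""
    candidates = [j for j in sorted(A) if j > i0]
    candidates.sort(key=lambda j: metric[j], reverse=True)
    return candidates[:T]

def flip_list_order2(metric_for, first_flips, S):
    """Order-2 list: for each explored first flip i0, concatenate the
    S best second positions taken among {j in A : j > i0}.
    `metric_for(i0)` is assumed to return (metric, A) recomputed along
    the decoding trajectory where i0 has been flipped."""
    out = []
    for i0 in first_flips:
        m, A = metric_for(i0)
        out += [(i0, j) for j in flip_determine(m, A, i0, S)]
    return out
```
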

### III-B Oracle-Assisted Decoder and Order of a Noise Realization

Following [5], we distinguish between channel-generated errors (CGE) and propagation errors (PE) in SC decoding. Propagation errors are caused by an erroneous decision which propagates through the decoding process, while channel-generated errors correspond to erroneous decisions that are only caused by the noise realization at the decoder's input. From these definitions, the first error in SC decoding is necessarily a CGE.

In [5], an Oracle-assisted decoder (OA-SC) has been introduced to count the number of channel-generated errors which occur during SC decoding. OA-SC performs standard SC decoding with a hard decision function modified to ensure that each decision is correct, so that no error can propagate during the process: $\hat{u}_i = u_i$. Hence, the order of the noise realization, denoted by $\omega$, is defined by:

$$\omega = \#\left\{\, i \in \mathcal{A} \;:\; h(L_i) \ne u_i \,\right\},$$

where the symbol $\#$ denotes the number of times the condition is verified.
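
The order-counting step of the oracle-assisted decoder then reduces to the following sketch; the raw decisions computed under a correct prefix are assumed to be supplied by the caller (in a real OA-SC they come from the recursive LLR computation with every decision forced to the true bit):

```python
def noise_order(u, raw_decisions, A):
    """Order of a noise realization: with every decision forced to the
    true bit (so no error can propagate), count the unfrozen positions
    whose raw decision h(L_i) -- taken under a correct prefix --
    disagrees with the true bit.  `raw_decisions[i]` stands for that
    raw decision, a simplifying assumption of this sketch."""
    return sum(1 for i in A if raw_decisions[i] != u[i])
```
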

Note that we use the same notation $\omega$ for the flip order in the SCFlip-$\omega$ decoder and for the order of a noise realization, as they are directly related. Indeed, the SCFlip-$\omega$ decoder is able to decode a noise realization of order $\omega$, provided that (i) the corresponding order-$\omega$ flip has been selected in the corresponding ordered list, and (ii) the CRC is not verified by one of the previous decoding attempts. As in [5], we use the OA-SC decoder to predict the optimal performance of a SCFlip-$\omega$ decoder, regardless of the choice of the metric and of the complexity ($T \to \infty$), by declaring a decoding failure if and only if the order of the noise realization is greater than $\omega$. These optimal performances serve as lower bounds on the FER of practical SCFlip-$\omega$ decoders. We further denote by $B_\omega$ the lower bound of the SCFlip-$\omega$ decoder.

Fig. 1 presents the lower bounds of SCFlip-$\omega$ decoders with $\omega = 1, 2$ over the BI-AWGN channel, for a polar code with parameters $(N, K+r) = (1024, 512+16)$. We also plot the performance of the SC decoder. It can be seen that ideal SCFlip-$\omega$ decoders exhibit significant SNR gains compared to the SC decoder, the gain increasing from the ideal SCFlip-1 to the ideal SCFlip-2 decoder. This ideal performance can be achieved with $T \to \infty$ for any order $\omega$, assuming a perfect CRC, i.e., a collision probability equal to 0. In practice, due to the non-perfect CRC, the probability of getting a CRC collision (hence an erroneous decoded message) increases with the number of decoding attempts. Therefore, optimizing the choice of the flipping positions allows improving the latency and the FER performance simultaneously.

### III-C Importance of SCFlip-1

We define $\pi_k(M, T)$ as the probability of not correcting a noise realization of order $k$ with a SCFlip-$k$ decoder using a chosen metric $M$ and a number $T$ of attempts, assuming a perfect CRC. As a consequence, the FER of SCFlip-$\omega$ can be lower bounded by:

$$\mathrm{FER}\left(\text{SCFlip-}\omega\right) \;\ge\; B_\omega + \sum_{k=1}^{\omega} p_k \,\pi_k(M, T_k), \quad (2)$$

where $p_k$ denotes the probability of the noise realization being of order $k$, $T_k$ the number of attempts dedicated to order-$k$ flips, and $B_\omega$ the lower bound determined by the OA-SC decoder defined above. This is an inequality because of the non-perfect CRC. The term $p_k\,\pi_k(M, T_k)$ in the right-hand side of the above inequality is referred to as the loss of order $k$. This loss is incremental, so the loss of order $k$ propagates to order $k+1$. In particular, to construct an SCFlip-2 decoder that closely approaches its theoretical lower bound $B_2$, we would like the loss of order 2 to be of the same order of magnitude as $B_2$. This implies that the loss of order 1 should also be of the same order of magnitude as $B_2$, and therefore an order of magnitude smaller than $B_1$. This condition is concretely materialised by a SCFlip-1 decoder matching the lower bound predicted by the OA-SC decoder.

### III-D Complexity of SCFlip-ω

In addition to the FER performance, the SCFlip-$\omega$ decoder is also characterized by its computational complexity, which depends on the number of decoding attempts performed in order to decode a given noise realization. Therefore, we define the normalized computational complexity as the average number of decoding attempts, each attempt having essentially the cost of one SC decoding; it depends on the metric $M$ and on the chosen values of the parameters $T_k$. It is worth noticing that the complexity of SCFlip tends to that of the SC decoder when the SNR tends to infinity.
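
Under the simplifying assumption that the failure probabilities of the successive attempts are known, the normalized complexity is just the expected number of attempts, since attempt $t+1$ is run exactly when the first $t$ attempts all failed:

```python
def normalized_complexity(p_fail):
    """Average number of decoding attempts, used here as a proxy for
    the normalized complexity.  p_fail[t] is the probability that the
    first t+1 attempts all fail (a modelling assumption of this
    sketch); attempt 1 (plain SC) is always run, so
    C = 1 + sum_t Pr(attempt t+2 is needed)."""
    return 1.0 + sum(p_fail)
```

As the SNR grows, all entries of `p_fail` vanish and the complexity tends to 1, i.e. to that of a single SC decoding, matching the remark above.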

## IV A New Metric for Efficient SCFlip-ω Decoders

In [5], the SCFlip decoder uses flipping positions which are ordered according to the absolute value of their LLRs. The criterion can be described as a metric $M(i)$, defined by:

$$M(i) = |L_i|, \quad i \in \mathcal{A}. \quad (3)$$

The flipping positions are those corresponding to the lowest values of $M(i)$. However, we point out that this metric is sub-optimal, because it does not take into account the sequential nature of the SC decoder. Indeed, while a lower LLR absolute value indicates that the corresponding hard decision has a higher probability of being in error, it does not provide any information about the probability of being the first error that occurred during the sequential decoding process. To address this issue, we propose a new metric, aimed at identifying the first error that occurred during the sequential decoding process. The probability of $\hat{u}_i$ being the first error is given by:

$$P\left(\hat{u}_i \ne u_i,\; \hat{u}_1^{\,i-1} = u_1^{\,i-1}\right) = p_i \prod_{j < i,\; j \in \mathcal{A}} (1 - p_j),$$

where $p_j = P\left(\hat{u}_j \ne u_j \mid \hat{u}_1^{\,j-1} = u_1^{\,j-1}\right)$. This probability cannot be computed in practice, as we have no guarantee that the previous bits have been correctly decoded. Instead, we can compute the probability $q_i$ given by (this follows from the definition of the LLR):

$$q_i = P\left(\hat{u}_i \ne u_i \mid \hat{u}_1^{\,i-1}\right) = \frac{1}{1 + e^{|L_i|}}.$$

Note that if $u_i$ is a frozen bit, it cannot be in error, as the decoder always takes the right decision. Therefore, for frozen bits, the above probability is set to zero.
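
The chain-rule computation above, with the computable probabilities $q_i$ substituted for the $p_i$, can be sketched as follows (frozen positions contribute probability 0, as just noted):

```python
import math

def p_error(llr):
    """Probability that the hard decision on an LLR is wrong, given a
    correct prefix: 1 / (1 + e^{|L|})."""
    return 1.0 / (1.0 + math.exp(abs(llr)))

def first_error_probs(llrs, A):
    """Probability that position i holds the *first* error: i must be
    wrong and every previous unfrozen position right.  Uses q_i as a
    stand-in for the uncomputable p_i, as in the text."""
    probs, prefix_ok = [], 1.0
    for i, llr in enumerate(llrs):
        if i in A:
            q = p_error(llr)
            probs.append(prefix_ok * q)
            prefix_ok *= (1.0 - q)
        else:
            probs.append(0.0)   # frozen bits cannot be in error
    return probs
```
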

To compute the probability of being the first error, we consider $q_j$ as an approximation of $p_j$, and introduce a parameter $\alpha$ to compensate for this approximation, which can be optimized by simulation. Thus, the proposed metric is given by:

###### Definition 1

Given a bit $u_i$, $i \in \mathcal{A}$, the metric $M_\alpha(i)$ associated to $\hat{u}_i$ is defined by:

$$M_\alpha(i) = \frac{1}{1 + e^{\alpha |L_i|}} \prod_{j < i,\; j \in \mathcal{A}} \left(1 - \frac{1}{1 + e^{\alpha |L_j|}}\right), \quad (4)$$

where $\alpha$ is a parameter to be optimized by simulation.

We further define the equivalent logarithmic-domain metric $M'_\alpha(i) = -\frac{1}{\alpha} \ln M_\alpha(i)$. It follows that:

$$M'_\alpha(i) = |L_i| + \frac{1}{\alpha} \sum_{j \le i,\; j \in \mathcal{A}} \ln\left(1 + e^{-\alpha |L_j|}\right). \quad (5)$$

The sum in (5) can be seen as a penalty added to $|L_i|$, which takes into consideration the sequential nature of the SC decoding. Indeed, this term increases with an increasing number and a decreasing reliability of the previously decoded bits, so that the last decoded bits are penalized compared to the metric (3). To understand the impact of the parameter $\alpha$, we consider the following limit cases. For $\alpha \to 0$, the above metric becomes $M'_\alpha(i) \approx \frac{\ln 2}{\alpha}\, n_i$, where $n_i$ is the number of positions in $\mathcal{A}$ less than or equal to $i$. Hence, the induced ordering corresponds to the usual decoding order. For $\alpha \to \infty$, it can easily be seen that $M'_\alpha(i) \to |L_i|$, so that $M'_\alpha$ is equivalent to (3). In general, the use of the metric $M'_\alpha$ can be seen as an intermediate trade-off between the decoding order and the ordering given by the LLR reliabilities (3). To find the best possible trade-off, we optimize the value of $\alpha$ by Monte-Carlo simulation. Note also that the optimized value depends on the code used and on the SNR.

We provide an intuitive explanation of the behavior of the optimized $\alpha$ value as a function of the SNR. Consider equation (5) for some fixed value of $\alpha$. When the SNR goes to infinity, the term $\ln\left(1 + e^{-\alpha |L_j|}\right)$ tends to 0 and becomes negligible compared to $|L_i|$, so the sequential characteristic of the decoder is no longer accounted for by the metric. Consequently, it is expected that the optimal value of $\alpha$ decreases as the SNR increases, so as to rebalance the contribution of the penalty term to the value of the metric. To confirm this intuition, Table I shows the optimized $\alpha$ values at several SNR values. As expected, it can be observed that the optimal $\alpha$ value decreases with the SNR.

| SNR (dB) | 1.5 | 2.5 | 3 |
|---|---|---|---|
| optimized $\alpha$ | 0.4 | 0.3 | 0.25 |
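
The logarithmic-domain metric (5) and its two limit behaviours can be checked numerically; this sketch takes the LLR magnitudes as given:

```python
import math

def metric_log_domain(llrs, A, alpha):
    """Metric (5): |L_i| plus the running penalty
    (1/alpha) * ln(1 + exp(-alpha*|L_j|)) summed over unfrozen j <= i.
    The lowest values point to the most likely first-error positions."""
    out, penalty = {}, 0.0
    for i, llr in enumerate(llrs):
        if i in A:
            penalty += math.log1p(math.exp(-alpha * abs(llr))) / alpha
            out[i] = abs(llr) + penalty
    return out

# Limit behaviour discussed above: for large alpha the penalty vanishes
# and the metric reduces to |L_i| (metric (3)); for small alpha the
# penalty ~ (ln 2 / alpha) * n_i dominates, so the ordering becomes
# the decoding order.
```
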

We further consider only random noise realizations of order 1, at SNR = 2.5 dB. Fig. 2 plots the loss of order 1, assuming a perfect CRC, calculated as the probability of the only channel-generated error not being in the list generated by the FlipDetermine procedure, as a function of the list size $T$, using either the metric (3) or our proposed metric (4) with the optimized value $\alpha = 0.3$. According to Section III-C, we can determine the value of $T$ such that the loss of order 1 has the same order of magnitude as the theoretical lower bound $B_2$ of the SCFlip-2 decoder. At SNR = 2.5 dB, this condition can easily be satisfied with the proposed metric (4), while a much higher value of $T$ is needed with the metric (3), which in practice means a higher computational complexity. The value of $T$ chosen this way is used in the subsequent simulations.

Let us now consider a noise realization of order 2 for a given code, with the first CGE in position $i_1$. We define the set $\mathcal{A}' = \{\, i \in \mathcal{A} : i > i_1 \,\}$ and consider the polar code defined by the unfrozen set $\mathcal{A}'$. The order of the noise realization is only 1 for this code, and therefore finding the second CGE is equivalent to decoding a noise realization of order 1 for the code defined by $\mathcal{A}'$. As a consequence, the metric $M_\alpha$ can also be used for SCFlip-2, by considering the set $\mathcal{A}'$ defined above instead of the set $\mathcal{A}$. However, the optimum value of $\alpha$ may be different; it is obtained by a separate numerical optimisation at SNR = 2.5 dB.

## V Simulation Results

Throughout this section we consider transmission over a BI-AWGN channel, using a CRC-polar code concatenation with parameters $N = 1024$, $K = 512$, and $r = 16$. The positions of the information and CRC bits, given by the set $\mathcal{A}$, are optimized for the SC decoder by Gaussian approximation, as in [8]; this set is updated for each SNR value. Concerning the CRC, we use a 16-bit CRC code.
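
As an illustration of the CRC-aided failure detection used by all the decoders in this section, here is a bitwise CRC-16 sketch; the paper's actual generator polynomial is not reproduced here, so the CRC-16-CCITT polynomial 0x1021 is used purely as a stand-in:

```python
def crc16(bits, poly=0x1021):
    """MSB-first bitwise CRC-16 (zero initial register) over a list of
    bits.  0x1021 (CRC-16-CCITT) is only an illustrative choice, not
    the polynomial used in the paper."""
    reg = 0
    for b in bits:
        reg ^= (b & 1) << 15            # feed the data bit into the MSB
        msb = reg >> 15
        reg = ((reg << 1) & 0xFFFF) ^ (poly if msb else 0)
    return reg

def crc_bits(bits):
    """The 16 CRC bits appended to the information bits, MSB first."""
    r = crc16(bits)
    return [(r >> (15 - k)) & 1 for k in range(16)]

def crc_ok(codeword_bits):
    """CRC check used to detect SC decoding failures: the CRC of the
    information part must match the 16 appended CRC bits."""
    return crc_bits(codeword_bits[:-16]) == codeword_bits[-16:]
```
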

The FER performance of the SCFlip-1 and SCFlip-2 decoders is shown in Fig. 3. For SCFlip-1, the maximum number of flips is set as discussed in Section IV. We compare the SCFlip-1 decoder using the metric (3) with the one using our proposed metric (4) with $\alpha = 0.3$, and with the theoretical lower bound $B_1$ corresponding to an ideal SCFlip-1 decoder. While the performance gain compared to [5] is not impressive, it can be seen that our proposed metric closely approaches the theoretical lower bound. As the metric (4) exhibits a negligible loss of order 1, it can further be used to correct second-order noise realizations (note that SCFlip-1 and SCFlip-2 using the metric (3) have nearly the same FER performance, since in this case the loss of order 1 is dominant). We further plot the FER performance of the proposed SCFlip-2 decoder, and compare it with the CRC-aided SCL decoder with $L = 4$ and a 16-bit CRC. The theoretical lower bound $B_2$ of the SCFlip-2 decoder is also shown. It can be seen that the proposed SCFlip-2 closely approaches the theoretical lower bound, and exhibits nearly the same performance as the CRC-aided SCL decoder with $L = 4$ at medium to high SNR.

In Fig. 4, we plot the average normalized complexity of SCFlip-1 with the metric (3) and $T = 40$, of SCFlip-1 with the proposed metric (4), and of SCFlip-2 with the same parameters as in the previous paragraph. Moreover, we also plot the normalized complexity of the SC and SCL decoders. We note that the SCFlip-1 decoder exhibits similar FER performance when using our proposed metric (4) or the metric (3) with $T = 40$. We observe that the complexity of SCFlip is very high at low SNR, but converges quickly to that of the SC decoder. At an SNR of 2.2 dB, the complexity is already lower than that of the SCL with $L = 4$. Moreover, one can see that the proposed metric allows reducing the average normalized complexity of the SCFlip-1 decoder by a factor of 2 compared to the metric (3). Also, the computational complexity of our proposed SCFlip-2 is even slightly better than that of SCFlip-1 with the metric (3) and $T = 40$, while improving the performance by 0.4 dB at low FER.

## VI Conclusion

In this paper, we first proposed an improvement of the SCFlip decoder of order 1, by introducing a new metric to determine the flipping positions, which takes into account the sequential nature of the SC decoder. The proposed metric increases the ability of the SCFlip decoder to find the first error that occurred during the initial SC attempt, thus improving both the decoding performance and the computational complexity. Moreover, we have shown that the proposed metric allows closely approaching the theoretical lower bound corresponding to an ideal SCFlip decoder, and explained that this is a necessary condition for building an effective SCFlip decoder of order 2.

We have further investigated an SCFlip decoder of order 2, which uses an analogous metric to determine the order-2 flipping positions. We have shown that the SCFlip-2 decoder yields significant gains in terms of decoding performance, closely approaching the performance of the CRC-aided SC-List decoder with list size $L = 4$, while having an average decoding complexity similar to that of standard SC decoding at medium to high SNR.

## Acknowledgment

The research leading to these results received funding from the European Commission H2020 Programme, under grant agreement 671650 (mmMagic Project).

## References

- [1] E. Arikan, “Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
- [2] I. Tal and A. Vardy, “List decoding of polar codes,” IEEE Transactions on Information Theory, vol. 61, no. 5, pp. 2213–2226, 2015.
- [3] K. Niu and K. Chen, “Stack decoding of polar codes,” Electronics letters, vol. 48, no. 12, pp. 695–697, 2012.
- [4] M. Bastani Parizi, “Polar codes: Finite length implementation, error correlations and multilevel modulation,” Master’s thesis, Swiss Federal Institute of Technology, 2012.
- [5] O. Afisiadis, A. Balatsoukas-Stimming, and A. Burg, “A low-complexity improved successive cancellation decoder for polar codes,” in 48th Asilomar Conference on Signals, Systems and Computers. IEEE, 2014, pp. 2116–2120.
- [6] M. P. Fossorier and S. Lin, “Soft-decision decoding of linear block codes based on ordered statistics,” IEEE Transactions on Information Theory, vol. 41, no. 5, pp. 1379–1396, 1995.
- [7] D. Wu, Y. Li, X. Guo, and Y. Sun, “Ordered statistic decoding for short polar codes,” IEEE Communications Letters, vol. 20, no. 6, pp. 1064–1067, 2016.
- [8] P. Trifonov, “Efficient design and decoding of polar codes,” IEEE Transactions on Communications, vol. 60, no. 11, pp. 3221–3227, 2012.