# Relaxed Polar Codes

## Abstract

Polar codes are the latest breakthrough in coding theory, as they are the first family of codes with explicit construction that provably achieve the symmetric capacity of discrete memoryless channels. Arikan’s polar encoder and successive cancellation decoder have complexities of $O(N \log N)$, for code length $N$. Although the complexity bound of $O(N \log N)$ is asymptotically favorable, we report in this work methods to further reduce the encoding and decoding complexities of polar coding. The crux is to relax the polarization of certain bit-channels without performance degradation. We consider schemes for relaxing the polarization of both *very good* and *very bad* bit-channels in the process of channel polarization. Relaxed polar codes are proved to preserve the capacity-achieving property of polar codes. Analytical bounds on the asymptotic and finite-length complexity reduction attainable by relaxed polarization are derived. For binary erasure channels, we show that the computation complexity can be reduced by a factor of 6, while preserving the rate and error performance. We also show that relaxed polar codes can be decoded with significantly reduced latency. For AWGN channels with medium code lengths, we show that relaxed polar codes can have lower error probabilities than conventional polar codes, while having reduced encoding and decoding computation complexities.

## 1 Introduction

Polar codes, introduced by Arikan [2], are the most recent breakthrough in coding theory. Polar codes are the first and, currently, the only family of codes with explicit construction (no ensemble to pick from) to asymptotically achieve the capacity of symmetric discrete memoryless channels as the block length goes to infinity. Besides their obvious application in error correction, recent research has shown the possibility of applying polar codes and the polarization phenomenon in various communications and signal processing problems such as data compression [4], BICM channels [6], wiretap channels [7], multiple access channels [8], and broadcast channels [11]. There have also been various modified constructions of polar codes for the different applications, such as generalized polar codes [12], compound polar codes [13], concatenated polar codes [14], and universal polar codes [15].

Polar codes can be encoded and decoded with relatively low complexity. Both the encoding complexity and the successive cancellation (SC) decoding complexity of polar codes are $O(N \log N)$, for code length $N$ [2]. The decoding latency and memory requirements of polar decoders can be reduced to $O(N)$ [16]. Hardware architectures for polar decoders, with $O(N)$ memory and processing elements, were implemented [16]. A semi-parallel architecture for SC decoding has recently been proposed [17], where efficiency is achieved without a significant throughput penalty by sharing processing resources and taking advantage of the regular structure of polar codes. The encoding and decoding latencies of polar codes can also be reduced through multi-dimensional polar transformations [18]. Alamdar-Yazdi and Kschischang proposed a simplified successive cancellation decoder with reduced latency and computational complexity, obtained by decoding all bits of a rate-one or a rate-zero constituent code simultaneously [19]. Reduction in decoding latency can also be achieved by changing the code construction, such as through interleaved concatenation of shorter polar codes [14].

In this paper, we propose methods to reduce both the encoding and decoding computational complexities of polar codes, by means of *relaxing* the channel polarization. The resultant codes are called relaxed polar codes. Hence, hardware implementations of the encoders and decoders of relaxed polar codes can require smaller area and lower power consumption than those of conventional polar codes. Efficient methods for the implementation of the SC decoder, as in [16], can also be applied to further improve the efficiency of decoding relaxed polar codes.

In practical scenarios, codes have finite block lengths and are designed with a specific information block length and rate in order to satisfy a certain error rate. Due to the nature of channel polarization, the error probability of certain bit-channels decreases (or increases) exponentially at each polarization step. Hence, the encoding and decoding complexities can be reduced by relaxing the polarization of certain channels once their polarization degrees hit suitable thresholds, while satisfying the code rate and error rate requirements. For Arikan’s polar code with length $N = 2^n$, each bit-channel is polarized $n$ times. However, for the proposed relaxed polar codes, some bit-channels will be fully polarized $n$ times, and the polarization of the remaining bit-channels will be relaxed: their polarization is aborted if they become sufficiently good or sufficiently bad in fewer than $n$ polarization steps. Relaxed polarization results in fewer polarizing operations, and hence a reduction in complexity. It is found that with careful construction of relaxed polar codes, there is no error performance degradation. In fact, it is observed that relaxed polar codes can have a lower error rate than conventional polar codes with the same rate.

The rest of this paper is organized as follows. In Section 2, we give an overview of channel polarization theory and the construction of conventional polar codes, which we call *fully polarized* codes. In Section 3, the notion of relaxed channel polarization is introduced and the relaxed channel polarization theory is established. The asymptotic bounds on the complexity reduction using relaxed polar codes are discussed in Section 4. Then, upper bounds and lower bounds on the complexity reduction at finite block lengths are derived in Section 5. These bounds are evaluated and compared with the actual complexity reductions at certain code parameters in Section 5.4. Constructions of relaxed polar codes for general channels, and decoders for relaxed polar codes, are discussed in Section 6. The relation between the relaxed polar code construction and the simplified successive cancellation decoder (SSCD) is discussed in Section 6.3. Numerical simulations on AWGN channels are presented in Section 6.4. The paper is concluded in Section 7.

## 2 Arikan’s Fully Polarized Codes

For any binary-input discrete memoryless channel (B-DMC) $W: \mathcal{X} \to \mathcal{Y}$, let $W(y|x)$ denote the probability of receiving $y$ given that $x$ was sent, for any $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. For a B-DMC $W$, the *Bhattacharyya parameter* of $W$ is

$$Z(W) = \sum_{y \in \mathcal{Y}} \sqrt{W(y|0)\,W(y|1)}.$$

The symmetric capacity of a B-DMC $W$ can be written as

$$I(W) = \sum_{y \in \mathcal{Y}} \sum_{x \in \{0,1\}} \frac{1}{2} W(y|x) \log_2 \frac{W(y|x)}{\frac{1}{2}W(y|0) + \frac{1}{2}W(y|1)}.$$

For a binary memoryless symmetric (BMS) channel $W$ with uniform input, the error probability of $W$ can be characterized as

$$P_e(W) = \frac{1}{2} \sum_{y \in \mathcal{Y}} \min\{W(y|0), W(y|1)\}.$$

The Bhattacharyya parameter $Z(W)$ can be shown to always lie between $0$ and $1$, and can be regarded as a measure of the reliability of $W$. Channels with $Z(W)$ close to zero are almost noiseless, while channels with $Z(W)$ close to one are almost pure-noise channels. More precisely, it can be proved that the probability of error of a BMS channel is upper-bounded by its Bhattacharyya parameter [20]:

$$P_e(W) \le Z(W).$$

The construction of polar codes is based on a phenomenon called *channel polarization* discovered by Arikan [2]. Consider the polarization matrix

$$F = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}.$$

Consider the polarizing transformation which takes two independent copies of $W$ and performs the mapping $(u_1, u_2) \to (x_1, x_2)$, where $x_1 = u_1 \oplus u_2$, $x_2 = u_2$, and $(x_1, x_2)$ are transmitted over the two copies of $W$. Then polarization is defined with the channel transformation

$$(W, W) \to (W^-, W^+),$$

where $W^-$ and $W^+$ are degraded and upgraded channels, respectively. Hence, the following is true for the bit-channel rates [2]:

$$I(W^-) + I(W^+) = 2I(W), \qquad I(W^-) \le I(W) \le I(W^+).$$

The mapping $(W, W) \to (W^-, W^+)$ is called one level of polarization. The same mapping is applied to $W^-$ and $W^+$ to get $W^{--}$, $W^{-+}$, $W^{+-}$, $W^{++}$, which is the second level of polarization of $W$. The same process can be continued in order to polarize $W$ for any arbitrary number of levels. The polarization process can also be described using a binary tree, where the root of the tree is associated with the channel $W$. Each node in the binary tree is associated with some bit-channel $P$ and has two children, where the left child corresponds to $P^-$ and the right child corresponds to $P^+$.

The channel polarization process can also be represented using the Kronecker powers of $F$, defined as follows: $F^{\otimes 1} = F$ and, for any $n > 1$,

$$F^{\otimes n} = F \otimes F^{\otimes (n-1)} = \begin{bmatrix} F^{\otimes (n-1)} & 0 \\ F^{\otimes (n-1)} & F^{\otimes (n-1)} \end{bmatrix},$$

where $F^{\otimes n}$ is a $2^n \times 2^n$ matrix. Let $N = 2^n$. Then, the polarization transform matrix is defined as $G_N = B_N F^{\otimes n}$, where $B_N$ is the bit-reversal permutation matrix [2]. Let $U_1^N$ denote the vector of $N$ independent and uniform binary random variables. Let $X_1^N = U_1^N G_N$ be transmitted through $N$ independent copies of a binary-input discrete memoryless channel (B-DMC) $W$ to form the channel output $Y_1^N$. Let $W_N$ denote the channel that results from the $N$ independent copies of $W$ in the polar transformation, i.e. $W_N: \mathcal{X}^N \to \mathcal{Y}^N$. The combined channel $W_N$ is defined with transition probabilities given by

$$W_N(y_1^N \,|\, u_1^N) = W^N(y_1^N \,|\, u_1^N G_N) = \prod_{i=1}^{N} W(y_i \,|\, x_i).$$

This is the channel that the random vector $U_1^N$ observes through the polar transformation.
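As a sanity check on the Kronecker-power representation, the sketch below (pure Python; the helper names are ours, not from the paper) builds $F^{\otimes n}$ and verifies that it is its own inverse over GF(2), which holds because $F^2 = I \pmod 2$ and the Kronecker product respects matrix multiplication:

```python
F = [[1, 0], [1, 1]]  # Arikan's 2x2 polarization kernel

def kron(a, b):
    """Kronecker product of two 0/1 matrices given as lists of lists."""
    return [[x * y for x in ra for y in rb] for ra in a for rb in b]

def f_kron_power(n):
    """The n-th Kronecker power of F (a 2^n x 2^n matrix)."""
    m = F
    for _ in range(n - 1):
        m = kron(F, m)
    return m

def matmul_gf2(a, b):
    """Matrix product over GF(2)."""
    size = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(size)) % 2
             for j in range(size)] for i in range(size)]
```

Since $F^{\otimes n}$ is an involution over GF(2), the same transform serves for both mapping $u_1^N$ to $x_1^N$ and back (the bit-reversal $B_N$ only permutes coordinates).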

Assuming uniform channel input and a genie-aided successive cancellation decoder, the bit-channel $W_N^{(i)}$ is defined with the following transition probability:

$$W_N^{(i)}(y_1^N, u_1^{i-1} \,|\, u_i) = \frac{1}{2^{N-1}} \sum_{u_{i+1}^N} W_N(y_1^N \,|\, u_1^N).$$

Notice that $W_N^{(i)}$ gives the transition probabilities of $u_i$ assuming all the preceding bits $u_1^{i-1}$ are already decoded and available, together with the observations at the channel output $y_1^N$. This is actually the channel that $u_i$ observes and is also referred to as the $i$-th bit-channel. It can be observed that $W_N^{(i)}$ corresponds to the $i$-th node in the $n$-th level of polarization of $W$. The following recursive formulas hold for the Bhattacharyya parameters of the individual bit-channels in the polar transformation [2]:

$$Z(W_{2N}^{(2i)}) = Z(W_N^{(i)})^2, \qquad Z(W_{2N}^{(2i-1)}) \le 2Z(W_N^{(i)}) - Z(W_N^{(i)})^2,$$

with equality iff $W$ is a binary erasure channel.
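For the binary erasure channel the recursion is exact, so all bit-channel Bhattacharyya parameters can be computed in closed form. The sketch below (function name is illustrative, not from the paper) does exactly that:

```python
def bec_bit_channel_bps(eps, n):
    """Bhattacharyya parameters (= erasure probabilities) of the 2**n
    bit-channels obtained by polarizing BEC(eps) through n levels.
    For the BEC, Z(W-) = 2Z - Z^2 and Z(W+) = Z^2 hold with equality."""
    bps = [eps]
    for _ in range(n):
        nxt = []
        for z in bps:
            nxt.append(2 * z - z * z)  # minus (degraded) child
            nxt.append(z * z)          # plus (upgraded) child
        bps = nxt
    return bps
```

Since $(2z - z^2) + z^2 = 2z$, the parameters sum to $N\epsilon$ at every level, mirroring the rate-conservation identity $I(W^-) + I(W^+) = 2I(W)$; this makes a handy sanity check.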

The channel polarization theorem states that as the code length $N$ goes to infinity, the bit-channels become polarized, meaning that they either become noise-free or very noisy. Define the set of good bit-channels according to the channel $W$ and a positive constant $\beta < 1/2$ as

$$\mathcal{G}_N(W, \beta) = \left\{ i \in \{1, \dots, N\} : Z(W_N^{(i)}) < 2^{-N^\beta} \right\},$$

where $N = 2^n$; then the main channel polarization theorem follows [2]:

Theorem ? readily leads to a construction of capacity-achieving *polar codes*. The crux of polar codes is to carry the information bits on the upgraded noise-free channels and freeze the degraded noisy channels to a predetermined value, e.g. zero. The following theorem shows the error exponent under successive cancellation decoding [2]:

Similar to the set of good bit-channels, the set of bad bit-channels is defined according to the channel $W$ and a positive constant $\beta < 1/2$ as

$$\mathcal{B}_N(W, \beta) = \left\{ i \in \{1, \dots, N\} : Z(W_N^{(i)}) > 1 - 2^{-N^\beta} \right\}.$$

The following corollary can be derived by specializing Theorem 3 of [21]:

## 3 Relaxed Polarization Theory

In this section, we define relaxed polarization. We prove that, similar to conventional polar codes, relaxed polar codes can asymptotically achieve the capacity of a binary memoryless symmetric channel. We also prove that the bit-channel error probability of relaxed polar codes is at most that of conventional polar codes without rate-loss.

Let us observe the definition of good channels in Theorem ?. Let us also observe that the Bhattacharyya parameters (BP) approach $0$ or $1$ exponentially with the block length. Let the bit-channels of the relaxed polar code be denoted analogously to those of the fully polarized code. Consider two independent copies of a parent bit-channel $P$ at polarization level $j$, to be polarized into two bit-channel children at level $j+1$, corresponding to a code of length $2^{j+1}$, via the channel transformation $(P, P) \to (P^-, P^+)$.

Consider the case when the polarized channel at level $j$, where $j < n$, is sufficiently good, such that it satisfies the definition of a good channel at the target length $N = 2^n$. Then, the idea of relaxed polarization is to stop further polarization of this good channel; the corresponding node in the polarization tree is called a relaxed node, such that the channels of all the descendants of a relaxed node are the same as that of the relaxed parent node and will also be relaxed. Let $u_o$ and $u_e$ denote the sub-vectors with odd and even indices, respectively. Then, the bit-channel transformation at the relaxed node is given by $(P, P) \to (P, P)$, i.e. the two children are identical copies of the relaxed parent bit-channel.

Relaxing the further polarization of sufficiently good channels is called good-channel relaxed polarization. For the good-channel relaxed polar code, the set of good bit-channels is defined according to the channel $W$ and a positive constant $\beta < 1/2$, analogously to the fully polarized case, with the relaxed bit-channels in place of $W_N^{(i)}$.

Next, we show that relaxed polar codes, similar to fully polarized codes, asymptotically achieve the capacity of BMS channels.

Consider a relaxed channel at level $j$, where $j < n$. Then its BP already satisfies the good-channel condition at the target length $N$, and the BPs of all its descendants at level $n$ are equal to that of the relaxed node; hence all these descendants are in the good set of the relaxed code. In the case of full polarization, if a channel belongs to the good set, then it must have polarized to a good channel at level $n$ or earlier. If it polarized at level $n$, then by definition it also belongs to the good set of the relaxed code. Otherwise, its parent has polarized to a good channel at some level $j < n$; with relaxed polarization, this channel and all its siblings will also be in the good set of the relaxed code. Therefore, the good set of the fully polarized code is contained in that of the relaxed code, and hence the rate of the relaxed code is at least as large. Then, the proof follows from Theorem ?.

The upper bound on the probability of error as in Theorem ? is still valid for the relaxed polar code constructed with respect to . Hence, Theorem ? shows that it is possible to construct good-channel relaxed polar codes, which are still capacity achieving.

The remaining question is to actually compare the bit-error rate of the relaxed polar code with that of Arikan’s polar code. Consider the special case when the two children of a relaxed node at the last polarization level both carry information bits. Consider the channel $W$; then we have the following inequalities (the proof is provided in the Appendix):

$$P_e(W^+) \le P_e(W) \le P_e(W^-), \qquad 2P_e(W) \le P_e(W^-) + P_e(W^+).$$

Consider a good-channel relaxed polar code with good-channel set $\mathcal{G}^{RP}$, which is assumed to be equal to the good-channel set $\mathcal{G}$ of the fully polarized polar code. Consider the last level of channel polarization, e.g. a channel $P$ and its children $P^-$ and $P^+$, assuming that $P$ is a relaxed node and that the indices of both children are contained in $\mathcal{G}^{RP}$ and $\mathcal{G}$. For the relaxed code, the sum error probability of these two bit-channels is $2P_e(P)$. Comparing with the sum $P_e(P^-) + P_e(P^+)$ for the fully polarized code, it follows that $2P_e(P) \le P_e(P^-) + P_e(P^+)$. Therefore, we have the following lemma.

Note that the left-hand side of the inequality in the lemma is the union bound on the frame error rate (FER) of the constructed relaxed polar code, and the bound is very tight at low FERs. Similarly, the right-hand side is the union bound on the FER of the fully polarized code, which is also very tight at low FERs. Hence, we conclude that the relaxed polar code is expected to perform better than the fully polarized code in terms of frame error rate. This will be verified in Section 6.4.

The reduction in encoding and decoding complexity will be addressed in the following section.

## 4 Asymptotic Analysis of Complexity Reduction

In this section, we establish bounds on the asymptotic complexity reduction (as the code’s block length goes to infinity) in polar code encoders and decoders, made possible by relaxed polarization.

First, we elaborate the notion of complexity reduction. For Arikan’s polar codes, the total number of channel polarization operations required using Arikan’s butterfly polarization structure is

$$C_{FP}(N) = \frac{N}{2}\log_2 N,$$

where $N$ is the length of the code. As a result, the encoding procedure consists of $\frac{N}{2}\log_2 N$ binary XOR operations and the decoding procedure consists of $\frac{N}{2}\log_2 N$ LLR combinations. Therefore, each skipped polarization operation is equivalent to one *unit* of complexity reduction in both encoding and decoding, where the unit corresponds to a binary XOR when encoding and an LLR combining operation when decoding. The complexity reduction is defined as the ratio of the number of polarization operations that are skipped due to relaxed polarization to the total number of polarization operations, $\frac{N}{2}\log_2 N$, required for full polarization. The complexity reduction (CR) can be directly translated into encoding and decoding complexity ratios of $1 - \mathrm{CR}$.
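In code, the accounting above is a one-liner; the sketch below (illustrative names, not from the paper) computes the full-polarization operation count and the resulting relative complexity:

```python
import math

def full_polarization_ops(N):
    """Total butterfly operations in Arikan's structure:
    log2(N) stages, each containing N/2 polarization operations."""
    return (N // 2) * int(math.log2(N))

def complexity_ratio(skipped_ops, N):
    """Encoding/decoding complexity of the relaxed code relative to the
    fully polarized code: 1 - CR, where CR = skipped / total."""
    return 1.0 - skipped_ops / full_polarization_ops(N)
```

For example, a length-8 code uses 12 butterflies in total, so skipping 6 of them halves both the encoding and decoding work under this unit model.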

For the asymptotic analysis throughout this section, a family of capacity-achieving polar codes is assumed, constructed with respect to a fixed parameter $\beta < 1/2$ and the corresponding set of good bit-channels, for any block length $N$.

Pick a fixed $\gamma$ with $0 < \gamma < 1$, and a fixed $\delta > 0$ with $\delta < I(W)$. Consider the polarization level $\gamma n$, and let $2^{\gamma n}$ be the total number of nodes at this level. Then for large enough $n$, the nodes at this level whose index belongs to the good set of the corresponding length-$2^{\gamma n}$ code will be relaxed. Notice that the fraction of these nodes approaches the capacity $I(W)$ by Theorem ?. The fraction of bit-channel polarizations between level $\gamma n$ and the last level is $(1-\gamma)$ of the total, among which a fraction of at least $I(W) - \delta$ are relaxed, for large enough $n$. Therefore, the complexity reduction will be at least $(1-\gamma)(I(W)-\delta)$.

In the next theorem, a bound on the asymptotic complexity reduction using bad-channel relaxed polarization is provided. The following scenario is considered for bad-channel relaxed polarization: if none of the descendants of a certain node will belong to the set of good bit-channels, then the polarization at that node, and consequently at all of its descendants, will be relaxed.

Consider the polarization level $\gamma n$. Then by Corollary ?, for large enough $n$, the fraction of nodes with Bhattacharyya parameter close to one approaches $1 - I(W)$. Consider such a node with Bhattacharyya parameter $z$. Then the best descendant of this node at the last level of polarization has Bhattacharyya parameter at least

$$z^{2^{(1-\gamma)n}} \ge 1 - 2^{(1-\gamma)n}(1-z),$$

which implies that it cannot be a good bit-channel. Therefore, the total fraction of complexity reduction is at least $(1-\gamma)(1 - I(W) - \delta)$,

and the theorem follows.

Observe that, by neglecting $\delta$ and $\beta$ in the bounds given in Theorem ? and Theorem ?, and by assuming large enough $n$, the complexity reduction ratio from good-channel and bad-channel relaxed polarization is $(1-\gamma)I(W)$ and $(1-\gamma)(1-I(W))$, respectively, which are both positive constant factors. By combining both good and bad channel relaxation, the ratio of saved operations approaches $(1-\gamma)$. Hence, relaxed polarization can provide a non-vanishing scalar reduction in complexity, even as the code length grows infinitely.

## 5 Finite Length Analysis of Complexity Reduction

In this section, we derive bounds on the complexity reduction resulting from good-channel and bad-channel relaxed polarization at finite block lengths.

### 5.1 Relaxed polar code constructions using Bhattacharyya parameters

In general, finite block length polar codes are constructed by fixing either a target frame error rate (FER) or a target code rate. We consider the construction of polar codes with code length $N = 2^n$, at a target FER of $P_F$. At finite block lengths, we need to specify certain thresholds for the Bhattacharyya parameters in order to establish criteria for good-channel and bad-channel relaxed polarization. As a result, the following scenarios are considered for relaxed polarization:

- **Good-Channel Relaxed Polarization (GC-RP):** a node at polarization level $j$ is not further polarized if its BP satisfies $Z \le T_G$.
- **Bad-Channel Relaxed Polarization (BC-RP):** a node at polarization level $j$ is not further polarized if $Z \ge T_B$.
- **All-Channel Relaxed Polarization (AC-RP):** a node at polarization level $j$ is not further polarized if $Z \le T_G$ or $Z \ge T_B$,

where $T_G$ and $T_B$ are thresholds that can be considered as parameters of the construction.

In the proposed bad-channel relaxed polarization (BC-RP), the bad channels are not further polarized if they become sufficiently bad, as determined by the bad-channel relaxation threshold $T_B$. To guarantee no rate loss from BC-RP, it should only be performed if none of the descendants of the relaxed node could have become good bit-channels. The proposed all-channel relaxed polarization (AC-RP) relaxes the polarization of a bit-channel if it becomes either sufficiently good or sufficiently bad. Since the bad bit-channels do not contribute to the FER, the target FER is still maintained with AC-RP.
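For the BEC, where the BPs are exact, the three relaxation rules can be prototyped directly on the polarization tree. The sketch below (our own helper, with illustrative threshold arguments) walks the tree, stops at nodes whose BP crosses $T_G$ or $T_B$, and counts the polarization operations actually performed; setting `t_b > 1` recovers GC-RP, and `t_g = 0` recovers BC-RP:

```python
def acrp_ops_bec(eps, n, t_g, t_b):
    """Polarization operations performed by AC-RP on BEC(eps) for target
    length N = 2**n.  A node at level j with BP z is relaxed (its entire
    polarizing subtree is skipped) once z <= t_g or z >= t_b.
    A (non-relaxed) node at level j contributes 2**(n-j-1) butterflies."""
    def walk(z, level):
        if level == n or z <= t_g or z >= t_b:
            return 0  # leaf reached, or node relaxed: no further operations
        ops_here = 2 ** (n - level - 1)
        return ops_here + walk(2 * z - z * z, level + 1) + walk(z * z, level + 1)
    return walk(eps, 0)
```

With both thresholds disabled, the count equals $(N/2)\log_2 N$, and tightening either threshold can only reduce it; the difference from the full count is the number of skipped operations entering the CR ratio.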

In Figure 1, the achieved complexity reduction ratio for a binary erasure channel with erasure probability $\epsilon$, BEC($\epsilon$), is shown. It is observed that up to 85% CR is achievable, i.e. the fully polarized (FP) code requires 6.6-fold the complexity of the RP code. AC-RP results in more complexity reduction than GC-RP as the channel becomes worse. The rate-loss is calculated as

$$\Delta R = \frac{R_{FP} - R_{RP}}{R_{FP}},$$

where $R_{FP}$ and $R_{RP}$ are the rates of the codes constructed by full and relaxed polarization, respectively. The rate at a target FER is calculated by aggregating the maximum number of bit-channels such that their accumulated BPs do not exceed the target FER. In the simulation results shown in Figure 1, the observed rate loss is negligible. Another important observation from Figure 1 is the symmetry of the CR curve around $\epsilon = 0.5$. This is explained by the following Theorem ?, which is a direct result of Lemma ? and the description of GC-RP and BC-RP.

We show that the one-to-one mapping is nothing but mirroring, i.e. the $i$-th node at polarization level $j$ will be mapped to the node indexed by $2^j + 1 - i$ at the same level. It is sufficient to show this for one polarization level; the rest follows by induction. Let $\epsilon$ be the erasure probability of a node in the polarization tree of BEC($\epsilon$), and consider the corresponding node of BEC($1-\epsilon$). Then

$$(1-\epsilon)^+ = (1-\epsilon)^2 = 1 - (2\epsilon - \epsilon^2) = 1 - \epsilon^-,$$

And

$$(1-\epsilon)^- = 2(1-\epsilon) - (1-\epsilon)^2 = 1 - \epsilon^2 = 1 - \epsilon^+.$$

Therefore, by induction on the polarization level, it is shown that each polarized node in the polarization tree of BEC($\epsilon$) can be mapped to a node in the polarization tree of BEC($1-\epsilon$) by swapping the $-$'s and $+$'s in the sequence of transformations during its polarization. Furthermore, a node with BP $z$ will be mapped to a node with BP $1-z$ at the image.
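This mirror symmetry is easy to verify numerically; the sketch below (helper names are ours) checks that the BP spectrum of BEC($1-\epsilon$) is the index-reversed, complemented spectrum of BEC($\epsilon$):

```python
def bec_bps(eps, n):
    """BPs of the 2**n bit-channels of BEC(eps), minus child listed
    before plus child at every level."""
    bps = [eps]
    for _ in range(n):
        bps = [v for z in bps for v in (2 * z - z * z, z * z)]
    return bps

def mirror_symmetry_holds(eps, n, tol=1e-12):
    """Check that node i of BEC(1-eps) has BP equal to 1 minus the BP of
    the mirrored node (index reversal = swapping - and +) of BEC(eps)."""
    a, b = bec_bps(eps, n), bec_bps(1 - eps, n)
    N = len(a)
    return all(abs(b[i] - (1 - a[N - 1 - i])) < tol for i in range(N))
```

This is why the CR curves of GC-RP on BEC($\epsilon$) and BC-RP on BEC($1-\epsilon$) coincide, making the overall AC-RP curve symmetric around $\epsilon = 0.5$.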

### 5.2 Analysis of complexity reduction for GC-RP

In this subsection, bounds on the complexity reduction from good-channel relaxed polarization are discussed. In the next theorem, a simple upper bound is provided, which is also illustrated in Figure 2.

The upper bound follows by considering the minimum number of polarization levels required for the best polarized channel to reach the good-channel threshold $T_G$. Notice that the smallest BP at polarization level $j$ is that of the all-plus (right-most) descendant, namely $Z(W)^{2^j}$. Hence, $\lceil \log_2(\log T_G / \log Z(W)) \rceil$ polarization levels are required before the BP of at least one bit-channel drops below $T_G$. The upper bound on saved operations follows by skipping all polarization steps at all remaining levels.

Next, we derive lower bounds on the complexity reduction with relaxed polarization. For any polarization level $j$ and some threshold $T$, let $A_j(T)$ denote the set of bit-channels at polarization level $j$ with BP at most $T$, i.e. $A_j(T) = \{i : Z_j(i) \le T\}$, where $Z_j(i)$ is the BP of the $i$-th node at level $j$.

Notice that choosing $T \ge Z(W)^{2^j}$ guarantees that $A_j(T)$ is a non-empty set. From polarization level $j$, any node in $A_j(T)$ has at least one descendant whose BP falls below the relaxation threshold, namely the right-most (all-plus) descendant, which after $m$ further levels has BP at most $T^{2^m}$. Therefore, choosing $m$ such that $T^{2^m} \le T_G$, there are at least $|A_j(T)|$ nodes at polarization level $j+m$ that have BP below $T_G$ and will be relaxed. Relaxing each of these nodes is equivalent to skipping the polarizing subtree emanating from it, i.e. $(n-j-m)\,2^{n-j-m-1}$ polarization steps. Then the total number of polarization steps skipped is at least $|A_j(T)|\,(n-j-m)\,2^{n-j-m-1}$, and the proof follows.

The corollary follows by an appropriate choice of the level and thresholds in Theorem ?.

The following provides a tighter lower bound on the GC-RP complexity reduction, which is also illustrated in Figure 4.

Consider the right-most node at polarization level $j$, which has BP $Z(W)^{2^j}$. The left child of this node is contained in $A_{j+1}(2Z(W)^{2^j})$, which means that the set of odd-indexed nodes in this set is always non-empty. The right-most descendant of any of these odd-indexed nodes, after sufficiently many further polarization levels, will have BP less than $T_G$ and will be relaxed by eliminating the polarizing subtree emanating from it. The bound follows by summing these savings over all levels $j$. Notice that since the right-polarized children of those odd-indexed nodes are even-indexed, they are not counted among the odd-indexed nodes at any other level.

Notice that in the above lower bound, a condition on the parameters is necessary to guarantee that no double counting occurs. In many practical operation scenarios, this condition holds. If the condition does not hold, one can modify the bound by limiting the computed summation accordingly.

### 5.3 Analysis of complexity reduction for AC-RP

In this subsection, we analyze the complexity reduction from bad-channel relaxed polarization, as well as all-channel (both good and bad) relaxed polarization. As opposed to the previous subsection, we limit our attention here to binary erasure channels (BEC), wherein the exact computation of Bhattacharyya parameters is applicable at finite block lengths.

Throughout this subsection, we always assume the channel is a BEC. For a function $f$, let $f^{(m)}(x)$ denote the output of applying the function recursively $m$ times, with initial input $x$.

The left child of a node with BP $z$ is associated with a bit-channel with BP $f(z) = 2z - z^2$. Hence, the number of polarization levels required for the worst left-polarized bit-channel to have BP greater than the bad-channel threshold $T_B$ is the smallest $m$ with $f^{(m)}(\epsilon) \ge T_B$. The rest of the proof follows as in the GC-RP case.

Theorem ? can also be proved by combining the results of Theorem ? and Theorem ?. The bounds derived for good-channel relaxed polarization in the previous subsection can be turned into bounds for bad-channel relaxed polarization of BEC($\epsilon$) by replacing $\epsilon$ with $1-\epsilon$ in the bounds, and modifying the other parameters accordingly. Hence, to avoid writing similar proofs, we only state the theorems and skip the proofs.

Let $f(z) = 2z - z^2$ and $g(z) = z^2$ denote the minus and plus BEC transformations, as in Theorem ?. Observe that the balance between good-channel and bad-channel relaxation depends on whether $\epsilon < 1/2$, $\epsilon = 1/2$, or $\epsilon > 1/2$. Combining Theorem ? with the upper bounds on GC-RP of Theorem ? results in the following upper bound on AC-RP complexity reduction.

The next theorem can be also regarded as the counterpart of Theorem ?, for bad-channel relaxed polarization.

For AC-RP, the lower bounds of Theorem ? and Theorem ? can be combined as in the following corollary. It is assumed that the set of relaxed nodes in GC-RP and the set of relaxed nodes in BC-RP do not intersect. This is a valid assumption as long as the good-channel and bad-channel relaxation thresholds, $T_G$ and $T_B$, are far enough apart, as characterized in Subsection ?.

Similar to Theorem ?, the above theorem holds under a condition that prevents double counting. Also, similar to Corollary ?, the following corollary holds by combining Theorem ? and Theorem ?. To make the notation consistent, let $j_G$ denote the level in Theorem ? and $j_B$ denote the level in Theorem ?.

### 5.4 Numerical evaluation of complexity reduction by relaxed polarization on erasure channels

In this subsection, we compute the complexity reduction of different scenarios of relaxed polarization over binary erasure channels and compare them with the bounds provided in this section.

The block length of the constructed relaxed polar code is fixed, and a target FER is assumed for the code construction, with the erasure probability varying between $0$ and $1$. We have observed that suitable choices of the thresholds $T_G$ and $T_B$ result in the desired values for the parameters in Theorem ? and Theorem ?, and that the remaining parameters can be well approximated accordingly. The results of Figure 5 show that the actual CR of GC-RP can be characterized using the derived upper and lower bounds, and that substantial complexity reduction is achievable at the target FER.

The performance of AC-RP is analyzed in Figure 6 at the same target FER, where the analytical bounds are compared to the numerical results from the actual construction. It is observed that the bounds give a good approximation of the actual complexity reduction. GC-RP is effective for good channel parameters, while BC-RP is more effective for bad channel parameters. The bounds are also minimized at $\epsilon = 0.5$, and are symmetric around $\epsilon = 0.5$. This can be explained by the symmetry property of Theorem ?.

## 6 Relaxed Polarization on General BMS Channels

In this section, we describe how a code can be constructed and decoded on general binary memoryless channels using relaxed polarization.

### 6.1 Construction of relaxed polar codes on general BMS channels

For general binary memoryless channels, the Bhattacharyya parameters are exponentially hard to compute as the block length increases. This is due to the exponential output alphabet size of the polarized bit-channels. Instead of exact calculation of Bhattacharyya parameters, they can be well approximated by bounding the output alphabet size of bit-channels via channel degrading and channel upgrading transformations [22]. The channel degrading and upgrading operations provide tight lower bounds and upper bounds on the corresponding Bhattacharyya parameters. In order to construct polar codes for continuous-output BMS channels (e.g. additive white Gaussian noise (AWGN) channels), the channel can first be quantized. Then, the degrading and upgrading operations are performed on the bit-channels resulting from polarization of the quantized channel [22]. For AWGN channels, the bit-channel error probability (BC-EP) can also be reasonably approximated using density evolution and a Gaussian approximation [23]. Alternatively, for short codes, the BC-EP can be numerically evaluated using Monte-Carlo simulations, assuming a genie-aided SC decoder. For generality of description, let the error probability of the $i$-th bit-channel at the $j$-th polarization level be bounded by

$$P_e^U(i,j) \le P_e(i,j) \le P_e^D(i,j),$$

where $P_e^U(i,j)$ is the probability of error of the upgraded version of the bit-channel and $P_e^D(i,j)$ is the probability of error of its degraded version.

The values of $P_e^U$ and $P_e^D$ can be computed using upgraded and degraded versions of the polarization tree. In the upgraded polarization tree, after each level of polarization the resulting bit-channels are upgraded to have a limited output alphabet size. At the next level, the upgraded bit-channels are polarized. As a result, all the bit-channels in the upgraded polarization tree have a limited output alphabet size. Therefore, $P_e^U$ can be easily computed. The same procedure is repeated to get a degraded polarization tree and to compute $P_e^D$. The construction of fully polarized codes can be done according to either the lower bounds or the upper bounds on the probability of error of the bit-channels at the last level of polarization. For instance, in case of using upper bounds, bit-channels are sorted according to their error probabilities in ascending order. Accumulate as many good bit-channels in the set $\mathcal{G}$ as possible, such that $\sum_{i \in \mathcal{G}} P_e^D(i,n) \le P_F$, where $P_F$ is the target FER. Then, the FP code is defined by $\mathcal{G}$ and has rate $|\mathcal{G}|/N$.
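For the BEC, where the upper and lower bounds coincide, the selection rule above reduces to a few lines. The sketch below is illustrative (a real BMS construction would use the degraded-channel upper bounds instead of exact BPs): it accumulates the most reliable bit-channels while the union bound stays under the target FER:

```python
def good_set_bec(eps, n, target_fer):
    """Information set for a length-2**n polar code on BEC(eps): sort
    bit-channels by BP (used here as the error-probability proxy) and
    greedily add channels while the accumulated sum stays <= target_fer."""
    bps = [eps]
    for _ in range(n):
        bps = [v for z in bps for v in (2 * z - z * z, z * z)]
    order = sorted(range(len(bps)), key=lambda i: bps[i])
    good, acc = [], 0.0
    for i in order:
        if acc + bps[i] > target_fer:
            break
        acc += bps[i]
        good.append(i)
    return sorted(good)
```

The code rate is then `len(good) / 2**n`, and the accumulated sum is the union bound on the FER under SC decoding.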

In the proposed good-channel relaxed polar codes, a node will not be further polarized if the upper bound on its bit-channel error probability is lower than a certain threshold $T_G$. For bad-channel relaxed polar codes, a bit-channel will not be further polarized if the lower bound on its error probability exceeds an upper threshold $T_B$. Numerically, it was found for BMS channels that the error performance of the constructed code is closer to that calculated using the degraded channel. Hence, when polarization is relaxed for a node, the error probability of the children of the non-polarized node is set to the degraded-channel error probability of the parent. As a result, the procedure for designing relaxed polar codes of length $N$ at a target FER on general BMS channels is specified below. Each node at level $j$ in the polarization tree is associated with a label Relaxed, which is initialized to 0 and will be set to 1 only if this node will not be polarized. The error probability (EP) of each node in the RP tree is initialized to that of the fully polarized tree. The relaxed polar code will be defined by its good channel set $\mathcal{G}^{RP}$.

For the case of the AWGN channel, first the channel parameter is calculated to satisfy the condition $C(W) = R_t$, where $R_t$ is the target rate and $C(W)$ is the capacity of the channel. Then, the channel is quantized using the method of [22] to get a channel with a discrete output alphabet. Then Algorithm 1, discussed above, is applied to this channel.

With target FER $P_F$, the good-channel relaxation threshold is chosen to be $T_G = P_F/N$ to satisfy the target FER, i.e. $N\,T_G \le P_F$. The bad-channel relaxation threshold $T_B$ is chosen according to the entropy of the channel, such that only bit-channels whose capacity is negligible are relaxed. For general BMS channels with error probability $P_e(W)$, the entropy can be approximated by $h_b(P_e(W))$, based on the corresponding inequality in [2], where $h_b$ is the binary entropy function. To guarantee that there is no rate loss from bad-channel relaxation when the relaxation condition is satisfied at Step 13, relaxation may only be done after verifying that the lower bound on the error probability of the best upgraded descendant channel of that node is still higher than $T_G$, which will be satisfied for practical frame error rates.

The same procedure above can be used to construct GC-RP codes, by neglecting the bad-channel relaxation condition in step 13.

In the case of erasure channels, the channel parameter (erasure probability) for a target channel capacity $C$ is $\epsilon = 1 - C$. The upper and lower bounds on the bit-channel error probabilities coincide, and can be calculated exactly from the BPs. To calculate the relaxation thresholds, the error probability of BEC($\epsilon$) is $\epsilon/2$, and its entropy is $\epsilon$.

Algorithm 1 constructs relaxed polar codes for general BMS channels using bounds on the bit-channel error probability. However, for short block lengths, the bit-channel error probability can be estimated numerically as using Monte-Carlo simulations, assuming a genie-aided successive cancellation decoder. In such a case, Algorithm 1 is modified by letting

### 6.2Decoding of relaxed polar codes on general BMS channels

Decoding of relaxed polar codes can be done by a modified successive cancellation decoder. For a polar code of length and BMS channel , suppose that is the input vector and is the received word.

Consider a relaxed polar code constructed as explained in the previous subsection. At each level , for , Relaxed means that is not polarized and Relaxed means that is fully polarized. In practical applications of relaxed polar codes, the decoder has prior knowledge of the polarization map given by Relaxed, which requires storage of at most bits. For communication systems, the polarization map can be specified by the communication standard, similar to the specification of the parity-check matrices of block codes.

For , the decoder computes the likelihood ratio (LR) of , given the channel outputs and previously decoded bits

For FP polar codes, Arikan observed that calculation of the LRs at length requires another calculations at the parent node at length , where the LRs of the pair are assembled from the pair via a straightforward calculation using the bit-channel recursion formulas for [2]. The relaxed successive cancellation decoder (RSCD) follows the same recursion. Hence, if Relaxed, the likelihood ratio (LR) can be computed recursively as follows.

Otherwise, if Relaxed, the decoding equations are modified as follows:

The hard-decision estimates at a parent node are calculated from the hard-decision estimates of its two children in a step similar to encoding. At the last stage when , the LRs are simply . At the end, hard decisions are made on (at the leaf nodes), except for frozen bit-channels where .
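The recursion above can be sketched in code. The sketch below is a simplified LLR-domain version in which a polarized node applies the usual combine functions (min-sum approximation for the upper branch, exact combine for the lower branch), while a relaxed node simply passes its LLRs through to its children with no butterfly and concatenates their hard decisions without the re-encoding XOR. The node labeling by tuples, the min-sum approximation, and the leaf ordering are assumptions of the sketch, not the paper's exact equations.

```python
import numpy as np

def f(a, b):
    """Min-sum approximation of the LLR combine for the upper branch."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, u):
    """LLR combine for the lower branch, given hard decisions u."""
    return b + (1 - 2 * u) * a

def rscd(llr, frozen, relaxed, node=()):
    """Relaxed successive cancellation decoding (sketch).
    llr: channel LLRs for this subtree; frozen: boolean mask over leaves
    (in decoding order); relaxed: set of tree nodes, each a tuple of 0/1
    moves from the root, at which polarization was relaxed.
    Returns (u_hat, x_hat): decoded bits and re-encoded estimates."""
    n = len(llr)
    if n == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)
        return np.array([u]), np.array([u])
    if node in relaxed:
        # Relaxed node: no butterfly; decode the two halves independently
        u1, x1 = rscd(llr[:n//2], frozen[:n//2], relaxed, node + (0,))
        u2, x2 = rscd(llr[n//2:], frozen[n//2:], relaxed, node + (1,))
        return np.concatenate([u1, u2]), np.concatenate([x1, x2])
    # Fully polarized node: Arikan's recursion
    u1, x1 = rscd(f(llr[:n//2], llr[n//2:]), frozen[:n//2], relaxed, node + (0,))
    u2, x2 = rscd(g(llr[:n//2], llr[n//2:], x1), frozen[n//2:], relaxed, node + (1,))
    return np.concatenate([u1, u2]), np.concatenate([(x1 + x2) % 2, x2])
```

Note how the relaxed branch needs neither f nor g evaluations, which is exactly where the complexity reduction of the RSCD comes from.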

From the above description, LR calculations for levels are required for decoding conventional FP polar codes. However, the decoding complexity of relaxed polar codes is linearly reduced by the ratio of relaxed nodes in the polarization tree, since no LR calculation is required at relaxed nodes.

The relaxed successive cancellation decoding discussed above is extended to relaxed successive cancellation list (SCL) decoding of RP codes. SCL decoding of polar codes has been shown to offer considerable improvement over regular SC decoding [25]. In the SCL decoder, instead of only one path, i.e. a sequence of decoded information bits, up to decoding paths are considered at each decoding stage. The decoding paths are updated as the decoder evolves. At each stage of the decoding process where an information bit is decoded, both options and are considered and, hence, the number of decoding paths is doubled to at most . This extended list of size up to is then pruned, based on a maximum-likelihood metric, to obtain a list of size of the locally most likely paths. In SCL decoding, there are up to likelihood ratios of at each node in the decoding trellis, and up to parallel recursive calculations, as in and , are performed at the node. In relaxed SCL decoding, if a node is relaxed, then there are parallel decoding equations as in and . The operations for picking the most likely paths remain the same for relaxed SCL.
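The list-management step can be illustrated in isolation: each surviving path forks on the next information bit, its path metric is updated, and the doubled candidate list is pruned back to the list size. The metric update shown (a standard LLR-sign penalty) and all names are assumptions of this sketch; the recursive LR calculations feeding it are omitted.

```python
import heapq

def extend_and_prune(paths, llrs, L):
    """One SCL step for an information bit (sketch).
    paths: list of (metric, bits) with lower metric = more likely;
    llrs: the decision LLR each path computed for the current bit,
          aligned one-to-one with `paths`;
    L: maximum list size. Returns the L best extended paths."""
    candidates = []
    for (metric, bits), l in zip(paths, llrs):
        for u in (0, 1):
            # Penalize a decision that disagrees with the LLR's sign
            penalty = abs(l) if (l < 0) != (u == 1) else 0.0
            candidates.append((metric + penalty, bits + [u]))
    return heapq.nsmallest(L, candidates, key=lambda c: c[0])
```

Since the pruning operates only on path metrics, it is identical for relaxed and fully polarized codes, as stated above; relaxation changes only how the per-path LLRs are produced.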

### 6.3Relaxed polarization versus simplified successive cancellation decoding

Whereas relaxed polarization results in the construction of a code different from that obtained by Arikan’s full polarization, the SSCD [19] is a simplified decoder for a specific code. In fact, as will be clarified below, the SSCD can also be used to further reduce the complexity and latency of decoding relaxed polar codes.

By construction, relaxed polarization attempts to maximize the number of rate-one nodes and rate-zero nodes by relaxing the polarization of sufficiently good and sufficiently bad bit-channels, respectively. Rate-1 and rate-0 nodes are nodes which have all their descendants in the good channel set and the bad-channel set, respectively. As was clarified in Section 6.2, no encoding or decoding operations are done at the relaxed nodes.

The SSCD identifies the rate-1 and rate-0 nodes in the code, and reduces the operations required to decode the corresponding constituent codes. Hence, SSCD does not offer complexity reductions at the encoder. Since a rate-0 node only has frozen bits at its output, its constituent tree does not need to be traversed during decoding, as the leaf values are known a priori. The output bits of the tree rooted at a rate-1 node can be found by simple hard decisions on the soft likelihood ratios at that node. However, since these bit-channels were polarized at the encoder, the input bits at the rate-1 node need to be recovered by a step similar to re-encoding in order to obtain the information bits.
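Identifying rate-1 and rate-0 nodes from the good-channel set is a simple recursive check over leaf index ranges. The sketch below assumes the good set is given as a set of leaf indices; the labels and the dictionary representation are illustrative.

```python
def classify_nodes(good, lo, hi, out=None):
    """Label the subtree over leaves [lo, hi) as 'rate1' (all leaves in
    the good set), 'rate0' (no leaves in the good set), or 'mixed'.
    Returns a dict mapping (lo, hi) -> label. Maximal rate-1/rate-0
    subtrees are not recursed into, as in the SSCD."""
    if out is None:
        out = {}
    n_good = sum(1 for i in range(lo, hi) if i in good)
    if n_good == hi - lo:
        out[(lo, hi)] = 'rate1'
    elif n_good == 0:
        out[(lo, hi)] = 'rate0'
    else:
        out[(lo, hi)] = 'mixed'
        mid = (lo + hi) // 2
        classify_nodes(good, lo, mid, out)
        classify_nodes(good, mid, hi, out)
    return out
```

A decoder can precompute this map once per code and skip (rate-0) or hard-decide (rate-1) the corresponding subtrees at run time.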

Hence, relaxed polarization offers complexity and latency reductions at both the encoder and the decoder, while SSCD only reduces the decoding complexity and latency compared to Arikan’s successive cancellation decoder. The decoding complexity reduction for the SSCD is calculated as for the relaxed code, where rate-1 and rate-0 nodes contribute to the complexity reduction in the same way as relaxed nodes, and the re-encoding complexity at the rate-1 nodes is neglected.

Next, we compare the reductions in decoding latency. As described in the previous section, a polarized node requires three clock cycles to calculate the even and odd LRs and then compute its hard-decision estimate from the hard-decision estimates of its pair of children using the encoding operation. Hence, the total decoding latency with successive cancellation decoding for a polar code of length can be assumed to be . Consider the RP code decoded with the RSCD. A BC-relaxed node requires no operations and hence contributes nothing to the latency. A sub-tree of GC-relaxed nodes requires only one clock cycle at its root to calculate its hard-decision estimates. Similarly, for the SSCD, rate-0 nodes contribute nothing to the latency, and a sub-tree of rate-1 nodes requires one clock cycle. However, since this rate-1 constituent code is fully polarized, if its root is at level , then additional clock cycles are required for re-encoding to recover the information bits at the leaf nodes.
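Under the per-node cycle counts just described, the latencies can be compared with a small recursion. The sketch below follows the stated model (three cycles per polarized node, one cycle at the root of a GC-relaxed subtree, zero for BC-relaxed subtrees); charging one cycle per leaf hard decision and treating all costs as uniform clock cycles are assumptions of the sketch.

```python
def rscd_latency(n, node, gc_relaxed, bc_relaxed):
    """Decoding latency (clock cycles) of RSCD for a subtree with n
    levels below `node` (a tuple of 0/1 moves from the root), under:
    BC-relaxed subtree: 0 cycles; GC-relaxed subtree: 1 cycle at its
    root; leaf: 1 cycle (assumed); polarized node: 3 cycles plus the
    latencies of both children, decoded sequentially."""
    if node in bc_relaxed:
        return 0
    if node in gc_relaxed:
        return 1
    if n == 0:
        return 1  # leaf hard decision (an assumption of this model)
    return (3
            + rscd_latency(n - 1, node + (0,), gc_relaxed, bc_relaxed)
            + rscd_latency(n - 1, node + (1,), gc_relaxed, bc_relaxed))
```

For the SSCD, a rate-1 subtree rooted deeper in the tree would add its re-encoding cycles on top of this, which is why RSCD on MRP codes can undercut SSCD latency even at equal complexity.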

To this end, there are two important observations to make.

Firstly, the SSCD can be combined with RSCD to decode RP codes. The SSCD as proposed in [19] is applied to Arikan’s fully polarized code, and will be denoted SSCD FP. Since the relaxed polar construction described above relaxes the nodes before determining the good-channel set, there can exist rate-1 and rate-0 nodes in the resulting code which have not been relaxed. This implies that SSCD can also be used to decode relaxed polar codes: after determining the good-channel set, the rate-1 and rate-0 nodes are identified, and the operations at those rate-1 and rate-0 nodes which have not been relaxed by construction are simplified as in SSCD. Hence, combined SSCD and RSCD on RP codes, denoted by SSCD RP, further reduces the decoding complexity of RP codes. Moreover, since RP codes are constructed to have more rate-1 and rate-0 nodes through the relaxation operation, SSCD RP will often have reduced decoding complexity compared to SSCD FP.

Secondly, the relaxed polar code construction can be modified such that all rate-1 and rate-0 nodes of the fully polarized code are relaxed at the encoder. This modified relaxed polarization (MRP) construction proceeds by first constructing Arikan’s fully polarized code, selecting the good-channel set according to the desired rate or target error probability, finding the modified relaxed polarization map as the one that relaxes all rate-1 and rate-0 nodes in the FP code, and then encoding according to this map. The good-channel set for the MRP code is fixed to that of the FP code. If only the rate-1 nodes are relaxed, the code is called GC-MRP. If both the rate-1 and rate-0 nodes are relaxed, the code is called AC-MRP. Since all rate-1 and rate-0 nodes are already relaxed, the combined SSCD-RSCD will not produce additional complexity or latency reductions compared to RSCD when decoding the MRP code. Furthermore, neglecting the re-encoding complexity required by the SSCD at rate-1 nodes, RSCD on the modified RP codes has the same decoding complexity but lower decoding latency compared to SSCD on FP codes. MRP codes also have the additional advantage of lower error rates than their corresponding FP codes, as shown in Lemma ?.

The complexity reductions of RSCD RP, SSCD FP, and SSCD RP are compared in Figure 7 for two cases: GC-RP versus SSCD applied to rate-1 nodes only, denoted by SSCD(1), and AC-RP versus SSCD applied to both rate-1 and rate-0 nodes, denoted by SSCD. The complexity reductions are shown for a code of length on the binary erasure channel. The FP codes are constructed at a rate equal to of the channel capacity. To construct the RP codes, the error probability of the good-channel set of the FP code is calculated and used to compute the relaxation thresholds via and . The asymptotic bound on the CR with GC-RP is the capacity, as proved in Theorem ?, and is also shown. Note that the results of the figure are in line with the discussion above. Considering GC-RP and rate-1 nodes only, RSCD GC-RP can offer a higher complexity reduction than SSCD(1), especially at higher rates. However, after taking the bad-channel nodes and the rate-0 nodes into account, SSCD FP has a higher complexity reduction than RSCD RP, which implies that the bad-channel relaxation threshold can be made more aggressive without degrading the performance. Across all rates, the combined RSCD-SSCD on RP codes has the highest complexity reduction. RSCD on the MRP codes is shown in Figure 8 to have the lowest latency, compared to SSCD on FP codes and RSCD on RP codes. It is observed that in the case of GC-RP, the decoding latency decreases at higher code rates due to the increase in the number of relaxed nodes. For AC-RP, the latency increases with the code rate, since GC-relaxed (or rate-1) nodes require more latency than BC-relaxed (or rate-0) nodes, as described above. It has also been observed that the percentage of latency reduction increases with the code length.

### 6.4Performance on the AWGN channel

The achievable complexity reduction is analyzed by actual construction of relaxed polar codes on AWGN channels in Figure 9. An AWGN channel with binary-input capacity is used to calculate upper and lower bounds on the bit-channel error probabilities. The CR at different code lengths , and is logged at different target FERs . The rate achievable by construction of the FP code at each target FER is also logged. It is observed that a larger target FER allows a higher rate, since more bit-channels can be accumulated in the good-channel set. The CR also increases with the target FER , due to the increase of the relaxation threshold , despite the increase in the code rate. Since the number of polarization levels increases with the code block length , the achievable CR from RP increases with . The effect on the CR of bad-channel relaxation becomes more visible at higher target FERs, as also becomes lower.

The error-rate performance with actual relaxed successive cancellation decoding of the RP codes is shown in Figure 10, for binary phase shift keying (BPSK) on an AWGN channel with variance , as a function of the signal-to-noise ratio (SNR). A practical code length of is assumed, with a near half-rate code of , respectively. The code is constructed with Algorithm 1, assuming an AWGN channel with binary-input capacity , and with . For simpler comparison, both the GC-RP and AC-RP codes use the same good bit-channel set found by construction (Stage 4) of the AC-RP code. However, the FP good-channel set is optimized for the FP polar code. It is observed that the frame error rate (FER) and bit error rate (BER) performances of the GC-RP code are similar to those of the AC-RP code. Although the RP codes can have a slightly higher FER than the FP code due to the different information sets, it is observed that the RP codes have lower information bit error rates than the FP codes. This verifies the proper design and selection of relaxation thresholds for the RP codes. Another important observation is that the proposed construction of relaxed polar codes is robust: it performs well over the whole range of simulated SNRs, although the codes are constructed for a particular SNR.

The performance of the relaxed SCL decoder is compared with that of the regular SCL decoder in a numerical example. The simulations are done for code block length and rate . First, the fully polarized polar code of rate is constructed for an AWGN channel at dB. Then, the all-channel modified relaxed polar code is constructed by considering the same set of information bit indices. The complexity reduction ratio of the modified relaxed polar code is from the good-channel relaxation only and from the all-channel relaxation. Regular SC decoding of the FP code and RSCD of the RP code are simulated and compared over the AWGN channel for the constructed codes, which corresponds to the case with list size . Furthermore, the relaxed SCL decoder, as discussed in Section 6.2, and the regular SCL decoder, with a maximum list size of 32, are simulated and compared as well. For list decoding, the polar information bits are concatenated with a CRC code of length 16, where the rates are adjusted so that the actual information rate is , i.e., the information block length of the polar-CRC code is increased to . The simulation results are shown in Figure 11 and show about dB SNR gain with list decoding. It is observed that with successive cancellation decoding, the RP code has a slightly better FER than the FP code. Since the RP code has the same information set as the FP code, this can be justified by Lemma ?. Furthermore, a better bit error rate (BER), with an SNR gain of up to dB, is observed for the RP code compared to the FP code.

## 7Conclusion

In this work, a new paradigm for polar codes, called relaxed polar coding, is investigated. In relaxed polar codes, a bit-channel is not further polarized if it has already been polarized to be sufficiently good or sufficiently bad. Hence, encoding and decoding of relaxed polar (RP) codes have lower computational and time complexities than those of conventional polar codes. RP codes also have lower space complexity than conventional polar codes in fixed-point hardware implementations, due to the smaller number of bits required to store the likelihood ratios. This has the compound effect of decoder implementations with lower power consumption. It is proved in this work that, similar to conventional polar codes, RP codes are capacity achieving. It is also shown that, with proper design, RP codes have lower error rates than conventional polar codes of the same rate. Constructions of RP codes on the binary erasure channel and on general BMS channels are described. Asymptotic and finite-length bounds on the complexity reduction achievable by relaxed polar coding are derived and verified for the binary erasure channel against actual constructions. The relaxed successive cancellation decoding (RSCD) of relaxed polar codes is described. The successive cancellation list decoder for polar-CRC codes [25] is also modified for list decoding of relaxed polar-CRC codes. Moreover, we discuss how simplified successive cancellation decoding [19] can be performed on top of RSCD to further reduce the decoding complexity and latency of RP codes. For a code of rate and length , the results show that a decoding complexity reduction ratio of and a decoding latency reduction ratio of are possible with relaxed successive cancellation decoding of RP codes on the BEC. It is verified by numerical simulations on the AWGN channel that the information bit error rates of properly designed RP codes are at least as good as those of conventional polar codes of the same rate.

Next, we discuss possible directions for future work. Whereas the derived bounds on the complexity reduction ratios on the BEC have explicit closed-form formulas, the numerical results showed that there is room to derive tighter bounds. Due to the recursive calculations required to compute the Bhattacharyya parameter of an arbitrary bit-channel, the exact bounds are expected to be recursive in nature. For general BMS channels, it is more difficult to obtain closed-form bounds, as polarization results in bit-channels with exponentially large output alphabets. Another point is that we considered the construction of relaxed polar codes based on Arikan’s polarization matrix. This construction can be readily extended to the general case of polarization matrices, where [12]. Also, it is noted that relaxing the good bit-channels results in rate-1 constituent codes. These bit-channels can be further concatenated with other codes to further reduce the error probability. In fact, the interleaved concatenation scheme for polar codes [14] adaptively concatenates the better bit-channels with outer codes whose rates are higher than those concatenated with the worse bit-channels, in order to maintain the target code rate or the target error performance of the concatenated code. When constructing concatenated relaxed polar codes, the adaptive concatenation scheme can be modified to take into account, or jointly optimize, the selection of the relaxed bit-channels.

## Acknowledgement

The authors would like to sincerely thank the associate editor and the reviewers for their careful review of this paper and for their valuable comments which have improved its quality.

## Appendix

For any DMC , let denote the probability of error of under ML decoding. Let be a BMS channel, where is the binary alphabet. Then

Let the channel be polarized using Arikan’s butterfly. Let the polarized bit-channels be denoted by and . Then it is known that

and

Suppose that the size of the output alphabet is . Let . Then for , let

and

Then

and

The size of the output alphabet of is . For any pair ,

Notice that . Therefore,

where the last equality follows by and . This proves the first part of the lemma.

The size of the output alphabet of is . For any pair , there are two corresponding elements in the output alphabet of . Then

Then