Optimized IR-HARQ Schemes Based on Punctured LDPC Codes over the BEC
We study incremental redundancy hybrid ARQ (IR-HARQ) schemes based on punctured, finite-length LDPC codes. The transmission is assumed to take place over time-varying binary erasure channels, such as mobile wireless channels at the application layer. We analyze and optimize the throughput and delay performance of these IR-HARQ protocols under iterative, message-passing decoding. We derive bounds on the performance achievable by such schemes, and show that, with a simple extension, the iteratively decoded IR-HARQ protocol based on punctured LDPC codes can be made rateless and can operate close to the general theoretical optimum for a wide range of channel erasure rates.
In communication networks today, transmissions almost always take place over time-varying channels, because of, for example, the channel's physical nature (e.g., wireless) or the length of a session (e.g., downloading a large file). Traditional channel coding schemes are inadequate in such circumstances because they have fixed redundancy matching only a particular channel condition. Similar problems arise in transmission to multiple users over (non)varying but different channels. Several recently proposed and/or implemented coding schemes address the time-varying and multiuser communication scenarios, such as hybrid ARQ at the physical layer and Raptor codes at the application layer.
Hybrid ARQ transmission schemes combine conventional ARQ with forward error correction. A scheme known as incremental redundancy hybrid ARQ (IR-HARQ) achieves higher throughput efficiency by adapting its error-correcting code redundancy to varying channel conditions. Because of that, the scheme has been adopted by a number of standards for mobile phone networks. IR-HARQ is considered to be one of the most important features of the CDMA2000 1xEV-DO Release 0 and Revision A systems , . A historical overview of HARQ schemes, up to 1998, can be found in . For a survey of more recent developments, we direct the reader to  and references therein. In the third-generation wireless standards, the IR-HARQ scheme resides in the physical layer and operates over time-varying fading channels. The scheme is based on a turbo code dating back to the IS-95 standard. A possible replacement of this code by an LDPC or a fountain code was considered in .
Fountain codes are primarily designed to operate over erasure channels. They have superior performance in applications in which the channel variations are large and/or cannot be reliably determined a priori. Because of this robustness, some classes of Fountain codes have been adopted into multiple standards, such as within the 3GPP MBMS standard for broadcast file delivery and streaming services, the DVB-H IPDC standard for delivering IP services over DVB networks, and DVB-IPTV for delivering commercial TV services over an IP network, and are presently being considered for implementation in LTE eMBMS.
We here consider a hybrid ARQ scheme based on punctured LDPC codes over the BEC. LDPC codes have been chosen as an instance of capacity-approaching codes. They are theoretically well understood, and popular in practice not only because of their error/erasure rate performance, but also because they have simple encoders and decoders. In this particular application, capacity-approaching LDPC codes are of interest because they can be punctured, as explained in , such that the resulting punctured ensemble is also capacity approaching. Most of the developed results can be easily extended to other punctured sparse-graph codes.
The performance of the HARQ scheme is measured by the throughput and the delay from the beginning of the coded data transmission until the moment when the information has been successfully decoded. The goal is to have high throughput and low delay, but only a certain tradeoff between these two quantities is attainable, and finding it is the central question in analyzing HARQ schemes. One of our goals is to characterize the tradeoff between the average throughput and the average delay, and to show how to run an HARQ scheme to achieve various operating points. Note that the average throughput and the average delay have been intensively investigated. However, the obtained results only give bounds, either under the maximum-likelihood decoding assumption (e.g., [5, 7]), or under (more practical) iterative decoding but based on the bit error probability (e.g., ), which means that the bound is tight only for large code lengths. The approach taken in this paper is based on the block error performance under iterative decoding, and we use the finite-length scaling results on punctured LDPC code ensembles, as developed in . We also show how our LDPC-code-based IR-HARQ scheme can be made rateless.
The main contributions of this work are as follows: (i) we derive tight approximations of the average throughput and delay as functions of certain parameters of the used code ensemble and of the considered IR-HARQ scheme; (ii) we show how to choose these parameters in order to optimize both the throughput and the delay; (iii) we propose a rateless-like IR-HARQ scheme, based on LDPC codes, and derive tight bounds on its average throughput and delay.
This paper is organized as follows: In Sec. II, we describe our IR-HARQ scheme and present expressions for its average throughput and delay. In Sec. III, we define the finite-length rate-compatible LDPC codes used in the rest of the paper. Section III-C presents a model of the IR-HARQ scheme based on LDPC codes. In Sec. IV, we define the optimization problem to determine the best code and protocol parameters. Section V presents a modification of the IR-HARQ scheme based on LDPC codes, which enlarges its working region, and compares the modified scheme with an HARQ scheme based on LT codes. Finally, in Sec. VI, we discuss our observations and some possible extensions.
II Incremental Redundancy Hybrid ARQ Model
II-A Multiple Transmissions Protocol and Channel Model
We analyze a particular retransmission protocol called Incremental Redundancy Hybrid ARQ (IR-HARQ), with the following multiple transmission model of [10, 11]: at the transmitter, the user data bits are encoded by a low rate code, referred to as the mother code. Initially, only a selected number of encoded bits are transmitted, and decoding is attempted at the receiving end. If decoding fails, the transmitter, notified through the feedback, sends additional encoded bits, thus incrementing the redundancy. Besides the information about the success/failure of the transmission, the feedback may also carry the channel erasure rate information, to help the transmitter decide to which extent to increment the redundancy. Upon completion of the new transmission, decoding is again attempted at the receiving end, where the new bits are combined with those previously received.
The described procedure is repeated after each subsequent transmission request until all the encoded bits of the mother code are transmitted. The channel is modeled as a time-varying BEC such that the channel erasure probability during the transmission of one block of encoded bits is constant and changes from one block transmission to another. We denote the channel erasure probability for transmission as . That the channel erasure probability does not change during the transmission of one block is a reasonable assumption as the block transmission duration is usually chosen to be smaller than the coherence time of the transmission channel. This approach is used further in the paper, namely in Section IV-A, when the maximum number of transmissions is chosen.
The main design parameters of the IR-HARQ scheme are [10, 11]: the maximum possible number of transmissions for one block of user data and the fractions , , of encoded bits assigned to transmission . The maximum number of transmissions is usually predefined by the protocol, while the fractions ’s can be either predefined or calculated before each transmission, taking into account the feedback information about the previous channel erasure rates.
To analyze the IR-HARQ scheme, we adopt a probabilistic model in which the ’s are seen as probabilities, i.e., in which the transmitter assigns a bit to transmission with probability .
Clearly, the transmitter also has the constraint (known as rate-compatible puncturing) to assign to transmission only those bits that have not been assigned to any of the previous transmissions.
Even with this probabilistic model, it is possible to make the scheme rate compatible as follows:
Before the IR-HARQ protocol starts:
1) For each encoded bit, generate a number independently and uniformly at random over .
2) Determine and (or all the 's if necessary).
3) Compute as .
4) Each bit such that is assigned to transmission .
If transmission fails for :
5) Determine (if not yet determined).
6) Each bit such that is assigned to transmission .
If transmission fails:
7) Transmit all remaining bits.
In the IR-HARQ transmission protocol above, the transmitter is assumed to have already accumulated some useful data to be sent, so the queuing process is not considered.
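The assignment procedure above can be sketched in code. The sketch below is a minimal illustration, assuming the fractions are given as a list of probabilities; the function name and variable names are ours, not the paper's. A single uniform random number per bit, compared against the cumulative sums of the fractions, yields a rate-compatible assignment: the set of bits assigned to transmission j is fixed before the later fractions are even determined.

```python
import random

def assign_transmissions(n_bits, fractions, seed=0):
    """Assign each of n_bits encoded bits to a transmission index.

    fractions[j] is the fraction of encoded bits intended for
    transmission j+1; the fractions must sum to at most 1.  Bits whose
    uniform draw exceeds every cumulative threshold are 'remaining'
    bits, sent only if all scheduled transmissions fail.
    """
    rng = random.Random(seed)
    # cumulative thresholds a_1 <= a_2 <= ... <= a_M
    cum = []
    total = 0.0
    for f in fractions:
        total += f
        cum.append(total)
    assignment = []
    for _ in range(n_bits):
        u = rng.random()
        # the bit goes to the first transmission j with u < a_j
        for j, a in enumerate(cum, start=1):
            if u < a:
                assignment.append(j)
                break
        else:
            assignment.append(len(cum) + 1)  # remaining bits
    return assignment
```

Because each bit's uniform draw is generated once, extending the list of fractions never reassigns a bit that an earlier transmission already claimed, which is exactly the rate-compatibility property.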
In Section IV we determine how the 's are chosen. The criterion for this choice is to optimize the performance of the scheme, which is given by its throughput and delay.
II-B Performance Measures
Two standard measures of ARQ protocol efficiency are the throughput and the delay, defined as follows.
The throughput of a retransmission scheme is the number of user data bits accepted at the receiving end in the time required for transmission of a single bit.
The delay of a retransmission scheme is the number of bits that must be transmitted in order to receive the useful information (user data bits).
In what follows, we are interested in the average throughput and the average delay . We have the following lemma:
Consider an IR-HARQ scheme with at most transmissions and a set of fractions . Let the underlying mother code be of length and of rate . Denote by the probability that it takes exactly transmissions for the decoding to be successful. Then the average throughput and delay are determined by the following expressions
Proof. The probability that one of the transmissions is successful is . Because our protocol is limited to transmissions, if none of these transmissions is successful, the throughput is equal to . When one of the transmissions is successful, the number of user data bits communicated to the receiver is . The number of encoded bits sent to the receiver through the th transmission is . So, the average throughput is given by (1). The calculation for is similar.
The expressions (1) for and (2) for implicitly assume that the feedback from the receiver to the transmitter is instantaneous. In practice the delay of the feedback transmission is positive, and we can introduce it in the above expressions as follows. Let the transmission time of one bit in the forward direction be . Since the feedback propagation delay, i.e. the time interval between two transmissions, is , it is equivalent to the time needed to transmit bits in the forward direction. Then the expression for becomes
where the term is proportional to the average feedback transmission delay. This term grows with the number of transmissions. On the other hand, note that the highest throughput can be achieved if the receiver is given a chance to attempt decoding upon receiving each additional bit, that is when .
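The quantities discussed above can also be estimated empirically. The sketch below is a toy Monte Carlo simulation of the scheme over a BEC; the idealized decoder model (success as soon as the fraction of unerased coded bits reaches 1 - eps_star) and all names are our assumptions for illustration, not the paper's analysis.

```python
import random

def simulate_ir_harq(k, n, fractions, eps, eps_star, trials=1000, seed=1):
    """Monte Carlo estimate of average throughput and delay over a BEC.

    k: user data bits; n: mother-code length; fractions[j]: fraction of
    coded bits sent in transmission j+1; eps: channel erasure
    probability; eps_star: assumed decoding threshold of the code.
    Toy decoder model: decoding succeeds once the number of unerased
    coded bits reaches (1 - eps_star) * n.
    """
    rng = random.Random(seed)
    total_sent = 0
    successes = 0
    for _ in range(trials):
        received = 0   # unerased coded bits accumulated so far
        sent = 0       # coded bits transmitted so far
        ok = False
        for f in fractions:
            m = round(f * n)
            sent += m
            # each transmitted bit is erased independently w.p. eps
            received += sum(1 for _ in range(m) if rng.random() > eps)
            if received >= (1 - eps_star) * n:
                ok = True
                break
        total_sent += sent
        successes += ok
    avg_delay = total_sent / trials              # bits sent per data block
    avg_throughput = k * successes / total_sent  # user bits per bit-time
    return avg_throughput, avg_delay
```

With a noiseless channel (eps = 0) and fractions chosen so the first transmission already delivers enough bits, the simulation reproduces the expected one-shot behavior, which is a useful sanity check before adding channel variation.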
The expression (2) for throughput becomes equal to its counterpart in  when . The authors of  expressed the quantity in terms of the probability that the asymptotic (i.e., as the codelength tends to infinity) bit erasure rate at transmission goes to , i.e., . For LDPC codes, this probability has been computed with the help of density evolution. Clearly, is a lower bound on the failure probability at transmission , which thus gives an upper bound on and a lower bound on . We next derive expressions for these asymptotic bounds, while tighter bounds for the finite-length case will be presented in Section III-C.
Consider an example of sparse-graph codes. A randomly chosen code from a sparse ensemble of length has successful iterative decoding with high probability when the channel erasure probability is smaller than , where is the so-called finite-length iterative decoding threshold. We will discuss for a particular case of LDPC codes in Section III. Now we can state the following result:
Consider an IR-HARQ scheme based on a sparse-graph code of rate and iterative decoding threshold . The following bounds hold:
Proof. Consider the limiting case (that is, bit-by-bit transmission), since the highest throughput can be achieved if the receiver is given a chance to attempt decoding upon receiving each additional bit, that is, when . The smallest fraction of bits that are sufficient for successful decoding is . The channel with erasure probability passes on average a fraction of bits unerased. Hence, the smallest fraction of coded bits to be sent by the transmitter in order to receive a fraction of bits on average is
Note that , and (4) follows immediately.
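A closed form consistent with this argument (an assumption on our part, since the bound's exact expression is elided above) is: a fraction 1 - eps_star of unerased coded bits suffices for decoding, so under bit-by-bit transmission the smallest fraction of coded bits to send is (1 - eps_star)/(1 - eps), giving throughput at most R(1 - eps)/(1 - eps_star).

```python
def throughput_upper_bound(R, eps_star, eps):
    """Asymptotic throughput bound for bit-by-bit IR-HARQ over a BEC.

    Assumed form: the receiver needs a fraction (1 - eps_star) of the
    codeword unerased, so the transmitter sends a fraction
    (1 - eps_star) / (1 - eps) of coded bits on average; the throughput
    is then R * (1 - eps) / (1 - eps_star).  For eps > eps_star even
    the full codeword fails to decode, so the bound drops to zero.
    """
    if eps > eps_star:
        return 0.0
    return R * (1.0 - eps) / (1.0 - eps_star)
```

Note that since eps_star cannot exceed the capacity bound 1 - R, the returned value never exceeds the channel capacity 1 - eps.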
III Performance of (Punctured) Finite-Length LDPC Codes over the BEC
As we have seen above, the performance of the IR-HARQ scheme depends on the decoding performance after each transmission. We assume that the mother code is an LDPC code. We will see later that the performance after each transmission in this case is related to the decoding performance of the punctured mother code. First let us define the mother code and describe the puncturing technique.
III-A The Mother Code and Puncturing
The mother code is taken at random from an irregular, length- LDPC code ensemble, defined by its degree distributions and . (We refer the reader unfamiliar with LDPC codes and the properties we use below to the textbook .) Each code in the ensemble corresponds to a different Tanner graph, having a fraction of edges incident to variable nodes of degree and a fraction of edges incident to check nodes of degree , respectively.
A code taken at random from an ensemble of -LDPC codes has, with high probability, a bit error probability close to the average bit error probability of the ensemble. We will refer to this property as concentration. This property implies the concentration of the block error probability for the so-called waterfall region of channel parameters, within which . The concentration property allows us to consider only the average performance of an LDPC ensemble (instead of looking at the performance of a particular code) by using ensemble-average analysis techniques.
The performance of iterative decoding averaged over the LDPC ensembles is well understood when is sufficiently large and when LDPC codes are used for a transmission over a channel with some fixed erasure rate . Namely, as long as the channel erasure rate is smaller than the threshold value given by
the iterative message passing algorithm leads to vanishing bit-erasure probability as the number of iterations grows.
Puncturing is a technique to obtain a code of a higher rate from a given code of some rate . It simply means not transmitting (puncturing) a fraction of the encoded bits. The performance of the resulting code depends on the number and the choice of punctured bits. One way to make this choice is at random, depending on the outcome of tossing the same (biased) coin for each variable node. This way of puncturing is often called random puncturing.
Another way to select the bits to puncture is to first choose the degree of the node to be punctured, according to a certain (optimized, degree biased) probability distribution, and then to select a node to puncture uniformly at random from all nodes with the chosen degree. This way of puncturing is often referred to as intentional puncturing. It has been shown  that intentional puncturing outperforms random puncturing, and, even more importantly, it can be designed to conserve the concentration property, whereas random puncturing cannot. Therefore, in what follows we only consider intentional puncturing.
A punctured LDPC ensemble of some length is described by three polynomials: the degree distributions and mentioned before and the puncturing degree distribution , where the ’s are the probabilities with which variable nodes of degree are punctured.
Using this notation, the asymptotic iterative threshold of such a punctured LDPC ensemble that was obtained in , becomes
and its design rate is given by
where is the code rate of the mother ensemble.
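Since the rate formula itself is elided above, the sketch below uses the standard identity for punctured codes (an assumption on our part): if a fraction p̄ of the variable nodes is punctured on average, the design rate becomes R/(1 − p̄).

```python
def punctured_design_rate(R, node_fractions, pi):
    """Design rate of a punctured ensemble (assumed standard identity).

    R: design rate of the mother ensemble;
    node_fractions[i]: fraction of variable nodes of the i-th degree;
    pi[i]: probability that a node of that degree is punctured.
    """
    p_bar = sum(f * p for f, p in zip(node_fractions, pi))
    if p_bar >= 1.0 - R:
        raise ValueError("puncturing too heavy: design rate would reach 1")
    return R / (1.0 - p_bar)
```

For example, puncturing 20% of the nodes of a rate-1/2 mother ensemble yields a rate-0.625 punctured ensemble.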
III-B Finite-Length Performance
We start by introducing some useful notation needed to present the finite-length performance of (punctured) LDPC codes.
Note that the fraction of the variable nodes of degree of an LDPC ensemble is . We denote by the variable node degree distribution. Also, given the puncturing degree distribution , let
Finally, we introduce the following notation:
where and . Here and in the remainder of the paper, primes denote derivatives.
We will make use of the following conjecture from :
Assume transmission takes place over the BEC with erasure probability using a code chosen at random from a punctured LDPC ensemble with length and puncturing degree distribution . Then, with high probability, the block erasure rate is tightly approximated by the following expression
where is the Q-error function and and are the scaling and shift parameters, given by
where satisfies (6), , and
As we can see, and only depend on , and , as well as on polynomials , and . The justification for the conjecture follows the same line of reasoning as for the conjecture of the finite-length scaling law for unpunctured LDPC codes in . Note that the conjecture for unpunctured LDPC codes has been proven in  for a particular case.
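Since the expression itself is elided above, the sketch below uses the standard shift-and-scale form of the finite-length scaling law, P_B ≈ Q(√n (ε* − β n^(−2/3) − ε)/α), as an assumption about the conjecture's shape; α and β are the ensemble-dependent scaling and shift parameters.

```python
import math

def q_function(x):
    """Gaussian tail Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def scaling_block_erasure_rate(n, eps, eps_star, alpha, beta):
    """Finite-length scaling approximation of the block erasure rate.

    Assumed shift-and-scale form:
        P_B ~ Q( sqrt(n) * (eps_star - beta * n**(-2/3) - eps) / alpha )
    where alpha (scaling) and beta (shift) depend on the punctured
    ensemble; the values must be supplied by the caller.
    """
    shifted_threshold = eps_star - beta * n ** (-2.0 / 3.0)
    return q_function(math.sqrt(n) * (shifted_threshold - eps) / alpha)
```

The resulting curve is a smoothed step around the (shifted) threshold, steepening as n grows, which matches the waterfall behavior described above.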
Example 1 (Regular codes)
For regular LDPC codes with parameters and , we have that , where . Moreover, the performance parameters become
where , and are the parameters of the corresponding unpunctured ensemble.
III-C Equivalent Puncturing Model of the IR-HARQ Scheme Based on LDPC Codes
Consider the IR-HARQ scheme described in Section II. Its mother code is an LDPC code chosen at random from the ensemble of given length , with degree distributions and . Since the mother code is irregular, the IR-HARQ scheme is now parametrized by the maximum number of transmissions and the sequence of 's and , where denotes the probability with which a bit of degree is chosen for transmission .
Recall that in Section II, only one value was assigned to transmission . However, if the bits of an irregular code were chosen to be transmitted with probability regardless of their degree, this would correspond to random puncturing and the concentration property would be lost . By introducing , for variable nodes of degree and transmission , we obtain the intentional puncturing scheme and preserve the concentration of the code performance around the average.
The IR-HARQ protocol can be described with the help of the following equivalent punctured code model: the bits that the transmitter chooses to send through the -th transmission can be equivalently seen as obtained by implementing a puncturing device that punctures a bit corresponding to a variable node of degree with probability , where , or, as shown within the protocol described in Section II-A,
Further, assume that a transmission takes place over the BEC with erasure probability . When a bit corresponding to a variable node of degree is assigned to one of the first transmissions, it can be viewed as passing through a channel with average erasure rate (in this case, it is assigned to transmission with probability ). So we can model the IR-HARQ protocol through transmission as the transmission of the punctured mother code over a BEC with average erasure rate
where the considered bit is punctured with probability .
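The averaging described above can be made concrete as follows (the function name and argument layout are ours): a bit not yet transmitted behaves exactly like an erased (punctured) bit, while a bit sent in transmission j is erased with that transmission's channel probability.

```python
def effective_erasure_rate(fractions, eps_list):
    """Average erasure rate 'seen' by the decoder after transmissions 1..m.

    fractions[j]: fraction of coded bits sent in transmission j+1;
    eps_list[j]: channel erasure probability during that transmission.
    Untransmitted (punctured) bits count as erased with probability 1.
    """
    assert len(fractions) == len(eps_list)
    sent = sum(fractions)
    assert 0.0 <= sent <= 1.0
    erased_among_sent = sum(a * e for a, e in zip(fractions, eps_list))
    return (1.0 - sent) + erased_among_sent
```

As a sanity check, once all bits have been sent over channels with identical erasure probability, the effective rate collapses to that channel probability.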
The IR-HARQ protocol outlined below implements our model while conforming to the rate compatible puncturing; it is based on the one introduced in Section II-A.
Since the ’s are linked to the ’s, the IR-HARQ performance can be determined from the performance of punctured versions of the mother code. We now determine the expected throughput and delay of the IR-HARQ scheme. Consider expressions (1) and (2). First we switch to the irregular case by replacing by . Next we describe how the ’s can be determined.
Let denote the event of successful decoding after transmissions, so denotes a decoding failure. Then
Assuming the BEC, . Note that , where is the finite-length average block erasure rate at transmission . Recall that the expression for is given by (8). Therefore, we have for
Note that (14) is not valid for a more general type of transmission channel, where a subsequent transmission may result in a noisier version of the codeword (whereas over the BEC, each subsequent transmission can only bring additional useful information). However, (14) could still be used as an approximation of in the more general case.
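The relation described here can be sketched directly: over the BEC the failure events are nested, so the probability that decoding first succeeds at transmission m is the difference of consecutive block erasure rates, with the convention that the block erasure rate before any transmission is 1.

```python
def success_distribution(block_erasure_rates):
    """Distribution of the transmission index at which decoding first
    succeeds, from the per-stage block erasure rates.

    Over the BEC each new transmission only adds information, so
    p_m = P_B^(m-1) - P_B^(m), with P_B^(0) = 1.  The leftover mass is
    the overall failure probability after all transmissions.
    """
    probs = []
    prev = 1.0
    for pb in block_erasure_rates:
        probs.append(prev - pb)
        prev = pb
    return probs, prev  # (p_1..p_M, residual failure probability)
```

These probabilities are exactly the ingredients needed to evaluate the average throughput and delay expressions of Section II-B.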
By using Conjecture 1 to approximate in (14), one gets an approximation of and for the IR-HARQ scheme. To support the use of Conjecture 1, we present here a figure from  that shows a good match of the approximation to numerical results. In Figure 2, the average throughput of the IR-HARQ scheme with , based on regular LDPC codes of length , is compared with its analytical approximation.
IV Performance Optimization
Using the proposed puncturing model, we aim to optimize the performance of the IR-HARQ transmission scheme based on LDPC codes by deciding which bits should be sent at each transmission. Note that, thanks to the concentration result for punctured LDPC codes, one only has to choose the mother LDPC code and the puncturing degree distributions for each transmission, without choosing a particular LDPC code and/or particular puncturing patterns. The concentration of the punctured LDPC ensemble ensures that the performance of a particular punctured LDPC code, picked at random from the designed ensemble, will be close to the average performance of this ensemble with high probability. Thus our optimization problem is only to choose how many bits on average should be sent in each transmission, rather than which exact bits.
The performance measures that we choose to optimize are the average throughput and the average delay . In previous sections, we have seen that, for finite-length schemes, has a staircase behavior, and thus it can be optimized point-wise, i.e., for some particular operating points on the -axis, one optimizes to obtain the maximum possible throughput for those points.
We begin by assuming that estimates of the erasure probabilities are available at the transmitter. We also fix the acceptable block erasure probability after the maximum number of transmissions (in practice, is dictated by the supported application, i.e., image or voice transmission, video streaming, etc.) and the feedback propagation delay .
In the following section, we discuss the choice of other parameters that should be fixed before the optimization, namely: a) the maximum number of transmissions , b) the codelength , c) a fixed or maximum transmission block size and d) the mother LDPC code ensemble. Then we investigate how to choose the puncturing degree distribution for each transmission , , which leads us to design a rate-adaptable punctured LDPC ensemble, based on the initial ensemble and then adapted to transmission conditions. Finally, we discuss how to obtain an estimate of erasure probabilities if they are not available at the transmitter.
IV-A Choosing the Parameters , , and
In this section, we discuss how one should go about choosing the parameters , , , and , which in general depend on the anticipated IR-HARQ application. The choice of the degree distributions and of the mother LDPC ensemble determines the iterative decoding threshold and the code rate of the ensemble, and consequently, an upper bound on the region of attainable throughputs versus transmission erasure probability. See Fig. 1 and Theorem 1.
This upper bound on the region of attainable throughputs versus transmission erasure probability is achievable when and is sufficiently large. Clearly, for practical schemes, i.e., for small values of and finite of the order of several hundred/thousand bits, the average throughput is smaller. However, if the degree distributions and are chosen in such a way that and the design rate is sufficiently large, they can be good initial choices for finite-length performance optimization. Finally, note that, if the desired block erasure probability is very low (e.g., or lower, depending on the code), this imposes additional constraints on the minimum distance of the code ensemble, and hence on the degree distributions and . Concerning these additional constraints, see, for example, . The choice of the codelength depends on the desired value of , which should be attainable for the given and the chosen -pair. This can be verified using the finite-length analysis from .
The maximum number of retransmissions should be chosen depending on a) the coherence time and b) the delay penalty. The coherence time is the time during which the channel conditions are the same, and it depends on the transmission environment. Note that in our model the instantaneous erasure probability is assumed to be constant during the -th transmission. Therefore, knowing , we can transmit no more than bits in one transmission. From here we obtain that . Since the delay penalty is proportional to the total time of feedback transmissions needed to transmit a packet of data, to keep the delay penalty low one should choose so that the time of one single transmission, proportional to , is large compared to the feedback propagation delay .
In practice, the number of bits sent during one transmission is usually a constant, dictated by the transmission protocol. However, some applications may allow a variable length for the transmission block. To cover both cases, we define as the constant transmission block length in the first case and the maximum transmission block length in the second case. Most often, is fixed and chosen to be .
IV-B Cost Function With a Feedback Penalty
We next modify our optimization problem to address the case when the feedback transmission is not instantaneous but happens with some delay . This delay introduces the feedback penalty into the IR-HARQ transmission, which can be accounted for in the average delay expression as explained by Remark 1.
We start by defining a cost function, which needs to be optimized in order to increase the average throughput and to decrease the average delay. From (1) and (3), the average throughput and delay can be written as
Note that, having expressed and in terms of the same function , one can see that the average throughput is inversely proportional to the average delay. Moreover, if there is no feedback penalty, then and there is no tradeoff between optimizing the throughput and the delay: one achieves both goals by minimizing . In the general case, when , either or can serve as the cost function for the optimization problem. By choosing , one ensures the optimum choice of coefficients to maximize the average throughput, and then is chosen to minimize the average delay. Note that the solutions of two optimization problems, defined in terms of and , are close to each other if the value of is small compared to .
Letting , we finally obtain
IV-C Optimization of Puncturing Degree Distributions
Assuming the channel erasure probabilities are known at the transmitter, the optimization problem reduces to optimizing the puncturing degree distributions , , under the constraint of rate-compatibility, i.e.
In general, this is a non-linear optimization problem, given that depends on the parameters and , which themselves are dependent on the ’s.
We propose to use a gradient-descent optimization algorithm to find a solution, as described below.
For from to , find initial puncturing fractions ’s by assuming that the iterative threshold , given by (6), satisfies . Moreover, the ’s should satisfy one of the following conditions on :
for constant or variable transmission block size, respectively.
Choose the algorithm step size .
For from 1 to , do the following iteration until the optimization process converges:
Using (IV-B), compute , given , .
Find the ’s that minimize
under the following constraints:
Number of bits sent per transmission:
for constant or variable block size.
End of cycle over .
Set the puncturing fractions equal to , .
Below are some details concerning the algorithm:
Initialization of ’s and choice of : The initial values of the puncturing fractions are proposed to be set as if the LDPC code were of infinite length. This is an optimistic choice for the ’s, since a finite-length code will behave worse than an infinite-length one with the same parameters. The fractions are found by linear programming: namely, one chooses puncturing fractions to maximize the code rate of the punctured ensemble, under the conditions of (17). For more details on the optimization procedure, see, for instance, . Note that, for small and high values of , a solution may not exist. This means that the decoder will fail independently of the chosen puncturing fractions. In this case, any puncturing fractions can be chosen, assuming that they are rate-compatible with the optimized puncturing fractions for the later transmissions. Such an initial choice for the puncturing fractions ensures good convergence for the gradient descent algorithm, since it already lies close to an optimal solution (see Conjecture 1 and Remark 2). Hence, the algorithm step size should be chosen quite small, close to .
Minimization of (18): is given by
with and where is a constant,
and , and are parameters of the LDPC ensemble, punctured according to the puncturing polynomial . and can be found by taking the derivatives of (9) and (10), and is obtained by implicit differentiation of the density evolution equation
Note that the optimization problem based on instead of is defined in exactly the same way, except that the terms in (19) will be zero.
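The inner descent loop of the algorithm above can be sketched generically. The sketch below is an illustration, not the paper's exact procedure: it treats the cost as a black box, uses a forward-difference gradient, and projects back onto the transmission-budget constraint by clipping and rescaling; all names are our assumptions.

```python
def optimize_fractions(cost, pi0, node_fracs, budget, mu=0.01, iters=200, h=1e-6):
    """Gradient-descent sketch for one stage of the puncturing optimization.

    cost(pi): black-box cost function to minimize;
    pi0: initial puncturing fractions (e.g., from the asymptotic
         linear-programming design);
    node_fracs[i]: fraction of variable nodes of the i-th degree;
    budget: required average fraction of bits covered this stage;
    mu: step size; h: finite-difference step.
    """
    def project(p):
        # clip to [0, 1], then rescale so sum_i node_fracs[i]*p[i] = budget
        p = [min(1.0, max(0.0, x)) for x in p]
        s = sum(f * x for f, x in zip(node_fracs, p))
        if s > 0:
            p = [min(1.0, x * budget / s) for x in p]
        return p

    pi = project(list(pi0))
    for _ in range(iters):
        base = cost(pi)
        grad = []
        for i in range(len(pi)):
            bumped = list(pi)
            bumped[i] += h
            grad.append((cost(bumped) - base) / h)  # forward difference
        pi = project([p - mu * g for p, g in zip(pi, grad)])
    return pi
```

On a simple quadratic test cost whose unconstrained optimum already satisfies the budget, the loop converges to that optimum, which is a quick way to validate the projection logic before plugging in the real cost of Sec. IV-B.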
IV-D An Example of Optimization
We now consider a particular example of the optimization of an LDPC ensemble for a given value of the average channel erasure probability . The initial parameters are: , , and , where the transmission block size is constant. Denote by the maximum erasure probability that can be tolerated by the LDPC ensemble. We will choose and and optimize the throughput at under the constraint that the iterative decoding threshold .
The following degree distributions were chosen: and . This gives rise to an LDPC ensemble with rate , (from (6)) and (from (8)). The optimized puncturing degree distributions at the initialization stage are
We find that for , i.e., after the first two transmissions a decoder will fail because of an insufficient number of transmitted bits, no matter what puncturing degree distributions are used. and , however, are the best choices for the given initial parameters. Therefore, one needs to perform at least 3 transmissions before starting to decode. Knowing this, we can send the first three coded packets one after another, without waiting for the feedback.
For the initial-stage ’s, the cost function . After the finite-length optimization, we obtain with the following new distributions and :
The average throughput, obtained using the described optimization procedure, is shown by the thick full line in Figure 3. The throughput with puncturing degree distributions obtained at the initialization stage is shown by the thick dashed line. Also, the thick dotted line represents the average throughput, obtained without any optimization by equally partitioning the bits of each degree between transmissions. As we can see, the throughput at has indeed been improved.
This example illustrates the interesting point that, in order to obtain a higher average throughput for some , one should not blindly send the bits with higher degrees first, trying to make the iterative decoder converge faster (which would seem intuitive), but should instead find the optimal puncturing degree distributions for the given . The reason is the following: if one of the first transmissions, carrying a large number of high-degree bits, is unsuccessful, it will cause a large fraction of those bits to be erased, and many more transmissions will be needed in order to accumulate a sufficient number of unerased bits of lower degrees to make the decoder converge.
Note that one can define an optimization problem for more than one target erasure probability, thus optimizing the throughput curve pointwise. Also note that the parameter operates as a regulator of the number of transmissions. If the number of sent bits per packet were unbounded, there would be at most 2 transmissions – for the first transmission, the optimizer would decide to send as many bits as needed to ensure the target at a given , and, if the first transmission were unsuccessful, it would allocate the rest of the bits to transmission 2.
IV-E The Regular Code Case
In the case of regular LDPC codes, the scaling and shift parameters do not depend on the puncturing fraction . Indeed, based on Example 1, it is easy to see that, for punctured regular codes,
where , and are parameters of the initial unpunctured regular ensemble. Since is an increasing function of its argument, is a monotonically increasing function of , and the cost function is therefore minimized by the smallest possible values of , .
IV-F Estimating the ’s at the Transmitter
In general, the channel erasure probabilities are not known at the transmitter and must be estimated before the puncturing degree distributions are optimized. The quality of the estimate depends on the knowledge of the transmission channel statistics (mean, variance, probability distribution) and on the amount of feedback available at the transmitter (1 bit representing an ACK/NACK, the previous channel erasure probability, etc.).
A wealth of literature is available on channel estimation. As examples, we list below a few possible approaches to channel estimation.
Known mean: Let the mean of the channel erasure probability be known at the transmitter. Then the puncturing degree distribution can be optimized as discussed above, assuming , .
Known mean and previous erasure probabilities: Let the mean of the channel erasure probability be known and assume the receiver feeds back to the transmitter the erasure probabilities of the previous transmissions. In this case one can optimize the puncturing degree distributions in real time, i.e., just prior to each transmission. At transmission , the transmitter sends the fraction of coded bits optimized for , since it does not yet have any feedback information. At transmission , however, the estimated erasure probability becomes
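Since the paper's exact estimator is not reproduced here, the sketch below uses a simple convex combination of the known channel mean and the empirical average of the fed-back erasure probabilities, purely as an illustration of the idea; the weighting rule is our assumption.

```python
def estimate_eps(mean_eps, observed):
    """Estimate the erasure probability of the next transmission from the
    known channel mean and the erasure probabilities fed back for the
    previous transmissions (illustrative estimator, not the paper's).
    With no feedback yet (transmission 1), fall back on the known mean;
    as feedback accumulates, weight the empirical average more heavily."""
    if not observed:
        return mean_eps                     # transmission 1: prior mean only
    n = len(observed)
    empirical = sum(observed) / n           # average of fed-back values
    w = n / (n + 1)                         # weight grows with feedback
    return w * empirical + (1 - w) * mean_eps
```

For instance, with a known mean of 0.1 and a single fed-back value of 0.3, the estimate is the midpoint 0.2; after many feedback values, the estimate approaches their empirical average.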
Known probability distribution and 1-bit feedback: Assume that the probability density function is known and it has support . Then, for each transmission , we can estimate
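One natural way to use a known density together with 1-bit feedback is a conditional-mean estimate: a NACK tells the transmitter only that decoding failed, which we model here, as an illustrative assumption, as the erasure probability having exceeded a tolerance threshold. The sketch below computes the conditional mean by midpoint-rule integration; the function names and the failure model are ours, not the paper's.

```python
def conditional_mean(pdf, lo, hi, thresh, n=100000):
    """Numerically estimate E[eps | eps > thresh] for a channel erasure
    probability with density pdf supported on [lo, hi], via the midpoint
    rule.  Used to turn a NACK (modeled as eps > thresh) into an updated
    point estimate of the erasure probability."""
    a = max(lo, thresh)
    h = (hi - a) / n
    num = den = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h       # midpoint of the i-th subinterval
        num += x * pdf(x) * h       # integral of x * pdf(x) over (a, hi)
        den += pdf(x) * h           # probability mass above the threshold
    return num / den
```

As a sanity check, for a uniform density on [0, 1] and a threshold of 0.5 the conditional mean is 0.75.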
Also note that, to ensure good performance, one should choose and in such a way that .
V Rateless Incremental Redundancy Protocols
V-A Rateless Protocols Using Repetition
As can be seen in Figure 1, the IR-HARQ protocols based on punctured codes achieve a high throughput only over a limited region of channel erasure rates. When they are based on iterative decoding and a mother LDPC code with threshold , this region extends from to . Naturally, to cover a larger region, one can choose a mother LDPC code with . However, such a code may have a lower rate , and moreover may be lower than , resulting in a lower throughput in the region (see Figure 1). Compare, for example, the rate regular code with and to the rate regular code with and .
To extend the region of high throughput for a given mother code, we propose to augment the HARQ protocol as follows. If, after the transmission of all the bits in the codeword, decoding still fails, we further increment redundancy simply by repeating the same codeword, using the same . Hence, each coded bit might be transmitted twice through channels with erasure probabilities and . At the receiver, both received values of a bit are combined together. So, after two transmissions, the bit is erased with probability . One can continue transmitting in this manner, making the scheme essentially rateless.
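The combining rule above is easy to verify in simulation: over the BEC, a repeated bit is lost only if both copies are erased, so the combined erasure probability is the product of the two per-transmission erasure probabilities. The sketch below (function names are ours) checks this by Monte Carlo.

```python
import random

def combine(y1, y2):
    """Combine two received values of the same bit over the BEC:
    the bit is recovered unless both copies were erased (None)."""
    return y1 if y1 is not None else y2

def erased_fraction(eps1, eps2, trials=200_000, seed=0):
    """Monte Carlo estimate of the erasure probability of a bit sent
    twice, through BECs with erasure probabilities eps1 and eps2,
    after combining.  Should approach eps1 * eps2."""
    rng = random.Random(seed)
    erased = 0
    for _ in range(trials):
        y1 = None if rng.random() < eps1 else 1
        y2 = None if rng.random() < eps2 else 1
        if combine(y1, y2) is None:
            erased += 1
    return erased / trials
```

With eps1 = 0.6 and eps2 = 0.5, the estimate concentrates around 0.3, matching the product rule used by the IR-Rep-HARQ analysis.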
We call the proposed protocol the incremental redundancy protocol with repetition and denote it by IR-Rep-HARQ. Although repetition is in general not optimal, it takes place only when the channel conditions are bad (), in which case it is actually a good strategy. Note that in the repetition stage we can either retransmit the same blocks as in the first stage, or form new blocks according to the optimized fractions . This translates to generating new values in the protocol of Section II-A. We next derive expressions for the average throughput and the average delay in these two cases.
Repetitions of the same blocks
Assume the IR-Rep-HARQ protocol with repetitions of the same blocks during the second transmission of the codeword. Denote the channel erasure probabilities by for the first transmission and by for the second transmission. Then, similar to (1) and (2), the average throughput and the average delay are given by
where and , with
where is given by (1) and