On the Design of Fast Convergent LDPC Codes: An Optimization Approach

Vahid Jamali, Yasser Karimian, Student Member, IEEE, Johannes Huber, Fellow, IEEE,
Mahmoud Ahmadian, Member, IEEE
Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Erlangen, Germany
K. N. Toosi University of Technology (KNTU), Tehran, Iran
Abstract

The complexity-performance trade-off is a fundamental aspect of the design of low-density parity-check (LDPC) codes. In this paper, we consider LDPC codes for the binary erasure channel (BEC), use the code rate as the performance metric, and use the number of decoding iterations required to achieve a certain residual erasure probability as the complexity metric. We first propose a quite accurate approximation of the number of iterations for the BEC. Moreover, a simple but efficient utility function corresponding to the number of iterations is developed. Using this approximation and utility function, two optimization problems w.r.t. complexity are formulated to find the code degree distributions. We show that both optimization problems are convex. In particular, the problem with the proposed approximation belongs to the class of semi-infinite problems, which are computationally challenging to solve. However, the problem with the proposed utility function falls into the class of semi-definite programming (SDP), and thus the global solution can be found efficiently using available SDP solvers. Numerical results reveal the superiority of the proposed code design compared to existing code designs from the literature.

Low-density parity-check (LDPC) codes, complexity-performance trade-off, density evolution, message-passing decoding, semi-definite programming.

I Introduction

Efficient design of low-density parity-check (LDPC) codes under iterative message-passing decoding has been widely investigated in the literature [1, 2, 3, 4, 5, 6]. Capacity-achieving ensembles of LDPC codes were originally introduced in [1] and [2]. Several upper bounds on the maximum achievable rate of LDPC codes over the binary erasure channel (BEC) for a given check degree distribution were derived in [3]. Moreover, in [4], irregular LDPC codes were designed by optimizing the degree structure of the Tanner graph, achieving code rates extremely close to the Shannon capacity. These codes are referred to as performance-optimized codes since the objective of the code design is to obtain degree distributions which maximize the code rate for a given channel. However, for a performance-optimized code, convergence of the decoder usually requires a large number of iterations. This leads to high decoding complexity and processing delay, which is not acceptable for many practical applications. Hence, the complexity of the decoding process has to be considered as a design criterion for LDPC codes as well. In contrast to performance-optimized codes, codes which are designed to minimize the decoding complexity for a given code rate are referred to as complexity-optimized codes.

The concept of the performance-complexity trade-off under iterative message-passing decoding was introduced in [7, 8, 9, 10]. In [7] and [8], the authors investigated the decoding and encoding complexity required to achieve the capacity of the BEC. Moreover, Sason and Wiechman [10] showed that the number of decoding iterations required for the residual erasure probability to fall below a given value under iterative message-passing decoding scales proportionally to the inverse of the gap between the code rate and the capacity. However, specifically for a code rate significantly below the Shannon capacity, there exist different ensembles with the same rate but with different convergence behavior of the decoder. Therefore, how to find an ensemble which leads to the lowest number of decoding iterations is an important question in code design.

Unfortunately, characterizing the number of decoding iterations as a function of the code parameters is not straightforward. Hence, several bounds on and approximations of the number of decoding iterations have been proposed in the literature [11, 12, 10]. Simple lower bounds on the number of iterations required for successful message-passing decoding over the BEC were proposed in [10]. These bounds are expressed in terms of some basic parameters of the considered code ensemble; in particular, the fraction of degree-2 variable nodes, the target residual erasure probability, and the gap to the channel capacity are considered in [10]. The lower bound proposed in [10] thus suggests that, for a given code rate, the fraction of degree-2 variable nodes has to be minimized for a low number of decoding iterations. However, all other degree distribution parameters are not captured by this lower bound. In [11], an approximation of the number of iterations was proposed and used to formulate an optimization problem for complexity minimization, i.e., a complexity-optimizing problem. The proposed approximation is a function of all degree distribution parameters. However, the resulting optimization problem is non-convex and can only be solved iteratively. Furthermore, it was proved that, under a certain mild condition, the optimization problem considered in each iteration is convex. A similar approach to designing a complexity-optimized code was investigated in [12]. We note that one of the essential constraints that has to be considered in LDPC code design is to guarantee that the residual erasure probability decreases after each decoding iteration. In general, this successful decoding constraint for the BEC leads to an optimization problem which belongs to the class of semi-infinite programming, i.e., the problem has an infinite number of constraints [13, 14]. Semi-infinite optimization problems are computationally challenging to solve. We will investigate this class of optimization problems in more detail in Section III.

The extrinsic information transfer (EXIT) chart [15, 16] and density evolution [17] are two powerful tools for tracking the convergence behavior of iterative decoders. For the BEC, the EXIT chart coincides with the density evolution analysis [10, 18]. For a simple presentation of the performance-complexity trade-off, we analyze a modified density evolution in this paper. In particular, first a quite accurate approximation of the number of iterations is proposed. Based on this approximation, an optimization problem is formulated such that, for any given check degree distribution, a variable degree distribution with a finite maximum variable node degree is found which minimizes the number of decoding iterations required to achieve a certain target residual erasure probability while guaranteeing a desired code rate. Although we prove that this optimization problem is convex, it still belongs to the class of semi-infinite programming. Therefore, we first propose a lower bound on the number of decoding iterations. Based on this, a simple but efficient utility function corresponding to the number of decoding iterations is developed. We show that, by applying this utility function, the optimization problem falls into the class of semi-definite programming (SDP), where the global solution can be found efficiently using available SDP solvers such as CVX [19]. It is worth mentioning that a general framework is developed to prove that the considered problem is an SDP. Thus, this framework may also be used to design LDPC codes w.r.t. other design criteria. As an example, we formulate an optimization problem to obtain performance-optimized codes, i.e., for any given check degree distribution, a variable degree distribution with finite maximum variable node degree is found such that the achievable code rate is maximized. The maximum achievable code rate obtained from the performance-optimizing problem is also used as an upper bound for the desired rate in the considered complexity-optimizing code design.

To summarize, the contributions of this paper are twofold: i) we propose a quite accurate approximation, a lower bound, and a simple utility function corresponding to the number of decoding iterations, and formulate two optimization problems w.r.t. complexity to find the best variable degree distribution for any given check node distribution; and ii) we show that both problems are convex in the optimization variables, and as the main contribution of the paper, we prove that the optimization problem with the proposed utility function admits an SDP representation, which allows the problem to be solved efficiently. Note that the approximation proposed here is derived for the number of iterations between any two general functions and is quite different from the one proposed in [11]. Moreover, the problem formulated using the utility function in this paper is proved to be a semi-definite and convex problem, in contrast to the semi-infinite problems considered in [11] and [12]. Numerical results reveal that, for a given limited number of iterations, the complexity-optimized codes significantly outperform the performance-optimized codes. Moreover, the codes designed with the proposed utility function require a number of decoding iterations which is very close to that of the codes designed with the proposed quite accurate approximation.

The rest of the paper is organized as follows: In Section II, some preliminaries of LDPC codes are recapitulated and the proposed approximation of the number of iterations is devised. Section III is devoted to the optimization problems for the design of complexity-optimized codes based on this approximation and the utility function. The optimization problem for the performance-optimized codes is presented in Section IV. In Section V, numerical results and comparisons are given, and Section VI concludes the paper.

II Problem Formulation

In this section, we first discuss some basic preliminaries of LDPC codes for the BEC. Then, we introduce a modified density evolution and propose an approximation for the number of iterations based on the concept of performance-complexity tradeoff.

II-A Preliminaries

An ensemble of irregular LDPC codes is characterized by the edge-degree distributions $\lambda(x)$ and $\rho(x)$ with $\lambda_i \geq 0$ and $\rho_j \geq 0$. In particular, the fraction of edges in the Tanner graph of an LDPC code that are connected to degree-$i$ variable nodes is denoted by $\lambda_i$, and the fraction of edges that are connected to degree-$j$ check nodes is denoted by $\rho_j$ (degree distributions from the edge perspective) [1]. Moreover, $d_v$ and $d_c$ denote the maximum variable node degree and check node degree, respectively. Furthermore, let

\lambda(x) = \sum_{i=2}^{d_v} \lambda_i x^{i-1}   (1)
\rho(x) = \sum_{j=2}^{d_c} \rho_j x^{j-1}   (2)

be defined as the generating functions of the variable and check degree distributions, respectively [1]. Using this notation, the rate of the LDPC code is given by [20]

R = 1 - \frac{\int_0^1 \rho(x)\,\mathrm{d}x}{\int_0^1 \lambda(x)\,\mathrm{d}x}.   (3)
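As a quick numerical illustration of the rate expression in (3), the integrals of the polynomial degree distributions reduce to sums of the edge fractions divided by the corresponding node degrees. The sketch below uses an illustrative regular ensemble chosen for this example, not one designed in the paper:

```python
# Code rate from eq. (3): R = 1 - (integral of rho)/(integral of lambda),
# using that the integral of x^(i-1) over [0, 1] equals 1/i.
# Degree distributions are given as {degree: edge fraction} dictionaries.

def code_rate(lam_coeffs, rho_coeffs):
    int_lam = sum(l / i for i, l in lam_coeffs.items())
    int_rho = sum(r / j for j, r in rho_coeffs.items())
    return 1.0 - int_rho / int_lam

# Regular (3,6) ensemble: lambda(x) = x^2, rho(x) = x^5.
r = code_rate({3: 1.0}, {6: 1.0})  # 1 - (1/6)/(1/3) = 0.5
```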

We consider a BEC with bit erasure probability $\varepsilon$, i.e., the channel capacity is $C = 1 - \varepsilon$. Assuming message-passing decoding for an ensemble of LDPC codes with degree distributions $(\lambda, \rho)$, the average residual erasure probability over all variable nodes at the $\ell$-th iteration, $p_\ell$, when the block length tends to infinity, is given by [20]

p_\ell = \varepsilon \lambda\big(1 - \rho(1 - p_{\ell-1})\big),   (4)

where $p_0 = \varepsilon$. An essential constraint for successful decoding is that the residual erasure probability decreases after each decoding iteration, i.e., $p_\ell < p_{\ell-1}$. In particular, the condition to achieve a target residual erasure probability $p_{\mathrm{t}}$ is [21]

\varepsilon \lambda\big(1 - \rho(1 - x)\big) < x, \quad \forall x \in (p_{\mathrm{t}}, \varepsilon].   (5)
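The recursion (4) together with the target residual erasure probability can be simulated directly. The following sketch assumes the standard BEC density evolution recursion $p_\ell = \varepsilon\lambda(1-\rho(1-p_{\ell-1}))$ and uses an illustrative regular ensemble, not a code from the paper:

```python
# Density evolution for the BEC, eq. (4): starting from p_0 = eps,
# count iterations until the residual erasure probability drops below p_target.

def evolve(eps, lam, rho, p_target, max_iter=100_000):
    p = eps
    for it in range(1, max_iter + 1):
        p = eps * lam(1.0 - rho(1.0 - p))
        if p < p_target:
            return it
    return None  # no convergence: eps is above the decoding threshold

lam = lambda x: x**2   # regular (3,6) ensemble, threshold ~ 0.4294
rho = lambda x: x**5
iters = evolve(0.40, lam, rho, 1e-6)
```

Below the threshold the decoder converges in a few dozen iterations; above it, the recursion stalls at a non-zero fixed point and `evolve` returns `None`.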

In this paper, we consider the achievable code rate as the performance metric and the number of decoding iterations which yields an average residual erasure probability below the target value as the complexity metric. Note that the decoding threshold has also been considered as a performance metric in the literature, i.e., for a given code rate, the best degree distribution is determined such that the successful decoding constraint in (5) holds for the maximum possible erasure probability. However, the code rate maximization problem for a given erasure probability is in principle equivalent to the threshold maximization problem for a given code rate [11].

In general, the complexity of the decoding process comprises the number of required iterations and the complexity per iteration, which is also referred to as graphical complexity. Formally, graphical complexity is defined in [10] as the number of edges in the Tanner graph per information bit. In this paper, however, our goal is to find a variable degree distribution $\lambda(x)$ for any given check node distribution $\rho(x)$ and code rate $R$. Moreover, for given $\rho(x)$, the number of edges in the Tanner graph is obtained as $E = m / \int_0^1 \rho(x)\,\mathrm{d}x$, where $m$ is the number of check nodes. Furthermore, the code rate is defined as $R = 1 - m/n$, where $n$ is the number of variable nodes, i.e., the length of the code. Therefore, for given $\rho(x)$ and $R$, the graphical complexity is fixed to $E/(nR) = (1-R)/(R \int_0^1 \rho(x)\,\mathrm{d}x)$, and hence not changed by varying $\lambda(x)$. We can conclude that the decoding complexity then depends only on the number of decoding iterations. Moreover, the number of iterations is also commonly used to measure the processing delay of decoding [10].

II-B Number of Decoding Iterations

The expected performance of a particular ensemble over the BEC can be determined by tracking the residual erasure probabilities through the iterative decoding process via density evolution [20, 11]. In this paper, a modified version of density evolution is utilized to determine the number of decoding iterations required to achieve a certain residual erasure probability. To this end, we define the following function

(6)

Note that by this definition, we have separated the known parameters from the optimization variables. Then, the residual erasure probability can be tracked by the iterations between the two curves, cf. eq. (4) and Fig. 1. In particular, the successful decoding constraint in (5) is equivalent to requiring that one curve lies below the other over the relevant interval. Furthermore, for a given code rate, we can conclude from (3) that the area bounded by the two curves is fixed, i.e.

(7)

The above property is well known as the area theorem [16]. Therefore, the problem of code design can be interpreted as a curve-shaping problem subject to the constraints that one curve stays below the other and the area bounded by the two curves is fixed. Asymptotically, as the target code rate approaches the capacity, the area bounded by the two curves vanishes and the number of decoding iterations grows to infinity. However, for a code rate below the capacity, one can find different degree distributions that achieve this rate while exhibiting different convergence behaviors. Therefore, the goal of code design is to shape the curves for the best complexity-performance trade-off. In the following, we utilize the distance concept of the performance-complexity trade-off illustrated in Fig. 1 to propose a quite accurate approximation of the number of iterations. To this end, we state the following definition, see also Fig. 2.


Fig. 1: Modified density evolution for the BEC.
Definition 1

Let two functions with positive first-order derivatives be given, one lying below the other on the considered interval. Then, the number of iterations between them is defined as

(8)

where

(9)

From the above definition, the number of iterations to achieve a certain target residual erasure probability follows directly. We observe that, due to the iterative structure of the decoding process, the number of iterations as a function of the code parameters is a non-differentiable function, which is difficult to handle in code design and does not offer much insight. In the following, we propose a continuous approximation of the number of iterations between two general functions.


Fig. 2: Approximation of the decreasing steps between the two functions.

We first define a function for the distance between the two curves, which plays a key role in characterizing the number of iterations, i.e., it determines the decreasing step at each point. Then, we can rewrite (9) as follows

(10)

where the sum runs over the decreasing steps. Consider a sufficiently small interval within which the distance between the curves can be assumed constant, see Fig. 2. Then, each decreasing step in this interval, whether measured along the abscissa or the ordinate, is fixed. Hence, the number of iterations in this interval is given by

(11)

where $\lceil \cdot \rceil$ denotes the ceiling function. The main idea of extending the above expression to the whole interval is to momentarily ignore the fact that the number of iterations has to be an integer, and to compute the incremental increase in the number of iterations as a function of the incremental change in the residual erasure probability. Notice that a similar method was also used in [11], but a completely different approximation was obtained there. By this, the incremental change in the number of iterations can be written as a function of the incremental change in the erasure probability. To calculate the total number of iterations, we integrate over the whole interval, which leads to

(12)

where the equality defines the approximation of the number of decoding iterations, obtained by the indicated substitutions. Since the incremental change in the number of iterations is treated as continuous, the expression in (12) is an approximation of the number of iterations.
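The idea behind (12) can be checked numerically. Under the assumption that the decrease per iteration at residual erasure probability x is Delta(x) = x - eps*lambda(1 - rho(1 - x)) (one natural curve pair for the BEC; the paper states (12) for general function pairs), the iteration count is approximately the integral of 1/Delta over the interval of interest:

```python
# Integral approximation of the iteration count (in the spirit of eq. (12)):
#   N ~ integral over [p_t, eps] of dx / Delta(x),
# where Delta(x) = x - eps*lam(1 - rho(1 - x)) is the decrease per iteration.

def delta(x, eps, lam, rho):
    return x - eps * lam(1.0 - rho(1.0 - x))

def approx_iters(eps, lam, rho, p_t, n=200_000):
    h = (eps - p_t) / n  # trapezoidal rule on a uniform grid
    total = 0.5 / delta(p_t, eps, lam, rho) + 0.5 / delta(eps, eps, lam, rho)
    total += sum(1.0 / delta(p_t + k * h, eps, lam, rho) for k in range(1, n))
    return h * total

def exact_iters(eps, lam, rho, p_t):
    p, it = eps, 0
    while p >= p_t:
        p = eps * lam(1.0 - rho(1.0 - p))
        it += 1
    return it

lam, rho = (lambda x: x**2), (lambda x: x**5)  # regular (3,6) example
n_apx = approx_iters(0.40, lam, rho, 1e-4)
n_exa = exact_iters(0.40, lam, rho, 1e-4)
```

For this example the continuous estimate tracks the exact staircase count closely; the gap stems from treating the integer-valued iteration count as a continuous quantity.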

In Section V, it is shown for several numerical examples that the approximation (12) of the number of iterations is quite accurate. Moreover, this approximation has two desirable properties compared to the exact number of iterations in (8): i) differentiability, and ii) convexity w.r.t. the optimization variables. The convexity of (12) is investigated in detail in the following section.

III Complexity-Optimized LDPC Codes

In this section, we use the approximation (12) of the number of decoding iterations for the complexity-optimizing code design. Moreover, to facilitate obtaining a complexity-optimized code, another optimization problem is formulated based on a utility function corresponding to the number of decoding iterations.

III-A Code Design with Approximation (12)

In this subsection, we formulate an optimization problem to find a fast convergent LDPC code based on the approximation in (12) for a BEC with given bit erasure probability, target residual erasure probability, and desired code rate. In particular, the following optimization problem is considered to obtain the complexity-optimized LDPC codes

(13)

where the function in the cost is defined in (6). The cost function is indeed the approximation (12), and the constraints comprise the decoding constraint in (5), the code rate requirement, and the restrictions arising from the definition of the degree distribution. The following lemma states that the problem in (III-A) is convex.

Lemma 1

The optimization problem (III-A) is convex w.r.t. the optimization variables in the considered domain.

Proof:

We first note that the rate and degree-distribution constraints in the optimization problem (III-A) are affine in the optimization variables. Moreover, the decoding constraint can be rewritten in an affine form as well. Therefore, it suffices to show the convexity of the cost function. Furthermore, the convexity of the integrand is sufficient for the convexity of the integral, since integration preserves convexity [22]. To show the convexity of the integrand, we show that it has a positive semi-definite Hessian matrix, which is a sufficient condition for convexity [22]. In particular, the Hessian matrix of the integrand is given by

(14)

A matrix is positive semi-definite if all its eigenvalues are non-negative. Moreover, the trace of a matrix is equal to the sum of its eigenvalues. Herein, the Hessian matrix in (14) has rank one, since every column is a scalar multiple of the first column. This implies that all eigenvalues are zero except one. The non-zero eigenvalue is given by

(15)

where the fact that the trace equals the sum of the eigenvalues is used. Thus, the Hessian matrix of the integrand is positive semi-definite, and the integrand, and consequently the cost function in (III-A), are convex. This completes the proof. \qed

Although the optimization problem in (III-A) is convex, it still belongs to the class of semi-infinite programming [13, 14]. In particular, optimization problems with a finite number of variables and an infinite number of constraints, or alternatively an infinite number of variables and a finite number of constraints, are referred to as semi-infinite programs. Herein, the optimization problem in (III-A) has a finite number of variables but an infinite number of constraints, since the decoding constraint must hold at every point of a continuous interval. One way to solve the optimization problem in (III-A) is to approximate the continuous interval by a discrete set [14]. We note that a discrete set is also required to numerically calculate the integral expression for the number of iterations since, in general, no closed-form solution is available. Thereby, we obtain a finite number of constraints, and the cost function can be expressed as a finite sum. For a uniform discretization, the grid points are equally spaced over the interval.

The solution obtained via this discretization method is asymptotically optimal as the grid becomes dense. However, it is in general computationally expensive. Problems incorporating the successful decoding constraint for the BEC given in (5), such as those considered in [11, 12], face the same computational burden.
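As a concrete sketch of this grid-based check, assuming the successful decoding constraint has the standard BEC form of (5) and using an illustrative regular ensemble (not a code designed in the paper):

```python
# Discretizing the semi-infinite decoding constraint on a uniform grid:
# eps*lam(1 - rho(1 - x)) < x must hold at every grid point in (p_t, eps].

def constraint_holds(eps, lam, rho, p_t, n_points=2000):
    for k in range(1, n_points + 1):
        x = p_t + (eps - p_t) * k / n_points
        if eps * lam(1.0 - rho(1.0 - x)) >= x:
            return False  # constraint violated at this grid point
    return True

lam, rho = (lambda x: x**2), (lambda x: x**5)  # regular (3,6), threshold ~ 0.4294
feasible = constraint_holds(0.40, lam, rho, 1e-6)        # eps below threshold
infeasible = not constraint_holds(0.45, lam, rho, 1e-6)  # eps above threshold
```

In an optimization solver each grid point contributes one constraint; refining the grid tightens the discretized problem toward the semi-infinite one.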

III-B Code Design by Means of a Utility Function

In this subsection, our goal is to formulate and solve an optimization problem based on a utility function corresponding to the number of decoding iterations. In particular, a lower bound on the number of iterations is first proposed. Based on this lower bound, we develop the utility function and show that the resulting optimization problem can be solved more efficiently than the optimization problem (III-A). Optimizing a utility function instead of the original objective is a common approach in practical engineering designs when the original problem is unmanageable or computationally challenging; well-known examples are using the pairwise error probability (PEP) instead of the exact error probability, or using minimum mean square error (MMSE) and zero-forcing (ZF) detectors instead of the maximum-likelihood detector.

Lemma 2

For two functions with positive first-order derivatives, one lying below the other on the considered interval, cf. Fig. 2, where the area bounded by the two functions is fixed, the approximation of the number of iterations is lower bounded by

(16)

where the inequality holds with equality if

(17)
Proof:

Jensen's inequality is used to obtain the lower bound. In particular, for a convex function and a non-negative weight function, Jensen's inequality indicates

(18)

where the inequality holds with equality if and only if the argument of the convex function is constant. To apply Jensen's inequality to (12), we choose the convex function and the weight function accordingly. This yields the following lower bound for the approximation (12)

(19)

where the lower bound in (19) is achieved with equality if and only if

(20)

In order to obtain the constant, we integrate both sides of (20) over the interval, which leads to

(21)

Substituting (21) into (20) and (19) gives the lower bound (16) and the equality condition (17) stated in Lemma 2. Notice that the constant corresponds to a descent in Figs. 1 and 2 with steps of equal length on the abscissa. We conclude from (16) and (17) that the number of iterations is minimized if the curve is shaped such that, for a given area, all steps have equal length. This completes the proof. \qed
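The bound in Lemma 2 can be sanity-checked numerically. With a uniform weight, Jensen's inequality for the convex function 1/t gives ∫ dx/Δ(x) ≥ (b−a)²/∫ Δ(x) dx over [a, b]. The sketch below verifies this for the illustrative step-size function Δ(x) = x − ελ(1−ρ(1−x)) (an assumption for this example, not the paper's exact curve pair):

```python
# Numerical check of the Jensen-type lower bound of Lemma 2:
#   integral of 1/Delta  >=  (b - a)^2 / integral of Delta   over [a, b].

def trapz(f, a, b, n=100_000):
    h = (b - a) / n  # trapezoidal rule
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + k * h) for k in range(1, n)))

eps = 0.40
lam, rho = (lambda x: x**2), (lambda x: x**5)  # regular (3,6) example
delta = lambda x: x - eps * lam(1.0 - rho(1.0 - x))

a, b = 1e-3, eps
lhs = trapz(lambda x: 1.0 / delta(x), a, b)  # approx. iteration count
rhs = (b - a) ** 2 / trapz(delta, a, b)      # Jensen lower bound
```

The bound is tight exactly when Δ is constant over the interval, matching the equal-step-length condition of (17).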

The choice in (17) that achieves the lower bound with equality provides an insightful design tool. Specifically, we can conclude that the best curve is the one that maximizes the appropriately weighted distance to its counterpart. For the number of decoding iterations, we then propose the smallest step size as a utility function, i.e.

(22)

where the design parameter takes a value close to the target residual erasure probability. The utility denotes the smallest step size of the descent on the abscissa, i.e., intuitively the bottleneck within the iterative process. The lower bound (16) indicates that there should not exist any such bottleneck. Thus, maximizing this smallest step size is obviously reasonable for lowering the number of iterations.

Remark 1

Note that under the constraints of Lemma 2 alone, the optimal curve to minimize the number of iterations is already given in (17), and there is no need for optimization. However, for the code design, we have an extra constraint, namely the structure imposed by the degree distributions. Therefore, (17) is not directly applicable to the code design. However, as we will see in Section V, the insight that (17) offers, i.e., maximizing the smallest step size, is very effective and leads to codes with a quite low number of decoding iterations.

Remark 2

The reason that we do not simply fix the design parameter is that the expression in (22) is a utility function corresponding to the number of iterations; it is neither the exact number of iterations nor an approximation thereof. Hence, a particular choice of this parameter does not necessarily lead to the minimum number of decoding iterations at the target residual erasure probability. For the code design, we can choose the value yielding the lowest number of iterations at the target residual erasure probability.

Now, we are ready to find the complexity-optimized codes by maximizing the minimum step size, i.e., the utility function in (22), for a given desired code rate, as follows

(23)

In order to show that the optimization problem (III-B) is a semi-definite program, we introduce an auxiliary variable and maximize it while imposing the utility function as a constraint. Moreover, the constraint can be rewritten in polynomial form, which leads to the following equivalent optimization problem

(24)

The above optimization problem is still a semi-infinite program, since it contains an infinite number of constraints with respect to the continuous variable. In the following, we state a lemma which is useful to transform a certain category of semi-infinite problems into equivalent SDP problems.

Lemma 3

Let a polynomial of given degree be considered. Then there exists a matrix such that the non-negativity constraint on the polynomial admits the following SDP representability w.r.t. this matrix variable

(25)

where is the element in -th row and -th column of the matrix .

Proof:

Please refer to [23, Chapter 4]. \qed

To illustrate Lemma 3, consider as an example a quadratic polynomial. Then, from Lemma 3, we obtain an equivalent SDP representation of its non-negativity in terms of a 2×2 positive semi-definite matrix whose entries are formed from the polynomial coefficients. This SDP representation is equivalent to the well-known conditions (non-negative leading coefficient and non-positive discriminant) for the non-negativity of a quadratic polynomial.
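For the quadratic case, the SDP representation can be made concrete with a small numerical check. A quadratic ax² + bx + c is non-negative on all of ℝ exactly when the Gram matrix [[c, b/2], [b/2, a]] is positive semi-definite; this matrix layout is our illustrative choice for the scalar example, while the paper's general matrix in Lemma 3 is defined as in [23]:

```python
import numpy as np

# Non-negativity of a*x^2 + b*x + c over the reals via a PSD Gram matrix:
# p(x) = [1, x] G [1, x]^T with G = [[c, b/2], [b/2, a]], and p >= 0 iff G is PSD.

def nonneg_quadratic(a, b, c, tol=1e-12):
    G = np.array([[c, b / 2.0], [b / 2.0, a]])
    return bool(np.all(np.linalg.eigvalsh(G) >= -tol))

# (x - 1)^2 = x^2 - 2x + 1 is non-negative; x^2 - 3x + 1 takes negative values.
```

In an SDP solver, the PSD requirement on G becomes a linear matrix inequality, which is exactly the mechanism exploited in Theorem 1.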

Note that Lemma 3 is developed for a specific interval and set of variables. Therefore, to apply Lemma 3 to the first constraint in (III-B), two steps are needed: i) the interval of the constraint has to be mapped to the interval of Lemma 3, and ii) the coefficients of the polynomial functions in the constraint have to be calculated and shown to be affine in the optimization parameters. In the following theorem, we present the SDP representation of the first constraint in (III-B). To this end, the relevant function is expanded into a Taylor series.

Theorem 1

The constraint , i.e., constraint in (III-B), has the following equivalent SDP representation

(26)

where and

(27)

where and coefficients and are given by

(28a)
(29a)
(30a)
(31a)
(32a)
Proof:

Please refer to Appendix A. \qed

We note that the matrix elements are affine in the optimization variables. Therefore, all the constraints of the optimization problem in (III-B) have been shown to be either affine or semi-definite matrix constraints. Thus, the optimization problem in (III-B) is an SDP and can be efficiently solved using available SDP solvers [19].

Remark 3

Note that, for practical code design, a relatively large truncation order of the Taylor series is sufficient for a quite accurate approximation. Moreover, the Taylor series coefficients can be expressed in closed form for check-regular ensembles as

(33)

where the fractional binomial coefficient is defined for a real-valued argument and a positive integer-valued index as [24, 21]

(34)
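The fractional binomial coefficient in (34) is straightforward to evaluate numerically; a minimal sketch (the function name is chosen here for illustration):

```python
# Generalized (fractional) binomial coefficient, eq. (34):
#   C(alpha, k) = alpha * (alpha - 1) * ... * (alpha - k + 1) / k!
# for real alpha and non-negative integer k.

def frac_binom(alpha, k):
    out = 1.0
    for i in range(k):
        out *= (alpha - i) / (i + 1)
    return out
```

For integer arguments this reduces to the ordinary binomial coefficient, e.g. frac_binom(5, 2) = 10.0, while frac_binom(0.5, 2) = -0.125.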

IV Performance-Optimized LDPC Codes

A general framework has been developed for the statement and proof of Theorem 1 such that it can also be used to formulate and solve optimization problems with a similar structure. In particular, optimization problems containing a constraint in which a polynomial expression, affine in the optimization variables, must be non-negative at every point of an interval may have an equivalent SDP representation, as shown by the framework of Section III and Appendix A. Specifically, one should first map the interval to the interval of Lemma 3 and then show that the coefficients of the resulting polynomial are affine in the optimization variables. As a relevant example, we formulate an optimization problem for code rate maximization to obtain the performance-optimized code in the following, and show how the globally optimal solution can be obtained via the optimization framework developed in this paper.

For a performance-optimized code, our goal is to maximize the code rate for a BEC with given bit erasure probability such that successful decoding is guaranteed. This optimization problem is formulated as follows

(35)

It can easily be observed that the degree-distribution constraints are affine in the optimization variables, and the decoding constraint has an SDP representation in the optimization variables via the aforementioned Taylor series expansion. Moreover, from (3), we can conclude that maximizing the code rate for a given check degree distribution is equivalent to maximizing the integral of the variable degree distribution. Therefore, with a similar approach as in problem (III-B), we can write the following SDP representation of (IV)

(36)

where the matrices and coefficients are the same as those given in Theorem 1 with the corresponding parameter settings.

Remark 4

The performance-optimized codes resulting from (IV) are not constrained w.r.t. the required number of decoding iterations. Thus, the achievable code rate of the performance-optimized code can be used as an upper bound for the rate constraint in the complexity-optimizing problems. In other words, the desired rate for the complexity-optimized code has to be chosen below this maximum achievable rate; otherwise, the optimization problems in (III-A) and (III-B) become infeasible.

Remark 5

We note that the maximum achievable code rate obtained from (IV) is usually below the channel capacity, since a finite maximum variable degree is assumed. In particular, one can conclude from (7) that as the rate approaches the capacity, the area between the two curves vanishes. In general, in order to construct a degree distribution whose rate approaches the capacity arbitrarily closely, the required maximum variable degree may tend to infinity.

V Numerical Results

In this section, we evaluate the LDPC codes obtained by solving the proposed optimization problems. As benchmark schemes, we consider the complexity-optimized code (COC) reported in [12] and the performance-optimized codes (POCs) in [21] and [24]. We consider both a regular and an irregular check degree distribution. (The irregular check degree distribution is the one assumed in [12]; therefore, to allow a fair comparison, we also adopt this check degree distribution for the code design.)


Fig. 3: Modified density evolution for the performance-optimized and complexity-optimized codes.

Note that both the proposed approximation of the number of decoding iterations given in (12) and the utility function given in (22) are based on the distance concept introduced for the modified density evolution in Section II-B. Therefore, the distance concept is first investigated in Fig. 3 for some performance-optimized and complexity-optimized codes, for a fixed set of code design parameters. Fig. 3 shows the modified density evolutions introduced in Section II-B for the performance-optimized code obtained by means of (IV) and the complexity-optimized code obtained by means of (III-A). The maximum achievable code rate for the considered set of parameters is obtained from (IV). We observe that the curve of the variable degree distribution obtained for the performance-optimized code in (IV) lies very close to its counterpart, which leads to the high number of decoding iterations required to achieve the considered target residual erasure probability. However, if a lower code rate is considered, we are able to design complexity-optimized codes which require a lower number of decoding iterations than the performance-optimized codes. Fig. 3 shows that as the desired code rate decreases, the distance between the curves designed in (III-A) increases, which leads to a lower number of decoding iterations.

Fig. 4: Number of iterations vs. the residual erasure probability , , , , and .

Using the distance concept introduced for the modified density evolution, the approximation of the number of iterations in (12) is proposed. In Fig. 4, the exact number of iterations and the proposed approximation are shown vs. the residual erasure probability, where the following parameters are used: , , , and . Note that and lead to code rates and , respectively, where the channel capacity is . We observe that the approximation in (12) is quite accurate although we treated the number of iterations as a continuous quantity in its derivation. Moreover, changing the code rate does not noticeably affect the accuracy of the proposed approximation. Furthermore, it can easily be seen from Fig. 4 that the exact number of iterations given by (8) is a non-differentiable function of the residual erasure probability, whereas the proposed approximation is continuous, which significantly facilitates tackling the optimization problem.
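The idea behind a continuous iteration-count approximation can be illustrated as follows: since one decoding iteration reduces the erasure probability by x − f(x), where f(x) = ε·λ(1 − ρ(1 − x)), the step count can be approximated by the integral of dx/(x − f(x)) from the target residual erasure probability up to ε. The sketch below implements this generic heuristic, which is in the spirit of (12) but not necessarily the exact expression derived in the paper; the ensemble and parameters are illustrative.

```python
def approx_iterations(eps, f, target, n=10**5):
    """Continuous-flow heuristic for the number of DE iterations:
    treat x_{l+1} = f(x_l) as dx/dl = -(x - f(x)) and integrate
    dl = dx / (x - f(x)) from `target` up to `eps` (trapezoidal rule)."""
    h = (eps - target) / n
    xs = [target + k * h for k in range(n + 1)]
    vals = [1.0 / (x - f(x)) for x in xs]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

# Illustrative (3,6)-regular ensemble on a BEC with eps = 0.35.
f = lambda x, eps=0.35: eps * (1.0 - (1.0 - x) ** 5) ** 2
print(approx_iterations(0.35, f, target=1e-2))  # comparable to the exact count
```

As in Fig. 4, the approximation is a smooth, differentiable function of the target erasure probability, whereas the exact count is an integer-valued step function.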

Fig. 5: Number of iterations vs. rate to capacity ratio, , for different codes, , , , and .

In Fig. 5, the number of decoding iterations is depicted vs. the rate-to-capacity ratio, i.e., . We consider the following parameters for the code design: , , , and . Results are shown for the proposed complexity-optimized codes designed by means of (III-A) (based on the approximation of the number of iterations) and (III-B) (based on the utility function). We observe that the codes obtained with the proposed utility function lead to a number of iterations quite similar to that obtained with the accurate approximation in (12), which confirms the effectiveness of the proposed utility function. Note that the maximum achievable rate-to-capacity ratio for the considered set of parameters is obtained as from the threshold maximization problem equivalent to the rate maximization problem in (IV). Therefore, is the upper limit for and the given . For any , both problems in (III-A) and (III-B) become infeasible. As performance benchmarks, we include the complexity-optimized code in [12] and the performance-optimized codes in [21] and [24]. Note that the codes proposed in [12] and [24] are obtained for a fixed rate, which is the reason we consider the same rate. As can be seen from Fig. 5, the code proposed in [21] requires a lower number of iterations than the code in [24]. However, both are outperformed by the new complexity-optimized codes in terms of the number of decoding iterations. Unfortunately, only one point is reported in [12]; it coincides with the curves obtained via the proposed complexity-optimizing approach.

Fig. 6: Number of iterations vs. rate to capacity ratio, , for different maximum variable node degree, , , and .

In Fig. 6, the effect of the maximum variable node degree on the proposed code design is investigated. The same parameters as in Fig. 5 are used, and again the number of decoding iterations is depicted vs. the rate-to-capacity ratio. First, for a given rate-to-capacity ratio, the number of iterations decreases as increases. Second, as increases, the maximum achievable rate-to-capacity ratio, i.e., the upper limits in Fig. 6, increases. This is due to the fact that a higher leads to a larger feasible solution set in the optimization problems (III-A), (III-B), and (IV) for the complexity-optimized and performance-optimized codes, respectively. However, the effect of increasing is negligible for low rate-to-capacity ratios. In order to illustrate the effect of increasing on the feasible solution set of the optimization problem in (III-B), we also plot the maximum rate-to-capacity ratio, , vs. the maximum variable node degree, , for and different channel erasure probabilities , see Fig. 7. As increases, the feasible set of the optimization problem in (III-B) becomes larger, which leads to a higher rate-to-capacity ratio. Moreover, in the limit , we obtain . Interestingly, at least for , we observe from Fig. 7 that as decreases, i.e., as the capacity increases, a higher rate-to-capacity ratio is achieved.
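The growth of the feasible set with the maximum variable node degree can be illustrated with the classical linear-programming formulation of rate maximization for the BEC (a generic textbook formulation, not necessarily identical to problem (IV)): for a fixed check edge distribution ρ, maximize Σ_i λ_i/i subject to λ_i ≥ 0, Σ_i λ_i = 1, and the density evolution success constraint ε·λ(1 − ρ(1 − x)) ≤ x sampled on a grid. The check polynomial, erasure probability, and grid below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def max_rate(eps, dv_max, rho, n_grid=400, x_min=1e-3):
    """LP design of lambda(x) = sum_i lam_i x^(i-1) over the BEC:
    maximize sum_i lam_i / i (monotone in the code rate for fixed rho)
    subject to eps * lambda(1 - rho(1 - x)) <= x on a grid of x values."""
    degs = np.arange(2, dv_max + 1)
    xs = np.linspace(x_min, eps, n_grid)
    y = 1.0 - rho(1.0 - xs)                        # argument of lambda(.)
    A_ub = eps * np.stack([y ** (d - 1) for d in degs], axis=1)
    res = linprog(c=-1.0 / degs, A_ub=A_ub, b_ub=xs,
                  A_eq=np.ones((1, len(degs))), b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * len(degs))
    # Code rate R = 1 - int_0^1 rho / int_0^1 lambda, with int lambda = sum lam_i/i.
    int_rho = float(np.mean(rho(np.linspace(0.0, 1.0, 10001))))
    return 1.0 - int_rho / float((res.x / degs).sum())

rho = lambda x: x ** 5                             # regular check degree 6
# A larger maximum variable degree enlarges the feasible set, hence the rate.
print(max_rate(0.40, 4, rho), max_rate(0.40, 16, rho))
```

Raising the degree bound can only enlarge the LP's feasible set (any solution for a small bound remains feasible for a larger one), which matches the monotone behavior of the upper limits in Figs. 6 and 7.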

Fig. 7: Maximum achievable rate to capacity ratio, , vs. maximum variable node degree, for different channel erasure probabilities and .
Fig. 8: Number of iterations vs. residual erasure probability for different values of design parameter , , , and .

Fig. 8 presents the number of decoding iterations vs. the residual erasure probability. We assume the following parameters for the code design: , , , , and . It can be seen that each code requires the lowest number of iterations at the respective target residual erasure probability it is designed for. For instance, considering the set of parameters for , at the target residual erasure probability , the code designed for needs iterations, while the codes designed for and need and iterations, respectively. Moreover, focusing on the result for as an example, in order to achieve a lower number of iterations at the target erasure probability , the code designed for initially requires a higher number of iterations than the codes designed for lower target residual erasure probabilities, i.e., , but eventually outperforms them at the target erasure probability . This can also be interpreted via the modified density evolution in Fig. 1. In particular, the designed for has a smaller distance to in the regimes and than those designed for and , respectively, which leads to a higher number of iterations in these regimes. However, at this cost, its distance to is larger in the regimes and , which in total leads to a lower number of iterations at the target erasure probability .

Finally, Table I presents the degree distributions of the proposed codes used in this section, found by using CVX [19] to solve the optimization problems (III-A), (III-B), and (IV). Note that all presented coefficients of are rounded to four-digit accuracy. The optimized are given in Table I for different design criteria and parameters, which allows some interpretations and intuitions. For instance, for the performance-optimized code in Fig. 3 with code rate , we obtain , while for the complexity-optimized codes with given code rates and , the values are and , respectively. Thus, we can conclude that, as a lower code rate is required, the value of has to be reduced for a complexity-optimized code. As another example, we compare the codes designed by means of the proposed approximation (12) with those obtained by means of the proposed utility function (22). The resulting corresponding to the graphs in Fig. 5 are similar but not identical, which roughly confirms the effectiveness of the proposed utility function. Last but not least, from the codes designed for different values of corresponding to Fig. 6, we can conclude that, for the rate-to-capacity ratio , increasing from to crucially changes the resulting complexity-optimized codes. Moreover, the maximum degree is non-zero for the codes with . However, for , the maximum degree with a non-zero coefficient is , from which we conclude that increasing beyond cannot decrease the number of iterations for the considered rate-to-capacity ratio. This observation is also confirmed by Fig. 6: for and , the numbers of decoding iterations at the rate-to-capacity ratio are and , respectively.
