Binary Systematic Network Coding for Progressive Packet Decoding


We consider binary systematic network codes and investigate their capability of decoding a source message either in full or in part. We carry out a probability analysis, derive closed-form expressions for the decoding probability and show that systematic network coding outperforms conventional network coding. We also develop an algorithm based on Gaussian elimination that allows progressive decoding of source packets. Simulation results show that the proposed decoding algorithm can achieve the theoretical optimal performance. Furthermore, we demonstrate that systematic network codes equipped with the proposed algorithm are good candidates for progressive packet recovery owing to their overall decoding delay characteristics.

Index terms: Network coding, Gaussian elimination, decoding probability, rank-deficient decoding.

I Introduction

Network coding (NC), originally proposed in [1], has the potential to significantly improve network reliability by mixing packets at a source node or at intermediate network nodes prior to transmission. The classical implementation of NC, which is often referred to as straightforward NC [2], randomly combines source packets using finite field arithmetic. As the size of the field increases, the likelihood of the transmitted packets being linearly independent also increases. However, the decoding process at the receiver is computationally expensive, especially if the field size is large. Furthermore, straightforward NC incurs a substantial decoding delay because source packets can be recovered at the receiver only if the received network-coded packets are at least as many as the source packets.

Heide et al. [3] proposed the adoption of binary systematic NC, which operates over a finite field of only two elements, as a means of reducing the decoding complexity of straightforward NC. A source node using systematic NC first transmits the original source packets and then broadcasts linear combinations of the source packets. The reduction in decoding complexity at the receiver decreases energy consumption and makes systematic NC suitable for energy-constrained devices, such as mobile phones and laptops. Lucani et al. [4] developed a Markov chain model to show that the decoding process of systematic NC in time division duplexing channels requires considerably fewer operations, on average, than that of straightforward NC. Barros et al. [5] and Prior and Rodrigues [6] observed that opting for systematic NC as opposed to straightforward NC reduces decoding delay without sacrificing throughput. Therefore, systematic network codes exhibit desirable characteristics for multimedia broadcasting and streaming applications. More recently, Saxena and Vázquez-Castro [7] discussed the advantages of systematic NC for transmission over satellite links.

As in [3], we also consider binary systematic network codes and investigate their potential in delivering services, such as multimedia and streaming, which often require the progressive recovery of source packets and the gradual refinement of the source message. Our objective is to prove that systematic NC not only exhibits a lower decoding complexity than straightforward NC, as shown in [4], but also a better performance, as observed in [5]. Even though our focus is on binary systematic NC, we explain that our analysis can be easily extended to finite fields of larger size. In addition, we develop a decoding algorithm and propose a framework, which helps us study the performance of systematic NC in terms of the probability of recovering a source message either in part or in full.

The rest of the paper is organised as follows. Section II analyses the performance of systematic NC and introduces metrics for evaluating its capability of progressively recovering source messages. Section III proposes a modification to the Gaussian elimination algorithm that allows source packets to be progressively decoded. Section IV discusses the computational cost and accuracy of the proposed decoding algorithm, validates the derived theoretical expressions and contrasts the performance of systematic NC with that of benchmark transmission schemes. The main contributions of the paper are summarised in Section V.

II Binary Systematic Network Coding

Let us consider a source node, which segments a message into K source packets s_1, …, s_K and encodes them using a systematic NC encoder. The encoder generates and transmits N ≥ K packets, which comprise K systematic packets followed by N − K coded packets. The systematic packets are identical to the source packets, while the coded packets are obtained by linearly combining source packets. The i-th transmitted packet, denoted by x_i, can be expressed as follows

x_i = \sum_{j=1}^{K} g_{i,j} \, s_j   (1)

where g_{i,j} is a binary coefficient chosen uniformly at random from the elements of the finite field GF(2). We can also express x_i in matrix notation as x_i = g_i s, where g_i = [g_{i,1} … g_{i,K}] is the coding vector associated with x_i and s = [s_1 … s_K]^T. Note that when i ≤ K, in line with the definition of binary systematic NC, we set g_{i,j} = 1 if i = j, else g_{i,j} = 0.
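To make the encoding rule concrete, the following Python sketch (an illustration, not the authors' implementation) generates the N coding vectors of a binary systematic code; each vector is represented as a K-bit integer over GF(2) and packet payloads are omitted.

```python
import random

def systematic_coding_vectors(K, N, rng=None):
    """Return the N coding vectors of a binary systematic code as K-bit integers.

    The first K vectors are unit vectors (systematic packets, g_{i,j} = 1 iff i = j);
    the remaining N - K vectors have coefficients drawn uniformly at random from
    GF(2) (coded packets), so the all-zero vector can occur, as in the analysis.
    """
    rng = rng or random.Random(0)
    vectors = [1 << (i - 1) for i in range(1, K + 1)]      # systematic part
    vectors += [rng.getrandbits(K) for _ in range(N - K)]  # random binary combinations
    return vectors

vecs = systematic_coding_vectors(K=4, N=6)
print([format(v, "04b") for v in vecs])  # first four entries are unit vectors
```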

In the remainder of this section, we investigate the theoretical performance of systematic NC and derive analytical expressions for the probability of decoding the entire source message or a fraction of the source message. We also present performance metrics and benchmarks for the evaluation of systematic NC for progressive packet recovery.

II-A Probability of Decoding the Entire Source Message

As previously mentioned, a source node using systematic NC transmits N packets, of which K are systematic and the remaining N − K are coded. Assume that a receiver successfully recovers n of the N packets, of which h are systematic and n − h are coded. The coding vectors of the n received packets are stacked to form the n × K decoding matrix G.

Let P(K | n) denote the probability of decoding the K source packets given that n packets have been received. We understand that P(K | n) is non-zero only if n ≥ K. The value of N also determines the smallest allowable value of h. For instance, if N < 2K, the transmitted coded packets are fewer than the transmitted systematic packets; given that n packets are received, the number of received systematic packets should be at least n − (N − K). Otherwise, if n ≤ N − K, the smallest value of h can be zero. Therefore, h is defined in the range max(0, n − N + K) ≤ h ≤ min(K, n). Having defined the parameters of the system model and their interdependencies, we can now proceed with the derivation of an analytical expression for P(K | n).

Lemma 1.

For N transmitted packets, the probability of a receiver decoding all of the K source packets, given that n ≥ K packets have been successfully received, is

P(K \,|\, n) = \frac{1}{\binom{N}{n}} \left[ \binom{N-K}{n-K} + \sum_{h=h_0}^{K-1} \binom{K}{h} \binom{N-K}{n-h} \prod_{i=0}^{K-h-1} \left( 1 - 2^{\,i-(n-h)} \right) \right]   (2)

where h_0 = max(0, n − N + K).


Proof. The decoding probability P(K | n) can be decomposed into the sum of the following probabilities

P(K \,|\, n) = P_1(n) + \sum_{h=h_0}^{K-1} P_2(h, n) \, P_3(K-h, n-h).   (3)

The term P_1(n) represents the probability of recovering the K source packets directly from the K successfully received systematic packets. This is the case when n − K out of the N − K coded packets have been successfully delivered to the receiver along with the K systematic packets. Considering that n out of the N transmitted packets have been received, we can deduce that P_1(n) is given by

P_1(n) = \binom{N-K}{n-K} \Big/ \binom{N}{n}.   (4)

The sum of products in (3) considers the probability of recovering h < K systematic packets and decoding the remaining K − h source packets from the received coded packets. More specifically, the probability P_2(h, n) of receiving h out of the K systematic packets and n − h out of the N − K coded packets is equal to

P_2(h, n) = \binom{K}{h} \binom{N-K}{n-h} \Big/ \binom{N}{n}.   (5)

On the other hand, the probability P_3(K−h, n−h) of having K − h linearly independent coded packets among the n − h received ones can be obtained from the literature of straightforward NC, for example [8]. We find that

P_3(K-h, n-h) = \prod_{i=0}^{K-h-1} \left( 1 - 2^{\,i-(n-h)} \right).   (6)

Substituting (4), (5) and (6) into (3) gives (2). This concludes the proof. ∎
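The lemma can be checked numerically. The closed form coded below is our reconstruction of the result (hypergeometric reception of the systematic packets combined with the full-rank probability of random binary matrices, cf. [8]); the Monte Carlo routine estimates the same probability by direct simulation over GF(2). Function names are ours.

```python
import math
import random

def p_full_given_n(K, N, n):
    """Probability of decoding all K source packets given n of N packets received.

    Sum over the number h of received systematic packets (hypergeometric) times
    the probability that the n - h received coded packets span the remaining
    K - h dimensions over GF(2).
    """
    if n < K:
        return 0.0
    total = 0.0
    for h in range(max(0, n - (N - K)), min(K, n) + 1):
        span = 1.0
        for i in range(K - h):
            span *= 1.0 - 2.0 ** (i - (n - h))
        total += math.comb(K, h) * math.comb(N - K, n - h) * span
    return total / math.comb(N, n)

def gf2_rank(vectors, width):
    """Rank over GF(2) of a list of bitmask row vectors."""
    basis = [0] * width                # basis[j]: stored row with leading bit j
    rank = 0
    for v in vectors:
        for j in reversed(range(width)):
            if not (v >> j) & 1:
                continue
            if basis[j]:
                v ^= basis[j]          # eliminate leading bit j
            else:
                basis[j] = v
                rank += 1
                break
    return rank

def monte_carlo(K, N, n, trials=20000, seed=1):
    """Empirical full-decoding probability given exactly n received packets."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pkts = [1 << j for j in range(K)] + [rng.getrandbits(K) for _ in range(N - K)]
        hits += gf2_rank(rng.sample(pkts, n), K) == K
    return hits / trials
```

For example, with K = 5, N = 10 and n = 6, the closed form and the Monte Carlo estimate agree to within sampling error.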

Proposition 1.

The probability P(N) of a receiver decoding all of the K source packets, after the transmission of N packets over a channel characterized by a packet erasure probability ε, can be expressed as follows

P(N) = \sum_{n=K}^{N} \binom{N}{n} (1 - \varepsilon)^{n} \, \varepsilon^{N-n} \, P(K \,|\, n).   (7)

Proof. The proof follows from Lemma 1. The conditional probability P(K | n) has been weighted by the probability of successfully receiving n out of the N transmitted packets and averaged over all valid values of n. ∎
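Proposition 1 lends itself to direct numerical evaluation; the sketch below (using our reconstruction of the conditional probability of Lemma 1) weights the conditional probability with the binomial reception distribution.

```python
import math

def p_full_given_n(K, N, n):
    """Reconstructed conditional full-decoding probability (Lemma 1)."""
    if n < K:
        return 0.0
    total = 0.0
    for h in range(max(0, n - (N - K)), min(K, n) + 1):
        span = 1.0
        for i in range(K - h):
            span *= 1.0 - 2.0 ** (i - (n - h))
        total += math.comb(K, h) * math.comb(N - K, n - h) * span
    return total / math.comb(N, n)

def p_full(K, N, eps):
    """Average the conditional probability over the binomial number of receptions."""
    return sum(math.comb(N, n) * (1.0 - eps) ** n * eps ** (N - n) * p_full_given_n(K, N, n)
               for n in range(K, N + 1))
```

With a perfect channel (eps = 0) only the term n = N survives, and transmitting extra coded packets (larger N) can only increase the probability.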

The closed-form expressions for the decoding probability of systematic network codes can be used to contrast their performance with that of straightforward network codes, and give rise to the following proposition.

Proposition 2.

Systematic network codes exhibit a higher probability of decoding all of the K packets of a source message than straightforward network codes.


Proof. For the same number of received packets n, the probability of decoding all of the K source packets is P(K | n) for systematic NC, as per (2), and P_sf(n) = \prod_{i=0}^{K-1} (1 - 2^{\,i-n}) for straightforward NC, as per (6). If we show that the relationship P(K | n) ≥ P_sf(n) holds for all valid values of n, we can infer that the decoding probability of systematic NC is higher than that of straightforward NC. Dividing P(K | n) by P_sf(n) gives




Note that both coefficients in (8) are greater than or equal to 1 for all valid values of n, that is, K ≤ n ≤ N. Therefore, the right-hand side of (8) can become a lower bound on the ratio if these coefficients are removed. More specifically, we can obtain


if the binomial coefficient in (8) is included into the sum and the upper limit of the sum is updated accordingly. We distinguish the following two cases for the value of h_0 = max(0, n − N + K):

  • h_0 = 0: In this case, we have n ≤ N − K. Invoking a special instance of the Chu–Vandermonde identity [9, p. 41], we can reduce the sum at the right-hand side of (10) to

  • h_0 = n − N + K: As previously explained, this case occurs when n > N − K. Setting h′ = h − h_0, expressing the sum in (10) in terms of h′, exploiting the properties of binomial coefficients and using the widely-known Vandermonde convolution [10, p. 29] gives


If we combine identities (11) and (12) with inequality (10), we obtain that the ratio is greater than or equal to 1 for all valid values of n, which concludes the proof. We note that the ratio approaches 1 as the value of n increases. ∎


Even though this paper is concerned with binary systematic NC, i.e. the elements of the decoding matrix G are selected uniformly at random from GF(2), the same reasoning can be employed to obtain the decoding probability when operations are performed over GF(q) for q ≥ 2. The probability of decoding the entire source message, given that n packets have been received, can be written as

P_q(K \,|\, n) = \frac{1}{\binom{N}{n}} \left[ \binom{N-K}{n-K} + \sum_{h=h_0}^{K-1} \binom{K}{h} \binom{N-K}{n-h} \prod_{i=0}^{K-h-1} \left( 1 - q^{\,i-(n-h)} \right) \right].   (13)

Both Propositions 1 and 2 hold for q ≥ 2. Substituting (13) into (7) gives the general expression for P(N).

II-B Probability of Decoding a Fraction of the Source Message

In Section II-A, we focused on deriving the probability of decoding the K source packets when N packets have been transmitted. Of equal interest is the probability of recovering at least k of the K source packets when N packets have been transmitted. To the best of our knowledge, a closed-form expression for this probability, denoted hereafter by P_k(N), has not been obtained for straightforward NC. However, a good approximation, which follows readily from Proposition 1, can be computed for the case of systematic NC.

Corollary 1.

The probability P_k(N) of recovering at least k source packets, when N packets have been transmitted over a channel with packet erasure probability ε, can be approximated by

P_k(N) \approx \sum_{j=k}^{K_t} \binom{K_t}{j} (1 - \varepsilon)^{j} \, \varepsilon^{K_t - j}   (14)

where K_t = min(N, K).


Proof. The number of transmitted systematic packets is either K if N ≥ K, or N if N < K. In general, K_t = min(N, K) systematic packets are sent over the packet erasure channel, for any value of N. If we wish to recover at least k source packets and the erasure probability ε is small, the k or more received packets will most likely be systematic and, thus, linearly independent. As a result, the probability of decoding at least k source packets reduces to the probability of recovering at least k of the K_t systematic packets, given by (14). ∎
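Under this reading, the approximation is simply a binomial tail over the min(N, K) transmitted systematic packets. A sketch (our reconstruction of the expression; function name is ours):

```python
import math

def p_at_least_k(k, K, N, eps):
    """Approximate probability of recovering at least k of the K source packets.

    Valid for small erasure probabilities eps, where the first k recovered
    packets are almost surely systematic (hence linearly independent).
    """
    Kt = min(N, K)                    # number of transmitted systematic packets
    if k > Kt:
        return 0.0
    return sum(math.comb(Kt, j) * (1.0 - eps) ** j * eps ** (Kt - j)
               for j in range(k, Kt + 1))
```

For eps = 0 and N ≥ k the approximation correctly evaluates to 1, and it decreases as the channel worsens.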

We remark that the assumption of a low value of ε is reasonable when the physical layer employs error correcting codes that improve the channel conditions as “seen” by higher network layers, where NC is usually applied. For example, the Long Term Evolution Advanced (LTE-A) framework specifies a low target packet erasure probability [11].

II-C Performance Metrics and Benchmarks

In order to assess the performance of systematic NC and explore its capability to progressively decode a source message, we will compare it with ordered uncoded (OU) transmission [12] and straightforward NC. In OU transmission, the K source packets are periodically repeated. The transmitted packet at time step i can be expressed as x_i = s_j for j = 1 + ((i − 1) mod K) and i = 1, …, N. We note that transmission is uncoded in the sense that transmitted packets are not linear combinations of the source packets. By contrast, the i-th transmitted packet in straightforward NC is given by (1) with all coefficients chosen uniformly at random for i = 1, …, N, implying that all transmitted packets are linear combinations of the source packets.
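The packet-generation rules of the three schemes can be summarised in a few lines; the helper below is hypothetical (bitmask coding vectors over GF(2)) and only illustrates the definitions.

```python
import random

def coding_vector(scheme, i, K, rng):
    """Coding vector of the i-th transmitted packet (i = 1, 2, ...) over GF(2)."""
    if scheme == "OU":          # ordered uncoded: periodic repetition of the K packets
        return 1 << ((i - 1) % K)
    if scheme == "SF":          # straightforward NC: every packet is a random combination
        return rng.getrandbits(K)
    if scheme == "systematic":  # K systematic packets first, random combinations after
        return 1 << (i - 1) if i <= K else rng.getrandbits(K)
    raise ValueError(f"unknown scheme: {scheme}")

rng = random.Random(0)
ou = [coding_vector("OU", i, 4, rng) for i in range(1, 9)]
print(ou)  # the pattern 1, 2, 4, 8 repeats with period K = 4
```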

Probabilities P(N) and P_k(N) will be used to contrast the performance of systematic NC, straightforward NC and OU transmission. In order to create links between the two decoding probabilities, we introduce the following parameters:

  • P̂ is a predetermined target probability of packet recovery that a transmission scheme has to attain. Probabilities P_k(N) and P(N) can be set equal to P̂ in order to determine the numbers of transmitted packets that are required for the partial and full recovery of the source message, respectively.

  • N_k signifies the minimum number of transmitted packets required by the receiver to recover at least k source packets with a probability of at least P̂.

  • ΔN denotes the minimum number of additional packets that should be transmitted so that the receiver recovers all of the K source packets with a probability of at least P̂.
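These metrics reduce to threshold searches over the decoding probabilities. A sketch using the partial-recovery approximation (our reconstruction; the parameter values are illustrative only):

```python
import math

def p_at_least_k(k, K, N, eps):
    """Partial-recovery approximation: binomial tail over min(N, K) systematic packets."""
    Kt = min(N, K)
    if k > Kt:
        return 0.0
    return sum(math.comb(Kt, j) * (1.0 - eps) ** j * eps ** (Kt - j)
               for j in range(k, Kt + 1))

def min_packets(prob_of_N, target, n_max=200):
    """Smallest N for which prob_of_N(N) >= target, or None if never reached."""
    for N in range(1, n_max + 1):
        if prob_of_N(N) >= target:
            return N
    return None

# Illustrative values: 10 source packets, erasure probability 0.1, target 0.99
N_half = min_packets(lambda N: p_at_least_k(5, 10, N, 0.1), 0.99)
print(N_half)  # minimum transmissions to recover at least half of the packets
```

The same search applied to the full-recovery probability of Proposition 1 yields the transmission count for complete decoding; the difference between the two counts is the number of additional packets defined above.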

A performance comparison of the investigated schemes will be carried out in Section IV. Prior to that, we discuss decoding algorithms for NC schemes and propose a decoding process that allows progressive decoding of source packets in the following section.

III Progressive Decoding

If the objective of the decoding algorithm is the recovery of the K source packets after the reception of at least K transmitted packets, Gaussian Elimination (GE) could be used, especially when the value of K is small. The GE algorithm transforms the decoding matrix G into row-echelon form [13]. The rank of the transformed matrix, which is equal to the rank of the original decoding matrix, can be obtained by inspecting the number of non-zero rows within the echelon form. If the rank is K, that is, if G is a full-rank matrix, the K source packets can be successfully recovered.

GE and schemes based on Belief Propagation (BP) [14] experience a large spike in computation when K transmitted packets are received. On-the-Fly Gaussian Elimination (OFGE) [15] manages to mitigate the decoding delay and computational complexity of GE by invoking an optimized triangulation process every time a packet is received. The OFGE decoder spreads computation out over each packet arrival, and the decoding matrix is already in partial triangular form by the time the K-th transmitted packet is received.

Both GE and OFGE have been designed to perform full-rank decoding. As a result, if the rank of G is less than K, that is, if the decoding matrix is rank-deficient, some source packets might still be decodable but GE or OFGE will not necessarily identify them. A modified version of OFGE, which we refer to as OFGE for Progressive Decoding (OFGE-PD), was presented in [12]. Similarly to OFGE, OFGE-PD also comprises a triangulation stage and a back-substitution stage. An additional stage, called the XORing phase, enables OFGE-PD to decode source packets from rank-deficient decoding matrices at the expense of increased computational complexity.

We revisited the original GE algorithm and amalgamated into it the OFGE principle of initiating the decoding process whenever a packet is received. A sketch of the proposed algorithm, referred to as Gaussian Elimination for Progressive Decoding (GE-PD), is presented in Algorithm 1. To facilitate the description of GE-PD, we introduced function Degree, which determines the number of non-zero elements in a row vector; function Diag, which generates a row vector containing the elements of the main diagonal of a matrix; function LeftmostOne, which returns the position of the first non-zero entry in a row vector; and function Swap, which swaps two rows in a matrix. The decoding matrix G is initially set equal to the zero matrix. Recall that G_i represents the i-th row of G, while G_{i,j} denotes the entry of G in the i-th row and j-th column (equivalent to the j-th element of G_i). We note that, depending on the adopted programming language, the code can be further optimized and the execution speed of GE-PD improved.

As line 2 in Algorithm 1 indicates, whenever a new coding vector g is received, it is updated so that any previously decoded source packets are not considered again in the decoding process. If the updated row vector still contains non-zero entries, it is appended to the bottom of the decoding matrix G (lines 3-4). Lines 6-16 rearrange the rows of G in an effort to transform it into an upper triangular matrix. Lines 17-23 aim to transform G into row-echelon form by ensuring that each non-zero element on the main diagonal of G is the only non-zero element in that column. Finally, function BackSubstitution is called in line 25 to establish which source packets are decodable. The efficiency and accuracy of GE-PD are investigated in the following section.

1:Receive new coding vector g
2:Set entries in g that correspond to decoded packets to 0
3:if Degree(g) > 0 then
4:       Append g to the bottom of G
5:       for j ← 1 to K do
6:             d ← Diag(G)
7:             if d_j = 0 then
8:                    r ← j + 1
9:                    repeat
10:                          if LeftmostOne(G_r) = j then
11:                                Swap(G, j, r)
12:                                d ← Diag(G)
13:                          end if
14:                          r ← r + 1
15:                    until d_j = 1 or r exceeds the number of rows of G
16:             end if
17:             if d_j = 1 then
18:                    for r ← 1 to the number of rows of G do
19:                          if r ≠ j and G_{r,j} = 1 then
20:                                G_r ← G_r ⊕ G_j
21:                          end if
22:                    end for
23:             end if
24:       end for
25:       BackSubstitution(G, K)
26:       G ← Top K rows of G
27:end if
1:▷ Mark decodable source packets
2:Function BackSubstitution(G, K)
3:       m ← number of rows of G
4:       for j ← K to 1 step −1 do
5:             if G_{j,j} = 1 and Degree(G_j) = 1 then
6:                    Mark source packet j as decoded
7:                    for r ← 1 to m do
8:                          if r ≠ j and G_{r,j} = 1 then G_r ← G_r ⊕ G_j
9:                    end for
10:             end if
11:       end for
Algorithm 1 Gaussian Elimination for Progressive Decoding
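To complement the pseudocode, here is a compact Python sketch of the progressive-decoding idea (our illustration, not the authors' implementation): the decoder keeps the received coding vectors in reduced row-echelon form over GF(2) and reports newly decodable source packets after every reception.

```python
class ProgressiveDecoder:
    """Sketch of progressive Gaussian elimination over GF(2).

    Coding vectors are K-bit integers; the decoding matrix is kept in
    reduced row-echelon form, so packets become decodable as soon as a
    row collapses to a single non-zero entry.
    """

    def __init__(self, K):
        self.K = K
        self.rows = {}        # pivot position -> row (bitmask)
        self.decoded = set()  # indices of decoded source packets

    def receive(self, vec):
        # Drop contributions of already-decoded packets (line 2 of Algorithm 1)
        for j in self.decoded:
            vec &= ~(1 << j)
        # Reduce the incoming vector by the existing pivot rows
        for p in sorted(self.rows, reverse=True):
            if (vec >> p) & 1:
                vec ^= self.rows[p]
        if vec == 0:
            return set()      # non-innovative packet
        p = vec.bit_length() - 1
        # Clear the new pivot column in all existing rows (echelon-form stage)
        for q in list(self.rows):
            if (self.rows[q] >> p) & 1:
                self.rows[q] ^= vec
        self.rows[p] = vec
        # Back-substitution stage: single-entry rows are decoded packets
        newly = {q for q, r in self.rows.items() if r == (1 << q)} - self.decoded
        self.decoded |= newly
        return newly
```

Feeding the vectors 0001, 0011, 1100 and 0100 decodes packets 0 and 1 immediately, and packets 2 and 3 together once the matrix becomes full rank.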

IV Results and Discussion

This section compares the proposed GE-PD with OFGE-PD, OFGE and GE in terms of computational cost and capability of progressively recovering source packets. The decoding algorithm that achieves the best accuracy while requiring the least computational time is identified. It is then used to obtain simulation results, which are compared with theoretical predictions in order to validate the derived analytical expressions for systematic NC. The performance of systematic NC is then contrasted with that of straightforward NC and OU transmission, and the suitability of each scheme for progressive packet recovery is discussed.

IV-A Assessment of GE-PD

Fig. 1 compares the computational cost of the considered decoding schemes. Recall that GE-PD and OFGE-PD are modified versions of GE and OFGE, respectively, which have been adapted to recover source packets from rank-deficient decoding matrices, as described in Section III. The computational cost has been expressed in terms of the time required for a decoder to recover the full sequence of K source packets when straightforward NC is applied and channel conditions are perfect, i.e. ε = 0. The plotted results were obtained on a simulation platform equipped with an Intel Core i7-3770 processor and 8 GB of RAM. As expected [15], Fig. 1 shows that OFGE yields substantial computational savings over the conventional GE. However, the inclusion of progressive decoding capabilities in OFGE adds noticeable overhead to the decoding process. We observe that the computational cost of the resultant OFGE-PD increases rapidly for large values of K. On the other hand, GE-PD is not only more efficient than the original GE but also executes faster than OFGE.

Straightforward NC and perfect channel conditions were also assumed for the performance assessment of the four decoding schemes. Fig. 2 depicts the probability of each scheme recovering at least half (k = K/2) or all (k = K) of the source packets when N packets have been transmitted. As we see, OFGE is not optimized for recovering a fraction of the source message, in contrast to OFGE-PD, which requires a smaller number of transmitted packets to recover half of the source message but at a higher computational cost. A fact worthy of attention is that the decoding accuracy of GE is matched by that of GE-PD, which exhibits a computational cost as low as that of OFGE. For this reason, the proposed GE-PD was the decoding algorithm of choice for the simulation of the considered NC-based schemes.

Fig. 1: Computational cost of the decoding schemes for different numbers of source packets K.
Fig. 2: Performance comparison of the decoding schemes for straightforward NC and perfect channel conditions.

IV-B Performance Validation of Systematic NC

In order to validate the derived analytical expressions for the decoding probability of systematic NC, a comparison between theoretical and simulation results was carried out. We considered a source message comprising K packets, which are encoded using a systematic NC encoder and transmitted over a packet erasure channel with erasure probability ε.

Fig. 3(a) shows that expression (14) for P_k(N) accurately predicts the probability of decoding at least half of the source message (k = K/2). Similarly, expression (7) for P(N) matches the simulated results for decoding the entire source message (k = K), as reported in Fig. 3(b). The excellent agreement between theory and simulation establishes the validity of the theoretical analysis. It also demonstrates that the proposed GE-PD is both efficient and accurate, considering that the number of decoded source packets matches the one predicted by the theoretical model.

Fig. 3: Performance validation of systematic NC for different values of N, and (a) partial recovery (k = K/2) or (b) full recovery (k = K) of the source packets.

IV-C Evaluation of Systematic NC for Progressive Decoding

Fig. 4 shows the probability that a receiver employing systematic NC recovers at least half or all of the K source packets, when N packets have been transmitted. The performance of systematic NC is contrasted with that of OU transmission and straightforward NC, referred to here as SF NC for brevity. Two scenarios have been considered: Fig. 4(a) depicts the performance of the three transmission schemes for the first considered value of K, while Fig. 4(b) presents plots for a larger value of K. In both scenarios, the packet erasure probability ε has been set to the same value.

We observe in Fig. 4(a) that OU transmission allows the recovery of at least half of the source message for a small value of N. However, recovery of the whole source message requires a large number of transmitted packets. For example, for the considered target probability, a system using OU transmission can retrieve half of the source packets after only a small number of transmitted packets, whereas recovery of all source packets requires the transmission of at least 39 packets. In other words, a large number of additional packets needs to be transmitted, on average, to allow recovery of the whole source message when half of the message has already been retrieved. As we see in Fig. 4(b), a larger value of K markedly increases this number of additional packets.

By contrast, SF NC incurs a significant delay in recovering at least half of the source message, but only a few extra transmitted packets are required to obtain the entire message. We observe in Fig. 4(a) that once enough packets have been transmitted to reconstruct half of the message, the transmission of only a single additional packet is sufficient for the decoding of the entire message.

As is apparent from Fig. 4(a) and Fig. 4(b), systematic NC combines the best performance characteristics of both OU transmission and SF NC. We observe that the number of transmitted packets required to recover at least half of the source packets is as small as that of OU transmission, while the required number of transmitted packets for retrieving all of the source packets is smaller than or similar to that of SF NC. The latter observation confirms Proposition 2. Consequently, systematic NC is the most appropriate of the considered transmission schemes for progressive packet decoding, as it exhibits a high probability of either partially or fully decoding the source message.

Fig. 4: Decoding probabilities as a function of the number of transmitted packets N for the two considered values of K, shown in (a) and (b), respectively.

V Conclusions

In this paper, we considered systematic random linear network coding, obtained theoretical expressions that accurately describe its decoding probability, and proved that systematic network codes exhibit a higher probability of decoding the entirety of a source message than straightforward network codes. We also proposed Gaussian Elimination for Progressive Decoding (GE-PD), which aims to recover source packets as soon as one or more transmitted packets are successfully delivered to a receiver. We demonstrated that GE-PD performs similarly to the optimal theoretical decoder in terms of decoding probability and also exhibits low computational cost. Furthermore, we established that the decoding delay characteristics of systematic network coding for both partial and full recovery of source messages are notably better than those of straightforward network coding.


This work was conducted as part of the R2D2 project, which is supported by EPSRC under Grant EP/L006251/1.


  1. R. Ahlswede, N. Cai, S.-Y. Li, and R. Yeung, “Network information flow,” IEEE Trans. Inf. Theory, vol. 46, no. 4, pp. 1204–1216, Jul. 2000.
  2. S. Zhang, S. C. Liew, and P. P. Lam, “Hot topic: Physical-layer network coding,” in Proc. MobiCom, Los Angeles, USA, Sep. 2006.
  3. J. Heide, M. Pedersen, F. Fitzek, and T. Larsen, “Network coding for mobile devices - Systematic binary random rateless codes,” in Proc. IEEE ICC Workshops, Dresden, Germany, Jun. 2009.
  4. D. Lucani, M. Médard, and M. Stojanovic, “Systematic network coding for time-division duplexing,” in Proc. IEEE ISIT, Austin, USA, Jun. 2010.
  5. J. Barros, R. Costa, D. Munaretto, and J. Widmer, “Effective delay control in online network coding,” in Proc. IEEE INFOCOM, Rio de Janeiro, Brazil, Apr. 2009.
  6. R. Prior and A. Rodrigues, “Systematic network coding for packet loss concealment in broadcast distribution,” in Proc. ICOIN, Kuala Lumpur, Malaysia, Jan. 2011.
  7. P. Saxena and M. Vázquez-Castro, “Network coding advantage over MDS codes for multimedia transmission via erasure satellite channels,” in Personal Satellite Services.   Springer International Publishing, 2013, vol. 123, pp. 199–210.
  8. O. Trullols-Cruces, J. Barcelo-Ordinas, and M. Fiore, “Exact decoding probability under random linear network coding,” IEEE Commun. Lett., vol. 15, no. 1, pp. 67–69, Jan. 2011.
  9. W. Koepf, Hypergeometric Summation: An Algorithmic Approach to Summation and Special Function Identities.   Vieweg Verlag, 1998, p. 41.
  10. S. Roman, The Umbral Calculus.   Academic Press, 1984, p. 29.
  11. S. Sesia, I. Toufik, and M. Baker, LTE - The UMTS Long Term Evolution: From Theory to Practice.   John Wiley & Sons, 2011.
  12. A. L. Jones, I. Chatzigeorgiou, and A. Tassi, “Performance assessment of fountain-coded schemes for progressive packet recovery,” in Proc. CSNDSP, Manchester, UK, Jul. 2014.
  13. J. Epperson, An Introduction to Numerical Methods and Analysis, 2nd ed.   John Wiley & Sons, 2013, ch. 7, pp. 420–427.
  14. W. Niu, Z. Xiao, M. Huang, J. Yu, and J. Hu, “An algorithm with high decoding success probability based on LT codes,” in Proc. ISAPE, Guangzhou, China, Nov. 2010.
  15. V. Bioglio, M. Grangetto, R. Gaeta, and M. Sereno, “On the fly Gaussian elimination for LT codes,” IEEE Commun. Lett., vol. 13, no. 12, pp. 953–955, Dec. 2009.