On the Construction of Jointly Superregular Lower Triangular Toeplitz Matrices
Abstract
Superregular matrices have the property that all of their submatrices that can be of full rank are of full rank. Lower triangular superregular matrices are useful, e.g., for maximum distance separable convolutional codes as well as for (sequential) network codes. In this work, we provide an explicit design for all superregular lower triangular Toeplitz matrices in F_{2^8} for matrices of small dimensions. For higher-dimensional matrices, we present a greedy algorithm that finds a solution provided the field size is sufficiently high. We also introduce the notions of jointly superregular and product preserving jointly superregular matrices, and extend our explicit constructions of superregular matrices to these cases. Jointly superregular matrices are necessary to achieve optimal decoding capabilities for codes with a rate lower than 1/2, and the product preserving property is necessary for optimal decoding capabilities in network recoding.
I Introduction
Wireless networks are used more and more for the streaming of audio and video data. Generally, wireless packet-based streaming requires some amount of forward erasure correction in order to cope with packet erasures and latency constraints. In a streaming context, erasure correcting codes and reliable transport protocols have been investigated in, e.g., [1, 2, 3, 4]. Erasure correcting codes are either applied as a block code on consecutive blocks of the incoming data or as a convolutional code that sequentially processes the incoming data packets. If the block code is lower triangular, it can be used sequentially on the incoming data in the same manner as a convolutional code. Decoding can also be done sequentially as data packets are received, and, thus, the latency can be kept low. If further coding is allowed within the network, and not only at the edges, it is usually referred to as network coding. Besides enhanced reliability, network coding can offer increased throughput and security and has been successfully applied in various communication scenarios [5, 6, 7, 8].
Convolutional codes or lower triangular block codes may be constructed using a random linear code. One of the benefits of random linear codes is simplicity, e.g., with respect to coordination between nodes. Furthermore, for large field sizes and code dimensions, optimal decoding capabilities can often be proven at least asymptotically. On the other hand, for small field sizes and small code dimensions, it is generally hard to guarantee optimal decoding capabilities, and the need for coordination usually implies that the resulting codes, if used as network codes, suffer from high overhead requirements [9].
Coding matrices in small dimensions are of great interest for streaming applications. The advantages of using small matrices are twofold. First, they can be decoded with generic decoding algorithms such as Gaussian elimination, even on embedded devices, despite the cubic complexity of the algorithm. Second, the small dimension allows for the construction of coding matrices that are guaranteed to be optimal in the non-asymptotic regime, and with memory requirements and field sizes that are feasible for encoding and decoding on embedded devices. The implementation of finite field arithmetic is also straightforward on digital devices, since they are based on binary processors, which makes it feasible to implement high-performance arithmetic. It is particularly useful to use elements of F_{2^8}, since they can each be represented exactly by a single byte.
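To illustrate why F_{2^8} maps well onto byte-oriented processors, the following is a minimal sketch of multiplication in F_{2^8}; it is not the paper's implementation, and the primitive reduction polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D) is our assumption, as the paper does not state which primitive polynomial it uses.

```python
# Russian-peasant (shift-and-xor) multiplication in GF(2^8).
# Assumption: reduction by the primitive polynomial 0x11D; the paper does
# not specify its polynomial, so this choice is illustrative only.
def gf256_mul(a: int, b: int, poly: int = 0x11D) -> int:
    """Multiply two GF(2^8) elements, each stored in a single byte."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # addition in GF(2^m) is bitwise XOR
        a <<= 1
        if a & 0x100:       # degree-8 overflow: reduce modulo poly
            a ^= poly
        b >>= 1
    return r
```

Every operation here is a shift, mask, or XOR on a byte-sized value, which is why encoding and decoding over F_{2^8} is cheap even on embedded processors.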
Both convolutional codes and lower triangular block codes may be constructed from lower triangular matrices. In the latter case, Fig. 1 shows examples of rate 1/2 and rate 1/3 codes obtained by concatenating two or three lower triangular matrices, respectively. In particular, let G be a k × n coding matrix, illustrated for the 4 × 8 and 4 × 12 cases in Figs. 1(a) and 1(b). The rate of the code is given by k/n. Let C be the source data matrix, and let X be the coded data matrix, i.e., the output of the error correcting code. Then X = CG, which implies that the source vectors of dimension k are encoded into coded vectors each of length n. The matrices shown in Figs. 1(c) and 1(d) contain the same rows as those in Figs. 1(a) and 1(b); however, the rows are ordered differently to better illustrate that the source vectors can be processed sequentially as they appear. The use of the identity matrix as a code matrix yields a systematic code. The benefit of using two concatenated square coding matrices instead of one tall matrix is twofold. First, the entire coding matrix needs to preserve the low-latency property; this is straightforward for the two square matrices by having them be lower triangular, whereas this property is not well defined for a tall matrix. Second, in a multipath network the two square matrices may be used on different paths. Splitting a tall matrix and using it on two different paths in a network is not desirable.
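As an illustration of this encoding structure, here is a minimal sketch of systematic rate-1/2 encoding with a lower triangular Toeplitz matrix. The 3 × 3 dimension and the prime field GF(7) are simplifying assumptions for readability (the paper works in GF(2^m)), and the function names are hypothetical.

```python
P = 7  # small prime field GF(7) for readability; the paper targets GF(2^m)

def lt_toeplitz(col):
    """n x n lower triangular Toeplitz matrix built from its first column."""
    n = len(col)
    return [[col[i - j] if i >= j else 0 for j in range(n)] for i in range(n)]

def encode(col, src):
    """Systematic rate-1/2 encoding sketch: each source vector of dimension
    n is mapped to itself (identity part) followed by n parity symbols
    produced by the lower triangular Toeplitz matrix, i.e., a coded vector
    of length 2n. Because the matrix is lower triangular, parity symbol i
    depends only on source symbols 0..i, so encoding can run sequentially
    as source symbols arrive."""
    n = len(col)
    M = lt_toeplitz(col)
    out = []
    for s in src:
        parity = [sum(M[i][j] * s[j] for j in range(n)) % P for i in range(n)]
        out.append(list(s) + parity)
    return out
```

For example, `encode([1, 1, 2], [[3, 0, 5]])` yields `[[3, 0, 5, 3, 3, 4]]` over GF(7): the source symbols followed by three sequentially computable parity symbols.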
If an n × n lower triangular matrix is superregular, then it yields an optimal block code, i.e., one with optimal decoding capabilities [10]. A lower triangular matrix is superregular if and only if all of its proper submatrices are non-singular [11]. It was shown in [11] that MDS convolutional codes can be constructed from lower triangular superregular matrices. Thus, it is of great interest to find a way to construct superregular lower triangular matrices in small dimensions and with small field sizes. This is, however, an open problem. In [11], a few such matrices were shown without providing insight into how they were obtained. In [12], an explicit construction for superregular (totally positive) matrices was provided for the real and complex fields. This construction can easily be extended to very large prime fields, which is impractical. In [13], a new class of lower block triangular matrices that are superregular over a sufficiently large field was presented.
In this paper, we provide an explicit design for all superregular lower triangular Toeplitz matrices in F_{2^8} for matrices of small dimensions. For general dimensions, we propose a greedy approach to design the lower triangular superregular Toeplitz matrices.
By concatenating the identity matrix and one or more code matrices, a code of rate 1/2 or lower is obtained. Codes with a rate lower than 1/2 are of concern in various applications such as audio/video streaming. For example, such a code may be used in a streaming context when the underlying erasure channel suffers from a significant amount of erasures, or in one-to-many scenarios such as broadcast erasure channels with limited feedback options. Unfortunately, even if all the individual code matrices are superregular, it is not guaranteed that their concatenation with the identity matrix yields an optimal code of rate lower than 1/2. To this end, we introduce the notion of jointly superregular matrices. The use of two jointly superregular matrices maximizes the decoding capabilities, see Definition 1. With this stronger notion of superregularity, optimal decoding capabilities can be obtained for codes of rate lower than 1/2. We provide explicit constructions for such lower triangular matrices in small dimensions and any field F_{2^m}.
In ad-hoc and peer-to-peer networks, such as machine-to-machine communication or the Internet of things, it is becoming more and more relevant to recode at intermediate nodes. Recoding in network coding basically corresponds to multiplication of different coding matrices. However, the coding matrix obtained by multiplying two (jointly) superregular matrices is not guaranteed to be superregular. We therefore introduce the notion of product preserving jointly superregular matrices. In particular, consider a pair of jointly superregular matrices, say A and B, where A is used for encoding at the source and B is used at an intermediate node in the network to perform recoding. Maximum decoding capability at the end node is achieved if and only if the product of the two matrices is superregular, which is guaranteed if A and B are product preserving jointly superregular matrices. We provide a few explicit constructions for product preserving jointly superregular matrices in small dimensions and any field F_{2^m}.
II Superregular Matrices
In a slightly different context, the authors of [14] define a dense matrix to be superregular if and only if every square submatrix is non-singular. This definition of superregularity is extended in [11] to lower triangular matrices. That is, a lower triangular matrix is superregular if and only if all of its proper submatrices are non-singular [11, Definition 3.3]. Let M be an n × n lower triangular Toeplitz matrix with all the elements in the first column being non-zero. Let M′ be a submatrix of M, where M′ is constructed using the rows and columns of M with indices i_1 < … < i_k and j_1 < … < j_k, respectively [11, Definition 3.2]. Then, M′ is a proper submatrix of M if and only if j_l ≤ i_l for l = 1, …, k. We adopt this notion of superregularity since it maximises the decoding capability [10] when a superregular matrix is used in a code with rate 1/2. This notion of superregularity is somewhat different from the notion used in [15]. Naturally, a code with a rate greater than 1/2 can be generated through puncturing.
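The proper-submatrix condition can be verified by brute force. The sketch below works over a small prime field GF(p) for readability (an assumption; the paper targets GF(2^m)), and it treats a submatrix with row indices i_1 < … < i_k and column indices j_1 < … < j_k as proper when j_l ≤ i_l for every l, our reading of [11, Definition 3.2].

```python
from itertools import combinations

def det_mod(M, p):
    """Determinant of a square matrix over GF(p), p prime, via Gaussian
    elimination with modular inverses."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det                       # row swap flips the sign
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)         # modular inverse (Fermat)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            for j in range(c, n):
                M[r][j] = (M[r][j] - f * M[c][j]) % p
    return det % p

def is_superregular(M, p):
    """True iff every proper submatrix of lower triangular M is
    non-singular (rows i_1<...<i_k, cols j_1<...<j_k with j_l <= i_l)."""
    n = len(M)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if all(cols[l] <= rows[l] for l in range(k)):
                    sub = [[M[r][c] for c in cols] for r in rows]
                    if det_mod(sub, p) == 0:
                        return False
    return True
```

This exhaustive check is exponential in n, which is acceptable only because the matrices of interest here are small.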
A code with rate 1/3 can be constructed by using two jointly superregular matrices. Naturally, two jointly superregular matrices are individually superregular. The following definition describes the notion of joint superregularity. The essential part of the definition is that any square submatrix, formed by any combination of rows of the two matrices, that can be non-singular must also be non-singular.
Definition 1 (Joint superregularity).
Two superregular matrices are said to be jointly superregular if and only if all of the proper submatrices of any matrix formed by taking k_1 and k_2 rows from the two matrices, respectively, are non-singular. In the context of jointly superregular matrices, a proper submatrix is any square matrix that is not trivially rank deficient. A k × k matrix, when sorted by increasing row support (the support of a vector is equal to its number of non-zero elements), is said to be trivially rank deficient if the support of row l, l = 1, …, k, is less than l. A proper submatrix need not be triangular.
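The "trivially rank deficient" test in Definition 1 is cheap to state in code; a minimal sketch (the function name is hypothetical):

```python
def trivially_rank_deficient(M):
    """Definition 1's test: sort the rows by support (number of non-zero
    entries); the matrix is trivially rank deficient if the l-th row of the
    sorted matrix (1-indexed) has support smaller than l."""
    supports = sorted(sum(1 for x in row if x != 0) for row in M)
    return any(s < l for l, s in enumerate(supports, start=1))
```

For instance, a 2 × 2 matrix containing an all-zero row is trivially rank deficient, since the sorted support sequence starts at 0 < 1.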
TABLE I: Number of superregular lower triangular Toeplitz matrices

      Lemma 2(iii)   Corollary 1
2     0              0
3     84             0
4     17280          9
5     582180         2011
6     12700800       76506
7     233847322      1234973
8     2000121984     17274832
Definition 2 (Product preserving jointly superregular).
Two jointly superregular matrices are product preserving if and only if their product is a superregular matrix.
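Definition 2 can be checked mechanically: multiply the two matrices over the field and test whether the product still satisfies the proper-submatrix criterion of [11]. The sketch below uses a small prime field GF(5) for readability (an assumption; the paper's constructions live in GF(2^m)), and it also illustrates that the product of two individually superregular matrices need not be superregular.

```python
from itertools import combinations

def det_mod(M, p):
    """Determinant over GF(p), p prime, via Gaussian elimination."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            for j in range(c, n):
                M[r][j] = (M[r][j] - f * M[c][j]) % p
    return det % p

def is_superregular(M, p):
    """Proper-submatrix criterion: rows i_1<...<i_k and cols j_1<...<j_k
    with j_l <= i_l must give a non-singular submatrix."""
    n = len(M)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if all(cols[l] <= rows[l] for l in range(k)):
                    sub = [[M[r][c] for c in cols] for r in rows]
                    if det_mod(sub, p) == 0:
                        return False
    return True

def matmul_mod(A, B, p):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def product_preserving(A, B, p):
    """Definition 2's product condition: the product must be superregular."""
    return is_superregular(matmul_mod(A, B, p), p)
```

For example, over GF(5) the superregular pair [[1,0],[1,1]] and [[1,0],[2,1]] has the superregular product [[1,0],[3,1]], while [[1,0],[2,1]] and [[1,0],[3,1]] multiply to the identity matrix, which is not superregular.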
Let R denote the set of roots of a primitive polynomial p(x), which generates F_{2^m}. Let α ∈ R. Let a = (a_1, …, a_n) and let S_n denote the set of all n × n superregular lower triangular Toeplitz matrices with their first column given by a, where a_i ∈ F_{2^m}. Let a_{n+1} ∈ F_{2^m} and let M_n ⊕ a_{n+1} denote the (n+1) × (n+1) matrix obtained by extending M_n below and to the right by a row vector and a column vector, respectively, so that M_n ⊕ a_{n+1} is lower triangular and Toeplitz, where M_n is an n × n lower triangular Toeplitz matrix having the first column given by a.
Let J_n denote the set of all pairs of jointly superregular matrices according to Definition 1. For two jointly superregular matrices, A and B, we use the subscripts a and b to distinguish between their elements. For (A, B) ∈ J_n, the first columns of A and B are given by (a_1, …, a_n) and (b_1, …, b_n), respectively, where a_i, b_i ∈ F_{2^m}.
Let (A ⊕ a_{n+1}, B ⊕ b_{n+1}) be the pair of matrices obtained by extending A and B using the straightforward generalization of the ⊕-operator for a single matrix.
In [15], a construction of matrices that preserve superregularity after multiplication with block diagonal matrices was presented. In our case, the product of two superregular matrices is not guaranteed to be a superregular matrix. Note that the multiplication (from the right) in [15] is different, as the matrices there have entries in different fields.
Lemma 1.
Given A ∈ S_n, there exists B ∈ S_n such that their product AB ∉ S_n.
Proof.
The proof follows easily from [11, Corollary 3.6]. For any A ∈ S_n, then A^{-1} ∈ S_n, and it follows that AA^{-1} = I ∉ S_n. ∎
Let P_n denote the set of all pairs of product preserving jointly superregular lower triangular Toeplitz matrices:

P_n = { (A, B) ∈ J_n : AB ∈ S_n }.   (1)
III Explicit construction of superregular and jointly superregular matrices
In this section we first show methods for the explicit construction of lower triangular Toeplitz superregular matrices of small size. Any 2 × 2 lower triangular Toeplitz matrix with a_1, a_2 ≠ 0 is superregular over any F_{2^m}; this follows easily from the definition, since every proper submatrix is either a single non-zero element or has determinant a_1^2 ≠ 0. In the following, operations on the exponents of α are taken modulo 2^m − 1.
Lemma 2.
Let and .

Then if and only if and .

Let and . Then if and only if, and satisfy:
(2) 
Let and . Then if and only if, and satisfy:
(3) and and jointly satisfy:
(4)
Lemma 2 (whose proof can be found in the Appendix) provides necessary and sufficient conditions for superregularity. If only sufficient conditions are required, the four non-trivial equations in (4) can be replaced by a single equation, as shown in Corollary 1. Table I shows the number of superregular lower triangular Toeplitz matrices.
Corollary 1.
Remark 1.
Let . If then .
The two lemmas below, Lemmas 3 and 4, list the necessary and sufficient conditions for constructing jointly superregular lower triangular Toeplitz matrices of size 2 × 2 and 3 × 3, respectively. Furthermore, Lemma 3 also defines a necessary condition for constructing jointly superregular lower triangular Toeplitz matrices of any size.
Lemma 3.
Let . For , if and only if, and . For any , , if such that .
Proof.
The determinant of the submatrix is given by , and is only zero when . ∎
Lemma 4.
Let , and let . Let . Then if and only if, and satisfy:
(6) 
and and jointly satisfy:
(7)  
(8) 
The proof of Lemma 4 uses a technique similar to that used in the proof of Lemma 2 and has therefore been omitted.
Remark 2.
Let , where , then .
Proof.
The proof follows from the fact that for is equal to for , which does not satisfy Lemma 3. ∎
Jointly superregular matrices of size 2 × 2 are always product preserving. The following lemma provides necessary and sufficient conditions for product preserving jointly superregular lower triangular Toeplitz matrices of the next two sizes.
Lemma 5.
Let .

Let . Then if and only if, and jointly satisfy:
(9) (10) 
Let . Then if and only if, and jointly satisfy:
(11) (12) (13) (14) (15) (16)
IV Greedy algorithm
We present a greedy algorithm for constructing an n × n superregular lower triangular Toeplitz matrix. The algorithm is illustrated in Algorithm 1. The algorithm starts by searching for a small superregular matrix. When an n × n superregular matrix is found, the algorithm searches for an (n+1) × (n+1) superregular matrix by extending the matrix using the ⊕-operator and a candidate element a_{n+1}.
The search is implemented by having a_{n+1} run through all the elements of the finite field, except the last element, which can be excluded since it never yields a superregular extension. This method is used until a superregular matrix of the desired size is found, provided that the field size is sufficiently large. If no candidate a_{n+1} yields a superregular extension, then backtracking is required. That is, without backtracking the algorithm could reach a matrix that cannot be extended further. In case of such an event, the previous element is set to the next candidate and the resulting matrix is tested for superregularity. For a sufficiently large field size, the algorithm is guaranteed to find a superregular lower triangular Toeplitz matrix of the desired size. In the worst case, the algorithm will fail after having checked all possible combinations of the first column before returning "Insufficient field size". On an Intel 2.3 GHz Core i5 (I5-2415M), our single-threaded implementation of the algorithm requires only a few milliseconds to find a superregular lower triangular Toeplitz matrix over F_{2^8}. Furthermore, without backtracking, our experiments show that the algorithm only works for small matrices over F_{2^8}.
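A minimal sketch of the greedy search with backtracking follows: a depth-first search over candidate first-column elements, where each extension is validated with the brute-force proper-submatrix check. The prime field GF(p) and the normalization of the leading entry to 1 are illustrative assumptions (scaling a matrix by a non-zero constant preserves superregularity), in place of the paper's GF(2^m) arithmetic and its exclusion of the last field element.

```python
from itertools import combinations

def det_mod(M, p):
    """Determinant over GF(p), p prime, via Gaussian elimination."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            for j in range(c, n):
                M[r][j] = (M[r][j] - f * M[c][j]) % p
    return det % p

def is_superregular(M, p):
    """Brute-force proper-submatrix check (j_l <= i_l for all l)."""
    n = len(M)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if all(cols[l] <= rows[l] for l in range(k)):
                    sub = [[M[r][c] for c in cols] for r in rows]
                    if det_mod(sub, p) == 0:
                        return False
    return True

def lt_toeplitz(col):
    """Lower triangular Toeplitz matrix from its first column."""
    n = len(col)
    return [[col[i - j] if i >= j else 0 for j in range(n)] for i in range(n)]

def greedy_column(n, p):
    """Backtracking search for the first column of an n x n superregular
    lower triangular Toeplitz matrix over GF(p); None means the field is
    too small (the search over columns with leading entry 1 is complete)."""
    col = [1]
    def extend():
        if len(col) == n:
            return True
        for a in range(1, p):        # first-column entries must be non-zero
            col.append(a)
            if is_superregular(lt_toeplitz(col), p) and extend():
                return True
            col.pop()                # backtrack: undo the failed extension
        return False
    return col if extend() else None
```

Because the search backtracks, a return value of None genuinely means no suitable column with leading entry 1 exists over GF(p), mirroring the algorithm's "Insufficient field size" outcome.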
(17)  
(18)  
(19) 
V Examples of coding matrices
We now present two superregular matrices, M_1 and M_2, over F_{2^8}. The matrices are shown in Equations (20) and (21). The two matrices have identical performance with respect to decoding capabilities, since they are both superregular. However, M_1 outperforms M_2 with respect to encoding and decoding throughput. Our experiments on encoding and decoding data packets using M_1 and M_2 show a significant throughput gain for M_1. The gain in throughput comes from the fact that when an element equals 1, there is no need for multiplication during the encoding and decoding process. Inspecting (20) and (21) reveals that M_1 contains ones in its first column, which in turn ensures that a large fraction of the matrix elements below the diagonal are 1, whereas M_2 has no elements below the diagonal that are 1. Equations (22) and (23) show the first columns of the two matrices, respectively. Given their structure, these matrices are superregular.
(20)  
(21) 
(22)  
(23) 
In addition to the two superregular matrices, we also present two jointly superregular matrices. These matrices are jointly superregular over F_{2^8}, using the previous primitive polynomial and its roots. Furthermore, the two matrices are not only jointly superregular but also product preserving. The matrices are shown in Equations (24) and (25). Note that the matrices have several parameters that are 0. A consequence of the lower triangular Toeplitz structure of the matrices is that any leading block of the pair is also product preserving jointly superregular.
(24)  
(25) 
VI Conclusions
This paper has delivered explicit constructions for superregular matrices in small dimensions. We also presented a greedy algorithm for larger superregular matrices. The matrix attributes joint superregularity and product preserving joint superregularity were defined for lower triangular matrices, and explicit constructions of matrices with these two attributes were provided. We demonstrated the applicability of jointly superregular and product preserving jointly superregular matrices with use cases such as codes with a rate lower than 1/2 and intermediate recoding, respectively. Both use cases benefit greatly from optimal decoding capabilities. We also exposed some general attributes of (jointly) superregular matrices. All of the methods presented in this paper can be implemented on embedded devices; the field size and matrix dimensions used in the example section are feasible even on low-power devices with limited instruction sets, and all the presented matrices provide optimal decoding capabilities. Finally, we showed that the parameters of a lower triangular Toeplitz superregular matrix can have a significant impact on the throughput performance of an implementation.
Appendix: Proof of Lemma 2

The determinants of the proper submatrices of are: and , where . Since is primitive, , if . Thus, (modulo ).

We only need to check the determinants of the proper submatrices that include the new element . Since is primitive, it is easy to obtain (2).
References
 [1] D. G. Sachs, I. Kozintsev, M. Yeung, and D. L. Jones, “Hybrid ARQ for Robust Video Streaming Over Wireless LANs,” Int. Conference on Information Technology: Coding and Computing, pp. 317–321, 2001.
 [2] A. Badr, A. Khisti, W.-T. Tan, and J. Apostolopoulos, “Robust streaming erasure codes based on deterministic channel approximations,” in IEEE Int. Symposium on Inf. Theory, July 2013, pp. 1002–1006.
 [3] A. Nafaa, T. Taleb, and L. Murphy, “Forward error correction strategies for media streaming over wireless networks,” IEEE Communications Magazine, vol. 46, no. 1, pp. 72–79, January 2008.
 [4] D. Leong and T. Ho, “Erasure coding for realtime streaming,” in IEEE Int. Symposium on Inf. Theory, July 2012, pp. 289–293.
 [5] J. Krigslund, J. Hansen, M. Hundebøll, D. Lucani, and F. Fitzek, “CORE: COPE with MORE in wireless meshed networks,” in IEEE 77th Vehicular Technology Conference, June 2013, pp. 1–6.
 [6] H. Seferoglu, A. Markopoulou, and K. Ramakrishnan, “I2NC: Intra- and inter-session network coding for unicast flows in wireless networks,” in IEEE INFOCOM, April 2011, pp. 1035–1043.
 [7] P. Pahlevani, D. Lucani, M. Pedersen, and F. Fitzek, “PlayNCool: Opportunistic network coding for local optimization of routing in wireless mesh networks,” in IEEE Globecom Workshops, Dec 2013, pp. 812–817.
 [8] J. Hansen, D. Lucani, J. Krigslund, M. Médard, and F. Fitzek, “Network coded software defined networking: enabling 5g transmission and storage networks,” IEEE Commun. Mag., pp. 100–107, September 2015.
 [9] J. Heide, M. V. Pedersen, F. H. P. Fitzek, and M. Médard, “On Code Parameters and Coding Vector Representation for Practical RLNC,” in IEEE Int. Conference on Communications, June 2011.
 [10] R. Smarandache, H. Gluesing-Luerssen, and J. Rosenthal, “Strongly MDS convolutional codes, a new class of codes with maximal decoding capability,” in IEEE Int. Symposium on Inf. Theory, 2002, p. 426.
 [11] H. Gluesing-Luerssen, J. Rosenthal, and R. Smarandache, “Strongly-MDS convolutional codes,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 584–598, Feb 2006.
 [12] M. Aissen, I. Schoenberg, and A. Whitney, “On the generating functions of totally positive sequences I,” Journal d’Analyse Mathématique, vol. 2, no. 1, pp. 93–103, 1952.
 [13] R. Hutchinson, R. Smarandache, and J. Trumpf, “A new class of superregular matrices and MDP convolutional codes,” Linear Alg. and its Applications, vol. 439, no. 7, pp. 2145–2157, 2013.
 [14] R. Roth and A. Lempel, “On MDS codes via Cauchy matrices,” IEEE Trans. Inf. Theory, vol. 35, no. 6, pp. 1314–1319, Nov 1989.
 [15] R. Mahmood, A. Badr, and A. Khisti, “Convolutional codes with maximum column sum rank for network streaming,” in IEEE Int. Symposium on Inf. Theory, June 2015, pp. 2271–2275.