An Algebraic Approach for Decoding Spread Codes
Abstract
In this paper we study spread codes: a family of constant-dimension codes for random linear network coding. In other words, the codewords are full-rank matrices of size k × n with entries in a finite field F_q. Spread codes are a family of optimal codes with maximal minimum distance. We give a minimum-distance decoding algorithm which requires O((n − k)k^3) operations over an extension field F_{q^k}. Our algorithm is more efficient than the previous ones in the literature when the dimension k of the codewords is small with respect to n. The decoding algorithm takes advantage of the algebraic structure of the code, and it uses original results on minors of a matrix and on the factorization of polynomials over finite fields.
1 Introduction
Network coding is a branch of coding theory that arose in 2000 in the work of Ahlswede, Cai, Li and Yeung [ACLY00]. While classical coding theory focuses on point-to-point communication, network coding focuses on multicast communication, i.e., a source communicating with a set of sinks. The source transmits messages to the sinks over a network, which is modeled as a directed multigraph. Some examples of multicast communication come from Internet protocol applications such as streaming media, digital television, and peer-to-peer networking.
The goal in multicast communication is to achieve maximal information rate. Informally, this corresponds to maximizing the number of messages per transmission, i.e., per single use of the network. Li, Yeung and Cai prove in [LYC03] that the maximal information rate can be achieved in multicast communication using linear network coding, provided that the size of the base field is large enough.
The algebraic aspect of network coding emerged with the work of Kötter and Kschischang [KK08b]. The authors introduced a new setting for random linear network coding: given the linearity of the combinations performed by the network, they suggest employing subspaces of a given vector space as codewords. Indeed, subspaces are invariant under taking linear combinations of their elements. Let P_q(n) be the set of all linear subspaces of F_q^n. They show that P_q(n) is a metric space, with distance d(U, V) = dim(U + V) − dim(U ∩ V) for U, V ∈ P_q(n).
Kötter and Kschischang define network codes to be subsets of this metric space. In particular, they define constant-dimension codes as subsets whose elements all have the same dimension. Notions of errors and erasures compatible with the new transmission model are introduced in [KK08b]. In addition, upper and lower bounds on the cardinality of network codes are established in [KK08b, EV08].
We review here some of the constructions of constant-dimension codes present in the literature. The first one is introduced by Kötter and Kschischang in [KK08b]. The construction uses the evaluation of linearized polynomials over a subspace. The codes one obtains are called Reed-Solomon-like codes, because of their similarities with Reed-Solomon codes in classical coding theory. Due to their connection with the rank-metric codes introduced in [Gab85], these codes are also called lifted rank-metric codes. Kötter and Kschischang devise a list minimum-distance decoding algorithm for their codes. Spread codes, which are the subject of this paper, were first introduced by the authors in [MGR08]. Spread codes contain the codes with maximal minimum distance in [KK08b]. Another family of network codes, based on q-analogs of designs, appears in [KK08a]. Aided by computer search, the authors find constant-dimension codes based on designs with large cardinality. Another family of codes is constructed in [ES09]. The construction builds on that of Reed-Solomon-like codes, and the codes that the authors obtain contain them. The construction is also based on binary constant-weight codes, Ferrers diagrams, and rank-metric codes. The proposed decoding algorithm operates on two levels: first one decodes a constant-weight code, then one applies a decoding algorithm for rank-metric codes. In [Ska10] Skachek introduces a family of codes that is a subfamily of the one in [ES09]. In [MV10] the authors introduce another family of codes, which they obtain by evaluating pairs of linearized polynomials. The codes obtained can be decoded via a list decoding algorithm, which is introduced in the same work.
This work focuses on spread codes, a family of constant-dimension codes first introduced in [MGR08]. A spread of F_q^n is a collection of subspaces of F_q^n, all of the same dimension k, which partitions the ambient space. Such a family of subspaces exists if and only if the dimension k of the subspaces divides n. The construction of spread codes is based on the algebra F_q[P], where P is the companion matrix of a monic irreducible polynomial of degree k. Concretely, we define spread codes as
where G_q(k, n) denotes the Grassmannian of all subspaces of F_q^n of dimension k.
Since spreads partition the ambient space, spread codes are optimal. More precisely, they have the maximum possible minimum distance 2k, and the largest possible number of codewords for a code with minimum distance 2k. Indeed, they achieve the anticode bound from [EV08]. This family is closely related to the family of Reed-Solomon-like codes introduced in [KK08b]. We discuss the relation in detail in Section 2.2. In Lemma 17, we show how to extend to spread codes the existing decoding algorithms for Reed-Solomon-like codes and rank-metric codes.
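The partition property can be verified directly in the smallest nontrivial case. The sketch below is a toy check, assuming the n = 2k shape of the construction from [MGR08]: codewords are taken to be the row spaces of [I | A] for A ∈ F_2[P], together with the row space of [0 | I], with q = 2, k = 2, n = 4 and P the companion matrix of x^2 + x + 1. All helper names are ours, not the paper's.

```python
import itertools

q, k = 2, 2

def mat_mult(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(k)) % q
             for j in range(k)] for i in range(k)]

def mat_add(A, B):
    return [[(a + b) % q for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I = [[1, 0], [0, 1]]
Z = [[0, 0], [0, 0]]
P = [[0, 1], [1, 1]]                 # companion matrix of x^2 + x + 1 over F_2
Fq_P = [Z, I, P, mat_add(P, I)]      # F_2[P]; note P^2 = P + I

def rowspace(M):
    """All nonzero vectors in the row space of a matrix over F_2."""
    vecs = set()
    for coeffs in itertools.product(range(q), repeat=len(M)):
        v = tuple(sum(c * row[j] for c, row in zip(coeffs, M)) % q
                  for j in range(len(M[0])))
        if any(v):
            vecs.add(v)
    return vecs

codewords = [[I[i] + A[i] for i in range(k)] for A in Fq_P]   # [ I | A ]
codewords.append([[0, 0] + I[i] for i in range(k)])           # [ 0 | I ]

spaces = [rowspace(C) for C in codewords]
union = set().union(*spaces)
assert len(union) == 2**4 - 1        # every nonzero vector of F_2^4 is covered
assert all(s & t == set() for s in spaces for t in spaces if s is not t)
print(len(codewords), "codewords partition the nonzero vectors of F_2^4")
```

The q^2 + 1 = 5 codewords are pairwise disjoint away from 0 and cover all 15 nonzero vectors, so they form a spread of F_2^4.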
The structure of the spreads that we use in our construction helps us devise a minimum-distance decoding algorithm, which can correct errors up to half the minimum distance of the code. In Lemma 28 we reduce the decoding of a spread code to a number of instances of the decoding algorithm for the special case n = 2k. Therefore, we focus on the design of a decoding algorithm for the spread code
The paper is structured as follows. In Section 2 we give the construction of spread codes and discuss their main properties. In Subsection 2.1 we introduce the main notations. In Subsection 2.2 we discuss the relation between spread codes and Reed-Solomon-like codes, which is given explicitly in Proposition 15. Proposition 18 shows how to apply a minimum-distance decoding algorithm for Reed-Solomon-like codes to spread codes, and estimates the complexity of decoding a spread code using such an algorithm.
The main results of the paper are contained in Section 3. In Subsection 3.1 we prove some results on matrices, which will be needed for our decoding algorithm. Our main result is a new minimum-distance decoding algorithm for spread codes, which is given in pseudocode as Algorithm 2. The decoding algorithm is based on Theorem 34, where we explicitly construct the output of the decoder. Our algorithm can be made more efficient when the first k columns of the received word are linearly independent. Proposition 35 and Corollary 36 contain the theoretical results behind this simplification, and the algorithm in pseudocode is given in Algorithm 3. Finally, in Section 4 we compute the complexity of our algorithm. Using the results from Subsection 2.2, we compare it with the complexity of the algorithms in the literature. It turns out that our algorithm is more efficient than all the known ones, provided that k is small with respect to n.
2 Preliminaries and notations
Definition 1 ([Hir98, Section 4.1]).
A subset S ⊆ G_q(k, n) is a spread if it satisfies

V ∩ W = {0} for all V, W ∈ S distinct, and

the union of the elements of S is all of F_q^n.
Theorem 2 ([Hir98, Theorem 4.1]).
A spread of k-dimensional subspaces of F_q^n exists if and only if k divides n.
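One direction of Theorem 2 follows from counting: a spread partitions the q^n − 1 nonzero vectors of F_q^n into blocks of q^k − 1 nonzero vectors each, so q^k − 1 must divide q^n − 1, and this divisibility holds exactly when k divides n. A quick numerical check of the counting fact:

```python
# Necessary-condition check behind Theorem 2: a spread splits the q^n - 1
# nonzero vectors of F_q^n into blocks of size q^k - 1, so (q^k - 1) must
# divide (q^n - 1).  That divisibility holds exactly when k | n.
for q in (2, 3, 5):
    for n in range(1, 13):
        for k in range(1, n + 1):
            divides = (q**n - 1) % (q**k - 1) == 0
            assert divides == (n % k == 0)
print("(q^k - 1) | (q^n - 1)  <=>  k | n   verified on small cases")
```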
In [MGR08] we give a construction of spreads suitable for use in Random Linear Network Coding (RLNC). Our construction is based on companion matrices.
Definition 3.
Let F_q be a finite field and p ∈ F_q[x] a monic polynomial of degree k. The companion matrix of p is the k × k matrix with ones on the superdiagonal, the negated coefficients (−p_0, −p_1, …, −p_{k−1}) of p as last row, and zeros elsewhere.
Let n, k be positive integers with k | n, let p ∈ F_q[x] be a monic irreducible polynomial of degree k, and let P be its companion matrix.
Lemma 4.
The algebra F_q[P] is a finite field, i.e., F_q[P] ≅ F_{q^k}.
This is a well-known fact (see [LN94, page 64]).
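A small sanity check of Lemma 4 in the smallest case, with q = 2, k = 2 and p(x) = x^2 + x + 1: the companion matrix P satisfies p(P) = 0, and every nonzero element of F_2[P] = {0, I, P, P + I} is invertible, so F_2[P] is a field with q^k = 4 elements. The matrix helpers below are ours.

```python
def add(A, B):
    return [[(a + b) % 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) % 2
             for j in range(n)] for i in range(n)]

P = [[0, 1], [1, 1]]                  # companion matrix of x^2 + x + 1
I = [[1, 0], [0, 1]]
Z = [[0, 0], [0, 0]]

# p(P) = P^2 + P + I = 0
assert add(add(mul(P, P), P), I) == Z

field = [Z, I, P, add(P, I)]          # F_2[P]
for A in field[1:]:
    # each nonzero element has a multiplicative inverse in F_2[P]
    assert any(mul(A, B) == I for B in field[1:])
print("F_2[P] is a field with", len(field), "elements")
```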
Lemma 5.
Let be a ring isomorphism. Denote by
the projective space, where is the following equivalence relation
where . Then the map
is injective.
Proof.
Theorem 6 ([MGR08, Theorem 1]).
is a spread of for .
Remark 8.
Notice that
In order to have a unique representative for the elements of the code, we bring the matrices into reduced row echelon form.
Lemma 9 ([MGR08, Theorem 1]).
Let S be a spread code. Then

d(V, W) = 2k for all V, W ∈ S distinct, i.e., the code has maximal minimum distance, and

|S| = (q^n − 1)/(q^k − 1), i.e., the code has maximal cardinality with respect to the given minimum distance.
Remark 10.
Definition 11.
A vector space is uniquely decodable by the spread code if
In Section 3 we devise a minimum-distance decoding algorithm for uniquely decodable received spaces.
2.1 Further notations
We introduce in this subsection the notation we use in the paper.
Definition 12.
Let q be a prime power and denote by L the set of linearized polynomials of degree less than q^k. Equivalently, f ∈ L if and only if f(x) = f_0 x + f_1 x^q + ⋯ + f_{k−1} x^{q^{k−1}} for some f_0, …, f_{k−1} ∈ F_{q^k}.
In the rest of the work we denote q-th power exponents such as x^{q^i} by x^{[i]}.
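The defining feature of a linearized polynomial is that it induces an F_q-linear map on F_{q^k}. A minimal sketch for q = 2, k = 2, with F_4 modeled as pairs (c0, c1) representing c0 + c1·alpha, where alpha^2 = alpha + 1; all helper names are ours:

```python
import itertools

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    a0, a1 = u
    b0, b1 = v
    # (a0 + a1*alpha)(b0 + b1*alpha) with alpha^2 = alpha + 1
    return ((a0 * b0 + a1 * b1) % 2, (a0 * b1 + a1 * b0 + a1 * b1) % 2)

F4 = [(0, 0), (1, 0), (0, 1), (1, 1)]

def frobenius(x):          # x -> x^q = x^2, written x^[1] in the paper
    return mul(x, x)

def linearized(a, b, x):   # f(x) = a*x^[0] + b*x^[1] = a*x + b*x^2
    return add(mul(a, x), mul(b, frobenius(x)))

# additivity f(x + y) = f(x) + f(y) for every choice of coefficients
for a, b, x, y in itertools.product(F4, repeat=4):
    assert linearized(a, b, add(x, y)) == add(linearized(a, b, x),
                                              linearized(a, b, y))
print("every f(x) = a*x + b*x^[1] is additive on F_4")
```

Additivity holds because (x + y)^q = x^q + y^q in characteristic dividing q; together with F_q-homogeneity this is exactly F_q-linearity.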
Let F_q be a finite field with q elements, and let p ∈ F_q[x] be a monic irreducible polynomial of degree k. P denotes the companion matrix of p, and S is a matrix which diagonalizes P.
We denote by D the diagonal matrix whose entry in position (i, i) is α^{q^{i−1}} for i = 1, …, k, where α is a root of p.
Let A be a matrix of size k × n and let r = (r_1, …, r_s), c = (c_1, …, c_s) be tuples of row and column indices. [r | c]_A denotes the minor of the matrix A corresponding to the submatrix with row indices r and column indices c. We skip the subscript when the matrix is clear from the context.
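As an illustration of the minor notation, the sketch below computes the minor selecting given (1-based) row indices and column indices as the determinant of the corresponding submatrix; `minor` and `det` are hypothetical helpers of ours, not functions from the paper.

```python
def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        sub = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(sub)
    return total

def minor(A, r, c):
    """Minor of A with row indices r and column indices c (1-based tuples)."""
    sub = [[A[i - 1][j - 1] for j in c] for i in r]
    return det(sub)

A = [[1, 2, 0, 3],
     [0, 1, 4, 1],
     [2, 0, 1, 5]]

# the 2 x 2 minor with rows (1, 3) and columns (2, 4): det [[2, 3], [0, 5]] = 10
assert minor(A, (1, 3), (2, 4)) == 10
```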
We introduce some operations on tuples. Let .

means that .

means that for .

is the length of the tuple.

denotes the such that is maximal.

If then , i.e., denotes the concatenation of tuples.

If then denotes the with maximal such that where is the empty tuple.

, with the convention that for any .
We define the non-diagonal rank of a matrix as follows.
Definition 13.
Let A be a matrix. We define the non-diagonal rank of A as
Finally, algorithms’ complexities are expressed as , which corresponds to performing operations over a field , where are given parameters.
2.2 Relation with ReedSolomonlike codes
Reed-Solomon-like codes, also called lifted rank-metric codes, are a class of constant-dimension codes introduced in [KK08b]. They are closely related to the maximum rank distance codes introduced in [Gab85]. We give here an equivalent definition of these codes.
Definition 14.
Let be finite fields. Fix some linearly independent elements . Let be an isomorphism of vector spaces. A Reed-Solomon-like (RSL) code is defined as
The following proposition establishes a relation between spread codes and RSL codes. The proof is easy, but rather technical, hence we omit it.
Proposition 15.
Let , finite fields, and the companion matrix of a monic irreducible polynomial of degree . Let be a root of , a basis of over . Moreover, let be the isomorphism of vector spaces which maps the basis to the standard basis of over . Then for every choice of there exists a unique linearized polynomial of the form with such that
The constant is where is the first row of .
The proposition allows us to relate our spread codes to some RSL codes. The following corollary makes the connection explicit. We use the notation of Proposition 15.
Corollary 16.
For each , let be a basis of over . Let denote the isomorphism of vector spaces that maps the basis to the standard basis of . Then
The connection that we have established with RSL codes allows us to extend any minimum-distance decoding algorithm for RSL codes to a minimum-distance decoding algorithm for spread codes. We start with a key lemma.
Lemma 17.
Let be a spread code, and for some . Assume there exists a such that . Let
It holds that:

for ,

, and

.
Proof.
In the next proposition, we use Corollary 16 and Lemma 17 to adapt to spread codes any decoding algorithm for RSL codes. In particular, we apply our results to the algorithms contained in [KK08b] and [SKK08], and we give the complexity of the resulting algorithms for spread codes.
Proposition 18.
Any minimum-distance decoding algorithm for RSL codes may be extended to a minimum-distance decoding algorithm for spread codes. In particular, the algorithms described in [KK08b] and [SKK08] can be extended to minimum-distance decoding algorithms for spread codes, with complexities for the former and for the latter.
Proof.
Suppose we are given a minimum-distance decoding algorithm for RSL codes. We construct a minimum-distance decoding algorithm for spread codes as follows: Let be the received word, and assume that there exists a such that . First, one computes the rank of until one finds an such that , for . Thanks to Lemma 17, one knows that for and . Moreover, one has
Therefore, one can apply the minimum-distance decoding algorithm for RSL codes to the received word in order to compute .
Assume now that one uses as minimum-distance decoder for RSL codes either the decoding algorithm from [KK08b] or the one from [SKK08]. The complexity of computing the rank of by computing reduced row echelon forms is . The complexity of the decoding algorithm for RSL codes is for the one in [KK08b] and for the one in [SKK08]. The complexity of the decoding algorithm is the dominant term in the complexity estimate. ∎
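The rank computations in the proof above can be carried out by Gaussian elimination over F_q. A sketch for prime q (the helper name is ours; pivot inverses are computed via Fermat's little theorem):

```python
def rank_mod_q(M, q):
    """Rank of a matrix over F_q (q prime) via reduced row echelon form."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        pivot = next((i for i in range(r, rows) if M[i][c] % q != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        inv = pow(M[r][c], q - 2, q)          # inverse mod prime q
        M[r] = [(x * inv) % q for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] % q:
                f = M[i][c]
                M[i] = [(a - f * b) % q for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 1, 1, 1]]          # third row = first + second over F_2
assert rank_mod_q(A, 2) == 2
```

For q = p^e with e > 1 the same elimination works once arithmetic is done in the extension field rather than modulo a prime.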
It is well known that RSL codes are closely related to the rank-metric codes introduced in [Gab85]. Although the rank metric on rank-metric codes is equivalent to the subspace distance on RSL codes, the minimum-distance decoding problem for the former is not equivalent to the one for the latter. In [SKK08] the authors introduced the Generalized Decoding Problem for Rank-Metric Codes, which is equivalent to the minimum-distance decoding problem of RSL codes. Decoding algorithms for rank-metric codes such as the ones contained in [Gab85, Loi06, RP04] must be generalized in order to be effective for the Generalized Decoding Problem for Rank-Metric Codes and, consequently, to be applicable to RSL codes.
Another interesting application of Lemma 17 allows us to improve the efficiency of the decoding algorithm for the codes proposed in [Ska10]. For the relevant definitions, we refer the interested reader to the original article.
Corollary 19.
There is an algorithm which decodes the codes from [Ska10] and has complexity .
3 The Minimum-Distance Decoding Algorithm
In this section we devise a new minimum-distance decoding algorithm for spread codes. In the next section, we show that our algorithm is more efficient than the ones present in the literature when k is small with respect to n.
We start by proving some results on matrices, which will be used to design the decoding algorithm and to prove its correctness.
3.1 Preliminary results on matrices
Let be a field and let be a polynomial of the form where , .
Lemma 20.
The following are equivalent:

The polynomial decomposes into linear factors, i.e.,
where .

It holds
(2) for all such that and
Proof.
We proceed by induction on .

The claim is trivial for . Let us assume that the claim holds for . We explicitly show the extraction of a linear factor of the polynomial.
The claim then follows by induction.
∎
Let be a polynomial ring with coefficients in a field . Consider the generic matrix of size
Denote by the ideal generated by all minors of size of , which do not involve entries on the diagonal, i.e.,
We establish some relations on the minors of , modulo the ideal .
Lemma 21.
Let , , and . Then
Proof.
Notice that if we adopt the convention that , i.e., when , we recover the determinant formula.
We proceed by induction on . Let us consider the case when , i.e., . Then,
For the claim is true, since column appears twice.
Assume that the claim is true for .
Let us now focus on the factor for ; we get
By substitution it follows that
where . The fact that column appears twice in implies that . The last equality follows from the induction hypothesis. ∎
The following is an easy consequence of Lemma 21.
Proposition 22.
Let such that . Then
with for any .
We now study the minors of a matrix of the form where and has a special form, which we describe in the next lemma.
Lemma 23.
Let P be the companion matrix of a monic irreducible polynomial p of degree k, and let α be a root of p. Then the matrix
(3) 
diagonalizes P.
Proof.
The eigenvalues of the matrix P are the roots of the irreducible polynomial p. If α is an element such that p(α) = 0, then the roots of p are α, α^q, …, α^{q^{k−1}} by [LN94, Theorem 2.4]. It is enough to show that the columns of S are eigenvectors of P. Let , then
∎
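Lemma 23 can be checked concretely in the smallest case q = 2, k = 2, p(x) = x^2 + x + 1: with alpha a root of p, the matrix S with columns (1, alpha) and (1, alpha^2) satisfies P·S = S·D for D = diag(alpha, alpha^2), i.e., S diagonalizes the companion matrix P. F_4 is again modeled as pairs (c0, c1) over F_2 with alpha^2 = alpha + 1; the helpers are ours.

```python
def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    a0, a1 = u
    b0, b1 = v
    # (a0 + a1*alpha)(b0 + b1*alpha) with alpha^2 = alpha + 1
    return ((a0 * b0 + a1 * b1) % 2, (a0 * b1 + a1 * b0 + a1 * b1) % 2)

def matmul(A, B):
    out = []
    for i in range(2):
        row = []
        for j in range(2):
            s = (0, 0)
            for t in range(2):
                s = add(s, mul(A[i][t], B[t][j]))
            row.append(s)
        out.append(row)
    return out

zero, one = (0, 0), (1, 0)
alpha = (0, 1)
alpha2 = mul(alpha, alpha)            # alpha^2 = alpha + 1

P = [[zero, one], [one, one]]         # companion matrix of x^2 + x + 1
S = [[one, one], [alpha, alpha2]]     # columns (1, alpha) and (1, alpha^2)
D = [[alpha, zero], [zero, alpha2]]

assert matmul(P, S) == matmul(S, D)   # equivalent to S^{-1} P S = D
print("P*S = S*D: S diagonalizes P")
```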
We now establish some properties of .
Lemma 24.
The matrices S and S^{−1} defined by (3) satisfy the following properties:

the entries of the first column of S (respectively, the first row of S^{−1}) form a basis of F_{q^k} over F_q, and

the entries of the i-th column of S (respectively, the i-th row of S^{−1}) are the q-th powers of the entries of the (i − 1)-th column (respectively, row), for i = 2, …, k.
Proof.
The two properties for the matrix come directly from its definition. By [LN94, Definition 2.30] we know that there exists a unique basis of over such that
where for . We have
∎
The next theorem and corollary will be used in Subsection 3.3 to devise a simplified minimum-distance decoding algorithm, under the assumption that the first k columns of the received vector space are linearly independent.
Theorem 25.
Let and let and be two matrices satisfying the following properties:

has full rank,

the entries of the first column form a basis of F_{q^k} over F_q, and

the entries of the i-th column are the q-th powers of the entries of the (i − 1)-th column, for i = 2, …, k.
Then .
Proof.
Let
Let where form a basis of over . Then:
since the entries of are in . Let , then
The elements