
# An Algebraic Approach for Decoding Spread Codes

E. Gorla (Mathematics Institute, University of Basel). The author was supported by the Swiss National Science Foundation under grant no. 123393.

F. Manganiello (Department of Electrical and Computer Engineering, University of Toronto). The author was partially supported by the Swiss National Science Foundation under grants no. 126948 and no. 135934.

J. Rosenthal (Mathematics Institute, University of Zürich). The author was partially supported by the Swiss National Science Foundation under grant no. 126948.
###### Abstract

In this paper we study spread codes: a family of constant-dimension codes for random linear network coding. In other words, the codewords are full-rank matrices of size $k\times n$ with entries in a finite field $\mathbb{F}_q$. Spread codes are a family of optimal codes with maximal minimum distance. We give a minimum-distance decoding algorithm which requires $\mathcal{O}((n-k)k^3)$ operations over an extension field $\mathbb{F}_{q^k}$. Our algorithm is more efficient than the previous ones in the literature when the dimension $k$ of the codewords is small with respect to $n$. The decoding algorithm takes advantage of the algebraic structure of the code, and it uses original results on minors of a matrix and on the factorization of polynomials over finite fields.

## 1 Introduction

Network coding is a branch of coding theory that arose in 2000 in the work by Ahlswede, Cai, Li and Yeung [ACLY00]. While classical coding theory focuses on point-to-point communication, network coding focuses on multicast communication, i.e., a source communicating with a set of sinks. The source transmits messages to the sinks over a network, which is modeled as a directed multigraph. Some examples of multicast communication come from Internet protocol applications of streaming media, digital television, and peer-to-peer networking.

The goal in multicast communication is achieving maximal information rate. Informally, this corresponds to maximizing the amount of messages per transmission, i.e., per single use of the network. Li, Cai and Yeung in [LYC03] prove that maximal information rate can be achieved in multicast communication using linear network coding, provided that the size of the base field is large enough.

The algebraic aspect of network coding emerged with the work by Kötter and Kschischang [KK08b]. The authors introduced a new setting for random linear network coding: since the network nodes forward linear combinations of the packets they receive, the authors suggest to employ subspaces of a given vector space as codewords. Indeed, subspaces are invariant under taking linear combinations of their elements. Let $\mathcal{P}(\mathbb{F}_q^n)$ be the set of all $\mathbb{F}_q$-linear subspaces of $\mathbb{F}_q^n$. They show that $\mathcal{P}(\mathbb{F}_q^n)$ is a metric space, with distance

$$d(U,V)=\dim(U+V)-\dim(U\cap V)\quad\text{for all } U,V\in\mathcal{P}(\mathbb{F}_q^n).$$
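Both terms of this distance can be computed from ranks of generator matrices: $\dim(U+V)$ is the rank of the stacked matrices, and $\dim(U\cap V)=\dim U+\dim V-\dim(U+V)$, so $d(U,V)=2\dim(U+V)-\dim U-\dim V$. A minimal sketch over $\mathbb{F}_2$, with subspaces given as lists of bitmask rows (illustrative only; this is not the decoder of this paper):

```python
def rank_gf2(rows):
    """Rank of a binary matrix, given as a list of int bitmasks, over F_2."""
    rank = 0
    for col in range(max((r.bit_length() for r in rows), default=0)):
        # find a pivot row with a 1 in this column
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

def subspace_distance(U, V):
    """d(U,V) = dim(U+V) - dim(U ∩ V) = 2*dim(U+V) - dim U - dim V."""
    dU, dV = rank_gf2(list(U)), rank_gf2(list(V))
    dUV = rank_gf2(list(U) + list(V))          # rank of the stacked generators
    return 2 * dUV - dU - dV
```

For example, $U=\mathrm{rowsp}\{e_1,e_2\}$ and $V=\mathrm{rowsp}\{e_2,e_3\}$ in $\mathbb{F}_2^4$ give $d(U,V)=2\cdot 3-2-2=2$.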

Kötter and Kschischang define network codes to be subsets of $\mathcal{P}(\mathbb{F}_q^n)$. In particular, they define constant-dimension codes as subsets whose elements all have the same dimension. Notions of errors and erasures compatible with the new transmission model are introduced in [KK08b]. In addition, upper and lower bounds for the cardinality of network codes are established in [KK08b, EV08].

We review here some of the constructions of constant-dimension codes present in the literature. The first one was introduced by Kötter and Kschischang in [KK08b]. The construction uses evaluation of linearized polynomials over a subspace. The codes that one obtains are called Reed-Solomon-like codes, because of their similarities with Reed-Solomon codes in classical coding theory. Due to their connection with the rank-metric codes introduced in [Gab85], these codes are also called lifted rank-metric codes. Kötter and Kschischang devise a minimum-distance decoding algorithm for their codes. Spread codes, which are the subject of this paper, were first introduced by the authors in [MGR08]. Spread codes contain the codes with maximal minimum distance in [KK08b]. Another family of network codes, based on $q$-analogs of designs, appears in [KK08a]. Aided by computer search, the authors find constant-dimension codes based on designs with large cardinality. Another family of codes is constructed in [ES09]. The construction builds on that of Reed-Solomon-like codes, and the codes that the authors obtain contain them. The construction is also based on binary constant-weight codes, Ferrers diagrams, and rank-metric codes. The proposed decoding algorithm operates on two levels: first one decodes a constant-weight code, then one applies a decoding algorithm for rank-metric codes. In [Ska10] Skachek introduces a family of codes that is a sub-family of the one in [ES09]. In [MV10] the authors introduce another family of codes, which they obtain by evaluating pairs of linearized polynomials. The codes obtained can be decoded via a list decoding algorithm, which is introduced in the same work.

This work focuses on spread codes, a family of constant-dimension codes first introduced in [MGR08]. A spread of $\mathbb{F}_q^n$ is a collection of subspaces of $\mathbb{F}_q^n$, all of the same dimension, which partition the ambient space. Such a family of subspaces of dimension $k$ exists if and only if $k$ divides $n$. The construction of spread codes is based on the $\mathbb{F}_q$-algebra $\mathbb{F}_q[P]$, where $P$ is the companion matrix of a monic irreducible polynomial of degree $k$. Concretely, we define spread codes as

$$\mathcal{S}_r=\{\mathrm{rowsp}(A_1\;\cdots\;A_r)\in\mathcal{G}_q(k,rk)\mid A_i\in\mathbb{F}_q[P]\},$$

where $\mathcal{G}_q(k,n)$ is the Grassmannian of all subspaces of $\mathbb{F}_q^n$ of dimension $k$.

Since spreads partition the ambient space, spread codes are optimal. More precisely, they have the maximum possible minimum distance $2k$, and the largest possible number of codewords for a code with minimum distance $2k$. Indeed, they achieve the anticode bound from [EV08]. This family is closely related to the family of Reed-Solomon-like codes introduced in [KK08b]. We discuss the relation in detail in Section 2.2. In Lemma 17, we show how to extend to spread codes the existing decoding algorithms for Reed-Solomon-like codes and rank-metric codes.

The structure of the spreads that we use in our construction helps us devise a minimum-distance decoding algorithm, which can correct errors up to half the minimum distance of $\mathcal{S}_r$. In Lemma 28 we reduce the decoding algorithm for a spread code to at most $r-1$ instances of the decoding algorithm for the special case $r=2$. Therefore, we focus on the design of a decoding algorithm for the spread code

$$\mathcal{S}=\mathcal{S}_2=\{\mathrm{rowsp}(A_1\;A_2)\in\mathcal{G}_q(k,2k)\mid A_1,A_2\in\mathbb{F}_q[P]\}.$$

The paper is structured as follows. In Section 2 we give the construction of spread codes and discuss their main properties. In Subsection 2.1 we introduce the main notations. In Subsection 2.2 we discuss the relation between spread codes and Reed-Solomon-like codes, which is given explicitly in Proposition 15. Proposition 18 shows how to apply a minimum-distance decoding algorithm for Reed-Solomon-like codes to spread codes, and estimates the complexity of decoding a spread code using such an algorithm.

The main results of the paper are contained in Section 3. In Subsection 3.1 we prove some results on matrices, which will be needed for our decoding algorithm. Our main result is a new minimum-distance decoding algorithm for spread codes, which is given in pseudocode as Algorithm 2. The decoding algorithm is based on Theorem 34, where we explicitly construct the output of the decoder. Our algorithm can be made more efficient when the first $k$ columns of the received word are linearly independent. Proposition 35 and Corollary 36 contain the theoretical results behind this simplification, and the algorithm in pseudocode is given in Algorithm 3. Finally, in Section 4 we compute the complexity of our algorithm. Using the results from Subsection 2.2, we compare it with the complexity of the algorithms in the literature. It turns out that our algorithm is more efficient than all the known ones, provided that $k$ is small with respect to $n$.

## 2 Preliminaries and notations

###### Definition 1 ([Hir98, Section 4.1]).

A subset $\mathbb{S}\subseteq\mathcal{G}_q(k,n)$ is a spread if it satisfies

• $U\cap V=\{0\}$ for all $U,V\in\mathbb{S}$ distinct, and

• $\bigcup_{U\in\mathbb{S}}U=\mathbb{F}_q^n$.

###### Theorem 2 ([Hir98, Theorem 4.1]).

A spread of $k$-dimensional subspaces of $\mathbb{F}_q^n$ exists if and only if $k\mid n$.
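The counting behind this theorem is easy to verify numerically: a spread partitions the $q^n-1$ nonzero vectors of $\mathbb{F}_q^n$ into blocks of $q^k-1$ nonzero vectors each, so a spread has $(q^n-1)/(q^k-1)$ elements, and this ratio is an integer exactly when $k\mid n$. A small sanity check (illustrative sketch; the function name is ours, not from the paper):

```python
def spread_size(q, k, n):
    """Number of subspaces in a k-spread of F_q^n, or None if none exists."""
    if n % k != 0:
        return None  # Theorem 2: a spread exists iff k divides n
    size, rem = divmod(q**n - 1, q**k - 1)
    assert rem == 0  # k | n guarantees (q^k - 1) | (q^n - 1)
    return size
```

For instance, `spread_size(2, 2, 4)` returns 5: five planes partition the 15 nonzero vectors of $\mathbb{F}_2^4$ into groups of 3.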

In [MGR08] we give a construction of spreads suitable for use in Random Linear Network Coding (RLNC). Our construction is based on companion matrices.

###### Definition 3.

Let $\mathbb{F}_q$ be a finite field and $p=p_0+p_1x+\cdots+p_{k-1}x^{k-1}+x^k\in\mathbb{F}_q[x]$ a monic polynomial. The companion matrix of $p$ is

$$P=\begin{pmatrix}0&1&0&\cdots&0\\0&0&1&\ddots&\vdots\\\vdots&&\ddots&\ddots&0\\0&0&\cdots&0&1\\-p_0&-p_1&-p_2&\cdots&-p_{k-1}\end{pmatrix}\in\mathbb{F}_q^{k\times k}.$$

Let $n=rk$ with $r>1$, let $p\in\mathbb{F}_q[x]$ be a monic irreducible polynomial of degree $k$, and let $P\in\mathbb{F}_q^{k\times k}$ be its companion matrix.
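For illustration, the companion matrix can be assembled directly from the coefficient list, and one can check numerically that $p(P)=0$, which is why $\mathbb{F}_q[P]$ behaves like the quotient $\mathbb{F}_q[x]/(p)$. A sketch over a prime field $\mathbb{F}_q$ (naive matrix arithmetic; the function names are ours, not from the paper):

```python
def companion(p_coeffs, q):
    """Companion matrix of the monic polynomial x^k + p_{k-1} x^{k-1} + ... + p_0
    over F_q, given the low-order coefficients [p_0, ..., p_{k-1}]."""
    k = len(p_coeffs)
    P = [[0] * k for _ in range(k)]
    for i in range(k - 1):
        P[i][i + 1] = 1                       # shift structure in the first k-1 rows
    for j in range(k):
        P[k - 1][j] = (-p_coeffs[j]) % q      # last row carries the coefficients
    return P

def mat_mul(A, B, q):
    k = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(k)) % q for j in range(k)]
            for i in range(k)]

def poly_at_matrix(p_coeffs, P, q):
    """Evaluate the monic polynomial at the matrix P; the result should be 0
    by the Cayley-Hamilton theorem, since p is the characteristic polynomial."""
    k = len(P)
    I = [[int(i == j) for j in range(k)] for i in range(k)]
    acc, power = [[0] * k for _ in range(k)], I
    for c in p_coeffs:                        # p_0 I + p_1 P + ... + p_{k-1} P^{k-1}
        acc = [[(acc[i][j] + c * power[i][j]) % q for j in range(k)] for i in range(k)]
        power = mat_mul(power, P, q)
    # add the leading term P^k
    return [[(acc[i][j] + power[i][j]) % q for j in range(k)] for i in range(k)]
```

For $p=x^2+x+1$ over $\mathbb{F}_2$, `companion([1, 1], 2)` returns `[[0, 1], [1, 1]]` and `poly_at_matrix` evaluates $p$ at it to the zero matrix.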

###### Lemma 4.

The $\mathbb{F}_q$-algebra $\mathbb{F}_q[P]$ is a finite field, i.e., $\mathbb{F}_q[P]\cong\mathbb{F}_{q^k}$.

This is a well-known fact (see [LN94, page 64]).

###### Lemma 5.

Let $\varphi:\mathbb{F}_{q^k}\to\mathbb{F}_q[P]$ be a ring isomorphism. Denote by

$$\mathbb{P}^{r-1}(\mathbb{F}_{q^k}):=(\mathbb{F}_{q^k}^r\setminus\{0\})/\sim$$

the projective space, where is the following equivalence relation

$$v\sim w\iff\exists\,\lambda\in\mathbb{F}_{q^k}^*\ \text{such that}\ v=\lambda w,$$

where $v,w\in\mathbb{F}_{q^k}^r\setminus\{0\}$. Then the map

$$\tilde\varphi:\mathbb{P}^{r-1}(\mathbb{F}_{q^k})\to\mathcal{G}_q(k,n),\qquad [v_1:\cdots:v_r]\mapsto\mathrm{rowsp}(\varphi(v_1)\;\cdots\;\varphi(v_r))$$

is injective.

###### Proof.

Let $[v_1:\cdots:v_r],[w_1:\cdots:w_r]\in\mathbb{P}^{r-1}(\mathbb{F}_{q^k})$ be such that $\tilde\varphi([v_1:\cdots:v_r])=\tilde\varphi([w_1:\cdots:w_r])$. Then the two generator matrices have the same row space, so there exists an invertible $T\in\mathrm{GL}_k(\mathbb{F}_q)$ such that

$$T\,(\varphi(v_1)\;\cdots\;\varphi(v_r))=(\varphi(w_1)\;\cdots\;\varphi(w_r)).\tag{1}$$

Let be the least indices such that and . From (1) it follows that . Since, without loss of generality, we can consider , it follows that and consequently . Then, (1) becomes

$$(\varphi(v_1)\;\cdots\;\varphi(v_r))=(\varphi(w_1)\;\cdots\;\varphi(w_r)).$$ ∎

###### Theorem 6 ([MGR08, Theorem 1]).

$\mathcal{S}_r$ is a spread of $\mathbb{F}_q^n$ for $n=rk$.

###### Definition 7 ([MGR08, Definition 2]).

We call the sets $\mathcal{S}_r$ from Theorem 6 spread codes.

###### Remark 8.

Notice that

 Sr={rowsp(A1⋯Ar)∈GFq(k,n)∣Ai∈Fq[P]  for all i∈{1,…,r}}

In order to have a unique representative for the elements of $\mathcal{S}_r$, we bring the matrices into row reduced echelon form.
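For the smallest example $q=2$, $k=2$, $p=x^2+x+1$, Remark 8 can be checked by brute force: $\mathbb{F}_2[P]=\{0,I,P,I+P\}$ has $q^k=4$ elements, the resulting code $\mathcal{S}_2$ has $q^k+1=5$ codewords in reduced form, and distinct codewords intersect trivially. An illustrative sketch with bitmask rows over $\mathbb{F}_2$ (not the authors' code):

```python
from itertools import product

def rref_gf2(rows):
    """Row-reduce bitmask rows over F_2; return the sorted nonzero rows as a
    canonical representative of the row space."""
    rows, rank = list(rows), 0
    width = max((r.bit_length() for r in rows), default=0)
    for col in reversed(range(width)):            # leftmost column first
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return tuple(sorted(r for r in rows if r))

def row_bits(row):
    """Pack a 0/1 row into an int, leftmost entry = highest bit."""
    val = 0
    for b in row:
        val = 2 * val + b
    return val

def madd(A, B):
    return tuple(tuple((a + b) % 2 for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

I2 = ((1, 0), (0, 1))
Pm = ((0, 1), (1, 1))                             # companion matrix of x^2 + x + 1
Z2 = ((0, 0), (0, 0))
FqP = [Z2, I2, Pm, madd(I2, Pm)]                  # F_2[P] = {0, I, P, P^2 = I + P}

codewords = set()
for A1, A2 in product(FqP, repeat=2):
    gen = [row_bits(r1 + r2) for r1, r2 in zip(A1, A2)]   # rows of (A1 | A2)
    if len(rref_gf2(gen)) == 2:                   # keep only full-rank generators
        codewords.add(rref_gf2(gen))

assert len(codewords) == 5                        # |S_2| = q^k + 1
for C1, C2 in product(codewords, repeat=2):       # spread: trivial pairwise meets
    if C1 != C2:
        assert len(rref_gf2(list(C1) + list(C2))) == 4
```

The final loop confirms that any two distinct codewords span all of $\mathbb{F}_2^4$, i.e., they meet only in $\{0\}$.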

###### Lemma 9 ([MGR08, Theorem 1]).

Let $\mathcal{S}_r$ be a spread code. Then

1. $d(U,V)=2k$ for all $U,V\in\mathcal{S}_r$ distinct, i.e., the code has maximal minimum distance, and

2. $|\mathcal{S}_r|=\frac{q^n-1}{q^k-1}$, i.e., the code has maximal cardinality with respect to the given minimum distance.

###### Remark 10.

In [TMR10] the authors show that spread codes are an example of orbit codes. Moreover, in [TR11] it is shown that some spread codes are cyclic orbit codes under the action of the cyclic group generated by the companion matrix of a primitive polynomial.

###### Definition 11.

A vector space $\mathcal{R}\in\mathcal{P}(\mathbb{F}_q^n)$ is uniquely decodable by the spread code $\mathcal{S}_r$ if

there exists a $C\in\mathcal{S}_r$ such that $d(\mathcal{R},C)<k$.

In Section 3 we devise a minimum-distance decoding algorithm for uniquely decodable received spaces.

### 2.1 Further notations

We introduce in this subsection the notation we use in the paper.

###### Definition 12.

Let $r,k\in\mathbb{N}$ with $r,k\ge 1$, and consider the set of linearized polynomials of degree less than $q^r$. Equivalently, these are the polynomials of the form $f=\sum_{i=0}^{r-1}a_ix^{q^i}$ for some $a_0,\ldots,a_{r-1}\in\mathbb{F}_{q^k}$.

In the rest of the work we denote $q$-th power exponents such as $x^{q^i}$ by $x^{[i]}$.
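The map $x\mapsto x^{[i]}=x^{q^i}$ is the $i$-fold Frobenius, which is $\mathbb{F}_q$-linear; hence every linearized polynomial $f=\sum_i a_ix^{[i]}$ is an $\mathbb{F}_q$-linear map on $\mathbb{F}_{q^k}$. A quick check in $\mathbb{F}_4=\mathbb{F}_2[x]/(x^2+x+1)$, with elements encoded as 2-bit integers and addition given by XOR (an illustrative sketch, not part of the paper):

```python
def gmul(a, b):
    """Multiply in GF(4) = F_2[x]/(x^2 + x + 1); elements are ints 0..3."""
    r = 0
    if b & 1:
        r ^= a
    if b & 2:
        r ^= a << 1
    if r & 4:                 # reduce modulo x^2 + x + 1 (binary 111)
        r ^= 0b111
    return r

def frob(a):
    """Frobenius a -> a^2 = a^{[1]} in GF(4)."""
    return gmul(a, a)

def lin_eval(coeffs, x):
    """Evaluate f(x) = sum_i coeffs[i] * x^{[i]} in GF(4)."""
    val, xi = 0, x
    for a in coeffs:
        val ^= gmul(a, xi)    # addition in characteristic 2 is XOR
        xi = frob(xi)
    return val

f = [1, 2]                    # f(x) = x + w*x^{[1]}, with w = 0b10 a root of p
for x in range(4):
    for y in range(4):        # F_2-linearity: f(x + y) = f(x) + f(y)
        assert lin_eval(f, x ^ y) == lin_eval(f, x) ^ lin_eval(f, y)
```

The assertion loop verifies additivity of $f$ on all of $\mathbb{F}_4$; $\mathbb{F}_2$-homogeneity is automatic since the only scalars are 0 and 1.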

Let $\mathbb{F}_q$ be a finite field with $q$ elements, and let $p\in\mathbb{F}_q[x]$ be a monic irreducible polynomial of degree $k$. $P$ denotes the companion matrix of $p$, and $S$ is a matrix which diagonalizes $P$.

We denote by $D$ a diagonal matrix whose entry in position $(i,i)$ is $\lambda^{[i-1]}$ for $i\in\{1,\ldots,k\}$, where $\lambda$ is a root of $p$.

Let $N$ be a matrix of size $k\times k$ and let $J=(j_1,\ldots,j_s)$, $L=(l_1,\ldots,l_s)$ be tuples of indices in $\{1,\ldots,k\}$. $[J,L]_N$ denotes the minor of the matrix $N$ corresponding to the submatrix with row indices $J$ and column indices $L$. We skip the suffix $N$ when the matrix is clear from the context.

We introduce some operations on tuples. Let .

• means that .

• means that for .

• is the length of the tuple.

• denotes the such that is maximal.

• If then , i.e., denotes the concatenation of tuples.

• If then denotes the with maximal such that where is the empty tuple.

• , with the convention that for any .

We define the non-diagonal rank of a matrix as follows.

###### Definition 13.

Let $N\in\mathbb{F}^{k\times k}$. We define the non-diagonal rank of $N$ as

$$\mathrm{ndrank}(N):=\min\{t\in\mathbb{N}\mid [J,L]_N=0\ \text{for all } J,L\in\{1,\ldots,k\}^t,\ J\cap L=\emptyset\}-1.$$

Finally, the complexity of an algorithm is expressed as the number of operations it performs over a given field, as a function of the given parameters.

### 2.2 Relation with Reed-Solomon-like codes

Reed-Solomon-like codes, also called lifted rank-metric codes, are a class of constant-dimension codes introduced in [KK08b]. They are closely related to the maximum rank distance codes introduced in [Gab85]. We give here an equivalent definition of these codes.

###### Definition 14.

Let be finite fields. Fix some -linearly independent elements . Let be an isomorphism of -vector spaces. A Reed-Solomon-like (RSL) code is defined as

The following proposition establishes a relation between spread codes and RSL codes. The proof is easy, but rather technical, hence we omit it.

###### Proposition 15.

Let , finite fields, and the companion matrix of a monic irreducible polynomial of degree . Let be a root of , a basis of over . Moreover, let be the isomorphism of -vector spaces which maps the basis to the standard basis of over . Then for every choice of there exists a unique linearized polynomial of the form with such that

 (A0 ⋯ Ar−1)=⎛⎜ ⎜ ⎜ ⎜ ⎜⎝ψ(f(1))ψ(f(λ))⋮ψ(f(λk−1))⎞⎟ ⎟ ⎟ ⎟ ⎟⎠.

The constant is where is the first row of .

The proposition allows us to relate our spread codes to some RSL codes. The following corollary makes the connection explicit. We use the notation of Proposition 15.

###### Corollary 16.

For each , let be a basis of over . Let denote the isomorphism of vector spaces that maps the basis to the standard basis of . Then

 Sr=r−1⋃i=1⎧⎪ ⎪ ⎪⎨⎪ ⎪ ⎪⎩rowsp⎛⎜ ⎜ ⎜⎝0 ⋯ 0i−1 times I ψi(f(1))⋮ψi(f(λk−1))⎞⎟ ⎟ ⎟⎠∣∣ ∣ ∣ ∣∣f=ax,a∈Fq(r−i)k⎫⎪ ⎪ ⎪⎬⎪ ⎪ ⎪⎭⋃⎧⎪ ⎪ ⎪⎨⎪ ⎪ ⎪⎩⎛⎜ ⎜ ⎜⎝0 ⋯ 0r−1 times I⎞⎟ ⎟ ⎟⎠⎫⎪ ⎪ ⎪⎬⎪ ⎪ ⎪⎭.

Corollary 16 readily follows from Proposition 15.

The connection that we have established with RSL codes allows us to extend any minimum-distance decoding algorithm for RSL codes to a minimum-distance decoding algorithm for spread codes. We start with a key lemma.

###### Lemma 17.

Let be a spread code, and for some . Assume there exists a such that . Let

$$i:=\min\Big\{j\in\{1,\ldots,r\}\;\Big|\;\mathrm{rank}(R_j)>\frac{k-1}{2}\Big\}.$$

It holds that:

• for ,

• , and

• .

###### Proof.

The result follows from Lemma 28 and the observation that

 d(C,R)≥d(rowsp(Ci⋯Cr),rowsp(Ri⋯Rr)).

In the next proposition, we use Corollary 16 and Lemma 17 to adapt to spread codes any decoding algorithm for RSL codes. In particular, we apply our results to the algorithms contained in [KK08b] and [SKK08], and we give the complexity of the resulting algorithms for spread codes.

###### Proposition 18.

Any minimum-distance decoding algorithm for RSL codes may be extended to a minimum-distance decoding algorithm for spread codes. In particular, the algorithms described in [KK08b] and [SKK08] can be extended to minimum-distance decoding algorithms for spread codes, with complexities for the former and for the latter.

###### Proof.

Suppose we are given a minimum-distance decoding algorithm for RSL codes. We construct a minimum-distance decoding algorithm for spread codes as follows: Let be the received word, and assume that there exists a such that . First, one computes the rank of until one finds an such that , for . Thanks to Lemma 17, one knows that for and . Moreover, one has

 d(rowsp(RiRi+1⋯Rr),rowsp(ICi+1⋯Cr))

Therefore, one can apply the minimum-distance decoding algorithm for RSL codes to the received word in order to compute .

Assume now that one uses as minimum-distance decoder for RSL codes either the decoding algorithm from [KK08b], or the one from [SKK08]. The complexity of computing the rank of by computing row reduced echelon forms is . The complexity of the decoding algorithm for RSL codes is for the one in [KK08b] and for the one in [SKK08]. The complexity of the decoding algorithm is the dominant term in the complexity estimate. ∎

It is well known that RSL codes are closely related to the rank-metric codes introduced in [Gab85]. Although the rank metric on rank-metric codes is equivalent to the subspace distance on RSL codes, the minimum-distance decoding problem for the former is not equivalent to the one for the latter. In [SKK08] the authors introduced the Generalized Decoding Problem for Rank-Metric Codes, which is equivalent to the minimum-distance decoding problem of RSL codes. Decoding algorithms for rank-metric codes, such as the ones contained in [Gab85, Loi06, RP04], must be generalized in order to be effective for the Generalized Decoding Problem for Rank-Metric Codes and, consequently, to be applicable to RSL codes.

Another interesting application of Lemma 17 allows us to improve the efficiency of the decoding algorithm for the codes proposed in [Ska10]. For the relevant definitions, we refer the interested reader to the original article.

###### Corollary 19.

There is an algorithm which decodes the codes from [Ska10] and has complexity .

The algorithm is a combination of Lemma 17 and the decoding algorithm contained in [SKK08]. First, by Lemma 17, one finds the position of the identity matrix. This reduces the minimum-distance decoding problem to decoding an RSL code, so one can use the algorithm from [SKK08].

## 3 The Minimum-Distance Decoding Algorithm

In this section we devise a new minimum-distance decoding algorithm for spread codes. In the next section, we show that our algorithm is more efficient than the ones present in the literature when $k$ is small with respect to $n$.

We start by proving some results on matrices, which will be used to design the decoding algorithm and to prove its correctness.

### 3.1 Preliminary results on matrices

Let $\mathbb{F}$ be a field and let $m\in\mathbb{F}[y_1,\ldots,y_s]$ be a polynomial of the form $m=\sum_{U\subseteq(1,\ldots,s)}a_Uy_U$, where $a_U\in\mathbb{F}$ and $y_U:=\prod_{u\in U}y_u$.

###### Lemma 20.

The following are equivalent:

1. The polynomial $m$ decomposes into linear factors, i.e.,

$$m=a_{(1,\ldots,s)}\prod_{u\in(1,\ldots,s)}(y_u+\mu_u)$$

where .

2. It holds

$$a_Ua_V=a_{U\cap V}\,a_{(1,\ldots,s)}\tag{2}$$

for all $U,V\subseteq(1,\ldots,s)$ such that $U\cup V=(1,\ldots,s)$ and

 min((1,…,s)∖V)
###### Proof.

We proceed by induction on $s$.

If $s=1$, then $m$ is a linear polynomial. Let us now suppose the thesis is true for $s-1$. Then

$$a_{(1,\ldots,s)}\prod_{u\in(1,\ldots,s)}(y_u+\mu_u)=a_{(1,\ldots,s)}(y_s+\mu_s)\Big(\sum_{U\subseteq(1,\ldots,s-1)}\tilde a_Uy_U\Big)$$

where and the coefficients with satisfy by hypothesis condition (2). The coefficients of are if , and otherwise. Therefore we only need to prove that (2) holds for . The equality is hence it is trivial.

The thesis is trivial for $s=1$. Let us assume that the thesis holds for $s-1$. We explicitly show the extraction of a linear factor of the polynomial.

$$\begin{aligned}m&=\sum_{U\subseteq(1,\ldots,s)}a_Uy_U=\sum_{\substack{U\subseteq(1,\ldots,s)\\1\in U}}\big(a_Uy_U+a_{U\setminus(1)}y_{U\setminus(1)}\big)\\&=\sum_{\substack{U\subseteq(1,\ldots,s)\\1\in U}}\Big(a_Uy_1y_{U\setminus(1)}+a_U\frac{a_{(2,\ldots,s)}}{a_{(1,\ldots,s)}}y_{U\setminus(1)}\Big)\\&=\Big(y_1+\frac{a_{(2,\ldots,s)}}{a_{(1,\ldots,s)}}\Big)\cdot\sum_{\substack{U\subseteq(1,\ldots,s)\\1\in U}}a_Uy_{U\setminus(1)}.\end{aligned}$$

The thesis is true by induction.

Let $\mathbb{F}[x_{i,j}\mid 1\le i,j\le k]$ be a polynomial ring with coefficients in a field $\mathbb{F}$. Consider the generic matrix of size $k\times k$

$$M:=\begin{pmatrix}x_{1,1}&\cdots&x_{1,k}\\\vdots&&\vdots\\x_{k,1}&\cdots&x_{k,k}\end{pmatrix}.$$

Denote by $I_{s+1}$ the ideal generated by all minors of size $s+1$ of $M$ which do not involve entries on the diagonal, i.e.,

$$I_{s+1}:=\left([J,L]\mid J,L\in\{1,\ldots,k\}^{s+1},\ J\cap L=\emptyset\right).$$

We establish some relations among the minors of $M$, modulo the ideal $I_{s+1}$.

###### Lemma 21.

Let , , and . Then

$$[J_s;L_s][J;L]=\sum_{t=s+1}^{k}(-1)^{t+s+1}[J_s\cup(j_t);L_s\cup(l_{s+1})][J\setminus(j_t);L\setminus(l_{s+1})].$$
###### Proof.

Notice that if we consider as convention that , i.e., when , we get the determinant formula.

We proceed by induction on $s$. Let us consider first the case $s=1$, i.e., $[J_s;L_s]=x_{j_1,l_1}$. Then,

$$\begin{aligned}x_{j_1,l_1}[J;L]&=\sum_{t=1}^{k}(-1)^{t+2}x_{j_1,l_1}x_{j_t,l_2}[J\setminus(j_t);L\setminus(l_2)]\\&=-x_{j_1,l_1}x_{j_1,l_2}[J\setminus(j_1);L\setminus(l_2)]+\sum_{t=2}^{k}(-1)^{t+2}\big([(j_1,j_t);(l_1,l_2)]+x_{j_t,l_1}x_{j_1,l_2}\big)[J\setminus(j_t);L\setminus(l_2)]\\&=\sum_{t=2}^{k}(-1)^{t+2}[(j_1,j_t);(l_1,l_2)][J\setminus(j_t);L\setminus(l_2)]+x_{j_1,l_2}[J;(l_1,l_1,l_3,\ldots,l_k)].\end{aligned}$$

The last term vanishes, since the column $l_1$ appears twice in $[J;(l_1,l_1,l_3,\ldots,l_k)]$; hence the thesis is true for $s=1$.

Assume that the thesis is true for $s-1$.

$$[J_s;L_s][J;L]=\sum_{t=1}^{k}(-1)^{t+s+1}x_{j_t,l_{s+1}}[J_s;L_s][J\setminus(j_t);L\setminus(l_{s+1})].$$

Let us now focus on the factor $x_{j_r,l_{s+1}}[J_s;L_s]$; we get

$$x_{j_r,l_{s+1}}[J_s;L_s]=[J_s\cup(j_r);L_s\cup(l_{s+1})]+\sum_{t=1}^{s}(-1)^{t+s}x_{j_t,l_{s+1}}[J_s\setminus(j_t)\cup(j_r);L_s].$$

By substitution it follows that

 [Js;Ls][J;L] = k∑t=s+1(−1)t+s+1[Js∪(jt);Ls∪(ls+1)][J∖(jt);L∖(ls+1)]+ +s∑t=1(−1)t+s+1xjt,ls+1([Js;Ls][J∖(jt);L∖(ls+1)]+k∑j=s+1(−1)j+s +k∑r=s+1(−1)r+s[Js∖(jt)∪(jr);Ls][L∖(jr);L∖(ls+1)]) = k∑t=s+1(−1)t+s+1[Js∪(jt);Ls∪(ls+1)][J∖(jt);L∖(ls+1)]+ +s∑t=1(−1)t+s+1xjt,ls+1([Js∖(jt);Ls∖(ls)][J;¯L])

where $\bar L$ is obtained from $L$ by repeating a column. The repetition of a column in $\bar L$ implies that $[J;\bar L]=0$. The last equality follows from the induction hypothesis. ∎

The following is an easy consequence of Lemma 21.

###### Proposition 22.

Let such that . Then

$$[J,L][K,K]-[J\cup(i);L\cup(i)][K\setminus(i);K\setminus(i)]=\sum_{l\in K\setminus(J\cup(i))}h_l\,[J\cup(i),L\cup(l)]\in I_{s+1},$$

with for any .

We now study the minors of a matrix of the form $NS$, where $N$ has entries in $\mathbb{F}_q$ and $S$ has a special form, which we describe in the next lemma.

###### Lemma 23.

Let $P$ be the companion matrix of a monic irreducible polynomial $p\in\mathbb{F}_q[x]$ of degree $k$, and let $\lambda\in\mathbb{F}_{q^k}$ be a root of $p$. Then the matrix

$$S:=\begin{pmatrix}1&1&1&\cdots&1\\\lambda&\lambda^{[1]}&\lambda^{[2]}&\cdots&\lambda^{[k-1]}\\\lambda^2&\lambda^{2\cdot[1]}&\lambda^{2\cdot[2]}&\cdots&\lambda^{2\cdot[k-1]}\\\vdots&\vdots&\vdots&&\vdots\\\lambda^{k-1}&\lambda^{(k-1)\cdot[1]}&\lambda^{(k-1)\cdot[2]}&\cdots&\lambda^{(k-1)\cdot[k-1]}\end{pmatrix}\tag{3}$$

diagonalizes $P$.

###### Proof.

The eigenvalues of the matrix $P$ correspond to the roots of the irreducible polynomial $p$. If $\lambda$ is a root of $p$, then the roots of $p$ are $\lambda,\lambda^{[1]},\ldots,\lambda^{[k-1]}$ by [LN94, Theorem 2.4]. It is enough to show that the columns of $S$ correspond to the eigenvectors of $P$. Let $i\in\{0,\ldots,k-1\}$; then

$$P\begin{pmatrix}1\\\lambda^{[i]}\\\vdots\\\lambda^{(k-1)\cdot[i]}\end{pmatrix}=\begin{pmatrix}\lambda^{[i]}\\\lambda^{2\cdot[i]}\\\vdots\\\lambda^{k\cdot[i]}\end{pmatrix}=\lambda^{[i]}\begin{pmatrix}1\\\lambda^{[i]}\\\vdots\\\lambda^{(k-1)\cdot[i]}\end{pmatrix}.$$ ∎

We now establish some properties of $S$ and $S^{-1}$.

###### Lemma 24.

The matrices $S$ and $S^{-1}$ defined by (3) satisfy the following properties:

1. the entries of the first column of $S$ (respectively, the first row of $S^{-1}$) form a basis of $\mathbb{F}_{q^k}$ over $\mathbb{F}_q$, and

2. the entries of the $i$-th column of $S$ (respectively, row of $S^{-1}$) are the $q$-th power of the ones of the $(i-1)$-th column (respectively, row) for $i\in\{2,\ldots,k\}$.

###### Proof.

The two properties for the matrix $S$ come directly from its definition. By [LN94, Definition 2.30] we know that there exists a unique dual basis $\{\gamma_0,\ldots,\gamma_{k-1}\}$ of $\mathbb{F}_{q^k}$ over $\mathbb{F}_q$ such that

$$\mathrm{Tr}_{\mathbb{F}_{q^k}/\mathbb{F}_q}(\lambda^i\gamma_j)=\begin{cases}1&i=j\\0&i\neq j\end{cases}$$

for $i,j\in\{0,\ldots,k-1\}$. We have

$$S^{-1}=\begin{pmatrix}\gamma_0&\gamma_1&\cdots&\gamma_{k-1}\\\gamma_0^{[1]}&\gamma_1^{[1]}&\cdots&\gamma_{k-1}^{[1]}\\\vdots&\vdots&&\vdots\\\gamma_0^{[k-1]}&\gamma_1^{[k-1]}&\cdots&\gamma_{k-1}^{[k-1]}\end{pmatrix}.$$

The next theorem and corollary will be used in Subsection 3.3 to devise a simplified minimum-distance decoding algorithm, under the assumption that the first $k$ columns of the received vector space are linearly independent.

###### Theorem 25.

Let $t\le k$, and let $N\in\mathbb{F}_q^{t\times k}$ and $S\in\mathbb{F}_{q^k}^{k\times t}$ be two matrices satisfying the following properties:

• $N$ has full rank,

• the entries of the first column of $S$ form a basis of $\mathbb{F}_{q^k}$ over $\mathbb{F}_q$, and

• the entries of the $i$-th column of $S$ are the $q$-th power of the ones of the $(i-1)$-th column, for $i\in\{2,\ldots,t\}$.

Then $\det(NS)\neq 0$.

###### Proof.

Let

$$N:=(n_{ij})_{\substack{1\le i\le t\\1\le j\le k}}\quad\text{and}\quad NS=(t_{ij})_{\substack{1\le i\le t\\1\le j\le t}}.$$

Let $S=(s_{lj})$, where the first-column entries $s_1:=s_{11},\ldots,s_k:=s_{k1}$ form a basis of $\mathbb{F}_{q^k}$ over $\mathbb{F}_q$. Then:

$$t_{ij}:=\sum_{l=1}^{k}n_{il}s_{lj}=\sum_{l=1}^{k}n_{il}s_l^{[j-1]}=\Big(\sum_{l=1}^{k}n_{il}s_l\Big)^{[j-1]},$$

since the entries of $N$ are in $\mathbb{F}_q$. Let $\tau_i:=\sum_{l=1}^{k}n_{il}s_l$; then

$$NS=\begin{pmatrix}\tau_1&\tau_1^{[1]}&\ldots&\tau_1^{[t-1]}\\\tau_2&\tau_2^{[1]}&\ldots&\tau_2^{[t-1]}\\\vdots&\vdots&&\vdots\\\tau_t&\tau_t^{[1]}&\ldots&\tau_t^{[t-1]}\end{pmatrix}.$$

The elements