
# A Rank-Metric Approach to Error Control in Random Network Coding

Danilo Silva, Frank R. Kschischang, and Ralf Kötter

This work was supported by the CAPES Foundation, Brazil, and by the Natural Sciences and Engineering Research Council of Canada. The material in this paper was presented in part at the IEEE International Symposium on Information Theory, Nice, France, June 2007, and at the IEEE Information Theory Workshop, Bergen, Norway, July 2007.

D. Silva and F. R. Kschischang are with The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada (e-mail: danilo@comm.utoronto.ca, frank@comm.utoronto.ca).

R. Kötter is with the Institute for Communications Engineering, Technical University of Munich, D-80333 Munich, Germany (e-mail: ralf.koetter@tum.de).
###### Abstract

The problem of error control in random linear network coding is addressed from a matrix perspective that is closely related to the subspace perspective of Kötter and Kschischang. A large class of constant-dimension subspace codes is investigated. It is shown that codes in this class can be easily constructed from rank-metric codes, while preserving their distance properties. Moreover, it is shown that minimum distance decoding of such subspace codes can be reformulated as a generalized decoding problem for rank-metric codes where partial information about the error is available. This partial information may be in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). Taking erasures and deviations into account (when they occur) strictly increases the error correction capability of a code: if $\mu$ erasures and $\delta$ deviations occur, then errors of rank $t$ can always be corrected provided that $2t \leq d - 1 - \mu - \delta$, where $d$ is the minimum rank distance of the code. For Gabidulin codes, an important family of maximum rank distance codes, an efficient decoding algorithm is proposed that can properly exploit erasures and deviations. In a network coding application where $n$ packets of length $M$ over $\mathbb{F}_q$ are transmitted, the complexity of the decoding algorithm is given by $O(dm)$ operations in an extension field $\mathbb{F}_{q^m}$, where $m = M - n$.

Index Terms: Constant-dimension codes, error correction, linearized polynomials, random network coding, rank-metric codes.

## I Introduction

While random linear network coding [1, 2, 3] is an effective technique for information dissemination in communication networks, it is highly susceptible to errors. The insertion of even a single corrupt packet has the potential, when linearly combined with legitimate packets, to affect all packets gathered by an information receiver. The problem of error control in random network coding is therefore of great interest.

In this paper, we focus on end-to-end error control coding, where only the source and destination nodes apply error control techniques. Internal network nodes are assumed to be unaware of the presence of an outer code; they simply create outgoing packets as random linear combinations of incoming packets in the usual manner of random network coding. In addition, we assume that the source and destination nodes have no knowledge—or at least make no effort to exploit knowledge—of the topology of the network or of the particular network code used in the network. This is in contrast to the pioneering approaches [4, 5, 6], which have considered the design of a network code as part of the error control problem.

In the basic transmission model for end-to-end coding, the source node produces $n$ packets, which are length-$M$ vectors in a finite field $\mathbb{F}_q$, and the receiver gathers $N$ packets. Additive packet errors may occur in any of the links. The channel equation is given by $Y = AX + BZ$, where $X$, $Y$ and $Z$ are matrices whose rows represent the transmitted, received and (possibly) corrupting packets, respectively, and $A$ and $B$ are the (unknown) corresponding transfer matrices induced by linear network coding.

There have been three previous quite different approaches to reliable communication under this model. In [7], Zhang characterizes the error correction capability of a network code under a brute-force decoding algorithm. He shows that network codes with good error-correcting properties exist if the field size is sufficiently large. His approach can be applied to random network coding if an extended header is included in each packet in order to allow for the matrix $B$ (as well as $A$) to be estimated at a sink node. A drawback of this approach is that the extended header has size equal to the number of network edges, which may incur excessive overhead. In addition, no efficient decoding algorithm is provided for errors occurring according to an adversarial model.

Jaggi et al. [8] propose a different approach specifically targeted to combat Byzantine adversaries. They provide rate-optimal end-to-end codes that do not rely on the specific network code used and that can be decoded in polynomial time. However, their approach is based on probabilistic arguments that require both the field size and the packet length to be sufficiently large.

In contrast, Kötter and Kschischang [9] take a more combinatorial approach to the problem, which provides correction guarantees against adversarial errors and can be used with any given field and packet size. Their key observation is that, under the unknown linear transformation applied by random network coding, the only property of the matrix $X$ that is preserved is its row space. Thus, information should be encoded in the choice of a subspace rather than a specific matrix. The receiver observes a subspace, given by the row space of $Y$, which may be different from the transmitted space when packet errors occur. A metric is proposed to account for the discrepancy between transmitted and received spaces, and a new coding theory based on this metric is developed. In particular, nearly-optimal Reed-Solomon-like codes are proposed that can be decoded efficiently using operations in an extension field $\mathbb{F}_{q^m}$.

Although the approach in [9] seems to be the appropriate abstraction of the error control problem in random network coding, one inherent difficulty is the absence of a natural group structure on the set of all subspaces of the ambient space $\mathbb{F}_q^M$. As a consequence, many of the powerful concepts of classical coding theory, such as group codes and linear codes, do not naturally extend to codes consisting of subspaces.

In this paper, we explore the close relationship between subspace codes and codes for yet another distance measure: the rank metric. Codewords of a rank-metric code are $n \times m$ matrices over a finite field, and the rank distance between two matrices is the rank of their difference. The rank metric was introduced in coding theory by Delsarte [10]. Codes for the rank metric were largely developed by Gabidulin [11] (see also [10, 12]). An important feature of the coding theory for the rank metric is that it supports many of the powerful concepts and techniques of classical coding theory, such as linear and cyclic codes and corresponding decoding algorithms [11, 12, 13, 14].

One main contribution of this paper is to show that codes in the rank metric can be naturally “lifted” to subspace codes in such a way that the rank distance between two codewords is reflected in the subspace distance between their lifted images. In particular, nearly-optimal subspace codes can be obtained directly from optimal rank-metric codes. Conversely, when lifted rank-metric codes are used, the decoding problem for random network coding can be reformulated purely in rank-metric terms, allowing many of the tools from the theory of rank-metric codes to be applied to random network coding.

In this reformulation, we obtain a generalized decoding problem for rank-metric codes that involves not only ordinary rank errors, but also two additional phenomena that we call erasures and deviations. Erasures and deviations are dual to each other and correspond to partial information about the error matrix, akin to the role played by symbol erasures in the Hamming metric. Here, an erasure corresponds to the knowledge of an error location but not its value, while a deviation corresponds to the knowledge of an error value but not its location. These concepts generalize similar concepts found in the rank-metric literature under the terminology of "row and column erasures" [15, 16, 13, 17, 18]. Albeit with a different terminology, the concept of a deviation (and of a code that can correct deviations) has appeared before in [19].

Our second main contribution is an efficient decoding algorithm for rank-metric codes that takes into account erasures and deviations. Our algorithm is applicable to Gabidulin codes [11], a class of codes, analogous to conventional Reed-Solomon codes, that attain maximum distance in the rank metric. We show that our algorithm fully exploits the correction capability of Gabidulin codes; namely, it can correct any pattern of $\epsilon$ errors, $\mu$ erasures and $\delta$ deviations provided $2\epsilon + \mu + \delta \leq d - 1$, where $d$ is the minimum rank distance of the code. Moreover, the complexity of our algorithm is $O(dm)$ operations in $\mathbb{F}_{q^m}$, which is smaller than that of the algorithm in [9], especially for practical high-rate codes.

In the course of setting up the problem, we also prove a result that can be seen as complementary to [9]; namely, we relate the performance guarantees of a subspace code with more concrete network parameters such as the maximum number of corrupting packets that can be injected in the network. This result provides a tighter connection between the subspace approach of [9] and previous approaches that deal with link errors.

The remainder of this paper is organized as follows. In Section II, we provide a brief review of rank-metric codes and subspace codes. In Section III, we describe in more detail the problem of error control in random network coding, along with Kötter and Kschischang’s approach to this problem. In Section IV, we present our code construction and show that the resulting error control problem can be replaced by a generalized decoding problem for rank-metric codes. At this point, we turn our attention entirely to rank-metric codes. The generalized decoding problem that we introduce is developed in more detail in Section V, wherein the concepts of erasures and deviations are described and compared to related concepts in the rank-metric literature. In Section VI, we present an efficient algorithm for decoding Gabidulin codes in the presence of errors, erasures and deviations. Finally, Section VII contains our conclusions.

## II Preliminaries

### II-A Notation

Let $q$ be a power of a prime. In this paper, all vectors and matrices have components in the finite field $\mathbb{F}_q$, unless otherwise mentioned. We use $\mathbb{F}_q^{n\times m}$ to denote the set of all $n \times m$ matrices over $\mathbb{F}_q$ and we set $\mathbb{F}_q^n = \mathbb{F}_q^{n\times 1}$. In particular, $v \in \mathbb{F}_q^n$ is a column vector and $v \in \mathbb{F}_q^{1\times m}$ is a row vector.

If $v$ is a vector, then the symbol $v_i$ denotes the $i$th entry of $v$. If $A$ is a matrix, then the symbol $A_i$ denotes either the $i$th row or the $i$th column of $A$; the distinction will always be clear from the way in which $A$ is defined. In either case, the symbol $A_{ij}$ always refers to the entry in the $i$th row and $j$th column of $A$.

For clarity, the $n \times n$ identity matrix is denoted by $I_{n\times n}$. If we set $I = I_{n\times n}$, then the notation $I_i$ will denote the $i$th column of $I$. More generally, if $U \subseteq \{1, \ldots, n\}$, then $I_U$ will denote the sub-matrix of $I$ consisting of the columns indexed by $U$.

The linear span of a set of vectors $v_1, \ldots, v_k$ is denoted by $\langle v_1, \ldots, v_k \rangle$. The row space, the rank and the number of nonzero rows of a matrix $X$ are denoted by $\langle X \rangle$, $\operatorname{rank} X$ and $\operatorname{wt}(X)$, respectively. The reduced row echelon (RRE) form of a matrix $X$ is denoted by $\operatorname{RRE}(X)$.

### II-B Properties of Matrix Rank and Subspace Dimension

Let $X \in \mathbb{F}_q^{n\times m}$. By definition, $\operatorname{rank} X = \dim \langle X \rangle$; however, there are many useful equivalent characterizations. For example, $\operatorname{rank} X$ is the smallest $r$ for which there exist matrices $A \in \mathbb{F}_q^{n\times r}$ and $B \in \mathbb{F}_q^{r\times m}$ such that $X = AB$, i.e.,

$$\operatorname{rank} X = \min_{r,\; A \in \mathbb{F}_q^{n\times r},\; B \in \mathbb{F}_q^{r\times m} :\; X = AB} r. \qquad (1)$$

It is well-known that, for any $X, Y \in \mathbb{F}_q^{n\times m}$, we have

$$\operatorname{rank}(X + Y) \leq \operatorname{rank} X + \operatorname{rank} Y \qquad (2)$$

and that, for $X \in \mathbb{F}_q^{n\times m}$ and $A \in \mathbb{F}_q^{N\times n}$, we have

$$\operatorname{rank}(AX) \geq \operatorname{rank} A + \operatorname{rank} X - n. \qquad (3)$$
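As a quick numerical illustration (our own sketch, not part of the original development), the rank over $\mathbb{F}_q$ for prime $q$ can be computed by Gaussian elimination modulo $q$, and inequalities (2) and (3) can be spot-checked on random matrices over $\mathbb{F}_5$; all function names below are ours:

```python
import random

def rank_gf(M, p=5):
    """Rank of a matrix over GF(p), p prime, via Gaussian elimination mod p."""
    M = [[x % p for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)  # multiplicative inverse (Fermat)
        M[r] = [(v * inv) % p for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, X, p=5):
    """Matrix product over GF(p)."""
    return [[sum(a * x for a, x in zip(row, col)) % p for col in zip(*X)]
            for row in A]

random.seed(1)
rnd = lambda rows, cols: [[random.randrange(5) for _ in range(cols)]
                          for _ in range(rows)]
for _ in range(200):
    X, Y, A = rnd(3, 4), rnd(3, 4), rnd(5, 3)
    S = [[(a + b) % 5 for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]
    assert rank_gf(S) <= rank_gf(X) + rank_gf(Y)                 # inequality (2)
    assert rank_gf(matmul(A, X)) >= rank_gf(A) + rank_gf(X) - 3  # (3), with n = 3
```

The same elimination routine underlies the later examples, since all subspace computations in this paper reduce to ranks of stacked matrices.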

Recall that if $U$ and $V$ are subspaces of some fixed vector space, then the sum

$$U + V = \{u + v \colon u \in U,\, v \in V\}$$

is the smallest subspace that contains both $U$ and $V$. Recall also that

$$\dim(U + V) = \dim U + \dim V - \dim(U \cap V). \qquad (4)$$

We will make extensive use of the fact that

$$\left\langle \begin{bmatrix} X \\ Y \end{bmatrix} \right\rangle = \langle X \rangle + \langle Y \rangle \qquad (5)$$

and therefore

$$\operatorname{rank} \begin{bmatrix} X \\ Y \end{bmatrix} = \dim(\langle X \rangle + \langle Y \rangle) = \operatorname{rank} X + \operatorname{rank} Y - \dim(\langle X \rangle \cap \langle Y \rangle). \qquad (6)$$

### II-C Rank-Metric Codes

A matrix code is defined as any nonempty subset $\mathcal{C} \subseteq \mathbb{F}_q^{n\times m}$. A matrix code is also commonly known as an array code when it forms a linear space over $\mathbb{F}_q$ [12].

A natural and useful distance measure between elements of $\mathbb{F}_q^{n\times m}$ is given in the following definition.

###### Definition 1

For $X, Y \in \mathbb{F}_q^{n\times m}$, the rank distance between $X$ and $Y$ is defined as $d_R(X, Y) \triangleq \operatorname{rank}(Y - X)$.

As observed in [11], rank distance is indeed a metric. In particular, the triangle inequality for the rank metric follows directly from (2). In the context of the rank metric, a matrix code is called a rank-metric code. The minimum (rank) distance of a rank-metric code $\mathcal{C}$ is defined as

$$d_R(\mathcal{C}) \triangleq \min_{\substack{x, x' \in \mathcal{C} \\ x \neq x'}} d_R(x, x').$$

Associated with every rank-metric code $\mathcal{C}$ is the transposed code $\mathcal{C}^T$, whose codewords are obtained by transposing the codewords of $\mathcal{C}$, i.e., $\mathcal{C}^T = \{x^T \colon x \in \mathcal{C}\}$. We have $|\mathcal{C}^T| = |\mathcal{C}|$ and $d_R(\mathcal{C}^T) = d_R(\mathcal{C})$. Observe the symmetry between rows and columns in the rank metric; the distinction between a code and its transpose is in fact transparent to the metric.

A minimum distance decoder for a rank-metric code $\mathcal{C} \subseteq \mathbb{F}_q^{n\times m}$ takes a word $r \in \mathbb{F}_q^{n\times m}$ and returns a codeword $\hat{x} \in \mathcal{C}$ that is closest to $r$ in rank distance, that is,

$$\hat{x} = \operatorname*{argmin}_{x \in \mathcal{C}} \operatorname{rank}(r - x). \qquad (7)$$

Note that if $d_R(x, r) < d_R(\mathcal{C})/2$, then a minimum distance decoder is guaranteed to return $\hat{x} = x$.

Throughout this paper, problem (7) will be referred to as the conventional rank decoding problem.
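To make rule (7) concrete, the following is a brute-force minimum rank distance decoder over $\mathbb{F}_5$, run on a toy two-codeword code of our own construction (a practical decoder would instead exploit algebraic structure, as discussed for Gabidulin codes in Section VI):

```python
def rank_gf(M, p=5):
    """Rank over GF(p), p prime, via Gaussian elimination mod p."""
    M = [[x % p for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [(v * inv) % p for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def d_R(x, y, p=5):
    """Rank distance of Definition 1: rank of the difference."""
    return rank_gf([[(a - b) % p for a, b in zip(rx, ry)]
                    for rx, ry in zip(x, y)], p)

def decode(C, r):
    """Minimum rank distance decoding, problem (7), by exhaustive search."""
    return min(C, key=lambda x: d_R(r, x))

I3 = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
Z3 = [[0] * 3 for _ in range(3)]
C = [Z3, I3]                                   # toy code with d_R(C) = 3
e = [[1, 2, 3], [2, 4, 1], [0, 0, 0]]          # rank-1 error (row 2 = 2*row 1 mod 5)
r = [[(a + b) % 5 for a, b in zip(ri, ei)] for ri, ei in zip(I3, e)]
assert d_R(r, I3) == 1                         # within half the minimum distance
assert decode(C, r) == I3                      # the rank-1 error is corrected
```

Exhaustive search is exponential in the code size; it is shown here only to pin down the decoding rule itself.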

There is a rich coding theory for rank-metric codes that is analogous to the classical coding theory in the Hamming metric. In particular, we mention the existence of a Singleton bound [11, 10] (see also [20, 21]), which states that every rank-metric code $\mathcal{C} \subseteq \mathbb{F}_q^{n\times m}$ with minimum distance $d$ must satisfy

$$\log_q |\mathcal{C}| \leq \min\{n(m - d + 1),\; m(n - d + 1)\} = \max\{n, m\}(\min\{n, m\} - d + 1). \qquad (8)$$

Codes that achieve this bound are called maximum-rank-distance (MRD) codes. An extensive class of MRD codes with $n \leq m$ was presented by Gabidulin in [11]. By transposition, MRD codes with $n > m$ can also be obtained. Thus, MRD codes exist for all $n$ and $m$ and all $d \leq \min\{n, m\}$, irrespectively of the field size $q$.

### II-D Subspace Codes

Let $\mathcal{P}(\mathbb{F}_q^M)$ denote the set of all subspaces of $\mathbb{F}_q^M$. We review some concepts of the coding theory for subspaces developed in [9].

###### Definition 2

Let $V, V' \in \mathcal{P}(\mathbb{F}_q^M)$. The subspace distance between $V$ and $V'$ is defined as

$$d_S(V, V') \triangleq \dim(V + V') - \dim(V \cap V') \qquad (9)$$
$$= 2\dim(V + V') - \dim V - \dim V'. \qquad (10)$$

It is shown in [9] that the subspace distance is indeed a metric on $\mathcal{P}(\mathbb{F}_q^M)$.

A subspace code is defined as a nonempty subset of $\mathcal{P}(\mathbb{F}_q^M)$. The minimum (subspace) distance of a subspace code $\Omega$ is defined as

$$d_S(\Omega) \triangleq \min_{\substack{V, V' \in \Omega \\ V \neq V'}} d_S(V, V').$$

The minimum distance decoding problem for a subspace code $\Omega$ is to find a subspace $\hat{V} \in \Omega$ that is closest to a given subspace $U \in \mathcal{P}(\mathbb{F}_q^M)$, i.e.,

$$\hat{V} = \operatorname*{argmin}_{V \in \Omega} d_S(V, U). \qquad (11)$$

A minimum distance decoder is guaranteed to return $\hat{V} = V$ if $d_S(V, U) < d_S(\Omega)/2$.

Let $\mathcal{P}(\mathbb{F}_q^M, n)$ denote the set of all $n$-dimensional subspaces of $\mathbb{F}_q^M$. A subspace code $\Omega$ is called a constant-dimension code if $\Omega \subseteq \mathcal{P}(\mathbb{F}_q^M, n)$. It follows from (9) or (10) that the minimum distance of a constant-dimension code is always an even number.

Let $A_q[M, 2d, n]$ denote the maximum number of codewords in a constant-dimension code with codeword dimension $n$ and minimum subspace distance $2d$. Many bounds on $A_q[M, 2d, n]$ were developed in [9], in particular the Singleton-like bound

$$A_q[M, 2d, n] \leq \begin{bmatrix} M - d + 1 \\ \max\{n,\, M - n\} \end{bmatrix}_q \qquad (12)$$

where

$$\begin{bmatrix} M \\ n \end{bmatrix}_q \triangleq \frac{(q^M - 1)(q^{M-1} - 1)\cdots(q^{M-n+1} - 1)}{(q^n - 1)(q^{n-1} - 1)\cdots(q - 1)}$$

denotes the Gaussian coefficient. It is well known that the Gaussian coefficient gives the number of distinct $n$-dimensional subspaces of an $M$-dimensional vector space over $\mathbb{F}_q$, i.e., $|\mathcal{P}(\mathbb{F}_q^M, n)| = \left[\begin{smallmatrix} M \\ n \end{smallmatrix}\right]_q$. A useful bound on the Gaussian coefficient is given by [9, Lemma 5]

$$\begin{bmatrix} M \\ n \end{bmatrix}_q < 4\, q^{n(M-n)}. \qquad (13)$$

Combining (12) and (13) gives

$$A_q[M, 2d, n] < 4\, q^{\max\{n, M-n\}(\min\{n, M-n\} - d + 1)}. \qquad (14)$$
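The Gaussian coefficient and the bound (13) are easy to check numerically; the following is our own sketch:

```python
def gauss(M, n, q):
    """Gaussian coefficient [M, n]_q: number of n-dim subspaces of F_q^M."""
    num = den = 1
    for i in range(n):
        num *= q ** (M - i) - 1
        den *= q ** (i + 1) - 1
    return num // den  # Gaussian coefficients are integers, division is exact

# [3, 1]_2 = 7: one 1-dimensional subspace of F_2^3 per nonzero vector.
assert gauss(3, 1, 2) == 7
assert gauss(4, 2, 2) == 35

# Bound (13): [M, n]_q < 4 q^{n(M-n)}, spot-checked on a few parameter sets.
for (M, n, q) in [(3, 1, 2), (4, 2, 2), (8, 4, 2), (10, 4, 5)]:
    assert gauss(M, n, q) < 4 * q ** (n * (M - n))
```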

There exist also bounds on $A_q[M, 2d, n]$ that are tighter than (12), namely the Wang-Xing-Safavi-Naini bound [22] and a Johnson-type bound [23].

For future reference, we define the sub-optimality of a constant-dimension code $\Omega \subseteq \mathcal{P}(\mathbb{F}_q^M, n)$ with $d_S(\Omega) = 2d$ to be

$$\alpha(\Omega) \triangleq \frac{\log_q A_q[M, 2d, n] - \log_q |\Omega|}{\log_q A_q[M, 2d, n]}. \qquad (15)$$

## III Error Control in Random Network Coding

### III-A Channel Model

We start by reviewing the basic model for single-source generation-based random linear network coding [3, 2]. Consider a point-to-point communication network with a single source node and a single destination node. Each link in the network is assumed to transport, free of errors, a packet of $M$ symbols in a finite field $\mathbb{F}_q$. Links are directed, incident from the node transmitting the packet and incident to the node receiving the packet. A packet transmitted on a link incident to a given node is said to be an incoming packet for that node, and similarly a packet transmitted on a link incident from a given node is said to be an outgoing packet for that node.

During each transmission generation, the source node formats the information to be transmitted into $n$ packets $X_1, \ldots, X_n \in \mathbb{F}_q^{1\times M}$, which are regarded as incoming packets for the source node. Whenever a node (including the source) has a transmission opportunity, it produces an outgoing packet as a random $\mathbb{F}_q$-linear combination of all the incoming packets it has until then received. The destination node collects $N$ packets $Y_1, \ldots, Y_N$ and tries to recover the original packets $X_1, \ldots, X_n$.

Let $X$ be an $n \times M$ matrix whose rows are the transmitted packets $X_1, \ldots, X_n$ and, similarly, let $Y$ be an $N \times M$ matrix whose rows are the received packets $Y_1, \ldots, Y_N$. Since all packet operations are linear over $\mathbb{F}_q$, then, regardless of the network topology, the transmitted packets and the received packets can be related as

$$Y = AX, \qquad (16)$$

where $A$ is an $N \times n$ matrix corresponding to the overall linear transformation applied by the network.

Before proceeding, we remark that this model encompasses a variety of situations:

• The network may have cycles or delays. Since the overall system is linear, expression (16) will be true regardless of the network topology.

• The network could be wireless instead of wired. Broadcast transmissions in wireless networks may be modeled by constraining each intermediate node to send exactly the same packet on each of its outgoing links.

• The source node may transmit more than one generation (a set of packets). In this case, we assume that each packet carries a label identifying the generation to which it corresponds and that packets from different generations are processed separately throughout the network [2].

• The network topology may be time-varying as nodes join and leave and connections are established and lost. In this case, we assume that each network link is the instantiation of an actual successful packet transmission.

• The network may be used for multicast, i.e., there may be more than one destination node. Again, expression (16) applies; however, the matrix $A$ may be different for each destination.

Let us now extend this model to incorporate packet errors. Following [4, 5, 6], we consider that packet errors may occur in any of the links of the network. Suppose the links in the network are indexed from 1 to $\ell$, and let $Z_i \in \mathbb{F}_q^{1\times M}$ denote the error packet applied at link $i$. The application of an error packet is modeled as follows. We assume that, for each link $i$, the node transmitting on that link first creates a prescribed packet $P_i$ following the procedure described above. Then, an error packet $Z_i$ is added to $P_i$ in order to produce the outgoing packet on this link, i.e., $P_i + Z_i$. Note that any arbitrary packet can be formed on link $i$ simply by a suitable choice of $Z_i$.

Let $Z$ be an $\ell \times M$ matrix whose rows are the error packets $Z_1, \ldots, Z_\ell$. By linearity of the network, we can write

$$Y = AX + BZ, \qquad (17)$$

where $B$ is an $N \times \ell$ matrix corresponding to the overall linear transformation applied to $Z_1, \ldots, Z_\ell$ on route to the destination. Note that $Z_i = 0$ means that no corrupt packet was injected at link $i$. Thus, the number of nonzero rows of $Z$, $\operatorname{wt}(Z)$, gives the total number of (potentially) corrupt packets injected in the network. Note that it is possible that a nonzero error packet happens to be in the row space of $X$, in which case it is not really a corrupt packet.

Observe that this model can represent not only the occurrence of random link errors, but also the action of malicious nodes. A malicious node can potentially transmit erroneous packets on all of its outgoing links. A malicious node may also want to disguise itself and transmit correct packets on some of these links, or may simply refuse to transmit some packet (i.e., transmitting an all-zero packet), which is represented in the model by setting $Z_i = -P_i$. In any case, $\operatorname{wt}(Z)$ gives the total number of "packet interventions" performed by all malicious nodes and thus gives a sense of the total adversarial "power" employed towards jamming the network.

Equation (17) is our basic model of a channel induced by random linear network coding, and we will refer to it as the random linear network coding channel (RLNCC). The channel input and output alphabets are given by $\mathbb{F}_q^{n\times M}$ and $\mathbb{F}_q^{N\times M}$, respectively. To give a full probabilistic specification of the channel, we would need to specify the joint probability distribution of $A$, $B$ and $Z$ given $X$. We will not pursue this path in this paper, taking, instead, a more combinatorial approach.
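The channel equation (17) can be simulated in a few lines; this is our own sketch, with arbitrary toy sizes, and not a model of any particular network topology:

```python
import random

p = 5  # field size (prime, so integers mod p realize F_q)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)]
            for row in A]

def madd(A, B):
    return [[(a + b) % p for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

random.seed(0)
n, N, M, ell = 4, 5, 8, 6   # packets sent/received, packet length, network links
rnd = lambda rows, cols: [[random.randrange(p) for _ in range(cols)]
                          for _ in range(rows)]

X = rnd(n, M)                       # transmitted packets (one per row)
A = rnd(N, n)                       # transfer matrix seen by the sink
Z = [[0] * M for _ in range(ell)]   # error packets, one row per link
Z[2] = [random.randrange(p) for _ in range(M)]  # a corrupt packet on link 3
B = rnd(N, ell)                     # transfer matrix from the links to the sink

Y = madd(matmul(A, X), matmul(B, Z))  # channel equation (17): Y = AX + BZ

wt = sum(1 for row in Z if any(row))  # number of (potentially) corrupt packets
assert wt <= 1
assert len(Y) == N and len(Y[0]) == M
```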

### III-B Transmission via Subspace Selection

Let $\Omega \subseteq \mathcal{P}(\mathbb{F}_q^M)$ be a subspace code with maximum dimension $n$. In the approach in [9], the source node selects a subspace $V \in \Omega$ and transmits this subspace over the RLNCC as some matrix $X$ such that $V = \langle X \rangle$. The destination node receives $Y$ and computes $U = \langle Y \rangle$, from which the transmitted subspace can be inferred using a minimum distance decoder (11).

In this paper, it will be convenient to view the above approach from a matrix perspective. In order to do that, we simply replace $\Omega$ by an (arbitrarily chosen) matrix code that generates $\Omega$. More precisely, let $[\Omega]$ be a matrix code consisting of all the matrices in RRE form whose row space is in $\Omega$. Now, the above setup can be reinterpreted as follows. The source node selects a matrix $X \in [\Omega]$ to transmit over the RLNCC. Upon reception of $Y$, the destination node tries to infer the transmitted matrix using the minimum distance decoding rule

$$\hat{X} = \operatorname*{argmin}_{X \in [\Omega]} d_S(\langle X \rangle, \langle Y \rangle). \qquad (18)$$

Note that the decoding is guaranteed to be successful if $d_S(\langle X \rangle, \langle Y \rangle) < d_S(\Omega)/2$.

### III-C Performance Guarantees

In this subsection, we wish to relate the performance guarantees of a subspace code with more concrete network parameters. Still, we would like these parameters to be sufficiently general so that we do not need to take the whole network topology into account.

We make the following assumptions:

• The column-rank deficiency of the transfer matrix $A$ is never greater than $\rho$, i.e., $\operatorname{rank} A \geq n - \rho$.

• The adversarial nodes together can inject at most $t$ corrupting packets, i.e., $\operatorname{wt}(Z) \leq t$.

The following result characterizes the performance guarantees of a subspace code under our assumptions.

###### Theorem 1

Suppose $\operatorname{rank} A \geq n - \rho$ and $\operatorname{wt}(Z) \leq t$. Then, decoding according to (18) is guaranteed to be successful provided $4t + 2\rho < d_S(\Omega)$.

In order to prove Theorem 1, we need a few results relating rank and subspace distance.

###### Proposition 2

Let $X, Y \in \mathbb{F}_q^{n\times m}$. Then

$$\operatorname{rank} \begin{bmatrix} X \\ Y \end{bmatrix} \leq \operatorname{rank}(Y - X) + \min\{\operatorname{rank} X,\; \operatorname{rank} Y\}.$$
###### Proof:

We have

$$\operatorname{rank} \begin{bmatrix} X \\ Y \end{bmatrix} = \operatorname{rank} \begin{bmatrix} X \\ Y - X \end{bmatrix} \leq \operatorname{rank}(Y - X) + \operatorname{rank} X$$
$$\operatorname{rank} \begin{bmatrix} X \\ Y \end{bmatrix} = \operatorname{rank} \begin{bmatrix} Y - X \\ Y \end{bmatrix} \leq \operatorname{rank}(Y - X) + \operatorname{rank} Y.$$
###### Corollary 3

Let $X \in \mathbb{F}_q^{n\times m}$ and $Y = X + Z$, where $Z \in \mathbb{F}_q^{n\times m}$. Then

$$d_S(\langle X \rangle, \langle Y \rangle) \leq 2\operatorname{rank} Z - |\operatorname{rank} X - \operatorname{rank} Y|.$$
###### Proof:

From Proposition 2, together with (6) and (9), we have

$$d_S(\langle X \rangle, \langle Y \rangle) = 2\operatorname{rank} \begin{bmatrix} X \\ Y \end{bmatrix} - \operatorname{rank} X - \operatorname{rank} Y$$
$$\leq 2\operatorname{rank} Z + 2\min\{\operatorname{rank} X, \operatorname{rank} Y\} - \operatorname{rank} X - \operatorname{rank} Y$$
$$= 2\operatorname{rank} Z - |\operatorname{rank} X - \operatorname{rank} Y|.$$

We can now give a proof of Theorem 1.

###### Proof:

From Corollary 3, we have that

$$d_S(\langle AX \rangle, \langle Y \rangle) \leq 2\operatorname{rank} BZ \leq 2\operatorname{rank} Z \leq 2\operatorname{wt}(Z) \leq 2t.$$

Using (3), we find that

$$d_S(\langle X \rangle, \langle AX \rangle) = \operatorname{rank} X - \operatorname{rank} AX \leq n - \operatorname{rank} A \leq \rho.$$

Since $d_S$ satisfies the triangle inequality, we have

$$d_S(\langle X \rangle, \langle Y \rangle) \leq d_S(\langle X \rangle, \langle AX \rangle) + d_S(\langle AX \rangle, \langle Y \rangle) \leq \rho + 2t < d_S(\Omega)/2$$

and therefore the decoding is guaranteed to be successful.

Theorem 1 is analogous to Theorem 2 in [9], which states that minimum subspace distance decoding is guaranteed to be successful if $2(\delta + \mu) < d_S(\Omega)$, where $\delta$ and $\mu$ are, respectively, the number of "insertions" and "deletions" of dimensions that occur in the channel [9]. Intuitively, since one corrupted packet injected at a network min-cut can effectively replace a dimension of the transmitted subspace, we see that $t$ corrupted packets can cause at most $t$ deletions and $t$ insertions of dimensions. Combined with possible further deletions caused by a rank deficiency of $A$, we have that $\delta \leq t$ and $\mu \leq \rho + t$. Thus,

$$\delta + \mu \leq 2t + \rho.$$

In other words, under the condition that corrupt packets may be injected in any of the links in the network (which must be assumed if we do not wish to take the network topology into account), the performance guarantees of a minimum distance decoder are essentially given by Theorem 1.

It is worth mentioning that, according to recent results [24], minimum subspace distance decoding may not be the optimal decoding rule when the subspaces in $\Omega$ have different dimensions. For the remainder of this paper, however, we focus on the case of a constant-dimension code, and therefore we use the minimum distance decoding rule (18). Our goal will be to construct constant-dimension subspace codes with good performance and efficient encoding/decoding procedures.

## IV Codes for the Random Linear Network Coding Channel Based on Rank-Metric Codes

In this section, we show how a constant-dimension subspace code can be constructed from any rank-metric code. In particular, this construction will allow us to obtain nearly-optimal subspace codes that possess efficient encoding and decoding algorithms.

### IV-A Lifting Construction

From now on, assume that $M = n + m$, where $m > 0$. Let $I = I_{n\times n}$.

###### Definition 3

Let $\mathcal{I}\colon \mathbb{F}_q^{n\times m} \to \mathcal{P}(\mathbb{F}_q^{n+m})$ be given by $\mathcal{I}(x) = \langle [\,I\;\; x\,] \rangle$. The subspace $\mathcal{I}(x)$ is called the lifting of the matrix $x$. Similarly, if $\mathcal{C} \subseteq \mathbb{F}_q^{n\times m}$ is a rank-metric code, then the subspace code $\mathcal{I}(\mathcal{C}) = \{\mathcal{I}(x)\colon x \in \mathcal{C}\}$, obtained by lifting each codeword of $\mathcal{C}$, is called the lifting of $\mathcal{C}$.

Definition 3 provides an injective mapping between rank-metric codes and subspace codes. Note that a subspace code $\mathcal{I}(\mathcal{C})$ constructed by lifting is always a constant-dimension code (with codeword dimension $n$).

Although the lifting construction is a particular way of constructing subspace codes, it can also be seen as a generalization of the standard approach to random network coding [3, 2]. In the latter, every transmitted matrix has the form $X = [\,I\;\; x\,]$, where the payload matrix $x \in \mathbb{F}_q^{n\times m}$ corresponds to the raw data to be communicated. In our approach, each transmitted matrix is also of the form $X = [\,I\;\; x\,]$, but the payload matrix $x$ is restricted to be a codeword of a rank-metric code rather than uncoded data.

Our reasons for choosing $\mathcal{C}$ to be a rank-metric code will be made clear from the following proposition.

###### Proposition 4

Let $\mathcal{C} \subseteq \mathbb{F}_q^{n\times m}$ be a rank-metric code and let $x, x' \in \mathcal{C}$. Then

$$d_S(\mathcal{I}(x), \mathcal{I}(x')) = 2\, d_R(x, x')$$
$$d_S(\mathcal{I}(\mathcal{C})) = 2\, d_R(\mathcal{C}).$$
###### Proof:

Since $\dim \mathcal{I}(x) = \dim \mathcal{I}(x') = n$, we have

$$d_S(\mathcal{I}(x), \mathcal{I}(x')) = 2\operatorname{rank} \begin{bmatrix} I & x \\ I & x' \end{bmatrix} - 2n = 2\operatorname{rank} \begin{bmatrix} I & x \\ 0 & x' - x \end{bmatrix} - 2n = 2\operatorname{rank}(x' - x).$$

The second statement is immediate.

Proposition 4 shows that a subspace code constructed by lifting inherits the distance properties of its underlying rank-metric code. The question of whether such lifted rank-metric codes are “good” compared to the whole class of constant-dimension codes is addressed in the following proposition.
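The lifting map of Definition 3 and the distance relation of Proposition 4 can be checked directly from (6) and (9); the following is our own sketch over $\mathbb{F}_5$:

```python
def rank_gf(M, p=5):
    """Rank over GF(p), p prime, via Gaussian elimination mod p."""
    M = [[x % p for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [(v * inv) % p for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def d_S(Xm, Ym, p=5):
    """Subspace distance between row spaces, via (6) and (9):
    d_S = 2 rank[X; Y] - rank X - rank Y.  (list + stacks the rows)"""
    return 2 * rank_gf(Xm + Ym, p) - rank_gf(Xm, p) - rank_gf(Ym, p)

def lift(x):
    """Lifting of Definition 3: return the matrix [I  x]."""
    n = len(x)
    return [[1 if i == j else 0 for j in range(n)] + list(x[i])
            for i in range(n)]

x, xp = [[1, 2], [3, 4]], [[0, 2], [3, 3]]
dR = rank_gf([[(a - b) % 5 for a, b in zip(r1, r2)] for r1, r2 in zip(x, xp)])
assert d_S(lift(x), lift(xp)) == 2 * dR   # Proposition 4
```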

###### Proposition 5

Let $\mathcal{C} \subseteq \mathbb{F}_q^{n\times m}$ be an MRD code with $d_R(\mathcal{C}) = d$. Then $d_S(\mathcal{I}(\mathcal{C})) = 2d$ and

$$A_q[n + m, 2d, n] < 4\,|\mathcal{I}(\mathcal{C})| = 4\,|\mathcal{C}|.$$

Moreover, for any code parameters, the sub-optimality of $\mathcal{I}(\mathcal{C})$ in $\mathcal{P}(\mathbb{F}_q^{n+m}, n)$ satisfies

$$\alpha(\mathcal{I}(\mathcal{C})) < \frac{4}{(n + m)\log_2 q}.$$
###### Proof:

Using (14) and the fact that $\mathcal{C}$ achieves the Singleton bound for rank-metric codes (8), we have

$$A_q[n + m, 2d, n] < 4\, q^{\max\{n,m\}(\min\{n,m\} - d + 1)} = 4\,|\mathcal{C}|.$$

Applying this result in (15), and using the fact that $\log_q |\mathcal{C}| = \max\{n,m\}(\min\{n,m\} - d + 1) \geq (n + m)/2$, we obtain

$$\alpha(\mathcal{I}(\mathcal{C})) < \frac{\log_q 4}{\log_q |\mathcal{C}|} \leq \frac{2/\log_2 q}{(n + m)/2} = \frac{4}{(n + m)\log_2 q}.$$

Proposition 5 shows that, for all practical purposes, lifted MRD codes are essentially optimal as constant-dimension codes. Indeed, the rate loss in using a lifted MRD code rather than an optimal constant-dimension code is smaller than $4/P$, where $P = (n + m)\log_2 q$ is the packet size in bits. In particular, for packet sizes of 50 bytes or more, the rate loss is smaller than 1%.
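The 1% figure is simple arithmetic; a minimal check of our own:

```python
def rate_loss_bound(packet_bits):
    """Upper bound 4/((n+m) log2 q) = 4/P from Proposition 5,
    with P the packet size in bits."""
    return 4 / packet_bits

# 50-byte packets carry 400 bits, so the rate loss is bounded by 1%.
assert abs(rate_loss_bound(50 * 8) - 0.01) < 1e-12
```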

In this context, it is worth mentioning that the nearly-optimal Reed-Solomon-like codes proposed in [9] correspond exactly to the lifting of the class of MRD codes proposed by Gabidulin [11]. The latter will be discussed in more detail in Section VI.

### IV-B Decoding

We now specialize the decoding problem (18) to the specific case of lifted rank-metric codes. We will see that it is possible to reformulate such a problem in a way that resembles the conventional decoding problem for rank-metric codes, but with additional side-information presented to the decoder.

Let the transmitted matrix be given by $X = [\,I\;\; x\,]$, where $x \in \mathcal{C}$ and $\mathcal{C} \subseteq \mathbb{F}_q^{n\times m}$ is a rank-metric code. Write the received matrix as

$$Y = [\,\hat{A}\;\; y\,]$$

where $\hat{A} \in \mathbb{F}_q^{N\times n}$ and $y \in \mathbb{F}_q^{N\times m}$. In accordance with the formulation of Section III-B, we assume that $\operatorname{rank} Y = N$, since any linearly dependent received packets do not affect the decoding problem and may be discarded by the destination node. Now, define

$$\mu \triangleq n - \operatorname{rank} \hat{A} \quad \text{and} \quad \delta \triangleq N - \operatorname{rank} \hat{A}.$$

Here $\mu$ measures the rank deficiency of $\hat{A}$ with respect to columns, while $\delta$ measures the rank deficiency of $\hat{A}$ with respect to rows.

Before examining the general problem, we study the simple special case that arises when $\mu = \delta = 0$.

###### Proposition 6

If $\mu = \delta = 0$, then

$$d_S(\langle X \rangle, \langle Y \rangle) = 2\, d_R(x, r)$$

where $r = \hat{A}^{-1} y$.

###### Proof:

Since $\mu = \delta = 0$, the matrix $\hat{A}$ is invertible. Thus, $Y$ is row equivalent to $[\,I\;\; \hat{A}^{-1} y\,]$, i.e., $\langle Y \rangle = \langle [\,I\;\; r\,] \rangle$. Applying Proposition 4, we get the desired result.

The above proposition shows that, whenever $\hat{A}$ is invertible, a solution to (18) can be found by solving the conventional rank decoding problem (7). This case is illustrated by the following example.

###### Example 1

Let $n = 4$ and $q = 5$. Let $x_1, x_2, x_3, x_4 \in \mathbb{F}_q^{1\times m}$ denote the rows of a codeword $x \in \mathcal{C}$. Suppose that

$$A = \begin{bmatrix} 2 & 4 & 2 & 4 \\ 0 & 0 & 3 & 3 \\ 1 & 0 & 4 & 3 \\ 0 & 4 & 1 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 4 \\ 0 \\ 1 \\ 0 \end{bmatrix} \quad \text{and} \quad Z = [\,1\;\; 2\;\; 3\;\; 4\;\; z\,],$$

where $z \in \mathbb{F}_q^{1\times m}$. Then

$$Y = \begin{bmatrix} 1 & 2 & 4 & 0 & 2x_1 + 4x_2 + 2x_3 + 4x_4 + 4z \\ 0 & 0 & 3 & 3 & 3x_3 + 3x_4 \\ 2 & 2 & 2 & 2 & x_1 + 4x_3 + 3x_4 + z \\ 0 & 4 & 1 & 4 & 4x_2 + x_3 + 4x_4 \end{bmatrix}.$$

Converting $Y$ to RRE form, we obtain

$$\bar{Y} = [\,I\;\; r\,] \qquad (19)$$

where

$$r = \begin{bmatrix} 3x_2 + 2x_3 + x_4 + z \\ 3x_1 + 2x_2 + 4x_3 + 2x_4 + 2z \\ 4x_1 + 3x_2 + 3x_3 + x_4 + z \\ x_1 + 2x_2 + 3x_3 + 4z \end{bmatrix}.$$

Note that, if no errors had occurred, we would expect to find $r = x$.

Now, observe that we can write

$$r = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + \begin{bmatrix} 1 \\ 2 \\ 1 \\ 4 \end{bmatrix} \begin{bmatrix} 4x_1 + 3x_2 + 2x_3 + x_4 + z \end{bmatrix}.$$

Thus, $\operatorname{rank}(r - x) = 1$. We can think of $r$ as the codeword $x$ corrupted by an error word of rank 1. This error can be corrected if $d_R(\mathcal{C}) \geq 3$.
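Example 1 can be reproduced numerically. In the sketch below (our own), the payload rows $x_i$ and the error row $z$ are arbitrary concrete choices over $\mathbb{F}_5$; the matrices $A$, $B$ and $Z$ are those of the example:

```python
p = 5

def rre(M):
    """Reduced row echelon form over GF(p)."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [(v * inv) % p for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return M

def rank_gf(M):
    return sum(1 for row in rre(M) if any(row))

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)]
            for row in A]

def madd(A, B):
    return [[(a + b) % p for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[2, 4, 2, 4], [0, 0, 3, 3], [1, 0, 4, 3], [0, 4, 1, 4]]
B = [[4], [0], [1], [0]]
x = [[1, 1, 1, 1], [0, 1, 2, 3], [2, 0, 2, 0], [3, 3, 0, 1]]  # arbitrary payload
Z = [[1, 2, 3, 4, 0, 1, 2, 3]]                                # Z = [1 2 3 4 | z]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
X = [I4[i] + x[i] for i in range(4)]                          # lifted matrix [I x]

Y = madd(matmul(A, X), matmul(B, Z))     # channel equation (17)
Ybar = rre(Y)
assert [row[:4] for row in Ybar] == I4   # Ybar = [I r], as in (19)
r = [row[4:] for row in Ybar]
e = [[(a - b) % p for a, b in zip(ri, xi)] for ri, xi in zip(r, x)]
assert rank_gf(e) == 1                   # the error word r - x has rank 1
```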

Let us now proceed to the general case, where $\hat{A}$ is not necessarily invertible. We first examine a relatively straightforward approach that, however, leads to an unattractive decoding problem.

Similarly to the proof of Proposition 6, it is possible to show that

$$d_S(\langle X \rangle, \langle Y \rangle) = 2\operatorname{rank}(y - \hat{A}x) + \mu - \delta$$

which yields the following decoding problem:

$$\hat{x} = \operatorname*{argmin}_{x \in \mathcal{C}} \operatorname{rank}(y - \hat{A}x). \qquad (20)$$

If we define a new code $\mathcal{C}' = \hat{A}\mathcal{C} = \{\hat{A}x \colon x \in \mathcal{C}\}$, then a solution to (20) can be found by first solving

$$\hat{x}' = \operatorname*{argmin}_{x' \in \mathcal{C}'} \operatorname{rank}(y - x')$$

using a conventional rank decoder for $\mathcal{C}'$ and then choosing any $\hat{x} \in \mathcal{C}$ such that $\hat{A}\hat{x} = \hat{x}'$ as a solution. An obvious drawback of this approach is that it requires a new code $\mathcal{C}' = \hat{A}\mathcal{C}$ to be used at each decoding instance. This is likely to increase the decoding complexity, since the existence of an efficient decoding algorithm for $\mathcal{C}$ does not imply the existence of an efficient decoding algorithm for $\hat{A}\mathcal{C}$ for all $\hat{A}$. Moreover, even if efficient algorithms are known for all $\hat{A}\mathcal{C}$, running a different algorithm for each received matrix may be impractical or undesirable from an implementation point-of-view.

In the following, we seek an expression for $d_S(\langle X \rangle, \langle Y \rangle)$ in which the structure of the code $\mathcal{C}$ can be exploited directly. In order to motivate our approach, we consider the following two examples, which generalize Example 1.

###### Example 2

Consider again the parameters of Example 1, but now let $N = 5$,

$$A = \begin{bmatrix} 1 & 0 & 2 & 3 \\ 1 & 3 & 0 & 3 \\ 1 & 4 & 0 & 3 \\ 2 & 0 & 4 & 0 \\ 1 & 1 & 2 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 4 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \quad \text{and} \quad Z = [\,1\;\; 2\;\; 3\;\; 4\;\; z\,].$$

Then

$$Y = \begin{bmatrix} 0 & 3 & 4 & 4 & x_1 + 2x_3 + 3x_4 + 4z \\ 1 & 3 & 0 & 3 & x_1 + 3x_2 + 3x_4 \\ 2 & 1 & 3 & 2 & x_1 + 4x_2 + 3x_4 + z \\ 2 & 0 & 4 & 0 & 2x_1 + 4x_3 \\ 1 & 1 & 2 & 4 & x_1 + x_2 + 2x_3 + 4x_4 \end{bmatrix} = [\,\hat{A}\;\; y\,].$$

Although $\hat{A}$ is not invertible, we can nevertheless convert $Y$ to RRE form to obtain

$$\bar{Y} = \begin{bmatrix} I & r \\ 0 & \hat{E} \end{bmatrix} \qquad (21)$$

where

$$r = \begin{bmatrix} 2x_1 + 2x_2 + 3x_3 + 4x_4 + 4z \\ 4x_1 + 4x_2 + 2x_3 + x_4 + z \\ 2x_1 + 4x_2 + 2x_3 + 3x_4 + 3z \\ 3x_1 + x_2 + 4x_3 + 3x_4 + 2z \end{bmatrix}$$

and

$$\hat{E} = \begin{bmatrix} 2x_1 + 4x_2 + x_3 + 3x_4 + 3z \end{bmatrix}.$$

Observe that

$$e = r - x = \begin{bmatrix} x_1 + 2x_2 + 3x_3 + 4x_4 + 4z \\ 4x_1 + 3x_2 + 2x_3 + x_4 + z \\ 2x_1 + 4x_2 + x_3 + 3x_4 + 3z \\ 3x_1 + x_2 + 4x_3 + 2x_4 + 2z \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \\ 1 \\ 4 \end{bmatrix} \hat{E}.$$

Thus, we see not only that $\operatorname{rank}(r - x) = 1$, but we have also recovered part of its decomposition as an outer product, namely, the row vector $\hat{E}$.

###### Example 3

Consider again the parameters of Example 1, but now let $N = 3$,

$$A = \begin{bmatrix} 3 & 2 & 1 & 1 \\ 0 & 4 & 3 & 2 \\ 2 & 1 & 0 & 4 \end{bmatrix}$$

and suppose that there are no errors. Then

$$Y = [\,A\;\; Ax\,] = \begin{bmatrix} 3 & 2 & 1 & 1 & 3x_1 + 2x_2 + x_3 + x_4 \\ 0 & 4 & 3 & 2 & 4x_2 + 3x_3 + 2x_4 \\ 2 & 1 & 0 & 4 & 2x_1 + x_2 + 4x_4 \end{bmatrix}.$$

Once again we cannot invert $\hat{A}$; however, after converting $Y$ to RRE form and inserting an all-zero row in the third position, we obtain

$$\hat{Y} = \begin{bmatrix} 1 & 0 & 4 & 0 & x_1 + 4x_3 \\ 0 & 1 & 2 & 0 & x_2 + 2x_3 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & x_4 \end{bmatrix} = [\,I + \hat{L} I_3^T\;\;\; x + \hat{L}x_3\,] = [\,I + \hat{L} I_3^T\;\;\; r\,] \qquad (22)$$

where

$$\hat{L} = \begin{bmatrix} 4 \\ 2 \\ -1 \\ 0 \end{bmatrix}.$$

Once again we see that the error word has rank 1, and that we have recovered part of its decomposition as an outer product. Namely, we have

$$e = r - x = \hat{L} x_3$$

where this time the column vector $\hat{L}$ is known (while the value $x_3$ is not).

Having seen from these two examples how side information (partial knowledge of the error matrix) arises at the output of the RLNCC, we address the general case in the following proposition.

###### Proposition 7

Let $Y \in \mathbb{F}_q^{N\times(n+m)}$, with $\operatorname{rank} Y = N$, and let $\mu$ and $\delta$ be defined as above. There exist a tuple $(r, \hat{L}, \hat{E}) \in \mathbb{F}_q^{n\times m} \times \mathbb{F}_q^{n\times\mu} \times \mathbb{F}_q^{\delta\times m}$ and a set $U \subseteq \{1, \ldots, n\}$ satisfying

$$|U| = \mu \qquad (23)$$
$$I_U^T r = 0 \qquad (24)$$
$$I_U^T \hat{L} = -I_{\mu\times\mu} \qquad (25)$$
$$\operatorname{rank} \hat{E} = \delta \qquad (26)$$

such that

$$\left\langle \begin{bmatrix} I + \hat{L} I_U^T & r \\ 0 & \hat{E} \end{bmatrix} \right\rangle = \langle Y \rangle. \qquad (27)$$
###### Proof:

See the Appendix.

Proposition 7 shows that every matrix $Y \in \mathbb{F}_q^{N\times(n+m)}$ is row equivalent to a matrix

$$\bar{Y} = \begin{bmatrix} I + \hat{L} I_U^T & r \\ 0 & \hat{E} \end{bmatrix}$$

which is essentially the matrix $Y$ in reduced row echelon form. Equations (19), (21) and (22) are examples of matrices in this form. We can think of the matrices $r$, $\hat{L}$ and $\hat{E}$ and the set $U$ as providing a compact description of the received subspace $\langle Y \rangle$. The set $U$ is in fact redundant and can be omitted from the description, as we show in the next proposition.

###### Proposition 8

Let $(r, \hat{L}, \hat{E}) \in \mathbb{F}_q^{n\times m} \times \mathbb{F}_q^{n\times\mu} \times \mathbb{F}_q^{\delta\times m}$ be a tuple and $U \subseteq \{1, \ldots, n\}$ be a set that satisfy (23)–(26). For any set $S \subseteq \{1, \ldots, n\}$ and any nonsingular matrices $T \in \mathbb{F}_q^{\mu\times\mu}$ and $R \in \mathbb{F}_q^{\delta\times\delta}$ such that $(r, \hat{L}T, R\hat{E})$ and $S$ satisfy (23)–(26), we have

$$\left\langle \begin{bmatrix} I + \hat{L}T I_S^T & r \\ 0 & R\hat{E} \end{bmatrix} \right\rangle = \left\langle \begin{bmatrix} I + \hat{L} I_U^T & r \\ 0 & \hat{E} \end{bmatrix} \right\rangle.$$
###### Proof:

See the Appendix.

Proposition 8 shows that, given a tuple $(r, \hat{L}, \hat{E})$ obtained from Proposition 7, the set $U$ can be found as any set satisfying (23)–(25). Moreover, the matrix $\hat{L}$ may be multiplied on the right by any nonsingular matrix (provided that the resulting matrix satisfies (23)–(25) for some $U$), and the matrix $\hat{E}$ may be multiplied on the left by any nonsingular matrix; none of these operations change the subspace described by the tuple. The notion of a concise description of a subspace is captured in the following definition.

###### Definition 4

A tuple $(r, \hat{L}, \hat{E}) \in \mathbb{F}_q^{n\times m} \times \mathbb{F}_q^{n\times\mu} \times \mathbb{F}_q^{\delta\times m}$ that satisfies (23)–(27) for some $U \subseteq \{1, \ldots, n\}$ is said to be a reduction of the matrix $Y$.

###### Remark 1

It would be enough to specify, besides the matrix $r$, only the column space of $\hat{L}$ and the row space of $\hat{E}$ in the definition of a reduction. For simplicity we will, however, not use this notation here.

Note that if $Y = [\,I\;\; x\,]$ is a lifting of $x$, then $(x, [\;], [\;])$ is a reduction of $Y$ (where $[\;]$ denotes an empty matrix). Thus, reduction can be interpreted as the inverse of lifting.

We can now prove the main theorem of this section.

###### Theorem 9

Let $(r, \hat{L}, \hat{E})$ be a reduction of $Y$. Then

$$d_S(\langle X \rangle, \langle Y \rangle) = 2\operatorname{rank} \begin{bmatrix} \hat{L} & r - x \\ 0 & \hat{E} \end{bmatrix} - (\mu + \delta).$$
###### Proof:

See the Appendix.

A consequence of Theorem 9 is that, under the lifting construction, the decoding problem (18) for random network coding can be abstracted to a generalized decoding problem for rank-metric codes. More precisely, if we cascade an RLNCC, at the input, with a device that takes $x$ to its lifting $X = [\,I\;\; x\,]$ and, at the output, with a device that takes $Y$ to its reduction $(r, \hat{L}, \hat{E})$, then the decoding problem (18) reduces to the following problem:

Generalized Decoding Problem for Rank-Metric Codes: Let $\mathcal{C} \subseteq \mathbb{F}_q^{n\times m}$ be a rank-metric code. Given a received tuple $(r, \hat{L}, \hat{E})$ with $r \in \mathbb{F}_q^{n\times m}$, $\hat{L} \in \mathbb{F}_q^{n\times\mu}$ and $\hat{E} \in \mathbb{F}_q^{\delta\times m}$, find

$$\hat{x} = \operatorname*{argmin}_{x \in \mathcal{C}} \operatorname{rank} \begin{bmatrix} \hat{L} & r - x \\ 0 & \hat{E} \end{bmatrix}. \qquad (28)$$

The problem above will be referred to as the generalized decoding problem for rank-metric codes, or generalized rank decoding for short. Note that the conventional rank decoding problem (7) corresponds to the special case where $\mu = \delta = 0$.
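A brute-force instance of (28) on a toy code of our own construction, with one erasure ($\mu = 1$, i.e., the column $\hat{L}$ is known but the error value is not) and no deviations ($\delta = 0$):

```python
p = 5

def rank_gf(M):
    """Rank over GF(p) via Gaussian elimination mod p."""
    M = [[x % p for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [(v * inv) % p for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def objective(r, L, E, x):
    """Rank of the block matrix [L, r - x; 0, E] from (28)."""
    mu = len(L[0]) if L else 0
    top = [L[i] + [(a - b) % p for a, b in zip(r[i], x[i])]
           for i in range(len(r))]
    bot = [[0] * mu + list(row) for row in E]
    return rank_gf(top + bot)

C = [[[0, 0], [0, 0]], [[1, 0], [0, 1]]]   # toy code with d_R(C) = 2
x = C[1]                                    # transmitted codeword
L = [[1], [0]]                              # known error "location" (erasure)
E_val = [[2, 3]]                            # error value, unknown to the decoder
e = [[(L[i][0] * E_val[0][j]) % p for j in range(2)] for i in range(2)]
r = [[(a + b) % p for a, b in zip(xi, ei)] for xi, ei in zip(x, e)]

xhat = min(C, key=lambda c: objective(r, L, [], c))
assert xhat == x   # corrected: here mu = 1, delta = 0, and no further rank errors
```

Exhaustive search over $\mathcal{C}$ is only meant to pin down the objective in (28); the efficient algorithm of Section VI solves the same problem for Gabidulin codes.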

The remainder of this paper is devoted to the study of the generalized rank decoding problem and to its solution in the case of MRD codes.

## V A Generalized Decoding Problem for Rank-Metric Codes

In this section, we develop a perspective on the generalized rank decoding problem that will prove useful to the understanding of the correction capability of rank-metric codes, as well as to the formulation of an efficient decoding algorithm.

### V-A Error Locations and Error Values

Let $\mathcal{C} \subseteq \mathbb{F}_q^{n\times m}$ be a rank-metric code. For a transmitted codeword $x$ and a received word $r$, define $e \triangleq r - x$ as the error word.

Note that if an error word $e$ has rank $\tau$, then we can write $e = LE$ for some full-rank matrices $L \in \mathbb{F}_q^{n\times\tau}$ and $E \in \mathbb{F}_q^{\tau\times m}$.