On Metrics for Error Correction in Network Coding


Danilo Silva and Frank R. Kschischang

This work was supported by CAPES Foundation, Brazil, and by the Natural Sciences and Engineering Research Council of Canada. Portions of this paper were presented at the IEEE Information Theory Workshop, Bergen, Norway, July 2007, and at the 46th Annual Allerton Conference on Communications, Control, and Computing, Monticello, IL, September 2008. The authors are with The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada (e-mail: danilo@comm.utoronto.ca; frank@comm.utoronto.ca).
Abstract

The problem of error correction in both coherent and noncoherent network coding is considered under an adversarial model. For coherent network coding, where knowledge of the network topology and network code is assumed at the source and destination nodes, the error correction capability of an (outer) code is succinctly described by the rank metric; as a consequence, it is shown that universal network error correcting codes achieving the Singleton bound can be easily constructed and efficiently decoded. For noncoherent network coding, where knowledge of the network topology and network code is not assumed, the error correction capability of a (subspace) code is given exactly by a new metric, called the injection metric, which is closely related to, but different from, the subspace metric of Kötter and Kschischang. In particular, in the case of a non-constant-dimension code, the decoder associated with the injection metric is shown to correct more errors than a minimum-subspace-distance decoder. All of these results are based on a general approach to adversarial error correction, which could be useful for other adversarial channels beyond network coding.

Index Terms—Adversarial channels, error correction, injection distance, network coding, rank distance, subspace codes.

I Introduction

The problem of error correction for a network implementing linear network coding has been an active research area since 2002 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. The crucial motivation for the problem is the phenomenon of error propagation, which arises due to the recombination of packets at the heart of network coding. A single corrupt packet occurring in the application layer (e.g., introduced by a malicious user) may proceed undetected and contaminate other packets, causing potentially drastic consequences and essentially ruling out classical error correction approaches.

In the basic multicast model for linear network coding, a source node transmits $n$ packets, each consisting of $m$ symbols from a finite field $\mathbb{F}_q$. Each link in the network transports a packet free of errors, and each node creates outgoing packets as $\mathbb{F}_q$-linear combinations of its incoming packets. There are one or more destination nodes that wish to obtain the original source packets. At a specific destination node, the received packets may be represented as the rows of an $N \times m$ matrix $Y = AX$, where $X \in \mathbb{F}_q^{n \times m}$ is the matrix whose rows are the source packets and $A \in \mathbb{F}_q^{N \times n}$ is the transfer matrix of the network. Errors are incorporated in the model by allowing up to $t$ error packets to be added (in the vector space $\mathbb{F}_q^m$) to the packets sent over one or more links. The received matrix at a specific destination node may then be written as

$$Y = AX + DZ \qquad (1)$$

where $Z \in \mathbb{F}_q^{t \times m}$ is a matrix whose rows are the error packets, and $D \in \mathbb{F}_q^{N \times t}$ is the transfer matrix from these packets to the destination. Under this model, a coding-theoretic problem is how to design an outer code and the underlying network code such that reliable communication (to all destinations) is possible.
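To make the channel law concrete, the following minimal Python sketch simulates one use of (1) over $\mathbb{F}_2$. Everything here—the helper functions gf2_matmul and gf2_add and all specific matrices—is our own illustrative choice, not part of the paper's construction.

```python
# Toy instance of the channel Y = AX + DZ in (1), over GF(2).

def gf2_matmul(A, B):
    """Matrix product mod 2; zip(*B) iterates over the columns of B."""
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def gf2_add(A, B):
    """Entrywise sum mod 2 (XOR)."""
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

n, m, N, t = 3, 4, 3, 1
X = [[1, 0, 1, 1],          # n x m: the source packets (rows)
     [0, 1, 1, 0],
     [1, 1, 0, 0]]
A = [[1, 0, 0],             # N x n: transfer matrix of the network
     [1, 1, 0],
     [0, 1, 1]]
Z = [[0, 1, 0, 1]]          # t x m: error packet(s) injected by the adversary
D = [[1], [0], [1]]         # N x t: transfer matrix from error packets to the sink

Y = gf2_add(gf2_matmul(A, X), gf2_matmul(D, Z))   # the received packets
print(Y)
```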

This coding problem can be posed in a number of ways, depending on the set of assumptions made. For example, we may assume that the network topology and the network code are known at the source and at the destination nodes, in which case we call the system coherent network coding. Alternatively, we may assume that such information is unavailable, in which case we call the system noncoherent network coding. The error matrix $Z$ may be random or chosen by an adversary, and there may be further assumptions on the knowledge or other capabilities of the adversary. The essential assumption, in order to pose a meaningful coding problem, is that the number of injected error packets, $t$, is bounded.

Error correction for coherent network coding was originally studied by Cai and Yeung [1, 2, 3], with the aim of establishing fundamental limits. In [2, 3] (see also [9, 10]), the authors derive a Singleton bound in this context and construct codes that achieve this bound. A drawback of their approach is that the required field size can be very large (on the order of $\binom{|\mathcal{E}|}{2t}$, where $|\mathcal{E}|$ is the number of edges in the network), and no efficient decoding method is given. Similar constructions, analyses and bounds appear also in [5, 11, 4, 12].

In Section IV, we approach this problem under a different framework. We assume the pessimistic situation in which the adversary can not only inject up to $t$ error packets but can also freely choose the matrix $D$. The proposed approach allows us to find a metric—the rank metric—that succinctly describes the error correction capability of a code. We quite easily obtain bounds and constructions analogous to those of [2, 3, 9, 10, 11], and show that many of the results in [4, 12] can be reinterpreted and simplified in this framework. Moreover, we find that our pessimistic assumption actually incurs no penalty, since the codes we propose achieve the Singleton bound of [2]. An advantage of this approach is that it is universal, in the sense that the outer code and the network code may be designed independently of each other. More precisely, the outer code may be chosen as any rank-metric code with good error-correction capability, while the network code can be designed as if the network were error-free (and, in particular, the field size can be chosen as the minimum required for multicast). An additional advantage is that encoding and decoding of properly chosen rank-metric codes can be performed very efficiently [8].

For noncoherent network coding, a combinatorial framework for error control was introduced by Kötter and Kschischang in [7]. There, the problem is formulated as the transmission of subspaces through an operator channel, where the transmitted and received subspaces are the row spaces of the matrices $X$ and $Y$ in (1), respectively. They proposed a metric that is suitable for this channel, the so-called subspace distance [7]. They also presented a Singleton-like bound for their metric, together with subspace codes achieving this bound. The main justification for their metric is the fact that a minimum-subspace-distance decoder seems to be precisely the right tool for decoding the disturbances imposed by the operator channel. However, when these disturbances are translated into more concrete terms, such as the number of error packets injected, only decoding guarantees can be obtained for the minimum-distance decoder of [7], but no converse. More precisely, assume that $t$ error packets are injected and a general (not necessarily constant-dimension) subspace code with minimum subspace distance $d_S(\mathcal{C})$ is used. In this case, while it is possible to guarantee successful decoding if $t < d_S(\mathcal{C})/4$, and we know of specific examples where decoding fails if this condition is not met, a general converse is not known.

In Section V, we prove such a converse for a new metric—which we call the injection distance—under a slightly different transmission model. We assume that the adversary is allowed to arbitrarily select the matrices $A$ and $D$, provided that a lower bound on the rank of $A$ is respected. Under this pessimistic scenario, we show that the injection distance is the fundamental parameter behind the error correction capability of a code; that is, we can guarantee the correction of $t$ packet errors if and only if $t$ is less than half the minimum injection distance of the code. While this approach may seem too pessimistic, we provide a class of examples where a minimum-injection-distance decoder is able to correct more errors than a minimum-subspace-distance decoder. Moreover, the two approaches coincide when a constant-dimension code is used.

In order to give a unified treatment of both coherent and noncoherent network coding, we first develop a general approach to error correction over (certain) adversarial channels. Our treatment generalizes the more abstract portions of classical coding theory and has the main feature of mathematical simplicity. The essence of our approach is to use a single function—called a discrepancy function—to fully describe an adversarial channel. We then propose a distance-like function that is easy to handle analytically and (in many cases, including all the channels considered in this paper) precisely describes the error correction capability of a code. The motivation for this approach is that, once such a distance function is found, one can virtually forget about the channel model and fully concentrate on the combinatorial problem of finding the largest code with a specified minimum distance (just like in classical coding theory). Interestingly, our approach is also useful to characterize the error detection capability of a code.

The remainder of the paper is organized as follows. Section II establishes our notation and reviews some basic facts about matrices and rank-metric codes. Section III presents our general approach to adversarial error correction, which is subsequently specialized to the coherent and noncoherent network coding models. Section IV describes our main results for coherent network coding and discusses their relationship with the work of Yeung et al. [2, 3, 4]. Section V describes our main results for noncoherent network coding and discusses their relationship with the work of Kötter and Kschischang [7]. Section VI presents our conclusions.

II Preliminaries

II-A Basic Notation

Define $[x]^+ \triangleq \max\{x, 0\}$. The following notation is used many times throughout the paper. Let $\mathcal{X}$ be a set, and let $\mathcal{C} \subseteq \mathcal{X}$. Whenever a function $d\colon \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is defined, denote

$$d(\mathcal{C}) \triangleq \min_{\substack{x, x' \in \mathcal{C}:\ x \ne x'}} d(x, x').$$

If $d(x,x')$ is called a “distance” between $x$ and $x'$, then $d(\mathcal{C})$ is called the minimum “distance” of $\mathcal{C}$.

II-B Matrices and Subspaces

Let $\mathbb{F}_q$ denote the finite field with $q$ elements. We use $\mathbb{F}_q^{n \times m}$ to denote the set of all $n \times m$ matrices over $\mathbb{F}_q$ and use $\mathcal{P}(\mathbb{F}_q^m)$ to denote the set of all subspaces of the vector space $\mathbb{F}_q^m$.

Let $\dim \mathcal{V}$ denote the dimension of a vector space $\mathcal{V}$, let $\langle X \rangle$ denote the row space of a matrix $X$, and let $\mathrm{wt}(X)$ denote the number of nonzero rows of $X$. Recall that $\mathrm{rank}\,X = \dim \langle X \rangle$.

Let $\mathcal{U}$ and $\mathcal{V}$ be subspaces of some fixed vector space. Recall that the sum $\mathcal{U} + \mathcal{V}$ is the smallest vector space that contains both $\mathcal{U}$ and $\mathcal{V}$, while the intersection $\mathcal{U} \cap \mathcal{V}$ is the largest vector space that is contained in both $\mathcal{U}$ and $\mathcal{V}$. Recall also that

$$\dim(\mathcal{U} + \mathcal{V}) = \dim \mathcal{U} + \dim \mathcal{V} - \dim(\mathcal{U} \cap \mathcal{V}). \qquad (2)$$

The rank of a matrix $X \in \mathbb{F}_q^{n \times m}$ is the smallest $r$ for which there exist matrices $P \in \mathbb{F}_q^{n \times r}$ and $Q \in \mathbb{F}_q^{r \times m}$ such that $X = PQ$. Note that both matrices obtained in the decomposition are full-rank; accordingly, such a decomposition is called a full-rank decomposition [13]. In this case, note that, by partitioning $P = [P_1 \ \ P_2]$ and $Q = \begin{bmatrix} Q_1 \\ Q_2 \end{bmatrix}$, the matrix $X$ can be further expanded as

$$X = PQ = P_1 Q_1 + P_2 Q_2,$$

where $P_1 \in \mathbb{F}_q^{n \times r_1}$, $P_2 \in \mathbb{F}_q^{n \times r_2}$, $Q_1 \in \mathbb{F}_q^{r_1 \times m}$, $Q_2 \in \mathbb{F}_q^{r_2 \times m}$, $r_1 + r_2 = r$, and $\mathrm{rank}(P_1 Q_1) = r_1$, $\mathrm{rank}(P_2 Q_2) = r_2$.

Another useful property of the rank function is that, for $X \in \mathbb{F}_q^{n \times m}$ and $A \in \mathbb{F}_q^{N \times n}$, we have [13]

$$\mathrm{rank}(AX) \ge \mathrm{rank}\,A + \mathrm{rank}\,X - n. \qquad (3)$$
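The full-rank decomposition is entirely constructive. The sketch below (our own illustration over $\mathbb{F}_2$; the helper names gf2_rref and full_rank_decomposition are ours) obtains $Q$ as the nonzero rows of the reduced row echelon form of $X$ and reads $P$ off the pivot columns of $X$:

```python
# Full-rank decomposition X = PQ over GF(2).

def gf2_rref(M):
    """Gauss-Jordan elimination mod 2; returns (nonzero rows, pivot columns)."""
    M = [row[:] for row in M]
    pivots, r = [], 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c]), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M[:r], pivots

def full_rank_decomposition(X):
    Q, pivots = gf2_rref(X)                       # Q: r x m, full row rank
    P = [[row[c] for c in pivots] for row in X]   # P: n x r, full column rank
    return P, Q

X = [[1, 0, 1, 1],
     [0, 1, 1, 0],
     [1, 1, 0, 1]]      # third row = sum of the first two, so rank X = 2
P, Q = full_rank_decomposition(X)
PQ = [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*Q)] for row in P]
assert PQ == X and len(Q) == 2
```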

II-C Rank-Metric Codes

Let $X, Y \in \mathbb{F}_q^{n \times m}$ be matrices. The rank distance between $X$ and $Y$ is defined as

$$d_R(X, Y) \triangleq \mathrm{rank}(Y - X).$$

It is well known that the rank distance is indeed a metric; in particular, it satisfies the triangle inequality [14, 13].

A rank-metric code is a matrix code $\mathcal{C} \subseteq \mathbb{F}_q^{n \times m}$ used in the context of the rank metric. The Singleton bound for the rank metric [14] (see also [8]) states that every rank-metric code $\mathcal{C} \subseteq \mathbb{F}_q^{n \times m}$ with minimum rank distance $d = d_R(\mathcal{C})$ must satisfy

$$|\mathcal{C}| \le q^{\max\{n, m\}(\min\{n, m\} - d + 1)}. \qquad (4)$$

Codes that achieve this bound are called maximum-rank-distance (MRD) codes, and they are known to exist for all choices of parameters $q$, $n$, $m$ and $d \le \min\{n, m\}$ [14].
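For concreteness, consider an instance with our own illustrative numbers: $q = 2$, $n = 4$, $m = 6$ (length-6 binary packets) and $d = 3$. Then (4) gives

$$|\mathcal{C}| \le 2^{6\,(4 - 3 + 1)} = 2^{12},$$

and this cardinality is attained by an MRD code with these parameters, e.g., a Gabidulin code [14].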

III A General Approach to Adversarial Error Correction

This section presents a general approach to error correction over adversarial channels. This approach is specialized to coherent and noncoherent network coding in Sections IV and V, respectively.

III-A Adversarial Channels

An adversarial channel is specified by a finite input alphabet $\mathcal{X}$, a finite output alphabet $\mathcal{Y}$, and a collection of fan-out sets $\mathcal{Y}(x) \subseteq \mathcal{Y}$ for all $x \in \mathcal{X}$. For each input $x$, the output $y$ is constrained to be in $\mathcal{Y}(x)$ but is otherwise arbitrarily chosen by an adversary. The constraint on the output is important: otherwise, the adversary could prevent communication simply by mapping all inputs to the same output. No further restrictions are imposed on the adversary; in particular, the adversary is potentially omniscient and has unlimited computational power.

A code for an adversarial channel is a subset $\mathcal{C} \subseteq \mathcal{X}$. (There is no loss of generality in considering a single channel use, since the channel may be taken to correspond to multiple uses of a simpler channel.) We say that a code is unambiguous for a channel if the input codeword can always be uniquely determined from the channel output. More precisely, a code $\mathcal{C}$ is unambiguous if the sets $\mathcal{Y}(x)$, $x \in \mathcal{C}$, are pairwise disjoint. The importance of this concept lies in the fact that, if the code is not unambiguous, then there exist codewords that are indistinguishable at the decoder: if $y \in \mathcal{Y}(x) \cap \mathcal{Y}(x')$ for distinct $x, x' \in \mathcal{C}$, then the adversary can (and will) exploit this ambiguity by mapping both $x$ and $x'$ to the same output $y$.

A decoder for a code $\mathcal{C}$ is any function $\hat{x}\colon \mathcal{Y} \to \mathcal{C} \cup \{f\}$, where $f \notin \mathcal{C}$ denotes a decoding failure (detected error). When $x \in \mathcal{C}$ is transmitted and $y \in \mathcal{Y}(x)$ is received, a decoder is said to be successful if $\hat{x}(y) = x$. We say that a decoder is infallible if it is successful for all $y \in \mathcal{Y}(x)$ and all $x \in \mathcal{C}$. Note that the existence of an infallible decoder for $\mathcal{C}$ implies that $\mathcal{C}$ is unambiguous. Conversely, given any unambiguous code $\mathcal{C}$, one can always find (by definition) a decoder that is infallible. One example is the exhaustive decoder

$$\hat{x}(y) = \begin{cases} x & \text{if } y \in \mathcal{Y}(x) \text{ for exactly one } x \in \mathcal{C} \\ f & \text{otherwise.} \end{cases}$$

In other words, an exhaustive decoder returns $x$ if $x$ is the unique codeword that could possibly have been transmitted when $y$ is received, and returns a failure otherwise.
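As a small illustration (our own toy channel, with hypothetical helper names), the following sketch implements the exhaustive decoder for a channel in which an adversary may flip at most one bit of a 3-bit input:

```python
# Exhaustive decoder: return x if it is the unique codeword whose fan-out set
# contains y; otherwise declare a failure 'f'.

def exhaustive_decode(y, code, fanout):
    candidates = [x for x in code if y in fanout(x)]
    return candidates[0] if len(candidates) == 1 else 'f'

def fanout(x):
    """Fan-out set of x: all outputs reachable by flipping at most one bit."""
    outputs = {x}
    for i in range(len(x)):
        outputs.add(x[:i] + ('1' if x[i] == '0' else '0') + x[i+1:])
    return outputs

code = ['000', '111']     # unambiguous here: the two fan-out sets are disjoint
print(exhaustive_decode('010', code, fanout))   # -> '000'
print(exhaustive_decode('011', code, fanout))   # -> '111'
```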

Ideally, one would like to find a large (or largest) code that is unambiguous for a given adversarial channel, together with a decoder that is infallible (and computationally efficient to implement).

III-B Discrepancy

It is useful to consider adversarial channels parameterized by an adversarial effort $t$. Assume that the fan-out sets are of the form

$$\mathcal{Y}_t(x) = \{ y \in \mathcal{Y} : \Delta(x, y) \le t \} \qquad (5)$$

for some function $\Delta\colon \mathcal{X} \times \mathcal{Y} \to \mathbb{N}$. The value $\Delta(x,y)$, which we call the discrepancy between $x$ and $y$, represents the minimum effort needed for an adversary to transform an input $x$ into an output $y$. The value of $t$ represents the maximum adversarial effort (maximum discrepancy) allowed in the channel.

In principle, there is no loss of generality in assuming (5) since, by properly defining $\Delta$, one can always express any $\mathcal{Y}(x)$ in this form. For instance, one could set $\Delta(x,y) = 0$ if $y \in \mathcal{Y}(x)$, and $\Delta(x,y) = 1$ otherwise, and take $t = 0$. However, such a definition would be of no practical value, since $\Delta$ would be merely an indicator function. Thus, an effective limitation of our model is that it requires channels that are naturally characterized by some discrepancy function. In particular, one should be able to interpret the maximum discrepancy $t$ as the level of “degradedness” of the channel.

On the other hand, the assumption that $\Delta$ takes values in $\mathbb{N}$ imposes effectively no constraint. Since $\mathcal{X} \times \mathcal{Y}$ is finite, given any “naturally defined” $\Delta'\colon \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$, one can always shift, scale and round the image of $\Delta'$ in order to produce some $\Delta\colon \mathcal{X} \times \mathcal{Y} \to \mathbb{N}$ that induces the same fan-out sets as $\Delta'$ for all $t$.

Example 1

Let us use the above notation to define a $t$-error channel, i.e., a vector channel that introduces at most $t$ symbol errors (arbitrarily chosen by an adversary). Assume that the channel input and output alphabets are given by $\mathcal{X} = \mathcal{Y} = \mathbb{F}_q^n$. It is easy to see that the channel can be characterized by a discrepancy function that counts the number of components in which an input vector $x$ and an output vector $y$ differ. More precisely, we have $\Delta(x,y) = d_H(x,y)$, where $d_H(\cdot,\cdot)$ denotes the Hamming distance function.

A main feature of our proposed discrepancy characterization is that it allows us to study a whole family of channels (with various levels of degradedness) under the same framework. For instance, we can use a single decoder for all channels in the same family. Define the minimum-discrepancy decoder $\hat{x}\colon \mathcal{Y} \to \mathcal{C}$ given by

$$\hat{x}(y) = \operatorname*{argmin}_{x \in \mathcal{C}}\, \Delta(x, y) \qquad (6)$$

where any ties in (6) are assumed to be broken arbitrarily. It is easy to see that a minimum-discrepancy decoder is infallible provided that the code is unambiguous. Thus, we can safely restrict attention to a minimum-discrepancy decoder, regardless of the maximum discrepancy $t$ in the channel.
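As a minimal sketch (toy code and inputs are our own), here is the decoder (6) for the $t$-error channel of Example 1, using the Hamming distance as the discrepancy:

```python
# Minimum-discrepancy decoding per (6), with Delta = Hamming distance.

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def min_discrepancy_decode(y, code, discrepancy=hamming):
    return min(code, key=lambda x: discrepancy(x, y))   # ties broken arbitrarily

code = ['00000', '11111']
print(min_discrepancy_decode('00101', code))   # -> '00000' (discrepancy 2 vs 3)
```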

III-C Correction Capability

Given a fixed family of channels—specified by $\mathcal{X}$, $\mathcal{Y}$ and $\Delta$, and parameterized by a maximum discrepancy $t$—we wish to identify the largest (worst) channel parameter $t$ for which we can guarantee successful decoding. We say that a code is $t$-discrepancy-correcting if it is unambiguous for the channel with maximum discrepancy $t$. The discrepancy-correction capability of a code $\mathcal{C}$ is the largest $t$ for which $\mathcal{C}$ is $t$-discrepancy-correcting.

We start by giving a general characterization of the discrepancy-correction capability. Let the function $\tau\colon \mathcal{X} \times \mathcal{X} \to \mathbb{Z}$ be given by

$$\tau(x, x') \triangleq \min_{y \in \mathcal{Y}}\, \max\{\Delta(x,y),\, \Delta(x',y)\} - 1. \qquad (7)$$

We have the following result.

Proposition 1

The discrepancy-correction capability of a code $\mathcal{C}$ is given exactly by $\tau(\mathcal{C})$. In other words, $\mathcal{C}$ is $t$-discrepancy-correcting if and only if $t \le \tau(\mathcal{C})$.

{proof}

Suppose that the code is not $t$-discrepancy-correcting, i.e., that there exist some distinct $x, x' \in \mathcal{C}$ and some $y \in \mathcal{Y}$ such that $\Delta(x,y) \le t$ and $\Delta(x',y) \le t$. Then $\tau(\mathcal{C}) \le \tau(x,x') \le \max\{\Delta(x,y), \Delta(x',y)\} - 1 \le t - 1 < t$. In other words, $t \le \tau(\mathcal{C})$ implies that the code is $t$-discrepancy-correcting.

Conversely, suppose that $t > \tau(\mathcal{C})$, i.e., $t \ge \tau(\mathcal{C}) + 1$. Then there exist some distinct $x, x' \in \mathcal{C}$ such that $\min_y \max\{\Delta(x,y), \Delta(x',y)\} = \tau(\mathcal{C}) + 1 \le t$. This in turn implies that there exists some $y$ such that $\max\{\Delta(x,y), \Delta(x',y)\} \le t$. Since this implies that both $\Delta(x,y) \le t$ and $\Delta(x',y) \le t$, it follows that the code is not $t$-discrepancy-correcting.

At this point, it is tempting to define a “distance-like” function given by $d(x,x') \triangleq 2\left(\tau(x,x') + 1\right)$, since this would enable us to immediately obtain results analogous to those of classical coding theory (such as the error correction capability of a code being half of its minimum distance). This approach has indeed been taken in previous works, such as [12]. Note, however, that the terminology “distance” suggests a geometrical interpretation, which is not immediately clear from (7). Moreover, the function (7) is not necessarily mathematically tractable. It is the objective of this section to propose a “distance” function that is motivated by geometrical considerations and is easier to handle analytically, yet is useful for characterizing the correction capability of a code. In particular, we shall be able to obtain the same results as [12] with much greater mathematical simplicity—which will later turn out to be instrumental for code design.

For $x, x' \in \mathcal{X}$, define the $\Delta$-distance between $x$ and $x'$ as

$$\delta(x, x') \triangleq \min_{y \in \mathcal{Y}}\, \left[ \Delta(x,y) + \Delta(x',y) \right]. \qquad (8)$$

The following interpretation holds. Consider the complete bipartite graph with vertex sets $\mathcal{X}$ and $\mathcal{Y}$, and assume that each edge $(x, y) \in \mathcal{X} \times \mathcal{Y}$ is labeled by a “length” $\Delta(x,y)$. Then $\delta(x,x')$ is the length of the shortest path between vertices $x, x' \in \mathcal{X}$. Roughly speaking, $\delta(x,x')$ gives the minimum total effort that an adversary would have to spend (in independent channel realizations) in order to make $x$ and $x'$ both plausible explanations for some received output.

Example 2

Let us compute the $\Delta$-distance for the channel of Example 1. We have $\delta(x,x') = \min_y \left[ d_H(x,y) + d_H(x',y) \right] \ge d_H(x,x')$, since the Hamming distance satisfies the triangle inequality. This bound is achievable by taking, for instance, $y = x$. Thus, $\delta(x,x') = d_H(x,x')$, i.e., the $\Delta$-distance for this channel is given precisely by the Hamming distance.
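The conclusion of Example 2 can also be confirmed numerically; the following brute-force sketch (parameters and inputs ours) evaluates the minimization in (8) over all outputs $y$:

```python
# Brute-force check that the Delta-distance (8) induced by the Hamming
# discrepancy equals the Hamming distance itself.
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def delta(x, xp, alphabet='01', n=4):
    return min(hamming(x, y) + hamming(xp, y)
               for y in (''.join(s) for s in product(alphabet, repeat=n)))

x, xp = '0101', '1100'
assert delta(x, xp) == hamming(x, xp) == 2
```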

The following result justifies our definition of the $\Delta$-distance.

Proposition 2

For any code $\mathcal{C} \subseteq \mathcal{X}$, $\tau(\mathcal{C}) \ge \lfloor (\delta(\mathcal{C}) - 1)/2 \rfloor$.

{proof}

This follows from the fact that $\max\{\Delta(x,y),\, \Delta(x',y)\} \ge \left(\Delta(x,y) + \Delta(x',y)\right)/2 \ge \delta(x,x')/2$ for all distinct $x, x' \in \mathcal{C}$ and all $y \in \mathcal{Y}$.

Proposition 2 shows that the $\Delta$-distance gives a lower bound on the correction capability of a code—therefore providing a correction guarantee. The converse result, however, is not necessarily true in general. Thus, up to this point, the proposed function is only partially useful: it is conceivable that the $\Delta$-distance might be too conservative and give a guaranteed correction capability that is lower than the actual one. Nevertheless, it is easier to deal with the addition in (8) than with the maximization in (7).

A special case where the converse is true is for a family of channels whose discrepancy function satisfies the following condition:

Definition 1

A discrepancy function $\Delta$ is said to be normal if, for all $x, x' \in \mathcal{X}$ and all $0 \le i \le \delta(x,x')$, there exists some $y \in \mathcal{Y}$ such that $\Delta(x,y) \le i$ and $\Delta(x',y) \le \delta(x,x') - i$.

Theorem 3

Suppose that $\Delta$ is normal. For every code $\mathcal{C} \subseteq \mathcal{X}$, we have $\tau(\mathcal{C}) = \lfloor (\delta(\mathcal{C}) - 1)/2 \rfloor$.

{proof}

We just need to show that $\tau(\mathcal{C}) \le \lfloor (\delta(\mathcal{C}) - 1)/2 \rfloor$. Take any distinct $x, x' \in \mathcal{C}$ such that $\delta(x,x') = \delta(\mathcal{C})$. Since $\Delta$ is normal, there exists some $y$ such that $\Delta(x,y) \le \lceil \delta(\mathcal{C})/2 \rceil$ and $\Delta(x',y) \le \delta(\mathcal{C}) - \lceil \delta(\mathcal{C})/2 \rceil = \lfloor \delta(\mathcal{C})/2 \rfloor$. Thus, $\tau(\mathcal{C}) \le \max\{\Delta(x,y), \Delta(x',y)\} - 1 \le \lceil \delta(\mathcal{C})/2 \rceil - 1 = \lfloor (\delta(\mathcal{C}) - 1)/2 \rfloor$, and therefore $\tau(\mathcal{C}) = \lfloor (\delta(\mathcal{C}) - 1)/2 \rfloor$.

Theorem 3 shows that, for certain families of channels, our proposed $\Delta$-distance achieves the goal of this section: it is a (seemingly) tractable function that precisely describes the correction capability of a code. In particular, the basic result of classical coding theory—that the Hamming distance precisely describes the error correction capability of a code—follows from the fact that the Hamming distance (as a discrepancy function) is normal. As we shall see, much of our effort in the next sections reduces to showing that a specified discrepancy function is normal.

Note that, for normal discrepancy functions, we actually have $\min_y \max\{\Delta(x,y), \Delta(x',y)\} = \lceil \delta(x,x')/2 \rceil$, so Theorem 3 may also be regarded as providing an alternative (and more tractable) expression for $\tau(\mathcal{C})$.

Example 3

To give a nontrivial example, let us consider a binary vector channel that introduces at most $t$ erasures (arbitrarily chosen by an adversary). The input alphabet is given by $\mathcal{X} = \{0,1\}^n$, while the output alphabet is given by $\mathcal{Y} = \{0, 1, \epsilon\}^n$, where $\epsilon$ denotes an erasure. We may define $\Delta(x,y) = \sum_{i=1}^{n} \Delta(x_i, y_i)$, where

$$\Delta(x_i, y_i) = \begin{cases} 0 & \text{if } y_i = x_i \\ 1 & \text{if } y_i = \epsilon \\ \infty & \text{otherwise.} \end{cases}$$

The fan-out sets are then given by $\mathcal{Y}_t(x) = \{ y \in \mathcal{Y} : \Delta(x,y) \le t \}$. In order to compute $\delta(x,x')$, observe the minimization in (8). It is easy to see that we should choose $y_i = x_i$ when $x_i = x_i'$, and $y_i = \epsilon$ when $x_i \ne x_i'$. It follows that $\delta(x,x') = 2\, d_H(x,x')$. Note that $\Delta$ is normal. It follows from Theorem 3 that a code $\mathcal{C}$ can correct all the erasures introduced by the channel if and only if $t \le d_H(\mathcal{C}) - 1$. This result precisely matches the well-known result of classical coding theory.
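The same computation can be checked by brute force. In the sketch below (our own), the value float('inf') stands in for the “otherwise” case, so each erased position costs 1 and all unerased positions must agree with the input:

```python
# Example 3 numerically: delta(x, x') = 2 * d_H(x, x') for the erasure channel.
from itertools import product

def disc(x, y):          # x over {0,1}, y over {0,1,'e'} ('e' = erasure)
    if any(yi != 'e' and yi != xi for xi, yi in zip(x, y)):
        return float('inf')
    return sum(yi == 'e' for yi in y)

def delta(x, xp, n=3):
    return min(disc(x, y) + disc(xp, y) for y in product('01e', repeat=n))

def d_H(x, xp):
    return sum(a != b for a, b in zip(x, xp))

x, xp = '010', '001'
assert delta(x, xp) == 2 * d_H(x, xp) == 4
```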

It is worth clarifying that, while we call $\delta$ a “distance,” this function may not necessarily be a metric. While symmetry and non-negativity follow from the definition, a $\Delta$-distance may not always satisfy “$\delta(x,x') = 0 \iff x = x'$” or the triangle inequality. Nevertheless, we keep the terminology for convenience.

Although this is not our main interest in this paper, it is worth pointing out that the framework of this section is also useful for obtaining results on error detection. Namely, the $\Delta$-distance gives, in general, a lower bound on the discrepancy detection capability of a code under a bounded-discrepancy-correcting decoder; when the discrepancy function is normal, the $\Delta$-distance precisely characterizes this detection capability (similarly as in classical coding theory). For more details on this topic, see Appendix A.

IV Coherent Network Coding

IV-A A Worst-Case Model and the Rank Metric

The basic channel model for coherent network coding with adversarial errors is a matrix channel with input $X \in \mathbb{F}_q^{n \times m}$, output $Y \in \mathbb{F}_q^{N \times m}$, and channel law given by (1), where $A \in \mathbb{F}_q^{N \times n}$ is fixed and known to the receiver, and $Z \in \mathbb{F}_q^{t \times m}$ is arbitrarily chosen by an adversary. Here, we make the following additional assumptions:

  • The adversary has unlimited computational power and is omniscient; in particular, the adversary knows both $X$ and $A$;

  • The matrix $D \in \mathbb{F}_q^{N \times t}$ is arbitrarily chosen by the adversary.

We also assume that $t < \mathrm{rank}\,A$ (more precisely, we should assume $t < \max_{X \in \mathcal{C}} \mathrm{rank}\,AX$); otherwise, the adversary may always choose $DZ = -AX$, leading to a trivial communications scenario.

The first assumption above allows us to use the approach of Section III. The second assumption may seem somewhat “pessimistic,” but it has the analytical advantage of eliminating from the problem any further dependence on the network code. (Recall that, in principle, $D$ would be determined by the network code and by the choice of links in error.)

The power of the approach of Section III lies in the fact that the channel model defined above can be completely described by the following discrepancy function:

$$\Delta_A(X, Y) \triangleq \min_{\substack{r \in \mathbb{N},\ D \in \mathbb{F}_q^{N \times r},\ Z \in \mathbb{F}_q^{r \times m}:\\ Y = AX + DZ}} r. \qquad (9)$$

The discrepancy $\Delta_A(X,Y)$ represents the minimum number of error packets that the adversary needs to inject in order to transform an input $X$ into an output $Y$, given that the transfer matrix is $A$. The subscript in $\Delta_A$ is to emphasize the dependence on $A$. For this discrepancy function, the minimum-discrepancy decoder becomes

$$\hat{X} = \operatorname*{argmin}_{X \in \mathcal{C}}\, \Delta_A(X, Y). \qquad (10)$$

Similarly, the $\Delta$-distance induced by $\Delta_A$ is given by

$$\delta_A(X, X') \triangleq \min_{Y \in \mathbb{F}_q^{N \times m}}\, \left[ \Delta_A(X,Y) + \Delta_A(X',Y) \right] \qquad (11)$$

for $X, X' \in \mathbb{F}_q^{n \times m}$.

We now wish to find simpler expressions for $\Delta_A$ and $\delta_A$, and to show that $\Delta_A$ is normal.

Lemma 4

$$\Delta_A(X, Y) = \mathrm{rank}(Y - AX). \qquad (12)$$

{proof}

Consider $\Delta_A(X,Y)$ as given by (9). For any feasible triple $(r, D, Z)$, we have $r \ge \mathrm{rank}\,Z \ge \mathrm{rank}\,DZ = \mathrm{rank}(Y - AX)$. This bound is achievable by setting $r = \mathrm{rank}(Y - AX)$ and letting $DZ$ be a full-rank decomposition of $Y - AX$.

Lemma 5

$$\delta_A(X, X') = \mathrm{rank}\,A(X' - X).$$

{proof}

From (11) and Lemma 4, we have $\delta_A(X,X') = \min_Y \left[ \mathrm{rank}(Y - AX) + \mathrm{rank}(Y - AX') \right]$. Since the rank metric satisfies the triangle inequality, we have $\mathrm{rank}(Y - AX) + \mathrm{rank}(Y - AX') \ge \mathrm{rank}\,A(X' - X)$. This lower bound can be achieved by choosing, e.g., $Y = AX$.

Note that $\delta_A(\cdot,\cdot)$ is a metric if and only if $A$ has full column rank—in which case it is precisely the rank metric. (If $\mathrm{rank}\,A < n$, then there exist distinct $X, X'$ such that $\delta_A(X,X') = 0$.)
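Lemmas 4 and 5 can be verified by brute force on a tiny instance over $\mathbb{F}_2$, where subtraction coincides with addition (all matrices below are our own toy choices):

```python
# Check Delta_A(X, Y) = rank(Y - AX) and delta_A(X, X') = rank(A(X - X')).
from itertools import product

def rank2(rows):
    """Rank over GF(2) via Gaussian elimination."""
    rows, r = [list(x) for x in rows], 0
    for c in range(len(rows[0])):
        p = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def mul(A, B):
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def sub(A, B):           # over GF(2), subtraction = entrywise XOR
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

N, n, m = 2, 2, 2
A  = [[1, 0], [1, 1]]
X  = [[1, 0], [0, 1]]
Xp = [[1, 1], [0, 1]]

def Delta_A(X, Y):       # discrepancy, per Lemma 4
    return rank2(sub(Y, mul(A, X)))

# delta_A(X, X') = min over all Y of Delta_A(X, Y) + Delta_A(X', Y), per (11)
delta_A = min(Delta_A(X, Y) + Delta_A(Xp, Y)
              for bits in product([0, 1], repeat=N * m)
              for Y in [[list(bits[i*m:(i+1)*m]) for i in range(N)]])
assert delta_A == rank2(mul(A, sub(X, Xp))) == 1    # Lemma 5
```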

Theorem 6

The discrepancy function $\Delta_A$ is normal.

{proof}

Let $X, X' \in \mathbb{F}_q^{n \times m}$ and let $0 \le i \le \delta_A(X,X')$. Then $\mathrm{rank}\,A(X' - X) = \delta_A(X,X')$. By performing a full-rank decomposition of $A(X' - X)$, we can always find two matrices $W_1$ and $W_2$ such that $A(X' - X) = W_1 + W_2$, $\mathrm{rank}\,W_1 = i$ and $\mathrm{rank}\,W_2 = \delta_A(X,X') - i$. Taking $Y = AX + W_1$, we have that $\Delta_A(X,Y) = \mathrm{rank}\,W_1 = i$ and $\Delta_A(X',Y) = \mathrm{rank}\,W_2 = \delta_A(X,X') - i$.

Note that, under the discrepancy $\Delta_A$, a $t$-discrepancy-correcting code is a code that can correct any $t$ packet errors injected by the adversary. Using Theorem 6 and Theorem 3, we have the following result.

Theorem 7

A code $\mathcal{C} \subseteq \mathbb{F}_q^{n \times m}$ is guaranteed to correct any $t$ packet errors if and only if $2t < \delta_A(\mathcal{C})$.

Theorem 7 shows that $\delta_A(\mathcal{C})$ is indeed a fundamental parameter characterizing the error correction capability of a code in our model. Note that, if the condition of Theorem 7 is violated, then there exists at least one codeword for which the adversary can certainly induce a decoding failure.

Note that the error correction capability of a code is dependent on the network code through the matrix $A$. Let $\rho = n - \mathrm{rank}\,A$ be the column-rank deficiency of $A$. Since $\mathrm{rank}\,A = n - \rho$, it follows from (3) that

$$\delta_A(X, X') = \mathrm{rank}\,A(X' - X) \ge \mathrm{rank}(X' - X) - \rho = d_R(X, X') - \rho$$

and

$$\delta_A(\mathcal{C}) \ge d_R(\mathcal{C}) - \rho. \qquad (13)$$

Thus, the error correction capability of a code is strongly tied to its minimum rank distance; in particular, $\delta_A(\mathcal{C}) = d_R(\mathcal{C})$ if $\rho = 0$. While the lower bound (13) may not be tight in general, we should expect it to be tight when $m$ is sufficiently large. This is indeed the case for MRD codes, as discussed in Section IV-C. Thus, a nonzero rank deficiency will typically reduce the error correction capability of a code.

Taking into account the worst case, we can use Theorem 7 to give a correction guarantee in terms of the minimum rank distance of the code.

Proposition 8

A code $\mathcal{C} \subseteq \mathbb{F}_q^{n \times m}$ is guaranteed to correct $t$ packet errors, under rank deficiency $\rho$, if $2t < d_R(\mathcal{C}) - \rho$.

Note that the guarantee of Proposition 8 depends only on $d_R(\mathcal{C})$ and $\rho$; in particular, it is independent of the network code or the specific transfer matrix $A$.
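For instance (numbers ours): if the outer code has $d_R(\mathcal{C}) = 5$ and the network code leaves a column-rank deficiency of $\rho = 2$, then Proposition 8 guarantees the correction of any $t$ packet errors with

$$2t < d_R(\mathcal{C}) - \rho = 5 - 2 = 3,$$

i.e., of any single injected error packet ($t \le 1$), regardless of the specific transfer matrix $A$.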

IV-B Reinterpreting the Model of Yeung et al.

In this subsection, we investigate the model for coherent network coding studied by Yeung et al. in [1, 2, 3, 4], which is similar to the one considered in the previous subsection. The model is that of a matrix channel with input $X \in \mathbb{F}_q^{n \times m}$, output $Y \in \mathbb{F}_q^{N \times m}$, and channel law given by

$$Y = AX + FZ \qquad (14)$$

where $A \in \mathbb{F}_q^{N \times n}$ and $F \in \mathbb{F}_q^{N \times |\mathcal{E}|}$ are fixed and known to the receiver, and $Z \in \mathbb{F}_q^{|\mathcal{E}| \times m}$ is arbitrarily chosen by an adversary provided $\mathrm{wt}(Z) \le t$. (Recall that $|\mathcal{E}|$ is the number of edges in the network.) In addition, the adversary has unlimited computational power and is omniscient, knowing, in particular, $X$, $A$ and $F$.

We now show that some of the concepts defined in [4], such as “network Hamming distance,” can be reinterpreted in the framework of Section III. As a consequence, we can easily recover the results of [4] on error correction and detection guarantees.

First, note that the current model can be completely described by the following discrepancy function:

$$\Delta_{A,F}(X, Y) \triangleq \min_{\substack{Z \in \mathbb{F}_q^{|\mathcal{E}| \times m}:\\ Y = AX + FZ}} \mathrm{wt}(Z). \qquad (15)$$

The $\Delta$-distance induced by this discrepancy function is given by

$$\delta_{A,F}(X, X') = \min_{Y}\, \left[ \Delta_{A,F}(X,Y) + \Delta_{A,F}(X',Y) \right] = \min_{\substack{Z:\ FZ = A(X' - X)}} \mathrm{wt}(Z),$$

where the last equality follows from the fact that $\mathrm{wt}(Z') + \mathrm{wt}(Z'') \ge \mathrm{wt}(Z' - Z'')$, achievable if $Y = AX$.

Let us now examine some of the concepts defined in [4]. For a specific sink node, the decoder proposed in [4, Eq. (2)] is a minimum-distance decoder whose objective function is built from several auxiliary quantities defined in [4]. Substituting all of these definitions into the objective function, one finds that it reduces precisely to $\Delta_{A,F}(x, y)$. Thus, the decoder in [4] is precisely a minimum-discrepancy decoder.

In [4], the “network Hamming distance” between two messages $x$ and $x'$ is defined through a similar chain of definitions. Again, simply substituting the corresponding definitions yields exactly $\delta_{A,F}(x, x')$. Thus, the “network Hamming distance” is precisely the $\Delta$-distance induced by the discrepancy function $\Delta_{A,F}$. Finally, the “unicast minimum distance” of a network code with message set $\mathcal{C}$ [4] is precisely $\delta_{A,F}(\mathcal{C})$.

Let us return to the problem of characterizing the correction capability of a code.

Proposition 9

The discrepancy function $\Delta_{A,F}$ is normal.

{proof}

Let $X, X' \in \mathbb{F}_q^{n \times m}$ and let $0 \le i \le \delta_{A,F}(X,X')$. Let $Z$ be a solution to the minimization $\min_{Z:\, FZ = A(X'-X)} \mathrm{wt}(Z)$, so that $\mathrm{wt}(Z) = \delta_{A,F}(X,X')$. By partitioning the nonzero rows of $Z$, we can always find two matrices $Z_1$ and $Z_2$ such that $Z = Z_1 + Z_2$, $\mathrm{wt}(Z_1) = i$ and $\mathrm{wt}(Z_2) = \mathrm{wt}(Z) - i$. Taking $Y = AX + FZ_1$, we have that $\Delta_{A,F}(X,Y) \le i$ and $\Delta_{A,F}(X',Y) \le \mathrm{wt}(Z) - i$. Since $\delta_{A,F}(X,X') \le \Delta_{A,F}(X,Y) + \Delta_{A,F}(X',Y)$, it follows that $\Delta_{A,F}(X,Y) = i$ and $\Delta_{A,F}(X',Y) = \delta_{A,F}(X,X') - i$.

It follows that a code $\mathcal{C}$ is guaranteed to correct any $t$ packet errors if and only if $2t < \delta_{A,F}(\mathcal{C})$. Thus, we recover Theorems 2 and 3 of [4] (for error detection, see Appendix A). The analogous results for the multicast case can be obtained in a straightforward manner.

We now wish to compare the parameters derived in this subsection with those of Section IV-A. From the descriptions of (1) and (14), it is intuitive that the model of this subsection should be equivalent to that of the previous subsection if the matrix $F$, rather than being fixed and known to the receiver, were arbitrarily and secretly chosen by the adversary. A formal proof of this fact is given in the following proposition.

Proposition 10

$$\min_{F}\, \Delta_{A,F}(X, Y) = \Delta_A(X, Y), \qquad \min_{F}\, \delta_{A,F}(X, X') = \delta_A(X, X'), \qquad \min_{F}\, \delta_{A,F}(\mathcal{C}) = \delta_A(\mathcal{C}),$$

where the minimizations are taken over all $F \in \mathbb{F}_q^{N \times |\mathcal{E}|}$.

{proof}

Consider the minimization

$$\min_{\substack{F,\, Z:\ Y = AX + FZ}} \mathrm{wt}(Z).$$

For any feasible pair $(F, Z)$, we have $\mathrm{wt}(Z) \ge \mathrm{rank}\,Z \ge \mathrm{rank}\,FZ = \mathrm{rank}(Y - AX)$. This lower bound can be achieved by taking

$$F = \begin{bmatrix} D & 0 \end{bmatrix} \quad\text{and}\quad Z = \begin{bmatrix} Z' \\ 0 \end{bmatrix},$$

where $DZ'$ is a full-rank decomposition of $Y - AX$. This proves the first statement. The second statement follows from the first by noticing that $\delta_A(X,X') = \min_Y \left[ \Delta_A(X,Y) + \Delta_A(X',Y) \right]$ and that, with $Y = AX$, a single choice of $F$ (the one constructed above for the pair $(X', Y)$) simultaneously achieves $\Delta_{A,F}(X,Y) = 0$ and $\Delta_{A,F}(X',Y) = \Delta_A(X',Y)$. The third statement is immediate.

Proposition 10 shows that the model of Section IV-A is indeed more pessimistic, as the adversary has the additional power of choosing the worst possible $F$. It follows that any code that is $t$-error-correcting for that model must also be $t$-error-correcting for the model of Yeung et al.

IV-C Optimality of MRD Codes

Let us now evaluate the performance of an MRD code under the models of the two previous subsections.

The Singleton bound of [2] (see also [9]) states that

$$|\mathcal{C}| \le Q^{n - 2t} \qquad (16)$$

where $Q$ is the size of the alphabet from which packets are drawn. (This alphabet is usually assumed to be a finite field but, for the Singleton bound of [2], it is sufficient to assume an abelian group, e.g., a vector space over $\mathbb{F}_q$.) Note that $Q = q^m$ in our setting, since each packet consists of $m$ symbols from $\mathbb{F}_q$. Using Proposition 10, we can also obtain, for the model of Section IV-A under rank deficiency $\rho$,

$$|\mathcal{C}| \le Q^{n - \rho - 2t}. \qquad (17)$$

On the other hand, the size of an MRD code with minimum rank distance $d_R(\mathcal{C}) = 2t + \rho + 1$, for $m \ge n$, is given by

$$|\mathcal{C}| = q^{m(n - d_R(\mathcal{C}) + 1)} \qquad (18)$$

$$\phantom{|\mathcal{C}|} = q^{m(n - \rho - 2t)} \qquad (19)$$

where (19) follows by substituting the minimum rank distance; by (13) and Theorem 7, such a code is guaranteed to correct any $t$ packet errors under rank deficiency $\rho$. Since $Q = q^m$, both (16) (when $\rho = 0$) and (17) are achieved in this case. Thus, we have the following result.

Theorem 11

When $m \ge n$, an MRD code with $d_R(\mathcal{C}) = 2t + \rho + 1$ achieves maximum cardinality with respect to both (16) and (17).

Theorem 11 shows that, if an alphabet of size $Q = q^m$ with $m \ge n$ is allowed (i.e., a packet size of at least $n \log_2 q$ bits), then MRD codes turn out to be optimal under both models of Sections IV-A and IV-B.

Remark: It is straightforward to extend the results of Section IV-A to the case of multiple heterogeneous receivers, where each receiver experiences a possibly different rank deficiency $\rho_i$. In this case, it can be shown that an MRD code with $m \ge n$ achieves the refined Singleton bound of [9].

Note that, due to (17), (18) and (19), it follows that $\delta_A(\mathcal{C}) = d_R(\mathcal{C}) - \rho$ for an MRD code with $m \ge n$. Thus, in this case, we can restate Theorem 7 in terms of the minimum rank distance of the code.

Theorem 12

An MRD code $\mathcal{C}$ with $m \ge n$ is guaranteed to correct $t$ packet errors, under rank deficiency $\rho$, if and only if $d_R(\mathcal{C}) \ge 2t + \rho + 1$.

Observe that Theorem 12 holds regardless of the specific transfer matrix $A$, depending only on its column-rank deficiency $\rho$.

The results of this section imply that, when designing a linear network code, we may focus solely on the objective of making the network code feasible, i.e., maximizing $\mathrm{rank}\,A$ for each destination. If an error correction guarantee is desired, then an outer code can be applied end-to-end without requiring any modifications to (or even knowledge of) the underlying network code. The design of the outer code is essentially trivial, as any MRD code can be used, with the only requirement that the number of $\mathbb{F}_q$-symbols per packet, $m$, must satisfy $m \ge n$.

Remark: Consider the decoding rule (10). The fact that (10) together with (12) is equivalent to [8, Eq. (20)] implies that the decoding problem can be solved by exactly the same rank-metric techniques proposed in [8]. In particular, for certain MRD codes with $m \ge n$ and minimum rank distance $d$, there exist efficient encoding and decoding algorithms requiring only a modest number of operations in $\mathbb{F}_q$ per codeword. For more details, see [15].

V Noncoherent Network Coding

V-A A Worst-Case Model and the Injection Metric

Our model for noncoherent network coding with adversarial errors differs from its coherent counterpart of Section IV-A only with respect to the transfer matrix $A$. Namely, the matrix $A$ is unknown to the receiver and is freely chosen by the adversary while respecting the constraint $\mathrm{rank}\,A \ge n - \rho$. The parameter $\rho$, the maximum column-rank deficiency of $A$, is a parameter of the system that is known to all. Note that, as discussed above for the matrix $D$, the assumption that $A$ is chosen by the adversary is what provides the conservative (worst-case) nature of the model. The constraint on the rank of $A$ is required for a meaningful coding problem; otherwise, the adversary could prevent communication by simply choosing $A = 0$.

As before, we assume a minimum-discrepancy decoder

$$\hat{X} = \operatorname*{argmin}_{X \in \mathcal{C}}\, \Delta_\rho(X, Y) \qquad (20)$$

with discrepancy function given by

$$\Delta_\rho(X, Y) \triangleq \min_{\substack{r \in \mathbb{N},\ A \in \mathbb{F}_q^{N \times n},\ D \in \mathbb{F}_q^{N \times r},\ Z \in \mathbb{F}_q^{r \times m}:\\ Y = AX + DZ,\ \mathrm{rank}\,A \ge n - \rho}} r. \qquad (21)$$

Again, $\Delta_\rho(X,Y)$ represents the minimum number of error packets needed to produce an output $Y$ given an input $X$ under the current adversarial model. The subscript is to emphasize that $\Delta_\rho$ is still a function of $\rho$.

The $\Delta$-distance induced by $\Delta_\rho$ is defined below. For $X, X' \in \mathbb{F}_q^{n \times m}$, let

$$\delta_\rho(X, X') \triangleq \min_{Y \in \mathbb{F}_q^{N \times m}}\, \left[ \Delta_\rho(X,Y) + \Delta_\rho(X',Y) \right]. \qquad (22)$$

We now prove that $\Delta_\rho$ is normal, so that $\delta_\rho$ characterizes the correction capability of a code.

First, observe that, using Lemma 4, we may rewrite $\Delta_\rho(X,Y)$ as

$$\Delta_\rho(X, Y) = \min_{\substack{A \in \mathbb{F}_q^{N \times n}:\ \mathrm{rank}\,A \ge n - \rho}} \mathrm{rank}(Y - AX). \qquad (23)$$

Also, note that

$$\delta_\rho(X, X') = \min_{\substack{A, A' \in \mathbb{F}_q^{N \times n}:\\ \mathrm{rank}\,A \ge n - \rho,\ \mathrm{rank}\,A' \ge n - \rho}} \mathrm{rank}(A'X' - AX) \qquad (24)$$

where the last equality follows from the fact that $\mathrm{rank}(Y - AX) + \mathrm{rank}(Y - A'X') \ge \mathrm{rank}(A'X' - AX)$, achievable by choosing, e.g., $Y = AX$.

Theorem 13

The discrepancy function $\Delta_\rho$ is normal.

{proof}

Let $X, X' \in \mathbb{F}_q^{n \times m}$ and let $0 \le i \le \delta_\rho(X,X')$. Let $(A, A')$ be a solution to the minimization in (24). Then $\mathrm{rank}(A'X' - AX) = \delta_\rho(X,X')$. By performing a full-rank decomposition of $A'X' - AX$, we can always find two matrices $W_1$ and $W_2$ such that $A'X' - AX = W_1 + W_2$, $\mathrm{rank}\,W_1 = i$ and $\mathrm{rank}\,W_2 = \delta_\rho(X,X') - i$. Taking $Y = AX + W_1$, we have that $\Delta_\rho(X,Y) \le i$ and $\Delta_\rho(X',Y) \le \delta_\rho(X,X') - i$. Since $\delta_\rho(X,X') \le \Delta_\rho(X,Y) + \Delta_\rho(X',Y)$, it follows that $\Delta_\rho(X,Y) = i$ and $\Delta_\rho(X',Y) = \delta_\rho(X,X') - i$.

As a consequence of Theorem 13, we have the following result.

Theorem 14

A code $\mathcal{C} \subseteq \mathbb{F}_q^{n \times m}$ is guaranteed to correct any $t$ packet errors if and only if $2t < \delta_\rho(\mathcal{C})$.

Similarly as in Section IV-A, Theorem 14 shows that $\delta_\rho(\mathcal{C})$ is a fundamental parameter characterizing the error correction capability of a code in the current model. In contrast to Section IV-A, however, the expression for $\Delta_\rho$ (and, consequently, $\delta_\rho$) does not seem mathematically appealing, since it involves a minimization. We now proceed to finding simpler expressions for $\Delta_\rho$ and $\delta_\rho$.

The minimization in (23) is a special case of a more general expression, which we give as follows. For $\mathcal{X} \in \mathcal{P}(\mathbb{F}_q^m)$, $Y \in \mathbb{F}_q^{N \times m}$, and $l, \rho \in \mathbb{N}$, let

$$\Delta_{l,\rho}(\mathcal{X}, Y) \triangleq \min_{\substack{X \in \mathbb{F}_q^{l \times m},\ A \in \mathbb{F}_q^{N \times l}:\\ \langle X \rangle = \mathcal{X},\ \mathrm{rank}\,A \ge l - \rho}} \mathrm{rank}(Y - AX).$$

The quantity defined above is computed in the following lemma.

Lemma 15

$$\Delta_{l,\rho}(\mathcal{X}, Y) = \max\{\dim\mathcal{Y} - \dim(\mathcal{X} \cap \mathcal{Y}),\ \dim\mathcal{X} - \dim(\mathcal{X} \cap \mathcal{Y}) - \rho\},$$

where $\mathcal{Y} = \langle Y \rangle$.

{proof}

See Appendix B.

Note that $\Delta_{l,\rho}(\mathcal{X},Y)$ is independent of $l$, for all valid $l$ (namely, $l \ge \dim\mathcal{X}$). Thus, we may drop the subscript $l$ and write simply $\Delta_\rho(\mathcal{X}, Y)$.

We can now provide a simpler expression for $\Delta_\rho$.

Theorem 16

$$\Delta_\rho(X, Y) = \max\{\dim\mathcal{Y} - \dim(\mathcal{X} \cap \mathcal{Y}),\ \dim\mathcal{X} - \dim(\mathcal{X} \cap \mathcal{Y}) - \rho\},$$

where $\mathcal{X} = \langle X \rangle$ and $\mathcal{Y} = \langle Y \rangle$.

{proof}

This follows immediately from Lemma 15 by noticing that the minimization in (23) is precisely $\Delta_{n,\rho}(\langle X \rangle, Y)$.

From Theorem 16, we observe that $\Delta_\rho(X,Y)$ depends on the matrices $X$ and $Y$ only through their row spaces, i.e., only the transmitted and received row spaces have a role in the decoding. Put another way, we may say that the channel really accepts an input subspace $\mathcal{X} = \langle X \rangle$ and delivers an output subspace $\mathcal{Y} = \langle Y \rangle$. Thus, all the communication is made via subspace selection. This observation provides a fundamental justification for the approach of [7].
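Taking the expression in Theorem 16 as reconstructed above, the discrepancy can be computed from row spaces alone. The sketch below (our own toy instance over $\mathbb{F}_2$) obtains $\dim(\mathcal{X} \cap \mathcal{Y})$ from the dimension formula (2):

```python
# Discrepancy from row spaces:  Delta_rho = max{ dim(Yv) - dim(Xv ∩ Yv),
#                                                dim(Xv) - dim(Xv ∩ Yv) - rho }.

def rank2(rows):
    rows, r = [list(x) for x in rows], 0
    for c in range(len(rows[0])):
        p = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def discrepancy(X, Y, rho):
    dx, dy = rank2(X), rank2(Y)
    d_int = dx + dy - rank2(X + Y)    # dim of the intersection, by (2)
    return max(dy - d_int, dx - d_int - rho)

X = [[1, 0, 0, 0], [0, 1, 0, 0]]      # <X> is 2-dimensional
Y = [[0, 0, 1, 0]]                    # <Y> is 1-dimensional, trivial intersection
print(discrepancy(X, Y, rho=0))       # -> 2 (equals the injection distance)
print(discrepancy(X, Y, rho=1))       # -> 1 (one dimension may be lost for free)
```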

At this point, it is useful to introduce the following definition.

Definition 2

The injection distance between subspaces $\mathcal{U}$ and $\mathcal{V}$ in $\mathcal{P}(\mathbb{F}_q^m)$ is defined as

$$d_I(\mathcal{U}, \mathcal{V}) \triangleq \max\{\dim\mathcal{U},\, \dim\mathcal{V}\} - \dim(\mathcal{U} \cap \mathcal{V}). \qquad (25)$$

The injection distance can be interpreted as measuring the number of error packets that an adversary needs to inject in order to transform an input subspace $\mathcal{X}$ into an output subspace $\mathcal{Y}$. This can be clearly seen from the fact that $d_I(\langle X \rangle, \langle Y \rangle) = \Delta_0(X, Y)$, i.e., the discrepancy with $\rho = 0$. Thus, the injection distance is essentially equal to the discrepancy when the channel is influenced only by the adversary, i.e., when the non-adversarial aspect of the channel (the column-rank deficiency of $A$) is removed from the problem. Note that, in this case, the decoder (20) becomes precisely a minimum-injection-distance decoder.
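The sketch below (our own) computes the injection distance (25) for row spaces over $\mathbb{F}_2$, alongside the subspace distance of [7], $d_S(\mathcal{U},\mathcal{V}) = \dim(\mathcal{U}+\mathcal{V}) - \dim(\mathcal{U}\cap\mathcal{V})$; one can check that $d_I = \tfrac{1}{2} d_S + \tfrac{1}{2}\lvert \dim\mathcal{U} - \dim\mathcal{V} \rvert$, so the two coincide up to a factor of two on constant-dimension codes:

```python
# Injection distance (25) vs. the subspace distance of [7], over GF(2).

def rank2(rows):
    rows, r = [list(x) for x in rows], 0
    for c in range(len(rows[0])):
        p = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def dims(U, V):
    du, dv = rank2(U), rank2(V)
    return du, dv, du + dv - rank2(U + V)    # last value: dim(U ∩ V), by (2)

def d_injection(U, V):
    du, dv, d_int = dims(U, V)
    return max(du, dv) - d_int

def d_subspace(U, V):
    du, dv, d_int = dims(U, V)
    return du + dv - 2 * d_int

U = [[1, 0, 0], [0, 1, 0]]    # <e1, e2>
V = [[1, 0, 0], [0, 0, 1]]    # <e1, e3>
print(d_injection(U, V), d_subspace(U, V))   # -> 1 2: one injected packet can
# simultaneously remove one dimension and insert another
```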

Proposition 17

The injection distance is a metric.

We delay the proof of Proposition 17 until Section V-B.

We can now use the definition of the injection distance to simplify the expression for the $\Delta$-distance $\delta_\rho$.

Proposition 18

$$\delta_\rho(X, X') = \left[\, d_I(\langle X \rangle, \langle X' \rangle) - \rho \,\right]^+ .$$

{proof}

This follows immediately from Theorem 16 after realizing that the minimum in (22) is attained by a $Y$ whose row space $\mathcal{Y}$ satisfies $\mathcal{X} \cap \mathcal{X}' \subseteq \mathcal{Y} \subseteq \mathcal{X}$ and $\dim\mathcal{Y} = \max\{\dim(\mathcal{X} \cap \mathcal{X}'),\ \dim\mathcal{X} - \rho\}$, where $\mathcal{X} = \langle X \rangle$, $\mathcal{X}' = \langle X' \rangle$ and, without loss of generality, $\dim\mathcal{X} \ge \dim\mathcal{X}'$.

From Proposition 18, it is clear that $\delta_\rho(\cdot,\cdot)$ is a metric if and only if