# Vector Network Coding Based on Subspace Codes Outperforms Scalar Linear Network Coding

###### Abstract

This paper considers vector network coding solutions based on rank-metric codes and subspace codes. The main result of this paper is that vector solutions can significantly reduce the required field size compared to the optimal scalar linear solution for the same multicast network. The multicast networks considered in this paper have one source with messages and the vector solution is over a field of size with vectors of length . The achieved gap of the field size between the optimal scalar linear solution and the vector solution is for any and any even . If is odd, then the achieved gap of the field size is . Previously, only a gap of constant size had been shown for networks with a very large number of messages. These results imply the same gap of the field size between the optimal scalar linear and any scalar nonlinear network coding solution for multicast networks. For three messages, we also show an advantage of vector network coding, while for two messages the problem remains open. Several networks are considered, all of them are generalizations and modifications of the well-known combination network. The vector network codes that are used as a solution for those networks are based on subspace codes and in particular subspace codes obtained from rank-metric codes. Some of these codes form a new family of subspace codes which poses a new interesting research problem. Finally, the exposition given in this paper suggests a sequence of related problems for future research.

combination networks, field size, multicast networks, rank-metric codes, scalar network coding, subspace codes, vector network coding.

## I Introduction

Network coding has been attracting increasing attention over the last fifteen years. The trigger for this interest was Ahlswede et al.’s seminal paper [1] which revealed that network coding increases the throughput compared to simple routing. This gain is achieved since in network coding, the nodes are allowed to forward a function of their received packets, while in routing, packets can only be forwarded. The network coding problem can be formulated as follows: given a network with a set of sources (where each one has a set of messages), for each edge find a function of the packets received at the starting node of the edge, such that each receiver can recover all its requested information from its received packets. Such an assignment of a function to each edge is called a solution for the network. Therefore, the received packets on an edge can be expressed as a function of the messages of the sources. If these functions are linear, we obtain a linear network coding solution; otherwise, we speak of a nonlinear solution. In linear network coding, each linear function on an edge consists of coding coefficients for each incoming packet. If the coding coefficients and the packets are scalars, it is called a scalar network coding solution. Throughout this paper, we use the short-hand terms scalar linear solution and scalar nonlinear solution. In [21], Kötter and Médard provided an algebraic formulation for the linear network coding problem and its scalar solvability.

Vector network coding as part of fractional network coding was mentioned in [2, 8]. A solution of the network is called a fractional vector network coding solution, if the edges transmit vectors of length , but the message vectors are of length . The case corresponds to a scalar solution. Ebrahimi and Fragouli [9] have extended the algebraic approach from [21] to vector network coding. Here, all packets are vectors of length and the coding coefficients are matrices. A set of coding matrices for which all receivers can recover their requested information is called a vector network coding solution (henceforth, it will be called a vector solution). Notice that vector operations imply linearity over vectors; therefore, a vector solution is always a (vector) linear solution. In terms of the achievable rate, vector network coding outperforms scalar linear network coding [28, 7]. In [28], an example was given of a network which is not scalar linear solvable, but is vector routing solvable. Furthermore, in [5] it was shown that not every solvable network has a vector solution. In particular, there exists a network which has no vector solution over any finite field and any vector dimension. The work of [24] shows the hardness of finding a capacity-achieving vector solution for a general instance of the network coding problem.

The field size of the network coding solution is an important parameter that directly influences the complexity of the calculations at the network nodes and edges. In practical applications, it is usually desired to work over small binary finite fields in order to represent bits and bytes. The minimum required field size has been a subject of comprehensive study throughout the last fifteen years of research on network coding, e.g., see Langberg’s and Sprintson’s tutorial [23].

Throughout this paper, we consider only (single source) multicast networks. An up-to-date survey on the fundamental properties of network coding for multicast networks can be found in [14]. A multicast network has one source which has messages and all receivers want to receive all the messages simultaneously. Notice that such a network can easily be transformed to a network with sources, where each source has and transmits one message and therefore, our results equivalently hold for that case. Jaggi et al. [19] have shown a deterministic algorithm for finding a scalar linear solution for multicast networks whose field size is the least prime power that is at least the number of receivers . The algorithm from [22] reduces the complexity to find such a solution for the network. Lehmann and Lehmann [25] proved that there are networks where the linear and nonlinear scalar solutions both require a field of size in the order of . In general, finding the minimum required field size of a (linear or nonlinear) scalar network code for a certain multicast network is NP-complete [25].

Clearly, for a given network, a vector solution can be transformed to a nonlinear scalar solution. Dougherty et al. have investigated in [6] several differences between scalar linear and scalar nonlinear solutions. For example, they showed that for any , there exists a non-multicast network with messages that has a binary nonlinear solution, but no binary linear solution. Further, they also showed that a network that has a scalar solution over some alphabet might not have a scalar solution over a larger alphabet (which might not be a finite field). In [36], two multicast networks were given: one which is solvable over the finite field , but not over ; and one which is solvable over , but not over . They provided the so-called Swirl network which is solvable over , but not over any , where is a Mersenne prime.

In a scalar linear solution, each coding coefficient can be chosen from values (if the solution is over a field of size ). In vector network coding over a field of size and dimension , each coefficient is a matrix and can be chosen from possibilities. Therefore, vector network coding offers more freedom in choosing the coding coefficients than scalar linear coding for equivalent field sizes and a smaller field size might be achievable [9].
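This counting argument can be made concrete with a minimal sketch; the helper names `scalar_choices` and `vector_choices` are illustrative, not from the paper:

```python
# Counting the freedom in choosing one coding coefficient, as discussed
# above: q choices for a scalar over GF(q) versus q**(t*t) choices for a
# t x t matrix over GF(q).
def scalar_choices(q):
    return q

def vector_choices(q, t):
    return q ** (t * t)

# A vector code of dimension t = 4 over GF(2) is comparable to a scalar
# code over GF(2**4): 16 scalar choices versus 65536 matrix choices.
print(scalar_choices(2 ** 4))   # 16
print(vector_choices(2, 4))     # 65536
```

This is why vector network coding offers strictly more freedom per coefficient at an equivalent alphabet size.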

This paper considers a widely studied family of networks, the combination networks, and several generalizations and modifications of it. We analyze the scalar linear and vector solutions of these networks. The proposed vector solutions are based on rank-metric codes and subspace codes. The main result of our paper is that for several of the analyzed networks, our vector solutions significantly reduce the required field size. In one subfamily of these networks, the scalar linear solution requires a field size , for even , where denotes the number of messages, while we provide a vector solution of field size and dimension . Therefore, the achieved gap between the field size of the optimal scalar linear solution and our vector solution is for any even . Notice that throughout this paper, whenever we refer to such a gap, we mean the difference between the smallest field size for which a scalar linear solution exists and the smallest field size for which we can construct a vector solution. For odd , the achieved gap is . To our knowledge, the work of Sun et al. [34, 35] has so far been the only one to present such a gap, and only of constant size. We improve significantly upon [34]. Further, the network of [34] has a large number of messages, whereas our results are based on simple networks and hold for any number of messages . For three messages and certain parameters, we provide a network in which the vector solution outperforms the optimal scalar linear solution, but with a smaller gap. For two messages, the problem is open and we conjecture that there is no advantage in the field size when vector network coding is used. Finally, in the framework of [9], the coding matrices for the vector solutions have to be commutative, while in our solutions they are not necessarily commutative.

The rest of this paper is structured as follows. Section II provides notation and definitions for finite fields and network coding, and discusses rank-metric codes and subspace codes. Section III considers the combination network and derives its optimal scalar linear solution and a vector solution based on rank-metric codes. Although our vector solution does not provide an improvement in terms of the field size for the unmodified combination network, it helps to understand the principle of our vector solution. Section IV gives an overview of the generalized combination networks for which a gain in the field size will be shown. In Section V, we present scalar linear and vector solutions for some generalized combination networks with additional direct links from the source to each receiver. Further, the nodes in this network are connected via parallel links. Our vector solutions for those networks are based on rank-metric codes. For these networks, the required field size is significantly reduced. In particular, the largest achieved gap between scalar and vector network coding is for any even integer messages. In Section VI, we show that the constructions which are based on rank-metric codes can be seen as constructions based on subspace codes. Moreover, using subspace codes, some results can be improved. Section VII analyzes and compares more generalized combination networks and the achieved gap in the field size between the optimal scalar linear solution and our vector solution. The vector solutions for these networks are based on subspace codes. In particular, a gap between scalar and vector network coding of size is obtained for any odd integer messages. A network where the vector solution also improves the field size compared to the optimal scalar solution for messages is shown in Section VIII. This network, as well as other similar networks, poses a new interesting problem related to subspace codes.
Finally, concluding remarks are given in Section IX and several open problems for future research are outlined.

## II Preliminaries

### II-A Finite Fields

Let be a power of a prime, denote the finite field of order , and its extension field of order . We use for the set of all matrices over . Let denote the identity matrix and the all-zero matrix. The triple denotes a linear code over of length , dimension , and minimum Hamming distance .

### Ii-B Network Coding

A network will be modeled as a finite directed acyclic multigraph, with a set of source nodes and a set of receivers. The sources have sets of disjoint messages, which are symbols or vectors over a given finite field. To unify the description, we refer to packets for both cases, symbols and vectors. Each receiver demands (requests) a subset of the messages. Each edge in the network has unit capacity and it carries a packet which is either a symbol from (in scalar network coding) or a vector of length over (in vector network coding). Note that the assumption of unit capacity does not restrict the considered networks since edges of larger capacity can be represented by multiple parallel edges of unit capacity. The incoming and outgoing edges of a node are denoted by and .

A network code is a set of functions of the packets on the edges of the network. For a source , the edges in carry functions of the messages of . For any vertex , which is not a source or a receiver, the edges in carry functions of the packets on the edges of . The network code is called linear if all the functions are linear and nonlinear otherwise. A network code is a solution for the network if each receiver can reconstruct its requested messages from the packets on its incoming edges. The network code is called a scalar network code if the packets are scalars from and thus, each edge carries a scalar from . The network code is called a vector network code if the packets are vectors and each edge carries a vector of length with entries from a field .

A (single source) multicast network is such a network with exactly one source, where all receivers demand all the messages simultaneously. This is possible if there are edge-disjoint paths from the source to each receiver. This is equivalent to saying that the min-cut between the source and each receiver is , where the min-cut is the minimum number of edges that have to be deleted to disconnect all sources from the receiver. The network coding max-flow/min-cut theorem for multicast networks states that the maximum number of messages transmitted from the source to each receiver is equal to the smallest min-cut between the source and any receiver. In the sequel, we will only write network instead of multicast network since this paper considers only multicast networks.

To formalize this description, let denote the source messages for scalar linear network coding. Each edge in , for any given vertex , builds a linear combination of the symbols obtained from . The coefficient vector of such a linear combination is called the local coding vector. Clearly, from all the functions on the paths leading to , the packet of can be written as a linear combination of the messages. The coefficients of this linear combination are called the global coding vector. Each receiver finally obtains several linear combinations of the message symbols (its global coding vectors). Thus, any receiver , , has to solve the following linear system of equations:

where and is a transfer matrix which contains the global coding vectors on the edges of and the symbols on are . In scalar linear network coding, we therefore want to find edge coefficients such that the matrix has full rank for every . The field size over which these coefficients are chosen should be as small as possible.

In vector network coding, the edges transmit vectors and therefore, the coding coefficients at each node are matrices. A vector solution is said to have dimension over a field of size if all these vectors are over a field of size and have length . Let denote source messages which are now vectors of length . Then, any receiver has to solve the following linear system of equations:

where and is a transfer matrix which contains the global coding vectors (vectors of matrices) on the edges of and the vectors on are . In vector network coding, we therefore want to find edge coefficients (which are matrices) such that the matrix has full rank for every and such that is minimized.

Clearly, when we want to compare scalar linear and vector network coding, a vector network coding solution of dimension over a field of size is equivalent to a scalar solution of size .

### Ii-C Rank-Metric Codes

Codewords of rank-metric codes will be used in some of our constructions as the coding coefficients for the vector solutions. Let be the rank of a matrix . The rank distance between is defined by . A linear rank-metric code is a -dimensional subspace of . It consists of matrices of size over with minimum rank distance:

The Singleton-like upper bound for rank-metric codes [3, 15, 31] implies that for any code, we have that . Codes which attain this bound with equality are known for all feasible parameters [3, 15, 31]. They are called maximum rank distance (MRD) codes and denoted by .

Let be the companion matrix of a primitive polynomial of degree over . The set of matrices forms an code of pairwise commutative matrices (see [31, 26]).

###### Theorem 1

Let , where is a companion matrix:

and is a primitive polynomial. Then, is an code.

The set of matrices and the finite field are isomorphic. In particular, if is a root of the primitive polynomial , then can be implemented by multiplying the corresponding two matrices , where is the companion matrix based on . Similarly, corresponds to . Notice that . This fact will be needed later for our constructions. These matrices are very useful when we design a vector network code for the combination network (see Section III).
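As a toy illustration of Theorem 1, the following sketch (parameters chosen for illustration, not taken from the paper) builds the companion matrix of the primitive polynomial x² + x + 1 over GF(2) and checks that the zero matrix together with the powers of C has the full-rank difference property, i.e., minimum rank distance 2:

```python
def mat_mult(A, B):
    # multiply two square matrices over GF(2)
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

def rank_gf2(M):
    # rank of a binary matrix via Gaussian elimination over GF(2)
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# Companion matrix of the primitive polynomial x^2 + x + 1 over GF(2)
C = [[0, 1], [1, 1]]
I = [[1, 0], [0, 1]]
Z = [[0, 0], [0, 0]]
C2 = mat_mult(C, C)          # C^2 = C + I over GF(2)

# The zero matrix together with the powers of C represents GF(4);
# every difference of two distinct elements has full rank 2.
code = [Z, I, C, C2]
for i in range(len(code)):
    for j in range(i + 1, len(code)):
        D = [[(a - b) % 2 for a, b in zip(r1, r2)]
             for r1, r2 in zip(code[i], code[j])]
        assert rank_gf2(D) == 2
print("minimum rank distance 2 verified")
```

The set {Z, I, C, C²} here plays the role of the matrix field representation used throughout the constructions below.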

Moreover, to prove that any network (multicast or non-multicast) has a vector solution of dimension over if there exists a scalar linear solution over , we can simply replace any coefficient by . Due to the isomorphism, addition and multiplication can be done in the matrix representation as well. Further, the matrices of the code are useful for the encoding and decoding in the network. Instead of computing in the field , we can use the related matrices of the code to obtain the vector solution and translate it to the scalar solution only at the receivers.

### Ii-D Subspace Codes

In our constructions of vector network codes, the global coding vector consists of matrices over for each edge. These matrices can be concatenated together to form a matrix which is a basis for a subspace of whose dimension is at most . In the sequel of the paper, we will see that if we use a set of subspaces spanned by such matrices with additional properties, then they can be used as our coding coefficients.

Let denote the space spanned by the rows of a matrix . Similarly, for vectors , let denote the space spanned by these vectors. The Grassmannian of dimension is denoted by and is the set of all subspaces of of dimension . The cardinality of is the well-known -binomial (also known as Gaussian coefficient):

where . The set of all subspaces of is called the projective space of order and is denoted by , i.e. . For two subspaces , let denote the smallest subspace containing the union of and . The subspace distance between and is defined by . A subspace code is a set of subspaces; if the subspaces in the subspace code have the same dimension, then the code is called a constant dimension code or a Grassmannian code. These codes were considered for error-correction in random linear network coding [20]. Bounds on the sizes of such codes and properties which are relevant for our exposition can be found in [10, 11, 13]. Let denote the maximum cardinality of a constant dimension code in with minimum subspace distance . The following bounds can be found in [20, 11].
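Both the Gaussian coefficient and the subspace distance can be computed directly. The following minimal sketch (over GF(2), with illustrative generator matrices) uses the identity d_S(U,V) = dim(U) + dim(V) − 2 dim(U ∩ V) = 2 dim(U + V) − dim(U) − dim(V):

```python
def gaussian_binomial(n, k, q):
    # number of k-dimensional subspaces of an n-dimensional space over GF(q)
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (k - i) - 1
    return num // den

def rank_gf2(M):
    # rank of a binary matrix via Gaussian elimination over GF(2)
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def subspace_distance(U, V):
    # d_S(U, V) = 2 dim(U + V) - dim(U) - dim(V), where dim(U + V) is
    # the rank of the stacked generator matrices
    return 2 * rank_gf2(U + V) - rank_gf2(U) - rank_gf2(V)

print(gaussian_binomial(4, 2, 2))  # 35 two-dimensional subspaces of GF(2)^4

U = [[1, 0, 0, 0], [0, 1, 0, 0]]
V = [[0, 1, 0, 0], [0, 0, 1, 0]]
print(subspace_distance(U, V))     # 2 (the subspaces share one dimension)
```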

###### Theorem 2

For ,

and for

## III The Combination Network

The combination network (where ) is shown in Fig. 1 (see also [30]). The network has three layers: in the first layer there is a source with messages. The source transmits packets to the nodes of the middle layer. Any nodes in the middle layer are connected to a receiver, and each one of the receivers demands all the messages. For vector coding, the messages are vectors of length and for scalar coding, the messages are scalars, denoted by . In the combination network, the local and the global coding vectors are the same and therefore, we will not distinguish between the two. Next, we consider the case where .

### III-A Scalar Solution

The combination network has a scalar linear solution of field size if and only if an MDS code exists [30]. Thus, and when and is a power of two, then is sufficient [27, p. 328]. The symbols transmitted to and from each node in the middle layer form together a codeword of the MDS code (encoded from the message symbols) and each receiver obtains symbols. Each receiver can correct erasures and therefore, can reconstruct the message symbols.
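The MDS-based scheme above can be sketched in miniature. The prime field GF(7) and the [4, 2] Reed-Solomon code below are illustrative choices, not parameters from the paper; a receiver connected to two middle nodes recovers both messages by solving a small Vandermonde system:

```python
P = 7  # illustrative prime field GF(7)

def rs_encode(msg, alphas, p=P):
    # evaluate the message polynomial at the code locators
    # (a Vandermonde-generated, i.e. Reed-Solomon, MDS code)
    return [sum(m * pow(a, i, p) for i, m in enumerate(msg)) % p
            for a in alphas]

def solve_mod_p(A, b, p=P):
    # Gaussian elimination modulo a prime
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] % p)
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], p - 2, p)
        M[c] = [x * inv % p for x in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                M[r] = [(x - M[r][c] * y) % p for x, y in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

# Two messages encoded into a [4, 2] Reed-Solomon codeword; each middle
# node forwards one code symbol to its receivers.
msg = [3, 5]
alphas = [1, 2, 3, 4]
codeword = rs_encode(msg, alphas)

# A receiver connected to middle nodes 1 and 3 sees two code symbols and
# solves a 2 x 2 Vandermonde system to recover both messages.
seen = [1, 3]
A = [[pow(alphas[i], j, P) for j in range(len(msg))] for i in seen]
b = [codeword[i] for i in seen]
print(solve_mod_p(A, b))  # [3, 5]
```

Any choice of `seen` works, which is exactly the erasure-correction property of the MDS code.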

###### Corollary 1

If and is a power of two, let , else let . For the combination network, a scalar linear solution of field size exists if and only if .

### Iii-B Vector Solution

In the sequel, we present a vector solution based on MRD codes for the combination network. The case was implicitly solved similarly in [34]. Our construction uses the isomorphism between the field and the powers of the companion matrix, i.e., the set (Theorem 1). This isomorphism leads to the following theorem.

###### Theorem 3

Let be a root of the primitive polynomial of . Let be an arbitrary matrix. Define the block matrix by replacing each entry of as follows: if , , replace it by for all , where denotes the companion matrix based on . If , then replace it by the all-zero matrix.

Any set of linearly independent columns of is linearly independent over if and only if the columns of the related blocks of columns in are linearly independent over .

###### Proof:

Denote by a full-rank submatrix of . Define a corresponding matrix over by replacing by and by . Clearly, the determinant of is a function of its entries , . The determinant of has the same form as , with the only difference that each is replaced by (and if , it is replaced by ). Thus, is non-zero if and only if is non-zero and the difference and the product of any two distinct matrices and (where at least one of them is non-zero) have full rank. The second property is true since the powers of the companion matrix form a full-rank MRD code, see Theorem 1, and hence, the statement follows. ∎

The following corollary considers block Vandermonde matrices which will be used for our vector solution. Notice thereby that .

###### Corollary 2

Let be the code defined by the companion matrix (Theorem 1). Let , , be distinct codewords of . Define the following block matrix:

Then, any submatrix consisting of blocks of consecutive columns has full rank , for any .

Based on this corollary, we can now provide a vector network code.

###### Construction 1

Let be the code defined by the companion matrix (Theorem 1) and let . Consider the -combination network with message vectors . One node from the middle layer receives and transmits and the other nodes of the middle layer receive and transmit , for .

The matrices , , are the coding coefficients of the incoming and outgoing edges of node in the middle layer.

###### Theorem 4

Construction 1 provides a vector linear solution of field size and dimension to the -combination network, i.e., can be reconstructed at all receivers.

###### Proof:

Each receiver obtains vectors and has to solve one of the following two systems of linear equations:

or

for some distinct . Due to Corollary 2, in both cases, the corresponding matrix has full rank and there is a unique solution for . ∎

The decoding at each receiver consists of solving a linear system of equations of size . The following theorem analyzes the decoding complexity of this vector solution. Thereby, we use the MRD code from Theorem 1 which is formed by the companion matrix and its powers. Thus, the inverse of these matrices can be calculated with less than quadratic complexity.

###### Theorem 5

For the -combination network, a vector solution of field size and dimension exists. The decoding complexity is in over for each receiver.

###### Proof:

From Theorem 4 it is clear that such a solution exists. For the decoding complexity, it remains to prove that inverting the matrix from Corollary 2 can be done with operations over . Since the submatrices are commutative, the inverse of this block Vandermonde matrix can be calculated in the same way as the inverse of a usual Vandermonde matrix [37]. The only difference is that multiplication and addition of elements from are now replaced by multiplication and addition of the commutative code matrices. Due to the isomorphism between and , this multiplication and addition is equivalent to fast polynomial multiplication and fast polynomial addition modulo the primitive polynomial and thus in the order of over . Thus, the complexity of inverting costs times the complexity of inverting a Vandermonde matrix, which is [17]. ∎

Further, for the -combination network with three messages and when is a power of two, we can use the matrices from Construction 1 and additionally transmit to obtain a vector solution.

### Iii-C Analysis

Due to the isomorphism of and the code , both solutions are equivalent. Implementing the scalar solution can actually be done by implementing the vector solution. We can therefore construct a vector linear solution of size and dimension for the combination network, where equivalently a scalar solution from an MDS code exists for . The decoding complexity when implementing the vector solution is in the order of operations over for each receiver.

## IV A Family of Generalized Combination Networks

This section provides an overview of the networks which are considered in this paper. All of them are generalizations and modifications of the combination network. We therefore define a generalization of the combination network, called the -direct links -parallel links network, in short the - network. All our considered networks (including the unmodified combination networks) are special cases of the - network.

We start by describing the structure of the generalized network and then we will consider the main subfamilies of this family of networks. The - network is shown in Fig. 2 and consists of three layers. In the first layer, there is a source with messages. In the middle layer, there are nodes. The source has parallel links to each node in the middle layer. From any nodes in the middle layer, there are parallel links to one receiver in the third layer. Additionally, from the source there are direct parallel links to each one of the receivers in the third layer. Therefore, each receiver has incoming links. We will assume some relations between the parameters , , , and such that the resulting network is interesting and does not have a trivial or no solution (see Theorem 8).

Notice first that the local and global coding vectors for the - network must be the same. Hence, we will not distinguish between them in the sequel. Second, it should be remarked that for certain parameters, the min-cut of the - network is larger than the number of messages . An equivalent network in which the min-cut is , which can be solved with an alphabet of the same size, can be constructed as follows: replace the -th receiver by a node from which there are links to vertices , . From , , there is a link to a new receiver . Similarly, we can avoid parallel links in the network. Assume there are parallel links from vertex to vertex . We can remove these links and add vertices , such that there exists a link from to , , and there exists a link from each vertex , , to . Again, the new network is solvable over the same alphabet as the old network. In the - network, this transformation can be done by simply replacing each node in the middle layer by nodes.
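The transformation that removes parallel links can be sketched as follows; `split_parallel_links` is a hypothetical helper that inserts one fresh intermediate node per link (for simplicity, every link is split uniformly, which preserves solvability over the same alphabet):

```python
def split_parallel_links(edges):
    # edges: list of (u, v) pairs, possibly repeated (parallel links);
    # each link is replaced by u -> w -> v with a fresh node w, so no
    # two links between the same pair of nodes remain parallel.
    new_edges, counter = [], {}
    for (u, v) in edges:
        k = counter.get((u, v), 0)
        counter[(u, v)] = k + 1
        w = f"{u}->{v}#{k}"  # fresh intermediate node per parallel link
        new_edges += [(u, w), (w, v)]
    return new_edges

# two parallel links from the source s to middle node a, one link a -> r1
edges = [("s", "a"), ("s", "a"), ("a", "r1")]
print(split_parallel_links(edges))
```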

The following subsections list several instances of the - network. In the next section, we will analyze the scalar linear solution and the vector solution for some of these special - networks and compare them with respect to the alphabet size.

### IV-A The Combination Network

The combination network is clearly a special case of the - network, where and . This network with was already considered in Section III. We are not aware of any set of parameters for which a vector solution outperforms the optimal scalar linear solution in the unmodified combination network with respect to the field size.

### Iv-B The Direct Links Combination Network

Another interesting family of networks that will be considered is the combination network with additional direct links, i.e. the - network. This network with and is discussed in Section VIII. It is the only network with three messages for which we obtained an advantage of a vector solution compared to the optimal scalar linear solution with respect to the alphabet size. Both the optimal scalar linear solution and the vector solution for this subfamily of networks motivate some interesting questions on a classic coding problem and on a new type of subspace code problem, which will be discussed in the sequel. For and this network is illustrated in Fig. 3.

### Iv-C The One-Direct Link -Parallel Links Network

The - network is shown in Fig. 4. It has three layers, a source in the first layer with messages and nodes in the middle layer, where there are links from the source to each node in the middle layer. There are receivers which form the third layer, where each two nodes from the middle layer are connected to a different receiver. If a node from the middle layer is connected to a receiver , then there are links from to . There is also one direct link from the source to each receiver. The total number of edges entering a receiver is . This is the subfamily with the smallest number of direct links from the source to the receivers, for which our vector solution outperforms the optimal scalar linear solution.

### Iv-D The -Extra Links -Parallel Links Network

The second subfamily in which vector solutions outperform scalar solutions is the - network, which is shown in Fig. 5. This is the nontrivial subfamily in which the number of direct links from the source to the receivers is the largest one. It has three layers, with a source carrying messages in the first layer. In the second layer, there are nodes and in the third layer, there are receivers. This network yields the largest gap in the alphabet size between our vector solution and the optimal scalar solution for an even number of messages. The intersection with the previous subfamily, the - network, is when and the related network is the - network shown in Fig. 6.

## V Vector Solutions Using Rank-Metric Codes which Outperform Scalar Solutions

### V-a Overview

In this section, we will consider vector solutions based on rank-metric codes for some of the generalized combination networks. We will start with a basic example of the - network. The idea of the optimal scalar linear solution for this network will demonstrate the general idea of all our results with messages. The gap between scalar and vector network coding is . The more general network that we consider later is the - network for which the gap between scalar and vector network coding is for any .

### V-B Scalar Linear Solution for the - Network

For ease of understanding, the - network is shown in Fig. 6.

###### Lemma 1

There is a scalar linear solution of field size for the - network if and only if .

###### Proof:

Let be a matrix, divided into blocks of two columns, with the property that any two blocks together have rank at least three. Each one of the nodes in the middle layer of the network transmits two symbols (from one block) of (these symbols were transmitted to the node from the source). Each direct link transmits a symbol , for , which is chosen such that the related -submatrix of with the additional vector column has full rank. Clearly, there is a scalar solution over if and only if such a matrix over exists.

These blocks are defined to be any -matrix representations of all two-dimensional subspaces of . Any two blocks together form a matrix of rank at least three (since any two such subspaces are distinct).

From every node in the middle layer, there are two links to the appropriate receivers. Therefore, we associate each middle node with one block. The number of blocks is at most the number of distinct two-dimensional subspaces of , i.e. and therefore, a scalar solution exists if:

To prove the “only if” part, we need to show that there is no scheme that provides more blocks. Assume one block is a rank-one matrix. Then, all other blocks must have rank two and the space that they span has to be disjoint from the rank-one block. Therefore, with this scheme there are at most blocks. Thus, for the largest , all blocks should have rank two, and taking all two-dimensional subspaces provides the maximum number of blocks. ∎

### V-C Vector Solution for the - Network

###### Construction 2

Let be an code and let . Consider the - network with message vectors . The -th middle node receives and transmits:

The direct link from the source which ends in the same receiver as the links from two distinct nodes , of the middle layer transmits the vector , where the matrix is chosen such that

(1)

Since

it follows that the rows of can be chosen such that the overall rank of the matrix from (1) is .

###### Theorem 6

Construction 2 provides a vector solution of field size and dimension to the - network for any .

###### Proof:

Each receiver obtains the vectors and the vector from the direct link. From these five vectors, the receiver wants to reconstruct the message vectors by solving the following linear system of equations:

The choice of from Construction 2 guarantees that this linear system of equations has a unique solution for . ∎

### V-D Comparison of the Solutions for the - Network

For the - network, we obtain a significant improvement in the field size for the vector solution compared to the optimal scalar linear solution. The field size of the vector solution is equivalent to while the field size of the optimal scalar linear solution has to satisfy . Since can be chosen to be , the size of the gap is .

The same gap in the alphabet size can be obtained for the - network, for any , by using the same approach with an code.

### V-E Solutions for the - Network

To improve the gap in the alphabet size, i.e., to show that the advantage of our vector solution compared to the optimal scalar linear solution is even more significant, we consider in this subsection the - network from Subsection IV-D.

Let us first consider the optimal scalar linear solution for this network.

###### Lemma 2

There is a scalar linear solution of field size for the - network with messages, where , if and only if .

###### Proof:

Let be a matrix, divided into blocks of columns, with the property that any two blocks together have rank at least . Each of the nodes in the middle layer of the network transmits symbols (from one block) of (these symbols were transmitted to the node from the source). Each direct link transmits a symbol , for , which is chosen such that the related submatrix of with the additional vector column has full rank. Clearly, there is a scalar solution over if and only if such a matrix over