Error Correction for Index Coding with Side Information
A problem of index coding with side information was first considered by Y. Birk and T. Kol (IEEE INFOCOM, 1998). In the present work, a generalization of the index coding scheme, where transmitted symbols are subject to errors, is studied. Error-correcting methods for such a scheme, and their parameters, are investigated. In particular, the following question is discussed: given the side information hypergraph of an index coding scheme and the maximal number δ of erroneous symbols, what is the shortest length of a linear index code such that every receiver is able to recover the required information? This question turns out to be a generalization of the problem of finding a shortest-length error-correcting code with a prescribed error-correcting capability in classical coding theory.
The Singleton bound and two other bounds, referred to as the α-bound and the κ-bound, for the optimal length of a linear error-correcting index code (ECIC) are established. For large alphabets, a construction based on the concatenation of an optimal index code with an MDS classical code is shown to attain the Singleton bound. For smaller alphabets, however, this construction may not be optimal. A random construction is also analyzed. It yields another, inexplicit, bound on the length of an optimal linear ECIC.
Further, the problem of error-correcting decoding by a linear ECIC is studied. It is shown that in order to correctly decode the desired symbol, the decoder is required to find one of the vectors belonging to an affine space containing the actual error vector. Syndrome decoding is shown to produce the correct output if the weight of the error pattern is less than or equal to the error-correcting capability of the corresponding ECIC.
Finally, the notion of static ECIC, which is suitable for use with a family of instances of an index coding problem, is introduced. Several bounds on the length of static ECIC’s are derived, and constructions for static ECIC’s are discussed. Connections of these codes to weakly resilient Boolean functions are established.
The problem of Index Coding with Side Information (ICSI) was introduced by Birk and Kol. In that scenario, a server (sender) broadcasts a set of messages to a group of clients (receivers). During the transmission, each client might miss a certain part of the data, due to intermittent reception, limited storage capacity, or other reasons. Via a slow backward channel, the clients let the server know which messages they already have in their possession, and which messages they are interested in receiving. The server has to find a way to deliver to each client all the messages he requested, while spending a minimum number of transmissions. As Birk and Kol showed, the server can significantly reduce the number of transmissions by coding the messages.
The toy example in Figure 1 presents a scenario with one broadcast transmitter and four receivers. Each receiver requires a different information packet (we sometimes simply call it a message). The naïve approach requires four separate transmissions, one transmission per information packet. However, by exploiting the knowledge of the subsets of messages that the clients already have, and by using coding of the transmitted data, the server can broadcast just one coded packet.
Possible applications of index coding include communications scenarios in which a satellite or a server broadcasts a set of messages to a set of clients, such as daily newspaper delivery or video-on-demand. Index coding with side information can also be used in opportunistic wireless networks. These are networks in which a wireless node can opportunistically listen to the wireless channel. The client may obtain packets that are not designated to it. As a result, a node obtains some side information about the transmitted data. Exploiting this additional knowledge may help to increase the throughput of the system.
The ICSI problem has been a subject of several recent studies. This problem can be viewed as a special case of the Network Coding (NC) problem. In particular, it has been shown that every instance of the NC problem can be reduced to an instance of the ICSI problem.
The preceding works on the ICSI problem consider a scenario where the transmissions are error-free. In practice, of course, this might not be the case. In this work, we assume that the transmitted symbols are subject to errors. We extend some known results on index coding to the case where any receiver can correct up to a certain number of errors. It turns out that the problem of designing such error-correcting index codes (ECIC's) naturally generalizes the problem of constructing classical error-correcting codes.
More specifically, assume that the number of messages that the server possesses is n, and that the designed maximal number of errors is δ. We show that the problem of constructing an ECIC of minimal possible length is equivalent to the problem of constructing a matrix L which has n rows and the minimal possible number of columns, such that
wt(zL) ≥ 2δ + 1 for all z ∈ I(q, H),
where I(q, H) is a certain subset of F_q^n \ {0}. Here wt(z) denotes the Hamming weight of the vector z, F_q stands for a finite field with q elements, and 0 is the all-zeros vector. If the receivers have no side information, this problem becomes equivalent to the problem of designing a shortest-length linear code of given dimension and minimum distance.
In this work, we establish an upper bound (the κ-bound) and a lower bound (the α-bound) on the shortest length of a linear ECIC, which is able to correct any error pattern of size up to δ. More specifically, let H be the side information hypergraph that describes the instance of the ICSI problem. Let N_q(H, δ) denote the length of a shortest linear ECIC over F_q, such that every receiver can recover the desired message, if the number of errors is at most δ. We use the notation N_q[k, d] for the length of an optimal linear error-correcting code of dimension k and minimum distance d over F_q. We obtain
N_q[α(H), 2δ + 1] ≤ N_q(H, δ) ≤ N_q[κ(H), 2δ + 1],   (1)
where α(H) is the generalized independence number and κ(H) is the min-rank (over F_q) of H.
For linear index codes, we also derive an analog of the Singleton bound. This result implies that (over a sufficiently large alphabet) the concatenation of a standard MDS error-correcting code with an optimal linear index code yields an optimal linear error-correcting index code. Finally, we consider random ECIC's. By analyzing their parameters, we obtain an upper bound on the length of an optimal linear ECIC.
When the side information hypergraph is a pentagon, the inequalities in (Equation 1) are shown to be strict for suitable values of q and δ. This implies that a concatenated scheme based on a classical error-correcting code and on a linear non-error-correcting index code does not necessarily yield an optimal linear error-correcting index code. Since the ICSI problem can also be viewed as a source coding problem, this example demonstrates that sometimes designing a single code for both source and channel coding can result in a smaller number of transmissions.
The decoding of a linear ECIC is somewhat different from that of a classical error-correcting code. There is no longer a need for a complete recovery of the whole information vector. We analyze the decoding criteria for ECIC's and show that syndrome decoding, which might be different for each receiver, produces the correct output, provided that the number of errors does not exceed the error-correcting capability of the code.
An ECIC is called static under a family of instances of the ICSI problem if it works for all of these instances. Such an ECIC is interesting since it remains useful as long as the parameters of the problem vary within a particular range. Bounds and constructions for static ECIC’s are studied in Section 8. Connections between static ECIC’s and weakly resilient vectorial Boolean functions are also discussed.
The problem of error correction for NC was studied in several previous works. However, these results are not directly applicable to the ICSI problem. First, there is only a very limited variety of results for non-multicast networks in the existing literature. The ICSI problem, however, is a special case of the non-multicast NC problem. Second, the ICSI problem can be modeled by the NC scenario, yet this requires that there are directed edges from particular sources to each sink, which provide the side information. The symbols transmitted on these special edges are not allowed to be corrupted. By contrast, for error-correcting NC, symbols transmitted on all edges can be corrupted.
The paper is organized as follows. Basic notations and definitions, used throughout the paper, are provided in Section 2. The problem of index coding with and without error correction is introduced in Section 3. Some basic results are presented in that section. The α-bound and the κ-bound are derived in Section 4. The Singleton bound is presented in Section 5. Random codes are discussed in Section 6. Syndrome decoding is studied in Section 7. A notion of static error-correcting index codes is presented in Section 8. Several bounds on the length of such codes are derived, and connections to resilient functions are shown in that section. Finally, the results are summarized in Section 9, and some open questions are proposed therein.
In this section we introduce some useful notation. Here F_q is the finite field of q elements, where q is a power of a prime, and F_q* is the set of all nonzero elements of F_q.
Let [n] = {1, 2, …, n}. For vectors u, v ∈ F_q^n, the (Hamming) distance d(u, v) between u and v is defined to be the number of coordinates where u and v differ, namely,
d(u, v) = |{i ∈ [n] : u_i ≠ v_i}|.
If u ∈ F_q^n and C is a set of vectors (or a vector subspace) in F_q^n, then the last definition can be extended to
d(u, C) = min_{c ∈ C} d(u, c).
The support of a vector u ∈ F_q^n is defined to be the set supp(u) = {i ∈ [n] : u_i ≠ 0}. The (Hamming) weight of a vector u, denoted wt(u), is defined to be |supp(u)|, the number of nonzero coordinates of u. Suppose E ⊆ [n]. We write u ⊆ E whenever supp(u) ⊆ E.
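These definitions translate directly into code. The following sketch (illustrative Python; all function names are choices made here, not notation from the text) computes the support, weight, and Hamming distance of small vectors, together with the distance from a vector to a set.

```python
def support(u):
    """Indices of nonzero coordinates of u (0-based here, [n] in the text)."""
    return {i for i, ui in enumerate(u) if ui != 0}

def wt(u):
    """Hamming weight: number of nonzero coordinates of u."""
    return len(support(u))

def dist(u, v):
    """Hamming distance: number of coordinates where u and v differ."""
    assert len(u) == len(v)
    return sum(1 for ui, vi in zip(u, v) if ui != vi)

def dist_to_set(u, C):
    """Distance from the vector u to a set C of vectors."""
    return min(dist(u, c) for c in C)
```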
A k-dimensional subspace C of F_q^n is called a linear [n, k, d]_q code over F_q if the minimum distance of C,
d(C) = min_{u, v ∈ C, u ≠ v} d(u, v),
is equal to d. Sometimes we may use the notation [n, k, d] for the sake of simplicity. The vectors in C are called codewords. It is easy to see that the minimum weight of a nonzero codeword in a linear code C is equal to its minimum distance d(C). A generator matrix G of an [n, k]_q code C is a k × n matrix whose rows are linearly independent codewords of C. Then C = {yG : y ∈ F_q^k}. The parity-check matrix of C is an (n − k) × n matrix H over F_q such that C = {c ∈ F_q^n : Hc^T = 0}. Given q, k, and d, let N_q[k, d] denote the length of the shortest linear code over F_q which has dimension k and minimum distance d.
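For small parameters, these quantities can be checked by exhaustive enumeration. The sketch below (illustrative Python over F_2; the generator matrix of the [7, 4, 3] binary Hamming code serves as a toy example) lists all codewords spanned by a generator matrix and computes the minimum distance as the minimum weight of a nonzero codeword, as noted above.

```python
from itertools import product

def codewords(G, q=2):
    """All codewords yG of the linear code spanned by the rows of G over F_q."""
    k, n = len(G), len(G[0])
    words = set()
    for y in product(range(q), repeat=k):
        c = tuple(sum(y[i] * G[i][j] for i in range(k)) % q for j in range(n))
        words.add(c)
    return words

def min_distance(G, q=2):
    """Minimum distance = minimum weight of a nonzero codeword (by linearity)."""
    return min(sum(1 for x in c if x != 0)
               for c in codewords(G, q) if any(c))

# Toy example: a generator matrix of the [7, 4, 3] binary Hamming code.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```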
We use e_i to denote the unit vector, which has a one at the ith position and zeros elsewhere. For a vector y = (y_1, y_2, …, y_n) and a subset B = {i_1, i_2, …, i_b} of [n], where i_1 < i_2 < ⋯ < i_b, let y_B denote the vector (y_{i_1}, y_{i_2}, …, y_{i_b}).
For an n × N matrix L, let L_i denote its ith row. For a set E ⊆ [n], let L_E denote the |E| × N matrix obtained from L by deleting all the rows of L which are not indexed by the elements of E. For a set M of vectors, we use the notation span(M) to denote the linear space spanned by the vectors in M. We also use the notation colspan(L) for the linear space spanned by the columns of the matrix L.
Let G = (V(G), E(G)) be a graph with a vertex set V(G) and an edge set E(G). The graph G is called undirected if every edge is an unordered pair {u, v} with u, v ∈ V(G), u ≠ v. A graph is directed if every edge is an ordered pair (u, v), u ≠ v. A directed graph G is called symmetric if
(u, v) ∈ E(G) implies (v, u) ∈ E(G).
There is a natural correspondence between an undirected graph G and the directed symmetric graph G′ defined as
E(G′) = {(u, v), (v, u) : {u, v} ∈ E(G)}.   (2)
Let G be an undirected graph. A subset S of vertices is called an independent set if {u, v} ∉ E(G) for all u, v ∈ S. The size of the largest independent set in G is called the independence number of G, and is denoted by α(G). The graph Ḡ is called the complement of G if
E(Ḡ) = {{u, v} : u ≠ v, {u, v} ∉ E(G)}.
A coloring of G using c colors is a function φ : V(G) → [c], such that
φ(u) ≠ φ(v) whenever {u, v} ∈ E(G).
The chromatic number of G is the smallest number c such that there exists a coloring of G using c colors, and it is denoted by χ(G). By using the correspondence (Equation 2), the definitions of independence number, graph complement and chromatic number are trivially extended to directed symmetric graphs.
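Both graph parameters can be computed by brute force on small graphs. The sketch below (illustrative Python; practical only for a handful of vertices) does so for the pentagon, a graph that reappears later as a side information graph.

```python
from itertools import combinations, product

def independence_number(n, edges):
    """Largest subset of {0, ..., n-1} containing no edge (brute force)."""
    for r in range(n, 0, -1):
        for S in combinations(range(n), r):
            Sset = set(S)
            if not any(u in Sset and v in Sset for u, v in edges):
                return r
    return 0

def chromatic_number(n, edges):
    """Smallest number of colors admitting a proper coloring (brute force)."""
    for c in range(1, n + 1):
        for col in product(range(c), repeat=n):
            if all(col[u] != col[v] for u, v in edges):
                return c

# The pentagon C5: independence number 2, chromatic number 3 (odd cycle).
pentagon = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```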
3 Index Coding and Error Correction
3.1 Index Coding with Side Information
The Index Coding with Side Information problem considers the following communications scenario. There is a unique sender (or source) S, who has a vector of messages x = (x_1, x_2, …, x_n) ∈ F_q^n in his possession. There are also m receivers R_1, R_2, …, R_m, receiving information from S via a broadcast channel. For each i ∈ [m], R_i has side information, i.e. R_i owns the messages x_j, j ∈ X_i, for some X_i ⊆ [n]. Each R_i, i ∈ [m], is interested in receiving the message x_{f(i)} (we say that R_i requires x_{f(i)}), where the mapping f : [m] → [n] satisfies f(i) ∉ X_i for all i ∈ [m]. Hereafter, we use the notation X = (X_1, X_2, …, X_m). An instance of the ICSI problem is given by a quadruple (m, n, X, f). It can also be conveniently described by a directed hypergraph H.
Each side information hypergraph H can be associated with a directed graph in the following way. For each directed hyperedge (f(i), X_i) there will be directed edges (f(i), j), for j ∈ X_i. When m = n and f(i) = i for all i ∈ [m], the resulting graph is, in fact, the side information graph defined in earlier work on index coding.
The goal of the ICSI problem is to design a coding scheme that allows to satisfy the requests of all receivers in the least number of transmissions. More formally, we have the following definition.
Hereafter, we assume that H is known to S. Moreover, we also assume that the code is known to each receiver R_i, i ∈ [m]. In practice this can be achieved by a preliminary communication session, in which the knowledge of the sets X_i for i ∈ [m] and of the code is disseminated between the participants of the scheme.
Observe that κ(H) generalizes the min-rank over F_q of the side information graph, which was defined in earlier work on index coding. More specifically, when m = n and f(i) = i for all i ∈ [m], H becomes the side information graph, and κ(H) its min-rank. The min-rank of an undirected graph was first introduced by Haemers to bound the Shannon capacity of a graph, and was later proved to be the smallest number of transmissions in a linear index code.
3.2 Error-Correcting Index Code with Side Information
Due to noise, the symbols received by R_i, i ∈ [m], may be subject to errors. Consider an ICSI instance (m, n, X, f), and assume that S broadcasts a vector y ∈ F_q^N. Let ε_i ∈ F_q^N be the error affecting the information received by R_i, i ∈ [m]. Then R_i actually receives the vector
y_i = y + ε_i
instead of y. The following definition is a generalization of Definition ?.
The definitions of the length, of a linear index code, and of the matrix corresponding to an index code are naturally extended to an error-correcting index code. Note that if a code is an H-IC, then it is a (H, 0)-ECIC, and vice versa.
Consider an instance of the ICSI problem described by H. We define the set of vectors
I(q, H) = {z ∈ F_q^n : ∃i ∈ [m] such that z_{X_i} = 0 and z_{f(i)} ≠ 0}.
For all i ∈ [m], we also define
Y_i = [n] \ ({f(i)} ∪ X_i).
Then the collection of supports of all vectors in I(q, H) is given by
J(H) = ∪_{i ∈ [m]} {{f(i)} ∪ Y : Y ⊆ Y_i}.
The necessary and sufficient condition for a matrix to be the matrix corresponding to some -ECIC is given in the following lemma.
For each x ∈ F_q^n, we define
B(x, δ) = {xL + ε : ε ∈ F_q^N, wt(ε) ≤ δ},
the set of all vectors resulting from at most δ errors in the transmitted vector associated with the information vector x. Then the receiver R_i can recover x_{f(i)} correctly if and only if
B(x, δ) ∩ B(x′, δ) = ∅
for every pair x, x′ ∈ F_q^n satisfying:
x_{X_i} = x′_{X_i} and x_{f(i)} ≠ x′_{f(i)}.
(Observe that R_i is interested only in the message x_{f(i)}, not in the whole vector x.)
Therefore, L corresponds to a (H, δ)-ECIC if and only if the following condition is satisfied: for all i ∈ [m] and for all x, x′ ∈ F_q^n such that x_{X_i} = x′_{X_i} and x_{f(i)} ≠ x′_{f(i)}, it holds that
d(xL, x′L) ≥ 2δ + 1.
Denote z = x′ − x. Then, the condition in (Equation 4) can be reformulated as follows: for all i ∈ [m] and for all z ∈ F_q^n such that z_{X_i} = 0 and z_{f(i)} ≠ 0, it holds that
wt(zL) ≥ 2δ + 1.
The equivalent condition is that for all z ∈ I(q, H),
wt(zL) ≥ 2δ + 1.
Since for z with supp(z) = K we have
zL = Σ_{i ∈ K} z_i L_i,
the condition ( ?) can be restated as
wt( Σ_{i ∈ K} z_i L_i ) ≥ 2δ + 1
for all K ∈ J(H) and for all choices of nonzero z_i ∈ F_q*, i ∈ K.
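The resulting criterion is easy to test exhaustively for tiny parameters. The sketch below (illustrative Python over F_2; the three-receiver instance is a toy example of mine, not one from the text) checks whether a candidate matrix L corrects δ errors by verifying that wt(zL) ≥ 2δ + 1 for every z that is nonzero on some demanded coordinate f(i) and zero on the corresponding side information set X_i.

```python
from itertools import product

def is_ecic(L, X, f, delta, q=2):
    """Check wt(zL) >= 2*delta + 1 for all z in I(q, H), where
    I(q, H) = { z : exists i with z_{f(i)} != 0 and z_{X_i} = 0 } (0-based)."""
    n, N = len(L), len(L[0])
    for z in product(range(q), repeat=n):
        relevant = any(z[f[i]] != 0 and all(z[j] == 0 for j in Xi)
                       for i, Xi in enumerate(X))
        if not relevant:
            continue
        zL = [sum(z[r] * L[r][c] for r in range(n)) % q for c in range(N)]
        if sum(1 for s in zL if s != 0) < 2 * delta + 1:
            return False
    return True

# Toy instance: 3 receivers, receiver i demands x_i and owns the other two.
X = [{1, 2}, {0, 2}, {0, 1}]
f = [0, 1, 2]
one_col = [[1], [1], [1]]        # the classic single-transmission index code
three_col = [[1, 1, 1]] * 3      # the same code, repeated three times
```

For this instance, the single all-ones column is a valid index code with no error correction, while repeating that column three times yields a code correcting one error.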
The next corollary follows from Lemma ? in a straightforward manner. It is not hard to see that the conditions stated in Lemma ? and in the corollary below are, in fact, equivalent.
The next corollary also follows directly from Lemma ? by considering an error-free setup, i.e. . It is easy to verify that the conditions stated in this corollary and in Lemma ? are equivalent, as expected.
Observe, however, that for general H, changing the order of rows in L can lead to ECIC's with different error-correcting capabilities. Therefore, the problem of designing an optimal linear ECIC is essentially the problem of finding the matrix L corresponding to that code. However, the minimum distance of the code generated by the rows of L is not necessarily a valid indicator of the quality of an ECIC. Sometimes, as Example ? shows, a matrix L with redundant rows yields a good ECIC.
4 The α-Bound and the κ-Bound
Let be an instance of the ICSI problem, and let be the corresponding side information hypergraph. Next, we introduce the following definitions for the hypergraph .
When m = n and f(i) = i for all i ∈ [m], the generalized independence number of H is equal to the maximum size of an acyclic induced subgraph of the corresponding side information graph. In particular, when the graph is symmetric, α(H) is its independence number. We prove the latter statement in the Appendix.
Next, we present a lower bound on the length of a (H, δ)-ECIC. We call this bound the α-bound.
Consider an n × N matrix L, which corresponds to a (H, δ)-ECIC. Let K be a maximum generalized independent set in H. Then, every nonempty subset of K belongs to J(H). Therefore,
wt( Σ_{i ∈ K′} z_i L_i ) ≥ 2δ + 1
for all nonempty K′ ⊆ K, and for all choices of nonzero z_i ∈ F_q*, i ∈ K′. Hence, the rows of L_K, namely L_i, i ∈ K, form a generator matrix of a code of length N, dimension |K| = α(H), and minimum distance at least 2δ + 1. Therefore,
N ≥ N_q[α(H), 2δ + 1].
Next, we assume the existence of a matrix satisfying the properties stated in the theorem. Let be a generator matrix of some code, where . We construct the matrix as follows. For , let
For every and for all choices of , , we have
where the last transition is due to the existence of such that
and the fact that ’s are linearly independent nonzero codewords of a code of minimum distance .
We conclude that the index code based on is capable of correcting errors. Therefore, .
The following proposition is based on the fact that the concatenation of a δ-error-correcting code with an optimal (non-error-correcting) index code yields a (H, δ)-ECIC.
Let L_1, which is an n × κ matrix, correspond to an optimal H-IC over F_q, where we denote
κ = κ(H).
Let G be a generator matrix of an optimal [N, κ, 2δ + 1] code, where
N = N_q[κ, 2δ + 1].
Consider a scheme where S broadcasts the vector xL_1G. If at most δ errors occur, then each receiver is able to recover the vector xL_1 by using the classical error-correcting code. Hence each R_i is able to recover x_{f(i)}. Therefore, for the index code based on
L = L_1G,
each receiver is capable of recovering the required message if the number of errors is less than or equal to δ. The length of the corresponding ECIC is N. Therefore,
N_q(H, δ) ≤ N_q[κ(H), 2δ + 1].
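The concatenated construction in this proof is a simple matrix product. The sketch below (illustrative Python over F_2, using a toy instance and names of my choosing) multiplies an optimal error-free index code matrix by the generator matrix of a classical code of distance 2δ + 1, here the [3, 1, 3] repetition code for δ = 1.

```python
def mat_mul(A, B, q=2):
    """Product of matrices A (n x k) and B (k x N) over F_q."""
    n, k, N = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) % q for j in range(N)]
            for i in range(n)]

# Toy instance: receiver i demands x_i and owns both other messages, so the
# single column L1 = (1, 1, 1)^T is an optimal (error-free) index code.
L1 = [[1], [1], [1]]
# Generator of the [3, 1, 3] repetition code: corrects delta = 1 error.
G = [[1, 1, 1]]
L = mat_mul(L1, G)   # the concatenated ECIC matrix, n x N
```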
By combining the results in Theorem ? and in Proposition ?, we obtain the following corollary.
It is shown in the example below that the inequalities in Corollary ? can be strict. In particular, it follows that mere application of an error-correcting code on top of an index code may fail to provide us with an optimal linear ECIC. This fact motivates the study of ECIC's in Sections 3–7.
When the graph is undirected (or symmetric), the following theorem holds.
When m = n and f(i) = i for all i ∈ [m], we have that α(H) = α(G) and κ(H) = minrk_q(G). Moreover, if the graph G is symmetric and satisfies α(G) = χ(Ḡ), then from Corollary ? we have
N_q(H, δ) = N_q[α(G), 2δ + 1]
for all δ, and the corresponding bounds in Corollary ? are tight.
Perfect graphs include families of graphs such as trees, bipartite graphs, interval graphs, and chordal graphs. If m = n, f(i) = i for all i ∈ [m], and the graph is perfect, then the bounds in Corollary ? are tight. For the full characterization of perfect graphs, the reader can refer to the literature on the strong perfect graph theorem.
5 The Singleton Bound
The following bound is analogous to the Singleton bound for classical linear error-correcting codes.
Let L be the matrix corresponding to some optimal (H, δ)-ECIC, of length N = N_q(H, δ). Let L′ be the matrix obtained by deleting any 2δ columns from L.
By Lemma ?, L satisfies
wt( Σ_{i ∈ K} z_i L_i ) ≥ 2δ + 1
for all K ∈ J(H) and all choices of nonzero z_i ∈ F_q*, i ∈ K. Since the deletion of 2δ columns decreases the weight of each such combination by at most 2δ, we deduce that the rows of L′ also satisfy
wt( Σ_{i ∈ K} z_i L′_i ) ≥ 1.
By Corollary ?, L′ corresponds to a linear H-IC. Therefore, by Lemma ?, part 2, L′ has at least κ(H) columns. We deduce that
N_q(H, δ) ≥ κ(H) + 2δ,
which concludes the proof.
The following corollary from Proposition ? and Theorem ? demonstrates that, for sufficiently large alphabets, a concatenation of a classical MDS error-correcting code with an optimal (non-error-correcting) index code yields an optimal ECIC. However, as was illustrated in Example ?, this does not hold for index coding schemes over small alphabets.
From Theorem ?, we have
N_q(H, δ) ≥ κ(H) + 2δ.
On the other hand, from Proposition ?,
N_q(H, δ) ≤ N_q[κ(H), 2δ + 1] = κ(H) + 2δ
for q ≥ κ(H) + 2δ − 1 (by taking doubly-extended Reed-Solomon (RS) codes). Therefore, for these q, ( ?) holds.
6 Random Codes
In this section we prove an inexplicit upper bound on the optimal length of ECIC's. The proof is based on constructing a random ECIC and analyzing its parameters.
We construct a random n × N matrix L over F_q, row by row. Each row is selected independently of the other rows, uniformly over F_q^N. Define vector spaces
for all . We also define the following events:
The event represents the situation when the receiver cannot recover . Then, by Corollary ?, the event is equivalent to . Therefore,
For a particular event , ,
There exists a matrix that corresponds to a (H, δ)-ECIC if the probability that at least one of these events occurs is less than 1. It is enough to require that the right-hand side of (Equation 6) is smaller than 1. By plugging in the expression in (Equation 7), we obtain a sufficient condition for the existence of a (H, δ)-ECIC over F_q:
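The random construction can also be run directly for tiny parameters. The sketch below (illustrative Python over F_2; it specializes, as an assumption of this sketch, to the instance with no side information, where the ECIC condition reduces to a minimum-weight condition on the row space) samples random matrices until one satisfying the condition is found.

```python
import random
from itertools import product

def min_nonzero_weight(L, q=2):
    """Minimum weight of a nonzero vector in the row space of L."""
    n, N = len(L), len(L[0])
    best = N + 1
    for z in product(range(q), repeat=n):
        if not any(z):
            continue
        zL = [sum(z[r] * L[r][c] for r in range(n)) % q for c in range(N)]
        best = min(best, sum(1 for s in zL if s != 0))
    return best

def random_ecic_search(n, N, delta, q=2, attempts=10000, seed=7):
    """Sample uniform random n x N matrices until one satisfies the
    no-side-information ECIC condition: every nonzero row combination
    has weight at least 2*delta + 1. Returns None on failure."""
    rng = random.Random(seed)
    for _ in range(attempts):
        L = [[rng.randrange(q) for _ in range(N)] for _ in range(n)]
        if min_nonzero_weight(L, q) >= 2 * delta + 1:
            return L
    return None
```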
7 Syndrome Decoding
Consider the (H, δ)-ECIC based on a matrix L. Suppose that the receiver R_i, i ∈ [m], receives the vector
y_i = xL + ε_i,
where xL is the codeword transmitted by S, and ε_i is the error pattern affecting this codeword.
In classical coding theory, the transmitted vector y, the received vector y_i, and the error pattern ε_i are related by y_i = y + ε_i. Therefore, if y_i is known to the receiver, then there is a one-to-one correspondence between the values of the unknown vectors y and ε_i. For index coding, however, this is no longer the case. The following theorem shows that, in order to recover the message x_{f(i)} from y_i using (Equation 8), it is sufficient to find just one vector from a set of possible error patterns. This set is defined as follows:
We henceforth refer to the set defined above as the set of relevant error patterns.
From (Equation 8), we have
If knows , then it is also able to determine
Since R_i has knowledge of the side information x_{X_i}, it is also able to determine the whole sum.
Suppose that R_i knows a vector belonging to the set of relevant error patterns. We show that R_i is then able to determine x_{f(i)}. Indeed, we re-write (Equation 9) as
The receiver can find some solution of the equation
This equality implies the desired conclusion (otherwise, by Corollary ?, the sum on the right-hand side would have nonzero weight). Hence, R_i is able to determine x_{f(i)}, as claimed.
We now describe a syndrome decoding algorithm for linear error-correcting index codes. From (Equation 9), we have
Let , and let be a parity check matrix of . We obtain that
Let be a column vector defined by
Observe that each is capable of determining . Then we can re-write (Equation 12) as
This leads us to the formulation of the following decoding procedure for the receiver R_i.
Input: , , .
Step 1: Compute the syndrome
Step 2: Find the lowest Hamming weight solution of the system
Step 3: Given that , solve the system for :
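The computational core of Step 2, finding a lowest-weight solution of a linear syndrome equation, can be sketched as follows (illustrative Python over F_2 for a classical parity-check matrix; the receiver-specific setup of the procedure above is omitted).

```python
from itertools import product

def syndrome(H, y, q=2):
    """Syndrome H y^T of a received vector y."""
    return tuple(sum(h * yi for h, yi in zip(row, y)) % q for row in H)

def min_weight_solution(H, s, q=2):
    """Step 2: a lowest-Hamming-weight e with H e^T = s (brute force)."""
    n = len(H[0])
    best = None
    for e in product(range(q), repeat=n):
        if syndrome(H, e, q) == s:
            w = sum(1 for x in e if x != 0)
            if best is None or w < best[0]:
                best = (w, e)
    return best[1]

# Parity-check matrix of the [3, 1, 3] binary repetition code.
H = [[1, 1, 0],
     [1, 0, 1]]
y = (1, 0, 1)    # transmitted (1, 1, 1) with a single error in position 1
e = min_weight_solution(H, syndrome(H, y))
```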
By Lemma ?, it is sufficient to prove that . Indeed, since
Hence, , and therefore,
for some and , .
Since is a solution of (Equation 14), and , we deduce that as well. Hence,
Therefore, by Corollary ?, . Hence, , as desired, and therefore .
8 Static Codes and Related Problems
8.1 Static Error-Correcting Index Codes
In the previous sections we focused on linear δ-error-correcting index codes for a particular instance of the ICSI problem. When some of the parameters m, n, X, and f are variable or not known, it is very likely that an error-correcting index code designed for the instance with particular values of these parameters cannot be used for instances with different values of some of these parameters. Therefore, it is interesting to design an error-correcting index code which is suitable for a family of instances of the ICSI problem.
Recall that an instance can be described by the side information hypergraph . For a set of instances , let
where is defined as in (Equation 3). We also define
The proof follows from Definition ? and Lemma ?.
Notice that when L is used for an instance with a smaller number of messages, the last rows of L are simply discarded.
One particular family of interest is the family that contains all instances in which each receiver owns at least t messages as its side information. More formally,
A δ-error-correcting index code which is static under this family will provide successful communication between the sender and the receivers in the presence of at most δ errors, despite a possible change of the collection of side information sets, a change of the set of receivers, and a change of the demand function, as long as each receiver still possesses at least t messages.
In the rest of this section, we assume that , and .
Let be an matrix that satisfies the -Property. We show that this is equivalent to the condition that corresponds to a -error-correcting linear index code, which is static under . By Lemma ?, it suffices to show that is the collection of all nonempty subsets of , whose cardinalities are not greater than .
Consider an instance in this family. For all i ∈ [m], we have |X_i| ≥ t and f(i) ∉ X_i, and thus we deduce that
|Y_i| = n − 1 − |X_i| ≤ n − 1 − t.
Hence by (Equation 3), the cardinality of each set in J(H) is at most
1 + (n − 1 − t) = n − t.
Therefore, due to (Equation 17), every set in J(H) has at most n − t elements.
It remains to show that every nonempty subset of [n] whose cardinality is at most n − t belongs to J(H). Consider an arbitrary subset B of [n], with 1 ≤ |B| ≤ n − t. Consider an instance with a receiver which requests a message indexed by an element of B and owns the messages indexed by [n] \ B. Since
|[n] \ B| ≥ t,
the proof follows.
8.2 Application: Weakly Resilient Functions
In this section we introduce the notion of weakly resilient functions. Hereafter, we restrict the discussion to the binary alphabet.
The applications of resilient functions include fault-tolerant distributed computing, quantum cryptographic key distribution, privacy amplification, and random sequence generation for stream ciphers. Connections between linear error-correcting codes and resilient functions were established in prior work.
Below we introduce the definition of a weakly resilient function, which is a weaker version of a resilient function.
Suppose that satisfies the -Property. Take any -subset . By Definition ?, the submatrix of is a generating matrix of the error-correcting code with the minimum distance . By Theorem ?, the function defined by is -resilient. Since is an arbitrary -subset of , the function is -weakly -resilient.
Conversely, assume that the function is -weakly -resilient. Take any subset , . Then the function defined by is -resilient. Therefore, by Theorem ?, is a generating matrix of a linear code with minimum distance . Since is an arbitrary -subset of , by Proposition ? satisfies the -Property.
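The resilience property used in this argument can be verified exhaustively for small input sizes. The sketch below (illustrative Python over F_2; the function is the toy linear map given by the generator matrix (1 1 1) of the [3, 1, 3] repetition code) checks t-resilience directly from the definition.

```python
from itertools import combinations, product
from collections import Counter

def is_t_resilient(f, n, k, t):
    """f : F_2^n -> F_2^k is t-resilient if, whenever any t input
    coordinates are fixed to arbitrary values, the output is uniformly
    distributed as the remaining n - t inputs range over all values."""
    for pos in combinations(range(n), t):
        for vals in product((0, 1), repeat=t):
            counts = Counter()
            for y in product((0, 1), repeat=n):
                if all(y[p] == v for p, v in zip(pos, vals)):
                    counts[f(y)] += 1
            # Uniform means: all 2^k outputs occur, all equally often.
            if len(counts) != 2 ** k or len(set(counts.values())) != 1:
                return False
    return True

# Toy linear example: the XOR of three bits, f(y) = y G^T with G = (1 1 1),
# the generator of the [3, 1, 3] repetition code; the classical connection
# between codes and resilient functions predicts (d - 1) = 2-resilience.
f = lambda y: (sum(y) % 2,)
```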
8.3 Bounds and Constructions
In this section we study the problem of constructing a matrix satisfying the -Property. Such a matrix with the minimal possible number of columns is called optimal. First, observe that from Proposition ? we have
that J(H) is the set of all nonempty subsets of [n] of cardinality at most n − t. Next, consider an instance satisfying
where H is the side information hypergraph corresponding to that instance. Such an instance can be constructed as follows. For each such subset, we introduce a receiver which requests a message indexed by an element of the subset, and has the complementary set of messages as its side information. It is straightforward to verify that we indeed obtain an instance satisfying (Equation 18). The problem of designing an optimal matrix satisfying the -Property then becomes equivalent to the problem of finding an optimal ECIC for this instance. Thus, the optimal length of such an ECIC is equal to the number of columns in an optimal matrix which satisfies the -Property.
The corresponding α-bound and κ-bound for this case can be stated as follows.
The first inequality follows from the α-bound and from the fact that every nonempty subset of [n] of cardinality at most n − t belongs to J(H), which is due to (Equation 18).
For the second inequality, it suffices to bound κ(H). By Corollary ?, an n × N matrix L corresponds to an H-IC if and only if the rows of L indexed by K are linearly independent for every K ∈ J(H). Since J(H) is the set of all nonempty subsets of [n] of cardinality at most n − t, this is equivalent to saying that every set of at most n − t rows of L is linearly independent. This condition is equivalent to the condition that