
Structural Properties of Index Coding Capacity

This paper was presented in part at the IEEE International Symposium on Information Theory, Hong Kong, June 14–19, 2015, and at the IEEE Information Theory Workshop, Jeju Island, Korea, Oct. 11–15, 2015.

Fatemeh Arbabjolfaei and Young-Han Kim
Department of Electrical and Computer Engineering
University of California, San Diego
Email: {farbabjo, yhk}@ucsd.edu
Abstract

The index coding capacity is investigated through its structural properties. First, the capacity is characterized in three new multiletter expressions involving the clique number, Shannon capacity, and Lovász theta function of the confusion graph, a notion introduced by Alon, Hassidim, Lubetzky, Stav, and Weinstein. The main idea is that every confusion graph can be decomposed into a small number of perfect graphs. The clique-number characterization is then utilized to show that the capacity is multiplicative under the lexicographic product of side information graphs, establishing the converse to an earlier result by Blasiak, Kleinberg, and Lubetzky. Second, sufficient and necessary conditions on the criticality of an index coding instance, namely, on whether side information can be removed without reducing the capacity, are established based on the notion of a unicycle, providing a partial answer to the question first raised by Tahmasbi, Shahrasbi, and Gohari. The necessary condition, along with other existing conditions, can be used to eliminate noncritical instances that do not need to be investigated. As an application of the established multiplicativity and criticality results, only 10,634 (0.69%) out of 1,540,944 nonisomorphic six-message index coding instances are identified for further investigation, among which the capacity remains unknown for 119 instances.

I Introduction

The index coding problem is a canonical problem in network information theory in which a server has a tuple of $n$ messages $x_1, \ldots, x_n$, $x_i \in \{0,1\}^{t_i}$, and is connected to $n$ receivers via a noiseless broadcast channel. Suppose that receiver $i$ is interested in message $x_i$ and has a set of other messages $x(A_i) = (x_j : j \in A_i)$, $A_i \subseteq [n] \setminus \{i\}$, as side information. Assuming that the server knows the side information sets $A_1, \ldots, A_n$, one wishes to characterize the minimum amount of information the server needs to broadcast and to find the optimal coding scheme that achieves this minimum.

More precisely, a $(t_1, \ldots, t_n, r)$ index code is defined by

  • an encoder $\phi \colon \prod_{i=1}^{n} \{0,1\}^{t_i} \to \{0,1\}^{r}$ that maps the $n$-tuple of messages to an $r$-bit index, and

  • $n$ decoders, where decoder $\psi_i$ maps the received index $\phi(x_1, \ldots, x_n)$ and the side information $x(A_i)$ back to $x_i$ for $i \in [n]$.

Thus, for every $i \in [n]$ and every message tuple $(x_1, \ldots, x_n)$,

$\psi_i\big(\phi(x_1, \ldots, x_n), x(A_i)\big) = x_i.$

A $(t, \ldots, t, r)$ code is written as a $(t, r)$ code. A rate tuple $(R_1, \ldots, R_n)$ is said to be achievable for the index coding problem if there exists a $(t_1, \ldots, t_n, r)$ index code such that

$R_i \le \frac{t_i}{r}, \quad i \in [n].$

The capacity region $\mathscr{C}$ of the index coding problem is defined as the closure of the set of achievable rate tuples. The symmetric capacity (or the capacity in short) of the index coding problem is defined as

$C_{\mathrm{sym}} = \sup\{R : (R, \ldots, R) \in \mathscr{C}\},$

and its reciprocal $\beta = 1/C_{\mathrm{sym}}$ is referred to as the broadcast rate. Letting $\beta_t$ denote the minimum number $r$ such that a $(t, r)$ index code exists, the broadcast rate can be equivalently defined as

$\beta = \inf_t \frac{\beta_t}{t} = \lim_{t \to \infty} \frac{\beta_t}{t},$ (1)

where the equality follows by Fekete’s lemma [1] and the subadditivity $\beta_{t_1 + t_2} \le \beta_{t_1} + \beta_{t_2}$.

The goal is to characterize the capacity region or the symmetric capacity for the general index coding problem and to determine the coding scheme that can achieve it.

Any instance of the index coding problem is fully determined by the side information sets $A_1, \ldots, A_n$, and is represented compactly as $(1 \,|\, A_1), (2 \,|\, A_2), \ldots, (n \,|\, A_n)$, listing for each receiver the messages it knows as side information. For example, a 3-message index coding problem with side information sets $A_1$, $A_2$, and $A_3$ is represented as $(1 \,|\, A_1),\ (2 \,|\, A_2),\ (3 \,|\, A_3)$.

The problem can be equivalently specified by a directed graph $G = (V, E)$ with $n$ vertices, commonly referred to as the side information graph. Each vertex of the side information graph corresponds to a receiver (and its associated message) and there is a directed edge $(i, j) \in E$ if and only if (iff) receiver $i$ knows message $x_j$ as side information, i.e., $j \in A_i$ (see Fig. 1). Throughout the paper, we identify an instance of the index coding problem with its side information graph and often write “index coding problem $G$.” We also denote the broadcast rate and the capacity region of problem $G$ by $\beta(G)$ and $\mathscr{C}(G)$, respectively.

Fig. 1: The graph representation of a 3-message index coding problem with side information sets $A_1$, $A_2$, and $A_3$.
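To make the graph representation concrete, the following Python sketch (a minimal illustration of our own; the side information sets below are placeholders, not the instance in Fig. 1) builds the side information graph as a directed edge list from the sets $A_1, \ldots, A_n$.

```python
def side_information_graph(A):
    """Build the side information graph from side information sets.
    A[i] is the set of messages known to receiver i (0-indexed);
    the directed edge (i, j) means receiver i knows message j."""
    return [(i, j) for i, Ai in enumerate(A) for j in sorted(Ai)]

# A hypothetical 3-message instance, chosen only to illustrate the notation.
A = [{1}, {0, 2}, {0}]
print(side_information_graph(A))   # [(0, 1), (1, 0), (1, 2), (2, 0)]
```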

The problem of broadcasting to multiple receivers with different side information traces back to the work by Celebiler and Stette [2], Wyner, Wolf, and Willems [3, 4], Yeung [5], and Birk and Kol [6, 7]. The current problem formulation is due to Birk and Kol [6, 7]. This problem has been shown to be closely related to many other important problems in network information theory, such as network coding [8, 9, 10], locally recoverable distributed storage [11, 12, 13], guessing games on directed graphs [8, 14, 13], and the zero-error capacity of channels [15]. In addition, index coding has its own applications in diverse areas ranging from satellite communication [2, 3, 4, 5, 6, 7] and multimedia distribution [16] to interference management [17] and coded caching [18, 19]. Owing to this significance, the index coding problem has been broadly studied over the past two decades in several disciplines, including graph theory, coding theory, and information theory, and various bounds have been established on the capacity region and the broadcast rate. Despite all these efforts, however, the problem is still open in general and the capacity in a computable (single-letter) expression is known only for a handful of special cases (see, for example, [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 17, 30, 14, 31]).

Deviating from the common approach of finding the capacity by establishing tight upper and lower bounds, we attack the capacity itself more directly through its structural properties, using several graph-theoretic tools. The main contributions are summarized as follows:

  • A new multiletter characterization of the capacity (Theorem 2). Paralleling the multiletter characterization of the capacity (broadcast rate) via the chromatic number of the confusion graph [32], we establish a multiletter characterization via the clique number of the confusion graph. As a corollary, we establish a nonasymptotic upper bound on the broadcast rate via the Lovász theta function of the confusion graph that can be computed more efficiently than the existing upper bound using the chromatic number.

  • Multiplicativity of the capacity under the lexicographic product (Theorem 5). As another corollary of the aforementioned clique-number characterization, we show that if the side information graph is the lexicographic product of two graphs, the capacity is the product of the capacities of the two component graphs, completing an earlier result by Blasiak, Kleinberg, and Lubetzky [33].

  • Conditions on the criticality of an index coding instance (Theorem 6 and Proposition 10). Providing a partial answer to the question raised by Tahmasbi, Shahrasbi, and Gohari [34], we establish conditions under which the removal of an edge reduces the capacity. Both the sufficient condition and the necessary condition are based on the notion of a unicycle, which is closely related to the maximum acyclic induced subgraph bound on the capacity.

The rest of the paper is organized as follows. Sections II and III review graph-theoretic preliminaries and some of the previously known bounds on the capacity, respectively. In Section IV, we introduce the notion of confusion graph associated with a given index coding problem and establish several properties including a tight bound on the chromatic number of a confusion graph in terms of its clique number. In Section V, we characterize the broadcast rate of a general index coding problem via asymptotic expressions involving the clique number, Shannon capacity, and Lovász theta function of the confusion graph. Nonasymptotic upper bounds on the broadcast rate are also established in terms of the Shannon capacity and Lovász theta function of the confusion graph. Based on the clique-number characterization, we prove in Section VI that the broadcast rate is multiplicative under the lexicographic product of side information graphs. In Section VII, we investigate the criticality problem and present sufficient and necessary conditions based on the notion of unicycle. Section VIII concludes the paper with an application of the established structural properties in computing the capacity for index coding problems with six messages.

II Mathematical Preliminaries

Throughout the paper, a graph $G = (V, E)$ (without a qualifier) means a directed, finite, and simple graph, where $V$ is the set of vertices (nodes) and $E$ is the set of directed edges. A graph $G$ is said to be unidirectional if $(i, j) \in E$ implies $(j, i) \notin E$. Similarly, $G$ is said to be bidirectional if $(i, j) \in E$ implies $(j, i) \in E$. Given $G$, its associated undirected graph $U(G)$ is defined by identifying the edge pairs $(i, j)$ and $(j, i)$. A bidirectional graph is sometimes identified with its undirected graph. The complement of the graph $G = (V, E)$ is the graph $\overline{G}$ with $V(\overline{G}) = V$ and $(i, j) \in E(\overline{G})$ iff $(i, j) \notin E$. For any $S \subseteq V$, $G|_S$ denotes the subgraph induced by $S$, i.e., $V(G|_S) = S$ and $E(G|_S) = \{(i, j) \in E : i, j \in S\}$.

An independent set of a graph $G$ is a set of vertices with no edge among them. The independence number $\alpha(G)$ is the size of the largest independent set of the graph $G$. A clique of a graph $G$ is a set $S$ of vertices such that there is a (directed) edge from every vertex in $S$ to every other vertex in $S$. Thus, $S$ is a clique of $G$ iff it is an independent set of $\overline{G}$. The clique number $\omega(G)$ is the size of the largest clique of the graph $G$. It is easy to see that

$\omega(G) = \alpha(\overline{G})$ (2)

for any directed or undirected graph $G$. A Hamiltonian cycle of a graph is a cycle that visits each vertex exactly once. A graph possessing a Hamiltonian cycle is said to be Hamiltonian.
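As a quick aid to intuition, the following brute-force Python sketch (illustrative only; exponential time, meant for the small graphs considered later in this paper) computes $\alpha(G)$ and, via the identity $\omega(G) = \alpha(\overline{G})$ in (2), the clique number of a directed graph.

```python
from itertools import combinations

def independence_number(n, edges):
    """alpha(G): size of the largest vertex set with no edge (in either
    direction) between any two of its members."""
    adj = set(edges) | {(v, u) for (u, v) in edges}
    for r in range(n, 0, -1):
        for S in combinations(range(n), r):
            if all((u, v) not in adj for u, v in combinations(S, 2)):
                return r
    return 0

def clique_number(n, edges):
    """omega(G) = alpha(complement of G), per (2)."""
    comp = {(u, v) for u in range(n) for v in range(n)
            if u != v and (u, v) not in edges}
    return independence_number(n, comp)
```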

II-A Chromatic Number

A (vertex) coloring of an undirected (finite simple) graph $G$ is a mapping that assigns a color to each vertex such that no two adjacent vertices share the same color. The chromatic number $\chi(G)$ is the minimum number of colors such that a coloring of the graph exists. More generally, a $b$-fold coloring assigns a set of $b$ colors to each vertex such that no two adjacent vertices share any color. The $b$-fold chromatic number $\chi_b(G)$ is the minimum number of colors such that a $b$-fold coloring exists. The fractional chromatic number of the graph is defined as

$\chi_f(G) = \lim_{b \to \infty} \frac{\chi_b(G)}{b} = \inf_b \frac{\chi_b(G)}{b},$

where the limit exists since $\chi_b(G)$ is subadditive in $b$. Consequently,

$\chi_f(G) \le \chi(G).$ (3)

Let $\mathcal{J}$ be the collection of all independent sets in $G$. The chromatic number and the fractional chromatic number are also characterized via the following optimization problem:

minimize $\sum_{S \in \mathcal{J}} x_S$ subject to $\sum_{S \in \mathcal{J} : i \in S} x_S \ge 1$ for every $i \in V$.

When the optimization variables $x_S$, $S \in \mathcal{J}$, take integer values in $\{0, 1\}$, the (integral) solution is the chromatic number. If this constraint is relaxed to $x_S \in [0, 1]$, then the (rational) solution is the fractional chromatic number [35]. The (fractional) chromatic number can be related to the independence and clique numbers.
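As a concrete illustration of this linear program, the following sketch (a brute-force toy of our own that enumerates all independent sets, so it is feasible only for small graphs) computes the fractional chromatic number with scipy; the integral version would additionally restrict the variables to $\{0, 1\}$.

```python
from itertools import combinations
from scipy.optimize import linprog

def fractional_chromatic_number(n, edges):
    """Solve: minimize sum of x_S over independent sets S,
    subject to sum_{S : i in S} x_S >= 1 for every vertex i, x_S >= 0."""
    adj = set(edges) | {(v, u) for (u, v) in edges}
    ind_sets = [S for r in range(1, n + 1) for S in combinations(range(n), r)
                if all((u, v) not in adj for u, v in combinations(S, 2))]
    c = [1.0] * len(ind_sets)                    # objective: total weight
    A_ub = [[-1.0 if i in S else 0.0 for S in ind_sets] for i in range(n)]
    b_ub = [-1.0] * n                            # cover every vertex
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.fun

# The 5-cycle has chi = 3 but chi_f = 5/2.
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)}
print(fractional_chromatic_number(5, edges))     # ≈ 2.5
```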

Lemma 1 (Scheinerman and Ullman [35]).

For any undirected graph $G$ with $n$ vertices,

$\chi_f(G) \le \chi(G) \le (1 + \ln n)\, \chi_f(G).$

Lemma 2.

For any graph $G$, we have

$\omega(G) \le \chi_f(G) \le \chi(G).$

An undirected graph $G$ is said to be perfect if for every induced subgraph $G|_S$, $S \subseteq V$, the clique number equals the chromatic number, i.e., $\omega(G|_S) = \chi(G|_S)$. Perfect graphs can be characterized as follows.

Proposition 1 (Chudnovsky, Robertson, Seymour, and Thomas [36]).

A graph is perfect iff no induced subgraph of it is an odd cycle of length at least five (odd hole) or the complement of one (odd antihole).

Let $G$ be an undirected graph with $V = [n]$. For each clique $S$ of $G$, the incidence vector $\nu(S)$ is an $n$-dimensional vector whose $i$th component is equal to 1 if $i \in S$ and 0 otherwise. The clique polytope of $G$ is defined as the convex hull

$P(G) = \mathrm{conv}\{\nu(S) : S \text{ is a clique of } G\}.$ (4)

Another (convex) polytope associated with $G$ is defined as

$Q(G) = \Big\{x \in \mathbb{R}^n : x \ge 0 \text{ and } \textstyle\sum_{i \in S} x_i \le 1 \text{ for every independent set } S \text{ of } G\Big\}.$ (5)

Since a clique and an independent set share at most one vertex, every incidence vector of a clique satisfies $\sum_{i \in S} \nu_i \le 1$ for every independent set $S$, and thus $P(G) \subseteq Q(G)$ for every $G$. Lovász’s perfect graph theorem states that equality holds iff $G$ is perfect.

Lemma 3 (Lovász [37]).

For any graph $G$ the following statements are equivalent:

  • $G$ is perfect.

  • $P(G) = Q(G)$.

  • $\overline{G}$ is perfect.

We now state a result on chromatic numbers that will be useful later. The chromatic number of a graph can be upper bounded by decomposing it into smaller graphs. The following decomposition result will be proved in Appendix A.

Lemma 4.

Let $G_1$ and $G_2$ be two undirected graphs on the same set of vertices $V$. Consider the graph $G$ defined on the same vertex set in which each edge belongs either to $G_1$ or to $G_2$, i.e., $E(G) \subseteq E(G_1) \cup E(G_2)$. Then

$\chi(G) \le \chi(G_1)\, \chi(G_2).$

II-B Graph Products

Generally speaking, a graph product is a binary operation on two graphs that produces a graph on the Cartesian product of the original vertex sets with the edge set constructed from the original edge sets according to certain rules. In the following, $u \sim v$ denotes that there exists an edge between $u$ and $v$.

Given two undirected graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$, the disjunctive product $G_1 \vee G_2$ [38, 35] is defined as $V(G_1 \vee G_2) = V_1 \times V_2$ and $(u_1, u_2) \sim (v_1, v_2)$ iff

$u_1 \sim v_1$ in $G_1$ or $u_2 \sim v_2$ in $G_2$.

Throughout the paper, $G^{\vee s}$ denotes the disjunctive product of $s$ copies of $G$.

Given two undirected graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$, the strong product $G_1 \boxtimes G_2$ [39] is defined as $V(G_1 \boxtimes G_2) = V_1 \times V_2$ and $(u_1, u_2) \sim (v_1, v_2)$ iff

$u_1 = v_1$ and $u_2 \sim v_2$,
or $u_1 \sim v_1$ and $u_2 = v_2$,
or $u_1 \sim v_1$ and $u_2 \sim v_2$.

Throughout the paper, $G^{\boxtimes s}$ denotes the strong product of $s$ copies of $G$. The following lemma elucidates the relation between the disjunctive product and the strong product.

Lemma 5.

For any two undirected graphs $G_1$ and $G_2$,

$\overline{G_1 \vee G_2} = \overline{G_1} \boxtimes \overline{G_2}.$

Given two graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$, the lexicographic product $G_1 \circ G_2$ [39] is defined as $V(G_1 \circ G_2) = V_1 \times V_2$ and $((u_1, u_2), (v_1, v_2)) \in E(G_1 \circ G_2)$ iff

$(u_1, v_1) \in E_1$, or $u_1 = v_1$ and $(u_2, v_2) \in E_2$.

The lexicographic product can be thought of as replacing each vertex of $G_1$ with a copy of $G_2$. Therefore, the edges among the vertices of each copy of $G_2$ remain the same as in $G_2$ and there exists a directed edge from every vertex in copy $u_1$ of $G_2$ to every vertex in copy $v_1$ of $G_2$ iff $(u_1, v_1) \in E_1$ (see Fig. 2 for an example).

Fig. 2: Graphs $G_1$ and $G_2$ and their lexicographic product $G_1 \circ G_2$. The bold arrows indicate that there is an edge from every vertex in the circle attached to the tail of the arrow to every vertex in the circle attached to the head of the arrow.

The lexicographic product can be generalized as follows. Let $G$ be a graph with $n$ vertices and $G_1, \ldots, G_n$ be graphs with $n_1, \ldots, n_n$ vertices, respectively. The generalized lexicographic product $G \circ (G_1, \ldots, G_n)$ is defined to be a graph on $n_1 + \cdots + n_n$ vertices in which vertex $i$ of $G$ is replaced with $G_i$, i.e., the edges among the vertices of each $G_i$ remain the same as before and there is a directed edge from every vertex of $G_i$ to every vertex of $G_j$ iff $(i, j) \in E(G)$.
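The following Python sketch (a minimal illustration in our own notation, representing a graph as a vertex set and a directed edge set) implements the lexicographic product exactly as defined above.

```python
from itertools import product

def lexicographic_product(G1, G2):
    """Directed lexicographic product G1 o G2 on vertex pairs:
    (u1, u2) -> (v1, v2) iff u1 -> v1 in G1, or u1 == v1 and u2 -> v2 in G2."""
    (V1, E1), (V2, E2) = G1, G2
    V = set(product(V1, V2))
    E = {((u1, u2), (v1, v2))
         for (u1, u2) in V for (v1, v2) in V
         if (u1, v1) in E1 or (u1 == v1 and (u2, v2) in E2)}
    return V, E

# Replacing each vertex of a 2-cycle G1 with a copy of a single edge G2.
G1 = ({0, 1}, {(0, 1), (1, 0)})
G2 = ({'a', 'b'}, {('a', 'b')})
V, E = lexicographic_product(G1, G2)
print(len(V), len(E))   # 4 vertices; 2 within-copy + 8 cross edges = 10
```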

II-C Shannon Capacity of a Graph and Lovász Theta Function

Consider a graph $G$ whose vertices represent input symbols of a noisy channel and in which two vertices are connected iff the corresponding channel inputs are confusable, i.e., they may result in the same channel output. The goal is to find the zero-error capacity of the channel represented by the graph $G$. If we are limited to use the channel only once, then we can send up to $\log \alpha(G)$ bits without an error. However, if we are allowed to use the channel $s$ times, then we can construct the following graph to capture the confusabilities. Assign each $s$-tuple of the input symbols to a vertex and connect the vertices for two distinct tuples $(u_1, \ldots, u_s)$ and $(v_1, \ldots, v_s)$ iff for every $i$, $u_i = v_i$ or $u_i \sim v_i$ in $G$. We can easily check that the resulting graph is the strong product $G^{\boxtimes s}$. Thus, by using the channel $s$ times, we can send $\log \alpha(G^{\boxtimes s})$ bits without an error. Based on this observation [40], the Shannon capacity of a graph is defined as

$\Theta(G) = \lim_{s \to \infty} \frac{\log \alpha(G^{\boxtimes s})}{s} = \sup_s \frac{\log \alpha(G^{\boxtimes s})}{s}.$ (6)

In other words, $\Theta(G)$ indicates the number of bits per input symbol that can be sent through the channel without error. By definition,

$\Theta(G) \ge \log \alpha(G).$ (7)

Shannon [40] showed that $\Theta(G) = \log \alpha(G)$ for perfect graphs. The equality does not hold in general, however. In fact, computing the Shannon capacity of a general graph is a very hard problem. Lovász [41] derived an upper bound on the Shannon capacity, referred to as the Lovász theta function, which is easily computable and suffices to determine the Shannon capacity of some graphs. Before defining the Lovász theta function, we need the following definition. An orthonormal representation of an undirected graph $G$ with $n$ vertices is a set of unit vectors $u_1, \ldots, u_n \in \mathbb{R}^d$ such that if $i$ and $j$ are nonadjacent vertices of $G$, then $u_i$ and $u_j$ are orthogonal, i.e., $u_i^T u_j = 0$. For example, a set of $n$ pairwise orthogonal unit vectors is an orthonormal representation of any undirected $n$-node graph. The value of an orthonormal representation is defined as

$\min_{c} \max_{1 \le i \le n} \frac{1}{(c^T u_i)^2},$

where the minimum is over all unit vectors $c$. The unit vector $c$ attaining the minimum is referred to as the handle of the representation. The Lovász theta function of $G$, denoted as $\vartheta(G)$, is defined to be the minimum value over all orthonormal representations of $G$. A representation is said to be optimal if it attains this minimum.

Lemma 6 (Lovász [41]).

For any undirected graph $G$,

$\Theta(G) \le \log \vartheta(G).$

By (2), (7), Lemma 6, and Theorem 10 in [41], the Lovász theta function is sandwiched by other graph-theoretic quantities that are NP-hard to compute.

Lemma 7.

For any undirected graph $G$,

$\alpha(G) \le \vartheta(G) \le \chi_f(\overline{G}) \le \chi(\overline{G}).$

However, the Lovász theta function is computable in time polynomial in the number of vertices [42].
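For concreteness, here is a short sketch of one standard semidefinite-programming formulation of $\vartheta(G)$ (maximize $\langle J, X \rangle$ subject to $\mathrm{tr}(X) = 1$, $X_{ij} = 0$ on edges, and $X \succeq 0$), using the cvxpy modeling package; this formulation is a well-known equivalent of Lovász's definition above, not necessarily the form used in [41] or [42].

```python
import cvxpy as cp

def lovasz_theta(n, edges):
    """theta(G) via SDP: maximize the sum of all entries of X subject to
    trace(X) = 1, X[i, j] = 0 for every edge (i, j), and X PSD."""
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    constraints += [X[i, j] == 0 for (i, j) in edges]
    return cp.Problem(cp.Maximize(cp.sum(X)), constraints).solve()

# Sanity check: theta of the 5-cycle is sqrt(5) ≈ 2.236 (Lovász).
print(lovasz_theta(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```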

III Bounds on the Capacity

The simplest approach to index coding is a coding scheme by Birk and Kol [6] that partitions the side information graph by cliques and transmits the binary sums (parities) of all the messages in each clique.

Proposition 2 (Clique covering bound).

Let $\chi(\overline{G})$ denote the minimum number of cliques that partition $V(G)$, or equivalently, the chromatic number of the complement $\overline{G}$, which is the solution to the integer program

minimize $\sum_{S \in \mathcal{K}} x_S$ subject to $\sum_{S \in \mathcal{K} : i \in S} x_S \ge 1$, $i \in V$, and $x_S \in \{0, 1\}$, (8)

where $\mathcal{K}$ is the collection of all cliques in $G$. Then for any index coding problem $G$, $\beta(G) \le \chi(\overline{G})$.

This bound, which is achieved by time division over a clique partition, has been extended in several directions. First, Birk and Kol [6] showed that one can use an MDS code over a sufficiently large finite field and perform time division over arbitrary subgraphs (partial cliques) instead of cliques. The number of parity symbols needed for a subgraph $G|_S$ is characterized by the difference between the number of vertices in $S$ and the minimum indegree within $G|_S$.

Proposition 3 (Partial clique covering bound).

If $S_1, \ldots, S_m$ partition $V(G)$, then the optimal broadcast rate is upper bounded by

$\beta(G) \le \sum_{j=1}^{m} \big(|S_j| - \delta_j\big),$ (9)

where $\delta_j$ is the minimum indegree of $G|_{S_j}$, and thus by

$\beta(G) \le \min \sum_{j=1}^{m} \big(|S_j| - \delta_j\big),$

where the minimum is over all partitions.

Remark 1.

If the graph $G$ with $n$ vertices is Hamiltonian, then the minimum indegree is at least one and thus $\beta(G) \le n - 1$, or equivalently, the symmetric rate $1/(n-1)$ is achievable for problem $G$.

By the standard time-sharing argument, Blasiak, Kleinberg, and Lubetzky [22] extended the clique covering bound to the fractional clique covering bound $\beta(G) \le \chi_f(\overline{G})$, which is equivalent to the fractional chromatic number of $\overline{G}$, namely, the solution to the linear program obtained by relaxing the integer constraint in (8) to $x_S \in [0, 1]$.

Remark 2.

The integral, partial, and fractional clique covering bounds can be readily extended to the corresponding inner bounds on the capacity region. For example, by fractional clique covering, a rate tuple $(R_1, \ldots, R_n)$ is achievable for the index coding problem $G$ if there exists $\lambda_S \ge 0$, $S \in \mathcal{K}$, with $\sum_{S \in \mathcal{K}} \lambda_S \le 1$ such that

$R_i \le \sum_{S \in \mathcal{K} : i \in S} \lambda_S, \quad i \in [n].$ (10)

Tighter bounds can be found in [25, 26, 43, 28]. In this paper, we only need the simpler integral, partial, and fractional clique covering bounds.

As for bounding the broadcast rate from below, Bar-Yossef, Birk, Jayram, and Kol [20] proposed the following.

Proposition 4 (Maximum acyclic induced subgraph (MAIS) bound).

For any index coding problem $G$,

$\beta(G) \ge \max\{|S| : S \subseteq V \text{ and } G|_S \text{ is acyclic}\}.$

Remark 3.

Since every independent set is acyclic, Proposition 4 implies that for any $G$, $\beta(G) \ge \alpha(G)$.

Remark 4.

When $G$ is bidirectional (undirected) and perfect, we have $\alpha(G) = \omega(\overline{G}) = \chi(\overline{G})$. Hence, the upper bound of Proposition 2 matches the lower bound of Remark 3 and the broadcast rate is known [20].

Remark 5.

The MAIS bound can be generalized to an outer bound on the capacity region [25] as follows. If a rate tuple $(R_1, \ldots, R_n)$ is achievable for index coding problem $G$, then

$\sum_{i \in S} R_i \le 1$ (11)

for all $S \subseteq V$ such that $G|_S$ is acyclic. This bound is a special case of the polymatroidal outer bound [44, 45, 33].

Remark 6.

When $G$ is bidirectional (undirected), an induced subgraph is acyclic iff its vertex set is independent, so the polytope $Q(G)$ associated with $G$ in (5) is equivalent to the MAIS outer bound in (11). It is also easy to see that the rate tuple given by each incidence vector of a clique of $G$ is achievable by clique covering and thus the polytope $P(G)$ associated with $G$ in (4) is achievable by fractional clique covering. Therefore, by Lemma 3, if $G$ is bidirectional and perfect, then the capacity region is equal to the MAIS outer bound in (11), which is achieved by fractional clique covering [14].

IV Confusion Graphs

The notion of confusion graph for the index coding problem was originally introduced by Alon, Hassidim, Lubetzky, Stav, and Weinstein [32]. In the context of guessing games, an equivalent notion was introduced independently by Gadouleau and Riis [46]. Consider a directed graph $G$ with $V = [n]$, and let $\mathbf{t} = (t_1, \ldots, t_n)$ be a length-$n$ tuple of positive integers. Two tuples $x, y \in \{0,1\}^{t_1} \times \cdots \times \{0,1\}^{t_n}$ are said to be confusable at position $k$ of node $i$ if $x_{ik} \ne y_{ik}$ and $x_j = y_j$ for all $j \in A_i$.

Given a directed graph $G$ and a length-$n$ integer tuple $\mathbf{t}$, the confusion graph at position $k$ of node $i$ is an undirected graph $\Gamma_{i,k}(G)$ with $2^{t_1 + \cdots + t_n}$ vertices such that every vertex corresponds to a binary tuple $x \in \{0,1\}^{t_1} \times \cdots \times \{0,1\}^{t_n}$ and two vertices are connected iff the corresponding tuples are confusable at position $k$ of receiver $i$.

Aggregating over all positions, we say that $x$ and $y$ are confusable if they are confusable at some position $k$ of some node $i$. The confusion graph $\Gamma_{\mathbf{t}}(G)$ is defined as before based on confusion between each pair of vertices, or equivalently, as the edge union

$\Gamma_{\mathbf{t}}(G) = \bigcup_{i \in [n]} \bigcup_{k \in [t_i]} \Gamma_{i,k}(G).$ (12)

If $t_1 = \cdots = t_n = t$, then $\Gamma_{\mathbf{t}}(G)$ is simply denoted by $\Gamma_t(G)$. Fig. 3 shows the component confusion graphs $\Gamma_{1,1}$, $\Gamma_{2,1}$, and $\Gamma_{3,1}$ as well as the confusion graph $\Gamma_1(G)$ corresponding to $\mathbf{t} = (1, 1, 1)$ for the graph $G$ in Fig. 1.

Fig. 3: Confusion graphs for the directed graph shown in Fig. 1 corresponding to the integer tuple $\mathbf{t} = (1, 1, 1)$. (a) $\Gamma_{1,1}$. (b) $\Gamma_{2,1}$. (c) $\Gamma_{3,1}$. (d) $\Gamma_1(G)$.
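To make the definition concrete, the following Python sketch (an illustration in our own notation, reusing the placeholder side information sets from Section I) constructs $\Gamma_t(G)$ by brute force for equal message lengths $t$. Since the number of vertices is $2^{nt}$, this is feasible only for very small instances.

```python
from itertools import combinations, product

def confusion_graph(A, t):
    """Gamma_t(G) for binary messages of length t. A[i] is the side
    information set of receiver i; each vertex is an n-tuple of t-bit
    messages, encoded as integers in [0, 2^t)."""
    n = len(A)
    V = list(product(range(2 ** t), repeat=n))
    def confusable(x, y):
        # x, y are confusable if some receiver i sees identical side
        # information but different desired messages.
        return any(x[i] != y[i] and all(x[j] == y[j] for j in A[i])
                   for i in range(n))
    E = [(x, y) for x, y in combinations(V, 2) if confusable(x, y)]
    return V, E

A = [{1}, {0, 2}, {0}]          # placeholder 3-message instance (0-indexed)
V, E = confusion_graph(A, t=1)
print(len(V), len(E))           # 8 vertices and the confusable pairs
```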

By Lemma 4 and (12), the chromatic number of $\Gamma_{\mathbf{t}}(G)$ can be upper bounded by those of its components.

Proposition 5.

$\chi(\Gamma_{\mathbf{t}}(G)) \le \prod_{i \in [n]} \prod_{k \in [t_i]} \chi(\Gamma_{i,k}(G)).$

Each component confusion graph has the following properties.

Lemma 8.

$\Gamma_{i,k}(G)$ does not have any chordless cycle of length greater than four.

Lemma 9.

The complement of $\Gamma_{i,k}(G)$ does not have any chordless cycle of length greater than four.

The proofs of the lemmas are given in Appendices B and C. By Proposition 1, Lemma 8, and Lemma 9, the following is immediate.

Proposition 6.

$\Gamma_{i,k}(G)$ is perfect.

As the main contribution of this section, we now establish an upper bound on the chromatic number of a confusion graph in terms of its clique number.

Theorem 1.

Given a directed graph $G$, a length-$n$ integer tuple $\mathbf{t}$, and a positive integer, the confusion graph $\Gamma_{\mathbf{t}}(G)$ satisfies

(13)

Proof: Consider

(14)
(15)
(16)

where (14) follows by Proposition 5, (15) follows by Proposition 6, and (16) follows by (12). ∎

V Multiletter Characterizations of the Capacity

Consider an index coding problem $G$. Using the notion of confusion graph introduced in Section IV, Alon, Hassidim, Lubetzky, Stav, and Weinstein [32] showed that

$\beta_t(G) = \big\lceil \log \chi(\Gamma_t(G)) \big\rceil.$ (17)

To prove this, consider a coloring of the vertices of the confusion graph with $\chi(\Gamma_t(G))$ colors. This partitions the vertices of $\Gamma_t(G)$ into $\chi(\Gamma_t(G))$ independent sets. By the definition of the confusion graph, no two message tuples in each independent set are confusable and therefore assigning a unique index to each independent set yields a valid index code. The total number of codewords of this index code is $\chi(\Gamma_t(G))$, which requires $\lceil \log \chi(\Gamma_t(G)) \rceil$ bits to be broadcast. Hence, $\beta_t(G) \le \lceil \log \chi(\Gamma_t(G)) \rceil$. Conversely, consider any $(t, r)$ index code that assigns (at most) $2^r$ distinct indices to message tuples. By definition, all the message tuples mapped to an index form an independent set of the confusion graph $\Gamma_t(G)$. Moreover, every message tuple is mapped to some index so that these independent sets partition the vertex set of $\Gamma_t(G)$. Thus, $2^r \ge \chi(\Gamma_t(G))$, or equivalently, $r \ge \lceil \log \chi(\Gamma_t(G)) \rceil$, and hence $\beta_t(G) \ge \lceil \log \chi(\Gamma_t(G)) \rceil$.

Based on (17), Alon, Hassidim, Lubetzky, Stav, and Weinstein [32] established the following upper bound on the broadcast rate

$\beta(G) \le \frac{\big\lceil \log \chi(\Gamma_t(G)) \big\rceil}{t}$ (18)

for every positive integer $t$, and established a multiletter characterization of the broadcast rate as

$\beta(G) = \lim_{t \to \infty} \frac{\log \chi(\Gamma_t(G))}{t}.$ (19)
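As a numerical sanity check of the single-letter bound (18), the following sketch (reusing the confusion_graph helper above; a greedy coloring only upper-bounds the number of colors, but since $\chi_f \le \chi \le$ any valid coloring, the resulting value still upper-bounds $\beta$) estimates the bound for $t = 1$.

```python
import math
import networkx as nx

# Reusing confusion_graph and A from the earlier sketch. Greedy coloring
# uses at least chi(Gamma_t) colors, so log2(#colors)/t is a valid upper
# bound on beta via the (fractional) chromatic-number characterization.
V, E = confusion_graph(A, t=1)
H = nx.Graph(E)
H.add_nodes_from(V)                   # include isolated vertices, if any
num_colors = max(nx.greedy_color(H).values()) + 1
print(math.log2(num_colors))          # an upper bound on beta(G)
```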

In our earlier work [47], this characterization was strengthened using the fractional chromatic number as

$\beta(G) = \lim_{t \to \infty} \frac{\log \chi_f(\Gamma_t(G))}{t}.$ (20)

We now further strengthen this result and characterize the broadcast rate in terms of the clique number of the confusion graph.

Theorem 2.

For any side information graph $G$,

$\beta(G) = \lim_{t \to \infty} \frac{\log \omega(\Gamma_t(G))}{t}.$ (21)

Proof: By setting $t_1 = \cdots = t_n = t$ in Theorem 1 and recalling Lemma 2, we have

(22)

Hence,

(23)

which, combined with (19), completes the proof. ∎

Note that since $\omega(\Gamma) \le \chi_f(\Gamma) \le \chi(\Gamma)$ for any graph $\Gamma$, Equation (20) can be derived as a corollary of Theorem 2. Combining (7), Lemma 6, and Lemma 7, we have for any positive integer $t$

$\log \omega(\Gamma_t(G)) \le \Theta\big(\overline{\Gamma_t(G)}\big) \le \log \vartheta\big(\overline{\Gamma_t(G)}\big) \le \log \chi_f(\Gamma_t(G)).$ (24)

Thus, we can characterize the broadcast rate in terms of the Shannon capacity and the Lovász theta function of the complement of the confusion graph.

Corollary 1.

$\beta(G) = \lim_{t \to \infty} \frac{1}{t}\, \Theta\big(\overline{\Gamma_t(G)}\big) = \lim_{t \to \infty} \frac{1}{t} \log \vartheta\big(\overline{\Gamma_t(G)}\big).$

In summary, the broadcast rate can be characterized as the first order in the exponent of six well-known graph-theoretic quantities associated with $\Gamma_t(G)$ and its complement, namely, $\chi(\Gamma_t(G))$, $\chi_f(\Gamma_t(G))$, $\omega(\Gamma_t(G))$, $\alpha\big(\overline{\Gamma_t(G)}\big)$, $\Theta\big(\overline{\Gamma_t(G)}\big)$, and $\vartheta\big(\overline{\Gamma_t(G)}\big)$.

In the following, we present nonasymptotic upper bounds on the broadcast rate in terms of the Shannon capacity and the Lovász theta function that hold for every positive integer $t$ and, due to (24), are tighter than the upper bound in (18).

Theorem 3.

For any side information graph $G$ and any positive integer $t$,

$\beta(G) \le \frac{1}{t}\, \Theta\big(\overline{\Gamma_t(G)}\big).$ (25)

Proof: Consider

$\omega(\Gamma_{st}(G)) = \alpha\big(\overline{\Gamma_{st}(G)}\big) \le \alpha\big(\overline{\Gamma_t(G)^{\vee s}}\big) = \alpha\big(\overline{\Gamma_t(G)}^{\boxtimes s}\big),$ (26)

where the inequality holds since the set of edges of $\Gamma_t(G)^{\vee s}$ contains the set of edges of $\Gamma_{st}(G)$, and the last equality follows by Lemma 5. Now for any $t$,

$\beta(G) = \lim_{t' \to \infty} \frac{\log \omega(\Gamma_{t'}(G))}{t'}$ (27)
$= \lim_{s \to \infty} \frac{\log \omega(\Gamma_{st}(G))}{st}$ (28)
$\le \lim_{s \to \infty} \frac{\log \alpha\big(\overline{\Gamma_t(G)}^{\boxtimes s}\big)}{st}$ (29)
$= \frac{1}{t}\, \Theta\big(\overline{\Gamma_t(G)}\big),$ (30)

where (27) follows by Theorem 2, (28) holds since the limit of a subsequence is equal to the limit of the sequence, (29) follows by (26), and (30) follows by the definition of the Shannon capacity in (6). ∎

Corollary 2.

For any side information graph $G$ and any positive integer $t$,

$\beta(G) \le \frac{1}{t} \log \vartheta\big(\overline{\Gamma_t(G)}\big).$ (31)
Remark 7.

Unlike the upper bounds in (18) and (25) in terms of the chromatic number and the Shannon capacity, the upper bound in (31) can be computed in polynomial time in the number of vertices of the confusion graph (see [42]).

Remark 8.

Equation (20) can be generalized to characterize the capacity region of the index coding problem $G$ as the closure of all rate tuples $(R_1, \ldots, R_n)$ such that

$R_i \le \frac{t_i}{\log \chi_f(\Gamma_{\mathbf{t}}(G))}, \quad i \in [n],$ (32)

for some length-$n$ integer tuple $\mathbf{t}$ [47]. By a sandwich argument similar to (22), the capacity region can be also characterized in terms of $\omega(\Gamma_{\mathbf{t}}(G))$ asymptotically as $t_1, \ldots, t_n \to \infty$.

Remark 9.

Similar to the index coding problem, the optimal rate region of the locally recoverable distributed storage problem with recovery graph $G$ [11, 12] is characterized as the closure of all rate tuples $(R_1, \ldots, R_n)$ such that

(33)

for some $\mathbf{t}$ [11, 13]. Based on the vertex transitivity of $\Gamma_{\mathbf{t}}(G)$, which, inter alia, implies that $\chi_f(\Gamma_{\mathbf{t}}(G)) = |V(\Gamma_{\mathbf{t}}(G))| / \alpha(\Gamma_{\mathbf{t}}(G))$, the relationship between the index coding capacity region in (32) and the distributed storage optimal rate region in (33) can be made precise. See [13] for the details.

VI Lexicographic Product of Side Information Graphs

We first establish an upper bound on the broadcast rate of the index coding problem whose side information graph is a general lexicographic product (recall the definition in Section II).

Theorem 4.

Let $G$ be a directed graph with $n$ vertices and let $G_1, \ldots, G_n$ be directed graphs with $n_1, \ldots, n_n$ vertices, respectively. Then

(34)

The proof of the theorem is given in Appendix D.

Remark 10.

For the special case in which $G$ has two vertices, the upper bound in Theorem 4 is tight [46, 34, 47]. In particular, if $G$ has either no edges or one edge (see Fig. 4(a) and 4(b)), then $\beta(G \circ (G_1, G_2)) = \beta(G_1) + \beta(G_2)$, and if $G$ is a complete graph on two vertices (see Fig. 4(c)), then $\beta(G \circ (G_1, G_2)) = \max\{\beta(G_1), \beta(G_2)\}$.

Fig. 4: Graph examples with (a) no interaction, (b) one-way interaction, and (c) complete interaction among its two parts (a double arrow indicates that there is a bidirectional edge between every vertex on the left and every vertex on the right).

The following states another special case for which the bound in Theorem 4 is tight.

Theorem 5.

For any two directed graphs $G_1$ and $G_2$,

$\beta(G_1 \circ G_2) = \beta(G_1)\, \beta(G_2).$

In words, the broadcast rate is multiplicative under the lexicographic product of index coding side information graphs. Achievability was shown by Blasiak, Kleinberg, and Lubetzky [33]. It also follows from Theorem 4 by setting $G_i = G_2$ for every vertex $i$ of $G_1$. The proof of the converse is based on the clique-number characterization of the broadcast rate in Theorem 2 and the following.

Lemma 10.

For any index code for the problem $G_1 \circ G_2$,

The proof of the lemma is relegated to Appendix E.

Proof of the converse for Theorem 5: Consider

(35)
(36)

where (35) follows by Lemma 10, and (36) follows by Theorem 2. ∎

Example 1.

The graph $G$ shown in Fig. 5(a) can be considered as the lexicographic product of the two smaller graphs $G_1$ and $G_2$ shown in Fig. 5(b) and 5(c), respectively, with $\beta(G_2) = 2$. By Theorem 5, instead of directly computing the broadcast rate for this six-message problem, we can use the known broadcast rates of the smaller problems and get $\beta(G) = \beta(G_1)\, \beta(G_2) = 2\, \beta(G_1)$. Note that although this six-message problem has a certain symmetric structure, it does not fall into the class of cyclically symmetric index coding problems studied by Maleki, Cadambe, and Jafar [29].

Fig. 5: (a) A 6-node graph $G$ that is the lexicographic product of two smaller graphs $G_1$ and $G_2$. (b) The 3-node graph $G_1$. (c) The 2-node graph $G_2$.

The bound in Theorem 4 is not tight in general, as illustrated by the following.

Example 2.

Consider the following 7-message index coding problem

for which the broadcast rate is known [17]. Let $G_1, \ldots, G_5$ be 1-message problems and $G_6$ be the 2-message problem

Then Theorem 4 yields

This bound is not tight since the composite coding scheme [25, 26] achieves the tighter upper bound of 10/3 on the broadcast rate.

Remark 11.

The upper bound on the broadcast rate in Theorem 4 can be generalized to an inner bound on the capacity region as follows. Denoting the capacity regions of the index coding problems $G \circ (G_1, \ldots, G_n)$, $G$, and $G_i$ by $\mathscr{C}$, $\mathscr{C}_0$, and $\mathscr{C}_i$, respectively, we have

(37)

For the special case in which $G$ has two vertices, the inner bound in (37) is tight [34], [47], generalizing the results in Remark 10. If $G$ has either no edge or only one edge, then

In other words, in this case, the capacity region of $G \circ (G_1, G_2)$ is achieved by time division between the optimal coding schemes for the two subproblems $G_1$ and $G_2$. If $G$ is a complete graph on two vertices, then

In other words, the capacity region of $G \circ (G_1, G_2)$ is achieved by simultaneously using the optimal coding schemes for $G_1$ and $G_2$.

VII Critical Index Coding Instances

As Remark 11 suggests, if an edge of the side information graph belongs to a directed cut, removing it does not reduce the capacity region. The Farkas lemma [48, Th. 2.2] states that each edge in a directed graph either lies on a directed cycle or belongs to a directed cut, but not both. Hence, if an edge does not lie on any directed cycle, it can be removed from the graph without affecting the capacity region. This was first observed by Tahmasbi, Shahrasbi, and Gohari [34], who then asked for general conditions under which an edge of the side information graph can be removed without reducing the capacity region.

Let $e = (i, j)$ be an edge of the side information graph $G$. We denote the graph resulting from removing $e$ from $G$ by $G \setminus e$, i.e.,

$V(G \setminus e) = V(G)$ and $E(G \setminus e) = E(G) \setminus \{e\}.$

Given the index coding problem $G$, the edge $e$ is said to be critical if $\mathscr{C}(G \setminus e) \ne \mathscr{C}(G)$, or in other words, if the removal of $e$ from $G$ strictly reduces the capacity region. The index coding problem $G$ itself is said to be critical if every edge $e \in E$ is critical. Thus, a critical graph (= index coding problem) cannot be made “simpler” into another one with the same capacity region.

Remark 11 can be paraphrased into the following necessary condition for criticality.

Proposition 7 (Union-of-cycles condition [34]).

If $G$ is critical, then every edge of $G$ belongs to a directed cycle.
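This necessary condition is easy to test computationally: an edge lies on a directed cycle iff its two endpoints belong to the same strongly connected component. A minimal sketch using networkx (our own helper, of the kind usable for screening instances as in Section VIII):

```python
import networkx as nx

def satisfies_union_of_cycles(edges):
    """Check the condition of Proposition 7: every edge must lie on a
    directed cycle, i.e., both endpoints must be in the same strongly
    connected component."""
    G = nx.DiGraph(edges)
    comp = {v: c for c, scc in enumerate(nx.strongly_connected_components(G))
            for v in scc}
    return all(comp[u] == comp[v] for u, v in G.edges)

print(satisfies_union_of_cycles([(1, 2), (2, 1), (2, 3)]))   # False: (2, 3)
```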

This simple condition, however, is not sufficient. For the index coding problem shown in Fig. 1, although the edge lies on a directed cycle, it can be shown that the capacity region is characterized by

with or without this edge.

To observe another simple necessary condition for criticality, consider an index coding problem with side information sets $A_1, \ldots, A_n$. These sets are said to be degraded if there exist $i \ne j$ such that $j \in A_i$ and $A_j \subseteq A_i$. In this case, the edge $(i, j)$ can be removed, since $x_j$ can be recovered at node $i$ by running the decoder of node $j$, whose side information node $i$ already has. This observation leads to the following necessary condition.

Proposition 8 (Nondegradedness condition).

If $G$ is critical, then the side information sets $A_1, \ldots, A_n$ must be nondegraded.

Satisfying the above two necessary conditions at the same time is still not sufficient for criticality. As an example, it can be checked that the side information graph shown in Fig. 6 satisfies both the union-of-cycles and the nondegradedness conditions. However, it is not a critical graph, as the capacity region is characterized by

with or without the edge.

Fig. 6: A 5-message index coding problem. The edge lies on a directed cycle and the side information sets are nondegraded. However, removing this edge does not affect the capacity region, which is achieved by the composite coding scheme [25] with or without this edge.

In order to find a tighter necessary condition, we now focus on a sufficient condition. Given a graph $G$, the vertex-induced subgraph $G|_S$ is referred to as a unicycle if its set of edges is a (chordless) Hamiltonian cycle over $S$. Note that if the subgraph $G|_S$ is a unicycle, then $G|_{S'}$ cannot be a unicycle for any $S'$ that is a proper subset or superset of $S$. As an example, in Fig. 7(a), a proper induced subgraph is a unicycle, but the graph itself is not a unicycle. As another example, the graph in Fig. 7(b) contains two induced subgraphs that are both unicycles.

Fig. 7: (a) A graph in which an induced subgraph is a unicycle but the graph itself is not. (b) A graph in which two induced subgraphs are both unicycles.
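Unicycles are straightforward to detect by brute force on the small instances considered in this paper. The following sketch (our own helper; exponential in the subgraph size) tests whether an induced subgraph is a unicycle, i.e., whether its edge set is exactly one Hamiltonian cycle:

```python
from itertools import permutations

def is_unicycle(edges, S):
    """True iff the subgraph induced by S has exactly |S| edges that form
    a single (hence chordless) Hamiltonian cycle over S."""
    S = list(S)
    E = {(u, v) for (u, v) in edges if u in S and v in S}
    if len(S) < 2 or len(E) != len(S):
        return False
    first, rest = S[0], S[1:]
    return any(all((c[k], c[(k + 1) % len(S)]) in E for k in range(len(S)))
               for c in ([first, *p] for p in permutations(rest)))

edges = [(1, 2), (2, 3), (3, 1), (1, 3)]
print(is_unicycle(edges, {1, 2, 3}))   # False: the chord (1, 3) is extra
print(is_unicycle(edges, {1, 3}))      # True: (1, 3), (3, 1) form a 2-cycle
```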

The following states a sufficient condition for the criticality of a problem.

Theorem 6 (Union-of-unicycles condition).

If every edge of $G$ belongs to a vertex-induced subgraph that is a unicycle, then $G$ is critical.

Proof: It suffices to show that removing each edge of $G$ that belongs to a unicycle strictly reduces the capacity region. Let $e = (i, j)$ be an edge of $G$, where $i, j \in S$ and $G|_S$ is a unicycle. The rate tuple $(R_1, \ldots, R_n)$ such that

$R_k = \frac{1}{|S| - 1}$ for $k \in S$ and $R_k = 0$ otherwise (38)

is achievable for the index coding problem $G$ by partial clique covering (see Proposition 3 and Remark 1). The vertex-induced subgraph