Storage Capacity of Repairable Networks

Arya Mazumdar  The author is with the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455. Part of this work was presented at the IEEE International Symposium on Information Theory, 2014 [31] and at the Allerton Conference, 2014 [30]. This work was supported in part by the National Science Foundation CAREER award under grant no. CCF 1453121.

In this paper, we introduce a model of a distributed storage system that is locally recoverable from any single server failure. Unlike the usual local recovery model of codes for distributed storage, this model accounts for the fact that each server or storage node in a network is connectable to only some, and not all, other nodes. This may happen for reasons such as physical separation, inhomogeneity in storage platforms, etc. We estimate the storage capacity of both undirected and directed networks under this model and propose some constructive schemes. From a coding theory point of view, we show that this model is approximately dual to the well-studied index coding problem.

Further in this paper, we extend the above model to handle multiple server failures. Among other results, we provide an upper bound on the minimum pairwise distance of a set of words that can be stored in a graph with the local repair guarantee. The well-known impossibility bounds on the distance of locally recoverable codes follow from our result.

I Introduction

Recently, the local repair property of error-correcting codes has been at the center of a lot of research activity. In a distributed storage system, a single server failure is the most common error event, and in the case of a failure the aim is to reconstruct the content of the failed server from as few other servers as possible (or by downloading a minimal amount of data from other servers). The study of such regenerative storage systems was initiated in [16] and then followed up in several recent works. In [22], a particularly neat characterization of the local repair property is provided. It is assumed that each symbol of an encoded message is stored at a different node in the storage network (since the symbol alphabet is unconstrained, a symbol could represent a packet or block of bits of arbitrary size). Accordingly, [22] investigates code families that allow any single coordinate of a codeword to be recovered from at most a constant number of other coordinates of the codeword, i.e., from a number of coordinates that does not grow with the length of the code.

The work of [22] has since been generalized in several directions, and a number of impossibility results and constructions of locally repairable codes were presented in [34, 44, 10, 42, 24, 43], among others. The central result of this body of work is that for any code of length $n$, dimension $k$ and minimum distance $d$,

$$d \le n - k - \Big\lceil \frac{k}{r} \Big\rceil + 2, \qquad (1)$$

where $r$ is such that any single coordinate can be recovered from at most $r$ other coordinates [22].

However, the topology of the network of the distributed storage system is missing from the above definition of local repairability. Namely, all servers are treated equally irrespective of their physical positions, proximities, and connections. Here, in this paper, we take a step toward including the network topology into consideration. We study the case when the architecture of the storage system is fixed and the storage network is given by a graph. In our model, the servers are represented by the vertices of a graph, and two servers are connected by an edge if it is easy to establish an up- or down-link between them, for reasons such as the physical locations of the servers, the architecture of the distributed system, or the homogeneity of software. It is reasonable to assume that the storage graph is directed, because there may be varying difficulties in establishing an uplink or a downlink between two servers. Under this model, we impose the local recovery or repair condition in the following way: the content of any failed server must be reconstructible from the neighboring servers on the storage graph.

Assuming the above model, the main quantity of interest is the amount of information that can be stored in the graph. We call this quantity the storage capacity of the graph. Finding this capacity exactly, as well as constructing explicit schemes that achieve it, are both computationally hard problems for an arbitrary graph. However, we show that good approximation schemes are possible, and for some special classes of graphs we can even compute this capacity exactly with constructive schemes. In particular, for any undirected graph, the storage capacity is sandwiched between the maximum matching and the minimum vertex cover, two quantities within a factor of two of each other. A similar statement, albeit concerning different graph properties, is possible for directed graphs.

It turns out that our model is closely related to the popular index coding problem on a side information graph. In the index coding problem, a set of users is assigned bijectively to a set of variables that they want to know. However, instead of knowing its assigned variable, each user knows a subset of the other variables. This scenario can be depicted by a so-called side information graph, where each vertex represents a user and there is an edge between users A and B if A knows the variable assigned to B. Given this graph, how much information does a broadcaster have to transmit so that each vertex can deduce its assigned variable?

The above problem of index coding was introduced in [5] (it has a predecessor in [6]), and has since been a subject of extensive research. It was shown in [19] that any network coding problem can be reduced to an index coding problem, and the index coding capacity is among the computationally hardest problems of all of network coding [27, 7]. A prominent work in the index coding literature is [1], which studies the broadcast rate for index coding. It turns out that an auxiliary quantity used in [1] is exactly the storage capacity that we introduce and study in this paper (no attachment of this quantity to any practical use was made in [1]). Recently, K. Shanmugam [39] pointed out to the author that this quantity has also been studied as graph entropy by Riis in [37] (in the literature, the term "graph entropy" usually refers to a different quantity [26]).

Using the results of [1] it is possible to show a connection between the broadcast rate of index coding and the storage-capacity when the side-information graph and the storage graph are the same. Indeed, we show that there exists a duality between a storage code and an index code on the same graph. This observation, which also connects the complementary index coding rate of [11] with the storage capacity, is further explored in this paper.

The local repairability property on a graphical model of storage can be extended in several directions. One may ask for protection against catastrophic failures, and therefore also impose a minimum distance condition on codes, which is a common fixture of the local recovery literature. In this scenario, we obtain a general bound that includes previous results, such as eq. (1), as special cases. Moreover, such bounds can also be made dependent on the size of the alphabet (the size of a storage node).

Furthermore, instead of single-node local repairability, multiple failures can also be considered. Such multiple failures and the corresponding cooperative local recovery model in distributed storage were recently introduced in [35]. In this paper we generalize this model to graphs.

The storage coding problem of our model is a very fundamental network coding problem, and one of our main observations is that reasonable approximation schemes are possible for storage coding. While the index coding rate is very hard to approximate (see [27]), it is possible to have good constructive approximations for storage capacity with linear codes (explained in Section II). This should be put in contrast with results such as [7, Thm. 1.2], which show that a rather large gap must exist between vector linear and nonlinear index coding (or general network coding) rates.

Apart from the approximation guarantee, there is other evidence that index coding and our storage coding are two very different problems by nature. For example, for two disconnected graphs, the total storage capacity is the sum of the capacities of the individual graphs. But the index coding length for the union of two disconnected graphs may be smaller than the sum of the individual code lengths of the graphs (see Thm. 1.1-1.4 and the accompanying discussions in [1]).

In a parallel independent work [40], one of our initial results, namely, the duality between storage and index codes (see Prop. 1) is proved for vector linear codes. The authors of [40] further use that observation to give an upper bound on the optimal linear sum rate of the multiple unicast network coding problem. In this paper we have a completely different focus.

I-A Results and organization

The paper is organized in the following way.

  • Model of a repairable distributed storage: In Section II, we formally introduce the model of a recoverable distributed storage system and the notion of an optimal storage code given a graph. This section also introduces the quantity of our interest, namely the storage capacity.

  • Relation to index coding: In Section III, we explore the relation of an optimal storage code to an optimal index code. We provide an algorithmic proof of a duality relation between index codes and distributed storage codes. Our proof is based on a covering argument in the Hamming space, and relies on the fact that for any given subset of the Hamming space there exist several translations of the set that have very small overlap with the original subset.

  • Bounds and algorithms: In Section IV-A, we provide constructive schemes that achieve a storage rate within half of the maximum possible for any undirected graph (the scheme is optimal for bipartite graphs). Some other existential results are also proved in this section. Next, we extend the approximation schemes to directed graphs in Section IV-B. This turns out to be a harder problem for directed graphs, and we provide a scheme with an approximation factor growing logarithmically with the graph size.

  • Bounds on minimum distance and other multiple failure models: In the last section, Section V, we generalize the notions of local recovery on graphs to include the minimum distance criterion and cooperative local recovery. In both of these cases, we provide fundamental converse bounds and outline some constructive schemes. In particular, the well-known impossibility results on the minimum distance of locally repairable codes, such as eq. (1) or the ones presented in [10], follow simply from Thm. 14 and Prop. 15.

II Recoverable distributed storage systems

In this section, we introduce the basic notion of a single-failure recoverable storage system. Consider a network of distributed storage, for example the one of Fig. 1, where several servers (vertices) are connected via network links (edges). As mentioned in the introduction, two servers are connected by an edge based on the ease of establishing a link between them (one might consider a nonnegative weight on each edge, which would be a natural generalization). If the data of any one server is lost, we want to recover it from the nearby servers, i.e., the ones with which it is easy to establish links. This notion is formalized below. It is also possible (and perhaps sensible) to model this as a directed graph, especially when uplink and downlink constructions have varying difficulties. In the rest of the paper, the definitions, claims and arguments hold for both directed and undirected graphs, unless otherwise specified.

Suppose the graph $G = (V, E)$ represents the network of storage. For any $v \in V$, define $N(v)$ to be the neighborhood of $v$. Each element of $V$ represents a server, and in the case of a server failure (say, $v$ is the failed server) one must be able to reconstruct its content from its neighborhood $N(v)$.

Given this constraint, what is the maximum amount of information one can store in the system? Without loss of generality, assume $V = \{1, \dots, n\}$, and let the variables $x_1, \dots, x_n$ respectively denote the contents of the vertices, where each $x_i$ takes values in a $q$-ary alphabet $\Sigma$.

Definition 1

A recoverable distributed storage system (RDSS) code $\mathcal{C}$ with storage recovery graph $G$ is a set of vectors in $\Sigma^n$ together with a set of deterministic recovery functions $f_v$, for $v \in V$, such that for any codeword $(x_1, \dots, x_n) \in \mathcal{C}$,

$$x_v = f_v\big(\{x_u : u \in N(v)\}\big).$$

The recovery functions depend on $G$. The log-size of the code, $\log_q |\mathcal{C}|$, is called the dimension of $\mathcal{C}$, or $\dim \mathcal{C}$. Given a graph $G$, the maximum possible dimension of an RDSS code is denoted by $k^*(G)$.

Note that, in this paper, the dimension is expressed in $q$-ary units. To convert it to bits we need to multiply by $\log_2 q$.

As an example, if $G$ is the complete graph on $n$ vertices then the maximum dimension of an RDSS code is $n - 1$. This is possible because in $n - 1$ vertices we can store arbitrary values, and in the last vertex we can store the sum (modulo $q$) of the stored values.
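As a quick sanity check, the complete-graph scheme above can be sketched in a few lines of Python. This is an illustrative sketch, not from the paper; we store the negated sum so that all $n$ symbols add to zero mod $q$, which is equivalent for repair purposes and lets every node (data or parity) be recovered by the same rule.

```python
# n - 1 nodes hold arbitrary q-ary values; the last node holds the negated
# sum mod q, so that the symbols of all n nodes sum to 0 modulo q.
# Any single failed node can then be recovered from all the others.

q = 7                         # alphabet size (any modulus works)
data = [3, 5, 1, 6]           # free symbols on the first n - 1 nodes
nodes = data + [(-sum(data)) % q]   # last node: negated parity

def recover(nodes, failed):
    """Recover the content of `failed` from the remaining n - 1 nodes."""
    rest = [x for i, x in enumerate(nodes) if i != failed]
    # since all symbols sum to 0 mod q, each equals minus the sum of the rest
    return (-sum(rest)) % q

assert all(recover(nodes, i) == nodes[i] for i in range(len(nodes)))
```

Here the complete graph makes every other node a neighbor of the failed one, which is exactly why a single parity suffices.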

Fig. 1: Example of a distributed storage graph

As another example, consider the graph of Fig. 1 again. The recovery sets of each vertex (or storage node) are given by its neighborhood:

Suppose, the contents of the nodes are respectively, where, Moreover,

Assume the recovery functions in this example are linear; that is, each stored symbol is a linear combination (over $\mathbb{F}_q$) of the symbols of its neighbors.

This implies that every codeword must belong to the null-space (over $\mathbb{F}_q$) of a matrix whose rows encode the linear recovery constraints.

The dimension of the null-space of an $n \times n$ matrix is $n$ minus the rank of the matrix. At this point the following definition is useful.

Suppose $M$ is an $n \times n$ matrix over $\mathbb{F}_q$. It is said that $M$ fits $G$ over $\mathbb{F}_q$ if $M_{ii} \ne 0$ for all $i$, and $M_{ij} = 0$ whenever $i \ne j$ and $(i, j) \notin E$.

Definition 2

The minrank [23] of a graph $G$ over $\mathbb{F}_q$ is defined to be

$$\mathrm{minrk}_q(G) = \min\{\mathrm{rank}(M) : M \text{ fits } G \text{ over } \mathbb{F}_q\}.$$
Notice that in the example above, the constraint matrix fits the graph $G$ (see Defn. 2), and the dimension of the RDSS code is $n$ minus the rank of that matrix. From the above discussion, we have

$$\max_{\text{linear } \mathcal{C}} \dim \mathcal{C} = n - \mathrm{minrk}_q(G), \qquad (4)$$

i.e., $n - \mathrm{minrk}_q(G)$ is the maximum possible dimension of an RDSS code when the recovery functions are all linear.

Linear RDSS codes are not always optimal. This is shown in the following example.

Example 1

This example appears in [1], and the distributed storage graph, a pentagon, is shown in Fig. 2. For this graph, a maximum-sized binary RDSS code consists of the codewords. The recovery functions are given by,

Here, the dimension exceeds 2 bits. If all the recovery functions were linear, we could not have an RDSS code with so many codewords. Indeed, since the minrank of this graph over $\mathbb{F}_2$ is 3, we could have had only $2^{5-3} = 4$ codewords with linear recovery functions.
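The minrank claim for the pentagon can be checked exhaustively. The following illustrative script (our own, not from the paper) minimizes the rank over all $2^{10}$ binary matrices that fit the pentagon: the diagonal is fixed to 1, the ten entries on edges are free, and all other off-diagonal entries are 0.

```python
from itertools import product

# Brute-force verification that the minrank of the pentagon over F_2 is 3.

edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)}
free = [(i, j) for i in range(5) for j in range(5)
        if i != j and ((i, j) in edges or (j, i) in edges)]  # 10 free entries

def rank_gf2(rows):
    """Rank over F_2 of a list of 5-bit row bitmasks (Gaussian elimination)."""
    rows, r = rows[:], 0
    for bit in range(5):
        pivot = next((i for i in range(r, len(rows)) if rows[i] >> bit & 1), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> bit & 1:
                rows[i] ^= rows[r]
        r += 1
    return r

best = 5
for bits in product([0, 1], repeat=len(free)):
    M = [1 << i for i in range(5)]          # diagonal entries fixed to 1
    for (i, j), b in zip(free, bits):
        M[i] |= b << j                      # row i, column j
    best = min(best, rank_gf2(M))
print(best)  # minrank of the pentagon over F_2: 3
```

By Eq. (4), this confirms that linear RDSS codes on the pentagon have dimension at most $5 - 3 = 2$ bits.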

The distributed storage literature often considers vector codes and vector linear codes. In our case, in a vector code, a vector rather than a single symbol is stored in each of the vertices. In the context of general nonlinear codes, vector codes do not bring any further technical novelty and can simply be thought of as codes over a larger alphabet. The storage capacity can only increase when we consider codes over a larger alphabet.

Definition 3

Define the vector capacity of a graph $G$ to be

$$\mathsf{cap}(G) = \lim_{t \to \infty} \frac{k^*_t(G)}{t}, \qquad (5)$$

where $k^*_t(G)$ is the maximum possible dimension of a vector RDSS code storing $t$ symbols per vertex. Recall that the dimension is measured in $q$-ary units above. The limit in (5) exists and is equal to $\sup_t k^*_t(G)/t$. This follows from the superadditivity

$$k^*_{t_1 + t_2}(G) \ge k^*_{t_1}(G) + k^*_{t_2}(G)$$

and Fekete's lemma.

A vector linear RDSS code, on the other hand, is quite different from a simple linear code. Each server stores a vector of, say, $t$ symbols. Now, in the event of a node failure, each of the lost symbols must be recoverable by a linear function of all the symbols stored in the neighboring vertices. In other words, if $q$-ary symbols are stored in the vertices, then the recovery functions are linear over $\mathbb{F}_q$, and not over $\mathbb{F}_{q^t}$ (for nonlinear recovery, this does not make any difference).

The Shannon capacity [41] of a graph is a well-known quantity, and it is known to be upper bounded by the minrank [23]. Any concrete reasoning relating the Shannon capacity to the storage capacity is of interest, but has not been pursued in this paper. It is to be noted that for the pentagon of Fig. 2, the Shannon capacity is $\sqrt{5}$.

III Relation with Index Coding

We start this section with the definition of an index coding problem. The main objective of this section is to establish and explore the relation between the optimal index code length and the optimal RDSS code dimension, given a graph $G$.

In the index coding problem, a possibly directed side information graph $G$ is given. Each vertex $v$ represents a receiver that is interested in knowing a uniform random variable $x_v$. The receiver at $v$ knows the values of the variables $\{x_u : u \in N(v)\}$. How much information should a broadcaster transmit so that every receiver knows the value of its desired random variable? Let us give the formal definition from [5], adapted for the $q$-ary alphabet here.

Definition 4

An index code for $\Sigma^n$ with side information graph $G$ is a set of codewords in $\Sigma^\ell$ together with:

  1. An encoding function $E$ mapping inputs in $\Sigma^n$ to codewords, and

  2. A set of deterministic decoding functions $D_1, \dots, D_n$ such that $D_v\big(E(x_1, \dots, x_n), \{x_u : u \in N(v)\}\big) = x_v$ for every $v \in V$.

The encoding and decoding functions depend on $G$. The integer $\ell$ is called the length of the index code. Given a graph $G$, the minimum possible length of an index code is denoted by $\ell^*(G)$.

It is not very difficult to deduce the connection between the length of an index code and the minrank of the graph; indeed, it was shown in [5] that

$$\ell^*(G) \le \mathrm{minrk}_q(G). \qquad (6)$$

The above inequality can be strict in many cases [1, 29]. However, $\mathrm{minrk}_q(G)$ is the minimum length of an index code on $G$ when the encoding function and the decoding functions are all linear. The following proposition is also immediate.

Proposition 1

The null-space of a linear index code for is a linear RDSS code for the same graph .


All the vectors that are mapped to zero by the encoding function of an index code form an RDSS code, as any symbol stored at a vertex can be recovered by the corresponding index coding decoding function for that vertex. On the other hand the cosets of an RDSS code partition the space and the set of cosets is isomorphic to the null-space of the RDSS code. Hence an index code can be formed that encodes a vector to the coset it belongs to. \qed

Note that it is not true that the minimum index code length and the maximum RDSS code dimension always sum to exactly $n$, although Eq. (6) and Eq. (4) suggest such a relation. This is shown by the graph of Example 1. There, the minimum length of an index code for this graph is 3, and this is achieved by the following linear mappings. The broadcaster transmits three parity bits, and the decoding functions are:

Although equality does not hold in general, these two quantities are not too far from each other. In particular, for a large enough alphabet, the left and right hand sides can be arbitrarily close. This is reflected in Thm. 2 below.

III-A Implication of the results of [1]

At this point we cast a result of [1] in our context. In [1], the problem of index coding was considered, and to characterize the optimal size of an index code, the notion of a confusion graph was introduced. Two input strings $x, y \in \Sigma^n$ are called confusable if there exists some $i \in V$ such that $x_i \ne y_i$, but $x_j = y_j$ for all $j \in N(i)$. In the confusion graph of $G$, the total number of vertices is $q^n$, and each vertex represents a different $q$-ary string of length $n$. There exists an edge between two vertices if and only if the corresponding two strings are confusable with respect to the graph $G$. We will be interested in the maximum size of an independent set of the confusion graph.

The confusion graph was used in [1] as an auxiliary object to characterize the rate of index coding; it was not used to model any practical problem. From our definition of RDSS codes (see Def. 1), it is evident that this notion of confusable strings fits perfectly the situation of local recovery in a distributed storage system. Namely, the maximum size of an independent set of the confusion graph becomes, in our problem, the largest possible size of an RDSS code for a system with storage graph $G$.
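As a tiny illustration of the confusion-graph viewpoint (our own sketch, not from [1]), consider the triangle $K_3$ over the binary alphabet: two strings are confusable exactly when they differ in a single coordinate, so the confusion graph is the 3-dimensional hypercube, whose maximum independent set has size 4, matching the $n - 1 = 2$ bits achievable on a complete graph.

```python
from itertools import combinations, product

# Confusion graph of the triangle K3 over the binary alphabet. Strings x, y
# are confusable if some coordinate i has x_i != y_i while x and y agree on
# the neighborhood N(i) (here: the other two vertices).

n = 3
nbrs = {i: [j for j in range(n) if j != i] for i in range(n)}  # K3 neighborhoods
strings = list(product([0, 1], repeat=n))

def confusable(x, y):
    return any(x[i] != y[i] and all(x[j] == y[j] for j in nbrs[i])
               for i in range(n))

# Maximum independent set of the confusion graph = largest RDSS code.
best = 0
for size in range(len(strings), 0, -1):
    for cand in combinations(strings, size):
        if not any(confusable(x, y) for x, y in combinations(cand, 2)):
            best = size
            break
    if best:
        break
print(best)  # 4 codewords, i.e. dimension log2(4) = 2 bits = n - 1
```

The independent set found here is (up to relabeling) the even-weight parity code, the same construction as in the complete-graph example of Section II.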

We restate one of the main theorems of [1] using the terminology we have introduced so far.

Theorem 2

Given a graph $G$ on $n$ vertices, we must have

$$n - k^*(G) \;\le\; \ell^*(G) \;\le\; n - k^*(G) + O\big(\log_q (n \ln q)\big),$$

where $k^*(G)$ is the maximum possible dimension of an RDSS code for $G$ and $\ell^*(G)$ is the minimum possible length of an index code for $G$.
This result is purely graph-theoretic, the way it was presented in [1]. In particular, the maximum size of an independent set of the confusion graph can be identified as the size of the RDSS code, and its relation to the chromatic number of the confusion graph, which represents the length of the index code, was found. Namely, the proof depended on the following two crucial steps.

  1. The chromatic number of a graph can be away from its fractional chromatic number by at most a logarithmic factor (see [1] for the detailed definition).

  2. The confusion graph is vertex transitive. This implies that the maximum size of an independent set is equal to the number of vertices divided by the fractional chromatic number.

A proof of the first fact above can be found in [28]. In what follows, we give a simple coding-theoretic proof of Thm. 2, where the technique is the same as in [1], but it bypasses the graph-theoretic notation. However, our proof will expose some further nuances in the relation between index coding and RDSS codes (see Sec. III-C and Lemma 7). Because of the derandomization of Lemma 7, we can get rid of a look-up table to decode the index code that is 'dual' to a given RDSS code.

III-B The proof of the duality

We prove Theorem 2 with the help of the following two lemmas. The first of them is immediate and can be proved by a simple averaging argument.

Lemma 3

If there exists an index code of length $\ell$ for a side information graph $G$ on $n$ vertices, then there exists an RDSS code of dimension at least $n - \ell$ for the distributed storage graph $G$.


Suppose the encoding and decoding functions of the index code are $E$ and $D_1, \dots, D_n$. By averaging, there must exist some codeword $c$ such that $|E^{-1}(c)| \ge q^{n-\ell}$. Let $\mathcal{C} = E^{-1}(c)$ be the RDSS code, with recovery functions

$$f_v\big(\{x_u : u \in N(v)\}\big) = D_v\big(c, \{x_u : u \in N(v)\}\big). \qquad \qed$$
The second lemma might be of more interest as it is a bit less obvious.

Lemma 4

If there exists an RDSS code of dimension $k$ for a distributed storage graph $G$ on $n$ vertices, then there exists an index code of length $n - k + O(\log_q(n \ln q))$ for the side information graph $G$.

Combining these two lemmas we get the proof of Theorem 2 immediately.

To prove Lemma 4, we need the help of two other lemmas. First of all, notice that any translation of an RDSS code is again an RDSS code.

Lemma 5

Suppose $\mathcal{C}$ is an RDSS code. Then any known translation of $\mathcal{C}$ is also an RDSS code of the same dimension. That is, for any $z \in \Sigma^n$, the set $\mathcal{C} + z = \{x + z : x \in \mathcal{C}\}$ (addition over $\mathbb{Z}_q^n$, say) is an RDSS code of dimension $\dim \mathcal{C}$.


Let $\mathcal{C}' = \mathcal{C} + z$. Also assume $y \in \mathcal{C}'$, and write $y = x + z$ with $x \in \mathcal{C}$. We know that there exist recovery functions $f_v$ such that $x_v = f_v(\{x_u : u \in N(v)\})$. Now, $y_v = f_v\big(\{y_u - z_u : u \in N(v)\}\big) + z_v$, which is a valid recovery function for $\mathcal{C}'$ since $z$ is known. \qed

The proof of Lemma 4 crucially uses the existence of a covering of the entire space $\Sigma^n$ by translations of an RDSS code. Indeed, we have the following result.

Lemma 6

Suppose $\mathcal{C}$ is an RDSS code for a graph $G$. There exist vectors $z_1, \dots, z_N$, with $N \le \frac{q^n}{|\mathcal{C}|}(n \ln q + 1)$, such that

$$\bigcup_{i=1}^{N} (\mathcal{C} + z_i) = \Sigma^n.$$

Suppose $z_1, \dots, z_N$ are randomly and independently chosen from $\Sigma^n$. The expected number of points in the space not covered by any of the translations is at most $q^n(1 - |\mathcal{C}|/q^n)^N \le q^n e^{-N|\mathcal{C}|/q^n}$, which is less than 1 when we set $N = \frac{q^n}{|\mathcal{C}|}\, n \ln q$ in the above expression (see [2, Prop. 3.12]).

If instead we set $N$ slightly smaller, then the expected number of points that do not belong to any of the translations is some small number $m$. To cover all these remaining points we need at most $m$ other translations (one through each uncovered point). Hence, there must exist a covering such that the claimed number of translations suffices. \qed

Using Lemmas 5 and 6 we now prove Lemma 4.


Lemmas 5 and 6 show that there exist $\mathcal{C}_1, \dots, \mathcal{C}_N$, all of which are RDSS codes of dimension $k$, such that

$$\bigcup_{i=1}^{N} \mathcal{C}_i = \Sigma^n,$$

where $N$ is as in Lemma 6. Indeed, we can set $\mathcal{C}_i$ to be equal to $\mathcal{C} + z_i$, which is an RDSS code by Lemma 5.

Now, any $x \in \Sigma^n$ must belong to at least one of the $\mathcal{C}_i$s. Suppose $x \in \mathcal{C}_i$ (taking the smallest such $i$, say). Then the encoding function of the desired index code is simply given by $E(x) = i$. If the recovery functions of $\mathcal{C}_i$ are $f_v^{(i)}$, then the decoding functions of the index code are given by:

$$D_v\big(i, \{x_u : u \in N(v)\}\big) = f_v^{(i)}\big(\{x_u : u \in N(v)\}\big).$$

Clearly, the length of the index code is $\log_q N$. \qed

The most crucial step in the proof of Thm. 2 is Lemma 6, which shows the existence of a desired set of points in $\Sigma^n$: a covering of the entire space by translations of an RDSS code. Next we show that a stronger statement in lieu of Lemma 6 is possible: the translations themselves can be chosen to form a linear subspace. This leads to a derandomization and eases the decoding of the index code at each of the receivers.

III-C Refinements of Lemma 6 and decoding of index code

In this section, we show that the points whose existence is guaranteed by Lemma 6 can be made to satisfy some extra properties. In particular, a randomly chosen (binary) linear subspace of the appropriate dimension suffices for our purpose with high probability.

Definition 5

Given a set of vectors $z_1, \dots, z_m$ from $\Sigma^n$, define the binary span of the set to be $\big\{\sum_{i \in S} z_i : S \subseteq \{1, \dots, m\}\big\}$, the set of all sums of subsets of these vectors.

Lemma 7

Suppose $\mathcal{C}$ is an RDSS code for a graph $G$. There exists a set of about $\log_2\big(\frac{q^n}{|\mathcal{C}|}\, n \ln q\big)$ vectors whose binary span $B$ is such that

$$\bigcup_{z \in B} (\mathcal{C} + z) = \Sigma^n. \qquad (9)$$
To prove this lemma, we construct a greedy algorithm that chooses the vectors recursively, instead of the random vectors of Lemma 6. The proof is deferred to the appendix. The greedy covering argument that we employ in the proof was used to show the existence of good linear covering codes in [15] (see also [21, 13, 32]). We can use Lemma 7 instead of Lemma 6 to complete the proof of Lemma 4. Lemma 7 gives some algorithmic advantage in decoding an index code, which we explain next.
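The flavor of the greedy covering argument can be illustrated with a toy script (our own sketch on a made-up example, not the actual proof of Lemma 7): repeatedly pick the translate of a small code $\mathcal{C} \subset \mathbb{F}_2^6$ that covers the most still-uncovered points. The standard set-cover guarantee then bounds the number of translates by roughly $(2^n/|\mathcal{C}|)(n \ln 2 + 1)$.

```python
from math import ceil, log

# Toy illustration of a greedy covering of F_2^6 by translates of a small code.

n = 6
# A small linear code in F_2^6 standing in for the RDSS code (it is closed
# under XOR, so its translates are exactly its 16 disjoint cosets).
code = [0b000000, 0b111000, 0b000111, 0b111111]

uncovered = set(range(1 << n))
translates = []
while uncovered:
    # choose the shift z whose translate C + z covers the most uncovered points
    z = max(range(1 << n), key=lambda z: sum((c ^ z) in uncovered for c in code))
    translates.append(z)
    uncovered -= {c ^ z for c in code}

bound = ceil((1 << n) / len(code) * (n * log(2) + 1))
print(len(translates), bound)
assert len(translates) <= bound
```

Because this particular stand-in code is a subgroup, the greedy choice always grabs a fresh coset and finishes in exactly $2^n/|\mathcal{C}| = 16$ steps, well inside the set-cover bound; for a general (nonlinear) code the translates overlap and the logarithmic factor in the bound becomes relevant.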

Fig. 2: A distributed storage graph (the pentagon) on which nonlinear RDSS codes outperform linear ones.

Suppose $\mathcal{C}$ is an RDSS code with known recovery functions. Let $z_1, \dots, z_N$ be the set of vectors promised in Lemma 6, so that the translates $\mathcal{C} + z_i$ cover $\Sigma^n$. Consider the corresponding index code constructed in the proof of Lemma 4. Given any $x$ as input, the encoder of this index code finds an $i$ such that $x \in \mathcal{C} + z_i$. A bijection that maps $i$ to a $q$-ary vector of length $\ell$ completes the encoding (here $\ell$ is the length of the index code; we assume it to be an integer, which is not necessarily the case — the argument remains the same otherwise, except that we have to deal with ceiling and floor functions). In short, the encoding of the index code maps $x$ to $i$ for an $x \in \mathcal{C} + z_i$. Now, for decoding of this index code, one first needs to map any given encoded vector back to $i$, and then use the recovery functions of the RDSS code $\mathcal{C} + z_i$. The recovery functions of $\mathcal{C} + z_i$ are known, as they are known for the RDSS code $\mathcal{C}$ (see Lemma 5).

In the above decoding of the index code, we must maintain a look-up table of size exponential in $\ell$ that stores the bijective map between the encoded vectors and the translations. This map tells us the recovery functions of which RDSS code to use (among all the translations). However, using Lemma 7 this requirement can be removed.

Assume an arbitrary polynomial-time bijective mapping that produces a binary sequence from a $q$-ary sequence. There are many such mappings that can be trivially constructed. By Lemma 7, the set of translations is the binary span of vectors $z_1, \dots, z_m$ such that Eq. (9) holds. Then the decoding of the obtained index code can be performed in two steps. First, the received encoded vector is mapped to a binary string $b = (b_1, \dots, b_m)$. Next, we compute $z = \sum_i b_i z_i$. For the decoding of the index code, we now use the recovery functions of $\mathcal{C} + z$. The map from $b$ to $z$ defines the bijection in this case. Hence, we no longer need to maintain a look-up table, and the required RDSS code that we need for decoding can be found in polynomial time.

Remark 1

Note that a random subset of $\Sigma^n$, generated as the binary span of randomly and uniformly chosen vectors from $\Sigma^n$, satisfies Eq. (9) with high probability. This can be proved along the lines of [8, 14], where it was shown that almost all linear codes are good covering codes.

Given an RDSS code, our derandomization benefits only the decoding of the obtained index code, and not the encoding. But notice also that encoding is performed by the broadcaster in one place, while decoding is performed at every receiver (which is likely to have less computational power than the broadcaster).

IV Algorithmic results and constructions of RDSS codes

In this section we provide some constructions of RDSS codes, both for directed and undirected graphs. First, note that existential results similar to the Gilbert-Varshamov bound for codes can be provided for RDSS codes.

Theorem 8

For the graph $G$, let $\Delta_G$ denote the degree of the (regular) confusion graph of $G$. Then there exists an RDSS code of size at least

$$\frac{q^n}{\Delta_G + 1}.$$

Recall that any RDSS code can be found as an independent set of the confusion graph, and the confusion graph is regular (it is vertex transitive, as noted above). Indeed, if $x_j = y_j$ for all $j \in N(i)$ but $x_i \ne y_i$ for some $i$, then $x$ and $y$ both cannot be part of an RDSS code without violating the repair condition. Now, using Turán's theorem, there must exist an independent set, and hence an RDSS code, of size at least

$$\frac{q^n}{\Delta_G + 1}. \qquad \qed$$
The degree of the confusion graph can be bounded from above in a number of ways if some properties of the graph are known. We give an example next.

Example 2 (Degree distribution)

Using a simple union bound for counting the neighbors of a string in the confusion graph, we get the following bound on the degree $\Delta_G$ of the confusion graph:

$$\Delta_G \le (q - 1) \sum_{d} n_d\, q^{n - 1 - d},$$

where $n_d$ is the number of vertices with degree $d$ (a confusable partner differs in some coordinate $i$, agrees on $N(i)$, and is arbitrary elsewhere). This shows that the capacity of $G$ is at least

$$n - \log_q\Big((q - 1) \sum_{d} n_d\, q^{n - 1 - d} + 1\Big).$$

For a large class of networks, such as the internet, the world-wide web and social networks, the empirical degree distributions have been estimated (most of the time they follow a power-law decay). Using these, the achievable storage capacity of the networks can be approximated.

For general graphs, the union bound can be quite loose, and it might be difficult to compute the degree of the confusion graph exactly. However, it is possible to construct codes and bound the capacity via deterministic algorithms in more sophisticated ways than above. We consider the cases of undirected and directed graphs separately, as different algorithms are needed in these scenarios. For impossibility results, however, the technique is the same: we show that there exists a large enough subset of vertices that cannot store any information on top of what the rest of the vertices already store.

IV-A Undirected graph

In this section, we show that for an undirected graph $G$, an RDSS code can be constructed in polynomial time that achieves a rate within half of the maximum possible for $G$. In particular, if $G$ is bipartite, then an optimal code can be constructed. Hence, for undirected graphs it is relatively easy to compute or approximate the storage capacity.

To achieve the above goal, we start with the following lemma. Recall that a vertex cover of a graph $G = (V, E)$ is a subset $U \subseteq V$ such that for every edge $(u, v) \in E$, either $u \in U$ or $v \in U$ (or both).

Lemma 9

For any undirected graph $G$ and any $q$, the dimension of any RDSS code on $G$ satisfies

$$\dim \mathcal{C} \le \tau(G),$$

where $\tau(G)$ is the size of the minimum vertex cover of $G$.


Suppose $I$ is an independent set in $G$. Any vertex $v \in I$ has $N(v) \subseteq V \setminus I$. Hence, the content of every vertex of $I$ is a function of the contents of $V \setminus I$, and the dimension of the code is at most $|V \setminus I|$. Notice that $V \setminus I$ is a vertex cover of $G$. When $I$ is the largest independent set, $V \setminus I$ is the minimum vertex cover, and we have $\dim \mathcal{C} \le \tau(G)$. \qed

IV-A1 Construction of code

A matching in a graph is a set of edges such that no two edges share a common vertex. The size of the largest possible matching of the graph $G$ is denoted by $\nu(G)$ below. Polynomial time algorithms to find a maximum matching are well known [18].

To store information in the graph, first we find a maximum matching $M$. Then for any edge $(u, v) \in M$, we store the same variable in both $u$ and $v$. In this way we are able to store $|M| = \nu(G)$ units of information. Whenever one matched vertex fails, we can go to only one other vertex to retrieve the information. Hence, the storage capacity of $G$ is at least $\nu(G)$.

Surprisingly, this simple constructive scheme is optimal for bipartite graphs, is within a factor of 2 of the optimal storage for arbitrary graphs, and is very unlikely to be improved upon via any other polynomial-time constructive scheme.

First of all, we need the following well-known lemma [45].

Lemma 10

For any graph $G$,

$$\nu(G) \le \tau(G) \le 2\nu(G).$$

The proof is straightforward. To cover all the edges, one must include at least one vertex from each edge of any matching. On the other hand, if both endpoints of every edge of a maximal matching are deleted, no edge can remain (by the maximality of the matching), so these endpoints form a vertex cover.

Now, using Lemmas 9 and 10 and the discussion above, we have

$$\nu(G) \;\le\; \text{storage capacity of } G \;\le\; \tau(G) \;\le\; 2\nu(G).$$

Hence, for any graph $G$, we can store via a constructive procedure at least half of the optimal amount of information. Indeed, for a 2-approximation we do not even need to find a maximum matching; a maximal matching, which can be found by a simple greedy algorithm, is sufficient.
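The matching-based scheme is simple enough to sketch in code (an illustrative sketch with a made-up graph and symbols, not from the paper):

```python
# Matching-based storage: find a (maximal) matching greedily, then replicate
# one symbol across each matched pair. This stores nu(G)-many symbols, within
# a factor 2 of the vertex-cover upper bound.

def greedy_maximal_matching(edges):
    """Greedy maximal matching: take an edge while both endpoints are free."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched |= {u, v}
    return matching

def store(n, matching, symbols):
    """Place symbols[i] on both endpoints of the i-th matching edge."""
    contents = [None] * n
    for (u, v), s in zip(matching, symbols):
        contents[u] = contents[v] = s
    return contents

def repair(contents, matching, failed):
    """Recover a failed matched node from its partner."""
    for u, v in matching:
        if failed == u:
            return contents[v]
        if failed == v:
            return contents[u]
    return None  # unmatched nodes store nothing under this scheme

# 6-cycle example: the greedy matching finds 3 edges, so 3 symbols are stored.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
m = greedy_maximal_matching(edges)
contents = store(6, m, symbols=list(range(len(m))))
assert all(repair(contents, m, u) == contents[u] for e in m for u in e)
```

On the 6-cycle the greedy matching happens to be maximum ($\nu = 3 = \tau$), so here the simple scheme is in fact optimal.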

It is unlikely that anything strictly better than the matching code above can be found for an arbitrary graph in polynomial time, because that would imply a better-than-2 approximation for minimum vertex cover. Khot and Regev [25] have shown that if the unique games conjecture is true, then such an algorithm is not possible. Inapproximability of minimum vertex cover under milder assumptions appears in the famous paper of Dinur and Safra [17].

However, for some particular classes of graphs we can do much better. Specifically, if the graph is bipartite, then König's theorem asserts $\nu(G) = \tau(G)$. Hence for a bipartite graph the storage capacity equals $\nu(G)$, and an optimal RDSS code can be designed in polynomial time.

Other special graphs, such as planar graphs [4, 3], that have better approximation algorithms for minimum vertex cover, might also allow us to approximate the storage capacity better. We leave that exercise as future work.

IV-B Directed graphs

Next we attempt to extend the above techniques to construct RDSS codes for directed graphs. The following proposition is a simple result that proves to be a useful converse bound.

Proposition 11

For any graph , and any ,


where the quantity on the right-hand side is the minimum number of vertices that must be removed to make the graph acyclic (the size of a minimum feedback vertex set).

Note that the results of [5] or [11] imply that for any directed graph , is at least the size of the maximum acyclic induced subgraph of . From this, and from Thm. 2, we can deduce that . The above proposition is stronger in the sense that it gets rid of the term.


Suppose $S \subseteq V$ is such that the subgraph induced by $V \setminus S$ is acyclic. We first claim that the dimension of any RDSS code in $G$ must be at most $|S|$. Let us prove this claim with a simple argument that appears in [43]. Let $v \in V \setminus S$ be such that all edges of $G$ outgoing from $v$ have their other end in $S$. As the subgraph induced by $V \setminus S$ is acyclic, such a vertex always exists. Hence, whatever we store in $v$ must be a function of what is stored in the vertices of $S$. Now consider the subgraph induced by $V \setminus (S \cup \{v\})$. As this subgraph is also acyclic, there must exist a vertex whose content is a function of the contents of the vertices of $S \cup \{v\}$. Proceeding in this way, we deduce that no more than $|S|$ symbols of information can be stored in the graph $G$.

Now consider a maximum induced acyclic subgraph of $G$. If the vertex set of such a subgraph is $A$, then $V \setminus A$ is a minimum feedback vertex set, and the claim above applies with $S = V \setminus A$. Hence the dimension is at most $|V \setminus A|$, which proves the proposition. \qed

It is not possible to construct a code from a matching as in the case of undirected graphs. In the undirected case we could do that because, if $(u,v)$ is an edge, then just by replicating the symbol of $u$ in $v$ we can guarantee recovery for both $u$ and $v$. In the case of a directed graph, such recovery is possible if we have a directed cycle $v_1 \to v_2 \to \cdots \to v_k \to v_1$. We can store one symbol in $v_1$ and then replicate this symbol over all vertices of the cycle. Whenever one node fails, we can go to the next node in the cycle to recover what we lost.

Two cycles in the graph will be called vertex-disjoint if they do not have a common vertex.

Suppose $C_1, \ldots, C_m$ is a set of vertex-disjoint cycles of the graph $G$. Then it is possible to store $m$ symbols in the graph by replicating one symbol over each cycle. Hence, the storage capacity of $G$ is at least the maximum number of vertex-disjoint cycles in $G$.
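The cycle-replication scheme can be sketched as follows: find some directed cycle by depth-first search, replicate one symbol over its vertices, and repair a failed vertex from its successor on the cycle. This is a minimal sketch under our own representation assumptions (an adjacency dict in which every vertex appears as a key), not the paper's implementation.

```python
# Minimal sketch (our own code) of storage by replication on a directed cycle.

def find_directed_cycle(adj):
    """Return the vertex sequence of some directed cycle, or None if acyclic.
    adj maps every vertex (all must appear as keys) to its out-neighbors."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in adj}
    stack = []

    def dfs(u):
        color[u] = GRAY
        stack.append(u)
        for v in adj[u]:
            if color[v] == GRAY:              # back edge closes a cycle
                return stack[stack.index(v):]
            if color[v] == WHITE:
                cyc = dfs(v)
                if cyc is not None:
                    return cyc
        color[u] = BLACK
        stack.pop()
        return None

    for v in adj:
        if color[v] == WHITE:
            cyc = dfs(v)
            if cyc is not None:
                return cyc
    return None

def store_on_cycle(cycle, symbol):
    """Replicate one symbol on every vertex of the cycle."""
    return {v: symbol for v in cycle}

def recover(cycle, contents, failed):
    """Repair a failed vertex by reading its successor on the cycle."""
    successor = cycle[(cycle.index(failed) + 1) % len(cycle)]
    return contents[successor]
```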

At this point it would be helpful to establish a relation between the minimum feedback vertex set and the maximum number of vertex-disjoint cycles. Such a relation appears in the work of Erdős and Pósa [20]: for any undirected graph with at most $k$ vertex-disjoint cycles, it was shown that there is a feedback vertex set of size $O(k \log k)$. There are two bottlenecks in using this result for our purpose. First, it only holds for undirected graphs. Second, computing an optimal vertex-disjoint cycle packing is a computationally hard problem even for undirected graphs.

There have been a number of efforts toward generalizing the theorem of Erdős and Pósa to directed graphs, culminating in [36], which shows that there exists an increasing function $f$ such that every directed graph either contains $k$ vertex-disjoint cycles or has a feedback vertex set of size at most $f(k)$.

However, the function $f$ implied in [36] can be super-exponential. Hence, for our purpose it is not of much interest.

In what follows, we show that a fractional vertex-disjoint cycle packing also leads to an RDSS code, albeit a vector-linear code as opposed to the scalar codes we have been considering so far. We need the following fractional cycle-packing result of Seymour [38]. Suppose $\mathcal{C}$ is the set of all directed cycles of $G$, and suppose $y : \mathcal{C} \to \mathbb{Q}_{\ge 0}$ assigns a nonnegative rational number to every directed cycle. Let $V(C)$ denote the vertices of the cycle $C$. We impose the condition that $y$ must satisfy

$$\sum_{C \in \mathcal{C}: v \in V(C)} y(C) \le 1$$

for all $v \in V$. Under this condition we maximize the value of $\sum_{C \in \mathcal{C}} y(C)$ over all such functions $y$. Suppose this maximum value is $\nu^\ast(G)$. Then [38] asserts that the size of the minimum feedback vertex set is $O(\nu^\ast(G) \log \nu^\ast(G) \log\log \nu^\ast(G))$.

We will now show a construction of RDSS codes using Seymour’s result.

Theorem 12

Suppose in each vertex of the directed graph it is possible to store a vector of length $t$, i.e., an element of $\mathbb{F}_q^t$, for a large enough integer $t$. Then, for any $\epsilon > 0$, it is possible to constructively store $q$-ary symbols in the graph such that the content of any vertex can be recovered from its neighbors, and

Remark 2

We could use the method of [11], where the complementary index coding problem, i.e., the maximization of , is studied, to prove this theorem. However, their result cannot be used as a black box, as that would lead to an extra additive error term of , due to the gap between and . By a direct analysis, we can avoid this error term. Moreover, the analysis of [11] is more complicated than the proof below. To find a vertex-disjoint packing in polynomial time, the authors of [11] first construct a so-called vertex-split graph, convert the vertex-disjoint packing into an edge-disjoint packing problem, and then convert it back. They also crucially use a result of [33] to find a fractional edge-disjoint packing. Below we follow a much simpler path.


Suppose $\mathcal{C}$ is the set of all directed cycles of $G$, and $y : \mathcal{C} \to \mathbb{Q}_{\ge 0}$ is a function such that

  1. $\sum_{C \in \mathcal{C}: v \in V(C)} y(C) \le 1$ for all $v \in V$.

  2. $\sum_{C \in \mathcal{C}} y(C) \ge \nu^\ast(G) - \epsilon$, where $\epsilon > 0$ is arbitrary.

We know such a function exists from [38] and Prop. 11. Without loss of generality, we can assume that $y(C)$ is rational for all $C \in \mathcal{C}$, and that $t\,y(C)$ is a positive integer whenever $y(C) > 0$.

Suppose we want to store a vector of $k = t \sum_{C \in \mathcal{C}} y(C)$ coordinates. In each vertex we store a vector of length at most $t$, i.e., the content of each vertex belongs to $\mathbb{F}_q^t$. These vectors are decided in the following way. We partition the $k$ coordinates of the stored vector into parts, assigning $t\,y(C)$ coordinates to each cycle $C$. We can do such a partitioning because $\sum_{C} t\,y(C) = k$. For any $C$, the coordinates assigned to $C$ are stored in $v$ for all $v \in V(C)$. Hence the length of the vector that needs to be stored in $v$ is $\sum_{C : v \in V(C)} t\,y(C) \le t$, which is consistent with our assumption.

Now if the content of any vertex $v$ needs to be restored, we can use the contents of the neighboring vertices. If $v \in V(C)$, then the symbols assigned to $C$ can be restored from the copy stored in the vertex $u$ such that $(v, u)$ is an edge of $C$. This holds true for all $C$ such that $v \in V(C)$.

The function $y$ can be found by solving a linear program: maximize $\sum_{C} y(C)$, subject to $\sum_{C : v \in V(C)} y(C) \le 1$ for all $v \in V$. The number of variables in this linear program equals the number of cycles in the graph $G$, which can be exponentially large. The dual problem asks to find a function $z : V \to \mathbb{Q}_{\ge 0}$ that minimizes $\sum_{v} z(v)$ such that $\sum_{v \in V(C)} z(v) \ge 1$ for every directed cycle $C$. Although the number of constraints in this dual linear program can be exponentially large, there exists a separation oracle that can differentiate between a feasible solution and an infeasible one. For example, given any $z$, one can compute a minimum-weight cycle in polynomial time and check whether its weight is at least $1$. Since such a separation oracle exists, the dual linear program can be solved in polynomial time [45, p. 102], and at the same time a primal optimal solution can also be found (by using, say, the ellipsoid method).
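The separation oracle can be sketched as follows. Given candidate dual weights $z$ on the vertices, move each vertex weight onto its incoming edges (edge $(u,v)$ gets weight $z(v)$), so that the edge-weight sum around a cycle equals the cycle's vertex-weight sum; then run Dijkstra from every start vertex. This is an illustrative sketch under our own assumptions, not the paper's implementation.

```python
# Minimal sketch (our own code) of the separation oracle: minimum
# vertex-weight directed cycle, via Dijkstra from every start vertex.
import heapq

def min_weight_cycle(adj, z):
    """adj maps every vertex to its out-neighbors; z gives nonnegative
    vertex weights. Returns float('inf') when the graph is acyclic."""
    best = float("inf")
    for s in adj:
        dist = {s: 0.0}
        heap = [(0.0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale heap entry
            for v in adj[u]:
                if v == s:                    # edge (u, s) closes a cycle at s
                    best = min(best, d + z[s])
                    continue
                nd = d + z[v]
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
    return best
```

If the returned value is less than $1$, the minimizing cycle supplies a violated constraint of the dual program; otherwise $z$ is dual-feasible.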

Hence, it is possible to explicitly construct the above-mentioned vector RDSS code. \qed

Subsequently, we consider multiple node failures in our storage model.

V Multiple failures

In this section, we describe two possible generalizations of the storage capacity that are consistent with the distributed storage literature and handle the situation when more than one server node fails simultaneously.

V-A Collaborative Local Repair on Graphs

The notion of cooperative local repair was introduced in [35] as a generalization of the definition of local recovery. In this definition, instead of one server failure, provisions for multiple server failures are made. Next we extend this notion to distributed storage on graphs.

Given a graph , we use each vertex to store a -ary symbol. A code is called a cooperative -RDSS code if for any set of connected vertices , there exist deterministic functions such that for any codeword , for all . This means that if any set of or fewer connected vertices fails, then one should be able to recover them from the neighbors of that set.

Note that it is necessary in the definition to consider all sets of size less than as well, because the local recovery of a set does not imply that all proper subsets of are locally recoverable (i.e., not all neighbors of the set are neighbors of a given vertex in the set).

The reason it is sufficient to consider only connected sets in the definition is that two disconnected sets of vertices of total size are locally recoverable as long as any set of smaller size is.

Below we consider, as an example, only the special case of for undirected graphs. In this case, apart from being a usual RDSS code, the code must also be able to handle the case when both vertices of an edge fail. Hence the construction based on matching of Sec. IV-A will not work. Instead, for our first result, we need the following definition.

A -path in a graph is a set of vertices $v_1, v_2, \ldots$ such that $(v_i, v_{i+1})$ is an edge in the graph for all $i$. A subset of vertices such that every -path of the graph contains at least one of them is called a -path vertex cover [9].

Proposition 13

Suppose, given an undirected graph , is the smallest -path vertex cover. Then the dimension of any cooperative -RDSS code is at most .


Assume that is such that every vertex in the induced subgraph of has degree $0$ or $1$. Such sets are called dissociation sets, and the size of a largest dissociation set is called the dissociation number [47]. From the definition of cooperative -RDSS codes, the content of any vertex of can be reconstructed from vertices outside of . Then the dimension of any cooperative -RDSS code is at most . On the other hand, is such that for any : , at least one of or is in . \qed

In other words, the dimension of any cooperative -RDSS code is at most minus the dissociation number. It is possible to find a maximal set of vertex-disjoint -paths in a graph in polynomial time [46]. Note that the smallest -path vertex cover must contain at least one vertex from every -path. This allows us to construct a cooperative -RDSS code that has dimension at least one-third of the optimum: we just repeat the same variable in all three vertices of each -path.
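The one-third-of-optimum construction can be sketched as follows: greedily collect vertex-disjoint paths on three vertices and replicate one symbol over each. Since the greedy collection is maximal, its vertices hit every 3-path, giving a 3-path vertex cover of at most three times the minimum size. The representation and names below are our own assumptions, not the paper's.

```python
# Minimal sketch (our own code): each vertex v with two still-unused
# neighbors becomes the middle of a 3-path; the collected paths are
# vertex-disjoint, and no fully untouched 3-path can remain afterwards.

def greedy_disjoint_3paths(adj):
    """adj maps each vertex of an undirected graph to its neighbor list."""
    used, paths = set(), []
    for v in adj:
        if v in used:
            continue
        free = [u for u in adj[v] if u not in used]
        if len(free) >= 2:
            u, w = free[0], free[1]
            paths.append((u, v, w))
            used.update((u, v, w))
    return paths

def store_on_3paths(paths, symbols):
    """Replicate one symbol over the three vertices of each path, so the
    content survives the simultaneous failure of any two of them."""
    content = {}
    for (u, v, w), s in zip(paths, symbols):
        content[u] = content[v] = content[w] = s
    return content
```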

Generalizing the above procedure to more erasures becomes cumbersome and also leads to a substantial loss in the dimension of RDSS codes. Instead, in the following, we consider the usual scenario where a provision for recovery from catastrophic failures is included via the minimum distance of the code.

V-B Considerations for Minimum distance

Inclusion of the minimum distance as a necessary parameter of a locally repairable code is the norm in distributed storage [22]. In this subsection, we further impose the constraint of a minimum distance between the codewords of RDSS codes. Given a graph , an RDSS code with distance is an RDSS code such that for any two distinct codewords , the Hamming distance between them is at least .

Slightly abusing notation, for any graph and any , define to be the set of all vertices in that have at least one (incoming) edge from . We have the following proposition.

Theorem 14

For any graph , suppose there exists an RDSS code with distance and dimension . Then,


where for an undirected graph is the set of all independent sets of and for directed graphs is the set of vertex-sets of all induced acyclic subgraphs of .

When no local recovery property is required, the graph can be thought of as a complete graph. In that case, the above bound reduces to the well-known Singleton bound of coding theory. When no distance property is required (i.e., ), the bound reduces to


We claim that this implies Equations (10) and (12) (for the cases of undirected and directed graphs, respectively). Let us show this for the case of undirected graphs, as the case of directed graphs is analogous. Assume that (15) is satisfied but . However, this means that for the largest independent set , . Hence, from (15), we have , which is a contradiction. Hence, .

Finally, when the graph is regular with degree , the bound of (14) becomes (1), as an independent set (or acyclic induced subgraph) of the required size is guaranteed to exist via Turán's theorem. Indeed, Turán's theorem guarantees the existence of an independent set of size . Hence . We therefore have . This guarantees the existence of the independent set . Note that , as the graph has degree .


The proof is a generalization of the proof of Eq. (1) from [43, 10]. Below we provide the proof for undirected graphs; it extends straightforwardly to directed graphs.

Let be an RDSS code with distance and dimension for the graph . For any , let denote the restriction of codewords of to the vertices of .

Suppose is a largest independent set such that . Let be the -sized subset formed by the union of and any arbitrary vertices. Hence,

which implies that must be at most . On the other hand, . This proves the theorem. \qed

The bound of (14) can be made dependent on the per-node storage, or the alphabet size . Indeed, we have the following proposition.

Proposition 15

For any -ary RDSS code on with distance and dimension ,


where, is the maximum size of a -ary error-correcting code of length and distance , and is defined in Thm. 14.


As before, let be an RDSS code with distance and dimension for the graph . We have, for any ,

Hence, there must exist an , such that