Cooperative Repair of Multiple Node Failures in Distributed Storage Systems
Abstract
Cooperative regenerating codes are designed for repairing multiple node failures in distributed storage systems. In contrast to the original repair model of regenerating codes, which addresses the repair of a single node failure, data exchange among the new nodes is enabled. It is known that cooperative repair allows a further reduction in repair bandwidth. Currently in the literature, we have an explicit construction of exact-repair cooperative codes achieving all parameters corresponding to the minimum-bandwidth point. In this paper we give a slightly generalized and more flexible version of this cooperative regenerating code. For minimum-storage regeneration with cooperation, we present an explicit code construction which can jointly repair any number of systematic storage nodes.
1 Introduction
In a distributed storage system, a data file is distributed to a number of storage devices that are connected through a network. The data is encoded in such a way that, if some of the storage devices are disconnected from the network temporarily, or break down permanently, the content of the file can be recovered from the remaining available nodes. A simple encoding strategy is to replicate the data three times and store the replicas in three different places. This encoding method can tolerate a single failure out of three storage nodes, and is employed in large-scale cloud storage systems such as the Google File System [1]. The major drawback of the triplication method is its low storage efficiency: the amount of backup data is two times the amount of useful data. As the amount of data stored in cloud storage systems is increasing at an accelerating rate, switching to encoding methods with higher storage efficiency is inevitable.
The Reed-Solomon (RS) code [2] is a natural choice for the construction of high-rate encoding schemes. The RS code is not only optimal, in the sense of being maximum-distance separable, it also has efficient decoding algorithms (see e.g. [3]). Indeed, Facebook’s storage infrastructure currently employs a high-rate RS code with data rate 10/14. This means that four parity-check symbols are appended to every ten information symbols. Nevertheless, not all data in Facebook’s clusters is currently protected by the RS code, because the traditional decoding algorithms for RS codes do not take network resources into account. Suppose that the 14 encoded symbols are stored in different disks. If one of the disks fails, then a traditional decoding algorithm needs to download 10 symbols from other storage nodes in order to repair the failed one. The amount of data traffic for repairing a single storage node is thus 10 times the amount of data to be repaired. In a large-scale distributed storage system, disk failures occur almost every day [4]. The overhead traffic for repair would be prohibitive if all data were encoded by an RS code.
In view of the repair problem, the amount of data traffic generated for the purpose of repair is an important evaluation metric for distributed storage systems. It is coined as the repair bandwidth by Dimakis et al. in [5]. An erasure-correcting code with the aim of minimizing the repair bandwidth is called a regenerating code. Upon the failure of a storage node, we need to replace it by a new node, and the content of the new node is recovered by contacting $d$ other surviving nodes. The parameter $d$ is sometimes called the repair degree, and the contacted nodes are called the helper nodes, or simply the helpers. The repair bandwidth is measured by counting the number of data symbols transmitted from the helpers to the new node. If the data file can be reconstructed from any $k$ out of the $n$ storage nodes, i.e., if any $n-k$ disk failures can be tolerated, then we say that the $(n,k)$ reconstruction property is satisfied. The design objective is to construct regenerating codes for $n$ storage nodes, satisfying the reconstruction property, and minimizing the repair bandwidth, for a given set of code parameters $n$, $k$ and $d$.
We note that the reconstruction property is a more relaxed requirement than the condition of being maximum-distance separable (MDS). A regenerating code is an MDS erasure code only if the number of symbols contained in any $k$ nodes is exactly equal to the number of symbols in the data file. In a general regenerating code, the total number of coded symbols in any $k$ nodes may be larger than the number of symbols in the data file.
There are two main categories of regenerating codes. The first one is called exact-repair regenerating codes, and the second one is called functional-repair regenerating codes. In an exact-repair regenerating code, the content of the new node is the same as that of the failed node. In a functional-repair regenerating code, the content of the new node may change after a node repair, but the reconstruction property is preserved. For functional-repair regenerating codes, a fundamental trade-off between repair bandwidth and storage per node is obtained in [5]. This is done by drawing a connection to the theory of network coding. Following the notation in [5], we denote the storage per node by $\alpha$ and the amount of data downloaded from each contacted surviving node by $\beta$. The repair bandwidth is thus equal to $\gamma = d\beta$. A pair $(\alpha, \gamma)$ is said to be feasible if there is a regenerating code with storage $\alpha$ and repair bandwidth $\gamma$. It is proved in [5] that, for regenerating codes functionally repairing one failed node at a time, $(\alpha, \gamma)$ is feasible if and only if the file size, denoted by $M$, satisfies the following inequality,
$$M \le \sum_{i=0}^{k-1} \min\{\alpha, (d-i)\beta\}. \qquad (1)$$
If we fix the file size $M$, the inequality in (1) induces a trade-off between the storage $\alpha$ and the repair bandwidth $\gamma$.
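As a quick numerical check, the right-hand side of (1) can be evaluated directly. The sketch below is our own illustration; the helper name `max_file_size` is not from the literature:

```python
def max_file_size(k: int, d: int, alpha: float, beta: float) -> float:
    # Right-hand side of the cut-set bound (1): the largest file size M
    # supportable with per-node storage alpha and per-helper download beta.
    return sum(min(alpha, (d - i) * beta) for i in range(k))

# A pair (alpha, gamma = d * beta) is feasible iff M <= max_file_size(...).
assert max_file_size(5, 9, 2.0, 1.0) == 10.0   # storage-limited: 5 * 2
assert max_file_size(3, 3, 9.0, 1.0) == 6.0    # bandwidth-limited: 3 + 2 + 1
```

The two assertions illustrate the two regimes of the bound: when $\alpha$ is small every summand equals $\alpha$, and when $\beta$ is small the summands $(d-i)\beta$ dominate.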
There are two extreme points on the trade-off curve. Among all the feasible pairs with minimum storage $\alpha = M/k$, the one with the smallest repair bandwidth is called the minimum-storage regenerating (MSR) point,
$$(\alpha_{\mathrm{MSR}}, \gamma_{\mathrm{MSR}}) = \Big(\frac{M}{k},\ \frac{dM}{k(d-k+1)}\Big). \qquad (2)$$
On the other hand, among all the feasible pairs with minimum repair bandwidth, the one with the smallest storage is called the minimum-bandwidth regenerating (MBR) point,
$$(\alpha_{\mathrm{MBR}}, \gamma_{\mathrm{MBR}}) = \Big(\frac{2dM}{k(2d-k+1)},\ \frac{2dM}{k(2d-k+1)}\Big). \qquad (3)$$
The existence of linear functional-repair regenerating codes achieving all points on the trade-off curve is shown in [6]. An explicit construction of exact-repair regenerating codes, called the product-matrix framework, achieving all code parameters corresponding to the MBR point is given in [7]. Explicit construction of regenerating codes for the MSR point is more difficult. At the time of writing, we do not have constructions of exact-repair regenerating codes covering all parameters pertaining to the MSR point. Due to space limitations, we are not able to comprehensively review the literature on exact-repair MSR codes, but we mention below some constructions which are of direct relevance to the results in this paper.
The MISER code (which stands for MDS, Interference-aligning, Systematic Exact-Regenerating code) is an explicit exact-repair regenerating code at the MSR point [8], [9]. The code parameters are $n = 2k$ and $d = n - 1$. It is shown in [8] and [9] that every systematic node, which contains uncoded data, can be repaired with storage and repair bandwidth attaining the MSR point in (2). This result is extended in [10], which shows that, with the same code structure, every parity-check node can also be repaired with repair bandwidth meeting the MSR point. The product-matrix framework in [7] also gives a family of MSR codes, with parameters satisfying $d \ge 2k-2$. All of the MSR codes mentioned above have code rate no more than $1/2$. For high-rate exact-repair MSR codes, we refer the readers to three recent papers [11], [12] and [13], and the references contained therein.
We remark that the interior points on the trade-off curve between storage and repair bandwidth for functional-repair regenerating codes are in general not achievable by exact-repair regenerating codes (see e.g. [14] and [15]).
All of the regenerating codes mentioned in the previous paragraphs are designed for the repair of a single node failure. In a large-scale distributed storage system, it is not uncommon to encounter multiple node failures, for various reasons. Firstly, node failures may be correlated, because of power outages or aging. Secondly, we may not detect a node failure immediately when it happens. A scrubbing process is carried out periodically by the maintenance system, scanning the hard disks one by one to check whether there is any unrecoverable error. As the volume of the whole storage system increases, the scrubbing process takes longer to run, and hence the integrity of the disks is checked less frequently. A disk error may remain dormant and undetected for a long period of time, and if more than one error occurs during this period, multiple disk errors will be detected by the scrubbing process. Lastly, in some commercial storage systems such as TotalRecall [16], the repair of a failed node is deliberately deferred. During the period when some storage nodes are unavailable, degraded reads are enabled by decoding the missing data in real time. A repair procedure is triggered only after the number of failed nodes reaches a predetermined threshold. This mode of repair reduces the overhead of performing maintenance operations, and is called lazy repair.
A naive method for correcting multiple node failures is to repair the failed nodes one by one, using methods designed for repairing a single node failure. A collaborative recovery methodology, which repairs multiple failed nodes jointly, is suggested in [17] and [18]. The repair procedure is divided into two phases. In the first phase, the new nodes download repair data from some surviving nodes, and in the second phase, the new nodes exchange data among themselves. The enabling of data exchange among the new nodes is the distinctive feature. We will call this the cooperative or collaborative repair model.
The minimum-storage regime for collaborative repair is considered in [17] and [18]. It is shown that a further reduction in repair bandwidth is possible if data exchange among the new nodes is allowed. Optimal functional-repair minimum-storage regenerating codes are also presented in [18]. The results are extended by Le Scouarnec et al. to the opposite extreme point with minimum repair bandwidth in [19] and [20]. The storage and repair bandwidth per new node at the minimum-storage collaborative regenerating (MSCR) point are denoted by $\alpha_{\mathrm{MSCR}}$ and $\gamma_{\mathrm{MSCR}}$, respectively, while the storage and repair bandwidth per new node at the minimum-bandwidth collaborative regenerating (MBCR) point are denoted by $\alpha_{\mathrm{MBCR}}$ and $\gamma_{\mathrm{MBCR}}$, respectively. For the repair of $t$ failed nodes, each contacting $d$ helpers, the MSCR and MBCR points for functional repair are
$$(\alpha_{\mathrm{MSCR}}, \gamma_{\mathrm{MSCR}}) = \Big(\frac{M}{k},\ \frac{M}{k}\cdot\frac{d+t-1}{d+t-k}\Big), \qquad (4)$$
$$(\alpha_{\mathrm{MBCR}}, \gamma_{\mathrm{MBCR}}) = \Big(\frac{M}{k}\cdot\frac{2d+t-1}{2d+t-k},\ \frac{M}{k}\cdot\frac{2d+t-1}{2d+t-k}\Big). \qquad (5)$$
We note that when $t = 1$, the operating points in (4) and (5) reduce to the ones in (2) and (3).
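The reduction at $t = 1$ can be checked mechanically. The sketch below takes the standard closed-form expressions for the cooperative points (4) and (5) at face value; the function names are ours:

```python
from fractions import Fraction

def mscr_point(M, k, d, t):
    # Cooperative minimum-storage point, eq. (4)
    return Fraction(M, k), Fraction(M * (d + t - 1), k * (d + t - k))

def mbcr_point(M, k, d, t):
    # Cooperative minimum-bandwidth point, eq. (5): storage equals bandwidth
    g = Fraction(M * (2 * d + t - 1), k * (2 * d + t - k))
    return g, g

# With t = 1 the cooperative points coincide with the single-failure
# MSR point (2) and MBR point (3).
M, k, d = 12, 3, 4
assert mscr_point(M, k, d, 1) == (Fraction(M, k),
                                  Fraction(d * M, k * (d - k + 1)))
assert mbcr_point(M, k, d, 1)[0] == Fraction(2 * d * M, k * (2 * d - k + 1))
```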
The vertices on the trade-off curve between storage and repair bandwidth for collaborative repair are characterized in [21]. It is shown in [21] that all points on the cooperative functional-repair trade-off curve can be attained by linear regenerating codes over a finite field. A numerical example of the trade-off curves for single-loss regenerating codes and cooperative regenerating codes is shown in Figure 1. We see that cooperative repair requires less repair bandwidth in comparison to single-failure repair.
Explicit exact-repair codes for the MBCR point, for all legitimate parameters, were presented by Wang and Zhang in [22]. The construction in [22] subsumes earlier constructions in [23] and [24]. In contrast, there are not so many explicit constructions for MSCR codes. The parameters of existing explicit constructions are summarized in Table 1. A construction of exact repair for $d = k$ and $t = 2$ is given in [25]. This is extended to an MSCR code with $d \ge k$ and $t = 2$ in [26]. Indeed, a connection between MSCR codes which can repair two node failures and non-cooperative MSR codes is made in [27]. Using this connection, the authors in [27] are able to construct MSCR codes with $t = 2$ from existing MSR codes. However, there is no explicit construction of exact-repair MSCR codes for an arbitrary number of failed nodes at the time of writing.
Practical implementations of distributed storage systems which can correct multiple node failures can be found in [28] to [31].
Table 1: Parameters of explicit cooperative regenerating code constructions.

Type   Code Parameters                                          Ref.
MBCR   all $n$, $k$, $d$, $t$ with $d \ge k$, $n \ge d+t$       [22]
MBCR   $n = d + t$                                              [23]
MBCR   $n = d + t$, $d = k$                                     [24]
MSCR   $d = k$, $t = 2$                                         [25]
MSCR   $d \ge k$, $t = 2$                                       [26]
MSCR   $n = 2k$, $d = n - t$, $t \le k$                         [26]
       (repair of systematic nodes only)
The rest of this paper is organized as follows. In Section 2, we formally define linear regenerating codes for distributed storage systems with collaborative repair. In Section 3, we give a slight generalization of the cooperative regenerating codes in [22]. The generalized version also achieves all code parameters of the MBCR point, but the building blocks of the construction only need to satisfy a more relaxed condition. In Section 4, we give a simplified description of the repair method in [26], and illustrate how to repair two or more systematic nodes collaboratively in the MISER code. Some concluding remarks are given in Section 5.
2 A Collaborative Repair Model for Linear Regenerating Code
We will use the following notation in this paper:
$M$: the file size.
$n$: the total number of storage nodes.
$k$: the number of storage nodes from which a data collector can decode the original file.
$d$: the number of surviving nodes contacted by a new node.
$t$: the number of new nodes we want to repair collaboratively.
$\alpha$: the amount of data stored in a node.
$\beta_1$: the amount of data downloaded from a helper node to a new node during the first phase of repair.
$\beta_2$: the amount of data exchanged between two new nodes during the second phase of repair.
$\gamma$: the repair bandwidth per new node.
$\mathbb{F}_q$: the finite field of size $q$, where $q$ is a prime power.
We describe in this section a mathematical formulation of linear collaborative exact repair. For the problem formulation for the nonlinear case, we refer the readers to [21].
A data file consists of $M$ symbols. We let $W$ be the vector space $\mathbb{F}_q^M$. We regard a data file as a vector in $W$, and call it the source vector.
The source vector is mapped to $n\alpha$ finite field symbols, and each node stores $\alpha$ of them. The mapping from the source vector to an encoded symbol is a linear functional on $W$. Following the terminology of network coding, we will call these linear mappings the encoding vectors associated with the encoded symbols. Formally, a linear functional is an object in the dual space $W^*$ of $W$, which consists of all linear transformations from $W$ to $\mathbb{F}_q$. More precisely, an encoding vector should be called an encoding covector instead, but we will be a little bit sloppy on this point and simply use the term “vector”.
The content of a storage node can be described by a subspace of $W^*$, spanned by the encoding vectors of the encoded symbols stored in this node. For $i = 1, 2, \ldots, n$, we let $W_i$ denote the subspace of $W^*$ pertaining to node $i$. The dimension of $W_i$ is no more than $\alpha$,
$$\dim(W_i) \le \alpha \quad \text{for all } i = 1, 2, \ldots, n.$$
We want to distribute the data file to the $n$ storage nodes in such a way that any $k$ of them are sufficient for reconstructing the source vector. The reconstruction property requires that the encoding vectors in any $k$ storage nodes span the dual space $W^*$, hence it is required that
$$\dim(W_{i_1} + W_{i_2} + \cdots + W_{i_k}) = M$$
for any $k$-subset $\{i_1, i_2, \ldots, i_k\}$ of $\{1, 2, \ldots, n\}$. Here $W_{i_1} + \cdots + W_{i_k}$ denotes the sum space of the $W_{i_j}$'s. The sum is a direct sum if the regenerating code is MDS.
Suppose that the storage nodes with indices $i_1, i_2, \ldots, i_t$ fail, and we need to replace them by $t$ new nodes. For $j = 1, 2, \ldots, t$, new node $i_j$ contacts $d$ available nodes, and downloads $\beta_1$ symbols from each of them. The storage nodes which participate in the repair process are called the helpers. Different new nodes may download repair data from different sets of helpers. Let $D_j$ be the index set of the helpers contacted by new node $i_j$. Thus, we have
$$D_j \subseteq \{1, 2, \ldots, n\} \setminus \{i_1, \ldots, i_t\}$$
and $|D_j| = d$ for all $j$. The downloaded symbols are linear combinations of the symbols kept by the helpers. The encoding vector of a symbol downloaded from helper $h$ is thus contained in $W_h$. For $h \in D_j$, let $S_{h,j}$ be the subspace of $W_h$ spanned by the encoding vectors of the symbols sent from helper $h$ to new node $i_j$. We have
$$\dim(S_{h,j}) \le \beta_1$$
for all $j = 1, \ldots, t$ and $h \in D_j$.
In the second phase of the repair, new node $i_j$ computes and sends $\beta_2$ finite field symbols to new node $i_\ell$, for $j, \ell \in \{1, \ldots, t\}$ with $j \neq \ell$. The computed symbols are linear combinations of the symbols which are already received by new node $i_j$ in the first phase of repair. Let $T_{j,\ell}$ be the subspace spanned by the encoding vectors of the symbols sent from new node $i_j$ to new node $i_\ell$ during the second phase. We have
$$T_{j,\ell} \subseteq \sum_{h \in D_j} S_{h,j} \quad \text{and} \quad \dim(T_{j,\ell}) \le \beta_2.$$
For $j = 1, \ldots, t$, new node $i_j$ should be able to recover the content of the failed node $i_j$. In terms of the subspaces, it is required that
$$W_{i_j} \subseteq \sum_{h \in D_j} S_{h,j} + \sum_{\ell \neq j} T_{\ell,j}.$$
The repair bandwidth per new node is equal to
$$\gamma = d\beta_1 + (t-1)\beta_2.$$
Any linear code satisfying the above requirements is called a cooperative regenerating code or collaborative regenerating code.
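The repair bandwidth in this model is simple accounting: $d$ downloads in the first phase plus $t-1$ exchanges in the second. A one-line sketch (our illustration; the construction of Section 3 corresponds to $\beta_1 = 2$ and $\beta_2 = 1$):

```python
def repair_bandwidth(d: int, t: int, beta1: int, beta2: int) -> int:
    # gamma = d*beta1 + (t-1)*beta2: phase-one downloads plus
    # phase-two exchanges, counted per new node
    return d * beta1 + (t - 1) * beta2

# Example: d = 4 helpers, t = 3 new nodes, two symbols per helper,
# one symbol exchanged between each ordered pair of new nodes.
assert repair_bandwidth(4, 3, 2, 1) == 10
```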
3 Cooperative Regenerating Codes with Minimum Repair Bandwidth
In this section we give a slight generalization of the construction of minimum-bandwidth cooperative regenerating codes in [22]. The number of failed nodes, $t$, to be repaired jointly can be any positive integer. The code parameters which can be supported by the construction described below are the same as those in [22], i.e., $n$, $k$, $d$ and $t$ satisfy
$$d \ge k \quad \text{and} \quad n \ge d + t.$$
The file size of the regenerating code is
$$M = k(2d + t - k),$$
and each storage node stores $\alpha = 2d + t - 1$ symbols. In contrast to the polynomial approach in [22], the construction below depends on the manipulation of a bilinear form (to be defined in (6)).
Encoding. We need a $d \times n$ matrix $U$ and a $(d+t) \times n$ matrix $V$ for the encoding. Partition $U$ and $V$ as
$$U = \begin{bmatrix} U_1 \\ U_2 \end{bmatrix}, \qquad V = \begin{bmatrix} V_1 \\ V_2 \end{bmatrix},$$
where $U_1$ and $V_1$ are submatrices of size $k \times n$. We will choose the matrices $U$ and $V$ such that the following conditions are satisfied:

1. any $d \times d$ submatrix of $U$ is nonsingular;
2. any $(d+t) \times (d+t)$ submatrix of $V$ is nonsingular;
3. any $k \times k$ submatrix of $U_1$ is nonsingular;
4. any $k \times k$ submatrix of $V_1$ is nonsingular.
We can obtain such matrices $U$ and $V$ from Vandermonde matrices or Cauchy matrices. If we use a Vandermonde matrix, we can set the $i$-th column of $U$ to
$$(1,\ a_i,\ a_i^2,\ \ldots,\ a_i^{d-1})^T$$
for $i = 1, 2, \ldots, n$. If $a_1, a_2, \ldots, a_n$ are distinct elements in $\mathbb{F}_q$, then the resulting matrix satisfies the first and third conditions listed above. We can use a Vandermonde matrix for the matrix $V$ similarly. The existence of such matrices is guaranteed if the field size $q$ is larger than or equal to $n$. In any case, the correctness of the code construction only depends on the four conditions above.
For $i = 1, 2, \ldots, n$, we denote the $i$-th column of $U$ by $\mathbf{u}_i$, and the $i$-th column of $V$ by $\mathbf{v}_i$.
We arrange the $M$ source symbols in a partitioned matrix
$$S = \begin{bmatrix} A & C \\ B^T & \mathbf{0} \end{bmatrix},$$
where $A$, $B$ and $C$ are submatrices of size $k \times k$, $k \times (d-k)$ and $k \times (d+t-k)$, respectively, and $\mathbf{0}$ is the $(d-k) \times (d+t-k)$ zero matrix. The total number of entries in the three submatrices is
$$k^2 + k(d-k) + k(d+t-k) = k(2d+t-k) = M.$$
We will call $S$ the source matrix.
The source matrix induces a bilinear form $\Phi : \mathbb{F}_q^d \times \mathbb{F}_q^{d+t} \to \mathbb{F}_q$ defined by
$$\Phi(\mathbf{x}, \mathbf{y}) := \mathbf{x}^T S \mathbf{y} \qquad (6)$$
for $\mathbf{x} \in \mathbb{F}_q^d$ and $\mathbf{y} \in \mathbb{F}_q^{d+t}$. We distribute the information to the $n$ storage nodes in such a way that, for $i = 1, \ldots, n$, node $i$ is able to compute the following two linear functions,
$$f_i(\mathbf{y}) := \Phi(\mathbf{u}_i, \mathbf{y}) \quad \text{and} \quad g_i(\mathbf{x}) := \Phi(\mathbf{x}, \mathbf{v}_i).$$
The first one is a linear mapping from $\mathbb{F}_q^{d+t}$ to $\mathbb{F}_q$, and the second is from $\mathbb{F}_q^d$ to $\mathbb{F}_q$. Node $i$ can store the entries of the vector $\mathbf{u}_i^T S$, and compute the first function by taking the inner product of the input vector with $\mathbf{u}_i^T S$,
$$f_i(\mathbf{y}) = (\mathbf{u}_i^T S)\, \mathbf{y}.$$
For the second function $g_i$, node $i$ can store the entries of the vector $S \mathbf{v}_i$, and compute $g_i$ by
$$g_i(\mathbf{x}) = \mathbf{x}^T (S \mathbf{v}_i).$$
Since the components of $\mathbf{u}_i^T S$ and $S \mathbf{v}_i$ satisfy a simple linear equation,
$$\mathbf{u}_i^T (S \mathbf{v}_i) = (\mathbf{u}_i^T S)\, \mathbf{v}_i, \qquad (7)$$
we only need to store $(d+t) + d - 1$ finite field elements in node $i$ in order to implement the functions $f_i$ and $g_i$. Hence, each storage node is only required to store
$$\alpha = 2d + t - 1$$
finite field elements.
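Relation (7) is just the associativity of the matrix products defining the bilinear form; the following sketch (ours, with illustrative dimensions $d = 4$, $t = 3$ over GF(11)) checks it for a random source matrix:

```python
import random

random.seed(1)
p, d, t = 11, 4, 3
# A random d x (d+t) source matrix S and arbitrary vectors u, v
S = [[random.randrange(p) for _ in range(d + t)] for _ in range(d)]
u = [random.randrange(p) for _ in range(d)]
v = [random.randrange(p) for _ in range(d + t)]

uTS = [sum(u[i] * S[i][j] for i in range(d)) % p for j in range(d + t)]
Sv = [sum(S[i][j] * v[j] for j in range(d + t)) % p for i in range(d)]

# Relation (7): u^T (S v) = (u^T S) v, so the 2d+t stored components
# carry one linear dependency and only 2d+t-1 symbols are needed.
lhs = sum(u[i] * Sv[i] for i in range(d)) % p
rhs = sum(uTS[j] * v[j] for j in range(d + t)) % p
assert lhs == rhs
```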
Repair procedure. Without loss of generality, suppose that nodes 1 to $t$ fail. For $i = 1, \ldots, t$, the $i$-th new node downloads some repair data from a set of $d$ surviving nodes, which can be chosen arbitrarily. Let $D_i$ be the index set of the surviving nodes contacted by node $i$. We have $D_i \subseteq \{t+1, \ldots, n\}$ and $|D_i| = d$ for all $i$. The helper with index $j \in D_i$ computes the two finite field elements
$$f_j(\mathbf{v}_i) = \mathbf{u}_j^T S \mathbf{v}_i \quad \text{and} \quad g_j(\mathbf{u}_i) = \mathbf{u}_i^T S \mathbf{v}_j,$$
and transmits them to new node $i$. In the first phase of repair, a total of $2dt$ symbols are transmitted from the helpers.
For $i = 1, \ldots, t$, the $i$-th new node can recover $S \mathbf{v}_i$ from the following $d$-dimensional vector with components indexed by $j \in D_i$,
$$\big(\mathbf{u}_j^T S \mathbf{v}_i\big)_{j \in D_i},$$
which equals $S\mathbf{v}_i$ multiplied by the $d \times d$ matrix obtained by stacking the row vectors $\mathbf{u}_j^T$ for $j \in D_i$. Since this matrix is nonsingular by construction (condition 1), the $i$-th new node can obtain $S \mathbf{v}_i$. At this point, the $i$-th new node is able to compute the function $g_i$.
In the second phase of the repair procedure, new node $j$ calculates $g_j(\mathbf{u}_i) = \mathbf{u}_i^T S \mathbf{v}_j$, for $i \in \{1, \ldots, t\} \setminus \{j\}$, and sends the resulting finite field symbol to the $i$-th new node. Furthermore, node $i$ can compute $\mathbf{u}_i^T (S \mathbf{v}_i)$, using the information already obtained from the first phase of repair. Node $i$ can now calculate $\mathbf{u}_i^T S$ from
$$(\mathbf{u}_i^T S)\, \mathbf{v}_j = \mathbf{u}_i^T S \mathbf{v}_j, \qquad j \in D_i \cup \{1, \ldots, t\},$$
using the property that the $d+t$ vectors $\mathbf{v}_j$, for $j \in D_i \cup \{1, \ldots, t\}$, are linearly independent over $\mathbb{F}_q$ (condition 2). The repair of node $i$ is completed by storing $2d+t-1$ of the components of the vectors $\mathbf{u}_i^T S$ and $S \mathbf{v}_i$, which are necessary for computing $f_i$ and $g_i$.
We remark that the total number of transmitted symbols in the whole repair procedure is $2dt + t(t-1)$, and therefore the repair bandwidth per new node is
$$\gamma = \frac{2dt + t(t-1)}{t} = 2d + t - 1.$$
File recovery. Suppose that a data collector connects to nodes $j_1, j_2, \ldots, j_k$, with
$$1 \le j_1 < j_2 < \cdots < j_k \le n.$$
The data collector can download the vectors
$$\mathbf{u}_{j_i}^T S \quad \text{and} \quad S \mathbf{v}_{j_i}$$
for $i = 1, \ldots, k$. From the last $d+t-k$ of the components of $\mathbf{u}_{j_i}^T S$, for $i = 1, \ldots, k$, we can recover the submatrix $C$ in the source matrix $S$, because any $k \times k$ submatrix of $U_1$ is nonsingular by assumption. Similarly, from the last $d-k$ components of $S \mathbf{v}_{j_i}$, we can recover the submatrix $B$, using the property that any $k \times k$ submatrix of $V_1$ is nonsingular. The remaining source symbols in $A$ can then be decoded either from the first $k$ components of the vectors $\mathbf{u}_{j_i}^T S$, or the first $k$ components of the vectors $S \mathbf{v}_{j_i}$.
Example. We illustrate the construction by the following example with code parameters $n = 7$, $k = 3$, $d = 4$ and $t = 3$. The file size is $M = k(2d+t-k) = 24$. In this example, we pick $\mathbb{F}_{11}$ as the underlying finite field.
The source matrix is partitioned as
$$S = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & c_{11} & c_{12} & c_{13} & c_{14} \\
a_{21} & a_{22} & a_{23} & c_{21} & c_{22} & c_{23} & c_{24} \\
a_{31} & a_{32} & a_{33} & c_{31} & c_{32} & c_{33} & c_{34} \\
b_{1} & b_{2} & b_{3} & 0 & 0 & 0 & 0
\end{bmatrix}.$$
The entries $a_{ij}$'s, $b_i$'s and $c_{ij}$'s are the 24 source symbols. Let $\Phi$ be the bilinear form defined as in (6), mapping a pair of vectors in $\mathbb{F}_{11}^4 \times \mathbb{F}_{11}^7$ to an element in $\mathbb{F}_{11}$.
Let $U$ be the $4 \times 7$ Vandermonde matrix (entries reduced mod 11)
$$U = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 2 & 3 & 4 & 5 & 6 & 7 \\
1 & 4 & 9 & 5 & 3 & 3 & 5 \\
1 & 8 & 5 & 9 & 4 & 7 & 2
\end{bmatrix} \qquad (8)$$
and for $i = 1, \ldots, 7$, let $\mathbf{u}_i$ be the $i$-th column of $U$. Let $V$ be the $7 \times 7$ Vandermonde matrix whose $(i,j)$ entry is $j^{i-1} \bmod 11$,
$$V = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 2 & 3 & 4 & 5 & 6 & 7 \\
1 & 4 & 9 & 5 & 3 & 3 & 5 \\
1 & 8 & 5 & 9 & 4 & 7 & 2 \\
1 & 5 & 4 & 3 & 9 & 9 & 3 \\
1 & 10 & 1 & 1 & 1 & 10 & 10 \\
1 & 9 & 3 & 4 & 5 & 5 & 4
\end{bmatrix} \qquad (9)$$
and for $i = 1, \ldots, 7$, let $\mathbf{v}_i$ be the $i$-th column of $V$. The $i$-th node needs to store enough information such that it can compute the functions
$$f_i(\mathbf{y}) = \mathbf{u}_i^T S \mathbf{y} \quad \text{and} \quad g_i(\mathbf{x}) = \mathbf{x}^T S \mathbf{v}_i.$$
For instance, node 1 can store the last 3 components of the vector $S \mathbf{v}_1$, and all 7 components of $\mathbf{u}_1^T S$, with all arithmetic performed modulo 11. The missing entry of $S \mathbf{v}_1$, namely, the first entry, is a linear combination of the stored symbols: since $\mathbf{u}_1 = (1,1,1,1)^T$, relation (7) gives the first entry of $S \mathbf{v}_1$ as $(\mathbf{u}_1^T S)\mathbf{v}_1$ minus the sum of the last three entries of $S \mathbf{v}_1$. Each node only needs to store 10 finite field symbols. The storage per node meets the bound
$$\alpha_{\mathrm{MBCR}} = 2d + t - 1 = 10.$$
We illustrate the repair procedure by going through the repair of nodes 5, 6 and 7. Suppose we lose the content of nodes 5, 6 and 7, and want to rebuild them by cooperative repair. For $i \in \{5,6,7\}$ and $j \in \{1,2,3,4\}$, node $j$ computes $f_j(\mathbf{v}_i) = \mathbf{u}_j^T S \mathbf{v}_i$ and $g_j(\mathbf{u}_i) = \mathbf{u}_i^T S \mathbf{v}_j$, and sends them to node $i$, in the first phase of repair. Node $i$ now has 8 symbols,
$$\mathbf{u}_j^T S \mathbf{v}_i \quad \text{and} \quad \mathbf{u}_i^T S \mathbf{v}_j, \qquad j = 1, 2, 3, 4.$$
The first four of them can be put together to form the vector
$$\big(\mathbf{u}_1^T S \mathbf{v}_i,\ \mathbf{u}_2^T S \mathbf{v}_i,\ \mathbf{u}_3^T S \mathbf{v}_i,\ \mathbf{u}_4^T S \mathbf{v}_i\big).$$
Because the first four columns of the matrix $U$ in (8) are linearly independent over $\mathbb{F}_{11}$, for $i \in \{5,6,7\}$, node $i$ can solve for $S \mathbf{v}_i$ after the first phase of repair, and is able to calculate $g_i(\mathbf{x}) = \mathbf{x}^T S \mathbf{v}_i$ for any vector $\mathbf{x}$.
The communications among nodes 5, 6 and 7 in the second phase of repair are as follows:
node 5 sends $g_5(\mathbf{u}_6) = \mathbf{u}_6^T S \mathbf{v}_5$ to node 6,
node 5 sends $g_5(\mathbf{u}_7) = \mathbf{u}_7^T S \mathbf{v}_5$ to node 7,
node 6 sends $g_6(\mathbf{u}_5) = \mathbf{u}_5^T S \mathbf{v}_6$ to node 5,
node 6 sends $g_6(\mathbf{u}_7) = \mathbf{u}_7^T S \mathbf{v}_6$ to node 7,
node 7 sends $g_7(\mathbf{u}_5) = \mathbf{u}_5^T S \mathbf{v}_7$ to node 5,
node 7 sends $g_7(\mathbf{u}_6) = \mathbf{u}_6^T S \mathbf{v}_7$ to node 6.
For $i \in \{5, 6, 7\}$, node $i$ can obtain $\mathbf{u}_i^T S$ from the seven symbols
$$\mathbf{u}_i^T S \mathbf{v}_j, \qquad j = 1, 2, \ldots, 7,$$
since the columns $\mathbf{v}_1, \ldots, \mathbf{v}_7$ of $V$ are linearly independent over $\mathbb{F}_{11}$. (The symbol $\mathbf{u}_i^T S \mathbf{v}_i$ is computed locally from $S \mathbf{v}_i$.)
In the first phase, we transmit $2dt = 24$ symbols, and in the second phase we transmit 6 symbols. The number of transmitted symbols per new node is thus equal to 10, which is equal to the target repair bandwidth $\gamma_{\mathrm{MBCR}} = 2d + t - 1 = 10$.
To illustrate the reconstruction property, suppose that a data collector connects to nodes 1, 2 and 3. The data collector can download the following six vectors,
$$\mathbf{u}_i^T S \quad \text{and} \quad S \mathbf{v}_i, \qquad i = 1, 2, 3.$$
There are in total 33 symbols in these six vectors. They are not linearly independent, as the original file only contains 24 independent symbols. We can decode the symbols in the data file by selecting 24 entries of the received vectors, forming a vector which can be written as the product of a lower-block-triangular matrix and the 24-dimensional vector of source symbols. The diagonal blocks of this matrix are nonsingular Vandermonde submatrices, possibly scaled by diagonal matrices. The matrix is therefore invertible, and we can obtain the source symbols of the data file.
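The whole example can be simulated end-to-end. The sketch below is our own illustration, assuming the block layout of $S$, the Vandermonde matrices over GF(11), and the two-phase repair of nodes 5, 6 and 7 described above; it checks that both stored vectors of each failed node are rebuilt and counts the transmitted symbols:

```python
import random

p, n, k, d, t = 11, 7, 3, 4, 3
random.seed(7)

def solve_mod(A, b, p):
    # Solve A x = b over GF(p) for a square nonsingular A (Gauss-Jordan)
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    N = len(A)
    for c in range(N):
        piv = next(r for r in range(c, N) if m[r][c] % p)
        m[c], m[piv] = m[piv], m[c]
        inv = pow(m[c][c], p - 2, p)
        m[c] = [x * inv % p for x in m[c]]
        for r in range(N):
            if r != c and m[r][c]:
                f = m[r][c]
                m[r] = [(m[r][j] - f * m[c][j]) % p for j in range(N + 1)]
    return [m[r][N] for r in range(N)]

# Random source matrix with the zero block in the lower-right corner
S = [[random.randrange(p) for _ in range(d + t)] for _ in range(d)]
for i in range(k, d):
    for j in range(k, d + t):
        S[i][j] = 0

U = [[pow(a, i, p) for a in range(1, n + 1)] for i in range(d)]      # eq. (8)
V = [[pow(a, i, p) for a in range(1, n + 1)] for i in range(d + t)]  # eq. (9)
u = lambda i: [U[r][i] for r in range(d)]            # column i (0-indexed)
v = lambda i: [V[r][i] for r in range(d + t)]
uTS = lambda i: [sum(u(i)[r] * S[r][c] for r in range(d)) % p
                 for c in range(d + t)]              # row vector of node i
Sv = lambda i: [sum(S[r][c] * v(i)[c] for c in range(d + t)) % p
                for r in range(d)]                   # column vector of node i

new_nodes, helpers = [4, 5, 6], [0, 1, 2, 3]         # nodes 5,6,7 fail
sent, phase1 = 0, {}
for i in new_nodes:
    # Phase 1: helper j sends u_j^T S v_i and u_i^T S v_j to new node i
    f = {j: sum(uTS(j)[c] * v(i)[c] for c in range(d + t)) % p for j in helpers}
    g = {j: sum(u(i)[r] * Sv(j)[r] for r in range(d)) % p for j in helpers}
    sent += 2 * d
    # Recover S v_i from the d values f[j] = u_j . (S v_i)
    Svi = solve_mod([u(j) for j in helpers], [f[j] for j in helpers], p)
    phase1[i] = (g, Svi)

for i in new_nodes:
    g, Svi = phase1[i]
    # Phase 2: each other new node computes u_i . (S v_ell) and sends it
    for ell in new_nodes:
        if ell != i:
            g[ell] = sum(u(i)[r] * phase1[ell][1][r] for r in range(d)) % p
            sent += 1
    g[i] = sum(u(i)[r] * Svi[r] for r in range(d)) % p   # computed locally
    cols = helpers + new_nodes                           # all 7 indices
    x = solve_mod([v(j) for j in cols], [g[j] for j in cols], p)
    assert x == uTS(i) and Svi == Sv(i)                  # node i rebuilt

assert sent == 2 * d * t + t * (t - 1)                   # 30 symbols in total
```

After the two phases, `sent // t` equals the target repair bandwidth $2d + t - 1 = 10$ per new node.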
4 A Class of MinimumStorage Cooperative Regenerating Codes
In this section, we give a simplified description of the minimum-storage cooperative regenerating code presented in [26]. The code parameters are
$$n = 2k, \qquad \alpha = k, \qquad M = k^2,$$
and during the repair of $t$ failed nodes, each new node contacts all $d = n - t$ surviving nodes. The first $k$ nodes are the systematic nodes, while the last $k$ nodes are the parity-check nodes. The coding structure of the cooperative regenerating codes to be described in this section is indeed the same as that of the MISER code [8], [9] and the regenerating code in [10]. Our objective is to show that, with this coding structure, we can repair the failure of any $t$ systematic nodes or any $t$ parity-check nodes, for any $t$ less than or equal to $k$, attaining the MSCR point defined in (4).
We need a nonsingular matrix $P$ and a superregular matrix $Q$, both of size $k \times k$. Recall that a matrix is said to be superregular if every square submatrix is nonsingular. A Cauchy matrix is an example of a superregular matrix, and we may let $Q$ be a Cauchy matrix.
After the matrices $P$ and $Q$ are fixed, we let $\bar{P}$ be the inverse of $P^T$ and $\bar{Q}$ be the matrix $(Q^T)^{-1}$. It can be shown that the matrix $\bar{P}$ is nonsingular and $\bar{Q}$ is superregular. We have the following relationship among these matrices,
$$\bar{P}^T P = \bar{Q}^T Q = I_k.$$
Let $p_{ij}$ be the $(i,j)$ entry of $P$, for $1 \le i, j \le k$, and $q_{ij}$ be the $(i,j)$ entry of $Q$.
For $j = 1, \ldots, k$, let $\mathbf{p}_j$ denote the $j$-th column of $P$, and $\mathbf{q}_j$ the $j$-th column of $Q$. The columns of $P$ and the columns of $Q$ will be regarded as two bases of the vector space $\mathbb{F}_q^k$. Let $\bar{\mathbf{p}}_1, \ldots, \bar{\mathbf{p}}_k$ be the dual basis of the $\mathbf{p}_j$'s, and let $\bar{\mathbf{q}}_1, \ldots, \bar{\mathbf{q}}_k$ be the dual basis of the $\mathbf{q}_j$'s. The dual bases satisfy the following defining property
$$\bar{\mathbf{p}}_i^T \mathbf{p}_j = \bar{\mathbf{q}}_i^T \mathbf{q}_j = \delta_{ij},$$
where $\delta_{ij}$ is the Kronecker delta function.
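Concretely, the dual basis of the columns of a nonsingular matrix $P$ consists of the rows of $P^{-1}$ (equivalently, the columns of $(P^T)^{-1}$). A $2 \times 2$ sketch over the rationals, with an arbitrary matrix of our choosing:

```python
from fractions import Fraction as F

P = [[F(1), F(2)],
     [F(3), F(5)]]                      # columns p_1, p_2 form a basis
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[ P[1][1] / det, -P[0][1] / det],
        [-P[1][0] / det,  P[0][0] / det]]

# Row i of P^{-1} is the dual basis vector pbar_i: its inner product
# with column j of P is the Kronecker delta.
for i in range(2):
    for j in range(2):
        ip = sum(Pinv[i][r] * P[r][j] for r in range(2))
        assert ip == (1 if i == j else 0)
```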
The last ingredient of the construction is a superregular symmetric matrix $G$ and its inverse $H = G^{-1}$, satisfying
$$GH = HG = I_k. \qquad (10)$$
In particular, it is required that the entries of $G$ and of $H$ are all not equal to zero in $\mathbb{F}_q$.
Encoding. A data file consists of
$$M = k^2$$
source symbols. For $i = 1, \ldots, k$, node $i$ is a systematic node and stores $k$ source symbols. We can perform the encoding in two essentially equivalent ways. In the first encoding function, the first $k$ nodes store the source symbols and the last $k$ nodes store the parity-check symbols. Let $\mathbf{x}_i$ be the $k$-dimensional vector whose components are the symbols stored in node $i$. For $i = 1, \ldots, k$, node $k+i$ is a parity-check node, and stores the components of the vector
(11) 
where $I$ denotes the $k \times k$ identity matrix. We note that the matrix within the parentheses in (11) is the sum of a rank-one matrix and the identity matrix.
In the second encoding function, which is the dual of the first one, nodes $k+1, \ldots, 2k$ store the source symbols and nodes 1 to $k$ store the parity-check symbols. Let $\mathbf{y}_i$ be the $k$-dimensional vector stored in node $k+i$. For $i = 1, \ldots, k$, node $i$ stores the vector
(12) 
This duality relationship was first noted in [10].
Proposition 1. The two encoding functions defined by (11) and (12) generate the same storage code.
We will give a proof of Prop. 1 in terms of matrices. The matrix formulation is also useful in simplifying the description of the repair and decoding procedures. Let $P$ (resp. $Q$, $\bar{P}$, and $\bar{Q}$) be the $k \times k$ matrix whose columns are $\mathbf{p}_i$ (resp. $\mathbf{q}_i$, $\bar{\mathbf{p}}_i$, and $\bar{\mathbf{q}}_i$) for $i = 1, \ldots, k$. We have
$$\bar{P}^T P = \bar{Q}^T Q = I.$$
In terms of these matrices, the first encoding function can be expressed as
(13) 
Indeed, the th column of is
Similarly, the second encoding function defined by (12) can be expressed as
(14) 
Proof.
Repair Procedure. Suppose that $t$ systematic nodes fail, for some positive integer $t \le k$. We assume without loss of generality that the failed nodes are nodes 1 to $t$, after some appropriate node relabeling if necessary.
In the first phase of repair, each of the $n - t$ surviving nodes sends one symbol to each of the $t$ new nodes. For $i = 1, \ldots, t$, the symbol sent to new node $i$ is obtained by taking the inner product of a fixed repair vector, depending only on $i$, with the content of the helper node.
Consider node , for some fixed index . The symbols received by node after the first phase of repair are
We make a change of variables and define
For , the th column of is
Because is a nonsingular matrix, Node can obtain the vector from , and vice versa. In terms of the new variables in , (14) becomes
(15) 
The symbol sent from node to node , namely , is the th component of vector
and is equal to
As a result, the information obtained by node after the first repair phase can be transformed to
In the second phase of the repair procedure, node $i$ sends one symbol to node $j$, for $i, j \in \{1, \ldots, t\}$ with $i \neq j$. The total number of symbols transmitted during the first and the second phases of the repair procedure is $t(n-t) + t(t-1)$. The number of symbol transmissions per failed node is thus
$$\frac{t(n-t) + t(t-1)}{t} = n - 1 = 2k - 1.$$
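The symbol accounting can be checked for any number of failures: with each of the $n-t$ survivors sending one symbol to each new node in phase one, and each ordered pair of new nodes exchanging one symbol in phase two, the average cost is always $n-1$ per failed node. A sketch of ours:

```python
def symbols_per_failed_node(n: int, t: int) -> int:
    # t*(n-t) phase-one symbols + t*(t-1) phase-two symbols, divided by t
    total = t * (n - t) + t * (t - 1)
    assert total % t == 0
    return total // t

# For the code of this section (n = 2k), the cost is n - 1 = 2k - 1
# symbols per failed node, independent of t.
assert all(symbols_per_failed_node(10, t) == 9 for t in range(1, 6))
```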
Node wants to recover the th column of , as expressed in (15). The th column of the first term on the righthand side is equal to the product of and the th column of . We note that the components of the th column of are precisely , for , and are already known to node . It remains to calculate th column of , which is .
Node computes for by
During the second phase of repair, node gets
As a result, node has a handle on for all . Since ’s are linearly independent, node can calculate by taking the inverse of matrix . This completes the repair procedure for node .
By dualizing the above arguments, we can collaboratively repair any $t$ parity-check node failures with the optimal repair bandwidth $2k - 1$ per failed node. Note that we have not used the superregularity property yet. The correctness of the repair procedure only relies on the relevant matrices being nonsingular.
File Recovery. The reconstruction of the original file can be done in the same way as in [8], [9] and [10]. We give a more concise description of the file recovery procedure below.
Suppose that a data collector connects to $\ell$ nodes among the first $k$ nodes, and $k - \ell$ nodes among the last $k$ nodes, for some integer $\ell$ between 0 and $k$. With suitable re-indexing, we may assume that nodes $1, \ldots, \ell$ are contacted by the data collector, without loss of generality. Suppose that the indices of the remaining storage nodes connected to the data collector are $k + j_1, \ldots, k + j_{k-\ell}$, with
$$1 \le j_1 < j_2 < \cdots < j_{k-\ell} \le k.$$
Thus, the data collector has access to