
# Cooperative Repair of Multiple Node Failures in Distributed Storage Systems

## Abstract

Cooperative regenerating codes are designed for repairing multiple node failures in distributed storage systems. In contrast to the original repair model of regenerating codes, which addresses the repair of a single node failure, data exchange among the new nodes is enabled. It is known that further reduction in repair bandwidth is possible with cooperative repair. The literature currently contains an explicit construction of exact-repair cooperative codes achieving all parameters corresponding to the minimum-bandwidth point. In this paper we give a slightly generalized and more flexible version of this cooperative regenerating code. For minimum-storage regeneration with cooperation, we present an explicit code construction which can jointly repair any number of systematic storage nodes.

## 1 Introduction

In a distributed storage system, a data file is distributed to a number of storage devices that are connected through a network. The data is encoded in such a way that, if some of the storage devices are disconnected from the network temporarily, or break down permanently, the content of the file can be recovered from the remaining available nodes. A simple encoding strategy is to replicate the data three times and store the replicas in three different places. This encoding method can tolerate a single failure out of three storage nodes, and is employed in large-scale cloud storage systems such as Google File System [1]. The major drawback of the triplication method is that the storage efficiency is fairly low: the amount of back-up data is two times that of the useful data. As the amount of data stored in cloud storage systems is increasing at an accelerating pace, switching to encoding methods with higher storage efficiency is inevitable.

The Reed-Solomon (RS) code [2] is a natural choice for the construction of high-rate encoding schemes. The RS code is not only optimal, in the sense of being maximal-distance separable, it also has efficient decoding algorithms (see e.g. [3]). Indeed, Facebook’s storage infrastructure currently employs a high-rate RS code with data rate 10/14. This means that four parity-check symbols are appended to every ten information symbols. Nevertheless, not all data in Facebook’s clusters is currently protected by the RS code. This is because the traditional decoding algorithms for RS codes do not take network resources into account. Suppose that the 14 encoded symbols are stored in different disks. If one of the disks fails, then a traditional decoding algorithm needs to download 10 symbols from other storage nodes in order to repair the failed one. The amount of data traffic for repairing a single storage node is 10 times the amount of data to be repaired. In a large-scale distributed storage system, disk failures occur almost every day [4]. The overhead traffic for repair would be prohibitive if all data were encoded by the RS code.

In view of the repair problem, the amount of data traffic incurred for the purpose of repair is an important evaluation metric for distributed storage systems. It is coined the repair bandwidth by Dimakis et al. in [5]. An erasure-correcting code with the aim of minimizing the repair bandwidth is called a regenerating code. Upon the failure of a storage node, we need to replace it by a new node, and the content of the new node is recovered by contacting $d$ other surviving nodes. The parameter $d$ is sometimes called the repair degree, and the contacted nodes are called the helper nodes, or simply the helpers. The repair bandwidth is measured by counting the number of data symbols transmitted from the helpers to the new node. If the data file can be reconstructed from any $k$ out of $n$ storage nodes, i.e., if any $n-k$ disk failures can be tolerated, then we say that the $(n,k)$-reconstruction property is satisfied. The design objective is to construct regenerating codes for $n$ storage nodes, satisfying the $(n,k)$-reconstruction property and minimizing the repair bandwidth, for a given set of code parameters $n$, $k$ and $d$.

We note that the requirement of the $(n,k)$-reconstruction property is more relaxed than the condition of being maximal-distance separable (MDS). A regenerating code is an MDS erasure code only if the number of symbols contained in any $k$ nodes is exactly equal to the number of symbols in the data file. In a general regenerating code, the total number of coded symbols in any $k$ nodes may be larger than the total number of symbols in the data file.

There are two main categories of regenerating codes: exact-repair regenerating codes and functional-repair regenerating codes. In an exact-repair regenerating code, the content of the new node is the same as that of the old one. In a functional-repair regenerating code, the content of the new node may change after a node repair, but the $(n,k)$-reconstruction property is preserved. For functional-repair regenerating codes, a fundamental tradeoff between repair bandwidth and storage per node is obtained in [5]. This is done by drawing a connection to the theory of network coding. Following the notation in [5], we denote the storage per node by $\alpha$ and the amount of data downloaded from each surviving node by $\beta$. The repair bandwidth is thus equal to $\gamma = d\beta$. A pair $(\alpha, \gamma)$ is said to be feasible if there is a regenerating code with storage $\alpha$ and repair bandwidth $\gamma$. It is proved in [5] that, for regenerating codes functionally repairing one failed node at a time, $(\alpha, \gamma)$ is feasible if and only if the file size, denoted by $B$, satisfies the following inequality,

$$B \leq \sum_{i=0}^{k-1} \min\{\alpha, (d-i)\beta\}. \qquad (1)$$

If we fix the file size $B$, the inequality in (1) induces a tradeoff between storage and repair bandwidth.

There are two extreme points on the tradeoff curve. Among all the feasible pairs with minimum storage $\alpha = B/k$, the one with the smallest repair bandwidth is called the minimum-storage regenerating (MSR) point,

$$(\alpha_{\mathrm{MSR}}, \gamma_{\mathrm{MSR}}) = \left(\frac{B}{k},\ \frac{dB}{k(d+1-k)}\right). \qquad (2)$$

On the other hand, among all the feasible pairs with minimum repair bandwidth, the one with the smallest storage is called the minimum-bandwidth regenerating (MBR) point,

$$(\alpha_{\mathrm{MBR}}, \gamma_{\mathrm{MBR}}) = \left(\frac{2dB}{k(2d+1-k)},\ \frac{2dB}{k(2d+1-k)}\right). \qquad (3)$$
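As a sanity check on the formulas above, the bound (1) and the two extreme points (2) and (3) can be evaluated numerically. The following Python sketch (with arbitrarily chosen sample parameters $B$, $k$, $d$, not taken from the paper) verifies that both extreme points satisfy (1) with equality:

```python
from fractions import Fraction

def max_file_size(k, d, alpha, beta):
    # Right-hand side of (1): sum_{i=0}^{k-1} min(alpha, (d-i)*beta)
    return sum(min(alpha, (d - i) * beta) for i in range(k))

def msr_point(B, k, d):
    # (alpha, gamma) at the minimum-storage regenerating point (2)
    return Fraction(B, k), Fraction(d * B, k * (d + 1 - k))

def mbr_point(B, k, d):
    # (alpha, gamma) at the minimum-bandwidth regenerating point (3)
    g = Fraction(2 * d * B, k * (2 * d + 1 - k))
    return g, g

B, k, d = 12, 3, 4                         # sample parameters, not from the paper
alpha, gamma = msr_point(B, k, d)
assert max_file_size(k, d, alpha, gamma / d) == B   # bound (1) is tight at MSR
alpha, gamma = mbr_point(B, k, d)
assert max_file_size(k, d, alpha, gamma / d) == B   # and also at MBR
```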

Existence of linear functional-repair regenerating codes achieving all points on the tradeoff curve is shown in [6]. Explicit construction of exact-repair regenerating codes, called the product-matrix framework, achieving all code parameters corresponding to the MBR point is given in [7]. Explicit construction of regenerating codes for the MSR point is more difficult. At the time of writing, we do not have constructions of exact-repair regenerating codes covering all parameters pertaining to the MSR point. Due to space limitation, we are not able to comprehensively review the literature on exact-repair MSR codes, but we mention below some constructions which are of direct relevance to the results in this paper.

The MISER code (which stands for MDS, Interference-aligning, Systematic Exact-Regenerating code) is an explicit exact-repair regenerating code at the MSR point [8], [9]. The code parameters are $n = 2k$ and $d = n-1$. It is shown in [8] and [9] that every systematic node, which contains uncoded data, can be repaired with storage and repair bandwidth attaining the MSR point in (2). This result is extended in [10], which shows that, with the same code structure, every parity-check node can also be repaired with repair bandwidth meeting the MSR point. The product-matrix framework in [7] also gives a family of MSR codes with parameters $d \geq 2k-2$. All of the MSR codes mentioned above have code rate no more than $1/2$. For high-rate exact-repair MSR codes, we refer the readers to three recent papers [11], [12] and [13], and the references contained therein.

We remark that the interior points on the tradeoff curve between storage and repair bandwidth for functional-repair regenerating codes are in general not achievable by exact-repair regenerating codes (see e.g. [14] and [15]).

All of the regenerating codes mentioned in the previous paragraphs are for the repair of a single node failure. In large-scale distributed storage systems, it is not uncommon to encounter multiple node failures, for various reasons. Firstly, node failures may be correlated, because of power outages or aging. Secondly, we may not detect a node failure immediately when it happens. A scrubbing process is carried out periodically by the maintenance system, to scan the hard disks one by one and check whether there is any unrecoverable error. As the volume of the whole storage system increases, it takes longer to run the scrubbing process, and hence the integrity of the disks is checked less frequently. A disk error may remain dormant and undetected for a long period of time. If more than one error occurs during this period, we will detect multiple disk errors during the scrubbing process. Lastly, in some commercial storage systems such as TotalRecall [16], the repair of a failed node is deliberately deferred. During the period when some storage nodes are not available, degraded reads are enabled by decoding the missing data in real time. A repair procedure is triggered only after the number of failed nodes reaches a predetermined threshold. This mode of repair reduces the overhead of maintenance operations, and is called lazy repair.

A naive method for correcting multiple node failures is to repair the failed nodes one by one, using methods designed for repairing single node failure. A collaborative recovery methodology for repairing multiple failed nodes jointly is suggested in [17] and [18]. The repair procedure is divided into two phases. In the first phase, the new nodes download some repair data from some surviving nodes, and in the second phase, the new nodes exchange data among themselves. The enabling of data exchange is the distinctive feature. We will call this the cooperative or collaborative repair model.

The minimum-storage regime for collaborative repair is considered in [17] and [18]. It is shown that further reduction in repair bandwidth is possible if data exchange among the new nodes is allowed. Optimal functional-repair minimum-storage regenerating codes are also presented in [18]. The results are extended by Le Scouarnec et al. to the opposite extreme point with minimum repair bandwidth in [19] and [20]. The storage and repair bandwidth per new node at the minimum-storage collaborative regenerating (MSCR) point are denoted by $\alpha_{\mathrm{MSCR}}$ and $\gamma_{\mathrm{MSCR}}$, respectively, while the storage and repair bandwidth per new node at the minimum-bandwidth collaborative regenerating (MBCR) point are denoted by $\alpha_{\mathrm{MBCR}}$ and $\gamma_{\mathrm{MBCR}}$, respectively. The MSCR and MBCR points for functional repair are

$$(\alpha_{\mathrm{MSCR}}, \gamma_{\mathrm{MSCR}}) = \left(\frac{B}{k},\ \frac{B(d+t-1)}{k(d+t-k)}\right), \qquad (4)$$

$$(\alpha_{\mathrm{MBCR}}, \gamma_{\mathrm{MBCR}}) = \frac{B(2d+t-1)}{k(2d+t-k)}\,(1, 1). \qquad (5)$$

We note that when $t = 1$, the operating points in (4) and (5) reduce to the ones in (2) and (3).
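This collapse of (4) and (5) to the single-failure points at $t = 1$ can be checked mechanically; in the following sketch the parameter values are arbitrary samples, not taken from the paper:

```python
from fractions import Fraction

def mscr_point(B, k, d, t):
    # (alpha, gamma) per new node at the MSCR point (4)
    return Fraction(B, k), Fraction(B * (d + t - 1), k * (d + t - k))

def mbcr_point(B, k, d, t):
    # (alpha, gamma) per new node at the MBCR point (5)
    a = Fraction(B * (2 * d + t - 1), k * (2 * d + t - k))
    return a, a

def msr_point(B, k, d):       # single-failure MSR point (2)
    return Fraction(B, k), Fraction(d * B, k * (d + 1 - k))

def mbr_point(B, k, d):       # single-failure MBR point (3)
    g = Fraction(2 * d * B, k * (2 * d + 1 - k))
    return g, g

# With a single failure (t = 1) the cooperative points collapse to (2) and (3).
for B, k, d in [(12, 3, 4), (30, 5, 6)]:
    assert mscr_point(B, k, d, 1) == msr_point(B, k, d)
    assert mbcr_point(B, k, d, 1) == mbr_point(B, k, d)
```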

The vertices on the tradeoff curve between storage and repair bandwidth for collaborative repair are characterized in [21]. It is shown in [21] that all points on the cooperative functional-repair tradeoff curve can be attained by linear regenerating codes over a finite field. A numerical example of the tradeoff curves for single-loss regenerating codes and cooperative regenerating codes is shown in Figure 1. We see that cooperative repair requires less repair bandwidth compared with single-failure repair.

Explicit exact-repair codes for the MBCR point for all legitimate parameters were presented by Wang and Zhang in [22]. The construction in [22] subsumes earlier constructions in [23] and [24]. In contrast, there are not many explicit constructions for MSCR codes. The parameters of existing explicit constructions are summarized in Table 1. A construction of exact repair for $d = k$ and $t = 2$ is given in [25]. This is extended to an MSCR code with $d \geq k$ and $t = 2$ in [26]. Indeed, a connection between MSCR codes which can repair two node failures and non-cooperative MSR codes is made in [27]. Using this connection, the authors in [27] are able to construct MSCR codes with $t = 2$ from existing MSR codes. However, at the time of writing, there is no explicit construction of exact-repair MSCR codes for an arbitrary number of failed nodes.

Practical implementations of distributed storage systems which can correct multiple node failures can be found in [28] to [31].

The rest of this paper is organized as follows. In Section 2, we formally define linear regenerating codes for distributed storage systems with collaborative repair. In Section 3, we give a slight generalization of the cooperative regenerating codes in [22]. The generalized version also achieves all code parameters of the MBCR point, but the building blocks of the construction only need to satisfy a more relaxed condition. In Section 4, we give a simplified description of the repair method in [26], and illustrate how to repair two or more systematic nodes collaboratively in the MISER code. Some concluding remarks are listed in Section 5.

## 2 A Collaborative Repair Model for Linear Regenerating Code

We will use the following notations in this paper:

$B$: file size.

$n$: the total number of storage nodes.

$k$: the number of storage nodes from which a data collector can decode the original file.

$d$: the number of surviving nodes contacted by a new node.

$t$: the number of new nodes we want to repair collaboratively.

$\alpha$: the amount of data stored in a node.

$\beta_1$: the amount of data downloaded from a helper node to a new node during the first phase of repair.

$\beta_2$: the amount of data exchanged between two new nodes during the second phase of repair.

$\gamma$: the repair bandwidth per new node.

$\mathbb{F}_q$: finite field of size $q$, where $q$ is a prime power.

We describe in this section a mathematical formulation of linear collaborative exact repair. For the problem formulation for the non-linear case, we refer the readers to [21].

A data file consists of $B$ symbols. We let $M$ be the vector space $\mathbb{F}_q^B$. We regard a data file as a vector in $M$, and call it the source vector.

The source vector is mapped to $n\alpha$ finite field symbols, and each node stores $\alpha$ of them. The mapping from the source vector to an encoded symbol is a linear functional on $M$. Following the terminology of network coding, we will call these linear mappings the encoding vectors associated to the encoded symbols. Formally, a linear functional is an object in the dual space of $M$, denoted by $L(M, \mathbb{F}_q)$, which consists of all linear transformations from $M$ to $\mathbb{F}_q$. More precisely, an encoding vector should be called an encoding co-vector instead, but we will be a little bit sloppy on this point and simply use the term “vector”.

The content of a storage node can be described by a subspace of $L(M, \mathbb{F}_q)$, spanned by the encoding vectors of the encoded symbols stored in this node. For $i = 1, 2, \ldots, n$, we let $W_i$ denote the subspace of $L(M, \mathbb{F}_q)$ pertaining to node $i$. The dimension of $W_i$ is no more than $\alpha$,

$$\dim(W_i) \leq \alpha$$

for all $i$.

We want to distribute the data file to the $n$ storage nodes in such a way that any $k$ of them are sufficient for reconstructing the source vector. The $(n,k)$-reconstruction property requires that the encoding vectors in any $k$ storage nodes span the dual space $L(M, \mathbb{F}_q)$; hence it is required that

$$\bigoplus_{i \in K} W_i = L(M, \mathbb{F}_q),$$

for any $k$-subset $K$ of $\{1, 2, \ldots, n\}$. Here $\bigoplus$ denotes the sum space of the $W_i$'s. It is a direct sum if the regenerating code is MDS.

Suppose that the storage nodes with indices $i_1, i_2, \ldots, i_t$ fail, and we need to replace them by $t$ new nodes. For $s = 1, 2, \ldots, t$, new node $s$ contacts $d$ available nodes, and downloads $\beta_1$ symbols from each of them. The storage nodes which participate in the repair process are called the helpers. Different new nodes may download repair data from different sets of helpers. Let $H_s$ be the index set of the helpers contacted by new node $s$. Thus, we have

$$H_s \subseteq \{1, 2, \ldots, n\} \setminus \{i_1, i_2, \ldots, i_t\}$$

and $|H_s| = d$ for all $s$. The downloaded symbols are linear combinations of the symbols kept by the helpers. The encoding vector of a symbol downloaded from node $j$ is thus contained in $W_j$. For $s = 1, \ldots, t$, let $U_s$ be the subspace of $L(M, \mathbb{F}_q)$ spanned by the encoding vectors of the symbols sent to new node $s$. We have

$$\dim(U_s \cap W_j) \leq \beta_1,$$

for all $s \in \{1, \ldots, t\}$ and $j \in H_s$.

In the second phase of the repair, new node $s$ computes and sends $\beta_2$ finite field symbols to new node $s'$, for $s, s' \in \{1, \ldots, t\}$ and $s \neq s'$. The computed symbols are linear combinations of the symbols already received by new node $s$ in the first phase of repair. Let $V_{s \to s'}$ be the subspace spanned by the encoding vectors of the symbols sent from node $s$ to node $s'$ during the second phase. We have

$$V_{s \to s'} \subseteq U_s, \quad \text{and} \quad \dim(V_{s \to s'}) \leq \beta_2.$$

For $s' = 1, \ldots, t$, new node $s'$ should be able to recover the content of the failed node $i_{s'}$. In terms of the subspaces, it is required that

$$W_{i_{s'}} \subseteq U_{s'} \oplus \bigoplus_{s \in \{1, 2, \ldots, t\} \setminus \{s'\}} V_{s \to s'}.$$

The repair bandwidth per new node is equal to

$$\gamma = d\beta_1 + (t-1)\beta_2.$$

Any linear code satisfying the above requirements is called a cooperative regenerating code or collaborative regenerating code.

## 3 Cooperative Regenerating Codes with Minimum Repair Bandwidth

In this section we give a slight generalization of the construction of minimum-bandwidth cooperative regenerating codes in [22]. The number of failed nodes, $t$, to be repaired jointly can be any positive integer. The code parameters which can be supported by the construction described below are the same as those in [22], i.e., $n$, $d$, $k$ and $t$ satisfy

$$n - t \geq d \geq k.$$

The file size of the regenerating code is

$$B = k(2d+t-k),$$

and each storage node stores $\alpha = 2d+t-1$ symbols. In contrast to the polynomial approach in [22], the construction below depends on the manipulation of a bilinear form (to be defined in (6)).

Encoding. We need a $d \times n$ matrix $U$ and a $(d+t) \times n$ matrix $V$ for the encoding. Partition $U$ and $V$ as

$$U = \begin{bmatrix} U_1 \\ U_2 \end{bmatrix}, \quad V = \begin{bmatrix} V_1 \\ V_2 \end{bmatrix},$$

where $U_1$ and $V_1$ are the submatrices consisting of the first $k$ rows of $U$ and $V$, respectively. We will choose the matrices $U$ and $V$ such that the following conditions are satisfied:

1. any $d \times d$ submatrix of $U$ is nonsingular;

2. any $(d+t) \times (d+t)$ submatrix of $V$ is non-singular;

3. any $k \times k$ submatrix of $U_1$ is nonsingular;

4. any $k \times k$ submatrix of $V_1$ is nonsingular.

We can obtain matrices $U$ and $V$ from Vandermonde matrices or Cauchy matrices. If we use a Vandermonde matrix, we can set the $i$-th column of $U$ to

$$[\,1 \;\; x_i \;\; x_i^2 \;\; \ldots \;\; x_i^{d-1}\,]^T,$$

for $i = 1, 2, \ldots, n$. If $x_1, x_2, \ldots, x_n$ are distinct elements in $\mathbb{F}_q$, then the resulting matrix satisfies the first and third conditions listed above. We can take a Vandermonde matrix for the matrix $V$ similarly. Existence of such matrices is guaranteed if the field size $q$ is larger than or equal to $n$. In any case, the correctness of the code construction only depends on the four conditions above.
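For small parameters, the four conditions can be verified by brute force. The following sketch builds the Vandermonde matrix $U$ over $\mathbb{F}_7$ (the field used in the example later in this section) and checks conditions 1 and 3 by enumerating all submatrices; the same routine applies to $V$:

```python
from itertools import combinations

q, n, d, k = 7, 7, 4, 3   # sample parameters matching the example in this section

def det_mod(rows, p):
    # determinant over F_p by Gaussian elimination
    m = [row[:] for row in rows]
    size = len(m)
    det = 1
    for c in range(size):
        piv = next((r for r in range(c, size) if m[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det
        det = det * m[c][c] % p
        inv = pow(m[c][c], p - 2, p)
        for r in range(c + 1, size):
            fac = m[r][c] * inv % p
            m[r] = [(a - fac * b) % p for a, b in zip(m[r], m[c])]
    return det % p

xs = [1, 2, 3, 4, 5, 6, 0]                            # distinct elements of F_7
U = [[pow(x, r, q) for x in xs] for r in range(d)]    # d x n Vandermonde matrix

# condition 1: every d x d submatrix of U is nonsingular
assert all(det_mod([[U[r][c] for c in cols] for r in range(d)], q)
           for cols in combinations(range(n), d))
# condition 3: every k x k submatrix of the first k rows of U is nonsingular
assert all(det_mod([[U[r][c] for c in cols] for r in range(k)], q)
           for cols in combinations(range(n), k))
```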

For $i = 1, 2, \ldots, n$, we denote the $i$-th column of $U$ by $u_i$, and the $i$-th column of $V$ by $v_i$.

We arrange the $B$ source symbols in a partitioned matrix

$$M = \begin{bmatrix} A & B \\ C & 0 \end{bmatrix},$$

where $A$, $B$ and $C$ are sub-matrices of size $k \times k$, $k \times (d+t-k)$ and $(d-k) \times k$, respectively. The total number of entries in the three sub-matrices is

$$k^2 + k(d+t-k) + (d-k)k = k(2d+t-k) = B.$$

We will call $M$ the source matrix.

The source matrix induces a bilinear form defined by

$$B(\mathbf{x}, \mathbf{y}) := \mathbf{x}^T M \mathbf{y}, \qquad (6)$$

for $\mathbf{x} \in \mathbb{F}_q^d$ and $\mathbf{y} \in \mathbb{F}_q^{d+t}$. We distribute the information to the storage nodes in such a way that, for $i = 1, \ldots, n$, node $i$ is able to compute the following two linear functions,

$$B(\cdot, v_i) \ \text{ and } \ B(u_i, \cdot).$$

The first one is a linear mapping from $\mathbb{F}_q^d$ to $\mathbb{F}_q$, and the second one is from $\mathbb{F}_q^{d+t}$ to $\mathbb{F}_q$. Node $i$ can store the $d$ entries of the vector $Mv_i$, and compute the first function by taking the inner product of the input vector $\mathbf{x}$ and $Mv_i$,

$$B(\mathbf{x}, v_i) = \mathbf{x}^T (M v_i).$$

For the second function $B(u_i, \cdot)$, node $i$ can store the $d+t$ entries of the vector $u_i^T M$, and compute $B(u_i, \mathbf{y})$ by

$$B(u_i, \mathbf{y}) = (u_i^T M)\,\mathbf{y}.$$

Since the components of $Mv_i$ and $u_i^T M$ satisfy a simple linear equation,

$$u_i^T (M v_i) = (u_i^T M)\, v_i, \qquad (7)$$

we only need to store $2d+t-1$ finite field elements in node $i$, in order to implement the functions $B(\cdot, v_i)$ and $B(u_i, \cdot)$. Hence, each storage node is only required to store

$$\alpha = 2d+t-1$$

finite field elements.
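Relation (7) is just the associativity of the product $u_i^T M v_i$, and can be confirmed numerically. In the sketch below the source matrix is filled with random symbols (an illustrative assumption, not the paper's data):

```python
import random

random.seed(1)
q, k, d, t = 7, 3, 4, 3   # sample parameters; field and sizes as in the later example

# random d x (d+t) source matrix with the zero block in the lower-right corner
M = [[random.randrange(q) if (r < k or c < k) else 0 for c in range(d + t)]
     for r in range(d)]
x = 3                                       # node with Vandermonde evaluation point 3
u = [pow(x, r, q) for r in range(d)]
v = [pow(x, r, q) for r in range(d + t)]

Mv = [sum(M[r][c] * v[c] for c in range(d + t)) % q for r in range(d)]   # d symbols
uM = [sum(u[r] * M[r][c] for r in range(d)) % q for c in range(d + t)]   # d+t symbols

# relation (7): the two stored vectors share one linear dependency, so only
# 2d + t - 1 of the 2d + t symbols need to be kept
lhs = sum(a * b for a, b in zip(u, Mv)) % q
rhs = sum(a * b for a, b in zip(uM, v)) % q
assert lhs == rhs
```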

Repair procedure. Without loss of generality, suppose that nodes 1 to $t$ fail. For $i = 1, \ldots, t$, the $i$-th new node downloads some repair data from a set of $d$ surviving nodes, which can be chosen arbitrarily. Let $H_i$ be the index set of the surviving nodes contacted by node $i$. We have $|H_i| = d$ and $H_i \cap \{1, \ldots, t\} = \emptyset$ for all $i$. The helper with index $j \in H_i$ computes two finite field elements

$$B(u_i, v_j) \ \text{ and } \ B(u_j, v_i),$$

and transmits them to new node $i$. In the first phase of repair, a total of $2dt$ symbols are transmitted from the helpers.

For $i = 1, \ldots, t$, the $i$-th new node can recover $Mv_i$ from the following $d$-dimensional vector with the components indexed by $H_i$,

$$(u_j^T M v_i)_{j \in H_i} = [u_j^T]_{j \in H_i} \cdot (M v_i),$$

where $[u_j^T]_{j \in H_i}$ is the $d \times d$ matrix obtained by stacking the row vectors $u_j^T$ for $j \in H_i$. Since this matrix is nonsingular by construction, the $i$-th new node can obtain $Mv_i$. At this point, the $i$-th new node is able to compute the function $B(\cdot, v_i)$.

In the second phase of the repair procedure, node $i$ calculates $B(u_s, v_i) = u_s^T M v_i$, for $s \in \{1, \ldots, t\} \setminus \{i\}$, and sends the resulting finite field symbol to the $s$-th new node. Furthermore, node $i$ can compute $u_i^T M v_i$, using the information already obtained from the first phase of repair. Node $i$ can now calculate $u_i^T M$ from

$$u_i^T M v_s, \ \text{ for } s \in H_i \cup \{1, 2, \ldots, t\},$$

using the property that the $d+t$ vectors $v_s$, for $s \in H_i \cup \{1, \ldots, t\}$, are linearly independent over $\mathbb{F}_q$. The repair of node $i$ is completed by storing the components of the vectors $Mv_i$ and $u_i^T M$ which are necessary in computing $B(\cdot, v_i)$ and $B(u_i, \cdot)$.

We remark that the total number of transmitted symbols in the whole repair procedure is $2dt + t(t-1) = t(2d+t-1)$, and therefore the repair bandwidth per new node is

$$\gamma = 2d+t-1.$$

File recovery. Suppose that a data collector connects to nodes $i_1, i_2, \ldots, i_k$, with $1 \leq i_1 < i_2 < \cdots < i_k \leq n$, and downloads the vectors

$$Mv_{i_\ell} \ \text{ and } \ u_{i_\ell}^T M,$$

for $\ell = 1, 2, \ldots, k$. From the last $d-k$ of the components in $Mv_{i_\ell}$, for $\ell = 1, \ldots, k$, we can recover the sub-matrix $C$ in the source matrix $M$, because any $k \times k$ submatrix of $V_1$ is nonsingular by assumption. Similarly, from the last $d+t-k$ components in $u_{i_\ell}^T M$, we can recover the sub-matrix $B$, using the property that any $k \times k$ submatrix of $U_1$ is nonsingular. After subtracting the contributions of $B$ and $C$, the remaining source symbols in $A$ can be decoded either from the first $k$ components of the vectors $Mv_{i_\ell}$, or the first $k$ components of the vectors $u_{i_\ell}^T M$.
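The recovery of the block $C$ from the last $d-k$ components of the downloaded vectors can be simulated directly. The following sketch uses a random source matrix and the Vandermonde evaluation points $1, 2, 3$ for the three contacted nodes (sample choices, not mandated by the construction):

```python
import random

random.seed(2)
q, k, d, t = 7, 3, 4, 3

M = [[random.randrange(q) if (r < k or c < k) else 0 for c in range(d + t)]
     for r in range(d)]
C = [row[:k] for row in M[k:]]                  # the (d-k) x k block C

def solve(A, b, p):
    # solve A z = b over F_p by Gauss-Jordan elimination
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    m = len(A)
    for c in range(m):
        piv = next(r for r in range(c, m) if A[r][c] % p)
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], p - 2, p)
        A[c] = [x * inv % p for x in A[c]]
        for r in range(m):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [(x - f * y) % p for x, y in zip(A[r], A[c])]
    return [row[m] for row in A]

nodes = [1, 2, 3]                               # evaluation points of contacted nodes
V1 = [[pow(x, r, q) for r in range(k)] for x in nodes]  # first k components of each v_i
# the collector sees the last d-k components of M v_i, namely C times V1[i]
obs = [[sum(Crow[c] * V1[i][c] for c in range(k)) % q for Crow in C]
       for i in range(k)]
# each row of C is recovered by solving a k x k Vandermonde system
for rr in range(d - k):
    assert solve(V1, [obs[i][rr] for i in range(k)], q) == C[rr]
```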

Example. We illustrate the construction by the following example with code parameters $n = 7$, $d = 4$, $t = 3$ and $k = 3$. The file size is $B = k(2d+t-k) = 24$. In this example, we pick $\mathbb{F}_7$ as the underlying finite field.

The source matrix is partitioned as

$$M = \begin{bmatrix} a_{11} & a_{12} & a_{13} & b_{11} & b_{12} & b_{13} & b_{14} \\ a_{21} & a_{22} & a_{23} & b_{21} & b_{22} & b_{23} & b_{24} \\ a_{31} & a_{32} & a_{33} & b_{31} & b_{32} & b_{33} & b_{34} \\ c_{11} & c_{12} & c_{13} & 0 & 0 & 0 & 0 \end{bmatrix}.$$

The entries $a_{ij}$'s, $b_{ij}$'s and $c_{ij}$'s are the source symbols. Let $B(\cdot, \cdot)$ be the bilinear form defined as in (6), mapping a pair of vectors in $\mathbb{F}_7^4 \times \mathbb{F}_7^7$ to an element in $\mathbb{F}_7$.

Let $U$ be the $4 \times 7$ Vandermonde matrix

$$U = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 & 5 & 6 & 0 \\ 1 & 4 & 2 & 2 & 4 & 1 & 0 \\ 1 & 1 & 6 & 1 & 6 & 6 & 0 \end{bmatrix} \qquad (8)$$

and for $i = 1, \ldots, 7$, let $u_i$ be the $i$-th column of $U$. Let $V$ be the $7 \times 7$ Vandermonde matrix

$$V = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 & 5 & 6 & 0 \\ 1 & 4 & 2 & 2 & 4 & 1 & 0 \\ 1 & 1 & 6 & 1 & 6 & 6 & 0 \\ 1 & 2 & 4 & 4 & 2 & 1 & 0 \\ 1 & 4 & 5 & 2 & 3 & 6 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 \end{bmatrix} \qquad (9)$$

and for $i = 1, \ldots, 7$, let $v_i$ be the $i$-th column of $V$. The $i$-th node needs to store enough information such that it can compute the functions

$$B(\cdot, v_i) \ \text{ and } \ B(u_i, \cdot).$$

For instance, node $i$ can store the last 3 components of the vector $Mv_i$, and all 7 components of $u_i^T M$,

$$\begin{aligned}
z_{i1} &:= a_{21} + i a_{22} + i^2 a_{23} + i^3 b_{21} + i^4 b_{22} + i^5 b_{23} + i^6 b_{24}, \\
z_{i2} &:= a_{31} + i a_{32} + i^2 a_{33} + i^3 b_{31} + i^4 b_{32} + i^5 b_{33} + i^6 b_{34}, \\
z_{i3} &:= c_{11} + i c_{12} + i^2 c_{13}, \\
z_{i4} &:= a_{11} + i a_{21} + i^2 a_{31} + i^3 c_{11}, \\
z_{i5} &:= a_{12} + i a_{22} + i^2 a_{32} + i^3 c_{12}, \\
z_{i6} &:= a_{13} + i a_{23} + i^2 a_{33} + i^3 c_{13}, \\
z_{i7} &:= b_{11} + i b_{21} + i^2 b_{31}, \\
z_{i8} &:= b_{12} + i b_{22} + i^2 b_{32}, \\
z_{i9} &:= b_{13} + i b_{23} + i^2 b_{33}, \\
z_{i10} &:= b_{14} + i b_{24} + i^2 b_{34},
\end{aligned}$$

with all arithmetic performed modulo 7. The missing entry of $Mv_i$, namely, the first entry of $Mv_i$,

$$a_{11} + i a_{12} + i^2 a_{13} + i^3 b_{11} + i^4 b_{12} + i^5 b_{13} + i^6 b_{14} = -i z_{i1} - i^2 z_{i2} - i^3 z_{i3} + z_{i4} + i z_{i5} + i^2 z_{i6} + i^3 z_{i7} + i^4 z_{i8} + i^5 z_{i9} + i^6 z_{i10},$$

is a linear combination of $z_{i1}, \ldots, z_{i10}$, by relation (7). Each node only needs to store the 10 finite field symbols $z_{i1}, \ldots, z_{i10}$. The storage per node meets the bound

$$\alpha_{\mathrm{MBCR}} = \frac{B(2d+t-1)}{k(2d+t-k)} = 2d+t-1 = 10.$$
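The claim that the omitted first entry of $Mv_i$ is determined by $z_{i1}, \ldots, z_{i10}$ can be tested with random source symbols; the sketch below assumes node $i$ has evaluation point $i = 2$:

```python
import random

random.seed(0)
q, k, d, t = 7, 3, 4, 3

# random source matrix in the partitioned shape [A B; C 0] (illustrative data)
M = [[random.randrange(q) if (r < k or c < k) else 0 for c in range(d + t)]
     for r in range(d)]
i = 2                                   # node with evaluation point 2
u = [pow(i, r, q) for r in range(d)]
v = [pow(i, r, q) for r in range(d + t)]
Mv = [sum(M[r][c] * v[c] for c in range(d + t)) % q for r in range(d)]
uM = [sum(u[r] * M[r][c] for r in range(d)) % q for c in range(d + t)]

z = Mv[1:] + uM                         # the ten stored symbols z_{i1}, ..., z_{i10}
# the identity displayed above, evaluated modulo 7
first = (-i * z[0] - i**2 * z[1] - i**3 * z[2]
         + sum(pow(i, c, q) * z[3 + c] for c in range(d + t))) % q
assert first == Mv[0]                   # the omitted entry is a combination of the rest
```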

We illustrate the repair procedure by going through the repair of nodes 5, 6 and 7. Suppose we lost the content of nodes 5, 6 and 7, and want to rebuild them by cooperative repair. For $i = 1, 2, 3, 4$ and $j = 5, 6, 7$, node $i$ computes $B(u_i, v_j)$ and $B(u_j, v_i)$ and sends them to node $j$, in the first phase of repair. Node $j$ now has 8 symbols,

$$B(u_1, v_j),\ B(u_2, v_j),\ B(u_3, v_j),\ B(u_4, v_j),\ B(u_j, v_1),\ B(u_j, v_2),\ B(u_j, v_3),\ B(u_j, v_4).$$

The first four of them can be put together to form a vector

$$\begin{bmatrix} B(u_1, v_j) \\ B(u_2, v_j) \\ B(u_3, v_j) \\ B(u_4, v_j) \end{bmatrix} = \begin{bmatrix} u_1^T \\ u_2^T \\ u_3^T \\ u_4^T \end{bmatrix} M v_j.$$

Because the first four columns of the matrix $U$ in (8) are linearly independent over $\mathbb{F}_7$, for $j = 5, 6, 7$, node $j$ can solve for $Mv_j$ after the first phase of repair, and is able to calculate $B(\mathbf{x}, v_j)$ for any vector $\mathbf{x}$.
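This first phase can be simulated end to end: the sketch below (with random source symbols) lets helpers 1–4 send $B(u_j, v_5)$ to new node 5, which then solves a $4 \times 4$ system to rebuild $Mv_5$:

```python
import random

random.seed(5)
q, k, d, t, n = 7, 3, 4, 3, 7

xs = [1, 2, 3, 4, 5, 6, 0]                       # evaluation points of nodes 1..7
M = [[random.randrange(q) if (r < k or c < k) else 0 for c in range(d + t)]
     for r in range(d)]
u = {j: [pow(xs[j - 1], r, q) for r in range(d)] for j in range(1, n + 1)}
v = {j: [pow(xs[j - 1], r, q) for r in range(d + t)] for j in range(1, n + 1)}

def solve(A, b):
    # solve A z = b over F_q by Gauss-Jordan elimination
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    m = len(A)
    for c in range(m):
        piv = next(r for r in range(c, m) if A[r][c] % q)
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], q - 2, q)
        A[c] = [x * inv % q for x in A[c]]
        for r in range(m):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [(x - f * y) % q for x, y in zip(A[r], A[c])]
    return [row[m] for row in A]

Mv5 = [sum(M[r][c] * v[5][c] for c in range(d + t)) % q for r in range(d)]
# phase 1: helpers 1..4 each send B(u_j, v_5) = u_j^T (M v_5) to new node 5
received = [sum(a * b for a, b in zip(u[j], Mv5)) % q for j in (1, 2, 3, 4)]
assert solve([u[j] for j in (1, 2, 3, 4)], received) == Mv5
```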

The communications among nodes 5, 6 and 7 in the second phase of repair are as follows:

node 5 sends $B(u_6, v_5)$ to node 6,

node 5 sends $B(u_7, v_5)$ to node 7,

node 6 sends $B(u_5, v_6)$ to node 5,

node 6 sends $B(u_7, v_6)$ to node 7,

node 7 sends $B(u_5, v_7)$ to node 5,

node 7 sends $B(u_6, v_7)$ to node 6.

For $i = 5, 6, 7$, node $i$ can obtain $u_i^T M$ from the seven symbols $u_i^T M v_s$, for $s = 1, 2, \ldots, 7$, since the columns of $V$ are linearly independent over $\mathbb{F}_7$. In the first phase, we transmit $2 \cdot 4 \cdot 3 = 24$ symbols, and in the second phase we transmit 6 symbols. The number of transmitted symbols per new node is thus equal to $(24+6)/3 = 10$, which is equal to the target repair bandwidth $\gamma = 2d+t-1 = 10$.

To illustrate the $(n,k)$-reconstruction property, suppose that a data collector connects to nodes 1, 2 and 3. The data collector can download the following vectors

$$u_1^T M,\ u_2^T M,\ u_3^T M,\ Mv_1,\ Mv_2,\ \text{and}\ Mv_3.$$

There are 33 symbols in total in these six vectors. They are not linearly independent, as the original file only contains 24 symbols. We can decode the data file by selecting 24 entries of the received vectors, which form a vector that can be written as the product of an invertible lower-block-triangular matrix and the 24-dimensional vector of source symbols

$$(c_{11},\, c_{12},\, c_{13},\, b_{11},\, b_{21},\, b_{31},\, \ldots,\, b_{14},\, b_{24},\, b_{34},\, a_{11},\, a_{21},\, a_{31},\, \ldots)^T,$$

where the coefficient matrix is block lower-triangular, with eight nonsingular $3 \times 3$ Vandermonde blocks $V_3$ on the diagonal and diagonal matrices $D$ below the diagonal. Since this matrix is invertible, we can obtain all the source symbols in the data file.

## 4 A Class of Minimum-Storage Cooperative Regenerating Codes

In this section, we give a simplified description of the minimum-storage cooperative regenerating code presented in [26]. The code parameters are

$$n = 2k, \quad d = n - t, \quad k \geq t \geq 2.$$

The first $k$ nodes are the systematic nodes, while the last $k$ nodes are the parity-check nodes. The coding structure of the cooperative regenerating codes to be described in this section is indeed the same as the MISER code [8], [9] and the regenerating code in [10]. Our objective is to show that, with this coding structure, we can repair the failure of any $t$ systematic nodes or any $t$ parity-check nodes, for any $t$ less than or equal to $k$, attaining the MSCR point defined in (4).

We need a nonsingular matrix $U$ and a super-regular matrix $P$, both of size $k \times k$. Recall that a matrix is said to be super-regular if every square submatrix is nonsingular. A Cauchy matrix is an example of a super-regular matrix, and we may let $P$ be a Cauchy matrix.

After the matrices $U$ and $P$ are fixed, we let $Q$ be the inverse of $P$ and $V$ be the matrix $UP$. It can be shown that the matrix $V$ is non-singular and $Q$ is super-regular. We have the following relationships among these matrices:

$$V = UP \ \text{ and } \ U = VQ.$$

Let $p_{ij}$ be the $(i,j)$-entry of $P$, for $1 \leq i, j \leq k$, and $q_{ij}$ be the $(i,j)$-entry of $Q$.

For $j = 1, \ldots, k$, let $u_j$ denote the $j$-th column of $U$, and $v_j$ the $j$-th column of $V$. The columns of $U$ and the columns of $V$ will be regarded as two bases of the vector space $\mathbb{F}_q^k$. Let $\hat{u}_1, \ldots, \hat{u}_k$ be the dual basis of the $u_j$'s, and let $\hat{v}_1, \ldots, \hat{v}_k$ be the dual basis of the $v_j$'s. The dual bases satisfy the following defining property

$$\hat{u}_i^T u_j = \delta_{ij}, \ \text{ and } \ \hat{v}_i^T v_j = \delta_{ij},$$

where $\delta_{ij}$ is the Kronecker delta function.

The last ingredient of the construction is a $2 \times 2$ super-regular symmetric matrix and its inverse, satisfying

$$\begin{bmatrix} a & e \\ e & a \end{bmatrix} \begin{bmatrix} b & f \\ f & b \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}. \qquad (10)$$

In particular, it is required that $a$, $e$ and $a^2 - e^2$ are all nonzero in $\mathbb{F}_q$.
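The constraint (10) pins down $b$ and $f$ once $a$ and $e$ are chosen, since the inverse of $\begin{bmatrix} a & e \\ e & a \end{bmatrix}$ is $\frac{1}{a^2-e^2}\begin{bmatrix} a & -e \\ -e & a \end{bmatrix}$. A quick check over a sample field, with $a$ and $e$ picked arbitrarily subject to the nonzero conditions:

```python
q = 7
a, e = 2, 1                             # sample entries with a, e, a^2 - e^2 nonzero
det = (a * a - e * e) % q
inv = pow(det, q - 2, q)                # modular inverse of the determinant
b, f = a * inv % q, (q - e) * inv % q   # [[a, e], [e, a]]^(-1) = [[b, f], [f, b]]

# the two identities used at the end of the proof of Proposition 1 below
assert (a * b + e * f) % q == 1
assert (a * f + e * b) % q == 0
```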

Encoding. A data file consists of

$$B = k(d+t-k) = k(n-k) = k^2$$

source symbols. For $i = 1, \ldots, k$, node $i$ is a systematic node and stores $k$ source symbols. We can perform the encoding in two essentially equivalent ways. In the first encoding function, the first $k$ nodes store the source symbols and the last $k$ nodes store the parity-check symbols. Let $\mathbf{x}_i$ be the $k$-dimensional vector whose components are the symbols stored in node $i$. For $j = 1, \ldots, k$, node $k+j$ is a parity-check node, and stores the components of the vector

$$\mathbf{y}_j = \sum_{\ell=1}^{k} \left(a \hat{u}_\ell v_j^T + e p_{\ell j} I_k\right) \mathbf{x}_\ell, \qquad (11)$$

where $I_k$ denotes the $k \times k$ identity matrix. We note that the matrix within the parentheses in (11) is the sum of a rank-1 matrix and a scalar multiple of the identity matrix.

In the second encoding function, which is the dual of the first one, nodes $k+1$ to $2k$ store the source symbols and nodes 1 to $k$ store the parity-check symbols. Let $\mathbf{y}_j$ be the $k$-dimensional vector stored in node $k+j$. For $i = 1, \ldots, k$, node $i$ stores the vector

$$\mathbf{x}_i = \sum_{\ell=1}^{k} \left(b \hat{v}_\ell u_i^T + f q_{\ell i} I_k\right) \mathbf{y}_\ell. \qquad (12)$$

This duality relationship was first noted in [10].

###### Proposition 1 ([10]).

The regenerating code defined by (11) is the same as the one defined by (12).

We will give a proof of Prop. 1 in terms of matrices. The matrix formulation is also useful in simplifying the description of the repair and decoding procedures. Let $X$ (resp. $Y$, $\hat{U}$ and $\hat{V}$) be the $k \times k$ matrix whose columns are $\mathbf{x}_i$ (resp. $\mathbf{y}_i$, $\hat{u}_i$ and $\hat{v}_i$), for $i = 1, \ldots, k$. We have

$$\hat{U} = (U^{-1})^T = \hat{V}(Q^{-1})^T, \qquad \hat{V} = (V^{-1})^T = \hat{U}(P^{-1})^T.$$

In terms of these matrices, the first encoding function can be expressed as

$$Y = a\hat{U}X^TV + eXP. \qquad (13)$$

Indeed, the $j$-th column of $Y$ is

$$a\hat{U}X^T v_j + eX p_j = a\sum_{\ell=1}^{k} \hat{u}_\ell (\mathbf{x}_\ell^T v_j) + e\sum_{\ell=1}^{k} \mathbf{x}_\ell p_{\ell j} = \sum_{\ell=1}^{k} \left(a\hat{u}_\ell v_j^T + e p_{\ell j} I_k\right) \mathbf{x}_\ell.$$

Similarly, the second encoding function defined by (12) can be expressed as

$$X = b\hat{V}Y^TU + fYQ. \qquad (14)$$
###### Proof.

Proof of Prop. 1. Suppose that $Y$ is given as in (13). Substituting $Y$ by (13) in the right-hand side of (14), we get

$$\begin{aligned}
\text{R.H.S. of (14)} &= b\hat{V}Y^TU + fYQ \\
&= b\hat{V}(a\hat{U}X^TV + eXP)^TU + f(a\hat{U}X^TV + eXP)Q \\
&= ab\,\hat{V}V^TX\hat{U}^TU + be\,\hat{V}P^TX^TU + fa\,\hat{U}X^TVQ + fe\,XPQ \\
&= (ab+ef)X + (be+af)\hat{U}X^TU \\
&= X = \text{L.H.S. of (14)}.
\end{aligned}$$

In the third line we used $\hat{V}V^T = I_k = \hat{U}^TU$, $\hat{V}P^T = \hat{U}$, $VQ = U$ and $PQ = I_k$. The last line follows from the facts that $ab+ef = 1$ and $af+be = 0$, which follow directly from (10). Therefore, (14) is implied by (13).

By similar arguments, one can show that (13) is implied by (14). Therefore, the regenerating code defined by the first encoding function in (11) is the same as the one defined by the second encoding function in (12). ∎
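Proposition 1 can also be spot-checked numerically. The sketch below uses assumed small parameters ($k = 2$ over $\mathbb{F}_{11}$, a Cauchy matrix $P$, and sample values $a, e$) and verifies that applying the second encoding map (14) to the output of the first one (13) returns the source matrix $X$:

```python
import random

random.seed(6)
q, k = 11, 2                               # assumed small sample parameters

def mm(A, B):
    # matrix product over F_q
    return [[sum(A[i][l] * B[l][j] for l in range(len(B))) % q
             for j in range(len(B[0]))] for i in range(len(A))]

def tr(A):
    return [list(r) for r in zip(*A)]

def inv(A0):
    # Gauss-Jordan inverse over F_q
    m = len(A0)
    A = [row[:] + [int(i == j) for j in range(m)] for i, row in enumerate(A0)]
    for c in range(m):
        piv = next(r for r in range(c, m) if A[r][c] % q)
        A[c], A[piv] = A[piv], A[c]
        s = pow(A[c][c], q - 2, q)
        A[c] = [x * s % q for x in A[c]]
        for r in range(m):
            if r != c and A[r][c]:
                f0 = A[r][c]
                A[r] = [(x - f0 * y) % q for x, y in zip(A[r], A[c])]
    return [row[m:] for row in A]

def lin(A, B, ca, cb):
    # entrywise ca*A + cb*B over F_q
    return [[(ca * x + cb * y) % q for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

U = [[1, 1], [1, 2]]                                          # nonsingular
P = [[pow(x + y, q - 2, q) for y in (4, 5)] for x in (1, 2)]  # Cauchy, super-regular
Q, V = inv(P), mm(U, P)
Uh, Vh = tr(inv(U)), tr(inv(V))                               # dual-basis matrices
a, e = 2, 1
s = pow((a * a - e * e) % q, q - 2, q)
b, f = a * s % q, (q - e) * s % q                             # from (10)

X = [[random.randrange(q) for _ in range(k)] for _ in range(k)]
Y = lin(mm(mm(Uh, tr(X)), V), mm(X, P), a, e)                 # encoding (13)
assert lin(mm(mm(Vh, tr(Y)), U), mm(Y, Q), b, f) == X         # (14) recovers X
```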

Repair Procedure. Suppose that $t$ systematic nodes fail. We assume without loss of generality that the failed nodes are nodes 1 to $t$, after some appropriate node re-labeling if necessary.

In the first phase of repair, each of the $d$ surviving nodes sends a symbol to each of the new nodes. For $i = 1, \ldots, t$, the symbol sent to new node $i$ is obtained by taking the inner product of $u_i$ with the content of the helper node.

Consider node $i$, for some fixed index $i \in \{1, \ldots, t\}$. The symbols received by node $i$ after the first phase of repair are

$$u_i^T \mathbf{x}_m \ \text{ for } m = t+1, t+2, \ldots, k, \quad \text{and} \quad u_i^T \mathbf{y}_j \ \text{ for } j = 1, 2, \ldots, k.$$

We make a change of variables and define

$$Z := YQ.$$

For $\nu = 1, \ldots, k$, the $\nu$-th column of $Z$ is

$$\mathbf{z}_\nu := \sum_{\ell=1}^{k} q_{\ell\nu} \mathbf{y}_\ell.$$

Because $Q$ is a non-singular matrix, node $i$ can obtain the vector $u_i^T Z$ from $u_i^T Y$, and vice versa. In terms of the new variables in $Z$, (14) becomes

$$X = b\hat{U}Z^TU + fZ. \qquad (15)$$

The symbol sent from node $m$ to node $i$, namely $u_i^T \mathbf{x}_m$, is the $m$-th component of the vector

$$u_i^T X = u_i^T (b\hat{U}Z^TU + fZ),$$

and, since $u_i^T \hat{U}$ is the $i$-th standard basis row vector, is equal to

$$b\,\mathbf{z}_i^T u_m + f\,u_i^T \mathbf{z}_m.$$

As a result, the information obtained by node $i$ after the first repair phase can be transformed to

$$u_i^T \mathbf{z}_j \ \text{ for } j = 1, 2, \ldots, k, \quad \text{and} \quad b\,u_m^T \mathbf{z}_i + f\,u_i^T \mathbf{z}_m \ \text{ for } m = t+1, t+2, \ldots, k.$$

In the second phase of the repair procedure, node $i'$ sends the symbol $u_{i'}^T \mathbf{z}_i$ to node $i$, for $i, i' \in \{1, \ldots, t\}$, $i \neq i'$. The total number of symbols transmitted during the first and the second phases of the repair procedure is $dt + t(t-1) = t(d+t-1)$. The number of symbol transmissions per failed node is thus

$$\gamma = d+t-1.$$

Node $i$ wants to recover $\mathbf{x}_i$, the $i$-th column of $X$, as expressed in (15). The $i$-th column of the first term on the right-hand side is equal to the product of $b\hat{U}$ and the vector $Z^T u_i$. We note that the components of $Z^T u_i$ are precisely $u_i^T \mathbf{z}_\nu$, for $\nu = 1, \ldots, k$, and are already known to node $i$. It remains to calculate the $i$-th column of $fZ$, which is $f\mathbf{z}_i$.

Node $i$ computes $u_m^T \mathbf{z}_i$ for $m = t+1, \ldots, k$ by

$$u_m^T \mathbf{z}_i = \frac{1}{b}\left[\left(b\,u_m^T \mathbf{z}_i + f\,u_i^T \mathbf{z}_m\right) - f\,u_i^T \mathbf{z}_m\right].$$

During the second phase of repair, node $i$ gets

$$u_{i'}^T \mathbf{z}_i, \quad \text{for } i' \in \{1, 2, \ldots, t\} \setminus \{i\}.$$

As a result, node $i$ has a handle on $u_m^T \mathbf{z}_i$ for all $m = 1, \ldots, k$ (the value $u_i^T \mathbf{z}_i$ is already known from the first phase). Since the $u_m$'s are linearly independent, node $i$ can calculate $\mathbf{z}_i$ by taking the inverse of the matrix $U^T$. This completes the repair procedure for node $i$.
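The whole repair procedure for systematic nodes can be simulated for small assumed parameters ($k = 3$, $t = 2$, so $n = 6$ and $d = 4$, over $\mathbb{F}_{11}$). The sketch below reproduces the two phases from the viewpoint of each new node and checks that the original columns $\mathbf{x}_i$ are rebuilt:

```python
import random

random.seed(7)
q, k, t = 11, 3, 2            # n = 2k = 6, d = n - t = 4; assumed sample parameters

def mm(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(len(B))) % q
             for j in range(len(B[0]))] for i in range(len(A))]

def tr(A):
    return [list(r) for r in zip(*A)]

def inv(A0):
    # Gauss-Jordan inverse over F_q
    m = len(A0)
    A = [row[:] + [int(i == j) for j in range(m)] for i, row in enumerate(A0)]
    for c in range(m):
        piv = next(r for r in range(c, m) if A[r][c] % q)
        A[c], A[piv] = A[piv], A[c]
        s = pow(A[c][c], q - 2, q)
        A[c] = [x * s % q for x in A[c]]
        for r in range(m):
            if r != c and A[r][c]:
                f0 = A[r][c]
                A[r] = [(x - f0 * y) % q for x, y in zip(A[r], A[c])]
    return [row[m:] for row in A]

U = [[pow(x, r, q) for x in (1, 2, 3)] for r in range(k)]          # nonsingular
P = [[pow(x + y, q - 2, q) for y in (4, 5, 6)] for x in (1, 2, 3)] # Cauchy
Q, V = inv(P), mm(U, P)
Uh = tr(inv(U))                                                     # dual basis of U
a, e = 2, 1
s = pow((a * a - e * e) % q, q - 2, q)
b, f = a * s % q, (q - e) * s % q                                   # from (10)

X = [[random.randrange(q) for _ in range(k)] for _ in range(k)]     # source columns
Y = [[(a * p1 + e * p2) % q for p1, p2 in zip(r1, r2)]
     for r1, r2 in zip(mm(mm(Uh, tr(X)), V), mm(X, P))]             # encoding (13)
Z = mm(Y, Q)
u, xcol, zcol = tr(U), tr(X), tr(Z)                                 # columns u_i, x_i, z_i
ip = lambda x, y: sum(p1 * p2 for p1, p2 in zip(x, y)) % q

# phase 1 (as seen by new node i): u_i^T z_j for all j (from the parity helpers'
# symbols u_i^T y_j, via Z = YQ), and u_i^T x_m for the systematic helpers m >= t
uiz = [[ip(u[i], zcol[j]) for j in range(k)] for i in range(t)]
binv = pow(b, q - 2, q)
for i in range(t):
    # extract u_m^T z_i from u_i^T x_m = b u_m^T z_i + f u_i^T z_m
    sys_part = [(ip(u[i], xcol[m]) - f * uiz[i][m]) * binv % q for m in range(t, k)]
    # phase 2: node i' hands over u_{i'}^T z_i; u_i^T z_i is known locally
    rhs = [uiz[m][i] for m in range(t)] + sys_part
    z_i = [row[0] for row in mm(inv(u), [[x] for x in rhs])]        # solve U^T z_i = rhs
    x_i = [(b * p1[0] + f * p2) % q
           for p1, p2 in zip(mm(Uh, [[x] for x in uiz[i]]), z_i)]   # x_i from (15)
    assert x_i == xcol[i]                                           # column repaired
```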

By dualizing the above arguments, we can collaboratively repair any $t$ parity-check node failures with optimal repair bandwidth $\gamma = d+t-1$. Note that we have not used the property that the matrices $P$ and $Q$ are super-regular yet. The correctness of the repair procedure only relies on the condition that $U$ and $V$ are non-singular.

File Recovery. The reconstruction of the original file can be done in the same way as in [8], [9] and [10]. We give a more concise description of the file recovery procedure below.

Suppose that a data collector connects to $k-s$ nodes among the first $k$ nodes, and $s$ nodes among the last $k$ nodes, for some integer $s$ between 0 and $k$. With suitable re-indexing, we may assume without loss of generality that nodes $s+1, s+2, \ldots, k$ are contacted by the data collector. Suppose that the indices of the remaining storage nodes connected to the data collector are $j_1, \ldots, j_s$, with

 k