A game-theoretic approach to a peer-to-peer cloud storage model
Abstract
Classical cloud storage based on external data providers has been recognized to suffer from a number of drawbacks. This is due to its inherently centralized architecture, which makes it vulnerable to external attacks, malware, and technical failures, as well as to the large premiums charged for business purposes. In this paper, we propose an alternative distributed peer-to-peer cloud storage model, based on the observation that users themselves often have spare storage capacity that could in principle be offered to other users. Our setup is a network of users connected through a graph, each of them being at the same time a source of data to be stored externally and a possible storage resource. We cast the peer-to-peer storage model as a potential game and we propose an original decentralized algorithm which makes units interact, cooperate, and store a complete backup of their data on their connected neighbors. We present theoretical results on the algorithm, as well as a number of simulations which validate our approach.
I. Introduction
Cloud storage on the Internet has come to rely almost exclusively on data providers serving as trusted third parties to transfer and store the data. While the system works well enough in most cases, it still suffers from the inherent weaknesses of the trust-based model. The traditional cloud is open to a variety of security threats, including man-in-the-middle attacks and malware, that expose sensitive and private consumer and corporate data. Furthermore, current cloud storage applications charge business clients large premiums for data storage facilities. Moreover, these cloud storage providers may suffer technical failures that can cause data breaches and unavailability, much to the distress of the users and applications that depend on them.
To address the aforementioned shortcomings, a decentralized peer-to-peer cloud storage model would be the right answer. In the wake of the successful peer-to-peer file sharing model of applications like BitTorrent and its look-alikes, the same philosophy may well be leveraged for a different but closely related application service: storage. Indeed, a slew of fledgling and somewhat successful startups are entering this market niche. Among the most noteworthy examples are:

- Storj: www.youtube.com/channel/UCcTEqWwZV5Rlh0RZsp2Qw,
- BitTorrent Sync: www.getsync.com/,
- Ethos: www.youtube.com/watch?v=qUftGCQ5dqo,
- SpaceMonkeys: www.spacemonkey.com/.
Clearly, a completely decentralized peer-to-peer model must account for some challenging technical difficulties that are absent in a centralized cloud model. Firstly, security and privacy must be carefully implemented by ensuring end-to-end encryption resistant to attackers. In addition, the model must account for the latency, performance, and downtime of average user devices.
Although the above technical issues are challenging, they can be addressed with the right tools and architectures available in current state-of-the-art technology, and they will not be considered in this paper. What remains an open question is how to endow the system with the right incentives for the end users to collaborate and share their storage with each other. We believe that the answer to that question comes from the formal framework of game theory, which provides the mathematical tools to ensure successful cooperation among users/players. There are (at least) two ways in which game theory, with its wealth of folk theorems, can be applied to the real world. One is as a tool to study an ongoing phenomenon. This is the typical setting of the social and psychological sciences. A second, more engineering-oriented approach is to leverage game theory to design specific mechanisms, i.e., to set the rules of the game that will bring the interaction to the most desirable outcome, which is basically the maximum global welfare, or a Pareto-dominant equilibrium of the game. This work follows the second approach.
In this paper, we consider a network of units (PCs, but possibly also smartphones or other devices possessing storage capabilities) which need to store a backup of their data externally and, at the same time, can offer available space to store data of other connected units. In this setup, we cast the peer-to-peer storage model as an allocation potential game and we propose an original decentralized algorithm which makes units interact, cooperate, and store a complete backup of their data on their connected neighbors.
Units are assumed to be connected through a network and, autonomously, at random times, to activate and allocate or move their data pieces among the neighboring units. Formally, each unit has a utility function which assigns a value to its neighbors on the basis of their reliability, their current congestion (resources have bounded storage capabilities), and the amount of data the unit has already stored in them. Following classical evolutionary game theory [6], we propose an algorithm based on a noisy best response action: each time a unit activates, it chooses which neighbor to use according to a Gibbs probability distribution having its peak on the maxima of the utility function.
In the remaining part of this section, we formally define the storage allocation problem and we show its equivalence with a classical matching problem on a graph. This allows us to use the celebrated Hall theorem to give a necessary and sufficient condition for the allocation problem to be solvable. Section II is devoted to casting the problem in a potential game-theoretic framework [2] and to proposing a distributed algorithm which is an instance of a noisy best response dynamics [6]. We claim a fundamental result, which will be proven in a forthcoming paper, saying that in the double limit when time goes to infinity and the noise parameter goes to zero, the algorithm converges to a Nash equilibrium which is, in particular, a global maximum of the potential function. This guarantees that the solution will indeed be close to the global welfare of the community. Finally, Section III is devoted to the presentation of an extensive set of simulations. A conclusions section ends the paper.
I-A. Related Work
Though allocation games have been considered before in the literature, they all differ substantially from the model we propose and study in this paper. In [4] the authors consider an allocation problem cast as a pure congestion game. The utility functions of units measure the congestion of a resource simply as a function of the number of units currently using it, but they do not impose any strict storage limitation. The algorithm they propose is a classical best response algorithm and is shown to achieve a Nash equilibrium. Our model differs considerably, as we also consider reliability of the resources and data fragmentation in the utility functions and, moreover, we impose strict storage limitations. A crucial consequence of this is that classical best response algorithms would not work in our case: Example 3 shows a situation where such an algorithm would halt before allocation is completed. Allocation games are also considered in [1] where, however, in the proposed algorithm units do not interact through a graph but rather through a device that acts as a leader, selecting which resources can be used. A related context where congestion games have been used is that of network routing [3].
I-B. The model
Consider a set of units which play the double role of users, who have to allocate a backup of their data externally, as well as resources, where data from other units can be allocated. Generically, an element of will be called a unit, while the terms user and resource will be used when the unit is considered in its two possible roles of, respectively, a source or a recipient of data. We assume units to be connected through a directed graph where a link means that unit is allowed to store data in unit . We denote by
respectively, the out- and the in-neighborhood of a node. Note the important difference in interpretation in our context: represents the set of resources available to unit , while is the set of units having access to resource . If , we put and .
We imagine the data possessed by the units to be quantized into atoms of the same size. Each unit is characterized by two nonnegative integers:

- is the number of data atoms that unit needs to back up into its neighbors,
- is the number of data atoms that unit can accept and store from its neighbors.
The numbers and will be assembled into two vectors denoted, respectively, and . We also define
Given the triple , we define an allocation as any map satisfying the properties expressed below.

(C1) Graph constraint: for all and ;
(C2) Storage limitation: for every ,
The fact that means that user has allocated the data atom into resource .
We will say that the allocation problem is solvable if an allocation exists. We denote by the set of allocations. We will also need to consider partial allocations, namely maps where satisfying, where defined, conditions (C1) and (C2). We denote by the set of partial allocations.
In the following section we study the conditions under which the allocation problem is solvable, namely conditions under which is nonempty.
I-C. The allocation problem as a matching problem
Define
Consider now the bipartite graph where iff . An allocation naturally induces a matching on which is complete on . To this aim, notice that, from and using condition (C2), we can construct an injective mapping such that for every and for all . We then define
It is clear that this procedure can be inverted and that to any matching of complete on we can associate an allocation for . This equivalence allows us to use classical results such as Hall's marriage theorem to characterize the existence of allocations. Precisely, we have the following result.
Theorem 1
Given , there exists an allocation iff the following condition is satisfied:
(1) 
By Hall’s theorem, the existence of a matching in complete on is equivalent to the condition
(2) 
where is the out-neighborhood of in . Given , let be the union of those 's for which . By the way the bipartite graph has been defined, it follows that , so that it is sufficient to restrict condition (2) to subsets such that yield . Given such an , if we consider , we immediately obtain that (1) coincides with (2).
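Condition (1) need not be checked subset by subset: by the matching equivalence above, solvability can be tested directly with a maximum-flow computation, since an allocation exists iff the maximum flow from users to resources equals the total demand. The sketch below is ours, not part of the paper; the function name and the dict-based graph encoding are assumptions.

```python
from collections import deque

def allocation_exists(neigh, m, c):
    """Return True iff the allocation problem is solvable.

    neigh[i]: resources reachable from user i (its out-neighborhood),
    m[i]: data atoms user i must back up,
    c[j]: data atoms resource j can store.
    By the matching construction above, solvability is equivalent to a
    max flow of value sum(m) in the user/resource network.
    """
    S, T = "S", "T"
    cap = {S: {}, T: {}}

    def add_edge(u, v, w):
        cap.setdefault(u, {})[v] = cap.get(u, {}).get(v, 0) + w
        cap.setdefault(v, {}).setdefault(u, 0)  # residual arc

    for i in m:
        add_edge(S, ("u", i), m[i])
        for j in neigh.get(i, ()):
            add_edge(("u", i), ("r", j), m[i])
    for j in c:
        add_edge(("r", j), T, c[j])

    flow = 0
    while True:  # Edmonds-Karp: BFS for a shortest augmenting path
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v, w in cap.get(u, {}).items():
                if w > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            return flow == sum(m.values())
        bottleneck, v = float("inf"), T
        while parent[v] is not None:       # find the bottleneck capacity
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = T
        while parent[v] is not None:       # push flow along the path
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck
```

For instance, two users with one atom each sharing a single unit-capacity resource fail the test, while giving one of them access to a second resource makes it pass.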
In general, it is not necessary to check the validity of (1) for every subset . We say that is maximal if for any , it holds . We say that are independent if and is called irreducible if it cannot be decomposed into the union of two nonempty independent subsets. Clearly, it is sufficient to verify (1) for the subclass of maximal irreducible subsets.
Example 1
If is complete, we have that while for all such that . Hence, the only maximal irreducible subsets are the singletons and the set . Condition (1) in this case reduces to
(3) 
In general, the class of maximal irreducible subsets can be large and grow more than linearly in the size of , as the following example shows.
Example 2
If is a line graph ( and ) it can be checked that the maximal irreducible subsets are those of the form .
In the case when and are both constant, something more can be said.
Proposition 2
Given , where is regular, and for all , there exists an allocation iff .
By Theorem 1, we simply have to show that for every subset . Let be the set of edges having one of the nodes in . If is the degree of the nodes in , we have that
From the practical point of view, the equivalence of our problem with a classical matching problem, is, however, of little utility, as the number of nodes of is of the size which will in general be very large.
Moreover, in case allocations exist, we want to be able to construct one in a distributed way without the need of any supervision. The algorithm must be iterative in order to cope with possible modifications over time of the units, of their interconnections, and of their data storage needs and limitations. The possibility that units leave and enter the community must also be considered.
We also want the possibility of finding solutions possessing certain extra features:

- (Reliability) resources will often have different reliability properties and we want to give preference, in the allocation, to more reliable resources;
- (Congestion) equally reliable resources should be equally used, avoiding congestion phenomena;
- (Aggregation) users prefer to use as few resources as possible to allocate their backup data.
The reason for this last feature comes from the fact that excessive fragmentation of the backup data will cause a blow-up in the number of communications among the units in both the storage and recovery phases. This feature should be weighed against another feature which is not going to be addressed in this paper, namely diversification of backups: in real applications units will need to back up multiple copies of their data in order to cope with security issues and possible failure phenomena. In that case, these multiple copies will need to be stored in different units. This issue will be analyzed in a subsequent paper.
The above desired features may be contradictory in general, and we want tunable parameters to make the algorithm converge towards a desired compromise solution.
The proposed algorithm will be fully distributed: units will iteratively allocate and move their data among the neighbors on the basis of the space available and trying to maximize a utility function. There will be an underlying game theoretic structure inspired by the desired features described above. Our algorithm will be analyzed with the techniques of evolutionary game theory and it will be shown to yield a reversible Markov process converging to a Nash equilibrium of the game.
II. The game-theoretic setup and the algorithm
Given a partial allocation , consider the matrix where is the number of data atoms that has copied into under the allocation , namely,
(4) 
Clearly satisfies the following conditions

(P1) for all and whenever .
(P2) for all .
(P3) for all .
It is immediate to see that, conversely, if there exists satisfying these properties (such a is called a partial allocation state), then, from it, we can construct a partial allocation such that . Clearly, under this correspondence, we have that iff satisfies (P2) with equality for all . In this case is called an allocation state. The set of partial allocation states and the set of allocation states are denoted, respectively, with the symbols and .
It is clear that two partial allocations and such that only differ by a permutation of the data atoms of the various units and for many purposes can be considered as equivalent. All the quantities of interest for the game-theoretic setting will be defined at the level of .
We are now ready to define the game theoretic model. We first define utilities: under a (possibly partial) allocation state , the utility of a unit in using resource is given by
(5) 
The first term encodes possible reliability differences among resources, the second term is instead a congestion term which takes into consideration the level of use of the resource, and, finally, the third term depends on both and and pushes a unit to allocate in those resources where it has already allocated. are two nonnegative parameters to tune the effect of the congestion and of the aggregation terms, respectively.
The choice of this particular utility function has been made on the basis of simplicity considerations (notice that the state enters linearly in it) and on the fact that, as exploited below, it leads to a potential game. In principle, different terms in the utility function can be introduced in order to make units take into consideration other desired features (e.g., multiple backups).
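Since the symbols of (5) are lost in this extraction, the following sketch fixes one concrete reading consistent with the description above: a reliability term for the resource, minus a congestion term proportional to its current load, plus an aggregation bonus proportional to the data the unit already stores there. The function name, the linear weighting, and the default parameters are our assumptions.

```python
def utility(i, j, W, r, alpha=0.5, beta_aggr=0.2):
    """Assumed concrete form of the utility (5) of unit i for resource j.

    r[j]: reliability of resource j,
    alpha: weight of the congestion term (total load of j under state W),
    beta_aggr: weight of the aggregation bonus for data already placed in j,
    W[i][j]: number of atoms of user i currently stored at resource j.
    """
    load = sum(row.get(j, 0) for row in W.values())  # current use of j
    return r[j] - alpha * load + beta_aggr * W.get(i, {}).get(j, 0)
```

With `W = {1: {"a": 2}, 2: {"a": 1}}`, `r = {"a": 1.0, "b": 0.5}`, `alpha=0.1`, `beta_aggr=0.2`, unit 1 values the congested but familiar resource "a" at 1.0 − 0.3 + 0.4 = 1.1 and the empty resource "b" at 0.5.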
Define
(6) 
and notice that if are such that , then,
(7) 
In other words, under the allocation state , when user moves a data atom from to , it experiences a variation in utility given by . is called a potential of the game. Given and , put
the set of resources still available for under the current allocation state .
An allocation state (and also any such that ) is called a Nash equilibrium if, for every , for every such that , for every , it holds
Maxima of are clearly Nash equilibria while, in general, the converse is not true. Considering that is defined on a finite set, a maximum, and thus a Nash equilibrium, always exists. Under a Nash equilibrium, a unit whose goal is to maximize its utility has no advantage in moving its allocated data, under the standing assumption that only one data atom at a time can be moved. Notice that data atoms are to be interpreted as aggregations of data and the choice of their size is part of the design of the algorithm. Clearly, different levels of granularity will give rise to different game models, including different Nash equilibria.
For any , we define the Gibbs probability distribution over as
where and where
is a normalizing factor.
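A minimal sketch of the Gibbs choice rule (the names and the max-shift normalization are ours): each available resource is selected with probability proportional to the exponential of its scaled utility, with the normalizing factor computed explicitly.

```python
import math

def gibbs_probs(utilities, beta):
    """Gibbs distribution over available resources: P(j) ∝ exp(beta * u_j).

    beta = 0 gives the uniform choice (pure noise), while beta → ∞
    concentrates the distribution on the maximizers of the utility
    (best response).
    """
    u_max = max(utilities.values())            # shift for numerical stability
    weights = {j: math.exp(beta * (u - u_max)) for j, u in utilities.items()}
    Z = sum(weights.values())                  # the normalizing factor
    return {j: w / Z for j, w in weights.items()}
```

For two resources with utilities 1 and 0, `beta = 0` yields probabilities (0.5, 0.5), while a large `beta` puts almost all mass on the maximizer.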
II-A. The algorithm
The algorithm we are proposing below is a distributed and asynchronous algorithm where units activate at random independent times and either allocate or move their atoms, undertaking a relaxed stochastic version of the utility maximization.
The algorithm is mathematically described as a continuous time Markov process on the set of partial allocations . Precisely, units are assumed to be equipped with independent internal Poisson clocks with possibly different clicking rates. We denote by the clicking rate of unit . When a unit activates it can either allocate a further data atom (if allocation is not completed yet) or move a data atom from one resource to another. The choice of the resource where either allocate or move the data atom is done according to the Gibbs probability: this is a classical choice in evolutionary game theory and will be amenable to a fairly complete theoretical analysis. The details of the algorithm are described below. We put .

Assume activates at time . With probabilities
the unit will make, respectively, an allocation or a distribution move, as explained below. Of course we are assuming that

(ALLOCATION MOVE)
1. Choose uniformly at random in
2. Choose according to the Gibbs probability
3. Put and by

(DISTRIBUTION MOVE)
1. Choose according to the probability .
2. Choose uniformly at random in
3. Choose according to the Gibbs probability
4. Put
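The two moves above can be combined into a single discrete-time simulation sketch. This is a simplification of the continuous-time Poisson-clock process; all identifiers and the specific utility weights are our assumptions, not the paper's.

```python
import math
import random

def run_allocation(neigh, m, c, r, alpha=0.5, beta=3.0, steps=5000, seed=0):
    """Discrete-time sketch of the algorithm.

    At each step a random unit activates and performs an allocation move
    (if it still has atoms to place) or a distribution move, choosing the
    target resource with Gibbs probabilities over its neighbors that
    still have free space.
    """
    rng = random.Random(seed)
    W = {i: {j: 0 for j in neigh[i]} for i in m}    # allocation state
    left = dict(m)                                   # atoms not yet placed
    load = {j: 0 for j in c}                         # current resource use

    def gibbs_pick(i, options):
        # assumed utility: reliability - congestion + aggregation bonus
        utils = [r[j] - alpha * load[j] + 0.1 * W[i][j] for j in options]
        u_max = max(utils)
        w = [math.exp(beta * (u - u_max)) for u in utils]
        x, acc = rng.random() * sum(w), 0.0
        for j, wj in zip(options, w):
            acc += wj
            if x <= acc:
                return j
        return options[-1]

    units = list(m)
    for _ in range(steps):
        i = rng.choice(units)
        avail = [j for j in neigh[i] if load[j] < c[j]]
        if not avail:
            continue                                 # unit is saturated
        if left[i] > 0:                              # ALLOCATION MOVE
            j = gibbs_pick(i, avail)
            W[i][j] += 1; load[j] += 1; left[i] -= 1
        else:                                        # DISTRIBUTION MOVE
            used = [j for j in neigh[i] if W[i][j] > 0]
            if not used:
                continue
            a = rng.choice(used)                     # move an atom out of a
            b = gibbs_pick(i, avail)
            if b != a:
                W[i][a] -= 1; load[a] -= 1
                W[i][b] += 1; load[b] += 1
    return W, left
```

On a small complete topology this reliably completes the allocation; the returned `left` records how many atoms each unit still has to place.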
The use of a noisy algorithm is crucial in our setting. In the following example we show a situation where a classical best response algorithm would remain stuck without completing the allocation.
Example 3
We are considering a line graph of four users as depicted below.
Each user has . Reliabilities are instead while for . Assume we are in the partial allocation state given by
Clearly, this partial allocation state could be reached from the zero allocation state with positive probability (the key point is that unit activates before unit and chooses the most reliable resource). It is also clear that under a best response strategy () this allocation state is an equilibrium: unit cannot allocate unless unit moves its data to , but this will never happen because .
II-B. Theoretical results
In this section we analyze the behavior of the algorithm introduced above. We will essentially show two results. First, we prove that if the set of allocations is nonempty (i.e., condition (1) is satisfied), the algorithm above will find one in bounded time with probability one. Second, we will show that, under a slightly stronger assumption than (1), in the double limit and then , the Markov process induced by the algorithm will always converge, in law, to a Nash equilibrium which is a global maximum of the potential function .
In order to prove these results, it will be necessary to go through a number of intermediate technical steps. First of all, it will be convenient to work directly with the process , which is also Markovian because of the way the transitions have been defined, considering that the results we are claiming can all be expressed and established at this simpler level.
Given , we denote by the transition probability from to of the Markov chain underlying the process . denotes the graph on where an edge is present if and only if .
Our strategy will be to show that from any element there is a path in to some element . This, by standard Markov chain arguments, leads to the result that allocation will be achieved in bounded time with probability one. Afterwards, we will show that restricted to the set of allocations is irreducible, and this will then yield the asymptotic result.
First we consider the problem of finding a path to an allocation state. Given we define the following subsets of units
Units in are called fully allocated: these units have completed the allocation of their data under the state . Units in are called saturated: they have not yet completed their allocation; however, under the current state , they cannot perform any action, neither allocate nor distribute. Finally, define
It is clear that from any , some allocation move can be performed. Instead, if we are in a state , only fully allocated units can possibly make a distribution move. Notice that, because of condition (1), there certainly exist resources such that , and these resources are exclusively connected to fully allocated units. The key point is to show that in a finite number of distribution moves it is always possible to move some data atoms from resources connected to saturated units to resources with available space: this will then make a further allocation move possible.
For any fixed , we can consider the following graph structure on , thought of as a set of resources: . Given , there is an edge from to if and only if there exists for which
The edge from to will be indicated with the symbol (to also recall the unit involved). The presence of the edge means that the two resources and are in the neighborhood of a common unit which is using under . This indicates that can in principle move some of its data currently stored in into resource , if the latter is available. We have the following technical result.
Lemma 3
Suppose satisfies (1). Fix and let be such that there exists with . Then, there exists a sequence
(8) 
satisfying the following conditions

Both families of the ’s and of the ’s are each made of distinct elements;

for every ;

for every , and .
Let be the subset of nodes which can be reached from in . Preliminarily, we prove that there exists such that . Let
and notice that, by the way and have been defined,
(9) 
Suppose now that, contrary to the thesis, for all . Then,
(10) 
where the first inequality follows from (9) and (1), the first equality from the contradiction hypothesis, the second equality from the definition of , the third equality again from (9) and, finally, the last inequality from the existence of . This is clearly absurd and thus proves our claim.
Consider now a path of minimal length from to in :
and notice that the sequence will automatically satisfy properties (Sa) to (Sc).
We are now ready to prove the first main result.
Theorem 4
Assume that
1. for every such that ,
2. if ,
3. satisfies (1).
Then, with probability one, the Markov process will be, after a finite number of jumps, in the set of allocations .
In order to prove the claim, it will be sufficient to show that from any there is a path in (the graph underlying the possible transitions of the process ) to some element . We will prove it by a double induction process. To this aim we consider two indices associated to any . The first one is defined by
To define the second, consider any . We can apply Lemma 3 to and any and obtain that we can find a sequence of agents satisfying the properties (Sa), (Sb), and (Sc) above. Among all the possible choices of , and of the corresponding sequence, assume we have chosen the one minimizing and denote such minimal by . The induction process will be performed with respect to the lexicographic order induced by the pair .
In the case when , it means we can find , such that . Therefore, under the allocation state , the unit can allocate a further data atom to . Considering that the activation of the unit has positive probability because of assumptions 1. and 2., this shows that is connected to a such that . In case , this means that .
Consider now any such that . Let , and the sequence satisfying the properties (Sa), (Sb), and (Sc) above. In the allocation state , the unit , if activated (and this again has positive probability to happen because of assumptions 1. and 2.), can thus move an atomic piece of data from to . The new allocation state is . Since , for sure . The induction argument is thus complete.
We are now left with studying the Markov process on . We start with the following
Proposition 5
, restricted to , is a time-reversible Markov process. More precisely, it holds
(11) 
where
Notice that the only cases when and are not both equal to are those pairs such that for some . Assume this to be the case. Then, it follows from the way the distribution moves of the algorithm have been defined that
Substituting in (11) and using relation (7), it is immediate to check that equality holds.
We now show that under a slightly stronger assumption than (1), namely,
(12) 
the process restricted to is ergodic. Denote by the graph restricted to the set . Notice that, as a consequence of time-reversibility, is an undirected graph. Ergodicity is equivalent to proving that is connected. We start with a lemma analogous to the previous Lemma 3.
Lemma 6
It is sufficient to follow the steps of the proof of Lemma 3, noticing that in (10) the first equality is now a strict inequality, while the last strict inequality becomes an equality.
If are connected through a path in , we write that . Introduce the following distance on : if
A pair is said to be minimal if
Notice that is connected if and only if for any minimal pair , it holds .
Lemma 7
Let be a minimal pair. Suppose is such that . Then, for all .
Suppose by contradiction that for some . Then, necessarily, there exists such that . Consider then . Clearly, and this contradicts the minimality assumption. Thus for all . This yields . Exchanging the roles of and we obtain the thesis.
Proposition 8
If condition (12) holds true, the graph is connected.
Let be any minimal pair. We will prove that and are necessarily identical. Consider any resource . It follows from Lemma 6 that we can find a sequence satisfying the same (Sa), (Sb), and (Sc) with respect to the allocation state . Among all the possible sequences, choose one with minimal for given . We will prove by induction on that for all .
If , it means that . It then follows from Lemma 7 that for all . Suppose now that the claim has been proven for all minimal pairs and any for which (w.r.t. ) and assume that satisfies the properties (Sa), (Sb), and (Sc) with respect to . Notice that the unit can move a data atom from resource into resource under the allocation state and obtain . Consider now and notice that Lemma 7 yields . Define
Clearly, and this implies that is also a minimal pair. Notice that satisfies (Sa), (Sb), and (Sc) with respect to . Therefore, by the induction hypothesis, it follows that for all . Since and , the result follows immediately.
Corollary 9
Assume that (12) holds true. Then , restricted to , is an ergodic time-reversible Markov process whose unique invariant probability measure is given by
Remark: Notice that when , the invariant probability converges to the probability concentrated on the set of allocation states maximizing the potential and given by, for ,
Thus, if is small, the distribution of the process for sufficiently large will be close to Nash equilibria.
In this paper we will assume that for some , namely that a unit's activation rate is proportional to the amount of data it needs to back up. We will consider two possibilities for the allocation and distribution probabilities and :
In the first case, the probability of an allocation move is proportional to the amount yet to be allocated, while in the second case, no distribution move takes place before full allocation is reached.
III. Simulations
In this section we present a number of numerical simulations that validate the theoretical results and show the performance of the algorithm in terms of various parameters describing the speed of convergence, resource congestion, global utility, and complexity of the interconnections.
For the sake of readability we gather below the standing assumptions and parameters we have used.

- The number of units is denoted by and assumed to be even. Most of our simulations are for , but we have also studied scalability issues by considering and .
- We have considered two possible interconnection topologies: the complete graph and a random regular graph of degree .
- Time has been assumed to be discrete, assuming that at every time instant a unit is chosen with probability proportional to . Moreover, allocation and distribution moves are chosen according to the probabilities:
- The time horizon has been fixed to be , so that we allow up to two moves per data atom (one allocation and one possible distribution). It turns out that in all experiments carried out such a time horizon has been sufficient for completing the allocation of all data atoms and also for getting very close to a Nash equilibrium. Denote by the final allocation state of the system after time has elapsed.
- The parameter appearing in the Gibbs distribution has been chosen to be time-varying, with
where is the maximum reliability of the resources. This is a typical choice for such best response dynamics, even if the theoretical result expressed below cannot be directly applied to ensure convergence. In real-world applications, where adaptation to a time-varying scenario (e.g., addition or deletion of units, change in the topology) is needed, must instead be kept bounded.
- We have assumed units to have the same free space to offer and to have possibly different amounts to allocate.
- We have assumed units to split into two subsets of equal size, characterized by two different reliability levels, respectively, and .
- The congestion parameter is chosen to be , while we will consider different values for .
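The exact time-varying schedule for the Gibbs parameter is not reproduced in this extraction. Purely as an illustration, a standard choice in simulated-annealing-style best response dynamics is a slowly growing logarithmic schedule, scaled here by the maximum reliability as the text suggests; the precise form and the function name are our assumptions.

```python
import math

def beta_schedule(t, r_max):
    """Illustrative logarithmic annealing schedule for the Gibbs
    parameter: beta grows without bound (noise vanishes) as t increases,
    but slowly, which is the usual regime for annealing-type dynamics."""
    return math.log(2 + t) / r_max
```

In an adaptive, time-varying deployment one would instead cap this schedule at a fixed maximum value, as the text points out.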
Moving data from one resource to another one is an expensive task which must be carefully monitored in real applications. To this aim, we have introduced the index which computes the number of allocation or distribution moves per piece of data throughout the dynamics. In formula, if is the total number of moves performed by agent during the run of the algorithm, we put
The global utility of the system in the allocation state is defined as . Put .
measures how the solution found performs with regard to the global utility (notice that the potential does not coincide with , so that is not a priori a maximum of ).
Units give preference to the most trusted resource in ; however, depending on the amount of data they need to allocate and on the type of resources in their neighborhood, they may also need to use less trusted resources in . We define the mean and the variance of the satisfaction level as
If and are taken to be the probability that, when contacted at a random time, the resource is available to give access to the stored data, then can be interpreted as the probability that a piece of data can be recovered when requested at some random time.
The presence of the congestion term in the utility function should ensure that all resources with the same are in principle used equally. We measure the mean and variance of the congestion level of resources in by
Finally, we consider the mean in- and out-degrees measuring the topological complexity of the subgraph consisting of the edges for which :