Distributed Agreement in Dynamic Peer-to-Peer Networks

(A preliminary version of this paper appeared in the Proceedings of the ACM/SIAM Symposium on Discrete Algorithms (SODA), 2012, pp. 551-569.)
Motivated by the need for robust and fast distributed computation in highly dynamic Peer-to-Peer (P2P) networks, we study algorithms for the fundamental distributed agreement problem. P2P networks are highly dynamic networks that experience heavy node churn (i.e., nodes join and leave the network continuously over time). Our goal is to design fast algorithms (running in a small number of rounds) that guarantee, despite high node churn rate, that almost all nodes reach a stable agreement. Our main contributions are randomized distributed algorithms that guarantee stable almost-everywhere agreement with high probability even under high adversarial churn in a polylogarithmic number of rounds. In particular, we present the following results:
A randomized algorithm, running in a polylogarithmic (in $n$, the stable network size) number of rounds, that achieves almost-everywhere agreement with high probability under up to linear churn per round (i.e., up to $\epsilon n$ nodes churned per round, for some small constant $\epsilon > 0$), assuming that the churn is controlled by an oblivious adversary (which has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power, but is oblivious to the random choices made by the algorithm). Our algorithm requires only polylogarithmic in $n$ bits to be processed and sent (per round) by each node.
A randomized algorithm, running in a number of rounds polylogarithmic in $n$ and logarithmic in $m$ (where $m$ is the size of the input value domain), that achieves almost-everywhere agreement with high probability under up to $\epsilon\sqrt{n}$ churn per round (for some small constant $\epsilon > 0$), and that works even under an adaptive adversary (which also knows the past random choices made by the algorithm). This algorithm requires up to polynomial in $n$ bits (and up to $m$ bits) to be processed and sent (per round) by each node.
Our algorithms are the first known fully-distributed agreement algorithms that work under highly dynamic settings (i.e., high churn rates per step). Furthermore, they are localized (i.e., they do not require any global topological knowledge), simple, and easy to implement. These algorithms can serve as building blocks for implementing other non-trivial distributed computing tasks in dynamic P2P networks.
Peer-to-peer (P2P) computing is emerging as one of the key networking technologies in recent years, with many application systems, e.g., Skype, BitTorrent, and Cloudmark. However, many of these systems are not truly P2P, as they are not fully decentralized: they typically use hybrid P2P along with centralized intervention. For example, Cloudmark is a large spam detection system used by millions of people that operates by maintaining a hybrid P2P network; it uses a central authority to regulate and charge users for participation in the network. A key reason for the lack of fully-distributed P2P systems is the difficulty in designing highly robust algorithms for large-scale dynamic P2P networks. Indeed, P2P networks are highly dynamic networks characterized by a high degree of node churn, i.e., nodes continuously join and leave the network. Connections (edges) may be added or deleted at any time and thus the topology changes very dynamically. In fact, measurement studies of real-world P2P networks [33, 40, 64, 65] show that the churn rate is quite high: nearly 50% of peers in real-world networks can be replaced within an hour. (However, despite a large churn rate, these studies also show that the total number of peers in the network is relatively stable.) We note that peer-to-peer algorithms have been proposed for a wide variety of computationally challenging tasks such as collaborative filtering, spam detection, data mining, worm detection and suppression [55, 67], and privacy protection of archived data. However, all algorithms proposed for these problems have no theoretical guarantees of being able to work in a network with a dynamically changing topology and a linear churn rate per round. This is a major bottleneck in the implementation and widespread use of these algorithms.
In this paper, we take a step towards designing robust algorithms for large-scale dynamic peer-to-peer networks. In particular, we study the fundamental distributed agreement problem in P2P networks (the formal problem statement and model is given in Section 2). An efficient solution to the agreement problem can be used as a building block for robust and efficient solutions to other problems as mentioned above. However, the distributed agreement problem in P2P networks is challenging since the goal is to guarantee almost-everywhere agreement, i.e., almost all nodes should reach consensus, even under a high churn rate. (In sparse, bounded-degree networks, an adversary can always isolate some number of non-faulty nodes; hence, almost-everywhere agreement is the best one can hope for in such networks.) The churn rate can be as much as linear per time step (round), i.e., up to a constant fraction of the stable network size can be replaced per time step. Indeed, until recently, almost all the work known in the literature (see, e.g., [32, 44, 45, 46, 66]) has addressed the almost-everywhere agreement problem only in static (bounded-degree) networks, and these approaches do not work for dynamic networks with changing topology. Such approaches fail in dynamic networks where both nodes and edges can change by a large amount in every round. For example, the work of Upfal [66] showed how one can achieve almost-everywhere agreement under up to a linear number (up to $\epsilon n$, for a sufficiently small constant $\epsilon > 0$) of Byzantine faults in a bounded-degree expander network ($n$ is the network size). The algorithm uses a polynomial (in $n$) number of rounds and messages; however, the local computation required by each processor is exponential. Furthermore, the algorithm requires knowledge of the global topology, since at the start, nodes need to have this information "hardcoded". The work of King et al. is important in the context of P2P networks, as it was the first to study scalable (polylogarithmic communication and number of rounds) algorithms for distributed agreement (and leader election) that are tolerant to Byzantine faults. However, as pointed out by the authors, their algorithm works only for static networks; similar to Upfal's algorithm, the nodes require hardcoded information on the network topology to begin with, and thus the algorithm does not work when the topology changes. In fact, this work raises the open question of whether one can design agreement protocols that can work in highly dynamic networks with a large churn rate.
1.1 Our Main Results
Our first contribution is a rigorous theoretical framework for the design and analysis of algorithms for highly dynamic distributed systems with churn. We briefly describe the key ingredients of our model here. (Our model is described in detail in Section 2.) Essentially, we model a P2P network as a bounded-degree expander graph whose topology, both nodes and edges, can change arbitrarily from round to round and is controlled by an adversary. However, we assume that the total number of nodes in the network is stable. The number of node changes per round is called the churn rate or churn limit. We consider a churn rate of up to some $\epsilon n$, where $n$ is the stable network size. Note that our model is quite general in the sense that we only assume that the topology is an expander at every step; no other special properties are assumed. Indeed, expanders have been used extensively to model dynamic P2P networks in which the expander property is preserved under insertions and deletions of nodes (e.g., [52, 59]). (Expander graphs have also been used extensively as candidates to solve the agreement and related problems in bounded-degree graphs even in static settings, e.g., see [32, 44, 45, 46, 66]; here we show that similar expansion properties are beneficial in the more challenging setting of dynamic networks.) Since we do not make assumptions on how the topology is preserved, our model is applicable to all such expander-based networks. (We note that various prior work on dynamic network models makes similar assumptions on preservation of topological properties, such as connectivity, expansion, etc., at every step under dynamic edge insertions/deletions; cf. Section 1.3. The issue of how such properties are preserved is abstracted away from the model, which allows one to focus on the dynamism. Indeed, this abstraction has been a feature of most dynamic models; see, e.g., the survey referenced in Section 1.3.)
We study stable, almost-everywhere agreement in our model. By "almost-everywhere", we mean that almost all nodes, except possibly $\delta C$ nodes (where $C$ is the order of the churn and $\delta$ is a suitably small constant; cf. Section 2), should reach agreement on a common value. (This agreed value must be the input value of some node.) By "stable" we mean that the agreed value is preserved subsequently after the agreement is reached.
Our main contribution is the design and analysis of randomized distributed algorithms that guarantee stable almost-everywhere agreement with high probability (i.e., with probability at least $1 - 1/n^c$, for an arbitrary fixed constant $c > 0$) even under high adversarial churn, in a polylogarithmic number of rounds. Our algorithms also guarantee stability once agreement has been reached. In particular, we present the following results (the precise theorem statements are given in the respective sections below):
(cf. Section 4) A randomized algorithm, running in a polylogarithmic (in $n$, the stable network size) number of rounds, that achieves almost-everywhere agreement with high probability under up to linear churn per round (i.e., up to $\epsilon n$ nodes churned per round, for some small constant $\epsilon > 0$), assuming that the churn is controlled by an oblivious adversary (that has complete knowledge of what nodes join and leave and at what time, but is oblivious to the random choices made by the algorithm). Our algorithm requires only polylogarithmic in $n$ bits to be processed and sent (per round) by each node.
(cf. Section 5) A randomized algorithm, running in a number of rounds polylogarithmic in $n$ and logarithmic in $m$, that achieves almost-everywhere agreement with high probability under up to $\epsilon\sqrt{n}$ churn per round, for some small constant $\epsilon > 0$, and that works even under an adaptive adversary (that also knows the past random choices made by the algorithm). Here $m$ refers to the size of the domain of input values. This algorithm requires up to polynomial in $n$ bits (and up to $m$ bits) to be processed and sent (per round) by each node.
(cf. Section 6) We also show that no deterministic algorithm can guarantee almost-everywhere agreement (regardless of the number of rounds), even under constant churn rate.
To the best of our knowledge, our algorithms are the first known fully-distributed agreement algorithms that work under highly dynamic settings. Our algorithms are localized (they do not require any global topological knowledge), simple, and easy to implement. These algorithms can serve as building blocks for implementing other non-trivial distributed computing tasks in P2P networks.
1.2 Technical Contributions
The main technical challenge that we have to overcome is designing and analyzing distributed algorithms in networks where both nodes and edges can change by a large amount. Indeed, when the churn rate is linear, i.e., $\epsilon n$ nodes per round, the entire network can be renewed within a constant number ($1/\epsilon$) of rounds!
We derive techniques for information spreading (cf. Section 3) that enable non-trivial distributed computation in such networks. The first technique that we use is flooding. We show that in an expander-based P2P network, even under a linear churn rate, it is possible to spread information by flooding if sufficiently many nodes (a constant fraction of the order of the churn) initiate the information spreading (cf. Lemma 3.1). In other words, even an adaptive adversary cannot "suppress" more than a small fraction of the values. The precise statements and proofs are in Section 3.
To analyze these flooding techniques, we introduce the dynamic distance, which describes the effective distance between two nodes with respect to causal influence. We define the notions of influence sets and dynamic distance (or flooding time) in dynamic networks with node churn. (Similar notions have been defined for dynamic graphs with a fixed set of nodes, e.g., [17, 48].) In (connected) networks where the set of nodes is fixed, the effective diameter is always finite. In the highly dynamic setting considered here, however, the effective distance between two nodes might be infinite; thus we need more refined definitions of the influence set and the dynamic distance.
The second technique that we use is "support estimation" (cf. Section 3.4). Support estimation is a randomized technique that allows us to estimate the aggregate count (or sum) of the values of all or a subset of nodes in the network. Support estimation is done in conjunction with flooding and uses properties of the exponential distribution (similar to [26, 56]). Support estimation allows us to estimate the aggregate value quite precisely with high probability even under linear churn. But this works only for an oblivious adversary; to get similar results in the adaptive case, we need to increase the number of bits that can be processed and sent by a node in every round.
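As a self-contained illustration (a sketch of the exponential-distribution idea, not the paper's exact protocol), support estimation can be simulated centrally as follows: every node in the target set draws an Exp(1) sample, flooding would deliver the network-wide minimum to all nodes, and since the minimum of $k$ independent Exp(1) variables is Exp($k$), inverting the average of several independent minima estimates $k$. Function and parameter names below are illustrative.

```python
import random

def estimate_support(active, trials, rng):
    """Estimate k = |active| from minima of exponential samples.

    The minimum of k i.i.d. Exp(1) variables is distributed as Exp(k), so
    the average of `trials` independent minima concentrates around 1/k;
    inverting gives the maximum-likelihood estimate trials / sum(minima).
    """
    minima = []
    for _ in range(trials):
        # In the distributed protocol, each active node floods its sample
        # and every node keeps the smallest value seen; we shortcut that.
        minima.append(min(rng.expovariate(1.0) for _ in active))
    return trials / sum(minima)
```

With $t$ trials the relative error behaves like $1/\sqrt{t}$, so polylogarithmically many parallel repetitions already yield a constant-factor estimate with high probability.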
Apart from support estimation, we also use our flooding techniques in the agreement algorithm for the oblivious case (cf. Algorithm 2) to sway the decision one way or the other. For the adaptive case (cf. Algorithm 3), we use the variance property of a certain probability distribution to achieve the same effect with constant probability.
1.3 Other Related Work
1.3.1 Distributed Agreement
The distributed agreement (or consensus) problem is important in a wide range of applications, such as database management, fault-tolerant analysis of aggregate data, and coordinated control of multiple agents or peers. There is a long line of research on various versions of the problem with many important results (see, e.g., [7, 53] and the references therein). The relaxation of achieving agreement "almost everywhere" was introduced by [32] in the context of fault tolerance in networks of bounded degree, where all but a small number of nodes achieve agreement despite the faults. This result was improved by [66], which showed how to guarantee almost everywhere agreement in the presence of a linear fraction of faulty nodes. Both [32] and [66] crucially use expander graphs to show their results. We also refer to the related results of Berman and Garay on the butterfly network.
1.3.2 Byzantine Agreement
We note that Byzantine adversaries are quite different from the adversaries considered in this paper. A Byzantine adversary can have nodes behaving arbitrarily, but no new nodes are added (i.e., no churn), whereas in our case an (external) adversary controls the churn and topology of the network but not the behavior of the nodes. Despite this difference, it is worthwhile to mention that there has been significant work in designing peer-to-peer networks that are provably robust to a large number of Byzantine faults [35, 42, 57, 62]. These focus only on robustly enabling storage and retrieval of data items. The problem of achieving almost-everywhere agreement among nodes in P2P networks (modeled as an expander graph) is considered by King et al. in the context of the leader election problem; essentially, their algorithm is a sparse (expander) network implementation of a full-information protocol. More specifically, they assume that the adversary corrupts a constant fraction of the processes, which are under its control throughout the run of the algorithm. Their protocol guarantees that with constant probability an uncorrupted leader will be elected and that a large fraction of the uncorrupted processes know this leader. Again, we note that this failure assumption is quite different from the one we use: even though we do not assume corrupted nodes, the adversary is free to subject different nodes to churn in every round. Also note that this algorithm does not work for dynamic networks.
In related work, we have developed an almost-everywhere agreement algorithm that tolerates a bounded amount of churn per round in a dynamic network model.
1.3.3 Dynamic Networks
Dynamic networks have been studied extensively over the past three decades. Some of the early studies focused on dynamics that arise out of faults, i.e., when edges or nodes fail. A number of fault models, varying according to extent and nature (e.g., probabilistic vs. worst-case), and the resulting dynamic networks have been analyzed (e.g., see [7, 53]). There have been several studies on models that constrain the rate at which changes occur, or that assume the network eventually stabilizes (e.g., see [1, 31, 37]). Some of the early work on general dynamic networks includes [2, 11], which introduces general building blocks for communication protocols on dynamic networks. Another notable work is the local balancing approach for solving routing and multicommodity flow problems on dynamic networks. Most of these papers develop algorithms that work under the assumption that the network will eventually stabilize and stop changing.
Modeling general dynamic networks has gained renewed attention with the recent advent of heterogeneous networks composed of ad hoc and mobile devices. To address highly unpredictable network dynamics, stronger adversarial models have been studied by [9, 27, 50, 58] and others; see the recent survey and the references therein. The works of [9, 27, 50] study a model in which the communication graph can change completely from one round to another, with the only constraint being that the network is connected at each round (some of these works also consider a stronger model where the constraint is that the network should be an expander or should have some specific expansion in each round). The model has also been applied to agreement problems in dynamic networks; various versions of coordinated consensus (where all nodes must agree) have been considered in this setting. Recent work also studies the flooding time of Markovian evolving dynamic graphs, a special class of evolving graphs.
We note that these models allow only edge changes from round to round, while the set of nodes remains fixed. In this work, we introduce a dynamic network model where both nodes and edges can change by a large amount (up to a linear fraction of the network size). Therefore, the framework we introduce in Section 2 is more general, as it is additionally applicable to dynamic settings with node churn. The same is true for the notions of dynamic distance and influence set that we introduce in Section 3.1, since in our model the dynamic distance is not necessarily finite. In fact, coping with churn has been identified as one of the important open problems in the context of dynamic networks. Our paper takes a step in this direction.
An important aspect of our algorithms is that they will work and terminate correctly even when the network keeps continually changing. We note that there has been considerable prior work on dynamic P2P networks, but these works do not assume that the network keeps continually changing over time.
Due to the mobility of nodes, mobile ad hoc networks can also be considered dynamic networks. One line of work focuses on the minimal requirements that are necessary to correctly perform flooding and routing in highly dynamic networks where edges can change but the set of nodes remains the same. In the context of agreement problems, electing a leader among mobile nodes that may join or leave the network at any time is the focus of Chung et al. To make leader election solvable in this model, Chung et al. introduce a notion of connectedness over time, which ensures information propagation among all nodes that remain long enough in the network. Note that, in contrast to our model, this assumption prohibits the adversary from permanently isolating parts of the network. Other recent work presents information spreading algorithms for dynamic networks based on network coding.
In most work on fault-tolerant agreement problems, the adversary commits a priori to a fixed set of faulty nodes. In contrast, one can also consider an adversary that can corrupt the state of some (possibly changing) set of nodes in every round. In that setting, the median rule provides an elegant way to ensure that most nodes stabilize on a common output value within a logarithmic number of rounds, assuming a complete communication graph. The median rule, however, only guarantees that this agreement lasts for some polynomial number of rounds, whereas we are able to retain agreement ad infinitum.
Expander graphs and spectral properties have been applied extensively to improve network design and fault tolerance in distributed computing (cf. [16, 32, 66]). Law and Siu provide a distributed algorithm for maintaining an expander in the presence of churn with high probability by using Hamiltonian cycles. It has also been shown how to maintain the expansion property of a network in the self-healing model, where the adversary can delete/insert a node in every step. In the same model, there is a protocol that maintains constant node degrees and constant expansion (both with high probability) against an adaptive adversary, while requiring only logarithmic (in the network size) messages, time, and topology changes per deletion/insertion. It has further been shown that a SKIP graph contains a constant-degree expander as a subgraph with high probability; moreover, it requires only constant overhead for a node to identify its incident edges that are part of this expander. Later on, a self-stabilizing algorithm was presented that converges from any weakly connected graph to a SKIP graph in time polylogarithmic in the network size, which yields a protocol that constructs an expander with high probability. Another line of work introduces the hyperring, a search data structure supporting insertions and deletions that can handle concurrent requests with low congestion and dilation while guaranteeing expansion and bounded node degree. The k-Flipper algorithm transforms any undirected graph into an expander (with high probability) by iteratively performing flips on the end-vertices of paths of length k; based on this protocol, one can design a protocol that supports deletions and insertions of nodes. Note, however, that this expansion is only guaranteed with high probability, and only assuming a sufficiently large node degree.
Information spreading in distributed networks has also been studied in the push/pull model, where a node can communicate with a randomly chosen neighbor in every round; there, the number of rounds required is governed by the conductance of the graph.
Aspnes et al. consider information spreading via expander graphs against an adversary, which is related to the flooding techniques we derive in Section 3. More specifically, there are two opposing parties, "the alert" and "the worm" (controlled by the adversary), that both try to gain control of the network. In every round, each alerted node can alert a constant number of its neighbors, whereas each of the worm nodes can infect a constant number of non-alerted nodes in the network. Aspnes et al. show that there is a simple strategy to prevent all but a small fraction of nodes from becoming infected and that, in case the network has poor expansion, the worm will infect almost all nodes.
It has also been shown that, given a network that is initially an expander, and assuming some linear fraction of faults, the remaining network will still contain a large component with good expansion. These results are not directly applicable to dynamic networks with a large amount of churn like the ones we are considering, as the topology might be changing, and linear churn per round essentially corresponds to total churn after a constant number of rounds, which is the minimum amount of time necessary to solve any non-trivial task in our model.
In the context of maintaining properties in P2P networks, Kuhn et al. consider a setting in which a bounded number of nodes can crash or join per constant number of time steps. Despite this amount of churn, they show how to maintain a low peer degree and bounded network diameter in P2P systems by using the hypercube and pancake topologies. Scheideler and Schmid show how to maintain a distributed heap that allows join and leave operations and, in addition, is resistant to Sybil attacks. There is also a robust distributed implementation of a distributed hash table (DHT) in a P2P network, which can withstand two important kinds of attacks: adaptive join-leave attacks and adaptive insert/lookup attacks by a bounded number of adversarial peers. Note, however, that collisions are likely to occur once the number of attacks becomes large.
2 Model and Problem Statement
We are interested in establishing stable agreement in a dynamic peer-to-peer network in which the nodes and the edges change over time. The computation is structured into synchronous rounds, i.e., we assume that nodes run at the same processing speed and any message that is sent by some node to its (current) neighbors in some round $r$ will be received by the end of round $r$. To ensure scalability, we restrict the number of bits sent per round by each node to be polylogarithmic in the size of the input value domain (cf. Section 2.1). For dealing with the much more powerful adaptive adversary, we relax this requirement in Sections 3.5 and 5. We model dynamism in the network as a family of undirected graphs $(G_r)_{r \ge 1}$ with $G_r = (V_r, E_r)$. At the beginning of each round $r$, we start with the network topology $G_{r-1}$. Then, the adversary gets to change the network from $G_{r-1}$ to $G_r$ (in accordance with the rules outlined below). As is typical, an edge $\{u, v\} \in E_r$ indicates that $u$ and $v$ can communicate in round $r$ by passing messages. Each node $u$ has a unique identifier and is churned in at some round and churned out at some later round. More precisely, for each node $u$, there is a maximal range of rounds $[r_u, r'_u]$ such that $u \in V_r$ for every $r \in [r_u, r'_u]$ and $u \notin V_r$ for every $r \notin [r_u, r'_u]$. Any information about the network at large is only learned through the messages that $u$ receives. It has no a priori knowledge about who its neighbors will be in the future. Neither does $u$ know when (or whether) it will be churned out. Note that we do not assume that nodes have access to perfect clocks, but we show (cf. Section 3.3) how the nodes can synchronize their clocks.
We make the following assumptions about the kind of changes that our dynamic network can encounter:
- Stable Network Size:
For all $r \ge 1$, $|V_r| = n$, where $n$ is a suitably large positive integer. This assumption simplifies our analysis. Our algorithms will work correctly as long as the number of nodes is reasonably stable (say, between $n$ and $cn$ for some suitably small constant $c > 1$). Also, we assume that $n$ (or a constant-factor estimate of $n$) is common knowledge among the nodes in the network. (This assumption is important; estimating $n$ accurately in our model is an interesting problem in itself.)
For each $r \ge 1$,

$$|V_{r-1} \setminus V_r| = |V_r \setminus V_{r-1}| \le C \qquad (1)$$

where $C = \epsilon C(n)$ is the churn limit, which is some fixed fraction $\epsilon$ of the order of the churn $C(n)$; the equality in the above equation ensures that the network size remains stable. Our work is aimed at high levels of churn, up to a churn limit that is linear in $n$, i.e., $C(n) = n$ and $C = \epsilon n$.
- Bounded Degree Expanders:
The sequence of graphs $(G_r)_{r \ge 1}$ is an expander family with a vertex expansion of at least $\alpha$, which is a fixed positive constant. (Note that the value of $\alpha$ determines $\epsilon$, i.e., the fraction of churn that we can tolerate. In particular, to tolerate a linear amount of churn, we require constant expansion. In principle, our results can potentially be extended to graphs with weaker expansion guarantees as well; however, the amount of churn that can be tolerated will be reduced.) In other words, the adversary must ensure that for every $r$ and every $S \subseteq V_r$ such that $|S| \le n/2$, the number of nodes in $V_r \setminus S$ with a neighbor in $S$ is at least $\alpha |S|$. Note that we do not explicitly consider the costs (communication and computation) of maintaining an expander under churn. Instead, we assume that the duration of each time step in our model is normalized to be large enough to encompass an expander maintenance protocol such as [52, 60].
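To make the expansion condition concrete, the following brute-force check (exponential in the graph size, purely illustrative, and with the symbol $\alpha$ passed as a plain parameter) tests whether a static snapshot satisfies it:

```python
from itertools import combinations

def has_vertex_expansion(adj, alpha):
    """Check that every set S with |S| <= n/2 has at least alpha*|S|
    neighbors outside S, i.e., the expansion the adversary must maintain.
    Enumerates all subsets, so it is only usable on tiny graphs."""
    nodes = list(adj)
    n = len(nodes)
    for k in range(1, n // 2 + 1):
        for S in combinations(nodes, k):
            s = set(S)
            boundary = {v for u in s for v in adj[u]} - s
            if len(boundary) < alpha * k:
                return False
    return True
```

A complete graph passes this check for $\alpha = 1$, whereas a path fails it, matching the intuition that sparse, poorly connected topologies cannot withstand heavy churn.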
A run of a distributed algorithm consists of an infinite number of rounds. We assume that the following events occur (in order) in every round $r$:

1. A set of at most $C$ nodes is churned in and another set of at most $C$ nodes is churned out. The edges of $G_r$ may be changed as well, but $G_r$ has to have a vertex expansion of at least $\alpha$. These changes are under the control of the adversary.

2. The nodes broadcast messages to their (current) neighbors.

3. Nodes receive the messages broadcast by their neighbors.

4. Nodes perform computation that can change their state and determine which messages to send in round $r+1$.
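The per-round event order above can be captured in a small simulation skeleton (a sketch; `adversary_step` and `node_step` are placeholder callbacks, not part of the paper's model):

```python
def run(rounds, graph, adversary_step, node_step):
    """Simulate the round structure: (1) adversarial churn/rewiring,
    (2)+(3) synchronous broadcast and receipt, (4) local computation
    that fixes the messages for the next round.

    graph: dict mapping each node to its set of current neighbors.
    adversary_step(graph) -> new graph (must respect size and expansion).
    node_step(u, inbox) -> the message u will broadcast next round.
    """
    outbox = {u: None for u in graph}
    for _ in range(rounds):
        graph = adversary_step(graph)                        # event 1
        inbox = {u: [outbox[v] for v in graph[u] if v in outbox]
                 for u in graph}                             # events 2, 3
        outbox = {u: node_step(u, inbox[u]) for u in graph}  # event 4
    return outbox
```

For instance, with a static triangle and a `node_step` that forwards the largest identifier seen so far, all nodes converge to the maximum id within a few rounds.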
Bounds on Parameters
Recall that the churn limit is $C = \epsilon C(n)$, where $\epsilon$ is a constant and $C(n)$ is the order of the churn. When $C(n) = n$, $\epsilon$ is the fraction of the nodes churned out/in per round, and therefore we require $\epsilon$ to be less than 1; moreover, the right-hand side of (1) must be bounded accordingly. However, when $C(n) = o(n)$, $\epsilon$ can exceed 1. In the remainder of this paper, we consider $\epsilon$ to be a small constant independent of $n$ that satisfies these bounds.
2.1 Stable Agreement
We now define the Almost Everywhere Stable Agreement problem (or just the Stable Agreement problem for brevity). Each node $u$ has an associated input value from some value domain of size $m$; nodes that join subsequently come with input value $\bot$. Let $I$ be the set of all input values associated with nodes in $V_1$ at the start of round 1. Every node is equipped with a special decision variable (initialized to $\bot$) that can be written at most once. We say that a node $u$ decides on $v$ when $u$ assigns $v$ to its decision variable. Note that this decision is irrevocable, i.e., every node can decide at most once in a run of an algorithm. As long as its decision variable is $\bot$, we say that $u$ is undecided. Stable Agreement requires that a large fraction of the nodes come to a stable agreement on one of the values in $I$. More precisely, an algorithm solves Stable Agreement in $T$ rounds if it exhibits the following characteristics in every run, for any fixed churn adhering to (1).
- Validity:
If, in some round $r$, a node $u$ decides on a value $v$, then $v \in I$.
- Almost Everywhere Agreement:
We say that the network has reached strong almost everywhere agreement by round $r$ if at least $n - \delta C$ nodes in $V_r$ (where $\delta$ is a suitably small constant and $C$ is the churn limit) have decided on the same value $v$, and every other node remains undecided, i.e., its decision value is $\bot$. In particular, no node ever decides on a value $w \ne v$ in the same run.
Let $r_0$ be the earliest round by which the nodes have reached almost everywhere agreement on value $v$. We say that an algorithm reaches stability by round $r_1 \ge r_0$ if, at every round $r \ge r_1$, at least $n - \delta C$ nodes in $V_r$ have decided on $v$.
We also consider a weaker variant of the above problem that we call Almost Everywhere Binary Consensus (or simply, Binary Consensus), where the input values are restricted to $\{0, 1\}$.
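For concreteness, the conditions above can be phrased as a simple predicate on a snapshot of the network (a sketch; `None` plays the role of the undecided value $\bot$, and the hypothetical parameter `slack` stands for the number of nodes allowed to remain undecided):

```python
def almost_everywhere_agreement(decisions, inputs, n, slack):
    """Check the Stable Agreement conditions on one snapshot.

    decisions: dict node -> decided value, or None if undecided (bot)
    inputs:    the set I of input values present at the start of round 1
    slack:     how many of the n nodes may remain undecided
    """
    decided = [v for v in decisions.values() if v is not None]
    if any(v not in inputs for v in decided):
        return False                      # validity violated
    if len(set(decided)) > 1:
        return False                      # two distinct decision values
    return len(decided) >= n - slack      # almost everywhere agreement
```

Stability then amounts to this predicate holding, with the same value, at every round from some point onward.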
We consider two types of adversaries for our randomized algorithms. An oblivious adversary must commit in advance to the entire sequence of graphs $(G_r)_{r \ge 1}$. In other words, an oblivious adversary must commit independently of the random choices made by the algorithm. We also consider the more powerful adaptive adversary, which can observe the entire state of the network in every round $r$ (including all the random choices made up to the previous round), and then chooses the nodes to be churned out/in and how to change the topology of $G_r$.
For the sake of readability, we treat quantities such as $\epsilon n$ as integers and omit the necessary ceiling or floor operations when their application is clear from the context.
3 Techniques for Information Spreading
In this section, we derive and analyze techniques to spread information in the network despite churn. First, we show that the adversary is unable to prevent a sufficiently large set of nodes (of size comparable to the order of the churn) from propagating their information to almost all other nodes (cf. Lemma 3.1). Building on this result, we analyze the capability of individual nodes to spread their information. We show in Lemma 3.2 and Corollary 3.3 that only a bounded number of nodes can be hindered by the adversary. Finally, we show in Lemmas 3.5 and 3.6 that there is a large set $S$ of nodes such that all nodes in $S$ are able to propagate their information to a large common set of nodes.
In Sections 3.4 and 3.5, we describe how to use the previously derived information-spreading techniques to estimate the “support” (i.e., the number) of nodes that belong to a specific category (either red or blue). These protocols will form a fundamental building block for our Stable Agreement algorithms.
Due to the high amount of churn and the dynamically changing network, we use message flooding to disseminate and gather information. We now precisely define flooding. Any node can initiate a message for flooding. Messages that need to be flooded have an indicator bit bFlood set to 1. Each of these messages also contains a terminating condition. The initiating node sends copies of the message to itself and its neighbors. When a node receives a message with bFlood set to 1, it continues to send copies of that message to itself and its neighbors in subsequent rounds until the terminating condition is satisfied.
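As a minimal sketch, the flooding rule above can be written as follows. The function name, the message representation, and the fixed round budget standing in for the terminating condition are all illustrative; churn and topology changes are omitted, so the graph here is static.

```python
def flood(adjacency, initiator, num_rounds):
    """Synchronous flooding sketch: every node holding a copy of the
    message re-sends it to itself and its neighbors each round until the
    terminating condition (here: a fixed round budget) is met."""
    holders = {initiator}            # nodes currently holding the message
    for _ in range(num_rounds):
        new_holders = set(holders)   # each holder keeps its own copy
        for u in holders:
            new_holders.update(adjacency.get(u, ()))  # copies to neighbors
        holders = new_holders
    return holders

# Toy static example: a path 0-1-2-3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```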
3.1 Dynamic Distance and Influence Set
Informally, the dynamic distance from node to node is the number of rounds required for a message at to reach . We now formally define the notion of dynamic distance of a node from starting at round , denoted by . When the subscript is omitted, we assume that .
Suppose node joins the network at round , and, from round onward, initiates a message for flooding whose terminating condition is: . If is churned out before , then is undefined. Suppose the first of those flooded messages reaches in round . Then, . Note that this definition allows to be infinite under two scenarios. Firstly, node may be churned out before any copy of reaches . Secondly, at each round, can be shielded by churn nodes that absorb the flooded messages and are then removed from the network before they can propagate these messages any further. The influence set of a node after rounds starting at round is given by:
Note that we require . Intuitively, we want the influence set of (in this dynamic setting) to capture the nodes currently in the network that were influenced by . Note, however, that the influence set of a node is meaningful even after is churned out. Analogously, we define
for any set of nodes .
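The definition of dynamic distance can be illustrated on a hypothetical round-by-round trace of adjacency lists. The function below is a sketch of ours, not the paper's formalism: churned-out nodes simply stop appearing in later adjacency dicts, and `None` stands in for an infinite dynamic distance.

```python
def dynamic_distance(graphs, s, t):
    """Rounds needed for a message flooded from s (starting in round 1)
    to reach t, given graphs[r-1] = adjacency dict of round r.
    Returns None if t is never reached within the trace."""
    if t == s:
        return 0
    reached = {s}
    for rounds, adj in enumerate(graphs, start=1):
        nxt = set(reached)              # holders keep their own copies
        for u in reached:
            nxt.update(adj.get(u, ()))  # flood to this round's neighbors
        reached = nxt
        if t in reached:
            return rounds
    return None                         # never reached: distance infinite

# Hypothetical trace: a fresh edge appears each round, relaying the message.
trace = [{0: [1]}, {1: [2]}, {2: [3]}]
```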
If we consider only a single node , an (adaptive) adversary can easily prevent the influence set of this node from ever reaching any significant size by simply shielding with churn nodes that are replaced in every round. (An oblivious adversary can achieve the same effect with constant probability for linear churn.)
3.2 Properties of Influence Sets
We now focus our efforts on characterizing influence sets. This will help us understand how we can use flooding to spread information in the network. For most of this section, we assume that the network is controlled by an adaptive adversary (cf. Section 2.1). The following lemma shows that the number of nodes that suffices to influence almost all the nodes in the network is given by the churn-expansion ratio (cf. Equation (1)):
Suppose that the adversary is adaptive. Consider any set (for any ) such that . Then, after
number of rounds, it holds that
When considering linear churn, i.e., , the bound becomes a constant independent of . On the other hand, when considering a churn order of , we get .
Our proof assumes that for simplicity, as the arguments extend quite easily to arbitrary values of . We proceed in two parts: first, we show that the nodes in influence at least nodes in some rounds. More precisely, we show that . We use vertex expansion in a straightforward manner to establish this part. Then, in the second part, we show that the nodes in go on to influence more than nodes. We cannot use vertex expansion in a straightforward manner in the second part because the cardinality of the set that is expanding in influence is larger than . Rather, we use a slightly more subtle argument in which we apply vertex expansion going backward in time. The second part requires another rounds. Therefore, the two parts together complete the proof when we set .
To begin the first part, consider at the start of round 1 with . In round , up to nodes in can be churned out. Subsequently, the remaining nodes in influence some nodes outside as is an expander with vertex expansion at least . More precisely, we can say that
At the start of round , the graph changes dynamically to . In particular, up to nodes might be churned out and they may all be in in the worst case. However, the influenced set will again expand. Therefore, cannot be less than . Of course, there will be more churn at the start of round 3 followed by expansion leading to:
This cycle of churn followed by expansion continues and we get the following bound at the end of some round :
rounds, we get
Now we move on to the second part of the proof. Let . If , we are done. Therefore, for the sake of a contradiction, assume that . Let , i.e., is the set of nodes in that were not influenced by at (or before) round . Moreover, because we have assumed that . We will start at round and work our way backward. For , let , be the set of all vertices in that, starting from round , influenced some vertex in at or before round . More precisely,
Suppose that . Then
since by (5). Consider a node . Note that was influenced by and went on to influence some node in before (or at) round . However, by definition, no node in can be influenced by any node in at or before round . We have thus reached a contradiction.
We are left with showing that . We start with and work our way backwards. We know that . We want to compute the cardinality of . We first focus on an intermediate set , which we define as
Since is an expander, . Furthermore, it is also clear that each node in could influence some node in . Notice that is the set of nodes in that were churned in only at the start of round . Therefore,
Continuing to work our way backwards in time, we get
Or more generally,
We now want the value of for which
In other words, we want a value of such that
which is obtained when . Therefore, it is easy to see that if we set , we get , thereby completing the proof. ∎
At first glance, it might appear counterintuitive that the order of the bound decreases with increasing churn. When the adversary has the benefit of churn that is linear in , our bound on is a constant, but when the adversary is limited to a churn order of , we get . This, however, turns out to be fairly natural once we note that the size of the set of nodes that we start out with is proportional to the churn limit.
We say that a node is suppressed for rounds, or shielded by churn, if ; otherwise we say it is unsuppressed. The following lemma shows that, given a set with cardinality at least , some node in that set will be unsuppressed.
Consider the adaptive adversary. Let be any subset of , , such that . Let be the bound derived in Lemma 3.1. There is at least one such that for some , is unsuppressed, i.e.,
In particular, when the order of the churn is , becomes a constant, and we have .
Before we proceed with our key arguments of the proof, we state a property of bipartite graphs that we will use subsequently.
Let be a bipartite graph in which and every vertex has at least one neighbor in . There is a subset of cardinality at most such that
(of Property 1) Consider each node in to be a unique color. Color each node in using the color of a neighbor in chosen arbitrarily. Now partition into maximal subsets of nodes with like colors. Consider the parts of the partition sorted in decreasing order of their cardinalities. We now greedily choose the first colors in the sorted order of parts of . We call the chosen colors . Observe that the colors in cover at least as many nodes in as those not in . Suppose the colors in covered fewer than nodes in . Then the remaining colors would cover , but that is a contradiction. Therefore, the colors in cover at least nodes in . The nodes in that have the colors in are the nodes that comprise , thereby completing our proof. ∎
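The greedy step in the proof above can be illustrated directly. The helper below takes a hypothetical color assignment (each node of one side labeled with the color of an arbitrarily chosen neighbor on the other side) and keeps the largest color classes, as in the proof; all names and the example data are ours.

```python
from collections import Counter

def half_cover(assignment, num_colors):
    """Greedily pick at most ceil(num_colors / 2) colors with the largest
    classes; by the counting argument of Property 1, the chosen colors
    cover at least half of the assigned nodes."""
    counts = Counter(assignment)                  # class size per color
    ranked = [c for c, _ in counts.most_common()] # colors, largest first
    chosen = set(ranked[:(num_colors + 1) // 2])
    covered = sum(1 for c in assignment if c in chosen)
    return chosen, covered
```

For instance, with 5 colors over 12 nodes, the 3 largest classes always cover at least 6 nodes.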
(of Lemma 3.2) Again, our proof assumes because it generalizes to arbitrary values of quite easily. From Lemma 3.1, we know that the influence of all nodes in taken together will reach nodes in rounds. This does not suffice because we are interested in showing that there is at least one node in that (individually) influences nodes in for some .
From Lemma 3.1, we know that (collectively) will influence at least nodes in T rounds, i.e.,
From Property 1, we know that there is a set of cardinality at most such that
Recalling that , we know that . We can again use Lemma 3.1 to say that influences more than nodes in additional rounds and, by transitivity, influences more than nodes after rounds. We therefore have . Again, we can choose a set (using Property 1) that consists of nodes in such that . Subsequently applying Lemma 3.1 extends the influence set of to more than after rounds.
In every iteration of the above argument, the size of the set decreases by a constant fraction until we are left with a single node such that . ∎
Can (or more nodes) be suppressed for any significant number of (say, ) rounds? No: this would immediately contradict Lemma 3.2, because any such suppressed set of nodes must contain an unsuppressed node. This leads us to the following corollary.
The number of nodes that can be suppressed for rounds is less than , even if the network is controlled by an adaptive adversary.
Consider an oblivious adversary that must commit to the entire sequence of graphs in advance. If we choose a node uniformly at random from , with probability at least , then will be unsuppressed, i.e.,
Let be the set of nodes suppressed for rounds. Under an oblivious adversary, the node chosen uniformly at random from will not be in with probability , and hence will not be suppressed with that same probability. ∎
The following two lemmas show that there exists a set of unsuppressed nodes, all of which can influence a large common set of nodes, given enough time.
Consider a dynamic network under linear churn that is controlled by an adaptive adversary. In some rounds, there is a set of unsuppressed nodes of cardinality more than such that
Let be any set of unsuppressed nodes, i.e., in some rounds, for some constant , the influence set of each has cardinality more than . Note, however, that we cannot guarantee that, for any two vertices and in ,
Assume for simplicity that is a power of 2. Consider any pair of vertices , both members of . Recalling that , we can say that
Therefore, considering that the intersected set of nodes has cardinality at least , we can apply Lemma 3.1 leading to . We can partition into a set of pairs such that for each pair, the intersection of influence sets has cardinality more than after rounds. Similarly, we can construct a set of quadruples by disjointly pairing the pairs in . Using a similar argument, we can say that for any ,
Progressing analogously, the set will equal and we can conclude that
Since , it holds that , thus completing the proof. ∎
Suppose that up to nodes can be subjected to churn in any round by an adaptive adversary. In some rounds, there is a set of unsuppressed nodes of cardinality at least such that
Since we assume that , the bound of Lemma 3.1 is in . Therefore, by instantiating Corollary 3.3, we know that each of the unsuppressed nodes in (which is of cardinality at least ) will influence more than nodes in time. We can use the same argument as in Lemma 3.5 to show that in rounds, all the unsuppressed nodes have a common influence set of size at least . That common influence set will grow to at least nodes within another rounds. Thus a total of rounds is sufficient to fulfill the requirements. ∎
3.3 Maintaining Information in the Network
In a dynamic network with churn limit , the entire set of nodes in the network can be churned out and new nodes churned in within rounds. How do the new nodes even know what algorithm is running? How do they know how far the algorithm has progressed? To address these basic questions, the network needs to maintain some global information that is not lost as the nodes in the network are churned out. There are two basic pieces of information that need to be maintained so that a new node can join in and participate in the execution of the distributed algorithm:
the algorithm that is currently executing, and
the number of rounds that have elapsed in the execution of the algorithm. In other words, a global clock has to be maintained.
We assume that the nodes in are all synchronized in their understanding of what algorithm to execute and the global clock. The nodes in the network continuously flood information on what algorithm is running so that when a new node arrives, unless it is shielded by churn, it receives this information and can start participating in the algorithm. To maintain the clock value, nodes send their current clock value to their immediate neighbors. When a new node receives the clock information from a neighbor, it sets its own clock accordingly. Since nodes are not malicious or faulty, Lemma 3.1 ensures that information is correctly maintained in more than nodes.
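A minimal sketch of the clock-adoption step for a joining node, under the assumption (as in the text) that nodes are non-faulty, so all informing neighbors hold the same clock value; the function name and the use of a plain list are ours.

```python
def join_network(neighbor_clocks):
    """A joining node adopts the global clock from its informing
    neighbors. Since nodes are non-faulty, all received clock values
    agree; an empty list models a new node shielded by churn that
    received no information this round."""
    if not neighbor_clocks:
        return None            # shielded: cannot participate yet
    return neighbor_clocks[0]  # any received value is the correct clock
```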
3.4 Support Estimation Under an Oblivious Adversary
Suppose we have a dynamic network with nodes colored red in . is also called the support of red nodes. We want the nodes in the network to estimate under an oblivious adversary. We assume that the adversary chooses and which nodes in to color red, but it does not know the random choices made by the algorithm. Furthermore, we assume that churn can be linear in , i.e., .
Our algorithm uses random numbers drawn from the exponential distribution, whose probability density function, we recall, is parameterized by a rate $\lambda$ and given by $f(x) = \lambda e^{-\lambda x}$ for all $x \ge 0$. Furthermore, we note that the expected value of a random number drawn from the exponential distribution with rate $\lambda$ is $1/\lambda$. We now present two properties of exponential random variables that are crucial in our context. Consider independent random variables $X_1, X_2, \ldots, X_k$, each following the exponential distribution of rate $\lambda$.
Property 2 (see  for example).
The minimum among all $X_i$'s, for $1 \le i \le k$, is an exponentially distributed random variable with parameter $k\lambda$.
The idea behind our algorithm exploits Property 2 in the following manner. If each of the red nodes generates an exponentially distributed random number with parameter 1, then the minimum among those random numbers will also be exponentially distributed, with parameter equal to the number of red nodes; the reciprocal of this minimum thus serves as an estimate of the support. To get a more accurate estimate, we exploit the following property, which provides sharp concentration when the process is repeated a sufficient number of times.
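This estimation idea can be checked numerically. The sketch below is ours (it is not Algorithm 1 itself): it repeats the minimum-of-exponentials experiment independently, averages the minima, and inverts the average; the parameter names are illustrative.

```python
import random

def estimate_support(k, trials, seed=0):
    """Estimate k (e.g., the number of red nodes): each of k nodes draws
    an Exp(1) value; the minimum is Exp(k), so its mean is 1/k.
    Averaging many independent minima and inverting estimates k."""
    rng = random.Random(seed)
    mins = [min(rng.expovariate(1.0) for _ in range(k))
            for _ in range(trials)]
    avg_min = sum(mins) / trials   # concentrates around 1/k
    return 1.0 / avg_min
```

With enough repetitions the estimate concentrates sharply around the true support.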
We now present our algorithm for estimating in pseudocode format (assuming ); see Algorithm 1.
Consider an oblivious adversary and let be an arbitrary fixed constant . Let . By executing Algorithm 1 to estimate both and , we can estimate to within for any with probability at least .
Without loss of generality, let . Of the red nodes, up to nodes (chosen obliviously) can be suppressed, leaving us with
unsuppressed red nodes (since ). In a slight abuse of notation, we use and to denote both the cardinality and the set of red nodes and unsuppressed red nodes, respectively. We define
note that and (cf. Lemma 3.5). Let be some node in . Let
For all , . Notice that computed by in line number 6 of Algorithm 1 is based on random numbers generated by all nodes in . Therefore, at round , node is estimating using the exponential random numbers that were drawn by nodes in . Since our adversary is oblivious, the choice of is independent of the choice of the random numbers generated by each . Therefore, is an exponentially distributed random number with rate (cf. Property 2). For any , let . When parallel iterations are performed, where , the required accuracy is obtained with probability (cf. Property 3). ∎
3.5 Support Estimation Under an Adaptive Adversary
The algorithm for support estimation under an oblivious adversary (cf. Section 3.4) does not work under an adaptive adversary. To estimate the support of red nodes in the network, each red node draws a random number from the exponential distribution and floods it in an attempt to spread the smallest random number. When the adversary is adaptive, the smallest random numbers can easily be targeted and suppressed. To mitigate this difficulty, we consider a different algorithm in which the number of bits communicated is larger. In particular, the number of bits communicated per round by each node executing this algorithm is at most polynomial in .
Let be the support of the red nodes. Every node floods its unique identifier along with a bit that indicates whether it is a red node or not. At most nodes’ identifiers can be suppressed by the adversary for rounds leaving at least unsuppressed identifiers (cf. Corollary 3.3). Each node counts the number of unique red identifiers and non-red identifiers that flood over it and estimates to be .
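A sketch of this counting estimator follows. The scaling n*r/(r+b), with r red and b non-red unique identifiers received, is one natural reading of the estimate described above and not necessarily the paper's exact formula; the data layout is ours.

```python
def estimate_red_support(received_ids, n):
    """Adaptive-adversary estimator sketch: given the set of unique
    (identifier, is_red) pairs that flooded over a node, estimate the
    support of red nodes by scaling the observed red fraction to the
    stable network size n."""
    r = sum(1 for _, is_red in received_ids if is_red)
    b = len(received_ids) - r
    return n * r / (r + b)

# Hypothetical run: 90 unsuppressed identifiers reach the node, 30 red.
ids = {(i, i < 30) for i in range(90)}
```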
This support estimation technique generalizes quite easily to arbitrary churn order. Therefore, we state the following theorem more generally.
Consider the algorithm mentioned above in which nodes flood their unique identifiers indicating whether they are red nodes or not and assume that the network is controlled by an adaptive adversary. Let be the order of the churn; we assume for simplicity that is either or . Then the following holds:
At least nodes estimate between and . Furthermore, these nodes are aware that their estimate is between and .
The remaining nodes are aware that their estimate of might fall outside .
When , it requires only rounds, but when , it requires rounds.
Let be any one of the nodes that receive at least unsuppressed identifiers (cf. Lemma 3.5 and Lemma 3.6). Let and be the number of unique identifiers from red nodes and non-red nodes, respectively, that flood over . Let . This means that estimates to be . Note that and since , is estimated between and . Furthermore, since received identifiers, it can be sure that its estimate is between and .
If a node does not receive at least identifiers, then it is aware that its estimate of might not be within .
4 Stable Agreement Under an Oblivious Adversary
In this section we will first present Algorithm 2 for the simpler problem of reaching Binary Consensus, where the input values are restricted to (cf. Section 2.1). We will then use this algorithm as a subroutine for solving Stable Agreement in Section 4.2.
Throughout this section we assume suitable choices of and such that the upper bound
4.1 Binary Consensus
A node that executes Algorithm 2 proceeds in a sequence of checkpoints that are interleaved by rounds. Each node has a bit variable that stores its current output value. At each checkpoint , node initiates support estimation of the number of nodes currently having 1 as their output bit, using the algorithm described in Section 3.4. (At checkpoint , nodes estimate the support of both 1 and 0.) The outcome of this support estimation becomes available at checkpoint , by which has derived the estimate . If believes that the support of 1 is small (), it sets its own output to 0; if, on the other hand, is large (), sets its output to 1. This guarantees stability once agreement has been reached by a large number of nodes. When the support of 1 is roughly the same as the support of 0, we need a way to sway the decision to one side or the other. This is done by flooding the network, whereby the flooding message of node is weighted by some randomly chosen value, say . The adversary can only guess which node has the highest weight; therefore, with constant probability, the flooding message with this highest weight (i.e., smallest random number) will be used to set the output bit by almost all nodes in the network.
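The checkpoint decision rule just described can be sketched as follows. The thresholds `low_frac`/`high_frac` are illustrative placeholders for the paper's constants, and `coin_bit` stands for the bit carried by the highest-weight (smallest random number) flooded message.

```python
def checkpoint_update(est_one, n, coin_bit, low_frac=0.25, high_frac=0.75):
    """One checkpoint of the binary-consensus rule (sketch).
    est_one: this node's estimate of the support of output bit 1;
    n: stable network size; coin_bit: bit of the minimum-weight message."""
    if est_one <= low_frac * n:
        return 0          # support of 1 is clearly small: output 0
    if est_one >= high_frac * n:
        return 1          # support of 1 is clearly large: output 1
    return coin_bit       # balanced case: sway to a common random side
```

Note that once almost all nodes agree, the estimate lands outside the middle band, so the rule keeps the agreed value stable.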