Stabilizing Consensus with Many Opinions

L. Becchetti (Sapienza Università di Roma, becchett@dis.uniroma1.it), A. Clementi (Università di Roma Tor Vergata, clementi@mat.uniroma2.it), E. Natale (Sapienza Università di Roma, natale@di.uniroma1.it), F. Pasquale (Università di Roma Tor Vergata, pasquale@mat.uniroma2.it), L. Trevisan (U.C. Berkeley, luca@berkeley.edu)
Abstract

We consider the following distributed consensus problem: Each node in a complete communication network of $n$ nodes initially holds an opinion, which is chosen arbitrarily from a finite set $\Sigma$. The system must converge toward a consensus state in which all, or almost all, nodes hold the same opinion. Moreover, this opinion should be valid, i.e., it should be one among those initially present in the system. This condition should be met even in the presence of an adaptive, malicious adversary who can modify the opinions of a bounded number of nodes in every round.

We consider the 3-majority dynamics: At every round, every node pulls the opinion from three random neighbors and sets its new opinion to the majority one (ties are broken arbitrarily). Let $k$ be the number of valid opinions. We show that, if $k \leqslant n^{\alpha}$, where $\alpha$ is a suitable positive constant, the 3-majority dynamics converges in time polynomial in $k$ and $\log n$ with high probability even in the presence of an adversary who can affect up to $o(\sqrt{n})$ nodes at each round.

Previously, the convergence of the 3-majority protocol was known for $|\Sigma| = 2$ only, with an argument that is robust to adversarial errors. On the other hand, no anonymous, uniform-gossip protocol that is robust to adversarial errors was known for $|\Sigma| > 2$.

Keywords: Distributed Consensus, Byzantine Agreement, Gossip Model, Majority Rules, Markov Chains.

1 Introduction

We study the following probabilistic, synchronous process on a complete network of $n$ anonymous nodes: At the beginning, each node holds an “opinion”, which is an element of an arbitrary finite set $\Sigma$. We call an opinion valid if it is held by at least one node at the beginning. Then, in each round, the following happens: 1) every node pulls the opinion from three random nodes and sets its new opinion to the majority one (ties are broken arbitrarily), and 2) an adaptive dynamic adversary can arbitrarily change the opinions of some nodes. We consider $F$-dynamic adversaries that, at every round, can change the opinions of up to $F$ nodes, possibly introducing non-valid opinions.
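To make the round structure concrete, here is a minimal sketch (ours, not the authors' code) of one 3-majority round followed by an $F$-dynamic corruption step. All parameters (1000 nodes, four valid opinions, $F = 10$, the non-valid opinion label 99) are arbitrary illustrative choices, and "keep the first sample" is one of the arbitrary tie-breaking rules the dynamics allows.

```python
import random
from collections import Counter

def three_majority_round(opinions, rng):
    """One synchronous round: each node pulls 3 uniform samples (with
    replacement, possibly itself) and adopts the majority opinion; with
    three distinct opinions it keeps the first sample."""
    n = len(opinions)
    new = []
    for _ in range(n):
        a, b, c = (opinions[rng.randrange(n)] for _ in range(3))
        new.append(b if b == c else a)  # b == c covers every 2-of-3 majority; else first sample
    return new

def dynamic_adversary(opinions, F, bad_opinion, rng):
    """F-dynamic adversary: corrupts exactly F nodes, possibly with a non-valid opinion."""
    out = list(opinions)
    for i in rng.sample(range(len(out)), F):
        out[i] = bad_opinion
    return out

rng = random.Random(0)
config = [rng.randrange(4) for _ in range(1000)]  # k = 4 valid opinions: 0..3
after = dynamic_adversary(three_majority_round(config, rng), F=10, bad_opinion=99, rng=rng)
```

`Counter(after)` gives the resulting configuration; since the 3-majority step can only produce opinions already present, every occurrence of 99 is due to the adversary.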

Let the system start from any configuration having $k$ valid opinions, with $k \leqslant n^{\alpha}$ for a suitable positive constant $\alpha$, and consider any $F$-dynamic adversary with $F = o(\sqrt{n})$. We prove that the process converges, within a number of rounds polynomial in $k$ and $\log n$, to a configuration in which all but a negligible number of nodes hold the same valid opinion, with high probability. So, this bounded adversary has no real chance of forcing the system to converge to non-valid opinions.

This shows that the 3-majority dynamics provides an efficient solution to the stabilizing-consensus problem in the uniform-gossip model. Previously, this was known only for the binary case, i.e. $|\Sigma| = 2$, while for $|\Sigma| \geqslant 3$ it had been an important open question for several years [3, 12]. Furthermore, again for $|\Sigma| \geqslant 3$, fast convergence of the 3-majority dynamics was open even in the absence of an adversary whenever the initial bias toward some plurality opinion is not large.

In the remainder of this section, we describe in more detail the consensus problem and various network scenarios in which it is of interest, our result in this setting, and a comparison with previous related results.

1.1 Consensus (or Byzantine agreement)

The consensus problem in a distributed network is defined as follows: A collection of agents, each holding a piece of information (an element of a set $\Sigma$), interact with the goal of agreeing on one of the elements of $\Sigma$ initially held by at least one agent, possibly in the presence of an adversary that is trying to disrupt the protocol. The consensus problem in the presence of an adversary (known as Byzantine agreement) is a fundamental primitive in the design of distributed algorithms [22, 24]. The goal is to design a distributed, local protocol that brings the system into a configuration that meets the following conditions: (1) Agreement: All non-corrupted nodes support the same opinion $v$; (2) Validity: The opinion $v$ must be a valid one, i.e., an opinion which was initially declared by at least one (non-corrupted) node; (3) Termination: Every non-corrupted node can correctly decide to stop running the protocol at some round.

Recently, there has been considerable interest in the design of consensus algorithms in models that severely restrict both communication and computation [3, 6, 12], both for efficiency considerations and because such models capture aspects of the way consensus is reached in social networks, biological systems, and other domains of interest in network science [2, 4, 8, 9, 14, 15, 16].

In particular, we assume an anonymous network in which nodes possess no unique IDs, nor do they have any static binding of their local link ports (i.e., nodes cannot keep track of who sent what). From the point of view of computation, the most restrictive setting is to assume that each node only has $O(\log n)$ bits of memory available, i.e., just enough to store a constant number of opinions. We further assume that this bound extends to the link bandwidth available in each round. Finally, communication capabilities are severely constrained and non-deterministic: Every node can communicate with at most a (small) constant number of random neighbors in each round. These constraints are well-captured by the uniform-gossip communication model [10, 18, 19]: At every round, every node can exchange a (short) message (say, of $O(\log n)$ bits) with each of at most $c$ random neighbors, where $c$ is a (small) absolute constant (in fact, $c = 1$ in the standard uniform-gossip model; it is easy to verify that all our results still hold in this more restricted model at the cost of a constant slow-down in convergence time and local memory size). A more recent, sequential variant of the uniform-gossip model is the (random) population-protocols model [3, 1, 2] in which, in each round, a single interaction between a pair of randomly selected nodes occurs.


The classic notion of consensus is too strong and unrealistic in the aforementioned distributed settings, which instead rely on weaker forms of consensus, deeply investigated in [3, 4, 5, 12]. In this paper, we consider a variant of the stabilizing-consensus problem [4] considered in [3]: There, a solution is required to converge to a stable regime in which the above three properties are guaranteed in a relaxed, still useful form (these relaxed convergence properties are described in detail in Section 7 of [3]). More precisely:

Definition 1.1.

A stabilizing almost-consensus protocol must ensure the following properties:

- Almost agreement. Starting from any initial configuration, in a finite number of rounds, the system must reach a regime of configurations in which all but a negligible “bad” subset of the nodes (i.e., one of size $o(n)$) support the same opinion.

- Almost validity. The system is required to converge w.h.p. to an almost-agreement regime where all but a negligible bad set of nodes keep the same valid opinion.

- Non termination. In dynamic distributed systems, nodes represent simple and anonymous computing units which are not necessarily able to detect any global property, so the protocol is not required to terminate.

- Stability. The convergence toward such a weaker form of agreement is only guaranteed to hold with high probability (in short, w.h.p.; according to the standard definition, we say that a sequence of events $\mathcal{E}_n$ holds with high probability if $\mathbf{P}(\mathcal{E}_n) \geqslant 1 - n^{-\gamma}$ for some positive constant $\gamma$) and only over a long period (i.e., for any arbitrarily-large polynomial number of rounds).

The main result of this paper concerns the convergence properties of the 3-majority dynamics in the uniform-gossip model in the presence of the adaptive $F$-dynamic adversary (defined above) and of the adaptive $F$-static adversary. In the latter case, the adversary looks at the initial configuration, then changes the opinions of up to $F$ nodes and, after that, no further adversarial actions are allowed.

Theorem 1.2.

Let $k \leqslant n^{\alpha}$ for a suitable positive constant $\alpha$ and let $F = o(\sqrt{n})$. Starting from any initial configuration having $k$ valid opinions, the 3-majority dynamics reaches a (valid) stabilizing almost-consensus in the presence of any $F$-dynamic adversary within a number of rounds polynomial in $k$ and $\log n$, w.h.p.
Moreover, the same bound on the convergence time holds in the presence of any $F$-static adversary with a larger bound on $F$.

In [7], a lower bound on the convergence time of the 3-majority dynamics is derived (one that holds even when the system starts from biased configurations): Our bound is thus almost tight in that range of $k$.

Not assuming a large initial bias of the plurality opinion considerably complicates the analysis. Indeed, the major open challenge is the analysis from (almost) uniform configurations, where the system needs to break the initial symmetry in the absence of significant drifts towards any of the initial opinions. So far, this issue has never been analyzed even in the non-adversarial case. Moreover, the phase before symmetry breaking is the one in which the adversary has more chances to cause undesired behaviours: Long delays and/or convergence towards non-valid opinions. In Section 2, after providing some preliminaries, we shall discuss the above technical challenges.

1.2 Previous results

Consensus problems in distributed systems have been the focus of a large body of work in several research areas, such as distributed computing [17], communication networks [25], social networks and voting systems [21, 27], distributed databases [10, 11], biological systems and Chemical Reaction Networks [9]. For brevity’s sake, we here focus on results that are closest in spirit to our work.

In [3], the authors show that, w.h.p., agents that meet at random can reach valid stabilizing almost-consensus through pairwise interactions against an $o(\sqrt{n})$-bounded dynamic adversary. The adopted protocol is the well-studied third-state protocol [3, 23]. However, their analysis (and, thus, their result) only holds for the binary case and for the population-protocol model: At every round only one pair of nodes can interact. The authors left the existence of protocols for the multi-valued Byzantine case as a final open question [3]. In general, sequential processes are much easier to analyze than parallel ones (like those yielded by the uniform-gossip model): For instance, the resulting Markov chains are reversible [20] while those arising from parallel processes are not.

In the uniform-gossip model, the authors of [12] provide an analysis of the 3-median rule, in which every node updates its value to the median of its random sample. They show that this dynamics converges to an almost-agreement configuration (which is even a good approximation of the global median) within a polylogarithmic number of rounds, w.h.p. It turns out that, in the binary case, the median rule is equivalent to the 3-majority dynamics, so their result implies that 3-majority achieves binary stabilizing almost-consensus within a polylogarithmic number of rounds. However, in the non-binary case, the median rule requires $\Sigma$ to be a totally-ordered set and requires the ability to perform basic algebraic operations: This is a rather strong restriction in applications arising from social networks, voting systems, and bio-inspired systems. More importantly, we emphasize that, even assuming an ordered opinion set $\Sigma$, the 3-median rule does not guarantee the crucial property of validity against $F$-static (and, clearly, dynamic) adversaries even for very small bounds on $F$.

We strongly believe that the validity property of consensus plays a crucial role in several realistic scenarios, such as monitoring sensor networks, bio-inspired dynamic systems, and voting systems [9, 21, 27].

More recently, the 3-majority rule in the multi-opinion case (i.e., for $|\Sigma| \geqslant 3$) has been studied for a stronger goal than consensus, namely, stabilizing plurality consensus [7]. In this task, the goal is to reach an almost-stable consensus on the valid opinion initially supported by the plurality of the nodes. However, the initial configuration is assumed to have a large bias toward the plurality opinion. More precisely, let $k$ be the number of valid opinions and assume the initial difference between the largest and the second-largest opinion to be sufficiently large: By strongly exploiting this assumption, the authors of [7] proved that, w.h.p., the system converges to the plurality opinion within a number of rounds that grows with $k$ and $\log n$.

Another version of binary stabilizing almost-consensus is the one studied by Yildiz et al. in [27]: Here, corrupted nodes are stubborn agents of a social network who influence others but never change their opinions. They prove negative results under a generalized variant of the classic voter dynamics in the (Poisson-clock) population-protocol model.

2 The Process and its Analysis in a Nutshell

Preliminaries. We assume a distributed system consisting of $n$ nodes that communicate with each other over a complete graph via the synchronous uniform-gossip mechanism: In every round, each node can pull information from (at most) $c$ random neighbors, where $c$ is an absolute constant (in this work, $c = 3$). At the onset, every node chooses an arbitrary item, called its opinion, from an arbitrary finite set $\Sigma$. A simple dynamics for consensus is the 3-majority protocol [7]:

In each round, every node samples three nodes uniformly at random (including itself and with repetitions) and revises its opinion according to the majority of the opinions it sees. If it sees three different opinions, it picks the first one.

Clearly, in the case of three different opinions, choosing the second or the third one would not make any difference, nor would choosing one of the observed opinions uniformly at random.
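The remark above can be verified by exact enumeration. The sketch below (ours, not from the paper) computes, for an arbitrary example frequency vector, the probability that a node adopts each opinion under the "first", "second", "third", and uniform-random tie-breaking rules.

```python
from itertools import product

def adoption_probs(x, tie_break):
    """Exact per-opinion adoption probabilities for one node under 3-majority,
    given opinion frequencies x, for a chosen tie-breaking rule among three
    pairwise-distinct samples: 'first', 'second', 'third', or 'uniform'."""
    k = len(x)
    p = [0.0] * k
    pos = {'first': 0, 'second': 1, 'third': 2}
    for triple in product(range(k), repeat=3):
        a, b, c = triple
        w = x[a] * x[b] * x[c]          # probability of this ordered sample
        if a == b or a == c:            # at least two samples equal to a
            p[a] += w
        elif b == c:                    # majority is b
            p[b] += w
        elif tie_break == 'uniform':    # three distinct opinions: split the tie
            for i in triple:
                p[i] += w / 3
        else:
            p[triple[pos[tie_break]]] += w
    return p

freqs = [0.4, 0.3, 0.2, 0.1]
ps = {rule: adoption_probs(freqs, rule) for rule in ('first', 'second', 'third', 'uniform')}
```

All four rules yield identical marginal adoption probabilities, which is the precise sense in which the choice of tie-break makes no difference.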

Since the communication graph is complete and nodes are anonymous, the overall system state at any round can be described by a configuration $\mathbf{c} = (c_1, \ldots, c_{|\Sigma|})$, where the support $c_i$ of opinion $i$ is the number of nodes holding opinion $i$ in that state. Given a configuration $\mathbf{c}$, we say that an opinion $i$ is active in $\mathbf{c}$ if $c_i > 0$ and, for any set $S$ of active opinions, we define $c_S = \sum_{i \in S} c_i$. For any variable of the process, we write $x(t)$ if we are considering its value at round $t$ and $X(t)$ to denote the corresponding random variable. Furthermore, following [20], given a configuration $\mathbf{c}$ and a random variable $X$ defined over the process, we write $\mathbf{P}_{\mathbf{c}}(X(t) = x)$ for $\mathbf{P}(X(t) = x \mid C(0) = \mathbf{c})$, i.e., to denote the probability distribution of the variable when the system evolves for $t$ consecutive rounds starting from configuration $\mathbf{c}$. Analogously, we write $\mathbf{E}_{\mathbf{c}}[X(t)]$ for the associated conditional expectation.


The next lemma provides the expected number of nodes supporting a given opinion at round $t+1$ (and a general upper bound on it), given the configuration at round $t$. The simple proof of the first equality is in [7]. It is also included in Appendix A to make the paper self-contained.

Lemma 2.1 (See [7]).

Let $\mathbf{c}$ be the configuration at round $t$ and let $\Gamma$ be the subset of active opinions in $\mathbf{c}$. Then, for any opinion $i \in \Gamma$,

$$\mathbf{E}\left[C_i(t+1) \mid C(t) = \mathbf{c}\right] \;=\; c_i \left(1 + \frac{c_i}{n} - \sum_{j \in \Gamma} \left(\frac{c_j}{n}\right)^2\right) \;\leqslant\; c_i \left(1 + \frac{c_i}{n} - \frac{1}{|\Gamma|}\right). \qquad (1)$$

The above upper bound easily implies that opinions whose supports fall below the average decrease in expectation. This expected drift is a key ingredient of our analysis and, as we will see in the next paragraph, it provides useful intuitions about the process. On the other hand, when $\mathbf{c}$ is almost uniform, the above drift turns out to be negligible and symmetry breaking is due to the inherent variance of the random process.
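The expectation in Lemma 2.1 can be probed numerically. Under the "keep the first sample" tie-break, one can derive the closed form $n \cdot x_i (1 + x_i - \sum_j x_j^2)$ for the expected support, with $x_i = c_i/n$; this form is our own derivation (not quoted from the paper), and the sketch below checks it against brute-force enumeration for an arbitrary frequency vector.

```python
from itertools import product

def exact_adoption_prob(x, i):
    """Brute-force P(a node adopts opinion i) under 3-majority with the
    'pick the first sample' tie-break, given opinion frequencies x."""
    k, p = len(x), 0.0
    for a, b, c in product(range(k), repeat=3):
        winner = b if b == c else a   # covers all 2-of-3 majorities and ties
        if winner == i:
            p += x[a] * x[b] * x[c]
    return p

def closed_form(x, i):
    # our derived form: x_i * (1 + x_i - sum_j x_j^2); E[C_i(t+1)] = n * this
    s2 = sum(xj * xj for xj in x)
    return x[i] * (1 + x[i] - s2)

x = [0.5, 0.25, 0.15, 0.1]
errs = [abs(exact_adoption_prob(x, i) - closed_form(x, i)) for i in range(len(x))]
```

Note how the closed form makes the drift visible: if $x_i$ lies below the average $1/|\Gamma|$, then $x_i < \sum_j x_j^2$ fails only marginally, and the multiplying factor drops below one.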

Failed attempts. When the 3-majority dynamics starts from configurations that exhibit a large initial bias between the supports of the largest and the second-largest opinions, the approach adopted in [7] successfully exploits the fact that the initial plurality is preserved throughout the evolution of the random process, with an expected positive drift that is also preserved, w.h.p. An intuition of this fact can be obtained from simple manipulations of (1). However, the aforementioned drift is only preserved if the largest opinion never changes (w.h.p.), no matter which the second-largest opinion is: a condition that is not met by uniform configurations. A promising attempt to cope with uniform configurations is to consider the r.v. $D(t) = C_{i_1(t)}(t) - C_{i_2(t)}(t)$, where $i_1(t)$ and $i_2(t)$ are the r.v.s indicating the index of (one of) the largest opinions and of (one of) the second-largest ones, respectively, in round $t$. For any fixed pair of indices $i_1, i_2$ such that $c_{i_1} \geqslant c_{i_2}$, (1) implies that the difference between the supports of $i_1$ and $i_2$ in the next round is positive in expectation, so a suitable submartingale argument [20] seemed to work in order to show that the system (rather quickly) achieves a bias toward the plurality sufficiently large to allow fast convergence. This approach would work if the random indices $i_1(t)$ and $i_2(t)$ maintained their initial values across the entire duration of the process. Unfortunately, starting from uniform configurations, in the next round the expected difference between the new largest opinion and the new second-largest one may have no positive drift at all: Roughly speaking, the r.v. $C_{i_2(t+1)}(t+1)$ can be much larger than the r.v. $C_{i_2(t)}(t+1)$.

A promising dynamics for the stabilizing almost-consensus problem is the one introduced in [12], in which nodes revise their opinions (assumed to be totally ordered) by taking the median between the currently-held opinion and those held by two randomly sampled nodes. However, besides the fact that we do not assume opinions to be integers (or totally ordered), their analysis strongly relies on the fact that the median opinion (or any good approximation of it) exhibits a strong increasing drift, even when starting from almost-uniform configurations, whereas no opinion is “special” to a majority rule when the starting configuration is uniform. The adoption of an inherently biased function such as the median can have important consequences. To get an intuition, the reader may consider the following simple instance: $\Sigma = \{1, 2, 3\}$, with the nodes initially split evenly between opinions 1 and 3. At the end of the first round, a static adversary changes the values of $F$ nodes, equally distributed between the 1s and the 3s, to value 2. The (non-valid) value 2 is the global median and some counting arguments show that, while values 1 and 3 have no positive expected drift, the median value 2 has an exponential expected drift that holds w.h.p. whenever $F$ is large enough. This might fool the system into a configuration in which every node holds value 2, thus converging to a non-valid value.
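The intuition can be observed in simulation. The following sketch (our reconstruction of the example, with arbitrary parameters) starts with the nodes evenly split between opinions 1 and 3, lets a one-shot adversary move $F$ nodes to the non-valid median value 2, and runs the 3-median rule: value 2 quickly takes over.

```python
import random

def median_round(vals, rng):
    """3-median rule of [12]: each node takes the median of its own value
    and those of two uniformly sampled nodes."""
    n = len(vals)
    return [sorted((vals[i], vals[rng.randrange(n)], vals[rng.randrange(n)]))[1]
            for i in range(n)]

rng = random.Random(7)
n, F = 3000, 200
vals = [1] * (n // 2) + [3] * (n // 2)
# one-shot (static) adversary: moves F nodes, half from each side, to the
# non-valid value 2, which is the global median
for i in range(F // 2):
    vals[i] = 2          # was 1
    vals[n - 1 - i] = 2  # was 3
for _ in range(30):
    vals = median_round(vals, rng)
count2 = vals.count(2)
```

After a few dozen rounds, the non-valid value 2 holds a majority: the median's bias overwhelms the (symmetric, driftless) valid values 1 and 3.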

Our New Approach: An Overview. Our analysis significantly departs from the above approaches. It is important to remark that, for $|\Sigma| \geqslant 3$, no analysis of the 3-majority dynamics with almost-uniform initial configurations was known, even in the simpler non-adversarial case. On the other hand, while simpler, the analysis of the non-adversarial case still has per-se interest and it requires addressing some of the main technical challenges that also arise in the adversarial case. Section 3 is thus devoted to the analysis of the non-adversarial case, and an outline of it is given in the paragraphs that follow.

When the configuration is (approximately) uniform, Lemma 2.1 tells us that the process exhibits no significant drift toward any fixed opinion. Interestingly, things change if we consider the random variable $C_{\min}(t)$, indicating the smallest opinion support at round $t$. Letting $m$ be the number of active opinions in a given round $t$, we first prove that the expected value of $C_{\min}(t+1)$ always exhibits a non-negligible negative drift; this bound is stated as inequality (2) and proved within Lemma 3.3.

This drift is essentially a consequence of Lemma 2.1 and of the standard deviation of the r.v.s $C_i(t+1)$ (see the proof of Lemma 3.3). The analysis then proceeds along consecutive phases, each consisting of a suitable number of consecutive rounds. If the number of active opinions at the beginning of a generic phase is $m$, we prove that, with positive constant probability, $C_{\min}$ vanishes by the end of the phase, so that the next phase begins with (at most) $m - 1$ active opinions.

We clearly need a good bound on the length of a phase beginning with at most $m$ opinions. To this aim, we derive a new upper bound (stated in Lemma 3.2) on the hitting time of stochastic processes with positive expected drift that are defined over finite-state Markov chains [20]. Thanks to this result, we can use the negative drift in (2) to prove that, from any configuration with $m$ active opinions, $C_{\min}$ drops below a suitable threshold, somewhat below the average support $n/m$, with constant positive probability: This “hitting” event represents the exit condition from the symmetry-breaking stage of the phase. Indeed, once it occurs, we can consider any fixed active opinion $j$ whose support lies below the above threshold (thanks to the previous stage, we know that there is a good chance this opinion exists): We then show that $C_j$ has a negative drift. This allows us to prove that $C_j$ drops from the threshold to zero within a suitable number of further rounds, with positive constant probability. This interval of rounds is the dropping stage of the phase.

Ideally, the process proceeds along consecutive phases, indexed by the current number $m$ of active opinions, such that we are left with at most $m - 1$ active opinions at the end of Phase $m$. In practice, we only have a constant probability that at least one opinion disappears during Phase $m$. However, using standard probabilistic arguments, we can prove that, w.h.p., for every $m$, the transition from $m$ to $m - 1$ active opinions takes a constant (amortized) number of phases, each requiring a bounded number of rounds.
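The phase structure can be watched in a toy simulation (ours; all parameters are arbitrary): starting from a perfectly uniform configuration, the number of active opinions never increases (without an adversary, a dead opinion cannot reappear) and eventually drops to one.

```python
import random

def three_majority_round(opinions, rng):
    """One synchronous 3-majority round with the 'first sample' tie-break."""
    n = len(opinions)
    out = []
    for _ in range(n):
        a, b, c = (opinions[rng.randrange(n)] for _ in range(3))
        out.append(b if b == c else a)
    return out

rng = random.Random(42)
n, k = 500, 4
opinions = [i % k for i in range(n)]           # perfectly uniform start: no initial drift
active_counts = [len(set(opinions))]           # number of active opinions per round
for _ in range(2000):                          # generous round budget
    opinions = three_majority_round(opinions, rng)
    active_counts.append(len(set(opinions)))
    if active_counts[-1] == 1:                 # consensus reached
        break
```

Plotting `active_counts` shows the staircase of phases: long stretches at a fixed number of active opinions, punctuated by the disappearance of the current minimum.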

The presence of a dynamic, adaptive adversary makes the above analysis technically more complex. A major issue is that a different definition of phase must be adopted, since the adversary might permanently feed any opinion so that the latter never dies. So the number of active opinions might not decrease from one phase to the next. Essentially, we need to manage the persistence of “small” (valid or not) opinions: The end of a phase is now characterized by one “big” valid color becoming “small” and, moreover, we need to show that, in general, “small” colors never become “big”, no matter what the $F$-dynamic adversary does. An informal description of the dynamic-adversary case is given in Subsection 4.2.

3 The 3-Majority Dynamics without Adversary

Let $\Gamma$ be the subset of valid opinions, i.e., those supported by at least one node in the initial configuration, and denote by $k = |\Gamma|$ its size. This section is devoted to the proof of the following result.

Theorem 3.1 (The Adversary-Free Case).

Starting from any initial configuration with $k \leqslant n^{\alpha}$ active opinions, where $\alpha$ is a suitably small positive constant, the 3-majority dynamics reaches consensus within a number of rounds polynomial in $k$ and $\log n$, w.h.p.

We first provide the lemmas required for the process analysis and then we give the formal proof of the above theorem.

The next lemma shows an upper bound on the time it takes a stochastic process with values in $\{0, 1, \ldots, n\}$ to reach or exceed a target value $M$, under mild hypotheses on the process. We here give only an idea of the proof; the full proof is in Appendix A.

Lemma 3.2.

Let $\{X_t\}_{t \in \mathbb{N}}$ be a Markov chain with finite state space $\Omega$, let $f \colon \Omega \to \{0, 1, \ldots, n\}$ be a function mapping states of the chain to non-negative integers, and let $\{Z_t\}_{t \in \mathbb{N}}$ be the stochastic process over $\{0, 1, \ldots, n\}$ defined by $Z_t = f(X_t)$. Let $M \leqslant n$ be a “target value” and let

$$\tau = \inf \{ t \in \mathbb{N} \,:\, Z_t \geqslant M \}$$

be the random variable indicating the first time $Z_t$ reaches or exceeds value $M$. Assume that, for every state $x \in \Omega$ with $f(x) < M$, it holds that

  1. (Positive drift). $\mathbf{E}[Z_{t+1} \mid X_t = x] \geqslant f(x) + h$, for some $h > 0$;

  2. (Bounded jumps). $\mathbf{E}[Z_{t+1} \mid X_t = x, \, Z_{t+1} \geqslant M] \leqslant M + c$, for some $c > 0$.

Then, for every starting state $x_0 \in \Omega$, it holds that

$$\mathbf{E}[\tau \mid X_0 = x_0] \;\leqslant\; \frac{M + c}{h}.$$

Idea of the proof. From Hypothesis 1 it follows that $\{Z_{\min(t, \tau)} - h \min(t, \tau)\}_t$ is a submartingale that satisfies the hypotheses of Doob’s Optional Stopping Theorem [13] (see e.g. Corollary 17.8 in [20] or Theorem 10.10 in [26]), thus

$$h \, \mathbf{E}[\tau] \leqslant \mathbf{E}[Z_{\tau}].$$

And from Hypothesis 2 it follows that $\mathbf{E}[Z_{\tau}] \leqslant M + c$. ∎
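A toy sanity check of the lemma's flavor (ours; assuming a bound of the form $\mathbf{E}[\tau] \leqslant (M + c)/h$, which is one natural reading of the statement): a $\pm 1$ random walk with upward drift $h = 0.2$, reflected at 0, should hit the target $M = 20$ within $(M + 1)/h = 105$ expected steps.

```python
import random

def hitting_time(rng, M, p_up=0.6):
    """Steps of a +-1 walk reflected at 0: drift h = 2*p_up - 1 = 0.2,
    jumps of size 1, until the walk first reaches M."""
    z, t = 0, 0
    while z < M:
        z += 1 if rng.random() < p_up else -1
        z = max(z, 0)   # reflection at 0 only helps the drift condition
        t += 1
    return t

rng = random.Random(3)
M, h, c = 20, 0.2, 1
trials = 2000
mean_tau = sum(hitting_time(rng, M) for _ in range(trials)) / trials
bound = (M + c) / h     # = 105 under our reading of the lemma's bound
```

The empirical mean lands near $M/h = 100$, comfortably inside the bound; the reflection at 0 is harmless since the drift hypothesis still holds there.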

We now exploit the above lemma in order to bound the time required by the symmetry-breaking stage.

Lemma 3.3 (Symmetry-breaking stage).

Let $\mathbf{c}$ be any configuration with $m$ active opinions. Then, within a suitable number of rounds, $C_{\min}$ drops below the symmetry-breaking threshold with constant positive probability.

Sketch of Proof. Let $\Gamma$ be the set of active opinions in $\mathbf{c}$ and let $C(t)$ be the random variable indicating the opinion configuration at round $t$, where we assume $C(0) = \mathbf{c}$. Let $C_{\min}(t)$ be the minimum among all the $C_i(t)$s and consider the stochastic process $\{Z_t\}_t$ defined as a suitable function of $C_{\min}(t)$. Observe that $Z_t$ takes values in $\{0, 1, \ldots, n\}$ and is a function of the configuration $C(t)$. We are interested in the first time $\tau$ at which $Z_t$ becomes at least as large as the target value $M$.

We now show that $\{Z_t\}_t$ satisfies Hypotheses 1 and 2 of Lemma 3.2, with suitable values of the parameters $h$ and $c$.

1. Let $\mathbf{c}$ be any configuration with $m$ active opinions in which the target value has not yet been reached. We want to prove the drift condition

$$\mathbf{E}[Z_{t+1} \mid C(t) = \mathbf{c}] \;\geqslant\; Z_t + h. \qquad (3)$$

Two cases may arise.

Case 1 ($c_{\min}$ close to the average $n/m$): Observe that, in this case, the r.v.s $C_i(t+1)$ conditional on $C(t) = \mathbf{c}$ are binomial, negatively associated, and have standard deviation $\Theta(\sqrt{n/m})$. Hence, choosing a small-enough constant, from the Central Limit Theorem we get that the expected minimum support lies below $c_{\min}$'s conditional expectation by an amount of the order of this standard deviation, which yields (3) in this case.

Case 2 ($c_{\min}$ well below the average $n/m$): Equation (3) easily follows from Lemma 2.1. Indeed, let $j$ be an opinion such that $c_j = c_{\min}$; then

$$\mathbf{E}[C_j(t+1) \mid C(t) = \mathbf{c}] \;\leqslant\; c_j \left(1 + \frac{c_j}{n} - \frac{1}{m}\right), \qquad (5)$$

where we used the case’s condition and the fact that $c_j/n < 1/m$.

2. Since the random variables $C_i(t+1)$ conditional on the configuration at round $t$ are binomial, it is possible to apply the Chernoff bound (though with some care) to prove the bounded-jumps condition

$$\mathbf{E}[Z_{t+1} \mid C(t) = \mathbf{c}, \, Z_{t+1} \geqslant M] \;\leqslant\; M + c. \qquad (6)$$

Though this result seems intuitive, its formal proof is less obvious, since $\tau$ is a stopping time and thus itself a random variable. Lemma B.1 in Appendix B offers a formal proof of the above statement.

From (3) and (6), we have that $\{Z_t\}_t$ satisfies the hypotheses of Lemma 3.2 with the above values of $h$ and $c$. Hence $\mathbf{E}[\tau] \leqslant (M + c)/h$ and, from Markov’s inequality, we finally get the claimed bound. ∎

We now provide the analysis of the dropping stage: More precisely, we show that, if the system starts with up to $m$ active opinions and one of them (say $j$) is below the symmetry-breaking threshold, then $C_j$ drops to a smaller threshold within a suitable number of additional rounds. This bound can be proved w.h.p. since, in this regime, $C_j$ is still sufficiently large to apply the Chernoff bound. This concentration result is not necessary for the purpose of proving Theorem 3.1, but it is a key ingredient in the analysis of the adversarial case (Theorem 4.2). The next lemma can be proved by standard concentration arguments, applied in an iterative way, on the r.v. $C_j(t)$ (see Appendix B).

Lemma 3.4 (Dropping stage 1).

Let $\mathbf{c}$ be any configuration with $m \leqslant n^{\alpha}$ active opinions, where $\alpha$ is a suitably small positive constant, and such that an opinion $j$ exists whose support lies below the symmetry-breaking threshold. Then, within a suitable number of rounds, the support of opinion $j$ drops below the smaller threshold, w.h.p.

In the next lemma we prove that, once the support of $j$ becomes smaller than the above threshold, opinion $j$ disappears within a suitable number of further rounds, with constant probability. We here give only an idea of the proof; the full proof is in Appendix B.

Lemma 3.5 (Dropping stage 2).

Let $\mathbf{c}$ be any configuration with $m \leqslant n^{\alpha}$ active opinions, where $\alpha$ is a suitably small positive constant, and such that an opinion $j$ exists whose support lies below the smaller threshold above. Then, within a suitable number of rounds, opinion $j$ disappears with constant positive probability.

Idea of the proof. If the support of $j$ is small enough in configuration $\mathbf{c}$, then from Lemma 2.1 it follows that $\mathbf{E}[C_j(t+1) \mid C(t) = \mathbf{c}]$ is at most a constant factor, smaller than one, times $c_j$. Moreover, since $C_j(t+1)$ conditional on $C(t)$ is binomial, from the Chernoff bound it follows that $C_j$ keeps decreasing, w.h.p. Hence, it is easy to check that, for any initial configuration satisfying the hypothesis, a recursive relation holds which, after a suitable number of rounds $t$, gives $\mathbf{E}[C_j(t)] < 1/2$, say. Since $C_j(t)$ is a non-negative integer-valued r.v., the thesis then follows from Markov’s inequality. ∎

Proof of Theorem 3.1. From Lemmas 3.3, 3.4, and 3.5 it follows that, from any configuration with $m \geqslant 2$ active opinions, at least one of the opinions disappears within a suitable number of rounds with constant positive probability. Thus, within a number of rounds polynomial in $k$ and $\log n$, all opinions but one disappear, w.h.p. ∎

4 Convergence Time of 3-Majority with Adversary

In this section we consider the presence of a Byzantine adversary that can adaptively change the opinions of a bounded number of nodes in order to delay convergence toward a valid consensus or, even worse, to let the system converge toward a non-valid one. We consider two different adversarial strategies: A static one and a stronger, dynamic one.

4.1 The $F$-static adversary

At the end of the first round, once every node has fixed its own initial opinion, the adversary looks at the configuration and arbitrarily replaces the opinions of at most $F$ nodes with arbitrary opinions in $\Sigma$. Then the process starts and no further adversarial actions are allowed. Since any opinion the adversary may introduce has support less than $F$, as a simple consequence of the dropping stage (see Lemmas 3.4 and 3.5), the static adversarial case easily reduces to the non-adversarial one. We thus get the following
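A quick simulation (ours; all parameters are arbitrary illustrative choices) illustrates the reduction: a one-shot adversary injecting a non-valid opinion into $F$ nodes, with $F$ of the order of $\sqrt{n}$, cannot keep it alive under 3-majority.

```python
import random

def three_majority_round(opinions, rng):
    """One synchronous 3-majority round with the 'first sample' tie-break."""
    n = len(opinions)
    out = []
    for _ in range(n):
        a, b, c = (opinions[rng.randrange(n)] for _ in range(3))
        out.append(b if b == c else a)
    return out

rng = random.Random(11)
n = 900
opinions = [rng.randrange(3) for _ in range(n)]  # 3 valid opinions: 0, 1, 2
F = 15                                           # one-shot corruption budget, ~sqrt(n)/2
for i in rng.sample(range(n), F):
    opinions[i] = 99                             # non-valid injected opinion
for _ in range(400):                             # generous round budget
    opinions = three_majority_round(opinions, rng)
    if 99 not in opinions:                       # injected opinion died out
        break
survived = opinions.count(99)
```

The injected opinion starts far below the average support and, by the drift of Lemma 2.1, vanishes within a handful of rounds; afterwards only valid opinions remain.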

Corollary 4.1.

Let $k \leqslant n^{\alpha}$ for a suitable positive constant $\alpha$ and let $F$ be suitably bounded. Starting from any initial configuration having $k$ valid opinions, the 3-majority protocol reaches a stabilizing almost-consensus in the presence of any $F$-static adversary within a number of rounds polynomial in $k$ and $\log n$, w.h.p.

4.2 The $F$-dynamic adversary

The actions of this adversary over the studied process can be described as follows. At the end of every round, after the nodes have updated their opinions (i.e., once the new configuration is realized), the $F$-dynamic adversary looks at the current opinion configuration and replaces the opinions of up to $F$ nodes with any opinions in $\Sigma$.

In what follows we consider an $F$-dynamic adversary with $F = o(\sqrt{n})$. As we will show in the proof of Lemma C.7, this bound on $F$ turns out to be almost tight for guaranteeing that the process converges to an almost-consensus regime in polynomial time, w.h.p.
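The following sketch (ours; arbitrary parameters, with $F$ well below $\sqrt{n}$) illustrates this regime: even though the adversary re-injects a non-valid opinion in every round, that opinion's support stays small and the plurality remains valid.

```python
import random

def three_majority_round(opinions, rng):
    """One synchronous 3-majority round with the 'first sample' tie-break."""
    n = len(opinions)
    out = []
    for _ in range(n):
        a, b, c = (opinions[rng.randrange(n)] for _ in range(3))
        out.append(b if b == c else a)
    return out

rng = random.Random(5)
n, F = 900, 3                                    # F well below sqrt(n) = 30
opinions = [rng.randrange(2) for _ in range(n)]  # 2 valid opinions: 0 and 1
peak_bad = 0                                     # largest support ever reached by 99
for _ in range(300):
    opinions = three_majority_round(opinions, rng)
    for i in rng.sample(range(n), F):            # adversary re-injects opinion 99
        opinions[i] = 99
    peak_bad = max(peak_bad, opinions.count(99))
```

Each round the non-valid opinion receives $F$ fresh supporters, but the negative drift shrinks it back, so its support hovers around a small multiple of $F$ and never gets "big".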

The presence of the adversary requires us to distinguish between valid and non-valid opinions. So, we recall that the set $\Gamma$ of valid opinions is the subset of opinions active in the initial configuration and we observe that, in the remainder of this section, $k$ denotes the number of valid opinions, i.e., $k = |\Gamma|$.

We are now ready to state our main result in the presence of the dynamic adversary (its full proof is given in Appendix C).

Theorem 4.2 (The Dynamic-Adversary Case).

Let $k \leqslant n^{\alpha}$ for a suitable positive constant $\alpha$ and let $F = o(\sqrt{n})$. Starting from any initial configuration having $k$ valid opinions, the 3-majority dynamics reaches a (valid) stabilizing almost-consensus in the presence of any $F$-dynamic adversary within a number of rounds polynomial in $k$ and $\log n$, w.h.p.

Idea of the Proof.

We here provide a description of the main technical differences w.r.t. the analysis for the non-adversarial case.

As discussed in the overview of the process analysis, the adversary can introduce “small” non-valid opinions and it can keep small valid opinions active that would otherwise disappear (as shown in Section 3). These facts lead us to the problem of managing “small” opinions: The rigorous definition of a small opinion is determined by the minimal negative drift for $C_{\min}$ that we derived in the proof of Lemma 3.3 (see (5)).

Let $\mathcal{S}$ be the set of small opinions, defined with respect to a suitable constant threshold, and let its complement $\mathcal{B}$ be the set of big opinions.

It turns out that we cannot use the definition of the (end of a) phase adopted in the non-adversarial case, namely that at least one (valid) opinion dies. W.l.o.g., let us assume that, at the beginning, all the valid opinions are big. Then the new Phase $m$ is an interval of consecutive rounds, in each of which exactly $m$ big valid opinions are present. The new goal is to show that, at the end of Phase $m$, one of the big colors gets small and, moreover, this color (and no other small color) never gets big again. In the symmetry-breaking stage of each phase, we thus need to show that the negative drift of $C_{\min}$ (notice that the latter now denotes the minimum among the big colors) cannot be opposed by the actions of the $F$-dynamic adversary, provided that $F = o(\sqrt{n})$. This fact (stated in Lemma C.7) is obtained via two different technical steps: i) A new bound on the expected negative drift for $C_{\min}$ that accounts both for the presence of small valid opinions and for the adversary’s opposing action (this result is formalized in Lemma C.3); ii) A novel use of Lemma 3.2 on the hitting time of random processes in order to bound the expected time of the symmetry-breaking stage. We in fact need to define a new stopping condition that also includes some “bad” events: Some small (valid or not) color becomes big. We then show that bad stopping events never happen along the entire process, w.h.p. (this is essentially guaranteed by Lemma C.4).

The dropping stage of Phase $m$ is now defined as the interval of rounds in which the support of the dropped opinion falls from the symmetry-breaking threshold to the size of small colors. Similarly to the non-adversarial case, we can here fix the big opinion that has dropped below the symmetry-breaking threshold and look at its negative drift, derived from Lemma C.3. The drift is strong enough to tolerate the actions of the $F$-bounded adversary and implies a polynomial bound on the time required by this second stage of Phase $m$. This stage’s analysis is given in Lemma C.8.

Finally, after phases, we are left with one (valid) opinion that accounts for nodes, while the remaining nodes can hold any (possibly non-valid) opinion, reflecting the presence of the adversary. This is indeed what happens, w.h.p.

5 Future Work

We strongly believe that our upper bound on the convergence time of the 3-majority dynamics is not tight w.r.t. . The factor does not seem necessary: we believe that at least a factor can be saved. To this aim, we would need to show that “more” opinions become small during a phase, where this number should also depend on the current number of big colors. Another idea would be to also consider the growth of the maximal opinion. Unfortunately, unlike the minimal opinion (see (2) in Section 2), we have no good bound on the expected drift of the maximal opinion that holds from an arbitrary configuration, and we do not see how to adapt our approach efficiently without this crucial ingredient.

References

  • [1] Dana Angluin, James Aspnes, and David Eisenstat. Stably computable predicates are semilinear. In Proc. of the 25th Ann. ACM SIGACT-SIGOPS Symp. on Principles of Distributed Computing (PODC’06), pages 292–299. ACM, 2006.
  • [2] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Computation in networks of passively mobile finite-state sensors. Distributed Computing, 18(4):235–253, 2006.
  • [3] Dana Angluin, James Aspnes, and David Eisenstat. A Simple Population Protocol for Fast Robust Approximate Majority. Distributed Computing, 21(2):87–102, 2008. (Preliminary version in DISC’07).
  • [4] Dana Angluin, Michael J. Fischer, and Hong Jiang. Stabilizing consensus in mobile networks. In Proc. of Distributed Computing in Sensor Systems (DCOSS’06), volume 4026 of LNCS, pages 37–50, 2006.
  • [5] James Aspnes. Faster Randomized Consensus with an Oblivious Adversary. In Proc. of the 31st Ann. ACM SIGACT-SIGOPS Symp. on Principles of Distributed Computing (PODC’12), pages 1–8. ACM, 2012.
  • [6] Luca Becchetti, Andrea Clementi, Emanuele Natale, Francesco Pasquale, and Riccardo Silvestri. Plurality Consensus in the Gossip Model. In Proc. of the 26th Ann. ACM-SIAM Symp. on Discrete Algorithms (SODA’15), pages 371–390. SIAM, 2015.
  • [7] Luca Becchetti, Andrea Clementi, Emanuele Natale, Francesco Pasquale, Riccardo Silvestri, and Luca Trevisan. Simple dynamics for plurality consensus. In Proc. of the 26th ACM Symp. on Parallelism in Algorithms and Architectures (SPAA’14), pages 247–256. ACM, 2014.
  • [8] Ohad Ben-Shahar, Shlomi Dolev, Andrey Dolgin, and Michael Segal. Direction election in flocking swarms. In Proc. of the 6th Int. Workshop on Foundations of Mobile Computing (DIALM-POMC’10), pages 73–80. ACM, 2010.
  • [9] Luca Cardelli and Attila Csikász-Nagy. The Cell Cycle Switch Computes Approximate Majority. Scientific Reports, Vol. 2, 2012.
  • [10] Alan Demers, Dan Greene, Carl Hauser, Wes Irish, John Larson, Scott Shenker, Howard Sturgis, Dan Swinehart, and Doug Terry. Epidemic algorithms for replicated database maintenance. In Proc. of the 6th Ann. ACM Symp. on Principles of Distributed Computing (PODC’87), pages 1–12. ACM, 1987.
  • [11] Martin Dietzfelbinger, Andreas Goerdt, Michael Mitzenmacher, Andrea Montanari, Rasmus Pagh, and Michael Rink. Tight thresholds for cuckoo hashing via XORSAT. In Proc. of the 37th Int. Coll. on Automata, Languages, and Programming (ICALP’10), volume 6198 of LNCS, pages 213–225. Springer, 2010.
  • [12] Benjamin Doerr, Leslie A. Goldberg, Lorenz Minder, Thomas Sauerwald, and Christian Scheideler. Stabilizing consensus with the power of two choices. In Proc. of the 23rd Ann. ACM Symp. on Parallelism in Algorithms and Architectures (SPAA’11), pages 149–158. ACM, 2011.
  • [13] Joseph L. Doob. Stochastic Processes. John Wiley & Sons Inc., 1953.
  • [14] David Doty. Timing in chemical reaction networks. In Proc. of 25th Ann. ACM-SIAM Symp. on Discrete Algorithms (SODA’14), pages 772–784. SIAM, 2014.
  • [15] Ofer Feinerman, Bernhard Haeupler, and Amos Korman. Breathe Before Speaking: Efficient Information Dissemination Despite Noisy, Limited and Anonymous Communication. In Proc. of the ACM Symposium on Principles of Distributed Computing (PODC ’14). ACM, 2014.
  • [16] Nigel R. Franks, Stephen C. Pratt, Eamonn B. Mallon, Nicholas F. Britton, and David J.T. Sumpter. Information flow, opinion polling and collective intelligence in house–hunting social insects. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 357(1427):1567–1583, 2002.
  • [17] Seth Gilbert and Dariusz Kowalski. Distributed agreement with optimal communication complexity. In Proc. of 21st Ann. ACM-SIAM Symp. on Discrete Algorithms (SODA’10), pages 965–977. SIAM, 2010.
  • [18] Richard Karp, Christian Schindelhauer, Scott Shenker, and Berthold Vöcking. Randomized rumor spreading. In Proc. of the 41st Ann. IEEE Symp. on Foundations of Computer Science (FOCS’00), pages 565–574. IEEE, 2000.
  • [19] David Kempe, Alin Dobra, and Johannes Gehrke. Gossip-Based Computation of Aggregate Information. In Proc. of the 44th Ann. IEEE Symp. on Foundations of Computer Science (FOCS’03), pages 482–491. IEEE, 2003.
  • [20] David Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2008.
  • [21] Elchanan Mossel, Joe Neeman, and Omer Tamuz. Majority dynamics and aggregation of information in social networks. Autonomous Agents and Multi-Agent Systems, 28(3):408–429, 2014.
  • [22] Marshall Pease, Robert Shostak, and Leslie Lamport. Reaching agreement in the presence of faults. Journal of the ACM, 27(2):228–234, 1980.
  • [23] Etienne Perron, Dinkar Vasudevan, and Milan Vojnovic. Using Three States for Binary Consensus on Complete Graphs. In Proc. of the 28th IEEE Conf. on Computer Communications (INFOCOM’09), pages 2527–2535. IEEE, 2009.
  • [24] Michael O. Rabin. Randomized byzantine generals. In Proc. of the 24th Ann. Symp. on Foundations of Computer Science (SFCS’83), pages 403–409. IEEE, 1983.
  • [25] Yongxiang Ruan and Yasamin Mostofi. Binary Consensus with Soft Information Processing in Cooperative Networks. In Proc. of the 47th IEEE Conf. on Decision and Control (CDC’08), pages 3613–3619. IEEE, 2008.
  • [26] David Williams. Probability with Martingales. Cambridge University Press, 1991.
  • [27] Mehmet E. Yildiz, Asuman E. Ozdaglar, Daron Acemoglu, Amin Saberi, and Anna Scaglione. Binary Opinion Dynamics with Stubborn Agents. ACM Trans. Econ. Comput., 1(4), 2013.

Appendix A Preliminary Results

Proof of Lemma 2.1

According to the -majority protocol, a node gets opinion if it samples opinion three times, or if it samples twice and a different opinion once, or if its first sample is opinion and its second and third samples are two distinct opinions (in which case the three-way tie is broken in favor of the first sample). Hence, if we denote by the indicator random variable of the event “Node gets opinion at time ”, we have that

Then the inequality in (1) is obtained by observing that the sum is minimized for .
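The case analysis above can be checked numerically. In the sketch below (our own rendering: the fractions are arbitrary, and the closed form `c_i * (1 + c_i - sum_j c_j**2)` is our reading of the resulting expectation, with three-way ties broken in favor of the first sample), we enumerate all ordered triples of sampled opinions and compare against the closed form.

```python
from itertools import product

def adopt(a, b, c):
    # 3-majority rule: a 2-of-3 majority always satisfies b == c in some
    # position we check; all-distinct ties go to the first sample.
    return b if b == c else a

def p_adopt(i, fractions):
    # Exact probability that a node adopts opinion i, obtained by summing
    # the weights of all ordered sample triples that yield opinion i.
    return sum(
        fractions[a] * fractions[b] * fractions[c]
        for a, b, c in product(range(len(fractions)), repeat=3)
        if adopt(a, b, c) == i
    )

fractions = [0.5, 0.3, 0.2]        # illustrative opinion fractions
s = sum(f * f for f in fractions)  # sum_j c_j^2
for i, ci in enumerate(fractions):
    closed_form = ci * (1 + ci - s)
    assert abs(p_adopt(i, fractions) - closed_form) < 1e-12
print("case analysis matches the closed form")
```

The three summands of `p_adopt` correspond exactly to the three cases in the proof: three samples of opinion , two samples of plus one other, and a first sample of followed by two distinct other opinions.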

Proof of Lemma 3.2

Consider the stochastic process and observe that for any state with it holds that

where the inequality follows from Hypothesis 1. Thus is a submartingale up to the stopping time , i.e., for any . Moreover, since the jumps of can be bounded by a value independent of

and since, as is easy to see, Hypothesis 1 implies , we can apply Doob’s Optional Stopping Theorem [13] (see also, e.g., Corollary 17.8 in [20] and Theorem 10.10 in [26]). It then follows that and, since , we have that

Finally, we get

where in the last inequality we used Hypothesis 2. ∎
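Since the constants of Lemma 3.2 are not reproduced here, the following toy example only illustrates the optional-stopping mechanism on a concrete process of our own choosing: a ±1 random walk with constant upward drift `2*p_up - 1`. The compensated process `X_t - drift*t` is a martingale with bounded jumps, so Doob’s Optional Stopping Theorem (here in the form of Wald’s identity) gives `E[tau] = m / drift` for the first time `tau` the walk hits `m`.

```python
import random

def hitting_time(m, p_up, rng):
    """Number of steps a +/-1 walk with up-probability p_up, started at 0,
    takes to first reach level m."""
    x, t = 0, 0
    while x < m:
        x += 1 if rng.random() < p_up else -1
        t += 1
    return t

rng = random.Random(0)
m, p_up = 20, 0.6
drift = 2 * p_up - 1   # expected increment per step
runs = 2000
avg = sum(hitting_time(m, p_up, rng) for _ in range(runs)) / runs
print(avg, m / drift)  # empirical mean vs. the exact value from Wald's identity
```

The empirical mean concentrates around `m / drift = 100`, matching the drift-versus-hitting-time trade-off that the lemma exploits.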

Appendix B Proofs for the Non-Adversarial Case

Proof of Lemma 3.4

We first prove that the rate of decrease of depends on its value at the end of the previous round. More formally, if we are in a configuration satisfying the hypotheses of the lemma:

where

Using Lemma 2.1 and applying Chernoff bound we have:

(7)

The second equality in (7) follows from the definition of , while the third inequality follows by (upper) bounding the denominator of by , which is always possible since from the hypotheses. Finally, to prove the last equality, we used the fact that and that the function is decreasing iff , with .

Finally, we can iteratively apply (7) as long as at most opinions are active and remains not smaller than . By standard concentration arguments, the time to reach this threshold is , w.h.p. ∎

Proof of Lemma 3.5

Let be the set of active opinions. By conditioning on all the configurations that the system can take at round , we can bound the expectation of as follows

where we used that, for any configuration with , Lemma 2.1 gives the bound . Moreover, if , from the Chernoff bound it follows that

for any such configuration . Hence, for any we have that . Indeed,

Thus for any the following recursive relation holds

Unrolling the recursion, we get

Hence, for we have that and, since takes non-negative integer values, the thesis follows from Markov’s inequality. ∎

Lemma B.1.

Let be any configuration with active opinions. Consider the stochastic process defined as and define the stopping time . Then:

Proof.

First of all, , since has a negative drift (see the proof of Lemma 3.3). Next, from the definition of :

Next, from the definition of the stopping time :

(8)

where the last equality follows since implies .

We next consider . We can write: