Consensus Needs Broadcast in Noiseless Models but can be Exponentially Easier in the Presence of Noise


Andrea Clementi
Università di Roma Tor Vergata
Rome, Italy
clementi@mat.uniroma2.it
   Luciano Gualà
Università di Roma Tor Vergata
Rome, Italy
guala@mat.uniroma2.it
   Emanuele Natale
Max Planck Institute for Informatics
Saarbrücken, Germany
emanuele.natale@mpi-inf.mpg.de
   Francesco Pasquale
Università di Roma Tor Vergata
Rome, Italy
pasquale@mat.uniroma2.it
   Giacomo Scornavacca
Università degli Studi dell’Aquila
L’Aquila, Italy
giacomo.scornavacca@graduate.univaq.it
   Luca Trevisan
U.C. Berkeley
Berkeley, CA, United States
luca@berkeley.edu
Abstract

Consensus and Broadcast are two fundamental problems in distributed computing, whose solutions have several applications. Intuitively, Consensus should be no harder than Broadcast, and this can be rigorously established in several models. Can Consensus be easier than Broadcast?

In models that allow noiseless communication, we prove a reduction of (a suitable variant of) Broadcast to binary Consensus, that preserves the communication model and all complexity parameters such as randomness, number of rounds, communication per round, etc., while there is a loss in the success probability of the protocol. Using this reduction, we get, among other applications, the first logarithmic lower bound on the number of rounds needed to achieve Consensus in the uniform GOSSIP model on the complete graph. The lower bound is tight and, in this model, Consensus and Broadcast are equivalent.

We then turn to distributed models with noisy communication channels that have been studied in the context of some bio-inspired systems. In such models, only one noisy bit is exchanged when a communication channel is established between two nodes, and so one cannot easily simulate a noiseless protocol by using error-correcting codes. An $\Omega(\epsilon^{-2} n)$ lower bound on the number of rounds needed for Broadcast is proved by Boczkowski et al. [PLOS Comp. Bio. 2018] in one such model (noisy uniform PULL, where $\epsilon$ is a parameter that measures the amount of noise). In such a model, we prove a new $\Theta(\epsilon^{-2} n \log n)$ bound for Broadcast and a $\Theta(\epsilon^{-2} \log n)$ bound for binary Consensus, thus establishing an exponential gap between the number of rounds necessary for Consensus versus Broadcast.

Keywords: Distributed Consensus Algorithms, Broadcast, Gossip Models, Noisy Communication.

1 Introduction

In this paper we investigate the relation between Consensus and Broadcast, which are two of the most fundamental algorithmic problems in distributed computing [30, 35, 63, 66], and we study how the presence or absence of communication noise affects their complexity.

In the (Single-Source) Broadcast problem, one node in a network has an initial message msg and the goal is for all the nodes in the network to receive a copy of msg.

In the Consensus problem, each of the $n$ nodes of a network starts with an input value (which we will also call an opinion), and the goal is for all the nodes to converge to a configuration in which they all have the same opinion (this is the agreement requirement) and this shared opinion is one held by at least one node at the beginning (this is the validity requirement). In the Binary Consensus problem, there are only two possible opinions, which we denote by 0 and 1.

In the (binary) Majority Consensus problem [6, 34, 64] we are given the promise that one of the two possible opinions is initially held by at least $n/2 + b$ nodes, where $b$ is a parameter of the problem, and the goal is for the nodes to converge to a configuration in which they all have the opinion that, at the beginning, was held by the majority of nodes. Note that Consensus and Majority Consensus are incomparable problems: a protocol may solve one problem without solving the other. A Consensus protocol is allowed to converge to an agreement on an opinion that was initially in the minority (provided that it was held by at least one node), while a Majority Consensus protocol must converge to the initial majority whenever the minority opinion is held by fewer than $n/2 - b$ nodes. On the other hand, a Majority Consensus protocol is allowed to converge to a configuration with no agreement if the initial opinion vector does not satisfy the promise, while a Consensus protocol must converge to an agreement regardless of the initial opinion vector.

Motivations for studying the Broadcast problem are self-evident. Consensus and Majority Consensus are simplified models for the way inconsistencies and disagreements are resolved in social networks, biological models and peer-to-peer systems [36, 40, 61]. (The Consensus problem is often studied in models in which nodes are subject to malicious faults and, in that case, one has motivations from network security. In this paper we concentrate on models in which all nodes honestly follow the prescribed protocol and the only possibly faulty devices are the communication channels.)

In distributed models that severely restrict the way in which nodes communicate (to model constraints that arise in peer-to-peer systems or in social or biological networks), upper and lower bounds for the Broadcast problem give insights on the effect of the communication constraints on the way in which information can spread in the network. The analysis of algorithms for Consensus often gives insights on how to break symmetry in distributed networks, when looking at how the protocol handles an initial opinion vector in which exactly half the nodes have one opinion and half have the other. The analysis of algorithms for Majority Consensus usually hinges on studying the rate at which the number of nodes holding the minority opinion shrinks.

If the nodes are labeled by $1, \dots, n$, and each node knows its label, then there is an easy reduction of binary Consensus to Broadcast: node $1$ broadcasts its initial opinion to all other nodes, and then all nodes agree on that opinion as the consensus opinion. Even if the nodes do not have known identities, they can first run a leader election protocol, and then proceed as above with the leader broadcasting its initial opinion. Even in models where leader election is not trivial, the best known Consensus protocol has, in all the cases that we are aware of, at most the “complexity” (whether it is measured in memory per node, communication per round, number of rounds, etc.) of the best known Broadcast protocol.

The question that we address in this paper is whether the converse holds: are there ways of obtaining a Broadcast protocol from a Consensus protocol, or are there gaps, in certain models, between the complexity of the two problems?

Roughly speaking, we will show that, in the presence of noiseless communication channels, every Consensus protocol can be used to realize a weak form of Broadcast. Since, in many cases, known lower bounds for Broadcast apply also to such weak form, we get new lower bounds for Consensus. In a previously studied, and well motivated, distributed model with noisy communication, however, we establish an exponential gap between Consensus and Broadcast.

1.1 Communication and computational models

In order to state and discuss our results we first introduce some distributed models and their associated complexity measures.

We study protocols defined on a communication network, described by an undirected graph $G = (V, E)$, where $V$ is the set of nodes, each one running an instance of the distributed algorithm, and $E$ is the set of pairs of nodes between which there is a communication link that allows them to exchange data. When not specified, $G$ is assumed to be the complete graph.

In synchronous parallel models, there is a global clock and, at each time step, nodes are allowed to communicate using their links.

In the LOCAL model, there is no restriction on how many neighbors a node can talk to at each step, and no restriction on the number of bits transmitted at each step. There is also no restriction on the amount of memory and computational ability of each node. The only complexity measure is the number of rounds of communication. For example, it is easy to see that the round complexity of Broadcast is the diameter of the graph $G$. The CONGEST model is like the LOCAL model but the amount of data that each node can send at each time step is limited, usually to $O(\log n)$ bits.

In the (general) GOSSIP model [29, 51], at each time step, each node $u$ chooses one of its neighbors $v$ and activates the communication link $(u, v)$, over which communication becomes possible during that time step, allowing $u$ to send a message to $v$ and, simultaneously, $v$ to send a message to $u$. We will call $u$ the caller of $v$. In the PUSH variant, each node $u$ sends a message to its chosen neighbor $v$; in the PULL variant, each node sends a message to its callers. Note that, although each node chooses only one neighbor, some nodes may be chosen by several others, and so they may receive several messages in the PUSH setting, or send a message to several recipients in the PULL setting. In our algorithmic results for the GOSSIP model, we will assume that each message exchanged in each time step is only one bit, and our negative results for the noiseless setting will apply to the case of messages of unbounded length. In the uniform GOSSIP (respectively, PUSH or PULL) model, the choice of $v$ is done uniformly at random among the neighbors of $u$. This means that uniform models make sense even in anonymous networks, in which nodes are not aware of their identities nor of the identities of their neighbors. (In the general GOSSIP model, in which a node can choose which incident edge to activate, a node must, at least, know its degree and have a way to distinguish between its incident edges.)
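To make the mechanics concrete, here is a minimal Python sketch (ours, purely illustrative; not code from the paper) of one synchronous round of the uniform PUSH and uniform PULL variants on the complete graph, with one-bit opinions:

    import random

    def uniform_push_round(opinions):
        """One round of uniform PUSH on the complete graph: every node sends
        its bit to one neighbor chosen uniformly at random; a node may thus
        receive several bits, or none, in the same round."""
        n = len(opinions)
        inbox = [[] for _ in range(n)]
        for u in range(n):
            v = random.choice([w for w in range(n) if w != u])
            inbox[v].append(opinions[u])
        return inbox  # the protocol decides how each node uses its inbox

    def uniform_pull_round(opinions):
        """One round of uniform PULL: every node reads the bit of one neighbor
        chosen uniformly at random (a node may answer several callers)."""
        n = len(opinions)
        return [opinions[random.choice([w for w in range(n) if w != u])]
                for u in range(n)]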

In this work, we are mainly interested in models like GOSSIP that severely restrict communication [6, 2, 34, 36, 61, 64], both for efficiency considerations and because such models capture aspects of the way consensus is reached in biological population systems and other domains of interest in network science [5, 7, 35, 18, 36, 38, 40]. Communication capabilities in such scenarios are typically constrained and non-deterministic: both features are well captured by uniform models.

Asynchronous variants of the GOSSIP model (such as Population Protocols [6, 5]) have also been extensively studied [17, 48, 64]. In this variant, no global clock is available to nodes. Instead, nodes are idle until a single node is activated by a (possibly random) scheduler, either in discrete time or in continuous time. When a node wakes up, it activates one of its incident edges and wakes up the corresponding neighbor. Communication happens only between those two vertices, which subsequently go idle again until the next time they wake up.

Previous studies show that, in both PUSH and PULL variants of uniform GOSSIP, (binary) Consensus, Majority Consensus and Broadcast can be solved within logarithmic time (and work per node) in the complete graph, via elementary protocols, with high probability (for short, w.h.p.) [6, 14, 17, 34, 48, 53] (see also Section 1.5). In the case of Majority Consensus, the initial additive bias must have size $\Omega(\sqrt{n \log n})$. Throughout this paper, we say that an event holds w.h.p. if it holds with probability at least $1 - n^{-c}$, for some constant $c > 0$. Moreover, efficient protocols have been proposed for Broadcast and Majority Consensus for some restricted families of graphs such as regular expanders and random graphs [1, 23, 22, 28, 49, 60].

However, while $\Omega(\log n)$ time and work are necessary for Broadcast in the complete graph [17, 48, 53], prior to this work it was still unknown whether a more efficient protocol existed for Consensus and Majority Consensus.

1.2 Our contribution I: Broadcast is “no harder” than Consensus over noiseless communication

Our first result is a reduction of a weak form of Broadcast to Consensus (Theorem 3.2) which establishes, among other lower bounds, tight logarithmic lower bounds for Consensus and Majority Consensus both in the uniform GOSSIP (and hence uniform PULL and PUSH as well) model and in the general PUSH model.

To describe our result, it is useful to introduce the notion of nodes infected by a source node in a distributed protocol: if $s$ is a designated source node in the network, then we say that at time $t = 0$ the node $s$ is the only infected node and, at time $t \geq 1$, a node is infected if and only if either it was infected at time $t - 1$ or it received a communication from an infected node at time $t$.

This notion is helpful in thinking about upper and lower bounds for Broadcast: any successful broadcast protocol from $s$ needs to infect all nodes from source $s$, and any protocol that is able to infect all nodes from source $s$ can be used to broadcast from $s$ by appending msg to each message originating from an infected node. Thus any lower bound for infection is also a lower bound for Broadcast, and any protocol for infection can be converted, perhaps with a small overhead in communication, to a protocol for Broadcast. For example, in the PUSH model (either uniform or general; see Section 2.1 for a formal definition of the two variants), the number of infected nodes can at most double at each step, because each infected node can send a message to only one other node, and this is the standard argument that proves an $\Omega(\log n)$ lower bound for Broadcast.
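The doubling bound can be checked empirically. The sketch below (ours, illustrative) simulates infection from a single source in the uniform PUSH model and compares the number of rounds with the $\log_2 n$ barrier:

    import math, random

    def push_infection_rounds(n):
        """Rounds until all n nodes are infected when, at each round, every
        infected node pushes one message to a uniformly random node."""
        infected = {0}  # node 0 is the source
        rounds = 0
        while len(infected) < n:
            pushes = {random.randrange(n) for _ in range(len(infected))}
            infected |= pushes  # at most |infected| new nodes: at most doubling
            rounds += 1
        return rounds

    n = 1024
    # Any run needs at least ceil(log2(n)) rounds, since |infected| at most doubles.
    print(math.ceil(math.log2(n)), push_infection_rounds(n))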

In Theorem 3.2 we show that lower bounds for infection also give lower bounds for Consensus. More precisely, we prove that if we have a Consensus protocol that, for every initial opinion vector, succeeds in achieving consensus with probability at least $1 - \gamma$, then there is an initial opinion vector and a source such that the protocol infects all nodes from that source with probability at least $\frac{1 - 2\gamma}{n} - \gamma$. Equivalently, if we are in a model in which there is no source for which we can have probability, say, at least $\frac{1}{2n}$ of infecting all nodes with certain resources (such as time, memory, communication per node, etc.), then, in the same model, and with the same resources, every Consensus protocol has probability $\Omega(1/n)$ of failing. For example, by the above argument, we have an $\Omega(\log n)$ lower bound for Consensus in the PUSH model (because, in fewer than $\log_2 n$ rounds, the probability of infecting all nodes is zero).

The proof uses a hybrid argument to show that there are two initial opinion vectors $\mathbf{x}^{(i-1)}$ and $\mathbf{x}^{(i)}$, which are identical except for the initial opinion of a node $v_i$, such that there is at least a $\frac{1-2\gamma}{n}$ difference between the probability of converging to the all-zero configuration starting from $\mathbf{x}^{(i-1)}$ or from $\mathbf{x}^{(i)}$. Then, we argue that this difference must come entirely from runs of the protocol that fail to achieve consensus (which happens only with probability at most $\gamma$) or from runs of the protocol in which $v_i$ infects all other nodes. Thus the probability that $v_i$ infects all nodes from the initial vector $\mathbf{x}^{(i)}$ has to be at least $\frac{1-2\gamma}{n} - \gamma$.
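In symbols (the notation $p_i$ is ours): if $p_i$ denotes the probability of converging to the all-zero configuration from the $i$-th hybrid vector $\mathbf{x}^{(i)}$, then validity forces $p_0 \geq 1 - \gamma$ and $p_n \leq \gamma$, and the pigeonhole step of the hybrid argument reads

\[
\max_{1 \leq i \leq n}\,(p_{i-1} - p_i) \;\geq\; \frac{1}{n}\sum_{i=1}^{n}(p_{i-1} - p_i) \;=\; \frac{p_0 - p_n}{n} \;\geq\; \frac{1 - 2\gamma}{n}.
\]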

As for Majority Consensus, we have a similar reduction, but from a variant of the infection problem in which there is an initial set of $2b$ infected nodes. (Recall that $b$ is the value such that we are promised that the majority opinion is held, initially, by at least $n/2 + b$ nodes.)

Lower bounds for infection are known in several models in which there were no previous negative results for Consensus. We have not attempted to survey all possible applications of our reductions, but here we enumerate some of them:

  • In the uniform GOSSIP model (also known as the uniform PUSH-PULL model), and in the general PUSH model, tight analyses (see [51, 53] and Subsection 3.1) show that, w.h.p., no protocol on the complete graph completes Broadcast within fewer than $c \log n$ rounds, where $c$ is a sufficiently small constant. Combining this lower bound with our reduction result above, we get an $\Omega(\log n)$ lower bound for Consensus. This is the first known lower bound for Consensus in such models, and it shows a full equivalence between the complexity of Broadcast and Consensus there. Regarding Majority Consensus, we also obtain an $\Omega(\log n)$ lower bound for any initial bias $b \leq n^{1-\alpha}$, with $\alpha$ any positive constant.

  • In a similar way, we are able to prove a lower bound of $\Omega(n \log n)$ on the number of steps (and hence $\Omega(\log n)$ on the parallel time) and of $\Omega(\log n)$ on the number of messages per node for Consensus on an asynchronous variant of the GOSSIP model, named Population Protocols with uniform/probabilistic scheduler, as defined in [6].

  • The last application we mention here concerns the synchronous Radio Network model [4, 10, 24, 67]. Several optimal bounds have been obtained on the Broadcast time [10, 27, 54, 55, 57], while only few results are known for the Consensus time [24, 67]. In particular, we are not aware of lower bounds better than the trivial $\Omega(D)$ (where $D$ denotes the diameter of the network). By combining a previous $\Omega(\log^2 n)$ lower bound on Broadcast in [4] with our reduction result, we get a new $\Omega(\log^2 n)$ lower bound for Consensus in this model (see Subsection 3.1).

We also mention that our reduction allows us to prove that some of the above lower bounds hold also for a weaker notion of Consensus, namely $\delta$-Almost Consensus (where up to $\delta$ nodes are allowed to not agree with the rest of the nodes), and even if the nodes have unbounded memory and can send/receive messages of unbounded size. We will expand on these comments in the technical sections.

1.3 Our contribution II: Consensus over noisy communication

We then turn to the study of distributed systems in which the communication links between nodes are noisy. We will consider a basic model of high-noise communication: the binary symmetric channel [59], in which each exchanged bit is flipped independently at random with probability $\frac{1-\epsilon}{2}$, where $0 < \epsilon \leq 1$, and we refer to $\epsilon$ as the noise parameter of the model.
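Concretely, one use of such a channel can be sketched as follows (our snippet; the convention that the flip probability is $(1-\epsilon)/2$ matches the noise parameter just introduced):

    import random

    def noisy_bit(bit, eps):
        """Binary symmetric channel: flip the bit with probability (1 - eps)/2;
        eps = 1 gives a noiseless link, while eps -> 0 approaches pure noise."""
        return bit ^ (random.random() < (1 - eps) / 2)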

In models such as LOCAL and CONGEST, the ability to send messages of logarithmic length (or longer) implies that, with a small overhead, one can encode the messages using error-correcting codes and simulate protocols that assume errorless communication.

In the uniform GOSSIP model with one-bit messages, however, error-correcting codes cannot be used and, indeed, whenever the number of rounds is sublinear in $n$, most of the pairs of nodes that ever communicate exchange only a single bit.

The study of fundamental distributed tasks, such as Broadcast and Majority Consensus, has been undertaken in the uniform GOSSIP model with one-bit messages and noisy links [16, 38] as a way of modeling the restricted and faulty communication that takes place in biological systems, and as a way to understand how information can travel in such systems, and how they can repair inconsistencies. Such investigation falls under the agenda of natural algorithms, that is, the investigation of biological phenomena from an algorithmic perspective [20, 62].

In [38], the authors prove that (binary) Broadcast and (binary) Majority Consensus can be solved in time $O(\epsilon^{-2} \log n)$, where $\epsilon$ is the noise parameter, in the uniform PUSH model with one-bit messages. They also prove a matching lower bound assuming that the protocol satisfies a certain symmetry condition, which is true for the protocol of their upper bound. This has been later generalized to non-binary opinions in [39].

In the noisy uniform PULL model, however, [16] proves an $\Omega(\epsilon^{-2} n)$ time lower bound (they actually proved a more general result, including non-binary noisy channels). This lower bound holds even under assumptions that strengthen the negative result, such as unique node IDs, full synchronization, and shared randomness (see Section 2.4 of [16] for more details on this point).

Such a gap between noisy uniform PUSH and PULL comes from the fact that, in the PUSH model, a node is allowed to decline to send a message, and so one can arrange a protocol in which nodes do not start communicating until they have some confidence in the value of the broadcast message. In the PULL model, instead, a called node must send a message, and so the communication becomes polluted with noise from the messages of the non-informed nodes.

What about Consensus and Majority Consensus in the noisy PULL model? Our reduction in Theorem 3.2 suggests that comparable lower bounds could hold for Consensus and Majority Consensus, but recall that the reduction is to the infection problem, and infection is equivalent to Broadcast only when we have errorless channels.

1.3.1 Upper bounds in noisy uniform Pull

We devise a simple and natural protocol for Consensus in the noisy uniform PULL model with convergence time $O(\epsilon^{-2} \log n)$, w.h.p., thus exhibiting an exponential gap between Consensus and Broadcast in the noisy uniform PULL model.

The protocol runs in two phases. In the first phase, each node repeatedly collects a batch of $\Theta(\epsilon^{-2})$ pulled opinions and then updates its opinion to the majority opinion in the batch. This is done $\Theta(\log n)$ times, so that the first phase takes $\Theta(\epsilon^{-2} \log n)$ steps. In the second phase, each node collects a batch of $\Theta(\epsilon^{-2} \log n)$ pulled opinions and then updates its opinion to the majority opinion within the batch.
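The following Python sketch (ours) illustrates the structure of the protocol; the constants C1 and C2 are illustrative placeholders, not the constants from the analysis:

    import math, random

    def noisy_pull(opinions, u, eps):
        """Node u pulls a uniformly random other node and receives its one-bit
        opinion through the binary symmetric channel."""
        n = len(opinions)
        v = random.choice([w for w in range(n) if w != u])
        return opinions[v] ^ (random.random() < (1 - eps) / 2)

    def two_phase_consensus(opinions, eps, C1=8, C2=8):
        n = len(opinions)
        # Phase 1: Theta(log n) stages; in each stage every node replaces its
        # opinion by the majority of a batch of Theta(1/eps^2) noisy pulls.
        batch1 = int(C1 / eps ** 2)
        for _ in range(int(C2 * math.log(n))):
            opinions = [int(2 * sum(noisy_pull(opinions, u, eps)
                                    for _ in range(batch1)) > batch1)
                        for u in range(n)]
        # Phase 2: one final majority over Theta(eps^-2 log n) noisy pulls.
        batch2 = int(C1 * math.log(n) / eps ** 2)
        return [int(2 * sum(noisy_pull(opinions, u, eps)
                            for _ in range(batch2)) > batch2)
                for u in range(n)]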

The main result of the analysis is that, w.h.p., at the end of the first phase there is an opinion that is held by at least $(\frac{1}{2} + \gamma)n$ nodes, for some constant $\gamma > 0$, and that if the initial opinions were unanimous then the initial opinion is the majority opinion after the first phase. Then, in the second phase, despite the communication errors, every node has a high probability of seeing the true phase-one majority as the empirical majority in its batch, and so all nodes converge to the same valid opinion.

To analyze the first phase, we break it into two sub-phases (this breakdown appears only in the analysis, not in the protocol): in a first sub-phase of length $\Theta(\epsilon^{-2} \log n)$, the protocol “breaks symmetry” w.h.p. and, no matter the initial vector, reaches a configuration in which one opinion is held by $\frac{n}{2} + \Omega(\sqrt{n \log n})$ nodes. In the second sub-phase, also of length $\Theta(\epsilon^{-2} \log n)$, a configuration of bias $\Omega(\sqrt{n \log n})$ w.h.p. becomes a configuration of bias $\Omega(n)$. The analysis of this sub-phase for achieving Majority Consensus is similar to that in [38, 39]. If the initial opinion vector is unanimous, then it is not necessary to break the first phase into sub-phases, and one can directly see that a unanimous configuration maintains a bias of $\Omega(n)$, w.h.p., for the duration of the first phase.

A consequence of our analysis is that, if the initial opinion vector has a bias $\Omega(\sqrt{n \log n})$, then the protocol converges to the majority, w.h.p. So, we get a Majority Consensus protocol for this model under the above condition on the bias.

We also provide a Broadcast protocol that runs in $O(\epsilon^{-2} n \log n)$ steps in the noisy uniform PULL model, nearly matching the lower bound mentioned before. This protocol also runs in two phases. In the first phase, which lasts for order of $\epsilon^{-2} n \log n$ steps, the informed node responds to each PULL request with the message, and the other nodes respond to each PULL request with zero. After this phase, each node makes a guess of the value of the message, and with high probability the number of nodes that make a correct guess is at least $\frac{n}{2} + \Omega(\sqrt{n \log n})$. The second phase is a Majority Consensus protocol applied to the first-phase guesses, which, as discussed above, takes only $O(\epsilon^{-2} \log n)$ steps.
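A sketch of the first phase (ours; the threshold rule is an illustrative choice): each node counts the ones it receives over $T$ pulls and guesses $1$ iff the count exceeds what it would expect if the message were $0$:

    import random

    def phase_one_guesses(n, msg, eps, T):
        """Each node performs T noisy pulls; only the source (node 0) answers
        with msg, every other node answers 0. A node guesses msg = 1 iff the
        number of ones received exceeds the all-zero expectation T(1-eps)/2."""
        guesses = []
        for u in range(n):
            ones = 0
            for _ in range(T):
                v = random.randrange(n)  # pulled node (self-pulls kept for brevity)
                answer = msg if v == 0 else 0
                ones += answer ^ (random.random() < (1 - eps) / 2)
            guesses.append(int(ones > T * (1 - eps) / 2))
        return guesses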

1.3.2 Lower bounds in noisy Pull models

We prove that any Consensus protocol that has error probability at most $\delta$ requires $\Omega(\epsilon^{-2} \log \frac{1}{\delta})$ rounds (Theorem 4.1); for protocols that succeed w.h.p., i.e., with $\delta = n^{-\Theta(1)}$, this gives an $\Omega(\epsilon^{-2} \log n)$ bound, showing that the complexity of our protocol described above is tight. We remark that our result holds for any version (general and uniform) of the noisy PULL model with noise parameter $\epsilon$, with unbounded local memory, and even assuming unique node IDs.

In [38], an $\Omega(\epsilon^{-2} \log n)$ round lower bound is proved for Majority Consensus in the uniform PUSH model, for a restricted class of protocols. Their argument, roughly speaking, is that each node needs to receive a bit of information from the rest of the graph (namely, the majority value in the rest of the graph), and this bit needs to be correctly received with probability $1 - n^{-\Omega(1)}$, while using a binary symmetric channel with error parameter $\epsilon$. It is then a standard fact from information theory that the channel needs to be used $\Omega(\epsilon^{-2} \log n)$ times.

It is not clear how to adapt this argument to the Consensus problem. Indeed, it is not true that every node receives a bit of information with high confidence from the rest of the graph (consider the protocol in which one node broadcasts its opinion), and it is not clear if there is a distribution of initial opinions such that there is a node $v$ whose final opinion has mutual information close to 1 with the global initial opinion vector, given the initial opinion of $v$ (the natural generalization of the argument of [38]).

Instead, we show that there are two initial opinion vectors $\mathbf{x}$ and $\mathbf{y}$, a node $u$, and a bit $c$, such that the initial opinion of $u$ is the same in $\mathbf{x}$ and $\mathbf{y}$, but the probability that $u$ outputs $c$ is at least $1 - O(\sqrt{\delta})$ when the initial opinion vector is $\mathbf{x}$ and bounded away from one when the initial opinion vector is $\mathbf{y}$. Thus, the rest of the graph is sending a bit of information to $u$ (whether the initial opinion vector is $\mathbf{x}$ or $\mathbf{y}$), and the communication succeeds with probability $1 - O(\sqrt{\delta})$ when the bit has one value and only with constant probability when the bit has the other value. Despite this asymmetry, if the communication takes place over a binary symmetric channel with error parameter $\epsilon$, a calculation using KL divergence shows that the channel has to be used $\Omega(\epsilon^{-2} \log \frac{1}{\delta})$ times.
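The shape of the calculation is the following (a sketch, with illustrative constants). A single use of a binary symmetric channel with flip probability $(1-\epsilon)/2$ contributes at most

\[
D_{\mathrm{KL}}\!\left(\mathrm{Bern}\!\left(\tfrac{1+\epsilon}{2}\right)\Big\|\,\mathrm{Bern}\!\left(\tfrac{1-\epsilon}{2}\right)\right) \;=\; \epsilon \ln\frac{1+\epsilon}{1-\epsilon} \;=\; O(\epsilon^{2})
\]

to the divergence between the two view distributions, so by the chain rule the divergence after $T$ channel uses is $O(T\epsilon^{2})$; on the other hand, producing the correct output with probability $1 - \delta$ under one hypothesis and only constant probability under the other forces this divergence to be $\Omega(\log \frac{1}{\delta})$, whence $T = \Omega(\epsilon^{-2} \log \frac{1}{\delta})$.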

The lower bound of [16] for Broadcast in the uniform PULL model applies to protocols that have constant probability of correctly performing the broadcast operation. In Lemma 4.8 we sketch a way of modifying their proof to derive an $\Omega(\epsilon^{-2} n \log n)$ lower bound for uniform PULL protocols for Broadcast that have high probability of success, matching the round complexity of our protocol mentioned above.

1.4 Two separations that follow from our bounds

We remark that our results establish two interesting separations.

The first concerns the complexity gap between Consensus and Broadcast in the presence or absence of noise. Informally, we prove that, in the noiseless world, Broadcast and Consensus essentially have the same complexity in several natural models (Corollary 3.3). On the other hand, we show that there is a natural model in which the presence of noise has reasonable motivations ([16, 38]), namely the noisy uniform PULL, for which the complexity of the two problems exhibits an exponential gap, since in this model Broadcast requires $\Omega(\epsilon^{-2} n)$ rounds [16] while we prove that Consensus can be solved in $O(\epsilon^{-2} \log n)$ time (Theorem 5.1).

The second fact regards a separation between the general PULL and PUSH models as far as Consensus is concerned in the noiseless world. Indeed, if we assume unique IDs, in the general PULL model Consensus can be easily solved in constant time: every node can copy the opinion of a prescribed node by means of a single pull operation. On the other hand, in the general PUSH model, our Broadcast–Consensus reduction shows that $\Omega(\log n)$ rounds are actually necessary for solving Consensus.

1.5 Other related work

Consensus and Broadcast are fundamental algorithmic problems which have been the subject of a huge number of studies focusing on several distributed models and different computational aspects [30, 35, 63, 66]. We here briefly discuss the results most relevant to our contribution.

Noiseless communication.

Classical results prove that in the uniform PUSH or PULL models, Rumor Spreading (Broadcast) takes logarithmic time [41, 53, 65]. A series of recent works has then shown that simple uniform PULL protocols can quickly achieve Consensus, Majority Consensus and Broadcast even in the presence of a bounded number of node crashes or Byzantine nodes [15, 11, 14, 13, 31, 34, 44, 53]. The logarithmic bound is known to be tight for Broadcast [53], while, as remarked earlier, no non-trivial lower bounds were known for Consensus in any variant of the GOSSIP model. Further bounds are known for Broadcast and Majority Consensus on graphs having good expansion properties (see for instance [23, 22, 21, 46, 49, 47]). As for the general GOSSIP model with special conditions on node IDs, we mention the upper bound obtained in [9], which was then improved in [52]. A further issue is the minimum amount of randomness necessary to solve Broadcast within a given time bound. In the PUSH model, this issue is investigated in [33, 32], where upper bounds and tradeoffs are given.

Noisy communication.

In Subsection 1.3 we introduced and motivated the noisy communication model studied in [16, 38, 39] and adopted in this paper. Another model of noisy communication for distributed systems is the one considered in [3, 19]. Departing significantly from the model we adopt in this paper, there a (worst-case) adversary can adaptively flip the bits exchanged during the execution of any protocol, and the goal is to provide a robust version of the protocol under the assumption that the adversary has a limited budget on the number of bits it can change. Efficient solutions for such models typically use silent rounds [3] and error-correcting codes [3, 19]. In [37] a different task is studied in a model with noisy interactions: all nodes of a network hold a bit that they wish to transmit to a single receiver. This line of research culminated in the $\Omega(n \log\log n)$ lower bound on the number of messages shown in [50], matching the $O(n \log\log n)$ upper bound shown in [43].

Other communication models.

In [42], Consensus has been studied in a fault-free model, providing bounds on the message complexity for deterministic protocols. As for Radio Networks, we have already discussed the results for static topologies [4]. We remark here that finding lower bounds for Consensus (and Leader Election) in a rather general model of dynamic Radio Networks is an open question posed in [56], where some lower bounds on the $k$-Token Dissemination problem (a variant of Broadcast) have been derived. Another dynamic model of Radio Networks where lower bounds on the Broadcast time have been derived can be found in [26]. Even though we have not yet verified the applicability of our reduction result in these contexts, we believe this might be possible. Finally, we mention the works [24, 67] that consider faulty models (some with interference detectors) and provide complexity bounds on Consensus.

1.6 Roadmap of the Paper

The rest of the paper is organized as follows. In Section 2, we give preliminary definitions that will be used throughout the paper. Section 3 deals with the noiseless case: we first describe the general reduction of Broadcast to Consensus in noiseless communication models and derive the main applications of this result to some specific models; then, in Section 3.2, we give the simple reduction of (multi-source) Broadcast to Majority Consensus, showing that the latter requires logarithmic time in any noiseless uniform GOSSIP model. Section 4 provides the lower bound in the noisy PULL model, obtained by a reduction to an asymmetric Two-Party Protocol. In Section 5, we propose a simple majority protocol and show that it solves Consensus and Majority Consensus in the noisy uniform PULL model within $O(\epsilon^{-2} \log n)$ rounds. We also describe a protocol for Broadcast in the noisy uniform PULL model running in $O(\epsilon^{-2} n \log n)$ rounds. Finally, some technical tools are deferred to a separate appendix.

2 Preliminaries

2.1 Distributed systems and communication models

Let $\mathcal{S}$ be a distributed system formed by a set $V$ of $n$ nodes which mutually interact by exchanging messages over a connected graph $G = (V, E)$, according to a fixed communication model $\mathcal{M}$. The definition of $\mathcal{M}$ includes all features of node communication including, for instance, synchronicity (or the lack of it) and the presence of link faults.

A configuration of a distributed system is the description of the states of all the nodes of $\mathcal{S}$ at a given time. If we execute a protocol $\mathcal{P}$ for $\mathcal{S}$, the random configuration the system lies in at a generic time $t$ is denoted as $\mathbf{C}_t$.

When, in the GOSSIP, PUSH or PULL models, at each round the communication is established with a random neighbor chosen independently and u.a.r., we call the communication model uniform. To emphasize the difference with the uniform case, we call the communication model general when nodes are equipped with unique IDs which are known to all neighbors, and each node can choose the identity of the neighbor with which to communicate (possibly in a random way).

Finally, we distinguish two main communication scenarios. In the noiseless models, every transmitted message on a link of the graph is received safely, without any error.

In the presence of communication noise, instead, each bit of any transmitted message is flipped independently at random with probability $\frac{1-\epsilon}{2}$, where $\epsilon$ is the noise parameter. In the sequel, the version of each model $\mathcal{M}$ in which the communication noise above is introduced will be denoted, for short, as noisy $\mathcal{M}$. Notice that, in order to capture the role of noise in systems where standard error-correcting-code techniques are not feasible, we consider models where each single point-to-point transmission consists of one bit only. In this way, in GOSSIP models, the bit communication and the convergence time of a protocol are strongly related.

2.2 Consensus and Broadcast in distributed systems

Several versions of Consensus have been considered in the literature [30, 35, 63, 66]. Since our interest is mainly focused on models having strong constraints on communication (random, limited, and noisy), we adopt some weaker, probabilistic variants of consensus, studied in [6, 13, 34], that well capture this focus.

Formally, we say a protocol guarantees (binary) Consensus if, starting from any initial opinion vector $\mathbf{x} \in \{0,1\}^V$, the system reaches w.h.p. a configuration where every node has the same opinion (Agreement) and this opinion is valid, i.e., it was supported by at least one node at the starting time (Validity). Moreover, once the system reaches this consensus configuration, it is required to stay there for any arbitrarily-large polynomial number of rounds, w.h.p. This Stability property somewhat replaces the Termination property required by other, classic notions of consensus introduced in stronger distributed models [58].

In order to define Majority Consensus, we need to introduce the notion of bias of an (initial) opinion vector. Given any vector $\mathbf{x} \in \{0,1\}^V$, the bias associated to $\mathbf{x}$ is the difference between the number of nodes supporting opinion $1$ and the number of those supporting opinion $0$ in $\mathbf{x}$. The state of each node clearly depends on the specific protocol; however, we can always assume it contains the current opinion of the node, so we can also define the bias of a configuration $\mathbf{C}_t$. When the opinion vector (or the configuration) is clear from the context, we will simply write $b$ for its bias. The majority opinion in a given vector (configuration) is the one having the largest number of nodes supporting it. With the term majority, we will indicate the number of nodes supporting the majority opinion. A protocol guarantees (binary) Majority Consensus if, starting from any initial opinion vector with bias at least $b$ (in absolute value), the system reaches w.h.p. a configuration where every node has the same opinion and this opinion is the initial majority one. Moreover, we require the same Stability property we defined for Consensus.

Both notions of Consensus and Majority Consensus above can be further relaxed to those of $\delta$-Almost Consensus and $\delta$-Almost Majority Consensus, respectively. According to these weaker notions, we allow the system to converge to an almost-consensus regime where up to $\delta$ outliers may have a different opinion from the rest of the nodes. In this case, the number of outliers is a performance parameter of the protocol that we will specify in the statements of our results. Even in this weaker notion, we require the same Stability property, but we remark that the subset of outliers may change during the almost-consensus regime. We emphasize that all lower bounds we obtain in this paper hold for these weaker versions of Almost Consensus, while the upper bound in Section 5 refers to (full) Consensus.

As discussed in the introduction, our work also deals with the (single-source) Broadcast task (a.k.a. Rumor Spreading). Given any source node $s \in V$ having an initial message msg, a Broadcast protocol is a protocol that, after a finite number of rounds, makes every node in $V$ receive a copy of (and, thus, be informed about) msg, w.h.p. (The success probability of the protocol is here defined over both the random choices, if any, of the communication mechanism and those of the protocol.) Similarly to Consensus, we also consider a weaker version of Broadcast where the final number of informed nodes is required to be at least $n - \delta$, w.h.p.

3 Noiseless Communication: Broadcast vs Consensus

In this section we provide our first main result (Theorem 3.2) which establishes a strong connection between (Almost) Consensus and a suitable, weaker version of (Almost) Broadcast in the noiseless-communication framework. We first describe the result in a rather general setting and then we show its consequences, namely some lower bounds for the (Almost) Consensus problem in specific communication models.

Notice that in Section 3.2 we complement Theorem 3.2 with an analogous lower bound for Almost Majority Consensus with sub-linear initial bias (see Lemma 3.10).

Let $\mathcal{S}$ be a distributed system formed by a set $V$ of nodes which mutually interact over a support graph $G$ according to a fixed communication model $\mathcal{M}$. The crucial assumption we make in this section on $\mathcal{M}$ is the absence of communication noise (i.e., message corruption): whenever a node transmits a message on one of its links, either this message is received with no change or it is fully lost and, in the latter case, both sender and receiver cannot get any information from the state of the corresponding port (no fault detection).

Under the above noiseless framework, the next theorem essentially states that (Almost) Consensus cannot be “easier” than (Almost) Broadcast. As we similarly show in Section 4, much of the technical difficulty in reasoning about the valid-consensus problem arises from the high level of freedom nodes have in agreeing on the final consensus value, since both values are valid solutions as long as not all nodes start with the same input.

In order to state the reduction, we need to introduce a slightly different variant of Broadcast where, essentially, it is (only) required that some information from the source is spread on the network. Formally:

Definition 3.1.

A protocol solves the $m$-Infection problem w.r.t. a source node $s$ if it infects at least $m$ nodes, where we define a node infected recursively as follows: initially only $s$ is infected; a node becomes infected whenever it receives any message from an infected node.

Notice that a protocol $\mathcal{P}$ solving the $m$-Infection problem w.r.t. a source node $s$ can be easily turned into a protocol $\mathcal{P}'$ for broadcasting a message msg from $s$ to at least $m$ nodes. Indeed, we give the message msg to the source node $s$, and we simulate $\mathcal{P}$. Every time an infected node sends a message, it appends msg to it. Clearly, the size of each message in $\mathcal{P}'$ is increased by the size of msg.
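In pseudo-Python (ours; `send` is a hypothetical message-passing hook, not an API from the paper), the transformation is a simple piggybacking wrapper:

    def with_broadcast(send, msg, is_infected):
        """Wrap the send primitive of an infection protocol so that every
        message leaving an infected node carries a copy of msg; a receiver
        that sees msg becomes informed (and, by Definition 3.1, infected)."""
        def send_and_inform(u, v, payload):
            attachment = msg if is_infected(u) else None
            send(u, v, (payload, attachment))
        return send_and_inform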

The next theorem is the main result of the section. Informally, it states that any protocol for Consensus actually solves the Infection problem (when initialized with a certain opinion vector) in a weak sense: (i) the infection is w.r.t. a source that depends on the consensus protocol in a (possibly) uncontrolled manner; and (ii) the success probability of the infection is quite low. Another intuitive way to look at the result is as follows: any consensus protocol needs to solve the Infection problem from a certain source node when it starts from a certain initial opinion vector.

Theorem 3.2.

Let $\mathcal{P}$ be a protocol reaching $\delta$-Almost Consensus with probability at least $1 - \gamma$. Then, a source node $v_i$ and an initial opinion vector $\mathbf{x}^{(i)}$ exist such that $\mathcal{P}$, starting from $\mathbf{x}^{(i)}$, solves the $(n - 2\delta)$-Infection problem w.r.t. $v_i$ with probability at least $\frac{1 - 2\gamma}{n} - \gamma$.

Proof.

Let $v_1, \dots, v_n$ be an arbitrary ordering of the vertices and, for any $i \in \{0, 1, \dots, n\}$, let $\mathbf{x}^{(i)}$ be the initial opinion vector in which the first $i$ nodes start with opinion $1$ and the other $n - i$ with opinion $0$. Moreover, let $Y_i$ be the indicator random variable taking value $1$ when $\mathcal{P}$, starting from $\mathbf{x}^{(i)}$, reaches $\delta$-Almost Consensus on value $1$, and taking value $0$ otherwise (note that $Y_i = 0$ also when the protocol fails to reach $\delta$-Almost Consensus).

Since the protocol converges to an almost consensus with probability at least $1 - \gamma$ and, by validity, the agreed value must be $1$ from $\mathbf{x}^{(n)}$ and $0$ from $\mathbf{x}^{(0)}$, it must hold that

$\Pr[Y_n = 1] \geq 1 - \gamma$

and

$\Pr[Y_0 = 1] \leq \gamma$.

Hence, a node $v_i$ exists such that

$\Pr[Y_i = 1] - \Pr[Y_{i-1} = 1] \geq \frac{1 - 2\gamma}{n}$.   (1)

We now show that, when $\mathcal{P}$ starts from opinion vector $\mathbf{x}^{(i)}$, it is (also) solving the $(n - 2\delta)$-Infection problem w.r.t. source node $v_i$.

First observe that $\mathbf{x}^{(i-1)}$ and $\mathbf{x}^{(i)}$ differ only at node $v_i$, and let us name $I_{v_i}$ the set of nodes infected by node $v_i$, starting from opinion vector $\mathbf{x}^{(i)}$. We will prove that, if $Y_i = 1$ and $Y_{i-1} = 0$, then either $|I_{v_i}| \geq n - 2\delta$ or $\mathcal{P}$ fails to reach $\delta$-Almost Consensus starting from $\mathbf{x}^{(i-1)}$. More formally, if we name $F$ the event “$\mathcal{P}$ fails to reach $\delta$-Almost Consensus starting from $\mathbf{x}^{(i-1)}$”,

we claim that

$\Pr[Y_i = 1 \wedge Y_{i-1} = 0] \leq \Pr[\,|I_{v_i}| \geq n - 2\delta\,] + \Pr[F]$.   (2)

In order to obtain (2) we equivalently show that (coupling the two executions on the same random choices), if $Y_i = 1$, $Y_{i-1} = 0$, and $\mathcal{P}$ does not fail starting from $\mathbf{x}^{(i-1)}$, it must hold that $|I_{v_i}| \geq n - 2\delta$. First observe that, if $Y_i = 1$, then $\mathcal{P}$ reaches almost consensus on $1$ starting from $\mathbf{x}^{(i)}$, therefore the number of nodes which output $1$ is at least $n - \delta$. Moreover, if $Y_{i-1} = 0$ and $\mathcal{P}$ does not fail starting from $\mathbf{x}^{(i-1)}$, then it reaches almost consensus on $0$, so the number of nodes which output $1$ is at most $\delta$. Thus, moving the system from input $\mathbf{x}^{(i-1)}$ to input $\mathbf{x}^{(i)}$, the number of nodes switching their output from $0$ to $1$ must be at least $n - 2\delta$. Finally observe that, since $\mathbf{x}^{(i-1)}$ and $\mathbf{x}^{(i)}$ differ only at node $v_i$, the nodes that change their output value must be infected by $v_i$ (according to Definition 3.1). Hence, $|I_{v_i}| \geq n - 2\delta$, which implies that, when starting from $\mathbf{x}^{(i)}$, $\mathcal{P}$ is also solving the $(n - 2\delta)$-Infection problem w.r.t. node $v_i$.

To conclude the proof, it remains to bound (from below) the probability with which this infection happens. From (1), (2) and the union bound, it follows that

$\frac{1 - 2\gamma}{n} \leq \Pr[Y_i = 1] - \Pr[Y_{i-1} = 1] \leq \Pr[Y_i = 1 \wedge Y_{i-1} = 0] \leq \Pr[\,|I_{v_i}| \geq n - 2\delta\,] + \gamma$.

Thus,

$\Pr[\,|I_{v_i}| \geq n - 2\delta\,] \geq \frac{1 - 2\gamma}{n} - \gamma$. ∎

Remark.

Observe that the factor $2$ in the parameter $n - 2\delta$ in the statement of Theorem 3.2 is tight. Indeed, consider a protocol in which each node simply outputs its input value: such a protocol is trivially an $(n/2)$-Almost Consensus protocol, while the set of infected nodes has size at most $1$.

3.1 Specific lower bounds for Consensus

Theorem 3.2 allows us to derive lower bounds for Consensus for specific communication models and resources by using lower bounds for the Infection problem. In fact, by simply restating Theorem 3.2, we obtain the following.

Corollary 3.3.

Let $R$ be any fixed resource (e.g., time, work, bit-communication, etc.) defined on a distributed system $\mathcal{S}$, and suppose that any protocol which uses at most $r$ units of $R$ fails, w.h.p., to solve the $(n - 2\delta)$-Infection problem from any source node. Then, any protocol on this model reaching $\delta$-Almost Consensus w.h.p. must use at least $r$ units of $R$.

We now apply the above corollary in different settings. Unless stated otherwise, all results in this subsection refer to the complete graph of $n$ nodes.

The Gossip model.

The first lower bound is on the Consensus time for the uniform GOSSIP model (and hence for the uniform PUSH and the uniform PULL as well). We first state a simple technical result which will be handy also in the proof of Corollary 3.11. This is a well-known result in the community; however, for the sake of completeness, we give a self-contained proof.

Lemma 3.4.

Consider the uniform GOSSIP model and fix any constants $\beta$ and $\gamma$ such that $0 < \beta < \gamma < 1$. Then, there is a sufficiently small constant $c > 0$ such that, starting from any subset of infected nodes of size at most $n^{\beta}$, the $n^{\gamma}$-Infection problem requires at least $c \log n$ rounds, w.h.p.

Proof.

The proof shows that, starting from any subset of infected nodes, the set of infected nodes grows by at most a constant factor at each round, w.h.p. The latter fact easily implies that, if $c$ is sufficiently small, then, within $c \log n$ rounds, at most $n^{\gamma}$ nodes are infected w.h.p.

In order to show that the set of infected nodes increases by at most a constant factor, let $V$ be the set of nodes and $I_t$ the set of infected nodes at time $t$. At each round, the number of (bidirectional) communication edges between $I_t$ and the uninfected nodes is at most $2|I_t|$ in expectation. By a simple application of Chernoff bounds, it follows that the growth of the number of infected nodes is bounded by a constant multiplicative factor $\lambda$, w.h.p. Since $|I_0| \leq n^{\beta}$ and, w.h.p., $|I_{t+1}| \leq \lambda |I_t|$, we get $|I_t| \leq n^{\beta} \lambda^{t}$, and the latter is smaller than $n^{\gamma}$ as long as $t \leq c \log n$, for a sufficiently small constant $c$.

Notice that, to get a concentration result over the whole process, we just observe that if every event in some family holds w.h.p., then, by the union bound, the intersection of any polylogarithmic number of such events holds w.h.p. ∎

We can combine the above lemma (in the case where just one source node is initially infected) with Corollary 3.3 and get the following.

Corollary 3.5.

Consider the uniform GOSSIP model and fix any constant $\gamma$ such that $0 < \gamma < 1$. Then, any protocol reaching $\delta$-Almost Consensus w.h.p., with $n - 2\delta \geq n^{\gamma}$, requires $\Omega(\log n)$ communication rounds.

Next, consider the general PUSH model and fix any (Broadcast) protocol: then, starting from any source node, the set of infected nodes at most doubles at each round. Hence, Corollary 3.3 also implies an $\Omega(\log n)$ lower bound for the general PUSH model.

Corollary 3.6.

Let $\gamma$ be any positive constant. Then any protocol reaching $\delta$-Almost Consensus w.h.p. on the general PUSH model, with $n - 2\delta \geq n^{\gamma}$, requires $\Omega(\log n)$ communication rounds, even when the nodes have unique identifiers.

Remark 3.7.

Corollary 3.6 should be contrasted with the fact that in the general PULL model, assuming unique identifiers, (valid) Consensus can be solved in a single round, by having all nodes adopt the input value of a specific node. On the other hand, if we assume that nodes do not initially share unique identifiers (for example, in the PULL model with numbered ports), it is easy to see that the Broadcast problem cannot be solved w.h.p. in $o(\log n)$ time, since the number of nodes from which a given node can have received any information increases by at most a factor $2$ at each round.

Population Protocols.

Another interesting model where we can apply our reduction is the Population Protocols model with uniform/probabilistic scheduler, as defined in [6, 8]. Broadcast in this model has essentially the same complexity as in the asynchronous uniform GOSSIP model: in particular, a result similar to Lemma 3.4 holds for the Broadcast time (see for instance [48]): w.h.p., any Population Protocol with uniform/probabilistic scheduler on the complete graph cannot infect more than $n^{\gamma}$ nodes (for any constant $\gamma < 1$) within $c\, n \log n$ activation steps (and hence $c \log n$ parallel time) or with fewer than $c \log n$ messages per node, for a sufficiently small constant $c > 0$. Hence, by using Corollary 3.3, we can state the following result.

Corollary 3.8.

Let $\gamma$ be any positive constant smaller than $1$. Then any Population Protocol (with uniform/probabilistic scheduler) reaching $\delta$-Almost Consensus w.h.p., with $n - 2\delta \geq n^{\gamma}$, requires $\Omega(n \log n)$ steps (and hence $\Omega(\log n)$ parallel time) and $\Omega(\log n)$ messages per node.

Radio Networks.

In the synchronous Radio Network model [4, 10, 24, 67], the presence of message collisions on the (unique) shared radio frequency is modelled by the following communication paradigm: a node can receive a message at a given round $t$ if and only if exactly one of its neighbors transmits at round $t$. We consider the model setting with no collision detection, i.e., the nodes of the graph are not able to get any information when a collision occurs.

In [4] the authors derive an $\Omega(\log^2 n)$ lower bound on the radio-broadcast time in networks of constant diameter. In particular, their proof relies on a construction of a family of graphs of $n$ nodes having diameter $2$ where every protocol that runs for no more than $c \log^2 n$ rounds (where $c$ is a sufficiently small constant) cannot infect at least one node w.h.p. (in fact, with probability 1). We observe that the proof can be adapted in order to hold for any choice of the source and when every node knows the graph topology, so for any choice of the initial configuration. This implies that their lower bound also applies to the time required by any protocol to solve the $m$-Infection problem (with $m = n$) according to Definition 3.1. Then, from Theorem 3.2 we get a lower bound on the Consensus time in Radio Networks.

Corollary 3.9.

Consider the Radio Network model. There is a family of constant-diameter graphs where any (randomized) protocol reaching Consensus requires $\Omega(\log^2 n)$ time, w.h.p.

3.2 A lower bound for $\delta$-Almost Majority Consensus

The conditions required by Majority Consensus are much stronger than the validity one, and they make the relationship between this task and the $m$-Infection problem with multiple source nodes rather simple to derive.

Lemma 3.10.

Let $R$ be any fixed resource defined on a distributed system $\mathcal{S}$ and suppose there is no Infection protocol that, starting from any subset $S$ of nodes with $|S| = b$, can inform at least $n - \delta$ nodes by using at most $r$ units of $R$, w.h.p. Then, any protocol on this model reaching $\delta$-Almost Majority Consensus w.h.p. from initial bias $b$ must use at least $r$ units of $R$.

Proof.

W.l.o.g., let $n - b$ be an even number, where $b$ is the initial bias. Consider an arbitrary labeling $v_1, \dots, v_n$ of the nodes and two initial input vectors $\mathbf{x}$ and $\mathbf{y}$ such that

$x_{v_j} = 1$ and $y_{v_j} = 0$ for every $j \leq b$, while, for every $j > b$, $x_{v_j} = y_{v_j}$ and exactly half of the nodes $v_{b+1}, \dots, v_n$ hold each opinion.   (3)

In this way, the majority opinion is $1$ in $\mathbf{x}$ and $0$ in $\mathbf{y}$, in both cases with bias $b$. In order for a node $u$ to converge to the correct majority opinion, it is necessary that it is able to distinguish between configuration $\mathbf{x}$ and $\mathbf{y}$. Since $\mathbf{x}$ and $\mathbf{y}$ are identical for all nodes $v_j$ with $j > b$, it is then necessary for $u$ to be infected by the set of source nodes $\{v_1, \dots, v_b\}$; since this must happen for at least $n - \delta$ nodes, the proof is completed. ∎

A specific lower bound for Majority Consensus.

The above lemma allows us to obtain a logarithmic lower bound on the convergence time required by any almost Majority-Consensus protocol on uniform GOSSIP (and, hence, on uniform PULL and uniform PUSH). Notice that the bound holds even for protocols achieving the task with constant probability only.

Corollary 3.11.

Consider $\delta$-Almost Majority Consensus in uniform GOSSIP starting with initial bias $b \leq n^{1-\alpha}$, for any positive constant $\alpha$. Then, any protocol that solves the task above with probability at least $2/3$ requires $\Omega(\log n)$ rounds.

Proof.

The proof follows from Lemma 3.4 and Lemma 3.10, where the resource $R$ is the number of rounds in the uniform GOSSIP model. ∎

4 Lower Bounds in the Noisy Model

The main result of this section is the following lower bound for the $\delta$-Almost Consensus problem in the noisy general PULL model (and hence in the noisy uniform PULL as well).

Theorem 4.1.

Let $\delta$ be any real such that $0 < \delta \leq 1/100$ and consider any protocol $\mathcal{P}$ for the noisy general PULL model with noise parameter $\epsilon$. If $\mathcal{P}$ solves $\delta n$-Almost Consensus with probability at least $1 - \delta$, then it requires at least $\Omega\!\left(\frac{1}{\epsilon^{2}} \log \frac{1}{\delta}\right)$ rounds. (Notice the double role the parameter $\delta$ has in this statement.)

Proof Outline.

W.l.o.g., we assume that during the execution of $\mathcal{P}$ every node pulls another node at each round. Indeed, if this is not true, we can simply consider a protocol where this property holds but the extra messages are ignored, obtaining an equivalent protocol.

Suppose that we have a protocol that solves $\delta n$-Almost Consensus with probability at least $1 - \delta$. The definition of $\delta n$-Almost Consensus implies that each node has an initial one-bit input and produces a one-bit output, and:

  • If the initial opinion vector is all-zeroes, then with probability at least $1 - \delta$ all nodes but at most $\delta n$ output zero.

  • If the initial opinion vector is all-ones, then with probability at least $1 - \delta$ all nodes but at most $\delta n$ output one.

  • For every initial opinion vector, with probability at least $1 - \delta$ all nodes but at most $\delta n$ agree.

From the above constraints, we derive the existence of (at least) one node $u$ which must get from the rest of the system “enough information” in order to decide its output. More in detail, in Lemma 4.3 we show that any Almost Consensus protocol implies a solution to a two-party communication problem over a noisy communication channel, where one of the two parties represents node $u$ and needs to act differently according to the information owned by the other party, namely the rest of the graph.

In Lemma 4.7, we then show that, in the two-party problem above, $u$ has to receive at least $\Omega(\epsilon^{-2} \log \frac{1}{\delta})$ bits (and thus, according to the noisy PULL model, perform at least the same number of rounds) in order to recover the information owned by the rest of the graph with a sufficiently large probability, and thus decide its output. This implies the desired bound. ∎

4.1 Reduction to the Two-Party Protocol

As outlined in the proof of Theorem 4.1, we start by showing that, if we have a (valid) Almost Consensus protocol, we can convert it into a two-party communication protocol between a party $A$ and a party $B$ with certain properties. More formally, we give the following definition.

Definition 4.2.

A $(T, p, q)$-Two-Party Protocol is a two-party noisy communication protocol between parties $A$ and $B$ such that

  • $A$ starts with a bit $b$ and at the end $B$ outputs one bit,

  • $B$ receives $T$ messages, each one of one bit,

  • each bit of communication passes through a binary symmetric channel that flips the bit with probability $\frac{1-\epsilon}{2}$,

  • if $b = 0$ then $B$ outputs $0$ with probability at least $p$,

  • if $b = 1$ then $B$ outputs $0$ with probability at most $q$.

We can now state the formal reduction result.

Lemma 4.3.

Let $\mathcal{P}$ be a protocol that solves the $\delta n$-Almost Consensus problem in $T$ rounds with probability at least $1 - \delta$ in the noisy general PULL model, for some $\delta \leq 1/100$. Then, there exists a $(T, p, q)$-Two-Party Protocol with $p \geq \frac{1-\delta}{3}$ and $q \leq 2\sqrt{\delta}$, where $\epsilon$ is the noise parameter.

Proof.

As defined in the previous section, let $\mathbf{x}^{(i)}$ be the initial opinion vector such that the first $i$ nodes initially support opinion $1$ and the others support opinion $0$. Let “$A_c$” be the event “$\mathcal{P}$ converges to a consensus where all nodes, but at most $\delta n$, output opinion $c$”, where $c \in \{0, 1\}$. By the hypotheses of the lemma we have that:

  • During the execution of $\mathcal{P}$, each node exchanges at most $T$ one-bit messages;

  • $\Pr[A_0 \mid \mathbf{x}^{(0)}] \geq 1 - \delta$;

  • $\Pr[A_1 \mid \mathbf{x}^{(n)}] \geq 1 - \delta$;

  • For any initial opinion vector $\mathbf{x}$, the probability that $\mathcal{P}$ reaches an almost consensus is at least $1 - \delta$, namely $\Pr[A_0 \cup A_1 \mid \mathbf{x}] \geq 1 - \delta$.

Thanks to the agreement property we know that $\Pr[A_0 \cup A_1 \mid \mathbf{x}^{(n/2)}] \geq 1 - \delta$. Since there are only two possible opinions, then $\Pr[A_0 \mid \mathbf{x}^{(n/2)}] \geq \frac{1-\delta}{2}$ or $\Pr[A_1 \mid \mathbf{x}^{(n/2)}] \geq \frac{1-\delta}{2}$. W.l.o.g. we assume that $\Pr[A_0 \mid \mathbf{x}^{(n/2)}] \geq \frac{1-\delta}{2}$ holds; indeed, if this is not true, we can simply rename opinion $0$ as opinion $1$. Now we leverage this property in order to show that, starting from $\mathbf{x}^{(n/2)}$, a large fraction of nodes have a constant probability to output opinion $0$. For any node $u$, we define “$B_u$” to be the event “$u$ outputs opinion $0$ in the execution starting from $\mathbf{x}^{(n/2)}$”.

Fact 4.4.

If $\Pr[A_0 \mid \mathbf{x}^{(n/2)}] \geq \frac{1-\delta}{2}$, then a node subset $S_0$ with $|S_0| \geq (1 - 3\delta)n$ exists such that for any $u \in S_0$ it holds $\Pr[B_u] \geq \frac{1-\delta}{3}$.

Proof.

Let $\bar{S}_0$ be the set of nodes $u$ such that $\Pr[B_u \mid A_0] < \frac{2}{3}$, and let $S_0 = V \setminus \bar{S}_0$. Since the expectation of the number of nodes that output $1$ conditioned on the event “$A_0$” is at most $\delta n$, the size of $\bar{S}_0$ is at most $3\delta n$. Indeed,

$\delta n \;\geq\; \mathbf{E}[\,\#\{\text{nodes outputting } 1\} \mid A_0\,] \;\geq\; \sum_{u \in \bar{S}_0} \Pr[\,u \text{ outputs } 1 \mid A_0\,] \;>\; \frac{|\bar{S}_0|}{3}.$

This implies that $|S_0| \geq (1 - 3\delta)n$. To conclude the proof, we observe that, for any $u \in S_0$, we have

$\Pr[B_u] \;\geq\; \Pr[B_u \mid A_0] \cdot \Pr[A_0] \;\geq\; \frac{2}{3} \cdot \frac{1-\delta}{2} \;=\; \frac{1-\delta}{3}.$ ∎

We now consider the initial opinion vector $\mathbf{x}^{(n)}$, where all the nodes support opinion $1$. Recall that $\Pr[A_1 \mid \mathbf{x}^{(n)}] \geq 1 - \delta$. Using an argument similar to that of Fact 4.4, we can prove the following.

Fact 4.5.

If $\Pr[A_1 \mid \mathbf{x}^{(n)}] \geq 1 - \delta$, then a node subset $S_1$ with $|S_1| \geq (1 - \sqrt{\delta})n$ exists such that for any $u \in S_1$, $\Pr[\,u \text{ outputs } 0 \text{ starting from } \mathbf{x}^{(n)}\,] \leq 2\sqrt{\delta}$.

Proof.

Let $\bar{S}_1$ be the set of nodes $u$ such that $\Pr[\,u \text{ outputs } 0 \text{ starting from } \mathbf{x}^{(n)}\,] > 2\sqrt{\delta}$, and set $S_1 = V \setminus \bar{S}_1$. Since the expected number of nodes that output opinion $0$ conditioned on the event “$A_1$” is at most $\delta n$, and the complementary event has probability at most $\delta$, the unconditional expected number of nodes that output $0$ is at most $2\delta n$; hence, by Markov's inequality, the size of $\bar{S}_1$ is at most $\sqrt{\delta} n$, and hence $|S_1| \geq (1 - \sqrt{\delta})n$.

To conclude the proof, observe that, for any $u \in S_1$, by construction, $\Pr[\,u \text{ outputs } 0 \text{ starting from } \mathbf{x}^{(n)}\,] \leq 2\sqrt{\delta}$. ∎

By combining Facts 4.4 and 4.5, we obtain the following

Fact 4.6.

Let $\delta \leq 1/100$. At least one node $u$ exists such that:
(i) its initial opinion is $1$ in both $\mathbf{x}^{(n/2)}$ and $\mathbf{x}^{(n)}$, and
(ii) $\Pr[B_u] \geq \frac{1-\delta}{3}$ and $\Pr[\,u \text{ outputs } 0 \text{ starting from } \mathbf{x}^{(n)}\,] \leq 2\sqrt{\delta}$.

Proof.

From Facts 4.4 and 4.5, if $\delta \leq 1/100$ then $|\bar{S}_0| + |\bar{S}_1| \leq 3\delta n + \sqrt{\delta} n < n/2$, and thus $|S_0 \cap S_1| > n/2$. Since the nodes having initial opinion $1$ in both vectors $\mathbf{x}^{(n/2)}$ and $\mathbf{x}^{(n)}$ are exactly $v_1, \dots, v_{n/2}$, there must exist at least one node $u$ having Properties (i) and (ii). ∎

We now realize a $(T, p, q)$-Two-Party Protocol between parties $A$ and $B$. If $A$ has input $1$, then $B$ simulates $u$ with initial opinion $1$ and $A$ simulates all other nodes as if they all had initial opinion $1$. If $A$ has input $0$, then $B$ simulates $u$ with initial opinion $1$ and $A$ simulates all other nodes as if their initial opinion vector were the restriction of $\mathbf{x}^{(n/2)}$, i.e., $n/2 - 1$ ones and $n/2$ zeroes. In the simulation, $A$ and $B$ need to communicate (via the binary symmetric channel) only when $u$ sends or receives messages in $\mathcal{P}$. At the end, $B$ will output $0$ or $1$, depending on the output of $u$ in the Almost Consensus protocol $\mathcal{P}$.

Note that at each round of the simulation of $\mathcal{P}$, $B$ receives exactly $1$ bit (at the beginning of the proof we assumed, w.l.o.g., that at each round each node pulls another node) and no other information is available to it. Hence, in the resulting Two-Party Protocol, the only information obtained by $B$ is the bit received in that round from $A$ (corresponding to the rest of the graph).

Thanks to Fact 4.6, if $A$ has input $1$, then with probability at least $1 - 2\sqrt{\delta}$, $B$ will output $1$. On the other hand, if $A$ has input $0$, then with probability at least $\frac{1-\delta}{3}$, $B$ will output $0$. Hence we obtain a $(T, p, q)$-Two-Party Protocol with $p \geq \frac{1-\delta}{3}$ and $q \leq 2\sqrt{\delta}$. ∎

4.2 Lower bound for the Two-Party Protocol

Lemma 4.7.

Any $(T, p, q)$-Two-Party Protocol with $p > q$ requires a number of rounds $T$ such that $T = \Omega\!\left(\frac{1}{\epsilon^{2}}\, d_{\mathrm{KL}}(p \,\|\, q)\right)$, where $d_{\mathrm{KL}}(p \,\|\, q) = p \log\frac{p}{q} + (1-p)\log\frac{1-p}{1-q}$ is the KL divergence between the Bernoulli distributions of parameters $p$ and $q$. In particular, for $p = \Omega(1)$ and $q = O(\sqrt{\delta})$, this gives $T = \Omega\!\left(\frac{1}{\epsilon^{2}} \log\frac{1}{\delta}\right)$.

Proof.

In any interaction between $A$ and $B$ of $T$ rounds, we name view the sequence of all $T$ one-bit messages received by $B$ during the interaction. Notice that this sequence determines the sequence of messages sent by $B$ and the final output of $B$. Let $V_1$ be the random variable that represents the (random) view when $A$’s input is $1$ and let $V_0$ be the random variable that represents the (random) view when $A$’s input is $0$.

Recall that the Kullback–Leibler divergence between $V_0$ and $V_1$ is defined as

$D(V_0 \,\|\, V_1) = \sum_{v} \Pr[V_0 = v] \log\frac{\Pr[V_0 = v]}{\Pr[V_1 = v]}.$

We will prove the following two facts, which easily imply the claim of the lemma:

$D(V_0 \,\|\, V_1) \geq d_{\mathrm{KL}}(p \,\|\, q)$,   (4)
$D(V_0 \,\|\, V_1) \leq T \cdot O(\epsilon^{2})$.   (5)

To prove (4) we use the data-processing inequality, which (in particular) states that for every (possibly random) function $f$ we have

$D(f(V_0) \,\|\, f(V_1)) \leq D(V_0 \,\|\, V_1).$   (6)

For a view $v$, define $f(v)$ to be the output of $B$ for that view. Then $f(V_0)$ and $f(V_1)$ are $0/1$ random variables. If we call $p_0 = \Pr[f(V_0) = 0]$ and $p_1 = \Pr[f(V_1) = 0]$, we get

$D(f(V_0) \,\|\, f(V_1)) = p_0 \log\frac{p_0}{p_1} + (1 - p_0)\log\frac{1 - p_0}{1 - p_1},$

and recalling that $p_0 \geq p$ and $p_1 \leq q$ we get

$D(f(V_0) \,\|\, f(V_1)) \geq p \log\frac{p}{q} + (1 - p)\log\frac{1 - p}{1 - q} = d_{\mathrm{KL}}(p \,\|\, q),$   (7)

which proves (4).

To prove (5) we use the chain rule. If we have two pairs of jointly distributed random variables $(X_1, X_2)$ and $(Y_1, Y_2)$, then the conditional KL divergence is defined as
\[ D(X_2 \parallel Y_2 \mid X_1, Y_1) = \sum_{v} \Pr[X_1 = v] \, D\big( (X_2 \mid X_1 = v) \parallel (Y_2 \mid Y_1 = v) \big). \]

Letting $\circ$ be the operation that denotes the concatenation of two sequences of random variables, the chain rule is
\[ D(X_1 \circ X_2 \parallel Y_1 \circ Y_2) = D(X_1 \parallel Y_1) + D(X_2 \parallel Y_2 \mid X_1, Y_1). \]

Since any view has length $T$, we write $X = X_1 \circ \cdots \circ X_T$ and $Y = Y_1 \circ \cdots \circ Y_T$. Then we can write the KL divergence of $X$ and $Y$ as
\[ D(X \parallel Y) = \sum_{i=1}^{T} D(X_i \parallel Y_i \mid X_1 \cdots X_{i-1},\, Y_1 \cdots Y_{i-1}). \]

For each $i$ we can easily bound $D(X_i \parallel Y_i \mid X_1 \cdots X_{i-1},\, Y_1 \cdots Y_{i-1})$: indeed, both “$X_i$ conditioned on $X_1 \cdots X_{i-1}$” and “$Y_i$ conditioned on $Y_1 \cdots Y_{i-1}$” are binary random variables produced by a transmission through the binary symmetric channel, so that, for any prefix $v$, it holds

\[ \frac{1-\varepsilon}{2} \leq \Pr[X_i = 1 \mid X_1 \cdots X_{i-1} = v],\; \Pr[Y_i = 1 \mid Y_1 \cdots Y_{i-1} = v] \leq \frac{1+\varepsilon}{2}. \qquad (8) \]

Thus we have

\[ D(X \parallel Y) \leq \sum_{i=1}^{T} \max_{p,\, q \,\in\, \left[\frac{1-\varepsilon}{2},\, \frac{1+\varepsilon}{2}\right]} \left( p \log \frac{p}{q} + (1-p) \log \frac{1-p}{1-q} \right) \qquad (9) \]
\[ \leq T\, \varepsilon \log \frac{1+\varepsilon}{1-\varepsilon} \qquad (10) \]
\[ \overset{(a)}{\leq} T \cdot O\left(\varepsilon^{2}\right), \qquad (11) \]

where in (a) we use the Taylor approximation $\log \frac{1+\varepsilon}{1-\varepsilon} = 2\varepsilon + O(\varepsilon^{3})$. Thus, combining (4) and (5), we can conclude that $T = \Omega\left(\varepsilon^{-2} \log \delta^{-1}\right)$. ∎
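
As a quick numerical sanity check of steps (10)–(11), the snippet below (ours, added for illustration) compares the worst-case per-round divergence $\varepsilon \log \frac{1+\varepsilon}{1-\varepsilon}$ with its Taylor estimate $2\varepsilon^{2}$:

    import math

    def per_round_kl(eps):
        # Worst-case per-round divergence from (10): the KL divergence between
        # Bernoulli((1 + eps)/2) and Bernoulli((1 - eps)/2).
        return eps * math.log((1 + eps) / (1 - eps))

    for eps in [0.2, 0.1, 0.05, 0.01]:
        # The two printed values agree up to O(eps^4).
        print(eps, per_round_kl(eps), 2 * eps ** 2)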

4.3 Absence of reliable components in communication mechanism

As discussed in the Introduction, Theorem 4.1 should be contrasted with the result on the noisy uniform PUSH model [38, 39], in which at each round a node may send a bit to a random neighbor and, upon being received, the bit may be flipped by the communication noise with probability .

In [38] it is assumed that the protocol satisfies a symmetry hypothesis, i.e., a node’s choice of whether or not to communicate cannot depend on the value of the bit that the node wishes to communicate. Without such an assumption, the mere act of communicating can be exploited to reliably solve the valid consensus problem (and many others).

More precisely, we can have nodes sending messages at even rounds131313Note that this expedient relies on nodes sharing a synchronous binary clock. In [38], it is shown how such a clock can be easily obtained if, for example, nodes are initially inactive and become active upon receiving the first message. only if they wish to communicate value 0, and at odd rounds only if they wish to communicate value 1.
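
The following toy Python sketch (our illustration; the function names and loop structure are assumptions, not the construction of [38]) makes the point explicit: the receiver decodes the bit from the parity of the rounds at which messages arrive, so decoding succeeds even if the noise completely randomizes every transmitted payload:

    def send_rounds(bit, num_rounds):
        # Timing encoding: a node transmits only at rounds whose parity equals
        # the bit it wants to communicate; the payload itself is irrelevant.
        return [r for r in range(num_rounds) if r % 2 == bit]

    def decode(arrival_rounds):
        # The receiver ignores the (possibly noise-corrupted) payloads and
        # decodes the bit from the parity of the arrival rounds alone.
        even = sum(1 for r in arrival_rounds if r % 2 == 0)
        odd = len(arrival_rounds) - even
        return 0 if even > odd else 1

    print(decode(send_rounds(1, 10)))  # -> 1, regardless of any payload noise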

Our lower bound is indeed not applicable to the noisy uniform PUSH model, because it is not possible to reduce Consensus to the Two-Party Protocol in this model. Precisely, Definition 4.2 requires that each bit of information passes through the noisy channel (Property (iii)). In the noisy general PULL model, this is verified at the end of the proof of Lemma 4.3. However, this is not the case in the noisy uniform PUSH model since, besides the (noisy) content of the message, the reception of a message itself communicates to the receiving node the fact that another node has chosen to communicate something at the present round, and this piece of information is not subject to noise.

Hence, the above comparison with the noisy PUSH model suggests that the lower bound in Theorem 4.1 crucially relies on the fact that no component of the communication model is immune to the action of noise.

4.4 A lower bound for Broadcast

As discussed in Section 1.3, [16] gives an $\Omega\left(\varepsilon^{-2} n\right)$ lower bound for Broadcast in the noisy uniform PULL model. Using an argument similar to that used in the proof of our Lemma 4.7 in Section 4.2, in what follows we give a sketch of how the lower bound above can be strengthened to $\Omega\left(\varepsilon^{-2} n \log n\right)$, which holds for any protocol solving the binary Broadcast problem w.h.p.

Lemma 4.8.

Any protocol that solves Broadcast in the noisy uniform PULL model w.h.p. requires $\Omega\left(\varepsilon^{-2} n \log n\right)$ rounds.

Sketch of Proof..

In [16], the authors prove the following: let $\Pi$ be a $T$-round protocol for Broadcast in the 1-bit noisy uniform PULL with noise parameter $\varepsilon$, starting from an arbitrary source. Let $D_0$ and $D_1$ be the distributions of the sequence of all the messages received by all nodes through all the rounds, assuming that the source message is 0 or 1, respectively. Then the KL divergence between $D_0$ and $D_1$ is $O\left(\varepsilon^{2} T / n\right)$. If the protocol succeeds with constant probability, then the statistical distance between $D_0$ and $D_1$ has to be $\Omega(1)$, and hence the KL divergence between $D_0$ and $D_1$ also has to be $\Omega(1)$, leading to the $\Omega\left(\varepsilon^{-2} n\right)$ lower bound of [16].

The key observation here is that if the success probability is set to be no smaller than $1 - n^{-a}$, for some constant $a > 0$, then, by using an argument similar to that in Lemma 4.7, we derive that the KL divergence between $D_0$ and $D_1$ must be $\Omega(\log n)$, and so the lower bound becomes $\Omega\left(\varepsilon^{-2} n \log n\right)$.
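
For clarity, the two estimates combine as follows (a one-line computation we spell out under the notation and bounds sketched above):
\[ \Omega(\log n) \;\leq\; D(D_1 \parallel D_0) \;\leq\; O\!\left(\frac{\varepsilon^{2}\, T}{n}\right) \qquad \Longrightarrow \qquad T \;=\; \Omega\left(\varepsilon^{-2}\, n \log n\right). \]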

5 Upper Bounds in the Noisy Model

In Theorem 4.1, we obtained an $\Omega\left(\varepsilon^{-2} \log n\right)$ lower bound on the number of rounds required by any protocol for Almost Consensus and Almost Majority Consensus that works w.h.p. in the noisy general PULL model. In the next subsection we show that, in this model, this lower bound is tight for both tasks. As discussed in the Introduction, combined with the lower bound for Broadcast in [16], this result demonstrates a strong complexity gap between Consensus and Broadcast in the noisy uniform PULL model.

5.1 Upper bound for Consensus in the noisy PULL model

Theorem 5.1.

In the noisy uniform PULL model with noise parameter $\varepsilon$, a protocol exists that achieves Consensus within $O\left(\varepsilon^{-2} \log n\right)$ rounds and $O\left(\varepsilon^{-2} \log n\right)$ communication per node, w.h.p. The protocol requires only a small amount of local memory.

Moreover, if the protocol starts from any initial opinion vector with bias $\Omega\left(\sqrt{n \log n}\right)$, then it guarantees Majority Consensus, w.h.p.

The protocol we refer to in the above theorem works in two consecutive phases. Each phase is a simple application of the well-known $k$-Majority dynamics [13, 14]:

$k$-Majority. At every round, each node samples $k$ neighbours141414In the binary case, when $k$ is odd the $k$-Majority is stochastically equivalent to the $(k+1)$-Majority where ties are broken u.a.r. (see Lemma 17 in [39]). For this reason, in this section we assume that $k$ is odd. independently and u.a.r. (with replacement). Then, the node updates its opinion according to the majority opinion in the sample.

Notice that $k$-Majority, as stated above, assumes a uniform $k$-PULL model where, at each round, every node can pull one message from each of $k$ neighbors chosen independently and uniformly at random with replacement151515The assumption that neighbors are chosen independently and uniformly at random with replacement is consistent with previous work [12, 45].. However, it is easy to verify that this parallel process can be implemented on the uniform PULL model using additional local memory and with a slowdown factor of $k$ for its convergence time. In the rest of this section, we will thus consider the following two-phase protocol on the uniform $k$-PULL model.

Majority Protocol. Let $c$ be a sufficiently large positive constant161616The value of $c$ will be fixed later in the analysis.. Every node performs $c \log n$ rounds of $k$-Majority with $k = \Theta\left(\varepsilon^{-2}\right)$, followed by one round of the $k$-Majority with $k = \Theta\left(\varepsilon^{-2} \log n\right)$.
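
The following Python sketch (our illustration: the noise convention — each pulled bit flipped with probability $(1-\varepsilon)/2$ — the specific constants, and all names are assumptions rather than the constants fixed in the analysis) simulates one synchronous round of noisy $k$-Majority and the two-phase schedule above:

    import random

    def noisy_k_majority_round(opinions, k, eps):
        # One synchronous round of k-Majority in the noisy uniform k-PULL model:
        # every node pulls k opinions, each from a node chosen u.a.r. with
        # replacement and flipped with probability (1 - eps)/2 (assumed noise
        # convention), then adopts the majority opinion of its sample.
        n = len(opinions)
        new_opinions = []
        for _ in range(n):
            ones = 0
            for _ in range(k):
                bit = opinions[random.randrange(n)]
                if random.random() >= (1 + eps) / 2:  # the channel flips the bit
                    bit = 1 - bit
                ones += bit
            new_opinions.append(1 if ones > k / 2 else 0)  # k odd, so no ties
        return new_opinions

    # Two-phase schedule mirroring the Majority Protocol (constants are ours):
    # O(log n) rounds with k = Theta(1/eps^2), then one round with larger k.
    ops = [1] * 600 + [0] * 400
    for _ in range(25):
        ops = noisy_k_majority_round(ops, k=301, eps=0.1)
    ops = noisy_k_majority_round(ops, k=2001, eps=0.1)
    print(sum(ops))  # typically 1000: consensus on the initial majority opinion

Note the role of the two phases in the simulation: with $k = \Theta\left(\varepsilon^{-2}\right)$ the bias grows quickly but the noise keeps a small fraction of nodes in disagreement, and only the final round with the larger sample size drives every node to the same opinion.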

The proof of Theorem 5.1 will proceed according to the following scheme: we will show that

  • If $k = \Theta\left(\varepsilon^{-2}\right)$, then

    • starting from any opinion vector, within $O(\log n)$ rounds of the $k$-Majority the process reaches an opinion vector whose bias is $\Omega\left(\sqrt{n \log n}\right)$, w.h.p. (Lemma 5.3),

    • starting from an opinion vector with bias $\Omega\left(\sqrt{n \log n}\right)$, within $O(\log n)$ further rounds of the $k$-Majority the process reaches an opinion vector whose bias is $\Theta(n)$ and the majority opinion is preserved, w.h.p. (Lemma 5.4);

  • If $k = \Theta\left(\varepsilon^{-2} \log n\right)$ and the opinion vector has bias $\Theta(n)$, then in one round of the $k$-Majority the process reaches consensus on the majority opinion, w.h.p. (Lemma 5.5).

5.2 Proof of Theorem 5.1

Let $\mathbf{x}^{(t)}$ be the random variable indicating the opinion vector at round $t$ of the Majority Protocol. Let us name $a_t$ the number of nodes supporting opinion 0 in such opinion vector and let us define the bias at round $t$ as $b_t = a_t - n/2$. In the rest of the section we assume w.l.o.g. that the bias is positive.

Since every time a node pulls the opinion of a node $w$, it correctly gets the opinion of $w$ with probability $\frac{1+\varepsilon}{2}$ and it gets the opposite opinion with probability $\frac{1-\varepsilon}{2}$, we can write the probability $p(b_t)$ that a node observes opinion 0 after a pull (i.e., after the node has sampled a neighbor and the noise has possibly flipped its opinion) as a function of the bias:

\[ p(b_t) \;=\; \frac{1+\varepsilon}{2} \cdot \frac{a_t}{n} \;+\; \frac{1-\varepsilon}{2} \cdot \left(1 - \frac{a_t}{n}\right) \;=\; \frac{1}{2} + \varepsilon\, \frac{b_t}{n}. \]
From now on, with “$w$ supports 0” we mean that, after a round of the $k$-Majority, node $w$ has opinion 0, namely that the most frequent opinion in its random sample is 0. From Lemma 2 in [38] it follows that, if each node samples $k = \Theta\left(\varepsilon^{-2}\right)$ neighbors, then the bias grows exponentially, in expectation, until it reaches linear size.
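
As a quick empirical check of the expression for $p(b_t)$ derived above (a standalone snippet we add for illustration; names and the noise convention are our assumptions), one can compare the simulated pull frequency with $\frac{1}{2} + \varepsilon\, b_t / n$; Lemma 5.2 below then quantifies the drift this small advantage induces:

    import random

    def empirical_pull_probability(a, n, eps, trials=200_000):
        # Estimate the probability that a single noisy pull shows opinion 0
        # when a of the n nodes hold opinion 0 and each transmitted bit is
        # flipped with probability (1 - eps)/2 (assumed noise convention).
        hits = 0
        for _ in range(trials):
            bit = 0 if random.randrange(n) < a else 1
            if random.random() >= (1 + eps) / 2:  # the channel flips the bit
                bit = 1 - bit
            hits += (bit == 0)
        return hits / trials

    a, n, eps = 700, 1000, 0.2
    print(empirical_pull_probability(a, n, eps))  # ~ 0.54
    print(0.5 + eps * (a - n / 2) / n)            # = 0.54 exactly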

Lemma 5.2 (Lemma 2 in [38]).

Let $k = \Theta\left(\varepsilon^{-2}\right)$ be an odd integer with a sufficiently large hidden constant, and let $\mathbf{x}$ be any opinion vector with bias $b$. Then, for suitable constants $\gamma > 1$ and $c > 0$, the expected bias after one round of the $k$-Majority is at least $\min\{\gamma\, b,\; c\, n\}$.

The above lemma is useful to derive high-probability results on the behaviour of the protocol when the bias is large enough to guarantee concentration around its expectation. It thus remains to handle the cases in which the bias is so small that its expected multiplicative growth is smaller than its standard deviation. By leveraging Lemma 4.5 in [25] (see Appendix