Hardness of computing and approximating predicates and functions with leaderless population protocols^{1}

^{1} The first and second authors were supported by an NSF grant, and the third author by an NSF grant.
Abstract
Population protocols are a distributed computing model appropriate for describing massive numbers of agents with very limited computational power (finite automata in this paper), such as sensor networks or programmable chemical reaction networks in synthetic biology. A population protocol is said to require a leader if every valid initial configuration contains a single agent in a special “leader” state that helps to coordinate the computation. Although the class of predicates and functions computable with probability 1 (stable computation) is the same whether a leader is required or not (semilinear functions and predicates), it is not known whether a leader is necessary for fast computation. Due to the large number of agents $n$ (synthetic molecular systems routinely have trillions of molecules), efficient population protocols are generally defined as those computing in polylogarithmic in $n$ (parallel) time. We consider population protocols that start in leaderless initial configurations, and the computation is regarded as finished when the population protocol reaches a configuration from which a different output is no longer reachable.
In this setting we show that a wide class of functions and predicates computable by population protocols are not efficiently computable (they require at least linear time to stabilize on a correct answer), nor are some linear functions even efficiently approximable. For example, our results for predicates immediately imply that the widely studied parity, majority, and equality predicates cannot be computed in sublinear time. (Existing arguments specific to majority were already known.) Moreover, it requires at least linear time for a population protocol even to approximate division by a constant or subtraction (or any linear function with a coefficient outside of $\mathbb{N}$), in the sense that for sufficiently small $\epsilon > 0$, the output of a sublinear time protocol can stabilize outside the interval $[f(\mathbf{m})(1-\epsilon), f(\mathbf{m})(1+\epsilon)]$ on infinitely many inputs $\mathbf{m}$. We also show that it requires linear time to exactly compute a wide range of semilinear functions (e.g., $f(m) = m$ if $m$ is even and $f(m) = 2m$ if $m$ is odd).
In a complementary positive result, we show that with a sufficiently large initial count of auxiliary agents, a population protocol can approximate any linear $f$ with nonnegative rational coefficients, within approximation error $O(\epsilon n)$, in $O(\log n)$ time.
1 Introduction
Population protocols were introduced by Angluin, Aspnes, Diamadi, Fischer, and Peralta [4] as a model of distributed computing in which the agents have very little computational power and no control over their schedule of interaction with other agents. They can be thought of as a special case of a model of concurrent processing introduced in the 1960s, known alternately as vector addition systems [26], Petri nets [30], or commutative semi-Thue systems (or, when all transitions are reversible, “commutative semigroups”) [12, 28]. As well as being an appropriate model for electronic computing scenarios such as sensor networks, they are a useful abstraction of “fast-mixing” physical systems such as animal populations [33], gene regulatory networks [9], and chemical reactions.
The latter application is especially germane: several recent wet-lab experiments demonstrate the systematic engineering of custom-designed chemical reactions [34, 16, 8, 31], unfortunately in all cases having a cost that scales linearly with the number of unique chemical species (states). (The cost can even be quadratic if certain error-tolerance mechanisms are employed [32].) Thus, it is imperative in implementing a molecular computational system to keep the number of distinct chemical species at a minimum. On the other hand, it is common (and relatively cheap) for the total number of such molecules (agents) to number in the trillions in a single test tube. It is thus important to understand the computational power enabled by a large number of agents $n$, where each agent has only a constant number of states (each agent is a finite-state machine).
A population protocol is said to require a leader if every valid initial configuration contains a single agent in a special “leader” state that helps to coordinate the computation. Studying computation without a leader is important for understanding inherently distributed systems in which symmetry breaking is difficult. Further, in the chemical setting, obtaining single-molecule precision in the initial configuration is difficult. Thus, it would be highly desirable if the population protocol did not require an exquisitely tuned initial configuration.
1.1 Introduction to the model
A population protocol is defined by a finite set $\Lambda$ of states that each agent may have, together with a transition function $\delta : \Lambda \times \Lambda \to \Lambda \times \Lambda$.^{2} A configuration is a nonzero vector $\mathbf{c} \in \mathbb{N}^\Lambda$ describing, for each $s \in \Lambda$, the count of how many agents are in state $s$. By convention we denote the number of agents by $n$. Given states $r_1, r_2, p_1, p_2 \in \Lambda$, if $\delta(r_1, r_2) = (p_1, p_2)$ (denoted $r_1, r_2 \to p_1, p_2$), and if a pair of agents in respective states $r_1$ and $r_2$ interact, then their states become $p_1$ and $p_2$.^{3} The next pair of agents to interact is chosen uniformly at random. The expected (parallel) time for any event to occur is the expected number of interactions, divided by the number of agents $n$. This measure of time is based on the natural parallel model where each agent participates in a constant number of interactions in one unit of time; hence $n$ total interactions are expected per unit time [6].

^{2} Some work allows nondeterministic transitions, in which the transition function maps to subsets of $\Lambda \times \Lambda$. Our results are independent of whether transitions are nondeterministic, and we choose a deterministic, symmetric transition function, rather than a more general relation, merely for notational convenience.

^{3} In the most generic model, there is no restriction on which agents are permitted to interact. If one prefers to think of the agents as existing on nodes of a graph, then it is the complete graph for a population of $n$ agents.
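These dynamics are straightforward to make concrete. The following Python sketch (our illustration; the one-way epidemic protocol shown is a standard textbook example, not one defined in this paper) runs the uniform random scheduler and reports parallel time as interactions divided by $n$:

```python
import random

def simulate(delta, config, stop, seed=0):
    """Run a population protocol under the uniform random scheduler.

    delta:  dict mapping ordered state pairs to result pairs (others are null)
    config: dict mapping state -> count (mutated in place)
    stop:   predicate on config; simulation halts when it returns True
    Returns parallel time = interactions / n.
    """
    rng = random.Random(seed)
    agents = [s for s, k in config.items() for _ in range(k)]
    n = len(agents)
    interactions = 0
    while not stop(config):
        i, j = rng.sample(range(n), 2)          # two distinct agents
        r1, r2 = agents[i], agents[j]
        p1, p2 = delta.get((r1, r2), (r1, r2))  # unspecified pairs are null
        for old, new in ((r1, p1), (r2, p2)):
            config[old] -= 1
            config[new] = config.get(new, 0) + 1
        agents[i], agents[j] = p1, p2
        interactions += 1
    return interactions / n

# One-way epidemic a,b -> a,a: one agent in state a converts the rest.
n = 256
config = {'a': 1, 'b': n - 1}
t = simulate({('a', 'b'): ('a', 'a'), ('b', 'a'): ('a', 'a')},
             config, stop=lambda c: c.get('b', 0) == 0)
assert config['a'] == n  # every agent ends in state a
```

For this epidemic, the returned parallel time grows logarithmically in $n$, matching the intuition that each agent participates in $O(1)$ interactions per time unit.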
The most well-studied population protocol task is computing Boolean-valued predicates. It is known that a protocol stably decides a predicate $\phi$ (meaning computes the correct answer with probability 1; see Section 4 for a formal definition) if [4] and only if [5] $\phi$ is semilinear.
Population protocols can also compute integer-valued functions $f : \mathbb{N}^k \to \mathbb{N}$. Suppose we start with $n$ agents in “input” state $x$ and the remaining agents in a “quiescent” state $q$. Consider the protocol with a single transition rule $x,q \to y,y$. Eventually exactly $2n$ agents are in the “output” state $y$, so this protocol computes the function $f(n) = 2n$. Furthermore (letting $\#s$ denote the count of state $s$), if initially $\#q \ge 2n$, then it takes expected time $O(\log n)$ until $\#y = 2n$. Similarly, the transition rule $x,x \to y,q$ computes the function $f(n) = \lfloor n/2 \rfloor$, but exponentially slower, in expected time $\Theta(n)$. The transitions $x_1,q \to y,q$ and $x_2,y \to q,q$ compute $f(n_1, n_2) = n_1 - n_2$ (assuming $n_1 \ge n_2$), also in $O(\log n)$ time if $\#q$ is sufficiently large and $n_1 - n_2 = \Omega(n)$.
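The stable output counts of such single-rule protocols can be checked mechanically: apply transitions until none is applicable. For the protocols in this sketch the final counts do not depend on the order of application (the subtraction rule set shown, $x_1,q \to y,q$ with $x_2,y \to q,q$, is one natural choice and is our illustration):

```python
def run_to_completion(transitions, config):
    """Apply transitions until none is applicable; return the final counts.

    transitions: list of ((r1, r2), (p1, p2)) rules.
    For the protocols below, the final counts are order-independent.
    """
    config = dict(config)
    progress = True
    while progress:
        progress = False
        for (r1, r2), (p1, p2) in transitions:
            # applicable iff two distinct agents occupy states r1, r2
            if r1 == r2:
                ok = config.get(r1, 0) >= 2
            else:
                ok = config.get(r1, 0) >= 1 and config.get(r2, 0) >= 1
            if ok:
                for s in (r1, r2):
                    config[s] -= 1
                for s in (p1, p2):
                    config[s] = config.get(s, 0) + 1
                progress = True
    return config

n = 100
# x,q -> y,y doubles the input: f(n) = 2n (given enough q agents).
assert run_to_completion([(('x', 'q'), ('y', 'y'))],
                         {'x': n, 'q': 2 * n})['y'] == 2 * n
# x,x -> y,q halves the input: f(n) = floor(n/2).
assert run_to_completion([(('x', 'x'), ('y', 'q'))], {'x': n})['y'] == n // 2
# x1,q -> y,q and x2,y -> q,q: f(n1, n2) = n1 - n2 (for n1 >= n2).
n1, n2 = 70, 30
out = run_to_completion([(('x1', 'q'), ('y', 'q')), (('x2', 'y'), ('q', 'q'))],
                        {'x1': n1, 'x2': n2, 'q': n1 + n2})
assert out['y'] == n1 - n2
```

Only the expected time, not the final configuration, depends on the random scheduler for these examples, which is why a deterministic exhaustive application suffices to check the computed values.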
Formally, we say a population protocol stably computes a function $f$ if, for every “valid” initial configuration $\mathbf{i}$ representing input $\mathbf{m} \in \mathbb{N}^k$ (via counts of “input” states $x_1, \dots, x_k$), with probability 1 the system reaches from $\mathbf{i}$ to a configuration $\mathbf{o}$ such that $\mathbf{o}(y) = f(\mathbf{m})$ ($y$ is the “output” state) and $\mathbf{o}'(y) = f(\mathbf{m})$ for every $\mathbf{o}'$ reachable from $\mathbf{o}$ (i.e., $\mathbf{o}$ is stable). Defining what constitutes a “valid” initial configuration (i.e., what non-input states can be present initially, and how many) is nontrivial. In this paper we focus on population protocols without a leader, i.e., without a state present in count 1, or other small count, in the initial configuration. Here, we equate “leaderless” with initial configurations in which no positive state count is sublinear in the population size $n$.
It is known that a function is stably computable by a population protocol if and only if its graph is a semilinear set [5, 15]. This means intuitively that it is piecewise affine, with each affine piece having rational slopes.
Despite the exact characterization of predicates and functions stably computable by population protocols, we still lack a full understanding of which of the stably computable (i.e., semilinear) predicates and functions are computable quickly (say, in time polylogarithmic in $n$) and which are only computable slowly (time linear in $n$). For positive results, much is known about time to convergence (time to get the correct answer). It has been known for over a decade that with an initial leader, any semilinear predicate can be stably computed with polylogarithmic convergence time [6]. Furthermore, it has recently been shown that all semilinear predicates can be computed without a leader with sublinear convergence time [27]. (See Section 1.4 for details.)
In this paper, however, we exclusively study time to stabilization without a leader (time after which the answer is guaranteed to remain correct). Except where explicitly marked otherwise with a variant of the word “converge”, all references to time in this paper refer to time until stabilization. Section 9 explains in more detail the distinction between the two.
1.2 Contributions
Undecidability of many predicates in sublinear time.
Every semilinear predicate is stably decidable in $O(n)$ time [6]. Some, such as $\phi(n) = 1$ iff $n \ge 1$, are stably decidable in $O(\log n)$ time by a leaderless protocol, in this case by the transition $x,q \to x,x$, where $x$ “votes” for output 1 and $q$ votes 0. A predicate $\phi$ is eventually constant if $\phi(\mathbf{m})$ is the same value for all sufficiently large $\mathbf{m}$. We show in Theorem 4.4 that unless $\phi$ is eventually constant, any leaderless population protocol stably deciding $\phi$ requires at least linear time. Examples of non-eventually constant predicates include parity ($\phi(n) = 1$ iff $n$ is odd), majority ($\phi(n_1, n_2) = 1$ iff $n_1 > n_2$), and equality ($\phi(n_1, n_2) = 1$ iff $n_1 = n_2$). It does not include certain semilinear predicates, such as $\phi(n) = 1$ iff $n \ge 1$ (decidable in $O(\log n)$ time) or $\phi(n) = 1$ iff $n \ge 2$ (decidable in $O(n)$ time, and no faster protocol is known).
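For instance, assuming the epidemic transition $x,q \to x,x$ (with $x$ voting 1 and $q$ voting 0, as described above), a short simulation confirms that the votes stabilize to all 1 exactly when at least one agent starts in $x$:

```python
import random

def stable_votes(config, seed=1):
    """Simulate the epidemic x,q -> x,x until the votes stabilize.

    Every agent votes: x votes 1, q votes 0.  Returns the final agent list.
    """
    rng = random.Random(seed)
    agents = [s for s, k in config.items() for _ in range(k)]
    n = len(agents)
    while 0 < agents.count('x') < n:
        i, j = rng.sample(range(n), 2)
        if {agents[i], agents[j]} == {'x', 'q'}:
            agents[i] = agents[j] = 'x'
    return agents

# phi(n) = 1 iff n >= 1: any agent starting in x converts everyone to vote 1;
# with no x present, the all-q configuration is already stable at vote 0.
assert set(stable_votes({'x': 3, 'q': 97})) == {'x'}
assert set(stable_votes({'q': 100})) == {'q'}
```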
Definition of function computation and approximation.
We formally define computation and approximation of functions for population protocols. This mode of computation was discussed briefly in the first population protocols paper [4, Section 3.4], which focused more on Boolean predicate computation, and it was defined formally in the more general model of chemical reaction networks [15, 19]. Some subtle issues arise that are unique to population protocols. We also formally define a notion of function approximation with population protocols.
Inapproximability of most linear functions with sublinear time and sublinear error.
Recall that the transition rule $x,x \to y,q$ computes $f(n) = \lfloor n/2 \rfloor$ in linear time. Consider the transitions $x,x \to y,q$ and $x,a \to a,q$, starting with $\#x = n$ and $\#a = \epsilon n$, for some $0 < \epsilon < 1$ (so $(1+\epsilon)n$ total agents). Then eventually $\#x = 0$ and $\#y = n/2 - O(\epsilon n \log(1/\epsilon))$ (stabilizing $\#y$), after $O(\log(n)/\epsilon)$ expected time. (This is analyzed in more detail in Section 7.) Thus, if we tolerate an error linear in $n$, then $\lfloor n/2 \rfloor$ can be approximated in logarithmic time. However, Theorem 6.1 shows this error bound to be tight: any leaderless population protocol that approximates $\lfloor n/2 \rfloor$, or any other linear function with a coefficient outside of $\mathbb{N}$ (such as $f(n_1,n_2) = n_1 - n_2$ or $f(n) = \lfloor 2n/3 \rfloor$), requires at least linear time to achieve sublinear error.
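The following sketch simulates one natural protocol of this kind. The specific rule set ($x,x \to y,q$ plus an absorbing transition $x,a \to a,q$, using $\epsilon n$ auxiliary agents in state $a$) is our illustrative reconstruction, not necessarily the exact construction of Section 7. Once no $x$ remains, the count of $y$ is stable and lies a linear-in-$\epsilon n$ distance below $n/2$:

```python
import random

def approximate_half(n, eps, seed=0):
    """Simulate x,x -> y,q plus the absorbing rule x,a -> a,q.

    Starts with n agents in state x and eps*n auxiliary absorbers in
    state a.  Stops when no x remains (the configuration is then stable).
    Returns (#y, parallel time).
    """
    rng = random.Random(seed)
    agents = ['x'] * n + ['a'] * int(eps * n)
    m = len(agents)
    x_left, y_count, interactions = n, 0, 0
    while x_left > 0:
        i, j = rng.sample(range(m), 2)
        s1, s2 = agents[i], agents[j]
        if s1 == s2 == 'x':               # x,x -> y,q
            agents[i], agents[j] = 'y', 'q'
            x_left -= 2
            y_count += 1
        elif {s1, s2} == {'x', 'a'}:      # x,a -> a,q: absorb one x
            if s1 == 'x':
                agents[i] = 'q'
            else:
                agents[j] = 'q'
            x_left -= 1
        interactions += 1
    return y_count, interactions / m

y, t = approximate_half(n=1000, eps=0.1, seed=0)
assert y <= 500        # the output never overshoots n/2
assert 500 - y <= 200  # error stays within a small multiple of eps*n
```

The absorbers cut off the slow tail of $x,x \to y,q$: while $\#x$ is large the pairing rule dominates, and once $\#x$ falls below roughly $\epsilon n$ the absorbing rule removes the remaining $x$'s at an exponential rate, which is what makes the time logarithmic at the cost of linear error.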
As a corollary, such functions cannot be stably computed in sublinear time (since computing exactly is the same as approximating with zero error). Conversely, it is simple to show that any linear function with all coefficients in $\mathbb{N}$ is stably computable in logarithmic time (Observation 7.1). Thus we have a dichotomy theorem for the efficiency (with regard to stabilization) of computing linear functions by leaderless population protocols: if all of $f$’s coefficients are in $\mathbb{N}$, then it is computable in logarithmic time, and otherwise it requires linear time.
Approximability of nonnegative rationalcoefficient linear functions with logarithmic time and linear error.
Theorem 6.1 says that no linear function with a coefficient outside of $\mathbb{N}$ can be stably computed with sublinear time and sublinear error. In a complementary positive result, Theorem 7.2, by relaxing the error to linear, and restricting the coefficients to be nonnegative rationals (but not necessarily integers), we show how to approximate any such linear function in logarithmic time. (It is open whether linear functions with negative coefficients, such as $f(n_1, n_2) = n_1 - n_2$, can be approximated with linear error in logarithmic time.)
Uncomputability of many nonlinear functions in sublinear time.
What about nonlinear functions? Theorem 8.5 states that sublinear time computation cannot go much beyond linear functions with coefficients in $\mathbb{N}$: unless $f$ is eventually linear, meaning linear with nonnegative integer coefficients on all sufficiently large inputs, any protocol stably computing $f$ requires at least linear time. Examples of non-eventually-linear functions, which provably cannot be computed in sublinear time, include $f(n_1, n_2) = \min(n_1, n_2)$ (computable slowly via $x_1, x_2 \to y, q$), and $f(n_1, n_2) = \max(n_1, n_2)$ (computable slowly via $\max(n_1, n_2) = n_1 + n_2 - \min(n_1, n_2)$).
The only remaining semilinear functions whose asymptotic time complexity remains unknown are those “piecewise linear” functions that switch between pieces only near the boundary of $\mathbb{N}^k$; for example, $f(n_1, n_2) = n_2$ if $n_1 \ge 1$ and $f(n_1, n_2) = 0$ otherwise.
Note that there is a fundamental difficulty in extending the negative results to functions and predicates that “do something different only near the boundary of $\mathbb{N}^k$”. This is because for inputs where one state is present in small count, the population protocol could in principle use that input as a “leader state” and thereby no longer be leaderless. However, this does not directly lead to a positive result for such inputs, because it is not obvious how to use (for instance) state $x_1$ as a leader when its count is 1 while still maintaining correctness for larger counts of $x_1$.
Our results leave open the possibility that non-eventually constant predicates and non-eventually-linear functions, which cannot be computed in sublinear time in our setting, could be efficiently computed in the following ways:

1. With an initial leader, stabilizing to the correct answer in sublinear time;

2. Stabilizing to an output in expected sublinear time but allowing a small probability of incorrect output (with or without a leader); or

3. Converging, as opposed to stabilizing, to the correct output in sublinear time.
1.3 Essential proof techniques
Techniques developed in previous work for proving time lower bounds [20, 1] can certainly generalize beyond leader election and majority, although it was not clear precisely what category of computation they cover. However, to extend the impossibility results to all non-eventually-linear functions, we needed to develop new tools.
Compared to our previous work showing the impossibility of sublinear time leader election [20], we achieve three main advances in proof technique. First, the previous machinery did not give us a way to affect large-count states predictably to change the answer, but rather focused on using surgery to remove a single leader state. Second, we need much additional reasoning to argue that if a predicate is not eventually constant, then we can find infinitely many dense inputs that differ on their output but are close together. This leads to a contradiction when we use transition manipulation arguments to show how to absorb the small extra difference between the inputs without changing the output. Third, we need entirely different reasoning to argue that if a semilinear function is not eventually linear, then we can find infinitely many dense inputs $\mathbf{m}$ that do not appear “locally affine”: pushing a small distance $\mathbf{d}$ from $\mathbf{m}$ changes the function by some amount $\delta_1 = f(\mathbf{m} + \mathbf{d}) - f(\mathbf{m})$, but pushing by the same distance again changes it by a different amount, i.e., $\delta_2 = f(\mathbf{m} + 2\mathbf{d}) - f(\mathbf{m} + \mathbf{d})$, where $\delta_1 \ne \delta_2$. This leads to a contradiction when we use transition manipulation arguments to show how, from input $\mathbf{m} + 2\mathbf{d}$, to stabilize the count of the output to the incorrect value $f(\mathbf{m}) + 2\delta_1 \ne f(\mathbf{m} + 2\mathbf{d})$.^{4}

^{4} These arguments are easier to understand for the special case when we can assume $f$ is linear. Thus Section 6 concentrates on this special case, obtaining an exact characterization of the efficiently computable linear functions. Section 8 reasons about the more difficult case of arbitrary semilinear functions.
Both in prior and current work, the high-level intuition of the proof technique is as follows. The overall argument is a proof by contradiction: if sublinear time computation is possible, then we find a nefarious execution sequence that stabilizes to an incorrect output. In more detail, sublinear time computation requires avoiding “bottlenecks”: having to go through a transition in which both states are present in small count (a constant independent of the number of agents $n$). Traversing even a single such transition requires linear time. Technical lemmas show that bottleneck-free execution sequences from dense initial configurations (i.e., where every state that is present is present in count $\Omega(n)$) are amenable to predictable “surgery” [20, 1]. At a high level, the surgery lemmas show how states that are present in “low” count when the population protocol stabilizes can be manipulated (added or removed) such that only “high” count other states are affected. Since it can also be shown that changing high-count states in a stable configuration does not affect its stability, this means that the population protocol cannot “notice” the surgery, and remains stabilized to the previous output. For leader election, the surgery allows one to remove an additional leader state (leaving us with no leaders). For majority computation [1], the minority input must be present in low count (or absent) at the end. This allows one to add enough of the minority input to turn it into the majority, while the protocol continues to output the wrong answer.
However, applying the previously developed surgery lemmas to fool a function-computing population protocol is more difficult. The surgery to consume additional input states affects the count of the output state, which could be present in “large count” at the end. How do we know that the effect of the surgery on the output is not consistent with the desired output of the function? In order to arrive at a contradiction we develop two new techniques, both of which are necessary to cover all cases. The first involves showing that the slope of the change in the count of the output state, as a function of the input states, is inconsistent. The second involves exposing the semilinear structure of the graph of the function being computed, and forcing it to enter the “wrong piece” (i.e., periodic coset).
1.4 Related work
Positive results.
Angluin, Aspnes, Diamadi, Fischer, and Peralta [4] showed that any semilinear predicate can be decided in expected parallel time $O(n \log n)$, later improved to $O(n)$ by Angluin, Aspnes, and Eisenstat [6]. More strikingly, the latter paper showed that if an initial leader is present (a state assigned to only a single agent in every valid initial configuration), then there is a protocol for $\phi$ that converges to the correct answer in expected time polylogarithmic in $n$. However, this protocol’s expected time to stabilize is still provably $\Omega(n)$. Section 9 explains this distinction in more detail. Chen, Doty, and Soloveichik [15] showed in the related model of chemical reaction networks (borrowing techniques from the related predicate results [4, 5]) that any semilinear function (integer-output $f : \mathbb{N}^k \to \mathbb{N}$) can similarly be computed with polylogarithmic expected convergence time if an initial leader is present, but again with much slower stabilization time $\Omega(n)$. Doty and Hajiaghayi [19] showed that any semilinear function can be computed by a chemical reaction network without a leader with expected convergence and stabilization time $O(n)$. Although the chemical reaction network model is more general, these results hold for population protocols.
Kosowski and Uznański [27] show that all semilinear predicates can be computed without an initial leader, converging in polylogarithmic time if a small probability of error is allowed, and converging in $O(n^\epsilon)$ time with probability 1, where $\epsilon$ can be made arbitrarily close to 0 by changing the protocol. They also showed leader election protocols (which can be thought of as computing the constant function $f(\mathbf{m}) = 1$) with the same properties.
Since efficient computation seems to be helped by a leader, the computational task of leader election has received significant recent attention. In particular, Alistarh and Gelashvili [3] showed that in a variant of the model allowing the number of states to grow with the population size $n$, a protocol with $O(\log^3 n)$ states can elect a leader with high probability in $O(\log^3 n)$ expected time. Alistarh, Aspnes, Eisenstat, Gelashvili, and Rivest [1] later showed how to reduce the number of states to $O(\log^2 n)$, at the cost of a larger polylogarithmic expected time. Gasieniec and Stachowiak [21] showed that there is a protocol with $O(\log\log n)$ states electing a leader in $O(\log^2 n)$ time in expectation and with high probability, recently improved to $O(\log n \cdot \log\log n)$ time by Gasieniec, Stachowiak, and Uznański [22]. This asymptotically matches the $\Omega(\log\log n)$ states provably required for sublinear time leader election (see negative results below).
Negative results.
The first attempt to show the limitations of sublinear time population protocols, using the more general model of chemical reaction networks, was made by Chen, Cummings, Doty, and Soloveichik [14]. They studied a variant of the problem in which negative results are easier to prove, an “adversarial worst-case” notion of sublinear time: the protocol is required to be sublinear time not only from the initial configuration, but also from any reachable configuration. They showed that the predicates computable in this manner are precisely those whose output depends only on the presence or absence of states (and not on their exact positive counts). Doty and Soloveichik [20] showed the first lower bound on expected time from valid initial configurations, proving that any protocol electing a leader with probability 1 takes $\Omega(n)$ time.
These techniques were improved by Alistarh, Aspnes, Eisenstat, Gelashvili, and Rivest [1], who showed that even with up to $\frac{1}{2}\log\log n$ states, any protocol electing a leader with probability 1 requires nearly linear time: $\Omega(n/\mathrm{polylog}\ n)$. They used these tools to prove time lower bounds for another important computational task: majority (detecting whether state $x_1$ or $x_2$ is more numerous in the initial population, by stabilizing on a configuration in which the state with the larger initial count occupies the whole population). Alistarh, Aspnes, and Gelashvili [2] strengthened the state lower bound, showing that $\Omega(\log n)$ states are required to compute majority in time $O(n^{1-c})$ for some $c > 0$, when a certain “natural” condition is imposed on the protocol that holds for all known protocols.
In contrast to these previous results on the specific tasks of leader election and majority, we obtain time lower bounds for a broad class of functions and predicates, showing that “most” of those computable at all by population protocols cannot be computed in sublinear time. Since they all can be computed in linear time, this settles their asymptotic population protocol time complexity.
Informally, one explanation for our result could be that some computation requires electing “leaders” as part of the computation, and other computation does not. Since leader election itself requires linear time as shown in [20], the computation that requires it is necessarily inefficient. It is not clear, however, how to define the notion of a predicate or function computation requiring electing a leader somewhere in the computation, but recent work by Michail and Spirakis helps to clarify the picture [29].
1.5 Organization of this paper
Section 2 defines the population protocol model and notation. Section 3 proves the technical lemmas that are used in all the time lower bound proofs. Section 4 shows that a wide class of predicates requires $\Omega(n)$ time to compute. Section 5 explains our definitions of function computation and approximation. Section 6 shows that linear functions with either a negative (e.g., $f(n_1, n_2) = n_1 - n_2$) or non-integer (e.g., $f(n) = \lfloor n/2 \rfloor$) coefficient cannot be stably approximated with sublinear error in sublinear time. Section 7 shows our positive result, Theorem 7.2, that linear functions with all nonnegative rational coefficients (e.g., $f(n) = \lfloor 2n/3 \rfloor$) can be stably approximated with linear error in $O(\log n)$ time. Section 8 studies nonlinear functions, showing that a large class of those computable by population protocols require $\Omega(n)$ time to compute. Section 9 states conclusions and open questions.
2 Preliminaries
If $\Lambda$ is a finite set (in this paper, of states, which will be denoted as lowercase Roman letters with an overbar such as $\bar{x}$), we write $\mathbb{N}^\Lambda$ to denote the set of functions $f : \Lambda \to \mathbb{N}$. Equivalently, we view an element $\mathbf{c} \in \mathbb{N}^\Lambda$ as a vector of $|\Lambda|$ nonnegative integers, with each coordinate “labeled” by an element of $\Lambda$. (By assuming some canonical ordering of $\Lambda$, we also interpret $\mathbf{c}$ as a vector $\mathbf{c} \in \mathbb{N}^{|\Lambda|}$.) Given $\mathbf{c} \in \mathbb{N}^\Lambda$ and $s \in \Lambda$, we refer to $\mathbf{c}(s)$ as the count of $s$ in $\mathbf{c}$. Let $\|\mathbf{c}\| = \sum_{s \in \Lambda} \mathbf{c}(s)$. We write $\mathbf{c} \le \mathbf{c}'$ to denote that $\mathbf{c}(s) \le \mathbf{c}'(s)$ for all $s \in \Lambda$. Since we view vectors equivalently as multisets of elements from $\Lambda$, if $\mathbf{c} \le \mathbf{c}'$ we say $\mathbf{c}$ is a subset of $\mathbf{c}'$. For $\alpha > 0$, we say that $\mathbf{c}$ is $\alpha$-dense if, for all $s \in \Lambda$, if $\mathbf{c}(s) > 0$, then $\mathbf{c}(s) \ge \alpha \|\mathbf{c}\|$.
It is sometimes convenient to use multiset notation to denote vectors, e.g., $\{\bar{x}, \bar{x}, \bar{y}\}$ and $(2, 1, 0, \dots, 0)$ both denote the vector $\mathbf{c}$ defined by $\mathbf{c}(\bar{x}) = 2$, $\mathbf{c}(\bar{y}) = 1$, and $\mathbf{c}(s) = 0$ for all $s \not\in \{\bar{x}, \bar{y}\}$. Given $\mathbf{c}, \mathbf{c}' \in \mathbb{N}^\Lambda$, we define the vector componentwise operations of addition $\mathbf{c} + \mathbf{c}'$, subtraction $\mathbf{c} - \mathbf{c}'$, and scalar multiplication $k\mathbf{c}$ for $k \in \mathbb{N}$. For a set $\Delta \subseteq \Lambda$, we view a vector $\mathbf{c} \in \mathbb{N}^\Delta$ equivalently as a vector $\mathbf{c} \in \mathbb{N}^\Lambda$ by assuming $\mathbf{c}(s) = 0$ for all $s \in \Lambda \setminus \Delta$. Write $\mathbf{c} \restriction \Delta$ to denote the vector $\mathbf{d} \in \mathbb{N}^\Delta$ such that $\mathbf{d}(s) = \mathbf{c}(s)$ for all $s \in \Delta$. For any vector or matrix $\mathbf{A}$, let $\|\mathbf{A}\|_\infty$ denote the largest absolute value of any component of $\mathbf{A}$. Also, given $\mathbf{c} \in \mathbb{N}^\Lambda$ and $s \in \Lambda$, $\mathbf{c} + s$ is a shorthand for $\mathbf{c} + \{s\}$, and similarly for $\mathbf{c} - s$.
In this paper, the floor function $\lfloor \cdot \rfloor$ is defined to be the integer closest to 0 that is at distance less than 1 from the input, e.g., $\lfloor 3/2 \rfloor = 1$ and $\lfloor -3/2 \rfloor = -1$. For an (infinite) set/sequence of configurations $C$, let $\mathrm{bdd}(C)$ be the set of states whose counts are bounded by a constant in $C$. Let $\mathrm{unbdd}(C) = \Lambda \setminus \mathrm{bdd}(C)$. For $k \in \mathbb{N}$, let $\mathbb{N}^\Lambda_{\ge k}$ denote the set of vectors in $\mathbb{N}^\Lambda$ in which each coordinate is at least $k$.
2.1 Population Protocols
A population protocol is a pair $\mathcal{P} = (\Lambda, \delta)$, where $\Lambda$ is a finite set of states and $\delta : \Lambda \times \Lambda \to \Lambda \times \Lambda$ is the (symmetric) transition function. A configuration of a population protocol is a vector $\mathbf{c} \in \mathbb{N}^\Lambda$, with the interpretation that $\mathbf{c}(s)$ agents are in state $s$. If there is some “current” configuration understood from context, we write $\#s$ to denote the count of state $s$ in that configuration. By convention, the value $n$ represents the total number of agents $\|\mathbf{c}\|$. A transition is a 4-tuple $\alpha = (r_1, r_2, p_1, p_2) \in \Lambda^4$, written $\alpha : r_1, r_2 \to p_1, p_2$, such that $\delta(r_1, r_2) = (p_1, p_2)$. If an agent in state $r_1$ interacts with an agent in state $r_2$, then they change states to $p_1$ and $p_2$. This paper typically defines a protocol by a list of transitions, with $\delta$ implicit. There is a null transition $\delta(r_1, r_2) = (r_1, r_2)$ if a different output for $(r_1, r_2)$ is not specified.
Given $\mathbf{c} \in \mathbb{N}^\Lambda$ and transition $\alpha : r_1, r_2 \to p_1, p_2$, we say that $\alpha$ is applicable to $\mathbf{c}$ if $\mathbf{c} \ge \{r_1, r_2\}$, i.e., $\mathbf{c}$ contains 2 agents, one in state $r_1$ and one in state $r_2$. If $\alpha$ is applicable to $\mathbf{c}$, then write $\alpha(\mathbf{c})$ to denote the configuration $\mathbf{c} - \{r_1, r_2\} + \{p_1, p_2\}$ (i.e., the configuration that results from applying $\alpha$ to $\mathbf{c}$); otherwise $\alpha(\mathbf{c})$ is undefined. A finite or infinite sequence of transitions is a transition sequence. Given a $\mathbf{c}_0 \in \mathbb{N}^\Lambda$ and a transition sequence $q = (\alpha_1, \alpha_2, \dots)$, the induced execution sequence (or path) is a finite or infinite sequence of configurations $(\mathbf{c}_0, \mathbf{c}_1, \dots)$ such that, for all $i \ge 1$, $\mathbf{c}_i = \alpha_i(\mathbf{c}_{i-1})$.^{5} If a finite execution sequence, with associated transition sequence $q$, starts with $\mathbf{c}$ and ends with $\mathbf{c}'$, we write $\mathbf{c} \Rightarrow_q \mathbf{c}'$. We write $\mathbf{c} \Rightarrow_{\mathcal{P}} \mathbf{c}'$ (or $\mathbf{c} \Rightarrow \mathbf{c}'$ when $\mathcal{P}$ is clear from context) if such a path exists (i.e., it is possible to reach from $\mathbf{c}$ to $\mathbf{c}'$), and we say that $\mathbf{c}'$ is reachable from $\mathbf{c}$. Let $\mathrm{post}(\mathbf{c})$ denote the set of all configurations reachable from $\mathbf{c}$. If it is understood from context what is the initial configuration $\mathbf{i}$, then say $\mathbf{c}$ is simply reachable if $\mathbf{i} \Rightarrow \mathbf{c}$. If a transition $\alpha : r_1, r_2 \to p_1, p_2$ has the property that some state $s$ appears more times among $(r_1, r_2)$ than among $(p_1, p_2)$, then we say that $\alpha$ consumes $s$; i.e., applying $\alpha$ reduces the count of $s$. We say $\alpha$ produces $s$ if it increases the count of $s$.

^{5} When the initial configuration to which a transition sequence is applied is clear from context, we may overload terminology and refer to a transition sequence and an execution sequence interchangeably.
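These definitions of applicability, application, and reachability translate directly into code. The following sketch (our illustration, not part of the paper's formalism) computes the set of reachable configurations for a toy protocol by breadth-first search:

```python
from collections import deque

def apply_transition(config, t):
    """Return t(config) for t = (r1, r2, p1, p2) if applicable, else None."""
    r1, r2, p1, p2 = t
    c = dict(config)
    for s in (r1, r2):            # remove the two interacting agents
        if c.get(s, 0) == 0:
            return None
        c[s] -= 1
    for s in (p1, p2):            # add them back in their new states
        c[s] = c.get(s, 0) + 1
    return {s: k for s, k in c.items() if k > 0}

def post(config, transitions):
    """All configurations reachable from config (a finite set, since the
    number of agents is fixed), found by breadth-first search."""
    freeze = lambda c: frozenset(c.items())
    seen = {freeze(config)}
    frontier = deque([config])
    while frontier:
        c = frontier.popleft()
        for t in transitions:
            c2 = apply_transition(c, t)
            if c2 is not None and freeze(c2) not in seen:
                seen.add(freeze(c2))
                frontier.append(c2)
    return seen

# From 4 agents in x under x,x -> y,q, exactly three configurations are
# reachable: {4x}, {2x, y, q}, and {2y, 2q}.
reach = post({'x': 4}, [('x', 'x', 'y', 'q')])
assert len(reach) == 3
```

Note that the check in `apply_transition` correctly requires a count of at least 2 when $r_1 = r_2$, since a transition needs two distinct agents.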
2.2 Time Complexity
The model used to analyze time complexity is a discrete-time Markov process, whose states correspond to configurations of the population protocol. In any configuration the next interaction is chosen by selecting a pair of agents uniformly at random and applying transition function $\delta$ to determine the next configuration. Since a transition may be null, self-loops are allowed. To measure time we count the expected total number of interactions (including null), and divide by the number of agents $n$. (In the population protocols literature, this is often called “parallel time”; i.e., $n$ interactions among a population of $n$ agents correspond to one unit of time.) Let $\mathbf{c} \in \mathbb{N}^\Lambda$ and $C \subseteq \mathbb{N}^\Lambda$. Denote the probability that the protocol reaches from $\mathbf{c}$ to some configuration in $C$ by $\Pr[\mathbf{c} \Rightarrow C]$. If $\Pr[\mathbf{c} \Rightarrow C] = 1$,^{6} define the expected time to reach from $\mathbf{c}$ to $C$, denoted $\mathsf{T}[\mathbf{c} \Rightarrow C]$, to be the expected number of interactions to reach from $\mathbf{c}$ to some $\mathbf{c}' \in C$, divided by the number of agents $n$. If $\Pr[\mathbf{c} \Rightarrow C] < 1$ then $\mathsf{T}[\mathbf{c} \Rightarrow C] = \infty$.

^{6} Since population protocols have a finite reachable configuration space, this is equivalent to requiring that for all $\mathbf{c}'$ reachable from $\mathbf{c}$, there is a $\mathbf{c}'' \in C$ reachable from $\mathbf{c}'$.
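For a protocol whose progress is tracked by a single decreasing count, this expected time can be computed in closed form as a sum of geometric waiting times. The following sketch (our illustration) does so for the halving rule $x,x \to y,q$, and exhibits why stabilization is $\Theta(n)$: the final interaction alone, when only two $x$'s remain, costs $(n-1)/2$ expected parallel time.

```python
from fractions import Fraction

def expected_time_halving(n):
    """Exact expected parallel time for x,x -> y,q starting from #x = n (n even).

    When #x = m, a uniformly random ordered pair of distinct agents is
    (x, x) with probability m(m-1)/(n(n-1)), so that stage waits
    n(n-1)/(m(m-1)) interactions in expectation; parallel time divides
    the total number of interactions by n.
    """
    interactions = sum(Fraction(n * (n - 1), m * (m - 1))
                       for m in range(n, 1, -2))
    return interactions / n

t100 = expected_time_halving(100)
t200 = expected_time_halving(200)
# The last stage (when #x = 2) alone takes (n-1)/2 expected time,
# so stabilization is Theta(n) despite fast early stages.
assert t100 >= Fraction(99, 2)
assert float(t200) / float(t100) > 1.9  # roughly doubles with n
```

Using `fractions.Fraction` keeps the computation exact; summing the stage expectations is valid because the waiting times for successive stages are independent geometric random variables.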
3 Technical tools
In this section we explain some technical results that are used in proving the time lower bounds of Theorems 4.4, 6.3, 6.4, 8.4, and 8.5. In some cases the main ideas are present in previous papers, but several must be adapted significantly to the current problem. Throughout Section 3, let $\mathcal{P} = (\Lambda, \delta)$ be a population protocol.
Although other results from this section are used in this paper, the key technical result of this section is Corollary 3.11. It gives a generic method to start with an initial configuration $\mathbf{i}$ reaching in sublinear time to a configuration $\mathbf{y}$ (in all our uses $\mathbf{y}$ is a stable configuration, but this is not required by the corollary), and, starting from two copies of $\mathbf{i}$, to manipulate the transitions leading from $\mathbf{i}$ to $\mathbf{y}$ while having a predictable effect on the counts of certain states, possibly also starting with a “small” number of extra states, denoted $\mathbf{e}$ in Corollary 3.11. This leads to a contradiction when the effect on the counts of the states representing the output can be shown to be incorrect for the given input.
We often deal with infinite sequences of configurations.^{7} The following lemma, used frequently in reasoning about population protocols, shows that we can always take a nondecreasing subsequence.

^{7} In general these will not be execution sequences. Typically none of the configurations are reachable from any others because they are configurations with increasing numbers of agents.
Lemma 3.1 (Dickson’s Lemma [17]).
Any infinite sequence $\mathbf{x}_0, \mathbf{x}_1, \dots$ of vectors in $\mathbb{N}^k$ has an infinite nondecreasing subsequence $\mathbf{x}_{i_0} \le \mathbf{x}_{i_1} \le \dots$, where $i_0 < i_1 < \dots$.
3.1 Bottleneck transitions take linear time
Let $b \in \mathbb{N}$. A transition $\alpha : r_1, r_2 \to p_1, p_2$ is a $b$-bottleneck for configuration $\mathbf{c}$ if $\mathbf{c}(r_1) \le b$ and $\mathbf{c}(r_2) \le b$.
The next observation, proved in [20], states that, if to get from a configuration $\mathbf{c}$ to some configuration in a set $C$, it is necessary to execute a transition in which the counts of $r_1$ and $r_2$ are both at most some number $b$, then the expected time to reach from $\mathbf{c}$ to some configuration in $C$ is $\Omega(n/b^2)$.
Observation 3.2 ([20]).
Let $b \in \mathbb{N}$, $\mathbf{c} \in \mathbb{N}^\Lambda$, and $C \subseteq \mathbb{N}^\Lambda$ such that $\Pr[\mathbf{c} \Rightarrow C] = 1$. If every path taking $\mathbf{c}$ to a configuration in $C$ has a $b$-bottleneck, then $\mathsf{T}[\mathbf{c} \Rightarrow C] = \Omega(n/b^2)$.
The next corollary is useful.
Observation 3.3 ([20]).
Let $b \in \mathbb{N}$, $p > 0$, $\mathbf{c} \in \mathbb{N}^\Lambda$, and $X, C \subseteq \mathbb{N}^\Lambda$ such that $\Pr[\mathbf{c} \Rightarrow X] \ge p$, $\Pr[\mathbf{c} \Rightarrow C] = 1$, and every path from every $\mathbf{x} \in X$ to some configuration in $C$ has a $b$-bottleneck. Then $\mathsf{T}[\mathbf{c} \Rightarrow C] = \Omega(p \cdot n/b^2)$.
3.2 Transition ordering lemma
The following lemma was originally proved in [14] and was restated in the language of population protocols as Lemma 4.5 in [20]. Intuitively, the lemma states that a “fast” transition sequence (meaning one without a bottleneck transition) that decreases certain states from large counts to small counts must contain transitions of a certain restricted form. In particular the form is as follows: if $\Delta$ is the set of states whose counts decrease from large to small, then we can write the states in $\Delta$ in some order $d_1, d_2, \dots, d_k$, such that for each $1 \le i \le k$, there is a transition $\alpha_i$ that consumes $d_i$, and every other state involved in $\alpha_i$ is either not in $\Delta$, or comes later in the ordering. These transitions will later be used to do controlled “surgery” on fast transition sequences, because they give a way to alter the count of $d_i$, by inserting or removing the transitions $\alpha_i$, knowing that this will not affect the counts of $d_1, \dots, d_{i-1}$.
Let $\Delta \subseteq \Lambda$, with $k = |\Delta|$. We say that $\Delta$ is ordered (via $\alpha_1, \dots, \alpha_k$) if there is an order on $\Delta$, so that we may write $\Delta = \{d_1, \dots, d_k\}$, such that, for all $i \in \{1, \dots, k\}$, there is a transition $\alpha_i : d_i, s_i \to o_i, o'_i$, such that $s_i, o_i, o'_i \not\in \{d_1, \dots, d_i\}$. In other words, for each $d_i$ there is a transition consuming exactly one copy of $d_i$ without affecting $d_1, \dots, d_{i-1}$.
Lemma 3.4 (adapted from [14]).
Let $b \in \mathbb{N}$ and $\mathbf{x}, \mathbf{y} \in \mathbb{N}^\Lambda$ such that $\mathbf{x} \Rightarrow_q \mathbf{y}$ via a path $q$ without a $b$-bottleneck. Define $\Delta = \{ d \in \Lambda \mid \mathbf{y}(d) \le b \}$ and $k = |\Delta|$. Then $\Delta$ is ordered via transitions $\alpha_1, \dots, \alpha_k$, and each $\alpha_i$ occurs at least $b$ times in $q$.
3.3 Sublinear time from dense configuration implies bottleneck-free path from dense configuration with every state present
Say that $\mathbf{c} \in \mathbb{N}^\Lambda$ is full if $(\forall s \in \Lambda)\ \mathbf{c}(s) > 0$, i.e., every state is present. The following theorem states that with high probability, a population protocol will reach from an $\alpha$-dense configuration to a configuration in which all states are present (full) in “large” count ($\beta$-dense, for some $\beta > 0$).^{8}

^{8} With the same probability, this happens in $O(\log n)$ time, although this fact is not needed in this paper. It was proven in [18] in the more general model of chemical reaction networks, for a subclass of such networks that includes all population protocols.
Theorem 3.1 (adapted from [18]).
Let . Then there are constants such that, letting is full and dense , for all sufficiently large dense ,
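The behavior described by Theorem 3.1 can be observed empirically. The following Python simulation is our own toy illustration (the three-state protocol, state names, and parameters are invented for the example, not taken from [18]): it fires transitions between randomly chosen agent pairs, and in this particular protocol a state that becomes present can never vanish again, so a long enough run produces a full configuration.

```python
import random

def simulate(n, transitions, init, steps, seed=0):
    """Simulate a population protocol: `transitions` maps an ordered pair
    of states to the pair replacing it; unlisted pairs are null
    transitions.  Returns the final counts of each state."""
    rng = random.Random(seed)
    agents = []
    for state, count in init.items():
        agents += [state] * count
    assert len(agents) == n
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)   # random ordered pair of agents
        pair = (agents[i], agents[j])
        if pair in transitions:
            agents[i], agents[j] = transitions[pair]
    counts = {}
    for s in agents:
        counts[s] = counts.get(s, 0) + 1
    return counts

# Toy protocol: A+A -> A+B, B+B -> B+C.  A is consumed only when two A's
# meet (so its count never reaches 0), B only when two B's meet, and C is
# never consumed, so the run ends with all three states present.
toy = {("A", "A"): ("A", "B"), ("B", "B"): ("B", "C")}
```

Running `simulate(1000, toy, {"A": 1000}, 50_000)` from the all-A configuration yields a configuration in which A, B, and C are all present.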
The following was originally proved as Lemma 4.4 in [20]. The result was stated with being the set of what was called “stable configs,” but we have adapted it to make the statement more general and quantitatively relate the bound to the expected time . It states that if a protocol goes from a dense configuration to a set of states in expected time , then there is a full dense (for ) reachable configuration and a path from to a state in with no bottleneck transition, where . If , then , which suffices for our subsequent results.
Lemma 3.5 (adapted from [20]).
For all , there is a such that the following holds. Suppose that for some , some set and some set of dense initial configurations , for all , . Define . There is an such that for all with , there is and path such that:

(1) for all ,
(2) , where , and
(3) has no bottleneck transition.
Proof.
Intuitively, the lemma follows from the fact that state is reached with high probability by Theorem 3.1, and if no paths such as existed, then all paths from to a stable configuration would have a bottleneck and require more than the stated time by Observation 3.3. Since is reached with high probability, this would imply the entire expected time is linear.
For any configuration reachable from some configuration , there is a transition sequence satisfying condition (2) by the fact that . It remains to show we can find and satisfying conditions (1) and (3).
By Theorem 3.1 there exist (which depend only on and ) such that, starting in any sufficiently large initial configuration , with probability at least , reaches a configuration where all states have count at least , where . For all , let . Let be a lower bound on such that Theorem 3.1 applies for all and . Then for all such that , . Choose any for which there is with . Then any satisfies condition (1): for all . We now show that by choosing from for a large enough , we can find a corresponding satisfying condition (3) as well.
Suppose for the sake of contradiction that we cannot satisfy condition (3) when choosing as above, no matter how large we make . This means that for infinitely many , (and therefore infinitely many population sizes ), all transition sequences from to have a bottleneck. Applying Observation 3.3, letting , , , tells us that , so , a contradiction. ∎
In the following lemma, note that the indexing is over a subset ; for example, the sequence might be indexed if , allowing us to retain the convention that the population size is represented by . Lemma 3.6 essentially states that, if infinitely many configurations satisfy the hypothesis of Lemma 3.5, then we can find three infinite sequences satisfying the conclusion of Lemma 3.5: initial configurations , intermediate full configurations , and “final” configurations (in our applications all will be stable), which by Dickson’s lemma can all be assumed nondecreasing.
Lemma 3.6.
For all , there is a such that the following holds. Suppose that for some set and infinite set of dense initial configurations , for all , . Define . There is an infinite set and infinite sequences of configurations , , , where and are nondecreasing, and an infinite sequence of paths such that, for all ,

(1) ,
(2) ,
(3) for all ,
(4) , where , and
(5) has no bottleneck transition.
Proof.
Since is infinite, the set is infinite. Pick an infinite sequence from , where ( may range over a subset of here, but for each , at most one configuration in the sequence has size ). For each in the sequence, pick , and for as in Lemma 3.5. By Dickson’s Lemma (Lemma 3.1) there is an infinite subset such that and are nondecreasing on the respective subsequences of and corresponding to . Lemma 3.5 ensures that properties (1)–(5) are satisfied. ∎
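The Dickson's-lemma step above has a finite analogue that can be sketched in Python (our own illustration; the vectors are invented example data). The refinement proceeds one coordinate at a time, exactly as in the standard proof of Dickson's lemma: for infinite sequences each refinement step uses the infinite pigeonhole principle, while on a finite prefix we take a longest nondecreasing subsequence instead.

```python
def lnds_indices(values):
    """Indices of a longest nondecreasing subsequence (O(m^2) DP)."""
    m = len(values)
    best = [1] * m            # best[i]: LNDS length ending at index i
    prev = [-1] * m
    for i in range(m):
        for j in range(i):
            if values[j] <= values[i] and best[j] + 1 > best[i]:
                best[i], prev[i] = best[j] + 1, j
    i = max(range(m), key=lambda i: best[i])
    idx = []
    while i != -1:
        idx.append(i)
        i = prev[i]
    return idx[::-1]

def dickson_subsequence(seq):
    """Refine coordinate by coordinate; a subsequence of a nondecreasing
    sequence stays nondecreasing, so the result is coordinatewise
    nondecreasing in every coordinate."""
    if not seq:
        return []
    idx = list(range(len(seq)))
    for c in range(len(seq[0])):
        sub = lnds_indices([seq[i][c] for i in idx])
        idx = [idx[i] for i in sub]
    return [seq[i] for i in idx]
```

For instance, `dickson_subsequence([(3, 1), (1, 2), (2, 2), (2, 3), (5, 0), (4, 4)])` returns a subsequence that is nondecreasing in both coordinates.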
The conclusion of Lemma 3.6, with its various infinite sequences, is quite complex. The hypothesis of Lemma 3.9 is equally complex; they are used in tandem to prove Lemma 3.10 and Corollary 3.11, the latter being our main technical tool for proving the time lower bounds of Theorems 4.4, 6.3, 6.4, 8.4, and 8.5.
The idea of Lemma 3.10 is to start with a protocol satisfying the hypothesis of Lemma 3.6, which reaches in sublinear time from some set of dense initial configurations to some set (in all applications, is the set of stable configurations reachable from ). Then, invoke Lemma 3.9 to show that it is possible from certain initial configurations to drive some states in the set to 0.
The reason that the statement of Lemma 3.10 is also fairly complex, and references some of these infinite sequences, is that the set appearing in the conclusion of Lemma 3.9 depends on the particular infinite sequence defined in the conclusion of Lemma 3.6. Several infinite sequences, each with their own , could satisfy the hypothesis of Lemma 3.9, and it matters which one we pick. Thus, in applying these results, before reaching the conclusion of Lemma 3.9, we must explicitly define these infinite sequences to know the particular to which the conclusion of Lemma 3.9 applies.
3.4 Path manipulation
This is the most technically dense subsection, with many intermediate technical lemmas that culminate in our primary technical tool for proving time lower bounds, Corollary 3.11. Each lemma statement is complex and involves many interacting variables. The first three lemmas are accompanied by an example and figures to help trace through the intuition.
The next two lemmas, Lemmas 3.7 and 3.8, apply to population protocols that have transitions as described in Lemma 3.4. Both use these transitions in order to manipulate a configuration (by manipulating a “fast” path leading to it from another configuration) until it has prescribed counts of states in from Lemma 3.4.
Lemmas 3.7 and 3.8 are based on statements first proven as “Claim 1” and “Claim 2” in [20]. Since their statements in that paper were not self-contained (being claims as part of a larger proof), we have rephrased them as self-contained Lemmas 3.7 and 3.8, and we give self-contained proofs. Furthermore, we have significantly adapted both the statements and proofs to make them more generally useful for proving negative results, in particular stating the minimum conditions required to apply the lemmas, in addition to quantitatively accounting for the precise effect that the path manipulation has on the underlying configurations.
We use linear algebra to describe changes in counts of states. It is beneficial to fix some notational conventions first. Recall is the set of all states, where , and where . A matrix is an integer-valued matrix with rows and columns, with row corresponding to state and column corresponding to state . Given a vector representing counts of states in , is a vector representing changes in counts of states in .
Our notation for indexing these matrices will generally follow our usual vector convention of using the name of the state itself, rather than an integer index, so for example, refers to the entry in the column corresponding to and the row corresponding to . If necessary to identify the position, this will correspond to the ’th row and ’th column. Where convenient, we also use the traditional notation: for instance, a protocol being ordered implies a 1-1 correspondence between transitions and , which can both be indexed by .
Similarly, when convenient we will abuse notation and consider a vector , for a predicate or function with inputs, to equivalently represent a configuration or subconfiguration in , where is the set of input states of the population protocol.
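As a concrete toy instance of this notation (the three-state protocol and its two transitions below are invented for the example, not the paper's matrices): each column of the matrix records one transition's net effect on the counts of the states, so multiplying by a vector of firing counts gives the total change in counts.

```python
# Hypothetical Lambda = {a, b, x} with transitions
#   a + x -> x + x   and   b + a -> x + x.
# Row s of M lists the net effect of each transition on state s.
M = {
    "a": [-1, -1],   # a loses one copy under either transition
    "b": [ 0, -1],   # b is consumed only by the second transition
    "x": [ 1,  2],   # x gains net one copy, resp. two copies
}

def apply_counts(M, u):
    """Change in each state's count when transition i fires u[i] times
    (i.e., the matrix-vector product M u, written without NumPy)."""
    return {s: sum(row[i] * u[i] for i in range(len(u)))
            for s, row in M.items()}
```

Firing the first transition 3 times and the second 2 times gives `apply_counts(M, [3, 2])`, i.e., a net loss of 5 copies of a, 2 of b, and a gain of 7 copies of x.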
The next lemma says that for any number of states in , there exists a number of states that, if present in addition to , can be used to remove and (the states of that are in ), resulting in a configuration with no states in . Furthermore, both and are linear functions of .
So when we employ Lemma 3.7 later, where will these extra agents come from? Although we talk about them as if they are somehow physically added, in actuality, we’ll start with a larger initial configuration and “guide” some of the agents to the desired states that make up ; this is the work of Lemma 3.8.
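The bookkeeping behind this removal can be sketched in Python (our own simplification, not the lemma's full construction: we assume the second reactant of each ordered transition lies outside the ordered set D, and we ignore the consumed "fuel" agents, which is exactly where the extra agents of Lemma 3.7 come in). Since the i-th ordered transition consumes one copy of the i-th state and may only produce states outside {d1, ..., di} (possibly later dj's), firing the transitions in order drives every state of D to count 0:

```python
def surgery_counts(order, transitions, counts):
    """How many times to fire each ordered transition so that every state
    in D = order ends at count 0.  transitions[i] = (reactants, products)
    consumes one copy of order[i]; its products lie outside
    {order[0], ..., order[i]} but may include later states of D."""
    fires = []
    counts = dict(counts)          # counts restricted to the states of D
    for i, d in enumerate(order):
        n_i = counts[d]            # current count of d_i: fire that often
        fires.append(n_i)
        _, products = transitions[i]
        for s in products:
            if s in counts:        # a later d_j produced by transition i
                counts[s] += n_i
        counts[d] = 0
    return fires
```

For example, with order a, b, transitions a+x → x+b and b+x → x+x, and initial counts a: 3, b: 2, firing the first transition 3 times raises b's count to 5, so the second transition fires 5 times.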
Lemma 3.7 (adapted from [20]).
Let such that is ordered, , and let . Then there are matrices and , with , , such that, for all