Simple and Efficient Local Codes for Distributed Stable Network Construction

Supported in part by the project “Foundations of Dynamic Distributed Computing Systems” (FOCUS), which is implemented under the “ARISTEIA” Action of the Operational Programme “Education and Lifelong Learning” and is co-funded by the European Union (European Social Fund) and Greek National Resources.

Othon Michail
Computer Technology Institute & Press “Diophantus” (CTI), Patras, Greece

Paul G. Spirakis
Computer Technology Institute & Press “Diophantus” (CTI), Patras, Greece
Department of Computer Science, University of Liverpool, UK
Email: michailo@cti.gr, P.Spirakis@liverpool.ac.uk
Abstract

In this work, we study protocols (i.e. distributed algorithms) so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol (i.e. the system is homogeneous). Moreover, we assume pairwise interactions between the processes that are scheduled by an adversary. The only constraint on the adversary scheduler is that it must be fair, intuitively meaning that it must assign to every reachable configuration of the system a non-zero probability to occur. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of their connection and updates all of them. In particular, in every interaction, the protocol may activate an inactive connection, deactivate an active one, or leave the state of a connection unchanged. Initially all connections are inactive and the goal is for the processes, after interacting and activating/deactivating connections for a while, to end up with a desired stable network (i.e. one that does not change any more). We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. We provide proofs of correctness for all of our protocols and analyze the expected time to convergence of most of them under a uniform random scheduler that selects the next pair of interacting processes uniformly at random from all such pairs. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks. 
Our universality protocols use a subset of the population (the waste) in order to distributedly construct there a TM able to decide a graph class in some given space. Then, the protocols repeatedly construct in the rest of the population (the useful space) a graph drawn equiprobably from all possible graphs. The TM works on this graph and accepts if it is in the class. We additionally show how to partition the population into k supernodes, each being a line of log k nodes, for the largest such k. This amount of local memory is sufficient for the supernodes to obtain unique names and exploit their names and their memory to realize nontrivial constructions. Delicate composition and reinitialization issues have to be solved for these general constructions to work.

Keywords: distributed network construction, stabilization, homogeneous population, distributed protocol, interacting automata, fairness, random schedule, structure formation, self-organization

1 Introduction

1.1 Motivation

Suppose a set of tiny computational devices (possibly at the nanoscale) is injected into a human circulatory system for the purpose of monitoring or even treating a disease. The devices are incapable of controlling their mobility. The mobility of the devices, and consequently the interactions between them, stems solely from the dynamicity of the environment, the blood flow inside the circulatory system in this case. Additionally, each device alone is incapable of performing any useful computation, as the small scale of the device highly constrains its computational capabilities. The goal is for the devices to accomplish their task via cooperation. To this end, the devices are equipped with a mechanism that allows them to create bonds with other devices (mimicking nature’s ability to do so). So, whenever two devices come sufficiently close to each other and interact, apart from updating their local states, they may also become connected by establishing a physical connection between them. Moreover, two connected devices may at some point choose to drop their connection. In this manner, the devices can organize themselves into a desired global structure. This network-constructing self-assembly capability allows the artificial population of devices to evolve greater complexity, better storage capacity, and to adapt and optimize its performance to the needs of the specific task to be accomplished.

1.2 Our Approach

In this work, we study the fundamental problem of network construction by a distributed computing system. The system consists of a set of processes that are capable of performing local computation (via pairwise interactions) and of forming and deleting connections between them. Connections between processes can be either physical or virtual depending on the application. In the most general case, a connection between two processes can be in one of a finite number of possible states. For example, state 0 could mean that the connection does not exist, while state i ∈ {1, …, k}, for some finite k, could mean that the connection exists and has strength i. We consider here the simplest case, which we call the on/off case, in which, at any time, a connection can either exist or not exist; that is, there are just two states for the connections. If a connection exists we also say that it is active and if it does not exist we say that it is inactive. Initially all connections are inactive and the goal is for the processes, after interacting and activating/deactivating connections for a while, to end up with a desired stable network. In the simplest case, the output-network is the one induced by the active connections and it is stable when no connection changes state any more.

Our aim in this work is to initiate this study by proposing and studying a very simple, yet sufficiently generic, model for distributed network construction. To this end, we assume the computationally weakest type of processes. In particular, the processes are finite automata that all begin from the same initial state and all execute the same finite program which is stored in their memory (i.e. the system is homogeneous). The communication model that we consider is also very minimal. In particular, we consider processes that are inhabitants of an adversarial environment that has total control over the inter-process interactions. We model such an environment by an adversary scheduler that operates in discrete steps selecting in every step a pair of processes which then interact according to the common program. This represents very well systems of (not necessarily computational) entities that interact in pairs whenever two of them come sufficiently close to each other. When two processes interact, the program takes as input the states of the interacting processes and the state of their connection and outputs a new state for each process and a new state for the connection. The only restriction that we impose on the scheduler in order to study the constructive power of the model is that it is fair, by which we mean the weak requirement that, at every step, it assigns to every reachable configuration of the system a non-zero probability to occur. In other words, a fair scheduler cannot forever conceal an always reachable configuration of the system. Note that such a generic scheduler gives no information about the running time of our constructors. Thus, to estimate the efficiency of our solutions we assume a uniform random scheduler, one of the simplest fair probabilistic schedulers. The uniform random scheduler selects in every step independently and uniformly at random a pair of processes to interact from all such pairs. 
What renders this model interesting is its ability to achieve complex global behavior via a set of notably simple, uniform (i.e. with codes that are independent of the size of the system), homogeneous, and cooperative entities.

We now give a simple illustration of the above. Assume a set of very weak processes that can only be in one of two states, “black” or “red”. Initially, all processes are black. We can think of the processes as small particles that move randomly in a fair solution. The particles are capable of forming and deleting physical connections between them, by which we mean that, whenever two particles interact, they can read and write the state of their connection. Moreover, for simplicity of the model, we assume that fairness of the solution is independent of the states of the connections. This is in contrast to schedulers that would take into account the geometry of the active connections and would, for example, forbid two non-neighboring particles of the same component to interact with each other. In particular, we assume that throughout the execution every pair of processes may be selected for interaction. Consider now the following simple problem. We want to identically program the initially disorganized particles so that they become self-organized into a spanning star. In particular, we want to end up with a unique black particle connected (via active connections) to red particles and all other connections (between red particles) being inactive. Equivalently, given a (possibly physical) system that tends to form a spanning star we would like to unveil the code behind this behavior. Consider the following program. When two black particles that are not connected interact, they become connected and one of them becomes red. When two connected red particles interact they become disconnected (i.e. reds repel). Finally, when a black and a red that are not connected interact they become connected (i.e. blacks and reds attract). The protocol forms a spanning star as follows. 
As whenever two blacks interact only one survives and the other becomes red, eventually a unique black will remain and all other particles will be red (we say “eventually”, meaning “in finite time”, because we do not know how much time it will take for all blacks to meet each other but from fairness we know that this has to occur in a finite number of steps). As blacks and reds attract while reds repel, it is clear that eventually the unique black will be connected to all reds while every pair of reds will be disconnected. Moreover, no rule of the program can modify such a configuration, thus the constructed spanning star is stable (see Figure 1). It is worth noting that this very simple protocol is optimal both w.r.t. the number of states that it uses and w.r.t. the time it takes to construct a stable spanning star under the uniform random scheduler.
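The behavior just described is easy to reproduce computationally. The following Python sketch is our own illustration (the function name and the data representation are ours, not part of the model, whose formal definition appears in Section 3.1); it runs the three rules under a uniform random scheduler until the stable spanning star is reached:

```python
import random

def spanning_star(n, seed=0):
    """Simulate the 3-rule spanning-star protocol under a uniform
    random scheduler. Node states: 'b' (black), 'r' (red)."""
    rng = random.Random(seed)
    state = ['b'] * n              # all particles start black
    active = set()                 # frozensets {u, v} with an active edge
    steps = 0
    while True:
        u, v = rng.sample(range(n), 2)   # uniform random interaction
        e = frozenset((u, v))
        su, sv, on = state[u], state[v], e in active
        if su == 'b' and sv == 'b' and not on:
            active.add(e); state[v] = 'r'    # one black survives
        elif su == 'r' and sv == 'r' and on:
            active.discard(e)                # reds repel
        elif su != sv and not on:
            active.add(e)                    # black and red attract
        steps += 1
        # stop when the configuration is the stable spanning star
        blacks = [i for i in range(n) if state[i] == 'b']
        if len(blacks) == 1:
            c = blacks[0]
            star = {frozenset((c, i)) for i in range(n) if i != c}
            if active == star:
                return steps
```

For small populations the simulation stabilizes quickly; the returned value is the number of scheduler steps until the stable star is reached.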

Figure 1: (a) Initially all particles are black and no active connections exist. (b) After a while, only 3 black particles have survived each having a set of red neighbors (red particles appear as gray here). Note that some red particles are also connected to red particles. The tendency is for the red particles to repel red particles and attract black particles. (c) A unique black has survived, it has attracted all red particles, and all connections between red particles have been deactivated. The construction is a stable spanning star.

Our model for network construction is strongly inspired by the Population Protocol model [AAD06] and the Mediated Population Protocol model [MCS11a]. In the former, connections do not have states. States on the connections were first introduced in the latter. The main difference to our model is that in those models the focus was on the computation of functions of some input values and not on network construction. Another important difference is that we allow the edges to choose between only two possible states, which was not the case in [MCS11a]. Interestingly, when operating under a uniform random scheduler, population protocols are formally equivalent to chemical reaction networks (CRNs), which model chemistry in a well-mixed solution [Dot14]. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising programming language for the design of artificial molecular control circuitry. However, CRNs and population protocols can only capture the dynamics of molecular counts and not of structure formation. Our model then may also be viewed as an extension of population protocols and CRNs aiming to capture the stable structures that may occur in a well-mixed solution. From this perspective, our goal is to determine what stable structures can result in such systems (natural or artificial), how fast, and under what conditions (e.g. by what underlying codes/reaction-rules). Most computability issues in the area of population protocols have now been resolved. Finite-state processes on a complete interaction network, i.e. one in which every pair of processes may interact, (and several variations) compute the semilinear predicates [AAER07]. Semilinearity persists up to o(log log n) local space but not more than this [CMN11].
If additionally the connections between processes can hold a state from a finite domain (note that this is a stronger requirement than the on/off connections that the present work assumes), then the computational power dramatically increases to the commutative subclass of NSPACE(n^2) [MCS11a]. Other important works include [GR09], which equipped the nodes of population protocols with unique ids, and [BBCK10], which introduced a (weak) notion of speed of the nodes that allowed the design of fast converging protocols with only weak requirements. For a very recent introductory text see [MCS11b].

The paper essentially consists of two parts. In the first part, we give simple (i.e. small) and efficient (i.e. polynomial-time) protocols for the construction of several fundamental networks. In particular, we give protocols for spanning lines, spanning rings, cycle-covers, partitioning into cliques, and regular networks (formal definitions of all problems considered can be found in Section 3.2). We remark that the spanning line problem is of outstanding importance because it constitutes a basic ingredient of universal constructors. We give three different protocols for this problem, each improving on the running time but using more states to this end. Additionally, we establish a generic Ω(n log n) lower bound on the expected running time of all constructors that construct a spanning network and an Ω(n^2) lower bound for the spanning line, where throughout this work n denotes the number of processes. Our fastest protocol for the problem runs in O(n^3) expected time and uses 9 states, while our simplest uses only 5 states but pays for this in an expected time which is between Ω(n^4) and O(n^5). In the second part, we investigate the more generic question of what is in principle constructible by our model. We arrive there at several satisfactory characterizations establishing some sort of universality of the model. The main idea is as follows. To construct a decidable graph-language L we (i) construct on a subset of the processes (called the waste) a network G1 capable of simulating a Turing Machine (abbreviated “TM” throughout the paper) and of constructing a random network on the remaining processes (called the useful space), (ii) use G1 to construct a random network G2 on the remaining processes, (iii) execute on G1 the TM that decides L with G2 as input. If the TM accepts, then we output G2 (note that this is not a terminating step; the reason why will become clear in Section 6: the protocol just freezes and its output forever remains G2), otherwise we go back to (ii) and repeat.
Using this core idea we prove several universality results for our model. Additionally, we show how to organize the population into a distributed system with names and logarithmic local memories.
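At a high level, steps (ii) and (iii) of the core idea amount to rejection sampling. The following Python sketch is ours: it abstracts the simulated TM into a membership predicate and the useful space into an explicit node set, deferring all distributed implementation details to Section 6.

```python
import random
from itertools import combinations

def construct_in_language(n_useful, in_language, rng=random):
    """Sketch of the universal construct-and-test loop: repeatedly draw
    a uniformly random graph on the useful space and keep it iff the
    (abstracted) TM accepts it."""
    nodes = range(n_useful)
    while True:
        # step (ii): each potential edge becomes active with probability
        # 1/2, i.e. a graph is drawn equiprobably from all graphs on the
        # useful space
        edges = {e for e in combinations(nodes, 2) if rng.random() < 0.5}
        # step (iii): the TM decides membership; on acceptance the
        # output freezes (here: the function returns)
        if in_language(nodes, edges):
            return edges

# example: the language of graphs with an even number of edges
even = construct_in_language(5, lambda V, E: len(E) % 2 == 0)
```

The loop terminates with probability 1 whenever the language contains a constant fraction of the graphs on the useful space; in the actual protocol, a rejection triggers a (delicate) reinitialization of the useful space instead of a fresh function call.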

In Section 2, we discuss further related literature. Section 3 brings together all definitions and basic facts that are used throughout the paper. In particular, in Section 3.1 we formally define the model of network constructors, Section 3.2 formally defines all network construction problems that are considered in this work, and in Section 3.3 we identify and analyze a set of basic probabilistic processes that are recurrent in the analysis of the running times of network constructors. In Section 4, we study the spanning line problem. In Section 5, we provide direct constructors for all the other basic network construction problems. Section 6 presents our universality results. Finally, in Section 7 we conclude and give further research directions that are opened by our work.

2 Further Related Work

Algorithmic Self-Assembly. There are already several models trying to capture the self-assembly capability of natural processes with the purpose of engineering systems and developing algorithms inspired by such processes. For example, [Dot12] proposes to learn how to program molecules to manipulate themselves, grow into machines and at the same time control their own growth. The research area of “algorithmic self-assembly” belongs to the field of “molecular computing”. The latter was initiated by Adleman [Adl94], who designed interacting DNA molecules to solve an instance of the Hamiltonian path problem. The model guiding the study in algorithmic self-assembly is the Abstract Tile Assembly Model (aTAM) [Win98, RW00] and variations. In contrast to those models that try to incorporate the exact molecular mechanisms (like e.g. temperature, energy, and bounded degree), we propose a very abstract combinatorial rule-based model, free of specific application-driven assumptions, with the aim of revealing the fundamental laws governing the distributed (algorithmic) generation of networks. Our model may serve as a common substructure to more applied models (like assembly models or models with geometry restrictions) that may be obtained from our model by imposing restrictions on the scheduler, the degree, and the number of local states (see Section 7 for several interesting variations of our model).

Distributed Network Construction. To the best of our knowledge, classical distributed computing has not considered the problem of constructing an actual communication network from scratch. From the seminal work of Angluin [Ang80] that initiated the theoretical study of distributed computing systems up to now, the focus has been more on assuming a given communication topology and constructing a virtual network over it, e.g. a spanning tree for the purpose of fast dissemination of information. Moreover, these models most of the time assume unique identities, unbounded memories, and message-passing communication. Additionally, a process always communicates with its neighboring processes (see [Lyn96] for all the details). An exception is the area of geometric pattern formation by mobile robots (cf. [SY99, DFSY10] and references therein). A great difference, though, to our model is that in mobile robotics the computational entities have complete control over their mobility and thus over their future interactions. That is, the goal of a protocol is to result in a desired interaction pattern, while in our model the goal of a protocol is to construct a network while operating under a totally unpredictable interaction pattern. Very recently, a model inspired by the behavior of ameba that allows algorithmic research on self-organizing particle systems was proposed [DGRS13]. The goal is for the particles to self-organize in order to adapt to a desired shape without any central control, which is quite similar to our objective; however, the two models seem to have little in common. In the same work, the authors observe that, in contrast to the considerable work that has been performed w.r.t. systems (e.g. self-reconfigurable robotic systems), only very little theoretical work has been done in this area. This further supports the importance of introducing a simple yet sufficiently generic model for distributed network construction, as we do in this work.

Cellular Automata. A cellular automaton (cf. e.g. [Sch11]) consists of a grid of cells, each cell being a finite automaton. A cell updates its own state by reading the states of its neighboring cells (e.g. 2 in the 1-dimensional case and 4 in the 2-dimensional case). All cells may perform the updates in discrete synchronous steps or updates may occur asynchronously. Cellular automata have been used as models for self-replication, for modeling several physical systems (e.g. neural activity, bacterial growth, pattern formation in nature), and for understanding emergence, complexity, and self-organization issues. Though there are some similarities, there are also significant differences between our model and cellular automata. One is that in our model the interaction pattern is nondeterministic as it depends on the scheduler, and a process may interact with any other process of the system and not just with some predefined neighbors. Moreover, our model has a direct capability of forming networks whereas cellular automata can form networks only indirectly (an edge between two cells u and v has to be represented as a line of cells beginning at u, ending at v, and with all cells on the line being in a special edge-state). In fact, cellular automata are more suitable for studying the formation of patterns on e.g. a discrete surface of static cells, while our model is more suitable for studying how a totally dynamic (e.g. mobile) and initially disordered collection of entities can self-organize into a network.

Social Networks. There is a great amount of work dealing with networks formed by a group of interacting individuals. Individuals, also called players, may be people, animals, or companies, depending on the application; they usually have incentives, and connections between individuals indicate some social relationship, such as friendship. The network is formed by allowing the individuals to form or delete connections, usually selfishly, by trying to maximize their own utility. The usual goal there is to study how the whole network affects the outcome of a specific interaction, to predict the network that will be formed by a set of selfish individuals, and to characterize the quality of the network formed (e.g. its efficiency). See e.g. [Jac05, BEK13]. This is a game-theoretic setting which is very different from the setting considered here, as the latter does not include incentives and utilities. Another important line of research considers random social networks in which new links are formed according to some probability distribution. For example, in [BA99] it was shown that growth and preferential attachment, which characterize a great majority of social networks (like, for example, the Internet), result in scale-free properties that are not predicted by the Erdős–Rényi random graph model [ER59, Bol01]. Though, in principle, we allow processes to perform coin tosses during an interaction, our focus is not on the formation of a random network but on cooperative (algorithmic) construction according to a common set of rules. In summary, our model looks more like a standard dynamic distributed computing system in which the interacting entities are computing processes that all execute the same program.

Network Formation in Nature. Nature has an intrinsic ability to form complex structures and networks via a process known as self-assembly. By self-assembly, small components (like e.g. molecules) automatically assemble into large, and usually complex structures (like e.g. a crystal). There is an abundance of such examples in the physical world. Lipid molecules form a cell’s membrane, ribosomal proteins and RNA coalesce into functional ribosomes, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade bacteria [Dot12]. Mixtures of RNA fragments that self-assemble into self-replicating ribozymes spontaneously form cooperative catalytic cycles and networks. Such cooperative networks grow faster than selfish autocatalytic cycles indicating an intrinsic ability of RNA populations to evolve greater complexity through cooperation [VMC12]. Through billions of years of prebiotic molecular selection and evolution, nature has produced a basic set of molecules. By combining these simple elements, natural processes are capable of fashioning an enormously diverse range of fabrication units, which can further self-organize into refined structures, materials and molecular machines that not only have high precision, flexibility and error-correction capacity, but are also self-sustaining and evolving. In fact, nature shows a strong preference for bottom-up design.

Systems and solutions inspired by nature have often turned out to be extremely practical and efficient. For example, the bottom-up approach of nature inspires the fabrication of biomaterials by attempting to mimic these phenomena with the aim of creating new and varied structures with novel utilities well beyond the gifts of nature [Zha03]. Moreover, there is already a remarkable amount of work envisioning our future ability to engineer computing and robotic systems by manipulating molecules with nanoscale precision. Ambitious long-term applications include molecular computers [BPS10] and miniature (nano)robots for surgical instrumentation, diagnosis and drug delivery in medical applications (e.g. it has very recently been reported that DNA nanorobots could even kill cancer cells [DBC12]) and for monitoring in extreme conditions (e.g. in toxic environments). However, the road towards this vision passes first through our ability to discover the laws governing the capability of distributed systems to construct networks. The gain of developing such a theory will be twofold: it will give some insight into the role (and the mechanisms) of network formation in the complexity of natural processes and it will allow us to engineer artificial systems that achieve this complexity.

3 Preliminaries

3.1 A Model of Network Constructors

Definition 1

A Network Constructor (NET) is a distributed protocol defined by a 4-tuple (Q, q0, Q_out, δ), where Q is a finite set of node-states, q0 ∈ Q is the initial node-state, Q_out ⊆ Q is the set of output node-states, and δ : Q × Q × {0, 1} → Q × Q × {0, 1} is the transition function.

If δ(a, b, c) = (a', b', c'), we call (a, b, c) → (a', b', c') a transition (or rule) and we define δ1(a, b, c) = a', δ2(a, b, c) = b', and δ3(a, b, c) = c'. A transition (a, b, c) → (a', b', c') is called effective if x ≠ x' for at least one x ∈ {a, b, c} and ineffective otherwise. When we present the transition function of a protocol we only present the effective transitions. Additionally, we agree that the size of a protocol is the number of its states, i.e. |Q|.

The system consists of a population V_I of n distributed processes (also called nodes when clear from context). In the generic case, there is an underlying interaction graph G_I = (V_I, E_I) specifying the permissible interactions between the nodes. Interactions in this model are always pairwise. In this work, G_I is a complete undirected interaction graph, i.e. E_I = {uv : u, v ∈ V_I and u ≠ v}, where uv denotes the unordered pair {u, v}. Initially, all nodes in V_I are in the initial node-state q0.

A central assumption of the model is that edges have binary states. An edge in state 0 is said to be inactive while an edge in state 1 is said to be active. All edges are initially inactive.

Execution of the protocol proceeds in discrete steps. In every step, a pair of nodes uv from E_I is selected by an adversary scheduler and these nodes interact and update their states and the state of the edge joining them according to the transition function δ. In particular, we assume that, for all distinct node-states a, b ∈ Q and for all edge-states c ∈ {0, 1}, δ specifies either (a, b, c) or (b, a, c). So, if a, b, and c are the states of nodes u, v, and edge uv, respectively, then the unique rule corresponding to these states, let it be (a, b, c) → (a', b', c'), is applied: the edge that was in state c updates its state to c' and, if a ≠ b, then u updates its state to a' and v updates its state to b'; if a = b and a' = b', then both nodes update their states to a'; and if a = b and a' ≠ b', then the node that gets a' is drawn equiprobably from the two interacting nodes and the other node gets b'.
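As an illustration of these conventions, δ can be encoded as a partial dictionary of effective transitions, with the symmetric and equiprobable cases resolved explicitly. The sketch below is our own (the dictionary contents reuse the spanning-star rules of Section 1.2) and is not part of the formal model:

```python
import random

# Encoding of a transition function delta as a dictionary mapping
# (state_u, state_v, edge_state) -> (state_u', state_v', edge_state').
# Only effective transitions are listed; missing triples are ineffective.
delta = {
    ('b', 'b', 0): ('b', 'r', 1),   # two blacks: connect, one becomes red
    ('r', 'r', 1): ('r', 'r', 0),   # connected reds: disconnect
    ('b', 'r', 0): ('b', 'r', 1),   # black and red: connect
}

def apply_rule(a, b, c, rng=random):
    """Apply the unique rule for (a, b, c): if delta specifies the mirror
    triple (b, a, c), use it with the outputs swapped; if a == b and the
    two output states differ, assign them equiprobably."""
    if (a, b, c) in delta:
        a2, b2, c2 = delta[(a, b, c)]
    elif (b, a, c) in delta:
        b2, a2, c2 = delta[(b, a, c)]
    else:
        return a, b, c                  # ineffective interaction
    if a == b and a2 != b2 and rng.random() < 0.5:
        a2, b2 = b2, a2                 # equiprobable assignment
    return a2, b2, c2
```

For instance, an interaction of a red u with a black v over an inactive edge is matched via the mirror triple (b, r, 0), so the edge is activated and both node-states are preserved.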

A configuration is a mapping C : V_I ∪ E_I → Q ∪ {0, 1} specifying the state of each node and each edge of the interaction graph. Let C and C′ be configurations, and let u, v be distinct nodes. We say that C goes to C′ via encounter e = uv, denoted C →e C′, if (C′(u), C′(v), C′(e)) = δ(C(u), C(v), C(e)) and C′(z) = C(z) for all z ∈ (V_I \ {u, v}) ∪ (E_I \ {e}). We say that C′ is reachable in one step from C, denoted C → C′, if C →e C′ for some encounter e ∈ E_I. We say that C′ is reachable from C and write C ⇝ C′, if there is a sequence of configurations C = C0, C1, …, Ct = C′, such that Ci → Ci+1 for all i, 0 ≤ i < t.

An execution is a finite or infinite sequence of configurations C0, C1, C2, …, where C0 is an initial configuration and Ci → Ci+1, for all i ≥ 0. A fairness condition is imposed on the adversary to ensure the protocol makes progress. An infinite execution is fair if for every pair of configurations C and C′ such that C → C′, if C occurs infinitely often in the execution then so does C′. In what follows, every execution of a NET will by definition be considered to be fair.

We define the output of a configuration C as the graph G(C) = (V, E), where V = {u ∈ V_I : C(u) ∈ Q_out} and E = {uv : u, v ∈ V, u ≠ v, and C(uv) = 1}. In words, the output-graph of a configuration consists of those nodes that are in output states and those edges between them that are active, i.e. the active subgraph induced by the nodes that are in output states. The output of an execution C0, C1, … is said to stabilize (or converge) to a graph G if there exists some step t ≥ 0 such that G(Ci) = G for all i ≥ t, i.e. from step t and onwards the output-graph remains unchanged. Every such configuration Ci, for i ≥ t, is called output-stable. The running time (or time to convergence) of an execution is defined as the minimum such t (or ∞ if no such t exists). Throughout the paper, whenever we study the running time of a NET, we assume that interactions are chosen by a uniform random scheduler which, in every step, selects independently and uniformly at random one of the |E_I| = n(n − 1)/2 possible interactions. In this case, the running time becomes a random variable (abbreviated “r.v.”) X and our goal is to obtain bounds on the expectation of X. Note that the uniform random scheduler is fair with probability 1.
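The output-graph of a configuration is straightforward to compute from this definition. A minimal sketch (our own encoding, with a configuration split into separate node and edge mappings):

```python
def output_graph(node_state, edge_state, q_out):
    """Output of a configuration: the nodes in output states together
    with the active edges between them (the induced active subgraph).
    node_state: dict node -> state; edge_state: dict frozenset -> bool."""
    V = {u for u, s in node_state.items() if s in q_out}
    E = {e for e, on in edge_state.items() if on and e <= V}
    return V, E

# usage: with q_out = {'r'}, only red nodes and active edges among
# them appear in the output
V, E = output_graph(
    {0: 'b', 1: 'r', 2: 'r'},
    {frozenset({0, 1}): True, frozenset({1, 2}): True,
     frozenset({0, 2}): False},
    {'r'},
)
```

The subset test `e <= V` keeps exactly the active edges whose both endpoints are in output states.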

Definition 2

We say that an execution of a NET A on n processes constructs a graph (or network) G, if its output stabilizes to a graph isomorphic to G.

Definition 3

We say that a NET A constructs a graph language L with useful space g(n) ≤ n, if g is the greatest function for which: (i) for all n, every execution of A on n processes constructs a G ∈ L of order at least g(n) (provided that such a G exists) and, additionally, (ii) for all G ∈ L there is an execution of A on n processes, for some n satisfying |V(G)| ≥ g(n), that constructs G. Equivalently, we say that A constructs L with waste n − g(n).

Definition 4

Define REL(g(n)) to be the class of all graph languages that are constructible with useful space g(n) by a NET. We call REL the relation or on/off class.

Also define PREL(g(n)) in precisely the same way as REL(g(n)) but in the extension of the above model in which every pair of processes is capable of tossing an unbiased coin during an interaction between them. In particular, in the weakest probabilistic version of the model, we allow transitions that with probability 1/2 give one outcome and with probability 1/2 another. Additionally, we require that all graphs G ∈ L have the same probability to be constructed by the protocol.

We denote by DGS(f(l)) (for “Deterministic Graph Space”) the class of all graph languages that are decidable by a TM of (binary) space f(l), where l is the length of the adjacency matrix encoding of the input graph.

3.2 Problem Definitions

We here provide formal definitions of all the network construction problems that are considered in this work. Protocols and bounds for these problems are presented in Sections 4 and 5.

Global line. The goal is for the distributed processes to construct a spanning line, i.e. a connected graph in which 2 nodes have degree 1 and n - 2 nodes have degree 2.

Cycle cover. Every process in V must eventually have degree 2. The result is a collection of node-disjoint cycles spanning V.

Global star. The processes must construct a spanning star, i.e. a connected graph in which 1 node, called the center, has degree n - 1 and n - 1 nodes, called the peripheral nodes, have degree 1.

Global ring. The processes must construct a spanning ring, i.e. a connected graph in which every node has degree 2.

k-regular connected. The generalization of global ring in which every node has degree k (note that k is a constant and a protocol for the problem must run correctly on any number of processes).

k-cliques. The processes must partition themselves into ⌊n/k⌋ cliques of order k each (again k is a constant).

Replication. The protocol is given an input graph G_1 = (V_1, E_1) on a subset V_1 of the processes (V_1 ⊂ V). The processes in V_1 are initially in state q_1 and the edges of E_1 are the active edges between them. All other edges in E are initially inactive. The processes in V_2 = V \ V_1 are initially in state q_0. The goal is to create a replica of G_1 on V_2, provided that |V_2| ≥ |V_1|. Formally, we want, in every execution, the output induced by the active edges between the nodes of V_2 to stabilize to a graph isomorphic to G_1.

3.3 Basic Probabilistic Processes

We now present a set of very fundamental probabilistic processes that are recurrent in the analysis of the running times of network constructors. All these processes assume a uniform random scheduler and are applications of the standard coupon collector problem. In most of these processes, we ignore the states of the edges and focus only on the dynamics of the node-states, that is we consider rules of the form (a, b) → (a', b'). Throughout this section, we call a step a success if an effective rule applies on the interacting nodes and we denote by X the r.v. of the running time of the process under consideration.

One-way epidemic. Consider the protocol in which the only effective transition is (a, b) → (a, a). Initially, there is a single a and n - 1 b's and we want to estimate the expected number of steps until all nodes become a's.

Proposition 1

The expected time to convergence of a one-way epidemic (under the uniform random scheduler) is Θ(n log n).

Proof

Let X be a r.v. defined to be the number of steps until all nodes are in state a. Call a step a success if an effective rule applies and a new a appears on some node. Divide the steps of the protocol into epochs, where epoch i begins with the step following the (i - 1)st success and ends with the step at which the ith success occurs. Let also the r.v. X_i, 1 ≤ i ≤ n - 1, be the number of steps in the ith epoch. Let p_i be the probability of success at any step during the ith epoch. We have p_i = i(n - i)/m, where m = n(n - 1)/2 denotes the total number of possible interactions, and E[X_i] = 1/p_i = m/[i(n - i)]. By linearity of expectation we have

E[X] = Σ_{i=1}^{n-1} E[X_i] = Σ_{i=1}^{n-1} m/[i(n - i)] = (m/n) Σ_{i=1}^{n-1} [1/i + 1/(n - i)] = (n - 1) H_{n-1} = Θ(n log n),

where H_k = Σ_{j=1}^{k} 1/j denotes the kth Harmonic number. ∎
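The closed form above can be checked mechanically by summing the epoch expectations with exact rational arithmetic; a small sketch (function names are ours, not the paper's):

```python
from fractions import Fraction

def epidemic_expected_steps(n):
    """Exact E[X] for the one-way epidemic on n nodes: with i a-nodes
    present, a success occurs with probability i(n-i)/m, so epoch i
    lasts m/(i(n-i)) steps in expectation."""
    m = Fraction(n * (n - 1), 2)  # number of possible interactions
    return sum(m / (i * (n - i)) for i in range(1, n))

def harmonic(k):
    return sum(Fraction(1, j) for j in range(1, k + 1))

# The proof's closed form: E[X] = (n-1) * H_{n-1}.
```

Evaluating both sides as exact fractions confirms the identity for any concrete n.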

One-to-one elimination. All nodes are initially in state a. The only effective transition of the protocol is (a, a) → (a, b). We are now interested in the expected time until a single a remains. We call the process one-to-one elimination because a's are only eliminated when they interact with other a's. A straightforward application is in protocols that elect a unique leader by beginning with all nodes in the leader state and eliminating a leader whenever two leaders interact.

Proposition 2

The expected time to convergence of a one-to-one elimination is Θ(n²).

Proof

Epoch i begins with the step following the (i - 1)st success and ends with the step at which the ith success occurs. Just before the ith success there are n - i + 1 a's, so the probability of success during the ith epoch, for 1 ≤ i ≤ n - 1, is p_i = C(n - i + 1, 2)/m and

E[X] = Σ_{i=1}^{n-1} 1/p_i = Σ_{j=2}^{n} m/C(j, 2) = n(n - 1) Σ_{j=2}^{n} 1/[j(j - 1)] ≤ n(n - 1) = O(n²).

The above uses the fact that Σ_{j=2}^{n} 1/[j(j - 1)] is bounded from above by 1. This holds because 1/[j(j - 1)] = 1/(j - 1) - 1/j, so the sum telescopes to 1 - 1/n < 1.

Now, for the lower bound, observe that the last two a's need on average m = n(n - 1)/2 steps to meet each other. As E[X] ≥ m = Ω(n²), we conclude that E[X] = Θ(n²). ∎
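The telescoping sum can likewise be verified numerically; a quick sketch with our own function name:

```python
from fractions import Fraction

def one_to_one_expected_steps(n):
    """Exact E[X] for one-to-one elimination on n nodes: with j a-nodes
    left (j = n, n-1, ..., 2), a success occurs with probability
    C(j,2)/C(n,2)."""
    m = Fraction(n * (n - 1), 2)
    return sum(m / Fraction(j * (j - 1), 2) for j in range(2, n + 1))

# The telescoping sum in the proof evaluates exactly to (n-1)^2.
```

The exact value (n - 1)² makes the Θ(n²) bound concrete.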

A slight variation of the one-to-one elimination protocol constructs a maximum matching, i.e. a matching of cardinality ⌊n/2⌋ (which is a perfect matching in case n is even). The variation is (a, a, 0) → (b, b, 1) and the running time of a one-to-one elimination, i.e. O(n²), is an upper bound on this variation. For the lower bound, notice that when only two (or three) a's remain the expected number of steps for a success is m (m/3, respectively), that is the running time is also Ω(n²). We conclude that there is a protocol that constructs a maximum matching in an expected number of Θ(n²) steps.
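The matching variation is easy to simulate end-to-end; the sketch below (our naming, fixed seed for reproducibility) runs the single rule under a uniform random scheduler and checks the defining properties of the result:

```python
import random

def max_matching(n, seed=3):
    """Simulate (a,a,0) -> (b,b,1): a pair of unmatched a-nodes that
    interacts becomes matched (both switch to b, their edge activates)."""
    rng = random.Random(seed)
    state = ["a"] * n
    matched = []
    while state.count("a") >= 2:  # a success is still possible
        u, v = rng.sample(range(n), 2)  # uniform random interaction
        if state[u] == state[v] == "a":
            state[u] = state[v] = "b"
            matched.append((u, v))
    return matched
```

At stabilization the matched pairs are node-disjoint and their number is ⌊n/2⌋, i.e. a maximum matching.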

One-to-all elimination. All nodes are initially in state a. The effective rules of the protocol are (a, b) → (b, b) and (a, a) → (a, b). We are now interested in the expected time until no a remains. The process is called one-to-all elimination because a's are eliminated not only when they interact with a's but also when they interact with b's. At first sight, it seems to run faster than a one-way epidemic, as b's still propagate towards a's as in a one-way epidemic but now b's are also created when two a's interact. We show that this is not the case.

Proposition 3

The expected time to convergence of a one-to-all elimination is Θ(n log n).

Proof

When i a's remain, the number of effective pairs is C(i, 2) + i(n - i) = i(2n - i - 1)/2, so the probability of success during the corresponding epoch, for 1 ≤ i ≤ n, is p_i = i(2n - i - 1)/[n(n - 1)] and

E[X] = Σ_{i=1}^{n} 1/p_i = Σ_{i=1}^{n} n(n - 1)/[i(2n - i - 1)].

For the upper bound, we have

E[X] ≤ Σ_{i=1}^{n} n(n - 1)/[i(n - 1)] = n H_n = O(n log n),

since 2n - i - 1 ≥ n - 1 for all i ≤ n.

For the lower bound, we have

E[X] ≥ Σ_{i=1}^{n} n(n - 1)/[i · 2(n - 1)] = (n/2) H_n = Ω(n log n),

since 2n - i - 1 ≤ 2(n - 1).

We conclude that E[X] = Θ(n log n). ∎
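The sandwich between (n/2)H_n and nH_n used above can be confirmed by exact computation; a quick sketch with assumed names:

```python
from fractions import Fraction

def one_to_all_expected_steps(n):
    """Exact E[X] for one-to-all elimination: with i a-nodes left, the
    effective pairs are C(i,2) + i(n-i) = i(2n-i-1)/2."""
    m = Fraction(n * (n - 1), 2)
    return sum(m / Fraction(i * (2 * n - i - 1), 2) for i in range(1, n + 1))

def harmonic(k):
    return sum(Fraction(1, j) for j in range(1, k + 1))

# Sandwich from the proof: (n/2) H_n <= E[X] <= n H_n.
```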

Meet everybody. A single node u is initially in state l and all other nodes are in state b. The only effective transition is (l, b) → (l, a). We study the time until all b's become a's, which is equal to the time needed for u to interact with every other node.

Proposition 4

The expected time to convergence of a meet everybody is Θ(n² log n).

Proof

Assume that in every step u participates in an interaction. Then u must collect n - 1 coupons, which are the n - 1 different nodes that it must interact with. Clearly, in every step, every node has the same probability to interact with u, i.e. 1/(n - 1), and this is the classical coupon collector problem that takes average time (n - 1) H_{n-1} = Θ(n log n). But on average u needs Θ(n) steps to participate in an interaction, thus the total time is Θ(n² log n). ∎
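The exact expectation implicit in this argument (while j of u's coupons are missing, exactly j of the m interactions make progress) can be computed directly; a short sketch with our names:

```python
from fractions import Fraction

def meet_everybody_expected_steps(n):
    """Exact E[X]: while j of u's n-1 'coupons' are still missing,
    exactly j of the m = C(n,2) interactions make progress."""
    m = Fraction(n * (n - 1), 2)
    return sum(m / j for j in range(1, n))

def harmonic(k):
    return sum(Fraction(1, j) for j in range(1, k + 1))

# E[X] = m * H_{n-1} = Theta(n^2 log n).
```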

Node cover. All nodes are initially in state r. The only effective transitions are (r, r) → (b, b), (b, r) → (b, b). We are interested in the number of steps until all nodes become b's, i.e. the time needed for every node to interact at least once.

Proposition 5

The expected time to convergence of a node cover is Θ(n log n).

Proof

For the upper bound, simply observe that the running time of a one-to-all elimination, i.e. O(n log n), is an upper bound on the running time of a node cover. The reason is that a node cover is a one-to-all elimination in which in some cases we may get two new b's by one effective transition (namely (r, r) → (b, b)), while in a one-to-all elimination all effective transitions result in at most one new b.

For the lower bound, if k is the number of r's then the probability of success is k(2n - k - 1)/[n(n - 1)]. Observe now that a node cover process is slower than the artificial variation in which, whenever rule (b, r) → (b, b) applies, we pick another r (if one exists) and also make it a b. This is because, given k r's, this artificial process has the same probability of success as a node cover, but additionally in every success the artificial process is guaranteed to produce two new b's, while a node cover may in some cases produce only one new b. Define Y to be the r.v. of the running time of the artificial process. Then, taking into account what we already proved in the lower bound of one-to-all elimination (see Proposition 3), we have

E[X] ≥ E[Y] = Σ_{k ∈ {n, n-2, n-4, …}} n(n - 1)/[k(2n - k - 1)],

which retains at least every other term of the corresponding sum in the proof of Proposition 3 and is therefore still Ω(n log n).

We conclude that E[X] = Θ(n log n). ∎

Edge cover. All nodes are in state b throughout the execution of the protocol. The only effective transition is (b, b, 0) → (b, b, 1) (we now focus on edge-state updates), i.e. whenever an edge is found inactive it is activated (recall that initially all edges are inactive). We study the number of steps until all edges in E become activated, which is equal to the time needed for all m possible interactions to occur.

Proposition 6

The expected time to convergence of an edge cover is Θ(n² log n).

Proof

Given that m = n(n - 1)/2 and given that i successes (i.e. i distinct interactions) have occurred, the corresponding probability for the coupon collector argument is (m - i)/m and the expected number of steps is Σ_{i=0}^{m-1} m/(m - i) = m H_m = Θ(n² log n). Another way to see this is to observe that it is a classical coupon collector problem with m coupons, each selected in every step with probability 1/m, thus E[X] = m H_m = Θ(m log m) = Θ(n² log n). ∎
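As a sanity check, the coupon-collector expectation can be evaluated exactly; a brief sketch (names are ours):

```python
from fractions import Fraction

def edge_cover_expected_steps(n):
    """Exact E[X]: coupon collector over the m = C(n,2) edges, each
    step drawing one edge uniformly at random."""
    m = n * (n - 1) // 2
    return sum(Fraction(m, m - i) for i in range(m))

def harmonic(k):
    return sum(Fraction(1, j) for j in range(1, k + 1))

# E[X] = m * H_m = Theta(n^2 log n).
```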

Table 1 summarizes the expected time to convergence of each of the above fundamental probabilistic processes.

Protocol Expected Time
One-way epidemic Θ(n log n)
One-to-one elimination Θ(n²)
One-to-all elimination Θ(n log n)
Meet everybody Θ(n² log n)
Node cover Θ(n log n)
Edge cover Θ(n² log n)
Table 1: Our results for the expected time to convergence of several fundamental probabilistic processes.

4 Constructing a Global Line

In this section, we study probably the most fundamental network-construction problem, which is the problem of constructing a spanning line. Its importance lies in the fact that a spanning line provides an ordering on the processes which can then be exploited (as shown in Section 6) to simulate a TM and thus to establish universality of our model. We give three different protocols for the spanning line problem each improving on the running time but using more states to this end.

We begin with a generic lower bound holding for all protocols that construct a spanning network.

Theorem 4.1 (Generic Lower Bound)

The expected time to convergence of any protocol that constructs a spanning network, i.e. one in which every node has at least one active edge incident to it, is Ω(n log n). Moreover, this is the best lower bound for general spanning networks that we can hope for, as there is a protocol that constructs a spanning network in Θ(n log n) expected time.

Proof

Consider the time at which the last edge is activated. Clearly, by that time, all nodes must have some active edge incident to them, which implies that every node must have interacted at least once. Thus the running time is lower bounded by that of a node cover, which by Proposition 5 takes an expected number of Θ(n log n) steps.

Now consider the variation of node cover which in every transition that is effective w.r.t. node-states additionally activates the corresponding edge. In particular, the protocol consists of the rules (r, r, 0) → (b, b, 1) and (b, r, 0) → (b, b, 1). Clearly, when every node has interacted at least once, or equivalently when all r's have become b's, every node has an active edge incident to it, and thus the resulting stable network is spanning. The reason is that all nodes are r's in the beginning, every node at some point is converted to b, and every such conversion results in an activation of the corresponding edge. As a node cover completes in Θ(n log n) steps, the above protocol takes Θ(n log n) steps to construct a spanning network. ∎
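The two-rule protocol from this proof is straightforward to simulate; in the sketch below (our naming, fixed seed) a step is effective exactly when at least one participant is still uncovered, which is when one of the two rules applies:

```python
import random

def build_spanning_network(n, seed=0):
    """Simulate the rules (r,r,0)->(b,b,1) and (b,r,0)->(b,b,1)
    under a uniform random scheduler until no r-node remains."""
    rng = random.Random(seed)
    state = ["r"] * n
    active = set()
    while any(s == "r" for s in state):
        u, v = rng.sample(range(n), 2)  # uniform random interaction
        if state[u] == "r" or state[v] == "r":
            # an effective rule fires: both endpoints become covered
            state[u] = state[v] = "b"
            active.add(frozenset((u, v)))
    return active
```

At stabilization every node became b via a transition that activated one of its incident edges, so the active graph is spanning by construction.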

We now give an improved lower bound for the particular case of constructing a spanning line.

Theorem 4.2 (Line Lower Bound)

The expected time to convergence of any protocol that constructs a spanning line is Ω(n²).

Proof

Take any protocol A that constructs a spanning line and any execution of A on n nodes. Consider the step t at which A performed the last modification of an edge. Observe that the construction after step t must be a spanning line. We distinguish two cases.

(i) The last modification was an activation. In this case, the construction just before step t was either a line on n - 1 nodes and an isolated node or two disjoint lines spanning all nodes. To see this, observe that these are the only constructions that can be turned into a line by a single additional activation. In the first case, the probability of obtaining an interaction between the isolated node and one of the endpoints of the line is 2/m and in the second the probability of obtaining an interaction between an endpoint of one line and an endpoint of the other line is 4/m. In both cases, the expected number of steps until the last edge becomes activated is Ω(m) = Ω(n²).

(ii) The last modification was a deactivation. This implies that the construction just before step t was a spanning line with an additional active edge between two nodes, u and v, that are not neighbors on the line. If one of these nodes, say u, is an internal node, then u has degree 3 and we can only obtain a line by deactivating one of the edges incident to u. Clearly, the probability of getting one of these edges is 3/m, and it is even smaller if both nodes are internal. Thus, if at least one of u and v is internal, the expected number of steps is Ω(n²). It remains to consider the case in which the construction just before step t was a spanning ring, i.e. the case in which u and v are the endpoints of the spanning line. In this case, consider the step t' < t of the last modification of an edge that resulted in the ring. To this end notice that all nodes of a ring have degree 2. If t' was an activation then, just before it, exactly two nodes had degree 1 and if t' was a deactivation then two nodes had degree 3. In both cases, there is a single interaction that results in a ring, the probability of success is 1/m, and the expectation is again Ω(n²). ∎

We proceed by presenting protocols for the spanning line problem.

4.1 1st Protocol

We now present our simplest protocol for the spanning line problem.

Protocol 1 Simple-Global-Line

Q = {q0, q1, q2, l, w}, δ:
(q0, q0, 0) → (q1, l, 1)  // two isolated nodes form a line of length 1
(l, q0, 0) → (q2, l, 1)   // a leader endpoint expands towards an isolated node
(l, l, 0) → (q2, w, 1)    // two leader endpoints merge their lines
(w, q2, 1) → (q2, w, 1)   // the leader performs a random-walk step
(w, q1, 1) → (q2, l, 1)   // the walk is absorbed at an endpoint
// All transitions that do not appear have no effect

Theorem 4.3

Protocol Simple-Global-Line constructs a spanning line. It uses 5 states and its expected running time is O(n⁵) and Ω(n⁴).

Proof

We begin by proving that, for any number of processes , the protocol correctly constructs a spanning line under any fair scheduler. Then we study the running time of the protocol under the uniform random scheduler.

Correctness. In the initial configuration C_0, all nodes are in state q0 and all edges are inactive, i.e. in state 0. Every configuration C that is reachable from C_0 consists of a collection of active lines and isolated nodes. Additionally, every active line has a unique leader, which either occupies an endpoint and is in state l or occupies an internal node, is in state w, and moves along the line. Whenever the leader lies on an endpoint of its line, its state is l and whenever it lies on an internal node, its state is w. Lines can expand towards isolated nodes and two lines can connect their endpoints to get merged into a single line (with total length equal to the sum of the lengths of the merged lines plus one). Both of these operations only take place when the corresponding endpoint of every line that takes part in the operation is in state l.

We have to prove two things: (i) there is a set S of output-stable configurations whose active network is a spanning line, (ii) for every reachable configuration C (i.e. C_0 ⇝ C) it holds that C ⇝ C' for some C' ∈ S. For (i), consider a spanning line in which the non-leader endpoint is in state q1, the non-leader internal nodes are in q2, and there is a unique leader either in state l if it occupies an endpoint or in state w if it occupies an internal node. For (ii), note that any reachable configuration C is a collection of active lines with unique leaders and isolated nodes. We present a (finite) sequence of transitions that converts C to a C' ∈ S. If there are isolated nodes, take any line and, if its leader is internal, make it reach one of the endpoints by selecting the appropriate interactions. Then successively apply the rule (l, q0, 0) → (q2, l, 1) to expand the line towards all isolated nodes. Thus we may now w.l.o.g. consider a collection of lines without isolated nodes. By successively applying the rule (l, l, 0) → (q2, w, 1) to pairs of lines, while always moving the internal leaders that appear towards an endpoint, it is not hard to see that the process results in an output-stable configuration from S, i.e. one whose active network is a spanning line.

Running Time Upper Bound. For the running time upper bound, we have an expected number of O(n²) steps until another progress is made (i.e. for another merging to occur, given that at least two l-leaders exist) and O(n⁴) steps for the resulting random walk (the walk of state w until it reaches one endpoint of the line) to finish and to have again the system ready for progress. The O(n⁴) bound follows because we have a random walk on a line with two absorbing barriers (see e.g. [Fel68] pages 348-349), which needs O(n²) moves of the leader in expectation, and each move is delayed on average by a factor of Θ(n²), as the leader must wait that long to interact over one of its active edges. As progress must be made n - 1 times, we conclude that the expected running time of the protocol is bounded from above by O(n⁵).

We next prove that we cannot hope to improve the upper bound on the expected running time by a better analysis by more than a factor of n. For this we first prove that the protocol w.h.p. constructs Θ(n) different lines of length 1 during its course. A set of k disjoint lines implies that k - 1 distinct merging processes have to be executed in order to merge them all into a common line, and each single merging results in the execution of another random walk. We exploit all these to prove the desired Ω(n⁴) lower bound.

Recall that initially all nodes are in q0. Every interaction between two q0-nodes constructs another line of length 1. Call the random interaction of step i a success if both participants are in q0. Let the r.v. R_i be the number of nodes in state q0 just before step i; i.e. initially R_1 = n. Note that, at every step, R_i decreases by at most 2, which happens only in a success (it may also remain unchanged, or decrease by 1 if a leader expands towards a q0). Let the r.v. Z_i be the number of successes up to step i and Z be the total number of successes throughout the course of the protocol (e.g. until no further successes are possible or until stabilization). Our goal is to calculate the expectation of Z, as this is equal to the number of distinct lines of length 1 that the protocol is expected to form throughout its execution (note that these lines do not necessarily have to coexist). Given R_i, the probability of success at the current step is C(R_i, 2)/m = R_i(R_i - 1)/[n(n - 1)]. As long as R_i ≥ n/2 it holds that this probability is at least (n/2)(n/2 - 1)/[n(n - 1)] ≥ 1/5 for all n ≥ 6. Moreover, as R_i decreases by at most 2 in every step, there are at least n/4 steps until R_i becomes less than or equal to n/2. Thus, our process dominates a Bernoulli process B with n/4 trials and probability of success 1/5 in each trial. For this process we have E[B] = n/20 = Θ(n).

We now exploit the following Chernoff bound (cf. [MR95], page 70) establishing that w.h.p. B does not deviate much below its mean E[B]:

Chernoff Bound. Let Y_1, Y_2, …, Y_t be independent Poisson trials such that, for 1 ≤ j ≤ t, Pr[Y_j = 1] = p_j, where 0 < p_j < 1. Then, for Y = Σ_{j=1}^{t} Y_j, μ = E[Y] = Σ_{j=1}^{t} p_j, and 0 < δ ≤ 1,

Pr[Y < (1 - δ)μ] < exp(-μδ²/2).

Additionally, it holds that E[B] = n/20. Thus δ = 1/2 implies

Pr[B < n/40] < exp(-n/160).

So, for all n ≥ 6,

Pr[B ≥ n/40] > 1 - exp(-n/160),

and as Z dominates B, we have Pr[Z ≥ n/40] > 1 - e^{-Ω(n)}. In words, w.h.p. we expect Θ(n) lines of length 1 to be constructed by the protocol.

Now, given that Z = Ω(n) w.h.p., we distinguish two cases: (i) At some point during the course of the protocol two lines both of length Ω(n) get merged. In this case, the corresponding random walk takes on average Ω(n²) transitions involving the leader, and on average the leader is selected every Θ(n²) steps to interact with one of the 2 active edges incident to it. That is, the expected number of steps for the completion of such a random walk is Ω(n⁴) and the expected running time of the protocol is Ω(n⁴). (ii) In every merging process, at least one of the two participating lines has length at most n/8. We have already shown that the protocol w.h.p. forms Ω(n) distinct lines of length 1. Consider now the interval I = [n/8, n/4]. As, in this case, at most one line of every merged pair has length greater than n/8, only a single line can ever have length greater than n/4; call it L. The length of L necessarily falls in I at some point, due to the fact that it increases by at most n/8 + 1 in every merging until it exceeds n/4. Consider now the time at which L has length in I. As the total length due to lines of length 1 (ever to appear) is Ω(n) and the length of L is at most n/4, there is still a remaining length of Ω(n) to be merged into L. As the maximum length of any line different from L is at most n/8, L will absorb the remaining length via distinct mergings with lines of length at most n/8 each. These mergings, and thus also the resulting random walks, cannot occur in parallel, as all of them share L as a common participant (and a line can only participate in one merging at a time). Let l_i denote the length of the ith line merged with L, for i ≥ 1. If L has length L_i ≥ n/8 just before the ith merging, then the expected duration of the resulting random walk is Θ(n² · l_i · L_i), and the new line resulting from the merging will have length L_{i+1} = L_i + l_i + 1. Let the r.v. W denote the total duration of all these random walks, and W_i the duration of the ith of them. In total, the expected duration of all random walks resulting from the mergings of L is

E[W] = Σ_i E[W_i] = Σ_i Θ(n² · l_i · L_i) ≥ Θ(n²) · (n/8) · Σ_i l_i = Θ(n²) · (n/8) · Ω(n) = Ω(n⁴).

The last equality follows from the fact that Σ_i l_i = Ω(n). We conclude that the expected running time of the protocol is also in this case Ω(n⁴).

Now, if we define the r.v. T to be the total running time of the protocol (until convergence), then by the law of total probability we have that:

E[T] ≥ Pr[Z = Ω(n)] · E[T | Z = Ω(n)] ≥ (1 - e^{-Ω(n)}) · Ω(n⁴) = Ω(n⁴).

Thus, the expected running time of the protocol is Ω(n⁴). ∎
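The whole protocol can be exercised on small populations. The rule table below is our rendering of the five transitions, with state names taken from the correctness argument (treat it as a sketch rather than a verbatim copy of the authors' table); on stabilization the active graph is a spanning line:

```python
import random

# q0 isolated, q1 non-leader endpoint, q2 internal,
# l endpoint leader, w walking leader (our naming).
RULES = {
    ("q0", "q0", 0): ("q1", "l", 1),  # two isolated nodes form a line
    ("l",  "q0", 0): ("q2", "l", 1),  # a leader endpoint expands
    ("l",  "l",  0): ("q2", "w", 1),  # two lines merge; the walk starts
    ("w",  "q2", 1): ("q2", "w", 1),  # random-walk step along the line
    ("w",  "q1", 1): ("q2", "l", 1),  # the walk is absorbed at an endpoint
}

def try_rule(state, edge, u, v):
    e = frozenset((u, v))
    key = (state[u], state[v], edge.get(e, 0))
    if key in RULES:
        state[u], state[v], edge[e] = RULES[key]
        return True
    return False

def effective_pair_exists(state, edge, n):
    return any(
        (state[a], state[b], edge.get(frozenset((a, b)), 0)) in RULES
        for a in range(n) for b in range(n) if a != b
    )

def simple_global_line(n, seed=7, max_steps=500_000):
    """Run the protocol under a uniform random scheduler until
    output-stable; return the sorted active-degree sequence."""
    rng = random.Random(seed)
    state = {u: "q0" for u in range(n)}
    edge = {}
    for t in range(max_steps):
        u, v = rng.sample(range(n), 2)  # uniform random interaction
        try_rule(state, edge, u, v) or try_rule(state, edge, v, u)
        # periodically check for stability: no effective pair remains
        if t % 1000 == 999 and not effective_pair_exists(state, edge, n):
            break
    degree = {u: 0 for u in range(n)}
    for e, s in edge.items():
        if s == 1:
            for u in e:
                degree[u] += 1
    return sorted(degree.values())
```

A stable configuration has two endpoints of degree 1 and n - 2 internal nodes of degree 2, i.e. a spanning line.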

4.2 2nd Protocol

The random walk approach followed in Protocol 1 takes Θ(n² · l_1 · l_2) time to merge two lines of lengths l_1 and l_2, thus a straightforward attempt for improvement is to replace the random walk merging process with some more “deterministic” merging. In Protocol 2, the random walk rules 3-5 of Protocol 1 have been replaced by a more “deterministic” procedure.

Protocol 2 Intermediate-Global-Line
Theorem 4.4

Protocol Intermediate-Global-Line constructs a spanning line. It uses 8 states and its expected running time under the uniform random scheduler is O(n⁴) and Ω(n³).

Proof

The proof idea is precisely the same as that of Theorem 4.3. The only difference is that now merging two lines of lengths l_1 and l_2 takes time (asymptotically) Θ(n² · min{l_1, l_2}) instead of the Θ(n² · l_1 · l_2) of the random walk. Thus, for the upper bound we need n - 1 mergings, each taking an average of O(n³) steps to complete in the worst case. The O(n³) bound holds because O(n) steps are performed by the merging process in the worst case and the process must wait an average of Θ(n²) steps until its leader is selected to interact over one of its active edges. Thus the dominating factor is now n · O(n³) = O(n⁴).

For the lower bound we again have w.h.p. Θ(n) lines of length 1 and for cases (i), (ii) as above we have: (i) merging two lines both of length Θ(n) takes time Θ(n³). (ii)