# Towards Swarm Calculus: Urn Models of Collective Decisions and Universal Properties of Swarm Performance

## Abstract

Methods of general applicability are sought in swarm intelligence with the aim of gaining new insights about natural swarms and of developing design methodologies for artificial swarms. An ideal solution would be a ‘swarm calculus’ that allows one to calculate key features of swarms, such as expected swarm performance and robustness, based on only a few parameters. To work towards this ideal, one needs to find methods and models with high degrees of generality. In this paper, we report two models that might be examples of exceptional generality. First, an abstract model is presented that describes swarm performance depending on swarm density based on the dichotomy between cooperation and interference. Typical swarm experiments are given as examples to show how the model fits several different results. Second, we give an abstract model of collective decision-making that is inspired by urn models. The effects of a positive feedback probability that increases over time in a decision-making system are understood with the help of a parameter that controls the feedback based on the swarm’s current consensus. Several applicable methods, such as the description as a Markov process, the calculation of splitting probabilities and mean first passage times, and measurements of positive feedback, are discussed and applications to artificial and natural swarms are reported.

## 1 Introduction

The research of swarm intelligence is important both in biology, to gain new insights about natural swarms, and in fields dealing with artificial swarms, such as swarm robotics, to obtain sophisticated design methodologies. The ideal tool would allow one to calculate fundamental features of swarm behavior, such as performance, stability, and robustness, and would need only a few observed parameters in the case of natural swarm systems or a few designed parameters in the case of artificial swarms. We call this highly desired set of tools ‘swarm calculus’ (calculus in its general sense). The underlying idea is to create a set of mathematical tools that are easily applied to a variety of settings in the field of swarm intelligence. In addition, these tools should be easy to combine, which would allow using them as mathematical building blocks for modeling. Thus, models will surely be an important part of swarm calculus.

General properties and generally applicable models need to be found to
obtain a general methodology of understanding and designing swarm
systems. Today it seems that only few models exist that have the
potential to become general swarm models. For example, swarm models in
biology are particularly distinguished by their
variety (Okubo and Levin, 2001; Okubo, 1986; Vicsek and Zafeiris, 2012; Edelstein-Keshet, 2006; Camazine et al., 2001). Typically,
a specialized model is created for each biological challenge. It seems
that the desire for models with wide applicability to a collection of
natural swarms is rather low in that community. In the field of
artificial swarms, such as robot swarms, the desire for generality
seems to be bigger, which is, for example, expressed by several models
in swarm
robotics (Hamann, 2010; Berman et al., 2011; Prorok et al., 2011; Milutinovic and Lima, 2007; Lerman et al., 2005). The
driving force for the creation of these models is to support the
design of swarm robotic systems within a maximal range of
applications. The focus of these models is on quantitative
features of the swarm behavior, such as the distribution of robots or
required times for certain tasks. However, there is a struggle between
the intended generality of the model and the creation of a direct
mapping between the model and the actual description of the individual
robot’s behavior. A higher degree of generality is achievable if the
demand for a detailed description of behavioral features is abandoned
and focus is set only on high-level features such as overall
performance or the macroscopic process of a collective decision. Such
high-level models can be expressed by concise mathematical
descriptions that, in turn, allow direct applications of standard
methods from statistics, linear algebra, and statistical mechanics. In
this paper, we investigate two such high-level models: a model of swarm
performance depending on swarm density and an urn-based model of
collective decisions.

## 2 Fundamentals of swarm performance and collective decisions

In this section we define the concepts of swarm performance and collective decision-making along with so-called urn models upon which the collective-decision model is based.

### 2.1 Swarm performance

By ‘swarm performance’ we denote the efficiency of a swarm concerning a certain task. For example, the swarm performance can be a success rate of how often the task is accomplished on average, the average speed of a swarm in collective motion, etc. Here we are interested in the swarm performance as a function of ‘swarm density’, that is, how many agents are found on average within a certain area. For the following reason, the function of swarm performance depending on swarm density cannot simply be a linear function. For a true swarm system, a very low density, which corresponds to situations with only a few agents in the whole area, has to result in low performance because there is neither much cooperation between agents, as they seldom meet, nor a significant speed-up. With increasing density, the performance increases, on the one hand, because of a simple speed-up (e.g., two robots clean the floor faster than one) and, on the other hand, because of increasing opportunities for cooperation (assuming that cooperation is an essential beneficial feature of swarms). In natural swarms such increases in performance with increasing swarm size are revealed, for example, in productivity gains and also in the emergence of increased division of labor as an indicator of increased cooperation (Jeanne and Nordheim, 1996; Karsai and Wenzel, 1998; Gautrais et al., 2002; Jeanson et al., 2007). Even superlinear performance increases are possible in this interval of swarm density and were reported for a swarm of robots (Mondada et al., 2005). For artificial swarms it was reported that at some critical/optimal density (Schneider-Fontán and Matarić, 1996) the performance curve will first level off and then decrease (Arkin et al., 1993) because improvements in cooperation possibilities will be outweighed by the drawback of high densities, namely interference (Lerman and Galstyan, 2002).
With further increase of the density, the performance continues to decrease, as reported for multi-robot systems (Goldberg and Matarić, 1997). Hence, swarms generally face a tradeoff between cooperation, which is beneficial, and interference, which is usually obstructive but sometimes has positive effects, both within certain natural swarms and as a tool for designing swarm algorithms (Dussutour et al., 2004; Goldberg and Matarić, 1997).

In the following we report that many swarm systems not only show similar qualitative properties but also show similarities in the actual shapes of their swarm performance over swarm size/density graphs (see the function of swarm performance in Fig. a). Examples are the performance of foraging in a group of robots (Fig. b and Fig. 10a in (Lerman and Galstyan, 2002)), the activation dynamics and information capacity in an abstract cellular automaton model of ants (Figs. 1b and 1c in (Miramontes, 1995)), and even the sizes of social networks (Fig. 8b in (Strogatz, 2001)). A similar curve is also presented as a hypothesis for per capita output in social wasps by Jeanne and Nordheim (1996). The existence of this general shape was already reported by Østergaard et al. (2001) in their expected performance model for multi-robot systems in constrained environments:

> We know that by varying the implementation of a given task, we can move the point of “maximum performance” and we can change the shapes of the curve on either side of it, but we cannot change the general shape of the graph.

Traffic models of flow over density are related because traffic flow can be increased when cars cooperatively share streets, but the flow decreases when the streets are too crowded and cars interfere too much with each other. While the ‘fundamental diagram’ of traffic flow (Lighthill and Whitham, 1955) is symmetric, more realistic models propose at least two asymmetric phases of free and synchronized flow (e.g., Fig. 3(b) in (Wong and Wong, 2002)). Actual measurements on highways show curves with shapes similar to Fig. a (e.g., see Fig. 6-4 in (Mahmassani et al., 2009)). In these models, there exist two densities for a given flow (except for maximum flow), similar to the situation here where we have two swarm densities for each swarm performance (one smaller than the optimal density and one bigger than the optimal density; the corresponding function that maps densities to performance values is consequently not injective).

### 2.2 Collective decisions

In the context of swarm intelligence, collective decision-making is a process that is distributed over a group of agents without global knowledge. Each agent decides based on locally sampled data, such as the current decision of its neighbors. There are many biological systems showing collective decision-making, for example, food source choice in honey bees (Seeley et al., 1991), nest site selection in ants (Mallon et al., 2001), and escape route search in social spiders (Saffre et al., 1999). Collective decision-making systems are often modeled as positive-feedback systems that amplify initial fluctuations and that way help the system converge to a global decision (Deneubourg et al., 1990; Mallon et al., 2001; Nicolis et al., 2011). Interesting features of collective decisions are, for example, the speed–accuracy trade-off (Nicolis et al., 2011) or the influence of noise (Dussutour et al., 2009; Yates et al., 2009). Furthermore, it turns out that positive feedback is not always productive but can also generate irrational decisions (Nicolis et al., 2011).

In the following, we limit our investigations to binary decision processes because they allow for a concise mathematical notation and, hence, a manageable application of mathematical methods. The investigated systems are either inherently noisy (e.g., explicit stochastic processes within the agents’ behaviors) or can validly be modeled as noisy processes (e.g., deterministic chaos with strong dependence on the initial conditions). We are not interested in the quality of the final decision—that is, the utility of choosing option A over option B or vice versa—and assume that there is no initial bias to one or the other. In selecting an appropriate model for collective decisions, our main concern is simplicity, while also keeping focus on the relation between how much is needed as input to the model and how much is generated by it. We want to keep the number of parameters small while achieving descriptions of qualitative aspects of collective decisions as effects of the model. To meet these requirements, we choose a minimal macroscopic model that has only one state variable describing the current status of the collective decision within the swarm (e.g., 80% for option A and consequently 20% for option B).

For simplicity, we view the asynchronous distributed process of collective decisions as a round-based game which allows only one agent at a time either to revise its current decision or to convince a peer to revise its current decision. The relation to natural systems is that decision events are serialized and intermediate periods of time are ignored. The influence of this assumption on the steady-state behavior is considered to be low. For this purpose, we re-interpret well-known urn models as models of collective decisions and extend them appropriately. We use simple models inspired by the urn model of Ehrenfest and Ehrenfest (1907) and by the urn model of Eigen and Winkler (1993).

#### Ehrenfest urn model

This urn model was originally introduced by Ehrenfest and Ehrenfest (1907) in the context of dissipation, thermodynamics, statistical mechanics, and entropy. The dynamics of the model is defined as follows. Our urn is filled with N marbles. Say, initially all marbles are blue. Whenever we draw a blue one we replace it with a red marble. If we draw a red marble we replace it with a blue one. Obviously, the two extreme states of either having an urn full of blue marbles or an urn full of red marbles are unstable. Similarly, this is true for all states of unevenly distributed colors. To formalize this process, we keep track of how the number of blue marbles b (without loss of generality) changes depending on how many blue marbles were in the urn at that time. We can do this empirically or we can actually compute the average expected ‘gain’ in terms of blue marbles. For example, say at time t we have b(t) blue marbles in the urn and a total of N marbles. The probability of drawing a blue marble is therefore b(t)/N. The case of drawing a blue marble has to be weighted by −1 because this is the change in terms of blue marbles in that case. The probability of drawing a red marble is 1 − b(t)/N, which is weighted by +1. Hence, the expected average change of blue marbles per round depending on the current number of blue marbles is Δb(t) = (1 − b(t)/N) − b(t)/N = 1 − 2b(t)/N. This can be done for all possible states yielding

Δb(b) = 1 − 2b/N ,    (1)

which is plotted in Fig. a. Hence, the average dynamics of this game is given by b(t+1) = b(t) + Δb(b(t)).

The recurrence can be solved by generating functions (Graham et al., 1998). For the initialization b(0) = N we obtain the generating function

G(z) = (N/2) (1/(1 − z) + 1/(1 − (1 − 2/N)z)) .    (2)

The t-th coefficient of this power series is the closed form for b(t). We get

b(t) = N/2 + (N/2)(1 − 2/N)^t .    (3)

Hence, for the initialization b(0) = N, and analogously for the symmetrical case b(0) = 0, the system converges on average rather fast to the equilibrium b = N/2. The actual dynamics of this game is, of course, a stochastic process which can, for example, be modeled by b(t+1) = b(t) + Δb(b(t)) + ξ(t) for a noise term ξ(t).
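The average dynamics and its closed form can be checked against a direct simulation of the urn. The following sketch is our illustration, not part of the original work; it assumes an urn of N = 100 marbles that starts all blue:

```python
import random

def ehrenfest_step(b, n, rng):
    """One round: the drawn marble's color is flipped (blue count b changes by +-1)."""
    if rng.random() < b / n:  # drew a blue marble -> replace it with a red one
        return b - 1
    return b + 1              # drew a red marble -> replace it with a blue one

def closed_form(b0, n, t):
    """Average dynamics b(t) = N/2 + (b(0) - N/2)(1 - 2/N)^t (Eq. 3 for b(0) = N)."""
    return n / 2 + (b0 - n / 2) * (1 - 2 / n) ** t

n, t_max, trials = 100, 500, 2000
rng = random.Random(42)
total = 0
for _ in range(trials):
    b = n  # initially all marbles are blue
    for _ in range(t_max):
        b = ehrenfest_step(b, n, rng)
    total += b
empirical_mean = total / trials
print(closed_form(n, n, t_max))  # essentially N/2 = 50
print(empirical_mean)            # close to 50: negative feedback restores the mix
```

Averaged over many runs, the stochastic process follows the deterministic recurrence closely, while any single run keeps fluctuating around N/2.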

Several generalizations have been proposed for this urn model (Krafft and Schaefer, 1993; Klein, 1956); however, these investigations mostly focus on mathematically tractable variants. In the following we report generalizations that focus on applications to feedback processes.

#### Eigen urn model

The Ehrenfest model can be interpreted as an example of a negative feedback process. Deviations from the fixed point b = N/2 are corrected by negative feedback (the predominant color will diminish on average). Eigen and Winkler (1993) reported a similar urn model to show the effect of positive feedback. In this model, drawing a blue marble has the effect of replacing a red marble by a blue one and vice versa. The expected average change of blue marbles per round changes accordingly to

Δb(b) = 2b/N − 1 .    (4)

For a plot of Δb see Fig. b.

While we still have an expected change of Δb = 0 for b = N/2 as in the Ehrenfest model, this fixed point is unstable now, as its surrounding drives trajectories away from it and towards the stable fixed points b = 0 and b = N, respectively. In Sec. 4, we introduce a more general urn model that takes the intensity of positive feedback as a parameter. This model can be used to investigate collective decisions in swarms.
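A minimal simulation (again our illustration, not from the original work) makes the instability visible: the measured drift at an uneven mixture matches Eq. 4 and points away from b = N/2:

```python
import random

def eigen_step(b, n, rng):
    """One round of the positive feedback urn: the drawn color is reinforced."""
    if rng.random() < b / n:  # drew blue -> a red marble is replaced by a blue one
        return min(b + 1, n)  # clamping models the absorbing states b = 0 and b = N
    return max(b - 1, 0)      # drew red -> a blue marble is replaced by a red one

n, trials = 100, 20000
rng = random.Random(1)
# empirical drift at b = 75; Eq. 4 predicts 2 * 75/100 - 1 = 0.5
drift = sum(eigen_step(75, n, rng) - 75 for _ in range(trials)) / trials
print(drift)  # positive: the majority color keeps growing on average
```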

## 3 Universal properties of swarm performance

Having identified the two main components (cooperation and interference) and the typical shape of these graphs, we can define a simple model. The idea is to fit this model to empirical data for verification and predictions.

### 3.1 Simple model of swarm performance

For a given bounded, constant area A the swarm density ρ is defined by the swarm size N according to ρ = N/A. A dynamic area could also be assumed, but throughout this paper we want to keep swarm density and swarm size interchangeable by setting A = 1, which gives the identity ρ = N. Although for a given swarm density the swarm performance might be quantitatively and qualitatively different for different areas, here we focus on describing such swarm–performance functions separately. We define the swarm performance depending on swarm size by

P(N) = C(N) I(N) ,    (5)

for parameters c (decreasing exponential function), a (scaling), b, and d (see Fig. a). Parameter d is subtracted to force a decrease of the performance to zero at a finite swarm size. The swarm performance depends on two components, the cooperation function C and the interference function I. First, the swarm effort without negative feedback is defined by the cooperation function (see also Fig. a)

C(N) = a N^b .    (6)

This function can be interpreted as the potential for cooperation in a swarm that would exist without certain constraints, such as physical collisions or other spatial restrictions. The same formula was used by Breder (1954) to model the cohesiveness of a fish school and by Bjerknes and Winfield (2010) to model swarm velocity in emergent taxis. However, they used parameters of b < 1 while we are also using values of b > 1. In principle this is a major difference because b < 1 represents a sublinear performance increase due to cooperation and b > 1 represents a superlinear increase. Whether such a direct interpretation of resulting parameter settings is instrumental is unclear. Especially when analyzing the product of the cooperation and interference functions (Eq. 5), it is seen that the steepness depends on contributions from both functions, which can differ considerably, for example, based on scaling. Second, the interference function (see also Fig. a) is defined by

I(N) = e^(−cN) − d ,    (7)

with c used for scaling. The interference function can also be interpreted as the swarm performance achievable without cooperation, that is, achievable swarm performance without positive feedback. The exponential decrease of the interference function seems to be a reasonable choice because, for example, the Ringelmann effect according to Ingham et al. (1974) also implies a nonlinear decrease of individual performance with increasing group size; see also (Kennedy and Eberhart, 2001, p. 236). Nonlinear effects that decrease efficiency with increasing swarm size are plausible due to negative feedback processes, such as when the collision avoidance behavior of one robot triggers the collision avoidance behavior of several others in high-density situations. Still, many nonlinear functions would be available, but best results were obtained with exponential functions. Also, Fig. 10b in (Lerman and Galstyan, 2002) shows an exponentially decreasing efficiency per robot in a foraging task.
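As an illustration of how Eqs. 5–7 interact, the following sketch evaluates the model with the functional forms C(N) = aN^b and I(N) = e^(−cN) − d and locates the optimal swarm size; the parameter values a, b, c, d here are illustrative assumptions, not fitted values from any of the cited experiments:

```python
import math

def cooperation(n, a, b):
    """Eq. 6: potential for cooperation, a * N^b (superlinear for b > 1)."""
    return a * n ** b

def interference(n, c, d):
    """Eq. 7: exponentially decaying performance without cooperation."""
    return math.exp(-c * n) - d

def performance(n, a, b, c, d):
    """Eq. 5: swarm performance as the product of both components."""
    return cooperation(n, a, b) * interference(n, c, d)

a, b, c, d = 1.0, 1.5, 0.05, 0.01  # illustrative parameters only
n_opt = max(range(1, 201), key=lambda n: performance(n, a, b, c, d))
print(n_opt)  # an intermediate optimum: neither tiny nor huge swarms perform best
```

The argmax over a grid of swarm sizes reproduces the qualitative picture from the text: performance rises from small N due to cooperation, peaks at an intermediate density, and then decays through interference.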

### 3.2 Examples

To prove the wide applicability of this simple model we fit it to some swarm performance plots that are available. We investigate four scenarios: foraging in a group of robots (Lerman and Galstyan, 2002), collective decision making (Hamann et al., 2012) based on BEECLUST (Schmickl and Hamann, 2011), aggregations in tree-like structures and reduction to shortest paths (Hamann, 2006) similar to (Hamann and Wörn, 2008), and the emergent taxis scenario (also sometimes called ‘alpha algorithm’, Nembrini et al. (2002); Bjerknes et al. (2007)).

Given the data of the overall performance, the four parameters a, b, c, and d of Eq. 5 can be directly fitted. This was done for the three examples shown in Figs. b, c, and d. The equation can be fitted well to the empirical data (for details about the fitted functions see the appendix). In the case of the foraging scenario (Fig. b) we also have data of the efficiency per robot. We can use the model parameters c and d, obtained by fitting the model to the overall performance, to predict the efficiency per robot, which is a function that we suppose to be proportional to effects of interference. This is done by scaling the interference function linearly and plotting it against the efficiency per robot. The result is shown in Fig. b.

We analyze the fourth example, emergent taxis (Nembrini et al., 2002; Bjerknes et al., 2007), in more detail and, for this purpose, give a short description of the algorithm. The objective is to move a swarm of robots towards a light beacon. The robots are limited in their capabilities because they only have an omnidirectional beacon sensor that detects two states, beacon seen or not seen, but gives no bearing. If a robot does not see the beacon, this is always because one or several other robots are within the line of sight between this robot and the beacon. Robots that see the beacon are called ‘illuminated’ and robots that do not see the beacon are called ‘shadowed’. The robots’ behavior is defined to differ depending on these two states. Shadowed robots have a shorter collision avoidance radius than illuminated robots. Consequently, if a shadowed and an illuminated robot approach each other, the illuminated robot will trigger its collision avoidance behavior while the shadowed robot will not be affected until it gets even closer and triggers its own collision avoidance behavior as well. This interplay between shadowed and illuminated robots generates a bias towards the beacon. In addition, the algorithm has a ‘coherence state’ which aims at keeping the robots together within each other’s communication range. It is assumed that a robot is able to count the robots that are within range. Once this number drops below a threshold α, the robot will do a u-turn which hopefully brings it back into the swarm. In the following investigations we at first set the threshold to α = 0, which means we turn the coherence behavior off. Later we follow (Bjerknes et al., 2007) and set the threshold to the swarm size (α = N) to enforce full coherence. Initially the robot swarm is distant from the beacon and is randomly distributed while ensuring coherence. Interestingly, it is difficult to identify two separated behavior components of cooperation and interference.
The robots cooperate in generating coherence and in having shadowed robots that approach illuminated robots, which drives the swarm towards the beacon. However, collision avoidance behavior also has disadvantageous effects. In order to keep coherence the robots might be forced to aggregate too densely, which might result in robots blocking each other.

The following empirical data is based on a simple simulation of emergent taxis. This simulation is noise-free and therefore robots move in straight lines except for u-turns according to the emergent taxis algorithm (a random turn after regaining coherence was not implemented).

First, we measure the performance that is achieved without cooperation. This is done by defining a random behavior that ignores any characteristic feature of the actual emergent taxis algorithm. We set the threshold to α = 0 in the simulation to obtain the cooperation-free behavior. Hence, no robot will ever u-turn and they basically disperse in the arena. A simulation run is stopped once a robot touches a wall. The performance of the swarm is measured by the total distance covered by the swarm’s barycenter multiplied by the swarm size (i.e., an estimate of the sum of all distances that were effectively covered by each robot). The performance obtained by this random behavior can be fitted using the interference function of Eq. 7 (interpreting the interference function as a measurement of swarm performance without cooperation). The fitted interference function and the empirically obtained data are shown in Fig. a labeled ‘random’. The interference function does not drop to zero for large N; this bias towards the light source (positive covered distance) is due to the initial positioning of the swarm closer to the wall that is farther away from the light source.

In a second step, the full model of swarm performance (Eq. 5) is to be fitted to the actual emergent taxis scenario. The data is obtained by setting the threshold in our simple simulation of emergent taxis back to normal (α = N). The fitting is done by keeping the interference function fixed and fitting only the cooperation function (i.e., fitting a and b while keeping c and d fixed). The fitted swarm performance model is shown in Fig. a labeled ‘emergent taxis’. This simple model is capable of predictions if the interference function has been fitted and we fit the cooperation function only to a small interval of swarm sizes (e.g., an interval close to the maximal performance). This is shown in Fig. b. The implication is that if the interference function is known as well as the optimal swarm size, then the behavior within the other intervals can be predicted.
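The two-stage procedure (fix the interference parameters, then fit only the cooperation function) can be sketched as follows. The data here is synthetic, generated from the model itself with known parameters plus small noise, and the functional forms are the ones assumed above; this is only a plausibility check of the fitting idea, not a reproduction of the emergent taxis experiment:

```python
import math
import random

def performance(n, a, b, c, d):
    """Eq. 5 with Eqs. 6 and 7: P(N) = a * N^b * (exp(-c*N) - d)."""
    return a * n ** b * (math.exp(-c * n) - d)

# synthetic 'measurements' from known parameters with 2% multiplicative noise
rng = random.Random(0)
a_true, b_true, c_fixed, d_fixed = 0.8, 1.3, 0.06, 0.02
data = [(n, performance(n, a_true, b_true, c_fixed, d_fixed)
            * (1 + 0.02 * rng.uniform(-1, 1))) for n in range(5, 60, 5)]

def sse(a, b):
    """Sum of squared errors; the interference parameters c, d stay fixed."""
    return sum((performance(n, a, b, c_fixed, d_fixed) - p) ** 2
               for n, p in data)

# coarse grid search over the cooperation parameters a and b only
best_a, best_b = min(((a / 100, b / 100)
                      for a in range(10, 201, 2)
                      for b in range(50, 201, 2)),
                     key=lambda ab: sse(*ab))
print(best_a, best_b)  # close to the generating values 0.8 and 1.3
```

A grid search suffices here because only two parameters remain free; with all four parameters a standard least-squares routine would be the natural choice.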

Note that the model operates on a single averaged value to describe the performance, which does not fully capture the system’s behavior. At least in some scenarios, as here in the emergent taxis scenario, the performance does not just continuously decrease due to continuously increasing interference. Instead, two coexisting phases of behaviors emerge: functioning swarms moving forward and pinned swarms with extreme numbers of u-turns. In emergent taxis this is shown, for example, by a histogram of barycenter speeds in Fig. c. For small swarm sizes the mean of a unimodal distribution increases with increasing N. Starting at about a certain swarm size, a second phase emerges that shows slowly moving swarms and generates a bimodal distribution. Hence, given the fully deterministic implementation of our simulation, there are two classes of initial states (robot positions and orientations) that determine the two extremes of success or total failure. Consequently, swarm performance functions as shown in Fig. a need to be interpreted with care because they might indicate an average behavior that does not actually occur. Still, these swarm performance functions are useful if the values are interpreted relatively as success rates. That way Fig. a gives a good estimate of the frequency of the two phases. In other scenarios the interference might truly increase continuously due to a qualitatively different process, such as saturation of target areas with robots.

The presented model of swarm performance has the potential to be applicable to many swarm systems. In the next section, a model is given for a subset of swarm systems, namely collective decision-making systems. The two models relate to each other in that in some cases they are both applicable to the same swarm system. A candidate for such a system could be a BEECLUST-controlled swarm. The application of the swarm performance model to this system is given in Fig. c. Data supporting that an application of the following collective decision model is likely is given by Hamann et al. (2012). In such systems the effectiveness of the collective decision and the performance of the swarm are directly linked and consequently the two reported models are, too.

## 4 Universal properties of collective decisions

In the following, we investigate macroscopic models of collective decisions. One of the most general and at the same time simplest models of collective decisions is a model of only one state variable s, which gives the temporal evolution of the swarm fraction that is in favor of one of the two options in a binary decision process. If we assume that there is no initial bias to either option (i.e., full symmetry), then we need a tie breaker for s = 0.5. A good choice for a tie breaker is noise because any real swarm will be noisy. The average change of s depending on itself per time, Δs(s), is of interest. Given that the system should be able to converge to either of the options at a time, plus having the symmetric case of s = 0.5, the function Δs needs to have at least three roots (s = 0, s = 0.5, s = 1) and consequently is at least a cubic function. Instead of developing a model that defines such a function directly, we prefer a model that allows this function to emerge from a simple process. Once symmetry is overcome by fluctuations, swarm systems have a tendency to confirm and reinforce such a preliminary decision due to positive feedback (say, for s > 0.5 there is a tendency towards s = 1). We define such a process depending on probabilities of positive feedback next.

### 4.1 Simple model of collective decisions

As discussed in the introduction, we define an urn model that is inspired by the models of Ehrenfest and Ehrenfest (1907) and of Eigen and Winkler (1993). We use an urn model that has optionally positive or negative feedback depending on the system’s current state and depending on a stochastic process. The urn is filled with N marbles which are either red or blue. The game’s dynamics is turn-based. First, a marble is drawn with replacement, followed by replacing a second one determined by the color of the first marble. The probability of drawing a blue marble is implicitly determined by the current number of blue marbles in the urn. The subsequent replacement of a second marble has either a positive or a negative feedback. Say, we draw a blue marble, we notice the color, and put it back into the urn. Then our model defines that with probability φ a red marble will be replaced by a blue one (i.e., a positive feedback event because drawing a blue one increased the number of blue marbles) and with probability 1 − φ a blue one will be replaced by a red one (i.e., a negative feedback event because now drawing a blue one decreased the number of blue marbles). Hence, φ gives the probability of positive feedback.

The analogy of this model to a collective decision-making scenario is the following. The initial drawing resembles the frequency of individual decisions in the swarm over time within the turn-based model. This frequency is proportional to the current ratio of blue marbles in the urn. Consequently, we serialize the system dynamics and each system state b has at most two predecessors (b − 1 and b + 1). The replacement of the second marble resembles the effect of either a swarm member convincing another one about its decision or being convinced of the opposite.

The probability of positive feedback is determined explicitly. We define the determination of whether positive feedback or negative feedback is effective in a given system state as a binary random experiment. The sample space is Ω = {PFB, NFB} with PFB denoting positive feedback and NFB denoting negative feedback, and we define a probability measure P; consequently P(PFB) + P(NFB) = 1 holds. Hence, the probability of positive feedback is defined by φ(s) = P(PFB) for the current ratio of blue marbles s = b/N (the probability of negative feedback is 1 − φ(s)) and for now we choose a sine function

φ(s) = ε sin(πs) .    (8)

Due to the symmetry it is irrelevant whether s is set to the ratio of blue marbles or the ratio of red marbles. Later we will find that similar but different functions might be a better choice in certain situations (Sec. 4.4). The constant ε scales the amplitude of the sine function (see Fig. a) and consequently defines the predominant ‘sign’ of the feedback and the probabilities of positive and negative feedback. The integral ∫₀¹ φ(s) ds = 2ε/π gives the overall probability of having positive feedback independent of s. Negative feedback is predominant for any s for ε < 0.5, and for ε > 0.5 an interval around s = 0.5 emerges for which positive feedback is predominant.

φ is plotted for different settings of ε in Fig. a. There is maximum probability of positive feedback for the fully symmetric case of s = 0.5, as clearly seen in Fig. a. For s = 0 and s = 1 we have φ = 0 because no positive feedback is possible (either all marbles are already blue or all marbles are red and therefore no blue one can be drawn). For ε < 0.5 the probability of positive feedback is small (φ(s) < 0.5 for all s); consequently the system is stable and kept around s = 0.5.

We can now calculate the average expected change of blue marbles per round by summing over the four cases: drawing a blue or red marble, followed by positive or negative feedback, multiplied by the respective ‘payoff’ in terms of blue marbles. Using the symmetry of the model we get

Δb(s) = s φ(s) − s (1 − φ(s)) − (1 − s) φ(s) + (1 − s)(1 − φ(s)) = (2s − 1)(2ε sin(πs) − 1) .    (9)

We defined φ based on a trigonometric function, but alternatively one can choose, for example, a quadratic function

φ(s) = 4ε s(1 − s) ,    (10)

yielding a cubic function

Δb(s) = (2s − 1)(8ε s(1 − s) − 1) .    (11)

In the following, however, we will use Eq. 9. Also note that by turning positive feedback completely off (φ(s) ≡ 0) we obtain the Ehrenfest urn model again (cf. Eqs. 1 and 9) and by activating maximal positive feedback (φ(s) ≡ 1) we obtain the Eigen urn model (cf. Eq. 4).
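These limiting cases, and the fixed-point structure for ε > 0.5, can be verified numerically. The sketch below (our illustration) implements Eqs. 8 and 9 directly:

```python
import math

def phi(s, eps):
    """Eq. 8: probability of positive feedback."""
    return eps * math.sin(math.pi * s)

def delta_b(s, eps):
    """Eq. 9: expected change per round, (2s - 1)(2 * phi(s) - 1)."""
    return (2 * s - 1) * (2 * phi(s, eps) - 1)

# eps = 0 (no positive feedback) reproduces the Ehrenfest drift 1 - 2s (Eq. 1)
for s in (0.1, 0.3, 0.9):
    assert abs(delta_b(s, 0) - (1 - 2 * s)) < 1e-12

# for eps > 0.5 two additional zeros emerge besides s = 0.5
eps = 0.75
s1 = math.asin(1 / (2 * eps)) / math.pi
print(s1, 1 - s1)        # the two emerging fixed points
print(delta_b(s1, eps))  # ~0: phi(s1) = 0.5 exactly cancels the feedback term
```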

While in the above definition the positive feedback probability is explicitly given, it might be unknown in applications of this model. In these applications, the positive feedback itself might not be measurable, but the number of observed decision revisions of the agents may be. We introduce the absolute number of observed individual decision revisions from red to blue over any given period, R_rb, and from blue to red, R_br. The ratio of red-to-blue revisions, r = R_rb/(R_rb + R_br), is directly related to the expected average change per round. Assuming payoffs of ±1, the average change of blue marbles per round is obtained by the weighted sum

(12) |

which simplifies to

(13) |

In addition, the ratio r also relates directly to the positive feedback probability, again by considering the four cases mentioned above: drawing a blue or a red marble, followed by positive or negative feedback. Red-to-blue revisions are observed in only two of these four cases: drawing a blue marble followed by positive feedback, and drawing a red marble followed by negative feedback. The summed probability of these two cases has to equal the ratio of red-to-blue revisions, which is consequently interpreted as a probability. The equation

$$r = sP(s) + (1 - s)(1 - P(1 - s)) \qquad (14)$$

yields, using the symmetry P(s) = P(1 − s),

$$P(s) = \frac{r - 1 + s}{2s - 1}, \qquad (15)$$

an equation which is used to determine the positive feedback probability based on measurements of the agents' revisions. The pole at s = 0.5 and the consequently undefined P(0.5) are reasonable: any definition of P(0.5) would be without effect on the system because negative and positive feedback are indistinguishable at s = 0.5. We actually define P(0.5) in Eq. 8, which is, however, only a simplification of notation. Also note that this mathematical difficulty has limited effect in applications because swarms are inherently discrete; for example, s = 0.5 does not exist for odd swarm sizes. The squares in Fig. 4a give P redetermined based on Eq. 15 and on measurements of R_rb and R_br. The values for s = 0.5 cannot be given, as discussed, and values in the vicinity of s = 0.5 also show discrepancies because they are close to the pole of Eq. 15.
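The reconstruction of the positive feedback probability from counted decision revisions (Eqs. 14 and 15) can be illustrated with a short numerical check. This is a sketch under the model's assumptions (known consensus s, payoffs ±1); all names are ours.

```python
import math
import random

def p_from_revisions(r_rb, r_br, s):
    """Eq. 15: recover P(s) from counted decision revisions.
    r_rb: red-to-blue revisions, r_br: blue-to-red revisions,
    s: ratio of blue marbles (must differ from 0.5, the pole of Eq. 15)."""
    r = r_rb / (r_rb + r_br)           # Eq. 14: ratio of red-to-blue revisions
    return (r - 1.0 + s) / (2.0 * s - 1.0)

# generate revisions with a known positive feedback probability,
# then check that Eq. 15 recovers it
rng = random.Random(1)
s, p_true = 0.7, 0.65
r_rb = r_br = 0
for _ in range(100_000):
    drew_blue = rng.random() < s
    positive = rng.random() < p_true
    if drew_blue == positive:          # state moves towards blue
        r_rb += 1
    else:                              # state moves towards red
        r_br += 1
print(p_from_revisions(r_rb, r_br, s))
```

With exact frequencies (r = sP + (1 − s)(1 − P) = 0.56 here), the formula returns P = 0.65 exactly; the sampled estimate approaches it with growing sample size.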

In Fig. 4b we compare the theoretical average change per round according to Eq. 9 to the empirically obtained average change in terms of numbers of marbles for the different settings of φ. The agreement between theory and empirical data is good (small root mean squared errors), as expected. For φ > 0.5, two zeros s₁ < 0.5 and s₂ > 0.5 emerge in addition to s = 0.5. Positive values of Δs for s > 0.5 represent dynamics with a bias towards s = 1 and negative values represent dynamics with a bias towards s = 0.5, and vice versa for the other half (s < 0.5).

Fig. 5 gives an estimate of the asymptotic behavior of this urn model for varied feedback intensity φ. It shows a pitchfork bifurcation at φ = 0.5, which is to be expected based on Fig. 4b and which is fuzzy because of the underlying stochastic process. For φ > 0.5, the curve defined by Δs becomes cubic-shaped and generates two new stable fixed points while the former fixed point at s = 0.5 becomes unstable.
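The bifurcation point can be checked analytically: assuming P(s) = φ sin(πs) as in Eq. 8, the zeros of Eq. 9 besides s = 0.5 solve φ sin(πs) = 1/2, which has solutions only for φ > 0.5. A small sketch (our naming):

```python
import math

def extra_fixed_points(phi):
    """Zeros of Eq. 9 besides s = 0.5: solve phi * sin(pi * s) = 1/2.
    They exist only beyond the pitchfork bifurcation at phi = 0.5."""
    if phi <= 0.5:
        return []
    s1 = math.asin(0.5 / phi) / math.pi
    return [s1, 1.0 - s1]      # symmetric pair around s = 0.5

# below the bifurcation only s = 0.5 remains
assert extra_fixed_points(0.4) == []
# above it, two symmetric fixed points appear and separate with growing phi
lo_, hi_ = extra_fixed_points(1.0)
print(lo_, hi_)   # 1/6 and 5/6 for phi = 1
```

As φ decreases towards 0.5 the two fixed points merge at s = 0.5, matching the fuzzy pitchfork seen in Fig. 5.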

### 4.2 Optional extension of the model

Since positive and negative feedback are determined stochastically based on the current global state in the urn model, it is straightforward to determine the number of replaced marbles stochastically, too. Instead of having a fixed 'payoff' (number of replaced marbles) of +1 for positive feedback and −1 for negative feedback we can, for example, define a probability ζ for the event 'having a payoff of +1' (or −1, respectively) and a probability 1 − ζ for a payoff of 0. Thus the average payoff would be ζ and −ζ for positive and negative feedback, respectively. In addition, the payoff can vary depending on the current global state s, that is, we define a function ζ(s). It turns out that a definition similar to the positive feedback probability is useful here. We define the variant payoff by

$$\zeta(s) = c_1 \sin(\pi s) + c_2 \qquad (16)$$

for appropriately chosen constants c₁ and c₂. ζ defines the average over absolute values of changes in s, similar to the diffusion coefficient in Fokker–Planck theory. Measurements of the diffusion coefficient in swarm systems were reported in Yates et al. (2009) and Hamann et al. (2010), which show low values for s close to 0 and 1 and a peak at s = 0.5. With ζ defined by Eq. 16 we have symmetry again (ζ(s) = ζ(1 − s)) and hence the extension of Eq. 9 is simply

$$\Delta s(s) = (2P(s) - 1)(2s - 1)\zeta(s). \qquad (17)$$

### 4.3 Available and unavailable methods

Note that the recurrence s(t+1) = s(t) + Δs(s(t)) is a trigonometric or a cubic function based on Eq. 8 or Eq. 10, respectively. Thus, it is much more difficult to handle analytically and a concise result as for the Ehrenfest model (Eq. 3) cannot be obtained. When applying nonlinear equations for Δs we enter the domains of nonlinear dynamics and chaotic systems with all their known mathematical intractabilities. Hence, in general we have to rely on numerical methods.

However, if we choose to investigate probability distributions instead of single trajectories s(t), an interesting mathematical option is available. The steady state of the probability distribution over an ensemble of realizations can be obtained analytically. We assume for simplicity that the current state of the collective decision system changes every step by exactly one marble. The process defined by the urn model is memoryless, that is, it has the Markov property, and we can define a Markov chain with N + 1 states. We define the transition matrix T of the Markov chain by

$$T_{ij} = \begin{cases} s_j P(s_j) + (1 - s_j)(1 - P(s_j)), & i = j + 1\\ s_j(1 - P(s_j)) + (1 - s_j)P(s_j), & i = j - 1\\ 0, & \text{else}, \end{cases} \qquad (18)$$

with s_j = j/N. The steady state is then computed by determining the eigenvectors of

$$T\mathbf{v} = \lambda\mathbf{v}. \qquad (19)$$

Generally there are several eigenvectors, but only one (the eigenvector for eigenvalue λ = 1) has no changing sign in its elements (all positive or all negative); it represents the equilibrium distribution of the Markov chain.
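A minimal numerical sketch of Eqs. 18 and 19, assuming the sine-based P(s) = φ sin(πs) of Eq. 8 (function names are ours):

```python
import math
import numpy as np

def transition_matrix(n, phi):
    """Tridiagonal, column-stochastic transition matrix of the urn chain (Eq. 18).
    State j = number of blue marbles; each step changes j by exactly one."""
    t = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        s = j / n
        p = phi * math.sin(math.pi * s)     # assumed form of Eq. 8
        up = s * p + (1 - s) * (1 - p)      # probability of gaining a blue marble
        if j < n:
            t[j + 1, j] = up
        if j > 0:
            t[j - 1, j] = 1 - up
    return t

def steady_state(t):
    """Equilibrium distribution: the eigenvector of T for eigenvalue 1 (Eq. 19),
    normalized to sum 1 (its elements all share one sign)."""
    vals, vecs = np.linalg.eig(t)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

n, phi = 50, 0.9
pi_ = steady_state(transition_matrix(n, phi))
# for phi > 0.5 the distribution is bimodal: the maxima lie off-center
print(np.argmax(pi_) / n)
```

The resulting distribution is symmetric about s = 0.5 and, for φ > 0.5, bimodal with peaks near the stable fixed points of Eq. 9.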

With the steady state at hand, several methods of statistical analysis are available. As indicated by the data obtained by simulation shown in Fig. 5, the equilibrium distributions are bimodal for φ > 0.5. Hence, it is an option to calculate splitting probabilities (Gardiner, 1985). The splitting probability π_a(s₀) gives the probability that the system initialized at s₀ will reach the state a before the state b. It is calculated by

$$\pi_a(s_0) = \frac{\int_{s_0}^{b} \psi(s)^{-1}\, \mathrm{d}s}{\int_{a}^{b} \psi(s)^{-1}\, \mathrm{d}s}, \qquad (20)$$

where ψ denotes the steady-state probability density. Note that this equation is based on a continuous distribution, which can be obtained from a discrete distribution (e.g., the above equilibrium distribution based on Markov theory) by fitting. Another option, which was used here, is to apply Fokker–Planck theory, which allows to calculate a continuous equilibrium distribution directly based on Δs (Hamann et al., 2010). We set a and b to the positions of the two peaks in the steady-state probability density obtained for high feedback intensity. Results for varied positive feedback intensity φ (while keeping a and b constant) are shown in Fig. 6. It is clearly seen that for φ < 0.5 there is a wide interval of initial states with a fifty-fifty chance to end at either a or b. This is because the steady-state probability density is unimodal for φ < 0.5. Beginning with φ > 0.5, when the steady-state probability density starts to be bimodal, and with increasing φ, the probability of switching from one peak to the other decreases considerably, which is an indicator of an effective collective decision.
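For the discrete chain, splitting probabilities can alternatively be computed directly by making the two states a and b absorbing, without fitting a continuous density. The following sketch (our naming, with the same assumed sine-based P of Eq. 8) mirrors Eq. 20 numerically:

```python
import math
import numpy as np

def transition_matrix(n, phi):
    """Tridiagonal, column-stochastic transition matrix of the urn chain (Eq. 18)."""
    t = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        s = j / n
        p = phi * math.sin(math.pi * s)     # assumed form of Eq. 8
        up = s * p + (1 - s) * (1 - p)      # probability of gaining a blue marble
        if j < n:
            t[j + 1, j] = up
        if j > 0:
            t[j - 1, j] = 1 - up
    return t

def splitting_probability(n, phi, a, b):
    """Probability of reaching state a before state b, from every transient
    start state, by making a and b absorbing (discrete analogue of Eq. 20)."""
    t = transition_matrix(n, phi)
    trans = [j for j in range(n + 1) if j not in (a, b)]
    q = t[np.ix_(trans, trans)].T                # row-stochastic transient part
    r = t[np.ix_([a, b], trans)].T               # transient -> absorbing block
    absorb = np.linalg.inv(np.eye(len(trans)) - q) @ r
    return dict(zip(trans, absorb[:, 0]))        # column 0: absorbed at a

# two absorbing states placed near the two peaks of a bimodal steady state
sp = splitting_probability(20, 0.9, a=3, b=17)
print(sp[10])   # 0.5 by symmetry of P
```

Starting inside the lower basin the chain is absorbed at a almost surely, reproducing the sharp transition seen in Fig. 6 for large φ.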

Another statistical property of Markov processes, especially those with bistable potentials, is the mean first passage time. Of interest is the mean time to switch from the collective decision for one option (states around the lower peak, s < 0.5) to the other option (states around the upper peak, s > 0.5), or vice versa due to symmetry. The switching time is of particular interest in the context of swarm intelligence because it tells whether a swarm is able to stay for a considerable time in a given state (e.g., see Yates et al. (2009)). In the case of collective motion, the mean switching time describes whether the swarm will be aligned long enough to cover a considerable distance.

Markov theory allows to calculate the mean first passage time using the transition matrix T by specifying a target state which is defined to be absorbing, that is, we convert the system into an absorbing Markov chain. Using the fundamental matrix F = (I − Q)⁻¹ (with identity matrix I and transition matrix Q of the transient states), a vector of mean first passage times for all transient states is obtained by t = F1, where 1 is a column vector of all 1's (Grinstead and Snell, 1997). In addition, an estimate of the mean switching time can be determined numerically in the urn model by generating trajectories and observing the switching behavior. This estimate is a lower bound (finite simulation time, finite number of samples, power-law distribution of first passage times). A comparison between theory and simulation along with fitted functions is shown in Fig. 7. The switching time scales approximately exponentially with swarm size N. The fact that Eq. 5 is also a good choice for fitting the mean first passage times can be interpreted as more than mere coincidence. The mean first passage time may be seen as a performance measure because longer times reflect a more stable swarm. However, for Eq. 5 we require a negative exponent in the interference term, whereas here the fitting results in a positive one, which would be interpreted as a positive effect of interference (cf. Fig. 7).
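The fundamental-matrix computation t = F1 = (I − Q)⁻¹1 can be sketched as follows (again assuming the sine-based P of Eq. 8; all names are ours). For N = 2 the result is checkable by hand: from the middle state the chain steps up or down with probability 1/2, giving a mean first passage time of 3 to an absorbing boundary.

```python
import math
import numpy as np

def transition_matrix(n, phi):
    """Tridiagonal, column-stochastic transition matrix of the urn chain (Eq. 18)."""
    t = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        s = j / n
        p = phi * math.sin(math.pi * s)     # assumed form of Eq. 8
        up = s * p + (1 - s) * (1 - p)      # probability of gaining a blue marble
        if j < n:
            t[j + 1, j] = up
        if j > 0:
            t[j - 1, j] = 1 - up
    return t

def mean_first_passage(n, phi, target):
    """Mean first passage times to `target` via the fundamental matrix
    F = (I - Q)^-1; expected steps from each transient state are F @ 1
    (Grinstead and Snell, 1997)."""
    t = transition_matrix(n, phi)
    trans = [j for j in range(n + 1) if j != target]
    q = t[np.ix_(trans, trans)].T           # row-stochastic transient part
    f = np.linalg.inv(np.eye(len(trans)) - q)
    return dict(zip(trans, f @ np.ones(len(trans))))

# hand-checkable case N = 2 (see text above)
mfpt = mean_first_passage(2, 0.9, target=2)
print(mfpt[1], mfpt[0])   # 3.0 and 4.0
# switching gets slower with growing swarm size for phi > 0.5
times = [mean_first_passage(n, 0.9, target=n)[0] for n in (10, 20, 30)]
```

The rapid growth of `times` with N illustrates the approximately exponential scaling of the switching time reported in Fig. 7.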

### 4.4 Examples

Next we want to compare the data from our urn model (Fig. 4b) to data from more complex models, such as the density classification scenario (Hamann and Wörn, 2007). In the following, we apply the more general definition of Δs (Eq. 17), but with an invariant payoff ζ(s) = c for a scaling constant c that scales the average change for different payoffs.

The density classification scenario (Hamann and Wörn, 2007) is about a swarm of red and green agents moving around randomly. Their only interaction is to constantly keep track of the colors of those agents they bump into. Once an agent has seen five agents of either color, it changes its own color to the one it has encountered most. Here, s gives the ratio of red agents. The name of this scenario is due to the idea that the swarm should converge on the color that was initially superior in numbers. It turns out that the averaged change (see Fig. 8a) starts with a curve similar to those for low φ in Fig. 4b and then converges to a curve that is similar to those for high φ. That is, the feedback is time-variant and, for example, the above mentioned Markov model would have to be extended to a time-inhomogeneous Markov model to achieve a better correlation with the measurements. Early in the simulation there is mostly negative feedback, forcing s to values close to 0.5. The negative feedback decreases with increasing time, which finally results in positive feedback around s = 0.5. Comparing Fig. 4b to Fig. 8a indicates a good qualitative agreement between our urn model and the density classification scenario. Given that the curves in Fig. 8a converge over time to the final shape, which is resembled by our model for increasing φ in Fig. 4b, one can say that positive feedback builds up slowly over time in the density classification scenario. By fitting Eq. 17 to the data shown in Fig. 8a we get estimates of the feedback intensity φ. From the earliest and steepest line to the latest and only curve with a positive slope at s = 0.5 we get increasing values of φ for times between t = 100 and t = 6400. By continuing this fitting for additional data not shown in Fig. 8a, we are able to investigate the temporal evolution of the feedback intensity according to our model. In Fig. 8b, the data points of the feedback intensity obtained by fitting are shown along with a function that was fitted to the data. This result supports the assumption of a 'negative exponential' increase of positive feedback over time in this system, as already stated in Hamann et al. (2010).

The positive feedback probability can also be measured via the observed decision revisions according to Eq. 15. This was done for the density classification scenario; the results are given in Fig. 9a, which shows the measured positive feedback probability at a fixed point in time. The data was fitted using the following axially symmetric function

(21)

for constants c₁ and c₂, which turns out to be a better fit here than the sine function. Once the function is fitted, it can be used to calculate the expected average change following Eq. 17 (only the scaling constant needs to be fitted). As seen in Fig. 9b, we obtain a good fit (small root mean squared error). Indeed, our experience shows that the procedure of fitting P, instead of fitting Δs directly, is much more accurate and needs fewer samples. In particular, it places the zeros more accurately, which is important to predict the correct steady state of the system. Even with just 100 samples, good fits were obtained, which indicates that this model could be applied to data from natural swarm systems. The prediction of the steady state based on the measured positive feedback probability following Eqs. 17, 18, and 19 is shown in Fig. 9c. In comparison to the measurements from simulation, this prediction shows a reasonable agreement.

Other examples showing similarities to the Δs-graph in Fig. 4b are Figs. 2B and 3B in Yates et al. (2009), which show the drift coefficient dependent on the current alignment of a swarm (average velocity). While the data obtained from experiments with locusts, Fig. 2B in Yates et al. (2009), is too noisy, we use the data from their model, Fig. 3B in Yates et al. (2009), to fit our model. The result is shown in Fig. 10. We obtain a maximal positive feedback of φ = 1.

## 5 Discussion

In this paper, we have reported two abstract models of swarms of high generality, in line with our long-term objective of creating a swarm calculus. The first model focuses on the dependency of swarm performance on swarm density by separating the system into two parts: cooperation and interference. It explains why an optimal or critical swarm density exists at which peak performance is reached. With the second model we describe the dynamics of collective decision processes with a focus on the existence and intensity of feedback. It gives an explanation of how the typical cubic functions of decision revision emerge through an increase of positive feedback over time.

The first model is simple and the existence of optimal swarm densities is a well-known fact. However, to the authors' knowledge, no similar model combined with a validation by fitting the model to data from diverse swarm applications was reported before, except for the hypotheses stated by Østergaard et al. (2001). Despite its simplicity, the model has the capability to give predictions of swarm performance, especially if the available data to which it is fitted includes an interval around the optimal density. That way this model might serve as a swarm calculus of swarm performance. In addition, we want to draw attention to the problem of masking special density-dependent properties by investigating only the mean performance. The example shown in Fig. c documents the existence of phases in swarm systems.

The second model is abstract as well but has a higher complexity and is more conclusive as it allows for mathematical derivations. Based on this urn model of positive feedback decision processes, the emerging cubic function of decision revisions can be derived (see Eq. 9). Hence, we generate the function of decision revision from our urn model, which allows for an interpretation of how this function emerges, whereas, for example, in Yates et al. (2009) this function is measured in a local model. Our model of collective decisions might qualify as a part of swarm calculus because such decision revision functions seem to be a general phenomenon in swarms.

This model can also be used to predict probability density functions of steady states in swarm systems. The workflow of measuring the positive feedback probability P, fitting a function to it, and using this function to calculate the expected average change Δs, which in turn is used to predict the expected probability density function of the steady state (eigenvector of the transition matrix), is accurate with comparatively small sample numbers.

A model of notable similarity was published in the context of 'sociophysics' (Galam, 2004). It is based on the assumption that subgroups form in collective decisions, within which a majority rule determines the subgroup's decision. The addition of contrarians, that is, voters always voting against the current majority decision, generates dynamics that are similar to the reported observations in noisy swarms. Galam's model is, however, focused on constant sizes of these subgroups and their combinatorics, while in swarm intelligence these groups and their sizes are dynamic.

A result of interest is also the particular course of the positive feedback increase over time in the density classification task (see Fig. 8b). It is to be noted that this increase seems to be independent of the respective values of the current consensus s. Furthermore, extreme values of the feedback intensity, such as φ = 0 or φ = 1, are not observed. An in-depth analysis of the underlying processes is beyond this paper, but we want to present two ideas. First, the final saturation phase of φ is most likely caused by explicit noise in the simulation: the agent–agent recognition rate was set to 0.8, which keeps φ small. Second, the initial fast increase of φ (after a transient, which might also be caused by the simulation because agents revise their color only after a minimum of five agent–agent encounters) might be caused by locally emerging sub-groups of homogeneous color within small areas that generate 'islands' of early positive feedback. These properties might, however, be highly stochastic and difficult to measure. Time-variant positive feedback was also observed in BEECLUST-controlled swarms as reported before (Hamann et al., 2012). Hence, a feedback system as given in Fig. 11 seems to be a rather common situation in swarm systems. A and B are properties of a swarm system that form a feedback loop (e.g., amount of pheromone and number of recruited ants). C is a third swarm property, which is subject to a time-variant process and which influences the feedback of B on A. In terms of the above urn model we can mimic this situation by saying that A in Fig. 11 is the number of blue marbles (w.l.o.g.), B is the probability of drawing a blue marble, the edge from B to A is governed by the probability of positive feedback (i.e., this edge can also negatively influence A), and C is an unspecified state variable that increases positive feedback (φ) over time and is influenced by an additional, unknown process. This triggers the question of what C can be and how it influences the feedback process independently of the current swarm consensus s.

As seen in Fig. 10, we get maximal positive feedback for the data of Yates et al. (2009), which has the effect that states of low alignment (s ≈ 0.5) are left as fast as possible. This reinforces the findings about the diffusion coefficient reported by Yates et al. (2009). A major feature of this self-organizing swarm seems to be the minimization of the time spent in states of low alignment (Yates et al.: "A higher diffusion coefficient at lower alignments suggests that the locusts 'prefer' to be in a highly aligned state").

## 6 Conclusion

One of the main results reported in this paper is that generally applicable swarm models with simple preconditions exist. The application of the swarm performance model requires only a concept of swarm density, and the application of the collective decision model requires only a consensus variable of a binary decision. Although both models are simple, they have enough explanatory power to give insights into swarm processes such as the interplay of cooperation and interference and the establishment of positive feedback.

The two presented models illustrate the methodology that can be applied to find more models and to extend swarm calculus. The methodology is characterized by a combination of a heuristic approach and a simple mathematical formalism. While the empirical part establishes a direct connection to applications, the formalism allows the integration of powerful mathematical methods such as Markov theory and linear algebra. That way there is reason to hope that similar models can be found, for example, for swarm systems showing aggregation, flocking, synchronization, or self-assembly. The main benefit of such models might be general insights into group behavior and swarms, but direct applications are also conceivable. It could be possible to implement variants of these models on swarm robots if the global knowledge necessary for this kind of model can be substituted by local sampling. The models could also be used as components, and several such components could be combined to form specialized, sophisticated models. The presented models could be combined to model a swarm showing collective decision-making, such as a BEECLUST-controlled swarm as pointed out above. Once a comprehensive set of such models has been collected, research on swarms could also be guided by formalisms supplemental to empirical research. Hence, we contend that it is possible to generate a set of models and methods of general applicability for swarm science, that is, to create a swarm calculus.

## Appendix A Details on curve fitting

All curve fitting was done with an implementation of the nonlinear least-squares Marquardt–Levenberg algorithm (Levenberg, 1944; Marquardt, 1963) using gnuplot 4.6 patchlevel 1 (2012-09-26).²

### a.1 Foraging in a group of robots

| function | |
|---|---|
| degrees of freedom | 52 |
| root mean square of residuals | 0.000146389 |

| parameter | value | asymptotic standard error | |
|---|---|---|---|
| | 0.00248537 | +/- 4.499e-05 | (1.81%) |
| | 1.23745 | +/- 0.01969 | (1.591%) |
| | -0.199589 | +/- 0.002932 | (1.469%) |

### a.2 Collective decision making based on BEECLUST

| function | |
|---|---|
| degrees of freedom | 22 |
| root mean square of residuals | 0.0515291 |

| parameter | value | asymptotic standard error | |
|---|---|---|---|
| | 0.158797 | +/- 0.02234 | (14.07%) |
| | 0.772042 | +/- 0.06951 | (9.003%) |
| | -0.0386915 | +/- 0.002867 | (7.409%) |

### a.3 Aggregation in tree-like structures and reduction to shortest path

The data for the curve fitted in Fig. d is shown in Tab. 3. In this case weighted fitting was used (two selected data points were weighted ten times higher than the other values) to enforce the correct limit behavior.

| function | |
|---|---|
| degrees of freedom | 21 |
| root mean square of residuals | 0.0924653 |

| parameter | value | asymptotic standard error | |
|---|---|---|---|
| | 114.55 | +/- 49.11 | (42.87%) |
| | 0.836024 | +/- 0.07586 | (9.074%) |
| | -89.9857 | +/- 6.985 | (7.763%) |

### a.4 Emergent–taxis behavior

The data for the curve fitted in Fig. a is shown in Tab. 4. Fitting was done in two steps. First, the interference function was fitted. Second, the performance function was fitted while keeping the previously obtained interference parameters fixed.

| function | |
|---|---|
| degrees of freedom | 48 |
| root mean square of residuals | 0.00479438 |

| parameter | value | asymptotic standard error | |
|---|---|---|---|
| | 0.213822 | +/- 0.007214 | (3.374%) |
| | -0.182333 | +/- 0.007664 | (4.203%) |
| | 0.0750781 | +/- 0.0008863 | (1.181%) |

| function | |
|---|---|
| degrees of freedom | 41 |
| root mean square of residuals | 0.0403196 |

| parameter | value | asymptotic standard error | |
|---|---|---|---|
| | 0.0106104 | +/- 0.0009767 | (9.205%) |
| | 3.23718 | +/- 0.03055 | (0.9438%) |

### a.5 Emergent–taxis behavior, narrow fit

The data for the curve fitted in Fig. b is shown in Tab. 5. The parameters of the interference function as obtained in A.4 were reused. The performance function was fitted within a narrow interval while keeping the interference parameters fixed.

| function | |
|---|---|
| degrees of freedom | 1 |
| root mean square of residuals | 0.0180345 |

| parameter | value | asymptotic standard error | |
|---|---|---|---|
| | 0.00660836 | +/- 0.005772 | (87.34%) |
| | 3.38946 | +/- 0.2856 | (8.425%) |

### a.6 Mean first passage times

The data for the curve fitted in Fig. 7 is shown in Tab. 6. Weighted fitting was applied based on the measured standard deviations, which were used to scale the weights accordingly.

| function measured | |
|---|---|
| function theoretical | |
| degrees of freedom | 7 |
| rms of residuals (measured) | 0.0613924 |
| rms of residuals (theoretical) | 1.35583 |

| parameter | value | asymptotic standard error | |
|---|---|---|---|
| | 1.36333 | +/- 0.07285 | (5.343%) |
| | 1.31916 | +/- 0.03673 | (2.784%) |
| | 0.0933643 | +/- 0.002197 | (2.353%) |
| | 1.31234 | +/- 0.2235 | (17.03%) |
| | 1.52047 | +/- 0.05814 | (3.824%) |
| | 0.107615 | +/- 0.001153 | (1.072%) |

### a.7 Density classification

The data for the curves fitted in Fig. a is shown in Tab. 7. For early times, we fixed the feedback intensity, as otherwise the fitting would result in values outside the valid range.

| functions | |
|---|---|

| time | degrees of freedom | root mean square of residuals |
|---|---|---|
| 100 | 32 | 3.29349e-05 |
| 200 | 40 | 3.15751e-05 |
| 400 | 50 | 2.35511e-05 |
| 800 | 63 | 1.94473e-05 |
| 1600 | 75 | 1.99464e-05 |
| 3200 | 81 | 2.37314e-05 |
| 6400 | 89 | 1.99628e-05 |

| time | parameter | value | asymptotic standard error | |
|---|---|---|---|---|
| 100 | | 0.00297812 | +/- 3.011e-05 | (1.011%) |
| 200 | | 0.00209906 | +/- 2.084e-05 | (0.9927%) |
| 400 | | 0.00133093 | +/- 1.12e-05 | (0.8417%) |
| 800 | | 0.000768213 | +/- 3.729e-05 | (4.854%) |
| 800 | | 0.00719126 | +/- 0.0335 | (465.9%) |
| 1600 | | 0.000666737 | +/- 1.904e-05 | (2.856%) |
| 1600 | | 0.304734 | +/- 0.0148 | (4.858%) |
| 3200 | | 0.00075085 | +/- 1.642e-05 | (2.186%) |
| 3200 | | 0.603136 | +/- 0.007976 | (1.322%) |
| 6400 | | 0.000846385 | +/- 9.191e-06 | (1.086%) |
| 6400 | | 0.744183 | +/- 0.004884 | (0.6563%) |

### a.8 Feedback intensities

The data for the curve fitted in Fig. b is shown in Tab. 8. Weighted fitting was applied with zero weight for the earliest data points (ignoring the initial values of φ) and double weight for the latest data points.

| function | |
|---|---|
| degrees of freedom | 28 |
| root mean square of residuals | 0.0173061 |

| parameter | value | asymptotic standard error | |
|---|---|---|---|
| | -0.000495857 | +/- 2.064e-05 | (4.161%) |
| | -0.215755 | +/- 0.01069 | (4.956%) |

### a.9 Positive feedback probability

| function | |
|---|---|
| degrees of freedom | 120 |
| root mean square of residuals | 0.00562933 |

| parameter | value | asymptotic standard error | |
|---|---|---|---|
| | 0.679526 | +/- 0.001996 | (0.2938%) |
| | 11.9802 | +/- 0.1334 | (1.113%) |

### a.10 Swarm alignment in locusts

The data for the curve fitted in Fig. 10 is shown in Tab. 10. We set φ = 1, as otherwise the fitting would result in φ > 1. Weighted fitting was applied; selected data points had double the weight of the others.

| functions | |
|---|---|
| degrees of freedom | 174 |
| root mean square of residuals | 0.000270536 |

| parameter | value | asymptotic standard error | |
|---|---|---|---|
| | 0.00426427 | +/- 8.578e-05 | (2.012%) |

### Footnotes

- This paper is an extended version of Hamann (2012). The main extensions are the method of deriving the probability of positive feedback based on observed decision revisions (Sec. 4.1), a discussion of additional methodology such as Markov chains, splitting probabilities, and mean first passage times (Sec. 4.3), and a comprehensive introduction of the Ehrenfest and the Eigen urn models.
- see http://www.gnuplot.info/

### References

- Arkin, R. C., Balch, T., and Nitz, E. (1993). Communication of behavioral state in multi-agent retrieval tasks. In Book, W. and Luh, J., editors, IEEE Conference on Robotics and Automation, volume 3, pages 588–594, Los Alamitos, CA. IEEE Press.
- Berman, S., Kumar, V., and Nagpal, R. (2011). Design of control policies for spatially inhomogeneous robot swarms with application to commercial pollination. In LaValle, S., Arai, H., Brock, O., Ding, H., Laugier, C., Okamura, A. M., Reveliotis, S. S., Sukhatme, G. S., and Yagi, Y., editors, IEEE International Conference on Robotics and Automation (ICRA’11), pages 378–385, Los Alamitos, CA. IEEE Press.
- Bjerknes, J. D. and Winfield, A. (2010). On fault-tolerance and scalability of swarm robotic systems. In Martinoli, A., Mondada, F., Correll, N., Mermoud, G., Egerstedt, M., Hsieh, M. A., Parker, L. E., and Støy, K., editors, Proc. Distributed Autonomous Robotic Systems (DARS 2010), pages 431–444, Berlin, Germany. Springer-Verlag.
- Bjerknes, J. D., Winfield, A., and Melhuish, C. (2007). An analysis of emergent taxis in a wireless connected swarm of mobile robots. In Shi, Y. and Dorigo, M., editors, IEEE Swarm Intelligence Symposium, pages 45–52, Los Alamitos, CA. IEEE Press.
- Breder, C. M. (1954). Equations descriptive of fish schools and other animal aggregations. Ecology, 35(3):361–370.
- Camazine, S., Deneubourg, J.-L., Franks, N. R., Sneyd, J., Theraulaz, G., and Bonabeau, E. (2001). Self-Organizing Biological Systems. Princeton University Press.
- Deneubourg, J.-L., Aron, S., Goss, S., and Pasteels, J. M. (1990). The self-organizing exploratory pattern of the argentine ant. Journal of Insect Behavior, 3(2):159–168.
- Dussutour, A., Beekman, M., Nicolis, S. C., and Meyer, B. (2009). Noise improves collective decision-making by ants in dynamic environments. Proceedings of the Royal Society London B, 276:4353–4361.
- Dussutour, A., Fourcassié, V., Helbing, D., and Deneubourg, J.-L. (2004). Optimal traffic organization in ants under crowded conditions. Nature, 428:70–73.
- Edelstein-Keshet, L. (2006). Mathematical models of swarming and social aggregation. Robotica, 24(3):315–324.
- Ehrenfest, P. and Ehrenfest, T. (1907). Über zwei bekannte Einwände gegen das Boltzmannsche H-Theorem. Physikalische Zeitschrift, 8:311–314.
- Eigen, M. and Winkler, R. (1993). Laws of the game: how the principles of nature govern chance. Princeton University Press.
- Galam, S. (2004). Contrarian deterministic effect on opinion dynamics: the “hung elections scenario”. Physica A, 333(1):453–460.
- Gardiner, C. W. (1985). Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences. Springer-Verlag, Berlin, Germany.
- Gautrais, J., Theraulaz, G., Deneubourg, J.-L., and Anderson, C. (2002). Emergent polyethism as a consequence of increased colony size in insect societies. Journal of Theoretical Biology, 215(3):363–373.
- Goldberg, D. and Matarić, M. J. (1997). Interference as a tool for designing and evaluating multi-robot controllers. In Kuipers, B. J. and Webber, B., editors, Proc. of the Fourteenth National Conference on Artificial Intelligence (AAAI’97), pages 637–642, Cambridge, MA. MIT Press.
- Graham, R., Knuth, D., and Patashnik, O. (1998). Concrete Mathematics: A Foundation for Computer Science. Addison–Wesley, Reading, MA.
- Grinstead, C. M. and Snell, J. L. (1997). Introduction to Probability. American Mathematical Society, Providence, RI.
- Hamann, H. (2006). Modeling and investigation of robot swarms. Master’s thesis, University of Stuttgart, Germany.
- Hamann, H. (2010). Space-Time Continuous Models of Swarm Robotics Systems: Supporting Global-to-Local Programming. Springer-Verlag, Berlin, Germany.
- Hamann, H. (2012). Towards swarm calculus: Universal properties of swarm performance and collective decisions. In Dorigo, M., Birattari, M., Blum, C., Christensen, A. L., Engelbrecht, A. P., Groß, R., and Stützle, T., editors, Swarm Intelligence: 8th International Conference, ANTS 2012, volume 7461 of Lecture Notes in Computer Science, pages 168–179, Berlin, Germany. Springer-Verlag.
- Hamann, H., Meyer, B., Schmickl, T., and Crailsheim, K. (2010). A model of symmetry breaking in collective decision-making. In Doncieux, S., Girard, B., Guillot, A., Hallam, J., Meyer, J.-A., and Mouret, J.-B., editors, From Animals to Animats 11, volume 6226 of Lecture Notes in Artificial Intelligence, pages 639–648, Berlin, Germany. Springer-Verlag.
- Hamann, H., Schmickl, T., Wörn, H., and Crailsheim, K. (2012). Analysis of emergent symmetry breaking in collective decision making. Neural Computing & Applications, 21(2):207–218.
- Hamann, H. and Wörn, H. (2007). Embodied computation. Parallel Processing Letters, 17(3):287–298.
- Hamann, H. and Wörn, H. (2008). Aggregating robots compute: An adaptive heuristic for the Euclidean Steiner tree problem. In Asada, M., Hallam, J. C., Meyer, J.-A., and Tani, J., editors, The tenth International Conference on Simulation of Adaptive Behavior (SAB’08), volume 5040 of Lecture Notes in Artificial Intelligence, pages 447–456. Springer-Verlag.
- Ingham, A. G., Levinger, G., Graves, J., and Peckham, V. (1974). The Ringelmann effect: Studies of group size and group performance. Journal of Experimental Social Psychology, 10(4):371–384.
- Jeanne, R. L. and Nordheim, E. V. (1996). Productivity in a social wasp: per capita output increases with swarm size. Behavioral Ecology, 7(1):43–48.
- Jeanson, R., Fewell, J. H., Gorelick, R., and Bertram, S. M. (2007). Emergence of increased division of labor as a function of group size. Behavioral Ecology and Sociobiology, 62:289–298.
- Karsai, I. and Wenzel, J. W. (1998). Productivity, individual-level and colony-level flexibility, and organization of work as consequences of colony size. Proc. Natl. Acad. Sci. USA, 95:8665–8669.
- Kennedy, J. and Eberhart, R. C. (2001). Swarm Intelligence. Morgan Kaufmann.
- Klein, M. J. (1956). Generalization of the Ehrenfest urn model. Physical Review, 103(1):17–20.
- Krafft, O. and Schaefer, M. (1993). Mean passage times for triangular transition matrices and a two parameter Ehrenfest urn model. Journal of Applied Probability, 30(4):964–970.
- Lerman, K. and Galstyan, A. (2002). Mathematical model of foraging in a group of robots: Effect of interference. Autonomous Robots, 13:127–141.
- Lerman, K., Martinoli, A., and Galstyan, A. (2005). A review of probabilistic macroscopic models for swarm robotic systems. In Şahin, E. and Spears, W. M., editors, Swarm Robotics - SAB 2004 International Workshop, volume 3342 of Lecture Notes in Computer Science, pages 143–152. Springer-Verlag, Berlin, Germany.
- Levenberg, K. (1944). A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2:164–168.
- Lighthill, M. J. and Whitham, G. B. (1955). On kinematic waves. II. A theory of traffic flow on long crowded roads. Proceedings of the Royal Society of London, A229(1178):317–345.
- Mahmassani, H. S., Dong, J., Kim, J., Chen, R. B., and Park, B. (2009). Incorporating weather impacts in traffic estimation and prediction systems. Technical Report FHWA-JPO-09-065, U.S. Department of Transportation.
- Mallon, E. B., Pratt, S. C., and Franks, N. R. (2001). Individual and collective decision-making during nest site selection by the ant Leptothorax albipennis. Behavioral Ecology and Sociobiology, 50:352–359.
- Marquardt, D. (1963). An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal on Applied Mathematics, 11(2):431–441.
- Milutinovic, D. and Lima, P. (2007). Cells and Robots: Modeling and Control of Large-Size Agent Populations. Springer-Verlag, Berlin, Germany.
- Miramontes, O. (1995). Order-disorder transitions in the behavior of ant societies. Complexity, 1(1):56–60.
- Mondada, F., Bonani, M., Guignard, A., Magnenat, S., Studer, C., and Floreano, D. (2005). Superlinear physical performances in a SWARM-BOT. In Capcarrere, M. S., editor, Proc. of the 8th European Conference on Artificial Life (ECAL), volume 3630 of Lecture Notes in Computer Science, pages 282–291, Berlin, Germany. Springer-Verlag.
- Nembrini, J., Winfield, A. F. T., and Melhuish, C. (2002). Minimalist coherent swarming of wireless networked autonomous mobile robots. In Hallam, B., Floreano, D., Hallam, J., Hayes, G., and Meyer, J.-A., editors, From Animals to Animats 7: Proceedings of the Seventh International Conference on Simulation of Adaptive Behavior, pages 373–382, Cambridge, MA, USA. MIT Press.
- Nicolis, S. C., Zabzina, N., Latty, T., and Sumpter, D. J. T. (2011). Collective irrationality and positive feedback. PLoS ONE, 6:e18901.
- Okubo, A. (1986). Dynamical aspects of animal grouping: Swarms, schools, flocks, and herds. Advances in Biophysics, 22:1–94.
- Okubo, A. and Levin, S. A. (2001). Diffusion and Ecological Problems: Modern Perspectives. Springer-Verlag, Berlin, Germany.
- Østergaard, E. H., Sukhatme, G. S., and Matarić, M. J. (2001). Emergent bucket brigading: a simple mechanism for improving performance in multi-robot constrained-space foraging tasks. In André, E., Sen, S., Frasson, C., and Müller, J. P., editors, Proceedings of the Fifth International Conference on Autonomous Agents (AGENTS'01), pages 29–35, New York, NY, USA. ACM.
- Prorok, A., Correll, N., and Martinoli, A. (2011). Multi-level spatial models for swarm-robotic systems. The International Journal of Robotics Research, 30(5):574–589.
- Saffre, F., Furey, R., Krafft, B., and Deneubourg, J.-L. (1999). Collective decision-making in social spiders: Dragline-mediated amplification process acts as a recruitment mechanism. Journal of Theoretical Biology, 198:507–517.
- Schmickl, T. and Hamann, H. (2011). BEECLUST: A swarm algorithm derived from honeybees. In Xiao, Y., editor, Bio-inspired Computing and Communication Networks. CRC Press.
- Schneider-Fontán, M. and Matarić, M. J. (1996). A study of territoriality: The role of critical mass in adaptive task division. In Maes, P., Wilson, S. W., and Matarić, M. J., editors, From Animals to Animats 4, pages 553–561. MIT Press.
- Seeley, T. D., Camazine, S., and Sneyd, J. (1991). Collective decision-making in honey bees: how colonies choose among nectar sources. Behavioral Ecology and Sociobiology, 28(4):277–290.
- Strogatz, S. H. (2001). Exploring complex networks. Nature, 410(6825):268–276.
- Vicsek, T. and Zafeiris, A. (2012). Collective motion. Physics Reports, 517(3-4):71–140.
- Wong, G. and Wong, S. (2002). A multi-class traffic flow model – an extension of LWR model with heterogeneous drivers. Transportation Research Part A: Policy and Practice, 36(9):827–841.
- Yates, C. A., Erban, R., Escudero, C., Couzin, I. D., Buhl, J., Kevrekidis, I. G., Maini, P. K., and Sumpter, D. J. T. (2009). Inherent noise can facilitate coherence in collective swarm motion. Proc. Natl. Acad. Sci. USA, 106(14):5464–5469.