Quantized Dissensus in Networks of Agents subject to Death and Duplication
Dissensus is a modeling framework for networks of dynamic agents competing for scarce resources. Originally inspired by the behavior of biological cells, it also fits marketing, finance and many other application areas. Competition is often unstable in the sense that strong agents, those having access to large resources, gain more and more resources at the expense of weak agents. Thus, strong agents duplicate when reaching a critical amount of resources, whereas weak agents die when losing all their resources. To capture these phenomena we introduce systems with a discrete time gossip and unstable state dynamics, interrupted by discrete events affecting the network topology. Invariancy of states and topologies, as well as network connectivity, are explored.
Keywords: Consensus Protocols, Quantized Control, Dynamic Programming, Network based marketing, Dynamic Pie Diagram.
Recently, great interest has been devoted to consensus problems (see, e.g., ,  and the literature cited therein). Consensus refers to a scenario where a number of agents with interdependent dynamics converge to a common value. When this happens we say that the agents reach an agreement, or consensus. If state is the resource owned by agent , consensus means that strong agents, those with large resources, transfer part of these resources to weak agents. Interdependency has a “local” flavor, in the sense that each agent can exchange resources only with a subset (call it the neighborhood) of adjacent agents. In its simplest form, the consensus problem can be modeled as an autonomous cooperative linear system with a Laplacian dynamic matrix.
In this work, we study a discrete event system (, ) that behaves in a complementary manner with respect to the above system. We model a competitive scenario where strong agents gain more and more resources at the expense of weak ones. This unstable dynamics can be captured by assuming that, between two consecutive events, the system evolves as a discrete time autonomous linear system as before, but now the dynamic matrix is the opposite of a time dependent Laplacian matrix. The deviation between neighbors’ states/resources increases, and for this reason the system is called a competitive system.
With this premise, it makes sense to assume that when an agent loses all its resources, its state goes to zero and it “dies”. The dead agent is then removed from the system together with all its connections. Death introduces a discrete event into the system dynamics.
On the contrary, when an agent gains a critical amount of resources, it duplicates. The agent, call it the parent, divides into two new agents, its children. Both the children’s initial states and neighborhoods are functions of the parent’s state and neighborhood, respectively. Duplication introduces a second discrete event into the system dynamics. Such a modeling framework, which we formalize in this paper for the first time, goes under the name of dissensus. The law ruling the discrete time evolution will be called the dissensus protocol.
Dissensus is motivated by applications where multiple agents compete for scarce resources. In marketing, as an example, agents represent firms or companies and resources are market shares or numbers of customers. Competition often results in firms with a large customer base attracting more and more customers at the expense of the smaller firms [14, 7].
Death describes a firm abandoning the market for lack of customers. Duplication captures the situation where a firm expands and then divides into smaller firms.
Duplication may be forced by different causes: for example, the intervention of an anti-trust authority; the presence of physical constraints in large infrastructures such as airports; or the departure of apprentices/young partners from the mother firm to open a new one, in the case of service businesses or handicraft industries.
Other applications are in biological networks, where, e.g., adjacent cells compete for nutrients [4, 9]. The state of an agent describes its size, which in turn is (approximately) proportional to the amount of nutrients the cell subtracts from its adjacent cells.
Finally, as a theoretical example, let us introduce what we call the dynamic pie diagram. In a dynamic pie diagram, the amplitude of each slice evolves over time. The larger slices grow wider and wider at the expense of their adjacent slices. Whenever the amplitude of a slice becomes zero, the slice disappears (dies). On the other hand, whenever the amplitude of a slice reaches a given angle, the slice is partitioned (duplicates).
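To make the example concrete, here is a minimal simulation sketch of a dynamic pie diagram. All names, the unit transfer, and the 120-degree duplication threshold are our own illustrative choices, not part of the formal protocol defined below:

```python
# Minimal dynamic-pie sketch: the 120-degree split threshold and the
# unit transfer are illustrative assumptions, not the paper's rules.
THRESHOLD = 120

def pie_step(slices, i):
    """One gossip step between adjacent slices i and i+1 (cyclic):
    the wider slice gains one degree at the expense of the narrower
    one; then slices of amplitude zero die and slices reaching
    THRESHOLD split into two, preserving the total amplitude."""
    j = (i + 1) % len(slices)
    if j != i and slices[i] != slices[j]:
        hi, lo = (i, j) if slices[i] > slices[j] else (j, i)
        slices[hi] += 1
        slices[lo] -= 1
    out = []
    for a in slices:
        if a == 0:
            continue                     # slice dies
        elif a >= THRESHOLD:
            out += [a - a // 2, a // 2]  # slice duplicates
        else:
            out.append(a)
    return out

slices = [100, 80, 90, 90]           # amplitudes sum to 360 degrees
for k in range(500):
    slices = pie_step(slices, k % len(slices))
assert sum(slices) == 360            # total amplitude is invariant
```

Whatever the sequence of deaths and duplications, the total amplitude stays at 360 degrees, mirroring the state-sum invariance discussed later.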
Resource flows are quantized and involve at each time a single pair of agents selected according to a gossip rule. Dissensus in continuous time with continuous/unquantized flows has been introduced by the same authors in .
2 The Dissensus Protocol
The system evolves according to a quantized discrete time dynamics interrupted at critical times by discrete events which we will call critical events. Between any two consecutive critical times and , with , the system consists of a set of agents
whose cardinality is a function of the time instants . The agents are characterized by states that assume only discrete values in . The same agents are organized in a single-component connection network whose edgeset includes all the non-oriented pairs of agents that bilaterally exchange information about their respective states. At the occurrence of a critical event, both the number of agents and the topology of the connection network are modified.
We describe how the system state evolves in the following subsection, whereas we describe how the system structure is modified at the critical events in Subsection 2.2.
Hereafter, for each , we call the set the neighborhood of agent . In addition, for ease of notation, we omit the dependence on when there is no risk of ambiguity. Then, e.g., we write instead of .
2.1 Quantized discrete time dynamics
The evolution of each agent state depends on local information, meaning that it depends only on the states of the agent’s neighbors. It is governed by a Quantized Gossip Algorithm (QGA) in the spirit of what is described in  for cooperative systems.
At each time , for any , one edge is selected, see, e.g., Fig. 1, and the states and of agents and are updated as follows:
for some integer such that .
Dynamics (1) makes the deviation between neighboring agents diverge, meaning that if, e.g., , the greater state tends to increase , and the lower state tends to decrease . Also, dynamics (1) is zero-sum, meaning that the same quantity is gained by and lost by , thus resulting in the local invariance property .
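As a minimal sketch of one such update (the names `x`, `i`, `j` and the fixed unit increment `eps` are our own; equation (1) in the text is the authority), the update on a selected edge could read:

```python
def dissensus_step(x, i, j, eps=1):
    """One quantized dissensus update on the selected edge (i, j):
    the richer agent gains eps units and the poorer agent loses
    eps units, so the pair sum x[i] + x[j] is invariant while the
    deviation |x[i] - x[j]| grows.  eps=1 is an illustrative choice;
    the paper only requires an integer increment."""
    if x[i] == x[j]:
        return  # equal states: nothing to transfer
    rich, poor = (i, j) if x[i] > x[j] else (j, i)
    x[rich] += eps
    x[poor] -= eps

x = {0: 3, 1: 5}
dissensus_step(x, 0, 1)
assert x == {0: 2, 1: 6}   # zero-sum: 2 + 6 == 3 + 5
```

Note that the update is purely local: only the two endpoints of the selected edge change state, and their sum is preserved.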
Differently from , we additionally require that edges are selected in a deterministic way, according to the next assumption.
There exists a sufficiently large integer value such that, within each time interval , either each edge is selected at least once or a critical event occurs.
Had we assumed, in line with , that each edge in is selected randomly and with positive probability, some of the results in this paper would hold only in probability.
2.2 Critical events: death and duplication
Upper and lower thresholds are given: the upper threshold is the positive integer value , the lower threshold is the value zero. A critical event occurs at time if an agent reaches a threshold at time , i.e., when either or . More formally,
Hereafter, when it is not stated otherwise, we assume without loss of generality that, within each interval , the agents are renumbered so that the agent reaching a threshold value is always agent .
On the occurrence of a critical event, the number of agents and the connection network are modified. In this context, we denote by (respectively ) the set of agents (respectively edges) involved in a critical event and still present in the network at the end of the event. At the system structure is modified as follows.
If , we say that agent dies and is consequently removed from the system. The agents in the neighborhood of inherit the connections of as depicted in Fig. 2.
In particular, if, for each agent , we denote by the set of new neighbors that inherits from the dead agent , we can write:
Then, on the occurrence of a death, the set of involved agents is and the set of inherited links is .
As will become clearer in Section 3, in order to preserve the connectivity of the network, we require that i) all neighbors of the dead agent are linked to at least one other neighbor, and ii) the set of neighbors of the dead agent, together with the inherited links, forms a connected subnetwork. Formally,
Regarding the connection network , the dead agent is removed from the set of agents , all links of the dead agent are removed from the set of edges , finally the new links in are added to :
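The death rule above can be sketched as follows. This is our own illustrative encoding (a dict of neighbor sets stands in for the connection network; the caller supplies the inherited links required by the rule):

```python
def die(neighbors, d, inherited):
    """Remove the dead agent d and add the inherited links: pairs of
    former neighbors of d which, per the death rule, must keep the
    old neighborhood of d connected."""
    for k in neighbors.pop(d):
        neighbors[k].discard(d)   # drop all links of the dead agent
    for a, b in inherited:        # add the inherited links
        neighbors[a].add(b)
        neighbors[b].add(a)

# path 0-1-2: agent 1 dies and its neighbors 0, 2 inherit edge (0, 2)
G = {0: {1}, 1: {0, 2}, 2: {1}}
die(G, 1, [(0, 2)])
assert G == {0: {2}, 2: {0}}   # the network stays connected
```

In the example, dropping agent 1 without the inherited edge would disconnect agents 0 and 2, which is exactly what conditions i) and ii) forbid.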
If , the parent agent divides, producing two children agents and . The two children agents inherit the parent’s connections and state (see, e.g., Fig. 3).
This means that, if we denote by and the new neighbors of the two children agents and , we can write:
Then, on the occurrence of a duplication, the set of involved agents is and the set of links to the children agents is .
In order to preserve the connectivity of the network, we impose that one of the following two rules holds:
Furthermore, in order to keep the sum of the states invariant, we also impose that the values and are integers such that and
Regarding the connection network , a new agent , replacing the old one, and an additional agent are added to the set of agents , all the links to the parent agent are removed from the set of edges , and the set of new links to the children agents is added to :
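A duplication event can be sketched as follows. This is our own illustrative encoding: the caller picks the children's neighbor sets `n1`, `n2` so as to satisfy the connectivity rules, and the parent's state is split into two positive integers so the global sum of states is invariant:

```python
def duplicate(neighbors, states, p, child, n1, n2):
    """Parent p splits into p and `child`, with neighbor sets n1 and
    n2 chosen by the caller to satisfy the paper's connectivity
    rules. The integer split of the parent's state keeps the global
    sum of states invariant."""
    old = set(neighbors[p])
    s = states[p]
    states[p], states[child] = s - s // 2, s // 2  # integers, sum s
    for k in old:                 # remove all links to the parent
        neighbors[k].discard(p)
    neighbors[p], neighbors[child] = set(n1), set(n2)
    for k in n1:                  # add the new links to the children
        if k != child:
            neighbors[k].add(p)
    for k in n2:
        if k != p:
            neighbors[k].add(child)

# star 1-0-2: parent 0 duplicates into 0 and 3, which link to each
# other and share the parent's old neighbors between them
G = {0: {1, 2}, 1: {0}, 2: {0}}
x = {0: 9, 1: 1, 2: 1}
duplicate(G, x, 0, 3, n1={1, 3}, n2={2, 0})
assert x[0] + x[3] == 9          # sum of states is invariant
assert G == {0: {1, 3}, 1: {0}, 2: {3}, 3: {2, 0}}
```

The resulting network 1-0-3-2 is connected: each old neighbor of the parent ends up linked to one child, and the two children are linked to each other.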
Figure 4 shows a possible evolution of a dynamic pie diagram. An agent/slice divides at time , having reached an amplitude of at the previous time instant. In contrast, an agent/slice dies at time .
In the next section, we discuss the invariancy and connectivity properties associated with the death and duplication rules.
3 Connectivity and Invariancy
In this section, we introduce some bounds on the number of agents. To this end, we initially discuss the connectivity and invariancy properties that are exploited in the bounds determination.
(Local connectivity) is locally connected if the subnetwork is connected.
Note that we can impose the local connectivity of by exploiting the knowledge of , i.e., the information available to the agent just before its death or duplication. Conditions under which local connectivity implies the connectivity of are established in the next theorem.
If is connected, then both the death rule (3) and the duplication rules (6) are sufficient conditions for the connectivity of . These rules become necessary as well when the connectivity of must be guaranteed on the basis of the sole knowledge of the neighborhood available to agent , disregarding any information on the neighborhoods of the other agents .
The proof is based on the following two facts:
differs from only in the subnetwork induced by the agents in ,
This latter fact holds true by definition of rule (3) in the case of a death. In the case of a duplication, we note that and that both rules (6) imply that each is connected with either or . In addition, if the first condition of (6) holds, then and , hence, includes . If the second condition of (6) holds, then there exists such that includes and . In both cases, we can conclude that each is connected to through the edges in . By transitivity, is connected.
(Sufficiency) As has a single component, we know that, in , each is connected to at least one agent through a path that does not include agent or any other agent in different from . These paths connecting with remain identical in . Then, as , all the agents are connected in with at least one agent in . As is connected, all the agents in , and hence the agents in , are connected to each other in . Finally, as the set of agents in is , we can conclude that is connected.
(Necessity) If we have no knowledge of the neighborhoods for , we must conservatively assume that all the links between and the agents are necessary for the connectivity of , since may be, e.g., a tree. Consequently, to guarantee the connectivity of , we must guarantee that the edges in keep the agents in connected to each other and to the new agents possibly added in . In other words, we require that is connected.
Figure 5 (left) shows that if (6) does not hold, may turn out to be disconnected. However, in general, rules (3) and (6) are not necessary for the connectivity of . As an example, in the case of a duplication, the connectivity of holds even if for all , provided that all are adjacent to a common agent (see, e.g., Fig. 5, right).
Invariancy is necessary to guarantee that the number of agents neither diverges to infinity nor converges to one. Assume that, as a consequence of a duplication at time , the sum of the states decreases over time, i.e., , or that an agent may die having a state . It is immediate to see that under these hypotheses either the system reaches consensus or . Conversely, if the sum of the states increases over time, which corresponds to saying , then either the system reaches consensus or .
Invariancy is also used in the next result to establish that the number of agents always remains within certain upper and lower bounds.
Let be the critical time, possibly infinite, at which the first duplication occurs. Then the following bounds hold on the number of agents:
The lower bound is a straightforward consequence of the invariancy (9). As regards the upper bounds, we can have in the following trivial situation: at the initial time the agents’ states have value for all . Otherwise, from on, the maximum number of agents can be reached only when a duplication occurs. In this case, the agents’ state invariancy imposes
In (11), the inequality holds as, after a duplication event, at least one of the system agents has state equal to , a second agent has state equal to , and the remaining agents have state . Finally, the bound in (10) trivially follows from (11).
Note that the fact that , and hence the cardinality of , is bounded from above for all guarantees that the value required by Assumption 1 can exist.
Invariancy and connectivity play a critical role when proving that, under a QGA, the intervals between consecutive critical events are finite. Indeed, this fact holds true unless the agents reach consensus. We say that the agents reach consensus at time if all states are equal, i.e., for all .
The next lemma is a prelude to showing that the interval between consecutive critical events is finite.
Consensus can be reached only at times , for .
For each time , let be the agent with minimum state and be the agent with maximum state. By definition of the QGA, the value of , respectively , may vary but cannot increase, respectively decrease, for . As a consequence, the difference between and cannot reduce to zero for .
Assume that the agents do not reach consensus in ; then a critical event will occur in a finite time .
We prove the statement by contradiction. In particular, we show that if no critical event occurs in finite time, the system state diverges. To this end, we assume that the agents have not reached consensus at time and that no critical event occurs from on. Let , be the agents selected by the QGA at the generic time and consider the difference
In view of Assumption 1, we have that for all , as, if i) no critical event occurs after , ii) is connected, and iii) the agents have not reached consensus in , then within every instants at least one pair of agents with different states is selected. Then, as , we have that . The value of the latter limit contradicts the fact that must hold for any . Hence, either the agents have reached consensus in or a critical event occurs after .
An immediate consequence of Theorems 2 and 3 is that a first duplication event occurs at time after at most death events. In addition, the is finite unless the agents reach consensus at time or at one of the above death events.
In the rest of this section, we analyze possible scenarios where the agents reach consensus.
Consider systems evolving according to a protocol in which the duplication rule (5) requires that and that both children inherit all their parent’s connections and connect to each other (see, e.g., rule (16) discussed in the next section). Such systems may reach consensus. Note that in this case each duplication generates twin agents. Consider, as an example, the evolution of a system where and and . By Theorem 2, we have for all . We can easily define a QGA such that at time , we have and . At time , the agent dies (we have not renumbered the agents for ease of exposition) and the remaining two twin agents reach an equilibrium corresponding to a state value equal to .
The next theorem establishes some necessary conditions for the agents to reach consensus.
Consensus can be reached only if for all . Furthermore,
where is the critical time of the first duplication event.
The first part of the theorem is a trivial consequence of the definition of consensus and of the invariancy property.
The first lower bound holds since, for , we have and, obviously, the invariancy property holds. In proving the second lower bound, we observe that, for , we have . Indeed, any QGA prevents from decreasing between two consecutive critical duplication events.
A direct consequence of Theorem 4 is that there exist initial states such that the agents will never reach consensus, as stated in the following corollary.
Let be the critical time of the first duplication event. No consensus can be attained for , if is not divisible by some integer less than or equal to . No consensus can be attained for , if is not divisible by some integer greater than or equal to .
The corollary thesis is an immediate consequence of Theorem 4 and of the fact that both the number of agents that reach consensus and their states must assume integer values.
We conclude this section by observing that the trajectory may become periodic in the long run when the agents do not reach consensus. As an example, this situation occurs if the system evolves according to death/duplication rules that depend on the agents’ states and on the connection network topology, and no stochasticity or time dependency is allowed. Indeed, we can observe that our systems evolve in a space defined by the number of agents, their states and the topologies of their possible connection networks. As both the number of agents and their states may assume only integer values bounded from above, the number of topologies of the possible connection networks is also finite, hence the thesis follows.
4 Topology invariancy
Let us now investigate conditions on the death and duplication rules that result in the invariancy of specific network topologies such as complete, hole or chain networks. As we study the asymptotic evolution of a system, let us initially observe that Theorem 2 implies that there always exist a critical time and two integer values and such that, for each satisfying
we have an infinite sequence of critical time instants such that . In this context, we say that the density of the connection network of a system does not increase asymptotically if there exists a value such that for each satisfying condition (12), the value does not increase over the critical times in and greater than or equal to .
We start by considering the duplication rule which equally divides the parent’s connections between the two children. Formally,
where function returns a random subset of elements of .
With such a rule, if is a hole network, respectively a chain network (see Fig. 6), then are hole networks, respectively chain networks, for every , whatever death rule is implemented. Again, this corresponds to saying that hole and chain networks are invariant, as established in the next theorem. Before stating the theorem, we recall that a network is a hole network, respectively a chain network, if it is connected and all the agents have degree two, respectively all the agents have degree two apart from the two agents at the extremes of the chain, whose degree is one (see, e.g., Fig. 6).
Hole and chain networks are invariant under duplication rule (13).
The idea of the proof is that critical events do not change, in general, the degrees of the agents. This is always true unless the dying or duplicating agent is an extreme of the chain. Actually, when an extreme agent dies, its neighbor, whose degree changes from two to one, becomes the new extreme. Similarly, when an extreme agent duplicates, one child is the new extreme with degree one, while the other child has degree two, as all the other agents.
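The equal division of the parent's connections prescribed by rule (13) can be sketched as follows (our own illustrative stand-in for the paper's random-subset function; the function name and data types are assumptions):

```python
import random

def split_neighbors(nbrs):
    """Rule (13)-style split: partition the parent's connections
    into two random halves, one per child (a stand-in for the
    paper's random-subset function)."""
    nbrs = list(nbrs)
    random.shuffle(nbrs)
    half = len(nbrs) // 2
    return set(nbrs[:half]), set(nbrs[half:])

n1, n2 = split_neighbors({1, 2, 3, 4})
assert n1 | n2 == {1, 2, 3, 4}   # every connection is inherited
assert not (n1 & n2)             # no connection is duplicated
assert len(n1) == len(n2) == 2   # equal division
```

On a hole or chain network the parent has degree at most two, so each child receives at most one of the old neighbors, which is what keeps the degrees, and hence the topology, invariant.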
Consider now the death rule (2) that assigns all the connections of a dead agent to just one of its neighbors. This can be formally described by a death rule that, for each , satisfies:
where is arbitrarily picked.
From a practical point of view, these choices for are the simplest ones to implement that guarantee the connectivity of after the removal of the dying agent.
The thesis is trivially true if the agents reach consensus. Otherwise, it is immediate to see that, if at a duplication occurs, ; if at a death occurs,
as at least the connection is not substituted by a new connection. Now, observe that, for each satisfying condition (12), an equal number of death and duplication events must occur between each two critical times and , with , hence . Then, we can conclude that the density of the connection network cannot increase over time. Finally, the observation that the considered duplication and death rules forbid the creation of new cycles in the connection network concludes the proof.
The authors’ computational experiments show that the density of the connection network indeed usually decreases until presents a single cycle or no cycle at all, if the assumptions of Theorem 6 hold and the agents do not reach consensus. Although no formal proof confirms these observations, it appears reasonable to expect that the connection network becomes sparser and sparser. Indeed, inequality (15) in the proof of Theorem 6 holds strictly on the occurrence of a death event whenever , a situation quite common if the network is not sparse.
Let us now consider the duplication rule which makes both children inherit all the parent connections and connect to each other:
Observe that the above rule preserves the complete connectivity of the subnetwork induced by the critical set after a duplication event. This also implies that if is a complete network then are complete networks for every , whatever death rule is implemented. This corresponds to saying that the complete network is invariant under the duplication rule (16), as established in the next theorem.
The complete network is invariant under the duplication rule (16).
Assume is a complete network. If a death occurs at time , each agent is already adjacent to all the other agents. If a duplication occurs at time , then, because of rule (16), the two new agents are adjacent to each other and to all the other agents.
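The full-inheritance duplication of rule (16) and the completeness check in the proof can be sketched as follows (our own illustrative encoding, with a dict of neighbor sets standing in for the connection network):

```python
def duplicate_full(neighbors, p, child):
    """Duplication per rule (16): both children inherit every parent
    connection and are linked to each other."""
    nbrs = set(neighbors[p])
    neighbors[child] = nbrs | {p}     # child keeps all links + parent
    neighbors[p] = nbrs | {child}     # parent keeps all links + child
    for k in nbrs:
        neighbors[k].add(child)

def is_complete(neighbors):
    """True iff every agent is adjacent to all the other agents."""
    agents = set(neighbors)
    return all(neighbors[v] == agents - {v} for v in agents)

# complete network on {0, 1, 2}: agent 0 duplicates into 0 and 3
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
duplicate_full(G, 0, 3)
assert is_complete(G)   # completeness is preserved
```

Repeating the duplication any number of times keeps the network complete, which is exactly the invariance stated by the theorem.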
5 Final Remarks
In this paper we have introduced systems of competitive agents exchanging quantized flows according to a dissensus protocol. We have discussed some invariancy properties of the systems and observed that the number of agents is bounded from below and above over time.
As a future direction of research, we are oriented toward the definition of more general rules for the inheritance of connections. Indeed, the main limit of the current version of the dissensus protocol seems to be the fact that the inheritance rule for the connections applies only when an agent dies. In many real competitive systems, the larger agents may acquire the connections of the smaller agents while the latter are still alive. Consider the following two examples. The number of cells adjacent to (that is, in the neighborhood of) a cell of a tissue is a function of the size of the cell, and not only of the possible death of an adjacent cell. Similarly, the rules used for the dynamic pie diagram in Fig. 4 cannot trivially be extended to a general dynamic tessellation.
- Acknowledgment: Research was supported by grant MURST-PRIN 2007ZMZK5T.
- D. Bauso, L. Giarré, R. Pesenti, “Dissensus, Death and Division”, Proc. American Control Conference (ACC 2009), St. Louis, Missouri, USA, June 10-12, pp. 2307-2312, 2009.
- D. Bauso, L. Giarré, R. Pesenti, “Quantized Dissensus in Switching Networks with Nodes Death and Duplication”, Proc. of the 1st IFAC Workshop on Estimation and Control of Networked Systems, Venice, Italy, Sept. 24-26, pp. 1-6, 2009.
- S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. “Randomized gossip algorithms”, IEEE T. Inform. Theory. vol. 52, no. 6, pp. 2508-2530, 2006.
- F. Chung, L. Lu, T.G. Dewey, D.J. Galas, “Duplication models for biological networks”, J. Comput. Biol., vol. 10, no. 5, pp. 677-687, 2003.
- P. Frasca, R. Carli, F. Fagnani, S. Zampieri, “Average consensus by gossip algorithms with quantized communication”, Proc. 47th IEEE Conference on Decision and Control, Cancun, MEX, pp. 4831-4836, 2008.
- J. P. Hespanha, D. Liberzon, A. R. Teel, “Lyapunov conditions for input-to-state stability of impulsive systems”, Automatica, vol. 44, no. 11, pp. 2735-2744, 2008.
- S. Hill, F. Provost, and C. Volinsky, “Network-Based Marketing: Identifying Likely Adopters via Consumer Networks”, Stat. Sci., vol. 21, no. 2, pp. 256-276, 2006.
- A. Kashyap, T. Basar, R. Srikant, “Quantized consensus”, Automatica, vol. 43, no. 7, pp. 1192-1203, 2007.
- G. Karev, Y. Wolf, A. Rzhetsky, F. Berezovskaya, E. Koonin, “Birth and death of protein domains: a simple model of evolution explains power law behavior”, BMC Evol. Biol., vol. 2, no. 1, pp. 1-8, 2002.
- D. Liberzon, Switching in Systems and Control. Systems and Control: Foundations and Applications. Birkhauser, Boston-MA, 2003.
- A. Nedić, A. Olshevsky, A. Ozdaglar, J.N. Tsitsiklis, “On distributed averaging algorithms and quantization effects”, Proc. 47th IEEE Conference on Decision and Control, Cancun, MEX, pp. 4825-4830, 2008.
- R. Olfati-Saber, J. Fax, R. Murray. “Consensus and cooperation in networked multi-agent systems”, Proc. IEEE, vol. 95, no.1, pp. 215-233, 2007.
- W. Ren, R. Beard, E.M. Atkins, “Information consensus in multivehicle cooperative control: Collective group behavior through local interaction”, IEEE Cont. Syst. Mag., vol. 27, no. 2, pp. 71-82, 2007.
- C. van den Bulte, G. L. Lilien, “Medical Innovation Revisited: Social Contagion versus Marketing Effort”, Am. J. Sociol., vol. 106, no. 5, pp. 1409-1435, 2001.