Strong Attractors in Stochastic Adaptive Networks: Emergence and Characterization
We propose a family of models to study the evolution of ties in a network of interacting agents by reinforcement and penalization of their connections according to certain local laws of interaction. This family of stochastic dynamical systems, defined on the edges of a graph, exhibits strong convergence properties; in particular, we prove a strong-stability result: a subset of binary matrices or graphs – characterized by certain compatibility properties – is a global almost sure attractor of the family of stochastic dynamical systems. To illustrate finer properties of the corresponding strong attractor, we present simulation results that capture, e.g., the conspicuous phenomenon of the emergence and downfall of leaders in social networks.
We propose a family of models to study the evolution and long-term formation of networks of interacting agents whose ties vary over time as a result of their interaction. The connection between two agents is assumed to be reinforced or penalized due to their interaction (or lack of it). Namely, if an agent i attempts cooperation with an agent j – or simply, i calls j – and j is cooperative – or simply, j responds to i – then the tie from i to j is reinforced; otherwise, it is penalized. If the connection is not excited, i.e., i does not call j, then the tie fades away. In Section II, we detail our model.
To illustrate, we observe that videos on media platforms such as YouTube, or products at Amazon, are networked by the recommended-list display: whenever one browses a video or purchases a product, one is shown a list of recommended videos or products, as depicted in Fig. 1. It is natural to expect that once a viewer selects one of the recommended videos, the link from that video to the original one is somehow reinforced; otherwise, it is penalized. In other words, the selected video is more likely to show up in the recommended list in future views of the original video. The same happens with products at Amazon and the like. (While our focus is not on the exact inner working mechanism of Amazon, YouTube, or any specific social media network, we assume that, in broad terms, this should be the prevailing dynamical law for the evolution of ties of any unbiased – or not so biased – algorithm running underneath such platforms.) The study of such systems sheds light, e.g., on the often elusive and universally observed viral behavior. An empirical study of this viral behavior on YouTube can be found in [viralyoutube].
Other examples of networked systems whose ties change over time by reinforcement, penalization, and fading are:
Social Networks. Individuals exert influence on other individuals for various reasons; one possible cause is reinforcement: if individual i calls individual j, and j responds, then the tie from i to j is reinforced. Call and response here may abstract broader interactions, e.g., individual i reads a book by author j (call) and is satisfied with the experience (response) – in this case, the reinforcement of the tie means that the reader is more likely to read future books by that author, or prone to be influenced by the author's opinions. Moreover, a gossip effect may take place: i can now recommend j to other individuals, who may in turn reinforce or undermine their own connection to j. Other aspects of reinforcement of ties in social networks, such as homophily, are considered in [krishnan]. One important aspect to address in these systems is the observed emergence of a few opinion leaders/makers versus a crowd of non-influential followers. Reference [yingdalu] addresses the emergence of opinion leaders.
Emergent Multi-Organizational Network (EMON). References [ritahurricane], [anisya], [katrina] emphasize the fact that natural disasters challenge centralized emergency-management systems, and a cooperative network of many different organizations is crucial to handle the critical demands during a severe crisis. Such an emergent multi-organizational network (EMON) evolves over time, and it may lead to the emergence of leaders, i.e., organizations with a high degree of connectivity, betweenness, or closeness centrality (as specially leveraged in [katrina] for Hurricane Katrina). These organizations play an important role in the overall distributed decision making of the network, and the dynamical laws for the evolution of such collaborative networks clearly emerge from a reinforcement-penalization collaborative principle.
In this paper, we develop a simple family of stochastic dynamical models on the edges of a graph combining reinforcement and penalization rules, and we show that this family exhibits strong convergence properties in the limit as time goes to infinity: the underlying evolving network of tie strengths converges almost surely to a subset of binary matrices (or unweighted directed graphs). In other words, a collaborative network emerges almost surely from the various local interactions among agents. To prove our results, we develop a stochastic LaSalle-like principle: i) we show that certain binary matrices (or directed graphs) are local strong attractors of the dynamical system; and ii) we show that the system tends to be attracted to a neighborhood of these graphs.
Time-varying networks model complex networked systems whose underlying topology evolves over time. Several models for time-varying networks in the literature are built mostly to capture robust self-organizing behavior – universally observed in nature – as in the adaptive coevolutionary networks [adaptivenet, adaptive2, adaptive3, adaptive4]; to estimate the underlying network evolution [kolar, Xing]; or, in the case of infinite graphs, upon certain tractable regularity assumptions induced by invariant properties – such as exchangeability and the càdlàg property – as in [Crane, Crane2]. Our family of models may be framed in the context of adaptive networks, due to the qualitative property that the strength of ties may affect the state of nodes and vice versa. The field of adaptive networks, in particular, is mostly concerned with the study of the emergence of robust self-organizing behavior in complex systems. We illustrate in Section V-B, via numerical simulations, that the emergent networks of collaborative ties from our family of models exhibit an important robust self-organizing configuration: the emergence of new latent leaders and the downfall of veteran ones. Indeed, the emergence and downfall of leaders leads to real-time ‘division of labour’ (to quote the term from [adaptivenet]) in EMON systems – in which nodes of high degree, betweenness, or centrality may play major roles in overall distributed decision-making – as well as to new viral videos on YouTube or popular books at Amazon. The referred phenomenon is an important factor influencing financial markets and the economy as well.
To conclude, in this paper we develop a family of stochastic adaptive network models that: i) exhibits the property of almost sure convergence to a graph in the long run (to the best of our knowledge, this property is either absent or has not been analytically established in other models for adaptive networks); ii) captures some of the important features of reinforcement and penalization of ties associated with the dynamics of social networks; iii) exhibits a robust self-organizing behavior associated with the emergence and downfall of leaders; and iv) is broad enough to find applications in various scenarios (other than the ones mentioned in this section), such as synaptic plasticity – via the Hebbian plasticity principle [hebb] – and the emergence of pheromone trail networks in ant colonies (e.g., [debora1], [antevolution], [Dorigo]).
Outline of the paper. Section II introduces the family of models. Section III presents definitions and notation, and discusses our approach to the problem, namely, it provides a sketch of the proof of the main results. Section IV analytically establishes that the dynamical system exhibits a strong attractor, a proper subset of the binary matrices. Section V provides simulation results that illustrate a finer characterization of the resulting attractors and interesting long-term properties of the system, such as the emergence and downfall of latent leaders in social networks. Section VI concludes the paper.
II Problem Formulation
In this section, we propose a model for the evolution of ties among a set of interacting agents. The interactions are performed in two instances, calls and responses, and, as a result, the tie between two nodes is reinforced or penalized. More precisely, the dynamics evolve as follows: at each time step t, if node i calls node j and j does not respond, then the tie from i to j is penalized (meaning that i is less likely to call j at time t+1); otherwise, the tie is reinforced. In other words, if node i calls node j and there is (respectively, there is no) response, then i is more (respectively, less) likely to call j in the next iteration. We also assume fading due to idleness in the connection: if i does not call j, then the tie from i to j decays. This Reinforce-Penalize-Fade (RPF) rule is the defining building block of our family of dynamical systems of interacting networked agents. Fig. 2 illustrates the dynamical model.
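Before the formal model, the verbal RPF rule can be sketched in code for a single tie strength. The affine maps reinforce, penalize, and fade below, and their rates, are illustrative assumptions chosen only to satisfy the monotonicity requirements stated later in this section; they are not the paper's specific laws.

```python
def reinforce(w, rate=0.3):
    # Called and answered: move the tie strength toward 1.
    return w + rate * (1.0 - w)

def penalize(w, rate=0.5):
    # Called but not answered: shrink the tie strength toward 0.
    return (1.0 - rate) * w

def fade(w, rate=0.1):
    # No call placed: the idle tie slowly decays toward 0.
    return (1.0 - rate) * w

def rpf_step(w, called, answered):
    """One Reinforce-Penalize-Fade update of a single tie strength w in [0, 1]."""
    if not called:
        return fade(w)
    return reinforce(w) if answered else penalize(w)
```

With these illustrative rates, an answered call raises the tie, an unanswered call cuts it sharply, and idleness erodes it slowly.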
We model these dynamics formally as follows:
where f and g represent the reinforcement and penalization rules at node j, respectively; h is the fading law, triggered when no call from i to j is performed; the state variable A_ij(t) is the indicator of a call from node i to node j at time t, i.e., A_ij(t) = 1 if i calls j at time t, or zero otherwise; and R_ij(t) = 1 if j responds to i at time t. Further comments and assumptions follow:
(Reinforcement) f(w) > w for all w in (0, 1), with f: [0, 1] → [0, 1] an increasing (to be precise, the term increasing refers to strictly increasing in this paper) function and f(1) = 1;
(Penalization or Fading) g(w) < w for all w in (0, 1), with g: [0, 1] → [0, 1] an increasing function and g(0) = 0, and the same goes for the fading law h;
(Calls) A(t) is the random process associated with calls over time, such that the probability that i calls j at time t, conditioned on the current tie strengths, equals the current tie strength from i to j; i.e., the call process is memoryless and, in a sense, A_ij(t) is a Bernoulli random variable at each time step conditioned on the tie strengths, the tie strength from i to j being, lato sensu, the probability that a call from i to j will be performed at time t. As will be clear in Section IV, we will also consider (fading) perturbations on the conditional distribution of A(t);
(Responses) We assume that nodes have limited capacity and tend to utilize all of their bandwidth for response. Specifically: i) (limited capacity) each node j can only respond to at most c_j calls at each time t; ii) if the number of incoming calls to j is below its capacity c_j, then j responds to all callers. More compactly,
We also assume memorylessness in the response policy. Let be the -th row of and be the -th column of . Then,
for any binary vector, where the sum in (4) is over all binary vectors dominated componentwise by the call vector, i.e., whose support is contained in the support of the call vector. This way of decomposing the conditional probability expression will enable us to integrate the constraint in (2) readily. In other words, the conditional (on the calls) distribution of the responses is uniquely characterized by the choice of a particular function. To sum up, as explained below, the only requirements on the response selection process for each node are the capacity constraint in (2) and the memorylessness property. The conditional distribution of the responses is indexed by the class of functions
with the following restrictions:
i) and ;
ii) if and then,
iii) if , then
Note on the response policy: A specific response policy for a node is fully characterized by one specific choice function in the class of functions described above. The convergence results presented in this paper hold for any such choice. In particular, if a node receives more calls than its capacity, the specific mechanism or (possibly random) algorithm used to select the callers to respond to is not relevant for the validity of the theorems presented.
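One admissible response policy in this class, sketched below, answers every caller when within capacity and otherwise picks a uniformly random subset. The function name and signature are assumptions made for illustration; any other (possibly biased) tie-breaking rule would do equally well for the convergence results.

```python
import random

def respond(callers, capacity, rng=random):
    """Capacity-limited response policy: answer all callers when their
    number is within capacity; otherwise answer a subset of size
    `capacity`, chosen here uniformly at random (one admissible choice)."""
    if len(callers) <= capacity:
        return set(callers)
    return set(rng.sample(list(callers), capacity))
```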
Equation (1) may seem to lead to a decoupled dynamical system. In fact, the system is coupled due to the limited capacity of response of each node. For instance, whether a node j reinforces – by responding to a call – the connection from a neighbor i depends on the other callers to j, as only up to c_j connections can be reinforced. One may, at first, still argue that this amounts to only a local coupling, i.e., the only coupling is among the edges pointing to j; that is, one could study the dynamics (1) by looking independently at the evolution of each column of the tie-strength matrix. Even though this holds when each node selects unbiasedly (uniformly at random) the callers to respond to, it is not true in general. To see this, note that whether the tie from i to j is reinforced or not affects the out-flow degree of node i. If, in turn, the response law is out-flow-degree biased, then this will affect the evolution of the other out-flow probabilities of i. But each of these is, by the local argument just mentioned, also coupled with the other weights pointing to its own callee. In other words, (1) is, in general, a coupled dynamical system whose qualitative behavior may not be studied by partitioning the set of state variables and tracking each partite set separately. In summary, under unlimited capacity, the system is uncoupled; limited capacity under an unbiased response leads to a column-decoupled system; but, in general, the system is coupled.
From the memoryless assumptions on the call and response state variables, and the dynamics (1), we note that the tie-strength process is a Markov process. This section constructs a family of stochastic dynamical systems on this Markov process, broadly determined by: i) the Reinforce-Penalize-Fade (RPF) rules; and ii) the class of response functions just described. Our goals for the rest of the paper are to establish two convergence results, in Section IV, associated with the long-term behavior of the family of dynamical systems (1); and to explore, via numerical simulations in Section V, the finer aspects of the attractors of the system that capture many features of the evolution of social networks – such as the emergence of latent leaders and the downfall of veteran ones.
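As a sanity check on the convergence claims established later, the following end-to-end sketch runs the coupled call/response dynamics with illustrative affine RPF laws (additive reinforcement at rate 0.3, multiplicative penalization by 0.5, multiplicative fading by 0.9) and uniform response selection. All of these concrete choices are assumptions, not the paper's exact laws. Empirically, the tie strengths accumulate near 0 or 1, and each node retains at most `capacity` strong incoming ties.

```python
import random

def simulate(n=5, capacity=2, steps=20000, seed=0):
    """Minimal sketch of the coupled dynamics: W[i][j] is the probability
    that i calls j; each callee answers at most `capacity` callers."""
    rng = random.Random(seed)
    W = [[0.5 if i != j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(steps):
        # Calls: independent Bernoulli(W[i][j]) draws; no self-calls.
        calls = {(i, j) for i in range(n) for j in range(n)
                 if i != j and rng.random() < W[i][j]}
        # Responses: callee j answers at most `capacity` callers, chosen
        # uniformly at random when over capacity (one admissible policy).
        answered = set()
        for j in range(n):
            callers = [i for (i, jj) in calls if jj == j]
            rng.shuffle(callers)
            answered.update((i, j) for i in callers[:capacity])
        # Reinforce-Penalize-Fade update of every ordered pair.
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                w = W[i][j]
                if (i, j) not in calls:
                    W[i][j] = 0.9 * w             # fade (idle tie)
                elif (i, j) in answered:
                    W[i][j] = w + 0.3 * (1 - w)   # reinforce
                else:
                    W[i][j] = 0.5 * w             # penalize
    return W

n, cap = 5, 2
W = simulate(n=n, capacity=cap)
# Empirically, tie strengths accumulate near 0 or 1 ...
near_binary = all(w < 0.2 or w > 0.8 for row in W for w in row)
# ... and each node keeps at most `cap` strong incoming ties.
strong_in_degree = [sum(W[i][j] > 0.5 for i in range(n)) for j in range(n)]
within_capacity = all(d <= cap for d in strong_in_degree)
```

Losers in a contested column are repeatedly penalized and then fade away; uncontested ties drift to 1, which previews the binary attractor characterized in Section IV.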
In what follows, we assume that all processes and random variables are defined on a rich enough probability space. Whenever we refer to the set of realizations, we take it to be the sample space of that probability space; for instance, "almost all realizations" means almost all with respect to the underlying probability measure. We will also refer to the natural filtration of the process up to time t.
III Dynamical Systems Approach
The stochastic dynamical system (1) is captured compactly by the following stochastic recursive equation
We may refer to the map as a random (discrete-time) dynamical system since it defines a (discrete-time) stochastic process as
for each realization ω; in other words, under this representation, for each realization ω, the n-fold iterate of the map along ω gives the state of the system at time n with a given initial condition. The evolution of such a system is shown in Fig. 3.
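The "same realization" point can be made concrete: freeze the randomness ω up front, and the time-n state is literally the n-fold composition of a deterministic map along ω. The toy one-dimensional map below is an illustrative assumption, not the system (1) itself.

```python
import random

def make_realization(seed, horizon):
    """Freeze one realization ω: the sequence of uniform draws that will
    drive every step of the dynamics."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(horizon)]

def step(x, u):
    # Toy random map: reinforce toward 1 when the draw u falls below x,
    # otherwise fade toward 0.
    return x + 0.1 * (1.0 - x) if u < x else 0.95 * x

def iterate(x0, omega, n):
    """n-fold iterate of the map along the fixed realization ω."""
    x = x0
    for u in omega[:n]:
        x = step(x, u)
    return x
```

Along a fixed ω the evolution is fully deterministic, and the iterates compose: running 40 more steps from the time-10 state reproduces the time-50 state.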
We study the evolution of ties under this dynamical system, and we are concerned with its long-term behavior. This leads to the notions of strong-fixed point and strong-attractor that we introduce next.
Definition 1 (Strong-fixed point).
Let be a discrete-time random dynamical system. We call a strong-fixed point of , whenever
for almost all .
be the set of strong-fixed points of the stochastic dynamical system . From now on, we drop the subindex to write instead of , as the underlying random vector field is assumed to be the one in (1).
A goal in this paper is to prove that, under certain conditions on the update rules , , , and the conditional distributions of and , a finite proper subset of the set of binary matrices
is a strong-attractor to the dynamical system (1), i.e.,
where the distance is the Euclidean metric. Refer to Theorem 8 (or Corollary 9), or its more general version (with sparse-in-time perturbations) in Theorem 10. In other words, given a model in the family (1), a deterministic network or graph emerges in the long run almost surely. Technically speaking, as will be illustrated later, standard convergence arguments for Markov processes are not directly applicable to the class of dynamical systems given by (1), and we develop new techniques that might be of independent interest. For instance, we observe that standard absorbing-like arguments for Markov processes (e.g., irreducibility plus ergodicity) do not apply to our setup. In fact, if the process departs from the interior of the state space, and the reinforcement and penalization laws are soft, i.e., map the open interval into itself (e.g., (9)), then the process never coalesces with the extreme points in finite time; rather, it accumulates onto them. We prove such convergence results by showing the following: i) any strong-fixed point is a local attractor with positive probability (bounded away from zero in a small neighborhood of it) (Theorem 4); ii) any arbitrarily small cover of the set of strong-fixed points is recurrent (Theorem 6). These two assertions will imply the strong convergence via Theorem 7.
Remark: As discussed in the third-to-last paragraph of Section II, if the capacity of response of all nodes is unlimited, then the dynamical system is decoupled. In this case, the convergence referred to above follows as a corollary of Doob's (sub-)martingale convergence theorem [williamsprobability, Diffusion], as each coordinate evolves independently as a sub-martingale or a super-martingale – depending on whether the reinforcement rule is stronger than the fading or vice versa. When limited capacity is assumed, the coordinates are coupled and the process loses the martingale structure; Doob's convergence theorem does not apply to the resulting coupled stochastic dynamical system.
IV 0-1 Convergence Law
where is the identity map . For instance, for
we have and .
(RPF laws) the reinforcement, penalization, and fading laws satisfy the monotonicity assumptions of Section II for all nodes (note that we do not assume that the n-fold iterates of the penalization rules are Cesàro summable);
(Responses) We assume that nodes have limited capacity and tend to utilize all of their bandwidth for response. Specifically: i) each node j can only respond to at most c_j calls at each time t; ii) if the number of incoming calls to j is below its capacity c_j, then j responds to all callers. Mathematically,
where c is the vector of capacities. We also assume that the response process is memoryless. The inequality in (10) is taken entry-wise. Equivalently, in terms of the conditional distribution of the responses, and as discussed earlier (see (3)-(7)), a particular response policy is characterized by a specific choice in the admissible class of functions. As noted before, the criterion (biased or not) used to select which set of nodes to respond to, in case a node receives more calls than its capacity, is not relevant for our convergence theorems; thus, the analytical results presented hold for a broad class of models obeying the stochastic dynamical system (1);
(Calls) in Subsection IV-A, we assume the nominal conditional distribution of calls from Section II, and in Subsection IV-B we consider (fading) perturbations on the conditional distribution of the calls, as described later in (26). In both cases, we exclude self-calls, that is, no node calls itself.
Under the above assumptions, it can be verified that the subset of binary matrices
is the set of strong-fixed points of the stochastic dynamical system (1), where c_j denotes the capacity of response of node j. In this section, we show that this set is in fact a global strong-attractor of (1), i.e., the process converges almost surely to a matrix in it from an arbitrary initial condition.
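Concretely, and assuming the compatibility properties are the no-self-loop and capacity conditions suggested by the fixed-point discussion above (an assumption of this sketch, not the paper's formal definition), membership in the candidate set can be checked as follows.

```python
def is_strong_fixed_point(B, capacity):
    """Check the (presumed) compatibility conditions: B is a binary
    matrix with no self-loops in which each node j receives at most
    capacity[j] incoming edges, so that every call can always be
    answered and every entry of B is reproduced by the dynamics."""
    n = len(B)
    for i in range(n):
        if B[i][i] != 0:
            return False  # self-calls are excluded
        for j in range(n):
            if B[i][j] not in (0, 1):
                return False  # not a binary matrix
    # Column sums: in-degree of node j must not exceed its capacity.
    return all(sum(B[i][j] for i in range(n)) <= capacity[j] for j in range(n))
```

For instance, the 2-cycle on two nodes is compatible with unit capacities, whereas a node of unit capacity cannot retain two incoming edges.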
IV-A Main Convergence Result
The following lemma and corollary are crucial to what follows.
Let for all . Then,
The sequence of partial products necessarily converges, as it is monotonic. Also, if the terms stay bounded away from zero for infinitely many indices, then the product converges to zero. Therefore, a necessary condition for the product to have a strictly positive limit is that the terms converge to zero. Lemma 2 provides a sufficient condition: the terms shall converge to zero fast enough.
The result is adapted from [williamsprobability], but since it is left as an exercise in that reference, we provide our own proof. Define . To start with, assume that . Expand the product to observe that
Now, assume that , then
and the result follows. ∎
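Assuming, as in the Williams reference, that Lemma 2 concerns infinite products of the form prod(1 - a_t) (an assumption of this sketch), the dichotomy can be checked numerically: summable terms leave the product bounded away from zero, whereas terms that vanish too slowly drive it to zero.

```python
def partial_product(a, n):
    """Partial product of (1 - a(t)) for t = 2, ..., n."""
    p = 1.0
    for t in range(2, n + 1):
        p *= 1.0 - a(t)
    return p

# Summable terms: prod (1 - 1/t^2) telescopes to (n + 1) / (2n) -> 1/2 > 0.
p_summable = partial_product(lambda t: 1.0 / t ** 2, 100000)
# Non-summable terms: prod (1 - 1/t) telescopes to 1/n -> 0, even though
# the individual terms 1/t themselves converge to zero.
p_slow = partial_product(lambda t: 1.0 / t, 100000)
```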
Let be an increasing function. Assume that
for some . Then, there exists , such that
i.e., does not depend on the choice of (though it may depend on ).
Corollary 3 provides a uniform boundedness property for the infinite product over some subinterval, which will be relevant later.
In what follows, it is relevant to recall that, to each binary matrix , there exists an underlying support graph whose set of edges is given by
Now, define the family of events (indexed by )
i.e., the realizations where, at time t, for every edge (i, j) of the support graph, i calls j and j responds to i, and no call is placed along the remaining pairs. This set has positive probability if and only if the tie strengths are strictly positive on the edges of the support graph, strictly less than one on the remaining pairs, and the in-degrees of the support graph respect the response capacities. Recall that a binary matrix and its support graph are in one-to-one correspondence, as given by (13).
as the closed ball centered at the probability matrix and of radius restricted to the set .
The next theorem states that any (recall (11)) is a local attractor with strictly positive probability in a small enough neighborhood of , for the stochastic dynamical system .
There exists such that
for some small enough and for all .
To establish the main result of this subsection, Theorem 8, or its dual formulation, Corollary 9, it will be crucial to show that the hitting time of a suitable neighborhood is integrable – hence, almost surely finite – i.e., that the neighborhood is recurrent. For this, the next theorem will be useful.
Theorem 5 (Lemma in Section in [williamsprobability]).
Let be a hitting time with respect to a filtration . If for some and some we have
This theorem is adapted from [williamsprobability]; since it is left there as an exercise, we prove it here for completeness. We will prove that, as a consequence of assumption (17),
for all . Note that this will conclude the proof of this theorem as
and thus, . We prove (18) by induction on . For , the assertion is clear. Now, assume it is true for an integer . Then,
where the first inequality follows from (17). This concludes the proof. ∎
The next theorem, Theorem 6, states that any arbitrarily small cover (with non-empty interior) of the set of strong-fixed points is a recurrent set.
We have that
for all .
Let , and define the set
where is the -th column of the probability matrix , and is the set of permutation matrices. Due to the limited capacity of response of each node , we observe that for all . Let and observe that
Consider its disjointification
as the number of iterates to go from to by n-fold iterates of . Also, define
to be the number of iterates to go from to by iterates of . Choose
be the hitting time to the set by the process . For we have
where the first inequality follows from the choice made above, and the second inequality follows similarly as done in (16). Due to the monotonicity of the laws involved, all the terms in the above finite product are positive, and, thus, the bound is positive and does not depend on the initial point. The theorem now follows from Theorem 5. ∎
Let be a time-homogeneous Markov process on a set . Let . Then,
In other words, if is recurrent and invariant with positive probability, then the tail of lies in .
Define the sequence of stopping times , where is the time of first return to the set ; is the time of the -th return to the set . Note that . Define as the random variable associated with the total number of exits from the set over all time . Note that if exits , it will return in finite time as is recurrent. In particular, we have the equality of the events
for all . We show by induction on that
for all . Indeed, for
where the fifth equality comes from the strong Markov property, and the last inequality results from the induction hypothesis. Thus,
Define as the stopping time associated with the first time the Markov process is in – note that, in general, is different from as the process can start in , and in this case , but . Since is recurrent, we have that almost surely and
where the last inequality follows from (23), the homogeneity of , and the strong Markov property. Hence, we conclude that
We are now prepared to obtain Theorem 8.
Let the process be a solution to the stochastic dynamical system (1) satisfying the assumptions presented at the beginning of this section. Then,
Note that the state process is a stochastic process whose limiting behavior is given by a random variable (with support on the set of strong-fixed points). The initial condition is also a random variable, and, therefore, the proper setting in which to characterize the dynamics in terms of the attractors and the corresponding basins of attraction is the dual space of probability measures, as established in Corollary 9.
Let be a probability measure on associated with the distribution of and let . Then, under the same assumptions of Theorem 8, there exists such that
where the convergence is with respect to the weak-* topology on the (compact) space of probability measures (refer to [billi]); and the operator stands for the convex hull of a set (refer to [convex]).
Consider the partition where
Now, let be any continuous bounded function and note that
where the limiting measure is the pushforward of the initial distribution by the corresponding limit map, defined in the usual way. Since the above holds for all bounded and continuous test functions, the desired convergence in distribution follows. ∎
Corollary 9 asserts the existence of a map
from the set of probability measures on to the set of probability measures with support on , that maps any initial distribution