Hierarchical Economic Agents and their Interactions

Abstract

We present a new type of spin market model, populated by hierarchical agents, represented as configurations of sites and arcs in an evolving network. We describe two analytic techniques for investigating the asymptotic behavior of this model: one based on the spectral theory of Markov chains and another exploiting contingent submartingales to construct a deterministic cellular automaton that approximates the stochastic dynamics. Our study of this system documents a phase transition between a sub-critical and a super-critical regime based on the values of a coupling constant that modulates the tradeoff between local majority and global minority forces. In conclusion, we offer a speculative socioeconomic interpretation of the resulting distributional properties of the system.

1 Introduction

The primary goal of this paper is to describe the potential of a new economic modelling environment, populated by multi-layered agents, hierarchical objects that probe the boundary between individual and group, institution and society. Bypassing questions of aggregation, the proposed paradigm seeks coordination through hierarchical, heterogeneous agents, influencing one another through their opinions and actions. Importantly, the agents’ limited rationality permits pockets of inconsistent allegiances to percolate through their interaction network.

The proposed modeling environment extends work over the past decade on agent-based models of the economy [21,20,8,17]. Progressively, such models have shown how heterogeneities in the agents’ endowments, preferences and interactions can persist and lead to observable deviations from the efficient market hypothesis, a collection of so-called stylized facts [16,13,18,7,26,2]. The extension proposed here invites us to broaden our notion of heterogeneity to encompass attributes that aren’t reducible to individuals, but instead arise at different levels of aggregation. Rather than seeking to extract these attributes from properties of the individuals, we posit them as part of the evolving state of ‘meta-individualist’, multi-layer agents that populate the economy [19,31].

To illustrate this broader modelling paradigm, we proceed to extend a specific agent-based model of the economy, first proposed by Bornholdt and subsequently studied both numerically and analytically by different authors [22,32]. This model is based on an interaction potential that trades off two components, the desire to belong to a local majority and simultaneously to the global minority. These two terms are balanced by a coupling constant. The study of the statistical mechanics of this model led to the identification of an explicit phase transition [32], controlled by this coupling constant, whereby sufficiently strong coupling leads to non-self-averaging behavior [3,4] and persistent opinion mixing. Furthermore, this framework allowed us to identify a fundamental, irreducible limit to the observability of various measures of excess demand [32].

In the original Bornholdt model, states of the economy were represented by spin configurations of a fixed lattice, or network more generally. Configurations of this kind can be denoted by vectors of +1s and −1s. These vectors are then propagated following a Markov process, driven by an interaction potential. The resulting stochastic evolution seeks minimum energy states, which represent equilibria. This minimization is controlled by a ‘temperature’ parameter, in analogy to the simulated annealing process of non-equilibrium statistical mechanics. At high temperatures, state transitions are largely random. As the temperature is lowered, transitions that locally reduce the interaction potential are progressively favored. In the frozen phase, the system picks out some equilibrium state and is subsequently trapped there.

This framework is often interpreted as describing the evolution of individual agents, represented by the different sites on the lattice or network, with the spins at each site denoting the evolving opinions or actions (buys vs. sells) of the agent on that site. However, such an interpretation, which attempts to reduce the resulting market dynamics to the interactions of individual agents, has come repeatedly under fire, from different perspectives [23,19]. Most problematic, from the point of view of the current paper, is the inability of this framework to reproduce any of the myriad intermediate structures, from coalitions to firms, that populate the real economic landscape. More precisely, present instantiations of this paradigm reserve heterogeneity for individual agents, relegating any higher structures to the realm of transient epiphenomena.

Here, we propose to extend the standard spin market framework in an explicit effort to bring out the irreducible relevance of structures in the economy. We choose to see these intermediate structures as endowed properties of the economic state, largely indecomposable to their constituents, albeit spontaneously evolving, in interaction among themselves and their constituents. To help visualize our proposed scheme, we offer the following abstraction: agents are analogous to simplicial complexes in topology, consisting of locally matching components of different dimensionality and degrees of complexity [27]. Such an object is generally indecomposable to a listing of its constituents. Instead, it depends crucially on details of the ‘gluing map’ that put it together. Extending the analogy further, we posit a generalized interaction potential, which allows such hierarchical objects to ‘act’ on one-another, without this action being describable as the interaction between individual components, e.g. an edge interacting with an edge or a tetrahedron interacting with a triangle.

The goal of this work is to introduce this new modeling paradigm, in which the agents and the network on which their opinions evolve are indissolubly coupled. Unlike earlier agent-based studies of economic interactions, we don’t attempt to generate a price process that can be calibrated against empirical statistics. The translation of agents’ opinions to observable aggregates depends sensitively on market microstructure, from details of the double-auction to explicit market making [9,31,1,6]. Instead, the current work focuses on the rich array of stochastic convergence effects that arise in models of heterogeneous economic agents, particularly when we endogenize the evolution of the interaction network [15,11]. Specifically, while much emphasis has been placed on conditions for guaranteeing ergodicity [5,28], the focus here is on the effective lack of ergodicity under certain parametric regimes for our model, and its economic consequences, both theoretical and empirical. Along the way, we introduce techniques from symbolic dynamics and the spectral analysis of Markov chains to enrich the economic toolkit.

We begin with an introduction to our hierarchical agent model, including a description of the interaction potential that couples the spin configurations on the nodes and arcs of our evolving network. This interaction potential drives a Markov process whose hypergeometric state transitions are described in Section 3, along with some sample paths that hint at the non-ergodic behavior we are after. The following section introduces the contingent submartingale representation, a technique that allows us to extract a deterministic skeleton underlying our stochastic dynamics. We then proceed to investigate the invariant measure of the Markov process and its sensitivity to the model’s parameters. The discrepancies between the limiting distribution and the deterministic attractors we identified earlier lead us to pursue a spectral analysis of the underlying Markov chain, which uncovers and quantifies a source of persistent path dependence. The paper concludes with a set of phenomenological conjectures that govern the paths of our hierarchical agent model, as well as a putative socioeconomic interpretation [10] for the three distinct dynamic regimes that the model exhibits. All along, we relegate the more technically demanding details of our exposition to an Appendix.

2 Model Description

More concretely, we proceed to describe in detail an extension of the earlier Bornholdt spin market model, where the states of the economy are represented by an object with two components: binary configurations on sites and arcs. To begin with, there is a set of sites and a related set of arcs. The first component of the state of the economy is a spin configuration of the sites, i.e. a vector of +1s and −1s, while the second component is a spin configuration over the set of arcs. Thus, the state can be described as

as shown in the examples illustrated in Figure 1. Let and denote the number of positive sites and arcs respectively. For notational convenience, we extend the arc configuration to all ordered pairs by imposing symmetry, i.e. for all .
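As a concrete, purely illustrative sketch of this two-component state, the following Python fragment stores the site spins as a vector and the arc spins as a symmetric matrix, mirroring the symmetry convention just described. All names here are ours, not part of the model’s formal notation:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(n, rng):
    """Draw a random market state: +/-1 spins on n sites and on the
    n*(n-1)/2 undirected arcs, stored as a symmetric matrix."""
    sigma = rng.choice([-1, 1], size=n)        # site spins
    theta = rng.choice([-1, 1], size=(n, n))   # arc spins
    theta = np.triu(theta, 1)                  # keep one spin per unordered pair
    theta = theta + theta.T                    # impose theta[i, j] == theta[j, i]
    return sigma, theta

sigma, theta = random_state(5, rng)
n_plus_sites = np.sum(sigma == 1)                          # count of positive sites
n_plus_arcs = np.sum(theta[np.triu_indices(5, 1)] == 1)    # count of positive arcs
```

The symmetric storage makes the extension to ordered pairs automatic: reading the arc spin in either order returns the same value.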

Figure 1: Two different configurations of sites and arcs with .

We construct a continuous time Markov process with transitions occurring at exponentially distributed epochs, , with rate [14]. At time (i.e. the epoch) a random member of is chosen uniformly and its spin is changed to or depending on interactions between the two components of the current market configuration. These interactions rely on a tradeoff between the desire to align with the majority within a local neighborhood and a need to react to the opportunities created by global imbalance. The neighborhood structure of sites is based on the current configuration of the arcs, and vice versa, exploiting the duality between sites and arcs. In fact, as we will discuss in more detail later, this is one of the special features of our two-tiered agent that we explicitly exploit, and which becomes substantially more convoluted in higher-order extensions of our framework.

More specifically, let denote a mapping that assigns to every site and every epoch a subset of given by

Similarly, let denote a mapping that assigns to every arc and every epoch a subset of given by , where

The treatment of the model presented in this paper is based on applying a version of ‘rapid stirring’ [12] by randomizing the neighborhood structure generated by and . Appendix 1 provides more details about how these random neighborhoods are drawn. Once the neighborhoods have been assigned for each site and arc at a point in time, each member of each neighborhood is mapped to or using further hypergeometric random variables, independent of the earlier ones and of each other, with as many draws as there are members of the neighborhood under consideration. These draws are without replacement, out of a population which depends on whether the chosen element is a site or an arc, and with a number of successes depending on the sign of the base site or arc. In particular, the interaction potential for site and arc is given by
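Since the neighborhood draws are without replacement, the number of positive members assigned to a neighborhood of a given size is a hypergeometric random variable. A minimal sketch of this sampling step follows, with hypothetical parameter names and values; the actual population and success counts depend on whether the base element is a site or an arc:

```python
import numpy as np

rng = np.random.default_rng(1)

def neighborhood_signs(k, n_positive, population, rng):
    """Assign +/-1 to the k members of a random neighborhood by sampling
    without replacement from a population containing n_positive successes:
    the number of +1s among the k draws is hypergeometric."""
    n_plus = rng.hypergeometric(ngood=n_positive,
                                nbad=population - n_positive,
                                nsample=k)
    return n_plus, k - n_plus   # (+1 count, -1 count) in the neighborhood

plus, minus = neighborhood_signs(k=4, n_positive=7, population=10, rng=rng)
```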

(1)

where

measures the global imbalance in sites and arcs.

The dynamics of the state proceed as follows. We set a temperature parameter, which controls the amount of randomization that interferes with the minimization of the interaction potentials described in (1) above. As usual, we work with the inverse temperature, which we eventually let increase towards infinity. At every point in time, a site or arc is chosen uniformly at random and a coin is flipped. The chosen site or arc is assigned +1 if the coin comes up HEADS and −1 otherwise. If site is chosen, then the probability that the coin comes up HEADS is equal to

On the other hand, if arc is chosen, then the probability that the coin comes up HEADS is equal to

Note that, naturally, arcs are chosen more often than sites, because there are quadratically more arcs than sites. In the long run, this imbalance in refresh rates ensures that the far more numerous arcs have the same opportunity as the significantly fewer sites to settle down to their invariant marginal distribution.
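The HEADS probabilities above are not reproduced here, but a standard heat-bath rule of the Bornholdt type, which we assume purely for illustration, maps a local field h derived from the interaction potential to P(HEADS) = 1/(1 + exp(−2βh)). A sketch of one such spin refresh:

```python
import numpy as np

def heat_bath_update(h, beta, rng):
    """One spin refresh: flip a coin with P(HEADS) = 1 / (1 + exp(-2*beta*h)),
    where h is a local field derived from the interaction potential
    (an assumed Bornholdt-type rule, for illustration only).
    Returns +1 on HEADS, -1 otherwise.  As beta -> infinity this
    approaches the deterministic rule sign(h)."""
    p_heads = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    return 1 if rng.random() < p_heads else -1

rng = np.random.default_rng(2)
spin = heat_bath_update(h=0.8, beta=50.0, rng=rng)   # large beta: almost surely +1
```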

3 Transition Probabilities

In order to proceed with our analysis, we will compute the probabilities that, at any point in time, the site or arc that is chosen will not change its sign. We will restrict our attention to the ‘frozen phase’, i.e. we will consider the zero-temperature limit. This choice simplifies our analysis because, in this limit, all ‘thermal’ randomness, which would oppose the minimization of the interaction potential, disappears, and the only randomness that remains stems from the sampling of random neighborhoods. This persistent randomization gives rise to hypergeometric random variables.

More precisely, consider , the probability that, having chosen a site, it remains positive after the update, assuming the system is in a state with and . Let

where is the floor of , i.e. the largest integer no greater than . Then, the site transition probability is given by the following partial sum of conditionally hypergeometric random variables:

(2)

where

and and . More details about the derivation of these transition probabilities are given in Appendix 2.
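To make the structure of a partial sum like (2) concrete, the following sketch evaluates a tail sum of hypergeometric probabilities; the population, success and draw counts are placeholder values, not the model’s actual parameters:

```python
from math import comb

def hypergeom_pmf(k, M, n, N):
    """P[K = k] for a hypergeometric draw of N items without replacement
    from a population of M containing n successes."""
    return comb(n, k) * comb(M - n, N - k) / comb(M, N)

def tail_sum(threshold, M, n, N):
    """Partial sum P[K >= threshold], the kind of expression appearing
    in the site and arc transition probabilities."""
    return sum(hypergeom_pmf(k, M, n, N)
               for k in range(threshold, min(n, N) + 1))

# sanity check: the full sum over all feasible k is 1
total = tail_sum(0, M=20, n=8, N=5)
```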

Similarly, the probability that, having chosen a site, it remains negative after the update, assuming the system is in a state with and , is based on guaranteeing that and is given by

(3)

where ,

and is the ceiling of , i.e. the smallest integer no less than .

On the other hand, if we choose an arc instead of a site, the computation is somewhat different. As before, we are looking to compute the probability that, having chosen an arc, it remains positive after the update, assuming the system is in a state with and . Let

and

Then, the arc transition probability is given by the following partial sum of conditionally hypergeometric random variables:

Similarly, if the chosen arc is , the probability that it remains negative after the update is given by

where

and

The technical details of these derivations are also described in Appendix 2.

We can perform a Monte Carlo simulation based on the expressions (2) and (3) and the corresponding arc transition probabilities derived above. In particular, we can consider a two-dimensional Markov Chain on , with transition probabilities given by

where

Figure 2 shows a simulation of this Markov Chain for steps, with and , starting at and . After about steps, this simulation was trapped in the attractor .

Figure 2: Monte Carlo simulation of 2D Markov Chain with and .

This Markov Chain is a form of two-dimensional random walk in a random environment (RWRE), i.e. a random walk in which the probability of going UP/DOWN and LEFT/RIGHT depends on the current location [12]. Such stochastic processes may exhibit path dependence and lack of ergodicity, as supported by the simulation of the same Markov Chain shown in figure 3.
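A minimal RWRE sketch in the spirit of this chain, with placeholder location-dependent probabilities (the real chain uses the transition probabilities of this section and selects the arc dimension far more often than the site dimension):

```python
import numpy as np

def simulate_rwre(p_up, p_right, steps, start, shape, rng):
    """Simulate a 2D random walk whose UP/DOWN and LEFT/RIGHT probabilities
    depend on the current location (a random walk in a random environment).
    p_up and p_right map a grid point to a probability in [0, 1]."""
    i, j = start
    path = [(i, j)]
    for _ in range(steps):
        if rng.random() < 0.5:                  # move in the first dimension
            i += 1 if rng.random() < p_up(i, j) else -1
            i = min(max(i, 0), shape[0] - 1)    # reflect at the boundary
        else:                                   # move in the second dimension
            j += 1 if rng.random() < p_right(i, j) else -1
            j = min(max(j, 0), shape[1] - 1)
        path.append((i, j))
    return path

rng = np.random.default_rng(3)
path = simulate_rwre(lambda i, j: 0.4, lambda i, j: 0.6,
                     steps=1000, start=(5, 5), shape=(11, 11), rng=rng)
```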

Figure 3: Another Monte Carlo simulation of 2D Markov Chain with and , started at a different point.

This simulation also lasts for steps, and the parameters are identical to those of the earlier simulation. The only difference is that this time the simulation was started at and . Instead of becoming trapped in the attractor , this time the Markov chain appears to switch stochastically between two states, one with and another with . This phenomenology is consistent with a non-ergodic process, for which the dependence on initial conditions doesn’t disappear in the limit.

One of the most prominent characteristics of empirical price series is their long range memory [9,1]. This can indicate path-dependence, a tell-tale sign of non-ergodicity in the underlying process. In lieu of generating price dynamics, which could be compared with empirically determined statistics, we explore the serial correlation of the and paths generated by our process for signs of similar qualitative behavior. We quantify the serial correlation using the modified R/S index [25,24]. This index is a function of a delay window, , and is normalized so that when its value is equal to for a particular delay, the process is memoryless at that horizon. On the other hand, when the R/S index is more than , the process exhibits persistent behavior, while values of the index below indicate anti-persistent behavior, at the corresponding horizons.
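For orientation, the classical (unmodified) R/S statistic for a single window can be computed as below; the paper’s figures use the modified R/S index [25,24], which adjusts the denominator for short-range autocorrelation, so this sketch is illustrative only:

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic of a series x: the range of the cumulative
    deviations from the mean, rescaled by the standard deviation.
    (The modified R/S statistic replaces the denominator with an
    autocorrelation-corrected estimate.)"""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())        # cumulative deviations from the mean
    r = z.max() - z.min()              # range of the cumulative deviations
    s = x.std()
    return r / s if s > 0 else 0.0

rng = np.random.default_rng(4)
rs = rescaled_range(rng.standard_normal(1024))
```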

Figure 4: The stochastic paths of and exhibit persistent serial correlation, as shown by the slowly decaying R/S index. The lower edge corresponds to a value of R/S equal to . : Blue Circles (), Magenta Squares (), Black Left Triangles (); : Red Diamonds (), Green Stars (), Cyan Right Triangles ().

Figure 4 shows the R/S index for three Monte Carlo simulation runs, each with time steps. Note that the figure is cropped so that the bottom edge corresponds to a value of . In all three cases and the Markov Chain was started at and . Three different values of were used: , and respectively. In the first and the last case, both and exhibit strong persistence even at . In the middle case, the memory disappears for both and after about and steps respectively. It is worth noting that in all cases the arc process has longer memory than the site process.

To further explore the long range memory of our stochastic process, we ran Monte Carlo simulations with steps each, all started at and , with six different values of . Figure 5 shows the R/S index at . In each case the solid lines represent the mean values of the index and the dashed lines correspond to one standard deviation above and below the mean (blue corresponds to and red to ). More than of the data remain persistently serially correlated even beyond the step horizon, indicating a remarkably long memory, which may well contribute to the observed long memory of empirical economic price series.

In what follows, we will employ a novel methodology to probe the dynamical attractors of this stochastic process. This technique, based on a contingent submartingale representation [33], will allow us to construct a cellular automaton [34] approximation of the full stochastic dynamics, identify all equilibria of this deterministic dynamical system and examine their stability properties. For linear stochastic systems, the resulting deterministic paths act as a ‘skeleton’ around which the stochastic state paths oscillate, ultimately converging to invariant measures supported in the neighborhoods of the deterministic attractors.

Figure 5: Over 60 simulations with different values of the coupling constant , the R/S index remained above more than of the time for both (Blue Circles) and (Red Diamonds). The left panel shows the resulting 120 values of the R/S index by trial, while the right panel shows the same data by value of the coupling constant .

On the other hand, stochastic systems with nonlinear interaction terms like those in our system often possess more complex, non-classical limiting behaviors in path space, e.g. involving path dependence and lack of ergodicity, that aren’t reducible to limiting averages over longer times. In the following sections of this paper we will illustrate qualitative deviations between the paths of states (primal objects), which exhibit a bewildering array of asymptotic behaviors, from limit points to intertwining periodic orbits and chaotic attractors, and the evolution of measures (dual objects), which unambiguously converge to well-described distributions, whose properties and convergence rates we are in a position to characterize.

4 Contingent Submartingale Representation

In order to analyze the long term behavior of this decidedly complicated, inhomogeneous Markov chain, we begin by characterizing the conditional expectation of the increments of this process in each dimension. This analysis gives us the following deterministic nonlinear dynamics:

(7)

where we’ve taken to using and for notational simplicity. In general, we are interested in the sign of and , because that determines whether the two components of increase or decrease, on average.

Incidentally, the fractional factors in front of the conditional expectations reflect the fact that the moves in will be either in the first () or second () dimension, not both simultaneously, with proportions , as we mentioned already at the end of section 2. In other words, the arc configurations are updated more frequently than the site configurations, and therefore the conditional expectations of the increments of will be much lower than the ones for , and they both will be lower than they would have been if we allowed simultaneous moves in both directions. It is these latter (simultaneous) expected increments that the functions and compute, so they need to be adjusted accordingly, to bring the weighted averages in line.

The reason why we chose to define the functions and in this way, despite the superficial conflict with the definition of our 2D stochastic process, is that we intend them to banish all stochasticity and represent the deterministic kernel around which the stochastic process evolves, as discussed in the last paragraph of section 3. Since both and depend on both components, they will generally point in directions that combine movement in both dimensions. Had we insisted on choosing one direction over the other at every step, we would have had to introduce some randomization scheme that chooses which directional signal, from the pair and , to obey at every point in time. The only way to extract a completely deterministic dynamic is to accept the possibility of simultaneous (though deterministic) moves in both directions.

More specifically, it is situations like this that gave rise to the concept of a contingent submartingale [33]. Here we slightly generalize the definition given in [32] to accommodate our needs. Specifically, let and be two stochastic processes, and let be a subset of the range of . Let and define for the following two sequences of integer-valued stopping times:

Finally, consider a two-dimensional process defined by

We will say that is a contingent submartingale with respect to if is a submartingale for each [14].
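A quick numerical illustration of the definition: the walk below drifts upward while negative and downward while positive, so, restricted to the epochs at which it lies in the region A = (−∞, 0), its increments have positive conditional mean, i.e. it behaves as a contingent submartingale with respect to A. The walk and the region are our own toy example, not the model’s processes:

```python
import numpy as np

rng = np.random.default_rng(5)

# A walk that drifts up while negative and down while positive:
# restricted to epochs where X_t lies in A = (-inf, 0), it is a submartingale.
x = 0.0
increments_in_A = []
for _ in range(100_000):
    drift = 0.2 if x < 0 else -0.2
    dx = drift + rng.choice([-1.0, 1.0])
    if x < 0:                         # record increments taken while in A
        increments_in_A.append(dx)
    x += dx

cond_mean = np.mean(increments_in_A)  # estimates E[dX | X in A], positive by design
```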

Figure 6: Signs of the conditional first moments of the increments of for and .

Figure 6 shows the signs of the conditional expectations in (7). In particular, the top left panel shows the sign of the (conditionally expected) increments in , with red, blue and yellow indicating positive, negative and zero respectively. The top right panel similarly illustrates the sign of the (conditionally expected) increments in . In particular, if is the red region in the top left panel, then we can see that is a contingent submartingale with respect to . Similarly, if is the red region in the top right panel, then is a contingent submartingale with respect to .

The bottom two panels indicate the boundaries between the regions of different signs. Moreover, they include arrows to better illustrate the direction of the expected flows at each point in the state space. Specifically, the boundaries of the different regions represent states where the process behaves locally like a martingale. Intersections of the boundaries in the left and right bottom panels indicate states that are stationary, from an expectation point of view.

At this point, it may be instructive to consider a thought experiment. Imagine that we could do away with all residual randomness, and allow the system to follow purely the deterministic dynamics indicated by the vector fields in (7), as shown in figure 6:

(9)
(10)

where is the signum function. This discrete dynamical system is what mathematicians call a cellular automaton [34]. This cellular automaton possesses several fixed point equilibria, located at the intersections of the curves and . Computationally, we can specify the approximate grid locations of the seven such equilibria when

as can be seen in figure 7, which superimposes the two lower panels in figure 6 and indicates their seven intersection points.
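The sign dynamics (9) and (10) can be iterated directly. The sketch below uses toy drift functions (not the model’s actual conditional expectations), with a fixed point placed at (10, 45), one of the lattice points appearing in Table 1, to show how intersections of the zero curves act as fixed points of the cellular automaton:

```python
import numpy as np

def step(state, f, g):
    """One update of a cellular automaton of the form (9)-(10): each
    coordinate moves by the sign of its expected increment."""
    n, m = state
    return n + int(np.sign(f(n, m))), m + int(np.sign(g(n, m)))

def iterate(state, f, g, steps):
    """Iterate the deterministic skeleton; a fixed point is a state where
    both signs vanish, i.e. an intersection of the two zero curves."""
    for _ in range(steps):
        state = step(state, f, g)
    return state

# toy drifts with a fixed point at (10, 45), a lattice point from Table 1
f = lambda n, m: 10 - n
g = lambda n, m: 45 - m
attractor = iterate((3, 30), f, g, 50)   # converges to (10, 45)
```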

Figure 7: Attractors of the cellular automaton abstraction.

The question we now have to ask ourselves is whether these attractors are really there (given discretization effects, which may diffuse the resulting local minima) and what their stability properties are. More specifically, there are two separate effects to consider. The first is due to the discrete nature of the underlying state space, which we have disregarded in our contingent submartingale analysis. It is conceivable that a non-lattice-point equilibrium may be unstable in the context of the discrete dynamics because all neighboring lattice points are repelling. The second effect we need to consider is whether small perturbations would irreversibly escape the neighborhood of the putative attractor.

Table 1 shows the values of and at each of the putative attractors and at grid points in their neighborhood, and uses this information to assess their stability properties. The table shows that there is only one stable point attractor, namely , while there are also three separate stable period-2 attractors for the dynamics when .

Attractor Lattice neighbors Stability assessment
(5,36)
(4,15)
Stable periodic attractor (period 2)
Attractor Lattice neighbors Stability assessment
(6,41)
(10,45) Stable
(4,33)
Table 1: Stability analysis of putative attractors of the cellular automaton model (9) and (10)

So, the deterministic dynamics of the cellular automaton approximation are non-ergodic, because they lack uniqueness of asymptotic behavior. The effect of initial conditions never disappears. Next we proceed to shift our perspective from the time evolution of the individual paths the system may follow to the evolution of measures on the space . We investigate the fully stochastic system by mapping it into a Markov chain and using spectral methods to study its convergence properties. Throughout the following discussion, it is instructive to keep in mind that the objects propagated by the Markov chain aren’t individual states, but measures [29].

5 Invariant Measure

In order to take advantage of linear algebra, we will recast the 2D Markov chain in terms of propagating state vectors. Specifically, let

(11)

and

(12)

Then, using the Markov chain transition probabilities defined at the end of Section 3, we obtain

where is defined as before. For example, consider the case and . The state is described by elements of , a set with cardinality 16. Thus, the resulting state transition matrix will be given by:

(14)

One can verify that this is indeed a Markov matrix, as all rows sum to 1. From the general theory of Markov chains [30] we know that at least one eigenvalue of this matrix has modulus 1 and that the moduli of all eigenvalues are no more than 1. Moreover, the left eigenvector corresponding to each eigenvalue equal to 1 solves the ‘balance equations’, i.e.

Thus, such an eigenvector represents the invariant measure of the Markov chain. Specifically, for the Markov transition matrix shown in (14), this eigenvector is given by

Returning to the more readily interpretable representation in the state space, the resulting invariant measure is given by

where the horizontal axis corresponds to the number of sites that are , from to , while the vertical axis corresponds to the number of arcs that are , from to . This means that, for example, the steady state probability of finding the system in the state , i.e. with one site equal to and two arcs equal to is equal to . The most likely outcomes of this Markov chain are the states and , while the states and are the least likely ones.
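The eigenvector computation behind this invariant measure can be sketched as follows; for compactness we use an illustrative 2-state chain rather than the 16-state matrix (14):

```python
import numpy as np

def invariant_measure(P):
    """Left eigenvector of the Markov matrix P for eigenvalue 1, normalized
    to a probability vector: it solves the balance equations pi P = pi."""
    w, v = np.linalg.eig(P.T)          # right eigenvectors of P^T = left of P
    k = np.argmin(np.abs(w - 1.0))     # index of the eigenvalue closest to 1
    pi = np.real(v[:, k])
    return pi / pi.sum()               # normalize (also fixes an overall sign)

# illustrative 2-state chain, not the matrix (14)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = invariant_measure(P)
```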

Appendix 3 describes the technical details involved in the appropriate convergence concepts for a Markov chain. This discussion substantiates our use of linear algebra techniques to obtain information about the distributional path properties of and . We now proceed to investigate the changes to the resulting invariant measure as and are allowed to vary. Figure 8 shows how the invariant measure evolves for as is allowed to increase.

Figure 8: Invariant measures for and in the subcritical regime to in the supercritical regime. The critical transition for happens between and . As increases in the supercritical regime, the invariant measure evolves from unimodal to bimodal and back to unimodal, with the mode moving from the left edge towards the middle of the state space.

As a first observation, note that the case and , whose deterministic approximation gave rise to the four separate attractors we described in section 4, possesses in fact a unique invariant measure, concentrated entirely at . In fact, we can easily verify, using the transition probabilities of Section 3, that when and ,

thereby guaranteeing that is a trapping state for the full stochastic dynamics. The only question that remains is one of ergodicity: might the corresponding Markov transition matrix possess a double eigenvalue equal to 1, in which case the two resulting eigenvectors will give rise to two distinct invariant measures, and the dynamics will be non-ergodic?

Figure 9: The modulus of the eigenvalues of the transition matrix for and decay almost linearly, with .

Figure 9 shows the rate of decay of the eigenvalues of the transition matrix in this case. Figure 10 shows the pattern generated by the location of the eigenvalues in the unit disc. It turns out that the second-largest eigenvalue is extremely close to 1, at about !
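The spectral gap discussed here can be computed directly from the sorted eigenvalue moduli; the 2x2 matrix below is a toy example chosen to have a similarly small gap, not the model’s transition matrix:

```python
import numpy as np

def spectral_gap(P):
    """Sort the eigenvalue moduli of the transition matrix; the gap between
    the leading eigenvalue (modulus 1) and the second-largest modulus
    controls the chain's mixing rate: a tiny gap means very slow mixing."""
    mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return mods[0] - mods[1]

# toy nearly-reducible chain: eigenvalues 1 and 0.998, so the gap is 0.002
P = np.array([[0.999, 0.001],
              [0.001, 0.999]])
gap = spectral_gap(P)
```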

Figure 10: The locations of the eigenvalues of the transition matrix for and in the unit disc. The four panels progressively zoom in to the spectral gap in a clockwise manner. The spectral gap is indeed very small, only equal to .

6 Transient Domain

Thus, we know that the invariant measure is unique, despite the multiplicity of attractors for the deterministic cellular automaton approximation. Why is that, and what can we infer from the nearness of the second eigenvalue to 1? There are two justifications for the discrepancy between these two apparently contradictory asymptotic behaviors. The first hinges on the observation, which we mentioned at the end of section 3, that the deterministic cellular automaton doesn’t in fact provide a skeleton around which the Markov chain evolves. Instead, these two dynamics deviate from one another systematically, in very instructive ways:

  • Propagating the average is not the same as the average of the propagated measure.

  • Rounding the state to the nearest integer doesn’t respect the sign of the expected increments.

The sources of these discrepancies are discussed in detail in Appendix 4.

Despite these important ways in which the underlying stochastic process deviates meaningfully from the deterministic cellular automaton approximation, in practice economic agents often use such approximation schemes. Specifically, stochastic dynamics are not uncommonly interpreted ‘quasi-deterministically’, taking one step at a time depending on the sign of signals on average, rather than propagating the ensemble of paths with their respective probabilities and subsequently taking the average.

We saw the first justification for the deviation in asymptotic behaviors of the Markov chain and the deterministic cellular automaton. According to it, we really didn’t have much reason to expect that they would approximate one another in the first place! The second justifi