Distributed Algorithms for Aggregative Games on Graphs
Abstract
We consider a class of Nash games, termed aggregative games, played over a networked system. In an aggregative game, a player's objective is a function of the aggregate of all the players' decisions. Every player maintains an estimate of this aggregate, and the players exchange this information with their local neighbors over a connected network. We study distributed synchronous and asynchronous algorithms for information exchange and equilibrium computation over such a network. Under standard conditions, we establish the almost-sure convergence of the obtained sequences to the equilibrium point. We also consider extensions of our schemes to aggregative games where the players' objectives are coupled through a more general form of aggregate function. Finally, we present numerical results that demonstrate the performance of the proposed schemes.
1 Introduction
An aggregative game is a noncooperative Nash game in which each player's payoff depends on its action and an aggregate function of the actions taken by all players [37, 16, 17, 28]. (Such games have been shown to be closely related to subclasses of potential games [51, 9], where a potential game refers to a Nash game in which the payoff functions admit a potential function [31]. The potential function of an aggregative game is a special case of the function employed in [29], where distributed algorithms for optimization problems with general separable convex functions are presented.) Nash-Cournot games represent an important instance of such games; here, firms make quantity bids that fetch a price based on aggregate quantity sold, implying that the payoff of any player is a function of the aggregate sales [8, 14]. The ubiquity of such games has grown immensely in the last two decades, and examples emerge in the form of supply function games [17], common agency games [17], and power and rate control in communication networks [2, 3, 4, 62] (see [1] for more examples). Our work is motivated by the development of distributed algorithms for a range of game-theoretic problems in wired and wireless communication networks, where such an aggregate function captures the link-specific congestion [3, 4] or the signal-to-noise ratio [40, 39, 56]. In almost all of the algorithmic research on equilibrium computation, it is assumed that the aggregate of player decisions is observable to all players, allowing every player to evaluate its payoff function without any prior communication.
In this paper, we consider aggregative games wherein the players (referred to as agents) compete over a network. Distributed computation of equilibria in such games is complicated by two crucial challenges. First, the connectivity graph of the underlying network may evolve over time. Second, agents do not have ready access to aggregate decisions, implying that agents cannot compute their payoffs (or their gradients). Consequently, distributed gradient-based [3, 40, 22, 23] or best-response schemes [52] cannot be directly implemented, since agents do not have immediate access to the aggregate. Accordingly, we propose two distributed agreement-based algorithms that overcome this difficulty by allowing agents to build estimates of the aggregate by communicating with their local neighbors, and consequently to compute an equilibrium of aggregative games. The first is a synchronous algorithm in which all agents update simultaneously, while the second, a gossip-based algorithm, allows for asynchronous computation:

Synchronous distributed algorithm: At each epoch, every agent performs a "learning step" to update its estimate of the aggregate using the information obtained through the time-varying states of its neighbors. All agents exchange information and subsequently update their decisions simultaneously via a gradient-based update. This algorithm builds on ideas from the method developed in [47] for distributed optimization problems.

Asynchronous distributed algorithm: In contrast, the asynchronous algorithm uses a gossip-based protocol for information exchange. In the gossip-based scheme, a single pair of randomly selected neighboring agents exchange their information and update their estimates of both the aggregate and their individual decisions. This algorithm combines our synchronous method above with the gossip technique proposed in [7] for the agreement (consensus) problem.
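As a rough illustration of the gossip exchange step alone (not the full algorithm, which is developed in section 4), the following sketch lets a randomly chosen pair of neighbors replace their aggregate estimates with their pairwise average. The line graph, the scalar estimates, and the function names are illustrative assumptions, not the paper's notation.

```python
import random

def gossip_step(y, neighbors, rng=random.Random(0)):
    """One asynchronous gossip exchange: a randomly chosen agent i wakes up,
    picks a random neighbor j, and the pair replace their aggregate estimates
    with the pairwise average. Pairwise averaging preserves the network-wide
    sum of the estimates."""
    i = rng.randrange(len(y))
    j = rng.choice(neighbors[i])
    avg = [(a + b) / 2.0 for a, b in zip(y[i], y[j])]
    y[i], y[j] = avg[:], avg[:]
    return i, j

# 4 agents on a line graph, scalar estimates; the true average is 6.0
y = [[0.0], [4.0], [8.0], [12.0]]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
for _ in range(500):
    gossip_step(y, neighbors)
```

Because each exchange conserves the sum of the two updated entries, the estimates drift toward the true network average while the total is preserved, which is the mechanism the asynchronous scheme exploits.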
We investigate the convergence behavior of both algorithms under a diminishing stepsize rule, and provide error bounds under a constant steplength regime. Additionally, the results are supported by numerics derived from applying the proposed schemes to a class of networked Nash-Cournot games. The novelty of this work lies in our examination of distributed (neighbor-based) algorithms for computing a Nash equilibrium of aggregative Nash games, whereas the majority of preceding efforts on such algorithms has been directed toward solving feasibility and optimization problems. Before proceeding, a caveat is in order. While the proposed game-theoretic problem can be easily solved via a range of centralized schemes (see [11] for a comprehensive survey), any such approach relies on the centralized availability of all information, a characteristic that does not hold in the present setting. Instead, our interest is not merely in equilibrium computation but in the development of stylized distributed protocols, implementable on networks, and complicated by informational restrictions, local communication access, and a possibly evolving network structure.
Broadly speaking, the present work can be situated in the larger domain of distributed computation of equilibria in networked Nash games. First proposed by Nash in 1950 [32], this equilibrium concept has found application in modeling strategic interactions in oligopolistic settings drawn from economics, engineering, and the applied sciences [14, 12, 4]. More recently, game-theoretic models have assumed relevance in the control of large collections of coupled nonlinear systems, instances of which arise in production planning [15] and the synchronization of coupled oscillators [60], amongst others. In particular, agents in such settings have conflicting objectives, and the centralized control problem is challenging. By allowing agents to compete, the equilibrium behavior may be analyzed exactly or approximately (in large population regimes), allowing for the derivation of distributed control laws. In fact, game-theoretic approaches have been effectively utilized in obtaining distributed control laws in complex engineered systems [49, 50]. Motivated by the ubiquity of game-theoretic models, arising either naturally or in an engineered form, the distributed computation of equilibria is of immense importance.
While equilibrium computation is a well-studied topic [13], our interest lies in networked regimes where agents can only access or observe the decisions of their neighbors. In such contexts, we focus on developing distributed gradient-based schemes. While any such algorithmic development is well motivated by protocol design in networked multiagent systems, best-response schemes, rather than gradient-based methods, are natural choices when players are viewed as fully rational. However, gradient-response schemes assume relevance for several reasons. First, game-theoretic approaches are increasingly being employed for developing distributed control protocols, where the choice of schemes lies with the designer (cf. [25, 27]). Given the relatively low complexity of gradient updates, such avenues are attractive for control systems design. Second, when players rule out strategies that are characterized by high computational complexity [41] (referred to as a "bounded-rationality" setting, a notion rooted in the influential work of Simon [53], which suggests that, when reasoning and computation are costly, agents may not invest in these resources for marginal benefits), gradient-based approaches become relevant and have been employed extensively in the context of communication networks [2, 3, 62, 40, 56].
The present work assumes a strict monotonicity property on the mapping corresponding to the associated variational problem. This assumption is weaker than those imposed by related studies on communication networks [2, 3], where strong monotonicity properties are required. From a methodological standpoint, we believe that this work represents but a first step. By incorporating a regularization technique, this requirement can be weakened [22], while extensions to stochastic regimes can also be accommodated by examining regularized counterparts of stochastic approximation [23]. However, all of these approaches rely on the assumption that agents have access to the decisions of all their competitors.
Finally, it should be emphasized that the distributed algorithms presented in this paper draw inspiration from the seminal work in [57], where a distributed method for optimization was developed by allowing agents to communicate locally with their neighbors over a time-varying communication network. This idea has recently attracted considerable attention in an effort to extend the algorithm of [57] to a broader range of problems [35, 46, 45, 19, 18, 36, 54, 33, 26, 58, 48, 59, 5]. Much of the aforementioned work focuses on optimizing the sum of local objective functions [35, 46, 45, 36, 54, 33, 26] in multiagent networks, while a subset of recent work considers the min-max optimization problem [55, 6], where the objective is to minimize the maximum cost incurred by any agent in the network. Notably, extensions of consensus-based algorithms have also been studied in the domains of distributed regression [48] and estimation and inference tasks [58, 59]. While much of the aforementioned work focuses on consensus-based algorithms, an alternative distributed messaging protocol for consensus propagation across a network is presented in [30]. The work in this paper extends the realm of consensus-based algorithms (as opposed to consensus propagation) to capture the competitive aspects of multiagent networks.
The remainder of the paper is organized as follows. In section 2, we describe the problem of interest, provide two motivating examples, and state our assumptions. A synchronous distributed algorithm is proposed in section 3, together with its convergence theory. An asynchronous gossip-based variant of this algorithm is described in section 4 and is supported by convergence theory and error analysis. In section 5.2, we present an extension of the problem presented in section 2 and suitably adapt the distributed synchronous and asynchronous algorithms to address this generalization. We present some numerical results in section 6 and, finally, conclude in section 7.
Throughout this paper, we view vectors as columns. We write $x^T$ to denote the transpose of a vector $x$, and $x^T y$ to denote the inner product of vectors $x$ and $y$. We use $\|x\|$ to denote the Euclidean norm of a vector $x$. We use $\Pi_K$ to denote the Euclidean projection operator onto a set $K$, i.e., $\Pi_K(x) \triangleq \arg\min_{z \in K} \|x - z\|$. The expectation of a random variable $Z$ is denoted by $\mathsf{E}[Z]$, and a.s. denotes almost surely. A matrix $A$ is row-stochastic if $A_{ij} \ge 0$ for all $i, j$ and $\sum_{j} A_{ij} = 1$ for all $i$. A matrix $A$ is doubly stochastic if both $A$ and $A^T$ are row-stochastic.
2 Problem Formulation and Background
In this section, we introduce the aggregative game of interest and provide its sufficient equilibrium conditions. The players in this game are assumed to have local interactions with each other over time, where these interactions are modeled by time-varying connectivity graphs. We also discuss some auxiliary results for the players' connectivity graphs and present our distributed algorithm for equilibrium computation.
2.1 Formulation and Examples
Consider a set of $N$ players (or agents) indexed by $i$, and let $\mathcal N \triangleq \{1, \ldots, N\}$. The $i$th player is characterized by a strategy set $K_i \subseteq \mathbb{R}^n$ and a payoff function $f_i(x_i, \bar x)$, which depends on player $i$'s decision $x_i$ and the aggregate $\bar x$ of all players' decisions. Furthermore, $\bar x_{-i}$ denotes the aggregate of all players' decisions excepting player $i$, i.e., $\bar x_{-i} \triangleq \sum_{j \ne i} x_j$.
To formalize the game, let $\bar K$ denote the Minkowski sum of the sets $K_1, \ldots, K_N$, defined as follows:
$\bar K \triangleq \sum_{i=1}^N K_i = \left\{ \sum_{i=1}^N x_i : x_i \in K_i \text{ for } i \in \mathcal N \right\}.$  (1)
In a generic aggregative game, given $\bar x_{-i}$, player $i$ faces the following parametrized optimization problem:
minimize $\quad f_i(x_i, x_i + \bar x_{-i})$  (2)
subject to $\quad x_i \in K_i,$  (3)
where $x_i \in \mathbb{R}^n$ and $\bar x$ is the aggregate of the agents' decisions $x_1, \ldots, x_N$, i.e.,
$\bar x \triangleq \sum_{i=1}^N x_i,$  (4)
with $\bar x \in \bar K$ as given by (1), and $\bar x = x_i + \bar x_{-i}$. The set $K_i$ and the function $f_i$ are assumed to be known by agent $i$ only. Next, we motivate our work by providing an example of an aggregative game, whose broad range of instances emphasizes the potential scope of our work.
Example 1 (Nash-Cournot game over a network).
A classical example of an aggregative game is a Nash-Cournot game played over a network [8, 38, 22]. Suppose a set of $N$ firms compete over $L$ locations. Here, the communication network of interest is formed by the firms, which are viewed as the nodes in the network. One such instance of a connectivity graph is shown in Figure 1. This graph determines how the firms communicate their production decisions over the locations. More specifically, the firm in the center of the graph has access to information from all the other firms, whereas every other firm has access to the information of the center firm only. We consider other instances of the connectivity graph in Section 6. To this end, let firm $i$'s production and sales at location $l$ be denoted by $g_{il}$ and $s_{il}$, respectively, while its cost of production at location $l$ is denoted by $c_{il}(g_{il})$. Consequently, goods sold by firm $i$ at location $l$ fetch a revenue $p_l(\bar s_l)\, s_{il}$, where $p_l$ denotes the sales price at location $l$ and $\bar s_l \triangleq \sum_{j=1}^N s_{jl}$ represents the aggregate sales at location $l$. Finally, firm $i$'s production at location $l$ is capacitated by $\mathrm{cap}_{il}$, and its optimization problem is given by the following (transportation costs are assumed to be zero):
minimize $\quad \sum_{l=1}^L c_{il}(g_{il}) - \sum_{l=1}^L p_l(\bar s_l)\, s_{il}$  (5)
subject to $\quad \sum_{l=1}^L s_{il} = \sum_{l=1}^L g_{il}, \qquad 0 \le g_{il} \le \mathrm{cap}_{il}, \quad 0 \le s_{il}, \quad l = 1, \ldots, L.$  (6)
In effect, firm $i$'s payoff function is parametrized by the nodal aggregate sales, thus rendering an aggregative game. Note that in this example we have two independent networks: the first is used to model the communication among the firms, while the second models the physical layout of the firms' production units and sales locations. We allow the communication network to be dynamic, but the layout network is assumed to be static.
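To make the structure of the firm's objective concrete, the following sketch evaluates one firm's cost-minus-revenue for given production, sales, and aggregate sales. The linear price $p_l = a_l - b_l \bar s_l$ and the quadratic production cost are illustrative assumptions for this sketch, not choices fixed by the paper.

```python
def firm_cost(g, s, S, a, b, c):
    """Negative payoff of one firm in a Nash-Cournot game: production cost
    minus sales revenue. g[l], s[l]: the firm's production and sales at
    location l; S[l]: aggregate sales at l across all firms. Assumed forms:
    price p_l = a[l] - b[l]*S[l], cost c[l]*g[l]**2 (both illustrative)."""
    cost = sum(c[l] * g[l] ** 2 for l in range(len(g)))
    revenue = sum((a[l] - b[l] * S[l]) * s[l] for l in range(len(s)))
    return cost - revenue

# two locations, one firm's decision, given the aggregate sales S
g, s = [1.0, 2.0], [1.5, 1.5]
S = [3.0, 4.0]
a, b, c = [10.0, 10.0], [1.0, 1.0], [0.5, 0.5]
print(firm_cost(g, s, S, a, b, c))  # prints -17.0 (payoff of 17)
```

Note that the firm's objective depends on the other firms' decisions only through the aggregate sales vector S, which is precisely the aggregative structure exploited by the algorithms in this paper.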
2.2 Equilibrium Conditions and Assumptions
To articulate sufficiency conditions, we make the following assumptions on the constraint sets $K_i$ and the functions $f_i$.
Assumption 1.
For each $i \in \mathcal N$, the set $K_i$ is compact and convex. Each function $f_i(x_i, \bar x)$ is continuously differentiable in $(x_i, \bar x)$ over some open set containing the set $K_i \times \bar K$, while each function $f_i(x_i, x_i + \bar x_{-i})$ is convex in $x_i$ over the set $K_i$.
Under Assumption 1, the (sufficient) equilibrium conditions of the Nash game in (2) can be specified as a variational inequality problem VI$(K, \phi)$ (cf. [11]). Recall that VI$(K, \phi)$ requires determining a point $x^* \in K$ such that
$(x - x^*)^T \phi(x^*) \ge 0 \quad \text{for all } x \in K,$
where
$\phi(x) \triangleq \left( \phi_1(x)^T, \ldots, \phi_N(x)^T \right)^T, \qquad \phi_i(x) \triangleq \nabla_{x_i} f_i(x_i, x_i + \bar x_{-i}),$  (7)
with $K \triangleq K_1 \times \cdots \times K_N$, $x \triangleq (x_1^T, \ldots, x_N^T)^T$ for $x_i \in K_i$, and $\bar x$ defined by (4). Note that, by Assumption 1, the set $K$ is a compact and convex set in $\mathbb{R}^{Nn}$, and the mapping $\phi$ is continuous. To emphasize the particular form of the mapping $\phi$, we define $F_i$ as follows:
$F_i(x_i, u) \triangleq \nabla_{x_i} f_i(x_i, x_i + \bar x_{-i}) \big|_{x_i + \bar x_{-i} = u},$  (8)
i.e., the gradient map of problem (2) with the aggregate appearing in it treated as the parameter $u$. The mapping $F$ is given by
$F(x, u) \triangleq \left( F_1(x_1, u_1)^T, \ldots, F_N(x_N, u_N)^T \right)^T,$  (9)
where the component maps $F_i$ are given by (8). With this notation, we have
$\phi(x) = F(x, \mathbf 1_N \otimes \bar x).$  (10)
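A standard fact about variational inequalities (used later in the convergence analysis) is that $x^*$ solves VI$(K, \phi)$ exactly when the natural-map residual $x - \Pi_K(x - \alpha \phi(x))$ vanishes at $x^*$. The sketch below checks this residual for a toy strictly monotone map on a box; the map, set, and function names are assumptions for illustration only.

```python
def project_box(x, lo, hi):
    """Euclidean projection onto a box [lo, hi], componentwise."""
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def vi_residual(x, phi, lo, hi, alpha=1.0):
    """Sup-norm of the natural-map residual x - Pi_K(x - alpha*phi(x));
    it is zero exactly at a solution of VI(K, phi)."""
    px = project_box([xi - alpha * g for xi, g in zip(x, phi(x))], lo, hi)
    return max(abs(xi - pi) for xi, pi in zip(x, px))

# toy strictly monotone map phi(x) = x - 0.5 on K = [0, 1]^2;
# the unique VI solution is x* = (0.5, 0.5)
phi = lambda x: [xi - 0.5 for xi in x]
lo, hi = [0.0, 0.0], [1.0, 1.0]
print(vi_residual([0.5, 0.5], phi, lo, hi))  # prints 0.0 at the solution
```

The residual is a convenient stopping criterion for projection-based schemes, since it can be evaluated from one projection per test point.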
Next, we make an assumption on the mapping $\phi$.
Assumption 2.
The mapping $\phi$ is strictly monotone over $K$, i.e., $\left( \phi(x) - \phi(y) \right)^T (x - y) > 0$ for all $x, y \in K$ with $x \ne y$.
Assumption 1 allows us to claim the existence of a Nash equilibrium, while Assumption 2 allows us to claim the uniqueness of the equilibrium.
Proposition 1.
Let Assumptions 1 and 2 hold. Then, the aggregative game (2) admits a unique Nash equilibrium, given by the unique solution of VI$(K, \phi)$.
Proof.
Strict monotonicity assumptions on the mapping $\phi$ are seen to hold in a range of practical problem settings, including Nash-Cournot games [22] and rate allocation problems [2, 3, 61, 62], amongst others. We now state our assumptions on the mappings $F_i$, which are related to the coordinate mappings of $\phi$ in (7).
Assumption 3.
Each mapping $F_i(x_i, u)$ is uniformly Lipschitz continuous in $u$, for every fixed $x_i \in K_i$; i.e., for some $L > 0$ and for all $u, z$,
$\| F_i(x_i, u) - F_i(x_i, z) \| \le L \| u - z \|,$
where, in particular, $u$ and $z$ may range over the set $\bar K$ defined in (1).
One may naturally ask whether such assumptions hold in practical instances of aggregative games. We will show in section 6 that they are satisfied for the Nash-Cournot game described in Example 1.
Before proceeding, it is worthwhile to reiterate the motivation for the present work. In the context of continuous-strategy Nash games, when the mapping satisfies a suitable monotonicity property over $K$, a range of distributed projection-based schemes [11, 3, 2, 42, 43] and their regularized variants [61, 62, 21, 22] can be constructed. In all of these instances, every agent must be able to observe the aggregate of the agent decisions. In this paper, we assume that this aggregate cannot be observed and that no central entity exists that can globally broadcast this quantity at any time. Yet, when agents are connected in some manner, a given agent may communicate locally with its neighbors and generate estimates of the aggregate decisions. Under this restriction, we are interested in designing algorithms for computing an equilibrium of the aggregative Nash game (2).
3 Distributed Synchronous Algorithm
In this section, we develop a distributed synchronous algorithm for equilibrium computation of the game in (2) that relies on agents constructing an estimate of the aggregate by mixing information drawn from local neighbors and making a subsequent projection step. In Section 3.1, we describe the scheme; we provide some preliminary results in Section 3.2 and conclude in Section 3.3 with an analysis of the convergence of the proposed scheme.
3.1 Outline of Algorithm
Our algorithm equips each agent in the network with a protocol that mandates that every agent exchange information with its neighbors and subsequently update its decision and its estimate of the aggregate decisions simultaneously. We employ a synchronous time model that can contend with a time-varying connectivity graph. Consequently, in this section we consider a time-varying network to model the agents' communications in time. More specifically, let $\mathcal E$ be the set of underlying undirected edges between agents, and let $G(k) = (\mathcal N, \mathcal E(k))$ with $\mathcal E(k) \subseteq \mathcal E$ denote the connectivity graph at time $k$. Let $N_i(k)$ denote the set of agents who are immediate neighbors of agent $i$ at time $k$ and can send information to $i$, where we assume that $i \in N_i(k)$ for all $i$ and all $k$. Mathematically, $N_i(k)$ can be expressed as:
$N_i(k) = \{ j \in \mathcal N : (j, i) \in \mathcal E(k) \} \cup \{ i \}.$
We make the following assumption on the graphs $G(k)$.
Assumption 4.
There exists an integer $Q \ge 1$ such that the graph $\left( \mathcal N, \bigcup_{l=0}^{Q-1} \mathcal E(k+l) \right)$ is connected for all $k \ge 0$.
This assumption ensures that the intercommunication intervals are bounded for agents that communicate directly; i.e., every agent sends information to each of its neighboring agents at least once every $Q$ time intervals. Such an assumption has been commonly used in distributed algorithms on networks, starting with [57].
Due to incomplete information, at any point an agent only has an estimate of the aggregate $\bar x$, in contrast with its actual value. We now describe how an agent builds this estimate. Let $x_i^k$ be the iterate and $y_i^k$ be the estimate of the average of the decisions for agent $i$ at the end of the $k$th iteration. At the beginning of the $(k+1)$st iteration, agent $i$ receives the estimates $y_j^k$ from its neighbors $j \in N_i(k)$. Using this information, agent $i$ aligns its intermediate estimate according to the following rule:
$\hat v_i^k = \sum_{j \in N_i(k)} w_{ij}(k)\, y_j^k,$  (11)
where $w_{ij}(k)$ is the nonnegative weight that agent $i$ assigns to agent $j$'s estimate. By specifying $w_{ij}(k) = 0$ for $j \notin N_i(k)$, we can write:
$\hat v_i^k = \sum_{j=1}^N w_{ij}(k)\, y_j^k.$
Using this aligned average estimate and its own iterate $x_i^k$, agent $i$ updates its iterate and average estimate as follows:
$x_i^{k+1} = \Pi_{K_i}\!\left( x_i^k - \alpha_k F_i(x_i^k, N \hat v_i^k) \right),$  (12)
$y_i^{k+1} = \hat v_i^k + x_i^{k+1} - x_i^k,$  (13)
where $\alpha_k > 0$ is the stepsize, $\Pi_{K_i}$ denotes the Euclidean projection of a vector onto the set $K_i$, and $F_i$ is as defined in (8). The quantity $N \hat v_i^k$ in (12) is the aggregate estimate that agent $i$ uses instead of the true aggregate of the agent decisions at time $k$. Under suitable conditions on the agents' weights and the stepsize $\alpha_k$, the iterate vector can converge to a Nash equilibrium point, and the estimates in (12) will converge to the true aggregate value at the equilibrium. These assumptions are given below.
Assumption 5.
Let $W(k)$ be the weight matrix with entries $w_{ij}(k)$. For all $i, j \in \mathcal N$ and all $k \ge 0$, the following hold:
(i) $w_{ij}(k) \ge \eta$ for some scalar $\eta \in (0, 1)$ whenever $j \in N_i(k)$, and $w_{ij}(k) = 0$ for $j \notin N_i(k)$;
(ii) $\sum_{j=1}^N w_{ij}(k) = 1$ for all $i$;
(iii) $\sum_{i=1}^N w_{ij}(k) = 1$ for all $j$.
Assumption 5 essentially requires every player to assign a positive weight to the information received from its neighbors. By Assumption 5(ii)-(iii), the matrix $W(k)$ is doubly stochastic. We point the reader to [35] for examples and a detailed discussion of weights satisfying the preceding assumption.
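One common construction that yields doubly stochastic weights on an undirected graph is the Metropolis rule from the distributed-averaging literature; the resulting matrix is symmetric, hence doubly stochastic, with uniformly positive entries on edges. The sketch below builds it for the star graph of the Nash-Cournot example (graph and function names are illustrative).

```python
def metropolis_weights(neighbors):
    """Metropolis weight matrix for an undirected graph: W[i][j] =
    1/(1 + max(deg_i, deg_j)) on each edge, with the self-weight chosen
    so each row sums to 1. Symmetry makes the matrix doubly stochastic."""
    N = len(neighbors)
    deg = {i: len(neighbors[i]) for i in neighbors}
    W = [[0.0] * N for _ in range(N)]
    for i in neighbors:
        for j in neighbors[i]:
            W[i][j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i][i] = 1.0 - sum(W[i][j] for j in neighbors[i])
    return W

# star graph on 4 nodes with node 0 at the center
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
W = metropolis_weights(neighbors)
```

Each agent can compute its own row of this matrix from purely local information (its degree and its neighbors' degrees), which is what makes the construction attractive in the distributed setting considered here.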
Assumption 6.
The stepsize sequence $\{\alpha_k\}$ is chosen such that the following hold:
(i) the sequence $\{\alpha_k\}$ is monotonically nonincreasing, i.e., $\alpha_{k+1} \le \alpha_k$ for all $k$;
(ii) $\sum_{k=0}^{\infty} \alpha_k = \infty$;
(iii) $\sum_{k=0}^{\infty} \alpha_k^2 < \infty$.
Such an assumption is satisfied for a stepsize of the form $\alpha_k = \frac{1}{(k+1)^a}$, where $a \in (1/2, 1]$.
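For this canonical choice, the three conditions of Assumption 6 can be verified directly:

```latex
% Stepsize \alpha_k = (k+1)^{-a} with a \in (1/2, 1]:
\alpha_{k+1} = \frac{1}{(k+2)^{a}} \le \frac{1}{(k+1)^{a}} = \alpha_k,
\qquad
\sum_{k=0}^{\infty} \frac{1}{(k+1)^{a}} = \infty \quad (\text{since } a \le 1),
\qquad
\sum_{k=0}^{\infty} \frac{1}{(k+1)^{2a}} < \infty \quad (\text{since } 2a > 1).
```

The two series conditions follow from the standard behavior of $p$-series: $\sum_k (k+1)^{-p}$ diverges for $p \le 1$ and converges for $p > 1$.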
3.2 Preliminary Results
We next provide some auxiliary results for the weight matrices $W(k)$ and the estimates generated by the method. We introduce the transition matrices from time $s$ to time $k$, as follows:
$\Phi(k, s) \triangleq W(k) W(k-1) \cdots W(s) \quad \text{for all } k \ge s \ge 0,$
where $\Phi(k, k) = W(k)$ for all $k$. Let $[\Phi(k, s)]_{ij}$ denote the $(i, j)$th entry of the matrix $\Phi(k, s)$, and let $\mathbf 1$ be the column vector with all entries equal to 1. We next state a result on the convergence properties of the matrices $\Phi(k, s)$. The result can be found in [34] (Corollary 1).
Lemma 1 ([57], Lemma 5.3.1).
Let Assumptions 4 and 5 hold. Then, there exist scalars $\theta > 0$ and $\beta \in (0, 1)$ such that $\left| [\Phi(k, s)]_{ij} - \frac{1}{N} \right| \le \theta\, \beta^{k-s}$ for all $k \ge s \ge 0$ and all $i, j \in \mathcal N$.
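The geometric convergence of products of doubly stochastic matrices to the uniform matrix with entries $1/N$ can be observed numerically. The sketch below uses a static lazy choice of weights on a 3-cycle (an assumed example, since the general result allows time-varying matrices).

```python
def matmul(A, B):
    """Plain dense matrix product for small square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# lazy uniform weights on a 3-cycle: doubly stochastic with positive diagonal
W = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]

Phi = W
for _ in range(20):          # with static W, Phi(k, 0) is just a matrix power
    Phi = matmul(Phi, W)

# every entry of Phi approaches 1/N = 1/3 geometrically
err = max(abs(Phi[i][j] - 1.0 / 3.0) for i in range(3) for j in range(3))
```

For this particular W, the nonunit eigenvalue is 0.25, so the entrywise error contracts by a factor of 4 per multiplication; in general the contraction factor is the $\beta$ appearing in the lemma.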
Next, we state some results which will allow us to claim the convergence of the algorithm. These results involve the average of the estimates $y_i^k$, defined by:
$\bar y^k \triangleq \frac{1}{N} \sum_{i=1}^N y_i^k.$  (14)
As we proceed to show, $\bar y^k$ will play a key role in establishing the convergence of the iterates produced by the algorithm in (12)-(13). One important property of $\bar y^k$ is that $\bar y^k = \frac{1}{N} \sum_{i=1}^N x_i^k$ for all $k$. Thus, $\bar y^k$ not only captures the average belief of the agents in the network but also represents the true average of the agents' decisions. This property of the true average has been shown in [48] within the proof of Lemma 5.2 for a different setting, and it is given in the following lemma for the sake of clarity.
Lemma 2.
Let the initial estimates be such that $y_i^0 = x_i^0$ for every $i$, and let Assumption 5 hold. Then, $\bar y^k = \frac{1}{N} \sum_{i=1}^N x_i^k$ for all $k \ge 0$, where $\bar y^k$ is defined by (14).
Proof.
It suffices to show that for all $k \ge 0$,
$\sum_{i=1}^N y_i^k = \sum_{i=1}^N x_i^k.$  (15)
We show this by induction on $k$. For $k = 0$, relation (15) holds trivially, as we have initialized the beliefs with $y_i^0 = x_i^0$ for all $i$. Assuming relation (15) holds for $k$ as the induction step, we have
$\sum_{i=1}^N y_i^{k+1} = \sum_{i=1}^N \hat v_i^k + \sum_{i=1}^N \left( x_i^{k+1} - x_i^k \right) = \sum_{i=1}^N \sum_{j=1}^N w_{ij}(k)\, y_j^k + \sum_{i=1}^N \left( x_i^{k+1} - x_i^k \right) = \sum_{j=1}^N y_j^k + \sum_{i=1}^N x_i^{k+1} - \sum_{i=1}^N x_i^k,$
where the first equality follows from (13), the second equality is a consequence of the mixing relation articulated by (11), and the last equality follows from $\sum_{i=1}^N w_{ij}(k) = 1$ for every $j$ and $k$. Furthermore, using the induction hypothesis, we have $\sum_{j=1}^N y_j^k = \sum_{j=1}^N x_j^k$, thus implying that $\sum_{i=1}^N y_i^{k+1} = \sum_{i=1}^N x_i^{k+1}$. ∎
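The conservation property established in Lemma 2 can be checked numerically: with doubly stochastic weights, the updates (11) and (13) preserve the sum of the estimates at every iteration. The gradient map in this sketch is an illustrative placeholder, not the paper's $F_i$.

```python
def step(x, y, W, alpha=0.1):
    """One iteration of the alignment and update rules for a toy scalar
    game, using a placeholder gradient map F_i(x_i, u) = x_i + 0.1*u."""
    N = len(x)
    v = [sum(W[i][j] * y[j] for j in range(N)) for i in range(N)]
    x_new = [x[i] - alpha * (x[i] + 0.1 * N * v[i]) for i in range(N)]  # no projection needed here
    y_new = [v[i] + x_new[i] - x[i] for i in range(N)]
    return x_new, y_new

W = [[0.5, 0.5, 0.0],
     [0.5, 0.25, 0.25],
     [0.0, 0.25, 0.75]]       # symmetric, hence doubly stochastic
x = [1.0, -2.0, 4.0]
y = x[:]                      # initialize with y_i^0 = x_i^0
for _ in range(5):
    x, y = step(x, y, W)
```

Column stochasticity of W is exactly what makes the mixing step sum-preserving, and the correction term x_new - x in the estimate update passes each agent's decision change into the network-wide sum, which is the content of the induction step in the proof above.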
As a consequence of Lemma 2 and Assumptions 1 and 3, we have the following result, which will be used often in the sequel.
Lemma 3.
Let Assumptions 1 and 3 hold, and let $y_i^0 = x_i^0$ for all $i$. Then, there exists a constant $C > 0$ such that $\| F_i(x_i, N \bar y^k) \| \le C$ for all $x_i \in K_i$, all $i \in \mathcal N$, and all $k \ge 0$; moreover, the quantities $\| F_i(x_i^k, N \hat v_i^k) \|$ are uniformly bounded.
Proof.
By Lemma 2, we have $N \bar y^k = \sum_{i=1}^N x_i^k \in \bar K$, where $\bar K$ is compact since each $K_i$ is compact (Assumption 1). Since each $F_i$ is continuous over the compact set $K_i \times \bar K$, the first inequality follows. To show that $F_i(x_i^k, N \hat v_i^k)$ is bounded, we write
$\| F_i(x_i^k, N \hat v_i^k) \| \le \| F_i(x_i^k, N \bar y^k) \| + \| F_i(x_i^k, N \hat v_i^k) - F_i(x_i^k, N \bar y^k) \|.$
Using the Lipschitz property of $F_i$ from Assumption 3, we obtain
$\| F_i(x_i^k, N \hat v_i^k) \| \le \| F_i(x_i^k, N \bar y^k) \| + N L \| \hat v_i^k - \bar y^k \|.$
Let $\mathcal H$ be the convex hull of the union set $\cup_{i=1}^N K_i$. Note that $\hat v_i^k \in \mathcal H$ for all $i$ and $k$, and that $\mathcal H$ is compact (since each $K_i$ is compact), though we cannot claim that $N \hat v_i^k$ lies in $\bar K$. Thus, $\| \hat v_i^k - \bar y^k \|$ is bounded. As already established, $F_i(x_i^k, N \bar y^k)$ is also bounded, implying that $F_i(x_i^k, N \hat v_i^k)$ is bounded as well. ∎
In the following lemma, we establish an error bound on the norm $\| \hat v_i^k - \bar y^k \|$, which plays an important role in our analysis.
Lemma 4.
Proof.
Using the definitions of $y_i^{k+1}$ and $\hat v_i^k$ given in Eqs. (13) and (11), respectively, we have
which through an iterative recursion leads to
The preceding relation can be rewritten as:
By the definition of $y_i^{k+1}$ in Eq. (13), we have $y_i^{k+1} - \hat v_i^k = x_i^{k+1} - x_i^k$, through which we get
(16) 
Now, consider $\bar y^{k+1}$, which may be written as follows:
By Lemma 2, we have $\bar y^k = \frac{1}{N} \sum_{i=1}^N x_i^k$ for all $k$, which implies
(17) 
where the last equality follows by the definition of $\bar y^k$ (see (14)).
From relations (16) and (17) we have
(18)  
(19) 
where the last inequality follows from the geometric bound on $[\Phi(k, s)]_{ij}$ holding for all $k \ge s$ (cf. Lemma 1).
Now, we estimate $\| x_i^{k+1} - x_i^k \|$. From relation (12), we see that for any $i$,
(20)  
(21) 
where the first inequality follows from the nonexpansivity of the projection map, and the second inequality follows from Lemma 3. Combining the two preceding estimates, we have
where in the last inequality we use the fact that $\max_i \max_{x, z \in K_i} \| x - z \|$ is finite, since each $K_i$ is a compact set (cf. Assumption 1). ∎
From the right-hand side of the expression in Lemma 4, it is apparent that the network connectivity parameter $Q$ (cf. Assumption 4) determines the rate at which a player's estimate of the aggregate converges to the actual aggregate. If the network connectivity is poor, $Q$ is large, implying that $\beta$ is close to 1 and resulting in a slower convergence rate.
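The effect of connectivity on the alignment step can be illustrated by comparing how fast repeated averaging contracts disagreement on a well-connected graph versus a sparse one. The lazy uniform weights below are assumed choices for the sketch, not weights prescribed by the paper.

```python
def consensus_error(W, y, steps):
    """Apply the alignment step y <- W y repeatedly and report the maximum
    deviation from the (preserved) average."""
    n = len(y)
    for _ in range(steps):
        y = [sum(W[i][j] * y[j] for j in range(n)) for i in range(n)]
    avg = sum(y) / n
    return max(abs(v - avg) for v in y)

# uniform weights on a complete graph vs lazy weights on a ring, 6 nodes
n = 6
W_complete = [[1.0 / n] * n for _ in range(n)]
W_ring = [[0.5 if i == j else (0.25 if abs(i - j) in (1, n - 1) else 0.0)
           for j in range(n)] for i in range(n)]
y0 = [float(i) for i in range(n)]
e_complete = consensus_error(W_complete, y0, 10)
e_ring = consensus_error(W_ring, y0, 10)
```

The complete graph reaches agreement essentially in one round, while the ring's disagreement decays only geometrically with ratio 0.75 (its second-largest eigenvalue), mirroring the role of $\beta$ in the bound of Lemma 4.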
3.3 Convergence theory
In this subsection, we prove that, under our assumptions, the sequence produced by the proposed algorithm does indeed converge to the unique Nash equilibrium, which exists by Proposition 1. Our next proposition provides the main convergence result for the algorithm. Prior to providing this result, we state two lemmas that will be employed in proving the required result: the first is a supermartingale convergence result (see, for example, [44, Lemma 11, p. 50]), and the second is [47, Lemma 3.1(b)].
Lemma 5.
Let $v_k$, $u_k$, $b_k$, and $c_k$ be nonnegative random variables adapted to some $\sigma$-algebra $\mathcal F_k$. If almost surely $\sum_{k=0}^{\infty} b_k < \infty$, $\sum_{k=0}^{\infty} c_k < \infty$, and
$\mathsf E[v_{k+1} \mid \mathcal F_k] \le (1 + b_k)\, v_k - u_k + c_k \quad \text{for all } k,$
then almost surely $\{v_k\}$ converges and $\sum_{k=0}^{\infty} u_k < \infty$.
Lemma 6.
[47, Lemma 3.1(b)] Let $\{\gamma_k\}$ be a nonnegative scalar sequence. If $\sum_{k=0}^{\infty} \gamma_k < \infty$ and $0 < \beta < 1$, then $\sum_{k=0}^{\infty} \left( \sum_{l=0}^{k} \beta^{k-l} \gamma_l \right) < \infty$.
In what follows, we use $x^k$ to denote the vector with components $x_i^k$, $i \in \mathcal N$, i.e., $x^k = \left( (x_1^k)^T, \ldots, (x_N^k)^T \right)^T$, and, similarly, we write $\hat v^k$ for the vector with components $\hat v_i^k$, $i \in \mathcal N$.
Proposition 2.
Let Assumptions 1-6 hold. Then, the sequence $\{x^k\}$ generated by the algorithm (11)-(13) converges to the unique Nash equilibrium $x^*$ of the aggregative game (2).
Proof.
By Proposition 1, VI$(K, \phi)$ has a unique solution $x^*$. When $x^*$ solves the variational inequality problem VI$(K, \phi)$, the following relation holds for any $\alpha > 0$: $x_i^* = \Pi_{K_i}\!\left( x_i^* - \alpha\, \phi_i(x^*) \right)$ for all $i \in \mathcal N$ (see [11, Proposition 1.5.8, p. 83]). From this relation and the nonexpansivity of the projection operator, we see that
By expanding the last term, we obtain the following expression:
(22)  
(23) 
To estimate Term 1, we use the triangle inequality, which yields
where $C$ is such that $\| F_i(x_i^k, N \hat v_i^k) \| \le C$ for all $i$ and $k$ (cf. Lemma 3), and $\max_i \max_{x \in K_i} \| x \|$ is finite by Assumption 1. Next, we consider Term 2. By adding and subtracting $N \bar y^k$ in Term 2, where $\bar y^k$ is defined by (14), we have