
# On the Characterization of Local Nash Equilibria in Continuous Games

Lillian J. Ratliff, Samuel A. Burden, and S. Shankar Sastry. The authors are with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720 USA (e-mail: {ratliffl, sburden, sastry}@eecs.berkeley.edu). This work is supported by NSF CPS:Large:ActionWebs (award number 0931843), TRUST (Team for Research in Ubiquitous Secure Technology), which receives support from NSF (award number CCF-0424422), and FORCES (Foundations Of Resilient CybEr-physical Systems), which receives support from NSF (award number CNS-1239166).
###### Abstract

We present a unified framework for characterizing local Nash equilibria in continuous games on either infinite–dimensional or finite–dimensional non–convex strategy spaces. We provide intrinsic necessary and sufficient first– and second–order conditions ensuring strategies constitute local Nash equilibria. We term points satisfying the sufficient conditions differential Nash equilibria. Further, we provide a sufficient condition (non–degeneracy) guaranteeing differential Nash equilibria are isolated and show that such equilibria are structurally stable. We present tutorial examples to illustrate our results and highlight degeneracies that can arise in continuous games.

## I Introduction

Many engineering systems are complex networks in which intelligent actors make decisions regarding usage of shared, yet scarce, resources. Game theory provides established techniques for modeling competitive interactions that have emerged as tools for analysis and synthesis of systems comprised of dynamically–coupled decision–making agents possessing diverse and oft–opposing interests (see, e.g. [1, 2]). We focus on games with a finite number of agents where their strategy spaces are continuous, either a finite–dimensional differentiable manifold or an infinite–dimensional Banach manifold.

Previous work on continuous games with convex strategy spaces and player costs led to global characterization and computation of Nash equilibria [3, 4, 5]. Adding constraints led to extensions of nonlinear programming concepts, such as constraint qualification conditions, to games with generalized Nash equilibria [6, 7, 8]. Imposing a differentiable structure on the strategy spaces yielded other global conditions ensuring existence and uniqueness of Nash equilibria and Pareto optima [9, 10, 11]. In contrast, we aim to analytically characterize and numerically compute local Nash equilibria in continuous games on non–convex strategy spaces.

Bounding the rationality of agents can result in myopic behavior [12], meaning that agents seek strategies that are optimal locally but not necessarily globally. Further, it is common in engineering applications for strategy spaces or player costs to be non–convex, for example when an agent’s configuration space is a constrained set or a differentiable manifold [13, 14]. These observations suggest that techniques for characterization and computation of local Nash equilibria have important practical applications.

Motivated by systems with myopic agents and non–convex strategy spaces, we seek an intrinsic characterization for local Nash equilibria that is structurally stable and amenable to computation. By generalizing derivative–based conditions for local optimality in nonlinear programming [15] and optimal control [16], we provide necessary first– and second–order conditions that local Nash equilibria must satisfy, and further develop a second–order sufficient condition ensuring player strategies constitute a local Nash equilibrium. We term points satisfying this sufficient condition differential Nash equilibria. In contrast to a pure optimization problem, this second–order condition is insufficient to guarantee a differential Nash equilibrium is isolated; in fact, games may possess a continuum of differential Nash equilibria. Hence, we introduce an additional second–order condition ensuring a differential Nash equilibrium is isolated.

Verifying that a strategy constitutes a Nash equilibrium in non–trivial strategy spaces requires testing that a non–convex inequality constraint is satisfied on an open set, a task we regard as generally intractable. In contrast, our sufficient conditions for local Nash equilibria require only the evaluation of player costs and their derivatives at single points. Further, our framework allows for numerical computations to be carried out when players’ strategy spaces and cost functions are non–convex. Hence, we provide tractable tools for characterization and computation of differential Nash equilibria in continuous games.

We show that non–degenerate differential Nash equilibria are structurally stable; hence, measurement noise and modeling errors that give rise to a nearby game do not result in drastically different equilibrium behavior—a property that is desirable in both the design of games as well as inverse modeling of agent behavior in competitive environments. Further, structural stability ensures that following the flow generated by the gradient of each player’s cost converges locally to a stable, non–degenerate differential Nash equilibrium. We remark that non–degenerate differential Nash equilibria are generic in the finite–dimensional case [17].

The rest of the paper is organized as follows. In Section II we present the game formulation in both the finite–dimensional and infinite–dimensional cases. We follow with the characterization of local Nash equilibria in Section III and establish structural stability of non–degenerate differential Nash equilibria in Section IV. Throughout the paper we carry an example that provides insight into the importance of the results, and in Section V we return to this example in more detail. Finally, we conclude with a discussion in Section VI. The necessary mathematical background and notation is contained in the Appendix.

## II Game Formulation

The theory of games we consider concerns interaction between a finite number of rational agents generally having different interests and objectives. We refer to the rational agents as players. Competition arises due to the fact that the players have opposing interests.

Let us begin by considering a game in which we have $n$ selfish players with competing interests. The strategy spaces $U_1, \ldots, U_n$ are topological spaces. Note these can be finite–dimensional smooth manifolds or infinite–dimensional Banach manifolds. We denote the joint strategy space by $U = U_1 \times \cdots \times U_n$. The players are each interested in minimizing a cost function representing their interests by choosing an element from their strategy space. We define player $i$’s cost to be a twice–differentiable function $f_i : U \to \mathbb{R}$. The following definition describes the equilibrium behavior we are interested in:

**Definition.** A strategy $u = (u_1, \ldots, u_n) \in U$ is a **local Nash equilibrium** if there exist open sets $W_i \subset U_i$ such that $u_i \in W_i$ and for each $i \in \{1, \ldots, n\}$

$$f_i(u_1, \ldots, u_i, \ldots, u_n) \le f_i(u_1, \ldots, u_i', \ldots, u_n) \tag{1}$$

for all $u_i' \in W_i$. Further, if the above inequalities are strict, then we say $u$ is a **strict local Nash equilibrium**. If $W_i = U_i$ for each $i$, then $u$ is a **global Nash equilibrium**. Simply put, the above definition says that no player can unilaterally deviate from the Nash strategy and decrease their cost.
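The unilateral–deviation inequality (1) can be tested numerically by sampling deviations in a neighborhood. The sketch below does this for a hypothetical two–player quadratic game on $\mathbb{R} \times \mathbb{R}$; the costs `f1`, `f2`, the sampling radius, and the grid density are illustrative assumptions, not part of the theory.

```python
import numpy as np

# Hypothetical two-player game on R x R (illustrative costs, not from the paper):
f1 = lambda u1, u2: (u1 - 1.0) ** 2 + 0.1 * u1 * u2
f2 = lambda u1, u2: (u2 + 1.0) ** 2 - 0.1 * u1 * u2

def is_local_nash(u1, u2, radius=0.5, samples=201, strict=False):
    """Test the unilateral-deviation inequalities (1) on a sampled
    neighborhood W_i = (u_i - radius, u_i + radius)."""
    cmp = (lambda a, b: a < b) if strict else (lambda a, b: a <= b)
    for d in np.linspace(-radius, radius, samples):
        if abs(d) < 1e-12:      # skip the zero deviation
            continue
        if not cmp(f1(u1, u2), f1(u1 + d, u2)):
            return False
        if not cmp(f2(u1, u2), f2(u1, u2 + d)):
            return False
    return True
```

Since each cost here is strictly convex in the player’s own variable, the joint stationary point of $(D_1 f_1, D_2 f_2)$ passes the check while a non-stationary point fails it.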

Before we move on to the characterization of local Nash equilibria, we describe the types of games the results apply to and why they are important in engineering applications.

Continuous games with finite–dimensional strategy spaces are described by the player strategy spaces and their cost functions . They arise in a number of engineering and economic applications, for instance, in modeling one–shot decision making problems arising in transportation, communication and power networks [18, 19, 20]. On the other hand, continuous games with infinite–dimensional strategy spaces, regarded as open–loop differential games, are used in engineering applications in which there are agents coupled through dynamics. They arise in problems such as building energy management [21], travel-time optimization in transportation networks [22], and integration of renewables into energy systems [23].

Open–loop differential games often come in the following form. Let $L_2([0,T], \mathbb{R}^{k_i})$ denote the space of square integrable functions from $[0,T]$ into $\mathbb{R}^{k_i}$. For an $n$–player game, the strategy spaces are Banach manifolds $\mathcal{U}_i$, for $i \in \{1, \ldots, n\}$, modeled on $L_2([0,T], \mathbb{R}^{k_i})$. For each $t \in [0,T]$, let $x(t)$ denote the state of the game. The state evolves according to the dynamics

$$\dot{x}(t) = h\big(x(t), u_1(t), \ldots, u_n(t)\big) \quad \forall\, t \in [0,T] \tag{2}$$

where $u_i \in \mathcal{U}_i$ is player $i$’s strategy. We assume that $h$ is continuously differentiable, globally Lipschitz continuous, and that its derivatives in all arguments are globally Lipschitz continuous. We denote by $f_i$ player $i$’s cost function. The superscript notation $x^{(x(0), u_1, \ldots, u_n)}$ on the state indicates the dependence of the state on the initial state and the strategies of the players. Each $f_i$ is assumed twice continuously differentiable so that each player’s cost, as a function of the strategies, is Fréchet–differentiable [16, Thm. 5.6.10]. We pose each player’s optimization problem as

$$\min_{u_i}\ f_i\big(x^{(x(0), u_1, \ldots, u_i, \ldots, u_n)}(T)\big). \tag{3}$$

The costate $p_i$ for player $i$ evolves according to

$$\dot{p}_i(t) = -p_i(t)\, \frac{\partial h}{\partial x}\big(x(t), u_1(t), \ldots, u_n(t)\big) \tag{4}$$

with final time condition

$$p_i(T) = D_x f_i\big(x^{(x(0), u_1, \ldots, u_n)}(T)\big). \tag{5}$$

The derivative of the $i$–th player’s cost function is given by

$$(D_i f_i)(t) = p_i(t)\, \frac{\partial h}{\partial u_i}\big(x(t), u_1(t), \ldots, u_n(t)\big). \tag{6}$$
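To make (2)–(6) concrete, the following sketch discretizes a hypothetical scalar open–loop game and checks the costate–based gradient against a finite difference; the dynamics, cost, horizon, and strategies are all illustrative assumptions, not an example from the paper.

```python
import numpy as np

# Assumed scalar game: dynamics x' = a*x + u1 + u2 on [0, T] as in (2);
# player 1 minimizes f1(x(T)) = x(T)**2. The backward recursion below
# discretizes the costate equations (4)-(5), and the gradient (6)
# reduces to D_1 f_1(t) = p_1(t) because dh/du1 = 1 here.
a, T, N = -0.5, 1.0, 2000
dt = T / N
u1 = 0.3 * np.ones(N)     # piecewise-constant open-loop strategies
u2 = -0.2 * np.ones(N)

def terminal_state(u1, u2, x0=1.0):
    x = x0
    for k in range(N):                  # forward Euler for (2)
        x += dt * (a * x + u1[k] + u2[k])
    return x

def cost_gradient(p_T):
    p, grad = p_T, np.empty(N)
    for k in reversed(range(N)):        # integrate p' = -a*p backward in time
        grad[k] = p                     # D_1 f_1(t_k) = p(t_k), cf. (6)
        p += dt * a * p
    return grad
```

Because the discretized dynamics are linear in the strategies, the integral of the costate gradient matches the finite-difference derivative of the terminal cost.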

Before we dive into the details, let us consider a simple example that exhibits very interesting behavior.

**Example (Betty–Sue).** Consider a two player game between Betty and Sue. Let Betty’s strategy space be $\mathbb{R}$ and her cost function $f_1(u_1, u_2) = \frac{u_1^2}{2} - u_1 u_2$. Similarly, let Sue’s strategy space be $\mathbb{R}$ and her cost function $f_2(u_1, u_2) = \frac{u_2^2}{2} - u_1 u_2$. This game can be thought of as an abstraction of two agents in a building occupying adjoining rooms. The first term in each of their costs represents an energy cost and the second term is a cost from thermodynamic coupling. The agents try to maintain the temperature at a desired set–point in thermodynamic equilibrium.

The definition above specifies that a point $u = (u_1, u_2)$ is a Nash equilibrium if no player can unilaterally deviate and decrease their cost, i.e. $f_1(u_1, u_2) \le f_1(u_1', u_2)$ for all $u_1'$ and $f_2(u_1, u_2) \le f_2(u_1, u_2')$ for all $u_2'$.

Fix Sue’s strategy $u_2 = q$, and calculate

$$D_1 f_1 = \frac{\partial f_1}{\partial u_1} = u_1 - q. \tag{7}$$

Then, Betty’s optimal response to Sue playing $q$ is $u_1 = q$. Similarly, if we fix Betty’s strategy $u_1 = p$, then Sue’s optimal response to Betty playing $p$ is $u_2 = p$. For all $u_1' \in \mathbb{R}$,

$$f_1(u_1', q) = \frac{(u_1' - q)^2}{2} - \frac{q^2}{2} \ge -\frac{q^2}{2} \tag{8}$$

so that $f_1(q, q) \le f_1(u_1', q)$ for all $u_1' \in \mathbb{R}$. Again, similarly, for all $u_2' \in \mathbb{R}$,

$$f_2(p, u_2') = \frac{(u_2' - p)^2}{2} - \frac{p^2}{2} \ge -\frac{p^2}{2} \tag{9}$$

so that $f_2(p, p) \le f_2(p, u_2')$ for all $u_2' \in \mathbb{R}$. Hence, all the points on the line $u_1 = u_2$ in $\mathbb{R}^2$ are strict local Nash equilibria—in fact, they are strict global Nash equilibria.

As the above example shows, continuous games can exhibit a continuum of equilibria. Throughout the text we will return to this example.

## III Characterization of Local Nash Equilibria

In this section, we characterize local Nash equilibria by paralleling results in nonlinear programming and optimal control that provide first– and second–order necessary and sufficient conditions for local optima.

The following definition of a differential game form is due to Stein [24].

**Definition.** A **differential game form** is a differential $1$–form $\omega$ defined by

$$\omega = \sum_{i=1}^{n} \psi_{M_i} \circ \mathrm{d}f_i, \tag{10}$$

where $\psi_{M_i}$ are the natural bundle maps defined in (27) that annihilate those components of the covector not corresponding to player $i$.

**Remark.** If each $M_i$ is a finite–dimensional manifold of dimension $m_i$, then the differential game form has the following coordinate representation:

$$\omega_\varphi = \sum_{i=1}^{n} \sum_{j=1}^{m_i} \frac{\partial (f_i \circ \varphi^{-1})}{\partial y_i^j}\, \mathrm{d}y_i^j \tag{11}$$

where $\varphi = (\varphi_1, \ldots, \varphi_n)$ is a product chart on $M = M_1 \times \cdots \times M_n$ at $u$ with local coordinates $(y_1, \ldots, y_n)$ and $y_i = (y_i^1, \ldots, y_i^{m_i})$. In addition, $\omega_\varphi$ is the coordinate representation of $\omega$. In particular, each $y_i^j$ is a coordinate function so that $\mathrm{d}y_i^j$ is its derivative. The differential game form captures a differential view of the strategic interaction between the players. Indeed, $-\omega$ indicates the direction in which the players can change their strategies to decrease their individual cost functions most rapidly. Note that each player’s cost function depends on its own choice variable as well as all the other players’ choice variables. However, each player can only affect their payoff by adjusting their own strategy.

**Definition.** A strategy $u \in M$ is a **differential Nash equilibrium** if $\omega(u) = 0$ and $D_{ii}^2 f_i(u)$ is positive–definite for each $i \in \{1, \ldots, n\}$. The second–order conditions used to define differential Nash equilibria are motivated by results in nonlinear programming that use first– and second–order conditions to assess whether a critical point is a local optimum [16], [15].
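The two defining conditions, vanishing of the differential game form and positive–definiteness of each player’s own-Hessian, can be checked numerically with finite differences. The sketch below does so for a two–player game on $\mathbb{R} \times \mathbb{R}$; the step sizes and tolerances are illustrative choices.

```python
import numpy as np

# Numerical check of the differential Nash conditions for a two-player
# game on R x R: omega(u) = (D1 f1(u), D2 f2(u)) = 0 and D_ii^2 f_i(u) > 0.
def omega(f1, f2, u, h=1e-5):
    u1, u2 = u
    d1 = (f1(u1 + h, u2) - f1(u1 - h, u2)) / (2 * h)   # central diff D1 f1
    d2 = (f2(u1, u2 + h) - f2(u1, u2 - h)) / (2 * h)   # central diff D2 f2
    return np.array([d1, d2])

def is_differential_nash(f1, f2, u, h=1e-5, tol=1e-6):
    u1, u2 = u
    d11 = (f1(u1 + h, u2) - 2 * f1(u1, u2) + f1(u1 - h, u2)) / h ** 2
    d22 = (f2(u1, u2 + h) - 2 * f2(u1, u2) + f2(u1, u2 - h)) / h ** 2
    return bool(np.all(np.abs(omega(f1, f2, u, h)) < tol)
                and d11 > 0 and d22 > 0)
```

For the Betty–Sue costs, every point on the line $u_1 = u_2$ satisfies both conditions, while off-diagonal points fail the first-order condition.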

The following proposition provides first– and second–order necessary conditions for local Nash equilibria. We remark that these conditions are reminiscent of those seen in nonlinear programming for optimality of critical points.

**Proposition.** If $u \in M$ is a local Nash equilibrium, then $\omega(u) = 0$ and $D_{ii}^2 f_i(u)$ is positive semi–definite for each $i \in \{1, \ldots, n\}$.

###### Proof:

Suppose that $u \in M$ is a local Nash equilibrium. Then,

$$f_i(u) \le f_i(u_1, \ldots, u_i', \ldots, u_n), \quad \forall\, u_i' \in W_i \tag{12}$$

for open sets $W_i \subset M_i$ with $u_i \in W_i$, $i \in \{1, \ldots, n\}$. Suppose that we have a product chart $\varphi = (\varphi_1, \ldots, \varphi_n)$ at $u$ such that $\varphi(u) = (v_1, \ldots, v_n)$.

Let $v_i' = \varphi_i(u_i')$ for each $u_i' \in W_i$. Then, since $\varphi$ is continuous, for each $i \in \{1, \ldots, n\}$ we have that for all $v_i'$,

$$f_i \circ \varphi^{-1}(v_1, \ldots, v_i, \ldots, v_n) \le f_i \circ \varphi^{-1}(v_1, \ldots, v_i', \ldots, v_n). \tag{13}$$

Now, we apply Proposition 1.1.1 from [15], if $M_i$ is finite–dimensional, or Theorem 4.2.3(1) and Theorem 4.2.4(a) from [16], if $M_i$ is infinite–dimensional, to $f_i \circ \varphi^{-1}$. We conclude that for each $i$, $D_i(f_i \circ \varphi^{-1})(v_1, \ldots, v_n) = 0$ and, for some $\alpha \ge 0$ and all $\nu$,

$$D_{ii}^2 (f_i \circ \varphi^{-1})(v_1, \ldots, v_n)(\nu, \nu) \ge \alpha \|\nu\|^2, \tag{14}$$

i.e. it is a positive semi–definite bilinear form.

Invariance of the stationarity of critical points and of the index of the Hessian with respect to coordinate change gives us $\omega(u) = 0$ and that $D_{ii}^2 f_i(u)$ is positive semi–definite for each $i \in \{1, \ldots, n\}$. ∎

We now show that the conditions defining a differential Nash equilibrium are sufficient to guarantee a strict local Nash equilibrium.

**Theorem.** A differential Nash equilibrium is a strict local Nash equilibrium.

###### Proof:

Suppose that $u \in M$ is a differential Nash equilibrium. Then, by the definition of differential Nash equilibrium, $\omega(u) = 0$ and $D_{ii}^2 f_i(u)$ is positive definite for each $i \in \{1, \ldots, n\}$. The second–derivative conditions imply that $D_{ii}^2(f_i \circ \varphi^{-1})(\varphi(u))$ is a positive–definite bilinear form for any product chart $\varphi = (\varphi_1, \ldots, \varphi_n)$ at $u$ and for each $i \in \{1, \ldots, n\}$.

Using the isomorphism introduced in the appendix in (26), $\omega(u) = 0$ implies that $D_i f_i(u) = 0$ for each $i \in \{1, \ldots, n\}$. Let $E_i$ be the model space, i.e. the underlying Banach space, in either the finite–dimensional or infinite–dimensional case. Applying either Proposition 1.1.3 from [15] or Theorem 4.2.6(a) from [16] to each $f_i \circ \varphi^{-1}$ with the other players’ strategies fixed yields a neighborhood $V_i$ of $\varphi_i(u_i)$ such that for all $v_i' \in V_i \setminus \{\varphi_i(u_i)\}$,

$$f_i \circ \varphi^{-1}(v_1, \ldots, v_i, \ldots, v_n) < f_i \circ \varphi^{-1}(v_1, \ldots, v_i', \ldots, v_n). \tag{15}$$

Since $\varphi$ is continuous, there exists a neighborhood $W_i$ of $u_i$ such that for each $i \in \{1, \ldots, n\}$ and all $u_i' \in W_i \setminus \{u_i\}$,

$$f_i(u_1, \ldots, u_i, \ldots, u_n) < f_i(u_1, \ldots, u_i', \ldots, u_n). \tag{16}$$

Therefore, differential Nash equilibria are strict local Nash equilibria. Due to the fact that both $\omega(u) = 0$ and definiteness of the Hessian are coordinate invariant, this conclusion is independent of the choice of coordinate chart. ∎

We remark that the conditions for differential Nash equilibria are not sufficient to guarantee that an equilibrium is isolated.

**Example (Betty–Sue: Continuum of Differential Nash).** Returning to the Betty–Sue debacle, we can check that $\omega(u) = 0$ at all the points $u = (q, q)$, $q \in \mathbb{R}$, and $D_{ii}^2 f_i(u) = 1 > 0$ for each $i$. Hence, there is a continuum of differential Nash equilibria in this game. We propose a sufficient condition to guarantee that differential Nash equilibria are isolated. We do so by combining ideas introduced by Rosen for convex games with concepts from Morse theory, in particular second–order conditions on non–degenerate critical points of real–valued functions on manifolds.

At a differential Nash equilibrium $u$, consider the derivative of the differential game form

$$\mathrm{d}\omega = \sum_{i=1}^{n} \mathrm{d}(\psi_{M_i} \circ \mathrm{d}f_i). \tag{17}$$

Intrinsically, this derivative is a tensor field; at a point $u$ where $\omega(u) = 0$ it is a bilinear form constructed from the uniquely determined continuous, symmetric, bilinear forms $D^2 f_i(u)$.

**Theorem.** If $u$ is a differential Nash equilibrium and $\mathrm{d}\omega(u)$ is non–degenerate, then $u$ is an isolated strict local Nash equilibrium.

###### Proof:

Since $u$ is a differential Nash equilibrium, the preceding theorem gives us that it is a strict local Nash equilibrium. The following argument shows that it is isolated. Choose a coordinate chart $\varphi$ at $u$ and let $E$ denote the underlying model space of the manifold $M$. Define the map $g$ by

$$g(\varphi(u)) = \sum_{i=1}^{n} D_i(f_i \circ \varphi^{-1})(\varphi(u)). \tag{18}$$

Note that $g$ is the coordinate representation of the differential game form $\omega$. Zeros of the function $g$ define critical points of the game and its derivative at critical points is the coordinate representation of $\mathrm{d}\omega$. Since $u$ is a differential Nash equilibrium, $g(\varphi(u)) = 0$. Further, since $\mathrm{d}\omega(u)$ is non–degenerate—the derivative of $g$ is a linear isomorphism—we can apply the Inverse Function Theorem [25, Thm. 2.5.2] to get that $g$ is a local diffeomorphism at $\varphi(u)$, i.e. there exists an open neighborhood $W$ of $\varphi(u)$ such that the restriction of $g$ to $W$ establishes a diffeomorphism between $W$ and an open subset of the codomain. Thus, only $\varphi(u)$ could be mapped to zero near $\varphi(u)$. Non–degeneracy of $\mathrm{d}\omega$ is invariant with respect to choice of coordinates. Therefore, $u$ is isolated. ∎
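The local–diffeomorphism argument also makes non–degenerate equilibria computable: Newton’s method applied to the coordinate representation $g$ of (18) is well defined because the derivative of $g$ is invertible. The sketch below runs this iteration on a hypothetical non–degenerate quadratic game (the costs are illustrative assumptions, not an example from the paper).

```python
import numpy as np

# Newton iteration on g(u) = (D1 f1(u), D2 f2(u)), as in (18).
# Hypothetical costs: f1 = (u1 - 1)**2 + u1*u2, f2 = (u2 + 1)**2 - u1*u2,
# so Dg = [[2, 1], [-1, 2]] is a linear isomorphism (det = 5 != 0).
def g(u):
    u1, u2 = u
    return np.array([2.0 * (u1 - 1.0) + u2, 2.0 * (u2 + 1.0) - u1])

def dg(u):
    return np.array([[2.0, 1.0], [-1.0, 2.0]])

def newton(u, iters=10):
    for _ in range(iters):
        u = u - np.linalg.solve(dg(u), g(u))
    return u
```

Since $g$ is affine here, the iteration lands on the isolated zero of $g$ in a single step; for general smooth costs convergence is local, in keeping with the neighborhood produced by the Inverse Function Theorem.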

**Definition.** Differential Nash equilibria $u$ such that $\mathrm{d}\omega(u)$ is non–degenerate are termed **non–degenerate differential Nash equilibria**.

**Example (Betty–Sue: Degeneracy and Breaking Symmetry).** Return again to the Betty–Sue example in which we showed that there is a continuum of Nash equilibria; in fact, all the points on the line $u_1 = u_2$ are differential Nash equilibria and at each of these points we have

$$\mathrm{d}\omega(u) = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} \tag{19}$$

so that $\det \mathrm{d}\omega(u) = 0$. Hence, all of the equilibria are degenerate. By breaking the symmetry in the game, we can make a non–degenerate differential Nash equilibrium; i.e. we can remove all but one of the equilibria. Indeed, for small $\varepsilon \neq 0$, let Betty’s cost be given by $\tilde{f}_1(u_1, u_2) = (1 + \varepsilon)\frac{u_1^2}{2} - u_1 u_2$ and let Sue’s cost remain unchanged. Then the local representation of the derivative of the differential game form of the perturbed game is

$$\mathrm{d}\tilde{\omega}(u) = \begin{bmatrix} 1 + \varepsilon & -1 \\ -1 & 1 \end{bmatrix}, \qquad \det \mathrm{d}\tilde{\omega}(u) = \varepsilon. \tag{20}$$

Thus for any small value of $\varepsilon \neq 0$, $(0, 0)$ is a non–degenerate differential Nash equilibrium. This shows that small modeling errors can remove degenerate differential Nash equilibria.

In a neighborhood of a non–degenerate differential Nash equilibrium there are no other Nash equilibria. This property is desirable particularly in applications where a central planner is designing incentives to induce a socially optimal or otherwise desirable equilibrium that optimizes the central planner’s cost; if the desired equilibrium resides on a continuum of equilibria, then due to measurement noise or myopic play, agents may be induced to play a nearby equilibrium that is suboptimal for the central planner. In Section V, we extend Example II by introducing a central planner. But first, we show that non–degenerate differential Nash equilibria are structurally stable.

## IV Structural Stability

Examples demonstrate that global Nash equilibria may fail to persist under arbitrarily small changes in player costs [10]. A natural question arises: do local Nash equilibria persist under perturbations? Applying structural stability analysis from dynamical systems theory, we answer this question affirmatively for non–degenerate differential Nash equilibria subject to smooth perturbations in player costs.

Let $f = (f_1, \ldots, f_n)$ and $\zeta = (\zeta_1, \ldots, \zeta_n)$ be player cost functions, $\omega$ the differential game form (10) associated with $f$, and suppose $u \in M$ is a non–degenerate differential Nash equilibrium, i.e. $\omega(u) = 0$ and $\mathrm{d}\omega(u)$ is non–degenerate. We show that for all $s$ sufficiently close to $0$ there exists a unique non–degenerate differential Nash equilibrium $\tilde{u}$ for $f + s\zeta$ near $u$.

**Proposition (Parameterized Structural Stability).** Non–degenerate differential Nash equilibria are parametrically structurally stable: given $f$, $\zeta$, and a non–degenerate differential Nash equilibrium $u$ for $f$, there exist neighborhoods $V$ of $0 \in \mathbb{R}$ and $W$ of $u$ such that for all $s \in V$ there exists a unique non–degenerate differential Nash equilibrium $\tilde{u} \in W$ for $f + s\zeta$.

###### Proof:

Define $\tilde{f}_j$ by

$$\tilde{f}_j(u, s) = f_j(u) + s\, \zeta_j(u)$$

and $\tilde{\omega}$ by

$$\tilde{\omega}(u, s) = \sum_{i=1}^{n} \psi_{M_i} \circ \mathrm{d}\tilde{f}_i(u, s)$$

for all $u \in M$ and $s \in \mathbb{R}$, where $\mathrm{d}$ denotes differentiation with respect to $u$. Observe that $\mathrm{d}\tilde{\omega}(u, 0) = \mathrm{d}\omega(u)$ is invertible since $u$ is a non–degenerate differential Nash equilibrium for $f$. Therefore by the Implicit Function Theorem [25, Prop. 3.3.13 (iii)], there exist neighborhoods $V$ of $0$ and $W$ of $u$ and a smooth function $\sigma : V \to W$ such that

$$\forall\, s \in V,\ u' \in W :\ \tilde{\omega}(u', s) = 0 \iff u' = \sigma(s).$$

Furthermore, since $\tilde{\omega}$ is continuously differentiable, there exists a neighborhood $V' \subset V$ of $0$ such that $\mathrm{d}\tilde{\omega}(\sigma(s), s)$ is invertible for all $s \in V'$. We conclude for all $s \in V'$ that $\sigma(s)$ is the unique Nash equilibrium for $f + s\zeta$ in $W$, and furthermore that $\sigma(s)$ is a non–degenerate differential Nash equilibrium. ∎

We remark that the preceding analysis extends directly to any finitely–parameterized perturbation. For an arbitrary perturbation, we have the following.

**Theorem (Structural Stability).** Non–degenerate differential Nash equilibria are structurally stable: let $u$ be a non–degenerate differential Nash equilibrium for $f = (f_1, \ldots, f_n)$. Then there exist neighborhoods $V$ of $f$ and $W$ of $u$ and a Fréchet–differentiable function $\sigma : V \to W$ such that for all $\tilde{f} \in V$ the point $\sigma(\tilde{f})$ is the unique non–degenerate differential Nash equilibrium for $\tilde{f}$ in $W$.

###### Proof:

Consider the operator $\Omega$ defined by

$$\Omega\big((\tilde{f}_1, \ldots, \tilde{f}_n), (u_1, \ldots, u_n)\big) = \sum_{i=1}^{n} \psi_{M_i} \circ \mathrm{d}\tilde{f}_i(u_1, \ldots, u_n). \tag{21}$$

Note that the right–hand side is the differential game form for the game $(\tilde{f}_1, \ldots, \tilde{f}_n)$. Suppose that $u = (u_1, \ldots, u_n)$ is a non–degenerate differential Nash equilibrium for $(f_1, \ldots, f_n)$. A straightforward application of Proposition 2.4.20 of [25] implies that the operator $\Omega$ is Fréchet–differentiable. In addition,

$$D_2 \Omega\big((f_1, \ldots, f_n), (u_1, \ldots, u_n)\big) = \mathrm{d}\omega(u_1, \ldots, u_n). \tag{22}$$

Since $\mathrm{d}\omega(u)$ is an isomorphism by assumption, we can apply the Implicit Function Theorem [25, Prop. 3.3.13 (iii)] to $\Omega$ to get an open neighborhood $V$ of $f = (f_1, \ldots, f_n)$ and $W$ of $u$ and a smooth function $\sigma : V \to W$ such that

$$\forall\, \tilde{f} \in V,\ v \in W :\ \Omega(\tilde{f}, v) = 0 \iff v = \sigma(\tilde{f}),$$

where $\sigma(f) = u$. Furthermore, since $\Omega$ is continuously differentiable, there exists a neighborhood $V' \subset V$ of $f$ such that $D_2 \Omega(\tilde{f}, \sigma(\tilde{f}))$ is invertible for all $\tilde{f} \in V'$. Thus, for all $\tilde{f} \in V'$, $\sigma(\tilde{f})$ is the unique non–degenerate differential Nash equilibrium. ∎

Let us return to Example II and examine what can happen in the degenerate case.

**Example (Betty–Sue: Structural Instability).** Let us recall again the Betty–Sue debacle in which we have a game admitting a continuum of differential Nash equilibria. We can show that an arbitrarily small perturbation will make all the equilibria disappear. Indeed, let $\varepsilon \neq 0$ be arbitrarily small and consider Betty’s perturbed cost function

$$\tilde{f}_1(u_1, u_2) = \frac{u_1^2}{2} - u_1 u_2 + \varepsilon u_1. \tag{23}$$

Let Sue’s cost function remain unchanged. Then, all Nash equilibria disappear. Indeed, a necessary condition that a Nash equilibrium must satisfy is $\tilde{\omega}(u) = 0$, thereby implying $u_1 - u_2 + \varepsilon = 0$ and $u_2 - u_1 = 0$. These can hold simultaneously only for $\varepsilon = 0$. Hence, any perturbation with $\varepsilon \neq 0$ will remove all the Nash equilibria.
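The disappearance of the whole continuum under the perturbation (23) can also be confirmed by a brute–force search for zeros of the perturbed differential game form; the grid bounds and tolerance below are arbitrary choices.

```python
import numpy as np

def omega_tilde(u1, u2, e):
    # Differential game form of the game perturbed as in (23):
    # components (u1 - u2 + e, u2 - u1), which sum to e.
    return np.array([u1 - u2 + e, u2 - u1])

def has_zero_on_grid(e, n=201, lim=5.0):
    xs = np.linspace(-lim, lim, n)
    return any(np.allclose(omega_tilde(a, b, e), 0.0, atol=1e-9)
               for a in xs for b in xs)
```

For $\varepsilon = 0$ the diagonal $u_1 = u_2$ supplies zeros; for any $\varepsilon \neq 0$ the two components sum to $\varepsilon$, so no point of the grid (indeed, no point of $\mathbb{R}^2$) is a zero.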

On the other hand, equilibria that are stable—thereby attracting under decoupled myopic approximate best–response—persist under small perturbations [26].

**Example (Convergence of Gradient Play).** We adopt a dynamical systems perspective of a two–player game over the strategy space $\mathbb{R} \times \mathbb{R}$ with player costs $f_1, f_2$. Specifically, we consider the continuous–time dynamical system generated by the negative of the players’ individual gradients:

$$\begin{bmatrix} \dot{u}_1 \\ \dot{u}_2 \end{bmatrix} = \begin{bmatrix} -D_1 f_1(u_1, u_2) \\ -D_2 f_2(u_1, u_2) \end{bmatrix} = -\omega(u).$$

If $u$ is a differential Nash equilibrium, then $\omega(u) = 0$. These dynamics are uncoupled in the sense that the dynamics for each player do not depend on the cost function of the other player. It is known that such uncoupled dynamics need not converge to local Nash equilibria [27]. However, non–degenerate differential Nash equilibria where the spectrum of $\mathrm{d}\omega(u)$ is strictly in the right–half plane (in the finite–dimensional case, this corresponds to all eigenvalues of $\mathrm{d}\omega(u)$ having strictly positive real parts) are exponentially stable stationary points of these dynamics [26, Prop. 4], [25, Thm. 4.3.4]. The structural stability theorem above shows that convergence of uncoupled gradient play to such stable non–degenerate differential Nash equilibria persists under small smooth perturbations to player costs.

We remark that in the finite–dimensional case we can show that non–degenerate differential Nash equilibria are generic among local Nash equilibria [17]. Genericity implies that local Nash equilibria in an open–dense set of continuous games are non–degenerate differential Nash equilibria. Furthermore, structural stability implies that these equilibria persist under smooth perturbations to player costs. As a consequence, small modeling errors or environmental disturbances generally do not result in games with drastically different equilibrium behavior.

## V Inducing a Nash Equilibrium

The problem of inducing Nash equilibria through incentive mechanisms appears in engineering applications including energy management [21] and network security [28, 29]. The central planner aims to shift the Nash equilibrium of the agents’ game to one that is desirable from its perspective. Thus the central planner optimizes its cost subject to constraints given by the inequalities that define a Nash equilibrium. This requires verification of non–convex conditions on an open set—a generally intractable task. A natural solution is to replace these inequalities with first– and second–order sufficient conditions on each agent’s optimization problem. As the Betty–Sue debacle shows, these conditions are not enough to guarantee the desired Nash equilibrium is isolated; the additional constraint that $\mathrm{d}\omega(u)$ be non–degenerate must be enforced.

**Example (Betty–Sue: Inducing Nash).** Consider a central planner who desires to optimize the cost of the agents’ deviation from the temperature $\tau$:

$$f_p(u_1, u_2) = (u_1 - \tau)^2 + (u_2 - \tau)^2. \tag{24}$$

The central planner wants to induce the agents to play $(\tau, \tau)$ by selecting $a$ and augmenting Betty’s and Sue’s costs:

$$\tilde{f}_1^a(u_1, u_2) = f_1(u_1, u_2) + \frac{a}{2}(u_1 - \tau)^2$$
$$\tilde{f}_2^a(u_1, u_2) = f_2(u_1, u_2) + \frac{a}{2}(u_2 - \tau)^2.$$

The differential game form of the augmented game is

$$\tilde{\omega}(u_1, u_2) = \big(u_1 - u_2 + a(u_1 - \tau)\big)\,\mathrm{d}u_1 + \big(u_2 - u_1 + a(u_2 - \tau)\big)\,\mathrm{d}u_2$$

and the second–order differential game form is

$$\mathrm{d}\tilde{\omega}(u_1, u_2) = \begin{bmatrix} 1 + a & -1 \\ -1 & 1 + a \end{bmatrix},$$

with eigenvalues $a$ and $a + 2$. For any $a > -1$, $(\tau, \tau)$ is a differential Nash equilibrium of the augmented game since $\tilde{\omega}(\tau, \tau) = 0$ and $D_{ii}^2 \tilde{f}_i^a(\tau, \tau) = 1 + a > 0$. For $a \in (-1, 0)$, however, the game exhibits undesirable behavior. Indeed, recall the gradient play example of Section IV, in which we consider the gradient dynamics for a two player game. For values of $a \in (-1, 0)$, $\mathrm{d}\tilde{\omega}(\tau, \tau)$ is indefinite so that the equilibrium of the gradient system is a saddle point. Hence, if agents perform gradient play and happen to initialize on the unstable manifold, then they will not converge to any equilibrium. Further, while $a = 0$ seems like a natural choice since it means not augmenting the players’ costs at all, it in fact gives rise to a continuum of equilibria. However, for $a > 0$, $\mathrm{d}\tilde{\omega}(\tau, \tau)$ is positive definite so that the gradient dynamics will converge, and the value of $a$ determines the contraction rate.
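The three incentive regimes can be read off the spectrum of the matrix appearing in the augmented game form; the sketch below classifies them directly from its eigenvalues $a$ and $a + 2$ (the labels are illustrative).

```python
import numpy as np

# Derivative of the augmented differential game form: [[1 + a, -1], [-1, 1 + a]].
def d_omega(a):
    return np.array([[1.0 + a, -1.0], [-1.0, 1.0 + a]])

def regime(a):
    ev = np.linalg.eigvalsh(d_omega(a))
    if np.any(np.isclose(ev, 0.0)):
        return "degenerate"   # e.g. a = 0: continuum of equilibria
    if np.all(ev > 0):
        return "stable"       # a > 0: gradient play contracts to (tau, tau)
    return "saddle"           # e.g. -1 < a < 0: an unstable manifold exists
```

Sweeping $a$ through negative, zero, and positive values reproduces the saddle, degenerate, and stable cases discussed above.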

This example indicates how undesirable behavior can arise when the operator $\mathrm{d}\omega$ is degenerate. Further, if the goal is to induce a particular Nash equilibrium amongst competitive agents, then it is not enough to consider only necessary and sufficient conditions for Nash equilibria; inducing stable non–degenerate differential Nash equilibria leads to desirable and structurally stable behavior.

## VI Discussion

By paralleling results in non–linear programming and optimal control, we developed first– and second–order necessary and sufficient conditions that characterize local Nash equilibria in continuous games on both finite– and infinite–dimensional strategy spaces. We further provided a second–order sufficient condition guaranteeing differential Nash equilibria are non–degenerate and, hence, isolated. We showed that non–degenerate differential Nash equilibria are structurally stable, and thus small modeling errors or environmental disturbances generally will not result in games with drastically different equilibrium behavior. Further, as a result of structural stability, our characterization of non–degenerate differential Nash equilibria is amenable to computation. We illustrated through an example that such a characterization has value for the design of incentives to induce desired equilibria. By enforcing not only non–degeneracy but also stability of a differential Nash equilibrium, the central planner can ensure that the desired equilibrium is isolated and that gradient play will converge locally.

## Appendix: Mathematical Preliminaries

This appendix contains the standard mathematical objects used throughout this paper (see [30, 25] for a more detailed introduction).

Suppose that $M$ is a second–countable Hausdorff topological space. Then a chart on $M$ is a homeomorphism $\varphi$ from an open subset $U$ of $M$ to an open subset of a Banach space. We sometimes denote a chart by the pair $(U, \varphi)$. Two charts $(U_1, \varphi_1)$ and $(U_2, \varphi_2)$ are $C^k$–compatible if and only if the composition $\varphi_2 \circ \varphi_1^{-1}$, where defined, is a $C^k$–diffeomorphism. A $C^k$–atlas on $M$ is a collection of charts, any two of which are $C^k$–compatible and such that the $U_i$’s cover $M$. A smooth manifold is a topological manifold with a smooth atlas. We use the term manifold generally; we specify whether it is a finite– or infinite–dimensional manifold only when it is not clear from context. If a covering by charts takes their values in a Banach space $E$, then $E$ is called the model space and we say that $M$ is a Banach manifold. We remark that one can form a manifold modeled on any linear space in which one has a theory of differential calculus; we use Banach manifolds so that we can utilize the inverse function theorem.

Suppose that $f : M \to N$ where $M$ and $N$ are $C^k$–manifolds. We say $f$ is of class $C^r$ with $r \le k$, and we write $f \in C^r(M, N)$, if for each $u \in M$ and each chart $(U, \varphi)$ of $M$ with $u \in U$, there is a chart $(V, \psi)$ of $N$ satisfying $f(U) \subset V$ and such that the local representation of $f$, namely $\psi \circ f \circ \varphi^{-1}$, is of class $C^r$. If $N = \mathbb{R}$, then $\psi$ can be taken to be the identity map so that the local representation is given by $f \circ \varphi^{-1}$.

Each $u \in M$ has an associated tangent space $T_u M$, and the disjoint union of the tangent spaces is the tangent bundle $TM$. The co–tangent space to $M$ at $u$, denoted $T_u^* M$, is the set of all real–valued linear functionals—or, simply, the dual—of the tangent space $T_u M$, and the disjoint union of the co–tangent spaces is the co–tangent bundle $T^* M$. Both $TM$ and $T^* M$ are naturally smooth manifolds [25, Thm. 3.3.10 and Ch. 5.2 resp.].

For a vector space $V$ we define the vector space $T_s^r(V)$ of continuous multilinear maps with $r$ copies of $V^*$ and $s$ copies of $V$, where $V^*$ denotes the dual. We say elements of $T_s^r(V)$ are tensors on $V$, contravariant of order $r$ and covariant of order $s$. Further, we use the notation $T_s^r(M)$ to denote the vector bundle of tensors contravariant of order $r$ and covariant of order $s$ [25, Def. 5.2.9]. In this notation, $T_0^1(M)$ is identified with the tangent bundle $TM$ and $T_1^0(M)$ with the co–tangent bundle $T^* M$.

Suppose $f : M \to N$ is a mapping of one manifold into another and $u \in M$; then by means of charts we can interpret the derivative of $f$ on each chart at $u$ as a linear mapping $Df(u) : T_u M \to T_{f(u)} N$. When $N = \mathbb{R}$, the collection of such maps defines a $1$–form $\mathrm{d}f : M \to T^* M$. More generally, a $1$–form is a continuous map $\theta : M \to T^* M$ satisfying $\pi \circ \theta = \mathrm{id}_M$, where $\pi$ is the natural projection mapping $T^* M$ to $M$.

A point $u \in M$ is said to be a critical point of a map $f \in C^r(M, \mathbb{R})$, $r \ge 2$, if $\mathrm{d}f(u) = 0$. At a critical point $u$, there is a uniquely determined continuous, symmetric, bilinear form $\mathrm{d}^2 f(u)$ (termed the Hessian), defined in any chart $\varphi$ at $u$ through the second derivative of the local representation $f \circ \varphi^{-1}$ [31, Prop. in §7]. We say $\mathrm{d}^2 f(u)$ is positive semi–definite if there exists $\alpha \ge 0$ such that for any chart $\varphi$,

$$\mathrm{d}^2 (f \circ \varphi^{-1})(\varphi(u))(v, v) \ge \alpha \|v\|^2, \quad \forall\, v \in T_{\varphi(u)} E. \tag{25}$$

If $\alpha > 0$, then we say $\mathrm{d}^2 f(u)$ is positive–definite. Both the criticality condition $\mathrm{d}f(u) = 0$ and positive definiteness are invariant with respect to the choice of coordinate chart.

Given a Banach space $V$ and a bounded, symmetric bilinear form $\beta$ on $V$, we say that $\beta$ is non–degenerate if the linear map $v \mapsto \beta(v, \cdot)$ is a linear isomorphism of $V$ onto $V^*$; otherwise $\beta$ is degenerate. A critical point $u$ of $f$ is called non–degenerate if the Hessian of $f$ at $u$ is non–degenerate [31, Def. in §7]. Degeneracy is independent of the choice of coordinate chart.
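In finite dimensions the non–degeneracy condition reduces to a matrix check, which the following sketch implements (the sample matrices are illustrative).

```python
import numpy as np

# A symmetric bilinear form B(v, w) = v @ Q @ w on R^n is non-degenerate
# exactly when v -> Q @ v is an isomorphism, i.e. Q has full rank.
def is_nondegenerate(Q, tol=1e-12):
    Q = np.asarray(Q, dtype=float)
    return bool(np.linalg.matrix_rank(Q, tol=tol) == Q.shape[0])
```

For instance, the Betty–Sue Hessian-like matrix $[[1, -1], [-1, 1]]$ is singular, hence degenerate, while any full-rank symmetric matrix is non–degenerate.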

Consider smooth manifolds $M_1, \ldots, M_n$. The product space $M = M_1 \times \cdots \times M_n$ is naturally a smooth manifold [25, Def. 3.2.4]. In particular, there is an atlas on $M$ composed of product charts $\varphi = (\varphi_1, \ldots, \varphi_n)$ where $\varphi_i$ is a chart on $M_i$ for $i \in \{1, \ldots, n\}$. We use the notation $M = M_1 \times \cdots \times M_n$ and $u = (u_1, \ldots, u_n)$.

There is a canonical isomorphism at each point $(u_1, \ldots, u_n)$ such that the co–tangent bundle of the product manifold splits:

$$T_{(u_1, \ldots, u_n)}^*(M_1 \times \cdots \times M_n) \cong T_{u_1}^* M_1 \oplus \cdots \oplus T_{u_n}^* M_n \tag{26}$$

where $\oplus$ denotes the direct sum of vector spaces. There are natural bundle maps

$$\psi_{M_i} : T^*(M_1 \times \cdots \times M_n) \to T^*(M_1 \times \cdots \times M_n) \tag{27}$$

annihilating all the components of an element of the co–tangent bundle other than those corresponding to $M_i$, for each $i \in \{1, \ldots, n\}$. In particular, under the isomorphism (26), $\psi_{M_i}(\alpha_1, \ldots, \alpha_n) = (0, \ldots, 0, \alpha_i, 0, \ldots, 0)$, where $\alpha_j \in T_{u_j}^* M_j$ and $0$ is the zero functional in $T_{u_j}^* M_j$ for each $j \neq i$.

Let $M = M_1 \times \cdots \times M_n$. Given a point $u = (u_1, \ldots, u_n) \in M$, the map $\iota_i : M_i \to M$ defined by $\iota_i(v) = (u_1, \ldots, u_{i-1}, v, u_{i+1}, \ldots, u_n)$ is the natural inclusion. Suppose we have a function $f \in C^r(M, \mathbb{R})$. Then the derivatives of the maps $f \circ \iota_i$ for each $i \in \{1, \ldots, n\}$ are called the partial derivatives of $f$ at $u$ [25, Prop. 2.4.12], and we denote them $D_i f(u)$. For second–order partial derivatives, we use the notation $D_{ij}^2 f(u)$.

## References

• [1] P. Frihauf, M. Krstic, and T. Başar, “Nash equilibrium seeking in noncooperative games,” IEEE Trans. Automat. Control, vol. 57, no. 5, pp. 1192–1207, 2012.
• [2] J. Shamma and G. Arslan, “Dynamic fictitious play, dynamic gradient play, and distributed convergence to Nash equilibria,” IEEE Trans. Automat. Control, vol. 50, no. 3, pp. 312–327, 2005.
• [3] S. Li and T. Başar, “Distributed algorithms for the computation of noncooperative equilibria,” Automatica, vol. 23, no. 4, pp. 523–533, 1987.
• [4] T. Başar, “Relaxation techniques and asynchronous algorithms for on-line computation of non-cooperative equilibria,” J. of Economic Dynamics and Control, vol. 11, no. 4, pp. 531–549, 1987.
• [5] J. Contreras, M. Klusch, and J. Krawczyk, “Numerical solutions to Nash–Cournot equilibria in coupled constraint electricity markets,” IEEE Trans. Power Syst., vol. 19, no. 1, pp. 195–206, 2004.
• [6] D. Dorsch, H. Jongen, and V. Shikhman, “On structure and computation of generalized Nash equilibria,” SIAM J. on Optimization, vol. 23, no. 1, pp. 452–474, 2013.
• [7] J. B. Rosen, “Existence and uniqueness of equilibrium points for concave n-person games,” Econometrica, vol. 33, no. 3, pp. 520–534, 1965.
• [8] F. Facchinei, A. Fischer, and V. Piccialli, “On generalized Nash games and variational inequalities,” Oper. Res. Lett., vol. 35, no. 2, pp. 159–164, 2007.
• [9] R. Thom, “L’optimisation simultanée et la théorie des jeux en topologie différentielle,” Comptes rendus des Journées Mathématiques de la Société Mathématique de France, vol. 3, pp. 63–70, 1974.
• [10] I. Ekeland, “Topologie différentielle et théorie des jeux,” Topology, vol. 13, no. 4, pp. 375–388, 1974.
• [11] S. Smale, “Global analysis and economics,” Synthese, vol. 31, no. 2, pp. 345–358, 1975.
• [12] S. D. Flåm, “Restricted attention, myopic play, and the learning of equilibrium,” Ann. of Oper. Res., vol. 82, pp. 473–482, 1998.
• [13] A. Muhammad and M. Egerstedt, “Decentralized coordination with local interactions: Some new directions,” in Cooperative Control.   Springer, 2005, pp. 153–170.
• [14] E. Klavins and D. E. Koditschek, “Phase regulation of decentralized cyclic robotic systems,” The Int. J. of Robotics Research, vol. 21, no. 3, pp. 257–275, 2002.
• [15] D. P. Bertsekas, Nonlinear programming.   Athena Scientific, 1999.
• [16] E. Polak, Optimization: algorithms and consistent approximations.   Springer New York, 1997.
• [17] L. J. Ratliff, S. A. Burden, and S. S. Sastry, “Genericity and structural stability of non–degenerate differential Nash equilibria,” in Proc. of the American Control Conf., 2014.
• [18] W. Krichene, J. Reilly, S. Amin, and A. Bayen, “Stackelberg routing on parallel networks with horizontal queues,” IEEE Trans. Automat. Control, vol. 59, no. 3, pp. 714–727, 2014.
• [19] U. O. Candogan, I. Menache, A. Ozdaglar, and P. A. Parrilo, “Near-optimal power control in wireless networks: a potential game approach,” in Proc. of the 29th IEEE Conf. on Information Communications, 2010, pp. 1–9.
• [20] J.-B. Park, B. H. Kim, J.-H. Kim, M.-H. Jung, and J.-K. Park, “A continuous strategy game for power transactions analysis in competitive electricity markets,” IEEE Trans. Power Syst., vol. 16, no. 4, pp. 847–855, Nov. 2001.
• [21] S. Coogan, L. J. Ratliff, D. Calderone, C. Tomlin, and S. S. Sastry, “Energy management via pricing in LQ dynamic games,” in Proc. of the American Control Conf., 2013, pp. 443–448.
• [22] A. Bressan and K. Han, “Nash equilibria for a model of traffic flow with several groups of drivers,” ESAIM: Control, Optimisation and Calculus of Variations, vol. 18, no. 4, pp. 969–986, 2012.
• [23] Q. Zhu, J. Zhang, P. Sauer, A. Dominguez-Garcia, and T. Başar, “A game-theoretic framework for control of distributed renewable-based energy resources in smart grids,” in Proc. of the American Control Conf., June 2012, pp. 3623–3628.
• [24] N. Stein, “Games on manifolds,” 2010, unpublished notes (personal correspondence).
• [25] R. Abraham, J. E. Marsden, and T. Ratiu, Manifolds, Tensor Analysis, and Applications, 2nd ed.   Springer, 1988.
• [26] L. J. Ratliff, S. A. Burden, and S. S. Sastry, “Characterization and computation of local Nash equilibria in continuous games,” in Proc. of the 51st Annu. Allerton Conf. on Communication, Control, and Computing, 2013.
• [27] S. Hart and A. Mas-Colell, “Uncoupled dynamics do not lead to Nash equilibrium,” American Economic Review, vol. 93, no. 5, pp. 1830–1836, 2003.
• [28] L. J. Ratliff, S. Coogan, D. Calderone, and S. S. Sastry, “Pricing in linear-quadratic dynamic games,” in Proc. of the 50th Annu. Allerton Conf. on Communication, Control, and Computing, 2012, pp. 1798–1805.
• [29] Q. Zhu, C. J. Fung, R. Boutaba, and T. Başar, “Guidex: A game-theoretic incentive-based mechanism for intrusion detection networks,” IEEE J. Sel. Areas Commun., pp. 2220–2230, 2012.
• [30] J. M. Lee, Introduction to smooth manifolds, 2nd ed.   Springer, 2012.
• [31] R. S. Palais, “Morse theory on Hilbert manifolds,” Topology, vol. 2, no. 4, pp. 299–340, 1963.