ALGAMES: A Fast Solver for Constrained Dynamic Games
Abstract
Dynamic games are an effective paradigm for dealing with the control of multiple interacting actors. This paper introduces ALGAMES (Augmented Lagrangian GAME-theoretic Solver), a solver that handles trajectory optimization problems with multiple actors and general nonlinear state and input constraints. Its novelty resides in satisfying the first-order optimality conditions with a quasi-Newton root-finding algorithm and rigorously enforcing constraints using an augmented Lagrangian formulation. We evaluate our solver in the context of autonomous driving on scenarios with a high level of interaction between the vehicles. We assess the robustness of the solver using Monte Carlo simulations. It is able to reliably solve complex problems like ramp merging with three vehicles three times faster than a state-of-the-art DDP-based approach. A model predictive control (MPC) implementation of the algorithm demonstrates real-time performance on complex autonomous driving scenarios with an update frequency higher than 60 Hz.
I Introduction
Controlling a robot in an environment where it interacts with other agents is a complex task. Traditional approaches in the literature adopt a predict-then-plan architecture: first, predictions of the other agents’ trajectories are computed; then they are fed into a planner that treats them as immutable obstacles. This approach is limiting because the effect of the robot’s trajectory on the other agents is ignored. Moreover, it can lead to the “frozen robot” problem, which arises when the planner finds that all paths to the goal are unsafe [Trautman2010]. It is therefore crucial for a robot to simultaneously predict the trajectories of other vehicles on the road while planning its own trajectory, in order to capture the reactive nature of all the agents in the scene. ALGAMES provides such a joint trajectory predictor and planner by considering all agents as players in a Nash-style dynamic game. We envision ALGAMES being run online by a robot in a receding-horizon loop, at each iteration planning a trajectory for the robot that explicitly accounts for the reactive nature of all agents in its vicinity.
Joint trajectory prediction and planning in scenarios with multiple interacting agents is well described by a dynamic game. Dealing with the game-theoretic aspect of multi-agent planning problems is a critical issue with a broad range of applications. For instance, in autonomous driving, ramp merging, lane changing, intersection crossing, and overtaking maneuvers all comprise some degree of game-theoretic interaction [Sadigh2016, Sadigh2016a, FridovichKeil2019a, Dreves2018, Fisac2019, Schmerling2018]. Other potential applications include mobile robots navigating in crowds, such as package delivery robots, tour guides, or domestic robots; robots interacting with people in factories, such as mobile robots or fixed-base multi-link manipulators; and competitive settings like drone and car racing [Spica2018, Liniger2019].
In this work, we seek solutions to constrained multiplayer general-sum dynamic games. In dynamic games, the players’ strategies are sequences of decisions. It is important to note that, unlike traditional optimization problems, non-cooperative games have no “optimal” solution. Depending on the structure of the game, asymmetry between players, etc., different solution concepts are possible. In this work, we search for Nash equilibrium solutions. This type of equilibrium models symmetry between the players: all players are treated equally. At such equilibria, no player can reduce its cost by unilaterally changing its strategy. For extensive details about the game-theory concepts addressed in this paper, we refer readers to the work of Bressan [Bressan2010] and Basar et al. [Basar1999].
Our solver is aimed at finding a Nash equilibrium for multiplayer dynamic games, and can handle general nonlinear state and input constraints. This is particularly important for robotic applications, where the agents often interact through their desire to avoid collisions with one another or with the environment. Such interactions are most naturally represented as (typically nonlinear) state constraints. This is a crucial feature that sets game-theoretic methods for robotics apart from game-theoretic methods in other domains, such as economics, behavioral sciences, and robust control. In those domains, agent interactions are traditionally represented in the objective functions themselves, and the games typically have no state or input constraints. In the mathematics literature, Nash equilibria with constraints are referred to as Generalized Nash Equilibria [Facchinei2007]. Hence, in this paper we present an augmented Lagrangian solver for finding Generalized Nash Equilibria, specifically tailored to robotics applications.
Our solver assumes that players are rational agents acting to minimize their costs. This rational behavior is formulated using the first-order necessary conditions for Nash equilibria, analogous to the Karush-Kuhn-Tucker (KKT) conditions in optimization. By relying on an augmented Lagrangian approach to handle constraints, the solver is able to solve multiplayer games with several agents and a high level of interaction at real-time speeds; our MPC implementation updates Nash equilibrium strategies for three autonomous cars in a freeway merging scenario at over 60 Hz. Our primary contributions are:

A general solver for dynamic games aimed at identifying Generalized Nash Equilibrium strategies.

A real-time MPC implementation of the solver able to handle noise, disturbances, and collision constraints (Fig. 1).

A speed comparison with iLQGames [FridovichKeil2019a]: ALGAMES finds Nash equilibria 3 times faster than iLQGames for a fixed constraint satisfaction criterion.
II Related Work
II-A Equilibrium Selection
Recent work focused on solving multiplayer dynamic games can be categorized by the type of equilibrium they select. Several works [Sadigh2016, Sadigh2016a, Liniger2019, Yoo2012] have opted to search for Stackelberg equilibria, which model an asymmetry of information between players. These approaches are usually formulated for games with two players: a leader and a follower. The leader chooses its strategy first; then the follower selects the best response to the leader’s strategy. Alternatively, a Nash equilibrium does not introduce a hierarchy between players; each player’s strategy is the best response to the other players’ strategies. As pointed out in [Fisac2019], searching for open-loop Stackelberg equilibrium strategies can fail on simple examples. In the context of autonomous driving, for instance, when players’ cost functions depend only on their own state and control trajectories, the solution becomes trivial: the leader ignores mutual collision constraints, and the follower has to adapt to this strategy. This behavior can be overly aggressive for the leader (or overly passive for the follower) and does not capture the game-theoretic nature of the problem; see Figure 2.
Nash equilibria have been investigated in [FridovichKeil2019a, Dreves2018, Spica2018, Britzelmeier2019, Di2018, Di2020, Di2020a]. We also take the approach of searching for Nash equilibria, as this type of equilibrium seems better suited to symmetric, multi-robot interaction scenarios. Indeed, we have observed more natural behavior emerging from Nash equilibria than from Stackelberg equilibria when solving for open-loop strategies.
II-B Game-Theoretic Trajectory Optimization
Most of the algorithms proposed in the robotics literature to solve for game-theoretic equilibria can be grouped into four types. First are algorithms aimed at finding Nash equilibria that rely on decomposition, such as Jacobi or Gauss-Seidel methods [Spica2018, Britzelmeier2019, Wang2019a]. These algorithms are based on an iterative best-response scheme in which players take turns improving their strategies while considering the other agents’ strategies as immutable. This type of approach is easy to interpret and scales reasonably well with the number of players. However, convergence of these algorithms is not well understood [Facchinei2007], and special care is required to capture the game-theoretic nature of the problem [Spica2018]. Moreover, solving for a Nash equilibrium until convergence can require many iterations, each of which is a (possibly expensive) trajectory optimization problem. This can lead to prohibitively long solution times.
Second, there are a variety of algorithms based on dynamic programming. In [Fisac2019], a Markovian Stackelberg strategy is computed via dynamic programming. This approach seems to capture the game-theoretic nature of autonomous driving. However, dynamic programming suffers from the curse of dimensionality, and therefore practical implementations rely on simplified dynamics models coupled with coarse discretization of the state and input spaces. To counterbalance these approximations, a lower-level planner informed by the state values under the Markovian Stackelberg strategy is run. This approach, which scales exponentially with the state dimension, has only been demonstrated in a two-player setting. Adding more players is likely to prevent real-time application of this algorithm. In contrast, our proposed approach scales polynomially with the number of players (see Section IV-E).
Third, algorithms akin to differential dynamic programming have been developed for robust control [Morimoto2003] and later applied to game-theoretic problems [FridovichKeil2019a, Di2018]. This approach scales polynomially with the number of players and is fast enough to run in real time in a model-predictive control (MPC) fashion [FridovichKeil2019a]. However, this type of approach does not natively handle constraints. Collision-avoidance constraints are typically handled using large penalties that can result in numerical ill-conditioning which, in turn, can impact the robustness or the convergence rate of the solver. Moreover, it leads to a trade-off between trajectory efficiency and avoiding collisions with other players.
Finally, algorithms that are analogous to direct methods in trajectory optimization have also been developed [Di2020, Di2020a]. An algorithm based on a first-order splitting method was proposed by Di [Di2020a]; it is known to have a linear convergence rate, and the experiments presented with this work show that it typically requires many iterations to converge. A different approach based on Newton’s method has been proposed [Di2020], but it is restricted to unconstrained dynamic games. Our solver belongs to this family of approaches. It also relies on a second-order Newton-type method, but it is able to handle general state and control input constraints. In addition, we demonstrate convergence on relatively complex problems in typically far fewer iterations.
II-C Generalized Nash Equilibrium Problems
As mentioned above, we focus on finding Nash equilibria for multiplayer games in which players are coupled through shared state constraints (such as collisionavoidance constraints). Therefore, these problems are instances of Generalized Nash Equilibrium Problems (GNEPs). The operations research field has a rich literature on GNEPs [Pang2005, Facchinei2006, Facchinei2009, Facchinei2010, Fukushima2011]. Exact penalty methods have been proposed to solve GNEPs [Facchinei2006, Facchinei2009]. Complex constraints such as those that couple players’ strategies are handled using penalties, allowing solution of multiplayer games jointly for all the players. However, these exact penalty methods require minimization of nonsmooth objective functions, which leads to slow convergence rates in practice.
In the same vein, a penalty approach relying on an augmented Lagrangian formulation of the problem has been advanced by Pang et al. [Pang2005]. This work, however, converts the augmented Lagrangian formulation to a set of KKT conditions, including complementarity constraints. The resulting constraint-satisfaction problem is solved with an off-the-shelf linear complementarity problem (LCP) solver that exploits the linearity of a specific problem. Our solver, in contrast, is not tailored to a specific example and can solve general GNEPs. It draws inspiration from the augmented Lagrangian formulation, which does not introduce nonsmooth terms into the objective function, enabling fast convergence. Moreover, this formulation avoids ill-conditioning, which improves the numerical robustness of our solver.
III Problem Statement
In the discretized trajectory optimization setting with $N$ time steps, we denote by $n$ the state size, $m$ the control input size, $x_k$ the state, and $u_k^\nu$ the control input of player $\nu$ at the $k$-th time step. In formulating the game, we do not distinguish between the robot carrying out the computation and the other agents whose trajectories it is predicting. All agents are modeled equivalently, as is typical in the case of Nash-style games.
Following the formalism of Facchinei [Facchinei2007], we consider the GNEP with $M$ players. Each player $\nu$ decides over its control input variables $U^\nu$. This is player $\nu$’s strategy, whose dimension is the number of control inputs controlled by player $\nu$ multiplied by the length of the trajectory. By $U^{-\nu}$, we denote the vector of all the players’ strategies except that of player $\nu$. Additionally, we define the trajectory of state variables $X = [x_1^T, \dots, x_N^T]^T$, which results from applying all the control inputs decided by the players to a joint dynamical system,
(1)  $x_{k+1} = f(x_k, u_k^1, \dots, u_k^M),$
with $k$ denoting the time step index. The kinodynamic constraints over the whole trajectory can be expressed with equality constraints,
(2)  $D(X, U) = 0.$
The cost function of each player is denoted $J^\nu(X, U^\nu)$. It depends on player $\nu$’s control inputs $U^\nu$ as well as on the state trajectory $X$, which is shared with all the other players. The goal of player $\nu$ is to select a strategy $U^\nu$ and a state trajectory $X$ that minimize the cost function $J^\nu$. Naturally, the choice of state trajectory is constrained by the other players’ strategies and the dynamics of the system via Equation 2. In addition, the strategy $U^\nu$ must respect a set of constraints that depends on the state trajectory $X$ as well as on the other players’ strategies $U^{-\nu}$. We express this with a concatenated set of inequality constraints $C(X, U) \leq 0$. Formally, player $\nu$ solves

(III)  $\min_{X, U^\nu} \; J^\nu(X, U^\nu) \quad \text{s.t.} \quad D(X, U) = 0, \quad C(X, U) \leq 0.$

Problem (III) is a GNEP because of the constraints that couple the strategies of all the players. A solution of this GNEP (a generalized Nash equilibrium) is a vector $U$ such that, for all $\nu$, $U^\nu$ is a solution to (III) with the other players’ strategies fixed to $U^{-\nu}$. This means that at an equilibrium point, no player can decrease their cost by unilaterally changing their strategy $U^\nu$ to any other feasible point.
When solving for a generalized Nash equilibrium of the game, we identify open-loop Nash equilibrium trajectories, in the sense that the whole trajectory $U^\nu$ is the best response to the other players’ strategies given the initial state of the system. Thus the control signal is a function of time, not of the current state of the system.
IV Augmented Lagrangian Formulation
We propose an algorithm to solve the previously defined GNEP in the context of trajectory optimization. We express the condition that players are acting optimally to minimize their cost functions subject to constraints as an equality. To do so, we first derive the augmented Lagrangian associated with (III) solved by each player. Then, we use the fact that, at an optimal point, the gradient of the augmented Lagrangian is null [Bertsekas2014]. Therefore, at a generalized Nash equilibrium point, the gradients of the augmented Lagrangians of all players must be null. Concatenating this set of equality constraints with the dynamics equality constraints, we obtain a set of equations that we solve using a quasi-Newton root-finding algorithm.
IV-A Individual Optimality
First, without loss of generality, we suppose that the vector $C$ is actually the concatenated set of inequality and equality constraints, i.e. $C = [C_i^T, C_e^T]^T$, where $C_i$ is the vector of inequality constraints and $C_e$ is the vector of equality constraints. To embed the notion that each player is acting optimally, we formulate the augmented Lagrangian associated with (III) for player $\nu$. The dynamics constraints are handled with the Lagrange multiplier term $\mu^\nu$, while the other constraints are dealt with using both a multiplier and a quadratic penalty term specific to the augmented Lagrangian formulation. We denote by $\lambda$ the Lagrange multipliers associated with the vector of constraints $C$; $\rho$ is a penalty weight vector;
(3)  $L^\nu(X, U) = J^\nu(X, U^\nu) + \mu^{\nu T} D(X, U) + \lambda^T C(X, U) + \tfrac{1}{2} C(X, U)^T I_\rho\, C(X, U),$
where $I_\rho$ is a diagonal matrix defined as,
(4)  $I_{\rho,ii} = \begin{cases} 0 & \text{if } c_i(X, U) < 0 \text{ and } \lambda_i = 0, \text{ with } c_i \text{ an inequality constraint}, \\ \rho_i & \text{otherwise}, \end{cases}$
where $i$ indexes the constraint. It is important to notice that the Lagrange multipliers $\mu^\nu$ associated with the dynamics constraints are specific to each player $\nu$, but the Lagrange multipliers $\lambda$ and penalties $\rho$ are common to all players. Given the appropriate Lagrange multipliers $\mu^\nu$ and $\lambda$, the gradient of the augmented Lagrangian with respect to the individual decision variables is null at an optimal point of (III). The fact that player $\nu$ is acting optimally to minimize $J^\nu$ under the constraints $D$ and $C$ can therefore be expressed as follows,
(5)  $G^\nu(X, U, \mu^\nu) = \nabla_{X, U^\nu} L^\nu(X, U) = 0.$
It is important to note that this equality constraint preserves coupling between players, since the gradient $G^\nu$ depends on the other players’ strategies $U^{-\nu}$.
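As a concrete illustration of Equations 3-5, the sketch below evaluates the gradient of one player's augmented Lagrangian, with the active-set logic of Equation 4 deciding which penalties are switched on. This is a minimal sketch under simplifying assumptions: the cost gradient and the constraint Jacobians are assumed to be precomputed, and all names (`active_penalty`, `al_gradient`) are illustrative, not from the original implementation.

```python
import numpy as np

def active_penalty(C, lam, rho):
    """Diagonal of I_rho (Eq. 4): the penalty rho_i is active when the
    i-th constraint is violated (c_i > 0) or its multiplier is nonzero."""
    return np.where((C > 0) | (lam > 0), rho, 0.0)

def al_gradient(grad_J, jac_D, mu, jac_C, C, lam, rho):
    """Gradient of the augmented Lagrangian (Eq. 3) w.r.t. one player's
    decision variables: grad_J + D'^T mu + C'^T (lam + I_rho C)."""
    Irho = active_penalty(C, lam, rho)
    return grad_J + jac_D.T @ mu + jac_C.T @ (lam + Irho * C)
```

A satisfied inequality constraint with a zero multiplier contributes nothing to the gradient, while a violated one adds a penalty term scaled by its weight, which is the hysteresis behavior described in Section VI-C.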
IV-B Root-Finding Problem
At a generalized Nash equilibrium, all players are acting optimally and the dynamics constraints are respected. Therefore, to find an equilibrium point, we have to solve the following root-finding problem,
find $X$, $U$, $\mu$ such that
$G^\nu(X, U, \mu^\nu) = 0, \quad \forall \nu \in \{1, \dots, M\},$
$D(X, U) = 0.$
We use Newton’s method to solve this root-finding problem. We denote by $G(X, U, \mu)$ the concatenation of the augmented Lagrangian gradients of all players and the dynamics constraints, $G = [G^{1T}, \dots, G^{MT}, D^T]^T$, where $\mu = [\mu^{1T}, \dots, \mu^{MT}]^T$. We compute the first-order derivative of $G$ with respect to the primal variables $X$, $U$ and the dual variables $\mu$, which we concatenate in a single vector $y$,
(6)  $H = \nabla_y G(y).$
Newton’s method allows us to identify a search direction $\delta y$ in the primal-dual space,
(7)  $\delta y = -H^{-1} G(y).$
We couple this search direction with a backtracking line search [Nocedal2006], given in Algorithm 1, to ensure local convergence to a solution using Newton’s method [Nocedal2006], detailed in Algorithm 2.
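The Newton iteration of Equation 7 coupled with a backtracking line search can be sketched as follows. This is an illustrative toy version, not Algorithms 1-2 themselves: it uses a dense solve instead of the sparse back-substitution of Section IV-E, and it folds in the regularization discussed in Section IV-F; all names are hypothetical.

```python
import numpy as np

def newton_step(G, H, y, reg=1e-6):
    """Regularized quasi-Newton search direction (Eq. 7):
    solve (H + reg*I) dy = -G(y). The regularization keeps the
    system invertible when the equilibrium is non-unique."""
    Hy = H(y) + reg * np.eye(len(y))
    return np.linalg.solve(Hy, -G(y))

def line_search_newton(G, H, y0, alpha0=1.0, beta=0.5, tol=1e-8, max_iter=50):
    """Backtracking line search on the residual norm ||G||:
    a step is accepted only if it decreases the residual."""
    y = y0.copy()
    for _ in range(max_iter):
        if np.linalg.norm(G(y)) < tol:
            break
        dy = newton_step(G, H, y)
        alpha = alpha0
        while (np.linalg.norm(G(y + alpha * dy)) >= np.linalg.norm(G(y))
               and alpha > 1e-10):
            alpha *= beta  # halve the step until the residual decreases
        y = y + alpha * dy
    return y
```

On a scalar test residual such as $G(y) = y^3 - 8$ with Jacobian $3y^2$, the iteration converges to the root $y = 2$ from a unit initial guess.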
IV-C Augmented Lagrangian Updates
To obtain convergence of the Lagrange multipliers $\lambda$, we update them with a dual-ascent step. This update can be seen as shifting the value of the penalty terms into the Lagrange multiplier terms,
(8)  $\lambda_i \leftarrow \max(0, \lambda_i + \rho_i c_i(X, U))$ for inequality constraints, $\quad \lambda_i \leftarrow \lambda_i + \rho_i c_i(X, U)$ for equality constraints.
We also update the penalty weights according to an increasing schedule, with $\gamma > 1$:
(9)  $\rho_i \leftarrow \gamma \rho_i.$
IV-D ALGAMES
By combining Newton’s method for finding the point where the dynamics are respected and the gradients of the augmented Lagrangians are null with the Lagrange multiplier and penalty updates, we obtain our solver ALGAMES (Augmented Lagrangian GAME-theoretic Solver), presented in Algorithm 3. The algorithm, which iteratively solves the GNEP, requires as inputs an initial guess for the primal-dual variables and initial penalty weights. The algorithm outputs the open-loop strategies of all players and the Lagrange multipliers associated with the dynamics constraints.
IV-E Algorithm Complexity
Following a quasi-Newton approximation of the matrix $H$ [Nocedal2006], we neglect some of the second-order derivative terms associated with the constraints. Therefore, the most expensive part of the algorithm is the Newton step defined by Equation 7. By exploiting the sparsity pattern of the matrix $H$, we can solve Equation 7 in $O(N(n+m)^3)$. Indeed, the sparsity structure allows us to perform a back-substitution scheme akin to solving a Riccati equation, which has known complexity $O(N(n+m)^3)$. The complexity is cubic in the number of states $n$ and the number of control inputs $m$, which are typically linear in the number of players $M$. Therefore, the overall complexity of the algorithm is $O(NM^3)$.
IV-F Algorithm Discussion
Here we discuss the inherent difficulty in solving for Nash equilibria in large problems, and explain some of the limitations of our approach. First of all, finding a Nash equilibrium is a nonconvex problem in general. Indeed, it is known that even for single-shot discrete games, solving for exact Nash equilibria is computationally intractable for a large number of players [DaskalakisEtAlSIAMJournalonComputing08ComplexityOfNash]. It is therefore not surprising that, in our more difficult setting of a dynamic game in continuous space, no guarantees can be provided about finding an exact Nash equilibrium. Furthermore, in complex interaction spaces, constraints can be highly nonlinear and nonconvex. This is the case in the autonomous driving context with collision avoidance constraints. In this setting, one cannot expect to find an algorithm working in polynomial time with guaranteed convergence to a Nash equilibrium respecting constraints. On the other hand, local convergence of Newton’s method to open-loop Nash equilibria (that is, starting sufficiently close to the equilibrium, the algorithm will converge to it) has been established in the unconstrained case [Di2020]. Our approach solves a sequence of unconstrained problems via the augmented Lagrangian formulation. Each of these problems, therefore, has guaranteed local convergence. However, the overall method has no guarantee of global convergence to a generalized Nash equilibrium, and this is expected given the known computational difficulty of the problem.
Second, our algorithm requires an initial guess for the state and control input trajectories and for the dynamics multipliers. Empirically, we observe that choosing zero control inputs and multipliers, and simply rolling out the dynamics starting from the initial state, was a sufficiently good initial guess to obtain convergence to a local optimum that respects both the constraints and the first-order optimality conditions. For a detailed empirical study of the convergence of ALGAMES and its failure cases, we refer to Sections VI-D and VI-E.
Finally, even for simple linear quadratic games, the Nash equilibrium solution is not necessarily unique. In general, an entire subspace of equilibria exists. In this case, the matrix $H$ in Equation 7 will be singular. In practice, we regularize this matrix so that large steps are penalized, resulting in an invertible matrix and convergence to a Nash equilibrium that minimizes the norm of the step $\delta y$.
V Simulations: Design and Setup
We choose to apply our algorithm in the autonomous driving context. Indeed, many maneuvers like lane changing, ramp merging, overtaking, and intersection crossing involve a high level of interaction between vehicles. We assume a single car is computing the trajectories for all cars in its neighborhood, so as to find its own trajectory to act safely among the group. We assume that this car has access to a relatively good estimate of the surrounding cars’ objective functions. Such an estimate could, in principle, be obtained by applying inverse optimal control on observed trajectories of the surrounding cars.
In a real application, the car could be surrounded by other cars that might not necessarily follow a Nash equilibrium strategy. In this case, we demonstrate empirically that by repeating the computation as frequently as possible in an MPC fashion, we obtain safe and adaptive autonomous behaviors.
V-A Autonomous Driving Problem
Constraints
Each vehicle in the scene is an agent of the game. Our objective is to find a generalized Nash equilibrium trajectory for all of the vehicles. These trajectories have to be dynamically feasible. The dynamics constraints at time step $k$ are expressed as follows,
(10)  $x_{k+1} = f(x_k, u_k^1, \dots, u_k^M).$
We consider a nonlinear unicycle model for the dynamics of each vehicle. A vehicle’s state is composed of a 2D position, a heading angle, and a scalar velocity. The control input is composed of an angular velocity and a scalar acceleration.
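A minimal sketch of the discretized unicycle model follows, assuming an explicit Euler discretization and the state ordering [px, py, theta, v]; both the discretization scheme and the ordering are our assumptions, not stated in the text.

```python
import numpy as np

def unicycle_step(x, u, dt=0.1):
    """One discrete step of the unicycle model via explicit Euler.
    State x = [px, py, theta, v]; input u = [omega, a]
    (angular velocity and scalar acceleration)."""
    px, py, theta, v = x
    omega, a = u
    return np.array([px + dt * v * np.cos(theta),
                     py + dt * v * np.sin(theta),
                     theta + dt * omega,
                     v + dt * a])
```

Rolling this model out from the initial state with zero controls is exactly the kind of dynamically feasible rollout used as an initial guess in Section IV-F.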
In addition, it is critical that the trajectories respect collision-avoidance constraints. We model the collision zone of each vehicle as a circle of radius $r$. The collision constraints between vehicles are then simply expressed in terms of the positions $p^\nu$ of the vehicles,
(11)  $(r^\nu + r^{\nu'})^2 - \| p_k^\nu - p_k^{\nu'} \|^2 \leq 0, \quad \forall\, \nu \neq \nu'.$
We also model boundaries of the road to force the vehicles to remain on the roadway. This means that the distance between the vehicle and the closest point $q$ on each boundary has to remain larger than the collision circle radius $r^\nu$,
(12)  $(r^\nu)^2 - \| p_k^\nu - q \|^2 \leq 0.$
In summary, based on reasonable simplifying assumptions, we have expressed the driving problem in terms of nonconvex and nonlinear coupled constraints.
Cost Function
We use a quadratic cost function penalizing the use of control inputs and the distance between the current state and the desired final state of the trajectory,
(13)  $J^\nu(X, U^\nu) = \sum_{k=1}^{N-1} \tfrac{1}{2}(x_k - x_f)^T Q\, (x_k - x_f) + \tfrac{1}{2} u_k^{\nu T} R\, u_k^\nu$
(14)  $\qquad\qquad + \tfrac{1}{2}(x_N - x_f)^T Q_f\, (x_N - x_f).$
This cost function only depends on the decision variables of vehicle $\nu$. Players’ behaviors are coupled only through collision constraints. We could also add terms depending on other vehicles’ strategies, such as a congestion penalty.
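A minimal sketch of the quadratic cost in Equations 13-14 follows; the weight matrices Q and R and the function names are illustrative placeholders, and the terminal term is folded into the stage sum for brevity.

```python
import numpy as np

def stage_cost(x, u, x_goal, Q, R):
    """Quadratic stage cost for one player: penalize distance to the
    desired final state and control effort (Eqs. 13-14)."""
    dx = x - x_goal
    return 0.5 * dx @ Q @ dx + 0.5 * u @ R @ u

def trajectory_cost(X, U, x_goal, Q, R):
    """Sum of the stage costs along one player's trajectory."""
    return sum(stage_cost(x, u, x_goal, Q, R) for x, u in zip(X, U))
```

Because the cost touches only the player's own inputs and the shared state, the game coupling indeed enters solely through the constraints of Equations 11-12.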
VI Comparison to iLQGames
VI-A Motivation
In order to evaluate the merits of ALGAMES, we compare it to iLQGames [FridovichKeil2019a], which is a DDP-based algorithm for solving general dynamic games. Both algorithms solve the problem by iteratively solving linear-quadratic approximations that have an analytical solution [Basar1999]. For iLQGames, the augmented objective function differs from the objective function $J^\nu$ by a quadratic term penalizing constraint violations,
(15)  $\hat J^\nu(X, U) = J^\nu(X, U^\nu) + \tfrac{\rho}{2} \sum_i \hat c_i(X, U)^2,$
where $\hat c_i$ is defined by,
(16)  $\hat c_i(X, U) = \max(0, c_i(X, U)).$
Here $\rho$ is an optimization hyperparameter that we can tune to satisfy constraints. For ALGAMES, the augmented objective function is actually an augmented Lagrangian; see Equation 3. The hyperparameters for ALGAMES are the initial value of $\rho$ and its increase rate $\gamma$ defined in Equation 9.
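The structural difference between the two augmented objectives can be sketched side by side. This is an illustrative comparison of Equations 15-16 against Equation 3 for inequality constraints only; the function names are ours.

```python
import numpy as np

def penalty_objective(J, C, rho):
    """iLQGames-style pure quadratic penalty (Eqs. 15-16): only violated
    inequality constraints (c_i > 0) contribute rho/2 * c_i^2."""
    viol = np.maximum(0.0, C)
    return J + 0.5 * rho * viol @ viol

def al_objective(J, C, lam, rho):
    """Augmented Lagrangian counterpart (Eq. 3, inequality part): the
    multiplier term lam lets constraints be enforced without driving
    rho to extreme, ill-conditioned values."""
    Irho = np.where((C > 0) | (lam > 0), rho, 0.0)
    return J + lam @ C + 0.5 * C @ (Irho * C)
```

In the pure penalty method, tightening constraint satisfaction requires growing rho; in the augmented Lagrangian, the dual update of Equation 8 carries that burden instead, which is why the same rho schedule works across scenarios.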
Table 4. Average solve times of ALGAMES and iLQGames on the ramp merging and intersection scenarios with two, three, and four players (timing values not recovered in this extraction).
VI-B Timing Experiments
We evaluate the performance of both algorithms in two scenarios (see Figures 3 and 7), with the number of players varying from two to four. To compare the speed of both algorithms, we set the termination criterion as a threshold on the maximum constraint violation. The timing results, averaged over 100 samples, are presented in Table 4. First, we notice that both algorithms achieve real-time or near-real-time performance on complex autonomous driving scenarios (the horizon of the solvers is fixed to 40 time steps).
We observe that the speed performance of ALGAMES and iLQGames is comparable in the ramp merging scenario. For this scenario, we tuned the value of the penalty $\rho$ for iLQGames. Notice that for all scenarios the dimensions of the problem are scaled so that the velocities and displacements are all the same order of magnitude. For the intersection scenario, we observe that the two-player and four-player cases both have much higher solve times for iLQGames compared to the three-player case. Indeed, in those two cases, we had to increase the penalty $\rho$ substantially; otherwise, iLQGames would plateau and never reach the constraint satisfaction criterion. This, in turn, slowed the algorithm down by decreasing the constraint violation convergence rate.
VI-C Discussion
The main takeaway from these experiments is that, for a given scenario, it is generally possible to find a suitable value for $\rho$ that will ensure the convergence of iLQGames to constraint satisfaction. With higher values of $\rho$, we can reach better constraint satisfaction at the expense of a slower convergence rate. In the context of a receding-horizon implementation (MPC), finding a good choice of $\rho$ that would suit the whole sequence of scenarios encountered by a vehicle could be difficult. In contrast, the same hyperparameters $\rho$ and $\gamma$ were used in ALGAMES for all the experiments across this paper. This supports the idea that, thanks to its adaptive penalty scheme, ALGAMES requires little tuning.
While performing the timing experiments, we also noticed several instances of oscillatory behavior for iLQGames. The solution would oscillate, preventing it from converging. This happened even after an adaptive regularization scheme was implemented to regularize iLQGames’ Riccati backward passes. Oscillatory behavior was not seen with ALGAMES. We hypothesize that this is due to the dual ascent update coupled with the penalty logic detailed in Equations 8 and 4, which add hysteresis to the solver.
Scenario | Freq. (Hz) | Mean solve time (ms) | Max solve time (ms)
Ramp Merging | 69 | 14 | 72
Intersection | 66 | 15 | 66
VI-D Monte Carlo Analysis
To evaluate the robustness of ALGAMES, we performed a Monte Carlo analysis of its performance on a ramp merging problem. First, we set up a roadway with hard boundaries as pictured in Fig. 3. We position two vehicles on the roadway and one on the ramp in a collision-free initial configuration. We choose a desired final state where the incoming vehicle has merged into the traffic. Our objective is to generate generalized Nash equilibrium trajectories for the three vehicles. These trajectories are collision-free and cannot be improved unilaterally by any player. To introduce randomness in the solving process, we apply a random perturbation to the initial state of the problem. Specifically, we perturb the initial state by adding uniformly sampled noise, which corresponds to displacing the initial positions of the vehicles, changing their initial velocities, and changing their headings.
We observe in Figure 5 that ALGAMES consistently finds a satisfactory solution to the problem using the same hyperparameters $\rho$ and $\gamma$. Out of the 1000 samples, 995 converged to constraint satisfaction while respecting the optimality criterion on $\|G\|$. By definition, $\|G\|$ is a merit function for satisfying optimality and dynamics constraints. We also observe that the solver converges to a solution quickly for the large majority of the samples, and requires fewer than 16 Newton steps to converge for most of them. These empirical data support the claim that ALGAMES is able to solve this class of ramp merging problems quickly and reliably.
For comparison, we present in Figure 5 the results obtained with iLQGames. We apply the same constraint satisfaction criterion. We fixed the value of the penalty hyperparameter $\rho$ for all the samples, as it would not be a fair comparison to tune it for each sample. Only 3 samples did not converge with iLQGames; this is comparable to ALGAMES, for which 5 samples failed to converge. However, we observe that iLQGames is, on average, 3 times slower than ALGAMES and requires roughly 4 times more iterations (41 against 9).
VI-E Solver Failure Cases
The Monte Carlo analysis allows us to identify the typical failure cases of our solver. We empirically identify the cases where the solver does not satisfy the constraints or the optimality criterion for the ramp merging problem. Typically, in such cases, the initial guess, which consists of rolling out the dynamics with no control, is far from a reasonable solution. Since the constraints are ignored during this initial rollout, the car at the back can overtake the car at the front by driving through it. This creates an initial guess where constraints are strongly violated. Moreover, we hypothesize that the tight roadway boundary constraints tend to strongly penalize solutions that would “disentangle” the car trajectories, because they would require large boundary violations at first. Therefore, the solver gets stuck in a local optimum where cars overlap each other. Sampling several initial guesses with random initial control inputs and solving them in parallel could reduce the occurrence of these failure cases. Being able to detect, reject, and resample initial guesses when the initial car trajectories are strongly entangled could also improve the robustness of the solver.
VII MPC Implementation of ALGAMES
In this section, we propose a model-predictive control (MPC) implementation of the algorithm that demonstrates real-time performance. The benefits of MPC are twofold: it provides a feedback policy instead of an open-loop strategy, and it can improve interactions with actors for which we do not have a good estimate of the objective function.
VII-A MPC Feedback Policy
First, the strategies identified by ALGAMES are open-loop Nash equilibrium strategies: they are sequences of control inputs. In contrast, DDP-based approaches like iLQGames solve for feedback Nash equilibrium strategies, which provide a sequence of control gains. In the MPC setting, we can obtain a feedback policy with ALGAMES by updating the strategy as fast as possible and executing only the beginning of the strategy. This assumes a fast update rate of the solution. To support the feasibility of the approach, we implemented an MPC on the ramp merging scenario described in Figure 3. The 3 players each constantly maintain a 40-time-step strategy spanning a 3-second horizon. We simulate 3 seconds of operation of the MPC by constantly updating the strategies and propagating noisy unicycle dynamics for each vehicle. We compile the results from 100 MPC trajectories in Table 6. We obtain a Hz update frequency for the planner on average. We observe similar performance on the intersection problem defined in Figure 7, with an update frequency of Hz.
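The receding-horizon loop described above follows a standard pattern: re-solve from the current state, execute only the first control input, and warm-start the next solve from the shifted previous strategy. The sketch below illustrates that pattern only; the solver and dynamics are hypothetical single-state placeholders, not the actual ALGAMES interface.

```python
def solve_strategy(x, u_warm):
    """Placeholder for the game solve: returns a control sequence.
    Here, simple proportional controls drive the state toward zero."""
    return [-0.5 * x] * len(u_warm)

def step_dynamics(x, u, dt=0.075):
    """Placeholder single-integrator dynamics (3 s / 40 steps = 0.075 s)."""
    return x + dt * u

def mpc_loop(x0, horizon=40, steps=40):
    """Receding-horizon loop: re-solve, execute the first input,
    shift the plan to warm-start the next solve."""
    x = x0
    u_plan = [0.0] * horizon  # initial guess: zero controls
    for _ in range(steps):
        u_plan = solve_strategy(x, u_plan)   # re-solve from current state
        x = step_dynamics(x, u_plan[0])      # execute only the first input
        u_plan = u_plan[1:] + [u_plan[-1]]   # shift plan for warm-starting
    return x

x_final = mpc_loop(2.0)
```

The feedback property comes from re-solving at every step: disturbances entering through the dynamics are absorbed at the next solve, which is why a fast update rate is essential.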
VII-B Adaptive Behavior
The second benefit of MPC is that it mitigates a major assumption of this work. We assumed that the car we control has access to the exact objective functions of the surrounding cars. A more realistic setting would be that the car has access only to a noisy estimate of the surrounding cars' objective functions. This estimate could be provided by an inverse optimal control algorithm, for instance. By replanning at a high enough frequency, an MPC implementation of ALGAMES could safely control an agent that has an inaccurate estimate of the surrounding agents' objective functions. We further assumed that the other players solve for Nash equilibrium strategies, which is not necessarily the case in the presence of selfish players.
To support this claim, we modified the intersection scenario described in Figure 7 so that the blue car has a poor estimate of the pedestrian's desired speed. Specifically, in its nominal strategy, the blue car crosses the intersection without slowing down, crossing the crosswalk right after the yellow pedestrian. For this nominal strategy, the blue car has solved for the GNE assuming that the pedestrian would have a desired speed. In reality, however, the pedestrian is crossing the road at a speed that is significantly lower. In addition, the pedestrian is not solving for any GNE and is simply crossing the road in a straight line at a constant speed. Therefore, the pedestrian objective function assumed by the blue car does not capture the real behavior of the pedestrian. Nevertheless, when applying the MPC implementation of ALGAMES, we observe that the blue car gradually adapts its strategy to accommodate the pedestrian, see Figure 8. Indeed, as the blue car gets closer to the pedestrian, it significantly slows down compared to the nominal strategy. We also observe that the car shifts to the right of the roadway to avoid the pedestrian. This nudging maneuver was not present in the nominal plan because the pedestrian was expected to have crossed the lane already. It is important to note that this adaptive behavior is observed even though the blue car kept a constant and wrong estimate of the pedestrian's objective function. Being able to refine the estimates of other players' objectives online could further improve the adaptive properties of the algorithm.
VIII Conclusions
We have introduced a new algorithm for finding constrained Nash equilibrium trajectories in multi-player dynamic games. We demonstrated the performance and robustness of the solver through a Monte Carlo analysis on complex autonomous driving scenarios including nonlinear and non-convex constraints. We have shown real-time performance for up to 4 players and implemented ALGAMES in a receding-horizon framework to give a feedback policy. We empirically demonstrated the ability to safely interact with players that violate the Nash equilibrium assumptions when the strategies are updated fast enough online. The results we obtained from ALGAMES are promising, as they seem to let the vehicles share the responsibility for avoiding collisions, leading to natural-looking trajectories where players are able to negotiate complex, interactive traffic scenarios that are challenging for traditional, non-game-theoretic trajectory planners. For this reason, we believe that this solver could be a very efficient tool to generate trajectories in situations where the level of interaction between players is strong. Our implementation of ALGAMES is available at https://github.com/RoboticExplorationLab/ALGAMES.jl.
Footnotes
 One might also explore solving for feedback Nash equilibria, where the strategies are functions of the state of all agents. This is an interesting direction for future work.