On the Convergence of Single-Call
Stochastic Extra-Gradient Methods
Abstract.
Variational inequalities have recently attracted considerable interest in machine learning as a flexible paradigm for models that go beyond ordinary loss function minimization (such as generative adversarial networks and related deep learning systems). In this setting, the optimal convergence rate for solving smooth monotone variational inequalities is achieved by the Extra-Gradient (EG) algorithm and its variants. Aiming to alleviate the cost of an extra gradient step per iteration (which can become quite substantial in deep learning applications), several algorithms have been proposed as surrogates to Extra-Gradient with a single oracle call per iteration. In this paper, we develop a synthetic view of such algorithms, and we complement the existing literature by showing that they retain an O(1/t) ergodic convergence rate in smooth, deterministic problems. Subsequently, beyond the monotone deterministic case, we also show that the last iterate of single-call, stochastic extra-gradient methods still enjoys an O(1/t) local convergence rate to solutions of nonmonotone variational inequalities that satisfy a second-order sufficient condition.
Key words and phrases:
Variational inequalities; extra-gradient algorithm; ergodic average; last iterate; stochastic approximation.
2010 Mathematics Subject Classification: 65K15; 62L20; 90C15; 90C33
1. Introduction
Deep learning is arguably the fastest-growing field in artificial intelligence: its applications range from image recognition and natural language processing to medical anomaly detection, drug discovery, and most fields where computers are required to make sense of massive amounts of data. In turn, this has spearheaded a prolific research thrust in optimization theory with the twofold aim of demystifying the successes of deep learning models and of providing novel methods to overcome their failures.
Introduced by Goodfellow et al. [21], generative adversarial networks have become the youngest torchbearers of the deep learning revolution and have occupied the forefront of this drive in more ways than one. First, the adversarial training of deep neural nets has given rise to new challenges regarding the efficient allocation of parallelizable resources, the compatibility of the chosen architectures, etc. Second, the loss landscape in GANs is no longer that of a minimization problem but that of a zero-sum, min-max game – or, more generally, a variational inequality (VI).
Variational inequalities are a flexible and widely studied framework in optimization which, among others, incorporates minimization, saddle-point, Nash equilibrium, and fixed point problems. As such, there is an extensive literature devoted to solving variational inequalities in different contexts; for an introduction, see [18, 4] and references therein. In particular, in the setting of monotone variational inequalities with Lipschitz continuous operators, it is well known that the optimal rate of convergence is O(1/t), and that this rate is achieved by the EG algorithm of Korpelevich [24] and its Bregman variant, the Mirror-Prox (MP) algorithm of Nemirovski [33].¹ ¹Korpelevich [24] proved the method's asymptotic convergence for pseudomonotone variational inequalities. The O(1/t) convergence rate was later established by Nemirovski [33] with ergodic averaging.
These algorithms require two projections and two oracle calls per iteration, so they are more costly than standard Forward-Backward / descent methods. As a result, there are two complementary strands of literature aiming to reduce one (or both) of these cost multipliers – that is, the number of projections and/or the number of oracle calls per iteration. The first class contains algorithms like the Forward-Backward-Forward (FBF) method of Tseng [44], while the second focuses on gradient extrapolation mechanisms like Popov's modified Arrow–Hurwicz algorithm [38].
In deep learning, the latter direction has attracted considerably more interest than the former. The main reason for this is that neural net training often does not involve constraints (and, when it does, they are relatively cheap to handle). On the other hand, gradient calculations can become very costly, so a decrease in the number of oracle calls could offer significant practical benefits. In view of this, our aim in this paper is (i) to develop a synthetic approach to methods that retain the anticipatory properties of the Extra-Gradient algorithm while making a single oracle call per iteration; and (ii) to derive quantitative convergence results for such single-call extra-gradient (1-EG) algorithms.
Table 1. Overview of convergence rates for single-call extra-gradient methods (entries marked [this paper] are boxed in the original and correspond to our contributions):

                 Lipschitz                              Lipschitz + Strongly monotone
                 Ergodic              Last iterate      Ergodic / Last iterate
Deterministic    O(1/t) [this paper]  Unknown           e^(-ρt) [19, 32, 26]
Stochastic       O(1/√t) [19, 14]     Unknown           O(1/t) [this paper] (both)
Our contributions
Our first contribution complements the existing literature (reviewed below and in Section 3) by showing that the class of single-call extra-gradient (1-EG) algorithms under study attains the optimal O(1/t) convergence rate of the two-call method in deterministic variational inequalities with a monotone, Lipschitz continuous operator. Subsequently, we show that this rate is also achieved in stochastic variational inequalities with strongly monotone operators, provided that the optimizer has access to an oracle with bounded variance (but not necessarily bounded second moments).
Importantly, this stochastic result concerns both the method's "ergodic average" (a weighted average of the sequence of points generated by the algorithm) as well as its "last iterate" (the last generated point). The reason for this dual focus is that averaging can be very useful in convex/monotone landscapes, but it is not as beneficial in nonmonotone problems (where Jensen's inequality does not apply). On that account, last-iterate convergence results comprise an essential stepping stone for venturing beyond monotone problems.
Armed with these encouraging results, we then focus on nonmonotone problems and show that, with high probability, the method's last iterate exhibits an O(1/t) local convergence rate to solutions of nonmonotone variational inequalities that satisfy a second-order sufficient condition. To the best of our knowledge, this is the first convergence rate guarantee of this type for stochastic, nonmonotone variational inequalities.
Related work
The prominence of Extra-Gradient/Mirror-Prox methods in solving variational inequalities and saddle-point problems has given rise to a vast corpus of literature to which we cannot hope to do justice here. Especially in the context of adversarial networks, there has been a flurry of recent activity relating variants of the Extra-Gradient algorithm to GAN training, see e.g., [15, 45, 19, 20, 29, 9, 25] and references therein. For concreteness, we focus here on algorithms with a single-call structure and refer the reader to Sections 3, 4 and 5 for additional details.
The first variant of Extra-Gradient with a single oracle call per iteration dates back to Popov [38]. This algorithm was subsequently studied by, among others, Chiang et al. [10], Rakhlin and Sridharan [40, 39] and Gidel et al. [19]; see also [26, 14] for a "reflected" variant, [15, 37, 32, 31] for an "optimistic" one, and Section 3 for a discussion of the differences between these variants. In the context of deterministic, strongly monotone variational inequalities with Lipschitz continuous operators, the last iterate of the method was shown to exhibit a geometric convergence rate [43, 19, 26, 32]; similar geometric convergence results also extend to bilinear saddle-point problems [43, 19, 37], even though the operator involved is not strongly monotone. In turn, this implies the convergence of the method's ergodic average, but only at an O(1/t) rate (because of the hysteresis of the average). In view of this, the fact that 1-EG methods retain the optimal O(1/t) convergence rate in deterministic variational inequalities without strong monotonicity assumptions closes an important gap in the literature.² ²A few weeks after the submission of our paper, we were made aware of a very recent preprint by Mokhtari et al. [31] which also establishes an O(1/t) convergence rate for the algorithm's "optimistic" variant in saddle-point problems (in terms of the Nikaido–Isoda gap function). To the best of our knowledge, this is the closest result to our own in the literature.
At the local level, the geometric convergence results discussed above echo a surge of interest in local convergence guarantees of optimization algorithms applied to games and saddle-point problems, see e.g., [25, 1, 16, 3] and references therein. In more detail, Liang and Stokes [25] proved local geometric convergence for several algorithms in possibly nonmonotone saddle-point problems under a local smoothness condition. In a similar vein, Daskalakis and Panageas [16] analyzed the limit points of (optimistic) gradient descent, and showed that local saddle points are stable stationary points; subsequently, Adolphs et al. [1] and Mazumdar et al. [28] proposed a class of algorithms that eliminate stationary points which are not local Nash equilibria.
Geometric convergence results of this type are inherently deterministic because they rely on an associated resolvent operator being firmly nonexpansive – or, equivalently, on the use of the center manifold theorem. In a stochastic setting, these techniques are no longer applicable because the contraction property cannot be maintained in the presence of noise; in fact, unless the problem at hand is amenable to variance reduction – e.g., as in [22, 6, 9] – geometric convergence is not possible if the noise process is even weakly isotropic. Instead, for monotone problems, Cui and Shanbhag [14] and Gidel et al. [19] showed that the ergodic average of the method attains an O(1/√t) convergence rate. Our global convergence results for stochastic variational inequalities improve this rate to O(1/t) in strongly monotone variational inequalities, for both the method's ergodic average and its last iterate. In the same light, our local convergence results for nonmonotone variational inequalities provide a key extension of local, deterministic convergence results to a fully stochastic setting, all the while retaining the fastest convergence rate for monotone variational inequalities.
For convenience, our contributions relative to the state of the art are summarized in Table 1.
2. Problem setup and blanket assumptions
Variational inequalities
We begin by presenting the basic variational inequality framework that we will consider throughout the sequel. To that end, let 𝒳 be a nonempty closed convex subset of ℝ^d, and let A : 𝒳 → ℝ^d be a single-valued operator on 𝒳. In its most general form, the variational inequality (VI) problem associated to 𝒳 and A can be stated as:
(VI)  Find x* ∈ 𝒳 such that ⟨A(x*), x − x*⟩ ≥ 0 for all x ∈ 𝒳.
To provide some intuition about (VI), we discuss two important examples below:
Example 1 (Loss minimization).
Suppose that A = ∇f for some smooth loss function f on 𝒳. Then, x* is a solution to (VI) if and only if ⟨∇f(x*), x − x*⟩ ≥ 0 for all x ∈ 𝒳, i.e., if and only if x* is a critical point of f over 𝒳. Of course, if f is convex, any such solution is a global minimizer.∎
Example 2 (Minmax optimization).
Suppose that 𝒳 decomposes as 𝒳 = 𝒳₁ × 𝒳₂ with x = (x₁, x₂), and assume A = (∇_{x₁} f, −∇_{x₂} f) for some smooth function f : 𝒳₁ × 𝒳₂ → ℝ. As in Example 1 above, the solutions to (VI) correspond to the critical points of f; if, in addition, f is convex-concave, any solution x* = (x₁*, x₂*) of (VI) is a global saddle-point, i.e.,
(1)  f(x₁*, x₂) ≤ f(x₁*, x₂*) ≤ f(x₁, x₂*) for all x₁ ∈ 𝒳₁, x₂ ∈ 𝒳₂.
Given the original formulation of GANs as (stochastic) saddlepoint problems [21], this observation has been at the core of a vigorous literature at the interface between optimization, game theory, and deep learning, see e.g., [15, 45, 29, 19, 37, 25, 9] and references therein.∎
The operator analogue of convexity for a function is monotonicity, i.e.,
(2)  ⟨A(x′) − A(x), x′ − x⟩ ≥ 0 for all x, x′ ∈ 𝒳.
Specifically, when A = ∇f for some sufficiently smooth function f, this condition is equivalent to f being convex [4]. In this case, following Nesterov [35, 36] and Juditsky et al. [23], the quality of a candidate solution x̂ ∈ 𝒳 can be assessed via the so-called error (or merit) function
(3)  Err(x̂) = sup_{x ∈ 𝒳} ⟨A(x), x̂ − x⟩,
and/or its restricted variant
(4)  Err_R(x̂) = sup_{x ∈ 𝒳_R} ⟨A(x), x̂ − x⟩,
where 𝒳_R ≡ 𝒳 ∩ 𝔹_R denotes the "restricted domain" of the problem (with 𝔹_R a Euclidean ball of radius R). More precisely, we have the following basic result.
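The basic result in question is standard; a sketch, in the form it takes in Nesterov [36] and Juditsky et al. [23] (stated here as an illustration rather than the paper's verbatim lemma):

```latex
% Hedged reconstruction of the standard merit-function lemma;
% phrasing follows Nesterov (2007) / Juditsky et al. (2011).
\begin{lemma}
Fix $R > 0$ and suppose that $\mathcal{X}_R = \mathcal{X} \cap \mathbb{B}_R$
contains a solution $x^{\ast}$ of \textup{(VI)}. Then, for every
$\hat{x} \in \mathcal{X}_R$ we have $\operatorname{Err}_R(\hat{x}) \geq 0$;
moreover, if $A$ is monotone and $\operatorname{Err}_R(\hat{x}) = 0$ for some
$\hat{x} \in \mathcal{X}$ with $\lVert \hat{x} \rVert < R$, then $\hat{x}$ is a
solution of \textup{(VI)}.
\end{lemma}
```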
In light of this result, Err and Err_R will be among our principal measures of convergence in the sequel.
Blanket assumptions
With all this in hand, we present below the main assumptions that will underlie the bulk of the analysis to follow.
Assumption 1.
The solution set of (VI) is nonempty.
Assumption 2.
The operator A is L-Lipschitz continuous, i.e.,
(5)  ‖A(x′) − A(x)‖ ≤ L ‖x′ − x‖ for all x, x′ ∈ 𝒳.
Assumption 3.
The operator A is monotone.
In some cases, we will also strengthen Assumption 3 to:
Assumption 3(s).
The operator A is μ-strongly monotone, i.e.,
(6)  ⟨A(x′) − A(x), x′ − x⟩ ≥ μ ‖x′ − x‖² for all x, x′ ∈ 𝒳.
Throughout our paper, we will be interested in sequences of points X_t ∈ 𝒳 generated by algorithms that can access the operator A via a stochastic oracle [34].³ ³Depending on the algorithm, the sequence index t may take positive integer or half-integer values (or both). Formally, this is a black-box mechanism which, when called at X_t, returns the estimate
(7)  V_t = A(X_t) + U_t,
where U_t is an additive noise variable satisfying the following hypotheses:
(8a)  Zero-mean: 𝔼[U_t | ℱ_t] = 0,
(8b)  Finite variance: 𝔼[‖U_t‖² | ℱ_t] ≤ σ².
In the above, ℱ_t denotes the history (natural filtration) of X_t, so X_t is adapted to ℱ_t by definition; on the other hand, since the t-th instance of the noise is generated randomly once X_t has been drawn, U_t is not adapted to ℱ_t. Obviously, if σ = 0, we have the deterministic, perfect feedback case V_t = A(X_t).
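To make the oracle model concrete, here is a minimal sketch in which the zero-mean and finite-variance hypotheses (8a)–(8b) hold by construction; the operator and the Gaussian noise are hypothetical illustrations, not taken from the paper.

```python
import random

def A(x):
    # Hypothetical monotone operator on R^2: 0.1*identity plus a rotation.
    return [0.1 * x[0] + x[1], -x[0] + 0.1 * x[1]]

def oracle(x, sigma, rng):
    # Stochastic oracle (7): V = A(x) + U, with U ~ N(0, sigma^2) per coordinate,
    # so E[U | F_t] = 0 (8a) and E[||U||^2 | F_t] = 2*sigma^2 < infinity (8b).
    return [ai + rng.gauss(0.0, sigma) for ai in A(x)]

# Empirical sanity check of (8a): the sample mean of the noise at a fixed
# query point vanishes as the number of oracle calls grows.
rng = random.Random(0)
x, sigma, n = [1.0, -2.0], 0.5, 20000
ax = A(x)
noise_mean = [0.0, 0.0]
for _ in range(n):
    v = oracle(x, sigma, rng)
    noise_mean[0] += (v[0] - ax[0]) / n
    noise_mean[1] += (v[1] - ax[1]) / n
```

With the seeded generator above, the empirical noise mean is close to zero, in line with the law of large numbers.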
3. Algorithms
The Extra-Gradient algorithm
In the general framework outlined in the previous section, the Extra-Gradient (EG) algorithm of Korpelevich [24] can be stated in recursive form as
(EG)  X_{t+1/2} = Π_𝒳(X_t − γ_t A(X_t)),
      X_{t+1} = Π_𝒳(X_t − γ_t A(X_{t+1/2})),
where Π_𝒳 denotes the Euclidean projection onto the closed convex set 𝒳 and γ_t > 0 is a variable step-size sequence. Using this formulation as a starting point, the main idea behind the method can be described as follows: at each iteration t, the oracle is called at the algorithm's current – or base – state X_t to generate an intermediate – or leading – state X_{t+1/2}; subsequently, the base state is updated to X_{t+1} using gradient information from the leading state X_{t+1/2}, and the process repeats. Heuristically, the extra oracle call allows the algorithm to "anticipate" the landscape of A and, in so doing, to achieve improved convergence results relative to standard projected gradient / forward-backward methods; for a detailed discussion, we refer the reader to [18, 7] and references therein.
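As an illustration, the (EG) recursion can be sketched as follows; the strongly monotone toy operator, the box constraint, and the step-size are hypothetical choices for the example, not prescriptions from the paper.

```python
def A(x):
    # Toy strongly monotone operator on R^2 (0.1*identity + rotation);
    # the unique solution of the associated (VI) on the box below is (0, 0).
    return [0.1 * x[0] + x[1], -x[0] + 0.1 * x[1]]

def project_box(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^2, a simple closed convex set.
    return [min(max(xi, lo), hi) for xi in x]

def extragradient(x0, gamma=0.1, T=500):
    x = list(x0)
    for _ in range(T):
        # Oracle call 1 (at the base state): generate the leading state X_{t+1/2}.
        lead = project_box([xi - gamma * gi for xi, gi in zip(x, A(x))])
        # Oracle call 2 (at the leading state): update the base state X_{t+1}.
        x = project_box([xi - gamma * gi for xi, gi in zip(x, A(lead))])
    return x

x_final = extragradient([1.0, -0.8])
```

On this toy problem the iterates spiral into the solution at the origin, which is what the anticipatory second oracle call buys over plain projected gradient steps (the latter can cycle on the purely rotational part of A).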
Single-call variants of the Extra-Gradient algorithm
Given the significant computational overhead of gradient calculations, a key desideratum is to drop the second oracle call in (EG) while retaining the algorithm's "anticipatory" properties. In light of this, we will focus on methods that perform a single oracle call at the leading state X_{t+1/2}, but replace the update rule for X_{t+1/2} (and, possibly, X_{t+1} as well) with a proxy that compensates for the missing gradient. Concretely, we will examine the following family of single-call extra-gradient (1-EG) algorithms:
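The three single-call templates can be sketched as follows, in the notation of (EG); these formulations follow the cited sources (Popov [38] for PEG, Malitsky [26] for RG, Daskalakis et al. [15] for OG), so the indexing conventions here should be read as an illustrative reconstruction:

```latex
% Single-call extra-gradient templates (sketch, following [38], [26], [15]):
\begin{align*}
\text{(PEG)}\quad
  & X_{t+1/2} = \Pi_{\mathcal{X}}\bigl(X_t - \gamma_t A(X_{t-1/2})\bigr),
  && X_{t+1} = \Pi_{\mathcal{X}}\bigl(X_t - \gamma_t A(X_{t+1/2})\bigr); \\
\text{(RG)}\quad
  & X_{t+1/2} = 2 X_t - X_{t-1},
  && X_{t+1} = \Pi_{\mathcal{X}}\bigl(X_t - \gamma_t A(X_{t+1/2})\bigr); \\
\text{(OG)}\quad
  & X_{t+1/2} = \Pi_{\mathcal{X}}\bigl(X_t - \gamma_t A(X_{t-1/2})\bigr),
  && X_{t+1} = X_{t+1/2} + \gamma_t A(X_{t-1/2}) - \gamma_t A(X_{t+1/2}).
\end{align*}
```

In each case, only one fresh oracle call, A(X_{t+1/2}), is made per iteration: the value A(X_{t-1/2}) is simply reused from the previous iteration (PEG, OG), while RG replaces the missing gradient with a reflected state.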
These are the main algorithmic schemes that we will consider, so a few remarks are in order. First, given the extensive literature on the subject, this list is not exhaustive; see e.g., [32, 31, 37] for a generalization of (OG), [27] for a variant that employs averaging to update the algorithm's base state, and [20] for a proxy defined via "negative momentum". Nevertheless, the algorithms presented above appear to be the most widely used single-call variants of (EG), and they illustrate very clearly the two principal mechanisms for approximating missing gradients: (i) using past gradients (as in the Past Extra-Gradient (PEG) and Optimistic Gradient (OG) variants); and/or (ii) using a difference of successive states (as in the Reflected Gradient (RG) variant).
We also take this opportunity to provide some background and clear up some issues of terminology regarding the methods presented above. First, the idea of using past gradients dates back at least to Popov [38], who introduced (PEG) as a "modified Arrow–Hurwicz" method a few years after the original paper of Korpelevich [24]; the same algorithm is called "meta" in [10] and "extrapolation from the past" in [19] (but see also the note regarding optimism below). The terminology "Reflected Gradient" and the precise formulation that we use here for (RG) is due to Malitsky [26]. The well-known primal-dual algorithm of Chambolle and Pock [8] can be seen as a one-sided, alternating variant of the method for saddle-point problems; see also [45] for a more recent take.
Finally, the terminology "optimistic" is due to Rakhlin and Sridharan [39, 40], who provided a unified view of (PEG) and (EG) based on the sequence of oracle vectors used to update the algorithm's leading state.⁴ ⁴More precisely, Rakhlin and Sridharan [39, 40] use the term Optimistic Mirror Descent (OMD) in reference to the Mirror-Prox method of Nemirovski [33], itself a variant of (EG) with projections defined by means of a Bregman function; for a related treatment, see Nesterov [35] and Juditsky et al. [23]. Because the framework of [39, 40] encompasses two different algorithms, there is some danger of confusion regarding the use of the term "optimism"; in particular, both (EG) and (PEG) can be seen as instances of optimism. The specific formulation of (OG) that we present here is the projected version of the algorithm considered by Daskalakis et al. [15];⁵ ⁵To see this, write the difference between two consecutive intermediate steps X_{t+1/2} and X_{t−1/2} in terms of the past oracle calls; writing (OG) in this form also shows that it can be viewed as a single-call variant of the FBF method of Tseng [44]. by contrast, the "optimistic" method of Mertikopoulos et al. [29] is equivalent to (EG) – not (PEG) or (OG).
The above shows that there can be a broad array of single-call extra-gradient methods, depending on the specific proxy used to estimate the missing gradient, whether it is applied to the algorithm's base or leading state, when (or where) a projection operator is applied, etc. The contact point of all these algorithms is the unconstrained setting (𝒳 = ℝ^d), where they are exactly equivalent:
Proposition 1.
Suppose that the 1-EG methods presented above share the same initialization and are run with the same constant step-size γ_t ≡ γ for all t. If 𝒳 = ℝ^d, the generated iterates coincide for all t.
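A quick numerical illustration of this equivalence (a sketch: the monotone toy operator is hypothetical, and the histories of the three methods are matched by hand via a fictitious initial state):

```python
def A(x):
    # Hypothetical monotone toy operator on R^2: 0.1*identity + rotation.
    return [0.1 * x[0] + x[1], -x[0] + 0.1 * x[1]]

def axpy(a, x, y):
    # Componentwise a*x + y.
    return [a * xi + yi for xi, yi in zip(x, y)]

def peg(x0, gamma, T):
    # Past Extra-Gradient: the leading state reuses the previous oracle call.
    x, lead = list(x0), list(x0)
    out = []
    for _ in range(T):
        lead = axpy(-gamma, A(lead), x)   # X_{t+1/2} = X_t - g*A(X_{t-1/2})
        x = axpy(-gamma, A(lead), x)      # X_{t+1}   = X_t - g*A(X_{t+1/2})
        out.append(x)
    return out

def rg(x0, gamma, T):
    # Reflected Gradient: leading state 2*X_t - X_{t-1}; the fictitious
    # X_0 = X_1 + g*A(X_1) matches PEG's warm-start history.
    x, x_prev = list(x0), axpy(gamma, A(x0), x0)
    out = []
    for _ in range(T):
        lead = axpy(-1.0, x_prev, [2.0 * xi for xi in x])
        x_prev, x = x, axpy(-gamma, A(lead), x)
        out.append(x)
    return out

def og(x0, gamma, T):
    # Optimistic Gradient (Daskalakis et al. form), run on the leading states
    # w_t, with the base iterate recovered as X_{t+1} = w_{t+1} + g*A(w_t).
    w_prev, w = list(x0), axpy(-gamma, A(x0), x0)
    out = []
    for _ in range(T):
        g_w = A(w)
        w_next = [wi - 2.0 * gamma * gi + gamma * hi
                  for wi, gi, hi in zip(w, g_w, A(w_prev))]
        out.append(axpy(gamma, g_w, w_next))
        w_prev, w = w, w_next
    return out

runs = [m([1.0, 0.5], 0.1, 20) for m in (peg, rg, og)]
max_gap = max(abs(a - b)
              for r in runs[1:]
              for xs, ys in zip(runs[0], r)
              for a, b in zip(xs, ys))
```

Up to floating-point round-off, the three base-iterate trajectories coincide, as Proposition 1 predicts in the unconstrained case.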
4. Deterministic analysis
We begin with the deterministic analysis, i.e., when the optimizer receives oracle feedback of the form (7) with U_t = 0. In terms of presentation, we keep the global and local cases separated, and we interleave our results for the generated sequence and its ergodic average. To streamline our presentation, we defer the details of the proofs to the paper's supplement and only discuss here the main ideas.
4.1. Global convergence
Our first result below shows that the algorithms under study achieve the optimal O(1/T) ergodic convergence rate in monotone problems with Lipschitz continuous operators.
Theorem 1.
Suppose that A satisfies Assumptions 1–3. Assume further that a 1-EG algorithm is run with perfect oracle feedback and a sufficiently small constant step-size γ ∝ 1/L (the precise threshold differs between the RG variant and the PEG and OG variants). Then, for all T = 1, 2, …, we have
(9)  Err_R(X̄_T) = O(1/T),
where X̄_T = (1/T) Σ_{t=1}^{T} X_{t+1/2} denotes the ergodic average of the algorithm's sequence of leading states.
This result shows that the EG and 1-EG algorithms share the same convergence rate guarantees, so we can safely drop one gradient calculation per iteration in the monotone case. The proof of the theorem is based on the following technical lemma, which enables us to treat the different variants of the 1-EG method in a unified way.
Lemma 2.
Assume that A satisfies Assumption 3 (monotonicity). Suppose further that the sequences X_t, X_{t+1/2} of points in 𝒳 satisfy the following "quasi-descent" inequality with step-size γ > 0:
(10)  ‖X_{t+1} − x‖² ≤ ‖X_t − x‖² − 2γ ⟨A(X_{t+1/2}), X_{t+1/2} − x⟩
for all t = 1, 2, … and all x ∈ 𝒳. Then,
(11)  Err_R(X̄_T) ≤ sup_{x ∈ 𝒳_R} ‖X_1 − x‖² / (2γT) = O(1/T),
where X̄_T = (1/T) Σ_{t=1}^{T} X_{t+1/2}.
The use of Lemma 2 is tailored to time-averaged sequences like X̄_T, and relies on establishing a suitable "quasi-descent" inequality of the form (10) for the iterates of 1-EG. Doing this requires in turn a careful comparison of successive iterates of the algorithm via the Lipschitz continuity assumption for A; we defer the precise treatment of this argument to the paper's supplement.
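To illustrate why an inequality like (10) suffices, here is a sketch of the standard telescoping argument (a reconstruction of the usual Nemirovski-style reasoning, not the paper's verbatim proof):

```latex
% Telescoping (10) over t = 1, ..., T for a fixed comparison point x:
\begin{align*}
2\gamma \sum_{t=1}^{T} \langle A(X_{t+1/2}),\, X_{t+1/2} - x \rangle
  \;\leq\; \sum_{t=1}^{T} \bigl( \lVert X_t - x \rVert^{2}
            - \lVert X_{t+1} - x \rVert^{2} \bigr)
  \;\leq\; \lVert X_1 - x \rVert^{2} .
\end{align*}
% Monotonicity of A gives
%   <A(x), X_{t+1/2} - x>  <=  <A(X_{t+1/2}), X_{t+1/2} - x>,
% so, dividing by T and using the definition of the ergodic average:
\begin{align*}
\langle A(x),\, \bar{X}_T - x \rangle
  = \frac{1}{T} \sum_{t=1}^{T} \langle A(x),\, X_{t+1/2} - x \rangle
  \;\leq\; \frac{\lVert X_1 - x \rVert^{2}}{2 \gamma T} .
\end{align*}
% Taking the supremum over x in X_R then bounds Err_R(X-bar_T) by O(1/T).
```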
On the other hand, because the role of averaging is essential in this argument, the convergence of the algorithm's last iterate requires significantly different techniques. To the best of our knowledge, there are no comparable convergence rate guarantees for the last iterate X_t under Assumptions 1–3; however, if Assumption 3 is strengthened to Assumption 3(s), the convergence of X_t to the (necessarily unique) solution of (VI) occurs at a geometric rate. For completeness, we state here a consolidated version of the geometric convergence results of Malitsky [26], Gidel et al. [19], and Mokhtari et al. [32].
Theorem 2.
Assume that A satisfies Assumptions 1, 2 and 3(s), and let x* denote the (necessarily unique) solution of (VI). If a 1-EG algorithm is run with a sufficiently small constant step-size γ, the generated sequence X_t converges to x* at a geometric rate, i.e., ‖X_t − x*‖² = O(e^{−ρt}) for some ρ > 0.
4.2. Local convergence
We continue by presenting a local convergence result for deterministic, nonmonotone problems. To state it, we will employ the following notion of regularity in lieu of Assumptions 1–3 and 3(s).
Definition 3.
We say that x* is a regular solution of (VI) if A is smooth in a neighborhood of x* and the Jacobian matrix Jac_A(x*) of A is positive-definite along rays emanating from x*, i.e.,
(12)  ⟨z, Jac_A(x*) z⟩ > 0
for all nonzero z ∈ ℝ^d that are tangent to 𝒳 at x*.
This notion of regularity is an extension of similar conditions that have been employed in the local analysis of loss minimization and saddle-point problems. More precisely, if A = ∇f for some loss function f, this definition is equivalent to positive-definiteness of the Hessian of f along qualified constraint directions [5, Chap. 3.2]. As for saddle-point problems and smooth games, variants of this condition can be found in several different sources, see e.g., [42, 17, 30, 41, 25] and references therein.
Under this condition, we obtain the following local geometric convergence result for 1-EG methods.
Theorem 4.
Let x* be a regular solution of (VI). If a 1-EG method is run with perfect oracle feedback and is initialized sufficiently close to x* with a sufficiently small constant step-size, we have ‖X_t − x*‖² = O(e^{−ρt}) for some ρ > 0.
The proof of this theorem relies on showing that (i) A essentially behaves like a smooth, strongly monotone operator close to x*; and (ii) if the method is initialized in a small enough neighborhood of x*, it will remain in said neighborhood for all t. As a result, Theorem 4 essentially follows by "localizing" Theorem 2 to this neighborhood.
As a preamble to our stochastic analysis in the next section, we should state here that, albeit straightforward, the proof strategy outlined above breaks down if we only have access to A via a stochastic oracle. In this case, a single "bad" realization of the feedback noise could drive the process away from the attraction region of any local solution of (VI). For this reason, the stochastic analysis requires significantly different tools and techniques, and is considerably more intricate.
5. Stochastic analysis
We now present our analysis for stochastic variational inequalities with oracle feedback of the form (7). For concreteness, given that the PEG variant of the 1-EG method employs the most straightforward proxy mechanism, we will focus on this variant throughout; for the other variants, the proofs and corresponding explicit expressions follow from the same rationale (as in the case of Theorem 1).
5.1. Global convergence
As we mentioned in the introduction, under Assumptions 1–3, Cui and Shanbhag [14] and Gidel et al. [19] showed that 1-EG methods attain an O(1/√t) ergodic convergence rate. By strengthening Assumption 3 to Assumption 3(s), we show that this result can be augmented in two synergistic ways: under Assumptions 1, 2 and 3(s), both the last iterate and the ergodic average of 1-EG achieve an O(1/t) convergence rate.
Theorem 5.
Suppose that A satisfies Assumptions 1, 2 and 3(s), and assume that (PEG) is run with stochastic oracle feedback of the form (7) and a step-size of the form γ_t = γ/(t + b) for appropriately chosen γ, b > 0. Then, the generated sequence X_t of the algorithm's base states satisfies
(13)  𝔼[‖X_t − x*‖²] = O(1/t),
while its ergodic average X̄_t enjoys the bound
(14)  𝔼[Err_R(X̄_t)] = O(1/t).
Regarding our proof strategy for the last iterate X_t of the process, we can no longer rely either on a contraction argument or on the averaging mechanism that yields the ergodic convergence rate. Instead, we show in the appendix that X_t is (stochastically) quasi-Fejér in the sense of [12, 13]; then, leveraging the method's specific step-size schedule, we employ successive numerical sequence estimates to control the summability error and obtain the O(1/t) rate.
5.2. Local convergence
We proceed to examine the convergence of the method in the stochastic, nonmonotone case. Our main result in this regard is the following.
Theorem 6.
Let x* be a regular solution of (VI) and fix a tolerance level δ > 0. Suppose further that (PEG) is run with stochastic oracle feedback of the form (7) and a variable step-size of the form γ_t = γ/(t + b) for large enough γ and b. Then:
1. There are neighborhoods 𝒰 and 𝒰₁ of x* in 𝒳 such that, if X₁ ∈ 𝒰₁, the event
(15)  E ≡ {X_t ∈ 𝒰 for all t = 1, 2, …}
occurs with probability at least 1 − δ.
2. Conditioning on the above, we have:
(16)  𝔼[‖X_t − x*‖² | E] = O(M/(μt)),
where the constants M < ∞ and μ > 0 are determined by the behavior of A (and the oracle noise) on 𝒰.
The finiteness of M and the positivity of μ are both consequences of the regularity of x*, and their values only depend on the size of the neighborhood 𝒰. Taking a larger 𝒰 would increase the algorithm's certified initialization basin, but it would also negatively impact its convergence rate (since M would increase while μ would decrease). Likewise, the neighborhood 𝒰₁ only depends on the size of 𝒰 and, as we explain in the appendix, it suffices to take 𝒰₁ to be "one fourth" of 𝒰.
From the above, it becomes clear that the situation is significantly more involved than the corresponding deterministic analysis. This is also reflected in the proof of Theorem 6, which requires completely new techniques, well beyond the straightforward localization scheme underlying Theorem 4. More precisely, a key step in the proof (which we detail in the appendix) is to show that the iterates of the method remain close to x* for all t with arbitrarily high probability. In turn, this requires showing that the probability of getting a string of "bad" noise realizations of arbitrary length is controllably small. Even then, however, the global analysis still cannot be localized, because conditioning changes the probability law under which the oracle noise is unbiased. Accounting for this conditional bias requires a surprisingly delicate probabilistic argument which we also detail in the supplement.
6. Concluding remarks
Our aim in this paper was to provide a synthetic view of single-call surrogates to the Extra-Gradient algorithm, and to establish optimal convergence rates in a range of different settings – deterministic, stochastic, and/or nonmonotone. Several interesting avenues open up as a result, from extending the theory to more general Bregman proximal settings, to developing an adaptive version as in the recent work [2] for two-call methods. We defer these research directions to future work.
References
 Adolphs et al. [2019] Adolphs, Leonard, Hadi Daneshmand, Aurelien Lucchi, Thomas Hofmann. 2019. Local saddle point optimization: a curvature exploitation approach. AISTATS ’19: Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics.
 Bach and Levy [2019] Bach, Francis, Kfir Y. Levy. 2019. A universal algorithm for variational inequalities adaptive to smoothness and noise. COLT ’19: Proceedings of the 32nd Annual Conference on Learning Theory.
 Balduzzi et al. [2018] Balduzzi, David, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, Thore Graepel. 2018. The mechanics of n-player differentiable games. ICML ’18: Proceedings of the 35th International Conference on Machine Learning.
 Bauschke and Combettes [2017] Bauschke, Heinz H., Patrick L. Combettes. 2017. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. 2nd ed. Springer, New York, NY, USA.
 Bertsekas [1999] Bertsekas, Dimitri P. 1999. Nonlinear Programming. 2nd ed. Athena Scientific, Belmont, MA, USA.
 Boţ et al. [2019] Boţ, Radu Ioan, Panayotis Mertikopoulos, Mathias Staudigl, Phan Tu Vuong. 2019. Forward-backward-forward methods with variance reduction for stochastic variational inequalities. https://arxiv.org/abs/1902.03355.
 Bubeck [2015] Bubeck, Sébastien. 2015. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning 8(34) 231–358.
 Chambolle and Pock [2011] Chambolle, Antonin, Thomas Pock. 2011. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision 40(1) 120–145.
 Chavdarova et al. [2019] Chavdarova, Tatjana, Gauthier Gidel, François Fleuret, Simon LacosteJulien. 2019. Reducing noise in GAN training with variance reduced extragradient. https://arxiv.org/abs/1904.08598.
 Chiang et al. [2012] Chiang, ChaoKai, Tianbao Yang, ChiaJung Lee, Mehrdad Mahdavi, ChiJen Lu, Rong Jin, Shenghuo Zhu. 2012. Online optimization with gradual variations. COLT ’12: Proceedings of the 25th Annual Conference on Learning Theory.
 Chung [1954] Chung, Kai Lai. 1954. On a stochastic approximation method. The Annals of Mathematical Statistics 25(3) 463–483.
 Combettes [2001] Combettes, Patrick L. 2001. QuasiFejérian analysis of some optimization algorithms. Dan Butnariu, Yair Censor, Simeon Reich, eds., Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Elsevier, New York, NY, USA, 115–152.
 Combettes and Pesquet [2015] Combettes, Patrick L., JeanChristophe Pesquet. 2015. Stochastic quasiFejér blockcoordinate fixed point iterations with random sweeping. SIAM Journal on Optimization 25(2) 1221–1248.
 Cui and Shanbhag [2016] Cui, Shisheng, Uday V. Shanbhag. 2016. On the analysis of reflected gradient and splitting methods for monotone stochastic variational inequality problems. CDC ’16: Proceedings of the 55th IEEE Annual Conference on Decision and Control.
 Daskalakis et al. [2018] Daskalakis, Constantinos, Andrew Ilyas, Vasilis Syrgkanis, Haoyang Zeng. 2018. Training GANs with optimism. ICLR ’18: Proceedings of the 2018 International Conference on Learning Representations.
 Daskalakis and Panageas [2018] Daskalakis, Constantinos, Ioannis Panageas. 2018. The limit points of (optimistic) gradient descent in minmax optimization. NIPS ’18: Proceedings of the 32nd International Conference on Neural Information Processing Systems.
 Facchinei and Kanzow [2007] Facchinei, Francisco, Christian Kanzow. 2007. Generalized Nash equilibrium problems. 4OR 5(3) 173–210.
 Facchinei and Pang [2003] Facchinei, Francisco, Jong-Shi Pang. 2003. Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research, Springer.
 Gidel et al. [2019a] Gidel, Gauthier, Hugo Berard, Gaëtan Vignoud, Pascal Vincent, Simon Lacoste-Julien. 2019a. A variational inequality perspective on generative adversarial networks. ICLR ’19: Proceedings of the 2019 International Conference on Learning Representations.
 Gidel et al. [2019b] Gidel, Gauthier, Reyhane Askari Hemmat, Mohammad Pezeshki, Rémi Le Priol, Gabriel Huang, Simon Lacoste-Julien, Ioannis Mitliagkas. 2019b. Negative momentum for improved game dynamics. AISTATS ’19: Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics.
 Goodfellow et al. [2014] Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. 2014. Generative adversarial nets. NIPS ’14: Proceedings of the 27th International Conference on Neural Information Processing Systems.
 Iusem et al. [2017] Iusem, Alfredo N., Alejandro Jofré, Roberto I. Oliveira, Philip Thompson. 2017. Extragradient method with variance reduction for stochastic variational inequalities. SIAM Journal on Optimization 27(2) 686–724.
 Juditsky et al. [2011] Juditsky, Anatoli, Arkadi Semen Nemirovski, Claire Tauvel. 2011. Solving variational inequalities with stochastic mirror-prox algorithm. Stochastic Systems 1(1) 17–58.
 Korpelevich [1976] Korpelevich, G. M. 1976. The extragradient method for finding saddle points and other problems. Èkonom. i Mat. Metody 12 747–756.
 Liang and Stokes [2019] Liang, Tengyuan, James Stokes. 2019. Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks. AISTATS ’19: Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics.
 Malitsky [2015] Malitsky, Yura. 2015. Projected reflected gradient methods for monotone variational inequalities. SIAM Journal on Optimization 25(1) 502–520.
 Malitsky [2018] Malitsky, Yura. 2018. Golden ratio algorithms for variational inequalities. https://arxiv.org/abs/1803.08832.
 Mazumdar et al. [2019] Mazumdar, Eric V., Michael I. Jordan, S. Shankar Sastry. 2019. On finding local Nash equilibria (and only local Nash equilibria) in zero-sum games. https://arxiv.org/abs/1901.00838.
 Mertikopoulos et al. [2019] Mertikopoulos, Panayotis, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, Georgios Piliouras. 2019. Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile. ICLR ’19: Proceedings of the 2019 International Conference on Learning Representations.
 Mertikopoulos and Zhou [2019] Mertikopoulos, Panayotis, Zhengyuan Zhou. 2019. Learning in games with continuous action sets and unknown payoff functions. Mathematical Programming 173(1–2) 465–507.
 Mokhtari et al. [2019a] Mokhtari, Aryan, Asuman Ozdaglar, Sarath Pattathil. 2019a. Convergence rate of $\mathcal{O}(1/k)$ for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems. https://arxiv.org/pdf/1906.01115.pdf.
 Mokhtari et al. [2019b] Mokhtari, Aryan, Asuman Ozdaglar, Sarath Pattathil. 2019b. A unified analysis of extragradient and optimistic gradient methods for saddle point problems: proximal point approach. https://arxiv.org/abs/1901.08511v2.
 Nemirovski [2004] Nemirovski, Arkadi Semen. 2004. Prox-method with rate of convergence $\mathcal{O}(1/t)$ for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization 15(1) 229–251.
 Nesterov [2004] Nesterov, Yurii. 2004. Introductory Lectures on Convex Optimization: A Basic Course. No. 87 in Applied Optimization, Kluwer Academic Publishers.
 Nesterov [2007] Nesterov, Yurii. 2007. Dual extrapolation and its applications to solving variational inequalities and related problems. Mathematical Programming 109(2) 319–344.
 Nesterov [2009] Nesterov, Yurii. 2009. Primal-dual subgradient methods for convex problems. Mathematical Programming 120(1) 221–259.
 Peng et al. [2019] Peng, Wei, YuHong Dai, Hui Zhang, Lizhi Cheng. 2019. Training GANs with centripetal acceleration. https://arxiv.org/abs/1902.08949.
 Popov [1980] Popov, Leonid Denisovich. 1980. A modification of the Arrow–Hurwicz method for search of saddle points. Mathematical Notes of the Academy of Sciences of the USSR 28(5) 845–848.
 Rakhlin and Sridharan [2013a] Rakhlin, Alexander, Karthik Sridharan. 2013a. Online learning with predictable sequences. COLT ’13: Proceedings of the 26th Annual Conference on Learning Theory.
 Rakhlin and Sridharan [2013b] Rakhlin, Alexander, Karthik Sridharan. 2013b. Optimization, learning, and games with predictable sequences. NIPS ’13: Proceedings of the 26th International Conference on Neural Information Processing Systems.
 Ratliff et al. [2013] Ratliff, Lillian J., Samuel A. Burden, S. Shankar Sastry. 2013. Characterization and computation of local Nash equilibria in continuous games. 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 917–924.
 Rosen [1965] Rosen, J. B. 1965. Existence and uniqueness of equilibrium points for concave n-person games. Econometrica 33(3) 520–534.
 Tseng [1995] Tseng, Paul. 1995. On linear convergence of iterative methods for the variational inequality problem. Journal of Computational and Applied Mathematics 60(1–2) 237–252.
 Tseng [2000] Tseng, Paul. 2000. A modified forward-backward splitting method for maximal monotone mappings. SIAM Journal on Control and Optimization 38(2) 431–446.
 Yadav et al. [2018] Yadav, Abhay, Sohil Shah, Zheng Xu, David Jacobs, Tom Goldstein. 2018. Stabilizing adversarial nets with prediction methods. ICLR ’18: Proceedings of the 2018 International Conference on Learning Representations.
Appendix A Technical lemmas
Lemma A.1.
Let $x \in \mathbb{R}^d$ and let $\mathcal{C} \subseteq \mathbb{R}^d$ be a closed convex set. We set $x^{+} = \Pi_{\mathcal{C}}(x)$. For all $z \in \mathcal{C}$, we have
(A.1) $\lVert x^{+} - z \rVert^2 \le \lVert x - z \rVert^2 - \lVert x^{+} - x \rVert^2.$
Proof.
Since $x^{+} = \Pi_{\mathcal{C}}(x)$, we have the following property: $\langle x - x^{+}, z - x^{+} \rangle \le 0$ for all $z \in \mathcal{C}$, leading to
$\lVert x - z \rVert^2 = \lVert x - x^{+} \rVert^2 + \lVert x^{+} - z \rVert^2 + 2\langle x - x^{+}, x^{+} - z \rangle \ge \lVert x - x^{+} \rVert^2 + \lVert x^{+} - z \rVert^2.$ ∎
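As an illustrative aside (not part of the paper's analysis), the Euclidean projection inequality underlying (A.1) can be sanity-checked numerically; the closed unit ball is chosen here because its projection has the simple closed form $x / \max(1, \lVert x \rVert)$, and the test points are arbitrary.

```python
# Illustrative numerical check (not from the paper): the projection
# inequality ||x+ - z||^2 <= ||x - z||^2 - ||x+ - x||^2 for z in C,
# instantiated with C = closed unit ball.
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def proj_unit_ball(x):
    """Euclidean projection of x onto the closed unit ball."""
    n = norm(x)
    return tuple(c / max(1.0, n) for c in x)

def check_projection_inequality(x, z):
    """Verify (A.1)-type inequality for one pair (x, z) with z in the ball."""
    xp = proj_unit_ball(x)
    lhs = norm([a - b for a, b in zip(xp, z)]) ** 2
    rhs = norm([a - b for a, b in zip(x, z)]) ** 2 \
        - norm([a - b for a, b in zip(xp, x)]) ** 2
    return lhs <= rhs + 1e-12  # small tolerance for float round-off

points_x = [(2.0, 0.0), (0.3, -0.4), (-3.0, 4.0)]   # arbitrary test points
points_z = [(0.0, 0.0), (0.5, 0.5), (-0.6, 0.8)]    # all inside the unit ball
ok = all(check_projection_inequality(x, z) for x in points_x for z in points_z)
```

Note that when $x$ already lies in the ball, the projection is the identity and (A.1) holds with equality.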
Lemma A.2.
Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be two closed convex sets. We set and .

If , for all , it holds that
(A.2)
If , for all , it holds that
(A.3)
Proof.
Lemma A.3 (Chung [11, Lemma 1]).
Let $(a_t)_{t \ge 1}$ be a sequence of real numbers and $P, Q \in \mathbb{R}$ such that for all $t \ge 1$,
(A.7) $a_{t+1} \le \left(1 - \frac{P}{t}\right) a_t + \frac{Q}{t^2},$
where $P > 1$ and $Q > 0$. Then,
(A.8) $a_t = \mathcal{O}(1/t).$
Proof.
For the sake of completeness, we provide a basic proof for the above lemma (which is a direct corollary of Chung [11, Lemma 1]). Let $P > 1$ and $t \ge 1$; we have
(A.9) $\left(1 - \frac{P}{t}\right)\frac{1}{t} = \frac{1}{t} - \frac{P}{t^2} \le \frac{1}{t} - \frac{P}{t(t+1)} = \frac{1}{t+1} - \frac{P-1}{t(t+1)}.$
This shows that for any $c > 0$,
(A.10) $\left(1 - \frac{P}{t}\right)\frac{c}{t} \le \frac{c}{t+1} - \frac{(P-1)\,c}{t(t+1)}.$
By substituting $c = \frac{2Q}{P-1}$, (A.7) combined with (A.10) yields
(A.11) $a_{t+1} - \frac{c}{t+1} \le \left(1 - \frac{P}{t}\right)\left(a_t - \frac{c}{t}\right) + \frac{Q}{t^2} - \frac{2Q}{t(t+1)} \le \left(1 - \frac{P}{t}\right)\left(a_t - \frac{c}{t}\right),$
where the last inequality follows from $\frac{2Q}{t(t+1)} \ge \frac{Q}{t^2}$ for $t \ge 1$. Let us define $b_t = a_t - \frac{c}{t}$. (A.11) becomes
(A.12) $b_{t+1} \le \left(1 - \frac{P}{t}\right) b_t.$
This inequality holds for all $t \ge 1$. Then, either:
• $b_t$ becomes nonpositive for some $t_0 \ge P$, and (A.12) implies that this is also the case for all subsequent $t$ (since $1 - P/t \ge 0$ for $t \ge P$), which leads to
(A.13) $a_t \le \frac{c}{t} = \frac{2Q}{(P-1)\,t} \quad \text{for all } t \ge t_0;$
• or $b_t$ remains positive for all $t \ge \lceil P \rceil$, in which case iterating (A.12) gives $b_t \le b_{\lceil P \rceil} \prod_{s=\lceil P \rceil}^{t-1} \left(1 - \frac{P}{s}\right) = \mathcal{O}(t^{-P}) = o(1/t)$ since $P > 1$.
In both cases, $a_t \le \frac{2Q}{(P-1)\,t} + o(1/t)$, which proves (A.8). ∎
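To complement the lemma, here is a small numerical illustration (ours, not the paper's): running the recursion (A.7) with equality for the illustrative choice $P = 2$, $Q = 1$, the rescaled sequence $t\,a_t$ stabilizes near $Q/(P-1) = 1$, consistent with the $\mathcal{O}(1/t)$ rate.

```python
# Numerical illustration (P = 2, Q = 1 are arbitrary demo values):
# iterate a_{t+1} = (1 - P/t) a_t + Q/t^2 and observe that t * a_t
# approaches Q / (P - 1), matching the O(1/t) rate of Lemma A.3.
P, Q = 2.0, 1.0
a = 1.0          # a_1, arbitrary starting value
t_final = 10_000
for t in range(1, t_final):
    a = (1.0 - P / t) * a + Q / t ** 2
scaled = t_final * a  # should be close to Q / (P - 1) = 1
```

The early iterations are not monotone (for $t < P$ the factor $1 - P/t$ is negative), but this transient is washed out by the $\mathcal{O}(t^{-P})$ homogeneous decay.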
Lemma A.4.
Let be a regular solution of (VI). Then, there exist constants such that is Lipschitz continuous on and for all .
Proof.
The Lipschitz continuity is straightforward: a smooth operator is necessarily locally Lipschitz, and thus Lipschitz on every compact set. The proof consists in establishing the existence of . To this end, we consider the following function:
(A.15) 
where denotes the tangent cone to at . The function is concave, as it is defined as a pointwise minimum over a set of linear functions. This in turn implies continuity, since every concave function is continuous on the interior of its effective domain. The solution being regular, we have . Combined with the continuity of in a neighborhood of , we deduce the existence of such that for all . Now let . It holds:
(A.16) 
Consequently, writing , , we have
(A.17)  
(A.18) 
Finally, since is a solution of (VI), we have and
(A.19) 
This ends the proof. ∎
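The key structural step in the proof of Lemma A.4, that a pointwise minimum of linear functions is concave, can be sanity-checked numerically; the linear pieces and test points below are arbitrary illustrative choices, not objects from the paper.

```python
# Sanity check (illustrative vectors, not from the paper): a pointwise
# minimum of linear functions z -> min_i <a_i, z> is concave, hence it
# satisfies the midpoint inequality phi((u+v)/2) >= (phi(u) + phi(v))/2.
A = [(1.0, 2.0), (-1.0, 0.5), (3.0, -1.0)]  # arbitrary linear pieces

def phi(z):
    """Pointwise minimum of the linear functions <a, z> for a in A."""
    return min(a[0] * z[0] + a[1] * z[1] for a in A)

def midpoint_concave(u, v):
    mid = ((u[0] + v[0]) / 2, (u[1] + v[1]) / 2)
    return phi(mid) >= (phi(u) + phi(v)) / 2 - 1e-12

pairs = [((1.0, 1.0), (-2.0, 0.5)), ((0.0, 3.0), (4.0, -1.0))]
concave_ok = all(midpoint_concave(u, v) for u, v in pairs)
```

Midpoint concavity together with continuity implies full concavity, which is the property the proof exploits.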
Appendix B Proofs for the deterministic setting
B.1. Proof of Lemma 2
B.2. Proof of Theorem 1
To facilitate the analysis and the presentation of our results, (PEG) and (OG) are initialized with random and in , while for (RG) we start with and . The different initial states for (RG) are imposed by its specific formulation.
The theorem is immediate from Lemma 2 provided that (10) is verified by the generated iterates for some . Below, we show this separately for (PEG), (OG), and (RG) under Assumption 2, with selected as per the theorem statement. Moreover, we have and for all methods, hence the corresponding bound in our statement. The arguments used in the proof are inspired by [44, 26, 19], but we emphasize the relation between the analyses of these algorithms by putting forward the technical Lemma A.2.
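To make the single-call structure of the three methods concrete, the sketch below (our toy setup, not the paper's constrained setting) writes (PEG), (OG), and (RG) in their standard unconstrained forms and applies them to the bilinear operator $V(x, y) = (y, -x)$, whose unique solution is the origin; the step size and horizon are arbitrary illustrative choices.

```python
# Illustrative sketch (assumptions: unconstrained updates, toy bilinear
# operator V(x, y) = (y, -x) with solution 0, step size chosen for demo).

def V(z):
    x, y = z
    return (y, -x)

def step(z, g, gamma):
    """One explicit step z - gamma * g, componentwise."""
    return (z[0] - gamma * g[0], z[1] - gamma * g[1])

def peg(z0, gamma, T):
    """Past ExtraGradient (Popov): reuse the gradient of the previous
    leading state instead of making an extra oracle call."""
    z, z_lead = z0, z0
    for _ in range(T):
        z_lead = step(z, V(z_lead), gamma)  # leading (extrapolated) state
        z = step(z, V(z_lead), gamma)       # base state
    return z

def og(z0, gamma, T):
    """Optimistic Gradient: z_{t+1} = z_t - 2g V(z_t) + g V(z_{t-1})."""
    z_prev, z = z0, z0
    for _ in range(T):
        g_now, g_prev = V(z), V(z_prev)
        z_prev, z = z, tuple(z[i] - 2 * gamma * g_now[i] + gamma * g_prev[i]
                             for i in range(2))
    return z

def rg(z0, gamma, T):
    """Reflected Gradient: z_{t+1} = z_t - g V(2 z_t - z_{t-1})."""
    z_prev, z = z0, z0
    for _ in range(T):
        refl = (2 * z[0] - z_prev[0], 2 * z[1] - z_prev[1])
        z_prev, z = z, step(z, V(refl), gamma)
    return z

def norm(z):
    return (z[0] ** 2 + z[1] ** 2) ** 0.5
```

Each method makes a single oracle call per iteration; on this toy problem all three contract toward the solution, whereas the plain gradient step $z_{t+1} = z_t - \gamma V(z_t)$ spirals outward, which is precisely the failure mode that motivates extragradient-type schemes.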