
What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? [article]

Chi Jin, Praneeth Netrapalli, Michael I. Jordan
2020 arXiv   pre-print
Finally, we establish a strong connection to a basic local search algorithm---gradient descent ascent (GDA): under mild conditions, all stable limit points of GDA are exactly local minimax points up to  ...  The main contribution of this paper is to propose a proper mathematical definition of local optimality for this sequential setting---local minimax, as well as to present its properties and existence results  ...  Acknowledgements The authors would like to thank Guojun Zhang and Yaoliang Yu for raising a technical issue with Proposition 19 in an earlier version of this paper, Oleg Burdakov for pointing out the related  ... 
arXiv:1902.00618v3 fatcat:6stlxiyawnhovfqimkse5snupi
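
As a concrete illustration of the gradient descent ascent (GDA) dynamics this abstract refers to, here is a minimal sketch; the quadratic objective and step size are our illustrative choices, not an example from the paper.

```python
import numpy as np

# Toy objective (illustrative choice): f(x, y) = x**2 + 2*x*y - y**2,
# whose strict local (and global) minmax point is (0, 0).
def grad(z):
    x, y = z
    return np.array([2 * x + 2 * y,   # df/dx
                     2 * x - 2 * y])  # df/dy

def gda(z0, lr=0.05, steps=2000):
    """Simultaneous GDA: descend in x, ascend in y."""
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        gx, gy = grad(z)
        z = z + lr * np.array([-gx, gy])
    return z

print(gda([1.0, 1.0]))  # approaches (0, 0), a stable limit point of GDA
```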

Newton and interior-point methods for (constrained) nonconvex-nonconcave minmax optimization with stability and instability guarantees [article]

Raphael Chinchilla, Guosong Yang, Joao P. Hespanha
2024 arXiv   pre-print
Moreover, we show that by selecting the modification in an appropriate way, the only stable equilibrium points of the algorithm's iterations are local minmax points.  ...  We address the problem of finding a local solution to a nonconvex-nonconcave minmax optimization using Newton-type methods, including interior-point ones.  ...  In multistep gradient descent ascent, also known as unrolled or GDmax, the minimizer is updated by a single gradient descent step whereas the maximizer is updated by several gradient ascent steps that aim to  ... 
arXiv:2205.08038v3 fatcat:uh4vwpk6zndblh5c4ort7n4kau
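
The multistep scheme ("GDmax") described in the last snippet is easy to sketch; the inner-step count k, the step sizes, and the quadratic objective below are our assumptions for illustration.

```python
# GDmax sketch: k gradient ascent steps on y per single descent step on x,
# on the illustrative objective f(x, y) = x**2 + 2*x*y - y**2.
def gdmax(x, y, lr_x=0.05, lr_y=0.1, k=10, steps=500):
    for _ in range(steps):
        for _ in range(k):                  # inner maximization: k ascent steps
            y = y + lr_y * (2 * x - 2 * y)  # df/dy
        x = x - lr_x * (2 * x + 2 * y)      # one descent step, df/dx
    return x, y

print(gdmax(1.0, 1.0))  # approaches the minmax point (0, 0)
```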

Taming GANs with Lookahead-Minmax [article]

Tatjana Chavdarova, Matteo Pagliardini, Sebastian U. Stich, Francois Fleuret, Martin Jaggi
2021 arXiv   pre-print
The backtracking step of our Lookahead-minmax naturally handles the rotational game dynamics, a property which was identified to be key for enabling gradient descent ascent methods to converge on challenging  ...  The underlying minmax optimization is highly susceptible to the variance of the stochastic gradient and the rotational component of the associated game vector field.  ...  A natural generalization of gradient descent for minmax problems is the gradient descent ascent algorithm (GDA), which alternates between a gradient descent step for the min-player and a gradient ascent  ... 
arXiv:2006.14567v3 fatcat:52dchvyfmncqxgm3w7drxuxs2y
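
A minimal sketch of the Lookahead-minmax idea: run k inner GDA steps, then backtrack toward the "slow" iterate by averaging. The bilinear game f(x, y) = x*y and all hyperparameters are illustrative assumptions; plain GDA cycles and slowly diverges on this game, while the averaging damps the rotation.

```python
import numpy as np

def gda_step(z, lr=0.1):
    x, y = z
    return np.array([x - lr * y,   # df/dx = y
                     y + lr * x])  # df/dy = x

def lookahead_minmax(z0, k=5, alpha=0.5, outer=2000):
    slow = np.asarray(z0, dtype=float)
    for _ in range(outer):
        fast = slow.copy()
        for _ in range(k):                   # k inner GDA steps
            fast = gda_step(fast)
        slow = slow + alpha * (fast - slow)  # backtracking/averaging step
    return slow

print(lookahead_minmax([1.0, 1.0]))  # converges to the equilibrium (0, 0)
```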

Gradient Descent-Ascent Provably Converges to Strict Local Minmax Equilibria with a Finite Timescale Separation [article]

Tanner Fiez, Lillian Ratliff
2020 arXiv   pre-print
In contrast, Jin et al. (2020) showed that the stable critical points of gradient descent-ascent coincide with the set of strict local minmax equilibria as τ → ∞.  ...  In this work, we bridge the gap between past work by showing there exists a finite timescale separation parameter τ^∗ such that x^∗ is a stable critical point of gradient descent-ascent for all τ ∈ (τ^∗  ...  Acknowledgements This work is funded by the Office of Naval Research (YIP Award) and National Science Foundation CAREER Award (CNS-1844729).  ... 
arXiv:2009.14820v1 fatcat:75yy2bdxdncu5ekg2msf2epvwa
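
A sketch of timescale-separated GDA (τ-GDA), where the max-player's step size is τ times the min-player's. The quadratic below is our illustrative choice (not the paper's example); its strict local minmax point at (0, 0) is unstable for small τ and stable once τ exceeds a finite threshold, mirroring the τ^∗ in the abstract.

```python
# tau-GDA on the illustrative objective f(x, y) = -x**2/2 + 2*x*y - y**2/2,
# whose strict local minmax point (0, 0) is tau-GDA-stable iff tau > 1.
def tau_gda(z0, tau, lr=0.01, steps=20000):
    x, y = map(float, z0)
    for _ in range(steps):
        gx = -x + 2 * y   # df/dx
        gy = 2 * x - y    # df/dy
        x, y = x - lr * gx, y + tau * lr * gy  # max-player moves tau x faster
    return x, y

print(tau_gda([0.1, 0.1], tau=0.5))  # diverges: tau below the threshold
print(tau_gda([0.1, 0.1], tau=4.0))  # converges to (0, 0): tau above it
```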

Global Convergence to Local Minmax Equilibrium in Classes of Nonconvex Zero-Sum Games

Tanner Fiez, Lillian J. Ratliff, Eric Mazumdar, Evan Faulkner, Adhyyan Narang
2021 Neural Information Processing Systems  
In pursuit of this goal, we prove that the only locally stable points of the τ-GDA continuous-time limiting system correspond to strict local minmax equilibria in each class of games.  ...  asymptotic almost-sure convergence guarantee for the discrete-time gradient descent-ascent update to the set of strict local minmax equilibria.  ... 
dblp:conf/nips/FiezRMFN21 fatcat:gxnuluoxyjdytgpavchwikoh3y
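
The local-stability claim can be checked directly on a toy instance by inspecting the Jacobian eigenvalues of the continuous-time τ-GDA limiting system ż = (-∇_x f, τ∇_y f). The quadratic f = -x²/2 + 2xy - y²/2 is our illustrative choice, with a strict local minmax point at the origin and stability threshold τ^∗ = 1 (not an example from the paper).

```python
import numpy as np

# Jacobian of the tau-GDA limiting system zdot = (-df/dx, tau * df/dy)
# at the critical point (0, 0), for f = -x**2/2 + 2*x*y - y**2/2.
def tau_gda_jacobian(tau):
    return np.array([[1.0,       -2.0],
                     [2.0 * tau, -tau]])

for tau in (0.5, 2.0):  # the threshold for this toy problem is tau^* = 1
    eigs = np.linalg.eigvals(tau_gda_jacobian(tau))
    print(f"tau={tau}: eigenvalues {eigs}, stable: {bool(np.all(eigs.real < 0))}")
```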

Minimax Optimization with Smooth Algorithmic Adversaries [article]

Tanner Fiez, Chi Jin, Praneeth Netrapalli, Lillian J. Ratliff
2021 arXiv   pre-print
Our framework covers practical settings where the smooth algorithms deployed by the adversary include multi-step stochastic gradient ascent and its accelerated version.  ...  Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles), and to find an appropriate "stationary point" in a polynomial number of iterations.  ...  However, it has been shown that while local Nash are guaranteed to be stable equilibria of simultaneous gradient descent-ascent [11, 26, 42], local minmax may not be unless there is sufficient timescale  ... 
arXiv:2106.01488v1 fatcat:wnv37invx5gxxesezoch4ozefy

The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization [article]

Constantinos Daskalakis, Ioannis Panageas
2018 arXiv   pre-print
We characterize the limit points of two basic first-order methods, namely Gradient Descent/Ascent (GDA) and Optimistic Gradient Descent Ascent (OGDA).  ...  Moreover, for small step sizes and under mild assumptions, the set of OGDA-stable critical points is a superset of the GDA-stable critical points, which is a superset of local min-max solutions (strict  ...  Our contribution and techniques: In this paper we analyze Gradient Descent/Ascent (GDA) and Optimistic Gradient Descent/Ascent (OGDA) dynamics applied to min-max optimization problems.  ... 
arXiv:1807.03907v1 fatcat:7mmpjo7z7za33j2ak6gtd3tpfy
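
The OGDA update is a one-line modification of GDA. A minimal sketch on the bilinear game f(x, y) = x*y (our illustrative choice): GDA spirals outward here, while the optimistic correction z_{t+1} = z_t + 2·lr·v_t - lr·v_{t-1} converges to the saddle.

```python
import numpy as np

def field(z):
    x, y = z
    return np.array([-y, x])  # (-df/dx, +df/dy) for f = x*y

def ogda(z0, lr=0.1, steps=2000):
    z = np.asarray(z0, dtype=float)
    v_prev = field(z)
    for _ in range(steps):
        v = field(z)
        z = z + 2 * lr * v - lr * v_prev  # optimistic (extrapolated) step
        v_prev = v
    return z

print(ogda([1.0, 1.0]))  # near (0, 0); plain GDA with the same lr diverges
```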

Non-convex Min-Max Optimization: Applications, Challenges, and Recent Theoretical Advances [article]

Meisam Razaviyayn, Tianjian Huang, Songtao Lu, Maher Nouiehed, Maziar Sanjabi, Mingyi Hong
2020 arXiv   pre-print
The min-max optimization problem, also known as the saddle point problem, is a classical optimization problem that is also studied in the context of zero-sum games.  ...  Finally, we will point out open questions and future research directions.  ...  A simple extension of projected gradient descent (PGD) to the min-max setting is the gradient descent-ascent algorithm (GDA).  ... 
arXiv:2006.08141v2 fatcat:cq2igqaf4vbx3gszti26ghpfwu
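
A hedged sketch of the PGD extension mentioned in the snippet: take a simultaneous descent/ascent step, then project each player onto its constraint set. The box constraints and quadratic objective are our illustrative assumptions.

```python
import numpy as np

def proj(z, lo=-1.0, hi=1.0):
    return np.clip(z, lo, hi)  # projection onto the box [-1, 1]^2

def projected_gda(z0, lr=0.05, steps=1000):
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        x, y = z
        gx, gy = 2 * x + 2 * y, 2 * x - 2 * y  # grads of f = x^2 + 2xy - y^2
        z = proj(np.array([x - lr * gx, y + lr * gy]))
    return z

print(projected_gda([2.0, -2.0]))  # projection clips the first iterates,
                                   # then GDA converges to (0, 0)
```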

Dissipative Gradient Descent Ascent Method: A Control Theory Inspired Algorithm for Min-max Optimization [article]

Tianqi Zheng, Nicolas Loizou, Pengcheng You, Enrique Mallada
2024 arXiv   pre-print
Gradient Descent Ascent (GDA) methods for min-max optimization problems typically produce oscillatory behavior that can lead to instability, e.g., in bilinear settings.  ...  We theoretically show the linear convergence of DGDA in the bilinear and strongly convex-strongly concave settings and assess its performance by comparing DGDA with other methods such as GDA, Extra-Gradient  ...  That is, solving the minmax optimization problem by running the standard Gradient Descent Ascent (GDA) algorithm often leads to unstable oscillatory behavior rather than convergence to the optimal solution  ... 
arXiv:2403.09090v1 fatcat:nmd7vihra5g2la3xnya3rqu5c4
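
One control-theory-flavored way to add dissipation to GDA is to couple the iterates to an auxiliary damping ("friction") state. This is our illustrative construction in the spirit of the abstract, not necessarily the paper's exact DGDA update. On the bilinear game f(x, y) = x*y, plain GDA spirals outward while the damped iteration converges.

```python
import numpy as np

def field(z):
    x, y = z
    return np.array([-y, x])  # (-df/dx, +df/dy) for f = x*y

def dissipative_gda(z0, lr=0.1, rho=1.0, steps=2000):
    z = np.asarray(z0, dtype=float)
    w = z.copy()                                     # auxiliary damping state
    for _ in range(steps):
        z_new = z + lr * (field(z) - rho * (z - w))  # GDA step plus friction
        w = w + lr * rho * (z - w)                   # damping state tracks z
        z = z_new
    return z

print(dissipative_gda([1.0, 1.0]))  # converges to the saddle (0, 0)
```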

Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution Differential Equations [article]

Tatjana Chavdarova, Michael I. Jordan, Manolis Zampetakis
2023 arXiv   pre-print
Several widely used first-order saddle-point optimization methods yield the same continuous-time ordinary differential equation (ODE) as Gradient Descent Ascent (GDA)  ...  Additionally, we show that the HRDE of Optimistic Gradient Descent Ascent (OGDA) exhibits last-iterate convergence for general monotone variational inequalities.  ...  We acknowledge support from the Swiss National Science Foundation (SNSF), grants P2ELP2 199740 and P500PT 214441, and from the Mathematical Data Science program of the Office of Naval Research under grant  ... 
arXiv:2112.13826v3 fatcat:3yx3yzrgbvg6tjrihumactdrci
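
The shared-ODE point is concrete: forward-Euler discretization of ż = (-∇_x f, +∇_y f) is exactly GDA, and several saddle-point methods collapse to this same ODE as the step size vanishes, which is what motivates the higher-resolution HRDEs. A minimal sketch on the illustrative bilinear game f(x, y) = x*y:

```python
import numpy as np

def field(z):
    x, y = z
    return np.array([-y, x])  # the GDA vector field for f = x*y

def euler_ode(z0, dt=0.01, T=10.0):
    z = np.asarray(z0, dtype=float)
    for _ in range(int(T / dt)):
        z = z + dt * field(z)  # forward Euler == GDA with lr = dt
    return z

# The continuous-time trajectory only rotates on this monotone problem,
# so the Euler/GDA discretization slowly spirals outward:
print(np.linalg.norm(euler_ode([1.0, 1.0])))  # norm grows above sqrt(2)
```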

Near-Optimal Algorithms for Minimax Optimization [article]

Tianyi Lin, Chi Jin, Michael I. Jordan
2021 arXiv   pre-print
Current state-of-the-art first-order algorithms find an approximate Nash equilibrium using $\tilde{O}(\kappa_{\mathbf{x}} + \kappa_{\mathbf{y}})$ or $\tilde{O}(\min\{\kappa_{\mathbf{x}}\sqrt{\kappa_{\mathbf{y}}}, \sqrt{\kappa_{\mathbf{x}}}\kappa_{\mathbf{y}}\})$ gradient evaluations, where $\kappa_{\mathbf{x}}$ and $\kappa_{\mathbf{y}}$ are the condition numbers  ...  This paper resolves a longstanding open question pertaining to the design of near-optimal first-order algorithms for smooth and strongly-convex-strongly-concave minimax problems.  ...  Moreover, real-world machine-learning systems are increasingly embedded in multiagent systems or matching markets and subject to game-theoretic constraints [Jordan, 2018].  ... 
arXiv:2002.02417v6 fatcat:xppwvgbeafh57g2xdadnv3qmay

Last-iterate convergence rates for min-max optimization [article]

Jacob Abernethy, Kevin A. Lai, Andre Wibisono
2019 arXiv   pre-print
Proving last-iterate convergence is challenging because many natural algorithms, such as Simultaneous Gradient Descent/Ascent, provably diverge or cycle even in simple convex-concave min-max settings,  ...  In this work, we show that the Hamiltonian Gradient Descent (HGD) algorithm achieves linear convergence in a variety of more general settings, including convex-concave problems that satisfy a "sufficiently  ...  The limit points of (optimistic) gradient descent in min-max optimization. In Advances in Neural Information Processing Systems (NeurIPS), pages 9255-9265, 2018. [FS99] Yoav Freund and Robert E.  ... 
arXiv:1906.02027v3 fatcat:sfbwjzoutzdcbgum5obh6iuj2m
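
Hamiltonian Gradient Descent is ordinary gradient descent on H(z) = ½‖ξ(z)‖², where ξ = (∇_x f, ∇_y f). A minimal sketch on the illustrative bilinear game f(x, y) = x*y, where ξ = (y, x) and hence ∇H(z) = (x, y):

```python
import numpy as np

def grad_H(z):
    # grad H = J_xi(z)^T xi(z); for f = x*y this reduces to (x, y)
    x, y = z
    return np.array([x, y])

def hgd(z0, lr=0.1, steps=200):
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        z = z - lr * grad_H(z)  # plain descent on the Hamiltonian H
    return z

print(hgd([1.0, 1.0]))  # linear (geometric) convergence to (0, 0),
                        # while simultaneous GDA cycles on this game
```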

Competitive Gradient Optimization [article]

Abhijeet Vyas, Kamyar Azizzadenesheli
2022 arXiv   pre-print
We provide continuous-time analysis of CGO and its convergence properties while showing that in the continuous limit, CGO predecessors degenerate to their gradient descent ascent (GDA) variants.  ...  We propose competitive gradient optimization (CGO ), a gradient-based method that incorporates the interactions between the two players in zero-sum games for optimization updates.  ...  Minmax optimization: Stable limit points of gradient descent ascent are locally optimal. arXiv preprint arXiv:1902.00618, 2019. Chi Jin, Praneeth Netrapalli, and Michael Jordan.  ... 
arXiv:2205.14232v1 fatcat:g5bqkgkoj5fxvjg2agdkykqb24
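
For a concrete sense of an interaction-aware update, here is the related competitive gradient descent (CGD) step, one of the predecessors in this line of work; this is an illustrative stand-in, not the paper's CGO rule. Each player's step solves a local bilinear approximation of the game, which brings in the mixed Hessian D_xy f.

```python
# CGD-style step on the illustrative bilinear game f(x, y) = x * y,
# where grad_x f = y, grad_y f = x, and the mixed Hessian D_xy f = 1.
def cgd_step(x, y, lr=0.5):
    dxy = 1.0                                # mixed second derivative
    denom = 1.0 + lr**2 * dxy**2
    dx = -lr * (y + lr * dxy * x) / denom    # solves the local bilinear game
    dy = lr * (x - lr * dxy * y) / denom
    return x + dx, y + dy

x, y = 1.0, 1.0
for _ in range(100):
    x, y = cgd_step(x, y)
print(x, y)  # converges to (0, 0), where GDA with the same lr diverges
```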

Last-Iterate Convergence Rates for Min-Max Optimization: Convergence of Hamiltonian Gradient Descent and Consensus Optimization

Jacob D. Abernethy, Kevin A. Lai, Andre Wibisono
2021 International Conference on Algorithmic Learning Theory  
Proving last-iterate convergence is challenging because many natural algorithms, such as Simultaneous Gradient Descent/Ascent, provably diverge or cycle even in simple convex-concave min-max settings,  ...  In this work, we show that the Hamiltonian Gradient Descent (HGD) algorithm achieves linear convergence in a variety of more general settings, including convex-concave problems that satisfy a sufficiently  ...  This follows simply because we will do gradient descent on H in the first case and gradient ascent on H in the second case.  ... 
dblp:conf/alt/AbernethyLW21 fatcat:whwieeossvbv7fr236h57daz6e
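
A minimal sketch of consensus optimization, the second method in the title: follow the descent-ascent field v(z) while also descending on H(z) = ½‖v(z)‖². The bilinear game f(x, y) = x*y and the hyperparameters are our illustrative choices.

```python
import numpy as np

def field(z):
    x, y = z
    return np.array([-y, x])  # (-df/dx, +df/dy) for f = x*y

def grad_H(z):
    x, y = z
    return np.array([x, y])   # grad of H = (x^2 + y^2)/2 for this game

def consensus_opt(z0, lr=0.1, gamma=1.0, steps=300):
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        z = z + lr * (field(z) - gamma * grad_H(z))  # GDA flow + consensus term
    return z

print(consensus_opt([1.0, 1.0]))  # converges to (0, 0)
```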

GDA-AM: On the effectiveness of solving minimax optimization via Anderson Acceleration [article]

Huan He, Shifan Zhao, Yuanzhe Xi, Joyce C Ho, Yousef Saad
2022 arXiv   pre-print
Gradient descent ascent (GDA) is the most commonly used algorithm due to its simplicity. However, GDA can converge to non-optimal minimax points.  ...  We propose a new minimax optimization framework, GDA-AM, that views the GDA dynamics as a fixed-point iteration and solves it using Anderson Mixing to converge to the local minimax.  ...  Algorithms such as optimistic Gradient Descent Ascent (OG) (Daskalakis et al., 2018) and extra-gradient (EG) (Gidel et al., 2019a) can alleviate the issue of GDA for some problems.  ... 
arXiv:2110.02457v3 fatcat:chdpirdodrhzxcpu7ik7mxpone
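
A hedged sketch of the GDA-AM idea: view one GDA step as a fixed-point map g(z) = z + lr·v(z) and accelerate the iteration z ← g(z) with type-II Anderson mixing over the last m residuals. The code below is generic Anderson acceleration, an illustrative stand-in for the paper's algorithm, run on the bilinear game f(x, y) = x*y where plain GDA diverges.

```python
import numpy as np

def g(z, lr=0.1):
    x, y = z
    return z + lr * np.array([-y, x])  # one GDA step on f = x*y

def anderson_gda(z0, m=5, iters=20):
    z_hist = [np.asarray(z0, dtype=float)]
    g_hist = [g(z_hist[0])]
    for k in range(iters):
        r = [gi - zi for gi, zi in zip(g_hist, z_hist)]  # residuals g(z) - z
        if k == 0:
            z_new = g_hist[-1]  # plain fixed-point step to seed the history
        else:
            mk = min(m, k)      # differences of the last mk residuals/iterates
            dR = np.column_stack([r[-i] - r[-i - 1] for i in range(1, mk + 1)])
            dG = np.column_stack([g_hist[-i] - g_hist[-i - 1]
                                  for i in range(1, mk + 1)])
            gamma, *_ = np.linalg.lstsq(dR, r[-1], rcond=None)
            z_new = g_hist[-1] - dG @ gamma  # Anderson-mixed iterate
        z_hist.append(z_new)
        g_hist.append(g(z_new))
    return z_hist[-1]

print(anderson_gda([1.0, 1.0]))  # reaches the saddle (0, 0) in a few iterations
```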