151 results

Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems – Survey [article]

Aleksandr Beznosikov, Boris Polyak, Eduard Gorbunov, Dmitry Kovalev, Alexander Gasnikov
2022 arXiv   pre-print
This paper is a survey of methods for solving smooth (strongly) monotone stochastic variational inequalities. ... The last parts of the paper are devoted to various recent (not necessarily stochastic) advances in algorithms for variational inequalities. ...
arXiv:2208.13592v2 fatcat:tgfxbc5p7bcd5mtnwdtl4yuewq
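For orientation, the problem class this survey addresses is usually stated in the standard form below; this is textbook background, not text quoted from the survey.

```latex
\[
\text{find } z^\ast \in \mathcal{Z} \text{ such that } \langle F(z^\ast),\, z - z^\ast \rangle \ge 0 \quad \text{for all } z \in \mathcal{Z},
\]
```

where the operator F is monotone, \langle F(z) - F(z'),\, z - z' \rangle \ge 0, and L-Lipschitz (smooth). A saddle-point problem min_x max_y f(x, y) fits this template with z = (x, y) and F(z) = (\nabla_x f(x, y), -\nabla_y f(x, y)).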

Revisiting Stochastic Extragradient [article]

Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Yura Malitsky
2020 arXiv   pre-print
Since the existing stochastic extragradient algorithm, called Mirror-Prox, of (Juditsky et al., 2011) diverges on a simple bilinear problem when the domain is not bounded, we prove guarantees for solving variational inequalities that go beyond existing settings. ...
arXiv:1905.11373v2 fatcat:yx5barli3vbkvktiq74735b5ve
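To make the update concrete, here is a minimal Python sketch of the same-sample stochastic extragradient (SEG) step discussed in this line of work; the oracle interface, the step size, and the bilinear toy problem are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def stochastic_extragradient(oracle, z0, step, n_iters, rng):
    """Same-sample stochastic extragradient (SEG) sketch.

    oracle(z, xi) returns a stochastic estimate of the operator F(z) for
    sample xi; the same sample is reused for the extrapolation and the update.
    """
    z = np.array(z0, dtype=float)
    for _ in range(n_iters):
        xi = rng.integers(10**9)              # draw one sample per iteration
        z_half = z - step * oracle(z, xi)     # extrapolation (look-ahead) step
        z = z - step * oracle(z_half, xi)     # update step with the same sample
    return z

# Toy bilinear saddle point min_x max_y x*y, whose operator is F(x, y) = (y, -x);
# the sample xi is unused here, so this particular oracle happens to be deterministic.
def bilinear_oracle(z, xi):
    x, y = z
    return np.array([y, -x])

z = stochastic_extragradient(bilinear_oracle, [1.0, 1.0], 0.1, 1000, np.random.default_rng(0))
```

On this bilinear example the deterministic extragradient iterates spiral in toward the saddle point at the origin, whereas plain gradient descent-ascent with the same step size drifts away from it.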

Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [article]

Mingrui Liu, Youssef Mroueh, Jerret Ross, Wei Zhang, Xiaodong Cui, Payel Das, Tianbao Yang
2020 arXiv   pre-print
Adaptive gradient algorithms perform gradient-based updates using the history of gradients and are ubiquitous in training deep neural networks. ... a stationary point, in which the algorithm only requires invoking one stochastic first-order oracle while enjoying the state-of-the-art iteration complexity achieved by the stochastic extragradient method. ...
arXiv:1912.11940v2 fatcat:dttxn2qxqrdurpxrxh6vbd3h7i

Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration [article]

Michael B. Cohen, Aaron Sidford, Kevin Tian
2021 arXiv   pre-print
We show that standard extragradient methods (i.e. mirror prox and dual extrapolation) recover optimal accelerated rates for first-order minimization of smooth convex functions. ... To obtain this result we provide a fine-grained characterization of the convergence rates of extragradient methods for solving monotone variational inequalities in terms of a natural condition we call relative Lipschitzness. ...
arXiv:2011.06572v2 fatcat:ycwuvjf7gnai3fbt4y44ubv234

Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling [article]

Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos
2020 arXiv   pre-print
Owing to their stability and convergence speed, extragradient methods have become a staple for solving large-scale saddle-point problems in machine learning. ... On the other hand, as we show in this paper, running vanilla extragradient with stochastic gradients may jeopardize its convergence, even in simple bilinear models. ...
arXiv:2003.10162v2 fatcat:efpokp6m5fcgpmfa25qw6jsnjy
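The title's "variable stepsize scaling" points to decoupling the extrapolation and update step sizes; a schematic version of such an update (the exact schedule and sampling scheme in the paper may differ) is

```latex
\[
Z_{t+1/2} = Z_t - \gamma_t\, \hat F_t(Z_t), \qquad
Z_{t+1}   = Z_t - \eta_t\, \hat F_{t+1/2}(Z_{t+1/2}), \qquad \eta_t \le \gamma_t,
\]
```

where \hat F denotes a stochastic estimate of the operator, the extrapolation ("exploration") stepsize \gamma_t stays aggressive, and the update stepsize \eta_t is scaled down conservatively.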

On the Convergence of Stochastic Extragradient for Bilinear Games using Restarted Iteration Averaging [article]

Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, Michael I. Jordan
2022 arXiv   pre-print
We study the stochastic bilinear minimax optimization problem, presenting an analysis of the same-sample Stochastic ExtraGradient (SEG) method with constant step size, and presenting variations of the method. ... In the interpolation setting where noise vanishes at the Nash equilibrium, we achieve an optimal convergence rate up to tight constants. ...
arXiv:2107.00464v4 fatcat:dth6pfn4cfadjhpzrrey2vti6m
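As a rough illustration of constant-stepsize SEG combined with restarted iteration averaging, the sketch below runs SEG in epochs, averages the iterates within each epoch, and restarts the next epoch from that average; the epoch length and restart schedule are placeholders, not the paper's tuned choices.

```python
import numpy as np

def seg_restarted_averaging(oracle, z0, step, epoch_len, n_epochs, rng):
    """Constant-stepsize SEG with restarted iteration averaging (illustrative sketch)."""
    z = np.array(z0, dtype=float)
    for _ in range(n_epochs):
        cur = z.copy()
        avg = np.zeros_like(z)
        for t in range(epoch_len):
            xi = rng.integers(10**9)
            half = cur - step * oracle(cur, xi)   # extrapolation step
            cur = cur - step * oracle(half, xi)   # update step (same sample)
            avg += (cur - avg) / (t + 1)          # running mean of the iterates
        z = avg                                   # restart from the averaged point
    return z
```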

Proximal Reinforcement Learning: A New Theory of Sequential Decision Making in Primal-Dual Spaces [article]

Sridhar Mahadevan, Bo Liu, Philip Thomas, Will Dabney, Steve Giguere, Nicholas Jacek, Ian Gemp, Ji Liu
2014 arXiv   pre-print
... in a reliable and stable manner, and finally (iv) how to integrate the study of reinforcement learning into the rich theory of stochastic optimization. ... This key technical innovation makes it possible to finally design "true" stochastic gradient methods for reinforcement learning. ...
arXiv:1405.6757v1 fatcat:u77kqc6iyncy7fixlnrfcnqrmy

Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization [article]

Simon S. Du, Gauthier Gidel, Michael I. Jordan, Chris Junchi Li
2022 arXiv   pre-print
Building upon standard stochastic extragradient analysis for variational inequalities, we present a stochastic accelerated gradient-extragradient (AG-EG) descent-ascent algorithm that combines extragradient with acceleration. ... error term for bounded stochastic noise that is optimal up to a constant prefactor. ...
arXiv:2206.08573v3 fatcat:7jls7yoiyfhwbnj42ki4hszzw4
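The bilinearly coupled saddle-point problems that AG-EG targets are conventionally written in the form below (standard formulation, not quoted from the paper):

```latex
\[
\min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}} \;\; f(x) + \langle y,\, B x \rangle - g(y),
\]
```

with f and g convex and smooth and B a coupling matrix; the associated operator is F(x, y) = (\nabla f(x) + B^\top y,\; \nabla g(y) - B x), to which extragradient-type analyses apply.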

Reducing Noise in GAN Training with Variance Reduced Extragradient [article]

Tatjana Chavdarova, Gauthier Gidel, François Fleuret, Simon Lacoste-Julien
2020 arXiv   pre-print
We study the effect of the stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can prevent the convergence of standard game optimization methods. ... We address this issue with a novel stochastic variance-reduced extragradient (SVRE) optimization algorithm, which for a large class of games improves upon the previous convergence rates proposed in the literature. ...
arXiv:1904.08598v3 fatcat:gotf432ufbfg3dhsdbu6lwabvi
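A variance-reduced extragradient step typically combines an SVRG-style control variate with the extragradient look-ahead. The sketch below assumes a finite-sum operator F(z) = (1/n) \sum_i F_i(z) and is only a schematic reading of the SVRE idea, not the paper's exact algorithm.

```python
import numpy as np

def svre_sketch(component_oracles, z0, step, epoch_len, n_epochs, rng):
    """Schematic variance-reduced extragradient (SVRE-style) loop.

    component_oracles is a list of functions F_i(z); the full operator is their
    average. Each epoch fixes a snapshot w and its full operator value, then
    runs extragradient steps with the estimate F_i(z) - F_i(w) + F(w).
    """
    n = len(component_oracles)
    z = np.array(z0, dtype=float)
    for _ in range(n_epochs):
        w = z.copy()
        full_w = sum(F(w) for F in component_oracles) / n     # snapshot operator F(w)
        def estimate(point, i):
            return component_oracles[i](point) - component_oracles[i](w) + full_w
        for _ in range(epoch_len):
            i = rng.integers(n)
            z_half = z - step * estimate(z, i)                 # extrapolation step
            z = z - step * estimate(z_half, i)                 # update step
    return z
```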

Optimal Algorithms for Differentially Private Stochastic Monotone Variational Inequalities and Saddle-Point Problems [article]

Digvijay Boob, Cristóbal Guzmán
2022 arXiv   pre-print
In this work, we conduct the first systematic study of stochastic variational inequality (SVI) and stochastic saddle point (SSP) problems under the constraint of differential privacy (DP). ... We propose two algorithms: Noisy Stochastic Extragradient (NSEG) and Noisy Inexact Stochastic Proximal Point (NISPP). ...
arXiv:2104.02988v3 fatcat:sz2d52o6pjeobgdlx4u3mmly3y

Online and Bandit Algorithms for Nonstationary Stochastic Saddle-Point Optimization [article]

Abhishek Roy, Yifang Chen, Krishnakumar Balasubramanian, Prasant Mohapatra
2019 arXiv   pre-print
Saddle-point optimization problems are an important class of optimization problems with applications to game theory, multi-agent reinforcement learning and machine learning. ... We then analyze extragradient and Frank-Wolfe algorithms, for the unconstrained and constrained settings respectively, for the above class of nonstationary saddle-point optimization problems. ...
arXiv:1912.01698v1 fatcat:osyj5lxv3zhbldhnq2zctwnz5a

Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [article]

Jelena Diakonikolas, Constantinos Daskalakis, Michael I. Jordan
2021 arXiv   pre-print
We introduce a new class of structured nonconvex-nonconcave min-max optimization problems, proposing a generalization of the extragradient algorithm which provably converges to a stationary point. ... or in which a weak solution to the associated variational inequality problem is assumed to exist. ...
arXiv:2011.00364v2 fatcat:znwzeh4r2ndthgc3no6bfuyybm

Local AdaGrad-Type Algorithm for Stochastic Convex-Concave Optimization [article]

Luofeng Liao, Li Shen, Jia Duan, Mladen Kolar, Dacheng Tao
2022 arXiv   pre-print
We study a class of stochastic minimax methods and develop a communication-efficient distributed stochastic extragradient algorithm, LocalAdaSEG, with an adaptive learning rate suitable for solving convex-concave minimax problems. ... We compare LocalAdaSEG against several existing optimizers for minimax problems and demonstrate its efficacy through several experiments in both homogeneous and heterogeneous settings. ...
arXiv:2106.10022v2 fatcat:b4qfds2qhzg3zib3nvwwsbzqou

Escaping limit cycles: Global convergence for constrained nonconvex-nonconcave minimax problems [article]

Thomas Pethick, Puya Latafat, Panagiotis Patrinos, Olivier Fercoq, Volkan Cevher
2023 arXiv   pre-print
This problem class captures non-trivial structures, as we demonstrate with examples for which a large family of existing algorithms provably converge to limit cycles. ... This observation has recently motivated the study of structures sufficient for convergence of first-order methods in the more general setting of variational inequalities when the so-called weak Minty variational inequality holds. ...
arXiv:2302.09831v1 fatcat:x3skeet7m5chvg6vc23vd6lonu
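The weak Minty variational inequality (MVI) condition referred to here is commonly stated as follows; the constant in front of \|F(z)\|^2 varies across papers, so this is a representative form rather than the paper's precise assumption:

```latex
\[
\exists\, z^\ast \;\; \text{such that} \;\; \langle F(z),\, z - z^\ast \rangle \;\ge\; -\tfrac{\rho}{2}\, \| F(z) \|^2 \quad \text{for all } z,
\]
```

for some \rho \ge 0; \rho = 0 recovers the usual (star-)Minty condition, while \rho > 0 permits a controlled amount of nonmonotonicity.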

Mirror-prox sliding methods for solving a class of monotone variational inequalities [article]

Guanghui Lan, Yuyuan Ouyang
2021 arXiv   pre-print
In this paper we propose new algorithms for solving a class of structured monotone variational inequality (VI) problems over compact feasible sets, in which there exist gradient components in the operators. ... Moreover, for the case when the operator H can only be accessed through its stochastic estimators, we propose a stochastic mirror-prox sliding method that can compute a stochastic ε-approximate weak solution. ...
arXiv:2111.00996v1 fatcat:ijclaqqqlzbzfayrjlavwcndde
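For orientation, the classical mirror-prox update that such sliding schemes build on reads (standard Nemirovski-style form, not the paper's sliding method itself):

```latex
\[
z_{t+1/2} = \arg\min_{z \in \mathcal{Z}} \big\{ \gamma \langle F(z_t),\, z \rangle + V_{z_t}(z) \big\}, \qquad
z_{t+1}   = \arg\min_{z \in \mathcal{Z}} \big\{ \gamma \langle F(z_{t+1/2}),\, z \rangle + V_{z_t}(z) \big\},
\]
```

where V_x(y) is the Bregman divergence of a strongly convex distance-generating function; with V_x(y) = \tfrac{1}{2}\|y - x\|_2^2 this reduces to the Euclidean extragradient method.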
Showing results 1–15 of 151.