160 Hits in 1.7 sec

Structured Prediction via the Extragradient Method

Benjamin Taskar, Simon Lacoste-Julien, Michael I. Jordan
2005 Neural Information Processing Systems  
We formulate the estimation problem as a convex-concave saddle-point problem and apply the extragradient method, yielding an algorithm with linear convergence using simple gradient and projection calculations  ...  We present experiments on two very different structured prediction tasks: 3D image segmentation and word alignment, illustrating the favorable scaling properties of our algorithm.  ...  This work was funded by the DARPA CALO project (03-000219) and Microsoft Research MICRO award (05-081). SLJ was also supported by an NSERC graduate scholarship.  ...
dblp:conf/nips/TaskarLJ05 fatcat:hlvszmtsrfeg5pmwlhqk5wvr24
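
The extragradient method this entry refers to alternates an extrapolation (prediction) step with an update (correction) step, each built from gradient and projection calls. A minimal Euclidean sketch, assuming a toy bilinear objective f(x, y) = x^T A y and box projections for illustration (the paper's structured-prediction polytopes are not reproduced here):

```python
import numpy as np

def extragradient(grad_x, grad_y, proj_x, proj_y, x, y, step=0.1, iters=1000):
    """Extragradient method for min_x max_y f(x, y): an extrapolation
    (prediction) step followed by an update (correction) step, both
    using simple gradient and projection calculations."""
    for _ in range(iters):
        # extrapolation: look ahead along the gradients at (x, y)
        x_half = proj_x(x - step * grad_x(x, y))
        y_half = proj_y(y + step * grad_y(x, y))
        # update: step from (x, y) using the gradients at the midpoint
        x = proj_x(x - step * grad_x(x_half, y_half))
        y = proj_y(y + step * grad_y(x_half, y_half))
    return x, y

# toy bilinear saddle-point problem min_x max_y x^T A y over [0, 1]^2 boxes
A = np.array([[1.0, -0.5], [0.3, 2.0]])
clip01 = lambda v: np.clip(v, 0.0, 1.0)   # Euclidean projection onto the box
x_star, y_star = extragradient(
    grad_x=lambda x, y: A @ y, grad_y=lambda x, y: A.T @ x,
    proj_x=clip01, proj_y=clip01, x=np.ones(2), y=np.ones(2))
```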

Large-margin structural prediction via linear programming

Zhuoran Wang, John Shawe-Taylor, Sándor Szedmák
2009 European Association for Machine Translation Conferences/Workshops  
From the slides "L1-Regularized Structured Prediction" (Z. Wang, J. Shawe-Taylor, S. Szedmák, May 13, 2009): Extragradient Method (Cont.), slide 10/29  ...  Extragradient Method for LP (Cont.), slide 13/29  ...  Suppose w parameterizes the supporting hyperplane for the data set S.  ...
dblp:conf/eamt/WangSS09 fatcat:tw6eruptbje4blewfihd57kece

Large-Margin Structured Prediction via Linear Programming

Zhuoran Wang, John Shawe-Taylor
2009 Journal of machine learning research  
The proposed method has the advantage of handling arbitrary structures and larger-scale problems.  ...  In addition, we also explore the integration of column generation and an extragradient method for linear programming to gain further efficiency.  ...  We thank Sándor Szedmák for numerous useful discussions on the extragradient method and providing an initial MATLAB implementation of the original algorithm.  ...
dblp:journals/jmlr/WangS09 fatcat:2piy5fengzeqbgn33jzfilyyfi

Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration [article]

Michael B. Cohen, Aaron Sidford, Kevin Tian
2021 arXiv   pre-print
To obtain this result we provide a fine-grained characterization of the convergence rates of extragradient methods for solving monotone variational inequalities in terms of a natural condition we call  ...  We show that standard extragradient methods (i.e. mirror prox and dual extrapolation) recover optimal accelerated rates for first-order minimization of smooth convex functions.  ...  Acknowledgments The existence of an extragradient algorithm in the primal-dual formulation of smooth minimization directly achieving accelerated rates is due to discussions with the first author, Michael  ... 
arXiv:2011.06572v2 fatcat:ycwuvjf7gnai3fbt4y44ubv234

Distributed Saddle-Point Problems Under Data Similarity

Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, Alexander V. Gasnikov
2021 Neural Information Processing Systems  
We study solution methods for (strongly-)convex-(strongly-)concave Saddle-Point Problems (SPPs) over networks of two types: master/workers (thus centralized) architectures and mesh (thus decentralized) networks  ...  their strong convexity constant, and Δ is the diameter of the network.  ...  Figure 4 compares Algorithm 2 with the Decentralized Extragradient method (EGD) [5] and the Extragradient method with gradient tracking (EGD-GT) [27].  ...
dblp:conf/nips/BeznosikovSRG21 fatcat:3doipurcubaafbysggzcpli6mm

Ultra-Low-Complexity Algorithms with Structurally Optimal Multi-Group Multicast Beamforming in Large-Scale Systems [article]

Chong Zhang, Min Dong, Ben Liang
2022 arXiv   pre-print
The second algorithm adopts the alternating direction method of multipliers (ADMM) by converting each SCA subproblem into a favorable ADMM structure.  ...  The first algorithm uses a saddle point reformulation in the dual domain and applies the extragradient method with an adaptive step-size procedure to find the saddle point with simple closed-form updates  ...  We now show that the prediction-correction procedure guarantees that the extragradient algorithm converges to the saddle point u^o of problem (8).  ...
arXiv:2206.01846v1 fatcat:h5owmfl2nbcyzjupflejncfngy

Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [article]

Gen Li, Yanxi Chen, Yuejie Chi, H. Vincent Poor, Yuxin Chen
2023 arXiv   pre-print
Underlying our algorithm designs are two key elements: (a) converting the original problem into a bilinear minimax problem over probability distributions; (b) exploiting the extragradient idea – in conjunction  ...  This paper develops a scalable first-order optimization-based method that computes optimal transport to within ε additive accuracy with runtime O(n^2/ε), where n denotes the dimension of the probability  ...  This special structure motivates one to alternate between row and column rescaling until convergence, the key idea behind the Sinkhorn algorithm. Extragradient methods.  ...
arXiv:2301.13006v1 fatcat:yrtcm4rb7vaqrbs2nzcnbc2cdi
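
The row/column rescaling mentioned in the snippet is the Sinkhorn iteration. A minimal sketch of that classical baseline, assuming a dense cost matrix C and marginals r and c (this is the scheme the paper builds on and compares against, not the paper's extragradient algorithm itself):

```python
import numpy as np

def sinkhorn(C, r, c, eps=0.01, iters=500):
    """Entropy-regularized optimal transport: alternately rescale the
    rows and columns of the Gibbs kernel K = exp(-C/eps) until the
    transport plan diag(u) K diag(v) matches the marginals r and c."""
    K = np.exp(-C / eps)
    u, v = np.ones_like(r), np.ones_like(c)
    for _ in range(iters):
        u = r / (K @ v)        # row rescaling
        v = c / (K.T @ u)      # column rescaling
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
C = rng.random((5, 5))                             # toy cost matrix
P = sinkhorn(C, np.full(5, 0.2), np.full(5, 0.2))  # uniform marginals
```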

Minimax in Geodesic Metric Spaces: Sion's Theorem and Algorithms [article]

Peiyuan Zhang, Jingzhao Zhang, Suvrit Sra
2022 arXiv   pre-print
In our second main result, we specialize to geodesically complete Riemannian manifolds: we devise and analyze the complexity of first-order methods for smooth minimax problems.  ...  The first main result of the paper is a geodesic metric space version of Sion's minimax theorem; we believe our proof is novel and transparent, as it relies on Helly's theorem only.  ...  Broadly construed, these applications arise via the lens of robust learning with underlying geometric structure (e.g., manifolds).  ... 
arXiv:2202.06950v1 fatcat:qxyy3xzrt5gthjdg7pr6lvogmm
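
For reference, the classical linear-space statement of Sion's minimax theorem, which the paper generalizes to geodesic metric spaces (the statement below is the standard topological-vector-space version, not the paper's geodesic one):

```latex
% Sion's minimax theorem (classical form). Let X be a compact convex
% subset of a linear topological space and Y a convex subset of a
% linear topological space. If f : X \times Y \to \mathbb{R} is lower
% semicontinuous and quasiconvex in x for every y, and upper
% semicontinuous and quasiconcave in y for every x, then
\[
  \min_{x \in X} \, \sup_{y \in Y} f(x, y)
  \;=\;
  \sup_{y \in Y} \, \min_{x \in X} f(x, y).
\]
```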

Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions [article]

Sayantan Choudhury, Eduard Gorbunov, Nicolas Loizou
2023 arXiv   pre-print
Single-call stochastic extragradient methods, like stochastic past extragradient (SPEG) and stochastic optimistic gradient (SOG), have gained a lot of interest in recent years and are one of the most efficient  ...  Equipped with this condition, we provide theoretical guarantees for the convergence of single-call extragradient methods for different step-size selections, including constant, decreasing, and step-size-switching  ...  On Single-Call Extragradient Methods. The seminal work of Popov [1980] is the first paper to propose the deterministic Past Extragradient method.  ...
arXiv:2302.14043v2 fatcat:lvhyxwrqbbdvbdxux5b67mvdk4
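
The single-call idea traced here to Popov [1980] reuses the operator value from the previous midpoint, so each iteration needs only one oracle call instead of the extragradient method's two. A minimal deterministic sketch (the stochastic variants SPEG/SOG and the step-size schedules analyzed in the paper are omitted):

```python
import numpy as np

def past_extragradient(F, z, step=0.05, iters=1000):
    """Popov's past extragradient (optimistic gradient) method for an
    operator F: one call to F per iteration, extrapolating with the
    operator value remembered from the previous midpoint."""
    g_prev = F(z)                    # stands in for the first past value
    for _ in range(iters):
        z_half = z - step * g_prev   # extrapolate with the PAST gradient
        g_prev = F(z_half)           # the single oracle call per iteration
        z = z - step * g_prev        # update with the fresh midpoint value
    return z

# toy monotone operator from the bilinear saddle function f(x, y) = x * y
F = lambda z: np.array([z[1], -z[0]])
z_star = past_extragradient(F, np.array([1.0, 1.0]))  # approaches (0, 0)
```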

On the Convergence of Stochastic Extragradient for Bilinear Games using Restarted Iteration Averaging [article]

Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, Michael I. Jordan
2022 arXiv   pre-print
We study the stochastic bilinear minimax optimization problem, presenting an analysis of the same-sample Stochastic ExtraGradient (SEG) method with constant step size, and presenting variations of the  ...  In sharp contrast with the basic SEG method, whose last iterate only contracts to a fixed neighborhood of the Nash equilibrium, SEG augmented with iteration averaging provably converges to the Nash equilibrium  ...  A key example of such a method is the celebrated extragradient method.  ...
arXiv:2107.00464v4 fatcat:dth6pfn4cfadjhpzrrey2vti6m
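
A minimal sketch of same-sample SEG with a running iterate average, assuming a hypothetical noisy operator F_sample with additive Gaussian noise; the restart schedule the paper couples with averaging is omitted for brevity:

```python
import numpy as np

def seg_averaged(F_sample, z, step=0.05, iters=2000, seed=0):
    """Same-sample stochastic extragradient (SEG): the extrapolation
    and update steps reuse one stochastic sample per iteration, and the
    method returns the average of the iterates, not the last iterate."""
    rng = np.random.default_rng(seed)
    z_avg = np.zeros_like(z)
    for k in range(iters):
        xi = rng.standard_normal()              # one shared sample
        z_half = z - step * F_sample(z, xi)     # extrapolation step
        z = z - step * F_sample(z_half, xi)     # update step, same sample
        z_avg += (z - z_avg) / (k + 1)          # running iterate average
    return z_avg

# toy noisy bilinear operator: F(x, y) = (y, -x) plus shared noise
F_sample = lambda z, xi: np.array([z[1], -z[0]]) + 0.1 * xi
z_bar = seg_averaged(F_sample, np.array([1.0, 1.0]))
```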

Fast Policy Extragradient Methods for Competitive Games with Entropy Regularization [article]

Shicong Cen, Yuting Wei, Yuejie Chi
2023 arXiv   pre-print
Despite recent efforts in understanding the last-iterate convergence of extragradient methods in the unconstrained setting, the theoretical underpinnings of these methods in the constrained settings, especially  ...  Motivated by the algorithmic role of entropy regularization in single-agent reinforcement learning and game theory, we develop provably efficient extragradient methods to find the quantal response equilibrium  ...  The entropy-regularized value iteration (17) converges at a linear rate  ...  Approximate value iteration via policy extragradient methods.  ...
arXiv:2105.15186v3 fatcat:rywjyhssjjd45nm3znq7aw2zva
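
For a zero-sum matrix game, extragradient with entropy regularization takes a multiplicative-weights form: mirror-prox in the KL geometry, where the entropy term contributes a damping exponent 1 - ητ on the current strategy. A minimal sketch under those assumptions (the paper's step-size conditions and quantal response equilibrium guarantees are not reproduced):

```python
import numpy as np

def entropy_reg_extragradient(A, tau=0.1, eta=0.05, iters=500):
    """Extragradient (mirror-prox) in KL geometry for the regularized
    game min_x max_y x^T A y - tau*H(x) + tau*H(y): each step is a
    multiplicative-weights update damped by the exponent 1 - eta*tau."""
    m, n = A.shape
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    for _ in range(iters):
        # extrapolation step from (x, y)
        x_h = x ** (1 - eta * tau) * np.exp(-eta * (A @ y))
        y_h = y ** (1 - eta * tau) * np.exp(eta * (A.T @ x))
        x_h, y_h = x_h / x_h.sum(), y_h / y_h.sum()
        # update step, again from (x, y), using midpoint payoffs
        x_new = x ** (1 - eta * tau) * np.exp(-eta * (A @ y_h))
        y_new = y ** (1 - eta * tau) * np.exp(eta * (A.T @ x_h))
        x, y = x_new / x_new.sum(), y_new / y_new.sum()
    return x, y

# toy 2x2 matching-pennies-style game
x_qre, y_qre = entropy_reg_extragradient(np.array([[0.0, 1.0], [1.0, 0.0]]))
```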

Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [article]

Mingrui Liu, Youssef Mroueh, Jerret Ross, Wei Zhang, Xiaodong Cui, Payel Das, Tianbao Yang
2020 arXiv   pre-print
algorithm only requires invoking one stochastic first-order oracle while enjoying the state-of-the-art iteration complexity achieved by the stochastic extragradient method.  ...  While the theory of adaptive gradient methods is well understood for minimization problems, the underlying factors driving their empirical success in min-max problems such as GANs remain unclear.  ...  ACKNOWLEDGMENTS The authors thank the anonymous reviewers for their helpful comments. M. Liu and T. Yang are partially supported by National Science Foundation CAREER Award 1844403. M.  ...
arXiv:1912.11940v2 fatcat:dttxn2qxqrdurpxrxh6vbd3h7i

Distributed Saddle-Point Problems Under Similarity [article]

Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, Alexander Gasnikov
2022 arXiv   pre-print
We study solution methods for (strongly-)convex-(strongly-)concave Saddle-Point Problems (SPPs) over networks of two types: master/workers (thus centralized) architectures and meshed (thus decentralized)  ...  their strong convexity constant, and Δ is the diameter of the network.  ...  The proof is completed by choosing γ according to (19). Corollary 3: Suppose we solve the subproblem (10) via the Extragradient method with starting point z^k and T = O((1 + γL) log(1/ẽ)) iterations (22).  ...
arXiv:2107.10706v3 fatcat:oei5dqx2jvd6jlx5zrkshws7ae

Fast Saddle-Point Algorithm for Generalized Dantzig Selector and FDR Control with the Ordered l1-Norm [article]

Sangkyun Lee, Damian Brzyski, Malgorzata Bogdan
2016 arXiv   pre-print
We compare the performance of our algorithm with alternatives including the linearized ADMM, Nesterov's smoothing, Nemirovski's mirror-prox, and the accelerated hybrid proximal extragradient techniques  ...  In this paper we propose a primal-dual proximal extragradient algorithm to solve the generalized Dantzig selector (GDS) estimation problem, based on a new convex-concave saddle-point (SP) reformulation  ...  Structured prediction, dual extragradient and Bregman projections. Journal of Machine Learning Research, 7:1627-1653, 2006. P. Tseng.  ...
arXiv:1511.05864v3 fatcat:affcscjs2rf2himp72ztzzgnau

Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling [article]

Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos
2020 arXiv   pre-print
Owing to their stability and convergence speed, extragradient methods have become a staple for solving large-scale saddle-point problems in machine learning.  ...  The basic premise of these algorithms is the use of an extrapolation step before performing an update; thanks to this exploration step, extragradient methods overcome many of the non-convergence issues  ...  The extragradient method and its limitations: As discussed earlier, the go-to method for saddle-point problems and variational inequalities is the extragradient (EG) algorithm of Korpelevich [16] and its  ...
arXiv:2003.10162v2 fatcat:efpokp6m5fcgpmfa25qw6jsnjy
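
The premise described in this abstract separates the two stepsizes of the extragradient template: a large stepsize for the extrapolation (exploration) and a small one for the update. A minimal deterministic sketch of that template with a generic operator F (the paper's stochastic analysis with vanishing stepsize ratios is not reproduced):

```python
import numpy as np

def eg_two_stepsizes(F, z, gamma=0.2, eta=0.05, iters=1000):
    """Extragradient with distinct stepsizes: explore aggressively
    (large gamma in the extrapolation step), update conservatively
    (small eta in the update step)."""
    for _ in range(iters):
        z_half = z - gamma * F(z)    # aggressive exploration step
        z = z - eta * F(z_half)      # conservative update step
    return z

# toy monotone operator from the saddle function f(x, y) = x * y
F = lambda z: np.array([z[1], -z[0]])
z_star = eg_two_stepsizes(F, np.array([1.0, 1.0]))   # approaches (0, 0)
```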
Showing results 1-15 out of 160 results