426 Hits in 4.1 sec

Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [article]

Yaodong Yu, Tianyi Lin, Eric Mazumdar, Michael I. Jordan
2022 arXiv   pre-print
Key to our results is the use of variance reduction and random reshuffling to accelerate stochastic min-max optimization, the analysis of which may be of independent interest.  ...  Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications -- reflecting the need for classifiers and predictive  ...  Conclusions We studied Wasserstein distributionally robust supervised learning through the lens of min-max optimization.  ... 
arXiv:2104.13326v2 fatcat:bqobb3ieijajnhykasqjd6masa

Group Distributionally Robust Reinforcement Learning with Hierarchical Latent Variables [article]

Mengdi Xu, Peide Huang, Yaru Niu, Visak Kumar, Jielin Qiu, Chao Fang, Kuan-Hui Lee, Xuewei Qi, Henry Lam, Bo Li, Ding Zhao
2022 arXiv   pre-print
Robust RL has been applied to deal with task ambiguity, but may result in over-conservative policies.  ...  To balance the worst-case (robustness) and average performance, we propose Group Distributionally Robust Markov Decision Process (GDR-MDP), a flexible hierarchical MDP formulation that encodes task groups  ...  The Group Distributionally Robust Bellman optimality equation is $V^\pi_t(b_t, s_t) = \max_{\pi_t} \min_{\tilde b_t \in \mathcal{C}_{b_t}} \mathbb{E}_{\tilde b_t(z)} \mathbb{E}_{\mu_z(m)} \mathbb{E}_{\pi_t} \big[ \mathbb{E}_{R_m}[r_t] + \gamma \sum_{s_{t+1}} P_m(s_{t+1} \mid s_t, a_t) V^\pi_{t+1}(b_{t+1}, s_{t+1}) \big]$.  ...
arXiv:2210.12262v1 fatcat:sruqwgyeefeb3pw2xwe5ygfxri
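The max-min structure of the robust Bellman equation in the snippet above can be illustrated with a toy value iteration: each backup takes the worst case over a small ambiguity set of candidate transition models, then the best action. This is a minimal sketch of generic robust dynamic programming only; the paper's GDR-MDP additionally maintains a belief over hierarchical latent task groups, which is omitted here, and the 2-state MDP below is an invented example.

```python
import numpy as np

def robust_bellman_backup(V, models, gamma=0.9):
    """One robust backup: max over actions of the min over candidate models.

    V: (S,) value vector; models: list of (P, R) with P: (A, S, S) transition
    tensors and R: (A, S) expected rewards.
    """
    n_actions, n_states = models[0][1].shape
    Q = np.full((n_actions, n_states), np.inf)
    for P, R in models:          # worst case over the ambiguity set
        Q = np.minimum(Q, R + gamma * P @ V)
    return Q.max(axis=0)         # greedy max over actions

# Two candidate 2-state, 2-action models that disagree on transitions.
P1 = np.array([[[0.9, 0.1], [0.1, 0.9]], [[0.5, 0.5], [0.5, 0.5]]])
P2 = np.array([[[0.1, 0.9], [0.9, 0.1]], [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[1.0, 0.0], [0.5, 0.5]])

V = np.zeros(2)
for _ in range(200):             # robust value iteration to near-convergence
    V = robust_bellman_backup(V, [(P1, R), (P2, R)])
```

Because the inner minimization shrinks every backup, the robust value is never above the value computed against any single fixed model, which is the "worst-case vs. average performance" trade-off the abstract refers to.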

Compositional federated learning: Applications in distributionally robust averaging and meta learning [article]

Feihu Huang, Junyi Li
2023 arXiv   pre-print
data mining and machine learning problems with a hierarchical structure such as distributionally robust FL and model-agnostic meta learning (MAML).  ...  In particular, we first transform the distributionally robust FL (i.e., a minimax optimization problem) into a simple composition optimization problem by using KL divergence regularization.  ...  Index Terms: Federated Learning, Composition Optimization, Distributionally Robust, Meta Learning, Model Agnostic.  ...
arXiv:2106.11264v3 fatcat:anekjuzz65en7etnlufbeyncuy
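The KL-regularization trick mentioned in the snippet above has a well-known closed form that can be checked numerically: with a KL penalty toward the uniform distribution, the inner maximization $\max_{\lambda \in \Delta} \sum_i \lambda_i f_i - \mu\,\mathrm{KL}(\lambda \,\|\, \mathbf{1}/K)$ is attained at $\lambda_i \propto \exp(f_i/\mu)$ with value $\mu \log\big(\frac{1}{K}\sum_i e^{f_i/\mu}\big)$, turning the min-max problem into a plain minimization of a smooth (compositional) objective. The sketch below verifies this identity on arbitrary illustrative numbers; it is not the paper's algorithm.

```python
import numpy as np

def smoothed_worst_case(f, mu):
    """mu * log((1/K) * sum_i exp(f_i / mu)), computed stably via max-shift."""
    K = len(f)
    m = f.max()
    return mu * np.log(np.exp((f - m) / mu).sum() / K) + m

def inner_objective(lam, f, mu):
    """sum_i lam_i f_i - mu * KL(lam || uniform) for lam on the simplex."""
    K = len(f)
    kl = np.sum(lam * np.log(lam * K))
    return lam @ f - mu * kl

# Arbitrary per-client losses and regularization weight (illustrative only).
f = np.array([1.0, 2.5, 0.3])
mu = 0.7
lam_star = np.exp(f / mu)
lam_star /= lam_star.sum()          # maximizer: lam_i proportional to exp(f_i/mu)
closed = smoothed_worst_case(f, mu) # log-sum-exp value of the inner max
```

The log-sum-exp form is why the regularized problem "reduces to a simple composition optimization problem": the max over λ disappears into a smooth function of the losses.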

Generalized Wasserstein Dice Score, Distributionally Robust Deep Learning, and Ranger for brain tumor segmentation: BraTS 2020 challenge [article]

Lucas Fidon and Sebastien Ourselin and Tom Vercauteren
2021 arXiv   pre-print
...  corresponding to distributionally robust optimization, and a non-standard optimizer, Ranger.  ...  Distributionally robust optimization is a generalization of empirical risk minimization that accounts for the presence of underrepresented subdomains in the training dataset.  ...  Changing the Optimizer: Ranger. RAdam [23] is a modification of the Adam optimizer [21] that aims at reducing the variance of the adaptive learning rate of Adam in the early stage of training.  ...
arXiv:2011.01614v2 fatcat:sns55rjuordated7d3xlxpv6je

Distributionally Robust Surrogate Optimal Control for High-Dimensional Systems [article]

Aaron Kandel, Saehong Park, Scott Moura
2022 arXiv   pre-print
This work is motivated by the ongoing challenges of safety, computation, and optimality in high-dimensional optimal control. We address these key questions with the following approach.  ...  This paper presents a novel methodology for tractably solving optimal control and offline reinforcement learning problems for high-dimensional systems.  ...  Figure 4 shows the optimal fast charging results for versions of our algorithm with and without distributionally robust optimization.  ...
arXiv:2105.10070v2 fatcat:cx5bttn4afgitoh77juq4z4any

Reliable Off-policy Evaluation for Reinforcement Learning [article]

Jie Wang, Rui Gao, Hongyuan Zha
2021 arXiv   pre-print
Leveraging methodologies from distributionally robust optimization, we show that with proper selection of the size of the distributional uncertainty set, these estimates serve as confidence bounds with  ...  Our results are also generalized to batch reinforcement learning and are supported by empirical analysis.  ...  Smirnova E, Dohmatob E, Mary J. Distributionally robust reinforcement learning. arXiv preprint. Song J, Zhao C. Optimistic distributionally robust policy optimization. arXiv preprint.  ...
arXiv:2011.04102v2 fatcat:hd4mbmxxtjd65iprydrf2xygxy

An approach to the distributionally robust shortest path problem [article]

Sergey S. Ketkov, Oleg A. Prokopyev, Evgenii P. Burashnikov
2020 arXiv   pre-print
Under some additional assumptions the resulting distributionally robust shortest path problem (DRSPP) admits equivalent robust and mixed-integer programming (MIP) reformulations.  ...  The ambiguity set is formed by all distributions that satisfy prescribed linear first-order moment constraints with respect to subsets of arcs and individual probability constraints with respect to particular  ...  [37] for the distributionally robust max-min problems with a marginal moment ambiguity set.  ... 
arXiv:1910.08744v3 fatcat:ewhvgxddxbgwdd3hten37rk4fe

DR-DSGD: A Distributionally Robust Decentralized Learning Algorithm over Graphs [article]

Chaouki Ben Issaid, Anis Elgabli, Mehdi Bennis
2022 arXiv   pre-print
By adding a Kullback-Leibler regularization function to the robust min-max optimization problem, the learning problem can be reduced to a modified robust minimization problem and solved efficiently.  ...  In this paper, we propose to solve a regularized distributionally robust learning problem in the decentralized setting, taking into account the data distribution shift.  ...  In this case, the distributionally robust empirical loss problem is given by the following min-max optimization problem: $\min_{\Theta \in \mathbb{R}^d} \max_{\lambda \in \Delta} \sum_{i=1}^{K} \lambda_i f_i(\Theta)$ (5). Although several distributed algorithms  ...
arXiv:2208.13810v2 fatcat:nik3iefajjb2xcniw3lrbksxqq
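The min-max objective quoted in the snippet above can be attacked with plain gradient descent-ascent: descend on the model parameters, ascend on the simplex weights via a multiplicative (mirror) step. The sketch below uses invented quadratic per-node losses $f_i(\Theta) = \|\Theta - c_i\|^2$ purely for illustration; it shows only the min-max structure, not the paper's decentralized DR-DSGD updates or its KL regularization.

```python
import numpy as np

# One toy loss per node: f_i(theta) = ||theta - c_i||^2.
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])

def losses(theta):
    return np.sum((theta - centers) ** 2, axis=1)

theta = np.zeros(2)
lam = np.ones(len(centers)) / len(centers)   # uniform start on the simplex
eta_theta, eta_lam = 0.05, 0.1
for _ in range(500):
    grad_theta = 2 * lam @ (theta - centers) # grad of sum_i lam_i f_i(theta)
    theta -= eta_theta * grad_theta          # descent step on theta
    lam *= np.exp(eta_lam * losses(theta))   # mirror-ascent step on lambda
    lam /= lam.sum()                         # renormalize onto the simplex
```

The ascent player shifts weight toward the nodes with the largest loss, so the descent player ends up minimizing a worst-case mixture of the per-node losses rather than their plain average.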

Distributionally Robust Optimization: A review on theory and applications

Fengming Lin, Xiaolei Fang, Zheming Gao
2022 Numerical Algebra, Control and Optimization  
In this paper, we survey the primary research on the theory and applications of distributionally robust optimization (DRO).  ...  We start with reviewing the modeling power and computational attractiveness of DRO approaches, induced by the ambiguity sets structure and tractable robust counterpart reformulations.  ...  [66] also proposed their work in distributionally robust learning with unlabeled data.  ...
doi:10.3934/naco.2021057 fatcat:cluvjzkrhfbc7o2446fhjrmbji

Efficient Distributionally Robust Bayesian Optimization with Worst-case Sensitivity

Sebastian Shenghong Tay, Chuan Sheng Foo, Urano Daisuke, Richalynn Leong, Bryan Kian Hsiang Low
2022 International Conference on Machine Learning  
In distributionally robust Bayesian optimization (DRBO), an exact computation of the worst-case expected value requires solving an expensive convex optimization problem.  ...  We provide a regret bound for our novel DRBO algorithm with the fast approximation, and empirically show it is competitive with that using the exact worst-case expected value while incurring significantly  ...  Distributionally robust BO (DRBO) was concurrently introduced by Kirschner et al. (2020) and Nguyen et al. (2020) and adapts the framework of distributionally robust optimization (DRO) in operations  ... 
dblp:conf/icml/TayFDLL22 fatcat:s2qsht2wwnhojouet2zmwe4tye

Efficient Stochastic Gradient Descent for Learning with Distributionally Robust Optimization [article]

Soumyadip Ghosh, Mark Squillante, Ebisa Wollega
2020 arXiv   pre-print
Distributionally robust optimization (DRO) problems are increasingly seen as a viable method to train machine learning models for improved model generalization.  ...  These min-max formulations, however, are more difficult to solve. We therefore provide a new stochastic gradient descent algorithm to efficiently solve this DRO formulation.  ...  Distributionally Robust Learning Consider a general formulation of the distributionally robust optimization problem of active interest.  ... 
arXiv:1805.08728v2 fatcat:u2jeqbumtbbfjlvelv2gt7nicy

Wasserstein Distributionally Robust Inverse Multiobjective Optimization

Chaosheng Dong, Bo Zeng
2021 PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE  
To hedge against the uncertainties in the hypothetical DMP, the data, and the parameter space, we investigate in this paper the distributionally robust approach for inverse multiobjective optimization.  ...  We then formulate a Wasserstein distributionally robust inverse multiobjective optimization problem (WRO-IMOP) that minimizes a worst-case expected loss function, where the worst case is taken over all  ...  Algorithm 1 (Wasserstein Distributionally Robust IMOP): for each $i$, if $CV_i > 0$ then let $Y_i \leftarrow Y_i \cup \{y_i\}$; repeat until $\max_{i \in [N]} CV_i \leq \delta$; output a $\delta$-optimal solution $\theta_N$ of (3). Using the above  ...
doi:10.1609/aaai.v35i7.16739 fatcat:vugh7jk5hnalzf7l525z4eau3q

Distributionally Robust Optimization [chapter]

Jian Gao, Yida Xu, Julian Barreiro-Gomez, Massa Ndong, Michail Smyrnakis, Hamidou Tembine
2018 Optimization Algorithms - Examples  
This chapter presents a class of distributionally robust optimization problems in which a decision-maker has to choose an action in an uncertain environment.  ...  This leads to a distributionally robust optimization problem. Simple algorithms, whose dynamics are inspired from the gradient flows, are proposed to find local optima.  ...  The presented methodology is simple and reduces significantly the dimensionality of the distributionally robust optimization.  ... 
doi:10.5772/intechopen.76686 fatcat:3s5nhfcjwncsjfxqp6mlnkdnhe

Sinkhorn Distributionally Robust Optimization [article]

Jie Wang, Rui Gao, Yao Xie
2022 arXiv   pre-print
We study distributionally robust optimization (DRO) with Sinkhorn distance, a variant of Wasserstein distance based on entropic regularization.  ...  We demonstrate that our proposed algorithm finds a δ-optimal solution of the new DRO formulation with computation cost Õ(δ^-3) and memory cost Õ(δ^-2), and the computation cost further improves to Õ(δ^-2  ...  IEEE Transactions on Automatic Control.  ...  Yu Y, Lin T, Mazumdar EV, Jordan M. Fast distributionally robust learning with variance-reduced min-max optimization.  ...
arXiv:2109.11926v2 fatcat:jkoyh2rrqvdvxfdke75k7liv54

Gradient Descent Ascent for Minimax Problems on Riemannian Manifolds [article]

Feihu Huang, Shangqian Gao
2023 arXiv   pre-print
To further reduce the sample complexity, we propose an accelerated Riemannian stochastic gradient descent ascent (Acc-RSGDA) algorithm based on the momentum-based variance-reduced technique.  ...  Extensive experimental results on the robust distributional optimization and robust Deep Neural Networks (DNNs) training over Stiefel manifold demonstrate efficiency of our algorithms.  ...  Subsequently, [43] presented fast stochastic variance-reduced methods to Riemannian manifold optimization.  ... 
arXiv:2010.06097v5 fatcat:tgz3wv7abzaqbnphfwt43j7hoa