A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2013; you can also visit the original URL.
The file type is application/pdf.
Stable Convergence Behavior Under Summable Perturbations of a Class of Projection Methods for Convex Feasibility and Optimization Problems
2007
IEEE Journal on Selected Topics in Signal Processing
We study the convergence behavior of a class of projection methods for solving convex feasibility and optimization problems. ...
We prove that the algorithms in this class converge to solutions of the consistent convex feasibility problem, and that their convergence is stable under summable perturbations. ...
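The perturbed-projection behavior the abstract describes can be illustrated with a toy sketch. Assumptions not taken from the paper: two halfspace constraints, bounded perturbation directions, and the summable step sizes β_k = 1/(k+1)²; this is a minimal illustration of summable-perturbation resilience, not the paper's exact algorithm.

```python
import numpy as np

# Toy illustration of a projection method that remains convergent under
# summable perturbations. The constraint set is the intersection of two
# halfspaces; each iterate is first nudged by a bounded perturbation
# scaled by a summable sequence beta_k, then projected sequentially.

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the halfspace {y : a.y <= b}."""
    violation = a @ x - b
    if violation <= 0:
        return x
    return x - violation * a / (a @ a)

a1, b1 = np.array([1.0, 0.0]), 1.0   # constraint: x1 <= 1
a2, b2 = np.array([0.0, 1.0]), 1.0   # constraint: x2 <= 1

rng = np.random.default_rng(0)
x = np.array([5.0, 5.0])
for k in range(200):
    beta = 1.0 / (k + 1) ** 2             # summable: sum(beta) is finite
    x = x + beta * rng.uniform(-1, 1, 2)  # bounded perturbation
    x = project_halfspace(x, a1, b1)
    x = project_halfspace(x, a2, b2)
# x now lies in the intersection of both halfspaces
```

The summability of β_k is the key hypothesis: the total displacement injected by the perturbations is finite, so it cannot push the iterates away from the feasible set indefinitely.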
Dan Butnariu's work on this paper was done during his 2006 visit to the Discrete Imaging and Graphics group of the Graduate Center of the City University of New York, and he gratefully acknowledges the ...
doi:10.1109/jstsp.2007.910263
fatcat:stfqnfe5prfpbpmpayrocja3hq
Superiorization and Perturbation Resilience of Algorithms: A Continuously Updated Bibliography
[article]
2023
arXiv
pre-print
This document presents a (mostly) chronologically-ordered bibliography of scientific publications on the superiorization methodology and perturbation resilience of algorithms which is compiled and continuously ...
If you know of a related scientific work in any form that should be included here kindly write to me on: yair@math.haifa.ac.il with full bibliographic details, a DOI if available, and a PDF copy of the ...
Butnariu, Davidi, Herman and Kazantsev, Stable convergence behavior under summable perturbations of a class of projection methods for convex feasibility and optimization problems, IEEE Journal on Selected Topics in Signal Processing. All references refer to the bibliography ...
arXiv:1506.04219v8
fatcat:xpucyqpogjemldrbiotjh6jtzy
Perturbation resilience and superiorization of iterative algorithms
2010
Inverse Problems
This is possible to do if the original algorithm is "perturbation resilient," which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. ...
For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. ...
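The superiorization idea behind this entry, steering a perturbation-resilient feasibility algorithm toward lower values of a secondary objective, can be sketched in a toy setting. Assumed here, not taken from the paper: a single halfspace constraint, the objective f(x) = ‖x‖², and a geometric summable step sequence.

```python
import numpy as np

# Superiorization sketch: interleave summable negative-gradient nudges of
# f(x) = ||x||^2 with feasibility-seeking projections onto {y : a.y >= b}.
# Projection alone only finds *some* feasible point; the superiorized
# iteration drifts toward the minimum-norm feasible point instead.

def project_ge(x, a, b):
    """Orthogonal projection onto the halfspace {y : a.y >= b}."""
    violation = b - a @ x
    if violation <= 0:
        return x
    return x + violation * a / (a @ a)

a, b = np.array([1.0, 1.0]), 2.0   # feasible set: x1 + x2 >= 2
x = np.array([4.0, 0.0])           # feasible, but far from the min-norm point
for k in range(2000):
    beta = 0.99 ** k               # summable perturbation sizes
    grad = x / np.linalg.norm(x)   # normalized gradient direction of ||x||^2
    x = project_ge(x - beta * grad, a, b)
# x approaches (1, 1), the feasible point of minimum norm
```

This captures the trade-off the abstract mentions: the superiorized run costs little more than the feasibility algorithm itself, far less than a full constrained-optimization method.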
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Heart, Lung, And Blood Institute or the National Institutes of Health. ...
doi:10.1088/0266-5611/26/6/065008
pmid:20613969
pmcid:PMC2897099
fatcat:h5qogt6txvehbmabzco3y6dfsu
Subgradient Techniques for Passivity Enforcement of Linear Device and Interconnect Macromodels
2012
IEEE Transactions on Microwave Theory and Techniques
This paper presents a class of nonsmooth convex optimization methods for the passivity enforcement of reducedorder macromodels of electrical interconnects, packages and linear passive devices. ...
We provide a theoretical proof of the global optimality for the solution computed via both schemes. ...
A second class of methods is based on Hamiltonian eigenvalue extraction and perturbation [12], [18], [19], [20], [21], [22], [23], [24]. ...
doi:10.1109/tmtt.2012.2211610
fatcat:2lwinsuwvfa2hfurpg4rqs5djy
Differentially Private Distributed Convex Optimization via Functional Perturbation
[article]
2016
arXiv
pre-print
We study a class of distributed convex constrained optimization problems where a group of agents aim to minimize the sum of individual objective functions while each desires that any information about ...
To this end, we establish a general framework for differentially private handling of functional data. ...
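For contrast with the functional-perturbation framework of this paper, the simplest differential-privacy construction is output perturbation with Laplace noise. A minimal sketch of that classical Laplace mechanism (the query, data range, and ε value below are illustrative assumptions):

```python
import numpy as np

# Classical Laplace mechanism: release a query answer plus Laplace noise
# with scale sensitivity/epsilon. Here the query is the mean of n values
# in [0, 1], whose sensitivity (max change from altering one value) is 1/n.

def laplace_mechanism(answer, sensitivity, epsilon, rng):
    """Return an epsilon-differentially-private version of `answer`."""
    return answer + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=1000)  # each agent's value lies in [0, 1]
true_mean = data.mean()
sensitivity = 1.0 / len(data)            # one changed value moves the mean by <= 1/n
private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=0.5, rng=rng)
```

Perturbing the objective *functions* themselves, as this paper does, avoids a weakness of such output perturbation in iterative distributed settings, where every exchanged message leaks information.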
ACKNOWLEDGMENTS The authors would like to thank the anonymous reviewers for helpful comments and suggestions that helped improve the presentation. ...
arXiv:1512.00369v3
fatcat:pmwqgzbedbexxmq3thgm2vcnci
Differentially private distributed convex optimization via objective perturbation
2016
2016 American Control Conference (ACC)
We study a class of distributed convex constrained optimization problems where a group of agents aim to minimize the sum of individual objective functions while each desires that any information about ...
To this end, we establish a general framework for differentially private handling of functional data. ...
ACKNOWLEDGMENTS The authors would like to thank the anonymous reviewers for helpful comments and suggestions that helped improve the presentation. ...
doi:10.1109/acc.2016.7525222
dblp:conf/amcc/NozariTC16
fatcat:gsq4srj5yrexfhtdpbptj4ytxy
Efficient Semidefinite Programming with approximate ADMM
[article]
2021
arXiv
pre-print
Tenfold improvements in computation speed can be brought to the alternating direction method of multipliers (ADMM) for Semidefinite Programming with virtually no decrease in robustness and provable convergence ...
This in turn guarantees convergence, either to a solution or a certificate of infeasibility, of the ADMM algorithm. ...
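The flavor of ADMM applied to a semidefinite program can be conveyed by a small sketch, here for the nearest-correlation-matrix problem. The splitting, penalty parameter ρ = 1, and iteration count are illustrative choices, not those of the paper.

```python
import numpy as np

# ADMM for: min ||X - C||_F^2  s.t.  X is PSD and diag(X) = 1.
# Split into X (objective + diagonal constraint) and Z (PSD cone),
# with U the scaled dual variable.

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by eigenvalue clipping."""
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T

C = np.array([[ 1.0, 0.9, -0.9],
              [ 0.9, 1.0,  0.9],
              [-0.9, 0.9,  1.0]])   # symmetric but indefinite

rho = 1.0
Z = np.zeros_like(C)
U = np.zeros_like(C)
for _ in range(1000):
    X = (2 * C + rho * (Z - U)) / (2 + rho)  # entrywise minimizer ...
    np.fill_diagonal(X, 1.0)                 # ... with the diagonal pinned to 1
    Z = project_psd(X + U)                   # PSD projection (the costly step)
    U = U + X - Z                            # scaled dual update
# Z is PSD with (approximately) unit diagonal, close to C in Frobenius norm
```

The eigendecomposition inside `project_psd` dominates the per-iteration cost, which is exactly why approximate versions of this step, as studied in the paper, pay off.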
Cannon, and P. J. Goulart. COSMO: A conic operator splitting method for convex conic problems. arXiv 1901.10887, 2019.
P. Giselsson, M. Fält, and S. Boyd. ...
arXiv:1912.02767v2
fatcat:nxvv6nc6obffbhxwd45ej6b4we
A Stochastic Approximation Algorithm for Stochastic Semidefinite Programming
2016
Probability in the Engineering and Informational Sciences
Motivated by applications to multi-antenna wireless networks, we propose a distributed and asynchronous algorithm for stochastic semidefinite programming. ...
When applied to throughput maximization in wireless systems, the proposed algorithm retains its convergence properties under a wide array of mobility impediments such as user update asynchronicities, random ...
Acknowledgments This research was supported by the European Commission in the framework of the QUANTICOL Project (grant agreement no. 600708) and the French National Research Agency under grant agreements ...
doi:10.1017/s0269964816000127
fatcat:zlmyyhtbgvbf3oxjlcyu4bd7mu
On the string averaging method for sparse common fixed-point problems
2009
International Transactions in Operational Research
The convex feasibility problem is treated as a special case and a new subgradient projections algorithmic scheme is obtained. ...
We study the common fixed point problem for the class of directed operators. This class is important because many commonly used nonlinear operators in convex optimization belong to it. ...
Butnariu, Davidi, Herman and Kazantsev [7] call a certain class of string-averaging methods the Amalgamated Projection Method and show its stable behavior under summable perturbations. ...
doi:10.1111/j.1475-3995.2008.00684.x
pmid:20300484
pmcid:PMC2839252
fatcat:6rvh73dodrdeff7hirknx5p3yu
On the convergence of mirror descent beyond stochastic convex programming
[article]
2018
arXiv
pre-print
In this paper, we examine the convergence of mirror descent in a class of stochastic optimization problems that are not necessarily convex (or even quasi-convex), and which we call variationally coherent ...
These results contribute to the landscape of non-convex stochastic optimization by showing that (quasi-)convexity is not essential for convergence to a global minimum: rather, variational coherence, a ...
In this paper, we examine the asymptotic behavior of mirror descent in a class of stochastic optimization problems that are not necessarily convex (or even quasi-convex). ...
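The classical instance of mirror descent, the entropic mirror map on the probability simplex (multiplicative weights), gives a feel for the method the abstract analyzes. The linear objective, noise level, and step schedule below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Stochastic mirror descent with the entropic mirror map on the simplex:
# minimize f(x) = <c, x> over probability vectors x, using noisy gradients.
# The entropic geometry turns each step into a multiplicative update.

rng = np.random.default_rng(1)
c = np.array([0.3, 0.1, 0.5])         # linear objective; best coordinate is 1
x = np.ones(3) / 3                    # start at the simplex center
for k in range(5000):
    g = c + 0.1 * rng.normal(size=3)  # noisy gradient oracle
    step = 1.0 / np.sqrt(k + 1)       # diminishing step sizes
    x = x * np.exp(-step * g)         # multiplicative-weights update
    x = x / x.sum()                   # renormalize back onto the simplex
# x concentrates its mass on the minimizing vertex of the simplex
```

Note that no explicit Euclidean projection is needed: the mirror map keeps the iterates on the simplex by construction, which is the practical appeal of the method.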
arXiv:1706.05681v2
fatcat:5kutdfqnzjge5lhkqfq3l2wq3y
On Error Bounds and Multiplier Methods for Variational Problems in Banach Spaces
2018
SIAM Journal on Control and Optimization
We give some global convergence properties of the method and then use the error bound theory to provide estimates for the rate of convergence and to deduce boundedness of the sequence of penalty parameters ...
Finally, numerical results for optimal control, Nash equilibrium problems, and elliptic parameter estimation problems are presented. ...
We refer the reader to [51] for a formal proof; an alternative way to verify this irregularity is to note that if RCQ holds, then it remains stable under small perturbations of the constraint function ...
doi:10.1137/17m1146518
fatcat:24yynk5viralvm736jsae6qtsq
Distributed Saddle-Point Subgradient Algorithms With Laplacian Averaging
2017
IEEE Transactions on Automatic Control
We present distributed subgradient methods for min-max problems with agreement constraints on a subset of the arguments of both the convex and concave parts. ...
For the case of general convex-concave saddle-point problems, our analysis establishes the convergence of the running time-averages of the local estimates to a saddle point under periodic connectivity of ...
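A single-agent caricature shows why the running time-averages, rather than the last iterates, are what converge: gradient descent-ascent on the bilinear function L(x, y) = x·y. The step size and horizon below are illustrative choices, and this strips away the distributed/Laplacian structure of the paper entirely.

```python
import numpy as np

# Gradient descent-ascent on L(x, y) = x * y, whose unique saddle point is
# (0, 0). The raw iterates orbit the saddle point without converging, but
# their running time-averages do converge toward it.

eta = 0.01
x, y = 1.0, 1.0
xs, ys = [], []
for _ in range(5000):
    x, y = x - eta * y, y + eta * x   # descent in x, ascent in y
    xs.append(x)
    ys.append(y)

x_avg, y_avg = np.mean(xs), np.mean(ys)
# (x, y) sits on a slowly expanding orbit; (x_avg, y_avg) is near (0, 0)
```

The successive orbit positions nearly cancel in the average, which is the elementary mechanism behind ergodic convergence guarantees for saddle-point subgradient methods.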
ACKNOWLEDGMENTS The authors thank the anonymous reviewers for their useful feedback that helped us improve the presentation of the paper. ...
doi:10.1109/tac.2016.2616646
fatcat:577gcegwefb4jgovz5umyjpyie
Distributed saddle-point subgradient algorithms with Laplacian averaging
[article]
2016
arXiv
pre-print
We present distributed subgradient methods for min-max problems with agreement constraints on a subset of the arguments of both the convex and concave parts. ...
For the case of general convex-concave saddle-point problems, our analysis establishes the convergence of the running time-averages of the local estimates to a saddle point under periodic connectivity ...
ACKNOWLEDGMENTS The authors thank the anonymous reviewers for their useful feedback that helped us improve the presentation of the paper. ...
arXiv:1510.05169v2
fatcat:6vdbwetwanbe3epvgszhoxct5m
Dual Smoothing and Level Set Techniques for Variational Matrix Decomposition
[article]
2016
arXiv
pre-print
We focus on the robust principal component analysis (RPCA) problem, and review a range of old and new convex formulations for the problem and its variants. ...
In the final sections, we show a range of numerical experiments for simulated and real-world problems. ...
The problem class (9) falls into the class of problems studied by van den Berg and Friedlander (2011, 2008) for ρ(·) = ||·||_2 and by Aravkin et al. (2013) for arbitrary convex ρ. ...
arXiv:1603.00284v1
fatcat:xp3phw4qinfofoxcoychelpnvm
Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
[article]
2015
arXiv
pre-print
We discuss how to adapt IQC theory to study optimization algorithms, proving new inequalities about convex functions and providing a version of IQC theory adapted for use by optimization researchers. ...
Using these inequalities, we derive numerical upper bounds on convergence rates for the gradient method, the heavy-ball method, Nesterov's accelerated method, and related variants by solving small, simple ...
Acknowledgments We would like to thank Peter Seiler for many helpful pointers on time-domain IQCs, Elad Hazan for his suggestion of how to analyze functions that are not strongly convex, and Bin Hu for ...
arXiv:1408.3595v7
fatcat:ycc3dl53tzh4zgzz4wszxrx6uy