A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
On the Design of Loss Functions for Classification: theory, robustness to outliers, and SavageBoost
2008
Neural Information Processing Systems
This has various consequences of practical interest, such as showing that 1) the widely adopted practice of relying on convex loss functions is unnecessary, and 2) many new losses can be derived for classification ...
It is shown that a better alternative is to start from the specification of a functional form for the minimum conditional risk, and derive the loss function. ...
This is unlike all other previous φ functions, and suggests that classifiers designed with the new loss should be more robust to outliers. ...
dblp:conf/nips/Masnadi-ShiraziV08
fatcat:sjncdp7nujgc5p7tx6h546wemy
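The robustness claim in this entry can be illustrated numerically. A minimal sketch comparing AdaBoost's exponential loss with the Savage loss φ(v) = 1/(1 + e^(2v))² derived in the paper; the margin values below are chosen only for illustration:

```python
import math

def exp_loss(v):
    """Exponential loss used by AdaBoost: grows without bound as the
    margin v becomes more negative."""
    return math.exp(-v)

def savage_loss(v):
    """Savage loss from the paper: phi(v) = 1 / (1 + e^{2v})^2,
    bounded above by 1, so a single outlier's penalty saturates."""
    return 1.0 / (1.0 + math.exp(2.0 * v)) ** 2

# Compare penalties across margins, including a badly misclassified
# outlier (large negative margin): exp blows up, Savage saturates.
for v in (-5.0, -1.0, 0.0, 1.0):
    print(f"margin {v:+.1f}: exp={exp_loss(v):10.4f}  savage={savage_loss(v):.4f}")
```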
SPLBoost: An Improved Robust Boosting Algorithm Based on Self-paced Learning
[article]
2017
arXiv
pre-print
..., LogitBoost and SavageBoost, have been proposed to improve the robustness of AdaBoost by replacing the exponential loss with some designed robust loss functions. ...
Specifically, the underlying loss minimized by the traditional AdaBoost is the exponential loss, which has been proved to be very sensitive to random noise/outliers. ...
ACKNOWLEDGMENT This work was supported by the National Natural Science Foundation of China (Grant Nos. 11501440, 61303168, 61333019 and 61373114). ...
arXiv:1706.06341v2
fatcat:kbcfiqd2xrfpfl6nxdh2ahsxqy
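The snippet above states that the exponential loss is very sensitive to noise/outliers. A small sketch of why, using AdaBoost-style reweighting w_i ∝ exp(−y_i F(x_i)); the margin values are made up for illustration:

```python
import math

# AdaBoost reweighting: w_i is proportional to exp(-y_i * F(x_i)).
# Margins y_i * F(x_i) for five points; the last is a mislabeled outlier.
margins = [2.0, 1.5, 1.0, 0.5, -3.0]
raw = [math.exp(-m) for m in margins]
total = sum(raw)
weights = [w / total for w in raw]

# The single outlier receives the overwhelming majority of the weight,
# so subsequent weak learners concentrate on fitting the noise.
print([round(w, 3) for w in weights])
```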
Restricted Minimum Error Entropy Criterion for Robust Classification
[article]
2020
arXiv
pre-print
However, the implementation of MEE for robust classification remains largely absent from the literature. ...
To this end, we analyze the optimal error distribution in the presence of outliers for those classifiers with continuous errors, and introduce a simple codebook to restrict MEE so that it drives the error ...
Simulation results in the above works have shown the effectiveness of using a bounded and non-convex loss function for robust classification. ...
arXiv:1909.02707v4
fatcat:pdo7rjnknza4rg2js32ppdfgie
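Rényi's quadratic entropy underlying MEE is typically estimated from a Parzen-window "information potential" V(e), with H₂ = −log V(e). A minimal sketch under that standard estimator; the kernel bandwidth and error samples are illustrative only:

```python
import math

def information_potential(errors, sigma=1.0):
    """Empirical quadratic information potential
    V(e) = (1/N^2) * sum_i sum_j G_sigma(e_i - e_j)
    with a Gaussian (Parzen) kernel. MEE training maximizes V,
    which is equivalent to minimizing Renyi's quadratic entropy."""
    n = len(errors)
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    v = 0.0
    for ei in errors:
        for ej in errors:
            v += norm * math.exp(-((ei - ej) ** 2) / (2.0 * sigma ** 2))
    return v / (n * n)

# Concentrated errors give a larger information potential (lower entropy)
# than spread-out errors.
tight = [0.0, 0.1, -0.1, 0.05]
spread = [3.0, -2.0, 1.5, -4.0]
print(information_potential(tight), information_potential(spread))
```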
On the design of robust classifiers for computer vision
2010
2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
The probability elicitation view of classifier design is adopted, and a set of necessary conditions for the design of such losses is identified. ...
The design of robust classifiers, which can contend with the noisy and outlier ridden datasets typical of computer vision, is studied. ...
Figure 1. Loss functions used for classifier design as alternatives to the non-margin-enforcing 0−1 loss. Top: classical non-robust losses. Bottom: robust losses of SavageBoost and TangentBoost. ...
doi:10.1109/cvpr.2010.5540136
dblp:conf/cvpr/Masnadi-ShiraziMV10
fatcat:vyocqnjq2ngp3ndz7tk45vntwa
Calibrated Surrogate Losses for Adversarially Robust Classification
[article]
2021
arXiv
pre-print
We further introduce a class of nonconvex losses and offer necessary and sufficient conditions for losses in this class to be calibrated. ...
Adversarially robust classification seeks a classifier that is insensitive to adversarial perturbations of test patterns. ...
On the design of loss functions for classification: theory, robustness to outliers, and SavageBoost. In Advances in Neural Information Processing Systems 22, pages 1049-1056, 2009. ...
arXiv:2005.13748v2
fatcat:fu3yomuppvfdhj74hrjkwdd3kq
Variable margin losses for classifier design
2010
Neural Information Processing Systems
A detailed analytical study is presented on how properties of the classification risk, such as its optimal link and minimum risk functions, are related to the shape of the loss, and its margin enforcing ...
These enable a precise characterization of the loss for a popular class of link functions. ...
This loss is then used to design a robust boosting algorithm, denoted SavageBoost. ...
dblp:conf/nips/Masnadi-ShiraziV10
fatcat:5fwzjheemrdx5n6srwd4q2j4ui
On Symmetric Losses for Learning from Corrupted Labels
[article]
2019
arXiv
pre-print
This paper aims to provide a better understanding of a symmetric loss. ...
Finally, we conduct experiments to validate the relevance of the symmetric condition. ...
Acknowledgement We thank Han Bao and Zhenghang Cui for helpful discussion. We also thank anonymous reviewers for providing insightful comments. ...
arXiv:1901.09314v2
fatcat:sfqomzvx6bg57cf54djfci2x44
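The symmetric condition studied in this line of work is ℓ(z) + ℓ(−z) = const. A quick numerical check using the sigmoid loss (which satisfies it) and the logistic loss (which does not); the helper `symmetry_defect` is ours, not from the paper:

```python
import math

def sigmoid_loss(z):
    """Sigmoid loss: symmetric, since l(z) + l(-z) = 1 for all z."""
    return 1.0 / (1.0 + math.exp(z))

def logistic_loss(z):
    """Logistic loss: convex but not symmetric."""
    return math.log(1.0 + math.exp(-z))

def symmetry_defect(loss, zs):
    """Max deviation of l(z) + l(-z) from its value at z = 0."""
    c = 2.0 * loss(0.0)
    return max(abs(loss(z) + loss(-z) - c) for z in zs)

zs = [0.5, 1.0, 2.0, 5.0]
print(symmetry_defect(sigmoid_loss, zs))   # ~0: the symmetric condition holds
print(symmetry_defect(logistic_loss, zs))  # clearly nonzero
```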
Progressive Identification of True Labels for Partial-Label Learning
[article]
2020
arXiv
pre-print
The goal of this paper is to propose a novel framework of PLL with flexibility on the model and optimization algorithm. ...
The resulting algorithm is model-independent and loss-independent, and compatible with stochastic optimization. Thorough experiments demonstrate it sets the new state of the art. ...
On the design of loss functions for classification: theory, robustness to outliers, and SavageBoost. ...
arXiv:2002.08053v3
fatcat:qr6ysodjbndb7kz3r75tzcbuwa
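A minimal sketch, not the paper's exact algorithm, of the progressive-identification idea the abstract describes: each instance carries a set of candidate labels, and per-candidate weights are re-normalized from the model's current confidences so the loss gradually concentrates on the most plausible candidate. The helper `update_weights` and all numbers are hypothetical:

```python
def update_weights(candidates, confidences):
    """candidates: candidate label indices for one instance.
    confidences: dict label -> current model confidence.
    Returns per-candidate weights that sum to 1."""
    scores = {y: confidences.get(y, 0.0) for y in candidates}
    total = sum(scores.values())
    if total == 0.0:  # no signal yet: fall back to uniform weights
        return {y: 1.0 / len(candidates) for y in candidates}
    return {y: s / total for y, s in scores.items()}

# Candidate set {0, 2, 3}; the model is currently most confident in label 2,
# so the weighted loss would emphasize that candidate.
w = update_weights([0, 2, 3], {0: 0.1, 1: 0.6, 2: 0.7, 3: 0.2})
print(w)
```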
Loss Functions, Axioms, and Peer Review
[article]
2020
arXiv
pre-print
We consider the class of L(p,q) loss functions, which is a matrix extension of the standard class of L_p losses on vectors; here the choice of the loss function amounts to choosing the hyperparameters ...
The key challenge that arises is the specification of a loss function for ERM. ...
We are grateful to Francisco Cruz for compiling the IJCAI 2017 review dataset, and to Carles Sierra for making it available to us. ...
arXiv:1808.09057v2
fatcat:3jdvu6z7m5e2xcctowypkwapfa
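The L(p,q) class mentioned above is conventionally the entrywise L_{p,q} matrix norm. A sketch under that standard definition; aggregating columns first is one common convention, not necessarily the paper's exact parameterization:

```python
def lpq_loss(residuals, p, q):
    """L(p,q) value of a residual matrix (list of rows): take the l_p
    norm of each column, then the l_q norm of the resulting vector.
    p = q reduces to the entrywise l_p norm; p = q = 2 gives the
    Frobenius norm."""
    cols = list(zip(*residuals))
    col_norms = [sum(abs(x) ** p for x in col) ** (1.0 / p) for col in cols]
    return sum(c ** q for c in col_norms) ** (1.0 / q)

# p = q = 2: Frobenius norm of the residual matrix.
r = [[3.0, 0.0], [4.0, 0.0]]
print(lpq_loss(r, 2, 2))  # -> 5.0
```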
Dimension-free convergence rates for gradient Langevin dynamics in RKHS
[article]
2020
arXiv
pre-print
Amongst others, the convergence analysis relies on the properties of a stochastic differential equation, its discrete time Galerkin approximation and the geometric ergodicity of the associated Markov chains ...
In this work, we provide a convergence analysis of GLD and SGLD when the optimization space is an infinite dimensional Hilbert space. ...
Masnadi-Shirazi and N. Vasconcelos. On the design of loss functions for classification: theory, robustness to outliers, and SavageBoost. ...
arXiv:2003.00306v2
fatcat:peaztc5anbbvbkih2codeekgcm
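A minimal sketch of the unadjusted gradient Langevin dynamics iteration analyzed in such works, in finite dimension for illustration; the step size, inverse temperature, and quadratic objective below are arbitrary choices, not the paper's setting:

```python
import math
import random

def gld_step(theta, grad, eta=0.01, beta=1e4, rng=random):
    """One unadjusted gradient Langevin dynamics step:
    theta' = theta - eta * grad(theta) + sqrt(2 * eta / beta) * xi,
    with xi standard Gaussian noise. As beta -> infinity this
    recovers plain gradient descent."""
    noise = math.sqrt(2.0 * eta / beta) * rng.gauss(0.0, 1.0)
    return theta - eta * grad(theta) + noise

# Approximately sample from exp(-beta * F) with F(x) = x^2 / 2
# (so grad F(x) = x): the iterates contract toward the minimizer at 0.
random.seed(0)
x = 5.0
for _ in range(2000):
    x = gld_step(x, lambda t: t)
print(x)
```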