A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
The file type is application/pdf.
Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization
2010
IEEE Transactions on Information Theory
We develop and analyze M-estimation methods for divergence functionals and the likelihood ratios of two probability distributions. ...
Given conditions only on the ratios of densities, we show that our estimators can achieve optimal minimax rates for the likelihood ratio and the divergence functionals in certain regimes. ...
In other words, any negative f-divergence can serve as a lower bound for a risk minimization problem. ...
doi:10.1109/tit.2010.2068870
fatcat:hqtr5bvfvzb6bgsmomkozyucjm
Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization
2007
Neural Information Processing Systems
We develop and analyze an algorithm for nonparametric estimation of divergence functionals and the density ratio of two probability distributions. ...
Our method is based on a variational characterization of f-divergences, which turns the estimation into a penalized convex risk minimization problem. ...
In this paper, we estimate the likelihood ratio and the KL divergence by optimizing a penalized convex risk. ...
dblp:conf/nips/NguyenWJ07
fatcat:hoxwvye2ije33czieq4xmoildi
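The variational approach summarized in the entry above can be illustrated with a minimal sketch, which is not the authors' exact penalized estimator: for the KL divergence, the Donsker-Varadhan characterization D(P||Q) = sup_g E_P[g] - log E_Q[e^g] turns divergence estimation into an optimization over a critic class, and the maximizing critic estimates the log-likelihood ratio up to a constant. The quadratic critic class and all names here (`critic`, `neg_dv_bound`, `kl_hat`) are illustrative choices, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy setup: P = N(0,1), Q = N(1,1), so KL(P||Q) = 0.5 in closed form.
xp = rng.normal(0.0, 1.0, 20000)  # samples from P
xq = rng.normal(1.0, 1.0, 20000)  # samples from Q

def critic(theta, x):
    # Quadratic critic class; it contains the true log-ratio (0.5 - x here).
    a, b, c = theta
    return a + b * x + c * x ** 2

def neg_dv_bound(theta):
    # Negative Donsker-Varadhan lower bound: E_P[g] - log E_Q[exp(g)].
    gq = critic(theta, xq)
    m = gq.max()  # log-sum-exp stabilization
    log_mean_exp = m + np.log(np.mean(np.exp(gq - m)))
    return -(critic(theta, xp).mean() - log_mean_exp)

res = minimize(neg_dv_bound, x0=np.zeros(3), method="Nelder-Mead")
kl_hat = -res.fun  # divergence estimate; the optimal critic gives the log-ratio
```

At the optimum, exp(g(x) - log E_Q[e^g]) approximates the density ratio dP/dQ(x), which is the sense in which one estimator yields both the divergence and the likelihood ratio.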
Implications of the Cressie-Read Family of Additive Divergences for Information Recovery
2012
Social Science Research Network
family of probability distributions, likelihood functions, estimators, and inference procedures. ...
To address the unknown sampling process underlying the data, we consider a convex combination of two or more estimators derived from members of the flexible CR family of divergence measures and optimize ...
We also thank two anonymous journal reviewers for their helpful comments and insights. Of course they are not to be held responsible for the current set of words and symbols. ...
doi:10.2139/ssrn.2158413
fatcat:3vhclykp5vfotiofa46or2ccsy
Implications of the Cressie-Read Family of Additive Divergences for Information Recovery
2012
Entropy
family of probability distributions, likelihood functions, estimators, and inference procedures. ...
To address the unknown sampling process underlying the data, we consider a convex combination of two or more estimators derived from members of the flexible CR family of divergence measures and optimize ...
We also thank two anonymous journal reviewers for their helpful comments and insights. Of course they are not to be held responsible for the current set of words and symbols. ...
doi:10.3390/e14122427
fatcat:cdnvldnagzbctjvah734kx6hrm
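As a hedged sketch of the family discussed above (the function name `cressie_read` and the toy distributions are ours; the paper's convex-combination estimator is not reproduced), the Cressie-Read power divergence I(p, q; gamma) = (1/(gamma*(gamma+1))) * sum_i p_i[(p_i/q_i)^gamma - 1] recovers KL(p||q) in the limit gamma -> 0:

```python
import numpy as np

def cressie_read(p, q, gamma):
    # Cressie-Read power divergence I(p, q; gamma) for discrete distributions.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * ((p / q) ** gamma - 1.0)) / (gamma * (gamma + 1.0))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

kl = np.sum(p * np.log(p / q))        # KL(p||q), the gamma -> 0 limit
approx = cressie_read(p, q, 1e-6)     # small gamma approximates KL
chi2_half = cressie_read(p, q, 1.0)   # gamma = 1 gives half the chi-squared divergence
```

Other members follow from other gamma values (e.g. gamma -> -1 gives KL(q||p)), which is what makes a convex combination across the family a natural way to hedge against an unknown sampling process.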
Convex Multiple-Instance Learning by Estimating Likelihood Ratio
2010
Neural Information Processing Systems
Theoretically, we prove a quantitative relationship between the risk estimated under the 0-1 classification loss and the risk under a loss function for the likelihood ratio. ...
This is cast as joint estimation of both a likelihood ratio predictor and the target (likelihood ratio variable) for instances. ...
Acknowledgements This work is supported, in part, by the European Commission, under a Marie Curie Excellence Grant MCEXT-025481. ...
dblp:conf/nips/LiS10
fatcat:5izppstt7nhhfc2ra7acgm4sku
Bayes Risk Error is a Bregman Divergence
2011
IEEE Transactions on Signal Processing
In this correspondence, we show that the Bayes risk error is a member of the class of Bregman divergences and discuss the implications of this fact. ...
In previous work reported in these Transactions, we proposed a new distortion measure for the quantization of prior probabilities that are used in the threshold of likelihood ratio test detection: Bayes ...
We show that the Bayes risk error is non-negative and equal to zero only when p0 = a, that it is strictly convex in p0, and that it is quasi-convex in a for deterministic likelihood ratio tests [2] ...
doi:10.1109/tsp.2011.2159500
fatcat:s5nzuhwyt5dthbz6eugotqobwq
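The properties claimed in the entry above admit a small numerical check, under an assumed unit-variance Gaussian shift detection problem with uniform costs (the model and the names `lrt_risk` and `bayes_risk_error` are illustrative, not from the correspondence):

```python
import numpy as np
from scipy.stats import norm

def lrt_risk(p0, a):
    # Bayes risk under true prior p0 for an LRT whose threshold assumes prior a,
    # for H0: N(0,1) vs H1: N(1,1) with uniform costs.
    tau = 0.5 + np.log(a / (1.0 - a))  # LRT threshold implied by assumed prior a
    p_fa = 1.0 - norm.cdf(tau)         # P(decide H1 | H0)
    p_miss = norm.cdf(tau - 1.0)       # P(decide H0 | H1)
    return p0 * p_fa + (1.0 - p0) * p_miss

def bayes_risk_error(p0, a):
    # Excess risk from quantizing/mismatching the prior; zero iff a == p0.
    return lrt_risk(p0, a) - lrt_risk(p0, p0)
```

Since the matched-prior test is Bayes optimal, `bayes_risk_error` is non-negative and vanishes exactly at a = p0, consistent with the Bregman-divergence interpretation.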
Nonparametric estimation of the likelihood ratio and divergence functionals
2007
2007 IEEE International Symposium on Information Theory
We develop and analyze a nonparametric method for estimating the class of f-divergence functionals and the density ratio of two probability distributions. ...
Our method is based on a non-asymptotic variational characterization of the f-divergence, which allows us to cast the problem of estimating divergences in terms of risk minimization. ...
Overall, we obtain an M-estimator whose optimal value estimates the divergence and whose optimizing argument estimates the likelihood ratio. ...
doi:10.1109/isit.2007.4557517
dblp:conf/isit/NguyenWJ07
fatcat:6apwvghvbragxhbekpbuvm2ioe
Empirical divergence maximization for quantizer design: An analysis of approximation error
2011
2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Empirical divergence maximization is an estimation method similar to empirical risk minimization whereby the Kullback-Leibler divergence is maximized over a class of functions that induce probability distributions ...
We derive this estimator's approximation error decay rate as a function of the resolution of a class of partitions known as recursive dyadic partitions. ...
The observations are distributed according to p and q, respectively; φn is an empirical divergence maximization estimator akin to the familiar empirical risk minimization estimators [2] . ...
doi:10.1109/icassp.2011.5947284
dblp:conf/icassp/Lexa11
fatcat:lp2setjaavbrbeya2atzvaf7ki
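A minimal sketch of the idea behind the entry above, using plain equispaced histogram partitions rather than the paper's recursive dyadic partitions (the setup and names are ours): the plug-in KL divergence computed over a partition lower-bounds the true divergence, and refining the partition recovers more of it.

```python
import numpy as np

rng = np.random.default_rng(1)
# P = N(0,1), Q = N(1,1); the true KL divergence is 0.5.
xp = rng.normal(0.0, 1.0, 5000)
xq = rng.normal(1.0, 1.0, 5000)

def partition_kl(edges, xp, xq, alpha=0.5):
    # Plug-in KL divergence induced by a fixed partition of the real line;
    # add-half smoothing avoids blowups from empty cells.
    cp, _ = np.histogram(xp, bins=edges)
    cq, _ = np.histogram(xq, bins=edges)
    p = (cp + alpha) / (cp + alpha).sum()
    q = (cq + alpha) / (cq + alpha).sum()
    return float(np.sum(p * np.log(p / q)))

coarse = partition_kl(np.linspace(-4, 5, 3), xp, xq)   # 2 cells
fine = partition_kl(np.linspace(-4, 5, 33), xp, xq)    # 32 cells
```

By the data-processing inequality the partition-induced divergence never exceeds the true one, and the nested finer partition captures strictly more of it here; the paper's contribution is quantifying this approximation error for recursive dyadic partitions.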
New Developments in Statistical Information Theory Based on Entropy and Divergence Measures
2019
Entropy
In the last decades the interest in statistical methods based on information measures and particularly in pseudodistances or divergences has grown substantially [...] ...
Since no optimality holds for the aggregation of likelihood ratio tests, a similar procedure is proposed, replacing the individual likelihood ratio by some divergence based test statistics. ...
Besides the theoretical results, they have constructed an efficient algorithm in which a convex loss function is minimized at each iteration. ...
doi:10.3390/e21040391
pmid:33267105
fatcat:jc37cuc4yjamhhdlddrppmkzuu
Divergence measures for statistical data processing—An annotated bibliography
2013
Signal Processing
This note provides a bibliography of investigations based on or related to divergence measures for theoretical and applied inference problems. ...
Distance measures for statistical data processing. Abstract: This note contains a bibliography of works concerning the use of divergences in problems related to inference ...
The application of variational formulation to estimating divergence functionals and the likelihood ratio is addressed in [175] . f-divergences can usefully play the role of surrogate functions, that ...
doi:10.1016/j.sigpro.2012.09.003
fatcat:i5ki4ziujvf7hawvj663cqqzcu
Empirical Localization of Homogeneous Divergences on Discrete Sample Spaces
2015
Neural Information Processing Systems
The proposed estimator is derived from minimization of homogeneous divergence and can be constructed without calculation of the normalization constant, which is frequently infeasible for models in the ...
Some experiments show that the proposed estimator attains comparable performance to the maximum likelihood estimator with drastically lower computational cost. ...
The proposed estimator is defined by minimization of a risk function derived by an unnormalized model and the homogeneous divergence having a weak coincidence axiom. ...
dblp:conf/nips/TakenouchiK15
fatcat:5htcrgy625ggbgvdmqnfhffx5q
Wald-Kernel: Learning to Aggregate Information for Sequential Inference
[article]
2017
arXiv
pre-print
We formulate the problem as a constrained likelihood ratio estimation which can be solved efficiently through convex optimization by imposing Reproducing Kernel Hilbert Space (RKHS) structure on the log-likelihood ...
The proposed algorithm, namely Wald-Kernel, is tested on a synthetic data set and two real world data sets, together with previous approaches for likelihood ratio estimation. ...
[7] derived variational characterizations of f -divergences which enabled estimation of divergence functionals and likelihood ratios through convex risk minimization. ...
arXiv:1508.07964v3
fatcat:vrcl5rwg25ga3aiq5zqo5ltuk4
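The entry above couples likelihood ratio estimation with sequential inference. As a hedged illustration of the sequential half only, here is a minimal Wald SPRT assuming the log-likelihood-ratio increments are known in closed form for a Gaussian shift; the paper instead learns the log-likelihood ratio in an RKHS, and all names here are ours:

```python
import numpy as np

rng = np.random.default_rng(2)

def sprt(stream, lower, upper):
    # Wald's SPRT: accumulate log-likelihood-ratio increments until a boundary is hit.
    s = 0.0
    for t, x in enumerate(stream, start=1):
        s += 0.5 - x  # log[N(0,1)/N(1,1)](x): per-sample log-LR of H0 vs H1
        if s >= upper:
            return "H0", t
        if s <= lower:
            return "H1", t
    return ("H1" if s < 0 else "H0"), len(stream)

# Error targets of roughly 1% on each side give boundaries of +/- log(99).
A, B = -np.log(99.0), np.log(99.0)
decisions = [sprt(rng.normal(1.0, 1.0, 200), A, B)[0] for _ in range(200)]
h1_rate = np.mean([d == "H1" for d in decisions])  # the data come from H1
```

With data drawn from H1, the accumulated statistic drifts toward the lower boundary and the test stops early with the correct decision in nearly all trials; replacing the closed-form increment with an estimated log-likelihood ratio is where the constrained convex estimation enters.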
Optimal Grouping for Group Minimax Hypothesis Testing
[article]
2013
arXiv
pre-print
Together, the optimal grouping and representation points are an epsilon-net with respect to Bayes risk error divergence, and permit a rate-distortion type asymptotic analysis of detection performance with ...
We show that when viewed from a quantization perspective, group minimax amounts to determining centroids with a minimax Bayes risk error divergence distortion criterion: the appropriate Bregman divergence ...
ACKNOWLEDGMENT The authors thank Joong Bum Rhim for discussions. ...
arXiv:1307.6512v1
fatcat:vp5ktb5ahnhyzitmlhdfpbm4im
Fairness-Aware Learning with Restriction of Universal Dependency using f-Divergences
[article]
2015
arXiv
pre-print
Like regular empirical risk minimization (ERM), it aims to learn a classifier with a low error rate, and at the same time, for the predictions of the classifier to be independent of sensitive features, ...
In addition, and more importantly, we show that, for any f-divergence, the upper bound of the estimation error of the divergence is O(√(1/n)). ...
Estimation of the Divergence by Minimizing the Maximum Mean Discrepancy. To estimate Dφ(f), we first empirically estimate the probability ratio r(V, f(X)) = dPr(V)Pr(f(X)) / dPr(V, f(X)), and then ...
arXiv:1506.07721v1
fatcat:hzrycgs3krhf5eh4bf4myjgou4
Relative Novelty Detection
2009
Journal of machine learning research
By design this is dependent on the underlying measure of the space. ...
In this paper we derive a formulation which is able to address this problem by allowing for a reference measure to be given in the form of a sample from an alternative distribution. ...
Variational Decomposition: Divergences between distributions, say p and q, which can be expressed as the expectation of a function of a likelihood ratio, can be estimated directly by solving a convex minimization ...
dblp:journals/jmlr/JSmolaST09
fatcat:3akzacuufzgoflczqptavuyf4a
Showing results 1 — 15 out of 7,113 results