2,000 Hits in 6.3 sec

Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization

XuanLong Nguyen, Martin J. Wainwright, Michael I. Jordan
2007 Neural Information Processing Systems  
Our method is based on a variational characterization of f-divergences, which turns the estimation into a penalized convex risk minimization problem.  ...  We develop and analyze an algorithm for nonparametric estimation of divergence functionals and the density ratio of two probability distributions.  ...  In this paper, we estimate the likelihood ratio and the KL divergence by optimizing a penalized convex risk.  ...
dblp:conf/nips/NguyenWJ07 fatcat:hoxwvye2ije33czieq4xmoildi
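
For orientation, the variational characterization this abstract refers to can be written in one standard form (generic notation, not necessarily the paper's): for a convex function f with convex conjugate f*,

    D_f(P \| Q) = \sup_{g} \left\{ \mathbb{E}_P[g(X)] - \mathbb{E}_Q[f^*(g(X))] \right\},

and for the Kullback-Leibler case f(t) = t \log t, a convenient lower bound is

    D_{KL}(P \| Q) \ge \mathbb{E}_P[g(X)] - \mathbb{E}_Q[e^{g(X)-1}],

with equality at g^* = 1 + \log(dP/dQ). Maximizing a penalized empirical version of the right-hand side over a convex function class therefore yields estimates of both the divergence and the likelihood ratio.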

Convex Multiple-Instance Learning by Estimating Likelihood Ratio

Fuxin Li, Cristian Sminchisescu
2010 Neural Information Processing Systems  
Theoretically, we prove a quantitative relationship between the risk estimated under the 0-1 classification loss and under a loss function for the likelihood ratio.  ...  This is cast as joint estimation of both a likelihood ratio predictor and the target (likelihood ratio variable) for instances.  ...  Acknowledgements This work is supported, in part, by the European Commission, under a Marie Curie Excellence Grant MCEXT-025481.  ...
dblp:conf/nips/LiS10 fatcat:5izppstt7nhhfc2ra7acgm4sku

New Developments in Statistical Information Theory Based on Entropy and Divergence Measures

Leandro Pardo
2019 Entropy  
In the last decades the interest in statistical methods based on information measures and particularly in pseudodistances or divergences has grown substantially [...]  ...  Since no optimality holds for the aggregation of likelihood ratio tests, a similar procedure is proposed, replacing the individual likelihood ratio by some divergence-based test statistics.  ...  Besides the theoretical results, they have constructed an efficient algorithm in which a convex loss function is minimized at each iteration.  ...
doi:10.3390/e21040391 pmid:33267105 fatcat:jc37cuc4yjamhhdlddrppmkzuu

Penalized Bregman divergence for large-dimensional regression and classification

Chunming Zhang, Yuan Jiang, Yi Chai
2010 Biometrika  
We introduce the penalized Bregman divergence by replacing the negative log-likelihood in the conventional penalized likelihood with Bregman divergence, which encompasses many commonly used loss functions  ...  It is shown that the resulting penalized estimator, combined with appropriate penalties, achieves the same oracle property as the penalized likelihood estimator, but asymptotically does not rely on the  ...  In that case, if convex penalties are used in (6), then ℓn(β) is necessarily convex in β, and hence the local minimizer β̂E is the unique global penalized Bregman divergence estimator.  ...
doi:10.1093/biomet/asq033 pmid:22822248 pmcid:PMC3372245 fatcat:lyvdgqpvozbkpkuygxirhmisea
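
Schematically (notation simplified, not the paper's exact display), the penalized Bregman divergence criterion replaces the negative log-likelihood of penalized likelihood with a Bregman divergence Q:

    \hat{\beta} = \arg\min_{\beta} \; \frac{1}{n} \sum_{i=1}^{n} Q\big(Y_i,\, m(X_i^\top \beta)\big) + \sum_{j=1}^{p} p_{\lambda}(|\beta_j|),

where m is the mean (inverse-link) function and p_\lambda a penalty such as the L1 or SCAD penalty; quadratic loss, deviance loss, and exponential loss all arise as particular choices of Q.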

Statistical models, likelihood, penalized likelihood and hierarchical likelihood [article]

Daniel Commenges
2008 arXiv   pre-print
The Kullback-Leibler divergence is referred to repeatedly, for defining the misspecification risk of a model, for grounding the likelihood and the likelihood cross-validation which can be used for choosing  ...  Families of penalized likelihood and sieves estimators are shown to be equivalent. The similarity of these likelihoods with a posteriori distributions in a Bayesian approach is considered.  ...  I would like to thank Anne Gégout-Petit for helpful comments on the manuscript.  ...
arXiv:0808.4042v1 fatcat:zsxiwq2nhrdbplybv3jos3aqqa

Statistical models: Conventional, penalized and hierarchical likelihood

Daniel Commenges
2009 Statistics Surveys
The Kullback-Leibler divergence is referred to repeatedly in the literature, for defining the misspecification risk of a model and for grounding the likelihood and the likelihood cross-validation, which  ...  Families of penalized likelihood and particular sieves estimators are shown to be equivalent. The similarity of these likelihoods with a posteriori distributions in a Bayesian approach is considered.  ...  Acknowledgements I would like to thank Anne Gégout-Petit for helpful comments on the manuscript.  ...
doi:10.1214/08-ss039 fatcat:imjm3d5dsfdqljdcm7mddjb43m

MDL, penalized likelihood, and statistical risk

Andrew R. Barron, Cong Huang, Jonathan Q. Li, Xi Luo
2008 2008 IEEE Information Theory Workshop  
Penalized likelihood risk bounds should capture the tradeoff of Kullback-Leibler approximation error and penalty.  ...  This is examined for ℓ1 penalized least squares in the manuscripts [80] and [39] and for ℓ1 penalized likelihood in the present paper.  ...  RISK AND RESOLVABILITY FOR COUNTABLE F: Here we recall risk bounds for penalized likelihood with a countable F.  ...
doi:10.1109/itw.2008.4578660 dblp:conf/itw/BarronHLL08 fatcat:zvxpvgjpgjejjjv32ywoxlu3u4
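
The tradeoff the abstract refers to is typically expressed through the index of resolvability. In schematic form (generic notation), a penalized likelihood estimator

    \hat{f} = \arg\min_{f \in F} \left\{ -\frac{1}{n} \log p_f(X_1, \dots, X_n) + \frac{\mathrm{pen}(f)}{n} \right\}

has Kullback-Leibler risk bounded, up to constants, by

    \min_{f \in F} \left\{ D(p^* \,\|\, p_f) + \frac{\mathrm{pen}(f)}{n} \right\},

provided pen(f) is a valid codelength-style penalty; the bound balances approximation error against the penalty exactly as described.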

Penalized high-dimensional empirical likelihood

Cheng Yong Tang, Chenlei Leng
2010 Biometrika  
By using an appropriate penalty function, we show that penalized empirical likelihood has the oracle property.  ...  We propose penalized empirical likelihood for parameter estimation and variable selection for problems with diverging numbers of parameters.  ...  For (1) and (3) to have solutions, μ needs to be in the convex hull formed by {X_i}_{i=1}^n. Therefore, the penalized empirical likelihood estimator μ̂ must lie within the convex hull of the data.  ...
doi:10.1093/biomet/asq057 fatcat:s5faeofjrfgzdmbqud62j6by7e
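
In outline (standard empirical-likelihood notation, not necessarily the paper's), penalized empirical likelihood combines the empirical log-likelihood with a penalty on the parameters:

    \max_{\theta, w} \; \sum_{i=1}^{n} \log(n w_i) - n \sum_{j=1}^{p} p_{\tau}(|\theta_j|)
    \quad \text{s.t.} \quad w_i \ge 0, \quad \sum_{i=1}^{n} w_i = 1, \quad \sum_{i=1}^{n} w_i \, g(X_i; \theta) = 0,

where g denotes the estimating equations; the convex-hull condition quoted in the snippet is precisely the feasibility requirement for these constraints.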

A Practical Transfer Learning Algorithm for Face Verification

Xudong Cao, David Wipf, Fang Wen, Genquan Duan, Jian Sun
2013 2013 IEEE International Conference on Computer Vision  
Based upon a surprisingly simple generative Bayesian model, our approach combines a KL-divergence-based regularizer/prior with a robust likelihood function, leading to a scalable implementation via the EM  ...  As justification for our design choices, we later use principles from convex analysis to recast our algorithm as an equivalent structured rank minimization problem, leading to a number of interesting insights  ...  We may now optimize the objective function in (3) by iteratively computing the expectation of the latent variables (E step) and updating the parameters by maximizing the expected penalized log-likelihood  ...
doi:10.1109/iccv.2013.398 dblp:conf/iccv/CaoWWD013 fatcat:6w46kxokljahrpvde6air6zqzq

A Selective Overview of Variable Selection in High Dimensional Feature Space

Jianqing Fan, Jinchi Lv
2010 Statistica Sinica
The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized.  ...  What limits of the dimensionality such methods can handle, what the role of penalty functions is, and what the statistical properties are rapidly drive the advances of the field.  ...  Acknowledgments Fan's research was partially supported by NSF Grants DMS-0704337 and DMS-0714554 and NIH Grant R01-GM072611.  ... 
pmid:21572976 pmcid:PMC3092303 fatcat:kf3kbxcyozdaleej4i63y63phm

Wald-Kernel: Learning to Aggregate Information for Sequential Inference [article]

Diyan Teng, Emre Ertin
2017 arXiv   pre-print
We formulate the problem as a constrained likelihood ratio estimation which can be solved efficiently through convex optimization by imposing Reproducing Kernel Hilbert Space (RKHS) structure on the log-likelihood  ...  The proposed algorithm, namely Wald-Kernel, is tested on a synthetic data set and two real world data sets, together with previous approaches for likelihood ratio estimation.  ...  [7] derived variational characterizations of f -divergences which enabled estimation of divergence functionals and likelihood ratios through convex risk minimization.  ... 
arXiv:1508.07964v3 fatcat:vrcl5rwg25ga3aiq5zqo5ltuk4
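
The constrained likelihood-ratio estimation described here is closely related to the variational convex-risk estimators above. Below is a minimal Python sketch of the generic idea (a finite kernel expansion standing in for a full RKHS, an l2 penalty, and all names invented for illustration, so this is an assumption-laden toy rather than the paper's algorithm):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import cdist

    # Sketch only: model the log-likelihood ratio g(x) = sum_k alpha_k k(x, c_k)
    # and fit it by maximizing the variational lower bound
    #   E_p[g] - E_q[exp(g - 1)]   (tight at g* = 1 + log p/q),
    # minus an l2 penalty; this objective is concave in alpha.

    def gram(X, C, sigma):
        """Gaussian kernel matrix between rows of X and centers C."""
        return np.exp(-cdist(X, C, "sqeuclidean") / (2.0 * sigma ** 2))

    def fit_log_ratio(Xp, Xq, sigma=1.0, lam=1e-2, n_centers=50, seed=0):
        rng = np.random.default_rng(seed)
        C = Xp[rng.choice(len(Xp), size=min(n_centers, len(Xp)), replace=False)]
        Kp, Kq = gram(Xp, C, sigma), gram(Xq, C, sigma)

        def neg_objective(alpha):
            eq = np.exp(Kq @ alpha - 1.0)          # exp(g - 1) on q-samples
            obj = (Kp @ alpha).mean() - eq.mean() - lam * alpha @ alpha
            grad = Kp.mean(axis=0) - Kq.T @ eq / len(eq) - 2.0 * lam * alpha
            return -obj, -grad                     # minimize the negative

        res = minimize(neg_objective, np.zeros(len(C)), jac=True, method="L-BFGS-B")
        # subtract 1 because the bound is tight at g* = 1 + log p/q
        return lambda X: gram(X, C, sigma) @ res.x - 1.0

    # Toy check: p = N(1, 1), q = N(0, 1), so the true log-ratio is x - 0.5
    Xp = np.random.default_rng(1).normal(1.0, 1.0, size=(500, 1))
    Xq = np.random.default_rng(2).normal(0.0, 1.0, size=(500, 1))
    log_r = fit_log_ratio(Xp, Xq)
    print(log_r(np.array([[-1.0], [0.0], [1.0]])))  # increasing in x, roughly x - 0.5

On this toy example the recovered log-ratio should be approximately x - 0.5; a sequential test in the Wald style would then threshold the accumulated log-ratio over incoming observations.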

Collaborative likelihood-ratio estimation over graphs [article]

Alejandro de la Concha and Nicolas Vayatis and Argyris Kalogeratos
2024 arXiv   pre-print
Assuming we have iid observations from two unknown probability density functions (pdfs), p and q, the likelihood-ratio estimation (LRE) is an elegant approach to compare the two pdfs only by relying on  ...  from two unknown node-specific pdfs, p_v and q_v, and the goal is to estimate for each node the likelihood-ratio between both pdfs by also taking into account the information provided by the graph structure  ...  Acknowledgments The authors acknowledge support from the Industrial Data Analytics and Machine Learning Chair hosted at ENS Paris-Saclay, Université Paris-Saclay.  ... 
arXiv:2205.14461v2 fatcat:p77bnoslcndkhdp5qk5gbgkal4

Relative Novelty Detection

Alexander J. Smola, Le Song, Choon Hui Teo
2009 Journal of machine learning research  
By design this is dependent on the underlying measure of the space.  ...  In this paper we derive a formulation which is able to address this problem by allowing for a reference measure to be given in the form of a sample from an alternative distribution.  ...  Variational Decomposition Divergences between distributions, say p and q which can be expressed as the expectation over a function of a likelihood ratio can be estimated directly by solving a convex minimization  ... 
dblp:journals/jmlr/JSmolaST09 fatcat:3akzacuufzgoflczqptavuyf4a

A Selective Overview of Variable Selection in High Dimensional Feature Space (Invited Review Article) [article]

Jianqing Fan, Jinchi Lv
2009 arXiv   pre-print
The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized.  ...  What limits of the dimensionality such methods can handle, what the role of penalty functions is, and what the statistical properties are rapidly drive the advances of the field.  ...  Oracle property: What are the sampling properties of penalized least squares (4) and penalized likelihood estimation (2) when the penalty function p_λ is no longer convex?  ...
arXiv:0910.1122v1 fatcat:elzacdq7ircbjepok7bbkk5zyi

Penalized likelihood regression for generalized linear models with non-quadratic penalties

Anestis Antoniadis, Irène Gijbels, Mila Nikolova
2009 Annals of the Institute of Statistical Mathematics  
This is the case when formulating penalized likelihood regression for exponential families.  ...  One of the popular methods for fitting a regression function is regularization: minimizing an objective function which enforces a roughness penalty in addition to coherence with the data.  ...  P6/03 of the Federal Science Policy, Belgium, is acknowledged. The second author also gratefully acknowledges financial support by the GOA/07/04-project of the Research Fund KU Leuven.  ...
doi:10.1007/s10463-009-0242-4 fatcat:53s7eednjzfehkrtudgonavofy
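
For concreteness, the regularized objective this abstract describes has the generic form (schematic exponential-family notation, not the paper's exact display):

    \min_{\beta} \; -\sum_{i=1}^{n} \big[ y_i \, x_i^\top \beta - b(x_i^\top \beta) \big] + n \sum_{j=1}^{p} p_{\lambda}(|\beta_j|),

where b is the cumulant function of the exponential family and p_\lambda is a possibly non-quadratic penalty (e.g., L1, hard thresholding, or SCAD), in contrast to the classical quadratic roughness penalty.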
Showing results 1 — 15 out of 2,000 results