62,581 Hits in 3.9 sec

Generative and Discriminative Learning with Unknown Labeling Bias

Miroslav Dudík, Steven J. Phillips
2008 Neural Information Processing Systems  
We apply robust Bayesian decision theory to improve both generative and discriminative learners under bias in class proportions in labeled training data, when the true class proportions are unknown.  ...  We apply our theory to the modeling of species geographic distributions from presence data, an extreme case of labeling bias since there is no absence data.  ...  Magill and T. Consiglio; and T. Wohlgemuth and U. Braendi, WSL Switzerland.  ... 
dblp:conf/nips/DudikP08 fatcat:bigjg7hpfve67cgfqfbp4hupo4

Semi-Supervised Learning for Multi-Component Data Classification

Akinori Fujino, Naonori Ueda, Kazumi Saito
2007 International Joint Conference on Artificial Intelligence  
With our hybrid approach, for each component, we consider an individual generative model trained on labeled samples and a model introduced to reduce the effect of the bias that results when there are few  ...  The proposed method is based on a hybrid of generative and discriminative approaches to take advantage of both approaches.  ...  For the semi-supervised learning of the generative classifiers, unlabeled samples are dealt with as a missing class label problem, and are incorporated in a mixture of joint probability models [Nigam  ... 
dblp:conf/ijcai/FujinoUS07 fatcat:g646lww4ujgn7dljkmjvibm2nu

A survey of Identification and mitigation of Machine Learning algorithmic biases in Image Analysis [article]

Laurent Risser, Agustin Picard, Lucas Hervier, Jean-Michel Loubes
2022 arXiv   pre-print
The problem of algorithmic bias in machine learning has gained a lot of attention in recent years due to its concrete and potentially hazardous implications in society.  ...  In much the same manner, biases can also alter modern industrial and safety-critical applications where machine learning is based on high-dimensional inputs such as images.  ...  Funding was provided by ANR-3IA Artificial and Natural Intelligence Toulouse Institute (ANR-19-PI3A-0004).  ... 
arXiv:2210.04491v1 fatcat:j5tcmc2ycjcpznpauqs2avi7ia

Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder [article]

Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
2022 arXiv   pre-print
However, in many scenarios the sensitive attribute labels of many samples can be unknown, and it is difficult to train a strong discriminator based on the scarce data with observed attribute labels, which  ...  Adversarial learning is a widely used technique in fair representation learning to remove the biases on sensitive attributes from data representations.  ...  The models learned on such data may also inherit these biases and generate biased representations [19] .  ... 
arXiv:2204.00536v1 fatcat:dugxjc5wirb7nohdvh2mvk47ly

Learning Invariant Representations for Sentiment Analysis: The Missing Material is Datasets [article]

Victor Bouvier, Philippe Very, Céline Hudelot, Clément Chastagnol
2019 arXiv   pre-print
In this paper, we introduce two generalization metrics to assess model robustness to a nuisance factor: generalization under target bias and generalization onto unknown.  ...  We combine those metrics with a simple data filtering approach to control the impact of the nuisance factor on the data and thus to build experimental biased datasets.  ...  Generalization onto unknown (GU) We propose to assess the amount of generic knowledge learned by the model by evaluating it on a test set with values of S that were absent during training.  ... 
arXiv:1907.12305v1 fatcat:pblwfrivazgwjewbdchxkro22i

Achieving Non-Discrimination in Prediction

Lu Zhang, Yongkai Wu, Xintao Wu
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
We adopt the causal model for modeling the data generation mechanism, and formally define discrimination in population, in a dataset, and in prediction.  ...  In discrimination-aware classification, the pre-process methods for constructing a discrimination-free classifier first remove discrimination from the training data, and then learn the classifier from  ...  Although exact measurement of sampling error is generally not feasible as M is unknown, it can be probabilistically bounded.  ... 
doi:10.24963/ijcai.2018/430 dblp:conf/ijcai/ZhangWW18 fatcat:sy25lweekvf6zkkrsnxsieccyy
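The probabilistic bounding of sampling error mentioned in the snippet is typically done with a concentration inequality. A minimal sketch using Hoeffding's inequality, a generic illustration rather than the specific bound derived in the paper:

```python
import math

def hoeffding_bound(n, delta):
    """Half-width of a (1 - delta) confidence interval for the mean of
    n i.i.d. samples bounded in [0, 1], via Hoeffding's inequality:
    P(|empirical mean - true mean| >= eps) <= 2 * exp(-2 * n * eps**2)."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# With 10,000 labeled samples, an empirical discrimination score lies
# within roughly 0.014 of its population value with 95% confidence.
eps = hoeffding_bound(10_000, 0.05)
```

The bound shrinks as O(1/sqrt(n)), so quadrupling the sample size halves the uncertainty about the population-level quantity.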

Generative Local Metric Learning for Nearest Neighbor Classification

Yung-Kyun Noh, Byoung-Tak Zhang, Daniel D. Lee
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
We focus on the bias in the information-theoretic error arising from finite sampling effects, and find an appropriate local metric that maximally reduces the bias based upon knowledge from generative models  ...  As a byproduct, the asymptotic theoretical analysis in this work relates metric learning with dimensionality reduction, which was not understood from previous discriminative approaches.  ...  Metric Learning for Nearest Neighbor Classification A nearest neighbor classifier determines the label of an unknown datum according to the label of its nearest neighbor.  ... 
doi:10.1109/tpami.2017.2666151 pmid:28186880 fatcat:7sw3cvod6naxrplbi3q6krr6uq
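The nearest-neighbor rule described in the snippet — label an unknown datum by the label of its closest training point — can be sketched as follows. This is a plain Euclidean 1-NN baseline; the paper's contribution, the generatively learned local metric, is not shown here:

```python
import math

def nn_classify(train, query):
    """1-nearest-neighbor: return the label of the training point
    closest to `query` under the Euclidean metric.
    `train` is a list of (point, label) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

train = [((0.0, 0.0), "neg"), ((1.0, 1.0), "pos")]
print(nn_classify(train, (0.9, 0.8)))  # closest to (1, 1) → pos
```

Metric-learning approaches replace the fixed Euclidean `dist` with a learned, locally varying distance, which is where the bias reduction discussed in the abstract comes in.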

A Generative Approach for Mitigating Structural Biases in Natural Language Inference [article]

Dimion Asael, Zachary Ziegler, Yonatan Belinkov
2021 arXiv   pre-print
These structural biases lead discriminative models to learn unintended superficial features and to generalize poorly out of the training distribution.  ...  In this work, we reformulate the NLI task as a generative task, where a model is conditioned on the biased subset of the input and the label and generates the remaining subset of the input.  ...  This research was supported by the ISRAEL SCI-ENCE FOUNDATION (grant No. 448/20) and by an Azrieli Foundation Early Career Faculty Fellowship.  ... 
arXiv:2108.14006v1 fatcat:4qmbw53obvbedefdqal26dsnpy

Estimating and Improving Fairness with Adversarial Learning [article]

Xiaoxiao Li, Ziteng Cui, Yifan Wu, Lin Gu, Tatsuya Harada
2021 arXiv   pre-print
Furthermore, the critical module can predict fairness scores for the data with unknown sensitive attributes.  ...  Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.  ...  However, this end-to-end paradigm leaves the deep learning model vulnerable to biases in the model itself and to biases in the data, such as user groups with sensitive (protected) attributes (i.e. age  ... 
arXiv:2103.04243v2 fatcat:7aqnqfga6fa5ldflj5adoic2bm

Distribution Matching Losses Can Hallucinate Features in Medical Image Translation [article]

Joseph Paul Cohen, Margaux Luck, Sina Honari
2018 arXiv   pre-print
When the output of an algorithm is a transformed image, there is uncertainty about whether all known and unknown class labels have been preserved or changed.  ...  It seems appealing to use these new image synthesis methods for translating images from a source to a target domain because they can produce high-quality images and some do not even require paired data  ...  This work utilized the supercomputing facilities managed by the Montreal Institute for Learning Algorithms, NSERC, Compute Canada, and Calcul Quebec.  ... 
arXiv:1805.08841v3 fatcat:2ncyhq5pzzfplfynfuxtjp3dpq

Achieving non-discrimination in prediction [article]

Lu Zhang (University of Arkansas)
2018 arXiv   pre-print
We adopt the causal model for modeling the data generation mechanism, and formally define discrimination in population, in a dataset, and in prediction.  ...  The pre-process methods for constructing a discrimination-free classifier first remove discrimination from the training data, and then learn the classifier from the cleaned data.  ...  As a result, both the training data with predicted labels, i.e., D h , and the prediction, i.e., M h , also contain discrimination.  ... 
arXiv:1703.00060v2 fatcat:gjjgxl3e5zhf5japh7kvf7hdqy

Robust Multiple-Instance Learning with Superbags [chapter]

Borislav Antić, Björn Ommer
2013 Lecture Notes in Computer Science  
Multiple-instance learning consists of two alternating optimization steps: learning a classifier with missing labels and finding the missing labels with the classifier.  ...  Label inference is performed on samples from separate superbags, and thus avoids label imputation on training samples in the same superbag.  ...  This work has been supported by the German Research Foundation (DFG) within the program "Spatio-/Temporal Graphical Models and Applications in Image Analysis", grant GRK 1653, and by the Excellence Initiative  ... 
doi:10.1007/978-3-642-37444-9_19 fatcat:bekscg3rsrbexdoprkaufgcdgi

Counterfactual Augmentation for Multimodal Learning Under Presentation Bias [article]

Victoria Lin, Louis-Philippe Morency, Dimitrios Dimitriadis, Srinagesh Sharma
2023 arXiv   pre-print
However, feedback loops between users and models can bias future user behavior, inducing a presentation bias in the labels that compromises the ability to train new models.  ...  In this paper, we propose counterfactual augmentation, a novel causal method for correcting presentation bias using generated counterfactual labels.  ...  We generate counterfactuals for the labels that are unobserved due to presentation bias, then augment the observed labels with the generated ones.  ... 
arXiv:2305.14083v2 fatcat:74gryqlvsrhurbfwmvzz3434qy
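The augmentation step described in the snippet — generate labels for the items unobserved due to presentation bias, then train on the observed labels together with the generated ones — might be sketched as follows. Function names here are illustrative, not the paper's API, and `generate_label` stands in for the causal counterfactual model:

```python
def augment_with_counterfactuals(observed, unobserved, generate_label):
    """Combine observed (item, label) pairs with counterfactual labels
    produced by `generate_label` for items the presentation policy never
    showed, yielding an augmented (debiased) training set."""
    counterfactual = [(item, generate_label(item)) for item in unobserved]
    return observed + counterfactual

observed = [("shown_a", 1), ("shown_b", 0)]
unobserved = ["hidden_c", "hidden_d"]
# A trivial stand-in generator; the paper learns this from data.
data = augment_with_counterfactuals(observed, unobserved, lambda item: 0)
```

The point of the design is that the downstream classifier is fit on `data` rather than on `observed` alone, so it is no longer trained only on what the previous model chose to present.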

Domain-Specific Bias Filtering for Single Labeled Domain Generalization [article]

Junkun Yuan, Xu Ma, Defang Chen, Kun Kuang, Fei Wu, Lanfen Lin
2022 arXiv   pre-print
A major obstacle in the SLDG task is the discriminability-generalization bias: the discriminative information in the labeled source dataset may contain domain-specific bias, constraining the generalization  ...  To tackle this challenging task, we propose a novel framework called Domain-Specific Bias Filtering (DSBF), which initializes a discriminative model with the labeled source data and then filters out its  ...  shift by extending FixMatch via uncertainty and style consistency learning; but we learn a discriminative model from the labeled dataset and then filter out bias and boost generalization using the unlabeled  ... 
arXiv:2110.00726v3 fatcat:rtqutkvwkngsxbgpkzs2ev47di

Fair Representation for Safe Artificial Intelligence via Adversarial Learning of Unbiased Information Bottleneck

Jin-Young Kim, Sung-Bae Cho
2020 AAAI Conference on Artificial Intelligence  
Algorithmic bias indicates the discrimination caused by algorithms, which occurs with protected features such as gender and race.  ...  We illustrate it by applying it to conventional machine learning models and visualizing the data representations with the t-SNE algorithm.  ...  The algorithm for classifying data representations into real labels with other classifiers is as follows, where a binary function measures the difference between the real label and the calculated label.  ... 
dblp:conf/aaai/KimC20 fatcat:han4cf5xwjagxpxoxan6qbudxm