Evaluating Trade-offs in Computer Vision Between Attribute Privacy, Fairness and Utility
[article]
2023
arXiv
pre-print
This paper investigates the degree and magnitude of the tradeoffs that exist between utility, fairness, and attribute privacy in computer vision. ...
We see that certain tradeoffs exist between fairness and utility, privacy and utility, and between privacy and fairness. ...
learning correlations between the sensitive attribute and the task in an excessive manner. ...
arXiv:2302.07917v1
fatcat:sjvszebh3bal3ga5jjbhflluum
Synthetic Data – Anonymisation Groundhog Day
[article]
2022
arXiv
pre-print
Furthermore, in contrast to traditional anonymisation, the privacy-utility tradeoff of synthetic data publishing is hard to predict. ...
In other words, we empirically show that synthetic data does not provide a better tradeoff between privacy and utility than traditional anonymisation techniques. ...
We would like to thank Jon Ullman, Aloni Cohen, Kobbi Nissim, and Salil Vadhan for their valuable feedback on earlier versions of this work, Laurent Girod for his support in open-sourcing our code, and ...
arXiv:2011.07018v6
fatcat:6asc3vovfvefdfwse2m6m4t6om
SoK: Taming the Triangle – On the Interplays between Fairness, Interpretability and Privacy in Machine Learning
[article]
2023
arXiv
pre-print
Indeed, interpretability, fairness and privacy are key requirements for the development of responsible machine learning, and all three have been studied extensively during the last decade. ...
Machine learning techniques are increasingly used for high-stakes decision-making, such as college admissions, loan attribution or recidivism prediction. ...
Indeed, these concerns often conflict [6], and tradeoffs between them, as well as with utility, generally have to be struck. ...
arXiv:2312.16191v1
fatcat:oyydwlehb5hyppf2qcc7jfpzj4
Algorithm Fairness in AI for Medicine and Healthcare
[article]
2022
arXiv
pre-print
Lastly, we also review emerging technology for mitigating bias via federated learning, disentanglement, and model explainability, and their role in AI-SaMD development. ...
… genetic variation, intra-observer labeling variability) arise in current clinical workflows and their resulting healthcare disparities. ...
X without any features that correlate with A via an adversarial loss term, as seen in Figure 1. ...
arXiv:2110.00603v2
fatcat:pspb6bqqxjh45an5mhqohysswu
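The adversarial feature-removal idea this snippet alludes to can be sketched in a few lines. This is a minimal, editor-added illustration of the gradient-reversal style objective (encoder rewarded when an adversary fails to recover the sensitive attribute A), not the paper's implementation; all function names here are assumptions.

```python
import numpy as np

def bce(p, y, eps=1e-9):
    """Binary cross-entropy, averaged over samples."""
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def adversarial_objective(task_pred, task_y, adv_pred, sensitive_a, lam=1.0):
    """Encoder objective: do well on the task while making the adversary's
    prediction of the sensitive attribute A fail. Minimising this pushes the
    learned representation away from features correlated with A."""
    return bce(task_pred, task_y) - lam * bce(adv_pred, sensitive_a)
```

An adversary reduced to chance (predicting 0.5 for A) has high cross-entropy, which lowers the encoder's objective relative to an adversary that recovers A accurately.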
Achieving Transparency Report Privacy in Linear Time
[article]
2021
arXiv
pre-print
Subsequently, we quantify the privacy-utility trade-offs induced by our scheme, and analyze the impact of privacy perturbation on fairness measures in ATRs. ...
The far-reaching benefit of such a study lies in the methodical characterization of privacy-utility trade-offs for release of ATRs in public, and their consequential application-specific impact on the dimensions ...
Privacy Leakage via Interpretable Surrogate Models As noted, transparency schemes can interpret a blackbox's rules in a human-understandable manner, such as decision rules or decision trees. ...
arXiv:2104.00137v3
fatcat:ebv6vzmsvfapjjphtvlr3d6z7y
Dissecting Distribution Inference
[article]
2024
arXiv
pre-print
A distribution inference attack aims to infer statistical properties of data used to train machine learning models. ...
To improve understanding of distribution inference risks, we develop a new black-box attack that even outperforms the best known white-box attack in most settings. ...
In such a setup, the adversary can check which of its data points (those with attribute value zero, and those with attribute value one) still test as members. ...
arXiv:2212.07591v2
fatcat:5lqfnk3r2zggfcuf5hseiahgzy
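The attribute-wise membership check described in this snippet can be sketched with a simple loss-threshold membership test. This is an editor-added illustration of the general idea under an assumed setup (records with low model loss test as members), not the paper's actual attack; the function names are hypothetical.

```python
import numpy as np

def membership_rate(losses, threshold):
    """Loss-threshold membership test: records whose model loss falls
    below the threshold are flagged as training-set members."""
    return float(np.mean(np.asarray(losses) < threshold))

def infer_majority_attribute(losses_a0, losses_a1, threshold):
    """Distribution-inference heuristic: whichever attribute group tests
    as members more often is guessed to dominate the training data."""
    r0 = membership_rate(losses_a0, threshold)
    r1 = membership_rate(losses_a1, threshold)
    return 0 if r0 > r1 else 1
```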
SoK: Explainable Machine Learning for Computer Security Applications
[article]
2023
arXiv
pre-print
…model users, designers, and adversaries, who utilize XAI for 4 distinct objectives within an ML pipeline, namely 1) XAI-enabled user assistance, 2) XAI-enabled model verification, 3) explanation verification ...
We also discuss scenarios where interpretability by design may be a better alternative. ...
There is some preliminary work that studies the tradeoff between explainability and privacy in order to select privacy-preserving explanations [133]. ...
arXiv:2208.10605v2
fatcat:3lsl5zblb5ha7fidkgqadi2ncy
Private Graph Extraction via Feature Explanations
[article]
2022
arXiv
pre-print
Privacy and interpretability are two of the important ingredients for achieving trustworthy machine learning. ...
For the other two classes of explanations, privacy leakage increases with an increase in explanation utility. ...
Explaining Graph Neural Networks GNNs are deep learning models which are inherently black-box or non-interpretable. ...
arXiv:2206.14724v1
fatcat:umzt6dbh6zg35mbckwwoj233a4
Fairness-aware Optimal Graph Filter Design
[article]
2023
arXiv
pre-print
Our optimal filter designs offer complementary strengths to explore favorable fairness-utility-complexity tradeoffs. ...
Our idea is to introduce predesigned graph filters within an ML pipeline to reduce a novel unsupervised bias measure, namely the correlation between sensitive attributes and the underlying graph connectivity ...
the flexibility to delineate favorable fairness-utility-complexity tradeoffs in ML over graphs. ...
arXiv:2310.14432v1
fatcat:zugdwiurlzeilgjp7wcnx73b4q
Toward Learning Trustworthily from Data Combining Privacy, Fairness, and Explainability: An Application to Face Recognition
2021
Entropy
In particular, we show that it is possible to simultaneously learn from data while preserving the privacy of the individuals thanks to the use of Homomorphic Encryption, ensuring fairness by learning a ...
Fundamental rights, such as the ones that require the preservation of privacy, do not discriminate based on sensitive attributes (e.g., gender, ethnicity, political/sexual orientation), or require one to ...
Their approach is to mitigate Disparate Impact [106] (i.e., discrimination due to the correlation between sensitive and non-sensitive attributes) without disclosing sensitive information through secure ...
doi:10.3390/e23081047
fatcat:vl5q3ys6xbha7oxgj4ba67mkne
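The Disparate Impact measure defined in this snippet (discrimination via correlation between sensitive and non-sensitive attributes) is straightforward to compute. This is an editor-added sketch of the standard DI ratio, not the paper's secure-computation protocol:

```python
import numpy as np

def disparate_impact(y_pred, sensitive):
    """Disparate Impact ratio: P(Y_hat = 1 | A = 0) / P(Y_hat = 1 | A = 1).
    Values near 1 indicate similar positive-prediction rates across groups;
    the common '80% rule' flags values below 0.8."""
    y_pred = np.asarray(y_pred, dtype=float)
    sensitive = np.asarray(sensitive)
    p0 = np.mean(y_pred[sensitive == 0])  # positive rate, group A = 0
    p1 = np.mean(y_pred[sensitive == 1])  # positive rate, group A = 1
    return float(p0 / p1)
```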
SoK: Machine Learning Governance
[article]
2021
arXiv
pre-print
The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society. ...
Building on this foundation, we use identities to hold principals accountable for failures of ML systems through both attribution and auditing. ...
Privacy: Several approaches can be taken to preserve data privacy. The de-facto approach is to utilize DP learning. Proposed initially by Chaudhuri et al. ...
arXiv:2109.10870v1
fatcat:7zklvf3ocjeaje6pq45cgp4zkm
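The DP learning approach this snippet calls the de-facto standard can be illustrated with a DP-SGD style gradient step. This is an editor-added sketch of per-example clipping plus calibrated Gaussian noise, not Chaudhuri et al.'s original objective-perturbation method; the function name and parameters are assumptions.

```python
import numpy as np

def privatize_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """DP-SGD style step: clip each per-example gradient to bound its L2
    norm (bounding any one record's influence), average, then add Gaussian
    noise scaled to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean.shape)
    return mean + noise
```

A higher `noise_multiplier` gives a stronger privacy guarantee at the cost of noisier updates.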
Fairness and Privacy-Preserving in Federated Learning: A Survey
[article]
2023
arXiv
pre-print
Federated learning (FL), a form of distributed machine learning, has gained popularity as a privacy-aware Machine Learning (ML) technique that prevents privacy leakage by building a global ...
In this paper, we provide a comprehensive survey stating the basic concepts of FL, the existing privacy challenges, techniques, and relevant works concerning privacy in FL. ...
guest features, balancing the model interpretability and data privacy in vertical federated learning. ...
arXiv:2306.08402v2
fatcat:uwtrk4afjrfp7c477kc73q25i4
Differential Privacy in Practice: Expose your Epsilons!
2019
Journal of Privacy and Confidentiality
When not meaningfully implemented, differential privacy delivers privacy mostly in name. ...
Implementations have been successfully leveraged in private industry, the public sector, and academia in a wide variety of applications, allowing scientists, engineers, and researchers the ability to learn ...
Second, we lack a formula for determining, for a given privacy-utility tradeoff, the judicious choice of ε. ...
doi:10.29012/jpc.689
fatcat:nfzsjuzgrjbsjf4kvx432kq5zi
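The ε choice this snippet says lacks a formula directly controls the noise scale of the Laplace mechanism. This editor-added sketch shows that tradeoff concretely (smaller ε means stronger privacy but larger expected error, since the noise scale is sensitivity/ε); it is an illustration, not the paper's methodology:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Laplace mechanism for a numeric query: add noise with scale
    sensitivity/epsilon. Expected absolute error is sensitivity/epsilon,
    so halving epsilon doubles the expected error."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)
```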
Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
[article]
2024
arXiv
pre-print
We first provide a brief overview of alternative architectures for distributed learning, discuss inherent vulnerabilities for security, privacy, and fairness of AI algorithms in distributed learning, and ...
analyze why these problems are present in distributed learning regardless of specific architectures. ...
The discussion of defenses against attribute inference and model inversion is rather limited. [172] adds adversarial noise to attribute values to alter the probability distribution in attribute inference ...
arXiv:2402.01096v1
fatcat:r6h3ciftzzcsvfu76wjmj5e3pm
A Survey on AI Sustainability: Emerging Trends on Learning Algorithms and Research Challenges
[article]
2022
arXiv
pre-print
In this work, we review major trends in machine learning approaches that can address the sustainability problem of AI. ...
Besides, debates on the societal impacts of AI, such as fairness, safety and privacy, have continued to grow in intensity. ...
It is thus important to achieve a balance between fairness and utility factors (e.g., accuracy [123] and privacy [124]). 2) Privacy: Another key aspect in responsible AI is data privacy. ...
arXiv:2205.03824v1
fatcat:q6wti44kaffnbcjy4jtyhkekb4
Showing results 1 — 15 out of 363 results