Meta Federated Learning
[article]
2021
arXiv
pre-print
To this end, we propose Meta Federated Learning (Meta-FL), a novel variant of federated learning which not only is compatible with secure aggregation protocol but also facilitates defense against backdoor ...
Contemporary defenses against backdoor attacks in federated learning require direct access to each individual client's update which is not feasible in recent FL settings where Secure Aggregation is deployed ...
NB), differential privacy (DP), and RFA against naive and model replacement backdoor attacks on both Meta-FL (MFL-15-10) and baseline federated learning (FL-15) frameworks in which number of aggregands ...
arXiv:2102.05561v1
fatcat:2ri6iry5prg3fbojkwa2zh3p6u
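The Secure Aggregation constraint the Meta-FL snippet refers to can be illustrated with a toy additive-masking sketch. This is not the paper's protocol: real schemes (e.g. Bonawitz et al.) add key agreement and dropout handling, while this sketch assumes all clients stay online and shares masks in the clear.

```python
import numpy as np

def mask_updates(updates, rng=None):
    """Toy secure-aggregation idea: each client pair shares a random mask
    that one adds and the other subtracts, so each masked update looks
    random to the server, yet the masks cancel in the sum."""
    rng = rng or np.random.default_rng(0)
    n = len(updates)
    masked = [u.astype(float) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[i].shape)
            masked[i] += mask   # client i adds the pairwise mask
            masked[j] -= mask   # client j subtracts the same mask
    return masked  # server can only recover sum(masked) == sum(updates)
```

Because the server only ever sees the masked vectors, defenses that need each individual client update (the point raised in the snippet above) cannot be applied directly.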
Privacy and Robustness in Federated Learning: Attacks and Defenses
[article]
2022
arXiv
pre-print
Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. ...
Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning. ...
horizontally federated learning (HFL), vertically federated learning (VFL), and federated transfer learning (FTL) [17]. ...
arXiv:2012.06337v3
fatcat:f5aflxnsdrdcdf4kvoa6yzseqq
Local and Central Differential Privacy for Robustness and Privacy in Federated Learning
[article]
2022
arXiv
pre-print
Our experiments show that both DP variants do defend against backdoor attacks, albeit with varying protection-utility trade-offs, and in any case more effectively than other robustness defenses. ...
This paper investigates whether and to what extent one can use Differential Privacy (DP) to protect both privacy and robustness in FL. ...
Discussion & Conclusion Attacks against Federated Learning (FL) techniques have highlighted weaknesses in both robustness and privacy [57] . ...
arXiv:2009.03561v5
fatcat:bvkr4gr5djestmlqlaezbkedly
Can You Really Backdoor Federated Learning?
[article]
2019
arXiv
pre-print
The decentralized nature of federated learning makes detecting and defending against adversarial attacks a challenging task. ...
We have implemented the attacks and defenses in TensorFlow Federated (TFF), a TensorFlow framework for federated learning. ...
(Weak) differential privacy. A mathematically rigorous way for defending against backdoor tasks is to train models with differential privacy (22; 14; 2). ...
arXiv:1911.07963v2
fatcat:xtaw7msghjfdhofll5mqiegd3i
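The "(weak) differential privacy" defense named in the snippet above is commonly realized as norm clipping plus Gaussian noise at aggregation time. A minimal sketch, assuming illustrative parameter values (the clip norm and noise scale here are placeholders, not the paper's settings):

```python
import numpy as np

def aggregate_with_weak_dp(updates, clip_norm=1.0, noise_std=0.01, rng=None):
    """Average client updates after L2-norm clipping, then add Gaussian
    noise: clipping bounds any single client's influence on the model,
    and the noise further masks individual contributions."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if too large
        clipped.append(u * scale)
    mean = np.mean(clipped, axis=0)
    return mean + rng.normal(0.0, noise_std, size=mean.shape)
```

The defense is "weak" in the sense that the noise is typically too small for a meaningful formal privacy budget; it is tuned to blunt backdoor updates while preserving accuracy on the main task.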
Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey
[article]
2021
arXiv
pre-print
With the rapidly growing demand for data and computational resources in deep learning systems, a growing number of algorithms utilize collaborative machine learning techniques, for example, federated learning, ...
In an organized way, we then detail the existing integrity and privacy attacks as well as their defenses. ...
The attack is effective against the collaborative learning tasks with convolutional neural networks, even when the parameters are obfuscated via differential privacy techniques. ...
arXiv:2112.10183v1
fatcat:ujfz4a5mdrhsbk4kiqoqo2snfe
Federated Learning in Adversarial Settings
[article]
2020
arXiv
pre-print
We show that this extension performs as efficiently as the non-private but robust scheme, even with stringent privacy requirements, but is less robust against model degradation and backdoor attacks. ...
It is also robust against state-of-the-art backdoor as well as model degradation attacks even when a large proportion of the participant nodes are malicious. ...
Privacy of Federated Learning: There exist a few inference attacks specifically designed against federated learning schemes. ...
arXiv:2010.07808v1
fatcat:6grxgyh6ubhh7dcvvue4sgtvvm
Turning Privacy-preserving Mechanisms against Federated Learning
[article]
2023
arXiv
pre-print
For this reason, experts proposed solutions that combine federated learning with Differential Privacy strategies and community-driven approaches, which involve combining data from neighbor clients to make ...
In this paper, we identify a crucial security flaw in such a configuration, and we design an attack capable of deceiving state-of-the-art defenses for federated learning. ...
To address this vulnerability, several recent studies have combined federated learning with Differential Privacy techniques [18] . ...
arXiv:2305.05355v1
fatcat:xvlj5wkvlrhf3liq4btavwvynq
A Detailed Survey on Federated Learning Attacks and Defenses
2023
Electronics
While federated learning (FL), as a machine learning (ML) strategy, may be effective for safeguarding the confidentiality of local data, it is also vulnerable to attacks. ...
Increased interest in the FL domain inspired us to write this paper, which informs readers of the numerous threats to and flaws in the federated learning strategy, and introduces a multiple-defense mechanism ...
Defense Mechanism: Norm thresholding or weak differential privacy is applied to updates to prevent backdoor attacks. ...
doi:10.3390/electronics12020260
fatcat:yjyaj6f5gncwtpkdhbjf3c4dzq
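The norm-thresholding defense this survey entry mentions can be sketched as a server-side filter that discards suspiciously large client updates before averaging. The threshold value below is illustrative, and some variants rescale rather than drop oversized updates:

```python
import numpy as np

def norm_threshold_aggregate(updates, threshold=5.0):
    """Drop any client update whose L2 norm exceeds a server-chosen
    threshold, then average the remainder; a large norm is a common
    signature of model-replacement backdoor updates."""
    kept = [u for u in updates if np.linalg.norm(u) <= threshold]
    if not kept:
        raise ValueError("all updates exceeded the norm threshold")
    return np.mean(kept, axis=0)
```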
On the Security & Privacy in Federated Learning
[article]
2022
arXiv
pre-print
Federated Learning (FL) offers a privacy-driven, decentralized training scheme that improves ML models' security. ...
Recent privacy awareness initiatives such as the EU General Data Protection Regulation have subjected Machine Learning (ML) to privacy and security assessments. ...
In order to choose the papers for review, we have used some keywords in our search, i.e., adversarial federated learning, poisoning federated learning, backdoor federated learning, attack federated learning ...
arXiv:2112.05423v2
fatcat:qcovp2cz2rfgbcvx6mtx5xighe
Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy
[article]
2022
arXiv
pre-print
We investigate the existing security challenges in federated learning and provide a comprehensive overview of established defense techniques for data poisoning, inference attacks, and model poisoning attacks ...
Federated learning holds great promise in learning from fragmented sensitive data and has revolutionized how machine learning models are trained. ...
Differential Privacy adds a certain degree of noise to the original local update while furnishing theoretical guarantees on the model quality and protection against inference attacks on the model ( ...
arXiv:2204.13697v1
fatcat:rvlsrnk66jblzguy2vnh3thgtu
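The noisy-local-update mechanism described in the snippet above (local DP, as opposed to server-side noising) can be sketched as a client-side step applied before the update leaves the device. The `clip_norm` and `sigma` values are placeholders; a real deployment would calibrate `sigma` to a target (epsilon, delta) budget:

```python
import numpy as np

def local_dp_update(update, clip_norm=1.0, sigma=0.1, rng=None):
    """Client-side (local) DP: clip the local update's L2 norm, then add
    Gaussian noise on-device, so even an honest-but-curious server never
    sees the raw update."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma, size=update.shape)
```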
FedDefender: Backdoor Attack Defense in Federated Learning
[article]
2023
arXiv
pre-print
In this work, we propose FedDefender, a defense mechanism against targeted poisoning attacks in FL by leveraging differential testing. ...
Federated Learning (FL) is a privacy-preserving distributed machine learning technique that enables individual clients (e.g., user participants, edge devices, or organizations) to train a model on their ...
In this work, we propose FEDDEFENDER, a defense against backdoor attacks in federated learning by leveraging a differential testing technique for FL [6] . ...
arXiv:2307.08672v1
fatcat:xlmtroxbqvgqbkqmmhqzfkqelu
MUSKETEER D5.1 Threat analysis for federated machine learning algorithms
2019
Zenodo
A report describing the main threats and vulnerabilities that may be present in federated machine learning algorithms, considering both attacks at training and test time, and defining requirements for the design, deployment, and testing of federated machine learning algorithms. ...
Machine Learning to Augment Shared Knowledge in Federated Privacy-Preserving Scenarios (MUSKETEER)
Poisoning Attacks in MUSKETEER In MUSKETEER, we need to differentiate two scenarios for data poisoning ...
doi:10.5281/zenodo.4736943
fatcat:2zealgae7rhsjecbg4q4vnb72i
FedIPR: Ownership Verification for Federated Deep Neural Network Models
[article]
2022
arXiv
pre-print
Our watermarking scheme is also resilient to various federated training settings and robust against removal attacks. ...
To address these risks, the ownership verification of federated learning models is a prerequisite that protects federated learning model intellectual property rights (IPR), i.e., FedIPR. ...
Robustness Against Differential Privacy We adopt the Gaussian noise-based method to provide differential privacy guarantee for federated learning. ...
arXiv:2109.13236v3
fatcat:kykbwwycvfhnlf5y5hnomkavdu
Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses
[article]
2024
arXiv
pre-print
Moreover, we provide a holistic survey of existing research on attacks against machine learning and classify threat models into eight categories, including backdoor attacks, adversarial examples, combined ...
We summarize the existing surveys on machine learning for 6G IoT security and machine learning-associated threats in three different learning modes: centralized, federated, and distributed. ...
[175] proposed a security system to defend against distributed backdoor attacks in federated learning. ...
arXiv:2306.10309v2
fatcat:iktyi3gfjnfplgvuemlnktes74
A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy
[article]
2023
arXiv
pre-print
Adversarial attacks against data privacy, learning algorithm stability, and system confidentiality are particularly concerning in the context of distributed training in federated learning. ...
Therefore, it is crucial to develop FL in a trustworthy manner, with a focus on security, robustness, and privacy. ...
[195] suggests using norm thresholding and differential privacy (clipping updates) to defend against backdoor attacks. ...
arXiv:2302.10637v1
fatcat:yeqkzgz6krhxnpsgeqrvitrz4u
Showing results 1 — 15 out of 1,032 results