Backdoor Attacks and Defenses in Federated Learning: State-of-the-Art, Taxonomy, and Future Directions

Published: 01 April 2023

Abstract

The federated learning framework is designed for massively distributed training of deep learning models among thousands of participants without compromising the privacy of their training datasets. The training data are typically heterogeneous across participants, and the central server aggregates the updates submitted by the different parties with no visibility into how those updates were created. These inherent characteristics of federated learning raise severe security concerns. Malicious participants can upload poisoned updates that implant a backdoor into the global model: the backdoored model misclassifies any input stamped with the backdoor trigger into an attacker-chosen label, yet behaves normally on inputs without the trigger. In this work, we present a comprehensive review of state-of-the-art backdoor attacks and defenses in federated learning. We classify the existing backdoor attacks into two categories, data poisoning attacks and model poisoning attacks, and divide the defenses into anomalous-update detection, robust federated training, and backdoored-model restoration. We compare both attacks and defenses in detail through experiments. Lastly, we pinpoint a variety of potential future directions for both backdoor attacks and defenses in the federated learning framework.
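To make the threat model described above concrete, the following minimal sketch (our illustration, not taken from the article) shows the two attack classes on a generic image classifier and one defense from the robust-federated-training category. All names here (stamp_trigger, poison_dataset, TARGET_LABEL, BOOST, clip_update, fedavg) and the specific trigger pattern are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of backdoor attacks and a norm-clipping defense in
# federated learning. Names and parameters are illustrative assumptions.
import numpy as np

TARGET_LABEL = 7   # attacker-chosen false label (assumed)
BOOST = 10.0       # model-replacement scaling factor (assumed)


def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Data poisoning: stamp a small white square (the trigger) in a corner."""
    poisoned = image.copy()
    poisoned[-4:, -4:] = 1.0  # 4x4 trigger patch, pixel values in [0, 1]
    return poisoned


def poison_dataset(images, labels, rate=0.2, rng=None):
    """Attach the trigger to a fraction of samples and flip their labels."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels


def model_poisoning_update(local_weights, global_weights):
    """Model poisoning: scale the malicious delta so it dominates averaging."""
    return [g + BOOST * (l - g) for l, g in zip(local_weights, global_weights)]


def clip_update(local_weights, global_weights, max_norm=1.0):
    """Robust federated training (norm clipping): bound each client's update."""
    delta = [l - g for l, g in zip(local_weights, global_weights)]
    norm = np.sqrt(sum(np.sum(d ** 2) for d in delta))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g + scale * d for g, d in zip(global_weights, delta)]


def fedavg(client_weight_lists):
    """Server-side FedAvg: average updates with no visibility into their origin."""
    return [np.mean(np.stack(layer), axis=0)
            for layer in zip(*client_weight_lists)]
```

In this sketch, the data-poisoning client trains on poison_dataset output, the model-poisoning client submits model_poisoning_update, and a server applying clip_update before fedavg limits how far any single (possibly boosted) update can move the global model.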


• Published in

  IEEE Wireless Communications, Volume 30, Issue 2, April 2023 (149 pages)

  Copyright © 2022
  Publisher: IEEE Press
  Published: 1 April 2023
  Qualifiers: research-article