Paper Title
Privacy and Robustness in Federated Learning: Attacks and Defenses
Paper Authors
Paper Abstract
As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models is facing efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.