Paper Title
Privacy Leakage of Adversarial Training Models in Federated Learning Systems
Paper Authors
Paper Abstract
Adversarial Training (AT) is crucial for obtaining deep neural networks that are robust to adversarial attacks, yet recent works have found that it can also make models more vulnerable to privacy attacks. In this work, we further reveal this unsettling property of AT by designing a novel privacy attack that is practically applicable to privacy-sensitive Federated Learning (FL) systems. Using our method, an attacker can exploit AT models in an FL system to accurately reconstruct users' private training images, even when the training batch size is large. Code is available at https://github.com/zjysteven/PrivayAttack_AT_FL.
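For context, the sketch below shows a generic gradient-inversion attack of the kind mounted against FL systems: the attacker optimizes a dummy input so that its induced gradients match the gradients a client shared with the server. This is a minimal PyTorch illustration, not the paper's actual attack; the toy model, the image shape, the soft-label trick, and hyperparameters such as the Adam learning rate and step count are all assumptions made for the sketch.

```python
# Minimal sketch of a generic gradient-inversion attack in FL (PyTorch).
# NOT the paper's method; it only illustrates reconstructing a client's
# private image by matching the gradients the client shared in FL.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical small model standing in for the FL global model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()

# The victim client computes gradients on a private image and label;
# in FL, these gradients (or model updates) are sent to the server.
private_x = torch.rand(1, 3, 32, 32)
private_y = torch.tensor([3])
loss = criterion(model(private_x), private_y)
true_grads = [g.detach() for g in
              torch.autograd.grad(loss, model.parameters())]

# The attacker initializes a dummy image and a soft dummy label, then
# optimizes both so their induced gradients match the observed ones.
dummy_x = torch.rand(1, 3, 32, 32, requires_grad=True)
dummy_y = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.Adam([dummy_x, dummy_y], lr=0.1)

for step in range(300):
    optimizer.zero_grad()
    pred = model(dummy_x)
    # Cross-entropy with a differentiable soft label.
    dummy_loss = torch.sum(
        F.softmax(dummy_y, dim=-1) * (-F.log_softmax(pred, dim=-1)))
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True)
    # Gradient-matching objective: L2 distance between gradient sets.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    optimizer.step()

# After optimization, dummy_x approximates the private training image.
```

The key design point is that gradients are a deterministic function of the input and label, so driving the gradient-matching loss to zero recovers an input consistent with what the client trained on; the paper's contribution is showing that AT models make such reconstruction accurate even at large batch sizes, where generic attacks like this sketch typically degrade.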