Paper Title

Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage

Authors

Zhuohang Li, Jiaxin Zhang, Luyang Liu, Jian Liu

Abstract

The Federated Learning (FL) framework brings privacy benefits to distributed learning systems by allowing multiple clients to participate in a learning task under the coordination of a central server without exchanging their private data. However, recent studies have revealed that private information can still be leaked through shared gradient information. To further protect users' privacy, several defense mechanisms have been proposed that prevent privacy leakage by degrading the gradient information, e.g., adding noise to the gradients or compressing them before sharing them with the server. In this work, we validate that private training data can still be leaked under certain defense settings with a new type of leakage, i.e., Generative Gradient Leakage (GGL). Unlike existing methods that rely only on gradient information to reconstruct data, our method leverages the latent space of a generative adversarial network (GAN) learned from public image datasets as a prior to compensate for the information loss during gradient degradation. To address the nonlinearity introduced by the gradient operator and the GAN model, we explore various gradient-free optimization methods (e.g., evolution strategies and Bayesian optimization) and empirically show their superiority over gradient-based optimizers in reconstructing high-quality images from gradients. We hope the proposed method can serve as a tool for empirically measuring the amount of privacy leakage and thereby facilitate the design of more robust defense mechanisms.
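To make the attack pipeline concrete, here is a minimal sketch of a GGL-style reconstruction loop. All names are hypothetical stand-ins, not the authors' released code: `generator` is a pretrained GAN generator, `model` is the FL model under attack, `shared_grads` are the leaked (possibly degraded) gradients, and `label` is assumed known or inferred beforehand; CMA-ES via the `cma` package stands in for the paper's gradient-free optimizers.

```python
# Hypothetical sketch of a GGL-style reconstruction loop using CMA-ES.
# Not the authors' implementation; names and hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn.functional as F
import cma  # pip install cma  (CMA-ES, one of several gradient-free options)

def gradient_matching_loss(z_np, generator, model, shared_grads, label):
    """Squared distance between the gradients induced by G(z) and the leaked ones."""
    z = torch.from_numpy(z_np).float().unsqueeze(0)
    with torch.no_grad():
        x = generator(z)  # candidate image drawn from the GAN prior
    loss = F.cross_entropy(model(x), torch.tensor([label]))
    grads = torch.autograd.grad(loss, list(model.parameters()))
    # Under a defense, the same degradation (noise/compression) would be
    # applied to `grads` here before comparing, to match the threat model.
    return sum(((g - t) ** 2).sum() for g, t in zip(grads, shared_grads)).item()

def reconstruct(generator, model, shared_grads, label, latent_dim=128, budget=2000):
    # Search the GAN latent space with CMA-ES; the objective is only evaluated,
    # never differentiated, so gradient degradation does not break the search.
    es = cma.CMAEvolutionStrategy(np.zeros(latent_dim), 0.5)
    while not es.stop() and es.result.evaluations < budget:
        candidates = es.ask()
        es.tell(candidates, [gradient_matching_loss(z, generator, model,
                                                    shared_grads, label)
                             for z in candidates])
    best_z = torch.from_numpy(es.result.xbest).float().unsqueeze(0)
    with torch.no_grad():
        return generator(best_z)  # reconstructed approximation of the private image
```

Because the objective composes the GAN generator with the gradient operator, it is highly nonconvex in the latent variable; evolution strategies need only function evaluations, which is the abstract's stated reason gradient-free optimizers can outperform gradient-based ones here.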
