Paper Title

Layer-wise Characterization of Latent Information Leakage in Federated Learning

Paper Authors

Fan Mo, Anastasia Borovykh, Mohammad Malekzadeh, Hamed Haddadi, Soteris Demetriou

Paper Abstract

Training deep neural networks via federated learning allows clients to share, instead of the original data, only the model trained on their data. Prior work has demonstrated that in practice a client's private information, unrelated to the main learning task, can be discovered from the model's gradients, which compromises the promised privacy protection. However, there is still no formal approach for quantifying the leakage of private information via the shared updated model or gradients. In this work, we analyze property inference attacks and define two metrics based on (i) an adaptation of the empirical $\mathcal{V}$-information, and (ii) a sensitivity analysis using Jacobian matrices allowing us to measure changes in the gradients with respect to latent information. We show the applicability of our proposed metrics in localizing private latent information in a layer-wise manner and in two settings where (i) we have or (ii) we do not have knowledge of the attackers' capabilities. We evaluate the proposed metrics for quantifying information leakage on three real-world datasets using three benchmark models.
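To make the second metric more concrete, here is a minimal, hypothetical sketch of a layer-wise gradient-sensitivity measurement in PyTorch. It stands in for the paper's Jacobian analysis with a finite-difference proxy: compare the gradients produced by two batches that share task labels but differ in the private latent attribute, layer by layer. The model, the batch-pairing strategy, and the helper names (`per_layer_gradients`, `layer_sensitivity`) are illustrative assumptions, not the authors' implementation.

```python
# A minimal, hypothetical sketch of the layer-wise gradient-sensitivity idea
# from the abstract. It approximates "how much do the shared gradients change
# when the private latent attribute changes" with a finite difference between
# two batches that differ (mainly) in that attribute. The model, data, and
# pairing strategy are placeholder assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def per_layer_gradients(model, x, y):
    """Return a dict mapping parameter name -> gradient of the task loss."""
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return {name: p.grad.detach().clone() for name, p in model.named_parameters()}

def layer_sensitivity(model, x_a, y_a, x_b, y_b):
    """Finite-difference proxy for gradient sensitivity to the latent attribute.

    x_a / x_b are batches with the same task labels but opposite values of the
    private attribute (e.g. same digits, different handwriting style). A larger
    relative change suggests that layer's gradients carry more latent info.
    """
    g_a = per_layer_gradients(model, x_a, y_a)
    g_b = per_layer_gradients(model, x_b, y_b)
    return {
        name: (g_a[name] - g_b[name]).norm().item()
              / (g_a[name].norm().item() + 1e-12)
        for name in g_a
    }

# Toy usage with a small MLP and random stand-in data.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x_a, x_b = torch.randn(32, 1, 28, 28), torch.randn(32, 1, 28, 28)
y_a = y_b = torch.randint(0, 10, (32,))
for layer, score in layer_sensitivity(model, x_a, y_a, x_b, y_b).items():
    print(f"{layer:12s} relative gradient change: {score:.3f}")
```

Under this reading, layers whose gradients change most under an attribute flip are the candidates for holding the most private latent information, which matches the abstract's goal of localizing leakage in a layer-wise manner.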
