Paper Title

Input Perturbation: A New Paradigm between Central and Local Differential Privacy

Paper Authors

Yilin Kang, Yong Liu, Ben Niu, Xinyi Tong, Likun Zhang, Weiping Wang

Paper Abstract

Traditionally, there are two models of differential privacy: the central model and the local model. The central model focuses on the machine learning model, while the local model focuses on the training data. In this paper, we study the \textit{input perturbation} method in differentially private empirical risk minimization (DP-ERM), which preserves privacy in the sense of the central model. By adding noise to the original training data and training with the `perturbed data', we achieve $(\epsilon, \delta)$-differential privacy on the final model, along with some kind of privacy on the original data. We observe an interesting connection between the local model and the central model: perturbation of the original data causes perturbation of the gradients, and finally of the model parameters. This observation means that our method builds a bridge between the local and central models, protecting the data, the gradients, and the model simultaneously, which makes it superior to previous central methods. Detailed theoretical analysis and experiments show that our method achieves almost the same (or even better) performance as some of the best previous central methods while providing more privacy protection, which is an attractive result. Moreover, we extend our method to a more general case: the loss function satisfies the Polyak-Lojasiewicz condition, which is more general than strong convexity, the constraint imposed on the loss function in most previous work.
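
Below is a minimal sketch of the input-perturbation idea described in the abstract: Gaussian noise is added to the raw training data once, and an ordinary (non-private) ERM solver is then run on the perturbed data. The function names and the noise scale `sigma` are illustrative placeholders, not the paper's method; the actual noise calibration that yields $(\epsilon, \delta)$-differential privacy on the final model follows from the paper's analysis and is not reproduced here.

```python
import numpy as np

def perturb_inputs(X, sigma, seed=None):
    """Add i.i.d. Gaussian noise to every training example (hypothetical helper).

    `sigma` is a placeholder; in the paper it would be calibrated to the target
    (epsilon, delta) budget and to properties of the loss function.
    """
    rng = np.random.default_rng(seed)
    return X + rng.normal(0.0, sigma, size=X.shape)

def train_logreg_gd(X, y, lam=1e-2, lr=0.1, epochs=200):
    """Plain gradient descent on L2-regularized logistic loss (a strongly convex ERM)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted probabilities
        grad = X.T @ (p - y) / n + lam * w    # gradient of the regularized loss
        w -= lr * grad
    return w

# Usage: perturb the inputs once, then train exactly as in the non-private setting.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)
X_tilde = perturb_inputs(X, sigma=0.5, seed=1)   # sigma chosen arbitrarily for illustration
w_private = train_logreg_gd(X_tilde, y)          # the released model sees only X_tilde
```

As the abstract notes, the noise injected into the inputs propagates to the gradients and hence to the model parameters, which is why a single perturbation step before standard training can provide central-model guarantees on the released model while also offering some protection for the raw data.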
