Paper Title
Privacy-Preserving Distributed Expectation Maximization for Gaussian Mixture Model using Subspace Perturbation
Paper Authors
Abstract
Privacy has become a major concern in machine learning. Indeed, federated learning is motivated by privacy concerns, as it transmits only intermediate updates rather than the private data itself. However, federated learning does not always guarantee privacy preservation, since the intermediate updates may also reveal sensitive information. In this paper, we give an explicit information-theoretic analysis of a federated expectation maximization algorithm for the Gaussian mixture model and prove that the intermediate updates can cause severe privacy leakage. To address this privacy issue, we propose a fully decentralized privacy-preserving solution that securely computes the updates in each maximization step. Additionally, we consider two different types of security attacks: the honest-but-curious and eavesdropping adversary models. Numerical validation shows that the proposed approach outperforms existing approaches in terms of both accuracy and privacy level.
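To make the leakage concern concrete, the sketch below fits a two-component 1-D Gaussian mixture with plain (non-private) EM. It is illustrative only and is not the paper's protocol: in a federated setting, each client would compute the M-step sufficient statistics (responsibility sums and weighted sums) on its local data and transmit them, and these are exactly the "intermediate updates" the abstract says can reveal sensitive information. All function and variable names here are invented for illustration.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with plain EM.

    Illustrative sketch only. The M-step statistics (nk, weighted sums)
    are what a federated scheme would aggregate across clients -- i.e.
    the intermediate updates that can leak private data.
    """
    # Deterministic initialization: means at the 25th/75th percentiles,
    # shared variance, equal mixing weights.
    mu = np.percentile(x, [25, 75]).astype(float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = P(component k | x_i).
        dens = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: sufficient statistics, then parameter updates.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var

# Two well-separated clusters; EM should recover means near 0 and 5.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
pi, mu, var = em_gmm_1d(x)
print(sorted(mu))
```

Because `nk` and the weighted sums are deterministic functions of the clients' samples, an honest-but-curious aggregator observing them over iterations can infer properties of the local data; the paper's subspace-perturbation approach is aimed at computing these M-step updates securely instead.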