Paper Title
Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph
Paper Authors
Paper Abstract
Establishing how a set of learners can provide privacy-preserving federated learning in a fully decentralized (peer-to-peer, no coordinator) manner is an open problem. We propose the first privacy-preserving consensus-based algorithm for distributed learners to achieve decentralized global model aggregation in an environment of high mobility, where the communication graph between the learners may vary between successive rounds of model aggregation. In particular, in each round of global model aggregation, the Metropolis-Hastings method is applied to update the weighted adjacency matrix based on the current communication topology. In addition, Shamir's secret sharing scheme is integrated to facilitate privacy in reaching consensus on the global model. The paper establishes the correctness and privacy properties of the proposed algorithm. The computational efficiency is evaluated by a simulation built on a federated learning framework with a real-world dataset.
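
The following is a minimal Python sketch of the two building blocks named in the abstract: the Metropolis-Hastings rule for turning one round's communication graph into a doubly stochastic weight matrix, and Shamir's secret sharing for splitting and reconstructing a value. It is an illustration under stated assumptions (a 0/1 adjacency matrix per round, an integer-encoded secret, an arbitrarily chosen prime modulus), not the paper's implementation; all function names are our own.

# Sketch only: Metropolis-Hastings consensus weights and a minimal Shamir
# secret-sharing round over a prime field. Names and parameters are illustrative.
import random

import numpy as np


def metropolis_hastings_weights(adjacency: np.ndarray) -> np.ndarray:
    """Build a doubly stochastic weight matrix from a 0/1 adjacency matrix."""
    n = adjacency.shape[0]
    degrees = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j]:
                # Standard Metropolis-Hastings rule for average consensus.
                W[i, j] = 1.0 / (1.0 + max(degrees[i], degrees[j]))
        W[i, i] = 1.0 - W[i].sum()  # self-weight keeps each row summing to 1
    return W


PRIME = 2**61 - 1  # field modulus for Shamir shares (illustrative choice)


def shamir_share(secret: int, n_parties: int, threshold: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_parties + 1):
        y = sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares


def shamir_reconstruct(shares: list[tuple[int, int]]) -> int:
    """Recover the secret via Lagrange interpolation at x = 0 (mod PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    # Ring topology over 4 learners as one round's communication graph.
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]])
    W = metropolis_hastings_weights(A)
    assert np.allclose(W.sum(axis=0), 1) and np.allclose(W.sum(axis=1), 1)

    shares = shamir_share(secret=123456789, n_parties=4, threshold=3)
    assert shamir_reconstruct(shares[:3]) == 123456789

In this sketch the weight matrix is recomputed from the adjacency matrix at every aggregation round, mirroring the time-varying graph described above; the secret-sharing step stands in for how a learner's local contribution could be masked before the consensus exchange.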