Paper Title

DeFL: Decentralized Weight Aggregation for Cross-silo Federated Learning

Paper Authors

Jialiang Han, Yudong Han, Gang Huang, Yun Ma

Paper Abstract

Federated learning (FL) is a promising emerging paradigm of privacy-preserving machine learning (ML). An important type of FL is cross-silo FL, which enables a small number of organizations to cooperatively train a shared model by keeping confidential data locally and aggregating weights on a central parameter server. In practice, however, the central server may be vulnerable to malicious attacks or software failures. To address this issue, we propose DeFL, a novel decentralized weight aggregation framework for cross-silo FL. DeFL eliminates the central server by aggregating weights on each participating node; only the weights of the current training round are maintained and synchronized among all nodes. We use Multi-Krum to aggregate correct weights from honest nodes and HotStuff to ensure consistency of the training round number and weights among all nodes. We also theoretically analyze the Byzantine fault tolerance, convergence, and complexity of DeFL. We conduct extensive experiments over two widely adopted public datasets, i.e., CIFAR-10 and Sentiment140, to evaluate the performance of DeFL. Results show that DeFL defends against common threat models with minimal accuracy loss, and achieves up to a 100x reduction in storage overhead and up to a 12x reduction in network overhead compared to state-of-the-art decentralized FL approaches.
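
For readers who want a concrete picture of the Byzantine-robust aggregation step the abstract refers to, below is a minimal NumPy sketch of Multi-Krum (Blanchard et al., 2017): each submitted weight vector is scored by the sum of squared distances to its n - f - 2 closest peers, and the m lowest-scoring vectors are averaged. This is an illustrative sketch under our own assumptions, not DeFL's actual implementation; the function name and parameters (`weights`, `f`, `m`) are ours.

```python
import numpy as np

def multi_krum(weights, f, m):
    """Illustrative Multi-Krum aggregation (not DeFL's exact code).

    weights: list of flattened weight vectors (np.ndarray), one per node.
    f:       assumed upper bound on the number of Byzantine nodes.
    m:       number of lowest-scoring vectors to average (m <= n - f - 2).
    Returns the average of the m selected weight vectors.
    """
    n = len(weights)
    assert n >= 2 * f + 3, "Multi-Krum requires n >= 2f + 3"

    # Pairwise squared Euclidean distances between submitted weight vectors.
    dists = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum((weights[i] - weights[j]) ** 2)
            dists[i, j] = dists[j, i] = d

    # Krum score of node i: sum of distances to its n - f - 2 closest peers.
    scores = []
    for i in range(n):
        closest = np.sort(np.delete(dists[i], i))[: n - f - 2]
        scores.append(closest.sum())

    # Keep the m candidates with the lowest scores and average them.
    selected = np.argsort(scores)[:m]
    return np.mean([weights[i] for i in selected], axis=0)
```

In DeFL this robust aggregation runs on every participating node rather than on a central server, and HotStuff consensus keeps the selected round number and weights consistent across nodes.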
