Paper Title

Towards Federated Learning With Byzantine-Robust Client Weighting

Paper Authors

Amit Portnoy, Yoav Tirosh, Danny Hendler

Paper Abstract

Federated Learning (FL) is a distributed machine learning paradigm where data is distributed among clients who collaboratively train a model in a computation process coordinated by a central server. By assigning a weight to each client based on the proportion of data instances it possesses, the rate of convergence to an accurate joint model can be greatly accelerated. Some previous works studied FL in a Byzantine setting, in which a fraction of the clients may send arbitrary or even malicious information regarding their model. However, these works either ignore the issue of data unbalancedness altogether or assume that client weights are a priori known to the server, whereas, in practice, it is likely that weights will be reported to the server by the clients themselves and therefore cannot be relied upon. We address this issue for the first time by proposing a practical weight-truncation-based preprocessing method and demonstrating empirically that it is able to strike a good balance between model quality and Byzantine robustness. We also establish analytically that our method can be applied to a randomly selected sample of client weights.
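The abstract describes the preprocessing step only at a high level. Below is a minimal sketch of the truncation idea, assuming preprocessing simply caps each self-reported client weight at a fixed threshold before normalization; the names `truncate_weights` and `U` are illustrative assumptions, not the paper's API, and the paper's actual threshold-selection procedure is not reproduced here.

```python
import numpy as np

def truncate_weights(reported_weights, U):
    """Cap each self-reported client weight at U, then renormalize.

    Hypothetical sketch: clients report their own data counts, so a
    Byzantine client can inflate its weight; truncation bounds the
    influence any single report can have on the aggregate.
    """
    w = np.minimum(np.asarray(reported_weights, dtype=float), U)
    return w / w.sum()

# Example: the last client falsely reports a huge data count.
reported = [100, 120, 90, 10_000]
w = truncate_weights(reported, U=200)
print(w)  # the inflated client's share drops from ~0.97 to ~0.39
```

The bounded weights could then be fed into a standard weighted-averaging or Byzantine-robust aggregation rule, which is consistent with the balance between model quality and robustness the abstract describes.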
