Paper Title
FedDANE: A Federated Newton-Type Method
Paper Authors
Paper Abstract
Federated learning aims to jointly learn statistical models over massively distributed remote devices. In this work, we propose FedDANE, an optimization method that we adapt from DANE, a method for classical distributed optimization, to handle the practical constraints of federated learning. We provide convergence guarantees for this method when learning over both convex and non-convex functions. Despite encouraging theoretical results, we find that the method has underwhelming performance empirically. In particular, through empirical simulations on both synthetic and real-world datasets, FedDANE consistently underperforms the FedAvg and FedProx baselines in realistic federated settings. We identify low device participation and statistical device heterogeneity as two underlying causes of this underwhelming performance, and conclude by suggesting several directions for future work.
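For concreteness, below is a minimal sketch of one FedDANE-style round, in which each sampled device approximately solves the DANE-style gradient-corrected proximal subproblem around the current iterate and the server averages the results. This is an illustration under stated assumptions, not the authors' implementation: the names `feddane_round` and `QuadraticDevice` and the parameters `mu`, `lr`, `local_steps`, and `frac` are hypothetical, and the local subproblem is solved inexactly with a few gradient steps.

```python
import numpy as np


class QuadraticDevice:
    """Toy device holding a local loss F_k(w) = 0.5 * ||A w - b||^2 (illustrative only)."""

    def __init__(self, A, b):
        self.A, self.b = A, b

    def grad(self, w):
        return self.A.T @ (self.A @ w - self.b)


def feddane_round(w_t, devices, mu=1.0, lr=0.01, local_steps=25, frac=0.5, rng=None):
    """One FedDANE-style round (a sketch; names and defaults are assumptions).

    Each sampled device approximately solves the DANE-style local subproblem
        min_w F_k(w) - <grad F_k(w_t) - g_t, w> + (mu / 2) * ||w - w_t||^2,
    where g_t is the gradient averaged over the sampled devices.
    """
    rng = rng or np.random.default_rng(0)
    k = max(1, int(frac * len(devices)))
    selected = list(rng.choice(np.array(devices, dtype=object), size=k, replace=False))

    # Aggregate gradient at the current iterate, estimated only over the sample;
    # low device participation makes this estimate noisy, one of the causes of
    # underwhelming performance identified in the abstract.
    g_t = np.mean([dev.grad(w_t) for dev in selected], axis=0)

    updates = []
    for dev in selected:
        correction = dev.grad(w_t) - g_t  # gradient-correction term from DANE
        w = w_t.copy()
        for _ in range(local_steps):  # inexact local solver: a few gradient steps
            w -= lr * (dev.grad(w) - correction + mu * (w - w_t))
        updates.append(w)

    # The server averages the local solutions, as in FedAvg-style aggregation.
    return np.mean(updates, axis=0)


if __name__ == "__main__":
    # Tiny synthetic run: 20 heterogeneous quadratic devices, 50 rounds.
    rng = np.random.default_rng(1)
    dim = 5
    devices = [QuadraticDevice(rng.standard_normal((10, dim)),
                               rng.standard_normal(10)) for _ in range(20)]
    w = np.zeros(dim)
    for _ in range(50):
        w = feddane_round(w, devices, rng=rng)
```

In this sketch, statistical heterogeneity is modeled by giving each device a different `(A, b)` pair, so the per-device gradient corrections can point in conflicting directions when only a small fraction of devices participate in a round.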