Paper Title

Dynamic Federated Learning

Authors

Elsa Rizk, Stefan Vlaski, Ali H. Sayed

Abstract

Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments. While many federated learning architectures process data in an online manner, and are hence adaptive by nature, most performance analyses assume static optimization problems and offer no guarantees in the presence of drifts in the problem solution or data characteristics. We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data. Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm. The results clarify the trade-off between convergence and tracking performance.
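To make the setting described in the abstract concrete, below is a minimal simulation sketch, not taken from the paper: it assumes quadratic local losses, a fixed step size, a fixed number of participating agents per round, and a Gaussian random walk on the true minimizer (the constants dim, K, mu, and sigma_walk are illustrative assumptions). At each iteration a random subset of agents performs a local update from the current global model, and the server averages the results.

```python
# Minimal sketch (not the authors' implementation) of dynamic federated learning
# with partial agent participation and a drifting true minimizer.
import numpy as np

rng = np.random.default_rng(0)

dim = 5            # model dimension (assumed)
num_agents = 20    # total number of agents (assumed)
K = 5              # agents sampled per iteration (assumed)
mu = 0.05          # learning rate / step size (assumed)
sigma_walk = 0.01  # std of the random-walk drift on the true minimizer (assumed)

w_true = rng.standard_normal(dim)   # drifting true minimizer
w_global = np.zeros(dim)            # server (global) model

def local_gradient(w, w_star, noise_std=0.1):
    """Stochastic gradient of an assumed quadratic loss 0.5*||w - w_star||^2,
    perturbed by gradient noise to mimic data variability at each agent."""
    return (w - w_star) + noise_std * rng.standard_normal(w.shape)

for iteration in range(200):
    # Non-stationary environment: the true minimizer follows a random walk.
    w_true = w_true + sigma_walk * rng.standard_normal(dim)

    # A random subset of available agents participates in this round.
    participants = rng.choice(num_agents, size=K, replace=False)

    # Each participating agent performs a local update from the global model.
    local_models = []
    for _ in participants:
        w_local = w_global - mu * local_gradient(w_global, w_true)
        local_models.append(w_local)

    # The server aggregates (averages) the participating agents' models.
    w_global = np.mean(local_models, axis=0)

    if iteration % 50 == 0:
        tracking_error = np.linalg.norm(w_global - w_true) ** 2
        print(f"iter {iteration:3d}  tracking error {tracking_error:.4f}")
```

In this toy setup, shrinking mu slows adaptation to the drifting minimizer while enlarging it amplifies gradient noise, which mirrors the convergence-versus-tracking trade-off highlighted in the abstract.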
