Paper Title

FedGPO: Heterogeneity-Aware Global Parameter Optimization for Efficient Federated Learning

Paper Authors

Young Geun Kim, Carole-Jean Wu

Paper Abstract

Federated learning (FL) has emerged as a solution to deal with the risk of privacy leaks in machine learning training. This approach allows a variety of mobile devices to collaboratively train a machine learning model without sharing the raw on-device training data with the cloud. However, efficient edge deployment of FL is challenging because of system/data heterogeneity and runtime variance. This paper optimizes the energy efficiency of FL use cases while guaranteeing model convergence, by accounting for the aforementioned challenges. We propose FedGPO, based on reinforcement learning, which learns how to identify the optimal global parameters (B, E, K) for each FL aggregation round, adapting to system/data heterogeneity and stochastic runtime variance. In our experiments, FedGPO improves model convergence time by 2.4 times and achieves 3.6 times higher energy efficiency over the baseline settings.
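The abstract describes FedGPO as a reinforcement-learning agent that picks the global parameters for every aggregation round; in the common FedAvg notation, B is the per-device batch size, E the number of local epochs, and K the number of participating clients. Below is a minimal, hypothetical sketch of that idea: an epsilon-greedy Q-learning loop over a discrete (B, E, K) grid with a toy reward trading off convergence progress against time and energy. The class name, state encoding, reward shaping, and all numeric values are illustrative assumptions, not the paper's actual formulation.

```python
import random
from itertools import product

# Hypothetical discrete search space for the global parameters the
# abstract names: batch size B, local epochs E, participating clients K.
BATCH_SIZES = [16, 32, 64]    # B (assumed values)
LOCAL_EPOCHS = [1, 2, 5]      # E (assumed values)
NUM_CLIENTS = [5, 10, 20]     # K (assumed values)
ACTIONS = list(product(BATCH_SIZES, LOCAL_EPOCHS, NUM_CLIENTS))


class FedGPOAgentSketch:
    """Epsilon-greedy Q-learning over (B, E, K) choices, one decision per
    FL aggregation round. An illustrative stand-in, not the paper's exact
    method (state and reward definitions here are assumptions)."""

    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = {}              # (state, action) -> estimated value
        self.epsilon = epsilon   # exploration rate
        self.alpha = alpha       # learning rate
        self.gamma = gamma       # discount factor

    def select(self, state):
        # Explore with probability epsilon, otherwise pick the greedy action.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)


def run_round(action):
    """Placeholder for one FL aggregation round. A real deployment would
    run federated training and measure time/energy on devices; here we
    fabricate a noisy signal so the sketch runs end to end."""
    b, e, k = action
    time_cost = e * k / b + random.uniform(0.0, 0.5)      # toy runtime model
    energy_cost = e * k * 0.01 + random.uniform(0.0, 0.1)  # toy energy model
    accuracy_gain = min(1.0, 0.02 * e * k ** 0.5)          # toy progress model
    # Reward trades off convergence progress against time/energy, echoing
    # the abstract's objective (exact reward shaping is an assumption).
    return accuracy_gain - 0.1 * time_cost - 0.5 * energy_cost


agent = FedGPOAgentSketch()
state = "round_start"  # a real state would encode system/data heterogeneity
for rnd in range(50):
    action = agent.select(state)
    reward = run_round(action)
    agent.update(state, action, reward, state)  # single-state toy MDP
print("Best (B, E, K):",
      max(ACTIONS, key=lambda a: agent.q.get((state, a), 0.0)))
```

In this toy single-state setting the agent simply converges to the (B, E, K) tuple with the best average reward; the paper's contribution is doing this per round under changing heterogeneity and runtime variance, which would correspond to a richer state space than the placeholder used here.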
