Paper Title

Federated Continual Learning with Weighted Inter-client Transfer

Authors

Jaehong Yoon, Wonyong Jeong, Giwoong Lee, Eunho Yang, Sung Ju Hwang

Abstract

There has been a surge of interest in continual learning and federated learning, both of which are important for deep neural networks in real-world scenarios. Yet little research has been done regarding the scenario where each client learns on a sequence of tasks from a private local data stream. This problem of federated continual learning poses new challenges to continual learning, such as utilizing knowledge from other clients, while preventing interference from irrelevant knowledge. To resolve these issues, we propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT), which decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients by taking a weighted combination of their task-specific parameters. FedWeIT minimizes interference between incompatible tasks, and also allows positive knowledge transfer across clients during learning. We validate our FedWeIT against existing federated learning and continual learning methods under varying degrees of task similarity across clients, and our model significantly outperforms them with a large reduction in the communication cost. Code is available at https://github.com/wyjeong/FedWeIT.
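As a rough illustration of the decomposition described in the abstract, each layer's effective weight can be seen as a shared global parameter modulated by a sparse task-adaptive mask, plus the client's own task-specific parameter and a learned weighted combination of other clients' task-specific parameters. The sketch below is based only on the abstract; the variable names, shapes, and masking details are assumptions for illustration, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def effective_weight(B, m, A_local, A_others, alpha):
    """Compose one layer's weight for the current task (illustrative sketch).

    B        : global federated base parameter, shape (d_out, d_in)
    m        : sparse task-adaptive mask for this client/task, same shape as B
    A_local  : this client's sparse task-specific parameter, same shape as B
    A_others : task-specific parameters received from other clients
    alpha    : learned attention weights, one scalar per element of A_others
    """
    theta = B * m + A_local                   # shared knowledge, selectively masked
    for a_j, w_j in zip(A_others, alpha):     # weighted inter-client transfer
        theta += w_j * a_j                    # pull in only relevant foreign knowledge
    return theta

# Toy usage with hypothetical shapes and values
d_out, d_in = 4, 3
B = np.random.randn(d_out, d_in)
m = (np.random.rand(d_out, d_in) > 0.5).astype(float)        # sparse binary mask
A_local = 0.1 * np.random.randn(d_out, d_in)
A_others = [0.1 * np.random.randn(d_out, d_in) for _ in range(2)]
alpha = np.array([0.7, 0.1])                                  # larger weight -> more transfer
theta = effective_weight(B, m, A_local, A_others, alpha)
print(theta.shape)
```

Under this view, a small attention weight on an unrelated client's parameters limits interference, while a large weight on a related client's parameters enables positive transfer.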
