Paper Title

RSCFed: Random Sampling Consensus Federated Semi-supervised Learning

Paper Authors

Xiaoxiao Liang, Yiqun Lin, Huazhu Fu, Lei Zhu, Xiaomeng Li

Paper Abstract

Federated semi-supervised learning (FSSL) aims to derive a global model by training fully-labeled and fully-unlabeled clients or training partially labeled clients. The existing approaches work well when local clients have independent and identically distributed (IID) data but fail to generalize to a more practical FSSL setting, i.e., the non-IID setting. In this paper, we present Random Sampling Consensus Federated learning, namely RSCFed, which accounts for the uneven reliability among models from fully-labeled clients, fully-unlabeled clients, or partially labeled clients. Our key motivation is that, given models with large deviations from either labeled or unlabeled clients, a consensus can be reached by performing random sub-sampling over clients. To achieve this, instead of directly aggregating the local models, we first distill several sub-consensus models by random sub-sampling over clients and then aggregate the sub-consensus models into the global model. To enhance the robustness of the sub-consensus models, we also develop a novel distance-reweighted model aggregation method. Experimental results show that our method outperforms state-of-the-art methods on three benchmark datasets, including both natural and medical images. The code is available at https://github.com/XMed-Lab/RSCFed.
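
The abstract describes two aggregation steps: random sub-sampling of clients into several groups with distance-reweighted averaging inside each group, followed by a final average across the resulting sub-consensus models. The NumPy sketch below illustrates one way such a round could look, treating each local model as a flat parameter vector. The function names (`sub_consensus`, `rscfed_round`), the exponential form of the distance re-weighting, and the default hyper-parameters (`beta`, `m_groups`, `k_per_group`) are illustrative assumptions, not the authors' exact formulation; see the official implementation at the GitHub link above.

```python
import numpy as np

def sub_consensus(models, sizes, beta=1.0):
    # Distance-reweighted aggregation for one sub-sampled client group:
    # start from the dataset-size-weighted mean, then shrink the weight of
    # models far from that mean so outliers pull the consensus less.
    models = np.stack(models)                      # (K, D) flat parameter vectors
    base = np.asarray(sizes, dtype=float)          # local dataset sizes as base weights
    mean = np.average(models, axis=0, weights=base)
    dists = np.linalg.norm(models - mean, axis=1)  # each model's distance to the mean
    # Exponential down-weighting is an assumed form, not the paper's exact rule.
    w = base * np.exp(-beta * dists / (dists.mean() + 1e-12))
    return np.average(models, axis=0, weights=w)

def rscfed_round(local_models, sizes, m_groups=3, k_per_group=5, seed=None):
    # One communication round: distill M sub-consensus models by random
    # sub-sampling over clients, then average them into the global model.
    rng = np.random.default_rng(seed)
    n = len(local_models)
    subs = []
    for _ in range(m_groups):
        idx = rng.choice(n, size=min(k_per_group, n), replace=False)
        subs.append(sub_consensus([local_models[i] for i in idx],
                                  [sizes[i] for i in idx]))
    return np.mean(subs, axis=0)                   # uniform average of sub-consensus models

# Toy usage: 10 clients with 1000-parameter models, one smaller (e.g. labeled) client.
rng = np.random.default_rng(0)
models = [rng.normal(size=1000) for _ in range(10)]
sizes = [100] * 9 + [50]
global_model = rscfed_round(models, sizes, seed=0)
```

Because each sub-consensus model is already an average over a random subset, a single badly-deviating client can dominate at most a few of the M groups, which is the intuition behind the sampling-consensus design.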
