Paper Title
FedMed-GAN: Federated Domain Translation on Unsupervised Cross-Modality Brain Image Synthesis
Paper Authors
Paper Abstract
Utilizing multi-modal neuroimaging data has proven effective for investigating human cognitive activities and certain pathologies. However, it is not practical to obtain the full set of paired neuroimaging data centrally, since collection faces several constraints, e.g., high examination cost, long acquisition time, and image corruption. In addition, these data are dispersed across different medical institutions and, owing to privacy concerns, cannot be aggregated for centralized training. There is a clear need to launch federated learning and facilitate the integration of dispersed data from different institutions. In this paper, we propose a new benchmark for federated domain translation on unsupervised brain image synthesis (termed FedMed-GAN) to bridge the gap between federated learning and medical GANs. FedMed-GAN mitigates mode collapse without sacrificing generator performance, and applies broadly to different proportions of unpaired and paired data with a variation-adaptation property. We handle the gradient penalties with a federated averaging algorithm and then leverage differentially private gradient descent to regularize the training dynamics. A comprehensive evaluation comparing FedMed-GAN with other centralized methods shows that FedMed-GAN achieves new state-of-the-art performance. Our code has been released at: https://github.com/M-3LAB/FedMed-GAN
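The two training ingredients the abstract names, federated averaging of client parameters and DP-SGD-style gradient clipping with Gaussian noise, can be sketched roughly as below. This is a minimal illustration under our own assumptions (NumPy arrays in place of network weights; `clip_norm` and `noise_std` are hypothetical hyperparameters), not the authors' released implementation, which lives at the GitHub URL above.

```python
import numpy as np

def dp_clip_and_noise(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    """DP-SGD-style gradient sanitization: rescale the gradient so its
    L2 norm is at most clip_norm, then add isotropic Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=grad.shape)

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine per-client parameter vectors,
    weighting each client by its share of the total data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round: two clients take a local step on privatized gradients,
# then the server aggregates the resulting weights.
rng = np.random.default_rng(42)
global_w = np.zeros(4)
local_ws, sizes = [], [100, 300]
for n in sizes:
    g = rng.normal(size=4)                       # stand-in local gradient
    g = dp_clip_and_noise(g, clip_norm=1.0, noise_std=0.05, rng=rng)
    local_ws.append(global_w - 0.1 * g)          # one local SGD step
global_w = fedavg(local_ws, sizes)
```

In the paper's setting the clipped, noised quantity would be the generator/discriminator gradients (including the gradient-penalty terms), and aggregation would run over the participating institutions rather than two toy clients.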