Paper Title
Federated Training of Dual Encoding Models on Small Non-IID Client Datasets
Paper Authors
Paper Abstract
Dual encoding models that encode a pair of inputs are widely used for representation learning. Many approaches train dual encoding models by maximizing agreement between pairs of encodings on centralized training data. However, in many scenarios, datasets are inherently decentralized across many clients (user devices or organizations) due to privacy concerns, motivating federated learning. In this work, we focus on federated training of dual encoding models on decentralized data composed of many small, non-IID (not independent and identically distributed) client datasets. We show that existing approaches that work well in centralized settings perform poorly when naively adapted to this setting using federated averaging. We observe that we can simulate large-batch loss computation on individual clients for loss functions based on encoding statistics. Based on this insight, we propose a novel federated training approach, Distributed Cross Correlation Optimization (DCCO), which trains dual encoding models using encoding statistics aggregated across clients, without sharing individual data samples. Our experimental results on two datasets demonstrate that the proposed DCCO approach outperforms federated variants of existing approaches by a large margin.
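The abstract does not spell out the mechanics, but the core idea it describes (per-client encoding statistics aggregated into a large-batch-equivalent loss, with no raw examples leaving the clients) can be illustrated. Below is a minimal NumPy sketch assuming a Barlow Twins-style cross-correlation objective; the function names, the choice of sufficient statistics, and the loss weighting are hypothetical and are not taken from the paper.

```python
import numpy as np

def client_encoding_stats(z_a, z_b):
    """Per-client sufficient statistics for a cross-correlation style loss.

    z_a, z_b: (n_i, d) encodings of the paired inputs produced by the two
    encoders on one client's local examples (hypothetical shapes).
    Only these aggregates leave the client, never the examples themselves.
    """
    return {
        "sum_a": z_a.sum(axis=0),
        "sum_b": z_b.sum(axis=0),
        "sq_a": (z_a ** 2).sum(axis=0),
        "sq_b": (z_b ** 2).sum(axis=0),
        "cross": z_a.T @ z_b,   # (d, d) unnormalized cross products
        "n": z_a.shape[0],
    }

def aggregate_cross_correlation(stats_list, eps=1e-8):
    """Combine per-client statistics into one cross-correlation matrix,
    as if the encodings from all clients formed a single large batch."""
    n = sum(s["n"] for s in stats_list)
    mean_a = sum(s["sum_a"] for s in stats_list) / n
    mean_b = sum(s["sum_b"] for s in stats_list) / n
    var_a = sum(s["sq_a"] for s in stats_list) / n - mean_a ** 2
    var_b = sum(s["sq_b"] for s in stats_list) / n - mean_b ** 2
    cross = sum(s["cross"] for s in stats_list) / n - np.outer(mean_a, mean_b)
    return cross / np.outer(np.sqrt(var_a + eps), np.sqrt(var_b + eps))

def cross_correlation_loss(corr, off_diag_weight=5e-3):
    """Barlow Twins-style objective on the aggregated matrix: push the
    diagonal toward 1 and the off-diagonal entries toward 0."""
    on_diag = ((np.diag(corr) - 1.0) ** 2).sum()
    off_diag = (corr ** 2).sum() - (np.diag(corr) ** 2).sum()
    return on_diag + off_diag_weight * off_diag
```

In this sketch, each client contributes only d-dimensional sums and a d-by-d cross-product matrix, which is why a large-batch cross-correlation loss can be evaluated over many small, non-IID client datasets without sharing individual data samples, matching the privacy motivation stated in the abstract.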