Paper Title
Federated Select: A Primitive for Communication- and Memory-Efficient Federated Learning
Paper Authors
Paper Abstract
Federated learning (FL) is a framework for machine learning across heterogeneous client devices in a privacy-preserving fashion. To date, most FL algorithms learn a "global" server model across multiple rounds. At each round, the same server model is broadcast to all participating clients, updated locally, and then aggregated across clients. In this work, we propose a more general procedure in which clients "select" what values are sent to them. Notably, this allows clients to operate on smaller, data-dependent slices. In order to make this practical, we outline a primitive, federated select, which enables client-specific selection in realistic FL systems. We discuss how to use federated select for model training and show that it can lead to drastic reductions in communication and client memory usage, potentially enabling the training of models too large to fit on-device. We also discuss the implications of federated select on privacy and trust, which in turn affect possible system constraints and design. Finally, we discuss open questions concerning model architectures, privacy-preserving technologies, and practical FL systems.
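To make the contrast with full-model broadcast concrete, here is a minimal, self-contained Python sketch of one training round built on a select-style primitive: each client requests only the data-dependent slices it needs, updates them locally, and the server aggregates updates per slice. Everything here (select_keys, client_update, the slice-table layout) is a hypothetical simplification for illustration, not the paper's implementation or the API of any FL system.

```python
import numpy as np

NUM_SLICES = 1000        # e.g., rows of a large embedding table on the server
SLICE_DIM = 16           # width of each slice
SLICES_PER_CLIENT = 5    # how many slices each client's data touches

rng = np.random.default_rng(0)
server_model = rng.normal(size=(NUM_SLICES, SLICE_DIM))


def select_keys(client_data):
    """Hypothetical selection step: the client derives, from its own local
    data, the keys of the slices it needs. Only these keys are requested."""
    return np.unique(client_data % NUM_SLICES)


def client_update(slices):
    """Stand-in for local training on the received slices; a real client
    would run SGD on local data and return per-slice deltas."""
    return 0.1 * np.sign(slices)


def federated_select_round(server_model, clients):
    """One round: each client receives only its selected slices, computes an
    update, and the server averages updates per slice across clients."""
    sums = {}    # key -> running sum of updates for that slice
    counts = {}  # key -> number of clients that updated that slice
    for client_data in clients:
        keys = select_keys(client_data)
        received = server_model[keys]     # per-client payload: |keys| slices,
        deltas = client_update(received)  # not the full NUM_SLICES model
        for k, d in zip(keys, deltas):
            sums[k] = sums.get(k, np.zeros(SLICE_DIM)) + d
            counts[k] = counts.get(k, 0) + 1
    for k in sums:                        # apply mean update to touched slices
        server_model[k] += sums[k] / counts[k]
    return server_model


# Three simulated clients, each holding a handful of local data points.
clients = [rng.integers(0, 10_000, size=SLICES_PER_CLIENT) for _ in range(3)]
server_model = federated_select_round(server_model, clients)
# Per-client communication here is at most SLICES_PER_CLIENT * SLICE_DIM
# floats each way, versus NUM_SLICES * SLICE_DIM under full-model broadcast.
```

Under these assumptions, per-client download and upload scale with the number of selected slices rather than with the full model size, which is the source of the communication and client-memory savings the abstract describes.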