Title
SCOTCH: An Efficient Secure Computation Framework for Secure Aggregation
Authors
Abstract
Federated learning enables multiple data owners to jointly train a machine learning model without revealing their private datasets. However, a malicious aggregation server might use the model parameters to derive sensitive information about the training dataset used. To address such leakage, differential privacy and cryptographic techniques have been investigated in prior work, but these often result in large communication overheads or impact model performance. To mitigate this centralization of power, we propose SCOTCH, a decentralized m-party secure-computation framework for federated aggregation that deploys MPC primitives, such as secret sharing. Our protocol is simple and efficient, and provides strict privacy guarantees against curious aggregators or colluding data owners with minimal communication overheads compared to existing state-of-the-art privacy-preserving federated learning frameworks. We evaluate our framework through extensive experiments on multiple datasets, with promising results. SCOTCH can train a standard MLP neural network with the training dataset split amongst 3 participating users and 3 aggregating servers, achieving 96.57% accuracy on MNIST and 98.40% accuracy on the Extended MNIST (digits) dataset, while providing various optimizations.
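The core MPC primitive the abstract names, secret sharing, can be illustrated with a minimal sketch of additive secret sharing for secure aggregation. This is not the paper's actual protocol; it is a simplified illustration under assumed choices (an illustrative prime modulus, integer-encoded updates, and hypothetical function names `share` and `secure_aggregate`): each client splits its update into random shares, each server only ever sees one share per client, and combining the servers' partial sums reveals only the aggregate.

```python
import random

PRIME = 2**61 - 1  # illustrative field modulus (a Mersenne prime)

def share(value, n_servers):
    """Split an integer into n_servers additive shares mod PRIME.
    Any proper subset of shares is uniformly random and reveals
    nothing about the value."""
    shares = [random.randrange(PRIME) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_aggregate(client_updates, n_servers):
    """Each client secret-shares its update across the servers;
    each server locally sums the shares it receives; summing the
    servers' partial results reconstructs only the aggregate,
    never any individual client's update."""
    server_sums = [0] * n_servers
    for update in client_updates:
        for i, s in enumerate(share(update, n_servers)):
            server_sums[i] = (server_sums[i] + s) % PRIME
    return sum(server_sums) % PRIME

# e.g. 3 clients and 3 aggregating servers, mirroring the
# experimental setup described in the abstract
updates = [10, 20, 30]
print(secure_aggregate(updates, 3))  # 60
```

In a real deployment the updates are model-parameter vectors of floats, which would first be encoded as fixed-point field elements before sharing; that encoding step is omitted here for brevity.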