Paper Title


Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios

Authors

Dazhong Rong, Qinming He, Jianhai Chen

Abstract


Various attack methods against recommender systems have been proposed in the past years, and the security issues of recommender systems have drawn considerable attention. Traditional attacks attempt to make target items recommended to as many users as possible by poisoning the training data. Benefiting from the feature of protecting users' private data, federated recommendation can effectively defend against such attacks. Therefore, quite a few works have been devoted to developing federated recommender systems. To prove that current federated recommendation is still vulnerable, in this work we design attack approaches targeting deep learning based recommender models in federated learning scenarios. Specifically, our attacks generate poisoned gradients for manipulated malicious users to upload, based on two strategies (i.e., random approximation and hard user mining). Extensive experiments show that our well-designed attacks can effectively poison the target models, and their attack effectiveness sets the state of the art.
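The core mechanism the abstract describes can be illustrated with a toy sketch: a malicious client uploads a crafted gradient for a target item's embedding so that, after the server applies it, the item's predicted score rises for (approximated) users. This is a minimal, hypothetical illustration, not the paper's actual algorithm; the function name `poisoned_gradient`, the dot-product scoring model, and the use of random vectors to stand in for the paper's "random approximation" of user embeddings are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisoned_gradient(item_emb, target_item, approx_user_embs):
    """Craft an uploaded gradient that boosts the target item's score.

    Illustrative only: scores are dot products u . item_emb[i], so the
    gradient of -score w.r.t. the item embedding is -u; averaging -u over
    the (approximated) user embeddings gives a gradient that, once the
    server subtracts it, pushes the target item's scores upward.
    """
    grad = np.zeros_like(item_emb)
    grad[target_item] = -approx_user_embs.mean(axis=0)
    return grad

# Toy global model: 5 items with 8-dimensional embeddings.
item_emb = rng.normal(size=(5, 8))
# Malicious client approximates benign user embeddings with random vectors
# (a stand-in for the paper's random-approximation strategy).
approx_users = rng.normal(size=(16, 8))

before = approx_users @ item_emb[2]
g = poisoned_gradient(item_emb, target_item=2, approx_user_embs=approx_users)
item_emb -= 0.5 * g  # server applies the uploaded gradient (one SGD step)
after = approx_users @ item_emb[2]
print(after.mean() > before.mean())  # prints True: average score increased
```

Under this toy update, the target item's mean score rises by half the squared norm of the mean approximated user embedding, which is why even a crude approximation of user embeddings is enough to promote an item.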
