Paper Title


Defending against Backdoors in Federated Learning with Robust Learning Rate

Authors

Ozdayi, Mustafa Safa, Kantarcioglu, Murat, Gel, Yulia R.

Abstract


Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data. This makes FL suitable for privacy-preserving applications. At the same time, FL is susceptible to adversarial attacks due to decentralized and unvetted data. One important line of attacks against FL is backdoor attacks. In a backdoor attack, an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification. To prevent backdoor attacks, we propose a lightweight defense that requires minimal change to the FL protocol. At a high level, our defense is based on carefully adjusting the aggregation server's learning rate, per dimension and per round, based on the sign information of agents' updates. We first conjecture the steps necessary to carry out a successful backdoor attack in the FL setting, and then explicitly formulate the defense based on our conjecture. Through experiments, we provide empirical evidence that supports our conjecture, and we test our defense against backdoor attacks under different settings. We observe that either the backdoor is completely eliminated, or its accuracy is significantly reduced. Overall, our experiments suggest that our defense significantly outperforms some of the recently proposed defenses in the literature, while having minimal impact on the accuracy of the trained models. In addition, we provide a convergence rate analysis for our proposed scheme.
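The per-dimension, sign-based learning-rate adjustment described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold parameter `theta` (the minimum number of agents that must agree on an update's sign) and the flat-vector representation of model weights are simplifying assumptions.

```python
import numpy as np

def robust_lr_aggregate(global_weights, agent_updates, server_lr=1.0, theta=4):
    """Aggregate agent updates with a per-dimension robust learning rate.

    For each dimension, if enough agents agree on the sign of the update
    (|sum of signs| >= theta), the server learning rate keeps its sign;
    otherwise it is flipped, moving the model away from the suspect
    update direction in that dimension.

    Note: `theta` is an illustrative threshold parameter; weights and
    updates are flattened 1-D vectors for simplicity.
    """
    updates = np.stack(agent_updates)  # shape: (n_agents, dim)
    # Per-dimension agreement: magnitude of the summed signs of updates.
    sign_agreement = np.abs(np.sum(np.sign(updates), axis=0))
    # Keep the learning rate where agreement is high, flip it elsewhere.
    lr = np.where(sign_agreement >= theta, server_lr, -server_lr)
    avg_update = updates.mean(axis=0)
    return global_weights + lr * avg_update
```

For example, with five agents that all agree on the sign of dimension 0 but disagree on dimension 1, the server applies the positive learning rate only to dimension 0 and reverses the averaged update in dimension 1.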
