Paper Title
FedDef: Defense Against Gradient Leakage in Federated Learning-based Network Intrusion Detection Systems
Paper Authors
Paper Abstract
Deep learning (DL) methods have been widely applied to anomaly-based network intrusion detection systems (NIDSs) to detect malicious traffic. To expand the usage scenarios of DL-based methods, federated learning (FL) allows multiple users to train a global model while respecting individual data privacy. However, the robustness of FL-based NIDSs against existing privacy attacks under existing defenses has not yet been systematically evaluated. To address this issue, we propose two privacy evaluation metrics designed for FL-based NIDSs: (1) a privacy score that evaluates the similarity between the original and recovered traffic features using reconstruction attacks, and (2) an evasion rate against NIDSs using adversarial attacks with the recovered traffic. Our experiments show that existing defenses provide little protection and that the corresponding adversarial traffic can even evade the SOTA NIDS Kitsune. To defend against such attacks and build a more robust FL-based NIDS, we further propose FedDef, a novel optimization-based input perturbation defense strategy with a theoretical guarantee. It achieves high utility by minimizing the gradient distance and strong privacy protection by maximizing the input distance. We experimentally evaluate four existing defenses on four datasets and show that our defense outperforms all baselines in privacy protection, with a privacy score up to 7 times higher, while keeping the model accuracy loss within 3% under the optimal parameter combination.
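The core idea of the defense described in the abstract, searching for a perturbed input whose model gradient stays close to the original gradient (utility) while the input itself moves away from the original (privacy), can be sketched as a small optimization problem. The sketch below uses a linear regression model so the objective's gradient is available in closed form; the model, the objective weighting `lam`, the step size, and the update rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def model_grad(w, X, y):
    """Gradient of the MSE loss 0.5 * ||Xw - y||^2 / n with respect to w."""
    return X.T @ (X @ w - y) / len(y)

def feddef_like_perturb(w, X, y, lam=0.1, lr=0.01, steps=300, seed=0):
    """Toy FedDef-style perturbation: find X' that approximately
        minimizes ||g(X') - g(X)||^2 - lam * ||X' - X||^2,
    i.e. keep the shared gradient close (utility) while pushing the
    input away from the raw data (privacy). Hyperparameters are
    illustrative, not the paper's.
    """
    rng = np.random.default_rng(seed)
    g0 = model_grad(w, X, y)
    # Start from a tiny random offset so the repulsive term can act.
    Xp = X + 0.01 * rng.standard_normal(X.shape)
    n = len(y)
    for _ in range(steps):
        r = Xp @ w - y                       # residuals under X'
        gd = model_grad(w, Xp, y) - g0       # gradient mismatch
        # Closed-form gradient of the objective w.r.t. X' for the
        # linear model (product rule on g(X') = X'^T (X'w - y) / n).
        grad_obj = 2.0 * (np.outer(r, gd) + np.outer(Xp @ gd, w)) / n \
                   - 2.0 * lam * (Xp - X)
        Xp -= lr * grad_obj
    return Xp
```

In an FL setting, a client would then compute and share gradients on the perturbed `X'` instead of the raw features, so a reconstruction attack on the shared gradients recovers the perturbed input rather than the original traffic. A small `lam` and a finite number of steps keep the search bounded in this sketch.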