Paper Title

Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs

Paper Authors

Houxiang Fan, Binghui Wang, Pan Zhou, Ang Li, Meng Pang, Zichuan Xu, Cai Fu, Hai Li, Yiran Chen

Paper Abstract

Link prediction in dynamic graphs (LPDG) is an important research problem with diverse applications such as online recommendation, studies of disease contagion, organizational studies, etc. Various LPDG methods based on graph embeddings and graph neural networks have recently been proposed and achieved state-of-the-art performance. In this paper, we study the vulnerability of LPDG methods and propose the first practical black-box evasion attack. Specifically, given a trained LPDG model, our attack aims to perturb the graph structure, without knowing the model parameters, model architecture, etc., such that the LPDG model makes as many wrong link predictions as possible. We design our attack based on a stochastic policy-based RL algorithm. Moreover, we evaluate our attack on three real-world graph datasets from different application domains. Experimental results show that our attack is both effective and efficient.
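The abstract does not spell out the RL formulation, but the general shape of such an attack — a stochastic policy trained to pick edge perturbations that flip a black-box predictor's outputs — can be sketched in miniature. Everything below is an illustrative assumption rather than the paper's actual method: the common-neighbor "link predictor" stands in for a trained LPDG model the attacker cannot inspect, the action space is a single edge flip, and the policy is updated with a simple REINFORCE-style rule.

```python
import math
import random

random.seed(0)

# Hypothetical black-box stand-in for a trained LPDG model: it predicts a
# link (u, v) iff the endpoints share a neighbor. The attacker only
# observes its predictions, never its parameters or architecture.
def black_box_predict(adj, u, v):
    return any(adj[u][k] and adj[k][v] for k in range(len(adj)))

def attack_reward(adj, test_pairs, clean_preds):
    # Reward = number of test pairs whose prediction flips vs. the clean graph.
    return sum(black_box_predict(adj, u, v) != p
               for (u, v), p in zip(test_pairs, clean_preds))

# Toy graph snapshot: a 6-node path 0-1-2-3-4-5.
n = 6
adj = [[0] * n for _ in range(n)]
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    adj[u][v] = adj[v][u] = 1

edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
test_pairs = [(0, 2), (1, 3), (2, 4), (0, 5)]
clean_preds = [black_box_predict(adj, u, v) for u, v in test_pairs]

# Stochastic policy: a softmax over one preference weight per candidate
# edge flip, trained with a REINFORCE-style policy-gradient update.
prefs = [0.0] * len(edges)
lr = 0.5
best = 0

for episode in range(200):
    exps = [math.exp(p) for p in prefs]
    z = sum(exps)
    probs = [e / z for e in exps]
    i = random.choices(range(len(edges)), weights=probs)[0]
    u, v = edges[i]
    adj[u][v] ^= 1; adj[v][u] ^= 1            # apply one edge flip (the perturbation)
    reward = attack_reward(adj, test_pairs, clean_preds)
    adj[u][v] ^= 1; adj[v][u] ^= 1            # undo: we score single-flip attacks
    prefs[i] += lr * reward * (1 - probs[i])  # d log pi(i) / d prefs[i] = 1 - pi(i)
    best = max(best, reward)

print("best wrong predictions from one edge flip:", best)
```

On this toy graph the policy quickly concentrates on removals such as edge (2, 3), whose deletion makes the surrogate predictor err on multiple test pairs; the real attack operates on dynamic graph snapshots with a far richer action space and model.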
