Paper Title
Scalable Attack on Graph Data by Injecting Vicious Nodes
Paper Authors
Paper Abstract
Recent studies have shown that graph convolutional networks (GCNs) are vulnerable to carefully designed attacks, which aim to cause misclassification of a specific node on the graph with unnoticeable perturbations. However, the vast majority of existing works cannot handle large-scale graphs because of their high time complexity. Additionally, existing works mainly focus on manipulating existing nodes on the graph, while in practice, attackers usually do not have the privilege to modify the information of existing nodes. In this paper, we develop a more scalable framework named Approximate Fast Gradient Sign Method (AFGSM), which considers a more practical attack scenario where adversaries can only inject new vicious nodes into the graph while having no control over the original graph. Methodologically, we provide an approximation strategy to linearize the model we attack and then derive an approximate closed-form solution at a lower time cost. For a fair comparison with existing attack methods that manipulate the original graph, we adapt them to the new attack scenario by injecting vicious nodes. Empirical results show that our proposed attack method can significantly reduce the classification accuracy of GCNs and is much faster than existing methods without jeopardizing attack performance.
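To make the attack scenario concrete, below is a minimal numpy sketch of injecting a single vicious node against a linearized two-layer GCN surrogate Z = Â²XW, in the spirit of the closed-form, gradient-sign idea the abstract describes. This is not the authors' AFGSM implementation: the graph, the feature budget `delta`, and the random weight matrix `W` (standing in for a trained surrogate's weights) are all illustrative assumptions.

```python
# Hedged sketch: inject one vicious node to flip a target's prediction
# under a linearized GCN surrogate. Sizes, W, and `delta` are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 20, 16, 4                      # nodes, binary features, classes
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1); A = A + A.T           # undirected graph, no self-loops
X = (rng.random((n, d)) < 0.2).astype(float)
W = rng.normal(size=(d, c))              # stand-in for trained surrogate weights

def normalize(A):
    """Symmetric normalization Â = D^{-1/2}(A+I)D^{-1/2} used by GCNs."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def logits(A, X, W):
    A_norm = normalize(A)
    return A_norm @ A_norm @ X @ W       # linearized GCN: nonlinearity dropped

target, delta = 3, 5                     # attacked node, feature budget (assumed)
y = int(np.argmax(logits(A, X, W)[target]))  # class to push the target away from

# Inject one vicious node v, connected only to the target.
v = n
A_new = np.zeros((n + 1, n + 1)); A_new[:n, :n] = A
A_new[v, target] = A_new[target, v] = 1.0

# Because Z is linear in X, the effect of v's feature k on the target's
# logits is (Â²)[target, v] * W[k, :], so the budgeted binary features can
# be chosen in closed form instead of by iterative gradient steps.
A_norm = normalize(A_new)
coef = (A_norm @ A_norm)[target, v]      # influence of v on the target node
score = coef * (W[:, y] - W.max(axis=1)) # approx. per-feature margin change
x_v = np.zeros(d)
x_v[np.argsort(score)[:delta]] = 1.0     # pick the top-delta harmful features

X_new = np.vstack([X, x_v])
print("before:", logits(A, X, W)[target].round(2))
print("after :", logits(A_new, X_new, W)[target].round(2))
```

The closed-form feature choice is what keeps the cost low: no retraining or iterative optimization over the graph is needed, which is why this style of attack can scale to large graphs.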