Paper Title

Perturb More, Trap More: Understanding Behaviors of Graph Neural Networks

Paper Authors

Chaojie Ji, Ruxin Wang, Hongyan Wu

Paper Abstract

While graph neural networks (GNNs) have shown great potential in various tasks on graphs, their lack of transparency has hindered understanding of how GNNs arrive at their predictions. Although a few explainers for GNNs have been explored, the consideration of local fidelity, indicating how the model behaves around the instance being predicted, is neglected. In this paper, we propose a novel post-hoc framework based on local fidelity for any trained GNN, called TraP2, which can generate high-fidelity explanations. Considering that both the relevant graph structure and the important features inside each node need to be highlighted, TraP2 is designed with a three-layer architecture: i) the interpretation domain is defined in advance by the Translation layer; ii) the local predictive behavior of the GNN being explained is probed and monitored by the Perturbation layer, which conducts multiple perturbations of the graph structure and node features within the interpretation domain; iii) highly faithful explanations are generated by fitting the local decision boundary through the Paraphrase layer. Finally, TraP2 is evaluated on six benchmark datasets against five desired attributes: accuracy, fidelity, decisiveness, insight, and inspiration, achieving $10.2\%$ higher explanation accuracy than state-of-the-art methods.
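To make the three-layer recipe concrete, below is a minimal Python sketch of the general perturb-and-fit idea the abstract outlines: restrict attention to a k-hop interpretation domain around the target node (Translation), randomly perturb edges and query the trained model (Perturbation), and fit a locally weighted linear surrogate to the model's responses (Paraphrase). This follows the broader LIME-style local-fidelity pattern, not the paper's actual TraP2 implementation; all names here (`gnn_predict`, `k_hop_domain`, `explain_node`) are hypothetical, and feature-level perturbation is omitted for brevity.

```python
# Sketch of a LIME-style local-fidelity explainer for a node-level GNN
# prediction. Hypothetical names throughout; not the paper's TraP2 code.
import numpy as np
from sklearn.linear_model import Ridge

def k_hop_domain(adj, node, k=2):
    """Translation-layer analogue: the k-hop neighborhood of `node` serves
    as the interpretation domain. `adj` is a dense (n, n) adjacency matrix."""
    reach = np.zeros(adj.shape[0], dtype=bool)
    reach[node] = True
    for _ in range(k):
        reach = reach | (adj @ reach > 0)
    return np.flatnonzero(reach)

def explain_node(gnn_predict, adj, feats, node, n_samples=500, seed=0):
    """Perturbation + Paraphrase analogue: randomly drop edges inside the
    domain, record the model's output, and fit a weighted linear surrogate
    whose coefficients score each candidate edge.

    `gnn_predict(adj, feats, node)` is an assumed black-box callable that
    returns a scalar score (e.g., the probability of the predicted class).
    """
    rng = np.random.default_rng(seed)
    domain = k_hop_domain(adj, node)

    # Candidate edges: upper-triangular edges with both endpoints in the domain.
    rows, cols = np.where(np.triu(adj, 1) > 0)
    keep = np.isin(rows, domain) & np.isin(cols, domain)
    rows, cols = rows[keep], cols[keep]
    m = len(rows)

    masks = rng.random((n_samples, m)) > 0.3  # True = edge kept in this sample
    preds = np.empty(n_samples)
    for i, mask in enumerate(masks):
        a = adj.copy()
        off = ~mask
        a[rows[off], cols[off]] = a[cols[off], rows[off]] = 0  # drop edges
        preds[i] = gnn_predict(a, feats, node)  # probe local model behavior

    # Weight samples by proximity to the unperturbed graph, then fit the
    # surrogate to approximate the local decision boundary.
    weights = np.exp(-(m - masks.sum(axis=1)) / max(m, 1))
    surrogate = Ridge(alpha=1.0).fit(masks, preds, sample_weight=weights)

    # Rank edges by the magnitude of their surrogate coefficients.
    order = np.argsort(-np.abs(surrogate.coef_))
    return [(int(rows[i]), int(cols[i]), float(surrogate.coef_[i])) for i in order]
```

Any black-box scorer can be passed as `gnn_predict`, so the sketch stays post-hoc and model-agnostic in the same sense the abstract claims for TraP2; the paper's actual layers (including its feature-level perturbations and the exact boundary-fitting procedure) are not specified on this page.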
