Paper Title

MotifExplainer: a Motif-based Graph Neural Network Explainer

Paper Authors

Zhaoning Yu, Hongyang Gao

Paper Abstract

We consider the explanation problem of Graph Neural Networks (GNNs). Most existing GNN explanation methods identify the most important edges or nodes but fail to consider substructures, which are more important for graph data. The only method that considers subgraphs tries to search all possible subgraphs and identify the most significant ones. However, the identified subgraphs may not be recurrent or statistically important. In this work, we propose a novel method, known as MotifExplainer, to explain GNNs by identifying important motifs, which are recurrent and statistically significant patterns in graphs. Our proposed motif-based method can provide better human-understandable explanations than methods based on nodes, edges, and regular subgraphs. Given an input graph and a pre-trained GNN model, our method first extracts motifs from the graph using well-designed motif extraction rules. Then we generate motif embeddings by feeding the motifs into the pre-trained GNN. Finally, we employ an attention-based method to identify the most influential motifs as explanations for the final prediction results. Empirical studies on both synthetic and real-world datasets demonstrate the effectiveness of our method.
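
To make the three-step pipeline in the abstract concrete, here is a minimal, hypothetical PyTorch sketch of the attention-based motif scoring step. Everything in it is an illustrative assumption rather than the authors' released code: the motif node sets, the mean-pooling that stands in for GNN-produced motif embeddings, and the `MotifAttentionExplainer` module itself. In the paper, motif embeddings come from feeding extracted motifs through the pre-trained GNN, and the attention weights over motifs serve as the explanation.

```python
# Hypothetical sketch of attention-based motif scoring (not the authors' code).
import torch
import torch.nn as nn


class MotifAttentionExplainer(nn.Module):
    """Scores pre-extracted motifs with attention over their embeddings."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)  # projects the whole-graph embedding
        self.key = nn.Linear(dim, dim)    # projects each motif embedding

    def forward(self, graph_emb: torch.Tensor, motif_embs: torch.Tensor):
        # graph_emb: (dim,), motif_embs: (num_motifs, dim)
        scores = self.key(motif_embs) @ self.query(graph_emb)      # (num_motifs,)
        weights = torch.softmax(scores / motif_embs.shape[-1] ** 0.5, dim=0)
        # Attention weights rank motifs by their influence on the prediction.
        ranking = weights.argsort(descending=True)
        context = weights @ motif_embs  # weighted motif summary, (dim,)
        return weights, ranking, context


# Toy usage: node embeddings standing in for a frozen, pre-trained GNN's
# output, and three motifs given as node-index lists that some motif
# extraction rule might produce.
torch.manual_seed(0)
node_embs = torch.randn(10, 16)                  # stand-in for GNN node outputs
motifs = [[0, 1, 2], [3, 4, 5, 6], [7, 8, 9]]    # hypothetical motif node sets
motif_embs = torch.stack([node_embs[m].mean(0) for m in motifs])  # pool nodes
graph_emb = node_embs.mean(0)                    # whole-graph embedding

explainer = MotifAttentionExplainer(dim=16)
weights, ranking, _ = explainer(graph_emb, motif_embs)
print("motif attention weights:", weights.tolist())
print("motifs ranked by influence:", ranking.tolist())
```

In the actual method, the attention module would presumably be trained against the frozen GNN's prediction target, so that high-weight motifs are the ones that drive the final output; the sketch above only shows how untrained attention scores would rank the motifs.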
