Paper Title


Explaining Dynamic Graph Neural Networks via Relevance Back-propagation

Authors

Xie, Jiaxuan, Liu, Yezi, Shen, Yanning

Abstract


Graph Neural Networks (GNNs) have shown remarkable effectiveness in capturing the abundant information in graph-structured data. However, the black-box nature of GNNs hinders users from understanding and trusting the models, leading to difficulties in their application. While recent years have witnessed a surge of studies on explaining GNNs, most of them focus on static graphs, leaving the explanation of dynamic GNNs nearly unexplored. Explaining dynamic GNNs is challenging because of their unique characteristic: time-varying graph structures. Directly applying existing models designed for static graphs to dynamic graphs is not feasible, because they ignore the temporal dependencies among snapshots. In this work, we propose DGExplainer to provide reliable explanations for dynamic GNNs. DGExplainer redistributes the output activation score of a dynamic GNN into relevance scores for the neurons of the previous layer, and this redistribution iterates backward until relevance scores for the input neurons are obtained. We conduct quantitative and qualitative experiments on real-world datasets to demonstrate the effectiveness of the proposed framework in identifying important nodes in dynamic GNNs for link prediction and node regression.
