Paper Title
A Note on Over-Smoothing for Graph Neural Networks
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have achieved great success on graph-structured data. However, it has been observed that the performance of GNNs does not improve as the number of layers increases. This effect, known as over-smoothing, has been analyzed mostly in the linear case. In this paper, we build upon previous results \cite{oono2019graph} to further analyze the over-smoothing effect in general graph neural network architectures. We show that when the weight matrices satisfy conditions determined by the spectrum of the augmented normalized Laplacian, the Dirichlet energy of the embeddings converges to zero, resulting in a loss of discriminative power. Using the Dirichlet energy to measure the "expressiveness" of embeddings is conceptually clean; it leads to simpler proofs than \cite{oono2019graph} and can handle more non-linearities.
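The quantities in the abstract are easy to state concretely. Below is a minimal numerical sketch, not code from the paper: the random graph, the embedding dimension, the spectral-norm scaling of the weight matrices, and the helper names `augmented_operators` and `dirichlet_energy` are all illustrative assumptions. It computes the Dirichlet energy E(X) = tr(X^T L~ X) with respect to the augmented normalized Laplacian L~ and shows it shrinking toward zero under GCN-style layers X <- ReLU(A_hat X W) once the weights have small enough spectral norm.

```python
import numpy as np

def augmented_operators(A):
    """Return (A_hat, L_tilde) for an adjacency matrix A, where
    A~ = A + I, D~ = diag(deg(A~)), A_hat = D~^{-1/2} A~ D~^{-1/2},
    and L~ = I - A_hat is the augmented normalized Laplacian."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)
    d = A_tilde.sum(axis=1)                      # augmented degrees, always >= 1
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt    # GCN propagation operator
    L_tilde = np.eye(n) - A_hat
    return A_hat, L_tilde

def dirichlet_energy(X, L_tilde):
    """E(X) = tr(X^T L~ X); zero iff embeddings are constant in the
    (degree-normalized) sense, i.e. fully 'smoothed' over the graph."""
    return np.trace(X.T @ L_tilde @ X)

rng = np.random.default_rng(0)
n, d = 6, 4                                      # illustrative sizes
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                                      # random undirected graph

A_hat, L_tilde = augmented_operators(A)
X = rng.standard_normal((n, d))

for layer in range(10):
    W = rng.standard_normal((d, d))
    W = 0.5 * W / np.linalg.norm(W, 2)           # largest singular value = 0.5
    X = np.maximum(A_hat @ X @ W, 0.0)           # ReLU(A_hat X W)
    print(f"layer {layer + 1}: Dirichlet energy = {dirichlet_energy(X, L_tilde):.3e}")
```

Scaling each W to spectral norm 0.5 is what makes the decay visible here: each layer then contracts the energy, so it converges to zero geometrically. If the product of the weight norm and the relevant eigenvalue of A_hat exceeds one, the energy need not vanish; that threshold is the spectral condition the abstract refers to.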