Paper Title

Geometrically Principled Connections in Graph Neural Networks

Authors

Shunwang Gong, Mehdi Bahri, Michael M. Bronstein, Stefanos Zafeiriou

Abstract

Graph convolution operators bring the advantages of deep learning to a variety of graph and mesh processing tasks previously deemed out of reach. With their continued success comes the desire to design more powerful architectures, often by adapting existing deep learning techniques to non-Euclidean data. In this paper, we argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning. We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs). We conjecture that, like RBFs, graph convolution layers would benefit from the addition of simple functions to the powerful convolution kernels. We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator. We experimentally demonstrate the effectiveness of our technique and show the improved performance is the consequence of more than the increased number of parameters. Operators equipped with the affine skip connection markedly outperform their base performance on every task we evaluated, i.e., shape reconstruction, dense shape correspondence, and graph classification. We hope our simple and effective approach will serve as a solid baseline and help ease future research in graph neural networks.
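The abstract describes the proposed affine skip connection as a fully connected layer combined with any graph convolution operator, so that the affine map of the input features is added to the convolution output. Below is a minimal, hypothetical PyTorch sketch of that idea; the ToyGraphConv operator, the AffineSkip wrapper, and all dimensions are illustrative assumptions for readability, not the authors' released implementation.

```python
# Minimal sketch of an affine skip connection, assuming PyTorch.
# ToyGraphConv is a stand-in graph convolution used only to keep the
# example self-contained; any graph convolution operator could be wrapped.
import torch
import torch.nn as nn


class ToyGraphConv(nn.Module):
    """Toy graph convolution: a linear transform followed by mean
    aggregation over neighbors given by a dense adjacency matrix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Row-normalize the adjacency so each node averages its neighbors.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return (adj / deg) @ self.lin(x)


class AffineSkip(nn.Module):
    """Wraps a graph convolution with an affine skip connection:
    y = conv(x, adj) + W x + b, i.e., a fully connected layer applied
    to the input features and added to the convolution output."""

    def __init__(self, conv, in_dim, out_dim):
        super().__init__()
        self.conv = conv
        self.affine = nn.Linear(in_dim, out_dim, bias=True)

    def forward(self, x, adj):
        return self.conv(x, adj) + self.affine(x)


if __name__ == "__main__":
    num_nodes, in_dim, out_dim = 5, 8, 16
    x = torch.randn(num_nodes, in_dim)
    adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()
    layer = AffineSkip(ToyGraphConv(in_dim, out_dim), in_dim, out_dim)
    print(layer(x, adj).shape)  # torch.Size([5, 16])
```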
