Paper Title

Simplifying Architecture Search for Graph Neural Network

Paper Authors

Huan Zhao, Lanning Wei, Quanming Yao

Paper Abstract

Recent years have witnessed the popularity of Graph Neural Networks (GNNs) in various scenarios. To obtain optimal data-specific GNN architectures, researchers turn to neural architecture search (NAS) methods, which have made impressive progress in discovering effective architectures for convolutional neural networks. Two preliminary works, GraphNAS and Auto-GNN, have made the first attempts to apply NAS methods to GNNs. Despite the promising results, GraphNAS and Auto-GNN suffer from several drawbacks in expressive capability and search efficiency due to their designed search spaces. To overcome these drawbacks, we propose the SNAG framework (Simplified Neural Architecture search for Graph neural networks), consisting of a novel search space and a reinforcement-learning-based search algorithm. Extensive experiments on real-world datasets demonstrate the effectiveness of the SNAG framework compared to human-designed GNNs and NAS methods, including GraphNAS and Auto-GNN.
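The abstract summarizes SNAG as a novel search space combined with a reinforcement-learning-based search algorithm. The Python sketch below is only a rough illustration of that general recipe, not the authors' code: a REINFORCE-style controller samples discrete architecture choices, evaluates the candidate, and updates its sampling distribution using the reward. The specific options in SEARCH_SPACE, the evaluate() stub, and the ReinforceController class are hypothetical placeholders; in a real run, evaluate() would train the sampled GNN on the target graph dataset and return its validation accuracy.

# Minimal sketch (assumed, not SNAG's actual search space or controller).
import math
import random

SEARCH_SPACE = {
    "aggregator": ["gcn", "gat", "sage-mean", "gin"],
    "hidden_dim": [16, 64, 128],
    "activation": ["relu", "elu", "tanh"],
}

def evaluate(arch):
    # Placeholder reward: in practice, train the sampled GNN and
    # return its validation accuracy.
    return random.random()

class ReinforceController:
    # Per-decision categorical policy updated with a REINFORCE gradient.
    def __init__(self, space, lr=0.1):
        self.space = space
        self.lr = lr
        # One vector of logits per architecture decision.
        self.logits = {k: [0.0] * len(v) for k, v in space.items()}

    def _probs(self, key):
        exps = [math.exp(l) for l in self.logits[key]]
        z = sum(exps)
        return [e / z for e in exps]

    def sample(self):
        arch, idx = {}, {}
        for key, choices in self.space.items():
            i = random.choices(range(len(choices)), weights=self._probs(key))[0]
            arch[key], idx[key] = choices[i], i
        return arch, idx

    def update(self, idx, reward, baseline):
        # REINFORCE: raise the log-probability of the sampled choices
        # in proportion to the advantage (reward - baseline).
        adv = reward - baseline
        for key, i in idx.items():
            probs = self._probs(key)
            for j in range(len(probs)):
                grad = (1.0 if j == i else 0.0) - probs[j]
                self.logits[key][j] += self.lr * adv * grad

controller = ReinforceController(SEARCH_SPACE)
baseline, best = 0.0, (None, -1.0)
for step in range(200):
    arch, idx = controller.sample()
    reward = evaluate(arch)
    baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
    controller.update(idx, reward, baseline)
    if reward > best[1]:
        best = (arch, reward)
print("best architecture found:", best[0])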
