Paper Title
Graph Contrastive Learning with Augmentations
Paper Authors
Paper Abstract
Generalizable, transferable, and robust representation learning on graph-structured data remains a challenge for current graph neural networks (GNNs). Unlike what has been developed for convolutional neural networks (CNNs) on image data, self-supervised learning and pre-training are less explored for GNNs. In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data. We first design four types of graph augmentations to incorporate various priors. We then systematically study the impact of various combinations of graph augmentations on multiple datasets, in four different settings: semi-supervised learning, unsupervised representation learning, transfer learning, and adversarial attacks. The results show that, even without tuning augmentation extents or using sophisticated GNN architectures, our GraphCL framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods. We also investigate the impact of parameterized graph augmentation extents and patterns, and observe further performance gains in preliminary experiments. Our code is available at https://github.com/Shen-Lab/GraphCL.
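To make the contrastive setup described above concrete, here is a minimal PyTorch sketch of its two core ingredients: one of the four augmentation types (node dropping) and an NT-Xent-style contrastive loss between the embeddings of two augmented views. The function names (`drop_nodes`, `nt_xent_loss`) and the simplified in-batch-negative formulation are illustrative assumptions for this sketch, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def drop_nodes(edge_index, num_nodes, drop_ratio=0.2):
    # Node dropping augmentation (hypothetical helper): randomly discard
    # a fraction of nodes and every edge incident to them.
    # edge_index: [2, num_edges] tensor of source/target node indices.
    keep = torch.rand(num_nodes) >= drop_ratio
    mask = keep[edge_index[0]] & keep[edge_index[1]]
    return edge_index[:, mask]

def nt_xent_loss(z1, z2, temperature=0.5):
    # Contrastive loss over two batches of graph-level embeddings
    # ([batch, dim]), where z1[i] and z2[i] are two augmented views of
    # the same graph; the other graphs in the batch act as negatives.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature                    # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)                # positives on the diagonal
```

In training, each graph in a minibatch would be augmented twice (e.g., via `drop_nodes`), both views encoded by a shared GNN with a projection head, and `nt_xent_loss` applied to the resulting embedding batches.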