Paper Title
Graph Meta Learning via Local Subgraphs
Paper Authors
Paper Abstract
Prevailing methods for graphs require abundant label and edge information for learning. When data for a new task are scarce, meta-learning can learn from prior experiences and form much-needed inductive biases for fast adaptation to new tasks. Here, we introduce G-Meta, a novel meta-learning algorithm for graphs. G-Meta uses local subgraphs to transfer subgraph-specific information and learn transferable knowledge faster via meta gradients. G-Meta learns how to quickly adapt to a new task using only a handful of nodes or edges in the new task and does so by learning from data points in other graphs or related, albeit disjoint, label sets. G-Meta is theoretically justified as we show that the evidence for a prediction can be found in the local subgraph surrounding the target node or edge. Experiments on seven datasets and nine baseline methods show that G-Meta outperforms existing methods by up to 16.3%. Unlike previous methods, G-Meta successfully learns in challenging, few-shot learning settings that require generalization to completely new graphs and never-before-seen labels. Finally, G-Meta scales to large graphs, which we demonstrate on a new Tree-of-Life dataset comprising 1,840 graphs, a two-orders-of-magnitude increase in the number of graphs used in prior work.
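The abstract's central idea is that the evidence for a node- or edge-level prediction lies in the local subgraph around the target, so each few-shot task can operate on small h-hop neighborhoods instead of the whole graph. The sketch below illustrates that extraction step with a plain BFS over an adjacency dict; the function name, the adjacency-dict representation, and the hop limit `h=2` are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def local_subgraph_nodes(adj, target, h=2):
    """Collect the nodes within h hops of `target` via breadth-first search.

    adj: dict mapping node -> list of neighbor nodes (undirected graph).
    Returns the node set of the h-hop local subgraph around `target`.
    Illustrative sketch of the neighborhood extraction G-Meta builds on;
    not the authors' implementation.
    """
    dist = {target: 0}          # node -> hop distance from target
    queue = deque([target])
    while queue:
        u = queue.popleft()
        if dist[u] == h:        # do not expand beyond the hop limit
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

# Toy example: path graph 0-1-2-3-4.
# The 2-hop local subgraph around node 0 contains nodes {0, 1, 2}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(local_subgraph_nodes(adj, 0, h=2))  # {0, 1, 2}
```

In a meta-learning loop, each support or query example would be replaced by its extracted subgraph, and a GNN plus meta-gradient updates (as the abstract describes) would be applied per subgraph.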