Title

Features Based Adaptive Augmentation for Graph Contrastive Learning

Authors

Adnan Ali, Jinlong Li

Abstract

Self-supervised learning aims to eliminate the need for expensive annotation in graph representation learning, where graph contrastive learning (GCL) is trained with self-supervision signals consisting of data-data pairs. These pairs are generated by applying stochastic augmentation functions to the original graph. We argue that, depending on the downstream task, some features can be more critical than others, and that applying a stochastic function uniformly corrupts influential features, leading to diminished accuracy. To address this issue, we introduce a Feature Based Adaptive Augmentation (FebAA) approach, which identifies and preserves potentially influential features and corrupts the remaining ones. We implement FebAA as a plug-and-play layer and use it with the state-of-the-art Deep Graph Contrastive Learning (GRACE) and Bootstrapped Graph Latents (BGRL). FebAA successfully improves the accuracy of GRACE and BGRL on eight benchmark datasets for graph representation learning.
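The abstract does not specify how influential features are scored or how the remaining ones are corrupted. The following NumPy sketch illustrates the general idea only, under two assumptions of our own: column variance serves as a stand-in importance proxy, and corruption is random zero-masking of entries. The function name `feature_adaptive_mask` and both parameters are hypothetical, not from the paper.

```python
import numpy as np

def feature_adaptive_mask(X, keep_ratio=0.5, drop_prob=0.3, seed=None):
    """Illustrative sketch of feature-based adaptive augmentation.

    Assumptions (not from the paper): importance of each feature
    dimension is proxied by its variance across nodes; the top
    `keep_ratio` fraction of dimensions is preserved untouched, and
    entries in the remaining dimensions are zeroed with probability
    `drop_prob`.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Rank feature dimensions by a (hypothetical) importance proxy.
    importance = X.var(axis=0)
    k = max(1, int(keep_ratio * d))
    influential = np.argsort(importance)[-k:]  # indices of preserved dims

    # Stochastic corruption mask, then exempt the influential dimensions.
    mask = rng.random((n, d)) < drop_prob
    mask[:, influential] = False

    X_aug = X.copy()
    X_aug[mask] = 0.0
    return X_aug, influential
```

Two such augmented views of the node-feature matrix would then feed a GCL objective such as GRACE's; the point of the sketch is only that the corruption is non-uniform across feature dimensions.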
