Paper Title


XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation

Paper Authors

Junliang Yu, Xin Xia, Tong Chen, Lizhen Cui, Nguyen Quoc Viet Hung, Hongzhi Yin

Abstract


Contrastive learning (CL) has recently been demonstrated to be critical in improving recommendation performance. The underlying principle of CL-based recommendation models is to ensure consistency between representations derived from different graph augmentations of the user-item bipartite graph. This self-supervised approach allows for the extraction of general features from raw data, thereby mitigating the issue of data sparsity. Despite the effectiveness of this paradigm, the factors contributing to its performance gains have yet to be fully understood. This paper provides novel insights into the impact of CL on recommendation. Our findings indicate that CL enables the model to learn more evenly distributed user and item representations, which alleviates the prevalent popularity bias and promotes long-tail items. Our analysis also suggests that the graph augmentations, previously considered essential, are relatively unreliable and of limited significance in CL-based recommendation. Based on these findings, we put forward an eXtremely Simple Graph Contrastive Learning method (XSimGCL) for recommendation, which discards the ineffective graph augmentations and instead employs a simple yet effective noise-based embedding augmentation to generate views for CL. A comprehensive experimental study on four large and highly sparse benchmark datasets demonstrates that, though the proposed method is extremely simple, it can smoothly adjust the uniformity of the learned representations and outperforms its graph augmentation-based counterparts by a large margin in both recommendation accuracy and training efficiency. The code and the datasets used are released at https://github.com/Coder-Yu/SELFRec.
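The abstract's key idea, replacing graph augmentations with noise-based embedding augmentation for CL, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names (`perturb`, `info_nce`) and the parameter `eps` are assumptions, and the perturbation follows the commonly described recipe of adding a small sign-aligned random vector to each embedding before computing an InfoNCE loss between the two views.

```python
import numpy as np

def perturb(emb, eps=0.1, rng=None):
    """Noise-based embedding augmentation (sketch).

    Adds a random unit vector, scaled by `eps` and sign-aligned with the
    embedding, so the perturbed view stays close to (and in the same
    hyper-octant as) the original. `eps` is a hypothetical name for the
    noise-magnitude hyperparameter.
    """
    rng = rng or np.random.default_rng()
    noise = rng.random(emb.shape)  # uniform noise in [0, 1)
    noise /= np.linalg.norm(noise, axis=-1, keepdims=True)  # unit length per row
    return emb + eps * np.sign(emb) * noise

def info_nce(view1, view2, tau=0.2):
    """InfoNCE contrastive loss between two augmented views of the same
    batch of embeddings; row i of view1 and row i of view2 are positives,
    all other pairs are negatives."""
    v1 = view1 / np.linalg.norm(view1, axis=1, keepdims=True)
    v2 = view2 / np.linalg.norm(view2, axis=1, keepdims=True)
    logits = v1 @ v2.T / tau  # temperature-scaled cosine similarities
    idx = np.arange(len(v1))
    # log-probability that each row is matched with its own positive
    log_prob = logits[idx, idx] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()
```

In this sketch, two noisy views of the same embedding table would be produced by calling `perturb` twice, and `info_nce` pulls matching rows together while pushing the rest apart; tuning `eps` is what lets the method trade off how uniformly the representations spread out.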
