Paper Title
MGDCF: Distance Learning via Markov Graph Diffusion for Neural Collaborative Filtering
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have recently been utilized to build Collaborative Filtering (CF) models that predict user preferences based on historical user-item interactions. However, there is relatively little understanding of how GNN-based CF models relate to some traditional Network Representation Learning (NRL) approaches. In this paper, we show the equivalence between some state-of-the-art GNN-based CF models and a traditional 1-layer NRL model based on context encoding. Based on a Markov process that trades off two types of distances, we present Markov Graph Diffusion Collaborative Filtering (MGDCF) to generalize some state-of-the-art GNN-based CF models. Instead of considering the GNN as a trainable black box that propagates learnable user/item vertex embeddings, we treat GNNs as an untrainable Markov process that can construct constant context features of vertices for a traditional NRL model that encodes context features with a fully-connected layer. Such simplification can help us better understand how GNNs benefit CF models. In particular, it helps us realize that ranking losses play a crucial role in GNN-based CF tasks. With our proposed simple yet powerful ranking loss InfoBPR, the NRL model can still perform well without the context features constructed by GNNs. We conduct experiments to perform a detailed analysis of MGDCF.
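To make the abstract's two central ideas concrete, the following is a minimal PyTorch sketch, not the paper's implementation: a personalized-PageRank-style iteration stands in for the untrainable Markov graph diffusion that builds constant context features, and an InfoNCE-style multi-negative softmax loss stands in for InfoBPR. All function names, hyperparameters (`num_steps`, `alpha`), and the exact loss form are assumptions for illustration; the paper defines its own coefficients.

```python
import torch
import torch.nn.functional as F

def markov_diffusion(adj_norm, x, num_steps=4, alpha=0.1):
    """Untrainable Markov-process propagation over constant vertex features.
    A personalized-PageRank-style iteration is used here as a stand-in for the
    paper's Markov graph diffusion; num_steps and alpha are hypothetical."""
    h = x
    for _ in range(num_steps):
        # Mix propagated neighborhood features with the initial features,
        # trading off neighborhood smoothness against closeness to the input.
        h = (1.0 - alpha) * (adj_norm @ h) + alpha * x
    return h

def info_bpr_loss(pos_scores, neg_scores):
    """One plausible multi-negative generalization of BPR (InfoNCE-style);
    the exact form of InfoBPR is defined in the paper.
    pos_scores: (B,); neg_scores: (B, L) for L sampled negatives."""
    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)  # (B, 1+L)
    # Softmax cross-entropy against the positive item in position 0.
    targets = torch.zeros(logits.size(0), dtype=torch.long)
    return F.cross_entropy(logits, targets)

# Minimal usage: constant context features + a fully-connected encoder,
# i.e., the 1-layer NRL view of a GNN-based CF model described above.
num_vertices, in_dim, out_dim = 6, 8, 4
adj_norm = torch.eye(num_vertices)          # placeholder normalized adjacency
x = torch.randn(num_vertices, in_dim)       # constant vertex features
context = markov_diffusion(adj_norm, x)     # fixed; nothing here is trained
encoder = torch.nn.Linear(in_dim, out_dim)  # the trainable 1-layer encoder
z = encoder(context)                        # user/item embeddings
```

Note that with a single sampled negative (L = 1), the softmax loss above reduces to the standard BPR loss, -log sigmoid(s_pos - s_neg), which is consistent with the abstract's framing of InfoBPR as a simple yet powerful generalization of a ranking loss.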