Paper Title

Lightweight Compositional Embeddings for Incremental Streaming Recommendation

Paper Authors

Mengyue Hang, Tobias Schnabel, Longqi Yang, Jennifer Neville

Paper Abstract

Most work in graph-based recommender systems considers a static setting where all information about test nodes (i.e., users and items) is available upfront at training time. However, this static setting makes little sense for many real-world applications where data comes in continuously as a stream of new edges and nodes, and one has to update model predictions incrementally to reflect the latest state. To fully capitalize on the newly available data in the stream, recent graph-based recommendation models would need to be repeatedly retrained, which is infeasible in practice. In this paper, we study the graph-based streaming recommendation setting and propose a compositional recommendation model -- Lightweight Compositional Embedding (LCE) -- that supports incremental updates under low computational cost. Instead of learning explicit embeddings for the full set of nodes, LCE learns explicit embeddings for only a subset of nodes and represents the other nodes implicitly, through a composition function based on their interactions in the graph. This provides an effective, yet efficient, means to leverage streaming graph data when one node type (e.g., items) is more amenable to static representation. We conduct an extensive empirical study to compare LCE to a set of competitive baselines on three large-scale user-item recommendation datasets with interactions under a streaming setting. The results demonstrate the superior performance of LCE, showing that it achieves nearly skyline performance with significantly fewer parameters than alternative graph-based models.
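To illustrate the core idea in the abstract, below is a minimal sketch (not the authors' implementation) of a compositional embedding model: items keep explicit, learned embeddings, while each user is represented implicitly by composing the embeddings of the items they interacted with. The mean-pooling composition, the class name, and the dot-product scorer are assumptions for illustration; the point is that a new streaming interaction only changes the composition input, so predictions can be refreshed without retraining.

```python
# Minimal sketch of compositional embeddings for streaming recommendation.
# Assumption: items are the "static" node type with explicit embeddings;
# users are composed on the fly from their interaction history.
import torch
import torch.nn as nn


class CompositionalRecommender(nn.Module):
    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        # Explicit embeddings only for items (the subset of nodes).
        self.item_emb = nn.Embedding(num_items, dim)

    def user_vector(self, interacted_items: torch.Tensor) -> torch.Tensor:
        # Composition function: mean over interacted item embeddings
        # (a placeholder; the paper's composition function may differ).
        return self.item_emb(interacted_items).mean(dim=0)

    def score(self, interacted_items: torch.Tensor,
              candidate_items: torch.Tensor) -> torch.Tensor:
        # Dot-product scores between the composed user vector and candidates.
        u = self.user_vector(interacted_items)
        return self.item_emb(candidate_items) @ u


# Usage: when the stream delivers a new interaction, we only extend the
# user's item history; no gradient update is needed to refresh predictions.
model = CompositionalRecommender(num_items=1000)
history = torch.tensor([3, 17, 256])                 # interactions so far
candidates = torch.tensor([5, 42, 999])
print(model.score(history, candidates))              # scores before the event
history = torch.cat([history, torch.tensor([42])])   # new streaming interaction
print(model.score(history, candidates))              # incrementally updated scores
```

This also shows why the parameter count stays small: only item embeddings are stored, while user representations are derived from the graph interactions at inference time.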
