Paper Title

Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation

Paper Authors

Adit Krishnan, Mahashweta Das, Mangesh Bendre, Hao Yang, Hari Sundaram

Paper Abstract

The rapid proliferation of new users and items on the social web has aggravated the gray-sheep user/long-tail item challenge in recommender systems. Historically, cross-domain co-clustering methods have successfully leveraged shared users and items across dense and sparse domains to improve inference quality. However, they rely on shared rating data and cannot scale to multiple sparse target domains (i.e., the one-to-many transfer setting). This, combined with the increasing adoption of neural recommender architectures, motivates us to develop scalable neural layer-transfer approaches for cross-domain learning. Our key intuition is to guide neural collaborative filtering with domain-invariant components shared across the dense and sparse domains, improving the user and item representations learned in the sparse domains. We leverage contextual invariances across domains to develop these shared modules, and demonstrate that with user-item interaction context, we can learn-to-learn informative representation spaces even with sparse interaction data. We show the effectiveness and scalability of our approach on two public datasets and a massive transaction dataset from Visa, a global payments technology company (19% Item Recall, 3x faster vs. training separate models for each domain). Our approach is applicable to both implicit and explicit feedback settings.
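The core idea in the abstract lends itself to a short sketch: keep user and item embeddings domain-specific, but route them through a context-conditioned scoring module that is shared across the dense source domain and each sparse target domain. The PyTorch sketch below is a minimal illustration of that layer-transfer pattern under stated assumptions, not the paper's actual architecture; the class name `ContextGuidedNCF`, the layer sizes, and the freeze-after-transfer step are all hypothetical choices for exposition.

```python
import torch
import torch.nn as nn

class ContextGuidedNCF(nn.Module):
    """Illustrative sketch (not the paper's exact model): domain-specific
    user/item embeddings scored by a context-conditioned module that is
    shared across domains."""

    def __init__(self, n_users, n_items, ctx_dim, emb_dim=64, shared_net=None):
        super().__init__()
        # Domain-specific parameters: trained from scratch in every domain.
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.item_emb = nn.Embedding(n_items, emb_dim)
        # Shared, domain-invariant module: trained on the dense source
        # domain, then reused (frozen or lightly fine-tuned) in each
        # sparse target domain.
        self.shared_net = shared_net if shared_net is not None else nn.Sequential(
            nn.Linear(2 * emb_dim + ctx_dim, emb_dim),
            nn.ReLU(),
            nn.Linear(emb_dim, 1),
        )

    def forward(self, users, items, context):
        u = self.user_emb(users)                 # (batch, emb_dim)
        v = self.item_emb(items)                 # (batch, emb_dim)
        x = torch.cat([u, v, context], dim=-1)   # append interaction context
        return self.shared_net(x).squeeze(-1)    # interaction score

# One-to-many transfer: reuse the source model's shared layers in each
# sparse target domain; only the embeddings are learned per domain.
source = ContextGuidedNCF(n_users=10_000, n_items=5_000, ctx_dim=8)
# ... train `source` on the dense domain ...
target = ContextGuidedNCF(n_users=2_000, n_items=800, ctx_dim=8,
                          shared_net=source.shared_net)
for p in target.shared_net.parameters():
    p.requires_grad = False  # freeze the transferred, domain-invariant part
```

Because only the lightweight per-domain embeddings are trained for each new target, this pattern scales to many sparse domains at once, which is consistent with the one-to-many setting and the reported speedup over training a separate full model per domain.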
