Paper Title

Improving Contrastive Learning with Model Augmentation

Authors

Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong

Abstract

Sequential recommendation aims to predict the next item in a user's behavior sequence, which can be solved by characterizing item relationships within sequences. Due to data sparsity and noise issues in sequences, a new self-supervised learning (SSL) paradigm has been proposed to improve performance, which employs contrastive learning between positive and negative views of sequences. However, existing methods all construct views by adopting augmentation from the data perspective, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals. Therefore, we investigate the possibility of model augmentation to construct view pairs. We propose three levels of model augmentation methods: neuron masking, layer dropping, and encoder complementing. This work opens up a novel direction in constructing views for contrastive SSL. Experiments verify the efficacy of model augmentation for SSL in sequential recommendation. Code is available at https://github.com/salesforce/SRMA.
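To make the three augmentation levels concrete, below is a minimal PyTorch sketch of how an encoder could be perturbed at the model level to produce two stochastic views of the same input sequence. This is not the SRMA implementation; the class name, dimensions, and probabilities (e.g., ModelAugmentedEncoder, layer_drop_p) are illustrative assumptions, and the auxiliary GRU merely stands in for whatever complementary encoder the paper pairs with the main one.

```python
import torch
import torch.nn as nn

class ModelAugmentedEncoder(nn.Module):
    """Illustrative sketch of the three model-augmentation levels from the
    abstract. All names and hyperparameters here are hypothetical, not SRMA's."""

    def __init__(self, hidden_dim=64, num_layers=2,
                 neuron_mask_p=0.1, layer_drop_p=0.2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=2,
                                       batch_first=True)
            for _ in range(num_layers)
        )
        # 1) Neuron masking: randomly zero hidden units (dropout on activations).
        self.neuron_mask = nn.Dropout(p=neuron_mask_p)
        # 2) Layer dropping: probability of skipping a whole encoder layer.
        self.layer_drop_p = layer_drop_p
        # 3) Encoder complementing: an auxiliary encoder whose output is combined
        #    with the main encoder's to enrich the view (GRU chosen arbitrarily).
        self.complement = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, x):
        h = x
        for layer in self.layers:
            # Stochastically skip an entire layer at training time.
            if self.training and torch.rand(1).item() < self.layer_drop_p:
                continue
            h = self.neuron_mask(layer(h))
        aux, _ = self.complement(x)
        return h + aux  # complemented sequence representation

# Two stochastic forward passes over the same batch yield a positive view pair
# for a contrastive (e.g., InfoNCE-style) objective.
enc = ModelAugmentedEncoder().train()
seq = torch.randn(8, 20, 64)  # (batch, seq_len, hidden)
view1, view2 = enc(seq), enc(seq)
```

In the paper's setting, such a view pair would feed a contrastive SSL loss alongside the next-item prediction objective; the key difference from prior work is that the randomness lives in the model rather than in data-level sequence edits.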
