Paper Title

CLSEG: Contrastive Learning of Story Ending Generation

Paper Authors

Yuqiang Xie, Yue Hu, Luxi Xing, Yunpeng Li, Wei Peng, Ping Guo

Paper Abstract

Story Ending Generation (SEG) is a challenging task in natural language generation. Recently, methods based on Pre-trained Language Models (PLMs) have achieved great success and can produce fluent and coherent story endings. However, the pre-training objective of PLM-based methods is unable to model the consistency between the story context and the ending. The goal of this paper is to adopt contrastive learning to generate endings that are more consistent with the story context, and there are two main challenges in contrastive learning for SEG. The first is the negative sampling of wrong endings that are inconsistent with the story context. The second is the adaptation of contrastive learning to SEG. To address these two issues, we propose a novel Contrastive Learning framework for Story Ending Generation (CLSEG), which has two steps: multi-aspect sampling and story-specific contrastive learning. In particular, for the first issue, we utilize novel multi-aspect sampling mechanisms to obtain wrong endings, considering consistency of order, causality, and sentiment. To solve the second issue, we carefully design a story-specific contrastive training strategy adapted for SEG. Experiments show that CLSEG outperforms baselines and can produce story endings with stronger consistency and rationality.
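The abstract does not spell out the concrete training objective. As a rough illustration only, the minimal PyTorch sketch below shows one way a sequence-level contrastive objective over a gold ending and multi-aspect sampled wrong endings could be formulated; the hinge/margin form and all names (e.g. `score_ending`, `margin`) are assumptions for illustration, not the paper's actual method.

```python
# Illustrative sketch: a contrastive loss that prefers the gold ending over
# sampled wrong endings (e.g. order-, causality-, or sentiment-inconsistent
# negatives) by a margin. NOT the CLSEG implementation.
import torch
import torch.nn.functional as F

def score_ending(token_logits: torch.Tensor, ending_ids: torch.Tensor) -> torch.Tensor:
    """Length-normalized log-likelihood of an ending under the decoder.
    token_logits: (seq_len, vocab_size) logits at each ending position.
    ending_ids:   (seq_len,) token ids of the ending being scored.
    """
    log_probs = F.log_softmax(token_logits, dim=-1)
    token_ll = log_probs.gather(-1, ending_ids.unsqueeze(-1)).squeeze(-1)
    return token_ll.mean()

def contrastive_ending_loss(pos_logits, pos_ids, neg_logits_list, neg_ids_list, margin=1.0):
    """Hinge-style loss: the gold ending should outscore every wrong ending
    by at least `margin`."""
    pos_score = score_ending(pos_logits, pos_ids)
    losses = []
    for neg_logits, neg_ids in zip(neg_logits_list, neg_ids_list):
        neg_score = score_ending(neg_logits, neg_ids)
        losses.append(F.relu(margin - (pos_score - neg_score)))
    return torch.stack(losses).mean()

# Toy usage with random tensors standing in for decoder outputs.
vocab, seq_len = 50, 8
pos_logits = torch.randn(seq_len, vocab)
pos_ids = torch.randint(0, vocab, (seq_len,))
neg_logits_list = [torch.randn(seq_len, vocab) for _ in range(3)]  # one per sampled wrong ending
neg_ids_list = [torch.randint(0, vocab, (seq_len,)) for _ in range(3)]
print(contrastive_ending_loss(pos_logits, pos_ids, neg_logits_list, neg_ids_list).item())
```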
