Paper Title
Stylized Knowledge-Grounded Dialogue Generation via Disentangled Template Rewriting
Paper Authors
Paper Abstract
Current Knowledge-Grounded Dialogue Generation (KDG) models specialize in producing rational and factual responses. However, to establish long-term relationships with users, a KDG model needs the capability to generate responses in a desired style or with a desired attribute. We therefore study a new problem: Stylized Knowledge-Grounded Dialogue Generation (SKDG). It presents two challenges: (1) how to train an SKDG model when no <context, knowledge, stylized response> triples are available, and (2) how to cohere with the context and preserve the knowledge while generating a stylized response. In this paper, we propose a novel Disentangled Template Rewriting (DTR) method that generates responses by combining disentangled style templates (from a monolingual stylized corpus) and content templates (from a KDG corpus). The entire framework is end-to-end differentiable and is learned without supervision. Extensive experiments on two benchmarks show that DTR achieves significant improvements on all evaluation metrics over previous state-of-the-art stylized dialogue generation methods. Moreover, DTR achieves performance comparable to state-of-the-art KDG methods in the standard KDG evaluation setting.
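To make the template-rewriting idea concrete, below is a minimal, hypothetical Python sketch of how a style template (mined from a stylized corpus) and content fragments (taken from a knowledge-grounded response) could be recombined. The slot marker, function names, and hand-written masking rules are illustrative assumptions only; the paper's actual method learns the disentanglement with differentiable components, end to end and without supervision.

import re

def make_content_template(kdg_response, style_words):
    # Delete style-bearing tokens, keeping the knowledge-carrying fragments.
    return [tok for tok in kdg_response.split() if tok.lower() not in style_words]

def make_style_template(stylized_sentence, content_words):
    # Replace content tokens with a slot marker, keeping the stylistic frame.
    tokens = ["[SLOT]" if tok.lower().strip(".,!?") in content_words else tok
              for tok in stylized_sentence.split()]
    # Collapse runs of adjacent slots into a single slot.
    return re.sub(r"(\[SLOT\]\s*)+", "[SLOT] ", " ".join(tokens)).strip()

def rewrite(style_template, content_fragments):
    # Fill the style template's slots with content fragments, left to right.
    out = style_template
    for frag in content_fragments:
        out = out.replace("[SLOT]", frag, 1)
    return out

style_template = make_style_template("Oh darling, Paris is simply divine!", {"paris"})
# -> "Oh darling, [SLOT] is simply divine!"
fragments = make_content_template(
    "Well, the Eiffel Tower is in Paris", {"well,", "is", "in", "paris"})
# -> ["the", "Eiffel", "Tower"]
print(rewrite(style_template, [" ".join(fragments)]))
# -> "Oh darling, the Eiffel Tower is simply divine!"

The toy masking rules stand in for what DTR learns: which fragments of a sentence carry style and which carry content, so that the two can be disentangled and recombined.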