Paper Title

Incorporating Joint Embeddings into Goal-Oriented Dialogues with Multi-Task Learning

Paper Authors

Firas Kassawat, Debanjan Chaudhuri, Jens Lehmann

Abstract

Attention-based encoder-decoder neural network models have recently shown promising results in goal-oriented dialogue systems. However, these models struggle to reason over and incorporate stateful knowledge while preserving their end-to-end text generation functionality. Since such models can greatly benefit from user intent and knowledge graph integration, in this paper we propose an RNN-based end-to-end encoder-decoder architecture which is trained with joint embeddings of the knowledge graph and the corpus as input. The model provides an additional integration of user intent along with text generation, trained with a multi-task learning paradigm and an additional regularization technique that penalizes generating the wrong entity as output. The model further incorporates a knowledge graph entity lookup during inference to guarantee that the generated output is stateful with respect to the provided local knowledge graph. We finally evaluated the model using the BLEU score; the empirical evaluation shows that our proposed architecture can improve the performance of task-oriented dialogue systems.
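The abstract names two mechanisms: a multi-task objective with a regularizer penalizing wrong entities, and a knowledge-graph entity lookup at inference time. A minimal sketch of both ideas follows; all function names, the fallback token, and the loss weights are illustrative assumptions, not taken from the paper.

```python
def multitask_loss(gen_loss, intent_loss, entity_penalty,
                   w_intent=0.5, w_entity=0.1):
    # Hypothetical combined objective: response-generation loss plus a
    # weighted intent-classification loss (multi-task learning) and a
    # regularization term penalizing wrongly generated entities.
    # The weights w_intent and w_entity are illustrative, not from the paper.
    return gen_loss + w_intent * intent_loss + w_entity * entity_penalty

def kg_entity_lookup(candidate, local_kg_entities, fallback="<unk_entity>"):
    # Sketch of the inference-time lookup: emit an entity only if it exists
    # in the provided local knowledge graph; otherwise fall back to a
    # placeholder token so the output stays consistent with the KG.
    return candidate if candidate in local_kg_entities else fallback
```

In a full implementation the three loss terms would come from the decoder, an intent classifier head on the encoder, and an entity-matching check against the gold response, respectively.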
