Paper Title

A Scope Sensitive and Result Attentive Model for Multi-Intent Spoken Language Understanding

Authors

Lizhi Cheng, Wenmian Yang, Weijia Jia

Abstract

Multi-Intent Spoken Language Understanding (SLU), a novel and more complex scenario of SLU, is attracting increasing attention. Unlike traditional SLU, each intent in this scenario has its own specific scope, and semantic information outside that scope can even hinder prediction, which greatly increases the difficulty of intent detection. More seriously, guiding slot filling with these inaccurate intent labels suffers from error propagation, resulting in unsatisfactory overall performance. To solve these challenges, in this paper we propose a novel Scope-Sensitive Result Attention Network (SSRAN) based on the Transformer, which contains a Scope Recognizer (SR) and a Result Attention Network (RAN). The Scope Recognizer assigns scope information to each token, reducing the distraction of out-of-scope tokens. The Result Attention Network effectively exploits the bidirectional interaction between the results of slot filling and intent detection, mitigating the error propagation problem. Experiments on two public datasets show that our model significantly improves SLU performance (by 5.4% and 2.1% in overall accuracy) over the state-of-the-art baseline.
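
The abstract only sketches the architecture at a high level. Below is a minimal PyTorch sketch of how the two described components might be wired together; every class name, shape, and design choice here (the `ScopeRecognizer` as a sigmoid gate over token states, the `ResultAttentionNetwork` as cross-attention between preliminary predictions, and the `hidden_dim`/`num_intents`/`num_slots` parameters) is an illustrative assumption, not the paper's actual implementation.

```python
# Minimal sketch of the SSRAN idea described in the abstract.
# All names, shapes, and design details are illustrative assumptions;
# the paper's actual architecture may differ.
import torch
import torch.nn as nn


class ScopeRecognizer(nn.Module):
    """Assigns a scope score to each token so downstream layers can
    down-weight out-of-scope tokens (assumed formulation)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim) -> gate values in (0, 1)
        return torch.sigmoid(self.scorer(hidden))


class ResultAttentionNetwork(nn.Module):
    """Lets preliminary intent and slot predictions attend to each other,
    modeling the bidirectional interaction the abstract mentions."""

    def __init__(self, hidden_dim: int, num_intents: int, num_slots: int):
        super().__init__()
        self.intent_proj = nn.Linear(num_intents, hidden_dim)
        self.slot_proj = nn.Linear(num_slots, hidden_dim)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=4,
                                                batch_first=True)

    def forward(self, intent_logits, slot_logits):
        # intent_logits: (batch, num_intents); slot_logits: (batch, seq, num_slots)
        intent_h = self.intent_proj(intent_logits).unsqueeze(1)  # (batch, 1, hidden)
        slot_h = self.slot_proj(slot_logits)                     # (batch, seq, hidden)
        # Each task's preliminary result attends to the other's.
        slot_refined, _ = self.cross_attn(slot_h, intent_h, intent_h)
        intent_refined, _ = self.cross_attn(intent_h, slot_h, slot_h)
        return intent_refined.squeeze(1), slot_refined


class SSRAN(nn.Module):
    """Toy end-to-end wiring: Transformer encoder -> scope gating ->
    preliminary predictions -> result-attention refinement."""

    def __init__(self, vocab_size, hidden_dim, num_intents, num_slots):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        layer = nn.TransformerEncoderLayer(hidden_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.scope = ScopeRecognizer(hidden_dim)
        self.intent_head = nn.Linear(hidden_dim, num_intents)
        self.slot_head = nn.Linear(hidden_dim, num_slots)
        self.ran = ResultAttentionNetwork(hidden_dim, num_intents, num_slots)

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))             # (batch, seq, hidden)
        h = h * self.scope(h)                            # suppress out-of-scope tokens
        intent_logits = self.intent_head(h.mean(dim=1))  # utterance-level intents
        slot_logits = self.slot_head(h)                  # token-level slots
        intent_refined, slot_refined = self.ran(intent_logits, slot_logits)
        # Final predictions re-classify from the refined representations.
        return self.intent_head(intent_refined), self.slot_head(slot_refined)
```

The two ideas this sketch tries to capture are the scope gate suppressing out-of-scope tokens before any prediction is made, and the cross-attention letting each task's preliminary result condition the other's refinement, which is how the abstract frames the mitigation of error propagation.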
