Paper Title

Improving Bot Response Contradiction Detection via Utterance Rewriting

Paper Authors

Di Jin, Sijia Liu, Yang Liu, Dilek Hakkani-Tur

Paper Abstract

Though chatbots based on large neural models can often produce fluent responses in open domain conversations, one salient error type is contradiction or inconsistency with the preceding conversation turns. Previous work has treated contradiction detection in bot responses as a task similar to natural language inference, e.g., detect the contradiction between a pair of bot utterances. However, utterances in conversations may contain co-references or ellipsis, and using these utterances as is may not always be sufficient for identifying contradictions. This work aims to improve the contradiction detection via rewriting all bot utterances to restore antecedents and ellipsis. We curated a new dataset for utterance rewriting and built a rewriting model on it. We empirically demonstrate that this model can produce satisfactory rewrites to make bot utterances more complete. Furthermore, using rewritten utterances improves contradiction detection performance significantly, e.g., the AUPR and joint accuracy scores (detecting contradiction along with evidence) increase by 6.5% and 4.5% (absolute increase), respectively.
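The abstract describes a two-stage pipeline: first rewrite each bot utterance against the dialogue history so that co-references and ellipsis are restored, then run an NLI-style classifier over pairs of (rewritten) utterances to flag contradictions together with their evidence turns. Below is a minimal sketch of how such a pipeline could be wired up with Hugging Face `transformers`; it is not the authors' released code, and the checkpoints (`facebook/bart-base` as an untuned rewriter stand-in, `roberta-large-mnli` as a generic NLI detector) are assumptions, not the paper's fine-tuned models or curated rewriting dataset.

```python
# Sketch of the two-stage approach from the abstract (rewrite, then detect),
# using placeholder checkpoints rather than the paper's fine-tuned models.
import torch
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

# Stage 1: seq2seq utterance rewriter ("facebook/bart-base" is an untuned stand-in;
# the paper fine-tunes a rewriter on its own curated dataset).
rw_tok = AutoTokenizer.from_pretrained("facebook/bart-base")
rw_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Stage 2: off-the-shelf NLI classifier used as the contradiction detector.
nli_tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli_model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")


def rewrite(history, utterance):
    """Restore antecedents and elided arguments in `utterance` given prior turns."""
    source = " </s> ".join(history + [utterance])
    inputs = rw_tok(source, return_tensors="pt", truncation=True)
    with torch.no_grad():
        output_ids = rw_model.generate(**inputs, max_new_tokens=64)
    return rw_tok.decode(output_ids[0], skip_special_tokens=True)


def contradicts(premise, hypothesis, threshold=0.5):
    """True if the NLI model puts more than `threshold` probability on CONTRADICTION."""
    inputs = nli_tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli_model(**inputs).logits.softmax(dim=-1)[0]
    contra_idx = nli_model.config.label2id.get("CONTRADICTION", 0)
    return probs[contra_idx].item() > threshold


def detect_contradictions(bot_turns):
    """Return (evidence, response) pairs of rewritten bot utterances flagged as contradictory.
    For simplicity the history here contains only bot turns; a real dialogue
    would interleave user turns as rewriting context as well."""
    history, rewritten, flagged = [], [], []
    for turn in bot_turns:
        current = rewrite(history, turn)
        flagged.extend((past, current) for past in rewritten if contradicts(past, current))
        history.append(turn)
        rewritten.append(current)
    return flagged
```

Running the pairwise check on rewritten rather than raw utterances is the point of the paper: once pronouns and ellipsis are resolved, each utterance stands on its own, so an NLI-style detector no longer has to infer the missing context itself.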
