Paper Title
Do not let the history haunt you -- Mitigating Compounding Errors in Conversational Question Answering
Paper Authors
Paper Abstract
The Conversational Question Answering (CoQA) task involves answering a sequence of inter-related conversational questions about a contextual paragraph. Although existing approaches employ human-written ground-truth answers for answering conversational questions at test time, in a realistic scenario, the CoQA model will not have any access to ground-truth answers for the previous questions, compelling the model to rely upon its own previously predicted answers for answering the subsequent questions. In this paper, we find that compounding errors occur when using previously predicted answers at test time, significantly lowering the performance of CoQA systems. To solve this problem, we propose a sampling strategy that dynamically selects between target answers and model predictions during training, thereby closely simulating the situation at test time. Further, we analyse the severity of this phenomenon as a function of the question type, conversation length and domain type.
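The abstract describes the sampling strategy only at a high level. The minimal Python sketch below illustrates one plausible way such a dynamic choice between target answers and model predictions could be wired into history construction during training; it resembles scheduled-sampling-style training. All names (build_training_history, predict_fn, gold_prob) and the fixed-probability schedule are illustrative assumptions, not the paper's actual implementation.

    import random

    def build_training_history(questions, gold_answers, predict_fn, gold_prob=0.5):
        """Assemble the conversation history for one training conversation,
        mixing gold answers and model predictions.

        questions:    list of question strings for the conversation
        gold_answers: list of human-written ground-truth answers (same length)
        predict_fn:   callable(history, question) -> predicted answer string
                      (a stand-in for the CoQA model; hypothetical interface)
        gold_prob:    probability of feeding the gold answer into the history
                      at each turn (assumed fixed here; it could be annealed)
        """
        history = []  # (question, answer) pairs visible to later turns
        for question, gold in zip(questions, gold_answers):
            # The model conditions on the history built so far.
            predicted = predict_fn(history, question)
            # Dynamically select which answer enters the history: the target
            # answer or the model's own prediction, so training turns also see
            # the kind of imperfect history encountered at test time.
            answer_for_history = gold if random.random() < gold_prob else predicted
            history.append((question, answer_for_history))
        return history

For example, calling build_training_history with a predict_fn that returns the model's current best answer would, on average, expose half of the later turns to model-generated (possibly erroneous) history, which is the test-time condition the abstract argues standard teacher-forced training fails to simulate.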