Paper Title

Explain by Evidence: An Explainable Memory-based Neural Network for Question Answering

Paper Authors

Quan Tran, Nhan Dam, Tuan Lai, Franck Dernoncourt, Trung Le, Nham Le, Dinh Phung

Paper Abstract

Interpretability and explainability of deep neural networks are challenging due to their scale, complexity, and the agreeable notions on which the explaining process rests. Previous work, in particular, has focused on representing internal components of neural networks through human-friendly visuals and concepts. On the other hand, in real life, when making a decision, humans tend to rely on similar situations and/or associations from the past. Hence, arguably, a promising approach to making the model transparent is to design it so that the model explicitly connects the current sample with previously seen ones and bases its decision on those samples. Grounded on that principle, we propose in this paper an explainable, evidence-based memory network architecture, which learns to summarize the dataset and extract supporting evidence to make its decision. Our model achieves state-of-the-art performance on two popular question answering datasets (i.e., TrecQA and WikiQA). Via further analysis, we show that this model can reliably trace the errors it has made in the validation step back to the training instances that might have caused these errors. We believe that this error-tracing capability provides significant benefit in improving dataset quality in many applications.
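
To make the evidence-retrieval idea in the abstract more concrete, here is a minimal sketch, not the authors' exact architecture: a memory of training-instance embeddings is queried with attention, the attention-weighted labels give the prediction, and the most-attended memory slots are returned as the "supporting evidence" that can be traced back to training examples. The names (MemoryReader, predict_with_evidence, top_k) and the use of scaled dot-product attention are illustrative assumptions, not details taken from the paper.

```python
# Minimal, hypothetical sketch of an evidence-based memory reader.
import numpy as np

class MemoryReader:
    def __init__(self, keys: np.ndarray, labels: np.ndarray):
        # keys:   (num_train, embed_dim) embeddings of training QA instances
        # labels: (num_train,) binary relevance labels for those instances
        self.keys = keys
        self.labels = labels

    def predict_with_evidence(self, query: np.ndarray, top_k: int = 3):
        # Scaled dot-product attention between the query embedding and memory keys.
        scores = self.keys @ query / np.sqrt(query.shape[-1])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        # Prediction: attention-weighted vote over the stored labels.
        score = float(weights @ self.labels)
        # Evidence: indices of the most-attended training instances, so an error on a
        # validation example can be traced back to specific training memory slots.
        evidence = np.argsort(-weights)[:top_k]
        return score, evidence

# Toy usage with random vectors standing in for learned sentence encodings.
rng = np.random.default_rng(0)
memory = MemoryReader(keys=rng.normal(size=(100, 64)),
                      labels=rng.integers(0, 2, size=100))
score, evidence_ids = memory.predict_with_evidence(rng.normal(size=64))
print(f"relevance score: {score:.3f}, supporting training instances: {evidence_ids}")
```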
