Paper Title
Unification-based Reconstruction of Multi-hop Explanations for Science Questions
Paper Authors
Paper Abstract
This paper presents a novel framework for reconstructing multi-hop explanations in science Question Answering (QA). While existing approaches for multi-hop reasoning build explanations considering each question in isolation, we propose a method that leverages explanatory patterns emerging in a corpus of scientific explanations. Specifically, the framework ranks a set of atomic facts by integrating lexical relevance with the notion of unification power, estimated by analysing explanations for similar questions in the corpus. An extensive evaluation is performed on the WorldTree corpus, integrating k-NN clustering and Information Retrieval (IR) techniques. We present the following conclusions: (1) the proposed method achieves results competitive with Transformers while being orders of magnitude faster, a feature that makes it scalable to large explanatory corpora; (2) the unification-based mechanism plays a key role in reducing semantic drift, contributing to the reconstruction of many-hop explanations (6 or more facts) and the ranking of complex inference facts (+12.0 Mean Average Precision); (3) crucially, the constructed explanations can support downstream QA models, improving the accuracy of BERT by up to 10% overall.
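The ranking scheme the abstract describes, combining lexical relevance with unification power estimated from explanations of k-NN similar questions, can be sketched as follows. This is a minimal illustration only: the set-overlap lexical score, the linear weight `w`, and all function and variable names are assumptions for exposition, not the paper's actual formulation.

```python
from collections import Counter

def rank_facts(question_terms, facts, knn_explanations, w=0.5):
    """Rank atomic facts for a question.

    question_terms: set of terms in the question.
    facts: dict mapping fact id -> set of terms in that fact.
    knn_explanations: lists of fact ids used to explain the
        k most similar questions in the corpus.
    w: illustrative weight balancing the two signals.
    """
    # Unification power: how often each fact recurs in explanations
    # of similar questions (normalised by the most reused fact).
    usage = Counter(f for expl in knn_explanations for f in expl)
    max_usage = max(usage.values()) if usage else 1

    scores = {}
    for fact_id, terms in facts.items():
        # Lexical relevance: fraction of question terms the fact covers.
        lexical = len(question_terms & terms) / max(len(question_terms), 1)
        unification = usage[fact_id] / max_usage
        scores[fact_id] = w * lexical + (1 - w) * unification
    # Return fact ids sorted by combined score, best first.
    return sorted(scores, key=scores.get, reverse=True)
```

With this toy scoring, a fact that is lexically distant from the question can still rank highly if it recurs across explanations of similar questions, which is the intuition behind the unification-based mechanism.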