Title
XTE: Explainable Text Entailment
Authors
Abstract
Text entailment, the task of determining whether a piece of text logically follows from another piece of text, is a key component in NLP, providing input for many semantic applications such as question answering, text summarization, information extraction, and machine translation, among others. Entailment scenarios can range from a simple syntactic variation to more complex semantic relationships between pieces of text, but most approaches try a one-size-fits-all solution that usually favors some scenarios to the detriment of others. Furthermore, for entailments requiring world knowledge, most systems still work as a "black box", providing a yes/no answer that does not explain the underlying reasoning process. In this work, we introduce XTE - Explainable Text Entailment - a novel composite approach for recognizing text entailment which analyzes the entailment pair to decide whether it must be resolved syntactically or semantically. Also, if semantic matching is involved, we make the answer interpretable, using external knowledge bases composed of structured lexical definitions to generate natural language justifications that explain the semantic relationship holding between the pieces of text. Besides outperforming well-established entailment algorithms, our composite approach takes an important step towards Explainable AI, enabling interpretation of the inference model and making the semantic reasoning process explicit and understandable.
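The routing idea described in the abstract — first check whether the pair can be resolved syntactically, and only fall back to knowledge-based semantic matching (which also yields a justification) otherwise — can be sketched as follows. This is a minimal illustration only, not the paper's actual algorithm: the toy `LEXICAL_DEFINITIONS` table, the word-overlap test, and all function names are hypothetical stand-ins for XTE's real syntactic resolver and structured definition knowledge bases.

```python
# Illustrative sketch of a composite entailment pipeline.
# The knowledge base and routing heuristics below are hypothetical,
# far simpler than the structured lexical definitions XTE actually uses.

LEXICAL_DEFINITIONS = {
    # toy knowledge base: word -> (genus/supertype, natural-language gloss)
    "dog": ("animal", "a dog is a domesticated animal"),
    "car": ("vehicle", "a car is a road vehicle"),
}

def route_and_resolve(text, hypothesis):
    """Decide whether an entailment pair is syntactic or semantic,
    and return a (label, justification) pair."""
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    if h_words <= t_words:
        # Syntactic case: the hypothesis is a rewording/subset of the text.
        return ("entailment",
                "syntactic: every hypothesis word appears in the text")
    # Semantic case: a text word entails a hypothesis word when the
    # genus (supertype) in its lexical definition matches that word,
    # and the definition's gloss doubles as the justification.
    for tw in t_words:
        genus, gloss = LEXICAL_DEFINITIONS.get(tw, (None, None))
        if genus in h_words:
            return ("entailment", f"semantic: {gloss}")
    return ("no entailment", "no syntactic or semantic link found")

print(route_and_resolve("a dog barks loudly", "a dog barks"))
print(route_and_resolve("a dog barks", "an animal barks"))
```

The second call shows the payoff of the semantic branch: instead of a bare "yes", the model returns a human-readable justification ("a dog is a domesticated animal") grounded in the definition that licensed the inference.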