Paper Title

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

Paper Authors

Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena Hwang, Ronan Le Bras, Antoine Bosselut, Yejin Choi

Abstract

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future. However, simultaneous incorporation of past and future contexts using generative language models (LMs) can be challenging, as they are trained either to condition only on the past context or to perform narrowly scoped text-infilling. In this paper, we propose DeLorean, a new unsupervised decoding algorithm that can flexibly incorporate both the past and future contexts using only off-the-shelf, left-to-right language models and no supervision. The key intuition of our algorithm is incorporating the future through back-propagation, during which, we only update the internal representation of the output while fixing the model parameters. By alternating between forward and backward propagation, DeLorean can decode the output representation that reflects both the left and right contexts. We demonstrate that our approach is general and applicable to two nonmonotonic reasoning tasks: abductive text generation and counterfactual story revision, where DeLorean outperforms a range of unsupervised and some supervised methods, based on automatic and human evaluation.
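The key mechanism the abstract describes — alternating a forward pass that follows a frozen left-to-right LM with a backward pass that pushes the output's internal representation toward the future context via gradients — can be illustrated on a toy numerical model. Everything below is a hypothetical stand-in, not the paper's implementation: the bigram "LM" `W`, the single soft output slot `y`, and the hyperparameters `mix` and `lr` are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 8                            # toy vocabulary size
W = rng.normal(size=(V, V))      # toy bigram "LM": row v holds next-token logits

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def delorean_sketch(past_token, future_token, steps=50, mix=0.1, lr=0.5):
    """DeLorean-style decoding sketch: optimize the soft logits `y` of one
    output slot while the "LM" parameters W stay frozen, alternating
    forward (follow the LM from the past) and backward (gradient of a
    future-context loss) passes."""
    y = rng.normal(size=V)       # internal representation of the output slot
    losses = []
    for _ in range(steps):
        # Forward pass: blend in the left-to-right LM prediction
        # conditioned on the past context.
        y = (1 - mix) * y + mix * W[past_token]
        # Backward pass: future loss = cross-entropy of the token that
        # the output's soft distribution predicts next.
        p = softmax(y)                    # soft output distribution
        z = p @ W                         # expected logits of the next token
        q = softmax(z)
        losses.append(-np.log(q[future_token]))
        dz = q.copy()
        dz[future_token] -= 1.0           # dL/dz for softmax cross-entropy
        dp = W @ dz                       # chain rule through z = p @ W
        dy = p * dp - p * (p @ dp)        # softmax Jacobian applied to dp
        y -= lr * dy                      # update the representation, not W
    return y, losses
```

The essential point mirrored here is that gradients from the future-context loss flow only into the output representation `y`; the model weights `W` are never touched, which is why off-the-shelf left-to-right LMs suffice.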
