Paper Title
Twist Decoding: Diverse Generators Guide Each Other
Paper Authors
Paper Abstract
Many language generation models are now available for a wide range of generation tasks, including machine translation and summarization. Combining such diverse models may lead to further progress, but ensembling generation models is challenging during inference: conventional ensembling methods (e.g., shallow fusion) require that the models share a vocabulary and tokenization scheme. We introduce Twist decoding, a simple and general text generation algorithm that benefits from diverse models at inference time. Our method does not assume that the models share a vocabulary, a tokenization scheme, or even a generation order. Our extensive evaluations on machine translation and scientific paper summarization demonstrate that Twist decoding substantially outperforms each model decoded in isolation across a wide variety of scenarios, including those where both domain-specific and general-purpose models are available. Twist decoding also consistently outperforms the popular reranking heuristic, in which output candidates from one model are rescored by another. We hope that our work will encourage researchers and practitioners to examine generation models collectively, not just independently, and to seek out models whose strengths complement those of currently available models. Our code is available at https://github.com/jungokasai/twist_decoding.
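To make the reranking baseline mentioned in the abstract concrete, below is a minimal sketch using Hugging Face transformers: one model generates candidate outputs, and a second, independently trained model rescores them. The checkpoints named here are illustrative placeholders, not the paper's exact experimental setup, and this implements only the reranking heuristic, not Twist decoding itself. Because the scorer re-tokenizes plain text, no shared vocabulary or tokenization scheme is required.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Illustrative en-de checkpoints (assumptions, not the paper's setup):
# a Marian model generates candidates, an FSMT model rescores them.
GEN_NAME = "Helsinki-NLP/opus-mt-en-de"
SCORER_NAME = "facebook/wmt19-en-de"

gen_tok = AutoTokenizer.from_pretrained(GEN_NAME)
gen = AutoModelForSeq2SeqLM.from_pretrained(GEN_NAME).eval()
sc_tok = AutoTokenizer.from_pretrained(SCORER_NAME)
scorer = AutoModelForSeq2SeqLM.from_pretrained(SCORER_NAME).eval()

def rerank(source: str, num_candidates: int = 8) -> str:
    """Generate candidates with one model; return the one the scorer prefers."""
    # 1) Beam search with the generator, then decode to plain text so the
    #    scorer can apply its own tokenization (no shared vocabulary needed).
    inputs = gen_tok(source, return_tensors="pt")
    with torch.no_grad():
        outs = gen.generate(**inputs, num_beams=num_candidates,
                            num_return_sequences=num_candidates)
    candidates = gen_tok.batch_decode(outs, skip_special_tokens=True)

    # 2) Rescore each candidate with the second model: higher mean per-token
    #    log-likelihood (i.e., lower cross-entropy loss) wins.
    def score(candidate: str) -> float:
        enc = sc_tok(source, text_target=candidate, return_tensors="pt")
        with torch.no_grad():
            loss = scorer(**enc).loss
        return -loss.item()

    return max(candidates, key=score)

print(rerank("Diverse models can guide each other at inference time."))
```

Note the length normalization built into this scoring: the model's `loss` is the mean cross-entropy per target token, so longer candidates are not automatically penalized. Twist decoding, by contrast, lets the models guide each other during generation rather than only after candidates are complete; see the repository above for the actual algorithm.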