Paper Title
Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks
Paper Authors
Paper Abstract
Recently, there has been significant progress in teaching language models to perform step-by-step reasoning to solve complex numerical reasoning tasks. Chain-of-thoughts prompting (CoT) is by far the state-of-the-art method for these tasks. CoT uses language models to perform both reasoning and computation in the multi-step `thought' process. To disentangle computation from reasoning, we propose `Program of Thoughts' (PoT), which uses language models (mainly Codex) to express the reasoning process as a program. The computation is relegated to an external computer, which executes the generated programs to derive the answer. We evaluate PoT on five math word problem datasets (GSM, AQuA, SVAMP, TabMWP, MultiArith) and three financial-QA datasets (FinQA, ConvFinQA, TATQA) in both few-shot and zero-shot setups. Under both settings, PoT shows an average performance gain over CoT of around 12% across all the evaluated datasets. By combining PoT with self-consistency decoding, we achieve SoTA performance on all math problem datasets and near-SoTA performance on the financial datasets. All of our data and code are released on GitHub: https://github.com/wenhuchen/Program-of-Thoughts
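To make the mechanism concrete, below is a minimal Python sketch of the pipeline the abstract describes: the model is prompted to emit a program rather than free-form reasoning, and an external interpreter executes that program to obtain the answer. The hard-coded GENERATED_PROGRAM, the `ans` answer-variable convention, and the execute_program helper are illustrative assumptions for this sketch, not the paper's exact prompts or code.

# Minimal sketch of Program-of-Thoughts (PoT) execution (illustrative,
# not the paper's implementation). A CoT model would answer the question
# with prose that mixes reasoning and arithmetic; a PoT model instead
# emits executable code, and the computation is delegated to an external
# Python interpreter.

QUESTION = (
    "A robe takes 2 bolts of blue fiber and half that much white fiber. "
    "How many bolts in total does it take?"
)

# Stand-in for the model's output: the reasoning expressed as a program.
# The convention that the final answer lands in `ans` is an assumption.
GENERATED_PROGRAM = """
blue_fiber = 2
white_fiber = blue_fiber / 2
ans = blue_fiber + white_fiber
"""

def execute_program(program: str) -> object:
    """Run the model-generated program and read back the answer variable."""
    namespace: dict = {}
    exec(program, namespace)  # computation happens in the interpreter, not the LM
    return namespace["ans"]

if __name__ == "__main__":
    print(execute_program(GENERATED_PROGRAM))  # -> 3.0

Because the final answer comes from running the program, arithmetic errors that a language model might make mid-reasoning are avoided; self-consistency decoding then simply samples several programs and takes a majority vote over their executed answers.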