SCREWS: A Modular Framework for Reasoning with Revisions

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: We propose SCREWS, a modular framework for reasoning with revisions
Abstract: Large language models (LLMs) can improve their accuracy on various tasks by iteratively refining and revising their output based on feedback. However, these revisions can introduce errors, in which case it is better to roll back to a previous result. Further, revisions are typically homogeneous: the same reasoning method that produced the initial answer is also used for revision, which may fail to correct its errors. We present SCREWS, a modular framework for reasoning with revisions, comprising three main modules: Sampling, Conditional Resampling, and Selection, each consisting of sub-modules that can be hand-selected per task. We apply SCREWS to arithmetic word problems and multi-hop question answering with multiple state-of-the-art LLMs, and find that a heterogeneous mixture of reasoning strategies is beneficial for refinement, and that selection between the original and revised responses is needed to fix errors introduced during refinement.
Paper Type: long
Research Area: Machine Learning for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
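
The abstract's three-module design can be read as a simple pipeline in which Sampling, Conditional Resampling, and Selection are pluggable components. The Python sketch below is illustrative only: the names screws_pipeline, sample, conditionally_resample, and select are hypothetical stand-ins for the sub-modules described in the abstract, not the paper's actual interfaces.

```python
# Minimal, illustrative sketch of the Sampling -> Conditional Resampling -> Selection
# pipeline described in the abstract. All names are hypothetical, not the paper's API.
from typing import Callable, List


def screws_pipeline(
    question: str,
    sample: Callable[[str], str],
    conditionally_resample: Callable[[str, str], str],
    select: Callable[[str, List[str]], str],
) -> str:
    """Run one pass of the three modules on a single question.

    sample:                 produces an initial answer (e.g. one reasoning strategy).
    conditionally_resample: decides whether to revise and may use a different
                            reasoning strategy than the one that produced the sample.
    select:                 picks between the original and revised answers,
                            allowing a roll-back if the revision introduced errors.
    """
    initial = sample(question)
    revised = conditionally_resample(question, initial)
    return select(question, [initial, revised])


# Toy usage with stub sub-modules standing in for LLM calls.
if __name__ == "__main__":
    answer = screws_pipeline(
        question="What is 12 * 7?",
        sample=lambda q: "84",
        conditionally_resample=lambda q, a: a,       # keep the original answer
        select=lambda q, candidates: candidates[0],  # trivially pick the first
    )
    print(answer)  # -> 84
```

Because each argument is just a callable, the sub-modules can be swapped per task, which is the modularity the abstract emphasizes.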