From Answers to Questions: A Study on Backward Reasoning in Large Language Models

ACL ARR 2024 April Submission 207 Authors

15 Apr 2024 (modified: 11 May 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: Multi-step reasoning through Chain-of-Thought (CoT) prompting has been extensively explored, probing the ability of Large Language Models (LLMs) to generate answers from a given question. However, this line of work focuses on forward reasoning, manifested as a series of premises leading to a final solution, leaving backward reasoning, the inference that leads from observations to a causal hypothesis, unexplored. In this paper, we take the reverse perspective and analyze the backward reasoning abilities of LLMs, namely their ability to seek the hypothesis that best fits or explains a set of observations. In particular, we contextualize hypotheses and observations as question-answering (QA) tasks. We propose the Hiding and Blanking approaches, which strategically revise input problem instances, and analyze whether LLMs can reason from the conclusions and recover the original question that leads to the final answer. Using three Multiple-Choice Question and six Math Word Problem QA tasks, (i) we observe a performance gap between the standard and proposed approaches, and (ii) we propose several methods that elicit LLMs to generate the answer by reasoning in the backward direction.
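To make the two revisions concrete, below is a minimal sketch of how such revised instances could be built. The abstract does not specify the templates, so the prompt wording, the assumption that Hiding withholds the question entirely, and the assumption that Blanking masks a single number in the question are illustrative guesses, not the authors' implementation.

```python
# Hypothetical sketch of the Hiding and Blanking revisions applied to a
# Math Word Problem QA instance. All wording and masking choices here are
# assumptions made for illustration.
import re

def hide(answer: str) -> str:
    """Hiding (assumed): withhold the question entirely and ask the model
    to reconstruct a question that yields the given answer."""
    return (
        f"The answer to a question is: {answer}\n"
        "Write a question whose answer is exactly this value."
    )

def blank(question: str, answer: str) -> str:
    """Blanking (assumed): replace one number in the question with a blank
    and ask the model to recover it, given the final answer."""
    blanked = re.sub(r"\d+", "___", question, count=1)
    return (
        f"Question (with a missing number): {blanked}\n"
        f"Final answer: {answer}\n"
        "What number goes in the blank?"
    )

if __name__ == "__main__":
    q = "Tom has 3 apples and buys 5 more. How many apples does he have?"
    print(hide("8"))
    print(blank(q, "8"))
```

On the sample item, `blank` masks only the first operand ("Tom has ___ apples and buys 5 more..."), so recovering it requires inverting the computation from the stated answer, which is the backward direction the paper studies.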
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: LLMs, CoT
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study, Approaches to low-resource settings, Publicly available software and/or pre-trained models
Languages Studied: English
Section 2 Permission To Publish Peer Reviewers Content Agreement: Authors grant permission for ACL to publish peer reviewers' content
Submission Number: 207