From Answers to Questions: A Study on Backward Reasoning in Large Language Models

ACL ARR 2024 June Submission339 Authors

10 Jun 2024 (modified: 02 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Multi-step reasoning through Chain-of-Thought (CoT) prompting has been extensively explored, demonstrating the ability of Large Language Models (LLMs) to generate answers to a given question. However, this line of work focuses on forward reasoning, in which a series of general premises leads to a final solution. Backward reasoning, the inference from observations back to the causal hypothesis, remains unexplored. In this paper, we take the reverse perspective and analyze the backward reasoning abilities of LLMs, that is, their ability to seek the hypothesis that best fits or explains a set of observations. In particular, we instantiate the hypothesis and observations in Question Answering (QA) tasks. To this end, we propose the Hiding and Blanking approaches, which strategically revise the input-problem instances, and analyze whether LLMs can reason from the conclusions and recover the original question that leads to the final answer. Using three Multiple-Choice Question and six Math Word Problem QA tasks, (i) we observe a performance gap between the standard and the proposed approaches; hence, (ii) we propose several methods to elicit LLMs to generate the answer by considering the backward direction.
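The abstract describes revising QA instances so the model must recover the question from the answer. A minimal sketch of what such Hiding- and Blanking-style transforms might look like; the function names, prompt templates, and blank token are illustrative assumptions, not the paper's actual implementation:

```python
def hide_question(instance: dict) -> str:
    """Hiding-style transform (sketch): drop the question entirely,
    keeping only the answer, and ask the model to infer the question."""
    return (
        f"Answer: {instance['answer']}\n"
        "Which question leads to this answer?"
    )

def blank_question(instance: dict, blank: str = "____") -> str:
    """Blanking-style transform (sketch): keep the instance's layout but
    replace the question text with a blank placeholder to be filled in."""
    return (
        f"Question: {blank}\n"
        f"Answer: {instance['answer']}\n"
        "Fill in the blanked question so that it yields the answer above."
    )

# Example backward-reasoning prompts built from a toy QA instance.
instance = {"question": "What is 2 + 2?", "answer": "4"}
hidden_prompt = hide_question(instance)
blanked_prompt = blank_question(instance)
```

Either prompt would then be sent to the LLM, whose output is compared against the held-out original question.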
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: generative reasoning
Contribution Types: NLP engineering experiment, Reproduction study, Approaches to low-resource settings, Approaches to low-compute settings / efficiency, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 339