It's Not Easy Being Wrong: Evaluating Process of Elimination Reasoning in Large Language Models

Nishant Balepur, Shramay Palta, Rachel Rudinger

Published: 01 Jan 2023, Last Modified: 18 Nov 2023. CoRR 2023.
Abstract: Chain-of-thought (COT) prompting can help large language models (LLMs) reason toward correct answers, but its efficacy in reasoning toward incorrect answers is unexplored. Reasoning toward incorrect answers, a strategy known as process of elimination (PoE), has the potential, when used with COT, to enhance interpretability in tasks like medical diagnoses of exclusion. Thus, we propose PoE with COT, a new task where LLMs must reason toward the incorrect options on multiple-choice questions. We evaluate the ability of GPT-3.5, LLaMA-2, and Falcon to perform PoE with COT on 2-choice commonsense and scientific reasoning datasets. We show that PoE consistently underperforms directly choosing the correct answer. The agreement of the two strategies is also lower than the self-consistency of each individual strategy. To investigate these issues further, we conduct an error analysis and offer suggestions for future work.
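To make the task setup concrete, the sketch below contrasts the direct-answer strategy with PoE on a 2-choice question and computes the agreement between the two. It is a minimal illustration, not the paper's exact implementation: the prompt wording, the query_llm stub, and the helper names are all assumptions introduced here.

# Illustrative sketch of the two strategies (not the authors' exact prompts).
# query_llm is a hypothetical stub standing in for GPT-3.5, LLaMA-2, or Falcon.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model API."""
    raise NotImplementedError

def direct_prompt(question: str, options: dict) -> str:
    # Standard strategy: reason step by step toward the CORRECT option.
    opts = "\n".join(f"{label}. {text}" for label, text in options.items())
    return (f"Question: {question}\n{opts}\n"
            "Think step by step, then name the correct option.")

def poe_prompt(question: str, options: dict) -> str:
    # PoE strategy: reason step by step toward the INCORRECT option.
    opts = "\n".join(f"{label}. {text}" for label, text in options.items())
    return (f"Question: {question}\n{opts}\n"
            "Think step by step, then name the incorrect option.")

def poe_answer(eliminated: str, options: dict) -> str:
    # On a 2-choice question, eliminating one option selects the other.
    (remaining,) = [label for label in options if label != eliminated]
    return remaining

def agreement(direct_answers: list, poe_answers: list) -> float:
    # Fraction of questions where the two strategies pick the same option;
    # the paper reports this falls below each strategy's self-consistency.
    matches = sum(d == p for d, p in zip(direct_answers, poe_answers))
    return matches / len(direct_answers)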