Can Large Language Models Reason Robustly with Noisy Rationales?

Published: 05 Mar 2024 · Last Modified: 08 May 2024 · ICLR 2024 R2-FM Workshop Poster · CC BY 4.0
Keywords: Large Language Models, Noisy Rationales, Chain-of-Thought.
TL;DR: We investigate the under-explored robustness of LLMs' reasoning with noisy rationales—irrelevant or inaccurate reasoning steps.
Abstract: This paper investigates an under-explored challenge in large language models (LLMs): their reasoning with noisy rationales—irrelevant or inaccurate reasoning steps—despite advancements in in-context learning and chain-of-thought strategies. We construct the NoRa dataset, specifically designed to evaluate LLMs' robustness to noisy rationales. Based on NoRa, we reveal a widespread vulnerability among LLMs to such noise, which existing robust methods mitigate only to a limited extent.
Submission Number: 56