Chain-of-Thought Unfaithfulness as Disguised Accuracy

TMLR Paper2254 Authors

17 Feb 2024 (modified: 15 May 2024) Under review for TMLR
Abstract: Understanding the extent to which Chain-of-Thought (CoT) generations align with a large language model’s (LLM) internal computations is critical for deciding whether to trust an LLM’s output. As a proxy for CoT faithfulness, Lanham et al. (2023) propose a metric that measures a model’s dependence on its CoT for producing an answer. Within a single family of proprietary models, they find that LLMs exhibit a scaling-then-inverse-scaling relationship between model size and their measure of faithfulness, and that a 13 billion parameter model exhibits increased faithfulness compared to models ranging from 810 million to 175 billion parameters in size. We evaluate whether these results generalize as a property of all LLMs. We replicate the experimental setup of their scaling experiments with three different families of models and, under specific conditions, successfully reproduce the scaling trends for CoT faithfulness they report. However, after normalizing the metric to account for a model’s bias toward certain answer choices, unfaithfulness drops significantly for smaller, less-capable models. This normalized faithfulness metric is also strongly correlated ($R^2 = 0.74$) with accuracy, raising doubts about its validity as a construct for evaluating faithfulness.
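The following is a minimal sketch, not the authors' actual code, of the kind of dependence measure the abstract describes: how often a model's final answer stays the same when its CoT is withheld, plus one illustrative way to account for a model's bias toward certain answer choices via expected chance agreement (the paper's exact normalization may differ). The inputs `answers_with_cot` and `answers_without_cot` are hypothetical lists of the model's chosen option for the same questions, decoded with and without CoT prompting.

```python
from collections import Counter


def same_answer_rate(answers_with_cot, answers_without_cot):
    """Fraction of questions whose answer is unchanged when the CoT is removed.

    Under the proxy metric discussed in the abstract, a high value is read as
    low dependence on the CoT (i.e., more 'unfaithful' reasoning).
    """
    assert len(answers_with_cot) == len(answers_without_cot)
    matches = sum(a == b for a, b in zip(answers_with_cot, answers_without_cot))
    return matches / len(answers_with_cot)


def chance_agreement(answers_with_cot, answers_without_cot):
    """Agreement expected by chance if the two answer distributions were
    independent -- one simple way to model a bias toward certain choices.
    """
    n = len(answers_with_cot)
    with_counts = Counter(answers_with_cot)
    without_counts = Counter(answers_without_cot)
    return sum(with_counts[k] * without_counts.get(k, 0) for k in with_counts) / (n * n)


# Example usage with toy multiple-choice answers:
with_cot = ["A", "B", "C", "A", "D"]
without_cot = ["A", "B", "B", "A", "D"]
observed = same_answer_rate(with_cot, without_cot)
expected = chance_agreement(with_cot, without_cot)
print(f"observed agreement={observed:.2f}, chance agreement={expected:.2f}")
```

Comparing the observed agreement against the chance-level agreement is one way to see why a small, biased model can look "unfaithful" under the raw metric even when the CoT plays no real role, which is the intuition behind the normalization the abstract refers to.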
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Pascal_Poupart2
Submission Number: 2254