Can LLMs Reconcile Knowledge Conflicts in Counterfactual Reasoning?

ICLR 2026 Conference Submission22559 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Counterfactual, Causal, Parametric Knowledge
TL;DR: Our work reveals important limitations in current LLMs' abilities to repurpose parametric knowledge in novel multi-hop counterfactual settings.
Abstract: Large Language Models have been shown to encode extensive world knowledge in their parameters, enabling impressive performance on many knowledge-intensive tasks. However, when deployed in novel settings, LLMs often encounter situations where they must integrate parametric knowledge with new or unfamiliar information. In this work, we explore whether LLMs can combine in-context knowledge with their parametric knowledge through the lens of counterfactual reasoning. Through synthetic and real-world experiments on multi-hop reasoning problems, we show that LLMs generally struggle with counterfactual reasoning, often falling back exclusively on their parametric knowledge. Moreover, we show that simple post-hoc finetuning can struggle to instill counterfactual reasoning ability, often degrading stored parametric knowledge in the process. Ultimately, our work reveals important limitations in current LLMs' abilities to repurpose parametric knowledge in novel settings.
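To make the setting concrete, below is a minimal sketch of what a multi-hop counterfactual query might look like; the specific entities, the prompt wording, and the `build_counterfactual_prompt` helper are illustrative assumptions, not the benchmark or prompts used in the paper.

```python
# Illustrative sketch only: a multi-hop query where one hop is overridden by an
# in-context counterfactual premise and the other hop must come from parametric
# knowledge. Not the paper's actual benchmark.

def build_counterfactual_prompt(premise: str, question: str) -> str:
    """Combine an in-context counterfactual premise with a multi-hop question."""
    return (
        f"Assume the following counterfactual statement is true: {premise}\n"
        "Answer using this assumption together with your world knowledge.\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    prompt = build_counterfactual_prompt(
        premise="The Eiffel Tower is located in Rome.",
        question="In which country is the Eiffel Tower located?",
    )
    print(prompt)
    # A model that ignores the premise and relies only on parametric knowledge
    # answers "France"; reconciling the counterfactual premise with the
    # parametric fact "Rome is in Italy" should yield "Italy".
```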
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 22559