Contextual Effects in LLM and Human Causal Reasoning

Published: 14 Jul 2025, Last Modified: 14 Jul 2025, ICML 2025 World Models Workshop, CC BY 4.0
Keywords: Causal reasoning, LLMs, attention, causal models, world models, situation models
TL;DR: Common attention-based mechanisms predict reasoning patterns in humans and LLMs.
Abstract: What type of knowledge is required to infer the outcomes of everyday actions, such as a glass being more likely to break when it falls onto tile than onto carpet? One possibility is that such inference requires highly robust and general world models. Another possibility is that making such inferences is a much more contextual process, whose success depends on the particulars of the scenario being probed. We evaluate causal inferences in people and LLMs and show that although human accuracy far exceeds that of LLMs, there is a surprising degree of alignment between human and LLM performance. Both show a high degree of specificity: seemingly superficial differences in how causal knowledge is probed matter for both people and LLMs. We then show that prompts eliciting more integrated patterns of attention predict both higher model accuracy and closer alignment with human performance.
Submission Number: 47