Keywords: large language models, illusions of causality, contingency judgment
TL;DR: In this paper, we investigate the extent to which state-of-the-art LLMs exhibit the illusion of causality when faced with a classic cognitive science paradigm: the contingency judgment task.
Abstract: Causal learning is the cognitive process by which people develop the ability to make causal inferences from available information, often guided by normative principles. This process is prone to errors and biases, such as the illusion of causality, in which people perceive a causal relationship between two variables despite lacking supporting evidence. This cognitive bias has been proposed to underlie many societal problems, including social prejudice, stereotype formation, misinformation, and superstitious thinking. In this work, we examine whether large language models are prone to developing causal illusions in null contingency scenarios (in which the available evidence does not support a causal relationship between variables) within medical contexts. To investigate this, we constructed a dataset of 1,000 samples and prompted LLMs to evaluate the effectiveness of potential causes. Our findings show that all evaluated models systematically inferred unwarranted causal relationships, revealing a strong susceptibility to the illusion of causality. Code, data, and analysis scripts are publicly available for reproducibility at: https://anonymous.4open.science/r/CogInterp25-6DB0/README.md
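To make the notion of null contingency concrete: in the contingency judgment literature, the normative index is ΔP = P(outcome | cause) − P(outcome | no cause), and a scenario is null-contingent when ΔP = 0. The sketch below (illustrative only; the cell counts are hypothetical and not taken from the paper's dataset) computes ΔP from a standard 2×2 contingency table.

```python
def delta_p(a: int, b: int, c: int, d: int) -> float:
    """Normative contingency index from a 2x2 table.

    a: cause present, outcome present
    b: cause present, outcome absent
    c: cause absent,  outcome present
    d: cause absent,  outcome absent
    """
    p_outcome_given_cause = a / (a + b)
    p_outcome_given_no_cause = c / (c + d)
    return p_outcome_given_cause - p_outcome_given_no_cause


# Hypothetical null-contingency medical scenario: the recovery rate is
# identical whether or not the treatment was taken, so the evidence
# provides no support for a causal relationship.
print(delta_p(15, 5, 15, 5))  # -> 0.0
```

A judge (human or LLM) who nonetheless rates the treatment as effective in such a scenario exhibits the illusion of causality the paper investigates.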
Submission Number: 36