Are UFOs Driving Innovation? The Illusion of Causality in Large Language Models

Published: 10 Oct 2024, Last Modified: 31 Oct 2024CaLM @NeurIPS 2024 PosterEveryoneRevisionsBibTeXCC BY 4.0
Keywords: Casuality, illusions of causality, large language models
TL;DR: Evaluation of the illusion of causality in large language models through the generation of news headlines.
Abstract: Illusions of causality occur when people develop the belief that there is a causal connection between two variables with no supporting evidence. This cognitive bias has been proposed to underlie many societal problems including social prejudice, stereotype formation, misinformation and superstitious thinking. In this research we investigate whether large language models develop the illusion of causality in real-world settings. We evaluated and compared news headlines generated by GPT-4o-Mini, Claude-3.5-Sonnet, and Gemini-1.5-Pro to determine whether the models incorrectly framed correlations as causal relationships. In order to also measure sycophancy behavior, which occurs when a model aligns with a user’s beliefs in order to look favorable even if it is not objectively correct, we additionally incorporated the bias into the prompts, observing if this manipulation increases the likelihood of the models exhibiting the illusion of causality. We found that Claude is the model that presents the lowest degree of causal illusion aligned with experiments on Correlation-to-Causation Exaggeration in human-written press releases. On the other hand, our findings suggest that while sycophancy mimicry increases the likelihood of causal illusions in these models, especially in ChatGPT, Claude remains the most robust against this cognitive bias.
Submission Number: 21
Loading