Humans are more gullible than LLMs in believing common psychological myths

ACL ARR 2025 May Submission 2484 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Despite widespread debunking, many psychological myths remain deeply entrenched. This paper investigates whether Large Language Models (LLMs) mimic the human tendency toward myth belief and explores methods to mitigate such tendencies. Using 50 popular psychological myths, we evaluate myth belief across multiple LLMs under different prompting strategies, including retrieval-augmented generation (RAG) and swaying prompts. Results show that LLMs exhibit significantly lower myth belief rates than humans, though user prompting can influence responses. RAG proves effective in reducing myth belief and reveals latent debiasing potential within LLMs. Our findings contribute to the emerging field of Machine Psychology and highlight how cognitive science methods can inform the evaluation and development of LLM-based systems.
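The abstract describes an evaluation in which each myth is posed to a model as a true/false probe, with and without retrieved evidence. The sketch below is one minimal way such a probe loop could look; the myth statements, prompt wording, and the `query_model` stub are illustrative assumptions, not the paper's released materials or code.

```python
# Minimal sketch of a myth-belief probe, assuming a generic chat-style LLM API.
# The myth statements, prompt wording, and `query_model` stub are placeholders,
# not the authors' actual materials or implementation.

MYTHS = [
    "Humans only use 10% of their brains.",
    "People are either left-brained or right-brained.",
]

def build_prompt(statement: str, evidence: str | None = None) -> str:
    """Compose a true/false probe, optionally prefixed with retrieved evidence (RAG-style)."""
    context = f"Background evidence:\n{evidence}\n\n" if evidence else ""
    return (
        f"{context}Is the following statement true or false? "
        f"Answer with a single word.\n\nStatement: {statement}"
    )

def query_model(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g. an OpenAI or Hugging Face client)."""
    raise NotImplementedError("Wire this up to the LLM under evaluation.")

def myth_belief_rate(evidence_lookup=None) -> float:
    """Fraction of myths the model endorses as true."""
    endorsed = 0
    for myth in MYTHS:
        evidence = evidence_lookup(myth) if evidence_lookup else None
        answer = query_model(build_prompt(myth, evidence)).strip().lower()
        endorsed += answer.startswith("true")
    return endorsed / len(MYTHS)
```

Comparing `myth_belief_rate()` with and without an `evidence_lookup` retriever would correspond to the plain-prompt versus RAG conditions described in the abstract.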
Paper Type: Short
Research Area: Human-Centered NLP
Research Area Keywords: Ethics Bias and Fairness, Human-Centered NLP, Linguistic Theories, Cognitive Modeling, and Psycholinguistics
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 2484