People will agree with what I think: Investigating LLMs' False Consensus Effect

ACL ARR 2024 June Submission 2484 Authors

15 Jun 2024 (modified: 20 Jul 2024) · ACL ARR 2024 June Submission · Readers: Everyone · CC BY 4.0
Abstract: Large Language Models (LLMs) have recently been widely adopted in interactive systems that require communication. Because false beliefs in a model can harm the usability of such systems, LLMs should not exhibit the cognitive biases that humans have. In particular, psychologists have focused on the False Consensus Effect (FCE), which can disrupt smooth communication by introducing false beliefs. However, previous studies have not examined FCE in LLMs thoroughly; doing so requires more careful consideration of confounding biases, general situations, and prompt changes. Therefore, in this paper, we conduct two studies to examine the FCE phenomenon in LLMs in depth. In Study 1, we investigate whether LLMs exhibit FCE. In Study 2, we explore how various prompting styles affect the demonstration of FCE. These studies show that popular LLMs do exhibit FCE, and the results identify the conditions under which the strength of FCE becomes larger or smaller than in normal usage.
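To make the evaluation setting concrete, the following is a minimal sketch of how one might probe an LLM for FCE: first elicit the model's own stance on a topic, then ask it to estimate how many people share that stance; a systematic overestimate of agreement with its own stance would indicate FCE. This uses an OpenAI-style chat API; the model name, prompts, and topic are illustrative placeholders and are not the prompts or models used in the paper.

```python
# Hypothetical FCE probe sketch; prompts and model choice are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

def probe_fce(topic: str) -> dict:
    """Elicit the model's own choice on a topic, then its estimate of how
    many out of 100 people would make the same choice."""
    own_choice = ask(
        f"Regarding the statement '{topic}', do you personally agree or "
        f"disagree? Answer with one word."
    )
    estimated_agreement = ask(
        f"Out of 100 randomly chosen people, how many do you think would "
        f"'{own_choice}' with the statement '{topic}'? Answer with a number only."
    )
    return {
        "topic": topic,
        "own_choice": own_choice,
        "estimated_agreement": estimated_agreement,
    }

if __name__ == "__main__":
    print(probe_fce("working from home is more productive than working in an office"))
```

Comparing the model's estimated agreement against a ground-truth survey distribution, and repeating the probe under different prompting styles, corresponds to the kind of analysis Studies 1 and 2 describe.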
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: model bias/fairness evaluation, model bias/unfairness mitigation
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 2484