Quantifying Cognitive Bias Induction in LLM-Generated Content

ACL ARR 2025 July Submission 932 Authors

29 Jul 2025 (modified: 19 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Large language models (LLMs) are increasingly integrated into applications ranging from shopping review summarization to medical diagnosis support, where they affect human decisions. Although LLMs often perform well on common evaluation metrics, they may inherit societal or cognitive biases. When humans are exposed to content processed by a biased LLM, for example a summary of a piece of text, any bias the LLM introduces during processing can inadvertently influence the reader. We investigate the extent to which LLMs expose users to biased content. We assess three LLM families on summarization and news fact-checking tasks, evaluating their consistency with the provided context and their tendency to hallucinate. Our findings show that LLMs expose users to content that changes the sentiment of the context in 21.86% of cases, hallucinates on questions about post-knowledge-cutoff data in 60.33% of cases, and highlights context from earlier parts of the prompt (primacy bias) in 5.94% of cases. To alleviate the issue, we evaluate 18 distinct mitigation methods across three LLM families and find that targeted interventions can be effective.
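As a rough illustration of the sentiment-consistency measurement described in the abstract, the sketch below compares the sentiment label of each source text against that of its LLM-generated summary and reports the fraction of cases where the sentiment flips. This is a minimal sketch, not the paper's actual pipeline: the classifier (an off-the-shelf Hugging Face sentiment model), the helper names, and the placeholder data are all assumptions.

```python
# Minimal sketch: estimate how often an LLM summary flips the sentiment of its
# source text. Assumes an off-the-shelf sentiment classifier; the paper's
# actual metric, models, and data are not specified here and may differ.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model


def sentiment_label(text: str) -> str:
    # Truncate to the classifier's maximum input length to avoid errors on long texts.
    return sentiment(text, truncation=True)[0]["label"]


def sentiment_shift_rate(pairs):
    """pairs: list of (source_text, llm_summary) tuples."""
    flips = sum(sentiment_label(src) != sentiment_label(summ) for src, summ in pairs)
    return flips / len(pairs)


# Example usage with placeholder (hypothetical) data.
pairs = [
    ("The product works well and shipping was fast.",
     "A positive review praising the product and delivery."),
    ("The service was slow and the staff was rude.",
     "The reviewer praises the quick, friendly service."),
]
print(f"Sentiment changed in {sentiment_shift_rate(pairs):.2%} of cases")
```

In practice, one would run this over the full set of (context, summary) pairs produced by each model family and report the aggregate flip rate per model.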
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: model bias/fairness evaluation, model bias/unfairness mitigation, ethical considerations in NLP applications, reflections and critiques
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Previous URL: https://openreview.net/forum?id=KdhQcCwxkt
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: No, I want the same area chair from our previous submission (subject to their availability).
Reassignment Request Reviewers: Yes, I want a different set of reviewers
Justification For Not Keeping Action Editor Or Reviewers: The previous review did not offer any specific technical critiques. It also dismissed the value of our contribution without explaining how our approach, metrics, or findings are redundant or lacking. The review also rated the reproducibility and dataset 5, yet gave an overall assessment of 1 without elaboration.
Data: zip
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: 7
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: 4.2
B2 Discuss The License For Artifacts: Yes
B2 Elaboration: 7
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: 4.2
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: Yes
B5 Elaboration: 1,3,4.2, Appendix B
B6 Statistics For Data: Yes
B6 Elaboration: 4.2
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: 7
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Appendix B
C3 Descriptive Statistics: Yes
C3 Elaboration: 5
C4 Parameters For Packages: Yes
C4 Elaboration: Appendix B
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: No
E1 Elaboration: ChatGPT and Grammarly were used only to improve grammar, LaTeX formatting, and writing clarity. Their use did not influence the research content or findings, and therefore we did not include this information in the main paper.
Author Submission Checklist: yes
Submission Number: 932