Nudge LLM-based Multi-Agent Collaboration into Effective Cognitive Bias Mitigation

ACL ARR 2025 February Submission 232 Authors

05 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Cognitive biases stem from the irrationality of human cognition, which is closely intertwined with natural language. Given that large language models (LLMs) are trained on vast amounts of text data, they have also been reported to be susceptible to cognitive biases. Insights from organizational psychology and behavioral economics suggest that strategies such as nudging and playing devil's advocate are effective in mitigating cognitive biases in human societies, and that diversity of thought improves the quality of group decision-making. Inspired by these findings, we design a multi-agent system, NudgeCoR, which combines nudging with collaboration among multiple agents. The results demonstrate that NudgeCoR is highly effective at addressing cognitive biases in both simple and complex decision-making scenarios, with improvements of about 30\% and 50\%, respectively. Ablation studies further confirm the importance of both nudging and diversity of thought among agents. Our work indicates great promise in integrating established insights from other disciplines, such as psychology, into the design of multi-agent systems.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: large language model, multi-agent system, cognitive bias, nudge, diversity
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 232