Exploring Prosocial Irrationality for LLM Agents: A Social Cognition View

08 May 2024 (modified: 06 Nov 2024) · Submitted to NeurIPS 2024 · CC BY-NC 4.0
Keywords: Multi-Large Model Agents, Social Intelligence, Framework, Interpretability
TL;DR: We identify the key attribute behind the social potential of LLM Agents and propose CogMir, a multi-agent framework for assessing and exploiting LLM Agents' social intelligence through cognitive biases, showing that LLM Agents exhibit prosociality.
Abstract: Large language models (LLMs) are known to suffer from hallucination issues because the data they are trained on often contains human bias; whether this bias is reflected in the decision-making processes of LLM Agents remains under-explored. As LLM Agents are increasingly deployed in intricate social environments, a pressing and natural question emerges: Can LLM Agents leverage hallucinations to mirror human cognitive biases, thus exhibiting irrational social intelligence? In this paper, we probe the irrational behavior of contemporary LLM Agents by combining practical social science experiments with theoretical insights. Specifically, we propose CogMir, an open-ended multi-LLM-agent framework that utilizes hallucination properties to assess and enhance LLM Agents' social intelligence through cognitive biases. Experimental results on CogMir subsets show that LLM Agents and humans exhibit high consistency in irrational and prosocial decision-making under uncertain conditions, underscoring the prosociality of LLM Agents as social entities and highlighting the significance of hallucination properties. Additionally, the CogMir framework demonstrates its potential as a valuable platform for encouraging further research into the social intelligence of LLM Agents.
Supplementary Material: zip
Primary Area: Machine learning for social sciences
Flagged For Ethics Review: true
Submission Number: 2935