Assessing Behavioral Alignment of Personality-Driven Generative Agents in Social Dilemma Games

Published: 10 Oct 2024, Last Modified: 31 Oct 2024, NeurIPS 2024 Workshop on Behavioral ML, CC BY 4.0
Keywords: artificial intelligence, generative agents, behavioral alignment, large language models, ultimatum game, prisoner's dilemma, personality
TL;DR: The behaviors of generative agents prompted with "Big Five" personality traits were assessed in social dilemma games, and the outcomes were compared against those of humans with similar personality traits.
Abstract: Proxies of human behavior using large language models (LLMs) have been demonstrated in limited settings where their actions appear plausible. In this study, we examine the variation and fidelity of behaviors observed in LLM agents with respect to the "Big Five" personality traits. Experiments based on two social dilemma games were conducted using LLM agents whose prompts included their personality profile and whether or not the agent could reflect on past rounds of the game. Results indicate that behavioral outcomes can be influenced by stipulating the magnitude of an agent's personality traits. Comparing these results with human studies reveals some degree of behavioral alignment and highlights gaps that stand in the way of accurately emulating human behavior.
Submission Number: 43