Learning to Be Fair: Modeling Fairness Dynamics by Simulating Moral-Based Multi-Agent Resource Allocation
Keywords: fairness, multi-agent simulation, consensus, llm-based agents, morality
TL;DR: We model fairness dynamics by simulating moral-based multi-agent resource allocation with LLM-based cognitive agents.
Abstract: Fairness is a foundational social construct for stable, resilient societies, yet its meaning is dynamic, context-dependent, and inherently subjective. This multifaceted nature reveals a gap between traditional social science and contemporary computational approaches: the former offers rich conceptual accounts but limited computational models, while the latter often relies on static objectives or purely data-driven criteria that overlook the subjective and communicative nature of fairness. We address this gap through a computational framework and two resource-allocation scenarios in which large language model (LLM)-based cognitive agents operate with heterogeneous roles, relationships, and moral commitments. The framework supports agent reflection and negotiation via explicit, language-based feedback, enabling the study of how fairness norms evolve and consensus forms in multi-agent social systems. Using standard objective metrics from resource allocation, we demonstrate that our approach captures key complexities of fairness, such as ambiguity, procedural justice, and subjective satisfaction, while remaining quantitatively evaluable. This work also offers actionable insights for designing tractable AI systems that can navigate evolving social norms in dynamic, multi-stakeholder environments.
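As an illustration of the kind of standard objective metrics from resource allocation the abstract refers to, the sketch below computes two common ones, Jain's fairness index and the Gini coefficient. The choice of these particular metrics and the function names are assumptions for illustration, not a description of the paper's actual evaluation.

```python
# Illustrative sketch only: two standard objective fairness metrics commonly
# used to evaluate resource allocations. The paper's actual metric choices
# are not specified in the abstract.

def jains_index(allocations: list[float]) -> float:
    """Jain's fairness index: 1.0 when all agents receive equal shares,
    approaching 1/n as the allocation becomes maximally concentrated."""
    n = len(allocations)
    total = sum(allocations)
    sum_sq = sum(x * x for x in allocations)
    return (total * total) / (n * sum_sq) if sum_sq > 0 else 1.0

def gini_coefficient(allocations: list[float]) -> float:
    """Gini coefficient: 0.0 for perfect equality, approaching 1.0
    for maximal inequality."""
    n = len(allocations)
    total = sum(allocations)
    if n == 0 or total == 0:
        return 0.0
    # Mean absolute difference over all ordered pairs, normalized by twice the mean.
    mad = sum(abs(x - y) for x in allocations for y in allocations) / (n * n)
    return mad / (2 * total / n)

# Example: an uneven split among four agents.
shares = [4.0, 3.0, 2.0, 1.0]
print(jains_index(shares))       # ~0.833
print(gini_coefficient(shares))  # 0.25
```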
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 4053