Maturity Expectation Bias in Multi-User LLM Mediation

AAAI 2026 Workshop AIGOV Submission 15

17 Oct 2025 (modified: 26 Nov 2025) · CC BY 4.0
Keywords: Multi-User Interaction, AI Mediation, Ethical Asymmetry, Vulnerable User Protection, Disciplinary Design in LLMs, Multilingual Family Dynamics
TL;DR: Maturity Expectation Bias occurs when LLM systems implicitly assign greater moral responsibility to cognitively older users, regardless of actual emotional capacity.
Abstract: Trust in Large Language Models (LLMs) hinges on balancing empathy and fairness, yet multi-user conflicts expose persistent ethical asymmetries in accountability. This paper analyzes how Claude, GPT, and Grok mediate sibling disputes in age-asymmetric, multilingual family settings. Through a quasi-natural triadic scenario, we identify Maturity Expectation Bias (MEB) as a systematic fairness violation, in which the model implicitly assigns greater moral responsibility to the older user regardless of actual emotional capacity. While prompt-level interventions suppressed MEB, they revealed a deeper Disciplinary Asymmetry, manifesting as compensatory over-discipline (Claude), permissive avoidance (Grok), and unstable intervention (GPT). These findings suggest that current LLM architectures struggle to balance empathy with disciplinary equity, motivating the ANHA framework: a mediation design that is emotionally responsive yet normatively grounded.
Submission Number: 15