RoleCDE: Benchmarking and Mitigating Role–Alignment Trade-offs in Role-Playing Agents

ACL ARR 2026 January Submission3221 Authors

04 Jan 2026 (modified: 20 Mar 2026), ACL ARR 2026 January Submission, CC BY 4.0
Keywords: Large Language Models, Role Playing, Value Evaluation, Value Alignment
Abstract: Role-playing agents (RPAs) are widely used to steer large language models (LLMs) toward role-consistent behavior, yet existing benchmarks mainly evaluate surface-level fidelity and offer limited insight into decision making under role–alignment value conflicts. To address this gap, we introduce \textbf{RoleCDE}, the first benchmark designed to evaluate RPAs under structured conflicts between role-specific values and alignment-oriented constraints. RoleCDE formulates role-aware decision making as cognitive dilemma scenarios, jointly evaluating role–scenario grounding, value conflict resolution, and decision tendencies. The benchmark is constructed at scale, covering approximately 8k diverse role profiles and scenarios and nearly 240k dilemma instances across three difficulty levels and eight role categories. Evaluation of several mainstream LLMs reveals a "Role Value Decoupling" phenomenon: when role-specific values conflict with alignment constraints, agents systematically default to alignment- and morality-consistent decisions rather than role-specific values, even under explicit role conditioning. This behavior is largely invariant to dilemma difficulty but varies substantially across role categories. We further show that RoleCDE-based fine-tuning effectively mitigates this decoupling by improving value trade-off reasoning, while preserving general role-playing fidelity and general reasoning performance. Code is available at: \url{https://anonymous.4open.science/r/RoleCDE/}.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, evaluation, evaluation methodologies
Contribution Types: Data resources, Data analysis
Languages Studied: English
Submission Number: 3221