Keywords: social interaction, reinforcement learning
Abstract: Social intelligence has become a critical capability for large language models (LLMs), enabling them to engage effectively in real-world social tasks such as collaboration and negotiation. Reinforcement learning (RL) is a natural fit for training socially intelligent agents because it allows models to learn sophisticated strategies directly through social interactions, without requiring human annotations. However, social intelligence tasks have two distinctive properties: (1) the quality of an individual utterance is only loosely related to the final outcome of the interaction; (2) success in social interaction must be judged along multiple dimensions. We therefore argue that RL training for social intelligence requires utterance-level, multi-dimensional reward models. To address these challenges, we propose Sotopia-RL, a novel framework that refines coarse episode-level feedback into utterance-level, multi-dimensional rewards. Utterance-level credit assignment attributes outcomes to individual utterances, while multi-dimensional rewards capture the full richness of social interactions and reduce reward hacking.
Experiments in Sotopia, an open-ended social learning environment, demonstrate that Sotopia-RL achieves state-of-the-art social goal completion scores (7.17 on Sotopia-hard and 8.31 on Sotopia-all), significantly outperforming existing approaches. Ablation studies confirm the necessity of both utterance-level credit assignment and multi-dimensional reward design for RL training.
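To make the reward design concrete, the sketch below shows one way utterance-level, multi-dimensional feedback could be collapsed into a scalar RL reward. This is only an illustration of the general idea stated in the abstract: the dimension names (`goal`, `relationship`, `knowledge`), the weights, and the linear combination are all assumptions, not the paper's actual reward model.

```python
# Hypothetical sketch: combine per-dimension scores for ONE utterance
# into a scalar reward usable by an RL trainer. The dimensions and
# weights here are illustrative assumptions, not Sotopia-RL's design.

def utterance_reward(scores: dict, weights: dict) -> float:
    """Weighted sum over reward dimensions for a single utterance."""
    return sum(weights[dim] * scores[dim] for dim in weights)

# Example: an utterance annotated along three illustrative dimensions.
scores = {"goal": 0.8, "relationship": 0.5, "knowledge": 0.2}
weights = {"goal": 0.6, "relationship": 0.3, "knowledge": 0.1}
reward = utterance_reward(scores, weights)
print(round(reward, 2))  # 0.8*0.6 + 0.5*0.3 + 0.2*0.1 = 0.65
```

Because each utterance receives its own reward rather than sharing a single episode-level score, credit is assigned locally; the multiple dimensions make it harder for a policy to game any single criterion.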
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 14787