Offline Reinforcement Learning of High-Quality Behaviors Under Robust Style Alignment

ICLR 2026 Conference Submission 20108 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Reinforcement Learning, Diversity in RL, Offline RL
TL;DR: This paper proposes a new reinforcement learning method that uses explicit style supervision via subtrajectory labeling functions to optimize task performance efficiently while preserving style alignment.
Abstract: We study offline reinforcement learning of style-conditioned policies using explicit style supervision via subtrajectory labeling functions. In this setting, aligning style with high task performance is particularly challenging due to distribution shift and inherent conflicts between style and reward. Existing methods, despite introducing numerous definitions of style, often fail to reconcile these objectives effectively. To address these challenges, we propose a unified definition of behavior style and instantiate it into a practical framework. Building on this, we introduce Style-Conditioned Implicit Q-Learning (SCIQL), which leverages offline goal-conditioned reinforcement learning techniques, such as hindsight relabeling and value learning, and combines them with a new Gated Advantage Weighted Regression mechanism to efficiently optimize task performance while preserving style alignment. Experiments demonstrate that SCIQL achieves superior performance on both objectives compared to prior offline methods.
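To make the described mechanism concrete, the following is a minimal, hypothetical sketch of what a style-gated advantage-weighted regression update could look like in PyTorch. All names here (`policy`, `q_net`, `v_net`, `style_match_score`, `style_threshold`, `beta`) are illustrative assumptions for this sketch, not the authors' actual interface or implementation.

```python
import torch


def gated_awr_loss(policy, q_net, v_net, batch, beta=3.0, style_threshold=0.5):
    """Hypothetical sketch: advantage-weighted regression gated by style match.

    Assumes IQL-style critics conditioned on a target style vector and a
    precomputed per-transition style-match score in [0, 1] (e.g., from a
    subtrajectory labeling function after hindsight relabeling).
    """
    obs, actions, styles = batch["obs"], batch["actions"], batch["styles"]

    with torch.no_grad():
        # Advantage of the logged action under the style-conditioned critics.
        adv = q_net(obs, actions, styles) - v_net(obs, styles)
        # Standard AWR weight: exponentiated advantage, clipped for stability.
        weight = torch.clamp(torch.exp(adv / beta), max=100.0)
        # Gate: only imitate transitions whose relabeled style is close enough
        # to the conditioning style (assumed precomputed score).
        gate = (batch["style_match_score"] >= style_threshold).float()

    # Gated, advantage-weighted behavior cloning objective.
    log_prob = policy.log_prob(obs, styles, actions)
    return -(gate * weight * log_prob).mean()
```

Under these assumptions, the gate suppresses imitation of high-advantage actions whose style conflicts with the conditioning label, which is one plausible way to trade off task performance against style alignment.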
Primary Area: reinforcement learning
Submission Number: 20108