Causally Robust Reward Learning from Reason-Augmented Preference Feedback

Published: 26 Jan 2026 · Last Modified: 05 May 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Preference-based learning, causal confusion, learning from human feedback, reward modeling
TL;DR: We develop a framework that utilizes natural language rationales to mitigate causal confusion in preference learning.
Abstract: Preference-based reward learning is widely used to shape agent behavior to match a user's preferences, yet its sparse binary feedback makes it especially vulnerable to causal confusion. The learned reward often latches onto spurious features that merely co-occur with preferred trajectories during training, collapsing when those correlations disappear or reverse at test time. We introduce ReCouPLe, a lightweight framework that uses natural language rationales to provide the missing causal signal. Each rationale is treated as a guiding projection axis in an embedding space: the model is trained to score trajectories by the features aligned with that axis while de-emphasizing context unrelated to the stated reason. Because the same rationales (e.g., "_avoids collisions_", "_completes the task faster_") recur across tasks, ReCouPLe naturally reuses the same causal direction whenever tasks share semantics and transfers preference knowledge to novel tasks without extra data or language-model fine-tuning. The learned reward model grounds preferences in the articulated reason, aligning better with user intent and generalizing beyond spurious features. ReCouPLe outperforms baselines by up to 1.5x in reward accuracy under distribution shift and by up to 2x in downstream policy performance on novel tasks. We have released our code at [https://github.com/mj-hwang/ReCouPLe](https://github.com/mj-hwang/ReCouPLe).
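The rationale-as-projection-axis idea can be made concrete in a few lines. Below is a minimal PyTorch sketch, not the authors' released implementation (see the repository linked above): the names `RationaleProjectedReward` and `preference_loss` are hypothetical, the rationale is assumed to arrive as a fixed embedding from an off-the-shelf sentence encoder, and the Bradley-Terry pairwise loss is a standard choice in preference learning rather than a detail stated in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RationaleProjectedReward(nn.Module):
    """Scores a trajectory by projecting its embedding onto the
    rationale's embedding direction, so features unrelated to the
    stated reason contribute little to the reward. (Hypothetical
    sketch of the projection idea, not the official ReCouPLe code.)"""

    def __init__(self, traj_dim: int, text_dim: int, hidden_dim: int = 256):
        super().__init__()
        # Map trajectory and rationale embeddings into a shared space.
        self.traj_encoder = nn.Sequential(
            nn.Linear(traj_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.text_encoder = nn.Linear(text_dim, hidden_dim)

    def forward(self, traj_emb: torch.Tensor, rationale_emb: torch.Tensor) -> torch.Tensor:
        z = self.traj_encoder(traj_emb)                               # (B, H)
        axis = F.normalize(self.text_encoder(rationale_emb), dim=-1)  # (B, H), unit norm
        # Reward = component of the trajectory embedding along the
        # rationale axis; orthogonal (spurious) components are ignored.
        return (z * axis).sum(dim=-1)                                 # (B,)


def preference_loss(model, traj_pref, traj_rej, rationale_emb):
    """Bradley-Terry pairwise loss: the preferred trajectory should
    score higher along the rationale direction than the rejected one."""
    r_pref = model(traj_pref, rationale_emb)
    r_rej = model(traj_rej, rationale_emb)
    return -F.logsigmoid(r_pref - r_rej).mean()
```

Because the score is an inner product with a unit-norm rationale axis, embedding components orthogonal to the rationale (the spurious context) cannot affect the reward ordering, and the same axis can be reused for any task whose rationale has the same semantics.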
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 21131