Keywords: Behavioral Cloning, Preference-Based Reinforcement Learning, Reinforcement Learning
TL;DR: We provide theoretical guarantees and experimental results for a novel two-stage reinforcement learning method that first learns an optimal policy estimate from an offline expert dataset, and then refines the estimate online via preference-based human feedback.
Abstract: Deploying reinforcement learning (RL) in robotics, industry, and health care is blocked by two obstacles: the difficulty of specifying accurate rewards and the risk of unsafe, data-hungry exploration. We address this by proposing a two-stage framework that first learns a safe initial policy from a reward-free dataset of expert demonstrations, then fine-tunes it online using preference-based human feedback. We provide the first principled analysis of this offline-to-online approach and introduce BRIDGE, a unified algorithm that integrates both signals via an uncertainty-weighted objective. We derive regret bounds that shrink with the number of offline demonstrations, explicitly connecting the quantity of offline data to online sample efficiency. We validate BRIDGE in discrete and continuous control MuJoCo environments, showing it achieves lower regret than both standalone behavioral cloning and online preference-based RL. Our work establishes a theoretical foundation for designing more sample-efficient interactive agents.
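The abstract describes BRIDGE only at a high level (offline behavioral cloning followed by online preference-based fine-tuning through an uncertainty-weighted objective). The sketch below is a hypothetical illustration of that two-stage structure, not the authors' implementation: the network shapes, the Bradley-Terry preference loss, the ensemble-disagreement uncertainty proxy, and the specific weighting rule are all assumptions made for the example.

```python
# Hypothetical sketch (NOT the authors' BRIDGE implementation): stage 1 behavior-clones
# a policy from offline expert data; stage 2 fine-tunes it with a reward model learned
# from pairwise preferences, mixing both signals with an uncertainty-dependent weight.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 8, 4  # placeholder dimensions

class Policy(nn.Module):
    """Categorical policy over discrete actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, ACTION_DIM))
    def dist(self, s):
        return torch.distributions.Categorical(logits=self.net(s))

class RewardModel(nn.Module):
    """Small ensemble of reward heads; disagreement serves as an uncertainty proxy."""
    def __init__(self, n_heads=5):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
                          nn.Linear(64, 1)) for _ in range(n_heads))
    def forward(self, s, a_onehot):
        x = torch.cat([s, a_onehot], dim=-1)
        return torch.stack([h(x).squeeze(-1) for h in self.heads])  # (heads, batch)

def bc_loss(policy, expert_states, expert_actions):
    """Stage 1: negative log-likelihood of expert actions (behavioral cloning)."""
    return -policy.dist(expert_states).log_prob(expert_actions).mean()

def preference_loss(reward_model, seg_a, seg_b, prefer_a):
    """Bradley-Terry loss on segment returns: P(a preferred) = sigmoid(R(a) - R(b))."""
    r_a = reward_model(*seg_a).mean(0).sum()   # mean over ensemble, sum over segment
    r_b = reward_model(*seg_b).mean(0).sum()
    target = torch.tensor(1.0 if prefer_a else 0.0)
    return F.binary_cross_entropy_with_logits(r_a - r_b, target)

def fine_tune_step(policy, reward_model, states, actions, expert_states, expert_actions):
    """Stage 2: uncertainty-weighted mix of a learned-reward policy gradient and a BC anchor."""
    a_onehot = F.one_hot(actions, ACTION_DIM).float()
    r = reward_model(states, a_onehot)          # (heads, batch)
    uncertainty = r.std(0).mean().detach()      # ensemble disagreement
    w = 1.0 / (1.0 + uncertainty)               # trust the learned reward more when it is certain
    log_prob = policy.dist(states).log_prob(actions)
    pg_loss = -(log_prob * r.mean(0).detach()).mean()   # REINFORCE-style surrogate
    return w * pg_loss + (1 - w) * bc_loss(policy, expert_states, expert_actions)
```

In this sketch the offline demonstrations keep anchoring the policy during online fine-tuning, while the weight shifts toward the preference-learned reward as the ensemble's uncertainty shrinks; the paper's actual objective, uncertainty measure, and regret analysis should be taken from the paper itself.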
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 18591