Abstract: Offline Reinforcement Learning (RL) aims to learn effective policies from a static dataset without requiring further agent-environment interaction. However, its practical adoption is often hindered by the need for explicit reward annotations, which can be costly to engineer or difficult to obtain retrospectively. To address this, we propose ReLOAD (Reinforcement Learning with Offline Reward Annotation via Distillation), a novel reward annotation framework for offline RL. Unlike existing methods that depend on complex alignment procedures, our approach adapts Random Network Distillation (RND) to generate intrinsic rewards from expert demonstrations using a simple yet effective embedding-discrepancy measure. First, we train a predictor network to mimic the embeddings of a fixed target network on expert state transitions. The prediction error between the two networks then serves as a reward signal for each transition in the static dataset. This mechanism provides a structured reward signal without requiring handcrafted reward annotations. We also provide a formal theoretical construct that explains how RND prediction errors serve as effective intrinsic rewards by distinguishing expert-like transitions. Experiments on the D4RL benchmark demonstrate that ReLOAD enables robust offline policy learning and achieves performance competitive with traditional reward-annotated methods.
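For concreteness, a minimal sketch of the annotation step described above, assuming PyTorch, simple MLP embedding networks, and an exp(-error) mapping from prediction error to reward (none of which are specified in the abstract), could look like the following:

```python
# Hedged sketch of RND-based reward annotation: network sizes, training
# schedule, and the error-to-reward mapping are illustrative assumptions.
import torch
import torch.nn as nn


def make_mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


class RNDAnnotator:
    def __init__(self, transition_dim, embed_dim=128, lr=1e-4):
        # Fixed, randomly initialized target network (never trained).
        self.target = make_mlp(transition_dim, embed_dim)
        for p in self.target.parameters():
            p.requires_grad_(False)
        # Predictor network trained to mimic the target on expert data.
        self.predictor = make_mlp(transition_dim, embed_dim)
        self.opt = torch.optim.Adam(self.predictor.parameters(), lr=lr)

    def fit(self, expert_transitions, epochs=100, batch_size=256):
        """Train the predictor on expert state transitions,
        e.g. concatenated (s, s') pairs as a float tensor."""
        loader = torch.utils.data.DataLoader(
            expert_transitions, batch_size=batch_size, shuffle=True)
        for _ in range(epochs):
            for x in loader:
                loss = (self.predictor(x) - self.target(x)).pow(2).mean()
                self.opt.zero_grad()
                loss.backward()
                self.opt.step()

    @torch.no_grad()
    def annotate(self, transitions):
        """Label each transition in the static dataset with a reward.
        Expert-like transitions yield low prediction error; here that
        is mapped to a high reward via exp(-error) (an assumed choice)."""
        err = (self.predictor(transitions) - self.target(transitions)).pow(2).mean(dim=-1)
        return torch.exp(-err)
```

The annotated rewards would then replace the missing reward column of the offline dataset before running any standard offline RL algorithm on it.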
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Romain_Laroche1
Submission Number: 4773