Keywords: Reinforcement Learning, Reward Design, Large Language Model
TL;DR: ORSO is an algorithm that automatically generates and efficiently selects effective dense reward functions for reinforcement learning via online model selection.
Abstract: Reinforcement learning (RL) algorithms require carefully designed shaped reward functions to learn effective policies, especially in environments with sparse task rewards. However, manually designing a suitably shaped reward function is challenging and often requires extensive domain knowledge and trial and error. Current methods for automating reward design can be prohibitively time-consuming. In this paper, we cast the reward design process as an online model selection problem and propose ORSO (Online Reward Selection and Policy Optimization), a novel algorithm for efficiently designing shaped reward functions. By building on online model selection algorithms that are provably efficient, ORSO inherits their guarantees and can identify effective reward functions quickly. We provide regret guarantees for ORSO and demonstrate its effectiveness on several continuous control benchmarks. Compared to prior methods, ORSO is more sample-efficient, consistently finds high-quality dense reward functions, and achieves performance similar to hand-engineered rewards created by domain experts.
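To make the "reward design as online model selection" framing concrete, below is a minimal illustrative sketch (not the authors' exact ORSO algorithm): each candidate shaped reward function is treated as an arm, and a UCB1-style selector allocates training iterations among candidates based on the sparse task reward their policies achieve. The function names `train_step` and `eval_task_reward` are hypothetical placeholders for a user-supplied policy-update step and task-reward evaluation.

```python
import math

def online_reward_selection(reward_fns, train_step, eval_task_reward, total_iters=100):
    """Illustrative sketch of reward design as online model selection
    (UCB1-style bandit; an assumption, not the paper's exact method).

    reward_fns:       list of candidate shaped reward functions
    train_step:       (policy, reward_fn) -> updated policy (user-supplied)
    eval_task_reward: policy -> scalar sparse task reward (user-supplied)
    """
    n = len(reward_fns)
    counts = [0] * n            # how often each candidate was trained
    values = [0.0] * n          # running mean task reward per candidate
    policies = [None] * n       # one policy trained under each candidate

    for t in range(1, total_iters + 1):
        # Pick the candidate with the highest upper confidence bound;
        # untried candidates get priority (infinite bound).
        ucb = [
            float("inf") if counts[i] == 0
            else values[i] + math.sqrt(2 * math.log(t) / counts[i])
            for i in range(n)
        ]
        i = max(range(n), key=lambda k: ucb[k])

        # Train the selected policy under its shaped reward, then score it
        # on the true (sparse) task reward.
        policies[i] = train_step(policies[i], reward_fns[i])
        r = eval_task_reward(policies[i])
        counts[i] += 1
        values[i] += (r - values[i]) / counts[i]

    best = max(range(n), key=lambda k: values[k])
    return reward_fns[best], policies[best]
```

The key efficiency property this sketch captures is that training budget concentrates on promising reward functions instead of fully training a separate policy for every candidate.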
Submission Number: 112