Towards User-Interactive Offline Reinforcement Learning

05 Oct 2022 (modified: 05 May 2023) · Offline RL Workshop, NeurIPS 2022
Keywords: Offline RL, Reinforcement Learning, User, Model-based, Adaptive
TL;DR: Offline RL policies need to be adaptive after training so that users can alter their behavior to suit their needs.
Abstract: Offline reinforcement learning algorithms are still not fully trusted by practitioners due to the risk that the learned policy performs worse than the original policy that generated the dataset, or behaves in unexpected ways that are unfamiliar to the user. At the same time, offline RL algorithms are not able to tune their arguably most important hyperparameter: the proximity of the learned policy to the original policy. We propose an algorithm that allows the user to tune this hyperparameter at runtime, thereby addressing both of these issues simultaneously.
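To make the core idea concrete, here is a minimal sketch of one way a runtime-tunable proximity hyperparameter could work: a policy network that takes a user-set proximity coefficient `beta` as an extra input, so the trade-off between staying close to the dataset policy and maximizing learned returns can be changed at deployment without retraining. This is an illustrative assumption on our part (PyTorch, and the class name `ProximityConditionedPolicy` and the conditioning scheme are hypothetical), not the paper's actual algorithm.

```python
import torch
import torch.nn as nn


class ProximityConditionedPolicy(nn.Module):
    """Gaussian policy conditioned on a user-set proximity coefficient.

    Hypothetical sketch: beta close to 1.0 means "stay near the behavior
    (dataset) policy"; beta close to 0.0 means "favor return maximization".
    Because beta is an input to the network rather than a fixed training
    hyperparameter, the user can adjust it at runtime.
    """

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs: torch.Tensor, beta: float) -> torch.distributions.Normal:
        # Append the scalar proximity coefficient to every observation.
        b = torch.full((obs.shape[0], 1), beta, device=obs.device)
        h = self.net(torch.cat([obs, b], dim=-1))
        return torch.distributions.Normal(self.mean(h), self.log_std(h).exp())


# At deployment, the user dials beta up or down to trade off familiarity
# (closeness to the dataset policy) against learned performance.
policy = ProximityConditionedPolicy(obs_dim=17, act_dim=6)
obs = torch.randn(1, 17)
cautious_action = policy(obs, beta=0.9).sample()    # near the behavior policy
aggressive_action = policy(obs, beta=0.1).sample()  # favors learned returns
```

Under this assumed design, training would have to expose the policy to a range of beta values so that the conditioning is meaningful at test time; the key property is simply that no retraining is needed when the user changes the coefficient.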