DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning

Anonymous

Sep 29, 2021 (edited Oct 05, 2021), ICLR 2022 Conference Blind Submission
  • Abstract: Offline reinforcement learning algorithms promise to be applicable in settings where a fixed dataset is available and no new experience can be acquired. However, such a formulation is inevitably offline-data-hungry, and, in practice, collecting a large offline dataset for one specific task in one specific environment is costly and laborious. In this paper, we therefore 1) formulate offline dynamics adaptation, which uses (source) offline data collected under different dynamics to relax the requirement for extensive (target) offline data, 2) characterize the dynamics shift problem, on which prior offline methods do not scale well, and 3) derive a simple dynamics-aware reward augmentation (DARA) framework for both model-free and model-based offline settings. Specifically, DARA emphasizes learning from those source transition pairs that are adaptive for the target environment, and it mitigates the offline dynamics shift by characterizing state-action-next-state pairs instead of the state-action distributions typically used by prior offline RL methods. The experimental evaluation demonstrates that DARA, by augmenting the reward in the source offline dataset, can acquire an adaptive policy for the target environment while significantly reducing the amount of target offline data required. With only modest amounts of target offline data, our method consistently outperforms prior offline RL methods in both simulated and real-world tasks.
  • Supplementary Material: zip
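The reward-augmentation idea described in the abstract, penalizing source transitions that are implausible under the target dynamics, can be sketched concisely. The snippet below is a minimal illustration rather than the paper's implementation: it assumes a DARC-style estimator in which two binary domain classifiers, one conditioned on (s, a, s') and one on (s, a), provide the log-ratio log p_target(s'|s,a) - log p_source(s'|s,a); the function name, classifier inputs, and the coefficient eta are all illustrative assumptions.

```python
import numpy as np

def dara_style_reward_augmentation(rewards, p_sas_is_target, p_sa_is_target, eta=1.0):
    """Return dynamics-aware augmented rewards for a batch of source transitions.

    Illustrative assumption (DARC-style estimator): two binary classifiers predict
    the probability that a sample came from the *target* domain, one conditioned on
    (s, a, s') and one on (s, a). The difference of their log-odds approximates
    log p_target(s'|s,a) - log p_source(s'|s,a), i.e. how plausible the source
    transition is under the target dynamics.

    rewards          : (N,) rewards stored in the source offline dataset
    p_sas_is_target  : (N,) classifier output P(target | s, a, s'), in (0, 1)
    p_sa_is_target   : (N,) classifier output P(target | s, a),     in (0, 1)
    eta              : trade-off coefficient for the dynamics penalty
    """
    eps = 1e-6  # avoid log(0) for saturated classifier outputs
    p_sas = np.clip(p_sas_is_target, eps, 1.0 - eps)
    p_sa = np.clip(p_sa_is_target, eps, 1.0 - eps)

    # Log-odds difference: estimated log p_target(s'|s,a) - log p_source(s'|s,a).
    delta_r = (np.log(p_sas) - np.log(1.0 - p_sas)) - (np.log(p_sa) - np.log(1.0 - p_sa))

    # Transitions unlikely under the target dynamics get their reward reduced, so the
    # downstream offline RL algorithm prefers target-consistent source data.
    return rewards + eta * delta_r


# Toy usage with hypothetical classifier outputs: target-consistent transitions keep
# (or gain) reward, while implausible ones are penalized.
if __name__ == "__main__":
    r = np.array([1.0, 1.0, 1.0])
    p_sas = np.array([0.7, 0.5, 0.1])
    p_sa = np.array([0.5, 0.5, 0.5])
    print(dara_style_reward_augmentation(r, p_sas, p_sa, eta=0.1))
```

The augmented rewards are then used in place of the stored rewards when training any standard offline RL algorithm on the combined source and target data.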