Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL

12 Oct 2021 (modified: 05 May 2023) · Deep RL Workshop, NeurIPS 2021
Keywords: offline reinforcement learning, model-based reinforcement learning, behavioral priors
TL;DR: We present a new offline RL algorithm that combines adaptive behavioral priors with dynamics models to achieve better generalization and robustness than both leading model-based and model-free offline RL baselines on the D4RL benchmark.
Abstract: Offline Reinforcement Learning (RL) aims to extract near-optimal policies from imperfect offline data without additional environment interactions. Extracting policies from diverse offline datasets has the potential to expand the range of applicability of RL by making the training process safer, faster, and more streamlined. We investigate how to improve the performance of offline RL algorithms, their robustness to the quality of offline data, and their generalization capabilities. To this end, we introduce Offline Model-based RL with Adaptive Behavioral Priors (MABE). Our algorithm is based on the finding that dynamics models, which support within-domain generalization, and behavioral priors, which support cross-domain generalization, are complementary. When combined, they substantially improve the performance and generalization of offline RL policies. On the widely studied D4RL offline RL benchmark, we find that MABE achieves higher average performance than prior model-free and model-based algorithms. In experiments that require cross-domain generalization, we find that MABE outperforms prior methods.
Supplementary Material: zip
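A minimal sketch of one plausible reading of the idea in the abstract: during model-based offline rollouts, shape the reward with a behavioral-prior bonus (keeping actions close to the offline data) and a dynamics-ensemble uncertainty penalty (limiting model exploitation). This is not the authors' code; names such as `GaussianBehaviorPrior`, `shaped_reward`, `prior_weight`, and `penalty_weight` are illustrative assumptions, not the paper's actual interface.

```python
# Illustrative sketch only: combine a learned behavioral prior with a
# dynamics-model ensemble when shaping rewards for offline model-based RL.
import numpy as np


class GaussianBehaviorPrior:
    """Toy stand-in for a behavioral prior fit to the offline dataset."""

    def __init__(self, mean, std):
        self.mean, self.std = np.asarray(mean), np.asarray(std)

    def log_prob(self, action):
        z = (np.asarray(action) - self.mean) / self.std
        return float(-0.5 * np.sum(z ** 2 + np.log(2 * np.pi * self.std ** 2)))


def shaped_reward(env_reward, state, action, dynamics_ensemble,
                  behavior_prior, prior_weight=0.1, penalty_weight=1.0):
    """Model reward plus a behavioral-prior bonus, minus an ensemble
    disagreement penalty (a crude epistemic-uncertainty proxy)."""
    preds = np.stack([m(state, action) for m in dynamics_ensemble])
    uncertainty = float(np.max(np.linalg.norm(preds - preds.mean(0), axis=-1)))
    prior_bonus = behavior_prior.log_prob(action)
    return env_reward + prior_weight * prior_bonus - penalty_weight * uncertainty


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Four toy dynamics models that disagree slightly on the next state.
    ensemble = [lambda s, a, w=rng.normal(size=3): s + a + 0.01 * w
                for _ in range(4)]
    prior = GaussianBehaviorPrior(mean=np.zeros(3), std=np.ones(3))
    s, a = np.zeros(3), 0.2 * np.ones(3)
    print(shaped_reward(env_reward=1.0, state=s, action=a,
                        dynamics_ensemble=ensemble, behavior_prior=prior))
```

In this sketch the two ingredients named in the abstract play distinct roles: the dynamics ensemble keeps rollouts trustworthy within the dataset's domain, while the prior bonus biases the policy toward behaviors seen in the data, which is what supports transfer across domains.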