Domain Generalization for Robust Model-Based Offline Reinforcement Learning

05 Oct 2022 (modified: 17 Nov 2024) · Offline RL Workshop NeurIPS 2022
Keywords: Offline Reinforcement Learning, Domain Generalisation, Invariant Prediction
TL;DR: We apply methods for invariant prediction to learn robust dynamics and rewards models from multi-demonstrator datasets, which are used to train policies in the offline model-based RL setting.
Abstract: Existing offline reinforcement learning (RL) algorithms typically assume that training data is either: 1) generated by a known policy, or 2) of entirely unknown origin. We consider multi-demonstrator offline RL, a middle ground where we know which demonstrators generated each dataset, but make no assumptions about the underlying policies of the demonstrators. This is the most natural setting when collecting data from multiple human operators, yet remains unexplored. Since different demonstrators induce different data distributions, we show that this can be naturally framed as a domain generalization problem, with each demonstrator corresponding to a different domain. Specifically, we propose Domain-Invariant Model-based Offline RL (DIMORL), where we apply Risk Extrapolation (REx) (Krueger et al., 2020) to the process of learning dynamics and rewards models. Our results show that models trained with REx exhibit improved domain generalization performance when compared with the natural baseline of pooling all demonstrators' data. We observe that the resulting models frequently enable the learning of superior policies in the offline model-based RL setting, can improve the stability of the policy learning process, and potentially enable increased exploration.
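The central ingredient is the V-REx objective of Krueger et al. (2020) applied to model learning: each demonstrator's transitions form one domain, and the model is trained to minimise the mean of the per-demonstrator risks plus a weighted variance of those risks, which penalises fitting some demonstrators much better than others. The sketch below illustrates this idea only; the network architecture, batch format, and the penalty weight `beta` are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch (assumed setup, not the paper's code) of V-REx applied to
# learning a joint dynamics-and-reward model from multi-demonstrator data.
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Predicts next state and reward from (state, action)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim + 1),  # next state + scalar reward
        )

    def forward(self, state, action):
        out = self.net(torch.cat([state, action], dim=-1))
        return out[..., :-1], out[..., -1]

def rex_loss(model, domain_batches, beta: float = 10.0):
    """Mean per-demonstrator risk plus beta * variance of risks (V-REx).

    `domain_batches` is assumed to be a list with one transition batch per
    demonstrator, each a dict with keys state/action/next_state/reward.
    """
    risks = []
    for batch in domain_batches:
        pred_next, pred_rew = model(batch["state"], batch["action"])
        risk = nn.functional.mse_loss(pred_next, batch["next_state"]) \
             + nn.functional.mse_loss(pred_rew, batch["reward"])
        risks.append(risk)
    risks = torch.stack(risks)
    return risks.mean() + beta * risks.var()
```

Setting `beta = 0` recovers the pooled-data baseline described in the abstract; larger values push the model toward risks that are equalised across demonstrators, which is the invariance the paper exploits.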
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/domain-generalization-for-robust-model-based/code)