A Relational Intervention Approach for Unsupervised Dynamics Generalization in Model-Based Reinforcement Learning

Anonymous

Sep 29, 2021 (edited Oct 05, 2021) · ICLR 2022 Conference Blind Submission
  • Keywords: Model-Based Reinforcement Learning, Unsupervised Dynamics Generalization
  • Abstract: The generalization of model-based reinforcement learning (MBRL) methods to environments with unseen dynamics is an important yet challenging problem. Existing methods try to make dynamics prediction models robust to changes in environmental dynamics by incorporating context information $Z$ learned from past transition segments. However, redundant information in transition segments that is unrelated to the dynamics change creates a spurious statistical association with the dynamics and thus undermines generalization ability. In this paper, we model the dynamics change as the variation of unobserved environment-specific factors $Z$ across environments. Because environment labels are unavailable, it is challenging to encode only environment-invariant information into $Z$ without introducing redundant information. To tackle this problem, we introduce an interventional prediction module to identify the estimates $\hat{Z}$ that belong to the same environment. Furthermore, by exploiting the invariance of $Z$ within a single environment, a relational head is proposed to enforce similarity between $\hat{Z}$ from the same environment (see the illustrative sketch below). As a result, redundant information unrelated to the environment-specific factors is eliminated from the estimated $\hat{Z}$, improving the generalization ability of the dynamics prediction model. Experimental results on several benchmarks demonstrate that our approach significantly reduces dynamics prediction errors and improves the performance of model-based RL methods on new environments with unseen dynamics in a zero-shot setting.
  • One-sentence Summary: This paper proposes a new model-based RL method that generalizes to new environments with unseen dynamics.
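
To make the relational-head idea from the abstract concrete, below is a minimal sketch, assuming a PyTorch implementation. The names (`RelationalHead`, `relation_loss`) and the choice of treating segments from the same trajectory as same-environment pairs are illustrative assumptions, not the authors' actual code or architecture.

```python
# Minimal sketch (not the authors' code): a relational head that scores
# whether two context estimates z_i, z_j come from the same environment.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalHead(nn.Module):
    """Maps a pair of context vectors to a 'same environment' logit."""
    def __init__(self, z_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z_i: torch.Tensor, z_j: torch.Tensor) -> torch.Tensor:
        # Concatenate the pair and predict a scalar logit per pair.
        return self.net(torch.cat([z_i, z_j], dim=-1)).squeeze(-1)

def relation_loss(head: RelationalHead,
                  z_a: torch.Tensor,
                  z_b: torch.Tensor,
                  same_env: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy on pairwise same-environment predictions.

    z_a, z_b: (batch, z_dim) context estimates from two transition segments.
    same_env: (batch,) float labels; 1.0 if the pair is assumed to share an
              environment (e.g. segments from one trajectory), else 0.0.
    """
    logits = head(z_a, z_b)
    return F.binary_cross_entropy_with_logits(logits, same_env)
```

Minimizing such a loss pushes context estimates from the same environment toward similar representations while separating those from different environments, which is one plausible way to realize the similarity constraint the abstract describes.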