Generalizing Goal-Conditioned Reinforcement Learning with Variational Causal Reasoning

Published: 31 Oct 2022, Last Modified: 28 Dec 2022
NeurIPS 2022 Accept
Readers: Everyone
Keywords: Reinforcement Learning, Generalization, Causal Reasoning
TL;DR: We provably improve the generalization of goal-conditioned reinforcement learning by discovering a causal graph and using it to guide policy learning.
Abstract: As a pivotal component of human intelligence's ability to attain generalizable solutions, reasoning offers great potential for reinforcement learning (RL) agents to generalize across varied goals by summarizing part-to-whole arguments and discovering cause-and-effect relations. However, how to discover and represent causalities remains a significant gap that hinders the development of causal RL. In this paper, we augment Goal-Conditioned RL (GCRL) with a Causal Graph (CG), a structure built upon the relations among objects and events. We give a novel formulation of the GCRL problem as variational likelihood maximization with the CG as a latent variable. To optimize the derived objective, we propose a framework with theoretical performance guarantees that alternates between two steps: (i) using interventional data to estimate the posterior of the CG; (ii) using the CG to learn generalizable models and interpretable policies. Due to the lack of public benchmarks for assessing generalization capability under reasoning, we design nine tasks and empirically demonstrate the effectiveness of the proposed method against five baselines on them. Further theoretical analysis shows that our performance improvement is attributable to the virtuous cycle of causal discovery, transition modeling, and policy training, which aligns with the experimental evidence from extensive ablation studies.
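Aside: the alternating two-step framework described in the abstract can be illustrated with a minimal NumPy sketch. Everything below is a toy, hypothetical stand-in rather than the authors' implementation: the linear dynamics, the `edge_posterior` scoring rule, and the masked least-squares model are crude placeholders for the variational posterior estimation over the CG and the CG-guided model learning the paper proposes.

```python
import numpy as np

# Toy, hypothetical sketch of the alternating two-step framework.
# All names (sample_transitions, edge_posterior, fit_masked_model) are
# illustrative stand-ins, not the authors' code.

rng = np.random.default_rng(0)

N_VARS = 3        # state variables s_1, s_2, s_3
N_SAMPLES = 2000  # transitions per round of (interventional) data

# Ground-truth structure used only to generate toy data:
# s'_i depends on these components of the current state, plus the action.
TRUE_PARENTS = {0: [0], 1: [0, 1], 2: [2]}


def sample_transitions(n):
    """Stand-in for interventional data collection: (s, a, s') triples
    from a linear toy dynamics model with Gaussian noise."""
    s = rng.normal(size=(n, N_VARS))
    a = rng.normal(size=(n, 1))
    s_next = np.zeros_like(s)
    for i, parents in TRUE_PARENTS.items():
        s_next[:, i] = s[:, parents].sum(axis=1) + 0.5 * a[:, 0]
        s_next[:, i] += 0.1 * rng.normal(size=n)
    return s, a, s_next


def _lsq_error(x, y):
    """Mean squared residual of a least-squares fit of y on x."""
    coef, *_ = np.linalg.lstsq(x, y, rcond=None)
    return np.mean((x @ coef - y) ** 2)


def edge_posterior(s, a, s_next):
    """Step 1 (sketch): score each candidate edge s_j -> s'_i by how much
    dropping s_j hurts prediction of s'_i, squashed into (0, 1). A crude
    proxy for the posterior over causal-graph edges."""
    probs = np.zeros((N_VARS, N_VARS))
    x_full = np.column_stack([s, a])
    for i in range(N_VARS):
        err_full = _lsq_error(x_full, s_next[:, i])
        for j in range(N_VARS):
            x_drop = np.column_stack([np.delete(s, j, axis=1), a])
            delta = _lsq_error(x_drop, s_next[:, i]) - err_full
            probs[i, j] = 1.0 / (1.0 + np.exp(-50.0 * (delta - 0.05)))
    return probs


def fit_masked_model(s, a, s_next, graph):
    """Step 2 (sketch): fit per-variable predictors that only see the
    parents selected by the thresholded graph, i.e. a CG-masked model."""
    models = []
    for i in range(N_VARS):
        parents = np.flatnonzero(graph[i] > 0.5)
        x = np.column_stack([s[:, parents], a])
        coef, *_ = np.linalg.lstsq(x, s_next[:, i], rcond=None)
        models.append((parents, coef))
    return models


# Alternate the two steps on fresh toy data each round.
for it in range(3):
    s, a, s_next = sample_transitions(N_SAMPLES)
    graph = (edge_posterior(s, a, s_next) > 0.5).astype(float)
    models = fit_masked_model(s, a, s_next, graph)
    inferred = {i: parents.tolist() for i, (parents, _) in enumerate(models)}
    print(f"round {it}: inferred parents = {inferred}")
```

In the paper, policy training and the collection of fresh interventional data close the loop alongside causal discovery and transition modeling; this sketch only alternates graph estimation and CG-masked model fitting on regenerated toy data.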
Supplementary Material: zip
