Contrastive Unsupervised Learning of World Model with Invariant Causal Features

03 Oct 2022 (modified: 09 Nov 2022), CML4Impact, Readers: Everyone
Keywords: causality, world model, contrastive learning, model-based reinforcement learning, reinforcement learning, out-of-distribution generalisation, sim-to-real transfer, robot navigation
TL;DR: We present a world model that learns causal features using the invariance principle and achieves state-of-the-art performance on out-of-distribution generalisation.
Abstract: In this paper we present a world model that learns causal features using the invariance principle. In particular, we use contrastive unsupervised learning to learn the invariant causal features, enforcing invariance across augmentations of the irrelevant parts, or styles, of the observation. Because world-model-based reinforcement learning methods optimise representation learning and the policy independently, a naive contrastive loss collapses: the representation learning module receives no supervisory signal. We propose an intervention-invariant auxiliary task to mitigate this issue. Specifically, we use depth prediction to explicitly enforce the invariance and apply data augmentation as a style intervention on the RGB observation space. Our design leverages unsupervised representation learning to learn the world model with invariant causal features. The proposed method significantly outperforms current state-of-the-art model-based and model-free reinforcement learning methods on out-of-distribution point-navigation tasks on the iGibson dataset. Moreover, the proposed model excels at sim-to-real transfer of our perception learning module.
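The two losses described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: a contrastive (InfoNCE-style) loss that treats embeddings of two style-augmented views of the same observation as positives, plus the depth-prediction auxiliary loss that supplies the supervisory signal, since scene depth is unchanged by style interventions on RGB. All function and parameter names here are assumptions for illustration.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Contrastive loss over a batch: row i of z1 and row i of z2 are
    embeddings of two style-augmented views of the same observation
    (positives); all other pairs in the batch serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives lie on the diagonal

def depth_aux_loss(pred_depth, true_depth):
    """Intervention-invariant auxiliary task: depth is unaffected by style
    interventions on RGB, so predicting it supervises the invariant features
    and prevents representation collapse."""
    return np.mean((pred_depth - true_depth) ** 2)
```

In training, the total representation loss would combine the two terms, e.g. `info_nce(z_aug1, z_aug2) + lam * depth_aux_loss(pred, gt)`, with the weighting `lam` a hypothetical hyperparameter.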