Mix-up Consistent Cross Representations for Data-Efficient Reinforcement Learning

Published: 01 Jan 2022 (Last Modified: 12 May 2023), IJCNN 2022
Abstract: Deep reinforcement learning (RL) has achieved remarkable performance in sequential decision-making problems. However, it is challenging for deep RL methods to extract task-relevant semantic information when interacting with the environment under limited data. In this paper, we propose Mix-up Consistent Cross Representations (MCCR), a novel self-supervised auxiliary task that aims to improve data efficiency and encourage representation prediction. Specifically, we compute a contrastive loss between low-dimensional and high-dimensional representations of different state observations to increase the mutual information between states, thereby improving data efficiency. Furthermore, we employ a mix-up strategy to generate intermediate samples, increasing data diversity and the smoothness of representation prediction across nearby timesteps. Experimental results show that MCCR achieves competitive results compared with state-of-the-art approaches on complex control tasks in the DeepMind Control Suite, notably improving the ability of pretrained encoders to generalize to unseen tasks.
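The two ingredients the abstract describes, mix-up interpolation of nearby observations and a contrastive loss between representation pairs, can be sketched roughly as follows. This is a minimal numpy illustration, not the paper's implementation: the function names (`mixup`, `info_nce`), the Beta-distributed mixing weight, and the use of InfoNCE as the contrastive objective are assumptions for illustration.

```python
import numpy as np

def mixup(x_t, x_t1, alpha=0.4, rng=None):
    # Interpolate two observations from nearby timesteps with a
    # Beta-sampled weight (hypothetical choice; alpha is illustrative).
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x_t + (1.0 - lam) * x_t1, lam

def info_nce(queries, keys, temperature=0.1):
    # InfoNCE-style contrastive loss: the i-th query (e.g. a low-dimensional
    # representation) should match the i-th key (e.g. a high-dimensional one)
    # against all other keys in the batch.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal; minimizing this loss raises a lower
    # bound on the mutual information between the paired representations.
    return -np.mean(np.diag(log_prob))
```

Minimizing such a loss on matched representation pairs, while feeding in mixed intermediate samples, captures the gist of combining mix-up augmentation with a contrastive auxiliary objective.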