Improving Multi-Task Reinforcement Learning through Disentangled Representation Learning

21 May 2022 · OpenReview Archive Direct Upload
Abstract: When humans learn to perform a task, they tend to also improve their skills on related tasks, even without explicitly practicing those other tasks. In reinforcement learning (RL), the multi-task setting aims to exploit similarities across tasks so that agents can learn multiple tasks simultaneously and more quickly. However, multi-task RL suffers from several key issues, such as negative interference, that make it difficult to apply in practice. We propose an approach that uses disentangled feature learning to alleviate these issues and to find effective multi-task policies in a high-dimensional raw-pixel observation space. We show that this approach can outperform other multi-task RL techniques at little additional cost. Finally, we investigate disentanglement itself by capturing, adjusting, and reconstructing latent representations learned from Atari images, gaining insight into their underlying meaning.
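The "capturing, adjusting, and reconstructing" procedure mentioned in the abstract is a latent-traversal analysis: encode an observation, perturb one latent dimension while holding the others fixed, and decode to see what that factor controls. A minimal sketch of the idea, using a toy linear encoder/decoder in place of the paper's learned model (all sizes, weights, and function names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Toy linear encoder/decoder standing in for a learned disentangled
# model; the actual model in the paper is trained on Atari frames.
rng = np.random.default_rng(0)
obs_dim, latent_dim = 64, 8          # illustrative sizes
W_enc = rng.normal(size=(latent_dim, obs_dim))
W_dec = rng.normal(size=(obs_dim, latent_dim))

def encode(x):
    # "Capturing" a latent representation of an observation.
    return W_enc @ x

def decode(z):
    # "Reconstructing" an observation from a latent code.
    return W_dec @ z

x = rng.normal(size=obs_dim)         # stand-in for a raw-pixel frame
z = encode(x)

# "Adjusting": traverse a single latent dimension while holding the
# rest fixed, then reconstruct to inspect what that factor controls.
traversals = []
for delta in (-2.0, 0.0, 2.0):
    z_adj = z.copy()
    z_adj[3] += delta                # perturb one factor only
    traversals.append(decode(z_adj))

# For a linear decoder, only the perturbed factor's contribution
# differs between the reconstructions.
diff = traversals[2] - traversals[0]
print(np.allclose(diff, 4.0 * W_dec[:, 3]))  # → True
```

In a disentangled representation, each such traversal should change one interpretable aspect of the reconstructed frame (e.g. an object's position) while leaving the rest of the image intact.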