AARL: Automated Auxiliary Loss for Reinforcement Learning

Published: 28 Jan 2022, Last Modified: 13 Feb 2023. Venue: ICLR 2022 (submitted).
Keywords: Reinforcement learning, Representation learning, Auxiliary Loss
Abstract: A good state representation is crucial to reinforcement learning (RL), yet an ideal representation is hard to learn from the signals of the RL objective alone. Many recent works therefore manually design auxiliary losses to improve sample efficiency and decision performance. However, handcrafted auxiliary losses rely heavily on expert knowledge, lack scalability, and can be suboptimal for boosting RL performance. In this work, we introduce Automated Auxiliary Loss for Reinforcement Learning (AARL), a principled approach that automatically searches for the optimal auxiliary loss function for RL. Specifically, based on collected trajectory data, we define a general auxiliary loss space of size $4.6\times10^{19}$ and explore it with an efficient evolutionary search strategy. We evaluate AARL on the DeepMind Control Suite and show that the searched auxiliary losses significantly improve RL performance in both pixel-based and state-based settings, with the largest gains observed on the most challenging tasks. AARL greatly outperforms state-of-the-art methods and generalizes well to unseen domains and tasks. We further conduct extensive studies to shed light on why auxiliary losses are effective in RL.
One-sentence Summary: In this work, we introduce Automated Auxiliary Loss for Reinforcement Learning (AARL), a principled approach that automatically searches for the optimal auxiliary loss function for RL.
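
The abstract describes exploring a large auxiliary-loss space with an evolutionary search strategy. The following is a minimal sketch of that kind of search loop, not the authors' implementation: the candidate encoding (binary masks over trajectory elements plus a similarity operator), the operator set, and the fitness function `evaluate_rl_performance` are all illustrative assumptions.

```python
# Hedged sketch of an evolutionary search over candidate auxiliary losses.
# The encoding and fitness function below are assumptions for illustration,
# not the loss-space definition used by AARL.
import random

OPERATORS = ["mse", "inner_product", "bilinear", "cosine"]  # assumed operator set
MASK_LEN = 10        # assumed: which trajectory slots (s, a, r) feed the loss
POP_SIZE = 20
N_GENERATIONS = 5
TOP_K = 5            # survivors kept per generation


def random_candidate():
    """A candidate auxiliary loss: source/target masks plus a similarity operator."""
    return {
        "source_mask": [random.randint(0, 1) for _ in range(MASK_LEN)],
        "target_mask": [random.randint(0, 1) for _ in range(MASK_LEN)],
        "operator": random.choice(OPERATORS),
    }


def mutate(cand, flip_prob=0.2):
    """Flip mask bits with some probability and occasionally swap the operator."""
    child = {
        "source_mask": [b ^ (random.random() < flip_prob) for b in cand["source_mask"]],
        "target_mask": [b ^ (random.random() < flip_prob) for b in cand["target_mask"]],
        "operator": cand["operator"],
    }
    if random.random() < flip_prob:
        child["operator"] = random.choice(OPERATORS)
    return child


def evaluate_rl_performance(cand):
    """Placeholder fitness: in practice, train an RL agent with this auxiliary
    loss and return its evaluation score. Here we return a dummy value."""
    return random.random()


def evolutionary_search():
    population = [random_candidate() for _ in range(POP_SIZE)]
    for _ in range(N_GENERATIONS):
        # Rank candidates by (expensive) RL evaluation, keep the top performers,
        # and refill the population with mutated copies of the survivors.
        scored = sorted(population, key=evaluate_rl_performance, reverse=True)
        survivors = scored[:TOP_K]
        population = survivors + [
            mutate(random.choice(survivors)) for _ in range(POP_SIZE - TOP_K)
        ]
    return max(population, key=evaluate_rl_performance)


if __name__ == "__main__":
    best = evolutionary_search()
    print("Best candidate found:", best)
```

In a real search of this kind, each fitness evaluation is a full (or truncated) RL training run, so the population size and generation count trade off search quality against compute.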