PreND: Enhancing Intrinsic Motivation in Reinforcement Learning through Pre-trained Network Distillation
Track: Full track
Keywords: Intrinsic Motivation, Reinforcement Learning, Prediction-based, Random Network Distillation
TL;DR: PreND enhances intrinsic motivation in reinforcement learning by using pre-trained representations to improve the stability and quality of intrinsic rewards, showing promising results against standard RND in Atari experiments.
Abstract: Intrinsic motivation, inspired by the psychology of developmental learning in infants, stimulates exploration in agents without relying solely on sparse external rewards. Existing methods in reinforcement learning, such as Random Network Distillation (RND), face significant limitations, including (1) reliance on raw visual inputs, which yields representations with little semantic meaning, (2) the inability to build a robust latent space, (3) poor target network initialization, and (4) rapid degradation of intrinsic rewards. In this paper, we introduce ***Pre**-trained **N**etwork **D**istillation* (**PreND**), a novel approach to enhance intrinsic motivation in reinforcement learning (RL) by improving upon the widely used prediction-based method, RND. **PreND** addresses these challenges by incorporating pre-trained representation models into both the target and predictor networks, resulting in more meaningful and stable intrinsic rewards while enhancing the representation learned by the model. We also propose simple yet effective variants of the predictor network's optimization that control its learning rate.
Through experiments in the Atari domain, we demonstrate that **PreND** significantly outperforms RND, providing a more robust intrinsic motivation signal that leads to better exploration and improves overall performance and sample efficiency. This research highlights the importance of the target and predictor networks' representations in prediction-based intrinsic motivation, setting a new direction for improving RL agents' learning efficiency in sparse-reward environments.
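Below is a minimal sketch of how a PreND-style intrinsic reward could be computed, assuming PyTorch and a torchvision ResNet-18 as the pre-trained backbone. The abstract does not specify the backbone, feature dimension, or learning rate, so those choices are illustrative: a frozen pre-trained encoder serves as the target, a second pre-trained encoder with a trainable head serves as the predictor, and the per-observation prediction error is used as the intrinsic reward.

```python
# Illustrative sketch of a PreND-style intrinsic reward module.
# Assumptions (not specified in the abstract): PyTorch, torchvision ResNet-18
# backbones, 512-d features, RGB observations preprocessed for ImageNet models,
# and a small predictor learning rate to slow intrinsic-reward decay.
import torch
import torch.nn as nn
from torchvision import models


class PreNDReward(nn.Module):
    def __init__(self, feature_dim=512, lr=1e-5):
        super().__init__()
        # Frozen pre-trained target: replaces RND's randomly initialized target.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.target = nn.Sequential(*list(backbone.children())[:-1])  # drop fc head
        for p in self.target.parameters():
            p.requires_grad = False

        # Predictor: also built on a pre-trained backbone, but trainable.
        pred_backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.predictor = nn.Sequential(
            *list(pred_backbone.children())[:-1],
            nn.Flatten(),
            nn.Linear(512, feature_dim),
        )
        # A reduced learning rate keeps the predictor from matching the target
        # too quickly, so the intrinsic reward degrades more slowly.
        self.optimizer = torch.optim.Adam(self.predictor.parameters(), lr=lr)

    @torch.no_grad()
    def intrinsic_reward(self, obs):
        # obs: (B, 3, H, W) observations preprocessed for the backbone.
        target_feat = self.target(obs).flatten(1)
        pred_feat = self.predictor(obs)
        return (pred_feat - target_feat).pow(2).mean(dim=1)  # per-sample novelty

    def update(self, obs):
        # Train the predictor to match the frozen pre-trained target features.
        target_feat = self.target(obs).flatten(1).detach()
        loss = (self.predictor(obs) - target_feat).pow(2).mean()
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()
```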
Submission Number: 48