Keywords: reinforcement learning, intrinsic motivation, exploration, curiosity
TL;DR: This paper introduces the use of a learned prior within a Variational Autoencoder to measure novelty through KL divergence, enhancing exploration efficiency and increasing reward collection in environments with sparse rewards.
Abstract: Efficient exploration is a fundamental challenge in reinforcement learning, especially in environments with sparse rewards. Intrinsic motivation can improve exploration efficiency by rewarding agents for encountering novel states. In this work, we propose a method called Variational Learned Priors for intrinsic motivation that estimates state novelty through variational state encoding. Specifically, novelty is measured using the Kullback-Leibler divergence between a Variational Autoencoder's learned prior and posterior distributions. When tested across various domains, our approach improves the latent space quality of the Variational Autoencoder, leading to increased exploration efficiency and better task performance for the reinforcement learning agent.
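To illustrate the novelty measure described in the abstract, the following is a minimal sketch (not taken from the submission; the function name, diagonal-Gaussian parameterization, and tensor shapes are our own assumptions) of how a KL-based intrinsic bonus between a VAE's posterior q(z|s) and a learned prior p(z) might be computed in PyTorch:

```python
import torch
from torch.distributions import Normal, kl_divergence

def intrinsic_reward(posterior_mu, posterior_std, prior_mu, prior_std):
    """Hypothetical novelty bonus: KL( q(z|s) || p(z) ) per state.

    Assumes diagonal-Gaussian posterior and learned prior, with all
    tensors of shape (batch, latent_dim). Not the authors' implementation.
    """
    posterior = Normal(posterior_mu, posterior_std)
    prior = Normal(prior_mu, prior_std)
    # kl_divergence returns a per-dimension KL for Normal distributions;
    # summing over the latent dimension yields one scalar bonus per state.
    return kl_divergence(posterior, prior).sum(dim=-1)
```

Under this reading, states whose encodings diverge strongly from the learned prior receive a larger intrinsic reward, which is added to the environment reward during training.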
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9634