Self-Supervised Exploration via Latent Bayesian Surprise

Anonymous

Published: 15 Jun 2022, Last Modified: 22 Oct 2023
SSL-RL 2021 Poster
Keywords: self-supervised, exploration, reinforcement, learning, bayesian, surprise, latent, dynamics
TL;DR: An intrinsic bonus for self-supervised exploration in Reinforcement Learning, based on the concept of Bayesian surprise and computed with respect to a latent state variable in the dynamics model.
Abstract: Training with Reinforcement Learning requires a reward function that is used to guide the agent towards achieving its objective. However, designing smooth and well-behaved rewards is in general not trivial and requires significant human engineering effort. Generating rewards in a self-supervised way, by endowing the agent with an intrinsic desire to learn and explore the environment, may induce more general behaviours. In this work, we propose a curiosity-based bonus as an intrinsic reward for Reinforcement Learning, computed as the Bayesian surprise with respect to a latent state variable that is learnt by reconstructing fixed random features. We extensively evaluate our model by measuring the agent's environment exploration on continuous control tasks and the game scores achieved on video games. Our model is computationally cheap and empirically shows state-of-the-art performance on several problems. Furthermore, in experiments on an environment with stochastic actions, our approach proved to be the most resilient to simple stochasticity. Further visualizations are available on the project webpage (https://lbsexploration.github.io/).
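The abstract only describes the bonus at a high level. As a rough illustration (not the paper's exact architecture), the sketch below shows how a Bayesian-surprise intrinsic reward can be computed as the KL divergence between a posterior and a prior over a latent state variable. The diagonal-Gaussian prior/posterior heads, the latent dimension, and the class and function names are assumptions for the example; the training of the latent model (e.g. by reconstructing fixed random features of the next observation, as in the abstract) is omitted.

```python
# Illustrative sketch of a Bayesian-surprise intrinsic bonus (PyTorch).
# Assumes a diagonal-Gaussian prior p(z | s_t, a_t) and posterior q(z | s_t, a_t, s_{t+1});
# the intrinsic reward is KL[q || p], i.e. how much the next observation updates the latent belief.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence


class GaussianHead(nn.Module):
    """Small MLP that outputs the mean and log-std of a diagonal Gaussian over the latent z."""

    def __init__(self, in_dim, latent_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),
        )

    def forward(self, x):
        mean, log_std = self.net(x).chunk(2, dim=-1)
        return Normal(mean, log_std.clamp(-5.0, 2.0).exp())


class LatentBayesianSurprise(nn.Module):
    """Hypothetical latent dynamics model used only to compute the intrinsic bonus."""

    def __init__(self, obs_dim, act_dim, latent_dim=32):
        super().__init__()
        self.prior = GaussianHead(obs_dim + act_dim, latent_dim)                # p(z | s_t, a_t)
        self.posterior = GaussianHead(2 * obs_dim + act_dim, latent_dim)        # q(z | s_t, a_t, s_{t+1})

    @torch.no_grad()
    def intrinsic_reward(self, s, a, s_next):
        """Bayesian surprise: KL[q(z | s, a, s') || p(z | s, a)], summed over latent dimensions."""
        p = self.prior(torch.cat([s, a], dim=-1))
        q = self.posterior(torch.cat([s, a, s_next], dim=-1))
        return kl_divergence(q, p).sum(-1)  # one scalar bonus per transition
```

In such a setup, the bonus would typically be added to (or replace) the extrinsic reward at each step, so the agent is driven towards transitions whose outcome most changes its latent belief.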
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/arxiv:2104.07495/code)