Large-Scale Study of Curiosity-Driven Learning

27 Sept 2018, 22:36 (modified: 10 Feb 2022, 11:31) · ICLR 2019 Conference Blind Submission
Keywords: exploration, curiosity, intrinsic reward, no extrinsic reward, unsupervised, no-reward, skills
TL;DR: An agent trained only with curiosity, and no extrinsic reward, does surprisingly well on 54 popular environments, including the suite of Atari games, Super Mario Bros., and more.
Abstract: Reinforcement learning algorithms rely on carefully engineered rewards from the environment that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is difficult and does not scale, motivating the development of reward functions that are intrinsic to the agent. Curiosity is one such intrinsic reward function, which uses prediction error as a reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. {\em without any extrinsic rewards}, across $54$ standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, as well as a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many games. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of prediction-based rewards in stochastic setups. Game-play videos and code are at https://doubleblindsupplementary.github.io/large-curiosity/.
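
For readers unfamiliar with prediction-error curiosity, the sketch below illustrates the core idea from the abstract: embed observations in a feature space, train a forward dynamics model in that space, and use its prediction error as the intrinsic reward. This is a minimal PyTorch sketch of the general technique, not the authors' released implementation; the names (`CuriosityReward`, `feat_dim`, `random_features`) are illustrative.

```python
import torch
import torch.nn as nn


class CuriosityReward(nn.Module):
    """Prediction-error curiosity: reward = forward-model error in feature space.
    A minimal sketch of the idea in the abstract, not the paper's exact code."""

    def __init__(self, obs_dim, act_dim, feat_dim=64, random_features=True):
        super().__init__()
        # Feature embedding phi(s). With random_features=True it stays a fixed,
        # randomly initialized network (the "random features" variant); otherwise
        # it can be trained, corresponding to the learned-features variant.
        self.embed = nn.Sequential(
            nn.Linear(obs_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        if random_features:
            for p in self.embed.parameters():
                p.requires_grad_(False)
        # Forward dynamics model: predicts phi(s_{t+1}) from phi(s_t) and a_t.
        self.dynamics = nn.Sequential(
            nn.Linear(feat_dim + act_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, obs, act, next_obs):
        phi, phi_next = self.embed(obs), self.embed(next_obs)
        pred = self.dynamics(torch.cat([phi, act], dim=-1))
        # Intrinsic reward: squared prediction error, averaged over features.
        return (pred - phi_next.detach()).pow(2).mean(dim=-1)


# Usage: score a batch of transitions with curiosity alone (no extrinsic reward).
curiosity = CuriosityReward(obs_dim=8, act_dim=2)
obs, act, next_obs = torch.randn(16, 8), torch.randn(16, 2), torch.randn(16, 8)
r_int = curiosity(obs, act, next_obs)  # shape: (16,)
```

Freezing the embedding mirrors the paper's finding that fixed random features already suffice on many benchmarks, while a trainable embedding corresponds to the learned-features variant that generalized better to novel Super Mario Bros. levels.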
Code: [openai/large-scale-curiosity](https://github.com/openai/large-scale-curiosity) + [3 community implementations](https://paperswithcode.com/paper/?openreview=rJNwDjAqYX)
Data: [Arcade Learning Environment](https://paperswithcode.com/dataset/arcade-learning-environment)