Intrinsic Motivation via Surprise Memory

TMLR Paper1209 Authors

31 May 2023 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: We present a new computational model for intrinsic rewards in reinforcement learning that addresses the limitations of existing surprise-driven exploration methods. The reward is the novelty of the surprise rather than the surprise norm. We estimate surprise novelty as the retrieval error of a memory network that stores and reconstructs surprises. Our surprise memory (SM) augments the capability of surprise-based intrinsic motivators, maintaining the agent's interest in exciting exploration while reducing unwanted attraction to unpredictable or noisy observations. Our experiments demonstrate that the SM, combined with various surprise predictors, exhibits efficient exploration behavior and significantly boosts final performance in sparse-reward environments, including Noisy-TV, navigation, and challenging Atari games.
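The core idea in the abstract, rewarding the *novelty* of a surprise via the retrieval error of a memory over past surprises, can be illustrated with a minimal sketch. This is a hypothetical toy version (a k-nearest-neighbor memory with averaging as the "reconstruction"), not the paper's actual memory network; the class name, capacity, and neighbor count are all illustrative assumptions:

```python
import numpy as np

class SurpriseMemory:
    """Toy sketch of a surprise-memory intrinsic reward (illustrative only).

    Stores past surprise vectors and reconstructs a query surprise from its
    k nearest stored neighbors; the reconstruction (retrieval) error serves
    as the intrinsic reward, i.e. the "surprise novelty".
    """

    def __init__(self, capacity=1000, k=4):
        self.capacity = capacity
        self.k = k
        self.memory = []  # FIFO buffer of stored surprise vectors

    def reward(self, surprise):
        surprise = np.asarray(surprise, dtype=float)
        if not self.memory:
            # Nothing stored yet: the whole surprise is novel.
            error = float(np.linalg.norm(surprise))
        else:
            M = np.stack(self.memory)
            dists = np.linalg.norm(M - surprise, axis=1)
            idx = np.argsort(dists)[: self.k]
            recon = M[idx].mean(axis=0)  # retrieve by averaging neighbors
            error = float(np.linalg.norm(surprise - recon))
        # Write the new surprise into memory, evicting the oldest entry.
        self.memory.append(surprise)
        if len(self.memory) > self.capacity:
            self.memory.pop(0)
        return error

sm = SurpriseMemory(k=2)
r1 = sm.reward([1.0, 0.0])  # empty memory: high novelty
r2 = sm.reward([1.0, 0.0])  # same surprise again: near-zero novelty
```

Note how a large but *repeated* surprise (e.g. from a noisy TV) yields a near-zero reward on the second visit, even though its norm is unchanged; this is the behavior the abstract contrasts with norm-based surprise rewards.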
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- We have polished the writing and tables, fixing typos and redundant words/sentences
- We have clarified the contribution of our paper in the introduction and method sections
- We have added experiments with the NGU agent on the Atari benchmark
Assigned Action Editor: ~Josh_Merel1
Submission Number: 1209