Cyclophobic Reinforcement Learning

Published: 20 Aug 2023, Last Modified: 20 Aug 2023. Accepted by TMLR.
Abstract: In environments with sparse rewards, finding a good inductive bias for exploration is crucial to the agent's success. However, there are two competing goals: novelty search and systematic exploration. While existing approaches such as curiosity-driven exploration find novelty, they sometimes fail to explore the whole state space systematically, akin to depth-first search versus breadth-first search. In this paper, we propose a new intrinsic reward that is cyclophobic, i.e., it does not reward novelty but punishes redundancy by avoiding cycles. Augmenting the cyclophobic intrinsic reward with a sequence of hierarchical representations based on the agent's cropped observations, we achieve excellent results in the MiniGrid and MiniHack environments. Both are particularly hard, as they require complex interactions with different objects to be solved. Detailed comparisons with previous approaches and thorough ablation studies show that our newly proposed cyclophobic reinforcement learning is more sample-efficient than other state-of-the-art methods on a variety of tasks.
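To make the core idea concrete, below is a minimal sketch of a cycle-punishing intrinsic reward, not the paper's actual implementation. The class name `CyclophobicBonus`, the `penalty` value, and the per-episode hashing scheme are illustrative assumptions; the abstract only specifies that redundancy (revisiting states, i.e., closing cycles) is penalized rather than novelty rewarded.

```python
from collections import Counter


class CyclophobicBonus:
    """Sketch of a cyclophobic intrinsic reward: no bonus for novel
    states, a negative reward whenever a state recurs within the
    current episode (i.e., a cycle is closed). Hypothetical
    simplification of the method described in the abstract."""

    def __init__(self, penalty: float = -1.0):
        self.penalty = penalty
        self.visits = Counter()  # per-episode visit counts

    def reset(self) -> None:
        # Cycles are tracked per episode, so clear counts on reset.
        self.visits.clear()

    def __call__(self, obs) -> float:
        # Assumes a hashable observation key, e.g. the bytes of a
        # (cropped) grid observation in a MiniGrid-style environment.
        key = hash(obs)
        self.visits[key] += 1
        return self.penalty if self.visits[key] > 1 else 0.0
```

In use, such a bonus would be added to the extrinsic reward at every step; following the abstract, the observation key could be built from a sequence of crops of the agent's view (the hierarchical representations), with one such counter per crop size.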
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: - We added the subsection "Contribution to existing literature" to Section 5 in order to elaborate on what makes cyclophobic reinforcement learning distinct from other count-based exploration methods.
Supplementary Material: zip
Assigned Action Editor: ~Josh_Merel1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1090