Learning to Explore in Diverse Reward Settings via Temporal-Difference-Error Maximization

Published: 09 May 2025 · Last Modified: 02 Jun 2025 · RLC 2025 · CC BY 4.0
Keywords: Deep RL, Exploration, TD-error Maximization
TL;DR: We propose an exploration method that works robustly across dense, sparse, and exploration-adverse reward settings without hyperparameter adjustments.
Abstract: Numerous heuristics and advanced approaches have been proposed for exploration in different settings for deep reinforcement learning. Noise-based exploration generally fares well with dense, shaped rewards, and bonus-based exploration with sparse rewards. However, these methods usually require additional tuning to deal with undesirable reward settings by adjusting hyperparameters and noise distributions. Rewards that actively discourage exploration, i.e., with an action cost and no other dense signal to follow, can pose a major challenge. We propose a novel exploration method, Stable Error-seeking Exploration (SEE), that is robust across dense, sparse, and exploration-adverse reward settings. To this end, we revisit the idea of maximizing the TD-error as a separate objective. Our method introduces three design choices to mitigate the instability caused by far-off-policy learning, the conflict of interest in maximizing the cumulative TD-error in an episodic setting, and the non-stationary nature of TD-errors. SEE can be combined with off-policy algorithms without modifying the optimization pipeline of the original objective. In our experimental analysis, we show that a Soft Actor-Critic agent with the addition of SEE performs robustly across three diverse reward settings in a variety of tasks without hyperparameter adjustments.
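To make the core idea concrete, the sketch below illustrates one plausible way to treat the TD-error as a separate exploration objective: the magnitude of the main critic's TD-error is used as the reward for a second, exploration-only critic, leaving the original optimization untouched. This is a minimal, hypothetical sketch, not the authors' implementation; all names (`main_q`, `explore_q`, `target_q`, `gamma`) are illustrative assumptions, and it assumes a discrete action space with a greedy bootstrap for simplicity, whereas the paper combines SEE with Soft Actor-Critic and adds three stabilizing design choices not shown here.

```python
# Hypothetical sketch of TD-error maximization as a separate objective.
# Assumptions: discrete actions, greedy bootstrapping, PyTorch networks
# mapping observations to per-action Q-values. Not the SEE algorithm itself.
import torch
import torch.nn.functional as F


def td_error(main_q, target_q, obs, act, rew, next_obs, done, gamma=0.99):
    """One-step TD-error of the main (task) critic; gradients are not needed here."""
    with torch.no_grad():
        next_q = target_q(next_obs).max(dim=-1).values
        target = rew + gamma * (1.0 - done) * next_q
        q = main_q(obs).gather(-1, act.unsqueeze(-1)).squeeze(-1)
    return target - q


def exploration_critic_loss(explore_q, target_explore_q, obs, act, next_obs,
                            done, delta, gamma=0.99):
    """Train a separate exploration critic whose reward is |TD-error| of the
    main critic, so a policy greedy w.r.t. it seeks poorly learned regions."""
    intrinsic_r = delta.abs()  # TD-error magnitude as intrinsic reward
    with torch.no_grad():
        next_q = target_explore_q(next_obs).max(dim=-1).values
        target = intrinsic_r + gamma * (1.0 - done) * next_q
    q = explore_q(obs).gather(-1, act.unsqueeze(-1)).squeeze(-1)
    return F.mse_loss(q, target)
```

The separation matters: because the exploration critic is trained on its own reward signal, the task critic's loss and update rule stay exactly as in the base off-policy algorithm, which is the property the abstract highlights.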
Submission Number: 126