β-DQN: Improving Deep Q-Learning By Evolving the Behavior

Published: 2025, Last Modified: 08 Jan 2026, CoRR 2025, CC BY-SA 4.0
Abstract: While many sophisticated exploration methods have been proposed, their lack of generality and high computational cost often lead researchers to favor simpler methods like $ε$-greedy. Motivated by this, we introduce $β$-DQN, a simple and efficient exploration method that augments the standard DQN with a behavior function $β$. This function estimates the probability that each action has been taken at each state. By leveraging $β$, we generate a population of diverse policies that trade off state-action coverage against overestimation bias correction. An adaptive meta-controller selects an effective policy for each episode, enabling flexible and explainable exploration. $β$-DQN is straightforward to implement and adds minimal computational overhead to the standard DQN. Experiments on both simple and challenging exploration domains show that $β$-DQN outperforms existing baseline methods across a wide range of tasks, providing an effective solution for improving exploration in deep reinforcement learning.
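To make the idea concrete, here is a minimal sketch of how a behavior function $β$ might be used to generate a family of diverse greedy policies. The abstract does not specify the exact mixing rule, so the `-log β` exploration bonus and the `lambdas` coefficients below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def make_policies(q_values, beta_probs, lambdas):
    """Build a family of greedy policies that trade off learned Q-values
    against an exploration bonus derived from the behavior function beta.
    (Hypothetical construction; the paper's exact rule is not given here.)

    q_values:   (A,) Q(s, a) estimates for one state
    beta_probs: (A,) beta(a | s), estimated probability that each action
                has been taken at this state
    lambdas:    iterable of mixing coefficients, yielding one policy each
    """
    # Rarely-taken actions (low beta) receive a larger bonus.
    bonus = -np.log(beta_probs + 1e-8)
    return [int(np.argmax(q_values + lam * bonus)) for lam in lambdas]

# Example: action 2 has the highest Q, but action 0 has rarely been taken.
q = np.array([1.0, 0.5, 1.2])
beta = np.array([0.05, 0.45, 0.50])
policies = make_policies(q, beta, lambdas=[0.0, 0.5, 2.0])
# lam = 0.0 is pure exploitation; larger lam favors under-explored actions.
```

A meta-controller, as described in the abstract, would then pick one member of this policy population at the start of each episode based on its recent performance.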