SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning

Published: 21 Sept 2023, Last Modified: 22 Dec 2023 · NeurIPS 2023 poster
Keywords: Deep Reinforcement Learning, Ensemble Q-learning
TL;DR: We propose a tractable Q-ensemble independence regularization for ensemble Q-learning based on random matrix theory.
Abstract: Alleviating overestimation bias is a critical challenge for deep reinforcement learning to achieve successful performance on more complex tasks or on offline datasets containing out-of-distribution data. To overcome overestimation bias, ensemble methods for Q-learning have been investigated to exploit the diversity of multiple Q-functions. Since network initialization has been the predominant approach to promoting diversity in Q-functions, heuristically designed diversity-injection methods have been studied in the literature. However, previous studies have not addressed guaranteed independence over an ensemble from a theoretical perspective. By introducing a novel regularization loss for Q-ensemble independence based on random matrix theory, we propose spiked Wishart Q-ensemble independence regularization (SPQR) for reinforcement learning. Specifically, we replace the intractable hypothesis-testing criterion for Q-ensemble independence with a tractable KL divergence between the spectral distribution of the Q-ensemble and the target Wigner semicircle distribution. We implement SPQR in several online and offline ensemble Q-learning algorithms. In our experiments, SPQR outperforms the baseline algorithms on both online and offline RL benchmarks.
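To make the abstract's core idea concrete, here is a minimal PyTorch sketch of a semicircle-matching penalty, not the authors' implementation. It standardizes each ensemble member's Q-values on a shared batch, forms a centered and rescaled Wishart-type matrix whose spectrum approaches Wigner's semicircle law on [-2, 2] when the members are independent (a classical random-matrix result for B >> N), and penalizes the cross-entropy of the eigenvalues against the semicircle density as a tractable surrogate for the KL term. The function name spqr_regularizer, the B >> N rescaling, and the cross-entropy surrogate are all assumptions for illustration, not details taken from the paper.

```python
import math
import torch

def spqr_regularizer(q_values: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Illustrative semicircle-matching penalty (hypothetical, not the paper's code).

    q_values: (N, B) tensor holding each of the N ensemble members' Q-estimates
    on a shared batch of B state-action pairs. The returned scalar is small when
    the ensemble spectrum resembles Wigner's semicircle law, i.e. when the
    members behave like independent random functions.
    """
    n, b = q_values.shape
    # Standardize each member so the independence null model has roughly
    # i.i.d. zero-mean, unit-variance entries.
    q = (q_values - q_values.mean(dim=1, keepdim=True)) / (
        q_values.std(dim=1, keepdim=True) + eps
    )
    # Centered, rescaled sample covariance of the ensemble: for independent
    # rows and B >> N, the spectrum of sqrt(B/N) * (Q Q^T / B - I) approaches
    # the semicircle law on [-2, 2].
    wishart = (q @ q.T) / b
    m = math.sqrt(b / n) * (wishart - torch.eye(n, device=q.device, dtype=q.dtype))
    eigvals = torch.linalg.eigvalsh(m)  # differentiable for distinct eigenvalues
    # Cross-entropy against the semicircle density p(x) = sqrt(4 - x^2) / (2*pi);
    # clamping keeps eigenvalues outside [-2, 2] finitely penalized.
    log_density = 0.5 * torch.log(torch.clamp(4.0 - eigvals**2, min=eps)) - math.log(
        2 * math.pi
    )
    return -log_density.mean()

# Example: 10 Q-heads evaluated on a batch of 256 state-action pairs; in
# training, this term would be added (with some weight) to the TD loss.
q_heads = torch.randn(10, 256, requires_grad=True)
loss = spqr_regularizer(q_heads)
loss.backward()
```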
Submission Number: 10747