Is Q-learning an Ill-posed Problem?

Published: 22 Apr 2025, Last Modified: 12 Jul 2025 · The 33rd European Symposium on Artificial Neural Networks (ESANN 2025) · CC BY 4.0
Abstract: This paper investigates the instability of Q-learning in continuous environments, a challenge frequently encountered by practitioners. Traditionally, this instability is attributed to bootstrapping and regression model errors. Using a representative reinforcement learning benchmark, we systematically examine the effects of bootstrapping and model inaccuracies by incrementally eliminating these potential error sources. Our findings reveal that even in relatively simple benchmarks, the fundamental task of Q-learning, iteratively learning a Q-function from policy-specific target values, can be inherently ill-posed and prone to failure. These insights cast doubt on the reliability of Q-learning as a universal solution for reinforcement learning problems.
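For context, the iterative scheme the abstract refers to can be pictured as a fitted-Q-style loop that repeatedly regresses a Q-function onto bootstrapped target values. The sketch below is a minimal illustration under assumed choices (a linear feature map, a 1-D toy state space, random transitions); it is not the paper's benchmark or model, only the general update pattern whose stability is in question.

```python
import numpy as np

# Minimal sketch of the Q-learning / fitted-Q-iteration loop:
# regress a Q-function onto bootstrapped target values r + gamma * max_a' Q(s', a').
# All names, features, and hyperparameters here are illustrative assumptions.

rng = np.random.default_rng(0)
gamma = 0.99          # discount factor
n_actions = 2

def features(s, a):
    """Simple polynomial features for a 1-D continuous state and a discrete action."""
    return np.array([1.0, s, s**2, float(a), s * float(a)])

# Toy batch of transitions (s, a, r, s') gathered by some behaviour policy.
batch = [(rng.uniform(-1, 1), int(rng.integers(n_actions)), rng.normal(), rng.uniform(-1, 1))
         for _ in range(500)]

w = np.zeros(5)       # linear Q-function weights: Q(s, a) = w @ features(s, a)

for _ in range(50):   # fitted-Q-iteration sweeps
    X, y = [], []
    for s, a, r, s_next in batch:
        # Bootstrapped target: reward plus discounted value of the greedy next action,
        # evaluated with the *current* Q-function estimate.
        q_next = max(w @ features(s_next, b) for b in range(n_actions))
        X.append(features(s, a))
        y.append(r + gamma * q_next)
    X, y = np.asarray(X), np.asarray(y)
    # Regression step: refit Q to its own moving targets -- the interaction of this
    # regression with bootstrapping is the error source the paper dissects.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
```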