Confirmation: Yes
Keywords: Reinforcement learning, statistical inference, Q-learning
TL;DR: This paper introduces a statistical framework for sample-averaged Q-learning, using the functional central limit theorem to construct confidence intervals for Q-values, and tests it against vanilla Q-learning through experiments on two environments.
Abstract: Reinforcement learning algorithms have been widely used for decision-making tasks in various domains. However, the performance of these algorithms can suffer from high variance and instability, particularly in environments with noise or sparse rewards. In this paper, we propose a framework to perform statistical online inference for a sample-averaged Q-learning approach. We adapt the functional central limit theorem (FCLT) for the modified algorithm under some general conditions and then construct confidence intervals for the Q-values via random scaling. We conduct experiments to perform inference on both the modified approach and its traditional counterpart, vanilla Q-learning with random scaling, and report their coverage rates and confidence interval widths on two problems: a grid world problem as a simple toy example and a dynamic resource-matching problem as a real-world example, to compare the two solution approaches.
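To make the idea concrete, below is a minimal, hypothetical sketch of a sample-averaged Q-learning update, assuming "sample-averaged" means averaging the TD target over several sampled transitions from the same state-action pair before updating (the function and parameter names such as `sample_transition` and `n_samples` are illustrative, not the paper's actual implementation):

```python
# Illustrative sketch only: sample-averaged Q-learning update.
# Assumption: the TD target is averaged over n_samples i.i.d. transitions
# drawn from the same (state, action) pair to reduce variance.
import numpy as np

def sample_averaged_q_update(Q, state, action, sample_transition,
                             n_samples=10, alpha=0.1, gamma=0.99):
    """Average the TD target over n_samples transitions, then update Q in place."""
    targets = []
    for _ in range(n_samples):
        reward, next_state = sample_transition(state, action)
        targets.append(reward + gamma * np.max(Q[next_state]))
    avg_target = np.mean(targets)
    Q[state, action] += alpha * (avg_target - Q[state, action])
    return Q

# Toy usage on a hypothetical 2-state, 2-action chain with noisy rewards.
rng = np.random.default_rng(0)

def sample_transition(state, action):
    next_state = (state + action) % 2
    reward = float(next_state == 1) + rng.normal(scale=0.5)  # noisy reward
    return reward, next_state

Q = np.zeros((2, 2))
for _ in range(1000):
    s, a = rng.integers(2), rng.integers(2)
    Q = sample_averaged_q_update(Q, s, a, sample_transition)
print(Q)
```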
Submission Number: 14