Is Value Functions Estimation with Classification Plug-and-play for Offline Reinforcement Learning?

Published: 17 Jun 2024, Last Modified: 22 Jun 2024, AutoRL@ICML 2024, CC BY 4.0
Keywords: offline reinforcement learning
TL;DR: We extensively test replacing the MSE objective with cross-entropy for Q-function estimation in deep offline Reinforcement Learning
Abstract: In deep Reinforcement Learning (RL), value functions are typically approximated with deep neural networks trained via a mean squared error regression objective to fit the true value functions. Recent research has proposed an alternative approach that uses a cross-entropy classification objective instead, demonstrating improved performance and scalability of RL algorithms. However, the existing study did not extensively benchmark the effects of this replacement across domains, as its primary goal was to demonstrate the efficacy of the concept across a broad spectrum of tasks rather than to provide an in-depth analysis. Our work empirically investigates the impact of such a replacement in an offline RL setup and analyzes how its different aspects affect performance. Through large-scale experiments conducted across a diverse range of tasks and algorithms, we aim to gain deeper insights into the implications of this approach. Our results reveal that for some algorithms on certain tasks this change can yield performance superior to state-of-the-art solutions while remaining comparable on others, whereas for other algorithms the same modification can lead to a dramatic performance drop. These findings are crucial for further application of the classification approach in research and practical tasks. Our code is available at https://github.com/DT6A/ClORL
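For context, the modification under study replaces a scalar regression head with a categorical one. The sketch below is a minimal illustration of one such scheme, a two-hot discretization of the value target over a fixed support trained with cross-entropy, and is not the authors' implementation; the choice of PyTorch, the support range, and all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mse_value_loss(pred_value, target_value):
    # Standard regression objective: fit scalar Q targets with MSE.
    return F.mse_loss(pred_value, target_value)

def two_hot_targets(target_value, support):
    # Project each scalar target onto the two neighbouring atoms of the support
    # so that the expectation of the resulting categorical distribution
    # recovers the original scalar.
    target_value = target_value.clamp(support[0].item(), support[-1].item())
    idx = torch.searchsorted(support, target_value, right=True).clamp(1, len(support) - 1)
    lower, upper = support[idx - 1], support[idx]
    w_upper = (target_value - lower) / (upper - lower + 1e-8)
    probs = torch.zeros(target_value.shape[0], len(support), device=target_value.device)
    probs.scatter_(1, (idx - 1).unsqueeze(1), (1.0 - w_upper).unsqueeze(1))
    probs.scatter_(1, idx.unsqueeze(1), w_upper.unsqueeze(1))
    return probs

def cross_entropy_value_loss(pred_logits, target_value, support):
    # Classification objective: the critic outputs logits over value bins and
    # is trained with cross-entropy against the soft two-hot targets.
    target_probs = two_hot_targets(target_value, support)
    return -(target_probs * F.log_softmax(pred_logits, dim=-1)).sum(dim=-1).mean()

# Hypothetical usage: 256 transitions, 51 value atoms spanning [-100, 100].
support = torch.linspace(-100.0, 100.0, 51)
pred_logits = torch.randn(256, 51)       # critic head now emits logits, not a scalar
target_value = torch.randn(256) * 10.0   # stand-in for bootstrapped Q targets
loss = cross_entropy_value_loss(pred_logits, target_value, support)
# The scalar Q estimate is recovered as the expectation over the support.
q_estimate = (F.softmax(pred_logits, dim=-1) * support).sum(dim=-1)
```

In this framing, an offline RL algorithm keeps its bootstrapped targets unchanged and only swaps `mse_value_loss` for `cross_entropy_value_loss` (plus the categorical critic head), which is the kind of drop-in replacement whose plug-and-play behaviour the paper evaluates.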
Submission Number: 15