Is Value Functions Estimation with Classification Plug-and-play for Offline Reinforcement Learning?

Published: 16 Nov 2024, Last Modified: 16 Nov 2024
Accepted by TMLR
License: CC BY 4.0
Abstract: In deep Reinforcement Learning (RL), value functions are typically approximated with deep neural networks trained via a mean squared error regression objective to fit the true value function. Recent work has proposed an alternative: training with a cross-entropy classification objective, which has been shown to improve the performance and scalability of RL algorithms. However, existing studies have not extensively benchmarked the effects of this replacement across domains, as their primary goal was to demonstrate the efficacy of the idea across a broad spectrum of tasks rather than to analyze it in depth. Our work empirically investigates the impact of this replacement in an offline RL setup and analyzes how different design choices affect performance. Through large-scale experiments across a diverse range of tasks and algorithms, we aim to gain deeper insight into the implications of this approach. Our results show that, for some algorithms, the change yields performance superior to state-of-the-art solutions on certain tasks while remaining comparable on others; for other algorithms, however, it can cause a dramatic performance drop. These findings are important for the further application of the classification approach in research and practical tasks.
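To make the studied replacement concrete, below is a minimal sketch, not the authors' implementation, contrasting the standard MSE value-regression loss with a cross-entropy classification loss over a discretized value range (a "two-hot" target encoding, one common instantiation of the classification objective). All specifics here, including `num_bins`, `v_min`, `v_max`, and the helper names, are illustrative assumptions, not details from the paper or the ClORL codebase.

```python
import torch
import torch.nn.functional as F

# Assumed discretization of the value range for this sketch.
num_bins, v_min, v_max = 51, -10.0, 10.0
bin_values = torch.linspace(v_min, v_max, num_bins)  # scalar value each bin represents

def mse_value_loss(pred_value, target_value):
    # Standard regression objective: fit scalar value predictions directly.
    return F.mse_loss(pred_value, target_value)

def two_hot_value_loss(pred_logits, target_value):
    # Classification objective: spread each scalar target over its two
    # nearest bins ("two-hot" encoding), then minimize cross-entropy
    # between the predicted categorical distribution and that target.
    target_value = target_value.clamp(v_min, v_max)
    idx = torch.bucketize(target_value, bin_values).clamp(1, num_bins - 1)
    lo, hi = bin_values[idx - 1], bin_values[idx]
    w_hi = (target_value - lo) / (hi - lo)  # interpolation weight on the upper bin
    target_dist = torch.zeros(target_value.shape[0], num_bins)
    target_dist.scatter_(1, (idx - 1).unsqueeze(1), (1.0 - w_hi).unsqueeze(1))
    target_dist.scatter_(1, idx.unsqueeze(1), w_hi.unsqueeze(1))
    return F.cross_entropy(pred_logits, target_dist)

def logits_to_value(pred_logits):
    # Recover a scalar value estimate as the expectation over the bins,
    # so the rest of the RL algorithm can remain unchanged.
    return (F.softmax(pred_logits, dim=-1) * bin_values).sum(dim=-1)
```

The critic's output head changes from a single scalar to `num_bins` logits; in the HL-Gauss variant explored in prior work, the two-hot target is replaced by a Gaussian-smoothed histogram over the bins, while the cross-entropy loss itself stays the same.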
Submission Length: Regular submission (no more than 12 pages of main content)
Video: https://youtu.be/xwfQ2Oa6ycs?si=l3umhIAGUW19csYl
Code: https://github.com/DT6A/ClORL
Supplementary Material: zip
Assigned Action Editor: ~Dmitry_Kangin1
Submission Number: 3259