Deep autoregressive density nets vs neural ensembles for model-based offline reinforcement learning


22 Sept 2022, 12:42 (modified: 18 Nov 2022, 20:51) · ICLR 2023 Conference Blind Submission · Readers: Everyone
Keywords: Offline reinforcement learning, batch reinforcement learning, ensemble, autoregressive, D4RL, model-based
TL;DR: We show that in model-based offline reinforcement learning, better performance can be obtained with a single well-calibrated autoregressive system model than with the usual ensembles.
Abstract: We consider the problem of offline reinforcement learning, where only a set of system transitions is made available for policy optimization. Following recent advances in the field, we consider a model-based reinforcement learning algorithm that infers the system dynamics from the available data and performs policy optimization on imaginary model rollouts. This approach is vulnerable to exploiting model errors, which can lead to catastrophic failures on the real system. The standard solution is to rely on ensembles for uncertainty heuristics and to avoid exploiting the model where it is too uncertain. We challenge the popular belief that we must resort to ensembles by showing that better performance can be obtained with a single well-calibrated autoregressive model on the D4RL benchmark. We also analyze static metrics of model learning and identify the model properties that are important for the final performance of the agent.
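The ensemble-based approach the abstract refers to can be illustrated with a small sketch. This is not the paper's code; it is a toy, MOPO-style example (illustrative names and shapes throughout) of how ensemble disagreement is typically turned into an uncertainty penalty on imagined rewards, which is the mechanism the paper argues a single well-calibrated autoregressive model can replace:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_predict(state, action, n_members=5):
    """Stand-in for an ensemble of learned dynamics models.

    Each member returns a slightly different next-state prediction;
    real members would be independently trained neural networks.
    """
    mean = state + action  # toy dynamics: s' = s + a
    return np.stack([mean + 0.01 * rng.standard_normal(mean.shape)
                     for _ in range(n_members)])

def penalized_reward(state, action, reward, penalty_coef=1.0):
    """Uncertainty-penalized reward in the MOPO style:
    r_tilde = r - lambda * u(s, a), where u is the largest standard
    deviation across the ensemble members' next-state predictions.
    """
    preds = ensemble_predict(state, action)
    disagreement = preds.std(axis=0).max()
    return reward - penalty_coef * disagreement

s = np.zeros(3)
a = np.ones(3)
r_tilde = penalized_reward(s, a, reward=1.0)
# Wherever the ensemble members disagree, the penalized reward falls
# strictly below the raw reward, steering the policy away from regions
# where the learned model is likely wrong.
```

The single-model alternative studied in the paper instead relies on a well-calibrated autoregressive density over next states, so that the model's own predictive uncertainty (rather than cross-member disagreement) governs how much the policy trusts imagined rollouts.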
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (e.g., decision and control, planning, hierarchical RL, robotics)