Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation

Published: 22 Jul 2021, Last Modified: 16 Mar 2025
Venue: URL 2021 Poster
Keywords: reinforcement learning, model-based reinforcement learning, bi-level optimization
TL;DR: We propose a model learning method that directly optimizes the sum of rewards instead of likelihood, which is only a proxy for the agent's objective.
Abstract: The shortcomings of maximum likelihood estimation in the context of model-based reinforcement learning have been highlighted by an increasing number of papers. When the model class is misspecified or has limited representational capacity, model parameters with high likelihood need not result in high performance of the agent on a downstream control task. To alleviate this problem, we propose an end-to-end approach for model learning which directly optimizes the expected returns using implicit differentiation. We treat a value function that satisfies the Bellman optimality operator induced by the model as an implicit function of model parameters and show how to differentiate it. We provide theoretical and empirical evidence highlighting the benefits of our approach in the model misspecification regime compared to likelihood-based methods.
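The core technical step, differentiating a Bellman fixed point with respect to model parameters, can be illustrated with a small tabular sketch. The snippet below is not the authors' implementation: the MDP sizes, the reward table, the softmax-logit transition parameterization, and the outer objective are assumptions made for illustration. It applies the implicit function theorem at the value-iteration fixed point, dQ*/dθ = (I - ∂T/∂Q)^{-1} ∂T/∂θ, and contracts it with an outer loss gradient via an adjoint linear solve.

```python
# A minimal sketch (not the authors' code): treat Q*, the fixed point of the
# Bellman optimality operator induced by a learned tabular model P_theta, as an
# implicit function of the model parameters theta and differentiate it with the
# implicit function theorem. Sizes, rewards, and the loss are illustrative.
import jax
import jax.numpy as jnp

S, A, GAMMA = 4, 2, 0.9
key = jax.random.PRNGKey(0)
r = jax.random.uniform(key, (S, A))        # known reward table (assumption)
theta = jnp.zeros((S, A, S))               # model parameters: transition logits

def bellman(Q, theta):
    """Bellman optimality operator T_theta(Q) under the learned model P_theta."""
    P = jax.nn.softmax(theta, axis=-1)     # P_theta(s' | s, a)
    v_next = Q.max(axis=-1)                # max_a' Q(s', a')
    return r + GAMMA * jnp.einsum('sap,p->sa', P, v_next)

def solve_Q(theta, iters=500):
    """Find the fixed point Q* = T_theta(Q*) by plain value iteration."""
    Q = jnp.zeros((S, A))
    for _ in range(iters):
        Q = bellman(Q, theta)
    return Q

def implicit_grad(theta, dL_dQ):
    """Given dL/dQ*, return dL/dtheta via the implicit function theorem:
    dQ*/dtheta = (I - dT/dQ)^{-1} dT/dtheta, so
    dL/dtheta = dL/dQ* (I - dT/dQ)^{-1} dT/dtheta."""
    Q_star = solve_Q(theta)
    J_Q = jax.jacobian(bellman, argnums=0)(Q_star, theta).reshape(S * A, S * A)
    J_th = jax.jacobian(bellman, argnums=1)(Q_star, theta).reshape(S * A, -1)
    # Adjoint solve: avoids forming the full dQ*/dtheta Jacobian explicitly.
    v = jnp.linalg.solve((jnp.eye(S * A) - J_Q).T, dL_dQ.reshape(-1))
    return (v @ J_th).reshape(theta.shape)

# Illustrative outer objective: the model-induced value of state 0. The paper
# instead plugs the policy derived from Q* into the agent's true expected return.
Q_star = solve_Q(theta)
dL_dQ = jax.grad(lambda Q: Q[0].max())(Q_star)
print(implicit_grad(theta, dL_dQ))
```

The adjoint formulation solves one linear system of size |S||A| per outer gradient rather than materializing dQ*/dθ, which is the standard way such implicit gradients are made tractable; the paper's end-to-end method chains this model-parameter gradient with the agent's return objective.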
Community Implementations: 4 code implementations (https://www.catalyzex.com/paper/control-oriented-model-based-reinforcement/code)