Mismatched No More: Joint Model-Policy Optimization for Model-Based RL

12 Oct 2021 (modified: 08 Sept 2024) · Deep RL Workshop NeurIPS 2021
Keywords: reinforcement learning, model-based RL, GAN
TL;DR: A model-based RL method where the model and policy jointly optimize the same objective: a lower bound on expected return.
Abstract: Many model-based reinforcement learning (RL) methods follow a similar template: fit a model to previously observed data, and then use data from that model for RL or planning. However, models that achieve better training performance (e.g., lower MSE) are not necessarily better for control: an RL agent may seek out the small fraction of states where an accurate model makes mistakes, or it might act in ways that do not expose the errors of an inaccurate model. As noted in prior work, there is an objective mismatch: models are useful if they yield good policies, but they are trained to maximize their accuracy, rather than the performance of the policies that result from them. In this work we propose a model-learning objective that directly optimizes a model to be useful for model-based RL. This objective, which depends on samples from the learned model, is a (global) lower bound on the expected return in the real environment. We jointly optimize the policy and model using this one objective, thus mending the objective mismatch in prior work. The resulting algorithm (MnM) is conceptually similar to a GAN: a classifier distinguishes between real and fake transitions, the model is updated to produce transitions that look realistic, and the policy is updated to avoid states where the model predictions are unrealistic. Our theory justifies the intuition that the best dynamics for learning a good policy are not necessarily the correct dynamics.
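The abstract's GAN analogy can be made concrete with a small sketch. The snippet below (assuming PyTorch; the network sizes, `reward_fn`, and the simplified pathwise policy update are illustrative assumptions, not the authors' reference implementation of MnM) shows the three coupled updates: a classifier learns to distinguish real transitions from model-generated ones, the model is updated so its transitions look realistic to the classifier, and the policy is trained on model transitions with the reward augmented by a realism term, so it avoids states where the model's predictions are unrealistic.

```python
# Minimal sketch of the GAN-like joint update described above (PyTorch).
# Shapes, networks, and reward_fn are hypothetical, for illustration only.
import torch
import torch.nn as nn

S, A = 8, 2  # illustrative state / action dimensions

# Classifier over transitions (s, a, s'), dynamics model (s, a) -> s', deterministic policy.
classifier = nn.Sequential(nn.Linear(2 * S + A, 64), nn.ReLU(), nn.Linear(64, 1))
model = nn.Sequential(nn.Linear(S + A, 64), nn.ReLU(), nn.Linear(64, S))
policy = nn.Sequential(nn.Linear(S, 64), nn.ReLU(), nn.Linear(64, A))

opt_c = torch.optim.Adam(classifier.parameters(), lr=3e-4)
opt_m = torch.optim.Adam(model.parameters(), lr=3e-4)
opt_p = torch.optim.Adam(policy.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def update(real_s, real_a, real_next_s, reward_fn):
    # 1) Classifier: label real transitions 1, model-generated transitions 0.
    fake_next_s = model(torch.cat([real_s, real_a], -1)).detach()
    real_logits = classifier(torch.cat([real_s, real_a, real_next_s], -1))
    fake_logits = classifier(torch.cat([real_s, real_a, fake_next_s], -1))
    loss_c = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # 2) Model: produce next states the classifier judges realistic.
    fake_next_s = model(torch.cat([real_s, real_a], -1))
    fake_logits = classifier(torch.cat([real_s, real_a, fake_next_s], -1))
    loss_m = bce(fake_logits, torch.ones_like(fake_logits))
    opt_m.zero_grad(); loss_m.backward(); opt_m.step()

    # 3) Policy: maximize reward on model transitions plus a realism term.
    #    The classifier logit equals log C/(1-C), so transitions the classifier
    #    deems unrealistic are penalized. (A full algorithm would feed this
    #    augmented reward to a standard RL update; the pathwise gradient
    #    through the model here is only for brevity.)
    a = policy(real_s)
    next_s = model(torch.cat([real_s, a], -1))
    realism = classifier(torch.cat([real_s, a, next_s], -1)).squeeze(-1)
    loss_p = -(reward_fn(real_s, a, next_s) + realism).mean()
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

# Example call with random data and a dummy reward:
s, a, s2 = torch.randn(32, S), torch.randn(32, A), torch.randn(32, S)
update(s, a, s2, reward_fn=lambda s, a, s2: -s2.pow(2).sum(-1))
```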
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/mismatched-no-more-joint-model-policy/code)
