When to Update Your Model: Constrained Model-based Reinforcement Learning

Published: 31 Oct 2022, Last Modified: 12 Jan 2023
NeurIPS 2022 Accept
Readers: Everyone
Keywords: Model-Based Reinforcement Learning, monotonic improvements, event-triggered mechanism
TL;DR: We analyze the monotonicity of optimization for MBRL algorithms under a novel and general scheme, and build on this analysis to develop CMLO, an algorithm equipped with an event-triggered mechanism that learns the model from a dynamically varying number of explorations.
Abstract: Designing and analyzing model-based RL (MBRL) algorithms with guaranteed monotonic improvement has been challenging, mainly due to the interdependence between policy optimization and model learning. Existing discrepancy bounds generally ignore the impact of model shifts, and the corresponding algorithms are prone to performance degradation caused by drastic model updates. In this work, we first propose a novel and general theoretical scheme for a non-decreasing performance guarantee in MBRL. The bounds we subsequently derive reveal the relationship between model shifts and performance improvement. These findings motivate a constrained lower-bound optimization problem that ensures the monotonicity of MBRL. A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns. Motivated by these analyses, we design a simple but effective algorithm, CMLO (Constrained Model-shift Lower-bound Optimization), which introduces an event-triggered mechanism that flexibly determines when to update the model. Experiments show that CMLO surpasses other state-of-the-art methods and yields a boost when various policy optimization methods are employed.
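
The following is a minimal, hypothetical sketch of the event-triggered training loop the abstract describes: the policy is optimized continuously from model rollouts, while the dynamics model is refit only when an estimated model shift crosses a threshold, so the number of real-environment explorations between model updates varies dynamically. It assumes a gym-style env and placeholder policy/model objects; the helper names (estimate_model_shift, shift_threshold) are illustrative and not taken from the paper.

```python
import numpy as np

def train_cmlo_sketch(env, policy, model, shift_threshold=0.1, total_steps=100_000):
    """Sketch of an event-triggered MBRL loop (not the authors' implementation)."""
    buffer = []                      # transitions collected since the last model update
    state = env.reset()
    for step in range(total_steps):
        action = policy.act(state)
        next_state, reward, done, _ = env.step(action)
        buffer.append((state, action, reward, next_state))
        state = env.reset() if done else next_state

        # Policy is improved at every step using rollouts from the learned model.
        policy.update(model)

        # Event-triggered mechanism: refit the model only when the estimated shift
        # between the real dynamics and the learned model exceeds a threshold.
        if estimate_model_shift(model, buffer) > shift_threshold:
            model.fit(buffer)
            buffer = []              # accumulate fresh evidence for the next trigger

def estimate_model_shift(model, buffer):
    """Illustrative proxy for model shift: mean one-step prediction error on new data."""
    if not buffer:
        return 0.0
    errors = [np.linalg.norm(model.predict(s, a) - s_next)
              for s, a, _, s_next in buffer]
    return float(np.mean(errors))
```

In this sketch the trigger condition stands in for the paper's constrained lower-bound criterion; the key design point it illustrates is that model updates are driven by a measured condition rather than a fixed schedule.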
Supplementary Material: pdf
