Model-Based Policy Search Using Monte Carlo Gradient Estimation With Real Systems Application

Published: 01 Jan 2022 · Last Modified: 19 Feb 2025 · IEEE Trans. Robotics 2022 · License: CC BY-SA 4.0
Abstract: In this article, we present a model-based reinforcement learning (MBRL) algorithm named Monte Carlo probabilistic inference for learning control (MC-PILCO). The algorithm relies on Gaussian processes (GPs) to model the system dynamics and on a Monte Carlo approach to estimate the policy gradient. This defines a framework in which we ablate the choice of the following components: the cost function, the use of dropout during policy optimization, and structured kernels in the GP models to improve data efficiency. The combination of these aspects dramatically affects the performance of MC-PILCO. Numerical comparisons in a simulated cart–pole environment show that MC-PILCO exhibits better data efficiency and control performance than state-of-the-art GP-based MBRL algorithms. Finally, we apply MC-PILCO to real systems, considering in particular systems with partially measurable states. We discuss the importance of modeling both the measurement system and the state estimators during policy optimization. The effectiveness of the proposed solutions has been tested in simulation and on two real systems: a Furuta pendulum and a ball-and-plate rig.
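To make the core idea concrete, the sketch below illustrates Monte Carlo policy-gradient estimation through a learned GP dynamics model: particles are rolled out through the GP posterior using reparameterized samples, and the gradient of the expected cost is obtained by backpropagation through the rollout. This is a minimal illustration in the spirit of MC-PILCO, not the authors' implementation: the toy 1-D system, the linear policy, the quadratic cost, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch of Monte Carlo policy-gradient estimation through a GP
# dynamics model (reparameterization trick). Illustrative only; the toy
# system, policy, cost, and hyperparameters are assumptions, not MC-PILCO.
import torch

torch.manual_seed(0)

# --- Toy training data from an unknown 1-D system x' = f(x, u) + noise ---
X = torch.rand(50, 2) * 4 - 2                      # inputs: (state, action)
y = 0.9 * X[:, 0] + 0.5 * torch.sin(X[:, 1]) + 0.05 * torch.randn(50)

# --- GP posterior with a fixed RBF kernel (hyperparameters assumed) ---
lengthscale, signal_var, noise_var = 1.0, 1.0, 0.05 ** 2

def rbf(A, B):
    d2 = torch.cdist(A, B) ** 2
    return signal_var * torch.exp(-0.5 * d2 / lengthscale ** 2)

K = rbf(X, X) + noise_var * torch.eye(len(X))
L = torch.linalg.cholesky(K)
alpha = torch.cholesky_solve(y.unsqueeze(1), L)    # K^{-1} y

def gp_predict(Z):
    """Posterior mean and variance of the next state at test inputs Z."""
    Kzx = rbf(Z, X)
    mean = (Kzx @ alpha).squeeze(1)
    v = torch.cholesky_solve(Kzx.T, L)
    var = signal_var - (Kzx * v.T).sum(1)
    return mean, var.clamp_min(1e-8)

# --- Simple differentiable policy u = w * x (parameter w is learned) ---
w = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.05)

P, T = 200, 15                                     # particles, horizon
for it in range(100):
    x = torch.randn(P) * 0.5 + 1.0                 # initial-state particles
    cost = 0.0
    for _ in range(T):
        u = w * x
        mean, var = gp_predict(torch.stack([x, u], dim=1))
        # Reparameterized sample keeps the rollout differentiable in w.
        x = mean + var.sqrt() * torch.randn(P)
        cost = cost + (x ** 2).mean()              # drive the state to 0
    opt.zero_grad()
    cost.backward()   # Monte Carlo estimate of the policy gradient
    opt.step()
```

Averaging the cost over many particles and differentiating through the reparameterized samples is what makes the gradient estimate low-variance enough for stochastic optimization; the full algorithm additionally handles multi-dimensional states, richer policies, and the cost, dropout, and kernel choices ablated in the article.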