Keywords: Sim2Real, Reinforcement Learning
Abstract: In recent years, reinforcement learning (RL) has shown remarkable success in robotics when a fast and accurate simulator is available for a given task.
When using RL and simulation, greater simulator realism is generally beneficial but becomes harder to obtain as robots are deployed in increasingly complex and large-scale domains. In such settings, simulators will likely fail to model all relevant details of a given target task. In this paper, we formalize and study the abstract sim2real problem: given an abstract simulator that models a target task at a coarse level of abstraction, how can we train a policy with RL in the abstract simulator and successfully transfer it to the real world?
We formalize this problem using the language of state abstraction from the RL literature. This framing shows that an abstract simulator can be grounded to match the target task if the abstract dynamics take the history of states into account. Building on this formalism, we introduce a method that uses a small amount of real-world task data to learn a correction to the dynamics of the abstract simulator. We show that this method enables successful policy transfer in both sim2sim and sim2real evaluations.
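Below is a minimal, hypothetical sketch of the dynamics-correction idea the abstract describes: fit a residual model on a small set of real-world transitions so that the corrected abstract simulator better matches the real dynamics. The names (`f_sim`, `fit_residual_correction`) and the linear least-squares residual model are illustrative assumptions, not the paper's actual method; the paper's formalism would additionally condition the correction on the history of states rather than only the current state.

```python
# Illustrative sketch only (NOT the paper's exact algorithm): learn a
# residual correction g(s, a) ≈ s'_real - f_sim(s, a) from a small set of
# real-world transitions, then use f_sim(s, a) + g(s, a) as the grounded
# abstract simulator. All names here are hypothetical.
import numpy as np

def fit_residual_correction(states, actions, next_states, f_sim):
    """Least-squares fit of a linear residual dynamics model on real data.

    states:      (N, ds) array of real-world states
    actions:     (N, da) array of actions taken
    next_states: (N, ds) array of observed real next states
    f_sim:       abstract simulator step, f_sim(s, a) -> predicted next state
    Returns a grounded step function: s, a -> corrected next state.
    """
    X = np.hstack([states, actions])  # (N, ds + da) regression inputs
    # Residual between what really happened and what the abstract sim predicts.
    sim_preds = np.array([f_sim(s, a) for s, a in zip(states, actions)])
    residuals = next_states - sim_preds
    # Append a bias column and solve the least-squares problem.
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X1, residuals, rcond=None)  # (ds + da + 1, ds)

    def grounded_step(s, a):
        features = np.append(np.concatenate([s, a]), 1.0)
        return f_sim(s, a) + features @ W

    return grounded_step

# Usage (hypothetical): grounded_step = fit_residual_correction(S, A, S_next, f_sim)
# The RL policy would then be trained against grounded_step instead of f_sim.
```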
Submission Number: 71