Temporal Difference Models: Model-Free Deep RL for Model-Based Control

15 Feb 2018 (modified: 25 Feb 2018) · ICLR 2018 Conference Blind Submission
Abstract: Model-free reinforcement learning (RL) has proven to be a powerful, general tool for learning complex behaviors. However, its sample complexity is often impractically high for solving challenging real-world problems, even for off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples. Model-based RL uses this information by training a predictive model, but often does not achieve the same asymptotic performance as model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods. Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods.
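As a rough illustration of the idea described in the abstract (a sketch, not the paper's exact formulation), the Python snippet below shows how a goal- and horizon-conditioned Q-function might be bootstrapped: when the remaining horizon reaches zero, the target is the negative distance between the reached state and the goal; otherwise the value is backed up from the next state with the horizon decremented. The names tdm_target, q_fn, and candidate_actions are hypothetical and stand in for whatever function approximator and action-selection scheme is used.

    # Hypothetical sketch of a goal- and horizon-conditioned Bellman target,
    # assuming q_fn(state, action, goal, tau) returns a scalar value estimate.
    import numpy as np

    def tdm_target(q_fn, next_state, goal, tau, candidate_actions):
        # Terminal horizon: the target is the negative distance to the goal,
        # i.e. how close the reached state is to the commanded goal state.
        if tau == 0:
            return -np.linalg.norm(np.asarray(next_state) - np.asarray(goal))
        # Otherwise, bootstrap from the best candidate action at the next
        # state, with one fewer step remaining in the horizon.
        return max(q_fn(next_state, a, goal, tau - 1) for a in candidate_actions)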
TL;DR: We show that a goal-conditioned value function trained with model-free methods can be used within model-based control, resulting in substantially better sample efficiency and performance.
Keywords: model-based reinforcement learning, model-free reinforcement learning, temporal difference learning, predictive learning, predictive models, optimal control, off-policy reinforcement learning, deep learning, deep reinforcement learning, Q-learning