TD-MPC2: Scalable, Robust World Models for Continuous Control

Published: 07 Nov 2023, Last Modified: 24 Nov 2023, FMDM@NeurIPS 2023
Keywords: reinforcement learning, model-based reinforcement learning, world models
TL;DR: TD-MPC2 is a scalable, robust model-based RL algorithm that can be applied to diverse single-task and multi-task continuous control domains with a single set of hyperparameters.
Abstract: TD-MPC is a model-based reinforcement learning (RL) algorithm that performs local trajectory optimization in the latent space of a learned implicit (decoder-free) world model. In this work, we present TD-MPC2: a series of improvements upon the TD-MPC algorithm. We demonstrate that TD-MPC2 improves significantly over baselines across 104 online RL tasks spanning 4 diverse task domains, achieving consistently strong results with a single set of hyperparameters. We further show that agent capabilities increase with model and data size, and successfully train a single 317M parameter agent to perform 80 tasks across multiple task domains, embodiments, and action spaces. We conclude with an account of lessons, opportunities, and risks associated with large TD-MPC2 agents. Explore videos, models, data, code, and more at
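The core idea of planning by local trajectory optimization in the latent space of a learned world model can be illustrated with a toy sketch. The snippet below is not the authors' implementation: the `dynamics`, `reward`, and `value` functions are hypothetical linear/quadratic stand-ins for learned networks, and the planner is a generic CEM-style sampler rather than TD-MPC2's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a learned implicit (decoder-free) world model:
# latent dynamics, reward head, and value head (all hypothetical).
LATENT_DIM, ACTION_DIM = 4, 2
A = np.eye(LATENT_DIM) * 0.95
B = rng.normal(size=(LATENT_DIM, ACTION_DIM)) * 0.1

def dynamics(z, a):
    # Predict the next latent state from the current latent and action.
    return z @ A.T + a @ B.T

def reward(z, a):
    # Predicted per-step reward in latent space.
    return -(z ** 2).sum(-1) - 0.01 * (a ** 2).sum(-1)

def value(z):
    # Terminal value estimate used to bootstrap beyond the horizon.
    return -(z ** 2).sum(-1)

def plan(z0, horizon=5, samples=256, elites=32, iters=4):
    """Sample action sequences, roll them out entirely in latent space,
    and refit a Gaussian to the top-scoring (elite) sequences."""
    mu = np.zeros((horizon, ACTION_DIM))
    std = np.ones((horizon, ACTION_DIM))
    for _ in range(iters):
        acts = mu + std * rng.normal(size=(samples, horizon, ACTION_DIM))
        z = np.repeat(z0[None], samples, axis=0)
        ret = np.zeros(samples)
        for t in range(horizon):
            ret += reward(z, acts[:, t])
            z = dynamics(z, acts[:, t])
        ret += value(z)  # bootstrap the return with the value head
        elite = acts[np.argsort(ret)[-elites:]]
        mu, std = elite.mean(0), elite.std(0) + 1e-6
    return mu[0]  # MPC: execute only the first planned action

z0 = rng.normal(size=LATENT_DIM)
a0 = plan(z0)
```

At each environment step the planner is rerun from the new latent state, which is what makes the optimization "local": only a short action sequence is optimized, with a learned value function accounting for rewards beyond the planning horizon.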
Submission Number: 28