From Multimodal LLMs to Generalist Embodied Agents: Methods and Lessons
Abstract: We examine the capability of Multimodal Large Language Models (MLLMs) to tackle diverse domains that extend beyond the traditional language and vision tasks these models are typically trained on. Specifically, our focus lies in areas such as Embodied AI, Games, UI Control, and Planning. To this end, we introduce a process for adapting an MLLM into a Generalist Embodied Agent (GEA). GEA is a single unified model capable of grounding itself across these varied domains through a multi-embodiment action tokenizer. GEA is trained with supervised learning on a large dataset of embodied experiences and with online RL in interactive simulators. We explore the data and algorithmic choices necessary to develop such a model. Our findings reveal the importance of training with cross-domain data and online RL for building generalist agents. The final GEA model achieves strong generalization performance on unseen tasks across diverse benchmarks compared to other generalist models and benchmark-specific approaches.
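To make the abstract's central mechanism concrete, below is a minimal illustrative sketch of what a multi-embodiment action tokenizer could look like: continuous robot actions are discretized into uniform bins while discrete actions (e.g., UI clicks) receive dedicated ids, all within one shared vocabulary so a single MLLM output head can act in any embodiment. The class name, bin count, discrete vocabulary, and uniform-binning scheme are assumptions for illustration; the paper's actual tokenizer may use a different (e.g., learned) discretization.

```python
import numpy as np

class MultiEmbodimentActionTokenizer:
    """Illustrative sketch (not the paper's implementation): maps actions
    from different embodiments into one shared token vocabulary."""

    def __init__(self, num_bins=256, discrete_vocab=("CLICK", "SCROLL", "TYPE")):
        self.num_bins = num_bins
        # Ids [0, num_bins) are reserved for continuous-action bins;
        # discrete actions get the ids after them.
        self.discrete_to_id = {name: num_bins + i for i, name in enumerate(discrete_vocab)}
        self.vocab_size = num_bins + len(discrete_vocab)

    def encode_continuous(self, action, low=-1.0, high=1.0):
        """Clip each action dimension to [low, high] and bucket it into a bin id."""
        action = np.clip(np.asarray(action, dtype=np.float32), low, high)
        bins = ((action - low) / (high - low) * (self.num_bins - 1)).round()
        return bins.astype(np.int64).tolist()

    def decode_continuous(self, token_ids, low=-1.0, high=1.0):
        """Map bin ids back to continuous values (inverse of encode_continuous)."""
        ids = np.asarray(token_ids, dtype=np.float32)
        return (ids / (self.num_bins - 1) * (high - low) + low).tolist()

    def encode_discrete(self, name):
        """Look up the dedicated token id for a discrete action."""
        return self.discrete_to_id[name]


tokenizer = MultiEmbodimentActionTokenizer()
# A 4-DoF continuous manipulation action becomes 4 tokens...
tokens = tokenizer.encode_continuous([0.1, -0.5, 0.9, 0.0])
# ...and a UI action becomes a single token from the same vocabulary.
click = tokenizer.encode_discrete("CLICK")
print(tokens, click, tokenizer.decode_continuous(tokens))
```

The design point this sketch captures is that a shared action vocabulary lets one autoregressive model ground itself in robotics, games, and UI control alike: the model always emits token ids, and a per-embodiment decoder turns those ids back into executable actions.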