STEVE-1: A Generative Model for Text-to-Behavior in Minecraft (Abridged Version)

Published: 03 Nov 2023, Last Modified: 27 Nov 2023 · GCRL Workshop
Keywords: minecraft, instruction following, foundation models, sequence models, reinforcement learning, sequential decision making, goal conditioned reinforcement learning, text conditioned reinforcement learning, transformers, deep learning
TL;DR: We train an instruction-following agent for Minecraft by finetuning VPT with both text and visual goals and hindsight relabeling.
Abstract: Constructing AI models that respond to text instructions is challenging, especially for sequential decision-making tasks. This work introduces an instruction-tuned Video Pretraining (VPT) model for Minecraft called STEVE-1, demonstrating that the unCLIP approach, utilized in DALL·E 2, is also effective for creating instruction-following sequential decision-making agents. By leveraging pretrained models like VPT and MineCLIP and employing best practices from text-conditioned image generation, STEVE-1 costs just $60 to train and can follow a wide range of short-horizon open-ended text and visual instructions in Minecraft. STEVE-1 sets a new bar for open-ended instruction following in Minecraft with low-level controls (mouse and keyboard) and raw pixel inputs, far outperforming previous baselines. All resources, including our model weights, training scripts, and evaluation tools, are made available for further research.
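The abstract's unCLIP-style pipeline can be illustrated with a minimal sketch: at inference time, a text instruction is embedded (the paper uses MineCLIP's text encoder), a learned prior translates that text embedding into a plausible visual goal embedding, and a goal-conditioned policy (finetuned from VPT) acts on it. All function names, the stand-in encoders, and the embedding dimension here are hypothetical placeholders for illustration only, not the authors' implementation:

```python
import numpy as np

EMBED_DIM = 512  # assumed goal-embedding size; placeholder, not from the paper

rng = np.random.default_rng(0)

def mineclip_text_embed(text: str) -> np.ndarray:
    """Stand-in for a frozen text encoder (hypothetical): text -> embedding."""
    seed = sum(ord(c) for c in text) % (2**32)  # deterministic toy seed
    return np.random.default_rng(seed).standard_normal(EMBED_DIM)

def prior_sample(text_embed: np.ndarray) -> np.ndarray:
    """Stand-in for the learned unCLIP-style prior: maps a text embedding
    to a sampled visual goal embedding (here, just noisy identity)."""
    return text_embed + 0.1 * rng.standard_normal(EMBED_DIM)

def policy_step(obs, goal_embed: np.ndarray) -> dict:
    """Stand-in for the goal-conditioned VPT policy: one low-level action."""
    return {"attack": int(goal_embed[0] > 0)}

# Two-stage inference: text -> visual goal embedding -> action.
goal = prior_sample(mineclip_text_embed("chop a tree"))
action = policy_step(obs=None, goal_embed=goal)
```

The key design point this sketch mirrors is that the policy never sees text directly: it is trained against visual goal embeddings (enabling hindsight relabeling from gameplay), and the prior bridges the text and visual embedding spaces at inference time.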
Submission Number: 36