TL;DR: Hi Robot enables robots to follow open-ended, complex instructions, adapt to feedback, and interact with humans.
Abstract: Generalist robots that can perform a range of tasks in open-world settings must be able not only to reason about the steps needed to accomplish their goals, but also to process complex instructions, prompts, and even feedback during task execution. Intricate instructions (e.g., "Could you make me a vegetarian sandwich?" or "I don't like that one") require not just the ability to physically perform the individual steps, but also the ability to situate complex commands and feedback in the physical world. In this work, we describe a system that uses vision-language models in a hierarchical structure, first reasoning over complex prompts and user feedback to deduce the most appropriate next step to fulfill the task, and then performing that step with low-level actions. In contrast to direct instruction-following methods that can fulfill simple commands ("pick up the cup"), our system can reason through complex prompts and incorporate situated feedback during task execution ("that's not trash"). We evaluate our system across three robotic platforms, including single-arm, dual-arm, and dual-arm mobile robots, demonstrating its ability to handle tasks such as cleaning messy tables, making sandwiches, and grocery shopping.
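To make the hierarchy concrete, the sketch below shows one way such a two-rate control loop could be structured: a slow, deliberative vision-language model periodically converts the open-ended prompt (and any interjected user feedback) into an atomic language command, while a fast low-level policy maps that command and the current observation to motor actions at every tick. All names here (`HighLevelVLM`, `LowLevelPolicy`, the replanning rate) are hypothetical placeholders for illustration, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class HighLevelVLM:
    """Hypothetical stand-in for the slow, deliberative vision-language model."""

    def next_step(self, prompt: str, image, feedback: Optional[str]) -> str:
        # In the real system this would query a VLM; here we just echo a step.
        if feedback:
            return f"revise plan given feedback: {feedback!r}"
        return f"next atomic step toward: {prompt!r}"


@dataclass
class LowLevelPolicy:
    """Hypothetical stand-in for the fast instruction-following policy."""

    def act(self, command: str, image) -> List[float]:
        # In the real system this would output robot joint/end-effector actions.
        return [0.0] * 7  # placeholder 7-DoF action


def run_episode(prompt: str, steps: int = 100, replan_every: int = 20):
    """Two-rate loop: replan with the VLM occasionally, act with the policy every tick."""
    high, low = HighLevelVLM(), LowLevelPolicy()
    image, feedback = None, None  # stand-ins for camera frames and user speech
    command = high.next_step(prompt, image, feedback)
    for t in range(steps):
        if t > 0 and t % replan_every == 0:
            command = high.next_step(prompt, image, feedback)
            feedback = None  # feedback is consumed once incorporated
        action = low.act(command, image)
        # send `action` to the robot here


run_episode("make me a vegetarian sandwich")
```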
Videos are available at https://www.pi.website/research/hirobot
Lay Summary: Imagine teaching a robot to cook a new dish by having it talk through each step, the same way you do with that little voice in your head. Our "Hi Robot" system gives machines two modes: a fast, instinctive layer that handles familiar actions like picking up objects, and a slower, thoughtful layer that breaks complicated requests (like "make me a sandwich without tomatoes" or "only pick up the trash, not the dishes") into simple steps. The thoughtful layer literally "whispers" instructions to the fast layer, guiding the robot through the task and adapting if you say things like "that's not trash." We trained the robot by generating many example conversations between people and robots, so it learned to understand and respond to complex prompts and mid-task corrections. On real-world chores such as bussing tables, making sandwiches, and shopping for groceries, Hi Robot followed instructions far more accurately than previous methods, showing that giving robots an inner voice and the ability to think through problems makes them much more flexible and reliable in everyday settings.
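The training recipe mentioned above (generating example conversations) might look roughly like the following sketch, which pairs recorded step labels from a demonstration with synthesized user prompts and occasional mid-task corrections. The templates, sampling probability, and function names are all illustrative assumptions, not the authors' actual data pipeline.

```python
import json
import random

# Hypothetical templates for synthesizing user prompts and mid-task corrections;
# the paper's real pipeline is not specified here, this only illustrates the
# general idea of pairing demonstration segments with plausible dialogue.
PROMPTS = [
    "make me a sandwich without tomatoes",
    "only pick up the trash, not the dishes",
]
CORRECTIONS = ["that's not trash", "I don't like that one"]


def synthesize_dialogue(episode_steps, rng=random):
    """Attach a synthetic prompt, and occasionally a correction, to an episode.

    `episode_steps` is a list of atomic step labels (e.g. "pick up the cup")
    recorded from a demonstration; the output pairs each step with the dialogue
    context a high-level model could see when learning to predict that step.
    """
    prompt = rng.choice(PROMPTS)
    examples = []
    for i, step in enumerate(episode_steps):
        feedback = rng.choice(CORRECTIONS) if rng.random() < 0.2 else None
        examples.append(
            {"prompt": prompt, "feedback": feedback, "target_step": step, "t": i}
        )
    return examples


demo = ["pick up the bread", "add lettuce", "close the sandwich"]
print(json.dumps(synthesize_dialogue(demo), indent=2))
```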
Primary Area: Applications->Robotics
Keywords: Machine Learning, Robotics, Language, Vision-Language Models
Submission Number: 14903