Towards Interpretable Foundation Models of Robot Behavior: A Task Specific Policy Generation Approach
Track Selection: Short paper track.
Keywords: robot foundation models, reinforcement learning, user-centered learning, human-robot interaction
TL;DR: We consider and discuss limitations of the "generalist policy" approach to robot foundation models. We present a more modular and interpretable alternative, Diffusion for Policy Parameters (DPP), which generates standalone RL-like policies.
Abstract: Foundation models are a promising path toward general-purpose and user-friendly robots. The prevalent approach involves training a "generalist policy" that, like a reinforcement learning policy, uses observations to output actions. Although this approach has seen much success, several concerns arise when considering deployment and end-user interaction with these systems. In particular, the lack of modularity between tasks means that when model weights are updated (e.g., when a user provides feedback), the behavior on other, unrelated tasks may be affected. This can negatively impact the system's interpretability and usability. We present an alternative approach to the design of robot foundation models, Diffusion for Policy Parameters (DPP), which generates standalone, task-specific policies. Since these policies are detached from the foundation model, they are updated only when a user wants, either through feedback or personalization, allowing users to gain a high degree of familiarity with each policy. We demonstrate a proof-of-concept of DPP in simulation, then discuss its limitations and the future of interpretable foundation models.
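To make the core idea concrete, the sketch below (our own illustration, not the paper's implementation) shows a DDPM-style denoiser that generates a flattened vector of policy-network weights conditioned on a task embedding; the sampled vector is then loaded into a small standalone policy. All module names, dimensions, and the noise schedule are illustrative assumptions.

```python
# Illustrative sketch of diffusion over policy parameters, assuming a
# DDPM-style denoiser and a small MLP policy. Not the authors' code.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HIDDEN = 8, 2, 32
TASK_DIM, T_STEPS = 16, 50

# The standalone, task-specific policy whose parameters we generate.
policy = nn.Sequential(nn.Linear(OBS_DIM, HIDDEN), nn.Tanh(),
                       nn.Linear(HIDDEN, ACT_DIM))
PARAM_DIM = sum(p.numel() for p in policy.parameters())

class Denoiser(nn.Module):
    """Predicts the noise added to a flat policy-parameter vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PARAM_DIM + TASK_DIM + 1, 256), nn.SiLU(),
            nn.Linear(256, PARAM_DIM))
    def forward(self, noisy_params, task_emb, t):
        t_feat = t.float().unsqueeze(-1) / T_STEPS  # normalized timestep
        return self.net(torch.cat([noisy_params, task_emb, t_feat], dim=-1))

betas = torch.linspace(1e-4, 0.02, T_STEPS)
alphas_bar = torch.cumprod(1 - betas, dim=0)

def train_step(denoiser, clean_params, task_emb, opt):
    """One DDPM step on a batch of known-good policy parameter vectors."""
    t = torch.randint(0, T_STEPS, (clean_params.shape[0],))
    noise = torch.randn_like(clean_params)
    ab = alphas_bar[t].unsqueeze(-1)
    noisy = ab.sqrt() * clean_params + (1 - ab).sqrt() * noise
    loss = ((denoiser(noisy, task_emb, t) - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def sample_policy_params(denoiser, task_emb):
    """Reverse-diffuse from Gaussian noise to a parameter vector."""
    x = torch.randn(1, PARAM_DIM)
    for t in reversed(range(T_STEPS)):
        eps = denoiser(x, task_emb, torch.full((1,), t))
        a, ab = 1 - betas[t], alphas_bar[t]
        x = (x - betas[t] / (1 - ab).sqrt() * eps) / a.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x.squeeze(0)

def load_into_policy(flat, policy):
    """Write a sampled flat vector into the standalone policy network."""
    i = 0
    for p in policy.parameters():
        n = p.numel()
        p.data.copy_(flat[i:i + n].view_as(p)); i += n

# Usage: sample a task-specific policy and act with it. The policy is now
# detached from the generator, so it can be inspected, personalized, or
# fine-tuned without touching the foundation model's weights.
denoiser = Denoiser()
task_emb = torch.randn(1, TASK_DIM)
load_into_policy(sample_policy_params(denoiser, task_emb), policy)
action = policy(torch.randn(OBS_DIM))
```

The key design point this illustrates is the modularity claim in the abstract: updating or replacing one sampled policy leaves the generator, and every other task's policy, untouched.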
Submission Number: 13