Keywords: robotics, multi-task learning
TL;DR: Training a visual navigation policy on multiple different datasets (across robots of different sizes and forms) gives us a strong "omnipolicy" that outperforms policies trained on any single dataset, and generalizes to new robots!
Abstract: Learning provides a powerful tool for vision-based navigation, but the capabilities of learning-based policies are constrained by limited training data. If we could combine data from all available sources, including multiple kinds of robots, we could train more powerful navigation models. In this paper, we study how goal-conditioned policies for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots, enabling broad generalization across environments and embodiments. We analyze the design decisions necessary for effective data sharing across different robots, including the use of temporal context and standardized action spaces, and demonstrate that an omnipolicy trained on heterogeneous datasets outperforms policies trained on any single dataset. We curate 60 hours of navigation trajectories from 6 distinct robots, and deploy the trained omnipolicy on a range of new robots, including an underactuated quadrotor. We also find that training on diverse, multi-robot datasets leads to robustness against degradation in sensing and actuation. Using a pre-trained base navigational omnipolicy with broad generalization capabilities can bootstrap navigation applications on novel robots going forward, and we hope that GNM (our General Navigation Model) represents a step in that direction.
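One of the design decisions the abstract highlights is a standardized action space that lets trajectories from robots with different dynamics be mixed in a single training set. A minimal sketch of that idea, with hypothetical robot limits and function names (not taken from the paper's code), might scale each robot's velocity commands by its own limits into a shared unitless range:

```python
from dataclasses import dataclass

@dataclass
class RobotConfig:
    """Hypothetical per-robot limits used to define a shared action space."""
    name: str
    max_linear: float   # m/s
    max_angular: float  # rad/s

def normalize_action(cfg: RobotConfig, v: float, w: float) -> tuple[float, float]:
    """Map a robot-specific (v, w) command into a shared [-1, 1] action space."""
    return (v / cfg.max_linear, w / cfg.max_angular)

def denormalize_action(cfg: RobotConfig, action: tuple[float, float]) -> tuple[float, float]:
    """Map a shared-space action back into this robot's command space."""
    return (action[0] * cfg.max_linear, action[1] * cfg.max_angular)

# Example: the same shared-space action corresponds to different raw
# commands on a slow ground robot versus a faster one.
turtlebot = RobotConfig("turtlebot", max_linear=0.5, max_angular=1.0)
jackal = RobotConfig("jackal", max_linear=2.0, max_angular=1.5)

shared = normalize_action(turtlebot, 0.25, 0.5)   # (0.5, 0.5)
print(denormalize_action(jackal, shared))          # (1.0, 0.75)
```

The point of the normalization is that a policy trained on the shared space never sees robot-specific magnitudes, so data from slow and fast platforms contribute compatible supervision.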