Keywords: Vision-based Navigation, Reactive Planning, Collision Avoidance
TL;DR: We present CARE, an attachable safety enhancement module that integrates repulsive force-based trajectory adjustment into RGB-based navigation models, reducing collisions by up to 50% in real-world OOD scenarios without requiring fine-tuning.
Abstract: We propose CARE (Collision Avoidance via Repulsive Estimation) to improve the robustness of learning-based visual navigation methods. Recently, visual navigation models, particularly foundation models, have demonstrated promising performance by generating viable trajectories using only RGB images. However, these policies can generalize poorly to environments containing out-of-distribution (OOD) scenes characterized by unseen objects or different camera setups (e.g., variations in field of view, camera pose, or focal length). Without fine-tuning, such models may produce trajectories that lead to collisions, and adapting them to new settings demands substantial effort in data collection and additional training.
To address this limitation, we introduce CARE, an attachable module that enhances the safety of visual navigation without requiring additional range sensors or fine-tuning of pretrained models. CARE can be integrated seamlessly into any RGB-based navigation model that generates local robot trajectories. It dynamically adjusts trajectories produced by a pretrained model using repulsive force vectors computed from depth images estimated directly from RGB inputs.
We evaluate CARE by integrating it with state-of-the-art visual navigation models across diverse robot platforms. Real-world experiments show that CARE significantly reduces collisions (by up to 100%) without degrading goal-reaching performance in goal-conditioned navigation, and further improves collision-free travel distance (by up to 10.7×) in exploration tasks.
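To make the trajectory-adjustment mechanism described in the abstract concrete, here is a minimal potential-field sketch, not the authors' released code: obstacle points are back-projected from an estimated depth image under an assumed pinhole model (the intrinsics `fx`, `cx`, the function names, and the `gain`/`influence` parameters are all illustrative assumptions), and each waypoint of the pretrained model's trajectory is pushed away from nearby obstacles by a classic repulsive-potential gradient.

```python
# Illustrative sketch only: repulsive-force adjustment of a nominal 2D trajectory
# using obstacle points recovered from an (estimated) metric depth image.
# Assumptions: pinhole intrinsics (fx, cx), robot frame with x forward / y left,
# trajectory given as (N, 2) waypoints; names and gains are hypothetical.
import numpy as np

def depth_to_obstacle_points(depth, fx, cx, max_range=3.0):
    """Back-project the center row of a depth image into 2D obstacle points."""
    band = depth[depth.shape[0] // 2]          # horizontal scan at image center
    u = np.arange(band.shape[0])               # pixel column indices
    valid = (band > 0.1) & (band < max_range)  # drop invalid / far returns
    x = band[valid]                            # forward distance (meters)
    y = -(u[valid] - cx) * band[valid] / fx    # lateral offset (left positive)
    return np.stack([x, y], axis=1)

def repulsive_adjust(waypoints, obstacles, gain=0.3, influence=1.0):
    """Push each waypoint away from obstacles within the influence radius."""
    adjusted = np.asarray(waypoints, dtype=float).copy()
    for i, wp in enumerate(adjusted):
        diff = wp - obstacles                  # vectors from obstacles to waypoint
        dist = np.linalg.norm(diff, axis=1)
        near = dist < influence
        if not np.any(near):
            continue
        # Repulsive potential gradient: magnitude grows sharply as distance shrinks.
        mag = gain * (1.0 / dist[near] - 1.0 / influence) / dist[near] ** 2
        forces = mag[:, None] * (diff[near] / dist[near][:, None])
        adjusted[i] = wp + forces.sum(axis=0)
    return adjusted
```

In the full pipeline, `depth` would come from a monocular depth estimator applied to the same RGB frame the navigation model sees, and the adjusted waypoints would replace the pretrained policy's output before execution, which is what lets the module attach to any RGB-based model without fine-tuning.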
Supplementary Material: zip
Spotlight: zip
Submission Number: 568