Keywords: Robot security, Navigation, Adversarial Attacks, VLM
TL;DR: This paper develops the first adversarial attack on robot navigation systems that works by physically manipulating the robot's working environment.
Abstract: Mobile robots are becoming an integral part of everyday life. These systems typically rely on generating maps of the environment and using them for navigation. While significant progress has been made in improving the localization and navigation of mobile robots, their vulnerability to adversarial environment changes remains largely unexplored. This paper investigates the adversarial robustness of robot navigation systems and introduces attacks designed to manipulate the navigation environment with minimal modifications. Our proposed attack leverages vision-language models and pre-existing maps to identify objects whose repositioning could cause navigation errors. We also propose a defense mechanism to monitor the confidence of self-localization to detect changes in the environment and bypass attacked areas. Evaluations show that our attacks reduce the navigation success rate from $100$\% to $8.0$\% in simulation and from $100$\% to $40.0$\% in the real world, while our defense mechanism increases the navigation success rate to $75.3$\% in simulation and $86.7$\% in the real world.
Submission Number: 8