Manipulating Neural Path Planners via Slight Perturbations
Abstract: Data-driven neural path planners are attracting
increasing interest in the robotics community. However, their
neural network components typically come as black boxes,
obscuring their underlying decision-making processes. Their
black-box nature exposes them to the risk of being compromised
via the insertion of hidden malicious behaviors. For example,
an attacker may hide behaviors that, when triggered, hijack
a delivery robot by guiding it to a specific (albeit wrong)
destination, trapping it in a predefined region, or inducing unnecessary energy expenditure by causing the robot to repeatedly
circle a region. In this paper, we propose a novel approach
to specify and inject a range of hidden malicious behaviors,
known as backdoors, into neural path planners. Our approach
provides a concise but flexible way to define these behaviors,
and we show that these hidden behaviors can be triggered by slight
perturbations (e.g., inserting a tiny, unnoticeable object into the
environment) that nonetheless significantly compromise the planners'
integrity. We also discuss potential techniques to identify these
backdoors, with the aim of alleviating such risks. We demonstrate our approach
on both sampling-based and search-based neural path planners.
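To make the notion of a trigger perturbation concrete, the following is a minimal sketch, not the authors' implementation, assuming a grid-based neural planner that consumes a 2D occupancy map; the `insert_trigger` helper and the commented `plan(...)` call are hypothetical placeholders.

```python
# Minimal sketch of a trigger perturbation for a grid-based neural planner.
# Assumptions (not from the paper): the planner consumes a 2D occupancy map
# with values in {0, 1}; `plan(...)` stands in for any backdoored neural planner.
import numpy as np


def insert_trigger(occupancy: np.ndarray, top_left=(5, 5), size=2) -> np.ndarray:
    """Return a copy of the map with a tiny 'object' (size x size occupied cells)
    placed at `top_left` -- the kind of slight perturbation that could activate
    a hidden behavior in a backdoored planner."""
    perturbed = occupancy.copy()
    r, c = top_left
    perturbed[r:r + size, c:c + size] = 1.0
    return perturbed


if __name__ == "__main__":
    clean_map = np.zeros((64, 64), dtype=np.float32)   # empty 64x64 workspace
    triggered_map = insert_trigger(clean_map)

    # The perturbation touches only 4 of 4096 cells (~0.1% of the map),
    # yet a backdoored planner could respond with a drastically different path.
    changed = int(np.sum(np.abs(triggered_map - clean_map)))
    print(f"cells changed: {changed} / {clean_map.size}")

    # path_clean = plan(clean_map, start, goal)          # hypothetical planner call
    # path_triggered = plan(triggered_map, start, goal)  # may be hijacked or trapped
```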