Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving

Published: 01 Jan 2024, Last Modified: 21 Feb 2025 · CVPR 2024 · CC BY-SA 4.0
Abstract: In autonomous driving, behavior prediction is fundamental to safe motion planning; hence, the security and robustness of prediction models against adversarial attacks are of paramount importance. We propose a novel adversarial backdoor attack against trajectory prediction models as a means of studying their potential vulnerabilities. Our attack affects the victim at training time via naturalistic, and hence stealthy, poisoned samples crafted using a novel two-step approach. First, triggers are crafted by perturbing the trajectory of the attacking vehicle; they are then disguised by transforming the scene using a bi-level optimization technique. The proposed attack does not depend on a particular model architecture and operates in a black-box manner, so it can be effective without any knowledge of the victim model. We conduct extensive empirical studies using state-of-the-art prediction models on two benchmark datasets, with metrics customized for trajectory prediction. We show that the proposed attack is highly effective, as it significantly degrades the performance of prediction models while remaining unnoticeable to the victims, and efficient, as it forces the victim to generate malicious behavior even under constrained conditions. Via ablative studies, we analyze the impact of different attack design choices, followed by an evaluation of existing defence mechanisms against the proposed attack.
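The abstract only outlines the two-step poisoning design, so the sketch below is a toy illustration under assumed forms rather than the paper's actual method: the smooth-bump trigger shape, the surrogate predictor, the transformation parameters, the loss weights, and the helper names (craft_trigger, disguise, toy_surrogate) are all hypothetical stand-ins for the trigger-crafting and bi-level disguising steps described above.

```python
import numpy as np

# Illustrative sketch only: the exact losses, constraints, and optimization
# schedule are assumptions, not the published formulation.

def craft_trigger(past_traj, max_offset=0.5, bump_center=0.6, bump_width=0.3):
    """Step 1 (assumed form): perturb the attacking vehicle's past trajectory
    with a smooth, bounded lateral bump so the trigger stays naturalistic.
    past_traj: (T, 2) array of (x, y) way-points."""
    T = len(past_traj)
    s = np.linspace(0.0, 1.0, T)
    # Raised-cosine bump: maximal at bump_center, zero outside bump_width.
    bump = 0.5 * (1 + np.cos(np.clip((s - bump_center) / bump_width, -1, 1) * np.pi))
    heading = np.gradient(past_traj, axis=0)
    heading /= np.linalg.norm(heading, axis=1, keepdims=True) + 1e-8
    lateral = np.stack([-heading[:, 1], heading[:, 0]], axis=1)  # unit normal
    return past_traj + max_offset * bump[:, None] * lateral

def disguise(scene, trigger, target_future, surrogate_predict, steps=50, lr=0.05):
    """Step 2 (assumed form): bi-level-style trade-off. One objective keeps the
    transformed trigger close to the original (naturalness); the other keeps the
    surrogate model's prediction close to the attacker's target behavior."""
    theta = np.zeros(3)  # [dx, dy, dyaw] applied to the trigger trajectory

    def apply(t):
        c, s_ = np.cos(t[2]), np.sin(t[2])
        R = np.array([[c, -s_], [s_, c]])
        return trigger @ R.T + t[:2]

    def attack_loss(t):
        pred = surrogate_predict(scene, apply(t))
        return np.mean((pred - target_future) ** 2)   # drive the malicious output

    def natural_loss(t):
        return np.mean((apply(t) - trigger) ** 2)     # stay close to naturalistic data

    for _ in range(steps):
        # Finite-difference gradient of a weighted combination; the real method's
        # bi-level formulation and black-box optimizer may differ.
        g = np.zeros_like(theta)
        for i in range(3):
            e = np.zeros(3); e[i] = 1e-3
            f = lambda t: attack_loss(t) + 0.1 * natural_loss(t)
            g[i] = (f(theta + e) - f(theta - e)) / 2e-3
        theta -= lr * g
    return apply(theta)

if __name__ == "__main__":
    t = np.linspace(0, 1, 20)
    benign = np.stack([10 * t, np.zeros_like(t)], axis=1)        # straight drive
    trigger = craft_trigger(benign)

    def toy_surrogate(scene, attacker_past):
        # Stand-in for a trajectory predictor: extrapolates the last heading.
        v = attacker_past[-1] - attacker_past[-2]
        return attacker_past[-1] + np.arange(1, 11)[:, None] * v

    target = np.tile(benign[-1] + np.array([0.0, 3.0]), (10, 1))  # forced swerve
    poisoned = disguise(None, trigger, target, toy_surrogate)
```

In this toy setup the alternating trade-off is collapsed into a single weighted objective for brevity; the paper's bi-level optimization presumably treats the naturalness and attack objectives as nested problems over the scene transformation.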
