Lurking in the Shadows: Imperceptible Shadow Black-Box Attacks Against Lane Detection Models

Published: 01 Jan 2024 · Last Modified: 13 May 2025 · KSEM (3) 2024 · CC BY-SA 4.0
Abstract: Lane detection is a crucial component of autonomous driving systems, relying on deep neural networks (DNNs) for precise vehicle positioning, path planning, and decision control. However, recent studies have found that DNN models are susceptible to adversarial samples, where attackers can induce serious errors by adding small, carefully crafted adversarial noise. Although much effort has been devoted to investigating adversarial attacks on conventional computer vision tasks, existing work largely overlooks potential threats from the real physical world, such as shadow disturbances under complex lighting conditions. To bridge this research gap, in this paper we propose a novel black-box adversarial attack based on shadow disturbance against lane detection models in autonomous driving systems. Specifically, we formulate the shadow attack model and cast the construction of shadow positions as a polygon vertex selection problem. Next, we convert the input image from RGB to LAB color space and modify only the luminance channel (L-channel) to achieve realistic shadow casting. We then employ the Particle Swarm Optimization (PSO) algorithm to iteratively search for optimal shadow parameters, aiming to generate adversarial samples with strong deceptive capability. We conduct experiments on three commonly used datasets (CULane, ApolloScape, and tvtLane) using two mainstream lane detection models (UNet-ConvLSTM and SegNet-ConvLSTM). Extensive experimental results demonstrate the effectiveness of our proposed shadow adversarial attack, exposing serious robustness deficiencies in existing models under complex lighting and shadow conditions.
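To make the described pipeline concrete, below is a minimal, self-contained sketch (not the authors' implementation) of the two core steps the abstract outlines: casting a polygonal shadow by attenuating only the L-channel in LAB space, and a plain PSO search over the polygon's vertex coordinates. All names (`cast_polygon_shadow`, `pso_shadow_search`, `score_fn`) and the fixed darkness factor are illustrative assumptions; in the actual black-box attack, `score_fn` would query the target lane detection model and return a measure of how badly its prediction is degraded.

```python
import cv2
import numpy as np

def cast_polygon_shadow(image_bgr, flat_vertices, darkness=0.6):
    """Darken a polygonal region by scaling only the L-channel in LAB space.

    image_bgr     : HxWx3 uint8 image (OpenCV BGR order).
    flat_vertices : flat array (x1, y1, ..., xN, yN) of polygon vertices,
                    the quantities a PSO search iterates over.
    darkness      : luminance attenuation inside the polygon
                    (1.0 = no shadow; smaller = darker). Assumed fixed here.
    """
    # Rasterize the candidate shadow polygon into a binary mask.
    pts = np.asarray(flat_vertices).reshape(-1, 2).astype(np.int32)
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [pts], 255)

    # Attenuate luminance only; chromaticity (A, B) is untouched, which is
    # what makes the perturbation resemble a natural shadow.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    lab[..., 0] = np.where(mask == 255, lab[..., 0] * darkness, lab[..., 0])
    lab = np.clip(lab, 0, 255).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def pso_shadow_search(image_bgr, score_fn, n_vertices=4, n_particles=20,
                      iters=50, inertia=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO over flattened polygon vertices.

    score_fn(adv_image) -> float: higher means the lane model is fooled
    more badly; in a real black-box attack this queries the target model.
    """
    rng = np.random.default_rng(seed)
    h, w = image_bgr.shape[:2]
    dim = 2 * n_vertices
    # Per-dimension upper bounds: x up to width-1, y up to height-1.
    bounds = np.tile([w - 1, h - 1], n_vertices).astype(np.float32)

    pos = rng.uniform(0, bounds, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_score = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_score = pos[0].copy(), -np.inf

    for _ in range(iters):
        for i in range(n_particles):
            adv = cast_polygon_shadow(image_bgr, pos[i])
            s = score_fn(adv)
            if s > pbest_score[i]:
                pbest_score[i], pbest[i] = s, pos[i].copy()
            if s > gbest_score:
                gbest_score, gbest = s, pos[i].copy()
        # Standard velocity update toward personal and global bests.
        r1, r2 = rng.random((2, n_particles, dim))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, bounds)

    return gbest.reshape(-1, 2).astype(int), gbest_score
```

A caller would supply whatever black-box error signal the attacker can observe, e.g. `score_fn = lambda adv: lane_miss_rate(model, adv)` for a hypothetical `lane_miss_rate` metric. The darkness factor could also be appended to each particle's vector and optimized jointly with the vertex positions; the sketch fixes it for brevity.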