Improving Deep Detector Robustness via Detection-Related Discriminant Maximization and Reorganization

Published: 01 Jan 2025 · Last Modified: 28 Jul 2025 · WACV 2025 · CC BY-SA 4.0
Abstract: Deep visual detectors are known to be vulnerable to adversarial attacks, raising concerns about their real-world applications (e.g., self-driving perception). We argue that this vulnerability arises from the spurious dependency of final detections on irrelevant/loophole latent dimensions: the more such dimensions there are, the more susceptible the detector becomes to adversarial input perturbations. To enhance detection robustness, we propose Detection-related Discriminant Maximization and Reorganization (DDMR), which condenses the detection utility into a compressed set of relevant dimensions while deactivating the influence of irrelevant ones. This approach also alleviates the misalignment between the two task domains of visual detection (classification and localization) and, consequently, between their gradients, enabling the generation of more potent adversarial attacks and defenses for visual detectors within the adversarial training framework. Extensive experiments with four cutting-edge visual detectors on the KITTI and COCO datasets demonstrate the efficacy of the proposed approach in improving the adversarial robustness of deep visual detectors against both white-box and black-box attacks. For example, on the KITTI dataset, our method improves robustness by up to 12.4% and 28.0% without and with adversarial training, respectively.
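The core idea, concentrating detection utility in a small set of discriminant latent dimensions and deactivating the rest, can be illustrated with a minimal sketch. The Fisher-ratio scoring rule, the function names, and the keep_ratio parameter below are our assumptions for illustration only; this is not the paper's actual DDMR objective or implementation.

```python
# Hypothetical sketch: rank latent dimensions by a Fisher-style discriminant
# ratio and suppress the low-utility ("loophole") dimensions. All names and
# the scoring rule are illustrative assumptions, not the paper's method.
import torch


def fisher_discriminant_scores(feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Per-dimension between-class / within-class variance ratio.

    feats:  (N, D) latent features pooled from detection-relevant regions.
    labels: (N,) integer class labels for those regions.
    Returns a (D,) score; higher means more detection-relevant.
    """
    global_mean = feats.mean(dim=0)                       # (D,)
    between = torch.zeros_like(global_mean)
    within = torch.zeros_like(global_mean)
    for c in labels.unique():
        cls_feats = feats[labels == c]                    # (N_c, D)
        cls_mean = cls_feats.mean(dim=0)
        between += cls_feats.shape[0] * (cls_mean - global_mean) ** 2
        within += ((cls_feats - cls_mean) ** 2).sum(dim=0)
    return between / (within + 1e-8)                      # (D,) Fisher ratio


def relevance_mask(scores: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Binary mask keeping only the top-scoring (most discriminant) dimensions."""
    k = max(1, int(keep_ratio * scores.numel()))
    mask = torch.zeros_like(scores)
    mask[scores.topk(k).indices] = 1.0
    return mask


# Toy usage: 256 pooled region features of dimension 128, 3 object classes.
feats = torch.randn(256, 128)
labels = torch.randint(0, 3, (256,))
scores = fisher_discriminant_scores(feats, labels)
mask = relevance_mask(scores, keep_ratio=0.25)
robust_feats = feats * mask                               # irrelevant dims zeroed
```

Under this reading, adversarial perturbations routed through the zeroed dimensions can no longer influence the final detections, which is one plausible mechanism for the robustness gains the abstract reports.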