Saliency and location aware pruning of deep visual detectors for autonomous driving

Published: 01 Jan 2025, Last Modified: 11 Nov 2024 · Neurocomputing 2025 · CC BY-SA 4.0
Abstract: Despite the remarkable achievements of deep neural networks, their high computational complexity limits their wide use in many real-world embedded applications, such as autonomous driving perception. While current neural network pruning approaches can reduce model complexity to varying extents, they often adopt local or ad hoc importance measures that are not directly related to the final task. More importantly, most of them focus on classification tasks and do not take location information into account during pruning. To address these issues, we present a novel channel importance measure that incorporates detection-related saliency and location awareness, specifically designed for pruning self-driving visual detectors. Comprehensive experiments on the KITTI and COCO_traffic datasets demonstrate that our pruning method achieves significant reductions in model size and computational operations with little performance degradation, and it outperforms other state-of-the-art methods across various pruning rates and base detectors. Notably, our pruned YOLOX-S model with 40.2% fewer parameters even improves the original model’s mAP by 1.8% on KITTI. Moreover, our experiments highlight the potential of our pruning approach for effectively detecting small-scale objects.
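The abstract does not spell out the importance measure itself, but a minimal sketch of what a saliency- and location-aware channel score could look like is given below. Everything here is an assumption for illustration: the function name `channel_importance`, the Taylor-style activation-gradient sensitivity used as the saliency term, the ground-truth box mask used as the location term, and the mixing weight `alpha` are all hypothetical stand-ins, not the paper's actual formulation.

```python
import torch

def channel_importance(feature_map, grad_map, box_mask, alpha=0.5):
    """Hypothetical per-channel importance score (not the paper's measure).

    Combines:
      - a saliency term: |activation * gradient| of the detection loss,
        a common Taylor-style sensitivity used in pruning criteria; and
      - a location term: the same quantity averaged over pixels inside
        ground-truth boxes, so channels that respond on objects score higher.

    feature_map: (C, H, W) activations of one conv layer
    grad_map:    (C, H, W) gradient of the detection loss w.r.t. them
    box_mask:    (H, W) binary mask, 1 inside ground-truth boxes
    """
    sensitivity = (feature_map * grad_map).abs()            # (C, H, W)
    saliency = sensitivity.mean(dim=(1, 2))                 # global term, (C,)
    located = (sensitivity * box_mask).sum(dim=(1, 2)) / box_mask.sum().clamp(min=1)
    return alpha * saliency + (1 - alpha) * located         # (C,)

# Toy usage: rank channels and keep the top 60% (i.e., a 40% pruning rate).
C, H, W = 64, 80, 80
fm, gm = torch.randn(C, H, W), torch.randn(C, H, W)
mask = torch.zeros(H, W)
mask[20:60, 20:60] = 1.0                                    # one ground-truth box
scores = channel_importance(fm, gm, mask)
keep = scores.argsort(descending=True)[: int(0.6 * C)]     # indices of channels to retain
```

The location term is what distinguishes this sketch from classification-oriented criteria: by crediting responses inside annotated boxes, low-scoring channels are those that neither contribute to the loss globally nor fire where objects actually are.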