Scale-Aware Pruning Framework for Remote Sensing Object Detection via Multifeature Representation

Published: 2025 · Last Modified: 03 Feb 2026 · IEEE Trans. Geosci. Remote Sens. 2025 · CC BY-SA 4.0
Abstract: With rapid advances in computer vision, high-resolution remote sensing imagery has become a crucial data source for object detection. Nevertheless, making effective use of limited computational resources and reducing the burden on satellite edge devices remain significant challenges. To reduce model complexity while maintaining representational capacity, this article proposes a scale-aware pruning framework (SAPF) for remote sensing object detection. First, the convolutional layers of an object detection model are classified into two categories: layers with a single-scale feature representation and layers with a multiscale feature representation. For single-scale layers, singular value decomposition (SVD) is used to quantify feature importance and assess filter redundancy; by removing the less critical filters, this pruning criterion reduces model size and computational load without compromising performance. Multiscale layers, in contrast, are crucial for optimizing feature extraction and balancing information capture across scales. To handle them, the framework evaluates the similarity between convolutional layers at different scales to determine how much each scale contributes to multiscale fusion. Experiments show that SAPF substantially reduces FLOPs and parameters while preserving representational ability when YOLOv5s and Faster R-CNN are applied to the NWPU VHR-10, RSOD, and SIMD detection datasets, thereby saving training computation. In addition, SAPF significantly improves the efficiency of the model in object detection, supporting real-time performance.
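To make the single-scale pruning criterion concrete, the following is a minimal, illustrative sketch of how SVD could score filter importance in a convolutional layer. It is not the paper's implementation: the choice of the nuclear norm (sum of singular values) as the importance score and the fixed pruning ratio are assumptions for illustration only.

```python
import numpy as np

def filter_importance_svd(weights):
    """Score each output filter of a conv layer by the energy of its
    singular values; low-energy filters are redundancy candidates.

    weights: array of shape (out_channels, in_channels, kH, kW).
    Returns one importance score per output filter.
    """
    scores = []
    for f in weights:                        # one filter at a time
        mat = f.reshape(f.shape[0], -1)      # (in_channels, kH * kW)
        s = np.linalg.svd(mat, compute_uv=False)
        scores.append(s.sum())               # nuclear norm as importance
    return np.array(scores)

def select_filters_to_prune(weights, prune_ratio=0.25):
    """Return indices of the least important filters (assumed ratio)."""
    scores = filter_importance_svd(weights)
    n_prune = int(len(scores) * prune_ratio)
    return np.argsort(scores)[:n_prune]

# Toy conv layer: 8 filters, 16 input channels, 3x3 kernels.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16, 3, 3))
pruned = select_filters_to_prune(w, prune_ratio=0.25)
print(len(pruned))  # 2 filters marked for removal
```

In a real pipeline the surviving filters (and the matching input channels of the next layer) would be copied into a smaller model before fine-tuning; the multiscale layers would instead be handled by the cross-scale similarity criterion described above.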