Abstract: Lightweight models based on the YOLO architecture have proliferated in recent years. However, most of these models are designed for specific application scenarios, often forcing researchers to redesign a lightweight model for each new scenario and thereby increasing development costs. To address this challenge, this paper proposes an efficient lightweight strategy that can be integrated into existing YOLO models without redesigning the network architecture. Specifically, the strategy comprises three modules: the lightweight feature extraction module (LFEM), the multi-feature shared convolution module (MSCH), and the feature segmentation and sparse optimization module (LDown). The LFEM module improves the efficiency of feature extraction in the backbone by generating potentially redundant feature maps with standard convolution. The MSCH module reduces the number of parameters in the detection head by using shared convolutional layers. The LDown module performs downsampling via feature segmentation and incorporates the NAM attention mechanism to exploit sparse feature information. To validate the strategy, we applied it to a range of YOLO models, including YOLOv5, YOLOv6, YOLOv8, YOLOv10, YOLO-lite, SPDConv, YOLO-Drone, Hyper-YOLO, RCS-YOLO, and ASF-YOLO, and conducted experiments on the DOTA, Pascal VOC2012, and TZ-Plane datasets. The results demonstrate that the proposed strategy adapts effectively to different models across various object detection scenarios, achieving a substantial reduction in computational complexity while maintaining detection performance. The source code will be made available at https://github.com/lixiaobai-star/EPLM-YOLO.
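To give a rough sense of why sharing convolutional layers across detection-head branches (as the MSCH module does) saves parameters, the arithmetic below compares a head with one 3×3 convolution per feature-map scale against a single convolution reused by all scales. This is a minimal illustrative sketch, not the paper's implementation: the channel counts, the three-scale assumption, and the function name `conv_params` are all assumptions for the example.

```python
def conv_params(in_ch, out_ch, k=3, bias=True):
    """Parameter count of a single 2D convolution layer (illustrative)."""
    return in_ch * out_ch * k * k + (out_ch if bias else 0)

num_scales = 3          # YOLO-style heads typically predict at 3 scales
in_ch, out_ch = 256, 256  # hypothetical channel widths, not from the paper

# One independent 3x3 conv per scale vs. one conv shared by every scale.
separate = num_scales * conv_params(in_ch, out_ch)
shared = conv_params(in_ch, out_ch)

print(separate, shared)  # the shared head uses 1/num_scales of the parameters
```

Under these assumptions the shared head cuts the head's convolution parameters by a factor equal to the number of scales, which matches the abstract's claim of parameter reduction in the detection head.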
External IDs: dblp:journals/tjs/DangLLQZ25