Lightweight Deep Neural Network Model With Padding-Free Downsampling

Published: 2024, Last Modified: 22 Jan 2026. IEEE Signal Processing Letters, 2024. CC BY-SA 4.0.
Abstract: Deep neural networks have achieved impressive performance in image classification tasks. However, due to limitations in hardware resources, including computing units and storage capacity, deploying these networks directly on resource-constrained devices such as mobile and edge devices is challenging. While lightweight network models have made significant advancements, the downsampling stage has received little attention. Because the feature map is reused multiple times, reducing its size during the downsampling stage not only lowers the computational cost of the downsampling module itself but also reduces the computational burden of all subsequent stages. This letter addresses this gap by proposing a padding-free downsampling module that effectively reduces computational costs and can be seamlessly integrated into various deep learning models. Furthermore, we introduce a hybrid stem layer to obtain competitive accuracy. Extensive experiments were conducted on the CIFAR-100, Stanford Dogs, and ImageNet datasets. On CIFAR-100, the results show that the proposed module reduces computational costs by approximately 20% and improves inference speed on resource-constrained devices by around 10%.
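The intuition behind the abstract's cost claim can be checked with a back-of-the-envelope calculation: a strided convolution without padding produces a slightly smaller output map than its zero-padded counterpart, and since that map feeds every later stage, the savings compound. The sketch below is purely illustrative and not the paper's actual module; the layer sizes (a 32x32 map, 64 to 128 channels, 3x3 stride-2 kernel) are assumed values, not taken from the paper.

```python
def conv_out(size, k=3, s=2, p=0):
    # Standard convolution output-size formula: floor((size + 2p - k) / s) + 1.
    return (size + 2 * p - k) // s + 1

def conv_macs(h, w, k, c_in, c_out):
    # Multiply-accumulates for a dense convolution producing an h x w map.
    return h * w * k * k * c_in * c_out

# Hypothetical CIFAR-style feature map (assumed sizes, for illustration only).
h = 32
c_in, c_out, k = 64, 128, 3

padded  = conv_out(h, k=k, s=2, p=1)  # "same"-style padding -> 16x16 output
padfree = conv_out(h, k=k, s=2, p=0)  # padding-free         -> 15x15 output

macs_padded  = conv_macs(padded, padded, k, c_in, c_out)
macs_padfree = conv_macs(padfree, padfree, k, c_in, c_out)
print(padded, padfree)                          # 16 15
print(1 - macs_padfree / macs_padded)           # ~0.12 saved in this layer alone
```

In this toy setting the padding-free layer alone saves roughly 12% of the multiply-accumulates (225 vs. 256 output positions), and because every downstream layer also operates on the smaller map, cumulative savings in the ballpark of the reported ~20% are plausible.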