DualConvNet: Enhancing CNN Inference Efficiency Through Compressed Convolutions and Reparameterization

Published: 01 Jan 2024, Last Modified: 15 May 2025 · TrustCom 2024 · CC BY-SA 4.0
Abstract: Convolutional Neural Networks (CNNs) have significantly advanced computer vision, but their growing complexity poses challenges for efficient inference, particularly on resource-constrained devices. We present DualConvNet, a novel CNN architecture that improves inference efficiency through two key innovations: compressed convolutions and reparameterization. Compressed convolutions reduce computational complexity by selectively processing subsets of the input channels during both training and inference. For inference, we introduce a reparameterization technique that merges the multi-branch training structure into a single, efficient operation, significantly improving speed. Experiments on CIFAR-10, CIFAR-100, and ImageNet-1k show that DualConvNet consistently outperforms state-of-the-art models in both accuracy and inference speed. On the COCO dataset, DualConvNet achieves competitive accuracy in object detection and instance segmentation while substantially reducing GPU latency. Ablation studies confirm that the dual-strategy design yields significant gains in both accuracy and computational efficiency over alternative designs. Together, these results demonstrate that DualConvNet improves inference efficiency while maintaining high accuracy across tasks and datasets, making it well suited to real-time applications in resource-constrained scenarios.
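The abstract does not give the exact branch layout, but the reparameterization idea it describes, merging a multi-branch structure into a single operation for inference, relies on the linearity of convolution: parallel branches applied to the same input can be collapsed into one kernel by summing the (suitably padded) branch kernels. The sketch below illustrates this for a hypothetical two-branch block (a 3x3 convolution plus a 1x1 convolution, a common pattern in reparameterized networks); the branch choice and kernel sizes are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def conv2d_same(x, w):
    """Naive single-channel 2D cross-correlation with 'same' zero padding."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))     # toy single-channel feature map
w3 = rng.standard_normal((3, 3))    # hypothetical 3x3 branch kernel
w1 = rng.standard_normal((1, 1))    # hypothetical 1x1 branch kernel

# Training-time multi-branch output: sum of the two branch outputs.
y_branches = conv2d_same(x, w3) + conv2d_same(x, w1)

# Inference-time reparameterization: embed the 1x1 kernel in the centre
# of a 3x3 kernel and add it to the 3x3 branch, yielding one fused kernel.
w_merged = w3.copy()
w_merged[1, 1] += w1[0, 0]
y_merged = conv2d_same(x, w_merged)

# By linearity of convolution, the fused single-branch output matches
# the multi-branch output exactly.
assert np.allclose(y_branches, y_merged)
```

At inference time only `w_merged` is stored and applied, so the network executes one convolution per block instead of several, which is the source of the speedup the abstract reports.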