Assembling Extra Features with Grouped Pointwise Convolutions for MobileNets

Published: 2023, Last Modified: 18 Jun 2025, DICTA 2023, CC BY-SA 4.0
Abstract: The depthwise separable convolutions (DSCs) used in MobileNets yield a deep architecture with a low memory footprint and modest computational complexity. However, the resulting models still lack feature diversity for image representation. Several approaches address this issue by exploiting dilation features to capture more discriminative information, yet these dilation features are embedded in only some layers of MobileNets; deploying them throughout the entire backbone would sharply increase the number of learnable parameters. To this end, we propose to assemble grouped dilation features for MobileNets by placing a depthwise separable dilated convolution in parallel with the corresponding DSC. The resulting feature maps are concatenated and permuted, and then fed into a grouped convolution to learn diverse information. This grouped operation adds only a marginal number of learnable parameters. We also adapt a residual mechanism to further enhance the performance of MobileNetV1. Experimental results on benchmark image classification datasets validate the competence of our proposals.
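
The abstract describes a block in which a dilated depthwise branch runs in parallel with the standard depthwise branch, the two outputs are concatenated and permuted, and a grouped pointwise convolution then mixes them. The following is a minimal PyTorch-style sketch of that idea only; all module names, channel sizes, the dilation rate, and the group count are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of a parallel DSC / dilated-DSC block with a grouped
# pointwise convolution, as outlined in the abstract. Not the authors' code.
import torch
import torch.nn as nn


def channel_shuffle(x, groups):
    """Permute channels so the grouped 1x1 convolution mixes features
    coming from both branches (assumed permutation strategy)."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class GroupedDilationBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, dilation=2, groups=2):
        super().__init__()
        # Standard depthwise convolution branch (as in a MobileNetV1 DSC).
        self.depthwise = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride, padding=1,
                      groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        )
        # Parallel depthwise *dilated* convolution branch.
        self.dilated = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride, padding=dilation,
                      dilation=dilation, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        )
        self.groups = groups
        # Grouped pointwise convolution over the concatenated, permuted maps;
        # grouping keeps the added parameter count small.
        self.grouped_pw = nn.Sequential(
            nn.Conv2d(2 * in_ch, out_ch, 1, groups=groups, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = torch.cat([self.depthwise(x), self.dilated(x)], dim=1)  # concat branches
        y = channel_shuffle(y, self.groups)                         # permute channels
        return self.grouped_pw(y)                                   # grouped 1x1 conv


if __name__ == "__main__":
    block = GroupedDilationBlock(32, 64)
    print(block(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```

Under these assumptions, the extra cost over a plain DSC is one additional depthwise kernel per block plus a pointwise convolution whose parameters are divided by the group count, which is consistent with the abstract's claim of only a marginal parameter increase.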