WLA-Net: A Whole New Light-weight Architecture For Visual Task

VRCAI 2022 (modified: 17 Apr 2023)
Abstract: In this paper, we introduce WLA-Net, a new convolutional network with fewer parameters and lower FLOPs. WLA-Net is based on a cross architecture that combines an attention mechanism with residual blocks to build lightweight deep neural networks. While improving classification accuracy, the number of model parameters is reduced, making the model more lightweight and improving resource utilization. A lightweight convolution module is designed that performs image classification accurately and efficiently, together with a large-kernel convolution attention module that further improves classification accuracy. In addition, a new attention module is proposed that mines information aggregation along the channel dimension to extract more effective deep features; it effectively fuses the channel features of an image to obtain higher accuracy. A new residual structure is also designed to fuse information between feature channels and tie them more closely together. The classification accuracy of the model is verified on large natural-image datasets, and experimental results show that the proposed method achieves state-of-the-art (SOTA) performance.
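The abstract does not give the exact layer definitions, so the following is only a minimal PyTorch sketch of what such a block could look like: a depthwise large-kernel convolution used as spatial attention, a squeeze-and-excitation style channel-attention module, and a residual connection. All names, kernel sizes, and the reduction ratio are assumptions for illustration, not the paper's actual WLA-Net design.

```python
# Illustrative sketch only: an assumed lightweight residual block combining
# large-kernel convolution attention and channel attention, in the spirit of
# the abstract. Layer choices are NOT taken from the WLA-Net paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # aggregate spatial info per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))               # reweight channels


class LargeKernelAttention(nn.Module):
    """Depthwise large-kernel convolution used as a spatial gate (assumed design)."""

    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)
        self.pw = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.pw(self.dw(x)))  # gate features spatially


class LightweightBlock(nn.Module):
    """Hypothetical residual block stacking the two attention branches."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            LargeKernelAttention(channels),
            ChannelAttention(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)                        # residual connection


if __name__ == "__main__":
    block = LightweightBlock(32)
    out = block(torch.randn(1, 32, 56, 56))
    print(out.shape)                                   # torch.Size([1, 32, 56, 56])
```

The depthwise and 1x1 convolutions keep the parameter count low, which matches the abstract's stated goal of reducing parameters and FLOPs, but the concrete structure of the paper's modules may differ.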