Fusion of Deep Transfer Learning with Mixed Convolution Network

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Deep Transfer Learning, Fusion model, Mixed Convolution Network, Feature Enhancement Network
Abstract: A central goal of image classification research in computer vision is improving performance while optimizing the number of parameters. The evolution of deep learning has pushed model sizes to hundreds of millions of parameters, which in turn inflates training time. Contemporary research has therefore turned to parameter optimization on par with performance. In this paper, a fusion-based deep transfer learning approach is equipped with a mixed convolution block. The proposed block is designed with two convolution paths: a residual convolution and a separable convolution. The residual path mitigates vanishing gradients, while the separable path captures depthwise features. Experiments on the popular Fashion-MNIST benchmark dataset show that the proposed mixed convolution block benefits the pre-trained models, with a clear improvement of 1% over the base models. Further, the proposed fusion model achieves a competitive accuracy of 96.04% relative to existing models.
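The abstract describes a block with two convolution paths, residual and depthwise-separable, fused together. A minimal sketch of such a block is shown below; note this is an illustration only, since the paper's exact layer ordering, channel widths, and fusion rule (concatenation vs. summation) are not given here, so those choices are assumptions.

```python
import torch
import torch.nn as nn

class MixedConvBlock(nn.Module):
    """Illustrative mixed convolution block: a residual 3x3 path plus a
    depthwise-separable path, fused by concatenation and a 1x1 conv.
    Channel sizes and the fusion rule are assumptions, not from the paper."""

    def __init__(self, channels):
        super().__init__()
        # Residual path: standard conv; identity skip helps avoid vanishing gradients
        self.res_conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Separable path: depthwise conv (groups=channels) then pointwise 1x1 conv
        self.sep_conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)
        # 1x1 conv fuses the two concatenated paths back to `channels`
        self.fuse = nn.Conv2d(2 * channels, channels, 1, bias=False)

    def forward(self, x):
        res = self.act(x + self.res_conv(x))   # residual path with skip connection
        sep = self.act(self.sep_conv(x))       # depthwise-separable path
        return self.act(self.fuse(torch.cat([res, sep], dim=1)))

# Fashion-MNIST images are 28x28; a 32-channel feature map is an assumed width
x = torch.randn(1, 32, 28, 28)
y = MixedConvBlock(32)(x)
print(tuple(y.shape))  # (1, 32, 28, 28)
```

The depthwise convolution processes each channel independently (cutting parameter count relative to a full convolution), while the residual skip keeps gradients flowing through deep stacks, which matches the two motivations stated in the abstract.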
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
TL;DR: Fusion of Deep Transfer Learning
