Improved Transfer Learning Based on Increased Model Capacity and Weight Re-initialization for ResNet
Abstract: The Residual Network (ResNet) architecture is among the most renowned in deep learning and has been widely employed for image classification, object detection, semantic segmentation, and related tasks. In practice, when models are deployed on novel tasks or custom datasets, transfer learning is frequently applied by taking a pretrained model and fine-tuning its final layer for the new task. In this paper, we present a novel transfer learning framework for ResNet that combines increased model capacity with weight re-initialization. Specifically, the framework explores weight re-initialization strategies for ResNet variants with increased capacity. Experimental results on the CUB-200-2011 and Food-101 datasets demonstrate that the proposed approach outperforms baseline methods.
External IDs: dblp:conf/icic/PangHKXLXX25