Knowledge-Distillation-Warm-Start Training Strategy for Lightweight Super-Resolution Networks

Published: 01 Jan 2023, Last Modified: 19 May 2025 · ICONIP (12) 2023 · CC BY-SA 4.0
Abstract: In recent years, studies on lightweight networks have made rapid progress in the field of image Super-Resolution (SR). Although lightweight SR networks are computationally efficient and save parameters, the simplification of their structure inevitably limits their performance. To further enhance the efficacy of lightweight networks, we propose a Knowledge-Distillation-Warm-Start (KDWS) training strategy. This strategy further optimizes lightweight networks using dark knowledge from traditional large-scale SR networks during warm-start training, and it empirically improves the performance of lightweight models. For our experiments, we chose several traditional large-scale SR networks and lightweight networks as teacher and student networks, respectively. The student network is initially trained with a conventional warm-start strategy, then receives additional supervision from the teacher network during further warm-start rounds. Evaluation on common test datasets shows that our proposed training strategy yields better performance for a lightweight SR network. Furthermore, because the approach is not limited by network structure or task type, it can be adopted in any deep learning training process, not only image SR tasks.
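The abstract describes two ingredients: a conventional warm-start phase (typically a cosine-annealing-with-restarts learning-rate schedule) and a subsequent phase in which the student is additionally supervised by the teacher's outputs. A minimal sketch of both pieces is below; the specific loss form (weighted L1 against the ground truth and the teacher output), the weighting parameter `alpha`, and the schedule constants are illustrative assumptions, not details taken from the paper.

```python
import numpy as np


def l1(a, b):
    # Mean absolute error, the reconstruction loss commonly used in SR.
    return float(np.mean(np.abs(a - b)))


def kdws_loss(student_sr, teacher_sr, hr, alpha=0.5):
    """Hypothetical combined loss for the distillation warm-start phase:
    a reconstruction term against the ground-truth HR image plus a
    distillation term against the teacher's SR output.
    `alpha` (assumed, not from the paper) balances the two terms."""
    return (1.0 - alpha) * l1(student_sr, hr) + alpha * l1(student_sr, teacher_sr)


def cosine_restart_lr(step, period, lr_max=2e-4, lr_min=1e-7):
    """Cosine-annealing-with-restarts schedule, a common choice for
    warm-start training: the learning rate decays from lr_max to lr_min
    over `period` steps, then restarts at lr_max for the next round."""
    t = step % period
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + np.cos(np.pi * t / period))
```

With `alpha = 0` this reduces to ordinary supervised SR training, so the same loop can run the conventional warm-start rounds first and then switch on the teacher term for the KDWS rounds.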