Knowledge Distillation for Single Image Super-Resolution via Contrastive Learning

Published: 01 Jan 2024, Last Modified: 11 Apr 2025 · ICMR 2024 · CC BY-SA 4.0
Abstract: In recent years, driven by the rapid development of deep learning, single image super-resolution (SR) has advanced greatly. Most SR methods build their networks from convolutional layers and achieve superior results over traditional methods based on hand-crafted features. However, most methods based on convolutional neural networks (CNNs) blindly increase network depth, which leads to a large number of parameters, incurs heavy computational and memory costs, and greatly limits deployment on resource-constrained devices. To alleviate this problem, a knowledge distillation framework based on contrastive learning is proposed to compress and accelerate SR models with large numbers of parameters. The student network is constructed directly by reducing the number of layers of the teacher network. In particular, the proposed method distills the statistics of the intermediate feature maps of the teacher network to train the lightweight student network. In addition, a novel contrastive loss is introduced through explicit knowledge transfer to improve the reconstruction performance of the student network. Experiments show that the proposed contrastive distillation framework can effectively compress the model with an acceptable loss of performance.
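To make the described objective concrete, the sketch below shows one way a feature-statistics distillation term and a contrastive term could be combined in PyTorch. The specific statistics (channel-wise mean and standard deviation), the L1 distance, and the use of a bicubic upsample as the negative sample are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of contrastive knowledge distillation for SR, assuming PyTorch.
# The choice of statistics, distances, and negatives below is illustrative only.
import torch
import torch.nn.functional as F


def feature_stat_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Match channel-wise mean and std of intermediate feature maps (B, C, H, W)."""
    s_mean, t_mean = student_feat.mean(dim=(2, 3)), teacher_feat.mean(dim=(2, 3))
    s_std, t_std = student_feat.std(dim=(2, 3)), teacher_feat.std(dim=(2, 3))
    return F.l1_loss(s_mean, t_mean) + F.l1_loss(s_std, t_std)


def contrastive_loss(anchor_emb: torch.Tensor,
                     positive_emb: torch.Tensor,
                     negative_embs: list[torch.Tensor],
                     eps: float = 1e-7) -> torch.Tensor:
    """Pull the student output embedding toward the positive (e.g. the teacher output)
    and push it away from negatives (e.g. a bicubic upsample of the LR input)."""
    pos = F.l1_loss(anchor_emb, positive_emb)
    neg = sum(F.l1_loss(anchor_emb, n) for n in negative_embs)
    return pos / (neg + eps)


# Usage sketch (hypothetical tensors): student_sr, teacher_sr, bicubic_up are
# (B, 3, H, W) images and `embed` is any frozen feature extractor.
#   total = rec_loss \
#         + alpha * feature_stat_loss(student_feat, teacher_feat) \
#         + beta * contrastive_loss(embed(student_sr), embed(teacher_sr), [embed(bicubic_up)])
```

The ratio form of the contrastive term is one common way to keep the student's reconstruction close to the teacher's while keeping it away from low-quality anchors; the actual weighting and embedding network used in the paper are not specified in this abstract.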