Prototypical Distribution Divergence Loss for Image Restoration

Published: 2025 · Last Modified: 21 Jan 2026 · IEEE Trans. Image Process. 2025 · CC BY-SA 4.0
Abstract: Neural networks have achieved significant advances in image restoration, and much research has focused on designing new architectures for convolutional neural networks (CNNs) and Transformers. The choice of loss function, despite being a critical factor when training image restoration networks, has attracted little attention. Existing losses are primarily based on semantic or hand-crafted representations. Recently, discrete representations have demonstrated strong capabilities for representing images. In this work, we explore losses based on discrete representations for image restoration. Specifically, we propose a Local Residual Quantized Variational AutoEncoder (Local RQ-VAE) to learn prototype vectors that represent the local details of high-quality images. We then propose a Prototypical Distribution Divergence (PDD) loss that measures the Kullback-Leibler divergence between the prototypical distributions of the restored and target images. Experimental results demonstrate that our PDD loss improves restored images in both PSNR and visual quality for state-of-the-art CNNs and Transformers on several image restoration tasks, including image super-resolution, image denoising, image motion deblurring, and defocus deblurring.
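The abstract does not specify how the prototypical distributions are formed, so the following is only a minimal NumPy sketch under stated assumptions: local features are softly assigned to the learned prototype vectors via a softmax over negative squared distances (a common choice; the temperature `tau` and the `soft_assignments`/`pdd_loss` names are hypothetical, not from the paper), and the loss is the KL divergence between the target and restored assignment distributions, averaged over positions.

```python
import numpy as np

def soft_assignments(features, prototypes, tau=1.0):
    """Soft assignment of local features to prototype vectors.

    features:   (N, D) local feature vectors extracted from an image
    prototypes: (K, D) learned prototype (codebook) vectors
    Returns an (N, K) matrix of per-position probability distributions,
    computed as a softmax over negative squared distances (assumed form).
    """
    d2 = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    logits = -d2 / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def pdd_loss(restored_feats, target_feats, prototypes, tau=1.0, eps=1e-8):
    """KL(target || restored) between prototypical distributions,
    averaged over spatial positions (direction of the KL is an assumption)."""
    p = soft_assignments(target_feats, prototypes, tau)
    q = soft_assignments(restored_feats, prototypes, tau)
    return float((p * (np.log(p + eps) - np.log(q + eps))).sum(axis=1).mean())
```

As a sanity check, the loss is zero when restored and target features coincide and positive otherwise, matching the basic properties of a KL divergence.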