A Robust and Lightweight Generative Adversarial Network with Zero-Shot Learning for Image Super-Resolution

Masuma Aktar, Kuldeep Singh Yadav, Rabul Hussain Laskar

Published: NCC 2025, Last Modified: 27 Feb 2026. License: CC BY-SA 4.0
Abstract: Most Single Image Super-Resolution (SISR) methods rely on paired training data, typically generated through bicubic downsampling, which fails to capture the complex degradations seen in real-world scenarios. This reliance on synthetic data creates a performance gap when deploying SISR models on actual degraded images. To address this limitation, we propose a Robust Zero-Shot learning-based Generative Adversarial Network for Super-Resolution (RZSGAN-SR) framework that leverages zero-shot learning to handle unknown degradations effectively. Our approach incorporates a Zero-Shot learning-based Degradation Correction Network (ZSDCN) to translate real-world degraded Low-Resolution (LR) images into synthetic LR images with known degradations. These translated images are then fed into a lightweight, robust Generative Adversarial Network (GAN)-based SR network to generate high-quality, visually realistic Super-Resolved (SR) images. More specifically, the proposed RZSGAN-SR is a two-phase framework consisting of zero-shot degradation correction followed by efficient GAN-based upsampling. This hybrid model combines the adaptability of Zero-Shot Learning (ZSL) with the realism of a robust GAN-based SR network, yielding SR reconstructions with high fidelity and perceptual quality. Extensive experiments show that RZSGAN-SR surpasses state-of-the-art methods, achieving superior reconstruction quality (PSNR, SSIM) and perceptual quality (LPIPS) on real-world degraded images.
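The two-phase structure described in the abstract — degradation correction followed by GAN-based upsampling — can be sketched as a simple function composition. The sketch below is purely illustrative and is not the authors' implementation: `zsdcn_correct` and `gan_sr_upsample` are hypothetical stand-ins (an identity map and nearest-neighbor upsampling) that only demonstrate how the two stages would chain together in a pipeline.

```python
import numpy as np

def zsdcn_correct(lr_real: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the paper's ZSDCN (phase 1):
    # would map a real-world degraded LR image to a synthetic-
    # degradation LR image. Here it is an identity placeholder.
    return lr_real

def gan_sr_upsample(lr_synth: np.ndarray, scale: int = 2) -> np.ndarray:
    # Hypothetical stand-in for the GAN-based SR network (phase 2):
    # nearest-neighbor upsampling replaces the learned generator,
    # purely to show the shape transformation of the pipeline.
    return lr_synth.repeat(scale, axis=0).repeat(scale, axis=1)

# Compose the two phases on a dummy 32x32 LR image at scale x4.
lr = np.random.rand(32, 32)
sr = gan_sr_upsample(zsdcn_correct(lr), scale=4)
print(sr.shape)  # (128, 128)
```

In the actual framework both stages would be trained networks; the composition order (correction first, upsampling second) is the point of the sketch.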