Abstract: This paper presents a demo of our novel approach to improving single-image super-resolution by integrating trainable regularization techniques. Recent advances, such as the noise Enhanced Super-Resolution Generative Adversarial Network Plus (nESRGAN+), have shown promising results in improving upon ESRGAN. However, despite its success, nESRGAN+ still suffers from limited perceptual quality owing to the absence of fine hallucinated details, the presence of unwanted artifacts, and slow convergence. To address these challenges, we propose integrating multiple parametric regularization algorithms that enable iterative adjustment of the network gradients. Through a series of experiments, we demonstrate that our approach yields high-quality reconstructed images, effectively restoring complex textures even in previously unseen scenarios. Moreover, the introduced loss functions accelerate convergence and substantially improve the visual fidelity of the reconstructed outputs. Our online demo system accepts input images and displays the super-resolved results produced by our method alongside those of the two state-of-the-art methods.
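To make the idea of a trainable, parametric regularization term more concrete, the following is a minimal sketch of how such a term could be attached to a super-resolution generator loss so that its parameters are updated together with the network gradients. This is not the authors' implementation; all names (e.g., `TrainableRegularizer`, `lambda_init`, the placeholder generator, and the L1 content loss) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's actual method) of a trainable,
# parametric regularization term added to a super-resolution generator loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableRegularizer(nn.Module):
    """Learnable penalty on the super-resolved output; its strength and
    filter weights are updated by backprop alongside the generator."""
    def __init__(self, channels=3, lambda_init=0.01):
        super().__init__()
        # Learnable regularization strength (kept positive via softplus).
        self.raw_lambda = nn.Parameter(torch.tensor(float(lambda_init)))
        # Learnable depthwise filter used to penalize high-frequency artifacts.
        self.filter = nn.Conv2d(channels, channels, kernel_size=3,
                                padding=1, bias=False, groups=channels)

    def forward(self, sr):
        strength = F.softplus(self.raw_lambda)
        response = self.filter(sr)
        return strength * response.pow(2).mean()

# Usage: include the regularizer's parameters in the same optimizer as the
# generator so the penalty is adjusted iteratively with the network gradients.
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))  # placeholder generator
regularizer = TrainableRegularizer()
optimizer = torch.optim.Adam(
    list(generator.parameters()) + list(regularizer.parameters()), lr=1e-4)

lr_batch = torch.rand(4, 3, 32, 32)   # dummy low-resolution inputs
hr_batch = torch.rand(4, 3, 32, 32)   # dummy ground-truth targets
sr_batch = generator(lr_batch)
loss = F.l1_loss(sr_batch, hr_batch) + regularizer(sr_batch)
loss.backward()
optimizer.step()
```

In a full GAN setup such as nESRGAN+, this regularization term would be added to the generator's combined perceptual and adversarial loss rather than to a plain L1 loss; the sketch only illustrates how the regularizer's parameters flow through the same optimization step as the generator's.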