Abstract: Image super-resolution (SR), which aims to restore accurate high-resolution images from low-resolution ones, plays a pivotal role in image processing. However, the performance of SR models is often limited by conventional data augmentation and data degradation techniques. Conventional data augmentation methods for SR are typically restricted to geometric transformations and lack semantic richness, while traditional data degradation methods simulate degradation through a fixed sequence of blurring, noise addition, compression, and resizing, lacking the complexity essential for robust model training. In this paper, building on pre-trained large-scale text-to-image diffusion models, we propose a novel data augmentation method and an innovative data degradation method for SR modeling. Our data augmentation method uses Stable Diffusion to modify image content at the semantic level in a controlled manner, enriching training datasets with nuanced variations while preserving the quality of the original images. Fine-tuning Stable Diffusion on domain-matched data further enhances the augmentation efficacy. In addition, by carefully designing control signals, our data degradation method uses diffusion to emulate degradation, simulating diverse unknown input corruptions and thereby improving the robustness of SR models to unfamiliar degradation patterns. Our data augmentation method improves PSNR by 0.8 dB on the FFHQ dataset and by 0.28 dB on the Manga109 dataset for SR tasks, while our data degradation technique significantly reduces artifacts in real-world SR imagery, clearly outperforming traditional degradation pipelines.
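To make the augmentation idea concrete, the following is a minimal sketch of how a semantic, controlled img2img augmentation could be instantiated with the Hugging Face diffusers library; the checkpoint name, prompt, file paths, and strength value are illustrative assumptions, not the authors' actual pipeline or settings. The augmented HR image would then be degraded downstream to produce its LR counterpart for SR training.

# Minimal sketch of semantic data augmentation via Stable Diffusion img2img.
# Assumptions (not from the paper): the stable-diffusion-v1-5 checkpoint,
# the example prompt, and strength=0.3 are illustrative only.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

hr_image = Image.open("train_hr.png").convert("RGB")  # original HR training image

# A low denoising strength keeps the output close to the original image
# (preserving quality) while still introducing semantic-level variation.
augmented = pipe(
    prompt="a photo of the same scene, slight semantic variation",
    image=hr_image,
    strength=0.3,        # small value -> controlled, quality-preserving edits
    guidance_scale=7.5,
).images[0]
augmented.save("train_hr_aug.png")  # paired LR image is generated downstream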
Keywords: data augmentation, data degradation, super-resolution
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3560