Keywords: real-world super-resolution, domain adaptation, rank-aware allocation, low-cost training, parameter-efficient finetuning
TL;DR: AdaptSR is a rank-aware low-rank adaptation framework that enables sustainable, budget-friendly domain adaptation of super-resolution models, achieving state-of-the-art real-world performance at minimal training cost and with no inference overhead.
Abstract: Recovering high-frequency details from low-resolution images remains a central challenge in super-resolution (SR), particularly under complex and unknown real-world degradations. While GAN-based methods improve perceptual sharpness, they are unstable and introduce artifacts, and diffusion models achieve strong fidelity but demand excessive computation, even in few-step variants. We present AdaptSR, a rank-aware low-rank adaptation framework that efficiently repurposes bicubic-trained CNN and Transformer SR backbones for real-world tasks. Unlike full fine-tuning, AdaptSR inserts lightweight LoRA modules into convolution, attention, and MLP layers, updates them under a rank-aware allocation strategy guided by layer importance, and merges them back after training—ensuring no additional inference cost. This design reduces trainable parameters by up to 92% and shortens adaptation time from days to just 1–4 hours on a single GPU, aligning with the goals of sustainable and budget-friendly AI. Extensive experiments across diverse SR backbones and datasets show that AdaptSR consistently matches or surpasses full fine-tuning, outperforms recent GAN- and diffusion-based methods in distortion metrics, and delivers competitive perceptual quality. Comparisons with other parameter-efficient fine-tuning (PEFT) baselines further confirm the advantages of our rank-aware allocation. By unifying efficiency, scalability, and practical deployment, AdaptSR establishes a sustainable path for adapting SR models to real-world degradations. The code will be made publicly available.
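The insert-train-merge pattern described in the abstract can be sketched as follows. This is an illustrative minimal example, not the authors' implementation: the class name `LoRALayer`, the uniform rank, and the scaling convention are assumptions, and AdaptSR's actual rank-aware allocation assigns per-layer ranks by importance rather than a single fixed rank.

```python
import numpy as np

class LoRALayer:
    """Minimal LoRA sketch for a linear layer: W stays frozen,
    only the low-rank factors A and B are trained."""

    def __init__(self, W, rank=4, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                       # frozen pretrained weight, shape (out, in)
        self.scale = alpha / rank
        self.A = rng.normal(0.0, 0.01, (rank, W.shape[1]))  # trainable
        self.B = np.zeros((W.shape[0], rank))               # trainable, zero-init

    def forward(self, x):
        # adapted output: frozen base path plus low-rank update B @ A
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)

    def merge(self):
        # fold the update into W after training: identical outputs,
        # so inference carries no extra parameters or latency
        return self.W + self.scale * (self.B @ self.A)
```

Because `merge` folds `B @ A` back into the pretrained weight, the adapted model has exactly the original architecture at test time, which is why the method adds no inference cost.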
Supplementary Material: pdf
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 9011