DAMamba: Semantic Aware One-shot Test-time Domain Adaptation for Super-resolution

17 Sept 2025 (modified: 26 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Domain adaptation, Super resolution
TL;DR: We propose a semantic-aware one-shot test-time domain adaptation method for super-resolution Mamba (DAMamba), achieving efficient domain adaptation training without source samples and using only a single target domain sample.
Abstract: Domain adaptation methods effectively reduce the negative impact of domain gaps on the performance of Mamba-based super-resolution (SR) networks. Considering data privacy restrictions that prevent access to source-domain samples, along with users' tendency to capture or upload only a single image for SR, we propose a semantic-aware one-shot test-time domain adaptation method for super-resolution Mamba (DAMamba). Specifically, the semantic prior-guided cross-training method employs Alpha-CLIP semantic priors with a global perspective to guide feature scanning, improving the efficiency of key-information extraction. It also addresses the limited diversity of target-domain samples imposed by the single-sample constraint through pairwise patch combinations for domain adaptation training. Given the strong contextual dependency unique to the Mamba network, we propose a random blur data augmentation strategy that improves network robustness while avoiding the disruptions caused by zero-value masking. Finally, the proposed adaptive learning strategy dynamically identifies salient and ordinary layers, further improving domain adaptation efficiency. Extensive experiments demonstrate the effectiveness of DAMamba: its performance with a single target-domain sample surpasses that of state-of-the-art source-free domain adaptation methods trained on multiple target-domain samples. Our code is available at ***.
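The abstract's two data-side ideas — expanding a single target image via pairwise patch combinations, and perturbing inputs with random blur rather than zero-value masking — can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's released code: the function names (`extract_patches`, `pairwise_combinations`, `random_blur`), the non-overlapping tiling, and the Gaussian-blur sigma range are all hypothetical choices for demonstration.

```python
# Hypothetical sketch of one-shot sample expansion + blur augmentation.
# Not the authors' implementation; patch tiling and sigma range are assumed.
import itertools
import random

import numpy as np
from scipy.ndimage import gaussian_filter


def extract_patches(img: np.ndarray, patch_size: int) -> list:
    """Tile the single target-domain image into non-overlapping patches."""
    h, w = img.shape[:2]
    return [
        img[y:y + patch_size, x:x + patch_size]
        for y in range(0, h - patch_size + 1, patch_size)
        for x in range(0, w - patch_size + 1, patch_size)
    ]


def pairwise_combinations(patches: list) -> list:
    """All unordered patch pairs: one image yields C(n, 2) training samples,
    mitigating the diversity limit of the single-sample setting."""
    return list(itertools.combinations(patches, 2))


def random_blur(patch: np.ndarray, sigma_range=(0.5, 2.0)) -> np.ndarray:
    """Random Gaussian blur: perturbs the input without inserting the
    zero-valued regions that would disrupt Mamba's sequential context."""
    sigma = random.uniform(*sigma_range)
    return gaussian_filter(patch, sigma=sigma)


# One 128x128 target image -> 16 patches -> 120 augmented pairs.
img = np.random.rand(128, 128).astype(np.float32)
patches = extract_patches(img, patch_size=32)
pairs = pairwise_combinations(patches)
augmented = [(random_blur(a), random_blur(b)) for a, b in pairs]
print(len(patches), len(pairs))  # 16 120
```

The point of the sketch is combinatorial: n patches give n(n-1)/2 pairs, so even one image supplies a non-trivial adaptation set, and blur (unlike masking) keeps every token in the scan sequence informative.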
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 8798