Text-Aware Real-World Image Super-Resolution via Diffusion Model with Joint Segmentation Decoders

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Image Super-Resolution, Text Structure Preservation, Text Segmentation, Multi-task Learning, Diffusion Model
TL;DR: We propose TADiSR, a text-aware diffusion model with joint image super-resolution and segmentation decoders, achieving accurate full-image text SR under real-world degradations.
Abstract: The introduction of generative models has significantly advanced image super-resolution (SR) in handling real-world degradations. However, these models often incur fidelity issues, particularly the distortion of textual structures. In this paper, we introduce a novel diffusion-based SR framework, namely TADiSR, which integrates text-aware attention and joint segmentation decoders to recover not only natural details but also the structural fidelity of text regions in degraded real-world images. Moreover, we propose a complete pipeline for synthesizing high-quality images with fine-grained full-image text masks, combining realistic foreground text regions with detailed background content. Extensive experiments demonstrate that our approach substantially enhances text legibility in super-resolved images, achieving state-of-the-art performance across multiple evaluation metrics and exhibiting strong generalization to real-world scenarios. Our code is available [here](https://github.com/mingcv/TADiSR).
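The joint SR-plus-segmentation decoding described in the abstract implies a multi-task training objective that combines an image reconstruction term with a text-mask segmentation term. The sketch below illustrates one plausible form of such a combined loss; it is a conceptual illustration, not the paper's actual implementation — the L1/BCE choices, the weighting factor `lam`, and the function name `joint_loss` are all assumptions.

```python
import numpy as np

def joint_loss(sr_pred, sr_target, mask_pred, mask_target, lam=0.5):
    """Hypothetical multi-task objective: SR reconstruction + text segmentation.

    sr_pred/sr_target: super-resolved image and ground-truth HR image.
    mask_pred/mask_target: predicted text-mask probabilities and binary GT mask.
    lam: assumed weight balancing the segmentation term (not from the paper).
    """
    # Pixel-wise reconstruction loss on the super-resolved output (L1 for simplicity)
    l_sr = np.abs(sr_pred - sr_target).mean()
    # Binary cross-entropy on the predicted full-image text mask
    eps = 1e-7
    p = np.clip(mask_pred, eps, 1.0 - eps)
    l_seg = -(mask_target * np.log(p) + (1.0 - mask_target) * np.log(1.0 - p)).mean()
    return l_sr + lam * l_seg

# Toy usage: a perfect SR prediction and a near-perfect mask yield a small loss.
sr = np.ones((4, 4))
mask_gt = np.array([1.0, 0.0, 1.0, 0.0])
mask_p = np.array([0.99, 0.01, 0.99, 0.01])
loss = joint_loss(sr, sr, mask_p, mask_gt)
```

In this toy setup the reconstruction term is exactly zero and the segmentation term is close to zero, so the combined loss is small; with a distorted text region, the mask term would penalize the SR branch through the shared features.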
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 2436