Keywords: super-resolution, artifact detection, computer vision, deep learning
TL;DR: We introduce a dataset and method for modeling the prominence of super-resolution artifacts, enabling more perceptually relevant evaluation than binary artifact masks.
Abstract: Generative image super-resolution (SR) is rapidly advancing in visual quality and detail restoration. As the capacity of SR models expands, however, so does their tendency to produce artifacts: incorrect, visually disturbing details that reduce perceived quality. Crucially, their perceptual impact varies: some artifacts are barely noticeable while others strongly degrade the image. We argue that artifacts should be characterized by their \emph{prominence} to human observers rather than treated as uniform binary defects. Motivated by this, we present a novel dataset of 1302 artifact examples from 11 contemporary image-SR methods, where each artifact is paired with a crowdsourced prominence score. Building on this dataset, we train a lightweight regressor that produces spatial prominence heatmaps and outperforms existing methods at detecting prominent artifacts. We release the dataset and code to facilitate prominence-aware evaluation and mitigation of SR artifacts.
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 17038