Keywords: Histopathology, Diffusion Models, Artifact Detection, Synthetic Data Generation, Image Segmentation
Abstract: Histopathology image datasets frequently suffer from artifacts such as out-of-focus blur caused by inconsistent microscopy procedures, significantly compromising the reliability of downstream analysis. While identifying and segmenting these artifacts is essential for robust quality control, the development of supervised detection models is hindered by a scarcity of pixel-level annotations for blurred regions. To address this data limitation, we introduce a novel framework for synthesizing realistic blur artifacts using deep generative modeling. Unlike conventional statistical approaches, which fail to capture the stochastic and spatially variant nature of optical aberrations, our approach leverages Conditional Generative Adversarial Networks (cGANs) and Conditional Denoising Diffusion Probabilistic Models (cDDPMs). These models are trained to translate sharp histological images into their blurred counterparts while preserving the textural semantics of the tissue. We demonstrate that segmentation networks trained on this synthetically generated data exhibit superior generalization in identifying blur artifacts compared to models trained on statistically degraded data or limited real data.
Primary Subject Area: Image Synthesis
Secondary Subject Area: Segmentation
Registration Requirement: Yes
Read CFP & Author Instructions: Yes
Originality Policy: Yes
Single-blind & Not Under Review Elsewhere: Yes
LLM Policy: Yes
Submission Number: 397