Keywords: Diffusion Model, Weakly Supervised Learning
TL;DR: A unified diffusion framework that robustly trains class-conditional models using noisy, ambiguous, or incomplete weak annotations.
Abstract: Conditional diffusion models have achieved remarkable success in generative tasks, yet their standard training typically relies on cleanly labeled data. In contrast, real-world scenarios frequently involve weak annotations such as noisy, ambiguous, or incomplete supervision, which leads to biased score estimation and significantly degrades generation performance. To address this challenge, we propose DELTA, a unified framework for robustly training Diffusion ModELs with weak annoTAtions. To our knowledge, this is the first systematic study unifying these diverse forms of weak annotation within diffusion models. Grounded in likelihood maximization, our framework decomposes the training objective into two synergistic components: a generative term that recovers clean conditional scores via posterior-weighted score matching, and a classification term that infers reliable class-posterior probabilities using the diffusion model itself. To improve computational efficiency, we further develop an optimized timestep sampling strategy for the diffusion classifier. Extensive experiments across multiple tasks demonstrate DELTA's effectiveness in overcoming the limitations of weak annotations.
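The posterior-weighted score matching mentioned above can be illustrated with a minimal numpy sketch. Everything here is an assumption for illustration only: the toy per-class linear "network" `eps_theta`, the single fixed noise level, and the given posterior vector stand in for the paper's actual score network, diffusion schedule, and diffusion-classifier output, none of which are specified in this abstract.

```python
import numpy as np

NUM_CLASSES, DIM = 3, 4
rng = np.random.default_rng(0)

def eps_theta(x_t, y, W):
    # Hypothetical class-conditional noise predictor: one linear map per class.
    return W[y] @ x_t

def posterior_weighted_sm_loss(x0, posterior, W, rng):
    # Toy forward diffusion at a single noise level.
    eps = rng.standard_normal(DIM)
    x_t = x0 + eps
    # Instead of using one (possibly wrong) hard label, weight the per-class
    # denoising error by the inferred class posterior q(y | x0).
    per_class = np.array(
        [np.sum((eps_theta(x_t, y, W) - eps) ** 2) for y in range(NUM_CLASSES)]
    )
    return float(posterior @ per_class)

W = rng.standard_normal((NUM_CLASSES, DIM, DIM)) * 0.1
posterior = np.array([0.7, 0.2, 0.1])  # assumed output of the diffusion classifier
loss = posterior_weighted_sm_loss(rng.standard_normal(DIM), posterior, W, rng)
```

The key design point reflected here is that a soft posterior replaces the hard label in the score-matching objective, so an unreliable weak annotation contributes to each class's conditional score only in proportion to its inferred probability.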
Submission Number: 32