Keywords: Diffusion model, Interpretability, Class semantics, Feature distillation, Adversarial robustness, Generalization
TL;DR: We identify latent codes in conditional diffusion models that preserve the core class semantics with minimal irrelevant information, which can be used to improve the robustness of downstream classifiers.
Abstract: Conditional diffusion models (CDMs) have shown impressive performance across a range of generative tasks. Their ability to model the full data distribution has opened new avenues for analysis-by-synthesis in downstream discriminative learning. However, this same modeling capacity causes CDMs to entangle class-defining features with irrelevant context, posing challenges to extracting robust and interpretable representations. To address this, we introduce the Canonical Latent Representation Identifier (CLARID), a training-free procedure for identifying Canonical Latent Representations (CanoReps): latent codes whose internal CDM features preserve essential categorical information while discarding non-discriminative signals. When decoded, CanoReps produce representative samples for each class, offering an interpretable and compact summary of the core class semantics with minimal irrelevant detail. Exploiting CanoReps, we develop a novel diffusion-based feature distillation paradigm, CaDistill. While the student has full access to the training set, the CDM teacher transfers core class knowledge only via CanoReps, which amount to merely 10% of the training data in size. After training, the student achieves strong adversarial robustness and generalization, focusing on class signals rather than spurious background cues. Our findings suggest that CDMs can serve not only as image generators but also as compact, interpretable teachers that drive robust representation learning.
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 12014