Keywords: interpretability, interpretable diffusion model, diffusion models, transparency, prototypical network, shortcut learning, bias detection
Abstract: Addressing the opacity of diffusion-based generative models is urgently needed, as their applications continue to expand while their underlying procedures largely remain a black box.
Guided by a critical question -- how can the diffusion generation process be interpreted and understood? -- we propose \textit{Patronus}, an interpretable diffusion model that incorporates a prototypical network to encode semantics in visual patches, revealing \textit{what} visual patterns are learned and \textit{where} and \textit{when} they emerge throughout denoising.
The interpretability of Patronus provides deeper insight into the generative mechanism, enabling the detection of shortcut learning via unwanted correlations and the tracing of semantic emergence across timesteps. We evaluate \textit{Patronus} on four natural image datasets and one medical imaging dataset, demonstrating both faithful interpretability and strong generative performance. With this work, we open new avenues for understanding and steering diffusion models through prototype-based interpretability.
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 5778