XDDPM: EXPLAINABLE DENOISING DIFFUSION PROBABILISTIC MODEL FOR SCIENTIFIC MODELING

Published: 03 Mar 2024, Last Modified: 04 May 2024. AI4DiffEqtnsInSci @ ICLR 2024 Poster. License: CC BY 4.0
Keywords: explainable, generative models, Information Bottleneck, scientific modeling
Abstract: In recent years, diffusion models have emerged as powerful tools for generatively predicting high-dimensional observations across various scientific and engineering domains, including fluid dynamics, weather forecasting, and physics. Typically, researchers want not only faithful generation, but also an explanation of these high-dimensional generations in terms of accompanying signals such as measurements of force, currents, or pressure. However, such explainable generation capability is still lacking in existing diffusion models. Here we introduce the Explainable Denoising Diffusion Probabilistic Model (xDDPM), a simple variant of the standard DDPM that enables explainable generation by producing only the components of a sample that are pertinent to the given signal. The key feature of xDDPM is that it trains the denoising network to exclusively denoise these relevant parts while leaving non-relevant portions noisy. It achieves this by incorporating an Information Bottleneck loss into its learning objective, which facilitates the discovery of relevant components within the samples. Our experimental results, conducted on two cell dynamics datasets and one fluid dynamics dataset, consistently demonstrate xDDPM's capability for explainable generation. For instance, when provided with force measurements on a jellyfish-like robot, xDDPM accurately generates the relevant pressure fields surrounding the robot while effectively disregarding distant fields.
Submission Number: 38
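
To make the mechanism described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the kind of training objective the abstract describes: a denoising loss that is restricted to signal-relevant regions via a learned soft mask, plus an Information Bottleneck-style penalty that keeps the mask sparse. All names (`MaskNet`, `DenoiseNet`, `xddpm_loss`, `ib_weight`) and architectural choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of an xDDPM-style training step: the denoiser is only
# trained to remove noise on signal-relevant regions (selected by a soft mask
# predicted from the conditioning signal), and an IB-style penalty keeps the
# mask sparse. Module and variable names are illustrative, not from the paper.

class MaskNet(nn.Module):
    """Predicts a soft relevance mask over the sample from the conditioning signal."""
    def __init__(self, signal_dim, field_size):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(signal_dim, 128), nn.ReLU(),
            nn.Linear(128, field_size * field_size),
        )
        self.field_size = field_size

    def forward(self, signal):
        logits = self.net(signal).view(-1, 1, self.field_size, self.field_size)
        return torch.sigmoid(logits)  # relevance of each spatial location, in (0, 1)


class DenoiseNet(nn.Module):
    """Toy stand-in for the DDPM denoiser (a real model would be a U-Net)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x_t, t):
        return self.conv(x_t)  # predicted noise


def xddpm_loss(x0, signal, t, alpha_bar, mask_net, denoise_net, ib_weight=0.01):
    """Masked denoising loss plus an IB-style sparsity penalty on the mask."""
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise       # standard forward diffusion
    mask = mask_net(signal)                             # which parts are signal-relevant
    pred_noise = denoise_net(x_t, t)
    # Only relevant regions are pushed to match the true noise; the rest stays noisy.
    denoise_term = (mask * (pred_noise - noise) ** 2).mean()
    # IB-style regularizer: penalize how much of the sample is kept as "relevant".
    ib_term = mask.mean()
    return denoise_term + ib_weight * ib_term


if __name__ == "__main__":
    B, S, D, T = 4, 32, 16, 1000                        # batch, field size, signal dim, steps
    alpha_bar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)
    x0 = torch.randn(B, 1, S, S)                        # e.g. a pressure field
    signal = torch.randn(B, D)                          # e.g. force measurements
    t = torch.randint(0, T, (B,))
    loss = xddpm_loss(x0, signal, t, alpha_bar, MaskNet(D, S), DenoiseNet())
    print(float(loss))
```

Under these assumptions, minimizing the masked term teaches the network to denoise only where the mask is active, while the `ib_weight * mask.mean()` term discourages marking everything as relevant, mirroring the Information Bottleneck trade-off the abstract refers to.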