Beyond the Black Box: Identifiable Interpretation and Control in Generative Models via Causal Minimality
Keywords: Causal interventions, Foundational work, Understanding high-level properties of models
TL;DR: We identify causal minimality as the unifying principle behind identifiable interpretation and control in vision and language foundation models.
Abstract: Deep generative models, while revolutionizing fields like image and text generation, largely operate as opaque "black boxes", hindering human understanding, control, and alignment. Current empirical interpretability tools often lack theoretical guarantees, risking subjective or unreliable insights. In this work, we tackle this challenge by establishing a principled foundation for interpretable and controllable generative models. We demonstrate that the principle of causal minimality, which favors the simplest causal explanation, can endow the latent representations of diffusion-based vision models and autoregressive language models with clear causal interpretation and robust, component-wise identifiable control. We introduce a novel theoretical framework of hierarchical selection models, in which higher-level concepts emerge from the constrained composition of lower-level variables, better capturing the complex dependencies in data generation. Under theoretically derived minimality conditions (manifesting as sparsity or compression constraints), we show that learned representations can be equivalent to the true latent variables of the data-generating process. Empirically, applying these constraints to leading generative models allows us to extract their innate hierarchical concept graphs, offering fresh insights into how their internal knowledge is organized. Furthermore, these causally grounded concepts serve as effective levers for fine-grained steering of model outputs, paving the way for more transparent, reliable systems.
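To make the notion of component-wise identifiability concrete, the following is a standard formalization from the identifiability literature, sketched here under assumed notation ($z$ for the true latent variables, $\hat{z}$ for the learned representation, $\pi$ and $h_i$ for the permutation and elementwise maps); the submission's exact conditions and equivalence class may differ.

% Illustrative only: a standard notion of component-wise identifiability;
% the submission's precise theorem and indeterminacy class may differ.
\begin{definition}[Component-wise identifiability, sketch]
Let $z = (z_1, \dots, z_n)$ be the true latent variables of the data-generating
process and $\hat{z} = (\hat{z}_1, \dots, \hat{z}_n)$ the learned representation.
The representation is component-wise identifiable if there exist a permutation
$\pi$ of $\{1, \dots, n\}$ and invertible scalar maps $h_1, \dots, h_n$ such that
\[
  \hat{z}_i = h_i\!\left(z_{\pi(i)}\right), \qquad i = 1, \dots, n .
\]
\end{definition}

Under this reading, "equivalent to the true latent variables" means recovery up to a permutation of components and an invertible elementwise transformation of each component.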
Submission Number: 246