Keywords: Circuit analysis, Diffusion models, Causal interventions, Relational representation, Scene composition, Generalization
TL;DR: Different text encoders in T2I diffusion models lead to different circuit mechanisms for relational object generation.
Abstract: Although Diffusion Transformers (DiTs) have greatly advanced text-to-image generation, models still struggle to generate the correct spatial relations between objects specified in the text prompt. While mechanistic interpretability studies have explained the behavior of language and vision transformers in terms of their internal computation of representations, such analyses have not yet been applied to how a DiT generates correct spatial relations between objects. In this study, we investigate this open problem in a controlled setting. We train, from scratch, DiTs of different sizes with different text encoders to generate images containing two objects whose attributes and spatial relations are specified in the text prompt. We find that, although all the models learn this task to near-perfect accuracy, the underlying mechanisms differ drastically depending on the text encoder. When using random text embeddings, the spatial-relation information is passed to image tokens through a two-stage circuit involving two cross-attention heads that separately read the spatial relation and the single-object attributes from the text prompt. When using a pretrained text encoder (T5), the DiT uses a different circuit that leverages information fusion in the text tokens, reading spatial-relation and single-object information together from a single text token. We further show that, although the in-domain performance is similar for the two settings, their robustness to out-of-domain perturbations differs, potentially suggesting why generating correct relations remains difficult in real-world scenarios.
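The circuit analysis described in the abstract relies on causal interventions on individual cross-attention heads. The following is a minimal sketch, not the paper's code, of one such intervention (activation patching) on a toy cross-attention module; all module names, dimensions, and the patched head index are illustrative assumptions.

```python
# Minimal sketch of activation patching on a single cross-attention head
# in a DiT-style block. Names, sizes, and head index are illustrative.
import torch
import torch.nn as nn

class ToyCrossAttention(nn.Module):
    def __init__(self, dim=64, n_heads=4):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.q, self.k, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, img_tokens, text_tokens, patch=None):
        B, N, _ = img_tokens.shape
        T = text_tokens.shape[1]
        q = self.q(img_tokens).view(B, N, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k(text_tokens).view(B, T, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.v(text_tokens).view(B, T, self.n_heads, self.head_dim).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        head_out = attn.softmax(-1) @ v              # (B, heads, N, head_dim)
        if patch is not None:                        # causal intervention: overwrite one
            head_idx, cached = patch                 # head's output with activations cached
            head_out[:, head_idx] = cached           # from a run on a different prompt
        return self.out(head_out.transpose(1, 2).reshape(B, N, -1)), head_out

# Usage sketch: cache a head's output under a "source" prompt, patch it into the
# run on a "base" prompt, and measure how the downstream output changes.
with torch.no_grad():
    attn = ToyCrossAttention()
    img = torch.randn(1, 16, 64)
    text_base, text_src = torch.randn(1, 8, 64), torch.randn(1, 8, 64)
    _, heads_src = attn(img, text_src)                                  # source run
    out_patched, _ = attn(img, text_base, patch=(2, heads_src[:, 2]))   # patched base run
```

In the actual study, the effect of such a patch would be measured on the generated image (e.g., whether the spatial relation flips), which is how the roles of individual heads in the circuit are attributed.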
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2026/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9797