Concept Reachability in Diffusion Models: Beyond Dataset Constraints

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-SA 4.0
Abstract: Despite significant advances in the quality and complexity of generations from text-to-image models, *prompting* does not always lead to the desired outputs. Controlling model behaviour by directly *steering* intermediate model activations has emerged as a viable alternative, making it possible to *reach* concepts in latent space that may otherwise remain inaccessible through prompting. In this work, we introduce a set of experiments to deepen our understanding of concept reachability. We design a training data setup with three key obstacles: scarcity of concepts, underspecification of concepts in the captions, and data biases with tied concepts. Our results show: (i) concept reachability in latent space exhibits a distinct phase transition, with only a small number of samples being sufficient to enable reachability, (ii) *where* in the latent space the intervention is performed critically impacts reachability, showing that certain concepts are reachable only at certain stages of transformation, and (iii) while the effectiveness of prompting rapidly diminishes as dataset quality degrades, concepts often remain reliably reachable through steering. Model providers can leverage this to bypass costly retraining and dataset curation and instead innovate with user-facing control mechanisms.
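
As a rough illustration of what *steering* an intermediate activation can look like in practice, the minimal PyTorch sketch below adds a fixed steering vector to the output of one block of a toy denoiser via a forward hook. The module names (`TinyDenoiser`, `add_steering_hook`), the intervention site, and the scale are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

# Toy stand-in for one intermediate block of a diffusion denoiser
# (hypothetical; the real model is a full text-to-image diffusion network).
class TinyDenoiser(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.inp = nn.Linear(dim, dim)
        self.mid = nn.Linear(dim, dim)   # chosen intervention site
        self.out = nn.Linear(dim, dim)

    def forward(self, x):
        h = torch.relu(self.inp(x))
        h = torch.relu(self.mid(h))
        return self.out(h)

def add_steering_hook(module, steering_vector, scale=1.0):
    """Register a forward hook that shifts the module's output
    by a fixed steering vector (the intervention)."""
    def hook(_module, _inputs, output):
        return output + scale * steering_vector
    return module.register_forward_hook(hook)

model = TinyDenoiser()
x = torch.randn(4, 64)            # batch of latents
steer = torch.randn(64)           # e.g. a learned direction for a target concept
handle = add_steering_hook(model.mid, steer, scale=0.5)
steered = model(x)                # forward pass with the intervention applied
handle.remove()                   # restore unsteered behaviour
baseline = model(x)
```

In the paper's terms, choosing which block (and which denoising step) to hook corresponds to choosing *where* in the latent space the intervention is performed, which the abstract identifies as critical for reachability.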
Lay Summary: Creating images from text descriptions using AI has come a long way, but these systems still don’t always generate exactly what users ask for. Even well-crafted prompts can fail to produce certain concepts in images. In our work, we explore two key questions: when prompting falls short, can alternative methods still recover those concepts? And are there limitations in the training data so severe that some concepts can’t be generated at all? To investigate this, we use synthetic data in controlled experiments that let us compare concept extraction methods. Our setup targets three common issues: concepts that are very rare, concepts present in images but missing from captions (such as backgrounds), and cases where concepts are biased or entangled with others. Our results show that training data quality is critical: a small number of examples can often make a concept reachable, interventions in the model’s internal process can sometimes succeed even when prompting fails due to poor data, and success depends heavily on where in the model the intervention is made. These insights suggest that developers can focus on giving users more direct and flexible tools for controlling image generation as an alternative to retraining.
Link To Code: https://github.com/martaaparod/concept_reachability
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Diffusion models, concept learning, steering vectors
Submission Number: 11483