Taming the Tri-Space Tension: ARC-Guided Hallucination Modeling and Control for Text-to-Image Generation
Keywords: Text-to-Image Diffusion Models, Alignment Risk Code, Tri-Space Tension Modeling, Real-time Trajectory Control, Hallucination Control
TL;DR: We reveal that diffusion models exhibit structured tension across multiple axes of alignment. By sensing and modulating this tension via ARC and a plug-and-play controller, we steer generation toward faithful, hallucination-free imagery.
Abstract: Despite remarkable progress in image quality and prompt fidelity, text-to-image (T2I) diffusion models continue to exhibit persistent "hallucinations", where generated content subtly or significantly diverges from the intended prompt semantics. While prior work has often treated these failures as unpredictable artifacts, we argue that they reflect deeper, structured misalignments within the model's generation process. In this work, we reinterpret hallucinations as trajectory drift within a latent alignment space. By tracking internal representations over time and analyzing diffusion trajectories across diverse prompts, we find that hallucinated samples consistently deviate along structured paths that cluster into three separable failure modes. These emergent clusters correspond to distinct cognitive tensions: semantic coherence, structural alignment, and knowledge grounding. We formalize this three-axis space as the Hallucination Tri-Space and introduce the Alignment Risk Code (ARC): a dynamic vector representation that quantifies real-time alignment tension during generation. The magnitude of ARC captures overall misalignment, its direction identifies the dominant failure axis, and its imbalance reflects tension asymmetry. Based on this formulation, we develop the TensionModulator (TM-ARC): a lightweight controller that operates entirely in latent space. TM-ARC monitors ARC signals and applies targeted, axis-specific interventions during sampling. Extensive experiments on standard T2I benchmarks demonstrate that our approach significantly reduces hallucination without compromising image quality or diversity. This framework offers a unified and interpretable approach to understanding and mitigating generative failures in T2I systems.
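For illustration only: the abstract describes ARC as a three-axis tension vector summarized by its magnitude (overall misalignment), direction (dominant failure axis), and imbalance (tension asymmetry). The sketch below is one plausible reading of that summary, not the paper's actual scoring or implementation; the axis names, the L2-norm magnitude, and the standard-deviation imbalance are all assumptions.

```python
import math

# Hypothetical sketch: ARC as a 3-vector of per-axis tension scores for the
# Hallucination Tri-Space (semantic coherence, structural alignment, knowledge
# grounding). The paper's actual scoring functions may differ.
AXES = ("semantic", "structural", "knowledge")

def arc_summary(tension):
    """Summarize a 3-axis tension vector as (magnitude, dominant axis, imbalance)."""
    # Magnitude: overall misalignment, taken here as the L2 norm (an assumption).
    magnitude = math.sqrt(sum(t * t for t in tension))
    # Direction: the axis with the largest tension score.
    dominant = AXES[max(range(len(AXES)), key=lambda i: tension[i])]
    # Imbalance: spread of per-axis tension around its mean (one plausible choice).
    mean = sum(tension) / len(tension)
    imbalance = math.sqrt(sum((t - mean) ** 2 for t in tension) / len(tension))
    return magnitude, dominant, imbalance

# Example: high semantic tension, low structural and knowledge tension.
mag, axis, imb = arc_summary((0.8, 0.2, 0.1))
```

A controller like TM-ARC could then, per the abstract, read such a summary at each sampling step and apply an intervention targeted at the dominant axis.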
Primary Area: generative models
Submission Number: 13752