TAG: Tangential Amplifying Guidance for Hallucination-Resistant Diffusion Sampling

ICLR 2026 Conference Submission 14964 Authors

Published: 19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Diffusion models, Self-guidance, On-manifold sampling
TL;DR: We propose a new perspective that mitigates hallucinations in diffusion models by amplifying only the tangential update, and we prove it monotonically increases local log-likelihood under mild assumptions without extra sampling cost.
Abstract: Diffusion models achieve state-of-the-art sample quality in image generation but often suffer from semantic inconsistencies or *hallucinations*. While various inference-time guidance methods can enhance generation, they often operate *indirectly* by relying on external signals or architectural modifications, which introduces additional computational overhead. In this paper, we propose **T**angential **A**mplifying **G**uidance (**TAG**), a more efficient and *direct* guidance method that operates solely on trajectory signals without modifying the underlying diffusion model. TAG leverages an intermediate sample as a projection basis and amplifies the tangential components of the estimated scores with respect to this basis to correct the sampling trajectory. We formalize this guidance process through a first-order Taylor expansion, which demonstrates that amplifying the tangential component steers the state toward higher-probability regions, thereby reducing inconsistencies and enhancing sample quality. TAG is a plug-and-play, architecture-agnostic module that improves diffusion sampling fidelity with minimal computational overhead, offering a new perspective on diffusion guidance.
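To make the abstract's description concrete, here is a minimal sketch of what a TAG-style update could look like inside a diffusion sampling loop: the intermediate sample is used as a projection basis, the estimated score is split into a radial part (parallel to that basis) and a tangential remainder, and only the tangential part is amplified. The function and parameter names (`amplify_tangential`, `omega`) are illustrative assumptions, not the authors' released code.

```python
import torch

def amplify_tangential(score: torch.Tensor, x_t: torch.Tensor, omega: float = 1.5) -> torch.Tensor:
    """Amplify the component of `score` tangential to the intermediate sample x_t.

    x_t serves as the projection basis: the score is decomposed into a radial
    component (parallel to x_t) and a tangential component (orthogonal to x_t),
    and only the tangential component is scaled by `omega`.
    """
    # Flatten per-sample so the projection reduces to an inner product.
    b = x_t.shape[0]
    s = score.reshape(b, -1)
    x = x_t.reshape(b, -1)

    # Unit vector along x_t (the projection basis).
    x_unit = x / (x.norm(dim=1, keepdim=True) + 1e-12)

    # Radial (parallel) component and tangential (orthogonal) remainder.
    radial = (s * x_unit).sum(dim=1, keepdim=True) * x_unit
    tangential = s - radial

    # Keep the radial part unchanged; amplify only the tangential part.
    guided = radial + omega * tangential
    return guided.reshape_as(score)
```

In use, the raw score (or noise prediction) at each denoising step would be replaced by the amplified version before computing the next state, so the sampler itself and the underlying network remain untouched, consistent with the plug-and-play claim above.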
Primary Area: generative models
Submission Number: 14964