TruthFlow: Truthful LLM Generation via Representation Flow Correction

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Large language models (LLMs) are known to struggle with consistently generating truthful responses. While various representation intervention techniques have been proposed, these methods typically apply a universal representation correction vector to all input queries, which limits their effectiveness on the diverse queries encountered in practice. In this study, we introduce TruthFlow, a novel method that leverages the Flow Matching technique for query-specific truthful representation correction. Specifically, TruthFlow first uses a flow model to learn query-specific correction vectors that transition representations from hallucinated to truthful states. Then, during inference, the trained flow model generates these correction vectors to enhance the truthfulness of LLM outputs. Experimental results demonstrate that TruthFlow significantly improves open-ended generation performance on TruthfulQA across various advanced LLMs. Moreover, the trained TruthFlow model exhibits strong transferability, performing effectively on other unseen hallucination benchmarks.
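To make the abstract's mechanism concrete, below is a minimal PyTorch sketch of flow-matching-based representation correction. It assumes paired hidden states (hallucinated and truthful) have already been extracted from a chosen LLM layer, and it uses the standard conditional flow-matching objective with a linear interpolant. All names here (`VelocityField`, `train_flow`, `correction_vector`) are illustrative assumptions, not the authors' released code.

```python
# Sketch: flow-matching correction of LLM hidden states.
# Assumes paired (hallucinated, truthful) representations are given;
# names and hyperparameters are illustrative, not the paper's implementation.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Small MLP v_theta(x_t, t) predicting the flow velocity."""
    def __init__(self, dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Concatenate the scalar time onto each representation.
        return self.net(torch.cat([x, t], dim=-1))

def train_flow(model, h_halluc, h_truthful, epochs=100, lr=1e-4):
    """Conditional flow matching with a linear interpolant:
    x_t = (1 - t) * x0 + t * x1, target velocity = x1 - x0."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        t = torch.rand(h_halluc.size(0), 1)        # t ~ U[0, 1]
        x_t = (1 - t) * h_halluc + t * h_truthful  # interpolated state
        target = h_truthful - h_halluc             # constant velocity
        loss = ((model(x_t, t) - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return model

@torch.no_grad()
def correction_vector(model, h, steps: int = 10) -> torch.Tensor:
    """Integrate the learned ODE from t=0 to t=1 (Euler) and return the
    query-specific shift to add to the hidden state during generation."""
    x, dt = h.clone(), 1.0 / steps
    for i in range(steps):
        t = torch.full((x.size(0), 1), i * dt)
        x = x + dt * model(x, t)
    return x - h  # shift from (approx.) hallucinated to truthful state

if __name__ == "__main__":
    dim = 4096  # e.g. hidden size of a 7B-scale LLM
    h0, h1 = torch.randn(256, dim), torch.randn(256, dim)  # stand-in pairs
    flow = train_flow(VelocityField(dim), h0, h1, epochs=5)
    delta = correction_vector(flow, h0[:4])
    print(delta.shape)  # torch.Size([4, 4096])
```

Because the flow is conditioned on the query's own hidden state, the resulting correction vector varies per input, in contrast to a single universal steering vector; in practice the training pairs would come from contrasting truthful versus hallucinated answers at a chosen transformer layer.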
Lay Summary: Hallucination, the generation of seemingly plausible but factually inaccurate content, is a challenging problem for LLMs. We developed an effective mitigation method that better accommodates the diversity of input queries. It helps correct potential hallucination-induced mistakes for different user inputs, making LLMs more trustworthy.
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, Hallucination, Representation Intervention, Flow Matching
Submission Number: 13040