Keywords: Large Language Models, Contextual Faithfulness, Hallucination Mitigation, Structured Reasoning
Abstract: Large Language Models (LLMs) have shown strong capabilities across a wide range of tasks. However, they remain vulnerable to noisy or adversarial contexts, often producing unfaithful or hallucinated outputs. To address these weaknesses, recent work has integrated LLMs with Retrieval-Augmented Generation (RAG) and external tools. While effective, these approaches still suffer from error propagation, as existing structured reasoning methods cannot reliably detect and correct mistakes at intermediate steps.
We propose FaithThinker, a reasoning framework designed to improve contextual faithfulness. At its core is Self-Questioning and Verification (SQV), a reasoning paradigm inspired by dialectical thinking. SQV allows models to question, verify, and revise intermediate reasoning steps in a single pass. To extend this capability, we introduce SQV-Alignment, an adversarial context–augmented fine-tuning method that efficiently transfers SQV from large to smaller models.
Experiments demonstrate that FaithThinker achieves state-of-the-art robustness under both clean and noisy conditions. SQV reduces hallucinations by up to 30.6% compared with Chain-of-Thought, and generates reasoning paths 4× shorter than iterative methods such as Self-Refine. These results highlight FaithThinker’s ability to enhance contextual faithfulness, mitigate hallucinations, and improve efficiency in challenging environments.
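As a rough illustration of the single-pass self-questioning-and-verification idea described above, a sketch of how such a prompt might be composed and its output post-processed is shown below. This is not the paper's implementation; all tag names, prompt wording, and function names are hypothetical assumptions.

```python
def build_sqv_prompt(context: str, question: str) -> str:
    """Compose a single-pass prompt asking the model to interleave
    reasoning with self-questioning, verification, and revision.
    (Illustrative only; tags and wording are hypothetical.)"""
    return (
        "Answer the question using only the context. After each reasoning "
        "step, emit a <question> challenging that step, a <verify> line "
        "checking it against the context, and a <revise> line if the check "
        "fails.\n\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Reasoning:"
    )


def extract_final_answer(model_output: str) -> str:
    """Keep the text after the last <revise> tag (or the whole output if
    none), so later revisions override earlier, possibly unfaithful steps."""
    marker = "<revise>"
    idx = model_output.rfind(marker)
    tail = model_output[idx + len(marker):] if idx != -1 else model_output
    return tail.strip()
```

Because the questioning, verification, and revision all happen inside one generation, this style avoids the repeated model calls that make iterative methods such as Self-Refine produce much longer reasoning paths.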
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 6974