Keywords: hallucination detection, financial NLP, retrieval-augmented generation, claim verification, atomic fact-checking, knowledge distillation, large language models, financial document QA, grounded generation, EU AI Act compliance
TL;DR: A three-stage pipeline for financial document QA that decomposes answers into atomic claims, verifies each against table-and-text evidence, and rewrites unsupported claims with citations.
Abstract: Financial AI systems must produce answers grounded in specific regulatory filings, yet current LLMs fabricate metrics, invent citations, and miscalculate derived quantities. These errors carry direct regulatory consequences as the EU AI Act's high-risk enforcement deadline (August 2026) approaches. Existing hallucination detectors treat all claims uniformly and consequently miss 43% of computational errors, which require arithmetic re-verification against structured tables. We present FinGround, a three-stage verify-then-ground pipeline for financial document QA. Stage 1 performs finance-aware hybrid retrieval over text and tables. Stage 2 decomposes answers into atomic claims, classifies them under a six-type financial taxonomy, and verifies each with type-routed strategies, including formula reconstruction. Stage 3 rewrites unsupported claims with paragraph- and table-cell-level citations. To isolate the value of verification from retrieval quality, we propose retrieval-equalized evaluation as a standard methodology for RAG verification research: when all systems receive identical retrieval, FinGround still reduces hallucination rates by 68% relative to the strongest baseline ($p < 0.01$). The full pipeline achieves a 78% reduction relative to GPT-4o. An 8B distilled detector retains 91.4% F1 at 18$\times$ lower per-claim latency, enabling deployment at \$0.003 per query; qualitative findings from a four-week analyst pilot provide further support.
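The abstract's type-routed verification (Stage 2) and citation-grounded rewriting (Stage 3) can be made concrete with a minimal sketch. Everything below is illustrative: the `ClaimType` labels, the `verify_*` helpers, the toy table cells, and the `verify_then_ground` driver are assumptions standing in for the paper's actual taxonomy and verifiers, and the substring match is a stand-in for an entailment model.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional


class ClaimType(Enum):
    # Three of the six taxonomy types; labels are assumed, not the paper's.
    NUMERIC_FACT = "numeric_fact"        # value read directly from a filing
    COMPUTED_METRIC = "computed_metric"  # derived quantity needing arithmetic
    QUALITATIVE = "qualitative"          # narrative, non-numeric statement


@dataclass
class Claim:
    text: str
    ctype: ClaimType
    stated_value: Optional[float] = None


@dataclass
class Verdict:
    supported: bool
    citation: Optional[str]  # paragraph- or table-cell-level pointer


# Toy evidence store: two table cells and one retrieved passage.
TABLE = {"revenue_fy2023": 120.0, "revenue_fy2022": 100.0}
PASSAGES = ["Revenue grew on strength in the cloud segment."]


def verify_computed(claim: Claim) -> Verdict:
    """Formula reconstruction: recompute the derived quantity from table
    cells and compare it with the stated value under a small tolerance."""
    recomputed = TABLE["revenue_fy2023"] / TABLE["revenue_fy2022"] - 1.0
    ok = claim.stated_value is not None and abs(recomputed - claim.stated_value) < 0.005
    return Verdict(ok, "table:revenue:FY2023,FY2022" if ok else None)


def verify_textual(claim: Claim) -> Verdict:
    """Substring match standing in for an NLI/entailment check against
    retrieved passages."""
    for i, passage in enumerate(PASSAGES):
        if claim.text.lower() in passage.lower():
            return Verdict(True, f"para:{i}")
    return Verdict(False, None)


# Stage 2 routing: each claim type gets its own verification strategy.
ROUTER: dict[ClaimType, Callable[[Claim], Verdict]] = {
    ClaimType.COMPUTED_METRIC: verify_computed,
    ClaimType.NUMERIC_FACT: verify_textual,
    ClaimType.QUALITATIVE: verify_textual,
}


def verify_then_ground(claims: list[Claim]) -> list[str]:
    """Stage 2 verifies each atomic claim; Stage 3 would rewrite flagged
    claims -- here we only attach citations or mark them for rewriting."""
    grounded = []
    for claim in claims:
        verdict = ROUTER[claim.ctype](claim)
        if verdict.supported:
            grounded.append(f"{claim.text} [{verdict.citation}]")
        else:
            grounded.append(f"[UNSUPPORTED, rewrite] {claim.text}")
    return grounded


if __name__ == "__main__":
    answer_claims = [
        Claim("Revenue grew 20% year over year", ClaimType.COMPUTED_METRIC, 0.20),
        Claim("Revenue grew on strength in the cloud segment", ClaimType.QUALITATIVE),
    ]
    print("\n".join(verify_then_ground(answer_claims)))
```

The routing is the point of the sketch: computational claims are recomputed from table cells rather than checked by textual entailment, which is the abstract's stated remedy for the 43% of computational errors that uniform detectors miss.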
Submission Type: Emerging
Copyright Form: pdf
Submission Number: 453