Keywords: Grounded Factuality Evaluation, Small Language Model
Abstract: Grounded claim factuality checking is important for large language model (LLM) applications such as retrieval-augmented generation, as it helps users assess the correctness of generated outputs.
Existing metrics using entailment classifiers require dataset-specific threshold tuning, while LLM-based approaches often use direct prompting, which underutilises the reasoning capabilities of LLMs.
We address this by formulating grounded claim factuality checking as a true/false reading comprehension task and prompting LLMs with explicit test-taking strategies for efficient reasoning.
Our method reduces token usage by over 80\% compared to unguided open-ended reasoning, and achieves competitive performance to more expensive alternatives across two factuality benchmarks, setting a new state of the art on one.
To further reduce inference cost, we train small language models (SLMs) to replace LLMs in the checking pipeline.
Using supervised fine-tuning (SFT) and a self-revision mechanism, the SLMs learn to improve their factuality judgements.
Experimental results show that the resulting SLMs perform on par with strong baselines, combining low inference cost with supporting rationales that aid interpretability.
Code and datasets will be released upon acceptance.
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: fact checking
Contribution Types: NLP engineering experiment, Approaches to low compute settings-efficiency, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 7842