HAMV: A Heterogeneous Adaptive Multi-Model Verification Framework for Efficient and Reliable Fact-Checking

ACL ARR 2026 January Submission3250 Authors

04 Jan 2026 (modified: 20 Mar 2026), License: CC BY 4.0
Keywords: Hallucination Mitigation, Fact-Checking, Multi-Model Collaboration, Mixture-of-Experts, Efficient LLM Verification
Abstract: Large language models often hallucinate, and single-model self-verification lacks the knowledge and reasoning diversity needed to be both reliable and cost-efficient. We propose HAMV, a framework that reframes multi-model collaboration as a cross-model sparse expert scheduling problem. Inspired by the Mixture-of-Experts (MoE) paradigm, HAMV employs a dynamic routing mechanism to adaptively assign roles (generation, evaluation, verification, and aggregation) based on task and model profiles. It incorporates Dempster–Shafer-based confidence fusion that triggers conditional verification only when fused evidence is inconclusive, keeping computational cost controlled. On HalluQA and TruthfulQA, HAMV consistently outperforms representative baselines across varying budgets. Further analysis confirms that dynamic scheduling mitigates position sensitivity, validating the effectiveness of modeling factual verification as a structured scheduling problem.
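The abstract's Dempster–Shafer-based confidence fusion can be illustrated with a minimal sketch. This is not the paper's implementation (the abstract does not give one); it assumes a binary frame of discernment per claim (supported / refuted / uncertain), and the names `combine` and `needs_verification` are hypothetical:

```python
from itertools import product

# Hypothetical frame of discernment for a single claim:
# {"T"} = supported, {"F"} = refuted, {"T","F"} = uncertain.
T, F, TF = frozenset("T"), frozenset("F"), frozenset("TF")

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions
    (e.g. confidence estimates from two verifier models)."""
    fused = {T: 0.0, F: 0.0, TF: 0.0}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] += ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    # Normalize by discarding the conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

def needs_verification(masses, threshold=0.8):
    """Trigger an extra (conditional) verification round only when the
    fused belief in either verdict stays below the threshold."""
    return max(masses[T], masses[F]) < threshold

# Two verifiers lean "supported" with different uncertainty.
m = combine({T: 0.7, F: 0.1, TF: 0.2},
            {T: 0.6, F: 0.2, TF: 0.2})
# Fused belief in "supported" is 0.85, so no extra round is triggered.
```

Gating further verification on the fused belief, rather than running every model on every claim, is the cost-control idea the abstract describes.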
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: fact checking, misinformation detection
Contribution Types: NLP engineering experiment, Approaches for low-compute settings / efficiency
Languages Studied: Chinese, English
Submission Number: 3250