Keywords: Reasoning, Fact Verification, Misinformation Detection
Abstract: Multi-hop fact verification evaluates the truthfulness of claims by reasoning over multiple facts, serving as a fundamental task in identifying misinformation.
Previous approaches tend to feed the claims to be verified directly into Large Language Models (LLMs) and prompt them to reason toward an answer, without explicitly teaching the LLMs how to comprehend and reason over multiple pieces of evidence.
Consequently, they often produce flawed reasoning trajectories and overlook critical evidentiary details. To bridge this gap, we propose CogFact, a framework that guides LLMs through multi-hop fact verification by mirroring the human mindset.
Specifically, CogFact consists of three stages that mirror key aspects of human cognitive processes.
First, CogFact performs implicit entity resolution over the multi-hop claim; it then conducts semantic decomposition on each component of the claim; finally, it constructs an integrative logical chain for reasoning.
CogFact significantly enhances the multi-hop reasoning capabilities of LLMs while also offering strong interpretability and scalability. We evaluate CogFact on two public multi-hop fact verification datasets, where it achieves state-of-the-art performance against multiple baselines.
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: fact checking, rumor/misinformation detection
Languages Studied: English
Submission Number: 5366