EviGenerate: Generative Evidence in Automated Fact-Checking

Published: 13 Jan 2025 · Last Modified: 26 Feb 2025 · AAAI 2025 PDLM Poster · CC BY 4.0
Keywords: Evidence generation, Knowledge extraction, Automatic fact-checking
TL;DR: Proposed approach for using LLM generated evidence for fact-checking
Abstract: In the era of widespread misinformation, fact verification and correction have become essential tasks, especially on online social media. Traditional manual fact-checking, while crucial, is time-consuming, underscoring the need for innovative approaches. This research introduces an automated fact-checking system that leverages large language models for evidence generation, allowing it to adapt dynamically to the evolving information landscape. The proposed system, EviGenerate, employs a novel evidence generation pipeline integrating strategies such as named entity hints, question formulation, relation explanation, cross-examination, and a truthful critic. Using a modified version of FEVER, a widely used automatic fact-checking dataset, the approach achieves an F1 score of 0.912 for claim verification based on DeBERTa. Our best claim correction result, based on T5-3B, yields a SARI Keep score of 0.721. The contribution of this work lies in its evidence generation approach and prompting strategies, fostering accuracy and adaptability in automated fact-checking systems.
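The abstract names five prompting strategies that feed the evidence generation pipeline. A minimal sketch of how such a pipeline might be wired together is shown below; the function names, prompt wording, and the pluggable `llm` callable are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of an EviGenerate-style prompting pipeline.
# Prompt templates and function names are assumptions for illustration;
# the paper's exact prompts and models (e.g., DeBERTa, T5-3B) are not shown.

def build_prompts(claim, entities):
    """Compose one prompt per strategy listed in the abstract."""
    ents = ", ".join(entities)
    return {
        "entity_hints": f"Claim: {claim}\nFocus on these entities: {ents}.",
        "question_formulation": f"What factual questions must be answered to verify: '{claim}'?",
        "relation_explanation": f"Explain the relations between {ents} relevant to: '{claim}'.",
        "cross_examination": f"List statements that, if true, would contradict: '{claim}'.",
        "truthful_critic": f"Review the evidence for '{claim}' and discard unsupported statements.",
    }

def generate_evidence(claim, entities, llm):
    """Run each strategy through a caller-supplied LLM and pool the outputs.

    `llm` is any callable mapping a prompt string to a generated string,
    e.g., a wrapper around an API client or a local model.
    """
    prompts = build_prompts(claim, entities)
    return [llm(prompt) for prompt in prompts.values()]
```

The pooled evidence strings would then be passed, together with the claim, to a claim verification classifier; keeping the `llm` argument abstract makes the pipeline easy to swap between models.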
Submission Number: 18
