Factcheck-GPT: End-to-End Fine-Grained Document-Level Fact-Checking and Correction of LLM Output

Anonymous

16 Dec 2023, ACL ARR 2023 December Blind Submission
Readers: Everyone
TL;DR: A fact-checking framework to detect and correct factual errors in LLM outputs, and a benchmark to evaluate automatic fact-checkers
Abstract: The increased use of large language models (LLMs) across a variety of real-world applications calls for mechanisms to verify the factual accuracy of their outputs. In this work, we present a holistic end-to-end solution for annotating the factuality of LLM-generated responses, which encompasses a multi-stage annotation scheme designed to yield detailed labels concerning the verifiability of, and factual inconsistencies in, LLM outputs. We build an annotation tool to speed up the labelling procedure and ease the workload of raters. It allows flexible incorporation of automatic results at any stage, e.g., automatically retrieved evidence. We further construct an open-domain document-level factuality benchmark with three levels of granularity: claim, sentence, and document. Preliminary experiments show that FacTool, FactScore, and Perplexity.ai struggle to identify false claims, with the best F1 of 0.63 achieved by GPT-4. The annotation tool, benchmark, and code are available (URL withheld).
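To make the three-level granularity concrete, below is a minimal sketch of what one benchmark entry might look like. This is an assumption for illustration only: the field names, label strings, and nesting are hypothetical and not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema, illustrating claim -> sentence -> document granularity.
# Field names and label values are assumptions, not the Factcheck-GPT format.

@dataclass
class Claim:
    text: str
    label: str  # e.g., "supported", "refuted", "not enough evidence"
    evidence: List[str] = field(default_factory=list)  # retrieved evidence snippets

@dataclass
class Sentence:
    text: str
    verifiable: bool  # does this sentence make a checkable factual statement?
    claims: List[Claim] = field(default_factory=list)

@dataclass
class DocumentEntry:
    prompt: str    # the question posed to the LLM
    response: str  # the LLM's full answer
    sentences: List[Sentence] = field(default_factory=list)

    @property
    def factual(self) -> bool:
        # Document-level label: true only if every extracted claim is supported.
        return all(c.label == "supported"
                   for s in self.sentences for c in s.claims)

# Toy usage: one response containing one supported and one refuted claim.
entry = DocumentEntry(
    prompt="Who wrote 'Pride and Prejudice', and when was it published?",
    response="Jane Austen wrote 'Pride and Prejudice'. It was published in 1920.",
    sentences=[
        Sentence(
            text="Jane Austen wrote 'Pride and Prejudice'.",
            verifiable=True,
            claims=[Claim("Jane Austen wrote 'Pride and Prejudice'.", "supported")],
        ),
        Sentence(
            text="It was published in 1920.",
            verifiable=True,
            claims=[Claim("'Pride and Prejudice' was published in 1920.", "refuted")],
        ),
    ],
)
print(entry.factual)  # False: one claim is refuted
```

Under this sketch, a fact-checker can be scored at any of the three levels, e.g., claim-level F1 over supported/refuted predictions, as in the reported evaluation.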
Paper Type: long
Research Area: NLP Applications
Contribution Types: NLP engineering experiment, Data resources, Data analysis
Languages Studied: English