Abstract: In this paper, we evaluate the ability of Large Language Models (LLMs) to assess the veracity of claims in "news reports" generated by themselves or by other LLMs. Our goal is to determine whether LLMs can effectively fact-check their own content and to identify the limitations that future research must address to build systems for fact-checking LLM-generated content.
Our findings indicate that LLMs are more effective at assessing claims in national or international news stories than in local news stories, better at evaluating static information than dynamic information, and better at verifying true claims compared to false ones. We hypothesize that this disparity arises because the former types of claims are better represented in the training data.
Additionally, we find that incorporating search engine results in a Retrieval-Augmented Generation (RAG) setup significantly reduces the number of unassessable claims. However, it also increases incorrect assessments, due to both irrelevant retrievals and LLM reasoning errors.
This diagnostic evaluation highlights the need for future research on fact-checking machine-generated content to prioritize (i) improving the precision and relevance of retrieved information, (ii) improving the reasoning abilities of LLMs, and (iii) building human-in-the-loop systems for cases where no supporting information can be found.
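To illustrate the kind of RAG-style claim assessment described above, here is a minimal sketch, not the authors' actual pipeline: retrieved evidence snippets are placed in a prompt and an LLM is asked to label the claim. The callables `search_fn` and `llm_fn` are hypothetical stand-ins for a real search API and LLM client.

```python
from typing import Callable, List


def assess_claim(
    claim: str,
    search_fn: Callable[[str], List[str]],  # hypothetical: returns evidence snippets
    llm_fn: Callable[[str], str],           # hypothetical: returns the model's raw answer
) -> str:
    """Label a claim as SUPPORTED, REFUTED, or UNVERIFIABLE using retrieved evidence."""
    snippets = search_fn(claim)
    if not snippets:
        # With no retrieved evidence the claim is unassessable; a
        # human-in-the-loop system could take over at this point.
        return "UNVERIFIABLE"
    evidence = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Evidence:\n"
        f"{evidence}\n\n"
        f"Claim: {claim}\n"
        "Based only on the evidence, answer with one word: "
        "SUPPORTED, REFUTED, or UNVERIFIABLE."
    )
    answer = llm_fn(prompt).strip().upper()
    for label in ("SUPPORTED", "REFUTED", "UNVERIFIABLE"):
        if label in answer:
            return label
    return "UNVERIFIABLE"  # fall back when the model's answer cannot be parsed
```

As the abstract notes, the two failure modes of such a setup are irrelevant snippets returned by `search_fn` and reasoning errors in the LLM's answer, which is why retrieval precision and LLM reasoning are listed as priorities.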
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: fact-checking
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 2216