A Survey on Misinformation Prevention and Detection Methods in Large Language Models

ACL ARR 2024 June Submission2864 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: The rapid advancement of large language models (LLMs) has significantly impacted various fields within natural language processing (NLP). However, the issue of misinformation has become increasingly prominent, necessitating urgent solutions. Recent studies have categorized misinformation into two types: unintentional misinformation, often resulting from hallucinations, and intentional misinformation, which is deliberately created and spread by malicious actors. This paper provides a comprehensive survey of recent approaches to mitigating both types of misinformation in LLMs. It explores internal and external prevention methods, along with various techniques for misinformation tracing and detection. By evaluating the strengths and weaknesses of these approaches, this survey aims to illuminate the direction for future research in addressing misinformation in LLMs.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Language Modeling, NLP Applications
Contribution Types: Surveys
Languages Studied: English
Submission Number: 2864