Abstract: The rise of large language models (LLMs) has significantly affected the quality of information in decision-making systems: AI-generated content is now prevalent, and detecting misinformation and managing conflicting information, or "inter-evidence conflicts," has become harder. This study introduces a method for generating diverse, validated evidence conflicts that simulate real-world misinformation scenarios. We evaluate conflict detection methods, including Natural Language Inference (NLI) models, factual consistency (FC) models, and LLMs, on these conflicts (RQ1) and analyze LLMs' conflict resolution behaviors (RQ2).
Our key findings are: (1) NLI models and LLMs exhibit high precision in detecting answer conflicts, though weaker models suffer from low recall; (2) FC models struggle with lexically similar answer conflicts, while NLI models and LLMs handle these better; and (3) stronger models such as GPT-4 show robust performance, especially on nuanced conflicts. For conflict resolution, LLMs often favor one piece of conflicting evidence without justification and rely on internal knowledge when they hold prior beliefs.
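As a concrete illustration of the NLI-based detection setting summarized above, the sketch below uses an off-the-shelf MNLI classifier to flag a pair of evidence passages as contradictory. The model name, threshold, and helper function are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch (assumed setup, not the authors' method): detect an
# inter-evidence conflict by checking whether an NLI model predicts
# CONTRADICTION for the pair of passages in either direction.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def conflicts(evidence_a: str, evidence_b: str, threshold: float = 0.5) -> bool:
    """Return True if the NLI model assigns high contradiction probability
    to the pair in either premise/hypothesis ordering."""
    for premise, hypothesis in [(evidence_a, evidence_b), (evidence_b, evidence_a)]:
        scores = nli({"text": premise, "text_pair": hypothesis}, top_k=None)
        contra = next(s["score"] for s in scores if s["label"] == "CONTRADICTION")
        if contra >= threshold:
            return True
    return False

# Example usage with a fabricated evidence pair:
print(conflicts("The vaccine was approved in 2021.",
                "The vaccine has never been approved."))
```

Checking both orderings is a simple way to reduce missed conflicts, since NLI predictions are not symmetric; a real evaluation would tune the threshold on validated conflict data.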
Paper Type: Long
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: textual entailment; natural language inference; compositionality
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 5747