Abstract: Examining logical inconsistencies among multiple statements—such as collections of sentences or question-answer pairs—is a crucial challenge in machine learning, particularly for ensuring the safety and reliability of models. Traditional methods that rely on pairwise comparisons often fail to capture inconsistencies that only emerge when more than two statements are evaluated collectively. To address this gap, we introduce the task of \textit{set-consistency verification}, an extension of natural language inference (NLI) that assesses the logical coherence of entire sets rather than isolated pairs. Building on this task, we present the \textit{Set-Consistency Energy Network} (SC-Energy), a novel model that employs a contrastive loss framework to learn the compatibility among a collection of statements. Our approach not only efficiently verifies inconsistencies and pinpoints the specific statements responsible for logical contradictions, but also significantly outperforms existing methods, including prompting-based LLMs. Furthermore, we release two new datasets, Set-LConVQA and Set-SNLI, for the set-consistency verification task.
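The abstract describes training an energy network with a contrastive objective so that logically consistent sets receive low energy and inconsistent sets high energy. A minimal sketch of such a margin-based contrastive loss is shown below; the function names and the toy max-pairwise energy are illustrative assumptions, not the paper's actual architecture, which scores whole sets jointly.

```python
from itertools import combinations

def set_energy(statements, pair_score):
    """Toy set energy: the worst pairwise incompatibility in the set.
    A learned model (like SC-Energy, per the abstract) would instead
    encode the entire set jointly; pair_score is a hypothetical stand-in."""
    if len(statements) < 2:
        return 0.0
    return max(pair_score(a, b) for a, b in combinations(statements, 2))

def contrastive_set_loss(energy_consistent, energy_inconsistent, margin=1.0):
    """Hinge-style contrastive loss: push consistent sets toward low
    energy and inconsistent sets at least `margin` higher."""
    return max(0.0, margin + energy_consistent - energy_inconsistent)

# Usage with a trivial stand-in scorer: statements that negate each
# other get a high incompatibility score.
def toy_pair_score(a, b):
    return 2.0 if a == f"not {b}" or b == f"not {a}" else 0.1

e_good = set_energy(["it rains", "roads are wet"], toy_pair_score)
e_bad = set_energy(["it rains", "not it rains"], toy_pair_score)
loss = contrastive_set_loss(e_good, e_bad, margin=1.0)
```

With this toy scorer, the inconsistent set receives higher energy than the consistent one, so the hinge loss is already zero; during training, the loss is nonzero whenever the margin is violated, which drives the gradient updates.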
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: fact checking, contrastive learning, NLP datasets
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 7725