A Straightforward Pipeline for Targeted Entailment and Contradiction Detection

NeurIPS 2025 Workshop CauScien Submission 20 Authors

23 Aug 2025 (modified: 18 Oct 2025). Submitted to NeurIPS 2025 Workshop CauScien. License: CC BY 4.0
Keywords: Natural Language Inference (NLI), Attention Mechanisms, Argument Mining, Contradiction Detection
TL;DR: This work presents a pipeline that leverages transformer attention to efficiently filter candidate sentences for a Natural Language Inference model, enabling targeted detection of premises and contradictions.
Abstract: Finding the relationships between sentences in a document is crucial for tasks like fact-checking, argument mining, and text summarization. A key challenge is to identify which sentences act as premises or contradictions for a specific claim. Existing methods often face a trade-off: transformer attention mechanisms can identify salient textual connections but lack explicit semantic labels, while Natural Language Inference (NLI) models can classify relationships between sentence pairs but operate independently of contextual saliency. In this work, we introduce a method that combines the strengths of both approaches for a targeted analysis. Our pipeline first identifies candidate sentences that are contextually relevant to a user-selected target sentence by aggregating token-level attention scores. It then uses a pretrained NLI model to classify each candidate as a premise (entailment) or contradiction. By filtering NLI-identified relationships with attention-based saliency scores, our method efficiently isolates the most significant semantic relationships for any given claim in a text.
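The abstract describes two stages: aggregating token-level attention into sentence-level saliency scores with respect to a target sentence, then passing the most salient candidates to an NLI classifier. The paper does not provide code, so the following is only a minimal sketch of the first (attention-aggregation and filtering) stage; the function names, the symmetric-mean aggregation choice, and the toy attention matrix are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sentence_saliency(attn, spans, target_idx):
    """Aggregate a token-to-token attention matrix into sentence-level
    saliency scores relative to one target sentence.

    attn: (T, T) attention matrix, e.g. averaged over heads and layers
          of a transformer encoder (an assumed aggregation choice).
    spans: list of (start, end) token-index ranges, one per sentence.
    target_idx: index of the user-selected target sentence.
    """
    t0, t1 = spans[target_idx]
    scores = []
    for i, (s0, s1) in enumerate(spans):
        if i == target_idx:
            scores.append(0.0)  # the target is not its own candidate
            continue
        # symmetric mean of attention flowing target->candidate and
        # candidate->target (one plausible saliency definition)
        fwd = attn[t0:t1, s0:s1].mean()
        bwd = attn[s0:s1, t0:t1].mean()
        scores.append(float(fwd + bwd) / 2)
    return np.array(scores)

def top_candidates(scores, k=2):
    """Indices of the k most salient candidate sentences, which would
    then be paired with the target and sent to a pretrained NLI model."""
    return list(np.argsort(scores)[::-1][:k])

# Toy example: 3 sentences of 2 tokens each; sentence 2's tokens receive
# elevated attention from the target sentence (sentence 0).
spans = [(0, 2), (2, 4), (4, 6)]
attn = np.full((6, 6), 0.1)
attn[0:2, 4:6] = 0.5
scores = sentence_saliency(attn, spans, target_idx=0)
# → sentence 2 ranks first, and only it would be passed on to NLI
```

Each surviving candidate would then be classified against the target by an off-the-shelf NLI model (e.g. via an entailment/contradiction/neutral head), keeping only entailment and contradiction labels, as the abstract describes.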
Submission Number: 20