Abstract: Textual entailment, the task of determining whether a proposed hypothesis is logically supported by a given premise, has historically been used to evaluate language models on tasks like question answering and text summarization.
However, we believe these zero-shot entailment evaluations can be extended to a sequential, sentence-by-sentence evaluation of entailment within a larger text. We refer to this approach as ``entailment progressions''.
Additionally, entailment progressions shed light on the deliberate logical strategies authors employ to construct their arguments, revealing the points at which they introduce contradiction or entailment. Our results suggest that entailment progressions can both identify consistency in logical structure and connect that consistency to how humans typically author texts, as opposed to more formulaic approaches.
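The sentence-by-sentence procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `nli` classifier is passed in as a parameter, and `toy_nli` below is a hypothetical rule-based stand-in for a real zero-shot NLI model.

```python
from typing import Callable, List

def entailment_progression(sentences: List[str],
                           nli: Callable[[str, str], str]) -> List[str]:
    """Label each consecutive sentence pair with an NLI relation.

    `nli(premise, hypothesis)` is assumed to return one of
    "entailment", "neutral", or "contradiction". The resulting
    sequence of labels is the text's entailment progression.
    """
    return [nli(prev, curr) for prev, curr in zip(sentences, sentences[1:])]

# Hypothetical stand-in for a real zero-shot NLI model,
# keyed on discourse markers for illustration only.
def toy_nli(premise: str, hypothesis: str) -> str:
    h = hypothesis.lower()
    if h.startswith(("however", "but")):
        return "contradiction"
    if h.startswith(("therefore", "thus")):
        return "entailment"
    return "neutral"

text = [
    "The method works well on short documents.",
    "Therefore, we expect it to generalize.",
    "However, long documents remain challenging.",
]
print(entailment_progression(text, toy_nli))
```

In practice, `toy_nli` would be replaced by a pretrained NLI model; the progression itself is just the ordered list of pairwise labels.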
Paper Type: long
Research Area: Semantics: Sentence-level Semantics, Textual Inference and Other areas
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis, Theory
Languages Studied: English
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.