Do Language Models Understand Implicit Logical Meaning? A Case Study of Scope Ambiguity Resolution in Context
Keywords: Scope Ambiguity; Contextual Interpretation; Transformer Language Models; Semantic Representations; Probing Methods
Abstract: Scope ambiguity arises when a sentence admits multiple interpretations depending on how logical operators (e.g., negation and indefinites) interact. Prior work by Kamath et al. (2024) adopts an ambiguity-first, evidence-following diagnostic that focuses primarily on surface-level behavioral sensitivity without directly examining internal mechanisms. In contrast, we introduce an evidence-first, ambiguity-following diagnostic that allows the hidden representations of a scope-ambiguous sentence to be adaptively shaped by preceding contextual evidence.
To support this analysis, we present SCOPEx, an extension of the dataset of Kamath et al. (2024), constructed by semi-automatically generating preamble context sentences corresponding to different scope readings and by introducing passivized sentences as syntactically less ambiguous controls. Using SCOPEx, we find evidence that transformer language models (LMs) (i) exhibit systematic sensitivity to scope ambiguity as a function of preceding contextual evidence, and (ii) encode scope interpretations in their internal representations in ways that support reliable discrimination between inverse and surface scope readings.
Paper Type: Long
Research Area: Semantics: Lexical, Sentence-level Semantics, Textual Inference and Other areas
Research Area Keywords: Semantics: Lexical and Sentence-Level; Linguistic Theories, Cognitive Modeling, and Psycholinguistics
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 6307