Generalized Quantifiers as a Source of Error in Multilingual NLU Benchmarks

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission
Paper Link: https://openreview.net/forum?id=V58-YGdAQFZ
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Logical approaches to representing language have developed and evaluated computational models of quantifier words since the 19th century, but today's NLU models still struggle to capture their semantics. We rely on Generalized Quantifier Theory for language-independent representations of the semantics of quantifier words, in order to quantify their contribution to the errors of NLU models. We find that quantifiers are pervasive in NLU benchmarks, and that their occurrence at test time is associated with performance drops. Multilingual models also exhibit unsatisfying quantifier reasoning abilities, though not necessarily worse for non-English languages. To facilitate directly targeted probing, we present an adversarial generalized quantifier NLI task (GQNLI) and show that pre-trained language models clearly lack robustness in generalized quantifier reasoning.
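The language-independent representations the abstract refers to come from Generalized Quantifier Theory, which treats a quantifier word as a relation between two sets (the restrictor and the scope). A minimal illustrative sketch of three such relations, not taken from the paper (the function names and toy sets are hypothetical):

```python
# Illustrative sketch of Generalized Quantifier Theory: each quantifier
# word denotes a relation between a restrictor set A and a scope set B.
# Function names and example data are hypothetical, for exposition only.

def every(restrictor: set, scope: set) -> bool:
    # "every A is B" holds iff A is a subset of B
    return restrictor <= scope

def some(restrictor: set, scope: set) -> bool:
    # "some A is B" holds iff A and B overlap
    return bool(restrictor & scope)

def most(restrictor: set, scope: set) -> bool:
    # "most A are B" holds iff more As are Bs than are not
    return len(restrictor & scope) > len(restrictor - scope)

dogs = {"rex", "fido", "spot"}
barkers = {"rex", "fido"}
print(every(dogs, barkers))  # False: "spot" does not bark
print(some(dogs, barkers))   # True
print(most(dogs, barkers))   # True: 2 of 3 dogs bark
```

Because these truth conditions are stated over sets rather than word forms, the same relation represents "every"/"alle"/"每个" across languages, which is what makes the framework useful for multilingual error analysis.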
Dataset: zip
Copyright Consent Signature (type Name Or NA If Not Transferrable): Ruixiang Cui
Copyright Consent Name And Address: University of Copenhagen, Nørregade 10, 1165 Copenhagen, Denmark
Presentation Mode: This paper will be presented in person in Seattle