Cognitive Parallels in Metaphor Processing: Human Acquisition vs. Large Models

ACL ARR 2025 February Submission1372 Authors

13 Feb 2025 (modified: 09 May 2025) · License: CC BY 4.0
Abstract: Metaphor comprehension is a complex cognitive task in language acquisition that requires reasoning between surface structures and deeper semantic representations. Prior research has largely treated metaphor acquisition and automatic metaphor detection as separate topics, lacking a direct comparative analysis. This paper systematically reviews studies of metaphor acquisition in linguistics and identifies four cognitive aspects that align with the capabilities of large language models: aptness, language proficiency, transferable comprehension, and the literal salience hypothesis. Experimental results reveal significant parallels between large-model performance and human metaphor learning. Specifically, large models achieve higher accuracy on high-aptness metaphor samples. Language proficiency is reflected in their capacity for metaphor comprehension, which benefits from richer corpora, larger parameter scales, and more efficient architectures. Furthermore, large models exhibit sensitivity to transferable comprehension, as demonstrated by the substantial influence of cross-linguistic knowledge on metaphor processing. Similarly, they align with the literal salience hypothesis, prioritizing literal meanings over metaphorical ones, a pattern evident in their higher accuracy on literal than on metaphorical usages in metaphor detection.
Paper Type: Long
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics
Research Area Keywords: Linguistic Theories, Cognitive Modeling and Psycholinguistics; Semantics: Lexical and Sentence-Level; Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability
Languages Studied: English, Spanish, Chinese, Slovenian
Submission Number: 1372