A rebuttal of two common deflationary stances against LLM cognition

ACL ARR 2025 February Submission 3432 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Large language models (LLMs) are arguably the most predictive models of human cognition available. Despite this impressive alignment with human behavior, LLMs are often dismissed as "*just* next-token predictors" that purportedly fall short of genuine cognition. We argue that such deflationary claims require further justification. Drawing on prominent cognitive and artificial intelligence research, we critically evaluate two forms of "Justaism" that dismiss LLM cognition by labeling LLMs as "just" simplistic entities, without specifying or substantiating the critical capacities they supposedly lack. Our analysis highlights the need for a more measured discussion of LLM cognition, aiming to better inform future research and the development of artificial intelligence.
Paper Type: Short
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics
Research Area Keywords: cognitive modeling, computational psycholinguistics
Contribution Types: Position papers, Surveys, Theory
Languages Studied: English
Submission Number: 3432