Meaning without reference in large language models

Published: 21 Oct 2022, Last Modified: 05 May 2023
Venue: nCSI WS @ NeurIPS 2022 (Oral)
Keywords: language, meaning, understanding
TL;DR: A review and perspective on the question of meaning in ungrounded (text-only) language models
Abstract: The widespread success of large language models (LLMs) has been met with skepticism that they possess anything like human concepts or meanings. Contrary to claims that LLMs possess no meaning whatsoever, we argue that they likely capture important aspects of meaning, and moreover work in a way that approximates a compelling account of human cognition in which meaning arises from *conceptual role*. Because conceptual role is defined by the relationships between internal representational states, meaning cannot be determined from a model's architecture, training data, or objective function, but only by examination of how its internal states relate to each other. This approach may clarify why and how LLMs are so successful and suggest how they can be made more human-like.
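As a concrete illustration of what "examination of how its internal states relate to each other" could look like in practice, the sketch below computes pairwise similarities between a model's hidden states for a few probe words, a simple form of representational analysis. This is a hypothetical sketch, not the paper's method: the model choice (`gpt2`), the probe words, and the use of cosine similarity as the relational measure are all illustrative assumptions.

```python
# Hypothetical sketch: probe the relational structure of an LLM's internal
# states via pairwise cosine similarity of hidden representations.
# The model ("gpt2"), probe words, and similarity measure are assumptions
# chosen for illustration, not the authors' procedure.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

words = ["dog", "cat", "car", "truck"]

# Collect one hidden-state vector per word (final layer, final token).
vectors = []
with torch.no_grad():
    for word in words:
        inputs = tokenizer(word, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
        vectors.append(hidden[0, -1])

# Pairwise cosine similarities: the *relations between* internal states,
# which a conceptual-role account takes to carry meaning, rather than any
# single state's link to an external referent.
for i, wi in enumerate(words):
    for j, wj in enumerate(words):
        if i < j:
            sim = torch.cosine_similarity(vectors[i], vectors[j], dim=0)
            print(f"{wi} ~ {wj}: {sim.item():.3f}")
```

On a conceptual-role view, what would matter in such an analysis is not any individual vector but the pattern of relations (e.g., whether "dog" sits closer to "cat" than to "truck"), which is why architecture, training data, and objective alone would not settle the question of meaning.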