Context-Sensitive Semantic Reasoning in Large Language Models

ICLR 2024 Workshop Re-Align Submission 70
Anonymous Authors

Published: 02 Mar 2024, Last Modified: 02 May 2024
ICLR 2024 Workshop Re-Align Poster
License: CC BY 4.0
Track: short paper (up to 5 pages)
Keywords: context, attention, large language models, semantic reasoning, semantic cognition, semantic knowledge
Abstract: The development of large language models (LLMs) holds promise for increasing the scale and breadth of experiments probing human cognition. LLMs will be useful for studying the human mind to the extent that their behaviors and their representations are aligned with humans. Here we test this alignment by measuring the degree to which LLMs reproduce the context sensitivity demonstrated by humans in semantic reasoning tasks. We show in two simulations that, like humans, the behavior of leading LLMs is sensitive to both local context and task context, reasoning about the same item differently when it is presented in different contexts or tasks. However, the representations derived from LLM text embedding models do not exhibit the same degree of context sensitivity. These results suggest that LLMs may provide useful models of context-dependent human behavior, but cognitive scientists should be cautious when assuming that embeddings reflect the same context sensitivity.
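To make the embedding claim concrete, below is a minimal sketch (not the authors' code or materials) of one way to probe the context sensitivity of a text embedding model: embed the same ambiguous item under two different local contexts and check whether the nearer sense-specific anchor flips with the context. The sentence-transformers backend, the model name, and the stimuli are all illustrative assumptions.

import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

# The same item ("bat") placed in two disambiguating local contexts.
contexts = {
    "cave": "In the cave, we spotted a bat.",
    "game": "At the game, he picked up a bat.",
}
# Sense-specific anchors to compare each contextualized item against.
anchors = {
    "animal": "a small flying mammal",
    "equipment": "a wooden club used in baseball",
}

def cosine(u, v):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

ctx_vecs = dict(zip(contexts, model.encode(list(contexts.values()))))
anc_vecs = dict(zip(anchors, model.encode(list(anchors.values()))))

# A context-sensitive embedder should rank the "animal" anchor closer in the
# cave context and the "equipment" anchor closer in the game context; a
# context-insensitive one would rank the anchors the same way in both.
for ctx_name, ctx_vec in ctx_vecs.items():
    sims = {a: round(cosine(ctx_vec, av), 3) for a, av in anc_vecs.items()}
    print(ctx_name, sims)

Whether the anchor ranking flips across the two contexts gives a simple behavioral readout of the kind of context sensitivity the abstract reports testing; the paper's actual simulations compare such model measures against human semantic reasoning.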
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 70