Do large language models solve verbal analogies like children do?

Published: 24 May 2025, Last Modified: 24 May 2025. CoNLL 2025. License: CC BY 4.0.
Keywords: analogical reasoning, cognitive development, large language models
TL;DR: We find that LLMs generally outperform children on the verbal analogies task. LLMs solve many verbal analogies using association like young children do. However, association doesn't fully explain their success.
Abstract: Analogy-making lies at the heart of human cognition. Adults solve analogies such as "horse belongs to stable like chicken belongs to …?" by mapping the relation ("kept in") and answering "chicken coop". In contrast, young children often rely on association, e.g., answering "egg". This paper investigates whether large language models (LLMs) solve verbal analogies in A:B::C:? form using association, as children do. We use verbal analogies extracted from an online learning environment in which 14,006 children aged 7-12 from the Netherlands solved 872 analogies in Dutch. The seven LLMs we tested perform at or above the level of the children. However, when we control for solving by association, this picture changes. We conclude that the LLMs we tested rely heavily on association, like young children do. However, the LLMs make different errors than the children, and association does not fully explain their superior performance on this children's verbal analogy task.
Supplementary Material: pdf
Submission Number: 217