Tracking Cognitive Development of Large Language Models

24 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Cognitive ability, benchmark, Large Language Models, Piaget's theory of cognitive development
TL;DR: We construct a benchmark (CogLM) based on Piaget's Theory of Cognitive Development to reveal the development of cognitive abilities in Large Language Models
Abstract: Large Language Models (LLMs) have recently shown impressive performance on a wide variety of Natural Language Processing tasks, ranging from text comprehension to mathematical reasoning. However, how and why such performance is achieved remains poorly understood, and it is unclear whether LLMs can attain human-like cognitive abilities or whether these models are still fundamentally limited. To bridge this gap, we introduce Piaget's Theory of Cognitive Development (PTC) as a tool to reveal the development of cognitive abilities in LLMs. We construct a benchmark (CogLM) based on the scenario experiments in PTC to evaluate the cognitive level of LLMs, covering 10 abilities and 1220 questions created by more than 20 human experts. Through extensive experiments across multiple LLMs on CogLM, we find that: (1) Human-like cognitive abilities have emerged in state-of-the-art LLMs (GPT-4), comparable to those of a 20-year-old human. (2) Parameter size and the optimization objective are two key factors affecting the cognitive abilities of LLMs. (3) Performance on downstream tasks depends strongly on the level of these cognitive abilities. These findings offer guidance for developing advanced abilities in LLMs from the perspective of ability evolution, and shed light on the mystery behind the emergence of such abilities.
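The abstract describes scoring models per cognitive ability over the 1220 CogLM questions. As a rough illustration only (the actual CogLM item format, scoring rule, and model interface are not specified here and are assumptions), a minimal per-ability evaluation loop might look like the following sketch, where the toy questions and the `query_model` placeholder are hypothetical:

```python
# Hypothetical sketch of a CogLM-style evaluation loop.
# The item schema, the toy questions, and query_model are assumptions,
# not the authors' released code or data.
from collections import defaultdict

# Toy stand-ins for benchmark items: each question targets one
# cognitive ability and carries a gold answer.
QUESTIONS = [
    {
        "ability": "conservation",
        "prompt": "Two identical balls of clay; one is flattened into a pancake. "
                  "Does the pancake have more, less, or the same amount of clay?",
        "gold": "same",
    },
    {
        "ability": "perspective-taking",
        "prompt": "Anna puts her toy in the red box and leaves. Ben moves it to "
                  "the blue box. Where will Anna look for the toy first?",
        "gold": "red box",
    },
]

def query_model(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., an API request).

    Returns a fixed string here so the sketch runs standalone.
    """
    return "same"

def evaluate(questions):
    """Compute exact-match accuracy per cognitive ability."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for q in questions:
        total[q["ability"]] += 1
        answer = query_model(q["prompt"]).strip().lower()
        if answer == q["gold"]:
            correct[q["ability"]] += 1
    return {ability: correct[ability] / total[ability] for ability in total}

if __name__ == "__main__":
    for ability, accuracy in evaluate(QUESTIONS).items():
        print(f"{ability}: {accuracy:.0%}")
```

A real run would replace `query_model` with calls to the model under test and aggregate the per-ability accuracies into an overall cognitive-level estimate, as the abstract's comparison to human age groups suggests.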
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9374