Implicit In-Context Learning: Evidence from Artificial Language Experiments

Published: 08 Jul 2025 · Last Modified: 26 Aug 2025 · COLM 2025 · License: CC BY 4.0
Keywords: implicit learning, artificial language learning, in-context learning, psycholinguistics, cognitive science
TL;DR: LLMs show human-like implicit learning during inference, but different models excel in different linguistic domains, pointing to distinct in-context learning mechanisms influenced by model architecture.
Abstract: Humans acquire language through implicit learning, absorbing complex patterns without explicit awareness. While large language models (LLMs) demonstrate impressive linguistic capabilities, it remains unclear whether they exhibit human-like pattern recognition during in-context learning at inference time. We adapted three classic artificial language learning experiments spanning morphology (regular/irregular plural marking), morphosyntax (context-dependent determiners), and syntax (finite state grammar) to systematically evaluate implicit learning at inference time in two state-of-the-art OpenAI models: gpt-4o (optimized for general language tasks) and o3-mini (specifically fine-tuned for explicit reasoning). This comparison allowed us to examine whether models trained to articulate their reasoning differ in their ability to extract implicit patterns. Our findings reveal a complex picture: o3-mini demonstrated human-like probabilistic learning in morphological regularization, while gpt-4o showed stronger performance in finite state grammar acquisition. Neither model successfully replicated human patterns in the morphosyntax task. Post-experiment probes revealed correlations between the models' performance and their ability to articulate the underlying patterns, suggesting alignment between implicit recognition and explicit awareness. These results indicate that different LLMs implement distinct in-context processing mechanisms, with architecture and training objectives influencing pattern extraction across linguistic domains. Our study contributes to understanding in-context learning in LLMs and provides a novel framework for evaluating these models through the lens of cognitive science, highlighting both similarities and differences between human implicit learning and machine in-context pattern recognition.
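For readers unfamiliar with the finite state grammar paradigm mentioned in the abstract, the sketch below illustrates how such an in-context probe can be constructed. This is a minimal illustration, not the paper's materials: the study's exact grammar and prompts are not given on this page, so the sketch assumes the classic Reber (1967) grammar commonly used in artificial grammar learning research.

```python
# Minimal sketch of a finite-state-grammar in-context probe.
# Assumption: the classic Reber grammar stands in for the paper's grammar.
import random

# state -> list of (emitted symbol, next state); None marks the accept state
REBER = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", None)],
    4: [("P", 3), ("V", None)],
}

def generate_string(max_len=12):
    """Random walk through the grammar; retry if the walk runs too long."""
    while True:
        state, out = 0, []
        while state is not None:
            symbol, state = random.choice(REBER[state])
            out.append(symbol)
            if len(out) > max_len:
                break
        if state is None:  # walk reached the accept state within max_len
            return "".join(out)

# Exposure phase: grammatical strings shown in context, followed by a
# grammaticality judgment query, mirroring the test phase of AGL studies.
exposure = [generate_string() for _ in range(20)]
prompt = (
    "Here are strings from an artificial language:\n"
    + "\n".join(exposure)
    + "\n\nIs the string 'TSXS' from the same language? Answer yes or no."
)
print(prompt)
```

A prompt like this, sent to each model at inference time, lets accuracy on held-out grammatical and ungrammatical strings serve as the measure of implicit pattern extraction, without any explicit statement of the grammar's rules.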
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Flagged For Ethics Review: true
Ethics Comments: Only a minor issue: the paper uses likely copyrighted images from other papers in Figures 1 and 3. The authors should produce new figures from the original data, as they did in Figure 2.
Submission Number: 1248