Keywords: Language Models, Neuroscience, Cognitive Science, Brain-Inspired AI, Mechanistic Interpretability, Algorithmic Correspondence, Representational Similarity, Sparse Connectivity, Compositional Reasoning, Interactive Learning
TL;DR: This paper explores the algorithmic differences between language models and the human brain, and proposes ways to incorporate brain-inspired properties into LMs for more human-like language processing.
Abstract: Language Models (LMs) have achieved impressive performance on various linguistic tasks, but their relationship to human language processing in the brain remains unclear. This paper examines the gaps and overlaps between LMs and the brain at different levels of analysis, emphasizing the importance of looking beyond input-output behavior to examine and compare the internal processes of these systems. We discuss how insights from neuroscience, such as sparsity, modularity, internal states, and interactive learning, can inform the development of more biologically plausible language models. Furthermore, we explore the role of scaling laws in bridging the gap between LMs and human cognition, highlighting the need for efficiency constraints analogous to those in biological systems. By developing LMs that more closely mimic brain function, we aim to advance both artificial intelligence and our understanding of human cognition.
Submission Number: 33