An Analysis for Reasoning Bias of Language Models with Small Initialization

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
Abstract: Transformer-based Large Language Models (LLMs) have revolutionized Natural Language Processing by demonstrating exceptional performance across diverse tasks. This study investigates the impact of the parameter initialization scale on the training behavior and task preferences of LLMs. We discover that smaller initialization scales encourage models to favor reasoning tasks, whereas larger initialization scales lead to a preference for memorization tasks. We validate this reasoning bias via real datasets and meticulously designed anchor functions. Further analysis of initial training dynamics suggests that specific model components, particularly the embedding space and self-attention mechanisms, play pivotal roles in shaping these learning biases. We provide a theoretical framework from the perspective of model training dynamics to explain these phenomena. Additionally, experiments on real-world language tasks corroborate our theoretical insights. This work enhances our understanding of how initialization strategies influence LLM performance on reasoning tasks and offers valuable guidelines for training models.
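Note on terminology: "initialization scale" refers to how large the randomly initialized parameters are before training. A minimal sketch of one common way to control it is given below, assuming weights drawn as N(0, d^(-2*gamma)) so that a larger gamma gives a smaller initialization scale; the function name init_transformer_, the gamma values, and the toy architecture are illustrative assumptions, not the paper's exact parameterization.

```python
import torch.nn as nn

def init_transformer_(model: nn.Module, gamma: float = 0.5) -> None:
    """Re-initialize linear and embedding weights with std = d^(-gamma).

    gamma controls the initialization scale: larger gamma means a smaller
    scale. This is an illustrative sketch, not the paper's exact setup.
    """
    for module in model.modules():
        if isinstance(module, nn.Linear):
            d_in = module.in_features
            nn.init.normal_(module.weight, mean=0.0, std=d_in ** (-gamma))
            if module.bias is not None:
                nn.init.zeros_(module.bias)
        elif isinstance(module, nn.Embedding):
            d_model = module.embedding_dim
            nn.init.normal_(module.weight, mean=0.0, std=d_model ** (-gamma))

# Example: the same toy architecture under two initialization scales.
layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)

small_init_model = nn.TransformerEncoder(layer, num_layers=2)
init_transformer_(small_init_model, gamma=1.0)   # smaller initialization scale

large_init_model = nn.TransformerEncoder(layer, num_layers=2)
init_transformer_(large_init_model, gamma=0.5)   # larger (standard-like) scale
```

Under this kind of parameterization, the abstract's claim is that models initialized like small_init_model tend toward reasoning-style solutions, while those initialized like large_init_model tend toward memorization.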
Lay Summary: Large language models have revolutionized our daily life and work, particularly through their ability to perform reasoning tasks. However, a critical question remains: Do these models truly possess reasoning capabilities, or do they merely memorize answers? And how can we develop language models that prioritize genuine reasoning? Our research reveals that a model's initialization settings significantly influence its learning bias. We demonstrate that for identical tasks, certain initialization configurations lead the model to memorize answers, while others enable it to truly grasp underlying principles and rules. This work helps developers design smarter models by adjusting initial settings. These insights offer a roadmap to train more efficient AI systems.
Primary Area: Deep Learning->Large Language Models
Keywords: initialization scale, reasoning bias, language model, embedding space, training dynamics
Submission Number: 1338