Exploring Coding Spot: Understanding Parametric Contributions to LLM Coding Performance

ACL ARR 2025 July Submission 421 Authors

28 Jul 2025 (modified: 20 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) demonstrate strong code generation and comprehension abilities, yet the extent to which different programming languages are processed independently or within a shared parametric space remains unclear. Inspired by cognitive neuroscience, we introduce Coding Spot, a specialized parametric region that underpins coding capability in LLMs. Our findings show that targeted modifications to this parameter subset significantly affect coding performance while largely preserving non-coding functionality, suggesting that LLMs exhibit parametric specialization analogous to function-specific brain regions. This indicates that coding knowledge may not be uniformly distributed across the model but instead concentrated in distinct regions that play a crucial role in task-specific performance. A deeper understanding of how LLMs internalize coding knowledge offers new directions for optimizing model architectures and improving code-related applications.
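The abstract does not specify how the Coding Spot is located or how the "targeted modifications" are performed, but the shape of the experiment it describes can be sketched. Below is a minimal, hypothetical PyTorch illustration: score parameters by a simple attribution rule on a code snippet, zero out the top fraction per tensor, and compare the model's language-modeling loss on coding versus non-coding text before and after. The attribution rule (|weight × gradient|), the ablation fraction, and the use of GPT-2 are all illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a "Coding Spot" ablation: zero out a small,
# attribution-selected parameter subset and measure the effect on coding
# vs. non-coding inputs. The selection rule and model are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; the paper's LLMs are not named in the abstract
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def lm_loss(text: str) -> float:
    """Next-token prediction loss of the model on `text` (lower is better)."""
    batch = tok(text, return_tensors="pt")
    with torch.no_grad():
        return model(**batch, labels=batch["input_ids"]).loss.item()

code = "def add(a, b):\n    return a + b\n"
prose = "The quick brown fox jumps over the lazy dog.\n"
print("before ablation:", lm_loss(code), lm_loss(prose))

# Attribution pass: gradients of the LM loss on the code snippet.
batch = tok(code, return_tensors="pt")
model(**batch, labels=batch["input_ids"]).loss.backward()

# Ablate the top fraction of coordinates per tensor by |weight * grad|.
fraction = 1e-4  # assumed size of the candidate region
with torch.no_grad():
    for p in model.parameters():
        if p.grad is None:
            continue
        score = (p * p.grad).abs().flatten()
        k = max(1, int(fraction * score.numel()))
        idx = torch.topk(score, k).indices
        p.data.view(-1)[idx] = 0.0  # zero the candidate "coding" coordinates

print("after ablation:", lm_loss(code), lm_loss(prose))
```

If the abstract's claim holds for the selected subset, the loss on the code snippet should rise markedly after ablation while the loss on the prose snippet stays comparatively stable; in practice this would be measured on full coding and general-language benchmarks rather than single sentences.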
Paper Type: Short
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: hierarchical & concept explanations, knowledge tracing/discovering/inducing, probing
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 421