Disentangling Linguistic Competence and Factual Knowledge in LLMs: A Survey

ACL ARR 2026 January Submission2883 Authors

03 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Large Language Models (LLMs); Linguistic–Knowledge Separation; Factuality and Hallucinations; Survey Paper
Abstract: Maintaining factual accuracy becomes increasingly expensive as Large Language Models (LLMs) scale. This has spurred a modular perspective that decouples linguistic competence from factual knowledge in LLMs, enabling targeted fact updates without full retraining. Yet a coherent synthesis to guide this emerging line of work is still lacking. To fill this gap, we present a comprehensive survey through the lens of Linguistic–Knowledge Separation (LKS), consolidating methods and evaluations into a unified framework. We make four contributions: (1) We clarify the conceptual distinction between linguistic competence and factual knowledge. (2) We summarize representative benchmarks and metrics for the linguistic and knowledge sides, enabling side-specific evaluation. (3) We comprehensively survey LKS methodologies and organize them into a systematic taxonomy. (4) Finally, we outline future directions and open challenges toward robust, generalizable LKS.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: knowledge tracing/discovering/inducing; model editing; probing
Contribution Types: Surveys
Languages Studied: English
Submission Number: 2883