Mitigating Structural Knowledge Collapse in Domain-Specific LLMs via Morpheme-Aware KV-Aggregation

ACL ARR 2026 January Submission 8166 Authors

06 Jan 2026 (modified: 20 Mar 2026), CC BY 4.0
Keywords: Morpheme-aware modeling, subword compositionality, domain-specific LLMs, parameter-efficient fine-tuning
Abstract: Standard tokenizers over-fragment domain-specific terms, disrupting morpheme semantics. We characterize the resulting representational misalignment as Structural Knowledge Collapse (SKC): attention mechanisms fail to reconstruct coherent concepts from fragmented inputs. Existing input-centric remedies such as vocabulary expansion mitigate the fragmentation, but they require expensive embedding retraining and ignore compositionality inside the attention mechanism. We therefore introduce Morpheme-aware KV-aggregation Attention (MorphKA), a lightweight adapter that dynamically consolidates fragments without modifying the tokenizer. Bypassing tokenizer retraining, MorphKA employs a dual-phase strategy, Input-Level Morpheme Aggregation (IMA) and Context-Aware KV-Aggregation (AMRF), to stabilize morpheme spans and synthesize higher-order concepts. Experiments on medical and legal benchmarks show that MorphKA outperforms vocabulary-adaptation baselines by 3.2--4.6\%, with gains of up to 7.9\% on highly fragmented terms. Moreover, MorphKA reduces catastrophic interference with general capabilities by 18--22\% while using $\sim$80\% fewer parameters than embedding-retraining approaches.
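The abstract does not specify MorphKA's internals, so the following is only a minimal sketch of the general idea of KV-aggregation over morpheme spans: per-fragment key/value vectors belonging to the same morpheme span are pooled into one consolidated entry before attention. All names (`aggregate_kv`, `span_ids`) and the choice of mean pooling are illustrative assumptions, not the authors' implementation.

```python
import torch

def aggregate_kv(keys, values, span_ids):
    """Pool key/value vectors over subword fragments that share a morpheme span.

    keys, values: (seq_len, d) tensors from one attention head.
    span_ids:     (seq_len,) long tensor; fragments of the same morpheme
                  span share an id (assumed to come from an external
                  morpheme segmenter run over the detokenized input).
    Returns (n_spans, d) aggregated keys and values.
    """
    n_spans = int(span_ids.max().item()) + 1
    d = keys.size(-1)
    # Sum keys/values into per-span buckets, then divide by span sizes
    # to obtain a mean-pooled representation per morpheme span.
    agg_k = torch.zeros(n_spans, d).index_add_(0, span_ids, keys)
    agg_v = torch.zeros(n_spans, d).index_add_(0, span_ids, values)
    counts = torch.zeros(n_spans).index_add_(
        0, span_ids, torch.ones(span_ids.size(0))
    ).unsqueeze(-1)
    return agg_k / counts, agg_v / counts
```

A learned variant of the paper's adapter would presumably replace the uniform mean with trainable pooling weights and gate the aggregated entries back into the attention computation; the sketch above only illustrates the span-consolidation step.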
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: transfer learning / domain adaptation, representation learning, generalization, optimization methods
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 8166