Keywords: Specialized Generalist Models, Large Language Models
Abstract: Large Language Models (LLMs) excel at general language tasks but struggle in specialized domains. Specialized Generalist Models (SGMs) address this by preserving broad capabilities while adapting to target domains. However, existing architectures provide limited support for task-guided specialized memory mechanisms.
In this work, we introduce Nirvana, an SGM featuring specialized memory, linear-time complexity, and test-time task information extraction. Central to Nirvana are:
(1) Task-Aware Memory Trigger ($\textit{Trigger}$), which treats each input as a self-supervised fine-tuning task and adjusts task-related parameters on the fly; and
(2) Specialized Memory Updater ($\textit{Updater}$), which dynamically consolidates task-relevant context.
Nirvana matches or surpasses LLM baselines on general benchmarks and achieves the lowest perplexity across specialized domains including biomedicine, finance, and law. On the challenging task of Magnetic Resonance Imaging (MRI) reconstruction, we attach lightweight codecs to the frozen Nirvana backbone and fine-tune them on paired k-space signals and images. Nirvana achieves higher-fidelity reconstructions than conventional LLM-based models, with Trigger providing effective domain-specific adaptation.
Ablation studies confirm that removing Trigger leads to substantial degradation across all tasks, underscoring its essential role in task-aware specialization.
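To make the Trigger/Updater idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation): the input sequence itself is treated as a self-supervised task, and a linear fast-weight memory is refined at test time with a delta-rule update so that each token's key maps to the next token's value. All function names and the update rule here are illustrative assumptions.

```python
import numpy as np

def update_memory(M, k, v, beta=0.5):
    """Delta-rule fast-weight update: nudge memory so that M @ k -> v.
    (Generic linear-memory sketch; the paper's Updater may differ.)"""
    pred = M @ k
    return M + beta * np.outer(v - pred, k) / (k @ k + 1e-8)

def trigger_adapt(M, tokens, steps=3, beta=0.5):
    """Trigger-style test-time loop (hypothetical): treat the input as a
    self-supervised next-token task and refine memory on (key, value)
    pairs formed from consecutive token embeddings."""
    for _ in range(steps):
        for k, v in zip(tokens[:-1], tokens[1:]):
            M = update_memory(M, k, v, beta)
    return M

def recall_error(M, tokens):
    """Self-supervised error: how far M @ k_t is from v_{t+1}."""
    return sum(np.linalg.norm(M @ k - v)
               for k, v in zip(tokens[:-1], tokens[1:]))

rng = np.random.default_rng(0)
d = 8
tokens = [rng.standard_normal(d) for _ in range(6)]
M0 = np.zeros((d, d))          # empty specialized memory
M1 = trigger_adapt(M0, tokens)  # adapted on the input itself
```

Each update costs O(d^2) per token and touches only the memory matrix, which is consistent with the abstract's claims of linear-time complexity and of adjusting task-related parameters on the fly rather than the full backbone.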
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: NLP Applications, Language Modeling
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study, Theory
Languages Studied: English, Chinese, French, German, Spanish
Submission Number: 571