Structure-aware Domain Knowledge Injection for Large Language Models

ACL ARR 2024 December Submission 2191 Authors

16 Dec 2024 (modified: 05 Feb 2025) · ACL ARR 2024 December Submission · CC BY 4.0
Abstract: This paper introduces StructTuning, a methodology for efficiently transforming foundation Large Language Models (LLMs) into domain specialists. It reduces the required training corpus to a mere 0.3% while achieving roughly 50% of the performance of traditional knowledge injection. Motivated by structured human education, we propose a novel two-stage strategy for knowledge injection and alignment: Structure-aware Continual Pre-Training (SCPT) and Structure-aware Supervised Fine-Tuning (SSFT). In the SCPT phase, we automatically extract the domain knowledge taxonomy and reorganize the training corpora, enabling LLMs to effectively link textual segments to targeted knowledge points within the taxonomy. In the SSFT phase, we explicitly prompt models to elucidate the underlying knowledge structure in their outputs, leveraging the structured domain insight to address practical problems. Our method is extensively evaluated across model architectures and scales on the LongBench and MMedBench datasets. We also explore its scalability across training corpus sizes, laying the foundation for enhancing domain-specific LLMs with better data utilization. Code is available at this anonymous URL: https://anonymous.4open.science/r/StructTuning/.
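The two-stage recipe in the abstract can be pictured with a minimal sketch: taxonomy-tagged corpus segments are serialized into structure-prefixed pre-training text (SCPT-style), and supervised samples prompt the model to name the relevant knowledge points before answering (SSFT-style). The names KnowledgePoint, build_scpt_sample, and build_ssft_sample below are illustrative assumptions, not the authors' released implementation; see the anonymous repository for the actual code.

```python
# Hypothetical illustration of structure-aware sample construction.
# All identifiers here are assumptions, not taken from the StructTuning repo.
from dataclasses import dataclass
from typing import List


@dataclass
class KnowledgePoint:
    """One node of the extracted domain-knowledge taxonomy."""
    path: List[str]   # e.g. ["Cardiology", "Arrhythmia", "Atrial fibrillation"]
    text: str         # corpus segment linked to this knowledge point


def build_scpt_sample(kp: KnowledgePoint) -> str:
    """Prefix a corpus segment with its taxonomy path so continual
    pre-training can associate the text with a specific knowledge point."""
    outline = " > ".join(kp.path)
    return f"[Knowledge path] {outline}\n[Content] {kp.text}"


def build_ssft_sample(question: str, kp: KnowledgePoint, answer: str) -> dict:
    """Ask the model to state the relevant knowledge structure before
    answering, making the structured domain insight explicit in the output."""
    outline = " > ".join(kp.path)
    return {
        "prompt": f"{question}\nFirst identify the relevant knowledge points, then answer.",
        "response": f"Relevant knowledge: {outline}.\n{answer}",
    }


if __name__ == "__main__":
    kp = KnowledgePoint(
        path=["Cardiology", "Arrhythmia", "Atrial fibrillation"],
        text="Atrial fibrillation is an irregular, often rapid heart rhythm...",
    )
    print(build_scpt_sample(kp))
    print(build_ssft_sample(
        "Which arrhythmia causes an irregularly irregular pulse?",
        kp,
        "Atrial fibrillation.",
    ))
```

Prefixing the taxonomy path keeps the link between a segment and its knowledge point explicit at the token level, which loosely mirrors the SCPT idea of tying corpus text to the extracted knowledge structure.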
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Large Language Models, Knowledge Injection, Domain Adaptation, Knowledge Structure
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English, Chinese, Japanese, French, Russian, Spanish
Submission Number: 2191