Abstract: LLMs excel in general NLP yet struggle in specialized domains such as finance and regulation. This study proposes a sequential fine-tuning framework for multitask learning in a unified LLM, structuring tasks into foundational, question-answering, and stylistic-answer knowledge to mitigate catastrophic forgetting and enhance knowledge transfer. Evaluations on the COLING 2025 Regulations Challenge dataset demonstrate significant improvements, with notable gains in financial QA and MOF license abbreviation recognition. Unlike Chain-of-Thought inference-based methods, this approach integrates reasoning during training, reducing inference costs and improving scalability. While challenges remain with sparse and context-dependent data, the findings highlight structured task sequencing as a promising strategy for domain-adapted LLMs.
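The staged training schedule described in the abstract can be sketched as follows. This is a minimal illustration of the sequencing idea only; the function and variable names (`fine_tune`, `sequential_fine_tuning`, `STAGES`) are hypothetical and do not reflect the authors' actual implementation or any real library API.

```python
# Hypothetical sketch: sequential fine-tuning over three ordered task stages,
# where each stage resumes from the state produced by the previous one so
# later stages build on earlier knowledge (the paper's proposed ordering).

STAGES = ["foundational", "question-answering", "stylistic-answer"]

def fine_tune(model_state, stage, examples):
    """Placeholder stage-wise update: records which stage's data was seen.
    In a real system this would be a gradient-based training pass."""
    return model_state + [(stage, len(examples))]

def sequential_fine_tuning(datasets):
    model_state = []  # stands in for model weights carried across stages
    for stage in STAGES:
        model_state = fine_tune(model_state, stage, datasets[stage])
    return model_state

history = sequential_fine_tuning({
    "foundational": ["regulation definitions", "domain glossary"],
    "question-answering": ["financial QA pairs"],
    "stylistic-answer": ["style-constrained answers"],
})
print([stage for stage, _ in history])
```

The key design point the sketch captures is that the stages are not trained jointly: each stage's output initializes the next, which is what the abstract contrasts with inference-time Chain-of-Thought approaches.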
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: fine-tuning, prompting, domain adaptation, financial/business NLP, legal NLP, applications
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 963