Task Adaptation in Large Language Models: A Unified Survey of Weight-Based, Prompt-Based, and Embedding-Based Adaptations

ACL ARR 2026 January Submission5579 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: large language models, task adaptation, unified perspective, parameter-efficient fine-tuning, in-context learning, task embedding injection
Abstract: As Large Language Models (LLMs) are increasingly deployed across diverse downstream tasks, efficient task adaptation has emerged as a central challenge. In response, a wide range of task adaptation methods have been proposed, spanning parameter-efficient fine-tuning (PEFT), in-context learning (ICL), and embedding-injection approaches. However, existing research has evolved largely in isolation within each paradigm, resulting in fragmented terminology, assumptions, and evaluation practices. This survey presents a unified framework for understanding task adaptation in LLMs, where task adaptation methods are categorized according to where task-relevant information is encoded: model weights, input prompts, or injected task embeddings. We provide a comprehensive taxonomy that integrates these paradigms, analyze trade-offs along key practical dimensions, including applicability to proprietary models, performance, efficiency, and task-switching overhead, and highlight open problems for future research.
Paper Type: Long
Research Area: Language Models
Research Area Keywords: fine-tuning, prompting, model editing, transfer, robustness
Contribution Types: Surveys
Languages Studied: English
Submission Number: 5579