Technical Briefing on Parameter Efficient Fine-Tuning of (Large) Language Models for Code-Intelligence

Published: 01 Jan 2024 · Last Modified: 02 Oct 2024 · ICSE Companion 2024 · CC BY-SA 4.0
Abstract: Large Language Models (LLMs) have gained considerable attention in the Software Engineering (SE) community, particularly for code-related tasks. Although the common approach is to fully fine-tune these models, doing so is computationally heavy and time-consuming, and therefore not accessible to everyone. More importantly, with billions of parameters per model, fully fine-tuning them for each new task or domain is infeasible or inefficient. This technical briefing covers an alternative approach, Parameter-Efficient Fine-Tuning (PEFT): it discusses state-of-the-art PEFT techniques, reflects on the few existing studies that apply PEFT in Software Engineering, and considers how adapting the PEFT architectures developed for natural language processing could improve performance on code-related tasks.
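
To make the contrast with full fine-tuning concrete, below is a minimal sketch of one widely used PEFT technique, LoRA, applied to a code LLM via the Hugging Face peft library. This is an illustration, not the briefing's own setup: the model choice (Salesforce/codegen-350M-mono), the target module name (qkv_proj, CodeGen's fused attention projection), and all hyperparameters are assumptions chosen for the example.

```python
# Sketch: LoRA-based PEFT of a code LLM with Hugging Face `peft`.
# Assumed (not from the briefing): model, target modules, and hyperparameters.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "Salesforce/codegen-350M-mono"  # small code model, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the pretrained weight W and trains a low-rank update,
# so the effective weight becomes W + (alpha / r) * B @ A.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                          # rank of the low-rank update (assumed)
    lora_alpha=16,                # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["qkv_proj"],  # CodeGen's fused Q/K/V projection (assumed)
)
model = get_peft_model(model, lora_config)

# Typically well under 1% of the parameters remain trainable,
# which is the core efficiency argument for PEFT over full fine-tuning.
model.print_trainable_parameters()
```

The wrapped model can then be trained with the usual transformers training loop; only the small adapter weights need to be stored per task, rather than a full copy of the billions of base-model parameters.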