Reparameterization-Based Parameter-Efficient Fine-Tuning Methods for Large Language Models: A Systematic Survey

Zezhou Chen, Zhaoxiang Liu, Kai Wang, Shiguo Lian

Published: 2024 (Last Modified: 09 Jan 2026). NLPCC (3) 2024. License: CC BY-SA 4.0.
Abstract: The rapid advancement of Large Language Models (LLMs) has revolutionized both academia and industry, leveraging Transformer architectures and pre-training objectives to achieve unprecedented performance. To fully exploit the potential of LLMs, fine-tuning them on specific downstream tasks is essential. However, traditional full fine-tuning poses significant computational challenges, prompting the emergence of Parameter-Efficient Fine-Tuning (PEFT) methods, especially reparameterization-based PEFT methods. In this survey, we delve into reparameterization-based PEFT methods, which aim to fine-tune LLMs at reduced computational cost while preserving their knowledge. We systematically analyze their design principles and divide these methods into six categories. For each method, we analyze the trainable-parameter complexity, GPU memory consumption, training time cost, accuracy, and limitations. Finally, we summarize open challenges for reparameterization-based PEFT methods and propose future directions.
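As background for readers new to this family of methods: the best-known reparameterization-based PEFT method is LoRA, which freezes the pretrained weight and learns a low-rank additive update. The sketch below is a generic NumPy illustration of that idea, not code from this survey; all names, dimensions, and the scaling convention (alpha/r) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 16, 4  # hypothetical layer sizes and rank

# Frozen pretrained weight W0; only the low-rank factors A and B are trained.
W0 = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01  # down-projection, small random init
B = np.zeros((d_out, r))                   # up-projection, zero init so the update starts at 0

def lora_forward(x, alpha=1.0):
    # h = W0 x + (alpha / r) * B A x : the low-rank term reparameterizes the weight update.
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer initially matches the frozen model.
assert np.allclose(lora_forward(x), W0 @ x)
# Trainable parameters: r * (d_in + d_out), far fewer than d_in * d_out for small r.
assert r * (d_in + d_out) < d_in * d_out
```

This illustrates why such methods cut memory and compute: gradients and optimizer state are kept only for the r·(d_in + d_out) factor parameters, while W0 stays untouched.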