Flexora: Flexible Low-Rank Adaptation for Large Language Models

Published: 10 Oct 2024, Last Modified: 30 Oct 2024
Venue: FITML 2024 Poster
License: CC BY 4.0
Keywords: Language Models, Fine-tuning, Hyperparameter Optimization
TL;DR: Flexora is a novel method that enhances Large Language Model fine-tuning efficiency by selectively adapting only the most critical layers, using hyperparameter optimization and unrolled differentiation to curb overfitting and improve performance.
Abstract: Large language models (LLMs) have revolutionized artificial intelligence, but their performance on specific tasks is often limited by knowledge boundaries. While fine-tuning techniques like low-rank adaptation (LoRA) aim to address this, they can suffer from overfitting. We propose flexible low-rank adaptation (Flexora), a novel method that automatically selects the most critical layers for fine-tuning to optimize performance across diverse downstream tasks. Flexora formulates layer selection as a hyperparameter optimization problem, employs unrolled differentiation to solve it efficiently, and identifies the most impactful layers based on the optimized hyperparameters. Extensive experiments across various pre-trained models and natural language tasks demonstrate that Flexora consistently outperforms existing baselines. We provide theoretical insights and comprehensive ablation studies to elucidate the effectiveness of Flexora. Overall, Flexora offers a robust solution to enhance LoRA fine-tuning for LLMs, potentially advancing the field of adaptive language model optimization.
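The following is a minimal, hypothetical sketch of the layer-selection idea described in the abstract, not the authors' implementation: each layer is assigned a learnable scalar gate treated as a hyperparameter, the gates are optimized against a validation objective, and the top-k layers by gate value are the ones kept for LoRA fine-tuning. The `validation_loss` here is a synthetic stand-in for the unrolled inner fine-tuning and validation step that Flexora actually differentiates through; all names and values are assumptions for illustration.

```python
import torch

num_layers, k = 12, 4                                 # assumed model depth and layer budget
alpha = torch.zeros(num_layers, requires_grad=True)   # one selection gate per layer

def validation_loss(gates: torch.Tensor) -> torch.Tensor:
    # Stand-in for the unrolled fine-tuning + validation evaluation;
    # a real implementation would backpropagate through LoRA training steps.
    target = torch.linspace(1.0, 0.0, num_layers)     # synthetic "layer importance" signal
    return ((torch.sigmoid(gates) - target) ** 2).mean()

# Hyperparameter optimization loop over the layer gates.
opt = torch.optim.Adam([alpha], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = validation_loss(alpha)
    loss.backward()
    opt.step()

# Keep LoRA only on the k layers with the largest optimized gates.
selected = torch.topk(torch.sigmoid(alpha), k).indices.tolist()
print("Layers selected for LoRA fine-tuning:", sorted(selected))
```

In this toy setup the gates converge toward the synthetic importance signal, so the earliest layers are selected; in the actual method the selection would instead be driven by the downstream task's validation loss.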
Submission Number: 39