Low-Rank Adapters Meet Neural Architecture Search for LLM Compression

AAAI 2025 Workshop CoLoRAI Submission 13 Authors

22 Nov 2024 (modified: 03 Feb 2025), AAAI 2025 Workshop CoLoRAI Submission, CC BY 4.0
Keywords: Low-rank representations, neural architecture search
TL;DR: A retrospective paper that comprehensively discusses approaches combining low-rank representations with Neural Architecture Search (NAS) techniques, particularly weight-sharing super-networks.
Abstract: The rapid expansion of Large Language Models (LLMs) has posed significant challenges regarding the computational resources required for fine-tuning and deployment. Recent advancements in low-rank adapters have demonstrated their efficacy in parameter-efficient fine-tuning (PEFT) of these models. This retrospective paper comprehensively discusses innovative approaches that synergize low-rank representations with Neural Architecture Search (NAS) techniques, particularly weight-sharing super-networks. Integrating these methodologies yields robust solutions for compressing and fine-tuning large pre-trained models. Our analysis highlights the potential of these combined strategies to democratize the use of LLMs, making them more accessible for deployment in resource-constrained environments. The resulting models exhibit reduced memory footprints and faster inference times, paving the way for more practical and scalable applications of LLMs.
Submission Number: 13
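
To make the combination of low-rank adapters and weight-sharing super-networks concrete, the sketch below shows a hypothetical "elastic" LoRA linear layer in PyTorch: the frozen pre-trained weight is augmented with a low-rank update whose rank can be sub-sampled at run time, so sub-adapters of different ranks share the same parameters during super-network training. The class name, initialization, and rank-selection scheme are illustrative assumptions, not the exact formulation used by the methods discussed in the paper.

```python
import torch
import torch.nn as nn


class ElasticLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank adapter with an adjustable (elastic) rank.

    Hypothetical sketch: only the first `active_rank` rank-1 components of the
    adapter are used in the forward pass, mimicking weight-sharing sub-networks.
    """

    def __init__(self, in_features, out_features, max_rank=32, scale=1.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # pre-trained weight stays frozen
        # LoRA factors: B @ A is the low-rank update; B starts at zero so the
        # adapter initially leaves the base model unchanged.
        self.lora_A = nn.Parameter(torch.randn(max_rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, max_rank))
        self.max_rank = max_rank
        self.active_rank = max_rank
        self.scale = scale

    def set_active_rank(self, r):
        # Sub-network selection: restrict the adapter to its first r components.
        self.active_rank = min(r, self.max_rank)

    def forward(self, x):
        r = self.active_rank
        # Low-rank update of shape (out_features, in_features), truncated to rank r.
        delta = self.lora_B[:, :r] @ self.lora_A[:r, :]
        return self.base(x) + self.scale * nn.functional.linear(x, delta)


# During super-network training, a rank would be sampled per step so that all
# sub-adapters share weights; after the search, a single rank is fixed for deployment.
layer = ElasticLoRALinear(768, 768, max_rank=32)
layer.set_active_rank(8)
y = layer(torch.randn(4, 768))
```

In this sketch, reducing the active rank shrinks the adapter's effective parameter count without retraining, which is the property that NAS-style search exploits when trading accuracy against memory footprint and inference latency.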