Abstract: Recent advances have highlighted the benefits of scaling language models to enhance performance across a wide range of NLP tasks. However, these approaches still face limitations in effectiveness and efficiency when applied to domain-specific tasks, particularly for small edge-side models. We propose the LoRA-Gen framework, which utilizes a large cloud-side model to generate LoRA parameters for edge-side models based on task descriptions. By employing the reparameterization technique, we merge the LoRA parameters into the edge-side model to achieve flexible specialization. Our method facilitates knowledge transfer between models while significantly improving the inference efficiency of the specialized model by reducing the input context length. Without specialized training, LoRA-Gen outperforms conventional LoRA fine-tuning, achieving competitive accuracy and a 2.1x speedup with TinyLLaMA-1.1B on reasoning tasks.
In addition, our method delivers a compression ratio of 10.1x with Gemma-2B on intelligent agent tasks.
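The reparameterization step mentioned in the abstract follows the standard LoRA formulation, where the low-rank update is folded into the frozen base weight so that the specialized edge model incurs no extra adapter cost at inference. The sketch below is a minimal illustration of that merge; the function name, tensor shapes, and scaling convention are illustrative assumptions, not the paper's released implementation.

```python
import torch

def merge_lora_into_linear(weight: torch.Tensor,
                           lora_A: torch.Tensor,
                           lora_B: torch.Tensor,
                           alpha: float,
                           rank: int) -> torch.Tensor:
    """Fold a low-rank LoRA update into a frozen linear weight.

    weight : (out_features, in_features) base weight of the edge-side model
    lora_A : (rank, in_features) down-projection (here assumed cloud-generated)
    lora_B : (out_features, rank) up-projection (here assumed cloud-generated)
    """
    scaling = alpha / rank
    # Reparameterization: W' = W + (alpha / r) * B @ A, so the specialized
    # model runs at the original cost with no separate adapter modules.
    return weight + scaling * (lora_B @ lora_A)


# Usage with hypothetical shapes: merge one generated adapter into a projection layer.
out_features, in_features, rank = 2048, 2048, 8
base_weight = torch.randn(out_features, in_features)
lora_A = torch.randn(rank, in_features)   # stand-in for cloud-generated parameters
lora_B = torch.zeros(out_features, rank)  # zero init leaves the base weight unchanged
merged = merge_lora_into_linear(base_weight, lora_A, lora_B, alpha=16.0, rank=rank)
```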
Lay Summary: Recent advances show that scaling language models improves NLP performance, but effectiveness and efficiency remain limited for small edge-side models. We introduce LoRA-Gen, a framework that leverages a large cloud model to generate LoRA parameters for edge models based on task descriptions. Using reparameterization, we integrate these parameters for flexible specialization, enabling efficient knowledge transfer and reducing input length for faster inference.
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Model, LoRA, Model Specialization
Submission Number: 5959