Keywords: peft, fine-tuning, neural networks, deep learning, gpt, LLM, AI, dynamical systems
TL;DR: Solo Connection is a parameter-efficient fine-tuning method using long skip connections and homotopy-inspired adaptation. It outperforms LoRA on NLG benchmarks while using up to 59\% fewer trainable parameters than LoRA and more than 99\% fewer than full fine-tuning.
Abstract: Parameter-efficient fine-tuning (PEFT) is a versatile and extensible approach for adapting a Large Language Model (LLM) to new tasks. One of the most prominent PEFT approaches, Low-Rank Adaptation (LoRA), primarily focuses on adjusting the attention weight matrices within individual decoder blocks of a Generative Pre-trained Transformer such as GPT-2. In contrast, we introduce Solo Connection—a novel method that adapts the representation at the decoder-block level rather than modifying individual weight matrices. Not only does Solo Connection outperform LoRA on E2E natural language generation benchmarks, but it also reduces the number of trainable parameters by 59\% relative to LoRA and by more than 99\% compared to full fine-tuning of GPT-2, one of the earliest large language models. Solo Connection is further motivated by homotopy theory: we introduce a trainable linear transformation that gradually interpolates between a zero vector and the task-specific representation, enabling smooth and stable adaptation over time.
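To make the homotopy-style adaptation concrete, below is a minimal PyTorch-style sketch, not the authors' released code: a zero-initialized scalar gate scales a trainable linear projection, so the adapter contributes the zero vector at initialization and gradually moves toward the task-specific representation as the gate is learned. The module and parameter names (`HomotopyAdapter`, `alpha`, `proj`) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class HomotopyAdapter(nn.Module):
    """Sketch of a homotopy-style adapter: a trainable linear map whose
    contribution is blended in by a scalar gate `alpha`, starting at the
    zero vector and moving toward the task-specific representation."""

    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model, bias=False)
        # alpha = 0 at initialization, so the adapter output is the zero
        # vector and the pre-trained model's behavior is unchanged at step 0.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Interpolation between the zero vector and the learned task
        # representation: (1 - alpha) * 0 + alpha * proj(h) = alpha * proj(h).
        return self.alpha * self.proj(h)
```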
While skip connections in the original 12-layer GPT-2 are confined to individual decoder blocks, subsequent GPT-2 variants scale up to 48 layers, and even larger language models can include 128 or more decoder blocks. These expanded architectures underscore the need to revisit how skip connections are employed during fine-tuning. This paper focuses on "long skip connections" that link the outputs of different decoder blocks, potentially enhancing the model's ability to adapt to new tasks while leveraging pre-trained knowledge.
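As an illustration of how such long skip connections could be wired during fine-tuning, the sketch below (reusing the `HomotopyAdapter` from the previous sketch) carries the representation of an earlier decoder block forward and adds it, through a trainable adapter, to the output of a block several layers later, while the pre-trained blocks stay frozen. The class name, the `skip_len` spacing, and the exact placement of the connection are assumptions for illustration, not the paper's specification.

```python
class DecoderStackWithLongSkips(nn.Module):
    """Sketch of a GPT-2-style decoder stack in which the output of an
    earlier block is linked to a later block's output via a long skip
    connection through a trainable adapter."""

    def __init__(self, blocks: nn.ModuleList, d_model: int, skip_len: int = 4):
        super().__init__()
        self.blocks = blocks
        # PEFT setting assumed here: freeze the pre-trained decoder blocks
        # so only the adapters are trained.
        for p in self.blocks.parameters():
            p.requires_grad_(False)
        self.skip_len = skip_len
        self.adapters = nn.ModuleList(
            [HomotopyAdapter(d_model) for _ in range(len(blocks) // skip_len)]
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        skip_src = h
        for i, block in enumerate(self.blocks):
            h = block(h)
            # Every `skip_len` blocks, feed the earlier representation forward
            # through the adapter and reset the source of the long skip.
            if (i + 1) % self.skip_len == 0:
                h = h + self.adapters[i // self.skip_len](skip_src)
                skip_src = h
        return h
```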
Submission Number: 20