Keywords: parameter-efficient fine-tuning, low-rank adaptation, foundation model
TL;DR: We propose Lily (Low-Rank Interconnected Adaptation across Layers), a novel parameter-efficient fine-tuning method with cross-layer connections, which gives every layer access to information from all layers and enables high-rank weight updates.
Abstract: Low-rank adaptation (LoRA) is a powerful parameter-efficient fine-tuning method that utilizes low-rank projectors $A$ and $B$ to learn weight updates $\Delta W$ for adaptation targets $W$. However, while the low-rank structure of $A$ and $B$ enables high hardware efficiency, it also restricts the overall weight update to be low-rank, which limits the adaptation performance. In this paper, we propose \underline{l}ow-rank \underline{i}nterconnected adaptation across \underline{l}a\underline{y}ers (Lily). Specifically, we employ a hierarchical framework in which low-dimensional projectors (LPs) are retained for downward projection at each particular level, while globally shared high-dimensional projector (HP) experts perform upward projection across all levels of layers. This interconnected asymmetric structure makes the adaptation much more dynamic and breaks LoRA's low-rank weight-update constraint under the same parameter budget. Furthermore, Lily's cross-layer connections facilitate the capture of intricate information and dependencies across different layers, thereby enhancing the model's representational capabilities. Experiments across various modalities, architectures, and model sizes underscore Lily's strong performance and efficiency.
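The abstract's structure can be illustrated with a minimal sketch: per-layer low-dimensional projectors paired with a pool of globally shared high-dimensional projector experts, combined by per-layer mixing weights. All shapes, the expert count, and the softmax mixing rule here are assumptions for illustration; the paper's exact routing and formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 64, 4                 # model width and low rank (assumed values)
n_layers, n_experts = 6, 3   # assumed counts

# Per-layer low-dimensional projectors (LPs): downward projection d -> r.
A = [rng.standard_normal((d, r)) * 0.02 for _ in range(n_layers)]

# Globally shared high-dimensional projector (HP) experts: upward
# projection r -> d, reused by every layer.
B = [rng.standard_normal((r, d)) * 0.02 for _ in range(n_experts)]

# Assumed per-layer mixing logits over the shared HP experts (in the
# paper this mixing may be input-dependent, making updates dynamic).
mix_logits = rng.standard_normal((n_layers, n_experts))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def delta_w(layer):
    """Weight update for one layer: LP down, expert-mixed shared HP up."""
    w = softmax(mix_logits[layer])
    B_mixed = sum(w[e] * B[e] for e in range(n_experts))
    return A[layer] @ B_mixed  # shape (d, d)

dw = delta_w(0)
print(dw.shape)  # (64, 64); with static mixing each update has rank <= r
```

Because every layer draws on the same shared expert pool with its own mixing, information flows across layers through the HP experts; with input-dependent mixing, the effective update is no longer a single fixed rank-$r$ matrix.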
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3759