Adding 32 Parameters to an LLM can improve fine-tuned classification performance by 1.5-6 percentage points

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: LLM finetuning, LLM classification
Abstract: In this paper, we introduce and analyze an architectural augmentation for Large Language Models (LLMs) that enhances their performance in fine-tuned classification tasks through a minimalistic yet effective approach. By incorporating one additional learnable parameter per transformer block, we facilitate a depth-wise pooling mechanism that leverages the hierarchical information encoded in the layers of the pre-trained model. We demonstrate that this method significantly improves classification accuracy, with an observed increase of 1.5-6 percentage points in some models, at a negligible compute cost during training. Our experiments span various models and datasets, underscoring the universality and adaptability of the proposed technique. The augmentation proves particularly effective under data-scarce conditions, highlighting its practical utility in real-world applications. The simplicity and efficacy of our approach advocate for its integration in fine-tuning LLMs for classification, promising enhanced performance and new insights into model interpretability and optimization for diverse NLP tasks.
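
To make the augmentation described in the abstract concrete, below is a minimal sketch of one way such a depth-wise pooling head could be implemented, assuming a HuggingFace-style backbone that can return per-layer hidden states. The class name, the softmax normalization of the layer weights, and the masked mean pooling over tokens are illustrative assumptions for this sketch, not details taken from the submission.

import torch
import torch.nn as nn

class DepthwisePooledClassifier(nn.Module):
    """Sketch of a depth-wise pooling classification head.

    One learnable scalar per transformer block weights that block's hidden
    states; the weighted combination is fed to a linear classification head.
    The softmax normalization and masked token pooling are assumptions.
    """

    def __init__(self, backbone, num_layers, hidden_size, num_classes):
        super().__init__()
        self.backbone = backbone  # pre-trained LLM exposing all hidden states
        # One extra learnable parameter per transformer block (e.g. 32 total)
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids,
                            attention_mask=attention_mask,
                            output_hidden_states=True)
        # Stack per-block outputs, skipping the embedding layer:
        # shape (num_layers, batch, seq_len, hidden)
        hidden = torch.stack(out.hidden_states[1:], dim=0)
        # Mean-pool over tokens, respecting the attention mask
        mask = attention_mask.unsqueeze(0).unsqueeze(-1)
        pooled = (hidden * mask).sum(dim=2) / mask.sum(dim=2).clamp(min=1)
        # Depth-wise pooling: softmax-normalized learnable weight per layer
        weights = torch.softmax(self.layer_weights, dim=0)
        fused = (weights.view(-1, 1, 1) * pooled).sum(dim=0)  # (batch, hidden)
        return self.classifier(fused)

With a 32-layer backbone, layer_weights contributes exactly 32 trainable scalars beyond the classification head, matching the parameter count referenced in the title; only the pooled representation, rather than the final layer's output alone, is passed to the classifier.
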
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3557