Surgical Feature-Space Decomposition of LLMs: Why, When and How?

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Low-rank approximations of the weight and feature space can enhance the performance of deep learning models, whether by improving generalization or by reducing inference latency. However, there is no clear consensus yet on how, when and why these approximations are helpful for large language models (LLMs). In this work, we empirically study the efficacy of weight and feature space decomposition in transformer-based LLMs. We demonstrate that surgical decomposition not only provides critical insights into the trade-off between compression and language modelling performance, but can also sometimes enhance the commonsense reasoning performance of LLMs. Our empirical analysis identifies specific network segments that intrinsically exhibit a low-rank structure. Furthermore, we extend our investigation to the implications of low-rank approximations for model bias. Overall, our findings offer a novel perspective on optimizing LLMs, presenting low-rank approximation not only as a tool for performance enhancement, but also as a means of potentially rectifying biases within these models.
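For readers unfamiliar with the underlying operation, the sketch below illustrates a generic truncated-SVD low-rank approximation of a single weight matrix in PyTorch. It is a minimal, hypothetical example assuming a dense linear layer and an arbitrary target rank; it is not the authors' surgical decomposition procedure, whose layer selection and feature-space variant are described in the paper itself.

```python
# Illustrative sketch only: rank-r truncation of one weight matrix via SVD.
# The matrix size (4096 x 4096) and rank (256) are arbitrary assumptions.
import torch

def low_rank_approx(weight: torch.Tensor, rank: int) -> torch.Tensor:
    """Return the best rank-`rank` approximation of `weight` in Frobenius norm
    (Eckart-Young), built from the truncated singular value decomposition."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

# Compress a hypothetical projection matrix and report the approximation error.
W = torch.randn(4096, 4096)
W_approx = low_rank_approx(W, rank=256)
rel_err = torch.linalg.norm(W - W_approx) / torch.linalg.norm(W)
print(f"relative Frobenius error at rank 256: {rel_err:.3f}")
```

In practice, such a rank-r factorization would be stored as two smaller matrices (U[:, :r] @ diag(S[:r]) and Vh[:r, :]) rather than re-multiplied into a dense matrix, which is what reduces parameter count and inference cost.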
Paper Type: long
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: Model analysis & interpretability, Approaches low compute settings-efficiency
Languages Studied: English
Preprint Status: There is no non-anonymous preprint and we do not intend to release one.
A1: yes
A1 Elaboration For Yes Or No: 6
A2: n/a
A3: yes
A3 Elaboration For Yes Or No: 1
B: no
B1: n/a
B3: n/a
B4: n/a
B5: n/a
B6: yes
B6 Elaboration For Yes Or No: Appendix A
C: yes
C1: yes
C1 Elaboration For Yes Or No: 5
C2: yes
C2 Elaboration For Yes Or No: 5
C3: yes
C3 Elaboration For Yes Or No: 5
C4: n/a
D: no
D1: n/a
D2: n/a
D3: n/a
D4: n/a
D5: n/a
E: no