Dependency-Aware Semi-Structured Sparsity of GLU Variants in Large Language Models

Published: 20 Jan 2025; Last Modified: 20 Jan 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: The rapid advancement of Large Language Models (LLMs) has markedly enhanced the capabilities of language understanding and generation. However, the substantial model size poses hardware challenges, affecting both the memory required for serving and the inference latency of token generation. To address these challenges, we propose Dependency-aware Semi-structured Sparsity (DaSS), a new pruning method for the recently prevalent GLU-based LLMs, which incorporates structural dependency into weight magnitude-based unstructured pruning. We introduce an MLP-specific pruning metric that evaluates the importance of each weight by jointly considering its magnitude and its corresponding MLP intermediate activation norms. DaSS strikes a balance between the adaptability offered by unstructured pruning and the structural consistency inherent in dependency-based structured pruning. Empirical evaluations on the LLaMA2, Mistral, and Gemma model families demonstrate that DaSS achieves superior perplexity and accuracy compared to SparseGPT and Wanda under hardware-friendly N:M sparsity patterns, while maintaining the computational efficiency of Wanda.
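To make the abstract's pruning metric concrete, below is a minimal sketch of the idea of scoring down-projection weights by weight magnitude times the corresponding MLP intermediate activation norm, followed by an N:M mask. The exact DaSS formula, the calibration procedure for `act_norms`, and the helper names `dass_style_scores` / `nm_mask` are assumptions for illustration, not the paper's implementation.

```python
import torch

def dass_style_scores(W_down: torch.Tensor, act_norms: torch.Tensor) -> torch.Tensor:
    """Score each weight of an MLP down projection by |w| weighted by the L2 norm
    of the intermediate activation channel it reads from.

    W_down:    (d_model, d_intermediate) down-projection weight (nn.Linear layout).
    act_norms: (d_intermediate,) per-channel activation norms collected on a small
               calibration set (collection code omitted; hypothetical input here).
    """
    return W_down.abs() * act_norms.unsqueeze(0)

def nm_mask(scores: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the top-n scores in every group of m consecutive weights along the
    input dimension (the reduction dimension), yielding an N:M sparsity mask."""
    rows, cols = scores.shape
    groups = scores.view(rows, cols // m, m)          # assumes cols is divisible by m
    topk = groups.topk(n, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(-1, topk, True)
    return mask.view(rows, cols)

# Usage: zero out the low-scoring weights under a 2:4 pattern.
# W_pruned = W_down * nm_mask(dass_style_scores(W_down, act_norms), 2, 4).to(W_down.dtype)
```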
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We revised the paper based on the "Requested revisions".
Code: https://github.com/guozhiyu/glu_dass
Supplementary Material: zip
Assigned Action Editor: ~Marwa_El_Halabi1
Submission Number: 3475