Abstract: The rapid advancement of Large Language Models (LLMs) has markedly enhanced language understanding and generation capabilities. However, the substantial model size poses hardware challenges, affecting both the memory required for serving and the inference latency of token generation. To address these challenges, we propose Dependency-aware Semi-structured Sparsity (DaSS), a novel pruning method for the recently prevalent GLU-based LLMs, which incorporates structural dependency into weight magnitude-based unstructured pruning. We introduce an MLP-specific pruning metric that evaluates the importance of each weight by jointly considering its magnitude and the norm of its corresponding MLP intermediate activation. DaSS strikes a balance between the adaptability offered by unstructured pruning and the structural consistency inherent in dependency-based structured pruning. Empirical evaluations on the LLaMA2, Mistral, and Gemma model families demonstrate that DaSS not only outperforms both SparseGPT and Wanda in achieving hardware-friendly N:M sparsity patterns but also maintains the computational efficiency of Wanda.
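The abstract describes a pruning metric that combines weight magnitude with the norm of the corresponding MLP intermediate activation, applied under an N:M sparsity constraint. Below is a minimal, hedged sketch of how such a score and mask could be computed for an MLP down-projection matrix; the function names, the exact weighting, and the group-wise masking are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def dass_style_scores(W_down: torch.Tensor, act_norms: torch.Tensor) -> torch.Tensor:
    """Hypothetical DaSS-style importance score for a down-projection matrix.

    W_down:    (d_model, d_intermediate) MLP down-projection weights.
    act_norms: (d_intermediate,) norms of the MLP intermediate activations,
               e.g. collected over a small calibration set.
    The score multiplies each weight's magnitude by the activation norm of the
    intermediate channel it connects to (an assumed form of the joint metric).
    """
    return W_down.abs() * act_norms.unsqueeze(0)

def apply_n_m_sparsity(W: torch.Tensor, scores: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the n highest-scoring weights in every group of m consecutive
    weights along the input dimension (hardware-friendly N:M sparsity).
    Assumes the number of columns is divisible by m."""
    rows, cols = W.shape
    grouped = scores.reshape(rows, cols // m, m)
    # Indices of the (m - n) lowest scores within each group of m weights.
    prune_idx = grouped.topk(m - n, dim=-1, largest=False).indices
    mask = torch.ones_like(grouped, dtype=torch.bool)
    mask.scatter_(-1, prune_idx, False)
    return W * mask.reshape(rows, cols).to(W.dtype)
```

For example, with n=2 and m=4 this zeroes the two lowest-scoring weights in every group of four, matching the 2:4 pattern supported by sparse tensor core hardware.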
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: 1. Indicated the context length for perplexity evaluation.
2. Added a limitation section about the current software support for 2:4 sparse matrix transposition, and more flexible N:M sparsity patterns.
3. Added ablation study results using Mistral-7B.
4. Updated the caption in Table 6.
5. Added LLaMA3.1-8B results.
Assigned Action Editor: ~Marwa_El_Halabi1
Submission Number: 3475