Prompt-prompted Adaptive Structured Pruning for Efficient LLM Generation

Published: 21 Jun 2024, Last Modified: 26 Jul 2024 · ES-FoMo-II 2024 Oral · CC BY 4.0
Keywords: structured pruning, sparsity, large language model, transformer, adaptive pruning, mixture of experts, efficiency, generation, feedforward
TL;DR: We propose a novel, inexpensive adaptive structured pruning method that exploits consistent activations in feedforward blocks, a phenomenon we call flocking.
Abstract: Large language models (LLMs) have remarkable utility, but this comes at considerable computational cost at deployment. Fortunately, methods such as pruning and mixture of experts exploit sparsity in transformer feedforward (FF) blocks to gain speed and reduce memory, yet these techniques can be costly and inflexible in practice, as they often require training or are restricted to specific architectures. To address this, we introduce GRIFFIN, a novel training-free method that selects unique FF experts at the sequence level for efficient generation across a plethora of LLMs with different non-ReLU activation functions. This is possible due to a critical observation that many trained LLMs naturally produce highly structured FF activation patterns within a sequence, which we call flocking. GRIFFIN maintains the original model's performance with little to no degradation on a variety of tasks, all while improving latency (e.g., 1.29$\times$ and 1.25$\times$ speed-ups for Gemma 7B and Llama 2 13B, respectively, on an NVIDIA L40). Code can be found at \url{https://github.com/hdong920/GRIFFIN}.
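To make the abstract's idea concrete, below is a minimal PyTorch sketch of sequence-level expert selection as described: score FF neurons by how consistently they activate over the prompt, keep the top ones, and slice the FF weights accordingly. The function names (`select_ff_experts`, `prune_ff_block`), the specific aggregation statistic, and the gated-FF weight layout are illustrative assumptions, not the repository's actual API; see the linked code for the real implementation.

```python
import torch

def select_ff_experts(z_prompt: torch.Tensor, k: int) -> torch.Tensor:
    """Pick the k most active FF neurons ("experts") for one prompt.

    z_prompt: (seq_len, d_ff) post-activation values from the prompt's
    forward pass. Per the flocking observation, neurons that are highly
    active on the prompt tend to stay active during generation.
    """
    # Normalize each token's activation vector so that a few
    # high-magnitude tokens do not dominate the score (assumed
    # aggregation; the paper aggregates over the sequence).
    token_norms = torch.linalg.norm(z_prompt, dim=-1, keepdim=True)
    scores = torch.linalg.norm(z_prompt / (token_norms + 1e-8), dim=0)  # (d_ff,)
    return torch.topk(scores, k).indices

def prune_ff_block(w_in, w_gate, w_out, idx):
    """Slice a gated FF block (e.g., SwiGLU) down to the selected experts.

    Hypothetical layout: w_in and w_gate are (d_ff, d_model), w_out is
    (d_model, d_ff). Real code would slice nn.Linear modules in place.
    """
    return w_in[idx], w_gate[idx], w_out[:, idx]

if __name__ == "__main__":
    z = torch.relu(torch.randn(128, 4096))  # stand-in prompt activations
    idx = select_ff_experts(z, k=2048)      # keep 50% of the FF width
    print(idx.shape)                        # torch.Size([2048])
```

Because selection happens once per sequence from the prompt's own activations, no training, calibration data, or architecture changes are needed; generation then runs through the smaller sliced FF block.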
Submission Number: 64