Mixture-of-Channels: Exploiting Sparse FFNs for Efficient LLMs Pre-Training and Inference

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: LLM, pre-training, inference, memory-efficient, sparsity
TL;DR: We propose a simple yet efficient method that selectively activates the most relevant FFN channels for each input token, substantially reducing the activation memory footprint during pre-training and accelerating inference.
Abstract: Large language models (LLMs) have demonstrated remarkable success across diverse artificial intelligence tasks, driven by scaling laws that correlate model size and training data with performance improvements. However, this scaling paradigm incurs substantial memory overhead, creating significant challenges for both training and deployment. While existing research has primarily addressed parameter and optimizer state memory reduction, activation memory—particularly from feed-forward networks (FFNs)—has become the critical bottleneck, especially when FlashAttention is employed. In this work, we conduct a detailed memory profiling of LLMs and identify FFN activations as the predominant source of activation memory overhead. Motivated by this, we introduce Mixture-of-Channels (MoC), a novel FFN architecture that selectively activates only the top-$K$ most relevant channels per token, as determined by SwiGLU’s native gating mechanism. MoC substantially reduces activation memory during pre-training and improves inference efficiency by reducing memory access through partial weight loading into GPU SRAM. Extensive experiments validate that MoC delivers significant memory savings and throughput gains while maintaining competitive model performance.
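To make the abstract's mechanism concrete, here is a minimal NumPy sketch of a top-$K$ gated SwiGLU FFN in the spirit of MoC. This is an illustrative dense-mask reconstruction, not the paper's implementation: the function name `moc_ffn`, the use of gate magnitude as the relevance score, and all weight shapes are assumptions; a real kernel would compute only the selected channels (and load only the corresponding weight rows) rather than masking a dense intermediate.

```python
import numpy as np

def silu(z):
    # SiLU / swish activation used by SwiGLU
    return z / (1.0 + np.exp(-z))

def moc_ffn(x, W_gate, W_up, W_down, k):
    """Illustrative top-K gated SwiGLU FFN (dense-mask sketch, not the paper's kernel).

    x: (tokens, d_model); W_gate, W_up: (d_model, d_ff); W_down: (d_ff, d_model).
    Per token, only the k channels with the largest gate magnitude stay active,
    so the stored intermediate activation is sparse.
    """
    gate = x @ W_gate  # SwiGLU gate pre-activation, shape (tokens, d_ff)
    # Indices of the k largest-magnitude gate values per token
    idx = np.argpartition(-np.abs(gate), k - 1, axis=-1)[:, :k]
    mask = np.zeros_like(gate)
    np.put_along_axis(mask, idx, 1.0, axis=-1)
    # Zero out non-selected channels; an optimized version would skip them entirely
    h = silu(gate) * (x @ W_up) * mask
    return h @ W_down
```

With `k` equal to the full FFN width `d_ff`, the sketch reduces to a standard dense SwiGLU FFN; smaller `k` trades a sparser (cheaper-to-store) intermediate activation for an approximation of the dense output.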
Primary Area: foundation or frontier models, including LLMs
Submission Number: 12395