TMA-Adaptive FP8 Grouped GEMM: Eliminating Padding Requirements in Low-Precision Training and Inference on Hopper
Keywords: FP8 Grouped GEMM, Low-Precision Computation, Tensor Memory Accelerator, Padding-Free GEMM, High-Performance Computing, MoE
Abstract: Current FP8 grouped GEMM implementations require padding each group to a fixed alignment (e.g., 128 rows), incurring memory and computational overhead. We propose \textit{TMA-Adaptive FP8 Grouped GEMM}, which eliminates padding by dynamically adapting to variable group dimensions via (1) a TMA descriptor pool of $\log_2(\mathrm{block}_M)$ preconfigured descriptors that covers all residual-row cases through dynamic runtime selection and dual-phase load-store operations with minimal overhead, and (2) TMA-alignment-aware memory management that satisfies the 16-byte global-memory and 128-byte shared-memory alignment requirements. Experiments demonstrate 1.7\% to 20.4\% speedups and up to 23.8\% memory reduction compared to a padding step followed by a state-of-the-art FP8 grouped GEMM, while maintaining full numerical equivalence for valid data. The source code is publicly available at an anonymous repository: \url{https://github.com/sukoncon/TMA-Adaptive-FP8-Grouped-GEMM}.
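To illustrate the descriptor-pool idea from the abstract, the following minimal host-side C++ sketch shows how a residual row count (rows modulo $\mathrm{block}_M$) can be covered exactly by a binary decomposition into power-of-two tile heights, one preconfigured descriptor per height. The names (\texttt{DescriptorPool}, \texttt{select\_tiles}) and the decomposition strategy are illustrative assumptions; the actual repository configures CUDA TMA descriptors (\texttt{CUtensorMap}) and performs the selection on the device, which this simplification does not show.

\begin{verbatim}
// Host-side sketch (assumption): one descriptor per power-of-two tile height,
// i.e. log2(block_M) entries for block_M = 128. A residual row count is then
// covered exactly by the tiles matching its set bits -- no padding rows.
#include <cstdio>
#include <vector>

constexpr int kBlockM = 128;  // main GEMM tile height

struct DescriptorPool {
    // entry i stands in for a TMA descriptor preconfigured for tile height
    // kBlockM >> (i + 1): 64, 32, 16, 8, 4, 2, 1
    std::vector<int> tile_heights;
    DescriptorPool() {
        for (int h = kBlockM >> 1; h >= 1; h >>= 1) tile_heights.push_back(h);
    }
};

// Return the pool indices whose tile heights sum to residual_rows exactly.
std::vector<int> select_tiles(const DescriptorPool& pool, int residual_rows) {
    std::vector<int> selected;
    for (std::size_t i = 0; i < pool.tile_heights.size(); ++i) {
        if (residual_rows & pool.tile_heights[i]) selected.push_back(static_cast<int>(i));
    }
    return selected;
}

int main() {
    DescriptorPool pool;
    int group_rows = 300;                 // example group size not divisible by 128
    int residual = group_rows % kBlockM;  // 300 % 128 = 44 = 32 + 8 + 4
    for (int idx : select_tiles(pool, residual))
        std::printf("use descriptor for tile height %d\n", pool.tile_heights[idx]);
    return 0;
}
\end{verbatim}

For a group of 300 rows, the residual 44 is served by the 32-, 8-, and 4-row descriptors, which is one way to read the abstract's claim that $\log_2(\mathrm{block}_M)$ descriptors suffice for every residual case.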
Submission Number: 24