Fast Vision Mamba: Pooling Spatial Dimensions for Accelerated Processing

Published: 01 Feb 2025, Last Modified: 13 Feb 2025 · OpenReview Archive Direct Upload · CC BY 4.0
Abstract: State Space Models (SSMs) with selective scan (Mamba) have been adapted into efficient vision models. Unlike Vision Transformers, Mamba achieves linear complexity for token interactions through a recurrent hidden-state process. This sequential processing is accelerated by a parallel scan algorithm, which reduces the computational time of the recurrence from L sequential steps to log(L) parallel steps with respect to the number of input tokens (L). In this work, we propose Fast Vision Mamba (FastVim), which further reduces the computational time of the SSM block by reducing the number of recurrent steps in Vision Mamba models while retaining model performance. By alternately pooling tokens along the image dimensions across Mamba blocks, we obtain a 2× reduction in the number of parallel steps in the SSM block. Our model offers up to 72.5% speedup in inference compared to baseline Vision Mamba models on high-resolution (2048×2048) images. Our experiments demonstrate state-of-the-art performance with dramatically improved throughput across a range of tasks, including image classification, cell perturbation prediction, segmentation, and object detection. Code is made available at github.com/insitro/FastVim
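As a rough illustration of the pooling idea described in the abstract (not the authors' implementation), the sketch below shows how alternately mean-pooling over one spatial axis per block shortens the token sequence fed to the selective scan: collapsing one axis turns an H×W grid into W (or H) tokens, roughly halving the log-depth of the parallel scan. The function name, shapes, and use of PyTorch are hypothetical choices for this example.

```python
import torch

def alternate_axis_pool(x: torch.Tensor, block_idx: int) -> torch.Tensor:
    """Hypothetical sketch of alternating spatial pooling across blocks.

    x: token grid of shape (B, H, W, C).
    Even-indexed blocks average over the H axis, odd-indexed blocks
    over the W axis, so the sequence handed to the SSM scan has
    length W (or H) instead of H*W.
    """
    if block_idx % 2 == 0:
        return x.mean(dim=1)  # (B, W, C): collapse rows
    else:
        return x.mean(dim=2)  # (B, H, C): collapse columns

# Toy usage: a 14x14 token grid with 192 channels
tokens = torch.randn(1, 14, 14, 192)
seq_even = alternate_axis_pool(tokens, block_idx=0)  # (1, 14, 192)
seq_odd = alternate_axis_pool(tokens, block_idx=1)   # (1, 14, 192)
```

In this toy setting the scan length drops from 196 tokens to 14 per block; the actual FastVim block presumably combines such pooling with the usual Mamba machinery and restores the spatial grid for subsequent layers.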