Sparse Mixture-of-Experts for Multi-Channel Imaging: Are All Channel Interactions Required?

Published: 24 Sept 2025, Last Modified: 15 Oct 2025. NeurIPS 2025 AI4Science Poster. License: CC BY 4.0
Track: Track 1: Original Research/Position/Education/Attention Track
Keywords: Computer Vision, Sparse Mixture of Experts, Multi-Channel Imaging
Abstract: Vision Transformers (ViTs) have become the backbone of vision foundation models, yet their optimization for multi-channel domains—such as cell painting or satellite imagery—remains underexplored. A key challenge in these domains is capturing interactions between channels, as each channel carries different information. While existing works have shown efficacy by treating each channel independently during tokenization, this approach naturally introduces a major computational bottleneck in the attention block: channel-wise comparisons lead to quadratic growth in attention cost, resulting in excessive FLOPs and high training cost. In this work, we shift focus from efficacy to the overlooked \textit{efficiency challenge} in cross-channel attention and ask: ``Is it necessary to model all channel interactions?'' Inspired by the philosophy of Sparse Mixture-of-Experts (MoE), we propose MoE-ViT, a Mixture-of-Experts architecture for multi-channel images in ViTs, which treats each channel as an expert and employs a lightweight router to select only the most relevant experts per patch for attention. Proof-of-concept experiments on real-world datasets—JUMP-CP and So2Sat—demonstrate that MoE-ViT achieves substantial efficiency gains without sacrificing performance, and in some cases even enhances it, making it a practical and attractive backbone for multi-channel imaging.
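A minimal sketch of the routing idea described in the abstract, assuming a PyTorch-style setup in which each patch yields one token per channel and a lightweight gate keeps only the top-k channel "experts" before attention. The class name ChannelRouter, the linear scoring head, and the top_k parameter are illustrative assumptions, not the authors' implementation.

# Hedged sketch: per-patch channel routing for multi-channel ViTs.
# All names and the exact gating scheme are hypothetical.
import torch
import torch.nn as nn

class ChannelRouter(nn.Module):
    """Scores per-channel tokens and keeps only the top-k channels per patch."""

    def __init__(self, dim: int, top_k: int = 4):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, 1)  # lightweight scoring head

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, patches, channels, dim) — one token per (patch, channel)
        scores = self.gate(tokens).squeeze(-1)            # (B, P, C) relevance scores
        topk = scores.topk(self.top_k, dim=-1)            # most relevant channels per patch
        idx = topk.indices.unsqueeze(-1).expand(-1, -1, -1, tokens.size(-1))
        selected = tokens.gather(2, idx)                  # (B, P, top_k, dim)
        weights = topk.values.softmax(dim=-1).unsqueeze(-1)
        return selected * weights                         # weighted subset passed to attention

# Usage: attention inside each patch then runs over top_k tokens instead of all C channels.
B, P, C, D = 2, 16, 8, 64
router = ChannelRouter(dim=D, top_k=4)
routed = router(torch.randn(B, P, C, D))
print(routed.shape)  # torch.Size([2, 16, 4, 64])

With top_k fixed and much smaller than the channel count, the per-patch attention no longer compares every channel pair, which is the quadratic cost the abstract identifies.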
Submission Number: 177