VidEoMT: Your ViT is Secretly Also a Video Segmentation Model

Published: 23 Feb 2026 | Last Modified: 24 Feb 2026 | CVPR 2026 | arXiv.org perpetual, non-exclusive license
Abstract: Existing online video segmentation models typically combine a per-frame segmenter with complex specialized tracking modules. While effective, these modules introduce significant architectural complexity and computational overhead. Recent studies suggest that plain Vision Transformer (ViT) encoders, when scaled with sufficient capacity and large-scale pre-training, can perform accurate image segmentation without requiring specialized modules. Motivated by this observation, we propose the _Video Encoder-only Mask Transformer (VidEoMT)_, a simple encoder-only video segmentation model that eliminates the need for dedicated tracking modules. To enable temporal modeling in an encoder-only ViT, VidEoMT introduces a lightweight query propagation mechanism that carries information across frames by reusing queries from the previous frame. To balance temporal consistency with adaptability to new content, it employs a query fusion strategy that combines the propagated queries with a set of temporally-agnostic learned queries. As a result, VidEoMT attains the benefits of a tracker without added complexity, achieving competitive accuracy while being 5$\times$--10$\times$ faster than existing tracker-based models, running at up to 160 FPS with a ViT-L backbone. Code: \href{https://www.tue-mps.org/videomt/}{https://www.tue-mps.org/videomt/}.
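To make the query propagation and fusion idea concrete, below is a minimal sketch, not the authors' implementation. The gated fusion rule, all dimensions, the `QueryFusion` module name, and the stand-in transformer encoder are assumptions made purely for illustration; the abstract itself does not specify how the propagated and learned queries are combined.

```python
import torch
import torch.nn as nn


class QueryFusion(nn.Module):
    """Fuses queries propagated from the previous frame with a set of
    temporally-agnostic learned queries (fusion rule is hypothetical)."""

    def __init__(self, num_queries: int = 100, dim: int = 256):
        super().__init__()
        # Learned queries, shared across all frames (no temporal state).
        self.learned = nn.Parameter(torch.randn(num_queries, dim))
        # Assumed fusion rule: a per-channel sigmoid gate over both sets.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, propagated: torch.Tensor | None) -> torch.Tensor:
        if propagated is None:
            # First frame: no temporal context yet, use learned queries only.
            return self.learned
        g = self.gate(torch.cat([propagated, self.learned], dim=-1))
        # Gate balances temporal continuity against adaptability to new content.
        return g * propagated + (1.0 - g) * self.learned


# Stand-in for the plain ViT encoder (the paper uses pre-trained ViT blocks);
# encoder-only design: queries are simply appended to the patch tokens.
dim, num_queries, num_patches = 256, 100, 196
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=2,
)
fusion = QueryFusion(num_queries, dim)

prev_queries = None
for frame_tokens in torch.randn(4, num_patches, dim):   # 4 dummy frames
    queries = fusion(prev_queries)                      # (num_queries, dim)
    tokens = torch.cat([queries, frame_tokens], dim=0)  # queries join tokens
    out = encoder(tokens.unsqueeze(0)).squeeze(0)       # joint self-attention
    # Online query propagation: the updated queries carry instance
    # information forward to the next frame.
    prev_queries = out[:num_queries].detach()
```

The key design point this sketch illustrates is that no dedicated tracking module exists: temporal association emerges from reusing the previous frame's queries inside an otherwise plain encoder.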