Keywords: 3d object detection, vision transformers, sparse attention, token sparsity, query sparsity, transformer acceleration, computer vision, robotics, autonomous driving
TL;DR: SToRe3D achieves 3x faster multi-view 3D object detection with minimal accuracy loss by jointly sparsifying image tokens and 3D queries in ViTs.
Abstract: Vision Transformers (ViTs) enable strong multi-view 3D detection but are limited by high inference latency from dense token and query processing across multiple views and large 3D regions. Prior sparsity methods, designed mainly for 2D vision, prune or merge image tokens but do not extend to full-model sparsity or address 3D object queries. We introduce SToRe3D, a relevance-aligned sparsity framework that jointly selects 2D image tokens and 3D object queries while storing filtered features for selective reuse. Mutual 2D-3D relevance heads allocate compute to driving-critical content and preserve the remaining embeddings. Evaluated on nuScenes and our new nuScenes-Relevance benchmark, SToRe3D delivers up to 3x faster inference with marginal accuracy loss, establishing real-time 3D detection with large-scale ViTs while maintaining accuracy on planning-critical agents.
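To make the joint sparsification idea concrete, below is a minimal sketch (not the authors' code) of relevance-scored top-k selection applied to image tokens and 3D object queries, with the pruned features available for later reuse. The module names, keep ratios, and tensor shapes are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of relevance-based token/query sparsification.
# All names, shapes, and keep ratios are hypothetical.
import torch
import torch.nn as nn


class RelevanceSparsifier(nn.Module):
    def __init__(self, dim: int, keep_ratio: float = 0.33):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.score_head = nn.Linear(dim, 1)  # hypothetical relevance head

    def forward(self, feats: torch.Tensor):
        # feats: (batch, num_items, dim) -- image tokens or 3D object queries
        scores = self.score_head(feats).squeeze(-1)        # (B, N) relevance scores
        k = max(1, int(feats.shape[1] * self.keep_ratio))
        keep_idx = scores.topk(k, dim=1).indices           # indices of kept items
        kept = torch.gather(
            feats, 1, keep_idx.unsqueeze(-1).expand(-1, -1, feats.shape[-1])
        )
        # Pruned features could be cached here for selective reuse downstream.
        return kept, keep_idx, scores


# Usage: sparsify 6-view image tokens and 900 object queries independently.
tokens = torch.randn(2, 6 * 196, 256)
queries = torch.randn(2, 900, 256)
token_sparsifier = RelevanceSparsifier(dim=256, keep_ratio=0.33)
query_sparsifier = RelevanceSparsifier(dim=256, keep_ratio=0.5)
kept_tokens, _, _ = token_sparsifier(tokens)
kept_queries, _, _ = query_sparsifier(queries)
print(kept_tokens.shape, kept_queries.shape)
```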
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 23523