SPoT: Subpixel Placement of Tokens in Vision Transformers

TMLR Paper 6378 Authors

04 Nov 2025 (modified: 08 Nov 2025) · Under review for TMLR · CC BY 4.0
Abstract: Vision Transformers naturally accommodate sparsity, yet standard tokenization methods confine features to discrete patch grids. This constraint prevents models from fully exploiting sparse regimes, forcing awkward compromises. We propose Subpixel Placement of Tokens (SPoT), a novel tokenization strategy that positions tokens continuously within images, effectively sidestepping grid-based limitations. With our proposed oracle-guided search, we uncover substantial performance gains achievable with ideal subpixel token positioning, drastically reducing the number of tokens necessary for accurate predictions during inference. SPoT provides a new direction for flexible, efficient, and interpretable ViT architectures, redefining sparsity as a strategic advantage rather than an imposed limitation.
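The abstract's central mechanism, placing tokens at continuous rather than grid-aligned positions, can be illustrated with a short sketch. The following is a minimal illustration only, assuming a PyTorch setting where patches are sampled at fractional coordinates via bilinear interpolation (`torch.nn.functional.grid_sample`); the function name `extract_subpixel_tokens` and all patch-embedding details are hypothetical and not taken from the paper.

```python
# Minimal sketch of subpixel token extraction (assumptions noted above).
import torch
import torch.nn.functional as F

def extract_subpixel_tokens(image: torch.Tensor,
                            centers: torch.Tensor,
                            patch_size: int = 16) -> torch.Tensor:
    """Sample patch_size x patch_size patches centered at continuous
    (x, y) pixel coordinates via bilinear interpolation.

    image:   (C, H, W) input image.
    centers: (N, 2) token centers in pixel coordinates, possibly fractional.
    Returns: (N, C * patch_size**2) flattened patches, ready for a
             linear patch-embedding layer.
    """
    C, H, W = image.shape
    N = centers.shape[0]

    # Pixel offsets of a patch grid around each center.
    r = torch.arange(patch_size, dtype=torch.float32) - (patch_size - 1) / 2
    dy, dx = torch.meshgrid(r, r, indexing="ij")           # (P, P) each

    # Absolute sampling locations for every token: (N, P, P).
    xs = centers[:, 0].view(N, 1, 1) + dx
    ys = centers[:, 1].view(N, 1, 1) + dy

    # Normalize to [-1, 1], as grid_sample expects.
    grid = torch.stack(
        (2.0 * xs / (W - 1) - 1.0, 2.0 * ys / (H - 1) - 1.0), dim=-1
    )                                                       # (N, P, P, 2)

    # Bilinear sampling; the N tokens act as the batch dimension.
    patches = F.grid_sample(
        image.unsqueeze(0).expand(N, -1, -1, -1),           # (N, C, H, W)
        grid, mode="bilinear", align_corners=True,
    )                                                       # (N, C, P, P)
    return patches.flatten(1)                               # (N, C*P*P)

# Example: 49 tokens at fractional (subpixel) positions in a 224x224 image.
img = torch.randn(3, 224, 224)
pos = torch.rand(49, 2) * 223
tokens = extract_subpixel_tokens(img, pos)
print(tokens.shape)  # torch.Size([49, 768])
```

Because the sampled coordinates are continuous, token positions become free parameters that can be searched or optimized, which is what makes an oracle-guided search over placements, as the abstract describes, possible at all.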
Submission Type: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission URL: 6169
Changes Since Last Submission: Updated link to anonymous GitHub repo. No other changes.
Assigned Action Editor: ~Hankook_Lee1
Submission Number: 6378