Keywords: Masked Image Modeling, Self-supervised Learning, Vision Transformers, Attentive Probing, Attention, Pre-training, Evaluation Protocols
TL;DR: Linear probing underperforms, and fine-tuning is increasingly impractical. We introduce Efficient Probing (EP) — a lightweight, high-performance attentive probing method that consistently achieves the best accuracy–efficiency trade-off.
Abstract: As fine-tuning (FT) becomes increasingly impractical at scale, probing is emerging as the preferred evaluation protocol for self-supervised learning (SSL). Yet, standard linear probing (LP) fails to adequately reflect the potential of models trained with Masked Image Modeling (MIM), due to the distributed nature of patch tokens. This motivates attentive probing, an alternative that uses attention to selectively aggregate patch-level features. Despite its growing adoption, attentive probing remains underexplored, with existing methods suffering from excessive parameterization and poor computational efficiency.
In this work, we revisit attentive probing through the lens of the accuracy–efficiency trade-off. We conduct a systematic study of existing methods, analyzing their mechanisms and benchmarking their performance. We introduce efficient probing (EP), a multi-query cross-attention mechanism that eliminates redundant projections, reduces the number of trainable parameters, and achieves up to a 10$\times$ speed-up over conventional multi-head attention. Despite its simplicity, EP outperforms LP and prior attentive probing approaches across seven benchmarks, generalizes well beyond MIM to diverse pretraining paradigms, produces interpretable attention maps, and achieves strong gains in low-shot and layer-wise settings. Code available at https://github.com/billpsomas/efficient-probing.
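To make the idea concrete, the following is a minimal PyTorch sketch of a multi-query cross-attention probe pooling frozen patch tokens, in the spirit of what the abstract describes. The class name, query count, and the choice of which projections to drop are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of a multi-query cross-attention probe (assumed details,
# not the official EP implementation; see github.com/billpsomas/efficient-probing).
import torch
import torch.nn as nn

class MultiQueryCrossAttentionProbe(nn.Module):
    """Pools frozen patch tokens with a few learnable queries, then classifies."""

    def __init__(self, dim: int, num_classes: int, num_queries: int = 4):
        super().__init__()
        # Learnable queries attend over patch tokens directly; no key/value
        # projection layers are used here, one way to avoid redundant projections.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.classifier = nn.Linear(dim * num_queries, num_classes)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, num_patches, dim), produced by a frozen backbone.
        B, N, D = patch_tokens.shape
        q = self.queries.unsqueeze(0).expand(B, -1, -1)                      # (B, Q, D)
        attn = torch.softmax(q @ patch_tokens.transpose(1, 2) / D ** 0.5, dim=-1)
        pooled = attn @ patch_tokens                                         # (B, Q, D)
        return self.classifier(pooled.flatten(1))                            # (B, classes)

# Usage: features from a frozen ViT, e.g. 196 patch tokens of width 768.
probe = MultiQueryCrossAttentionProbe(dim=768, num_classes=1000)
logits = probe(torch.randn(8, 196, 768))
```

Only the probe's parameters are trained, so the cost per evaluation stays a small fraction of fine-tuning the backbone.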
Topic: Vision and Learning
Submission Number: 145