Representation Learning with Adaptive Superpixel Coding

Published: 14 Jun 2025, Last Modified: 16 Aug 2025
Venue: MKLM 2025
License: CC BY 4.0
Submission Type: Archive
Keywords: Vision Transformers, Adaptive Superpixels, Self-Supervised Learning, Image Representation, Deep Learning
TL;DR: We introduce Adaptive Superpixel Coding, a self-supervised Transformer that adaptively groups image tokens for better visual representations and improved downstream performance.
Abstract: Deep learning vision models are typically tailored to specific modalities and often rely on domain-specific assumptions, such as the grid structure used by nearly all existing vision models. In this work, we propose a self-supervised Transformer-based model, which we call Adaptive Superpixel Coding (ASC). The key idea is to overcome a limitation of standard Vision Transformers, which rely on fixed-size, non-adaptive patch partitioning. Instead, ASC employs adaptive superpixel layers that dynamically adjust to the underlying image content. We analyze the key properties that make the approach effective, and find that it outperforms widely used alternatives on standard downstream image benchmarks.
Submission Number: 18
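
To make the idea in the abstract concrete, below is a minimal PyTorch sketch of adaptive token grouping via soft superpixel assignment. It is only an illustration of the general mechanism described above: the module name, the soft-clustering update, and all hyperparameters (e.g. num_superpixels, iters) are assumptions for this sketch, not the paper's actual ASC implementation.

```python
# Hypothetical sketch of adaptive token grouping via soft superpixel assignment.
# Names, update rule, and hyperparameters are illustrative assumptions, not the
# paper's actual ASC architecture (which is not specified on this page).
import torch
import torch.nn as nn


class AdaptiveSuperpixelPooling(nn.Module):
    """Softly clusters per-pixel features into K content-adaptive 'superpixel' tokens."""

    def __init__(self, dim: int, num_superpixels: int = 64, iters: int = 3):
        super().__init__()
        self.num_superpixels = num_superpixels
        self.iters = iters
        # Learnable initial centroids; refined per image from its own features.
        self.init_centroids = nn.Parameter(torch.randn(num_superpixels, dim) * 0.02)
        self.to_key = nn.Linear(dim, dim)
        self.to_query = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, D) per-pixel (or fine-patch) features, N = H * W.
        B, N, D = feats.shape
        keys = self.to_key(feats)                           # (B, N, D)
        centroids = self.init_centroids.expand(B, -1, -1)   # (B, K, D)
        for _ in range(self.iters):
            queries = self.to_query(centroids)               # (B, K, D)
            # Soft assignment of each pixel to each superpixel.
            logits = torch.einsum('bkd,bnd->bkn', queries, keys) / D ** 0.5
            assign = logits.softmax(dim=1)                   # normalize over K
            # Recompute centroids as assignment-weighted means of pixel features.
            weights = assign / (assign.sum(dim=-1, keepdim=True) + 1e-6)
            centroids = torch.einsum('bkn,bnd->bkd', weights, feats)
        return centroids                                     # (B, K, D) adaptive tokens


if __name__ == "__main__":
    pixels = torch.randn(2, 32 * 32, 256)        # toy per-pixel features
    pool = AdaptiveSuperpixelPooling(dim=256, num_superpixels=64)
    tokens = pool(pixels)
    print(tokens.shape)                           # torch.Size([2, 64, 256])
```

In a full model, such K adaptive tokens would replace the fixed patch grid as input to a Transformer encoder, and the grouping module could be trained end-to-end with a self-supervised objective.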