LookWhere? Efficient Visual Recognition by Learning Where to Look and What to See from Self-Supervision
Keywords: deep learning, computer vision, adaptive computation
TL;DR: We introduce a selector-extractor framework that extracts high-res features without ever seeing full high-res images to save compute.
Abstract: Vision transformers are ever larger, more accurate, and more expensive to compute.
At high resolution, the expense is even more extreme as the number of tokens grows quadratically in the image size.
We turn to adaptive computation to cope with this cost by learning to predict where to compute.
Our LookWhere method divides the computation between a low-resolution selector and a high-resolution extractor without ever processing the full high-resolution input.
We jointly pretrain the selector and extractor without task supervision by distillation from a self-supervised teacher, in effect learning where and what to compute at the same time.
Unlike prior token reduction methods, which pay to save by pruning already-computed tokens, and prior token selection methods, which require complex and expensive per-task optimization, LookWhere economically and accurately selects and extracts transferable representations of images.
We show that LookWhere excels at sparse recognition on high-resolution inputs (Traffic Signs), maintaining accuracy while reducing FLOPs by 17x and time by 4x, as well as at standard recognition tasks that are global (ImageNet classification) and local (ADE20K segmentation), improving accuracy while reducing time by 1.36x.
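To make the selector-extractor split concrete, below is a minimal inference-time sketch of the idea described in the abstract: a low-resolution selector scores patch locations, and only the top-scoring high-resolution patches are gathered and encoded by the extractor. All module names, sizes, the hard top-k selection rule, and the 1:1 alignment between the low-res and high-res patch grids are illustrative assumptions, not the paper's implementation, and the self-supervised distillation objective used for pretraining is omitted.

```python
# Hypothetical selector-extractor sketch in the spirit of the abstract.
# Names (TinySelector, TinyExtractor, lookwhere_style_forward), sizes, and
# the top-k selection rule are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F
from torch import nn


class TinySelector(nn.Module):
    """Scores patch locations on a low-resolution view of the image."""

    def __init__(self, patch=8, dim=64):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.score = nn.Linear(dim, 1)

    def forward(self, low_res):                        # (B, 3, h, w)
        tokens = self.patchify(low_res)                # (B, dim, h/p, w/p)
        tokens = tokens.flatten(2).transpose(1, 2)     # (B, N, dim)
        return self.score(tokens).squeeze(-1)          # (B, N) saliency scores


class TinyExtractor(nn.Module):
    """Encodes only the selected high-resolution patches."""

    def __init__(self, patch=16, dim=192, depth=4, heads=3):
        super().__init__()
        self.embed = nn.Linear(3 * patch * patch, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, patches):                        # (B, K, 3*p*p)
        return self.encoder(self.embed(patches))       # (B, K, dim)


def lookwhere_style_forward(image, selector, extractor, patch=16, k=64, low=224):
    """Score locations on a downsampled view, then encode only the top-k
    high-resolution patches (toy pipeline; assumes aligned patch grids)."""
    low_res = F.interpolate(image, size=(low, low), mode="bilinear",
                            align_corners=False)
    scores = selector(low_res)                         # (B, N)
    topk = scores.topk(k, dim=1).indices               # (B, K) patch indices

    # Unfold the full-resolution image into flattened patches; only the K
    # selected patches are gathered and passed to the extractor.
    patches = F.unfold(image, kernel_size=patch, stride=patch)  # (B, 3*p*p, N)
    patches = patches.transpose(1, 2)                           # (B, N, 3*p*p)
    idx = topk.unsqueeze(-1).expand(-1, -1, patches.shape[-1])
    selected = patches.gather(1, idx)                  # (B, K, 3*p*p)
    return extractor(selected)                         # (B, K, dim)


if __name__ == "__main__":
    img = torch.randn(2, 3, 448, 448)                  # "high-resolution" input
    selector = TinySelector(patch=8)                   # 224/8 grid matches 448/16
    extractor = TinyExtractor(patch=16)
    feats = lookwhere_style_forward(img, selector, extractor)
    print(feats.shape)                                 # torch.Size([2, 64, 192])
```

Note that the full-resolution image is only unfolded into patches here, never passed through a transformer in its entirety; the quadratic token cost is paid only over the K selected patches.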
Supplementary Material: zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 24617