Explanatory Instructions: Towards Unified Vision Tasks Understanding and Zero-shot Generalization

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Task-level and instruction-level zero-shot generalization for vision tasks.
Abstract: Computer Vision (CV) has yet to fully achieve the zero-shot task generalization observed in Natural Language Processing (NLP), despite following many of NLP's milestones, such as large transformer models, extensive pre-training, and the auto-regression paradigm. In this paper, we examine the fact that CV adopts discrete and terminological task definitions (e.g., "image segmentation") and conjecture that this is a key barrier to zero-shot task generalization. Our hypothesis is that, because these terminological definitions keep models from truly understanding previously seen tasks, deep models struggle to generalize to novel ones. To verify this, we introduce Explanatory Instructions, which provide an intuitive way to define CV task objectives through detailed linguistic transformations from input images to outputs. We create a large-scale dataset comprising 12 million "image input $\to$ explanatory instruction $\to$ output" triplets and train an auto-regressive-based vision-language model (AR-based VLM) that takes both images and explanatory instructions as input. By learning to follow these instructions, the AR-based VLM achieves instruction-level zero-shot capabilities for previously seen tasks and demonstrates strong zero-shot generalization to unseen CV tasks. Code and dataset will be open-sourced.
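To make the triplet format concrete, here is a minimal, hypothetical sketch of one "image input $\to$ explanatory instruction $\to$ output" training example; the `ExplanatoryTriplet` dataclass, its field names, and the dummy images are illustrative assumptions, not the released dataset schema.

```python
# Hypothetical sketch of one "image input -> explanatory instruction ->
# output" triplet. Names and fields are assumptions for illustration;
# they are not the authors' released schema.
from dataclasses import dataclass
from PIL import Image


@dataclass
class ExplanatoryTriplet:
    input_image: Image.Image   # source image the model conditions on
    instruction: str           # detailed linguistic transformation to apply
    output_image: Image.Image  # target image after the transformation


# Dummy 64x64 images stand in for real data so the snippet runs as-is.
triplet = ExplanatoryTriplet(
    input_image=Image.new("RGB", (64, 64)),
    instruction="Highlight the river in blue and mark the rocks in red.",
    output_image=Image.new("RGB", (64, 64)),
)
```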
Lay Summary: Large Vision (Vision-Language) Models excel at specific vision tasks, such as recognizing objects, but they struggle to generalize these skills to new, unseen tasks, unlike humans, who adapt quickly. This gap exists because current models rely on rigid, predefined task definitions (e.g., "segment the image") rather than understanding the underlying objectives. To bridge this gap, we introduce Explanatory Instructions, which describe vision tasks in natural language (e.g., "highlight the river in blue and mark the rocks in red"). We built a large dataset with 12 million image-instruction-output examples and trained a model to follow these instructions. This approach lets the model generalize to new tasks without additional training, achieving zero-shot capabilities for both familiar and novel vision tasks. Our work moves toward more flexible, human-like computer vision systems, enabling models to tackle diverse tasks simply by understanding descriptive instructions, just as humans do. A sketch of what such instruction-following inference might look like follows below.
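The following is a hedged sketch of how instruction-level zero-shot inference could be wired up with an auto-regressive VLM. The `encode_image` and `generate` functions are placeholder stubs, not the paper's actual model interface; the real implementation is in the linked repository.

```python
# Hedged sketch of instruction-following inference with an AR-based VLM.
# `encode_image` and `generate` are placeholder stubs; the actual model
# interface lives in the authors' repository.
from typing import List


def encode_image(image_pixels: bytes) -> List[int]:
    """Stub: a real visual tokenizer would map pixels to discrete tokens."""
    return [0] * 256


def generate(prompt_tokens: List[int], max_new_tokens: int) -> List[int]:
    """Stub: a real AR-based VLM would sample output tokens autoregressively."""
    return [0] * max_new_tokens


# Inference: condition on image tokens plus the explanatory instruction,
# then decode output tokens (e.g., tokens of the transformed image).
image_tokens = encode_image(b"\x00" * (64 * 64 * 3))
instruction = "Highlight the river in blue and mark the rocks in red."
prompt = image_tokens + [ord(c) for c in instruction]  # toy text tokenization
output_tokens = generate(prompt, max_new_tokens=256)
```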
Link To Code: https://github.com/SEU-VIPGroup/Understanding_Vision_Tasks
Primary Area: Deep Learning
Keywords: Vision Tasks Understanding, Task-level Zero-shot Generalization
Submission Number: 8820