Finding Visual Task Vectors

Published: 24 Jun 2024 · Last Modified: 24 Jun 2024 · ICML 2024 MI Workshop Poster · CC BY 4.0
Keywords: computer vision, visual prompting, MAE-VQGAN, task vectors, activation patching, REINFORCE
TL;DR: We analyze the activation space of the visual prompting model MAE-VQGAN, identify task-related activations (task vectors), and use them to improve model performance on downstream tasks by applying the REINFORCE algorithm to find optimal patching positions.
Abstract: Visual Prompting is a technique for teaching models to perform a visual task via in-context examples, without any additional training. In this work, we analyze the activations of MAE-VQGAN, a recent Visual Prompting model (Bar et al., 2022), and find task vectors: activations that encode task-specific information. Equipped with this insight, we demonstrate that it is possible to identify these task vectors and use them to guide the network towards performing different tasks without providing any input-output examples. To find task vectors, we compute the average intermediate activations per task and use the REINFORCE (Williams, 1992) algorithm to search for the subset of activations to patch with them. The resulting task vectors guide the model towards performing a task competitively with the original model while reducing the need for input-output examples.
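The abstract describes a two-step procedure: average intermediate activations per task to obtain candidate task vectors, then use REINFORCE to search over which positions to patch them into. The sketch below illustrates that idea only; it is not the authors' code. A toy stack of linear layers stands in for MAE-VQGAN, and `task_inputs`, `query_inputs`, and the reward are hypothetical placeholders.

```python
# Hedged sketch of the two-step procedure from the abstract, under the
# assumption of a toy network in place of MAE-VQGAN.
import torch
import torch.nn as nn

torch.manual_seed(0)
D, L, N = 16, 4, 8                      # hidden dim, layers, token positions
layers = nn.ModuleList([nn.Linear(D, D) for _ in range(L)])

def forward(x, patch=None):
    """Run the toy network; optionally overwrite activations at patched positions.
    `patch` is (mask, means) with mask of shape (L, N) and means of shape (L, N, D)."""
    for l, layer in enumerate(layers):
        x = torch.tanh(layer(x))        # x: (batch, N, D)
        if patch is not None:
            mask, means = patch
            m = mask[l].view(1, N, 1)   # broadcast over batch and hidden dim
            x = x * (1 - m) + means[l].unsqueeze(0) * m
    return x

# Step 1: mean intermediate activations over prompts of one task ("task vectors").
task_inputs = torch.randn(32, N, D)     # placeholder in-context task prompts
acts, x = [], task_inputs
for layer in layers:
    x = torch.tanh(layer(x))
    acts.append(x.mean(dim=0))          # (N, D): mean activation per position
task_means = torch.stack(acts)          # (L, N, D)

# Step 2: REINFORCE over a Bernoulli mask of patching positions.
logits = torch.zeros(L, N, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
query_inputs = torch.randn(32, N, D)    # placeholder queries without examples
targets = forward(query_inputs)         # placeholder "ground truth" outputs

for step in range(200):
    dist = torch.distributions.Bernoulli(logits=logits)
    mask = dist.sample()                                     # which positions to patch
    out = forward(query_inputs, patch=(mask, task_means))
    reward = -nn.functional.mse_loss(out, targets)           # placeholder task reward
    loss = -(reward.detach() * dist.log_prob(mask).sum())    # REINFORCE estimator
    opt.zero_grad(); loss.backward(); opt.step()

best_mask = (torch.sigmoid(logits) > 0.5).float()            # final patching positions
```

In practice the reward would be the negative task loss of the patched, example-free forward pass on held-out query-target pairs, and a baseline term is commonly subtracted from the reward to reduce the variance of the REINFORCE gradient.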
Supplementary Material: pdf
Submission Number: 113