VisualScratchpad: Inference-time Visual Concepts Analysis in Vision Language Models
Keywords: mechanistic interpretability, interactive interface, vision-language models, sparse autoencoders
TL;DR: We introduce an interactive interface that supports a practical pipeline of sparse autoencoder-based visual concept analysis in vision language models (VLMs), along with case studies revealing failure modes of VLMs.
Abstract: High-performing vision-language models still produce incorrect answers, yet their failure modes are often difficult to explain. To make model internals more accessible and enable systematic debugging, we introduce VisualScratchpad, an interactive interface for visual concept analysis during inference. We apply sparse autoencoders to the vision encoder and link the resulting visual concepts to text tokens via text-to-image attention, allowing us to examine which visual concepts are both captured by the vision encoder and utilized by the language model. VisualScratchpad also provides a token-latent heatmap view that suggests a sufficient set of latents for effective concept ablation in causal analysis. Through case studies, we reveal three underexplored failure modes: limited cross-modal alignment, misleading visual concepts, and unused hidden cues.
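The pipeline described in the abstract — encoding vision-encoder features into sparse latents, weighting those latents by text-to-image attention to build a token-latent heatmap, and ablating a latent for causal analysis — can be sketched as follows. This is a minimal illustration with random toy weights, not the paper's implementation; all dimensions, the ReLU SAE form, and the `sae_encode`/`ablate` helpers are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): 16 image patches, 32-dim features,
# 64 SAE latents, 4 text tokens.
P, D, L, T = 16, 32, 64, 4

patch_feats = rng.normal(size=(P, D))   # stand-in for vision encoder outputs
W_enc = rng.normal(size=(D, L))         # SAE encoder weights (untrained toy values)
b_enc = np.zeros(L)
W_dec = rng.normal(size=(L, D))         # SAE decoder weights

def sae_encode(x):
    """ReLU sparse-autoencoder encoder: latent activations per patch."""
    return np.maximum(x @ W_enc + b_enc, 0.0)

latents = sae_encode(patch_feats)       # shape (P, L)

# Text-to-image attention (hypothetical): each text token attends over
# image patches; softmax so each row sums to 1.
attn_logits = rng.normal(size=(T, P))
attn = np.exp(attn_logits) / np.exp(attn_logits).sum(axis=1, keepdims=True)

# Token-latent heatmap: attention-weighted latent activations indicate
# how strongly each text token "uses" each visual concept.
heatmap = attn @ latents                # shape (T, L)

def ablate(latent_idx):
    """Zero one latent and decode, removing that concept's contribution."""
    z = latents.copy()
    z[:, latent_idx] = 0.0
    return z @ W_dec                    # reconstructed features, shape (P, D)

# Ablate the concept most used across all text tokens.
ablated_feats = ablate(int(heatmap.sum(axis=0).argmax()))
```

In a real setting, `W_enc`/`W_dec` would come from an SAE trained on vision-encoder activations, and `attn` would be read from the language model's cross-modal attention during inference.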
Submission Number: 262