VLM’s Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models

Published: 22 Apr 2025, Last Modified: 22 Apr 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: Vision language models (VLMs) have shown promising reasoning capabilities across various benchmarks; however, our understanding of their visual perception remains limited. In this work, we propose an eye examination process to investigate how a VLM perceives images, focusing on key aspects of visual recognition, from basic color and shape to semantic understanding. We introduce a dataset, LENS, that guides a VLM through the examination and checks its readiness. Once a model is ready, we conduct the examination, quantifying and visualizing its sensitivity to color, shape, and semantic matching. Our findings reveal that VLMs vary in their sensitivity to different colors yet are consistently insensitive to green, and that shape sensitivity and semantic recognition differ with the capacity of the underlying LLM even when the same fixed visual encoder is used. Our analyses and findings have the potential to inform the design of VLMs and the pre-processing of visual input to VLMs for improved application performance.
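The color-sensitivity probe described in the abstract can be pictured with a small sketch. This is not the authors' released code (see the repository link below); the color list, the `render_square` helper, and the `query_vlm` placeholder are illustrative assumptions to be replaced with a real VLM inference call.

```python
# Minimal sketch (assumed setup, not the paper's implementation): render a
# simple shape in several colors, ask a VLM to name the color, and report
# per-color accuracy as a crude measure of color sensitivity.
from PIL import Image, ImageDraw

COLORS = {
    "red": (220, 40, 40),
    "green": (40, 180, 60),
    "blue": (40, 80, 220),
    "yellow": (230, 210, 40),
}

def render_square(rgb, size=224):
    """Draw a filled colored square on a neutral gray background."""
    img = Image.new("RGB", (size, size), (128, 128, 128))
    draw = ImageDraw.Draw(img)
    draw.rectangle([size // 4, size // 4, 3 * size // 4, 3 * size // 4], fill=rgb)
    return img

def query_vlm(image, prompt):
    """Hypothetical placeholder: hook up any VLM (e.g. LLaVA) here."""
    raise NotImplementedError("Replace with your VLM's inference API.")

def color_sensitivity(trials=10):
    """Fraction of trials in which the VLM names the rendered color correctly."""
    accuracy = {}
    for name, rgb in COLORS.items():
        correct = 0
        for _ in range(trials):
            img = render_square(rgb)
            answer = query_vlm(img, "What color is the square? Answer with one word.")
            correct += int(name in answer.lower())
        accuracy[name] = correct / trials
    return accuracy
```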
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=G8nt8UCgMY
Changes Since Last Submission: We revised and reorganized the sections, including the abstract and experiments, to reflect the comments and results from the rebuttal.
Code: https://github.com/kaist-ami/eye-examination
Assigned Action Editor: ~Steven_Stenberg_Hansen1
Submission Number: 3384