SNAP: Testing the Effects of Capture Conditions on Fundamental Vision Tasks

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: dataset bias, camera parameters, human experiment, image classification, object detection, VQA
TL;DR: SNAP is a new dataset of images captured with densely sampled camera settings under two lighting conditions. Testing vision models on SNAP shows that they are affected by both exposure and camera settings.
Abstract: Generalization of deep-learning-based (DL) computer vision algorithms to various image perturbations is hard to establish and remains an active area of research. The majority of past analyses have focused on images that were already captured, whereas the effects of the image-formation pipeline and the capture environment are less studied. In this paper, we address this issue by analyzing the impact of capture conditions, such as camera parameters and lighting, on DL model performance on three vision tasks---image classification, object detection, and visual question answering (VQA). To this end, we assess capture bias in common vision datasets and create a new dataset, $\textbf{SNAP}$ (for $\textbf{S}$hutter speed, ISO se$\textbf{N}$sitivity, $\textbf{A}$nd a$\textbf{P}$erture), consisting of images of objects taken under controlled lighting conditions and with densely sampled camera settings. We then evaluate a large number of DL vision models and show the effects of capture conditions on each selected vision task. Lastly, we conduct a human experiment to establish a baseline for the VQA task. Our results show that computer vision datasets are significantly biased, that models trained on these data do not reach human accuracy even on well-exposed images, and that they are susceptible to both major exposure changes and minute variations of camera settings.
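Background (standard photographic relations, not stated in the submission itself): a minimal sketch of how the three sampled settings combine to determine exposure, assuming f-number $N$, shutter time $t$ in seconds, ISO sensitivity $S$, and scene luminance $L$:
$$\mathrm{EV} = \log_2\!\frac{N^2}{t}, \qquad \text{image brightness} \;\propto\; \frac{L\,t\,S}{N^2}.$$
Settings with equal $t\,S/N^2$ yield roughly equivalent exposures, so densely sampling all three parameters under fixed lighting covers both equivalent-exposure and over-/under-exposed captures.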
Primary Area: datasets and benchmarks
Submission Number: 14077