Ambiguous Images With Human Judgments for Robust Visual Event Classification

Published: 17 Sept 2022, Last Modified: 23 May 2023
Venue: NeurIPS 2022 Datasets and Benchmarks
Readers: Everyone
Keywords: Datasets, Computer Vision, Cognitive Science, Uncertainty, Perception, Model Calibration
Abstract: Contemporary vision benchmarks predominantly consider tasks on which humans can achieve near-perfect performance. However, humans are frequently presented with visual data that they cannot classify with 100% certainty, and models trained on standard vision benchmarks achieve low performance when evaluated on this data. To address this issue, we introduce a procedure for creating datasets of ambiguous images and use it to produce SQUID-E ("Squidy"), a collection of noisy images extracted from videos. All images are annotated with ground truth values and a test set is annotated with human uncertainty judgments. We use this dataset to characterize human uncertainty in vision tasks and evaluate existing visual event classification models. Experimental results suggest that existing vision models are not sufficiently equipped to provide meaningful outputs for ambiguous images and that datasets of this nature can be used to assess and improve such models through model training and direct evaluation of model calibration. These findings motivate large-scale ambiguous dataset creation and further research focusing on noisy visual data.
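The abstract mentions "direct evaluation of model calibration" on ambiguous images. As a minimal sketch of what such an evaluation can look like (this code is illustrative only and not from the paper; the data format, variable names, and choice of expected calibration error as the metric are assumptions):

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Standard binned ECE: the mean |accuracy - confidence| gap per bin,
    # weighted by the fraction of samples falling in that bin.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()    # empirical accuracy in bin
            conf = confidences[mask].mean()  # mean confidence in bin
            ece += mask.mean() * abs(acc - conf)
    return ece

# Hypothetical usage: `probs` is an (N, K) array of softmax outputs on
# ambiguous test images and `labels` holds the ground-truth event classes.
probs = np.array([[0.6, 0.4], [0.9, 0.1], [0.3, 0.7]])
labels = np.array([0, 0, 1])
ece = expected_calibration_error(probs.max(axis=1),
                                 probs.argmax(axis=1) == labels)
print(f"ECE: {ece:.3f}")

On ambiguous data, a well-calibrated model should report low confidence on images that human annotators also judge to be uncertain, so a large ECE gap on such a test set is evidence of the miscalibration the abstract describes.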
Author Statement: Yes
TL;DR: We introduce a procedure for creating datasets of ambiguous images and use it to produce SQUID-E ("Squidy"), a collection of noisy images extracted from videos with corresponding human uncertainty judgments.
Supplementary Material: pdf
Dataset Url: https://github.com/katesanders9/squid-e
License: CC-BY 4.0 https://creativecommons.org/licenses/by/4.0/
Contribution Process Agreement: Yes
In Person Attendance: Yes
URL: https://katesanders9.github.io/ambiguous-images/