Perceptual Score: What Data Modalities Does Your Model Perceive?

May 21, 2021 (edited Oct 28, 2021) · NeurIPS 2021 Poster
  • Keywords: VQA, Visual Question Answering, VQA-CP, Perceptiveness, Multimodal, Evaluation, Visual dialog, SocialIQ
  • TL;DR: We introduce the perceptual score, a metric that assesses the degree to which a model relies on the different subsets of the input features.
  • Abstract: Machine learning advances in the last decade have relied significantly on large-scale datasets that continue to grow in size. Increasingly, those datasets also contain different data modalities. However, large multi-modal datasets are hard to annotate, and annotations may contain biases that we are often unaware of. Deep-net-based classifiers, in turn, are prone to exploit those biases and to find shortcuts. To study and quantify this concern, we introduce the perceptual score, a metric that assesses the degree to which a model relies on the different subsets of the input features, i.e., modalities. Using the perceptual score, we find a surprisingly consistent trend across four popular datasets: recent, more accurate state-of-the-art multi-modal models for visual question-answering or visual dialog tend to perceive the visual data less than their predecessors. This is concerning as answers are hence increasingly inferred from textual cues only. Using the perceptual score also helps to analyze model biases by decomposing the score into data subset contributions. We hope to spur a discussion on the perceptiveness of multi-modal models and also hope to encourage the community working on multi-modal classifiers to start quantifying perceptiveness via the proposed perceptual score.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/itaigat/perceptual-score
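The abstract describes the perceptual score as a measure of how much a model relies on each input modality. A minimal sketch of the idea is below, assuming the score is computed as the normalized accuracy drop when one modality is permuted across the batch (the exact definition, including the normalization, may differ from the paper; see the linked repository for the authors' implementation — `predict_fn` and all names here are hypothetical):

```python
import numpy as np

def accuracy(predict_fn, text, image, labels):
    # Fraction of correct predictions for a batch.
    preds = predict_fn(text, image)
    return float(np.mean(preds == labels))

def perceptual_score(predict_fn, text, image, labels,
                     permute="image", n_perm=10, seed=0):
    """Illustrative perceptual score for one modality.

    Permutes the chosen modality across the batch (breaking its
    alignment with the labels) and reports the mean accuracy drop,
    normalized by the original accuracy. This is a hedged reading of
    the metric, not the authors' exact formulation.
    """
    rng = np.random.default_rng(seed)
    acc = accuracy(predict_fn, text, image, labels)
    drops = []
    for _ in range(n_perm):
        idx = rng.permutation(len(labels))
        if permute == "image":
            drops.append(acc - accuracy(predict_fn, text, image[idx], labels))
        else:
            drops.append(acc - accuracy(predict_fn, text[idx], image, labels))
    return float(np.mean(drops)) / max(acc, 1e-8)
```

Under this reading, a model that ignores the visual input entirely would get a perceptual score near zero for the image modality (permuting images changes nothing), which is exactly the failure mode the abstract warns about in recent VQA models.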