Harmonizing the object recognition strategies of deep neural networks with humans

Published: 31 Oct 2022, 18:00
Last Modified: 14 Dec 2022, 19:28
Venue: NeurIPS 2022 (Accept)
Readers: Everyone
Keywords: Cognitive science, human vision, explainable AI, models of biological vision, AI alignment, scaling laws
TL;DR: The scaling laws that are improving deep neural network performance on ImageNet are leading to worse models of human object recognition.
Abstract: The many successes of deep neural networks (DNNs) over the past decade have largely been driven by computational scale rather than insights from biological intelligence. Here, we explore whether these trends have also brought concomitant improvements in explaining the visual strategies humans rely on for object recognition. We do this by comparing two related but distinct properties of visual strategies in humans and DNNs: where they believe important visual features are in images and how they use those features to categorize objects. Across 84 different DNNs trained on ImageNet and three independent datasets measuring the where and the how of human visual strategies for object recognition on those images, we find a systematic trade-off between DNN categorization accuracy and alignment with human visual strategies. State-of-the-art DNNs are progressively becoming less aligned with humans as their accuracy improves. We rectify this growing issue with our neural harmonizer: a general-purpose training routine that both aligns DNN and human visual strategies and improves categorization accuracy. Our work represents the first demonstration that the scaling laws guiding the design of DNNs today have also produced worse models of human vision. We release our code and data at https://serre-lab.github.io/Harmonization to help the field build more human-like DNNs.
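To make the idea of harmonization concrete, the sketch below shows one way a loss of this kind could look in plain PyTorch. It is an illustrative approximation, not the authors' released implementation (see the repository linked above): the gradient-based saliency, the single-scale MSE alignment, and the `lambda_align` weight are simplifying assumptions, whereas the paper's harmonizer differs in details such as comparing maps across multiple spatial scales.

```python
# Illustrative sketch of a harmonization-style objective (NOT the released
# implementation): standard cross-entropy plus a term that pulls the model's
# gradient-based saliency maps toward human feature-importance maps
# (e.g., ClickMe-style annotations). `lambda_align` is a hypothetical weight.
import torch
import torch.nn.functional as F


def harmonization_loss(model, images, labels, human_maps, lambda_align=1.0):
    """Cross-entropy + saliency/human-map alignment.

    images:     (B, C, H, W) input batch
    labels:     (B,) integer class labels
    human_maps: (B, H, W) human feature-importance maps for the same images
    """
    images = images.clone().requires_grad_(True)
    logits = model(images)
    ce = F.cross_entropy(logits, labels)

    # Saliency: |d logit_true / d input|, summed over channels.
    # create_graph=True lets the alignment term backpropagate into the weights.
    true_logits = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(true_logits, images, create_graph=True)
    saliency = grads.abs().sum(dim=1)  # (B, H, W)

    # Normalize both maps to unit max so the MSE compares spatial layout,
    # not overall scale.
    saliency = saliency / (saliency.amax(dim=(1, 2), keepdim=True) + 1e-8)
    human = human_maps / (human_maps.amax(dim=(1, 2), keepdim=True) + 1e-8)
    align = F.mse_loss(saliency, human)

    return ce + lambda_align * align
```

In a training loop, this loss is used like any other: `loss = harmonization_loss(model, images, labels, human_maps)`, then `loss.backward()` and an optimizer step. The second-order gradients implied by `create_graph=True` roughly double memory and compute per step, which is the price of supervising the model's attribution maps rather than only its outputs.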
Supplementary Material: pdf