Saccader: Improving Accuracy of Hard Attention Models for Vision

Gamaleldin F Elsayed, Simon Kornblith, Quoc Le

06 Sept 2019 (modified: 05 May 2023) · NeurIPS 2019 · Readers: Everyone
Abstract: Although deep convolutional neural networks achieve state-of-the-art performance across nearly all image classification tasks, they are often regarded as black boxes. Because they compute a nonlinear function of the entire input image, their decisions are difficult to interpret. One approach that offers some level of interpretability by design is hard attention, which selects only relevant portions of the image. However, training hard attention models with only image-level class label supervision is challenging, and hard attention has proved difficult to scale to complex datasets. Here, we propose a novel hard attention model, which we term Saccader, as well as a self-supervised pretraining procedure for this model that does not suffer from optimization challenges. Our approach achieves high accuracy on ImageNet while providing more interpretable predictions.
Code Link: https://github.com/google-research/google-research/tree/master/saccader
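The abstract describes hard attention as selecting only relevant portions of the image, so the classifier's decision can be traced to the attended regions. The snippet below is a minimal, illustrative sketch of that general idea, not the Saccader architecture from the paper or its repository; the function names, the top-k location heuristic, the glimpse size, and the stand-in scoring and classification callables are all hypothetical placeholders for learned components.

```python
# Minimal sketch of hard attention: commit to a few image locations (glimpses)
# and classify using only those crops, never the full image.
# NOTE: this is an illustration of the general idea, not the Saccader model;
# all names, sizes, and heuristics below are hypothetical.
import numpy as np


def extract_glimpse(image: np.ndarray, center_yx: tuple, size: int) -> np.ndarray:
    """Crop a size x size patch centered at center_yx, clamped to image bounds."""
    h, w = image.shape[:2]
    y = int(np.clip(center_yx[0] - size // 2, 0, h - size))
    x = int(np.clip(center_yx[1] - size // 2, 0, w - size))
    return image[y:y + size, x:x + size]


def hard_attention_predict(image: np.ndarray, score_fn, classify_fn,
                           num_glimpses: int = 3, glimpse_size: int = 64):
    """Pick the top-scoring locations, then classify from those crops only.

    score_fn(image) -> 2D saliency map; classify_fn(list of crops) -> logits.
    Both are stand-ins for learned networks in a real hard attention model.
    """
    scores = score_fn(image)
    top_flat = np.argsort(scores, axis=None)[::-1][:num_glimpses]  # hard top-k choice
    centers = [np.unravel_index(i, scores.shape) for i in top_flat]
    glimpses = [extract_glimpse(image, c, glimpse_size) for c in centers]
    # Returning the attended locations is what makes the prediction inspectable.
    return classify_fn(glimpses), centers


# Toy usage with random stand-ins for the learned components.
rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))
logits, centers = hard_attention_predict(
    image,
    score_fn=lambda im: im.mean(axis=-1),        # placeholder saliency map
    classify_fn=lambda crops: rng.random(1000),  # placeholder 1000-way classifier
)
print("attended locations:", centers)
```

The point of the sketch is the interpretability trade-off mentioned in the abstract: because the classifier only ever sees the returned glimpses, the attended locations fully account for the evidence behind the prediction, whereas a standard convolutional network computes a nonlinear function of the entire image.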