Attention Normalization Impacts Cardinality Generalization in Slot Attention

TMLR Paper2796 Authors

04 Jun 2024 (modified: 17 Jul 2024) · Under review for TMLR · CC BY-SA 4.0
Abstract: Object-centric scene decompositions are important representations for downstream tasks in fields such as computer vision and robotics. The recently proposed Slot Attention module, already leveraged by several derivative works for image segmentation and object tracking in videos, is a deep learning component that performs unsupervised object-centric scene decomposition on input images. It is based on an attention architecture in which latent slot vectors, which hold compressed information on objects, attend to localized perceptual features from the input image. In this paper, we demonstrate that design decisions on normalizing the aggregated values in the attention architecture have considerable impact on the ability of Slot Attention to generalize to slot and object counts higher than those seen during training. We propose and investigate alternatives to the original normalization scheme that increase the generalization capabilities of Slot Attention to varying slot and object counts, resulting in performance gains on the task of unsupervised image segmentation. The newly proposed normalizations are minimal, easy-to-implement modifications of the usual Slot Attention module, changing the value aggregation mechanism from a weighted mean operation to a scaled weighted sum operation.
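To make the contrast concrete, the sketch below illustrates the two aggregation schemes in PyTorch. The weighted-mean branch follows the standard Slot Attention update; the scaled-weighted-sum branch is a hedged illustration of the proposed family, where the tensor shapes and the `scale` constant are placeholder assumptions for exposition, not the paper's exact formulation.

```python
import torch

# Hypothetical shapes for illustration: batch B, slots S, inputs N, feature dim D.
B, S, N, D = 2, 4, 16, 32
attn = torch.softmax(torch.randn(B, S, N), dim=1)  # softmax over the slot axis
v = torch.randn(B, N, D)                           # value vectors per input location

# Original Slot Attention: renormalize attention over the input axis, so each
# slot receives a weighted *mean* of the value vectors.
weights = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)
updates_mean = torch.einsum('bsn,bnd->bsd', weights, v)

# Proposed alternative (sketch): drop the per-slot renormalization and rescale
# the raw weighted *sum* by a constant; `scale` here is a placeholder, not the
# paper's exact choice of normalization.
scale = 1.0 / N
updates_sum = scale * torch.einsum('bsn,bnd->bsd', attn, v)
```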
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- Revised abstract to include motivations and strike an inaccurate statement
- Clarified formulations regarding Proposition 1
- Expanded related work section
- Added citation for vMF clustering
- Added pseudocode to the appendix
- Added visualizations to the appendix
- Added experiments for property prediction to the appendix
Assigned Action Editor: ~Mathieu_Salzmann1
Submission Number: 2796