Efficient Rotation Invariance in Deep Neural Networks through Artificial Mental Rotation

TMLR Paper 3762 Authors

26 Nov 2024 (modified: 20 Feb 2025) · Rejected by TMLR · CC BY 4.0
Abstract: Humans and animals recognize objects irrespective of the beholder’s point of view, which may drastically change their appearance. Artificial pattern recognizers also strive to achieve this, e.g., through translational invariance in convolutional neural networks (CNNs). However, both CNNs and vision transformers (ViTs) perform very poorly on rotated inputs. Here we present artificial mental rotation (AMR), a novel deep learning paradigm for dealing with in-plane rotations, inspired by the neuro-psychological concept of mental rotation. Our simple AMR implementation works with all common CNN and ViT architectures. We test it on ImageNet, Stanford Cars, and Oxford Pet. With a top-1 accuracy (averaged across datasets and architectures) of 0.743, AMR outperforms the current state of the art (rotational data augmentation, average top-1 accuracy of 0.626) by 19%. We also easily transfer a trained AMR module to a downstream task to improve the performance of a pre-trained semantic segmentation model on rotated COCO from 32.7 to 55.2 IoU.
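The abstract only outlines the paradigm, so the following is a minimal, hypothetical PyTorch sketch of the AMR idea as described: a small module predicts the input's in-plane rotation, the image is counter-rotated, and an unmodified pre-trained backbone then classifies it. All names here (AngleRegressor, AMRClassifier) and design choices (sin/cos angle regression, per-image rotation) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms.functional as TF


class AngleRegressor(nn.Module):
    """Hypothetical AMR module: predicts an image's in-plane rotation angle.

    Regresses (sin, cos) of the angle instead of the raw angle to avoid the
    0/360-degree discontinuity (a common choice; the paper may differ).
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 2)  # -> (sin, cos)

    def forward(self, x):
        z = self.features(x).flatten(1)
        sin_cos = self.head(z)
        return torch.atan2(sin_cos[:, 0], sin_cos[:, 1])  # angle in radians


class AMRClassifier(nn.Module):
    """Counter-rotates the input by the predicted angle, then classifies.

    The backbone can be any pre-trained CNN or ViT and is left unchanged,
    matching the abstract's claim that AMR works with common architectures.
    """

    def __init__(self, classifier):
        super().__init__()
        self.amr = AngleRegressor()
        self.classifier = classifier

    def forward(self, x):
        angles = self.amr(x)  # one predicted rotation per image
        derotated = torch.stack([
            TF.rotate(img, -torch.rad2deg(a).item())
            for img, a in zip(x, angles)
        ])
        return self.classifier(derotated)


if __name__ == "__main__":
    backbone = models.resnet18(weights=None)  # stand-in for a pre-trained model
    model = AMRClassifier(backbone)
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 1000])
```

Under this reading, the transfer result in the abstract follows naturally: since the backbone only ever sees de-rotated inputs, the trained AngleRegressor can be reused in front of a different downstream model, such as a segmentation network, without retraining it.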
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Spell check and textual improvements
Assigned Action Editor: ~Venkatesh_Babu_Radhakrishnan2
Submission Number: 3762