Human-like Geometric Abstraction in Large Pre-Trained Neural Networks

ICLR 2024 Workshop Re-Align Submission 72
Authors: Anonymous

Published: 02 Mar 2024, Last Modified: 02 May 2024
ICLR 2024 Workshop Re-Align Poster
License: CC BY 4.0
Track: long paper (up to 9 pages)
Keywords: geometric reasoning, multimodal models, abstraction
TL;DR: Large neural networks show similar biases to humans in geometric reasoning tasks, suggesting that such abilities may not rely on discrete symbolic mental representations.
Abstract: Humans possess a remarkable capacity to recognize and manipulate abstract structure, which is especially apparent in the domain of geometry. Recent research in cognitive science suggests that neural networks do not share this capacity, proposing instead that human geometric reasoning abilities arise from discrete symbolic structure in human mental representations. However, progress in artificial intelligence (AI) suggests that neural networks begin to demonstrate more human-like reasoning after standard architectures are scaled up in both model size and amount of training data. In this study, we revisit empirical results in cognitive science and identify three key biases in geometric visual processing: a sensitivity to complexity, a preference for regularity, and the perception of parts and relations. We test tasks from the literature that probe these biases in humans and find that large pre-trained neural network models used in modern forms of AI demonstrate human-like biases in abstract geometric processing.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 72