Track: long paper (up to 8 pages)
Keywords: Analogical reasoning, latent space geometry, geometric inductive bias, representation learning, abstraction, ARC benchmark, linear scaling, interpretability.
TL;DR: Analogical reasoning can be realized as a simple geometric translation in latent space, yielding interpretable structure, improved generalization, and linear computational scaling.
Abstract: Analogical reasoning is the ability to infer and apply a consistent relation across examples. This ability imposes a simple geometric constraint: example pairs should differ by a shared displacement in representation space. Yet most neural architectures do not enforce this structure explicitly, instead relying on implicit attention-based mechanisms. We study analogical reasoning from a geometric perspective and show that it can be realized by a minimal latent-space translation structure. We instantiate this principle with an Encoder–Reasoner–Decoder (ERD) architecture, where each example pair defines a latent difference vector and a task-level transformation is obtained by averaging these differences, corresponding to the least-squares solution of a shared linear relation. Using the Abstraction and Reasoning Corpus (ARC) as a diagnostic benchmark, we show that explicitly enforcing this geometric constraint improves generalization relative to implicit attention-based reasoning approaches, while reducing computation from quadratic to linear in the number of examples. The learned representations exhibit clear geometric structure: difference vectors cluster by task and form consistent parallelogram relations, providing direct evidence of explicit analogical geometry. These results suggest that analogical reasoning does not require complex symbolic machinery, large-scale attention, or auxiliary training losses, but can emerge from enforcing a simple and interpretable geometric structure in latent space.
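The averaging step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the latent dimension, number of example pairs, and the stand-in encoder outputs (`z_in`, `z_out`) are all hypothetical. It shows why the mean of per-pair latent differences is the least-squares solution of a shared translation z_out ≈ z_in + t.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 5  # hypothetical latent dimension and number of example pairs

# Stand-ins for encoder outputs: each pair (z_in[i], z_out[i]) is assumed
# to differ by a shared translation t, up to small noise.
z_in = rng.normal(size=(n, d))
t_true = rng.normal(size=d)
z_out = z_in + t_true + 0.01 * rng.normal(size=(n, d))

# Task-level transformation: the mean of per-pair latent differences.
# This minimizes sum_i ||z_out[i] - (z_in[i] + t)||^2 over t, i.e. it is
# the least-squares estimate of the shared linear (translational) relation.
t_hat = (z_out - z_in).mean(axis=0)

# Applying the inferred translation to a new query latent (one vector-add
# per example pair above, hence cost linear in the number of examples).
z_query = rng.normal(size=d)
z_pred = z_query + t_hat
```

Note that this sketch covers only the pure-translation case; the paper's ERD architecture additionally involves learned encoder and decoder networks around this latent-space step.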
Anonymization: This submission has been anonymized for double-blind review by removing identifying information such as names, affiliations, and URLs.
Submission Number: 65