Probabilistic Abduction for Visual Abstract Reasoning via Learning Rules in Vector-symbolic Architectures

Published: 28 Oct 2023, Last Modified: 13 Nov 2023 · MATH-AI 23 Poster
Keywords: Visual abstract reasoning, Raven's progressive matrices, vector-symbolic architectures, neural networks
TL;DR: We present a learnable vector-symbolic architecture-based rule formulation to solve visual abstract reasoning tasks via probabilistic abduction.
Abstract: Abstract reasoning is a cornerstone of human intelligence, and replicating it with artificial intelligence (AI) presents an ongoing challenge. This study focuses on efficiently solving Raven's progressive matrices (RPM), a visual test for assessing abstract reasoning abilities, by using the distributed computation and operators provided by vector-symbolic architectures (VSA). Instead of hard-coding the rule formulations associated with RPMs, our approach learns the VSA rule formulations (hence the name Learn-VRF) with just a single pass through the training data. Despite its compact parameterization, our approach remains transparent and interpretable. Learn-VRF yields accurate predictions on I-RAVEN's in-distribution data and exhibits strong out-of-distribution capabilities on unseen attribute-rule pairs, significantly outperforming pure connectionist baselines, including large language models. Our code is available at https://github.com/IBM/learn-vector-symbolic-architectures-rule-formulations.
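To make the idea of expressing RPM rules with VSA operators concrete, the following is a minimal, hypothetical sketch, not the authors' Learn-VRF implementation. It assumes an FHRR-style VSA in which attribute values are encoded as complex unit phasors, binding is the element-wise product, and unbinding uses the complex conjugate; the dimensionality `D`, the helper names, and the candidate-scoring step are all illustrative assumptions.

```python
import numpy as np

D = 256  # assumed VSA dimensionality

def random_vector(rng):
    """Random FHRR-style vector: unit-magnitude complex phasors."""
    phases = rng.uniform(-np.pi, np.pi, size=D)
    return np.exp(1j * phases)

def bind(a, b):
    return a * b            # element-wise (Hadamard) binding

def unbind(a, b):
    return a * np.conj(b)   # inverse of binding for unit phasors

def similarity(a, b):
    return np.real(np.vdot(a, b)) / D  # cosine-like similarity in [-1, 1]

rng = np.random.default_rng(0)
x1, x2 = random_vector(rng), random_vector(rng)

# One candidate rule expressed purely with VSA operators: predict the third
# panel's attribute vector by binding the first two panels' vectors. In a
# learnable formulation, a weighted (e.g., softmax) combination over several
# such candidate terms would be fit from the training data.
x3_pred = bind(x1, x2)

# Abduction sketch: score each answer candidate by similarity to the
# prediction and select the best match.
candidates = [random_vector(rng) for _ in range(7)] + [x3_pred]
scores = [similarity(x3_pred, c) for c in candidates]
print("best candidate index:", int(np.argmax(scores)))  # -> 7, the match
```

Because binding and unbinding are closed-form and differentiable with respect to the rule-selection weights, such a formulation can be trained quickly while each learned rule remains readable as an explicit VSA expression.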
Submission Number: 20