Position: Efficient General Intelligence requires Neuro-Symbolic Integration: Pillars, Benchmarks, and Beyond
TL;DR: Monolithic black-box models (e.g., LLMs) are inefficient at achieving general intelligence; we argue that more effective systems can be built using multi-component neuro-symbolic approaches.
Abstract: Recent breakthroughs in Large Language Model (LLM) development have rekindled hopes for broadly capable artificial intelligence. Yet, these models still exhibit notable limitations -- particularly in deductive reasoning and efficient skill acquisition. In contrast, *neuro-symbolic* approaches, which integrate sub-symbolic pattern extraction with explicit logical structures, offer more robust generalization across diverse tasks. We argue that additional factors -- such as modular transparency, flexible representations, and targeted prior knowledge -- are crucial to further enhance this generalization. Our analysis of both historical and contemporary AI methods suggests that **a multi-component neuro-symbolic implementation strategy is necessary for efficient general intelligence**. This position is reinforced by the latest performance gains on the ARC-AGI benchmark and by concrete case studies demonstrating how neuro-symbolic designs address gaps left by purely neural or purely symbolic systems.
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: Neuro-Symbolic AI, NeSys, abstraction and reasoning corpus, ARC-AGI, artificial general intelligence, AGI
Submission Number: 241