The Pillars of Skill-Acquisition and Generalization; Why efficient General Intelligence requires Multi-Component Integration
Keywords: Artificial General Intelligence, AGI, Generalization, Neuro-Symbolic AI, NeSys, Abstraction and Reasoning Corpus, ARC-AGI
TL;DR: Monolithic black-box models (e.g., LLMs) are inefficient at general intelligence; we argue that more effective systems can be built by exploiting the synergies between multiple complementary components.
Abstract: Breakthroughs of Large Language Models (LLMs) have rekindled hopes for broadly capable artificial intelligence (i.e., Artificial General Intelligence (AGI)). Yet, these models still exhibit notable limitations, particularly in deductive reasoning and *efficient* skill acquisition. In contrast, neuro-symbolic approaches can exhibit more robust generalization across diverse tasks, as they integrate sub-symbolic pattern extraction with explicit logical structures. In this position paper, *we go a step further* and dissect generalizing systems into *six pillars*: well-defined model specificity, (human) capability encoding, dynamic knowledge acquisition & transfer, meaningful representations, abstraction & hierarchies, as well as the synergy effects resulting from component interactions. Based on historical and contemporary Artificial Intelligence (AI) approaches, we conclude that such **a multi-component implementation strategy is necessary for efficient general intelligence**. Our position is reinforced by the latest performance gains on the Abstraction and Reasoning Corpus (ARC) generalization benchmark.
Submission Number: 200