【Proposal】Logic Extraction: Enhancing AI Generalization in Abstraction and Reasoning Corpus Tasks

20 Oct 2024 (modified: 05 Nov 2024) · THU 2024 Fall AML Submission · CC BY 4.0
Keywords: AGI, Logic Extraction, Information Bottleneck, Data Augmentation, Emergent Language.
TL;DR: We propose a framework that mimics human cognitive processes to enhance AI generalization in ARC tasks through logic extraction and data augmentation.
Abstract: The Abstraction and Reasoning Corpus (ARC) benchmark is a key challenge in evaluating an AI system’s ability to generalize from limited examples, simulating a fundamental aspect of human intelligence. Despite progress in AI, current state-of-the-art models still fall well short of human performance on ARC tasks in both accuracy and generalization capacity. In this research, we propose a novel framework that enhances AI generalization by extracting the implicit “inner logic” from task examples and using it as an information bottleneck between a logic extraction model and a generation model. This approach emulates human cognitive processes in pattern recognition and reasoning. Our method incorporates data augmentation and curriculum training to improve model robustness across varying levels of task complexity. By promoting an emergent language that encapsulates this inner logic, we aim to enhance model performance while exploring more abstract representations of reasoning that go beyond explicit programming or natural language descriptions. This research could provide key insights into AI reasoning and abstraction, contributing to the development of more human-like generalization abilities in artificial systems.
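To make the two-stage design concrete, the following is a minimal illustrative sketch of the extract-then-generate pipeline the abstract describes. All names here (`extract_logic`, `apply_logic`) and the toy color-mapping rule are hypothetical simplifications, not the paper's actual models: in the proposal, both stages are learned networks and the bottleneck is an emergent discrete language rather than an explicit dictionary.

```python
# Hypothetical sketch: a logic extractor produces a compact discrete
# representation (the information bottleneck), and a generator must solve
# the test input using ONLY that representation.

def extract_logic(examples):
    """Infer a color-mapping 'inner logic' from (input, output) grid pairs.

    Returns a compact discrete artifact (here, a color -> color dict)
    standing in for the emergent-language bottleneck.
    """
    mapping = {}
    for inp, out in examples:
        for row_in, row_out in zip(inp, out):
            for a, b in zip(row_in, row_out):
                mapping[a] = b
    return mapping


def apply_logic(logic, grid):
    """Generate an output grid from a test input using only the extracted logic."""
    return [[logic.get(c, c) for c in row] for row in grid]


# Toy ARC-style task: every 1 becomes 2; background 0 is unchanged.
train_pairs = [([[0, 1], [1, 0]], [[0, 2], [2, 0]])]
logic = extract_logic(train_pairs)
print(apply_logic(logic, [[1, 1], [0, 1]]))  # → [[2, 2], [0, 2]]
```

The key point the sketch illustrates is the separation of concerns: the generator never sees the training pairs directly, so the bottleneck forces the extracted logic to carry all task-relevant information.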
Submission Number: 8