Keywords: Multi-Agent Systems, Logical Reasoning, LLM Orchestration, Symbolic Reasoning, Chain-of-Thought, Consistency Maintenance
TL;DR: We introduce a hierarchical multi-agent framework that significantly improves deductive accuracy and consistency by decomposing complex tasks into coordinated interactions among specialized agents.
Abstract: Large Language Models (LLMs) demonstrate impressive natural language capabilities but exhibit significant limitations in logical and symbolic reasoning tasks. We argue that these limitations stem from a fundamental architectural constraint: a single monolithic model must simultaneously maintain logical consistency, domain expertise, and multi-step deductive reasoning. We propose a hierarchical multi-agent framework that decomposes complex reasoning tasks into specialized agent collaborations, treating logical inference as an emergent property of coordinated agent interactions rather than a single model's capability. Our architecture organizes 18+ specialized agents across four tiers (perception, domain expertise, coordination, and strategic reasoning), with explicit protocols ensuring logical consistency across agent boundaries. Through deployment on an educational reasoning platform serving users engaged in complex technical problem-solving, we demonstrate that agent orchestration achieves systematic improvements in deductive accuracy, consistency maintenance, and multi-step reasoning over monolithic baselines. We present architectural principles, coordination protocols, and empirical evidence showing that distributed reasoning through specialized agents offers a promising paradigm for addressing fundamental logical reasoning challenges in LLMs.
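The tiered orchestration and boundary-consistency protocol summarized above can be sketched schematically. This is a minimal illustration, not the paper's implementation: the agent names, the `Orchestrator` class, and the monotone ("agents may only extend, never retract, established facts") consistency check are all assumptions made for the example; real agents would invoke an LLM rather than return a tagged fact.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    tier: str  # one of the four tiers below

    def handle(self, facts: frozenset) -> frozenset:
        # Placeholder inference step: a real agent would call an LLM with
        # a tier-specific prompt; here each agent adds one derived fact.
        return facts | {f"{self.name}:ok"}

# Tiers are processed in a fixed bottom-up order.
TIER_ORDER = ("perception", "domain", "coordination", "strategic")

class Orchestrator:
    """Routes a shared fact set through the tiers in order, applying a
    simple consistency check at each agent boundary (illustrative)."""

    def __init__(self, agents):
        self.agents = agents

    def run(self, facts: frozenset) -> frozenset:
        for tier in TIER_ORDER:
            for agent in (a for a in self.agents if a.tier == tier):
                proposed = agent.handle(facts)
                # Assumed boundary protocol: accept an agent's output only
                # if it preserves every fact established by earlier tiers.
                if facts <= proposed:
                    facts = frozenset(proposed)
        return facts

agents = [
    Agent("parser", "perception"),
    Agent("math_expert", "domain"),
    Agent("planner", "coordination"),
    Agent("strategist", "strategic"),
]
result = Orchestrator(agents).run(frozenset({"input:task"}))
print(sorted(result))
```

The monotonicity check stands in for the paper's "explicit protocols ensuring logical consistency across agent boundaries"; any richer protocol (e.g. contradiction detection between derived facts) would slot into the same boundary position.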
Submission Number: 85