Orchestrating Symbolic and Sub-Symbolic Reasoning: A Multi-Agent LLM Framework for Complex Scientific Problem-Solving
Keywords: Multi-Agent Systems, Large Language Models, Symbolic Reasoning, Logical Consistency, Tool-Use, Polymer Science, Scientific AI, Distributed Reasoning
TL;DR: We propose a multi-agent LLM framework that orchestrates symbolic and sub-symbolic reasoning to solve complex scientific problems, achieving high logical consistency but revealing challenges in cross-modal verification.
Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in pattern recognition and text generation, yet they struggle with complex logical and symbolic reasoning tasks. This limitation is particularly evident in scientific domains that require integrating symbolic knowledge with sub-symbolic computation. To address it, we present a multi-agent LLM framework in which a central orchestrator (DeepSeek-V2) conducts multi-turn interactions to coordinate a team of specialized agents, effectively using them as external tools for different reasoning modalities. The framework supports dynamic team formation and consensus mechanisms while maintaining logical consistency via cross-agent verification protocols. We evaluate the system on polymer science, a representative structured testbed rich in symbolic constraints and numerical data, and illustrate how the approach can generalize to other domains requiring formal logical consistency. Our framework achieves significant performance improvements over a single LLM (0.76 vs. 0.62 success rate) and robust task completion (100% on a 5-paper benchmark). However, a detailed failure case in biopolymer analysis reveals critical challenges in maintaining consistency across reasoning modalities, highlighting the need for more sophisticated verification mechanisms. Our work provides a blueprint for enhancing LLM reasoning through coordinated multi-agent systems and identifies key directions for future research in logical reasoning augmentation.
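The orchestrator-and-verification pattern described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all class and function names are hypothetical, the LLM-backed agents are replaced by deterministic stubs, and the cross-agent verification protocol is reduced to a simple numerical agreement check.

```python
from dataclasses import dataclass

@dataclass
class AgentReply:
    agent: str
    answer: float
    rationale: str

class SymbolicAgent:
    """Stand-in for an agent that applies a symbolic constraint
    (here, a closed-form relation) rather than numeric computation."""
    name = "symbolic"

    def solve(self, task: dict) -> AgentReply:
        m, dp = task["monomer_mass"], task["degree_of_polymerization"]
        return AgentReply(self.name, m * dp, "M_polymer = M_monomer * DP")

class NumericAgent:
    """Stand-in for a sub-symbolic agent that reaches the same
    quantity by direct numerical computation."""
    name = "numeric"

    def solve(self, task: dict) -> AgentReply:
        total = sum(task["monomer_mass"]
                    for _ in range(task["degree_of_polymerization"]))
        return AgentReply(self.name, total, "summed monomer masses")

def orchestrate(task: dict, agents: list, tol: float = 1e-6) -> dict:
    """Dispatch the task to every agent, then cross-verify:
    accept an answer only when all agents agree within `tol`."""
    replies = [agent.solve(task) for agent in agents]
    ref = replies[0].answer
    consistent = all(abs(r.answer - ref) <= tol for r in replies)
    return {"answer": ref if consistent else None,
            "consistent": consistent,
            "replies": replies}

# Hypothetical polystyrene-like task: styrene monomer mass, DP = 1000.
task = {"monomer_mass": 104.15, "degree_of_polymerization": 1000}
result = orchestrate(task, [SymbolicAgent(), NumericAgent()])
```

In the actual system, `orchestrate` would correspond to multi-turn tool-use by the DeepSeek-V2 orchestrator, and a verification failure (the `consistent=False` branch) would trigger further agent interaction rather than returning `None`.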
Submission Number: 42