Learning and Reasoning with Model-Grounded Symbolic Artificial Intelligence Systems

Published: 20 Apr 2025 · Last Modified: 29 Aug 2025 · NeSy 2025 Poster · CC BY 4.0
Keywords: Neurosymbolic AI, Large Language Models, Symbol Grounding
TL;DR: We reinterpret instruction-tuned LLMs as model-grounded symbolic AI, using natural language as the symbolic layer. Our approach improves learning efficiency and reasoning reliability.
Track: Neurosymbolic Generative Models
Abstract: Neurosymbolic artificial intelligence (AI) systems combine neural network and classical symbolic AI mechanisms to exploit their complementary strengths: large-scale, generalizable learning and robust, verifiable reasoning. Numerous classifications of neurosymbolic AI illustrate how these two components can be integrated in distinctly different ways. In this work, we propose reinterpreting instruction-tuned large language models as model-grounded symbolic AI systems, where natural language serves as the symbolic layer and grounding is achieved through the model’s internal representation space. Within this framework, we investigate and develop novel learning and reasoning approaches that preserve structural similarities to traditional learning and reasoning paradigms. Comprehensive evaluations across complex mathematical reasoning procedures of varying difficulty provide insights into the effectiveness of our approach with respect to learning efficiency and reasoning reliability.
Paper Type: Long Paper
Software: https://github.com/AniruddhaChattopadhyay/research-metatuning
Submission Number: 35