Enhancing Logical Consistency in Language Models through Neuro-Symbolic Feedback and Structured Reasoning

Published: 15 Nov 2025 · Last Modified: 08 Mar 2026 · AAAI 2026 Bridge LMReasoning · CC BY 4.0
Keywords: Logical Reasoning, Symbolic Reasoning, Explainable AI (XAI)
Abstract: Large Language Models (LLMs) have achieved remarkable progress in understanding and generating natural language, yet they continue to struggle with tasks that require explicit logical, deductive, and symbolic reasoning. This limitation stems from their reliance on statistical correlations in human-written text rather than structured reasoning grounded in formal logic. In this work, we explore a hybrid neuro-symbolic perspective that bridges statistical language modeling with structured reasoning paradigms. We investigate how decoupled token representations and feedback-guided adaptation, originally developed to enhance multimodal understanding, can also serve as mechanisms for improving logical consistency and inference in LLMs. Specifically, we propose integrating symbolic reasoning modules, external logic solvers, and constraint-based inference layers within transformer architectures to align neural activations with logical entailment structures. Our empirical analysis demonstrates that incorporating logic-informed feedback during fine-tuning enhances both deductive and inductive reasoning capabilities while significantly reducing self-contradiction, semantic drift, and hallucination across multi-turn reasoning tasks. We further show that explicit reasoning traces, when combined with learned representations, improve interpretability and trustworthiness in model outputs. These findings support the growing view that hybrid neuro-symbolic systems, combining the flexibility of neural networks with the rigor of formal logic, represent a crucial step toward reasoning-aware foundation models.
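The abstract does not spell out an implementation, but the solver-in-the-loop idea can be illustrated with a minimal sketch. Assuming Z3 as the external logic solver, the hypothetical `consistency_feedback` function below checks whether propositions asserted across a multi-turn transcript are jointly satisfiable and turns the verdict into a scalar signal; the hand-written `asserted` list stands in for a real semantic-parsing step, which the paper's approach would need to supply.

```python
# Minimal sketch of a constraint-based consistency check, assuming Z3 as the
# external logic solver. The hand-parsed `asserted` formulas are a
# hypothetical stand-in for a semantic parser over model outputs.
from z3 import Bool, Implies, Not, Solver, unsat

# Hypothetical propositions extracted from a multi-turn model transcript.
rains = Bool("rains")
wet = Bool("ground_wet")

# Statements the model asserted across turns, already mapped to logic.
asserted = [
    Implies(rains, wet),  # turn 1: "If it rains, the ground gets wet."
    rains,                # turn 2: "It is raining."
    Not(wet),             # turn 3: "The ground is dry."  <- contradiction
]

def consistency_feedback(formulas):
    """Return -1.0 if the asserted formulas are jointly unsatisfiable
    (a self-contradiction), +1.0 otherwise. In the paper's setting such a
    signal would be folded into the fine-tuning objective."""
    solver = Solver()
    for f in formulas:
        solver.add(f)
    return -1.0 if solver.check() == unsat else 1.0

print(consistency_feedback(asserted))  # -1.0: the solver flags the contradiction
```

In a full logic-informed fine-tuning loop, this penalty would presumably be attached to the generated reasoning trace that produced the contradiction, rewarding completions whose asserted formulas remain satisfiable across turns.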
Submission Number: 7