Sound and Complete Neurosymbolic Reasoning with LLM-Grounded Interpretations

Published: 29 Aug 2025, Last Modified: 29 Aug 2025
Venue: NeSy 2025 - Phase 2 Poster
License: CC BY 4.0
Keywords: logical reasoning, large language models, formal semantics, paraconsistency
TL;DR: An approach to logical reasoning with LLMs that integrates an LLM directly into the interpretation function of a formal semantics for a paraconsistent logic.
Abstract: Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but their outputs exhibit problems with logical consistency. How can we harness LLMs' broad-coverage parametric knowledge in formal reasoning despite their inconsistency? We present a method for directly integrating an LLM into the interpretation function of the formal semantics for a paraconsistent logic. We provide experimental evidence for the feasibility of the method by evaluating the resulting interpretation function on datasets created from several short-form factuality benchmarks. Unlike prior work, our method offers a theoretical framework for neurosymbolic reasoning that leverages an LLM's knowledge while preserving the underlying logic's soundness and completeness properties.
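To make the core idea concrete, here is a minimal sketch of what an LLM-grounded interpretation function for a paraconsistent logic could look like. It assumes a Belnap-Dunn-style four-valued semantics and bilateral evaluation (the linked repository's name suggests this, but the exact design is the paper's); the `TruthValue` enum, the `ask` oracle, the prompt wording, and `stub_ask` are all illustrative assumptions, not the authors' implementation.

```python
from enum import Enum
from typing import Callable

class TruthValue(Enum):
    """Generalized (Belnap-Dunn) truth values for a paraconsistent logic."""
    TRUE = "t"      # verified but not refuted
    FALSE = "f"     # refuted but not verified
    BOTH = "b"      # verified and refuted: contradictory parametric knowledge
    NEITHER = "n"   # neither verified nor refuted: a knowledge gap

def llm_interpretation(statement: str, ask: Callable[[str], bool]) -> TruthValue:
    """Hypothetical interpretation function grounded in an LLM oracle `ask`.

    The statement is evaluated bilaterally: one query checks whether the
    LLM verifies it, an independent query checks whether it refutes it.
    The two answers jointly determine one of four truth values, so
    contradictory LLM answers are represented rather than rejected.
    """
    verified = ask(f"Is the following statement true? Answer yes or no. {statement}")
    refuted = ask(f"Is the following statement false? Answer yes or no. {statement}")
    if verified and refuted:
        return TruthValue.BOTH
    if verified:
        return TruthValue.TRUE
    if refuted:
        return TruthValue.FALSE
    return TruthValue.NEITHER

# Toy stand-in for an LLM call, for illustration only.
def stub_ask(prompt: str) -> bool:
    if prompt.startswith("Is the following statement true?"):
        return "Paris is the capital of France." in prompt
    return False  # this stub never refutes anything

print(llm_interpretation("Paris is the capital of France.", stub_ask))  # TruthValue.TRUE
print(llm_interpretation("The Moon is made of cheese.", stub_ask))      # TruthValue.NEITHER
```

Because inconsistent answers map to a designated truth value (`BOTH`) instead of trivializing the logic, a reasoner built on such an interpretation can retain the soundness and completeness of the underlying paraconsistent logic. For the actual interpretation function and evaluation code, see the repository linked under Software below.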
Track: Neurosymbolic Generative Models
Paper Type: Long Paper
Resubmission: No
Software: https://github.com/bradleypallen/bilateral-factuality-evaluation
Publication Agreement: pdf
Submission Number: 23