LLM-Augmented Symbolic NLU System for More Reliable Continuous Causal Statement Interpretation

Published: 17 Sept 2025 · Last Modified: 06 Nov 2025 · ACS 2025 Poster · CC BY 4.0
Keywords: natural language understanding, knowledge representation, qualitative reasoning
TL;DR: We use a hybrid approach in which a symbolic NLU system integrates LLMs to update its lexicon and rephrase sentences when interpreting continuous causal statements.
Abstract: Despite the broad applicability of large language models (LLMs), their reliance on probabilistic inference makes them vulnerable to errors such as hallucinated facts and inconsistently structured output in natural language understanding (NLU) tasks. By contrast, symbolic NLU systems provide interpretable understanding grounded in curated lexicons, semantic resources, and syntactic and semantic interpretation rules. They produce relational representations that support accurate reasoning and planning, as well as incremental, debuggable learning. However, symbolic NLU systems tend to have narrower coverage than LLMs, and extending and maintaining them requires scarce knowledge representation and linguistics expertise. This paper explores a hybrid approach that combines the broad-coverage language processing of LLMs with the structured relational representations of symbolic NLU, aiming to get the best of both. We use LLMs for rephrasing and text simplification, to provide broad coverage, and as a source of information for filling knowledge gaps more automatically; we use symbolic NLU to produce representations that support reasoning and incremental learning. We evaluate this approach, alongside symbolic-only and LLM-only pipelines, on the task of extracting and interpreting quantities and causal laws from commonsense science texts. Our results suggest that the hybrid method performs significantly better than the symbolic-only pipeline. (A toy sketch of the hybrid control flow appears below.)
Paper Track: Technical paper
Submission Number: 61
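
To make the hybrid control flow described in the abstract concrete, here is a minimal, hypothetical Python sketch: try the symbolic parser first; on failure, ask an LLM to rephrase the sentence and to propose lexicon entries, then retry. Every name here (symbolic_parse, llm_rephrase, llm_lexicon_entry, the toy LEXICON) is an illustrative stand-in, not the authors' actual system.

```python
# Hypothetical sketch of a symbolic-first pipeline with LLM fallbacks.
# All functions below are toy stand-ins, not the paper's implementation.

from typing import Optional

# Toy curated lexicon; a real system uses rich lexical/semantic resources.
LEXICON = {"the", "a", "gas", "pressure", "temperature", "increases"}

def tokenize(sentence: str) -> list[str]:
    return [w.strip(".,").lower() for w in sentence.split()]

def symbolic_parse(sentence: str) -> Optional[dict]:
    """Stand-in symbolic NLU: fails on any out-of-lexicon word,
    otherwise returns a toy relational representation."""
    tokens = tokenize(sentence)
    if any(t not in LEXICON for t in tokens):
        return None  # lexical gap -> parse failure
    # A real system would build structured causal/quantity relations here.
    return {"relation": "causal-law", "tokens": tokens}

def llm_rephrase(sentence: str) -> str:
    """Stand-in for an LLM rephrasing/simplification call."""
    synonyms = {"raises": "increases"}  # canned map, for illustration only
    return " ".join(synonyms.get(w.lower(), w) for w in sentence.split())

def llm_lexicon_entry(word: str) -> str:
    """Stand-in for asking an LLM to draft a new lexicon entry."""
    return word.lower()

def hybrid_interpret(sentence: str, max_retries: int = 2) -> Optional[dict]:
    """Symbolic-first interpretation with LLM fallbacks."""
    for _ in range(max_retries + 1):
        parse = symbolic_parse(sentence)
        if parse is not None:
            return parse  # structured output for downstream reasoning
        sentence = llm_rephrase(sentence)  # fallback 1: rephrase/simplify
        for t in tokenize(sentence):
            if t not in LEXICON:
                LEXICON.add(llm_lexicon_entry(t))  # fallback 2: fill lexicon gap
    return None

print(hybrid_interpret("Heating a gas raises the pressure"))
# -> {'relation': 'causal-law', 'tokens': ['heating', 'a', 'gas',
#     'increases', 'the', 'pressure']}
```

The design point the sketch illustrates is that the LLM never produces the final representation; it only widens coverage (rephrasing) and fills knowledge gaps (lexicon updates), while the symbolic parser remains the sole source of the relational output used for reasoning and learning.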