Enhancing Natural Language Understanding in Large Language Models by Symbolic Representation

Published: 29 Jun 2024, Last Modified: 10 Jul 2024, KiL 2024 Poster, CC BY 4.0
Keywords: Domain Knowledge, Semantic Parsing, Symbolic Representation
TL;DR: We propose a framework to enhance the capabilities of large models by fusing neural networks and symbolic representations.
Abstract: This paper presents the Symbolically Enhanced Neural Inference Framework (SENIF), which enhances the natural language understanding (NLU) capabilities of large language models (LLMs) such as GPT-4 by combining them with symbolic representations. The proposed method aims to improve the performance of LLMs by enabling them to reason over formalized statements. The framework employs Assertional Logic (AL) as its foundational representation. It first develops a Concept-Operator (CO) diagram for the domain and then translates natural language utterances into logical expressions. We propose a zero-shot parser that enables smaller language models to yield high-quality parsing results given a CO diagram. We then design a Chain-of-Thought (CoT) prompt that takes both the original text and the parsing results from the preceding step as inputs. Experimental results show that LLMs such as GPT-4 benefit greatly from these high-quality parsing results. Our framework substantially improves GPT-4's performance, raising the most challenging metric, C@90, by 46.67\% (40\% $\rightarrow$ 86.67\%). We also verify its feasibility for modeling in different domains and with medium-sized language models. This research provides a promising direction for enhancing the inference capabilities of large language models.
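The abstract describes a two-step pipeline: a zero-shot parser maps an utterance to a logical expression guided by a domain Concept-Operator diagram, and a Chain-of-Thought prompt then feeds both the original text and the parse to a stronger LLM. The following is a minimal sketch of that flow; `call_llm`, the prompt wording, and the diagram format are illustrative assumptions, not the paper's actual prompts or API.

```python
# Hedged sketch of the SENIF-style two-step pipeline described in the abstract.
# `call_llm` is a hypothetical stand-in for any chat-completion client.

def call_llm(prompt: str, model: str) -> str:
    """Hypothetical wrapper around an LLM chat-completion API."""
    raise NotImplementedError("Plug in your preferred LLM client here.")


def parse_to_logic(utterance: str, co_diagram: str, parser_model: str = "small-lm") -> str:
    """Step 1: zero-shot semantic parsing into a formal (Assertional Logic style)
    expression, constrained by a domain Concept-Operator (CO) diagram."""
    prompt = (
        "You are a semantic parser. Use only the concepts and operators below.\n\n"
        f"Concept-Operator diagram:\n{co_diagram}\n\n"
        f"Translate this utterance into a formal logical expression:\n{utterance}"
    )
    return call_llm(prompt, model=parser_model)


def answer_with_cot(utterance: str, logical_form: str, reasoner_model: str = "gpt-4") -> str:
    """Step 2: Chain-of-Thought prompt that combines the original text with
    the parsed logical form before asking a stronger LLM to infer the answer."""
    prompt = (
        f"Original text:\n{utterance}\n\n"
        f"Formalized statement:\n{logical_form}\n\n"
        "Reason step by step over the formalized statement, then give the final answer."
    )
    return call_llm(prompt, model=reasoner_model)
```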
Submission Number: 14