Semantic Priming via Knowledge Graphs to Analyze and Treat Language Models' Honest Lies

Published: 01 Jan 2024, Last Modified: 15 May 2025. ICIS 2024. License: CC BY-SA 4.0.
Abstract: Recent advances in Large Language Models (LLMs) have demonstrated significant capabilities in Natural Language Generation tasks, including summarization, question answering, and machine translation. Despite their broad adoption in fields such as scientific research, medicine, politics, and law, concerns about the reliability of LLMs have arisen due to their tendency to generate non-factual or irrelevant outputs, known as "hallucinations." In this paper, we propose DualEval, a method to detect and mitigate such factual hallucinations, which integrates semantic priming with structured knowledge graphs to improve the fidelity of LLMs in producing fact-based outputs. In addition, we introduce DualityMatrix, an evaluation metric specifically designed to measure the occurrence of hallucinations in response to non-factual prompts across different applications. Our results show that priming with knowledge graph structures can significantly reduce factual inaccuracies and improve the model's ability to rebut non-factual prompts, thereby enhancing the reliability of LLMs in critical applications.
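To make the idea concrete, here is a minimal sketch of what priming a prompt with knowledge-graph facts could look like. The triple format, graph contents, and function names (`retrieve_triples`, `prime_prompt`) are illustrative assumptions for this sketch, not the paper's actual DualEval implementation.

```python
# Illustrative sketch (assumed, not the paper's implementation): prepend
# relevant knowledge-graph triples to a question as a semantic "priming"
# preamble, so the model is steered toward verified facts.

# A toy knowledge graph as (subject, relation, object) triples.
KG = [
    ("Eiffel Tower", "located_in", "Paris"),
    ("Paris", "capital_of", "France"),
    ("Great Wall", "located_in", "China"),
]

def retrieve_triples(kg, query):
    """Return triples whose subject or object is mentioned in the query."""
    q = query.lower()
    return [t for t in kg if t[0].lower() in q or t[2].lower() in q]

def prime_prompt(kg, question):
    """Build a primed prompt: verified facts first, then the question."""
    triples = retrieve_triples(kg, question)
    facts = "\n".join(
        f"- {s} {r.replace('_', ' ')} {o}" for s, r, o in triples
    )
    return (
        "Use only the verified facts below; if the question contradicts "
        "them, rebut it instead of answering.\n"
        f"Facts:\n{facts}\n\nQuestion: {question}"
    )

primed = prime_prompt(KG, "Is the Eiffel Tower in Rome?")
print(primed)
```

In this sketch, a non-factual premise ("in Rome") is paired with the retrieved triple ("Eiffel Tower located in Paris"), giving the model grounded evidence with which to rebut the prompt rather than confabulate.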