LINKED: Eliciting, Filtering and Integrating Knowledge in Large Language Model for Commonsense Reasoning

ACL ARR 2024 June Submission2355 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large language models (LLMs) sometimes demonstrate poor performance on knowledge-intensive tasks, and commonsense reasoning is one of them. Researchers typically address these issues by retrieving related knowledge from knowledge graphs or by employing self-enhancement methods to elicit knowledge from LLMs. However, noisy knowledge and invalid reasoning hamper their ability to answer questions accurately. To this end, we propose a novel method named e$\textbf{L}$iciting, f$\textbf{I}$ltering and i$\textbf{N}$tegrating $\textbf{K}$nowledge in large languag$\textbf{E}$ mo$\textbf{D}$el ($\mathbb{LINKED}$). In it, we design a reward model to filter out noisy knowledge and employ a marginal consistent reasoning module to reduce invalid reasoning. In comprehensive experiments on two complex commonsense reasoning benchmarks, our method outperforms SOTA baselines (up to a $\textbf{9.0}$% improvement in accuracy). Moreover, to measure both the positive and negative impact of injected knowledge, we propose a new metric, the effectiveness-preservation score, for knowledge-enhancement work. Finally, through extensive experiments, we conduct an in-depth analysis and draw many meaningful conclusions about LLMs on commonsense reasoning tasks.
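
The abstract describes a three-stage pipeline: elicit candidate knowledge from the LLM, filter it with a reward model, and integrate the surviving knowledge when answering. The sketch below is only an illustrative reading of that pipeline, not the authors' implementation; all function names, the threshold, and the majority-vote aggregation (used here as a stand-in for the marginal consistent reasoning module) are assumptions.

```python
# Hypothetical sketch of an elicit -> filter -> integrate pipeline.
# Names, threshold, and voting scheme are illustrative assumptions,
# not taken from the LINKED paper.
from collections import Counter
from typing import Callable, List


def linked_pipeline(
    question: str,
    elicit: Callable[[str, int], List[str]],      # LLM call: sample k knowledge statements
    reward_model: Callable[[str, str], float],    # scores (question, knowledge) relevance
    answer_with: Callable[[str, str], str],       # LLM call: answer given one knowledge statement
    k: int = 10,
    keep_threshold: float = 0.5,
) -> str:
    # 1. Eliciting: sample candidate commonsense knowledge from the LLM.
    candidates = elicit(question, k)

    # 2. Filtering: keep only statements the reward model judges relevant,
    #    discarding noisy knowledge.
    kept = [s for s in candidates if reward_model(question, s) >= keep_threshold]

    # 3. Integrating: answer once per kept statement and aggregate by majority
    #    vote; fall back to answering without knowledge if nothing survives.
    answers = [answer_with(question, s) for s in kept] or [answer_with(question, "")]
    return Counter(answers).most_common(1)[0][0]
```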
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: commonsense QA, reasoning, few-shot QA
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2355