Expanding Horizons or Hitting Walls? Limits and Potentials of LLMs in Augmenting Lexical Knowledge Bases

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: This paper investigates the potential of Large Language Models (LLMs) to augment lexical knowledge bases (KBs) and to address their common limitations, such as their static nature, limited coverage, and labor-intensive creation and maintenance. We propose a methodology that leverages LLMs to accurately reconstruct information from a source KB and to generate new knowledge. We then evaluate this methodology using various LLMs and prompting techniques across three separate KBs. The results suggest that LLMs can provide accurate information when given ample contextual cues and when dealing with high-specificity concepts, but they are prone to errors and inconsistencies when asked for rare or generic knowledge. The findings also indicate that LLMs can contribute to KB management by reducing the need for manual intervention. This study highlights the potential and limitations of LLMs in lexical semantics and emphasizes the importance of novel approaches to KB creation, maintenance, and integration.
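The paper's prompts and evaluation code are not shown on this page; the sketch below is only a rough illustration of the kind of probing the abstract describes. It assumes a WordNet-style source KB accessed through NLTK and an OpenAI-compatible chat API; the prompt wording, model name, and matching heuristic are placeholders, not the authors' actual method.

# Minimal sketch: probe an LLM for hypernyms of WordNet synsets and
# compare its answers against the KB. Prompt wording, model name, and
# the matching heuristic are illustrative assumptions, not the paper's.
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")
from openai import OpenAI               # any OpenAI-compatible chat client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def probe_hypernym(synset, model="gpt-4o-mini"):
    """Ask the LLM for a hypernym, giving the gloss as a contextual cue."""
    lemma = synset.lemma_names()[0].replace("_", " ")
    prompt = (
        f"The word '{lemma}' means: {synset.definition()}.\n"
        "Give a single more general term (hypernym) for this sense. "
        "Answer with one word or short phrase only."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

def matches_kb(synset, answer):
    """Loose check: does the answer name any lemma of a KB hypernym?"""
    gold = {l.replace("_", " ").lower()
            for h in synset.hypernyms() for l in h.lemma_names()}
    return any(g in answer for g in gold), gold

if __name__ == "__main__":
    for ss in wn.synsets("bank")[:3]:
        pred = probe_hypernym(ss)
        ok, gold = matches_kb(ss, pred)
        print(f"{ss.name():20s} pred={pred!r:25s} gold={sorted(gold)} match={ok}")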
Paper Type: long
Research Area: Generation
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English