Concept Understanding in Large Language Models: An Empirical Study

01 Mar 2023 (modified: 31 May 2023) | Submitted to Tiny Papers @ ICLR 2023
Keywords: Large Language Model, Concept Understanding
TL;DR: We explore the ability of Large Language Models to understand concepts.
Abstract: Large Language Models (LLMs) have demonstrated superior comprehension and expressiveness across a wide range of tasks and have exhibited remarkable capabilities in real-world applications. It is therefore crucial, in both academia and industry, to investigate their potential and limitations so that their performance can be trusted. In this paper, we focus on LLMs' ability to understand concepts, especially abstract versus concrete ones. To this end, we construct a WordNet-based dataset containing one subset of abstract concepts and one subset of concrete concepts. We select six pre-trained LLMs and use a classic NLP task, hypernym discovery, as a probe of how well they understand concepts. The experimental results suggest that the LLMs' understanding of abstract concepts is significantly weaker than their understanding of concrete ones.
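To make the setup concrete, the sketch below shows one plausible way to build such a WordNet-based split and collect gold hypernyms with NLTK. It is not the paper's released code: the functions is_abstract and build_hypernym_examples, and the criterion of treating descendants of physical_entity.n.01 as concrete, are illustrative assumptions.

```python
# A minimal sketch (assumed, not the authors' pipeline) of a WordNet-based
# abstract/concrete split and gold-hypernym collection for hypernym discovery.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

PHYSICAL = wn.synset("physical_entity.n.01")

def is_abstract(synset):
    """Label a noun synset 'concrete' if it descends from physical_entity.n.01,
    and 'abstract' otherwise (one plausible WordNet-based criterion)."""
    return PHYSICAL not in synset.closure(lambda s: s.hypernyms())

def build_hypernym_examples(words):
    """For each word, collect (concept, gold hypernyms, abstract/concrete label)."""
    examples = []
    for word in words:
        for syn in wn.synsets(word, pos=wn.NOUN):
            hypernyms = [h.lemma_names()[0] for h in syn.hypernyms()]
            if hypernyms:
                label = "abstract" if is_abstract(syn) else "concrete"
                examples.append((syn.name(), hypernyms, label))
    return examples

if __name__ == "__main__":
    for concept, hypers, label in build_hypernym_examples(["justice", "hammer"]):
        # An LLM would then be prompted to name a hypernym of `concept`,
        # and its answer checked against the gold hypernyms in `hypers`.
        print(f"{concept} ({label}): gold hypernyms = {hypers}")
```

Under this setup, an LLM's accuracy on the abstract subset versus the concrete subset can be compared directly, which is the kind of contrast the abstract reports.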