EpiK-Eval: Evaluation for Language Models as Epistemic Models

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Theme Track: Large Language Models and the Future of NLP
Submission Track 2: Machine Learning for NLP
Keywords: large language models, large language model, LLMs, LLM, language model, language models, LM, LMs, EpiK-Eval, knowledge consolidation, story, benchmark, knowledge-base, KB, theory-of-mind, epistemic, hallucination, hallucinate, dataset, task, scale, scaling, knowledge representation, reasoning, consolidation, knowledge, context, evaluation, limitations, limitation, narrative, narratives, training objective, causal language modeling, masked language modeling
TL;DR: First study to investigate LMs' capability to combine information seen across different training documents.
Abstract: In the age of artificial intelligence, the role of large language models (LLMs) is becoming increasingly central. Despite their growing prevalence, their capacity to consolidate knowledge from different training documents, a crucial ability in numerous applications, remains unexplored. This paper presents the first study examining the capability of LLMs to effectively combine such information within their parameter space. We introduce EpiK-Eval, a novel question-answering benchmark tailored to evaluate LLMs' proficiency in formulating a coherent and consistent knowledge representation from segmented narratives. Evaluations across various LLMs reveal significant weaknesses in this domain. We contend that these shortcomings stem from the intrinsic nature of prevailing training objectives. Consequently, we advocate for refining the approach to knowledge consolidation, as it harbors the potential to dramatically improve the overall effectiveness and performance of LLMs. The findings from this study offer insights for developing more robust and reliable LLMs. Our code and benchmark are available at https://github.com/chandar-lab/EpiK-Eval.
Submission Number: 1856
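The abstract describes the benchmark's core protocol: narratives are split into segments stored as separate training documents, and the model is then asked questions that require recombining those segments from its parameters alone. Below is a minimal Python sketch of that setup under stated assumptions; every name here (Story, build_training_corpus, evaluate, the model's generate method) is a hypothetical illustration, not the benchmark's actual API, which lives in the linked repository.

```python
# Hypothetical sketch of the evaluation protocol described in the abstract.
# All names and interfaces below are illustrative assumptions, not the
# official EpiK-Eval API (see https://github.com/chandar-lab/EpiK-Eval).

from dataclasses import dataclass
from typing import List, Protocol


class LanguageModel(Protocol):
    # Assumed minimal interface: any model that maps a prompt to text.
    def generate(self, prompt: str) -> str: ...


@dataclass
class Story:
    segments: List[str]  # parts of one narrative, stored as separate documents
    question: str        # answerable only by combining several segments
    answer: str          # gold answer


def build_training_corpus(stories: List[Story]) -> List[str]:
    # Each segment becomes an independent training document, so no single
    # example (or context window) ever contains the full narrative.
    return [seg for story in stories for seg in story.segments]


def evaluate(model: LanguageModel, stories: List[Story]) -> float:
    # Questions are asked without the story in-context; any consolidation
    # must have happened in the model's parameters during training.
    correct = sum(
        model.generate(story.question).strip() == story.answer
        for story in stories
    )
    return correct / len(stories)
```

In this framing, a model that only memorizes individual documents can recall each segment yet still fail the questions, which is exactly the gap between document-level recall and parameter-space knowledge consolidation that the benchmark is designed to measure.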