Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability

Published: 10 Jul 2024, Last Modified: 26 Aug 2024, COLM, CC BY 4.0
Research Area: Science of LMs, LMs and the world
Keywords: Language models, Knowledge graph, Hallucination, Scaling
TL;DR: We trained language models on a knowledge graph to investigate how models' hallucination behavior and its detectability vary with scale.
Abstract: While many capabilities of language models (LMs) improve with increased training budget, the influence of scale on hallucinations is not yet fully understood. Hallucinations come in many forms, and there is no universally accepted definition. We thus focus on studying only those hallucinations where a correct answer appears verbatim in the training set. To fully control the training data content, we construct a knowledge graph (KG)-based dataset, and use it to train a set of increasingly large LMs. We find that for a fixed dataset, larger and longer-trained LMs hallucinate less. However, hallucinating on ≤5% of the training data requires an order of magnitude larger model, and thus an order of magnitude more compute, than Hoffmann et al. (2022) reported was optimal. Given this costliness, we study how hallucination detectors depend on scale. While detector size improves performance on a fixed LM's outputs, we find an inverse relationship between the scale of the LM and the detectability of its hallucinations.
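To make the setup concrete, the sketch below is an illustrative assumption rather than the authors' released code: it shows one way KG triples could be serialized into training text and a completion flagged as a hallucination when the KG object does not appear verbatim in the model's output. The triple format, prompt template, and `generate` callable are hypothetical.

```python
# A minimal sketch (not the paper's code) of the kind of setup the abstract describes:
# serialize knowledge-graph triples into training strings, then count a completion as a
# hallucination when the object stored in the KG does not appear verbatim in it.
from typing import Callable, Iterable

Triple = tuple[str, str, str]  # (subject, predicate, object)

def triples_to_training_text(triples: Iterable[Triple]) -> list[str]:
    """Render each KG triple as a plain-text training example."""
    return [f"{s} {p} {o}." for s, p, o in triples]

def hallucination_rate(
    triples: Iterable[Triple],
    generate: Callable[[str], str],  # LM completion function: prompt -> generated text
) -> float:
    """Fraction of prompts whose completion omits the KG object verbatim."""
    triples = list(triples)
    errors = 0
    for s, p, o in triples:
        completion = generate(f"{s} {p}")        # prompt with subject + predicate
        if o.lower() not in completion.lower():  # correct answer must appear verbatim
            errors += 1
    return errors / max(len(triples), 1)

if __name__ == "__main__":
    kg = [("Marie Curie", "was born in", "Warsaw")]
    print(triples_to_training_text(kg))
    # A stub LM that always answers "Paris" hallucinates on this triple.
    print(hallucination_rate(kg, lambda prompt: prompt + " Paris."))  # -> 1.0
```

Under this definition, sweeping the LM size and training duration while holding the KG-derived dataset fixed yields the hallucination-rate-versus-scale comparison discussed in the abstract.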
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 1209