Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination

Published: 27 Oct 2023, Last Modified: 24 Apr 2024 · ICBINB 2023
Keywords: Finance, Large Language Models, Hallucinations
TL;DR: An empirical examination of the hallucinations of Large Language Models in the domain of finance
Abstract: The hallucination issue is recognized as a fundamental deficiency of large language models (LLMs), especially when they are applied to fields such as finance, education, and law. Despite growing concern, there has been little empirical investigation. In this paper, we provide an empirical examination of LLMs' hallucination behaviors in financial tasks. First, we empirically investigate LLMs' ability to explain financial concepts and terminology. Second, we assess LLMs' capacity to query historical stock prices. Third, to alleviate the hallucination issue, we evaluate the efficacy of four practical methods: few-shot learning, Decoding by Contrasting Layers (DoLa), Retrieval-Augmented Generation (RAG), and prompt-based tool learning, in which the model is prompted to generate a query command for an external function rather than answer from memory. Finally, our major finding is that off-the-shelf LLMs exhibit serious hallucination behaviors in financial tasks, underscoring an urgent need for research efforts to mitigate LLMs' hallucination.
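For concreteness, the sketch below illustrates the prompt-based tool-learning idea on the stock-price task: instead of letting the model recall a price from parametric memory, it is instructed to emit a structured query command, which is parsed and executed against a trusted data source. This is a minimal sketch under stated assumptions, not the paper's implementation; `call_llm`, `price_db`, the `QUERY(...)` format, and the dummy price are all illustrative stand-ins.

```python
# Minimal sketch of prompt-based tool learning for stock-price queries.
# Hypothetical names throughout: `call_llm` stands in for any chat-completion
# API, and `price_db` for a real market-data backend.
import re

TOOL_PROMPT = (
    "You cannot recall historical stock prices reliably. When asked for a "
    "price, respond ONLY with a command of the form:\n"
    "  QUERY(ticker=<symbol>, date=<YYYY-MM-DD>)\n"
    "Question: {question}"
)

# Toy stand-in for a market-data source; the price is a dummy value.
price_db = {("AAPL", "2023-10-27"): 168.22}

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; here it returns a canned command."""
    return "QUERY(ticker=AAPL, date=2023-10-27)"

def answer(question: str) -> str:
    # 1. Ask the model to emit a query command instead of a price from memory.
    command = call_llm(TOOL_PROMPT.format(question=question))
    # 2. Parse the command and execute it against the trusted data source.
    match = re.match(r"QUERY\(ticker=(\w+), date=([\d-]+)\)", command)
    if not match:
        return "Could not parse tool call; refusing to guess."
    ticker, date = match.groups()
    price = price_db.get((ticker, date))
    return f"{ticker} closed at {price} on {date}" if price else "No data."

print(answer("What was Apple's closing price on 27 Oct 2023?"))
```

The design intuition is the same one behind the RAG baseline: grounding the response in retrieved data rather than the model's parametric memory, so a wrong answer can only come from the data source, not from a confabulated recollection.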
Submission Number: 19