Keywords: llms, code-generation, llm for code, llm hallucinations
TL;DR: A deep dive into how prompt variations affect library hallucination rates in LLMs, including user descriptions of libraries, misspellings in library names, and popular prompting strategies.
Abstract: Large language models (LLMs) are increasingly used to generate code, yet they continue to hallucinate, often inventing non-existent libraries.
Such library hallucinations are not just benign errors: they can mislead developers, break builds, and expose systems to supply chain threats such as slopsquatting.
Despite increasing awareness of these risks, little is known about how real-world prompt variations affect hallucination rates.
Therefore, we present the first systematic study of how user-level prompt variations impact library hallucinations in LLM-generated code.
We evaluate six diverse LLMs across two hallucination types: library name hallucinations (invalid imports) and library member hallucinations (invalid calls from valid libraries).
We investigate how realistic user language extracted from developer forums and user errors of varying degrees (one- or multi-character misspellings and completely fake library or member names) affect LLM hallucination rates.
Our findings reveal systemic vulnerabilities: one-character misspellings trigger hallucinations in up to 26% of tasks, fake libraries are accepted in up to 99% of tasks, and time-related prompts lead to hallucinations in up to 84% of tasks.
Prompt engineering shows promise for mitigating hallucinations, but its effectiveness is inconsistent and LLM-dependent.
Our results underscore the fragility of LLMs to natural prompt variation and highlight the urgent need for safeguards against library-related hallucinations and their potential exploitation.
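To make the two hallucination types described in the abstract concrete, here is a minimal illustrative sketch; the library and member names below are hypothetical and are not taken from the paper's dataset or results.

```python
# Illustrative only: hypothetical examples of the two hallucination types studied.

# 1) Library name hallucination (invalid import): the package does not exist,
#    so installing it would fail or, worse, resolve to a slopsquatted package
#    registered by an attacker under the hallucinated name.
import fastjsonx  # hypothetical, non-existent library

# 2) Library member hallucination (invalid call from a valid library): the
#    `requests` package is real, but it exposes no `fetch_json` helper, so
#    this call raises AttributeError at runtime.
import requests

data = requests.fetch_json("https://example.com/api")  # hallucinated member
```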
Primary Area: foundation or frontier models, including LLMs
Submission Number: 8923