Keywords: LLM, LLM hallucinations, detection
Abstract: Large Language Models (LLMs) have become increasingly important in our daily lives. However, these models can sometimes produce false or misleading answers, known as hallucinations. It is therefore important to detect such hallucinations in LLM-generated text. In this project, we propose a method for detecting these hallucinations.
Submission Number: 27