[Proposal-ML] LLM Hallucination Detection: Fine-Tuning Gemma2

26 Oct 2024 (modified: 05 Nov 2024) · THU 2024 Fall AML Submission · CC BY 4.0
Keywords: LLM, LLM hallucinations, detection
Abstract: Large Language Models (LLMs) have become increasingly important in our daily lives. However, these models sometimes produce false or misleading answers, known as hallucinations, so it is important to detect them in LLM-generated text. In this project, we propose a method for detecting such hallucinations by fine-tuning Gemma2 (a sketch of one possible setup appears below).
Submission Number: 27
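The proposal does not specify how Gemma2 would be fine-tuned, so the following is only a minimal, hedged sketch of one plausible setup: framing hallucination detection as binary sequence classification and attaching LoRA adapters via Hugging Face Transformers and PEFT. The checkpoint name, label scheme, and scoring interface are assumptions, not details from the submission.

```python
# Hedged sketch: hallucination detection as binary sequence classification
# with LoRA fine-tuning of Gemma 2. Checkpoint, labels, and input format
# are assumptions; the proposal only states that Gemma2 will be fine-tuned.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

MODEL_NAME = "google/gemma-2-2b"  # assumed checkpoint; the proposal only says "Gemma2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=2,                 # assumed labels: 0 = faithful, 1 = hallucinated
    torch_dtype=torch.bfloat16,
)
model.config.pad_token_id = tokenizer.pad_token_id

# LoRA keeps the number of trainable parameters small compared to full fine-tuning.
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)

def hallucination_score(question: str, answer: str) -> float:
    """Return the predicted probability that `answer` hallucinates w.r.t. `question`."""
    inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
```

After training on a labeled dataset of (question, answer, label) triples, `hallucination_score` would flag generations whose predicted probability exceeds a chosen threshold; the dataset and threshold are left open here because the proposal does not specify them.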