Abstract: This study investigates the effectiveness of Large Language Models (LLMs) in detecting deception using a Retrieval Augmented Generation (RAG) framework for few-shot learning in domain-agnostic settings. Our approach draws on the sophisticated reasoning capabilities and extensive knowledge of LLMs to identify deceptive statements across varied contexts, with a focus on the explainability of the detection process. This emphasis on explainability enables a detailed analysis of how the model distinguishes between truthful and deceptive statements. Additionally, we examine how different definitions of deception, from overt falsehoods to subtle misrepresentations, affect the model's accuracy. Our main contributions are initial insights into the adaptability of LLMs for deception detection and an account of the challenges involved, which we hope will encourage further exploration in this area.
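To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) of RAG-style few-shot deception detection: retrieve labeled statements similar to the query, assemble them into a prompt that also states the working definition of deception, and ask the LLM for a verdict plus an explanation. The functions `embed` and `llm_complete` are hypothetical stand-ins for any embedding model and completion API.

```python
# Sketch only: assumes an embedding model (`embed`) and an LLM completion
# endpoint (`llm_complete`); neither is specified by the paper's abstract.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; replace with a real embedding model."""
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat/completion API."""
    raise NotImplementedError

def retrieve_examples(statement, corpus, k=4):
    """Return the k labeled statements most similar to the query statement."""
    q = embed(statement)
    def cosine(ex):
        e = ex["embedding"]
        return float(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e) + 1e-9))
    return sorted(corpus, key=cosine, reverse=True)[:k]

def classify(statement, corpus, deception_definition):
    """Build a few-shot prompt from retrieved examples and request a labeled,
    explained verdict from the LLM."""
    shots = "\n".join(
        f'Statement: "{ex["text"]}"\nLabel: {ex["label"]}'
        for ex in retrieve_examples(statement, corpus)
    )
    prompt = (
        f"Deception is defined as: {deception_definition}\n\n"
        f"Labeled examples:\n{shots}\n\n"
        f'Statement: "{statement}"\n'
        "Answer 'truthful' or 'deceptive' and explain your reasoning."
    )
    return llm_complete(prompt)
```

Because the definition of deception is passed in as part of the prompt, the same sketch also illustrates how alternative definitions (overt falsehoods versus subtle misrepresentations) can be swapped in to study their effect on accuracy.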