Can Deception Detection Go Deeper? Dataset, Evaluation, and Benchmark for Deception Reasoning

ACL ARR 2025 February Submission 3351 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Deception detection has garnered increasing attention due to its significance in real-world scenarios, with its main goal being to identify lies from an individual's external behaviors. However, such behavioral cues are often subjective and tied to personal habits. To this end, we extend deception detection to *deception reasoning*, which further provides objective evidence to support the subjective judgment. Specifically, given a potential lie and a set of basic facts, we analyze the inconsistencies between the statement and the facts, as well as the underlying intentions, to explain why the statement may be a lie. Compared with traditional deception detection, this task is more applicable to real-world scenarios; for example, in an interrogation, the police should judge whether a person is lying based on solid evidence. This paper presents our initial attempts at this task, including dataset construction and the definition of evaluation metrics. Meanwhile, the task can serve as a benchmark for evaluating the reasoning capability of large language models. Our code and data are provided in the supplementary material.
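To make the task setup described above concrete, below is a minimal sketch of what a single deception-reasoning instance might look like. The field names and example values are illustrative assumptions for exposition only; they are not the dataset's actual schema.

```python
# Hypothetical sketch of one deception-reasoning instance (not the paper's real schema).
instance = {
    "statement": "I was at home all evening on the night of the theft.",
    "facts": [
        "A transit card registered to the speaker was used downtown at 9:40 pm.",
        "A neighbor reports the speaker's apartment lights were off until midnight.",
    ],
    "label": "lie",  # the subjective judgment from traditional deception detection
    "reasoning": (
        "The claim of being home all evening is inconsistent with the transit "
        "record and the dark apartment; a plausible intention is to construct an alibi."
    ),
}

if __name__ == "__main__":
    # A model would be asked to produce the 'reasoning' field given 'statement' and 'facts'.
    print(instance["reasoning"])
```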
Paper Type: Long
Research Area: Human-Centered NLP
Research Area Keywords: deception reasoning, dataset, evaluation, benchmark
Contribution Types: Data resources, Data analysis
Languages Studied: Chinese, English
Submission Number: 3351