DetectBench: Can LLMs Piece Together Implicit Evidence for Long-Context Multi-Hop Reasoning?

ACL ARR 2024 April Submission199 Authors

15 Apr 2024 (modified: 14 May 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: Detecting evidence in the context is a key step in the process of reasoning. Evaluating and enhancing the evidence-detection capabilities of large language models (LLMs) can further strengthen their context-based reasoning performance. To this end, this paper proposes DetectBench, a benchmark for verifying the ability to detect and piece together implicit evidence within long contexts. DetectBench contains 3,928 multiple-choice questions, with an average of 190.6 tokens per question. Each question contains an average of 4.7 pieces of implicit evidence, and solving a question typically requires making 8.9 logical jumps based on this evidence to reach the correct answer. To enhance the performance of LLMs in evidence detection, this paper further proposes a Detective Reasoning Prompt and a Finetuning method. Experiments demonstrate that existing LLMs' abilities to detect evidence in long contexts are far inferior to humans'. However, the Detective Reasoning Prompt effectively enhances the evidence-detection capability of powerful LLMs, while the Finetuning method shows significant effects in enhancing the performance of weaker LLMs. Moreover, when LLMs' evidence-detection abilities are improved, their final reasoning performance is also enhanced accordingly.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: large language model, evidence detection, multi-hop commonsense reasoning
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English, Chinese
Submission Number: 199