Benchmarking Chinese Commonsense Reasoning with a Multi-hop Reasoning Perspective

ACL ARR 2025 May Submission6550 Authors

20 May 2025 (modified: 03 Jul 2025) · License: CC BY 4.0
Abstract: While Large Language Models (LLMs) have demonstrated advanced reasoning capabilities, their comprehensive evaluation in general Chinese-language contexts remains understudied. To bridge this gap, we propose **C**hinese **C**ommonsense **M**ulti-h**O**p **R**easoning (CCMOR), a novel benchmark designed to evaluate LLMs' ability to integrate Chinese-specific factual knowledge with multi-step logical reasoning. Specifically, we first construct a domain-balanced seed set from existing QA datasets, then develop an LLM-powered pipeline to generate multi-hop questions anchored on chains of factual units. To ensure the quality of the resulting dataset, we implement a human-in-the-loop verification system in which domain experts systematically validate and refine the generated questions. Using CCMOR, we evaluate state-of-the-art LLMs, demonstrating persistent limitations in their ability to process long-tail knowledge and execute knowledge-intensive reasoning. Notably, retrieval-augmented architectures substantially mitigate these knowledge gaps, yielding significant performance gains.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Question Answering, Resources and Evaluation
Contribution Types: Data resources
Languages Studied: Chinese, English
Submission Number: 6550