Abstract: While Large Language Models (LLMs) have demonstrated advanced reasoning capabilities, the comprehensive evaluation of these capabilities in general Chinese-language contexts remains understudied.
To bridge this gap, we propose **C**hinese **C**ommonsense **M**ulti-h**O**p **R**easoning (CCMOR), a novel benchmark designed to evaluate LLMs' ability to integrate Chinese-specific factual knowledge with multi-step logical reasoning.
Specifically, we first construct a domain-balanced seed set from existing QA datasets, and then develop an LLM-powered pipeline that generates multi-hop questions anchored on chains of factual units.
To ensure the quality of the resulting dataset, we implement a human-in-the-loop verification system in which domain experts systematically validate and refine the generated questions.
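As a rough illustration of such a generation step, the sketch below composes a multi-hop question from a chain of factual units via an LLM call. The `FactUnit` structure, prompt, and `chain_to_question` helper are hypothetical and assume an OpenAI-compatible chat API; this is not the paper's actual pipeline.

```python
from dataclasses import dataclass
from openai import OpenAI  # assumed OpenAI-compatible client; any chat API would do


@dataclass
class FactUnit:
    """One hop in a factual chain: (subject, relation, object)."""
    subject: str
    relation: str
    obj: str


def chain_to_question(chain: list[FactUnit], model: str = "gpt-4o") -> str:
    """Ask an LLM for one question whose answer requires every hop in `chain`."""
    facts = "\n".join(
        f"{i}. {f.subject} | {f.relation} | {f.obj}"
        for i, f in enumerate(chain, 1)
    )
    prompt = (
        "Rewrite the following chain of facts as a single multi-hop Chinese "
        "question whose answer requires combining every fact in the chain:\n"
        + facts
    )
    resp = OpenAI().chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return resp.choices[0].message.content.strip()


# Hypothetical 2-hop chain: Du Fu -> birthplace -> Gong County -> province -> Henan
chain = [FactUnit("杜甫", "出生地", "巩县"), FactUnit("巩县", "所属省份", "河南省")]
print(chain_to_question(chain))
```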
Using CCMOR, we evaluate state-of-the-art LLMs and find persistent limitations in their ability to handle long-tail knowledge and perform knowledge-intensive reasoning.
Notably, retrieval-augmented architectures substantially mitigate these knowledge gaps, yielding significant performance gains.
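To make the retrieval-augmented setup concrete, here is a minimal sketch that prepends BM25-retrieved passages to the question before querying the model. The `Retriever` class and character-level tokenizer are illustrative assumptions; the abstract does not specify the retrieval configuration actually used.

```python
from openai import OpenAI
from rank_bm25 import BM25Okapi  # pip install rank-bm25


def tokenize(text: str) -> list[str]:
    # Crude character-level tokenization; a real system would use a
    # Chinese word segmenter such as jieba.
    return list(text)


class Retriever:
    """Toy BM25 retriever over an in-memory passage corpus."""

    def __init__(self, passages: list[str]):
        self.passages = passages
        self.bm25 = BM25Okapi([tokenize(p) for p in passages])

    def top_k(self, query: str, k: int = 3) -> list[str]:
        return self.bm25.get_top_n(tokenize(query), self.passages, n=k)


def answer_with_rag(question: str, retriever: Retriever,
                    model: str = "gpt-4o") -> str:
    """Answer `question` grounded in the top-k retrieved passages."""
    context = "\n".join(retriever.top_k(question))
    prompt = (
        "Answer the question using only the passages below.\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    resp = OpenAI().chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()
```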
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Question Answering, Resources and Evaluation
Contribution Types: Data resources
Languages Studied: Chinese, English
Submission Number: 6550