Keywords: Commonsense reasoning, Compositional reasoning, Logical reasoning, Benchmark datasets, Evaluation of language models, Negation understanding, Diagnostic evaluation
Abstract: Commonsense reasoning often involves evaluating multiple plausible interpretations rather than selecting a single atomic answer, yet most benchmarks rely on single-label evaluation, obscuring whether statements are jointly plausible, mutually exclusive, or jointly implausible. We introduce LOGICAL-COMMONSENSEQA, a benchmark that reframes commonsense reasoning as logical composition over pairs of atomic statements using plausibility-level operators (AND, OR, and NEITHER/NOR). Evaluating instruction-tuned, reasoning-specialized, and fine-tuned models under zero-shot, few-shot, and chain-of-thought prompting, we find that while models perform reasonably well on conjunctive reasoning and only moderately on disjunctive reasoning, performance degrades sharply on negation-based questions. LOGICAL-COMMONSENSEQA exposes fundamental reasoning limitations and provides a controlled framework for advancing compositional commonsense reasoning.
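To make the compositional framing concrete, the sketch below (not the authors' construction; the function and label names are hypothetical) shows how a composite label could be derived from binary plausibility judgments on two atomic statements under each operator:

```python
# A minimal sketch, assuming each atomic statement carries a binary
# plausible/implausible label; this is an illustration of the operator
# semantics described in the abstract, not the benchmark's actual code.

from itertools import product

def compose(p_a: bool, p_b: bool, op: str) -> bool:
    """Truth value of the composed question given atomic plausibilities p_a, p_b."""
    if op == "AND":       # both statements are plausible
        return p_a and p_b
    if op == "OR":        # at least one statement is plausible
        return p_a or p_b
    if op == "NEITHER":   # neither statement is plausible
        return (not p_a) and (not p_b)
    raise ValueError(f"unknown operator: {op}")

# Enumerate the label each operator assigns to every pair of atomic labels.
for op in ("AND", "OR", "NEITHER"):
    for p_a, p_b in product((True, False), repeat=2):
        print(f"{op:7} A={p_a!s:5} B={p_b!s:5} -> {compose(p_a, p_b, op)}")
```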
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: corpus creation, benchmarking, evaluation, NLP datasets
Contribution Types: Data resources, Data analysis
Languages Studied: English
Submission Number: 5865