Complex Reasoning over Logical Queries on Commonsense Knowledge Graphs

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission
Abstract: Reasoning about events, their relationships, and the implicit context they entail is a core ability of event commonsense reasoning that state-of-the-art language models still struggle with. Moreover, data scarcity makes it challenging to train systems that can generate commonsense inferences for contexts and questions involving interactions between complex events. To address this gap, we present COM$^2$ (COMplex COMmonsense), a new dataset created by sampling multi-hop logical queries (e.g., the joint effect or cause of both events A and B, or the effect of the effect of event C) from an existing commonsense knowledge graph (CSKG) and verbalizing them using handcrafted rules and large language models. Our experiments show that language models trained on COM$^2$ exhibit significant improvements in complex reasoning ability, resulting in enhanced zero-shot performance on both in-domain and out-of-domain question answering and generative commonsense reasoning tasks, without expensive human annotations.
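The abstract's dataset-construction recipe (sample multi-hop logical queries from a CSKG, then verbalize them with handcrafted rules) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the toy triples, the `causes` relation name, and the verbalization template are all assumptions for the sake of the example, covering only the "effect of the effect" 2-hop query type.

```python
# Toy CSKG as (head, relation, tail) triples. The event names and the
# "causes" relation are illustrative assumptions, not the actual graph
# or relation schema used by COM^2.
CSKG = [
    ("PersonX loses their job", "causes", "PersonX feels stressed"),
    ("PersonX feels stressed", "causes", "PersonX sleeps poorly"),
    ("PersonX drinks coffee late", "causes", "PersonX sleeps poorly"),
]

def two_hop_effect_queries(triples):
    """Sample 2-hop 'effect of the effect' queries: A -causes-> B -causes-> C."""
    index = {}
    for head, rel, tail in triples:
        if rel == "causes":
            index.setdefault(head, []).append(tail)
    queries = []
    for a, mids in index.items():
        for b in mids:
            for c in index.get(b, []):
                queries.append((a, b, c))
    return queries

def verbalize(query):
    """Turn a 2-hop query into a natural-language question/answer pair
    via a handcrafted template (a stand-in for the rule-based step)."""
    a, _, c = query
    return f"What might eventually happen after '{a}'? One possible outcome: '{c}'."

queries = two_hop_effect_queries(CSKG)
for q in queries:
    print(verbalize(q))
```

Joint-cause or joint-effect queries (e.g., the common effect of both A and B) would intersect the tail sets of two heads instead of chaining them; the LLM-based verbalization mentioned in the abstract would replace the fixed template with a generated paraphrase.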
Paper Type: long
Research Area: Semantics: Sentence-level Semantics, Textual Inference and Other areas
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English