Keywords: commonsense QA, knowledge graph
Abstract: Leveraging pre-trained language models (LMs) together with knowledge graphs (KGs) has become a common paradigm for commonsense question answering (CommonsenseQA), as it combines linguistic understanding with structured factual knowledge. In this setting, the core challenge lies not in the availability of external knowledge, but in selecting the candidate answer that best satisfies both linguistic semantic constraints and structured knowledge constraints. Existing language model–knowledge graph (LM–KG) approaches predominantly rely on cross-modal alignment, assuming that increased representational agreement leads to improved reasoning. However, in CommonsenseQA, semantic discrepancies between language models and knowledge graphs frequently indicate constraint violations, where linguistic plausibility conflicts with graph structure or vice versa. By smoothing out such discrepancies, alignment-centric methods obscure critical reasoning signals.
To address this limitation, we propose the Discrepancy-aware Commonsense Reasoning Network (DCRN), which explicitly models the semantic discrepancy between LMs and KGs as a reasoning signal rather than noise. DCRN leverages discrepancy-aware learning to support more reliable joint reasoning for question answering.
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: commonsense QA, knowledge base QA
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2233