Abstract: Commonsense knowledge graphs (CKGs) store vast amounts of commonsense knowledge as triples whose nodes consist of free-form text. CKG reasoning aims to predict missing nodes in incomplete commonsense triples, which is challenging because it requires accurate embeddings for reasoning. Compared to conventional knowledge graphs (KGs), CKGs have deficient structural information due to their sparsity and contain nodes that are hard to distinguish due to their conceptual diversity. These issues limit the performance of previous reasoning methods, which struggle to obtain precise CKG representations. To address these issues, we propose a context-aware CKG reasoning framework with path-guided explanations, named CoRPe. First, CoRPe constructs context sentences based on the target commonsense triple using designed templates. The context captures reasoning paths instantiated from first-order logic. Second, to improve CKG representations, CoRPe injects context semantics and employs a context-augmented tuning strategy on a pre-trained language model (PLM) via synergistic optimization. Finally, CoRPe embeds structural information using a graph convolutional network (GCN) and associates it with the textual semantics for joint scoring. Extensive experiments on two CKGs show that CoRPe outperforms state-of-the-art KG and CKG reasoning baselines in terms of embedding and reasoning performance. Furthermore, the interpretability of CoRPe is reflected in the implicit logic it follows during reasoning.