Do KG-augmented Models Leverage Knowledge as Humans Do?

Anonymous

17 Jan 2022 (modified: 05 May 2023) · Submitted to BT@ICLR2022 · Readers: Everyone
Keywords: neural symbolic reasoning, knowledge graph, interpretability, model explanation, faithfulness, commonsense question answering, recommender system
Abstract: Knowledge Graphs (KGs) can help neural-symbolic models improve performance on various knowledge-intensive tasks, such as recommendation and question answering. Concretely, neural reasoning over KGs may "explain" which information is relevant for inference. However, as the old saying goes, "seeing is not believing," so it is natural to ask: "do KG-augmented models really behave as we expect?" This post reviews the historical development of KG-augmented models and discusses a recent work that raises this question. Interestingly, empirical results show that perturbed KGs can maintain downstream performance, which subverts common assumptions about KG-augmented models' abilities. We believe this topic is important for neural-symbolic reasoning and can guide future work on designing KG-augmented models.
ICLR Paper: https://openreview.net/forum?id=b7g3_ZMHnT0