Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation

28 Sep 2020 (modified: 25 Jan 2021) · ICLR 2021 Poster · Readers: Everyone
  • Keywords: neural symbolic reasoning, interpretability, model explanation, faithfulness, knowledge graph, commonsense question answering, recommender system
  • Abstract: Knowledge graphs (KGs) have helped neural-symbolic models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure. Our findings raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Supplementary Material: zip
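The abstract mentions that even "simple heuristics" can produce deceptive KG perturbations. As a toy illustration of what one such heuristic could look like, here is a minimal, hypothetical sketch in Python that randomly replaces relation types over (head, relation, tail) triples. The function name `perturb_relations` and the tiny example KG are illustrative assumptions; this is not the paper's actual perturbation procedure or its RL policy.

```python
import random

def perturb_relations(triples, perturb_ratio=1.0, seed=0):
    """Return a copy of `triples` with a fraction of relations swapped.

    triples: list of (head, relation, tail) tuples.
    perturb_ratio: fraction of triples whose relation is replaced
        with a different, randomly chosen relation type.

    NOTE: illustrative stand-in for a "simple heuristic" perturbation,
    not the procedure used in the paper.
    """
    rng = random.Random(seed)
    relations = sorted({r for _, r, _ in triples})
    perturbed = []
    for h, r, t in triples:
        if rng.random() < perturb_ratio and len(relations) > 1:
            # Swap in a different relation, keeping graph structure intact.
            r = rng.choice([x for x in relations if x != r])
        perturbed.append((h, r, t))
    return perturbed

# Hypothetical toy KG for demonstration.
kg = [
    ("dog", "IsA", "animal"),
    ("dog", "HasA", "tail"),
    ("animal", "CapableOf", "moving"),
]
print(perturb_relations(kg, perturb_ratio=1.0))
```

A perturbation like this preserves the graph's connectivity while scrambling its semantics; the paper's finding is that KG-augmented models can retain downstream performance under such semantic corruption, which is what casts doubt on their explanations.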
