Interpretable Geometric Deep Learning via Learnable Randomness Injection

Published: 21 Oct 2022, Last Modified: 03 Jul 2024, AI4Science Poster
Keywords: Geometric Deep Learning, Interpretability, Graph Neural Networks
Abstract: Point cloud data is ubiquitous in scientific fields. Recently, geometric deep learning (GDL) has been widely applied to solve prediction tasks with such data. However, GDL models are often complicated and hardly interpretable, which raises concerns for scientists deploying these models in scientific analysis and experiments. This work proposes a general mechanism named learnable randomness injection (LRI), which allows building inherently interpretable models based on general GDL backbones. LRI-induced models, once trained, can detect the points in the point cloud data that carry information indicative of the prediction label. We also propose four datasets from real scientific applications, covering high energy physics and biochemistry, to evaluate the LRI mechanism. Compared with previous post-hoc interpretation methods, the points detected by LRI align much better and more stably with the ground-truth patterns that have actual scientific meaning. LRI is grounded in the information bottleneck principle, and LRI-induced models also show greater robustness to distribution shifts between training and test scenarios. Our code and datasets are available at https://github.com/Graph-COM/LRI.
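The core idea can be illustrated with a minimal sketch: a per-point Bernoulli gate is learned on top of a GDL backbone's point embeddings and regularized with a KL term toward a fixed prior, in the spirit of the information-bottleneck grounding described above. The names (PointGate, prior_p) and the concrete relaxation used here are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Hedged sketch of a Bernoulli-style learnable randomness injection layer.
# All module and argument names are illustrative, not from the LRI codebase.
import torch
import torch.nn as nn


class PointGate(nn.Module):
    """Predicts a keep-probability per point and samples a relaxed Bernoulli mask."""

    def __init__(self, hidden_dim: int, prior_p: float = 0.5, temperature: float = 1.0):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)  # per-point logit from backbone embeddings
        self.prior_p = prior_p                  # Bernoulli prior used in the KL (IB-style) term
        self.temperature = temperature

    def forward(self, point_emb: torch.Tensor):
        # point_emb: (num_points, hidden_dim) embeddings from a GDL backbone
        logits = self.scorer(point_emb).squeeze(-1)   # (num_points,)
        p = torch.sigmoid(logits)                     # keep probabilities
        if self.training:
            # Concrete/Gumbel relaxation of Bernoulli sampling (differentiable)
            u = torch.rand_like(p).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)
            mask = torch.sigmoid((logits + noise) / self.temperature)
        else:
            mask = p                                  # expected mask at test time
        # KL(Bern(p) || Bern(prior_p)): regularizer limiting information kept per point
        prior = torch.full_like(p, self.prior_p)
        kl = (p * torch.log(p / prior + 1e-10)
              + (1 - p) * torch.log((1 - p) / (1 - prior) + 1e-10)).mean()
        return mask, kl


# Usage sketch: mask point embeddings before pooling/prediction and add the KL term
# to the task loss; points with high learned keep-probability form the interpretation.
gate = PointGate(hidden_dim=64)
point_emb = torch.randn(128, 64)          # placeholder backbone output for 128 points
mask, kl = gate(point_emb)
masked_emb = point_emb * mask.unsqueeze(-1)
```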