TabCBM: Concept-based Interpretable Neural Networks for Tabular Data

Published: 28 Jul 2023, Last Modified: 28 Jul 2023
Accepted by: TMLR
Abstract: Concept-based interpretability addresses the opacity of deep neural networks by constructing explanations for a model's predictions using high-level units of information referred to as concepts. Research in this area, however, has mainly focused on image and graph-structured data, leaving high-stakes tasks whose data is tabular out of reach of existing methods. In this paper, we address this gap by introducing the first definition of what a high-level concept may entail in tabular data. We use this definition to propose Tabular Concept Bottleneck Models (TabCBMs), a family of interpretable self-explaining neural architectures capable of learning high-level concept explanations for tabular tasks. Because our method can produce concept-based explanations whether partial or no concept supervision is available at training time, it is adaptable to settings where concept annotations are missing. We evaluate our method on both synthetic and real-world tabular tasks and show that TabCBM outperforms or performs competitively with state-of-the-art methods, while providing a high level of interpretability as measured by its ability to discover known high-level concepts. Finally, we show that TabCBM can discover important high-level concepts in synthetic datasets inspired by critical tabular tasks (e.g., single-cell RNAseq) and allows for human-in-the-loop concept interventions, in which an expert can identify and correct mispredicted concepts to boost the model's performance.
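To make the concept-bottleneck idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a tabular model that predicts labels only through an intermediate layer of concept scores and supports expert intervention on mispredicted concepts at inference time. All class and parameter names here are illustrative assumptions; this is not TabCBM's actual architecture (see the linked code repository for the authors' implementation).

```python
# Hypothetical sketch of a concept-bottleneck-style model for tabular data:
# raw features -> concept scores -> label. Not the paper's TabCBM method.
import torch
import torch.nn as nn

class ToyTabularConceptBottleneck(nn.Module):
    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Maps raw tabular features to interpretable concept activations.
        self.concept_encoder = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_concepts),
            nn.Sigmoid(),  # each concept scored in [0, 1]
        )
        # Predicts the label from the concept scores alone (the bottleneck).
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x, concept_intervention=None):
        concepts = self.concept_encoder(x)
        # Human-in-the-loop intervention: an expert overwrites selected
        # (mispredicted) concept scores before the final prediction.
        if concept_intervention is not None:
            mask, values = concept_intervention  # boolean mask, corrected scores
            concepts = torch.where(mask, values, concepts)
        return self.label_predictor(concepts), concepts

# Example usage with random inputs and an intervention on concept 0.
model = ToyTabularConceptBottleneck(n_features=20, n_concepts=5, n_classes=2)
x = torch.randn(8, 20)
mask = torch.zeros(8, 5, dtype=torch.bool)
mask[:, 0] = True  # expert corrects the first concept for every sample
values = torch.ones(8, 5)
logits, concepts = model(x, concept_intervention=(mask, values))
```

Because the label depends on the inputs only through the concept scores, inspecting or correcting those scores directly changes the downstream prediction, which is what makes such interventions possible.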
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/mateoespinosa/tabcbm
Supplementary Material: zip
Assigned Action Editor: ~Pin-Yu_Chen1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1053