Keywords: Interpretable, Classification, Regression, Deep Generative Networks
TL;DR: This paper presents a framework for regression with feature attribution using deep generative methods.
Abstract: Feature attribution (FA), the assignment of class relevance to different locations in an image, is important for many classification and regression problems, but is particularly crucial within the neuroscience domain, where accurate mechanistic models of behaviour or disease require knowledge of all features discriminative of a trait. At the same time, predicting class relevance from brain images is challenging, as phenotypes are typically heterogeneous and changes occur against a background of significant natural variation. Here, we present an extension of the ICAM framework for creating prediction-specific FA maps through image-to-image translation.
Paper Type: both
Primary Subject Area: Interpretability and Explainable AI
Secondary Subject Area: Application: Radiology
Paper Status: based on accepted/submitted journal paper
Source Code Url: https://github.com/CherBass/ICAM
Data Set Url: https://www.ukbiobank.ac.uk/
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Source Latex: zip