Abstract: Recent advances in deep learning have been transforming the landscape of many domains, including health care. However, understanding the predictions of a deep network remains a challenge, which is especially sensitive in health care, where interpretability is key. Techniques that rely on saliency maps (highlighting the regions of an image that most influence the classifier's decision) are often used for this purpose. However, gradient fluctuations make saliency maps noisy and thus difficult to interpret at a human level. Moreover, models tend to focus on one particularly influential region of interest (ROI) in the image, even though other regions may be relevant to the decision. We propose a new framework that refines these saliency maps to generate segmentation masks over the ROIs in the initial image. As a second contribution, we propose to apply those masks to the original inputs and then evaluate our classifier on the masked inputs to identify previously undetected ROIs. This iterative procedure allows us to surface new regions of interest by extracting meaningful information from the saliency maps.
Keywords: Machine Learning, Deep Learning, Saliency Maps, Iterative Segmentation, Interpretability
Author Affiliation: Montréal Institute for Learning Algorithms (MILA), Université de Montréal, Imagia Cybernetics
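The iterative mask-and-re-evaluate loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy classifier, the finite-difference stand-in for gradient-based saliency, and the thresholding rule used to turn a saliency map into a binary mask are all assumptions made here for the example.

```python
import numpy as np

def saliency(model, x, eps=1e-3):
    """Hypothetical saliency map: finite-difference sensitivity of the
    model's score to each pixel (a stand-in for backprop gradients)."""
    base = model(x)
    grads = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        xp = x.copy()
        xp[idx] += eps
        grads[idx] = (model(xp) - base) / eps
    return np.abs(grads)

def iterative_roi_masks(model, x, n_iter=3, thresh=0.5):
    """Sketch of the iterative procedure: segment the current ROI from the
    saliency map, occlude it, and re-run the classifier on the masked
    input so that previously hidden ROIs can emerge."""
    masks = []
    x_cur = x.copy()
    for _ in range(n_iter):
        s = saliency(model, x_cur)
        if s.max() == 0:           # no remaining influential region
            break
        mask = s >= thresh * s.max()   # binarize saliency into an ROI mask
        masks.append(mask)
        x_cur = x_cur * (~mask)        # occlude the ROI for the next pass
    return masks

# Toy "classifier": its score is the brightest pixel, so saliency first
# concentrates on one spot even though a second bright spot also matters.
model = lambda img: img.max()
x = np.zeros((5, 5))
x[1, 1] = 1.0
x[3, 3] = 0.8
masks = iterative_roi_masks(model, x, n_iter=2)
# First pass highlights (1, 1); after occluding it, the second pass
# reveals the previously unidentified ROI at (3, 3).
```

The toy model deliberately saturates on a single region, mirroring the failure mode the abstract describes: only after the dominant ROI is masked out does the second relevant region receive any saliency.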