Using a Cross-Task Grid of Linear Probes to Interpret CNN Model Predictions On Retinal Images

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: linear probes, interpretability, medical imaging
Abstract: We analyze the relationships and shared structure among different prediction tasks on a dataset of retinal images using linear probes: linear regression models trained on a "target" task, using as input the embeddings from a deep convolutional neural network (CNN) trained on a "source" task. We apply this method to all possible pairings of 101 tasks in the UK Biobank dataset of retinal images, yielding $\sim$193k models. We analyze the performance of these linear probes by source task, target task, and layer depth, and observe that representations from the middle layers of the network are the most generalizable. We find that some target tasks are predicted accurately regardless of the source task, while others are predicted more accurately from embeddings trained on correlated source tasks than from embeddings trained on the target task itself. Finally, we investigate the principles that might be at work using synthetic experiments with images generated from a "dead leaves" model.
One-sentence Summary: We use linear probes as a tool to interpret CNN model predictions on retinal images.
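The linear-probe setup described in the abstract can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the authors' code: the embedding matrix plays the role of features from an intermediate CNN layer trained on a "source" task, and the probe is a ridge-regularized linear regression onto a "target" task.

```python
# Minimal sketch of a linear probe (hypothetical data; not the authors'
# code and not the UK Biobank dataset).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Stand-in for embeddings of n images from an intermediate layer of a
# CNN trained on some "source" task; d is the embedding dimension.
n, d = 1000, 64
embeddings = rng.normal(size=(n, d))

# Stand-in for a "target" task label that is partly linear in the
# embedding, plus observation noise.
w_true = rng.normal(size=d)
target = embeddings @ w_true + 0.1 * rng.normal(size=n)

# The probe: a ridge-regularized linear regression from source-task
# embeddings to the target task, evaluated on a held-out split.
probe = Ridge(alpha=1.0)
probe.fit(embeddings[:800], target[:800])
score = r2_score(target[800:], probe.predict(embeddings[800:]))
print(f"held-out R^2 of the probe: {score:.3f}")
```

In the paper's grid, a probe like this would be fit for every (source task, target task, layer) combination, and its held-out score used to measure how much target-relevant information the source representation carries.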