Causal ATTention Multiple Instance Learning for Whole Slide Image Classification

Published: 05 Sept 2024, Last Modified: 16 Oct 2024 · ACML 2024 Conference Track · CC BY 4.0
Keywords: Causal intervention, front-door adjustment, multiple instance learning, whole slide image classification
Abstract: We propose a new multiple instance learning (MIL) method called Causal ATTention Multiple Instance Learning (CATTMIL) to alleviate dataset bias for more accurate classification of whole slide images (WSIs). Different kinds of dataset bias arise from confounders rooted in the data-generation process and/or the pre-training dataset of MIL. Such confounders can mislead MIL models into learning spurious correlations between instances and the bag label; these spurious correlations, in turn, impede the generalization ability of models and hurt the final performance. To counteract the negative impact of confounders, CATTMIL performs causal intervention via the front-door adjustment with a Causal ATTention (CATT) mechanism. This enables CATTMIL to remove spurious correlations and thereby estimate the causal effect of instances on the bag label. Unlike previous deconfounded MIL methods, CATTMIL does not need to approximate confounder values. As a result, CATTMIL brings further performance gains to existing schemes and achieves state-of-the-art results in WSI classification. Extensive experiments on two widely used datasets, TCGA-NSCLC and CAMELYON16, demonstrate CATTMIL's effectiveness in suppressing dataset bias and enhancing generalization capability.
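The front-door adjustment described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the use of a global feature dictionary sampled from other bags, and the plain NumPy attention are all assumptions, based on the common way front-door adjustment is realized with in-sample attention (attending within the current bag to form the mediator) and cross-sample attention (approximating the outer expectation over inputs with features drawn from other bags), so that no confounder values need to be estimated.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_attention_pool(X, D, Wq, Wk):
    """Front-door-style attention pooling (illustrative sketch).

    X : (n, d) instance features of one WSI bag (mediator samples)
    D : (m, d) hypothetical global dictionary of features from other
        bags, approximating the expectation over inputs
    Returns a deconfounded bag representation of shape (2 * d,).
    """
    q = X @ Wq                            # queries from the bag's instances
    # In-sample attention: estimate the mediator within the current bag
    is_att = softmax(q @ (X @ Wk).T)      # (n, n)
    z_in = is_att @ X                     # (n, d)
    # Cross-sample attention: approximate E_x[...] with other bags' features
    cs_att = softmax(q @ (D @ Wk).T)      # (n, m)
    z_cross = cs_att @ D                  # (n, d)
    # Combine both estimates and mean-pool into one bag embedding
    return np.concatenate([z_in, z_cross], axis=1).mean(axis=0)

rng = np.random.default_rng(0)
d = 16
bag = rng.normal(size=(8, d))            # 8 patch features in one WSI bag
dictionary = rng.normal(size=(32, d))    # features sampled across the dataset
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(d, d))
z = causal_attention_pool(bag, dictionary, Wq, Wk)
print(z.shape)                           # (32,)
```

In a trained model the bag embedding `z` would feed a classifier head, and `Wq`/`Wk` would be learned; here they are random for illustration only.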
Supplementary Material: zip
Primary Area: General Machine Learning (active learning, bayesian machine learning, clustering, imitation learning, learning to rank, meta-learning, multi-objective learning, multiple instance learning, multi-task learning, neuro-symbolic methods, etc.)
Student Author: Yes
Submission Number: 289