Cross Attention Transformers for Unsupervised Whole-Body PET Anomaly Detection with Multi-modal Conditioning

09 Dec 2021 (modified: 16 May 2023) · Submitted to MIDL 2022 · Readers: Everyone
Keywords: Transformers, Unsupervised Anomaly Detection, Cross-Attention, Multi-modal, Vector Quantized Variational Autoencoder, Whole-Body
Abstract: Cancers can have highly heterogeneous uptake patterns best visualised in positron emission tomography. These patterns are essential to detect, diagnose, stage and predict the evolution of cancer. Due to this heterogeneity, a general-purpose cancer detection model can be built using unsupervised anomaly detection models; these models learn a healthy representation of tissue and detect cancer by predicting deviations from healthy appearances. This task alone requires models capable of accurately learning long-range interactions between organs, imaging patterns, and other abstract features with high levels of expressivity. Transformers suitably satisfy these requirements and have been shown to achieve state-of-the-art results in unsupervised anomaly detection when trained on healthy data. This work expands upon such approaches by introducing multi-modal conditioning of the transformer via cross-attention, i.e., supplying anatomical reference information from paired CT images to aid the PET anomaly detection task. Using 83 whole-body PET/CT samples containing various cancer types, we show that our anomaly detection method is robust and capable of achieving accurate cancer localisation results even in cases where healthy training data is unavailable. Furthermore, the proposed model uncertainty, in conjunction with a kernel density estimation approach, is shown to provide a statistically robust alternative to residual-based anomaly maps. Overall, superior performance of the proposed method is demonstrated against state-of-the-art alternatives, drawing attention to the potential of these approaches in anomaly detection tasks.
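The conditioning mechanism described in the abstract, where queries come from the PET token sequence while keys and values come from the paired CT sequence, can be sketched as follows. This is a minimal single-head illustration in NumPy, not the authors' implementation; all array names, dimensions, and weight matrices are hypothetical placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pet_tokens, ct_tokens, Wq, Wk, Wv):
    """Single-head cross-attention sketch: PET tokens attend to CT tokens,
    so anatomical context from CT conditions each PET representation."""
    Q = pet_tokens @ Wq                      # queries from PET, (n_pet, d)
    K = ct_tokens @ Wk                       # keys from CT,     (n_ct, d)
    V = ct_tokens @ Wv                       # values from CT,   (n_ct, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # scaled dot-product, (n_pet, n_ct)
    return softmax(scores) @ V               # CT-conditioned PET tokens, (n_pet, d)

# Toy usage with random data (shapes are illustrative only).
rng = np.random.default_rng(0)
d = 8
pet = rng.normal(size=(5, d))   # 5 hypothetical PET tokens
ct = rng.normal(size=(7, d))    # 7 hypothetical CT tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = cross_attention(pet, ct, Wq, Wk, Wv)
print(out.shape)  # (5, 8): one conditioned vector per PET token
```

Note that the output keeps the PET sequence length while mixing in CT information, which is what allows the CT volume to act purely as a reference signal rather than a second reconstruction target.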
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: validation/application paper
Primary Subject Area: Unsupervised Learning and Representation Learning
Secondary Subject Area: Detection and Diagnosis
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.