PRISM-3D: Periskeletal Region-aware Imaging with Segmentation-guided Modeling Using 3D Deep Learning for NSCLC Survival Prediction

Published: 08 Mar 2026, Last Modified: 08 Mar 2026 · ICCSIC 2026 Oral · CC BY 4.0
Track: Track 5: Explainable AI, Causal Inference, and Model Transparency
Keywords: Non-small cell lung cancer, survival prediction, periskeletal anatomy, segmentation-guided learning, 3D convolutional neural networks, nnU-Net, model interpretability
TL;DR: A segmentation-guided 3D deep learning model that uses periskeletal anatomy from chest CT to predict two-year NSCLC survival with interpretable, anatomically grounded attention.
Abstract: Non-small cell lung cancer (NSCLC) exhibits substantial survival heterogeneity even among patients with similar clinical staging, limiting the effectiveness of tumor-centric prognostic models. We propose PRISM-3D, a periskeletal region-aware, segmentation-guided 3D deep learning framework for NSCLC survival prediction that explicitly incorporates patient-level anatomical context. Using 422 chest CT scans from the Lung1 cohort of The Cancer Imaging Archive, periskeletal anatomical regions were manually annotated and used to train an nnU-Net segmentation model, which achieved a Dice score of 0.946. The resulting masks were integrated as an explicit spatial prior for a 3D ResNet-18 classifier trained to predict two-year survival. On a held-out internal validation cohort of 122 patients, PRISM-3D achieved an AUC of approximately 0.72, matching or exceeding the reported performance of prior radiomics- and deep learning–based Lung1 benchmarks. Grad-CAM analysis demonstrates that model predictions are driven by periskeletal anatomical regions rather than intrapulmonary features alone, indicating that periskeletal context provides complementary and interpretable prognostic information beyond tumor-centric imaging representations.
Submission Number: 75