Insights from GNN and Cellular Explanations: A Multistage Framework to Interpret Automatic Diagnosis in Histopathology
Keywords: Whole Slide Imaging, Digital Pathology, Graph Neural Networks, Explainable Artificial Intelligence, GNNExplainer, HoVer-Net, Multiscale Interpretability
Abstract: Whole Slide Imaging has transformed digital pathology by capturing tissue architecture at cellular resolution, yet its gigapixel scale and complex spatial organization challenge automated analysis. Traditional deep learning methods often overlook these spatial dependencies, limiting their diagnostic reliability. Graph-based learning addresses this limitation by representing tissue as interconnected cellular and structural entities, preserving the spatial and morphological context essential for accurate cancer diagnosis. This paper investigates how Graph Neural Networks (GNNs) and Explainable Artificial Intelligence (XAI) can jointly enhance the performance and interpretability of histopathological diagnosis. By modeling both cellular and tissue-level relationships, GNNs capture biologically meaningful patterns, while methods such as GNNExplainer reveal the rationale behind predictions. Integration with HoVer-Net further enables multiscale interpretability, reflecting the hierarchical reasoning process of pathologists. Extensive experiments show that attention-based GNN architectures outperform standard graph convolutional models while remaining efficient and interpretable. Beyond accuracy, the results demonstrate that graph-based learning combined with XAI provides a robust, biologically grounded foundation for reliable diagnostic systems in computational pathology.
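To make the abstract's pipeline concrete, the sketch below shows one common way such a system can be wired together with PyTorch Geometric: an attention-based GNN (here a two-layer GAT) classifying a k-nearest-neighbour cell graph, explained post hoc with GNNExplainer. It is a minimal illustration, not the paper's actual implementation; the `CellGraphGAT` model, the synthetic nuclei features (standing in for HoVer-Net segmentation output), and all hyperparameters are assumptions for demonstration only.

```python
# Minimal sketch (not the paper's pipeline): an attention-based GNN over a
# cell graph, explained with GNNExplainer via PyTorch Geometric.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_mean_pool
from torch_geometric.explain import Explainer, GNNExplainer

class CellGraphGAT(torch.nn.Module):
    """Hypothetical graph classifier over nucleus-level cell graphs."""
    def __init__(self, in_dim, hid_dim, num_classes, heads=4):
        super().__init__()
        self.conv1 = GATConv(in_dim, hid_dim, heads=heads)
        self.conv2 = GATConv(hid_dim * heads, hid_dim, heads=1)
        self.lin = torch.nn.Linear(hid_dim, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.elu(self.conv1(x, edge_index))
        x = F.elu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)          # one embedding per graph
        return F.log_softmax(self.lin(x), dim=-1)

# Synthetic stand-in for HoVer-Net output: 50 nuclei with 2-D centroids and
# 16-D morphology features (real features would come from segmentation).
pos, x = torch.rand(50, 2), torch.randn(50, 16)

# k-nearest-neighbour cell graph built from centroid distances.
k = 5
dist = torch.cdist(pos, pos)
dist.fill_diagonal_(float('inf'))               # exclude self-loops
nbrs = dist.topk(k, largest=False).indices      # (50, k) neighbour indices
src = torch.arange(50).repeat_interleave(k)
edge_index = torch.stack([src, nbrs.reshape(-1)])
batch = torch.zeros(50, dtype=torch.long)       # single graph in the batch

model = CellGraphGAT(in_dim=16, hid_dim=32, num_classes=2)

# GNNExplainer learns soft masks over edges and node features that most
# influence the model's graph-level prediction (model is untrained here).
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(mode='multiclass_classification',
                      task_level='graph', return_type='log_probs'),
)
explanation = explainer(x, edge_index, batch=batch)
print(explanation.edge_mask.shape)              # importance per cell-cell edge
```

The learned edge mask assigns an importance score to each cell-cell interaction, which is the kind of cellular-level rationale the abstract refers to; in a full multiscale setup, the same explanation step would be repeated on a coarser tissue-level graph.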
Primary Subject Area: Interpretability and Explainable AI
Secondary Subject Area: Application: Histopathology
Registration Requirement: Yes
Visa & Travel: Yes
Read CFP & Author Instructions: Yes
Originality Policy: Yes
Single-blind & Not Under Review Elsewhere: Yes
LLM Policy: Yes
Submission Number: 237