C-ALGL Net: Pathological Images Generate Diagnostic Results

Published: 01 Jan 2020 · Last Modified: 13 Nov 2024 · ISBI Workshops 2020 · CC BY-SA 4.0
Abstract: The lack of a clear correspondence between the features of lesion areas and their pathological characteristics, together with the scarcity of high-quality histopathological image sets, poses a great challenge to building interpretable computer-aided diagnostic systems. We therefore propose a new deep learning model, C-ALGL (CNN-AttendLSTM-GenerateLSTM), which generates both visual image results and diagnostic descriptions from an input histopathological image in a single pass. The model uses an improved recurrent structure that places an attention mechanism between LSTM layers and alters the LSTM parameter-delivery pathway: the attention mechanism produces the visualization results, while a fully connected layer at the end generates the diagnostic text. Extensive experiments on the PATHOLOGY-11 skin pathology image dataset show that C-ALGL outperforms benchmark models on this task.
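The abstract outlines a CNN encoder followed by an attention LSTM and a generation LSTM, with attention weights serving as visual evidence and a fully connected head emitting diagnostic words. The following is a minimal PyTorch sketch of one way such a pipeline could be wired, not the authors' implementation; the class name `CALGLSketch`, layer sizes, the additive attention form, and the convolutional backbone are all assumptions for illustration.

```python
# Hypothetical sketch of a CNN -> Attend-LSTM -> Generate-LSTM pipeline.
# All architectural details here are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CALGLSketch(nn.Module):
    def __init__(self, vocab_size, feat_dim=512, hidden_dim=512, embed_dim=256):
        super().__init__()
        self.hidden_dim = hidden_dim
        # CNN encoder: a small conv stack standing in for the paper's backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Attend-LSTM: consumes the current word embedding and the mean image feature.
        self.attend_lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        # Additive attention over spatial CNN features (its weights give the visual map).
        self.att_feat = nn.Linear(feat_dim, hidden_dim)
        self.att_hid = nn.Linear(hidden_dim, hidden_dim)
        self.att_out = nn.Linear(hidden_dim, 1)
        # Generate-LSTM: consumes the attended feature and the Attend-LSTM state.
        self.generate_lstm = nn.LSTMCell(feat_dim + hidden_dim, hidden_dim)
        # Fully connected head that emits diagnostic-text logits.
        self.word_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        B, T = captions.shape
        feats = self.cnn(images)                  # (B, C, H, W)
        feats = feats.flatten(2).transpose(1, 2)  # (B, H*W, C) spatial features
        mean_feat = feats.mean(dim=1)
        h1 = c1 = h2 = c2 = feats.new_zeros(B, self.hidden_dim)
        logits, att_maps = [], []
        for t in range(T):
            w = self.embed(captions[:, t])
            h1, c1 = self.attend_lstm(torch.cat([w, mean_feat], dim=1), (h1, c1))
            # Attention weights over spatial locations -> visual evidence map.
            e = self.att_out(torch.tanh(self.att_feat(feats) + self.att_hid(h1).unsqueeze(1)))
            alpha = F.softmax(e, dim=1)           # (B, H*W, 1)
            ctx = (alpha * feats).sum(dim=1)      # (B, C) attended image feature
            h2, c2 = self.generate_lstm(torch.cat([ctx, h1], dim=1), (h2, c2))
            logits.append(self.word_head(h2))
            att_maps.append(alpha.squeeze(-1))
        # Returns per-step word logits and per-step attention maps (visual results).
        return torch.stack(logits, dim=1), torch.stack(att_maps, dim=1)
```

A design note under these assumptions: returning the attention maps alongside the word logits is what lets a single forward pass yield both the diagnostic text and a spatial visualization, which matches the "one pass" behavior the abstract describes.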
