Abstract: Although deep learning techniques have achieved remarkable results in clinical text analysis, the sensitivity of this application domain also requires that these models be easily understood by hospital staff. The attention mechanism, which assigns numerical weights representing the contribution of each word to the predictive task, can be exploited to identify the textual evidence on which a prediction is based. In this paper, we investigate the explainability of an attention-based classification model for radiology reports collected from an Italian hospital. The explanations extracted from the attention weights are compared against a set of manual annotations produced by domain experts in order to assess the usefulness of the attention mechanism in our context.
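As a rough illustration of the idea (not the authors' architecture, whose details are not given in the abstract), the sketch below shows one common way an attention-based text classifier exposes per-word weights that can be read off as explanations. All dimensions, the encoder choice, and the dummy input are illustrative assumptions.

```python
# Minimal sketch, assuming a simple word-level attention classifier;
# this is NOT the paper's model, only an example of how attention
# weights can double as per-word explanations.
import torch
import torch.nn as nn

class AttentionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # One scalar score per token, normalized into attention weights.
        self.attn_score = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)          # (batch, seq, embed)
        states, _ = self.encoder(embedded)            # (batch, seq, 2*hidden)
        scores = self.attn_score(states).squeeze(-1)  # (batch, seq)
        weights = torch.softmax(scores, dim=-1)       # per-word contributions
        context = torch.bmm(weights.unsqueeze(1), states).squeeze(1)
        logits = self.classifier(context)
        return logits, weights                        # weights serve as the explanation

# Usage: rank the weight assigned to each word to highlight the textual
# evidence the prediction is based on (dummy tokenized report below).
model = AttentionClassifier(vocab_size=5000)
token_ids = torch.randint(1, 5000, (1, 12))
logits, weights = model(token_ids)
top_positions = weights[0].topk(3).indices            # 3 most attended token positions
```

Explanations of this kind can then be compared, token by token, with expert annotations marking the report spans that actually justify the label.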