Abstract: Dementia, particularly Alzheimer's disease (AD), affects millions of elderly individuals worldwide. Traditionally, interview data, including audio recordings and transcripts, are used to train artificial intelligence models for the automatic detection of AD patterns. In this work, we introduce a novel attention-weighted image set, in which each image combines text-image relevance with the focused regions of the Cookie Theft picture derived from the corresponding description. Furthermore, we propose a novel multimodal architecture, the Hybrid Self-Aligned Fusion with Dual-Weight Attention Network (HSAF-DWAN), to predict AD from audio recordings, transcripts, and the corresponding attention-weighted images. This architecture consists of two key modules: an Intra-Modality Self-Alignment (IMSA) module, which captures relationships within a single modality, and a Dual-Weight Cross-Modality Attention (DW-CMA) module, which fuses cross-modality data through a dual-weight mechanism combining an optimized cross-attention with a secondary weighting. Extensive experiments on the Cookie Theft corpus from DementiaBank demonstrate that our method outperforms state-of-the-art models, achieving an accuracy of 86.71% and an F1 score of 88.15%.
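The abstract does not give the exact formulation of the DW-CMA module, but the described idea (cross-attention between two modalities followed by a secondary weighting of the fused output) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `dual_weight_cross_attention`, the scalar `gate` used as the secondary weight, and the residual connection are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_weight_cross_attention(q_feats, kv_feats, gate):
    """Hypothetical sketch of a dual-weight cross-attention step:
    one modality's features (q_feats) attend over another modality's
    features (kv_feats); the fused context is then re-weighted by a
    secondary scalar `gate` before a residual connection.
    In the real model, projections and the gate would be learned."""
    d_k = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d_k)   # (Tq, Tkv) relevance scores
    attn = softmax(scores, axis=-1)                # primary attention weights
    fused = attn @ kv_feats                        # (Tq, d) cross-modal context
    return q_feats + gate * fused                  # secondary weighting + residual

# Toy example: 4 text tokens attending over 6 audio frames, feature dim 8.
rng = np.random.default_rng(0)
text_feats = rng.standard_normal((4, 8))
audio_feats = rng.standard_normal((6, 8))
out = dual_weight_cross_attention(text_feats, audio_feats, gate=0.5)
print(out.shape)  # (4, 8): one fused vector per text token
```

With `gate=0` the queries pass through unchanged, so the secondary weight controls how strongly the cross-modal context is injected into each modality's representation.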
External IDs: dblp:journals/spl/WangWGC25