LP-DWLA-ViT: Light-Patch and Dynamic window local attention Vision Transformer network for Alzheimer's disease classification
Abstract: Alzheimer’s disease (AD) is the leading cause of dementia in the elderly, and its prevalence is rising rapidly. In recent years, many researchers have used convolutional neural networks and Vision Transformers to classify AD, but most networks do not strike a good balance between classification performance and efficiency. To address this problem, this paper proposes a new Light-Patch and Dynamic window local attention Vision Transformer network (LP-DWLA-ViT) for AD classification. The network comprises a Light-Patch (LP) module and a Dynamic window local attention (DWLA) module. The LP module uses a convolutional layer and smaller patches to reduce computation and improve classification efficiency. The DWLA module balances classification performance and efficiency by partitioning the attention computation into windows and dynamically changing the window size. LP-DWLA-ViT has been extensively evaluated on the ADNI dataset, achieving accuracy of up to 99.36%, specificity of up to 99.71%, and sensitivity of up to 99.46%.
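To make the window-partitioning idea behind the DWLA module concrete, the following is a minimal NumPy sketch of attention restricted to non-overlapping token windows. It is not the authors' implementation: the function name, the identity Q/K/V projections (standing in for learned weights), and the example window schedule are all illustrative assumptions. The point it demonstrates is the cost trade-off the abstract describes: restricting attention to windows of size w reduces the score matrix from n×n to (n/w) blocks of w×w, and changing w dynamically trades expressiveness for compute.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_local_attention(x, window_size):
    """Self-attention computed independently within non-overlapping windows.

    x: (num_tokens, dim); num_tokens must be divisible by window_size.
    Score computation costs O(num_tokens * window_size * dim) instead of
    the O(num_tokens^2 * dim) of global attention.
    """
    n, d = x.shape
    assert n % window_size == 0, "tokens must tile evenly into windows"
    # Split the token sequence into independent windows.
    windows = x.reshape(n // window_size, window_size, d)
    # Identity projections stand in for learned Q/K/V weight matrices.
    q = k = v = windows
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    out = softmax(scores) @ v
    return out.reshape(n, d)

tokens = np.random.default_rng(0).normal(size=(16, 8))
# A hypothetical "dynamic" schedule: different layers could pick
# different window sizes to balance accuracy against compute.
for w in (2, 4, 8):
    y = window_local_attention(tokens, w)
    assert y.shape == tokens.shape
```

Because each window attends only to its own tokens, no information flows across window boundaries in a single layer; varying the window size across layers (as the DWLA module's dynamic sizing suggests) is one way to recover longer-range context while keeping most layers cheap.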