Multigranularity feature aggregation in vision transformer for lung disease classification and segmentation
Abstract: Deep learning techniques are crucial for efficient and accurate lung disease diagnosis, reducing human error. To address the limitations of current vision transformers in local feature extraction and spatial information encoding for lung images, we propose a multigranularity feature aggregation in vision transformer (MFA-ViT), applied to both lung disease classification and semantic segmentation of lung images. MFA-ViT uses convolution operations to divide lung images into equal-sized patches and introduces a multigranularity outlook attention mechanism that extracts multigranular features and spatial information by reshaping feature maps and computing outlook attention values over windows of different sizes. An implicit spatial positional encoding then builds a detailed spatial description for each patch by aggregating the multigranular spatial information and fusing it with the corresponding max-pooling outputs. Finally, a vision transformer captures global features. Evaluations conducted on multiple public datasets from Kaggle and GitHub, along with our integrated multiclass lung disease dataset, demonstrate superior performance. Our method achieves 99.05% accuracy in binary classification on the covid-chestxray-dataset and 97.17% accuracy in multiclass tasks. For semantic segmentation, Dice coefficients reach 96.76%, 95.46%, and 91.44% on the complete Montgomery and Shenzhen datasets and a subset of the lung mask image dataset, respectively. Experimental results confirm MFA-ViT's effectiveness.
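The multigranularity attention step described above can be illustrated with a minimal sketch. Note the assumptions: the paper's outlook attention (in the style of VOLO) generates attention weights from a learned linear projection, whereas this stand-in uses plain windowed self-attention within non-overlapping k×k windows; the window sizes, the averaging across granularities, and the broadcast max-pooling fusion are all hypothetical simplifications, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(feat, k):
    """Self-attention within non-overlapping k x k windows.
    feat: (H, W, C); H and W must be divisible by k.
    Stand-in for the paper's outlook attention, which instead derives
    the attention weights from a learned linear projection."""
    H, W, C = feat.shape
    # Reshape the feature map into (num_windows, k*k, C) token groups.
    win = feat.reshape(H // k, k, W // k, k, C).transpose(0, 2, 1, 3, 4)
    win = win.reshape(-1, k * k, C)
    # Scaled dot-product attention inside each window.
    attn = softmax(win @ win.transpose(0, 2, 1) / np.sqrt(C), axis=-1)
    out = attn @ win
    # Restore the (H, W, C) spatial layout.
    out = out.reshape(H // k, W // k, k, k, C).transpose(0, 2, 1, 3, 4)
    return out.reshape(H, W, C)

def multigranularity_attention(feat, windows=(2, 4)):
    """Aggregate attention computed at several window granularities,
    then fuse with a max-pooled branch (hypothetical fusion rule)."""
    agg = sum(window_attention(feat, k) for k in windows) / len(windows)
    # Global spatial max-pooling, broadcast back over all positions.
    pooled = feat.max(axis=(0, 1), keepdims=True)
    return agg + pooled

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))   # toy 8x8 feature map, 16 channels
y = multigranularity_attention(x)
print(y.shape)  # (8, 8, 16)
```

In practice each granularity exposes a different receptive field: small windows preserve fine local texture while larger windows capture broader context, and the fused output feeds the subsequent vision transformer stage.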