PAH2T-Former: Paired-Attention Hybrid Hierarchical Transformer for Synergistically Enhanced FMT Reconstruction Quality and Efficiency
Abstract: Fluorescence molecular tomography (FMT) is a sensitive optical imaging technique that can produce three-dimensional (3D) tomographic images at the molecular and cellular levels. However, reconstructing the internal 3D distribution of fluorescent targets from surface two-dimensional (2D) fluorescence projection data remains a challenging task. In recent years, deep learning-based FMT reconstruction has received considerable attention, demonstrating superior performance compared to conventional methods, particularly when combined with Transformers. Unlike convolutional architectures that emphasize local context, Transformers leverage self-attention mechanisms to excel at capturing long-range dependencies, thereby enhancing FMT reconstruction accuracy. Nevertheless, the quadratic computational complexity of self-attention poses a bottleneck that is particularly acute in 3D FMT reconstruction. This paper proposes a novel Transformer-based FMT reconstruction algorithm that not only delivers high reconstruction accuracy but also excels in efficiency and inference speed. The key design is a novel Spatial-Channel Paired Attention Module (SC-PAM), which employs a pair of interdependent branches based on spatial and channel attention, effectively learning discriminative features in both the spatial and channel domains while exhibiting linear complexity with respect to the input projection size. Furthermore, to facilitate information exchange between the spatial and channel branches, we share the weights of the query and key mapping functions, which provides complementary paired attention without elevating complexity. Extensive evaluations through numerical simulations and in vivo experiments were performed to validate the effectiveness of the proposed model. The results show that our PAH2T-Former method achieves the highest Dice coefficient while reducing model parameters and complexity.
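The abstract's description of the SC-PAM (two interdependent attention branches with shared query/key projections and linear complexity in the number of tokens) can be illustrated with a short sketch. The PyTorch module below is a minimal, hypothetical rendering rather than the authors' implementation: the spatial branch is assumed to use a linearized (softmax-factorized) attention and the channel branch a transposed attention over the channel axis, two standard ways to obtain linear complexity; all class, method, and variable names are illustrative.

```python
# Minimal sketch of a paired spatial/channel attention block with shared
# query/key projections, loosely following the SC-PAM description in the
# abstract. Not the authors' code: the exact branch formulations are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairedAttention(nn.Module):
    """Spatial and channel attention branches sharing Q/K mapping functions."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        # Shared mapping functions: one Q and one K projection feed BOTH branches.
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        # Values are branch-specific so each branch can specialize.
        self.to_v_spatial = nn.Linear(dim, dim, bias=False)
        self.to_v_channel = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(2 * dim, dim)

    def _split(self, x: torch.Tensor) -> torch.Tensor:
        # (B, N, C) -> (B, heads, N, C_head)
        b, n, _ = x.shape
        return x.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)

    def _merge(self, t: torch.Tensor) -> torch.Tensor:
        # (B, heads, N, C_head) -> (B, N, C)
        b, h, n, d = t.shape
        return t.transpose(1, 2).reshape(b, n, h * d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self._split(self.to_q(x))               # shared across branches
        k = self._split(self.to_k(x))
        v_s = self._split(self.to_v_spatial(x))
        v_c = self._split(self.to_v_channel(x))

        # Spatial branch: linearized attention (softmax applied separately to
        # queries and keys), costing O(N * d^2) instead of O(N^2 * d).
        q_s = q.softmax(dim=-1)                     # softmax over features
        k_s = k.softmax(dim=-2)                     # softmax over tokens
        context = k_s.transpose(-2, -1) @ v_s       # (B, h, d, d)
        out_s = q_s @ context                       # (B, h, N, d)

        # Channel branch: transposed attention over the channel axis, giving
        # a (d x d) attention map, again linear in the token count N.
        q_c = F.normalize(q.transpose(-2, -1), dim=-1)   # (B, h, d, N)
        k_c = F.normalize(k.transpose(-2, -1), dim=-1)
        attn_c = (q_c @ k_c.transpose(-2, -1)).softmax(dim=-1)  # (B, h, d, d)
        out_c = (attn_c @ v_c.transpose(-2, -1)).transpose(-2, -1)

        # Fuse the complementary branches and project back to `dim`.
        return self.proj(torch.cat([self._merge(out_s), self._merge(out_c)], dim=-1))


if __name__ == "__main__":
    x = torch.randn(2, 1024, 64)                    # (batch, tokens, channels)
    block = PairedAttention(dim=64, num_heads=4)
    print(block(x).shape)                           # torch.Size([2, 1024, 64])
```

Sharing `to_q` and `to_k` across the two branches, as the abstract describes, lets both branches attend over the same learned embedding space while adding no extra query/key parameters for the second branch, so the pairing comes essentially for free in parameter count.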
DOI: 10.1109/TCI.2025.3559431