Surface Mesh Reconstruction From Medical Images via Enrichment Feature Learning and Mesh Contour Loss
Abstract: Surface mesh reconstruction from medical images plays a vital role in various computer vision and medical image analysis tasks. Recent research has explored direct reconstruction from images using deep learning, achieving fast and reasonable surface reconstruction. However, challenges remain, including limited feature-extraction capability and difficulty handling indistinct tissue boundaries, which hinder accurate reconstruction. To address these limitations, we propose a Robust Feature Extraction Network (RFENet), which extracts robust features from the input medical images and their spatial representations. Specifically, we introduce the Feature Interaction (FI) module to incorporate multi-scale interaction features, mitigating the loss of useful features when mapping from voxel features to mesh features. Additionally, the Element-wise Dot Deformation (EDD) module thoroughly extracts features from irregular mesh data via a dual-branch structure that captures local and global features and establishes long-range dependencies. We also employ a mesh contour loss strategy that maps the mesh to the regular voxel image domain for improved contour extraction and boundary identification. Evaluations on the OASIS dataset demonstrate reductions in average symmetric surface distance (ASSD) and Hausdorff distance (HD) of 0.053 mm and 0.179 mm, respectively, compared to state-of-the-art approaches. Moreover, we evaluate our RFENet on rectum surface mesh reconstruction in the WORD dataset for the first time, achieving favorable outcomes and highlighting the potential advantages of our proposed method. The source code and detailed implementation are available at https://github.com/VCL-HNU/RFENet.
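The two evaluation metrics named above have standard definitions: ASSD averages the bidirectional nearest-neighbor distances between predicted and ground-truth surface points, while the symmetric Hausdorff distance takes the worst case. A minimal sketch of these standard definitions (not the paper's actual evaluation code; the function name and brute-force pairwise computation are illustrative assumptions):

```python
import numpy as np

def assd_and_hd(pred, gt):
    """Compute ASSD and symmetric Hausdorff distance between two vertex sets.

    pred: (N, 3) array of predicted surface points.
    gt:   (M, 3) array of ground-truth surface points.
    Brute-force pairwise distances; fine for small meshes, but a KD-tree
    would be preferable for large ones.
    """
    # Full (N, M) pairwise Euclidean distance matrix.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    d_pg = d.min(axis=1)  # each predicted point to its nearest gt point
    d_gp = d.min(axis=0)  # each gt point to its nearest predicted point
    # ASSD: mean over all bidirectional nearest-neighbor distances.
    assd = (d_pg.sum() + d_gp.sum()) / (d_pg.size + d_gp.size)
    # Symmetric Hausdorff distance: worst-case nearest-neighbor distance.
    hd = max(d_pg.max(), d_gp.max())
    return assd, hd

pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 1.0]])
assd, hd = assd_and_hd(pred, gt)  # assd = 0.5, hd = 1.0
```

Both metrics are reported in millimeters in the abstract, so in practice the vertex coordinates are assumed to be in physical (world) space rather than voxel indices.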
External IDs: doi:10.1109/tmrb.2026.3654256