IC-CViT: Inverse-Consistent Convolutional Vision Transformer for Diffeomorphic Image Registration

Published: 01 Jan 2023, Last Modified: 17 Apr 2025 · IJCNN 2023 · CC BY-SA 4.0
Abstract: Diffeomorphic registration plays a crucial role in medical image analysis because it yields invertible, one-to-one mapping transformations. In recent years, with the development of deep learning, convolutional neural networks (CNNs) have become a broad focus of research in medical image registration, and CNN-based methods have made great progress. However, the transformations produced by most existing methods are not necessarily diffeomorphic and can yield implausible, non-bijective mappings between images because of interpolation and the discrete representation. Furthermore, the performance of CNNs may be limited by their weak grasp of global, long-range spatial correspondences across images. The Vision Transformer (ViT) can strengthen long-range information interaction and thereby identify semantic anatomical correspondences in medical images. Compared with CNNs, however, ViTs have weaker local feature extraction ability due to fewer inductive biases, especially on small-scale training datasets, so relationships between adjacent pixels cannot be exploited adequately. To address these challenges, we propose a novel Inverse-Consistent Convolutional Vision Transformer (IC-CViT) network for diffeomorphic image registration. Specifically, image pairs are explicitly registered in both directions through predicted deformation fields that are generated within the space of diffeomorphic mappings and constrained by the proposed inverse-consistency loss term. We evaluate our method on two 3D brain MRI datasets, OASIS and LPBA40. Comprehensive results demonstrate that IC-CViT achieves state-of-the-art registration accuracy while maintaining the desired diffeomorphic properties.
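For illustration, below is a minimal, hypothetical sketch of an inverse-consistency penalty of the kind the abstract describes: the forward and backward displacement fields are composed, and any deviation of the composition from the identity map is penalized. The PyTorch implementation, function names, and normalized-coordinate conventions are assumptions made for this sketch, not the paper's formulation; the diffeomorphic parameterization itself (e.g., integration of a velocity field) is not shown.

# Hypothetical sketch of an inverse-consistency penalty for a pair of dense
# displacement fields; details are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def identity_grid(shape, device):
    """Normalized identity sampling grid in [-1, 1] for a 3D volume (D, H, W)."""
    vectors = [torch.linspace(-1.0, 1.0, s, device=device) for s in shape]
    zz, yy, xx = torch.meshgrid(*vectors, indexing="ij")        # each (D, H, W)
    # grid_sample expects the last dimension ordered as (x, y, z) = (W, H, D).
    return torch.stack([xx, yy, zz], dim=-1).unsqueeze(0)       # (1, D, H, W, 3)


def resample(field, locs):
    """Sample a vector field (B, 3, D, H, W) at normalized locations (B, D, H, W, 3)."""
    return F.grid_sample(field, locs, align_corners=True, padding_mode="border")


def inverse_consistency_loss(disp_fwd, disp_bwd):
    """Penalize deviation of the composed forward/backward maps from identity.

    disp_*: (B, 3, D, H, W) displacements in normalized coordinates, with
    channels ordered (x, y, z) to match grid_sample (an assumption here).
    """
    b, _, d, h, w = disp_fwd.shape
    grid = identity_grid((d, h, w), disp_fwd.device)            # (1, D, H, W, 3)
    locs_fwd = grid + disp_fwd.permute(0, 2, 3, 4, 1)           # x + u_fwd(x)
    locs_bwd = grid + disp_bwd.permute(0, 2, 3, 4, 1)           # x + u_bwd(x)
    # If the two maps invert each other, u_fwd(x) + u_bwd(x + u_fwd(x)) is ~0.
    comp_fb = disp_fwd + resample(disp_bwd, locs_fwd)
    comp_bf = disp_bwd + resample(disp_fwd, locs_bwd)
    return comp_fb.pow(2).mean() + comp_bf.pow(2).mean()


# Quick check: two zero (identity) displacement fields give a loss of ~0.
disp_fwd = torch.zeros(1, 3, 32, 32, 32)
disp_bwd = torch.zeros(1, 3, 32, 32, 32)
print(inverse_consistency_loss(disp_fwd, disp_bwd))

In practice a term of this kind is typically combined with an image-similarity loss and a smoothness regularizer when training the registration network.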