Abstract: Multi-modal Entity Alignment (MMEA) aims to identify equivalent entities across different knowledge graphs (KGs), where entities are enriched with structural and visual information. Existing MMEA methods learn joint multi-modal entity embeddings through both modality interaction and modality alignment. However, these approaches predominantly emphasize modality interaction and fail to adequately address modality heterogeneity. In this paper, we propose OTMEA, a novel approach that leverages optimal transport to mitigate modality heterogeneity from the perspective of modality distributions. Specifically, we cast modality alignment as the problem of minimizing the Wasserstein distance between multi-modal distributions. Furthermore, our experiments indicate that incorporating entity-level attention weights significantly enhances optimal-transport-based modality alignment. The effectiveness of our method is validated through extensive experiments on five public datasets. The source code is available at https://github.com/wonderCS1213/OTMEA.
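To make the optimal-transport view concrete, the sketch below computes an entropic-regularized (Sinkhorn) approximation of the Wasserstein distance between two modality embedding matrices, with optional entity-level weights on the transport marginals. This is a minimal illustration of the general technique under stated assumptions, not OTMEA's exact loss: the function name, the squared-Euclidean cost, the cost normalization, and the reuse of the same weights for both marginals are choices made for the example.

```python
import torch


def sinkhorn_distance(x, y, weights=None, eps=0.05, n_iters=50):
    """Entropic-regularized Wasserstein distance between two modality
    embedding sets, e.g. structural embeddings x and visual embeddings y.

    x, y: (n, d) tensors, one row per entity and modality.
    weights: optional (n,) entity-level attention weights used as the
        marginal distribution on both sides (uniform if None).
    """
    n = x.size(0)
    if weights is None:
        weights = torch.full((n,), 1.0 / n, dtype=x.dtype)
    else:
        weights = weights / weights.sum()

    # Pairwise transport cost: squared Euclidean distance, rescaled to
    # [0, 1] so the Gibbs kernel below does not underflow.
    cost = torch.cdist(x, y, p=2) ** 2
    cost = cost / cost.max()

    # Sinkhorn iterations: alternately rescale the kernel so its row and
    # column sums match the (attention-weighted) marginals.
    K = torch.exp(-cost / eps)
    u = torch.full((n,), 1.0 / n, dtype=x.dtype)
    for _ in range(n_iters):
        v = weights / (K.t() @ u)
        u = weights / (K @ v)

    # Approximate optimal transport plan and its total cost.
    transport = u.unsqueeze(1) * K * v.unsqueeze(0)
    return (transport * cost).sum()


# Example usage with random embeddings for 128 entities.
struct_emb = torch.randn(128, 64)
visual_emb = torch.randn(128, 64)
attn = torch.rand(128)  # hypothetical entity-level attention weights
loss = sinkhorn_distance(struct_emb, visual_emb, weights=attn)
```

In a training pipeline, such a term would be added to the alignment objective so that minimizing it pulls the two modality distributions together; using entity-level attention weights as the marginals lets more reliable entities carry more transport mass, which is one plausible reading of how such weights could aid optimal-transport alignment.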