Probing Relative Interaction and Dynamic Calibration in Multi-modal Entity Alignment

ACL ARR 2025 February Submission 6107 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs. Current methods have made significant progress by improving embeddings and cross-modal fusion. However, most of them rely on loss functions to capture inter-modality relationships or adopt a one-shot strategy that directly computes modality weights with attention mechanisms, overlooking the relative interactions between modalities at the entity level and the accuracy of the modality weights, which hinders generalization to diverse entities. To address this challenge, we propose RICEA, a relative interaction and calibration framework for multi-modal entity alignment, which dynamically computes weights based on relative interactions and recalibrates them according to their uncertainties. Within this framework, we propose a novel method, $ADC$, that uses attention mechanisms to perceive the uncertainty of each modality's weight, rather than directly computing the weight of each modality as in previous works. Across 5 datasets and 22 settings, our proposed framework significantly outperforms other baselines. Our code and data are available at \href{https://anonymous.4open.science/r/RICEA-12D7/}{https://anonymous.4open.science/r/RICEA-12D7/}.
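To make the abstract's idea of "perceiving uncertainty instead of directly computing weights" concrete, the following is a minimal, hypothetical sketch of attention-based uncertainty estimation and weight recalibration for multi-modal fusion. It is not the authors' implementation; all names (ADCSketch, the bilinear interaction, the sigmoid uncertainty head) are illustrative assumptions based only on the abstract.

```python
# Hypothetical sketch: relative interaction -> raw weights,
# attention output read as uncertainty -> recalibrated fusion weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ADCSketch(nn.Module):
    """Illustrative module: attention scores are interpreted as per-modality
    uncertainty and used to recalibrate interaction-based fusion weights."""
    def __init__(self, dim: int):
        super().__init__()
        # pairwise bilinear score capturing relative interaction between modalities
        self.interaction = nn.Bilinear(dim, dim, 1)
        # attention head whose output is treated as uncertainty, not as the weight itself
        self.uncertainty_attn = nn.Linear(dim, 1)

    def forward(self, mods: torch.Tensor) -> torch.Tensor:
        # mods: (batch, n_modalities, dim) per-entity modality embeddings
        b, m, d = mods.shape
        # raw weight of modality i = mean interaction score with every other modality
        scores = []
        for i in range(m):
            others = [self.interaction(mods[:, i], mods[:, j])
                      for j in range(m) if j != i]
            scores.append(torch.stack(others, dim=0).mean(dim=0))
        raw_w = torch.cat(scores, dim=-1)                            # (batch, m)
        # attention output squashed to [0, 1] and perceived as uncertainty
        u = torch.sigmoid(self.uncertainty_attn(mods)).squeeze(-1)   # (batch, m)
        # recalibration: low-uncertainty modalities retain more of their raw weight
        calibrated = F.softmax(raw_w * (1.0 - u), dim=-1)            # (batch, m)
        # fuse modality embeddings with the calibrated weights
        return (calibrated.unsqueeze(-1) * mods).sum(dim=1)          # (batch, dim)

# Usage: three modalities (e.g., structure, image, attribute) of dimension 64
fused = ADCSketch(dim=64)(torch.randn(8, 3, 64))
print(fused.shape)  # torch.Size([8, 64])
```

The key design point this sketch tries to convey is that the attention head outputs a confidence signal that modulates interaction-derived weights, rather than producing the fusion weights in a single pass.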
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Knowledge Graphs, Entity Alignment, Multi-Modal
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Theory
Languages Studied: English
Submission Number: 6107