Improving multimodal named entity recognition via text-image relevance prediction with large language models

Published: 01 Jan 2025, Last Modified: 16 Oct 2025 · Neurocomputing 2025 · CC BY-SA 4.0
Abstract: Multimodal Named Entity Recognition (MNER) is a critical task in information extraction that aims to identify named entities in text-image pairs and classify them into specific types such as person, organization, and location. While existing studies have achieved moderate success by fusing visual and textual features through cross-modal attention mechanisms, two major challenges remain: (1) image-text mismatch, where the two modalities are not always semantically aligned in real-world scenarios; and (2) insufficient labeled data, which hampers the model’s ability to learn complex cross-modal associations and limits generalization. To overcome these challenges, we propose a novel framework that leverages the semantic comprehension and reasoning capabilities of Large Language Models (LLMs). Specifically, for the mismatch issue, we employ LLMs to generate a text-image relevance score, together with the reasoning behind it, to guide the subsequent modules. We then design a Text-image Relationship Predicting (TRP) module, which determines the final feature fusion weights based on the relevance score provided by the LLMs. To mitigate data scarcity, we prompt LLMs to identify the key entities in the text and incorporate them into the original input. Additionally, we design a Text-image Relevance Features Learning (TRFL) module that constructs positive and negative samples based on the relevance score and employs supervised contrastive learning to further enhance the model’s ability to extract key features from image-text pairs. Experiments show that our proposed method achieves F1 scores of 75.32% and 86.65% on the Twitter-2015 and Twitter-2017 datasets, respectively, demonstrating its effectiveness.
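The abstract does not give implementation details, but the core idea behind the TRP module, using an LLM-provided relevance score to gate how strongly visual features enter the fusion, can be sketched roughly as below. All names and dimensions (RelevanceWeightedFusion, text_dim, hidden_dim, the gating formula) are hypothetical illustrations under my own assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class RelevanceWeightedFusion(nn.Module):
    """Sketch: fuse text and image features, gated by an LLM relevance score.

    A score in [0, 1] (e.g. produced by prompting an LLM about text-image
    relevance) down-weights visual features when the image is judged
    irrelevant, so mismatched images contribute less to the fused feature.
    """

    def __init__(self, text_dim: int, image_dim: int, hidden_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Learnable refinement of the raw LLM score into a fusion gate.
        self.gate = nn.Sequential(nn.Linear(hidden_dim * 2 + 1, 1), nn.Sigmoid())

    def forward(self, text_feat, image_feat, relevance_score):
        # text_feat: (batch, text_dim); image_feat: (batch, image_dim)
        # relevance_score: (batch, 1), assumed to come from the LLM prompt stage.
        t = self.text_proj(text_feat)
        v = self.image_proj(image_feat)
        g = self.gate(torch.cat([t, v, relevance_score], dim=-1))
        # Text features are always kept; visual features are gated.
        return t + g * relevance_score * v


if __name__ == "__main__":
    fusion = RelevanceWeightedFusion(text_dim=768, image_dim=512, hidden_dim=256)
    text_feat = torch.randn(4, 768)
    image_feat = torch.randn(4, 512)
    score = torch.tensor([[0.9], [0.1], [0.5], [0.0]])  # illustrative LLM scores
    print(fusion(text_feat, image_feat, score).shape)  # torch.Size([4, 256])
```

The same relevance score could also partition a batch into high- and low-relevance pairs to form the positive and negative samples that the TRFL module's supervised contrastive loss operates on, though the exact construction is not specified in the abstract.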