Abstract: POI tagging aims to annotate a point of interest (POI) with informative tags, which facilitates many POI-related services such as search and recommendation. Most existing solutions neglect the significance of POI images and seldom fuse the textual and visual features of POIs, resulting in suboptimal tagging performance. In this paper, we propose a novel Multi-Modal Model for POI Tagging, namely M3PT, which achieves enhanced POI tagging through the fusion of the target POI's textual and visual features and the precise matching between the multi-modal representations. Specifically, we first devise a domain-adaptive image encoder (DIE) to obtain image embeddings aligned with the semantics of their gold tags. Then, in M3PT's text-image fusion module (TIF), the textual and visual representations are fully fused into the POIs' content embeddings for subsequent matching. In addition,
we adopt a contrastive learning strategy to further bridge the gap between the representations of different modalities. To evaluate tagging performance, we construct two high-quality POI tagging datasets from the real-world business scenario of Ali Fliggy. On these datasets, we conduct extensive experiments that demonstrate our model's advantage over uni-modal and multi-modal baselines and verify the effectiveness of M3PT's important components, including DIE, TIF, and the contrastive learning strategy.