Abstract: In real-world applications, a cross-modal retrieval model trained on multimodal instances without accounting for differences in data distributions among users, termed user domain shift, usually fails to generalize to unknown user domains. In this paper, we define a new task of user-generalized cross-modal retrieval and propose a novel Meta-Learning Multimodal User Generalization (MLMUG) method to solve it. MLMUG simulates user domain shift via meta-optimization, aiming to embed multimodal data effectively and to generalize the cross-modal retrieval model to any unknown user domain. We design a cross-modal embedding network with a learnable meta covariant attention module to encode transferable knowledge across different user domains, and we propose a user-adaptive meta-optimization scheme that adaptively aggregates gradients and meta-gradients for fast and stable meta-optimization. We build two benchmarks for evaluating user-generalized cross-modal retrieval. Experiments on these benchmarks validate the generalization ability of our method against several state-of-the-art methods.
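To make the episodic recipe concrete, below is a minimal PyTorch sketch of meta-optimization that simulates user domain shift by holding out one user domain per episode. It is an illustrative approximation only: `EmbedNet`, `make_domain_batch`, the fixed aggregation weight `alpha`, and the single inner adaptation step are all assumptions of this sketch, not MLMUG's actual embedding network, covariant attention module, or user-adaptive aggregation scheme.

```python
# A minimal, self-contained sketch of episodic meta-optimization over user
# domains, in the spirit of "simulating user domain shift". Everything here
# (EmbedNet, make_domain_batch, the fixed weight `alpha`) is an illustrative
# assumption, NOT the authors' MLMUG implementation.

import random
import torch
import torch.nn as nn
from torch.func import functional_call  # requires PyTorch >= 2.0

class EmbedNet(nn.Module):
    """Toy cross-modal embedder: projects image and text features into a
    shared space (a stand-in for the paper's embedding network)."""
    def __init__(self, img_dim=64, txt_dim=32, emb_dim=16):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)
        self.txt_proj = nn.Linear(txt_dim, emb_dim)

    def forward(self, img, txt):
        return self.img_proj(img), self.txt_proj(txt)

def pair_loss(img_emb, txt_emb):
    # Pull matched image/text embeddings together (toy retrieval loss).
    return (1 - nn.functional.cosine_similarity(img_emb, txt_emb)).mean()

def make_domain_batch(domain_id, n=8, img_dim=64, txt_dim=32):
    # Synthetic per-user batch; the seed stands in for one user domain.
    g = torch.Generator().manual_seed(domain_id)
    return (torch.randn(n, img_dim, generator=g),
            torch.randn(n, txt_dim, generator=g))

net = EmbedNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
user_domains = [0, 1, 2, 3]  # each id stands in for one user's data
inner_lr, alpha = 0.01, 0.5  # alpha: fixed (not adaptive) aggregation weight

for episode in range(100):
    # Split users into meta-train domains and a held-out meta-test domain,
    # simulating an unknown user domain at test time.
    held_out = random.choice(user_domains)
    train_dom = random.choice([d for d in user_domains if d != held_out])

    # Meta-train loss on a seen user domain.
    img, txt = make_domain_batch(train_dom)
    train_loss = pair_loss(*net(img, txt))

    # One simulated adaptation step (create_graph keeps second-order info).
    grads = torch.autograd.grad(train_loss, net.parameters(), create_graph=True)
    adapted = {name: p - inner_lr * g
               for (name, p), g in zip(net.named_parameters(), grads)}

    # Meta-test loss: evaluate the adapted parameters on the held-out user.
    img_t, txt_t = make_domain_batch(held_out)
    test_loss = pair_loss(*functional_call(net, adapted, (img_t, txt_t)))

    # Aggregate the gradient (train) and meta-gradient (test) signals; the
    # abstract's scheme does this adaptively, here alpha is a constant.
    opt.zero_grad()
    (alpha * train_loss + (1 - alpha) * test_loss).backward()
    opt.step()
```

The point this sketch echoes from the abstract is the two-signal update: each parameter step combines the gradient of the meta-train loss with the meta-gradient of the held-out loss, so the model is optimized to remain effective after adaptation to an unseen user domain.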