MoDA: Mixture of Domain Adapters for Parameter-efficient Generalizable Person Re-identification

Published: 01 Jan 2025 (Last Modified: 07 Oct 2025) · ACM Trans. Multim. Comput. Commun. Appl. 2025 · CC BY-SA 4.0
Abstract: Domain Generalizable Re-identification (DG ReID) has attracted significant attention in recent years, as it is a challenging task closely aligned with practical applications. Mixture-of-experts (MoE)-based methods have been studied for DG ReID to exploit both the discrepancies and the inherent correlations between diverse domains. However, most DG ReID methods, especially MoE-based ones, must fully fine-tune a large number of parameters, which is not always practical in real-world scenarios. To address this problem, we propose a novel MoE-based DG ReID method, named Mixture of Domain Adapters (MoDA), which combines multiple expert adapters with a global adapter so that MoE-based methods can scale to much larger models in a far more parameter-efficient way. Furthermore, we build our approach on the large-scale vision-language pre-trained model CLIP, exploiting both its visual and text encoders to learn more robust representations from multimodal information. Extensive experiments verify the effectiveness of our method and show that MoDA is competitive with state-of-the-art DG ReID methods while using much fewer tunable parameters.
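To make the adapter-mixture idea concrete, below is a minimal sketch of what such a layer could look like in PyTorch. It is an illustration under stated assumptions, not the paper's actual implementation: we assume standard bottleneck adapters, a learned softmax router over the expert adapters, and a residual combination with one shared global adapter; all names (`BottleneckAdapter`, `MixtureOfDomainAdapters`, `num_experts`, `bottleneck`) are hypothetical.

```python
# Illustrative sketch only: bottleneck adapters, a softmax router over expert
# adapters, and one shared global adapter, combined residually with the frozen
# backbone features. Not the authors' code; all hyperparameters are assumed.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Down-project, nonlinearity, up-project: a standard lightweight adapter."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.down(x)))


class MixtureOfDomainAdapters(nn.Module):
    """Softly routed per-domain expert adapters plus one shared global adapter;
    the backbone feature x passes through via a residual connection."""

    def __init__(self, dim: int, num_experts: int = 4, bottleneck: int = 64):
        super().__init__()
        self.experts = nn.ModuleList(
            BottleneckAdapter(dim, bottleneck) for _ in range(num_experts)
        )
        self.global_adapter = BottleneckAdapter(dim, bottleneck)
        self.router = nn.Linear(dim, num_experts)  # token-wise routing logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.router(x).softmax(dim=-1)            # (..., num_experts)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)
        mixed = (expert_out * weights.unsqueeze(-2)).sum(dim=-1)
        return x + mixed + self.global_adapter(x)           # residual update


# Usage: attach after the output features of a frozen CLIP transformer block.
feats = torch.randn(2, 197, 768)         # (batch, tokens, dim), e.g. ViT-B/16
layer = MixtureOfDomainAdapters(dim=768, num_experts=4)
print(layer(feats).shape)                # torch.Size([2, 197, 768])
```

In a setup like this, only the adapters and router would be trained while the CLIP backbone stays frozen, which is what makes the mixture parameter-efficient relative to fully fine-tuned MoE models.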