Adaptive Perception for Unified Visual Multimodal Object Tracking

Published: 01 Jan 2025, Last Modified: 04 Nov 2025. IEEE Trans. Artif. Intell. 2025. License: CC BY-SA 4.0
Abstract: Recently, many multimodal trackers have prioritized RGB as the dominant modality, treating other modalities as auxiliary and fine-tuning separately for each multimodal task. This imbalance in modality dependence limits a method's ability to dynamically exploit complementary information from each modality in complex scenarios, making it difficult to fully realize the advantages of multimodal data. As a result, a single unified-parameter model often underperforms across multimodal tracking tasks. To address this issue, we propose APTrack, a novel unified tracker designed for multimodal adaptive perception. Unlike previous methods, APTrack explores a unified representation through an equal modeling strategy. This strategy allows the model to dynamically adapt to various modalities and tasks without requiring additional fine-tuning between tasks. Moreover, our tracker integrates an adaptive modality interaction (AMI) module that efficiently bridges cross-modality interactions by generating learnable tokens. Experiments conducted on five diverse multimodal datasets (RGBT234, LasHeR, VisEvent, DepthTrack, and VOT-RGBD2022) demonstrate that APTrack not only surpasses existing state-of-the-art unified multimodal trackers but also outperforms trackers designed for specific multimodal tasks.
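To make the idea of bridging cross-modality interaction through learnable tokens more concrete, below is a minimal PyTorch sketch of such a module. It is not the authors' implementation: the class name, token count, and two-stage "gather then scatter" attention are assumptions chosen only to illustrate how a small set of learnable tokens can mediate the exchange between an RGB token stream and an auxiliary-modality (e.g., thermal, depth, or event) token stream while treating both streams symmetrically.

```python
import torch
import torch.nn as nn


class AdaptiveModalityInteractionSketch(nn.Module):
    """Hypothetical sketch: learnable tokens summarize both modality streams
    and write the fused information back to each stream."""

    def __init__(self, dim=256, num_tokens=8, num_heads=8):
        super().__init__()
        # Learnable interaction tokens shared by the two modalities.
        self.tokens = nn.Parameter(torch.zeros(1, num_tokens, dim))
        nn.init.trunc_normal_(self.tokens, std=0.02)
        # Tokens gather information from the concatenated modality features...
        self.gather = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # ...and each modality stream then reads the fused tokens back.
        self.scatter = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_rgb, feat_aux):
        # feat_rgb, feat_aux: (B, N, C) token sequences from the two modalities.
        B = feat_rgb.size(0)
        tokens = self.tokens.expand(B, -1, -1)
        joint = torch.cat([feat_rgb, feat_aux], dim=1)
        # Learnable tokens collect complementary cues from both modalities.
        fused, _ = self.gather(query=tokens, key=joint, value=joint)
        fused = self.norm(fused)
        # Each stream is refined by attending to the fused tokens (residual update).
        rgb_out, _ = self.scatter(query=feat_rgb, key=fused, value=fused)
        aux_out, _ = self.scatter(query=feat_aux, key=fused, value=fused)
        return feat_rgb + rgb_out, feat_aux + aux_out


if __name__ == "__main__":
    ami = AdaptiveModalityInteractionSketch(dim=256, num_tokens=8)
    rgb = torch.randn(2, 64, 256)   # e.g., RGB search-region tokens
    aux = torch.randn(2, 64, 256)   # e.g., thermal / depth / event tokens
    out_rgb, out_aux = ami(rgb, aux)
    print(out_rgb.shape, out_aux.shape)  # torch.Size([2, 64, 256]) each
```

Routing both streams through the same shared tokens, rather than cross-attending RGB and the auxiliary modality to each other directly, is one way to keep the interaction symmetric, in line with the paper's equal modeling strategy; the specific layer choices here are illustrative only.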