Abstract: Vision-language object tracking integrates rich linguistic information, enhancing robustness and accuracy in complex scenarios. Nevertheless, current methods are constrained by the scarcity of vision-language data, making it difficult for models to learn generalized knowledge. To alleviate this issue, we propose a new prompt-based framework for vision-language tracking, named ProVLT. This framework casts language information as a prompt for pretrained vision-based tracking models, thereby leveraging the knowledge learned from extensive tracking data. Experiments demonstrate that ProVLT achieves competitive performance while training only a fraction of the parameters (approximately 29% of the model parameters); for instance, it attains an AUC of 59.8% on the TNL2K benchmark. Furthermore, we augment five mainstream vision-only tracking benchmarks with language annotations and find that including linguistic information consistently improves tracking performance: on these benchmarks, language improves performance by an average of 2.9% over the vision-only tracker. We will release the code, models, and benchmarks for the community.
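The abstract describes a prompt-tuning setup: a pretrained vision-based tracker is kept frozen, and only a small set of language-prompt components is trained, amounting to roughly 29% of the model parameters. A minimal sketch of this idea is below; the module names and parameter counts are hypothetical and chosen only for illustration, not taken from ProVLT.

```python
# Hypothetical sketch of the prompt-tuning idea: freeze the pretrained
# vision tracker and train only the lightweight language-prompt modules.
# All module names and sizes are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Module:
    name: str
    num_params: int
    trainable: bool = True

def freeze(module: Module) -> Module:
    """Mark a module's parameters as frozen (excluded from training)."""
    module.trainable = False
    return module

# Pretrained vision-based tracker components: reused as-is, so frozen.
vision_backbone = freeze(Module("vision_backbone", 85_000_000))
box_head = freeze(Module("box_head", 5_000_000))

# New components that inject language as a prompt; only these are
# trained (parameter counts are made up for illustration).
text_encoder_adapter = Module("text_encoder_adapter", 20_000_000)
prompt_generator = Module("prompt_generator", 17_000_000)

modules = [vision_backbone, box_head, text_encoder_adapter, prompt_generator]

total = sum(m.num_params for m in modules)
trainable = sum(m.num_params for m in modules if m.trainable)
print(f"trainable fraction: {trainable / total:.1%}")  # ~29% of parameters
```

With these illustrative sizes, the trainable fraction comes out near the ~29% reported in the abstract; in practice the exact split depends on the tracker architecture and the size of the prompt modules.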
External IDs: dblp:journals/tcsv/ZongZCLW25