Unifying Temporal Context and Multi-Feature With Update-Pacing Framework for Visual Tracking

IEEE Trans. Circuits Syst. Video Technol., 2020 (modified: 24 Apr 2023)
Abstract: Model drifting is one of the knotty problems that seriously restrict the accuracy of discriminative trackers in visual tracking. Most existing works focus on improving the robustness of the target appearance model; however, they remain prone to model drifting caused by inappropriate model updates during tracking-by-detection. In this paper, we propose a novel update-pacing framework to suppress model drifting in visual tracking. Specifically, the proposed framework first initializes an ensemble of trackers, each of which updates its model at a different interval. Once the forward tracking trajectory of each tracker is determined, a backward trajectory is generated by the current model and its difference from the forward one is measured; the tracker with the smallest deviation score is selected as the most robust tracker for the remaining frames. By performing such self-examination on trajectory pairs, the framework effectively preserves the temporal context consistency of sequential frames and avoids learning corrupted information. To further improve performance, a multi-feature extension is also proposed to incorporate multiple features into the ensemble of trackers. Extensive experimental results on large-scale object tracking benchmarks demonstrate that the proposed framework significantly increases the accuracy and robustness of the underlying base trackers, such as DSST, Struck, KCF, and CT, and achieves superior performance compared with state-of-the-art methods that do not use deep models.
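The selection step described above (comparing forward and backward trajectories and keeping the tracker with the smallest deviation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tracker interface (`track_forward`, `track_backward`) and the deviation score (mean Euclidean distance between paired bounding-box centers) are assumptions made for the sketch.

```python
import numpy as np

def trajectory_deviation(forward, backward):
    """Mean Euclidean distance between a forward trajectory and the
    time-reversed backward trajectory (both lists of (x, y) centers).
    Hypothetical deviation score; the paper's exact measure may differ."""
    f = np.asarray(forward, dtype=float)
    b = np.asarray(backward, dtype=float)[::-1]  # align backward pass to forward time order
    return float(np.linalg.norm(f - b, axis=1).mean())

def select_tracker(trackers, frames):
    """Run each ensemble member forward over `frames`, then backward from
    its final estimate; keep the tracker whose trajectory pair deviates
    least (assumed tracker interface)."""
    best, best_score = None, float("inf")
    for tracker in trackers:
        fwd = tracker.track_forward(frames)
        bwd = tracker.track_backward(frames, fwd[-1])
        score = trajectory_deviation(fwd, bwd)
        if score < best_score:
            best, best_score = tracker, score
    return best
```

In this sketch, a perfectly consistent tracker (backward pass retracing the forward pass exactly) scores zero deviation, so it wins the self-examination regardless of how the other ensemble members drift.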