Mask-Guided Self-Distillation For Visual Tracking

Published: 01 Jan 2022 · Last Modified: 13 Nov 2024 · ICME 2022 · CC BY-SA 4.0
Abstract: Siamese-based visual trackers have recently achieved dramatic performance improvements; however, the excessive parameters of their tracking networks seriously hinder practical deployment on edge devices. In this work, we present mask-guided self-distillation (MGSD) to compress Siamese-based visual trackers, enabling them to capture the knowledge crucial to tracking performance. Specifically, MGSD consists of mask-guided semantic-feature self-distillation and decoupled tracking-head self-distillation, which discard redundant convolution parameters in both feature-dependent and task-dependent components. Notably, the proposed method substantially compresses Siamese-based trackers while maintaining, and even improving, tracking performance. Extensive experiments on tracking benchmarks including OTB2015, UAV123, and LaSOT demonstrate that our method can compress the model size of Siamese-based visual trackers to 50% while achieving state-of-the-art performance. Our source code is available at: https://github.com/xl0312/MGSD.
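The abstract does not give implementation details, but the core idea of mask-guided feature distillation can be sketched as a mean-squared error between teacher and student feature maps computed only at mask-selected locations. A minimal NumPy sketch follows; the function name, shapes, and normalization below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def masked_distill_loss(student_feat, teacher_feat, mask, eps=1e-8):
    """Hypothetical mask-guided feature distillation loss.

    student_feat, teacher_feat: (C, H, W) feature maps.
    mask: (H, W) binary mask marking target-relevant locations.
    Returns MSE averaged over masked positions and channels only.
    """
    diff = (student_feat - teacher_feat) ** 2     # elementwise squared error
    masked = diff * mask[None, :, :]              # zero out background cells
    denom = mask.sum() * student_feat.shape[0] + eps
    return masked.sum() / denom

# Toy example with random features and a centered square mask.
rng = np.random.default_rng(0)
student = rng.standard_normal((4, 8, 8))
teacher = rng.standard_normal((4, 8, 8))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0
loss = masked_distill_loss(student, teacher, mask)
```

The mask restricts the distillation signal to the target region, so the student is not penalized for deviating from the teacher on background clutter; this matches the abstract's claim of discarding knowledge irrelevant to tracking, though the actual loss in the paper may differ.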