DFMM: An Object Tracking Approach Based on Deep Feature Modification
Abstract: In complex tracking environments, existing trackers mainly suffer from redundant deep convolutional features and a shortage of positive samples during target tracking. To address these challenges, an attention-mechanism model, the Deep Feature Modification Model (DFMM), is proposed based on the fusion of the spatial and channel domains. The model comprises three consecutive sub-modules: spatial self-attention, channel attention, and spatial attention. On this basis, a deep convolutional network adaptable to various visual algorithms is constructed, and feature extraction and enhancement strategies based on feature modification are designed to mitigate redundant-feature negative feedback and the lack of positive samples. Experimental results demonstrate that integrating the feature modification module into mainstream ResNet classification tasks significantly reduces Top-1 and Top-5 error rates without incurring additional computational overhead or requiring network-structure adjustments, achieving lightweight integration. Furthermore, incorporating the module into several related tracking algorithms improves tracking performance and mitigates discriminator overfitting.
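The abstract does not give the exact formulation of the channel and spatial attention sub-modules. As a rough illustration of the general pattern such modules follow (squeeze the feature map along one domain, produce a sigmoid gate, and rescale the features), here is a minimal NumPy sketch of CBAM-style channel attention followed by spatial attention; the random MLP weights, the reduction ratio, and the omission of the convolution in the spatial gate are simplifications, not the paper's actual design:

```python
import numpy as np

def channel_attention(x, reduction=2, seed=0):
    """Rescale each channel of x (shape (C, H, W)) by a learned gate in (0, 1).

    The gate comes from average- and max-pooled channel descriptors passed
    through a shared two-layer MLP. Weights are random here for illustration;
    in a real model they would be learned.
    """
    c = x.shape[0]
    avg = x.mean(axis=(1, 2))  # (C,) global average pooling
    mx = x.max(axis=(1, 2))    # (C,) global max pooling
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)       # ReLU MLP
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx)))) # sigmoid, (C,)
    return x * gate[:, None, None]

def spatial_attention(x):
    """Rescale each spatial location of x (shape (C, H, W)) by a gate in (0, 1).

    Pools across channels, then gates per location (a real module would
    pass the pooled maps through a small convolution before the sigmoid).
    """
    avg = x.mean(axis=0, keepdims=True)  # (1, H, W)
    mx = x.max(axis=0, keepdims=True)    # (1, H, W)
    gate = 1.0 / (1.0 + np.exp(-(avg + mx)))
    return x * gate

# Applying the two gates in sequence preserves the feature-map shape,
# which is what lets such a module drop into an existing backbone
# (e.g. ResNet) without structural changes.
feat = np.random.default_rng(1).standard_normal((8, 4, 4))
out = spatial_attention(channel_attention(feat))
```

Because each gate lies in (0, 1) and the output shape matches the input, the module can be inserted between existing backbone stages as a lightweight, drop-in reweighting step.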