MoTE: Reconciling Generalization with Specialization for Visual-Language to Video Knowledge Transfer

Published: 25 Sept 2024 · Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: Video Recognition, Vision-Language Model, Transfer Learning
TL;DR: This paper presents MoTE, a knowledge transfer framework that addresses the trade-off between generalization and specialization when transferring VLMs to the video domain, achieving a sound balance between close-set and zero-shot performance.
Abstract: Transferring visual-language knowledge from large-scale foundation models for video recognition has proven effective. To bridge the domain gap, additional parametric modules are added to capture temporal information. However, zero-shot generalization diminishes as the number of specialized parameters grows, forcing existing works to trade off between zero-shot and close-set performance. In this paper, we present MoTE, a novel framework that balances generalization and specialization in one unified model. Our approach tunes a mixture of temporal experts to learn multiple task views with varying degrees of data fitting. To maximally preserve the knowledge of each expert, we propose Weight Merging Regularization, which regularizes the merging process of experts in weight space. We additionally apply temporal feature modulation to regularize the contribution of the temporal feature at test time. We achieve a sound balance between zero-shot and close-set video recognition tasks and obtain state-of-the-art or competitive results on various datasets, including Kinetics-400 & 600, UCF, and HMDB. Code is available at https://github.com/ZMHH-H/MoTE.
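To make the weight-space merging idea in the abstract concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for that). It assumes several temporal expert modules trained on top of a frozen VLM backbone and merges them by averaging their parameters; all module, function, and parameter names here are hypothetical illustrations.

```python
# Minimal sketch of merging several temporal "experts" in weight space.
# This is an illustrative assumption-based example, not MoTE's actual code.
import copy
import torch
import torch.nn as nn

class TemporalExpert(nn.Module):
    """Hypothetical temporal module appended to a frozen VLM image encoder."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.temporal = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, num_frames, dim) per-frame features from the VLM
        return self.temporal(frame_feats).mean(dim=1)  # pooled video-level feature

def merge_experts(experts: list[nn.Module]) -> nn.Module:
    """Average expert parameters in weight space to obtain one unified module."""
    merged = copy.deepcopy(experts[0])
    with torch.no_grad():
        avg_state = {
            key: torch.stack([e.state_dict()[key].float() for e in experts]).mean(dim=0)
            for key in merged.state_dict()
        }
        merged.load_state_dict(avg_state)
    return merged

if __name__ == "__main__":
    # e.g., experts trained with different degrees of data fitting
    experts = [TemporalExpert() for _ in range(4)]
    unified = merge_experts(experts)
    video_feat = unified(torch.randn(2, 8, 512))  # 2 clips, 8 frames, 512-dim features
    print(video_feat.shape)                        # torch.Size([2, 512])
```

The paper's Weight Merging Regularization additionally constrains the experts during training so that this kind of merging preserves each expert's knowledge; the snippet above only illustrates the plain weight-averaging step.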
Supplementary Material: zip
Primary Area: Machine vision
Submission Number: 4673