Federated Multimodal Fusion for Action Recognition Leveraging Vision-Language Embeddings and Spatio-Temporal CNNs

TMLR Paper5468 Authors

25 Jul 2025 (modified: 28 Aug 2025) · Under review for TMLR · CC BY 4.0
Abstract: Federated learning (FL) for Video Action Recognition (VAR) faces significant challenges in balancing privacy preservation, communication efficiency, and model performance. This paper introduces FLAMeST (Federated Learning for Action Recognition with Multimodal embeddings and Spatio-Temporal Fusion), an FL framework that synergizes Vision-Language Models (VLMs) and spatio-temporal CNNs to address these challenges. Unlike existing works that use the BLIP VLM solely for caption generation, FLAMeST leverages BLIP in a dual manner. To enhance temporal modeling, complementary spatio-temporal features are extracted using a pre-trained 3D CNN (Slow network). These semantic (BLIP) and motion (Slow) embeddings are concatenated into a unified representation to train a lightweight Multi-Layer Perceptron (MLP). Within the FL paradigm, only the MLP parameters are shared with the server, ensuring that raw video data and generated captions remain local. FLAMeST employs the FedAvg algorithm for model aggregation, achieving 99% lower communication overhead compared to full-model training. Experiments on the UCF101 and HMDB51 datasets demonstrate the framework's robustness, with accuracy improvements of 5.13% and 2.71%, respectively, over the baseline.
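Below is a minimal sketch (not the authors' code) of the pipeline the abstract describes: frozen BLIP semantic embeddings and Slow-pathway motion embeddings are concatenated, a lightweight MLP is trained locally on each client, and only the MLP parameters are aggregated with FedAvg. The embedding dimensions, MLP architecture, and helper names are illustrative assumptions.

```python
# Sketch of FLAMeST-style late fusion + FedAvg over MLP weights only.
# Embeddings are assumed to be precomputed offline by frozen BLIP / Slow encoders.
import copy
import torch
import torch.nn as nn

SEM_DIM, MOT_DIM, NUM_CLASSES = 768, 2048, 101  # assumed dims (BLIP / Slow / UCF101)

class FusionMLP(nn.Module):
    """Lightweight classifier over concatenated semantic + motion embeddings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEM_DIM + MOT_DIM, 512), nn.ReLU(),
            nn.Linear(512, NUM_CLASSES),
        )

    def forward(self, sem, mot):
        return self.net(torch.cat([sem, mot], dim=-1))

def local_update(global_state, sem, mot, labels, epochs=1, lr=1e-3):
    """One client round: train the MLP on local embeddings; only MLP weights leave the client."""
    model = FusionMLP()
    model.load_state_dict(global_state)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(sem, mot), labels)
        loss.backward()
        opt.step()
    return model.state_dict()

def fedavg(states, weights):
    """FedAvg: weighted average of the clients' MLP parameters."""
    total = sum(weights)
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = sum(w * s[k] for s, w in zip(states, weights)) / total
    return avg

# Toy round: random tensors stand in for the frozen BLIP / Slow embeddings.
global_model = FusionMLP()
clients = [(torch.randn(8, SEM_DIM), torch.randn(8, MOT_DIM),
            torch.randint(0, NUM_CLASSES, (8,))) for _ in range(3)]
client_states = [local_update(global_model.state_dict(), *c) for c in clients]
global_model.load_state_dict(fedavg(client_states, weights=[1, 1, 1]))
```

Because only the small MLP is communicated while the BLIP and Slow backbones stay frozen on the clients, the per-round payload is a small fraction of a full-model update, which is the source of the reported communication savings.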
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Main manuscript: page 16, Section 6.4, and Table 8. Supplementary material: pages 25-27, Section A.4, Equations A-3 to A-11. As our initial submission did not have line numbers, we refer to page and section numbers for now; the modified text is shown in blue for easy reference.
Assigned Action Editor: ~Yu-Xiong_Wang1
Submission Number: 5468