Spatio-temporal Heterogeneous Federated Learning for Time Series Classification with Multi-view Orthogonal Training

Published: 20 Jul 2024 · Last Modified: 05 Aug 2024 · MM 2024 Poster · CC BY 4.0
Abstract: Federated learning (FL) is gaining significant traction due to its ability to perform privacy-preserving training on decentralized data. In this work, we focus on sensitive time series data collected by distributed sensors in real-world applications. Unlike in computer vision, time series data introduce a dual spatial-temporal feature skew, because their distributions shift across both domains and time. This key challenge comprises inter-client spatial feature skew, caused by heterogeneous sensor collection, and intra-client temporal feature skew, caused by dynamics in the time series distribution. We follow the Personalized Federated Learning (pFL) framework to handle both feature drifts and enhance the capabilities of customized local models. Specifically, we propose FedST, a method that addresses these challenges through orthogonal feature decoupling and regularization in both the training and testing stages. During training, we combine the time and frequency views of time series data to enrich their mutual information, and adopt orthogonal projection to disentangle and align the shared and personalized features between views and between clients. During testing, we combine prototype-based and model-based predictions to achieve model consistency based on shared features. Extensive experiments on multiple real-world classification datasets and multimodal time series datasets show that our method consistently outperforms state-of-the-art baselines with clear advantages.
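The orthogonal projection used to disentangle shared and personalized features could, in its simplest vector form, look like the sketch below. This is a minimal illustration, not the paper's actual implementation: the function name `project_orthogonal` and the assumption that features are single vectors are ours; the paper applies such constraints to learned feature representations during training.

```python
import numpy as np

def project_orthogonal(personal: np.ndarray, shared: np.ndarray) -> np.ndarray:
    """Remove the component of `personal` that lies along `shared`,
    leaving a residual feature orthogonal to the shared representation.
    (Hypothetical helper for illustration only.)"""
    shared_unit = shared / np.linalg.norm(shared)
    return personal - np.dot(personal, shared_unit) * shared_unit

personal = np.array([3.0, 4.0])
shared = np.array([1.0, 0.0])
decoupled = project_orthogonal(personal, shared)
# decoupled is orthogonal to shared: their dot product is zero
```

After this projection, the personalized component carries no information along the shared direction, which is the intuition behind decoupling client-specific features from globally shared ones.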
Primary Subject Area: [Experience] Multimedia Applications
Relevance To Conference: In this paper, we focus on the challenge of dual feature skew in federated time series learning. Time series learning is a significant problem in multimodal learning, and our approach to it is itself multimodal, operating in both the time and frequency domains. We obtain richer information through cross-view analysis and train with orthogonal constraints. Our method also extends to multimodal time series such as MOD, ACIDS, and PAMAP2: additional modalities can be incorporated incrementally as views, just as we did for the frequency view. Our experiments demonstrate effectiveness on both vanilla time series classification and multimodal classification.
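The prototype-based prediction mentioned in the abstract can be sketched as nearest-prototype classification in the shared feature space. This is a hypothetical illustration under our own assumptions: `prototype_predict` and the idea that prototypes are per-class mean features are standard conventions, not details confirmed by the submission.

```python
import numpy as np

def prototype_predict(feature: np.ndarray, prototypes: dict) -> int:
    """Assign the class whose prototype (assumed here to be the per-class
    mean of shared features) is nearest in Euclidean distance.
    (Illustrative sketch, not the paper's exact rule.)"""
    dists = {label: np.linalg.norm(feature - proto)
             for label, proto in prototypes.items()}
    return min(dists, key=dists.get)

prototypes = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 1.0])}
prototype_predict(np.array([0.9, 1.1]), prototypes)  # -> 1
```

In a pFL setting, combining such a prototype-based prediction with the local model's own prediction is one way to keep test-time outputs consistent with the globally shared feature space.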
Supplementary Material: zip
Submission Number: 1694