Abstract: Federated learning enables collaborative model training across multiple devices without centralizing data, thereby preserving privacy. However, traditional federated learning techniques struggle with heterogeneous data distributions and varying computational capabilities across nodes. We propose an adaptive federated learning framework that dynamically adjusts aggregation weights and optimizes local training strategies based on node-specific characteristics. Our method accelerates convergence, maintains model robustness across diverse data sources, and preserves privacy during knowledge sharing. Experimental validation on healthcare and finance datasets demonstrates improved accuracy and reduced communication overhead relative to baseline federated learning methods.
Keywords: Federated Learning, Privacy-Preserving Machine Learning, Adaptive Optimization, Distributed Data Mining, Edge AI
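To make the idea of adaptive aggregation weights concrete, the minimal Python sketch below illustrates one plausible weighting scheme: each node's contribution to the global model is scaled by its data share and a loss-based quality term. The function name `adaptive_aggregate`, the specific weight components, and the `temperature` parameter are illustrative assumptions for exposition, not the paper's actual algorithm.

```python
import numpy as np

def adaptive_aggregate(client_params, data_sizes, local_losses, temperature=1.0):
    """Aggregate client parameter vectors with adaptive weights (illustrative sketch).

    Weights combine each node's share of the total data with a softmax over
    negative local loss, so nodes with more data and a better local fit
    contribute more to the global model.
    """
    data_sizes = np.asarray(data_sizes, dtype=float)
    local_losses = np.asarray(local_losses, dtype=float)

    # Data-share component: proportion of total samples held by each node.
    share = data_sizes / data_sizes.sum()

    # Quality component: softmax over negative loss (lower loss -> higher weight).
    quality = np.exp(-local_losses / temperature)
    quality /= quality.sum()

    # Combined adaptive weight, renormalized to sum to 1.
    weights = share * quality
    weights /= weights.sum()

    # Weighted average of parameter vectors (one row per client).
    stacked = np.stack(client_params)  # shape: (num_clients, num_params)
    return weights @ stacked

# Example: three nodes with heterogeneous data volumes and local losses.
params = [np.random.randn(10) for _ in range(3)]
global_params = adaptive_aggregate(params,
                                   data_sizes=[500, 2000, 100],
                                   local_losses=[0.9, 0.4, 1.3])
```

Under this scheme, data-poor or poorly fitting nodes are automatically down-weighted while the weights remain normalized, which is one way a server could adapt aggregation to node-specific characteristics as the abstract describes.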