Abstract: This paper introduces Federated Attention Fisher (FedAF), a new framework for privacy-preserving, decentralized model training in Federated Learning (FL). FL enables collaborative training across devices without centralizing raw data, addressing privacy and security concerns, but it struggles with data heterogeneity, communication costs, and security risks such as model poisoning. FedAF tackles these issues by combining attention mechanisms with the Fisher Information Matrix (FIM) to improve model aggregation, communication efficiency, and robustness. Attention mechanisms enable fine-grained, layer-wise aggregation based on parameter similarity, while Fisher Information helps extract knowledge efficiently from distributed data. Experiments on FashionMNIST, SVHN, CIFAR10, and CINIC10 show that FedAF outperforms existing FL methods in accuracy and in the number of communication rounds required. Ablation studies confirm that both the attention mechanisms and the Fisher Information contribute to performance and efficiency. This work advances FL toward scalable, privacy-aware AI applications that comply with regulations and build user trust in decentralized systems.
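To make the aggregation idea concrete, the sketch below illustrates one plausible reading of layer-wise attention aggregation combined with Fisher-based weighting. This is a minimal sketch under stated assumptions, not the paper's actual algorithm: it assumes each client reports per-layer parameters plus a diagonal Fisher estimate, scores each client per layer by similarity to the current global layer, and averages with softmax attention weights. All names here (`attention_aggregate`, `client_fisher`, the temperature `tau`) are hypothetical.

```python
import numpy as np

def attention_aggregate(global_layers, client_layers, client_fisher, tau=1.0):
    """Illustrative layer-wise attention aggregation (not FedAF's exact rule).

    global_layers : dict[str, np.ndarray]        current global parameters
    client_layers : list[dict[str, np.ndarray]]  one parameter dict per client
    client_fisher : list[dict[str, np.ndarray]]  diagonal Fisher estimates
    tau           : softmax temperature (hypothetical hyperparameter)
    """
    new_global = {}
    for name, g in global_layers.items():
        # Per-client similarity score for this layer: negative L2 distance
        # to the current global layer (closer clients score higher).
        scores = np.array(
            [-np.linalg.norm(c[name] - g) / tau for c in client_layers]
        )
        # Softmax over clients yields attention weights for this layer.
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()

        params = np.stack([c[name] for c in client_layers])  # (K, ...)
        fisher = np.stack([f[name] for f in client_fisher])  # (K, ...)
        w = weights.reshape((-1,) + (1,) * g.ndim)           # broadcastable

        # Fisher-weighted average: coordinates a client's diagonal Fisher
        # marks as informative contribute more to the new global layer.
        num = (w * fisher * params).sum(axis=0)
        den = (w * fisher).sum(axis=0) + 1e-12
        new_global[name] = num / den
    return new_global
```

In this reading, attention arbitrates between clients per layer while the diagonal Fisher acts as a per-coordinate importance weight; the paper's actual update may score similarity or normalize differently.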
External IDs: dblp:conf/icic/XiaoJP25