Keywords: Federated Learning, Class Variational Autoencoders, Feature Augmentation, Data Heterogeneity, Privacy Preservation, Communication Efficiency
TL;DR: We propose FLAIR, a novel federated learning approach that uses CVAEs to augment feature representations, improving the accuracy of image classification models while reducing communication costs and strengthening privacy in heterogeneous data settings.
Abstract: Federated Learning (FL) enables collaborative model training across decentralized clients while preserving data privacy. However, its performance declines in challenging heterogeneous data settings. To mitigate this, existing FL frameworks share not only locally trained parameters but also additional information -- such as control variates, client features, and classifier characteristics -- to counteract the effects of class imbalance and missing classes. Yet this extra exchange increases communication costs and heightens the risk of privacy breaches. To strike a balance between communication efficiency, privacy protection, and adaptability to heterogeneous data distributions, we propose FLAIR, a novel FL approach with augmented and improved feature representations. FLAIR uses Class Variational Autoencoders (CVAEs) for feature augmentation, mitigating class imbalance and missing-class issues. It also incorporates Reptile meta-training to facilitate knowledge transfer between model updates, adapting to dynamic feature shifts. To generalize model updates, FLAIR shares only local CVAE parameters instead of local model parameters, reducing both communication costs and privacy risks. Our experiments on benchmark datasets -- MNIST, CIFAR-10, CIFAR-100, and TinyImageNet -- demonstrate significant improvements in model convergence and accuracy over state-of-the-art solutions, while reducing communication overhead and privacy risks.
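For context, the Reptile meta-training referenced in the abstract is the standard first-order meta-update θ ← θ + ε(φ − θ), where φ is the result of a few local SGD steps on a client. Below is a minimal PyTorch sketch of one such meta-step; the function name, hyperparameters, and training setup are illustrative assumptions, not FLAIR's actual implementation (which, per the abstract, applies the idea to CVAE parameters rather than full model parameters).

```python
import copy
import itertools
import torch

def reptile_meta_step(global_model, client_loader, loss_fn,
                      inner_steps=5, inner_lr=0.01, meta_lr=0.5):
    # Inner loop: adapt a copy of the global model to one client's data.
    local_model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(local_model.parameters(), lr=inner_lr)
    batches = itertools.cycle(client_loader)  # cycle in case the loader is short
    for _ in range(inner_steps):
        x, y = next(batches)
        opt.zero_grad()
        loss_fn(local_model(x), y).backward()
        opt.step()
    # Reptile outer update: theta <- theta + meta_lr * (phi - theta),
    # i.e., move the global weights partway toward the adapted weights.
    with torch.no_grad():
        for theta, phi in zip(global_model.parameters(),
                              local_model.parameters()):
            theta.add_(meta_lr * (phi - theta))
    return global_model
```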
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10958