FLAME: Reducing Computation in Federated Learning via Sample-Adaptive Multi-Exit Training

ICLR 2026 Conference Submission 19009 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: federated learning, multi-exit models, efficient deep learning
Abstract: Federated learning (FL) enables a group of clients to collaboratively train a global machine learning model without sharing raw data. It is particularly suited to Internet-of-Things and similar environments involving small, heterogeneous devices. However, these clients often lack the computational resources needed to train the full global model locally, as the FL pipeline conventionally expects. Prior work addresses this challenge by assigning smaller sub-networks to resource-constrained clients, but such approaches have a key limitation: they do not adapt computational effort based on the needs of individual input samples. In this work, we introduce Federated Learning with sample-Adaptive Multi-Exiting (FLAME), the first method to incorporate sample-adaptive early exiting into local training for efficient FL. FLAME allows each training sample to exit at the earliest layer at which the model can confidently predict the sample’s output, which improves efficiency without sacrificing accuracy. We show that this use of sample-adaptiveness leads to better AUC than existing solutions because instead of uniformly saving computation across all samples, it strategically saves it on easier samples and preserves it for harder ones. Our empirical results demonstrate FLAME's ability to reduce per-client computation by up to 50% while maintaining or even improving model accuracy, and to outperform existing solutions in practical settings. We also show how FLAME’s success stems from FL’s collaborative nature and propose two optimizations that further enhance its efficiency and performance. Overall, this work introduces the novel concept of training-time sample-adaptiveness in the FL domain, which opens new avenues for improving the utilization of heterogeneous clients and for enhancing the FL paradigm.
Primary Area: optimization
Submission Number: 19009
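
The abstract's core mechanism, letting each training sample leave the network at the first exit whose prediction is confident enough, can be pictured with a short sketch. The following is a minimal PyTorch illustration, not the authors' implementation: the class name MultiExitNet, the method forward_adaptive, and the confidence threshold tau are names assumed here for exposition, and the confidence rule (softmax max-probability against a fixed threshold) is one common early-exit criterion rather than FLAME's stated one.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiExitNet(nn.Module):
        # Backbone split into blocks, each followed by its own classifier head (an "exit").
        def __init__(self, blocks, heads):
            super().__init__()
            self.blocks = nn.ModuleList(blocks)
            self.heads = nn.ModuleList(heads)

        def forward_adaptive(self, x, y, tau=0.9):
            # Train-time pass in which each sample propagates only until some
            # exit is confident about it; tau is an assumed hyperparameter.
            h, targets = x, y
            loss = x.new_zeros(())
            for block, head in zip(self.blocks, self.heads):
                h = block(h)
                logits = head(h)
                loss = loss + F.cross_entropy(logits, targets)
                with torch.no_grad():
                    conf = F.softmax(logits, dim=1).max(dim=1).values
                    keep = conf < tau  # samples the model is still unsure about
                if not keep.any():
                    break  # every remaining sample has exited early
                h, targets = h[keep], targets[keep]  # deeper blocks see fewer samples
            return loss

    # Toy usage on random data; dimensions are illustrative only.
    blocks = [nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(4)]
    heads = [nn.Linear(32, 10) for _ in range(4)]
    model = MultiExitNet(blocks, heads)
    x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
    loss = model.forward_adaptive(x, y)
    loss.backward()  # gradients flow only through the exits each sample reached

This captures the per-sample saving the abstract describes: easy samples stop at shallow exits, so deeper blocks process progressively smaller batches, while hard samples retain the full network depth.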