ConvFL: Efficient Federated Learning of Heterogeneous Model Structures with Converters

Published: 01 Jan 2024 · Last Modified: 17 Mar 2025 · SmartIoT 2024 · CC BY-SA 4.0
Abstract: With the proliferation of edge devices such as smart home appliances and video surveillance cameras, technologies are emerging that process data directly on these devices. This approach reduces communication costs and lowers the risk of privacy violations. However, the limited training data available to each device in an edge environment constrains model performance. Federated learning (FL), in which multiple devices collaborate to train models while preserving privacy, can help achieve better performance, and methods have been proposed to perform FL even when each device holds a different pre-trained model. In a cross-device environment with varying computational power, however, running inference on multiple pre-trained models is computationally expensive. This paper assumes that each client maintains a single pre-trained model while performing FL on downstream models. In the proposed method, ConvFL, each client uses a converter to transform its own pre-trained model's output into the output of another client's pre-trained model, enabling inference using only its own model. Experiments show that this approach incurs minimal performance degradation while reducing computational costs compared to FL without a converter.
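The core idea of the converter can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's implementation: it pretends client A's pre-trained model emits 64-dimensional features and client B's emits 32-dimensional features, and fits a simple linear converter by least squares so that A's features approximate B's feature space. All names, dimensions, and the least-squares fitting scheme are illustrative choices.

```python
import numpy as np

# Hypothetical setup: client A's pre-trained backbone outputs 64-d features,
# client B's outputs 32-d features. The "true" mapping and the data are
# synthetic stand-ins for features computed on shared samples.
rng = np.random.default_rng(0)
dim_a, dim_b, n = 64, 32, 500

feats_a = rng.normal(size=(n, dim_a))              # client A's features
true_map = rng.normal(size=(dim_a, dim_b))
feats_b = feats_a @ true_map + 0.01 * rng.normal(size=(n, dim_b))  # client B's features

# Fit a linear converter W so that feats_a @ W approximates feats_b.
W, *_ = np.linalg.lstsq(feats_a, feats_b, rcond=None)

converted = feats_a @ W
mse = float(np.mean((converted - feats_b) ** 2))
print(round(mse, 6))
```

With a converter like this, client A can stand in for client B's feature extractor at inference time using only its own backbone; the actual ConvFL converter would be trained jointly with the downstream models during FL rather than fitted offline as here.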